Technology forever changes our world, usually starting with science.

Galileo’s refinement of the telescope sealed the fate of an anthropocentric universe. After the Janssens developed the microscope, Van Leeuwenhoek improved the device enough to reveal micro-organisms. And of course, computers like those Bill Hewlett and David Packard first developed in a Palo Alto garage are now found in every lab.

In our own time, robots and algorithms using artificial intelligence are becoming commonplace tools in research. AI is especially relevant wherever large volumes of data and information are processed, which leads directly to the scholarly and scientific publishers that digest that data and produce still more.

https://beyondthebookcast.com/principles-for-trustworthy-ai/

Following 18 months’ work, the STM Association will release a white paper, Best Practice Principles for Ethical, Trustworthy and Human-centric AI, as part of the upcoming STM Spring Conference, to be held online April 27 through 29.

“When we developed this report, we realized what a unique position we have as publishers,” says Joris van Rossum, STM’s Director of Research Integrity. “First of all, we are key providers of information and data and articles on which AI is run.

“Having the right data and high-quality data is really critical for having an efficient and trustworthy application of AI,” he tells CCC.


Author: Christopher Kenneally

Christopher Kenneally hosts CCC's Velocity of Content podcast series, which debuted in 2006 and is the longest continuously running podcast covering the publishing industry. As CCC's Senior Director, Marketing, he is responsible for organizing and hosting programs that address the business needs of all stakeholders in publishing and research. His reporting has appeared in the New York Times, Boston Globe, Los Angeles Times, The Independent (London), WBUR-FM, NPR, and WGBH-TV.