Technology forever changes our world, usually starting with science.
Galileo’s improvement of the telescope sealed the fate of an anthropocentric universe. After the Janssens developed the microscope, Van Leeuwenhoek refined the device enough to reveal micro-organisms. And of course, computers like those Bill Hewlett and David Packard first developed in a Palo Alto garage are now found in every lab.
In our own time, robots and algorithms using artificial intelligence are becoming commonplace tools in research. AI is especially relevant where large volumes of data and information are processed – leading directly to the scholarly and scientific publishers that digest the data and produce even more.
Following 18 months’ work, the STM Association will release a white paper, Best Practice Principles for Ethical, Trustworthy and Human-centric AI, as part of the upcoming STM Spring Conference, to be held online April 27 through 29.
“When we developed this report, we realized what a unique position we have as publishers,” says Joris van Rossum, STM’s director of Research Integrity. “First of all, we are key providers of the information, data, and articles on which AI is run.
“Having the right data and high-quality data is really critical for having an efficient and trustworthy application of AI,” he tells CCC.