Long before ChatGPT arrived on the hype cycle, many of us were considering what artificial intelligence (AI) technology could mean for scholarly publishing – how it might change processes developed over centuries, and how publishers should react.
While generative AI such as ChatGPT gets all the buzz, other, typically more targeted forms of AI have been used for years to solve business and research challenges.
CCC invited various speakers to share their experiences with ChatGPT and other AI tools and to express their concerns and questions about this rapidly changing technology.
Data quality is an expression of data's usefulness and value. We often describe the things we build as knowledge systems: systems that take data as input and, through a series of data processing steps, extract as much of the 'actionable information' in that data as possible.
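To make that idea concrete, here is a minimal, purely illustrative sketch of such a pipeline in Python. The record fields (title, doi, abstract) and the processing steps (deduplicate, validate, extract_keywords) are hypothetical stand-ins rather than any particular system's design; the point is simply that each stage raises data quality before 'actionable information' is extracted.

```python
from dataclasses import dataclass

# Hypothetical record type and pipeline steps, for illustration only.
@dataclass
class Record:
    title: str
    doi: str | None
    abstract: str

def deduplicate(records: list[Record]) -> list[Record]:
    """Drop records that share a DOI (or title), keeping the first occurrence."""
    seen: set[str] = set()
    kept = []
    for r in records:
        key = r.doi or r.title.lower()
        if key not in seen:
            seen.add(key)
            kept.append(r)
    return kept

def validate(records: list[Record]) -> list[Record]:
    """Keep only records with the fields needed downstream."""
    return [r for r in records if r.title and r.abstract]

def extract_keywords(records: list[Record]) -> dict[str, list[str]]:
    """Toy 'actionable information' step: naive keyword extraction."""
    return {
        r.doi or r.title: sorted({w.lower() for w in r.abstract.split() if len(w) > 6})
        for r in records
    }

def pipeline(raw: list[Record]) -> dict[str, list[str]]:
    """Chain the steps: each stage improves data quality before extraction."""
    return extract_keywords(validate(deduplicate(raw)))

if __name__ == "__main__":
    sample = [
        Record("Paradigms in Science", "10.1000/x1",
               "A discussion of scientific paradigms among practitioners"),
        Record("Paradigms in Science", "10.1000/x1", "duplicate entry"),
        Record("Untitled", None, ""),  # dropped by validate()
    ]
    print(pipeline(sample))
```

The specifics matter less than the shape: the more rigorous the cleaning and validation stages, the more value the extraction stage can recover from the same raw data.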
In Thomas Kuhn’s work, paradigms are characterized by “universally recognized scientific achievements that for a time provide model problems and solutions to a community of practitioners.”
The U.S. Copyright Office announced a new artificial intelligence initiative that will “examine the copyright law and policy issues raised by AI, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training.”
While training AI usually involves large datasets, significant AI innovation occurs today because tech companies (and others) license large datasets from entities such as Getty, STM publishers, and news outlets, among others.