Velocity of Content | Copyright Clearance Center
https://www.copyright.com

Realizing Tomorrow’s Future Today with Standards
https://www.copyright.com/blog/realizing-tomorrows-future-today-with-standards/
Wed, 28 Sep 2022
Dan Plofchan shares his thoughts on the value of standards and on innovative approaches taken by standards development organizations, or SDOs.

Every day, our lives are shaped by established formulas distilled from the collective wisdom of technical experts. These formulas affect a wide range of activities including, but not limited to, how we educate our children, manufacture products, manage our impact on the environment, supply materials, deliver services, regulate safety, privacy, and security, and monitor our health. They are generally referred to as “standards.”

Even if someone has not heard of standards, they have nonetheless benefited from their outcomes through enhanced design, development, and service technologies. Yet for most people, standards development processes are semi-invisible, despite being a critical element of thousands of regulations threaded across all areas of modern society. Occasionally they come into the light, such as when the US Department of Education endorsed the Common Core State Standards Initiative, but most of us live our lives oblivious to their existence, value, or impact. In this post, the first in a series about standards, I’ll share my thoughts on the value of standards and on innovative approaches taken by standards development organizations, or SDOs.

Standards are synonymous with rules, regulations, criteria, benchmarks, rubrics, or metrics. Society would not operate as we know it if not for these mutually agreed upon best practices and technical specifications. Standards have been crucial for advancing manufacturing and technology development during each previous industrial revolution.

The digital evolution driven by Industrial Revolution 4.0 requires extensive enhancements and innovations. These in turn affect current applications, systems, and operations and surpass conventional processes – all of which are guided by standards. Each technological iteration requires users to adopt new modalities to fully realize the benefits afforded by each turn of the evolutionary cycle. This has never been truer than with the advances of smart manufacturing and autonomous systems, data-driven enterprises, artificial intelligence initiatives, machine learning, the Internet of Things (IoT), etc.

To meet these “future of” business opportunities, industries across the board are currently transforming to serve ever-changing global markets and needs, which in turn requires developers to accelerate the pace for delivering technological advancements, product designs, R&D, and product developments to “future proof” businesses. According to Tatiana Khayrullina, Consulting Partner for Standards and Technical Solutions, Outsell Inc:

“The concepts of “content” and “data” are converging, and new formats (e.g., video, audio) are becoming more dominant. At the same time, reproducible results are becoming more and more important.”

Regardless of industry, users desire greater accessibility to SDO content, with a growing desire to integrate and/or aggregate standards data into their own proprietary end products. Should SDOs acquiesce, could we be on the verge of realizing tomorrow’s future, today?

Simon Klaris Friberg, Senior Librarian/Information Consultant at Rambøll, an engineering consultancy headquartered in Copenhagen, Denmark, argues that the evolution of standards requires altering the current “document centric” approach.

“It’s very much still a traditional sort of paper-based use and not materially different from that conducted in generations past. References to standards content are framed in the paper world – with pages, footnotes, paragraphs, and sections, despite the content existing in PDF and XML format online. As a 21st century engineering organization, Rambøll and its engineers are coming to see a need to move to a more content centric use of standards content.”

Achieving this level of accessibility would require SDOs to overcome the barriers confronting end users, which in turn requires the SDOs and other developers, along with intermediary distributors, to significantly expand the access to and interconnectivity of standards data. (Friberg 2022)

Based on my interaction with SDOs, I’m seeing differing comfort levels as they consider and identify how best to engage with current and new customers as Industrial Revolution 4.0 content providers. They’re seeking ways to bridge the growing gap between the practical considerations surrounding standards development and their customers’ rapidly evolving data needs and contemporary manufacturing / supply-chain realities.

Those SDOs with the greatest potential to lead in the face of growing global market sophistication are transforming their capabilities from document-centric models into transparent, open access models that allow for on-going operational process, infrastructure, and technology improvements and upgrades.

Concurrently, these SDOs are carefully managing the practicalities of their own industry, including achieving content management efficiency, adapting pricing models, monitoring cybersecurity, defining and licensing digital assets, complying with legislation, policing piracy, protecting copyright and intellectual property, maintaining technological relevancy, meeting mandated availability requirements, and addressing the impact and expectations of native users. Their active contemplation and development of new “use case” strategies require greater cooperation and collaboration among users, distributors, publishers, and other SDOs. For these organizations, “because we have always done it this way” is no longer an operational mantra.

In my next post, I’ll share my observations about SDO efforts to preserve the uniqueness of standards while moving their organizations and their industry forward.

CCC Hosts Eli Lilly and Company for Panel Discussion on Innovation and Copyright Compliance
https://www.copyright.com/blog/ccc-hosts-eli-lilly-for-panel-discussion-on-innovation-and-copyright-compliance/
Tue, 27 Sep 2022
Hear how their Legal and Library teams partner to help teams accelerate the innovation of new medicines, improve patient outcomes, and promote global copyright compliance.

Tomorrow, CCC will welcome senior leaders from Eli Lilly and Company’s Legal and Library teams for an engaging panel discussion on how they partner to accelerate the innovation of new medicines, improve patient outcomes, and promote global copyright compliance.

Wednesday, 28 September 2022

11:00 AM – 12:00 PM EDT /
5:00 PM – 6:00 PM CEST

Register here

Lilly strives to not only create high-quality medicines, but also ensure that the benefits and risks of medications are continuously monitored and well-understood by regulators, health care professionals and patients.

Whether it is R&D teams researching new discoveries, Pharmacovigilance teams monitoring adverse reactions, Regulatory teams reporting to health agencies, or Medical Communications teams working with doctors to promote patient safety, all of these teams rely on compliant access to the latest information and scientific literature to support that mission.

In the Legal and Library Partner to Accelerate Innovation and Improve Patient Outcomes webcast, hear how information management and copyright compliance come together to empower teams across the drug development pipeline whether in early discovery, clinical development or post-regulatory approval. Regardless of your industry, this session will serve as a model for anyone involved with managing information and copyright throughout the organization.

Joining the panel are:

Christina Bennett-McNew is the Associate Director of Library Operations at Eli Lilly and Company. Prior to working for Eli Lilly, Christina spent 15 years managing the CCC team of information management professionals based at Lilly. She has her Masters in Library Science from Indiana University and is currently a board member of the Pharma Documentation Ring (PDR) where she works with other Library Leaders in the Pharmaceutical industry.

Dr. John Rudolph is an Associate Vice President and Assistant General Counsel at Lilly where he is a member of Lilly’s Trademarks and Copyrights department.  In addition to trademark and trade dress matters, John works on copyright focusing on compliance, contracting and usage rights. In this role he also works with the Lilly Library helping to train Lilly colleagues on copyright matters.

Bruce Longbottom is Associate Vice President – Assistant General Counsel at Eli Lilly and Company, currently serving as Head of Lilly’s Trademark Department. His responsibilities include: (1) managing trademark and copyright compliance matters on a global basis, and (2) providing legal counsel in support of Lilly’s global anti-counterfeiting and product protection strategy.

Moderator: Stephen Casbeer is a Senior Client Engagement Director for Copyright Clearance Center. He has led global organizations and advised executives on adapting to and benefiting from rapid change in the information industry, encompassing issues of business and technology strategy, process re-engineering, and organizational design.

Join us to learn more about the partnership between Eli Lilly and Company’s Legal and Library teams and some of the ways that CCC supports them with compliance and content management solutions, or visit www.copyright.com/markets-business for more information.

Beware the “New Google” (And Much More)
https://www.copyright.com/blog/beware-the-new-google-and-much-more/
Mon, 26 Sep 2022
A NewsGuard investigation reveals that TikTok searches consistently feed false and misleading claims.

Grapefruit peel and lemon peel simmered slowly in water to extract the maximum quinine and vitamin C. It’s not a recipe for a trendy homemade energy drink, but a DIY prescription for hydroxychloroquine, touted online as a cure for COVID-19.

You can find the phony pharmaceutical on the world’s most popular website. No, not Google – the new Google, TikTok.

In growing numbers, people take questions about healthcare, politics, or finding the best restaurants not to Google, but to TikTok, the short-form video platform. This month, a NewsGuard investigation revealed that such TikTok searches consistently feed false and misleading claims to users, most of whom are teens and young adults.

“For example, when our analyst did a search for COVID vaccine, which is a kind of search that a young person might very well do to learn more about it, TikTok suggested that the search be for COVID vaccine injury or COVID vaccine truths or COVID vaccine exposed, COVID vaccine HIV, and COVID vaccine warning – in other words, highlighting the alarmist and often false claims about the COVID vaccine,” says Gordon Crovitz, NewsGuard co-founder.

The full conversation is available on the latest episode of the Velocity of Content podcast.

NewsGuard is a journalism and technology tool that rates the credibility of news and information websites and tracks online misinformation for search engines, social media apps, and advertisers.

Of the 8,000 news and information sites NewsGuard has rated, close to 40% receive a “red” rating, categorizing them as untrustworthy. The NewsGuard assessments, says Steven Brill, also a NewsGuard co-founder, are grounded in well-established principles of best journalistic practices.

“It’s a scrupulous, careful, multi-person look at how every one of these websites scores against nine specific criteria. Does it have a transparent policy to make a correction when they realize they’ve made a mistake? Do they mix news and opinion in a way that people can’t tell if it’s news and opinion? The basics that any journalist learns and adheres to,” explains Brill, who founded The American Lawyer in 1979 and started Court-TV in 1989.

For Banned Books Week 2022, A Record Year
https://www.copyright.com/blog/for-banned-books-week-2022-a-record-year/
Fri, 23 Sep 2022


The American Library Association is marking “Banned Books Week,” and celebrating the freedom to read, at a time when a surge of book bannings in public schools and libraries across the U.S. shows no signs of slowing.

Through the first eight months of 2022, ALA has recorded 681 attempts to ban or restrict library resources in schools, universities, and public libraries.

“That pace means 2022 will likely exceed the 729 challenges ALA tracked in 2021,” notes Andrew Albanese, Publishers Weekly senior writer. “ALA officials say the bans have targeted 1,651 different titles—and that’s already more than all of 2021.”

The full conversation is available on the latest episode of the Velocity of Content podcast.

Also this week, the EveryLibrary Institute, the research arm of EveryLibrary, a political action committee that advocates for libraries, released the results of a national poll of more than 1,100 registered voters, conducted from August 31 to September 3.

“Of those polled, 91% were disinclined to ban books. Fully half of all respondents said there is, ‘absolutely no time’ when a book should be banned and that included 31% of Republican voters polled,” Albanese tells me.

Discovering the Unknown – How Deep Search Capabilities Advance Science [Q&A with Lauren Tulloch]
https://www.copyright.com/blog/discovering-the-unknown-how-deep-search-capabilities-advance-science/
Thu, 22 Sep 2022

Overwhelmed with the sheer amount and diversity of data, many organizations struggle to provide employees — from top management to front-line staff — with easy access to the most relevant information and analytic insights.  

More and more, the term deep search is being buzzed about in industries that rely on having the most up-to-date information – areas like life sciences, biotechnology, chemicals, and agriculture.

The concept of deep search originated in the early 2000s to address the question: how can we search the breadth of content across the internet when it is not indexed by standard search engines?

We spoke with CCC’s Vice President and Managing Director of Corporate Solutions Lauren Tulloch to learn more about deep search solutions for advanced research purposes, and the ways CCC fits into the conversation.  

Q: In the context of business, how would you describe deep search solutions?  

Deep search solutions provide automated capabilities that gather, analyze, and deliver targeted data from a range of online sources.  

Deep search solutions focus on what you are looking for by gathering targeted intelligence from data sources you choose and content types relevant to you, leveraging ontologies that are meaningful to your company. This then creates a view into that data that makes sense for your organization and offers insights that your teams can act on.
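To make that pattern a bit more tangible for technically minded readers, here is a minimal sketch of ontology-driven gathering: pull pages only from sources you have chosen and tag them against a small vocabulary of concepts that matter to your organization. The source URLs, ontology terms, and function names are invented for illustration; this is not how CCC’s DS9 or RightFind products are implemented.

```python
# Minimal sketch of ontology-driven "deep search" gathering.
# Sources, ontology, and helper names are illustrative assumptions only.
import urllib.request

SOURCES = [  # data sources the organization has chosen to monitor
    "https://example.org/funder/awards",
    "https://example.org/conference/posters",
]

ONTOLOGY = {  # concepts that are meaningful to this organization
    "gene_therapy": ["gene therapy", "AAV vector", "CRISPR"],
    "crop_science": ["drought tolerance", "seed treatment"],
}

def fetch(url: str) -> str:
    """Download one page of text from a chosen source."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="ignore")

def tag_document(text: str) -> dict:
    """Return the ontology concepts whose terms appear in the text."""
    lowered = text.lower()
    return {
        concept: [t for t in terms if t.lower() in lowered]
        for concept, terms in ONTOLOGY.items()
        if any(t.lower() in lowered for t in terms)
    }

if __name__ == "__main__":
    for url in SOURCES:
        try:
            hits = tag_document(fetch(url))
        except OSError as err:  # unreachable source, HTTP error, etc.
            print(f"{url}: skipped ({err})")
            continue
        if hits:  # a structured, actionable result rather than a list of links
            print(url, hits)
```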

Q: How do deep search capabilities differ from traditional web searching? 

To find information, we know companies usually start out with a combination of search engines and curated databases. Using a traditional search engine is not only time consuming for this advanced level of search, but it’s easy to miss critical information because search engines do not index all web pages and may exclude pages with information of value to you. We also know they can introduce bias based on ad spend and algorithms, so search engines may offer only the information the provider thinks will interest you, based on what others find interesting or what will generate the most clicks on ads. In addition, results may be surfaced that display editorial bias and geographic bias. When you’re tasked with gathering all the information, you don’t want to be beholden to a search algorithm that has other priorities.

Curated databases are important but may not be enough. For competitive advantage, companies need to find the information beyond what everyone else has.  

Q: Can you give an example of what deep search capabilities look like in the context of a life sciences company?  

Take, for example, a company that is interested in grants data, and specifically, the link between a principal investigator, their affiliation, and licensed grants. The traditional way of searching for this information would be time-consuming and laborious, often producing results that may be irrelevant for your specific need. With deep search solutions gathering information from funding organizations’ websites, the ability to target and connect these disparate pieces of information becomes easier and quicker. 
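To illustrate that connecting step in code, the sketch below links grant records pulled from two hypothetical funder pages on the principal investigator’s name and affiliation. The field names and values are invented for the example; real harvested data would be messier and would need fuzzier matching.

```python
# Illustrative only: linking disparate grant records on PI + affiliation.
from collections import defaultdict

# Records as they might be extracted from different funder websites (invented data).
awards = [
    {"pi": "J. Rivera", "affiliation": "Example University", "grant_id": "R01-0001"},
    {"pi": "J. Rivera", "affiliation": "Example University", "grant_id": "U54-0042"},
]
licenses = [
    {"pi": "J. Rivera", "affiliation": "Example University", "licensee": "Acme Bio"},
]

def key(rec: dict) -> tuple:
    """Normalize the join key: PI name plus affiliation."""
    return (rec["pi"].strip().lower(), rec["affiliation"].strip().lower())

linked = defaultdict(lambda: {"grants": [], "licenses": []})
for rec in awards:
    linked[key(rec)]["grants"].append(rec["grant_id"])
for rec in licenses:
    linked[key(rec)]["licenses"].append(rec["licensee"])

for (pi, affiliation), info in linked.items():
    print(pi, affiliation, info)
```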

Consider a life sciences company wanting to monitor the competitive landscape – especially the emergence of new players. Deep search solutions can automatically alert you to potential partners or collaborators. 

Deep search solutions can help highlight signs of early invention in the marketplace – say from conference posters – which may indicate the direction of competitors’ intentions.

All of these are typical use cases for companies to gain advantage by deploying deep search solutions. 

Q: What are the main benefits for companies looking to explore deep search? 

The first is getting a competitive edge by gaining access to information first. And the second is efficiency and time savings. By streamlining the process of gathering and analyzing information, employees can make well-informed tactical and strategic decisions sooner.  

Q: What’s the CCC Connection?

CCC recently acquired DS9, a powerful set of software solutions that makes it easy to collect and analyze information from a variety of internal and external sources. Several multinational life science and crop science companies already use this technology to support a wide range of use cases.  

Deep search solutions from CCC are available immediately as we build on what DS9 has created with the help of our software professional services teams. As we look to the future, we will integrate these capabilities into our RightFind suite of solutions for our clients, creating new ways to capitalize on the data our deep search solutions can provide. 

Interested in learning more? Check out: 

Research Results, Replicability, & Retractions
https://www.copyright.com/blog/research-results-replicability-and-retractions/
Thu, 22 Sep 2022

As with my earlier post on the overall role of peer review in the scholarly and scientific publishing process, for Peer Review Week 2022 we are reposting our pieces in this series, along with a little updated information on the topics we covered:

* Psychology, in particular, seems to have a replicability problem; even outside of psych, everyone knows that bad papers are still getting published (and they may even have a discoverability advantage).

* Retraction Watch is still engaged in its useful work, and IMHO, everyone in scholarly and scientific publishing should subscribe to it. For example, this Retraction Watch collection highlights COVID-19 papers that had to be pulled.


Earlier in this series, I talked about the “QA” aspects of formal peer review; and then we took a look at preprints, the Versions of Record, and postprints. In this post, I’m focusing on the Why and the How of reproducibility, of replication studies. At the end of this post I’ll wrap up with a brief look at the graveyard of dead papers, the dread retraction.

A long time ago (1934 or so; from 1959 in English), the Viennese philosopher of science, Karl Popper, articulated what has since become known as the “criterion of falsifiability,” the notion that within the scientific disciplines, for a proposition to be considered scientific, it must be — at least in principle — capable of being refuted by an experiment. It may not be a perfect criterion, but applying it does serve to separate science from pseudo-science, and that is, in the words of Martha Stewart, “a good thing.”

In his book, The Logic of Scientific Discovery, Popper wrote:

“We do not take even our own observations quite seriously, or accept them as scientific observations, until we have repeated and tested them. Only by such repetitions can we convince ourselves that we are not dealing with a mere isolated ‘coincidence,’ but with events which, on account of their regularity and reproducibility, are in principle inter-subjectively testable.”

Admittedly, that is all a bit jargon-y; but what I see Popper getting at here is the noble practice, in scientific research and publishing, of demanding that experiments be re-run to see if their results hold up. Ideally, the initial results will check out;  but if they don’t, maybe the methods were invalid or the data wasn’t of sufficient quality in the first place. In other words, if something is amiss it is much better to find out as soon as possible. In biomedicine, this testing rigor is enforced through procedures of testing in vitro (essentially, in a dish), then in vivo (i.e., on live subjects, starting with animal testing); then on human subjects (control groups and all that) and —with luck—final approvals from the FDA (or other authorities) for specific uses. In essence, what we see in this instance is the scientific process at work, striving to provide treatments that are at once both safe and effective for doctors to prescribe and for people to use.

At the most abstract level, scientific studies – especially experiments which are designed to ferret out new knowledge – produce outcomes that indicate… something. Maybe the something is “the null hypothesis” – effectively, that no meaningful relation between this and that shows up in the results; or some new and interesting result may be uncovered, as when penicillin was shown to have a strong, general antibiotic effect.

But such discoveries are not complete – they are not considered reliable — until they have been replicated by others. In basic terms, replication studies are those in which the conditions, data and procedures of original experiments are re-run to see if they come out the same. Of course there is more to it than that.
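For the quantitatively inclined, here is a toy sketch of what “coming out the same” can mean in practice: simulate an original two-group experiment and an independent re-run, then compare the estimated effects. The effect size, sample sizes, and compatibility threshold are invented for illustration; real replication projects rely on pre-registered protocols and formal statistical criteria.

```python
# Toy sketch of a replication check: re-run the "same" experiment on an
# independent sample and compare effect estimates. All numbers are invented.
import numpy as np

def run_study(true_effect: float, n: int, seed: int) -> float:
    """Simulate one two-group experiment and return the observed mean difference."""
    rng = np.random.default_rng(seed)
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    return treated.mean() - control.mean()

original = run_study(true_effect=0.5, n=50, seed=1)
replication = run_study(true_effect=0.5, n=200, seed=2)  # larger, independent re-run

print(f"original effect estimate:    {original:+.2f}")
print(f"replication effect estimate: {replication:+.2f}")

# Crude compatibility check (illustrative threshold only).
print("compatible" if abs(original - replication) < 0.3 else "discrepant")
```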

Although I am sure there are many others, the greatest failure-to-replicate example that I know of is the Fleischmann-Pons “cold fusion” debacle of 1989. (I recall following the controversy in Usenet’s sci.physics newsgroups in near-real-time. It was fascinating, even to an outsider like me.) Bear in mind that “cold fusion” (sometimes referred to as “desktop fusion”) would certainly have changed the world of energy and fuel production—if the effect were real.

The summary paragraph provided in the Wikipedia entry on “Cold Fusion” is precise and to the point:

“In 1989, two electrochemists, Martin Fleischmann and Stanley Pons, reported that their apparatus had produced anomalous heat (“excess heat”) of a magnitude they asserted would defy explanation except in terms of nuclear processes. They further reported measuring small amounts of nuclear reaction byproducts, including neutrons and tritium. The small tabletop experiment involved electrolysis of heavy water on the surface of a palladium (Pd) electrode. The reported results received wide media attention and raised hopes of a cheap and abundant source of energy.”

Boiled down to essentials, the initial claims of the Fleischmann-Pons team, concerning detectable energy production — and particularly their inference as to its source — failed to replicate, and failed to hold up under closer scrutiny. The more specialists refined the experimental procedure, the smaller the net energy-production effect appeared, when it appeared at all. Although unfortunate for the reputations of those two scientists, the replication step did its job; relatively quickly it showed that there were significant problems in the experiment and that the claimed results were not to be relied on.

Note: At this distance, it appears that the Fleischmann-Pons and follow-on experiments found something interesting; but not energy at the levels they thought, and not anything that should be referred to as “cold fusion.”

Finally, even where the underlying research and the resulting article have cleared all the usual hurdles, and have been published in a reputable (not fly-by-night) journal, sometimes – rarely – major problems with the overall work are identified after the fact. The procedure for addressing such problems is known as “making a retraction” and no one likes to do it. The author or research team can understandably feel defeated or even rejected, the editor and journal likely feel they have suffered a loss of reputation, and readers may rightly feel let down by both. Retractions can occur fast, or they can be slow and take years to complete. For those who need to follow such things, Retraction Watch is a good aggregator of retraction events.

According to the Oxford Dictionaries, “quality assurance” may be functionally defined as “the maintenance of a desired level of quality in a service or product, especially by means of attention to every stage of the process of delivery or production.” Throughout this short series, we’ve focused on the many steps, and all stages, used in scientific and scholarly publishing to ensure and improve on the quality and reliability of published articles. As is often said about peer review, but I think equally true of the others, these are the inglorious (or, maybe, simply non-glorious) but necessary procedures to enforce if a top-quality product and reputation is to be earned and maintained.

Understanding the Role of Preprints & Postprints & The Version of Record
https://www.copyright.com/blog/the-role-of-preprints-postprints-the-version-of-record/
Wed, 21 Sep 2022
For Peer Review Week 2022, Dave Davis focuses on the “before,” “in process,” and “final” versions of articles: preprints, postprints, and the “version of record.”

As with my earlier post on the overall role of peer review in the scholarly and scientific publishing process, for Peer Review Week 2022 we are reposting this series and including some updated information on the topics we covered:

An article in Nature Medicine suggests access to preprints might be good for patient outcomes. I am not so sure; another looks at how the role of preprints shifted during the years of peak COVID.

We note that readers and writers keep attempting to clarify the distinctions among Preprints, Postprints, and Final Versions.

Additionally, it may be tangentially relevant to these topics to note that the Copyright Office is looking at its Deposit Copyright Requirements as well as at its “Best Edition” requirements (because all of preprints, postprints and final versions probably qualify as registrable for copyright protection but, if they are very similar, only one should be filed, with an assigned registration date, in order to be the basis for an infringement suit).


In my first post in this series, we looked at the role of peer review in scholarly and scientific publishing, and the critically important “quality assurance”-style value it brings to the process. In this one, I am focusing on the “before”, “in process” and “final” versions of articles, which is to say, preprints, postprints, and the “versions of record.”

  • Preprints, as the name implies, are versions of a submitted article which are made available to readers in advance of their formal review and acceptance for publication. Although they were around in the old paper world —they’ve effectively been around forever — today they are typically PDFs which either the author has supplied as a courtesy to colleagues in the field, or have been (legitimately) posted to a server dedicated to the specific purpose of making them more easily discoverable. A recent blog post for the Microbiology Society thus defined them: “A preprint is a version of a scientific manuscript that has been posted online, that has not completed the formal peer review process, i.e., it has not been editorially accepted for publication.”

If you are writing and publishing in the biomedical or related fields, preprints have been around longer than many readers may realize. Although informal research-sharing networks have been around as long as there have been networked computers, arXiv (from its beginnings in 1991, when it was known as LANL) is generally recognized as the oldest continuously operating preprint server.

On the other hand…

  • Postprints, as the name itself suggests, are openly accessible copies of papers which have already been accepted for publication. These are usually hosted and made available via institutional repositories, most commonly where the researcher/scholar teaches or is otherwise affiliated. Postprints may also be made available through services such as SSRN or Academia.edu. The purpose of postprints (or more properly of links to postprints) is to ensure that the research field has access to this “good” research – “good” because it has been reviewed and accepted – ahead of publication, which may be a year or more away. It also gives the author and her institution a version that is “theirs” and technically separate from the Version of Record, to which some or all copyright rights will likely have been conveyed to the publisher.

Note: Although some might expect to see Sci-Hub or LibGen also listed here as repositories of postprints, from my perspective they ought not be considered among legitimate services because they usually post materials without the permission of, or notice to, the author, the author’s institution, or the publisher. Readers of this blog can, of course, weigh such issues for themselves.

A helpful graphic (via Wikimedia Commons, CC BY 4.0) illustrates the relation among these publication statuses.

Both preprints and postprints can help advance more efficient scholarly communication and broader access to scientific outputs, both of which are core goals of the Open Access Movement. Comments and criticism of preprints, in particular, may serve as an informal supplement to editorial peer review.

  • Beyond preprints (not yet reviewed, let alone edited for publication) and postprints (reviewed but not yet edited for publication) we find the version of record. Basically, the version of record is considered the “final, published,” citable and (hopefully) stable and enduringly accessible version of the article — one which can then be considered part of the domain literature on the subject. Its significance is such that some university tenure committees will consider only the versions of record as part of a researcher’s scholarly output (this is the old “publish or perish” rule). The version of record is generally that on which the publisher has completed its formal peer review and publication steps, including editing, formatting and pagination. A DOI, as well as other identifying metadata, are assigned at this point in order that the article may be more easily discoverable by later workers in the field. This is the version indexed by such services as SCOPUS and Web of Science.

The diagram above suggests a logical, well-ordered process. The real world is of course much messier. All this versioning and back-and-forth can lead — I suspect commonly leads — to some tensions and frustrations (and delays) with the workload this all entails among authors, editors, and publishers, each in their way seeking to get top-notch research results published quickly.

Although, to most of us, having multiple versions of scholarly and scientific articles floating around might seem needlessly complicated, each of the three version types above serves a significant need: preprints get the word out fast; postprints signal the article has been accepted and scheduled for publication; and the version of record is the bona fide “contribution to the field” which serves as the “certifiably” reliable and citable version for extending the body of knowledge on the topic.
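For readers who like to see structure spelled out, the short sketch below models the three version types as states of a single article record, with a DOI attached only at the version-of-record stage. The field names and DOI value are invented for illustration; this is not a model of any particular publisher’s workflow.

```python
# Illustration of the three publication statuses discussed above.
# Field names and the DOI value are invented for the example.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Status(Enum):
    PREPRINT = auto()           # posted before formal peer review
    POSTPRINT = auto()          # accepted, not yet formally published
    VERSION_OF_RECORD = auto()  # final, citable published version

@dataclass
class ArticleVersion:
    title: str
    status: Status
    peer_reviewed: bool
    doi: Optional[str] = None   # typically assigned only to the version of record

manuscript = ArticleVersion("A Study of X", Status.PREPRINT, peer_reviewed=False)
accepted = ArticleVersion("A Study of X", Status.POSTPRINT, peer_reviewed=True)
published = ArticleVersion("A Study of X", Status.VERSION_OF_RECORD,
                           peer_reviewed=True, doi="10.1234/example.0001")

for version in (manuscript, accepted, published):
    print(version.status.name, "| reviewed:", version.peer_reviewed, "| DOI:", version.doi)
```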

In my next post, I will focus on the role of Replication (in scientific experimentation and publishing) and finish up with a brief look at Retractions.

Further reading:

Peer Review, Still a Critical Step in Scholarly & Scientific Publishing
https://www.copyright.com/blog/peer-review-still-a-critical-step-in-scholarly-scientific-publishing/
Tue, 20 Sep 2022
Dave Davis revisits his 2021 blog post observing the Peer Review process through the lens of QA (quality assurance).

The main points I made in this blog post from 2021 seem to hold up pretty well. In the time since it was written, I continue to observe the Peer Review process through the lens of QA, which I continue to think provides a helpful metaphor.

Peer Review, a Critical Step in Scholarly & Scientific Publishing

However, we work assiduously to keep up on our reading around here:

One blog I read recently suggests (with a pinch of hyperbole) that while peer review is a time-hallowed and important process, a more fully sustainable method for it has never actually been tried; another writer suggests that the whole business of peer review continues to be in crisis and that nobody has solved it yet.

I feel they each have a point there. Perhaps peer review is well thought of as (in the old expression) “reformata, sed semper reformanda” (‘reformed, but ever reforming’).


(Disclaimer: I’m not a scientist, nor have I ever edited or peer-reviewed a scientific paper, although in my background research for this post I communicated with several colleagues who do fulfill such descriptions. My degrees are in History, which may be conducted as social science, or as a form of literature, or both, depending on how you go about it.)

For several years now, I’ve been honored to work with The Concord Review, a journal featuring top-quality academic essays by high school students. My role is similar to that of the history teacher to whom a term paper is submitted: I read the work closely and offer a brief, constructive critique along several dimensions, focusing on evidence, internal logic, writing, and the writer’s application of valid historical methods.

In the paragraph above, I used the word “quality,” a multifaceted and rich term, and one whose meaning can slip around like a fish pulled from a stream. Checking with Merriam-Webster, I find the sense I want at 2(a): “degree of excellence.” A graded, or ranked, paper is seen as of high quality if it excels on those points upon which it is assessed.

A similar (but distinct) usage comes in from software engineering: “Quality Assurance” (QA) is a critical part of the development process where the programs under study are rigorously tested to determine that they function as designed in the specs and do not make critical errors, such as failing randomly or messing up the data.

Scientific peer review may be thought of as something a lot like the QA component of the software development process – that is, it is essentially the QA step for the production of high-quality articles which are published in high-quality journals.
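To make the software-QA analogy concrete, here is a tiny example of the kind of automated check a QA step runs before code is allowed to ship: the function must behave as its “spec” says, every time. The function and tests are invented for illustration and are not meant to mirror how any journal actually manages peer review.

```python
# A minimal software QA check, as an analogy for peer review:
# the function under test must behave as its "spec" demands before release.
# Example code only.
import unittest

def normalize_doi(raw: str) -> str:
    """Spec: strip whitespace and any leading 'doi:' prefix, lowercase the result."""
    cleaned = raw.strip()
    if cleaned.lower().startswith("doi:"):
        cleaned = cleaned[4:].strip()
    return cleaned.lower()

class NormalizeDoiQA(unittest.TestCase):
    def test_prefix_and_whitespace_removed(self):
        self.assertEqual(normalize_doi("  doi:10.1234/ABC  "), "10.1234/abc")

    def test_already_clean_input_unchanged(self):
        self.assertEqual(normalize_doi("10.1234/abc"), "10.1234/abc")

if __name__ == "__main__":
    unittest.main()
```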

In early 2020, CCC (in cooperation with Outsell) published a map of the scientific publishing ecosystem, available for download here. The following graphic is a simplified version of the research-authoring-review-publishing cycle:

On this idealized representation, Peer Review appears in the middle right of the cycle, the step at which formal peer reviewers —who, as a rule, are independent, i.e. not employees of the publisher —are sent the submitted paper. Depending on the publisher’s expectations (administered in the person of the editor), at this stage the peer reviewers typically would —minimally, although they may do more— check for (a) appropriate application of method, (b) apparent valid application of the data presented, and (c) internal consistency/logical validity. Considered at a more granular level of detail, though, there are QA checks in each of the five main phases: at Research and Discovery, when the team conducts a literature review and then designs the study itself, in accord with known and published methods; at Authoring and Output, when the results are checked (and possibly triple-checked) such that the assertions and conclusions are fully constrained by what the data says – and doesn’t say; at Editorial Submission and Acceptance; at Publishing and Distribution; and then in the article’s Post-Publication life when it is weighed as a contribution to the domain’s literature. In short, for top-tier articles and journals at least, the pursuit of quality is “baked into” each phase of the cycle.

There are many examples of when these efforts have failed, either due to intentional misleading on the part of the researchers, or when the all-too-human peer reviewers have dropped the ball. An early example from modern times, the “Desktop Fusion” controversy of the late 1980s, led to a frenzy in the popular media, and then to recriminations and retractions, when the Fleischmann-Pons results could not be replicated by any other researchers. A few years later, in 1998, a paper by British researcher Andrew Wakefield, which claimed to find a correlative link between vaccines and autism, was published in a major medical journal. Later, Wakefield’s data was shown to have been falsified and his results utterly invalid. The paper was not retracted for 12 years, however, during which time a massive anti-vaccination movement took hold and gained popularity, often citing Wakefield’s paper in support of their efforts. Both the incident of Fleischmann-Pons, and that of the Wakefield debacle, can be considered failures of the peer review process. These massive publication failures are rightly seen as outliers, to be sure; but they also should stand as warnings, and spurs to keep striving to improve the peer-review process. The stakes can sometimes be high indeed.

As I was writing this post, I learned that the STM Organization (in cooperation with NISO) has a pilot project underway, assessing a potentially standard Peer Review Taxonomy. As a fan of taxonomies —my other degree being in Library & Information Science— I experienced a little thrill when I heard about that. There are many aspects of peer review – e.g., internal versus external — that can be clarified by implementing a controlled vocabulary and concept hierarchy. Here’s hoping!

There is a pithy old chestnut, attributed (apparently erroneously) to 19th century art critic and aesthete John Ruskin, which runs “Quality is never an accident. It is always the result of intelligent effort. There must be the will to produce a superior thing.” Even if Ruskin never said it, someone did, and they were on to something. Quality, in the sense we’ve been discussing here, is the effort to get it right, as much as possible, as soon as possible – whatever “it” is.

Peer Review Week 2022: A Matter of Trust
https://www.copyright.com/blog/peer-review-week-2022-a-matter-of-trust/
Mon, 19 Sep 2022
As a sometimes-fractious global society, we’ve just been through a period where high-quality research results have sometimes been actively disbelieved due to psychological and other non-empirical factors.

Way back in college, I was asked by my teachers to read a short book contrasting the Hebrew term emunah (trust) with the Greek term pistis (belief). The rest of the content of this book is not relevant at present — check it out here if you like — but the distinction stuck with me. And it was brought to mind again recently when I learned  the theme selected for Peer Review Week 2022 — “Research Integrity: Creating and Supporting Trust in Research.”

As a sometimes-fractious global society, we’ve just been through a period where high-quality research results have sometimes been actively disbelieved due to psychological and other non-empirical factors, and this mistrust or active disbelief led to negative (and indeed tragic) consequences. Rather than dwell on the sources of that negative skepticism, this year’s theme explicitly offers us the occasion to think about what sorts of factors build trust in research, providing the conditions under which we properly accord trust (emunah) in our beliefs (pisteis) about scholarly and scientific findings.

Several generators, or builders, of trust are commonly identified in the literature. One is consistency – the discipline of working towards a consistent level of outputs. Consistency may well be the most critical component of the trust readers put in a journal’s – or an author’s – brand. Another is transparency – answering questions such as: What methods did you use? Where and how were the data collected? And how reliable should I consider these results (i.e., how much trust should I invest in what you say)? I also find that a little humility goes a long way—tell me where the gaps still lurk, what problems are still unresolved, and you will probably go a long way towards me trusting what it is you say you have found out.

Overall, it seems obvious that trust in research results does not simply fall from the sky, as it were. It is earned instead through consistent application of valid methodology and the rigorous adherence to the peer review process. The phrase “Research integrity” serves as a shorthand for that trust-building rigor.

But that attribute of research integrity comes with a price tag —in my experience, work of the highest quality is never free; it comes at a cost and sometimes with a price in dollars, pounds, euros or other coin of other realms. This might be where the PRW ’22 theme’s notion of support towards building trust in research comes in. Whether it is through government or charitable subsidy, or subscriber fees, underwriter fees or other means: if value is to be created, costs must (somehow) be paid.

I do indeed welcome this year’s theme, and I am looking forward to reading articles on the topic. As William Martin Joel once opined, “it’s a matter of trust.”

2022 “Global 50” Rankings for Publishers
https://www.copyright.com/blog/2022-global-50-rankings-for-publishers/
Mon, 19 Sep 2022
“The very big trade publishing houses have done their homework,” says the author of Global 50: The Ranking of the Publishing Industry.

The world’s largest trade book publisher came by its name as a joke.

Co-founders Bennett Cerf and Donald Klopfer, already owners of the Modern Library, called their new venture Random House in 1927 because they expected to publish whatever books they liked. And what they liked, readers liked, too, year after year. For the first half of 2022, Penguin Random House reported €1.9 billion in revenue.

Over the last century, all of publishing, not just Random House, has developed in ways that Cerf and Klopfer never imagined. Books are a global business today that throws off profits and spinoffs in a diverse media environment.

The full conversation is available on the latest episode of the Velocity of Content podcast.

Leading into the Frankfurt Book Fair in October, Rüdiger Wischenbart has surveyed this dynamic world, one increasingly dominated by digital, for his annual report, Global 50: The Ranking of the Publishing Industry.

“We saw over the past several years, and the pandemic was only an accelerator for that, that the very big trade publishing houses, consumer book publishing houses, have done their homework very, very properly,” Wischenbart observes.

“The very difficult moment of the pandemic, when bookshops had to shut down, when people were stuck at home in many major markets for lockdowns, that did not really impact the revenue or the profitability of these publishing companies. Guess what? Nothing was random here at all. It was really based on very hard work,” he tells me.
