Peer Review in a COVID world, a Velocity of Content series
Over the coming months, CCC will publish a series of occasional posts on the topic of Peer Review in a COVID World, exploring various facets of the impact of data and peer review in scholarly publishing.
Peer review has, for centuries, been an essential step in publishing a scientific article. It provides the quality assurance step in scholarly publishing which, while not perfect, serves as a mechanism for ensuring that published articles are scientifically sound and informative. The quality of articles and the reproducibility of the science reported in them strengthen the journal's and publisher's brands, attract new authors and maintain readership – all critical components in the lifecycle of scientific communications.
Peer review offers the promise of improved quality and value in published content, but importantly, peer review takes time: time to find the right reviewers, time for the reviews to be written, and time to revise or reconsider the submitted manuscript based on reviewer comments. The peer review process has been under pressure for some time to shorten its lifecycle, but the outbreak of COVID-19 has accelerated the need to rapidly publish scientific work pertaining to the testing, mitigation and treatment of the virus. The pressure is on:
- The Lancet, for example, reported a fivefold increase in submissions this spring, with a 95% rejection rate.
- In May, Edward Campion, Executive Editor at the New England Journal of Medicine (NEJM), was quoted by thescientist.com as receiving over 40 COVID-19 manuscript submissions per day.
- As of this writing, over 7,300 COVID-19 papers are tracked on the pre-print servers bioRxiv and medRxiv.
The implications for getting the peer review process right are considerable. If the wrong reviewer is selected, articles may be published that later require retraction or revision, readers may be misled or misinformed, and the costs of publishing increase. Inaccuracy is bad for brands, too: witness the recent controversy over articles on a hydroxychloroquine-based therapy that had to be retracted due to methodological errors.
As a result, publishers need more efficient tools for identifying relevant and qualified peer reviewers and for securing reviewers' commitment to complete reviews quickly. While there isn't a single solution that addresses every publisher's needs, we are seeing considerable innovation and collaboration to address the challenge of quickly finding and securing qualified peer reviewers who are both available and reliable. At the heart of this challenge is good data that identifies, without ambiguity:
- The subject matter experts in a given field
- The articles that authors have previously published, with the associated metadata
- The author’s affiliations with specific institutions and other journals
- The reviewer’s affiliations with specific institutions and other journals
- The reviewer’s interest in reviewing the manuscript in question, and
- The reviewer’s availability in the specific timeframe when review is needed.
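To make the "good data" requirement concrete, the bullet points above can be sketched as a minimal reviewer record. This is a hypothetical illustration, not any publisher's actual schema: the field and function names are invented, though the identifier types referenced in the comments (ORCID iDs for people, DOIs for articles, ROR IDs for institutions) are real persistent identifiers used in scholarly publishing for exactly this kind of disambiguation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class ReviewerProfile:
    """Hypothetical record combining the data points listed above."""
    person_id: str                       # persistent identifier, e.g. an ORCID iD
    subject_areas: List[str]             # fields of subject-matter expertise
    published_dois: List[str]            # previously published articles, by DOI
    affiliations: List[str]              # institutions/journals, e.g. ROR IDs
    interested: bool = False             # willing to review this manuscript
    available_until: Optional[str] = None  # availability window for the review

def eligible(reviewer: ReviewerProfile, manuscript_topics: Set[str]) -> bool:
    """A naive matching rule: expertise overlap, plus interest and availability."""
    return (
        bool(manuscript_topics & set(reviewer.subject_areas))
        and reviewer.interested
        and reviewer.available_until is not None
    )
```

In practice, the hard part is not the record layout but populating it with unambiguous data — which is why persistent identifiers, rather than free-text names and affiliations, are central to the matching problem the post describes.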
Managing peer review varies widely from publisher to publisher, and from journal to journal, and runs the gamut from a mix of memory and Excel spreadsheets to more innovative approaches:
- In April 2020, our colleagues at OASPA spearheaded an effort to “speed up the review process while ensuring rigor and reproducibility remain paramount,” soliciting researchers around the globe to sign up to participate in rapid reviews of COVID-related manuscripts.
- MIT Press has launched a new publication it calls Rapid Reviews: COVID-19 — "an open-access overlay journal that seeks to accelerate peer review of COVID-19-related research and prevent the dissemination of false or misleading scientific news." It uses AI to identify promising research in preprint repositories, manage peer review, and publish results in a transparent process.
- The open access publisher Frontiers has announced “Collaborative Peer Review,” a process in which the average time from submission to decision is reduced to 90 days.
These examples address many of the challenges identified in the CCC/Outsell Scientific Publishing Ecosystem map published in 2019. By asking researchers, “What keeps you up at night?” we confirmed two key hypotheses related to peer review: 1) Authors were frustrated by persistent flaws in the peer review process, including what they perceive as a widespread lack of transparency; and 2) Reviewers were troubled by a lack of formal recognition for their peer review contributions. Similarly, one funder representative observed, “If publishing and peer review platforms break down, then our organization’s mission to publish and disseminate research results as quickly as possible will not be met.” All of these issues, and many others identified in our map, can be ameliorated by better data management.
Summing up: Peer review is foundational to the value of scholarly and scientific research. While the tools in wide use are helping, the community of researchers, publishers, funders and institutions is demanding more articles, better quality and faster results. COVID-19 is putting peer review as we know it today to the test. Data is at the heart of these issues, and better management of data is the way forward.