MIT’s PubPub Seeks “New Info Ecosystem”

30 August 2018

The MIT Press and the MIT Media Lab’s collaborative Knowledge Futures Group have unveiled the innovative publishing platform PubPub, kicking off with a project inspired by Frankenstein.
Two very different laboratories. Two very different experiments. Separated by two centuries, they share a common DNA.

Frankenstein; or, The Modern Prometheus is a novel whose composition resembles its famous creature – a stitched-together assemblage of Gothic horror, Romantic philosophical reflection, and science fiction, published in 1818 by the 20-year-old prodigy Mary Shelley.

Frankenbook, launched online in January 2018 as part of Arizona State University’s celebration of the novel’s 200th anniversary, is a collection of contemporary scientific, technological, political, and ethical responses to the original Frankenstein text. The innovative publishing platform that hosts Frankenbook is PubPub, among the first experiments to escape the lab at the Knowledge Futures Group (KFG), a collaboration of The MIT Press and the MIT Media Lab.

With a stated mission to transform research publishing by incubating and deploying open source technologies meant to build a new information ecosystem, the Knowledge Futures Group is a leading-edge (even bleeding-edge) endeavor. Yet it’s worth noting that the MIT Press and the MIT Media Lab have deep roots in computing and communication. Since 1986, the MIT Media Lab has harnessed technology for creative expression. In 1995, MIT Press published, in print and digital form, one of the first “open access” books – William Mitchell’s City of Bits, in which the author presciently observed the ways that online communication was a powerful and liberating force.

“We would like to serve as a test kitchen, an incubator, and a staging platform for the development and launch of open source publishing technologies and aligned open access publications,” explains Terry Ehling, director of strategic initiatives for MIT Press, of the Knowledge Futures Group.

“The open source approach not only reduces the precarious dependency that most nonprofit academic publishers have on costly outsourced technologies and a limited network of commercial vendors, but it also provides a foundation for greater insourced experimentation and innovation,” she says. “This is really a way for us to control our future in many ways, which has been increasingly dominated by for-profit multinationals. We are no longer technology-informed, we are technology-driven. Much of that technology resides outside of our control.”

As co-developer of PubPub with his MIT Media Lab colleague Thariq Shihipar, Travis Rich positions PubPub as a platform for passion as much as publishing. “It was driven by the different way that research at the Media Lab is typically conducted,” he says.

“We don’t have traditional academic grants that have a start date and an end date with a very clear set of goals. It’s an undirected research model that is supported by a consortium of corporate members. We typically operate by driving some passion and not necessarily just writing that up and sending it off to be published at some point.

“We enjoy having feedback and conversations with member companies of the Media Lab. That iterative, feedback-driven, interactive, data-heavy approach… felt like the right way to do research. [Before PubPub,] we just didn’t have a tool that let us work the way we wanted to work.”

Blockchain for Science: Part Three – Advanced Peer-to-Peer Systems in Research

2 August 2018

What if researchers could claim their own contribution to a piece of work, or provide peer assessment of another’s work? Blockchain may be the solution.
Blockchain, the technology behind Bitcoin, offers a peer-to-peer network for trust that can potentially disintermediate traditional brokering authorities like banks, notaries – and perhaps even publishers. Copyright Clearance Center (CCC) and the International Council for Scientific and Technical Information (ICSTI) hosted a webinar led by industry experts to investigate what opportunities blockchain has to offer in the scholarly publishing world.

An academic librarian by training, Lambert Heller has a background in the social sciences. He founded the Open Science Lab at TIB (the German National Library of Science and Technology) in 2013 and runs a number of grant projects, some with partners from the Leibniz Research Alliance Science 2.0. Many of the TIB Open Science Lab’s efforts concern linked-data-based research information systems (VIVO), as well as communicating and cultivating what “Open Science” is all about. Below, he shares his own views on how and why scholarly objects, as well as transactional metadata, can and should be taken care of by advanced peer-to-peer systems.

Turning the Client-Server Paradigm Upside-Down

Today, researchers rely on a vast collection of scholarly objects – article PDFs, book chapters, annotations and personal notes, data sets, etc. – to conduct their work, particularly in the social sciences. To get access to these various and sundry objects, they must navigate a number of different platforms, deal with various APIs, interpret differing policies, and comply with divergent business models – a time-wasting and costly problem that slows science and is ripe for a technical solution.

Enter BitTorrent: a decentralized communication protocol for peer-to-peer file sharing that can be used to distribute data and files over the internet. Unlike traditional client-server relationships, where increased activity slows service, with BitTorrent loading gets easier the more people are interested in a file. Newer protocols like IPFS and DAT also allow for web-like experiences. So, instead of gatekeeping and administrating a database of (open) works on a server, open protocols could be used to make scientific objects available online. The result is more resilient object storage, where privileged access is replaced with permissionless innovation, leveling the playing field for business-model innovation.
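
To make that concrete, here is a minimal sketch of publishing and fetching a scholarly object over IPFS, assuming a local IPFS daemon is running with its HTTP API on the default port; the file name is hypothetical.

```python
import requests

IPFS_API = "http://127.0.0.1:5001/api/v0"  # default local daemon endpoint

# Publish: add a (hypothetical) article PDF to the local IPFS node. The
# returned content identifier (CID) is derived from the file's bytes, so
# anyone holding the CID can verify they received exactly that object.
with open("article.pdf", "rb") as f:
    resp = requests.post(f"{IPFS_API}/add", files={"file": f})
cid = resp.json()["Hash"]
print("content identifier:", cid)

# Fetch: retrieve the object by content, not by server location. Any peer
# that has replicated the object can serve it.
data = requests.post(f"{IPFS_API}/cat", params={"arg": cid}).content
print("retrieved", len(data), "bytes")
```

Because the address is derived from the content itself, any peer holding a copy can serve it, and the original publisher need not stay online.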

Facilitating the Exchange of Value with Transparency and Decentralization

Another pain point of today’s research ecosystem is the disconnect between researchers, contributors, and the public, who rarely interact directly. Instead, journal editors or metadata aggregators intermediate, usually through a designated corresponding author who is trusted to answer on behalf of colleagues. This arrangement can lead to quality issues, inaccuracies, and other unnecessary challenges.

But what if the researchers involved directly claimed their own contributions to a given piece of work, or their assessments or reviews of another’s work? By producing a scholarly metadata trail of sorts, blockchain could help researchers do just that, rather than relying on stewards or third parties to make the information public. Responsible, efficient governance of that metadata trail may even have the power to set new standards among researchers and publishers, increasing the rigor and accuracy of science.

Blockchain in Action: Educational Certificates

For about three years now, institutions of higher education such as the MIT Media Lab and the Open University in the UK have been leveraging blockchain technology to convey certificates or diplomas, putting the autonomy of learners at the forefront. Traditional digital certificates issued by institutions operate under the assumption that the institution itself will endure into the future and remain available and accessible for verification. But few institutions work this way in practice: susceptible to economic, social, and political forces, they change constantly. So it’s an important and relevant concept – that you could hold immutable, cryptographic proof that you earned a certificate. In this scenario, it’s also completely within the learner’s control how, when, and why they share their certificate with other people or institutions.
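
As a rough illustration of the underlying cryptography – not any institution’s actual certificate format – the sketch below shows an issuer signing a digest of a certificate, and anyone later verifying it with only the issuer’s public key, no call back to the institution required. Names and payloads are hypothetical.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer side: sign a digest of the certificate's contents.
issuer_key = Ed25519PrivateKey.generate()
certificate = b'{"learner": "Jane Doe", "award": "MSc", "date": "2018-06-01"}'
signature = issuer_key.sign(hashlib.sha256(certificate).digest())

# Learner shares (certificate, signature); a verifier needs only the public
# key, which could itself be anchored on a blockchain so it outlives the issuer.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(certificate).digest())
    print("certificate is authentic")
except InvalidSignature:
    print("certificate was tampered with or is forged")
```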

Scholarly Publishing: In Blockchain We Trust

Imagine applying this same blockchain model from higher education to peer review – one of the most fundamental processes in academia, central to publishing, tenure, funding, and hiring. We could have an ‘ownerless’ database of the highly valuable research metadata trail, with proofs of exchange between peers that are directly controlled by the researchers involved. This is not to say that peer review won’t require facilitation from publishers, but rather that no one party would control the record of it. Just as the advent of the internet revolutionized science, so too will important advances come from this new set of peer-to-peer systems and unprivileged scholarly interchange protocols.

Blockchain for Science: Part Two – A (R)evolution in Research

26 July 2018

Sönke Bartling, founder of Blockchain For Science, explores developing novel online tools and concepts for knowledge creation.
Blockchain, the technology behind Bitcoin, offers a peer-to-peer network for trust that can potentially disintermediate traditional brokering authorities like banks, notaries – and perhaps even publishers. Copyright Clearance Center (CCC) and the International Council for Scientific and Technical Information (ICSTI) hosted a webinar led by industry experts to investigate what opportunities blockchain has to offer in the scholarly publishing world.

Blockchain For Science

Sönke Bartling, founder of Blockchain For Science and an associated researcher at the Humboldt Institute for Internet and Society, is interested in developing and describing novel online tools and concepts for knowledge creation. In addition to being a board-certified radiologist with broad clinical experience and a researcher in basic medical imaging sciences, Sönke co-edited the living book Opening Science (http://www.openingscience.org/). He is currently organizing the 1st International Conference on Blockchain For Science and Knowledge Creation in Berlin. Below, he shares his own views on the blockchain revolution and what it could mean for science and knowledge creation.

A New Way to Look at Databases

Today, our digital lives are replete with databases: centralized, online, server-based information systems that hold and record data. Facebook, for example, is a sort of database of our social activity, in the same way that online banking is a database of our savings and financial transactions. For better or worse, we as consumers take it for granted that these service providers will not unduly manipulate what’s in the database – our number of Facebook ‘Likes’ or our checking account balance – but with the advent of blockchain, this doesn’t have to be the case.

What is Blockchain?

So, what is blockchain? In many ways, it’s an online database, but one with special characteristics. It’s distributed and decentralized: it runs not on one computer but on many, so there is no single point that could be targeted to disrupt the service. It’s also immutable, meaning it can’t be changed arbitrarily, and any and all changes to the database are recorded so that they can be tracked and followed, producing a public, provable, cryptographic ledger of transactions. In the same vein, blockchain has significant implications for controlling user accounts. Instead of using passwords for logging into services that are centrally managed by the provider, users could maintain their autonomy by way of a unique private digital key used to authorize transactions – that is, changes to the database.
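
The “immutable ledger” idea can be made concrete with a toy hash chain. This is a minimal sketch of the data structure alone – real blockchains add peer-to-peer replication, consensus, and digital signatures – and the record contents are hypothetical.

```python
import hashlib, json, time

def block_hash(body):
    # Hash the block's full contents, including the previous block's hash.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "time": time.time(), "record": record, "prev": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def verify(chain):
    # Any edit to an earlier block breaks every hash that follows it.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append(chain, {"likes": 42})
append(chain, {"likes": 43})
print(verify(chain))                  # True
chain[0]["record"]["likes"] = 9001    # tamper with history
print(verify(chain))                  # False
```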

Blockchain-ified Science: Open Start to Finish

So, what might a scientific database application for blockchain look like?

We’re all familiar with the research cycle: researchers conduct an experiment to acquire data, which they then process and analyze to produce a publication. The article or research artifact is evaluated, and funding is allocated accordingly so that the next experiment can begin. In this traditional process, the science becomes accessible to the wider world only once the article is published.

While more recent developments like open data initiatives push this transparency forward somewhat, into the analysis phase, blockchain has the potential to dramatically increase openness across the entire research cycle, starting with study registration and data acquisition. Research data could be deposited in an immutable, time-stamped blockchain database, encouraging scientists to be more objective about the information they collect and potentially enhancing reproducibility. Post-processing and analysis could also be governed by blockchain technology, leaving an audit trail of the process and producing ‘smart’ evidence that researchers cannot unduly influence. It could also make the publication process more dynamic, with peer review, or even finer-grained micro-contributions, recorded on the blockchain.
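
What might such a deposit look like? At its simplest, just a content fingerprint plus a timestamp – a record that could then be appended to a ledger like the toy chain sketched above. The file name here is hypothetical.

```python
import hashlib, json, time

def deposit_record(path):
    # Fingerprint the raw data file at acquisition time. Re-hashing the
    # file later proves the data hasn't changed since it was registered.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return {"sha256": h.hexdigest(), "registered_at": time.time()}

print(json.dumps(deposit_record("trial_data.csv")))
```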

Finally, the process for awarding funding could be substantially improved through the use of blockchain, by encouraging objectivity and relieving the reliance on third parties for orchestration. For example, the Russian biotech firm ARNA Panacea recently issued its own fixed-value cryptocurrency to crowdfund a decentralized universal repository of clinical trials, diagnostic information, disease progression data, and more.

The Next Wave of Open Science

While throughput limitations of blockchain mean that it will never replace centralized systems and databases found in the scientific ecosystem, its revolutionary benefits in the areas of transparency, reproducibility, information dissemination, and researcher incentivization are significant. Going forward, we can expect that blockchain will play an expanding role in science – and knowledge creation more generally. A list of almost all Blockchain For Science projects can be found here.

 

Blockchain for Science: Part One – A Primer

7 June 2018

Joris van Rossum, Director of Special Projects at Digital Science, shares his views on the ways blockchain could be a game changer in the ecosystem of scholarly publishing.
Blockchain, the technology behind Bitcoin, offers a peer-to-peer network for trust that can potentially disintermediate traditional brokering authorities like banks, notaries – and perhaps even publishers. Copyright Clearance Center (CCC) and the International Council for Scientific and Technical Information (ICSTI) hosted a webinar led by industry experts to investigate what opportunities blockchain has to offer in the scholarly publishing world.

Panelist Joris van Rossum, Director of Special Projects at Digital Science, recently authored a research report investigating the new possibilities of blockchain. Below, Joris shares his own views on the ways blockchain could be a game changer in the ecosystem of scholarly publishing.

Scholarly Communications: A Challenging Landscape Ripe for Change

Within the academic publishing ecosystem, there are a number of fundamental challenges with which all stakeholders wrestle in some capacity. The deficient state of research reproducibility impacts publishers, authors, funders, and institutions. Integral to the core values of scholarship and the efficiency of acquiring knowledge, reproducibility is central to rigorous scholarly communication, and yet current practices, methods, and models hinder it significantly.

In a similar way, the community also suffers from poor transparency into the peer review process, along with a lack of recognition for the fundamental and important work done by reviewers. Metrics for evaluating research and researchers, largely directed by the original constraints of print publishing, are also limited and outdated.

In a more macroscopic capacity, the industry as a whole is experiencing a commercial crisis of sorts, having yet to hit upon a business model that is sustainable for all parties well into the future.

A Cryptocurrency for Science

So, what is blockchain and how might it prove useful for scholarly communications? The most well-known application of blockchain is, of course, cryptocurrencies like Bitcoin – digital assets designed to work as a medium of exchange that use encryption to secure transactions, to control the creation of additional units, and to verify asset transfer.

What if we were to leverage blockchain to create a digital currency specifically for science? How might we use it? Some new organizations, such as Scienceroot, Pluto, and Einsteinium, envision a future in which the academic publishing ecosystem is driven by a closed token-based economy. Publishers, for example, might choose to grant all contributing peer reviewers digital ‘tokens,’ which researchers could then redeem for services, content, or even funding, bringing value and recognition to an exchange that is vital to scholarly communications yet currently asymmetrical.

From Information to Value

Another key feature of blockchain technology is that it excels at establishing ownership and preventing duplication – functions just as pertinent to scholarly communications as to banking. In the area of digital rights management (DRM), for example, blockchain combined with smart contracts is well positioned to automate rights and permissions management, including the payment of royalties.
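
As a rough sketch of the kind of logic such a smart contract might encode – written as ordinary Python for readability, not as actual on-chain contract code – consider automated royalty splitting on each licensed use; all parties and rates are hypothetical.

```python
# Hypothetical royalty split, expressed as shares of each license fee.
SPLITS = {"publisher": 0.50, "author": 0.35, "rights_broker": 0.15}

balances = {party: 0.0 for party in SPLITS}

def license_use(fee_cents):
    """On each licensed use, split the fee and credit every party.

    On a real blockchain this would run as a smart contract: the transfer,
    the split, and the resulting balances would all be recorded on the
    shared ledger, with no manual royalty accounting.
    """
    for party, share in SPLITS.items():
        balances[party] += fee_cents * share

license_use(500)   # e.g., a single-article license at $5.00
license_use(500)
print(balances)    # {'publisher': 500.0, 'author': 350.0, 'rights_broker': 150.0}
```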

In this same vein, blockchain also opens the way for new business models apart from subscriptions, tokens, and open access, by making direct micropayments between two parties very easy. This new reality might look something like researchers paying small fees directly to publishers for each research article they download.

The Promise of a Single Science Repository

At a more fundamental level, blockchain is about data storage – but a very special variety. Unlike many other mechanisms we have today, blockchain is decentralized and distributed, meaning that no one particular entity owns or controls it. Instead of living on a server in your office, or in a system maintained in the cloud, data is divided into small pieces and scattered over a vast network. Hacking becomes nearly impossible, because there is no single point of entry. Data held in blockchain is also immutable and transparent while simultaneously remaining pseudonymous – a perfect foundation for a singular scientific data store. A data trail of research, from the point of submission all the way through to subsequent citation in other works, would enable the protection of IP and the assignment of credit, support the development of more sophisticated research evaluation metrics, and enhance reproducibility.

Taking Action

A handful of players are already hard at work to capture the potential blockchain holds for the research process. ARTiFACTS, a new start-up, is tackling the concept of a “ledger of record for research” that traces all transactions and linkages across research artifacts, published or pre-published. Similarly, the Peer Review Blockchain Initiative, a collaborative effort between Springer Nature, Aries Editorial Manager, Katalysis, and ORCID, aims to explore practical solutions that leverage the distributed-registry and smart-contract elements of blockchain technologies. Later phases aim to establish a consortium of organizations committed to working together to solve scholarly communications challenges that center on peer review.

Join CCC in Chicago at SSP’s 40th Annual Meeting

24 May 2018

With topics ranging from metadata to OA to computer-assisted mining in scholarly publishing, the CCC team picks their favorite sessions at this year’s 40th SSP Annual Meeting in Chicago.
SSP’s 40th Annual Meeting, one of the premier forums for discussion amongst scholarly publishers, librarians, and academics, is right around the corner. This year’s theme, “Scholarly Publishing at the Crossroads: What’s working, what’s holding us back, where do we go from here?” highlights both the uncertain nature of our industry’s future and the great opportunities that lie ahead for us.

You can find CCC at Booth #211, and catch our photo booth at the 40th Anniversary Celebration at the Navy Pier. My colleagues and I will be at the show and wanted to share some of our “can’t miss” sessions at this year’s conference:

Jen Goodrich, Director of Product Management

Session 4D – Making Metadata Work for Everyone: A Functional View of Metadata in the Scholarly Supply Chain (Thursday 31 May, 4:45PM)

My first session choice is an expert panel, led by Marianne Calilhanna from Cenveo Publisher Services, about the entire lifecycle of metadata throughout the publishing workflow. This topic couldn’t be more timely or relevant, as it’s becoming increasingly clear that scholarly publishing can only be as good as our data. I’m looking forward to hearing a detailed analysis of how metadata flows—and sometimes gets caught—during the publishing workflow.

Sponsored Session: Diversity & Inclusion (Wednesday, May 30, 1:30PM)

My second pick is a sponsored session, moderated by my wonderful colleague, Rebecca Mcleod. She’ll be leading a very important discussion about the culture of the scholarly publishing community—specifically around efforts to create a more diverse and inclusive environment that welcomes people of all backgrounds. I’m really looking forward to this meaningful discussion and to hearing the panel’s thoughts on ways we can improve and grow together as a community.

Kurt Heisler, Sales Director

Plenary: Previews Session (Friday 1 June, 11:00AM)

The Previews Session is a roundup of the industry’s newest and most noteworthy products, platforms and content. I’m really looking forward to this one and think it’ll be a great synopsis of the most important recent developments in scholarly publishing; a definite “must-attend” on my calendar.

Session 2A – How Do We Move the Goal of Open Access from Concept to Reality? (Thursday 31 May, 2:00PM)

Moderated by ALPSP’s Audrey McCulloch, this session promises to be an informed and pragmatic analysis of the state of OA, including a rundown of some of the biggest challenges stakeholders are facing today. As the scholarly publishing industry begins to search for and uncover ways we can streamline the research workflow, I’m really looking forward to hearing the speakers offer their takes on ways we can improve.

Chuck Hemenway, Sales Director

Virtual Meeting Session 5A: Funders as Publishers—What does this mean for traditional publishers and the scholarly publishing industry as a whole…? (Friday 1 June, 11:00AM)

My first session pick, moderated by Sheridan PubFactory’s Tom Beyer, will take a look at the rise of publisher-funders like Wellcome Trust. These first-of-their-kind organizations are still finding their place within the market, so I’m keen to hear the industry experts on this ticket offer their perspectives on how publisher-funders might find their place within—or perhaps disrupt—the scholarly publishing market.

Virtual Meeting Session 1D: The Gift That Keeps on Giving: Metadata & Persistent Identifiers Through the Research & Publication Cycle (Thursday 31 May, 10:30AM)

My second pick—and the session I’m most excited to attend—is this panel, led by Ringgold’s Christine Orr, about metadata throughout the scholarly lifecycle. It’s becoming increasingly clear that we’re simply not doing enough with our metadata and that we’re missing opportunities to collect valuable information that would make the research workflow more seamless for everyone. I’m really looking forward to hearing what these industry heavyweights have to say about our current state and how we, as a community, can improve.

Darren Gillgrass, Business Development Director

Session 3F: (Don’t) Rage Against The Machine

My first session pick promises to be a forward-thinking discussion about why—and how—we should better incorporate computer-assisted mining activities into the scholarly, academic and research library communities. Moderated by DMedia’s David Myers, the panel’s experts are well-equipped to make the case for utilizing technology to better facilitate scientific progress. Looking forward to hearing their perspectives on how we can ensure the scholarly publishing community keeps pace with technology and benefits from its advances.

Session 2D: Unlimited Data Plans? Data Publication Charges (DPCs), DPC Sponsors, Data Availability Statements, and Licensing Options (Thursday 31 May, 2:00PM)

My next pick is a session about lesser-known article fees: data publication charges—or DPCs. Moderated by Anna Jester from eJournal Press, this session features four organizations which currently either require authors to deposit data or support authors in complying with data mandates. These data experts will explore what DPCs mean to scholarly publishing, from operational realities, to licensing, and beyond.

Which sessions are you looking forward to attending? Tell us in the comments section!

We hope to see you in Chicago. Follow along on social media with #SSP2018.

Handle with Care: Metadata in Scholarly Publishing

17 May 2018

Industry experts discuss the need for improved handling of crucial metadata throughout the scholarly workflow.
It’s readily apparent that metadata is an essential part of scholarly publishing. So why do we let so much of this treasured commodity slip through our fingers over the course of the publication process?

Each portion of the publication lifecycle requires important metadata, but not all of this information is carried all the way through the workflow. Instead, much of it remains in the isolated silos in which it’s collected. Inera’s CEO Bruce Rosenblum notes, “There’s just form after form after form of metadata collected [in submission systems] and it’s amazing how little of that makes it through to the final XML or beyond.”

For example, did you know that ORCID IDs (i.e. author IDs) often don’t make it out of the submission system? And that when publishers produce XML from manuscripts, Ringgold IDs collected at submission for author affiliations are often lost, effectively expunging hugely important data from publisher records?

But the lack of synchronization across publication phases—and the consequent loss of this important metadata—persists. Ringgold’s North American Sales Director, Christine Orr, comments, “It negatively impacts all kinds of things downstream, and results in a lack of discoverability, a lack of interoperability with other systems, and the inability to really, truly analyze your author base.” And it makes the publication workflow rife with inaccuracies. Bruce Rosenblum notes, “If it’s not automatically integrated into the workflow, then it’s a much more manual process, and hence a potentially inaccurate process.”

This information matters to both publishers and funders. Having unbridled access to the complete set of metadata collected throughout the publication lifecycle would mean infinitely better information about not only authors but also grant appropriation. It would enable better business analysis by publishers and funders alike, and would help all stakeholders identify trends in areas like open access, measure the impact of funding, and make more informed decisions. Rosenblum notes, “Publishers need to understand there’s a huge value in integrated metadata. And by integrated, I mean that it’s shareable across systems.”

So what are we—the scholarly publishing community—waiting for? We need to begin by handling our existing metadata with care. And we need to invest in building out metadata-handling processes—holistically and systematically—within our own organizations to prepare for additional standards on the horizon. Finally, we need commitment from stakeholders across the scholarly publishing industry to use the standard identifiers that are being lost most often: namely, grant IDs, funder names, and author and co-author affiliation IDs.

Let’s continue the conversation at this year’s SSP Meeting in Chicago. Join me and fellow industry experts (listed below) as we analyze the research workflow, identify gaps, and discuss pragmatic ways we can work together to make the publication workflow more seamless and beneficial for all stakeholders.

Hope to see you in Chicago.

SSP Session Information:

Session 1D
The Gift That Keeps on Giving: Metadata & Persistent Identifiers Through the Research and Publication Cycle

Thursday, May 31 at 10:30AM
Virtual Session

Christine Orr, Ringgold
Bruce Rosenblum, Inera
Sarah Whalen, AAAS
Mary Seligy, Canadian Science Publishing
Howard Ratner, Chorus
Jennifer Goodrich, Copyright Clearance Center

Metadata 2020 Update: Project Groups Underway

10 May 2018

This year, Metadata 2020 is focused on gathering information and use cases that will inform the final recommendations. CCC team members share updates on the progress so far.
Increasingly, we are asking metadata to do more than ever before. In the digital age, there is a growing view that content which cannot be discovered, linked, or acquired electronically may as well not exist. Demands are increasing for content to become more interoperable, discoverable, and machine readable, and we face a parallel challenge: managing all aspects of the underlying metadata across content creators, aggregators, and consumers.

Metadata 2020 is a collaboration that advocates richer, connected, and reusable open metadata for all research outputs, which will advance scholarly pursuits for the benefit of society.

The Metadata 2020 initiative kicked off in 2017 as a set of industry communities discussing common challenges, and while its name implies that it’s a three-year project, the outcome of its efforts won’t stop there. Working towards a shared vocabulary, a set of best practices, and awareness for the greater good, this multifaceted effort is designed to facilitate communication between disparate communities. Because our scope is broad, the findings of Metadata 2020 are less about being prescriptive and more about bridging gaps in understanding, technology, and workflows that impede research, publishing, or the re-use of content.

Last year’s community groups identified six key challenges to focus on. The 2018 project groups span the lifecycle of metadata: research, metadata elements and their definitions, understanding incentives for improving metadata, and best practices each group can follow to support the larger ecosystem. Overall, each project team shares a common overarching goal: educating people on why it’s important to care about and invest in rich metadata.

CCC’s services exist at the crossroads of numerous metadata uses, including content management, discovery, rights licensing, text and data mining, open access, and content delivery. This broad experience allows us to bring a unique viewpoint to the Metadata 2020 initiative, and several of our staff members are participating in Metadata 2020’s project groups during 2018.

Each project group varies in size from a few people to two dozen and includes volunteers from across the industry with varying backgrounds, expertise and motivations.

Highlights from CCC’s involvement in Metadata 2020’s project groups include:

Group 3: Defining the Terms We Use About Metadata

Elizabeth Wolf, Manager, Data Quality, Data Operations

I have always been interested in the intersection between perspectives. In the past, I’ve been involved in integrated projects where one team uses a term and another team assumes a completely different meaning, causing misaligned features or requirements, missed hand-offs, and delays. The more we work in the wide world of cross-functional teams and release trains, supporting a range of customers across many disciplines, the more critical it is that we recognize and address these challenges.

While the Defining the Terms group is closely aligned with others, our mission is to come up with clarifying terminology so that we can have more meaningful global discussions. To understand what should be delivered and why anyone should care, we need a common vocabulary. We are looking to facilitate communication about metadata within and between communities. Our 16 group members represent Service Providers/Platform & Tools, Publishers, Librarians, and Researchers.

At this point, we are surveying different user groups to assess what people talk about when they talk about metadata. We think our contribution is to disambiguate and illustrate what terms mean, independent of implementation. Our anticipated outcome is a glossary, which will be released along with Group 2’s mapping project.

Group 4: Incentives for Improving Metadata Quality

John Brucker, Metadata Librarian, Data Operations

As a metadata librarian, this project appealed to me because I think it can help address some of the metadata inconsistency issues in our industry by helping the community understand the importance of metadata quality. I believe the community needs to commit resources towards creating and maintaining good metadata.

The mission of our group is to highlight downstream applications and the value of metadata for all parts of the community by telling real stories as evidence of how better metadata will support their goals.

Through my role at CCC, I can see that the quality of metadata we receive from our publishers can vary greatly. This is especially true for publication types other than books or journals, such as reports, websites, and standards.

The way I see it, this group will make its impact by educating the industry about why it should care about metadata. Examples of this would be use cases where high-quality metadata positively impacts revenue, discoverability, and user experience.

Group 6: Metadata Evaluation and Guidance

Stephen Howe, Product Manager, Platform Services, Product

I was immediately drawn to this project because it aligns directly to what CCC is doing today and what I am doing at CCC. We just implemented a new works management system to help us improve the quality of our data. One of our biggest challenges is understanding exactly how to measure the quality of works’ metadata and to help our data source partners understand and measure the quality of the data that they send us.

The stated mission for this project is “to identify and compare existing metadata evaluation tools and mechanisms for connecting the results of those evaluations to clear, cross-community guidance.” To state that in my own words, the point of this group is to define a common approach or toolset with which anyone can measure and report on the quality of metadata. Quality here is defined as completeness, accuracy, and consistency.
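
As a small illustration of what measurable quality could look like in practice – one possible approach, not the group’s actual toolset – here is a completeness score over bibliographic records, with a hypothetical list of required fields.

```python
# Hypothetical required fields for a journal-article metadata record.
REQUIRED = ["title", "doi", "authors", "orcid_ids", "funder", "license"]

def completeness(record):
    """Return the fraction of required fields that are present and non-empty."""
    present = sum(1 for field in REQUIRED if record.get(field))
    return present / len(REQUIRED)

records = [
    {"title": "On Metadata", "doi": "10.1234/x1", "authors": ["J. Doe"],
     "orcid_ids": ["0000-0002-1825-0097"], "funder": "NSF", "license": "CC-BY"},
    {"title": "Untracked Affiliations", "doi": "10.1234/x2", "authors": ["A. Roe"]},
]

for rec in records:
    print(f"{rec['doi']}: {completeness(rec):.0%} complete")
# 10.1234/x1: 100% complete
# 10.1234/x2: 50% complete
```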

If we are successful, we will have a better industry understanding of how to evaluate the quality of metadata, and perhaps even a shared methodology and toolset with which to measure it.

 

Check back in November 2018 for the next update from CCC’s members of the Metadata 2020 team, reporting on the completion of the project groups.

Join CCC at the STM U.S. Conference 2018

5 April 2018
Join CCC and Ixxus in Philadelphia for the STM U.S. Conference 2018 from April 24-26, where publishers and other stakeholders gather to collaboratively answer the question, “What can we do better, together?”

Catch us at the following sessions:

The future of access, part 1: The platform play and seamless content syndication

April 25, 2018 at 3:15    
Moderated by Roger Schonfeld‪ (Ithaka S+R)‪
Participants: Gaby Appleton (Mendeley); Yann Mahé (MyScienceWork); Rob McGrath (Readcube); Roy Kaufman (Copyright Clearance Center)

The fate of the music business looms over STM publishers like darkening storm clouds. Content providers wonder: who will be our Spotify? Where will users go to get a legal, seamless, aggregated search and discovery experience, and what sort of sustainable business models will emerge?

Mendeley and Readcube propose syndicating content and brokering institutional access directly in their researcher productivity tools, reporting usage back to publishers in support of existing business models (Distributed Usage Logging). Search engines like Google Scholar and Dimensions are serving up content directly now, expanding on their traditional role of referring traffic to publishers – and using new services like MyScienceWork to fulfill a user’s requested article with legal, freely available versions online, even if the user doesn’t have access to the version of record. What is the future of the publisher’s own platform in this scenario? How will these new efforts to create seamless access impact traditional aggregators like EBSCO, ProQuest, and the document delivery market (CCC)? And most importantly, how will libraries be brought along in all of this?

 

Round Table: How will STM Tech Trends 2022 affect YOUR business?

April 26, 2018 at 9:30
Moderated by Chris Kenneally, Copyright Clearance Center
Participants: IJsbrand Jan Aalbersberg (Elsevier); Gerry Grenier (IEEE); Phill Jones (Digital Science); Stacy Malyil (Wolters Kluwer)

In a round table discussion moderated by Chris Kenneally (CCC), four members of STM’s Future Lab Forum will express their views on how the tech trends of 2022 will start impacting our publishing business now. Come and listen to be prepared for the future.

 

More “must attend” session picks:

Interactive forum discussion: digital ethics and data literacy

April 26, 2018 at 11:00
Moderated by Kent Anderson (Redlink)
Participants: Susan E. McGregor (Columbia Journalism School); Patrick Vinck (Harvard University)

Are algorithms and social media outsmarting us, surveilling us, feeding us fake facts and alternative news, defining our views and opinions? Kent Anderson (Redlink) will engage in an interactive discussion on stage with thought leaders in digital integrity on topics such as ethics of algorithms, data literacy, user interface design, technology deployments, and current practice and policies. A very interactive session – so we expect you and the rest of the audience to chip in.

The Future of Access, part 2: RA21, Resource access in the 21st century

April 26, 2018 at 3:45
Chaired by Julia Wallace (RA21) and Heather Flanagan (RA21)

RA21 is a joint project by STM and NISO to drastically improve access to content, especially for mobile and off campus use. Access to scholarly and academic content should be as easy as logging in on Facebook and Google (but with stronger support for user privacy).

In its first year, the RA21 project, in which over 50 organisations collaborate, gained enormous traction among libraries, vendors, federation operators, ID management organisations and of course publishers. The three co-chairs of the project, Chris Shillum (Elsevier), Ralph Youngen (ACS) and Meltem Dincer (Wiley) will update you on the initial results of the pilots in academic and corporate environments and discuss possibilities for applying this information to your services. This session includes an interactive panel on frequently asked questions.

Using Data Analytics to Drive Strategy for Corporate Customers

13 July 2017

Publishers have a vague sense that there’s opportunity in the corporate market, if only they could identify how to reach it.
Let’s explore the question of how publishers might use analytics to better understand possibly the most mysterious customer group of them all: corporate customers. Publishers often suffer from a vague sense that there’s a land of opportunity in the corporate market, if only they could identify where it is and how to reach it. Meanwhile, their own direct sales to corporate customers, particularly Big Pharma, keep getting smaller and smaller as the industry consolidates, research budgets shrink, and corporate librarian roles are eliminated, replaced by purchasing departments that don’t really understand the value of a journal subscription in contrast to a carton of paper towels. Outside of the pharmaceutical industry, publishers don’t even know to whom they should be speaking. The anonymous, faceless individuals working on secretive projects in corporate R&D must need our content…right? Might analytics help us pull back the veil?

It’s common for corporate buyers, whether researchers or librarians or purchasing departments, to note that they can only justify paying for exactly what they need. It’s potentially easier for them to spend $10,000 on article-level purchases of only must-have articles than to spend $5,000 on a journal subscription where only a small percentage of the articles ever get downloaded. Think of it as micro ROI. For this reason, the traditional publisher license deal and title-level subscription models, both designed to meet the needs of academic libraries, just aren’t going to work for corporate customers much of the time. Don’t try to change that. Just figure out how your organization might do a better job providing the content corporate customers need, in the format they need, and via a sales model that works for them.
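
A quick back-of-the-envelope calculation shows why $10,000 of article-level purchases can be easier to justify than a $5,000 subscription; all figures below are hypothetical.

```python
# Hypothetical: a corporate team needs 200 specific articles this year,
# scattered across many journals.
needed = 200

# Option A: a $5,000 subscription to one relevant journal, which happens
# to contain only 10 of the needed articles (the rest go unread).
sub_cost, sub_hits = 5000, 10

# Option B: article-level purchase of every must-have item.
per_article = 50
buy_cost = needed * per_article  # $10,000 in total

print("subscription: $%d per needed article, %d needs unmet" % (sub_cost / sub_hits, needed - sub_hits))
print("per-article:  $%d per needed article, 0 needs unmet" % (buy_cost / needed))
# subscription: $500 per needed article, 190 needs unmet
# per-article:  $50 per needed article, 0 needs unmet
```

Every dollar of Option B maps to an article someone actually needed – the micro ROI that corporate buyers can defend to a purchasing department.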

Related Reading: Using Data Analytics to Drive Strategy in the Research Space

Again, let’s start by considering some of the decisions your organization may need to make in order to add more value for corporate customers and to drive sales:

  • What segments of the corporate market should you prioritize?
  • Should you attempt to increase direct sales to corporate customers, or is it better to work through intermediaries?
  • If you can sell directly to some segments, what should your sales model be? Do you need to hire additional sales people who specialize in those market segments?
  • Should you increase your publishing volume in certain areas of applied science in order to improve your value to corporate customers?
  • If so, are there key researchers in corporate R&D who should be involved, or with whom you should at least consult?
  • Are there other products or services you might provide to corporate R&D that are not traditional journals or books?

These are all big decisions, and analytics can help you answer many, but certainly not all, of the questions you’ll need to answer to make these decisions. Thinking broadly, you’ll need to determine:

  • What is the estimated overall market size and growth rate, not just for your organization but across the industry?
  • In what market segments are you strong, in terms of subscription sales, article sales, usage, and denials? Where are you weak?
  • Are there specific geographic areas where your sales and usage are particularly strong?
  • What are the top corporations using your content, what market segment are they in, and what other corporations in that segment are you not reaching?
  • What type of content is being utilized most frequently by corporate customers, and how should that knowledge impact your editorial decision-making?
  • How are corporate researchers discovering your content, and what does that tell us about how you might reach more of them?

When exploring markets that are less familiar, analytics can help you move beyond your initial state of ignorance and get you to a place where you’ve at least crossed some options off the list and made others a top priority. But your research shouldn’t stop there. With your numbers at your side, it’s essential that you get out there and talk to some of the key stakeholders at the corporations of interest. As much as the numbers might tell you, the people behind the numbers can illuminate even more.
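
Several of the questions above reduce to simple aggregations once usage data is in hand. Identifying the top corporations using your content, for instance, might look like the following pandas sketch, assuming a hypothetical CSV of usage events with account, segment, and downloads columns.

```python
import pandas as pd

# Hypothetical export of COUNTER-style usage, one row per account per month.
usage = pd.read_csv("corporate_usage.csv")  # columns: account, segment, downloads

top_accounts = (
    usage.groupby(["segment", "account"], as_index=False)["downloads"].sum()
         .sort_values("downloads", ascending=False)
)

# Top 10 corporate accounts overall, and total demand by market segment.
print(top_accounts.head(10))
print(usage.groupby("segment")["downloads"].sum().sort_values(ascending=False))
```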

About the Series

Analytics – everybody wants some, everybody agrees they’re incredibly important, but many STM publishing organizations just aren’t sure how to use them to positive effect. Looking backwards to see what happened in the past can be really interesting (or really boring!), but what does that tell us about what we should do right now, or what we should plan to do in a year? This series of blog posts attempts to provide STM publishers with some guidance on how best to use analytics – not simply to report on what happened, but to guide decision-making and drive impactful actions to achieve commercial and strategic advantage. To do that, we’ll put the focus exactly where it always should be: on our customers. Specifically, we’ll explore how to use analytics to better meet the needs of librarians, researchers, and corporate customers.

Using Data Analytics to Drive Strategy in the Research Space

6 July 2017

Researchers are valuable customers as decision makers, and the content they produce is the lifeblood of scholarly publishing.
Researchers, of course, are not only the end users of scholarly content and publisher platforms. They’re also editors-in-chief, editorial board members, peer reviewers and, at least as importantly, authors of journal articles and/or books. The fact that researchers typically receive no compensation for most of these important contributions means that publishers should feel a particular obligation to understand their needs and motivations via analytics, not only because it’s the right thing to do but also because those publishers that treat these important customers better will almost certainly enjoy a larger share of the best content than they would otherwise.

What do we know about researcher motivations, and what does that tell us about the types of decisions a publisher may need to make and, therefore, the analytical approaches that can inform that decision-making? Most researchers care very much about making a significant contribution to their field of study. In doing so, they want to be recognized by their community, not only for their research but also for their efforts as editors, peer reviewers and other roles that contribute to the research ecosystem. Finally, most researchers have the basic need to advance professionally, whether that means getting a promotion, achieving tenure, or receiving some other kind of compensation for their work.

Related Reading: Using Data Analytics to Drive Strategy in the Academic STM Librarian Space

Analytics are critical not only to editorial decision-making, but also to decisions informing overall organizational strategy and policies, marketing strategy and tactics, and technology development. As always, before determining the questions you want to answer via analytics, you should first articulate the decisions, both strategic and tactical, that your publishing organization needs to make. These might include:

  • Does a given journal need a new editor-in-chief?
  • What high-growth markets should you prioritize, and what actions can you take to expand your author base there?
  • Should you start a new gold Open Access journal?
  • Whom should you approach to sit on the editorial board of a new journal?
  • Should you adjust your OA APCs?
  • What actions can you take to increase your share of the best submissions that might otherwise go to a competing journal?

If you focus on the decision around actions to take to increase your share of the best submissions, some questions you might answer with analytics are:

  • Who are the most “valuable” authors in the given field? (Value here would be determined by your organization’s strategy and priorities. It could mean driving usage or driving citations, and it could even mean driving social media attention and altmetrics.)
  • How should you segment your potential author base?
  • What are the specific priorities and pain points of the key author segments?
  • What marketing approach, in terms of both medium and message, has the greatest impact with the key author segments?
  • Would improvements to your submission and editorial management systems increase the volume of quality submissions?
  • If you were to take x% of rejected manuscripts from your top-tier journal and entice the authors to publish them in a lower-tier Gold OA journal, how might that affect submissions, OA revenue, and Impact Factor? Do authors in the field have a demonstrated openness to this?

The potential data sources for such analytical questions are numerous. You might need any combination of:

  1. Internal, proprietary data, such as usage data, submission data or OA sales data
  2. Third-party datasets, such as the Web of Science, Scopus, Altmetric.com, etc., or data provided by an intermediary partner or vendor
  3. Data produced by primary market research that your organization or a vendor undertakes

As mentioned in our post on academic librarians, many analyses can be handled in-house using a basic tool like Excel. The bigger challenge might be getting your hands on all of the data you need and ensuring that data is clean. The increasingly widespread adoption of ORCID certainly helps increase the likelihood that researcher data from one source can be matched with researcher data from another source, but expect to invest significant time in data cleansing of one sort or another.
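
As a small example of the matching work involved – a sketch with hypothetical file and column names – ORCID identifiers make it straightforward to join internal submission records to a third-party dataset once the IDs are normalized.

```python
import pandas as pd

def normalize_orcid(value):
    """Normalize messy ORCID entries: strip URL prefixes and whitespace."""
    if pd.isna(value):
        return None
    return (str(value).strip()
            .replace("https://orcid.org/", "")
            .replace("http://orcid.org/", "")
            .upper())

# Hypothetical inputs: internal submission data and a third-party dataset.
submissions = pd.read_csv("submissions.csv")     # columns: manuscript_id, author_orcid, decision
citations = pd.read_csv("external_metrics.csv")  # columns: orcid, citations, altmetric_score

submissions["orcid"] = submissions["author_orcid"].map(normalize_orcid)
citations["orcid"] = citations["orcid"].map(normalize_orcid)

# Join the two sources on the shared identifier; unmatched authors surface
# as NaN rows, a useful measure of how much cleansing work remains.
merged = submissions.merge(citations, on="orcid", how="left")
print("match rate: %.0f%%" % (100 * merged["citations"].notna().mean()))
```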

Finally, it’s particularly important to consider whether your organization needs a KPI, or at least some operational metrics, related to researchers. Researchers and the content they produce are the lifeblood of any scholarly publisher. On any given day, there are probably more decisions being made by people in editorial roles than in any other part of the organization. Might embedding analytics into the day-to-day processes of editorial colleagues help to drive the kinds of actions that will support the successful achievement of your strategy?

