Global Publishing Trends in 2018 – Thu, 15 Mar 2018

The three driving forces behind digital disruption in global publishing are economic shifts, market fragmentation and consumer power.

The post Global Publishing Trends in 2018 appeared first on Copyright Clearance Center.

How big is global book publishing? And why should you care? Because within the business data lie critical clues for digital transformation.

Rüdiger Wischenbart, co-founder of BookMap, a non-profit initiative on international publishing statistics, believes an understanding of world book markets can drive decisions that position your content to best advantage everywhere.


Author of the highly regarded Global eBook Report, Wischenbart shared his latest data on the world’s biggest publishing markets during a recent Copyright Clearance Center webinar. As the lines blur between books and other media, publishers must manage content assets and rights with the confidence that comes from quality data.

“When we speak here about digital, I’m not only talking about e-books. I’m talking about a digital transformation. I mean that a publishing company suddenly is driven and organized in a digitally organized value chain and work processes,” Wischenbart explains.

“Three major forces that really make the change. Number one, we have arrived – it’s not the future, it’s the present. We have arrived in a network economy for the book industry as well, and that means we have winner-take-all markets, where a few major and bigger and better-financed players are in a so much stronger position than all the little guys.

“This is reinforced by market fragmentation,” he continues. “When I have a big organization, I can play around here and experiment there and acquire a little start-up or a little imprint from somewhere else. I can really play across all those different niches and fields. I even can fix a mistake that I may have made when – just recently in the US, Michael Wolff’s Fire and Fury [has been] so much more successful than the publisher had expected. I have the tools to do this, and that is making the competition so much stronger against all the small and middle-sized publishing companies.

“Finally, a third factor [is] that publishing traditionally thought that the publishers, the authors, and their offer define the market. But in a networked economy, in a corporate economy, in all these digital pipes and channels and platforms, it’s the consumers, it’s the customers who define it.”

View the transcript here.


The OER Curriculum Development Process – Inside Emerging Solutions – Wed, 14 Mar 2018

Let’s take a look at the Open Educational Resources (OER) development process and the new solutions that have emerged.

The post The OER Curriculum Development Process – Inside Emerging Solutions appeared first on Copyright Clearance Center.

“Free” open educational resources (OER) are gaining acceptance in U.S. schools. Fueled by funding from large foundations, several OER organizations have developed core curricular programs in reading and mathematics that are now in use in varying degrees in many school districts.

OER’s movement into development of core programs is a turning point in the evolution of open resources. For many years, most OER consisted of supplemental pieces – not large core programs that form the bedrock of instruction at the K-12 level.

But development of core programs is complex and intensive. Programs must be aligned to academic standards, have a coherent scope and sequence, and meet requirements that some states and school districts impose to ensure programs are fit for purpose. The complexities increase further with the development of customized digital programs for personalized learning, which many school districts are adopting.

Despite the complexities, new technologies have been created to support development. Let’s take a look at the development process and the new solutions that have emerged.

Related Reading: Inside the Game-Changing OER Legislation for Publishers

The Development Process

At the outset, developers must establish an instructional design with learning goals, a pedagogy, and a research base. The design determines the structure, scope, and sequence of the program so that the curriculum builds over a semester, a year, and across multiple grade levels. In addition, special components are often included (and sometimes required) to address differentiated instruction for a variety of learners: English language learners, special education students, advanced learners, and others.

As editorial work on the program begins, subject area experts are engaged to provide input and expertise. Developers must align content to the prevailing academic standards in each state. Such standards may be developed by individual states or by national not-for-profit organizations, as with the Common Core State Standards in math and English/language arts and the Next Generation Science Standards. Correlations between the standards and draft content are then created.

Graphs, artwork, photos, and maps are created for use in the materials. Permissions must be secured if the media is copyrighted. In keeping with the spirit of “open,” many OER developers seek to use media that is openly licensed.

The program must be fact checked and copy edited multiple times to ensure content is accurate and objective. Independent authorities, evaluators, and master teachers must also vet the prototype program before the program moves into the final phase of production.

Core programs also include ancillary materials for students, as well as teacher editions and other support materials. Publishers provide professional development so faculty can effectively deliver the program. Assessment services are sometimes provided, too.

Keep Learning: With Flexibility, Publishers Can Turn the OER Boom to their Advantage – Here’s How

Customized Curriculum & New Solutions

Development processes have evolved considerably in recent years, as many publishers and developers have started to create digital, personalized learning systems. Personalized learning marks a departure from textbooks and monolithic courses toward flexible, customized content that meets the needs, strengths, and skills of individual learners. These systems can include OER content as well as proprietary or licensed materials. The value proposition for publishers expands from simply providing high-quality, well scoped and sequenced content (which, arguably, OER developers can provide) to providing the right material at the right time for a particular student or classroom.

Customization and personalization demand powerful content management systems that provide content discovery, collaboration, production, and dissemination.

More specifically, discovery tools allow for content tagging, sequencing, and repurposing. Authoring and editing content can be done collaboratively with instructional designers.

All types of learning objects, including videos and interactive content, can be stored and managed, along with metadata, standards, and rights and permissions. Courses can be disseminated to web services, apps, district learning management systems, and/or print production.
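The kind of repository record described above can be sketched as a simple data structure. This is a minimal illustration, not any vendor’s actual schema; every field name, identifier, and standards code below is an invented example.

```python
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    """One asset in a hypothetical OER content repository."""
    object_id: str
    media_type: str                                # e.g. "video", "interactive"
    tags: list = field(default_factory=list)       # discovery/tagging metadata
    standards: list = field(default_factory=list)  # aligned academic standards
    rights: str = "openly-licensed"                # rights & permissions status

# A course is a sequenced list of objects that can be disseminated
# to web services, apps, a district LMS, or print production.
course = [
    LearningObject("alg-001", "video", tags=["algebra", "grade-8"],
                   standards=["CCSS.MATH.8.EE.A.1"]),
    LearningObject("alg-002", "interactive", tags=["algebra", "practice"],
                   standards=["CCSS.MATH.8.EE.A.1"], rights="licensed"),
]

# Discovery: find everything aligned to a given standard.
aligned = [o.object_id for o in course if "CCSS.MATH.8.EE.A.1" in o.standards]
print(aligned)  # ['alg-001', 'alg-002']
```

Real systems add versioning, workflow states, and richer rights metadata, but the core idea is the same: content tagged well enough to be found, sequenced, and repurposed.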

In sum, this system moves development and production into a “digital first” approach that has been embraced by many educational publishers. As with so much else in publishing, OER developers will likely embrace it too, assuming funders are willing to foot the bill.

As has been noted repeatedly in discussions about OER, quality content is not free. It requires funding to create it, organize it, support it and update it on a regular basis.


Ready to learn more? Explore the custom solutions Ixxus is building for your peers in publishing.


Section 1201 Rulemaking – The Process Is Moving Along – Mon, 12 Mar 2018

‘1201’ exceptions are a topic of considerable discussion every few years. As it turns out, 2018 is one of those years.

The post Section 1201 Rulemaking – The Process Is Moving Along appeared first on Copyright Clearance Center.

Section 1201 is a curious little section of the US Copyright Act, added by the Digital Millennium Copyright Act (DMCA) of 1998. But the matter covered in that section is of great importance in our digital age and, due to its triennial rulemaking requirement, ‘1201’ exceptions are a topic of considerable discussion every few years. As it turns out, 2018 is one of those years.

For this (seventh) round, the Copyright Office is trying out a “new, streamlined procedure for the renewal of exemptions that were granted during the sixth triennial rulemaking” – in effect, giving previously granted exemptions a bit of a fast lane.

But let’s provide some context before digging in to these updates. One of the things that Congress realized at the time of passing the DMCA in 1998 – and it’s something that Congress realizes all too infrequently – is that technology was likely to develop more quickly than laws and rules could be written to manage how new technology would interact with everyone else’s rights and privileges. So Congress included within the DMCA a provision that became Section 1201 of the Copyright Act, under which the Copyright Office is instructed to update some of the ways in which technology and the law interact by undertaking a rulemaking process every three years. That means hearing evidence and then granting – or denying – specific exemptions from the limitations that the Copyright Act imposes on what users might do with works protected by copyright. The Copyright Office is now in the middle stages of the seventh cycle of Section 1201 rulemakings (its conclusions are due to be published and effective at the end of October 2018).

This time through, twelve exemptions have been requested, with dozens of organizations weighing in. The requested exemptions apply to different types of copyrighted works: audiovisual works (toward improving accessibility for specified purposes) and computer programs (including smartphone unlocking and ‘jailbreaking’, repair, and video game preservation), plus two entirely new ones, one for flight-related software and one involving an aspect of 3D printing. Some of these (particularly ‘jailbreaking’) have been frequently requested, and sometimes granted, before.

Some of these examples make immediate sense – for example, making the jailbreaking of phones licit has been written up quite a few times, and nearly everyone (other than phone companies and manufacturers, of course) favors it in principle. Some video games and other older (consumer-facing) software are at risk of becoming completely inaccessible if the ability to ‘crack’ them open for examination, and to run them on modern devices, is walled off by law. I myself favor, in general, what is sometimes called “the right to tinker”: if I buy myself (for instance) a tractor, and I have alternative software that I wish to run on it in order to repair it, I should be able to do just that – at my own risk. It seems like copyright overreach to use the almighty c-in-a-circle to make me stick with what shipped with the tool. I should be able to take my own chances with my own toy – even if it is a big toy, like a car or a tractor. The wisdom of this approach, however, is subject to debate. As a manufacturer, or a more cautious consumer, might point out, these are complicated machines, and if you don’t know who wrote the code you are installing, you might be opening yourself, and other people, up to problems you are not anticipating.

Of the 12 requested exemptions, two are new (to me at least): Class 11, Avionics, and Class 12, 3D Printing.

The proposed exemption for access to avionics data reads: “A proposed exemption for access to aircraft flight, operations, maintenance and security data captured by computer programs or firmware. The digital avionics systems lock out access to collected aircraft flight, operations, maintenance and cyber security data necessary to comply with flight safety, maintenance and cyber security regulations and to maintain the safe and secure operation of an aircraft.” I don’t know enough about the field of avionics, including what are sometimes referred to as “e-Enabled aircraft,” to weigh in on the details, but as long as it doesn’t materially affect airplane safety I can see a valid argument for opening up the data systems and outputs here. Frankly, not to do so might amount to an anti-competitive policy (as well as a possible safety issue), and one that unnecessarily impedes technological innovation. But again, I don’t claim any expertise on these matters; it’s simply interesting, to me, to see copyright issues inserted into this mix. Did Congress anticipate such applications of copyright law at all, back in 1998?

The final request for exemption, in the domain of 3D printing, reads: “A proposed exemption for owners of 3D printers to circumvent technological protection measures on firmware or software in 3D printers to run the printers’ operating systems to allow use of non-manufacturer-approved feedstock.” I’ve long been fascinated with 3D printing. “Feedstock” is a jargon term for the raw materials used by the various 3D printer technologies, the most common of which are selective laser sintering (SLS), fused deposition modeling (FDM), and stereolithography (SLA) – which is to say: plastics, metal powders, and resins. The argument here seems like a descendant of the “toner wars” from traditional (2D) printing, in which anticircumvention rules were used to prevent users from substituting aftermarket toner cartridges in laser printers. Under a 2017 Supreme Court ruling (Impression Products v. Lexmark), this sort of anti-consumer shenanigan is no longer allowed, and I’d expect a case focused on the supplies used in 3D printing to go the same way.

The next round of public hearings before the Copyright Office is coming up in mid-April. I’m looking forward to it; maybe we’ll see you there.

A version of this post originally appeared in IP Watch.


Levy vs. License: Collective Licensing in the United States – Thu, 08 Mar 2018

The model that we've developed enables rightsholders to choose whether to participate, and if so, which works to license through the system.

The post Levy vs. License: Collective Licensing in the United States appeared first on Copyright Clearance Center.

Jessica Pettitt: I’m joined here today by Vice President, Secretary and General Counsel of Copyright Clearance Center, Fred Haber.

Fred, I’m interested in knowing more about the US approach to collective licensing – a market-driven, voluntary approach.

Why is this more beneficial than a levy system, or something more compulsory, like what exists in other parts of the world?

Frederic Haber: The short answer is that our model provides for greater choice. The levy model, or another model like that, eliminates the possibility of choice on at least one side. What I mean by that is that, in a levy system, often one side or the other might want to opt out but effectively can’t. The model that we’ve developed over the years enables rights holders to choose whether to participate in the licensing system, and if so, which works to license through the system.


On the flip side, users choose whether or not to take a license based on the terms that are available. The US model is not exclusive in that if our price is too high for what it is that the user wants, for example, the user is able to go directly to the rights holder. If we’re out of line with what the market can support, then it’s possible for both rights holders and users to connect directly.

For example, the Wall Street Journal and a major bank probably have a one-to-one relationship for the use of the Wall Street Journal’s information within that bank. But the Wall Street Journal will participate with us as well, because we’re also going to issue licenses to companies that quarry rocks, or that run retail stores, or that are law firms, which might not be worth a one-to-one negotiation for the Wall Street Journal.

JP: Are there any further advantages of voluntary licensing services beyond choice?

FH: Yes, market sensitivity. Market sensitivity determines participation. If, in the long run, rights holders and users don’t both agree with the price at which we’re offering licenses, then one or the other won’t participate.

In a levy system, you’re all in or you’re all out. There really isn’t anything more to it than that.

What our system has also provided is respect for this market sensitivity. It exists in some other systems to a degree, but it doesn’t work all that well there. We offer different prices to different groups of users in the marketplace. This is based on surveys that we do, which indicate that, for example, R&D companies use far more of our science-oriented, copyrighted information than anybody else. So, the prices are higher there, than, for example, in the retail industry, where our surveys indicate very little of the stuff that we have available is used, so the price there is brought down commensurately.

For distributions to rights holders, we also have market sensitivity in that our distribution model is a compromise between a pure volume model (that is, the more that’s copied, the more money you make from us) and a pure value model (that is, the higher your prices in the marketplace already are, the more money you make from us). The classic contrast intended to explain what we are trying to do is that you can buy the New York Times on a newsstand for a dollar or so every day, but you can’t buy a high intensity research biology journal for less than $10,000 a year. There’s something that the market is saying there about the relative value of the two items, and that relative value is built into our distribution model as well.
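A toy formula can make the volume/value compromise concrete. This sketch is purely illustrative: CCC’s actual distribution model is not public, and the weighting and figures below are invented for demonstration only.

```python
def distribution_share(copies, unit_price, value_weight=0.5):
    """Toy royalty score blending a volume signal (how much is copied)
    with a value signal (the work's market price).
    value_weight=0 is a pure volume model; value_weight=1 is a pure value model."""
    return (1 - value_weight) * copies + value_weight * copies * unit_price

# Invented figures echoing the contrast above: a $1 newspaper copied
# often vs. a high-priced research journal copied far less often.
newspaper = distribution_share(copies=1000, unit_price=1.0)   # 1000.0
journal = distribution_share(copies=50, unit_price=200.0)     # 5025.0
print(newspaper, journal)
```

Even with a fraction of the copy volume, the journal’s market price pulls its share well above the newspaper’s, which is the kind of relative-value signal the distribution model is described as building in.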


Should Pharma Embrace Social Listening for PV? – Wed, 07 Mar 2018

Social listening can be a valuable tool for pharma companies and provide a rich data stream for PV, if properly managed.

The post Should Pharma Embrace Social Listening for PV? appeared first on Copyright Clearance Center.

Over the past few years, businesses of all sizes and sectors have started to monitor conversations that are relevant to their organization on social media – but it appears pharma has been slow to adopt this best practice.

By monitoring mentions on social media, better known as “social listening,” organizations can hear what customers are saying about their brand, and respond in a timely fashion. The average social media user expects a response from a brand within four hours, but firms typically keep them waiting 10 hours, according to research from Sprout Social.

The more telling part of that 10-hour figure? It assumes brands reply at all: Sprout Social reports that 89% of social media messages to brands go ignored.

In pharma, social media adoption has been slow, period. While pharma firms are gradually becoming more active on digital platforms, the notion of listening in to what customers are saying on platforms like Twitter and Facebook is still fairly novel.

In a recent webinar chaired by pharmaphorum, titled ‘The evolution of pharma social media intelligence,’ just 15% of viewers said they used social media for listening, compared to 41% for broadcasting.

The benefits of social listening are apparent: it can provide organizations with a deeper insight into their customers; help identify pain points and gaps in the market; highlight brand perception; and garner product feedback.

So, why is it that pharma has been reluctant to mine social media for insights, when the advantages are there for all to see?

The PV conundrum: Reporting adverse events based on social listening?

One concern is that pharma firms will be responsible for reporting any adverse events that they come across on social media, which could inundate pharmacovigilance teams. According to Anurag Abinashi, a social media intelligence lead and participant in the aforementioned webinar, the volume of adverse events on social media is “extremely low.”

He added that there are now tools available to pharma firms that can automate the process of monitoring, so that social listening doesn’t become a resource-intensive task.

Related Reading: Why Text Mining for PV?

New research published in the journal npj Digital Medicine also makes the case for social listening in pharma, showing how it can yield valuable drug safety information – and suggesting that adverse event mentions on social media might be more common than Abinashi’s estimate implies.

The study, from the University of Manchester, looked specifically at Twitter. The researchers drew their conclusions by counting tweets mentioning an adverse drug reaction involving glucocorticoid drugs and comparing that number with the official drug-related adverse events reported to the UK’s Yellow Card scheme, run by the Medicines and Healthcare products Regulatory Agency (MHRA).

“Using glucocorticoids as an example, we have demonstrated that Twitter can be a potentially useful, supplementary source for post-marketing pharmacovigilance,” the researchers said in their report.

Between 2012 and 2015 there were 20,210 glucocorticoid-related adverse event tweets, compared with 3,022 adverse events logged through the Yellow Card scheme during the same period. However, the study noted that the majority of the side effects reported via Twitter were “common but not serious.”
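At its simplest, this kind of social listening is a text-matching exercise: screen posts for mentions of a monitored drug alongside a reaction term, then route the hits to pharmacovigilance staff. The sketch below is a toy version; the drug names, reaction terms, and posts are all invented, and real PV tools use far more sophisticated natural-language processing.

```python
# Toy adverse-event screen: flag posts mentioning both a monitored
# drug and a reaction term. All terms and posts are invented examples.
DRUGS = {"prednisolone", "dexamethasone"}              # glucocorticoids
REACTIONS = {"insomnia", "weight gain", "mood swings"}

def flags_adverse_event(post: str) -> bool:
    """True when a post mentions a monitored drug AND a reaction term."""
    text = post.lower()
    return any(d in text for d in DRUGS) and any(r in text for r in REACTIONS)

posts = [
    "Week 2 on prednisolone and the insomnia is brutal",
    "Loving the new gym schedule!",
    "dexamethasone taper going fine so far",
]
hits = [p for p in posts if flags_adverse_event(p)]
print(len(hits))  # 1 post flagged for manual PV review
```

Keyword matching like this over-triggers on jokes and metaphor and misses misspellings, which is why the automation tools Abinashi mentions layer classifiers and human review on top of the initial screen.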

Social media is only one piece of the PV puzzle

Social listening can be a valuable tool for pharma companies and provide a rich data stream for pharmacovigilance, if properly managed. But to get a true view of the impact a brand and its products are having on customers, it is crucial that pharma companies bring other data sources into the mix.

Social listening, then, should not be considered the be-all and end-all, but a vital piece of the jigsaw when trying to assemble a complete picture of patients’ experiences and perspectives. What do you think? Should pharma be adopting social listening?

Ready to learn more? Check out how one major pharmaceutical organization is using RightFind to streamline their literature monitoring process.


Join CCC at DIA’s #MASC2018 Conference from March 19-21 – Tue, 06 Mar 2018

CCC is heading to Rancho Mirage from March 19-21 to attend DIA’s Medical Affairs and Scientific Communications Forum (MASC 2018).

The post Join CCC at DIA’s #MASC2018 Conference from March 19-21 appeared first on Copyright Clearance Center.

CCC is heading to Rancho Mirage from March 19-21 to attend DIA’s Medical Affairs and Scientific Communications Forum (MASC 2018) – and we hope we’ll see you there.

Here at CCC, we know the way organizations search, access, and obtain medical information is evolving. For medical affairs professionals, whose roles center on providing and obtaining accurate drug information, this is hugely important.

If this is intriguing to you, here are a few sessions you might want to check out at MASC 2018:

Gold! Gold! Gold from Medical Information!

Session Description: Medical information/communications (MI) departments within the pharmaceutical and biotech industry have existed for decades. These departments have also been documenting unsolicited drug information requests in electronic databases and utilizing the metrics to enhance data capture, business processes and workflows. While these efforts have served to improve the operations of the MI department, ongoing challenges in demonstrating the value and impact of the MI function remain. The database documenting the MI requests is a gold mine with untapped potential. Sharing the metrics is easy, but what do the metrics mean without context? What are some key performance indicators that can showcase the value of MI? What are insights, how are these identified, and who should be the audience of these? During this session the faculty will share their views and experiences on demonstrating the value of medical information.

Strategic Deployment of MI “Troops” and Capitalizing on New Technologies for MI Collaborations

Session Description: A dynamic offering for any medical communications professional. We invite you to come learn about FRESH and DIFFERENT ways two experienced medical information (MI) professionals are working in new ways. First, one member is embedding their MI team members across functions within their organization to deliver value and “delight customers” based on insights. Second, we change gears to hear how an Uber-like platform can be used to collaborate across drug information professionals to deliver value to customers.

One Medical Voice: Ensuring Consistent, Quality Medical Information Responses Globally

Session Description: Medical Information (MI) plays a key role in providing quality answers to unsolicited requests from healthcare providers. MI’s answers must be of the highest quality, accurate, appropriately balanced, and delivered consistently across the globe, when appropriate. This represents a significant challenge as MI departments globalize and expand their reach into alternative channels. This session will discuss ways that MI departments across three pharmaceutical companies are addressing this challenge with a comprehensive content strategy and utilizing technology to create “One Source of the Truth” for their content. The solution is only part of the story, though. This session will also uncover each company’s journey, from proof-of-concept through implementation, and discuss the best practices and lessons learned along the way.

Keynote Address: Future Capability and the World of Tomorrow

Description: In the near future, continuous and rapid technological advancements will enhance and challenge our current way of working. How do we reliably plan for the capability we will need as individuals and organizations, to adapt for success in this changing world? What are the core tenets of a data-driven world that will help us flourish in future medical affairs and scientific communications?


Stop by to say hi to CCC at Table 13!

We’ll be exhibiting throughout the conference, and we’re excited to share the ways organizations are using RightFind to accelerate the research process and gain scientific insights more quickly.

If you’re attending the show, we’d love to schedule some time to chat. Click here to set up a meeting.


Not attending? Follow along using the hashtag #MASC2018.


Join CCC and Ixxus at Tempo di Libri, Livre Paris and FILBo – Mon, 05 Mar 2018

Catch The Digital Transformation of Publishing: Challenges and Keys to Success in Italian at Tempo di Libri, French at Livre Paris and Spanish at FILBo.

The post Join CCC and Ixxus at Tempo di Libri, Livre Paris and FILBo appeared first on Copyright Clearance Center.

During March and April, professionals in Italy, France and Colombia will have the opportunity to attend the presentation The Digital Transformation of Publishing: Challenges and Keys to Success.

Where is the publishing industry in its path of digital transformation? What are the obstacles? How can data move more efficiently? Discover the answers through a survey of large international publishing houses and the innovative strategies and solutions proposed by Ixxus, a subsidiary of Copyright Clearance Center (CCC).


8 March 2018, 16:30-17:15

Tempo di Libri (Italian-language conference)

Milan, Italy

Spazio AIE

La trasformazione digitale dell’editoria: sfide e chiavi del successo (The Digital Transformation of Publishing: Challenges and Keys to Success): Where does publishing stand on its path of digital transformation? What are the obstacles? How can it advance more efficiently? Victoriano Colodrón will present a survey conducted among large international publishing houses, along with the innovative strategies and solutions proposed by Ixxus, a subsidiary of Copyright Clearance Center. #mondodigitale


19 March 2018, 12:00 – 12:30

Livre Paris (French-language conference)

Paris, France

Stand Allemagne, 1-P67

Where does the digital transformation of publishing stand? What are the obstacles? How can it move forward more quickly? Victoriano Colodrón (Copyright Clearance Center) will present the results of a survey of international publishing houses, the strategy developed by Ixxus, and innovative solutions for “discoverability”, content agility, and metadata.


19 April 2018, 12:00-12:30

Feria Internacional del Libro de Bogotá [FILBo] (Spanish-language conference)

Bogotá, Colombia

Stand Frankfurter Buchmesse, 1702A

Where does the publishing industry stand on its journey of digital transformation? What obstacles do publishers face? And how can they move forward more quickly and effectively? Victoriano Colodrón of Copyright Clearance Center (CCC) will present the results of a survey of executives at large international publishing houses. He will also discuss the strategic approach to digital transformation proposed by CCC’s subsidiary, Ixxus, and its innovative solutions for content storage and ‘agility’, metadata, discoverability, and collaboration.


Three Simple Ways to Support Your OA Ecosystem – Thu, 01 Mar 2018

With a few simple steps, you can better support an open access ecosystem: collaborate with institutions; scale APC billing processes; apply data standards.

The post Three Simple Ways to Support Your OA Ecosystem appeared first on Copyright Clearance Center.

The current approach to APC management is often highly fragmented, beset by process inefficiencies and scarce resources. Academic institutions face many of the same problems, so publishers and institutions have a significant incentive to collaborate on streamlining the process. With a few simple steps, you can better support an open access ecosystem.

Develop Relationships with Institutional Administrators

Author engagement is crucial to the success of the open access movement, but many authors need support in navigating a complex and ever-changing web of funder mandates and an equally varied set of publisher policies. They may also need help in accessing institutional funding for APCs and complying with institutional mandates and procedures.

As a result, a two-way relationship between author and publisher is transforming into a three- or four-way relationship involving the institution and potentially an external funder.

Karen Hawkins, Senior Director, Product Design at the Institute of Electrical and Electronics Engineers (IEEE), explains, “What we find is that authors are not always clear on funder requirements and the license that they want, and so we know that we need to make the process more institution-friendly.”

This means recognizing that in some cases institutional administrators themselves may need to use manuscript submission and payment systems on behalf of their authors. Publishers can also overcome some of the delays in the open access workflow by developing good working relationships with key members of staff in the library or research support office, particularly at the most research-intensive institutions.

Develop a Scalable Approach to Management and Billing of APCs

As the volume of APCs continues to rise, both institutions and publishers acknowledge that manually processing individual invoices for each article is an unsustainable model. A few institutions remain cautious about the value of aggregated billing arrangements, fearing a loss of transparency in APC pricing, but such arrangements can be invaluable in reducing the administrative burden on both universities and publishers. Some publishers are also exploring more innovative arrangements, by which agreement is reached with a national funder or a consortium of libraries to offset article-processing charges against subscription costs.

Ensuring scalability should be uppermost in the minds of all publishers as they develop new business processes to support open access.

Adopt Emerging Data Standards and Promote Interoperability

The key to helping institutions meet funder requirements lies in obtaining better quality data at an early stage from publishers.

“What we need are actual APC costs, date of payment, license type, DOI and agreed publication date,” observed Valerie McCutcheon, Research Information Manager at the University of Glasgow.
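The fields McCutcheon lists map naturally onto a simple reporting record that can be checked for completeness before it is exchanged. A minimal sketch in Python (the field names and the validation helper are illustrative, not an agreed standard):

```python
# Minimal sketch of an APC report record using the fields named above.
# Field names and the sample DOI are illustrative, not a published schema.

REQUIRED_FIELDS = {
    "apc_cost",          # actual APC cost paid
    "date_of_payment",   # ISO 8601 date string
    "license_type",      # e.g. "CC-BY"
    "doi",               # article DOI
    "publication_date",  # agreed publication date
}

def missing_fields(record: dict) -> set:
    """Return the required reporting fields absent from a record."""
    return REQUIRED_FIELDS - set(record)

record = {
    "apc_cost": 1850.00,
    "date_of_payment": "2018-02-14",
    "license_type": "CC-BY",
    "doi": "10.1000/xyz123",
    "publication_date": "2018-03-01",
}
print(missing_fields(record))  # empty set: the record is complete
```

A check like this lets a publisher catch incomplete APC data before it reaches the institution, rather than reconciling gaps after the fact.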

The adoption of standards also creates opportunities for efficiency savings for publishers themselves. Many are already integrating ORCID and FundRef into their workflows, and with the release of the National Information Standards Organization's (NISO) Recommended Practice on Access and Licensing for e-content, standards are beginning to emerge in this area. Publishers also have a vital role to play in shaping the development of new standards, for example by contributing to the work of the Consortia Advancing Standards in Research Administration Information (CASRAI).
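Identifier standards like ORCID also lend themselves to automated checks. ORCID documents that the final character of an iD is a check digit computed with the ISO 7064 MOD 11-2 algorithm; a minimal sketch of validating one (helper names are illustrative):

```python
# Minimal sketch: validate an ORCID iD's check digit.
# ORCID uses ISO 7064 MOD 11-2; the final character is a check digit
# in 0-9 or 'X' (which stands for 10). Helper names are illustrative.

def orcid_check_digit(base_digits: str) -> str:
    """Compute the check digit for the first 15 digits of an ORCID iD."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Check the ISO 7064 checksum of a hyphenated ORCID iD."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_check_digit(digits[:15]) == digits[15]

# 0000-0002-1825-0097 is a commonly cited sample iD
print(is_valid_orcid("0000-0002-1825-0097"))  # True
```

Validating identifiers at the point of entry is one of the simpler efficiency savings standards adoption makes possible.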


A version of this article originally appeared in Book Business.

Inside the Game-changing OER Legislation for Publishers Tue, 27 Feb 2018 05:27:35 +0000 OER’s next wall to climb in the K-12 space is the complex, regulated, and sometimes political process of state adoptions.


Earlier this month, we described how open educational resources (OER) are gaining acceptance in the U.S. K-12 education market. By offering “free” or low-cost academic programs, OER developers over the past several years have made inroads in some states and school districts.

Greater demand for OER programs has been driven in part by the need for new materials aligned to the Common Core State Standards. To fulfill that demand, several OER organizations began to create comprehensive programs with standards, scope and sequence. This type of development marked a significant shift from the creation of supplemental and random resources that often characterized OER for many years. In other words, OER development moved from pieces to programs, and in doing so began to increase the odds of successful adoption by school districts.

OER’s Next Wall to Climb

OER’s next wall to climb in the K-12 space is the complex, regulated, and sometimes political process of state adoptions. For the uninitiated, a state adoption is a process whereby a state department of education conducts reviews of instructional materials to determine whether they are suited for use in K-12 classrooms. Nineteen states have adoption statutes. While OER programs have been adopted in several of those states, it’s not common.

A Look at What’s Happening in Texas

However, that could soon change in the state of Texas, which is known for its textbook politics and its politically charged education debates. With a K-12 enrollment of five million students, Texas is the second largest K-12 state adoption market in the country.

In the spring of 2017, the Texas legislature passed several measures expanding the development and use of OER at the K-12 and post-secondary levels.  The enacted legislation:

  • Revises the state’s definition of OER and allows the Texas Commissioner of Education to use open licenses to encourage Texas school districts to adopt OER.
  • Doubles to $20 million the amount of state funds allocated for development of K-12 OER over the next two budget years. The funds will be used to develop materials in subject areas that make up the bulk of district purchases as well as high school STEM courses. The funding increase expands a current state project to create OER courses for use at the high school level.
  • Requires the Texas State Board of Education to include information about OER as part of state adoptions. Information about “cost savings” must be listed.
  • Restores a state program that provides grants to districts so they can develop “lending libraries” of tech equipment for students who cannot access digital materials.
  • Authorizes the development of a web portal that will have information about all state-adopted instructional materials.
  • Creates a new post-secondary program designed to support and encourage professors to transition to OER use in their classrooms.

The legislation marks a firm shift toward OER that will begin to unfold this year. Implementation is likely to take several years, and is expected to affect the 2019 adoption of English language arts programs for the high school grades.

While the Texas adoption system no longer compels school districts to purchase programs approved by the State Board of Education, receiving adoption approval from the board still carries a lot of weight. Surely, several OER organizations will seek approval.

Programs submitted for review will also have to meet a long list of Texas requirements and they will have to align with the Texas Essential Knowledge and Skills standard – not Common Core State Standards, which the Texas State Board of Education and the legislature oppose. Development of new programs is an expensive endeavor and despite the “free” moniker that always travels with the term OER, there is nothing free about the type of R&D work that will need to take place to prepare materials for adoption reviews.

And what about publishers’ programs? The new legislation won’t spell the end of publishers’ products in the Texas market. Publishers have deep experience in Texas not only in terms of development, but also an understanding of the adoption review process and its many nuances.

More importantly, the changes in Texas will likely present new opportunities for publishers if they can be nimble. OER models are rapidly evolving. New services such as content management and integration, assessment, data analytics and other solutions are needed to make open resources truly sustainable.

Next month Andrew and Jay will take a closer look at the development of instructional materials in educational publishing and the new solutions that have emerged to support digital transformations.  Don’t miss it – subscribe below to receive new posts directly in your inbox.

Join CCC in London at Researcher to Reader 2018 Thu, 22 Feb 2018 08:00:15 +0000 CCC's Jake Kelleher and Jennifer Goodrich share their “can’t-miss” sessions at Researcher to Reader, a forum for publishers and technology providers.


The international scholarly content supply chain takes center stage every February in London at the Researcher to Reader Conference, a premier forum for discussion among content authors, publishers, librarians, and technology providers that ranges from content creation, discovery, and use through to archiving and preservation.

CCC colleagues Jake Kelleher, Vice President of Business Development, and Jennifer Goodrich, Director of Product Management, share their take on “can’t-miss” sessions at this year’s conference: 


  • Workshop E – Open Access Communications (Mon 26 Feb, 10:20 AM)
    How can publishers, funders, research organizations and other stakeholders co-operate to communicate with one another and with researchers more efficiently? 

This workshop, led by front-line industry experts Valerie McCutcheon, Research Information Manager, University of Glasgow; Catriona MacCallum, Director of Open Science, Hindawi Publishing; and Liz Ferguson, Vice President, Publishing Development, Wiley, is vitally important to the scholarly publishing ecosystem in this moment, making it one of my top “can’t miss” events. The question of sustainability for OA business models is in many ways predicated on the effectiveness with which authors, institutions, funders, and publishers align, and real solutions will only come about if these parties tackle challenges and obstacles together in forums such as this. 

  • Workshop D – Metadata Lifecycles (Mon 26 Feb, 10:20 AM)
    Why should researchers and readers care about metadata quality? 

My second pick is also a workshop, led by open data authorities Ginny Hendricks, Director of Member & Community Outreach at Crossref and founder of Metadata2020, and Ross Mounce, Open Access Grants Manager at Arcadia Fund. Beyond being an operational issue for publishers, rich, connected, and reusable metadata holds the promise of improving scholarly pursuits and advancing science for researchers and readers. When genuinely backed by all stakeholders, it can facilitate easy content discovery, bridge gaps between communities, and eliminate duplication of effort and research. Researchers and readers ought to learn as much as they can about metadata and ways to support it. 


  • Constants in a Changing World (Tues 27 Feb, 9:30 AM)
    How learned societies can survive and thrive in an open future.  

My first “must-attend” event is an expert panel discussion led by founder and director of Research Consulting and R2R Chair, Rob Johnson, alongside Catherine Cotton, CEO of the Federation of European Microbiological Societies (FEMS); Sally Hardy, Chief Executive of the Regional Studies Association, and Caroline Sutton, Director of Editorial Development at Taylor & Francis. In contrast to their larger counterparts, society publishers face unique challenges in terms of adapting to and offering value in an increasingly open landscape, while simultaneously pursuing their mission-driven activities in their field. When done right, however, OA can add to, rather than subtract from, the model of traditional society journal publishing. This panel is likely to have terrific advice on how to bring that vision to fruition.  

  •  From Open Access Dream to Administrative Nightmare (Mon 26 Feb, 3:30 PM)
    The ever-increasing burden of open access policy on libraries and researchers 

As part of the “Open Access & Open Science in Institutions” stream, this presentation from Elizabeth Gadd, Research Policy Manager at Loughborough University, and Yvonne Budden, Head of Scholarly Communications at the University of Warwick, promises to be really enlightening for funders, publishers, and technology providers, as these institution-based stakeholders elaborate on their current OA pain points. It’s incredibly important to discuss, evaluate, and appreciate the outcomes (good and bad) of OA mandates now that they have been operationalized. If the open model is to be successful in the long run, the needs of all players in the scholarly publishing ecosystem must be heard and accommodated with the assistance of new solutions, technology-based or otherwise.  


Interested in meeting up with Jen or Jake at Researcher2Reader?
Send a note to and we’ll get in touch to set up an appointment!  
