What is Shadow AI?

The following is an excerpt from “Shadow AI: Managing the Unseen Copyright Risks in Your Organization,” published by KMWorld. The full piece is available on the KMWorld website.

In today’s rapidly evolving digital landscape, organizations face a new challenge that combines technology adoption, information governance, and copyright compliance. While business leaders work to develop formal AI governance frameworks, employees are adopting generative AI tools at unprecedented rates—often without official approval or oversight.

This disconnect creates significant risks, particularly regarding copyright compliance throughout the AI lifecycle. This article focuses primarily on copyright implications, but organizations should also consider the numerous other risks posed by Shadow AI, including data security breaches, confidentiality violations, privacy law compliance issues, potentially discriminatory outputs from biased AI, and regulatory violations across various sectors. Understanding these challenges is essential for knowledge managers seeking to protect their organizations’ information assets while enabling innovation.

The Shadow AI Phenomenon

Shadow AI refers to the use of artificial intelligence tools and applications by employees without the explicit knowledge, approval, or oversight of IT departments or established governance processes. This includes everything from free web-based tools like ChatGPT or Claude used for drafting emails and summarizing documents to AI coding assistants, image generators, and data analysis tools obtained outside official procurement channels.

The prevalence of unauthorized AI use is alarming. Salesforce’s “AI at Work” survey reveals that more than 55% of workplace generative AI users globally employ unapproved tools without employer consent, while 40% have deliberately used tools explicitly banned by their organizations. Even more concerning, legal departments—which should be playing a key role in compliance—are among the worst offenders: 81% of in-house legal professionals admit to using unapproved AI tools, and 83% utilize AI solutions not provided by their employers (Axiom “View from Inside” Legal Survey). Dissatisfaction with employer-provided options runs deep enough that 35% of employees pay out of pocket for the AI tools they use at work. More alarming still, 27% of employees have entered company information into non-approved GenAI tools (Writer.AI, December 2024).

This rapid, unsanctioned adoption creates risks that formal governance structures struggle to address. It’s like an iceberg: the visible portion represents sanctioned use, while the submerged mass represents all the Shadow AI usage occurring daily across the enterprise. This circumvention of official channels creates a governance blind spot that organizations can no longer afford to ignore.

Why Shadow AI Is Different

Several factors make Shadow AI particularly challenging to address compared to other technology risks:

1. Invisibility and governance bypass: Many generative AI tools are web-based, easily accessible, and require no formal procurement, IT installation, or security review. Employees can start using them immediately without leaving obvious traces in corporate systems, so organizations often have little to no visibility into which tools are being used, by whom, or for what purpose. Because these tools sidestep standard controls such as procurement approvals, security assessments, and asset management, they create blind spots for risk management and compliance efforts.

2. Unprecedented adoption speed: The adoption curve for generative AI has been incredibly steep—far steeper than previous technology shifts such as cloud computing or mobile. Governance structures simply cannot keep pace.

3. Low technical barriers: Most generative AI tools are designed for ease of use with free tiers, intuitive interfaces, and web accessibility. Anyone with internet access can start experimenting immediately without technical expertise or significant budget.

4. Competitive pressures: Employees see colleagues or competitors using AI to boost productivity and feel compelled to adopt these tools—even unapproved ones—just to keep up, sometimes with the implicit or explicit encouragement of managers focused on the speed of output rather than the process used to produce it.

Embracing the Challenge

AI is becoming deeply integrated into business tools and workflows. Trying to prevent its adoption entirely is likely futile. Knowledge managers should focus instead on preparedness—building the policies, processes, and awareness needed to manage AI adoption proactively, channeling it towards responsible, productive, and compliant uses.

Author: Roanie Levy

Roanie Levy, Licensing and Legal Advisor at CCC, combines over 20 years of intellectual property and copyright law expertise with a strong entrepreneurial and technological background. As Access Copyright's former President and CEO, Levy successfully navigated complex legal landscapes while driving innovation and growth. Her deep understanding of technology's impact on the creative industries informs her current focus on the ethical and responsible use of AI. At CCC, she supports initiatives to develop licensing frameworks that balance technological advancement with protecting creators' rights, ensuring that AI technologies are deployed transparently and fairly.