The following is an excerpt from “Shadow AI: Managing the Unseen Copyright Risks in Your Organization,” published by KMWorld. You can read the full piece here.
In today’s rapidly evolving digital landscape, organizations face a new challenge that combines technology adoption, information governance, and copyright compliance: Shadow AI. While business leaders work to develop formal AI governance frameworks, employees are adopting generative AI tools at unprecedented rates—often without official approval or oversight.
This disconnect creates significant risks, particularly regarding copyright compliance throughout the AI lifecycle. How can knowledge management professionals address these challenges? Here are concrete steps to consider:
1. Develop Comprehensive Yet Practical AI Policies
Aim for an Acceptable Use Policy (AUP) that:
- Defines AI and provides examples
- Specifies prohibited uses (e.g., inputting PII, confidential information, client data into public tools)
- Lists approved AI tools and use cases
- Establishes a process for requesting approval for new tools or uses
- Clearly states rules about data input
- Addresses ownership and use of AI-generated output
2. Update or Create a Copyright Compliance Policy
Ensure your existing copyright policy explicitly addresses AI, guiding employees on:
- Not inputting third-party copyrighted material into AI tools without the appropriate permission or license
- Understanding the copyright status of AI outputs
- Checking license rights before using third-party content with AI systems
3. Conduct Organization-wide AI Usage Audits
Understand the current landscape through:
- Anonymous surveys asking employees what AI tools they are using
- IT network monitoring to identify traffic to known public AI websites
- Expense report analysis to look for subscriptions to AI tools
- Discussions with department heads about observed AI use on their teams
4. Create Targeted Employee Training Programs
Develop mandatory training that covers:
- The company’s AI policy (what’s allowed, prohibited, requires approval)
- Specific risks (IP loss, privacy breach, confidentiality, copyright infringement)
- Guidelines on responsible prompting (avoiding sensitive data)
- Approved company tools and the process to request new ones
5. Establish AI Governance Committees
Form a cross-functional committee that includes representatives from Legal, IT, Security, HR, Risk/Compliance, and key Business Units to:
- Oversee AI policy development and updates
- Evaluate and approve AI tools and use cases
- Monitor AI usage and risks
- Stay abreast of legal and technological developments
6. Implement Proper Licensing Solutions
For organizations aiming to responsibly use valuable text-based content for AI initiatives, proactive licensing solutions are essential. CCC (Copyright Clearance Center) provides established licenses, such as the Annual Copyright License (ACL), that cover internal uses of content in AI workflows.
These licensing mechanisms are critical to managing copyright risk when incorporating high-quality, copyright-protected content into organizational AI workflows. With proper licensing, organizations can mitigate legal exposure to copyright infringement claims that might otherwise arise from unauthorized use of protected materials throughout the AI development and implementation lifecycle.
Beyond Risk Mitigation: The Competitive Advantage of Effective AI Governance
Implementing robust AI governance frameworks not only mitigates risks but also enhances a company’s competitive edge. According to Bain, companies that adopt a comprehensive, responsible approach to AI realize twice the profit from their AI efforts compared to those without such frameworks, underscoring the tangible business benefits of responsible AI practices.
Moreover, ethical AI practices build customer trust, leading to increased engagement and loyalty. The Economist Intelligence Unit highlights that responsible AI can improve a firm’s top- and bottom-line growth by increasing customer engagement, broadening revenue streams, and enhancing pricing power.
Embracing the Challenge
AI is becoming deeply integrated into business tools and workflows. Trying to prevent its adoption entirely is likely futile. Knowledge managers should focus instead on preparedness—building the policies, processes, and awareness needed to manage AI adoption proactively, channeling it towards responsible, productive, and compliant uses.
By being proactive, practical, and collaborative, knowledge management professionals can help their organizations reduce Shadow AI and harness AI’s potential responsibly. This includes staying current with evolving case law and regulations, positioning knowledge managers as business enablers rather than blockers, and anticipating further regulatory developments in privacy, bias, transparency, and safety.
The organizations that effectively manage these challenges will not only mitigate legal and reputational risks but also lay a sustainable foundation for using AI responsibly as a significant competitive advantage in the knowledge economy.
