This article originally appeared in The Scholarly Kitchen.
Methods of Policy Influence
There are two major ways in which a government such as that of the US can influence policy. The first is the most obvious: through laws, regulations, and executive orders, a government can establish the ground rules for markets at large.
There is another way, however, that is often overlooked. With some $6.8 trillion in federal spending, the US government can move markets simply through its buying power as a purchaser of goods and services. Buying power creates de facto norms.
OSTP RFI on Artificial Intelligence
Many of us are aware of, and participated in, the recent Request for Information on the Development of an Artificial Intelligence (AI) Action Plan issued on behalf of the US Office of Science and Technology Policy (OSTP). I, along with 8,754 others, submitted a response. CCC’s response to that RFI was, essentially, “respect intellectual property, and recognize the importance of transparency.” On the latter, we stated:
“CCC’s clients include the largest US-based companies in fields as diverse as food, fuel, pharmaceuticals, finance, engineering, and aerospace. Through our work with these businesses, we know that high stakes AI applications require transparency of input. This is just another version of responsible supply chain management. The US Government should not risk the security and health of its citizens by using AI systems developed with incomplete documentation….”
OMB and the Government as Consumer
We are expecting the AI Action Plan to be issued over the summer. That will be the “official” policy document. We may, however, glean some of the administration’s views by looking at a recently issued memo from the Office of Management and Budget (OMB) Director Russell Vought. The memo’s subject line is “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust.” It calls for Federal agencies to “adopt a forward-leaning and pro-innovation approach that takes advantage of this technology to help shape the future of government operations.” In other words, the US Government is establishing de facto rules through its buying power.
Even as it uses the language of urgency, the memo also sets forth guardrails around AI adoption (emphasis added):
Agencies must cut down on bureaucratic bottlenecks and redefine AI governance as an enabler of effective and safe innovation. As a step towards accelerating responsible adoption, agencies must establish clear expectations for their workforce on appropriate AI use particularly when an agency is using AI to support consequential decision-making. Agency policies must enable agency heads to delegate responsibilities and accountability for risk acceptance to appropriate officials throughout the agency, ensuring that swift action is possible with sufficient guardrails in place….
Every day, the Federal Government takes action and makes decisions that have consequential impacts on the public. If AI is used to perform such action, agencies must deploy trustworthy AI, ensuring that rapid AI innovation is not achieved at the expense of the American people or any violations of their trust.
As such, agencies are directed to implement minimum risk management practices for AI that could have significant impacts when deployed.
Further on, the document details expectations for AI systems to be used by the Government. Whatever transparency Congress requires or doesn’t require, when the US is a buyer, it must “ensure access to quality data for AI and data traceability.” I was especially pleased to see this language:
In this context, traceability refers to an agency’s ability to track and internally audit datasets used for AI, and where relevant, key metadata. A significant enabler of traceability is clear documentation that is meaningful or understandable to individual users and reflects the process for model-driven development.
The memo also indicates the importance of “documenting provenance of the data used to train, fine-tune, or operate the AI” for performance evaluation purposes.
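To make the idea concrete, here is a minimal sketch of what an agency-internal provenance record and audit check might look like. This is purely illustrative: the field names, schema, and consent flag below are my own assumptions, and the OMB memo does not prescribe any particular format or tooling.

```python
# Illustrative sketch only: a minimal provenance record for a training dataset.
# None of these field names comes from the OMB memo; they are assumptions for
# the purpose of showing what "traceability" documentation could capture.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DatasetProvenance:
    """One entry in an agency's internal audit trail for AI training data."""
    dataset_name: str
    source: str                  # where the data came from (vendor, agency system, public corpus)
    license_terms: str           # the IP/license basis for using the data
    collected_on: date           # when the data was acquired
    used_for: list[str] = field(default_factory=list)   # e.g. ["training", "fine-tuning"]
    contains_nonpublic_agency_data: bool = False


def needs_explicit_consent(records: list[DatasetProvenance]) -> list[str]:
    """Flag datasets that include nonpublic agency data and so would need
    explicit agency consent before any further model training."""
    return [r.dataset_name for r in records if r.contains_nonpublic_agency_data]


if __name__ == "__main__":
    records = [
        DatasetProvenance(
            dataset_name="benefits-claims-sample",
            source="internal case-management system",
            license_terms="government work product",
            collected_on=date(2024, 11, 1),
            used_for=["fine-tuning"],
            contains_nonpublic_agency_data=True,
        ),
    ]
    print("Needs explicit consent before further training:", needs_explicit_consent(records))
```

Even a record this simple would give an agency something to audit against the memo’s expectations of traceability and documented provenance.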
As we have seen in the EU, any legal requirement on transparency of AI “training data” is likely to trigger a lobbying battle over the specific requirements. “Understandable to individual users” may not be highly specific, but it communicates the point quite nicely.
OMB on Intellectual Property
With respect to intellectual property, a companion memo from Director Vought, entitled “Driving Efficient Acquisition of Artificial Intelligence in Government,” specifies:
“[A]gencies must have appropriate processes for addressing use of government data and include appropriate contractual terms that clearly delineate the respective ownership and IP rights of the government and the contractor. Careful consideration of respective IP licensing rights is even more important when an agency procures an AI system or service, including where agency information is used to train, finetune, and develop the AI system. Each agency must revisit, and update where necessary, its process for the treatment of data ownership and IP rights in procurements for AI systems or services….”
The document focuses on the use of US government materials in AI, continuing that contracts must “permanently prohibit the use of nonpublic inputted agency data and outputted results to further train publicly or commercially available AI algorithms, consistent with applicable law, absent explicit agency consent.”
Conclusion
In an admittedly different context, the Belgian writer Raoul Vaneigem once said that “[p]urchasing power is a license to purchase power.” The recent OMB memos provide an early look at how the current US administration may wield that power in the context of AI, with accountability, traceability, documentation, and care for IP rights front and center.