News

New Voluntary AI Commitments Between US Government and Industry: Views and Perspectives

In July, the Biden-Harris administration in the US announced a set of voluntary commitments on the use and development of AI technologies that it had negotiated with the industry’s leading developers. Some interest groups in the technology world called the move an attempt to stifle innovation and development, while others believe the commitments don’t go far enough to control the dangers posed by AI. Let’s take a closer look at the substance of these commitments and the reactions they have provoked.

US Government Voluntary Commitments Agreement on AI

The potential dangers arising from ever more powerful AI systems are an area of intense public concern. From fake news to harmful content to invasions of user privacy and more, these dangers could have devastating consequences for society. The European Union was among the first governmental bodies to take decisive steps toward regulating AI: the EU AI Act is a major piece of legislation currently being discussed by the national governments within the union.

In contrast, the US government’s approach to AI regulation has been described as haphazard, laissez-faire, or even confused. No comprehensive legislative act on AI exists in the world’s largest economy, and this has been used by various commentators and interest groups to slam the Biden-Harris administration as slow and clueless when it comes to containing the dangers of AI.

Perhaps recognising that full-scale legislation à la the EU AI Act would be a hard task right now in a country known for its industry-friendly business climate, the current US administration decided to move ahead with a set of voluntary commitments on AI.

On 21 July, the administration announced the voluntary AI commitments negotiated with Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

The commitments cover the following three overarching areas:

1. Thorough pre-testing of AI systems before deployment. This covers internal and external security testing of the systems, as well as sharing information with governments, civil society, and academia on the details of the technology to be deployed.

2. Ensuring the security of AI systems. This point includes a commitment from the companies to invest in cybersecurity safeguards. It also specifies a commitment to reporting vulnerabilities that arise in the systems.

3. Earning the public’s trust. This area covers an important commitment to ensuring that users always know when they are encountering AI-generated content. Any AI content that users encounter should be clearly identifiable, e.g., via the use of watermarks or disclosure statements. Within this area, the companies also commit to prioritising research on the risks and harms that AI technology might cause.

Reaction to Voluntary AI Commitments

As expected, reaction to these commitments has varied widely depending on whom you ask. Many industry observers argue that the commitments are toothless, pointing to their voluntary nature as the clearest sign of it.

A major privacy-focused independent think-tank, the Electronic Privacy Information Center (EPIC), slammed the commitments as “not enough”. EPIC’s Deputy Director, Caitriona Fitzgerald, points out that any meaningful regulation of AI requires a comprehensive legislative act.

On a somewhat different side of the political spectrum, a conservative research group called the American Accountability Foundation (AAF) even accused the administration of pushing far-left ideology under the guise of controlling “harm posed by AI systems”.

As for the participating companies themselves, four of them – Anthropic, Google, Microsoft, and OpenAI – formed a partnership called the Frontier Model Forum mere days after the announcement of the voluntary AI commitments. Many of the ideas contained in the voluntary AI commitments, such as ensuring the safety of AI models and collaborating with governments and civil society, have been included in the core objectives of the Frontier Model Forum.

Some industry observers accused the quartet of setting up a cartel-like closed group aimed at monopolising the market.

ZENPULSAR’s View on Voluntary AI Commitments

The voluntary AI commitments announced by the White House are of primary interest to ZENPULSAR as a company working on advanced NLP algorithms. We are also a member of the Content Authenticity Initiative (CAI), an association that brings together some of the leading technology and media brands, such as Adobe, Microsoft, BBC, and Intel, in a pioneering effort to fight AI-generated fakes and plagiarism.

As a CAI member, we particularly welcome the part of the voluntary commitments that aims to make the AI-generated nature of any content transparent to the end user. The use of technology like Google’s recently launched SynthID, which automatically identifies and watermarks AI-generated images, is a step in the right direction and shows the company’s dedication to this critical part of the commitments.

As a UK organisation, we are not directly involved in or affected by the voluntary AI commitments. However, we welcome these initial steps across the Atlantic aimed at providing a safe and accountable regulatory environment for AI systems. A comprehensive legislative framework that balances the need for AI innovation with protections for end users and civil society would be a further step in the right direction.

At the same time, we believe that any voluntary commitment frameworks, industry forums, and research groups need to involve a wider spectrum of businesses active in the AI field. The world of AI, including advanced AI systems, is no longer limited to a few big players like Microsoft, OpenAI, or Google; it has grown much wider and requires the participation of a larger pool of organisations working on the advancement of AI algorithms.

As for the expected efficacy of these voluntary AI commitments, time will tell whether they are an effective measure or merely an attempt by the current US administration to look involved in protecting society against the dangers of AI. Given Republican control of the House of Representatives, the Biden-Harris administration might feel that voluntary commitments, based on gently asking the top AI developers to be on their best behaviour, are all it can achieve at this stage.