
16 November 2023 | 27 minute read

US senators introduce bill to establish AI governance framework

On November 15, Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Shelley Moore Capito (R-WV), and Ben Ray Luján (D-NM) introduced the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (AIRIA). The AIRIA is the latest US effort to establish a safe and innovation-friendly environment for the development and deployment of artificial intelligence (AI), following closely behind the recent Executive Order on Safe, Secure, and Trustworthy AI (Executive Order) and the White House's Blueprint for an AI Bill of Rights.

The legislation has key bipartisan support from members of the Senate Commerce Committee, which holds jurisdiction over agencies overseeing AI, such as the National Institute of Standards and Technology (NIST), an agency that has received significant attention in recent months from lawmakers and the White House.

The AIRIA is broadly split into two themes. Title I of the AIRIA focuses on legislative initiatives encouraging innovation, including amendments to open data policies, research into standards for detection of emergent behavior in AI, and research into methods of authenticating online content. Title II goes on to establish a framework of accountability, including defining many of the terms in play, reporting obligations, risk-management assessment protocols, certification procedures, enforcement measures, and a push for wider consumer education on AI.

The core of the legislation rests on new transparency and certification requirements for AI system Deployers, based on two categories of AI systems: i) “high-impact” and ii) “critical-impact.” The legislation would establish a new certification regime for AI, requiring Critical-Impact Artificial Intelligence Systems to self-certify compliance with standards developed by the Department of Commerce. The AIRIA would also require transparency reports to be provided to Commerce in the housing, employment, credit, education, healthcare, and insurance sectors. Because the definition of High-Impact Artificial Intelligence Systems is broad, and the transparency reports require new efforts in testing and monitoring these systems, the governance impact of this legislation would be consequential if signed into law.

Title I – Research and Innovation

Open data policy amendments

One of the first proposed changes to the AI landscape in the US is the expansion of what qualifies as open data or a ‘public digital asset’ under the United States Code (USC). These changes extend the classification to data models, which are now defined as “a mathematical, economic, or statistical representation of a system or process used to assist in making calculations and predictions, including through the use of algorithms, computer programs, or artificial intelligence systems.”[1] Additions of this nature suggest that, in the near future, algorithms and models maintained by the federal government will become far more visible in the US, whether as private data models for internal use within federal agencies or as publicly shared open-access resources.

Online content authenticity and provenance standards research

The AIRIA goes on to mandate that the Under Secretary of Commerce for Standards and Technology carry out, within 180 days, research on the development and standardization of content provenance and authentication for human- and AI-generated works. The mandate addresses a growing concern that it is becoming increasingly difficult to determine whether content has been generated artificially or by a human creator.

While this raises concerns around ownership and authorship, a more troubling trend is the use of AI to generate deepfakes, false information, and video/imagery intended to mislead viewers (particularly in relation to international events and democratic processes).

The mandate appears to bolster the provisions of the Executive Order, which required NIST to work with federal agencies to develop standards and techniques for authenticating content created by humans and for identifying and labeling content created by machines. Such additions to the US AI legislative framework come as little surprise, as many governments have either already implemented similar measures (such as China) or are contemplating their own approach (such as the EU).

Once the research concludes, the Under Secretary of Commerce is required to carry out a pilot program to determine whether existing technologies and the creation of open standards would be sufficient to address these concerns. While the pilot itself is expected to run for a period of 10 years, the Under Secretary of Commerce will be required to update several congressional committees on its findings throughout the pilot period.

Standards for detection of emergent and anomalous behavior and AI-generated media

The AIRIA further expands NIST's statutorily enabled activities through amendments to the National Institute of Standards and Technology Act,[2] granting NIST additional remit to support the development of best practices for i) detecting outputs generated by artificial intelligence systems, including content such as text, audio, images, and videos, and ii) detecting and understanding anomalous behavior of artificial intelligence systems, along with safeguards to mitigate potentially adversarial or compromising anomalous behavior. While reliance on industry standards and regulation of AI at the level of individual federal agencies, rather than holistically (as in the EU), has drawn criticism in the past, the flexibility of this approach may fill gaps in existing regulation and allow the US government to triage developing technology risks while the legislative process moves slowly toward more comprehensive regulation.

Comptroller General study on barriers to and best practices for the use of AI in government

In similar fashion to the obligations of the Under Secretary of Commerce, the Comptroller General of the United States (Comptroller) is mandated to conduct a comprehensive survey of statutory, regulatory, and policy barriers to the use of AI within functions of the federal government. In doing so, the Comptroller is expected to identify best practices that can be used to leverage widescale use of AI within government bodies. On completing the survey, the Comptroller must produce a report outlining its findings and proposing recommendations on best practices and effective methods of leveraging AI within government.

Title II - Accountability

Defining the state of play

In an anticipated and much-needed step, the AIRIA sets the stage by defining many terms that are often misinterpreted or inaccurately used interchangeably when discussing AI. The following definitions are a collection of the most notable and will help in understanding the risk classification and transparency framework proposed below.

AI System: an engineered system that: a) generates outputs, such as content, predictions, recommendations, or decisions for a given set of human-defined objectives; and b) is designed to operate with varying levels of adaptability and autonomy using machine and human-based inputs.

Covered Agency: an agency for which the Under Secretary develops an NIST recommendation.

Covered Internet Platform: any public-facing website, consumer-facing internet application, or mobile application available to consumers in the United States. (Note that this includes social network sites, video sharing services, search engines, and content aggregation services, but excludes platforms that: i) are wholly owned, controlled, and operated by a person that employs no more than 500 employees, processes the personal data of fewer than 1 million individuals, and averaged less than $50,000,000 in annual revenue over the previous 3 years; or ii) are operated for the sole purpose of conducting research that is not directly or indirectly made for profit. The small-platform thresholds are illustrated in the sketch below.)
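
Because all three small-platform thresholds must be met together, the exclusion reduces to a simple conjunction. Purely as an illustration, here is a minimal Python sketch; the function and parameter names are hypothetical, not drawn from the bill:

    def qualifies_for_small_platform_exclusion(
        employees: int,
        individuals_whose_data_is_processed: int,
        avg_annual_revenue_prior_3_years: float,
    ) -> bool:
        """Hypothetical sketch of the AIRIA small-platform exclusion:
        no more than 500 employees, personal data of fewer than 1 million
        individuals, and average annual revenue under $50 million over
        the previous 3 years -- all three conditions must hold."""
        return (
            employees <= 500
            and individuals_whose_data_is_processed < 1_000_000
            and avg_annual_revenue_prior_3_years < 50_000_000
        )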

Critical-Impact AI Organization: a non-government organization that serves as the Deployer of a Critical-Impact Artificial Intelligence System.

Critical-Impact Artificial Intelligence System: an artificial intelligence system that is:

  • deployed for a purpose other than solely for use by the Department of Defense or an intelligence agency and

  • used or intended to be used:

    • to make decisions that have a legal or similarly significant effect on:

      • the real-time or ex post facto collection of biometric data of natural persons by biometric identification systems without their consent

      • the direct management and operation of critical infrastructure and space-based infrastructure or

      • criminal justice and

    • in a manner that poses a significant risk to rights afforded under the Constitution of the United States or safety.

Deployer: an entity that uses or operates an artificial intelligence system for internal use or for use by third parties and does not include an entity that is solely an end user of a system.

Developer: an entity that designs, codes, produces, or owns an artificial intelligence system for internal use or for use by a third party as a baseline model and does not act as a Deployer of the artificial intelligence system.

Generative Artificial Intelligence System: an artificial intelligence system that generates novel data or content in a written or audio-visual format.

High-Impact Artificial Intelligence System: an artificial intelligence system:

  • deployed for a purpose other than solely for use by the Department of Defense or an intelligence agency and

  • that is specifically developed with the intended purpose of making decisions that have a legal or similarly significant effect on the access of an individual to housing, employment, credit, education, healthcare, or insurance in a manner that poses a significant risk to rights afforded under the Constitution of the United States or safety.

Significant Risk: a combination of severe, high-intensity, high-probability, and long-duration risk of harm to individuals.

Generative AI transparency

The first step taken in the AIRIA is to prohibit the operation of a Covered Internet Platform (as defined above) that uses AI unless several transparency measures are met. These include indicating to each user of the platform that they are interacting with a service that leverages generative AI.

These notices must be clearly identifiable, placed where they are obvious to the user, and provided before the user interacts with content created by the platform. While users may be offered the option to switch these notifications off during later interactions, the notices must be presented during the course of a user's first interaction.
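
As a rough illustration of how a platform operator might implement this first-interaction rule, here is a minimal Python sketch; the function and parameter names are assumptions for illustration, not drawn from the bill:

    def should_show_genai_notice(is_first_interaction: bool, user_opted_out: bool) -> bool:
        """Hypothetical sketch of the AIRIA notice rule: the disclosure must
        always appear on a user's first interaction; only afterwards may the
        user opt out of seeing it again."""
        if is_first_interaction:
            return True  # the first-interaction notice cannot be suppressed
        return not user_opted_out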

Should a Covered Internet Platform fail to adhere to these transparency obligations, the Secretary of Commerce may take enforcement action, including notifying the offending organization and requiring that it take remedial action. Persistent offenses will be subject to the enforcement measures described below.

Recommendations to federal agencies for risk management of High-Impact Artificial Intelligence Systems

The AIRIA introduces new provisions into the National Institute of Standards and Technology Act[3] that enshrine many of the concepts introduced throughout the remainder of the AIRIA, including definitions such as that of High-Impact Artificial Intelligence Systems.

Beyond these inclusions, the AIRIA directs the Director of NIST to develop sector-specific recommendations for individual federal agencies to conduct oversight of the non-federal and federal use of High-Impact Artificial Intelligence Systems, with the aim of improving the safe and responsible use of such systems. Such recommendations may address key design choices, intended uses and users, mitigations for anticipated harms, sector-specific nuances and considerations, and methods for evaluating the safety of High-Impact Artificial Intelligence Systems.

These recommendations are to be updated every two years to account for changes in the capabilities of AI and emerging use cases.

Office of Management and Budget oversight of recommendations to agencies

The AIRIA extends further obligations to NIST and the relevant federal agencies. Once NIST has finalized its recommendations as part of its obligations under the wider terms of the AIRIA, it must then submit these recommendations to the Director of the Office of Management and Budget, the head of each Covered Agency, and any applicable congressional committees.

The heads of the Covered Agencies then have 90 days to provide a written response indicating whether they intend to i) implement the entire NIST recommendation; ii) implement part of the recommendation; or iii) refuse to carry out procedures to implement the recommendation. Where a Covered Agency seeks to implement aspects of the recommendation, the response should include timeframes for completing these measures. Where a Covered Agency refuses to implement the recommendation, a formal response indicating the reason for the refusal must also be provided. Once the recommendation process is finalized, the Director of NIST will make available to the public, at a reasonable cost, a copy of each recommendation and any formal responses.

As recommendations begin to be implemented by Covered Agencies, each agency is also obligated to report annually (each February) to the Director of NIST detailing the regulatory status of each recommendation and when the final stages of the process are anticipated. Should any agency fail to meet its deadline, the applicable congressional committees will be notified. Where possible, NIST shall work to aid Covered Agencies in implementing their recommendations and will work alongside the Office of Information and Regulatory Affairs to develop and periodically revise performance indicators to determine the effect of AI-specific regulatory intervention.

Transparency reports and obligations for High-Impact Artificial Intelligence Systems

In a manner similar to other international approaches, the AIRIA obligates Deployers of High-Impact Artificial Intelligence Systems to adhere to a set of transparency reporting obligations. This part of the AIRIA may have the most significant impact on Developers and Deployers, as the number of these systems is substantially larger than the number of Critical-Impact Artificial Intelligence Systems. Recall that the definition of “High-Impact Artificial Intelligence Systems” covers systems specifically developed with the intended purpose of making decisions that have a legal or similarly significant effect on the access of an individual to housing, employment, credit, education, healthcare, or insurance in a manner that poses a significant risk to rights afforded under the Constitution of the United States or safety.

Under the AIRIA, each Deployer must:

  • before deploying the High-Impact Artificial Intelligence System, and annually thereafter, submit a report describing the design and safety plans for the artificial intelligence system; and

  • submit an updated report on the High-Impact Artificial Intelligence System if the Deployer makes a material change to either:

    • the purpose for which it is used or

    • the type of data it processes or uses for training purposes.

The reports should include the following details (sketched as a data structure after this list):

  • the purpose

  • the intended use cases

  • deployment context

  • benefits

  • a description of the data that the system, once deployed, processes as inputs

  • if available:

    • a list of data categories and formats the Deployer used to retrain or continue training the AI

    • metrics for evaluating system performance and known limitations and

    • transparency measures, including information identifying to individuals that the system is in use

  • processes and testing performed before each deployment to ensure safety, reliability, and effectiveness

  • where applicable, identification of any third-party artificial intelligence systems or datasets the Deployer relies on and

  • post-deployment monitoring and user safeguards.
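
For Deployers building compliance tooling, these report contents map naturally onto a structured record. The following is a minimal sketch only; the bill prescribes the contents, not this structure, and the field names are hypothetical, with the "if available" items modeled as optional:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class HighImpactAIReport:
        """Hypothetical record mirroring the AIRIA report contents."""
        purpose: str
        intended_use_cases: list[str]
        deployment_context: str
        benefits: str
        input_data_description: str      # data processed as inputs once deployed
        pre_deployment_testing: str      # safety, reliability, effectiveness processes
        post_deployment_monitoring: str  # monitoring and user safeguards
        third_party_dependencies: list[str] = field(default_factory=list)
        # "If available" items, modeled as optional:
        retraining_data_categories: Optional[list[str]] = None
        performance_metrics_and_limitations: Optional[str] = None
        transparency_measures: Optional[str] = None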

The obligations of Developers of High-Impact Artificial Intelligence Systems are not restricted to reporting. Developers are also subject to many of the obligations imposed on Developers of Critical-Impact Artificial Intelligence Systems, which are detailed further below.

Where Developers fail to follow these requirements, enforcement action may be taken against them, including notification of the failure and a requirement to correct it. If correction is not achieved within 15 days of the notification, Developers may be subject to the additional enforcement measures detailed below.

These are extensive obligations, and the AIRIA accordingly directs the Secretary of Commerce to ensure that they do not become overly burdensome or duplicate requirements imposed by federal agencies as part of their oversight responsibilities.

Risk management assessment for Critical-Impact Artificial Intelligence Systems

One of the most significant elements of the AIRIA is the introduction of a risk management assessment framework for Critical-Impact Artificial Intelligence Systems. Each Critical-Impact AI Organization must perform a risk management assessment in accordance with the provisions of the AIRIA.

The assessment must take the following format (the timing is illustrated in the sketch after this list):

  • It must occur within 30 days of the system being made publicly available

  • It must recur at least every two years until the system is no longer made publicly available, determine whether significant changes were made to the critical-impact artificial intelligence system, and provide, to the extent practicable, aggregate results of any significant deviation from the expected performance detailed in the initial or most recent assessment.
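
Purely as an illustration of this cadence (not statutory text; the function name and the number of cycles computed are assumptions), a short Python sketch:

    from datetime import date, timedelta

    def assessment_due_dates(publicly_available: date, cycles: int = 3) -> list[date]:
        """Hypothetical sketch of the AIRIA assessment cadence: an initial
        assessment within 30 days of public availability, then at least
        every two years while the system remains publicly available."""
        due = [publicly_available + timedelta(days=30)]
        for _ in range(cycles - 1):
            last = due[-1]
            try:
                due.append(last.replace(year=last.year + 2))
            except ValueError:
                # Handle a February 29 date falling in a non-leap year.
                due.append(last.replace(year=last.year + 2, day=28))
        return due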

Each assessment is required to describe the means by which the organization, through testing, evaluation, validation, and verification (TEVV), addresses several mandated categories, including:

  • Policies, processes, procedures, and practices across the organization relating to transparent and effective mapping, measuring, and managing of artificial intelligence risks

  • The structure, context, and capabilities of the critical-impact artificial intelligence system or critical-impact foundation model

  • A description of how the organization employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor artificial intelligence risk and

  • A description of the allocation of risk resources to map and measure risks on a regular basis, including how artificial intelligence risks identified through assessments and other analytical outputs are prioritized, responded to, and managed, and how strategies to maximize artificial intelligence benefits and minimize negative impacts were planned, prepared, implemented, documented, and informed by input from relevant artificial intelligence Deployers.

On completion of the assessment, the organization must submit to the Secretary of Commerce a report, in a format determined by the Secretary, outlining the assessment and its results. Where the Secretary of Commerce is unsatisfied with the information, they may request that the organization provide additional information to further substantiate the report. Notably, during this time the Secretary may not prohibit the organization from making the Critical-Impact Artificial Intelligence System available to the public based exclusively on review of the report or during requests for further information.

Critical-Impact AI Organizations are not the only parties with additional obligations and transparency requirements. The AIRIA also requires that Developers of Critical-Impact Artificial Intelligence Systems provide Deployers of the system with the information reasonably necessary for the Deployer to comply with its own requirements. This information broadly includes:

  • an overview of the data used in training the baseline artificial intelligence system provided by the Developer

  • documentation outlining the structure and context of the baseline artificial intelligence system of the Developer

  • known capabilities, limitations, and risks of the baseline artificial intelligence system of the Developer at the time of the development of the artificial intelligence system and

  • documentation applicable to downstream use (including guidelines for intended use and statements of intended purpose of the system).

Certification of Critical-Impact Artificial Intelligence Systems

The AIRIA also establishes a novel process for certification of Critical-Impact Artificial Intelligence Systems.

This is to be achieved through the creation of an advisory committee responsible for advising the Secretary of Commerce and providing recommendations on TEVV standards and certification processes for Critical-Impact Artificial Intelligence Systems. These recommendations will, among other things, focus on maximizing alignment and interoperability of systems with standards issued by national and international standards bodies (including ISO and NIST) and provide ongoing review of prospective TEVV standards submitted by the Secretary. To ensure a balanced approach, the committee will be composed of representatives from across industries, including academia, companies developing AI, consumer advocacy groups, and enabling technology companies, along with such others as the Secretary deems necessary.

Accompanying the committee, the AIRIA requires the Secretary to establish a 3-year implementation plan for certification of Critical-Impact Artificial Intelligence Systems (the Plan). Created in consultation with governmental arms such as the National Artificial Intelligence Initiative Office, the Plan is set to include several key processes for the effective certification of these systems, including a methodology for gathering and using appropriate information as part of TEVV processes, processes for prescribing TEVV standards as they apply to this type of AI system, and an outline of future standards to be proposed. Once completed, the initial Plan must be submitted to the advisory committee and applicable committees of Congress for review.

Should the work under the Plan be approved, a set of standards for Critical-Impact Artificial Intelligence System TEVV methodologies shall be issued. Each of these standards will be required to be practical; ensure the safe, secure, and transparent operation of systems; align with existing applicable standards from established standards organizations; be stated in clear and objective terms; and provide a mechanism for periodic review to account for advances in technology. These standards will then be subject to public and governmental scrutiny prior to their widescale adoption.

In some cases, the AIRIA grants the Secretary of Commerce the ability to temporarily exempt a Critical-Impact Artificial Intelligence System from the established TEVV standards. In each case, an organization must apply (or reapply for an extension), and an exemption may be granted where it would be consistent with the public interest and would facilitate the development or evaluation of a feature or characteristic of a system that provides safety and security at a level not less than that of the prescribed TEVV standards.

It should be noted that the certification process, with the exception of applying for an exemption, is largely one of self-certification. Each Critical-Impact AI Organization is required to assess its system and certify to the Secretary that it complies with the applicable standards established under the AIRIA.

This does not mean, however, that organizations can self-certify without actually complying with the established standards. Where a Critical-Impact AI Organization is found to be in non-compliance, the Secretary of Commerce will immediately notify the organization of the breach and order it to take remedial action. Where the organization itself later discovers that its system is non-compliant, it must alert the Secretary and submit a report detailing the nature of the discovery, the steps being taken to remedy the issue, and the actions taken to ensure impacted stakeholders are appropriately considered. Where organizations continue to fail to rectify their non-compliance, the Secretary may impose enforcement measures, as detailed below, in accordance with the substance of the breach.

Enforcement

Unlike previous initiatives of the US government and legislature, the AIRIA has several enforcement mechanisms that provide the “bite” whose absence has been a criticism of earlier proposals and regulatory interventions. These include:

  • Penalties of the greater of i) an amount not to exceed $300,000 or ii) an amount twice the value of the transaction that is the basis of the violation (see the sketch after this list)

  • Prohibitions on the deployment of Critical-Impact Artificial Intelligence Systems where it is determined that a Critical-Impact AI Organization has intentionally violated the provisions of the AIRIA and

  • Civil actions that will subject the party to judicial proceedings in addition to the above enforcement measures, particularly where a violation of the terms of the AIRIA is found to have occurred with intent.
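
The “greater of” penalty reduces to a simple comparison. A minimal sketch, assuming a hypothetical function name and treating the $300,000 figure as the ceiling of the fixed alternative:

    def penalty_ceiling(transaction_value: float) -> float:
        """Hypothetical sketch of the AIRIA civil penalty: the greater of
        (i) an amount of up to $300,000 or (ii) twice the value of the
        transaction that is the basis of the violation."""
        return max(300_000.0, 2 * transaction_value)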

Artificial intelligence consumer education

The drafters of the AIRIA have equally recognized that a framework for the regulation of AI is of little material benefit unless US society is suitably educated on how it is affected by the technology.

As such, the AIRIA requires that the Secretary of Commerce establish, within 180 days of enactment, a working group responsible for the development of educational efforts focused on AI; the working group will terminate after a period of 2 years. The working group will comprise members from across industries, including higher education, Developers of AI, public health organizations, public safety organizations, rural workforce development advocates, and nonprofit technology industry organizations.

The working group is charged with identifying recommended education programs that can be used to inform consumers and stakeholders about AI. These programs are expected to cover the capabilities and limitations of AI, use cases for AI and how they can improve the lives of people in the US, human-machine interfaces, emergency fallback scenarios, and nomenclature and taxonomy for AI safety features and systems.

In performing these tasks, the Secretary of Commerce will work closely with the Chair of the Federal Trade Commission and consult on any recommendations developed through the working group's efforts.

What can we expect next?

The bipartisan AIRIA is perhaps the most comprehensive AI legislation introduced to date in the US Congress and represents a major step toward legislation governing AI in the United States. While the recent Executive Order from the White House, signed on October 30, starts the clock for federal departments and agencies to act on several mandates over the next 12 months, Congress has unique areas of responsibility that go beyond what an executive order can do, such as codifying requirements, establishing enforcement mechanisms, and allocating resources to drive innovation.

The Senate has made AI a priority in this Congress in large part due to Senate Majority Leader Schumer. Since early 2023, the Majority Leader has been working with a core group of his colleagues – Senators Mike Rounds (R-SD), Todd Young (R-IN), and Martin Heinrich (D-NM), chair of the Senate AI Caucus – to craft legislation that can attract broad, bipartisan support, and he has encouraged senators to reach across the aisle and find AI legislative proposals that members can agree upon. Senator Thune's proposal is likely one among many across different committee jurisdictions that may ultimately be considered in an eventual comprehensive AI legislative package.

Insights into Congress’s AI priorities have been coming into clearer focus in the past few months, and the pace of AI-related activity on Capitol Hill has significantly stepped up this fall.

Since September, the Senate “AI Gang” has been hosting a series of AI Insight Forums to help supplement and accelerate the traditional committee hearing process for the purposes of crafting comprehensive AI legislation:

  • In the first of these closed-door sessions, senators heard from executives of many of the top US tech firms, as well as representatives of advocacy organizations, labor groups, civil rights organizations and creative groups.

  • The second forum, in October, focused on innovation, which Schumer called “our North Star for AI.” That meeting also featured input from both company leaders and thought leaders.

  • A third forum on November 1 addressed workforce implications of AI.

  • The most recent forum on November 8 addressed two major AI-related concerns of lawmakers: privacy and legal liability implications of AI; and protecting future elections from the use of deepfakes and other deceptive AI practices.

  • Additional forums this month and next are expected to cover issues relating to copyright and intellectual property, use-cases and risk management, national security, guarding against doomsday scenarios, AI’s role in our social world, and transparency, explainability, and alignment.

Congress has exercised its oversight role on AI issues, with multiple hearings each week over the past few months. Topics have included AI's role in financial services; the federal government, including procurement and the workforce; communications technology; healthcare; and even agriculture. Protecting consumers from AI-generated scams and deepfakes has also been an ongoing concern.

DLA Piper’s AI practice hosted two members of the “AI Gang” – Senators Heinrich and Rounds – on September 19 in Washington, DC to discuss the AI Insight Forums and emerging AI legislation.

DLA Piper is here to help

As part of the Financial Times’ 2023 North America Innovative Lawyer awards, DLA Piper has been shortlisted for an Innovative Lawyers in Technology award for its AI and Data Analytics practice.

DLA Piper’s AI policy team in Washington DC is led by the Founding Director of the Senate Artificial Intelligence Caucus.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI Strategy through our newly released AI Chatroom Series.

For further information or if you have any questions, please contact any of the authors.



[1] Artificial Intelligence Research, Innovation, and Accountability Act of 2023, § 101.

[2] 15 U.S.C. § 278h-1.

[3] 15 U.S.C. § 278h-1.
