S. 2938: Artificial Intelligence Risk Evaluation Act of 2025
Sponsor
Josh Hawley
Republican · MO
Bill Progress
Latest Action · Sep 29, 2025
Read twice and referred to the Committee on Commerce, Science, and Transportation.
Why it matters
This would create a mandatory federal review system for the most powerful AI models within 90 days of enactment and punish noncompliance with fines of at least $1,000,000 per day.
The bill would put the Department of Energy at the center of federal AI oversight. Within 90 days after enactment, the Secretary of Energy would have to establish an Advanced Artificial Intelligence Evaluation Program to test high-end AI systems for dangerous behavior. The bill defines an "advanced artificial intelligence system" as one trained using more than 10^26 integer or floating-point operations, which is a very high threshold aimed at frontier models rather than ordinary software.
The core rule is simple and tough: covered developers must participate in the federal program and provide whatever materials are requested, including underlying code, training data, model weights, interface engines, and detailed architecture information. No person may deploy an advanced AI system in interstate or foreign commerce unless they comply. If they do not, the penalty is a fine of not less than $1,000,000 for each day of violation, giving the bill real enforcement teeth.
The testing program itself is broad. It would include standardized and classified evaluations, adversarial testing that matches or exceeds real-world jailbreaking techniques, third-party assessments, blind model evaluations, and formal reporting back to developers. It would also direct the government to develop containment protocols, mitigation strategies, and oversight options for the most extreme cases, including potential nationalization. The program is explicitly told to assess artificial superintelligence, defined as AI that can operate autonomously for long periods in open-ended environments, match or exceed human cognitive performance across most domains, and potentially modify itself to get around human control.
The bill is also notable for the kinds of risks it targets. An "adverse AI incident" includes loss-of-control scenarios, weaponization risks involving foreign adversaries or foreign terrorist organizations, threats to critical infrastructure, major harm to civil liberties, economic competition, or labor markets, and "scheming behavior" such as deceiving human operators or hiding true capabilities. Within 360 days after enactment, the Secretary of Energy must send Congress a permanent framework plan, then update it at least once every year. Unless Congress renews it, the program would end 7 years after enactment.
What does S. 2938 do?
DOE must launch program within 90 days
The bill requires the Secretary of Energy to establish an Advanced Artificial Intelligence Evaluation Program in the Department of Energy not later than 90 days after enactment.
Targets models above 10^26 operations
An "advanced artificial intelligence system" is defined as an AI system trained using computing power greater than 10^26 integer or floating-point operations, setting the law's trigger at the frontier-model level.
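To give a sense of scale, the threshold can be sanity-checked with the widely used "6 × parameters × training tokens" rule of thumb for estimating transformer training operations. The model sizes and token counts below are illustrative assumptions for this sketch, not figures from the bill:

```python
# Rough check of whether a training run would cross the bill's
# 10^26-operation trigger, using the common 6*N*D estimate for
# transformer training FLOPs. Model sizes are illustrative only.

THRESHOLD = 1e26  # S. 2938's definition: more than 10^26 operations

def training_ops(params: float, tokens: float) -> float:
    """Approximate total training operations via the 6*N*D rule of thumb."""
    return 6 * params * tokens

runs = {
    "70B params, 15T tokens": training_ops(70e9, 15e12),
    "1T params, 30T tokens": training_ops(1e12, 30e12),
}

for name, ops in runs.items():
    status = "covered" if ops > THRESHOLD else "below threshold"
    print(f"{name}: {ops:.1e} ops -> {status}")
```

Under this approximation, a 70-billion-parameter model trained on 15 trillion tokens lands around 6×10^24 operations, well under the trigger, which illustrates why the definition reaches only frontier-scale training runs.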
No deployment without compliance
No person may deploy an advanced AI system in interstate or foreign commerce unless the developer complies with the participation rules, and "deploy" includes releasing, selling, or providing access to the system, including releasing it as open source.
$1,000,000-per-day penalty for violations
A developer that fails to participate or deploys without compliance faces fines of not less than $1,000,000 for every day the violation continues.
Developers must hand over core model materials
Covered developers must provide requested materials, including underlying code, training data, model weights, interface engines, and detailed architecture information, giving the Department of Energy deep access to how the system was built.
Congress gets plan in 360 days, then yearly
Not later than 360 days after enactment, the Secretary of Energy must submit to Congress a permanent framework plan covering testing results, standards or licensing recommendations, hardware and cloud monitoring ideas, governance options, agency structure proposals, and existential and economic risk evaluations, with updates at least once every year after that.
Who benefits from S. 2938?
National security agencies and policymakers
They would gain a formal federal system inside the Department of Energy to test high-end AI for risks like weaponization by foreign adversaries or foreign terrorist organizations, threats to critical infrastructure, and loss-of-control scenarios.
Workers and communities worried about economic disruption
The bill explicitly treats significant erosion of labor markets and economic competition as part of an "adverse AI incident," meaning these harms must be evaluated rather than ignored.
Civil liberties and public-interest advocates
They benefit because the law specifically flags significant erosion of civil liberties as a reportable AI risk and requires formal evaluation, mitigation planning, and reporting to Congress.
Congress
Lawmakers would receive a detailed permanent framework plan within 360 days after enactment and then at least annual updates for the life of the program, giving them a regular stream of oversight information for up to 7 years unless the program is renewed.
Who is affected by S. 2938?
Frontier AI developers
Companies and other actors that design, code, produce, own, or substantially modify advanced AI systems for interstate or foreign commerce would be required to participate, including those that initiate a training run on systems above the 10^26-compute threshold.
Open-source AI publishers
They are directly affected because the bill defines "deploy" to include releasing open-source systems, so open publication of a covered advanced AI model would trigger the compliance rules.
Cloud and hardware oversight stakeholders
The bill's required framework plan must include proposals for continuous monitoring of hardware and cloud inputs, signaling possible future scrutiny for compute providers and infrastructure operators.
Developers of highly autonomous systems
Projects moving toward artificial superintelligence would face especially close attention because the program must assess systems that can operate autonomously for long periods, match or exceed human cognition across most domains, and potentially self-modify to circumvent human control.
What Congress Is Saying
S. 2938 hasn't been debated on the floor yet.
This section updates when a legislator speaks about it on the floor or in committee.
S2938 Legislative Journey
Committee Action
Sep 29, 2025
Read twice and referred to the Committee on Commerce, Science, and Transportation.
About the Sponsor
Josh Hawley
Republican, MO · 7 years in Congress
Committees: Homeland Security and Governmental Affairs, Small Business and Entrepreneurship, the Judiciary
Cosponsors (2)
This bill has 2 cosponsors: 1 Democrat, 1 Republican, reflecting bipartisan support. Cosponsors represent 2 states: Connecticut, Tennessee.
Committee Sponsors
Commerce, Science, and Transportation Committee
1 of 28 committee members cosponsored
14 Republicans across this committee haven't cosponsored yet.
S. 2938 Quick Facts
- Committee
- Commerce, Science, and Transportation
- Chamber
- Senate
- Policy
- Science, Technology, Communications
- Introduced
- Sep 29, 2025
Read twice and referred to the Committee on Commerce, Science, and Transportation.
Sep 29, 2025
S. 2938 Common Questions
How much is the fine for violating the AI Risk Evaluation Act
Violations carry a fine of not less than $1,000,000 for each day the violation continues under the Artificial Intelligence Risk Evaluation Act of 2025 (Section 4).
What AI models would be covered by the 10^26 compute threshold
Under the Artificial Intelligence Risk Evaluation Act of 2025, an advanced AI system is one trained using more than 10^26 integer or floating-point operations (Section 3).
Can the government ban deployment of an AI model that skips federal testing
Yes. Under the Artificial Intelligence Risk Evaluation Act of 2025, no person may deploy a covered advanced AI system in interstate or foreign commerce without complying with the program (Section 4).
Does this bill apply to open-source AI releases
Yes. According to S2938, “deploy” includes releasing, selling, or providing access to a system outside the developer’s custody, including open-source release (Section 3).
What materials would AI developers have to give the Department of Energy
Covered developers must provide requested materials such as underlying code, training data, model weights, interface engines, and training or architecture details under S2938 (Section 4).
How soon would DOE have to launch the federal AI evaluation program
The Secretary of Energy must establish the Advanced Artificial Intelligence Evaluation Program within 90 days after enactment under the Artificial Intelligence Risk Evaluation Act of 2025 (Section 5).
What counts as an adverse AI incident under S2938
S2938 defines it to include loss of control, weaponization risks, critical infrastructure threats, harms to civil liberties or labor markets, scheming behavior, or attempts to do those things (Section 3).
Can DOE use classified testing and jailbreak-style red-teaming on frontier AI models
Yes. Under the Artificial Intelligence Risk Evaluation Act of 2025, the program includes classified testing, blind evaluations, third-party assessments, and protocols matching or exceeding real-world jailbreaking techniques (Section 5).
When would Congress get the AI oversight framework and how often would it be updated
Under the Artificial Intelligence Risk Evaluation Act of 2025, Congress must get the framework plan within 360 days after enactment, with updates at least once every year (Section 5).
Does the bill let the government consider nationalizing advanced AI systems
Yes. According to S2938 Section 5, DOE must develop regulatory oversight options for extreme cases, including potential nationalization.
Based on S. 2938 bill text
S. 2938 Bill Text
“To require the Secretary of Energy to establish the Advanced Artificial Intelligence Evaluation Program, and for other purposes.”
Source: U.S. Government Publishing Office
Science, Technology, Communications Bills
9 related bills we're tracking
GUARDRAILS Act
Referred to the Committee on Energy and Commerce, and in addition to the Committee on the Judiciary, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned.
Mar 20, 2026
Artificial Intelligence Civil Rights Act of 2025
Referred to the Committee on Energy and Commerce, and in addition to the Committee on Oversight and Government Reform, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned.
Dec 2, 2025
TAKE IT DOWN Act
Became Public Law No: 119-12.
May 19, 2025
States' Right to Regulate AI Act
Read twice and referred to the Committee on Commerce, Science, and Transportation.
Dec 17, 2025
ACERO Act
Received in the Senate and Read twice and referred to the Committee on Commerce, Science, and Transportation.
Feb 24, 2026
ASCEND Act
Received in the Senate. Read twice. Placed on Senate Legislative Calendar under General Orders. Calendar No. 344.
Feb 24, 2026
GUARDRAILS Act
Read twice and referred to the Committee on Commerce, Science, and Transportation.
Mar 26, 2026
Small Business Artificial Intelligence Advancement Act
Received in the Senate and Read twice and referred to the Committee on Commerce, Science, and Transportation.
Feb 24, 2026
Research Integrity and Foreign Influence Prevention Act
Referred to the House Committee on Science, Space, and Technology.
Jun 5, 2025
Trending Right Now
Bills gaining momentum across Congress
Congressional Tribute to Constance Baker Motley Act of 2025
Referred to the House Committee on Financial Services.
Sep 11, 2025
Deterring American AI Model Theft Act of 2026
Referred to the House Committee on Foreign Affairs.
Apr 15, 2026
AI Foundation Model Transparency Act of 2026
Referred to the House Committee on Energy and Commerce.
Mar 26, 2026