The White House issued an executive order Tuesday refining how federal agencies purchase artificial intelligence systems, establishing national-security guardrails and quarterly reporting requirements that industry representatives say could slow adoption of emerging technology across government.
The directive, which updates procurement guidance first outlined in a 2023 executive order, requires agencies to conduct security reviews before deploying foundation models in classified or sensitive unclassified environments. It also mandates that vendors attest their training data does not include material from adversary nations and that model weights remain under US control.
"We're creating a framework that allows agencies to move quickly on AI adoption while ensuring we're not inadvertently creating vulnerabilities," a senior administration official said during a background briefing Monday evening. "The national-security equities here are significant."
The order directs the Office of Management and Budget to establish a central registry tracking all federal contracts for foundation models exceeding $250,000. Agencies must submit quarterly reports detailing their AI deployments, including the tasks assigned to these systems and any observed failures or security incidents.
Scope and Implementation
The rules apply to foundation models—large-scale AI systems trained on broad datasets that can be adapted for multiple tasks—purchased or deployed by civilian and defense agencies. The order carves out an exception for intelligence agencies, which will operate under separate classified guidance.
Agencies have 90 days to inventory existing AI contracts and 180 days to bring current deployments into compliance with the new security standards. The administration official said the White House expects "minimal disruption" to ongoing programs but acknowledged some contracts may require renegotiation.
The directive also establishes an interagency working group, led by the National Security Council and including representatives from the Departments of Defense, Commerce, and Homeland Security, to review high-risk AI applications on a case-by-case basis.
Industry Pushback
A technology trade association representing major AI vendors criticized the order within hours of its release, arguing the attestation requirements and security reviews create unnecessary bureaucratic obstacles.
"These rules assume a level of supply-chain transparency that doesn't exist in the AI ecosystem," a spokesperson for the association said in a statement. "Requiring vendors to certify the provenance of every piece of training data is technically infeasible and will effectively lock smaller companies out of the federal market."
The spokesperson added that the quarterly reporting requirements would impose "significant compliance costs" without clear security benefits.
Administration officials defended the approach, noting that the federal government represents a substantial customer base for AI companies and that procurement rules have historically driven industry standards.
"If you want to sell to the US government, you need to meet US government security requirements," the senior administration official said. "That's not a new concept."
Broader Context
The executive order arrives as Congress debates comprehensive AI legislation, with competing bills in the House and Senate addressing liability, transparency, and safety standards. Some lawmakers have indicated the White House action could influence the legislative debate, though significant gaps remain between the parties on regulatory scope.
The administration has prioritized AI governance since early 2023, issuing voluntary commitments from major technology companies and establishing an AI Safety Institute within the Commerce Department.