The United States is preparing strict new guidelines for artificial intelligence companies seeking government contracts, following a growing dispute between the government and AI firm Anthropic, according to a report on Friday.
The proposed rules would require AI developers working with the US government to allow “any lawful” use of their AI models, a condition that could significantly expand how federal agencies deploy artificial intelligence technologies.
The move comes amid tensions between US authorities and Anthropic after disagreements over the permitted uses of advanced AI systems, particularly in defense and national security applications.
Under the draft guidelines, companies bidding for federal AI contracts may also be required to ensure their systems avoid partisan bias and to disclose whether their models have been modified to comply with foreign laws or regulations.
The dispute escalated earlier when US officials designated Anthropic as a potential supply-chain risk, effectively barring its technology from certain government work over the company's restrictions on military uses of its AI models.
The new rules signal Washington's broader effort to tighten oversight of AI development and deployment, particularly as government agencies increasingly rely on the technology for defense, intelligence, and administrative functions.
Officials have not yet announced a final timeline for adopting the guidelines, but the proposed framework highlights the US government’s intention to set clearer standards for AI companies seeking federal contracts amid rapid advances in the technology sector.
