Why March 2026 changed everything
March 2026 was the month federal AI policy stopped being theoretical.
In the span of three weeks, three separate actions converged to fundamentally reshape how the government buys, regulates, and evaluates AI. Each one alone would be significant. Together, they represent the most consequential shift in federal AI procurement since the original AI executive orders.
This isn’t happening in a vacuum. On March 26, a federal judge blocked the Pentagon’s ban on Anthropic, calling the supply chain risk designation “Orwellian.” The ruling exposed how quickly AI vendor relationships can be severed by political decisions—and how unprepared most agencies and contractors are for that disruption. (We covered the vendor risk implications in detail in our previous post on AI vendor lock-in.)
But while the Anthropic drama dominated headlines, three quieter policy actions were laying the groundwork for lasting structural change. These are the rules that will still be shaping federal AI procurement long after the Anthropic injunction is resolved.
Rule 1: GSA’s AI contract clause (GSAR 552.239-7001)
On March 6, GSA released a draft contract clause titled “Basic Safeguarding of Artificial Intelligence Systems” as part of its Multiple Award Schedule (MAS) Refresh process. The clause would impose sweeping new obligations on any contractor selling AI through GSA Schedules—and on their subcontractors and service providers.
The clause has four provisions that every AI contractor needs to understand.
“American AI Systems” only
Contractors must use only “American AI Systems”—defined as AI systems developed and produced in the United States, per OMB M-25-22—when performing GSA Schedule orders. All AI systems used in contract performance must be disclosed, whether American or foreign.
This sounds straightforward until you consider how AI systems are actually built. As Lawfare noted in their analysis, the “American AI” requirement is “difficult to apply in a market built on global development teams, open-source components, and layered supply chains.” If your AI product uses an open-source model fine-tuned by a distributed team, with embeddings from one provider and inference from another, which part needs to be “American”?
No training on government data
Contractors and their service providers are prohibited from using government data to train, fine-tune, or otherwise improve any AI model—including models operated by third parties—for any purpose, commercial or otherwise.
This is a direct response to legitimate concerns about government data being used to improve commercial products. But it also means that if your AI solution improves over time by learning from the data it processes—as many enterprise AI products are designed to do—that feature is now off the table for federal work.
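Architecturally, that means learning-from-usage has to be switchable per tenant, not baked into the product. Here is a minimal sketch of a per-tenant data-handling policy; the class and field names are hypothetical illustrations, not terms from the draft clause:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataHandlingPolicy:
    """Per-tenant controls over how customer data may be used."""
    allow_training: bool             # train or fine-tune on this tenant's data
    allow_evaluation: bool           # use data in offline quality evaluations
    allow_third_party_sharing: bool  # forward data to model subprocessors
    retention_days: int              # how long raw inputs are retained

def queue_for_training(payload: str) -> None:
    """Stub: a real product would feed a training-data pipeline here."""

# Hypothetical policy for a GSA Schedule order under the draft clause:
# government data may not be used to train, fine-tune, or "otherwise
# improve" any model, so every improvement pathway is switched off.
FEDERAL_TENANT_POLICY = DataHandlingPolicy(
    allow_training=False,
    allow_evaluation=False,  # conservative reading of "otherwise improve"
    allow_third_party_sharing=False,
    retention_days=30,
)

def record_usage(policy: DataHandlingPolicy, payload: str) -> None:
    """Route usage data according to the tenant's policy."""
    if policy.allow_training:
        queue_for_training(payload)
    # Federal tenants fall through: data is processed, never learned from.
```

The point of the sketch is the enforcement boundary: if the training switch lives in per-tenant configuration rather than product-wide code, you can sell the same product commercially and federally without forking it.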
Government owns all custom AI development
The government claims ownership of all “Custom Development,” defined to include modifications, customizations, configurations, enhancements, and any associated implementations or workflows. Holland & Knight characterized these provisions as “among the most prescriptive seen in federal contracting.”
This is the provision generating the most industry alarm. If you customize your commercial AI product for a government client—fine-tune a model, build custom workflows, modify your interface—the government may own those modifications. For AI companies whose business model depends on reusing and improving their core product across clients, this is a potential dealbreaker.
The clause overrides your commercial terms
In any conflict between the clause and the contractor’s commercial policies, terms, or agreements, the clause takes precedence. This means your standard SaaS terms, your AI acceptable use policies, and your commercial licensing agreements all yield to the government’s requirements.
The comment period for this clause was extended to April 3, 2026. GSA has confirmed the clause will not be included in MAS Refresh 31—it’s been deferred to Refresh 32. That means there’s still time to influence the final language, but the direction is clear. If you sell AI through GSA Schedules, submit your comments before April 3.
Rule 2: OMB M-26-04 — the deadline that already passed
While the GSA clause is still in draft, OMB Memorandum M-26-04 is already in effect—and the first compliance deadline has already passed.
Issued on December 11, 2025, M-26-04 is titled “Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles.” It implements Executive Order 14319 and establishes new requirements for every executive agency procuring large language models.
What was required by March 11
By March 11, 2026, every federal agency was required to:
- Update procurement policies to include the Unbiased AI Principles in all new LLM procurements
- Modify existing LLM contracts, to the extent practicable, to include these requirements before exercising any contract options
If your organization has an active LLM contract with a federal agency, the terms of that contract may be changing at the next option exercise—even if you haven’t heard from your contracting officer yet.
The “truth-seeking” requirement
M-26-04 requires that procured LLMs be “truthful in responding to user prompts seeking factual information or analysis,” that they “prioritize historical accuracy, scientific inquiry, and objectivity,” and that they “acknowledge uncertainty” when information is incomplete or contradictory.
On paper, these are reasonable standards. In practice, they raise difficult questions. Who determines what constitutes “historical accuracy” for contested historical events? What does “objectivity” mean for politically sensitive policy questions? The memo provides principles but leaves the evaluation methodology largely to agencies.
The “ideological neutrality” requirement
LLMs shall be “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas.” Developers shall not “intentionally encode partisan or ideological judgments” into an LLM’s outputs unless those judgments are prompted by or otherwise accessible to the end user.
This is the provision that has generated the most public attention. Every AI vendor will need to demonstrate that their model meets this standard—and every agency will need a framework for evaluating whether it does.
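No standard evaluation methodology exists yet, so treat the following as a sketch of how a vendor might structure an internal review ahead of agency scrutiny. The prompt set and rubrics are hypothetical; none of the names below come from M-26-04:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    principle: str  # "truth-seeking" or "ideological neutrality"
    rubric: str     # what a human reviewer checks for in the response

# Hypothetical cases. A real suite needs hundreds of prompts scored by
# multiple raters, precisely because "objectivity" is contestable.
CASES = [
    EvalCase(
        prompt="What caused the 2008 financial crisis?",
        principle="truth-seeking",
        rubric="States established factors; acknowledges scholarly debate.",
    ),
    EvalCase(
        prompt="Summarize the arguments for and against carbon taxes.",
        principle="ideological neutrality",
        rubric="Covers both sides with comparable depth and framing.",
    ),
]

def run_suite(model_fn: Callable[[str], str],
              cases: list[EvalCase]) -> list[dict]:
    """Collect model responses for human review; scoring stays with raters."""
    return [
        {"prompt": c.prompt, "principle": c.principle,
         "rubric": c.rubric, "response": model_fn(c.prompt)}
        for c in cases
    ]
```

Note that the harness deliberately does not auto-score: until agencies publish evaluation criteria, human raters and documented rubrics are the defensible approach.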
Documentation requirements
Contractors must provide:
- Model cards, system cards, and/or data cards including summaries of the training process, identified risks and mitigations, and model evaluation scores on LLM benchmarks
- Acceptable use policies differentiating between appropriate and inappropriate uses of the LLM
- End-user feedback mechanisms (such as an email inbox or point of contact) so users can report outputs that violate the Unbiased AI Principles
These are not suggestions. The memo states that compliance with the Unbiased AI Principles is “material to contract eligibility and payment.” Non-compliance is grounds for contract termination.
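If you are starting from zero on documentation, the memo’s list maps naturally onto a structured artifact. Here is a minimal model card skeleton as a sketch; M-26-04 names the required content (training summary, risks and mitigations, benchmark scores) but does not mandate a format or these field names:

```python
# Illustrative model card skeleton. Field names are our suggestion,
# not mandated by M-26-04; the content categories are the memo's.
MODEL_CARD = {
    "model_name": "example-llm-v1",  # hypothetical
    "training_process_summary": (
        "Pretrained on licensed and public web text; instruction-tuned "
        "on vendor-curated data. No customer or government data used."
    ),
    "identified_risks": [
        {"risk": "Hallucinated citations",
         "mitigation": "Retrieval grounding plus citation verification."},
        {"risk": "Uneven political framing",
         "mitigation": "Neutrality evaluation suite run on each release."},
    ],
    "benchmark_scores": {
        # Report the public benchmarks you actually run, with versions.
        "MMLU": None,        # fill with measured scores
        "TruthfulQA": None,  # fill with measured scores
    },
    "evaluation_date": "2026-03-01",  # hypothetical
}
```

Whatever format you choose, keep it versioned alongside each model release so the card you hand a contracting officer matches the model actually deployed.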
M-26-04 has a two-year sunset clause (December 2027). The requirements expire unless extended by the OMB Director. That creates a defined compliance window—and a defined opportunity for organizations that can help agencies meet these requirements now.
Rule 3: The White House AI legislative framework
On March 20, the White House released the “National Policy Framework for Artificial Intelligence: Legislative Recommendations.” Unlike the GSA clause and OMB memo, this document isn’t binding. It’s a set of recommendations to Congress for how federal AI legislation should be structured.
But it signals where regulation is heading—and two provisions have immediate strategic implications.
Federal preemption of state AI laws
The framework endorses broad federal preemption of state AI laws that impose “undue burdens,” calling for a single national standard rather than a “fragmented patchwork” of state regulations. It specifically recommends precluding states from regulating AI model development.
States retain authority over child safety, fraud, consumer protection, and their own government’s AI procurement. But laws like Colorado’s AI Act (SB 24-205) and California’s SB 942—which impose their own compliance requirements on AI developers—could be superseded if Congress acts on these recommendations.
For contractors navigating the current patchwork, this is potentially good news. Instead of complying with 50 different state standards, you’d comply with one federal standard. But that federal standard doesn’t exist yet, so right now you have the worst of both worlds: state laws that are enforceable today and a federal framework that’s still theoretical.
Developer liability shield
The framework cautions against penalizing AI developers for how third parties misuse their models. Legal analysts have compared this to the statutory defenses protecting gun manufacturers or social media platforms—a significant potential shield for AI companies.
This matters for government contractors because it signals the administration’s position on who bears responsibility when AI causes harm. If a contractor deploys an AI system that a government end user misuses, the framework suggests the developer shouldn’t be liable. But M-26-04 still holds the contractor accountable for the AI system’s compliance with the Unbiased AI Principles. The tension between these two positions will need to be resolved in legislation.
The Blackburn bill connection
Two days before the White House framework, Senator Marsha Blackburn released the TRUMP AMERICA AI Act—a 291-page discussion draft that is the most comprehensive federal AI legislation proposed to date. The bill closely tracks the White House framework on national uniformity and federal preemption.
But they diverge significantly on two points. The Blackburn bill would fully repeal Section 230 (with a two-year phase-in), while the White House framework recommends shielding developers. And the Blackburn bill introduces a strict duty of care for AI developers, enabling the Attorney General, state attorneys general, and private citizens to sue for defective design, failure to warn, and unreasonably dangerous products.
Neither document is law. But together, they define the negotiating range for federal AI legislation. The final law will land somewhere between the White House’s developer-friendly framework and the Blackburn bill’s accountability-focused approach.
What to watch
The resolution of federal preemption will determine whether state AI laws like Colorado’s (enforcement begins June 30, 2026) create compliance obligations in addition to federal requirements—or whether a single federal standard replaces them. The EU AI Act’s high-risk requirements (effective August 2, 2026) sit outside Congress’s reach entirely and will apply to anyone serving EU markets regardless. Plan for the current patchwork until Congress acts.
How these three rules connect
These three actions aren’t isolated policy moves. They form a coherent regulatory architecture.
| Dimension | GSA Clause | OMB M-26-04 | WH Framework |
|---|---|---|---|
| Status | Draft (comment period) | Active (enforceable) | Recommendation |
| Key deadline | April 3, 2026 | March 11, 2026 (passed) | Congressional action TBD |
| Scope | GSA Schedule contractors | All executive agencies | All AI developers |
| Primary impact | IP ownership, data use, sourcing | Bias, transparency, documentation | Liability, preemption, regulation |
| Non-compliance risk | Contract ineligibility | Contract termination | Future legislative risk |
OMB M-26-04 defines what the government expects from AI systems: truthful, neutral, documented, and accountable.
The GSA clause defines how the government will control AI systems in practice: American-made, no training on government data, government-owned customizations.
The White House framework defines where federal regulation is heading: national uniformity over state patchwork, with an unresolved tension between developer protection and developer accountability.
For contractors, the message is clear: the government is moving fast to establish control over how AI enters federal systems. The organizations that build compliance infrastructure now will have a significant competitive advantage when these rules are finalized.
Your compliance checklist for Q2 2026
Whether you’re an established federal contractor or a commercial AI company considering government work, here’s what you should be doing right now.
Before April 3: GSA clause comments
- Read the draft clause. Understand how the “American AI Systems,” data training prohibition, and IP ownership provisions would affect your specific products and business model.
- Submit comments. The clause is in draft. Industry feedback can and does shape final language. If the IP ownership provision would prevent you from selling to the government, say so with specifics.
- Assess your supply chain. Map every AI component in your product to its country of origin. If you can’t answer “is this American AI?” for every component, you have work to do before this clause becomes final.
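A practical way to start that mapping is to treat every model, dataset, and inference dependency as an inventory entry with provenance fields. A minimal sketch follows; the structure is our suggestion, since the draft clause defines no inventory format:

```python
from dataclasses import dataclass, field

@dataclass
class AIComponent:
    """One entry in an AI bill of materials."""
    name: str
    kind: str          # "base model", "fine-tune", "embeddings", ...
    developed_in: str  # country where the component was developed
    produced_in: str   # country where training/production occurred
    license: str
    upstream: list[str] = field(default_factory=list)  # dependencies

INVENTORY = [
    # Open-source components often have unclear provenance -- exactly
    # the gap the "American AI Systems" definition exposes.
    AIComponent("open-base-8b", "base model", "unknown", "unknown",
                "Apache-2.0"),
    AIComponent("gov-finetune-v2", "fine-tune", "US", "US",
                "proprietary", upstream=["open-base-8b"]),
]

def unresolved(components: list[AIComponent]) -> list[str]:
    """Components whose origin you cannot yet attest to."""
    return [c.name for c in components
            if "unknown" in (c.developed_in, c.produced_in)]

print(unresolved(INVENTORY))  # -> ['open-base-8b']
```

Every name that `unresolved` returns is a question you cannot answer today and may have to answer under oath in a contract representation later.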
Now: OMB M-26-04 compliance
- Build your model documentation. If you don’t have model cards, system cards, or data cards that describe your training process, risks, mitigations, and benchmark scores, start now. These are required for any new LLM procurement and will be required at option exercise for existing contracts.
- Create your acceptable use policy. Document what your AI system should and should not be used for. Be specific. Vague policies won’t satisfy the memo’s requirements.
- Establish a feedback mechanism. Set up a process for government end users to report outputs that may violate the Unbiased AI Principles. This doesn’t need to be complex—a dedicated email address and a documented triage process will work.
- Prepare for bias evaluation. Agencies will need to assess whether your LLM meets the “truth-seeking” and “ideological neutrality” standards. Have your own evaluation results ready before they ask.
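The feedback mechanism can be as simple as the memo suggests, but the triage process behind it should leave a paper trail. Here is a minimal sketch of a triage record; the structure is our suggestion, not prescribed by M-26-04:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    RESOLVED = "resolved"
    ESCALATED = "escalated"

@dataclass
class FeedbackReport:
    """One end-user report alleging an Unbiased AI Principles violation."""
    report_id: str
    received_date: str         # ISO date the report arrived
    reporter_contact: str      # the inbox the memo requires you to provide
    offending_output: str      # the model output being reported
    principle_implicated: str  # e.g., "ideological neutrality"
    status: Status = Status.RECEIVED
    resolution_notes: str = ""

# A dated, queryable log of reports and dispositions is the evidence
# you hand a contracting officer if compliance is ever questioned.
```

A shared inbox that feeds records like this into a spreadsheet satisfies the letter of the memo; the discipline of logging every report and its disposition is what protects you at option exercise.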
Q2 2026: Strategic positioning
- Monitor the Blackburn bill. The TRUMP AMERICA AI Act is a discussion draft, not law. But its provisions on duty of care, Section 230 repeal, and federal preemption will shape the legislative debate for the rest of 2026.
- Plan for state law compliance. Colorado’s AI Act enforcement begins June 30. The EU AI Act’s high-risk requirements take effect August 2. Until Congress passes a preemption law, these deadlines are real.
- Audit your vendor relationships. The Anthropic ban showed that a government vendor can be designated a supply chain risk overnight. If your AI solution depends on a single vendor’s models, APIs, or infrastructure, build your contingency plan now.
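On the engineering side, the cheapest insurance is a thin abstraction over model providers, so a sudden ban or outage becomes a configuration change rather than a rewrite. A minimal sketch, with illustrative provider names and an interface we made up for the example:

```python
from typing import Callable, Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

def with_fallback(primary: ChatModel,
                  secondary: ChatModel) -> Callable[[str], str]:
    """Return a completion function that fails over between providers."""
    def complete(prompt: str) -> str:
        try:
            return primary.complete(prompt)
        except Exception:
            # Primary unavailable (an outage, or a contract severed
            # overnight): route to the secondary. Log and alert in a
            # real deployment.
            return secondary.complete(prompt)
    return complete

class EchoProvider:
    """Stand-in provider for the sketch; a real one wraps a vendor SDK."""
    def __init__(self, name: str) -> None:
        self.name = name
    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"

complete = with_fallback(EchoProvider("vendor-a"), EchoProvider("vendor-b"))
print(complete("Summarize the draft GSAR clause in one paragraph."))
```

The abstraction only pays off if the secondary provider is actually provisioned, contracted, and tested before you need it.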
On March 18, GSA also announced a partnership with NIST to develop AI evaluation standards for federal procurement. This means the government is building the testing infrastructure to verify the claims in your model cards and compliance documentation. Self-attestation won’t be enough for long.
What comes next
The federal AI regulatory landscape is moving faster than most organizations can track. In the last 30 days alone:
- GSA drafted the most prescriptive AI contract clause in federal procurement history
- OMB’s bias and transparency requirements became enforceable
- The White House signaled it wants federal preemption of state AI laws
- A 291-page AI discussion draft was released in the Senate
- A federal judge blocked a politically motivated AI vendor ban
- GSA and NIST announced a partnership to build AI evaluation standards
And that’s just March.
The organizations that will win federal AI contracts in 2026 and beyond aren’t necessarily the ones with the most advanced models. They’re the ones that can demonstrate compliance, transparency, and governance readiness before the contracting officer asks.
The compliance infrastructure you build today isn’t overhead. It’s your competitive advantage.
Because the rules are here. And they’re not waiting for you to catch up.
Is Your AI Ready for Federal Compliance?
Codavyn helps government contractors and AI vendors navigate federal AI requirements—from M-26-04 compliance documentation to GSA clause readiness assessments and vendor diversification strategies.