Key point: “Winning the Race: America’s AI Action Plan,” the Trump Administration’s blueprint for federal artificial intelligence (AI) policy, and three new Executive Orders (EO) propose a wide-ranging federal strategy intended to solidify U.S. leadership in AI. For business leaders and public sector stakeholders, the Action Plan and EOs may be a double-edged sword: catalyzing AI innovation through deregulation, while also creating a complex, opaque compliance environment that demands careful navigation.

AI Action Plan Overview

The Action Plan’s thirty initiatives are segmented into three “pillars”: (I) Accelerate AI Innovation; (II) Build American AI Infrastructure; and (III) Lead in International AI Diplomacy and Security. Below, we list and describe the various initiatives in each pillar.

The Action Plan is more than a policy document. It is a roadmap of the strategic intent of the federal government. Stakeholders who align with its direction early may be best positioned to lead in the AI-driven economy.

Although the Action Plan and EOs will create a more permissive environment for AI innovation, the provisions in these documents signal the potential for regulatory uncertainty. Businesses should proactively assess their AI systems for robustness, fairness, and explainability, and should continue to monitor future federal rulemaking, agency guidelines, and procurement criteria to navigate these uncharted waters.

Pillar I: Accelerate AI Innovation

  • Remove Red Tape and Onerous Regulation: Streamline federal regulations to accelerate AI development
  • Ensure Frontier AI Protects Free Speech: AI models in federal use must uphold free speech and avoid bias
  • Encourage Open-Source & Open-Weight AI: Promote transparency and access through open AI models
  • Enable AI Adoption: Support AI integration across sectors and industries
  • Empower American Workers: Reskill and upskill the workforce for AI-driven roles
  • Support Next-Generation Manufacturing: Invest in AI to modernize U.S. manufacturing
  • Invest in AI-Enabled Science: Fund AI applications in scientific research
  • Build World-Class Scientific Datasets: Create high-quality datasets for AI training
  • Advance the Science of AI: Support foundational AI research and innovation
  • Invest in Interpretability and Robustness: Improve AI system transparency and reliability
  • Build an AI Evaluations Ecosystem: Establish national standards for AI performance
  • Accelerate AI Adoption in Government: Expand AI use in federal operations
  • Drive AI in the Department of Defense: Integrate AI into defense systems
  • Protect AI Innovations: Safeguard commercial and government AI IP
  • Combat Synthetic Media in Legal System: Address misuse of AI-generated content

Pillar II: Build American AI Infrastructure*

  • Streamline Permitting & Manufacturing for Data Centers and Energy Infrastructure: Expand the capacity to build AI computing infrastructure rapidly across the nation
  • Develop a New Electric Grid to Keep Pace with AI Innovation: Develop a comprehensive energy strategy to expand the existing power grid and embrace new energy generation sources
  • Restore American Semiconductor Manufacturing: Remove burdens and obstacles to increase semiconductor manufacturing within the U.S.
  • Build High-Security Data Centers for Military and Intelligence Community Use: Build data centers for military and intelligence agencies and establish technical standards to ensure the digital and physical security of those data centers
  • Train a Skilled Workforce for AI Infrastructure: Develop a roster of key roles and priorities for the AI-related workforce, including tradesmen, and create training programs that align with those prioritized roles
  • Bolster Critical Infrastructure Cybersecurity: Promote the sharing of AI-security threat information and intelligence across U.S. critical infrastructure sectors
  • Promote Secure-by-Design AI Technologies: Promote resilient and secure AI development and deployment commensurate with end-user needs
  • Promote Mature Federal Capacity for AI Incident Response: Establish incident response processes that include AI in standards, response frameworks, best practices, and technical capabilities

*Notably, several of the initiatives in Pillar II are also addressed in the EOs the President signed in parallel with the Action Plan. 

Pillar III: Lead in International AI Diplomacy and Security

  • Export American AI to Allies & Partners: Meet global demand for AI by “exporting full AI technology stack” to U.S. security partners
  • Counter Chinese Influence in International Governance Bodies: Ensure that U.S. interests are reflected in the international or multilateral governance of AI
  • Strengthen AI Compute Export Control Enforcement: Urge greater export control regulation and enforcement for key enabling technologies
  • Plug Loopholes in Existing Semiconductor Manufacturing Export Controls: Develop new export controls on semiconductor manufacturing sub-systems
  • Align Protection Measures Globally: Build the “plurilateral” backing among U.S. allies necessary to support AI security on a global scale
  • Ensure the US Leads on Evaluating National Security Risks in Frontier Models: Identify national security risks for AI frontier systems through public-private partnership with developers
  • Invest in Biosecurity: Create enforcement mechanisms around robust nucleic acid sequence screening and customer verification procedures

Executive Orders Issued in Conjunction with the Action Plan

On July 23, 2025, President Trump issued three EOs in conjunction with the publication of the Action Plan:

  1. “Accelerating Federal Permitting of Data Center Infrastructure” streamlines the permitting process, expedites environmental reviews, authorizes access to federal lands, and offers financial incentives to accelerate the construction of AI data centers. The EO also directs the Department of Defense to leverage military facilities for the construction and installation of data center support equipment (energy infrastructure, power sources, semiconductors, etc.) necessary to operate data centers.
  2. “Promoting the Export of the American Technology Stack” establishes a national effort to support the U.S. AI industry by promoting the global deployment of AI technologies originating in the U.S. The EO directs the Secretary of Commerce to implement a program to support the development and deployment of AI export packages and to call for proposals from private industry to include in that program. The Secretary of State is responsible for mobilizing federal financing tools to support certain AI export packages.
  3. “Preventing Woke AI in the Federal Government” announces a policy to promote the innovation and use of trustworthy AI and directs agency heads to only purchase large language model systems developed in accordance with “Unbiased AI Principles.”

While EOs only have the force and effect of law on federal agencies and their employees, the U.S. Government is the world’s largest customer. The economies of scale involved in federal procurement activities often become de facto standards that carry over to private-sector economic activity.

Commercial Opportunities and Potential Risks

The Action Plan’s deregulatory posture is explicit. It calls for the removal of “onerous regulation” that impedes AI development and deployment. Further, Pillar I cautions that federal funding will not “be directed towards states that have enacted burdensome AI regulations that waste these funds.” This restriction on burdensome state laws arguably resuscitates the 10-year moratorium on AI laws at the state level that the U.S. Senate rejected during negotiations in connection with the One Big Beautiful Bill Act and may create uncertainty regarding federal funding opportunities in some states over others.

Pillar I and the first EO above promote open-source and open-weight AI models, lowering the entry barriers for startups and small and medium-sized businesses. Pillar I and the second EO propose federal investments in AI-enabled science, next-generation manufacturing, and national AI infrastructure — including expedited permitting for data centers and semiconductor fabrication facilities — to further enhance the commercial landscape. Similar to the June 2025 Texas Responsible Artificial Intelligence Governance Act (TRAIGA), Pillar I proposes regulatory sandboxes around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools while committing to the open sharing of data and results. Although running an AI model in a government sandbox offers innovation opportunities, developers face legal, operational, and strategic risks:

  • Sandbox participation may result in greater scrutiny of the model by regulators and may inform future regulations that would be applicable to the developer’s model. 
  • Developers will have to allocate financial resources, personnel, and time to manage sandbox participation, diverting resources from product development or deployment.
  • Unless agencies implement complete reciprocity for the work done in a sister-agency’s sandbox, developers may need to reconfigure or retrain models to satisfy individual sandbox procedures and requirements.

Pillar I and the third EO intend to revise federal procurement guidelines to require agencies to contract only with frontier LLM developers whose models objectively reflect truth and are free of “social engineering agendas.” However, this edict presents at least two pragmatic challenges.

  • Developers of frontier models that analyze questions in the humanities and in cutting-edge technologies will be hard-pressed to prove the LLM outputs are objectively true, because social norms and scientific understandings change over time.
  • Frontier and foundational LLMs have significant overlap in their training data, which means a developer may struggle to disentangle the training data used in a foundational LLM from the frontier LLM that rests on top of it.

Pillar II’s objective to promote a mature federal capacity for AI incident response will need to be reconciled with EO 14239, a separate order issued in March that places the onus on state and local governments to prepare for cyberattacks. To the extent EO 14239 succeeds in pushing cybersecurity incident response duties to the state and local government levels, that result may complicate coordination and communication for a national business and may produce inconsistent responses in a multi-state incident.

Internationally, Pillar III and the second EO direct the Commerce and State Departments to export AI packages—including hardware, models, and standards—to allied nations. While this effort will open new global markets for American business, the opportunity will require American businesses to factor international law into those decisions, specifically in areas such as cross-border data transfers, data privacy, high-risk AI systems, and international licensing.

Regulatory and Legal Risks

Despite its deregulatory tone, the Action Plan introduces novel compliance challenges. It mandates that frontier AI models used in federal procurement uphold free speech and avoid ideological bias. This requirement, while politically framed, raises practical questions about content neutrality, algorithmic transparency, and First Amendment jurisprudence in AI design.

The Action Plan also emphasizes the need for AI systems to be “interpretable, controllable, and robust” and calls for the creation of a national AI evaluations ecosystem. These initiatives suggest that voluntary standards today may evolve into binding requirements tomorrow—especially for firms contracting with the federal government or operating in regulated sectors. Additionally, the question of explainability and interpretability with AI systems is far from settled, with one of the foundational questions yet unanswered—explainable to which audience?

Consistent with early statements of the Trump Administration, the Action Plan focuses on protecting “commercial and government AI innovations.” This focus implies that there will be heightened scrutiny around cybersecurity, trade secrets, and export controls and appears to be consistent with recent Department of Justice (DOJ) enforcement efforts. Compliance, legal and risk mitigation teams should anticipate increased enforcement activity in these areas.

Strategic Takeaways

From a risk management perspective, business leaders will be challenged to follow the old Latin proverb—Audentis Fortuna iuvat—which we translate as “Fortune favors the bold.” The proverb was closely associated with Pliny the Elder, the author and naval commander, whose life and death provide something of a cautionary tale—bold though he was, he perished leading a mission to rescue victims of the Mount Vesuvius volcanic eruption that destroyed Pompeii.

Risk managers are caught in a similar dilemma: allowing innovators to push forward in implementing a powerful new technology but doing so in a way that safeguards the enterprise from excessive risks, even when taken with the best of intentions. Articulating a vision, adhering to corporate values, and making informed decisions will be the navigational tools for business leaders in this new frontier.

Erik Dullea

As head of Husch Blackwell’s Cybersecurity practice group, Erik assists clients in all aspects of cybersecurity and information security compliance and data breach response. Erik previously served as the acting deputy associate general counsel for the National Security Agency’s cybersecurity practice group before returning to the firm in 2023.

Owen Davis

Owen assists employers across industry sectors – from small businesses to Fortune 500 corporations – to identify changing workplace law at a local, state and federal level. He offers legal guidance on employment agreements, restrictive covenants, personnel policies and other human resources issues. Owen also represents employers before state and federal courts as well as administrative agencies on matters related to discrimination, retaliation, harassment, and wage and hour violations.

Ana Cowan

Ana has more than 16 years of experience representing physicians, physician groups, ambulatory surgery centers, and non-profit health organizations in regulatory, corporate, and administrative healthcare matters.