White House Releases National Legislative Policy Framework for AI
On March 20, the White House released a National AI Legislative Framework (Legislative Framework or Framework), outlining the Trump Administration’s priority objectives for comprehensive federal AI legislation. The Legislative Framework sets forth seven key objectives, emphasizing innovation and U.S. competitiveness while addressing child safety, consumer protection, intellectual property, and national security concerns.
The Legislative Framework Is Part of the Administration’s Broader Policy Efforts to Promote AI. The Legislative Framework responds to the December 11, 2025 Ensuring a National Policy Framework for Artificial Intelligence Executive Order (AI National Framework EO), which established a policy to “sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI” and, among other directives, called for the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to develop a legislative recommendation establishing a federal policy framework for AI. The AI National Framework EO directed that the recommended framework preempt state AI laws that conflict with its policy statement, but that such preemption should not extend to: (1) children’s safety; (2) AI compute and data center infrastructure, other than generally applicable permitting reforms; or (3) state procurement and use of AI.
The Legislative Framework Provides Seven High-Level Objectives for Federal AI Legislation. The Legislative Framework signals the Administration’s priorities as Congress considers federal AI legislation to establish a unified national approach to AI governance.
(1) Protecting Children and Empowering Parents: The Framework states that “AI services and platforms must take measures to protect children, while empowering parents to control their children’s digital environment and upbringing.” Under this objective, the Administration touts the recently enacted Take It Down Act and emphasizes that existing child privacy protections already apply to AI systems. In addition to these existing frameworks, the Framework recommends that Congress take additional steps to regulate AI platforms and services likely to be accessed by minors, including to “establish commercially reasonable privacy protective age-assurances requirements (such as parental attestation)” and “require [such platforms] to implement features that reduce the risks of sexual exploitation and self-harm to minors.” On the issue of preemption, the Administration urges Congress to “ensure that it does not preempt states from enforcing their own generally applicable laws protecting children.”
(2) Safeguarding and Strengthening American Communities: This objective holds that “AI development, including data infrastructure buildout, should strengthen American communities and small businesses through economic growth and energy dominance, while ensuring communities are protected from harmful impacts.” This objective covers a broad range of issues, including federal permitting for AI infrastructure and law enforcement efforts to combat AI-enabled fraud, with a focus on illegal activity targeted at vulnerable populations.
(3) Respecting Intellectual Property Rights and Supporting Creators: With this objective, the Administration states that “American creators, publishers, and innovators should be protected from AI-generated outputs that infringe their protected content, without undermining lawful innovation and free expression.”
(4) Preventing Censorship and Protecting Free Speech: This objective states that “[t]he federal government must defend free speech and First Amendment protections, while preventing AI systems from being used to silence or censor lawful political expression or dissent.”
(5) Enabling Innovation and Ensuring American AI Dominance: Guided by the objective that “[t]he United States must lead the world in AI by removing barriers to innovation, accelerating deployment of AI applications across sectors, and ensuring broad access to the testing environments needed to build world-class AI systems,” this portion of the Framework recommends that Congress (1) establish regulatory sandboxes for AI applications and (2) “provide resources to make federal datasets accessible to industry and academia in AI-ready formats for use in training AI models and system.” Of note, this objective recommends that Congress should not create a new federal rulemaking body to regulate AI, and that it should instead support “sector-specific AI applications through existing regulatory bodies with subject matter expertise and through industry-led standards.”
(6) Educating Americans and Developing an AI-Ready Workforce: This objective states that “American workers must benefit from AI-driven growth, not just the outputs of AI development, through youth development and skills training, the creation of new jobs in an AI-powered economy, and expanded opportunities across sectors.” Here, the Administration recommends non-regulatory education and training approaches.
(7) Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws: The final objective states that “[t]he federal government must establish a federal AI policy framework to protect American rights, support innovation, and prevent a fragmented patchwork of state regulations that would hinder our national competitiveness, while respecting federalism and State rights.” This objective recommends a preemptive federal law to avoid “fifty discordant” state laws, explaining that “[p]reemption . . . ensure[s] that State laws do not govern areas better suited to the Federal Government or act contrary to the United States’ national strategy to achieve global AI dominance.” At the same time, this objective states that the “national standard should respect key principles of federalism and not preempt:” (1) “[t]he traditional police powers retained by the states to enforce laws of general applicability against AI developers and users, including particular laws to protect children, prevent fraud, and protect consumers;” (2) “[s]tate zoning laws, including state authorities, to determine the placement of AI infrastructure;” or (3) “[r]equirements governing a state’s own use of AI, whether through procurement or services they provide like law enforcement and public education.”
***
Wiley’s Artificial Intelligence Practice counsels clients on AI compliance, risk management, and regulatory and policy approaches, and we engage with key government stakeholders in this rapidly evolving area. Please reach out to the authors with any questions.