Administration Releases New Executive Order Directing Federal Agencies on Artificial Intelligence (AI)

December 4, 2020

On December 3, the President signed a new Executive Order (EO) on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” This EO follows up on the 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence (AI), as well as the just-released guidance to agencies on AI regulatory approaches, and it sets out principles for most federal agencies to use in implementing AI technology. These principles are consistent with those the Administration has promoted in both its domestic and international efforts on AI governance, and are designed to promote public trust in AI. The EO starts yet another near-term process for agency evaluation of AI across the federal government and creates additional opportunities for private sector engagement, as we enter a critical period over the next year for agencies’ evaluation of AI and other emerging technologies.

The new EO comes amidst a flurry of activity and attention to federal use of AI. As mentioned, President Trump issued an earlier AI Executive Order in 2019, which launched the American AI Initiative. Pursuant to that previous EO, the Office of Management and Budget (OMB) just released final guidance to federal agencies—including independent agencies—regarding AI regulatory approaches. Additionally, the current National Defense Authorization Act (NDAA), which is expected to go to the President’s desk, contains several provisions addressing AI, including but not limited to government use of and expectations regarding this important emerging technology.

The December 3 EO applies to most federal agencies, excluding independent agencies and the Department of Defense and Intelligence Community agencies (which each have already developed their own AI principles). The new EO establishes certain principles for agencies to use in implementing AI technology, and directs agencies to conduct inventories of their AI uses and publicize them. It also sets up a process for the OMB to provide a further “roadmap” of policy guidance including opportunity for public input. As with the OMB’s most recent guidance to agencies on AI regulatory approaches, which was just released in November, the new EO pushes key agency efforts into the next Administration, and over the first half of next year.   

Specifically, the EO directs federal agencies to adhere to the following principles when designing, developing, acquiring, and using AI:

  • Lawful and respectful of our Nation’s values. This includes consistency with applicable laws and policies, including those addressing privacy, civil rights, and civil liberties.
  • Purposeful and performance-driven. Agencies should pursue AI where “the benefits of doing so significantly outweigh the risks, and the risks can be assessed and managed.”
  • Accurate, reliable, and effective. Agencies should make sure that their application of AI is “consistent with the use cases for which that AI was trained, and such use is accurate, reliable, and effective.”
  • Safe, secure, and resilient. This includes AI resilience that takes account of “systematic vulnerabilities, adversarial manipulation, and other malicious exploitation.”
  • Understandable. Agencies must make sure that AI operations and outcomes are “sufficiently understandable by subject matter experts, users, and others, as appropriate.” This aligns with work that the National Institute of Standards and Technology (NIST) is doing on explainability in AI.
  • Responsible and traceable. Agencies should ensure that “human roles and responsibilities are clearly defined, understood, and appropriately assigned.”  
  • Regularly monitored. Agencies should implement regular testing to monitor compliance with these Principles, and be able to “supersede, disengage, or deactivate existing applications of AI” if outcomes are not acceptable.

Overall, the new EO is based on the foundation that AI needs to have “public trust” to be adopted. This is consistent with the 2019 EO and the Administration’s promotion of “trustworthy” AI, which has driven efforts on standards at the National Institute of Standards and Technology (NIST) and internationally, through organizations like the Global Partnership on AI (GPAI).

In terms of next steps:

  • Section 4 requires OMB to, within 180 days (or by June 1, 2021), publicly post a “roadmap” for the policy guidance that OMB intends to create or revise to better support the use of AI, consistent with the EO. This roadmap must include, “where appropriate, a schedule for engaging with the public and timelines for finalizing relevant policy guidance.” OMB should also consider voluntary consensus standards developed by industry—which the EO encourages agencies to continue to use—when revising or developing AI guidance for agencies. This timeline aligns with OMB’s deadline in late May for agencies to submit their plans for approaching AI regulation, as we have previously discussed.
  • Section 5 sets up a mechanism for ongoing inventory of AI use cases, via the Federal Chief Information Officers Council (CIO Council). Specifically, within 60 days (or by February 1, 2021), the CIO Council “shall identify, provide guidance on, and make publicly available the criteria, format, and mechanisms for agency inventories of non-classified and non-sensitive use cases of AI by agencies.” Once the CIO Council completes that task, each agency will have 180 days to prepare an inventory of its current and planned “non-classified and non-sensitive use cases of AI,” an exercise each agency will need to perform annually. Additionally, agencies are expected to share—within 60 days of completion—their inventories with other agencies, to the extent practicable and in accordance with law and policy. They are also expected to make their inventories public, again to the extent practicable and in accordance with law and policy, within 120 days of completion.
  • Section 5 also requires that within 120 days from agencies completing their inventories, each agency must develop additional AI plans, specifically “either to achieve consistency with this order for each AI application or to retire AI applications found to be developed or used in a manner that is not consistent with this order.” They must subsequently implement these plans.
  • Section 7 calls on the Presidential Innovation Fellows (PIF) program, administered by the General Services Administration (GSA), within 90 days (or by March 3, 2021), to “identify priority areas of expertise and establish an AI track to attract experts from industry and academia to undertake a period of work at an agency.”
  • Section 7 also calls for the Office of Personnel Management (OPM), along with GSA, to “create an inventory of Federal Government rotational programs and determine how these programs can be used to expand the number of employees with AI expertise at the agencies.” The timeline for this inventory of Government rotational programs is 45 days (or by January 17, 2021). A report from OPM “with recommendations for how the programs in the inventory can be best used to expand the number of employees with AI expertise at the agencies” will follow.

One other likely impact is that the inventory process will generate a great deal of discussion on the scope and applicability of what constitutes AI capabilities in the federal technology ecosystem. Given the varying missions and maturity levels of federal agencies, there will be a spectrum of definitions as to what constitutes an AI system. As the inventory progresses, these definitional issues will likely be addressed in a more holistic manner, potentially through a classification framework that considers factors like the balance of human and machine decision-making and the impact of the processes to which AI-generated decisions would contribute, especially from a privacy, civil rights, and civil liberties perspective.

Wiley’s Artificial Intelligence practice counsels clients on AI compliance, risk management, and regulatory and policy approaches, and we engage with key government stakeholders in this quickly moving area. Please reach out to any of the authors for further information.
