A New White House Project on Responsible AI Sends a Message to the Private Sector, Including Contractors

It is hardly news that artificial intelligence (AI) has captured attention across the federal government. Wiley’s multidisciplinary AI team has been involved in efforts at the National Institute of Standards and Technology (NIST), the National Telecommunications and Information Administration (NTIA), the Federal Trade Commission (FTC), and other agencies. But federal procurement may be one of the first areas to receive substantive requirements related to AI. The Biden Administration continues contributing to the discussion on responsible use and development of AI through a recently released Request for Information (RFI), which requests public comment to inform the White House Office of Science and Technology Policy’s (OSTP) development of policy guidance for the proper use of AI in the U.S. government. The release of this guidance, alongside existing voluntary frameworks from the U.S. Department of Defense (DoD) and NIST, will shape expectations and oversight of AI use by the private sector, with government contractors perhaps facing the earliest tangible requirements related to responsible use of AI. In this blog, our team shares reactions to the White House’s effort in light of ongoing work on AI across government. Companies that provide – or want to provide – technology or services that rely on AI should pay particular attention to these and other federal announcements.

What is the RFI?

On May 23, 2023, the White House outlined steps it plans to take in its larger push toward responsible AI development and deployment. In addition to the RFI, the Administration released an updated roadmap for federal funding of AI research projects and a report, Artificial Intelligence and the Future of Teaching and Learning, from the U.S. Department of Education, outlining the risks and opportunities of AI in education. These actions aim to align with the Administration’s previous AI priorities discussed in its AI Bill of Rights and National Cybersecurity Strategy. All of this follows Executive Order 13859 of 2019, which states that it is the “policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy.”

The RFI is intended to inform a whole-of-government strategy for AI. A few of the RFI’s 29 questions show an interest in some form of regulation:

  • The RFI asks about “forms of voluntary or mandatory oversight of AI systems that would help mitigate risk[.] Can inspiration be drawn from analogous or instructive models of risk management in other sectors, such as laws and policies that promote oversight through registration, incentives, certification, or licensing?”
  • The RFI asks about procurement specifically: “How can the Federal Government work with the private sector to ensure that procured AI systems include protections to safeguard people’s rights and safety? What unique opportunities and risks would be presented by integrating recent advances in generative AI into Federal Government services and operations?”

Comments are due July 7, 2023. Below, we discuss what guidance may emerge from the RFI and what innovators should consider in the interim.

What guiding principles can be expected?

OSTP is not writing on a blank slate. The RFI references both the AI Bill of Rights, which OSTP published in October 2022, and version 1.0 of NIST’s voluntary AI Risk Management Framework (AI RMF), released by the Department of Commerce in January 2023. These two voluntary frameworks align on many key goals and values and signal the likely direction of OSTP’s guidance.

Both the AI RMF and the AI Bill of Rights recognize that AI has the potential to benefit and improve lives, but both also focus on steps to address risks. The AI RMF considers “trustworthy” elements of AI products to be: (1) valid and reliable, (2) safe, (3) secure and resilient, (4) accountable and transparent, (5) explainable and interpretable, (6) privacy enhanced, and (7) fair with harmful bias managed. Similarly, the AI Bill of Rights sets out five guiding principles for deployment of AI: (1) safe and effective systems, (2) algorithmic discrimination protections, (3) data privacy, (4) notice and explanation, and (5) human alternatives, consideration, and fallback. Taken together, the frameworks map out an approach to addressing risk that, after consideration of public comment, will likely carry through into U.S. Government guidance.

For example, a key issue in deploying AI is identifying and taking steps to counteract potential harmful bias, which can result in discriminatory outcomes. Suggested approaches in this area include conducting proactive bias assessments, performing ongoing disparity testing, and ensuring that data sets are diverse, robust, and free from proxies for demographic features. The frameworks also suggest seeking broad and diverse input to help identify and combat potential bias. 
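To make the disparity-testing idea concrete, here is a minimal Python sketch. It is purely illustrative: the group labels, example data, and the 0.8 threshold (a heuristic loosely inspired by the EEOC’s four-fifths rule) are our own assumptions, not requirements drawn from the RFI or either framework, and the appropriate statistical test will depend on context.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the favorable-outcome rate for each demographic group.

    `outcomes` is a list of (group_label, was_selected) pairs -- e.g.,
    loan approvals or resume screens produced by an AI system.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (an illustrative heuristic, not a legal standard)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Example audit of results from a hypothetical screening model
results = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
rates = selection_rates(results)
print(rates)                    # {'A': 0.67, 'B': 0.33} (approx.)
print(flag_disparities(rates))  # ['B'] -- worth a closer look
```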

The frameworks also highlight the need to closely monitor AI systems to ensure safe operation, avoid unintended outcomes, and identify and address potential vulnerabilities. Recommendations include pre-deployment testing, ongoing monitoring and reporting, use of high-quality data, and potentially independent evaluations to mitigate risks.
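As a hedged sketch of what ongoing monitoring could involve, the Python snippet below compares live model outputs against a pre-deployment baseline and raises a flag when the distribution shifts. The function name, the simple mean-shift statistic, and the 0.1 threshold are all illustrative assumptions; production systems typically use richer tests (e.g., population stability index or Kolmogorov-Smirnov).

```python
import statistics

def drift_alert(baseline_scores, live_scores, max_shift=0.1):
    """Flag when live model outputs drift from the pre-deployment baseline.

    Uses a simple difference in mean scores; `max_shift` is an arbitrary
    illustrative threshold, not a standard from any framework.
    """
    shift = abs(statistics.mean(live_scores) - statistics.mean(baseline_scores))
    return shift > max_shift

# Example: scores recorded during pre-deployment testing vs. in production
baseline = [0.42, 0.51, 0.47, 0.49, 0.45]
live = [0.61, 0.66, 0.58, 0.63, 0.60]
print(drift_alert(baseline, live))  # True -- worth investigating
```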

In addition, as conveyed in the frameworks, entities using AI in decision-making should clearly communicate when AI is used and what its outputs mean, calibrated to risk – particularly for high-impact decisions. Depending on the risks involved, such communication should consider how best to answer questions about “what happened” in the system, “how” a decision was made, and “why” the system made that decision, along with its meaning or context for the user. This kind of transparency also helps address other risks, because operators and users who understand AI results are better positioned to catch and correct problems.

Because AI systems – including recently deployed and increasingly popular generative AI systems – require large amounts of data, any entity developing an AI system must be mindful of protections placed on data usage, collection, and access to comply with existing data and privacy protection laws and regulations. The frameworks recommend that AI system developers and users promote privacy in a number of ways, including through privacy-enhancing technologies and minimizing personally identifiable information through de-identification or aggregation.
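For illustration only, the sketch below shows two common data-minimization techniques of the kind the frameworks gesture toward: keyed hashing of direct identifiers (pseudonymization) and small-cell suppression for aggregate reporting. The salt value, field names, and group-size threshold are hypothetical, and keyed hashing alone does not guarantee compliance with any particular privacy law.

```python
import hashlib
import hmac
from collections import Counter

SECRET_SALT = b"rotate-and-store-outside-source-control"  # placeholder value

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash so records can be
    linked for analysis without exposing the underlying PII. Note that
    keyed hashing is pseudonymization, not full anonymization --
    re-identification risk still has to be assessed."""
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()

def aggregate_only(records, min_group_size=5):
    """Release per-category counts only when the group is large enough --
    a small-cell suppression rule often used to reduce re-identification
    risk in aggregate reporting."""
    counts = Counter(category for _, category in records)
    return {c: n for c, n in counts.items() if n >= min_group_size}

# Example: report outcome counts only for sufficiently large groups
records = [(pseudonymize(f"user{i}@example.com"), "approved") for i in range(6)]
records += [(pseudonymize("solo@example.com"), "denied")]
print(aggregate_only(records))  # {'approved': 6} -- the lone 'denied' is suppressed
```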

Finally, the frameworks emphasize that humans have a key role in overseeing AI system development, use, and evaluation. The AI RMF, for example, includes human centricity as a “core concept” and embeds human oversight and management in all aspects of risk assessment, management, and mitigation.

What does the guidance mean for government acquisition of AI and AI systems?

The Federal Acquisition Regulation (FAR) and the Defense Federal Acquisition Regulation Supplement (DFARS) do not currently regulate the procurement of AI separately from any other procured good or service. The federal government has been procuring AI and AI-related systems and services for years, well before any OSTP guidance. The majority of current AI contracts appear to be held by DoD, which has developed a strategy to ensure acquisition of AI in line with the DoD AI Ethical Principles, adopted in 2020. DoD aims to identify and mitigate risks throughout the acquisition lifecycle using an Acquisition Toolkit, which includes standard solicitation language, evaluation criteria, and contract language. This contract language provides for independent government testing and evaluation of AI capabilities; remediation methods when AI use does not comply with the DoD AI Ethical Principles; vendor training and documentation requirements; performance monitoring; and, potentially, data rights and deliverables. Much of DoD’s AI purchasing is through Other Transaction Agreements (OTAs) and other types of research and development vehicles that are not governed by the FAR or DFARS.

But DoD is hardly alone. For example, the General Services Administration (GSA) has established an AI Center of Excellence (CoE). The CoE has held AI “challenges,” most recently an applied human healthcare AI challenge. The Department of Energy also has an Artificial Intelligence and Technology Office that has developed its own risk management playbook. Clearly, many agencies are moving forward with AI policies and procurements of AI-enabled products and services ahead of any guidance from OSTP.

Nonetheless, the importance being placed on OSTP’s work, coupled with evolving expectations and questions about AI from Congress and across government, suggests that innovators should try to anticipate future developments. This includes work by NIST and the FTC.

How should companies start to anticipate the emergence of AI guidance?

The technology sector should pay attention to current and upcoming guidance and other government work on AI. In particular, companies that may seek to support federal missions using AI through a FAR-based contract or other vehicle should be prepared for responsible AI use to become a factor in the evaluation process. Accordingly, companies should start considering whether their compliance programs need an update. Contractor compliance programs will likely need to provide for training of personnel on the responsible use of AI; a method for monitoring personnel compliance with key principles for the use of AI; procedures for documenting, reporting, and mitigating misuse of AI; and more. Companies should also consider policies around the use of generative AI tools, which we outline in more detail here.
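As one hedged example of what the monitoring and documentation elements of such a compliance program could look like in practice, the Python sketch below appends a structured audit record each time personnel use an AI tool for work product. All field names and the simple file-based storage are illustrative assumptions; a real program would be designed around counsel’s advice and the contract’s actual terms.

```python
import datetime
import json

def log_ai_use(user, tool, purpose, reviewed_by=None, path="ai_use_log.jsonl"):
    """Append a structured record of an AI-assisted work product so the
    compliance team can monitor adherence to internal AI-use policies.

    Field names are illustrative, not drawn from any regulation or
    framework; `reviewed_by` captures the human-oversight step.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "purpose": purpose,
        "human_reviewed_by": reviewed_by,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example entry for a hypothetical internal drafting task
log_ai_use(user="jdoe", tool="generative drafting assistant",
           purpose="first draft of internal memo", reviewed_by="asmith")
```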

When deciding whether to pursue a federal government AI opportunity, companies should read the solicitation closely to ensure that they can meet the contract’s AI-related requirements. Companies should weigh the risks of operating in this developing space under new rules and procedures against the benefits of participating in a promising new market to assess whether the opportunity is a good fit.

Looking forward, companies using AI and algorithmic technology will need to grapple not only with the direct impacts of regulatory efforts by federal agencies and states, but also with impacts on government contract formation and administration to the extent they are interested in bidding on government opportunities. As these AI policies begin to emerge, companies developing these technologies should consider what processes need to be built into the underlying AI systems and databases to ensure compliance with likely regulations and contract terms.

***

Wiley’s multidisciplinary AI team assists clients in compliance, risk management, and advocacy related to AI technology and algorithmic decision-making. Please reach out to any of the authors with questions.
