
Questions to Ask Before Drafting Proposals With AI

Law360
August 9, 2023



Proposal writing can feel like its own art, one typically practiced in a hurry as the team assembles narrative inputs, subcontract pricing, and revisions from various color teams to meet what always feel like compressed deadlines. But what if that first proposal draft could show up, completely written, after some instructions and a few clicks?

With all the attention to advances in generative artificial intelligence (AI), surely more than a few capture managers have started to dream of this scenario.

It likely comes as no surprise that generative AI tools for proposal writing are arriving and being offered to federal contractors. In some ways, these tools are just part of AI's expanding use and governance in federal contracting. Agencies have been procuring AI from contractors for years,[1] including through marketplace-style platforms.[2]

Some agencies are exploring the use of AI to draft their solicitations.[3] Certain agencies have developed strategies to ensure AI acquisitions accord with ethical principles.[4] Various pending bills and proposals could affect government contractors' use of AI.[5]

Proposal drafting tools may show promising capabilities. And reflexively rejecting their use may feel unsatisfying. But the risks these types of tools present — both legal and practical — are real and warrant thoughtful analysis. This article offers three questions to help highlight some of those risks for any contractor considering adding AI tools to its proposal drafting.

How Was the AI Tool's Model Developed?

Generative AI tools can at times feel like a black box — in goes the prompt and out comes a book report on Union Station. These outputs are shaped by both the content fed into an AI language model and how the model has been trained.

AI firms generally treat both of those features, the model's contents and its training, as proprietary, contributing to the black box atmosphere.

The model's contents matter. Part of the art of proposal writing is responding to the solicitation's prompts in ways that might seem foreign to a model trained on other types of writing. So you might expect an AI proposal-drafting tool to rely on a language model built by ingesting, and then being trained on, actual federal proposals.

Think about where those proposals would come from, though. Whether competitive or not, proposals — and quotes and similar submissions — are generally nonpublic and closely guarded by contractors and agencies alike.

One option is that the proposal-writing tool ingests proposals from only the contractor using the tool. The tool would also draw on the many publicly available documents of a general nature that make up a typical large language model's training data.

One could ask: Would the proposals from a single contractor, even one of the biggest, provide a large enough corpus for the tool to learn the subtleties of federal proposal writing? It might be like asking the tool to write a very specific novel based on the works of only a single author.

On the other hand, if a contractor uses a proposal-writing tool that includes nonpublic proposals from other contractors, could a competitor or an enforcement agency argue that the contractor has violated the Procurement Integrity Act, among other things?

This argument may seem strong, or it may seem strained, but the question is an open one, and it's not certain when or how the government will answer it.

Maybe, as a third option, AI tools will be built on publicly available proposals, like redacted proposals obtained through Freedom of Information Act requests. How contractors could verify a tool's self-imposed limits is unclear, as is how effective a tool could be after ingesting the subset of proposal contents that anyone can obtain.

Other questions remain unanswered as well. For example, recent discussion has focused on whether AI tools' scrape-and-analyze methods infringe copyrights.[6]

Different contractors will of course have different views on these questions and different tolerances for the corresponding risks they perceive. But it's incumbent on all contractors who might be early adopters of AI-drafted proposals to consider these questions.

Beyond these baseline questions of legal risk, practical questions about any given AI model would follow, such as:

  • Does the model include each proposal's corresponding solicitation? Judging a proposal's effectiveness generally needs the context of what the solicitation requested.
  • Does the model have access to the proposals' evaluations? If yes, then you might ask again how the firm offering the AI tool has access. Even winning proposals win differently under different ground rules and evaluation criteria. Knowing how proposals win, or don't win, could be as important as having the proposal itself.
  • Do the people training the model have federal acquisition experience, and if so what type? One might expect specialized models to be trained by specialists. But that's something to put on the list for confirmation.

These questions underscore something of a tension for contractors. A contractor turning to an AI tool to help draft proposals may instinctively prefer a language model informed by actual proposals, though it remains to be seen how much proposal-specific information these models can permissibly include while still improving a proposal's chances of selection for award.

How Would a Contractor Make an AI-Drafted Proposal Its Own?

Just as you're reading this article because you might be considering an AI tool for proposal writing, your competitors may be considering these tools, too. If and when contractors start using them, there may start to be competitions in which multiple offerors draft proposal narratives with AI tools trained on essentially the same set of proposals by the same community of specialist AI trainers.

It's not hard to imagine a contracting agency tasked with evaluating proposals that, other than names and graphics, start to read like the same exact narrative three or four times over.

That potential for similar contents again raises the Procurement Integrity Act issue and related questions discussed above.

The prospect of similar contents also raises a practical question of how your proposal can stand out from the crowd. The answer likely involves one of the fundamental steps that should be included in company practices for any AI-generated content: full and thorough review by human team members.

At a practical level, even the richest, best-trained language model likely won't include every nuance of your capture team's relationship with the agency program team, the aspects of your incumbent performance that will appeal to the customer or how to present the cutting-edge widget that your company has spent two years perfecting in near secrecy.

Human review is also central to managing legal risks beyond the Procurement Integrity Act. Because AI models are not designed or intended to function without human review, the output of these models should always be closely checked for accuracy, alignment with intended purposes, legal compliance, appropriate wording and other considerations that your company would factor into reviewing content drafted entirely by humans.

Using the example from above, that cutting-edge widget described in an AI-drafted proposal needs to be something your company actually offers. You've probably read about the lawyers whose use of an AI tool to draft legal filings went awry when the tool cited cases that did not exist.[7] And reports of similar instances have given us the term AI hallucination.

You'll want to have checks in the process to make sure that an AI tool doesn't start generating proposal text that unwittingly invites risks of performance problems and fraudulent-inducement allegations by describing proposal features that might not exist or that you never intended to offer.

As one additional general reminder, it may serve your company well not only to ensure that there's a place in the process for human review and input, but also to document that process, including a checklist or other record specific to each individual proposal. That way, if questions arise later on, there are contemporaneous records to show that the company did indeed review, and as needed revise, any AI-generated proposal content.

What Could Happen to the Contractor's Content During and After Proposal Drafting?

Almost every proposal will have some information to which contractors want or need to restrict access. For any AI tool under consideration, you would want to understand where and how the tool transmits your inputs for processing.

Is there any risk, for example, that the tool would use servers in locations to which you couldn't transmit the export-controlled information you would need to write your proposals for certain customers?

Similar questions would apply after the AI tool generates a draft. In particular, would license terms permit the AI tool's owner to use your inputs to the AI tool, or the proposal draft generated by the tool, to generate proposal drafts for other contractors? If the answer is yes, questions follow.

For example, would everyone who has access to your proposal for training the model be a U.S. person? How would the tool guard against writing a proposal for your competitors about a cutting-edge performance approach that happens to be your company's trade secret, not theirs? Does the mere fact of this indirect access to proposals risk undermining assertions of trade secrets or exemptions from disclosure under FOIA?

Bottom Line

Some may feel that a time will come when using AI tools to help draft a proposal will seem as unremarkable, both practically and legally, as submitting a proposal by email is today. Others might anticipate that for all generative AI's promise, its use in drafting federal proposals will end up being minimal for one of any number of reasons.

Either way, while AI tools specific to proposal drafting are still emerging, asking questions such as the three above will help identify the risks of using these types of tools in this context.


[1] G. Dawson, K. Desouza and J. Denford, Understanding artificial intelligence spending by the U.S. federal government, Brookings (Sept. 22, 2022), https://www.brookings.edu/articles/understanding-artificial-intelligence-spending-by-the-u-s-federal-government/.

[2] Tradewinds, Accelerating the procurement and adoption of artificial intelligence/machine learning, digital, and data analytics solutions across the Department of Defense, https://www.tradewindai.com/ (last visited Aug. 1, 2023).

[3] Jory Heckman, DOD builds AI tool to speed up 'antiquated process' for contract writing, Federal News Network, Feb. 9, 2023, https://federalnewsnetwork.com/contracting/2023/02/dod-builds-ai-tool-to-speed-up-antiquated-process-for-contract-writing/.

[4] U.S. Dep't of Defense, DOD Adopts Ethical Principles for Artificial Intelligence, Feb. 24, 2020, https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.

[5] Wiley Rein LLP, Federal Legislators Are Taking AI Implementation and Oversight Seriously, June 29, 2023, https://www.jdsupra.com/legalnews/federal-legislators-are-taking-ai-8813569/.

[6] Wiley Rein LLP, AI and Copyright Law: Understanding the Challenges, July 12, 2023, https://www.wileyconnect.com/AI-and-Copyright-Law-Understanding-the-Challenges.

[7] ChatGPT Lawyers Are Ordered to Consider Seeking Forgiveness, N.Y. Times, June 22, 2023, https://www.nytimes.com/2023/06/22/nyregion/lawyers-chatgpt-schwartz-loduca.html.
