The “Wild West” of AI Use In Campaigns

Political Law Podcast
October 12, 2023


As the tide turns toward another election cycle, the explosion of artificial intelligence (AI) raises alarms in the campaign space. Wiley Election Law partner Caleb Burns moderates a discussion with fellow Election Law partner Andrew Woodson; Privacy, Cyber & Data Governance partners Duane Pozza and Kat Scott; and Intellectual Property partner David Weslow on how best to navigate this new “AI Wild West.” Hear about required disclosures related to chatbots, laws prohibiting deepfakes, clarifications on “fair use” in the copyright world, differences between for-profits and nonprofits in state privacy laws, and even potential upcoming rulemaking from the Federal Election Commission itself.

Transcript

Caleb
Hello, I'm Caleb Burns, a partner at Wiley Rein in Washington, DC.  I was recently quoted in the press stating that the regulation of AI in connection with political campaigns is the “wild west.” What exactly did I mean by that? Well, the campaign finance laws do not address AI, though the Federal Election Commission is considering whether to enact a new regulation to change that, and we'll unpack that more in a minute.  Instead, users of AI in the campaign space must grapple with how pre-existing legal regimes like privacy, copyright, defamation, and consumer protection laws might apply.

The courts are only now starting to address this. Legislatures have begun considering and passing legislation, and regulators like the FEC are examining their authority to regulate AI. But until clear rules are developed by the government, we are left to exercise our own judgment about how all these preexisting legal regimes might apply to AI used in connection with political campaigns.

Fortunately, I am joined today by real experts in these areas of law, and I'm going to spend the next 20 to 30 minutes asking them questions so we can better understand how campaign use of AI might be regulated. So, with that as background, I want to start with my partner, Andrew Woodson, who previously worked at the FEC, who can tell us what they may be attempting to do in this space. Andrew.

Andrew
Thanks, Caleb. Like everyone else, the FEC has been hearing a lot about artificial intelligence lately. It's in the newspapers, the election law bar is talking and writing about it and doing podcasts on it, and members of Congress just spent much of an oversight hearing quizzing commissioners about their views on AI more than on almost any other topic.

So heading into the 2024 elections, I'd say this is one of the top issues the commission is wrestling with. Much of the AI-related focus right now is on a petition for rulemaking filed by the group Public Citizen. Back in May, this organization asked the FEC to clarify that existing law prohibits candidates and their agents from disseminating communications that contain "deliberately false AI-generated content."

However, the Public Citizen petition was met with strong skepticism from some commissioners, indeed, so much so that the FEC could not even agree whether to publicize Public Citizen's petition and ask the public for input on opening a rulemaking. This was a departure from the agency's standard practice, but the FEC's Republican commissioners, in particular, felt that the agency should not mislead the public when it was clear that the FEC had no statutory authority to regulate deepfakes.

In Commissioner Allen Dickerson's judgment, it was up to Congress, not the FEC, to rewrite the 1970s-era law to cover the more modern problems posed by AI. Unfazed by this setback, Public Citizen came back to the Commission in July with an updated petition, expanding upon its arguments that the FEC had statutory authority to regulate deepfakes.

This time, likely in response to increased congressional interest and the explosion of AI-related press coverage over the summer, of which there was quite a bit, the commission unanimously agreed to obtain public comment on whether to formally open a rulemaking.  Comments are due by October 16, and the commission could decide whether to open a rulemaking before the end of the year.

So if you're an AI developer, an ad-making consultant, or a candidate or super PAC that anticipates using AI-generated content in 2024, you might want to watch this space, as they say, to see if anything happens before year-end.  On balance, though, I think it's probably unlikely that the commission will ultimately end up opening a rulemaking.

The jurisdictional concerns do remain real. At least one Democratic commissioner has already expressed some of the same concerns as Commissioner Dickerson, and all commissioners are mindful that any sort of regulation risks stifling AI's more promising uses.  But we'll see how the process plays out in the coming months.

Caleb
Well, thank you, Andrew. Very interesting. But if the FEC is not going to regulate in this area, what about the platforms that host the advertising? We've seen social media platforms enact their own rules around disclosures related to political advertising.  Is there a movement afoot for them to do something similar with AI-generated content?

Andrew
Definitely. Last month, Google announced that advertisers will soon need to disclose whether an ad includes "synthetic content that inauthentically depicts real or realistic looking people or events."  While this disclaimer won't apply to the use of AI for general photo editing purposes and those kinds of things, it will apply when an advertiser uses AI to create a realistic-looking depiction of an event that never actually occurred.

This new rule, which is going to apply to YouTube video ads as well as content on Google, will be in place for the 2024 primary season.  It remains to be seen whether other social media platforms will follow suit.  Indeed, Senator Amy Klobuchar and Congresswoman Yvette Clarke recently sent a letter to several other technology companies asking this very question.

It's possible these companies are working on entirely new policies to cover AI-generated content, or they may instead rely upon existing platform rules banning deepfakes and manipulated media for certain forms of video content. So stay tuned on that one as well.

Caleb
Well, thanks, Andrew. Very interesting from the FEC and campaign finance law perspective.

Let's open the aperture a little bit more and talk about some of these other pre-existing areas of law.  And I'm going to move to my partners, Duane Pozza and Kat Scott in our privacy, cyber, and data governance practice. Duane and Kat, what do privacy and other consumer protection laws have to say on the matter of AI use and AI-generated content?

Duane
Thanks, Caleb. So, this is a question that is generating a lot of interest from policymakers, enforcers, and all kinds of companies and organizations that are using AI. The bottom line is that many existing laws already apply to AI, and there is no shortage of technology-neutral laws that protect against many of the potential or perceived harms associated with AI.

So, regulators, in fact, have gone out of their way recently to emphasize that AI is governed like any other technology under existing laws, and that regulators will consider the effects of AI. Just to take one prominent example, there are consumer protection laws that protect against unfair and deceptive acts and practices; they're often known as "UDAAPs."  In particular, these kinds of laws protect against deceptive practices that are likely to mislead consumers. So, to the extent that AI is used as part of an allegedly deceptive practice, that could fall under a range of laws. Federal regulators, including the Federal Trade Commission, or the FTC, have reiterated the applicability of general laws to AI time and time again in recent months.

Just recently, the FTC hosted a roundtable discussion about AI and creative content, and the commissioners made this point again.  I'd also add that states have similar kinds of UDAAP laws and consumer protection laws, and similar authorities to go after these kinds of practices, and states may have fewer limits, in fact.

So, for example, the FTC is limited in bringing FTC Act cases to those against commercial entities, and not true non-profits. But states may not have this kind of limitation in enforcing their own consumer protection laws. So there's also the possibility that states, particularly those with aggressive attorneys general, could attempt to use consumer protection laws in dealing with AI when it's used in relation to campaign solicitations, for example.

Katherine
Yeah, and with respect to privacy, this is certainly true of general consumer privacy laws. Just as a level set, privacy laws typically govern the collection, use, and sharing of personal information. So to the extent that AI applications use personal information, for example, as an input to generate new content, or more generally to process personal information or help organizations make decisions about individuals, privacy laws may certainly be triggered.

For sure, there are specific considerations for campaigns or others in the political ecosystem, including whether or not those traditional consumer privacy laws apply in the first instance. For example, many of the new state omnibus privacy laws have exceptions for non-profit entities. But even if the general consumer privacy laws don't apply to your organization, these laws are really setting a clear trend and expectation and should be looked at as a guide for best practices.

Caleb
Well, thank you, Kat. But what about AI specific laws? Are there any privacy or consumer protection laws that listeners should be aware of, beyond the generally applicable ones you and Duane have both discussed?

Katherine
Yeah, absolutely. So at the state level, we've seen a number of laws that address specific privacy and transparency concerns about AI.

So I mentioned the state omnibus privacy laws a little bit earlier. Those laws regulate when businesses use AI or automated processing systems in furtherance of legally significant decisions about people. Under most of those laws, organizations deploying those covered AI systems have to give consumers the right to opt out of that processing, and they have to conduct what's called a data protection assessment, or DPA, to assess the benefits and risks of that kind of processing.

And then in the election space in particular, there are some AI-specific laws as well. In 2019, California adopted a bot disclosure law that requires businesses to notify consumers if they are interacting online with a bot instead of a human, if that bot is being used to incentivize a purchase or sale of goods or, in the campaign space, to influence a vote in an election.

So that definitely is a significant law that folks should be aware of. California also has a deepfake law on the books that prohibits maliciously creating or distributing campaign material with candidate photos superimposed without clear disclosure, stating that the content has been manipulated. That law, I think, is significant for a lot of reasons, but one being that it creates a private right of action for candidates.

Duane
I would also add that on the federal level, we're seeing an influx of AI-specific laws being proposed and discussed by legislators as well. The kinds of laws being discussed could, for example, mandate risk assessments for certain kinds of "high risk AI use cases."  Another area that we're watching is executive action.  The White House has signaled that it's going to release an executive order on AI in the near future, and it has already helped secure voluntary commitments from a number of technology companies on how to manage AI risk. We're expecting the White House to try to build on these voluntary commitments in its next executive action.

Caleb
Yeah. Wow. That's a lot to digest there. Kat, Duane, what advice do you have for campaigns trying to keep on top of all this?

Katherine
Yeah, first and foremost, I think it's important to take inventory of how your campaign or organization is using AI. AI can be a really broad concept depending on what legal or regulatory framework you're dealing with.

And it goes beyond the generative AI tools that everyone is reading headlines about. So it's important to know whether and how your campaign is using AI so that you have a clear sense of the legal and reputational risks associated with any given use. I think second, we'd recommend that organizations take a risk management approach to AI adoption and deployment, meaning that it's important to assess the benefits and the risks of using any AI technology and build your program and protections accordingly.

There are a number of tools out there that can help you in this regard, including the AI Risk Management Framework, which is voluntary guidance recently put out by NIST about how organizations can leverage the benefits of AI while appropriately managing the risks.

Caleb
And is NIST the government agency?

Katherine
It is. It's the National Institute of Standards and Technology. It sits within the Commerce Department, and it puts out a lot of voluntary guidance for risk management on cybersecurity, privacy, AI issues.

Duane
And finally, I think it's also important for any organization to have an official policy on how to use AI tools, particularly these new generative AI tools.

These are the kinds of policies that we're helping all kinds of clients develop in real time. An organizational policy is helpful in large part to make sure employees, contractors, and vendors are all on the same page, and make sure that any of these legal risks are being minimized.

Caleb
Well, thank you both. Very interesting.

Now let's move to our partner, David Weslow from our intellectual property group.  David, we've seen a lot of intellectual property issues and political advertisements over the years, including traditional claims of copyright infringement related to music and photos, the right of publicity claims we've seen that relate to people being shown in advertisements and even cyber-squatting claims related to domain names used by campaigns.

Does the use of AI-generated content raise these or other IP issues?

David
Hi Caleb.  Yeah, as with more traditional IP claims related to campaign advertisements, the use of AI-generated content can also raise copyright, right of publicity, and even trademark or brand issues. If an AI system is trained by reviewing pre-existing content, which could be anything from articles, books, photos, videos, or music to anything else that might be entitled to copyright protection, then this use of pre-existing content without permission to train the AI system could be subject to a claim of copyright infringement.

So following the recent filing of a number of copyright infringement cases in this context, we've started to see some companies that provide AI systems offer indemnification to their customers for claims based on the use of third-party content as AI training data. This type of use of third-party content could also be subject to a fair use defense. But at this point, no courts have addressed the scope of this type of indemnification, and the very few courts that have addressed a copyright fair use defense in this context have only denied early motions to dismiss, even when fair use is raised early in the case.

Caleb
That's very interesting. I hadn't thought of the IP risk in the context of using AI that has been trained with third party content. That, of course, is on the input side. What about from the AI output perspective?

David
Yeah, that's a good question. So the output from any AI system could also be challenged as a copyright infringement, if the output is based on pre-existing content that was used without authorization. So in the campaign context, this could be campaign advertisements, social media posts, website content, videos, or more.

Caleb
Okay, well now I'm going to put you on the hot seat like I just did with Kat and Duane. That's a lot to consider. How should political campaigns minimize, or at least mitigate, these risks?

David
Sure. Similar to content that may not have been generated by AI, it's always a good idea to make sure that any content that's being used by the campaign is both original and not a derivative of a pre-existing work.

And if the content is being prepared by a vendor, make sure the vendor provides contractual representations and warranties on both originality and non-infringement. And if you know that the content is going to be based on pre-existing works, brands, or trademarks, it's very important to conduct a thorough fair use analysis before using the content.

There's no shortcut to reaching a copyright fair use decision. All four copyright fair use factors must always be balanced. And it's a good idea to keep in mind that fair use is an affirmative defense in litigation that has not yet been addressed by courts in the context of AI systems, other than those very few decisions I mentioned that have only denied early motions to dismiss on fair use grounds.

Caleb
Well, David, one more question since I've got you. I've heard folks say, “Oh, I got this off the internet. Therefore it is subject to fair use.” I'm taking from your comments - that is demonstrably not true.

David
Yeah, it's a common misconception. We hear that a lot, or that it's only five seconds' worth of audio or video, or that it turned up in a Google search, things like that.  And no, unfortunately, there's no shortcut to a fair use determination. It's a factually intensive balancing of four factors.  There's just no shortcut to determining that something is a fair use.

Caleb
Well, thanks for clearing that up and thank you all. What I'm learning from this podcast is that there is much to consider when using AI for campaign purposes.

So thank you, David, Duane, Kat, Andrew. Let's plan on doing this again as the field continues to evolve.
