7 Tips for Leveraging Artificial Intelligence While Managing Risks in Political Campaigns

April 2, 2024

While its full potential is still untapped, artificial intelligence (AI) is poised to play a greater role during the 2024 elections and future grassroots advocacy efforts. Here are seven tips corporations and others should consider before integrating the technology into their efforts:

  1. The Federal Election Commission (FEC) is considering a rulemaking on AI, but most expect the agency will decline to proceed for jurisdictional reasons. Pro-regulatory interests – spurred on by Members of Congress and others – have repeatedly urged the FEC to open a rulemaking on the use of artificial intelligence in political campaigns. Last May, the Commission declined to do so, with several commissioners explaining that the federal campaign finance laws did not authorize the Commission to regulate deepfakes or most other uses of AI. Several months later, however, the Commission reversed course and agreed to take public comment. Nonetheless, most observers expect that the Commission will decline to open a rulemaking, citing the same jurisdictional concerns. According to FEC Chairman Sean Cooksey, the Commission will likely announce a decision on this issue by early summer.
  2. Congress has introduced bills to regulate AI in political communications, but it is unlikely to pass anything before the November election. Three bills were introduced in Congress last year, each taking aim at deceptive AI from a different angle. The REAL Political Advertisements Act would require disclaimers on images and videos generated in whole or in part with AI, while the Protect Elections from Deceptive AI Act would prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates. The DEEPFAKES Accountability Act would create new criminal offenses related to the production of malicious deepfakes, and includes exceptions for parodies, satire, consensual deepfakes, and other types of fictionalized content. Most recently, in March 2024, the AI Transparency in Elections Act was introduced, which would require the FEC to create criteria for determining when a “covered communication” contains content “substantially generated by” AI and to develop disclosure requirements. Importantly, any legislation Congress does pass would apply only to federal candidates and campaigns.
  3. The states, by contrast, are moving swiftly to regulate AI in the election context. While Congress may be moving slowly, the states are not. At the time of publication, 11 states – including four since March 15 – have enacted laws regulating “synthetic media” or deepfake technology: California (2019), Texas (2019), Michigan (2023), Minnesota (2023), Washington (2023), New Mexico (2024), Indiana (2024), Utah (2024), Wisconsin (2024), Idaho (2024), and Oregon (2024). Dozens of other bills have been introduced and are being considered in other state legislatures this year. Common elements of these laws include temporal limits, standards of intent, safe harbors for ads with disclaimers, private rights of action for injunctive and/or equitable relief, and criminal penalties. Some laws include exceptions for satire and parody in an effort to bolster their defenses against inevitable First Amendment challenges. At least one law imposes liability on paid advertising platforms.
  4. Tech platforms and industry groups are shaping policy and best practices. In February 2024, several major tech companies voluntarily signed “A Tech Accord to Combat Deceptive Use of AI in 2024 Elections” to demonstrate their commitment to ensuring the responsible use of AI in political communications. Many social media platforms already had their own policies on the use of AI in political content. These policies are subject to change, particularly as consensus on best practices continues to develop. Political professional groups have also been issuing policy statements on the use of AI. For example, the American Association of Political Consultants issued a policy statement in May 2023 clarifying that its Code of Ethics prohibits the use of “deep fake” generative AI content.
  5. Overall, rules of the road are emerging to help mitigate AI risks. While AI holds powerful and positive promise, it can also introduce risks. There is growing consensus around what these risks are and, correspondingly, around how to address them. In particular, key principles for addressing AI risks and promoting trustworthy AI include avoiding harmful bias, explainability, accountability, transparency, privacy, and safety and security. These principles form the basis of – and inform – many of the voluntary frameworks and emerging laws and regulations governing AI development and use.
  6. Even though the legal and regulatory landscape is still developing, organizations deploying AI should be aware of existing laws that govern the use of AI – both in the specific election context and more generally. There are already laws on the books – including both technology-agnostic laws (e.g., general consumer protection and privacy laws) and AI-specific laws (e.g., bot disclosure laws, as well as the state deepfake laws mentioned above) – that govern AI, with more emerging. One key law that organizations should be aware of if they are placing outbound calls or texts is the Telephone Consumer Protection Act (TCPA). In February 2024, the FCC issued a Declaratory Ruling making clear that this law – which imposes a variety of requirements on a wide range of calls and texts – applies to AI-generated voice calls.
  7. Understand how AI can benefit or hurt your organization. It’s important to identify, understand, and mitigate the risks associated with AI, in addition to understanding the benefits it may offer your organization. Generating content is just one potential benefit of AI; other beneficial uses include enhancing cybersecurity, detecting fraud or deepfakes, conducting predictive analytics, and more. On the flip side, there are threats that organizations ought to monitor for during the election cycle, such as deepfake or otherwise fraudulent robocalls. If AI is being used against your organization, there are a number of tools and remedies (some even powered by AI) that can help combat such fraud.

Wiley attorneys Kevin Rupy, Kat Scott, Andrew Woodson, and Hannah Miller hosted a webinar to discuss the use of AI in political campaigns and policy debates. You can watch the webinar on-demand here.
