States and Localities Are Beginning to Move Forward with a Piecemeal Approach to AI

April 2022

Privacy In Focus®

As artificial intelligence (AI) becomes increasingly embedded into products, services, and business decisions, state and local lawmakers have been considering and passing a range of laws addressing AI.

Even as the federal government looks more closely at AI, including with the National Institute of Standards and Technology (NIST) developing an AI Risk Management Framework, some states and localities appear poised to jump ahead – with both new laws and new regulations.

Several Laws Enacted in 2021 Address AI

In 2021, several jurisdictions – including Alabama, Colorado, Illinois, Mississippi, and New York City – enacted legislation specifically directed at the use of AI. Their approaches varied, from creating bodies to study the impact of AI, to regulating the use of AI in contexts where governments have been concerned about increased risk of harm to individuals.

Some of these laws focus on studying or promoting AI. Alabama’s law, for instance, establishes a Council on Advanced Technology and Artificial Intelligence “to review and advise the Governor, the Legislature, and other interested parties on the use and development of advanced technology and [AI] in th[e] state.” That council must “submit to the Governor and Legislature an annual report each year on any recommendations the council may have for administrative or policy action relating to advanced technology and artificial intelligence.” Mississippi’s law – known as the “Mississippi Computer Science and Cyber Education Equality Act” – implements a mandatory K-12 computer science curriculum, which must include instruction in AI and machine learning, among other fields and topics.

Other laws regulate AI more directly. Most notably, New York City enacted an algorithmic accountability law, which bars employers and employment agencies in New York City from using “automated employment decision tool[s]” unless the tool has been subject to an annual audit checking for race- or gender-based discrimination and a summary of the results of the most recent audit is publicly available on the employer’s or employment agency’s website. The law also requires employers and employment agencies that use such AI tools to provide notices to employees and candidates, and to make other information about the automated employment decision tool available either on the employer’s or employment agency’s website or upon written request by the candidate or employee. The law authorizes a private right of action and imposes fines of $500 to $1,500 per violation on employers and employment agencies.
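The law leaves the audit methodology to be fleshed out in implementing rules, but bias audits in the employment context commonly examine selection rates by demographic group. Purely as an illustration – not a statement of what the New York City law requires – the Python sketch below computes selection rates and “impact ratios” from hypothetical hiring outcomes; all data and group names are invented.

```python
# Illustrative only: selection rates and impact ratios from hypothetical
# hiring outcomes. The NYC law does not prescribe this (or any) formula.
from collections import Counter

# Hypothetical (applicant_group, was_selected) outcomes produced by an
# automated employment decision tool.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in outcomes)
selected = Counter(group for group, was_selected in outcomes if was_selected)

# Selection rate per group: number selected / number of applicants.
rates = {group: selected[group] / totals[group] for group in totals}
highest_rate = max(rates.values())

# Impact ratio: each group's selection rate relative to the highest rate.
for group, rate in sorted(rates.items()):
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {rate / highest_rate:.2f}")
```

For context, under the EEOC’s longstanding “four-fifths” guideline for employment selection procedures, an impact ratio below 0.8 is often treated as preliminary evidence of adverse impact, although the New York City law does not itself adopt any particular threshold.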

Colorado also enacted an AI law in 2021. Colorado’s law takes a sectoral approach, prohibiting insurers from using “any external consumer data and information sources, as well as algorithms or predictive models that use external consumer data and information sources, in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.” The law requires the Colorado Commissioner of Insurance to promulgate related rules for insurers, which must require insurers to, among other things:

  1. provide information to the Commissioner about the data used to develop and implement algorithms and predictive models;
  2. provide an explanation of how the insurer uses external consumer data and information sources, as well as algorithms and predictive models that use such data;
  3. establish and maintain “a risk management framework or similar processes or procedures that are reasonably designed to determine, to the extent practicable, whether the insurer’s use [of such data, algorithms, and predictive models] unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression”;
  4. provide an assessment of the results of the risk management framework and actions taken to minimize the risk of unfair discrimination, including ongoing monitoring; and
  5. attest that the insurer has implemented the risk management framework appropriately on a continuous basis.

This law comes in addition to Colorado’s comprehensive privacy law, the Colorado Privacy Act, set to take effect on July 1, 2023. Notably, the Colorado Privacy Act – like the new omnibus privacy law in Virginia – gives consumers a right to opt out of the processing of their personal data for purposes of automated profiling in furtherance of decisions that produce legal or similarly significant effects.

Illinois, as a further example, has adopted two AI-related laws in recent years. First, the Illinois Future of Work Act establishes a task force to, among other things, study the impact of emerging technologies on the future of work. The legislative findings of that bill explained that “[r]apid advancements in technology, specifically the automation of jobs and expanded artificial intelligence capability, have had and will continue to have a profound impact on the type, quality, and number of jobs available in our 21st century economy.”

Second, Illinois has enacted the Artificial Intelligence Video Interview Act, which imposes notice, consent, sharing, deletion, and reporting obligations on employers that “use[] an artificial intelligence analysis of ... applicant-submitted videos” in the hiring process. Specifically, an employer that asks applicants to record video interviews and uses an AI analysis of those videos must: (1) notify the applicant that AI may be used to analyze the applicant’s video interview and consider the applicant’s fitness for the position; (2) provide each applicant with information explaining how the AI works and what general types of characteristics the AI uses to evaluate applicants; and (3) obtain consent from the applicant. The law also limits the sharing of the videos and gives applicants a right to have the videos deleted. A 2021 amendment imposes reporting requirements on an employer that “relies solely upon an [AI] analysis of a video interview to determine whether an applicant will be selected for an in-person interview.” Such employers must report specified demographic information annually to the state’s Department of Commerce and Economic Opportunity, which in turn must analyze the reported data and report annually to the Governor and General Assembly on whether the data discloses racial bias in the use of AI.

California Is Poised to Adopt Privacy Rules That Address AI

In addition to these laws enacted in 2021, it will be important for companies to monitor California’s privacy rulemaking process, as the new California Privacy Protection Agency (CPPA), the agency charged with rulemaking and enforcement authority over the California Privacy Rights Act (CPRA), is expected to issue regulations governing AI this year. As we have flagged, while the statute calls for final rules to be adopted by July 2022, at a February 17 CPPA board meeting, Executive Director Ashkan Soltani announced that draft regulations will be delayed.

The CPRA specifically charges the agency with “[i]ssuing regulations governing access and opt-out rights with respect to businesses’ use of automated decisionmaking technology, including profiling and requiring businesses’ response to access requests to include meaningful information about the logic involved in those decisionmaking processes, as well as a description of the likely outcome of the process with respect to the consumer.” In September 2021, the CPPA released an Invitation for Preliminary Comments on Proposed Rulemaking, which asked four questions regarding interpretation of the agency’s automated decision-making rulemaking authority:

  1. What activities should be deemed to constitute “automated decisionmaking technology” and/or “profiling”;
  2. When consumers should be able to access information about businesses’ use of automated decision-making technology, and what processes consumers and businesses should follow to facilitate access;
  3. What information businesses must provide to consumers in response to access requests, including what businesses must do in order to provide “meaningful information about the logic” involved in the automated decision-making process; and
  4. The scope of consumers’ opt-out rights with regard to automated decision-making, and what processes consumers and businesses should follow to facilitate opt-outs.

This effort in California to regulate certain automated decision-making processes may open the door to greater regulation of AI and should be watched closely. 

***

This kind of patchwork approach, if it continues, may complicate regulatory compliance for many uses of AI across jurisdictions. Companies developing and deploying AI should continue to monitor state and local approaches to AI as the legal and regulatory landscape develops.

© 2022 Wiley Rein LLP
