2025 State AI Laws Expand Liability, Raise Insurance Risks
This article was originally published by Law360.
As 2025 nears its end, claims professionals should be aware of trends in liability-expanding state legislation that addresses artificial intelligence use.
The rapid integration of AI into myriad business functions and everyday life has, unsurprisingly, led to numerous avenues of AI liability exposure. In addition to common-law causes of action regarding AI impacts, state legislatures have been enacting laws to regulate AI at a brisk pace. These laws address topics ranging from professional licensing to whistleblower protections, and many of these laws have created new private rights of action and civil liability exposures.
Insurance claims arising under some of these liability-expanding statutes are a certainty.
This article provides a brief overview of notable trends emerging from these new laws, many of which focus on the following issues: children's safety, intimate deepfakes, political advertising, healthcare, algorithmic discrimination and likeness protections.
Children's Safety
Several states have enacted legislation attempting to prevent AI chatbots from facilitating harmful conduct involving children. States such as California, New Hampshire, New York and Texas have taken action.[1]
California and New Hampshire now provide a private right of action for violations of their statutes. California enacted a law this year governing "AI companions," which are chatbots that exhibit anthropomorphic features and are able to sustain a "relationship" across multiple interactions.[2]
The law require[s] an operator to prevent a companion chatbot ... from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, as specified, and [requires] an operator to publish details on that protocol on the operator's internet website.[3]
The statute provides a private right of action permitting actual damages, statutory damages starting at $1,000 per violation, attorney fees and costs, and injunctive relief.
New Hampshire's H.B. 143, which becomes effective in 2026, creates a private right of action for individuals harmed by an AI chatbot's facilitation, encouragement, offer or recommendation of certain harmful acts to a child. The statute seeks to preclude AI chatbots from facilitating, encouraging, offering, soliciting or recommending "that [a] child imminently engage in: (1) Sexually explicit conduct. (2) The production or participation in the production of a visual depiction of such conduct. (3) The illegal use of drugs or alcohol. (4) Acts of self-harm or suicide. (5) Any crime of violence against another person."[4]
Under H.B. 143, a child, parent or next friend may sue an operator responsible for a chatbot's actions for damages starting at $1,000 per violation. Unlike California's statute, H.B. 143 does not expressly allow fee-shifting, but it authorizes the New Hampshire attorney general to bring enforcement actions for violations.
Similar New York and Texas laws do not establish a private right of action. The New York law requires AI companions to have protocols to refer users to a crisis service provider if they display suicidal ideation.[5] And the Texas statute, which becomes effective in 2026, prohibits an AI system from being developed or deployed to intentionally incite self-harm, harm to others or criminal activity.[6]
Both statutes authorize the state attorney general to enforce the law. In New York, violators face up to $15,000 per day per violation, with funds deposited to the state's suicide prevention fund. In Texas, the attorney general can seek civil penalties ranging from $10,000 for curable violations to $200,000 for uncurable violations and $2,000 to $40,000 per day for continued violations, plus additional state-agency penalties.
Intimate Deepfakes
Many states have enacted legislation addressing intimate deepfakes. These statutes seek to address AI-generated sexual images or videos that depict real individuals. Typically, these statutes either provide for a private right of action or enforcement by the state attorney general.
For instance, Michigan enacted the Protection from Intimate Deep Fakes Act this year, allowing individuals depicted in nonconsensual deepfake sexual images to sue creators or distributors who either knew or reasonably should have known that the creation or dissemination would cause harm, or who acted with intent to harass, extort, threaten or harm.[7]
The statute permits a depicted individual to bring "a civil action against a person for the nonconsensual creation or dissemination of a deep fake" if certain criteria are met.[8] These criteria include an intent requirement; a depiction of the person's "intimate parts" or of the person engaging in a sexual act; and a requirement that the depicted individual be identifiable.[9]
The statute also permits plaintiffs to file confidentially and to seek damages, including damages for mental anguish and for profits arising from the deepfake, as well as attorney fees and costs, and an injunction or temporary restraining order, a violation of which may result in a $1,000 per day civil fine.
North Dakota enacted a similar law this year.[10]
The law states that a depicted individual who is identifiable and who suffers harm from a person's violation of this section has a cause of action against the person if the person produced, possessed, distributed, promoted, advertised, sold, exhibited, broadcasted, or transmitted the sexually expressive image for the purpose of sexual arousal, sexual gratification, humiliation, degradation, or monetary or commercial gain.
A prevailing plaintiff may recover the greater of actual economic and noneconomic damages, including emotional distress, or statutory damages of up to $10,000 per defendant.[11]
In addition, the plaintiff can recover any profits the defendant earned from distributing or monetizing the image and may also seek punitive damages.[12]
Political Advertising
Legislators have been deeply attuned to AI's impact on elections. Starting in 2024, states across the country have passed laws attempting to govern AI-generated political advertising. These statutes generally seek to limit the use of deceptive AI-generated election content. Some of these statutes provide private rights of action, permitting candidates to sue. Others provide even broader rights of action, permitting activist organizations to sue on behalf of their interest groups.
Michigan enacted a statute in 2023 prohibiting the distribution of knowingly deceptive media that falsely represents an individual when intended to harm a candidate's reputation or influence voters through deception. The statute precludes a person from distributing "materially deceptive media" that "falsely represents a depicted individual" provided that certain requirements are met.[13] These requirements include an intent requirement, distribution within 90 days before an election, and a finding that the media is reasonably likely to cause deception.[14]
The Michigan statute includes exceptions and criminal penalties. In addition to enforcement by the state attorney general, it permits depicted individuals, injured candidates and voter organizations to seek injunctive relief and recover attorney fees and costs. The statute requires judicial review of complaints for potential frivolousness before obligating a defendant to respond.
In 2024, states such as Hawaii, Arizona and California enacted similar laws.
California's expansive statute applies from 120 days before an election through 60 days after.[15] It precludes the knowing distribution of materially deceptive election content concerning not only candidates but also election officials, voting machines, ballots and related property. The statute permits recipients of deceptive content, candidates and election officials, among others, to sue any person or entity that distributes or republishes deceptive material, and it permits monetary damages, injunctive relief, attorney fees and costs, and other remedies.
Hawaii's law includes civil and criminal penalties.[16] It permits depicted individuals and voter organizations to sue for monetary damages, injunctive relief, attorney fees and costs, and other remedies, and it authorizes a $1,000 per day penalty for violating an injunction. The law also allows the state attorney general to sue, though not to seek damages.
Unlike many others, Arizona's statute is enforceable only by the state's attorney general.[17]
Healthcare
Several states have enacted healthcare-related statutes addressing issues arising from AI use in the healthcare industry. These statutes are generally enforced by the state attorney general or a state agency.
For example, the Illinois Wellness and Oversight for Psychological Resources Act, enacted in 2025, seeks to prevent unlicensed and unqualified artificial intelligence systems from offering therapy or psychotherapy services.[18]
It states that "[a] licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without the review and approval by a licensed professional."[19]
The law permits AI to be used by licensed professionals for administrative or supplementary support purposes only. The law permits the Illinois Department of Financial and Professional Regulation to seek penalties of up to $10,000 per violation, with penalties assessed based on the degree of harm and the circumstances of the violation.
In stark contrast, Texas has authorized healthcare practitioners to use AI for diagnostic purposes, "including the use of artificial intelligence for recommendations on a diagnosis or course of treatment based on a patient's medical record" if certain conditions are met.[20] The new law requires disclosure to patients and a practitioner's review of any AI recommendation. Violations are enforced by the attorney general exclusively and can reach up to $250,000 when protected health information is knowingly used for financial gain.
Algorithmic Discrimination
States have begun updating their anti-discrimination laws to account for algorithmic or AI discrimination. These laws tend to be enforced by state attorneys general.
Effective Jan. 1, 2026, Texas law will prohibit the development or deployment of AI systems that unlawfully discriminate against protected classes.[21] Statutory penalties range from $10,000 for a curable violation to $200,000 for an uncurable violation.
Similarly, Colorado's algorithmic discrimination law, which the state attorney general may enforce starting June 30, 2026, permits the attorney general to recover civil penalties under the state's unfair trade practices law.[22] The law also requires developers and deployers to implement risk management programs.
Illinois has extended algorithmic discrimination liability to employers whose use of AI in employment decisions results in discrimination against protected classes.[23] State agencies enforce this law.
Likeness Protections
Since AI can generate images, audio and video imitating real people, several states have begun enacting legislation to safeguard publicity rights.
Tennessee passed a law in 2024 that provides individuals with the exclusive right to the commercial use of their name, photograph, voice, or likeness in any medium and manner, including AI.[24] This law provides a private right of action to individuals, such as recording artists, and to their license holders, such as record labels, and it grants individuals the right to sue those who enable the unauthorized use of their likeness.
Similarly, Utah enacted a law in 2025 that broadened its likeness protections to unauthorized commercial use of artificially recreated identities, including voice and image.[25] This law includes a private right of action permitting those affected to sue those who caused the publication of their likeness for damages, including punitive damages, injunctive relief, and attorney fees and costs.
Takeaways
The rapid expansion of AI-related legislation introduces obligations for developers, deployers and businesses using AI, often backed by enforcement mechanisms such as private rights of action and civil penalties. For carriers, this undoubtedly will lead to claims testing coverage under various insurance lines, with a focus on insuring terms, definitions and exclusions. Even for attorney general actions, where indemnity for fines or statutory penalties may be excluded or otherwise outside the insuring terms, disputes could arise regarding defense obligations, increasing the complexity for claims handlers.
These prominent trends do not fully account for all the new laws enacted to govern AI, and states continue to explore new AI-related laws that may serve to expand civil liability. Continued state legislation will create further piecemeal regulatory and liability schemes across the country.
On Dec. 11, President Donald Trump signed an executive order that seeks to preempt and challenge state AI laws.[26] At this time, it remains uncertain which state laws, if any, will be targeted. Regardless of federal action, claims professionals would be wise to continue monitoring the impact of AI-related legislation.
[1] Cal. Bus. & Prof. Code §§22601–22606 (West 2025) (effective Jan. 1, 2026), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB243; N.H. Rev. Stat. Ann. §§639:3, III-a, 507:8-k (2025) (effective Jan. 1, 2026), https://gc.nh.gov/bill_Status/pdf.aspx?id=17004&q=billVersion; N.Y. Gen. Bus. Law art. 47, §§1700-1704 (McKinney 2025), https://www.nysenate.gov/legislation/laws/GBS/A47; Texas Responsible Artificial Intelligence Governance Act, Tex. Bus. & Com. Code §§552.001-.003, 552.051-.057 (West 2025) (effective Jan. 1, 2026), https://capitol.texas.gov/tlodocs/89R/billtext/pdf/HB00149F.pdf#navpanes=0.
[2] Cal. Bus. & Prof. Code §22601.
[3] Id.
[4] N.H. Rev. Stat. Ann. §§639:3, III-a(a).
[5] N.Y. Gen. Bus. Law art. 47, §§1700-1704.
[6] Tex. Bus. & Com. Code §§552.001-.003, 552.051-.057.
[7] Protection from Intimate Deep Fakes Act, Mich. Comp. Laws §§752.381-.390 (2025), https://www.legislature.mi.gov/documents/2025-2026/publicact/pdf/2025-PA-0011.pdf.
[8] Id. §752.381(3).
[9] Id.
[10] N.D. Cent. Code §§12.1-27.1-01(13), 12.1-27.1-03.3 (2025), https://www.sos.nd.gov/sites/www/files/documents/services/leg-bills/2025-69/house-bills/1351.pdf.
[11] Id. §12.1-27.1-03.3(6).
[12] Id. §12.1-27.1-03.3(7).
[13] Mich. Comp. Laws Ann. §168.932f (West 2023), https://legislature.mi.gov/documents/2023-2024/publicact/pdf/2023-PA-0265.pdf.
[14] Id.
[15] Cal. Civ. Proc. Code §35 (West 2024), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB2839; Cal. Elec. Code § 20012 (West 2024), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB2839.
[16] Haw. Rev. Stat. §§11-303–11-304 (2024), https://data.capitol.hawaii.gov/sessions/session2024/bills/SB2687_HD2_.pdf.
[17] Ariz. Rev. Stat. Ann. §16-1024 (West 2024), https://www.azleg.gov/legtext/56leg/2r/bills/sb1359h.htm.
[18] Wellness and Oversight for Psychological Resources Act, Ill. Pub. Act 104-0054 (2025) (enacted) https://ilga.gov/Documents/Legislation/PublicActs/104/PDF/104-0054.pdf.
[19] Id.
[20] Tex. Health & Safety Code Ann. §§183.001–.012 (West 2025), https://capitol.texas.gov/tlodocs/89R/billtext/html/SB01188E.htm.
[21] Tex. Bus. & Com. Code §§552.001-.003, 552.051-.057.
[22] Colo. Rev. Stat. Ann. §§6-1-1701 to -1704 (West 2025), https://s3.us-west-2.amazonaws.com/beta.leg.colorado.gov/7f9f16ac78b35ebefefda426135ad19d.
[23] 775 Ill. Comp. Stat. 5/2-101 to 5/2-102 (2025), https://www.ilga.gov/Documents/Legislation/PublicActs/103/PDF/103-0804.pdf.
[24] Tenn. Code Ann. §§47-25-1101 to -1107 (2024), https://www.capitol.tn.gov/Bills/113/Amend/HA0578.pdf.
[25] Utah Code Ann. §§45-3-2 to 45-3-5, 45-3-7 (West 2025), https://le.utah.gov/Session/2025/bills/enrolled/SB0271.pdf.
[26] Hailey Konnath, Trump Executive Order Targets 'Excessive' State AI Laws, Law360 (Dec. 11, 2025, 11:43 PM), https://www.law360.com/articles/2421298/trump-executive-order-targets-excessive-state-ai-laws.