Let’s “chat” about A.I. and insurance

Reed Smith LLP

October 24, 2023 – The widespread adoption of artificial intelligence technologies (“A.I.”) across platforms and industries is not surprising. Among other operational advantages, A.I. is touted as creating efficiencies and reducing costs and, on a global scale, accelerating progress on key issues such as health, security and sustainability. See K. Basu, “Immigration Lawyers Look to AI to Make Rote Work Faster, Cheaper,” Bloomberg Law (Sep. 28, 2023); J. Peltz, “Top UN tech policy official feels ‘optimistic about AI,’” Chicago Tribune (Sep. 27, 2023).

Although A.I. may hold great promise, businesses are exploring how to work with such technologies in a manner that avoids or limits exposure, including measures to create appropriate guardrails for employee use, adequately protect content and data, manage vendors, and navigate privacy issues and existing and anticipated regulations.

It is critical that companies evaluate and consider their insurance coverage programs when planning for, and responding to, these and other A.I. risks, including employment practices liability, cyber, directors and officer liability, media liability, professional liability/errors and omissions, property and business interruption, and general liability insurance, to name only a few.

A.I. and a changing risk landscape

Although the incorporation of A.I. into business operations is still in its relative infancy, some risks are already materializing. For example, at least one Equal Employment Opportunity Commission (EEOC) claim has been filed: EEOC v. ITutor Group, Inc., et al., No. 1:22-cv-02565 (E.D.N.Y. 2022).

In that case, the EEOC alleged that the defendant company, which provides English-language tutoring services, used an A.I. tool to automatically reject older applicants because of their age. Although this matter settled out of court — Press Release, “iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit,” EEOC (Sep. 11, 2023) — similar employment-related disputes are anticipated.

Emerging risks are not limited to the employment arena. As A.I. is increasingly leveraged across business units, companies must navigate new or heightened exposures in each of those areas.

Companies may face exposures, for example, related to data privacy and cybersecurity claims where A.I. is used to enhance cyber threats. See C. Stupp, “AI Spurs New Cybersecurity Threats,” Wall St. J. (Oct. 6, 2023). Businesses may also conceivably face A.I. exposure for “climate impact” claims. See N. Dolby, “Artificial Intelligence Can Make Companies Greener, but It Also Guzzles Energy,” Wall St. J. (Sep. 12, 2023).

In addition, many companies are already dealing with exposure related to violations of intellectual property laws. For example, numerous copyright infringement lawsuits have been filed against A.I. companies that use internet “scraping” functions to aggregate data for model training and to generate “unique” content, alleging that copyrighted materials are used without the authors’ permission. See B. Brittain, “More writers sue OpenAI for copyright infringement over AI training,” Reuters (Sep. 11, 2023).

Moreover, company boards and leadership might face claims for breaches of their fiduciary obligations related to, for example, inadequate financial reporting where A.I. assists with that function or, more broadly, the implementation of A.I. policies and safeguards. Companies also run the risk of professional liability exposure to the extent A.I. technologies are alleged to adversely impact professional services, including in the medical and financial management industries.

Insurance considerations for A.I. risks

The rise in A.I. technologies across industries and functions should lead companies to question what protections might exist for related exposures. Beyond the implementation of A.I. usage and compliance policies and robust training, and the drafting of strong indemnification provisions in contracts with these technologies in mind, insurance is a key risk management tool.

For risk assessment purposes, insurers should not treat claims that involve A.I. any differently than claims that do not include such a component. If the claim at issue is a third-party lawsuit, for example, with allegations that otherwise trigger one or more coverages within the company’s insurance program, that claim should still be covered absent an applicable exclusion.

As indicated, A.I.-related claims can take many forms, including, for example, alleged violations of employment law, breaches of data privacy statutes, breaches of fiduciary duties or professional obligations, violations of securities laws, intellectual property infringement, or any other number of events, acts or omissions. As such, companies should continue to first look to their insurance policies that would otherwise respond to such claims — employment practices liability, cyber, directors and officers liability, media liability, professional liability/errors and omissions, property and business interruption, and general liability insurance, to name but a few.

In addition to the foregoing “traditional” insurance products, the insurance industry is expected to respond in a more dedicated way to some of the specialized risks faced by companies innovating in the A.I. space. In fact, certain insurers are already beginning to market bespoke insurance advertised to cover the heightened financial risks associated with the development and sale of new A.I. models. See B. Lin, “Is your AI model going off the rails? There may be an insurance policy for that,” Zywave (Oct. 2, 2023).

To date, exclusions specific to A.I. have not been identified in the insurance market. Nonetheless, if such exclusions or other coverage limitations begin to appear during placements and renewals — for example, exclusions for claims, losses, or damages that “arise out of” or are “related to” A.I. — insureds should vigorously resist such changes. Moreover, where policy definitions can be augmented to clearly cover A.I., care should be taken to negotiate for those enhancements.

In addition, policyholders should be vigilant in completing insurance applications. Cyber insurance applications, for example, are notoriously complex and lengthy, and often presuppose a robust understanding of how a company’s technology systems operate. Although insurers have not begun routinely asking non-A.I. companies about A.I. usage during the underwriting process, as the A.I. landscape continues to grow, this might change.

Relatedly, just as insureds must manage their own risks with respect to A.I., they should be watchful where insurers may be employing A.I. to assist with their underwriting and claims handling functions. If insurers lean too heavily on A.I. to aggregate data and/or make underwriting or claims handling determinations without critical and thoughtful human involvement, insureds may bear the brunt of poor A.I. practices. Such practices may run afoul of insureds’ entitlement to good-faith, thoughtful consideration of their submissions, whether in obtaining insurance or in the processing of claims.

More broadly, A.I. represents a new era of risk that insurers are learning to manage and underwrite appropriately. Similar to the sudden onset of cyber-attacks that shifted the cyber insurance industry, a learning curve is inevitable and might lead to variable and uncertain underwriting, pricing, and offerings over the coming years. See Cyber Insurance Academy, “Surge in AI Ushers A ‘New Silent Cyber’ Risk” (Oct. 3, 2023). Insureds should stay closely involved in placements and negotiations to flag material changes, and engage with insurers and brokers accordingly.

Conclusion

The current risk landscape with respect to A.I. is fluid and uncertain, holding not only immense promise but also hidden pitfalls. Insurance is designed to protect against such risks and provide assurances for businesses and their C-suites.

It is important that insureds remain involved and engaged as A.I. continues to grow so that these protections remain available and affordable. Knowledgeable coverage counsel can assist. With adequate protections in place, innovation can continue to flourish.

The information and statements in this article are provided for informational purposes only, and should not be construed as legal advice on any subject matter.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.


Carolyn H. Rosenberg

Carolyn H. Rosenberg is a partner in Reed Smith’s insurance recovery group and a member of the firm’s A.I. and ESG task forces. She advises corporations, directors and officers, risk managers and other professionals when obtaining or renewing insurance policies, addressing corporate indemnification and enterprise risk management, and resolving coverage disputes. She can be reached at [email protected].

David M. Cummings

David M. Cummings is a partner in Reed Smith’s insurance recovery group. His practice focuses on advocating for corporate policyholders seeking counsel with respect to insurance placements and renewals, as well as during disputed claims, litigation, mediation and arbitration. He writes and speaks regularly on emerging risks and insurance issues, including with respect to A.I., ESG, and cyber. He can be reached at [email protected].

