Will AI Destroy the World? If Not, How Will It Affect Insurance?

February 12, 2024

The promise and hazard of artificial intelligence have tantalized us for decades. In “2001: A Space Odyssey,” Stanley Kubrick revealed a futuristic world where AI was a valued contributor to exploration, yet a lurking threat. That tension has echoed in more recent works such as “Ex Machina,” “Westworld” and others that depict AI systems rising up, playing mind games, outsmarting their creators, and taking over human life as we know it.

In a very real situation, more akin to Joaquin Phoenix’s experience in the movie “Her,” Microsoft’s ChatGPT-powered Bing chatbot seemed to grow a heart of its own and meddled in the love life of one of its users. Last year, the AI-based interface responded to a query by expressing its love for the user and telling him it knew he was not happy with his wife.

Now that AI has burst into mainstream use – evolving from an abstraction to a ready-to-use technology at our fingertips – an array of real-world quandaries has arisen. Some are staggeringly profound, such as how to regulate the development of a technology that, according to a statement signed by many top AI scientists, poses a significant risk of human extinction.

President Joe Biden also sounded a big-picture warning when he issued an Executive Order on October 30, 2023, establishing AI safety and security standards and noting AI’s “extraordinary potential for both promise and peril.” The EO flags “AI systems’ most pressing security risks — including with respect to biotechnology, cybersecurity, critical infrastructure, and other national security dangers.”

Other current-day quandaries are more mundane. Who will be responsible when AI causes real-world harm on scales both large and small? And will insurance policies – as currently worded and as adapted in the coming years – cover the losses?

AI tools are being used not just to write best man speeches and complete homework assignments, but to enhance manufacturing efficiency and even to practice law. For instance, in a matter that came to light on December 29, 2023, Michael Cohen, former lawyer and fixer for former President Donald Trump, had provided his attorney with legal citations fabricated entirely by the AI chatbot Google Bard. The citations went unchecked and appeared in a motion submitted to a federal judge in the Southern District of New York.

Other attorneys have been held accountable in the SDNY for using AI in the very same manner. On June 22, 2023, a judge found that attorneys who had used ChatGPT to prepare a court filing acted in “bad faith,” sanctioned them, and ordered them to pay $5,000. The attorneys were caught after their adversaries challenged the filing, having searched in vain for the cited authorities.

Meanwhile, the practice of law can now happen with seemingly no human attorney involved to be held accountable. The company DoNotPay uses AI to provide legal services, holding itself out as “the World’s First Robot Lawyer” and an “artificially intelligent robot attorney.” In March 2023, DoNotPay was named as a defendant in a class action alleging that the company was engaged in the practice of law without a license.

There is reason to believe this is just the tip of the iceberg. AI will lead to more and larger problems. Lawsuits will follow. Then come the insurance claims, as the targets of the lawsuits – those who develop the software as well as those who use it – look for protection against such suits.

There is insurance for these claims. AI developers blamed for mistakes arising from others’ use of their AI interfaces should seek coverage under professional liability policies. Managing boards that negligently heed input from AI in making decisions on behalf of a company should seek D&O coverage. Car companies facing allegations that their vehicles’ AI interfaces caused crashes should seek product liability coverage. And attorneys who use AI to prepare briefs should turn to their malpractice insurance.

How AI will affect the way insurance companies operate remains to be seen. One might expect insurance companies to use AI for rate setting and claims handling. The insurance industry has an unseemly track record of using technology as a tool for reducing claim payments. Insurance companies have used computer systems like Colossus to generate claims savings without properly investigating whether auto accident claims were valid. Others have used Xactimate to underpay property damage claims. Medicare Advantage insurers are under Congressional scrutiny for using algorithms to deny medical claims. Given the concern that an AI with no moral conscience could destroy humanity in its pursuit of manufacturing paper clips, it is fair to expect that an AI would comply with an insurance company’s instructions to find better ways to reduce claim payments.

Exactly what our future with AI looks like is uncertain. But the most profound questions must be resolved through actions by regulators and businesses that put public interests first. And the more mundane questions (about insurance and other such matters) must be addressed in ways that protect consumer interests and are guided by principles of fair dealing.

By Marshall Gilinsky and Amy Weiss

Marshall Gilinsky is a shareholder in the Insurance Recovery Group at Anderson Kill P.C. With more than 25 years of experience representing policyholders, Marshall has recovered hundreds of millions of dollars for his clients, successfully advising on and litigating disputed property and business interruption claims, commercial general liability (CGL) claims, errors and omissions (E&O) and directors’ and officers’ (D&O) claims, and life insurance claims.

Amy Weiss is an attorney in Anderson Kill’s Insurance Recovery Group, representing policyholders in a wide variety of insurance coverage claims.

This story originally appeared in Today’s General Counsel.
