
Here's a scenario that should keep every corporate risk manager up at night: your company deploys a cloud-based AI meeting tool across the organization. An employee's confidential conversation gets recorded without proper consent. A lawsuit follows. You reach for your insurance policy—and discover your carrier excluded AI-related claims at your last renewal.

This isn't hypothetical. It's happening right now. Insurers are racing to carve AI out of standard liability coverage at exactly the moment when lawsuits against cloud AI meeting tools are hitting federal court. The result is a liability gap that could leave employers holding the bag for the entire financial exposure.

Insurers Are Pulling the Plug on AI Coverage

The insurance industry has seen enough. As of January 2026, the Insurance Services Office (ISO) introduced two new optional endorsements—CG 40 47 and CG 40 48—that allow insurers to exclude generative AI-related claims from commercial general liability policies. With ISO forms underpinning roughly 82% of U.S. property and casualty policies, adoption is expected to be rapid and widespread.

Major carriers are already moving. Berkshire Hathaway, Chubb, and Travelers have requested regulatory approval to exclude AI-related damages from general liability policies, and regulators approved more than 80% of those requests. Some carriers, including Berkley, have introduced absolute AI exclusions across Directors & Officers, Errors & Omissions, and cyber lines.

A Gallagher Re report produced in conjunction with MIT found that generative AI-related lawsuits in the U.S. grew 978% from 2021 to 2025, yet the insurance products most enterprises rely on offer only fragmented coverage for the liabilities AI systems create. The report mapped AI-specific risks against cyber, E&O, product liability, and commercial general liability policies and found significant gaps in every category.

What does this mean for your company? If you deploy any cloud-based AI meeting tool—Otter.ai, Fireflies, or similar products—and a privacy claim results, your general liability policy may no longer respond. You could be entirely on your own.

The Otter.ai Hearing: A Liability Watershed

The timing could not be worse. On May 20, 2026, Judge Eumi K. Lee will hear Otter.ai's motion to dismiss in the consolidated class action In re Otter.AI Privacy Litigation at the San Jose federal courthouse. This ruling will be the first federal test of whether legacy wiretap statutes apply to an AI bot that auto-joins video calls.

The litigation consolidates four class actions filed between August and September 2025. The plaintiffs—none of whom were Otter customers—allege that Otter's meeting assistant recorded their conversations without consent and used the recordings to train AI models. The legal claims span the federal Electronic Communications Privacy Act (ECPA), California's Invasion of Privacy Act (CIPA), Illinois's Biometric Information Privacy Act (BIPA), and the Computer Fraud and Abuse Act.

The potential financial exposure is staggering. Under these statutes, damages could reach $10,000 per ECPA violation, $5,000 per CIPA violation, and up to $5,000 per intentional BIPA violation. With Otter's own press release putting its user base at more than 35 million and over a billion meetings processed, even a fraction of affected participants translates to massive exposure.
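To make that scale concrete, here is a back-of-the-envelope sketch using the per-violation figures above. The affected-participant count is a purely hypothetical assumption for illustration, not a figure from the litigation.

```python
# Back-of-the-envelope statutory exposure estimate.
# Per-violation damages come from the statutes cited above;
# the affected-participant count is a hypothetical assumption.
STATUTORY_DAMAGES = {
    "ECPA": 10_000,  # federal wiretap, per violation
    "CIPA": 5_000,   # California, per violation
    "BIPA": 5_000,   # Illinois, per intentional violation
}

def exposure(participants: int, statute: str) -> int:
    """Worst-case exposure if every participant states one claim."""
    return participants * STATUTORY_DAMAGES[statute]

# Hypothetical: even 1% of a 35-million user base is 350,000 people.
affected = 350_000
print(f"ECPA exposure: ${exposure(affected, 'ECPA'):,}")
# ECPA exposure: $3,500,000,000
```

Even under conservative assumptions, a class numbering in the hundreds of thousands produces exposure in the billions, which is why per-violation statutory damages dominate the risk calculus.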

Meanwhile, Fireflies.ai faces its own BIPA class actions. A second lawsuit—Fricker v. Fireflies.AI Corp.—was filed in March 2026 in the Northern District of Illinois, following the December 2025 Cruz case. As we covered in our analysis of the BIPA lawsuit wave hitting AI meeting bots, these lawsuits target a fundamental design pattern, not an isolated product flaw.

The Employer Liability Trap

Here's what many employers don't realize: the lawsuits may name the AI vendor, but the liability doesn't stop there. Employment law firm Littler Mendelson identified seven distinct risk areas employers should evaluate when deploying AI meeting tools, including consent, biometrics, accuracy, discrimination, attorney-client privilege, data retention, and confidentiality. As the firm's February 2026 analysis noted, these tools "introduce significant legal and operational risks, including potential violations of privacy and wiretap laws, exposure of confidential or privileged information, employment discrimination concerns, compliance challenges under new AI regulations, and increased discovery costs."

Illinois courts have held that multiple entities can be responsible for the same biometric data collection. An employer that licenses or enables AI notetaker usage in meetings involving Illinois residents may be directly implicated in BIPA claims if safeguards aren't in place. The employer doesn't need to collect the biometric data—merely enabling the tool that does can be enough.

On the discrimination front, AI transcription tools may consistently misunderstand accents, speech impediments, or other characteristics tied to protected classes. If those inaccurate transcripts inform performance reviews, hiring decisions, or disciplinary actions, the employer faces disparate impact exposure that no AI vendor indemnification clause will cover.

The Insurance Gap in Practice

Let's trace how this liability gap works with a concrete example:

  1. Your company subscribes to a cloud AI meeting tool. It auto-joins calls via calendar integration and transcribes conversations.
  2. A meeting participant in California or Illinois is recorded without all-party consent. The tool's terms of service say the account holder is responsible for obtaining consent—but that responsibility often falls through the cracks.
  3. A lawsuit is filed. The claims invoke federal wiretap law, state privacy acts, and potentially BIPA if voiceprints were collected.
  4. You file an insurance claim. Your carrier points to the new ISO AI exclusion endorsement attached at your last renewal. Claim denied.
  5. You check your cyber policy. It covers data breaches and ransomware, but as Gallagher Re found, it typically does not cover "defamation, hallucinations leading to financial loss or data disclosure via AI outputs."
  6. You check your E&O policy. It's designed for vendors and developers who supply AI tools—not the enterprises deploying them.
  7. You're left self-insuring a claim with potentially millions in statutory damages.
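The trace above reduces to a simple lookup: which policy line, if any, responds to an AI meeting-tool privacy claim. The sketch below is illustrative only; the mappings are simplified assumptions drawn from the scenario above, not underwriting guidance.

```python
# Simplified sketch of the coverage gap traced above.
# Mappings are illustrative assumptions, not underwriting guidance.
POLICY_RESPONDS = {
    "general_liability": False,  # ISO CG 40 47 / CG 40 48 AI exclusion attached
    "cyber": False,              # scoped to breaches/ransomware, not AI outputs
    "e_and_o": False,            # written for AI vendors, not deployers
    "standalone_ai": True,       # emerging products with limited coverage caps
}

def responding_policies(portfolio: list[str]) -> list[str]:
    """Return the policy lines in the portfolio that would respond."""
    return [p for p in portfolio if POLICY_RESPONDS.get(p, False)]

# A typical enterprise portfolio today responds with nothing:
print(responding_policies(["general_liability", "cyber", "e_and_o"]))
# []
```

Under these assumptions, the only responding line is a standalone AI product the organization probably hasn't purchased, which is the self-insurance outcome described in step 7.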

Gartner has urged general counsel to assess new AI-specific insurance offerings, noting that traditional policies were never designed for risks like AI hallucinations, algorithmic bias, or privacy violations from AI processing. Standalone AI liability products are emerging from companies like Armilla, Munich Re, and Testudo—but they represent an additional cost most organizations haven't budgeted for, and coverage caps may not match the scale of potential claims.

Why On-Device Processing Eliminates the Risk

Every layer of this liability chain—wiretap claims, biometric privacy violations, insurance exclusions—depends on one architectural choice: sending audio data to a third-party cloud server for processing.

When a cloud AI meeting tool joins your call, it acts as a third-party participant. It captures audio, transmits it to external servers, processes it with remote AI models, and stores the results. That is precisely what creates the legal exposure: a third party intercepting communications without the consent of all participants.

On-device processing breaks this chain entirely. When transcription happens locally on your own iPhone or Mac, there is no third-party interception. No audio leaves your device. No voiceprints are collected on external servers. No data is stored in someone else's cloud. The legal theories underpinning every major AI meeting tool lawsuit simply don't apply.

Apple's approach to AI reinforces this architecture. As Apple describes it, "the cornerstone of Apple Intelligence is on-device processing," keeping personal data local and secure. Basil AI leverages Apple's on-device Speech Recognition framework to transcribe meetings in real time without any server round-trip. Your audio never leaves your hardware.

For a deeper look at how wiretap law exposure works with cloud AI tools, see our article on AI meeting bots and wiretapping laws.

🔑 Key Takeaway

On-device transcription doesn't just reduce privacy risk—it eliminates the entire liability surface that insurers are now refusing to cover. No cloud processing means no wiretap claim, no biometric data collection, no third-party data sharing, and no insurance gap to worry about.

What Employers Should Do Right Now

With the Otter.ai motion-to-dismiss hearing days away and insurance carriers actively excluding AI risk from standard policies, employers need to act immediately:

  1. Audit your insurance policies. Check whether your carrier has attached ISO endorsements CG 40 47 or CG 40 48 at your last renewal. Ask your broker specifically about AI-related exclusions across all lines—general liability, cyber, E&O, and D&O.
  2. Inventory your AI meeting tools. Determine which cloud-based transcription tools are in use across the organization. Pay special attention to tools that auto-join meetings via calendar integration, as these carry the highest consent risk.
  3. Assess your consent framework. In all-party consent states like California, Florida, Illinois, and Massachusetts, a meeting host clicking "allow" is not sufficient consent from all participants. Review GDPR Article 7 requirements if any participants are in the EU.
  4. Review vendor terms of service. Otter.ai's privacy policy places responsibility on the account holder to obtain consent. Fireflies' terms similarly require customers to ensure they have "suitable safeguards" before recording third parties. If a claim arises, these vendors will point at you.
  5. Evaluate standalone AI insurance. If you must continue using cloud AI tools, explore dedicated AI liability products. But recognize that these are expensive, narrowly scoped, and may not cover the full range of privacy claims now being litigated.
  6. Switch to on-device alternatives. The most effective risk mitigation is architectural. Tools that process audio entirely on-device—never sending data to cloud servers—eliminate the legal theories that drive AI meeting tool litigation and the insurance gaps that follow.

The Bottom Line

The insurance industry doesn't make emotional decisions. When actuaries start writing exclusions, it means the risk data is real enough to model and severe enough to avoid. The simultaneous emergence of AI liability exclusions and AI meeting tool lawsuits is not a coincidence—it's a market signal that cloud-based AI meeting tools represent an unacceptable risk profile.

Employers who continue deploying cloud AI transcription tools are doing so in a world where their insurance may not respond, their legal exposure is being tested in federal court, and their vendors' terms of service put the liability squarely on them. That's not a sustainable position.

On-device AI transcription isn't just a privacy preference—it's a liability strategy. When no audio leaves your device, there's nothing to insure against.


Eliminate the Liability Gap

Basil AI processes everything on your device. No cloud. No third-party servers. No insurance gap. Just private, on-device meeting transcription.