Every time an AI meeting bot joins your Zoom call and identifies who said what, it may be creating something you can never change: a mathematical model of your voice. Unlike a compromised password, a stolen voiceprint is permanent. And in 2026, a wave of class action lawsuits is revealing that the most popular AI transcription tools have been collecting these biometric identifiers from millions of meeting participants—without their knowledge or consent.
The Lawsuits Shaking the AI Transcription Industry
The legal reckoning began in late 2025 and has accelerated throughout 2026. Multiple class action lawsuits now target the biggest names in AI meeting transcription, alleging systematic violations of biometric privacy laws.
Cruz v. Fireflies.AI Corp.
In December 2025, Illinois resident Katelin Cruz filed a class action against Fireflies.AI in federal court. According to the complaint, she participated in a virtual meeting hosted by an Illinois nonprofit where a Fireflies bot had been enabled—and her voice was recorded and processed without her knowledge. The lawsuit alleges that Fireflies' "Speaker Recognition" feature generates voiceprints, which are biometric identifiers covered under Illinois law, without providing the required written notice or obtaining consent. As reported by the National Law Review, Cruz was not a Fireflies user, never agreed to its terms of service, and had no idea her biometric data was being captured.
A second BIPA class action against Fireflies was filed in March 2026 by plaintiff Ethan Fricker in the Northern District of Illinois, seeking statutory damages and injunctive relief on behalf of all affected Illinois residents.
In re Otter.AI Privacy Litigation
Meanwhile, Otter.ai faces a consolidated class action—In re Otter.AI Privacy Litigation, 5:25-cv-6911 (N.D. Cal.)—alleging the company unlawfully records private conversations through its AI transcription tool. According to the complaint, Otter.ai uses the resulting transcripts to train its machine-learning models without obtaining prior consent from all meeting participants. The legal claims span the Electronic Communications Privacy Act, the California Invasion of Privacy Act, and BIPA violations for voiceprint collection.
⚠️ Why Voiceprints Are Different
A voiceprint is a mathematical model of your vocal characteristics—pitch, cadence, tone, and vocal-tract shape. Much like a fingerprint, it can identify you with high certainty. But unlike a password, you cannot change your voice if your voiceprint is compromised. Once collected and stored by a cloud service, your biometric data creates a permanent privacy risk that cannot be undone.
How AI Meeting Bots Create Voiceprints
To understand the legal exposure, you need to understand the technology. AI meeting tools use a technique called speaker diarization—the process of distinguishing who said what in a conversation. To accomplish this, these platforms analyze the unique vocal characteristics of each participant and generate a voiceprint for every speaker, including people who never signed up for the service.
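To make diarization concrete, here is a minimal, illustrative sketch of the core idea: each utterance is reduced to a fixed-length voice embedding, and embeddings are matched against previously seen speakers by cosine similarity. The toy vectors and the 0.85 threshold are hypothetical stand-ins for real acoustic features; no vendor's actual pipeline is shown.

```python
import math

def cosine_similarity(a, b):
    """Compare two fixed-length voice embeddings (voiceprints)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def assign_speaker(embedding, known_speakers, threshold=0.85):
    """Match an utterance's embedding to an enrolled voiceprint,
    or register a new speaker if nothing is similar enough."""
    for name, print_vec in known_speakers.items():
        if cosine_similarity(embedding, print_vec) >= threshold:
            return name
    name = f"Speaker {len(known_speakers) + 1}"
    known_speakers[name] = embedding  # this stored vector IS the biometric identifier
    return name

# Toy embeddings: each utterance is a short feature vector.
speakers = {}
print(assign_speaker([0.9, 0.1, 0.2], speakers))    # Speaker 1
print(assign_speaker([0.88, 0.12, 0.19], speakers)) # matches Speaker 1
print(assign_speaker([0.1, 0.9, 0.3], speakers))    # Speaker 2
```

Note what the sketch makes visible: the `known_speakers` dictionary of enrolled embeddings is precisely the voiceprint database that, in a cloud service, persists on someone else's server.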
As the law firm Mason LLP documented in its analysis, both the Fireflies and Otter lawsuits share a troubling pattern: the plaintiffs were not users of the AI tool, had no contractual relationship with the company, and received no notice that their biometric data was being collected.
This is the critical distinction. When a meeting host activates an AI transcription bot, every participant's voice is processed for speaker diarization. Their voiceprints are extracted and stored on cloud servers. But those participants may have never downloaded the app, agreed to any terms, or even known the bot was present. For a deeper look at how cloud-based speaker identification creates tracking risks, see our analysis of speaker diarization privacy risks.
BIPA: The Law That's Changing Everything
The Illinois Biometric Information Privacy Act (BIPA) is the strictest biometric privacy law in the United States. Under BIPA, voiceprints are classified as "biometric identifiers," and companies that collect them must follow strict rules:
- Written notice to individuals before collecting biometric data
- Informed written consent from each person whose data is collected
- A publicly available retention and destruction policy
- No sale or profit from biometric identifiers
The penalties for noncompliance are severe—up to $1,000 per negligent violation and $5,000 per intentional or reckless violation. And crucially, BIPA grants individuals a private right of action, meaning consumers can file lawsuits directly without requiring government enforcement.
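Because claims accrue per person and per collection, exposure compounds fast. A back-of-the-envelope sketch using BIPA's statutory amounts (the meeting and attendee counts below are hypothetical):

```python
NEGLIGENT_PER_VIOLATION = 1_000  # BIPA statutory damages, negligent violation
RECKLESS_PER_VIOLATION = 5_000   # intentional or reckless violation

def bipa_exposure(meetings, attendees_per_meeting, per_violation):
    """Each attendee in each recorded meeting is a separate potential violation."""
    return meetings * attendees_per_meeting * per_violation

# Hypothetical mid-size company: 50 recorded meetings a week for a year,
# 6 attendees per meeting.
meetings_per_year = 50 * 52
print(f"potential violations: {meetings_per_year * 6:,}")
print(f"negligent-tier exposure: ${bipa_exposure(meetings_per_year, 6, NEGLIGENT_PER_VIOLATION):,}")
```

Even at the negligent tier, this hypothetical yields 15,600 potential violations and eight-figure statutory exposure from a single year of routine meetings.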
As the law firm SGR noted, employers that deploy these tools are not insulated from liability either—Illinois courts have held that multiple entities can be responsible for the same biometric collection when they enable or benefit from the technology's use.
Beyond Illinois: A Growing Patchwork
BIPA is the gold standard, but it's not alone. Texas, Washington, California, and Colorado have all enacted biometric data privacy statutes with varying degrees of protection. California's Privacy Rights Act classifies biometric information, including voiceprints, as sensitive personal information requiring heightened protections. A single virtual meeting with participants across multiple states can trigger compliance obligations under several different biometric privacy statutes simultaneously.
The EU AI Act Adds Another Layer
The regulatory pressure isn't limited to the United States. Under GDPR Article 9, biometric data processed to uniquely identify a person is a special category that may only be handled with an explicit legal basis, such as explicit consent. And beginning on August 2, 2026, the EU AI Act adds another layer of obligation: AI systems used for worker monitoring and management may be classified as high-risk, a category that could encompass AI meeting tools offering sentiment analytics or productivity scoring alongside transcription.
Even more significantly, the EU AI Act prohibits using AI systems to infer the emotions of a natural person in the workplace on the basis of biometric data. Any cloud-based AI meeting tool that performs emotion or sentiment analysis on voice data therefore risks outright prohibition, or at minimum costly compliance reviews, within the EU.
Why Employers Should Be Alarmed
If your organization uses AI transcription tools, these lawsuits should be on your radar—even if you're not based in Illinois. Here's why:
- Liability extends to employers, not just vendors. An organization that licenses or encourages the use of an AI notetaker may be implicated in BIPA claims if proper safeguards aren't in place—even if headquartered outside Illinois, as long as any meeting participant is physically located in the state.
- Voiceprints compound exposure per meeting. Every meeting and every attendee represents a separate potential violation. BIPA claims accrue each time biometric data is unlawfully collected, not just the first time.
- HR and talent teams are especially vulnerable. AI notetakers during candidate interviews may be capturing voiceprints of applicants who were never informed their biometric data was being processed.
- Discovery risks multiply. AI-transcribed conversations, meeting minutes, and summaries may become discoverable in litigation, as the Duane Morris law firm warned in its February 2026 analysis.
The existing Otter.ai litigation has also raised broader questions about AI transcription in regulated contexts. For more on how these tools create compliance risks in specific industries, see our article on AI transcription lawsuits and privilege waiver.
The Hospital Breach That Proved the Risk
The dangers aren't theoretical. A case investigated by the Ontario Information and Privacy Commissioner illustrated exactly how AI transcription tools can cause real-world biometric privacy breaches. An AI transcription tool gained access to a hospital's virtual hepatology rounds meeting through a former physician's personal calendar—even though that physician had left the hospital over a year earlier. The tool joined the meeting, recorded everything, generated detailed notes containing the personal health information of seven patients, and automatically distributed those notes. The incident triggered a mandatory breach notification.
The hospital's response was telling: it blocked AI scribe tools like Otter.ai through firewall configuration, updated its privacy training to explicitly address AI transcription, and recommended that physicians routinely check meeting participant lists for unapproved AI tools before discussing any personal health information.
The On-Device Alternative: Why Architecture Matters
These lawsuits all share a root cause: cloud processing. When your voice is sent to a remote server for transcription and speaker identification, a third-party company gains access to your biometric data. They store it, process it, and—as the lawsuits allege—may use it to train AI models.
On-device transcription eliminates this risk at the architectural level. When speaker identification runs entirely on your device:
- No voiceprint database exists on any server—there's nothing to breach
- Voice embeddings are computed locally and never leave the device
- No cross-meeting tracking is possible by third parties
- The service provider has zero access to voice characteristics
- No BIPA exposure—you can't violate biometric consent laws when no biometric data is collected by a third party
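The list above can be sketched as a pattern rather than a product. Here is a hypothetical illustration (not Basil AI's actual code) of ephemeral, on-device speaker labeling: embeddings exist only in process memory for the duration of one session and are discarded when the meeting ends, so no voiceprint database ever comes into existence.

```python
class EphemeralDiarizer:
    """Labels speakers within a single session using in-memory
    embeddings only; nothing is written to disk or sent anywhere."""

    def __init__(self):
        self._session_embeddings = []  # lives only in RAM, only for this session

    def label(self, embedding):
        """Return a session-local label like 'Speaker 1' for this utterance."""
        for idx, known in enumerate(self._session_embeddings):
            if self._similar(embedding, known):
                return f"Speaker {idx + 1}"
        self._session_embeddings.append(embedding)
        return f"Speaker {len(self._session_embeddings)}"

    def end_session(self):
        # Discard all biometric material the moment the meeting ends.
        self._session_embeddings.clear()

    @staticmethod
    def _similar(a, b, tol=0.1):
        # Toy similarity check; real systems compare learned embeddings.
        return all(abs(x - y) < tol for x, y in zip(a, b))

d = EphemeralDiarizer()
print(d.label([0.9, 0.1]))   # Speaker 1
print(d.label([0.91, 0.12])) # Speaker 1 again
d.end_session()              # after this, no voiceprints survive the meeting
```

The design choice is the point: speakers are distinguished within a meeting without any persistent identifier linking a voice across meetings, which is what removes the biometric-collection question entirely.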
Apple's approach to AI reinforces this model. As Apple's privacy documentation explains, the cornerstone of Apple Intelligence is on-device processing: keeping personal information on the device rather than sending it to external servers. Applications built on Apple's Speech framework can require on-device recognition, running locally with hardware acceleration such as the Neural Engine, so that voice data never has to leave the user's hardware.
🔒 How Basil AI Eliminates Voiceprint Risk
Basil AI performs all transcription and speaker identification directly on your iPhone or Mac using Apple's on-device Speech Recognition. Your voice never leaves your device. No cloud servers. No voiceprint databases. No biometric data collection. No BIPA exposure. It's not just privacy by policy—it's privacy by architecture.
- 100% on-device processing—zero cloud upload
- 8-hour continuous recording capability
- Speaker diarization without cloud voiceprints
- Apple Notes integration via iCloud
- No terms of service granting rights to your voice data
What You Should Do Right Now
Whether you're an individual professional, an HR leader, or a compliance officer, the 2026 BIPA lawsuit wave demands immediate action:
- Audit your AI meeting tools. Determine whether any transcription or notetaking apps your organization uses collect biometric data, including voiceprints through speaker recognition features.
- Check for AI bots on your calls. Most AI meeting assistants join as a named participant. If you see an unfamiliar bot in the attendee list, ask the host about it before speaking.
- Review vendor privacy policies. Look specifically for how long voiceprint data is retained, whether it's used for AI model training, and whether there's a publicly available retention and destruction policy.
- Implement consent frameworks. If you must use cloud-based tools, build a robust consent process that notifies all participants and obtains informed written consent before any biometric data capture occurs.
- Switch to on-device processing. The simplest way to eliminate biometric privacy risk is to use tools that never send voice data to a cloud server in the first place.
The Bottom Line
The BIPA lawsuits against Fireflies.AI and Otter.ai are just the beginning. As biometric privacy laws expand across states and the EU AI Act takes full effect in August 2026, the legal exposure for cloud-based AI meeting tools will only increase. Organizations that continue to deploy these tools without proper governance face compounding liability with every meeting and every participant.
The solution isn't to stop using AI transcription—it's to use transcription tools that respect the fundamental principle that your voice belongs to you. On-device processing isn't just a technical preference. In 2026, it's a legal imperative.