AI deception has already conned California government systems. What’s next? | Opinion
We are in an era where our own eyes and ears are becoming unreliable witnesses. As artificial intelligence technology evolves, the playing field of human interaction is being fundamentally altered by tools of manipulation whose reach is effectively unlimited.
From the sanctity of our courtrooms to the integrity of our elections and the security of our life’s savings, digital deception is quietly dismantling the trust required for a functional society.
The judicial system is perhaps the most traditional foundation of truth, yet it is currently facing an authentication challenge. I recently attended a presentation for California prosecutors where retired Sacramento Sheriff Detective Sean Smith detailed how apps like Fake Text Message 2026 allow users to create entire conversation threads that are nearly indistinguishable from reality.
This isn’t just about social media drama. There have been credible reports of individuals sent to jail based on unverified, AI-generated text evidence.
Even more sophisticated are video “deepfakes.” These are videos, photos or recordings that appear real but have been artificially generated, and they have already begun appearing in California courts. In the 2025 case of Mendones v. Cushman & Wakefield, a judge was forced to scrutinize a video of witness testimony that was eventually revealed to be an AI-generated fabrication.
The problem is that humans are poor at spotting these fakes, with studies showing we can discern high-quality video deepfakes only about half of the time.
AI in politics
This erosion of truth extends directly into the democratic process. While AI can help election officials with mundane tasks like ballot-proofing or predicting voter turnout, it also provides bad actors with a high-definition megaphone for misinformation. We are seeing an increase in deepfake content, much of it designed to impersonate candidates or mislead voters about when and where to cast their ballots.
AI is now also being used to create artificial “grassroots” movements that can kill legitimate policy. In February, a Los Angeles Times report highlighted a chilling case where an AI-powered platform generated 20,000 emails that helped defeat a proposal to phase out gas-powered appliances before the South Coast Air Quality Management District. Officials considering the proposal were flooded with automated, deceptive communications.
As of early 2026, over two dozen states have scrambled to enact laws requiring disclaimers on such “materially deceptive” media, though these efforts often face uphill battles in court over First Amendment concerns.
Think about this: A livestream or video is shared of a public meeting. Voices and images of every person who spoke into a microphone are now publicly available. That is enough information to feed AI tools that produce fake videos or recordings impersonating your voice well enough to fool even your closest loved ones.
AI in finance
The financial sector is equally besieged, as fraudsters transition from crude phishing emails to sophisticated “GenAI-fueled” attacks. Deloitte Insights projects that AI-driven banking fraud could result in $40 billion in losses by 2027.
Scammers are using audio snippets harvested from social media platforms like TikTok to clone voices and impersonate family members or bank customers. These voice clones are so effective that even close relatives are frequently fooled, and human accuracy in detecting them is barely better than a coin flip.
We are in a high-stakes arms race in which the technology to deceive is outpacing the technology to detect deception. If we do not prioritize the authentication of our digital world, the very foundations of our legal, political and financial systems will continue to crumble.
Matt Rexroad is an attorney, political consultant and certified fraud examiner.