By Brian Lee-Mounger Hendershot

Welcome to the first AI election. Here’s what local officials need to know, and what they can do to prepare

Brian Lee-Mounger Hendershot is the managing editor for Western City magazine; he can be reached at bhendershot@calcities.org. Additional contributions by Alex Guzman, editorial assistant.


Artificial intelligence (AI) probably can’t do your job, but it could undermine this year’s elections. Experts and federal agencies warn that AI’s potential benefits could be outweighed by malicious actors, inadequate regulation, and lax content moderation. One super PAC already used AI to impersonate former President Donald Trump, and a magician created fake robocalls discouraging people from voting. In Europe and India, voters are being bombarded with misleading, digitally altered images, audio clips, and videos.

Although some major tech companies have begun taking action, they have also disbanded or scaled back internal guardrails. “We’re in this weird place where in some ways, at this particular moment in the U.S., we’re actually worse off than we were in 2016,” said David Harris, an expert on AI ethics and former member of Meta’s Responsible AI team.

A handful of states have taken steps to stop fake content. But in California, pending proposals won’t take effect until 2025 — if they get passed at all. Absent meaningful action from the federal government, it’s up to us — especially elected officials and other trusted community leaders — to protect this year’s elections from AI-generated disinformation. But we must do so in a way that maintains trust in the election system.

“We all need to work together to help voters level up their, for lack of a better term, BS meters,” said Jonathan Mehta Stein, executive director of California Common Cause. “Everyone needs to be smarter, more skeptical consumers of political information in 2024.”

What does AI know?

Experts, including those speaking on background, agree that most people don’t know what AI means — partly because of willful marketing obfuscation, but also because of breathless media coverage. And that’s part of the problem. If you can’t define it, it’s hard to regulate it. AI has become an umbrella term that means everything and nothing.

“We have to educate ourselves on its pros and cons, its opportunities and pitfalls,” said Dr. Peter Pirnejad, Los Altos Hills city manager and Cal-ICMA president-elect. “We have to learn more about it before we start putting up the guardrails.”

There are two distinct types of AI that city leaders should know: predictive and generative. The former has been around for decades. Insurance companies, courts, and banks use predictive AI to sift through large amounts of data, identify patterns, and make decisions.

Rather than perceiving and classifying, generative AI uses data to create text, images, sounds, and videos. The results can be both extraordinarily impressive and profoundly disappointing — and flat-out wrong. Generative AI is a genuine technological breakthrough, but AI doesn’t “know” anything. It’s merely identifying and repeating a pattern.
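For readers who find it easier to see the distinction concretely, here is a deliberately tiny, illustrative sketch in Python — the word-pair “model” and toy corpus are invented for this example and bear no resemblance to real systems. It shows how the same learned pattern can be used predictively (score the likeliest next word) or generatively (sample new text that sounds fluent without any understanding behind it).

```python
# A toy illustration only: the "model" is just counts of which word follows
# which in a tiny made-up corpus. Predictive use scores the likeliest next
# word; generative use samples new text from the same counts.
import random
from collections import Counter, defaultdict

corpus = "the city council met on tuesday and the council voted on the new budget".split()

# "Training": record which word tends to follow which.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word):
    """Predictive use: return the single most likely next word, if any."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else None

def generate(start, length=10):
    """Generative use: repeatedly sample from the same pattern to make new text."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(predict_next("the"))  # a single classification-style answer
print(generate("the"))      # new text stitched together from repeated patterns
```

The generated sentence may read smoothly, but nothing in the code understands city councils or budgets — which is the point the paragraph above makes about generative AI at large.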

Dr. Hany Farid, a professor in the School of Information at the University of California, Berkeley, says it’s not the tools themselves that are the problem. AI that is trained ethically, overseen by humans, and checked for bias can be beneficial. For example, voters could use AI to sift through public records to identify which politicians most align with their values.

The problem arises when companies disperse the technology widely without guardrails or oversight. “If all I had the ability to do was to create a piece of content and share it with my five friends on email, honestly, I don’t really care,” Farid said. “But that’s not all I can do, right? I can post these things on social media, and I can destroy individual lives. I can create large-scale fraud, and I can disrupt democracies because of the distribution channels.”

What can city officials do?

AI-generated content is turbocharging — and arguably automating — existing disinformation tactics. People could always create fake images of candidates in handcuffs or flood an agency with public records requests. But now they can do so on a massive scale.

“Confirmation bias validates people’s beliefs and solidifies their positions,” Pirnejad said. “Generative AI can automatically generate articles from various sources, reinforcing perspectives that validate people’s biases. Even if most of the information is accurate and only a small percentage is incorrect, the damage is done.”

When it comes to elections, you don’t need to fool everyone. You only need to target a few competitive districts on Election Night. “You run a massive campaign that tells people, scares them, tells them not to go to the polls — whatever it is — you move 50,000 votes,” Farid said. “That’s the ballgame.”

So, what can local officials do? Harris and others created a seven-step checklist, as has the U.S. Election Assistance Commission. Their recommendations should sound familiar to any city official. Learn about the issue, develop crisis plans, strengthen relationships, identify trusted messengers, and follow best security practices. When in doubt, physically verify the information before acting on it.

These are the same principles you might apply when planning for a natural disaster, as election officials did in Arizona. What is your response if a robocall tells someone to go to the wrong place? What if someone releases a deepfake of an election official claiming they are going to rig the election? What if that deepfake contains some true information mixed in with false information? What about false claims of deepfakes?

“We have some time — not a lot of time — but we have time,” Farid said. “And if I’m a local election official, I want to be telling every single one of my voters, this is where you go for voting information … We know what the attacks are going to look like. So, some of the stuff I think we can plan for. Some of it will be hard to plan for.”

Electoral candidates will also need to weigh ethical considerations. Cambridge Analytica’s claims about its ability to target voters in the 2016 election were overblown. With today’s AI tools, they no longer would be.

“There’s this seedy underground emerging of people who can help you develop new, dirty campaign tricks turbocharged by this new technology,” Stein said. “Everyone running for office is going to have to be on guard to not make a bad hire, but then also to guard against these things being used against them.”

Local elected officials may not feel the effects of AI-generated disinformation equally. Disinformation campaigns in a few districts — particularly swing districts or places with few outside watchers — can have an outsized impact. But AI can also exacerbate existing inequalities, increase voter suppression, and worsen sexual harassment.

“If there’s a deepfake of Joe Biden, we’ll know about it,” Stein said. “But if there’s a deepfake of your mayor, or your city council person in a community where your local newspaper died five years ago, who’s going to be shining a light on that?”

Knowing about and planning for AI-generated disinformation — as outlined in the linked checklists — are good first steps. But local officials cannot solve this issue by themselves.

What might long-term solutions look like?

Experts say if we truly want to stop the flood of fake content, we need to stop it at the source. Yet few industries have been regulated as loosely as America’s tech giants, which have long operated on a “move fast and break things” ethos. Changing that is easier said than done: Congress passed just 27 bills in 2023. And Gov. Gavin Newsom, who warned last month against over-regulating AI, is loath to sign anything that adds new costs to the state’s dwindling budget.

Advocates point to fees as a way to generate the needed funding. It’s the same logic underpinning California’s successful moves toward a circular economy: companies that create harmful but useful products should bear the cost of managing them.

“I would argue it’s almost unconscionable to not take fees from this industry to regulate it, because why should the public be paying for something that is generating so much investment right now,” Harris said. 

On the business side, some industry leaders have called for more regulation. However, companies like Meta — which experts say are key to clamping down on disinformation — are often punished for doing the right thing. Facebook once promoted a “time well spent” initiative, which led to the rise of what is now its biggest competitor, TikTok.

Then there is the First Amendment guarantee of free speech, which limits how and when regulators can respond to disinformation. The technology itself poses another challenge: cybersecurity is a game of whack-a-mole, and fake content is no different.

Still, experts say there are some guardrails that could make a major difference. Regulators could force companies to embed watermarks in AI-generated content or make it harder for people to create images of, say, the Pentagon under attack. Policymakers could expand anti-fraud and defamation laws to speed up the removal of malicious material, strengthen liability laws, or create specialized oversight bodies.

These actions won’t stop AI-generated disinformation. But they would make it a lot harder for the average person to create a deepfake, turning the technology into a manageable risk.

While these are tall orders, there is some hope. There isn’t yet a strong partisan divide on AI, and an increasing number of Americans worry about AI-generated disinformation. There is a path forward to meaningful action, albeit a rocky and uncertain one. Pending proposals in the California Legislature, combined with the European Union’s AI Act, could create a de facto global standard.

Ultimately, though, this is an issue that requires a holistic approach. For example, better media literacy and a strong, culturally sensitive network of local journalists can also help protect us from disinformation.

“It’s really a creative, social question about what we as a society want our relationship to technology to be,” Harris said. “If we get lazy and just go with the status quo, we’re going to keep getting technologies that are not consistent with our values and that don’t do the things that we want, because those will be the most profitable technologies.”

“There’re two futures here for the internet,” Farid said. “This f—– up dystopian nightmare, where Mark Zuckerberg and Elon Musk’s future is envisioned, where we either strap a device to our face or wire it directly into our cortex. And we monetize every aspect of human nature. … [And] there’s a more hopeful future where the internet is the internet we were promised 20, 30 years ago. Where it democratized access to knowledge and democratized information. It created a more interesting and thoughtful and caring society.”