There’s no killer AI app for cities. Experts say that’s a good thing
John Lorinc is a freelance journalist specializing in cities, climate, and technology. He can be reached at lorinc@rogers.com.
In California, “real-time crime centers” are becoming standard issue for many local police departments. These digital hubs, situated within operations and command centers, can rapidly knit together drone footage, gunfire locators, body-worn cameras, and real-time 911 calls with artificial intelligence, geographic information systems, and other tools. Such platforms, largely funded by venture capital firms, are emblematic of a larger push to mesh emerging AI tools into the ways local governments deliver municipal services.
However, the legal guardrails around the tools themselves are often shaky. Take, for example, a 2020 state law prohibiting the use of facial recognition with body-worn cameras that expired in 2023. “The Legislature hasn’t passed anything to replace it,” says Paul Knothe, a partner at Liebert Cassidy Whitmore, who specializes in public safety. “Currently, there is no statewide prohibition on that sort of technology with body-worn cameras, although some municipalities have restricted their own use by ordinance.”
So, while the AI industry is experiencing an extraordinary boom – or a bubble – cities are approaching these highly disruptive technologies carefully.
“You don’t have mayors banging the table and demanding someone roll out an AI-powered solution in order to keep up.”
“There’s no killer app,” observes Anthony Townsend, urbanist-in-residence at Cornell Tech’s Urban Tech Hub and an expert on open data, internet-connected sensors, and other smart city technologies. “The only folks making money right now are people selling coding tools.”
Although many city employees informally use AI tools such as ChatGPT, there is plenty of evidence to suggest that caution is called for in public sector applications. New York City’s much-hyped public-facing chatbot, MyCity, failed spectacularly when users discovered it was advising small business owners to break the law. The system now answers only a very limited range of queries. “That really scares people,” says Townsend.
Other cities have also been underwhelmed by these applications. Long Beach piloted an AI chatbot on its website last November but pulled it down after surveys showed it didn’t solve residents’ problems, says Ryan Kurtzman, who leads the city’s AI initiatives. “We found that we just didn’t have a real business case or demand to continue it because it was not providing [an] immeasurably better experience or quality of service than the website search bar or the website itself.”
Many cities have set up extensive networks to figure out what kind of AI tools will work for them. “What the MyCity thing in New York did was really just dampen the top-down push, where you don’t have mayors banging the table and demanding someone roll out an AI-powered solution in order to keep up,” Townsend says. “Frankly, I think that is actually great. It has given cities the time to do a lot of the governance planning they weren’t able to do during earlier waves of tech disruption.”
“We took a completely different approach.”
Ground zero for some of that forward planning is San José, where leaders like Chief Information Officer Khaled Tawfik are guiding an intentional adoption process, as well as a far-reaching effort to share knowledge with others through the GovAI Coalition, which has over 3,000 members across the U.S.
“It doesn’t make sense for us to develop solutions to the same issues individually. Dividing the challenge among the 3,000 members allows everybody to provide a different perspective on policies and risks. This allows government agencies to have a comprehensive approach and anticipate problems before they are encountered,” Tawfik says.
Nearby Fairfield, a member of the coalition, is typical of where many California cities are with AI adoption. According to City Manager David Gassaway, the city has used GovAI templates to create policies for AI-driven traffic and public safety systems, license plate readers, public chatbots, and cybersecurity. There’s also a pilot project with employees using Microsoft Copilot.
Tawfik says San José’s three AI priorities include educating both the city’s residents and its employees about the benefits of using AI to innovate; delivering real-time AI-generated translation in more than 50 languages to anyone who wants to tune in to council meetings; and finding ways to streamline city services, as measured through its 311 dashboard.
San José has also deployed AI to synthesize survey feedback after residents submit requests, such as filling a pothole. “We use these dashboards to explain why we’re doing things and why we change programs, and why we’re using AI to improve some of the services,” Tawfik explains.
The city has tried to make training its hundreds of employees on this new technology intensely practical. “The traditional approach is, you sit in a classroom, and the instructor will give you information,” says Tawfik. “We took a completely different approach. To participate, number one, you need to identify a problem you’re trying to solve. And two, you bring the data with you, and we will help you use the [AI] tools to improve what you do.”
The training also addresses common concerns about AI tools, including factual accuracy, privacy, and cybersecurity. “We realized trust is an issue, cybersecurity is an issue, and education is an issue,” Tawfik says.
Kurtzman, in Long Beach, says it’s been a challenge making sure that city employees who are already using AI are aware of the risks and the city’s AI policies, such as strict rules about uploading personal data. “My role is to really just advocate for responsible, smart, thoughtful use of these tools, but I know that what’s happening in other departments is people are actually going out and just using some of these things already, without even letting us know.” Long Beach, he adds, is installing a data protection feature that will block inappropriate uploads.
Workforce training for new city employees has emerged as another frontline issue, adds Tawfik. “We’re going to have the first graduating class from universities next year that went through their whole education using ChatGPT, and the city needs to embrace these new AI tools and provide clear safety guardrails.”
Being ready also means understanding the legal risks associated with entering data into AI models. “Privacy is a fundamental concern, both with constituent data, as well as the data that relates to employees of the agency,” says Alex Volberding, a Liebert Cassidy Whitmore partner who specializes in labor relations. “That information needs to be safeguarded.”
Yet adoption won’t just be about following regulations, cautions Pamela Robinson, a planning professor and director of the Civic Sandbox at Toronto Metropolitan University. Earlier this year, she and Morgan Boyco, a practicing planner and a Ph.D. planning candidate at the University of Waterloo, designed a tabletop game entitled “A Planner’s AI Dilemma Cards.” It’s an extensive set of AI-related ethics provocations that city planners might have to contend with in their daily practice.
An example: “You’re developing an engagement strategy for a new policy proposal. An AI tool can predict public sentiment for the proposal based on social media activity and past engagements. Do you use the predictions to anticipate community reaction and shape the engagement approach?”
“We were trying to think about how we can animate this conversation,” says Robinson, who will lead an AI learning lab at the 2026 American Planning Association conference in Detroit. “In the beginning, we were really interested in finding edge cases of GenAI use in public participation settings, but now we’ve expanded to include a wider range of planning use cases that would push planners and public participation folks who all have an obligation to work in the public interest to try to figure out what that actually means.”
The flipside of such exercises is ensuring that residents can put their faith in the technologies being deployed in their name. In 2022, Kurtzman partnered with Helpful Places, a civic tech start-up founded by Jacqueline Lu, the former New York City data analytics director. Her mandate: Help ensure that Long Beach residents have confidence in smart city technologies, including anything that collects personal data.
“They were looking for a way to make these privacy concepts or these elements of data processing more accessible or more legible to residents,” says Lu, whose firm has developed “nutrition labels” for the city’s public-facing digital systems using Digital Trust for Places & Routines, an open source communication standard that denotes uses, privacy and transparency policies, and other compliance features.
Responding to public concern about surveillance and privacy violations, Long Beach has committed to building public trust in its use of emerging technology, says Lu. “They’re looking for a tangible way to demonstrate that commitment to transparency.”
Some California cities see AI less as an exercise in procurement than the extension of local economic development alliances with major tech players. Rancho Cordova, northeast of Sacramento, has inked agreements with Nvidia and the Human Machine Collaboration Institute. The city is seeking to attract data center projects and sees itself as a test bed for AI applications, says City Manager Micah Runner.
“We are in a little bit of a unique position on the AI front because not only do we care about it as an organization, but we’re supporting the build out of the industry in our city as well,” he says.
The city’s AI portfolio includes Flock Safety and the code enforcement platform City Detect, as well as other applications, such as cameras mounted on city vehicles that can proactively identify and score potholes and other public realm defects, like over-paved driveways or oversized chain-link fences. Long Beach is exploring a similar detection tool for public realm defects. “These are not flashy public-facing AI use cases, but I think those are the ones where I see a lot of potential to drive more value.”
While Rancho Cordova’s sensor system can issue automatic code violation enforcement notices, Runner says the city has opted not to go there — at least not yet. “That’s the human element that, I think, will always be a part of how we enforce.”
Indeed, none of the cities we spoke with said they viewed AI as a way to drive workforce reductions, a major source of anxiety in many sectors. “I doubt these technologies are going to create a reduction in headcount,” comments Fairfield’s Gassaway. “What I think is going to happen — depending on where you’re at in the organization and what sort of business service line you’re in — is that these are going to be tools to make employees more efficient or allow them to focus on higher cognitive things.”
Yet the need to boost productivity is certainly on the minds of city leaders. “Many governments, including Long Beach, are facing some potentially challenging financial times ahead due to the loss of federal and state grant funding and lower than expected revenues this year,” adds Kurtzman. “What I’m seeing in other departments is a sense of hope or promise around AI’s ability to deliver efficiencies and save time and costs.”
Those features may just become part of the city’s tech portfolio, observes Townsend, Cornell’s smart city expert: “I’ve heard people say this is basically just an enterprise [application]. It’s like a feature in the software. All your Microsoft stuff, all your Google stuff, is going to have AI every time you turn around.”
However, such predictions may prove overly sanguine, at least according to a 2025 New America Foundation survey of mayors, tech officers, academics, and civic tech entrepreneurs. The authors identified 1,600 state-level bills, half of them introduced in 2025, that have prompted extensive retraining for public sector employees and caution on the part of localities, which have mainly limited deployments to pilot projects.
The report, prepared by the RethinkAI coalition and written by academics from Northeastern University and Johns Hopkins, offers a warning that speaks to the public’s prevailing mood: “By optimizing machinery that residents already distrust,” the authors write, “we are building momentum without vision or a framework. … Our public institutions are under attack. And many of our attempts at reform didn’t fully address residents’ needs, discontent, or apathy. We are at an inflection point, and we need to rethink the role of civic technology as institutions change to meet the advent of AI and a new federal landscape.”
Which may in fact be the most salient accomplishment of San José’s steadily expanding GovAI initiative, says Townsend. “[T]hey have laid down how things are going to work with the vendor community. … This is the first time I’ve seen cities kind of presenting themselves as a monolithic market to the IT community. And that’s great. I think that’s going to produce better AI. It will eventually help the market grow faster, once folks figure out how to make money working within the expectations that have been set.”