The normal pattern with new technology in local government is simple: a department identifies a useful tool, procurement moves faster than policy, and by the time residents ask basic questions, the contract is signed and the system is live.
This is not about futuristic use cases; it is relevant now. Surveillance systems already rely on algorithmic features, and vendor software used in records, code enforcement, permitting, dispatch support, and customer-service workflows increasingly ships with AI functions built in, whether cities ask for them or not.
The governance framework I have pushed for is meant to close that gap before it widens. The core elements are straightforward: city ownership of city data, contract terms that guarantee audit rights and deletion requirements, a tiered approval path for higher-risk technologies, and regular public reporting on what systems are in use.
That is not anti-technology. It is normal governance. We should not wait until there is controversy around a specific contract to decide what the rules are supposed to be.
Staff training on AI may be useful, but it is not a substitute for policy. Awareness is not governance.
I drafted and advanced a city-level governance concept built on those guardrails: city ownership of city data, audit and deletion rights preserved in contracts, tiered review for higher-risk technologies, and regular public reporting. I kept the conversation grounded in real local examples, including surveillance technology and vendor systems that add AI features without disclosure. And even when the initial work-session discussion did not go as far as it should have, I kept the issue alive, because the underlying gap did not go away.