AI adoption in government has to start with policy
One of the strongest themes in Tyler’s comments was that organizations cannot approach AI casually, especially in a government setting.
The City of Escondido, he explained, has moved through every phase of adoption over the past few years — from early awareness, to informal experimentation, to a more structured strategy led in partnership with the city’s IT department.
Today, a designated group of authorized users is piloting Microsoft Copilot as part of a citywide effort to better control how data is shared and protected.
Tyler's advice on that front was direct: “Before you develop any tools, workflows, or internal policies, make sure you understand who has the authority to make those decisions. Otherwise, you could spend a year building something that gets overridden in a single email.”
That point is especially relevant for public agencies, where departmental initiatives still need to align with broader city policies, technology standards, and security requirements.
Tyler was clear that this kind of coordination is not just about process; it is about risk management.
“The starting point for us was security. Once you understand that AI can create real risk if it’s used without guardrails, the next question becomes: how do we create a more secure path for using it well?”
In Escondido’s case, that led to a more controlled pilot through existing Microsoft infrastructure instead of a free-for-all across open AI tools.