From designing software, I've learned that people change their minds about technology when it's just easy to use. I think the excitement around AI in government is really just frustration with how bad current systems are. People are used to smooth, personalized experiences from companies like Amazon, so they want that from government services too. If those websites just worked well in the first place, nobody would be making such a big deal about AI.
Honestly, I think people get interested in AI because they see government workers drowning in paperwork. In healthcare, I've seen what happens when tech handles the administrative junk, and the relief is obvious. Staff are happier, patients have a better experience. I figure voters see the same thing in public agencies and hope AI lets officials stop filling out forms and do the work they're actually supposed to do. If someone asked me, I'd tell agencies to start small, explain what the AI is doing, and let the results speak for themselves.
This question has arisen more than ever in the US government. The primary reason is that the government faces complex challenges that are difficult to understand and address without relevant datasets, evidence, and scenario-analysis tools. In 1950, Alan Turing speculated about the development of thinking machines; decades later, modern AI systems can replicate the language of political leaders, and AI assistance can improve output quality and reduce performance issues. These capabilities matter for the public-sector workforce. For example, the U.S. Patent and Trademark Office has deployed AI tools to improve patent classification, reducing the time required to process an application. The State Department, meanwhile, has turned to artificial intelligence to help its employees work more productively: its team is deploying tools that use open-source and US government data to curate emails, translate documents, check departmental policies, and brainstorm ideas.
People have already gotten used to the idea that AI knows things they don't. If it can help them fix a car or review a contract, trusting it with policy doesn't feel like much of a leap. But we're missing something important: while AI can give information and make recommendations, it can't build trust with an opponent. It can't negotiate. It doesn't have empathy. It doesn't actually care about you or your best interests. In the legal industry, we see it all the time: people try to use AI to act as their lawyer. It might give you a baseline so you know where to start, but when you're up against a huge insurance company or fighting to change a law, you still need the human element. You need to be able to lean on relationships and the judgment that comes only with human experience. That applies to our government, too. You can't automate accountability or an inspired vision. People are fed up with red tape and a government that feels broken; I get it. But replacing people with machines won't fix that.