When government speaks, who’s doing the talking?

philwoodford
6 min read · Jul 11, 2023
A local council officer uses AI as part of their job (Source: DALL·E)

Much of the coverage of ChatGPT and other types of ‘generative AI’, which create text or images from prompts, has centred on their use by business. Some marketers might relish the opportunity to create content at speed and for zero cost, while newsrooms may see an opportunity to cut headcount and produce bog-standard stories via bots. As we’ve seen, lawyers could even choose to rely on hallucinating machines to do their legwork for them. At their peril, m’lud.

But what about government? Surely at a time when trust in the machinery of our democracy is at an all-time low, civil servants and local officials wouldn’t rely on ChatGPT to communicate with us or draw up documentation? I was keen to explore this idea, so I issued Freedom of Information requests to a number of major government ministries and all the local authorities in London to get a snapshot of where we’re at.

Initially I was stalled by central government, but the reason soon became apparent. The Cabinet Office was set to issue advice to civil servants at the end of June, and that guidance has since been published online. It raises a number of issues (to which we’ll return later), but at least it’s been put in place. The picture in local government is much more patchy.

Of the 26 councils in the capital that replied, only three had issued any advice to their staff. The Royal Borough of Greenwich is launching beta testing of its own chatbot tools (which it maintains will be more private and secure than those available via the web) and plans to host workshops for workers.

The London Boroughs of Havering and Newham have, meanwhile, both issued very basic advice on the use of ChatGPT — the popular AI platform which converts simple prompts into plausible text. This tells employees that they must seek something called a Data Protection Impact Assessment before ‘deploying’ the technology.

The remaining 20 authorities had issued no guidance and made no suggestion of any plan to do so, although Lewisham accepted it was something the council would ‘need to consider moving forward’.

Does this matter?

Well, think about the ways in which AI might be used. At a basic level, it could help a council to create content for its web pages or regurgitate job descriptions into something approximating recruitment ads. Perhaps we’d shrug at those kinds of applications of the technology. But what if local authority officers found it an easy way to formulate letters or emails they were sending to residents? Should that concern us? The idea that we might not know if the response came from a human or a machine?

Taking things a stage further, what about the formulation of more complex documents on policy and strategy? These can be time-consuming and not everyone is adept at writing. ChatGPT would seem like a great shortcut to creating something which reads entirely plausibly — drawing on its training data to write credible commentary on anything from traffic management to social housing and planning.

I’m not suggesting that this is already happening. London councils mostly denied that it was, although one or two were brave enough to admit they had no way of knowing. And that’s kind of the point. There will need to be rules in place. A few boroughs have thought about this, while others haven’t yet.

To say that a council doesn’t allow access to generative AI on its own platforms might seem like a safeguard against misuse of the technology. But 15 years ago, a lot of councils (and even private-sector organisations) banned the use of social media at work. And then the smartphone revolution came along and the whole idea of restricting access became a nonsense.

So what of the advice to central government civil servants? The headline news is that there’s a green light for using AI.

‘With appropriate care and consideration,’ the Cabinet Office tells government workers, ‘generative AI can be helpful and assist with your work.’ Mandarins are, however, advised to be ‘cautious and circumspect’ in their approach.

An important safeguard is that civil servants must ‘never put sensitive information or personal data’ into the tools — a recognition that government loses control of the information once it’s been entered into the chatbot. Users are then asked to think of the ‘three hows’: how the information will be used by the system; how the answers can mislead; and how generative AI operates.

In this particular section of the advice, there is welcome recognition of the fact that the tools don’t understand context and can’t deal with bias. There’s also acknowledgement that they may draw from sources civil servants wouldn’t normally trust and can generate misinformation.

Perhaps most importantly from the point of view of public trust, government officials are being told by the Cabinet Office to reference when the tools have been used. This is a great idea in principle, but it raises a question: on what grounds would the use of generative AI really be justified? That it allows someone to generate a lot of plausible content quickly? Is that really good enough for the purposes of government?

If it is difficult to justify its use, can we be sure that people would declare it? Might not civil servants fear that admitting to the use of AI would undermine their credibility in their department, making them look somewhat lazy, gullible or incompetent? And if they didn’t declare it, how would we spot it? The outputs are often entirely plausible. What’s more, a chatbot won’t give the same answer when asked the same question a second time.

Drilling down a little further into the guidance, the Cabinet Office gives suggestions on appropriate and inappropriate uses. Summarising information with something like Bard or ChatGPT is deemed to be ok. Examples given include a summary of a news or academic article to use as an annex to a briefing, to ‘save time’.

Perhaps.

Generative AI, according to the guidance, can also be used ‘as a research tool to help gather background information on a topic relating to your policy area that you are unfamiliar with’.

Here, my qualms are growing.

I’ve personally had many encounters with ChatGPT in which it provides information that is just plain wrong. And when it’s challenged, it will immediately apologise profusely and correct itself. But the ability to challenge relies on my being familiar with the topic. If I am unfamiliar with it, most of what the bot tells me will seem tickety-boo.

There are some specialist uses which are also deemed to be ok — the analysis of textual data and coding — and I admit that my technical knowledge in these areas is not good enough to really understand the full ramifications. The Cabinet Office advice admits that the textual analysis wouldn’t really be conducted on a publicly available platform anyway, and I can’t quite see why you wouldn’t use proprietary software (e.g. Lumivero’s NVivo) for the purpose.

My summary of the situation right now is that central government has issued some welcome advice and, although there are likely many holes in it, has made a start at addressing the issues. It is somewhat stymied by the fact that the Sunak government is a champion of AI and wants to be at the forefront of the technological revolution, so the guidance repeats mantras about the ‘great potential’ of the bots rather than assessing the technology dispassionately.

Local government — if London authorities can be seen as a representative sample — seems to be playing catch-up. Given the speed with which AI is advancing, we’re only able to watch this space for so long before becoming worried.

philwoodford

Writer, trainer and lecturer. Co-host of weekly news review show on Colourful Radio.