Four potential positive uses of AI for government policymaking

This blog originally appeared on the Yorkshire Universities website.

My recent work has been helping universities to think about how they should respond to AI. Two recent books – Failed State by Sam Freedman, and Left Behind by Paul Collier – have prompted me to think about the potential positive impact of AI on government policymaking. Much of the same applies here as it does to universities: avoid an overly-inward, defensive focus and instead look at the potential societal impact, how you are in turn impacted and then how you can shape AI in society. But with government, everything is scaled up. Universities are large institutions but the civil service is far larger.

Here are four brief, non-exhaustive reflections. They are based on what is possible now or in the near future rather than relying on exponential growth of AI capabilities. They are in the realm of practical implementation rather than philosophical musings on the function of government policymaking.

All of the below needs to sit on a basic foundational understanding of how large language models (LLMs) and other AI models work and their limitations. As with universities, this needs to be strategic and embedded rather than a one-off AI literacy module. Effective prompting is a part of this, but ‘prompt engineering’ is far less important than it was in the past.

Instead, successful AI implementation comes down to the basic but often difficult questions that have always underpinned complex projects: what exactly are you trying to achieve? Why are you trying to achieve it? What context is important? The focus then is on how AI can augment and improve work, and experimenting and iterating and testing and challenging models in the process: AI as a co-worker rather than a computer programme, a form of labour rather than a tool. The UK Government’s AI Playbook offers some guidance for civil servants here; the Institute for Government also has a helpful insight paper on the topic.

1. Stress-testing policies as part of their development

LLMs can be great sparring partners. As an additional stage of policy formulation, new ideas can be challenged by an AI model that looks for risks, unforeseen consequences, potential downsides and (somewhat ironically, given their own propensity for this) biases. These can be tested against the Treasury’s Green Book, or against international commitments, manifestos and strategy documents. Perhaps policies should also be tested against the blunders of our governments.

2. Institutional memory and cross-departmental learning

These are perennial problems that can perhaps be ameliorated by AI. In its 2022 report on local growth funding the National Audit Office found that there have been 55 initiatives since 1975 to tackle regional inequality, and £18 billion spent between 2011 and 2020. I think it is fair to say that these have not, overall, been successful. Where there are repositories of previous initiatives and reports and evaluations, it is a relatively straightforward process to feed these into an LLM to gain feedback and insights when designing future programmes. (One could, of course, actually sit down and read all these evaluations, but perhaps we need to be realistic here).

One application of LLMs in a university setting is the translation of research between different contexts and rapidly generating outputs for different audiences – and hopefully acting as a bridge to collaboration and improved communication. This applies equally to work across government departments and agencies, particularly where technical or legal language creates barriers to understanding.

3. Scaling up public engagement

Again, there is some work from academia relevant to effective policymaking. Researchers at LSE created an AI interviewer that carried out qualitative interviews with thousands of people in a few hours. It adapted questions on the fly and interacted in a conversational manner, performing the work of many human researchers in parallel. Participants rated these AI interviewers comparable to human interviewers in quality.

At Stanford’s Human-Centered AI Institute, researchers have accurately simulated the personalities of 1,052 individuals using interviews and an LLM. These virtual agents exhibit personas that answer questions and make decisions in ways that mirror their real-life counterparts. This could be useful for policymakers to test responses to anything from minor policy tweaks to unprecedented crises.

However, the same (fairly massive) caveats apply to policymakers as to academics. For example, interviewing is a valuable skill and one that has a deeper purpose of building understanding between the public and policymakers beyond the transaction of information. And whilst virtual agents or AI personas can be uncannily accurate, they will not always be so and therefore need to be consistently corroborated and validated with real world research – there is otherwise a risk of divergence between the AI personas and the real world.

4. Personal productivity

This is perhaps the most mundane but also the most potentially transformational. Civil servants are stretched. They’re working on complex, fast-moving briefs and need to get up to speed on topics incredibly quickly. They are asked to provide advice, understand evidence and communicate this, design initiatives and interventions, evaluate these, work across teams and with partners outside of government. They juggle multiple important policy areas at the same time, whilst being part of a highly complex bureaucracy.

The productivity benefits I envisage are not a result of the above three areas (although they will surely help). Nor are they a result of using ChatGPT to write meeting minutes. Instead, I see significant time savings from LLMs embedded into software. Today I can run small open models on my laptop. In the future, tuned models will run locally in common applications (think Word, PowerPoint, Excel, but also finance and HR software), enabling tedious actions to be performed with plain-text instructions, with no internet access needed.

“Reformat this document in the house style for white papers, add a table of contents, move the foreword before the executive summary, and extract all references and add these to a bibliography”. “Merge these two spreadsheets and add a new tab which calculates the difference between column F on each. Create a scatter graph of this and save a JPG version in the same folder. Format everything in the same style as ONS publications, and add my contact details on a front tab”. And so on. It’s not glamorous, it’s not particularly novel (you can do it all now, using cloud-based LLMs), but it’ll become much, much easier. (See Cal Newport for more on this).

Caveat

But if there’s one lesson to draw from both Freedman and Collier’s books, it is that we need to reverse the damaging centralisation of policymaking in the UK and the evisceration of local government. This is backed up by a whole load of academic and policy research that I’ve drawn on in previous years for my work for the British Academy, Universities UK, Yorkshire Universities and others. If local and regional government is hollowed out further, with AI being used to replace staff in town and city halls, everything above will be in vain. Instead, the UK’s regions need to be empowered, not eroded, and support for everything above needs to be doubled and tripled outside of Whitehall.

This is the only bit of this post which uses generative AI: ‘create a New Yorker style cartoon with a joke about an AI water cooler boosting the efficiency of gossip, picking up the themes of this article’.

(Cover image of Birmingham Town hall via Unsplash)
