What the Public Really Wants from AI – And Why We’re Not There Yet
Neil Lawrence, DeepMind Professor of Machine Learning, University of Cambridge, and Jessica Montgomery, Director, ai@cam, on shifting the dial on AI policy
Researchers and policymakers have been asking the UK public what they want from AI for almost a decade. Across time and space, the answers are generally consistent: we want AI systems that deliver public benefit, with meaningful human oversight, transparency and accountability in decision-making, and protection from harms that might arise from the use of AI or the power imbalances around their development.
Recent public dialogues conducted by ai@cam in Cambridge and Liverpool repeated these themes. People want AI to support better public services, but they're increasingly sceptical about how it's being developed. As one participant put it: "AI needs to be managed, not just by the private companies, the Government need to manage what's going on, not just give AI a free rein."
Less commonly discussed is the growing gap between these aspirations and our institutional capacity to deliver them. Conversations with practitioners at AI Forum 02 explored why good policy intentions aren't translating into meaningful public benefit, and how the AI Opportunities Action Plan could help redress the balance.
The implementation gap
Public dialogues consistently highlight independent oversight as a vital component of AI governance. Our latest public dialogues called for governance frameworks that protect public interests and enable beneficial innovation, backed by independent regulatory bodies with meaningful capabilities to guide AI development and enforce safeguards against harm.
In reality, we still don't fully understand the gaps and overlaps in the UK's AI-relevant regulations. Regulators face the challenge of keeping pace with rapid technological change in a context of increasingly constrained resources. The risk of "regulatory capture" – where large firms influence the rules meant to govern them – adds another layer of concern. This is why independent oversight must draw on a range of stakeholders beyond industry and government, including public voices. People’s experiences of AI can help identify potential harms, surface areas of opportunity, and open the door to a wider conversation about our shared future with AI.
Similar gaps appear across the policy landscape. While citizens want transparency about how AI affects them in public services, practitioners struggle with legacy IT systems, procurement constraints, and organisational silos, all of which affect how AI is deployed. The technical infrastructure and organisational capability to provide meaningful transparency often doesn't exist. The 'Scan > Pilot > Scale' approach offers promise in addressing this technical debt. Discussions at the AI Forum stressed the need for these agile approaches to be embedded in real deployment processes, not just policy rhetoric, so that they become practical solutions to a governance challenge.
Forging the UK’s path
A major question posed by public dialogues is: are we having the wrong policy conversation entirely? Policy debates often frame AI as an international arms race, with the UK's national ambition framed as replicating Silicon Valley's capabilities at home. However, there is another possibility.
The UK has its own distinctive capabilities: world-class research institutions, strong regulatory traditions, and deep domain expertise in areas like healthcare and finance. The question isn't how to replicate the capabilities of big tech on their home turf, but how to play a more creative game.
A starting point at May’s AI Forum was a conversation about what "sovereign AI" really looks like in practice. It's not just about having UK-based compute or UK companies (though those matter). It's about building capabilities that give the UK agency in an AI-driven world. This might mean developing AI systems that work for UK needs and opportunities; having enough domestic capability to avoid dangerous dependencies; and having the expertise to shape global AI standards, amongst other interventions.
Forum participants pointed to regulation itself as a potential competitive advantage. While others race to deploy AI first, the UK could lead in showing how to deploy it responsibly. This isn't about slowing innovation – it's about demonstrating that trustworthy AI can be a market advantage. Clear governance can create the confidence and trust necessary for widespread adoption of beneficial AI systems. Well-crafted regulation can provide the clarity that businesses need to invest and develop new applications, and the safeguards that publics need to be confident in their use.
In this context, the core tension isn't between innovation and regulation – it's between aspiration and implementation. Over-regulation risks stifling innovation, while under-regulation risks eroding public trust. We need proportionate, flexible approaches that evolve with technology, echoing public desires for agile governance while being grounded in implementation realities.
Where do we go from here?
Trust doesn't come from dialogue alone – it comes from demonstrable action that shows AI systems working in service of human needs. Our dialogues revealed a clear set of characteristics that people want to see embedded in AI governance frameworks:
Public understanding, built through broader education and information about how AI works and affects daily life.
Independent oversight by regulatory bodies with sufficient powers to govern AI, with a tailored approach that reflects the opportunities and challenges in each sector.
Protection from power imbalances, through robust legislation and enforcement to ensure public benefit is prioritised over profit, particularly in public services.
Collaborative development through which AI developers engage with diverse communities to ensure systems work for everyone.
Robust security to prevent threats and fraud, with clear lines of accountability if systems are compromised.
A limitation of our current approaches to public dialogue is the gap between these dialogues and the practical processes of R&D or policymaking. One route forward would be more focused problem-solving sessions that bring public concerns together with implementation expertise. Instead of repeatedly documenting what people want, we should zoom in on what it would take to bridge the gap between those aspirations and deployed AI products.
The question isn't whether people want better AI governance – they've been clear about that for years. The question is whether policy is ready to deliver it.
About the authors
Neil Lawrence is the inaugural DeepMind Professor of Machine Learning at the University of Cambridge, where he leads the University’s flagship AI mission, ai@cam. He has been working on machine learning models for over 25 years. He returned to academia in 2019 after three years as Director of Machine Learning at Amazon. His main interest is the interaction of machine learning with the real world, an interest triggered by deploying machine learning in the African context, where ‘end-to-end’ solutions are normally required. This has inspired new research directions at the interface of machine learning and systems research, work funded by a Senior AI Fellowship from the Alan Turing Institute. Neil is also a visiting professor at the University of Sheffield and the co-host of Talking Machines. He is the author of the book The Atomic Human.
Jessica Montgomery is currently Director of ai@cam, a new University of Cambridge strategic mission to develop AI technologies that serve science, citizens, and society. Alongside this role, she leads a variety of research and policy programmes tackling the real-world challenges associated with developing and deploying AI for societal benefit. These include: Accelerate Science, an initiative developing AI tools and collaborations in support of research and innovation; the Data Trusts Initiative, an incubator programme for pilot projects creating trustworthy data governance frameworks; and strategic research agenda development for the ELISE/ELLIS network of European AI research. Her interests in AI and its consequences for science and society stem from her policy career, in which she worked with parliamentarians, leading researchers and civil society organisations to bring scientific evidence to bear on major policy issues.