Where to work on AI policy in the UK government
Updated: Nov 5, 2022
Advanced AI systems offer huge opportunities as well as catastrophic risks. Here we outline where in government you could help AI go well.
Image generated by DALL-E 2.
Artificial Intelligence (AI) is the fastest-growing deep technology in the world, with huge potential to rewrite the rules of entire industries, drive substantial economic growth and transform all areas of life.
The government takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously.
- The UK’s National AI Strategy - September 2021
Introduction and summary
This guide is a short summary of promising places to work in the UK government to positively impact the development and use of AI in our society. We outline some of the key teams and why they might be impactful places to work. This is the first of a series of posts outlining where you can work on specific policy areas (see here for a list of the policy areas we intend to cover).
Artificial Intelligence policy is already an impactful place to work and will continue to grow in importance over the coming years. This is in large part due to the huge potential benefits and thus-far unaddressed risks of very advanced systems.
More advanced systems could revolutionise entire industries, deeply affect our day-to-day lives and significantly change the balance of power globally. These systems could even pose an existential threat to human civilisation.
The UK government has recently published the National AI Strategy and AI Action Plan. These demonstrate a clear intention to take the extreme risks of advanced AI systems seriously and to make the UK a world leader in AI development and governance.
The key teams working in this area are the Office for AI, the Centre for Data Ethics and Innovation and the new Office for Science and Technology Strategy. However, it seems plausible that the most influential teams are elsewhere in government, particularly Treasury, Number 10 and Cabinet Office teams that deal with resilience (particularly the Civil Contingencies and National Security secretariats). See our more general guide to finding high impact teams here.
If you agree that AI policy is high priority and think it may be a good fit for you, get in touch with us for further support planning your career.
Why work in AI policy?
A large number of the people we coach express interest in AI policy. We are also convinced that it could be very important. This is largely due to the catastrophic risk posed by very advanced AI systems that could be developed in the coming years and decades.
AI progress has been rapid over the last decade. Machine learning systems, which underlie modern artificial intelligence, have been increasingly successful in many previously-human-dominated areas, including image and text generation and decision making in complex environments. As these tools improve, the potential value of using them well also increases: AI could revolutionise health care and science, and deliver huge growth in productivity. But as these systems become more powerful and we become more reliant on them, the cost of their failing to function as intended also grows. In the coming decades we may face very advanced systems that could transform entire industries, deeply affect our day-to-day lives and change the balance of power globally. These systems could pose an existential threat to human civilisation.
The decisions that the government makes now and in the coming years could have huge implications for the future of AI, for example because of policy inertia.
If you want to read more about why AI might be very important, please read this profile from 80,000 Hours (a career advice charity) and this guide to AI Safety from the Centre for Security & Emerging Technology (an American think tank).
This guide should be read together with our general guide on finding high impact teams in government. The teams we discuss here are not necessarily the highest impact for you to work in, and it might not be the right time for you to work on this directly.
Areas of AI policy in the UK government
The UK’s AI Strategy and AI Action Plan provide an overview of the areas in which the UK government plans to work on AI. The strategy describes three pillars.
Pillar 1: Investing in the long-term needs of the AI ecosystem
Influencing the key economic variables underlying AI development
Making the UK competitive in AI
Maintaining access to computer chips needed to train and run AI (“compute”)
Building UK AI talent
Pillar 2: Ensuring AI benefits all sectors and regions
Improving pathways to the commercialisation of AI products in more sectors
Increasing how successfully government and the broader public sector use AI
Pillar 3: Governing AI effectively
Regulating the companies developing and using AI
Defining how AI will be used in defence
Setting an example for other powerful nations to develop and use AI safely and humanely
All these areas could be influential. We think that the most important places to work will be those teams that are working on policies that will have longer-term implications or affect the risks from very advanced systems.
Much of the government's current work is focused on more immediate issues regarding less advanced systems. If you think that the issues created by more advanced systems are the key reason to prioritise working on AI, we think these teams may still be a particularly valuable place for you to work. This is because decisions and policies made today will affect how we are able to address future risks and may shape the strategic landscape in which AI is developed internationally. Working in this area will also give you the skills, knowledge and connections to have a positive impact on AI policy over the long term.
As well as considering whether a team’s work will affect the long term, we suggest that you reflect on your personal values and strengths. This will inform which team’s work is the highest priority for you.
Where to work in government
We have talked to many civil servants and experts who work on AI policy to produce the list below. It’s bound to change over time, and to work on this policy area it is important to develop your own view about what is important and which teams are most relevant to your interests. A key way to do this will be to keep up to date with the work of the Office for AI and the Centre for Data Ethics and Innovation in particular; they will likely continue to be the teams in government with the most complete overview of all AI-related policy. You could also try to get in touch with these teams (if you are a civil servant we can help with this).
Core teams working directly on AI in the UK government:
The core areas of AI-specific work in government are the Centre for Data Ethics and Innovation (CDEI), which is part of the Department for Digital, Culture, Media & Sport (DCMS); the Office for AI (OAI), a joint BEIS-DCMS unit; and the Office of Science & Technology Strategy (OSTS) in the Cabinet Office.
Office for AI (OAI)
OAI is very AI-specific. They focus on making the UK more competitive in the AI space. This includes funding for PhDs, encouraging public sector use of AI, supporting AI sector growth, and improving the UK's approach to regulation. They also have a team that focuses more on AI-related risks, who “coordinate cross-government processes to accurately assess long term AI safety and risks…”
While the government “takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously”, the OAI's work largely focuses on the risks and benefits of current, less advanced systems (such as automated decision making). However, even if you think that risks from very advanced systems are particularly important, we expect that this team is still a valuable place to work.
The Centre for Data Ethics and Innovation (CDEI)
CDEI is an advisory body to the UK government. It has a greater focus on ethics, responsible innovation, and safety than the Office for AI. The scope of CDEI’s work is also wider, including the use and misuse of data more broadly.
As an advisory body, they are slightly more removed from central government than OAI. They might therefore be a little less constrained by the need to deliver on immediate government priorities, and freer to explore longer-term thinking. As with the Office for AI, their work is mostly concerned with less advanced AI systems, such as autonomous vehicles and misinformation. CDEI’s work has also included working with the Ministry of Defence on how it uses AI safely.
The Office of Science & Technology Strategy (OSTS)
Currently, one of OSTS’s top four priorities is to help “focus [the UK’s] science and technology capabilities” to “[drive] growth and security through digital technologies that generate productivity across the whole economy”. As this team sits in the Cabinet Office – right in the centre of government – and is led by the Government Chief Science Advisor and a Deputy National Security Advisor, they will weigh in on many important decisions regarding the government's investment in and regulation of AI. The AI Strategy highlights that the Office for AI will work closely with the Office for Science and Technology Strategy in particular when trying to understand the government’s strategic goals for AI.
Other teams, departments and policy areas that will influence AI:
The above areas seem very promising for working directly on AI policy in the UK government. However, since AI is an extremely cross-cutting issue, there are teams across government that also have influence. These teams might influence AI policy areas such as:
technology supply chains (e.g. semiconductor production)
defence and national security technology development
cutting-edge talent and commercial pipelines
data policy and ethics
Here is a non-exhaustive list of other organisations, teams and policy areas that may play an important role:
The Advanced Research and Invention Agency (ARIA) is a newly established agency that will fund "high-risk, high-reward" research.
The specifics of ARIA's funding strategy are uncertain, but it may well fund a substantial amount of AI-related research.
Government Office for Science (GO-Science)
GO-Science provides key scientific advice to the Prime Minister, the Cabinet and other influential stakeholders, including the Scientific Advisory Group for Emergencies (SAGE).
GO-Science is more independent of ministers and policymaking than OSTS, and is seen as impartial and authoritative. It has multiple teams that sometimes work on AI or related issues. GO-Science is responsible for key scientific advice around National Security & Resilience, which will include advice relating to AI.
The Competition and Markets Authority (CMA) works to reduce the negative effects of monopoly power within the UK economy. It is also responsible for consumer protection.
The Digital Markets Unit is particularly relevant: it tries to limit the undue market power of the largest tech companies (often leaders in AI). Some researchers think competition law could be particularly important for AI governance.
The Foreign, Commonwealth and Development Office (FCDO) and the Department for International Trade (DIT)
Both Departments will likely have influence over how other countries invest in and regulate AI, and in influencing key technology supply chains.
The most relevant roles will be those that relate to the UK’s relationships with states that regulate organisations working on cutting-edge AI (the USA, China and the EU).
AI in Defence and National Security
The decisions made around defence and national security may be particularly important and could set crucial precedents about the ethics of AI use, particularly around automated decision making, risk assessment, surveillance and autonomous weaponry.
The Ministry of Defence (MoD) recently published a Defence AI Strategy.
Specific defence teams that may be important for AI governance are those which are likely to play a role in how the UK embeds AI into strategic decision making processes, and the armed forces more generally. For example, the Defence Science and Technology Laboratory, the Defence Concepts and Doctrines Centre, the Defence AI and Autonomy Unit (see page 30), the Defence AI Centre, and Defence Digital.
Organisations such as GCHQ and the National Security Secretariat are currently establishing how AI should be used in their remits. The National Cyber Security Centre (part of GCHQ) in particular is working to advise government and industry about how to make AI systems secure.
National Security Strategic Investment Fund is a joint initiative between HM Government and the British Business Bank. They seek to accelerate the development of technology that could have national security and defence applications.
The Office of the Chief Scientific Advisor for National Security (currently Alex van Someren), will also likely be important.
AI in healthcare and law enforcement:
Although many of the biggest risks and benefits from AI will arise outside these areas, healthcare and law enforcement may lead the public sector's use of AI, and so may set precedents for how AI is used elsewhere.
For example, how the police and justice systems use AI requires us to address important moral questions that we have yet to resolve.
Where you could work outside the Civil Service
We think the UK Civil Service is an impactful place to work to ensure that AI development is as positive as possible. However, there are lots of other places you could work on AI policy in the UK. These include:
Alan Turing Institute (ATI)
ATI is an independent research institute focusing on AI. It was set up by the UK government and has stronger ties to policy making than many academic institutions.
The AI policy team may be a high impact place to positively affect AI policy in the UK and elsewhere.
They have recently set up the Centre for Emerging Technology and Security to conduct research that will inform UK security policy around AI and other emerging technologies.
Ada Lovelace Institute (ALI)
ALI is an independent research institute founded on the principle that "the benefits of data and AI must be justly and equitably distributed".
Much of their work looks at how the government and other powerful institutions can use AI ethically.
Centre for the Governance of AI (GovAI)
GovAI is a research organisation working on policy ideas to improve the governance of AI. Their research interests include both UK and global governance.
The Centre for Long-Term Resilience (CLTR) has worked closely with the government to develop policy proposals that could mitigate risks from AI. These are outlined in its report, Future Proof.
Next steps for you
Decide on your priorities
As we have suggested above, we think that the teams and organisations that might be most important are those that will impact future policy making (e.g. by setting precedents). We also expect that roles closer to “the centre” might offer more opportunity to improve the most crucial decisions.
However, if you are relatively early in your career, the most important consideration might be which team or role will provide you with the greatest opportunity for personal growth. It is possible that you should currently focus on building your skills and knowledge of the AI policy space, and on getting to know the key people, teams and organisations better.
We encourage you to make your own mind up about what is most important and best fits with your strengths.
Look out for jobs in these teams and talk to them
The key way to look for jobs in these teams is to carefully set up your job alerts on Civil Service jobs. We have written previously about how you can optimise your job alerts. If you think you might want to work in this area, we recommend that you start reading these job descriptions and reaching out to hiring managers to learn more about their work (even if you don't plan to apply immediately).
Talk to us
If you are interested in moving your career towards working in AI policy, we would love to talk to you. Sign up for coaching with us and we can help you think through your options!
As we expect this policy area to move quickly, the landscape outlined above will also likely change over the coming months and years. We’d like to keep this post up-to-date, so please let us know if you notice anything incorrect or out-of-date. Email firstname.lastname@example.org.
80,000 Hours career profile on preventing AI-related catastrophes
The National AI Strategy and AI Action Plan