National approaches to AI safety diverge in focus

Countries face competing incentives in the artificial intelligence race

Domestic initiatives in artificial intelligence safety are beginning to emerge in countries around the world. In the last 18 months, the UK, US, Canada and Japan have created national AI safety institutes that aim to address governance and regulatory challenges, including issues related to misinformation, human safety and economic equity. Although they are unified by a common goal of creating frameworks for safe AI innovation, they diverge in meaningful ways.

US: prioritising domestic developments

The US AI Safety Institute was launched in February 2024 by the National Institute of Standards and Technology. With a total funding package of $10m, AISI aims to ‘facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts’. AISI is focused on developing methods for the detection, tracking and potential watermarking of synthetic content.

Such objectives are focused on actionable policies and the development of safety frameworks that can avert significant risks to ‘national security, public safety and individual rights’. This includes co-ordinating with 200 companies on red-teaming exercises to identify vulnerabilities and develop mitigation strategies.

In its early months, AISI focused chiefly on US domestic safety concerns, with comparatively little public emphasis on global collaboration. This may be changing: it has formed a new partnership with the UK to develop safety tests for advanced AI models and has signalled its intention to foster a global network of AI safety institutes, although these efforts remain preliminary.

UK: voluntary commitments, global collaboration

The UK AI Safety Institute, founded in April 2023, evolved from the Frontier AI Taskforce with an initial £100m investment and ongoing funding as part of a £20bn research and development initiative. In contrast to its US counterpart, it focuses on a broader array of safety considerations and stakeholders.

Its mission is to ensure the safe development of advanced AI systems through evaluations, foundational research and information sharing. It places a large emphasis on collaboration with international partners, industry, academia, civil society and national security agencies to advance AI safety and foster global consensus and institution building. In practice, this has meant an approach that seeks to place the UK at the centre of the global safety discourse while stopping short, for now, of creating regulatory obligations for AI firms.

The UK has remained overwhelmingly focused on voluntary commitments from AI companies, relying on existing regulations to address new risks. As Ellie Sweet, head of AI regulation strategy, engagement and consultation at the UK Department for Science, Innovation and Technology, remarked at OMFIF’s AI in finance seminar: ‘It’s better to have our existing expert regulators interpret and apply those principles within their existing remits, rather than necessarily standing up a whole new regulatory framework.’

Meanwhile, the UK has been very active in its development of international partnerships, including a new UK AI Safety Institute Office in San Francisco and a UK-Canada science of AI safety partnership.

Canada: investing in becoming an AI leader

In April 2024, Canada announced plans to develop its own AI Safety Institute as part of a broader government investment in AI. The institute is funded with C$50m and aims to protect against risks posed by advanced AI systems while also solidifying Canada’s place as a potential leader in AI development.

It will work under the broader Pan-Canadian Artificial Intelligence Strategy, which focuses on commercialisation, standards and research. The institute aims to help Canada better understand and mitigate the risks associated with AI technologies while also supporting international governance efforts. This includes aligning with international AI governance principles set by groups such as the G7 and the Global Partnership on AI to ensure that domestic AI innovation is responsibly conducted.

Japan: initiatives still in early phase

Japan has launched an AI Safety Institute that is very similar to the UK’s. The country’s institute – founded in January 2024 within the Information-technology Promotion Agency – involves decentralised AI governance across government departments, including those responsible for internal and foreign affairs. The exact investment amounts have not been publicly disclosed.

Current initiatives include creating AI safety standards, conducting cross-department research on AI implications and opportunities, and developing international partnerships with other emerging AI governance leaders, such as those in Europe and the US, to co-ordinate global AI safety and risk standards. The details of many of these initiatives are still emerging.

Figure 1. National efforts to understand AI technology

US
Founded: February 2024
Funding: $10m
Executive: Elizabeth Kelly, former special assistant to the president for economic policy at the White House National Economic Council
Policy focus: Initially domestic safety, research and guidelines. Emphasis on national security, misinformation and related topics. Increasingly broad focus and interest in global co-operation.

UK
Founded: April 2023
Funding: £100m ($127m)
Executive: Ian Hogarth, a venture capital investor (temporary)
Policy focus: Broad, centred around the development of domestic and international frameworks, voluntary commitments and global normative and policy leadership.

Japan
Founded: January 2024
Funding: Not disclosed
Executive: Akiko Murakami, chief digital officer of Sompo Japan Insurance
Policy focus: Broad, centred on developing AI safety standards, research on AI risks and co-ordinating international partnerships.

Canada
Founded: April 2024
Funding: C$50m ($36m)
Executive: Wyatt Tessari L’Allié, former Green Party of Canada member and researcher
Policy focus: Supporting Canadian leadership in AI, managing risks, participating in global normative and policy initiatives.

Source: OMFIF analysis

 

These initiatives represent major national efforts to understand AI technology and its opportunities and risks. Most countries are in an information-gathering stage, learning about AI and appearing reluctant to impose binding rules (Figure 1). But countries are increasingly eager to co-operate to support global governance.

The key test for the institutes will be the deliberation of mandatory rules for AI use and safety. AI governance expert Robert Trager has said that current voluntary commitments are much like allowing car manufacturers to self-regulate. When dealing with technology that poses fundamental risks to national and global safety, governmental rules-based frameworks are vital. Deliberation on mandatory requirements should include co-ordination with the firms driving innovation and the local communities looking to leverage AI, so that the technology can continue to develop.

Julian Jacobs is Senior Economist, Digital Monetary Institute, OMFIF; Doctoral Candidate at the University of Oxford

This topic will be further explored in a roundtable on 27 June.
