UK, US and EU sign first international treaty on artificial intelligence


The UK has signed the world's first international treaty on artificial intelligence, alongside the European Union, the United States and seven other countries.

The agreement commits signatories to adopt or maintain measures to ensure that the use of AI is compatible with human rights, democracy and the law. These measures should protect the public from the risks inherent in AI models, such as biased training data, and from the risks of their misuse, such as the spread of misinformation.

The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was opened for signatures during a conference of Council of Europe Justice Ministers in Vilnius, Lithuania, on 5 September. Current signatories include:

  • Andorra
  • EU
  • Georgia
  • Iceland
  • Israel
  • Norway
  • Republic of Moldova
  • San Marino
  • United Kingdom
  • US

The treaty joins a growing body of international agreements aimed at curbing AI risks, including the Bletchley Declaration, which was signed by 28 countries in November 2023.

Other States that negotiated the treaty are expected to join the initiative, including the other 39 members of the Council of Europe and nine non-member States: Argentina, Australia, Canada, Costa Rica, the Holy See, Japan, Mexico, Peru and Uruguay.

Lord Chancellor Shabana Mahmood signed the treaty on behalf of the UK. In a statement, she said: “Artificial intelligence has the capacity to radically improve the responsiveness and effectiveness of public services and to drive economic growth. However, we must not allow AI to shape us; we must shape it ourselves.”

“This convention is an important step towards ensuring that these new technologies can be harnessed without eroding our oldest values, such as human rights and the rule of law.”

SEE: UK and G7 countries to use artificial intelligence to boost public services

The Secretary General of the Council of Europe, Marija Pejčinović Burić, said in a press release: “We must ensure that the rise of AI respects our standards, rather than undermining them. The Framework Convention is designed to ensure precisely that.

“I hope that these will be the first of many signatures and that they will be quickly followed by ratifications, so that the treaty can enter into force as soon as possible.”

The treaty was adopted by the Committee of Ministers of the Council of Europe on 17 May this year. To enter into force, it must be ratified by five signatories, including at least three Council of Europe member states. It will then enter into force on the first day of the month following a three-month period after the fifth ratification.

It is separate from the EU AI Act, which came into force last month, as the Council of Europe is a 46-member organisation distinct from the EU, and non-EU states can sign the treaty.

The feasibility of an AI treaty was first discussed in 2019. The text was drafted by the Council's Committee on Artificial Intelligence, established in 2022, and formally adopted on 17 May this year.

What does the treaty require of signatories?

To protect human rights, democracy and the rule of law, the Framework Convention requires signatories to:

  1. Ensure that AI systems respect human dignity, autonomy, equality, non-discrimination, privacy, transparency, accountability and trustworthiness.
  2. Provide information about decisions made using AI and allow people to challenge those decisions or the use of the AI itself.
  3. Provide procedural safeguards, including grievance mechanisms and reporting of interactions with AI.
  4. Conduct ongoing risk assessments on human rights impacts and establish protective measures.
  5. Allow authorities to ban or pause certain AI applications if necessary.

The treaty covers the use of AI systems by public authorities, such as the NHS, and private companies operating in the parties' jurisdictions. It does not apply to activities related to national security, national defence matters or research and development, unless they have the potential to interfere with human rights, democracy or the rule of law.

According to the UK government, the treaty will serve to enhance existing laws and measures, such as the Online Safety Act. The government also intends to work with regulators, devolved administrations and local authorities to ensure that the treaty's requirements can be implemented.

WATCH: UK government announces £32m AI projects

It is up to the “Conference of the Parties,” a group made up of official representatives of the Parties to the Convention, to determine the extent to which the provisions of the treaty are being implemented and to make recommendations.

The UK's steps towards safe AI

The treaty states that it promotes AI progress and innovation while regulating the technology's risks. The UK government has sought to strike the same balance in its own actions.

The government has signalled that it intends to tighten the rules for AI developers. The King's Speech in July announced that the government “will seek to put in place appropriate legislation to impose requirements on those working on developing the most powerful artificial intelligence models.”

This backs up Labour’s pre-election manifesto, which pledged to introduce “binding regulation for the handful of companies developing the most powerful AI models”. After the speech, Prime Minister Keir Starmer also told the House of Commons that his government would “harness the power of AI as we look to strengthen security frameworks”.

SEE: Delaying AI rollout in UK by five years could cost economy more than £150bn, Microsoft report says

The UK also established the world's first AI Safety Institute in November 2023, with the primary goals of evaluating existing AI systems, conducting foundational AI safety research and sharing information with other national and international actors. Then, in April this year, the UK and US governments agreed to work together on developing safety tests for advanced AI models, advancing plans drawn up by their respective AI Safety Institutes.

In contrast, the UK government has assured tech companies that the forthcoming AI Bill will not be overly restrictive, and it appears to be in no hurry to introduce it. The bill was expected to feature among the legislation announced in the King's Speech, but it was not included.
