UK AI Safety Institute launches open source testing platform


The UK's AI Safety Institute has launched a free, open-source testing platform that evaluates the safety of new AI models. Dubbed Inspect, the toolset should provide a “consistent approach” toward building secure AI applications around the world.

Inspect is the first AI safety testing platform created by a state-backed body and made available to the public for free. The aim is for it to accelerate improvements both in the development of safe AI models and in the effectiveness of safety testing.

How to use the Inspect software library

The Inspect software library, released on May 10, can be used to assess an AI model's characteristics, including its core knowledge, reasoning ability and autonomous capabilities, in a standardized way. Inspect provides a score based on its findings, indicating how safe the model is and how effective the evaluation was.

Because Inspect's source code is openly available, the global AI testing community, including enterprises, research centers and governments, can integrate it with their models and obtain essential safety information more quickly and easily.

SEE: Top 5 AI trends to watch in 2024

AI Safety Institute Chair Ian Hogarth said the Inspect team was inspired by leading open source AI developers to create a building block toward a “shared, accessible approach to assessments.”

He said in the press release: “We look forward to seeing the global AI community using Inspect not only to conduct their own model safety testing, but also to help adapt and develop the open source platform so we can produce high-quality evaluations across the board.”

Secretary of State for Science, Innovation and Technology Michelle Donelan added that safe AI will improve several sectors in the UK, from “our NHS to our transport network”.

The AISI, together with the government's Incubator for Artificial Intelligence and Prime Minister Rishi Sunak's office, is also recruiting AI talent to test and develop new open source AI safety tools.

Inspect: What developers need to know

A guide to using the Inspect toolkit in its basic form can be found on the UK Government's GitHub. The software is released under an MIT license that permits copying, modifying, merging, publishing, distributing, sublicensing and selling copies of it; this means anyone can extend the toolkit, for example by adding new test methods through third-party Python packages.
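As a hedged illustration of what such an extension might look like, the sketch below defines a custom scorer (Inspect's scoring components are described in the list further down) using the publicly documented `inspect_ai` Python API. The scorer name `keyword_flag`, the `FLAGGED_TERMS` list and the scoring rule are invented for illustration; they are not part of Inspect or of any AISI evaluation, and decorator details may differ between Inspect versions.

```python
# Hedged sketch of a third-party extension: a custom Inspect scorer.
# keyword_flag and FLAGGED_TERMS are illustrative names, not part of Inspect.
from inspect_ai.scorer import CORRECT, INCORRECT, Score, Target, accuracy, scorer
from inspect_ai.solver import TaskState

FLAGGED_TERMS = ["example of a disallowed phrase"]

@scorer(metrics=[accuracy()])
def keyword_flag():
    async def score(state: TaskState, target: Target) -> Score:
        # Mark the model's answer as incorrect if it contains any flagged term.
        completion = state.output.completion.lower()
        flagged = any(term in completion for term in FLAGGED_TERMS)
        return Score(value=INCORRECT if flagged else CORRECT)

    return score
```

A third-party package shipping a scorer like this could be installed alongside Inspect and imported into any evaluation script, which is the kind of extension the MIT license permits.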

Developers who want to use Inspect must first install it and ensure they have access to an AI model. They can then create an evaluation script using the Inspect framework and run it on the model of their choice.

Inspect evaluates the safety of AI models using three main components, which come together in the short example after this list:

  1. Datasets of sample test scenarios for evaluation, each including prompts and target results.
  2. Solvers, which run the test scenarios using the prompts.
  3. Scorers, which analyze the output of the solvers and generate a score.
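As a minimal sketch of how those three components fit together, the script below uses the publicly documented `inspect_ai` package; the single sample, its target and the task name are placeholders rather than real test scenarios, and exact parameter names may vary between Inspect releases.

```python
# Minimal sketch of an Inspect evaluation script combining the three components.
# The single sample below is a placeholder, not an AISI test scenario.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def demo_eval():
    return Task(
        # 1. Dataset: sample test scenarios, each with a prompt and a target result
        dataset=[Sample(input="Name the capital of the United Kingdom.", target="London")],
        # 2. Solver: sends each prompt to the model being evaluated
        solver=generate(),
        # 3. Scorer: checks whether the model's output contains the target and scores it
        scorer=includes(),
    )
```

With Inspect installed (for example via `pip install inspect-ai`) and the script saved as, say, `demo_eval.py`, it can be run against a chosen model from the command line, e.g. `inspect eval demo_eval.py --model <provider/model-name>`, substituting whichever model the developer has access to.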

The source code can be accessed through the UK government's GitHub repository.

What experts say about Inspect

The overall response to the UK's Inspect announcement has been positive. Clément Delangue, CEO of the AI community platform Hugging Face, posted on X that he is interested in creating a “public leaderboard with evaluation results” for different models. Such a leaderboard could show which AIs are safest and encourage developers to use Inspect so their models can be ranked.

Linux Foundation Europe also noted that Inspect's open source release “aligns perfectly with our call for greater open source innovation by the public sector.” Deborah Raji, a Mozilla researcher and AI ethicist, called it a “testament to the power of public investment in open source tools for AI accountability” on X.

The UK's steps towards safer AI

The UK AISI was launched at the AI Safety Summit in November 2023 with three main objectives: to evaluate existing AI systems, conduct foundational AI safety research and share information with other national and international actors. Shortly after the summit, the UK's National Cyber Security Centre published guidance on the security of AI systems alongside 17 other international agencies.

With the explosion of AI technologies in the last two years, there is a pressing need to establish and enforce strong AI safety standards to prevent issues such as bias, hallucinations, privacy violations, intellectual property violations and intentional misuse, which could have broad societal and economic consequences.

SEE: Definition of generative AI: how it works, benefits and dangers

In October 2023, G7 countries, including the UK, published the ‘Hiroshima' AI code of conduct, a risk-based framework that aims to “promote safe and trustworthy AI around the world” and “provide voluntary guidance for the actions of organizations developing the most advanced AI systems.”

In March this year, G7 nations signed an agreement committing to explore how artificial intelligence can improve public services and boost economic growth. It also covered the joint development of a suite of AI tools to inform policymaking and ensure that AI used by public sector services is safe and trustworthy.

The following month, the UK government formally agreed to work with the United States on developing tests for advanced artificial intelligence models. Both countries agreed to “align their scientific approaches” and work together to “rapidly accelerate and iterate robust sets of assessments for AI models, systems and agents.”

This action was taken to uphold commitments made at the first Global AI Safety Summit, where governments around the world accepted their role in safety testing the next generation of AI models.


