The UK government has launched a free self-assessment tool to help businesses responsibly manage their use of artificial intelligence.
The questionnaire is intended for any organization that develops, provides, or uses services that rely on AI as part of its standard operations, but it is primarily aimed at smaller businesses and start-ups. The results will show decision makers the strengths and weaknesses of their AI management systems.
How to use AI Management Essentials
The self-assessment, which is now available, is one of three parts of a tool called “AI Management Essentials” (AIME). The other two parts are a rating system that provides an overview of how well a company manages its AI, and a set of action points and recommendations for organizations to consider. Neither has been published yet.
AIME is based on the ISO/IEC 42001 standard, the NIST framework, and the EU AI Act. The self-assessment questions cover how the company uses AI, manages its risks, and is transparent with stakeholders.
SEE: Delaying UK AI rollout by five years could cost economy more than £150bn, Microsoft report says
“The tool is not designed to evaluate AI products or services themselves, but rather to evaluate the organizational processes that exist to enable the responsible development and use of these products,” according to the Department for Science, Innovation and Technology report.
When completing the self-assessment, input should be gathered from employees with broad technical and business knowledge, such as a CTO or software engineer and an HR business manager.
The government wants to embed the self-assessment into its procurement policies and frameworks to build assurance into the private sector. It would also like to make it available to public sector buyers to help them make more informed decisions about AI.
On November 6, the government opened a consultation inviting businesses to provide feedback on the self-assessment, and the results will be used to refine it. The rating and recommendation parts of the AIME tool will be released after the consultation closes on January 29, 2025.
The self-assessment is one of many planned government initiatives for AI assurance
In a document published this week, the government said AIME will be one of many resources available on the “AI Assurance Platform” it plans to develop. These resources will help companies conduct impact assessments or review AI data for bias.
The government is also creating a Terminology Tool for Responsible AI to define and standardize key AI assurance terms to improve cross-border communication and trade, particularly with the US.
“Over time, we will create a set of accessible tools to enable basic good practices for the responsible development and deployment of AI,” the authors wrote.
The government says the UK AI assurance market, the sector that provides tools for developing or using AI safely and which currently comprises 524 companies, will grow the economy by more than £6.5 billion over the next decade. This growth can be attributed in part to increased public trust in the technology.
The report adds that the government will partner with the AI Safety Institute, launched by former Prime Minister Rishi Sunak at the AI Safety Summit in November 2023, to promote AI assurance in the country. It will also allocate funding to expand the Systemic Assurance Grants programme, which currently has up to £200,000 available for initiatives developing the AI assurance ecosystem.
Legally binding AI safety legislation coming next year
Meanwhile, Peter Kyle, the UK's technology secretary, pledged to make the voluntary agreement on AI safety testing legally binding by implementing the AI bill next year, speaking at the Financial Times' Future of AI Summit on Wednesday.
At November's AI Safety Summit, AI companies including OpenAI, Google DeepMind, and Anthropic voluntarily agreed to allow governments to test the safety of their latest AI models before public release. Kyle was first reported to have told executives at prominent AI companies of his plans to legislate the voluntary agreements at a meeting in July.
SEE: OpenAI and Anthropic sign agreements with the US AI Safety Institute, handing over frontier models for testing
He also said the AI bill will focus on the large ChatGPT-style foundation models created by a handful of companies and will turn the AI Safety Institute from a DSIT directorate into an “independent government body.” Kyle reiterated these points at this week's summit, according to the Financial Times, stressing that he wants to give the Institute “the independence to act fully in the interests of British citizens.”
He also pledged to invest in advanced computing power to support the development of cutting-edge AI models in the UK, in response to criticism of the government's decision to scrap £800m of funding for a University of Edinburgh supercomputer in August.
SEE: UK government announces £32m for AI projects after cutting funding for supercomputers
Kyle said that while the Government cannot invest £100bn alone, it will partner with private investors to secure the funding needed for future initiatives.
A year of AI safety commitments in the UK
The UK has made numerous commitments over the past year to developing and using AI responsibly.
On October 30, 2023, the Group of Seven countries, including the United Kingdom, created a voluntary AI code of conduct comprising 11 principles that “promote safe and trustworthy AI around the world.”
The AI Safety Summit, where 28 countries pledged to ensure the safe and responsible development and deployment of AI, began just a couple of days later. Later in November, the UK's National Cyber Security Centre, the US' Cybersecurity and Infrastructure Security Agency, and international agencies from 16 other countries published guidelines on how to ensure security when developing new AI models.
SEE: UK AI Safety Summit: World powers make 'historic' commitment to AI safety
In March, G7 nations signed another agreement pledging to explore how AI can improve public services and boost economic growth. The agreement also covered the joint development of a set of artificial intelligence tools to ensure that the models used are safe and reliable. The following month, the then Conservative government agreed to work with the United States on developing tests for advanced AI models by signing a Memorandum of Understanding.
In May, the government launched Inspect, a free, open-source testing platform that evaluates the safety of new AI models by assessing their core knowledge, reasoning ability, and autonomous capabilities. The UK also co-hosted another AI Safety Summit in Seoul, where it agreed to collaborate with nations around the world on AI safety measures and announced up to £8.5 million in research grants aimed at protecting society from AI risks.
Then, in September, the UK signed the world's first international treaty on AI alongside the EU, the US, and seven other countries, committing them to adopt or maintain measures that ensure the use of AI is consistent with human rights, democracy, and the law.
And it's not over yet: alongside the AIME tool and report, the government has announced a new AI safety partnership with Singapore through a Memorandum of Cooperation. It will also be represented at the first meeting of international AI safety institutes in San Francisco later this month.
AI Safety Institute chair Ian Hogarth said: “An effective approach to AI safety requires global collaboration. That is why we are putting such an emphasis on the International Network of AI Safety Institutes, while also strengthening our own research partnerships.”
However, the United States has moved further away from AI collaboration with its recent directive limiting the sharing of AI technologies and requiring protections against foreign access to AI resources.