UK government scraps £1.3bn for AI and tech innovation


The UK government has shelved £1.3bn of funding that had been earmarked for AI and technological innovation. This includes £800m for the creation of an exascale supercomputer at the University of Edinburgh and £500m for the AI Research Resource (AIRR), a network of supercomputing facilities comprising Isambard at the University of Bristol and Dawn at the University of Cambridge.

The funding was originally announced by the then Conservative government as part of the autumn statement in November 2023. However, on Friday, a spokesperson for the Department for Science, Innovation and Technology (DSIT) told the BBC that the Labour government, which came to power in early July, was shelving the funding.

The spokesperson said the money had been promised by the Conservative administration but never allocated in its budget. The department's statement read: “The government is taking difficult and necessary spending decisions across all departments in the face of billions of pounds of unfunded commitments. This is essential to restore economic stability and deliver on our national mission of growth.

“We have launched the AI Opportunities Action Plan which will identify how we can strengthen our IT infrastructure to better suit our needs and consider how AI and other emerging technologies can best support our new Industrial Strategy.”

A £300m grant has already been committed to the AIRR and will continue to be used as planned. Part of that sum has been earmarked for the first phase of the Dawn supercomputer; however, the second phase, which would increase its speed tenfold, is now in jeopardy, according to The Register. The BBC reported that the University of Edinburgh had already spent £31m on a facility to house its exascale project, which the previous government considered a priority.

“We are absolutely committed to building a technology infrastructure that creates growth and opportunity for people across the UK,” the DSIT spokesperson added.

The aim of the AIRR and the exascale supercomputer was to enable researchers to analyse advanced AI models for safety and to drive advances in areas such as drug discovery, climate modelling and clean energy. According to The Guardian, the principal and vice-chancellor of the University of Edinburgh, Professor Sir Peter Mathieson, is urgently seeking a meeting with the technology secretary to discuss the future of the exascale project.

Removing funding goes against commitments made in the government's AI Action Plan

The shelved funding appears to contradict a statement by the Secretary of State for Science, Innovation and Technology, Peter Kyle, on 26 July, in which he said he was “putting AI at the heart of the government's agenda to drive growth and improve our public services”.

He made this statement as part of the announcement of the new AI Action Plan, which, once finalised, will set out how best to grow the country's AI sector.

Next month, Matt Clifford, one of the main organisers of the AI Safety Summit in November, will publish his recommendations on how to accelerate the development and adoption of useful AI products and services. An AI Opportunities Unit will also be created, made up of experts who will implement those recommendations.

The government's announcement identifies infrastructure as one of the “key enablers” of the Action Plan. Had the funding gone ahead, the exascale supercomputer and the AIRR would have provided the immense processing power needed to handle complex AI models, accelerating the research and development of AI applications.


AI bill to focus on continued innovation, despite funding changes

While the UK Labour government has pulled back on investment in supercomputers, it has taken some steps to support AI innovation.

On 31 July, Kyle told executives from Google, Microsoft, Apple, Meta and other major tech companies that the AI bill will focus on the large, ChatGPT-style foundation models built by just a handful of companies, according to the Financial Times.

He assured the tech giants that it would not become a “Christmas tree bill”, where ever more regulations are added during the legislative process. Limiting AI innovation in the UK could have a significant economic impact: a Microsoft report found that adding five years to the time it takes to implement AI could cost the country more than £150bn, while the IMF has estimated that AI could deliver annual productivity gains of up to 1.5%.

According to the FT's sources, Kyle confirmed that the AI bill will focus on two things: making existing voluntary agreements between businesses and the government legally binding, and turning the AI Safety Institute (AISI) into an independent government body.

AI bill focus 1: Making voluntary agreements between the government and big tech companies legally binding

At the AI Safety Summit, representatives from 28 countries signed the Bletchley Declaration, committing them to jointly managing and mitigating AI risks while ensuring safe and responsible development and deployment.

Eight companies involved in AI development, including ChatGPT creator OpenAI, voluntarily agreed to work with the signatories by allowing them to evaluate their latest models before release, in line with the declaration's commitments. These companies also voluntarily signed up to the Frontier AI Safety Commitments at the Seoul AI Summit in May, which include halting the development of AI systems whose serious risks cannot be mitigated.

According to the FT, UK government officials want these agreements to be made legally binding so that companies cannot back out of them if they become commercially inconvenient.

AI bill focus 2: Turning the AI Safety Institute into an independent government body

The UK AISI was launched at the AI ​​Safety Summit with three main objectives: to assess existing AI systems for risks and vulnerabilities, to conduct fundamental research into AI safety, and to share information with other national and international stakeholders.

A government official said making the AISI an independent body would reassure companies that it was not subject to government pressure, while also strengthening its position, the FT reported.

The UK government's stance on AI regulation versus innovation remains unclear

The Labour government has taken steps that both restrict and support the development of AI in the UK.

Alongside the withdrawal of AI funding, the government has suggested it will take a tough line on AI developers. The King's Speech in July announced that the government will “seek to put in place appropriate legislation to impose requirements on those working to develop the most powerful artificial intelligence models.”

This backs up Labour’s pre-election manifesto, which pledged to introduce “binding regulation for the handful of companies developing the most powerful AI models”. After the speech, Prime Minister Keir Starmer also told the House of Commons that his government would “harness the power of AI as we look to strengthen security frameworks”.

The government has also promised tech companies that the AI bill will not be overly restrictive, and it appears to be in no hurry to bring it forward: the bill had been expected to feature among the laws announced as part of the King's Speech, but was not included.
