Google and Meta criticise UK and EU AI regulations


Google and Meta have this week openly criticised European regulation of artificial intelligence, suggesting it will scupper the region's innovation potential.

Representatives from Facebook’s parent company, along with Spotify, SAP, Ericsson, Klarna and more, have signed an open letter to Europe expressing concerns about “inconsistent regulatory decision-making.”

The document states that interventions by European data protection authorities have created uncertainty about what data companies can use to train their AI models. The signatories call for consistent, swift decisions on data regulation that allow European data to be used in AI training, in line with the GDPR.

The letter also highlights that the bloc will miss out on the latest “open” AI models, which are freely available to all, and “multimodal” models, which accept inputs and generate outputs in text, images, voice, video and other formats.

By preventing innovation in these areas, regulators are “depriving Europeans of the technological advances enjoyed by the US, China and India.” Moreover, without freedom of action over European data, models “will not understand or reflect European knowledge, culture or languages.”

SEE: Companies seek to balance innovation and ethics in AI, according to Deloitte

“We want to see Europe succeed and prosper, including in the field of cutting-edge AI research and technology,” the letter says. “But the reality is that Europe has become less competitive and less innovative compared to other regions and now risks falling even further behind in the AI era due to incoherent regulatory decision-making.”

Google suggests copyrighted data could be used to train commercial AI models

Google has also separately spoken out about UK laws that prevent AI models from being trained on copyrighted materials.

“If we don't take proactive steps, we risk being left behind,” Debbie Weinstein, Google's UK managing director, told The Guardian.

“The unresolved copyright issue is a barrier to development, and one way to unblock it, obviously from Google’s perspective, is to go back to where I think the government was in 2023, which was to allow commercial use of TDM.”

TDM, or text and data mining, is the automated analysis of large volumes of text and data, which involves copying copyrighted works. In the UK, it is currently permitted only for non-commercial research. Plans to extend the exception to commercial purposes were abandoned in February after widespread criticism from the creative industries.

Google also published a paper this week called “Unlocking the UK’s AI Potential” in which it makes a number of suggestions for policy changes, including allowing commercial TDM, establishing a publicly funded mechanism for computing resources and launching a national AI skills service.

SEE: 83% of UK companies increase salaries for professionals with AI skills

It also calls for a “pro-innovation regulatory framework” that takes a risk-based and context-specific approach and is administered by public regulators such as the Competition and Markets Authority and the Information Commissioner’s Office, according to The Guardian.

EU regulations have affected big tech's AI plans

The EU, with 448 million people, represents a huge market for the world’s biggest tech companies. However, the implementation of the stringent Artificial Intelligence Act and the Digital Markets Act has delayed or prevented the launch of their latest AI products in the region.

In June, Meta paused plans to train its large language models on public content shared by adults on Facebook and Instagram in Europe, following pushback from Irish regulators. Meta AI, its flagship artificial intelligence assistant, has yet to launch within the bloc, which the company attributes to the region’s “unpredictable” regulations.

Apple also won’t initially make its new set of generative AI capabilities, Apple Intelligence, available on EU devices, citing “regulatory uncertainties generated by the Digital Markets Act,” according to Bloomberg.

SEE: Apple Intelligence EU: Possible Mac launch amid DMA rules

According to a statement provided to The Verge by Apple spokesperson Fred Sainz, the company is “concerned that the DMA’s interoperability requirements may require us to compromise the integrity of our products in ways that put user privacy and data security at risk.”

Thomas Regnier, a spokesperson for the European Commission, told TechRepublic in an emailed statement: “All companies are welcome to offer their services in Europe, as long as they comply with EU law.”

Google’s Bard chatbot launched in Europe four months after its US and UK debut, following privacy concerns raised by Ireland’s Data Protection Commission. Similar regulatory resistance is believed to have delayed the region’s access to its successor, Gemini.

This month, Ireland’s DPC opened a new investigation into Google’s PaLM 2 AI model over a possible breach of GDPR rules. Specifically, it is examining whether Google adequately completed an assessment identifying the risks posed by the way it processes Europeans’ personal data to train the model.

X has also agreed to permanently stop processing personal data from EU users’ public posts to train its Grok artificial intelligence model. The DPC took Elon Musk’s company to Ireland’s High Court after finding it had failed to implement mitigation measures, such as an opt-out option, until several months after it began collecting data.

Many tech companies have their European headquarters in Ireland as it has one of the lowest corporate tax rates in the EU at 12.5%, so the country's data protection authority plays a key role in regulating technology across the bloc.

The UK's own AI regulations remain unclear

The UK government’s stance on AI regulation has been mixed, in part due to a change in leadership in July. Some officials are also concerned that over-regulation could alienate major tech players.

On July 31, Peter Kyle, Secretary of State for Science, Innovation and Technology, told executives from Google, Microsoft, Apple, Meta and other major tech companies that the upcoming AI Bill will focus on the large, ChatGPT-style foundation models created by just a handful of companies, according to the Financial Times.

He also assured them that it would not become a “Christmas tree bill” where more regulations would be added through the legislative process. He added that the bill would focus primarily on making voluntary agreements between companies and the government legally binding and would make the AI Safety Institute an “independent government body.”

As seen in the EU, AI regulation can delay the launch of new products. While the intention is to keep consumers safe, regulators risk limiting their access to the latest technologies, which could bring tangible benefits.

Meta has taken advantage of this lack of immediate regulation in the UK by announcing that it will train its AI systems on public content shared on Facebook and Instagram in the country, something it is not currently doing in the EU.

SEE: Delaying AI rollout in UK by five years could cost economy more than £150bn, Microsoft report says

In August, the Labour government scrapped £1.3bn of funding that the Conservatives had earmarked for artificial intelligence and technological innovation.

The UK government has also continually indicated that it plans to take a strict approach to regulating AI developers. The King’s Speech in July said the government “will seek to put in place appropriate legislation to impose requirements on those working to develop the most powerful artificial intelligence models.”
