What is the EU AI Office?


The European Commission has revealed details about its new AI Office, which is being formed to oversee general-purpose AI models and the implementation of the AI Act in the EU. The office will be made up of five units covering different areas, including regulation, innovation and AI for social good.

General-purpose models are foundational AI models that can be used for a wide range of purposes, some of which may be unknown to the developer, such as OpenAI's GPT-4.

The office, which officially launches on June 16, will take on tasks such as drafting codes of practice and advising on AI models developed before the AI Act fully comes into force. It will also provide access to AI testing resources and help ensure that cutting-edge models are integrated into real-world applications.

The European Commission decided to establish the AI Office in January 2024 to support European startups and SMEs in developing trustworthy AI. It sits within the Directorate-General for Communications Networks, Content and Technology (DG CONNECT), the department in charge of digital technologies.

The office will employ more than 140 people, including technology specialists, administrative assistants, lawyers, policy specialists and economists. It will be led by the Head of the AI Office, who will act under the guidance of a Chief Scientific Advisor and an International Affairs Advisor.

Margrethe Vestager, executive vice president for a Europe Fit for the Digital Age, said in a press release: “The AI Office inaugurated today will help us ensure a consistent implementation of the AI Act. Together with developers and the scientific community, the office will evaluate and test general-purpose AI to ensure that it serves us as humans and upholds our European values.”

Tasks the AI Office will be responsible for

  • Ensure consistent implementation of the AI Act across Member States.
  • Enforce the rules of the AI Act and apply sanctions.
  • Develop codes of practice and perform testing and evaluation of AI models.
  • Draw on the expertise of the European Artificial Intelligence Board, an independent scientific panel, large technology companies, SMEs and startups, academia, think tanks and civil society in decision-making.
  • Provide advice on AI best practices and access to testing resources such as AI Factories and European Digital Innovation Hubs.
  • Fund and support innovative research in artificial intelligence and robotics.
  • Support initiatives that ensure AI models built and trained in Europe are integrated into novel applications that drive the economy.
  • Build a strategic, coherent and effective European approach to AI that acts as a reference point for other nations.

The five units of the AI Office

1. Regulation and Compliance Unit

The Regulation and Compliance Unit will be responsible for ensuring uniform application and enforcement of the AI Act across EU Member States. Staff will conduct investigations and administer sanctions for violations.

2. AI Safety Unit

The AI Safety Unit will develop testing frameworks that identify systemic risks present in general-purpose AI models, along with corresponding mitigations. Under the EU AI Act, a model is presumed to present a systemic risk when the cumulative amount of compute used for its training exceeds a set threshold (10^25 floating-point operations).

This unit could be a response to the formation of AI Safety Institutes by the UK, US and other nations around the world. At the Seoul AI Summit in May, the EU agreed with 10 countries to form a collaborative network of AI Safety Institutes.

SEE: UK and US agree to collaborate on developing safety tests for AI models

3. Excellence in AI and Robotics Unit

The Excellence in AI and Robotics Unit will support and fund the development of AI models and their integration into useful applications. It will also coordinate the GenAI4EU initiative, which aims to support the integration of generative AI into 14 industries, including health, climate and manufacturing, as well as the public sector.

4. AI for Social Good Unit

The AI for Social Good Unit will collaborate with international organizations on AI applications that benefit society as a whole, such as weather modeling, cancer diagnosis and digital twins for artistic reconstruction. The unit follows the EU's April decision to collaborate with the United States on research that addresses “global challenges for the public good.”

SEE: UK and G7 countries will use AI to boost public services

5. AI Policy and Innovation Coordination Unit

The AI Policy and Innovation Coordination Unit will be responsible for the overall implementation of the EU AI strategy. It will monitor trends and investments, support the real-world testing of AI, establish AI Factories providing AI supercomputing service infrastructure, and collaborate with European Digital Innovation Hubs.

The EU AI Act in summary

One of the main responsibilities of the AI Office is to enforce the AI Act, the world's first comprehensive law on AI, across all Member States. The Act is a piece of EU-level legislation that seeks to establish safeguards on the use of AI in Europe while ensuring that European companies can benefit from the rapidly evolving technology.

SEE: How to prepare your company for the EU AI Act with KPMG's EU AI Center

While the AI Act was passed in March, there are still some steps to take before companies must comply with its regulations. The EU AI Act must first be published in the Official Journal of the EU, which is expected to happen in July this year. It will enter into force 20 days after publication, but its requirements will be implemented in stages over the following 24 months.

The AI Office must publish guidelines on the definition of AI systems and on prohibitions within six months of the AI Act entering into force, and codes of practice within nine months.

Companies that do not comply with the EU AI Act face fines ranging from €35 million ($38 million) or 7% of global turnover down to €7.5 million ($8.1 million) or 1.5% of turnover, depending on the violation and the size of the company.

The EU's reputation for AI regulation

The fact that three of the office's units (Excellence in AI and Robotics, AI for Social Good, and AI Policy and Innovation Coordination) are focused on fostering AI innovation and expanding its use cases suggests that the EU is not bent on stifling progress with its restrictions, as critics of the AI Act have suggested. Last year, OpenAI's Sam Altman said he was specifically wary of overregulation in the EU.

In addition to the AI Act, the EU is taking a number of measures to ensure that AI models comply with the GDPR. On May 24, the European Data Protection Board's ChatGPT working group found that OpenAI has not done enough to ensure its chatbot provides accurate responses. Data accuracy and privacy are two important pillars of the GDPR, and in March 2023 Italy temporarily blocked ChatGPT for unlawfully collecting personal data.

In a report summarizing the working group's findings, the authors wrote: “Although the measures taken to comply with the transparency principle are beneficial in avoiding misinterpretation of ChatGPT's output, they are not sufficient to comply with the data accuracy principle.”
