As of August 2, 2025, providers of general-purpose AI (GPAI) models in the European Union must comply with key provisions of the EU AI Act. The requirements include maintaining up-to-date technical documentation and summaries of training data.
The AI Act sets out EU-wide measures aimed at ensuring that AI is used safely and ethically. It establishes a risk-based approach to regulation that classifies AI systems according to their perceived level of risk and their impact on citizens.
As the deadline approaches, legal experts warn that the legislation lacks clarity, leaving AI providers open to potential penalties even when they intend to comply. Some of the requirements also threaten innovation in the bloc by demanding too much of technology startups, yet the legislation offers no real mechanism for mitigating the risks of bias and harmful AI-generated content.
Oliver Howley, partner in the technology department at law firm Proskauer, spoke to TechRepublic about these shortcomings. “In theory, August 2, 2025 should be a milestone for responsible AI,” he said in an email. “In practice, it is creating significant uncertainty and, in some cases, real commercial hesitancy.”
Unclear legislation exposes GPAI providers to IP leaks and penalties
Behind the scenes, AI model providers in the EU are wrestling with the legislation because “it leaves too much open to interpretation,” Howley told TechRepublic. “In theory, the rules are achievable … but they have been written at a high level, and that creates genuine ambiguity.”
The law defines GPAI models as those with “significant generality” but sets no clear thresholds, and it requires providers to publish “sufficiently detailed” summaries of the data used to train their models. The ambiguity here creates a problem, since revealing too much detail could “risk revealing valuable IP or triggering copyright disputes,” Howley said.
Some of the opaque requirements also set unrealistic standards. The GPAI Code of Practice, a voluntary framework that technology companies can sign up to in order to implement and comply with the AI Act, instructs providers of GPAI models to filter their training data to exclude websites that have opted out of data mining. Howley said this is “a standard that is quite difficult going forward, let alone retroactively.”
Nor is it clear who is obliged to meet the requirements. “If you fine-tune an open-source model for a specific task, are you now the ‘provider’?” Howley said. “What if you simply host it or wrap it in a downstream product? That matters because it affects who carries the compliance burden.”
While providers of open-source GPAI models are exempt from some of the transparency obligations, this does not apply if their models pose “systemic risk.” In that case, they face a different and more rigorous set of obligations, including safety testing, red teaming, and post-deployment monitoring. But because open-source licensing allows unrestricted use, monitoring every downstream application is almost impossible; the provider could nevertheless be held responsible for harmful outcomes.
Heavy requirements could have a disproportionate impact on AI startups
“Certain developers, despite signing the code, have expressed concern that the transparency requirements could expose trade secrets and slow innovation in Europe,” Howley told TechRepublic. OpenAI, Anthropic, and Google have all committed to the code, with the search giant in particular voicing such concerns. Meta has publicly refused to sign it in protest at the legislation in its current form.
“Some companies are already delaying launches or limiting access in the EU market, not because they disagree with the law’s objectives, but because the compliance route is unclear and the cost of getting it wrong is too high.”
Howley said startups are having the hardest time, as they lack the in-house legal support needed to handle the extensive documentation requirements. These are some of the most essential companies when it comes to innovation, and the EU recognizes as much.
“For early-stage developers, the risk of legal exposure or feature rollbacks may be enough to divert investment away from the EU entirely,” he added. “So although the law’s objectives are sound, the risk is that its implementation slows exactly the kind of responsible innovation it was designed to support.”
One possible knock-on effect of stifling startups’ potential is heightened geopolitical tension. The US administration’s vocal opposition to AI regulation clashes with the EU’s push for oversight, and could strain ongoing trade talks. “If enforcement actions start to hit US providers, that tension could escalate further,” Howley said.
The Act has little focus on preventing bias and harmful content, limiting its effectiveness
While the law imposes significant transparency requirements, it sets no mandatory thresholds for accuracy, reliability, or real-world impact, Howley told TechRepublic.
“Even systemic-risk models are not regulated according to their actual outputs, only on the robustness of the surrounding paperwork,” he said. “A model could meet every technical requirement, from publishing training summaries to running incident-response protocols, and still produce harmful or biased content.”
What rules are in force on August 2?
There are five sets of rules that providers of GPAI models must be aware of and comply with from this date:
Notified bodies
Providers of high-risk GPAI models should be prepared to engage with notified bodies for conformity assessments, and should understand the regulatory structure that underpins these assessments.
High-risk AI systems are those that pose a significant threat to health, safety, or fundamental rights. They are either: 1. used as safety components of products governed by EU product safety laws, or 2. deployed in a sensitive use case, including:
- Biometric identification
- Critical infrastructure management
- Education
- Employment and human resources
- Law enforcement
GPAI models: systemic risk triggers stricter obligations
GPAI models can serve multiple purposes. These models pose “systemic risk” if they exceed 10^25 floating-point operations (FLOPs) of cumulative compute during training and are designated as such by the EU AI Office. OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini meet these criteria.
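To put that threshold in perspective, here is a rough back-of-the-envelope sketch in Python. It uses the common 6 × parameters × tokens approximation for training compute from the scaling-laws literature; that heuristic, and the example model size, are illustrative assumptions rather than anything defined in the Act.

```python
# Rough back-of-the-envelope check against the AI Act's systemic-risk
# threshold. The 6 * N * D approximation for training compute is a common
# heuristic from the scaling-laws literature, not something the Act defines.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute, per the Act


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training FLOPs as 6 * N * D (forward + backward pass)."""
    return 6 * parameters * training_tokens


def crosses_threshold(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the Act's threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk: {crosses_threshold(70e9, 15e12)}")
# ~6.3e24 FLOPs, just below the 1e25 threshold under this approximation.
```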
All providers of GPAI models must maintain technical documentation, a summary of training data, a copyright compliance policy, guidance for downstream deployers, and transparency measures covering the model’s capabilities, limitations, and intended use.
Providers of GPAI models that pose systemic risk must also perform model evaluations, report incidents, implement risk-mitigation and cybersecurity safeguards, disclose energy use, and carry out post-market monitoring.
Governance: oversight by multiple EU bodies
This set of rules defines the governance and enforcement architecture at both EU and national level. Providers of GPAI models must cooperate with the EU AI Office, the European AI Board, the scientific panel, and national authorities to fulfil their compliance obligations, respond to oversight requests, and participate in risk monitoring and incident reporting.
Confidentiality: protections for IP and trade secrets
All data requests made to providers of GPAI models by the authorities must be legally justified, securely handled, and subject to confidentiality protections, especially for IP, trade secrets, and source code.
Sanctions: fines of up to €35 million or 7% of turnover
Providers of GPAI models face penalties of up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher, for breaching the prohibited AI practices under Article 5, such as:
- Human behavior manipulation
- Social scoring
- Scraping of facial recognition data
- Real-time biometric identification in public spaces
Other violations of regulatory obligations, such as transparency, risk management, or deployment responsibilities, can result in fines of up to €15,000,000 or 3% of turnover.
Supplying misleading or incomplete information to the authorities can lead to fines of up to €7,500,000 or 1% of turnover.
For SMEs and startups, the lower of the fixed amount or the percentage applies. Penalties will take into account the seriousness of the violation, its impact, whether the provider cooperated, and whether the violation was intentional or negligent.
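Because the “whichever is higher” rule, and its inversion for SMEs, is easy to misread, here is a minimal sketch of the cap logic as described above. The tier amounts come from the Act itself; the function name and example turnover figures are purely illustrative.

```python
# Illustrative sketch of the fine-cap logic described above: for each tier,
# the cap is the higher of a fixed amount or a share of worldwide annual
# turnover -- except for SMEs and startups, where the lower figure applies.

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # Article 5 breaches
    "regulatory_obligation": (15_000_000, 0.03), # transparency, risk management, etc.
    "misleading_information": (7_500_000, 0.01), # false or incomplete disclosures
}


def fine_cap(violation: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum possible fine for a given violation tier."""
    fixed, pct = FINE_TIERS[violation]
    turnover_based = pct * annual_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)


# A company with EUR 2B turnover: 7% (EUR 140M) exceeds the EUR 35M floor.
print(fine_cap("prohibited_practice", 2_000_000_000))            # 140000000.0
# The same breach by an SME with EUR 50M turnover: the lower figure applies.
print(fine_cap("prohibited_practice", 50_000_000, is_sme=True))  # 3500000.0
```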
Although the specific regulatory obligations for providers of GPAI models begin to apply on August 2, 2025, a one-year grace period for compliance means there is no risk of penalties until August 2, 2026.
When does the rest of the EU AI Act come into force?
The EU AI Act was published in the Official Journal of the EU on July 12, 2024, and entered into force on August 1, 2024; however, several provisions apply in phases.
- February 2, 2025: AI systems posing an unacceptable risk (for example, social scoring and real-time biometric surveillance in public) were banned. Companies that develop or use AI must ensure their staff have a sufficient level of AI literacy.
- August 2, 2026: GPAI models placed on the market after August 2, 2025 must comply by this date, as the Commission’s enforcement powers formally begin.
The rules for certain listed high-risk AI systems also begin to apply to: 1. those placed on the market after this date, and 2. those placed on the market before this date that have undergone a substantial modification since.
- August 2, 2027: GPAI models placed on the market before August 2, 2025 must achieve full compliance.
High-risk systems used as safety components of products governed by EU product safety laws must also meet stricter obligations from this point.
- August 2, 2030: AI systems used by public sector organizations that fall into the high-risk category must be fully compliant by this date.
- December 31, 2030: AI systems that are components of large-scale EU IT systems and were placed on the market before August 2, 2027 must comply by this final deadline.
A group representing Apple, Google, Meta, and other companies urged regulators to postpone the law’s implementation by at least two years, but the EU rejected the request.