The first requirements become legally binding


As of February 2, 2025, the first requirements of the EU AI Act are legally binding. Companies operating in the region that do not meet these requirements risk a fine of up to 7% of their global annual turnover.

Certain AI use cases are now prohibited, including using AI to manipulate behaviour and cause harm, for example to adolescents. However, Kirsten Rulf, co-author of the EU AI Act and a BCG partner, said these prohibitions apply to "very few" companies.

Other examples of AI practices now prohibited include:

  • AI "social scoring" that causes unfair or disproportionate harm.
  • Risk assessments predicting criminal behaviour based solely on profiling.
  • Unauthorised real-time remote biometric identification by law enforcement in public spaces.

"For example, banks and other financial institutions that use AI must carefully ensure that their creditworthiness assessments do not fall into the social scoring category," Rulf said. Read the complete list of prohibited practices in the EU AI Act.

In addition, the Act now requires staff at companies that provide or use AI systems to have "a sufficient level of AI literacy." This can be achieved through internal training or by hiring personnel with the appropriate skill set.

"Business leaders must ensure that their workforce is literate at a functional level and equipped with preliminary AI training to promote an AI-driven culture," Rulf said in a statement.

SEE: TechRepublic Premium's AI Quick Glossary

The next milestone for the AI Act will arrive at the end of April, when the European Commission is expected to publish the final Code of Practice for general-purpose AI models, according to Rulf. The Code will come into force in August, as will the powers of Member State supervisory authorities to enforce the Act.

"In the meantime, companies must demand sufficient information from AI model providers to implement AI responsibly, and work in collaboration with suppliers, policymakers, and regulators to ensure pragmatic implementation," Rulf advised.

The AI Act does not stifle innovation, but allows it to scale, according to its co-author

While many have criticised the AI Act, as well as the EU's strict approach to regulating technology companies in general, Rulf said during a BCG press roundtable that this first phase of the legislation marks the "beginning of a new era in AI."

"(The Act) puts in place the guardrails and the quality and risk management framework you need to scale," she said. "It is not stifling innovation … it is enabling the scaling of the AI innovations that we all want to see."

She added that AI inherently comes with risks, and scaling it carelessly will erode its efficiency benefits and endanger the business's reputation. "The AI Act provides a really good blueprint for how to address these risks, for how to address these quality problems, before they occur," she said.

According to BCG, 57% of European companies cite uncertainty about AI regulations as an obstacle. Rulf acknowledged that the current definition of what falls under the AI Act "cannot be easily operationalized" because it is very broad, and was written that way to remain consistent with international guidelines.

"The difference in how you interpret that definition of AI for a bank is the difference between 100 models falling under that regulation and 1,000 more models falling under that regulation," she said. "That, of course, makes a big difference for capacity costs, bureaucracy, and scrutiny, but can even policy leaders keep up with all that?"

Rulf emphasized that it is important for companies to engage with the EU AI Office while the standards for the AI Act that have not yet been finalized are being written. That way, policymakers can develop them to be as practical as possible.

SEE: What is the EU AI Office? New body formed to oversee the rollout of general-purpose models and the AI Act

"As a regulator and policymaker, you don't hear these voices," she said. "You can't deregulate if you don't know where the big problems and pain points are … I can only encourage everyone to be as loud as possible and as industry-specific as possible."

Despite the criticism, Rulf said the AI Act has "evolved into a global standard" and has been imitated both in Asia and in certain U.S. states. This means many companies may not find it too demanding if they have already adopted a responsible AI framework to comply with other regulations.

SEE: EU AI Act: Australian IT professionals must prepare for AI regulation

More than 100 organisations, including Amazon, Google, Microsoft, and OpenAI, have already signed the EU AI Pact and volunteered to begin implementing the Act's requirements ahead of the legal deadlines.
