The European Commission this week launched its new artificial intelligence (AI) office, which will help set policies for the bloc while also serving as a “global reference point”, according to officials.
“The European AI Office will support the development and use of trustworthy AI, while protecting against AI risks,” the commission wrote in a statement posted on its website. “The AI Office was created within the European Commission as a center of AI expertise and forms the basis of a single European AI governance system.”
“The AI Office also promotes an innovative ecosystem of trusted AI, to reap social and economic benefits,” the commission said. “It will ensure a strategic, coherent and effective European approach to AI at an international level, becoming a global reference point.”
The Commission presented its AI strategy package in April 2021, aiming to make the European Union (EU) a “world-class hub for AI and ensure that AI is human-centred and trustworthy.”
The new office will mainly work to coordinate AI policy among EU member states and support the bloc's governing bodies, reflecting a key point of the Bletchley Park agreement signed last year during the first global summit on AI safety.
The Bletchley Declaration, signed by 28 countries including the United States, China and the United Kingdom, focuses on two main points: identifying AI safety risks and “building respective risk-based policies across our countries to ensure safety in light of such risks.”
Safety in the development and use of AI has remained a central topic of debate and policy since the public first realized the technology's transformative potential.
To monitor that development, the European Commission launched an AI innovation package, including the GenAI4EU initiative, which will support startups and small and medium-sized businesses to ensure that any new AI projects “respect the values and standards of the EU”.
European Commission President Ursula von der Leyen, in a State of the Union address, announced a new initiative to make Europe's supercomputers available to innovative European AI startups. She also launched a competition offering cash prizes of 250,000 euros (approximately $273,500) to companies that develop new AI models, provided they release the models under an open-source license for non-commercial use or publish their research results.
Competing to lead the way in AI doesn't just mean staying at the forefront of technological development; AI safety policy has also proven to be a competitive arena for nations vying to establish themselves as industry leaders.
Following the safety summit, the United States established the U.S. Artificial Intelligence Safety Institute under the National Institute of Standards and Technology (NIST), seeking to “facilitate the development of standards for the safety, security and testing of AI models,” among other tasks.
Europe followed suit and adopted the EU AI Act, which the commission touts as the world's first comprehensive law on AI. The European Parliament stated that AI developed within member states should remain “safe, transparent, traceable, non-discriminatory and environmentally friendly.”
“AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes,” Parliament stated.
The AI Office will work with a “variety of institutions, experts and stakeholders” to fulfill its tasks, including an independent panel of scientific experts to ensure “strong links with the scientific community.”