The Biden administration will begin implementing new rules laid out in the president's executive order aimed at regulating artificial intelligence, although some experts are skeptical about how useful the new rules will be.
“The executive order's concern with model size and computing power, rather than the actual use case, is misguided. This approach risks creating compliance burdens for businesses without significantly improving accountability or transparency,” Jake Denton, research associate at the Heritage Foundation's Tech Policy Center, told Fox News Digital.
Denton's comments come after The Associated Press reported Monday that the Biden administration would begin implementing new rules from the order, including a rule requiring developers of artificial intelligence systems to disclose security test results to the government.
WHITE HOUSE: DEVELOPERS OF 'POWERFUL AI SYSTEMS' NOW HAVE TO REPORT SECURITY TESTING RESULTS TO THE GOVERNMENT
The White House AI Council met Monday to discuss progress on the three-month-old executive order, the report said, coinciding with the 90-day deadline set in the order under the Defense Production Act for AI companies to begin sharing information with the Department of Commerce.
Ben Buchanan, White House special adviser on AI, told The Associated Press that the administration is interested in knowing whether “AI systems are safe before they are released to the public — the president has been very clear that companies need to meet that bar.”
But Denton is skeptical that the order will lead to the announced results.
“The order's blurred lines and vaguely defined reporting requirements will likely lead to selective and inconsistent enforcement,” Denton said. “Meanwhile, significant information asymmetry between regulators and companies will likely make supervision ineffective.”
Christopher Alexander, director of analytics at Pioneer Development Group, also expressed skepticism about the new rules, pointing to the government's struggles to regulate other tech industries such as cryptocurrencies and expressing fears about censorship.
WHITE HOUSE URGES CONGRESS TO ACT AFTER 'ALARMING' AI TAYLOR SWIFT IMAGES
“The Biden administration's problematic regulation of cryptocurrencies is a perfect example of the government dictating to the industry instead of working with the industry to get proper regulations,” Alexander told Fox News Digital. “The aggressive social media censorship efforts by the U.S. government in recent years are also very disconcerting, and I believe that any government oversight efforts should be carefully monitored by Congress to ensure accountability. It is crucial that they clearly define 'who will watch the watchers.'”
However, Alexander argued that it is important to set standards for the industry, noting that “the private sector motivations of AI companies are not always in the best interest of the general public.”
Biden's executive order seeks to close that gap, establishing a set of common standards for future AI security.
“I think the government is setting the tone for the future. There really isn't a standard yet to test the safety of these models. So this order doesn't have much teeth yet,” Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), told Fox News Digital on Thursday.
“But some consensus processes are emerging. Over time, various prompts will probably be generated, whether randomly or not, to test the models. There will be some sophisticated AI models that will be used to converse with or test new models. Additionally, 'red teaming' will become a method used in which teams of people and technology attempt to 'break' these models.”
CHATGPT BOSS WARNS OF SOME 'SUPERHUMAN' ABILITIES AI COULD DEVELOP
Siegel compared the process to current rules for drug approval, which he said are now well understood and followed by drug developers.
“Eventually we will have that for testing AI models and honestly we should have had it for social media applications,” Siegel said.
Ziven Havens, policy director at the Bull Moose Project, argued that the administration has reached a critical point in AI regulation, which will require it to balance safety standards against the risk of stifling innovation.
“If the Biden administration is to be successful with AI regulation, it will use the information provided to it to create reasonable standards that protect both consumers and businesses' ability to innovate,” Havens told Fox News Digital.
CLICK HERE TO GET THE FOX NEWS APP
“If the administration fails to seize the moment by creating stifling regulations, the United States will see its global advantage in artificial intelligence technology fade. Waving a white flag over American innovation would be a disaster, both for our economy and our national security.”
The White House did not immediately respond to a request for comment from Fox News Digital.