Data Privacy Jumps 50 Percentage Points as a Top GenAI Concern in 2024


Data privacy concerns around generative AI have risen sharply, according to a new report from Deloitte. Last year, only 22% of professionals placed data privacy among their top three concerns; this year, the figure has climbed to 72%.

The next most cited ethical concerns for GenAI were transparency and data provenance, which 47% and 40% of professionals, respectively, ranked in their top three this year. Meanwhile, only 16% expressed concern about job displacement.

Staff are increasingly curious about how AI technologies work, especially where sensitive data is involved. A September study by HackerOne found that nearly half of security professionals believe AI is risky, with many viewing leaked training data as a threat.

Similarly, 78% of business leaders ranked “safe and secure” as one of their top three ethical technology principles, an increase of 37% from 2023, further demonstrating that safety is a priority.

The survey results come from Deloitte's 2024 “State of Ethics and Trust in Technology” report, which surveyed more than 1,800 business and technical professionals around the world about the ethical principles they apply to technologies, specifically GenAI.

High-profile AI security incidents are likely attracting more attention

In both this year's and last year's reports, just over half of respondents said that cognitive technologies such as AI and GenAI pose the greatest ethical risks of any emerging technology, ahead of virtual reality, quantum computing, autonomous vehicles, and robotics.

This heightened concern may stem from greater awareness of the importance of data security following highly publicized incidents, such as when a bug in OpenAI's ChatGPT exposed the personal data of around 1.2% of ChatGPT Plus subscribers, including names, email addresses, and partial payment details.

Trust in the chatbot was likely further eroded by news that hackers had broken into an online forum used by OpenAI employees and stolen sensitive information about the company's AI systems.

SEE: Artificial Intelligence Ethics Policy

“The availability and widespread adoption of GenAI may have increased respondents' familiarity and confidence in the technology, generating optimism about its potential for good,” said Beena Ammanath, Global Deloitte AI Institute and Trustworthy AI leader, in a press release.

“Continued feelings of caution about its apparent risks underscore the need for specific and evolved ethical frameworks that enable positive impact.”

AI legislation is affecting the way organizations operate around the world

Naturally, more staff are using GenAI at work than last year: the percentage of professionals who report using it internally rose 20% between Deloitte's year-over-year reports.

A whopping 94% said their companies have embedded it into processes in some way. However, most indicated that it remains in the pilot phase or that its use is limited; only 12% said it is in widespread use. This aligns with recent Gartner research finding that most GenAI projects do not make it past the proof-of-concept stage.

SEE: IBM: As enterprise adoption of artificial intelligence increases, barriers limit its use

Regardless of how pervasive it is, decision-makers want to ensure their use of AI doesn't get them into trouble, particularly when it comes to legislation. Compliance was the top-rated reason for having ethical technology policies and guidelines, cited by 34% of respondents, while regulatory penalties were among the top three concerns reported should those standards not be followed.

The EU AI Act came into force on August 1 and imposes strict requirements on high-risk AI systems to ensure safety, transparency, and ethical use. Noncompliance can result in fines ranging from €35 million ($38 million) or 7% of global turnover down to €7.5 million ($8.1 million) or 1.5% of global turnover.

More than a hundred companies, including Amazon, Google, Microsoft, and OpenAI, have already signed the EU AI Pact, volunteering to begin implementing the act's requirements ahead of the legal deadlines. This both signals their commitment to responsible AI deployment to the public and helps them avoid future legal challenges.

Similarly, in October 2023, the U.S. issued an Executive Order on AI that provides broad guidance on maintaining safety, civil rights, and privacy within government agencies while promoting AI innovation and competition across the country. Although it is not law, many companies operating in the U.S. may make policy changes in response, to ensure compliance with evolving federal oversight and public expectations about AI safety.

SEE: G7 countries establish voluntary code of conduct on AI

The EU AI Act has been influential in Europe: 34% of European respondents said their organizations had changed their use of AI over the past year in response. However, its impact is more widespread, with 26% of South Asian respondents and 16% of North and South American respondents also making changes due to the act's enactment.

Additionally, 20% of U.S.-based respondents said they had made changes to their organizations in response to the executive order. A quarter of respondents from South Asia, 21% from South America and 12% from Europe said the same.

“Cognitive technologies such as AI are recognized as having the greatest potential to benefit society and the greatest risk of misuse,” the report's authors wrote.

“The accelerated adoption of GenAI may be outpacing organizations' ability to manage the technology. Companies should prioritize both the implementation of ethical standards for GenAI and a meaningful selection of use cases to which GenAI tools are applied.”
