Companies seek to balance innovation and ethics in AI, says Deloitte


As generative AI becomes more popular, organizations must consider how to ethically implement it. But what does ethical AI implementation look like? Does it involve controlling human-level intelligence? Preventing bias? Or both?

To assess how companies are addressing this issue, Deloitte recently surveyed 100 senior executives at U.S. companies with annual revenues between $100 million and $10 billion. The results indicate how business leaders are incorporating ethics into their generative AI policies.

Top priorities for AI ethics

Which ethical issues do these organizations consider most important? Respondents prioritized the following issues in developing and deploying AI:

  • Balancing innovation with regulation (62%).
  • Ensuring transparency in how data is collected and used (59%).
  • Addressing user and data privacy concerns (56%).
  • Ensuring transparency in the operation of business systems (55%).
  • Mitigating biases in algorithms, models and data (52%).
  • Ensuring that systems operate reliably and as intended (47%).

Organizations with larger revenues ($1 billion or more per year) were more likely than smaller companies to say their ethical frameworks and governance structures foster technological innovation.

Unethical uses of AI can include spreading misinformation, which is especially critical during election seasons, and reinforcing bias and discrimination. Generative AI can inadvertently replicate human biases present in the content it is trained on, and malicious actors can use it to deliberately create biased content more quickly.

Threat actors can also exploit the speed at which generative AI produces convincing text to craft phishing emails at scale. Other potentially unethical use cases include AI making consequential decisions in warfare or law enforcement.

In September 2023, the U.S. government and major tech companies agreed to a voluntary commitment that sets standards for disclosing the use of generative AI and content created with it. The White House Office of Science and Technology Policy has also published a Blueprint for an AI Bill of Rights, which includes anti-discrimination measures.

U.S. companies using AI at certain scales and for high-risk tasks must report to the Department of Commerce starting in January 2024.

SEE: Get started with a template for an AI Ethics Policy.

“For any organization adopting AI, the technology presents both the potential for positive outcomes and the risk of unintended outcomes,” said Beena Ammanath, executive director of the Global Deloitte AI Institute and leader of Trustworthy AI at Deloitte, in an email to TechRepublic.

Who makes ethical decisions about AI?

In 34% of the organizations surveyed, ethical decisions about AI are made by directors or senior managers. In 24%, professionals make decisions about AI use independently. In rarer cases, ethical decisions related to AI fall to business or department leaders (17%), managers (12%), professionals with mandatory training or certifications (7%), or an AI review board (7%).

Large companies (with annual revenue of $1 billion or more) were more likely to allow workers to make independent decisions about AI use than companies with annual revenue of less than $1 billion.

The majority of executives surveyed (76%) said their organization provides ethical AI training to its workforce, and 63% said it provides such training to the board of directors. Fewer organizations train workers involved in the build phase (69%) or the pre-development phase (49%) of AI.

“As organizations continue to explore opportunities with AI, it is encouraging to see how collaborative governance frameworks have emerged to empower workforces to promote ethical outcomes and drive positive impact,” said Kwasi Mitchell, Deloitte’s US chief purpose and DEI officer. “By adopting procedures designed to promote accountability and safeguard trust, leaders can establish a culture of integrity and innovation that allows them to effectively leverage the power of AI, while promoting equity and driving impact.”

Are organizations hiring and upskilling for AI ethics positions?

Surveyed organizations have hired, or plan to hire, people in the following roles:

  • AI researcher (59%).
  • Policy analyst (53%).
  • AI compliance manager (50%).
  • Data scientist (47%).
  • AI governance specialist (40%).
  • Data ethics specialist (34%).
  • AI ethicist (27%).

Many organizations (68%) source these professionals through internal training and development programs. Fewer turn to external sources, such as traditional hiring or certification programs, and fewer still rely on campus hiring and collaborations with academic institutions.

“Ultimately, companies need to be confident that their technology can be trusted to protect the privacy, security, and fair treatment of their users, and is aligned with their values and expectations,” Ammanath said. “An effective approach to AI ethics should be based on the specific needs and values of each organization, and companies that implement strategic ethics frameworks will often find that these systems support and drive innovation, rather than hinder it.”
