Some employees at generative AI companies write a letter asking for a 'right to warn' about risks


Some current and former employees at OpenAI, Google DeepMind, and Anthropic published a letter on June 4 calling for whistleblower protections, more open dialogue about risks, and “a culture of open criticism” at leading generative AI companies.

The right-to-warn letter illuminates some of the inner workings of the few high-profile companies at the center of the generative AI spotlight. OpenAI has a distinctive status as a nonprofit trying to “navigate the massive risks” of bringing theoretical AI into the mainstream.

For businesses, the letter arrives at a time of increasing pressure to adopt generative AI tools; it also reminds technology decision-makers of the importance of strong policies around the use of AI.

Right to Warn Letter Calls on Frontier AI Companies Not to Retaliate Against Whistleblowers, and More

The letter's demands are:

  1. That frontier AI companies not enforce agreements that prevent “disparagement” of those companies.
  2. That they create a verifiably anonymous avenue for employees to raise risk concerns to the companies, to regulators, or to independent organizations.
  3. That they support “a culture of open criticism” regarding risk, with allowances for protecting trade secrets.
  4. That they not retaliate against whistleblowers.

The letter comes about two weeks after an internal shake-up at OpenAI revealed restrictive confidentiality agreements for departing employees. Breaking the confidentiality and non-disparagement agreements could reportedly cost departing employees the vested equity they had acquired in the company, which could far exceed their salaries. On May 18, OpenAI CEO Sam Altman said on X that he had been unaware of the provision, that OpenAI had never clawed back anyone's vested equity, and that the company would remove the clause from its exit paperwork.

All of the current OpenAI employees who signed the Right to Warn letter did so anonymously.

What potential dangers of generative AI does the letter address?

The open letter addresses the potential dangers of generative AI, naming risks that range “from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

OpenAI's stated purpose since its inception has been to create and safeguard artificial general intelligence, or AGI: theoretical AI that is as capable as or more capable than humans, a definition that conjures up science fiction images of killer machines and humans living as second-class citizens. Some critics of AI see these fears as a distraction from more pressing concerns at the intersection of technology and culture, such as the theft of creative work. The letter's writers mention both existential and social threats.

How might caution from within the tech industry affect the AI tools available to businesses?

Companies that are not frontier AI developers, but are deciding how to move forward with generative AI, might take this letter as an occasion to consider their AI usage policies, their security and reliability vetting of AI products, and their data provenance processes when using generative AI.

SEE: Organizations should carefully consider an AI ethics policy tailored to their business objectives.

Juliette Powell, co-author of “The AI Dilemma” and professor of artificial intelligence and machine learning ethics at New York University, has studied the outcomes of employee protests against corporate practices for years.

“Open warning letters from employees alone don't do much good without the support of the public, which has a few more mechanisms of power when combined with those of the press,” she said in an email to TechRepublic. For example, Powell said, writing op-eds, putting public pressure on company boards, or withholding investment in frontier AI companies could be more effective than signing an open letter.

Powell referred to last year's request for a six-month pause in AI development as another example of such a letter.

“I think the chances of Big Tech accepting the terms of these letters (AND ENFORCING THEM) are about as likely as computer and systems engineers being held accountable for what they built the same way a structural engineer, a mechanical engineer, or an electrical engineer would be,” Powell said. “Therefore, I do not see a letter like this affecting the availability or use of AI tools for businesses.”

OpenAI has always paired its pursuit of increasingly capable generative AI with an acknowledgment of risk, so this letter may arrive at a time when many companies have already weighed the pros and cons of generative AI products for themselves. Conversations within organizations about AI use policies could incorporate the “culture of open criticism” the letter calls for. Business leaders could consider adopting protections for employees who discuss potential risks, or choose to invest only in AI products they believe have responsible social, ethical, and data governance practices.
