OpenAI, creator of ChatGPT, plans to introduce tools to counter electoral misinformation


OpenAI says it works to prevent abuse, provide transparency in AI-generated content, and improve access to accurate voting information

The OpenAI logo is seen in this illustration taken on February 3, 2023. —Reuters

In a major step toward ensuring the integrity of elections around the world, OpenAI, the world's leading artificial intelligence (AI) company, has announced the launch of tools to counter disinformation.

ChatGPT's creator, in a blog post, said the company is “working to prevent abuse, provide transparency into AI-generated content, and improve access to accurate voting information.”

The move is seen as significant given the dangers of fake news and misinformation affecting electoral processes around the world, with a host of countries heading to the polls, including Pakistan, India, the United States and the European Union.

The World Economic Forum (WEF), in its Global Risks Report, has also declared AI-driven misinformation the “biggest near-term threat” to the global economy.

Vowing to prevent harmful use of its technology (ChatGPT and DALL·E), OpenAI invited all stakeholders to help protect the integrity of elections.

“We want to ensure that our technology is not used in a way that could undermine this process. We want to ensure that our AI systems are built, deployed and used safely. Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will continue to evolve our approach as we learn more about how our tools are used,” the company said.

Preventing abuse

The company added that before launching new systems, it tests them, engages users and external partners for feedback, and builds safety mitigations to reduce the potential for harm.

“DALL·E has guardrails to decline requests for the generation of images of real people, including candidates,” the company said.

OpenAI said it is still working to understand how effective its tools could be for personalized persuasion; until that is clear, it will not allow people to build applications for political campaigning and lobbying.

The company has also enabled users to report potential violations through its new GPTs.

Additionally, a mechanism has been implemented that prevents developers from creating chatbots that impersonate real people or institutions.

Likewise, applications that could discourage people from participating in democratic processes will not be allowed.

Transparency around content

Noting the importance of labeling AI-created content, OpenAI said it is working on several provenance efforts that would provide reliable attribution for text generated by ChatGPT and also give users the ability to detect whether an image was created using DALL·E 3.

“Earlier this year, we will implement Coalition for Content Provenance and Authenticity digital credentials, an approach that encodes details about the provenance of content using cryptography, for images generated by DALL-E 3.”

The coalition, also known as C2PA, aims, according to AFP, to improve methods for identifying and tracking digital content. Its members include Microsoft, Sony, Adobe, and the Japanese imaging companies Nikon and Canon.
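The general idea behind such provenance credentials is to bind metadata about how content was made to the content bytes themselves with a cryptographic signature, so that tampering with either is detectable. Below is a minimal Python sketch of that concept only; it is not the actual C2PA format (which uses X.509 certificates and embedded manifests), and the key, function names, and record fields are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; a real provenance
# system would use certificate-based public-key signatures.
SECRET_KEY = b"demo-signing-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Create a provenance record tied to the content's SHA-256 hash."""
    record = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record is signed and the content is unmodified."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

image_bytes = b"\x89PNG...fake image data"
cred = attach_provenance(image_bytes, "image-generator")
print(verify_provenance(image_bytes, cred))        # True
print(verify_provenance(b"tampered bytes", cred))  # False
```

Because the signature covers the content hash, altering the image invalidates the credential, which is what makes such labels useful for detecting manipulated media.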

Collaboration

ChatGPT, the statement added, is increasingly integrating with existing information sources, giving users access to real-time news reporting globally, complete with attribution and links.

Similarly, to improve access to authoritative voting information, OpenAI has partnered with the National Association of Secretaries of State (NASS): ChatGPT will direct users to CanIVote.org, an authoritative voting information website in the United States.
