Meta now has an AI chatbot, and AI's influence on social media is growing


When you open Facebook Messenger these days, a new prompt greets you: "Ask Meta AI anything."

You may have opened the app to text a friend, but Meta's new AI-powered chatbot tempts you with encyclopedic knowledge that's just a few keystrokes away.

Meta, the parent company of Facebook, has installed its in-house chatbot on its WhatsApp and Instagram services as well. Now, billions of Internet users can open one of these free social media platforms and use Meta AI as a dictionary, guide, counselor or illustrator, among many other tasks it can perform, although not always reliably or infallibly.

“Our goal is to build the world's leading AI and make it available to everyone,” Meta CEO Mark Zuckerberg said when announcing the chatbot's launch two weeks ago. “We believe that Meta AI is now the most intelligent AI assistant that you can freely use.”

As Meta's moves suggest, generative AI is making its way into social media. TikTok has an engineering team focused on developing large language models that can recognize and generate text, and it is hiring writers and reporters who can annotate and improve the performance of these AI models. Instagram's help page states: “Meta can use [user] messages to train the AI model, helping to improve AIs.”

TikTok and Meta did not respond to a request for comment, but artificial intelligence experts said social media users can expect to see more of this technology influencing their experience, for better or possibly worse.

Part of the reason social media apps are investing in AI is that they want to become “stickier” to consumers, said Ethan Mollick, a professor at the University of Pennsylvania's Wharton School who teaches entrepreneurship and innovation. Apps like Instagram try to keep users on their platforms as long as possible because captive attention drives advertising revenue, he said.

On Meta's first-quarter earnings call, Zuckerberg said it would take some time for the company to turn a profit on its investments in the chatbot and other uses of AI, but the technology is already shaping user experiences on its platforms.

“Right now, about 30% of posts in the Facebook feed are delivered by our AI recommendation system,” Zuckerberg said, referring to the behind-the-scenes technology that shapes what Facebook users see. “And for the first time, more than 50% of the content people see on Instagram is now recommended by AI.”

In the future, AI will not only personalize user experiences, said Jaime Sevilla, director of Epoch, a research institute that studies trends in AI technology. In fall 2022, millions of users were captivated by Lensa's AI capabilities as it generated whimsical portraits from selfies. Expect to see more of this, Sevilla said.

“I think you're going to end up seeing totally AI-generated people releasing AI-generated music and stuff,” he said. “We could live in a world where the role humans play in social media is a small part of the whole.”

Mollick, author of the book “Co-Intelligence: Living and Working with AI,” said these chatbots are already producing some of what people read online. “AI is increasingly driving many online communications,” he said. “[But] we actually don't know how much AI writing there is.”

Sevilla said generative AI likely won't supplant the digital marketplace created by social media. People crave authenticity in their interactions with friends and family online, he said, and social media companies must preserve a balance between that and AI-generated content and targeted advertising.

Although AI can help consumers find more useful products in daily life, there is also a dark side to the technology's appeal that can turn into coercion, Sevilla said.

“The systems will be pretty good at persuasion,” he said. A newly published study by artificial intelligence researchers at the Swiss Federal Institute of Technology in Lausanne found that GPT-4 was 81.7% more effective than humans at convincing an opponent in a debate to agree with it. While the study has not yet been peer-reviewed, Sevilla said the findings were concerning.

“That means [AI] could significantly expand the ability of fraudsters to interact with many victims and perpetrate more and more fraud,” he added.

Sevilla said policymakers should be aware of the dangers of AI spreading misinformation as the United States heads into another politically charged voting season this fall. Other experts warn that the question is not whether AI could play a role in influencing democratic systems around the world, but how.

Bindu Reddy, CEO and co-founder of Abacus.AI, said the solution is a little more nuanced than banning AI from our social media platforms: Bad actors were spreading hate and misinformation online long before AI entered the equation. For example, human rights advocates criticized Facebook in 2017 for failing to filter online hate speech that fueled the Rohingya genocide in Myanmar.

In Reddy's experience, AI has been good at detecting things like bias and pornography on online platforms. She has been using AI to moderate content since 2016, when she launched an anonymous social media app called Candid that relied on natural language processing to detect misinformation.

Regulators should prohibit people from using AI to create fakes of real people, Reddy said. But she criticizes laws such as the European Union's sweeping restrictions on AI development. In her opinion, it is dangerous for the United States to fall behind competing countries, such as China and Saudi Arabia, which are investing billions of dollars in developing artificial intelligence technology.

So far, the Biden administration has released a “Blueprint for an AI Bill of Rights” that offers suggestions for safeguards the public should have, including protections for data privacy and against algorithmic discrimination. It is not enforceable, although it hints at legislation that might come.

Sevilla acknowledged that AI moderators can be trained to have a company's biases, leading to some opinions being censored. But human moderators have also shown political biases.

For example, in 2021, The Times reported on complaints that it was difficult to find pro-Palestinian content on Facebook and Instagram. And conservative critics accused Twitter of political bias in 2020 because it blocked links to a New York Post story about the contents of Hunter Biden's laptop.

“In fact, we can study what kind of biases [AI] reflects,” said Sevilla.

Still, he said, AI could become so effective that it could powerfully oppress free speech.

“What happens when everything on your feed fits perfectly into company guidelines?” Sevilla said. “Is that the kind of social media you want to consume?”
