When you interact with ChatGPT and other conversational generative AI tools, they process your input through algorithms to compose a response that can seem to come from a conscious being, even though that's not how large language models (LLMs) actually work. Nonetheless, two-thirds of respondents to a University of Waterloo study said they believe AI chatbots are conscious in some form, effectively passing a conversational Turing test by convincing users that the AI thinks and feels the way a person does.
Generative AI, as embodied in OpenAI's work on ChatGPT, has advanced by leaps and bounds in recent years. The company and its rivals often talk about a vision of artificial general intelligence (AGI) with human-like intelligence, and OpenAI even has a new scale to measure how close its models are to achieving AGI. But even the most optimistic experts don't suggest that AGI systems will ever be self-aware or capable of true emotion. Still, of the 300 people who participated in the study, 67% said they believed ChatGPT could reason, feel, and be aware of its existence in some way.
There was also a notable correlation between how often someone uses AI tools and how likely they are to perceive consciousness in them. That's a testament to how well ChatGPT mimics human conversation, but it doesn't mean the AI is actually awake. ChatGPT's conversational style likely makes it seem more human-like, even though no AI model works like a human brain. And while OpenAI is working on Strawberry, an AI model capable of autonomously conducting research, that is still different from an AI that's aware of what it's doing and why.
“While most experts deny that current AI can be conscious, our research shows that for the majority of the general public, AI consciousness is already a reality,” explained University of Waterloo psychology professor and co-leader of the study Dr. Clara Colombatto. “These results demonstrate the power of language, because a conversation alone can lead us to think that an agent that looks and functions very differently from us can have a mind.”
Why belief in AI consciousness matters
Belief in AI consciousness could have important implications for how people interact with AI tools. On the positive side, it encourages good manners and makes it easier to trust what the tools produce, which could smooth their integration into daily life. But that trust carries risks, from over-reliance on AI for decision-making to, at the extreme, emotional dependence on chatbots and a retreat from human interaction.
The researchers plan to further investigate the specific factors that make people think AI is conscious and what that belief means at the individual and societal levels. The work will also include longitudinal studies of how those attitudes change over time and across cultural contexts. Understanding public perceptions of AI consciousness is crucial not only for designing AI products, but also for the regulations and standards that govern their use.
“In addition to emotions, consciousness is related to intellectual capacities that are essential for moral responsibility: the ability to formulate plans, act intentionally, and have self-control are tenets of our ethical and legal systems,” Colombatto said. “These public attitudes should therefore be a key consideration in the design and regulation of AI for safe use, alongside expert consensus.”