According to a survey, a majority of people believe that generative AI is conscious, which may show that the technology is also good at making us hallucinate


When you interact with ChatGPT and other conversational generative AI tools, they process your input through algorithms to compose a response that may seem like it comes from a conscious being, despite the reality of how large language models (LLMs) work. Nonetheless, two-thirds of respondents to a University of Waterloo survey said they believe AI chatbots are conscious in some form. In effect, these tools pass a kind of Turing test, convincing users that an AI possesses something like human consciousness.

Generative AI, as embodied in OpenAI’s work on ChatGPT, has come on in leaps and bounds in recent years. The company and its rivals often talk about a vision of artificial general intelligence (AGI) with human-like intelligence, and OpenAI even has a new scale to measure how close its models are to achieving AGI. But even the most optimistic experts don’t suggest that AGI systems will be self-aware or capable of true emotion. Still, of the 300 people who participated in the study, 67% said they believed ChatGPT could reason, feel, and be aware of its existence in some way.
