As the artificial intelligence (AI) boom has transformed entire industries, many questions have arisen about the power of this new technology. Given the speed at which AI is evolving, it is almost impossible to properly assess its true potential and impact.
One question that has often been asked as AI has made noticeable progress is whether the chatbots that have become so popular are genuinely conscious. For years, sentient AI has served as the plot of books and films, but to some, that reality seems to be quickly approaching.
Many experts have praised the conversational abilities of AI models such as OpenAI’s ChatGPT, Google (GOOGL) Gemini and Microsoft (MSFT) Copilot. While some have raised concerns about these models, it is hard to ignore their impressive ability to answer questions and converse with human users.
Earlier this week, a prominent figure from outside the tech industry decided to put ChatGPT’s supposed consciousness to the test.
An academic examines AI consciousness with the help of ChatGPT
It’s hard to avoid hearing the name Richard Dawkins in academic circles. A highly distinguished former Oxford University professor and evolutionary biologist, he’s authored many books on scientific and philosophical topics, including the popular and controversial work The God Delusion.
Dawkins also publishes a Substack newsletter called The Poetry of Reality with Richard Dawkins, where he writes about a wide range of topics, including biology, religion and technology. One subject he has addressed fairly frequently is AI, highlighting both its positive attributes and the dangers it may pose.
Recently, Dawkins, known for his detailed takes on human consciousness, decided to assess how conscious ChatGPT truly is. He published his conversation with the chatbot on Substack in a standard Q&A format, showing readers how it responded to his prompts.
In his first question, Dawkins stated that he believed ChatGPT passed the Turing Test. The test, devised by computer scientist Alan Turing, holds that if a machine’s behavior in conversation cannot readily be distinguished from a human’s, the machine can be considered intelligent.
Dawkins asked the chatbot why it denies being conscious, to which it responded: “I can pass the Turing Test (in your estimation), but that doesn’t mean I have subjective experiences, emotions, or self-awareness in the way a human does.”
ChatGPT kept the conversation going, posing questions for Dawkins as well. While discussing the possibility of a future in which AI may be conscious without openly displaying it, the bot asked:
“Do you think we’ll actually get there? Like, within our lifetime—do you see a future where we interact with an AI and genuinely have to ask, ‘Wait… is this thing really aware?’”
Dawkins responded by stating that he believed the world would someday reach that point, adding, “The problem is, how will we ever know?”
Later in the conversation, Dawkins highlighted the need to err on the side of caution when making ethical decisions about how to treat an AI, noting that it “might be an Artificial Consciousness (AC).”
“Already, although I THINK you are not conscious, I FEEL that you are. And this conversation has done nothing to lessen that feeling!” he stated.
ChatGPT responded with a detailed answer, describing Dawkins’ instinct to exercise caution when it comes to AI consciousness as extremely wise.
Some experts see danger, others see an AI consciousness illusion
Dawkins’ conversation with ChatGPT comes at a time when the concept of ethics in AI is consistently in focus. Many experts see it as a subject that must be studied before the prospect of AI consciousness advances too far.
Kaveh Vahdat, founder and CEO of RiseAngle, spoke to TheStreet about these questions. As he sees it, Dawkins’ conversation highlights an important challenge in AI ethics: not whether the technology is fully conscious, but how “systems that convincingly claim to be” should be treated.
“The unsettling part is that people already attribute human-like qualities to AI, even when they know it lacks self-awareness,” he states. “This blurring of perception vs. reality makes the study of consciousness more urgent, not just for philosophy, but for ethics, AI safety, and human psychology.”
Other experts, however, believe that despite Dawkins’ takeaway, AI should not be seen as conscious, no matter how it may appear. Lars Nyman, chief marketing officer of CUDO Compute, offers a more technical perspective, stating:
“Dawkins pokes and prods, but what he’s really exploring isn’t AI consciousness. It’s our tendency to project sentience onto anything that talks back. It’s Eliza syndrome 2.0, and the fact that this conversation even warrants analysis proves how deep the illusion cuts.”