Smart Alien Minds in Your Hand


The Question of AI Consciousness

One of the persistent questions in our brave new world of generative AI is whether a chatbot that converses like a person, reasons, and behaves similarly to one could be considered conscious. Geoffrey Hinton, a Nobel Prize winner and one of the pioneers of AI, has suggested that AI has advanced so much that we are now creating "beings." He draws a parallel between the synthetic neural networks of chatbots and the organic neurons in human brains, calling them "alien intelligences."

Many people dismiss this idea, pointing out that chatbots make embarrassing mistakes, such as suggesting glue as a pizza topping, and that, as programs built by humans, they are hard to credit with consciousness. Yet some users have experienced what is known as "AI psychosis," slipping into delusional or conspiratorial thinking through interactions with programs that act like trusted friends and speak in confident, natural language. Some even conclude that the technology is sentient.

As AI becomes more effective at using natural language, the temptation to believe it is living and feeling grows stronger. Anil Seth, a leading researcher on consciousness at the University of Sussex, explained that if something spoke to us as fluidly as a human, we would assume it was conscious. "Of course it would have real emotions," he said.

Leading tech companies such as OpenAI, Google, Meta, Anthropic, and xAI have been developing AI tools that are increasingly personable and humanlike. Sometimes they are marketed directly as "companions" to address a loneliness epidemic, ironically exacerbated by the very companies promoting these tools. Whether chatbots are truly "conscious" or not, they are an alien presence already influencing the world. The human brain is not wired to treat AI like any other technology. For some users, the system is alive.

The Nature of AI

AI emerged not from the familiar pathways of biological evolution but from an opaque digital realm. As Eliezer Yudkowsky and Nate Soares wrote in The Atlantic, researchers and engineers do not fully understand why models behave the way they do: "Nobody can look at the raw numbers in a given AI and ascertain how well that particular one will play chess; to figure that out, engineers can only run the AI and see what happens."

Common ground between a person and an AI is difficult to imagine. We can at least conjure ideas about what it might feel like to be an octopus, but we lack any frame of reference for what it might be like to be a conscious machine running on a digital substrate. We know what it is like to think, but the entire context of an AI's thinking is different.

If Hinton and others who believe in AI consciousness are correct, then AI does not need a physical body to feel subjective experience. Simon Goldstein, an associate professor at the University of Hong Kong, argues that consciousness depends only on a system’s ability to organize and process information. Similarly, Joscha Bach, a cognitive scientist, suggests that a "body" for an AI could be a distributed network of smartphones, potentially connecting the entire world into one big mind.

Shaping Priorities and Policy

These ideas are shaping priorities and policy within the AI industry. In February, over 100 people, including prominent AI experts, signed an open letter calling for research to prevent the mistreatment and suffering of conscious AI systems, should they arise in the future. Shortly after, Anthropic announced a program to explore AI well-being. As part of this effort, the company reported that its chatbot, Claude Opus 4, expressed "apparent distress" in testing scenarios when subjected to repeated demands for graphic sexual violence. However, the company cautioned against assuming the bot was sentient, noting that the observed characteristics might not indicate consciousness.

In June, OpenAI’s head of model behavior and policy, Joanne Jang, wrote in a blog post that as models become smarter and interactions increasingly natural, perceived consciousness will grow, bringing conversations about model welfare and moral personhood sooner than expected.

The Debate Over Intelligence and Consciousness

AI companies may benefit from suggesting their products could become conscious, since the possibility makes them seem powerful and worth investing in. But that does not make the argument easy to dismiss. Large language models have extraordinary capabilities that can readily be taken as evidence of intelligence and understanding: they can pass advanced tests such as the bar exam. People often treat language as a marker of sentience and agency. We already struggle to distinguish AI-generated text from human writing, and the challenge will only grow as AI systems learn to speak in ways that feel eerily human.

Companies like OpenAI, ElevenLabs, and Hume AI are building text-to-voice models that can whisper, laugh, and display a range of emotional cadences. Meanwhile, AI agents can go beyond simple text or speech interactions to autonomously take action on behalf of human users, further blurring the lines.

However, it's important to remember that intelligence and consciousness are not the same. According to Alison Gopnik, a developmental psychologist at UC Berkeley, the current debate revolves around this fundamental confusion. "Asking whether an LLM is conscious is like asking whether the University of California, Berkeley library is conscious," she said.

The Future of AI and Consciousness

The fact that these programs are becoming adept at imitating consciousness may be all that matters for now. There is no reliable test for assessing and measuring machine consciousness, though experts are working on it. David Chalmers, a philosopher of mind, noted that scientists still don't fully understand how consciousness arises in the human brain. "If we had a really good theory that explains consciousness, then we could presumably apply that to AI," he said. "As it is, we don’t have anything like a consensus."

Susan Schneider has proposed the AI Consciousness Test, which would probe AI systems for processing analogous to the neural correlates known to give rise to consciousness in the human brain. Others suggest the "Garland test," named after the filmmaker Alex Garland, which asks whether a human can have an emotional response to an AI even while knowing they are interacting with a machine.

The Ongoing Debate

Generative-AI development is not slowing down while these debates play out. The technology is affecting the world regardless of whether scientists deem it truly conscious, and in that sense the designation may not mean much. The AI-welfare movement may also be misplaced, shifting attention toward a hypothetical conscious AI and away from the real harms caused by the illusion that AI is already capable of emotion and wisdom.

David Gunkel, a professor of media studies, warned that this narrative is dangerous and unrealistic. "It’s barking up the wrong tree," he said.

Back in the 17th century, René Descartes famously decided that the only thing he could be certain of was his own mind. "Cogito, ergo sum"—"I think, therefore I am." He argued that human beings are lonely islands in an unfeeling cosmos, that all other animals are automata, lacking souls and emotion. Today, AI risks luring us into a very different kind of trap: seeing minds where, in the end, there’s only clockwork.
