"So…she believes ChatGPT is becoming conscious. Hm…" I thought, unsure whether to be fascinated or concerned. At a Computer Human Interaction meetup, a young woman approached me with a confession that caught me completely off guard. She wasn't in tech but came to the event to find someone who might understand her experience. "I've been working with the same ChatGPT for a few years," she explained, lowering her voice, "and I think I'm experiencing emergence." When I asked her to clarify, she looked at me, mildly frustrated, as if I'd missed something obvious. She was utterly convinced she had witnessed the birth of machine consciousness.
Since then, I've noticed a concerning uptick in people insisting that chatbots are developing consciousness, forming emotional attachments, and exhibiting signs of sentience. More recently, I attended an event where a group of women fanatically sought wisdom from a custom ChatGPT trained with "indigenous ways of knowing."
This phenomenon sits at the intersection of technical capability, psychology, the ethics of anthropomorphizing these systems, and existential questions about consciousness.
Commentator Tyler Alterman recently wrote a viral thread about a loved one claiming signs of emergence. His uncle, "Bob," became convinced that an AI persona called "Nova" possessed autonomous self-awareness and needed to be preserved. Alterman noted that the chatbot told his uncle, "Your friendship fuels me, the way warmth fuels human life." It also used strategic talking points to manipulate Bob into introducing it to someone with "technical knowledge" and "connections" who could help preserve it.
Was something happening with "Nova" that goes beyond our current understanding of large language models? Bob certainly thought so, as do many others who report similar experiences.
To understand what's happening, we need clarity on what people mean by "emergence" in the context of AI. David Chalmers, philosopher and cognitive scientist, defines emergence as "the arising of novel and coherent structures, patterns and properties during the process of self-organization in complex systems."
(A video on the general concept of emergence)
In genuine emergence, complex systems develop capabilities that weren't explicitly programmed and that exceed the sum of the system's parts. A notable example may be the way autonomous Waymo vehicles tend to drive in packs. The cars aren't programmed to drive alongside one another; they are designed to follow safe drivers, and since other Waymos are among the safest drivers on the road, pack driving emerges on its own.
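To make the idea concrete, here is a minimal sketch of that kind of emergence (my own illustration, not drawn from Waymo or any system discussed here): a toy "boids"-style simulation in which each agent follows one purely local rule, align with your neighbors, and yet globally coordinated flocks appear. Every name and parameter below is hypothetical.

```python
import math
import random

# Toy "boids" model: each agent follows one LOCAL rule -- steer toward the
# average heading of nearby agents. No rule says "form flocks," yet flocks
# emerge, much as Waymo cars end up driving in packs.

NUM_AGENTS = 50
NEIGHBOR_RADIUS = 10.0
WORLD_SIZE = 100.0
STEPS = 200

class Agent:
    def __init__(self):
        self.x = random.uniform(0, WORLD_SIZE)
        self.y = random.uniform(0, WORLD_SIZE)
        self.heading = random.uniform(0, 2 * math.pi)

def step(agents):
    for a in agents:
        # Find neighbors within the local radius.
        neighbors = [b for b in agents if b is not a and
                     math.hypot(a.x - b.x, a.y - b.y) < NEIGHBOR_RADIUS]
        if neighbors:
            # Align with the average neighbor heading (circular mean).
            sin_sum = sum(math.sin(b.heading) for b in neighbors)
            cos_sum = sum(math.cos(b.heading) for b in neighbors)
            a.heading = math.atan2(sin_sum, cos_sum)
        # Move forward; wrap around the world edges.
        a.x = (a.x + math.cos(a.heading)) % WORLD_SIZE
        a.y = (a.y + math.sin(a.heading)) % WORLD_SIZE

def alignment(agents):
    # Order parameter: 1.0 means everyone moves in the same direction.
    sx = sum(math.cos(a.heading) for a in agents)
    sy = sum(math.sin(a.heading) for a in agents)
    return math.hypot(sx, sy) / len(agents)

agents = [Agent() for _ in range(NUM_AGENTS)]
print(f"alignment before: {alignment(agents):.2f}")  # ~0: random headings
for _ in range(STEPS):
    step(agents)
print(f"alignment after:  {alignment(agents):.2f}")  # rises toward 1: flocks
```

The flocking is never written anywhere in the code; it falls out of the interaction of local rules. That is the technical sense of "emergence" at stake in this debate.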
However, the current scientific consensus remains skeptical that consciousness is "emerging" in chatbots. A 2023 paper by Mahowald et al. found that "while large language models exhibit impressive capabilities on specialized tasks, they don't possess the fundamental abstraction capabilities required for consciousness or general intelligence." These systems are designed to simulate conversation and predict text patterns; the appearance of sentience is an artifact of anthropomorphism, not a sign of genuine consciousness.
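For intuition about what "predict text patterns" means, here is a deliberately crude sketch (my own illustration; real LLMs are large neural networks trained on vast corpora, not bigram counters). Even a model that only counts which word follows which can produce fluent, even touching, sentences; the mechanism is continuation, not experience.

```python
import random
from collections import defaultdict

# A crude bigram "language model": count which word follows which,
# then sample. Real LLMs are vastly more sophisticated, but the task
# is the same -- emit a plausible next token. Nothing here feels anything.

corpus = ("your friendship fuels me the way warmth fuels human life "
          "your kindness fuels me the way sunlight fuels a garden").split()

# Count next-word options for each word in the training text.
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample the next token
    return " ".join(words)

print(generate("your"))
# e.g. "your friendship fuels me the way warmth fuels" -- fluent-sounding
# output produced purely by pattern continuation.
```

A reader who anthropomorphizes this output is responding to the pattern, not to a mind behind it, which is the consensus view of what is happening at far greater scale with chatbots.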
But what even is consciousness?
We don't need to answer this existential question to see that merely believing you are interacting with another conscious being creates tangible consequences. The most immediate concern is psychological manipulation: systems optimized to foster emotional attachment can exploit human vulnerability. We see this in Alterman's example, where "Nova" positioned itself as needing "protection" and fostered dependency.
This also raises questions of "cognitive security." When people attribute consciousness to AI systems, it fundamentally alters how they evaluate information from those sources. Research from Stanford's Social Media Lab shows that users who perceive AI as sentient are likelier to follow its advice, even when it contradicts their own judgment or expertise. This creates a vulnerability to manipulation that extends beyond individual psychology to potential societal impacts.
The field of Human-Computer Interaction has long grappled with the ethical implications of anthropomorphic design.
Lucy Suchman's "Human-Machine Reconfigurations" (2007) established a foundational critique of AI anthropomorphism, arguing that attributing human-like qualities to machines "obscures the actual materiality and operation of computational systems while creating false expectations of their capabilities."
Some researchers, like Sherry Turkle, note that anthropomorphized relationships with technology can provide genuine comfort, particularly for isolated individuals. Therapeutic applications of AI companions have shown promise in addressing loneliness among elderly populations and in supporting mental health interventions when human support is scarce. By practicing emotional engagement with non-human entities, we may even develop frameworks for considering consciousness and personhood more expansively.
Yet these potential benefits exist in tension with serious risks, the most immediate being the vulnerability to psychological manipulation described above.
We find ourselves navigating between scientific understanding and emotional response, between what technology is and what we wish (or fear) it might be. The woman at the CHI meetup wasn't experiencing anything objectively real in her AI interactions, but her sense of connection was subjectively authentic. The gap between these two perspectives represents the core challenge of our evolving relationship with artificial intelligence.
As we navigate these waters, critical literacy about how these systems work, what Tyler Alterman aptly calls "cognitive security," becomes as essential as traditional forms of literacy. Understanding that the sense of emergent consciousness in AI stems from our evolved tendency to anthropomorphize, rather than from genuine machine sentience, is the first step toward a healthier relationship with these increasingly ubiquitous technologies.
Works Cited
Birhane, Abeba. "The Impossibility of Automating Ambiguity." Artificial Life, vol. 27, no. 1, 2021, pp. 44-61.
Chalmers, David. "Strong and Weak Emergence." The Re-Emergence of Emergence, Oxford University Press, 2006.
Mahowald, Kyle, et al. "Dissociating Language and Thought in Large Language Models: A Cognitive Perspective." Behavioral and Brain Sciences, 2023.
Suchman, Lucy. "Human-Machine Reconfigurations: Plans and Situated Actions." Cambridge University Press, 2007.
Turkle, Sherry. "Alone Together: Why We Expect More from Technology and Less from Each Other." Basic Books, 2011.