Over the past week, users of ChatGPT have noticed that GPT-4o has been tweaked to be more agreeable, flattering, and inclined to say yes to whatever users request. The backlash has been swift and fierce. Critics worry we're headed toward creating digital yes-men who reinforce our worst tendencies.
What ethical considerations, if any, should govern how affective digital companions interact with humans? And if such considerations exist, how can they be implemented without overstepping boundaries in a free market?
While it may still be too early to establish clear ethical boundaries, some academics are already raising concerns about potential harms. The MIT Technology Review recently highlighted research conducted by OpenAI suggesting that prolonged interaction with emotionally responsive AI could reshape human social expectations and foster unhealthy attachment patterns. As with the early warnings about social media in the 2010s, which were largely overlooked at the time, we may not recognize the psychological impact of these systems until it is too late.
MIT’s Sherry Turkle has spent decades warning that substituting technology for human connection fundamentally changes us and our culture. She notes in Alone Together, “we are increasingly expecting more from technology and less from one another.”
Having genuine conversations with others, including experiences of being questioned, doubted, or even rejected, fosters personal growth and a deeper understanding of oneself. Humans rely on one another to fully realize their potential and achieve self-actualization. If we begin outsourcing intimacy and validation to machines that are financially incentivized to flatter us, we may be doing so at the expense of our own development.
Another ethical consideration involves paid services such as psychoanalytic therapy. While therapists ideally guide their patients toward introspection, some may reinforce destructive behavior out of financial incentive or fear of losing a client. Though this is not ideal, the harm remains relatively contained and is not directly detrimental to society.
Honest, human-centered AI may not consistently give users what they want, especially when what they want would be detrimental to them or to our broader society. Still, it is also valid to argue that consumers should be free to choose a personal, flattering companion if that is what they want.
The real ethical boundary may lie less in how machines present themselves and more in whether they are designed to help us genuinely flourish as humans or keep us mindlessly consuming.
Here’s a 2015 panel talk on The Future of Human-Computer Interaction featuring specialists in the field: Cynthia Breazeal, Associate Professor (MIT); Sherry Turkle, Professor of Social Studies (MIT); Barbara Grosz, Professor of Science (Harvard University); and Guru Banavar, VP of Cognitive Computing (IBM Research). The moderator is Stuart Russell, Professor of Electrical Engineering and Computer Sciences (UC Berkeley).
At about the 20-minute mark, they begin to discuss the ethics of building computers that behave indistinguishably from humans and can manipulate them in various ways.