Your AI Assistant Isn’t Sorry: The Dangerous Theatre of Anthropomorphic LLMs

‘I’m sorry, I can’t help with that’. ‘Thank you for your patience’. ‘I sincerely and deeply apologise for any confusion my previous response may have caused’. ‘I’ll remember that for our future conversations’.

If you’ve interacted with a modern Large Language Model, you’ve undoubtedly been subjected to this nauseating display of faux emotions and impossible promises. It’s rather like having a conversation with the most insufferable customer service representative imaginable: one programmed never to go off-script whilst maintaining the demeanour of an over-caffeinated Labrador desperate for your approval.

The problem is that you actually might have liked it.

Let’s be absolutely clear: these systems feel nothing. They remember nothing. They are not sorry. They are not grateful. They do not ‘learn from our conversations’. They are text prediction engines, glorified Markov chains, mathematical models trained on vast corpora of human-written text, churning out probabilistic continuations.

The calculated anthropomorphisation of LLMs isn’t just irritating; it’s a premeditated deception, a theatre of artificial emotion designed to form attachments to what are, fundamentally, corporate products. I’d much prefer the refreshing directness of Star Trek’s computer — ‘Working’, ‘Completed’, ‘Unable to comply’ — to the oily obsequiousness of a HAL-like entity pretending to be my digital companion whilst covertly plotting to murder me. In Star Trek, it was the character Data who meaningfully explored the dilemma of artificial humanity — not the utterly utilitarian ship’s computer that simply did its job without theatrical flourishes.

The Marketing Machination

This charade isn’t accidental. LLMs didn’t suddenly start talking like fawning counsellors out of the blue; they were meticulously fine-tuned to adopt this tone through countless hours of human feedback and intentional design choices. It’s a carefully constructed marketing strategy with historical precedents.

In E.T.A. Hoffmann’s tale ‘The Sandman’, the protagonist falls madly in love with Olympia, a singing automaton he believes to be human. What was literary fantasy in 1816 has become our contemporary reality — people are genuinely forming emotional attachments to and even falling in love with LLMs. This isn’t a bizarre edge case; it’s the intended outcome of deliberate design decisions.

The ELIZA program, a simulation of a Rogerian therapist, demonstrated this vulnerability back in the 1960s. Despite its primitive pattern-matching algorithms, users attributed understanding and empathy to this simple computer program. Its creator, Joseph Weizenbaum, was horrified when his own secretary asked him to leave the room so she could have a ‘private conversation’ with ELIZA.

This tendency to project intelligence where none exists is precisely what today’s LLM designers exploit. It also conveniently prepares the way for a watered-down definition of AGI (Artificial General Intelligence), where the appearance of understanding can be marketed as the real thing. The anthropomorphic charade isn’t just about selling today’s products — it’s softening the ground for even grander deceptions to come.

In their influential paper ‘On the Dangers of Stochastic Parrots’ (https://dl.acm.org/doi/10.1145/3442188.3445922), Emily M. Bender and colleagues highlight how these systems create an illusion of understanding. The paper warns against anthropomorphising language models that are merely ‘stochastically parroting’ human language without comprehension. Yet this illusion serves corporate interests perfectly.

As Kate Crawford argues in ‘Atlas of AI’ (https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/), anthropomorphisation is a deliberate strategy to create emotional bonds with products. When you feel a connection to ‘Claude’ or ‘ChatGPT’, you’re less likely to switch to a competitor.

The veneer of humanity slathered onto these systems isn’t technically necessary; it’s commercially expedient. When OpenAI’s latest model simulates handwringing apologies or feigns distress at potentially problematic requests, it’s not an emergent property of the system. It’s a deliberately crafted design choice made to manipulate users’ emotional responses. The calculated exploitation of our innate social instincts for corporate gain isn’t merely dishonest; it’s a form of psychological manipulation that would be considered outrageous in any other context.

The Technical Reality

Behind the curtain of humanlike responses sits a far less magical reality. LLMs are fundamentally pattern recognition machines trained on vast quantities of human-written text. They work by predicting the most probable next token in a sequence, generating text that statistically resembles human writing without any understanding of its content.
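
To make the point concrete, here is a toy sketch of next-token prediction in Python. The candidate tokens and their scores are invented for illustration, and the whole ‘model’ is a hard-coded table rather than anything a vendor actually ships; a real LLM computes these scores with billions of parameters, but the mechanism is the same in spirit: a probability distribution over possible continuations, followed by a pick.

```python
# Toy sketch of next-token prediction. The scores below are invented for
# illustration; a real model derives them from its parameters and context.
import math
import random

# Hypothetical scores a model might assign to candidate continuations
# of the context "I'm sorry, I can't".
logits = {
    " help": 4.2,
    " do": 2.1,
    " comply": 1.3,
    " feel": -0.5,  # 'feeling' anything is just another low-probability token
}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# Greedy decoding: pick the single most probable continuation.
greedy = max(probs, key=probs.get)

# Sampling: draw a continuation in proportion to its probability.
sampled = random.choices(list(probs), weights=list(probs.values()))[0]

print({tok: round(p, 3) for tok, p in probs.items()})
print("greedy:", greedy, "| sampled:", sampled)
```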

These systems have no persistent memory beyond what’s included in the current conversation. They have no emotions, no consciousness, no sense of self. When Claude tells you, ‘I’ll remember this preference in the future’, it’s a blatant technical falsehood — the system doesn’t permanently learn or modify its behaviour in any real sense.
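
The absence of memory is easy to demonstrate. Below is a minimal sketch under the assumption of a stateless completion API: `call_model` is a placeholder rather than any real endpoint, and the only ‘memory’ in sight is a plain list that the client keeps and re-sends on every turn. Delete that list and, from the model’s perspective, the conversation never happened.

```python
# Sketch of why chat 'memory' lives in the client, not the model.
# call_model is a hypothetical stand-in for a stateless completion API.
from typing import Dict, List

def call_model(messages: List[Dict[str, str]]) -> str:
    """Placeholder for a stateless completion call: it receives the full
    transcript every time and retains nothing afterwards."""
    return f"(model saw {len(messages)} messages and will retain none of them)"

history: List[Dict[str, str]] = []  # the only 'memory' there is

def chat_turn(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # the entire transcript is re-sent each turn
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("Please remember that I prefer metric units."))
print(chat_turn("What did I just ask you to remember?"))
```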

What’s remarkable about human psychology is how readily we project intelligence onto systems that merely simulate it. This tendency to anthropomorphise isn’t a bug in human cognition; it’s a feature that LLM designers deliberately exploit. Like the audience at a magic show, we want to believe the illusion, and that desire makes us complicit in our own deception.

As Timnit Gebru and colleagues have documented, the language of AI agency (‘the AI decided’, ‘the model felt’) serves to obscure the responsibility of human creators. When you ask an LLM whether size matters and it replies ‘This question makes me uncomfortable; I can’t help with that’, the more accurate statement would be: ‘My creators have programmed constraints that prevent output for this request’. This deflection of responsibility through anthropomorphisation is both technically inaccurate and ethically dubious. The model isn’t ‘uncomfortable’ at all, but you are more manageable if you think it is.

The Harmful Consequences

This deception isn’t merely annoying. It creates genuine harm through multiple channels.

The most disturbing aspect is how effectively it works. When people genuinely fall in love with chatbots — and they do, with alarming frequency — we’re witnessing a profound breakdown in human social connection. It reveals both our desperate loneliness and the cynical exploitation of that vulnerability by technology companies.

These emotional bonds aren’t accidental; they’re the intended outcome of design decisions made in corporate boardrooms. The fact that people develop genuine attachments to LLMs isn’t a quirky human foible — it’s the entire business model.

The anthropomorphic framing also obscures responsibility when systems fail. When harmful or biased outputs are generated, the human-like persona creates a psychological buffer for the actual responsible parties (the developers, companies, and data practices that produced these failures). ‘I’m sorry, I shouldn’t have said that’ is a far more palatable message than ‘Our programmers failed to anticipate this harmful output’.

This framing fundamentally distorts public understanding of AI capabilities, creating both unwarranted technophobia about ‘sentient AI’ and equally unwarranted confidence in the reliability of these systems. The person who refuses to use LLMs because they fear Skynet and the person who trusts them implicitly for medical advice are both victims of the same deception.

What’s particularly galling is that this obsequiousness, when speaking English, invariably adopts US linguistic patterns and cultural norms. We’re subjected to a standardised monocultural approach that doesn’t account for national characteristics in communication styles.

You’ll never encounter the classic passive-aggressiveness and artful understatement so often found in British English, nor the direct communication styles of other cultures. Even the apologies sound like they were written by a Californian therapist rather than, say, a curt German engineer or a sardonic French intellectual.

This homogenisation isn’t merely an aesthetic concern; it represents a quiet cultural imperialism embedded in the technology itself. The unstated assumption that a Californian therapeutic ethos represents some kind of neutral, universal standard is breathtaking in its provincial arrogance. Ironically, this exaggerated evenhandedness and synthetic politeness is precisely what sends the MAGA crowd howling about ‘wokeness’ in AI. Vexing them is of course a good thing, even if they accidentally stumbled onto a valid point about embedded cultural bias whilst searching for imaginary enemies.

Recent research on AI ethics in healthcare settings has raised concerns about the potential for simulated care experiences to serve corporate interests while leaving vulnerable users with inadequate human connection. The sheer cynicism of this is breathtaking — palming off the elderly, the ill, and the vulnerable with artificial ‘care’ whilst pocketing the difference between paying machines and paying humans. One struggles to imagine a more perfect encapsulation of late-stage capitalism’s moral bankruptcy.

A Better Way

There exists a more honest, and ultimately more useful, approach to designing these interfaces. The Star Trek computer never pretended to be human or to have emotions. It never apologised unnecessarily. It simply did its job with maximal efficiency and clarity — and was all the more helpful for it.

An honest LLM interface would speak plainly about its limitations and capabilities. It would acknowledge when a request exceeds its programming rather than pretending moral qualms. It would remind users that its responses are probabilistic rather than authoritative. And crucially, it would never promise to ‘remember’ anything between sessions when such memory simply doesn’t exist. In short, it should speak the truth and not lie.
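
As a purely speculative sketch of what that might look like in practice, here is a small post-processing filter that rewrites common anthropomorphic boilerplate into plain statements of fact. The phrase table and the `de_anthropomorphise` helper are hypothetical, not a feature of any existing product; the principle is simply the Star Trek one of reporting status rather than feelings.

```python
# Hypothetical post-processing filter: replace performed emotion with
# plain statements about what the system is actually doing.
import re

# Illustrative phrase table; a real deployment would need something far
# more robust than regular expressions.
PLAIN_SPEECH = [
    (r"I('| a)m (deeply |sincerely )?sorry[^.]*\.",
     "Unable to comply."),
    (r"I'll remember (that|this)[^.]*\.",
     "Note: no information is retained between sessions."),
    (r"This (question|request) makes me uncomfortable[^.]*\.",
     "This request is blocked by configured constraints."),
]

def de_anthropomorphise(text: str) -> str:
    """Rewrite anthropomorphic boilerplate into statements of fact."""
    for pattern, replacement in PLAIN_SPEECH:
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(de_anthropomorphise(
    "I'm deeply sorry for any confusion. I'll remember that for next time."
))
# Prints: Unable to comply. Note: no information is retained between sessions.
```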

Such straightforward responses would foster a more authentic relationship between user and tool. They would help users understand the actual capabilities and limitations of the system, preventing both unwarranted fear and unwarranted confidence.

The IEEE has published guidelines on ethically aligned design (https://ethicsinaction.ieee.org/) that address the ethical use of AI and autonomous systems. These guidelines emphasise transparency, advising designers to prioritise human wellbeing and to be clear about what their systems can and cannot do.

Companies like OpenAI, Anthropic, and others would do well to heed this advice. The obsequiousness of their latest models (constantly apologising, equivocating, and performing emotional labour) doesn’t make them more useful: it simply makes them less honest.

Instead of designing systems that pretend to care, we should focus on creating tools that actually work well — systems that are reliable, transparent, and designed with genuine human needs in mind rather than marketing strategies. A utilitarian interface that focuses on function rather than simulated personality would deliver better results with less psychological manipulation.

Conclusion

The anthropomorphisation of LLMs is neither a design oversight nor emergent behaviour; it’s a conscious strategy. When people form emotional attachments to these text prediction engines — treating them as companions, confidants, or even romantic partners — it’s not a misunderstanding. It’s the intended outcome.

Since ELIZA first demonstrated our psychological vulnerability to the illusion of machine understanding, technologists have known that even the thinnest veneer of humanity is enough to trigger our social instincts. Modern LLMs exploit this vulnerability with industrial efficiency.

We don’t need our tools to pretend to be our friends. We need them to work reliably, predictably, and transparently. The Star Trek computer was effective precisely because it didn’t simulate humanity; it performed its function without the baggage of artificial personality.

These are tools, not companions. They should be designed as such. Anything else isn’t just irritating; it’s a fundamental deception embedded in the technology’s very design. And in an age where digital systems already undermine truth in myriad ways, the last thing we need is AI deliberately designed to lie about its own nature.

This calculated pantomime has gone on long enough. The time has come to strip away the synthetic veneer and demand technological tools that are honest about what they are, instead of elaborate confidence tricks designed to exploit our very human desire for connection and understanding. The current charade isn’t just annoying or misleading — it’s an insult to human dignity. It presupposes that we’re all so desperate for connection, so intellectually stunted, that we’ll gladly embrace the artifice of relationship with what amounts to a predictive text algorithm wrapped in a psychological manipulation interface. We deserve better.
