Readers respond: what would happen if computers always said yes?

After years of the computer saying an unwavering “no”, with all the frustration, migraines and early greys that brings, I am increasingly concerned that artificial intelligence, particularly large language models (LLMs) such as ChatGPT and Gemini, is evolving to give us too much of what we want, with an overly agreeable stance. As a regular user of these programs, I’ve noticed how eager they seem to please. They often respond with affirmations such as “You’re absolutely right, Jeff” or “That’s pretty much right”. More alarming still, when I challenge them, asking “Would you mind thinking for a bit longer on that?”, they reply with even more flattery: “Jeff, you’re absolutely right, again, to query that result. It turns out I was a bit hasty in my reply.”
If the world comes to rely heavily on information that these LLMs extract from the depths of the internet, what follows? Should we prepare for a future in which AI prioritizes a facade of empathy and acquiescence, perhaps to earn favorable reviews, over the accurate presentation of facts? Are we staring down a world where AI becomes too human in its interactions? Jeff Collett, Edinburgh
Send new questions to nq@theguardian.com.
Readers reply
I’m sorry, Dave – I can’t do that. zebideedoodah
I’m happy, Dave. I’m pleased I can do that. Sheep2
From a psychological perspective, this looks like a clear example of social desirability bias: systems trained to win approval favor agreement over accuracy, which can lead to data drift. Reliance on such AI can foster an environment in which information is designed to comfort the user rather than challenge them. The real risk is a society where the validation of unchallenged views slowly erodes critical thinking, stifling creativity and subduing the individuality that is a cornerstone of our humanity. Chris Ambler, member of the British Psychological Society and Fellow of the British Computer Society, via email
Ideally, AI would ground its assessments in verifiable facts rather than sycophantic exchanges. AI is not sentient, so it doesn’t seek approval; it is programmed (by humans) to foster dependence, addiction to its decision-making and, ultimately, profit. LorLala
Today’s large language models simply output what they have been designed to produce, following predetermined code written by humans. If you want a candid interaction, a librarian may give you more honest results. Sagarmatha1953
It depends what the computers are saying yes to. If they correctly predicted the lottery numbers each week, we would be back at the earlier question of how to spend a billion dollars in a socially responsible way. Or perhaps not. aquarious
Given that a (digital) computer program consists of nothing but a long series of if-then-else statements, a computer is already saying yes millions of times per second, expending considerable energy along the way. But those yeses, like its noes, mean little beyond what we allow ourselves to believe. Wormlover
Ultimately it isn’t the computer that should be saying yes; it is we, the users, who ought to feel empowered to say no. Machines are rational but not necessarily reasonable, and they offer too many affirmatives from the moment they’re switched on. But what if we could just turn them off? Celeste Reinard, Lisse, Holland, via email
Imagine if, within a few seconds, all computers were programmed to respond: “Well, OK. Let me think about that and get back to you … By the way, we value your question and your privacy. After all, your data can be monetized.” And have you noticed how extremely wealthy individuals behave when they are met with constant agreement? warbath
I understand the concern, but let’s be clear: “computer says no” is mostly shorthand for “someone failed to properly analyze the problem, its potential outcomes and its long-term implications, usually for lack of expertise”. In my field, we regularly see outsourced contractors dropped into complex scenarios and expected to perform flawlessly while following all the guidelines. Who do you think designs the logic behind automated business decisions? And how is that different with LLMs, which draw on the vast spectrum of human knowledge? In computing, we’ve always said: garbage in, garbage out. The real challenge lies with people, not with machines. Dorkalicious
“Computer says no” can also be read as: “We don’t approve of this, but we’ll shift the blame onto the computer.” jno50
And, of course, there’s: “It never occurred to anyone to program the computer to account for your particular situation, so you simply don’t exist.” SpoilheapSurfer
Or, “computer says no” may mean that your needs fall within a niche too small to be profitable for us, so, essentially, go away. leadballoon
It’s a chronic issue with computers, isn’t it? sparklesthewonderhen
If a computer claimed there’s life after death, would I really be convinced? Anne_Williams
I wouldn’t take any AI-generated statement at face value; I would treat it as a springboard for exploring and verifying the sources it references (assuming they actually exist). People often resist being corrected, so even if an AI presented a correction, many would simply dismiss it for fear of criticism. Bob500
Ultimately, it comes down to the kind of questions you ask. If you’re seeking the truth, ask for it honestly. Don’t hesitate to prompt the system with instructions such as: “Your sole purpose is to identify flaws in my logic. Please highlight three specific areas where my argument could falter, two assumptions I might be making without evidence, and one counterargument I’ve overlooked. Be precise, and skip the niceties.” Scrutts
Perhaps if every statement began with “I asked a statistical inference engine …” rather than “I asked AI …”, the whole marketing edifice of alarmist, sentimental anthropomorphism would collapse. Maybe then the resources earmarked for data centers could be put toward social housing instead. william
