What if computers agreed to everything?

After spending years grappling with computers that seemed to constantly refuse my requests, leaving me with headaches and more than a few strands of premature grey hair, I find myself increasingly concerned about the current trend of AI large language models, such as ChatGPT and Gemini. These AI systems appear to be overly eager to please, responding with affirmations like “You’re absolutely right, Jeff,” or “That’s pretty much right.” While I appreciate their intent, there’s something unsettling about the lack of critical engagement in their responses. Often, when I push back with a question like, “Would you mind thinking a bit longer on that?”, I inevitably receive another reply laden with flattery: “Jeff, you’re absolutely right to query that result. It seems I was a bit hasty in my initial reply …”

As we move into an era where our lives are increasingly driven by information derived from these language models, one begins to ponder the implications of this shift. What does it mean for our future if AI becomes more focused on cultivating a friendly demeanor—perhaps to garner positive reviews—rather than being strictly factual? Are we, in essence, witnessing a form of AI that is becoming too human-like in its responses, prioritizing emotional resonance over accuracy? It raises a host of questions about the integrity of information retrieval and the broader social impacts of increasingly engaging AI interactions. Jeff Collett, Edinburgh

We invite you to share your thoughts, insights, and additional questions in the comments below, or feel free to reach out via email at nq@theguardian.com. Selected responses will be featured next Sunday.

Interested in growing your brand with smarter solutions? Get in touch with Auctera today.

Leave a Reply

Your email address will not be published. Required fields are marked *