
Eager managers, unusual errors, and an impending challenge: employees training AI to take over their roles.

As artificial intelligence continues its rapid expansion, many workers have expressed feelings of being “devalued” and voiced concerns about a decline in work quality. This sentiment is prevalent across various industries, illustrating the profound impact AI is having on job roles and responsibilities.

According to a recent analysis by the International Monetary Fund, it is projected that AI will influence approximately 40% of jobs globally. Kristalina Georgieva, the head of the IMF, described this phenomenon as “like a tsunami hitting the labour market.”

The Guardian has gathered firsthand accounts from workers who have been involved in training AI models that replace or modify some of their tasks. These experiences shed light on the emotional toll and practical implications of integrating AI into the workplace.

The editor

‘I now earn less while working longer correcting the mistakes of AI editors’

Christie, 55, is an editor based in the UK, specializing in academic papers for authors whose first language is not English. Excited by a project aimed at training new “assistant editors,” she soon realized that these assistants were actually AI systems designed to take over portions of her workload, which ultimately led to a pay decrease.

“Initially, I thought they were simply training more human editors to help manage the high volume of work,” Christie recalls. “However, my role shifted to correcting the numerous errors made by these AI assistants. They often produced nonsensical edits, like inserting random full stops or misnaming countries.”

Despite her best efforts to point out these issues, the mistakes persisted, and in some cases, even worsened.

“It was only later that I discovered these so-called editors were AI,” she reveals. “The company announced that from then on, all work would be pre-edited by this AI, which meant I would earn less money for doing what was essentially correcting AI’s mistakes—tasks that now took me longer than editing from scratch.”

Feeling devalued and betrayed by her employer, Christie notes, “Many of my colleagues have left, but I’m stuck in this toxic situation since they offer the highest volume of work, and I need to pay my bills.”

The palliative care consultant

‘AI struggled with patients’ imperfect phrasing’

Mark Taubert, a palliative care consultant, was initially enthusiastic about a pilot project involving a chatbot designed to help patients manage the complexities associated with metastatic cancer.

In this role, Taubert, 51, collaborated with his team to develop the chatbot, which was programmed to utilize patient data and pre-existing information leaflets. “We asked patients for all their queries and incorporated important guidelines,” he explains.

However, while the chatbot performed adequately about 50% of the time, it struggled with the nuances inherent in human communication. “Patients don’t always communicate perfectly and might use incorrect names for medications, leading to misunderstandings,” he shares.

Despite the bot’s shortcomings, Taubert views the technology as a helper rather than a replacement. “It could eventually take over administrative duties, allowing me to focus more on patient interaction,” he adds, indicating his comfort with new technologies and their integration into his work.

The translator

‘The overall effect is a decline in quality’

Philip, a translator living in New Jersey, has found himself training AI translation engines that aim to replace human translators, yet he notes their ongoing inadequacy after four years of development. “Initially, they produced laughable results,” Philip shares. “Even though the quality has improved, they remain unreliable and require meticulous corrections for accuracy.”

He explains that while AI can provide a rough translation, it often fails to capture nuances or technical specifics, ultimately leading to a decline in the overall quality of translations. “I still find myself going through AI-generated text word by word, which defeats the purpose of using it for efficiency.”

For Philip, the looming question remains: “How long until I am no longer needed in my current role?”

The marketing writer

‘Training your robot replacement feels like digging your own digital grave’

Joe, 50, an award-winning marketing writer, found himself in an unsettling situation when his employer started implementing AI tools under the guise of enhancing productivity. Assured that his job was secure, he spent months developing workflows for AI processes, only to be unexpectedly laid off shortly afterward.

At his exit interview, he was told the decision had nothing to do with his performance. “The timing was certainly suspicious,” Joe reflects. “It felt like digging my own digital grave, training my replacement.”

Now, he is considering a pivot to sales, uncertain of his future in this rapidly changing job landscape.

The mathematician

‘Work will look completely different in 10 years’ time, perhaps even sooner’

Filippo, a 44-year-old associate professor of mathematics, is currently collaborating with startups on AI models that could revolutionize the field. These tools are intended to assist in theorem proving with minimal human input.

“The advancements have been significant in just three months,” he observes. “It’s evident these tools are evolving rapidly, suggesting that mathematicians’ roles may dramatically change in the near future.”

While he feels secure in his current position, Filippo notes that younger mathematicians might be more vulnerable to these changes. “If I were just beginning my career, my outlook would be vastly different.”

*Names have been changed

