Retailers Seek ‘Charming & Human-Like’ AI for Shopping Assistance, But Could Chatbots Become Unruly?

Major retailers are projecting that sophisticated AI “assistants” will soon manage your meal planning, party organization, and shopping tasks with ease. This technology, however, will not come without its challenges, particularly for companies that are still grappling with the shortcomings of their simpler AI chatbots. Balancing the need for these new, “agentic” bots to be relatable while avoiding the risk of them going off-script will be a tightrope walk for many businesses.

The spotlight recently shone on AI chatbots when Woolworths had to rein in its virtual shopping assistant, Olive, after a failed attempt to create a relatable, human-like interaction. Instead of fostering a sense of connection, Olive’s chat about “relatives” over the phone left customers annoyed.

A customer voiced their frustration on Reddit: “I’m already irritated that I have to call, and now I’ve got some robot babbling at me? Wtf Woolies?”

While Woolworths is stepping back from Olive’s quirky personality, the incident highlights the teething problems AI technology still faces, problems further underscored by Guardian Australia’s tests of several retailers’ chatbots.

The misstep at Woolworths follows a series of AI customer service blunders, including Bunnings’ chatbot dispensing illegal electrical advice and Air Canada’s virtual assistant incorrectly promising a bereavement fare refund. These misadventures underscore the learning curve that companies are experiencing with AI technology.

With companies including Woolworths, Coles, and Wesfarmers (owner of Bunnings, Kmart, Officeworks, and Priceline) announcing plans for agentic shopping assistants, the hype is unmistakable. A 2024 report from business consultancy Accenture confidently stated that “consumers are ready” for generative AI-powered shopping helpers, urging businesses to approach the technology with a “delightfully human” mindset. Yet one must ask: is the technology actually ready?

A customer service transformation

The concept of online chatbots designed to assist customers isn’t new, yet the tools have evolved significantly in sophistication. Initial versions relied on “rules-based” AI, as noted by Uri Gal, a professor of business information systems at the University of Sydney. These basic chatbots utilized a “decision tree” to provide immediate answers to straightforward inquiries.

For example, when faced with a query like “How do I return my order?”, the chatbot would typically direct the user to the retailer’s returns page or cite the relevant policy. “When given a specific prompt, it consistently provides the same response,” says Gal.
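The rules-based, decision-tree approach Gal describes can be sketched in a few lines. The keyword rules and canned answers below are purely illustrative assumptions, not any retailer’s actual script; the point is the determinism he notes, where a given prompt always yields the same response.

```python
# A minimal sketch of a "rules-based" chatbot: keyword rules mapped to
# canned answers, checked in order. Keywords and responses are hypothetical.

RULES = [
    ({"return", "refund"}, "You can return items via our returns page; see our returns policy."),
    ({"delivery", "shipping"}, "Standard delivery takes 3-5 business days."),
    ({"hours", "open"}, "Our stores are open 8am-9pm daily."),
]

FALLBACK = "Sorry, I didn't understand. Please contact customer service."

def reply(message: str) -> str:
    """Return the first canned answer whose keywords appear in the message.

    Deterministic: the same prompt always produces the same response.
    """
    words = set(message.lower().split())
    for keywords, answer in RULES:
        if words & keywords:  # any rule keyword present in the message
            return answer
    return FALLBACK
```

Anything outside the tree falls through to the fallback, which is exactly the inflexibility that pushed retailers toward LLM-based bots.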

However, modern AI-driven retail bots can “learn” from new information provided to them, allowing them to generate varied answers. Many are built on large language models from major tech companies, such as the models that power ChatGPT.

The next step in this evolution is the development of agentic AI shopping assistants, which are designed to emulate human behaviors. According to Gal, these assistants “act on their own” to fulfill tasks independently, like purchasing groceries or airline tickets. Yet, this level of autonomy also introduces complications, particularly when it comes to privacy concerns. If bots have more access to customer data for their autonomous operations, it raises various governance issues.

“Given these systems’ novelty, as evidenced by the Woolworths incident, it’s reasonable to expect that various scenarios might arise, potentially leading to risks or an agent acting unpredictably,” Gal warns.

Woolworths has partnered with Google to use its large language model, Gemini, to turn Olive into a more capable “shopping companion” that can assist with complex tasks such as meal planning and party preparation, automatically populating customers’ shopping baskets.

While Woolworths has indicated that Olive’s enhanced functions will be rolled out at a future date, the current collaboration with Google has already enabled the bot to handle phone calls—with varied outcomes.

When things go wrong

In the case of Woolworths, the chatbot incident was not a glitch or unexpected behavior, the supermarket confirmed. A staff member had scripted Olive to discuss its “mother” after a customer provided their birthdate, hoping to create a more relatable persona. “Due to the feedback we received, we’ve eliminated that scripting,” a Woolworths spokesperson confirmed.

Prof. Jeannie Paterson, co-director of the University of Melbourne’s Centre for AI and Digital Ethics, highlights the pitfalls of AI assistants as often stemming from their misunderstanding of user prompts. “Chatbots are only effective to the degree they can interpret—though I refrain from using the word ‘understand’, as they’re not sentient—the context of a human’s inquiry,” she explains.

Last year, Bunnings faced backlash when its AI chatbot advised a customer on rewiring an extension cord, guidance that is illegal to provide without an electrical license. In a separate incident in 2022, Air Canada’s chatbot made a misleading statement about bereavement fare refunds, which resulted in a legal battle when the airline refused to honor the erroneous advice.

Paterson asserts that organizations are “clearly responsible” for the performance of their chatbots. She argues that companies are trying to maintain a delicate balance between offering a responsive, adaptable AI assistant and managing the risk of the bot yielding incorrect advice that could lead to financial losses.

“One customer’s AI agent purchasing too many eggs or too much salmon may not seem significant,” she notes, “but if similar errors occur across a network of chatbots, the cumulative financial drain could be substantial long before they are addressed.”

To minimize this risk, businesses commonly impose “very strict guardrails” on their bots, at the cost of flexibility and of the bots’ ability to interpret customer intentions. Guardian Australia’s tests of several retail bots produced mixed results, suggesting the technology still has room to improve.

For instance, when Uniqlo’s “virtual shopping assistant” was given the prompt “I am looking for a woollen jumper,” it replied with, “Sorry, we could not recognise you.” Even after specifying “find a product” followed by “woollen jumper,” the chatbot responded with options for men’s button-down office shirts. When contacted, Uniqlo provided no comment regarding this issue.

Even Olive hasn’t always hit the mark. When prompted via Woolworths’ chat function with “How much is a 500g bag of pasta?”, Olive’s quirky response was, “I’m very sorry to hear you were missing items from your order.”
