Retailers Seek ‘Charming & Human-Like’ AI for Shopping Assistance, But Could Chatbots Become Unruly?

Major retailers are projecting that sophisticated AI “assistants” will soon manage your meal planning, party organization, and shopping tasks with ease. This technology, however, will not come without its challenges, particularly for companies still grappling with the shortcomings of their simpler AI chatbots. Balancing the need for these new “agentic” bots to be relatable while avoiding the risk of them going off-script will be a tightrope walk for many businesses.
The spotlight recently shone on AI chatbots when Woolworths had to rein in its virtual shopping assistant, Olive, after a failed attempt to create a relatable, human-like interaction. Instead of fostering a sense of connection, Olive’s chat about “relatives” over the phone left customers annoyed.
A customer voiced their frustration on Reddit: “I’m already irritated that I have to call, and now I’ve got some robot babbling at me? Wtf Woolies?”
While Woolworths is stepping back from Olive’s quirky personality, the incident highlights the ongoing teething problems that AI technology still faces, which were further emphasized by tests conducted by Guardian Australia assessing various retailers’ chatbots.
The misstep at Woolworths follows a series of AI customer service blunders, including Bunnings’ chatbot dispensing illegal electrical advice and Air Canada’s virtual assistant incorrectly promising a bereavement fare refund. These misadventures underscore the learning curve that companies are experiencing with AI technology.
With companies including Woolworths, Coles, and Wesfarmers (owner of Bunnings, Kmart, Officeworks, and Priceline) announcing plans for agentic shopping assistants, hype is unmistakably prevalent. A 2024 report from business consultancy Accenture confidently stated that “consumers are ready” for generative AI-powered shopping helpers while urging businesses to approach this technology with a “delightfully human” mindset. Yet, one must ask, is the technology actually prepared?
A customer service transformation
The concept of online chatbots designed to assist customers isn’t new, yet the tools have evolved significantly in sophistication. Initial versions relied on “rules-based” AI, as noted by Uri Gal, a professor of business information systems at the University of Sydney. These basic chatbots utilized a “decision tree” to provide immediate answers to straightforward inquiries.
For example, when faced with a query like “How do I return my order?”, the chatbot would typically direct the user to the retailer’s returns page or cite the relevant policy. “When given a specific prompt, it consistently provides the same response,” says Gal.
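The rules-based approach Gal describes can be sketched in a few lines: keywords map to fixed, scripted replies, so the same prompt always yields the same answer. This is a hypothetical illustration, not any retailer’s actual system.

```python
# Minimal sketch of a rules-based ("decision tree") chatbot:
# each recognised keyword maps to one fixed, scripted response.
RULES = {
    "return": "To return an order, please visit our returns page.",
    "refund": "Refunds are processed within 5-7 business days.",
    "delivery": "Standard delivery takes 2-4 business days.",
}

FALLBACK = "Sorry, I didn't understand. Please contact customer support."

def respond(prompt: str) -> str:
    """Return the scripted answer for the first matching keyword."""
    text = prompt.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK

# Identical prompts always produce identical responses:
print(respond("How do I return my order?"))
```

Unlike an LLM-based assistant, a bot like this can never go off-script, but it also cannot handle any phrasing its rules do not anticipate.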
However, modern AI-driven retail bots can “learn” from new information provided to them, allowing them to generate varied answers. Many are built on large language models from major tech companies, such as the models underpinning ChatGPT.
The next step in this evolution is the development of agentic AI shopping assistants, which are designed to emulate human behaviors. According to Gal, these assistants “act on their own” to fulfill tasks independently, like purchasing groceries or airline tickets. Yet, this level of autonomy also introduces complications, particularly when it comes to privacy concerns. If bots have more access to customer data for their autonomous operations, it raises various governance issues.
“Given these systems’ novelty, as evidenced by the Woolworths incident, it’s reasonable to expect that various scenarios might arise, potentially leading to risks or an agent acting unpredictably,” Gal warns.
Woolworths has partnered with Google to use its LLM, Gemini, to turn Olive into a more capable “shopping companion” that can assist with complex tasks such as meal planning and party preparation, automatically populating customers’ shopping baskets.
While Woolworths has indicated that Olive’s enhanced functions will be rolled out at a future date, the current collaboration with Google has already enabled the bot to handle phone calls, with varied outcomes.
In response to questions, Woolworths said Olive was not malfunctioning or acting erratically. A staff member had scripted the bot to mention its “mother” when a customer provided their date of birth, intending to give it a personable demeanour. The supermarket later removed the scripting in response to customer feedback.
When things go wrong
The Woolworths incident, then, was not a glitch or unexpected behavior, as the supermarket confirmed. A staff member had programmed Olive to discuss its “mother”, hoping to create a more relatable persona. “Due to the feedback we received, we’ve eliminated that scripting,” a Woolworths spokesperson confirmed.
Prof. Jeannie Paterson, co-director of the University of Melbourne’s Centre for AI and Digital Ethics, highlights the pitfalls of AI assistants as often stemming from their misunderstanding of user prompts. “Chatbots are only effective to the degree they can interpret (though I refrain from using the word ‘understand’, as they’re not sentient) the context of a human’s inquiry,” she explains.
Last year, Bunnings faced backlash when its AI chatbot advised a customer on rewiring an extension cord, guidance the retailer is legally prohibited from offering without proper licensing. In a separate incident in 2022, Air Canada’s chatbot made a misleading statement about bereavement fare refunds, resulting in a legal battle when the airline refused to honor the erroneous advice.
Paterson asserts that organizations are “clearly responsible” for the performance of their chatbots. She argues that companies are trying to maintain a delicate balance between offering a responsive, adaptable AI assistant and managing the risk of the bot yielding incorrect advice that could lead to financial losses.
“One customer’s AI agent purchasing too many eggs or too much salmon may not seem significant,” she notes, “but if similar errors occur across a network of chatbots, the cumulative financial drain could be substantial long before they are addressed.”
To minimize this risk, it’s common for businesses to impose “very strict guardrails” on their bots, which unfortunately results in less flexibility and poorer performance in understanding customer intentions. Guardian Australia’s evaluations of several retail bots produced marginal results, suggesting that this technology still has room for improvement.
For instance, when Uniqlo’s “virtual shopping assistant” was given the prompt “I am looking for a woollen jumper,” it replied with, “Sorry, we could not recognise you.” Even after specifying “find a product” followed by “woollen jumper,” the chatbot responded with options for men’s button-down office shirts. When contacted, Uniqlo provided no comment regarding this issue.
Even Olive hasn’t always hit the mark. When prompted via Woolworths’ chat function with “How much is a 500g bag of pasta?”, Olive’s quirky response was, “I’m very sorry to hear you were missing items from your order.”
