KPMG Partner Fined for Using AI to Cheat on an AI Training Exam

A partner at KPMG has faced disciplinary action after using artificial intelligence to gain an unfair advantage in an internal training course on AI technology. The incident marks a significant ethical breach at one of the leading firms in the accountancy sector.
The partner, whose identity has not been disclosed, was fined A$10,000 (approximately £5,200). They were reportedly not alone: several other KPMG employees are alleged to have used AI tools in the same way.
Since July of this year, KPMG Australia has identified more than two dozen employees using artificial intelligence tools to cheat on internal examinations. This surge raises alarm bells about AI-enabled misconduct within not only KPMG but also the broader accounting community.
The firm uncovered the malpractice by running its own AI detection tools during the assessment process, according to the Australian Financial Review, which first reported the story.
The situation presents a new chapter in the ongoing challenges faced by the so-called Big Four accountancy firms. KPMG, along with its competitors, has grappled with various cheating scandals in recent years. A noteworthy incident occurred in 2021 when KPMG Australia was fined A$615,000 for widespread misconduct involving over 1,100 partners who engaged in “improper answer-sharing” during skills and integrity assessments.
The introduction of AI tools has created new avenues for unethical behavior in the work environment. In December, the Association of Chartered Certified Accountants (ACCA), the largest accounting body in the UK, announced a new requirement for accounting students to take exams in person. This decision was driven by the challenges of preventing AI-induced cheating in remote settings.
Helen Brand, the CEO of ACCA, said the profession had reached a critical point: AI technologies have evolved faster than the safeguards designed to prevent cheating. Her comments underscore how sharply AI has shifted the ground under assessment integrity.
At the same time, major firms including KPMG and PricewaterhouseCoopers are actively integrating AI into their operations, requiring employees to use AI tools in their work to improve productivity and reduce costs.
KPMG plans to include assessments of partners’ proficiency in using AI tools as part of their performance reviews in 2026. According to Niale Cleobury, the global AI workforce lead at KPMG, integrating AI into work is not just encouraged; it’s seen as a collective responsibility within the organization.
Observers on platforms like LinkedIn have highlighted the irony of cheating in AI training by utilizing AI itself. One user, Iwo Szapar, who founded a platform aimed at assessing organizations’ AI maturity, remarked that KPMG is overlooking a foundational issue. He stated that the real problem isn’t cheating; rather, it’s the inadequacy of training and the need for a redesign of how individuals are prepared for the complexities of modern technology.
In response, KPMG has pledged to strengthen its methods for detecting AI misuse among staff and to track the frequency of such violations in order to refine its approach.
Andrew Yates, the CEO of KPMG Australia, commented on the challenges posed by quickly evolving AI technologies. He acknowledged that the widespread adoption of AI tools has led to difficulties in managing compliance with internal policies. Yates stated, “Given the everyday use of these tools, some people breach our policy. We take it seriously when they do. We are also looking at ways to strengthen our approach in the current self-reporting regime.”
