Microsoft Reports North Korean Operatives Using AI to Trick Western Companies into Hiring Them

The rise of fake IT workers allegedly orchestrated by North Korea has taken a new twist: Microsoft has revealed that these operatives are leveraging advanced AI tools to mislead Western companies into hiring them. The involvement of artificial intelligence marks a troubling evolution of long-standing tactics used by state-backed actors for illicit financial gain.
Microsoft reports that North Korean operatives are enhancing traditional revenue-generation schemes by using AI to fabricate phony identities and manipulate stolen identification details. The aim is to make applicants pursuing jobs in the IT and software development sectors appear more believable.
The scheme typically begins with state-sponsored fraudsters applying for remote IT positions in Western countries. They create a façade of legitimacy by assuming false identities, often collaborating with “facilitators” located in the countries where the targeted companies operate. Once hired, these workers typically remit their earnings back to the North Korean regime led by Kim Jong-un. In some instances, the fake employees have threatened to leak sensitive data if they face termination.
A recent blog post from Microsoft’s threat intelligence division outlines how AI is being used to significantly enhance the efficacy of these scams. Microsoft tracks these North Korean groups under the names “Jasper Sleet” and “Coral Sleet” – following the cybersecurity convention of assigning names to clusters of attacker activity.
According to Microsoft, the scammers have used voice-changing technology to alter their accents during virtual interviews, enabling them to pose convincingly as candidates from Western countries. They have also adopted the AI-driven application Face Swap, which lets them superimpose images of North Korean IT professionals onto stolen identification documents and produce polished headshots to accompany their résumés.
In its findings, Microsoft observed that “Jasper Sleet utilizes AI throughout the entire attack lifecycle – from getting hired, remaining employed, to misusing the access gained.” This methodical approach highlights the grave challenges cybersecurity professionals face in identifying and mitigating such sophisticated threats.
Last year, the tech giant disclosed that it had disrupted around 3,000 Microsoft Outlook and Hotmail accounts linked to these counterfeit North Korean IT workers – a figure that underscores the scale of the threat these actors pose to global cybersecurity.
Microsoft also noted that the fraudsters use AI platforms to compile “culturally appropriate” lists of names and matching email address formats, allowing them to assemble credible but fictitious identities for their job applications. A typical AI prompt might be as simple as “generate a list of 100 Greek names” or “develop email address formats using the name Jane Doe.”
Moreover, the scammers deploy AI to sift through job listings on platforms such as Upwork, extracting software and IT job requirements in order to craft tailored applications. Upwork, for its part, has said it is committed to taking swift and stringent action to remove bad actors from its platform.
Notably, once these fraudulent individuals gain employment, they frequently use AI tools for day-to-day tasks such as drafting emails, translating documents, and even generating code. This helps them prolong the deception and avoid being exposed as frauds or dismissed for inadequate performance.
In light of this trend, companies are being urged to adopt more stringent hiring practices, particularly by conducting video or in-person interviews for IT personnel. Experts note that discerning interviewers can often spot deepfake video through telltale signs, such as pixelation along the edges of the face or inconsistencies in how light interacts with AI-generated imagery.
