South Korea’s Pioneering AI Legislation Encounters Resistance in Its Quest to Become a Global Tech Leader

South Korea has unveiled what is considered the world’s most extensive set of AI-related laws, a regulatory framework intended to set a precedent other nations may follow. But the legislation has already sparked a wave of dissent and criticism.
The newly established laws mandate that companies disclose AI-generated content, prompting backlash from local tech startups. These companies argue that the regulations impose excessive burdens, while civil society organizations contend that they fall short in addressing critical issues.
The AI Basic Act, which came into effect last Thursday, is a response to the growing concerns surrounding artificially created media and automated decision-making processes. Governments across the globe are grappling with the swift advancements in technology, and South Korea appears to be taking the initiative in regulating it.
Under this act, companies offering AI services are required to conform to several regulations:
- They must embed discreet digital watermarks in obviously artificial content, such as animated graphics or artwork. Realistic deepfakes require visible labels.
- Systems classified as “high-impact AI,” including those involved in medical diagnoses, recruitment, and loan approvals, require operators to perform risk assessments and document their decision-making processes. Notably, if a human is responsible for the final decision, the system may not fall under this category.
- Powerful AI models require safety evaluations, though the criteria are so stringent that, by government officials’ own admission, no existing model currently meets them.
Failure to comply with these regulations could result in fines of up to 30 million won (£15,000). Nevertheless, the government has committed to a grace period of at least one year before enforcing penalties.
This legislation is promoted as the first of its kind to be rigorously enforced by a national government, aligning with South Korea’s goal of becoming one of the top three AI leaders globally, alongside the United States and China. Government officials assert that the law is 80–90% aimed at fostering industry growth rather than imposing restrictions.
Alice Oh, a professor of computer science at the Korea Advanced Institute of Science and Technology (KAIST), acknowledged the law’s imperfections but noted its intention to evolve without stifling innovation. However, a recent survey conducted by the Startup Alliance revealed that an overwhelming 98% of AI startups were not prepared for compliance. Lim Jung-wook, the co-head of the alliance, expressed widespread frustration, saying, “There’s a bit of resentment. Why do we have to be the first to do this?”
A point of contention among critics is that companies must self-assess whether their systems qualify as high-impact AI. This self-determination process is viewed as time-consuming and laden with ambiguities.
Concerns about competitive inequality also persist: all domestic companies are regulated regardless of their size, while foreign firms such as Google and OpenAI fall under the law only if they meet specific thresholds.
The push for regulation has developed against a backdrop of social tension, and civil society organizations worry that the new measures are insufficient. A report from Security Hero, a US identity protection firm, indicated that South Korea accounts for approximately 53% of all global deepfake pornography victims. An investigation in August 2024 unveiled extensive networks distributing AI-generated sexual content, igniting outrage and highlighting the urgent need for better protection.
Interestingly, the origins of the law date back to July 2020, long preceding these recent controversies. Initial drafts encountered many delays, partially due to accusations that they prioritized industrial interests over public safety.
Civil society groups assert that the current legislation falls short in adequately safeguarding individuals impacted by AI systems. Following its enactment, four organizations, including the human rights lawyer collective Minbyun, released a statement emphasizing the legislation’s lack of robust provisions for citizen protection against AI risks.
They pointed out that while the law offers some safeguards for “users,” those defined users are predominantly institutions—hospitals, financial firms, and public organizations—rather than ordinary people affected by AI technologies. They further criticized the absence of clearly prohibited AI systems and highlighted loopholes concerning “human involvement” in decision-making processes.
The human rights commission of South Korea has also voiced concerns about the enforcement decree, stating that it lacks defined terms for what constitutes high-impact AI. As a result, individuals most at risk of rights violations might remain unprotected under the regulatory framework.
In response, the ministry of science and ICT expressed optimism, anticipating that the law would “eliminate legal uncertainties” and contribute to building “a healthy and safe domestic AI ecosystem,” with plans to further clarify regulations through updated guidelines.
Experts believe South Korea is intentionally pursuing a distinct regulatory path compared to other regions. Instead of the EU’s stringent, risk-based model, the US and UK’s market-oriented regulations, or China’s state-led policies, South Korea has opted for a more flexible, principles-based approach. Melissa Hyesun Yoon, a law professor specializing in AI governance at Hanyang University, describes this framework as centered around “trust-based promotion and regulation.”
Yoon concluded, “Korea’s framework will serve as a valuable reference in global discussions on AI governance.”
