South Korea has begun enforcing its “AI Basic Act,” a sweeping framework that requires companies to tell users when they are interacting with high-impact or generative AI systems—and to label certain AI-generated outputs so they are not easily mistaken for real audio, images, or video. The law, formally enacted as Act No. 20676 on January 21, 2025, entered into force one year after promulgation, with a separate delayed effective date for one portion related to digital medical devices.
The Ministry of Science and ICT (MSIT) has framed the law as both an industrial policy and a trust-and-safety regime: a way to accelerate domestic AI adoption while adding baseline guardrails around transparency, safety, and oversight. MSIT first announced that the National Assembly passed the act on December 26, 2024, and said the government would move quickly to prepare the subordinate statutes and guidelines needed for implementation.
What the law demands on transparency and labeling
At the center of the act’s “trustworthiness assurance” chapter is Article 31, which creates three distinct transparency obligations for AI business operators.
First, operators that provide a product or service using either “high-impact” AI or “generative” AI must notify users in advance that the product or service is operated based on the relevant AI. This is a general “you are using AI” disclosure requirement.
Second, when operators provide generative AI, or a product or service using generative AI, they must label the output to indicate that it was generated by generative AI.
Third, where an operator provides virtual sound, image, or video outputs that are “difficult to distinguish” from real ones, the operator must notify users or label the content so that they can clearly recognize the outputs were generated by an AI system. The law also includes a carve-out meant to preserve viewing and enjoyment: if the output corresponds to, or forms part of, an artistic or creative work, the notice or label may be given in a manner that does not hinder its exhibition or enjoyment.
Crucially, the act leaves important implementation details, such as exact labeling methods and exceptions, to future Presidential Decrees. That has become a focal point for companies, especially those distributing media across multiple platforms, where a single piece of content may be consumed inside an app, reposted elsewhere, and redistributed again.
In early reporting after the law took effect, Korean media described MSIT guidance that differentiates labeling depending on where content is consumed—for example, emphasizing UI-based notices within a service, but requiring more explicit visible or audible watermarks when content is exported for external distribution. Other reporting highlighted that the law allows “invisible” watermarking in contexts where content is readily identifiable as artificial (such as certain animation or webcomics), while requiring clearer visible marks for deepfake-like content that closely resembles real people or events.
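To make that compliance logic concrete, here is a minimal, hypothetical sketch in Python of how a provider might route synthetic outputs to different labeling methods. Everything in it, including the enum values, the SyntheticOutput fields, and the decision rules, is an illustrative assumption built from the reported guidance above, not the statute’s or MSIT’s actual specification.

```python
from dataclasses import dataclass
from enum import Enum, auto


class LabelMethod(Enum):
    UI_NOTICE = auto()            # in-service disclosure (e.g., a banner in the app)
    VISIBLE_WATERMARK = auto()    # overlaid mark that survives export and reposting
    INVISIBLE_WATERMARK = auto()  # embedded metadata / steganographic mark


@dataclass
class SyntheticOutput:
    realistic: bool      # "difficult to distinguish" from real audio, images, or video
    creative_work: bool  # artistic or creative work (the Article 31 carve-out)
    exported: bool       # leaves the originating service for redistribution


def choose_label(output: SyntheticOutput) -> LabelMethod:
    """Hypothetical routing of Article 31-style labeling duties.

    The actual methods and exceptions will be fixed by Presidential
    Decree; this only encodes the distinctions described in early
    reported MSIT guidance.
    """
    if output.realistic and not output.creative_work:
        # Deepfake-like content closely resembling real people or events:
        # reported guidance points to clearer, visible marks.
        return LabelMethod.VISIBLE_WATERMARK
    if output.exported:
        # Content redistributed outside the service reportedly needs more
        # explicit visible or audible marks than in-app UI notices.
        return LabelMethod.VISIBLE_WATERMARK
    if output.creative_work:
        # Readily identifiable artificial content (e.g., animation,
        # webcomics) may reportedly carry an invisible watermark so the
        # label does not hinder exhibition or enjoyment.
        return LabelMethod.INVISIBLE_WATERMARK
    # Default: disclose within the service UI.
    return LabelMethod.UI_NOTICE


print(choose_label(SyntheticOutput(realistic=True, creative_work=False, exported=False)))
```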
Who is covered—and what “high-impact” means
The act’s obligations apply to “AI business operators,” a term broad enough to cover developers and entities providing AI-based products and services. It also asserts extraterritorial reach: conduct outside South Korea can still fall under the act if it affects the domestic market or users.
The definition of “high-impact” AI is designed around risk to life, safety, or fundamental rights, and it includes systems deployed in specified sensitive sectors. The statute’s list includes areas such as energy supply, drinking-water production processes, and certain health and medical service systems, as well as other enumerated categories, including financial and employment-related evaluations, that regulators and businesses have flagged as likely to draw early attention.
International coverage has focused on practical examples: AI used in loan screening, job applications, healthcare, transportation, and nuclear safety. Reuters reported that firms must ensure human oversight for “high-impact” AI and must provide advance notice and labeling for high-impact and generative AI use, including labeling where AI output is hard to distinguish from reality.
Safety, human oversight, and documentation
Beyond transparency, the law also introduces safety and governance duties that connect directly to labeling and disclosure.
Article 32 requires AI business operators to implement risk identification, assessment, and mitigation measures for AI systems whose cumulative training compute exceeds a threshold to be set by Presidential Decree, and to submit the implementation results to MSIT.
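As a rough illustration of what that trigger could look like operationally, the sketch below tallies cumulative training compute against a placeholder threshold. The THRESHOLD_FLOPS value and the run figures are invented placeholders; the real threshold will be fixed by Presidential Decree.

```python
# Hypothetical sketch: tallying cumulative training compute against a
# decree-set trigger. THRESHOLD_FLOPS and the run figures below are
# invented placeholders, not statutory numbers.
THRESHOLD_FLOPS = 1e26

training_runs_flops = [
    3.2e25,  # initial pre-training run
    4.1e25,  # continued pre-training
    2.9e25,  # large-scale fine-tune
]

cumulative = sum(training_runs_flops)
if cumulative >= THRESHOLD_FLOPS:
    print(f"{cumulative:.2e} FLOPs meets the threshold: Article 32 duties apply")
else:
    print(f"{cumulative:.2e} FLOPs is below the threshold: duties not triggered")
```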
Article 34 then layers in additional responsibilities for business operators providing high-impact AI. These include operating a risk management plan, preparing an “explanation plan” for AI outcomes (including key criteria and an overview of learning data), creating user protection plans, and assigning human management and oversight. The law also requires companies to prepare and retain documents that can verify the measures taken to ensure safety and trustworthiness.
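To suggest what retainable documentation might look like in practice, here is a minimal, hypothetical record structure a high-impact AI provider could keep. The fields mirror the Article 34 duties summarized above, but the structure itself, along with every name and path in the usage example, is an illustrative assumption, not a statutory form.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class HighImpactComplianceRecord:
    """Hypothetical retention record mirroring Article 34's duties.

    The fields track the obligations summarized above; the structure
    itself is an illustrative assumption, not a statutory form.
    """
    system_name: str
    risk_management_plan: str   # reference to the operating risk plan
    explanation_plan: str       # key output criteria and training-data overview
    user_protection_plan: str
    human_overseer: str         # person or role assigned management and oversight
    evidence_documents: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)


# Usage example with invented names and document paths.
record = HighImpactComplianceRecord(
    system_name="loan-screening-model-v3",
    risk_management_plan="docs/rmp-2026-01.pdf",
    explanation_plan="docs/explanation-criteria-v3.md",
    user_protection_plan="docs/user-protection-v3.md",
    human_overseer="Model Risk Officer",
    evidence_documents=["audits/2026-q1-safety-review.pdf"],
)
print(record.system_name, record.last_reviewed)
```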
While some requirements, such as conducting advance assessments of impacts on fundamental rights, are “endeavor to” obligations rather than strict mandates, these provisions still set expectations that could shape how companies build compliance programs, especially in regulated industries where audits and documentation are already standard practice.
Foreign firms and the “domestic agent” requirement
For foreign AI firms, one of the most consequential provisions may be Article 36, which requires certain operators without an address or office in South Korea to designate a “domestic agent” and report that appointment to MSIT. The exact thresholds that determine which foreign firms must comply (e.g., user counts or sales volume) are left to Presidential Decree, but the agent’s remit is spelled out: it can cover submission of safety implementation results, requests for confirmation of high-impact status, and support for implementing high-impact safety and trust measures (including keeping documents current and accurate).
That structure resembles other jurisdictions’ approaches to cross-border platform regulation, where authorities seek an accountable local contact for compliance, notices, and investigations.
Enforcement tools and penalties
The act empowers MSIT to conduct fact-finding investigations where it discovers or suspects violations of specific provisions—including parts of Article 31, Article 32, and Article 34—and to issue cease or corrective orders when violations are found.
Administrative fines can reach 30 million won for certain failures, including failing to provide the advance notification required by Article 31(1). Reuters also reported that companies would receive at least a one-year grace period before authorities begin imposing administrative fines, even as the law itself takes effect.
Why this matters globally
South Korea’s move is landing at a moment when major economies are still calibrating how to regulate fast-moving generative AI and synthetic media.
In international reporting, the law has been described as taking effect “all at once” compared with the European Union’s AI Act, which is being applied in phases over multiple years. The Wall Street Journal characterized the regime as unusually broad in scope, with requirements that users be notified when a service is powered by AI and that AI-generated content that could be confused with real life carry visible watermarks or metadata labels—alongside financial penalties for violations.
The transparency-and-labeling provisions are likely to be especially influential because they aim at a core generative-AI challenge: preventing ordinary users from confusing synthetic media for real events, people, or speech. At the same time, the law’s “creative works” flexibility and its reliance on forthcoming decrees suggest South Korea is attempting to balance consumer protection with a content industry that is increasingly experimenting with AI tools.
Industry concerns remain. Reuters reported criticism from startups that key details are vague and could create “regulatory risk,” potentially pushing companies toward conservative approaches to avoid enforcement exposure. Korean media have also raised questions about practical enforceability—particularly when watermark-removal tools are widely available and some deepfake content is produced using overseas services, complicating jurisdiction and compliance parity between domestic and foreign firms.
For global AI providers, the message is clear: if you operate in South Korea—or your AI products materially affect Korean users—you may be expected to build user-facing disclosure, output labeling, and synthetic-media signaling into your systems, and to maintain documentation and oversight processes for higher-risk deployments. And because the act explicitly leaves room for additional rules via Presidential Decree, compliance obligations could sharpen as implementing regulations and guidance mature over the coming months.