What you need to know about South Korea’s AI Basic Act
Education | Artificial Intelligence
Published: Feb 10, 2025
*Disclaimer: This article was written using a translated copy of South Korea’s AI Basic Act*
After the European Union paved the way by creating a legal framework for artificial intelligence (AI) in early 2024, many wondered which government or jurisdiction would follow. The year continued with discussions on how best to implement AI governance and debates over where the line stands between sufficient governance and proper opportunity for creativity in the technology industry. As the world prepared to welcome in the new year, those questions were finally answered. In late December 2024, South Korea’s National Assembly passed its own AI legislation, and with its promulgation on January 21, 2025, South Korea became the second major jurisdiction to enact comprehensive AI regulation: the AI Basic Act. As for when these regulations will be enforced, the enforcement date is January 22, 2026, giving organizations roughly a year to prepare. It’s also worth noting that the act contains six sections with 43 articles; we’ve outlined the key points below.
The goal of the AI Basic Act
Like any new legislation passed by a government, the AI Basic Act was written with a shared purpose in mind. Its aims include:
- Protecting people’s rights
- Improving people’s quality of life
- Strengthening national competitiveness in the realm of AI
- Specifying requirements to establish trust during the AI development process
The overall goal is for organizations to be able to develop AI systems creatively and in a stable manner that enhances safety and reliability, while also improving people’s quality of life through the use of those systems.
Obligations of organizations under the AI Basic Act
Transparency
Organizations that provide an AI product or service must clearly inform users that the results they are interacting with were derived from an AI system. The same applies when an AI system is used to generate visual media such as videos and pictures; this ensures users understand that what they are seeing is AI-generated and not real. There is some leniency for artistic creations so that the disclosure does not ruin the piece; however, users must ultimately be made clearly aware in some manner.
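To make the transparency obligation concrete, below is a minimal sketch of one way a provider might attach an AI-generated notice to its outputs. This is purely illustrative: the act requires disclosure but does not prescribe any particular implementation, and every name here (`AIOutput`, `with_disclosure`, the notice wording) is our own assumption.

```python
# Hypothetical illustration only: the act requires that users be informed,
# but does not prescribe this (or any) specific implementation.
from dataclasses import dataclass

AI_DISCLOSURE = "This content was generated by an AI system."  # assumed wording

@dataclass
class AIOutput:
    content: str      # the model-generated text or a reference to generated media
    disclosure: str   # user-facing notice that the content is AI-derived

def with_disclosure(generated_content: str) -> AIOutput:
    """Wrap model output so the AI-generated notice always travels with it."""
    return AIOutput(content=generated_content, disclosure=AI_DISCLOSURE)

# Example: render a response to the user alongside its notice
response = with_disclosure("Here is a summary of your document...")
print(f"{response.content}\n\n[{response.disclosure}]")
```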
Safety
Organizations must identify, assess, and mitigate risks throughout the AI life cycle. They must also establish a risk management system that monitors for AI incidents and allows for a response when they occur. Proof of these measures must be submitted to the Minister of Science and ICT.
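As a rough illustration of what such incident monitoring might look like in practice, here is a hypothetical sketch of a simple incident log. The act calls for a risk management system but does not specify its form; all field names and severity levels below are assumptions for illustration only.

```python
# Hypothetical sketch of an AI incident record; the act requires monitoring
# and response capability but does not prescribe any particular structure.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIIncident:
    system_name: str                  # which AI system was involved
    description: str                  # what happened
    severity: Severity
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    mitigation: Optional[str] = None  # filled in once a response is taken

incident_log: list[AIIncident] = []

def record_incident(incident: AIIncident) -> None:
    """Append to the log so incidents can be tracked from detection to response."""
    incident_log.append(incident)

# Example: log an incident for later review and response
record_incident(AIIncident(
    system_name="support-chatbot",
    description="Model produced unverified medical advice",
    severity=Severity.HIGH,
))
```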
Requirements for organizations located outside of South Korea
An important aspect of the AI Basic Act’s scope is that it applies to organizations developing or using AI as part of business activities within the South Korean market, including organizations that are not physically based within the nation itself. In fact, the act specifies certain requirements for organizations based outside of South Korea: if an organization does not have a domestic address and meets specified user or financial thresholds, it is required to designate a person to act as a “domestic representative”.
This representative is responsible for the following on behalf of the organization:
- Submission of evidence that the organization identifies, assesses, and mitigates risks throughout the AI life cycle and establishes an AI risk management system
- Confirmation of whether the organization is providing an AI product or service that falls under the definition of high-impact AI
- If the organization does provide high-impact AI products or services, ensuring that certain measures (explained further below) are met
High-Impact Artificial Intelligence
Now that it has been mentioned a few times, what exactly is high-impact AI? It is an AI system used in an area that can significantly affect human life, human rights, or physical safety. With that definition in mind, a few industries naturally come to mind, such as energy, healthcare, public transportation, surveillance, and public utilities. If issues were to arise with AI systems in these industries, the effect on people and their wellbeing would be immense.
AI systems involved in government decision-making also fall under the category of high-impact AI. As you can imagine, if a government were making important decisions that affect its citizens using an unreliable AI system, the results could be significantly negative. These are just a few of the examples called out in Article 2 of the AI Basic Act.
If an AI provider falls under the high-impact AI category, several measures must be taken to ensure the safety and reliability of the system. Organizations must:
- Establish and implement a risk management plan
- Be able to explain the final results derived from the AI system, the main criteria used to reach those results, and the training data used to develop the system
- Implement user protection measures
- Implement human supervision of the system
- Prepare and retain documentation to prove system safety and reliability
Impact Assessments
The concept of impact assessments for high-impact AI systems is also described within the AI Basic Act. Organizations providing high-impact AI products or services must evaluate the impact of their systems on human rights. It should also be noted that when government agencies decide to use high-impact AI products or services, they must give priority to systems that have undergone an impact assessment.
High-Level Overview of AI Basic Act Topics
Here are some additional high-level AI Basic Act topics that are worth highlighting:
- The Minister of Science and ICT will establish an AI basic plan every three years to promote the domestic AI industry and increase national competitiveness.
- The AI Basic Act calls for the creation of several entities, such as:
- The National Artificial Intelligence Commission
- The National Artificial Intelligence Policy Center
- The Artificial Intelligence Safety Research Institute
- The act covers support for small and medium enterprises, as well as startups, to help promote innovation within the nation.
- Support for AI data centers is also covered to build up AI infrastructure in the nation.
- The act also covers attracting and developing AI professionals.
How your organization can prepare for AI Basic Act enforcement
With enforcement set for January 22, 2026, organizations that develop or provide AI products or services in the South Korean market should start reviewing their systems against the act’s obligations now. If you’re interested in learning more about Schellman’s suite of AI services and discovering how we can streamline your journey toward AI compliance, contact us today. We can even tell you everything you need to know about becoming ISO 42001 certified.
In the meantime, read more about the latest in AI compliance here.
About Jerrad Bartczak
Jerrad Bartczak is a Senior Associate – AI within the AI Practice at Schellman, based in New York. He specializes in AI assessments including ISO 42001 and HITRUST + AI, while staying current on worldwide AI compliance and governance developments. He also possesses in-depth compliance knowledge cultivated through years of experience conducting HITRUST, SOC 1, SOC 2, DEA EPCS and HIPAA audits. Jerrad maintains CISSP, CISA, CCSFP, CCSK and Security+ certifications.