South Korea

The rapid development of artificial intelligence (AI) promises plenty of benefits and opportunities but also comes with risks when processing personal information.

These risks include:

  1. Large-scale data processing: the extensive learning data used in AI development likely involves a variety of personal and sensitive information;
  2. Complexity and lack of transparency: the methods used in processing personal information to develop and operate AI services are complex, which makes it difficult for data subjects to know how their personal information is processed;
  3. Automation and uncertainty: the difficulty in predicting the results of data processing in automated services can lead to unexpected consequences such as privacy infringement, social discrimination and bias.

A recent controversy in South Korea about AI and personal information protection involved “Lee Luda,” an AI chatbot service released on December 23, 2020. Lee Luda was quickly shut down due to complaints about its inappropriate use of personal information. For example, the providers of Lee Luda were accused of directly copying user conversations from another website they serviced without consent. Although the service providers claimed they had consent to collect and use personal information, their stated purpose for collecting and using the data – “service development” – was considered to be too abstract.

To address these issues, South Korean regulators are strengthening the safety and integrity of AI-related personal information processing. For example, the Ministry of Science and ICT released its “People-centered National Artificial Intelligence Ethical Guidelines,” while the Personal Information Protection Commission released the “Guidelines for Protecting Personal Information Processed by Automated Methods” along with the “Artificial Intelligence Personal Information Protection Self-Checklist.”

This article provides an overview of these recent regulatory trends in South Korea relating to personal information protection in the era of AI.

AI and Personal Information Protection Rules

Considering the nature of AI technology and services, protecting personal information requires: a) compliance with personal information protection obligations under applicable statutes; b) self-regulating activities in accordance with “Privacy by Design,” in which organizations implementing AI technology and policies must consider privacy throughout the entire life cycle of products and services; and c) compliance with personal information protection rules in designing and operating AI to prevent privacy infringement, social discrimination and bias.

AI-related personal information protection rules grounded in current statutes are mandatory, while those not yet codified are only recommended. The important rules in South Korea are listed below.

Mandatory:

  • Obtain voluntary, clear and specific consent and minimize the personal information items to be collected.
  • Generally, personal information may be used or provided to a third party only within the scope of the original purpose of collection. A separate legal basis is therefore required to use or share personal information beyond the original purpose.
    • (i) Personal information may be used without the consent of the data subject if reasonably related to the initial purpose of collection, there is no disadvantage to the data subject and appropriate measures (e.g. encryption) are in place to ensure the security of the personal information. (Personal Information Protection Act Article 15(3));
    • (ii) Unless there are special circumstances, personal information collected to offer a service can be used or provided to third-parties without additional consent to develop AI to improve the service. This is because the use is reasonably related to the original purpose of collection, the data subject can predict the use and it is unlikely the use will unfairly infringe on the data subject’s interests. (Authoritative Interpretation).

  • Using and providing pseudonymous information without the data subject’s consent is limited to statistical, scientific research and public archiving purposes; for other purposes, additional consent is generally required. Data controllers must also avoid risks of re-identification.
    • (i) Although AI uses scientific methods such as modelling, learning and testing, it is difficult to view the operation of AI-related services as scientific research. AI service providers must clearly notify data subjects and obtain additional consent when using (disclosing or providing) pseudonymous personal information. However, additional consent is not required if scientific methods such as technical development or substantiation are employed to improve functions and enhance algorithms. (Authoritative Interpretation);
    • (ii) Pseudonymous AI learning data requires caution because it is extensive and may include identifiable and attributable information along with private information. For example, in the case of SNS conversation data, it is necessary to pseudonymize not only the identifying information of the speaker, but also the identity or private information of any individuals mentioned in the conversation;
    • (iii) If pseudonymized information is disclosed to unspecified individuals, someone may have information that, in combination with the disclosed information, can identify an individual. Service providers should refrain from disclosing pseudonymized information to unspecified individuals and anonymize any personal information provided to unspecified third-parties.

  • Safely store and manage personal information used in the development and operation of AI with measures such as encryption and access control;
  • Immediately destroy personal information that becomes unnecessary due to, for example, the termination of the AI development or operation;
  • Manage, supervise and educate data controllers involved in the development and operation of AI.
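The pseudonymization duties above can be illustrated with a minimal sketch. This is an illustrative example only, not guidance from the Commission; the key, function names and conversation format are hypothetical. A keyed HMAC keeps pseudonyms consistent (so learning data remains linkable) while replacing the names of both the speaker and any individuals mentioned in a conversation, as the guidance requires for SNS conversation data:

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it must be stored separately from
# the data and rotated, since anyone holding it can re-link pseudonyms.
SECRET_KEY = b"store-separately-and-rotate"

def pseudonym(name: str) -> str:
    """Deterministic keyed alias: same input yields the same alias, but the
    original name cannot be recovered without the key."""
    digest = hmac.new(SECRET_KEY, name.encode("utf-8"), hashlib.sha256)
    return "person_" + digest.hexdigest()[:10]

def pseudonymize_message(speaker: str, text: str, known_names: list[str]) -> tuple[str, str]:
    """Replace the speaker label AND any third parties named in the text.
    Note: private content itself (e.g. health details) may need removal,
    not just name replacement."""
    for name in known_names:
        text = text.replace(name, pseudonym(name))
    return pseudonym(speaker), text

speaker, text = pseudonymize_message(
    "Kim Minsu", "Kim Minsu told Lee Jiwoo about the clinic visit.",
    known_names=["Kim Minsu", "Lee Jiwoo"],
)
```

The keyed approach is one design choice among several; tokenization tables or format-preserving encryption serve the same purpose, provided the mapping is kept apart from the pseudonymized data.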

Recommended:

  • Analyze and remove privacy infringement risks and follow relevant AI regulations and privacy protections in accordance with the Privacy by Design principle. In particular, identify the general status of the data to be collected and used, and analyze personal information items and types (identifiers, attribute information, etc.) to determine the use and grounds for collecting each personal information item (e.g. consent);
  • Actively self-regulate personal information protection activities during the development and operation of AI and obtain user trust by clearly disclosing how the AI service works;
  • Incorporate countermeasures in the service system and constantly monitor the quality and risk of learning data to avoid social discrimination and bias resulting from processing personal information during the development and operation of AI.

Examples of Related Measures

A. Personal Information Processing Based on Consent

In the case of a service that collects and analyzes TV viewing data to suggest personalized content, service providers should inform users of the items of personal information collected, the purpose of collection, the retention period and any disadvantages of withholding consent. They should also be able to prove that consent was obtained from users who clearly understood its terms.
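One way to make such consent provable is to record, per user, exactly what was disclosed at the moment of consent. The structure below is a sketch under assumptions (the field names are illustrative, not statutory terms); note the purpose field must be specific, since an abstract purpose like “service development” was found insufficient in the Lee Luda case:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    """Evidence of what the user agreed to, kept so the provider can
    later prove informed consent was obtained."""
    user_id: str
    items: tuple[str, ...]     # personal information items collected
    purpose: str               # must be specific, not abstract
    retention_period: str      # how long the data is kept
    refusal_consequence: str   # disadvantage of not consenting, disclosed up front
    consented_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = ConsentRecord(
    user_id="u-1001",
    items=("TV viewing history",),
    purpose="personalized content recommendations on this service",
    retention_period="12 months after last use",
    refusal_consequence="recommendations will not be personalized",
)
```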

B. Processing the Minimum Amount of Personal Information Needed

In the case of AI speaker devices that sit on standby and are activated through a “wake” word or sensor, using voice recognition to perform commands such as playing music, there is a risk the device is activated unintentionally by other noise. This may lead to conversations being recorded without a user’s knowledge. These devices should include a function to inform users when personal information is being collected, for example by flashing LED lights while the device is recording. Furthermore, the device should allow users to turn off voice recognition and standby mode, giving them control over whether or not to use the recording function.
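The two controls described above can be sketched as device state logic. This is a hypothetical illustration, not any vendor's firmware; all class and method names are assumptions:

```python
class SpeakerPrivacyState:
    """Sketch of an always-on indicator plus user kill switches for
    voice recognition and standby listening."""

    def __init__(self) -> None:
        self.voice_recognition_enabled = True
        self.standby_listening_enabled = True
        self.led_on = False  # stand-in for the physical recording light

    def start_recording(self) -> bool:
        # Refuse to capture audio if the user disabled the feature,
        # and never capture without turning the indicator on.
        if not (self.voice_recognition_enabled and self.standby_listening_enabled):
            return False
        self.led_on = True
        return True

    def stop_recording(self) -> None:
        self.led_on = False

    def disable_voice_recognition(self) -> None:
        self.voice_recognition_enabled = False
        self.stop_recording()
```

The key design point is coupling: the indicator is switched on inside the same code path that begins capture, so the device cannot record with the light off.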

C. Anonymization and Pseudonymization

In the case of a service that analyses viewing data from smart TVs to provide personalized content, a risk emerges that names and phone numbers could be leaked, since this information is often saved together with the viewing data. Service providers should determine whether it is necessary to separate the various types of personal information collected, and anonymize or pseudonymize any personal information the service does not require to function.

D. Transparent Disclosure of Personal Information Processing Methods

In the case where a service makes personalized recommendations by providing smart TV viewing data and voice information to third parties, third-party provisions drafted in fine print could prevent users from realizing their information may be shared. To avoid this issue, service providers should prominently disclose personal information processing policies, including collection, use, storage, sharing and destruction, to the user with infographics and diagrams. Such information should also be easily available to users on a smartphone application or TV system.

E. Ability to Request Access, Correction and Deletion of Personal Information

In the above case of the AI speaker devices, it may be difficult for users to enforce their privacy rights. For example, users may find it difficult to delete voice information collected or saved without their knowledge because they do not know how to make such requests. Accordingly, service providers should allow users to easily learn how their information was used and to request correction or deletion of their information. Services should also prepare automated measures to respond quickly to user requests.
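An automated measure of the kind described might look like the minimal dispatcher below. This is a sketch under assumptions (the store layout, request kinds and function names are hypothetical; correction handling is omitted for brevity): each recognized request is acted on immediately rather than queued for manual review.

```python
from dataclasses import dataclass

@dataclass
class RightsRequest:
    user_id: str
    kind: str  # "access" or "deletion" (correction omitted in this sketch)

def handle_request(store: dict, req: RightsRequest):
    """Act on a data-subject request immediately: return the user's data
    for access, or remove and return it for deletion."""
    if req.kind == "access":
        return store.get(req.user_id, {})
    if req.kind == "deletion":
        return store.pop(req.user_id, None)
    raise ValueError(f"unsupported request kind: {req.kind}")

# Hypothetical per-user store of collected voice data.
store = {"u-1001": {"voice_clips": ["clip-01.wav"]}}
deleted = handle_request(store, RightsRequest("u-1001", "deletion"))
```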

Disclaimer: All views are personal and do not reflect those of the organization. The views shared are not intended as legal advice and are for general information and education purposes only.


By Kwang-Wook Lee, Keun Woo Lee, Chulgun Lim, Helen H. Hwang

Kwang-Wook Lee

kwlee@yoonyang.com

Kwang-Wook Lee is a partner and head of the
New Business Team at Yoon & Yang, specializing
in new technology-related businesses such
as fintech, smart cars, the Internet of Things,
big data, U-healthcare and the sharing economy.

Keun Woo Lee

klee@yoonyang.com

Keun Woo Lee is a partner at Yoon & Yang,
specializing in intellectual property, privacy
and trade secret protection, including in
e-commerce and other technology, media
and telecommunications areas.

Chulgun Lim

cglim@yoonyang.com

Chulgun Lim is a partner at Yoon & Yang and his
practice areas include disputes and litigation
cases relating to personal information protection,
technology, and intellectual property.

Helen H. Hwang

hkhwang@yoonyang.com

Helen H. Hwang is a senior foreign attorney at
Yoon & Yang, and her practice areas include
intellectual property, including patents and
trademarks, foreign outbound investment and
general corporate law.

Yoon & Yang
yoonyang.com
18th, 19th, 22nd, 23rd, 34th Fl., ASEM Tower 517 Yeongdong-daero, Gangnam-Gu, Seoul 06164, Korea

* This article was first published in the October 2021 issue of the IHC Magazine.
