Highlights

Development of AI Guideline in Healthcare for Society


AI technology has become an emerging social issue, not only through particular incidents, such as the 2018 boycott of research collaboration with KAIST over autonomous weapons development and the 2020 controversy over the chatbot Iruda, but also through structural problems, such as the infringement of privacy by digital contact tracing, the replacement of human jobs by machines and the resulting unemployment, and the distortion of AI algorithms affecting platform workers on delivery apps.
AI is one of the core technologies driving the Fourth Industrial Revolution, but it also carries various risks of misuse and abuse. Users exposed to AI need a guideline that helps them recognize the risks of these technologies and use them appropriately. The need is especially acute in healthcare, where safety is the most critical issue. The AI Guideline in Healthcare for Society, developed by the KAIST Korea Policy Center for the Fourth Industrial Revolution (KPC4IR) and published on August 15, 2021, is therefore a very timely accomplishment, and the first of its kind in the world.

|World’s First AI Application Guideline in Healthcare|

Policy-based efforts to prevent the misuse and abuse of AI
“The guideline summarizes the items that must be checked by the healthcare workers, patients, families and citizens who utilize AI. It is critical to make preemptive efforts to imagine the possible issues related to the misuse and abuse of the technology and to properly respond to them. We need not just the personal efforts made by scientists, engineers and developers but also the policy and system-based measures for the whole society.”
Professor So Young Kim, the director of KPC4IR, explained that the team prepared checklists covering three essential areas. The first asks, 'What is the data upon which the healthcare AI service is developed?' This concerns how well the collected data represents the patients the AI is meant to analyze. A lesson was drawn from a skin disease diagnosis AI system developed in Germany: because the system was trained mainly on data from people with white skin, it was likely to fail to accurately diagnose skin lesions in people of other races. The guideline also addresses the questions, 'What assumptions does the AI system make about patients and diseases?' and 'How reliable are the decisions made with the assistance of AI?' For each of these essential topics, the guideline provides checklists that healthcare service users can refer to.

Healthcare AI guideline for both the users and beneficiaries
The project was carried out jointly with the Institute for Public Understanding of Risk (IPUR) at the National University of Singapore, funded by the UK's Lloyd's Register, the world's first and largest register, and with Sense about Science, a leading science and technology NGO in the UK. The response to the guideline's publication has been highly encouraging: the research group was invited by the Z-Inspection Group, introduced in IEEE Transactions on Technology and Society, to join an international collaboration on the development of reliable AI, and the guideline was covered as an important topic by many major news outlets. The COVID-19 pandemic has heightened public attention to the application of AI technology in healthcare and to its safety. As Director Kim said, the guideline gathers all the necessary questions from the viewpoints of both the users and beneficiaries of AI, including the relevant technical developers, policy-makers, healthcare workers, and citizens, so that the beneficial effects of AI can be maximized.

Prof. So Young Kim
2021 Annual Report


KAIST 291 Daehak-ro, Yuseong-gu, Daejeon (34141)
T : +82-42-350-2381~2384
F : +82-42-350-2080
Copyright (C) 2015. KAIST Institute