WHO releases guidelines on AI ethics and governance for large multi-modal models

Written by Katie McCool


The World Health Organization (WHO) has introduced new guidance addressing the ethical considerations and governance of large multi-modal models (LMMs), a rapidly advancing category of generative artificial intelligence (AI) being applied to a wide range of healthcare domains.


What are LMMs? 

LMMs, a subset of generative AI, are making significant contributions to the generation of real-world data (RWD). These advanced models can accept a variety of data inputs, including text, video, and images, and, using transformer architectures, can learn the relationships between different modalities. Research indicates that they outperform text-only models and can tackle new tasks, such as describing images or generating commands for robots. With millions or even billions of parameters, these models excel at comprehending and generating content across varied data types, making them versatile for complex tasks in natural language processing (NLP) and computer vision. Able to produce diverse outputs that essentially mimic human communication, LMMs have seen unprecedented rates of adoption, exemplified by platforms such as ChatGPT, Bard, and BERT entering public awareness in 2023.  


While LMMs are beginning to gain traction for specific health-related applications, documented risks include the potential for false, biased, or incomplete statements that could harm health decision-making. There are also concerns that LMMs may be trained on poor-quality data, or on data that is biased by factors such as race, ethnicity, and gender. New guidance from WHO provides over 40 recommendations for governments, tech companies, and healthcare providers to ensure the responsible use of LMMs for public health. 

The guidance also outlines broader risks to the accessibility and affordability of health systems, as well as the possibility of ‘automation bias’ among healthcare professionals and patients, whereby errors are overlooked or decisions are inappropriately delegated to the technology. Like other AI technologies, LMMs are vulnerable to cybersecurity threats, jeopardizing patient information and trust in health care. To ensure safe LMM development, the WHO emphasizes engaging governments, tech companies, healthcare providers, patients, and civil society throughout the technology’s lifecycle, including in its oversight and regulation. 

“Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs,” explained Dr Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division. 

The WHO guidance outlines five key applications of LMMs in health care. These include: 

  1. Diagnosis and clinical care, such as addressing patient queries 
  2. Patient-guided use, such as investigating symptoms  
  3. Handling clerical and administrative tasks, like documenting patient visits 
  4. Supporting medical and nursing education through simulated patient encounters
  5. Contributing to scientific research and drug development by identifying new compounds

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” commented Dr Jeremy Farrar, WHO Chief Scientist. “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.” 

The new guidance provides key recommendations for governments and developers on the development and deployment of LMMs for public health and medical purposes. Governments are urged to invest in accessible infrastructure, enact laws and policies that uphold ethical obligations and human rights standards, assign regulatory agencies to assess LMMs, and implement post-release auditing by independent third parties. Developers should engage diverse stakeholders in transparent design, focusing on well-defined tasks performed with the accuracy and reliability needed to strengthen health systems. They should also be able to predict and understand potential secondary outcomes of LMM use. Together, these recommendations aim to foster the responsible and ethical use of LMMs in health care. 
