

Higher quality communication to combat health misinformation

May 30, 2024

Misinformation erodes public trust in critical health services, and that costs lives. AI developers and health communicators must work together to ensure better-quality information, bolstering public confidence in health services and helping people make more informed choices, writes Scott Ratzan.

The digital transformation impacts every aspect of our world. The potential for exponential technologies like generative AI, machine learning, and big data analytics to bolster health tech seems limitless. Yet as communication shifts from analog modes (face-to-face dialogue and paper-based media) to less tangible, digitally mediated methods, the health sector faces unique challenges.

The Covid pandemic not only killed more than six million people; it also left a legacy of distrust in science. New approaches and capabilities are needed so that health communication can respond effectively to disease threats and rebuild public trust in science, our institutions, and national governments. AI could help. Developers and technology experts suggest that health information curated for easy use can foster consumer choice, patient engagement, and self-management of health and chronic illness. According to Eric Schmidt, former CEO of Google, AI will double everyone's productivity. Perhaps it can also make us better informed and, in turn, improve our health outcomes.

Mis- and disinformation are such serious concerns (witness how Covid eroded scientific progress, trust in vaccination, and sound decision-making) that experts estimate hundreds of thousands of people died from the anti-science misinformation scourge. One of the opportunities AI presents is to develop and disseminate quality health communication that proactively facilitates broad-based public engagement and counters the mis- and disinformation that have been so harmful.

Current approaches to mis- and disinformation lack comprehensive, participatory, multisectoral engagement. For instance, both the recent U.S. White House executive order requiring clear communication and "watermarking" of AI-generated content and the EU's recently passed AI regulation are short-term fixes.

At this point in its development, it is incumbent on us to pair AI with clear expectations, supporting evidence-based conversations and inclusive, participatory engagement between those developing AI and the intended audiences for its use. We need to begin this process now to avoid further decay of trust, truth, and progress. Initiatives such as the new Council for Quality Health Communication are a step in the right direction, but more remains to be done to ensure the ethical development of AI for health and wellbeing.

About Scott Ratzan:
Scott C. Ratzan, MD, MPA, leads the Business Partner Roundtable series with the U.S. Council for International Business Foundation. He is the founding Editor-in-Chief of the Journal of Health Communication: International Perspectives and is a World in 2050 Brain Trust member.
The views presented in this article are the author’s own and do not necessarily represent the views of any other organization.