XAI4Covid — Applying AI Explainability to Crisis Communication
Investigating how research on explainability of AI systems can be translated and applied to improving public communication in complex crisis situations such as the COVID-19 pandemic.
The project was funded as part of the supplementary module “Corona Crisis and Beyond — Perspectives for Science, Scholarship and Society” of the Volkswagen Foundation, which aims to support research projects whose findings not only contribute directly to overcoming the crisis, but can also provide impetus for tackling major societal challenges in the medium to long term.
Problem & Context
The COVID-19 pandemic exposed fundamental challenges in communicating complex scientific knowledge to the general public:
- Communication Gap: Expert knowledge about infection risks, containment measures, and vaccination was often communicated in ways that were difficult for the general public to understand and act upon.
- Public Acceptance: The lack of understandable explanations contributed to reduced public acceptance and uptake of recommended containment measures.
- Untapped AI Research: Existing research on AI explainability techniques — such as contrastive and counterfactual explanations — had not yet been applied to improve public crisis communication.
Research Approach
- Contrastive & Counterfactual Explanations: Applying AI explainability techniques to help the public understand complex scenarios through contrasting alternatives and “what-if” reasoning (see the sketch after this list).
- User-Centred Design: Integrating insights from communication science, psychology, and HCI to make scientific information relatable and increase public understanding and acceptance.
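To make the pattern concrete, the sketch below shows one way a contrastive “what-if” explanation can be generated from a risk model. It is a minimal illustration, not code from the actual XAI4Covid tool: `contrastive_explanation`, `toy_risk_model`, and the factor names are hypothetical.

```python
# Minimal sketch of a contrastive "what-if" explanation.
# contrastive_explanation, toy_risk_model, and the factor names are
# hypothetical illustrations, not code from the XAI4Covid tool.

def contrastive_explanation(risk_model, scenario, factor, alternative):
    """Contrast the modelled risk of a scenario with a counterfactual
    in which exactly one factor is changed."""
    baseline = risk_model(scenario)
    counterfactual = dict(scenario, **{factor: alternative})
    alt_risk = risk_model(counterfactual)
    return (f"With {factor}={scenario[factor]!r} the estimated risk is "
            f"{baseline:.0%}; if {factor} were {alternative!r} instead, "
            f"it would be {alt_risk:.0%}.")

def toy_risk_model(scenario):
    """Illustrative stand-in: risk grows with duration, halves with masks."""
    base = min(1.0, 0.002 * scenario["minutes"])
    return base * (0.5 if scenario["masks"] else 1.0)

print(contrastive_explanation(
    toy_risk_model,
    {"masks": False, "minutes": 90},
    factor="masks",
    alternative=True,
))
```

The pattern generalises to any adjustable factor: compute the risk for the user’s current scenario, change exactly one factor, and phrase the difference as a contrast.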
Solution: Interactive Infection Risk Simulator
The centrepiece is an interactive web tool (what-if-masks.org) that enables people without scientific backgrounds to explore indoor infection risks intuitively.
- What-If Exploration: Users can interactively adjust factors like masks, incidence rates, and ventilation to understand how behaviours affect infection risks.
- Scientifically Grounded & Accessible: Based on established epidemiological models while designed for ease of use and public understanding (a model sketch follows this list).
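The project page does not specify which epidemiological model the simulator implements, so the sketch below uses the classic Wells-Riley equation, P = 1 - exp(-I·q·p·t/Q), as a stand-in for how adjustable factors such as masks and ventilation could feed an indoor risk estimate. All parameter values and the `mask_filtration` factor are placeholder assumptions.

```python
import math

def wells_riley_risk(infectors, quanta_per_hour, breathing_m3_per_hour,
                     hours, ventilation_m3_per_hour, mask_filtration=0.0):
    """Wells-Riley infection probability P = 1 - exp(-I*q*p*t/Q),
    with a simple filtration factor reducing inhaled quanta when
    masks are worn. All parameter values used below are placeholders."""
    inhaled_quanta = (infectors * quanta_per_hour * breathing_m3_per_hour
                      * hours / ventilation_m3_per_hour)
    inhaled_quanta *= 1.0 - mask_filtration
    return 1.0 - math.exp(-inhaled_quanta)

# A "what-if" contrast: two hours in the same room, without and with masks.
no_mask = wells_riley_risk(1, 25, 0.6, 2, 300)
masked = wells_riley_risk(1, 25, 0.6, 2, 300, mask_filtration=0.9)
print(f"No masks: {no_mask:.1%}  With masks: {masked:.1%}")
```

Exposing parameters such as occupancy time, ventilation rate, and mask filtration as interactive controls is the kind of “what-if” adjustment the simulator offers for masks, incidence rates, and ventilation.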
Additional Outputs
- Guidelines on Explanation Methods: Practical guidelines on how AI explanation methods can strengthen COVID-19 communication (download German version), helping experts and decision-makers improve their public messaging.
- Instagram Tutorial: A tutorial showing how to create interactive teaser stories for the simulator on Instagram (download German version), extending the reach to younger audiences through social media.
- Project Poster: A comprehensive poster detailing the interdisciplinary approach to human-centred COVID-19 communication (download) and how insights from Explainable AI research were applied.
Evaluation
The interactive simulator was scientifically evaluated for user acceptance and effectiveness. The evaluation confirmed that translating AI explainability techniques to crisis communication significantly improved public understanding of infection risk factors and increased engagement with the underlying scientific information.
Results & Impact
A multi-stage evaluation demonstrated the effectiveness of applying AI explainability techniques to crisis communication:
- 183+ evaluation participants in total
- 85% found the explanations understandable
- Increased trust: explanations were rated significantly more trustworthy and persuasive
- 80% agreed that the comparison improves understandability
Impact Areas
- Public Health: Provided the general public with an intuitive tool to understand indoor infection risks, empowering informed decision-making during the pandemic.
- Crisis Communication: Established a novel approach for translating AI explainability techniques into practical communication methods that can be applied beyond COVID-19 to future crisis scenarios.
- Research Transfer: Demonstrated that insights from AI research can have direct societal impact when applied to real-world communication challenges.
EIPCM’s Role
EIPCM led the XAI4Covid project, developing the core concept of applying AI explainability techniques — specifically contrastive and counterfactual explanations — to crisis communication. EIPCM designed and built the project’s centrepiece: an interactive indoor infection risk simulator (what-if-masks.org) that enables users to explore how protective measures affect virus transmission in everyday scenarios.
EIPCM also conducted the user-centred scientific evaluation with 183+ participants and produced practical guidelines on explanation methods for public health communication, combining insights from communication science, psychology, and human-computer interaction.
Learnings
- Explainability Beyond AI Systems: Techniques developed for explaining AI decisions can be effectively repurposed to explain complex scientific information to the general public.
- Interactive Exploration Over Static Information: Allowing people to interactively explore “what-if” scenarios proved far more effective than presenting static risk information.
- Interdisciplinary Collaboration is Essential: Bridging AI research, communication science, and public health expertise was key to creating tools that are both scientifically accurate and publicly accessible.
Consortium
Partners
- EIPCM (Germany) — Project Lead
- Radboud University (Netherlands) — Research Partner
- University of Applied Sciences Stralsund (Germany)