Assessing Trustworthy AI in times of COVID-19.

The paper’s main contributions are twofold: to demonstrate how to apply the European Union High-Level Expert Group’s (EU HLEG) general guidelines for trustworthy AI in practice in the healthcare domain, and to investigate the research question of what “trustworthy AI” means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment evaluating the trustworthiness of an AI system that predicts a multi-regional score conveying the degree of lung compromise in COVID-19 patients. The system was developed and verified during the pandemic by an interdisciplinary team with members from academia, public hospitals, and industry. It aims to help radiologists estimate and communicate the severity of lung damage in a patient from chest X-rays, and it has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia (Italy) since December 2020. The methodology we applied for our post-hoc assessment, called Z-Inspection, uses socio-technical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.

Read more: