On Trustworthy AI
For the last three years, we have been conducting research on assessing #trustworthyai for healthcare.
We have learned a number of lessons.
Some of them are:
1. When dealing with proprietary software, IP is an obstacle to transparency;
2. Co-creation in the early AI design phase allows trustworthiness principles to be incorporated into the design, thus reducing risks;
3. The early involvement of an interdisciplinary panel of experts broadens the horizon of AI designers, who usually approach the problem definition from a technical perspective;
4. AI design should be guided toward reducing end-user vulnerability;
5. Consider the aim of the future AI system as a claim that needs to be validated before the AI system is deployed.
6. Socio-technical scenarios can be used to broaden stakeholders’ understanding of one’s own role in the technology, as well as awareness of stakeholders’ interdependence;
7. Most AI-based decision support systems in healthcare are not certified as medical devices and therefore lack clinical validation;
8. Monitoring AI systems over time (we call it ethical maintenance) is important;
9. Policy makers should give tangible incentives to stakeholders to follow recommendations received when doing a self-assessment.
10. Involve patients at every stage of the design process. It is particularly important to ensure that the views, needs, and preferences of vulnerable and disadvantaged patient groups are taken into account to avoid exacerbating existing inequalities.
What did we learn in assessing Trustworthy AI in practice?
Z-Inspection®: A Process to Assess Trustworthy AI.
Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.
IEEE Transactions on Technology and Society, VOL. 2, NO. 2, JUNE 2021.
Print ISSN: 2637-6415, Online ISSN: 2637-6415, Digital Object Identifier: 10.1109/TTS.2021.3066209
On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls.
Roberto V. Zicari • James Brusseau • Stig Nikolaj Blomberg • Helle Collatz Christensen • Megan Coffee • Marianna B. Ganapini • Sara Gerke • Thomas Krendl Gilbert • Eleanore Hickman • Elisabeth Hildt • Sune Holm • Ulrich Kühne • Vince I. Madai • Walter Osika • Andy Spezzatti • Eberhard Schnebel • Jesmin Jahan Tithi • Dennis Vetter • Magnus Westerlund • Renee Wurth • Julia Amann • Vegard Antun • Valentina Beretta • Frédérick Bruneault • Erik Campano • Boris Düdder • Alessio Gallucci • Emmanuel Goffi • Christoffer Bjerre Haase • Thilo Hagendorff • Pedro Kringen • Florian Möslein • Davi Ottenheimer • Matiss Ozols • Laura Palazzani • Martin Petrin • Karin Tafur • Jim Tørresen • Holger Volland • Georgios Kararigas
Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, 08 July 2021
VIEW ORIGINAL RESEARCH article https://www.frontiersin.org/articles/10.3389/fhumd.2021.673104/full
Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier.
Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund and Renee Wurth.
Front. Hum. Dyn., Human and Artificial Collaboration for Medical Best Practices, July 13, 2021
VIEW ORIGINAL RESEARCH article https://www.frontiersin.org/articles/10.3389/fhumd.2021.688152/full