Z-Inspection®: A Process to Assess Trustworthy AI

Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.

Abstract—The ethical and societal implications of artificial intelligence systems raise concerns. In this article, we outline a novel process based on applied ethics, namely, Z-Inspection®, to assess whether an AI system is trustworthy. We use the definition of trustworthy AI given by the European Commission's High-Level Expert Group on AI. Z-Inspection® is a general inspection process that can be applied to a variety of domains where AI systems are used, such as business, healthcare, and the public sector, among many others. To the best of our knowledge, Z-Inspection® is the first process to assess trustworthy AI in practice.

Index Terms—Accountability, artificial intelligence (AI), AI ethics, AI policy, AI audit, algorithmic audits, corporate social responsibility, deep learning (DL), ethics, law, responsible innovation, society, machine learning (ML), Z-Inspection®.

IEEE Transactions on Technology and Society, vol. 2, no. 2, June 2021

Print ISSN: 2637-6415
Online ISSN: 2637-6415
Digital Object Identifier: 10.1109/TTS.2021.3066209