
On Responsible AI. Interview with Ricardo Baeza-Yates.

by Roberto V. Zicari on February 7, 2022

“Today, AI can be a cluster bomb. Rich people reap the benefits while poor people suffer the result. Therefore, we should not wait for trouble to address the ethical issues of our systems. We should alleviate and account for these issues at the start.” — Ricardo Baeza-Yates.

Q1. What are your current projects as Director of Research at the Institute for Experiential AI of Northeastern University? 

Ricardo Baeza-Yates: I am currently involved in several applied research projects in different stages at various companies. I cannot discuss specific details for confidentiality reasons, but the projects relate predominantly to aspects of responsible AI such as accountability, fairness, bias, diversity, inclusion, transparency, explainability, and privacy. At EAI, we developed a suite of responsible AI services based on the PIE model that covers AI ethics strategy, risk analysis, and training. We complement this model with an on-demand AI ethics board, algorithmic audits, and an AI systems registry.

Q2. What is responsible AI for you? 

Ricardo Baeza-Yates: Responsible AI aims to create systems that benefit individuals, societies, and the environment. It encompasses all the ethical, legal, and technical aspects of developing and deploying beneficial AI technologies. It includes making sure your AI system does not interfere with human agency, cause harm, discriminate, or waste resources. We build Responsible AI solutions to be technologically and ethically robust, encompassing everything from data to algorithms, design, and user interface. We also identify the humans with real executive power who are accountable when a system goes wrong.

Q3. Is it the use of AI that should be responsible and/or the design/implementation that should be responsible? 

Ricardo Baeza-Yates: Design and implementation are both significant elements of responsible AI. Even a well-designed system could be a tool for illegal or unethical practices, with or without ill intention. We must educate those who develop the algorithms, train the models, and supply/analyze the data to recognize and remedy problems within their systems.

Q4. How is responsible AI different from or similar to the definition of Trustworthy AI – for example, from the EU High-Level Expert Group?

Ricardo Baeza-Yates: Responsible AI focuses on responsibility and accountability, while trustworthy AI focuses on trust. However, if the output of a system is not correct 100% of the time, we cannot trust it. So, we should shift the focus from the percentage of time the system works (accuracy) to the portion of time it does not (false positives and negatives). When that happens and people are harmed, we have ethical and legal issues.  Part of the problem is that ethics and trust are human traits that we should not transfer to machines.
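To make this shift from accuracy to error rates concrete, here is a minimal sketch in Python (with hypothetical labels and predictions, not from any specific system) showing how a seemingly high accuracy can coexist with a high false negative rate, which is where the harm usually hides.

    # Minimal sketch: report error rates, not just accuracy (hypothetical data).
    def error_report(y_true, y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        return {
            "accuracy": (tp + tn) / len(y_true),
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
            "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
        }

    # 90% accuracy, yet one in four positive cases is missed (false negative rate 0.25).
    y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
    y_pred = [1, 1, 0, 0, 0, 0, 0, 0, 0, 1]
    print(error_report(y_true, y_pred))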

Q5. How do you know when an application may harm people?

Ricardo Baeza-Yates: This is a very good question, as in many cases harm occurs in unexpected ways. However, we can mitigate a good percentage of it by thinking about possible problems before they happen. Exactly how to do this is an area of current research, but we can already do many things:

  • Work with the stakeholders of your system from design to deployment. That includes your power users, your non-digital users, regulators, civil society, etc. They should be able to check your hypotheses, your functional requirements, your fairness measures, your validation procedures, etc. They should be able to contest you. 
  • Analyze and mitigate bias in the data (e.g., gender and ethnic bias), in the results of the optimization function (e.g., data bias is amplified or an unexpected group of users is discriminated against), and/or in the feedback loop between the system and its users (e.g., exposure and popularity bias). A minimal sketch of one such check appears after this list.
  • Do an ethical risk assessment and/or a full algorithmic audit that includes not only the technical part but also the impact of your system on your users.
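As referenced in the second item above, here is a minimal sketch in Python of one common bias check: comparing positive-outcome rates across groups (the demographic parity gap). The groups and outcomes are hypothetical; a real audit would use several fairness measures and, typically, a dedicated toolkit.

    from collections import defaultdict

    # Minimal sketch: compare positive-outcome rates across groups (hypothetical data).
    def positive_rate_by_group(groups, outcomes):
        totals, positives = defaultdict(int), defaultdict(int)
        for g, y in zip(groups, outcomes):
            totals[g] += 1
            positives[g] += y
        return {g: positives[g] / totals[g] for g in totals}

    groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
    outcomes = [1, 1, 1, 0, 1, 0, 0, 0]  # e.g., 1 = favorable decision

    rates = positive_rate_by_group(groups, outcomes)
    gap = max(rates.values()) - min(rates.values())
    print(rates, "demographic parity gap:", gap)  # {'A': 0.75, 'B': 0.25} ... gap 0.5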

Q6. What is your take on the EU proposed AI law?

Ricardo Baeza-Yates: Among the many details of the law, I think the proposed AI regulation has two significant flaws. First, we should not regulate the use of technology but focus instead on the problems and sectors in a way that is independent of the technology. Rather than restrict a technology that may harm people, we can approach it the same way as food or health regulations, which work for all possible technologies. Otherwise, we will need to regulate distributed ledgers or quantum computing in the near future.

The second flaw is that risk is a continuous variable. Dividing AI applications into four risk categories (one is implicit, the no-risk category) is a problem because those categories do not really exist (see The Dangers of Categorical Thinking). Plus, when companies self-evaluate, it presents a conflict of interest and a bias toward choosing the lowest risk level possible. 

Q7. You mentioned that “we should not regulate the use of technology, but focus instead on the problems and sectors in a way that is independent of the technology”. AI seems to introduce an extra complexity, that is, the difficulty in many cases of explaining the output of an AI system. If you are making a critical decision that can affect people based on an AI algorithm for which you do not know why it produced an output, it would be, in your analogy, equivalent to allowing a particular medicine that produces lethal side effects to be sold. Do we want this?

Ricardo Baeza-Yates: No, of course not. However, I do not think it is the best analogy, as the studies needed for a new medicine must find out why the side effects occur, and only after that do you do an ethical risk assessment to approve it (i.e., the benefits of the medicine justify the lethal side effects). But the analogy is better for the solution. We may need something similar to the FDA in the U.S., which approves each medicine or device via a three-phase study with real people. Of course, this is needed only for systems that may harm people.

Today, AI can be a cluster bomb. Rich people reap the benefits while poor people suffer the result. Therefore, we should not wait for trouble to address the ethical issues of our systems. We should alleviate and account for these issues at the start. To help companies confront these problems, I compiled 10 key questions that a company should ask before using AI. They address competence, technical quality, and social impact. 

Q8. Ethics principles were established long ago, well before AI and other new technologies were invented. Laws often run behind technology, and that is why we need ethics. Do you agree?

Ricardo Baeza-Yates: Ethics always runs behind technology too. It happened with chemical weapons in World War I and nuclear bombs in World War II, to mention just two examples. And I disagree, because ethics is not something that we need; ethics is part of being human. It is associated with feeling disgust when you know that something is wrong. So, ethics in practice existed before the first laws. It is the other way around: laws exist because there are things so disgusting (or unethical) that we do not want people doing them. However, in the Christian world, Bentham and Austin proposed the separation of law and morals in the 19th century, which in a way implies that ethics applies only to issues not regulated by law (and then the separation boundary is different in every country!). Although this view started to change in the middle of the 20th century, the separation still exists, which for me does not make much sense. I prefer the Muslim view, where ethics applies to everything and law is a subset of it. 

Q9. A recent article you co-authored “is meant to provide a reference point at the beginning of this decade regarding matters of consensus and disagreement on how to enact AI Ethics for the good of our institutions, society, and individuals.” Can you please elaborate a bit on this? What are the key messages you want to convey?

Ricardo Baeza-Yates: The main message of the open article that you refer to is freedom of research in AI ethics, even in industry. This was motivated by what happened with the Google AI Ethics team more than a year ago. In the article we first give a short history of AI ethics and the key problems that we have today. Then we point to the dangers: losing research independence, dividing the AI ethics research community in two (academia vs. industry), and the lack of diversity and representation. Finally, we propose 11 actions to change the current course, hoping that at least some of them will be adopted.

……………………………………………………………

Ricardo Baeza-Yates is Director of Research at the Institute for Experiential AI of Northeastern University. He is also a part-time Professor at Universitat Pompeu Fabra in Barcelona and Universidad de Chile in Santiago. Before that, he was CTO of NTENT, a semantic search technology company based in California, and prior to these roles he was VP of Research at Yahoo Labs, based in Barcelona, Spain, and later in Sunnyvale, California, from 2006 to 2016. He is co-author of the best-selling textbook Modern Information Retrieval, published by Addison-Wesley in 1999 and 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the Board of Governors of the IEEE Computer Society, and between 2012 and 2016 he was elected to the ACM Council. Since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named ACM Fellow and in 2011 IEEE Fellow, among other awards and distinctions. He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989, and his areas of expertise are web search and data mining, information retrieval, bias and ethics in AI, data science, and algorithms in general.

Regarding the topic of this interview, he is actively involved as an expert in many initiatives, committees, and advisory boards related to Responsible AI around the world: the Global AI Ethics Consortium, the Global Partnership on AI, IADB’s fAIr LAC Initiative (Latin America and the Caribbean), the Council of AI (Spain), and ACM’s Technology Policy Subcommittee on AI and Algorithms (USA). He is also a co-founder of OptIA in Chile, an NGO devoted to algorithmic transparency and inclusion, and a member of the editorial committee of the new AI and Ethics journal, where he co-authored an article highlighting the importance of research freedom on ethical AI.

…………………………………………………………………………

Resources

AI and Ethics: Reports/Papers classified by topics

– Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence. European Commission, 8 April 2019. Link to .PDF

– White Paper on Artificial Intelligence – A European Approach to Excellence and Trust. European Commission, Brussels, 19.2.2020, COM(2020) 65 final. Link to .PDF

– Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. LINK

– Recommendation on the Ethics of Artificial Intelligence. UNESCO, November 2021. LINK

– Recommendation of the Council on Artificial Intelligence. OECD, 22 May 2019. LINK

– How to Assess Trustworthy AI in Practice. Roberto V. Zicari, Innovation, Governance and AI4Good, The Responsible AI Forum, Munich, December 6, 2021. DOWNLOAD .PDF: Zicari.Munich.December6,2021

Related Posts

On Responsible AI. Interview with Kay Firth-Butterfield, World Economic Forum. ODBMS Industry Watch. September 20, 2021

Follow us on Twitter: @odbmsorg
