How to make Artificial Intelligence fair, transparent and accountable:

Ideally, the process begins before the design phase

By AJung Moon, Executive Member, IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems; CEO, Generation R Consulting

Data-driven algorithms and artificial intelligence (AI) already influence many aspects of our personal and business lives. They are becoming more sophisticated, useful, and pervasive. Owing in part to the rapid advancement of powerful algorithms, AI has created not only new business opportunities worldwide, but also concerns among consumers, policymakers and developers of the technology.

These concerns need to be addressed. In fact, practitioners of data science, big data, and machine learning have been actively addressing social and ethical concerns that pertain to our increasingly algorithmic society. Yet questions linger. Can learning algorithms be designed to be fair? How can we make the complex algorithms more transparent and their output explainable?

Businesses should be able to answer questions about fairness, transparency, and accountability, because discriminatory and opaque algorithms translate into real risks: catastrophic brand damage, lawsuits, and distrust in employee-employer relationships. These risks lend urgency to the discussion of how to make AI fair, transparent and accountable, and, therefore, trustworthy and widely accepted.

‘Fairness’ for you and your algorithm

Though some experts use the term AI to refer to software artifacts with intelligence equal or superior to a human’s (“strong AI” or “artificial general intelligence”), I use the term to include machine learning and other algorithms that supplement or replace traditionally human decisions.

In an increasingly digital world, data analytics offer attractive and competitive opportunities for efficiencies and value creation. Yet the responsibility to make insightful sense of significant patterns in data via algorithm-driven analytics remains ours, not that of the technical systems we create. For instance, if we detect a pattern that reflects an imbalance in the number of female and male software engineers in a company, and we decide not to perpetuate that pattern, human decision makers should make the appropriate changes in hiring practices.

My example, of course, is obvious, the stuff of headlines. But ignoring the obvious can have real consequences in the form of unintentional discrimination against individuals of a certain gender, race, or ethnicity.

In practice, however, capturing the notion of fairness in an algorithm can be elusive. Everyone has an idea of what fairness means to them, but what is considered to be fair by one individual or group may not easily transfer to others. Using gender information to diagnose breast cancer and prostate cancer, for example, would be highly useful and appropriate. Using gender information to determine the quality of new hires for a computer scientist position, on the other hand, would not.
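To make the point concrete, here is a minimal sketch, in Python, of two common fairness metrics (selection rate, the demographic parity view, and true positive rate, the equal opportunity view) applied to the same hypothetical hiring predictions. The data, groups and framing are my own illustration, not drawn from any real system; the point is that deciding which gap "counts" as unfair is a value judgment, not a purely technical one.

```python
# A hypothetical illustration: two common fairness metrics applied to the same
# invented hiring predictions. Nothing here describes a real system or dataset.

# Each record: (group, actually_qualified, model_recommends_hire)
records = [
    ("A", True,  True), ("A", True,  True),  ("A", False, True),  ("A", False, False),
    ("B", True,  True), ("B", True,  False), ("B", False, False), ("B", False, False),
]

def selection_rate(group):
    """Share of a group the model recommends hiring (demographic parity view)."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def true_positive_rate(group):
    """Share of qualified people in a group the model recommends (equal opportunity view)."""
    rows = [r for r in records if r[0] == group and r[1]]
    return sum(r[2] for r in rows) / len(rows)

for g in ("A", "B"):
    print(g, "selection rate:", selection_rate(g),
          "true positive rate:", true_positive_rate(g))

# The output shows gaps between groups under both metrics; on other data the two
# metrics can disagree. Choosing which one your organization must satisfy is the
# real "fairness" decision, and it cannot be delegated to the algorithm.
```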

The initial consideration of what fairness means to your organizational culture should be leavened by a consideration of what fairness might mean to the full gamut of affected stakeholders.

Why should we care about ‘Transparency’?

In a technology context, the notion of transparency can be multi-layered. We hear of “black box algorithms,” meaning that we don’t know the inner workings of the algorithms, either because they are proprietary or because they are not understandable to interested parties. The interested parties could be the developers of the system, a third-party evaluator or consumers.

Thus the notions of understandability and interpretability become important. Is a stakeholder able to interpret and understand the workings of a system and its outputs? An end-user might forgo the need for complete and transparent access to the underlying algorithm and the dataset if easily understandable information about the system is provided to them by a qualified, trustworthy expert or entity. When people are able to understand how something works, they are more likely to use the system appropriately and to trust those who develop and deploy it.
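As one small, hypothetical sketch of what such understandable information can look like, a deployer might give an end-user a plain-language breakdown of how a simple scoring model reached its output, rather than exposing raw model internals. The feature names, weights and applicant values below are invented for illustration and assume a simple linear model; real explanation needs will vary with the system and the audience.

```python
# A hypothetical sketch: explaining a simple linear score to a non-expert by
# reporting each feature's contribution. All names and numbers are invented.

weights = {"years_experience": 0.6, "relevant_certifications": 0.3, "internal_referral": 0.1}
applicant = {"years_experience": 4, "relevant_certifications": 1, "internal_referral": 0}

# The total score is a weighted sum; each term is one feature's contribution.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

for feature, value in sorted(contributions.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature}: contributed {value:.2f} of the total score {score:.2f}")
```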

‘Accountability’ means knowing ‘things can go wrong’

Accountability for an algorithm and how it is applied begins with those who design and deploy the system that relies on it. Ultimately, designers and deployers share responsibility for the consequences or impact an algorithmic system has on stakeholders and society. They must ask themselves hard questions.

Does an AI application do what it is designed to do? How can we tell? Have we thought about all possible failure modes of an algorithm and actively mitigated the probabilities of high-risk failures?

Accepting the possibility of unintended consequences is an important element in accepting accountability.

In my work in this area, I find designers and deployers often surprised by failures in algorithmic ethics. Those failures typically stem from an accountability gap within the organization.

Often, senior management isn’t aware of the business risks inherent in different design decisions related to the algorithms their business depends on. Conversely, algorithm designers are often not in a position to make critical business decisions on behalf of the organization. The result is an accountability gap that goes undiscovered and uncommunicated, leaving the organization vulnerable to unforeseen risks for which it may later be held accountable.

Wrestling with practicalities

What can be done to address fairness, transparency, and accountability issues today?

One of the first steps is to define and understand the needs and concerns of key stakeholders, including those without formal representation. What are their values? How do the different stakeholder groups view what is fair within the application domain? What information about the system do they need and desire to know?

From that starting point, fairness, transparency and accountability issues can be identified and characterized, ideally through a comprehensive internal assessment assisted by an impartial, outside third party. The findings of such an assessment can then be integrated into design and organizational practices, including keeping a human in the loop where the identified risks are unacceptable.

Such an approach tempers the unrealistic drive to create the “perfect algorithm,” and instead, allows you to build an ecosystem of developers, technology and end-users that operates as a fair, transparent, and accountable system.

Resources at hand

Technologists need not wrestle alone with these daunting challenges. Resources for ongoing discussions, developing awareness of the issues and practical guidance for design processes are readily available.

For instance, I belong to the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which is dedicated to ensuring every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained and empowered to prioritize ethical considerations so that AI is advanced for the benefit of humanity.

One significant Initiative output is the iterative resource, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, now in its recently released second edition, which encompasses input from scores of global thought leaders.

The Initiative, in tandem with the IEEE Standards Association, also has a newly established working group addressing related standards under the title, Project Defining Model Processes for Addressing Ethical Concerns During System Design.

Your participation in these and related IEEE programs is welcome. The complexity of the issues raised here, as well as the need for broad-based stakeholder involvement, can only be addressed with the widest possible input from the technology community.
