Ethical Risk Assessment of Automated Decision Making Systems

By Steven Finlay, Head of Analytics at HML.

In my previous article (https://www.odbms.org/2015/02/importance-ethics-data-science/) I discussed the need to consider ethical as well as legal issues when designing automated decision making systems. In this article, I’m going to describe an approach to assessing the ethical risks associated with these systems. It’s a simple and pragmatic approach that can easily be incorporated as part of a standard risk assessment exercise, which should be undertaken during a system’s design phase.

When undertaking an ethical risk assessment of a decision making system, the first, and arguably most important, factor to consider is the impact that decisions made by the system will have on people’s lives. Figure 1 provides some examples of how different types of decision can be classified in terms of their impact.


Figure 1. Impact of different types of decisions

In Figure 1, a decision making system which only makes decisions that result in insignificant or benign outcomes is classified as a “low impact” system (the green boxes in Figure 1) – for example, a marketing application used to select who to target with offers for washing powder, fruit juice or similar products. At the other end of the scale, systems that make decisions about things such as who should receive treatment for a life-threatening illness have very significant impacts on those involved (the red boxes in Figure 1). There is also a whole host of decisions that lie between these two extremes – for example, who to grant credit to, the price of insurance, and so on (the yellow and orange boxes in Figure 1).
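
By way of illustration, the impact dimension could be encoded as a simple ordered scale. This is a minimal Python sketch; the four levels, and the example decisions attached to them, are my own assumptions rather than a verbatim reading of Figure 1.

```python
from enum import IntEnum

class Impact(IntEnum):
    """Illustrative impact scale (higher = greater impact on people's lives)."""
    LOW = 1        # insignificant or benign outcomes, e.g. grocery marketing offers
    MODERATE = 2   # e.g. the price of insurance (assumed placement)
    HIGH = 3       # e.g. who to grant credit to (assumed placement)
    VERY_HIGH = 4  # e.g. treatment for a life-threatening illness
```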

The second question I ask myself is: Who is the beneficiary of the decisions made by the system? Consider Figure 2.


Figure 2. Beneficiaries

Figure 2 covers the same types of decision as Figure 1, but ranks them in a different way. The more the decision maker benefits from a decision at the expense of the individual, the greater the ethical risk of that decision (the red boxes in Figure 2). For example, an employer makes decisions about who to employ purely for their own benefit; the impact on job applicants is not the employer’s concern. Decisions which are more altruistic in nature, taken specifically to help people or to benefit wider society, sit at the lower end of the risk spectrum (the green boxes in Figure 2). Decisions that lie between the two extremes represent situations where, to varying degrees, both the decision maker and the individual benefit.
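
The beneficiary dimension can be sketched in the same way. Again, the three levels below are illustrative assumptions rather than the exact categories of Figure 2.

```python
from enum import IntEnum

class Beneficiary(IntEnum):
    """Illustrative beneficiary scale (higher = greater ethical risk)."""
    INDIVIDUAL_OR_SOCIETY = 1  # altruistic decisions taken to help people or wider society
    MUTUAL = 2                 # decision maker and individual both benefit to some degree
    DECISION_MAKER_ONLY = 3    # decision maker benefits at the individual's expense, e.g. hiring
```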

The third aspect of the risk assessment focuses on the nature of the data used by the decision making system. In particular, how immutable is the data? Does the data describe something about the person that they can’t change, or is the data more fluid? Figure 3 provides some examples of data that displays varying degrees of immutability.


Figure 3. Immutability of Data

In Figure 3, data items such as age, race and people’s DNA are deemed high risk – people are born with these traits and that’s that. At the other end of the scale are things such as what books people buy and how fast they drive, which are very much within people’s ability to change quickly and with very little effort. There is also a lot of personal data that an individual can change, but where there are various hurdles and difficulties to overcome – the data has a degree of inertia. Consider marital status as an example. I can marry, divorce and marry again as often as I like, but there are social, financial and legal barriers that make it difficult to do this very often.
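
And likewise for immutability. The three levels below, including the “inertial” middle ground, are an assumed simplification of Figure 3.

```python
from enum import IntEnum

class Immutability(IntEnum):
    """Illustrative immutability scale (higher = harder for the person to change)."""
    FLUID = 1      # quick and cheap to change, e.g. books bought, driving speed
    INERTIAL = 2   # changeable but with hurdles, e.g. marital status
    IMMUTABLE = 3  # fixed at birth, e.g. age, race, DNA
```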

The next stage is to bring together the three assessment criteria (impact, beneficiary and immutability). The way this is done is illustrated in Figure 4.


Figure 4. Ethical Risk Assessment

What Figure 4 tells me is that if a decision making system makes decisions that:

  • Have a high impact on people’s lives.
  • Maximize the benefit to the decision maker (to make a buck, optimize their business processes, etc.) at the expense of the individual.
  • Are made on the basis of information over which individuals have no control.

Then this is the most ethically challenging and risky type of decision making. Taking a highly automated, black box approach in these situations, with a sole focus on predictive accuracy and numerical optimization of outcomes, is not going to be sufficient, and will expose an organization to significant risk. I’m not saying an organization can’t use high impact, immutable data to further its own ends, but it needs to tread carefully and be prepared to respond to challenges about the way it uses that data.
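
As a rough sketch of how the three ratings might be combined into an overall assessment, the function below adds the three scores and maps the total onto a risk band. The additive scoring and the band thresholds are my own assumptions – one plausible reading of Figure 4, not the figure’s actual mapping.

```python
def ethical_risk(impact: int, beneficiary: int, immutability: int) -> str:
    """Map the three ratings to an overall ethical risk band.

    Expects impact on a 1-4 scale and beneficiary/immutability on 1-3
    scales, as in the enum sketches above. The additive score and the
    thresholds are illustrative assumptions only.
    """
    score = impact + beneficiary + immutability  # total ranges from 3 to 10
    if score >= 9:
        return "very high"
    if score >= 7:
        return "high"
    if score >= 5:
        return "moderate"
    return "low"

# A life-or-death system, run for the decision maker's benefit,
# driven by immutable data such as DNA: the riskiest combination.
print(ethical_risk(4, 3, 3))  # -> 'very high'
```

An additive score is the simplest possible choice; a real assessment might instead treat any single “red” rating as enough to escalate the overall risk on its own.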

Having identified the risks, the final task is to take appropriate action to mitigate them. This is a three-stage process:

  1. Undertake analysis to identify “at risk” groups that may not be treated fairly (ethically) by the system, e.g. women, ethnic minorities, children, and so on.
  2. Design constraints and override rules to ensure that “at risk” groups are treated in a fair way.
  3. After the system goes live, monitor the situation on a regular basis. Constraints and overrides can then be fine-tuned as required (a minimal monitoring sketch follows this list).
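
By way of illustration of stage 3, here is a minimal sketch of a post-go-live disparity check. The (group, approved) data layout, the group labels and the 10% tolerance threshold are all assumptions made for the example.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs, e.g. ("women", True)."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparities(decisions, tolerance=0.10):
    """Return groups whose approval rate falls more than `tolerance`
    below the overall rate (the threshold is an assumption)."""
    rates = approval_rates(decisions)
    overall = sum(ok for _, ok in decisions) / len(decisions)
    return [g for g, r in rates.items() if overall - r > tolerance]

# Hypothetical post-go-live sample: group_b is approved markedly less often.
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 55 + [("group_b", False)] * 45)
print(flag_disparities(sample))  # -> ['group_b']
```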

Let us continue with the healthcare example, where an automated system is used to make life-or-death decisions about who receives treatment. One might, for example, be concerned that the system tends to favour adults over children for treatment. Prioritizing adults may be optimal from a pure cost/benefit perspective because, on average, adults respond marginally better to treatment than children for this type of illness. However, from a societal view, giving children lower priority than adults would be considered unacceptable by many. Therefore, something needs to be done to redress the balance. One option is to allocate resources separately within each group: one set of decision making criteria is applied to children and a different set to adults.
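
To make the “allocate within each group” option concrete, here is a minimal sketch of stratified allocation. The patient fields, the benefit scores and the per-group quotas are all hypothetical.

```python
def allocate_treatment(patients, quotas):
    """Select patients for treatment separately within each group.

    patients: list of dicts with hypothetical "group" and "benefit_score" keys.
    quotas:   assumed per-group treatment slots, e.g. {"child": 10, "adult": 30}.
    Ranking and funding each group on its own prevents a single global
    cost/benefit ranking from pushing children to the bottom of the list.
    """
    selected = []
    for group, slots in quotas.items():
        cohort = [p for p in patients if p["group"] == group]
        cohort.sort(key=lambda p: p["benefit_score"], reverse=True)
        selected.extend(cohort[:slots])
    return selected

# A global top-2 ranking would treat both adults; stratified allocation
# guarantees the child a slot.
patients = [
    {"group": "child", "benefit_score": 0.61},
    {"group": "adult", "benefit_score": 0.70},
    {"group": "adult", "benefit_score": 0.66},
]
print(allocate_treatment(patients, quotas={"child": 1, "adult": 1}))
```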

