
What to do about biased AI? Going beyond transparency of automated systems

Automated decision making and the difficulty of ensuring accountability for algorithmic decisions have been in the news. This is a big deal if we are to start addressing some of the serious ethical issues in developing Artificial Intelligence systems that can't easily be made transparent. I'm breaking out of a concentrated book-writing space to offer my voice, to outline some of the directions I think we should be taking to address the wicked problems of ethics, algorithms and accountability, and to stand up and be counted as one of the people opening out discussions in this space, so that it can become more diverse.

A few months ago I submitted a response to the UK's Science and Technology Committee consultation on automated decision making. This consultation asked specifically how transparency could be employed to allow more scrutiny of algorithmic systems. I outlined some reasons why transparency alone is not sufficient for making algorithms accountable. In my response I argue that:

  • Automated decision making systems including artificial intelligence systems are subject to a range of biases related to their design, function and the data used to train and enact these systems.
  • Transparency alone cannot address these biases.
  • New regulatory techniques such as ‘procedural regularity’ may provide methods to assess algorithmic outcomes against defined procedures ensuring fairness.
  • Transparency might apply to features of data used to train algorithms or AI systems.

My response identifies that one of the issues in this space is that previous lessons about regulation and about the function of computing systems have been lost. Automated decision making using computational methods is not new: predictive techniques, including example-based or 'taught' learning systems that make predictions from examples and generalize to unseen data, were developed in the 1960s and refined in the following decades. There is consensus, now as then, that automated systems are biased. This is a very big problem for a society that wants to expand automated decision making into many more areas through more generalized AI systems. So here are some key points from my consultation response, along with some of the directions I think are worth pursuing.

Automated systems are biased – but why?

Researchers agree that these systems hold the risk of producing biased outcomes due to:

  • The function of algorithmic systems being black-boxed to their operators, through design choices that make either the process of decision-making or the factors considered in it too opaque to influence directly, or that limit the control of the designer[1].
  • Biases in data collection reproduced in the outputs of the system, for example medical school entry algorithms as far back as the 1970s[2].
  • Biases in interpretation of data by algorithms that would, in humans, be balanced by conscious attention to redressing bias[3], for example sexist biases in language translation tools.
  • Biases in the ways that learning algorithms are ‘tuned’ based on the behavior of testing users[4], as exemplified by sexist and racist implications in Google autocomplete suggestions (these are likely to have arisen because designers failed to tune autocomplete away from such biased completions).
  • Biases resulting from the insertion of algorithms designed for one purpose into a system designed for another, without consideration of the potential impact, for example the use of algorithms designed for high-frequency trading in biometric border control systems[5].
  • Biases in training data used to train the decision-making systems, as evidenced by racial bias in facial-recognition algorithms trained with data containing faces of primarily Caucasian origin[6] (see the sketch after this list).
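
To make that last point concrete, here is a minimal sketch, my own illustration rather than anything from the consultation response, of how a training sample skewed towards one group can leave a model much less accurate for another. The groups, features and 'shift' parameter are entirely synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic two-feature data; the true decision rule depends on `shift`,
    # so the two hypothetical groups have different underlying patterns.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Skewed training sample: 95% from group A, 5% from group B.
Xa, ya = make_group(1900, shift=0.2)
Xb, yb = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced evaluation: accuracy is typically much lower for the
# under-represented group, even though the code treats everyone 'the same'.
for name, shift in [("group A", 0.2), ("group B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

Nothing in the model's code mentions either group; the disparity comes entirely from who is represented in the data it learns from.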

Addressing Algorithmic Bias – beyond transparency to design and regulation

These biases are well identified, and have cultural impacts beyond the specific cases in which they appear. But they can be addressed, although biases in AI systems such as neural networks can be more difficult to tackle. The bottom line is that research, industrial strategy and regulatory developments need to be connected together.

Limitations of Algorithmic Transparency Alone

Transparency alone is not a solution. Relying on transparency as the sole or main principle for regulation or governance is unlikely to reduce the biases resulting from expanded algorithmic processing.

  • Alone, transparency mechanisms can encourage false binaries between ‘invisible’ and ‘visible’ algorithms, failing to enact scrutiny on important systems that are less visible[7].
  • Transparency doesn’t necessarily create trust, and may result in platform owners refusing to connect their systems to others.
  • Transparency cannot apply to some types of systems: neural networks distribute intelligent decision-making across linked nodes, so there is no meaningful transparency in relation to the decisions of each node or the relationships between nodes[8].
  • Transparency cannot address the way systems change over time.
  • Transparency does not solve the privacy issues related to combining personal data sources.
  • Transparency of source code can permit audit of a system’s design but not of its outputs, especially in machine-learning systems[9] (see the sketch below).
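
To illustrate that last point, here is a small sketch of my own, with entirely made-up data and feature meanings: the 'transparent' code is identical in both runs, yet the decisions differ because they depend on training data that the code alone does not reveal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_model(X, y):
    # The 'transparent' part: this code could be published and audited in full.
    return LogisticRegression().fit(X, y)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))

# Two plausible training sets that the published code says nothing about.
labels_v1 = (X[:, 0] > 0).astype(int)   # past decisions driven by feature 0
labels_v2 = (X[:, 1] > 0).astype(int)   # past decisions driven by feature 1

applicant = np.array([[1.0, -1.0, 0.0]])  # a made-up case to score
print("decision with data v1:", build_model(X, labels_v1).predict(applicant))
print("decision with data v2:", build_model(X, labels_v2).predict(applicant))
# The same audited code typically yields opposite outcomes here, because the
# behaviour is determined by the training data, not the source code alone.
```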

What’s to be done?

In writing this consultation response I suggested that transparency of training data might be one way of addressing the shortcomings of relying on transparency alone. There are some other potential directions to pursue as well. These include:

  • Transparency connected to context
    • Take a user-centred approach to design: empower users to make informed decisions by being transparent in the context of the service they are using.
  • Accountability vs explanation
    • The General Data Protection Regulation says that users have the right to object to an automated decision that has been made about them. This suggests that users will need both an explanation of how a decision has been made and the right to raise an objection to it. Researchers and designers should investigate how such decision making can be identified and explained so that objections are possible in practice.
  • Scrutability of algorithmic inputs, e.g. training data
    • It is becoming more widely agreed that training data should be available for data scientists to analyse, to identify and interrogate systemic bias in training data before it is built into decision-making systems. We need much more research into how training data can be made scrutable and what regulatory processes need to be set up in order to facilitate this (a sketch of what such an audit might look like follows this list).
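
As a hypothetical example of what such scrutiny might look like in practice, the sketch below audits a made-up tabular training set for representation and outcome rates per group; the column names and figures are purely illustrative.

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    # Report how each group is represented and how often it receives the
    # positive label, before the data is used to train anything.
    summary = df.groupby(group_col)[label_col].agg(count="count", positive_rate="mean")
    summary["share_of_data"] = summary["count"] / len(df)
    return summary

# Entirely made-up records: group B is under-represented and has a much
# lower positive-label rate, which an audit like this would surface.
df = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,
    "label": [1] * 60 + [0] * 30 + [1] * 2 + [0] * 8,
})
print(audit_training_data(df, "group", "label"))
```

A report like this does not fix anything by itself, but it gives regulators and data scientists something concrete to interrogate before a system goes into use.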

Who is Doing the Work?

Dozens of scholars and practitioners are working on these issues. I have added footnotes to some of the classic work in computer science that has looked at these issues in the past, and I hope the wide-ranging conversation that's required to address them continues. It's certainly part of my next phase of work, as I continue to work on issues of ethics and values in the design of connected systems.

[1] Dix, Alan (1991) Human Issues in the Use of Pattern Recognition Techniques. Available at http://alandix.com/academic/papers/neuro92/neuro92.pdf

[2] British Medical Journal, 5 March 1988. Available at: http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC2545288&blobtype=pdf

[3] Caliskan, Aylin, Joanna J. Bryson and Arvind Narayanan (2017) Semantics derived automatically from language corpora contain human-like biases. Science, Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

[4] Dix, Alan (1991) Human Issues in the Use of Pattern Recognition Techniques. Available at http://alandix.com/academic/papers/neuro92/neuro92.pdf

[5] Amoore, L. (2013). The politics of possibility: risk and security beyond probability. Duke University Press.

[6] Klare, B. F., Burge, M. J., Klontz, J. C., Vorder Bruegge, R. W. and Jain, A. K. (2012) “Face Recognition Performance: Role of Demographic Information”, IEEE Transactions on Information Forensics and Security, Vol. 7, Issue 6, pp. 1789-1801.

[7] Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 1461444816676645.

[8] Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. Forthcoming in 165 University of Pennsylvania Law Review

[9] Kroll et al. (2017), as [8] above.