May

If you were looking for some dark optimism
From a walk among the tower blocks, in the gloaming
What would you miss, in the long low seduction of the light
Waning pink behind the clouds, behind the towers?

The river moves; the air’s scent of flowers
Floats past as I hang on the concrete
(was it always so thick with lichen?)
And weep.

The corner store is closed, shutters down.
No milk or old onions, no sweets.
I saw an ambulance there last week.

By the Thames a couple arm in arm
Springtime romance blooming, their masks fitted tight.
He jokes about throwing himself in the river
“But”, she says, “you’ll be at work”.

In the yellow evening I want to hope
Passing through the square with the bunting
The open pub (landlord in gloves)
And the jolly blonde families in deck chairs
2 metres apart, on their front lawns,
The stylish young arrayed with plastic cups
Celebrating victory 75 years ago.

The dead are still dead.
And the living, us
Are waiting.

This is the easy part.
Songs on the air in the flower scented evening
Barbecue and take-out beer.
Next week, tomorrow, the beer must be served
The trash taken out
The children taught.

And how?
To be alive is
To
Be alive, until
The spring is spring without you.

(In memory of Barbara Powell, November 1950-May 2002)

Machines Explain Things To Me

We’re deep in the mire of a pandemic, and what’s the promise to let us out? A contract with Palantir to process health data and a serious level of investment in AI systems that are meant to move materials between hospitals. An app whose data about your proximity to your neighbour will be processed to find and notify your contacts. Once again, decision-making machines are positioned as helpers.

How deep does it go, our fascination with machines? With numbers, data, the magic of calculation? And now that this fascination is both legitimate and embedded in the designs of social institutions, what are the consequences? This post summarizes the beginnings of my ongoing work on the politics of explanations, reflecting on how information asymmetries are often sustained by the provision of explanations by some for the benefit of others.

Historian of science Lorraine Daston’s work suggests that it might be deeply embedded indeed. She writes: “the cults of communicability and impartiality – again, with or without accuracy – also have an almost unbroken history in the sciences as well as in public life from the seventeenth century to the present . . . even when the truth of the matter was not to be had, numbers could be invented, dispersed to correspondents at home and abroad, and, above all, mentally shared: you and I may disagree about the accuracy and the implications of a set of numbers, but we understand the same thing by them” (1995, p. 9).

In these days of disinformation, deep fakes, and governments that structure their decision-making to make it harder to scrutinize, it seems worth revisiting Daston’s discussions of how and why numbers and expertise are positioned, valorized and legitimated in this way. Daston calls these processes moral economies – the webs of values that function in relation to each other to build up certain legitimate ways of thinking. Philosopher Charles Taylor and my colleague Robin Mansell use a similar notion of social imaginaries to describe the competing but coherent ways that groups imagine and create expectations (including about the ‘natural way’ to build technologies and social systems).

In my own work, I use the term moral orders to evoke the way that these webs of values and practices build up and gain legitimacy, and especially how they are sustained by being described in moral or ethical terms.

As the white-hot heat of AI Ethics has irradiated the technology space for the past two years, it’s possible to see the debates about ‘tech for good’ and ‘ethical AI’ as evidence of these kinds of moral justification. What’s especially interesting is how these justifications, once they move out into the world, can become so thoroughly part of the status quo that they end up embedded in the design of technologies.

A lack of transparency has come to be seen as one of the main risks of the shift towards reliance on machines in automated decision-making. We call for ‘design for fairness’ or ‘auditability’ or ‘transparent design’ as if adhering to certain design principles would produce better outcomes. But even if it is possible to see the biased quality of an automated system, it may not actually be possible to avoid using the system, or to otherwise respond to its failings. Transparency has been much discussed as a necessary, if not sufficient, condition for enhancing public understanding of how automated systems intervene in people’s access to information and their capacity to exercise voice within democratic processes.

Here in the UK (as elsewhere) policy advocates struggle to align existing principles of accountability with the new dynamics of algorithmic or automated decision-making (ADM). In relation to public sector decision making, third-sector organization NESTA has recommended that

“every algorithm used by a public sector organisation should be accompanied by a description of its function, objectives and intended impact”, and that every algorithm should have an identical sandbox version for auditors to test the impact of different input conditions.

In a debate on this topic in the UK House of Lords in February 2020, the shadow Spokesperson (Digital, Culture, Media and Sport), Lord Griffiths of Burry Port (Lab), said: “We must have the general principles of what we want to do to regulate this area available to us, but be ready to act immediately—as and when circumstances require it—instead of taking cumbersome pieces of legislation through all stages in both Houses.”

He asked whether the Information Commissioner’s Office was really the only regulator that can handle this multiplicity of tasks, including online harms and ADM.

Perhaps a greater risk than a lack of transparency is a lack of explainability. Designing a system so that its decision-making process can be explained has come to be viewed as an important goal within parts of computer science and analytic philosophy. The expanding field of Fairness, Accountability and Transparency in machine learning (and the associated FAccT conference) shows how much attention is paid to creating ways to structure principles of transparency, bias reduction or well-specified aspects of fairness into computer systems.

These principled, structured interventions go some way towards addressing specific forms of bias and transparency. However, there is much that they can’t address, including the aspects of automated systems that cannot be effectively explained: forms of machine learning where the associations made between different elements are dynamic, modulating and based on mathematical abstractions and principles that are not amenable to straightforward causal explanations. This means that ‘explanation’, as commonly understood, cannot apply to all aspects of certain types of automated systems. This is one of the challenges in building ‘explainable AI’, and one reason why I have argued that questions about data governance need to be part of the discussion, rather than the focus resting only on explanation and narrow interpretations of transparency.
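
To make that gap concrete, here is a minimal sketch in Python (my own illustration, assuming scikit-learn and invented data; none of it comes from the systems discussed above). A linear model’s parameters map one-to-one onto input features, so they support a feature-level reading; a small neural network’s internals are weight matrices that say nothing directly about any single feature. And this is the simplest case, before the dynamic, modulating behaviour described above even enters the picture.

    # A sketch of the gap between model internals and feature-level explanation.
    # Assumes scikit-learn and NumPy; the data and feature count are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                  # four hypothetical input features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by two of them

    # The linear model yields one coefficient per feature: a readable 'explanation'.
    linear = LogisticRegression().fit(X, y)
    print(linear.coef_)

    # The network yields weight matrices (here with shapes (4, 8) and (8, 1)):
    # numbers that, taken alone, describe no single feature's contribution.
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)
    print([w.shape for w in mlp.coefs_])

Even post-hoc techniques that summarize such a model’s behaviour produce approximations of it, not the causal account that ‘explanation’ usually implies.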

Furthermore, the existing research on explanations overlooks an important element of explanation and explainability: the way that revealing or obscuring information operates to direct explanatory power to some actors rather than others. Are designers of machine learning systems the beneficiaries of explanations advocated by researchers who thought they were advocating for public understanding of technology?

This is one among several important questions to consider when looking at the politics of explanation. Others might concern what is normatively valuable about explanation, or the ways that the history and culture of machine learning systems illuminate values.

Daston’s view of the history of science identifies that what counts as a fact depends on which historical moment you find yourself in. In the current moment, when scientifically verified facts are framed as debatable partly as a means of undermining their influence, and when not only quantifiable but machine-processed information is held to be decisive (even when it is not), what can be made of our fascination with AI?

Sacrifice Poem (who is at work?)

When I twisted my ankle
During the permitted morning run
On Westminster Bridge
(the sound of the tide rushing out with no boats)
I delicately walked past
The hospital where the prime minister
Lies
(don’t say dying).

Police at the gates
Panic on the faces of people rushing in
ID cards held aloft, to face the day.

In front, a rainbow floral display
Perpetual plastic flowers
Reads I [heart] NHS

A worker gives it a glance, rushing.
Does she think, like me
That this effusion seems too close
To a funeral display?

Behind, three ambulances
Are lined up
In the emergency bay.

Across the road, a dozen cameras
A dozen operators
Anchors in suits
Producers on the phone

Wait.
Later their broadcasts speak
Of war and “fighting spirits”
Of bravery and sacrifice.

Down below, in the playground
Of the hospital daycare
A woman runs with a stroller
Mask on her face
Through the doors
With the child
On her way to work.

Who battles:
Who sacrifices?