Understanding and Ethics

Some helpful folks have pointed out that my last post concerned my birthday. It also concerned some of my theoretical and conceptual interests, which are oriented around the capacity to shift organizational structures and patterns of work (especially in technology development) towards modes focused on collective benefit, regeneration and mutual support. This post reflects on the last year of my work and outlines where my thinking has arrived, while also acknowledging the AMAZING projects I’ve been working on.

Understanding and Explanation: Understanding Automated Decisions

In January, I completed the Understanding Automated Decisions project (FINAL REPORT HERE), linking a research team at LSE with designers at technology studio Projects by IF to show possible ways of explaining how automated systems, including AI systems, make decisions. The delight in this project was in connecting MSc student researchers Nandra Anissa, Paul-Marie Carfantan, Annalisa Eichholtzer and Arnav Joshi with Georgina Bourke and her team from IF. We all debated, gesticulated, scribbled, schemed, plotted and blogged our way to an interdisciplinary discussion of explanation and its potential value, culminating in a large and very orange gallery show at LSE.

Some of this work has been focused on specific start-up and small company projects. For the first phase of this project we built prototype interfaces showing how on-demand insurance rates are calculated from risk factors associated with specific data, drawing on the academic research on . This kind of ‘explanatory interface’ works well when data streams are straightforward. Yet even in simple forms of machine learning, where data from previous behavior is processed to generate risk calculations, the interfaces that are easiest to design are unlikely to fully explain the process of machine learning.

Things become even more complex in the case of federated learning, as we discovered at the end of the project through exchanges with Google’s UX team (here’s IF’s blog on this project). The balance between security, privacy, and explanation of the processes through which information is shared between personal devices and centralized network services that can run global updates is very difficult to strike. We proposed that perhaps individual users should be able to trust third parties to manage how closely a model fits with a set of parameters that are important for individuals. Here’s how IF’s designers envision this:

Ethics and Technology Development: “doing the right thing”

As I worked on Understanding Automated Decisions I was struck by how important the idea of ‘doing the right thing’ was to my collaborators, not only at IF but within small organizations. This was also something that we saw in the dozens of startups that we engaged with in the Virt-EU Project. Many small companies argued that while ethics was important, it was too slow or difficult (or perhaps would be best done by people outside of organizations). Others, though, oriented their business towards doing ethics, especially within ‘tech for good’ companies. “Doing” vs “postponing” ethics provides a way of thinking of ethics as a practice rather than as something to be complied with.

To put it another way, building interfaces to explain was a way of doing ethics – a way of trying to do the right thing.

Across tech cultures, doing the right thing or doing good is often invoked. We are bound up in an ideology of progress through technological development, and want to use our power to shift this progress in a particular direction. Now that various scandals have revealed how the current models for technology development and the tech industry create harm, new perspectives are needed.

Beyond Consequentialism

The consequentialist ethical tradition, where the ‘goodness’ of decisions is assessed in relation to their measurable consequences, is often applied in technologists’ reflections on the responsibility of creating new technologies like AI or connected systems. The Moral Machine experiment, for example, approached concerns about the ethics of connected vehicle systems by accumulating a list of moral conundrums that these systems are likely to encounter.

As I have experienced over decades, the hope of technologists that they can do the right thing also suggests that virtue ethics is a key part of cultures of technology production. A reading of Shannon Vallor’s book Technology and the Virtues shows that many different philosophical traditions offer ways of looking at good actions related to technology. Virtuousness is often invoked in projects that evoke a ‘hacker ethic’, which has been described as following liberal individualistic principles and as conflating means and ends. Analysing hacker ethics as forms of virtue ethics repositions the virtue ethics critique of technology development: ‘doing the right thing’ – or ‘not being evil’ – can motivate opposition to regulatory action focused on responsibility or restitution in cases of harm.

Philosopher Elizabeth Anscombe argues that the main focus of virtue ethics should be on how an ethical person would behave when faced with a particular ethical dilemma. Such a positioning holds a commitment to concepts such as excellence and virtue, instead of consequences, utility or the greatest good for the greatest number, as in consequentialist or utilitarian ethics.

Flourishing, Capabilities, Care

However, the foundation of virtue ethics is not only oriented towards goodness. It’s also fundamentally focused on human flourishing – eudaimonia. Personally, I believe that flourishing should include the flourishing of ecosystems, living environments and the capacity for continued life on earth. A strict virtue ethics perspective that focuses only on human flourishing, in relation to a set of individual virtues defined primarily by Western enlightenment values, fails to account for the need for others to flourish in relation with us. This points towards an ecological, systems-based ethics that underpins Donna Haraway’s work on interspecies kinship and the many traditions of thought that consider a cosmopolitics – from Zoe Todd’s description of Inuit philosophy of the climate to Isabelle Stengers’ cosmopolitics of Gaia.

In the next months, I’ll be working with colleagues from Virt-EU to identify how two other aspects of ethics might be helpful to consider in more detail. These are the capabilities of not only individuals but organizations to act, and the care that is required to sustain the functioning of systems at a practical scale.

The capabilities of start-ups, for example, are influenced by the political-economic context they operate within, their ways of generating financing and attention, and their skills for product development.

From a care perspective, we could think about users of technologies as participants and producers of knowledge, not only of value. Quantified Self and wearable technologies are important to think about here, since previous research identifies that data streams from intimate connected devices are often important aspects of relationships of care between people: Laura Forlano writes vividly of the logic of care behind sharing data needed to maintain life with a disability.

There is much more to say here, but briefly: it has become clear to me, as I reorganize my way of doing scholarship, that focusing on flourishing, capabilities and care transforms the way we can think of knowledge being made, while also providing points of practical intervention in technology development that can address the reductive nature of focusing on consequence or narrow individual virtue.

Where Next

After a year of reflection and regeneration, my next work will focus on identifying and understanding the hybridizing knowledges that emerge across contexts of difference (human/non-human, socio/technical, indigenous/migrant). This focus positions openness to and respect for many forms of knowledge as core values. By attending to the hybridizing of contexts and knowledges, across space and time, new ways of knowing and being may emerge – and they are urgently needed.