In the Time of Corona 1

The sun in the early afternoon is very warm. BBC 3 is playing lieder music and dimly I can hear the toddlers who live next door fussing before their afternoon nap. Outside I see birds and some brazen field mice foraging on the bits I dropped in the garden. It is as if everything were normal. Abnormally normal.

And yet. A stillness hangs in the air. An airplane has just passed by, an ordinary thing here in Central London. And yet. The news tells me that airlines are massively cutting back their flights, so perhaps this ordinary tearing of the air will become more extraordinary.

The UK’s official government policy has not yet closed schools or workplaces. It is, however, advising individuals to self-isolate, and this, bit by bit, takes apart the fragile infrastructure of society. As privileged folks like me, with jobs done at a laptop, start working at home and stop travelling, the number of people circulating around this busy city starts to drop.

It would be tempting to think of this time of waiting, this gathering stillness, as the defining experience of this time of viral spread.

And yet.

What is happening now is not the story of this crisis. This is not a narrative of this time, but of several other times. In one sense, what is happening now is the preparation for future viral times. Mutual Assistance groups are forming, loosely, gathering together the well-intentioned. The one I’m following seems largely to generate influence in the here and now by informing the well-intentioned about how much work their neighbours are already doing running food banks, community organizations and support networks – as well as linking up individuals who have been isolated and need someone to run to the pharmacy.

In truth though, these mutual aid networks are not for now. They are building capacity for the time when the real narrative of the pandemic begins: the time when many people are infected, and so many are sick that seeing doctors is impossible. When the privilege of being healthy also embeds the responsibility to care for others – and not by adding to a spreadsheet or getting a prescription but by feeding the hungry, washing the feverish, cleaning the floor. Add to this the terrifying realization that many people who are immuno-compromised may not be with us when we emerge on the other side.

The other time of the virus is far longer, encompassing both the recent past and the longer future. This time of the virus includes its origins in animals whose habitats were encroached upon and who became (like people too) enmeshed in a persistent logic of capitalism that has destroyed the regenerative capacities of the earth’s ecosystem, and perhaps the regenerative capacities of people too. I talked a little bit about this in an interview here – but in my hopeful moments I like to entertain the thought that the practice of a quieter, slower pace of work may begin to lay the groundwork for the changes of practice that have been necessary for so long – to assuage the climate crisis and to create the capacity for a society capable of regeneration and survival.

There are darker ends to the narrative of course. A country destroyed. A country in mourning for people it failed to save. Individual sadness, anxiety and grief brought on by social separation. Further distress for the people least capable of sustaining it: people living in refugee camps, recent arrivals who don’t feel at home, people struggling to feed their children or who are experiencing violence at home.

And yet.

As the sun slants away and the animals flit in and out of view, I feel the change of times.

Ethics in Practice: Bravery and Creativity

This is a repost from https://www.adalovelaceinstitute.org/beginning-just-ai-bravery-and-creativity-for-ethics-in-practice/

I’m very excited, and a little nervous, to be starting a network focused on understanding and reframing justice and flourishing in the age of AI. Here, I build on the work we did at VIRT-EU developing ideas about virtue, capability and care, to focus on the idea of flourishing in relation to sustainability (both in terms of accessibility and repairability of technologies and systems) and justice (encompassing both the capabilities of technology developers and an ethical orientation towards care in terms of its consequences). My aim in this network is to begin by understanding the current positions researchers have taken towards ethics, and, by focusing on some specific tricky problem areas, to develop new capabilities to work differently, across disciplines. As I wrote below, this is daunting, hence my call for bravery and creativity.

The UK’s Arts and Humanities Research Council (AHRC) and the Ada Lovelace Institute are partnering to establish a network of researchers and practitioners to join up the study of AI and data-driven technologies with understandings of social and ethical values, impacts and interests. The JUST AI (Joining Up Society and Technology in AI) network will build upon research into AI ethics, orienting it around practical issues of social justice, distribution, governance and design. Using a collaborative approach, it will investigate and create research capacity around ‘just AI’ – AI that is ethical and works for the common good and is effectively governed and regulated. The network’s name also points to the need for work on the social and ethical facets of AI to cut through the ‘hype’ or techno-solutionism that often accompanies AI research.

Instigating the JUST AI network

I’ve recently agreed to instigate the formation of the network to convene people working across disciplines and find new ways of linking research and artistic communities together.

In my work, I have been interested in how it’s possible to shift organisational structures and patterns of work (especially in technology development) towards modes focused on collective benefit, regeneration and mutual support. The acronym Joining Up Society and Technology in AI resonates with my longstanding interest in how people create technologies in relation to the values they hold, and how we all respond to their influences. Using AI in the title gestures to the influence of discussions about AI, data and automated systems, and as a general term gives us lots of space to work across the span of techno-social systems in these areas.

Ethics in practice

Across tech cultures, doing the right thing or doing good is often invoked as a core value. The network presents an amazing opportunity to develop research into how ethics is practised, as well as to shift the ways that research, policy and practice on ethics are performed.

We are bound up in an ideology of progress through technological development – and want to use our power to shift this progress in a particular direction. But there are important questions to answer about whether aiming for virtuous self-improvement can influence technology within a broader setting of powerful companies, venture capital expectations and continuing injustice often worsened by the adoption of data-based technology.

In this context, we need to begin thinking more of ethics as a practice, and consider how practices intersect with power, and how both may be changed. The end goal of any of these changes, challenges and directions of travel is to enhance the capacity for what philosophers call eudaimonia – human flourishing.

Lots of areas of flourishing are impacted by new data/AI systems, such as health, care, transport and the physical environments of our cities. Of course, in the climate emergency, flourishing isn’t only a human concern; environmental justice and the actions needed to bring forward regenerative culture are important for ensuring long-term flourishing for all living beings.

We need to understand how to enable people to engage with the opportunities and constraints that their life situation presents, and to not only develop themselves but to support others in creating new conditions. Philosophically, taking care and creating capability are also part of the conversation.

The JUST AI network seeks to move work on ethics away from discussions of consequence and towards consideration of practices in relation to long-term flourishing, care and development of capability.

Bravery, creativity and change

In my work I gather empirical evidence that shows the challenges presented by data/AI technologies: for our systems of care, for the places we work and live, and for the living environment of which we are a part. Addressing these challenges requires bravery and creativity, a commitment to connecting and respecting different expertise and ways of working, and open-mindedness about possibilities. I have been accused of being an optimist – and exploring ‘just AI’ with researchers and practitioners will, I hope, provide some new ways forward. I’m so excited to start.

Understanding and Ethics

Some helpful folks have pointed out that my last post concerned my birthday. It also concerned some of my theoretical and conceptual interests, which are oriented around the capacity to shift organizational structures and patterns of work (especially in technology development) towards modes focused on collective benefit, regeneration and mutual support. This post reflects on the last year of my work and outlines where my thinking has arrived, while also acknowledging the AMAZING projects I’ve been working on.

Understanding and Explanation: Understanding Automated Decisions

In January, I completed the Understanding Automated Decisions project, (FINAL REPORT HERE) linking a research team at LSE with designers at technology studio Projects by IF to show possible ways of explaining how automated systems, including AI systems, make decisions. The delight in this project was in connecting MSc student researchers Nandra Anissa, Paul-Marie Carfantan, Annalisa Eichholtzer and Arnav Joshi with Georgeina Bourke and her team from IF. We all debated, gesticulated, scribbled, schemed, plotted and blogged our way to an interdisciplinary discussion of explanation and its potential value, culminating in a large and very orange gallery show at LSE.

Some of this work has been focused around specific start-up and small company projects. For the first phase of this project we built prototype interfaces to show how on-demand insurance rates are calculated based on risk factors associated with specific data, drawing on the relevant academic research. This kind of ‘explanatory interface’ works well when data streams are straightforward. Even in a simple form of machine learning, where data from previous behavior would be processed to generate risk calculations, the interfaces that seem easiest to design are unlikely to fully explain the process of machine learning.
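To make the idea of an ‘explanatory interface’ concrete, here is a minimal sketch (in Python) of how a rule-based, on-demand quote could carry its own explanation. It is an illustration only, not the prototype we built with IF: the risk factors, weights and wording are hypothetical.

```python
# A minimal sketch (not the actual prototype) of an 'explanatory interface'
# for a rule-based, on-demand insurance quote. Risk factors, weights and
# wording are hypothetical illustrations.

BASE_RATE = 5.00  # hypothetical base premium per day, in GBP

# Hypothetical multiplicative risk factors keyed by input field.
RISK_FACTORS = {
    "item_value_over_1000": 1.4,
    "previous_claims": 1.6,
    "covered_abroad": 1.2,
}

def quote_with_explanation(answers: dict):
    """Return a premium and a human-readable trace of how it was calculated."""
    premium = BASE_RATE
    explanation = [f"Base rate: £{BASE_RATE:.2f} per day"]
    for factor, multiplier in RISK_FACTORS.items():
        if answers.get(factor):
            premium *= multiplier
            explanation.append(
                f"'{factor}' applies, so the price is multiplied by {multiplier}"
            )
    explanation.append(f"Final price: £{premium:.2f} per day")
    return premium, explanation

if __name__ == "__main__":
    price, steps = quote_with_explanation(
        {"item_value_over_1000": True, "previous_claims": False, "covered_abroad": True}
    )
    print("\n".join(steps))
```

Because every step here is an explicit rule, the interface can simply replay the calculation. A learned model offers no such ready-made trace, which is exactly the limitation noted above.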

Things become even more complex in the case of federated learning, as we discovered at the end of the project through exchanges with Google’s UX team (here’s IF’s blog on this project). Balancing security, privacy and explanation is very difficult when information is shared between personal devices and centralized network services that can run global updates. We proposed that perhaps individual users should be able to trust third parties to manage how closely a model fits with a set of parameters that are important for individuals. IF’s designers have sketched how this might look in practice.
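For readers unfamiliar with federated learning, here is a schematic sketch of the exchange described above: each device adjusts a model on its own data and shares only the resulting parameters, never the raw data, with a central service that averages them into a global update. This is a toy illustration of the general idea, not Google’s implementation or IF’s design.

```python
# A schematic sketch of federated averaging: each device fits a tiny model to
# its own data locally and shares only the model parameter (never the raw
# data) with a central service, which averages the updates into a new global
# model. An illustration of the general idea, not any particular system.
import random
random.seed(0)

def local_update(device_data, global_weight, lr=0.1, steps=20):
    """One device nudges the global weight towards its own data."""
    w = global_weight
    for _ in range(steps):
        x = random.choice(device_data)
        w -= lr * (w - x)  # gradient step for a squared-error 'model'
    return w

def federated_round(all_devices, global_weight):
    """The central service averages the locally computed weights."""
    local_weights = [local_update(data, global_weight) for data in all_devices]
    return sum(local_weights) / len(local_weights)

if __name__ == "__main__":
    # Raw data stays on each device; only weights travel.
    devices = [[1.0, 1.2, 0.9], [2.1, 1.9], [1.5, 1.4, 1.6]]
    weight = 0.0
    for round_number in range(5):
        weight = federated_round(devices, weight)
        print(f"round {round_number}: global weight = {weight:.3f}")
```

The explanation problem sits in the middle of this loop: the user never sees their own data leave the device, only the averaged model that comes back.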

Ethics and Technology development: “doing the right thing”

As I worked on Understanding Automated Decisions I was struck by how important the idea of ‘doing the right thing’ was to my collaborators, not only at IF but within small organizations. This was also something that we saw in the dozens of startups that we engaged with in the Virt-EU Project. Many small companies argued that while ethics was important, it was too slow or difficult (or perhaps would be best done by people outside of organizations). Others, though, oriented their business towards doing ethics, especially within ‘tech for good’ companies. “Doing” vs “postponing” ethics provides a way of thinking of ethics as a practice rather than as something that needs to be complied with.

To put it another way, making interfaces to explain was a way of doing ethics – where we wanted to be doing the right thing.

Across tech cultures, doing the right thing or doing good is often evoked. We are bound up in an ideology of progress through technological development, and want to use our power to shift this progress in a particular direction. Now that various scandals have revealed how the current models for technology development and the tech industry create harm, new perspectives are needed.

Beyond Consequentialism

The consequentialist ethical tradition, where the ‘goodness’ of decisions is assessed in relation to their measurable outcomes, is often applied to reflections by technologists on the responsibility for creating new technologies like AI or connected systems. The Moral Machine experiment, for example, approached concerns about the ethics of connected vehicle systems by accumulating a list of moral conundrums that these systems are likely to encounter.

As I have experienced over decades, the hope of technologists that they can do the right thing also suggests that virtue ethics is a key part of cultures of technology production. A reading of Shannon Vallor’s book Technology and the Virtues suggests that many different philosophical traditions offer ways of looking at good actions related to technology. Virtuousness is often evoked in projects that invoke a ‘hacker ethic’, which has been described as following liberal individualistic principles and as conflating means and ends. Analysing hacker ethics as forms of virtue ethics repositions the virtue ethics critique of technology development: ‘doing the right thing’ – or ‘not being evil’ – can motivate opposition to regulatory action focused on responsibility or restitution in cases of harm.

Philosopher Elizabeth Anscombe argues that the main focus of virtue ethics should be on how an ethical person would behave when faced with a particular ethical dilemma. Such a positioning holds a commitment to concepts such as excellence and virtue, instead of implications, utility or the greatest good for the greatest number, as in the case of consequentialist or utilitarian ethics.

Flourishing, Capabilities, Care

However, the foundation of virtue ethics is not only oriented towards goodness. It’s also fundamentally focused on human flourishing – eudaimonia. Personally, I believe that flourishing should include the flourishing of ecosystems, living environments and the capacity for continued life on earth. A strict virtue ethics perspective that focuses only on human flourishing, in relation to a set of individual virtues defined primarily by Western enlightenment values, therefore fails to account for the need for others to flourish in relation with us. This is the insight of an ecological, systems-based ethics that underpins Donna Haraway’s work on interspecies kinship and the many traditions of thought that consider a cosmopolitics – from Zoe Todd’s description of Inuit philosophy of the climate to Isabelle Stengers’ cosmopolitics of Gaia.

In the next months, I’ll be working with colleagues from Virt-EU to identify how two other aspects of ethics might be helpful to consider in more detail. These are the capabilities of not only individuals but organizations to act, and the care that is required to sustain the functioning of systems at practical scale.

The capabilities of start-ups, for example, are influenced by the political-economic context they operate within, the ways of generating financing, attention, and the skills for product development.

From a care perspective we could think about users of technologies as participants and producers of knowledge, not only of value. Quantified Self and wearable technologies are important to think about here, since previous research identifies that data streams from intimate connected devices are often important aspects of relationships of care between people: Laura Forlano writes vividly of the logic of care behind sharing data needed to maintain life with a disability.

There is much more to say here, but briefly, it has become clear to me as I reorganize my way of doing scholarship that focusing on flourishing, capabilities and care transforms the way we can think of knowledge being made, as well as providing points of practical intervention in technology development that can address the reductive nature of focusing on consequence or narrow individual virtue.

Where Next

After a year of reflection and regeneration, my next work will be focusing on identifying and understanding the hybridizing knowledges that emerge across contexts of difference (human/non-human, socio/technical, indigenous/migrant). This identification positions openness to and respect for many forms of knowledge as core values. By focusing on hybridizing of contexts and knowledges, across space and time, new ways of knowing and being may emerge, as they are urgently needed.

On 40

In the morning, tomorrow morning, I will be forty. It seems a time of reckoning, of all the things I expected and all the other ones that happened instead.

When I turned thirty I threw a great big party at a country house and invited all my friends. We swam in a pond and drank too much wine and made a big potluck dinner on the terrace. I wore a princess crown for the whole day. Earlier that week I’d handed in my PhD thesis and started a party that lasted for days and days, ending in that potluck. The previous month I’d proposed to my boyfriend and he’d accepted. I had a job waiting for me in Oxford. Here’s me submitting the PhD with my brother – I’m relieved, but also anxious about everything to come.

During that 30th birthday week I listed in my mind a few things that I was *sure* I’d do before I was 40. Write a book. Have a baby. Get a ‘proper job’. Get married and live happily ever after. But what I didn’t have any sense of at that time, was not WHAT I wanted to do, but HOW I wanted to do it – how I wanted to live. In the intervening decade, I did many things. I moved across the world. I learned to row. I planted gardens in three different houses and once dug a pond. I worked very hard at being a ‘good academic’ – publishing, going to conferences, meeting people and impressing them, devising and writing grants that got rejected over and over (and sometimes not), pushing through the internal politics that shape a department, a university, a neoliberal concept of education. I did have the baby, who is six years old and indomitable. I did get married, although now I am in the process of getting divorced. I still haven’t finished writing a book.

Not What but How

This time around, I have no plans for what I want to do, but I have many thoughts about how I want to be. In the past several months, as my marriage has dissolved and I find myself in the miasma of emotions accompanying divorce (jealousy, shame, fear, anger, hurt, incredulity, and sometimes even hope), I have been struck by the feeling that I’m finally forced to think about how I want my life to feel. Here’s me on a plane. I took this myself, and I love how you can see so many emotions, but also something fresh and exciting in my eyes.

 

This week, I made a jellyfish costume for my daughter and we went to her friend’s birthday party. I met some local women for a drink one evening, and sat in the pub on another evening with the mothers of kids in my daughter’s class. I spent Sunday afternoon with the people who are part of one of my research teams. We drank pisco sours and ate ceviche and I watched birds while we talked about Pokemon and ethics. Later in the week we all met again to sort out our fieldwork on Internet of Things developers and their ethics-in-practice, and write a bit of one of the papers we are drafting together. I also had a meeting with the design consultancy who work with me on the Understanding Automated Decisions project, with some amazing LSE research staff. We dug into some complicated questions about how to explain automated decisions to different groups, in different contexts. We heard that our proposal to exhibit work explaining how algorithms and machine learning make decisions will be shown at the LSE Atrium Gallery in October. I finished writing a chapter on how the economics of data change the way our everyday life is mediated, and an article on the moral justifications that technology developers use to make ‘what works’ into ‘what is good.’ I also got so sick with a cold that I had to spend a day in bed.

It’s a full life, in other words. But what surprises me about it – and what allows me to think about the next decade with hope – is that its richness comes from the encounters with others, whether at the pub or around a meeting table. In the last ten years I often felt lonely, as I railed against the devastating and seemingly intractable problems of climate change, unethical technology, decayed democracy and violence against migrants. Of course, alone these are intractable. And anyway my ideas about them are as bad as anyone’s. But when I start to listen to others, and to make space for those ideas, something else begins to emerge. And making that space, and making these connections, is part of how I am hoping I’ll live in the next ten years. Here’s me and the lovely Funda Ustek-Spilda, one of the many talented people I get to work with.

Making Space for Love

The reading I’m doing now that most sustains me comes from Hannah Arendt and Erich Fromm. Both of these thinkers lived through the rise of Nazi Germany, the ensuing world war, and the subsequent conservative turn in American politics. Both have important lessons for how we might want to live now. Arendt urges us to make space for political life, to develop our capacities to act and not to find ourselves hindered by thinking of politics in too narrow a way. Fromm, in his work on love, argues that the capacity to love requires a capacity to be with oneself, and to resist the notion that connections with others are undertaken on the basis of ‘fair’ (or capitalist) exchange. He argues that our experiences of the world are shaped by the economic and political conditions that we live under, and in order to change these we need to find our own internal capacity to generate and give love – not merely romantically but as parents, friends, and members of a community.

 

Sometime in the next twelve months I’ll have to move to a new home, submit my book manuscript, and go up for promotion at work. As “whats” they loom, and I am tempted to prepare to master them, to scale peaks and to check off items on a list. As “hows”, though, I think about what opportunities these challenging things provide for me to make space for others, for me to learn, and for all of us to develop the capacity to act.

I think this decade is going to be amazing. And I have no idea what it will bring.

Information Politics and the Internet of Things

The connected world is a complex one but that does not mean our information rights have disappeared

This post summarizes some of the points made in a plenary talk at the Restart Project’s FixFest Repair Conference, held in London on Oct 6, with themes related to what we are working on in the Virt-EU project. Here’s the video.

The internet, so we are told, is now around us and potentially embedded everywhere. But this vision of the ‘internet of things’ masks a fractured landscape of devices that only work on certain systems, of black-boxes that mask the protocols and rules through which things like personal assistants, connected appliances or even autonomous cars collect and share data. This black-boxing of connected systems makes it difficult for the vision of a fully-connected ‘internet of things’ to come to pass: instead, rival companies compete to have their ecosystem be the one that links up your personal assistant, calendar, online shopping, connected appliance and transit app. The connected world therefore has ample opportunity for surveillance and for new forms of marketing.

It also has important implications for how we think about information politics. The right to know about what’s going on around us is often cited as a reason to support a diverse media, to oppose ‘fake news’ and to rally around facts. But a right to know can also extend into a right to repair – as I explored at the Restart Project’s FixFest conference. In discussion with repair advocate Kyle Wiens, I outlined how ‘rights to repair’ now depend on being able to gain access to information about how devices work. Kyle has been advocating for years that people should be able to get access to manuals describing how electronics are put together. But now, changes in technology and its intellectual property rights are confounding the right to repair.

Manuals can provide illustrations of how things work, but this doesn’t work as well when hardware collapses into software. Firmware is notoriously difficult to completely understand – you can reverse engineer it to see how it works, but this takes a long time, and if you only have the compiled code – the functional software – rather than the original source, it can be difficult to figure out why a device is working the way that it is.

Ownership models are changing too: the ‘right to repair’ is threatened by the move from an ‘ownership’ to a ‘service’ paradigm. This might not seem a big deal, but as North American farmers with John Deere tractors discovered, owning your tractor and paying a service contract on the software that runs it are very different things. Kyle and other repair advocates have been working with farmers to push back against these service contracts and allow access by individual farmers.

Service contracts underpin many of the ‘connected objects’ we encounter, and in some cases we violate them as soon as we attempt to examine or repair the device. But some legislation is now coming forward that secures some rights to repair – for example, consumer rights to access manuals and spare parts through European legislation on longer product lifetimes. Other connected systems demonstrate the complexities of expanding advocacy related to the right to repair.

For example, manufacturers of connected objects such as connected cars may have security concerns about opening up systems. This is partly due to some high-profile hacks of connected car systems. Networks of connected objects make other objects vulnerable, so if you leave some open (even to repair) you might have opened up vulnerabilities: hearing aids, pacemakers, and so on. These are always cast as being exploitable, and the price for resisting exploitation is often the right to understand how something works.

The security monitoring company PenTest Partners write, “Autonomous vehicles require significant investment to develop, and the output is considered a trade secret. The real-time nature of self-driving vehicles means that this sensitive code must be inside the vehicle, potentially allowing an attacker to access it. How do you allow users to update the firmware without leaking all the details to competitors?”

Some features of Android phones have been proposed as a solution: individual phones can be modified, while updates to the firmware are held on a central server and then negotiated at the point of the software update. Again, the objections to this are related to the risks of having networks of connected systems – but also to a lack of trust that people won’t use unlocked phones in ways that make them susceptible to malware. The deeper problem is of course that understanding these risks, and whether the mitigation works, requires the ability to look into and understand them.

That’s why I’m proposing that rights to repair might also now be accompanied by rights to scrutinize systems – the latter secures the access to knowledge and the former the ability to take action using that knowledge in a way that’s meaningful for the communication world we find ourselves in. These rights link with the necessity to be able to examine features of automated or otherwise opaque systems. Yes – the connected world is a complex one, but no, that does not mean our information rights have disappeared.

What to do about biased AI? Going beyond transparency of automated systems

Automated decision making and the difficulty of ensuring accountability for algorithmic decisions have been in the news. This is a big deal if we are to start addressing some of the serious ethical issues in developing Artificial Intelligence systems that can’t easily be made transparent. I’m breaking out of a concentrated book-writing space to offer my voice, to outline some of the directions I think we should be taking to address the wicked problems of ethics, algorithms and accountability, and to stand up and be counted as one of the people opening out discussions in this space, so that it can be more diverse.

A few months ago I submitted a response to the UK’s Science and Technology Committee consultation on automated decision making. This consultation asked specifically how transparency could be employed to allow more scrutiny of algorithmic systems. I outlined some reasons why transparency alone is not appropriate for making algorithms accountable. I argue that:

  • Automated decision making systems including artificial intelligence systems are subject to a range of biases related to their design, function and the data used to train and enact these systems.
  • Transparency alone cannot address these biases.
  • New regulatory techniques such as ‘procedural regularity’ may provide methods to assess algorithmic outcomes against defined procedures ensuring fairness.
  • Transparency might apply to features of data used to train algorithms or AI systems.

My response identifies that one of the issues in this space is that previous lessons about regulation and about the function of computing systems have been lost. Automated decision making using computational methods is not new: predictive techniques including example-based or taught learning systems, which can make predictions based on examples and generalize to unseen data, were developed in the 1960s and refined in the following decades. There is consensus, now as then, that automated systems are biased. This is a very big problem for a society that wants to expand automated decision making to many more areas, with the expansion of more generalized AI systems. So here are some key points from my consultation response, along with some thoughts on what might be done next.

Automated systems are biased – but why?

Researchers agree that these systems hold the risk of producing biased outcomes due to:

  • The function of algorithmic systems being black boxed to the operators, through design choices that make either the process of decision-making or the factors considered in decision-making too opaque to directly influence, or that limit the control of the designer[1].
  • Biases in data collection reproduced in the outputs of the system, for example medical school entry algorithms as far back as the 1970s[2].
  • Biases in interpretation of data by algorithms that would, in humans, be balanced by conscious attention to redressing bias[3], for example sexist biases in language translation tools.
  • Biases in the ways that learning algorithms are ‘tuned’ based on the behavior of testing users[4], as exemplified by sexist and racist implications in Google autocomplete suggestions (these are likely to have been generated by designers failing to tune the autocomplete away from such biased suggestions).
  • Biases resulting from the insertion of algorithms designed for one purpose into a system designed for another, without consideration of any potential impact, for example the use of algorithms designed for high-frequency trading in biometric border control systems[5].
  • Biases in training data used to train the decision-making systems, as evidenced by racial bias in facial-recognition algorithms trained with data containing faces of primarily Caucasian origin[6] (a toy sketch of this effect follows below).
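As a toy illustration of that last point, the sketch below trains a deliberately simple classifier on data in which one (invented) group is heavily over-represented, and then measures accuracy separately for each group. The groups, numbers and model are made up; the point is only that skew in the training set reappears as skew in the error rates.

```python
# A toy illustration of how a skewed training set produces skewed error rates.
# The groups, distributions and model below are invented for illustration only.
import random
random.seed(0)

def sample(group, n):
    """Draw n one-dimensional 'feature' values for a group."""
    centre = {"group_a": 0.0, "group_b": 2.0}[group]
    return [(random.gauss(centre, 1.0), group) for _ in range(n)]

# Training data heavily over-represents group_a (950 examples vs 50).
train = sample("group_a", 950) + sample("group_b", 50)

def predict(x, k=5):
    """Classify x by majority vote among its k nearest training examples."""
    neighbours = sorted(train, key=lambda pair: abs(pair[0] - x))[:k]
    labels = [group for _, group in neighbours]
    return max(set(labels), key=labels.count)

# Evaluate on balanced test sets: group_b suffers far more errors, because
# most of its members' nearest neighbours come from the over-represented group.
for g in ("group_a", "group_b"):
    test = sample(g, 500)
    accuracy = sum(predict(x) == group for x, group in test) / len(test)
    print(f"{g}: accuracy {accuracy:.1%}")
```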

Addressing Algorithmic Bias – beyond transparency to design and regulation

These biases are well identified, and have cultural impacts beyond the specific cases in which they appear. But they can be addressed – although biases in AI systems like neural networks can be more difficult to address. The bottom line is that research, industrial strategy and regulatory developments need to be connected together.

Limitations of Algorithmic Transparency Alone

Transparency alone is not a solution. Relying on transparency as the sole or main principle for regulation or governance is unlikely to reduce biases resulting from expanded algorithmic processing.

  • Alone, transparency mechanisms can encourage false binaries between ‘invisible’ and ‘visible’ algorithms, failing to enact scrutiny on important systems that are less visible[7].
  • Transparency doesn’t necessarily create trust, and may result in platform owners refusing to connect their systems to others.
  • Transparency cannot apply to some types of systems: neural networks distribute intelligent decision-making across linked nodes, so there is no possibility of transparency in relation to the decisions of each node or the relationships between nodes[8].
  • Transparency cannot address the change of systems over time.
  • Transparency does not solve the privacy issues related to combining together personal data sources.
  • Transparency of source code can permit audit of a system’s design but not of its outputs, especially in machine-learning systems[9].

What’s to be done?

In writing this consultation response I suggested that transparency of training data might be one way of addressing the shortcomings of relying on transparency alone. There are some other potential directions to pursue as well. These include:

  • Transparency connected to context
    • Take a user-centred approach to design: empower users to make informed decisions by being transparent in the context of the service they are using.
  • Accountability vs explanation
    • The General Data Protection Regulation says that users have the right to object to an automated decision that has been made about them. This suggests that users will need both an explanation of how a decision has been made, and the right to raise an objection to the decision. Researchers and designers should investigate how to identify and explain decision-making processes in ways that support both.
  • Scrutability of algorithmic inputs, eg. training data
    • It is becoming more widely agreed that training data should be available for data scientists to analyse, to identify and interrogate systemic bias in training data before it is programmed into decision-making systems. We need much more research into how training data can be made scrutable, and what regulatory processes need to be set up in order to facilitate this; a rough sketch of what such scrutiny might involve follows below.
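As a rough sketch of what that scrutiny might look like in code: given a training set, report how well each group is represented and how outcomes are distributed across groups, before the data is used to train anything. The field names and figures are hypothetical.

```python
# A minimal sketch of training-data scrutiny: before a dataset is used to
# train a decision-making system, report how well each group is represented
# and how outcomes are distributed across groups.
# Field names ('group', 'outcome') and the example records are hypothetical.
from collections import Counter

def audit(records, group_field="group", outcome_field="outcome"):
    counts = Counter(r[group_field] for r in records)
    positives = Counter(r[group_field] for r in records if r[outcome_field] == 1)
    total = len(records)
    for group, n in counts.items():
        rate = positives[group] / n if n else 0.0
        print(f"{group}: {n / total:.1%} of the data, {rate:.1%} positive outcomes")

if __name__ == "__main__":
    # Invented example records standing in for a real training set.
    data = (
        [{"group": "A", "outcome": 1}] * 620 + [{"group": "A", "outcome": 0}] * 280 +
        [{"group": "B", "outcome": 1}] * 30 + [{"group": "B", "outcome": 0}] * 70
    )
    audit(data)
```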

Who is Doing the Work?

Dozens of scholars and practitioners are working on these issues. I have added footnotes to some of the classic work in computer science that has looked at these issues in the past, and I hope that the wide ranging conversation that’s required to address these issues continues. It’s certainly part of my next phase of work, as I continue to work on issues of ethics and values in the design of connected systems.

[1] Dix, Alan (1991) Human Issues in the Use of Pattern Recognition Techniques. Available at http://alandix.com/academic/papers/neuro92/neuro92.pdf

[2] British Medical Journal 5 March 1988. Available at: http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC2545288&blobtype=pdf

[3] Caliskan, Aylin, Joanna J. Bryson and Arvind Narayanan (2017) Semantics derived automatically from language corpora contain human-like biases. Science, Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

[4] Dix, Alan (1991) Human Issues in the Use of Pattern Recognition Techniques. Available at http://alandix.com/academic/papers/neuro92/neuro92.pdf

[5] Amoore, L. (2013). The politics of possibility: risk and security beyond probability. Duke University Press.

[6] Klare B. F., Burge M. J., Klontz J. C., Vorder Bruegge R. W., Jain A. K. . “Face Recognition Performance: Role of Demographic Information”, IEEE Transactions on Information Forensics and Security, Vol. 7, Issue 6, 2012, pp. 1789-1801.

[7] Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 1461444816676645.

[8] Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. Forthcoming in 165 University of Pennsylvania Law Review

[9] Kroll et al (2017)

 

Our Lives in Data: Mediating citizenship

Why do we care that algorithms make decisions, or that social media platforms hold all of our data and market to us? Yesterday, I went with the current crop of MSc Data and Society students to the Science Museum’s Our Lives in Data exhibit. Sponsored by Microsoft and PwC among others, the exhibit includes demonstrations of face recognition systems and aggregate data profiles created from thousands of taps in and taps out on the London Underground. In viewing these examples, I was inspired to revise a recent talk that I delivered at the Vrije Universiteit in Brussels. Here is a new version of the talk: Citizenship and (Location) Data, that refers to examples in the Science Museum.

Gallery view of Our Lives in Data – an exhibition exploring how big data is transforming the world around us; uncovering some of the diverse ways our data is being collected, analysed and used.

Technological Frames for Citizenship

As long as we’ve had new technological innovations, we’ve had people connecting technical features to forms of life. Early 20th century sociologist Georg Simmel even worried about how clocks and watches would create an urban society where people rushed for the sake of it. The expansion of electricity, and then radio and telephone, all implicitly established ‘connected’ and ‘disconnected’ citizens – and also created regulations that stipulated rights to access such technologies. Our ongoing concern about ‘digital divides’ in access to internet connectivity is a response to the assumption that one’s full participation in civic life depends on access to information technology – so we think about claiming ‘communication rights’. Of course, thinking about expanded access as a precondition for participation also creates a new space for access providers: governments and companies that promise to bridge the digital divide but who also benefit from selling more people the means of access.

Now, technologies of datafication transform everyday acts into streams of data and make them available through platforms. A new dynamic of relationships is established – and it has a significant impact on how we might think about the ‘active’ citizenship where people speak and are heard on things that matter.

From access to action

Broadly speaking, it’s possible to see a shift in ways of talking about and building supports for citizenship that moves from thinking about citizenship as access to a network towards thinking about it in terms of producing data for action. This shift has big implications for people and for institutions, because it changes the kinds of intermediaries at work. Rather than organizations providing access, the new civic intermediaries collect, process and present data. And just like the time when big companies like Cisco and IBM created strategies to participate in expanding access to networks, we now have big companies (sometimes the same ones) as well as governments and third sector organizations developing ways to benefit from or intervene in data collection.

 

This shifts the conceptual plane on which it’s possible to make rights claims about citizenship. The data ecosystem that needs to be established to make data actionable is based on IT access that is so ubiquitous as to include connected everyday objects. There is no longer a claim for a right to be included, to become a member of a network, but an expectation that everyone is on the network and, furthermore, that they are constantly producing data that can be captured to represent their actions on this network. There is a compulsion to participate, as to stay outside the network would remove all of the benefits of being connected.

 

Staying on the network produces data. The internet’s architecture makes it possible to trace clicks and links between content, and the expansion of connectivity to GPS-enabled mobile devices and other sensor-equipped technologies means that more things produce data. But data by itself is meaningless. It has to be cleaned, rendered, calculated and presented. Location data is a good example of this. By itself, it is just a string of coordinates. Other scholars have argued that the paradigm of datafication means that citizenship gets collapsed onto data production – that ‘citizens become sensors’ (Gabrys, 2015). This is certainly part of the process, but ideas about good citizenship are also created in the overall framework of processing data in order to take action.

Instead of claiming rights of access, citizenship is shifted towards contribution to an aggregate for the purposes of decision-making. In the museum, Transport for London presents posters aggregating the underground or bus trips of thousands of people. In the aggregate, patterns start to emerge and the exhibit suggests that these patterns create ways for agencies like Transport for London to make decisions about how to provide underground service that is more optimal for citizens.

Optimization and prediction

It’s also worth thinking about how ‘optimization’ has itself become a framework for citizenship – a model of consumer choice extended to the provision of services and the everyday experiences of people. Optimization depends on effective prediction – theorists of governance who follow Foucault have identified how technologies of rationalization have positioned certain kinds of civic acts as desirable and others as undesirable – and many predictions are now made based on aggregated data.

 

When a main framework for civic life is in relation to optimization, some things are going to be easier to fit in the framework than others. It’s relatively straightforward to optimize transportation or the collection of recycling, but more difficult to optimize volunteering, knowing your neighbours, or creating local capacity. This also raises some interesting issues of ethics – like the ethics of aggregation. When prediction decisions are based on aggregate data, a lot depends on what that data includes. While press coverage focuses on the role of algorithms in face recognition, insurance calculations and other realms, what’s really at stake is the data. At the Science Museum, my students wondered if an algorithm judging age and happiness based on facial features might develop judgement rules based on a larger sample of Caucasian-featured faces compared to Asian or African-featured faces.

Prediction in the service of optimization might also, over time, structure kinds of ‘ideal’ ‘good citizenship’ based on people behaving in ways that create data or play into processes of ‘optimization’. There are myriad examples of this: from the data produced for transport providers to the exchange of data about friends and connections for continued access to media platforms that make money by optimizing the connection between audiences and advertisers. We are beginning to understand the implications of the monopolies on intermediation that these companies create. The expansion of the mediated network suggests that everyone can participate, and hence, in order to behave well in this new environment, that they SHOULD participate.

 

Optimization as a frame also influences civic projects that are attempting to create bottom-up alternatives. For example, FixMyStreet, a (now-classic) interface for crowdsourced contributions of local problems, collects location data points and user-generated content identifying maintenance problems that cities should fix. Gabrys (2014) identifies how this creates a kind of ‘computational paradigm’ for citizenship. I argue, building on this, that the FixMyStreet platform itself plays an important role in creating optimized citizenship: it not only suggests formats for easily computable data (as Gabrys points out) but does the computation and returns the results in ways that allow governments to optimize their expenditure on maintenance by identifying areas of maintenance that, if addressed, will be positively viewed by the people who submitted data. This optimizes the relationship between the government and these people, but the relationship cannot account for the views of the people who have not used the platform. On one level, these people might be excluded from access to communication networks, but on another level, their failure to submit data to a platform system that could calculate it into something that would make government’s work optimal removes them from consideration. Optimizing government’s work in this example requires civic data production – of calculable units – but it requires an intermediary to work on it too.

Sometimes civic data projects build their own intermediaries. This is certainly a step in the right direction, but it’s not exactly a disruption of the process of defining citizenship in the direction of optimization. This has some consequences, as the drive towards optimization can, over time, shift influence away from participatory projects. Cyclestreets, a non-profit organization that develops cycling maps based on contributions from individual cyclists, developed a trip planning and problem reporting application in Hackney, a London borough with a very high level of cycling. The app collects data from the GPS function of cyclists’ mobile phones and provides this, along with information on the purpose of the trip and basic demographics, to Hackney Council so that the council can understand use of cycling infrastructure as well as its problems. Individual cyclists who use their bikes primarily for utility journeys such as getting to work may also want to use the app to record times, distances and calories burned, share journeys and upload reports of problems they have encountered in their daily journeys, including photographs and descriptions – much like FixMyStreet. Cyclestreets then uses volunteered data to create cyclist-produced maps, but all of the data is available to Hackney council to analyse and use in policy decision-making.

 

This application is developed from open-source technical tools and creates a relatively direct means for citizens to share data with government, via problem reporting and sharing of chosen cycle journeys. The app is free to download, and Cyclestreets does not benefit financially from its use. However, it also relies on the logic of datafication, both in terms of the cyclist’s ideal knowledge of their own cycling behaviour and in terms of the borough’s decision-making: the data from the app legitimates some decisions about cycling infrastructure development and perhaps limits others. It also reiterates a logic of optimization. Because of this, the Cyclestreets app and many others like it are being superseded by corporate apps that request access to many types of customer data from smartphones rather than relying on volunteered data. These applications, including Citymapper, are extremely easy to use and provide very well-calculated cycling routes that do not require much input from users. So the civically-minded data citizens imagined by Cyclestreets, who volunteer data, are displaced by the consistently data-producing Citymapper customers, who benefit from more optimal experiences of navigation.

 

Optimization is one of several possible actions taken in relation to data. These examples have illustrated how working towards optimization changes the mediation of citizenship, and thus, in some ways, the qualities or expectations created in relation to citizenship. Optimization as an action valorizes data creation and increases the significance of intermediaries who can make civic actions optimal – which creates different forms of exclusion than those related to lack of access.

 

Dilemmas of Technological Citizenship

My point in all of this is that the creation of technological frameworks for citizenship creates some key dilemmas. The dilemmas result from the frames or protocols that define ‘good’ technological citizenship as working in a particular way. I also think there are some productive and interesting ways to respond to these dilemmas. There are both normative and critical perspectives to take. I’ve talked specifically about optimization as a feature of the focus on data and calculability. There are other features: participation, transparency and predictability. All of these features build from and are wound into the framework of data for action. These dilemmas can’t easily be resolved; across the past twenty years of techno-civic projects in cities, they show how hopes for technology expose ongoing power differentials. These projects generate dilemmas relating to the ways that citizenship should be understood or enacted in relation to newly available technological tools. The dilemmas show that power and agency are always at work in influencing who can speak, be heard, or act in relation to things that matter in the places they live.

Algorithms, Accountability, and Political Emotion

Last week (it seems a century ago) I was at the Big Boulder social data conference discussing the use of algorithms in managing social data. Then, since I live in the UK, Brexit Events intervened. Sadness and shock for many have since morphed into uncertainty for all. Online media, driven by the social analytics I heard about in Boulder, shape and intensify these feelings as we use them to get our news and connect with people we care about. This raises some really important issues about accountability, especially as more news and information about politics gets transmitted through social media. It also stirs up some interesting questions about the relation between the industry’s focus on sentiment analysis of social media around brands and the rise of emotion-driven politics.

So in this post I’ll talk about why algorithms matter in moments of uncertainty, what it might mean to make them accountable or ethical, and what mechanisms might help to do this.

  1. Algorithms present the world to you – and that’s sometimes based on how you emote about it

Algorithmic processes underpin the presentation of news stories, posts and other elements of social media. An algorithm is a recipe that specifies how a number of elements are supposed to be combined. It usually has a defined outcome – like a relative ranking of a post in a social media newsfeed. Many different data points will be introduced, and an algorithm’s function is to integrate them together in a way that delivers the defined outcome. Many algorithms can work together in the kinds of systems we encounter daily.
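A minimal, hypothetical version of such a recipe might look like the sketch below: several signals about a post are combined, with chosen weights, into a single score, and the scores determine the ranking. The signals and weights are invented; real newsfeed algorithms are vastly more complex, but the structure – a weighted combination towards a defined outcome – is the same.

```python
# A minimal, hypothetical 'recipe': combine several signals about a post,
# with chosen weights, into one score that determines its rank in a feed.
# Signals and weights are invented for illustration.
WEIGHTS = {"recency": 0.5, "friend_closeness": 0.3, "reactions": 0.2}

def score(post):
    """Combine the post's signals into one ranking score."""
    return sum(WEIGHTS[signal] * post.get(signal, 0.0) for signal in WEIGHTS)

def rank_feed(posts):
    """The defined outcome: a relative ordering of posts."""
    return sorted(posts, key=score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        {"id": "a", "recency": 0.9, "friend_closeness": 0.2, "reactions": 0.1},
        {"id": "b", "recency": 0.4, "friend_closeness": 0.9, "reactions": 0.8},
    ])
    print([post["id"] for post in feed])
```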

One element of algorithmic systems that I find interesting at this moment in time is sentiment. Measuring how people say they feel about particular brands in order to better target them has been a key pillar of the advertising industry for decades. With the expansion of social analytics, it’s now also the backbone of political analysis aimed at seeing which leaders, parties and approaches to issues acquire more positive responses. But could too much of a focus on sentiment also intensify emotional appeals from politicians, to the detriment of our political life? What responsibility do social media companies bear?

Social Media Companies Filter Politics Emotionally

Increasingly, media companies are sensitive to the political and emotional characteristics of responses to the kinds of elements that are presented and shared. Sentiment analysis algorithms, trained on data that categorizes words into ‘positive’ and ‘negative’, are widely employed in the online advertising sphere to try to ascertain how people respond to brands. Sentiment analysis also underpinned the infamous ‘Facebook emotion study’, which sought to investigate whether people spent more time using the platform when they had more ‘positive’ or ‘negative’ posts and stories in their feeds.

With the expansion of the emotional response buttons on Facebook, more precise sentiment analysis is now possible, and it is certain that emotional responses of some type are factored in to subsequent presentation of online content along with other things like clicking on links.

Sentiment analysis is based on categorizations of particular words as ‘positive’ or ‘negative’. Algorithms that present media in response to such emotional words have to be ‘trained’ on this data. For sentiment analysis in particular, there are many issues with training data, because the procedure depends on the assumption that words are most often associated with particular feelings. Sentiment analysis algorithms can have difficulty identifying when a word is used sarcastically, for example.
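A minimal sketch of this word-categorization approach shows both how it works and why sarcasm defeats it. The tiny lexicon here is invented; real systems use much larger, learned word lists, but inherit the same assumption that a word carries a fixed feeling.

```python
# A minimal sketch of lexicon-based sentiment analysis: words are pre-labelled
# as positive (+1) or negative (-1), and a text's score is the sum of its word
# labels. The lexicon is invented for illustration.
LEXICON = {"great": 1, "love": 1, "happy": 1, "terrible": -1, "hate": -1, "awful": -1}

def sentiment(text):
    return sum(LEXICON.get(word.strip(".,!?").lower(), 0) for word in text.split())

if __name__ == "__main__":
    print(sentiment("I love this, it is great"))              # clearly positive: +2
    print(sentiment("Oh great, another delay. Just great."))  # sarcasm still scores +2
```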

Similarly, other algorithms used to sort or present information are also trained on particular sets of data. As Louise Amoore’s research investigates, algorithm developers will place computational elements into systems that they build, often without much attention to the purposes for which they were first designed.

In the case of sentiment analysis, I am curious as to the consequences of long term investments in this method by analytics companies and the online media industry. Especially, I’m wondering about whether focusing on sentiment or optimizing presentation of content with relation to sentiment is in any way connected to the rise of ‘fact-free’ politics and the ascendancy of emotional arguments in campaigns like the Brexit referendum and the American presidential primaries.

  2. Algorithms have to be trained: training data establish what’s ‘normal’ or ‘good’

The way that sentiment analysis depends on whether words are understood as positive or negative gives an example of how training data establishes baselines for how algorithms work.

Before algorithms can run ‘in the wild’ they have to be trained to ensure that the outcome occurs in the way that’s expected. This means that designers use ‘training data’ during the design process. This is data that helps to normalize the algorithm. For face recognition, the training data will be faces; for chatbots, it might be conversations; for decision-making software, it might be correlations.

But the data that’s put in to ‘train’ algorithms has an impact – it shapes the function of the system in one way or another. A series of high profile examples illustrate what kinds of discrimination can be built into algorithms through their training data: facial recognition algorithms that categorize black faces as gorillas, or Asian faces as blinking. Systems that use financial risk data to train algorithms that underpin border control. Historical data on crime is used to train ‘predictive policing’ systems that direct police patrols to places where crimes have occurred in the past, focusing attention on populations who are already marginalized.

These data make assumptions about what is ‘normal’ in the world, from faces to risk taking behavior. At Big Boulder a member of the IBM Watson team described how Watson’s artificial intelligence system uses the internet’s unstructured data as ‘training data’ for its learning algorithms, particularly in relation to human speech. In a year where the web’s discourse created GamerGate and the viral spread of fake news stories, it’s a little worrying not to know exactly what assumptions about the world Watson might be picking up.

So what shall we do?

  3. You can’t make algorithms transparent as such

There’s much discussion currently about ‘opening black boxes’ and trying to make algorithms transparent, but this is not really possible as such. In recent work, Mike Ananny and Kate Crawford have created a long list of reasons for this, noting that transparency is disconnected from power, can be harmful, can create false binaries between the ‘invisible’ and the ‘visible’ algorithms, and that transparency doesn’t necessarily create trust. Instead, it simply creates more opportunities for professionals and platform owners to police the boundaries of their systems. Finally, Ananny and Crawford note that looking inside systems is not enough, because it’s important to see how they are actually able to be manipulated.

  1. Maybe training data can be reckoned and valued as such

If it’s not desirable (or even really possible) to make algorithmic systems transparent, what mechanisms might make them accountable? One strategy worth thinking about might be to identify or even register the training data that are used to set up the frameworks that key algorithms employ. This doesn’t mean making the algorithms transparent, for all the reasons specified above, but it might create a means for establishing more accountability about the cultural assumptions underpinning the function of these mechanisms. It might be desirable, in the public interest, to establish a register of training data employed in key algorithmic processes judged to be significant for public life (access to information, access to finance, access to employment, etc). Such a register could even be encrypted if required, so that training data would not be leaked as a trade secret, but held such that anyone seeking to investigate a potential breach of rights could have the register opened on request.
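
To give a rough sense of what such a register might record, here is a speculative sketch; the field names, and the idea of storing a cryptographic fingerprint of the dataset rather than the data themselves, are my own illustration of the idea rather than anything specified above.

```python
# A speculative sketch of what one entry in a training-data register might hold.
# The fields and the hashing approach are illustrative, not an existing standard.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingDataRecord:
    system: str          # the algorithmic system the data were used to train
    dataset_name: str
    provenance: str      # where the data came from
    intended_use: str    # e.g. credit scoring, content ranking
    collected: str       # ISO date of collection
    fingerprint: str     # hash of the dataset, verifiable later without
                         # publishing the data themselves

def fingerprint(dataset_bytes: bytes) -> str:
    return hashlib.sha256(dataset_bytes).hexdigest()

record = TrainingDataRecord(
    system="loan-decision-v2",                          # hypothetical system
    dataset_name="historical_applications_2010_2015",   # hypothetical dataset
    provenance="internal CRM export",
    intended_use="credit risk scoring",
    collected="2016-01-01",
    fingerprint=fingerprint(b"...the dataset file would go here..."),
)

# The register itself could be an append-only log of such entries, held in
# escrow and opened only when an investigation requires it.
print(json.dumps(asdict(record), indent=2))
```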

This may not be enough, as Ananny and Crawford intimate, and it may not yet have adequate industry support, but given the failures of transparency itself it may be the kind of concrete step needed to begin firmer thinking about algorithmic accountability.

Ethics of Perverse Systems

Things, of course, cannot go on as they are. The rate of environmental destruction, fossil fuel burning, reactionary politics and censure of debate is of course untenable. So is the high capitalist solution of monetizing every remaining speck in the universe, trading on its futures and leveraging the outcome to secure the fortunes of the fortunate and lock many others into destitution. Abominable is the lack of empathy and the xenophobic turn of politicians (and, I expect, in some way all sorts of people) in the face of people desperate to escape war, imprisoned at borders instead of welcomed and settled.

And so, also, for those of us concerned with the capacity to become ourselves by expressing ourselves, there is the intensification of surveillance of our everyday life, which we know changes our behavior: we become less forthright with ideas, and keep our radical thoughts to ourselves lest they be too disruptive to be heard.

How then should we go on?

Some say we shouldn’t bother. The popular press cover all of the above issues in ways that often appear calculated to disempower. Facebook will feed you ads whether you subscribe to it or not. The sea level will rise whether you drink tap water or not. The rich will manipulate government whether you vote or not. Selfishness is inevitable, social collapse perhaps as well. Some progressive thinkers embed this stance into hope for a post-apocalyptic regeneration of life, but one that is predicated on the suffering of many as the inevitable excess of consumption reaches peak cruelty (perhaps at the same time as peak oil). Some conservatives also see this as inevitable, but aim perhaps to be among the few who benefit. Populists of various political persuasions focus on the villains contained in various pieces of this puzzle. All of it suggests that we are naturally, inevitably horrible people.

Naomi Klein’s recent article in the London Review of Books (based on her Edward Said lecture) reminds us that there are other ways of thinking. She refers to the ‘seven-generation’ rule, which stipulates that we should think about the long-term impact of any action, and leave the natural world in an improved state for those who are to come. She resists the idea of ‘sacrifice zones’ where the land and lives of poor/black and brown people are offered up to safeguard the places that the rich inhabit. Only by not seeing these lives as truly equal – as ‘others’ who can’t really be human – is anyone able to justify this. This follows from Said’s work in defining how Orientalism, this ‘othering’ of people outside the places where power defines itself to reside, justifies treatment that dehumanizes them while also assuring the continuation of easy lives elsewhere. Klein suggests we resist sacrifice and focus on solidarity. This requires the capacity for tolerance and respect of all humans as well as others – as philosopher Achille Mbembe has also pointed out.

Perverse Systems

Klein’s article also started me thinking about one of the key questions of my book project. Is it possible to be hopeful about a technological world? Advanced technology, even of the communicational type that is my focus, is so deeply bound up with the impossible expansion of value extraction from every facet of experience, and by association with violence and exclusion. If my recent research is any indication, attempts to intensify this value extraction now reach into the very material of ordinary life and into our own attempts to make it meaningful by connecting to each other and ourselves. My previous research has indicated that in the same mad dash to extract value that angers indigenous people in Brazil, Canada and the USA – whose rights to be upon and with their land are disappeared to permit more resource exploitation – mobile phone companies have essentially disappeared the right to privacy of their subscribers. In exchange for cheaper calls (and to compensate for expensive investments) location data are collected and packaged. Some companies operate subsidiaries that analyse and sell these data. Both of these activities are ruthless exploitation of realms of life that on their own have meaning and substance on far different registers than their valuation as commodities might suggest. Ethically as well as economically, these are painful, woeful, terrible responses. They create and sustain perverse systems. And because these are unfolding in so many places and on so many scales, it seems impossible to conceive of how to think otherwise.

Yet thinking otherwise and working otherwise are essential, because alternatives too are unfolding in many areas and at many scales, often without much attention.

In 1998 I took a course in environmental philosophy called Environment Enquiry – taught by environmental philosopher Bob Henderson. We read Daniel Quinn’s 1991 philosophical novel Ishmael, which broadly sketches this approach by contrasting the Takers (I think you can probably work out their motivations and actions) with the Leavers, who enact ‘seven-generation’ values and who are bound into traditions and rhythms that hold them. The original text, nearly two decades later, now reads problematically, with a fair nostalgia for an imagined past tribalism and a dash of the ‘noble savage’. Despite the naïveté, there may be some value in the broader opposition between Leavers and Takers, provided we redefine what they are to take account of what we know of the world. In my mind the Leaver category requires contribution as well as living with difference. This isn’t quite how Quinn thought about it, but it is how Said and Mbembe do. Living with difference is really hard. It starts with believing that everyone (yes *everyone*) has the same importance, but that they will enact their own importance in totally different ways.

How could we conceive of technological systems built by Leavers? Neo-tribalists would probably point to the mythological ‘original internet’. Others might look to the leveraging of worldwide networked communications by small groups of people who organize to occupy and slow down extractive capitalism. Oh right, that would be Occupy. Still others might point to commons-based organization of resources, including intellectual property. Oh right – distributed local communication networks. But what about the other things I’ve added to the concept? How can technologies move away from not only embedding difference and Othering but also weaponizing it? Surveillance technologies, for example, do an excellent job of this – collecting more personal information from poor/black and brown people and hence reinforcing difference and threat. It may be possible to think about sensing technology under radically different organizational and cultural conditions, for example, much as these other examples begin from different positions.

I want to identify and celebrate these examples of working differently, but I have also critiqued some of them in my work. I hope I haven’t overplayed the critique, since its purpose was in many cases to identify how difficult it is to move progressive projects away from the knowledge and exchange cultures of currently dominant work. This cuts across many parts of the tech sphere. Personal privacy, for example, is a matter of taking, and holding apart, something of value, rather than sharing and creating relationships through exchange. The reciprocity and openness, the fluidity, required instead is one of the most frustrating things about abandoning the notion of the individual liberal subject. Equally, the perspective of individual responsibility that underpins many projects for the contribution of data or expertise as the foundation of citizenship underplays how complex our sense of responsibility may be when it is always tempered with coercion.

Living another way

In much critical theory of technology I read a profound worry about technology itself. Ursula Franklin argues that technologies are real worlds composed of practices that we undertake all the time, and that they can, through the way they are built, imagined and administered, dismiss entire ways of knowing and being. My work focuses on these practices, but never quite gets away from the worry, as I never manage to square the circle of how or whether technologies could be otherwise. But I know that there are ways of organizing beyond hierarchy, and ways of living beyond value extraction. I am certain that these have communicational elements attached to them as well, and that some of these depend on the construction of technological systems. If this is an act of faith, I will claim it – and try as hard as I can to contribute to making it so.

A surfeit of care

I am suffering from a surfeit of care. I really care. A lot. About a lot of things.

I care that the climate is changing, fast, and that people and animals will die as a result – are dying already, as refugees flee from a war accelerated by drought, and new famines begin in Southern Africa, and as ice melts in the Arctic at a pace never imagined. (I care also that I never managed to visit the Arctic before it melted, but I am not so sure now that I believe any longer in the great education of travel. I worry about going anywhere because I think I might be too sad at what has already gone).

I also care that these changes mean not only death of beings but death of ideas: the knowledge of the seasons, the patterns of the past, the ability to feel a part of nature rather than its enemy.

I care that governments here and in many places have turned away from valuing people and the things that they can build together, and have disassembled and partitioned and sold the very things that make society possible: education, health care, access to water, access to knowledge. I care, thus, about policy and procedure, and the devil in the details of governance documents and institutional arrangements and public oversight. I care about principles, and I will argue them based on careful research.

I care about my students and try to show them a world of ideas that is beyond their own experience; and in teaching them about the hopefully still expansive possibilities of the world I try to convince myself of the same. I care about the ideas themselves: I want them to see that the world, even the material and technical world, is formed of ideas about how to best go about being in it; and even when it appears fixed, it is always changing.

I care about my family, about teaching my daughter things that will help her survive in an uncertain and perhaps incoherent world. I try to wire into her brain the old stories and the new ways, confidence in herself and practical skills and empathy, because surely she will need it. I try also to live gracefully and lovingly alongside the In-House-Hacker, even though I’m so swamped with care that I must sometimes seem bereft.

I care about birds, and toads, and plants, and trees, and forests and animals and people I have never met and never will. I am the result of a globalization of knowledge and the victim of a globalization of care.

And all this care keeps my heart in my throat, makes my skin prickle with sensitivity to every news story about another outrage. It makes me grieve for a certainty I never believed in, and hope for transformations that I sometimes fear I am simply too frightened to force through myself. I worry that I am not doing enough, with every plastic tray I purchase in opposition to my clear desire to live a sustainable life, with every petition I sign knowing it won’t make a difference. With every demo I march on, even. I worry that it is all sound and fury. Because I really, really care.

And somewhere deep down I wish to be released from this care. I wish I could simply detach from the problems of the world, perhaps by ceasing to be an optimist and assuming that I could (WE COULD) never solve them anyway, so why bother. Or maybe by becoming a hedonist and floating away on a cloud of pure experience, unsullied by critique.

But in the here and now, and inside the only mind I have, I’m struggling not to be submerged. Struggling to find a thread and a story that associates rather than dissociates, that integrates and grounds and makes the world meaningful – or makes a new world seem possible, even in the ever-present wreckage of the old.

I don’t know how to do it, but to turn heartache to a song, fear to determination, anxiety to optimism. I don’t know how to do it but to keep swimming, keep kicking, keep breathing and moving and loving. How do you do it? How do you keep afloat?