What to do about biased AI? Going beyond transparency of automated systems

Automated decision making, and the difficulty of ensuring accountability for algorithmic decisions, have been in the news. This matters if we are to start addressing some of the serious ethical issues in developing Artificial Intelligence systems that can’t easily be made transparent. I’m breaking out of a concentrated book-writing space to offer my voice, to outline some of the directions I think we should be taking to address the wicked problems of ethics, algorithms and accountability, and to stand up and be counted as one of the people opening out discussions in this space, so that it can be more diverse.

A few months ago I submitted a response to the UK’s Science and Technology Committee consultation on automated decision making. This consultation asked specifically how transparency could be employed to allow more scrutiny of algorithmic systems. I outlined some reasons why transparency alone is not appropriate for making algorithms accountable. I argue that:

  • Automated decision making systems including artificial intelligence systems are subject to a range of biases related to their design, function and the data used to train and enact these systems.
  • Transparency alone cannot address these biases.
  • New regulatory techniques such as ‘procedural regularity’ may provide methods to assess algorithmic outcomes against defined procedures ensuring fairness.
  • Transparency might apply to features of data used to train algorithms or AI systems.
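The idea of ‘procedural regularity’ can be made concrete with a toy sketch. The real proposals (e.g. Kroll et al., footnote [8]) rely on cryptographic commitments and zero-knowledge proofs; the hash commitment below, with its made-up `threshold` policy and field names, is only a minimal illustration of the principle that an operator can be held to a pre-committed decision procedure without having to publish it:

```python
import hashlib
import json

def commit_to_policy(policy: dict) -> str:
    """Publish a hash commitment to a decision policy without revealing it."""
    canonical = json.dumps(policy, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def verify_decision(policy: dict, commitment: str, applicant: dict, decision: bool) -> bool:
    """Check (a) the revealed policy matches the published commitment, and
    (b) the recorded decision is what that policy yields for this applicant."""
    if commit_to_policy(policy) != commitment:
        return False
    expected = applicant["score"] >= policy["threshold"]
    return decision == expected

# A regulator holds the commitment; the operator later reveals the policy.
policy = {"threshold": 70}
commitment = commit_to_policy(policy)

print(verify_decision(policy, commitment, {"score": 82}, True))   # True: decision followed the committed procedure
print(verify_decision(policy, commitment, {"score": 55}, True))   # False: decision deviated from it
```

The point of the sketch is that fairness of *procedure* can be checked after the fact even when the procedure itself was never transparent to the public.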

My response identifies that one of the issues in this space is that previous lessons about regulation and about the function of computing systems have been lost. Automated decision making using computational methods is not new: predictive techniques including example-based or taught learning systems, which can make predictions based on examples and generalize to unseen data, were developed in the 1960s and refined in the following decades. There is consensus, now as then, that automated systems are biased. This is a very big problem for a society that wants to expand automated decision making to many more areas, with the expansion of more generalized AI systems. So here are some key points from my consultation response, along with footnotes to some of the classic work in computer science that has looked at these issues.

Automated systems are biased – but why?

Researchers agree that these systems hold the risk of producing biased outcomes due to:

  • The function of algorithmic systems being black boxed to the operators, through design choices that make either the process of decision-making or the factors considered in decision-making too opaque to directly influence, or that limit the control of the designer[1].
  • Biases in data collection reproduced in the outputs of the system, for example medical school entry algorithms as far back as the 1970s[2].
  • Biases in interpretation of data by algorithms that would, in humans, be balanced by conscious attention to redressing bias[3], for example sexist biases in language translation tools.
  • Biases in the ways that learning algorithms are ‘tuned’ based on the behavior of testing users[4], as exemplified by sexist and racist implications of Google autocomplete suggestions (these are likely to have been generated by designers failing to tune the autocomplete suggestions away from such biased suggestions).
  • Biases resulting from the insertion of algorithms designed for one purpose into a system designed for another, without consideration of any potential impact, for example the use of algorithms designed for high-frequency trading in biometric border control systems[5].
  • Biases in training data used to train the decision-making systems, as evidenced by racial bias in facial-recognition algorithms trained with data containing faces of primarily Caucasian origin[6].

Addressing Algorithmic Bias – beyond transparency to design and regulation

These biases are well identified, and have cultural impacts beyond the specific cases in which they appear. But they can be addressed – although biases in AI systems like neural networks can be more difficult to address. The bottom line is that research, industrial strategy and regulatory developments need to be connected together.

Limitations of Algorithmic Transparency Alone

Transparency alone is not a solution. Relying on transparency as the sole or main principle for regulation or governance is unlikely to reduce biases resulting from expanded algorithmic processing.

  • Alone, transparency mechanisms can encourage false binaries between the ‘invisible’ and the ‘visible’ algorithms, failing to enact scrutiny on important systems that are less visible[7].
  • Transparency doesn’t necessarily create trust, and may result in platform owners refusing to connect their systems to others.
  • Transparency cannot apply to some types of systems: neural networks distribute decision-making across linked nodes, so there is no possibility of transparency in relation to the decisions of each node or the relationships between nodes[8].
  • Transparency cannot address the change of systems over time.
  • Transparency does not solve the privacy issues related to combining personal data sources.
  • Transparency of source code can permit audit of a system’s design but not of its outputs, especially in machine-learning systems[9].

What’s to be done?

In writing this consultation response I suggested that transparency of training data might be one way of addressing the shortcomings of transparency alone. There are some other potential directions to pursue as well. These include:

  • Transparency connected to context
    • Taking a user-centred approach to design, empower users to make informed decisions by being transparent in the context of the service they are using.
  • Accountability vs explanation
    • The General Data Protection Regulation says that users have the right to object to an automated decision that has been made about them. This suggests that users will need both an explanation of how a decision has been made, and the right to raise an objection to the decision. Researchers and designers should investigate how to identify when automated decisions are being made, and how to explain them in ways that support meaningful objection.
  • Scrutability of algorithmic inputs, eg. training data
    • It is becoming more widely agreed that training data should be available for data scientists to analyse, to identify and interrogate systemic bias in training data before it is programmed into decision-making systems. We need much more research into how training data can be made scrutable and what regulatory processes need to be set up in order to facilitate this.
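As a sketch of what ‘scrutable’ training data might mean in practice, here is a minimal audit that flags under-represented groups in a labelled dataset. The labels, the 20% tolerance and the equal-share baseline are all illustrative assumptions, not a proposed standard:

```python
from collections import Counter

def audit_balance(labels, tolerance=0.2):
    """Flag groups whose share of a training set falls below
    (1 - tolerance) of an equal share across all groups."""
    counts = Counter(labels)
    total = sum(counts.values())
    equal_share = 1 / len(counts)
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            "under_represented": share < equal_share * (1 - tolerance),
        }
    return report

# Hypothetical demographic labels attached to a face-recognition training set.
labels = ["caucasian"] * 800 + ["asian"] * 120 + ["african"] * 80
print(audit_balance(labels))
```

Even an audit this crude would surface the kind of skew behind the facial-recognition example above, before the data is programmed into a decision-making system.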

Who is Doing the Work?

Dozens of scholars and practitioners are working on these issues. I have added footnotes to some of the classic work in computer science that has looked at these issues in the past, and I hope that the wide ranging conversation that’s required to address these issues continues. It’s certainly part of my next phase of work, as I continue to work on issues of ethics and values in the design of connected systems.

[1] Dix, Alan (1991) Human Issues in the Use of Pattern Recognition Techniques. Available at http://alandix.com/academic/papers/neuro92/neuro92.pdf

[2] British Medical Journal 5 March 1988. Available at: http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC2545288&blobtype=pdf

[3] Caliskan, Aylin, Joanna J. Bryson, Arvind Narayanan (2017) Science 14, Vol. 356, Issue 6334, pp. 183-186 DOI: 10.1126/science.aal4230

[4] Dix, Alan (1991) Human Issues in the Use of Pattern Recognition Techniques. Available at http://alandix.com/academic/papers/neuro92/neuro92.pdf

[5] Amoore, L. (2013). The politics of possibility: risk and security beyond probability. Duke University Press.

[6] Klare B. F., Burge M. J., Klontz J. C., Vorder Bruegge R. W., Jain A. K. . “Face Recognition Performance: Role of Demographic Information”, IEEE Transactions on Information Forensics and Security, Vol. 7, Issue 6, 2012, pp. 1789-1801.

[7] Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 1461444816676645.

[8] Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. Forthcoming in 165 University of Pennsylvania Law Review

[9] Kroll et al (2017)


Our Lives in Data: Mediating citizenship

Why do we care that algorithms make decisions, or that social media platforms hold all of our data and market to us? Yesterday, I went with the current crop of MSc Data and Society students to the Science Museum’s Our Lives in Data exhibit. Sponsored by Microsoft and PwC among others, the exhibit includes demonstrations of face recognition systems and aggregate data profiles created from thousands of taps in and taps out on the London Underground. In viewing these examples, I was inspired to revise a recent talk that I delivered at the Vrije Universiteit in Brussels. Here is a new version of the talk: Citizenship and (Location) Data, that refers to examples in the Science Museum.

Gallery view of Our Lives in Data - an exhibition exploring how big data is transforming the world around us; uncovering some of the diverse ways our data is being collected, analysed and used.


Technological Frames for Citizenship

As long as we’ve had new technological innovations, we’ve had people connecting technical features to forms of life. Early 20th century sociologist Georg Simmel even worried about how clocks and watches would create an urban society where people rushed for the sake of it. The expansion of electricity, and then radio and telephone, all implicitly established ‘connected’ and ‘disconnected’ citizens – and also created regulations that stipulated rights to access such technologies. Our ongoing concern about ‘digital divides’ in access to internet connectivity is a response to the assumption that one’s full participation in civic life depends on access to information technology – so we think about claiming ‘communication rights’. Of course, thinking about expanded access as a precondition for participation also creates a new space for access providers: governments and companies that promise to bridge the digital divide but who also benefit from selling more people the means of access.

Now, technologies of datafication transform everyday acts into streams of data and make them available through platforms. A new dynamic of relationships is established – and it has a significant impact on how we might think about the ‘active’ citizenship where people speak and are heard on things that matter.

From access to action

Broadly speaking, it’s possible to see a shift in ways of talking about and building supports for citizenship that moves from thinking about citizenship as access to a network towards thinking about it in terms of producing data for action. This shift has big implications for people and for institutions, because it changes the kinds of intermediaries at work. Rather than organizations providing access, the new civic intermediaries collect, process and present data. And just like the time when big companies like Cisco and IBM created strategies to participate in expanding access to networks, we now have big companies (sometimes the same ones) as well as governments and third sector organizations developing ways to benefit from or intervene in data collection.


This shifts the conceptual plane on which it’s possible to make rights claims about citizenship. The data ecosystem that needs to be established to make data actionable is based on IT access that is so ubiquitous as to include connected everyday objects. There is no longer a claim for a right to be included, to become a member of a network, but an expectation that everyone is on the network and, furthermore, that they are constantly producing data that can be captured to represent their actions on this network. There is a compulsion to participate, as to stay outside the network would remove all of the benefits of being connected.


Staying on the network produces data. The internet’s architecture makes it possible to trace clicks and links between content, and the expansion of connectivity to GPS-enabled mobile devices and other sensor-equipped technologies means that more things produce data. But data by itself is meaningless. It has to be cleaned, rendered, calculated and presented. Location data is a good example of this. By itself, it is a stream of coordinates. Other scholars have argued that the paradigm of datafication means that citizenship gets collapsed onto data production – that ‘citizens become sensors’ (Gabrys, 2015). This is certainly part of the process, but ideas about good citizenship are also created in the overall framework of processing data in order to take action.

Instead of claiming rights of access, citizenship is shifted towards contribution to an aggregate for the purposes of decision-making. In the museum, Transport for London presents posters aggregating the underground or bus trips of thousands of people. In the aggregate, patterns start to emerge and the exhibit suggests that these patterns create ways for agencies like Transport for London to make decisions about how to provide underground service that is more optimal for citizens.

Optimization and prediction

It’s also worth thinking about how ‘optimization’ has itself become a framework for citizenship – a model of consumer choice extended to the provision of services and the everyday experiences of people. Optimization depends on effective prediction – theorists of governance who follow Foucault have identified how technologies of rationalization have positioned certain kinds of civic acts as desirable and others as undesirable – and many predictions are now made based on aggregated data.


When a main framework for civic life is in relation to optimization, some things are going to be easier to fit in the framework than others. It’s relatively straightforward to optimize transportation or the collection of recycling, but more difficult to optimize volunteering, knowing your neighbours, or creating local capacity. This also raises some interesting issues of ethics – like the ethics of aggregation. When prediction decisions are based on aggregate data, a lot depends on what that data includes. While press coverage focuses on the role of algorithms in face recognition, insurance calculations and other realms, what’s really at stake is the data. At the Science Museum, my students wondered if an algorithm judging age and happiness based on facial features might develop judgement rules based on a larger sample of Caucasian-featured faces compared to Asian or African-featured faces. Prediction in the service of optimization might also, over time, structure kinds of ‘ideal’ ‘good citizenship’ based on people behaving in ways that create data or play into processes of ‘optimization’. There are myriad examples of this: from the data produced for transport providers to the exchange of data about friends and connections for continued access to media platforms that make money by optimizing the connection between audiences and advertisers. We are beginning to understand the implications of the monopolies on intermediation that these companies create. The expansion of the mediated network suggests that everyone can participate, and hence, in order to behave well in this new environment, that they SHOULD participate.


Optimization as a frame also influences civic projects that are attempting to create bottom-up alternatives. For example, FixMyStreet, a (now-classic) interface for crowdsourced contributions of local problems, collects location data points and user-generated content identifying maintenance problems that cities should fix. Gabrys (2014) identifies how this creates a kind of ‘computational paradigm’ for citizenship. I argue, building on this, that the FixMyStreet platform itself plays an important role in creating optimized citizenship: it not only suggests formats for easily computable data (as Gabrys points out) but does the computation and returns the results in ways that allow governments to optimize their expenditure on maintenance by identifying areas of maintenance that, if addressed, will be positively viewed by the people who submitted data. This optimizes the relationship between the government and these people, but the relationship cannot account for the views of the people who have not used the platform. On one level, these people might be excluded from access to communication networks, but on another level, their failure to submit data to a platform system that could calculate it into something that would make government’s work optimal removes them from consideration. Optimizing government’s work in this example requires civic data production – of calculable units – but it requires an intermediary to work on it too.

Sometimes civic data projects build their own intermediaries. This is certainly a step in the right direction, but it’s not exactly a disruption of the process of defining citizenship in the direction of optimization. This has some consequences, as the drive towards optimization can, over time, shift influence away from participatory projects. Cyclestreets, a non-profit organization that develops cycling maps based on contributions from individual cyclists, developed a trip planning and problem reporting application in Hackney, a London borough with a very high level of cycling. The app collects data from the GPS function of cyclists’ mobile phones and provides this, along with information on the purpose of the trip and basic demographics, to Hackney Council so that the council can understand use of cycling infrastructure as well as its problems. Individual cyclists who use their bikes primarily for utility journeys such as getting to work may also want to use the app to record times, distances and calories burned, share journeys and upload reports of problems they have encountered in their daily journeys, including photographs and descriptions – much like FixMyStreet. Cyclestreets then uses volunteered data to create cyclist-produced maps, but all of the data is available to Hackney council to analyse and use in policy decision-making.


This application is developed from open-source technical tools and creates a relatively direct means for citizens to share data with government, via problem reporting and sharing of chosen cycle journeys. The app is free to download, and Cyclestreets does not benefit financially from its use. However, it also relies on the logic of datafication, both in terms of the cyclist’s ideal knowledge of their own cycling behaviour and in terms of the borough’s decision-making: the data from the app legitimates some decisions about cycling infrastructure development and perhaps limits others. It also reiterates a logic of optimization. Because of this, the Cyclestreets app and many others like it are being superseded by corporate apps that request access to many types of customer data from smartphones rather than relying on volunteered data. These applications, including Citymapper, are extremely easy to use and provide very well-calculated cycling routes that do not require much input from the user. So the civically-minded data citizens imagined by Cyclestreets, who volunteer data, are displaced by the consistently data-producing Citymapper customers, who benefit from more optimal experiences of navigation.


Optimization is one of several possible actions taken in relation to data. These examples have illustrated how working towards optimization changes the mediation of citizenship, and thus, in some ways, the qualities or expectations created in relation to citizenship. Optimization as an action valorizes data creation and increases the significance of intermediaries who can make civic actions optimal – which creates different forms of exclusion than those related to lack of access.


Dilemmas of Technological Citizenship

My point in all of this is that the creation of technological frameworks for citizenship creates some key dilemmas. The dilemmas result from the frames or protocols that define ‘good’ technological citizenship as working in a particular way. I also think there are some productive and interesting ways to respond to these dilemmas. There are both normative and critical perspectives to take. I’ve talked specifically about optimization as a feature of the focus on data and calculability. There are other features: participation, transparency and predictability. All of these features build from and are wound into the framework of data for action. They can’t easily be resolved, showing how hopes for technology reveal ongoing power differentials across the past twenty years of techno-civic projects in cities. These projects generate dilemmas relating to the ways that citizenship should be understood or enacted in relation to newly available technological tools. The dilemmas show that power and agency are always at work in influencing who can speak, be heard, or act in relation to things that matter in the places they live.

Algorithms, Accountability, and Political Emotion


Last week (it seems a century ago) I was at the Big Boulder social data conference discussing the use of algorithms in managing social data. Then, since I live in the UK, Brexit Events intervened. Sadness and shock for many have since morphed into uncertainty for all. Online media, driven by the social analytics I heard about in Boulder, shape and intensify these feelings as we use them to get our news and connect with people we care about. This raises some really important issues about accountability, especially as more news and information about politics gets transmitted through social media. It also stirs up some interesting questions about the relation between industry focus on sentiment analysis of social media in relation to brands, and the rise of emotion-driven politics.

So in this post I’ll talk about why algorithms matter in moments of uncertainty, what it might mean to make them accountable or ethical, and what mechanisms might help to do this.

  1. Algorithms present the world to you – and that’s sometimes based on how you emote about it

Algorithmic processes underpin the presentation of news stories, posts and other elements of social media. An algorithm is a recipe that specifies how a number of elements are supposed to be combined. It usually has a defined outcome – like a relative ranking of a post in a social media newsfeed. Many different data points will be introduced, and an algorithm’s function is to integrate them in a way that delivers the defined outcome. Many algorithms can work together in the kinds of systems we encounter daily.
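A toy sketch makes this concrete. The signals, weights and decay below are invented for illustration – no real newsfeed works exactly like this – but they show how an algorithm combines several data points into a single defined outcome, a ranking:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    hours_old: float

def rank_score(post: Post, w_likes=1.0, w_comments=2.0, decay=0.5):
    """Combine several signals into one ranking score.
    Illustrative weights: comments count double, and the
    score decays as the post ages."""
    engagement = w_likes * post.likes + w_comments * post.comments
    return engagement / (1 + decay * post.hours_old)

feed = [
    Post("fresh but quiet", likes=2, comments=0, hours_old=1),
    Post("older but popular", likes=50, comments=10, hours_old=12),
]
feed.sort(key=rank_score, reverse=True)
print([p.text for p in feed])  # ['older but popular', 'fresh but quiet']
```

Change the weights and a different post wins: the ‘defined outcome’ is entirely a product of design choices like these.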

One element of algorithmic systems that I find interesting at this moment in time is sentiment. Measuring how people say they feel about particular brands in order to better target them has been a key pillar of the advertising industry for decades. With the expansion of social analytics, it’s now also the backbone of political analysis aimed at seeing which leaders, parties and approaches to issues acquire more positive responses. But could too much of a focus on sentiment also intensify emotional appeals from politicians, to the detriment of our political life? What responsibility do social media companies bear?

Social Media Companies Filter Politics Emotionally

Increasingly, media companies are sensitive to the political and emotional characteristics of responses to the kinds of elements that are presented and shared. Sentiment analysis algorithms, trained on data that categorizes words into ‘positive’ and ‘negative’, are widely employed in the online advertising sphere to try to ascertain how people respond to brands. Sentiment analysis also underpinned the infamous ‘Facebook emotion study’ which sought to investigate whether people spent more time using the platform when they had more ‘positive’ or ‘negative’ posts and stories in their feeds.

With the expansion of the emotional response buttons on Facebook, more precise sentiment analysis is now possible, and it is certain that emotional responses of some type are factored in to subsequent presentation of online content along with other things like clicking on links.

Sentiment analysis is based on categorizations of particular words as ‘positive’ or ‘negative’. Algorithms based on presenting media in response to such emotional words have to be ‘trained’ on this data. For sentiment analysis in particular, there are many issues with training data, because the procedure depends on the assumption that words are most often associated with particular feelings. Sentiment analysis algorithms can have difficulty identifying when a word is used sarcastically, for example.
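A minimal lexicon-based scorer shows both how the technique works and why sarcasm defeats it. The word lists are tiny illustrative stand-ins for the large sentiment lexicons real systems are trained on:

```python
POSITIVE = {"great", "love", "wonderful", "good"}
NEGATIVE = {"terrible", "hate", "awful", "bad"}

def sentiment(text: str) -> int:
    """Naive lexicon scoring: +1 per positive word, -1 per negative word."""
    score = 0
    for raw in text.lower().split():
        word = raw.strip(".,!?")
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score

print(sentiment("I love this wonderful product"))        # 2
print(sentiment("Oh great, my train is delayed again"))  # 1: sarcasm read as positive
```

The second example scores positive because the word-feeling association baked into the training lexicon cannot see the sarcastic context.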

Similarly, other algorithms used to sort or present information are also trained on particular sets of data. As Louise Amoore’s research investigates, algorithm developers will place computational elements into systems that they build, often without much attention to the purposes for which they were first designed.

In the case of sentiment analysis, I am curious as to the consequences of long term investments in this method by analytics companies and the online media industry. Especially, I’m wondering about whether focusing on sentiment or optimizing presentation of content with relation to sentiment is in any way connected to the rise of ‘fact-free’ politics and the ascendancy of emotional arguments in campaigns like the Brexit referendum and the American presidential primaries.

  2. Algorithms have to be trained: training data establish what’s ‘normal’ or ‘good’

The way that sentiment analysis depends on whether words are understood as positive or negative gives an example of how training data establishes baselines for how algorithms work.

Before algorithms can run ‘in the wild’ they have to be trained to ensure that the outcome occurs in the way that’s expected. This means that designers use ‘training data’ during the design process. This is data that helps to normalize the algorithm. For face recognition training data will be faces, for chatbots it might be conversations, or for decision-making software it might be correlations.

But the data that’s put in to ‘train’ algorithms has an impact – it shapes the function of the system in one way or another. A series of high profile examples illustrate what kinds of discrimination can be built into algorithms through their training data: facial recognition algorithms that categorize black faces as gorillas, or Asian faces as blinking. Systems that use financial risk data to train algorithms that underpin border control. Historical data on crime is used to train ‘predictive policing’ systems that direct police patrols to places where crimes have occurred in the past, focusing attention on populations who are already marginalized.
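The mechanism behind these examples can be shown with a deliberately tiny classifier. Here the training set for group B is both sparse and unrepresentative, so a typical B case is misclassified; the numbers are invented purely to illustrate how skewed training data becomes skewed output:

```python
def nearest_centroid(train, query):
    """Classify a value by the closest class average ('centroid')
    along a single feature dimension."""
    centroids = {label: sum(v) / len(v) for label, v in train.items()}
    return min(centroids, key=lambda lab: abs(centroids[lab] - query))

# Group B really clusters around 2.0, but the skewed training set
# contains only one unrepresentative B example.
skewed = {"A": [1.0, 1.2, 0.9, 1.1, 1.0, 0.8], "B": [0.9]}
balanced = {"A": [1.0, 1.2, 0.9, 1.1, 1.0, 0.8], "B": [1.9, 2.1, 2.0]}

print(nearest_centroid(skewed, 2.0))    # 'A': a typical B case, wrongly labelled
print(nearest_centroid(balanced, 2.0))  # 'B': correctly labelled
```

Nothing in the algorithm changed between the two runs; only the training data did.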

These data make assumptions about what is ‘normal’ in the world, from faces to risk taking behavior. At Big Boulder a member of the IBM Watson team described how Watson’s artificial intelligence system uses the internet’s unstructured data as ‘training data’ for its learning algorithms, particularly in relation to human speech. In a year where the web’s discourse created GamerGate and the viral spread of fake news stories, it’s a little worrying not to know exactly what assumptions about the world Watson might be picking up.

So what shall we do?

  3. You can’t make algorithms transparent as such

There’s much discussion currently about ‘opening black boxes’ and trying to make algorithms transparent, but this is not really possible as such. In recent work, Mike Ananny and Kate Crawford have created a long list of reasons for this, noting that transparency is disconnected from power, can be harmful, can create false binaries between the ‘invisible’ and the ‘visible’ algorithms, and that transparency doesn’t necessarily create trust. Instead, it simply creates more opportunities for professionals and platform owners to police the boundaries of their systems. Finally, Ananny and Crawford note that looking inside systems is not enough, because it’s important to see how they are actually able to be manipulated.

  4. Maybe training data can be reckoned and valued as such

If it’s not desirable (or even really possible) to make algorithmic systems transparent, what mechanisms might make them accountable? One strategy worth thinking about might be to identify or even register the training data that are used to set up the frameworks that key algorithms employ. This doesn’t mean making the algorithms transparent, for all the reasons specified above, but it might create a means for establishing more accountability about the cultural assumptions underpinning the function of these mechanisms. It might be desirable, in the public interest, to establish a register of training data employed in key algorithmic processes judged to be significant for public life (access to information, access to finance, access to employment, etc). Such a register could even be encrypted if required, so that training data would not be leaked as a trade secret, but held such that anyone seeking to investigate a potential breach of rights could have the register opened at request.
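A sketch of how such a register might commit to training data without revealing it: a cryptographic digest of the dataset can be published (or held by a regulator), and later inspection can confirm whether the audited data matches what was registered. The row format here is purely illustrative:

```python
import hashlib

def register_entry(dataset_rows):
    """Produce a fingerprint of a training dataset for an audit register.
    The digest commits the operator to exactly this data: it can be
    published without revealing the rows, yet a later inspection can
    confirm the audited data is what was registered."""
    h = hashlib.sha256()
    for row in sorted(dataset_rows):  # sorting makes the commitment order-independent
        h.update(row.encode("utf-8"))
        h.update(b"\n")
    return h.hexdigest()

training_data = ["alice,approved", "bob,denied", "carol,approved"]
fingerprint = register_entry(training_data)

# Any tampering after registration is detectable.
tampered = training_data + ["dave,denied"]
print(register_entry(tampered) == fingerprint)                      # False
print(register_entry(list(reversed(training_data))) == fingerprint)  # True: same data, any order
```

This is far short of the encrypted escrow imagined above, but it shows that ‘held such that it could be opened at request’ is technically tractable without making the data itself public.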

This may not be enough, as Ananny and Crawford intimate, and it may not yet have adequate industry support, but given the failures of transparency itself it may be the kind of concrete step needed to begin firmer thinking about algorithmic accountability.

Ethics of Perverse Systems

Things, of course, cannot go on as they are. The rate of environmental destruction, fossil fuel burning, reactionary politics and censure of debate is of course untenable. So is the high capitalist solution of monetizing every remaining speck in the universe, trading on its futures and leveraging the outcome to secure the fortunes of the fortunate and lock many others into destitution. Abominable is the lack of empathy and the xenophobic turn of politicians (and, I expect, in some way all sorts of people) in the face of people desperate to escape war and imprisoned on borders instead of welcomed and settled.

And so, also, for those of us concerned with the capacity to become ourselves by expressing ourselves, the intensification of surveillance of our everyday life, which we know changes our behavior: we become less forthright with ideas, and keep our radical thoughts to ourselves lest they be too disruptive to be heard.

How then should we go on?

Some say we shouldn’t bother. The popular press cover all of the above issues in ways that often appear calculated to disempower. Facebook will feed you ads whether you subscribe to it or not. The sea level will rise whether you drink tap water or not. The rich will manipulate government whether you vote or not. Selfishness is inevitable, social collapse perhaps as well. Some progressive thinkers embed this stance into hope for a post-apocalyptic regeneration of life, but one that is predicated on the suffering of many as the inevitable excess of consumption reaches peak cruelty (perhaps at the same time as peak oil). Some conservatives also see this as inevitable, but aim perhaps to be among the few who benefit. Populists of various political persuasions focus on the villains contained in various pieces of this puzzle. All of it suggests that we are naturally, inevitably horrible people.

Naomi Klein’s recent article in the London Review of Books (based on her Edward Said lecture) reminds us that there are other ways of thinking. She refers to the ‘seven generation’ rule that stipulates that we should think about the long term impact of any action, and leave the natural world in an improved state for those who are to come.  She resists the idea of ‘sacrifice zones’ where the land and lives of poor/black and brown people are offered up to safeguard the places that the rich inhabit. Only by not seeing these lives as truly equal – as ‘others’ who can’t really be human – is anyone able to justify this. This follows from Said’s work in defining how Orientalism, this ‘othering’ of people outside the places where power defines itself to reside, justifies treatment that dehumanizes them while also assuring the continuation of easy lives elsewhere. Klein suggests we resist sacrifice and focus on solidarity. This requires the capacity for tolerance and respect of all humans as well as others – as philosopher Achille Mbembe has also pointed out.

Perverse Systems

Klein’s article also started me thinking about one of the key questions of my book project. Is it possible to be hopeful about a technological world? Advanced technology, even of the communicational type that is my focus, is so deeply bound up with the impossible expansion of value extraction from every facet of experience, and by association with violence and exclusion. If my recent research is any indication, this value extraction now intensifies upon the very material of ordinary life and upon our own attempts to make it meaningful by connecting to each other and to ourselves. My previous research has indicated that in the same mad dash to extract value that angers indigenous people in Brazil, Canada and the USA, whose rights to be upon and with their land are disappeared to permit more resource exploitation, mobile phone companies have essentially disappeared the right to privacy of their subscribers. In exchange for cheaper calls (and to compensate for expensive investments), location data are collected and packaged. Some companies operate subsidiaries that analyse and sell these data. Both of these activities are ruthless exploitations of realms of life that on their own have meaning and substance on far different registers than their valuation as commodities might suggest. Ethically as well as economically, these are painful, woeful, terrible responses. They create and sustain perverse systems. And because these are unfolding in so many places and on so many scales, it seems impossible to conceive of how to think otherwise.

Yet thinking otherwise and working otherwise is also essential, because alternatives are also unfolding in many areas and at many scales, often without much attention.

In 1998 I took a course in environmental philosophy called Environment Enquiry – taught by environmental philosopher Bob Henderson. We read Daniel Quinn’s 1991 philosophical novel Ishmael, which broadly sketches this approach by contrasting the Takers (I think you can probably work out their motivations and actions) with the Leavers, who enact ‘seven-generation’ values and who are bound into traditions and rhythms that hold them. The original text now, nearly two decades later, reads problematically, with a fair dose of nostalgia for an imagined past tribalism and a dash of ‘noble savage’. Despite the naïveté, there may be some value in the broader opposition between Leavers and Takers, provided we redefine what they are to take account of what we know of the world. In my mind the Leaver category requires contribution as well as living with difference. This isn’t quite how Quinn thought about it, but it is how Said and Mbembe do. Living with difference is really hard. It starts with believing that everyone (yes *everyone*) has the same importance, but that they will enact their own importance in totally different ways.

How could we conceive technological systems built by Leavers? Neo-tribalists would probably point to the mythological ‘original internet’. Others might look to the leveraging of worldwide networked communications by small groups of people who organize to occupy and slow down extractive capitalism. Oh right, that would be Occupy. Still others might point to commons-based organization of resources including intellectual property. Oh right – distributed local communication networks. But what about the other things I’ve added to the concept? How can technologies move away from not only embedding difference and Othering but also weaponizing them? Surveillance technologies for example do an excellent job of this – collecting more personal information from poor/black and brown people and hence reinforcing difference and threat. It may be possible to think about sensing technology under radically different organizational and cultural conditions, for example, much as these other examples begin from different positions.

I want to identify and celebrate these examples of working differently, but I have also critiqued some of them in my work. I hope I haven’t overplayed the critique – since the purpose of it was in many cases to identify how difficult it is to move progressive projects away from the knowledge and exchange cultures of currently dominant work. This cuts across many parts of the tech sphere. Personal privacy, for example, is a taking and holding apart of something of value, rather than a sharing that creates relationships through exchange. Such reciprocity and openness, such fluidity, is one of the most frustrating things about abandoning the notion of the individual liberal subject. Equally, the perspective of individual responsibility that underpins many projects for the contribution of data or expertise as the foundations of citizenship underplays how complex our sense of responsibility may be when it is always tempered with coercion.

Living another way

In much critical theory of technology I read a profound worry about technology itself. Ursula Franklin argues that technologies are real worlds composed of practices that we undertake all the time, and that they can through the way they are built, imagined and administered, dismiss entire ways of knowing and being. My work focuses on these practices, but never quite gets away from the worry, as I never manage to square the circle of how or whether technologies could be otherwise. But I know that there are ways of organizing beyond hierarchy, and ways of living beyond value extraction. I am certain that these have communicational elements attached to them as well and that some of these depend on the construction of technological systems.  If this is an act of faith, I will claim it – and try as hard as I can to contribute to making it so.





A surfeit of care

I am suffering from a surfeit of care. I really care. A lot. About a lot of things.

I care that the climate is changing, fast, and that people and animals will die as a result – are dying already, as refugees flee from a war accelerated by drought, and new famines begin in Southern Africa, and as ice melts in the Arctic at a pace never imagined. (I care also that I never managed to visit the Arctic before it melted, but I am not so sure now that I believe any longer in the great education of travel. I worry about going anywhere because I think I might be too sad at what has already gone).

I also care that these changes mean not only death of beings but death of ideas: the knowledge of the seasons, the patterns of the past, the ability to feel a part of nature rather than its enemy.

I care that governments here and in many places have turned away from valuing people and the things that they can build together, and have disassembled and partitioned and sold the very things that make society possible: education, health care, access to water, access to knowledge. I care, thus, about policy and procedure, and the devil in the details of governance documents and institutional arrangements and public oversight. I care about principles, and I will argue them based on careful research.

I care about my students and try to show them a world of ideas that is beyond their own experience; and in teaching them about the hopefully still expansive possibilities of the world I try to convince myself of the same. I care about the ideas themselves: I want them to see that the world, even the material and technical world, is formed of ideas about how to best go about being in it; and even when it appears fixed, it is always changing.

I care about my family, about teaching my daughter things that will help her survive in an uncertain and perhaps incoherent world. I try to wire into her brain the old stories and the new ways, confidence in herself and practical skills and empathy, because surely she will need it. I try also to live gracefully and lovingly alongside the In-House-Hacker, even though I’m so swamped with care that I must sometimes seem bereft.

I care about birds, and toads, and plants, and trees, and forests and animals and people I have never met and never will. I am the result of a globalization of knowledge and the victim of a globalization of care.

And all this care keeps my heart in my throat, makes my skin prickle with sensitivity to every news story about another outrage. It makes me grieve for a certainty I never believed in, and hope for transformations that I sometimes fear I am simply too frightened to force through myself. I worry that I am not doing enough, with every plastic tray I purchase in opposition to my clear desire to live a sustainable life, with every petition I sign knowing it won’t make a difference. With every demo I march on, even. I worry that it is all sound and fury. Because I really, really care.

And somewhere deep down I wish to be released from this care. I wish I could simply detach from the problems of the world, perhaps by ceasing to be an optimist and assuming that I could (WE COULD) never solve them anyway, so why bother. Or maybe by becoming a hedonist and floating away on a cloud of pure experience, unsullied by critique.

But in the here and now, and inside the only mind I have, I’m struggling not to be submerged. Struggling to find a thread and a story that associates rather than dissociates, that integrates and grounds and makes the world meaningful – or makes a new world seem possible, even in the ever-present wreckage of the old.

I don’t know how to do it, but to turn heartache to a song, fear to determination, anxiety to optimism. I don’t know how to do it but to keep swimming, keep kicking, keep breathing and moving and loving. How do you do it? How do you keep afloat?

Urgency and Complexity: New Thinking

It’s the new year. The sun shines orange, veiled in vapour, and warmer than the seasonal average, of course.


2015 was, for me, the year I came to terms with the real ending of the world. That is to say, the modern world that I was born in and grew up in and which, from my perspective, started unravelling when I learned about global warming at 14, but no one (including me) did anything about it. I greeted the financial collapses of 2008 with a kind of perverse hope that this would be the watershed event that would provoke the end of the world and the beginning of another.

Maybe it did.

But the planes still fly and make their vapour trails, and demand is still projected to increase. The machine rumbles on, into the abyss. The trees of the 70-year-old woodlot across from my daughter’s daycare have been felled, the pond has been filled, and the birds have taken to the sky to protest. We haven’t heard from the frogs yet.

The End, but not yet Something New

We all know that it’s over – that capitalism’s promise of continuous growth is impossible on a single planet of finite resources. But the next thing has not yet come. I spent much of 2015 being in quiet despair at the impossibility of ethical existence within a system that destroys humans and all other living beings, by its very design. But lately I have been moving into the place past despair, which is not hope but perhaps a fierce joy in existence, brief and complex. I can recognize my blue mood as deep grief, not for a person this time but for the world that I knew – the world that is still with us but which must soon become something else. Donna Haraway writes that our responsibility at this point is to make the Anthropocene or whatever it is called (she goes with Chthulucene, in line with the great makers and remakers Gaia, Medusa, Spider Woman) as short as possible, since it is probably best understood as the boundary between epochs.

So the new thinking I’m embarking on is dedicated to trying to make a path through the Dithering, a thread of some type (probably not linear, but maybe red) from the place we are to some other place where we can see ourselves. This is a task for all of those who have never liked disciplines, a good task for a grieving environmentalist (as I’ve always been) but also a good task for someone who wants to think about communicative relationships, humans and nonhumans.

Decentering humans; understanding complexity

In particular I have identified how mediation appears to be a crucial concept in linking two key lines of thinking: one focused on removing humans from their hubristically defined position as superior to other beings on earth, and the other on how complex adaptive systems can move past oppositional (or even dialectical) engagement. Since the earth environment is one of these complex adaptive systems, and since it is constantly in the process of changing, this set of thoughts is equally relevant for the project. It’s also at the heart of John Durham Peters’ excellent book The Marvellous Clouds.

In the first line of thinking, Haraway proposes an ethics of kinship that connects the human with many others, especially those who are alien or not alike. This development and sustenance of relationships outside of the known is an extension of her work on cyborg identities, and she, like others working in this area, calls for a renewed sense of connection with the other beings of the world. Similarly, Robin Wall Kimmerer offers the insight that humans might, in gratitude to the rest of creation, pay attention. She writes, “Paying attention to the more-than-human world doesn’t lead only to amazement; it leads also to acknowledgment of pain. Open and attentive, we see and feel equally the beauty and the wounds, the old growth and the clear-cut, the mountain and the mine. Paying attention to suffering sharpens our ability to respond. To be responsible.”

However not all thinkers committed to decentering humans from the web of experience think that nature will respond in any way to our attention. Isabelle Stengers proposes in her recent work a Gaia that is powerful and implacable. There is no recourse to this nature. It cannot be perceived or engaged with. Stengers writes, “we will have to go on answering for what we are undertaking in the face of an implacable being who is deaf to our justifications” (47). This means that none of the modes of mediation on which we have come to rely (including of course measurement and sensing) could bring humans closer to perceiving the natural world.

I need to spend some more time with Stengers to see whether there might be some new insights on media (although not likely communication) from her implacable Gaia, but I find it interesting that she also entreats us to ‘pay attention.’ Attention is what many media scholars spend time discussing, and what media companies try to measure. Attention, like so many things, has slid into being commodified. So paying attention to things outside ourselves that are also part of ourselves is a radical act indeed. Paying attention means having to face the terrible realization that as I wrote this I cleaned the bathroom – and that the poisoned water I produced might kill the kin of the birds I watch out the window. On the other hand, once one is really paying attention, all the analytic and creative energies that one possesses can be directed. And the consequences of that attention might invite creativity and solutions that apply not only to climate or ecological crisis but to all sorts of situations where the notion of progress has come undone, and the dynamic of opposition and dialectic no longer apply.

The second strand of my reading and thinking concerns how to understand the dynamics of complex adaptive systems. This is inspired by rereading Robin Mansell’s Imagining the Internet, where she identifies the multiple hierarchical and heterarchical levels that interrelate within communication systems. Rather than seeking to optimize or rationalize, systems processes tend towards self-maintenance, and can’t always be massaged into producing cause and effect relationships.

Consideration of complex adaptive systems doesn’t seem too difficult to fit with the first set of readings. If anything, it aligns with the notion of relationship that we value through ‘paying attention’. It also opens out the possibility for paradox and unintended (or unknowable) outcomes.

Again, while thinking this way might help to draw a thread towards the urgent and enormous questions of our time, it will also help to address any number of smaller (and no less pressing) questions of justice and the ‘good society’ that we need to address on the way.

Next post soon – more on complexity and some on ethics and relationships.

Why I’ve Declined Your Kind Invitation (and why you should try again)

An Open Letter to everyone who’s recently invited me to speak at their event.

I really want to attend your event. It’s probably very close to my current interests – technological citizenship, ‘smart cities’, the Internet of Things, ethics and communication rights. There are probably really great people also coming to this event, people who share these interests, with whom I’d have amazing discussions and maybe even collaborate in the future.

I know that if I don’t attend your event all of these opportunities are lost.

And yes – I’m still working on the kinds of things that caused you to invite me. My book proposal on technological citizenships is out for review. I have a paper on open source knowledge and IoT/citizen science projects that’s nearly published. I’m as enthusiastic as ever about meeting and working with cities, communities and activists who are using data and sensing technologies to tell their own stories and change the governance of their cities and communities.

But getting that work done is difficult. At the moment I’m a solo researcher – attending your event might help me meet more collaborators, but it also takes time from reading, writing, interviewing and putting together grant proposals. Not to mention leading a new MSc programme in Data & Society – and organizing my own events as part of this.

I also have full time teaching responsibilities, and a young family with another parent who also works long hours.

Right now, I’m not attending your event because I’m committed to getting the serious work done – researching, writing and thinking carefully so I have something significant to contribute. I know that this has some risks, but I want to take time to understand what’s happening and what’s at stake. I’ve decided not to spend my time running from lecture theatre to airport and back to pack in all of the experiences I can. I hope this makes my work better – and more important for all of us.

So please – don’t assume that since I’ve declined this time, I’m not interested.

Please invite me again. Share your event feedback. Let me know what you are working on. Maybe together we can find a way to advance our research without exhausting ourselves.




Rights, communication and the refugee crisis (or, how the real world made my research project better)

I have started working on a book, and this week I feel guilty about writing it. The book is about the ways that technologies, citizenship and urban life produce one another. I start in the 1990s, in the conceptual space of rights definition and rights claims, including the claims related to communication rights as well as renewed claims for “rights to the city”. In this time, we talked about remaking the city, perhaps virtually, but also about fighting for its public space. This paradigm is fading, though, and in the next part of the book I write about how data and citizenship combine, how large-scale data collection and analysis shapes the ways that people feel that they can and do act, and how activists and advocates try to resist the dominant ways that data is collected and used. Certain kinds of surveillance dynamics are created by this collection and use, but there are also potential ways to resist this (albeit by demanding more individual responsibility). Looking forward, I also analyse how sensing technologies that collect intimate data intensify the ways that these experiences of surveillance and individualization occur, perhaps making us into “very predictable people” as one journalist has suggested. Sensor citizenships are all about risk: predicting it, gathering data to better describe it, reducing it. It’s chilling to consider how normalized and constrained the everyday life of the otherwise free and privileged might become – but also perhaps inspiring to consider the positive ways that embedded sensing technologies might be able to be used – to facilitate collaboration, or spur citizen science.

So while I am writing this careful, rather restrained analysis of citizenship and communication, the Western world is exploding with a crisis of citizenship. Thousands of people are fleeing war and danger and the European state machinery is singularly failing to accommodate them, to the extent that preventable deaths have captured public imagination. And my tiny circumscribed musings on the ways that communication and data technologies create different citizenship seem feeble in the face of this overwhelming pain and complexity.

But the events I’m following have given me a bit of a chance to think through some of the ideas I am working on. I have been asked why I’m interested in cities, technology and citizenship, and my answer is that state conceptions of citizenship are under strain, and in cities people simply arrive and have to negotiate their belonging. In the refugee crisis, many of the actions of European states show the fractures in the rights-based state level model of citizenship – including the inadequacy of the Dublin III regulation for refugee registration as well as the hesitation of some states, like the UK, to accept more refugees.

Equally, the situation also shows the ways that networked citizenship can operate, by capturing and shifting the political mood and discourse – talking about people and experiences rather than “swarms of migrants”. This has surely been helped along by the swift, meme and hashtag-driven discussion on social media, and amplified by the mass media (I wrote about how this happens in advocacy movements here). I’m moved by the efforts of people I know who are working hard to get communications access to people stranded at the train station in Budapest.

Less encouragingly, the refugee crisis also demonstrates the fraying of the rights paradigm. Refugees have rights to asylum but states do not wish to grant them. So people move. They create new situations by their presence, by their refusal to be moved. This is a riskier tactic than claiming rights. It is a worrying trend. It also intersects with the kind of individualization that is tied to data production. I have just noticed that one of the key concerns of EU governments is the collection of more data about refugees, with the purpose of tracking them more specifically as they move. This sounds of course like a good idea, but it depends on a strong and trusted power to oversee the collection and tracking. As strong right-wing (even fascist) governments rise to power or exert more influence across Europe, we must ask whether this trust is well placed.

Finally, the refugee crisis has had me thinking a lot about my hope for the book: that I might be able to bring back into the high-tech discussions of future technology some essential human qualities that are often poorly considered or “designed out”. Qualities like empathy. Care. Husbandry and maintenance of the environments around us. These are qualities that I believe to be essential to cultivate, not only in our societies (where they always have been) but also in the technological systems that support the functioning of societies. In this late summer of crisis and pain, empathy is what motivates thousands to call for refugee acceptance or to donate materials and time. It is what we seek to generate when we communicate stories about people fleeing. It is of course what makes us human.

In my small work I hope to demonstrate that this greatest of all human qualities need not be laid aside, not in our institutions nor in our technology systems. After donating to help refugees and praying for all of the desperate people, it’s the least I can do.

Women’s Technology (honouring Betty Pezalla, 1924-2015 and Barbara Powell, 1950-2002)

My grandmother died this week. Parent to five, grandparent to 14, great-grandparent to 12. After a childhood during the Depression, she went to college to study home economics, but her true passion was fibre arts. She spun, dyed, knitted, felted and wove sweaters, scarves, rugs, baskets, animals, wallhangings, and many and sundry other beautiful things. In middle age she retrained as an art teacher and went back to work – in mid-1960s Midwest USA. She exhibited her work in galleries well into her 80s. Here she is with my daughter, sometime in 2012.


My mother died thirteen years ago this week. Parent to three, senior university administrator, violinist, baker, master fart-joke teller. She achieved a PhD with two children underfoot, then went on to write a book that surfaced women’s histories hidden in archives. She also baked six loaves of sourdough bread every Saturday while listening to the opera, and loved going to garage sales. Here she is, fierce, with her brother at a wedding in the mid-1970s.

mom paul

I cannot tell you the number of things I learned from these women. Confidence in my intelligence. The truth about ambition and responsibility. A love of family. Generosity.

One thing I learned though that I don’t often think about was a passion for new technology and technical thinking. This, along with everything else has shaped me, and I want to write a little more about it.


My grandma’s studio

Both my mom and my grandma knit. They had bags of wool with needles that they toted around with them to fill up moments of time – watching TV or listening to the radio, sitting in on kids’ music lessons, riding in the car. These bags contained magical charts laying out the stitching patterns needed to make a cable, or a rosette, or a cuff. I didn’t know it then but these charts and their notation are a form of programming – a set of abstract schematics to be followed (and interpreted, within boundaries) that create an entire new product.
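The chart-as-program analogy can be made literal with a minimal sketch. The abbreviated notation and expansion rules below are illustrative assumptions on my part, not any standard knitting convention:

```python
# A minimal, illustrative sketch of knitting notation as a program:
# abbreviations like "k2, p2" expand deterministically into stitches,
# much as a chart row is "executed" by the knitter.

def expand_row(row: str) -> str:
    """Expand a row like 'k2, p2, k1' into the stitch sequence 'kkppk'."""
    stitches = []
    for instruction in row.split(","):
        instruction = instruction.strip()
        stitch, count = instruction[0], instruction[1:] or "1"
        stitches.append(stitch * int(count))
    return "".join(stitches)

def knit(pattern: list[str], rows: int) -> list[str]:
    """Work the pattern rows cyclically, like repeating a chart."""
    return [expand_row(pattern[i % len(pattern)]) for i in range(rows)]

if __name__ == "__main__":
    ribbing = ["k2, p2, k2", "p2, k2, p2"]  # a simple rib pattern
    for row in knit(ribbing, 4):
        print(row)
```

As with a chart tucked into a knitting bag, the notation is compact and abstract, and the knitter interprets it within boundaries each time it is worked.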

I learned to knit (under duress) but what really fascinated me was weaving.  My grandma’s looms were enormous and beautiful, with different coloured warp threads controlled by foot pedals. The patterns of these threads, combined with the colours of the other materials woven across them, produced the beauty and complexity of the finished rugs and hangings. I marvelled at how grandma kept the pattern and the process in her head – long before I read about how Jacquard created the first programming punch cards to operate looms, in 1801.


one of my grandma’s looms (unstrung)

Of course baking and cooking also follow programs that you can modify within certain boundaries. So you can scale up to six loaves of bread, or modify a recipe when you run out of something.
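That scaling-within-boundaries can be sketched as a tiny program too; the loaf quantities below are invented for illustration:

```python
# Scaling a recipe is, in effect, parameterizing a program:
# the procedure stays fixed while the quantities vary.
# The ingredient quantities below are invented for illustration.

def scale_recipe(ingredients: dict[str, float], factor: float) -> dict[str, float]:
    """Multiply every quantity by a scaling factor, e.g. one loaf -> six."""
    return {name: qty * factor for name, qty in ingredients.items()}

one_loaf = {"flour_g": 500, "water_g": 350, "starter_g": 100, "salt_g": 10}
six_loaves = scale_recipe(one_loaf, 6)
print(six_loaves)
```

Substituting an ingredient when one runs out is the same move: the program is modified, but only within the boundaries the recipe allows.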

These are women’s technologies (or at least they are now – weaving and knitting were men’s work in the past when there was money to be made from them, and professional cooks are still mostly men), which means we might discount them when thinking about new and shiny ways to ‘learn to code’ or ‘get women into STEM’. But they require complex, abstract, programmatic thinking. To make beautiful and tasty things. Here I am with grandma and daughter, eating some tasty things.

cardamom buns

Keeping this in mind, it’s now less surprising for me to remember my mother’s incredible delight in exploring the early Internet. She’d return from work with amazing tales of information she’d found from far-flung countries. When I was shown the web, I was kind of underwhelmed. It took effort to find information – you needed to type commands, use Boolean logic, and navigate around the databases and Usenet groups. But now I suspect that the world of tech made much more sense to my mom than I might have expected. After all, her little sister was an educator at the Computer Museum and has developed an art practice that investigates geometry and topography. The more I think of it, the more I can surface the deep roots of my own interest in technology and culture.

I miss my mom and grandma exquisitely. But I know how much they made me who I am. And now I get to think about how to pass on their legacy not just to my own daughter (shown here in a sweater knit by her great-grandma at age 89) but to many women who might not yet have thought about the connection between knitting, cooking, art, and computing.


Sharing and Responsibility

It’s been a long time since I posted. I’ve been working on lots of things: finishing some writing about knowledge cultures, starting some research on data and ethics, cities and ‘smartness’, and developing some new teaching provision in these areas. Some of what I’ve been working on is up at academia.edu, and much of it is available at my university’s open access repository.

I’ve also been thinking a lot. Often I’m thinking about the stark contrast between the mundane beauty of the everyday and the almost overwhelming complexity of the reality of the world, with its seemingly insoluble problems of climate change, perpetual war, and rising inequality in the rich world. How is it ethically possible to continue to enjoy the benefits of a highly developed society in the knowledge of these problems? What responsibilities do we have?

The barrier to taking on this tension lies in the difficulty of connecting the everyday to the systemic, the banal action to its complex consequence. It requires thinking about the extent to which the global connects to the local, and the present to the unknown future.

This is a picture of my street, located in the middle of an enormous city. It is beautiful, I think. It is also full of complexity. There is a school: an institution with power, with connections to the state. There are trees full of birds and squirrels and foxes. There are lots of people who live on the street who come from different countries in the world and who are all trying to get along in this city. There are airplanes flying in the sky hazed with pollution, in the warm November (and remember, there never used to be warm Novembers).


There are millions of streets in the world. Indeed, most people in the world will shortly be living in cities, if they don’t already. Streets and cities are persistent human constructions. Given that we are now living in a new epoch, an ‘Anthropocene’ characterized by the massive impact on the entire planet of the human species and our particular habits, perhaps we could think more carefully about how we live within these particular environments created and shaped by us.

Even in cities the humans are not the only ones around. Recent research indicates that cities have surprisingly high biodiversity. London supports bee colonies, in part because of lower pesticide use. Foxes are a permanent part of the city environment. On my street there are also snails, slugs, bats, bugs, and rats in abundance too (I am sure there are rats. There are always rats).

So we are somehow managing to live alongside these other creatures, although every time a neighbour replaces their back yard with a big extension I wonder about the consequences. How can we live with others?

This question is valuable in terms of the human world as well. This week I got to go to an event called ‘Design for Sharing’ that launched a report into the practices of collaboration. These are the everyday things that keep neighbourhoods and people together: sharing food, or tools, or trading goods, or time. Although the ‘sharing economy’ of Uber and Airbnb is gaining attention, this is actually a distributed rental economy, and the attention often draws us away from understanding how people share and why.

The research that Design for Sharing presented shows that there are many ways to share – starting with one small thing, weaving people and objects and ideas together. But what is significant is how little ICT tools feature in sharing practices. It seems that in the everyday world of communities and objects, trust and relationships are built face to face. We can contrast this with the way that many relationships including the online ‘sharing economy’ examples are mediated by data, information and metrics. How then are the relationships of trust meant to be constructed? The response, for Uber and Airbnb and many other businesses, is to apply data analytics, and use them to broker the relationship.

This means that sharing relationships can scale up enormously. They are no longer limited by who you know and hence who you trust. There are clearly many possible social gains in this kind of arrangement. But what of the losses? What does it mean to cede judgement to an analytic process? In part it means that only information that can be placed in the process can be considered. For the creation of online relationships, this often means quantified data. We are now starting to understand what the cultural consequences of quantification may be: Benjamin Grosser has written a revealing essay, “What Do Metrics Want?”, about the shift aligned with a culture of metrics. He writes, “Theodore Porter, in his study of quantification titled Trust in Numbers, calls quantification a ‘technology of distance’ that ‘minimizes the need for intimate knowledge and personal trust.’ Enumeration is impersonal, suggests objectivity, and in the case of social quantification, abstracts individuality.”

This abstracting of individuality is part of the influence that the metrics have within the system. This influence is oriented around the idea of ‘more’ – more measurement, more participation, more value for the owners of Facebook. And the quantification of social interaction simultaneously renders the content and meaning of the interaction less valuable.

This is the precise opposite of the kinds of intimate trust relationships that motivate people to solve problems together. It is also a dangerous reduction of the kind of relational complexity that I evoked when I wrote about the many things, beings, and systems that exist and interact on the street where I live. What is important becomes what can be measured, and what is measured becomes what is valuable. But what of the things that are difficult to measure, like the feeling of the leaves, or the friendliness of the neighbours? Or even those things that are transformed through the process of measurement, like a sense of community? What might be lost in the measuring process?

I would like to think of another way of being responsible. Everything counts, yes, but what if we thought that everything matters?