
Algorithms, Accountability, and Political Emotion


Last week (it seems a century ago) I was at the Big Boulder social data conference discussing the use of algorithms in managing social data. Then, since I live in the UK, Brexit events intervened. Sadness and shock for many have since morphed into uncertainty for all. Online media, driven by the social analytics I heard about in Boulder, shape and intensify these feelings as we use them to get our news and connect with people we care about. This raises some really important issues about accountability, especially as more news and information about politics gets transmitted through social media. It also stirs up some interesting questions about the relation between the industry's focus on sentiment analysis of social media around brands, and the rise of emotion-driven politics.

So in this post I’ll talk about why algorithms matter in moments of uncertainty, what it might mean to make them accountable or ethical, and what mechanisms might help to do this.

  1. Algorithms present the world to you – and that’s sometimes based on how you emote about it

Algorithmic processes underpin the presentation of news stories, posts and other elements of social media. An algorithm is a recipe that specifies how a number of elements are supposed to be combined. It usually has a defined outcome – like the relative ranking of a post in a social media newsfeed. Many different data are introduced, and the algorithm's function is to integrate them in a way that delivers the defined outcome. Many algorithms can work together in the kinds of systems we encounter daily.
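To make the 'recipe' metaphor concrete, here is a toy sketch of a ranking algorithm. It is not any platform's actual algorithm: the signal names and weights are invented for illustration. The point is only that several kinds of data go in, and one defined outcome – a relative ranking – comes out.

```python
# A toy feed-ranking 'recipe': combine several signals about each post
# into a single score, then sort the feed by that score.
def rank_score(post, weights=None):
    # Hypothetical signals and weights, chosen only for illustration.
    weights = weights or {"likes": 1.0, "comments": 2.0,
                          "recency": 3.0, "affinity": 1.5}
    return sum(weights[k] * post.get(k, 0) for k in weights)

posts = [
    {"id": "a", "likes": 120, "comments": 4, "recency": 0.2, "affinity": 0.1},
    {"id": "b", "likes": 10, "comments": 30, "recency": 0.9, "affinity": 0.8},
]

# The defined outcome: a relative ranking of posts in a feed.
feed = sorted(posts, key=rank_score, reverse=True)
```

Changing the weights changes which post 'wins' – which is exactly why the choices baked into such recipes matter.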

One element of algorithmic systems that I find interesting at this moment in time is sentiment. Measuring how people say they feel about particular brands in order to better target them has been a key pillar of the advertising industry for decades. With the expansion of social analytics, it's now also the backbone of political analysis aimed at seeing which leaders, parties and approaches to issues acquire more positive responses. But could too much of a focus on sentiment also intensify emotional appeals from politicians, to the detriment of our political life? What responsibility do social media companies bear?

Social Media Companies Filter Politics Emotionally

Increasingly, media companies are sensitive to the political and emotional characteristics of responses to the kinds of elements that are presented and shared. Sentiment analysis algorithms, trained on data that categorizes words as 'positive' and 'negative', are widely employed in the online advertising sphere to try to ascertain how people respond to brands. Sentiment analysis also underpinned the infamous 'Facebook emotion study', which sought to investigate whether people spent more time using the platform when they had more 'positive' or 'negative' posts and stories in their feeds.

With the expansion of the emotional response buttons on Facebook, more precise sentiment analysis is now possible, and it is certain that emotional responses of some type are factored into the subsequent presentation of online content, along with other signals like clicking on links.

Sentiment analysis is based on categorizations of particular words as 'positive' or 'negative'. Algorithms that present media in response to such emotional signals have to be 'trained' on this data. For sentiment analysis in particular, there are many issues with training data, because the procedure depends on the assumption that words are most often associated with particular feelings. Sentiment analysis algorithms can have difficulty identifying when a word is used sarcastically, for example.
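A minimal lexicon-based scorer shows both the method and its sarcasm problem. This is a sketch, not a production system: the word lists are hand-made for illustration, whereas real systems use much larger trained lexicons. The failure mode is the same either way: the words alone carry the label, so a sarcastic sentence scores as if it were sincere.

```python
# Hand-made word lists standing in for a trained sentiment lexicon.
POSITIVE = {"great", "love", "wonderful", "brilliant"}
NEGATIVE = {"terrible", "hate", "awful", "delay"}

def sentiment(text):
    # Score each word by its lexicon label; ignore everything else.
    words = text.lower().split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

sentiment("what a wonderful day")           # sincere praise scores positive
sentiment("brilliant another train delay")  # sarcasm: 'brilliant' cancels 'delay'
```

The sarcastic complaint comes out neutral rather than negative, because the lexicon sees only the word 'brilliant', not the speaker's intent.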

Similarly, other algorithms used to sort or present information are also trained on particular sets of data. As Louise Amoore’s research investigates, algorithm developers will place computational elements into systems that they build, often without much attention to the purposes for which they were first designed.

In the case of sentiment analysis, I am curious as to the consequences of long term investments in this method by analytics companies and the online media industry. Especially, I’m wondering about whether focusing on sentiment or optimizing presentation of content with relation to sentiment is in any way connected to the rise of ‘fact-free’ politics and the ascendancy of emotional arguments in campaigns like the Brexit referendum and the American presidential primaries.

  2. Algorithms have to be trained: training data establish what’s ‘normal’ or ‘good’

The way that sentiment analysis depends on whether words are understood as positive or negative gives an example of how training data establishes baselines for how algorithms work.

Before algorithms can run ‘in the wild’ they have to be trained to ensure that the outcome occurs in the way that’s expected. This means that designers use ‘training data’ during the design process – data that helps to normalize the algorithm. For face recognition, the training data will be faces; for chatbots it might be conversations; for decision-making software it might be correlations.
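A trivial example makes the point that training data establishes the baseline. The sketch below is deliberately crude (a classifier that just learns the majority label in its training set, with invented labels), but it shows how the same learning procedure yields opposite 'normals' depending on what it was trained on.

```python
from collections import Counter

def train_majority(labeled_examples):
    # Learn only the most common label in the training data,
    # then predict that label for every unseen input.
    majority = Counter(label for _, label in labeled_examples).most_common(1)[0][0]
    return lambda _unseen: majority

# Two hypothetical training sets with opposite baselines.
clf_a = train_majority([("x1", "safe"), ("x2", "safe"), ("x3", "risky")])
clf_b = train_majority([("x1", "risky"), ("x2", "risky"), ("x3", "safe")])

clf_a("new case")  # judged by training set A's baseline
clf_b("new case")  # the same case, judged by training set B's baseline
```

Real systems are far more sophisticated, but the dependency is the same: whatever the training data treats as 'normal' becomes the system's default judgment of the world.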

But the data that’s put in to ‘train’ algorithms has an impact – it shapes the function of the system in one way or another. A series of high-profile examples illustrates the kinds of discrimination that can be built into algorithms through their training data: facial recognition algorithms that categorize black faces as gorillas, or Asian faces as blinking; systems that use financial risk data to train algorithms underpinning border control; historical crime data used to train ‘predictive policing’ systems that direct police patrols to places where crimes have occurred in the past, focusing attention on populations who are already marginalized.

These data make assumptions about what is ‘normal’ in the world, from faces to risk taking behavior. At Big Boulder a member of the IBM Watson team described how Watson’s artificial intelligence system uses the internet’s unstructured data as ‘training data’ for its learning algorithms, particularly in relation to human speech. In a year where the web’s discourse created GamerGate and the viral spread of fake news stories, it’s a little worrying not to know exactly what assumptions about the world Watson might be picking up.

So what shall we do?

  3. You can’t make algorithms transparent as such

There’s much discussion currently about ‘opening black boxes’ and trying to make algorithms transparent, but this is not really possible as such. In recent work, Mike Ananny and Kate Crawford have created a long list of reasons for this, noting that transparency is disconnected from power, can be harmful, can create false binaries between ‘invisible’ and ‘visible’ algorithms, and doesn’t necessarily create trust. Instead, it simply creates more opportunities for professionals and platform owners to police the boundaries of their systems. Finally, Ananny and Crawford note that looking inside systems is not enough, because it’s important to see how they are actually able to be manipulated.

  4. Maybe training data can be reckoned and valued as such

If it’s not desirable (or even really possible) to make algorithmic systems transparent, what mechanisms might make them accountable? One strategy worth thinking about might be to identify or even register the training data that are used to set up the frameworks that key algorithms employ. This doesn’t mean making the algorithms transparent, for all the reasons specified above, but it might create a means for establishing more accountability about the cultural assumptions underpinning the function of these mechanisms. It might be desirable, in the public interest, to establish a register of training data employed in key algorithmic processes judged to be significant for public life (access to information, access to finance, access to employment, etc). Such a register could even be encrypted if required, so that training data would not be leaked as a trade secret, but held such that anyone seeking to investigate a potential breach of rights could have the register opened on request.
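One way such a register might work, sketched very loosely: the training data itself stays private, but a cryptographic digest of it is lodged publicly. An auditor later granted access to the real data can then verify it against the register. Everything here – the entry format, the system name, the record shape – is hypothetical; this only illustrates the commit-now, audit-later idea.

```python
import hashlib
import json

def register_entry(system_name, training_records):
    # Canonicalize the records and commit to them with a SHA-256 digest.
    # The digest can be published without revealing the records themselves.
    digest = hashlib.sha256(
        json.dumps(training_records, sort_keys=True).encode()
    ).hexdigest()
    return {"system": system_name, "training_data_digest": digest}

# Hypothetical training records for a hypothetical ranked-feed system.
records = [{"text": "great service", "label": "positive"}]
entry = register_entry("newsfeed-ranker", records)

# Later, an auditor with access to the real records can recompute the
# digest and check that it matches what was registered.
verified = register_entry("newsfeed-ranker", records) == entry
```

This is only the integrity half of the proposal; who holds the register, and under what legal conditions it is opened, are the harder questions.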

This may not be enough, as Ananny and Crawford intimate, and it may not yet have adequate industry support, but given the failures of transparency itself it may be the kind of concrete step needed to begin firmer thinking about algorithmic accountability.

Ethics of Perverse Systems

Things, of course, cannot go on as they are. The rate of environmental destruction, fossil fuel burning, reactionary politics and censure of debate is of course untenable. So is the high capitalist solution of monetizing every remaining speck in the universe, trading on its futures and leveraging the outcome to secure the fortunes of the fortunate and lock many others into destitution. Abominable is the lack of empathy and the xenophobic turn of politicians (and, I expect, in some way all sorts of people) in the face of people desperate to escape war, imprisoned on borders instead of welcomed and settled.

And so, also, for those of us concerned with the capacity to become ourselves by expressing ourselves, there is the intensification of surveillance of our everyday life – surveillance which we know changes our behavior, making us less forthright with ideas, keeping our radical thoughts to ourselves lest they be too disruptive to be heard.

How then should we go on?

Some say we shouldn’t bother. The popular press cover all of the above issues in ways that often appear calculated to disempower. Facebook will feed you ads whether you subscribe to it or not. The sea level will rise whether you drink tap water or not. The rich will manipulate government whether you vote or not. Selfishness is inevitable, social collapse perhaps as well. Some progressive thinkers embed this stance into hope for a post-apocalyptic regeneration of life, but one that is predicated on the suffering of many as the inevitable excess of consumption reaches peak cruelty (perhaps at the same time as peak oil). Some conservatives also see this as inevitable, but aim perhaps to be among the few who benefit. Populists of various political persuasions focus on the villains contained in various pieces of this puzzle. All of it suggests that we are naturally, inevitably horrible people.

Naomi Klein’s recent article in the London Review of Books (based on her Edward Said lecture) reminds us that there are other ways of thinking. She refers to the ‘seven generation’ rule that stipulates that we should think about the long-term impact of any action, and leave the natural world in an improved state for those who are to come. She resists the idea of ‘sacrifice zones’ where the land and lives of poor/black and brown people are offered up to safeguard the places that the rich inhabit. Only by not seeing these lives as truly equal – as ‘others’ who can’t really be human – is anyone able to justify this. This follows from Said’s work in defining how Orientalism, this ‘othering’ of people outside the places where power defines itself to reside, justifies treatment that dehumanizes them while also assuring the continuation of easy lives elsewhere. Klein suggests we resist sacrifice and focus on solidarity. This requires the capacity for tolerance and respect for all humans as well as others – as philosopher Achille Mbembe has also pointed out.

Perverse Systems

Klein’s article also started me thinking about one of the key questions of my book project. Is it possible to be hopeful about a technological world? Advanced technology, even of the communicational type that is my focus, is so deeply bound up with the impossible expansion of value extraction from every facet of experience, and by association with violence and exclusion. If my recent research is any indication, these attempts to extract value now reach into the very material of ordinary life, and into our own attempts to make it meaningful by connecting to each other and to ourselves. My previous research has indicated that the same mad dash to extract value that angers indigenous people in Brazil, Canada and the USA – whose rights to be upon and with their land are disappeared to permit more resource exploitation – has led mobile phone companies to essentially disappear the right to privacy of their subscribers. In exchange for cheaper calls (and to compensate for expensive investments), location data are collected and packaged. Some companies operate subsidiaries that analyse and sell these data. Both of these activities are ruthless exploitations of realms of life that on their own have meaning and substance on far different registers than their valuation as commodities might suggest. Ethically as well as economically, these are painful, woeful, terrible responses. They create and sustain perverse systems. And because these systems are unfolding in so many places and on so many scales, it seems impossible to conceive of how to think otherwise.

Yet thinking otherwise and working otherwise is also essential, because alternatives are also unfolding in many areas and at many scales, often without much attention.

In 1998 I took a course in environmental philosophy called Environment Enquiry, taught by environmental philosopher Bob Henderson. We read Daniel Quinn’s 1991 philosophical novel Ishmael, which broadly sketches this approach by contrasting the Takers (I think you can probably work out their motivations and actions) with the Leavers, who enact ‘seven-generation’ values and who are bound into traditions and rhythms that hold them. Nearly two decades later, the original text reads problematically, with a fair dose of nostalgia for an imagined past tribalism and a dash of the ‘noble savage’. Despite the naïveté, there may be some value in the broader opposition between Leavers and Takers, provided we redefine what they are to take account of what we know of the world. In my mind the Leaver category requires contribution as well as living with difference. This isn’t quite how Quinn thought about it, but it is how Said and Mbembe do. Living with difference is really hard. It starts with believing that everyone (yes *everyone*) has the same importance, but that they will enact their own importance in totally different ways.

How could we conceive technological systems built by Leavers? Neo-tribalists would probably point to the mythological ‘original internet’. Others might look to the leveraging of worldwide networked communications by small groups of people who organize to occupy and slow down extractive capitalism. Oh right, that would be Occupy. Still others might point to commons-based organization of resources including intellectual property. Oh right – distributed local communication networks. But what about the other things I’ve added to the concept? How can technologies move away from not only embedding difference and Othering but also weaponizing it? Surveillance technologies, for example, do an excellent job of this – collecting more personal information from poor/black and brown people and hence reinforcing difference and threat. It may be possible to think about sensing technology under radically different organizational and cultural conditions, for example, much as these other examples begin from different positions.

I want to identify and celebrate these examples of working differently, but I have also critiqued some of them in my work. I hope I haven’t overplayed the critique – its purpose was in many cases to identify how difficult it is to move progressive projects away from the knowledge and exchange cultures of currently dominant work. This cuts across many parts of the tech sphere. Personal privacy, for example, involves taking something of value and holding it apart, rather than sharing it and creating relationships through the exchange. This reciprocity and openness, this fluidity, is one of the most frustrating things about abandoning the notion of the individual liberal subject. Equally, the perspective of individual responsibility that underpins many projects for contribution of data or expertise as the foundations of citizenship underplays how complex our sense of responsibility may be when it is always tempered with coercion.

Living another way

In much critical theory of technology I read a profound worry about technology itself. Ursula Franklin argues that technologies are real worlds composed of practices that we undertake all the time, and that they can, through the way they are built, imagined and administered, dismiss entire ways of knowing and being. My work focuses on these practices, but never quite gets away from the worry, as I never manage to square the circle of how or whether technologies could be otherwise. But I know that there are ways of organizing beyond hierarchy, and ways of living beyond value extraction. I am certain that these have communicational elements attached to them as well, and that some of these depend on the construction of technological systems. If this is an act of faith, I will claim it – and try as hard as I can to contribute to making it so.





A surfeit of care

I am suffering from a surfeit of care. I really care. A lot. About a lot of things.

I care that the climate is changing, fast, and that people and animals will die as a result – are dying already, as refugees flee from a war accelerated by drought, and new famines begin in Southern Africa, and as ice melts in the Arctic at a pace never imagined. (I care also that I never managed to visit the Arctic before it melted, but I am not so sure now that I believe any longer in the great education of travel. I worry about going anywhere because I think I might be too sad at what has already gone).

I also care that these changes mean not only death of beings but death of ideas: the knowledge of the seasons, the patterns of the past, the ability to feel a part of nature rather than its enemy.

I care that governments here and in many places have turned away from valuing people and the things that they can build together, and have disassembled and partitioned and sold the very things that make society possible: education, health care, access to water, access to knowledge. I care, thus, about policy and procedure, and about the devil in the details of governance documents and institutional arrangements and public oversight. I care about principles, and I will argue them based on careful research.

I care about my students and try to show them a world of ideas that is beyond their own experience; and in teaching them about the hopefully still expansive possibilities of the world I try to convince myself of the same. I care about the ideas themselves: I want them to see that the world, even the material and technical world, is formed of ideas about how to best go about being in it; and even when it appears fixed, it is always changing.

I care about my family, about teaching my daughter things that will help her survive in an uncertain and perhaps incoherent world. I try to wire into her brain the old stories and the new ways, confidence in herself and practical skills and empathy, because surely she will need it. I try also to live gracefully and lovingly alongside the In-House-Hacker, even though I’m so swamped with care that I must sometimes seem bereft.

I care about birds, and toads, and plants, and trees, and forests and animals and people I have never met and never will. I am the result of a globalization of knowledge and the victim of a globalization of care.

And all this care keeps my heart in my throat, makes my skin prickle with sensitivity to every news story about another outrage. It makes me grieve for a certainty I never believed in, and hope for transformations that I sometimes fear I am simply too frightened to force through myself. I worry that I am not doing enough – with every plastic tray I purchase in opposition to my clear desire to live a sustainable life, with every petition I sign knowing it won’t make a difference. With every demo I march on, even. I worry that it is all sound and fury. Because I really, really care.

And somewhere deep down I wish to be released from this care. I wish I could simply detach from the problems of the world, perhaps by ceasing to be an optimist and assuming that I could (WE COULD) never solve them anyway, so why bother. Or maybe by becoming a hedonist and floating away on a cloud of pure experience, unsullied by critique.

But in the here and now, and inside the only mind I have, I’m struggling not to be submerged. Struggling to find a thread and a story that associates rather than dissociates, that integrates and grounds and makes the world meaningful – or makes a new world seem possible, even in the ever-present wreckage of the old.

I don’t know how to do it but to turn heartache to a song, fear to determination, anxiety to optimism. I don’t know how to do it but to keep swimming, keep kicking, keep breathing and moving and loving. How do you do it? How do you keep afloat?

Urgency and Complexity: New Thinking

It’s the new year. The sun shines orange, veiled in vapour, and warmer than the seasonal average, of course.


2015 was, for me, the year I came to terms with the real ending of the world. That is to say, the modern world that I was born in and grew up in and which, from my perspective, started unravelling when I learned about global warming at 14 but no one (including me) did anything about it. I greeted the financial collapses of 2008 with a kind of perverse hope that this would be the watershed event that would provoke the end of the world and the beginning of another.

Maybe it did.

But the planes still fly and make their vapour trails, and demand is still projected to increase. The machine rumbles on, into the abyss. The trees of the 70-year-old woodlot across from my daughter’s daycare have been felled, the pond has been filled, and the birds have taken to the sky to protest. We haven’t heard from the frogs yet.

The End, but not yet Something New

We all know that it’s over – that capitalism’s promise of continuous growth is impossible on a single planet of finite resources. But the next thing has not yet come. I spent much of 2015 in quiet despair at the impossibility of ethical existence within a system that destroys humans and all other living beings by its very design. But lately I have been moving into the place past despair, which is not hope but perhaps a fierce joy in existence, brief and complex. I can recognize my blue mood as deep grief, not for a person this time but for the world that I knew – the world that is still with us but which must soon become something else. Donna Haraway writes that our responsibility at this point is to make the Anthropocene, or whatever it is called (she goes with Chthulucene, in line with the great makers and remakers Gaia, Medusa, Spider Woman), as short as possible, since it is probably best understood as the boundary between epochs.

So the new thinking I’m embarking on is dedicated to trying to make a path through the Dithering, a thread of some type (probably not linear, but maybe red) from the place we are to some other place where we can see ourselves. This is a task for all of those who have never liked disciplines, a good task for a grieving environmentalist (as I’ve always been) but also a good task for someone who wants to think about communicative relationships, humans and nonhumans.

Decentering humans; understanding complexity

In particular, I have identified how mediation appears to be a crucial concept linking two key lines of thinking: one focused on removing humans from their hubristically defined position as superior to other beings on earth, the other on how complex adaptive systems can move past oppositional (or even dialectical) engagement. Since the earth environment is one of these complex adaptive systems, and since it is constantly in the process of changing, this set of thoughts is equally relevant for the project. It’s also at the heart of John Durham Peters’ excellent book The Marvellous Clouds.

In the first line of thinking, Haraway proposes an ethics of kinship that connects the human with many others, especially those who are alien or not alike. This development and sustenance of relationships outside of the known is an extension of her work on cyborg identities, and she, like others working in this area, calls for a renewed sense of connection with the other beings of the world. Similarly, Robin Wall Kimmerer offers the insight that humans might, in gratitude to the rest of creation, pay attention. She writes, “Paying attention to the more-than-human world doesn’t lead only to amazement; it leads also to acknowledgment of pain. Open and attentive, we see and feel equally the beauty and the wounds, the old growth and the clear-cut, the mountain and the mine. Paying attention to suffering sharpens our ability to respond. To be responsible.”

However not all thinkers committed to decentering humans from the web of experience think that nature will respond in any way to our attention. Isabelle Stengers proposes in her recent work a Gaia that is powerful and implacable. There is no resource to this nature. It cannot be perceived or engaged with. Stengers writes, “we will have to go on answering for what we are undertaking in the face of an implacable being who is deaf to our justifications” (47). This means that none of the modes of mediation on which we have come to rely (including of course measurement and sensing) could bring humans closer to perceiving the natural world.

I need to spend some more time with Stengers to see whether there might be some new insights on media (although not likely communication) from her implacable Gaia, but I find it interesting that she also entreats us to ‘pay attention.’ Attention is what many media scholars spend time discussing, and what media companies try to measure. Attention, like so many things, has slid into being commodified. So paying attention to things outside ourselves that are also part of ourselves is a radical act indeed. Paying attention means having to face the terrible realization that as I wrote this I cleaned the bathroom – and that the poisoned water I produced might kill the kin of the birds I watch out the window. On the other hand, once one is really paying attention, all the analytic and creative energies that one possesses can be directed. And the consequences of that attention might invite creativity and solutions that apply not only to climate or ecological crisis but to all sorts of situations where the notion of progress has come undone, and the dynamic of opposition and dialectic no longer applies.

The second strand of my reading and thinking concerns how to understand the dynamics of complex adaptive systems. This is inspired by rereading Robin Mansell’s Imagining the Internet, where she identifies the multiple hierarchical and heterarchical levels that interrelate within communication systems. Rather than seeking to optimize or rationalize, systems processes tend towards self-maintenance, and can’t always be massaged into producing cause-and-effect relationships.

Consideration of complex adaptive systems doesn’t seem too difficult to fit with the first set of readings. If anything, it aligns with the notion of relationship that we value through ‘paying attention’. It also opens out the possibility for paradox and unintended (or unknowable) outcomes.

Again, while thinking this way might help to draw a thread towards the urgent and enormous questions of our time, it will also help to address any number of smaller (and no less pressing) questions of justice and the ‘good society’ that we need to address on the way.

Next post soon – more on complexity and some on ethics and relationships.

Why I’ve Declined Your Kind Invitation (and why you should try again)

An Open Letter to everyone who’s recently invited me to speak at their event.

I really want to attend your event. It’s probably very close to my current interests – technological citizenship, ‘smart cities’, the Internet of Things, ethics and communication rights. There are probably really great people coming to this event, people who share these interests and with whom I’d have amazing discussions and maybe even collaborate in the future.

I know that if I don’t attend your event all of these opportunities are lost.

And yes – I’m still working on the kinds of things that caused you to invite me. My book proposal on technological citizenships is out for review. I have a paper on open source knowledge and IoT/citizen science projects that’s nearly published. I’m as enthusiastic as ever about meeting and working with cities, communities and activists who are using data and sensing technologies to tell their own stories and change the governance of their cities and communities.

But getting that work done is difficult. At the moment I’m a solo researcher – attending your event might help me meet more collaborators, but it also takes time from reading, writing, interviewing and putting together grant proposals. Not to mention leading a new MSc programme in Data & Society – and organizing my own events as part of this.

I also have full time teaching responsibilities, and a young family with another parent who also works long hours.

Right now, I’m not attending your event because I’m committed to getting the serious work done – researching, writing and thinking carefully so I have something significant to contribute. I know that this has some risks, but I want to take time to understand what’s happening and what’s at stake. I’ve decided not to spend my time running from lecture theatre to airport and back to pack in all of the experiences I can. I hope this makes my work better – and more important for all of us.

So please – don’t assume that since I’ve declined this time, I’m not interested.

Please invite me again. Share your event feedback. Let me know what you are working on. Maybe together we can find a way to advance our research without exhausting ourselves.




Rights, communication and the refugee crisis (or, how the real world made my research project better)

I have started working on a book, and this week I feel guilty about writing it. The book is about the ways that technologies, citizenship and urban life produce one another. I start in the 1990s, in the conceptual space of rights definition and rights claims, including claims related to communication rights as well as renewed claims for “rights to the city”. In this time, we talked about remaking the city, perhaps virtually, but also about fighting for its public space. This paradigm is fading, though, and in the next part of the book I write about how data and citizenship combine, how large-scale data collection and analysis shape the ways that people feel that they can and do act, and how activists and advocates try to resist the dominant ways that data is collected and used. Certain kinds of surveillance dynamics are created by this collection and use, but there are also potential ways to resist it (albeit by demanding more individual responsibility). Looking forward, I also analyse how sensing technologies that collect intimate data intensify the ways that these experiences of surveillance and individualization occur, perhaps making us into “very predictable people”, as one journalist has suggested. Sensor citizenships are all about risk: predicting it, gathering data to better describe it, reducing it. It’s chilling to consider how normalized and constrained the everyday life of the otherwise free and privileged might become – but also perhaps inspiring to consider the positive ways that embedded sensing technologies might be used: to facilitate collaboration, or to spur citizen science.

So while I am writing this careful, rather restrained analysis of citizenship and communication, the Western world is exploding with a crisis of citizenship. Thousands of people are fleeing war and danger and the European state machinery is singularly failing to accommodate them, to the extent that preventable deaths have captured the public imagination. And my tiny circumscribed musings on the ways that communication and data technologies create different citizenships seem feeble in the face of this overwhelming pain and complexity.

But the events I’m following have given me a bit of a chance to think through some of the ideas I am working on. I have been asked why I’m interested in cities, technology and citizenship, and my answer is that state conceptions of citizenship are under strain, and in cities people simply arrive and have to negotiate their belonging. In the refugee crisis, many of the actions of European states show the fractures in the rights-based, state-level model of citizenship – including the inadequacy of the Dublin III regulation for refugee registration as well as the hesitation of some states, like the UK, to accept more refugees.

Equally, the situation also shows the ways that networked citizenship can operate, by capturing and shifting the political mood and discourse – talking about people and experiences rather than “swarms of migrants”. This has surely been helped along by the swift, meme- and hashtag-driven discussion on social media, and amplified by the mass media (I wrote about how this happens in advocacy movements here). I’m moved by the efforts of people I know who are working hard to get communications access to people stranded at the train station in Budapest.

Less encouragingly, the refugee crisis also demonstrates the fraying of the rights paradigm. Refugees have rights to asylum, but states do not wish to grant them. So people move. They create new situations by their presence, by their refusal to be moved. This is a riskier tactic than claiming rights, and a worrying trend. It also intersects with the kind of individualization that is tied to data production. I have just noticed that one of the key concerns of EU governments is the collection of more data about refugees, with the purpose of tracking them more closely as they move. This of course sounds like a good idea, but it depends on a strong and trusted power to oversee the collection and tracking. As strong right-wing (even fascist) governments rise to power or exert more influence across Europe, we must ask whether this trust is well placed.

Finally, the refugee crisis has had me thinking a lot about my hope for the book: that I might be able to bring back into the high-tech discussions of future technology some essential human qualities that are often poorly considered or “designed out”. Qualities like empathy. Care. Husbandry and maintenance of the environments around us. These are qualities that I believe to be essential to cultivate, not only in our societies (where they always have been) but also in the technological systems that support the functioning of societies. In this late summer of crisis and pain, empathy is what motivates thousands to call for refugee acceptance or to donate materials and time. It is what we seek to generate when we communicate stories about people fleeing. It is of course what makes us human.

In my small work I hope to demonstrate that this greatest of all human qualities need not be laid aside, not in our institutions nor in our technology systems. After donating to help refugees and praying for all of the desperate people, it’s the least I can do.

Women’s Technology (honouring Betty Pezalla, 1924-2015 and Barbara Powell, 1950-2002)

My grandmother died this week. Parent to five, grandparent to 14, great-grandparent to 12. After a childhood during the Depression, she went to college to study home economics, but her true passion was fibre arts. She spun, dyed, knitted, felted and wove sweaters, scarves, rugs, baskets, animals, wallhangings, and sundry other beautiful things. In middle age she retrained as an art teacher and went back to work – in the mid-1960s Midwest USA. She exhibited her work in galleries well into her 80s. Here she is with my daughter, sometime in 2012.


My mother died thirteen years ago this week. Parent to three, senior university administrator, violinist, baker, master fart-joke teller. She earned a PhD with two children underfoot, then went on to write a book that surfaced women’s histories hidden in archives. She also baked six loaves of sourdough bread every Saturday while listening to the opera, and loved going to garage sales. Here she is, fierce, with her brother at a wedding in the mid-1970s.

mom paul

I cannot tell you the number of things I learned from these women. Confidence in my intelligence. The truth about ambition and responsibility. A love of family. Generosity.

One thing I learned though that I don’t often think about was a passion for new technology and technical thinking. This, along with everything else has shaped me, and I want to write a little more about it.


My grandma’s studio

Both my mom and my grandma knit. They had bags of wool with needles that they toted around with them to fill up moments of time – watching TV or listening to the radio, sitting in on kids’ music lessons, riding in the car. These bags contained magical charts laying out the stitching patterns needed to make a cable, or a rosette, or a cuff. I didn’t know it then but these charts and their notation are a form of programming – a set of abstract schematics to be followed (and interpreted, within boundaries) that create an entire new product.
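To make that concrete, here is a tiny sketch of how a chart row behaves like a program: named operations, counts, and repeats. (The stitch codes and function names are my own invention, not real chart notation.)

```python
# A knitting chart row as a tiny program: symbols name operations,
# numbers give counts, and a pattern repeat works like a loop.

STITCHES = {"K": "knit", "P": "purl", "YO": "yarn over"}

def expand_row(row, times=1):
    """Expand a chart row like ['K2', 'P1'] into individual stitches,
    repeating the whole sequence `times` times (a pattern repeat)."""
    stitches = []
    for _ in range(times):
        for token in row:
            # Split a token like 'K2' into the operation and a count.
            op = token.rstrip("0123456789")
            count = int(token[len(op):] or 1)
            stitches.extend([STITCHES[op]] * count)
    return stitches

# A 2x2 rib: knit two, purl two, repeated across the row.
print(expand_row(["K2", "P2"], times=2))
# ['knit', 'knit', 'purl', 'purl', 'knit', 'knit', 'purl', 'purl']
```

The interpretation “within boundaries” happens off the chart, in the knitter’s choices of yarn, gauge and tension – the same schematic yields a different object in different hands.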

I learned to knit (under duress) but what really fascinated me was weaving.  My grandma’s looms were enormous and beautiful, with different coloured warp threads controlled by foot pedals. The patterns of these threads, combined with the colours of the other materials woven across them, produced the beauty and complexity of the finished rugs and hangings. I marvelled at how grandma kept the pattern and the process in her head – long before I read about how Jacquard created the first programming punch cards to operate looms, in 1801.
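In the same spirit, a Jacquard card can be read as a row of bits: each hole decides whether a warp thread lifts for that pass of the weft. A small sketch of that idea (a deliberately simplified text rendering, not a real weaving program):

```python
# Each card row is a row of bits: 1 lifts a warp thread (weft passes
# under it), 0 leaves it down (weft passes over). Rendering a draft
# as text makes the woven structure visible.

def weave(draft, width):
    """Render each card row across `width` warp threads.
    '|' = warp on top, '-' = weft on top."""
    rows = []
    for card in draft:
        rows.append("".join("|" if card[i % len(card)] else "-"
                            for i in range(width)))
    return rows

# A 2/2 twill: the lifted pair shifts one thread per pick,
# producing the familiar diagonal.
twill = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]
for line in weave(twill, 12):
    print(line)
```

Keeping a draft like this in your head, pick after pick, is exactly the feat I watched my grandma perform at the loom.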


One of my grandma’s looms (unstrung)

Of course baking and cooking also follow programs that you can modify within certain boundaries. So you can scale up to six loaves of bread, or modify a recipe when you run out of something.
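Scaling a recipe really is a parameterized program. A minimal sketch, with illustrative quantities (not my mother’s actual recipe):

```python
# A recipe as a parameterized program: the ingredient list is data,
# and scaling is a transformation applied within sensible boundaries.

from fractions import Fraction

def scale_recipe(ingredients, loaves, base_loaves=1):
    """Scale each quantity by loaves/base_loaves.
    Fractions keep halves and thirds exact instead of rounding."""
    factor = Fraction(loaves, base_loaves)
    return {name: qty * factor for name, qty in ingredients.items()}

one_loaf = {"flour_g": 500, "water_g": 350, "starter_g": 100, "salt_g": 10}
six_loaves = scale_recipe(one_loaf, loaves=6)  # flour_g scales to 3000, etc.
```

Running out of something is just another modification within the program’s boundaries: substitute an ingredient and keep the ratios.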

These are women’s technologies (or at least they are now – weaving and knitting were men’s work in the past when there was money to be made from them, and professional cooks are still mostly men), which means we might discount them when thinking about new and shiny ways to ‘learn to code’ or ‘get women into STEM’. But they require complex, abstract, programmatic thinking. To make beautiful and tasty things. Here I am with grandma and daughter, eating some tasty things.

cardamom buns

Keeping this in mind, it’s now less surprising for me to remember my mother’s incredible delight in exploring the early Internet. She’d return from work with amazing tales of information she’d found from far-flung countries. When I was shown the web, I was kind of underwhelmed. It took effort to find information – you needed to type commands, use Boolean logic, and navigate around the databases and usenet groups. But now I suspect that the world of tech made much more sense to my mom than I might have expected. After all, her little sister was an educator at the Computer Museum and has developed an art practice that investigates geometry and topography. The more I think of it, the more I can surface the deep roots of my own interest in technology and culture.

I miss my mom and grandma exquisitely. But I know how much they made me who I am. And now I get to think about how to pass on their legacy not just to my own daughter (shown here in a sweater knit by her great-grandma at age 89) but to many women who might not yet have thought about the connection between knitting, cooking, art, and computing.


Sharing and Responsibility

It’s been a long time since I posted. I’ve been working on lots of things: finishing some writing about knowledge cultures, starting some research on data and ethics, cities and ‘smartness’, and developing some new teaching provision in these areas. Some of what I’ve been working on is up online, and much of it is available at my university’s open access repository.

I’ve also been thinking a lot. Often I’m thinking about the stark contrast between the mundane beauty of the everyday and the almost overwhelming complexity of the reality of the world, with its seemingly insoluble problems of climate change, perpetual war, and rising inequality in the rich world. How is it ethically possible to continue to enjoy the benefits of a highly developed society in the knowledge of these problems? What responsibilities do we have?

The barrier to taking on this tension lies in the difficulty of connecting the everyday to the systemic, the banal action to its complex consequence. It requires thinking about the extent to which the global connects to the local, and the present to the unknown future.

This is a picture of my street, located in the middle of an enormous city. It is beautiful, I think. It is also full of complexity. There is a school: an institution with power, with connections to the state. There are trees full of birds and squirrels and foxes. There are lots of people living on the street who come from different countries around the world and who are all trying to get along in this city. There are airplanes flying in a sky hazed with pollution, in the warm November (and remember, there never used to be warm Novembers).


There are millions of streets in the world. Indeed, most people in the world will shortly be living in cities, if they don’t already. Streets and cities are persistent human constructions. Given that we are now living in a new epoch, an ‘Anthropocene’ characterized by the massive impact on the entire planet of the human species and our particular habits, perhaps we could think more carefully about how we live within these particular environments created and shaped by us.

Even in cities the humans are not the only ones around. Recent research indicates that cities have surprisingly high biodiversity. London supports bee colonies, in part because of lower pesticide use. Foxes are a permanent part of the city environment. On my street there are also snails, slugs, bats, bugs, and rats in abundance too (I am sure there are rats. There are always rats).

So we are somehow managing to live alongside these other creatures, although every time a neighbour replaces their back yard with a big extension I wonder about the consequences. How can we live with others?

This question is valuable in terms of the human world as well. This week I got to go to an event called ‘Design for Sharing’ that launched a report into the practices of collaboration. These are the everyday things that keep neighbourhoods and people together: sharing food, or tools, or trading goods, or time. Although the ‘sharing economy’ of Uber and Airbnb is gaining attention, this is actually a distributed rental economy, and the attention often distracts us from understanding how people share and why.

The research that Design for Sharing presented shows that there are many ways to share – starting with one small thing, weaving people and objects and ideas together. But what is significant is how little ICT tools feature in sharing practices. It seems that in the everyday world of communities and objects, trust and relationships are built face to face. We can contrast this with the way that many relationships, including the online ‘sharing economy’ examples, are mediated by data, information and metrics. How then are relationships of trust meant to be constructed? The response, for Uber and Airbnb and many other businesses, is to apply data analytics and use them to broker the relationship.

This means that sharing relationships can scale up enormously. They are no longer limited by who you know and hence who you trust. There are clearly many possible social gains in this kind of understanding. But what of the losses? What does it mean to cede judgement to an analytic process? In part it means that only information that can be placed in the process can be considered. For the creation of online relationships, this often means quantified data. We are now starting to understand what the cultural consequences of quantification may be: Benjamin Grosser has written a revealing essay, “What Do Metrics Want?”, about the shift aligned with a culture of metrics. He writes, “Theodore Porter, in his study of quantification titled Trust in Numbers, calls quantification a ‘technology of distance’ that ‘minimizes the need for intimate knowledge and personal trust.’ Enumeration is impersonal, suggests objectivity, and in the case of social quantification, abstracts individuality.”

This abstracting of individuality is part of the influence that the metrics have within the system. This influence is oriented around the idea of ‘more’ – more measurement, more participation, more value for the owners of Facebook. And the quantification of social interaction simultaneously renders the content and meaning of the interaction less valuable.

This is the precise opposite of the kinds of intimate trust relationships that motivate people to solve problems together. It is also a dangerous reduction of the kind of relational complexity that I evoked when I wrote about the many things, beings, and systems that exist and interact on the street where I live. What is important becomes what can be measured, and what is measured becomes what is valuable. But what of the things that are difficult to measure, like the feel of the leaves, or the friendliness of the neighbours? Or even those things that are transformed through the process of measurement, like a sense of community? What might be lost in the measuring process?

I would like to think of another way of being responsible. Everything counts, yes, but what if we thought that everything matters?

Citizen led smart cities

(Inspired by my morning at the SciDevNet event “Making It Count: Big Data, the Open Revolution, and Public Engagement”)

The ‘smart city’ is on the ascendant again. A decade after I first heard people talk about the ‘smartness’ of cities in terms of access to IT infrastructure, I hear it again. It’s different this time. It’s not about individual access to information. It’s more about the individual (or the ‘citizen’) as a creator of data – which in the aggregate becomes valuable to the city, since it then knows all sorts of things about what people are doing, and also theoretically valuable to the ‘citizen’ if it’s made transparent. But how do cities get this data, and what are the power relationships behind it? Many people have been working on these questions, and some tricky conflicts have emerged.


By Ramon FVelasquez (Own work) [CC-BY-SA-3.0], via Wikimedia Commons

For example, one way of getting lots of ‘smart city’ data without asking each individual to accept or reject Terms and Conditions of Use (those documents that specify how data is used but which so few people read) is to create data brokerage models for smart cities in which the city is the data curator. In other words, a city government or other entity could agree to Terms and Conditions on behalf of citizens. What is the relationship between the city government or entity agreeing to the Ts and Cs, and the ‘citizen’ as such? Under what circumstances is the ‘citizen’ (the individual, the resident, the taxpayer?) in conflict with ‘the city’? Urban geography gives us lots of situations in which we can identify possible conflicts between ‘citizens’ and the city: for example, the relationships between people without documentation and the cities in which they live. The people living in illegal dwellings, favelas or new developments just outside of city limits. The people contesting a council tax bill. The guy with the broken door on the Brandon Estate in Southwark who has been unable to get anyone to fix his door in the past three months.

People in cities aren’t automatically citizens, and they are not automatically inclined to enact their relationship to that place in a particular or acceptable way. The power relationships between them and the city could be quite contested. Even if you have the right to withdraw your data from collection by the local authority, are you likely to use that right?

Rights and Freedoms


By Joadl (Own work) [GFDL or CC-BY-SA-3.0-at], via Wikimedia Commons

In fact there are various rights that we may wish to consider. Rights to be forgotten, rights to be anonymous, rights to speak and listen. We might also want to consider freedoms, of which some are ‘negative’ and some ‘positive’ (following Isaiah Berlin). The freedom to do things that don’t negatively impinge on others is a kind of ‘positive’ liberty, while a ‘negative’ liberty is the freedom from harassment and harm. Which kinds of freedom are enabled and constrained by ‘smart city’ data?


On my way back to my office at lunch, I passed a man asking for spare change in the street. I passed without giving spare change and felt a twinge of guilt. Then I asked myself what the chain of trust and relationships that linked me to that man might be, and how data might play a role. I might assume that the man’s basic needs are fulfilled through services supported by my taxes, although given the current policy frames I might not be able to count on the validity of my assumption. So let us take a non-state example: through the donations I make to homeless shelters, soup kitchens and crisis support for drug users, I might assume that someone will help to keep him alive under the worst circumstances.


Which one of us has responsibilities as a ‘citizen’? Me, because I am a ‘good citizen’ who works at a job and pays taxes? Him, because maybe he was born here or in the EU and therefore has a right to be here? Which one of us should be accountable for the data that is collected about us? Which one of us is generating more data, and what kind of data is that? I have social media feeds that provide an indication of ‘good citizen’ status – meaning, I exist for the companies that are collecting data about me. I have money to spend, and the information about where I might spend it is important enough for businesses to pay for. The man in the street, on the other hand, won’t have such easily monetizable traces. If his presence in the city creates data, it may well come in the form of police reports for loitering, or social work reports. Or perhaps nothing at all. Does that make him less of a participant in the life of the city? No – but it does remind us of which one of us has more control over the data that is generated.

Questions of data are increasingly questions of citizenship and voice. As such they need to consider not just the financial value of data to a city or a ‘citizen’ but the relationships of power and influence that characterize our lives. ‘Big data’ are not oil – they are pieces of information about people, and our politics and policy about smart cities should consider this from the start.


Politics, Technology and Design – My busy January

This January I’ve had the chance to do research work in lots of (more than usually) interesting ways – in art museums, castles, design schools and among colleagues from many disciplines. I’m amazed at everything I got to do:

Disassembling a Toaster in an Art Museum


I started the month on a panel at the V&A Museum’s Design Culture Salon series, talking about ‘transparent design’. I used the opportunity to take apart a toaster while talking about Heidegger, something I have always wanted to do. I focused in my talk on the politics of hacking, asking about the different experiences of a ‘closed’ but functional toaster (what Heidegger calls ‘ready-to-hand’), and an ‘open’, ‘hackable’ but non-usable toaster (what Heidegger calls ‘present-at-hand’).

The idea of breaking something to understand better how it works, or how it comes to be, is a central tenet of hacker culture. A number of theorists have worked on how to think about the broken, the trashy, or the defunct as productive places to work: Jussi Parikka and Garnet Hertz have talked about doing ‘archaeology’ on broken toys and out-of-date electronics, and Jennifer Gabrys has a very sensitive philosophy of trash. In my own work I have been interested in studying failure, breakdown, reconstruction. But this failure simultaneously removes the utility of an object and attaches the politics of reconstruction to prowess in hacking and cracking. This raises a question about how, and for whom, we would like design to be transparent.

Building an Imaginary Machine in a Castle

I kept working on this idea of failure as a productive politics at an amazing Dagstuhl Seminar at a castle in southwestern Germany. It was appropriately remote and gloomy – this photo was taken at 8 am!


These seminars are usually only for computer scientists but the organizers of this one worked hard to bring together an interdisciplinary group to discuss social, theoretical and technical aspects of building autonomous, non-Internet networks. These are the kind of things I have written about here.

A few of us – Jon Crowcroft, Paul Dourish, Kevin Fall, Kat Jungnickel, Irina Shklovski and Christian Becker – worked over several days on the concept of ‘failing networks’, culminating in a critical making exercise to build a ‘failure machine’. Kat and I both use critical making as a technique to materialize research and inquiry processes – and in this case to demonstrate interdisciplinarity.


The ‘machine’ was modelled after a 17th-century piece of wearable technology called a chatelaine (what I think of as a Wonder Woman utility belt). It featured a set of intersecting filters and controls that, depending on the perspective of the person wearing it, would create unpredictable outcomes. Some of the filters included a ‘moral concern unbundler’ to take into account unexpected social outcomes of technology, and an ‘unarchiver’ that alternated between an inappropriate failure to remember and an inappropriate failure to forget.


The best thing about the exercise was how much it resonated with computer scientist colleagues. It turns out that establishing the limiting conditions for networks is actually an important process, and that network scientists DO in fact build ‘failure machines’ to test their networks. But these don’t usually include the kind of contextual, social, temporal and political aspects that we included in ours.

Narrating the Live Hack

Fresh from the excitement of using design methods to bridge disciplines, last week I cycled over (in the sun!) to the Royal College of Art for a workshop with Kevin Walker’s Information Experience Design MA students.


While Kevin tried (and sometimes failed) to add blinky lights and switches to an Arduino, I talked about the assumptions we make about the democratization of technology (Heidegger again..) and introduced some really tricky questions about how our experience of life is mediated by constantly dynamic software processes – and what this might mean for our sense of identity, our privacy and our relationships.

Designing in Academic Research

Finally, I started to apply what I’ve learned about design as a research process with my LSE Media+Comms colleagues. With Nick Anstead I’ve started investigating how our department might build a research tool to help us bring together sizeable and varied data sets and quickly and effectively analyse them. At the same time, I wanted to investigate how the design process might help the department express some of our shared (or divergent) perspectives on research. We held a ‘Research Dialogue’ where we debated the use of ‘Big Data’ in our practice, and hypothesized what kinds of ‘data analysis machines’ might represent our research priorities. We think we have some insights that can actually help us design a tool, but already the process has given me lots of food for thought about how values, opinions, and unexpected tensions emerge in prototyping processes.

I’ve also relaunched my Digital Media Futures course for the term, where we will be experimenting with similar ideas and practices. And sometime soon I’m looking forward to sitting down and doing some concentrated writing….I hope.