Category Archives: OII

Evolution, Innovation, and Ethics

I took my sweetie to London’s best holiday nerdfest last night – Robin Ince’s 9 Lessons and Carols for Godless People.  It was a three-hour celebration of the wonders and beauties that science can reveal – along with lots of hilarious British standup comedy.  Throughout, there was lots of emphasis on the role of evolution in creating fantastically complex organisms – and societies.  But there was something bittersweet, to me, about celebrating how much our society has evolved, especially in the wake of the disastrous lack of results from Copenhagen.

Yes, our society has evolved and created astonishing innovations like the computer I’m using to write this, and the network that ensures all of you can read it.  The internal combustion engine, in particular, has facilitated extraordinary developments in transportation, commerce, health and well-being.

But such development comes with consequences, as we now know.  Our evolved intelligence got us into this mess, and now it must get us out.  Unfortunately, much of society is now in thrall to a particularly well-evolved form of self-interested greed.  The policy debates about how to respond to climate change illustrate this well:  everyone agrees that something must be done, the data is increasingly conclusive, and yet there is hesitation.  Why?  In many cases, because agreeing to collectively solve a problem interferes with the pursuit of individual gains – a pursuit so well supported by today’s capitalism.

Luckily, we have also evolved an ethics of collective action.  Elinor Ostrom’s Nobel Prize-winning work shows that societies have evolved innovative ways of sharing resources to avoid the “tragedy of the commons.”  As the pressure to define ourselves as self-interested consumers mounts in this holiday shopping week, it’s important to remember what else our society has evolved:  ethics, compassion, and a sense of the collective good.

Happy holidays – I’m off to slow down and enjoy the snow.

Open ecologies – can open hardware be like open software?

The growth of the open source software development movement is held up as one of the great successes of a networked world – leaving source code open is associated with global-scale participation in software development and open-source products that are now central to the technology industry.  This has in turn inspired calls for opening up other “closed” processes – government, education, knowledge (like Wikipedia).   There’s now talk of a global “open everything” movement.

But as Steven Weber explains, open-source development has some specific elements.  First, open code provides easily modifiable basic tools that can be customized to solve a whole set of different problems.  This is one key to the success of open source – it’s the utility of the source code that’s available, and the ability to modify it.  So my friend who is working on a totally bespoke database can draw elements of source code from databases built by others, even if those other products have little to do with what he’s making.  Weber’s second element is that open source is based on principles and values rather than efficiency.

Given these key elements, can we expect to produce “open everything”?  Under what circumstances does an open-source model translate outside of software?  To investigate this I’ve started watching the nascent movement towards open hardware development.  Of course, hardware is a physical product with manufacturing costs.  But if we think about the design and production process, there are some clear opportunities to create an open-source production ecology.

First, hardware designs are not material objects.  They are, like software, intellectual products.  Currently, most hardware production is based on patented designs.  But hardware hackers (or hobbyists) can upload, view and download designs at OpenCores, which also allows would-be manufacturers to produce prototypes of their chip designs.  Second, the realms of software and hardware are converging.  The cost of developing software-controlled chipsets is dropping, with the major cost now being the software development itself.

The larger issue is how to grow an open source development and production ecology.  In software development, one aspect of this ecology is the licensing framework, which identifies free software and makes using source code conditional on releasing any subsequent source code.  How could this happen in the hardware world?  How would a prospective hardware (re)designer know that the amazing mobile widget she/he was holding had an open design?
The solution, according to a nascent coalition called the Open Hardware and Design Alliance (OHANDA – watch this space), would be to develop a trademark sticker to identify a piece of open hardware.  The sticker would include a registration key pointing to a design held in a repository somewhere.  That design could then be reused.
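
To make the idea concrete, here is a minimal sketch of how a registration-key lookup might work.  Everything in it is hypothetical:  OHANDA had not defined a repository API at the time of writing, so the endpoint, key format, and record fields are all invented for illustration.

    # Hypothetical sketch only: the URL, key format, and record fields
    # are invented. OHANDA had not published a repository API.
    import json
    from urllib.request import urlopen

    REPOSITORY = "https://repository.example.org/designs/"  # made-up endpoint

    def lookup_design(registration_key):
        """Resolve a sticker's registration key to its open design record."""
        with urlopen(REPOSITORY + registration_key) as response:
            record = json.load(response)
        return {
            "title": record["title"],
            "licence": record["licence"],  # e.g. an open hardware licence
            "files": record["files"],      # schematics, CAD files, firmware
        }

    # A would-be redesigner holding the widget could then fetch and reuse:
    # design = lookup_design("OH-1234-ABCD")

The point of the sketch is the ecology, not the code:  the sticker’s key makes the design discoverable, and the repository makes it reusable.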

This potential intervention raises some interesting questions about “open everything.”  How do open ecosystems grow?  How modular do the “open” elements have to be?  (It would obviously be more valuable to have a few easy-to-use open hardware models than one design that’s difficult to reuse.)  And finally, what are the defining values of openness?  OHANDA may provide some important lessons.

Internet Governance Forum: Freedom and openness (UPDATED)

In the desert, the mountains hover in the distance.  Sun glances and taxis arrive at the gates of the conference center.  Getting from outside to inside means going through security cordons, police checks, metal detectors.

Inside, discussions balance freedom and openness.  There is no necessary consensus:  freedom and openness can mean different things to different people.  We want to secure human rights on the internet; we want to make the media that happens there as independent as possible.  We have the same conversation as we did before:  we talk about technology having values, and attempt to make those values as universal as possible.  It’s not easy, and not everyone agrees.

Internet governance is a process, and unlike at the IETF or ICANN, here we use the time to disagree and to discuss.  This is a great opportunity to talk about the process we followed at Oxford, bringing together free-speech and child-protection advocates.  The same process applied, and the results were very positive.

But sometimes the multi-stakeholder process is rattled by misunderstanding.  I had dinner the other night with a group of folks from the Open Net Initiative who were troubled by their book-promotion poster being tossed to the ground by UN security.  This is another issue of balance:  although the poster mentioned Chinese firewalls, the dialogue at these meetings happens in UN space, where no one is allowed to hang posters, no matter what the subject.

This is a delicate process, and it means crossing the cordon at the gate.  Not always easy.

UPDATE:  I’ve talked to more people at the IGF about the poster incident – since I wasn’t there I can’t comment on exactly what occurred.  A few people who were there noted that the disagreement was NOT about commercial posters but about references to China – even though the existence of China’s Great Firewall is not disputed.  Why such a strong response to a statement of fact?   Especially since one of the features I observed at the Forum was healthy disagreement.  It would be deeply problematic for the internet as a global resource if this tolerance were limited.

Research Design The Fun Way

Last month I had the most amazing experience:  with some superstar colleagues, I designed a qualitative study aimed at understanding why people don’t adopt broadband.  The goal of the study was to understand barriers to broadband adoption, and we thought the best way would be to talk to people about how they communicate and why they choose to use some technologies and not others.

I’ll write more specifics about the study later, but I wanted to reflect on how exciting the research design process was for me, and share some of the reasons I felt it worked well.

Trust

First of all, my colleagues/friends/partners in crime were people I’d known for many years, fellow-travellers in the community wireless world.  But we hadn’t seen each other much since I’d moved to England.  One friend lived close to where we’d be having our full team meetings, and so all of us stayed there.  I’ve heard this called the “couch-surfing theory of participatory research.”  I don’t necessarily think you HAVE to sleep on the couch (or on the floor as we did) to do good research, but it is an excellent way of building trust, which is essential for designing and enacting good social research.

Doing your homework

Our timelines on the project were short.  Before I arrived to sleep on the couch, we had about a week to prepare.  Everyone did their homework.  We called people who had done similar studies, talked to various members of the wider team to see what they wanted to know about, and researched the funding stream that was supporting the study so we could understand what values were at play.

Trust (again) and the Efficiency of In-person Meetings

After a week of telephone calls and brainstorming, we met face-to-face with the entire research team.  Like sleeping on the couch, it made a big difference to be in the same room as the people we were working with – especially since some of them we hadn’t met before.  Yes, we could have done the work by video-conference, but when there are big ideas at stake and a big team of different personalities, meeting in person saves time and builds trust.  The meeting also contained what I think of as exemplary research design practices, including:

  • careful listening for requirements and for philosophical perspectives: “I believe this is important, so can we make sure that we think about it?”
  • flexibility and core commitments: “This is what we are really interested in, but we know that we might not find it if we ask directly.”
  • productive disagreement: “This could work, but it won’t fit our requirements.”
  • iteration: “If we ask something more like this, will that help to answer our questions?”
  • triangulation, or looking at things sideways: “How about if we turn the question around?”

Living-room floor categorization (The Big Picture)

The day after the full meeting, our smaller team spent the day rearranging the flipchart sheets we’d produced in the meeting, overlapping them in various ways on my colleague’s living room floor.  Photographic evidence exists of me doing “research yoga” – adding a sheet of paper to the arrangement that later became our main analytical framework.  My own living room isn’t big enough for this kind of research practice, but a big table and index cards would do; the point is to be able to see the entire schema in one shot.

Take a Break

After all this intense work of brainstorming, finding field sites and establishing analytical categories, we all needed a break.  We took a day off.  The next day our brains were much sharper and clearer.

Tea and Peer Review

The next day, before I flew home, we met another colleague for tea and ran some of our field strategies and analytical categories by her.  Since she hadn’t been consumed with moving around our sheets-of-paper categories, she had some excellent suggestions on where there were gaps in the questions we planned on asking, as well as some creative research strategies.  We integrated what seemed to make sense, and then

Have a Beer

We relaxed!

Sadly, I couldn’t help with conducting the fieldwork.  My colleagues are out in the field now, and I’m sure they are accumulating lots of other great insights on doing high quality social science research – the fun way.

Uses of Twitter – how to go to a conference when you’re home sick

I was toying with the idea of going to the Oxford Social Media Convention when the dreaded Autumn Headcold struck.  I succeeded in slinking back to London, collapsing on the couch, and, this morning, staying upright during a Skype conference.  So how to participate in the conference without being there?

Thank you, Twitter and hashtag #oxsmc09 – I’ve had questions asked and answered, started conversations with attendees, and generally enjoyed the snarky backchannel on the panel discussions (which is the real fun at conferences).

All without any of you having to hear me cough.

Hacking the City – redux

I was delighted to read that the Personal Democracy Forum’s 2009 Conference (twitter slurp here) includes a Birds of a Feather meetup on the topic of “Hacking the City.”  I first heard community technologists use this phrase in 2005, when Mike wrote a post about how community Wi-Fi is a way of hacking the social space of cities.  He was referring to the way that community interventions in the provision of communications infrastructure could change how people socialize, since so many of our interactions are mediated by various types of networks.

But “hacking the city”, like so many good ideas, has taken on another life.  It’s now used to describe how networked technologies can be harnessed so that citizens can take action in their own cities.  There’s DIYCity.org, where volunteers in cities around the world build open source tools and advocate for open data; New York City’s The Open Planning Project, which advocates for open source software in government and runs several citizen-participation blogs; and MySociety’s FixMyStreet, which features maps where my neighbours have flagged two instances of fly-tipping and two piles of dog poo within 1 km of my house.

After doing some work this year about other types of digital activism, I’m returning this summer to thinking about the politics of local networks – it’s time, and furthermore it matters!  Can anyone think of other good examples of hacking the city?

UPDATE:  Exciting!  Personal Democracy Forum Europe in Barcelona in November.

Quantifying everything: Wolfram alpha and algorithms

Wolfram Alpha is pretty great:  you type in a problem and it finds a solution.  It does this by transforming the natural-language problem into computational elements and entries in its curated data set, and then running the computations.  Ta-Daa!  The solution appears, provided that the problem is 1. reducible to computation and 2. built from elements that are in the database.  Improving on 2. is easy enough, the argument goes:  simply add more things to the database.  If you want to calculate the likelihood that a word will occur in a Yeats poem, simply add more Yeats poems to the database and eventually you’ll get a meaningful result.
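
To see how mechanical that kind of computation is, here is a minimal sketch in Python using a tiny made-up two-line “corpus” (this is obviously not Wolfram Alpha’s actual machinery):

    # Word-likelihood as relative frequency over a toy corpus. The two
    # lines below are illustrative, not a real dataset.
    from collections import Counter

    corpus = [
        "the falcon cannot hear the falconer",
        "things fall apart the centre cannot hold",
    ]

    words = [w for line in corpus for w in line.split()]
    counts = Counter(words)

    def likelihood(word):
        """Relative frequency of `word` across all words in the corpus."""
        return counts[word] / len(words)

    print(likelihood("the"))   # more poems in the corpus -> better estimates
    print(likelihood("gyre"))  # 0.0 until the database grows to include it

Adding more text really does improve the estimate; whether the estimate tells us anything interesting is another matter.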

It’s principle 1. that’s potentially more problematic.  It raises the question about the extent to which all knowledge can be quantified.  In other words, it doesn’t explain why the repetition of words in a Yeats poem might be important.

Ahh, you say.  But that’s not science!  True, science is about quantifiability.  But it is also about inquiry, about determining how to ask questions that are verifiable.  And it is about applying those questions generatively in order to develop new knowledge.  Wolfram Alpha’s founder has written about a new kind of science, one based on simple rules that can be embodied in computer programs.  I’m ready to be convinced, but I’m concerned that the Age of the Algorithm could mean the end of the Age of Inquiry.

My most memorable university exam included a question that asked me to differentiate special relativity from general relativity, and to explain how Einstein developed one from the other.  I attempted to get Wolfram Alpha to compute this, but the closest result I got was this.  So far, inquiry is safe.

Free Access, Media Scarcity … and the future of capitalism

Last week I was wandering around the Pergamon Museum in Berlin, riffing with Wolf (an OII DPhil) about how knowledge gets produced and distributed.  We looked at Greek statues “collected” by the Germans, moved to Russia by the Soviets for 50 years, and finally on public display – and discussed the radically different ways of knowing made possible in a world of globally produced, distributed, and commented-upon information.  “It’s an amazing privilege,” I said to Wolf.  “But what are the long-term implications?  There has always been some way of controlling access to information.  In the Middle Ages it was all physically locked up in places like Oxford.  Now I’m worried it will come to be controlled in some other way.”

Over the next few days, talks by Lawrence Lessig and Cory Doctorow highlighted how the movement towards free access threatens established business models and legal frameworks built on controlling information, media (and maybe knowledge).  Lessig showed how current intellectual property law is so far out of sync with practices of remix that it is criminalizing a generation of kids who use media like ideas.  For Doctorow, the key change for media has been the decreasing potential for making money by making media excludable (controlling who gets a copy of something).  Faced with the fact that the “internet is a perfect copying machine”, businesses are responding by trying to make it an imperfect copying machine.  Like Lessig, Doctorow thinks this is reactionary and counterproductive.  He thinks the only viable business models will be based on new understandings of how to distribute media/information/knowledge, and not on controlling its reproduction.  Free access has created unprecedented participation in culture – and the current market economy doesn’t work when there’s an oversupply of art relative to demand.

Back in Oxford, my colleague/flatmate Bernie picked up the thread.  He’s been thinking about how capitalism has always depended on scarcity.  Informational capitalism has exploded such that information is no longer scarce – it’s easy to copy and distribute.  So what becomes of capitalism?

These transformations of information/media use and reuse highlight the importance of access.  Access is reconfigured through and with technical changes, practices, laws.  Unlike 500 years ago, you don’t have to travel to Oxford to find information – but instead you have to negotiate licenses, torrents, remixes, and misinformation.  How to sort it all out, and whether this can happen under capitalism, is one of society’s next challenges.  It’s a far cry from determining who gets to store the Greek statues.

What is Privacy, Anyway? PrivacyOS in Berlin

I’m so happy to be in Berlin with Ian Brown and four OII doctoral students at the European Privacy Open Space.  Held at the same time as the re:publica media conference, it’s a gathering of lawyers, students, and private-sector vendors.

But what is privacy?  Talk #1 discussed data privacy in terms of its economic value.  Talk #2, by a Microsoft engineer designing the U-Prove token, described privacy as an interface between an individual and service providers who need to know all kinds of things:  “minimum disclosure tokens” provide the ability to verify aspects of someone’s identity without having to tell everything.
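
To give a flavour of the minimum-disclosure idea, here is a simplified sketch using salted hash commitments.  This is emphatically not the real U-Prove protocol (which relies on blind signatures and zero-knowledge techniques); it just illustrates revealing one attribute while keeping the rest hidden.

    # Simplified illustration of minimum disclosure, NOT the actual
    # U-Prove protocol. The attributes and values are made up.
    import hashlib, os

    def commit(value):
        """Salted hash commitment to a single attribute value."""
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + value.encode()).hexdigest()
        return salt, digest

    # The user commits to every attribute; only digests go into the token.
    attributes = {"name": "Alice", "country": "UK", "over_18": "yes"}
    secrets = {k: commit(v) for k, v in attributes.items()}
    token = {k: digest for k, (salt, digest) in secrets.items()}

    # To prove a single attribute, reveal only that value and its salt.
    def verify(token, field, value, salt):
        return hashlib.sha256(salt + value.encode()).hexdigest() == token[field]

    salt, _ = secrets["over_18"]
    print(verify(token, "over_18", "yes", salt))  # True; name stays hidden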

More privacy definitions as the conference continues.

UPDATE:  Day 2

Technical presentation on “Selective Access Control in Social Networks” – social networking privacy is facilitated by a layer controlled by public key encryption, so that, for example, different subsets of the same profile details could be released to different social networks.
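
Here is a minimal sketch of what such an encryption layer could look like, assuming the third-party Python cryptography package; the audiences, profile fields, and release policy are all made up for illustration.

    # Sketch of a selective-access layer: each profile field is encrypted
    # separately for each audience allowed to read it. Requires the
    # third-party `cryptography` package (pip install cryptography).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # One key pair per audience (e.g. per social network or contact group).
    audiences = {name: rsa.generate_private_key(public_exponent=65537,
                                                key_size=2048)
                 for name in ("friends", "work")}

    profile = {"phone": "01234 567890", "employer": "Example Ltd"}
    policy = {"phone": ["friends"], "employer": ["work", "friends"]}

    # Publish each field encrypted only for its permitted audiences.
    published = {
        field: {aud: audiences[aud].public_key().encrypt(value.encode(), oaep)
                for aud in policy[field]}
        for field, value in profile.items()
    }

    # "work" can decrypt the employer field, but holds no ciphertext at
    # all for the phone number.
    print(audiences["work"].decrypt(published["employer"]["work"], oaep))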

Human-readable privacy policies – privacy is a set of relationships that individuals have to understand in order to do things (buy, sell, read, write) online.  Therefore, human-readable privacy policies and iconography need to be developed so that people understand where their information is going, who will use it, and how.  (As Ian points out, if there is no competition, such a proposal wouldn’t be very effective, as there would be no reason to choose a company with easier-to-read privacy policies.)
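
A toy sketch of what the human-readable rendering might look like, with invented policy fields and wording (real machine-readable policy efforts such as P3P are far richer):

    # Toy rendering of a machine-readable policy as plain language.
    # The policy fields and phrasing are invented for illustration.
    policy = {
        "data_collected": ["email address", "browsing history"],
        "shared_with": ["advertisers"],
        "retention": "2 years",
    }

    def human_readable(policy):
        """Turn the policy dict into short plain-language sentences."""
        return "\n".join([
            "We collect: " + ", ".join(policy["data_collected"]) + ".",
            "We share it with: " + ", ".join(policy["shared_with"]) + ".",
            "We keep it for: " + policy["retention"] + ".",
        ])

    print(human_readable(policy))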

According to these presenters, privacy can be a negotiation, a layer, an interface or even a value proposition.  But is understanding what we trade off when we spend time online really the same as having the privacy of a home, or the anonymity of public space?  Lots to think about still.

High Noon for Net Neutrality – EU style

IPIntegrity reports that the EU’s “trialogues” – debates between the European Parliament, the European Council and the European Commission – are putting the future of the internet at risk: political dealmaking (and the power of the British and the French) risks undermining the current legislation.

Take a look at some of the proposed changes (excerpted from Monica Horten of IPIntegrity):

Framework directive Article 8

Under the European Parliament deal, this text stays in:

European Parliament Art. 8.4 (fa): “applying the principle that end-users should be able to access and distribute any lawful content and use any lawful applications and/or services of their choice;”

if it agrees to this text:

Council Art. 8.2 (b): “ensuring that there is no distortion or restriction of competition in the electronic communications sector, with particular attention to the provision of wholesale services,”

instead of this text:

(b) “ensuring that there is no distortion or restriction of competition in the electronic communications sector, in particular for the delivery of and access to content and services across all networks;”

The alternative is that you get this text, which removes users’ right to distribute information – a fundamental right under EU law:

Council Art. 8.4 (g): “applying the principle that end-users should be able to access and distribute information or run applications and services of their choice.”

—–

I just delivered a paper in which I argued that the EU’s treatment of net neutrality was not *too* bad.  Not encouraging news for those interested in a free and open net.