Historicizing Big Data and Geo-information

30 March 2016


I was asked by my colleague and friend Oliver Belcher to act as a discussant in a session that he put together at the 2016 meeting of the Association of American Geographers: ‘Historicizing Big Data and Geo-information’.

The session contained a set of truly excellent and provocative talks by Louise Amoore, Matthew Hannah, Patrick McHaffie, and Eric Huntley. I’ve now had a chance to type up my discussant notes (although apologies for the hastily-put-together nature of the text).

I think that this has been a much-needed set of papers at a moment in which we’re awash with hype about ‘big data’. We hear that we’re at a significant moment of change; that there’s a big data revolution that will transform science, society, the economy, security, politics, and just about everything else.

And so, it’s important that these sorts of conversations are brought together. To allow us to think about continuities and discontinuities. To allow us to think about what is and what isn’t truly new here. And to do that in order to hone our responses as critical scholars.

One way to start – perhaps – is to recognize, as all of the papers in this session have, that while ‘big data’ may not be new, we’re in, have been in, or at least have long been moving towards, what Danny Hillis refers to as an Age of Entanglement. It is perhaps useful, as a starting point, for me to quote him here.

He says “We are our thoughts, whether they are created by our neurons, by our electronically augmented minds, by our technologically mediated social interactions, or by our machines themselves. We are our bodies, whether they are born in womb or test tube, our genes inherited or designed, organs augmented, repaired, transplanted, or manufactured. We are our perceptions, whether they are through our eyes and ears or our sensory-fused hyper-spectral sensors, processed as much by computers as by our own cortex. We are our institutions, cooperating super-organisms, entangled amalgams of people and machines with super-human intelligence, processing, sensing, deciding, acting.”

In other words, while big data may not be new, we do now undoubtedly live in a digitally-infused, digitally-augmented world. One in which we’re entangled in complex, hybrid digital ecosystems in which it is increasingly hard to disentangle agency and intent.

Why’s my phone telling me to go left and not right? Why is the supermarket creating some personalized economic incentives for me and not others? Why is the search engine prioritizing some knowledge and not others? As researchers, it is hard to address questions like these because there is often no straightforward way of knowing the answers. Do we look to the code embedded within algorithms? Do we look to the people or institutions who created the algorithms? Do we look to the databases? Do we look to the people who manage the databases? Or do we look to the people, processes, and places emitting the data?

What today’s talks have all usefully done is point to the fact that we need to be addressing some combination of all of those questions.

So, let me pick up on a few general reflections and concerns, emerging from these talks, about what the histories of big data mean for the futures of big data. Like all of the speakers, what I’ll especially focus on here is what geography can bring to the table.

First, is a thought about our role as geographers. Many geographers, me included, spend a lot of time thinking about the geographies of the digital; thinking about how geographies still matter.

We probably do a lot of this to counter some of the ongoing, relentless, Silicon Valley narratives of things like the cloud. Narratives that claim that – provided they are connected – anyone can do anything from anywhere at any time. So, we end up spending a lot of our energy pushing back: arguing that there is no such thing as the cloud; that there are just computers, all in other places.

But I wonder if we’re missing a trick by not also asking more questions about the contextual and specific, but likely real, ways in which some facets of geography might actually matter less in a world of ever more digital connectivity. Not as a way of ceding ground to the breathless ‘distance is dead’ narrative – but in a critical and grounded way. Are there economic, social, or political domains in which distance, or spatial context, is actually becoming less relevant?

Second, when we speak about big data, or the cloud, or the algorithms that govern code-spaces, we often envision the digital as a black box. In many cases, that is unavoidable.

But we can also draw more on the work from computational social science, critical internet studies, human-computer interaction, and critical GIS. In all of those domains, research is attempting to open the black boxes of cloud computing; of big data; of opaque algorithms. Scholars are asking and answering questions about the geographies of the digital: where is it; who produces it; whom does it represent; whom doesn’t it represent; who has control; to whom is it visible?

There is much more that clearly needs to be done, and this work needs to be relentless, ongoing, and – of course – critical. But, one hope for the future is to see more cross-pollination with those communities who are developing tools, methods, and strategies to empirically engage with geography in a more data-infused world. So, yes – there are black boxes. But those boxes are sometimes hackable.

Third, and relatedly: a key critique of ‘big data’ that I see in the critical data studies community is the one about correlative reasoning (in other words, the idea that if your dataset is big enough, you no longer need theory, no longer need to understand context, and can just look for correlations in the dataset), along with a related lack of reflexivity within those practices of data analysis. But I wonder if we aren’t also overplaying our hand a little here. Some big data practitioner work does stop at correlations, but a fair amount of it can be quite reflexive and aware of its limitations.

These researchers are still building multi-level models, social network analytics, community detection models, influence analyses, predictive analytics, and machine learning systems. My point here is that whilst a lot of ‘big data’ work is undoubtedly naïve, let’s also not underestimate the power held by those with access to the right datasets, the computing resources to analyse those datasets, and the methods with which to analyse them.

My broader point is that we really need to find the balance of not understating and not overstating the work being done by governments and corporations in the domain of big data. Yes, some big data practices out there are naïve and dumb. And yes, some are terrifyingly precise in the ways that they can anticipate human behavior.

To get that balance, I think we need a few things. The first is to pay attention to what has been called the Fourth Law of Thermodynamics: that the amount of energy required to refute bullshit is an order of magnitude larger than that required to create it. Let’s make sure our energy is wisely spent.

To get the right balance, it also seems clear that all of us need not just to try to better understand the nuances of the key techniques and methods being employed, but also to think about what we can specifically add to the debate as geographers. On this latter point, this is something that I think the papers in this session did very well.

Fourth, when thinking about the political economy of data, it’s becoming ever more clear that we need a full-throated response to reclaim our digital environments (a point that Agnieszka Leszczynski has been forcefully making). Privacy and security scholars and activists have been especially vocal here. But let’s again think about what our role as geographers can be in this debate.

The way in which my colleague Joe Shaw and I are thinking about this (and, as my advertising pitch, it is something we’re speaking about in three sessions we’ve organized on the topic on Friday morning) is to argue that we need to translate some of the ongoing ‘right to the city’ debate into the digital sphere. The point being that if the places we live in are made of bricks, mortar, and digital data, we need to think about rights of access, control, and use in our digitally-infused world.

This is just one type of intervention; and I’m sure that building on the foundations of critical historicisations of big data can offer us fertile ground for reimagining what we actually want our data-infused futures to look like.

Fifth, something that I saw, and really appreciated, in all of the papers was a forceful reflection on how data are always political. Too often data, and especially ‘big data’, get presented as a neutral and objective reflection of the object of their measurement. That big data can speak for themselves. That big data are a mirror of the world around them. What a lot of today’s work has done is reflect on not just how data reflect the world, but also how they produce it; how they drive it. As we tread deeper into Danny Hillis’ ‘Age of Entanglement’, this is something we’ll need much more of.

As Trevor Barnes mentioned in the last session, the best kinds of papers leave you hungry for more detail – and a few more things I would have loved to have heard about are:

From Louise – a bit more about what our vision of the cloud enables beyond the cloud itself. The cloud can, in many ways, make some facets of life perceptible when it is deployed to study life online. But how much of the cloud vision is about moving beyond the cloud: being deployed to study life offline; to study the facets of life that aren’t directly emitting digital data shadows? Also, the empirical work you spoke about sounds fascinating – and I hope the questions give you some more time to bring out the ways in which you’ve gone behind the algorithms – and underneath the cloud – to look at how these knowledges are created.

From Matthew – it was interesting to see how some of our contemporary concerns about the power of big data to aid the surveillance powers of the powerful – are far from new. So what might protests against contemporary regimes learn from the earlier moments you spoke about? There are many of us who want to opt out; is this now less possible because of the more encompassing nature of contemporary data collection regimes?

From Eric – I wonder if the idea of the ‘world inventory’ in the 80s – the details of it, and what it meant in practice – was similar to the vision that a large tech firm like Google has of a world inventory of geospatial information today. Does a world inventory now mean something significantly different from what it used to?

From Patrick – you didn’t use the term ‘smart city’. But I wonder if you’ve looked into any so-called ‘smart city’ initiatives – and if you could say more about how we should be honing our inquiry into the so-called ‘smart city’ based on what you’ve learnt here; based on what we know about the visions that brought the Cartographatron into being?

For all of us – scholars in this field – I wonder if we’re all speaking about the same thing in this session when we talk about ‘big data’. Are we talking about datasets that are a census rather than a sample of a population? Are we just using ‘big data’ as a proxy for ‘digital data’? Are we using the term to refer to the whole contemporary apparatus of data trails, shadows, storage, analysis, and use? Are we using it to refer to digital unknown unknowns – the digital black box? Is the term actually helping us as shorthand for something else? Or do we need more precise language if we want to make sure we’re genuinely having a conversation?

And finally, for all of us, I want to ask why this seems to continue to be such a male-dominated field. Across two sessions, with seven speakers and two discussants, we had only one female speaker. Are we reproducing the gendered nature of earlier computational scholarship? One of the dangers of telling these histories is that it can end up being white men speaking about white men. This is not a critique of the organiser, as I know Oliver is well attuned to these issues, but rather a question about how and why we might be (re)producing masculinist knowledges.

So, to end – I want to again thank Oliver and the speakers for putting together this session on historicizing Big Data. We need more conversations like this; and we need more scholarship like this. And this is work that will have impacts beyond the boundaries of geography.

We know that we can’t go backwards; and I think the goal that many of us have is a more just, more democratic, more inclusive data-infused world. And to achieve that, one thing we all need to be doing is participating in ongoing debates about how we govern, understand, and make sense of our digitally-augmented contexts.

And perhaps one thing that we can all take away from this session is that if we want to take part in the debate – to influence it, and to change big data’s futures – we’ll need to understand big data’s history.
