I am here in a very hot and sunny Indianapolis trying to figure out what is meant by deep mapping, at an NEH Summer Institute at IUPUI hosted by the Polis Center. There follows a very high-level attempt to synthesize some thoughts from the first week.
Deep mapping – we think, although we’ll all probably have changed our minds by next Friday, if not well before – is about representing (or, as I am increasingly preferring to think, remediating) the things that Ordnance Survey would, quite rightly, run a perfectly projected and triangulated mile from mapping at all. Fuzziness. Experience. Emotion. What it means to move through a landscape at a particular time in a particular way. Or, as Ingold might say, to negotiate a taskscape. Communicating these things meaningfully as stories or arguments. There has been lots of fascinating back and forth about this all week, although – and this is the idea at least – next week we move beyond the purely abstract and grapple with what it means to actually design one.
If we’re to define the meaning we’re hoping to build here, it’s clear that we need to rethink some pretty basic terms. E.g. we talk instinctively about ‘reading’ maps, but I have always wondered how well that noun and that verb really go together. We assume that ‘deep mapping’ for the humanities – a concept which we assume will be at least partly online – has to stem from GIS, and that a ‘deep map’, whatever we might end up calling it, will be some kind of paradigm shift beyond ‘conventional’ computer mapping. But the ‘depth’ of a map is surely a function of how much knowledge – knowledge rather than information – is added to the base layer, where that knowledge comes from, and how it is structured. The amazing HGIS projects we’ve seen this week give us the framework we need to think in, but the concept of ‘information’ therein should surely be seen as a starting point. The lack of very basic kinds of such information in popular mapping applications has been highlighted, and perhaps serves to illustrate this point. In 2008, Mary Spence, President of the British Cartographic Society, argued in a lecture:
Corporate cartographers are demolishing thousands of years of history, not to mention Britain’s remarkable geography, at a stroke by not including them on [GPS] maps which millions of us now use every day. We’re in danger of losing what makes maps so unique, giving us a feel for a place even if we’ve never been there.
To put it another way, are ‘thin maps’ really all that ‘thin’, when they are produced and curated properly according to accepted technical and scholarly standards? Maps are objects of emotion, in a way that texts are not (which is not to deny the emotional power of text, it is simply to recognize that it is a different kind of power). Read Mike Parker’s 2009 Map Addict for an affectionate and quirky tour of the emotional power of OS maps (although anyone with archaeological tendencies will have to grit their teeth when he burbles about the mystic power of ley lines and the cosmic significance of the layout of Milton Keynes). According to Spence, a map of somewhere we have never been ties together our own experiences of place, whether absolute (i.e. georeferenced) or abstract, along with our expectations and our needs. If this is true for the lay audiences of, say, the Ordnance Survey, isn’t the vision of a deep map articulated this past week some sort of scholarly equivalent? We can use an OS map to make a guess, an inference or an interpretation (much discussion this week has, directly or indirectly, focused on these three things and their role in scholarly approaches). What we cannot do with an OS map is annotate or embed it with any of these. The defining function of a deep map, for me, is an ability to do this, as well as the ability to structure the outputs in a formal way (RDF is looking really quite promising, I think – if you treat different mapped objects in the subject-predicate-object framework, that overcomes a lot of the problems of linearity and scale that we’ve battled with this week). Categorising (or, heaven help us, quantifying) the different levels of ephemerality this would involve is probably a story for another post, but a deep map should be able to convey the experience of moving through the landscape being described.
There are other questions which bringing such a map into the unforgiving world of scholarly publication would undoubtedly entail. Must a map be replicable? Must someone else be able to come along and map the same thing in the same way, or at least according to their own subjective experience(s)? In a live link-up, the UCLA team behind the Roman Forum project demonstrated their work, and argued that the visual is replicable and – relatively easily – publishable, but of course other sensory experiences are not. We saw, for example, a visualisation of how far an orator’s voice could carry. The visualisation looks wonderful, and the quantitative methodology even more so, but to be meaningful as an instrument in the history of Roman oratory, one would have to consider so many subjective variables – the volume of the orator’s voice (of course), the ambient noise and local weather conditions (especially wind). There are even less knowable factors, such as how well individuals in the crowd could hear, whether they had any hearing impairments, and so on. This is not to carp – after all, we made (or tried to make) a virtue of addressing and constraining such evidential parameters in the MiPP project, and our outputs certainly looked nothing like as spectacular as UCLA’s virtual Rome – but a deep map must be able to cope with those constraints.
To stand any chance of mapping them, we need to treat such ephemera as objects, and object-orientation seemed to be where our – or at least my – thinking was going this week. And then roll out the RDF…
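To make that last thought a little more concrete: here is a minimal sketch, in plain Python, of how guesses, inferences and interpretations might be attached to a georeferenced object as RDF-style subject-predicate-object triples. The place name, predicates and annotation values are all hypothetical examples, not any project’s actual schema – the point is only that experiential statements and ‘thin’ cartographic facts can sit in the same simple structure.

```python
# A sketch (hypothetical names throughout) of annotations on a mapped object
# expressed as subject-predicate-object triples, using plain Python tuples.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

def annotate(store: List[Triple], subject: str, predicate: str, obj: str) -> None:
    """Record one statement about a mapped object."""
    store.append((subject, predicate, obj))

def statements_about(store: List[Triple], subject: str) -> List[Triple]:
    """Retrieve everything asserted about a given subject."""
    return [t for t in store if t[0] == subject]

store: List[Triple] = []

# 'Thin' cartographic facts about a georeferenced feature...
annotate(store, "place:RomanForum", "geo:lat", "41.8925")
annotate(store, "place:RomanForum", "geo:long", "12.4853")

# ...layered with interpretive, experiential annotations in the same structure.
annotate(store, "place:RomanForum", "ex:inference", "orator audible across the open square")
annotate(store, "place:RomanForum", "ex:experiencedAs", "crowded and echoing at midday")

for subject, predicate, obj in statements_about(store, "place:RomanForum"):
    print(subject, predicate, obj)
```

Because every statement has the same three-part shape, an interpretation carries no more structural privilege than a coordinate: depth is added by accumulating and linking statements, not by redrawing the base map.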