Littleworth to Faringdon Corpse Path

September 19, 2017

More on corpse roads, or rather a corpse road. As noted in my previous post, there is plenty of circumstantial evidence for the psychological power of corpse roads, or corpse paths. They connected communities, and they provide tantalizing insight into the importance humans attach to movement through the landscape at critical points of their lives. But hard facts are hard to come by. In 1928, the journal Folklore published a Correspondence from “Wm Self. Weeks” entitled Public Right of Way Believed to be Created by the Passage of a Corpse [Shibboleth login needed]. Having debunked the notion itself (with reference to an earlier Q&A in the journal Justice of the Peace, a “legal journal mainly devoted to matters affecting Magistrates and Local Authorities generally”), Weeks goes on to propound the theory that the idea of the corpse path comes from an agricultural practice of deliberately leaving a strip unploughed along a field’s edge to allow the carriage of bodies: “bier balks … wider strips of turf left between the ploughed strips of land in certain places expressly for funeral ways” (p. 935). Weeks quotes in support of this view correspondence in the Times Literary Supplement, in response to a letter on the subject he had published there ten years earlier. This came from L. R. Phelps of Oriel College, Oxford. Phelps writes:

In many parishes the church path is a familiar feature. Where I knew it best, at Littleworth, in Berkshire [now Oxfordshire], it connected an outlying hamlet with its parish church at Farringdon [sic], some two miles off. The characteristic of a ‘church path’ is that it is never ploughed over, but stands out from the field, hard and dry, and of a width sufficient to allow the bearers of a coffin to walk abreast along it.

Faringdon is not too far away from me, so recently, along with the lady in my life (whose capacity for accommodating her husband’s more oddball obsessions is quite remarkable; she even provided the salmon sandwiches), I drove up to take a look at this path, which Phelps apparently knew from personal recollection. A public right of way still exists between Littleworth and Faringdon, as can be seen from the data provided by Oxfordshire County Council, and made available as KML by Barry Cornelius on his thoroughly excellent Rights of Way maps website:


Credit: Google Earth; http://www.rowmaps.com
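For anyone who wants to poke at the route data themselves, here is a minimal sketch of pulling the vertices out of such a KML file with nothing more than the Python standard library. The filename is made up, and I am assuming the routes are stored as ordinary placemarks with coordinate strings, which may not match the exact structure of the rowmaps downloads:

```python
# Minimal sketch: list the named routes in a KML file and count their vertices.
# The filename is hypothetical; the placemark/coordinates structure is an assumption.
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

tree = ET.parse("littleworth_faringdon.kml")
for placemark in tree.getroot().iter(KML_NS + "Placemark"):
    name_el = placemark.find(KML_NS + "name")
    name = name_el.text if name_el is not None else "(unnamed)"
    for coords in placemark.iter(KML_NS + "coordinates"):
        # KML coordinates are "lon,lat[,alt]" tuples separated by whitespace
        points = [tuple(map(float, c.split(",")[:2])) for c in coords.text.split()]
        print(name, len(points), "vertices")
```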

Faringdon, and its C11th church, is at the western end of the path. A 2014 archaeological survey of the environs, in preparation for new facilities being built at the church, found evidence for 341 burials of a range of ages. Of course one cannot easily tell from such evidence the places of death, and thus where the bodies were borne from, but in their conclusion the investigators note:


Faringdon Church

“The community of believers excavated at All Saints comprised a broader church. Although no clearly high-status individuals were recovered, the investigation revealed a broad demographic section through the population of men and women, children and adults. The excavation showed the degree of care attached to the ill and dying, as well as concern for the well-being of the dead. The prosaic realities of country life and death from the late medieval to 19th century were revealed by the work carried out at All Saints, Faringdon.”

As for the path from Littleworth: two clues immediately support Phelps’s recollection. First, at Littleworth itself, a footpath sign pointing west at the edge of the village indicates “Church Walk” to Faringdon (below left). Second, just outside Faringdon itself is “Church Path Farm” – also labelled thus on Google Earth – the site of which includes a curious chapel-like outbuilding, with Gothic-style arching (below right).

[Images: the “Church Walk” footpath sign at Littleworth (left) and the chapel-like outbuilding at Church Path Farm (right)]

The walk itself – flat, easy going, suitable for the transportation of a load – crosses a total of four fields. The easternmost is large and flat, recently harvested of (I think) kale. The pathway adopts a slightly different tangent to the plough lines (see pic below) – which might be of some significance, as these at least should preserve the orientation of the field. The slight difference between the path and the plough lines suggests this field might once have been subdivided into strips of a more south-westerly orientation than it has today, and that the path thus pre-dates the present layout. The most obvious reason for such a change would be to accommodate the construction of London Street to the south, which connects Faringdon to the A420.

The westernmost section of the route, from Grove Lodge to Church Path Farm, is characterized by deep, old-looking hedgerows – see picture below, left (from Church Path Farm, the route down into Faringdon itself is a modern metalled road, obviously optimized for vehicles, with the hedgerows accordingly removed). It is these that make me think that Weeks and Phelps might have been on to something. These hedgerows do not simply follow the line of the field boundary, as the path does to the east (below); they delineate a gap between the fields themselves. This would be consistent with a route deliberately left to enable the passage of a bier party – a “bier balk”.

 

 


Path heading north from Grove Lodge

The Oxfordshire/Cornelius map above indicates another Right of Way heading north from Church Walk at Grove Lodge. This path is still clearly visible in the landscape, skirting east to avoid Grove Wood (see image above). Could this have served as another corpse path, perhaps linking the settlements of Thrupp and/or Radcot with the church at Faringdon? Maybe, but the point is that the “bier balk” hedgerows appear on Church Walk only at this point, and head west, past Church Path Farm and down to the town. Could it be, therefore, that only the section of the path *near* the settlement, which leads down to the churchyard itself, had the characteristics of a corpse path, and that the Shakespearean notion (see previous blog post) of dedicated pathways stretching across the pre-Enclosure countryside is a literary device?

I think this is probably so. There are exceptions – the Mardale to Shap route in Cumbria, for example, has a number of attestations and is over six miles long – but I wonder if this is something to do with the extreme remoteness of the area and its moorland topography: the vital need to convey bodies for burial, and the challenge of doing so, stand out more in such conditions. Elsewhere, corpse paths were short sections of routes, proximal to the church, and would have been the aggregated paths of more than one route for bier parties – which would be consistent with local people attaching significance accordingly.

 


Corpse roads

September 18, 2017

In the last few years, I have been gathering information on early-modern ideas and folklore around so-called “corpse roads”, which date from before such things as metalled transport networks and the Enclosures. When access to consecrated burial grounds was deliberately limited by a Church wanting to preserve its authority (and burial fees), an inevitable consequence was that people had to transport their dead, sometimes over long distances, for interment. A great deal of superstition and “fake news” grew up around some of these routes, for example – as I shall be blogging shortly – the belief that any route taken by a bier party over private land automatically became a public right of way. They seem to have had a particular significance in rural communities in the North West of England, especially Cumbria.

The idea of the corpse road is certainly an old one. In A Midsummer Night’s Dream, Puck soliloquizes: Now it is the time of night/That the graves all gaping wide/Every one lets forth his sprite/In the church-way paths to glide.

In my view, corpse roads – although undoubtedly a magnet for the eccentric and the off-the-wall – are a testimony to the imaginative power of physical progress through the landscape at crux points in life (and death), and to the kinds of imperatives which drove connections through those landscapes. As Ingold might say, they are a very particular form of “taskscape”. I am interested in why they became important enough, at least to some people, for Shakespeare to write about them.

Here is a *very* early and initial dump of start and finish points of corpse roads that I’ve been able to identify, mostly in secondary literature. I hope to be able to rectify/georeference each entry more thoroughly as and where time allows.

 

Academic crowdsourcing – feedback loops

September 7, 2017

This year’s summer reading included Jon Ronson’s So you’ve been publicly shamed, a journalistic investigation into why the internet has become so fond of collaring those who transgress its unwritten rules and tearing them apart. His case studies include Jonah Lehrer, the writer found to have fabricated quotes by Bob Dylan; Lindsey Stone, who was inadvisably photographed flipping the bird in Arlington National Cemetery; and Justine Sacco, the advertising exec who, while en route to South Africa, tweeted a “joke” about hoping she didn’t catch AIDS as she was white. All three became transient global hate-figures, with tens of thousands of tweets and comments raining shame down upon them. Ronson’s book is a readable and engaging romp through what are, of course, deadly serious issues for contemporary digital culture. However, his conclusion interested me: he contends that the modern-day version of the village stocks he describes is down to “feedback loops”. Ronson urges us to disregard the theories of Gustave Le Bon (one of Goebbels’s favourite theoreticians) and Philip Zimbardo, conceiver of the notorious Stanford Prison Experiment, who argue that mass hatred and hysteria are spread from node to node within the crowd through some process of broadly defined “contagion”. Rather, says Ronson, internet users copy what they see happening – a version of the “information cascade” theory of James Surowiecki, which I have blogged about before. So when tens of thousands of Twitter users piled into the wretched Sacco, for example, it was because they had seen others doing so, resulting in a collective assurance that it was “right”. Ronson underscores this by observing how dramatically effective the signs attached to speed limit signs which automatically flash motorists’ current speed are. This instant feedback, devoid of any actual consequence or punishment, dramatically cuts instances of speeding.

Academic crowdsourcing projects, as I and others have argued elsewhere, depend for their success on the relationships they create with their volunteers. There is some reason to believe that Ronson’s logic can be applied here too – i.e. where both non-professional volunteers and professional project instigators are exposed to controlled feedback loops. Lasecki et al., for example, argue that crowds can self-learn through the correct application of mechanical tasks which are tightly regulated and controlled on platforms such as Mechanical Turk. The feedback loop is that the task has been performed correctly or incorrectly. Other volunteers report learning from each other via discussion forums, learning from good practice. Others go on to create Wikipedia pages around the content they have worked on – although whether Wikipedia is crowdsourcing or something else, such as community participation, is another matter (a distinction succinctly made on this blog post).

Those of us who have researched crowdsourcing over the last few years often get hung up on semantics and labels; and I am guilty as charged — I have found myself having far longer conversations than the subject justifies (which is how much?) over whether crowdsourcing should have a hyphen or not. I think that considering the attributes of what makes crowdsourcing crowdsourcing, as opposed to something else, is more useful. An effort to characterize what makes “good” or “productive” feedback loops – as opposed to the wild and unconstrained ones which destroyed Lehrer, Stone and Sacco – might be a good place to start.

Research questions, abstract problems – a round table on Citizen Science

February 26, 2017

I recently participated in a round-table discussion entitled “Impossible Partnerships”, organized by The Cultural Capital Exchange at the Royal Institution, on the theme of Citizen Science; the Impossible Partnerships of the title being those between the academy and the wider public. It is always interesting to attend citizen science events – I get so caught up in the humanities crowdsourcing world (such as it is) that it’s good to revisit the intellectual field that it came from in the first place. This is one of those blog posts whose main aim is to organize my own notes and straighten my own thinking after the event, so don’t read on if you are expecting deep or profound insights.


Crucible of knowledge: the Royal Institution’s famous lecture theatre

Galaxy Zoo of course featured heavily. This remains one of the poster-child citizen science projects, because it gets the basics right. It looks good, it works, it reaches out to build relationships with new communities (including the humanities), and it is particularly good at taking what works and configuring it to function in those new communities. We figured that one of the common factors that keeps it working across different areas is its success in tapping into the intrinsic motivations of people who are interested in the content – citizen scientists are interested in science. There is also an element of altruism involved, giving one’s time and effort for the greater good – but one point I think we agreed on is that it is far, far easier to classify the kinds of task involved than the people undertaking them. This was our rationale in that 2012 scoping study of humanities crowdsourcing.

A key distinction was made between projects which aggregate or process data, and those which generate new data. Galaxy Zoo is mainly about taking empirical content and aggregating it, in contrast, say, to a project that seeks to gather public observations of butterfly or bird populations. This could be a really interesting distinction for humanities crowdsourcing too, but one which becomes problematic where one type of question leads to the other. What if content is processed/digitized through transcription (for example), and this seeds ideas which lead to amateur scholars generating blog posts, articles, discussions, ideas, books etc.? Does this sort of thing happen in citizen science (genuine question – maybe it does)? So this is one of those key distinctions between citizen science and citizen humanities: the raw material of the former is often natural phenomena – bird populations, raw imagery of galaxies, protein sequences – but in the latter it can be digital material that “citizen humanists” have themselves created from whatever source.

Another key question which came up several times during the afternoon was the nature of science itself, and how citizen science relates to it. A professional scientist will begin an experiment with several possible hypotheses, then test them against the data. Citizen scientists do not necessarily organize their thinking in this way. This raises the question: can the frameworks and research questions of a project be co-produced with public audiences? Or do they have to be determined by a central team of professionals, and farmed out to wider audiences? This is certainly the implication of Jeff Howe’s original framing of crowdsourcing:

“All these companies grew up in the Internet age and were designed to take advantage of the networked world. … [I]t doesn’t matter where the laborers are – they might be down the block, they might be in Indonesia – as long as they are connected to the network.

Technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. … The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing.”

So is it the case that citizen science is about abstract research problems – “are golden finches as common in area X now as they were five years ago?” – rather than concrete research questions – “why has the population of golden finches declined over the last five years?”

For me, the main takeaway was our recognition that citizen science and “conventional” science are not, and should not try to be, the same thing, and should not have the same goals. The important thing in citizen science is not to focus on the “conventional” scientific outcomes of good, methodologically sound and peer-reviewable research – that is, at most, an incidental benefit – but on the relationships between professional academic scientists and non-scientists it creates, and how these can help build a more scientifically literate population. The same should go for the citizen humanities. We can all count bird populations, we can all classify galaxies, we can all transcribe handwritten text, but the most profitable goal for citizen science/humanities is a more collaborative social understanding of why doing so matters.

Tales of many places: Data Infrastructure for Named Entities

January 28, 2017

The use of computational methods for ancient world geography is still very much dominated by the URI-based gazetteer. These powerful and flexible reference lists, trail-blazed by projects such as Pleiades and Pelagios, allow resources to be linked by the common spatial referents they share. However, while computers love URIs unconditionally, the relationship they have with place is more ambivalent: a simmering critical tension which has given rise to what we call the Spatial Humanities. This tension between the ways humanists see place and the way computers deal with it has highlighted important geo-philosophical principles for the study of the ancient world. For me, one of the most important of these is the principle that places as entities which exist in some form of human discourse, such as text, and places as locations which can be situated within the (modern) framework of latitude and longitude must be separated. Gazetteers allow us to do this, which is why they are so important.
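To make that mechanism concrete, here is a toy sketch – my own invented structures and URIs, not Pelagios’s or Pleiades’ actual schema – of the basic idea: two quite different resources become linked simply because their annotations share a gazetteer URI, without either of them committing to a particular name or coordinate pair:

```python
# Toy illustration (invented URIs and fields, not the real Pelagios schema):
# resources annotated with the same gazetteer URI can be joined on that URI.
KNOSSOS_URI = "https://example.org/gazetteer/knossos"  # hypothetical identifier

periplus_text = {
    "title": "Transcription of a periplus passage",
    "place_annotations": [KNOSSOS_URI,
                          "https://example.org/gazetteer/kydonia"],
}
coin_record = {
    "title": "Coin hoard database record",
    "place_annotations": [KNOSSOS_URI],
}

# The join happens at the level of the place-as-concept, not of any name
shared = set(periplus_text["place_annotations"]) & set(coin_record["place_annotations"])
print(shared)  # the common referent that links the two resources
```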

My 2017 kicked off with a meeting in a snowy Leipzig (see above), Digital Infrastructure for Named Entities Data, which sought to further problematize the use of these computational methods to support the investigation of past place. As might be expected of an event driven by Pelagios, the use of URI-based gazetteers featured heavily. The Pelagios Commons was presented by the event’s organizer, Chiara Palladino, as both a community and an infrastructure. It centres on the general concept of “place”, and on clusters of material which share the same properties. Pelagios may be seen, Chiara said, as the “connecting structure behind the system”, aiming at a decentralized and federated approach to provide maps which combine geographical, chronological and biographical data. The event’s exploration of this key, overarching concept highlighted three main issues:

  1. Hodological views of past space

Ancient geographies should be seen in the context of hodological space – as pathways through the world, not points on top of it. Hodology, a concept discussed by several speakers, views space from the perspective of experience and mobility. Hodological space concerns the tension between intent, possibility, and real (embodied) experience. It is frequently bidimensional, as evidenced in the example given by Sergio Brillante of western Crete in the Periplus (mariner’s account) of Pseudo-Skylax, which displayed the best route for travel, not the cartographically optimal one. I was struck by the modern parallel of the WWII Cretan “runner”, George Psychoundakis, who, in his riveting account of his role in the resistance in Crete, measured the distances over which his wartime missions took him on foot by the number of cigarettes he smoked on the journey.

It was noted that in Arabic sources, geographic areas are generally not measured, except for the purposes of agriculture. A hodological approach was described as a counterpoint to “scientific method” in geography: one can frame geographic accuracy either in terms of “accurate” Cartesian maps, or as the consistent application of geographic criteria.

  2. Name neutrality

Like any form of humanistic space, hodological space is never neutral. Place references in humanistic discourse are often the result of multivocal, multi-authorial and partial accounts, and the workshop placed a heavy emphasis on this. Many surviving Classical texts were written by Greek or Athenian authors, so there is a strong Athenocentrism and Graecocentrism to them. Non-Greeks tend to be “hidden”. This seemed to me somewhat reminiscent of the Mercator projection (which most modern Web cartography relies upon), which “shrinks” countries at lower latitudes and accentuates those at higher latitudes, thus visually privileging the developed world at the expense of developing countries (who could forget the scene in The West Wing when the Cartographers for Social Equality regale CJ Cregg on the subject?). Similarly, toponyms are not neutral, a problem which the separation of platial concept and platial location can help address. Our own Heritage Gazetteer of Cyprus is attempting to do this through the application of “attestations” to agnostic geographic entities, an approach also being used by Sinai Rusinek in her Hebrew gazetteer. Similarly, Thomas Carlson described the Syriaca.org gazetteer, which links cultural heritage to texts in the Syriac language. Carlson noted that names are a linguistic strategy, not absolute entities. The nature of names means that disambiguation does not work consistently: even an expert reader might not be able to determine what exactly a toponym refers to. While many ancient world gazetteers rely on URIs, unambiguous URIs can never replace linguistic names. Context-free URIs, which the gazetteer community has long relied on, are no longer sufficient to represent non-neutral humanistic place.
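By way of illustration, this is roughly how I think of the attestation model – a minimal sketch with field names entirely of my own invention, loosely inspired by (and not copied from) the Heritage Gazetteer of Cyprus:

```python
# A minimal sketch of the "attestation" idea (field names are my own):
# the entity itself is agnostic, carrying neither a canonical name nor a
# canonical location; both are attached only as claims made by sources.
place = {
    "id": "gaz:0001",  # stable, name-free identifier for the concept
    "name_attestations": [
        {"toponym": "Λεμεσός",  "language": "ell", "source": "modern usage"},
        {"toponym": "Limassol", "language": "eng", "source": "modern usage"},
        {"toponym": "Nemesos",  "language": "grc", "source": "medieval chronicle"},
    ],
    "location_attestations": [
        {"lat": 34.68, "lon": 33.04, "source": "modern survey"},
    ],
}

# Disambiguation then becomes a matter of weighing attestations,
# not of assuming that one name maps cleanly onto one point.
for att in place["name_attestations"]:
    print(att["toponym"], "-", att["source"])
```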

  3. Ontological (mis)alignment

Finally, a point well made by Maurizio Lana was that geographical ontologies must be built bottom-up to be truly representative. In his presentation he described the Geolat project, which deals with the use of spatial ontologies, and again frames names as cultural patterns. There is a driving force which pulls readers towards names, towards what is easily identifiable. It is necessary to separate the study of entities from naming. This means that an ontology developed for one purpose might not be suitable for others. For example, in the Heritage Gazetteer of Cyprus we make use of Geonames as a means of locating archaeological entities, but the Feature Type list of Geonames is not nearly detailed or granular enough to adequately describe the different kinds of features which exist in the gazetteer. Considering where geo-ontologies have come from, and why they do not align, can therefore lead to very interesting conclusions about the nature of historical spatial structures.

As often happens, there was a great background discussion, via Twitter, with colleagues who were not physically present, which I have captured as a raw Storify. Among the most engaging of these discussions was an exchange as to whether a place has to have a name, or whether place rather acts as a conceptual container for events (in which case, what are they?). My previous belief in the former position found itself severely tested by this exchange, and the papers which touched on hodological views of the past provided reinforcements. I think I am now a follower of the latter view. Thank you to those Twitter friends for this; you know who you are.

Talking to ourselves: Crowdsourcing, Boaty McBoatface and Brexit

October 30, 2016

Back in April, I gave a talk at a symposium in Maryland entitled Finding New Knowledge: Archival Records in the Age of Big Data, called “Of what are they a source? The Crowd as Authors, Observers and Meaning-Makers”. In this talk I made the point that 2016 marked ten years since Jeff Howe coined the term “crowdsourcing” as a pastiche of “outsourcing” in his now-famous Wired piece. I also talked about the saga of “Boaty McBoatface”, then making headlines in the UK. If you recall, Boaty McBoatface was the winner, with over 12,000 votes, of the Natural Environment Research Council’s open-ended appeal to “the crowd” to suggest names for its new £200m polar research ship, and vote on the suggestions. I asked if the episode had anything to tell us about where crowdsourcing had gone in its first ten years. Well, we had a good titter at poor old NERC’s expense (although in fairness I did point out that, in a way, it was wildly successful as a crowdsourcing exercise – surely global awareness of NERC’s essential work in climatology and polar research has never been higher). In my talk I suggested the Boaty McBoatface episode was emblematic of crowdsourcing in the hyper-networked age of social media. The crowdsourcing of 2006 was based, yes, on networks, enabled by the emerging ubiquity of the World Wide Web, but it was a model where “producers” – companies with T-shirts to design (Howe’s example), astrophysicists with galaxy images to classify (the Zooniverse poster child of citizen science), or users of Amazon Mechanical Turk – put content online, and entreated “the crowd” to do something with it. This is interactivity at a fairly basic level. But the 2016 level of web interactivity is a completely different ball game, and it is skewing attitudes to expertise and professionalism in unexpected and unsettling ways.

The relationship between citizen science (or academic crowdsourcing) and “The Wisdom of Crowds” has always been a nebulous one. The earlier iterations of Transcribe Bentham, for example, or Old Weather, are not so much exercises in crowd wisdom as in “crowd intelligence” – the execution of intelligent tasks that a computer could not undertake. These activities (and the numerous others I examined with Mark Hedges in our AHRC Crowd-Sourcing Scoping Survey four years ago) all involve intelligent decision making, even if it is simply an intelligent decision as to how a particular word in Bentham’s papers should be transcribed. The decisions are defined and, to differing degrees, constrained by the input and oversight of expert project members, which give context and structure to those intelligent decisions: recent interviews we have conducted with crowdsourcing projects have all stressed the centrality of a co-productive relationship between professional project staff and non-professional project participants (“volunpeers”, to use the rather wonderful terminology of the Smithsonian Transcription Center’s initiative).

However, events since April have put the relationship between “the crowd” and “the expert” on to the front pages on a fairly regular basis. Four months ago, the United Kingdom voted by the small but decisive margin of 51.9% to 48.1% to exit the European Union. The “Wisdom of [the] Crowd” in making this decision informed much of the debate in the run-up to the vote, with the merits of “crowd wisdom” versus “expert wisdom” being a key theme. Michael Gove, a politician who turned out to be too treacherous even for a Conservative party leadership election, famously declared that “Britain has had enough of experts”. It is a theme that has persisted since the vote, placing the qualification obtained from the act of representing “ordinary people” through election directly over, say, the economic expertise of the Governor of the Bank of England.

Is this fault line between the expert and the crowd real, a social division negotiated by successful academic crowdsourcing projects, or is it merely a conceit of divisive political rhetoric? Essentially, this is a question of who “produces” wisdom, who “consumes” it, and in which direction the cognitive processes which lead to decision-making flow (and which way should they flow?). This highlights the nebulous and inexact definition of “the crowd”. It worked pretty well ten years ago when Howe wrote his article, and translated easily enough into the “crowd intelligence” paradigm of the late 2000s and early academic crowdsourcing. In those earlier days of Web 2.0, it was still possible to make at least a scalar distinction between producers and consumers, between the crowd and the crowdsourcer (or the outsourcer and the organization outsourced to, to keep with his metaphor); even though the role of the user as a creator and a consumer of content was changing (2006 was, after all, also the year in which Facebook and Twitter launched). But how about today? This is a question raised by a recent data analysis of Brexit by the Economist. In this survey of voters’ opinions, it emerges that over 80% of Leave voters stated that they had “more faith in the wisdom of ordinary people than the opinions of experts”. I find the wording of this question fascinating, if not a little loaded – after all, is it not reasonable to place one’s faith in any kind of “wisdom” rather than in an “opinion”? But the implicit connection between a generally held belief and (crowd) wisdom is antithetical to independent decision-making, and independence is crucial to any argument that “crowd wisdom” leads to better decisions – such as leaving the EU. In his 2004 book, The Wisdom of Crowds: Why the Many Are Smarter Than The Few, James Surowiecki talks of “information cascades” being a threat to good crowd decisions. In information cascades, people rely on ungrounded opinions of others that have gone before: the more opinions, the more ongoing, self-replicating reinforcement. Surowiecki says:

Independence is important to intelligent decision making for two reasons. First, it keep (sic) the mistakes that people make from becoming correlated … [o]ne of the quickest ways to make people’s judgements systematically biased is to make them dependent on each other for information. Second, independent individuals are more likely to have new information rather than the same old data everyone is already familiar with.

According to the Economist’s data, the Brexit vote certainly has some of the characteristics of an information cascade as described by Surowiecki: many of those polled who voted that way did so at least in part because of their faith in the “wisdom of ordinary people”. This is the same self-replicating logic of the NERC boat-naming competition which led to Boaty McBoatface, and a product of the kind of closed-loop thinking which social media represents. Five years ago, the New Scientist reported a very similar phenomenon with different kinds of hashtags – depending on the kind of community involved, some (#TeaParty in their example) develop great traction among distinct groups of mutual followers, with individuals tweeting to one another, whereas others (#OccupyWallStreet in this case) attract much greater engagement from those not already engaged. It’s a pattern that comes up again and again, and surely Brexit is a harbinger of new ways in which democracy works.
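For the sceptical, a toy simulation – entirely my own, and far cruder than any serious model of opinion dynamics – makes the mechanics visible: give each agent a private signal that is right 60% of the time, let them copy the visible majority whenever one exists, and whole runs lock in behind whatever the earliest agents happened to choose.

```python
# Toy information cascade (my own illustrative model, not Surowiecki's):
# each agent has a private signal that is right 60% of the time, but
# follows the visible majority of earlier agents whenever one exists.
import random

def run(n_agents=1000, signal_accuracy=0.6):
    correct = incorrect = 0
    for _ in range(n_agents):
        private = random.random() < signal_accuracy  # True = the "right" choice
        if correct != incorrect:
            choice = correct > incorrect   # cascade: follow the herd
        else:
            choice = private               # no majority yet: use own signal
        if choice:
            correct += 1
        else:
            incorrect += 1
    return correct / n_agents

random.seed(42)
print([round(run(), 3) for _ in range(6)])
# Independent agents would average about 0.6 correct; cascading runs
# instead finish near 1.0 or 0.0, fixed by the earliest choices.
```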

The vote, then, certainly embodies and represents the information cascade – the very mechanism which, Surowiecki would have us believe, is the antithesis of the Wisdom of Crowds as a means of making “good” decisions. There may be those who say that to argue this is to argue against democracy, that there are no “good” or “bad” decisions, only “democratic” ones. That is completely true of course; and not for a moment here do I question the democratic validity of the Brexit decision itself. I also happen to believe that millions of Leave voters are decent, intelligent, honourable people who genuinely voted for what, in their considered opinion, was best for the country. But since the Goves of the world made a point and a virtue of placing the Leave case in opposition to the “opinions of experts”, it becomes legitimate to ask questions about the cognitive processes which result from so doing. And the contrast between this divisive rhetoric and the constructive, collaborative relationships between experts and non-experts evident from academic crowdsourcing could not be greater.

But that in turn makes one ask how useful the label “expert” really is. What, in the rhetoric of Gove, Davies etc., actually consigns any individual person to this reviled category? Is it just anyone who works in a university or other professional organization? Who is and who is not an expert is a matter of circumstance and perspective, and it shifts and changes all the time. Those academic crowdsourcing projects understand that, which is why they were so successful. If only politics could take the lesson.

 

Quantitative, Qualitative, Digital. Research Methods and DH

September 21, 2016

This summer, there was an extensive discussion on the Humanist mailing list about the form and nature of research methods in digital humanities. This matters, as it speaks in a fundamental way to a question whose very asking defines Digital Humanities as a discipline: when does the development and use of a tool become a method, or a methodology? The thoughts and responses this thread provoked are testament to the importance of this question. While this post does not aim to offer a complete digest of the thread, I wanted to highlight a couple of key points that emerged from it. A key theme surfaced in one exchange, which concerned the point, in any research activity employing digital tools, at which human interpretation enters. Should this be the creation of tools, the design of those tools, the adding of metadata, the design of metadata, and so on? If one is creating a set of metadata records relating to a painting with reference to “Charles I” (so ran an example given by Dominic Oldman), the computer would not “understand” the meaning of any information provided by the user, and any subsequent online aggregation would be similarly knowledge-agnostic.

In other words, where should human knowledge in the Digital Humanities lie? In the tool, or in the data, or both?

Whatever the answer, the key aspect is the point at which a convention in the use of a particular tool becomes a method. In a posting to the thread on 25th July, Willard McCarty stated:

The divergence is over the tendency of ‘method’ to become something fixed. (Consider, for example, “I have a method for doing that.” Contrast “What if I try doing this?”).

“Fixedness” is essential, and it implies some form of critically-grounded consensus among those using the method in question. This is perhaps easier to see in the social sciences than it is in the [Digital] humanities. For example, how would a classicist, or an historian, or a literature scholar approaching manuscripts through the method of close reading present and describe that method in the appropriate section of the paper? How would this differ from, say, the equivalent section in a paper by a social scientist using grounded theory to approach a set of interviews? There may be no differentiation in the rigour or quality of the research, but one suspects the latter would have a far greater consensus – and body of methodological literature – to draw upon to describe grounded theory than the former would to describe close reading.

Many discussions on this subject remain content-focused. What content means has itself assumed a broader aspect: whereas “content” in the DH may once have meant digitized texts, images and manuscripts, surely now it also includes web content such as tweets, transient social media, and blog posts such as this one. It is essential to continue to address the DH research life-cycle, as based on content, but I maintain that we also need to tackle methodology explicitly (emphasis deliberate), in both its definition and its epistemology, as defined by the presence of fixity, noted by McCarty. “Methodological pluralism”, the key theme of the thread on Humanist this summer, is great, but for there to be pluralism, there must first be singularity. As noted, the social sciences have this in a very grounded way. I have always argued that the very terms “quantitative” and “qualitative” are understood, shared, written about and, ultimately, used in a much more systematic way in the social sciences than in the (digital) humanities, where they are often taken to express a simple distinction between “something that can be computed versus something that cannot”.

I am not saying this is not a useful distinction, but surely the Humanist thread shows that the DH should at least deepen the distinction to mean “something which can be understood by a computer versus something that cannot”.

I would like to pose three further questions on the topic:

1) how are “technological approaches” defined in DH – e.g. the use of a tool, the use of a suite of tools, the composite use of a generic set of digital applications?

2) what does a “technological approach” employing one or more tools enable us to do?

3) how is what we do with technology a) replicable and b) documentable?

Bolton Abbey, North Yorkshire

August 14, 2016


Pencil sketch, mostly harder pencils. August 2016.

Sourcing GIS data

March 29, 2016

Where does one get GIS data for teaching purposes? This is the sort of question one might ask on Twitter. However, while, like many, I have learned to overcome, or at least creatively ignore, the constraints of 140 characters, it can’t really be done for a question this broad, or with as many attendant sub-issues. That said, this post was finally edged into existence by a Twitter follow from “Canadian GIS & Geomatics Resources” (@CanadianGIS) – so many thanks to them for the unintended prod. The website linked from this account states:

I am sure that almost any geomatics professional would agree that a major part of any GIS are the data sets involved. The data can be in the form of vectors, rasters, aerial photography or statistical tabular data and most often the data component can be very costly or labor intensive.

Too true. And as the university term ends, reviewing the issue from the point of view of teaching seems apposite.

First, of course, students need to know what a shapefile actually is. Shapefiles are the building blocks of GIS, the datasets where individual map layers live. Points, lines, polygons: Cartesian geography is what makes the world go round – or at least the digital world, if we accept the oft-quoted statistic that 80% of all online material is in some way georeferenced. I have made various efforts to establish the veracity or otherwise of this statistic, and if anyone has any leads, I would be most grateful if you would share them with me by email or, better still, in the comments section here. Surely it can’t be any less than that now, with the emergence of mobile computing and the saturation of the 4G smartphone market. Anyway…
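For the concrete-minded, this is about as much code as it takes to look inside one – a minimal sketch assuming the geopandas library and an OS Open Data layer unzipped locally (the filename is hypothetical):

```python
# Minimal sketch: inspect a shapefile's geometry types and attribute table.
# Assumes geopandas is installed; the path and filename are made up.
import geopandas as gpd

layer = gpd.read_file("os_opendata/roads.shp")
print(layer.crs)                                 # coordinate reference system
print(layer.geometry.geom_type.value_counts())   # Point / LineString / Polygon
print(layer.head())                              # first few attribute rows
```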

In my postgraduate course on digital mapping, part of a Digital Humanities MA programme, I have used the Ordnance Survey Open Data resources; Geofabrik, an on-demand batch download service for OpenStreetMap data; Web Feature Service data from Westminster City Council; and continental coastline data from the European Environment Agency. The first two in particular are useful, as they provide different perspectives from, respectively, the central mapping and the open-source/crowdsourced geodata angles. But in the expediency required of teaching a module, their main virtues are the fact that they’re free, (fairly) reliable and malleable, and can be delivered straight to the student’s machine or classroom PC (infrastructure problems aside – but that’s a different matter) and uploaded to a package such as QGIS. But I also use some shapefiles, specifically point files, that I created myself. Students should also be encouraged to consider how (and where) the data comes from. This seems to me the most important aspect of geospatial work within the Digital Humanities. This data is out there, and it can be downloaded, but to understand what it actually *is*, what it actually means, you have to create it. That can mean writing Python scripts to extract toponyms, considering how place is represented in a text, or poring over Google Earth to identify latitude/longitude references for archaeological features.
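The point files I made myself were built along roughly these lines – again a sketch assuming geopandas, with made-up names and coordinates standing in for readings taken from Google Earth:

```python
# Minimal sketch: turn hand-gathered latitude/longitude readings (e.g. from
# Google Earth) into a point shapefile for QGIS. Names and coordinates are
# invented for illustration.
import geopandas as gpd
from shapely.geometry import Point

readings = [
    {"name": "barrow (possible)", "lat": 51.655, "lon": -1.575},
    {"name": "hollow way",        "lat": 51.657, "lon": -1.544},
]
gdf = gpd.GeoDataFrame(
    readings,
    geometry=[Point(r["lon"], r["lat"]) for r in readings],  # x = lon, y = lat
    crs="EPSG:4326",  # plain WGS84 latitude/longitude
)
gdf.to_file("field_points.shp")
```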

This goes to the heart of what it means to create geodata, certainly in the Digital Humanities. Like the Ordnance Survey and Geofabrik data, much of the geodata around us on the internet arrives pre-packaged and with all its assumptions hidden from view. Agnieszka Leszczynski, whose excellent work on the distinction between quantitative and qualitative geography I have been re-reading in preparation for various forthcoming writings, calls this a ‘datalogical’ view of the world. Everything is abstracted as computable points, lines and polygons (or rasters). Such data is abstracted from the ‘infological’ view of the world, as understood by the humanities. As Leszczynski puts it: “The conceptual errors and semantic ambiguities of representation in the infological world propagate and assume materiality in the form of bits and bytes”[1]. It is this process of assumption that a good DH module on digital mapping must address.

In the course of this module I have also become aware of important intellectual gaps in this sort of provision. Nowhere, for example, in either the OS or Geofabrik datasets is there information on British public Rights of Way (PROWs). I’m going to be needing this data later in the summer for my own research on the historical geography of corpse roads (more here in the future, I hope). But a bit of Googling turned up the following blog reply from OS at the time of the OS data release in April 2010:

I’ve done some more digging on ROW information. It is the IP of the Local Authorities and currently we have an agreement that allows us to to include it in OS Explorer and OS Landranger Maps. Copies of the ‘Definitive Map’ are passed to our Data Collection and Management team where any changes are put into our GIS system in a vector format. These changes get fed through to Cartographic Production who update the ROW information within our raster mapping. Digitising the changes in this way is actually something we’ve not been doing for very long so we don’t have a full coverage in vector format, but it seems the answer to your question is a bit of both! I hope that makes sense![2]

So… teaching GIS in the arcane backstreets of the (digital) spatial humanities still means seeing what is not there due to IP as well as what is.

[1] Leszczynski, Agnieszka. “Quantitative Limits to Qualitative Engagements: GIS, Its Critics, and the Philosophical Divide.” The Professional Geographer 61.3 (2009): 350-365.

[2] https://www.ordnancesurvey.co.uk/blog/2010/04/os-opendata-goes-live/

Question: (how) do we map disappeared places?

September 2, 2015

A while ago I asked Twitter if there was a name for a long period of inactivity on blogs or social media. Erik Champion came up with some nice suggestions

which raise questions about whether blogging represents either the presence or absence of ‘loafing’; and  replied with a certain elegant simplicity:

Anyway, having been either ‘living’ or ‘loafing’ a lot these last few months, this is my first post since February.

I want to ask another question, but 140 characters just won’t cut it for this one. How does one represent a place in a gazetteer – or any other kind of database or GIS for that matter – which no longer exists? Take the example of ‘Mikro Kaimeni’, a tiny volcanic island in the Santorini archipelago mapped and published by Thomas Graves in his 1850 military survey of the Aegean:

[Image: detail of Thomas Graves’s 1850 survey showing Mikro Kaimeni]

Some sixteen years after this map was made, Santorini erupted and Mikro Kaimeni combined with the large central island, Neo Kameni:

[Image: the same area today, from OpenStreetMap, with Mikro Kaimeni merged into Neo Kameni]

Can such places be hermeneutic objects by virtue of the fact that they are represented in the human record (in this case Graves’s map), even though they no longer exist as spatial footprints on the earth’s surface? I suppose they have to be. The same could go for fictional places (Middle Earth, Gotham City etc.). What kind of representational issues does this create for mapping in the humanities more generally?
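One possible answer, sketched purely as a thought experiment (the field names are entirely my own): give the entity a stable identity, and demote its spatial footprint to a dated claim alongside its attestations.

```python
# A thought-experiment sketch (my own field names, not any real gazetteer's
# schema): the entry keeps a stable identity for the place-as-concept, while
# its spatial footprint becomes a dated claim rather than a permanent fact.
mikro_kaimeni = {
    "id": "gaz:mikro-kaimeni",
    "type": "volcanic island",
    "attestations": [
        {"source": "Graves, military survey of the Aegean (1850)", "kind": "map"},
    ],
    "footprints": [
        {
            "geometry": None,      # could hold an outline digitised from Graves's map
            "valid_until": 1866,   # merged with Neo Kameni in the eruption
        },
    ],
    "successor": "gaz:neo-kameni",  # the present-day island that absorbed it
}
```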