Archive for the ‘digital humanities’ Category

Research questions, abstract problems – a round table on Citizen Science

February 26, 2017

I recently participated in a round-table discussion entitled “Impossible Partnerships”, organized by The Cultural Capital Exchange at the Royal Institution, on the theme of Citizen Science; the Impossible Partnerships of the title being those between the academy and the wider public. It is always interesting to attend citizen science events – I get so caught up in the humanities crowdsourcing world (such as it is) that it’s good to revisit the intellectual field that it came from in the first place. This is one of those blog posts whose main aim is to organize my own notes and straighten my own thinking after the event, so don’t read on if you are expecting deep or profound insights.


Crucible of knowledge: the Royal Institution’s famous lecture theatre

Galaxy Zoo of course featured heavily. This remains one of the poster-child citizen science projects, because it gets the basics right. It looks good, it works, it reaches out to build relationships with new communities (including the humanities), and it is particularly good at taking what works and configuring it to function in those new communities. We figured that one of the common factors that keeps it working across different areas is its success in tapping into the intrinsic motivations of people who are interested in the content – citizen scientists are interested in science. There is also an element of altruism involved, giving one’s time and effort for the greater good – but one point I think we agreed on is that it is far, far easier to classify the kinds of task involved than the people undertaking them. This was our rationale in our 2012 scoping study of humanities crowdsourcing.

A key distinction was made between projects which aggregate or process data, and those which generate new data. Galaxy Zoo is mainly about taking empirical content and aggregating it, in contrast, say, to a project that seeks to gather public observations of butterfly or bird populations. This could be a really interesting distinction for humanities crowdsourcing too, but one which becomes problematic where one type of question leads to the other. What if content is processed/digitized through transcription (for example), and this seeds ideas which lead to amateur scholars generating blog posts, articles, discussions, ideas, books and so on? Does this sort of thing happen in citizen science? (Genuine question – maybe it does.) This is one of the key distinctions between citizen science and the citizen humanities. The raw material of the former is often natural phenomena – bird populations, raw imagery of galaxies, protein sequences – but in the latter it can be digital material that “citizen humanists” have created from whatever source.

Another key question which came up several times during the afternoon was the nature of science itself, and how citizen science relates to it. A professional scientist will begin an experiment with several possible hypotheses, then test them against the data. Citizen scientists do not necessarily organize their thinking in this way. This raises the question: can the frameworks and research questions of a project be co-produced with public audiences? Or do they have to be determined by a central team of professionals, and farmed out to wider audiences? This is certainly the implication of Jeff Howe’s original framing of crowdsourcing:

“All these companies grew up in the Internet age and were designed to take advantage of the networked world. … [I]t doesn’t matter where the laborers are – they might be down the block, they might be in Indonesia – as long as they are connected to the network.

Technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. … The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing.”

So is it the case that citizen science is about abstract research problems – “are goldfinches as common in area X now as they were five years ago?” – rather than concrete research questions – “why has the population of goldfinches declined over the last five years?”

For me, the main takeaway was our recognition that citizen science and “conventional” science are not, and should not try to be, the same thing, and should not have the same goals. The important thing in citizen science is not to focus on the “conventional” scientific outcomes of good, methodologically sound and peer-reviewable research – that is, at most, an incidental benefit – but on the relationships between professional academic scientists and non-scientists that it creates, and how these can help build a more scientifically literate population. The same should go for the citizen humanities. We can all count bird populations, we can all classify galaxies, we can all transcribe handwritten text, but the most profitable goal for citizen science/humanities is a more collaborative social understanding of why doing so matters.

Sourcing GIS data

March 29, 2016

Where does one get GIS data for teaching purposes? This is the sort of question one might ask on Twitter. However, while I, like many, have learned to overcome, or at least creatively ignore, the constraints of 140 characters, that can’t really be done for a question this broad, or one with as many attendant sub-issues. That said, this post was finally edged into existence by a Twitter follow, from “Canadian GIS & Geomatics Resources” (@CanadianGIS). So many thanks to them for the unintended prod. The website linked from this account states:

I am sure that almost any geomatics professional would agree that a major part of any GIS are the data sets involved. The data can be in the form of vectors, rasters, aerial photography or statistical tabular data and most often the data component can be very costly or labor intensive.

Too true. And as the university term ends, reviewing the issue from the point of view of teaching seems apposite.

First, of course, students need to know what a shapefile actually is. The shapefile is the basic building block of GIS: the dataset format in which individual map layers live. Points, lines, polygons: Cartesian geography is what makes the world go round – or at least the digital world, if we accept the oft-quoted statistic that 80% of all online material is in some way georeferenced. I have made various efforts to establish the veracity, or otherwise, of this statistic, and if anyone has any leads, I would be most grateful if you would share them with me by email or, better still, in the comments section here. Surely it can’t be any less than that now, with the emergence of mobile computing and the saturation of the 4G smartphone market. Anyway…
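
For teaching, the anatomy of a shapefile is easier to show than to tell. Here is a minimal sketch in Python of loading and inspecting a single layer – assuming geopandas is installed, and using a hypothetical file boundaries.shp:

    import geopandas as gpd

    # Read one shapefile layer; the .shp travels with .dbf, .shx and .prj sidecar files
    layer = gpd.read_file("boundaries.shp")

    print(layer.geom_type.unique())  # a layer holds one geometry type, e.g. ['Polygon']
    print(layer.crs)                 # the coordinate reference system, e.g. EPSG:27700
    print(layer.head())              # the attribute table: one row per feature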

In my postgraduate course on digital mapping, part of a Digital Humanities MA programme, I have used the Ordnance Survey Open Data resources; Geofabrik, an on-demand batch download service for OpenStreetMap data; Web Feature Service data from Westminster City Council; and continental coastline data from the European Environment Agency. The first two in particular are useful, as they provide different perspectives: the central mapping agency angle versus the open source/crowdsourced geodata angle. But given the expediency required of teaching a module, their main virtues are that they’re free, (fairly) reliable and malleable, and can be delivered straight to the student’s machine or classroom PC (infrastructure problems aside – but that’s a different matter) and uploaded to a package such as QGIS. But I also use some shapefiles, specifically point files, that I created myself. Students should also be encouraged to consider how (and where) such data comes from. This seems to me the most important aspect of the geospatial within the Digital Humanities. This data is out there, it can be downloaded, but to understand what it actually *is*, what it actually means, you have to create it. That can mean writing Python scripts to extract toponyms, considering how place is represented in a text, or poring over Google Earth to identify latitude/longitude references for archaeological features.
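
By way of a concrete (and hedged) sketch of that last kind of workflow: the snippet below assumes spaCy with its small English model, shapely and geopandas are installed, and uses a toy gazetteer as a stand-in for the real interpretive work of locating place references. It turns toponyms found in a text into a point shapefile ready for QGIS:

    import spacy
    import geopandas as gpd
    from shapely.geometry import Point

    nlp = spacy.load("en_core_web_sm")
    text = "The road ran from Winchester to Salisbury across the downs."

    # A toy gazetteer: toponym -> (longitude, latitude); real georeferencing
    # is, of course, where the interpretive work actually happens
    gazetteer = {"Winchester": (-1.31, 51.06), "Salisbury": (-1.79, 51.07)}

    # Pull out named entities that look like places and that we can locate
    doc = nlp(text)
    names = [ent.text for ent in doc.ents
             if ent.label_ == "GPE" and ent.text in gazetteer]

    gdf = gpd.GeoDataFrame(
        {"name": names},
        geometry=[Point(*gazetteer[n]) for n in names],
        crs="EPSG:4326",  # WGS84 longitude/latitude
    )
    gdf.to_file("toponyms.shp")  # a point shapefile of our own making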

This goes to the heart of what it means to create geodata, certainly in the Digital Humanities. Like the Ordnance Survey and Geofabrik data, much of the geodata around us on the internet arrives pre-packaged and with all its assumptions hidden from view. Agnieszka Leszczynski, whose excellent work on the distinction between quantitative and qualitative geography I have been re-reading as part of preparation for various forthcoming writings, calls this a ‘datalogical’ view of the world. Everything is abstracted as computable points, lines and polygons (or rasters). Such data is abstracted from the ‘infological’ view of the world, as understood by the humanities. As Leszczynski puts it: “The conceptual errors and semantic ambiguities of representation in the infological world propagate and assume materiality in the form of bits and bytes”[1]. It is this process of assumption that a good DH module on digital mapping must address.

In the course of this module I have also become aware of important intellectual gaps in this sort of provision. Nowhere, for example, in either the OS or Geofabrik datasets, is there information on British public Rights of Way (PROWs). I’m going to be needing this data later in the summer for my own research on the historical geography of corpse roads (more here in the future, I hope). But a bit of Googling turned up the following blog reply from OS at the time of the OS data release in April 2010:

I’ve done some more digging on ROW information. It is the IP of the Local Authorities and currently we have an agreement that allows us to include it in OS Explorer and OS Landranger Maps. Copies of the ‘Definitive Map’ are passed to our Data Collection and Management team where any changes are put into our GIS system in a vector format. These changes get fed through to Cartographic Production who update the ROW information within our raster mapping. Digitising the changes in this way is actually something we’ve not been doing for very long so we don’t have a full coverage in vector format, but it seems the answer to your question is a bit of both! I hope that makes sense![2]

So… teaching GIS in the arcane backstreets of the (digital) spatial humanities still means seeing what is not there due to IP as well as what is.

[1] Leszczynski, Agnieszka. “Quantitative Limits to Qualitative Engagements: GIS, Its Critics, and the Philosophical Divide.” The Professional Geographer 61.3 (2009): 350-365.

[2] https://www.ordnancesurvey.co.uk/blog/2010/04/os-opendata-goes-live/

Reconstruction, visualization and frontier archaeology

August 17, 2013

Recently on holiday in the North East, I took in two Roman forts on the frontier of Hadrian’s Wall: Segedunum and Arbeia. Both have stories to tell – narratives about the Roman occupation of Britain – and in the current period both have been curated in various ways by the responsible authority, Tyne and Wear Museums, with ongoing archaeological research being undertaken by the fantastic WallQuest community archaeology project.

The public walkthrough reconstructions at both sites of what the buildings and their contents might have been like pose some interesting questions about the nature of historical/archaeological narratives, and how they can be elaborated. At Segedunum, there is a reconstruction of a bath house. Although the fort itself had such a structure, modern development means that the reconstruction is not in the same place, nor do its foundations relate directly to archaeological evidence. The features of the bath house are drawn from composite analysis of bath houses from throughout the Roman Empire. So what we have here is a narrative, but it is a generic narrative: it is stitched together, generalized, a mosaic of hundreds of disparate narratives. It can only be very loosely constrained in time (a bath house such as that at Segedunum would have had a lifespan of 250-300 years), and not anchored to any one individual: we cannot tell the story of any one Roman officer or auxiliary soldier who used it.


Reconstructed bath house at Segedunum

On the other hand, at Arbeia there are three sets of granaries, the visible foundations all nicely curated and accessible to the public. You can see the stone piers and columns that the granary floors were mounted on, to allow air movement to stop the grain rotting. Why three granaries for a fort of no more than 600 occupants? Because in the third century the Emperor Severus wanted to conquer the nearby Caledonii, and for his push up into Scotland he needed a secure supply base with plenty of grain.


Granaries at Arbeia, reconstructed West Gatehouse in the background

This is an absolute narrative. It is constrained by actual events which are historical and documented. At the same fort is a reconstructed gateway, which is this time situated on actual foundations. This is an inferential narrative, with some of the gateway’s features being reconstructed, again, from composite evidence from elsewhere (did it have two or three storeys? A shingled roof? We don’t know, but we infer). These narratives are supported by annotated scale models in the gateway structure which we, the paying public (actually Arbeia is free), can view and review at our leisure. This speaks to the nature of empirical, inferential and conjectural reconstruction detailed in a forthcoming book chapter by myself and Kirk Woolford (in a volume of contributions to the EVA conference, published by Springer).

Narratives are personal, but they can also be generic. In some ways this speaks back to the concept of the Deep Map (see older posts). The walkthrough reconstruction constitutes, I think, half a Deep Map. It provides a full sensory environment, but is not ‘scholarly’ in that it does not elucidate what it would have been like for a first- or second-century Roman officer or auxiliary soldier to experience the environment. Maybe the future of 3D visualization should be to integrate modelling, reconstruction, remediation and interpretation, to bring together available (and reputable) knowledge from whatever source about what that original sensory experience would have been – texts, inscriptions, writing tablets, environmental archaeology, experimental archaeology and so on. In other words, visualization should no longer be seen as a means of making hypothetical visual representations of what the past might have been, but of integrating knowledge about the experience of the environment derived from all five senses, using vision as the medium. It can never be a total representation incorporating all possible experiences under all possible environmental conditions, but then a map can never be a total representation of geography (except, possibly, in the world of Borges’s On Exactitude in Science).

To crowd-source or not to crowd-source

January 6, 2013

Shortly before Christmas, I was engaged in discussion with a Sweden-based colleague about crowd-sourcing and the humanities. My colleague – an environmental archaeologist – posited that it could be demonstrated that crowd-sourcing was not an effective methodology for his area. His test: ask randomly selected members of the public to draw a Viking helmet. You would get a series of not dissimilar depictions – a sort of pointed or semi-conical helmet, with horns on either side. But Viking helmets did not have horns.

Having recently published a report for the AHRC on humanities crowd-sourcing – a research review which looked at around 100 publications, and about the same number of projects, activities, blogs and so on – I would say the answer to this apparent fault is: don’t identify Viking helmets by asking the public to draw them. Obvious as this may sound, it is in fact just an obvious example of a complex calculation that needs to be carried out when assessing if crowd-sourcing is appropriate for any particular problem. Too often, we found in our review, crowd-sourcing was used simply because there was a data resource there, or some infrastructure which would enable it, and not because there was a really important or interesting question that could be posed by engaging the public – although we found honourable exceptions to this. Many such projects contributed to the workshop we held last May, which can be found here. To help identify which sorts of problems would be appropriate, we have developed – or rather, since this will undoubtedly evolve in the future, I should say we are developing – a four-facet typology of humanities crowd-sourcing scenarios. These facets are asset type (the content or data forming the subject of the activity), process type (what is done with that content), task type (how it is done), and output type (the thing, resource or knowledge produced). What we are now working on is identifying – or trying to identify – examples of how these might fit together to form successful crowd-sourcing workflows.
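
To make the shape of that typology concrete, here is a minimal sketch of the four facets as a data structure – my own illustration for this post, not the report’s formalism, and the facet values are invented examples rather than a controlled vocabulary:

    from dataclasses import dataclass

    @dataclass
    class CrowdsourcingScenario:
        asset_type: str    # the content or data forming the subject of the activity
        process_type: str  # what is done with that content
        task_type: str     # how it is done
        output_type: str   # the thing, resource or knowledge produced

    # A transcription project, expressed in these terms:
    scenario = CrowdsourcingScenario(
        asset_type="digitised manuscript page images",
        process_type="transcription",
        task_type="keying text from images",
        output_type="machine-readable text corpus",
    )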

To put it in the terms of my friend’s challenge: an accurate image of a Viking helmet is not an output which can be generated by setting creative tasks to underpin the process of recording and creating content, and the ephemeral and unanchored public conception of what a Viking helmet looks like is not an appropriate asset to draw from. Obvious as this may sound, it hints that a systematic framework for identifying where crowd-sourcing will, and won’t, work is methodologically possible. And this could, potentially, be very valuable as the humanities face increasing interest from well-organized and well-funded citizen science communities such as Zooniverse (which already supports and facilitates several of the early success stories in humanities crowd-sourcing, such as Ancient Lives and Old Weather).

This of course raises a host of other issues. How on earth can peer-review structures cope with this, and should they try to? What motivates the public, and indeed academics, to engage with crowd-sourcing? We hint at some answers. Transparency and documentation are essential for the former; and we found that in the latter, most projects swiftly develop a core community of very dedicated followers who undertake reams of work, but – possibly like many more conventional collaborations – finding those people, or letting them find you, is not always easy.

The final AHRC report is available: Crowdsourcing-connected-communities.

Last day in Indiana

July 1, 2012

It’s my last day in Indianapolis. It’s been hard work and I’ve met some great people. I’ve experienced Indianapolis’s hottest day since 1954, and *really* learned to appreciate good air conditioning. Have we, in the last two weeks, defined what a deep map actually is? In a sense we did, but more importantly than the semantic definition, I reckon we managed to form a set of shared understandings, some fairly intuitive, which articulate (for me at least) how deep mapping differs from other kinds of mapping. It must integrate, and at least some of this integration must involve the linear concepts of what, when and where (but see below). It must reflect experience at the local level as well as data at the macro level, and it must provide a means of scaling between them. It must allow the reader (I hereby renounce the word ‘user’ in relation to deep maps) to navigate the data and derive their own conclusions. Unlike a GIS – ‘so far so Arc’ is a phrase I have co-coined this week – it cannot, and should not attempt to, actualize every possible connection in the data, either implicitly or explicitly. Above all, a deep map must have a topology that enables all these things, and if, in the next six months, the Polis Center can move us towards a schema underlying that topology, then I think our efforts, and theirs, will have been well rewarded.

The bigger questions for me are what does this really mean for the ‘spatial humanities’; and what the devil are the spatial humanities anyway? They have no Wikipedia entry (so how can they possibly exist?). I have never particularly liked the term ‘spatial turn’, as it implies a setting apart, which I do not think the spatial humanities should be about. The spatial humanities mean nothing if they do not communicate with the rest of the humanities, and beyond. Perhaps – and this is the landscape historian in me talking – it is about the kind of topology that you can extract from objects in the landscape itself. Our group in Week 2 spent a great deal of time thinking about the local and the experiential, and how the latter can be mapped on to the former, in the context of a particular Unitarian ministry in Indianapolis. What are the stories you can get from the landscape, not just tell about it?

Allow me to illustrate the point with war memorials. The city’s primary visitor information site, visitindy.com, states that Indianapolis has more war memorials than any city apart from Washington D.C. Last Saturday, a crew of us hired a car and visited Columbus IN, an hour and a half’s drive away. In Columbus there is a memorial to most of America’s wars: eight by six Indiana limestone columns, arranged in a close grid formation with free public access from the outside. Engraved on all sides of the columns around the outside, except the outer facing edges, are the names of the fallen, their dates, and the war in which they served. On the inner columns – further in, where you have to explore to find them, giving them the mystique of the inner sanctum – are inscribed the full texts of letters written home by fallen servicemen. In most cases, they seem to have been written just days before the dates of death. The deeply personal nature of these letters provides an emotional connection, and combined with the spatiality of the columns, this connection forms a very specific, and very deliberately told, spatial narrative. It was also a deeply moving experience.

Today, in Indianapolis itself, I was exploring the very lovely canal area, and came across the memorial to the USS Indianapolis. The Indianapolis was a cruiser of the US Navy sunk by Japanese torpedoes in 1945, with heavy loss of life. Particular poignancy is given to the memorial by a narrative of the ship’s history, and the unfolding events leading up to the sinking, inscribed in prose on the monument’s pedestal. I stood there and read it, totally engrossed and as moved by the story as I was by the Columbus memorial.


USS Indianapolis memorial

The point for deep maps: American war memorials tell stories in a very deliberate, designed and methodical way, to deeply powerful effect in the two examples I saw. British war memorials tend not to do this. You get a monument, lists of names of the fallen and the war in question, and perhaps a motto of some sort. An explicit story is not told. This does not make the experience any less moving, but it is based on a shared and implicit communal memory, whose origins are not made explicit in the fabric of the monument. It reflects a subtle difference in how servicemen and women are memorialized, in the formation of the inherently spatial stories that are told in order to remember them.

This is merely one example of subtle differences which run through any built environment of any period in any place, and they become less subtle as you scale in more and more cultures with progressively weaker ties. Britain and America. Europe, Britain and America. Europe, America and Africa, and so on. We scale out and out, and then we get to the point where the approaches to ‘what’ ‘when’ and ‘where’ – the approaches that we worked on in our group – must be recognised not as universal ways of looking at the world, but as products of our British/American/Australian backgrounds, educations and cultural memories. Thus it will be with any deep map.

How do we explain to the shade of Edward Said that by mapping these narratives we are not automatically claiming ownership of them, however much we might want or try not to? How deep will these deep maps need to go…?

Deep maps in Indy

June 25, 2012

I am here in a very hot and sunny Indianapolis trying to figure out what is meant by deep mapping, at an NEH Summer Institute at IUPUI, hosted by the Polis Center. There follows a very high-level attempt to synthesize some thoughts from the first week.

Deep mapping – we think, although we’ll all probably have changed our minds by next Friday, if not well before – is about representing (or, as I am increasingly preferring to think, remediating) the things that the Ordnance Survey would, quite rightly, run a perfectly projected and triangulated mile from mapping at all. Fuzziness. Experience. Emotion. What it means to move through a landscape at a particular time in a particular way. Or, as Ingold might say, to negotiate a taskscape. Communicating these things meaningfully as stories or arguments. There has been lots of fascinating back and forth about this all week, although – and this is the idea at least – next week we move beyond the purely abstract and grapple with what it means to actually design one.

If we’re to define the meaning we’re hoping to build here, it’s clear that we need to rethink some pretty basic terms. E.g. we talk instinctively about ‘reading’ maps, but I have always wondered how well that noun and that verb really go together. We assume that ‘deep mapping’ for the humanities – a concept which we assume will be at least partly online – has to stem from GIS, and that a ‘deep map’, whatever we might end up calling it, will be some kind of paradigm shift beyond ‘conventional’ computer mapping. But the ‘depth’ of a map is surely a function of how much knowledge – knowledge rather than information – is added to the base layer, where that knowledge comes from, and how it is structured. The amazing HGIS projects we’ve seen this week give us the framework we need to think in, but the concept of ‘information’ therein should surely be seen as a starting point. The lack of very basic kinds of such information in popular mapping applications has been highlighted, and perhaps serves to illustrate this point. In 2008, Mary Spence, President of the British Cartographic Society, argued in a lecture:

Corporate cartographers are demolishing thousands of years of history, not to mention Britain’s remarkable geography, at a stroke by not including them on [GPS] maps which millions of us now use every day. We’re in danger of losing what makes maps so unique, giving us a feel for a place even if we’ve never been there.

To put it another way, are ‘thin maps’ really all that ‘thin’ when they are produced and curated properly, according to accepted technical and scholarly standards? Maps are objects of emotion in a way that texts are not (which is not to deny the emotional power of text; it is simply to recognize that it is a different kind of power). Read Mike Parker’s 2009 Map Addict for an affectionate and quirky tour of the emotional power of OS maps (although anyone with archaeological tendencies will have to grit their teeth when he burbles about the mystic power of ley lines and the cosmic significance of the layout of Milton Keynes). According to Spence, a map of somewhere we have never been ties together our own experiences of place, whether absolute (i.e. georeferenced) or abstract, along with our expectations and our needs. If this is true for the lay audiences of, say, the Ordnance Survey, isn’t the vision of a deep map articulated this past week some sort of scholarly equivalent? We can use an OS map to make a guess, an inference or an interpretation (much discussion this week has, directly or indirectly, focused on these three things and their role in scholarly approaches). What we cannot do with an OS map is annotate or embed it with any of these. The defining function of a deep map, for me, is the ability to do this, as well as the ability to structure the outputs in a formal way (RDF is looking really quite promising, I think – if you treat different mapped objects in the subject-predicate-object framework, that overcomes a lot of the problems of linearity and scale that we’ve battled with this week). The different levels of ephemerality that this would mean categorising (or, heaven help us, quantifying) are probably a story for another post, but a deep map should be able to convey the experience of moving through the landscape being described.
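
Here is a minimal sketch of what that subject-predicate-object structuring might look like in practice, assuming Python’s rdflib; the namespace, vocabulary and resources are entirely hypothetical, a stand-in for whatever schema a deep map eventually settles on:

    from rdflib import Graph, Literal, Namespace

    # A hypothetical deep-map namespace; no such vocabulary actually exists yet
    DM = Namespace("http://example.org/deepmap/")
    g = Graph()

    # One mapped object (a line feature, say), annotated with a guess,
    # an inference and an interpretation as subject-predicate-object triples
    path = DM["corpse_road_17"]
    g.add((path, DM.interpretedAs, Literal("post-medieval corpse road")))
    g.add((path, DM.inferredFrom, DM["parish_burial_register_1623"]))
    g.add((path, DM.experiencedAs, Literal("a two-hour walk over open moorland")))

    print(g.serialize(format="turtle"))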

There are other questions which bringing such a map into the unforgiving world of scholarly publication would undoubtedly entail. Must a map be replicable? Must someone else be able to come along and map the same thing in the same way, or at least according to their own subjective experience(s)? In a live link-up, the UCLA team behind the Roman Forum project demonstrated their stuff, and argued that the visual is replicable and – relatively easily – publishable, but of course other sensory experiences are not. We saw, for example, a visualisation of how far an orator’s voice could carry. The visualisation looks wonderful, and the quantitative methodology even more so, but to be meaningful as an instrument in the history of Roman oratory, one would have to consider so many subjective variables – the volume of the orator’s voice (of course), the ambient noise, the local weather conditions (especially wind). There are even less knowable factors, such as how well individuals in the crowd could hear, whether they had any hearing impairments, and so on. This is not to carp – after all, we made (or tried to make) a virtue of addressing and constraining such evidential parameters in the MiPP project, and our outputs certainly looked nothing like as spectacular as UCLA’s virtual Rome – but a deep map must be able to cope with those constraints.

To stand any chance of mapping them, we need to treat such ephemera as objects, and object-orientation seemed to be where our – or at least my – thinking was going last week. And then roll out the RDF…

CAA1 – The Digital Humanities and Archaeology Venn Diagram

April 1, 2012

The question ‘what is the digital humanities?’ is hardly new; nor is discussion of the various epistemologies of which the digital humanities are made. However, the relationship which archaeology has with the digital humanities – whatever the epistemology of either – has been curiously absent from that discussion. Perhaps this is because archaeology has such strong and independent digital traditions, and such a set of well-understood quantitative methods, that close analysis of those traditions – familiar to readers of Humanist, say – seems redundant. However, at the excellent CAA International conference in Southampton last week, there was a dedicated round-table session on the ‘Digital Humanities/Archaeology Venn Diagram’, in which I was a participant. This session highlighted that the situation is far more nuanced and complex than it first seems. As is so often the case with the digital humanities.

A Venn Diagram, of course, assumes two or more discrete groups of objects, where some objects contain the attributes of only one group, and others share attributes of multiple groups. So – assuming that one can draw a Venn loop big enough to contain the digital humanities – what objects do they share with archaeology? As I have not been the first to point out, the digital humanities are mainly concerned with methods. This, indeed, was the basis of Short and McCarty’s famous diagram. The full title of CAA – Computer Applications and Quantitative Methods in Archaeology – suggests that a methodological focus is one such object shared by both groups. However, unlike the digital humanities, archaeology is concerned with a well-defined set of questions. Most, if not all, of these questions derive from ‘what happened in the past?’. Invariably the answers lie, in turn, in a certain class of material; and indeed we refer collectively to this class as ‘material culture’. And digital methods are a means that we use to the end of getting at the knowledge that comes from interpretation of material culture.

The digital humanities have a much broader shared heritage which, as well as being methodological, is also primarily textual. This is illustrated by the main print publication in the field being called Literary and Linguistic Computing. It is not, I think, insignificant as an indication of how things have moved on that a much more recently founded journal (2007) has the less content-specific title Digital Humanities Quarterly. This, I suspect, is related to the reason why digitisation so often falls between the cracks in the priorities of funding agencies: there is a perception that the world of printed text is so vast that trying to add to the corpus incrementally would be like painting the Forth Bridge with a toothbrush (although this doesn’t affect my general view that the biggest enemy of mass digitisation today is not FEC or public spending cuts, but the Mauer im Kopf – the ‘wall in the head’ – formed by notions of data ownership and IPR). The digital humanities are facing a tension, as they always have, between the variable availability of digital material, and the broad access to content that the word ‘humanities’ implies in any porting over to the ‘digital’. As Stuart Jeffrey’s talk in the session made clear, the questions facing archaeology are more about what data archaeologists throw away: the emergence of Twitter, for example, gives an illusion of ephemerality, but every tweet adds to the increasing cloud of noise on the internet; and those charged with preserving the archaeological record in digital form must decide where the noise ends and the record begins.

There is also the question of what digital methods *do* to our data. Most scholars who call themselves ‘digital humanists’ would reject the notion that textual analysis which begins with semantic and/or stylometric mark-up is a purely quantitative exercise, arguing instead that the qualitative aspects of reading and analysis arise from, and challenge, the additional knowledge which is imparted to a text in the course of encoding by an expert. However, it is exactly this kind of baseline quantitative reading of primary material which archaeology – going back to the early 1990s – characterized as reductionist and positivist. Outside the shared zone of the Venn diagram, then, must be considered the notions of positivism and reductionism: they present fundamentally different challenges for archaeological material than they do for other kinds of primary resource, certainly including text, but also, I suspect, for other kinds of ‘humanist’ material as well.

A final point which emerged from the session is the disciplinary nature(s) of archaeology and the digital humanities themselves. I would like to pose the question of why the former is often expressed as a singular noun whereas the latter is a plural. Plurality in ‘the humanities’ is taken implicitly. It conjures up notions of a holistic liberal arts education in the human condition, taking in the fruits of all the arts and sciences in which humankind has excelled over the centuries. But some humanities are surely more digital than others. Some branches of learning, such as corpus linguistics, lend themselves to quantitative analysis of their material. Others tend towards the qualitative, and need to be prefixed by correspondingly different kinds of ‘digital’. Others are still more interpretive, with their practitioners actively resisting ‘number crunching’. Therefore, instead of being satisfied with ‘The Digital Humanities’ as an awkward collective noun, maybe we could look to free ourselves of the restrictions of nomenclature by recognizing that we can’t impose homogeneity, and nor should we try to. Maybe we could even extend this logic, and start thinking in terms of ‘digital archaeologies’: branches of archaeology which require (e.g.) archiving, communication, the semantic web, UGC and so on; and some which don’t require any. I don’t doubt that the richness and variety of the conference last week is the strongest argument possible for this.