Talking to ourselves: Crowdsourcing, Boaty McBoatface and Brexit

October 30, 2016

Back in April, I gave a talk called “Of what are they a source? The Crowd as Authors, Observers and Meaning-Makers” at a symposium in Maryland entitled Finding New Knowledge: Archival Records in the Age of Big Data. In this talk I made the point that 2016 marked ten years since Jeff Howe coined the term “crowdsourcing” as a pastiche of “outsourcing” in his now-famous Wired piece. I also talked about the saga of “Boaty McBoatface”, then making headlines in the UK. If you recall, Boaty McBoatface was the winner, with over 12,000 votes, of the Natural Environment Research Council’s open-ended appeal to “the crowd” to suggest names for its new £200m polar research ship, and to vote on the suggestions. I asked if the episode had anything to tell us about where crowdsourcing had gone in its first ten years. Well, we had a good titter at poor old NERC’s expense (although in fairness I did point out that, in a way, it was wildly successful as a crowdsourcing exercise – surely global awareness of NERC’s essential work in climatology and polar research has never been higher).

In my talk I suggested the Boaty McBoatface episode was emblematic of crowdsourcing in the hyper-networked age of social media. The crowdsourcing of 2006 was based, yes, on networks, enabled by the emerging ubiquity of the World Wide Web, but it was a model where “producers” – companies with T-shirts to design (Howe’s example), astrophysicists with galaxy images to classify (the Zooniverse poster child of citizen science), or users of Amazon Mechanical Turk – put content online and entreated “the crowd” to do something with it. This is interactivity at a fairly basic level. But the 2016 level of web interactivity is a completely different ball game, and it is skewing attitudes to expertise and professionalism in unexpected and unsettling ways.

The relationship between citizen science (or academic crowdsourcing) and “The Wisdom of Crowds” has always been a nebulous one. The earlier iterations of Transcribe Bentham, for example, or Old Weather, are not so much exercises in crowd wisdom as in “crowd intelligence” – the execution of intelligent tasks that a computer could not undertake. These activities (and the numerous others I examined with Mark Hedges in our AHRC Crowd-Sourcing Scoping Survey four years ago) all involve intelligent decision making, even if it is simply an intelligent decision as to how a particular word in Bentham’s papers should be transcribed. The decisions are defined and, to differing degrees, constrained by the input and oversight of expert project members, which give context and structure to those intelligent decisions: a recent set of interviews we have conducted with crowdsourcing projects has stressed, in every case, the centrality of a co-productive relationship between professional project staff and non-professional project participants (“volunpeers”, to use the rather wonderful terminology of the Smithsonian Transcription Center).

However, events since April have put the relationship between “the crowd” and “the expert” on to the front pages on a fairly regular basis. Four months ago, the United Kingdom voted by the small but decisive margin of 51.9% to 48.1% to exit the European Union. The “Wisdom of [the] Crowd” in making this decision informed much of the debate in the run-up to the vote, with the merits of “crowd wisdom” versus “expert wisdom” being a key theme. Michael Gove, a politician who turned out to be too treacherous even for a Conservative party leadership election, famously declared that “Britain has had enough of experts”. It is a theme that has persisted since the vote, placing the qualification obtained from the act of representing “ordinary people” through election directly over, say, the economic expertise of the Governor of the Bank of England.

Is this fault line between the expert and the crowd real, a social division negotiated by successful academic crowdsourcing projects, or is it merely a conceit of divisive political rhetoric? Essentially, this is a question of who “produces” wisdom and who “consumes” it, and of the direction in which the cognitive processes that lead to decision making flow (and which way they should flow). This highlights the nebulous and inexact definition of “the crowd”. It worked pretty well ten years ago when Howe wrote his article, and translated easily enough into the “crowd intelligence” paradigm of the late 2000s and early academic crowdsourcing. In those earlier days of Web 2.0, it was still possible to make at least a scalar distinction between producers and consumers, between the crowd and the crowdsourcer (or the outsourcer and the organization outsourced to, to keep to his metaphor), even though the role of the user as a creator and a consumer of content was changing (2006 was, after all, also the year in which Facebook and Twitter launched). But how about today?

This is a question raised by a recent data analysis of Brexit by the Economist. In this survey of voters’ opinions, it emerges that over 80% of Leave voters stated that they had “more faith in the wisdom of ordinary people than the opinions of experts”. I find the wording of this question fascinating, if not a little loaded – after all, is it not reasonable to place one’s faith in any kind of “wisdom” rather than in an “opinion”? But the implicit connection between a generally held belief and (crowd) wisdom is antithetical to independent decision making, and independence is crucial to any argument that “crowd wisdom” leads to better decisions – such as leaving the EU. In his 2004 book, The Wisdom of Crowds: Why the Many Are Smarter Than The Few, James Surowiecki talks of “information cascades” being a threat to good crowd decisions. In information cascades, people rely on ungrounded opinions of others that have gone before: the more opinions, the more ongoing, self-replicating reinforcement. Surowiecki says:

Independence is important to intelligent decision making for two reasons. First, it keep (sic) the mistakes that people make from becoming correlated … [o]ne of the quickest ways to make people’s judgements systematically biased is to make them dependent on each other for information. Second, independent individuals are more likely to have new information rather than the same old data everyone is already familiar with.

According to the Economist’s data, the Brexit vote certainly has some of the characteristics of the information cascade as described by Surowiecki: many of those polled who voted that way did so at least in part because of their faith in the “wisdom of ordinary people”. This is the same self-replicating logic of the NERC boat-naming competition which led to Boaty McBoatface, and a product of the kind of closed-loop thinking which social media represents. Five years ago, the New Scientist reported a very similar phenomenon with different kinds of hashtags – depending on the kind of community involved, some (#TeaParty in their example) develop great traction among distinct groups of mutual followers, with individuals tweeting to one another, whereas others (#OccupyWallStreet in this case) attract much greater engagement from those not already engaged. It’s a pattern that comes up again and again, and surely Brexit is a harbinger of new ways in which democracy works.

It certainly embodies the information cascade, the very feature that Surowiecki would have us believe undermines the Wisdom of Crowds as a means of making “good” decisions. There may be those who say that to argue this is to argue against democracy, that there are no “good” or “bad” decisions, only “democratic” ones. That is completely true of course; and not for a moment here do I question the democratic validity of the Brexit decision itself. I also happen to believe that millions of Leave voters are decent, intelligent, honourable people who genuinely voted for what, in their considered opinion, was the best for the country. But since the Goves of the world made a point and a virtue of placing the Leave case in opposition to the “opinions of experts”, it becomes legitimate to ask questions about the cognitive processes which result from so doing. And the contrast of this divisive rhetoric with the constructive and collaborative relationships between experts and non-experts evident from academic crowdsourcing could not be greater.

But that in turn makes one ask how useful the label “expert” really is. What, in the rhetoric of Gove, Davies etc., actually consigns any individual person to this reviled category? Is it just anyone who works in a university or other professional organization? Who is and who is not an expert is a matter of circumstance and perspective, and it shifts and changes all the time. Those academic crowdsourcing projects understand that, which is why they have been so successful. If only politics could take the lesson.

 

Quantitative, Qualitative, Digital. Research Methods and DH

September 21, 2016

This summer, there was an extensive discussion on the Humanist mailing list about the form and nature of research methods in digital humanities. This matters, as it speaks in a fundamental way to a question whose very asking defines Digital Humanities as a discipline: when does the development and use of a tool become a method, or a methodology? The thoughts and responses this thread provoked are testament to the importance of this question. While this post does not aim to offer a complete digest of the thread, I wanted to highlight a couple of key points that emerged from it. One key theme arose in an exchange concerning the point, in any research activity which employs digital tools, at which human interpretation enters. Should this be the creation of tools, the design of those tools, the adding of metadata, the design of metadata, and so on? If one is creating a set of metadata records relating to a painting with reference to “Charles I” (so ran an example given by Dominic Oldman), the computer would not “understand” the meaning of any information provided by the user, and any subsequent online aggregation would be similarly knowledge-agnostic.

In other words, where should human knowledge in the Digital Humanities lie? In the tool, or in the data, or both?
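To make the distinction concrete, here is a minimal sketch of my own (not drawn from the Humanist thread) of the two positions in the Charles I example; the field names and the example.org authority URI are hypothetical.

```python
# A minimal sketch, under my own assumptions, of where the knowledge sits.
# Field names and the example.org authority URI are hypothetical.

# 1. Knowledge held only in the human reader's head: "Charles I" is just a
#    string, and an aggregator can do no more than match characters.
record_as_text = {
    "title": "Equestrian portrait",
    "description": "Said to depict Charles I on horseback.",
}

# 2. Knowledge pushed into the data: the subject is a reference to a shared
#    authority, so two datasets using the same URI can be aggregated as
#    statements about the same person, whatever their free text says.
record_as_linked_data = {
    "title": "Equestrian portrait",
    "depicts": {
        "label": "Charles I, King of England",
        "uri": "http://example.org/authority/charles_i",  # hypothetical URI
    },
}

def same_subject(a: dict, b: dict) -> bool:
    """Aggregation is only meaningful where the knowledge lives in the data."""
    return (
        "depicts" in a
        and "depicts" in b
        and a["depicts"]["uri"] == b["depicts"]["uri"]
    )
```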

Whatever the answer, the key aspect is the point at which a convention in the use of a particular tool becomes a method. In a posting to the thread on 25th July, Willard McCarty stated:

The divergence is over the tendency of ‘method’ to become something fixed. (Consider, for example, “I have a method for doing that.” Contrast “What if I try doing this?”).

“Fixedness” is essential, and it implies some form of critically-grounded consensus among those using the method in question. This is perhaps easier to see in the social sciences than it is in the [Digital] humanities. For example, how would a classicist, or an historian, or a literature scholar approaching manuscripts through the method of close reading present and describe that method in the appropriate section of a paper? How would this differ from, say, the equivalent section in a paper by a social scientist using grounded theory to approach a set of interviews? While there may be no differentiation in the rigour or quality of the research, one suspects the latter would have a far greater consensus – and body of methodological literature – to draw upon to describe grounded theory than the former would to describe close reading.

Many discussions on this subject still remain content-focused, although what “content” means has itself assumed a broader aspect. Whereas “content” in the DH may once have meant digitized texts, images and manuscripts, surely now it also includes web content such as tweets, transient social media, and blog posts such as this one. It is essential to continue to address the DH research life-cycle as based on content, but I would argue that we still need to tackle methodology (emphasis deliberate) explicitly, in both its definition and epistemology, and as defined by the presence of fixity, as noted by McCarty. “Methodological pluralism”, the key theme of the thread on Humanist this summer, is great, but for there to be pluralism, there must first be singularity. As noted, the social sciences have this in a very grounded way. I have always argued that the very terms “quantitative” and “qualitative” are understood, shared, written about and, ultimately, used in a much more systematic way in the social sciences than in the (digital) humanities, where they are often taken to express a simple distinction between “something that can be computed versus something that cannot”.

I am not saying this is not a useful distinction, but surely the Humanist thread shows that the DH should at least deepen the distinction to mean “something which can be understood by a computer versus something that cannot”.

I would like to pose three further questions on the topic:

1) how are “technological approaches” defined in DH – e.g. the use of a tool, the use of a suite of tools, the composite use of a generic set of digital applications?

2) what does a “technological approach” employing one or more tools enable us to do?

3) how is what we do with technology a) replicable and b) documentable?

Bolton Abbey, North Yorkshire

August 14, 2016


Pencil sketch, mostly harder pencils. August 2016.

Sourcing GIS data

March 29, 2016

Where does one get GIS data for teaching purposes? This is the sort of question one might ask on Twitter. However, while, like many, I have learned to overcome, or at least creatively ignore, the constraints of 140 characters, it can’t really be done for a question this broad, or with as many attendant sub-issues. That said, this post was finally edged into existence by a Twitter follow, from “Canadian GIS & Geomatics Resources” (@CanadianGIS). So many thanks to them for the unintended prod. The website linked from this account states:

I am sure that almost any geomatics professional would agree that a major part of any GIS are the data sets involved. The data can be in the form of vectors, rasters, aerial photography or statistical tabular data and most often the data component can be very costly or labor intensive.

Too true. And as the university term ends, reviewing the issue from the point of view of teaching seems apposite.

First, of course, students need to know what a shapefile actually is. Shapefiles are the building blocks of GIS, the datasets where individual map layers live. Points, lines, polygons: Cartesian geography is what makes the world go round – or at least the digital world, if we accept the oft-quoted statistic that 80% of all online material is in some way georeferenced. I have made various efforts to establish the veracity or otherwise of this statistic, and if anyone has any leads, I would be most grateful if you would share them with me by email or, better still, in the comments section here. Surely it can’t be any less than that now, with the emergence of mobile computing and the saturation of the 4G smartphone market. Anyway…
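For students meeting the format for the first time, a few lines of Python make the anatomy of a shapefile concrete. This is a minimal sketch assuming the geopandas library is installed; “boundaries.shp” is a hypothetical file.

```python
# A minimal sketch of inspecting a shapefile in Python, assuming the
# geopandas library is available; "boundaries.shp" is a hypothetical file.
import geopandas as gpd

# A "shapefile" is really a bundle of sidecar files (.shp geometry,
# .dbf attribute table, .shx index, usually a .prj projection) that
# together describe one map layer.
layer = gpd.read_file("boundaries.shp")

print(layer.crs)                 # the layer's coordinate reference system
print(layer.geom_type.unique())  # points, lines or polygons
print(layer.head())              # attribute table: one row per feature
```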

In my postgraduate course on digital mapping, part of a Digital Humanities MA programme, I have used the Ordnance Survey Open Data resources; Geofabrik, an on-demand batch download service for OpenStreetMap data; Web Feature Service data from Westminster City Council; and continental coastline data from the European Environment Agency. The first two in particular are useful, as they provide different perspectives – official national mapping versus open-source/crowdsourced geodata respectively. But given the expediency required of teaching a module, their main virtues are that they’re free, (fairly) reliable and malleable, and can be delivered straight to the student’s machine or classroom PC (infrastructure problems aside – but that’s a different matter) and loaded into a package such as QGIS. But I also use some shapefiles, specifically point files, that I created myself. Students should also be encouraged to consider where (and how) the data comes from. This seems to be the most important aspect of geospatial work within the Digital Humanities. This data is out there, it can be downloaded, but to understand what it actually *is*, what it actually means, you have to create it. That can mean writing Python scripts to extract toponyms, considering how place is represented in a text, or poring over Google Earth to identify latitude/longitude references for archaeological features.
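As a sketch of that last step, this is how one might turn a handful of latitude/longitude readings into the kind of point shapefile I use in class, again assuming geopandas (and shapely) are available; the toponyms and coordinates below are purely illustrative.

```python
# A sketch, under the assumptions above, of building a point shapefile from
# manually gathered coordinates. Names and coordinates are illustrative only.
import geopandas as gpd
from shapely.geometry import Point

toponyms = [
    {"name": "Example site A", "lat": 51.50, "lon": -0.12},
    {"name": "Example site B", "lat": 53.96, "lon": -1.08},
]

points = gpd.GeoDataFrame(
    toponyms,
    geometry=[Point(t["lon"], t["lat"]) for t in toponyms],
    crs="EPSG:4326",  # WGS84 latitude/longitude, as read off Google Earth
)

# Writes toponyms.shp (plus its sidecar files), ready to load as a point
# layer in QGIS.
points.to_file("toponyms.shp")
```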

This goes to the heart of what it means to create geodata, certainly in the Digital Humanities. Like the Ordnance Survey and Geofabrik offerings, much of the geodata around us on the internet arrives pre-packaged and with all its assumptions hidden from view. Agnieszka Leszczynski, whose excellent work on the distinction between quantitative and qualitative geography I have been re-reading as part of preparation for various forthcoming writings, calls this a ‘datalogical’ view of the world. Everything is abstracted as computable points, lines and polygons (or rasters). Such data is abstracted from the ‘infological’ view of the world, as understood by the humanities. As Leszczynski puts it: “The conceptual errors and semantic ambiguities of representation in the infological world propagate and assume materiality in the form of bits and bytes”[1]. It is this process of assumption that a good DH module on digital mapping must address.

In the course of this module I have also become aware of important intellectual gaps in this sort of provision. Nowhere, for example, in either the OS or Geofabrik datasets, is there information on British public Rights of Way (PROWs). I’m going to be needing this data later in the summer for my own research on the historical geography of corpse roads (more here in the future, I hope). But a bit of Googling turned up the following blog reply from OS at the time of the OS data release in April 2010:

I’ve done some more digging on ROW information. It is the IP of the Local Authorities and currently we have an agreement that allows us to to include it in OS Explorer and OS Landranger Maps. Copies of the ‘Definitive Map’ are passed to our Data Collection and Management team where any changes are put into our GIS system in a vector format. These changes get fed through to Cartographic Production who update the ROW information within our raster mapping. Digitising the changes in this way is actually something we’ve not been doing for very long so we don’t have a full coverage in vector format, but it seems the answer to your question is a bit of both! I hope that makes sense![2]

So… teaching GIS in the arcane backstreets of the (digital) spatial humanities still means seeing what is not there due to IP as well as what is.

[1] Leszczynski, Agnieszka. “Quantitative Limits to Qualitative Engagements: GIS, Its Critics, and the Philosophical Divide.” The Professional Geographer 61.3 (2009): 350-365.

[2] https://www.ordnancesurvey.co.uk/blog/2010/04/os-opendata-goes-live/

Question: (how) do we map disappeared places?

September 2, 2015

A while ago I asked Twitter if there was a name for a long period of inactivity on blogs or social media. Erik Champion came up with some nice suggestions

which raise questions about whether blogging represents either the presence or absence of ‘loafing’; and  replied with a certain elegant simplicity:

Anyway, having been either ‘living’ or ‘loafing’ a lot these last few months, this is my first post since February.

I want to ask another question, but 140 characters just won’t cut it for this one. How does one represent, in a gazetteer – or any other kind of database or GIS for that matter – a place which no longer exists? Take the example of ‘Mikro Kaimeni’, a tiny volcanic island in the Santorini archipelago mapped and published by Thomas Graves in his 1850 military survey of the Aegean:

[Image: Mikro Kaimeni on Graves’s 1850 survey]

Some sixteen years after this map was made, Santorini erupted and Mikro Kaimeni combined with the large central island, Nea Kameni:

[Image: the modern Santorini archipelago (OpenStreetMap)]

Can such places be hermeneutic objects by virtue of the fact that they are represented in the human record (in this case Graves’s map), even though they no longer exist as spatial footprints on the earth’s surface? I suppose they have to be. The same could go for fictional places (Middle Earth, Gotham City etc). What kind of representational issues does this create for mapping in the humanities more generally?

Digital Destinations: What to do with a digital MA

February 16, 2015

King’s Careers & Employability gathers statistics on graduate employment destinations for the Higher Education Statistics Agency (HESA). Such data is available for the Department of Digital Humanities’ cohorts for the three academic years between 2010/11 and 2012/13, that is to say graduates of the MA Digital Humanities, the MA Digital Asset and Media Management and the MA Digital Culture and Society in those years. This information, which includes the sectors and organizations that alumni enter, and their job titles, is gathered from telephone interviews and online surveys six months after graduation. Of those who graduated in 2012/13, 93.8% were in full-time work, with the remainder undertaking further study in some form; 38.4% of those approached did not reply, or refused to provide answers. A certain health warning must therefore be attached to the information currently available; and in the last couple of years the numbers on all three programmes have grown considerably, so the sample size is small compared to the numbers of students currently taking the degrees. But in surveying the data that we do have, it is possible to make some preliminary observations.

Firstly, the good news is that all of our graduates from 2012/13 who responded to the survey were in employment, or undertaking further studies, within those six months. In the whole three-year period, MA DAMM graduates entered the digital asset management profession via corporations including EMAP, and the university library sector (Goldsmiths College). They also entered managerial roles at large organizations including Coca-Cola and the Wellcome Trust. Digital media organizations feature strongly in MA DCS students’ destinations, with employers including NBC, Saatchi and Saatchi and LexisNexis UK, and roles including design, social media strategy and technical journalism. Librarianship is also represented, with one student becoming an Assistant Librarian at a very high-profile university library. Others appear to have gone straight into quite senior roles. These include a Director of Marketing, PR and Investments at an international educational organization, a Senior Strategy Analyst at a major international media group, and a Senior Project Manager at a London e-consultancy firm. One nascent trend that can be detected is that graduates of MA DH seem more likely to stay in the research sector. Several HE institutions feature in MA DH destinations, including Queen Mary, the University of Oslo, Valencia University, the Open University and the University of London, as well as King’s itself; although graduates entering these organizations are doing so in technical and practical roles, such as analysts and e-learning professionals, rather than as higher degree research students. A US Office of the State Archaeologist, Waterstones and Oxford University Press also feature, reflecting (perhaps) MA DH’s strengths in publishing and research communication. Many of the roles which MA DH graduates enter are specialized, for example Data Engineer, Conservator and Search Engine Evaluator, although more junior managerial jobs also figure.

As noted, the figures on which these observations are based must be treated with some caution; and doubtless as data for 2013/14 and beyond become available, clearer trends will emerge across the three MA programmes. Currently, there is a range of destinations to which our graduates go, spanning the private and research sectors, and there is much overlap in the types of organizations for which graduates from all three programmes work and the roles they obtain. However, two broad conclusions can be drawn. Firstly, all three programmes offer a range of skills, based on a critical understanding of digital theory and practice, which can be transferred to multiple kinds of organization and role. Secondly, our record on full employment shows that there is growing demand for these skills, and that those skills are becoming increasingly essential to both the commercial and research sectors.

Of Historic Units and Cypriot heritage

December 15, 2014

The team behind the Heritage Gazetteer of Cyprus were in Nicosia last week, presenting a near-final form of the project to an audience of experts in Cypriot history and archaeology. The resource which the A. G. Leventis Foundation has tasked the project to produce is very nearly complete, and will be launched to the world in January 2015.

The HGC has always been about the names of places, and how these names change over time. As I have blogged previously, and as we outlined in our presentation to the International Cartographic Association’s Digital Approaches workshop in Budapest in September, this name-driven approach, based on three layers of data – modern toponyms, ‘Historical Units’ and ‘Archaeological Entities’ – represents the limits of the current project. However, what it cannot do raises important intellectual questions about how digital representations of place are organized and presented online. The aim of this post is to capture some of these questions, particularly with regard to our ‘Historic Unit’ data layer.

To recap the definitions: a modern toponym is, quite simply, the official name currently in use, and the only data sources for this are official ones – currently in the form of the Complete Gazetteer of Cyprus (Konstantinides and Christodoulou 1987). In our presentation last Thursday, we reiterated our definition of an HU as:

“Entities of substantial geographical extent and significance, such as towns, archaeological sites and the extents of kingdoms”

And AEs as:

“A discrete feature, with a distinct spatial footprint, formed in a definable period”

This definition of an AE is relatively clear. Most importantly, the reference to a ‘distinct spatial footprint’ means that it is related to a mappable feature, which is extrinsic to any definition in the HGC data structure. However, a colleague at the meeting expressed the general view when he described HUs as being “the most interesting aspect, but also the most problematic”. Currently, they are defined on the map as a polygon, drawn by the user when they create the HU record. In some cases, a polygon can be defined relatively straightforwardly. For example, the Venetian walls of Nicosia form a discrete spatial footprint that can be traced using the HGC’s geocoding tool. But in most cases, this requires a subjective judgement, and thus a subjective representation, that is arguably inimical to the positivist interpretation which any robust database requires; and it imposes exactly the kind of Cartesian absolutism that I and others have railed against in several recent and forthcoming publications. Further, creating an HU in this way can lead to our grouping data points that are actually very different in nature. Sometimes an HU will equate to a modern toponym (such as Marchello, in Nea Paphos), and sometimes it will not. The Hellenistic kingdom of Paphos is another example of a composite, whereas medieval/Venetian Nicosia is a defined location. As another colleague commented on the project recently:

“The concept of Historical Units is something that I think needs some additional definition. I understand that other standards have the same fuzzy things, but poorly defined things add increasing difficulty as the dataset grows. My reading … is that you mean geographical feature which is uncomfortably close to Archaeological Entity. Have you considered just calling them ‘historical features’ and having a containment / recursive relationship?”

As we take the HGC forward, therefore, we propose to modify this artificial footprinting mechanism, so that an HU is instead represented by a set of thematically, but not necessarily geographically, conjoined AEs.


Nea Paphos – an example of an HU in the HGC

This speaks to a much more fundamental problem of how archaeological data – bearing in mind that the HGC is about names, and is thus more a creature of history than of archaeology – is recorded. Much of the rhetoric around the semantic web in the discipline, at least in its earlier phases, focused on the need to link archaeological datasets ‘organically’, where data produced by one site can be linked and contextualized with that from another, without the excavation teams of either having to adopt a priori methods or procedures for data production. This may hold true to an extent, but our experience with HUs especially shows that when combining such data, some kind of a posteriori aggregation process must be undergone. Otherwise, quite simply, one adds little to the data by linking it. In our current model, this aggregation is imposed by the user, but the subjectivity this introduces is fraught with difficulties. Our next steps, therefore, will be to develop our categorising and attributing capabilities for AEs, and to begin aggregating them into HUs on that basis. HUs will thus be grown from the ground up in a way that is guided by the HGC data structure, rather than imposed from the user downwards.
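By way of illustration, here is a minimal sketch of the revised model under my own simplifying assumptions: the AE carries the spatial footprint and period, while the HU becomes a thematic grouping whose extent is derived from its members rather than drawn by hand. The class and field names are illustrative only, not the HGC schema itself.

```python
# A sketch of the proposed model, not the HGC implementation: an
# Archaeological Entity keeps its own footprint and period, while a Historic
# Unit is a thematic aggregation of AEs with no hand-drawn polygon of its own.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ArchaeologicalEntity:
    uri: str                              # each AE has its own identifier
    name: str
    period: str                           # the definable period of formation
    footprint: List[Tuple[float, float]]  # polygon vertices (lon, lat)

@dataclass
class HistoricUnit:
    uri: str
    name: str
    theme: str                            # what conjoins the AEs thematically
    members: List[ArchaeologicalEntity] = field(default_factory=list)

    def extent(self) -> List[Tuple[float, float]]:
        # The HU's footprint is derived from its members, not drawn by the user.
        return [pt for ae in self.members for pt in ae.footprint]
```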

On to prehistory: the Ancient Places of Cyprus

October 19, 2014


I currently have the great good fortune to be a visiting scholar at Stanford University’s Center for Electronic Spatial and Textual Analysis (CESTA). I have two aims while here: working on the monograph (on spatial narratives – more blogging on this to follow) upon which my term’s research leave from King’s is contingent, and developing a new project, recently christened “The Archaeology of Place in Ancient Cyprus”. This follows the A G Leventis Heritage Gazetteer of Cyprus, which I have blogged about previously; but rather than being concerned with historical names, which are written down and attested, this phase is focusing on the prehistoric period – where, of course, there are no written records, and thus no known place-names (and no documented attestations of their spellings, forms etc). In the Archaeology of Place, we are creating a seed dataset which seeks to represent Cypriot archaeology of the prehistoric period, before any contemporary place-names are documented. This involves a multistage process of critical quantification: starting with published material on prehistoric sites and features, we are examining how these can be defined in objective (and computable) terms, and how different units of archaeology can be represented at different scales. This will lead to a broader examination of the ‘toponymic spaces’ of prehistoric features: how do the areas they occupy on the Earth’s surface relate to more recent place-name structures? And what strategies can we use to grow this dataset in the future, beyond the corpus of material currently available in print?

Critical quantification is key to this project.  I have real problems with the way the words ‘quantitative’ and ‘qualitative’ are often used in the Digital Humanities. They – as far as I can see – are terms that have evolved over a long period of time in the social sciences, where they have a well-understood meaning and a solid methodological grounding. In the DH, they are frequently used as catchy labels for ‘things which either can or cannot be machine-read respectively’. This is undoubtedly not helpful, given the great complexity and diversity of ‘humanities data’ – a term which, itself, is surely too broad to be all that useful.

So we are beginning with what can definitely be quantified. When I. A. Todd et al. define a ‘tomb’ in the cemetery at Kalavassos, for example, we can treat this as a discrete piece of information, much as we are treating an attested name as a discrete piece of information in the HGC (with its own URI, and the possibility of other URIs for “smaller” pieces of information, such as finds, with which it has a container relationship). But in the future we will consider what other attributes could be added to each of these: relationships with modern features which might not have been documented at the time; online images and pieces of related data from the geoweb; even social media elements. This will open up the possibility for more in-depth experimentation using GIS – for example investigating least-cost pathways between sites in the northern Vasilikos Valley and points of known importance on the south coast, and how the finds, features and pits of those sites might be used to enrich that analysis. We will also undertake a broader consideration of what this exercise tells us about the epistemology of archaeology and its quantitative aspects. While it makes perfect sense for quantification to follow the ‘objectivity’ of the material involved – beginning with physical objects, with clearly defined sites, and obvious statements that can be made about their attributes – we are interested in where the affordances of the digital environment of a database might take us in terms of contextualising them with purely digital objects; and how this might help us mediate spatial narratives of Cyprus’s distant past.
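As a sketch only – the URIs, attribute names and coordinates below are hypothetical, not the project’s actual schema – a quantified record of this kind, with its container relationship to finds, might look something like this:

```python
# A hypothetical sketch of a discrete, quantifiable record for a published
# feature, with a container relationship to "smaller" pieces of information
# such as finds. URIs, field names and coordinates are illustrative only.
tomb = {
    "uri": "http://example.org/feature/kalavassos-tomb-01",  # hypothetical
    "type": "tomb",
    "source": "Todd et al., published report on the Kalavassos cemetery",
    "location": {"lat": 34.76, "lon": 33.30},  # illustrative coordinates
    "contains": [
        {"uri": "http://example.org/find/kalavassos-tomb-01/f1", "type": "find"},
    ],
    # Attributes we might add later: links to modern features not documented
    # at the time, online images, geoweb data, even social media elements.
    "related": [],
}
```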

(Not quite) moving mountains: recording volcanic landscapes in digital gazetteers

August 8, 2014

Digital gazetteers have been immensely successful as a means of linking and describing places online. GeoNames, for example, now contains 10,000,000 geographical names corresponding to over 7,500,000 unique features. However, as we will be outlining at the ICA Digital Technologies in Cartographic Heritage workshop next month in relation to the Heritage Gazetteer of Cyprus project, one assumption which often underlies them is fixity: an assumption that a name and a place and, by extension, its location on the Earth’s surface are immutably linked. This allows gazetteers to be treated as authorities. For example, a gazetteer with descriptions fixed to locations can be used to associate postal codes with a layer of contemporary environmental data and describe relationships between them; or to create a look-up list for the provision of services. It can also be very valuable for research: where a digital edition of a text has mentions of places that are contained in a parallel gazetteer, these can be used to provide citations and external authorities for those places, and also for other references in other texts.

However, physical geography changes. In the Aegean, where the African tectonic plate is subducting beneath the Eurasian plate to the north, the South Aegean Volcanic Arc has been created: a band of active and dormant volcanic islands including Aegina, Methana, Milos, Santorini, Kolumbo, and Kos, Nisyros and Yali. Each of these locations has a fixed modern aspect, and can be related to a record in a digital gazetteer. However, these islands have changed over the years as a result of historical volcanism, and representing that history adequately requires flexibility from a digital gazetteer.


The island of Thera. The volcanic dome of Mt. Profitis Elias is shown viewed from the north.

I recently helped refine the entry in the Pleiades gazetteer for the Santorini Archipelago. Pleiades assigns a URI to each location, and allows information to be associated with that location via the URIs. Santorini provides a case study of how multiple Pleiades URIs, associated with different time periods, can trace the history of the archipelago’s volcanism. The five present-day islands frame two ancient calderas, the larger formed more recently in the great Late Bronze Age eruption, and the other formed very much earlier in the region’s history. Originally, it is most likely that a single island was present, which fragmented over the millennia in response to the eruptions. Working backwards, therefore, we begin with a URI for the islands as a whole: http://pleiades.stoa.org/places/808255902. This covers the entire entity of the ‘Santorini Archipelago’. We associate with it all the names that have pertained to the island group through history – Καλλιστη (Calliste; 550 BC – 330 BC), Hiera (550 BC – 330 BC) and Στρογγύλη (Strongyle; AD 1918 – AD 2000) – as well as the modern designation ‘Santorini Archipelago’ itself. These four names have been used, at different times, as either a collective term for all the islands or, in the case of Strongyle, for the (geologically) original single island. This URI-labelled entity has lower-level connections with the islands that were formed during the periods of historic volcanism: Therasia, Thera, Nea Kameni, Mikro Kameni, Palea Kameni, Caimeni and Aspronisi. Each, in turn, has its own URI.


The Santorini Archipelago in Pleiades

Mikro Kameni and Caimeni are interesting cases, as they no longer exist on the modern map. Mikro Kameni is attested by the naval survey of Thomas Graves of HMS Volage (1851), and Caimeni by Thomaso Porcacchi in 1620. Both formed elements of what are now the Kameni islands, but because they have these distinct historical attestations, they are assigned URIs, with the time periods when they were known to exist according to the sources, even though they do not exist today.
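In outline – and this is a simplified sketch of my own, not the Pleiades data model; only the top-level URI is taken from the record itself – the arrangement looks something like this:

```python
# A simplified, hypothetical sketch of the archipelago-level record: names
# carry the time spans in which they are attested, and connected places
# include islands which no longer exist on the modern map.
santorini_archipelago = {
    "uri": "http://pleiades.stoa.org/places/808255902",
    "names": [
        {"name": "Calliste",              "attested": "550 BC - 330 BC"},
        {"name": "Hiera",                 "attested": "550 BC - 330 BC"},
        {"name": "Strongyle",             "attested": "AD 1918 - AD 2000"},
        {"name": "Santorini Archipelago", "attested": "modern"},
    ],
    "connected_places": [
        {"name": "Thera",        "exists_today": True},
        {"name": "Therasia",     "exists_today": True},
        {"name": "Nea Kameni",   "exists_today": True},
        {"name": "Palea Kameni", "exists_today": True},
        {"name": "Aspronisi",    "exists_today": True},
        # Attested historically, absent from the modern map, but still
        # assigned their own URIs with the periods in which they existed:
        {"name": "Mikro Kameni", "exists_today": False, "attested": "Graves survey, 1851"},
        {"name": "Caimeni",      "exists_today": False, "attested": "Porcacchi, 1620"},
    ],
}
```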

This speaks to a wider issue of digital gazetteers and their role in the understanding of past landscapes. With the authority they confer on place-names, gazetteers might, if developed without reference to the nuances of landscape change over time, risk implicitly enforcing the view, no longer widely accepted, that places are immutable holders of history and historic events; where, in the words of Tilley in A Phenomenology of Landscape: Places, Paths and Monuments (1994), ‘space is directly equivalent to, and separate from time, the second primary abstracted scale according to which societal change could be documented and “measured”’ (p. 9). The evolution of islands due to volcanism shows clearly the need for a critical framework that avoids such approaches to historical and archaeological spaces.

Image biographies?

May 11, 2014

Last week, thanks to my Fellowship of the Software Sustainability Institute, I attended Electronic Visualization and the Arts in Florence. This fascinating and wide-ranging conference brought together a tremendous range of people and ideas: strategy, application, theory. A striking take on crowd-sourcing appeared in the form of a project of the ETH-Bibliothek in Zurich, which is cataloguing a massive image archive of daily working life at Swissair using the knowledge of Swissair retirees.


Another issue that came up was the challenge of archiving a digital image for the very long term – 150 years or more. Today we still have images taken in the 1920s and 1930s; can digital imaging deliver similar longevity? The short answer is almost certainly not. One strategy discussed, by Graham Diprose and Mike Seabourne, is to archive digital artwork and photography by printing it on specially prepared paper; an approach they describe as a ‘technology proof form of insurance’. I think this raises important issues about how images, digital or otherwise, are dealt with as ‘objects’. This topic has a certain hinterland in the domain of cultural heritage, as the concept of the ‘object biography’ has been discussed since at least 1999, when Chris Gosden and Yvonne Marshall wrote that ‘as people and objects gather time, movement and change, they are constantly transformed, and these transformations of person and object are tied up with each other’ (‘The Cultural Biography of Objects’, World Archaeology, Volume 31 No. 2 [October 1999]: 169-178). The key difference with images – a subset of objects that has only existed for a little over one hundred years – is that subject, significance and material can be separated. What an image depicts is separate from the materiality of the photograph itself, and from the cloud of numbers that make up a digital image. Such considerations encourage us to think about what the ‘biography’ of an image might look like. And this is important. The Gosden-Marshall model of the object biography has gained currency in a number of major museums, including the Pitt Rivers. The implication of the image biography is that meaning and material can only be preserved side by side if the relationships between the two are also preserved. This will include methods for preserving metadata (a point made in the session by Nick Lambert); but it also accords, I think, with the broader intellectual direction of where visualization is going.


Florence, Piazza degli Ottaviani. The EVA venue is on the left.

To explain this: this year, in London, EVA International will celebrate its 25th anniversary. In the last twenty-five years, much digital visualization has been contingent on the presentation of 3D material on the 2D screen. My sense from the last three or so EVA Londons, confirmed by EVA Florence, is that in the next twenty-five years, visualization will involve the 2D screen less and less. A greater proportion of demos at EVA London now involve objects, not screen-based presentations. Carol MacGillivray, Bruno Mathez and Frederic Fol Leymarie’s Diasynchronoscope project in 2013 and Gary Priestnall, Jeremy Gardiner, Jake Durrant & James Goulding’s Projection Augmented Relief Models in 2012 are particularly striking examples, but there are many more. Preserving these visualizations, including more conventional digital images, will require an integrated cloud of thinking on software sustainability, and its relationship to curation practice, digital augmentation and policy. I look forward to seeing more illustrations of this in London in July.

I was also happy to participate in a network meeting of EVA International on the third day. The meeting ended with a small ceremony to inaugurate a plaque in the newly refurbished room to commemorate the event (see photo – this is a facsimile, pending the real thing being engraved).