Friday, 3 February 2017

Research Infrastructure and the Future of National Libraries



In my role as a member of the British Library Advisory Council, I was recently asked to present a few thoughts on how research infrastructure might change in response to the evolving demands of academics.  This post records my notes for that discussion.  It does not record what I said to the committee on the day, and certainly does not imply that the British Library in any way endorses or subscribes to my views; but it does reflect what I believe is the necessary direction of travel in the provision of resources for academic research.


I have been asked to speak for a few minutes about developments in academic research and the implications these might have for the British Library; and I want to start with a quick appreciation of where we have come from.

It is important to remember that we are sitting at the centre of what was a 200-year project to create a comprehensive – divided, but universal – infrastructure for research and knowledge creation.  All you have to do is walk down Museum Row – from the Science Museum, to the Natural History Museum, to the V&A, each with research staff to rival their Higher Education equivalents – to remember that we inherited a powerful, cross-disciplinary research infrastructure.  Or look back to the old round reading room of the British Library, with its 400-odd volumes of an ever-changing manuscript catalogue seeking to encompass all of human knowledge.  Whatever your field, whatever your methodology, the nineteenth and early twentieth centuries created an infrastructure in stone and brick.
 
In the last fifty years much of this has either been transformed, or else become increasingly redundant – catering for an ever-shrinking body of old-school scholars – while much of the effective infrastructure that underpins research has moved elsewhere.  Arguably the 'Science, Technology, Engineering and Medicine' (STEM) subjects, with their greater resources, have led the way in creating endless new data stores and distributed infrastructural kit.  And while buildings – like the Crick Institute, or CERN – represent a fragment of a constant, ongoing rebuilding of intellectual infrastructure, they are just the tip of a much larger transformation that has taken a new form.

Through repositories like CERN's Zenodo; through the Human Genome Project (with its, at the time, seemingly huge demand for data storage); through Gold Open Access science journals (built on commercial publishing models, and incorporating their own data stores); through GitHub and a collaborative, project-based approach to research, STEM has created a new distributed knowledge infrastructure – because the older one failed first for their disciplines.

In the process STEM has largely side-stepped the brick and stone infrastructure along the way – in particular the British Library.  You will not find a physicist or an astronomer in any of the Library's reading rooms.  In other words, the hardest end of STEM seems to me to have cracked substantial elements of this conundrum, and left the other two-thirds of the research landscape – from the softer end of STEM, to social science, business and economics, and the humanities – largely eating dust, and reliant on an increasingly creaky twentieth-century infrastructure.

So, in the first instance, it seems to me that we are challenged to rethink 'research' data and publication as a new form of infrastructure.  The Library – or somewhere – needs to create a context in which notes, files, and data stores of all kinds can be shared and curated in digital form.  And this data, or data store, needs in turn to be tied directly to the public commentary – or publication – built upon it.
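To make that tie concrete, here is a minimal sketch in Python of what such a linked record might look like; the field names and DOI-style identifiers are hypothetical illustrations, not a proposal for any existing British Library schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetRecord:
    """A curated, citable deposit of research materials (hypothetical schema)."""
    identifier: str     # e.g. a DOI minted for the deposit (invented here)
    title: str
    formats: List[str]  # the data types held: notes, images, tables...
    version: str = "1.0"

@dataclass
class PublicationRecord:
    """Public commentary explicitly built on deposited data."""
    identifier: str     # e.g. the article's own DOI (invented here)
    title: str
    built_on: List[str] = field(default_factory=list)  # identifiers of underlying datasets

# The publication is tied, machine-readably, to the data beneath it.
notes = DatasetRecord("10.0000/example-data", "Field notes and images", ["text", "image"])
article = PublicationRecord("10.0000/example-article", "Commentary built on the notes",
                            built_on=[notes.identifier])
print(article.built_on)  # -> ['10.0000/example-data']
```

The point of the sketch is simply that the link runs vertically: a reader (or a machine) holding the publication can always walk back down to the curated data beneath it.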

But in the process there is also something more subtle going on.  While STEM has led in a particular direction, it has brought with it a particular style of research organisation, which again changes the nature of the infrastructure required.  All you have to do is look at the evolution of Research Councils UK – from its shared services centre, to an ever-growing emphasis on inter-disciplinary funding, on large team projects, and on the training of Early Career Researchers to be 'leaders' (by which it means project heads) – to see a direction of travel towards large, 'laboratory-ish' groups, fronted by media-friendly 'interpreters'.  And of course, this is all combined with a precipitate concentration of research funding on an ever smaller number of ever more self-congratulatory institutions.

In other words, it seems to me that national research culture – and, in a more chaotic way, international infrastructure as well – is faced with a twofold change.  First, there is a fundamental transformation in the most significant core of the research 'infrastructure', from bricks and mortar to the online: to immediately accessible data, with the tools to use and 'publish' it.

And second, we are faced with a gradual, forced move towards larger and more 'laboratory-like' forms of research, in which collaboration – both virtual and face-to-face – is increasingly normal.

By way of a caveat, however, we are also faced with a multi-generational lag, in which every variety of lone and independent scholar will want the same old, same old to be available, regardless of the cost.

All of which leads me to believe that the evolving nature of research – mainly research based in Higher Education – needs urgent attention in the following areas.


  • The shared curation and storage of data and research materials – building on STEM models, but made friendlier to different data types.
  • We also need the tools to work with that data – and training that supports their use.
  • We need to explore different validation and authorisation models for ‘publication’.  At the moment we are allowing a multi-billion pound business to be built on national expenditure, and we need to reclaim elements of this through new models of peer review and distribution.  These in turn need to be tied to data – vertically integrated from data to experiment to commentary – and made more amenable to collaboration, with a traceable development path through all of it.
  • We also need a clearer commitment to non-HE researchers.  We need to acknowledge that HE – as gatekeeper of research authority - forms part of the problem.  And we need to keep a weather eye on the boundaries around who can research.  The BL certainly needs to create an infrastructure for HE research, but it needs to be an infrastructure that is open to everyone.

In other words, and as usual, the British Library needs to remember that it is a national machine for research and learning, committed to access to all knowledge, for everyone who needs it; and to use these first principles to navigate a remarkably complex and rapidly changing landscape.

We also need to remember, as William Gibson said: ‘The future is already here – it’s just not evenly distributed’.

Friday, 20 January 2017

Humanities²

What follows is the lightly revised text of a 'Provocation' I presented at the launch event held to celebrate the creation of the Digital Humanities Institute from the Humanities Research Institute.  Held in Sheffield on the 17th of January 2017, the event perhaps required more celebration than critique (and the DHI deserves to be celebrated).  But I hope that the main point – that we need to move beyond the textual humanities to something more ambitious – comes through in what follows.


I have been working with what has until now been the Humanities Research Institute for almost twenty years.  I have watched it grow with each project, and engage with each new twist and turn in the remarkable story of the evolution of the digital humanities in the UK since the 1990s.  It has been a real privilege.


Of all the centres created in the UK in that time, the HRI has been the most successful and most influential.  And it has been successful because, more than any other equivalent, it has created a sustainable model of online publishing of complex inherited materials – and done so in delicate balance with an ongoing exploration of the new things that can be done with each new technology, and in balance again with a recognition of the new problems the online presents.


I frequently claim that the UK is at the forefront of the digital humanities – not necessarily because the UK has been at the bleeding edge of technical innovation, or because its academics have won many of the intemperate arguments that pre-occupy critical theory.  Instead, it is at the forefront of worldwide developments because, following the HRI, the UK figured out early that the inexorable move to the online demanded both a clarity of purpose and a constant, ongoing commitment to sustainable publication.  The HRI, and now the DHI, represent that clear and unambiguous commitment to putting high quality materials online in an academically credible form; and an equally unambiguous commitment to measured innovation in search and retrieval, representation, and analysis.


But, while it is a moment to look back on a remarkable achievement, it is also a moment to grasp the nettle of change.  This re-foundation is a clear marker of that necessity and reflects a recognition both that the Humanities as a whole are on the move, and that the roles the DHI might play in that process are themselves changing.

For me, this sets a fundamental challenge.  And where I tend to start is with that label ‘The Humanities’.  This category of knowing has never sat very comfortably with me.  It has always seemed a rather absurd, portmanteau import from the land of Trump – a kind of Trumpery – used to give a sense of identity to the thousand small private universities that pock-mark the US, and to a collection of ill-assorted sub-disciplines brought together primarily in defence of their funding.  And it goes without saying, ‘The Humanities’ are always in crisis.
 
But, if you asked me to define the ‘humanities’ part of that equally awkward phrase – the Digital Humanities – it has to encompass that process through which a society learns about itself; where it re-affirms its collective identity and values; where the past and the present work in dialogue.  And whether that is via history, or literature, philosophy or politics, or the cultural components of geography and sociology – the ‘Humanities’ is where a community is first created and then constantly redefined in argument with itself, and with its past. 

For all the addition of the ‘digital’ to the equation, that underlying purpose remains, and remains uniquely significant to a working civil society. 


But up until now, that conversation – that dialogue between the past and the present – has pre-eminently taken the form of text: the texts of history books and novels; long analytical articles and essays; aphorisms, poems and manifestos.  And even when you add the ‘digital’ to create the ‘Digital Humanities’, the dominance of ‘text’ remains constant.  Indeed, if you look at the projects undertaken by the HRI over the last two decades, the vast majority have been about text, and the re-representation of inherited text in a new digital format.  You can, of course, point to mapping projects, and to 3D modelling of historic buildings, but the core work of the ‘digital humanities’ to date has been taking inherited text, and making it newly available for search and analysis as a single encoded stream of data.


This is a fantastic thing – the digital humanities have given us new access to old text, and created several new forms of ‘reading’ along the way – distant, close, and everywhere in between.  It has, arguably, created a newly democratic world of knowledge, in which roughly half of all humans – some 3.5 billion people – have access to the web and all the knowledge contained therein.  That small-minded world many of us grew up in – of encyclopaedia salesmen peddling access to a world of information most of us were otherwise excluded from by class and race and gender – is simply gone.  This is a very good thing.


But while the first twenty years of the web formed a place where the stuff of the post-Enlightenment dead needed to find a home, our hard work recreating this body of material also means that we have spent those twenty years swimming against the tide of the ‘humanities’ as a set of contemporary practices.  We have reproduced an old-school library, but online – with better finding aids and notetaking facilities – and we have made it more democratic and hyper-available, for all the paywalls in the world.  But at the same time, we have also allowed ourselves to limit that project to a ‘textual’ humanities, when the civic and civil conversation that the ‘humanities’ must represent has itself moved from text to sound, and from sound to image.  There is a sense in which we are desperately trying to represent a community – a conversation – made up of an ever-changing collection of voices in an ever-changing series of formats, but trying to do so via that single encoded stream of knowing: text.


This is where the greatest danger and the greatest opportunity for the ‘digital humanities’ lies – because if you look at ‘data’ in its most abstract forms, this equation between knowing and text is breaking down, and is certainly changing at a dramatic pace.


The greatest technological developments shaping the cultures of the twentieth century focussed on creating alternatives to text.  Whether you look to sound and voice, via radio and recording, or image and movement, via film and television, the first half of the twentieth century created a series of new forms of aural and visual engagement that gave sound and image the same universal reach that print had provided for the preceding four hundred years.  The second half of the twentieth century, and the first decade of the twenty-first, were equally taken up with putting sound and image into our everyday lives – jostling for attention with, and pushing aside, text.


It is perhaps difficult to remember that the car radio only became commonplace in the 1950s, and that the transistor radio that made mobile music possible – on the beach and on the street – was a product of the same decade.  Instant photography and moving images were similarly only given freedom to go walkabout in the 1970s and 1980s, with luggable televisions and backbreaking video cameras.


This trajectory of change – an ever greater focus on the non-textual – has simply increased in pace with the advent of the smartphone and the tablet.  While, at the margins, the Kindle may have changed how we read Fifty Shades of Grey on public transport, it was the Walkman, the iPod, and the smartphone that most fundamentally changed how we spend our time – what kinds of signals we are interpreting from minute to minute.  The most powerful developments of the last decade have involved maps and images – from Google Earth to Flickr and Pinterest.


Ironically, while the book and the journal article have remained stubbornly the same – even in their digital forms – and while much of the effort of the ‘digital humanities’ has been directed towards capturing a technology of text that had largely been invented by 1600, and has remained essentially unchanged since, the content of our culture has been radically transformed by the creation of unlimited sound and image.

If you want proof of this, all you need do is reflect on the triggers of your imagination when contemplating the 1960s or 1980s – or the 2000s or 2010s.  We have become a world of sound and image.

Half the time we now narrate the past through discographies of popular music; and most of what we know about the past is delivered via image-rich documentaries and historical dramatisations, wholly dependent on film archives for their power and claim to authenticity.  Our conversation – that dialogue with the dead that forms the core of the humanities – has become increasingly multi-modal, and multi-valent.  A simple measure of this is that the percentage of text on the web has been declining steadily since the mid-2000s.  According to Anthony Cocciolo, text currently represents only some 27% of web content.


Over the last two decades the Digital Humanities has crafted a technology for the representation of text; but we now need to pay more attention to all that other data – the non-textual materials that increasingly comprise our cultural legacy, and the content of our humanities conversation.


And the digital humanities have a genuine opportunity to create something exponentially more powerful than the textual humanities.  What the digital side of all this allows is the removal of the barriers between sound and image and text – between novel, song and oil painting.  Each of these is no more than just another variety of signal – of encoding – now, in the digital, divided one from the other by nothing more substantial than a different file format.
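A minimal sketch in Python (with NumPy, and entirely invented toy data) makes the point: once digitised, a sentence, a tone and an image are all just arrays of numbers, distinguished by nothing more than their encoding conventions.

```python
import numpy as np

# Text: a sentence encoded as a stream of byte values.
text = "The rain falls on every head"
text_signal = np.frombuffer(text.encode("utf-8"), dtype=np.uint8)

# Sound: one second of an A440 tone, sampled 44,100 times.
t = np.linspace(0, 1, 44_100, endpoint=False)
sound_signal = np.sin(2 * np.pi * 440 * t)

# Image: a 64x64 RGB picture, half black, half white.
image_signal = np.zeros((64, 64, 3), dtype=np.uint8)
image_signal[:, 32:, :] = 255

# All three are just arrays - different shapes, different conventions, one medium.
for name, signal in [("text", text_signal), ("sound", sound_signal), ("image", image_signal)]:
    print(f"{name:5s} shape={signal.shape} dtype={signal.dtype}")
```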


If we can multiply sound by text – give each encoded word a further aural inflection, and each sound a textual representation of its meaning to the listener – we make the humanities stronger and more powerful.  By bringing text and image together, we create something that allows new forms of analysis, new layers of complexity, and new doubts and claims to be heard among the whispering voices of that humanities conversation.  In part, this is a simple recognition that the physical heft of a book changes how you read it; and that doing so on a crowded tube train is different from reading even the very same physical book on a sunny beach.


Much has already been done to bring all these signals onto the same screen – to map texts, and to add image to commentary – but there is an opportunity to go much further, and to acknowledge, in a methodologically consistent way, that we can use sound and image, place, space and all the recoverable contexts of human experience to generate a more powerful, empathetic understanding of the past; to have a fuller, more compelling conversation with the dead.  To my mind, we need new methodologies that allow us to analyse and deconstruct multiple signals, multiple data streams – sound multiplied by text, by image, by space.  We need to recreate the humanities by multiplying its various strands, one against the other, to create something more powerful, more challenging, and more compelling.  Perhaps, the Humanities³.
 
 

Wednesday, 6 July 2016

The Digital Humanities in Three Dimensions



This post is adapted from a talk I gave to the annual conference of the Australasian Association of Digital Humanities in Hobart on 21st June 2016.  It was a great conference with some great papers, leading to some great discussions.  This particular talk generated perhaps more heat than light, but the main point seems important to me.



The Digital Humanities is a funny beast.  I tend to think of it as something of a pantomime horse – with criticism, distant reading and literary theory occupying the front end – all neighing and foot stamping at the MLA each year (and in the Los Angeles Review of Books) - while history, geography and library science are stuck in the rear – doing the hard work of creating new digital resources, and testing new tools.  Firmly at the back end of this arrangement, I spend much of my time hoping that the angry debates in the front don’t result in too many ructions behind.

But as a result of this weird portmanteau existence, the Digital Humanities – its debates and its aspirations – has been largely about text.  As Matthew Kirschenbaum has noticed, much of it can be found in the English department.  Its origins are always located in the work of Father Busa, and its greatest stars, from Franco Moretti onwards, keep us focussed on the ‘distant reading’ of words.  Indeed, the object of study for most Digital Humanists remains the inherited text of Western culture – now available for recalculation via Google Books, ECCO, EEBO, and Project Gutenberg.

And because the Digital Humanities is being led from areas of the academy that take as their object of study a canonical set of texts (however extensive and contested), we have naturally been led to use tools that privilege text analysis, and to ignore methodologies that are focussed elsewhere.  The popular tools on the block are topic modelling of text, and network analysis based on natural language processing of text.  This is particularly true in North America, where subjects such as geography do not have as strong an institutional presence as in Europe and much of the rest of the world, and where the spatial and sonic turns in the humanities feel less well established.
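For readers who have not met these tools, a minimal, self-contained sketch of topic modelling follows, using scikit-learn's implementation of Latent Dirichlet Allocation on a four-sentence invented corpus; a real project would run the same pipeline over millions of pages.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A toy corpus standing in for the millions of pages a real project would use.
corpus = [
    "the ship sailed from london with a cargo of cloth",
    "the trial at the old bailey ended in acquittal",
    "cloth and cargo were traded at the port of london",
    "the defendant stood trial and the jury returned its verdict",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(corpus)  # document-term matrix of word counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the most heavily weighted words in each inferred 'topic'.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```

Note what the tool takes for granted: the input is a bag of words, stripped of sound, image, place and sequence – which is precisely the limitation at issue here.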

This emphasis on text tends to make the Digital Humanities feel rather safer than it should.  While the digital humanities is frequently cited for its disruptive potential – its ‘affordances’ – it is inherently conservative about what constitutes a legitimate subject, and has breathed new life into areas that forty years ago already felt moribund.  The Enlightenment, and the papers of Newton, Austen, Bentham and Darwin – all dead writers of an elite stamp – have been revived, and their ‘texts’ made hyper-available.


In part, this is just about the rhythms of the academy.  The Digital Humanities arose just as post-modernism and second-wave feminist criticism seemed to exit stage left, and with them, much of the imperative to critique the canon.  But it is also a result of underlying economic structures and the technologies of twentieth-century librarianship.  We very seldom acknowledge it, but the direction of our work is frequently determined and universally facilitated by the for-profit commercial information sector – by the likes of ProQuest, Google and Elsevier, Ancestry.com and Cengage Gale.  And they in turn are the product of a hundred-year history that has shaped what is available to all researchers in the humanities.

If you want to know why, for example, The Times Digital Archive was the first major newspaper available online; if you want to know why early modern English books came next; if you want to know why Indian and African and South American literature is not available in the same way, it is down to selections made by these companies, and selections made, not last year, but a hundred years ago.

The current digital landscape is actually a reflection of an older underlying project, and an older technology.  Perhaps the biggest influence on what is available to researchers online – the biggest selection bias involved – is just the ghost of commercially produced, for-profit microfilm.  In other words, we have text because that is what people thought was important in 1906, or 1927, or 1935.  We tend to forget that microfilm was the great new technology of the twentieth century, and was itself part of an apparently radical, disruptive intellectual project.  It is worthwhile remembering the details.


In 1906 Paul Otlet and Robert Goldschmidt proposed the livre microphotographique – a library on microfilm – as a World Center Library of Juridical, Social and Cultural Documentation.  This was to be the ultimate universal library and knowledge machine – the library at Alexandria made new – and it was made possible by microfilm.

Later perfected for commercial use, microfilm was by the late 1920s the methodology of choice: the Library of Congress used it to film and republish some 3 million volumes from the British Library between 1927 and 1935; and in 1935, Kodak started filming The Times on a commercial basis.

And this pre-history of heritage material on the web is relevant for a simple reason.  It costs less than a penny per page to generate a digital image from microfilm.  The process is automated to the point that all you do is feed the reel into a machine and wait.  By way of comparison, it costs around 15 pence per page to generate a similar image from a real book – even with modern automation – and three times that again to capture a page of manuscript in an archive.  For many of the projects designed during the first decade of the web, it was cheaper to have material microfilmed as a first step in digitisation.
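A back-of-the-envelope calculation using the per-page figures above shows why this economics mattered; the million-page collection is an invented example.

```python
# Per-page digitisation costs quoted above, in pence (the microfilm
# figure of "less than a penny" is rounded up to 1p for simplicity).
COSTS_PENCE = {
    "from microfilm":  1,
    "from book":       15,
    "from manuscript": 45,  # "three times that again"
}

pages = 1_000_000  # a hypothetical million-page collection

for source, pence in COSTS_PENCE.items():
    print(f"{source:16s} £{pages * pence / 100:>10,.0f}")

# from microfilm   £    10,000
# from book        £   150,000
# from manuscript  £   450,000
```

At forty-five times the cost of microfilm, an un-filmed manuscript collection simply did not get digitised in the first decade of the web.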

Seventeenth- and eighteenth-century books are available online precisely because the Library of Congress microfilmed them in the 1920s; and The Times Digital Archive is available because Kodak microfilmed it over eighty years ago.  Chinese and Arabic literature is not available in the same way because the Library of Congress, Kodak and their ilk decided it was not important.  ProQuest, the multi-billion pound corporation that supplies half the material used by Digital Humanities scholars, started as University Microfilms International in 1938.

In other words, what happened in the twentieth century – the aspiration to create a particular kind of universal library, and to commercialise world culture (and to a 1930s mind, this meant male and European culture) – essentially shapes what is now available online.  This is why most of the material we currently have is in black and white instead of colour.  And most importantly, it is why we have text – and, in particular, canonical texts in English.

And if the Digital Humanities were really only the front end of that pantomime horse, this would not be that big a deal.  But the Digital Humanities is also the back end – all the people creating the infrastructure that defines world culture online.  If you ask an undergraduate (or most humanities professors) about their research practices, it rapidly becomes clear that hard-copy wood pulp has been replaced by digital materials.  What we study is what we can find online.

In part, the selection bias driven by the role of microfilm – and the textual bias this implies – just means that, like the humanities in general, the digital sort is inherently, and institutionally, Western-centric, elitist and racist.  Rich white people produced the text that the humanities tend to study; and despite the heroic, multi-generational effort that has sought to recover female voices, or projects seeking to give new voice to the poor – history from below – this selective intellectual landscape remains.

In other words, the textual Digital Humanities offers a superficial and faux radicalism that effectively reinforces the conservative character of much humanities research.  The Digital Humanities' problem in recruiting beyond white and privileged practitioners is not just down to the boorish cultures of code – rude male children being unwelcoming – but a result of its object of study.

All of which is just by way of introducing the real subject of this post – that for us to actually grasp the ‘affordances’ the digital makes possible, we really need to change that ‘object of study’ and move beyond microfilmed cultures.

And that when we add space and place, time and sound to our analysis, and when we start from a hundred other places than the English department – from geography and archaeology, to quantitative biology and informatics – we can create something that is more compelling, more revealing and more powerful; and arguably more inclusive and democratic along the way.

By way of pursuing this idea, I want to go through a few of the different ways new tools and approaches create real opportunities to move beyond the analysis of ‘text’ to something more ambitious; and in the process attack that very real inherent bias – and inherent conservatism – that the ‘textual’ humanities brings with it.

The rain falls on every head – and I just want to explore how we can move beyond the elite and the Western, the privileged and the male.

In the humanities we think of the digitisation of text, but in a dozen other fields they are digitising different components of the physical world.  And when everything is digital – when all forms of stuff come to us down a single pipeline – everything can be inter-related in new ways.  The web and the internet simply provide a context in which image, sound, video and text are brought onto a single page.

Consider for a moment the ‘Haptic Cow’ project from the Royal Veterinary College in London.  Here they have developed a full-scale ‘haptic’ representation of a cow in labour, facing a difficult birth, which allows students to physically engage with and experience the process of manipulating a calf in situ.  Imagine this technology applied to a more historical event, or process, or experience.  It suggests that the object of study can be different, and should include the haptic – the feel and heft of a thing in your hand.  This is being encoded for millions of objects through 3D scanning; but we do not yet have an effective way of incorporating that 3D encoding into our reading of the past.

And if we can ‘feel’ an object, it changes how we read the text that comes with it, or the experience that text encodes.  The world would look and feel very different if we organised it around those objects – the inherited texts still attached to them, perhaps, but the objects' origin and materiality forming the core of the meaning we seek to interrogate.  We can use the technology to think harder about the changing nature of work, or punishment – the ‘feel’ of oppression and luxury.  Museums and collections – the catacombs of culture – are undoubtedly just as powerfully selective and controlling as the unseen hand of the publishers and archivists; but in stepping beyond text, we can hope to play the museums off against the text.


The same could be said of the aural – that weird world of sound on which we continually impose the order of language, music and meaning, but which is in fact a stream of sensations filtered through place and culture.  For people working in musicology there seems to be a ‘sonic turn’ in the humanities, but most of us have paid it little heed.

There are projects like Virtual St Paul's Cross, which allows you to ‘hear’ John Donne's sermons from the 1620s from different vantage points around the churchyard.  Donne is a dead white man par excellence, but the project changes how we imagine the text and the event.  And it again begins to navigate that normally unbridgeable space between text and the material world, to help give us access to the experience of the beggar in the crowd – of women, children and the historically unvoiced.

For myself, I want to understand a sermon heard in the precise church in which it was delivered; a political speech in the field, or parliamentary chamber; or an impassioned defence in the squalid courtroom in which it was enacted, or under an African judgement tree – with the weather and the smell thrown in.  And I want to hear it from the back of the hall, through the ears of a child or a servant.

This would challenge us to think harder, and differently, about text that purports to represent speech, and text that sits between the mind and the page.  Recorded voice – even in the form of text – is inherently more quotidian, and inherently more likely to give us access to the 90 per cent of the population whose voices are recorded but whose 'text' is not.  Text recording speech is different to text produced by the elite power users of the technology of writing, who write directly from mind to page.  This at least shifts us a bit – from text to voice.

Similarly, in the work of people such as Ian Gregory, we can see the beginnings of new ways of reading both the landscape, and the textual leavings of the dead in the landscape.  His projects mapping the Lakeland poets, and mapping nineteenth-century government reports, imply a new and different kind of reading.
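By way of illustration, here is a minimal geoparsing sketch of the kind of processing such projects build on.  It assumes spaCy and its small English model are installed, and the two-entry gazetteer is an invented stand-in for the full gazetteers (GeoNames, or a historical equivalent) a real project would query.

```python
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# A toy gazetteer; a real project would query GeoNames or a historical gazetteer.
GAZETTEER = {
    "Keswick": (54.60, -3.13),
    "Grasmere": (54.46, -3.02),
}

text = "We walked from Keswick towards Grasmere as the rain began."

# Extract place names and attach coordinates, turning prose into mappable points.
for ent in nlp(text).ents:
    if ent.label_ in ("GPE", "LOC") and ent.text in GAZETTEER:
        lat, lon = GAZETTEER[ent.text]
        print(f"{ent.text}: ({lat}, {lon})")
```

Once a text has been reduced to points like these, it can be laid over the landscape it describes – and read against it.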


What happens to a traveller's journal when it is mapped onto a landscape?  What happens to a landscape painting when we can see both its reference landscape, and the studio in which it was completed?  What happens to even a text, when it is understood to encode a basic geographical relationship?  How do we understand a conversation on a walk when we can map its phrases and exchanges against the earth's surface?  And what forms of analysis can we undertake when each journey, each neighbourhood, each street and room, are available to add to the texts associated with them?

The rain falls on every head.

All of which is to state the obvious.  There are lots of new technologies that change how we connect with historical evidence – whether that is text or something more interesting – and we increasingly access it all via that single remarkable pipeline that is the online and the digital.

But it strikes me that adding these new dimensions to the object of study allows us to do something important.  I have spent the last thirty-eight years working on a ‘history from below’, focused on the lives of eighteenth-century London's working people.  And what I want to suggest is that these new dimensions and methodologies actually make that project fundamentally more possible; and, by extension, make the larger project of recovering the voices and experience of the voiceless dead more possible.  When you add in the haptic, the mapped and the geographical, the aural and the 3D, what you actually end up with is a world in which non-elite – and non-Western – people are newly available in a new way.  You also move from a kind of history as explanation, to history as empathy – across cultures and genders, across time and space.

Sound and space and place are fundamentally more intellectually democratic than text.  90% of our inherited canon comes from rich dead white men; and yet the thronging multitude who stood in St Paul's Churchyard, and the quotidian hordes who walked through the streets and listened to the ballad singers, experienced something that we can now recover.  The sound of judgement as experienced by the women and men who stood trial at the Old Bailey, and their voices of defiance, can be recovered.  And even the cold and wind of a weather that can now be captured day by day for a quarter of a millennium can be added to the democratic possibilities new digital resources allow.  Add in the objects in the museums; the sounds of the ships, and their courses through the oceans; the measurable experience of labour and imprisonment; the joy of music and movement; the inherited landscapes, bearing all the marks of the toil of the voiceless dead – and you end up with something new.  The material world – in digital – gives us access to the rest of the world, and begins to create tools that speak to the 99% of the world's population who, in 1700 or 1800, did not read or write, and did not leave easy traces for us to follow.  The Digital Humanities in Three Dimensions challenges us, and empowers us, to write a different, more inclusive, kind of history.

The rain falls on every head.