Digital Humanities: The Future of History?
Volume 3 | Issue 3 - Health & Medicine
Article by Agnes French, Edited by Hannah Lyons, Additional Research by Ellie Veryard.
Having spent the last six weeks contributing towards a computerised database that catalogues medieval miracles from various sources (which can be found here), thereby effectively converting qualitative evidence into a quantitative form, I can imagine some may question the value of such an enterprise. Quite apart from the fact that I have personally found it an extremely interesting process, and one which marks a refreshing departure from the more traditional methods by which I have studied history previously, I would argue that the database reflects two wider issues within the historical field: the place of quantitative forms of analysis within history, and the uses of computing within the historical profession.
To begin with the quantitative “issue”, as it were: it would be untrue to claim that I have encountered no problems at all in converting such a particular and intricate piece of evidence as a miracle story into a rather more unadorned row on a database – there have been obstacles. One of the hardest of these to overcome was the fact that some rather subjective decisions had to be made. Does a passing reference to healing snake-bites using the pages of holy texts constitute a miracle? Does the divine preservation of saints’ bodies after death deserve its own category? If someone prophesies their own death, did they perform the miracle or receive it from God? And so on. As such, the quantitative nature of this database does not remain “untainted”, if you like, by human categorisation, and therefore by the unavoidably individual viewpoint of those inputting the data. The best one can hope for is to follow the general pattern imparted by the creators of the database (Hannah Probert and Simon Lax), which I have indeed tried to do.
However, putting these issues aside, the question remains: how useful is it to translate qualitative evidence into quantitative data in this manner? The answer, in my opinion, is very. As already discussed, one argument against this kind of historical use of “data” might be that it bypasses some of the intricacies of the material and by its very nature forces free-flowing narratives into inflexible check-boxes. However, this is not to say that historical databases cannot be used to explore evidence in different and potentially very fruitful ways. As of today the ‘Miraculous Database’ does not include the largest of samples – sixteen texts – but at five hundred and ninety miracles it is still a substantial enough body of evidence to be of use. For instance, the database reveals that, on average, female miracle-workers perform almost double the number of miracles under the category of ‘patronage’ compared to their male counterparts. On the other hand, many more ‘divination’ miracles are performed by men than by women. What does this tell us about gender expectations during this period? Could it relate to the idea of women as nurturers, or to the fact that divination was reserved mostly for men because of the very direct link it provided to the divine? Regardless, the point is that the database can encourage new ways of thinking about miracles in the medieval era, and could be used not only to back up hypotheses but also to generate new questions to be answered. Furthermore, the joy of the database is that it can be continually added to, changed and adapted, so that one day it may encompass a very large sample, containing a wide range of information that could represent the majority, if not all, of the medieval recordings of miracles.
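To give a sense of how such comparisons might be drawn out in practice, the sketch below shows one way of cross-tabulating miracle categories against the gender of the miracle-worker. It assumes the database has been exported to a CSV file, and the file name and column names (“category”, “worker_gender”) are hypothetical stand-ins rather than the project’s actual field names.

```python
# A minimal sketch of the kind of cross-tabulation described above, assuming
# the database has been exported to CSV. The file name and the columns
# "category" and "worker_gender" are hypothetical, not the real field names.
import pandas as pd

miracles = pd.read_csv("miraculous_database.csv")

# Count miracles by category for each gender of miracle-worker,
# e.g. 'patronage' or 'divination' against 'male'/'female'.
counts = (
    miracles.groupby(["category", "worker_gender"])
    .size()
    .unstack(fill_value=0)
)

print(counts.loc[["patronage", "divination"]])
```

A table like this makes the sort of gendered pattern mentioned above visible at a glance, in a way that reading individual miracle stories one by one would not.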
Again, you may be asking: what use is this quantitative approach when one can simply read the miracles and know exactly what was recorded? For starters, reading one miracle, or even one book of miracles, will fail to give a clear indication of wider trends, such as change over time in the number of miracles performed, the kinds of miracles performed, who performed them and the kind of person they were performed on. Short of reading every single one of the five hundred and ninety miracles amassed so far and drawing your own conclusions, I cannot see a much more effective method than using a database which has converted this material into more quantifiable categories.
None of this is to say that the database can be used for detailed analysis of each miracle, for clearly it cannot; anyone who truly wants to understand a particular miracle – why it was written, who it was aimed at, whether or not it carried a particular “message” – will, of course, need to read the miracle itself. The point of the database is not to encourage a return to the ideas of the cliometricians of the past, who believed human behaviour could be explained through mathematical methods, statistics, graphs, charts and tables alone, but to encourage an exploration of the quantitative nature of the evidence alongside the qualitative. It is not a question of one being “better” than the other, but of what can be achieved when both methods of analysis are utilised. As John Tosh has asserted, the use of purely quantitative data and statistics to explain the reasons behind change is ‘the most challenging task’ in this type of history, and so it of course needs to be interpreted with reference to qualitative evidence as well.
Furthermore, the database in no way stands as a barrier to more “traditional” analysis of miracles, and can in fact be used to extend such analysis – in the debate surrounding contemporary opinions of the miraculous, for example. Wouldn’t an increase in a certain type of miracle – as identified by the database’s miracle categories and ‘mega-categories’ – indicate a growing interest in, and popularity of, that kind of miraculous action? Wouldn’t a growth in the number of occurrences of a certain saint reflect rising interest in the values and characteristics associated with that saint? In this manner the divide between the qualitative and the quantitative becomes rather more blurred, as the evidence of one can surely be intertwined with, and utilised to prove (or disprove), the theories of the other.
Having explored the uses, as well as the issues, of quantitative historical methods, I now wish to turn to the benefits of incorporating digital technology into historical study more generally. One of the key benefits of this kind of approach is its availability and accessibility: the miracles database is inclusive and open to anyone who wishes to make use of it, as long as they have a basic understanding of computing. As Robert Darnton has asserted, in the digital world the domain of historical study is no longer purely the property of academics, but is ‘now open to amateurs’ (such as myself, happily). Without even gaining a BA undergraduate degree – let alone an M.A., Ph.D., and all the rest of it – I have been able to contribute to a digital database which is potentially being used to teach fellow undergraduate students. This is not to take away from the fact that this is a university-led scheme and a tutor-led project (Dr. Charles West and Dr. Julia Hillner devised it and have supervised me throughout the process), but the point remains that in theory anyone can create anything they like – providing they have access to a computer and at least some basic computing knowledge – and can reach a potentially huge readership in doing so.
That is the key to this kind of computerised database – anyone can potentially add to it (with permission from the university) and anyone can access it online, akin to a more thoroughly controlled Wikipedia. So why not allow other students to partake in this process? The database has so far been built not by the leaders of the academic field but by those who are yet to enter it, so why not open it up further? Democratisation of this type could surely yield fairly fast results and, with some support and guidance, the input would be just as reliable as it has been previously. Even if only ten students volunteered, that would still be a potential ten extra books to add to the list – thereby increasing the count by more than a third in one fell swoop.
However, before getting carried away with ideas about democratising history, the importance of utilising computer technology, to my mind, also lies in what it can offer history as a profession – albeit in a more complex way than was used for the miraculous database. Although the technology used for the database so far has not been of the most advanced variety, the fact that the system is computerised in itself opens up possibilities for its future development. A small example of the way in which databases such as this one can be more interactive than other mediums is that most of the locations included in the database have also been incorporated into Google Maps and Google Earth, providing an interesting visual resource which displays the information in a user-friendly and attractive manner.
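By way of illustration, locations can be passed to Google Earth (and imported into Google Maps) as a KML file, which is simply structured text. The sketch below shows one way this might be done; the place names and coordinates are invented examples for the sake of the sketch, not entries from the miraculous database itself.

```python
# A small illustrative sketch of writing locations to a KML file that Google
# Earth (or Google Maps) can display. The places and coordinates below are
# invented examples, not data drawn from the miraculous database.
locations = [
    ("Canterbury", 51.2802, 1.0789),  # (name, latitude, longitude)
    ("Tours", 47.3941, 0.6848),
]

placemarks = "".join(
    f"<Placemark><name>{name}</name>"
    f"<Point><coordinates>{lon},{lat}</coordinates></Point></Placemark>"
    for name, lat, lon in locations
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
    f"{placemarks}</Document></kml>"
)

with open("miracle_locations.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```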
Furthermore, talk is underway about moving the database from Microsoft Access into a more complex format, involving the combined effort of the Medieval History department and Computer Science. This can only point to further opportunities for the increased sophistication of the database in the future and, indeed, to the benefits of increased dialogue between computing and arts departments in general.
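Purely as a hedged sketch of what “a more complex format” could mean in practice, a relational structure might separate the source texts from the individual miracles, along the following lines. The tables, fields and the use of SQLite here are assumptions made for illustration only, not the project’s actual plans.

```python
# A hypothetical relational structure for a miracles database, using SQLite
# purely for illustration. The table and column names are assumptions, not
# the project's actual design.
import sqlite3

conn = sqlite3.connect("miracles.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS texts (
    text_id  INTEGER PRIMARY KEY,
    title    TEXT NOT NULL,
    author   TEXT
);
CREATE TABLE IF NOT EXISTS miracles (
    miracle_id    INTEGER PRIMARY KEY,
    text_id       INTEGER REFERENCES texts(text_id),
    category      TEXT,   -- e.g. 'healing', 'patronage', 'divination'
    worker_gender TEXT,   -- gender of the miracle-worker
    location      TEXT
);
""")
conn.commit()
conn.close()
```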
As Emmanuel Le Roy Ladurie once said, “The historian of tomorrow will be a programmer, or he will not be a historian”. Of course, after the rapid rise of quantitative and cliometric methods in the 1960s and 1970s, and the subsequent backlash against them, this vision seemed somewhat further away. As Ian Anderson has noted, the new economic history school was attacked for ‘turning historians into statisticians, slaves to quantitative analysis’, with Fogel and Engerman’s 1974 work Time on the Cross: The Economics of American Negro Slavery becoming a central target for those opposed to this kind of quantitative history. In the immediate aftermath, Lawrence Stone even went as far as to assert that the computer ‘should only be employed as the choice of last resort’. However, it now looks as if Ladurie’s prophecy could be becoming a reality: calls for historians to take a more active role in digital technology are springing up more and more, and as the new generation becomes increasingly computer-savvy, fears that historians are not keeping up with the trend are quickly being realised. Orville Vernon Burton has warned that not only do ‘individual professors lag far behind their students in the techniques used to gain information and analyze and synthesize it… but also that higher education itself is failing to keep up with the young people it should be serving’.
As such, it would surely be beneficial to bring these different methodologies together, to ensure that the future of history is secure and not lost amidst the “information age”. Anderson has pointed to a tendency among British historians to place greater emphasis on the empiricist tradition in which they are rooted, and thereby to invest less interest in the theoretical models, methodologies and processes for which computerised methods such as databases can be utilised. Anderson believes, however, that these two approaches to history ‘are not diametrically opposed to each other or mutually exclusive’ and that they should instead be viewed as ‘two points on a methodological continuum’, concluding that until this view becomes more accepted the story of history and computing remains one of ‘perpetually unfulfilled potential’.
In terms of my own experience of contributing to a fairly simple historical database, I can honestly say that I’ve learnt more about computers in these six weeks than I did during a year-long GCSE in I.T., and I have gained skills which I genuinely feel I can utilise in the future. So why not take things further? Why not create courses on the digital humanities? Why not incorporate new computing technologies more thoroughly into history courses? As Ladurie’s vision rushes closer and closer towards reality, and given the uncertainty facing so many students today, maybe this computer-based approach is one the history profession should grab hold of – and welcome databases, especially ones about miracles, into the fold.
Other organisations involved in digital history
• The Humanities Research Institute, based in Sheffield, is one institute among many helping to provide digital access to resources. It has run numerous projects – involving the digitisation of source materials – to allow greater access for researchers in disciplines across the Humanities.
• Collaborating with other institutions, it has provided both searchable databases and digitised transcripts of records. So far its work includes, among other projects, London Lives, a collection of resources relating to poverty and social conditions; the Canterbury Tales Project, working with other partners to digitise all early surviving manuscripts of the Canterbury Tales; and Flora Tristan, contributing to the digitisation of the correspondence of the nineteenth-century feminist Flora Tristan.
• Other ventures in digital history include the collaboration between ICMA and the Universities of Reading and Southampton to create the Medieval Soldier Database. Funded by the Arts and Humanities Research Council, the scheme aims to create a searchable database of medieval soldiers using information from the Muster Rolls held at The National Archives in Kew and the Bibliothèque nationale de France in Paris.