Digital scholarship blog

Enabling innovative research with British Library digital collections

07 May 2024

Recovered Pages: Computing for Cultural Heritage Student Projects

The British Library is continuing to recover from last year’s cyber-attack. While our teams work to restore our services safely and securely, one of our goals in the Digital Research Team is to get some of the information from our currently inaccessible web pages into an easily readable and shareable format. We’ll be sharing these pages via blog posts here, with information recovered from the Wayback Machine, a fantastic initiative of the Internet Archive.  

The next page in this series is all about the student projects that came out of our Computing for Cultural Heritage project with the National Archives and Birkbeck University. This student project page was captured by the Wayback Machine on 7 June 2023.  

 

Computing for Cultural Heritage Student Projects

[Image: Computing for Cultural Heritage logo, an image of a laptop with bookshelves as the screen saver]

This page provides abstracts for a selection of student projects undertaken as part of a one-year part-time Postgraduate Certificate (PGCert), Computing for Cultural Heritage, co-developed by the British Library, The National Archives and Birkbeck University, and funded by the Institute of Coding as part of a £4.8 million university skills drive.

“I have gone from not being able to print 'hello' in Python to writing some relatively complex programs and having a much greater understanding of data science and how it is applicable to my work."

- Jessica Green  

Key points

  • The aim of the trial was to provide professionals working in the cultural heritage sector with an understanding of basic programming and computational analytic tools to support them in their daily work 
  • During the Autumn and Spring terms (October 2019 to April 2020), 12 staff members from the British Library and 8 staff members from The National Archives completed two new trial modules at Birkbeck University: Demystifying computing for heritage professionals and Work-based Project 
  • Birkbeck University have now launched the Applied Data Science (Postgraduate Certificate) based on the outcomes of the trial

Student Projects

 

Transforming Physical Labels into Digital References 

Sotirios Alpanis, British Library
This project aims to use computing to convert data collected during the preparation of archive material for digitisation into a tool that can verify and validate image captures, and subsequently label them. It will take as its input physical information about each document being digitised, perform and facilitate a series of validations throughout image capture and quality assurance, and result in an XML file containing a map of physical labels to digital files. The project will take place within the British Library/Qatar Foundation Partnership (BL/QFP), which is digitising archive material for display on QDL.qa.  

Enhancing national thesis metadata with persistent identifiers

Jenny Basford, British Library 
Working with data from ISNI (International Standard Name Identifier) Agency and EThOS (Electronic Theses Online Service), both based at the British Library, I intend to enhance the metadata of both databases by identifying doctoral supervisors in thesis metadata and matching these data with ISNI holdings. This work will also feed into the European-funded FREYA project, which is concerned with the use of a wide variety of persistent identifiers across the research landscape to improve openness in research culture and infrastructure through Linked Data applications.

A software tool to support the social media activities of the Unlocking Our Sound Heritage Project

Lucia Cavorsi, British Library
Video
I would like to design a software tool able to flag forthcoming anniversaries by comparing all the dates present in SAMI (the sound and moving image catalogue of the Sound Archive) with the current date. The aim of this tool is to suggest potential content for the Sound Archive’s social media posts. Useful dates in SAMI which could be matched with the current date and provide material for tweets include birth and death dates of performers or authors, radio programme broadcast dates and recording dates. I would like this tool to also match the subjects currently present in SAMI with the subjects featured in the list of anniversaries for 2020 which the social media team uses, for example anniversaries like ‘International HIV day’ and ‘International day of Lesbian visibility’. A Windows pop-up message will be designed to give notification of anniversaries on the day. If time permits, it would be useful to also analyse what hashtags have been used over the last year by the people who are followed by or follow the Sound Archive Twitter account. By extracting a list of these hashtags, further, and more sound-related, anniversaries could be added to the list of anniversaries currently used by the UOSH’s social media team.
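
As an illustration of the date-comparison idea, here is a minimal Python sketch that flags records whose dates fall on today's day and month. The sami_dates.csv export and its column names are assumptions for illustration, not the real SAMI schema:

```python
# Minimal sketch: flag catalogue records whose dates share today's day/month.
import csv
from datetime import date, datetime

today = date.today()

def is_anniversary(date_string):
    """Return True if a YYYY-MM-DD date shares today's day and month."""
    try:
        d = datetime.strptime(date_string, "%Y-%m-%d").date()
    except ValueError:
        return False
    return (d.day, d.month) == (today.day, today.month)

with open("sami_dates.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if is_anniversary(row["date"]):
            # In the full tool this would feed a Windows pop-up notification
            print(f"{row['title']}: {row['date']} ({row['date_type']})")
```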

Computing Cholera: Topic modelling the catalogue entries of the General Board of Health

Christopher Day, The National Archives
Blog / Other
The correspondence of the General Board of Health (1848–1871) documents the work of a body set up to deal with cholera epidemics in a period where some English homes were so filthy as to be described as “mere pigholes not fit for human beings”. Individual descriptions for each of these over 89,000 letters are available on Discovery, The National Archives (UK)’s catalogue. Now, some 170 years later, access to the letters themselves has been disrupted by another epidemic, COVID-19. This paper examines how data science can be used to repurpose archival catalogue descriptions, initially created to enhance the ‘human findability’ of records (and favoured by many UK archives due to high digitisation costs), for large-scale computational analysis. The records of the General Board will be used as a case study: their catalogue descriptions topic modelled using a latent Dirichlet allocation model, visualised, and analysed – giving an insight into how new sanitary regulations were negotiated with a divided public during an epidemic. The paper then explores the validity of using the descriptions of historical sources as a source in their own right; and asks how, during a time of restricted archival access, metadata can be used to continue research.
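
For readers curious about the method, a minimal sketch of topic modelling catalogue descriptions with latent Dirichlet allocation might look like this, using scikit-learn and assuming a hypothetical plain-text export of the Discovery descriptions (one per line); the project's actual pipeline and parameters will differ:

```python
# Minimal sketch: LDA topic modelling of catalogue descriptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

with open("gbh_descriptions.txt", encoding="utf-8") as f:
    descriptions = [line.strip() for line in f if line.strip()]

# Bag-of-words representation, dropping very common and very rare terms
vectorizer = CountVectorizer(stop_words="english", max_df=0.9, min_df=5)
dtm = vectorizer.fit_transform(descriptions)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(dtm)

# Print the top words for each topic
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-8:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```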

An Automated Text Extraction Tool for Use on Digitised Maps

Nicholas Dykes, British Library
Blog / Video
Researchers of history often have difficulty geo-locating historical place names in Africa. I would like to apply automated transcription techniques to a digitised archive of historical maps of Africa to create a resource that will allow users to search for text, and discover where, and on which maps that text can be found. This will enable identification and analysis both of historical place names and of other text, such as topographical descriptions. I propose to develop a software tool in Python that will send images stored locally to the Google Vision API, and retrieve and process a response for each image, consisting of a JSON file containing the text found, pixel coordinate bounding boxes for each instance of text, and a confidence score. The tool will also create a copy of each image with the text instances highlighted. I will experiment with the parameters of the API in order to achieve the most accurate results.  I will incorporate a routine that will store each related JSON file and highlighted image together in a separate folder for each map image, and create an Excel spreadsheet containing text results, confidence scores, links to relevant image folders, and hyperlinks to high-res images hosted on the BL website. The spreadsheet and subfolders will then be packaged together into a single downloadable resource.  The finished software tool will have the capability to create a similar resource of interlinked spreadsheet and subfolders from any batch of images.
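
A minimal sketch of the Vision API step might look like the following, assuming the google-cloud-vision Python client and suitable credentials; the file names and the JSON layout written out here are illustrative rather than the project's actual output format:

```python
# Minimal sketch: send a local map image to the Google Cloud Vision API and
# save the detected text with pixel-coordinate bounding boxes as JSON.
import json
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("map_sheet_001.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)

results = []
for annotation in response.text_annotations[1:]:  # element [0] is the full-page text
    box = [(v.x, v.y) for v in annotation.bounding_poly.vertices]
    results.append({"text": annotation.description, "bounding_box": box})

with open("map_sheet_001.json", "w", encoding="utf-8") as out:
    json.dump(results, out, indent=2)
```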

Reconstituting a Deconstructed Dataset using Python and SQLite

Alex Green, The National Archives
Video
For this project I will rebuild a database and establish the referential integrity of the data from CSV files using Python and SQLite. To do this I will need to study the data, read the documentation, draw an entity relationship diagram and learn more about relational databases. I want to enable users to query the data as they would have been able to in the past. I will then make the code reusable so it can be used to rebuild other databases, testing it with a further two datasets in CSV form. As an additional challenge, I plan to rearrange the data to meet the principles of ‘tidy data’ to aid data analysis.
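
A minimal sketch of the rebuild step, with invented table and column names standing in for the real datasets, might look like this:

```python
# Minimal sketch: rebuild a relational database from CSV exports in SQLite,
# with foreign keys enforcing referential integrity. Schema is illustrative.
import csv
import sqlite3

con = sqlite3.connect("rebuilt.db")
con.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
con.executescript("""
CREATE TABLE IF NOT EXISTS series (
    series_id   INTEGER PRIMARY KEY,
    title       TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS item (
    item_id     INTEGER PRIMARY KEY,
    series_id   INTEGER NOT NULL REFERENCES series(series_id),
    reference   TEXT,
    description TEXT
);
""")

def load(table, path, columns):
    """Bulk-insert one CSV file into one table."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = [[row[c] for c in columns] for row in csv.DictReader(f)]
    placeholders = ", ".join("?" * len(columns))
    con.executemany(
        f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})", rows
    )

load("series", "series.csv", ["series_id", "title"])
load("item", "item.csv", ["item_id", "series_id", "reference", "description"])
con.commit()
```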

PIMMS: Developing a Model Pre-Ingest Metadata Management System at the British Library

Jessica Green, British Library
GitHub / Video
I am proposing a solution to analysing and preparing for ingest a vast amount of ‘legacy’ BL digitised content into the future Digital Asset Management System (DAMPS). This involves building a prototype for a SQL database to aggregate metadata about digitised content and preparing for SIP creation. In addition, I will write basic queries to aid in our ongoing analysis about these TIFF files, including planning for storage, copyright, digital preservation and duplicate analysis. I will use Python to import sample metadata from BL sources like SharePoint, Excel and BL catalogues – currently used for analysis of ‘live’ and ‘legacy’ digitised BL collections. There is at least 1 PB of digitised content on the BL networks alone, as well as on external media such as hard-drives and CDs. We plan to only ingest one copy of each digitised TIFF file set and need to ensure that the metadata is accurate and up-to-date at the point of ingest. This database, the Pre-Ingest Metadata Management System (PIMMS), could serve as a central metadata repository for legacy digitised BL collections until then. I look forward to using Python and SQL, as well as drawing on the coding skills from others, to make these processes more efficient and effective going forward.
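
As one illustration of the kind of query described, here is a minimal sketch of a duplicate-analysis query against a hypothetical PIMMS SQLite prototype; the database, table and column names are assumptions for illustration only:

```python
# Minimal sketch: find duplicate TIFF files across sources by checksum.
import sqlite3

con = sqlite3.connect("pimms.db")
duplicates = con.execute("""
    SELECT checksum,
           COUNT(*) AS copies,
           GROUP_CONCAT(file_path, '; ') AS locations
    FROM tiff_files
    GROUP BY checksum
    HAVING COUNT(*) > 1
    ORDER BY copies DESC
""").fetchall()

for checksum, copies, locations in duplicates:
    print(f"{checksum}: {copies} copies -> {locations}")
```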

Exploring, cleaning and visualising catalogue metadata

Alex Hailey, British Library
Blog / Video
Working with catalogue metadata for the India Office Records (IOR), I will undertake three tasks: 1) converting c. 430,000 IOR/E index entries to descriptions within the relevant volume entries; 2) producing an SQL database for 46,500 IOR/P descriptions, allowing enhanced search when compared with the BL catalogue; and 3) creating Python scripts for searching, analysis and visualisation, to be demonstrated on dataset(s) and delivered through Jupyter Notebooks.

Automatic generation of unique reference numbers for structured archival data

Graham Jevon, British Library
Blog / Video / GitHub
The British Library’s Endangered Archives Programme (EAP) funds the digital preservation of endangered archival material around the world. Third party researchers digitise material and send the content to the British Library. This is accompanied by an Excel spreadsheet containing metadata that describes the digitised content. EAP’s main task is to clean, validate, and enhance the metadata prior to ingesting it into the Library’s cataloguing system (IAMS). One of these tasks is the creation of unique catalogue reference numbers for each record (each row of data on the spreadsheet). This is a predominantly manual process that is potentially time-consuming and subject to human inputting errors. This project seeks to solve this problem. The intention is to create a Windows executable program that will enable users to upload a CSV file, enter a prefix, and then click generate. The instant result will be an export of a new CSV file, which contains the data from the original CSV file plus automatically generated catalogue reference numbers. These reference numbers are not random. They are structured in accordance with an ordered archival hierarchy. The program will include additional flexibility to account for several variables, including language encoding, computational efficiency, data validation, and wider re-use beyond EAP and the British Library.
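
A minimal sketch of the core idea, reading a metadata CSV, adding a structured reference built from a user-supplied prefix and running numbers, and writing a new CSV, might look like the following. The column names and the reference pattern are placeholders rather than EAP's actual cataloguing rules:

```python
# Minimal sketch: add hierarchical reference numbers to each row of a CSV.
import csv
from collections import defaultdict

def add_references(in_path, out_path, prefix):
    counters = defaultdict(int)  # one running number per parent record
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ["reference"])
        writer.writeheader()
        for row in reader:
            counters[row["parent"]] += 1
            row["reference"] = f"{prefix}/{row['parent']}/{counters[row['parent']]}"
            writer.writerow(row)

add_references("eap_metadata.csv", "eap_metadata_with_refs.csv", "EAP123")
```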

Automating Metadata Extraction in Born Digital Processing

Callum McKean, British Library
Video
This project will automate the metadata extraction section of the Library’s current workflow for born-digital processing using Python, then interrogate and collate information in new ways using the SQLite module.

Analysis of peak customer interactions with Reference staff at the British Library: a software solution

Jaimee McRoberts, British Library
Video
The British Library, facing on-going budget constraints, has a need to efficiently deploy Reference Services staff during peak periods of demand. The service would benefit from analysis of existing statistical data recording the timestamp of each customer interaction at a Reference Desk. In order to do this, a software solution is required to extract, analyse, and output the necessary data. This project report demonstrates a solution utilising Python alongside the pandas library which has successfully achieved the required data analysis.
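
A minimal sketch of this kind of pandas analysis, assuming a hypothetical CSV of interactions with a timestamp column, might look like this:

```python
# Minimal sketch: count Reference Desk interactions by hour and day of week
# to reveal peak periods. File and column names are assumptions.
import pandas as pd

interactions = pd.read_csv("reference_desk_interactions.csv",
                           parse_dates=["timestamp"])

# Number of interactions by hour of day
by_hour = interactions["timestamp"].dt.hour.value_counts().sort_index()
print(by_hour)

# The same idea by day of week
by_day = interactions["timestamp"].dt.day_name().value_counts()
print(by_day)
```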

Enhancing the data in the Manorial Documents Register (MDR) and making it more accessible

Elisabeth Novitski, The National Archives
Video
To develop computer scripts that will take the data from the existing separate and inconsistently formatted files and merge them into a consistent and organised dataset. This data will be loaded into the Manorial Documents Register (MDR) and National Register of Archives (NRA) to provide the user with improved search ability and access to the manorial document information.
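
A minimal sketch of the merge step, with invented file names and a placeholder column mapping, might look like the following:

```python
# Minimal sketch: normalise column names across several inconsistently
# formatted exports and combine them into one dataset.
import pandas as pd

COLUMN_MAP = {"Manor": "manor", "MANOR NAME": "manor",
              "Repository": "repository", "Repo": "repository",
              "Reference": "reference", "Ref no": "reference"}

frames = []
for path in ["mdr_export_a.csv", "mdr_export_b.csv", "mdr_export_c.csv"]:
    df = pd.read_csv(path).rename(columns=COLUMN_MAP)
    frames.append(df[["manor", "repository", "reference"]])

merged = pd.concat(frames, ignore_index=True).drop_duplicates()
merged.to_csv("mdr_merged.csv", index=False)
```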

Automating data analysis for collection care research at The National Archives: spectral and textual data

Lucia Pereira Pardo, The National Archives
The day-to-day work of a conservation scientist working for the care of an archival collection involves acquiring experimental data from the varied range of materials present in the physical records (inks, pigments, dyes, binding media, paper, parchment, photographs, textiles, degradation and restoration products, among others). To this end, we use multiple and complementary analytical and testing techniques, such as X-ray fluorescence (XRF), Fourier Transform Infrared (FTIR) and Fibre Optic Reflectance spectroscopies (FORS), multispectral imaging (MSI), colour and gloss measurements, microfading (MFT) and other accelerated ageing tests.  The outcome of these analyses is a heterogeneous and often large dataset, which can be challenging and time-consuming to process and analyse. Therefore, the objective of this project is to automate these tasks when possible, or at least to apply computing techniques to optimise the time and efforts invested in routine operations, so that resources are freed for actual research and more specialised and creative tasks dealing with the interpretation of the results.

Improving efficiencies in content development through batch processing and the automation of workloads

Harriet Roden, British Library
Video
With the purpose of supporting and enriching the curriculum, the British Library’s Digital Learning team produces large-scale content packages for online learners through individual projects. Due to their reliance on other internal teams within the workflow for content delivery, a substantial amount of resource is spent on routine tasks to duplicate collection metadata across various databases. In order to reduce inefficiencies, increase productivity and improve reliability, my project aimed to alleviate pressures across the workflow through workload automation, delivered in four separate phases.

The Botish Library: building a poetry printing machine with Python

Giulia Carla Rossi, British Library
Blog / Video
This project aims to build a poetry printing machine, as a creative output that unites traditional content, new media and Python. The poems will be sourced from the British Library Digitised Books dataset collection, available under Public Domain Mark; I will sort through the datasets and identify which titles can be categorised as poetry using Python. I will then create a new dataset comprising these poetry books and relative metadata, which will then be connected to the printer with a Python script. The poetry printing machine will print randomized poems from this new dataset, together with some metadata (e.g. poem title, book title, author and shelfmark ID) that will allow users to easily identify the book.
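
A minimal sketch of the selection step, assuming a hypothetical metadata CSV for the Digitised Books collection and a crude keyword heuristic for spotting poetry titles, might look like this:

```python
# Minimal sketch: pick a random poetry title from a books metadata CSV and
# build the text that would be sent to the printer. Names are placeholders.
import csv
import random

def looks_like_poetry(title):
    """Very rough heuristic for identifying poetry titles."""
    keywords = ("poem", "poems", "poetry", "verse", "ballad", "sonnets")
    return any(word in title.lower() for word in keywords)

with open("digitised_books_metadata.csv", newline="", encoding="utf-8") as f:
    poetry_books = [row for row in csv.DictReader(f)
                    if looks_like_poetry(row["title"])]

book = random.choice(poetry_books)
label = f"{book['title']}\n{book['author']}\nShelfmark: {book['shelfmark']}"
print(label)  # in the real project this text would be sent to the printer
```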

Automating data entry in the UOSH Tracking Database

Chris Weaver, British Library
The proposed software solution is the creation of a Python script (to feature as a module in a larger script) to extract data from a web-based tool, either by obtaining data in JSON format via the site's API or by accessing the database powering the site directly. The data obtained is then formatted and inserted into corresponding fields in a Microsoft SQL Server database.
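
A minimal sketch of that pipeline, using the requests and pyodbc libraries with a placeholder URL, connection string and table, might look like the following:

```python
# Minimal sketch: fetch JSON from a web API and insert selected fields into a
# SQL Server table. URL, connection details and schema are placeholders.
import pyodbc
import requests

records = requests.get("https://example.org/api/items", timeout=30).json()

con = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=uosh_tracking;Trusted_Connection=yes;"
)
cursor = con.cursor()
for record in records:
    cursor.execute(
        "INSERT INTO items (item_id, title, status) VALUES (?, ?, ?)",
        record["id"], record["title"], record["status"],
    )
con.commit()
```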

Final Module

Following the completion of the trial, participants had the opportunity to complete their PGCert in Applied Data Science by attending the final module, Analytic Tools for Information Professionals, which was part of the official course launched last autumn. We followed up with some of the participants to hear more about their experience of the full course:

“The third and final module of the computing for cultural heritage course was not only fascinating and enjoyable, it was also really pertinent to my job and I was immediately able to put the skills I learned into practice.  

The majority of the third module focussed on machine learning. We studied a number of different methods and one of these proved invaluable to the Agents of Enslavement research project I am currently leading. This project included a crowdsourcing task which asked the public to draw rectangles around four different types of newspaper advertisement. The purpose of the task was to use the coordinates of these rectangles to crop the images and create a dataset of adverts that can then be analysed for research purposes. To help ensure that no adverts were missed and to account for individual errors, each image was classified by five different people.  

One of my biggest technical challenges was to find a way of aggregating the rectangles drawn by five different people on a single page in order to calculate the rectangles of best fit. If each person only drew one rectangle, it was relatively easy for me to aggregate the results using the coding skills I had developed in the first two modules. I could simply find the average (or mean) of the five different classification attempts. But what if people identified several adverts and therefore drew multiple rectangles on a single page? For example, what if person one drew a rectangle around only one advert in the top left corner of the page; people two and three drew two rectangles on the same page, one in the top left and one in the top right; and people four and five drew rectangles around four adverts on the same page (one in each corner). How would I be able to create a piece of code that knew how to aggregate the coordinates of all the rectangles drawn in the top left and to separately aggregate the coordinates of all the rectangles drawn in the bottom right, and so on?  

One solution to this problem was to use an unsupervised machine learning method to cluster the coordinates before running the aggregation method. Much to my amazement, this worked perfectly and enabled me to successfully process the total of 92,218 rectangles that were drawn and create an aggregated dataset of more than 25,000 unique newspaper adverts.” 

- Graham Jevon, EAP Cataloguer; BL Endangered Archives Programme 
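
For readers interested in the clustering approach Graham describes, a minimal sketch using DBSCAN from scikit-learn on invented rectangle coordinates might look like this; the original project's data, method and parameters will of course differ:

```python
# Minimal sketch: cluster rectangles drawn by different volunteers on the same
# page, then take the mean rectangle of each cluster as the "best fit".
import numpy as np
from sklearn.cluster import DBSCAN

# Each rectangle as (x, y, width, height); invented example data for one page
rects = np.array([
    [102, 95, 200, 150], [98, 100, 205, 148], [100, 97, 198, 152],   # top left
    [610, 90, 190, 160], [605, 96, 195, 155],                        # top right
])

# Cluster rectangles whose coordinates lie close together
labels = DBSCAN(eps=50, min_samples=2).fit_predict(rects)

for label in set(labels) - {-1}:          # -1 would mark unclustered outliers
    best_fit = rects[labels == label].mean(axis=0)
    print(f"Cluster {label}: best-fit rectangle {best_fit.round(1)}")
```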

“The final module of the course was in some ways the most challenging — requiring a lot of us to dust off the statistics and algebra parts of our brain. However, I think, it was also the most powerful; revealing how machine learning approaches can help us to uncover hidden knowledge and patterns in a huge variety of different areas.  

Completing the course during COVID meant that collection access was limited, so I ended up completing a case study examining how generic tropes have evolved in science fiction across time using a dataset extracted from GoodReads. This work proved to be exceptionally useful in helping me to think about how computers understand language differently; and how we can leverage their ability to make statistical inferences in order to support our own, qualitative analyses. 

In my own collection area, working with born digital archives in Contemporary Archives and Manuscripts, we treat draft material — of novels, poems or anything else — as very important to understanding the creative process. I am excited to apply some of these techniques — particularly Unsupervised Machine Learning — to examine the hidden relationships between draft material in some of our creative archives. 

The course has provided many, many avenues of potential enquiry like this and I’m excited to see the projects that its graduates undertake across the Library.” 

- Callum McKean, Lead Curator, Digital; Contemporary British Collection

“I really enjoyed the Analytics Tools for Data Science module. As a data science novice, I came to the course with limited theoretical knowledge of how data science tools could be applied to answer research questions. The choice of using real-life data to solve queries specific to professionals in the cultural heritage sector was really appreciated as it made everyday applications of the tools and code more tangible. I can see now how curators’ expertise and specialised knowledge could be combined with tools for data analysis to further understanding of and meaningful research in their own collection area."

- Giulia Carla Rossi, Curator, Digital Publications; Contemporary British Collection

Please note this page was originally published in Feb 2021 and some of the resources, job titles and locations may now be out of date.
