Digital Curator Mia Ridge with an update... Last year we posted about the British Library building a dedicated product team to work on the Universal Viewer (an item viewer that can display images, audio, video and even 3D versions of digitised and born-digital library collections), based on the IIIF standard for interoperable images and other media. We're now delighted to introduce the team. I asked them to write a brief note of introduction, mentioning what they've been working on or like about the UV and IIIF. They'll be at the IIIF conference in LA next week, so do say hi if you spot them.
We'd also love to hear from people interested in improving the Universal Viewer's UX in line with current expectations - more below!
Meet the Universal Viewer Product Team
Erin Burnand, Product Owner: I am responsible for developing the product's vision, goal and strategy, and work with stakeholders to ensure we get the best possible product. I have over 20 years' experience working in GLAMs (galleries, libraries, archives and museums), most recently in the British Library's Collection Metadata Authority Control team.
I love the way IIIF presents so many opportunities to be creative with our collections, and how imaginative the community are - I am looking forward to exploring these ideas to engage with our audiences as British Library services begin to be restored.
Lanie Okorodudu, Senior Test Engineer: I joined the British Library in March this year and have been working on testing the functionality of IIIF and UV implementations, new features, bug fixes and updates to ensure they meet our high quality standards. I appreciate the robust and flexible framework packed with features that IIIF offers as these are necessary for efficient testing and validation of digital content delivery systems.
James Misson: I am a Research Software Engineer with an academic background in the digital humanities and book history, currently working on improving developer documentation for the Universal Viewer to facilitate contributions from the community. I enjoy how generous and open the IIIF community is with sharing knowledge and expertise.
Saira Akhter, Research Software Engineer: I joined the Library as part of the Universal Viewer product team back in November. I’ve been working on enhancing the settings dialogue by adding new options and fixing bugs to improve compatibility with various file formats. I like that the UV code is configurable, which allows for flexibility and easy adaptation across different institutions.
Erin, Lanie, James and Saira
Say Hi at the IIIF conference!
The team will be hosting a Universal Viewer 'live' community call at the 2024 IIIF conference in Los Angeles. Join them at the Kerckhoff Coffee House at 10 - 11.30am on Thursday June 6th, where they'll give an update on recent team activities, including work on the user experience of transcribed text (a collaboration with the Swedish National Archives) and improving documentation to make it easier for newcomers to the UV to participate in the community.
What are they working on?
In addition to their comments above, the team have spent some time working through practical questions with the UV Open Collective - the process for pull requests, testing, documentation, and developing and sharing our roadmap.
We've also been talking to folks at Riksarkivet (the Swedish National Archives), as they're working on implementing a search function within transcriptions displayed on the viewer. If you're involved in the IIIF or UV community you might have seen our call for inspiration for 'UV IIIF transcription search design': Have you seen any notable versions of IIIF viewers that display and / or search transcribed (or translated, etc) text? Please add a screenshot and notes to a new slide on our working doc UV IIIF transcription search UX - and thank you to people who've already shared some links and ideas!
V&A visit
The V&A Museum's Digital Media team have been doing interesting things with IIIF for a number of years, so once our team was complete we organised a meetup between the British Library's UV team and the V&A digital folk working with IIIF. In April we went over to the V&A to share our experiences and discuss potential ways to collaborate, like sharing documentation for getting started with the UV. Our thanks to Richard Palmer for hosting, and to Hussain Ali, Luca Carini and Meaghan Curry for sharing their work and ideas.
The British Library and V&A teams on the grass in the V&A's South Kensington courtyard
How can you help?
We - particularly Mia, Erin and Lanie - are keen to work on the viewer's usability (UX) and accessibility. What features are missing compared to other viewers? What tweaks can we make to the spatial grouping and labels for different functions to make them clearer and more consistent? You can get in touch via [email protected], or post on the IIIF or Universal Viewer Slacks (join the Universal Viewer Slack; join the IIIF Slack).
The British Library is continuing to recover from last year’s cyber-attack. While our teams work to restore our services safely and securely, one of our goals in the Digital Research Team is to get some of the information from our currently inaccessible web pages into an easily readable and shareable format. We’ll be sharing these pages via blog posts here, with information recovered from the Wayback Machine, a fantastic initiative of the Internet Archive.
The next page in this series is all about the student projects that came out of our Computing for Cultural Heritage project with the National Archives and Birkbeck University. This student project page was captured by the Wayback Machine on 7 June 2023.
Computing for Cultural Heritage Student Projects
This page provides abstracts for a selection of student projects undertaken as part of a one-year part-time Postgraduate Certificate (PGCert), Computing for Cultural Heritage, co-developed by British Library, National Archives and Birkbeck University and funded by the Institute of Coding as part of a £4.8 million University skills drive.
“I have gone from not being able to print 'hello' in Python to writing some relatively complex programs and having a much greater understanding of data science and how it is applicable to my work."
- Jessica Green
Key points
Aim of the trial was to provide professionals working in the cultural heritage sector with an understanding of basic programming and computational analytic tools to support them in their daily work
During the Autumn & Spring terms (October 2019-April 2020), 12 staff members from the British Library and 8 staff members from The National Archives completed two new trial modules at Birkbeck University: Demystifying computing for heritage professionals and Work-based Project
Transforming Physical Labels into Digital References
Sotirios Alpanis, British Library
This project aims to use computing to convert data collected during the preparation of archive material for digitisation into a tool that can verify and validate image captures, and subsequently label them. This will take as its input physical information about each document being digitised, perform and facilitate a series of validations throughout image capture and quality assurance, and result in an XML file containing a map of physical labels to digital files. The project will take place within the British Library/Qatar Foundation Partnership (BL/QFP), which is digitising archive material for display on the QDL.qa.
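As a rough illustration of the kind of output described above, the sketch below writes a label-to-file map as XML using Python's standard library. The element names and sample values are hypothetical, not the BL/QFP schema, and the validation steps are omitted.

```python
import xml.etree.ElementTree as ET

# Illustrative input: physical foliation labels paired with captured image files.
# The real project derives these from digitisation preparation and QA data.
label_map = [
    ("folio 1r", "IOR_R_15_1_0001.tif"),
    ("folio 1v", "IOR_R_15_1_0002.tif"),
]

root = ET.Element("labelMap")
for label, filename in label_map:
    item = ET.SubElement(root, "item")
    ET.SubElement(item, "physicalLabel").text = label
    ET.SubElement(item, "digitalFile").text = filename

ET.ElementTree(root).write("label_map.xml", encoding="utf-8", xml_declaration=True)
```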
Enhancing national thesis metadata with persistent identifiers
Jenny Basford, British Library
Working with data from ISNI (International Standard Name Identifier) Agency and EThOS (Electronic Theses Online Service), both based at the British Library, I intend to enhance the metadata of both databases by identifying doctoral supervisors in thesis metadata and matching these data with ISNI holdings. This work will also feed into the European-funded FREYA project, which is concerned with the use of a wide variety of persistent identifiers across the research landscape to improve openness in research culture and infrastructure through Linked Data applications.
A software tool to support the social media activities of the Unlocking Our Sound Heritage Project
Lucia Cavorsi, British Library (Video)
I would like to design a software tool able to flag forthcoming anniversaries, by comparing all the dates present in SAMI (sound and moving image catalogue – Sound Archive) with the current date. The aim of this tool is to suggest potential content for the Sound Archive’s social media posts. Useful dates in SAMI which could be matched with the current date and provide material for tweets are: birth and death dates of performers or authors, radio programme broadcast dates, and recording dates. I would like this tool to also match the subjects currently present in SAMI with the subjects featured in the list of anniversaries 2020 which the social media team uses, for example anniversaries like ‘International HIV day’ and ‘International day of Lesbian visibility’. A Windows pop-up message will be designed for anniversary notifications on the day. If time permits, it would be convenient to also analyse what hashtags have been used over the last year by the people who are followed by or follow the Sound Archive Twitter account. By extracting a list of these hashtags, further (and more sound-related) anniversaries could be added to the list of anniversaries currently used by the UOSH social media team.
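A minimal sketch of the date-matching idea at the heart of this tool, assuming the SAMI dates have been exported as ISO-formatted strings (the record structure and field names here are hypothetical):

```python
from datetime import date

# Hypothetical export of SAMI records: (title, significant date as YYYY-MM-DD).
records = [
    ("Interview with performer X", "1950-06-04"),
    ("Radio broadcast Y", "1970-06-04"),
]

today = date.today()
for title, date_str in records:
    d = date.fromisoformat(date_str)
    # An anniversary is simply a matching month and day in any earlier year.
    if (d.month, d.day) == (today.month, today.day):
        print(f"Anniversary today: {title} ({d.year})")
```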
Computing Cholera: Topic modelling the catalogue entries of the General Board of Health
Christopher Day, The National Archives (Blog / Other)
The correspondence of the General Board of Health (1848–1871) documents the work of a body set up to deal with cholera epidemics in a period where some English homes were so filthy as to be described as “mere pigholes not fit for human beings”. Individual descriptions for each of these over 89,000 letters are available on Discovery, The National Archives (UK)’s catalogue. Now, some 170 years later, access to the letters themselves has been disrupted by another epidemic, COVID-19. This paper examines how data science can be used to repurpose archival catalogue descriptions, initially created to enhance the ‘human findability’ of records (and favoured by many UK archives due to high digitisation costs), for large-scale computational analysis. The records of the General Board will be used as a case study: their catalogue descriptions topic modelled using a latent Dirichlet allocation model, visualised, and analysed – giving an insight into how new sanitary regulations were negotiated with a divided public during an epidemic. The paper then explores the validity of using the descriptions of historical sources as a source in their own right; and asks how, during a time of restricted archival access, metadata can be used to continue research.
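For readers curious about the mechanics, the sketch below shows one common way to topic model catalogue descriptions with latent Dirichlet allocation, using the gensim library. It assumes the descriptions have already been tokenised and stripped of stop words; the toy data and the number of topics are purely illustrative.

```python
from gensim import corpora, models

# Each catalogue description reduced to a list of tokens (toy examples).
descriptions = [
    ["cholera", "outbreak", "sanitary", "report"],
    ["nuisance", "removal", "drainage", "complaint"],
    ["board", "health", "inspector", "appointment"],
]

# Map tokens to integer ids and build a bag-of-words corpus.
dictionary = corpora.Dictionary(descriptions)
corpus = [dictionary.doc2bow(tokens) for tokens in descriptions]

# Fit a small LDA model; num_topics would be tuned on the full 89,000 descriptions.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```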
An Automated Text Extraction Tool for Use on Digitised Maps
Nicholas Dykes, British Library (Blog / Video)
Researchers of history often have difficulty geo-locating historical place names in Africa. I would like to apply automated transcription techniques to a digitised archive of historical maps of Africa to create a resource that will allow users to search for text, and discover where, and on which maps that text can be found. This will enable identification and analysis both of historical place names and of other text, such as topographical descriptions. I propose to develop a software tool in Python that will send images stored locally to the Google Vision API, and retrieve and process a response for each image, consisting of a JSON file containing the text found, pixel coordinate bounding boxes for each instance of text, and a confidence score. The tool will also create a copy of each image with the text instances highlighted. I will experiment with the parameters of the API in order to achieve the most accurate results. I will incorporate a routine that will store each related JSON file and highlighted image together in a separate folder for each map image, and create an Excel spreadsheet containing text results, confidence scores, links to relevant image folders, and hyperlinks to high-res images hosted on the BL website. The spreadsheet and subfolders will then be packaged together into a single downloadable resource. The finished software tool will have the capability to create a similar resource of interlinked spreadsheet and subfolders from any batch of images.
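A simplified sketch of the request/response round trip described above, using the google-cloud-vision client library. Filenames are illustrative, error handling and the Excel index are omitted, and the exact client calls may vary between library versions.

```python
import json
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # requires Google Cloud credentials to be configured

with open("map_sheet_001.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)

# Each annotation carries the detected text and a pixel bounding polygon.
results = []
for annotation in response.text_annotations[1:]:  # index 0 is the full-page text block
    box = [(v.x, v.y) for v in annotation.bounding_poly.vertices]
    results.append({"text": annotation.description, "bounding_box": box})

with open("map_sheet_001.json", "w", encoding="utf-8") as f:
    json.dump(results, f, ensure_ascii=False, indent=2)
```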
Reconstituting a Deconstructed Dataset using Python and SQLite
Alex Green, The National Archives (Video)
For this project I will rebuild a database and establish the referential integrity of the data from CSV files using Python and SQLite. To do this I will need to study the data, read the documentation, draw an entity relationship diagram and learn more about relational databases. I want to enable users to query the data as they would have been able to in the past. I will then make the code reusable so it can be used to rebuild other databases, testing it with a further two datasets in CSV form. As an additional challenge, I plan to rearrange the data to meet the principles of ‘tidy data’ to aid data analysis.
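A minimal sketch of the CSV-to-SQLite rebuild described above, with hypothetical table and column names standing in for the real dataset:

```python
import csv
import sqlite3

conn = sqlite3.connect("rebuilt.db")
conn.execute("PRAGMA foreign_keys = ON")

# Table definitions reconstructed from the documentation / entity relationship diagram.
conn.execute("""CREATE TABLE IF NOT EXISTS series (
                    series_id TEXT PRIMARY KEY,
                    title     TEXT)""")
conn.execute("""CREATE TABLE IF NOT EXISTS item (
                    item_id     TEXT PRIMARY KEY,
                    series_id   TEXT REFERENCES series(series_id),
                    description TEXT)""")

def load_csv(path, table, columns):
    """Insert every row of a CSV export into the named table."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = [tuple(row[c] for c in columns) for row in csv.DictReader(f)]
    placeholders = ", ".join("?" for _ in columns)
    conn.executemany(
        f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})", rows
    )

load_csv("series.csv", "series", ["series_id", "title"])
load_csv("item.csv", "item", ["item_id", "series_id", "description"])
conn.commit()
```

With foreign keys switched on, any item row pointing at a missing series is rejected, which is one simple way to check the referential integrity of the rebuilt data.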
PIMMS: Developing a Model Pre-Ingest Metadata Management System at the British Library
Jessica Green, British Library (GitHub / Video)
I am proposing a solution for analysing a vast amount of ‘legacy’ BL digitised content and preparing it for ingest into the future Digital Asset Management System (DAMPS). This involves building a prototype for a SQL database to aggregate metadata about digitised content and preparing for SIP creation. In addition, I will write basic queries to aid in our ongoing analysis of these TIFF files, including planning for storage, copyright, digital preservation and duplicate analysis. I will use Python to import sample metadata from BL sources like SharePoint, Excel and BL catalogues – currently used for analysis of ‘live’ and ‘legacy’ digitised BL collections. There is at least 1 PB of digitised content on the BL networks alone, as well as on external media such as hard drives and CDs. We plan to ingest only one copy of each digitised TIFF file set and need to ensure that the metadata is accurate and up to date at the point of ingest. This database, the Pre-Ingest Metadata Management System (PIMMS), could serve as a central metadata repository for legacy digitised BL collections until then. I look forward to using Python and SQL, as well as drawing on the coding skills of others, to make these processes more efficient and effective going forward.
Exploring, cleaning and visualising catalogue metadata
Alex Hailey, British Library (Blog / Video)
Working with catalogue metadata for the India Office Records (IOR) I will undertake three tasks: 1) converting c430,000 IOR/E index entries to descriptions within the relevant volume entries; 2) producing an SQL database for 46,500 IOR/P descriptions, allowing enhanced search when compared with the BL catalogue; and 3) creating Python scripts for searching, analysis and visualisation, to be demonstrated on dataset(s) and delivered through Jupyter Notebooks.
Automatic generation of unique reference numbers for structured archival data.
Graham Jevon, British Library (Blog / Video / GitHub)
The British Library’s Endangered Archives Programme (EAP) funds the digital preservation of endangered archival material around the world. Third party researchers digitise material and send the content to the British Library. This is accompanied by an Excel spreadsheet containing metadata that describes the digitised content. EAP’s main task is to clean, validate, and enhance the metadata prior to ingesting it into the Library’s cataloguing system (IAMS). One of these tasks is the creation of unique catalogue reference numbers for each record (each row of data on the spreadsheet). This is a predominantly manual process that is potentially time consuming and subject to human inputting errors. This project seeks to solve this problem. The intention is to create a Windows executable program that will enable users to upload a csv file, enter a prefix, and then click generate. The instant result will be an export of a new csv file, which contains the data from the original csv file plus automatically generated catalogue reference numbers. These reference numbers are not random. They are structured in accordance with an ordered archival hierarchy. The program will include additional flexibility to account for several variables, including language encoding, computational efficiency, data validation, and wider re-use beyond EAP and the British Library.
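The sketch below illustrates the core idea of generating hierarchical reference numbers from an ordered spreadsheet. It assumes a hypothetical 'level' column describing each row's depth in the archival hierarchy; the real EAP tool handles many more variables (language encoding, validation, a Windows interface) than this toy version.

```python
import csv

def add_references(rows, prefix):
    """Assign hierarchical reference numbers (e.g. EAP123/1, EAP123/1/2) to rows
    in spreadsheet order, using a hypothetical 'level' column (1 = series, 2 = file, ...)."""
    counters = []  # running sibling count at each hierarchy level
    for row in rows:
        level = int(row["level"])
        if len(counters) >= level:
            counters = counters[:level]                   # drop counters for deeper levels
        else:
            counters.extend([0] * (level - len(counters)))  # open new, deeper levels
        counters[level - 1] += 1                          # next sibling at this level
        row["reference"] = prefix + "/" + "/".join(str(n) for n in counters)
    return rows

with open("metadata.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for row in add_references(rows, "EAP123"):
    print(row["reference"])
```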
Automating Metadata Extraction in Born Digital Processing
Callum McKean, British Library (Video)
To automate the metadata extraction section of the Library’s current work-flow for born-digital processing using Python, then interrogate and collate information in new ways using the SQLite module.
Analysis of peak customer interactions with Reference staff at the British Library: a software solution
Jaimee McRoberts, British Library (Video)
The British Library, facing on-going budget constraints, has a need to efficiently deploy Reference Services staff during peak periods of demand. The service would benefit from analysis of existing statistical data recording the timestamp of each customer interaction at a Reference Desk. In order to do this, a software solution is required to extract, analyse, and output the necessary data. This project report demonstrates a solution utilising Python alongside the pandas library which has successfully achieved the required data analysis.
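As an illustration of the kind of analysis described, the sketch below counts Reference Desk interactions by weekday and hour using pandas. The filename and column names are hypothetical.

```python
import pandas as pd

# Hypothetical export: one row per customer interaction, with a timestamp column.
interactions = pd.read_csv("reference_desk_interactions.csv", parse_dates=["timestamp"])

# Derive the hour of day and day of week for each interaction.
interactions["hour"] = interactions["timestamp"].dt.hour
interactions["weekday"] = interactions["timestamp"].dt.day_name()

# Count interactions in each weekday/hour slot and rank them to find peak periods.
peaks = (interactions
         .groupby(["weekday", "hour"])
         .size()
         .rename("interactions")
         .sort_values(ascending=False))

print(peaks.head(10))  # the ten busiest weekday/hour slots
```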
Enhancing the data in the Manorial Documents Register (MDR) and making it more accessible
Elisabeth Novitski, The National Archives (Video)
To develop computer scripts that will take the data from the existing separate and inconsistently formatted files and merge them into a consistent and organised dataset. This data will be loaded into the Manorial Documents Register (MDR) and National Register of Archives (NRA) to provide the user with improved search ability and access to the manorial document information.
Automating data analysis for collection care research at The National Archives: spectral and textual data
Lucia Pereira Pardo, The National Archives
The day-to-day work of a conservation scientist working for the care of an archival collection involves acquiring experimental data from the varied range of materials present in the physical records (inks, pigments, dyes, binding media, paper, parchment, photographs, textiles, degradation and restoration products, among others). To this end, we use multiple and complementary analytical and testing techniques, such as X-ray fluorescence (XRF), Fourier Transform Infrared (FTIR) and Fibre Optic Reflectance spectroscopies (FORS), multispectral imaging (MSI), colour and gloss measurements, microfading (MFT) and other accelerated ageing tests. The outcome of these analyses is a heterogeneous and often large dataset, which can be challenging and time-consuming to process and analyse. Therefore, the objective of this project is to automate these tasks when possible, or at least to apply computing techniques to optimise the time and efforts invested in routine operations, so that resources are freed for actual research and more specialised and creative tasks dealing with the interpretation of the results.
Improving efficiencies in content development through batch processing and the automation of workloads
Harriet Roden, British Library (Video)
With the purpose of supporting and enriching the curriculum, the British Library’s Digital Learning team produces large-scale content packages for online learners through individual projects. Due to their reliance on other internal teams within the workflow for content delivery, a substantial amount of resource is spent on routine tasks to duplicate collection metadata across various databases. In order to reduce inefficiencies, increase productivity and improve reliability, my project aimed to alleviate pressures across the workflow through workload automation, delivered in four separate phases.
The Botish Library: building a poetry printing machine with Python
Giulia Carla Rossi, British Library (Blog / Video)
This project aims to build a poetry printing machine, as a creative output that unites traditional content, new media and Python. The poems will be sourced from the British Library Digitised Books dataset collection, available under Public Domain Mark; I will sort through the datasets and identify which titles can be categorised as poetry using Python. I will then create a new dataset comprising these poetry books and related metadata, which will then be connected to the printer with a Python script. The poetry printing machine will print randomised poems from this new dataset, together with some metadata (e.g. poem title, book title, author and shelfmark ID) that will allow users to easily identify the book.
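A toy sketch of the selection step, assuming a metadata file with hypothetical column names: filter titles that look like poetry, then pick one at random for printing.

```python
import random
import pandas as pd

# Hypothetical metadata file for the Digitised Books dataset.
books = pd.read_csv("digitised_books_metadata.csv")

# Crude first pass: treat titles containing these words as poetry.
keywords = ["poem", "poems", "poetry", "verse", "ballad"]
pattern = "|".join(keywords)
poetry = books[books["title"].str.contains(pattern, case=False, na=False)]

# Pick a random poetry book and print the metadata a reader would need to find it.
choice = poetry.sample(1).iloc[0]
print(choice["title"], "-", choice["author"], "- shelfmark:", choice["shelfmark"])
```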
Automating data entry in the UOSH Tracking Database
Chris Weaver, British Library
The proposed software solution is the creation of a Python script (to feature as a module in a larger script) to extract data from a web-based tool, either by obtaining data in JSON format via the site's API or by accessing the database powering the site directly. The data obtained is then formatted and inserted into corresponding fields in a Microsoft SQL Server database.
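A minimal sketch of the API route described above, using requests to fetch JSON and pyodbc to insert it into SQL Server. The endpoint, connection string, table and field names are hypothetical placeholders.

```python
import pyodbc
import requests

# Hypothetical endpoint; the real script targets the UOSH tracking tool's API.
response = requests.get("https://tracking.example.org/api/items", timeout=30)
response.raise_for_status()
items = response.json()

# Hypothetical SQL Server connection and table.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbserver;DATABASE=UOSH;Trusted_Connection=yes"
)
cursor = conn.cursor()

for item in items:
    # Map JSON fields to the corresponding table columns.
    cursor.execute(
        "INSERT INTO dbo.TrackingItems (ItemId, Title, Status) VALUES (?, ?, ?)",
        item["id"], item["title"], item["status"],
    )
conn.commit()
```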
Final Module
Following the completion of the trial, participants had the opportunity to complete their PGCert in Applied Data Science by attending the final module, Analytic Tools for Information Professionals, which was part of the official course launched last autumn. We followed up with some of the participants to hear more about their experience of the full course:
“The third and final module of the computing for cultural heritage course was not only fascinating and enjoyable, it was also really pertinent to my job and I was immediately able to put the skills I learned into practice.
The majority of the third module focussed on machine learning. We studied a number of different methods and one of these proved invaluable to the Agents of Enslavement research project I am currently leading. This project included a crowdsourcing task which asked the public to draw rectangles around four different types of newspaper advertisement. The purpose of the task was to use the coordinates of these rectangles to crop the images and create a dataset of adverts that can then be analysed for research purposes. To help ensure that no adverts were missed and to account for individual errors, each image was classified by five different people.
One of my biggest technical challenges was to find a way of aggregating the rectangles drawn by five different people on a single page in order to calculate the rectangles of best fit. If each person only drew one rectangle, it was relatively easy for me to aggregate the results using the coding skills I had developed in the first two modules. I could simply find the average (or mean) of the five different classification attempts. But what if people identified several adverts and therefore drew multiple rectangles on a single page? For example, what if person one drew a rectangle around only one advert in the top left corner of the page; people two and three drew two rectangles on the same page, one in the top left and one in the top right; and people four and five drew rectangles around four adverts on the same page (one in each corner). How would I be able to create a piece of code that knew how to aggregate the coordinates of all the rectangles drawn in the top left and to separately aggregate the coordinates of all the rectangles drawn in the bottom right, and so on?
One solution to this problem was to use an unsupervised machine learning method to cluster the coordinates before running the aggregation method. Much to my amazement, this worked perfectly and enabled me to successfully process the total of 92,218 rectangles that were drawn and create an aggregated dataset of more than 25,000 unique newspaper adverts.”
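The post doesn't say which unsupervised method was used, but the general approach can be sketched with DBSCAN from scikit-learn: cluster the rectangles drawn around the same advert, then average each cluster's coordinates to get a 'best fit' rectangle. The coordinates below are toy values, not project data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Each rectangle as (x_min, y_min, x_max, y_max), drawn by up to five volunteers
# on the same page (toy values; real coordinates come from the crowdsourcing export).
rects = np.array([
    [100, 90, 300, 200], [105, 95, 295, 205], [98, 88, 305, 198],   # top-left advert
    [600, 80, 800, 190], [598, 85, 802, 195],                        # top-right advert
])

# Cluster rectangle centres so rectangles around the same advert group together.
centres = np.column_stack([(rects[:, 0] + rects[:, 2]) / 2,
                           (rects[:, 1] + rects[:, 3]) / 2])
labels = DBSCAN(eps=50, min_samples=2).fit_predict(centres)

# Aggregate each cluster into a single "best fit" rectangle by averaging coordinates.
for label in sorted(set(labels) - {-1}):
    print(label, rects[labels == label].mean(axis=0))
```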
“The final module of the course was in some ways the most challenging — requiring a lot of us to dust off the statistics and algebra parts of our brain. However, I think, it was also the most powerful; revealing how machine learning approaches can help us to uncover hidden knowledge and patterns in a huge variety of different areas.
Completing the course during COVID meant that collection access was limited, so I ended up completing a case study examining how generic tropes have evolved in science fiction across time using a dataset extracted from GoodReads. This work proved to be exceptionally useful in helping me to think about how computers understand language differently; and how we can leverage their ability to make statistical inferences in order to support our own, qualitative analyses.
In my own collection area, working with born digital archives in Contemporary Archives and Manuscripts, we treat draft material — of novels, poems or anything else — as very important to understanding the creative process. I am excited to apply some of these techniques — particularly Unsupervised Machine Learning — to examine the hidden relationships between draft material in some of our creative archives.
The course has provided many, many avenues of potential enquiry like this and I’m excited to see the projects that its graduates undertake across the Library.”
- Callum McKean, Lead Curator, Digital; Contemporary British Collection
“I really enjoyed the Analytics Tools for Data Science module. As a data science novice, I came to the course with limited theoretical knowledge of how data science tools could be applied to answer research questions. The choice of using real-life data to solve queries specific to professionals in the cultural heritage sector was really appreciated as it made everyday applications of the tools and code more tangible. I can see now how curators’ expertise and specialised knowledge could be combined with tools for data analysis to further understanding of and meaningful research in their own collection area."
- Giulia Carla Rossi, Curator, Digital Publications; Contemporary British Collection
Please note this page was originally published in Feb 2021 and some of the resources, job titles and locations may now be out of date.
The British Library is continuing to recover from last year’s cyber-attack. While our teams work to restore our services safely and securely, one of our goals in the Digital Research Team is to get some of the information from our currently inaccessible web pages into an easily readable and shareable format. We’ll be sharing these pages via blog posts here, with information recovered from the Wayback Machine, a fantastic initiative of the Internet Archive.
The second page in this series is a case study on the impact of our Digital Scholarship Training Programme, captured by the Wayback Machine on 3 October 2023.
Graham Jevon: A Digital Transformation Story
'The Digital Scholarship Training Programme has introduced me to new software, opened my eyes to digital opportunities, provided inspiration for me to improve, and helped me attain new skills'
Key points
Graham Jevon has been an active participant in the Digital Scholarship Training Programme
Through gaining digital skills he has been able to build software to automate tricky processes
Graham went on to become a Coleridge Fellowship scholar, putting these digital skills to good use!
Find out more on what Graham has been up to on his Staff Profile
Did you know? The Digital Scholarship Training Programme has been running since 2012, and creates opportunities for staff to develop necessary skills and knowledge to support emerging areas of modern scholarship.
The Digital Scholarship Training Programme
Since I joined the Library in 2018, the Digital Scholarship Training Programme has been integral to the trajectory of both my personal development and the working practices within my team.
The very first training course I attended at the library was the introduction to OpenRefine. The key thing that I took away from this course was not necessarily the skills to use the software, but simply understanding OpenRefine’s functionality and the possibilities the software offered for my team. This inspired me to spend time after the session devising a workflow that enhanced our cataloguing efficiency and accuracy, enabling me to create more detailed and accurate metadata in less time. With OpenRefine I created a semi-automated workflow that required the kind of logical thinking associated with computer programming, but without the need to understand a computer programming language.
Computing for Cultural Heritage
The use of this kind of logical thinking and the introduction to writing computational expressions within OpenRefine sparked an interest in me to learn a computing language such as Python. I started a free online Python introduction, but without much context to the course my attention quickly waned. When the Digital Scholarship Computing for Cultural Heritage course was announced I therefore jumped at the chance to apply.
I went into the Computing for Cultural Heritage course hoping to learn skills that would enable me to solve cataloguing and administrative problems, skills that would help me process data in spreadsheets more efficiently and accurately. I had one particular problem in mind and I was able to address this problem in the project module of the course. For the project we had to design a software program. I created a program (known as ReG), which automatically generates structured catalogue references for archival collections. I was extremely pleased with the outcome of this project and this piece of software is something that my team now use in our day-to-day activities. An error-prone task that could take hours or days to complete manually in Excel now takes just a few seconds and is always 100% accurate.
This in itself was a great outcome of the course that met my hopes at the outset. But this course did so much more. I came away from the course with a completely new set of data science skills that I could build on and apply in other areas. For example, I recently created another piece of software that helps my team survey any digitisation data that we receive, to help us spot any errors or problems that need fixing.
The British Library Coleridge Research Fellowship
The data science skills were particularly instrumental in enabling me to apply successfully for the British Library’s Coleridge research fellowship. This research fellowship is partly a personal development scheme, and it gave me the opportunity to put my new data science skills into practice in a research environment (rather than simply using them in a cataloguing context). My previous academic research experience was based on traditional analogue methods, but for the Coleridge project I used crowdsourcing to extract data for analysis from two collections of newspapers.
The third and final Computing for Cultural Heritage module focussed on machine learning and I was able to apply these skills directly to the crowdsourcing project Agents of Enslavement. The first crowdsourcing task, for example, asked the public to draw rectangles around four specific types of newspaper advertisement. To help ensure that no adverts were missed and to account for individual errors, each image was classified by five different people. I therefore had to aggregate the results. Thanks to the new data science skills I had learned, I was able to write a Python script that used machine learning algorithms to aggregate 92,000 total rectangles drawn by the public into an aggregated dataset of 25,000 unique newspaper advertisements.
The OpenRefine and Computing for Cultural Heritage course are just two of the many digital scholarship training sessions that I have attended. But they perfectly illustrate the value of the Digital Scholarship Training Programme, which has introduced me to new software, opened my eyes to digital opportunities, provided inspiration for me to improve, and helped me attain new skills that I have been able to put into practice both for the benefit of myself and my team.
The British Library is continuing to recover from last year’s cyber-attack. While our teams work to restore our services safely and securely, one of our goals in the Digital Research Team is to get some of the information from our currently inaccessible web pages into an easily readable and shareable format. We’ll be sharing these pages via blog posts here, with information recovered from the Wayback Machine, a fantastic initiative of the Internet Archive.
The Digital Scholarship Training Programme has been running since 2012, and creates opportunities for staff to develop necessary skills and knowledge to support emerging areas of modern scholarship.
About
This internal and bespoke staff training programme is one of the cornerstones of the Digital Curator team’s work at the British Library. Running since 2012, it provides colleagues with the space and opportunity to delve into and explore all that digital content and new technologies have to offer in the research domain today. The Digital Curator team oversees the design and delivery of roughly 50-60 training events a year. Since its inception, well over a thousand individual staff members have come through the programme, attending three or more courses each on average, and the Library has seen a step change in its capacity to support innovative digital research.
Objectives
Staff are familiar and conversant with the foundational concepts, methods and tools of digital scholarship.
Staff are empowered to innovate.
Collaborative digital initiatives flourish across subject areas within the Library as well as externally.
Our internal capacity for training and skill-sharing in digital scholarship is a shared responsibility across the Library.
The Programme
What's it all about?
To celebrate our tenth anniversary, we created a series of video testimonials from the people behind the Training Programme - coordinators, instructors, and attendees. Click 'Watch on YouTube' to view the whole series of videos.
Nora McGregor, Digital Curator, gives a presentation all about the Digital Scholarship Training Programme - where it started, where it's going and what it hopes to accomplish.
Courses
As digital research methods have changed over time, so too have course topics and content. Today's full course catalogue reflects this through a diversity of topics, from cleaning up data and digital storytelling to command-line programming and geo-referencing.
Courses range from half-day to full-day workshops for no more than 15 attendees at a time, and are taught mainly by staff members, with external trainers brought in where necessary. Example courses include:
We host a monthly “Hack & Yack” to run alongside the more formal training programme. During these two-hour self-paced casual meet-ups, open to all staff, the group works through a variety of online tutorials on a particular digital topic. Example sessions include:
The Digital Scholarship Reading Group holds informal discussions on the first Tuesday of each month. Each month we discuss an article, conference, podcast or video related to digital scholarship. It's a great way to keep up with new ideas or reality check trends in digital scholarship (including the digital humanities). We welcome people from any department in the Library, and take suggestions for topics that are particularly relevant to diverse teams or disciplines.
Curious about what we cover? Check out this previous blog post that covers the last five years of our Reading Group.
21st Century Curatorship Talk Series
The Digital Scholarship team hosts the 21st Century Curatorship Programme (C21st), a series of professional development talks and seminars, open to all staff, providing a forum for keeping up with new developments and emerging technologies in scholarship, libraries and cultural heritage.
What’s new?
In 2019, the British Library and partners Birkbeck University and The National Archives were awarded £222,420 in funding by the Institute of Coding (IoC) to co-develop a one-year part-time Postgraduate Certificate (PGCert), Computing for Cultural Heritage, as part of a £4.8 million University skills drive. The new course aims to provide working professionals, particularly across the GLAM sector (Galleries, Libraries, Archives and Museums), with an understanding of basic programming, analytic tools and computing environments to support them in their daily work.
Further information
For more information on the Training Programme's most recent year, including our performance numbers and topics covered by the training, please see our full-screen, interactive infographic.
We're always pleased to hear that people want to use images from our Flickr collection, and appreciate folk who get in touch to check if their proposed use - particularly commercial use - is ok.
You don't have to credit the Library when using our public domain images, but we always appreciate credit where possible as a way of celebrating their re-use and to help other people find the collection.
If you'd like to credit us, you can say something like 'Images courtesy of the British Library’s Flickr Collection'.
We also love hearing how people have used our images, so please do let us know ([email protected]) about the results if you do use them.
By Digital Curator Mia Ridge for the British Library's Digital Research team
This blog post is by Peter Smith, DPhil Student at the Faculty of Asian and Middle Eastern Studies, University of Oxford
Introduction
The study of writing and literature has been transformed by the mass transcription of printed materials, aided significantly by the use of Optical Character Recognition (OCR). This has enabled textual analysis through a growing array of digital techniques, ranging from simple word searches in a text to linguistic analysis of large corpora – the possibilities are yet to be fully explored. However, printed materials are only one expression of the written word and tend to be more representative of certain types of writing. These may be shaped by efforts to standardise spelling or character variants, they may use more formal or literary styles of language, and they are often edited and polished with great care. They will never reveal the great, messy diversity of features that occur in writings produced by the human hand. What of the personal letters and documents, poems and essays scribbled on paper with no intention of distribution; the unpublished drafts of a major literary work; or manuscript editions of various classics that, before the use of print, were the sole means of preserving ancient writings and handing them on to future generations? These are also a rich resource for exploring past lives and events or expressions of literary culture.
The study of handwritten materials is not new but, until recently, the possibilities for analysing them using digital tools have been quite limited. With the advent of Handwritten Text Recognition (HTR) the picture is starting to change. HTR applications such as Transkribus and eScriptorium are capable of learning to transcribe a broad range of scripts in multiple languages. As the potential of these platforms develops, large collections of manuscripts can be automatically transcribed and consequently explored using digital tools. Institutions such as the British Library are doing much to encourage this process and improve accessibility of the transcribed works for academic research and the general interest of the public. My recent role in an HTR project at the Library represents one small step in this process, and here I hope to provide a glimpse behind the scenes at some of the challenges of developing HTR.
As a PhD student exploring classical Chinese texts, I was delighted to find a placement at the British Library working on HTR of historical Chinese manuscripts. This project proceeded under the guidance of my British Library supervisors Dr Adi Keinan-Schoonbaert and Mélodie Doumy. I was also provided with support and expertise from outside of the Library: Colin Brisson is part of a group working on Chinese Historical documents Automatic Transcription (CHAT). They have already gathered and developed preliminary models for processing handwritten Chinese with the open source HTR application eScriptorium. I worked with Colin to train the software further using materials from the British Library. These were drawn entirely from the fabulous collection of manuscripts from Dunhuang, China, which date back to the Tang dynasty (618–907 CE) and beyond. Examples of these can be seen below, along with reference numbers for each item, and the originals can be viewed on the new website of the International Dunhuang Programme. Some of these texts were written with great care in standard Chinese scripts and are very well preserved. Others are much more messy: cursive scripts, irregular layouts, character corrections, and margin notes are all common features of handwritten work. The writing materials themselves may be stained, torn, or eaten by animals, resulting in missing or illegible text. All these issues have the potential to mislead the ‘intelligence’ of a machine. To overcome such challenges the software requires data – multiple examples of the diverse elements it might encounter and instruction as to how they should be understood.
The challenges encountered in my work on HTR can be examined in three broad categories, reflecting three steps in the HTR process of eScriptorium: image binarisation, layout segmentation, and text recognition.
Image binarisation
The first task in processing an image is to reduce its complexity, to remove any information that is not relevant to the output required. One way of doing this is image binarisation, taking a colour image and using an algorithm to strip it of hue and brightness values so that only black and white pixels remain. This was achieved using a binarisation model developed by Colin Brisson and his partners. My role in this stage was to observe the results of the process and identify strengths and weaknesses in the current model. These break down into three different categories: capturing details, stained or discoloured paper, and colour and density of ink.
1. Capturing details
In the process of distinguishing the brushstrokes of characters from other random marks on the paper, it is perhaps inevitable that some thin or faint lines – occurring as a feature of the handwritten text or through deterioration over time – might be lost during binarisation. Typically the binarisation model does very well in picking them out, as seen in figure 1:
Fig 1. Good retention of thin lines (S.3011, recto image 23)
While problems with faint strokes are understandable, it was surprising to find that loss of detail was also an issue in somewhat thicker lines. I wasn’t able to determine the cause of this but it occurred in more than one image. See figures 2 and 3:
Fig 2. Loss of detail in thick lines (S.3011, recto image 23)
Fig 3. Loss of detail in thick lines (S.3011, recto image 23)
2. Stained and discoloured paper
Where paper has darkened over time, the contrast between ink and background is diminished and during binarisation some writing may be entirely removed along with the dark colours of the paper. Although I encountered this occasionally, unless the background was really dark the binarisation model did well. One notable success is its ability to remove the dark colours of partially stained sections. This can be seen in figure 4, where a dark stain is removed while a good amount of detail is retained in the written characters.
Fig 4. Good retention of character detail on heavily stained paper (S.2200, recto image 6)
3. Colour and density of ink
The majority of manuscripts are written in black ink, ideal for creating good contrast with most background colourations. In some places however, text may be written with less concentrated ink, resulting in greyer tones that are not so easy to distinguish from the paper. The binarisation model can identify these correctly but sometimes it fails to distinguish them from the other random markings and colour variations that can be found in the paper of ancient manuscripts. Of particular interest is the use of red ink, which is often indicative of later annotations in the margins or between lines, or used for the addition of punctuation. The current binarisation model will sometimes ignore red ink if it is very faint but in most cases it identifies it very well. In one impressive example, shown in figure 5, it identified the red text while removing larger red marks used to highlight other characters written in black ink, demonstrating an ability to distinguish between semantic and less significant information.
Fig 5. Effective retention of red characters and removal of large red marks (S.2200, recto image 7)
In summary, the examples above show that the current binarisation model is already very effective at eliminating unwanted background colours and stains while preserving most of the important character detail. Its response to red ink illustrates a capacity for nuanced analysis. It does not treat every red pixel in the same way, but determines whether to keep it or remove it according to the context. There is clearly room for further training and refinement of the model but it already produces materials that are quite suitable for the next stages of the HTR process.
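eScriptorium's binarisation model is a trained neural network rather than a simple threshold, but the basic idea of reducing a colour image to black and white pixels can be illustrated with a classical method such as Otsu's thresholding in OpenCV (the filenames here are illustrative):

```python
import cv2

# Read a manuscript image, convert it to greyscale, and apply Otsu's global threshold,
# which picks a black/white cut-off automatically from the image histogram.
image = cv2.imread("manuscript_page.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("manuscript_page_binarised.png", binary)
```

A single global threshold like this struggles with exactly the cases discussed above (stains, faint ink, red annotations), which is why a trained model that takes local context into account performs so much better on the Dunhuang material.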
Layout segmentation
Segmentation defines the different regions of a digitised manuscript and the type of content they contain, either text or image. Lines are drawn around blocks of text to establish a text region and for many manuscripts there is just one per image. Anything outside of the marked regions will just be ignored by the software. On occasion, additional regions might be used to distinguish writings in the margins of the manuscript. Finally, within each text region the lines of text must also be clearly marked. Having established the location of the lines, they can be assigned a particular type. In this project the options include ‘default’, ‘double line’, and ‘other’ – the purpose of these will be explored below.
All of this work can be automated in eScriptorium using a segmentation model. However, when it comes to analysing Chinese manuscripts, this model was the least developed component in the eScriptorium HTR process and much of our work focused on developing its capabilities. My task was to run binarised images through the model and then manually correct any errors. Figure 6 shows the eScriptorium interface and the initial results produced by the segmentation model. Vertical sections of text are marked with a purple line and the endings of each section are indicated with a horizontal pink line.
Fig 6. Initial results of the segmentation model section showing multiple errors. The text is the Zhuangzi Commentary by Guo Xiang (S.1603)
This example shows that the segmentation model is very good at positioning a line in the centre of a vertical column of text. Frequently, however, single lines of text are marked as a sequence of separate lines while other lines of text are completely ignored. The correct output, achieved through manual segmentation, is shown in figure 7. Every line is marked from beginning to end with no omissions or inappropriate breaks.
Fig 7. Results of manual segmentation showing the text region (the blue rectangle) and the single and double lines of text (S.1603)
Once the lines of a text are marked, line masks can be generated automatically, defining the area of text around each line. Masks are needed to show the transcription model (discussed below) exactly where it should look when attempting to match images on the page to digital characters. The example in figure 8 shows that the results of the masking process are almost perfect, encompassing every Chinese character without overlapping other lines.
Fig 8. Line masks outline the area of text associated with each line (S.1603)
The main challenge with developing a good segmentation model is that manuscripts in the Dunhuang collection have so much variation in layout. Large and small characters mix together in different ways and the distribution of lines and characters can vary considerably. When selecting material for this project I picked a range of standard layouts. This provided some degree of variation but also contained enough repetition for the training to be effective. For example, the manuscript shown above in figures 6–8 combines a classical text written in large characters interspersed with double lines of commentary in smaller writing, in this case it is the Zhuangzi Commentary by Guo Xiang. The large text is assigned the ‘default’ line type while the smaller lines of commentary are marked as ‘double-line’ text. There is also an ‘other’ line type which can be applied to anything that isn’t part of the main text – margin notes are one example. Line types do not affect how characters are transcribed but they can be used to determine how different sections of text relate to each other and how they are assembled and formatted in the final output files.
Fig 9. A section from the Lotus Sūtra with a text region, lines of prose, and lines of verse clearly marked (Or8210/S.1338)
Figures 8 and 9, above, represent standard layouts used in the writing of a text but manuscripts contain many elements that are more random. Of these, inter-line annotations are a good example. They are typically added by a later hand, offering comments on a particular character or line of text. Annotations might be as short as a single character (figure 10) or could be a much longer comment squeezed in between the lines of text (figure 11). In such cases these additions can be distinguished from the main text by being labelled with the ‘other’ line type.
Fig 10. Single character annotation in S.3011, recto image 14 (left) and a longer annotation in S.5556, recto image 4 (right)
Fig 11. A comment in red ink inserted between two lines of text (S.2200, recto image 5)
Other occasional features include corrections to the text. These might be made by the original scribe or by a later hand. In such cases one character may be blotted out and a replacement added to the side, as seen in figure 12. For the reader, these should be understood as part of the text itself but for the segmentation model they appear similar or identical to annotations. For the purpose of segmentation training any irregular features like this are identified using the ‘other’ line type.
Fig 12. Character correction in S.3011, recto image 23.
As the examples above show, segmentation presents many challenges. Even the standard features of common layouts offer a degree of variation and in some manuscripts irregularities abound. However, work done on this project has now been used for further training of the segmentation model and reports are promising. The model appears capable of learning quickly, even from relatively small data sets. As the process improves, time spent using and training the model offers increasing returns. Even if some errors remain, manual correction is always possible and segmented images can pass through to the final stage of text recognition.
Text recognition
Although transcription is the ultimate aim of this process it consumed less of my time on the project so I will keep this section relatively brief. Fortunately, this is another stage where the available model works very well. It had previously been trained on other print and manuscript collections so a well-established vocabulary set was in place, capable of recognising many of the characters found in historical writings. Dealing with handwritten text is inevitably a greater challenge for a transcription model but my selection of manuscripts included several carefully written texts. I felt there was a good chance of success and was very keen to give it a go, hoping I might end up with some usable transcriptions of these works. Once the transcription model had been run I inspected the first page using eScriptorium’s correction interface as illustrated in figure 13.
Fig 13. Comparison of image and transcription in eScriptorium’s correction interface.
The interface presents a single line from the scanned image alongside the digitally transcribed text, allowing me to check each character and amend any errors. I quickly scanned the first few lines hoping I would find something other than random symbols – I was not disappointed! The results weren’t perfect of course but one or two lines actually came through with no errors at all and generally the character error rate seems very low. After careful correction of the errors that remained and some additional work on the reading order of the lines, I was able to export one complete manuscript transcription bringing the whole process to a satisfying conclusion.
Final thoughts
Naturally there is still some work to be done. All the models would benefit from further refinement and the segmentation model in particular will require training on a broader range of layouts before it can handle the great diversity of the Dunhuang collection. Hopefully future projects will allow more of these manuscripts to be used in the training of eScriptorium so that a robust HTR process can be established. I look forward to further developments and, for now, am very grateful for the chance I’ve had to work alongside my fabulous colleagues at the British Library and play some small role in this work.
Digital research in the arts and humanities has traditionally tended to focus on digitised physical objects and archives. However, born-digital cultural materials that originate and circulate across a range of digital formats and platforms are rapidly expanding and increasing in complexity, which raises opportunities and issues for research and archiving communities. Collecting, preserving, accessing and sharing born-digital objects and data presents a range of technical, legal and ethical challenges that, if unaddressed, threaten the archival and research futures of these vital cultural materials and records of the 21st century. Moreover, the environments, contexts and formats through which born-digital records are mediated necessitate reconceptualising the materials and practices we associate with cultural heritage and memory. Research and practitioner communities working with born-digital materials are growing and their interests are varied, from digital cultures and intangible cultural heritage to web archives, electronic literature and social media.
To explore and discuss issues relating to born-digital cultural heritage, the Digital Humanities Research Hub at the School of Advanced Study, University of London, in collaboration with British Library curators, colleagues from Aarhus University and the Endangered Material Knowledge Programme at the British Museum, is currently inviting submissions for the inaugural Born-Digital Collections, Archives and Memory conference, which will be hosted at the University of London and online from 2-4 April 2025. The full call for proposals and submission portal is available at https://easychair.org/cfp/borndigital2025.
This international conference seeks to further an interdisciplinary and cross-sectoral discussion on how the born-digital transforms what and how we research in the humanities. We welcome contributions from researchers and practitioners involved in any way in accessing or developing born-digital collections and archives, and interested in exploring the novel and transformative effects of born-digital cultural heritage. Areas of particular (but not exclusive) interest include:
A broad range of born-digital objects and formats:
Web-based and networked heritage, including but not limited to websites, emails, social media platforms/content and other forms of personal communication
Software-based heritage, such as video games, mobile applications, computer-based artworks and installations, including approaches to archiving, preserving and understanding their source code
Born-digital narrative and artistic forms, such as electronic literature and born-digital art collections
Emerging formats and multimodal born-digital cultural heritage
Community-led and personal born-digital archives
Physical, intangible and digitised cultural heritage that has been remediated in a transformative way in born-digital formats and platforms
Theoretical, methodological and creative approaches to engaging with born-digital collections and archives:
Approaches to researching the born-digital mediation of cultural memory
Histories and historiographies of born-digital technologies
Creative research uses and creative technologist approaches to born-digital materials
Experimental research approaches to engaging with born-digital objects, data and collections
Methodological reflections on using digital, quantitative and/or qualitative methods with born-digital objects, data and collections
Novel approaches to conceptualising born-digital and/or hybrid cultural heritage and archives
Critical approaches to born-digital archiving, curation and preservation:
Critical archival studies and librarianship approaches to born-digital collections
Preserving and understanding obsolete media formats, including but not limited to CD-ROMs, floppy disks and other forms of optical and magnetic media
Preservation challenges associated with the platformisation of digital cultural production
Semantic technology, ontologies, metadata standards, markup languages and born-digital curation
Ethical approaches to collecting and accessing ‘difficult’ born-digital heritage, such as traumatic or offensive online materials
Risks and opportunities of generative AI in the context of born-digital archiving
Access, training and frameworks for born-digital archiving and collecting:
Institutional, national and transnational approaches to born-digital archiving and collecting
Legal, trustworthy, ethical and environmentally sustainable frameworks for born-digital archiving and collecting, including attention to cybersecurity and safety concerns
Access, skills and training for born-digital research and archives
Inequalities of access to born-digital collecting and archiving infrastructures, including linguistic, geographic, economic, legal, cultural, technological and institutional barriers
Options for Submissions
A number of different submission types are welcome, and there will be an option for some presentations to be delivered online.
Conference papers (150-300 words)
Presentations lasting 20 minutes. Papers will be grouped with others on similar subjects or themes to form a complete session. There will be time for questions at the end of each session.
Panel sessions (100 word summary plus 150-200 words per paper)
Proposals should consist of three or four 20-minute papers. There will be time for questions at the end of each session.
Roundtables (200-300 word summary and 75-100 word bio for each speaker)
Proposals should include three to five speakers, inclusive of a moderator, and each session will last no more than 90 minutes.
Posters, demos & showcases (100-200 words)
These can be traditional printed posters, digital-only posters, digital tool showcases, or software demonstrations. Please indicate the form your presentation will take in your submission.
If you propose a technical demonstration of some kind, please include details of technical equipment to be used and the nature of assistance (if any) required. Organisers will be able to provide a limited number of external monitors for digital posters and demonstrations, but participants will be expected to provide any specialist equipment required for their demonstration. Where appropriate, posters and demos may be made available online for virtual attendees to access.
Lightning talks (100-200 words)
Talks will be no more than 5 minutes and can be used to jump-start a conversation, pitch a new project, find potential collaborations, or try out a new idea. Reports on completed projects would be more appropriately given as 20-minute papers.
Workshops (150-300 words)
Please include details about the format, length, proposed topic, and intended audience.
Proposals will be reviewed by members of the programme committee. The peer review process will be double-blind, so no names or affiliations should appear on the submissions. The one exception is proposals for roundtable sessions, which should include the names of proposed participants. All authors and reviewers are required to adhere to the conference Code of Conduct.
The submission deadline for proposals, originally 15 May 2024, has been extended to 7 June 2024, and notification of acceptance is now scheduled for early August 2024. Organisers plan to make a number of bursaries available to presenters to cover the cost of attendance; details will be shared when notifications are sent.
The British Library joined forces with the Guardian to hold a summit on the complex policy impacts of AI on the media and information industries. The summit, chaired by broadcaster and author Timandra Harkness, brought together politicians, policy makers, industry leaders, artists and academics to shed light on key issues facing the media, newspaper, broadcasting, library and publishing industries in the age of AI. It took place on Monday 11 March 2024, 14:00 - 17:20 GMT, followed by a networking reception from 17:30 to 19:00.
Lucy Crompton-Reid, Chief Executive of Wikimedia UK; Sara Lloyd, Group Communications Director & Global AI Lead at Pan Macmillan; and Matt Rogerson from the Guardian tackled the issue of copyright in the age of algorithms.
Novelist Tahmima Anam; Greg Clark MP, Chair of the Science & Technology Committee; Chris Moran from the Guardian; and Roly Keating, Chief Executive of the British Library, discussed the issue of AI-generated misinformation and bias.
Speakers on stage at the AI Summit. Photo credit Mia Ridge
AI is rapidly changing the world as we know it, and the media and information industries are no exception. AI-powered technologies are already being used to automate tasks, create personalised content and deliver targeted advertising. In the process, AI is quickly becoming both a friend and a foe. People can use AI to flood the online environment with misinformation, creating significant worries, for example, about how deepfakes and AI-personalised, targeted content could influence democratic processes. At the same time, AI could become a key tool to combat misinformation by identifying fake news articles and social media posts.
Many content creators - from the organisations that create and publish content to individual authors, artists and actors - are worried that their copyright has been infringed by AI, and we have already seen a flurry of legal action, mostly in the United States. At the same time, many artists are embracing AI as a part of their creative process. The recent British Library exhibition on Digital Storytelling explored the ways technology provides new opportunities to transform and enhance the way writers write and readers engage, including interactive works that invite and respond to user input, and reading experiences influenced by data feeds.
And it is not only in the world of news that there is a danger of AI misinformation. In science, where AI is revolutionising many areas of research, from helping us discover new drugs to aiding work on the complexities of climate change, we are at the same time encountering the issue of fake, AI-generated scientific articles. For libraries, AI holds the future promise of improving discovery and access to information, which would help library users find relevant information quickly. Yet AI also introduces significant new challenges when it comes to understanding the provenance of information sources, especially in making the public aware of whether information has been created or selected by algorithms rather than by human beings.
How will we know - and will we care - if our future newspapers, television programmes and library enquiries are mediated and delivered by AI? Or if the content we are consuming is a machine rather than a human creation? We are used to making judgements about the people and organisations we trust on the basis of how we perceive their professional integrity, political leanings and stance on the issues we care about, or simply the likability and charisma of the individual in front of us. How will we make similar judgements about an algorithm and its inherent bias? And how will we govern and manage this new AI-powered environment?
Governmental regulation of AI is under development in the UK, the US, the EU and elsewhere. At the beginning of February 2024 the UK government released its response to the UK AI Regulation White Paper, signalling the continuation of ‘agile’ AI regulation in the UK, an approach that attempts to balance innovation and the economic benefits of AI while handing greater responsibility for AI to existing regulators. The government’s response also reserves the option of more binding regulation in the future. For some, such as tech companies investing in AI products, this creates uncertainty for their future business models. For others, especially artists and many in the creative industries affected by AI, there is disappointment at the absence of regulation on the training of AI with copyrighted content.
Inevitably, as AI develops further and becomes more prevalent, the issues of its regulation and adoption in society will continue to evolve. AI will continue to challenge the ways in which we understand creators’ rights, individual and corporate governance and management of information, and the ways in which we acquire knowledge, trust different information sources and form our opinions on everything from what to buy to who to vote for.