21 September 2023
Convert-a-Card: Helping Cataloguers Derive Records with OCLC APIs and Python
This blog post is by Harry Lloyd, Research Software Engineer in the Digital Research team, British Library. You can sometimes find him at the Rose and Crown in Kentish Town.
Last week Dr Adi Keinan-Schoonbaert delved into the invaluable work that she and others have done on the Convert-a-Card project since 2015. In this post, I’m going to pick up where she left off and describe how we’ve been automating parts of the workflow. When I joined the British Library in February, Victoria Morris and former colleague Giorgia Tolfo had prototyped programmatically extracting entities from transcribed catalogue cards and searching the OCLC WorldCat database by title and author for any close matches. I have been building on this work, and addressing the last yellow rectangle below, “Curator disambiguation and resolution”: namely, how curators choose between OCLC results and develop a MARC record fit for ingest into British Library systems.
Entity Extraction
We’re currently working with the digitised images from two drawers of cards, one Urdu and one Chinese. Adi and Giorgia used a layout model on Transkribus to successfully tag different entities on the Urdu cards. The transcribed XML output then had ‘title’, ‘shelfmark’ and ‘author’ tags for the relevant text, making them easy to extract.
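To give a sense of what that extraction involves, here is a minimal sketch of pulling tagged entities out of a Transkribus PAGE XML export. The exact export structure depends on project settings, so treat the namespace and the tag-in-`custom`-attribute convention below as illustrative assumptions rather than our production code:

```python
import re
from lxml import etree

# Namespace used by Transkribus PAGE XML exports (the version date may differ)
NS = {"p": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"}

def extract_entities(page_xml_path):
    """Collect title/shelfmark/author text from tagged TextLines in one PAGE XML file."""
    tree = etree.parse(page_xml_path)
    entities = {}
    for line in tree.iterfind(".//p:TextLine", NS):
        # Transkribus stores layout-model tags in the 'custom' attribute,
        # e.g. "readingOrder {index:0;} title {offset:0; length:24;}"
        custom = line.get("custom", "")
        unicode_el = line.find(".//p:TextEquiv/p:Unicode", NS)
        if unicode_el is None or unicode_el.text is None:
            continue
        for tag in ("title", "shelfmark", "author"):
            if re.search(rf"\b{tag}\b", custom):
                entities.setdefault(tag, []).append(unicode_el.text.strip())
    return entities
```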
The same method didn’t work for the Chinese cards, possibly because the cards are less consistently structured. There is, however, consistency in the vertical order of entities on the card: shelfmark comes above title comes above author. This meant I could reuse some code we developed for Rossitza Atanassova’s Incunabula project, which reliably retrieved title and author (and occasionally an ISBN).
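The heuristic itself fits in a few lines. This is a simplified sketch assuming each transcribed line arrives with a y-coordinate for its position on the card (the Incunabula code handles more edge cases than this):

```python
def assign_by_vertical_order(lines):
    """Assign shelfmark/title/author by vertical position on the card.

    `lines` is a list of (y_coordinate, text) pairs for one card, where a
    smaller y means nearer the top. Assumes the card holds exactly these
    three entities in top-to-bottom order.
    """
    ordered = [text for _, text in sorted(lines)]
    return dict(zip(["shelfmark", "title", "author"], ordered))

# e.g. assign_by_vertical_order([(120, "張三"), (40, "A 123"), (80, "書名")])
# -> {"shelfmark": "A 123", "title": "書名", "author": "張三"}
```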
Querying OCLC WorldCat
With the title and author for each card, we were set up to query WorldCat, but how to do this when there are over two thousand cards in these two drawers alone? Victoria and Giorgia made impressive progress combining Python wrappers for the Z39.50 protocol (PyZ3950) and the MARC format (Pymarc). With their prototype, a lot of googling of ASN.1, BER and Z39.50, and a couple of quiet weeks drifting through the web of references between the two packages, I built something that could turn a table of titles and authors for the Chinese cards into a list of MARC records. Along the way I brushed up on enough UTF-8 to work out why none of the Chinese characters were encoded correctly, and fix it.
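For a flavour of that approach, here is a rough sketch of a Z39.50 title/author search using PyZ3950’s ZOOM interface and Pymarc. The host, port, database and credentials are placeholders rather than OCLC’s actual service configuration, and running this as written assumes a Python 3-compatible fork of PyZ3950:

```python
from PyZ3950 import zoom
import pymarc

def search_z3950(title, author, host, port, database, user, password):
    """Query a Z39.50 target by title and author, returning pymarc Records."""
    conn = zoom.Connection(host, port, user=user, password=password)
    conn.databaseName = database
    conn.preferredRecordSyntax = "USMARC"
    # CCL query syntax: 'ti' = title, 'au' = author
    query = zoom.Query("CCL", f'ti="{title}" and au="{author}"')
    results = conn.search(query)
    # force_utf8 guards against the character-encoding trouble we hit
    records = [pymarc.Record(data=r.data, force_utf8=True) for r in results]
    conn.close()
    return records
```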
For all that I enjoyed trawling through it, Z39.50 is, in the words of a 1999 tutorial, “rather hard to penetrate”, and nearly 35 years old. PyZ3950, the Python wrapper, hasn’t been maintained for two years, and making any changes to the code is a painstaking process. While Z39.50 remains widely used for transferring information between libraries, that doesn’t mean there aren’t better ways of doing things, and in the name of modernity OCLC offer a suite of APIs for their services. Crucially, there are endpoints on their Metadata API that allow search and retrieval of records in MARCXML format. As the British Library maintains a cataloguing subscription to OCLC, we have access to the APIs, so all that’s needed is a call to the OCLC OAuth server, a search on the Metadata API using title and author, then retrieval of the MARCXML for any results. This is very straightforward in Python: with the Requests package and about ten lines of code we can have our MARCXML matches.
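Those ten-odd lines look roughly like the sketch below. The endpoint paths and response fields reflect my reading of OCLC’s Metadata API documentation rather than guaranteed-stable values, and the WSKey credentials are placeholders:

```python
import requests

WSKEY, SECRET = "my-wskey", "my-secret"  # placeholder OCLC credentials

# 1. Get a bearer token from the OCLC OAuth server (client-credentials flow)
token = requests.post(
    "https://oauth.oclc.org/token",
    auth=(WSKEY, SECRET),
    data={"grant_type": "client_credentials", "scope": "WorldCatMetadataAPI"},
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Search for brief bibliographic records by title and author
search = requests.get(
    "https://metadata.api.oclc.org/worldcat/search/brief-bibs",
    params={"q": 'ti:"三國演義" AND au:"羅貫中"'},
    headers=headers,
).json()

# 3. Retrieve full MARCXML for each matching OCLC number
for brief in search.get("briefRecords", []):
    marcxml = requests.get(
        f"https://metadata.api.oclc.org/worldcat/manage/bibs/{brief['oclcNumber']}",
        headers={**headers, "Accept": "application/marcxml+xml"},
    ).text
```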
Selecting Matches
At all stages of the project we’ve needed someone to select the best match for a card from WorldCat search results. This responsibility currently lies with curators and cataloguers from the relevant collection area. With that audience in mind, I needed a way to present MARC data from WorldCat so curators could compare the MARC fields for different matches. The solution needed to let a cataloguer choose a card, show the card and a table with the MARC fields for each WorldCat result, and ideally provide filters so curators could use domain knowledge to filter out bad results. I put out a call on the cross-government data science network, and a colleague in the 10DS data science team suggested Streamlit.
Streamlit is a Python package that allows fast development of web apps without needing to be a web app developer (which is handy, as I’m not one). Adding Streamlit commands to the script that processes WorldCat MARC records into a dataframe quickly turned it into a functioning web app. The app reads in a dataframe of the cards in one drawer and their potential WorldCat matches, and presents it as a table of cards to choose from. You then see the image of the card you’re working on and a MARC field table for the relevant WorldCat matches. This side-by-side view makes it easy to scan across a particular MARC field and exclude matches that have, for example, the wrong physical dimensions. There’s a filter for cataloguing language, sort options for things like the number of subject access fields and the total number of fields, and the ability to remove bad matches from view. Once the cataloguer has chosen a match they can save it to the original dataframe, or note that there was no good match, or only a partial one.
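A stripped-down sketch of the app’s shape, with file paths and column names invented for illustration (the real app does rather more):

```python
import pandas as pd
import streamlit as st

# Hypothetical input: one row per (card, WorldCat match) pair, where
# 'marc_fields' holds a dict of MARC tag -> value for that match
df = pd.read_pickle("chinese_drawer_matches.pkl")

card_id = st.selectbox("Choose a card", df["card_id"].unique())
matches = df[df["card_id"] == card_id]

# Filter on cataloguing language before anything is drawn
languages = st.multiselect("Cataloguing language", matches["cat_language"].unique())
if languages:
    matches = matches[matches["cat_language"].isin(languages)]

left, right = st.columns(2)
with left:
    st.image(matches["image_path"].iloc[0])  # the digitised card
with right:
    # One column per WorldCat match, one row per MARC field, so a
    # cataloguer can scan across e.g. the 300 (physical description) field
    marc_table = pd.DataFrame(
        {row["oclc_number"]: row["marc_fields"] for _, row in matches.iterrows()}
    )
    st.dataframe(marc_table)

choice = st.radio("Best match", ["No good match", "Partial match only", *matches["oclc_number"]])
if st.button("Save choice"):
    df.loc[df["card_id"] == card_id, "selected_match"] = choice
    df.to_pickle("chinese_drawer_matches.pkl")
```

Launched with `streamlit run app.py`, Streamlit serves the script as an interactive page and re-runs it top to bottom on each widget interaction, which is what makes this style of prototyping so quick.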
After some very positive initial feedback, we sat down with the Chinese curators and had them test the app out. That led to a fun, interactive, user-experience-focussed feedback session, and a whole host of GitHub issues on the repository for bugs and design suggestions. Behind-the-scenes discussions on where to host the app and data are ongoing and not straightforward, but this has been a remarkably easy product to prototype, and I’m optimistic it will provide a lightweight, gentle-learning-curve complement to full deriving software like Aleph (the Library’s main cataloguing system).
Next Steps
The project currently uses a range of technologies in Transkribus, the OCLC APIs, and Streamlit, and tying these together has in itself been a success. Going forward, we have the prospect of extracting the non-English text from the cards, and the richer set of entities that would make available. Working with the OCLC APIs has been a learning curve, and they’re not working perfectly yet, but they represent a relatively accessible option compared to Z39.50. My hope for the Streamlit app is that it will be a useful tool beyond this project, wherever someone wants to use WorldCat to help derive records from minimal information. We still have challenges to overcome in design, data storage, and hosting, but those discussions should have their own benefits in making future development easier. The goal for the automation part of the project is a smooth flow of data from Transkribus, through OCLC, and on to the curators, and while it’s not perfect yet, we’re definitely getting there.