Do you work in Galleries, Libraries, Archives and Museums (GLAM) or other cultural heritage organisations as research support or research-active staff? Are you interested in developing knowledge and skills in open scholarship? Would you like to establish good practices, share your experience with others and collaborate? If your answer is yes to one or more of these questions, we invite you to join the Cultural Heritage Open Scholarship Network (CHOSN).
CHOSN was initiated by the British Library’s Research Infrastructure Services, building on the experience of, and positive responses to, the open scholarship training programme run earlier this year. It is a community of practice for research support and research-active staff who work in GLAM organisations and are interested in developing and sharing open scholarship knowledge and skills, organising events, and supporting each other in this area.
GLAMs produce and showcase a significant amount of research, but we may find ourselves without the resources to make that research openly available, to gain the open scholarship skills needed to do so, or even to identify what counts as research in these environments. CHOSN aims to provide a platform for those working towards good practice in open scholarship to share knowledge and join forces.
This network will be of interest to anyone who facilitates, enables or supports research activities in GLAM organisations, including but not limited to research support staff, research-active staff, librarians, curatorial teams, IT specialists and copyright officers. Anyone who is interested in open scholarship and works in a cultural heritage organisation is welcome.
Join us in the Cultural Heritage Open Scholarship Network (CHOSN) to:
explore research activities and roles in GLAMs and make them visible,
develop knowledge and skills in open scholarship,
carry out capacity development activities to learn and grow, and
create a community of practice to collaborate and support each other.
We have set up a JISC mailing list to start communication with the network; you can join by signing up here. We will shortly organise an online meeting to kick off the network plans, explore how to move forward and collectively discuss what we would like to do next. This will all be communicated via the CHOSN mailing list.
If you have any questions about CHOSN, we are happy to hear from you at [email protected].
This blog post is by Harry Lloyd, Research Software Engineer in the Digital Research team, British Library. You can sometimes find him at the Rose and Crown in Kentish Town.
Last week Dr Adi Keinan-Schoonbaert delved into the invaluable work that she and others have done on the Convert-a-Card project since 2015. In this post, I’m going to pick up where she left off, and describe how we’ve been automating parts of the workflow. When I joined the British Library in February, Victoria Morris and former colleague Giorgia Tolfo had prototyped programmatically extracting entities from transcribed catalogue cards and searching the OCLC WorldCat database by title and author for any close matches. I have been building on this work, and addressing the last yellow rectangle below, “Curator disambiguation and resolution”: namely, how curators choose between OCLC results and develop a MARC record fit for ingest into British Library systems.
The Convert-a-Card workflow at the start of 2023
Entity Extraction
We’re currently working with the digitised images from two drawers of cards, one Urdu and one Chinese. Adi and Giorgia used a layout model on Transkribus to successfully tag different entities on the Urdu cards. The transcribed XML output then had ‘title’, ‘shelfmark’ and ‘author’ tags for the relevant text, making them easy to extract.
Card with layout model and resulting XML for an Urdu card, showing the `structure {type:title;}` parameter on line one
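To give a flavour of how those tags are used, here is a minimal sketch of pulling the tagged entities out of a Transkribus PAGE XML export. The namespace and the layout of the 'custom' attribute are assumptions based on a typical PAGE XML file, so treat this as illustrative rather than the project's actual code.

```python
# Minimal sketch: extract title, shelfmark and author from a Transkribus
# PAGE XML export, using the layout-model tags in each region's 'custom'
# attribute (e.g. "structure {type:title;}"). Namespace and attribute
# layout are assumptions about the export format.
from xml.etree import ElementTree as ET

PAGE_NS = {"p": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"}
ENTITY_TYPES = ("title", "shelfmark", "author")

def extract_entities(xml_path: str) -> dict:
    """Return {'title': ..., 'shelfmark': ..., 'author': ...} for one card."""
    tree = ET.parse(xml_path)
    entities = {}
    for region in tree.iterfind(".//p:TextRegion", PAGE_NS):
        custom = region.get("custom", "")  # e.g. "structure {type:title;}"
        for entity_type in ENTITY_TYPES:
            if f"type:{entity_type};" in custom:
                unicode_el = region.find(".//p:TextEquiv/p:Unicode", PAGE_NS)
                if unicode_el is not None and unicode_el.text:
                    entities[entity_type] = unicode_el.text.strip()
    return entities
```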
The same method didn’t work for the Chinese cards, possibly because the cards are less consistently structured. There is, however, consistency in the vertical order of entities on the card: shelfmark comes above title comes above author. This meant I could reuse some code we developed for Rossitza Atanassova’s Incunabula project, which reliably retrieved title and author (and occasionally an ISBN).
Chinese cards. Although the layouts are variable, shelfmark is reliably the first line, with title and author following.
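For illustration, here is a rough sketch of that positional approach, assuming each transcribed line arrives as a pair of its top y-coordinate and its text (for example taken from the Coords of each TextLine in the PAGE XML). The example values are invented.

```python
# Sketch of the positional approach used for the Chinese cards: entities are
# identified purely by vertical order rather than by layout-model tags.

def entities_by_position(lines: list[tuple[int, str]]) -> dict:
    """Assign shelfmark, title and author from the vertical order of lines."""
    ordered = [text for _, text in sorted(lines)]  # top of card first
    fields = ("shelfmark", "title", "author")
    return dict(zip(fields, ordered))  # any further lines are ignored

# Invented example: three lines with their top y-coordinates in pixels
card = [(310, "Zhang San"), (40, "X.902/1234"), (175, "Example title")]
print(entities_by_position(card))
# {'shelfmark': 'X.902/1234', 'title': 'Example title', 'author': 'Zhang San'}
```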
Querying OCLC WorldCat
With the title and author for each card, we were set up to query WorldCat, but how to do this when there are over two thousand cards in these two drawers alone? Victoria and Giorgia made impressive progress combining Python wrappers for the Z39.50 protocol (PyZ3950) and MARC format (Pymarc). With their prototype, a lot of googling of ASN.1, BER and Z39.50, and a couple of quiet weeks drifting through the web of references between the two packages, I built something that could turn a table of titles and authors for the Chinese cards into a list of MARC records. I also brushed up on enough UTF-8 to work out why none of the Chinese characters were encoded correctly, and fix it.
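In outline, that prototype pattern looks something like the sketch below. The host, port, database name and query attributes are placeholders (a real WorldCat Z39.50 connection also needs authentication), and the exact PyZ3950 calls should be checked against the package itself.

```python
# Hedged sketch of querying a Z39.50 target with PyZ3950 and parsing the
# results with Pymarc. Host, port, database and query values are placeholders.
from PyZ3950 import zoom
from pymarc import Record

conn = zoom.Connection("z3950.example.org", 210)  # placeholder target
conn.databaseName = "ExampleDatabase"
conn.preferredRecordSyntax = "USMARC"

# PQF query: attribute 1=4 searches title, 1=1003 searches author
query = zoom.Query("PQF", '@and @attr 1=4 "Example title" @attr 1=1003 "Zhang San"')
results = conn.search(query)

# Each result carries raw MARC data that Pymarc can parse; forcing UTF-8 was
# the sort of switch needed to get the Chinese characters decoding correctly
records = [Record(data=r.data, force_utf8=True) for r in results]
conn.close()
```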
For all that I enjoyed trawling through it, Z39.50 is, in the words of a 1999 tutorial, “rather hard to penetrate” and nearly 35 years old. PyZ3950, the Python wrapper, hasn’t been maintained for two years, and making any changes to the code is a painstaking process. While Z39.50 remains widely used for transferring information between libraries, that doesn’t mean there aren’t better ways of doing things, and in the name of modernity OCLC offer a suite of APIs for their services. Crucially there are endpoints on their Metadata API that allow search and retrieval of records in MARCXML format. As the British Library maintains a cataloguing subscription to OCLC, we have access to the APIs, so all that’s needed is a call to the OCLC OAuth Server, a search on the Metadata API using title and author, then retrieval of the MARCXML for any results. This is very straightforward in Python, and with the Requests package and about ten lines of code we can have our MARCXML matches.
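Those ten-or-so lines look roughly like the sketch below: a client-credentials token from the OCLC OAuth server, a title and author search, then MARCXML retrieval. The endpoint paths, scope name, query syntax and JSON field names are my assumptions and should be checked against OCLC's current Metadata API documentation.

```python
# Hedged sketch of the OAuth-then-search flow with the Requests package.
# Endpoint paths, scope name and response field names are assumptions.
import requests

OAUTH_URL = "https://oauth.oclc.org/token"
METADATA_API = "https://metadata.api.oclc.org/worldcat"  # assumed base URL

def worldcat_marcxml(title: str, author: str, key: str, secret: str) -> list[str]:
    # 1. Exchange the WSKey credentials for a bearer token
    token = requests.post(
        OAUTH_URL,
        data={"grant_type": "client_credentials", "scope": "WorldCatMetadataAPI"},
        auth=(key, secret),
    ).json()["access_token"]
    headers = {"Authorization": f"Bearer {token}"}

    # 2. Search by title and author for brief bibliographic records
    search = requests.get(
        f"{METADATA_API}/search/brief-bibs",
        params={"q": f'ti:"{title}" AND au:"{author}"'},
        headers=headers,
    ).json()

    # 3. Retrieve full MARCXML for each hit (field names assumed)
    marcxml = []
    for brief in search.get("briefRecords", []):
        record = requests.get(
            f"{METADATA_API}/manage/bibs/{brief['oclcNumber']}",
            headers={**headers, "Accept": "application/marcxml+xml"},
        )
        marcxml.append(record.text)
    return marcxml
```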
Selecting Matches
At all stages of the project we’ve needed someone to select the best match for a card from WorldCat search results. This responsibility currently lies with curators and cataloguers from the relevant collection area. With that audience in mind, I needed a way to present MARC data from WorldCat so curators could compare the MARC fields for different matches. The solution needed to let a cataloguer choose a card, show the card and a table with the MARC fields for each WorldCat result, and ideally provide filters so curators could use domain knowledge to filter out bad results. I put out a call on the cross-government data science network, and a colleague in the 10DS data science team suggested Streamlit.
Streamlit is a Python package that allows fast development of web apps without needing to be a web app developer (which is handy as I’m not one). Adding Streamlit commands to the script that processes WorldCat MARC records into a dataframe quickly turned it into a functioning web app. The app reads in a dataframe of the cards in one drawer and their potential WorldCat matches, and presents it as a table of cards to choose from. You then see the image of the card you’re working on and a MARC field table for the relevant WorldCat matches. This side-by-side view makes it easy to scan across a particular MARC field and exclude matches that have, for example, the wrong physical dimensions. There’s a filter for cataloguing language, sort options for things like the number of subject access fields and the total number of fields, and the ability to remove bad matches from view. Once the cataloguer has chosen a match they can save it to the original dataframe, or note that there was no good match, or only a partial one.
Screenshot from the Streamlit Convert-a-Card web app, showing the card and the MARC table curators use to choose between matches. As the cataloguers are familiar with MARC, providing the raw fields is the easiest way to choose between matches.
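To show roughly how little code this takes, here is a stripped-down sketch of such an app. The file name, column names and dataframe layout are invented for illustration; the real app works from the project's own data structures and has more filters and sort options.

```python
# streamlit_match_picker.py - illustrative sketch, not the project's actual app.
# Assumes a pickled dataframe with one row per card, holding the card image path
# and a dict of {match name: {MARC field: value}} for its WorldCat matches.
import pandas as pd
import streamlit as st

cards = pd.read_pickle("chinese_drawer.pkl")  # hypothetical file name

# Pick the card to work on
card_id = st.selectbox("Card to catalogue", cards["card_id"])
card = cards.loc[cards["card_id"] == card_id].iloc[0]

left, right = st.columns(2)
with left:
    st.image(card["image_path"], caption=f"Card {card_id}")
with right:
    # Rows are MARC fields, columns are WorldCat matches, so a single field
    # can be scanned across all candidate records at a glance
    marc_table = pd.DataFrame(card["worldcat_marc"])
    language = st.selectbox("Cataloguing language (MARC 040 $b)", ["any", "eng", "chi"])
    if language != "any" and "040$b" in marc_table.index:
        marc_table = marc_table.loc[:, marc_table.loc["040$b"] == language]
    st.dataframe(marc_table)

choice = st.radio("Best match", list(marc_table.columns) + ["No good match"])
if st.button("Save choice"):
    cards.loc[cards["card_id"] == card_id, "match"] = choice
    cards.to_pickle("chinese_drawer.pkl")
    st.success(f"Saved '{choice}' for card {card_id}")
```

Run with `streamlit run streamlit_match_picker.py`, a script like this is served as a local web page with no HTML, CSS or JavaScript to write, which is what made the prototype so quick to put together.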
After some very positive initial feedback, we sat down with the Chinese curators and had them test the app out. That led to a fun, interactive, user-experience-focussed feedback session, and a whole host of GitHub issues on the repository for bugs and design suggestions. Behind the scenes, discussions on where to host the app and data are ongoing and not straightforward, but this has been a remarkably easy product to prototype, and I’m optimistic it will provide a lightweight, gentle-learning-curve complement to full record-deriving software like Aleph (the Library’s main cataloguing system).
Next Steps
The project currently uses a range of technologies in Transkribus, the OCLC APIs, and Streamlit, and tying these together has in itself been a success. Going forwards, we can look forward to extracting non-English text from the cards, and to the richer set of entities that would make available. Working with the OCLC APIs has been a learning curve, and they’re not working perfectly yet, but they represent a relatively accessible option compared to Z39.50. My hope for the Streamlit app is that it will be a useful tool beyond the project, wherever someone wants to use WorldCat to help derive records from minimal information. We still have challenges in terms of design, data storage, and hosting to overcome, but these discussions should have their own benefits in making future development easier. The goal for the automation part of the project is a smooth flow of data from Transkribus, through OCLC and on to the curators, and while it’s not perfect, we’re definitely getting there.
This year we will be continuing our collaboration with the British Fashion Council by running our annual student research competition, which encourages fashion students to use the British Library collections in creating their fashion designs. Once again, we will start the collaboration with a fashion show produced by a leading designer, and this year we are delighted to be working with Priya Ahluwalia. Earlier this year Priya worked with the Business & IP Centre, contributing to the Inspiring Entrepreneurs’ International Women’s Day event, which discussed how we can best embrace and encourage diversity and inclusion in business.
On 15 September, during London Fashion Week, Priya will showcase her SS24 collection at the British Library. Following the show, Priya will lead this year’s student competition, focusing on the importance of research in the design process. As part of this competition, students across the UK will create fashion portfolios inspired by the Library’s unique collections.
The previous collaborations with the British Fashion Council involved a range of exciting designers such as Nabil El Nayal, Phoebe English, Supriya Lele and Charles Jeffrey.
Phoebe English’s fashion installation at the British Library in 2021
Previous student work utilised the riches of the Library’s digital and physical collections, with the Flickr collection being especially popular. However, inspiration came from many different directions - from art books, photographs and maps to the reading room bags.
This year’s student competition will be launched in October 2023.
From the winning portfolio of Mihai Popesku, Middlesex University student, who used the Library collections to research traditional Romanian dress