THE BRITISH LIBRARY

Digital scholarship blog

26 November 2020

Using British Library Cultural Heritage Data for a Digital Humanities Research Course at the Australian National University

Posted on behalf of Terhi Nurmikko-Fuller, Senior Lecturer, Centre for Digital Humanities Research, Australian National University by Mahendra Mahey, Manager of BL Labs.

The teaching philosophy and pedagogy of the Centre for Digital Humanities Research (CDHR) at the Australian National University (ANU) focus on research-fuelled, practice-led, object-orientated learning. We value collaboration, experimentation, and individual growth, rather than adhering to a standardised evaluation matrix of exams or essays. Instead, students enrolled in jointly-taught undergraduate and postgraduate courses are given a task: to innovate at the intersection of digital technologies and cultural heritage sector institutions. They are given a great degree of autonomy, and are trusted to deliver. Their aim is to create digital prototypes which open up GLAM sector material to a new audience.

HUMN2001: Digital Humanities Theories and Projects, and its postgraduate equivalent HUMN6001, are core courses for the programs delivered from the CDHR. HUMN2001 is a compulsory course for both the Minor and the Major in Digital Humanities for the Bachelor of Arts; HUMN6001 is a core, compulsory course in the Master of Digital Humanities and Public Culture. Initially the course structure was quite different: experts would be invited to guest lecture on their Digital Humanities projects, and the students were tasked with carrying out critical evaluations of digital resources of various kinds. What quickly became apparent was that without experience of digital projects, the students struggled to meaningfully and thoughtfully evaluate the projects they encountered. Many focused exclusively on the user interface; too often critical factors like funding sources were ignored; and the critical evaluative context in which the students operated was greatly skewed by their experiences of tools such as Google and platforms such as Facebook.

The solution to the problem became clear - students would have to experience the process of developing digital projects themselves before they could reasonably be expected to evaluate those of others. This revelation brought on a paradigm shift in the way in which the CDHR engages with students, projects, and their cultural heritage sector collaborators.

In 2018, we reached out to colleagues at the ANU for small-scale projects for the students to complete. The chosen project was the digitisation and the creation of metadata records for a collection of glass slides that form part of the Heritage in the Limelight project. The enthusiasm, diligence, and care that the students applied to working with this external dataset (external only to the course, since this was an ANU-internal project) gave us the confidence to pursue collaborations outside of our own institution. In Semester 1 of 2019, Dr Katrina Grant’s course HUMN3001/6003: Digital Humanities Methods and Practices ran in collaboration with the National Museum of Australia (NMA), with almost unforeseeable success: the NMA granted five of the top students a one-off stipend of $1,000 each, and continued working with the students on their projects, which were then added to the NMA’s Defining Moments Digital Classroom, launched in November 2020. This collaboration was featured in a piece in the ANU Reporter, the University’s internal circular.

Encouraged by the success of Dr Grant’s course, and presented with a serendipitous opportunity to meet up at the Australasian Association for Digital Humanities (aaDH) conference in 2018 where he was giving the keynote, I reached out to Mahendra Mahey to propose a similar collaboration. In Semester 2, 2019 (July to November), HUMN2001/6001 ran in collaboration with the British Library. 

Our experiences of working with students and cultural heritage institutions in the earlier semester had highlighted some important heuristics. As a result, the delivery of HUMN2001/6001 in 2019 was much more structured than that of HUMN3001/6003 (which had offered the students more freedom and opportunity for independent research). Rather than focus on a theoretical framework per se, HUMN2001/6001 focused on the provision of transferable skills that improved the delivery and reporting of the projects, and that could be cited directly as a skills base when applying for future employment. These included project planning and time management (such as Gantt charts and Scrum as a form of agile project management), and each project was to be completed in groups.

The demographic make-up of each group had to follow three immutable rules:

  • First, each team had to be interdisciplinary, with students from more than one degree program.
  • Second, the groups had to be multilingual: not every member of the group could share the same first language, or be monolingual in the same language.
  • Third, each group had to represent more than one gender.

Although not all groups strictly implemented these rules, the ones that did benefitted from the diversity and critical lens afforded by this richness of perspective, and produced the top projects.

Three examples that best showcase the diversity (and the creative genius!) of these groups and their approach to the British Library’s collection include a virtual reality (VR) concert hall, a Choose-Your-Own-Adventure game travelling through Medieval manuscripts, and an interactive treasure hunt mobile app.

Examples of student projects

(VR)2 : Virtuoso Rachmaninoff in Virtual Reality

Research Team: Angus Harden, Noppakao (Angel) Leelasorn, Mandy McLean, Jeremy Platt, and Rachel Watson

Figure 1: Angel Leelasorn testing out (VR)2
Figure 2: Snapshots documenting the construction of (VR)2

This project is a VR experience of the grand auditorium of the Bolshoi Theatre in Moscow. It has an audio accompaniment of Sergei Rachmaninoff’s Prelude in C# Minor, Op.3, No.2, the score for which forms part of the British Library’s collection. Reflective of the personal experiences of some of the group members, the project was designed to increase awareness of mental health, and throughout the experience the user can encounter notes written by Rachmaninoff during bouts of depression. The sense of isolation is achieved by the melody playing in an empty auditorium. 

The VR experience was built using Autodesk Maya and Unreal Engine 4. The music was produced using MIDI data, with each note individually entered into Logic Pro X and finally played through the Addictive Keys Studio Grand virtual instrument.

The project is available through a website with a disclaimer and links to various mental health helplines, accessible at: https://virtuosorachmaninoff.wixsite.com/vrsquared

Fantastic Bestiary

Research Team: Jared Auer, Victoria (Vick) Gwyn, Thomas Larkin, Mary (May) Poole, Wen (Raven) Ren, Ruixue (Rachel) Wu, Qian (Ariel) Zhang

Figure 3: Homepage of A Fantastic Bestiary

This project is a bilingual Choose-Your-Own-Adventure hypertext game that engages with the British Library’s collection of Medieval manuscripts (such as Royal MS 12 C. xix, folios 12v-13, based on the Greek Physiologus and the Etymologiae of St. Isidore of Seville), first discovered through the Turning the Pages digital feature. The project workflow included design and background research, resource development, narrative writing, animation, translation, audio recording, and web development. Not only does it open up the Medieval manuscripts to the public in an engaging and innovative way through five fully developed narratives (~2,000-3,000 words each), but all the content is also available in Mandarin Chinese.

The team used a plethora of different tools, including Adobe Animate, Photoshop, Illustrator, Audition, and Audacity. The website was developed using HTML, CSS, and JavaScript in the Microsoft Visual Studio Integrated Development Environment.

The project is accessible at: https://thomaslarkin7.github.io/hypertextStory/

ActionBound

Research Team: Adriano Carvalho-Mora, Conor Francis Flannery, Dion Tan, Emily Swan

Figure 4: (Left) Testing the app at the Australian National Botanical Gardens; (Middle) An example of one of the tasks to complete in ActionBound; (Right) An example of a sound file from the British Library (a dingo)

This project is a mobile application, built with a location-based authoring tool and inspired by the Pokémon Go augmented reality mobile game. This scavenger hunt aims to educate players about endangered animals. Using sounds of endangered or extinct animals from the British Library’s collection, but geo-locating the app at the Australian National Botanical Gardens, this project is a perfect manifestation of truly global information sharing and enrichment.

The team used a range of available tools and technologies to build this Serious Game, or Game-With-A-Purpose: GPS and other geo-location (and geo-caching) services, QR codes that they created to be scanned during the hunt, and locations mapped using OpenStreetMap.

The app can be downloaded from: https://en.actionbound.com/bound/BotanicGardensExtinctionHunt

Course Assessment

Such a diverse and dynamic learning environment presented some pedagogical challenges and required a new approach to student evaluation and assessment. The obvious question is how to fairly, objectively, and comprehensively grade such vastly different projects, especially since they differ not only in methodology and data, but also in the existing level of skills within each group. The approach I took to grading these assignments is one that I believe will have longevity and, to some extent, scalability. Indeed, I have successfully applied the same rubric to the evaluation of similarly diverse projects created for the course in 2020, when it ran in collaboration with the National Film and Sound Archive of Australia.

The assessment rubric for this course awards students on two axes: ambition and completeness. This means that projects that were not quite completed due to their scale or complexity are rewarded for their vision and the willingness of the students to push boundaries, do new things, and take on a challenge. The grading system allows for four possible outcomes: a High Distinction (80% or higher), Distinction (70-79%), Credit (60-69%), and Pass (50-59%). Projects which are ambitious and completed to a significant extent land in the 80s; projects that are either ambitious but not fully developed, or relatively simple but completed, receive marks in the 70s; those that engaged very literally with the material and implemented a technologically straightforward solution (such as building a website using WordPress or Wix, or using one of the suite of tools from Northwestern University’s Knight Lab) were awarded marks in the 60s. Students were also rewarded for engaging with tools and technologies they had no prior knowledge of. Furthermore, in week 10 of the 12-week course, we ran a Digital Humanities Expo! event, in which the students showcased their projects and received user feedback from staff and students at the ANU. Students able to factor these evaluations into their final project exegeses were also rewarded by the marking scheme.

Notably, the vast majority of the students completed the course with marks of 70 or higher (in the two top grade brackets). Undoubtedly, the unconventional nature of the course is one of its greatest assets. Engaging with a genuine cultural heritage institution acted as motivation for the students. The autonomy and trust placed in them was empowering. The freedom to pursue the projects that they felt best reflected their passions and interests, in response to a national collection of international fame, resulted almost invariably in the students rising to the challenge and even exceeding expectations.

This was a learning experience beyond the rubric. To succeed students had to develop the transferable skills of project-planning, time-management and client interaction that would support a future employment portfolio. The most successful groups were also the most diverse groups. Combining voices from different degree programs, languages, cultures, genders, and interests helped promote internal critical evaluations throughout the design process, and helped the students engage with the materials, the projects, and each other in a more thoughtful way.

Figure 5: Two groups discussing their projects with Mahendra Mahey
Figure 6: National Museum of Australia curator Dr Lily Withycombe user-testing a digital project built using British Library data, 2019.
Figure 7: User-testing feedback! Staff and students came to see the projects and support our students in the Digital Humanities Expo in 2019.

Terhi Nurmikko-Fuller Biography

Dr Terhi Nurmikko-Fuller

Terhi Nurmikko-Fuller is a Senior Lecturer in Digital Humanities at the Australian National University. She examines the potential of computational tools and digital technologies to support and diversify scholarship in the Humanities. Her publications cover the use of Linked Open Data with musicological information, library metadata, the narrative in ancient Mesopotamian literary compositions, and the role of gamification and informal online environments in education. She has created 3D digital models of cuneiform tablets, carved boab nuts, animal skulls, and the Black Rod of the Australian Senate. She is a British Library Labs Researcher in Residence and a Fellow of the Software Sustainability Institute, UK; an eResearch South Australia (eRSA) HASS DEVL (Humanities Arts and Social Sciences Data Enhanced Virtual Laboratory) Champion; an iSchool Research Fellow at the University of Illinois at Urbana-Champaign, USA (2019 - 2021); a member of the Australian Government Linked Data Working Group; and, since September 2020, a member of the Territory Records Advisory Council for the Australian Capital Territory Government.

BL Labs Public Awards 2020 - REMINDER - Entries close NOON (GMT) 30 November 2020

Inspired by this work that uses the British Library's digitised collections? Have you done something innovative using the British Library's digital collections and data? Why not consider entering your work for a BL Labs Public Award 2020 and win fame, glory and even a bit of money?

This year's public awards 2020 are open for submission; the deadline for entry is NOON (GMT) on Monday 30 November 2020.

Whilst we welcome projects on any use of our digital collections and data (especially in research, artistic, educational and community categories), we are particularly interested in entries in our public awards that have focused on anti-racist work, about the pandemic or that are using computational methods such as the use of Jupyter Notebooks.

Work will be showcased at the online BL Labs Annual Symposium between 14:00 and 17:00 on Tuesday 15 December; for more information and a booking form, please visit the BL Labs Symposium 2020 webpage.

11 November 2020

BL Labs Online Symposium 2020 : Book your place for Tuesday 15-Dec-2020

Posted by Mahendra Mahey, Manager of BL Labs

The BL Labs team are pleased to announce that the eighth annual British Library Labs Symposium 2020 will be held online on Tuesday 15 December 2020, from 13:45 to 16:55* (see note below). The event is FREE, but you must book a ticket in advance to reserve your place. Last year's event was the largest we have ever held, so please don't miss out and book early; see more information here!

*Please note, that directly after the Symposium, we are organising an experimental online mingling networking session between 16:55 and 17:30!

The British Library Labs (BL Labs) Symposium is an annual event and awards ceremony showcasing innovative projects that use the British Library's digital collections and data. It provides a platform for highlighting and discussing the use of the Library’s digital collections for research, inspiration and enjoyment. The awards this year will recognise outstanding use of British Library's digital content in the categories of Research, Artistic, Educational, Community and British Library staff contributions.

This is our eighth annual symposium and you can see videos from previous Symposia in 2019, 2018, 2017, 2016, 2015 and 2014, as well as from our launch event in 2013.

Dr Ruth Ahnert, Professor of Literary History and Digital Humanities at Queen Mary University of London, and Principal Investigator on 'Living With Machines' at The Alan Turing Institute
Ruth Ahnert will be giving the BL Labs Symposium 2020 keynote this year.

We are very proud to announce that this year's keynote will be delivered by Ruth Ahnert, Professor of Literary History and Digital Humanities at Queen Mary University of London, and Principal Investigator on 'Living With Machines' at The Alan Turing Institute.

Her work focuses on Tudor culture, book history, and digital humanities. She is author of The Rise of Prison Literature in the Sixteenth Century (Cambridge University Press, 2013), editor of Re-forming the Psalms in Tudor England, as a special issue of Renaissance Studies (2015), and co-author of two further books: The Network Turn: Changing Perspectives in the Humanities (Cambridge University Press, 2020) and Tudor Networks of Power (forthcoming with Oxford University Press). Recent collaborative work has taken place through AHRC-funded projects ‘Living with Machines’ and 'Networking the Archives: Assembling and analysing a meta-archive of correspondence, 1509-1714’. With Elaine Treharne she is series editor of the Stanford University Press’s Text Technologies series.

Ruth's keynote is entitled: Humanists Living with Machines: reflections on collaboration and computational history during a global pandemic

You can follow Ruth on Twitter.

There will be Awards announcements throughout the event for the Research, Artistic, Community, Teaching & Learning and Staff categories, and this year we are going to get the audience to vote for their favourite project among those that were shortlisted: a people's BL Labs Award!

There will be a final talk near the end of the conference and we will announce the speaker for that session very soon.

So don't forget to book your place for the Symposium today; we predict it will be another full house again, the first one online, and we don't want you to miss out. See more detailed information here.

We look forward to seeing new faces and meeting old friends again!

For any further information, please contact labs@bl.uk

23 October 2020

BL Labs Public Award Runner Up (Research) 2019 - Automated Labelling of People in Video Archives

People 'automatically' identified in digital TV news-related programme clips.

Guest blog post by Andrew Brown (PhD researcher),  Ernesto Coto (Research Software Engineer) and Andrew Zisserman (Professor) of the Visual Geometry Group, Department of Engineering Science, University of Oxford, and BL Labs Public Award Runner-up for Research, 2019. Posted on their behalf by Mahendra Mahey, Manager of BL Labs.

In this work, we automatically identify and label (tag) people in large video archives without the need for any manual annotation or supervision. The project was carried out with the British Library on a sample of 106 videos from their “Television and radio news” archive: a large collection of news programmes from the last 10 years. This archive serves as an important and fascinating resource for researchers and the general public alike. However, the sheer scale of the data, coupled with a lack of relevant metadata, makes indexing, analysing and navigating this content an increasingly difficult task. Relying on human annotation is no longer feasible, and without an effective way to navigate these videos, this bank of knowledge is largely inaccessible.

As users, we are typically interested in human-centric queries such as:

  • “When did Jeremy Corbyn first appear in a Newsnight episode?” or
  • “Show me all of the times when Hugh Grant and Shirley Williams appeared together.”

Currently this is nigh on impossible without trawling through hundreds of hours of content. 

We posed the following research question:

Is it possible to enable automatic person-search capabilities such as this in the archive, without the need for any manual supervision or labelling?

The answer is “yes”, and the method is described next.

Video Pre-Processing

The basic unit which enables person labelling in videos is the face-track: a group of consecutive face detections within a shot that correspond to the same identity. Face-tracks are extracted from all of the videos in the archive. The task of labelling the people in the videos is then to assign a label to each one of these extracted face-tracks. The video below gives an example of two face-tracks found in a scene.


Two face-tracks found in British Library digital news footage by Visual Geometry Group - University of Oxford.

Techniques at Our Disposal

The base technology used for this work is a state-of-the-art convolutional neural network (CNN), trained for facial recognition [1]. The CNN extracts feature-vectors (a list of numbers) from face images, which indicate the identity of the depicted person. To label a face-track, the distance between the feature-vector for the face-track, and the feature-vector for a face-image with known identity is computed. The face-track is labelled as depicting that identity if the distance is smaller than a certain threshold (i.e. they match). We also use a speaker recognition CNN [2] that works in the same way, except it labels speech segments from unknown identities using speech segments from known identities within the video.
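As a rough illustration of that matching step, the sketch below (in Python, with invented function names, random stand-in embeddings and an arbitrary threshold) labels a face-track by finding the nearest known-identity feature-vector and accepting it only if the distance is small enough.

    import numpy as np

    def label_face_track(track_vector, known_identities, threshold=1.0):
        """Return the closest known identity, or None if nothing is within the threshold.

        known_identities maps a name to its reference feature-vector; in the real
        system the vectors come from the face-recognition CNN [1]."""
        best_name, best_dist = None, float("inf")
        for name, identity_vector in known_identities.items():
            dist = np.linalg.norm(track_vector - identity_vector)  # Euclidean distance
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist < threshold else None

    # Hypothetical usage with random 256-dimensional embeddings
    known = {"Jeremy Corbyn": np.random.rand(256), "Hugh Grant": np.random.rand(256)}
    print(label_face_track(np.random.rand(256), known))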

Labelling the Face-Tracks

Our method for automatically labelling the people in the video archive is divided into three main stages:

(1) Our first labelling method uses what we term a “celebrity feature-vector bank”, which consists of names of people that are likely to appear in the videos, and their corresponding feature-vectors. The names are automatically sourced from IMDB cast lists for the programmes (the titles of the programmes are freely available in the meta-data). Face-images for each of the names are automatically downloaded from image-search engines. Incorrect face-images and people with no images of themselves on search engines are automatically removed at this stage. We compute the feature-vectors for each identity and add them to the bank alongside the names. The face-tracks from the video archives are then simply labelled by finding matches in the feature-vector bank.

Face-tracks from the video archives are labelled by finding matches in the feature-vector bank.

(2) Our second labelling method uses the idea that if a name is spoken, or found displayed in a scene, then that person is likely to be found within that scene. The task is then to automatically determine whether there is a correspondence or not. Text is automatically read from the news videos using Optical Character Recognition (OCR), and speech is automatically transcribed using Automatic Speech Recognition (ASR). Names are identified and they are searched for on image search engines. The top ranked images are downloaded and the feature-vectors are computed from the faces. If any are close enough to the feature-vectors from the face-tracks present in the scene, then that face-track is labelled with that name. The video below details this process for a written name.


Using text or spoken word and face recognition to identify a person in a news clip.

(3) For our third labelling method, we use speaker recognition to identify any non-labelled speaking people. We use the labels from the previous two stages to automatically acquire labelled speech segments from the corresponding labelled face-tracks. For each remaining non-labelled speaking person, we extract the speech feature-vector and compute the distance of it to the feature-vectors of the labelled speech segments. If one is close enough, then the non-labelled speech segment and corresponding face-track is assigned that name. This process manages to label speaking face-tracks with visually challenging faces, e.g. deep in shadow or at an extremely non-frontal pose.

Indexing and Searching Identities

The results of our work can be browsed via a web search engine of our own design. A search bar allows users to specify the person or group of people that they would like to search for. People’s names are efficiently indexed so that the complete list of names can be filtered as the user types in the search bar. The search results are returned instantly with their associated metadata (programme name, date and time) and can be displayed in multiple ways. The video associated with each search result can be played, visualising the location and the name of all identified people in the video. See the video below for more details. This allows for the archive videos to be easily navigated using person-search, thus opening them up for use by the general public.


Archive videos easily navigated using person-search.
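A minimal sketch of that kind of person-search index is shown below; the names, fields and example data are hypothetical, and the real system is a bespoke web search engine rather than an in-memory Python dictionary.

    from collections import defaultdict

    labelled_tracks = [
        {"person": "Jeremy Corbyn", "programme": "Newsnight", "date": "2016-06-20", "time": "22:35"},
        {"person": "Shirley Williams", "programme": "Question Time", "date": "2014-03-06", "time": "22:50"},
    ]

    # Index every appearance under the person's name
    index = defaultdict(list)
    for track in labelled_tracks:
        index[track["person"].lower()].append(track)

    def suggest(prefix):
        """Filter the complete list of names as the user types in the search bar."""
        return sorted(name for name in index if name.startswith(prefix.lower()))

    def search(name):
        """Return every appearance of a person with its associated metadata."""
        return index.get(name.lower(), [])

    print(suggest("je"))            # ['jeremy corbyn']
    print(search("Jeremy Corbyn"))  # programme, date and time for each appearance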

For examples of more of our Computer Vision research and open-source software, visit the Visual Geometry Group website.

This work was supported by the EPSRC Programme Grant Seebibyte EP/M013774/1.

[1] Qiong Cao, Li Shen, Weidi Xie, Omkar M. Parkhi, and Andrew Zisserman. VGGFace2: A dataset for recognising faces across pose and age. In Proc. International Conference on Automatic Face & Gesture Recognition, 2018.

[2] Joon Son Chung, Arsha Nagrani and Andrew Zisserman. VoxCeleb2: Deep Speaker Recognition. In Proc. INTERSPEECH, 2018.

BL Labs Public Awards 2020

Inspired by this work that uses the British Library's digital archived news footage? Have you done something innovative using the British Library's digital collections and data? Why not consider entering your work for a BL Labs Public Award 2020 and win fame, glory and even a bit of money?

This year's public and staff awards 2020 are open for submission, the deadline for entry for both is Monday 30 November 2020.

Whilst we welcome projects on any use of our digital collections and data (especially in research, artistic, educational and community categories), we are particularly interested in entries in our public awards that have focused on anti-racist work, about the pandemic or that are using computational methods such as the use of Jupyter Notebooks.

19 October 2020

The 2020 British Library Labs Staff Award - Nominations Open!

Looking for entries now!

Nominate an existing British Library staff member or a team that has done something exciting, innovative and cool with the British Library’s digital collections or data.

The 2020 British Library Labs Staff Award, now in its fifth year, gives recognition to current British Library staff who have created something brilliant using the Library’s digital collections or data.

Perhaps you know of a project that developed new forms of knowledge, or an activity that delivered commercial value to the library. Did the person or team create an artistic work that inspired, stimulated, amazed and provoked? Do you know of a project developed by the Library where quality learning experiences were generated using the Library’s digital content? 

You may nominate a current member of British Library staff, a team, or yourself (if you are a member of staff), for the Staff Award using this form.

The deadline for submission is NOON (GMT), Monday 30 November 2020.

Nominees will be highlighted on Tuesday 15 December 2020 at the online British Library Labs Annual Symposium where some (winners and runners-up) will also be asked to talk about their projects (everyone is welcome to attend, you just need to register).

You can see the projects submitted by members of staff and public for the awards in our online archive.

In 2019, last year's winner focused on the brilliant work of the Imaging Team for the 'Qatar Foundation Partnership Project Hack Days', which were sessions organised for the team to experiment with the Library's digital collections. 

The runner-up for the BL Labs Staff Award in 2019 was the Heritage Made Digital team and their social media campaign to promote the British Library's digital collections one language a week, from letters 'A' to 'U' (#AToUnknown).

In the public Awards, last year's winners (2019) drew attention to artistic, research, teaching & learning, and community activities that used our data and / or digital collections.

British Library Labs is a project within the Digital Scholarship department at the British Library that supports and inspires the use of the Library's digital collections and data in exciting and innovative ways. It was previously funded by the Andrew W. Mellon Foundation and is now solely funded by the British Library.

If you have any questions, please contact us at labs@bl.uk.

12 October 2020

Fiction Readers Wanted for PhD Research Study

This is a guest post by British Library collaborative doctoral student Carol Butler; you can follow her on Twitter as @fantomascarol.

Update: Due to a phenomenal response, Carol has recruited enough interviewees for the study, so the link to the application form has been removed (13/10/2020).

In 2016 I started a PhD project in partnership with the British Library and the Centre for Human-Computer Interaction Design (CHCID) at City, University of London. My research has focused on the phenomena of fiction authors interacting with readers through online media, such as websites, forums and social media, to promote and discuss their work. My aim is to identify potential avenues for redesigning or introducing new technology to better support authors and readers. I am now in my fourth and final year, aiming to complete my research this winter.

The internet has impacted how society interacts with almost everything, and literature has been no exception. It’s often thought that if a person or a business is not online, they are effectively invisible, and over the last ten years or so it has become increasingly common – expected, even - for authors to have an online presence allowing readers, globally, to connect with them.

Opportunities for authors and readers to interact together existed long before the internet, through events such as readings, signings, and festivals. The internet does not replace these – indeed, festivals have grown in popularity in recent years, and many have embraced technology to broaden their engagement outside of the event itself. However, unlike organised events, readers and authors can potentially interact online far more directly, outside of formal mediation. Perceived benefits from this disintermediation are commonly hailed – i.e. that it can break down access barriers for readers (e.g. geography and time, so they can more easily learn about the books they enjoy and the person behind the story), and help authors to better understand their market and the reception to their books. However, being a relatively new phenomenon, we don’t know much yet about how interacting with each other online may differ to doing so at a festival or event, and what complications the new environment may introduce to the experience, or even exacerbate. It is this research gap that my work has been addressing.

Early in my research, I conducted interviews with fiction authors and readers who use different online technologies (e.g. social media such as Twitter and Facebook, forums such as Reddit, or literary-specific sites such as GoodReads) to interact with other readers and authors. All participants generously shared their honest, open accounts about what they do, where and why, and where they encounter problems. It became clear that, although the benefits to being online are widely accepted and everyone had good experiences to report, in reality, people’s reasons for being online were riddled with contradictions, and, in some cases, it was debatable whether the positives outweighed the negatives, or whether the practice served a meaningful purpose at all. Ultimately – it’s complex, and not everything we thought we knew is necessarily as clear cut as it’s often perceived. 

This led me to make a U-turn in my research. Before working out how to improve technology to better support interactions as they currently stand, I needed to find out more about people’s motivations to be online, and to question whether we were focused on the right problem in the first place. From this I’ve been working to reframe how we, in the research field of Human-Computer Interaction, may understand the dynamics between authors and readers, by building a broader picture of context and influences in the literary field.

I’m going to write another blog post in the coming months to talk about what I’ve found, and what I think we need to focus on in the near future. In particular, I think it is important to improve support for authors, as many find themselves in a tricky position because of the expectation that they are available and public-facing, effectively 24/7. However, before I expand on that, I am about to embark on one final study to address some outstanding questions I have about the needs of their market – fiction readers. 

Over the next few weeks, I will be recruiting people who read fiction – whether they interact online about reading or not - to join me for what I am informally referring to as ‘an interview with props’. This study is happening a few months later than I’d originally intended, as restrictions in relation to Covid-19 required me to change my original plans (e.g. to meet people face-to-face). My study has ‘gone digital’, changing how I can facilitate the sessions, and what I can realistically expect from them.

I will be asking people to join me to chat online, using Zoom, to reflect on a series of sketched interface design ideas I have created, and to discuss their current thoughts about authors being available online. The design sketches represent deviations from the technology currently in common use - some significant, and some subtle. The designs are not being tested on behalf of any affiliated company, nor do I necessarily anticipate any of them being developed into working technology in the future. Ultimately, they are probes to get us talking about broader issues surrounding author and reader interactions, and I'm hoping that by getting people's perspectives on them, I'll learn as much about why the designs *don't* work as about why they do, to help inform future research and design work.

I’ve been ‘umming and ahhing’ about how best to share these designs with participants through a digital platform. Sitting together in the same room, as I’d originally planned, we could all move them around, pick them up, take a red pen to them, make notes on post-its, and sketch alternative ideas on paper. There are fantastic online technologies available these days, which have proved invaluable during this pandemic. But they can’t provide the same experience that being physically present together can (a predicament which, perhaps ironically, is fitting with the research problem itself!).

A screen image of the Miro platform, showing a drawing of a person wearing glasses, with a text box underneath saying Favourite Author
A sneaky peek at a sketch in the making, on Miro

I have decided to use a website called Miro.com to facilitate the study – an interactive whiteboard tool that allows participants to add digital post-it notes, doodles, and more. I’ve never used it before now, and to my knowledge there is no published research out there (yet) by others in my research field who have used it with participants, for me to learn from their experience. I think I must prepare myself for a few technical glitches! But I am hopeful that participants will enjoy the experience, which will be informal, encouraging, and in no way a judgement of their abilities with the technology. I am confident that their contribution will greatly help my work – and future work which will help authors and readers in the real world.

If anyone who is reading this is interested in participating, please do get in touch. Information about the study and how to contact me can be found here or please email carol.butler@city.ac.uk.

Update: Due to a phenomenal response, Carol has recruited enough interviewees for the study, so the link to the application form has been removed (13/10/2020). Thanks to everyone who has applied.

11 September 2020

BL Labs Public Awards 2020: enter before NOON GMT Monday 30 November 2020! REMINDER

The sixth BL Labs Public Awards 2020 formally recognises outstanding and innovative work that has been carried out using the British Library’s data and / or digital collections by researchers, artists, entrepreneurs, educators, students and the general public.

The closing date for entering the Public Awards is NOON GMT on Monday 30 November 2020 and you can submit your entry any time up to then.

Please help us spread the word! We want to encourage anyone interested to submit over the next few months - who knows, you could even win fame and glory; priceless! We really hope to have another year of fantastic projects to showcase at our annual online awards symposium on 15 December 2020 (which is open for registration too), inspired by our digital collections and data!

This year, BL Labs is commending work in four key areas that have used or been inspired by our digital collections and data:

  • Research - A project or activity that shows the development of new knowledge, research methods, or tools.
  • Artistic - An artistic or creative endeavour that inspires, stimulates, amazes and provokes.
  • Educational - Quality learning experiences created for learners of any age and ability that use the Library's digital content.
  • Community - Work that has been created by an individual or group in a community.

What kind of projects are we looking for this year?

Whilst we are really happy for you to submit your work on any subject that uses our digital collections, in this significant year we are particularly interested in entries that focus on anti-racist work or on projects about lockdown and the global pandemic. We are also curious and keen to have submissions that have used Jupyter Notebooks to carry out computational work on our digital collections and data.

After the submission deadline has passed, entries will be shortlisted and selected entrants will be notified via email by midnight on Friday 4th December 2020. 

A prize of £150 in British Library online vouchers will be awarded to the winner and £50 in the same format to the runner up in each Awards category at the Symposium. Of course if you enter, it will be at least a chance to showcase your work to a wide audience and in the past this has often resulted in major collaborations.

The talent of the BL Labs Awards winners and runners up over the last five years has led to the production of a remarkable and varied collection of innovative projects described in our 'Digital Projects Archive'. In 2019, the Awards commended work in four main categories – Research, Artistic, Community and Educational:

BL Labs Award Winners for 2019
(Top-Left) Full-Text search of Early Music Prints Online (F-TEMPO) - Research, (Top-Right) Emerging Formats: Discovering and Collecting Contemporary British Interactive Fiction - Artistic
(Bottom-Left) John Faucit Saville and the theatres of the East Midlands Circuit - Community commendation
(Bottom-Right) The Other Voice (Learning and Teaching)

For further detailed information, please visit BL Labs Public Awards 2020, or contact us at labs@bl.uk if you have a specific query.

Posted by Mahendra Mahey, Manager of British Library Labs.

21 April 2020

Clean. Migrate. Validate. Enhance. Processing Archival Metadata with Open Refine

This blogpost is by Graham Jevon, Cataloguer, Endangered Archives Programme 

Creating detailed and consistent metadata is a challenge common to most archives. Many rely on an army of volunteers with varying degrees of cataloguing experience. And no matter how diligent any team of cataloguers are, human error and individual idiosyncrasies are inevitable.

This challenge is particularly pertinent to the Endangered Archives Programme (EAP), which has hitherto funded in excess of 400 projects in more than 90 countries. Each project is unique and employs its own team of one or more cataloguers based in the particular country where the archival content is digitised. But all this disparately created metadata must be uniform when ingested into the British Library’s cataloguing system and uploaded to eap.bl.uk.

Finding an efficient, low-cost method to process large volumes of metadata generated by hundreds of unique teams is a challenge; one that in 2019, EAP sought to alleviate using freely available open source software Open Refine – a power tool for processing data.

This blog highlights some of the ways that we are using Open Refine. It is not an instructional how-to guide (though we are happy to follow-up with more detailed blogs if there is interest), but an introductory overview of some of the Open Refine methods we use to process large volumes of metadata.

Initial metadata capture

Our metadata is initially created by project teams using an Excel spreadsheet template provided by EAP. In the past year we have completely redesigned this template in order to make it as user friendly and controlled as possible.

Screenshot of spreadsheet

But while Excel is perfect for metadata creation, it is not best suited for checking and editing large volumes of data. This is where Open Refine excels (pardon the pun!), so when the final completed spreadsheet is delivered to EAP, we use Open Refine to clean, validate, migrate, and enhance this data.

Workflow diagram

Replicating repetitive tasks

Open Refine came to the forefront of our attention after a one-day introductory training session led by Owen Stephens where the key takeaway for EAP was that a sequence of functions performed in Open Refine can be copied and re-used on subsequent datasets.

Screenshot of Open Refine

This encouraged us to design and create a sequence of processes that can be re-applied every time we receive a new batch of metadata, thus automating large parts of our workflow.

No computer programming skills required

Building this sequence required no computer programming experience (though this can help); just logical thinking, a generous online community willing to share their knowledge and experience, and a willingness to learn Open Refine’s GREL language and generic regular expressions. Some functions can be performed simply by using Open Refine’s built-in menu options. But the limits of Open Refine’s capabilities are almost infinite; the more you explore and experiment, the further you can push the boundaries.

Initially, it was hoped that our whole Open Refine sequence could be repeated in one single large batch of operations. The complexity of the data and the need for archivist intervention meant that it was more appropriate to divide the process into several steps. Our workflow is divided into 7 stages:

  1. Migration
  2. Dates
  3. Languages and Scripts
  4. Related subjects
  5. Related places and other authorities
  6. Uniform Titles
  7. Digital content validation

Each of these stages performs one or more of four tasks: clean, migrate, validate, and enhance.

Task 1: Clean

The first part of our workflow provides basic data cleaning. Across all columns it trims any white space at the beginning or end of a cell, removes any double spaces, and capitalises the first letter of every cell. In just a few seconds, this tidies the entire dataset.
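For readers who want to see the logic outside Open Refine, a rough Python equivalent of these three cleaning steps might look like the sketch below; the file names are hypothetical, and the EAP workflow itself performs these operations with Open Refine transforms.

    import re
    import pandas as pd

    def clean_cell(value):
        if not isinstance(value, str):
            return value
        value = value.strip()                  # trim whitespace at the beginning or end
        value = re.sub(r"\s{2,}", " ", value)  # remove double spaces
        return value[:1].upper() + value[1:]   # capitalise the first letter

    df = pd.read_csv("listing.csv")            # hypothetical listing spreadsheet
    df = df.apply(lambda column: column.map(clean_cell))
    df.to_csv("listing_cleaned.csv", index=False)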

Task 1 Example: Trimming white space (menu option)

Trimming whitespace on an individual column is an easy function to perform, as Open Refine has a built-in “Common transform” that does this.

Screenshot of Open Refine

Although this is a simple function to perform, we no longer need to repeatedly select this menu option for each column of each dataset we process because this task is now part of the workflow that we simply copy and paste.

Task 1 Example: Capitalising the first letter (using GREL)

Capitalising the first letter of each cell is less straightforward for a new user as it does not have a built-in function that can be selected from a menu. Instead it requires a custom “Transform” using Open Refine’s own expression language (GREL).

Screenshot of Open Refine


Having to write an expression like this should not put off any Open Refine novices. This is an example of Open Refine’s flexibility and many expressions can be found and copied from the Open Refine wiki pages or from blogs like this. The more you copy others, the more you learn, and the easier you will find it to adapt expressions to your own unique requirements.

Moreover, we do not have to repeat this expression again. Just like the trim whitespace transformation, this is also now part of our copy and paste workflow. One click performs both these tasks and more.

Task 2: Migrate

As previously mentioned, the listing template used by the project teams is not the same as the spreadsheet template required for ingest into the British Library’s cataloguing system. But Open Refine helps us convert the listing template to the ingest template. In just one click, it renames, reorders, and restructures the data from the human friendly listing template to the computer friendly ingest template.

Task 2 example: Variant Titles

The ingest spreadsheet has a “Title” column and a single “Additional Titles” column where all other title variations are compiled. It is not practical to expect temporary cataloguers to understand how to use the “Title” and “Additional Titles” columns on the ingest spreadsheet. It is much more effective to provide cataloguers with a listing template that has three prescriptive title columns. This helps them clearly understand what type of titles are required and where they should be put.

Spreadsheet snapshot

The EAP team then uses Open Refine to move these titles into the appropriate columns (illustrated above). It places one in the main “Title” field and concatenates the other two titles (if they exist) into the “Additional Titles” field. It also creates two new title type columns, which the ingest process requires so that it knows which title is which.
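A hedged sketch of that title migration is shown below; the column names and title-type values are invented for illustration and are not the actual EAP or ingest template headings.

    import pandas as pd

    listing = pd.DataFrame({
        "Title (English)": ["Baptism register"],
        "Title (original language)": ["Registro de bautismo"],
        "Alternative title": ["Parish baptism book"],
    })

    ingest = pd.DataFrame()
    ingest["Title"] = listing["Title (English)"]

    # Concatenate any other titles that exist into a single "Additional Titles" field
    other_titles = listing[["Title (original language)", "Alternative title"]]
    ingest["Additional Titles"] = other_titles.apply(
        lambda row: "; ".join(v for v in row if isinstance(v, str) and v), axis=1
    )

    # Title-type columns so the ingest process knows which title is which
    ingest["Title Type"] = "Main"                 # invented value, for illustration
    ingest["Additional Title Types"] = "Variant"  # invented value, for illustration
    print(ingest)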

This is just one part of the migration stage of the workflow, which performs several renaming, re-ordering, and concatenation tasks like this to prepare the data for ingest into the British Library’s cataloguing system.

Task 3: Validate

While cleaning and preparing the data for migration is important, it is also vital that we check that the data is accurate and reliable. But who has the time, inclination, or eye stamina to read thousands of rows of data in an Excel spreadsheet? What we require is a computational method to validate data. Perhaps the best way of doing this is to write a bespoke computer program. This indeed is something that I am now working on while learning to write computer code using the Python language (look out for a further blog on this later).

In the meantime, though, Open Refine has helped us to validate large volumes of metadata with no programming experience required.

Task 3 Example: Validating metadata-content connections

When we receive the final output from a digitisation project, one of our most important tasks is to ensure that all of the digital content (images, audio and video recordings) correlates with the metadata on the spreadsheet and vice versa.

We begin by running a command line report on the folders containing the digital content. This provides us with a csv file which we can read in Excel. However, the data is not presented in a neat format for comparison purposes.

Spreadsheet snapshot

Restructuring data ready for validation comparisons

For this particular task what we want is a simple list of all the digital folder names (not the full directory) and the number of TIFF images each folder contains. Open Refine enables just that, as the next image illustrates.

Screenshot of Open Refine

Constructing the sequence that restructures this data required careful planning and good familiarity with Open Refine and the GREL expression language. But after the data had been successfully restructured once, we never have to think about how to do this again. As with other parts of the workflow, we now just have to copy and paste the sequence to repeat this transformation on new datasets in the same format.
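For anyone without an Open Refine workflow to hand, the same neat folder/TIFF-count list could also be produced directly with a short script; the sketch below is illustrative only, and the folder paths are hypothetical.

    import csv
    from pathlib import Path

    root = Path("EAP_project_output")  # hypothetical digital content folder
    with open("folder_report.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["folder", "tiff_count"])
        for folder in sorted(p for p in root.rglob("*") if p.is_dir()):
            tiffs = list(folder.glob("*.tif")) + list(folder.glob("*.tiff"))
            if tiffs:
                # folder name only (not the full directory) and its image count
                writer.writerow([folder.name, len(tiffs)])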

Cross referencing data for validation

With the data in this neat format, we can now do a number of simple cross referencing checks. We can check that:

  1. Each digital folder has a corresponding row of metadata – if not, this indicates that the metadata is incomplete
  2. Each row of metadata has a corresponding digital folder – if not, this indicates that some digital folders containing images are missing
  3. The actual number of TIFF images in each folder exactly matches the number of images recorded by the cataloguer – if not, this may indicate that some images are missing.

For each of these checks we use Open Refine’s cell.cross expression to cross reference the digital folder report with the metadata listing.

In the screenshot below we can see the results of the first validation check. Each digital folder name should match the reference number of a record in the metadata listing. If we find a match it returns that reference number in the “CrossRef” column. If no match is found, that column is left blank. By filtering that column by blanks, we can very quickly identify all of the digital folders that do not contain a corresponding row of metadata. In this example, before applying the filter, we can already see that at least one digital folder is missing metadata. An archivist can then investigate why that is and fix the problem.

Screenshot of Open Refine
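The same three checks can also be expressed outside Open Refine as a simple table join; the sketch below uses pandas with hypothetical column names, where Open Refine's cell.cross expression performs the equivalent lookup inside our workflow.

    import pandas as pd

    folders = pd.read_csv("folder_report.csv")     # folder, tiff_count
    metadata = pd.read_csv("listing_cleaned.csv")  # assumed to include "Reference" and "Image count" columns

    merged = folders.merge(metadata, left_on="folder", right_on="Reference", how="outer")

    # 1. Digital folders with no corresponding row of metadata
    print(merged.loc[merged["Reference"].isna(), "folder"])

    # 2. Rows of metadata with no corresponding digital folder
    print(merged.loc[merged["folder"].isna(), "Reference"])

    # 3. Folders where the actual TIFF count differs from the number recorded by the cataloguer
    mismatch = merged["tiff_count"].notna() & (merged["tiff_count"] != merged["Image count"])
    print(merged.loc[mismatch, ["Reference", "tiff_count", "Image count"]])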

Task 4: Enhance

We enhance our metadata in a number of ways. For example, we import authority codes for languages and scripts, and we assign subject headings and authority records based on keywords and phrases found in the titles and description columns.

Named Entity Extraction

One of Open Refine’s most dynamic features is its ability to connect to other online databases. Thanks to the generous support of Dandelion API, we are able to use its service to identify entities such as people, places, organisations, and titles of work.

In just a few simple steps, Dandelion API reads our metadata and returns new linked data, which we can filter by category. For example, we can list all of the entities it has extracted and categorised as a place or all the entities categorised as people.

Screenshot of Open Refine

Not every named entity it finds will be accurate. In the above example “Baptism” is clearly not a place. But it is much easier for an archivist to manually validate a list of 29 phrases identified as places, than to read 10,000 scope and content descriptions looking for named entities.

Clustering inconsistencies

If there is inconsistency in the metadata, the returned entities might contain multiple variants. This can be overcome using Open Refine’s clustering feature. This identifies and collates similar phrases and offers the opportunity to merge them into one consistent spelling.

Screenshot of Open Refine
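The sketch below shows roughly how key-collision clustering of this kind works, using a simplified version of Open Refine's fingerprint method (the real implementation also normalises accented characters).

    import re
    from collections import defaultdict

    def fingerprint(value):
        """Lowercase, strip punctuation, then sort the unique tokens."""
        tokens = re.sub(r"[^\w\s]", "", value.strip().lower()).split()
        return " ".join(sorted(set(tokens)))

    entities = ["Buenos Aires", "buenos aires", "Aires, Buenos", "Buenos  Aires "]

    clusters = defaultdict(list)
    for entity in entities:
        clusters[fingerprint(entity)].append(entity)

    for key, variants in clusters.items():
        print(key, "->", variants)  # all four variants collapse into a single cluster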

Linked data reconciliation

Having identified and validated a list of entities, we then use other linked data services to help create authority records. For this particular task, we use the Wikidata reconciliation service. Wikidata is a structured data sister project to Wikipedia. And the Open Refine reconciliation service enables us to link an entity in our dataset to its corresponding item in Wikidata, which in turn allows us to pull in additional information from Wikidata relating to that item.

For a South American photograph project we recently catalogued, Dandelion API helped identify 335 people (including actors and performers). By subsequently reconciling these people with their corresponding records in Wikidata, we were able to pull in their job title, date of birth, date of death, unique persistent identifiers, and other details required to create a full authority record for that person.

Screenshot of Open Refine
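For illustration, the sketch below pulls a couple of those details straight from Wikidata's public API for one hypothetical reconciled person; in the actual workflow, Open Refine's Wikidata reconciliation service handles the name-to-item matching and data extension for us.

    import requests

    API = "https://www.wikidata.org/w/api.php"

    def wikidata_item(name):
        """Search Wikidata for a name and return the best-matching item ID (QID)."""
        r = requests.get(API, params={"action": "wbsearchentities", "search": name,
                                      "language": "en", "format": "json"})
        results = r.json().get("search", [])
        return results[0]["id"] if results else None

    def person_details(qid):
        """Fetch date of birth (P569) and date of death (P570) claims for an item."""
        r = requests.get(API, params={"action": "wbgetentities", "ids": qid,
                                      "props": "claims", "format": "json"})
        claims = r.json()["entities"][qid]["claims"]

        def first_time(prop):
            statements = claims.get(prop, [])
            return (statements[0]["mainsnak"]["datavalue"]["value"]["time"]
                    if statements else None)

        return {"wikidata_id": qid, "born": first_time("P569"), "died": first_time("P570")}

    qid = wikidata_item("Carlos Gardel")  # hypothetical example of a reconciled performer
    if qid:
        print(person_details(qid))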

Creating individual authority records for 335 people would otherwise take days of work. It is a task that previously we might have deemed infeasible. But Open Refine and Wikidata drastically reduces the human effort required.

Summary

In many ways, that is the key benefit. By placing Open Refine at the heart of our workflow for processing metadata, it now takes us less time to do more. Our workflow is not perfect. We are constantly finding new ways to improve it. But we now have a semi-automated method for processing large volumes of metadata.

This blog puts just some of those methods in the spotlight. In the interest of brevity, we refrained from providing step-by-step detail. But if there is interest, we will be happy to write further blogs to help others use this as a starting point for their own metadata processing workflows.

20 April 2020

BL Labs Research Award Winner 2019 - Tim Crawford - F-Tempo

Posted on behalf of Tim Crawford, Professorial Research Fellow in Computational Musicology at Goldsmiths, University of London and BL Labs Research Award winner for 2019 by Mahendra Mahey, Manager of BL Labs.

Introducing F-TEMPO

Early music printing

Music printing, introduced in the later 15th century, enabled the dissemination of the greatest music of the age, which until that time was the exclusive preserve of royal and aristocratic courts or the Church. A vast repertory of all kinds of music is preserved in these prints, and they became the main conduit for the spread of the reputation and influence of the great composers of the Renaissance and early Baroque periods, such as Josquin, Lassus, Palestrina, Marenzio and Monteverdi. As this music became accessible to the increasingly well-heeled merchant classes, entirely new cultural networks of taste and transmission became established and can be traced in the patterns of survival of these printed sources.

Music historians have tended to neglect the analysis of these patterns in favour of a focus on a canon of ‘great works’ by ‘great composers’, with the consequence that there is a large sub-repertory of music that has not been seriously investigated or published in modern editions. By including this ‘hidden’ musical corpus, we could explore for the first time, for example, the networks of influence, distribution and fashion, and the effects on these of political, religious and social change over time.

Online resources of music and how to read them

Vast amounts of music, mostly audio tracks, are now available using services such as Spotify, iTunes or YouTube. Music is also available online in great quantity in the form of PDF files rendering page-images of either original musical documents or modern, computer-generated music notation. These are a surrogate for paper-based books used in traditional musicology, but offer few advantages beyond convenience. What they don’t allow is full-text search, unlike the text-based online materials which are increasingly the subject of ‘distant reading’ in the digital humanities.

With good score images, Optical Music Recognition (OMR) programs can sometimes produce useful scores from printed music of simple texture; however, in general, OMR output contains errors due to misrecognised symbols. The results often amount to musical gibberish, severely limiting the usefulness of OMR for creating large digital score collections. Our OMR program is Aruspix, which is highly reliable on good images, even when they have been digitised from microfilm.

Here is a screen-shot from Aruspix, showing part of the original page-image at the top, and the program’s best effort at recognising the 16th-century music notation below. It is not hard to see that, although the program does a pretty good job on the whole, there are not a few recognition errors. The program includes a graphical interface for correcting these, but we don’t make use of that for F-TEMPO for reasons of time – even a few seconds of correction per image would slow the whole process catastrophically.

The Aruspix user-interface

Finding what we want – error-tolerant encoding

Although OMR is far from perfect, online users are generally happy to use computer methods on large collections containing noise; this is the principle behind the searches in Google Books, which are based on Optical Character Recognition (OCR).

For F-TEMPO, from the output of the Aruspix OMR program, for each page of music, we extract a ‘string’ representing the pitch-name and octave for the sequence of notes. Since certain errors (especially wrong or missing clefs or accidentals) affect all subsequent notes, we encode the intervals between notes rather than the notes themselves, so that we can match transposed versions of the sequences or parts of them. We then use a simple alphabetic code to represent the intervals in the computer.

Here is an example of a few notes from a popular French chanson, showing our encoding method.

A few notes from a Crequillon chanson, and our encoding of the intervals
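The sketch below illustrates the idea behind the interval-based encoding: the same string is produced for any transposition of a melody. The single-letter scheme here is invented for illustration and handles only natural notes; F-TEMPO's actual alphabetic code may differ.

    NOTE_STEPS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

    def pitch_number(note):
        """Convert e.g. 'G4' into a semitone count, so octaves are comparable."""
        name, octave = note[0], int(note[-1])
        return octave * 12 + NOTE_STEPS[name]

    def encode_intervals(notes):
        """Encode each melodic interval as one letter: 'n' is a unison, letters
        after 'n' are ascending semitones, letters before 'n' are descending."""
        pitches = [pitch_number(n) for n in notes]
        intervals = [b - a for a, b in zip(pitches, pitches[1:])]
        return "".join(chr(ord("n") + i) for i in intervals)

    melody = ["G4", "G4", "A4", "B4", "G4"]
    print(encode_intervals(melody))                          # 'nppj'
    print(encode_intervals(["C5", "C5", "D5", "E5", "C5"]))  # same string for the transposed melody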

F-TEMPO in action

F-TEMPO uses state-of-the-art, scalable retrieval methods, providing rapid searches of almost 60,000 page-images for those similar to a query-page in less than a second. It successfully recovers matches when the query page is not complete, e.g. when page-breaks are different. Also, close non-identical matches, as between voice-parts of a polyphonic work in imitative style, are highly ranked in results; similarly, different works based on the same musical content are usually well-matched.
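As a toy illustration of error-tolerant matching on these encoded strings, pages can be ranked by how many short substrings they share with a query; the example below is only meant to convey the flavour, since F-TEMPO itself uses far more scalable retrieval methods, and the page data here is invented.

    def ngrams(code, n=4):
        """All n-letter substrings of an encoded page."""
        return {code[i:i + n] for i in range(len(code) - n + 1)}

    # Hypothetical interval-encoded pages
    pages = {
        "page_001": "nppjmnqonppj",
        "page_002": "qqnomnppjlmn",
        "page_003": "abcabcabcabc",
    }

    def search(query_code, n=4):
        query_grams = ngrams(query_code, n)
        scores = {page: len(query_grams & ngrams(code, n)) for page, code in pages.items()}
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    print(search("nppjmn"))  # pages sharing similar interval patterns rank first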

Here is a screen-shot from the demo interface to F-TEMPO. The ‘query’ image is on the left, and searches are done by hitting the ‘Enter’ or ‘Return’ key in the normal way. The list of results appears in the middle column, with the best match (usually the query page itself) highlighted and displayed on the right. As other results are selected, their images are displayed on the right. Users can upload their own images of 16th-century music that might be in the collection to serve as queries; we have found that even photos taken with a mobile phone work well. However, don’t expect coherent results if you upload other kinds of image!

The F-TEMPO user interface

The F-TEMPO web-site can be found at: http://f-tempo.org

Click on the ‘Demo’ button to try out the program for yourself.

What more can we do with F-TEMPO?

Using the full-text search methods enabled by F-TEMPO’s API we might begin to ask intriguing questions, such as:

  • ‘How did certain pieces of music spread and become established favourites throughout Europe during the 16th century?’
  • ‘How well is the relative popularity of such early-modern favourites reflected in modern recordings since the 1950s?’
  • ‘How many unrecognised arrangements are there in the 16th-century repertory?’

In early testing we identified an instrumental ricercar as a wordless transcription of a Latin motet, hitherto unknown to musicology. As the collection grows, we are finding more such unexpected concordances, and can sometimes identify the composers of works labelled in some printed sources as by ‘Incertus’ (Uncertain). We have also uncovered some interesting conflicting attributions which could provoke interesting scholarly discussion.

Early Music Online and F-TEMPO

From the outset, this project has been based on the Early Music Online (EMO) collection, the result of a 2011 JISC-funded Rapid Digitisation project between the British Library and Royal Holloway, University of London. This digitised about 300 books of early printed music at the BL from archival microfilms, producing black-and-white images which have served as an excellent proof of concept for the development of F-TEMPO. The c.200 books judged suitable for our early methods in EMO contain about 32,000 pages of music, and form the basis for our resource.

The current version of F-TEMPO includes just under 30,000 more pages of early printed music from the Polish National Library, Warsaw, as well as a few thousand from the Bibliothèque nationale, Paris. We will soon be incorporating no fewer than a further half-a-million pages from the Bavarian State Library collection in Munich, as soon as we have run them through our automatic indexing system.

 (This work was funded for the past year by the JISC / British Academy Digital Humanities Research in the Humanities scheme. Thanks are due to David Lewis, Golnaz Badkobeh and Ryaan Ahmed for technical help and their many suggestions.)