THE BRITISH LIBRARY

Digital scholarship blog


26 November 2020

Using British Library Cultural Heritage Data for a Digital Humanities Research Course at the Australian National University


Posted on behalf of Terhi Nurmikko-Fuller, Senior Lecturer, Centre for Digital Humanities Research, Australian National University by Mahendra Mahey, Manager of BL Labs.

The teaching philosophy and pedagogy of the Centre for Digital Humanities Research (CDHR) at the Australian National University (ANU) focus on research-fuelled, practice-led, object-oriented learning. We value collaboration, experimentation, and individual growth, rather than adhering to a standardised evaluation matrix of exams or essays. Instead, students enrolled in jointly-taught undergraduate and postgraduate courses are given a task: to innovate at the intersection of digital technologies and cultural heritage sector institutions. They are given a great degree of autonomy, and are trusted to deliver. Their aim is to create digital prototypes that open up GLAM sector material to a new audience.

HUMN2001: Digital Humanities Theories and Projects, and its postgraduate equivalent HUMN6001, are core courses for the programs delivered by the CDHR. HUMN2001 is a compulsory course for both the Minor and the Major in Digital Humanities in the Bachelor of Arts; HUMN6001 is a core, compulsory course in the Master of Digital Humanities and Public Culture. Initially the course structure was quite different: experts would be invited to guest lecture on their Digital Humanities projects, and the students were tasked with carrying out critical evaluations of digital resources of various kinds. What quickly became apparent was that without experience of digital projects, the students struggled to evaluate meaningfully and thoughtfully the projects they encountered. Many focused exclusively on the user interface; too often critical factors like funding sources were ignored; and the critical evaluative context in which the students operated was greatly skewed by their experiences of tools such as Google and platforms such as Facebook.

The solution to the problem became clear: students would have to experience the process of developing digital projects themselves before they could reasonably be expected to evaluate those of others. This revelation brought on a paradigm shift in the way in which the CDHR engages with students, projects, and their cultural heritage sector collaborators.

In 2018, we reached out to colleagues at the ANU for small-scale projects for the students to complete. The chosen project was the digitisation of, and the creation of metadata records for, a collection of glass slides that form part of the Heritage in the Limelight project. The enthusiasm, diligence, and care that the students applied to working with this external dataset (external only to the course, since this was an ANU-internal project) gave us confidence to pursue collaborations outside of our own institution. In Semester 1 of 2019, Dr Katrina Grant’s course HUMN3001/6003: Digital Humanities Methods and Practices ran in collaboration with the National Museum of Australia (NMA) with almost unforeseeable success: the NMA granted five of the top students a one-off stipend of $1,000 each, and continued working with the students on their projects, which were then added to the NMA’s Defining Moments Digital Classroom, launched in November 2020. This collaboration was featured in a piece in the ANU Reporter, the University’s internal circular.

Encouraged by the success of Dr Grant’s course, and presented with a serendipitous opportunity to meet at the Australasian Association for Digital Humanities (aaDH) conference in 2018, where he was giving the keynote, I reached out to Mahendra Mahey to propose a similar collaboration. In Semester 2, 2019 (July to November), HUMN2001/6001 ran in collaboration with the British Library.

Our experiences of working with students and cultural heritage institutions in the earlier semester had highlighted some important heuristics. As a result, the delivery of HUMN2001/6001 in 2019 was much more structured than that of HUMN3001/6003 (which had offered the students more freedom and opportunity for independent research). Rather than focus on a theoretical framework per se, HUMN2001/6001 focused on the provision of transferable skills that improved the delivery and reporting of the projects, and could be cited directly as a skills base in future employment opportunities. These included project planning and time management (such as Gantt charts and SCRUM as a form of agile project management), and each project was completed in groups.

The demographic make-up of each group had to follow three immutable rules:

  • First, each team had to be interdisciplinary, with students from more than one degree program.
  • Second, each group had to be multilingual: not every member could share the same first language, or be monolingual in the same language.
  • Third, each group had to represent more than one gender.

Although not all groups strictly implemented these rules, the ones that did benefitted from the diversity and critical lens afforded by this richness of perspective, and produced the top projects.

Three examples that best showcase the diversity (and the creative genius!) of these groups and their approach to the British Library’s collection include a virtual reality (VR) concert hall, a Choose-Your-Own-Adventure game travelling through Medieval manuscripts, and an interactive treasure hunt mobile app.

Examples of student projects

(VR)2 : Virtuoso Rachmaninoff in Virtual Reality

Research Team: Angus Harden, Noppakao (Angel) Leelasorn, Mandy McLean, Jeremy Platt, and Rachel Watson

Figure 1: Angel Leelasorn testing out (VR)2
Figure 2: Snapshots documenting the construction of (VR)2

This project is a VR experience of the grand auditorium of the Bolshoi Theatre in Moscow. It has an audio accompaniment of Sergei Rachmaninoff’s Prelude in C# Minor, Op.3, No.2, the score for which forms part of the British Library’s collection. Reflective of the personal experiences of some of the group members, the project was designed to increase awareness of mental health, and throughout the experience the user can encounter notes written by Rachmaninoff during bouts of depression. The sense of isolation is achieved by the melody playing in an empty auditorium. 

The VR experience was built using Autodesk Maya and Unreal Engine 4. The music was produced using MIDI data, with each note individually entered into Logic Pro X, and finally played through the Addictive Keys Studio Grand virtual instrument.

The project is available through a website with a disclaimer and links to various mental health helplines, accessible at: https://virtuosorachmaninoff.wixsite.com/vrsquared

Fantastic Bestiary

Research Team: Jared Auer, Victoria (Vick) Gwyn, Thomas Larkin, Mary (May) Poole, Wen (Raven) Ren, Ruixue (Rachel) Wu, Qian (Ariel) Zhang

Figure 3: Homepage of A Fantastic Bestiary

This project is a bilingual Choose-Your-Own-Adventure hypertext game that engages with the British Library’s collection of Medieval manuscripts (such as Royal MS 12 C xix, folios 12v-13, based on the Greek Physiologus and the Etymologiae of St Isidore of Seville), first discovered through the Turning the Pages digital feature. The project workflow included design and background research, resource development, narrative writing, animation, translation, audio recording, and web development. Not only does it open up the Medieval manuscripts to the public in an engaging and innovative way through five fully developed narratives (~2,000-3,000 words each); all the content is also available in Mandarin Chinese.

The team used a plethora of different tools, including Adobe Animate, Photoshop, Illustrator, Audition, and Audacity. The website was developed using HTML, CSS, and JavaScript in the Microsoft Visual Studio integrated development environment.

The project is accessible at: https://thomaslarkin7.github.io/hypertextStory/

ActionBound

Research Team: Adriano Carvalho-Mora, Conor Francis Flannery, Dion Tan, Emily Swan

Figure 4: (Left) Testing the app at the Australian National Botanical Gardens, (Middle) An example of one of the tasks to complete in ActionBound (Right) Example of sound file from the British Library (a dingo)

This project is a mobile application, designed as a location-based authoring tool inspired by the Pokémon GO augmented reality mobile game. This educational scavenger hunt aims to educate players about endangered animals. Using sounds of endangered or extinct animals from the British Library’s collection, while geo-locating the app at the Australian National Botanical Gardens, this project is a perfect manifestation of truly global information sharing and enrichment.

The team used a range of available tools and technologies to build this Serious Game or Game-With-A-Purpose: GPS and other geo-location (and geo-caching) features, QR codes to be scanned during the hunt, and locations mapped using OpenStreetMap.

The app can be downloaded from: https://en.actionbound.com/bound/BotanicGardensExtinctionHunt

Course Assessment

Such a diverse and dynamic learning environment presents some pedagogical challenges and required a new approach to student evaluation and assessment. The obvious question here is how to fairly, objectively, and comprehensively grade such vastly different projects, especially since they differ not only in methodology and data, but also in the existing level of skills within each group. The approach I took to grading these assignments is one that I believe will have longevity and, to some extent, scalability. Indeed, I have successfully applied the same rubric in the evaluation of similarly diverse projects created for the course in 2020, when it ran in collaboration with the National Film and Sound Archives of Australia.

The assessment rubric for this course rewards students on two axes: ambition and completeness. This means that projects that were not quite completed due to their scale or complexity are rewarded for their vision, and for the willingness of the students to push boundaries, do new things, and take on a challenge. The grading system allows for four possible outcomes: High Distinction (80% or higher), Distinction (70-79%), Credit (60-69%), and Pass (50-59%). Projects which are ambitious and completed to a significant extent land in the 80s; projects that are either ambitious but not fully developed, or relatively simple but completed, receive marks in the 70s; those that engaged very literally with the material and implemented a technologically straightforward solution (such as building a website using WordPress or Wix, or using one of the suite of tools from Northwestern University’s Knightlab) were awarded marks in the 60s. Students were also rewarded for engaging with tools and technologies of which they had no prior knowledge. Furthermore, in week 10 of the 12-week course, we ran a Digital Humanities Expo! event, in which the students showcased their projects and received user feedback from staff and students at the ANU. Students able to factor these evaluations into their final project exegeses were also rewarded by the marking scheme.
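For readers who think in code, the band boundaries above can be expressed as a simple lookup. This is an illustrative sketch only: the function name is hypothetical, and it is not official ANU grading logic.

```python
def grade_band(mark: int) -> str:
    """Map a percentage mark to its grade band (illustrative sketch)."""
    if mark >= 80:
        return "High Distinction"
    elif mark >= 70:
        return "Distinction"
    elif mark >= 60:
        return "Credit"
    elif mark >= 50:
        return "Pass"
    return "Fail"

# An ambitious, substantially completed project lands in the 80s:
print(grade_band(85))  # High Distinction
```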

Notably, the vast majority of the students completed the course with marks of 70 or higher (in the two top grade brackets). Undoubtedly, the unconventional nature of the course is one of its greatest assets. Engaging with a genuine cultural heritage institution acted as motivation for the students. The autonomy and trust placed in them was empowering. The freedom to pursue the projects that best reflected their passions and interests, in response to a national collection of international fame, resulted, almost invariably, in the students rising to the challenge and even exceeding expectations.

This was a learning experience beyond the rubric. To succeed students had to develop the transferable skills of project-planning, time-management and client interaction that would support a future employment portfolio. The most successful groups were also the most diverse groups. Combining voices from different degree programs, languages, cultures, genders, and interests helped promote internal critical evaluations throughout the design process, and helped the students engage with the materials, the projects, and each other in a more thoughtful way.

Figure 5: Two groups discussing their projects with Mahendra Mahey
Figure 6: National Museum of Australia curator Dr Lily Withycombe user-testing a digital project built using British Library data, 2019.
Figure 7: User-testing feedback! Staff and students came to see the projects and support our students in the Digital Humanities Expo in 2019.

Terhi Nurmikko-Fuller Biography

Dr. Terhi Nurmikko-Fuller

Terhi Nurmikko-Fuller is a Senior Lecturer in Digital Humanities at the Australian National University. She examines the potential of computational tools and digital technologies to support and diversify scholarship in the Humanities. Her publications cover the use of Linked Open Data with musicological information, library metadata, the narrative in ancient Mesopotamian literary compositions, and the role of gamification and informal online environments in education. She has created 3D digital models of cuneiform tablets, carved boab nuts, animal skulls, and the Black Rod of the Australian Senate. She is a British Library Labs Researcher in Residence and a Fellow of the Software Sustainability Institute, UK; an eResearch South Australia (eRSA) HASS DEVL (Humanities Arts and Social Sciences Data Enhanced Virtual Laboratory) Champion; an iSchool Research Fellow at the University of Illinois at Urbana-Champaign, USA (2019 - 2021); a member of the Australian Government Linked Data Working Group; and, since September 2020, a member of the Territory Records Advisory Council for the Australian Capital Territory Government.

BL Labs Public Awards 2020 - REMINDER - Entries close NOON (GMT) 30 November 2020

Inspired by this work that uses the British Library's digitised collections? Have you done something innovative using the British Library's digital collections and data? Why not consider entering your work for a BL Labs Public Award 2020 and win fame, glory and even a bit of money?

This year's Public Awards are open for submission; the deadline for entry is NOON (GMT) on Monday 30 November 2020.

Whilst we welcome projects on any use of our digital collections and data (especially in the research, artistic, educational and community categories), we are particularly interested in entries to our public awards that focus on anti-racist work, on the pandemic, or that use computational methods such as Jupyter Notebooks.

Work will be showcased at the online BL Labs Annual Symposium between 14:00 and 17:00 on Tuesday 15 December; for more information and a booking form please visit the BL Labs Symposium 2020 webpage.

13 November 2020

Reflections during International Games Week and Transgender Awareness Week


This week is International Games Week in libraries - “an initiative run by volunteers from around the world to reconnect communities through their libraries around the educational, recreational, and social value of all types of games.”

As a volunteer, participant and collaborator on game events organised by Stella Wisdom in the British Library's Digital Scholarship Team, I’ve particularly enjoyed the International Games Week events held at the Library during previous years, including AdventureX and WordPlay. It’s fitting that a national library acknowledges the value of narratives in games and interactive fiction, as well as those held in books and other formats.

International Games Week logo with a games controller, 2 dice and a meeple

In this post, I wanted to highlight some things that cut across projects I’ve been involved in with the British Library. These include curating UK websites and running online game jams, in addition to the game events mentioned above.

Back in 2018, I co-organised the online Gothic Novel Jam with Stella. In terms of the gothic and supernatural, it’s appropriate that this blog post is published today on Friday the 13th! We’ve blogged about this jam previously, but in summary, the intention was to encourage participants to create games, interactive fiction and other creative outputs using the theme of the gothic novel and British Library Flickr images as inspiration. The response was fantastic, and resulted in a large number of great narrative games being created. I particularly liked As a Glow Brings Out a Haze for the creative reuse of British Library images.

In addition to co-running game jams, I'm a volunteer curator for the UK Web Archive, and as representative for CILIP’s LGBTQ+ Network, I’ve been co-lead on the LGBTQ+ Lives Online project with Steven Dryden from the British Library. This project has focused on identifying UK LGBTQ+ websites, blogs etc. for inclusion in the collection, as a way to preserve them for future generations. To a lesser extent, I’ve also been supporting the curation of the Video Games collection and also Interactive Narratives, which is part of the broader E-publishing trends/Emerging formats collection.

I find it interesting to see where different seemingly unrelated projects overlap, and in this instance, the overlap is an online game called The Tower created by Freya Campbell, which she originally created for Gothic Novel Jam. The game itself is a piece of interactive fiction combining both text and images. For me it was a great example of a narrative that is clearly gothic and dark, but takes a new focus to frame that genre. 

This week is Transgender Awareness Week, and as more UK content is published online about transgender issues and experiences, these sites will be added to the UKWA LGBTQ+ Lives collection. The Tower includes subject matter that is particularly high profile in UK media discussions surrounding LGBTQ+ lives at the moment - transgender identities. As the creator of The Tower is based in the UK, this game is now part of the Interactive Narratives and LGBTQ+ Lives collections in the UK Web Archive.

Anyone can suggest UK published websites for inclusion in the UK Web Archive by filling in this online nominations form: https://www.webarchive.org.uk/en/ukwa/nominate. As part of both International Games Week and Transgender Awareness Week, why not nominate UK websites for inclusion in the Video Games, Interactive Narratives, and LGBTQ+ Lives Online collections?

Another overlap connected to The Tower, is that Freya exhibited two other games (Perseids, and Super Lunary ep.1) at AdventureX, when it was held at the British Library during International Games Week in 2018 and 2019. Sadly AdventureX is cancelled in 2020 due to Covid-19, but if you make games and interactive fiction, why not consider taking part in AdvXJam, which starts tomorrow.

This post is by Ash Green (@ggnewed) from the CILIP LGBTQ+ Network.

11 November 2020

BL Labs Online Symposium 2020 : Book your place for Tuesday 15-Dec-2020


Posted by Mahendra Mahey, Manager of BL Labs

The BL Labs team are pleased to announce that the eighth annual British Library Labs Symposium 2020 will be held online on Tuesday 15 December 2020, from 13:45 to 16:55* (see note below). The event is FREE, but you must book a ticket in advance to reserve your place. Last year's event was the largest we have ever held, so please don't miss out and book early; see more information here!

*Please note, that directly after the Symposium, we are organising an experimental online mingling networking session between 16:55 and 17:30!

The British Library Labs (BL Labs) Symposium is an annual event and awards ceremony showcasing innovative projects that use the British Library's digital collections and data. It provides a platform for highlighting and discussing the use of the Library’s digital collections for research, inspiration and enjoyment. The awards this year will recognise outstanding use of the British Library's digital content in the categories of Research, Artistic, Educational, Community and British Library staff contributions.

This is our eighth annual symposium and you can see previous Symposia videos from 2019, 2018, 2017, 2016, 2015, 2014 and our launch event in 2013.

Ruth Ahnert will be giving the BL Labs Symposium 2020 keynote this year.

We are very proud to announce that this year's keynote will be delivered by Ruth Ahnert, Professor of Literary History and Digital Humanities at Queen Mary University of London, and Principal Investigator on 'Living With Machines' at The Alan Turing Institute.

Her work focuses on Tudor culture, book history, and digital humanities. She is author of The Rise of Prison Literature in the Sixteenth Century (Cambridge University Press, 2013), editor of Re-forming the Psalms in Tudor England, as a special issue of Renaissance Studies (2015), and co-author of two further books: The Network Turn: Changing Perspectives in the Humanities (Cambridge University Press, 2020) and Tudor Networks of Power (forthcoming with Oxford University Press). Recent collaborative work has taken place through AHRC-funded projects ‘Living with Machines’ and 'Networking the Archives: Assembling and analysing a meta-archive of correspondence, 1509-1714’. With Elaine Treharne she is series editor of the Stanford University Press’s Text Technologies series.

Ruth's keynote is entitled: Humanists Living with Machines: reflections on collaboration and computational history during a global pandemic

You can follow Ruth on Twitter.

There will be Awards announcements throughout the event for the Research, Artistic, Community, Teaching & Learning and Staff categories, and this year we are going to ask the audience to vote for their favourite among the shortlisted projects: a People's BL Labs Award!

There will be a final talk near the end of the conference and we will announce the speaker for that session very soon.

So don't forget to book your place for the Symposium today. We predict it will be another full house (the first one online), and we don't want you to miss out; see more detailed information here.

We look forward to seeing new faces and meeting old friends again!

For any further information, please contact labs@bl.uk

05 November 2020

World Digital Preservation Day 2020


World Digital Preservation Day (WDPD) is held on the first Thursday of every November, providing an opportunity for the international digital preservation community to connect and celebrate the positive impact that digital preservation has. Follow #WDPD2020 for discussion throughout the day. Our colleagues in the UK Web Archive (UKWA) have already blogged earlier for WDPD about their Coronavirus Collection, which includes preservation of the ‘Children of Lockdown’ project website.

A number of WDPD online events are taking place, including a book launch party for Electronic Legal Deposit: Shaping the Library Collections of the Future, for which our collaborative doctoral research student Linda Berube co-wrote chapter 9, 'Follow the Users: Assessing UK Non-Print Legal Deposit Within the Academic Discovery Environment'.

World Digital Preservation Day logo

WDPD is also when the annual Digital Preservation Awards are announced, #DPA2020, and we wish to offer our warmest congratulations to all today's winners, including our wonderful UKWA colleagues who have won The National Archives Award for Safeguarding the Digital Legacy, recognising 15 years of web archiving work. You can read more about the UKWA's 15 year anniversary in 2020 here and watch a recording of the online Digital Preservation Awards ceremony in the video below.

Here in Digital Scholarship we enjoy collaborating with the British Library's Digital Preservation and UKWA teams. Last year we hosted a six-month post-doctoral placement, ‘Emerging Formats: Discovering and Collecting Contemporary British Interactive Fiction’, in which Lynda Clark created an Interactive Narratives UKWA collection and evaluated how crawlers captured web-hosted works of interactive fiction.

This research project was part of the Library’s ongoing Emerging Formats work, which acknowledges that without intervention, many culturally valuable digital artefacts are at risk of being lost. Interactive narratives are particularly endangered due to the ‘hobbyist’ nature of many creators, meaning they do not necessarily subscribe to standardised practices. However, this also means that digital interactive fiction is created by and for a wide variety of creators and audiences, including various marginalised groups.

Two reports written by Lynda during her innovation placement are publicly available on the BL Research Repository; https://doi.org/10.23636/1192 and https://doi.org/10.23636/1193. Furthermore, a long paper about the Interactive Narratives collection is part of the proceedings of this week's International Conference on Interactive Digital Storytelling (ICIDS).[1] This event is a great opportunity to meet both scholars and creative practitioners who make digital stories. I was delighted to be a reviewer for the ICIDS 2020 online art exhibition, which has the theme "Texts of Discomfort" and presents some very thought provoking work.

This post is by Digital Curator Stella Wisdom (@miss_wisdom)

1. Clark L., Rossi G.C., Wisdom S. (2020) Archiving Interactive Narratives at the British Library. In: Bosser AG., Millard D.E., Hargood C. (eds) Interactive Storytelling. ICIDS 2020. Lecture Notes in Computer Science, vol 12497. Springer, Cham. https://doi.org/10.1007/978-3-030-62516-0_27  ↩︎

30 October 2020

Mind Your Paws and Claws


I’m not a summer creature, autumn is my favourite time of the year and I especially love Halloween. It is a perfect excuse for reading ghost stories, watching folk horror films and playing spooky videogames. If this sounds like fun to you too, then I recommend taking a look at the games created for Gothic Novel Jam.

Screen capture of the Gothic Novel Jam itch.io website with thumbnails of the games made as part of this jam

One of my favourite entries is The Lady's Book of Decency, A Practical Treatise on Manners, Feeding, and Etiquette, by Sean S. LeBlanc. I don't want to give away any spoilers, but I will say that it is a real howl! - also remember that this year there is a full moon on 31st October.

Game makers taking part in Gothic Novel Jam were encouraged to use images from the Ghosts & Ghoulish Scenes album in the British Library's Flickr site, which are all freely available for artistic and commercial reuse.

It is always a pleasure to see how creatives use the Flickr images to make new works, such as animations like The Phantom Monk shown below, made by my talented colleague Carlos Rarugal from the UK Web Archive. He has animated a few spooky creatures for Halloween, which will be shared from the Wildlife, Web Archive and Digital Scholarship Twitter accounts. My colleague Cheryl Tipp has been Going batty for Halloween, making a Flappy Bat online game using Scratch, and the UK Web Archive have been celebrating their crawlers with this blog post.

Video created by Carlos Rarugal, using a British Library digitised image from page 377 of "The Lancashire Witches. A novel". Audio is Thunder, Eric & May Nobles, Wales, 1989 (W Thunder r3 C1) and Grey Wolf, Tom Cosburn, Canada, 1995 (W1CDR0000681 BD9)

If you enjoy making games and works of interactive fiction, then you may want to sign up to participate in AdventureX Game Jam, which is taking place online, during 14-28 November 2020. The jam's theme will be announced when AdvXJam opens on the 14th November. You are invited to interpret the theme in any way you choose, and AdventureX are very open-minded about what constitutes a narrative game. All genres, styles and game engines are welcome, as they are very keen to encourage participants to get involved regardless of background or experience level. 

Sadly the AdventureX Narrative Games Convention event is cancelled this year due to Covid-19, but we are hoping that the online AdventureX Game Jam will bring some cheer, creativity and community spirit during this year's International Games Week in Libraries in November. So keep your eyeballs peeled for blog posts about this jam next month.

This post is by Digital Curator Stella Wisdom (@miss_wisdom)

29 October 2020

Happy Eighth Birthday Wikidata!


Sadly, 2020 has not been a year for in-person parties! However, I hope you'll raise a socially distanced glass safely at home to celebrate the eighth birthday of Wikidata, which first went live on 29th October 2012.

You can follow the festivities on social media with posts tagged #WikidataBirthday and read a message from the development team here. The WikiCite 2020 virtual conference kicked the celebrations off a few days early, with sessions about open citations and linked bibliographic data (videos online here), and depending on what time you read this post, you may still be able to join a 24-hour-long online meetup, where people can drop in to chat to others about Wikidata.

If you are reading this post and wondering what Wikidata is, then you might want to read this introduction. Essentially it "is a document-oriented database, focused on items, which represent topics, concepts, or objects. Each item is allocated a unique, persistent identifier, a positive integer prefixed with the upper-case letter Q, known as a "QID". This enables the basic information required to identify the topic that the item covers to be translated without favouring any language."[1]
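To make the language-independence of QIDs concrete, here is a minimal sketch in Python. The inline JSON fragment mimics the shape of a `wbgetentities` response from the Wikidata API (in practice you would fetch it from https://www.wikidata.org/w/api.php); the values shown are just an example.

```python
import json

# A trimmed-down fragment shaped like a Wikidata wbgetentities response.
sample = json.loads("""
{
  "entities": {
    "Q42": {
      "labels": {
        "en": {"language": "en", "value": "Douglas Adams"},
        "fr": {"language": "fr", "value": "Douglas Adams"},
        "ja": {"language": "ja", "value": "ダグラス・アダムズ"}
      }
    }
  }
}
""")

# The QID "Q42" identifies the item independently of any language;
# labels are just per-language renderings of the same concept.
labels = sample["entities"]["Q42"]["labels"]
for lang, label in labels.items():
    print(f"{lang}: {label['value']}")
```

Because the identifier is the language-neutral QID rather than a name, the same item can carry as many labels as there are languages, which is what allows "the basic information required to identify the topic" to be translated freely.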

Wikidata 8th birthday logo

Many libraries around the world have been actively adding data about their collections to Wikidata, and a number of groups to support and encourage this work have been established.

The IFLA Wikidata Working Group was formed in late 2019 to explore and advocate for the use of and contribution to Wikidata by library and information professionals, to support the integration of Wikidata and Wikibase with library systems, and to align the Wikidata ontology with library metadata formats such as BIBFRAME, RDA, and MARC.

This group was originally due to host a satellite event for the World Library and Information Congress 2020 in Dublin, which was sadly cancelled due to Covid-19. However, this event was quickly converted into the Wikicite + Libraries series of six online discussions about open citations, language revitalisation, knowledge equity, access to scholarly publications, and linking and visualising bibliographic data. The recordings have all been made available online via a YouTube playlist.

They have also set up a mailing list (wikidatawg@iflalists.org) and held an online launch party on the 8th October (slides). If you would like to attend their next meeting, it will be on the 24th November, the booking form is here.

illustration of a hand taking a book out of an image of a bookshelf on a computer monitor

Another online community for librarians working with Wikidata is the LD4 Wikidata Affinity Group, which explores how libraries can contribute to and leverage Wikidata as a platform for publishing, linking, and enriching library linked data. They meet biweekly via Zoom. At each meeting, either the co-facilitators or an invited guest gives a presentation or demonstration, followed by a wider discussion of any issues members have encountered and an opportunity to share helpful resources.

If you work in libraries and are curious about Wikidata, I highly recommend attending these groups. If you are looking for an introductory guide, then Practical Wikidata for Librarians is an excellent starting point. There is also Library Carpentry Wikidata, currently in development, which is shaping up to be a very useful resource.

It can't be all work and no play though, so I'm celebrating Wikidata's birthday with a seasonal slice of Frankencolin the Caterpillar cake!

This post is by Digital Curator Stella Wisdom (@miss_wisdom)

1. https://en.wikipedia.org/wiki/Wikidata  ↩︎

23 October 2020

BL Labs Public Award Runner Up (Research) 2019 - Automated Labelling of People in Video Archives

Add comment

Example people identified in TV news related programme clips
People 'automatically' identified in digital TV news related programme clips.

Guest blog post by Andrew Brown (PhD researcher),  Ernesto Coto (Research Software Engineer) and Andrew Zisserman (Professor) of the Visual Geometry Group, Department of Engineering Science, University of Oxford, and BL Labs Public Award Runner-up for Research, 2019. Posted on their behalf by Mahendra Mahey, Manager of BL Labs.

In this work, we automatically identify and label (tag) people in large video archives without the need for any manual annotation or supervision. The project was carried out with the British Library on a sample of 106 videos from their “Television and radio news” archive: a large collection of news programmes from the last 10 years. This archive serves as an important and fascinating resource for researchers and the general public alike. However, the sheer scale of the data, coupled with a lack of relevant metadata, makes indexing, analysing and navigating this content an increasingly difficult task. Relying on human annotation is no longer feasible, and without an effective way to navigate these videos, this bank of knowledge is largely inaccessible.

As users, we are typically interested in human-centric queries such as:

  • “When did Jeremy Corbyn first appear in a Newsnight episode?” or
  • “Show me all of the times when Hugh Grant and Shirley Williams appeared together.”

Currently this is nigh on impossible without trawling through hundreds of hours of content. 

We posed the following research question:

Is it possible to enable automatic person-search capabilities such as this in the archive, without the need for any manual supervision or labelling?

The answer is “yes”, and the method is described next.

Video Pre-Processing

The basic unit which enables person labelling in videos is the face-track: a group of consecutive face detections within a shot that correspond to the same identity. Face-tracks are extracted from all of the videos in the archive. The task of labelling the people in the videos is then to assign a label to each one of these extracted face-tracks. The video below gives an example of two face-tracks found in a scene.
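A rough Python sketch of the face-track idea (names and the grouping rule are illustrative; a real tracker would also require spatial overlap between successive detections, which is omitted here):

```python
from dataclasses import dataclass, field

@dataclass
class FaceTrack:
    """Consecutive face detections within one shot, assumed to be one identity."""
    shot_id: int
    frames: list = field(default_factory=list)  # frame numbers, in order
    boxes: list = field(default_factory=list)   # one (x, y, w, h) box per frame

def group_detections(detections, max_gap=1):
    """Group detections [(shot_id, frame, box), ...], sorted by frame, into
    face-tracks. A detection joins the previous track when it is in the same
    shot and at most `max_gap` frames later."""
    tracks = []
    for shot_id, frame, box in detections:
        last = tracks[-1] if tracks else None
        if (last is not None and last.shot_id == shot_id
                and frame - last.frames[-1] <= max_gap):
            last.frames.append(frame)
            last.boxes.append(box)
        else:
            tracks.append(FaceTrack(shot_id, [frame], [box]))
    return tracks
```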


Two face-tracks found in British Library digital news footage by Visual Geometry Group - University of Oxford.

Techniques at Our Disposal

The base technology used for this work is a state-of-the-art convolutional neural network (CNN), trained for facial recognition [1]. The CNN extracts feature-vectors (a list of numbers) from face images, which indicate the identity of the depicted person. To label a face-track, the distance between the feature-vector for the face-track, and the feature-vector for a face-image with known identity is computed. The face-track is labelled as depicting that identity if the distance is smaller than a certain threshold (i.e. they match). We also use a speaker recognition CNN [2] that works in the same way, except it labels speech segments from unknown identities using speech segments from known identities within the video.
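The distance test described above can be sketched as follows (an illustrative helper with a made-up threshold; in the real pipeline the feature vectors come from the trained CNNs):

```python
import numpy as np

def match_identity(track_vec, known_vecs, threshold=1.0):
    """Label a face-track feature vector with the nearest known identity,
    but only if the distance falls below `threshold` (value illustrative).
    `known_vecs` maps a name to that person's reference feature vector."""
    best_name, best_dist = None, float("inf")
    for name, vec in known_vecs.items():
        dist = float(np.linalg.norm(track_vec - vec))  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```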

Labelling the Face-Tracks

Our method for automatically labelling the people in the video archive is divided into three main stages:

(1) Our first labelling method uses what we term a “celebrity feature-vector bank”, which consists of names of people that are likely to appear in the videos, and their corresponding feature-vectors. The names are automatically sourced from IMDB cast lists for the programmes (the titles of the programmes are freely available in the metadata). Face-images for each of the names are automatically downloaded from image-search engines. Incorrect face-images and people with no images of themselves on search engines are automatically removed at this stage. We compute the feature-vectors for each identity and add them to the bank alongside the names. The face-tracks from the video archives are then simply labelled by finding matches in the feature-vector bank.
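A minimal sketch of the bank-building step, assuming a hypothetical `embed` function standing in for the face-recognition CNN:

```python
import numpy as np

def build_feature_bank(images_by_name, embed):
    """Build a name -> feature-vector bank. `embed` stands in for the face
    recognition CNN: it maps one face image to one feature vector. Names
    with no usable images are dropped, mirroring the automatic clean-up
    step described above. Vectors per name are averaged and L2-normalised."""
    bank = {}
    for name, images in images_by_name.items():
        if not images:
            continue  # no images found on search engines: drop the name
        vectors = np.stack([embed(image) for image in images])
        mean = vectors.mean(axis=0)
        bank[name] = mean / np.linalg.norm(mean)
    return bank
```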

Face-tracks from the video archives are labelled by finding matches in the feature-vector bank.

(2) Our second labelling method uses the idea that if a name is spoken, or found displayed in a scene, then that person is likely to be found within that scene. The task is then to automatically determine whether there is a correspondence or not. Text is automatically read from the news videos using Optical Character Recognition (OCR), and speech is automatically transcribed using Automatic Speech Recognition (ASR). Names are identified and they are searched for on image search engines. The top ranked images are downloaded and the feature-vectors are computed from the faces. If any are close enough to the feature-vectors from the face-tracks present in the scene, then that face-track is labelled with that name. The video below details this process for a written name.
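For illustration, a very crude way to pull capitalised name candidates out of OCR or ASR text (the hypothetical `candidate_names` helper below is not the project's method, which would use more robust named-entity recognition):

```python
import re

def candidate_names(text):
    """Spot runs of two or more capitalised words as candidate person names.
    A deliberately crude stand-in for proper named-entity recognition."""
    return re.findall(r"\b(?:[A-Z][a-z]+ )+[A-Z][a-z]+\b", text)
```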


Using text or spoken word and face recognition to identify a person in a news clip.

(3) For our third labelling method, we use speaker recognition to identify any non-labelled speaking people. We use the labels from the previous two stages to automatically acquire labelled speech segments from the corresponding labelled face-tracks. For each remaining non-labelled speaking person, we extract the speech feature-vector and compute the distance of it to the feature-vectors of the labelled speech segments. If one is close enough, then the non-labelled speech segment and corresponding face-track is assigned that name. This process manages to label speaking face-tracks with visually challenging faces, e.g. deep in shadow or at an extremely non-frontal pose.

Indexing and Searching Identities

The results of our work can be browsed via a web search engine of our own design. A search bar allows users to specify the person or group of people they would like to search for. People’s names are efficiently indexed so that the complete list of names can be filtered as the user types in the search bar. The search results are returned instantly with their associated metadata (programme name, date and time) and can be displayed in multiple ways. The video associated with each search result can be played, visualising the location and the name of all identified people in the video. See the video below for more details. This allows the archive videos to be easily navigated using person-search, thus opening them up for use by the general public.
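As-you-type filtering of a sorted name list can be sketched with two binary searches over the prefix range (an illustrative approach; the post does not describe the actual index implementation):

```python
import bisect

class NameIndex:
    """Keep names sorted so that every prefix query is two binary searches:
    the slice between them is exactly the names starting with the prefix."""
    def __init__(self, names):
        self.names = sorted(names)

    def starting_with(self, prefix):
        lo = bisect.bisect_left(self.names, prefix)
        hi = bisect.bisect_left(self.names, prefix + "\uffff")
        return self.names[lo:hi]
```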


Archive videos easily navigated using person-search.

For examples of more of our Computer Vision research and open-source software, visit the Visual Geometry Group website.

This work was supported by the EPSRC Programme Grant Seebibyte EP/M013774/1

[1] Qiong Cao, Li Shen, Weidi Xie, Omkar M. Parkhi, and Andrew Zisserman. VGGFace2: A dataset for recognising faces across pose and age. In Proc. International Conference on Automatic Face & Gesture Recognition, 2018.

[2] Joon Son Chung, Arsha Nagrani and Andrew Zisserman. VoxCeleb2: Deep Speaker Recognition. INTERSPEECH, 2018.

BL Labs Public Awards 2020

Inspired by this work that uses the British Library's digitally archived news footage? Have you done something innovative using the British Library's digital collections and data? Why not consider entering your work for a BL Labs Public Award 2020 and win fame, glory and even a bit of money?

This year's public and staff awards 2020 are open for submission, the deadline for entry for both is Monday 30 November 2020.

Whilst we welcome projects on any use of our digital collections and data (especially in the research, artistic, educational and community categories), we are particularly interested in public award entries that focus on anti-racist work, engage with the pandemic, or use computational methods such as Jupyter Notebooks.

20 October 2020

The Botish Library: developing a poetry printing machine with Python

Add comment

This is a guest post by Giulia Carla Rossi, Curator of Digital Publications at the British Library. You can find her @giugimonogatari.

In June 2020 the Office for Students announced a campaign to fill 2,500 new places on artificial intelligence and data science conversion courses in universities across the UK. While I’m not planning to retrain in cyber, I was lucky enough to be in the cohort for the trial run of one of these courses: Birkbeck’s Postgraduate Certificate in Applied Data Science. The course started as a collaborative project between The British Library, The National Archives and Birkbeck University to develop a computing course aimed at professionals working in the cultural heritage sector. The trial run has now ended and the course is set to start in full from January 2021.

The course is designed for graduates who are new to computer science – which was perfect for me, as I had no previous coding knowledge besides some very basic HTML and CSS. It was a very steep learning curve, starting from scratch and ending with developing my own piece of software, but it was great to see how code could be applied to everyday issues to facilitate and automate parts of our workload. The fact that it was targeted at information professionals and that we could use existing datasets to learn from real life examples made it easier to integrate study with work. After a while, I started to look at the everyday tasks in my to-do list and wonder “Can this be solved with Python?”

After a taught module (Demystifying Computing with Python), students had to work on an individual project module and develop a piece of software based on their work (to solve an issue, facilitate a task, or re-use and analyse existing resources). I had an idea of the themes I wanted to explore – as Curator of Digital Publications, I’m interested in new media and platforms used to deliver content, and how text and stories are shaped by these tools. When I read about the French company Short Édition and the short story vending machine in Canary Wharf, I knew I had found my project.

My project is to build a stand-alone printer that prints random poems from a dataset of out-of-copyright texts. A little portable Bot-ish (sic!) Library to showcase the British Library collections and fill the world with more poetry.

This is a compilation of two images, a portable printer and a design sketch of the same by the author.
A Short Story Station in Canary Wharf, London and my own sketch of a printing machine. (photo by the author)


Finding poetry

For my project, I decided to use the British Library’s “Digitised printed books (18th-19th century)” collection. This comprises over 60,000 volumes of 18th and 19th century texts, digitised in partnership with Microsoft and made available under Public Domain Mark. My work focused on the metadata dataset and the dataset of OCR derived text (shout out to the Digital Research team for kindly providing me with this dataset, as its size far exceeded what my computer is able to download).

The British Library actively encourages researchers to use its “digital collection and data in exciting and innovative ways” and projects with similar goals to mine had been undertaken before. In 2017, Dr Jennifer Batt worked with staff at the British Library on a data mining project: her goal was to identify poetry within a dataset of 18th-century digitised newspapers from the British Library’s Burney Collection. In her research, Batt argued that employing a set of recurring words didn’t help her find poetry within the dataset, as only very few of the poems included key terms like ‘stanza’ and ‘line’ – and none included the word ‘poem’. In my case, I chose to work with the metadata dataset first, as a way of filtering books based on their title; while, as Batt showed, it’s unlikely that a poem itself includes a term defining its poetry style, I was quite confident that such terms might appear in the title of a poetry collection.

My first step, then, was to identify books containing poetry by searching through the metadata dataset using keywords associated with poetry. My goal was not to find all the poetry in the dataset, but to identify books containing some form of poetry that could be reused to create my printer dataset. I used the Poetry Foundation’s online “Glossary of Poetic Terms - Forms & Types of Poems” to identify key terms, eliminating anachronisms (no poetry slam in the 19th century, I'm afraid) and ambiguous terms (“romance” returned too many results that weren’t relevant to my research). The result was 4,580 book titles containing one or more poetry-related words.
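The title-filtering step can be sketched like this (with an abbreviated term list; `has_poetry_term` is an illustrative helper, not the author's code):

```python
import re

# A small subset of the glossary terms, for illustration only.
POETRY_TERMS = ["poem", "sonnet", "ballad", "rhyme", "verse", "ode", "elegy"]

def has_poetry_term(title, terms=POETRY_TERMS):
    """Case-insensitive whole-word search of a book title, allowing simple plurals."""
    pattern = r"\b(?:" + "|".join(terms) + r")s?\b"
    return re.search(pattern, title, flags=re.IGNORECASE) is not None

titles = [
    "Old Year Leaves Being old verses revived",
    "A History of the British Navy",
]
poetry_titles = [t for t in titles if has_poetry_term(t)]
```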


A screenshot showing key terms defined as 'poem', 'sonnet', 'ballad', 'rhyme', 'verse', etc.
My list of poetry terms used to search through the dataset


Creating verses: when coding meets grammar

I then wanted to extract individual poems from my dataset. The variety of book structures and poetry styles made it impossible to find a blanket rule that could be applied to all books. I chose to test my code out on books that I knew had one poem per page, so that I could extract pages and easily get my poems. Because of its relatively simple structure - and possibly because of some nostalgia for my secondary school Italian class - I started my experiments with Giacomo Pincherle’s 1865 translation of Dante’s sonnets, “In Omaggio a Dante. Dante's Memorial. [Containing five sonnets from Dante, Petrarch and Metastasio, with English versions by G. Pincherle, and five original sonnets in English by G. Pincherle.]”

Once I solved the problem of extracting single poems, the issue was ‘reshaping’ the text to match the print edition. Line breaks are essential to the meaning of a poem and the OCR text was just one continuous string of text that completely disregarded the metric and rhythm of the original work. The rationale behind my choice of book was also that sonnets present a fairly regular structure, which I was hoping could be of use when reshaping the text. The idea of using the poem’s metre as a tool to determine line length seemed the most effective choice: by knowing the type of metre used (iambic pentameter, terza rima, etc.) it’s possible to anticipate the number of syllables for each line and where line breaks should occur.

So I created a function to count how many syllables a word has, following English grammar rules. As is often the case with coding, someone has likely already encountered the same problem as you and, if you’re lucky, they have found a solution: I used a function found online as my base (thank you, StackOverflow), building on it to cover as many grammar rules (and exceptions) as I was aware of. I used the same model and adapted it to Italian grammar rules, to account for the Italian sonnets in the book as well. I then decided to combine the syllable count with the use of capitalisation at the beginning of a line. This increased the chances of a successful result in case the syllable count returned a wrong result (which can happen when typos appear in the OCR text).
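A rough sketch of vowel-group syllable counting of the kind described (handling only a couple of the many English exceptions; the author's actual function covers more):

```python
import re

def count_syllables(word):
    """Approximate an English syllable count as the number of vowel groups,
    correcting for a silent final 'e' ('line' -> 1). Real text needs many
    more exceptions than this sketch handles."""
    word = word.lower()
    groups = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")):
        groups -= 1
    return max(groups, 1)

def line_syllables(line):
    """Total syllables of the alphabetic words in a line of verse."""
    return sum(count_syllables(w) for w in re.findall(r"[a-zA-Z]+", line))
```

With a count like this, a candidate line break can be placed once the running total reaches the expected metre length (ten syllables for an iambic pentameter line, for instance).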


An image showing the poem 'To My Father', both written as a string of lines, and in its original form
The same sonnet restructured so that each line is a new string (above), and matches the line breaks in the print edition (below)


It was very helpful that all books in the datasets were digitised and are available to access remotely (you can search for them on the British Library catalogue by using the search term “blmsd”), so I could check and compare my results to the print editions from home, even during lockdown. I also tested my functions on sonnets from Henry Thomas Mackenzie Bell’s “Old Year Leaves Being old verses revived. [With the addition of two sonnets.]” and Welbore Saint Clair Baddeley’s “Legend of the Death of Antar, an eastern romance. Also lyrical poems, songs, and sonnets.”

Another image showing a poem, this time a sonnet, written as both a string of lines, and in its original form
Example of sonnet from Legend of the Death of Antar, an eastern romance. The function that divides the poems into lines could be adapted to accommodate breaks between stanzas as well.


Main challenges and gaps in research

  • Typos in the OCR text: Errors and typos were introduced when the books in the collection were first digitised, which translated into exceptions to the rules I devised for identifying and restructuring poems. In order to ensure the text of every poem has been correctly captured and that typos have been fixed, some degree of manual intervention might be required.
  • Scalability: The variety of poetry styles and book structures, paired with the lack of tagging around verse text, make it impossible to find a single formula that can be applied to all cases. What I created is quite dependent on a book having one poem per page, and using capitalisation in a certain way.
  • Time constraints: The time limit we had to deliver the project - and my very-recently-acquired-and-still-very-much-developing skill set - meant I had to focus on a limited number of books and prioritise writing the software over building the printer itself.


Next steps

One of the outputs of this project is a JSON file containing a dictionary of poetry books. After searching for poetry terms, I paired the poetry titles and their corresponding metadata with their pages from the OCR dataset, so the resulting file combines useful data from the two original datasets (book IDs, titles, authors’ names and the OCR text of each book). It’s also slightly easier to navigate than the OCR dataset, as books can be retrieved by ID and each page is an item in a list that can be easily called. One of the next steps will be to upload this onto the British Library data repository, in the hope that people might be encouraged to use it and conduct further research around this data collection.
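The combined file might look something like this (a hypothetical ID and field names, inferred from the description above rather than taken from the actual dataset):

```python
import json

# Hypothetical entry, keyed by book ID; field names are illustrative.
poetry_books = {
    "000001234": {
        "title": "In Omaggio a Dante. Dante's Memorial.",
        "author": "Pincherle, Giacomo",
        "pages": ["...OCR text of page 1...", "...OCR text of page 2..."],
    },
}

with open("poetry_books.json", "w", encoding="utf-8") as f:
    json.dump(poetry_books, f, ensure_ascii=False, indent=2)

# Books can then be retrieved by ID, and each page called as a list item.
with open("poetry_books.json", encoding="utf-8") as f:
    books = json.load(f)
```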

Another, very obvious, next step is: building the printer! The individual components have already been purchased (Adafruit IoT Pi Printer Project Pack and Raspberry Pi 3). I will then have to build the thermal printer with Raspberry Pi and connect it to my poetry dataset. It’s interesting to note that other higher education institutions and libraries have been experimenting with similar ideas - like the University of Idaho Library’s Vandal Poem of the Day Bot and the University of British Columbia’s randomised book recommendations printer for libraries.

A photograph of technical components
Component parts of the Adafruit IoT Pi Printer Project Pack. (photo by the author)

My aim when working on this project was for the printer to be used to showcase British Library collections; the idea was for it to be located in a public area in the Library, to reach new audiences that might not necessarily be there for research purposes. The printer could also be reprogrammed to print different genres and be customised for different occasions (e.g. exhibitions, anniversary celebrations, etc.). All of this was planned before Covid-19 happened, so it might be necessary to slightly adapt things now - and any suggestions are very welcome! :)

Finally, none of this would have been possible without Nora McGregor, Stelios Sotiriadis, Peter Wood, the Digital Scholarship and BL Labs teams, and the support of my line manager and my team.