Digital scholarship blog

118 posts categorized "Research collaboration"

20 April 2022

Importing images into Zooniverse with a IIIF manifest: introducing an experimental feature

Digital Curator Dr Mia Ridge shares news from a collaboration between the British Library and Zooniverse that means you can more easily create crowdsourcing projects with cultural heritage collections. There's a related blog post on Zooniverse, Fun with IIIF.

IIIF manifests - text files that tell software how to display images, sound or video files alongside metadata and other information about them - might not sound exciting, but by linking to them, you can view and annotate collections from around the world. The IIIF (International Image Interoperability Framework) standard makes images (or audio, video or 3D files) more re-usable - they can be displayed on another site alongside the original metadata and information provided by the source institution. If an institution updates a manifest - perhaps adding information from updated cataloguing or crowdsourcing - any site that displays that image automatically gets the updated metadata.
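To make this concrete, here is a sketch in Python of what a manifest contains and what a viewer or crowdsourcing platform pulls out of it. The manifest below is a heavily simplified, invented example loosely following the IIIF Presentation 2.0 layout - real manifests carry many more fields - and the URLs and labels are placeholders, not real Library records.

```python
import json

# A much-simplified, made-up manifest in the style of IIIF Presentation 2.0.
manifest_json = """
{
  "@id": "https://example.org/iiif/playbills-vol1/manifest",
  "label": "Playbills, volume 1",
  "metadata": [
    {"label": "Shelfmark", "value": "Playbills 123"},
    {"label": "Date", "value": "1820-1850"}
  ],
  "sequences": [{
    "canvases": [{
      "label": "f. 1r",
      "images": [{"resource": {"@id": "https://example.org/iiif/img1/full/full/0/default.jpg"}}]
    }]
  }]
}
"""

manifest = json.loads(manifest_json)

# The descriptive metadata supplied by the source institution...
metadata = {m["label"]: m["value"] for m in manifest["metadata"]}

# ...and the image URLs, one or more per canvas (page).
image_urls = [
    img["resource"]["@id"]
    for canvas in manifest["sequences"][0]["canvases"]
    for img in canvas["images"]
]

print(metadata["Shelfmark"])  # Playbills 123
print(image_urls[0])
```

Because the images stay on the source institution's servers and only the manifest link is shared, re-fetching the manifest is all it takes to pick up updated cataloguing.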

Playbill showing the title after other large text

We've posted before about how we used IIIF manifests as the basis for our In the Spotlight crowdsourced tasks on LibCrowds.com. Playbills are great candidates for crowdsourcing because they are hard to transcribe automatically, and the layout and information present varies a lot. Using IIIF meant that we could access images of playbills directly from the British Library servers without needing server space and extra processing to make local copies. You didn't need technical knowledge to copy a manifest address and add a new volume of playbills to In the Spotlight. This worked well for a couple of years, but over time we'd found it difficult to maintain bespoke software for LibCrowds.

When we started looking for alternatives, the Zooniverse platform was an obvious option. Zooniverse hosts dozens of historical or cultural heritage projects, and hundreds of citizen science projects. It has millions of volunteers, and a 'project builder' that means anyone can create a crowdsourcing project - for free! We'd already started using Zooniverse for other Library crowdsourcing projects such as Living with Machines, which showed us how powerful the platform can be for reaching potential volunteers. 

But that experience also showed us how complicated the process of getting images and metadata onto Zooniverse could be. Using Zooniverse for volumes of playbills for In the Spotlight would require some specialist knowledge. We'd need to download images from our servers, resize them, generate a 'manifest' list of images and metadata, then upload it all to Zooniverse; and repeat that for each of the dozens of volumes of digitised playbills.

Fast forward to summer 2021, when we had the opportunity to put a small amount of funding into some development work by Zooniverse. I'd already collaborated with Sam Blickhan at Zooniverse on the Collective Wisdom project, so it was easy to drop her a line and ask if they had any plans or interest in supporting IIIF. It turned out they had, but until now they'd lacked the resources and an interested organisation to take it forward.

We came up with a brief outline of what the work needed to do, taking the ability to recreate some of the functionality of In the Spotlight on Zooniverse as a goal. Therefore, 'the ability to add subject sets via IIIF manifest links' was key. ('Subject set' is Zooniverse-speak for 'set of images or other media' that are the basis of crowdsourcing tasks.) And of course we wanted the ability to set up some crowdsourcing tasks with those items… The Zooniverse developer, Jim O'Donnell, shared his work in progress on GitHub, and I was very easily able to set up a test project and ask people to help create sample data for further testing. 

If you have a Zooniverse project and a IIIF address to hand, you can try out the import for yourself: add 'subject-sets/iiif?env=production' to your project builder URL. e.g. if your project is number #xxx then the URL to access the IIIF manifest import would be https://www.zooniverse.org/lab/xxx/subject-sets/iiif?env=production
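The pattern above can be expressed as a one-line template; the project number here is made up, so substitute your own from the project builder URL:

```python
# Hypothetical project number - replace with your own Zooniverse project ID.
project_id = 12345
url = f"https://www.zooniverse.org/lab/{project_id}/subject-sets/iiif?env=production"
print(url)
```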

Paste a manifest URL into the box. The platform parses the file to present a list of metadata fields, which you can flag as hidden or visible in the subject viewer (public task interface). When you're happy, you can click a button to upload the manifest as a new subject set (like a folder of items), and your images are imported. (Don't worry if it says '0 subjects'.)

 

Screenshot of manifest import screen

You can try out our live task and help create real data for testing ingest processes at https://frontend.preview.zooniverse.org/projects/bldigital/in-the-spotlight/classify

This is a very brief introduction, with more to come on managing data exports and IIIF annotations once you've set up, tested and launched a crowdsourced workflow (task). We'd love to hear from you - how might this be useful? What issues do you foresee? How might you want to expand or build on this functionality? Email digitalresearch@bl.uk or tweet @mia_out @LibCrowds. You can also comment on GitHub https://github.com/zooniverse/Panoptes-Front-End/pull/6095 or https://github.com/zooniverse/iiif-annotations

Digital work in libraries is always collaborative, so I'd like to thank British Library colleagues in Finance, Procurement, Technology, Collection Metadata Services and various Collections departments; the Zooniverse volunteers who helped test our first task and of course the Zooniverse team, especially Sam, Jim and Chris for their work on this.

 

12 April 2022

Making British Library collections (even) more accessible

Daniel van Strien, Digital Curator, Living with Machines, writes:

The British Library’s digital scholarship department has made many digitised materials available to researchers. This includes a collection of books digitised in partnership with Microsoft and processed using Optical Character Recognition (OCR) software to make the text machine-readable. There is also a collection of books digitised in partnership with Google.

Since digitisation, this collection of books has been used for many different projects. This includes recent work to try and augment this dataset with genre metadata and a project using machine learning to tag images extracted from the books. The books have also served as training data for a historic language model.

This blog post will focus on two challenges of working with this dataset: size and documentation, and discuss how we’ve experimented with one potential approach to addressing these challenges. 

One of the challenges of working with this collection is its size. The OCR output is over 20GB. This poses some challenges for researchers and other interested users wanting to work with these collections. Projects like Living with Machines are one avenue in which the British Library seeks to develop new methods for working at scale. For an individual researcher, one of the possible barriers to working with a collection like this is the computational resources required to process it. 

Recently we have been experimenting with a Python library, datasets, to see if this can help make this collection easier to work with. The datasets library is part of the Hugging Face ecosystem. If you have been following developments in machine learning, you have probably heard of Hugging Face already. If not, Hugging Face is a delightfully named company focusing on developing open-source tools aimed at democratising machine learning. 

The datasets library is a tool that aims to make it easier for researchers to share and process large datasets for machine learning efficiently. Whilst this was the library’s original focus, there are also other use cases for which it may help make datasets held by the British Library more accessible.

Some features of the datasets library:

  • Tools for efficiently processing large datasets 
  • Support for easily sharing datasets via a ‘dataset hub’ 
  • Support for documenting datasets hosted on the hub (more on this later). 

As a result of these and other features, we have recently worked on adding the British Library books dataset to the Hugging Face hub. Making the dataset available via the datasets library has made it more accessible in a few different ways.

Firstly, it is now possible to download the dataset in two lines of Python code: 

from datasets import load_dataset
ds = load_dataset('blbooks', '1700_1799')

We can also use the datasets library to process large datasets. For example, say we only want to include data with a high OCR confidence score (this partially helps filter out text with many OCR errors):

ds.filter(lambda example: example['mean_wc_ocr'] > 0.9)

One of the particularly nice features here is that the library uses memory mapping to store the dataset under the hood. This means that you can process data that is larger than the RAM you have available on your machine. This can make the process of working with large datasets more accessible. We could also use this as a first step in processing data before getting back to more familiar tools like pandas. 

dogs_data = ds['train'].filter(lambda example: "dog" in example['text'].lower())
df = dogs_data.to_pandas()

In a follow on blog post, we’ll dig into the technical details of datasets in some more detail. Whilst making the technical processing of datasets more accessible is one part of the puzzle, there are also non-technical challenges to making a dataset more usable. 

 

Documenting datasets 

One of the challenges of sharing large datasets is documenting the data effectively. Traditionally, libraries have mainly focused on describing material at the ‘item level’, i.e. documenting one item at a time. However, there is a difference between documenting one book and 100,000 books. There are no easy answers to this, but one possible avenue libraries could explore is Datasheets, an idea proposed by Timnit Gebru et al. in ‘Datasheets for Datasets’. A datasheet aims to provide a structured format for describing a dataset. This includes questions like how and why it was constructed, what the data consists of, and how it could potentially be used. Crucially, datasheets also encourage a discussion of the bias and limitations of a dataset. Whilst you can identify some of these limitations by working with the data, there is also a crucial amount of information known by curators of the data that might not be obvious to end-users. Datasheets offer one possible way for libraries to begin communicating this information more systematically.

The dataset hub adopts the practice of writing datasheets and encourages users of the hub to write one for their dataset. For the British Library books, we have attempted to write one of these datasheets. Whilst it is certainly not perfect, it hopefully begins to outline some of the challenges of this dataset and gives end-users a better sense of how they should approach it.

14 March 2022

The Lotus Sutra Manuscripts Digitisation Project: the collaborative work between the Heritage Made Digital team and the International Dunhuang Project team

Digitisation has become one of the key tasks for curatorial roles within the British Library. It is supported by two main pillars: making collection items accessible to everybody around the world, and preserving unique and sometimes very fragile items. Digitisation involves many different teams and workflow stages including retrieval, conservation, curatorial management, copyright assessment, imaging, workflow management, quality control, and the final publication to online platforms.

The Heritage Made Digital (HMD) team works across the Library to assist with digitisation projects. An excellent example of the collaborative nature of the relationship between the HMD and International Dunhuang Project (IDP) teams is the quality control (QC) of the Lotus Sutra Project’s digital files. It is crucial that images meet the quality standards of the digital process. As a Digitisation Officer in HMD, I am in charge of QC for the Lotus Sutra Manuscripts Digitisation Project, which is currently conserving and digitising nearly 800 Chinese Lotus Sutra manuscripts to make them freely available on the IDP website. The manuscripts were acquired by Sir Aurel Stein after they were discovered  in a hidden cave in Dunhuang, China in 1900. They are thought to have been sealed there at the beginning of the 11th century. They are now part of the Stein Collection at the British Library and, together with the international partners of the IDP, we are working to make them available digitally.

The majority of the Lotus Sutra manuscripts are scrolls and, after they have been treated by our dedicated Digitisation Conservators, our expert Senior Imaging Technician Isabelle does an outstanding job of imaging the fragile manuscripts. My job is then to prepare the images for publication online. This includes checking that they have the correct technical metadata such as image resolution and colour profile, are an accurate visual representation of the physical object and that the text can be clearly read and interpreted by researchers. After nearly 1000 years in a cave, it would be a shame to make the manuscripts accessible to the public for the first time only to be obscured by a blurry image or a wayward piece of fluff!

With the scrolls measuring up to 13 metres long, most are too long to be imaged in one go. They are instead shot in individual panels, which our Senior Imaging Technicians digitally “stitch” together to form one big image. This gives online viewers a sense of the physical scroll as a whole, in a way that would not be possible in real life for those scrolls that are more than two panels in length unless you have a really big table and a lot of specially trained people to help you roll it out. 

Or.8210/S.1530: individual panels
Or.8210/S.1530: stitched image

 

This post-processing can create issues, however. Sometimes an error in the stitching process can cause a scroll to appear warped or wonky. In the stitched image for Or.8210/S.6711, the ruled lines across the top of the scroll appeared wavy and misaligned. But when I compared this with the images of the individual panels, I could see that the lines on the scroll itself were straight and unbroken. It is important that the digital images faithfully represent the physical object as far as possible; we don’t want anyone thinking these flaws are in the physical item and writing a research paper about ‘Wonky lines on Buddhist Lotus Sutra scrolls in the British Library’. Therefore, I asked the Senior Imaging Technician to restitch the images together: no more wonky lines. However, we accept that the stitched images cannot be completely accurate digital surrogates, as they are created by the Imaging Technician to represent the item as it would be seen if it were to be unrolled fully.

 

Or.8210/S.6711: distortion from stitching. The ruled line across the top of the scroll is bowed and misaligned

 

Similarly, our Senior Imaging Technician applies ‘digital black’ to make the image background a uniform colour. This is to hide any dust or uneven background and ensure the object is clear. If this is accidentally overused, it can make it appear that a chunk has been cut out of the scroll. Luckily this is easy to spot and correct, since we retain the unedited TIFFs and RAW files to work from.

 

Or.8210/S.3661, panel 8: overuse of digital black when filling in tear in scroll. It appears to have a large black line down the centre of the image.

 

Sometimes the scrolls are wonky, or dirty or incomplete. They are hundreds of years old, and this is where it can become tricky to work out whether there is an issue with the images or the scroll itself. The stains, tears and dirt shown in the images below are part of the scrolls and their material history. They give clues to how the manuscripts were made, stored, and used. This is all of interest to researchers and we want to make sure to preserve and display these features in the digital versions. The best part of my job is finding interesting things like this. The fourth image below shows a fossilised insect covering the text of the scroll!

 

Black stains: Or.8210/S.2814, panel 9
Torn and fragmentary panel: Or.8210/S.1669, panel 1
Insect droppings obscuring the text: Or.8210/S.2043, panel 1
Fossilised insect covering text: Or.8210/S.6457, panel 5

 

We want to minimise the handling of the scrolls as much as possible, so we will only reshoot an image if it is absolutely necessary. For example, I would ask a Senior Imaging Technician to reshoot an image if debris is covering the text and makes it unreadable - but only after inspecting the scroll to ensure it can be safely removed and is not stuck to the surface. However, if some debris such as a small piece of fluff, paper or hair, appears on the scroll’s surface but is not obscuring any text, then I would not ask for a reshoot. If it does not affect the readability of the text, or any potential future OCR (Optical Character Recognition) or handwriting analysis, it is not worth the risk of damage that could be caused by extra handling. 

Reshoot: Or.8210/S.6501: debris over text  /  No reshoot: Or.8210/S.4599: debris not covering text.

 

These are a few examples of the things to which the HMD Digitisation Officers pay close attention during QC. Only through this careful process can we ensure that the digital images accurately reflect the physicality of the scrolls and represent their original features. By developing a QC process that applies the best techniques and procedures, working to defined standards and guidelines, we succeed in making these incredible items accessible to the world.

Read more about the Lotus Sutra Project here: IDP Blog

IDP website: IDP.BL.UK

And IDP twitter: @IDP_UK

Dr Francisco Perez-Garcia

Digitisation Officer, Heritage Made Digital: Asian and African Collections

Follow us @BL_MadeDigital

10 March 2022

Scoping the connections between trusted arts and humanities data repositories

CONNECTED: Connecting trusted Arts and Humanities data repositories is a newly funded activity, supported by AHRC. It is led by the British Library, with the Archaeology Data Service and the Oxford Text Archive as co-investigators, and is supported by consultants from MoreBrains Cooperative. The CONNECTED team believes that improving discovery and curation of heritage and emergent content types in the arts and humanities will increase the impact of cultural resources, and enhance equity. Great work is already being done on discovery services for the sector, so we decided to look upstream, and focus on facilitating repository and archive deposit.

The UK boasts a dynamic institutional repository environment in the HE sector, as well as a range of subject- or field-specific repositories. With a distributed repository landscape now firmly established, challenges and inefficiencies still remain that reduce its impact. These include issues around discovery and access, but also questions around interoperability, the relationship of specialised vs general infrastructures, and potential duplication of effort from an author/depositor perspective. Greater coherence and interoperability will effectively unite different trusted repository services to form a resilient distributed data service, which can grow over time as new individual services are required and developed. Alongside the other projects funded as part of ‘Scoping future data services for the arts and humanities’, CONNECTED will help to deliver this unified network.

As practice in the creative arts becomes more digital and the digital humanities continue to thrive, the diversity of ways in which this research is expressed continues to grow. Researchers are increasingly able to combine artefacts, documents, and materials in new and innovative ways; practice-based research in the arts is creating a diverse range of (often complex) outputs, creating new curation and discovery needs; and heritage collections often contain artefacts with large amounts of annotation and commentary amassed over years or centuries, across multiple formats, and with rich contextual information. This expansion is already exposing the limitations of our current information systems, with the potential for vital context and provenance to become invisible. Without additional, careful future-proofing, the risks of information loss and limits on access will only expand. Metadata creation, deposit, preservation, and discovery strategies should therefore be tailored to meet the very different needs of the arts and humanities.

A number of initiatives are aimed at improving interoperability between metadata sources in ways that are more oriented towards the needs of the arts and humanities. Drawing these together with the insights to be gained from the abilities (and limitations) of bibliographic and data-centric metadata and discovery systems will help to generate robust services in the complex, evolving landscape of arts and humanities research and creation.

The CONNECTED project will assemble experts, practitioners, and researchers to map current gaps in the content curation and discovery ecosystem and weave together the strengths and potentials of a range of platforms, standards, and technologies in the service of the arts and humanities community. Our activities will run until the end of May, and will comprise three phases:

Phase 1 - Discovery

We will focus on repository or archive deposit as a foundation for the discovery and preservation of diverse outputs, and also as a way to help capture the connections between those objects and the commentary, annotation, and other associated artefacts. 

A data service for the arts and humanities must be developed with researcher needs as a priority, so the project team will engage in a series of semi-structured interviews with a variety of stakeholders including researchers, librarians, curators, and information technologists. The interviews will explore the following ideas:

  • What do researchers need when engaging in discovery of both heritage materials and new outputs?
  • Are there specific needs that relate to different types of content or use-cases? For example, research involving multimedia or structured information processing at scale?
  • What can the current infrastructure support, and where are the gaps between what we have and what we need?
  • What are the feasible technical approaches to transform information discovery?

Phase 2 - Data service programme scoping and planning

The findings from phase 1 will be synthesised using a commercial product strategy approach known as a canvas analysis. Based on the initial impressions from the semi-structured interviews, it is likely that an agile, product, or value proposition canvas will be used to synthesise the findings and structure thinking so that a coherent and robust strategy can be developed. Outputs from the strategy canvas exercise will then be applied to a fully costed and scoped product roadmap and budget for a national data deposit service for the arts and humanities.

Phase 3 - Scoping a unified archiving solution

Building on the partnerships and conversations from the previous phases, the feasibility of a unified ‘deposit switchboard’ will be explored. The purpose of such a switchboard is to enable researchers, curators, and creators to easily deposit items in the most appropriate repository or archive in their field for the object type they are uploading. Using insights gained from the landscaping interviews in phase 1, the team will identify potential pathways to developing a routing service for channelling content to the most appropriate home.

We will conclude with a virtual community workshop to explore the challenges and desirability of the switchboard approach, with a special focus on the benefits this could bring to the uploader of new content and resources.

This is an ambitious project, through which we hope to deliver:

  • A fully costed and scoped technical and organisational roadmap to build the required components and framework for the National Collection
  • Improved usage of resources in the wider GLAM and institutional network, including of course the Archaeology Data Service, The British Library's Shared Research Repository, and the Oxford Text Archive
  • Steps towards a truly community-governed data infrastructure for the arts and humanities as part of the National Collection

As a result of this work, access to UK cultural heritage and outputs will be accelerated and simplified, the impact of the arts and humanities will be enhanced, and we will help the community to consolidate the UK's position as a global leader in digital humanities and infrastructure.

This post is from Rachael Kotarski (@RachPK), Principal Investigator for CONNECTED, and Josh Brown from MoreBrains.

14 February 2022

PhD Placement on Mapping Caribbean Diasporic Networks through Correspondence

Every year the British Library hosts a range of PhD placement scheme projects. If you are interested in applying for one of these, the 2022 opportunities are advertised here. There are currently 15 projects available across Library departments, all starting from June 2022 onwards and ending before March 2023. If you would like to work with born digital collections, you may want to read last week’s Digital Scholarship blog post about two projects on enhanced curation, hybrid archives and emerging formats. However, if you are interested in Caribbean diasporic networks and want to experiment with creating network analysis visualisations, then read on to find out more about the “Mapping Caribbean Diasporic Networks through correspondence (2022-ACQ-CDN)” project.

This is an exciting opportunity to be involved with the preliminary stages of a project to map the Caribbean Diasporic Network evident in the ‘Special Correspondence’ files of the Andrew Salkey Archive. This placement will be based in the Contemporary Literary and Creative Archives team at the British Library with support from Digital Scholarship colleagues. The successful candidate will be given access to a selection of correspondence files to create an item level dataset and explore the content of letters from the likes of Edward Kamau Brathwaite, C.L.R. James, and Samuel Selvon.

Photograph of Andrew Salkey, from the Andrew Salkey Archive, Deposit 10310. With kind permission of Jason Salkey.

The main outcome envisaged for this placement is to develop a dataset, using a sample of ten files, linking the data and mapping the correspondents’ names, the locations they were writing from, and the dates of the correspondence in a spreadsheet. The placement student will also learn how to use the Gephi Open Graph Visualisation Platform to create a visual representation of this network, associating individuals with each other and mapping their movement across the world between the 1950s and 1990s.

Gephi is open-source software for visualising and analysing networks. Its developers provide a step-by-step guide to getting started; the first step is to upload a spreadsheet detailing your ‘nodes’ and ‘edges’. To show how Gephi can be used, we've included an example below, created by previous British Library research placement student Sarah FitzGerald from the University of Sussex, who used data from the Endangered Archives Programme (EAP) to visualise all EAP applications received between 2004 and 2017.
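The nodes-and-edges spreadsheet that Gephi ingests can be sketched in a few lines of Python. The correspondence records below are invented for illustration (the names echo the archive description above, but the places and dates are made up), and the column headings follow Gephi's spreadsheet import convention of Id/Label for nodes and Source/Target for edges:

```python
import csv
import io

# Hypothetical sample records: (writer, place written from, year).
letters = [
    ("Edward Kamau Brathwaite", "Kingston", 1967),
    ("C.L.R. James", "London", 1962),
    ("Samuel Selvon", "London", 1958),
]

# Nodes: every person and place, with Id and Label columns.
names = sorted({w for w, _, _ in letters} | {p for _, p, _ in letters})
nodes = io.StringIO()
node_writer = csv.writer(nodes)
node_writer.writerow(["Id", "Label"])
for name in names:
    node_writer.writerow([name, name])

# Edges: one row per letter, from writer to place, with the year as an attribute.
edges = io.StringIO()
edge_writer = csv.writer(edges)
edge_writer.writerow(["Source", "Target", "Year"])
for person, place, year in letters:
    edge_writer.writerow([person, place, year])

print(nodes.getvalue())
print(edges.getvalue())
```

Saved as two CSV files, these can be loaded through Gephi's 'Import spreadsheet' feature to draw the network.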

Network visualisation of EAP Applications created by Sarah FitzGerald

In this visualisation the size of each country relates to the number of applications it features in, as country of archive, country of applicant, or both. The colours show related groups. Each line shows the direction and frequency of application: it always travels in a clockwise direction from country of applicant to country of archive, and the thicker the line, the more applications. Where the country of applicant and country of archive are the same, the line becomes a loop. If you want to read more about the other visualisations that Sarah created during her project, please check out these two blog posts:

We hope this new PhD placement will offer the successful candidate the opportunity to develop their specialist knowledge through access to the extensive correspondence series in the Andrew Salkey archive, and to undertake practical research in a curatorial context by improving the accessibility of linked metadata for this collection material. This project is a vital building block in improving the Library’s engagement with this material and exploring the ways it can be accessed by a wider audience.

If you want to apply, details are available on the British Library website at https://www.bl.uk/research-collaboration/doctoral-research/british-library-phd-placement-scheme. Applications for all 2022/23 PhD Placements close on Friday 25 February 2022, 5pm GMT. The application form and guidelines are available online here. Please address any queries to research.development@bl.uk

This post is by Digital Curator Stella Wisdom (@miss_wisdom) and Eleanor Casson (@EleCasson), Curator in Contemporary Archives and Manuscripts.

10 February 2022

In conversation: Meet Silvija Aurylaitė, the new British Library Labs Manager

The newly appointed manager of British Library Labs (BL Labs), Silvija Aurylaitė, is excited to start leading the BL Labs transformation with a new focus on computational creative thinking. BL Labs is a welcoming space for everyone curious about computational research and using the British Library’s digital collections - data scientists, digital humanists, artists, creative practitioners, and anyone else interested in digital research.

Introducing Silvija Aurylaitė, new manager of BL Labs

Find out more from Silvija, in conversation with Maja Maricevic, BL Head of Higher Education and Science.

 

Maja: The Labs have a proud history of experimenting and innovating with the British Library’s digital collections. Can you tell us more about your own background?

Silvija: Ever since I discovered BL Labs in London 8 years ago, I have been immersed in the world of experimentation with digital collections. I started researching collections from open GLAMs (galleries, libraries, archives and museums) around the world and the implications of copyright and licensing for creative reuse. In a large ecosystem of open digital collections, my special interest has been identifying content that people can use to bring their creative ideas, such as new design works, to life.

Inspired by the Labs, I started developing my own curatorial web project, which won the Europeana Creative Design Challenge in 2015. The award gave me the chance to work with a team of international experts to learn new skills in areas such as IT, copyright and social entrepreneurship. This experience later evolved into ‘Revivo Images’, a pilot website that gives guidance on open image collections around the world, carefully selected for quality and the reliability of copyright and licence information, with explanations of how to use the databases. It was the result of collaboration with a great interdisciplinary team including an IT lead, programmers, curators, designers and a copywriter.

All this gave me invaluable experience in overseeing a digital collections web project from vision to implementation. I learned about curating content from across collections, building an image database and mapping metadata using various standards. We also used AI and human input to create keywords and thematic catalogues, and designed a simple, minimalist user interface.

What I most enjoyed about this journey, actually, was meeting a great range of people in many creative fields, from professional animators to students looking for a theme for their BA final thesis - and learning what excited them most, and what barriers they faced in using open collections. I met many of them at various art festivals, universities, design schools and events where I delivered talks and creative workshops in my free time to spread the word about open digital collections for creativity. For two years I was also responsible for the ‘Bridgeman Education’ online database, one of the largest digital image collections, with over 1,300,000 images from the GLAM sector, designed for the use of art images in higher education curricula. I had the opportunity to talk to many librarians, lecturers and students from around the world about what they find most useful in this new digital turn.

As a result of this, I am particularly excited about introducing the Labs to university students: from students in computer science departments with coding skills, to researchers in the social sciences and humanities, to creativity champions in fashion, graphic design or jewellery who might be attracted to the aesthetic qualities of our collections, to those looking to pick up creative coding skills.

The landscape has changed a lot in the 8 years since I first learned about the Labs, and I gradually started my own journey of learning code and algorithmic thinking. In my previous role at the British Library, as Rights Officer for the Heritage Made Digital project, we already approached digital collections as data. Now we are all embracing computational data science methods to gain new insights into digital collections, and that is what the future British Library Labs is going to celebrate.

 

Maja: You have a strong connection to BL Labs, since you were a Labs volunteer 8 years ago. What most inspired you when you first heard of the Labs?

Silvija: Personally, the Labs were my first professional experience abroad after my MA studies in intellectual history at an American university in Budapest, and they happened to be one of the main incentives to stay in London.

This city has attracted me with its serendipity - you can have a great range of urban experiences, from attending the oldest special interest societies and visiting antiquarian bookshops, to meeting the founders of the latest startups at their regular gatherings and getting up to speed with the mindset of perpetual innovation.

When I first heard about the Labs at one of its public events, this sentence struck me: “experiment with the BL digital collections to create something new”, with the “new” being undefined and open. I had this idea of perpetuity - the possibility of endlessly combining the knowledge and aesthetics of the past, safeguarded by one of the biggest libraries in the world, with the creative visions, skills and technology of today and tomorrow.

Such endless new experiences of digital collections can be accelerated by creating a dedicated space for experimentation - a collider or a matchmaker - that contributes to the diverse, serendipitous urban experience of London itself. This is how I see the Labs.

From a user’s point of view, I am particularly excited about ‘semiotic democracy’, or ‘the ability of users to produce and disseminate new creations and to take part in public cultural discourse’[1] (Stark, 2006). I believe this new playful approach to digitised out-of-copyright cultural materials will fundamentally change the way we see GLAMs. We’ll look at them less and less as spaces where we only learn about the past as it used to be, as passive recipients, and more and more as spaces where we are co-creators, able to enter into a meaningful dialogue and reshape meanings, narratives and experiences.

 

Maja: Prior to your Labs appointment, you also gained significant rights management experience. What have you learned that will be useful for the Labs?

Silvija: It was a delight to work with Matthew Lambert, the Head of Copyright, Policy & Assurance, on the Heritage Made Digital project, led by Sandra Tuppen, setting up the British Library’s copyright workflow for both current and historical digitisation projects. Thanks to this project, users can now explore the BL’s digital images in the Universal Viewer with attributed rights statements and usage terms.

The last 3.5 years were a great exercise in dealing with very large, often very messy, data to create complex systems, policies and procedures that allow oversight of all the important aspects of digital data, including copyright and licensing, data protection and sensitivities. Of course, such work in the Library is of massive importance because it affects the level of freedom we later have to experiment with, reuse and do further research based on this data.

Personally, the Heritage Made Digital project is also very precious to me because of its collaborative nature. The team uses Microsoft SharePoint to facilitate data contributions from across many departments in the BL, and they are fantastic at promoting and celebrating digitisation as a common effort to make content publicly accessible. I will definitely draw on this experience to suggest ways of registering and documenting both the BL’s datasets and related reuse projects as a similar collaborative project within the Library.

 

Maja: There is so much that is changing in digital research all the time. Are there particular current developments that you find exciting and why?

Silvija: Yes! First, I find the moment of change itself exciting - there is no book about the tools we use today that won’t be out of date tomorrow. This is a good neuroplasticity exercise that trains the mind to stay awake and be constantly attentive to new developments and opportunities.

Second, I absolutely love seeing how many people, from creators to researchers and library staff, are gradually and naturally embracing coding languages. With this comes critical thinking: the ability to bypass often-outdated database interfaces and reveal exciting data insights, simply by having a liberating set of new digital skills.

And, third, I am really excited about the possibility of scaling up and creating a bigger impact with existing breakthrough projects and brilliant ideas relating to the British Library’s data. I believe this could be done by finding consensus on how we want to register and document data science initiatives - finalised, ongoing and most wanted, both internal and external - and then by promoting this knowledge further.

This would allow us to enter a new stage of BL Labs. The new ecosystem of reuse would promote sustainability, reproducibility, adaptation and crowdsourced improvement of existing projects, giving us new superpowers!

[1] Stark, Elisabeth (2006). ‘Free culture and the internet: a new semiotic democracy’. openDemocracy (20 June). URL: https://www.opendemocracy.net/en/semiotic_3662jsp

23 December 2021

Three crowdsourcing opportunities with the British Library

Digital Curator Dr Mia Ridge writes: In case you need a break from whatever combination of weather, people and news is around you, here are some ways you can entertain yourself (or the kids!) while helping make the collections of the British Library more findable, or helping researchers understand our past. You might even learn something or make new discoveries along the way!

Your help needed: Living with Machines

Mia Ridge writes: Living with Machines is a collaboration between the British Library and the Alan Turing Institute with partner universities. Help us understand the 'machine age' through the eyes of ordinary people who lived through it. Our refreshed task builds on our previous work, and includes fresh newspaper titles, such as the Cotton Factory Times.

What did the Victorians think a 'machine' was - and did it matter where you lived, or if you were a worker or a factory owner? Help us find out: https://www.zooniverse.org/projects/bldigital/living-with-machines

Your contributions will not only help researchers - they'll also go on display in our exhibition.

Image of a Cotton Factory Times masthead
You can read articles from Manchester's Cotton Factory Times in our crowdsourced task

 

Your help needed: Agents of Enslavement? Colonial newspapers in the Caribbean and hidden genealogies of the enslaved

Launched in July this year, Agents of Enslavement? is a research project which explores the ways in which colonial newspapers in the Caribbean facilitated and challenged the practice of slavery. One goal is to create a database of enslaved people identified within these newspapers. This benefits people researching their family history as well as those who simply want to understand more about the lives of enslaved people and their acts of resistance.

Project Investigator Graham Jevon has posted some insights into how he processes the results on the project forum, which is full of fascinating discussion. Join in as you take part: https://www.zooniverse.org/projects/gjevon/agents-of-enslavement

Your help needed: Georeferencer

Dr. Gethin Rees writes: The community has now georeferenced 93% of the 1,277 maps added from our War Office Archive back in July (as mentioned in our previous newsletter).

Some of the remaining maps are quite tricky to georeference, so if there is a perplexing map you would like some guidance with, do get in touch with me and our curator for modern mapping by emailing georeferencer@bl.uk and we will try to help. Please do look forward to some exciting new maps being released on the platform in 2022!

01 December 2021

Open and Engaged 2021: Review

Engagement with cultural heritage collections and the research impact beyond mainstream metrics in arts and humanities

Open and Engaged, the British Library’s annual event in Open Access Week, took place virtually on 25 October. The theme of the conference was Understanding the Impact of Open in the Arts and Humanities beyond the University, as described in a previous blog post.

The slides and video recordings, together with their transcripts, are now available through the British Library’s Research Repository. This blog post gives you a flavour of the talks and sessions in a nutshell.

The programme comprised two main sessions: one on increasing engagement with cultural heritage collections, and the other on measuring and evaluating the impact of open resources beyond journal articles.

British Library in the background with the piazza full of people in the front
British Library and Piazza by Paul Grundy

 

Session One: Increasing Engagement with Cultural Heritage Collections

The first session opened with a talk from Brigitte Vézina of Creative Commons (CC) about how CC supports GLAMs (galleries, libraries, archives and museums) in embracing open access and unlocking universal access to knowledge and culture. Brigitte introduced CC’s Open GLAM programme, a coordinated global effort to help GLAMs make the content they steward openly available and reusable for the public good.

The British Library’s Sam van Schaik presented the Endangered Archives Programme (EAP), which provides funding for projects to digitise and preserve archival materials at risk of destruction. The resulting digital images and sound files are made available via the British Library’s website. Sam drew attention to the ethical challenges around the CC licences used for these digital materials, and the practical considerations of working globally.

Merete Sanderhoff from the National Gallery of Denmark (SMK) raised a concern that the GLAM sector, at the institutional level, is lagging behind in embracing the full potential of open cultural heritage. Merete explained that GLAM users increasingly benefit from art and knowledge beyond institutional walls, using data from GLAM collections and spurring on developments in digital literacy, citizen science and democratic citizenship.

Towards a National Collection (TaNC), the AHRC-funded research development programme, was the subject of the last talk of this session, presented by Rebecca Bailey, Programme Director at TaNC. The programme sponsors projects that work to link collections and encourage cross-searching of multiple collection types, both to enable research and to enhance public engagement. Rebecca outlined the achievements and ambitions of the projects as they start to look ahead to a national collections research infrastructure.

This session highlighted that the GLAM sector should embrace its full potential in making cultural heritage open for the public good beyond its physical premises. The use of more open and public domain licences will make it easier to use digital heritage content and resources in research and creative spheres. A challenge arises with the unethical use of digital collections in some cases, but licensing mechanisms are not the tools with which to police research ethics.

 

Session Two: Measuring and Evaluating Impact of Open Resources Beyond Journal Articles

The second half of the conference started with a metrics project, Cobaltmetrics, which works towards making altmetrics genuinely alternative by using URIs. Luc Boruta from Thunken talked about bringing algorithmic fairness to impact measurement, from web-scale attention tracking to computer-assisted data storytelling.

Gemma Derrick from Lancaster University presented on the hidden REF experience and highlighted assessing the broader value of research culture. Gemma noted that doubt about whether impact can be measured doesn’t come from a lack of tools; rather, what is considered impact differs between individuals, institutions and disciplines. As she stated, “the nature of impact and the nature of evaluation is inherently better when humans are involved, mainly because mitigating factors and mitigating aspects of our research, and what makes our research culture really important, are less likely to be overlooked by an automated system.” This is what the hidden REF addressed, celebrating all research outputs and every role that makes research possible.

Anne Boddington from Kingston University reflected on research impact in three parts: its definition; partnering and collaboration between GLAMs and higher education institutions; and reflections on future benefits. Anne talked about the challenges of impact, the kinds of evidence it demands and the opportunities it presents. She concluded by noting that impact is here to stay, and that there are significant areas for growth and opportunities for innovation and leadership in the context of impact.

Helen Adams from Oxford University Gardens, Libraries & Museums (GLAM) presented the Online Active Community Engagement (O-ACE) project, which combined arts and science to measure the benefits of online culture for mental health in young people. She highlighted how GLAM organisations can actively involve audiences in medical research, and how cultural interventions may positively affect individual wellbeing prior to diagnosis, treatment or social prescribing pathways. The conference ended with this great case study on impact assessment.

In her closing remarks, Rachael Kotarski of the British Library underlined that opening up GLAM organisations not only allows us to break down the walls of our buildings to get content out there, but also crosses geographic boundaries to put content in front of communities who might not have had a chance to experience it before. It also allows us to work with the communities who originated content, to understand their concerns and not just those of our organisations. Rachael echoed that licensing restrictions are not the solution to all our questions, or to ethical issues. It is important that we reflect on what we have learned, adjust and rethink our approach, and identify what really allows us to balance access, engagement and creativity.

In the context of research impact, we need to centre the human in our assessments and processes. Another factor in impact assessment is the relatively short period of time in which impact is measured. Examples like the O-ACE project also showed us that impact can take much longer to emerge than we think, and that the impacts we can see will vary over that time. Assessing such interventions therefore needs a longer-term view.

Those who didn’t attend the conference, or would like to revisit the talks, can find the recordings in the British Library’s Research Repository. Social media interactions can be followed with the #OpenEngaged hashtag.

We look forward to hosting Open and Engaged 2022, hopefully in person at the British Library.

This blog post was written by Ilkay Holt, Scholarly Communications Lead, part of the Research Infrastructure Services team.
