Digital scholarship blog


10 February 2022

In conversation: Meet Silvija Aurylaitė, the new British Library Labs Manager

The newly appointed manager of the British Library Labs (BL Labs), Silvija Aurylaitė, is excited to start leading the BL Labs transformation with a new focus on computational creative thinking. The BL Labs is a welcoming space for everyone curious about computational research and using the British Library’s digital collections - data scientists, digital humanists, artists, creative practitioners, and anyone else interested in digital research.

Image of BL Labs Manager Silvija Aurylaitė
Introducing Silvija Aurylaitė, new manager of BL Labs

Find out more from Silvija, in conversation with Maja Maricevic, BL Head of Higher Education and Science.

 

Maja: The Labs have a proud history of experimenting and innovating with the British Library’s digital collections. Can you tell us more about your own background?

Silvija: Ever since I discovered the BL Labs in London 8 years ago, I have been immersed in the world of experimentation with digital collections. I started researching collections from open GLAMs (galleries, libraries, archives and museums) around the world and the implications of copyright and licensing for creative reuse. In a large ecosystem of open digital collections, my special interest has been identifying content for people to use to bring their creative ideas, such as new design works, to life.

Inspired by the Labs, I started developing my own curatorial web project, which won the Europeana Creative Design Challenge in 2015. The award gave me the chance to work with a team of international experts to learn new skills in areas such as IT, copyright and social entrepreneurship. This experience later evolved into ‘Revivo Images’, a pilot website that gives guidance on open image collections around the world, carefully selected for quality and for the reliability of their copyright and licence information, with explanations of how to use the databases. It was the result of collaboration with a great interdisciplinary team including an IT lead, programmers, curators, designers and a copywriter.

All this gave me invaluable experience in overseeing a digital collections web project from vision to implementation. I learned about curating content from across collections, building an image database and mapping metadata using various standards. We also used AI and human input to create keywords and thematic catalogues, and designed a simple, minimalist user interface.

What I most enjoyed about this journey, actually, was meeting a great range of creative people in many creative fields, from professional animators to students looking for a theme for their BA final thesis - and learning what excited them most, and what barriers they faced in using open collections. I met many of them at various art festivals, universities, design schools and events where I delivered talks and creative workshops in my free time to spread the word about open digital collections for creativity. For two years I was also responsible for the ‘Bridgeman Education’ online database, one of the largest digital image collections with over 1,300,000 images from the GLAM sector, designed for the use of art images in higher education curricula. I had the opportunity to talk to many librarians, lecturers and students from around the world about what they find most useful in this new digital turn.

As a result of this, I am particularly excited about introducing the Labs to university students: from students in computer science departments with coding skills, to researchers in the social sciences and humanities, to creativity champions in fashion, graphic design or jewellery who might be attracted to the aesthetic qualities of our collections, or those looking to pick up creative coding skills.

The landscape has changed a lot in the 8 years since I learned about the Labs, and I have gradually started my own journey of learning to code and think algorithmically. In my previous role at the British Library, as Rights Officer for the Heritage Made Digital project, we already approached digital collections as data. Now we are all embracing computational data science methods to gain new insights into digital collections, and that is what the future British Library Labs is going to celebrate.

 

Maja: You have a strong connection to the BL Labs since you were the Labs volunteer 8 years ago. What most inspired you when you first heard of the Labs?

Silvija: Personally, the Labs were my first professional experience abroad after my MA studies in intellectual history at the American university in Budapest, and happened to be one of the main incentives to stay in London.

This city attracted me with its serendipity - you can have a great range of urban experiences, from attending the oldest special interest societies and visiting antiquarian bookshops to meeting founders of the latest startups at their regular gatherings and getting up to speed with the mindset of perpetual innovation.

When I first heard about the Labs at one of its public events, this sentence struck me: “experiment with the BL digital collections to create something new”, with the “new” being undefined and open. I had this idea of perpetuity - the possibility of endlessly combining the knowledge and aesthetics of the past, safeguarded by one of the biggest libraries in the world, with the creative visions, skills and technology of today and tomorrow.

Such endless new experiences of digital collections can be accelerated by creating a dedicated space for experimentation - a collider or a matchmaker - that contributes to the diverse serendipitous urban experience of London itself. This is how I see the Labs.

Looking from a user’s point of view, I am particularly excited about ‘semiotic democracy’, or ‘the ability of users to produce and disseminate new creations and to take part in public cultural discourse’[1] (Stark, 2006). I believe this new playful approach to digitised out-of-copyright cultural materials will fundamentally change the way we see GLAMs. We’ll look at them less and less as spaces where, as recipients, we only learn about the past as it used to be, and more and more as places where we can be co-creators, able to enter into a meaningful dialogue and reshape meanings, narratives and experiences.

 

Maja: Prior to your Labs appointment, you also gained significant rights management experience. What have you learned that will be useful for the Labs?

Silvija: It was a delight to work with Matthew Lambert, the Head of Copyright, Policy & Assurance, for the Heritage Made Digital project, led by Sandra Tuppen, in setting up the British Library’s copyright workflow for both current and historical digitisation projects. This project now allows users to explore the BL’s digital images in the Universal Viewer with attributed rights statements and usage terms.

These last 3.5 years were a great exercise in dealing with very large, often very messy, data to create complex systems, policies and procedures which allow oversight of all important aspects of the digital data, including copyright and licensing, data protection and sensitivities. Of course, such work in the Library is of massive importance because it affects the level of freedom we later have to experiment, reuse and do further research based on this data.

Personally, the Heritage Made Digital project is also very precious to me because of its collaborative nature. The team uses an MS SharePoint tool to facilitate data contributions from across many departments in the BL, and they are just fantastic at promoting and celebrating digitisation as a common effort to make content publicly accessible. I will definitely use this experience to suggest solutions for registering and documenting both the BL’s datasets and related reuse projects as a similar collaborative project within the Library.

 

Maja: There is so much that is changing in digital research all the time. Are there particular current developments that you find exciting and why?

Silvija: Yes! First, I find the moment of change itself exciting - there is no book about the tools we use today that won’t be out of date tomorrow. This is a good neuroplasticity exercise that trains the mind not to sleep but to stay constantly attentive to new developments and opportunities.

Second, I absolutely love to see how many people, from creators to researchers and library staff, are gradually and naturally embracing coding languages. With this comes critical thinking: the ability to bypass often outdated database interfaces and reveal exciting data insights, simply by having a liberating package of new digital skills.

And, third, I am super excited about the possibility of upscaling and creating a bigger impact with existing breakthrough projects and brilliant ideas relating to the British Library’s data. I believe this could be done by finding consensus on how we want to register and document data science initiatives - finalised, ongoing and most wanted, both internally and externally - and then by promoting this knowledge further.

This would allow us to enter a new stage of the BL Labs. The new ecosystem of reuse would promote sustainability, reproducibility, adaptation and crowdsourced improvement of existing projects, giving us new superpowers!

[1] Stark, Elisabeth (2006). ‘Free culture and the internet: a new semiotic democracy’. openDemocracy (20 June). URL: https://www.opendemocracy.net/en/semiotic_3662jsp

07 February 2022

New PhD Placements on Enhanced Curation: Hybrid Archives and Emerging Formats

The British Library is accepting applications for the new round of 2022 PhD Placement opportunities: there are 15 projects available across Library departments, all starting from June 2022 onwards and ending before March 2023. Two of the projects within the Contemporary British Collections department focus on Enhanced Curation as an approach to add to the research value of an archival object or digital publication.

“Developing an enhanced curation framework for contemporary hybrid archives (2022-CB-HAC)” will outline a framework for Enhanced Curation in relation to contemporary hybrid archives. These archival collections are the record of the creative and professional lives of prominent individuals in UK society, containing both paper and digital material. So far we have defined Enhanced Curation as the means by which the research value of these records can be enhanced through the creation, collection, and interrogation of the contextual information which surrounds them.

Luckily, we’re in a privileged position – most of our archive donors are living individuals who can illuminate their creative practice for us in real-time. Similarly, with forensic techniques, we’re capturing more data than ever before when we acquire an archive. The truly live questions are then – how can we use this position to best effect? What can we do with what we’re already collecting? What else should we be collecting? And how can we represent this data in engaging and enlightening new ways for the benefit of everyone, including our researchers and exhibition audiences?

Enhanced Curation, as we see it, is about bringing these dynamic collections to life for as many people as possible.  In approaching these questions, the chosen student will engage in a mixture of theoretical and practical work – first outlining the relevant debates and techniques in and around curation, archival science, museology and digital humanities, and then recommending a course of action for one particular hybrid personal archive. This is a collaborative exercise, though, and they will be provided with hands-on training for working with (and getting the most out of) this growing collection area by specialist curatorial staff at the Library.

Photograph of a floppy disk and its case
Floppy disk from the Will Self archive.

“Collecting complex digital publications: Testing an enhanced curation method (2022-CB-EF)” focuses on the Library’s collection of emerging formats. Emerging formats are defined as born-digital publications whose structure, technical dependencies and highly interactive nature challenge our traditional collection methods. These publications include apps, such as the interactive adventure 80 Days, as well as digital interactive narratives, such as the examples collected in the UK Web Archive Interactive Narratives and New Media Writing Prize collections. Collection and preservation of these digital formats in their entirety might not always be possible: there are many challenges and implications in terms of technical capabilities, software and hardware dependencies, copyright restrictions and long-term solutions that are effective against technical obsolescence.

The collection and creation of contextual information is one approach to fill in the gaps and enhance curation for these digital publications. The placement student will help us test a collection matrix for contextual information relating to emerging formats, which includes – but is not limited to – webpages, interviews, reviews, blog posts and screenshots/screencasts of a work in use. These might be collected using a variety of methods (e.g. web archiving, direct transfer from the author, etc.) as well as created by the student themselves (e.g. interviews with the author, video recordings of usage, etc.). Through this placement, the student will have the opportunity to participate in a network of cultural heritage institutions concerned with the preservation of digital publications while helping develop one of the Library’s contemporary collections.

Photograph of a man looking at an iPad screen and reading an app
Interacting with the American Interior app on iPad.

Both PhD Placements are offered for 3 months full time, or part-time equivalent. They can be undertaken as hybrid placements (i.e. remotely, with some visits to the British Library building in London, St. Pancras), with the option of a fully remote placement for “Collecting complex digital publications: Testing an enhanced curation method”.

Applications for all 2022/23 PhD Placements close on Friday 25 February 2022, 5pm GMT. The application form and guidelines are available online here. Please address any queries to research.development@bl.uk

This post is by Giulia Carla Rossi, Curator of Digital Publications (on Twitter as @giugimonogatari), and Callum McKean, Digital Lead Curator, Contemporary Archives and Manuscripts.

26 January 2022

Which Came First: The Author or the Text? Wikidata and the New Media Writing Prize

Congratulations to the 2021 New Media Writing Prize (NMWP) winners, who were announced at a recent Bournemouth University online event: Joannes Truyens and collaborators (Main Prize), Melody MOU (Student Award) and Daria Donina (FIPP Journalism Award 2021). The main prize winner, ‘Neurocracy’, is an experimental dystopian narrative that takes place over 10 episodes through Omnipedia, an imagined future version of Wikipedia in 2049. So this seemed like a very apt jumping-off point for today’s blog post, which discusses a recent project where we added NMWP data to Wikidata.

Screen image of Omnipedia, an imagined futuristic version of Wikipedia from Neurocracy by Joannes Truyens
Omnipedia, an imagined futuristic version of Wikipedia from Neurocracy by Joannes Truyens

Note: If you wish to read ‘Neurocracy’ and are prompted for a username and password, use the username NewMediaWritingPrize1 and the password N3wMediaWritingPrize!. You can learn more about the work in this article and listen to an interview with the author in this podcast episode.

Working With Wikidata

Dr Martin Poulter describes learning how to work with Wikidata as being like learning a language. When I first heard this description, I didn’t understand: how could something so reliant on raw data be anything like the intricacies of language learning?

It turns out, Martin was completely correct.

Imagine a stack of data as slips of paper. Each slip has an individual piece of data on it: an author’s name, a publication date, a format, a title. How do you start to string this data together so that it makes sense?

One of the beautiful things about Wikidata is that it is both machine and human readable. In order for it to work this way, and for us to upload it effectively, thinking about the relationships between these slips of paper is essential.

In 2021, I had an opportunity to see what Martin was talking about when he spoke about language, as I was asked to work with a set of data about NMWP shortlisted and winning works, which the British Library has collected in the UK Web Archive. You can read more about this special collection here and here.

Image of blank post-it notes and a hand with a marker pen preparing to write on one.

About the New Media Writing Prize

The New Media Writing Prize was founded in 2010 to showcase exciting and inventive stories and poetry that integrate a variety of digital formats, platforms, and media. One of the driving forces in setting up and establishing the prize was Chris Meade, director of if:book uk, a ‘think and do tank’ for exploring digital and collaborative possibilities for writers and readers. He was the lead sponsor of the if:book UK New Media Writing Prize and of the Dot Award, which he created in honour of his mother, Dorothy, and he had chaired every NMWP awards evening since 2010. Very sadly, Chris passed away on 13 January 2022, and the recent 2021 awards event was dedicated to Chris and his family.

Recognising the significance of the NMWP, in recent years the British Library created the New Media Writing Prize Special Collection as part of its emerging formats work. With 11 years of metadata about a born-digital collection, this was an ideal data set for me to work with in order to establish a methodology for working with Wikidata uploads in the Library.

Last year I was fortunate to collaborate with Tegan Pyke, a PhD placement student in the Contemporary British Publications Collections team, supervised by Giulia Carla Rossi, Curator for Digital Publications. Tegan's project examined the digital preservation challenges of complex digital objects, developing and testing a quality assurance process for examining works in the NMWP collection. If you want to read more about this project, a report is available here. For the Wikidata work Tegan and Giulia provided two spreadsheets of data (or slips of paper!), and my aim was to upload linked data that covered the authors, their works, and the award itself - who had been shortlisted, who had won, and when.

Simple, right?

Getting Started

I thought so - until I began to structure my uploads. There were some key questions that needed to be answered about how these relationships would be built, and I needed to start somewhere. Should I upload the authors or the texts first? Should I go through the prize year by year, or be led by other information? And what about texts with multiple authors?

Suddenly it all felt a bit more intimidating!

I was fortunate to attend some Wikidata training run by Wikimedia UK late last year. Martin was our trainer, and one piece of advice he gave us was indispensable: if you’re not sure where to start, literally write it out with pencil and paper. What is the relationship you’re trying to show, in its simplest form? This is where language framing comes in especially useful: thinking about the basic sentence structures I’d learned in high school German became vital.

Image shows four simple sentences: Christine Wilks won NMWP in 2010. Christine Wilks wrote Underbelly. Underbelly won NMWP in 2010. NMWP was won by Christine Wilks in 2010. Christine Wilks is circled in green, NMWP in purple, and Underbelly in yellow. QIDs are listed: Q108810306 (highlighted in green), Q108459688 (highlighted in purple), Q109237591 (highlighted in yellow). Properties are listed: P166 (highlighted in blue), P800 (highlighted in turquoise), P585 (highlighted in orange).
Image by the author, notes own.

The Numbers Bit

You can see from this image how the framework develops: specific items, like nouns, are given identification numbers when they become a Wikidata item. This is their QID. The relationships between QIDs, sort of like the adjectives and verbs, are defined as properties and have P numbers. So Christine Wilks is now Q108810306, and her relationship to her work, Underbelly, or Q109237591, is defined with P800 which means ‘notable work’.

Q108810306 - P800 - Q109237591

You can upload this relationship using the visual editor on Wikidata, by clicking fields and entering data. If you have a large amount of information (remember those slips of paper!), tools like QuickStatements become very useful. Dominic Kane blogged about his experience of this system during his British Library student placement project in 2021.
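To make the bulk route a little more concrete, here is a minimal sketch (not from the original project) of generating QuickStatements commands in Python. It uses the V1 command format as I understand it - tab-separated item, property and value, with optional qualifier pairs on the same line - and the QIDs and properties shown in the image above (Q108810306, Q109237591, Q108459688; P800, P166, P585). The date syntax is an assumption, so check the QuickStatements help before running a real batch.

```python
# A sketch of preparing QuickStatements V1 commands from spreadsheet-style rows.
# QIDs/properties are taken from the post; the date format is an assumption.

rows = [
    # (author QID, work QID, prize QID, year won)
    ("Q108810306", "Q109237591", "Q108459688", 2010),  # Christine Wilks / Underbelly / NMWP
]

commands = []
for author, work, prize, year in rows:
    when = f"+{year}-00-00T00:00:00Z/9"  # year precision, per QuickStatements date syntax (assumed)
    commands.append(f"{author}\tP800\t{work}")                 # notable work
    commands.append(f"{author}\tP166\t{prize}\tP585\t{when}")  # award received, with point in time
    commands.append(f"{work}\tP166\t{prize}\tP585\t{when}")

print("\n".join(commands))  # paste the output into the QuickStatements batch editor
```

Pasting the printed lines into the QuickStatements batch editor would create the same statements as clicking through the visual editor, just in bulk.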

The intricacies of language are also very important on Wikidata. The nuance and inference we can draw from specific terms is important. The concept of ‘winning’ an award became a subject of semantic debate: the taxonomy of Wikidata advises that we use ‘award received’ in the case of a literary prize, as it’s less of an active sporting competition than something like a marathon or an athletic event.

Asking Questions of the Data

Ultimately we upload information to Wikidata so that it can be queried. Querying uses SPARQL, a language which allows users to draw information and patterns from vast swathes of data. Querying can be complex: to go back to the language analogy, you have to phrase the query in precisely the right way to get the information you want.
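To give a flavour of that phrasing, here is a minimal sketch (not one of the prepared queries linked below) that runs a SPARQL query against the Wikidata Query Service from Python. It asks for items with an ‘award received’ (P166) statement pointing at the NMWP item (Q108459688, taken from the image above) and reads the year of the award from the ‘point in time’ (P585) qualifier; the endpoint and User-Agent details are standard Wikidata Query Service conventions rather than anything specific to this project.

```python
import requests

# A sketch of querying the Wikidata Query Service over HTTP for NMWP winners.
QUERY = """
SELECT ?author ?authorLabel ?when WHERE {
  ?author p:P166 ?statement .            # 'award received' statement
  ?statement ps:P166 wd:Q108459688 .     # value: the NMWP item (per the image above)
  OPTIONAL { ?statement pq:P585 ?when . }  # 'point in time' qualifier, where present
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
ORDER BY ?when
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "nmwp-example/0.1 (example@example.org)"},  # placeholder contact
    timeout=60,
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["authorLabel"]["value"], row.get("when", {}).get("value", ""))
```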

One of the lessons I learned during the NMWP uploads was the importance of a unifying property. Users will likely query this data with a view to surveying results and finding patterns. Each author and work, therefore, needed to be linked to the prize and the collection itself (pictured above). By adding this QID to the property P6379 (‘has works in the collection’), we create a web of data that links every shortlisted author over the 11-year period.

Getting Started with Queries

To have a look at some of the NMWP data, here are some queries I prepared earlier. Please note that data from the 2021 competition has not yet been uploaded!

Authors who won NMWP

Works that won NMWP

Authors nominated for NMWP

Works nominated for NMWP

If you fancy trying some queries but don’t know where to start, I recommend these tutorials:

Tutorials

Resources About SPARQL

This post is by Wikimedian in Residence Dr Lucy Hinnie (@BL_Wikimedian).

23 December 2021

Three crowdsourcing opportunities with the British Library

Digital Curator Dr Mia Ridge writes: In case you need a break from whatever combination of weather, people and news is around you, here are some ways you can entertain yourself (or the kids!) while helping make the British Library's collections more findable, or helping researchers understand our past. You might even learn something or make new discoveries along the way!

Your help needed: Living with Machines

Mia Ridge writes: Living with Machines is a collaboration between the British Library and the Alan Turing Institute with partner universities. Help us understand the 'machine age' through the eyes of ordinary people who lived through it. Our refreshed task builds on our previous work, and includes fresh newspaper titles, such as the Cotton Factory Times.

What did the Victorians think a 'machine' was - and did it matter where you lived, or if you were a worker or a factory owner? Help us find out: https://www.zooniverse.org/projects/bldigital/living-with-machines

Your contributions will not only help researchers - they'll also go on display in our exhibition.

Image of a Cotton Factory Times masthead
You can read articles from Manchester's Cotton Factory Times in our crowdsourced task

 

Your help needed: Agents of Enslavement? Colonial newspapers in the Caribbean and hidden genealogies of the enslaved

Launched in July this year, Agents of Enslavement? is a research project which explores the ways in which colonial newspapers in the Caribbean facilitated and challenged the practice of slavery. One goal is to create a database of enslaved people identified within these newspapers. This benefits people researching their family history as well as those who simply want to understand more about the lives of enslaved people and their acts of resistance.

Project Investigator Graham Jevon has posted some insights into how he processes the results on the project forum, which is full of fascinating discussion. Join in as you take part: https://www.zooniverse.org/projects/gjevon/agents-of-enslavement

Your help needed: Georeferencer

Dr Gethin Rees writes: The community have now georeferenced 93% of the 1,277 maps that were added from our War Office Archive back in July (as mentioned in our previous newsletter).

Some of the remaining maps are quite tricky to georeference, so if there is a perplexing map that you would like some guidance with, do get in contact with me and our curator for modern mapping by emailing georeferencer@bl.uk and we will try to help. Please do look forward to some exciting new maps being released on the platform in 2022!

21 December 2021

Intro to AI for GLAM

Earlier this year Daniel van Strien and I teamed up with colleagues Mike Trizna from the Smithsonian and Mark Bell at the National Archives, UK, in a Carpentries Lesson Development Study Group, with an eye to developing an Introduction to AI for GLAM (Galleries, Libraries, Archives and Museums) lesson for eventual inclusion in Library Carpentry. The commitment was a ten-week programme running between 8 February and 23 April 2021, with weekly one-hour Study Group discussion calls and "homework" tasks requiring at least 3-4 hours each week.

The result is the framework and foundations for what we hope will be a useful, ever-evolving and collaboratively written workshop that provides a gentle, practical introduction for GLAM staff to the world of machine learning and its implications for the sector. Developed with the GLAM practitioner in mind, this beta course aims to offer an entry point for staff in cultural heritage institutions to begin to support, participate in, and undertake in their own right, machine learning-based research and projects with their collections.

Screenshot of the Intro to AI for GLAM course page

View the beta lessons at https://carpentries-incubator.github.io/machine-learning-librarians-archivists/index.html

We had the honour of running a 3-hour bitesize online version of the workshop as part of the AI4LAM Les Futurs Fantastiques Conference (#FF21) early in December. In a bit of an experiment, we delivered it using Mentimeter, hoping to bring some fresh interactivity into what could feel like a long virtual workshop. I'm happy to report it was good fun and the mode was very well received in feedback from instructors and participants alike.

The full video recording of the presentation is available to view at FF21 workshop: Carpentries Incubator Introduction to AI for GLAM - Zoom, as are our slides (PDF).

00:08:07 Intro to AI & Machine Learning: A brief overview (Mark Bell, The National Archives)

00:46:09 What is ML good at? (Mike Trizna / @miketrizna, Smithsonian)

01:26:35 Managing bias (Nora McGregor / @ndalyrose, British Library)

02:01:02 Machine learning projects (Daniel van Strien / @vanstriendaniel, British Library)

Have a look at these wonderful live sketch notes taken during the session by the talented Mélanie Leroy-Terquem (@mleroyterquem)!

Notebook page spread showing illustrations of key points in workshop

If you would like to contribute to the further development of these lessons, all the content and materials can be found over on the lesson GitHub, and we'd love to hear from you!

This blog post is by Nora McGregor, Digital Curator, British Library. She's on Twitter as @ndalyrose.

01 December 2021

Open and Engaged 2021: Review

Engagement with cultural heritage collections and the research impact beyond mainstream metrics in arts and humanities

Open and Engaged, the British Library’s annual event in Open Access Week, took place virtually on 25 October. The theme of the conference was Understanding the Impact of Open in the Arts and Humanities beyond the University, as described in a previous blog post.

The slides and the video recordings together with their transcripts are now available through the British Library’s Research Repository. This blog post will give you a flavour of the talks and the sessions in a nutshell.

Two main sessions formed the programme of the conference: one on increasing engagement with cultural heritage collections, and the other on measuring and evaluating the impact of open resources beyond journal articles.

British Library in the background with the piazza full of people in the front
British Library and Piazza by Paul Grundy

 

Session One: Increasing Engagement with Cultural Heritage Collections

The first session was opened with a talk from Brigitte Vézina from Creative Commons (CC). It was about how CC supports GLAM (Galleries, Libraries, Archives and Museums) in embracing open access and unlocking universal access to knowledge and culture. Brigitte introduced CC’s Open GLAM programme which is a coordinated global effort to help GLAMs make the content they steward openly available and reusable for the public good.

The British Library’s Sam van Schaik presented the Endangered Archives Programme (EAP), which provides funding for projects to digitise and preserve archival materials at risk of destruction. The resulting digital images and sound files are made available via the British Library’s website. Sam drew attention to the challenges around ethical issues with the CC licences used for these digital materials and the practical considerations of working globally.

Merete Sanderhoff from the National Gallery of Denmark (SMK) raised a concern about how the GLAM sector at the institutional level is lagging behind in embracing the full potential of open cultural heritage. Merete explained that GLAM users increasingly benefit from arts and knowledge beyond institutional walls by using data from GLAM collections and by spurring on developments in digital literacy, citizen science and democratic citizenship.

Towards a National Collection (TaNC), the research development programme funded by the AHRC, was the subject of the last talk of this session, presented by Rebecca Bailey, Programme Director at TaNC. The programme sponsors projects that are working to link collections and encourage cross-searching of multiple collection types, to enable research and enhance public engagement. Rebecca outlined the achievements and ambitions of the projects, as they start to look ahead to a national collections research infrastructure.

This session highlighted that the GLAM sector should embrace their full potential in making cultural heritage open for public good beyond their physical premises. The use of more open and public domain licences will make it easier to use digital heritage content and resources in the research and creative spheres. The challenge comes with the unethical use of digital collections in some cases, but licensing mechanisms are not the tools with which to police research ethics.

 

Session Two: Measuring and Evaluating Impact of Open Resources Beyond Journal Articles

The second half of the conference started with a metrics project, Cobaltmetrics, which works towards making altmetrics genuinely alternative by using URIs. Luc Boruta from Thunken talked about bringing algorithmic fairness to impact measurement, from web-scale attention tracking to computer-assisted data storytelling.

Gemma Derrick from the University of Lancaster presented on the hidden REF experience and highlighted assessing the broader value of research culture. Gemma noted that doubt about whether impact can be measured doesn’t come from a lack of tools; it is more that what is considered impact differs between individuals, institutions and disciplines. As she stated, “the nature of impact and the nature of evaluation is inherently better when humans are involved, mainly because mitigating factors and mitigating aspects of our research, and what makes our research culture really important, are less likely to be overlooked by an automated system.” This is what the hidden REF addressed, celebrating all research outputs and every role that makes research possible.

Anne Boddington from Kingston University reflected on research impact in three parts: its definition; partnering and collaboration between GLAMs and higher education institutions; and reflections on future benefits. Anne talked about the challenges of impact, the kinds of evidence it demands and the opportunities it presents. She concluded her talk by noting that impact is here to stay and that there are significant areas for growth, opportunities for innovation and leadership in the context of impact.

Helen Adams from Oxford University Gardens, Libraries & Museums (GLAM) presented the Online Active Community Engagement (O-ACE) project where they combined arts and science to measure the benefits of online culture for mental health in young people. She highlighted how GLAM organizations can actively involve audiences in medical research and how cultural interventions may positively impact individual wellbeing, prior to diagnosis, treatment, or social prescribing pathways. The conference ended with this great case study on impact assessment.

In her closing remarks, Rachael Kotarski of the British Library underlined that opening up GLAM organizations not only allows us to break down the walls of our buildings to get content out there, but also crosses geographic boundaries to get content in front of communities who might not have had a chance to experience it before. It also allows us to work with the communities who originated content to understand their concerns, and not just the concerns of our organizations. Rachael echoed that licensing restrictions are not the solution to all our questions, or to the ethical issues. It is important that we can reflect on what we have learned to adjust and rethink our approach and identify what really allows us to balance access, engagement, and creativity.

In the context of research impact, we need to centre the human in our assessments and processes. Another factor in impact assessment is the relatively short period of time available to assess it. Examples like the O-ACE project also showed us that the creation of impact can take much longer than we think, and that what impacts can be seen will vary over that time. So, assessing those interventions also needs a longer-term view.

Those who didn’t attend the conference or would like to re-visit the talks can find the recordings in the British Library’s Research Repository. The social media interactions can be followed with the #OpenEngaged hashtag.

We are looking forward to hosting Open and Engaged 2022, hopefully in person at the British Library.

This blog post was written by Ilkay Holt, Scholarly Communications Lead, part of the Research Infrastructure Services team.

30 November 2021

BL Labs Online Symposium 2021, Special Climate Change Edition: Speakers Announced!

BL Labs 9th Symposium – Special Climate Change Edition is taking place on Tuesday 7 December 2021. This special event is devoted to looking at computational research and climate change.

A polar bear jumping off an iceberg with the rear of a ship showing. Image captioned: 'A Bear Plunging Into The Sea'
British Library digitised image from page 303 of "A Voyage of Discovery, made under the orders of the Admiralty, in his Majesty's ships Isabella and Alexander for the purpose of exploring Baffin's Bay, and enquiring into the possibility of a North-West Passage".

To help us explore a range of complex issues at the intersection of computational research and climate change we are delighted to announce our expert panel:

  • Schuyler Esprit – Founding Director of Create Caribbean Research Institute & Research Officer at the School of Graduate Studies and Research at the University of the West Indies
  • Helen Hardy – Science Digital Programme Manager at the Natural History Museum, London, responsible for mass digitisation of the Museum’s collections of 80 million items
  • Joycelyn Longdon – Founder of ClimateInColour, a platform at the intersection of climate science and social justice, and PhD Student on the Artificial Intelligence for Environmental Risk programme at University of Cambridge
  • Gavin Shaddick – Chair of Data Science and Statistics, University of Exeter, Director of the UKRI funded Centre for Doctoral Training in Environmental Intelligence: Data Science and AI for Sustainable Futures, co-Director of the University of Exeter-Met Office Joint Centre for Excellence in Environmental Intelligence and an Alan Turing Fellow
  • Richard Sandford – Professor of Heritage Evidence, Foresight and Policy at the Institute of Sustainable Heritage at University College London
  • Joseph Walton – Research Fellow in Digital Humanities and Critical and Cultural Theory at the University of Sussex

Join us for this exciting discussion addressing issues such as how digitisation can improve research efficiency, discussing pros and cons of AI and machine learning in relation to climate change, and the links between new technologies, climate and social justice.

You can see more details about our panel and book your place here.

11 November 2021

The British Library Adopts a New Persistent Identifier Policy

On 29 September, to support and guide the management of its collection, the Library adopted a new persistent identifier policy. A persistent identifier, or PID, is a long-lasting digital reference to an entity, whether it is physical or digital. PIDs are a core component in providing reliable, long-term access to collections and in improving their discoverability. They also make it easier to track when and how collections are used. The Library has been using PIDs in various forms for almost a decade, but following the creation of a case study as part of the AHRC-funded Towards a National Collection project, PIDs as IRO Infrastructure, the Library recognised the need to document its rationale and approach to PIDs and lay down principles and requirements for their use.

An image of the world at night from space, showing the bright lights of cities and towns
Photo by NASA on Unsplash

The Library encourages the use of PIDs across its collections and collection metadata. It recognises the role PIDs have as a component in sustainable, open infrastructure and in enabling interoperability and the use of Library resources. PIDs also support the Library’s content strategy and its goal of connecting rather than collecting as they enable long term and reliable access to resources.  

Many different types of PIDs are used across the Library, some of which it creates for itself, e.g. ARKs, and others which it harvests from elsewhere, e.g. DOIs that are used to identify journal articles. While not all existing Library services may meet the requirements described in this policy, it provides a benchmark against which they can be measured and aspire to develop.
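As an illustration of what resolving a PID to machine-readable metadata can look like in practice - not part of the Library's policy or tooling - here is a minimal sketch using DOI content negotiation, where the same https://doi.org/ URI that sends a browser to a landing page can return structured citation metadata when asked for it. The DOI in the example is a placeholder and would need to be replaced with a real, resolvable Crossref or DataCite DOI.

```python
import requests

# A sketch of machine-readable PID resolution via DOI content negotiation:
# requesting CSL JSON metadata from doi.org instead of the human landing page.
doi = "10.1234/example-doi"  # hypothetical placeholder; substitute a real DOI

response = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
response.raise_for_status()

metadata = response.json()
print(metadata.get("title"))
print(metadata.get("author"))
```

The human-readable landing page and the machine-readable record are two views of the same persistent identifier, which is the behaviour the principles below ask for.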

To make sure staff at the Library are supported in implementing the policy, a working group has been convened to run until the end of December 2022. This group will raise awareness of the policy and ensure that guidance is made available to any project or service under review, so that the use of PIDs can be considered.

A public version of the policy is available on this page and an extract with the key points is provided below. The group would like to acknowledge the Bibliothèque nationale de France’s policy, which was influential in the creation of this policy.

Principles

In its use of identifiers, the British Library adheres to the following principles, which describe the qualities PIDs created, contributed or consumed by the Library must have.  

  • A PID must never be deleted but may be marked as deprecated if required
  • A PID must be usable in perpetuity to identify its associated entity
  • A PID must only describe one entity and must never be reused for different entities 
  • A PID must have established versioning processes and procedures in place; these may be defined locally by the Library as a creator or by the PID provider  
  • A PID must have established governance mechanisms, such as contracts, in place to ensure the standards of use of the PID are met and continue to be met  
  • A PID must resolve to metadata about the entity available in both a human and machine readable format 
  • A publicly accessible PID must be resolvable via a global resolver
  • A PID must have an operating model that is sustainable for long-term persistent use 

Established user community 

  • A PID must have an established user community, which has adopted it as a standard, either through an organisation such as the International Organization for Standardization (ISO) or as a de facto standard through widespread adoption; the Library will support and develop the use of new types of PIDs where there is a defined and recognised use case which they would address

Interoperable 

  • A PID must be able to link with the other identifiers in use at the Library through open metadata standards and the capability to cross-reference resources 

New PID types or new use 

  • New types of PIDs should only be considered for use in the Library where there is a defined need which cannot reasonably be met by a combination of PIDs already in use 
  • Any new PID type used by the Library should meet the requirements described in this policy 
  • Where a PID type is emerging and does not have an established community, the Library can seek to influence its development in line with principles for open and sustainable infrastructures 

Requirements

These requirements outline the Library’s responsibilities in using PID services and creating PIDs. While the Library uses identifiers which do not meet all of these requirements, they are included for future work and developments.  

  • The Library aspires to assign PIDs to all resources within its collections, both physical and digital, and associated entities, in alignment with the guiding principles of the Library’s content strategy 2020-2023
  • The Library has varying levels of involvement in different PID schemes, but all PIDs created by the Library must meet the requirements described in this section and the Library prefers the use of PIDs which meet the principles
  • Identifiers created by the Library must have an opaque format, i.e. not contain any semantic information within them, to ensure their longevity 
  • A PID must resolve to information about the entity to which it refers 
  • The Library must have a process to specify the granularity at which PIDs are assigned and how relationships between PIDs for component and overarching entities are managed 
  • The Library must have a process to manage versioning including changes, merges and retirement of entities 
  • Standard descriptive information about an entity, e.g. creator, should have a PID 
  • All metadata associated with a PID should comply with Collection Metadata Licensing Guidelines 
  • Where a PID referring to a citable resource resolves to a webpage, that webpage should display a suggested citation including the hyperlink to the PID to encourage ongoing use of the PID outside the Library

If you would like to hear more about this policy and the Library’s approach to persistent identifiers, feel free to contact the Heritage PIDs project on Twitter or email openaccess@bl.uk.

This post is by Frances Madden (@maddenfc, orcid.org/0000-0002-5432-6116), Research Associate (PIDs as IRO Infrastructure) in the Research Infrastructure Services team.
