THE BRITISH LIBRARY

Digital scholarship blog


04 August 2020

Having a Hoot for International Owl Awareness Day


Who doesn’t love owls? Here at the British Library we certainly do.

Often used as a symbol of knowledge, owls are the perfect library bird. The little owl is frequently associated with, and depicted alongside, Athena, the Greek goddess of wisdom. The University of Bath even awarded Professor Yoda the European eagle owl a library card in recognition of his valuable service deterring seagulls from nesting on their campus.

The British Library may not have issued a reader pass to an owl (as far as I am aware!), but we do have a wealth of owl sound recordings in our wildlife and environmental sounds collection; you can read about and listen to some of these here.

Little Owl calls recorded by Nigel Tucker in Somerset, England (BL ref 124857)

Owls can also be discovered in our UK Web Archive. Our UK Web Archivists recently examined the Shine dataset to explore which UK owl species is the most popular on the archived .uk domain. Read here to find out which owl is the winner.

They also curate an Online Enthusiast Communities in the UK collection, which features bird watching and some owl related websites in the Animal related hobbies subsection. If you know of websites that you think should be included in this collection, then please fill in their online nomination form.

Here in Digital Scholarship I recently found many fabulous illustrations of owls in our Mechanical Curator Flickr image collection of over a million Public Domain images. So to honour owls on International Owl Awareness Day, I put together an owl album.

These owl illustrations are freely available, without copyright restrictions, for all types of creative projects, including digital collages. My colleague Hannah Nagle blogged about making collages recently and provided this handy guide. For finding more general images of nature for your collages, you may find it useful to browse other Mechanical Curator themed albums, such as Flora & Fauna, as these are rich resources for finding illustrations of trees, plants, animals and birds.

If you creatively use our Mechanical Curator Flickr images, please do share them with us on Twitter using the hashtag #BLdigital; we always love to see what people have done with them. Plus, if you use any of our owls today, remember to include the #InternationalOwlAwarenessDay hashtag too!

We also urge you to be eagle-eyed (sorry, wrong bird!) and look out for some special animated owls on 4th August, like the one below, which uses both sounds and images taken from our collections. These have been created by Carlos Rarugal, our arty Assistant Web Archivist, and will be shared from the Wildlife, Web Archive and Digital Scholarship Twitter accounts.


Video created by Carlos Rarugal, using Tawny Owl hoots recorded by Richard Margoschis in Gloucestershire, England (BL ref 09647) and a British Library digitised image from page 79 of "Woodland Wild: a selection of descriptive poetry. From various authors. With ... illustrations on steel and wood, after R. Bonheur, J. Bonheur, C. Jacque, Veyrassat, Yan Dargent, and other artists"

One of the benefits of making digital art is that there is no risk of spilling paint or glue on your furniture! As Damyanti Patel noted in this tweet: "Thanks for the instructions, my kids were entertained & I had no mess to clean up after their art so a clear win win, they really enjoyed looking through the albums". I honestly did not ask them to do this, but it is really cool that her children included this fantastic owl in the centre of one of their digital collages:

I quite enjoy it when my library life and goth life connect! During the covid-19 lockdown I have attended several online club nights. A few months ago I was delighted to see that one of these, How Did I Get Here? Alternative 80s Night!, regularly uses British Library Flickr images to create its event flyers, using illustrations of people in strange predicaments to complement the name of the club, like the sad lady sitting inside a bird cage in the flyer below.

Their next online event is Saturday 22nd August and you can tune in here. If you are a night owl, you could even make some digital collages, while listening to some great tunes. Sounds like a great night in to me!

Illustration of a woman sitting in a bird cage with a book on the floor just outside the cage
Flyer image for How Did I Get Here? Alternative 80s Night!

This post is by Digital Curator Stella Wisdom (@miss_wisdom).

15 June 2020

Marginal Voices in UK Digital Comics


I am an AHRC Collaborative Doctoral Partnership student based at the British Library and Central Saint Martins, University of the Arts London (UAL). The studentship is funded by the Arts and Humanities Research Council’s Collaborative Doctoral Partnership Programme.

Supervised jointly by Stella Wisdom from the British Library, and Roger Sabin and Ian Hague from UAL, my research explores the potential for digital comics to take advantage of digital technologies and the digital environment to foster inclusivity and diversity. I aim to examine the status of marginal voices within UK digital comics, while addressing the opportunities and challenges these comics present for the British Library’s collection and preservation policies.

A cartoon strip of three vertical panel images, in the first a caravan is on the edge of a cliff, in the second a dog asleep in a bed, in the third the dog wakes up and sits up in bed
The opening panels from G Bear and Jammo by Jaime Huxtable, showing their caravan on The Gower Peninsula in South Wales, copyright © Jaime Huxtable

Digital comics have been identified as complex digital publications, meaning this research project is connected to the work of the broader Emerging Formats Project. As well as embracing technological change, digital comics have the potential to reflect, embrace and contribute to social and cultural change in the UK. Digital comics present not only new ways of telling stories, but also new possibilities for whose story is told.

One of the comic creators whose work I have recently been examining is Jaime Huxtable, a Welsh cartoonist and illustrator based in Worthing, West Sussex. He has worked on a variety of digital comics projects, from webcomics to interactive comics, and also runs various comics-related workshops.

Samir's Christmas by Jaime Huxtable. This promotional comic strip was created for Freedom From Torture’s 2019 Christmas Care Box Appeal, and was made into a short animated video by Hands Up, copyright © Jaime Huxtable

My thesis will explore whether the ways UK digital comics are published and consumed mean that they can foreground marginal, alternative voices, in a similar way to underground comix and zine culture. Comics scholarship has focused on the technological aspects of digital comics, meaning their potentially significant contribution to reflecting and embracing social and cultural change in the UK has not been explored. I want to establish whether the fact that digital comics can circumvent traditional gatekeepers means they provide space to foreground marginal voices. I will also explore the challenges and opportunities digital comics might present for legal deposit collection development policy.

As well as being a member of the Comics Research Hub (CoRH) at UAL, I have already begun working with colleagues from the UK Web Archive, and hope to be able to make a significant contribution to the Web Comic Archive. Issues around collection development and management are central to my research, so I feel very fortunate to be based at the British Library, with the chance to learn from, and hopefully contribute to, practice here.

If anyone would like to know more about my research, or recommend any digital comics for me to look at, please do contact me at Tom.Gebhart@bl.uk or @thmsgbhrt on Twitter. UK digital comic creators and publishers can use the ComicHaus app to send their digital comics directly to The British Library digital archive. More details about this process are here.

This post is by British Library collaborative doctoral student Thomas Gebhart (@thmsgbhrt).

10 June 2020

International Conference on Interactive Digital Storytelling 2020: Call for Papers, Posters and Interactive Creative Works


It has been heartening to see many joyful responses to our recent post featuring The British Library Simulator; an explorable, miniature, virtual version of the British Library’s building in St Pancras.

If you would like to learn more about our Emerging Formats research, which is informing our work in collecting examples of complex digital publications, including works made with Bitsy, then my colleague Giulia Carla Rossi (who built the Bitsy Library) is giving a Leeds Libraries Tech Talk on Digital Literature and Interactive Storytelling this Thursday, 11th June at 12 noon, via Zoom.

Giulia will be joined by Leeds Libraries Central Collections Manager, Rhian Isaac, who will showcase some of Leeds Libraries exciting collections, and also Izzy Bartley, Digital Learning Officer from Leeds Museums and Galleries, who will talk about her role in making collections interactive and accessible. Places are free, but please book here.

If you are a researcher, or a writer/artist/maker of experimental interactive digital stories, then you may want to check out the current call for submissions for The International Conference on Interactive Digital Storytelling (ICIDS), organised by the Association for Research in Digital Interactive Narratives, a community of academics and practitioners concerned with the advancement of all forms of interactive narrative. The deadline for proposing Research Papers, Exhibition Submissions, Posters and Demos has been extended to 26th June 2020; submissions can be made via the ICIDS 2020 EasyChair Site.

The ICIDS 2020 dates, 3-6 November, on a photograph of Bournemouth beach

ICIDS showcases and shares research and practice in game narrative and interactive storytelling, including the theoretical, technological, and applied design practices. It is an interdisciplinary gathering that combines computational narratology, narrative systems, storytelling technology, humanities-inspired theoretical inquiry, empirical research and artistic expression.

For 2020, the special theme is Interactive Digital Narrative Scholarship, and ICIDS will be hosted by the Department of Creative Technology at Bournemouth University (also hosts of the New Media Writing Prize, which I have blogged about previously). The current intention is to hold a mixed virtual and physical conference: the organisers hope that the physical meeting will still take place, but all talks and works will also be made available virtually for those unable to attend in person due to the COVID-19 situation. This means that if you submit work, you will still need to register and present your ideas, but the organisers will make allowances for participants who cannot travel to Bournemouth to contribute virtually.

ICIDS also includes a creative exhibition showcasing interactive digital artworks, which for 2020 will explore the curatorial theme “Texts of Discomfort”. The exhibition call is currently seeking interactive digital artworks that generate discomfort through their form and/or their content, and which may also inspire radical changes in the way we perceive the world.

Creatives are encouraged to mix technologies, narratives and points of view to create interactive digital artworks that unsettle interactors’ assumptions by tackling the world’s global issues; and/or to create artworks that bring interactors’ relation with language to a crisis, and that innovate in the way they intertwine narrative and technology. Artworks can include, but are not limited to:

  • Augmented, mixed and virtual reality works
  • Computer games
  • Interactive installations
  • Mobile and location-based works
  • Screen-based computational works
  • Web-based works
  • Webdocs and interactive films
  • Transmedia works

Submissions to the ICIDS art exhibition should be made using this form by 26th June. Any questions should be sent to icids2020arts@gmail.com. Good luck!

This post is by Digital Curator Stella Wisdom (@miss_wisdom).

21 May 2020

The British Library Simulator


The British Library Simulator is a mini game built using the Bitsy game engine, where you can wander around a pixelated (and much smaller) version of the British Library building in St Pancras. Bitsy is known for its compact format and limited colour-palette - you can often recognise your avatar and the items you can interact with by the fact they use a different colour from the background.

The British Library building depicted in Bitsy
The British Library Simulator Bitsy game

Use the arrow keys on your keyboard (or the WASD buttons) to move around the rooms and interact with other characters and objects you meet on the way - you might discover something new about the building and the digital projects the Library is working on!

Bitsy works best in the Chrome browser. If you’re playing on your smartphone, use a sliding movement to move your avatar and tap on the text box to progress through the dialogues.

Most importantly: have fun!

The British Library, together with the other five UK Legal Deposit Libraries, has been collecting examples of complex digital publications, including works made with Bitsy, as part of the Emerging Formats Project. This collection area is continuously expanding, as we include new examples of digital media and interactive storytelling. The formats and tools used to create these publications are varied, and allow for innovative and often immersive solutions that could only be delivered via a digital medium. You can read more about freely-available tools to write interactive fiction here.

This post is by Giulia Carla Rossi, Curator of Digital Publications (@giugimonogatari).

20 May 2020

Bringing Metadata & Full-text Together


This is a guest post by enthusiastic data and metadata nerd Andy Jackson (@anjacks0n), Technical Lead for the UK Web Archive.

In Searching eTheses for the openVirus project we put together a basic system for searching theses. This only used the information from the PDFs themselves, which meant the results looked like this:

openVirus EThOS search results screen

The basics are working fine, but the document titles are largely meaningless, the last-modified dates are clearly suspect (26 theses in the year 1600?!), and the facets aren’t terribly useful.

The EThOS metadata has much richer information that the EThOS team has collected and verified over the years. This includes:

  • Title
  • Author
  • DOI, ISNI, ORCID
  • Institution
  • Date
  • Supervisor(s)
  • Funder(s)
  • Dewey Decimal Classification
  • EThOS Service URL
  • Repository (‘Landing Page’) URL

So, the question is, how do we integrate these two sets of data into a single system?

Linking on URLs

The EThOS team supplied the PDF download URLs for each record, but we need a common identifier to merge these two datasets. Fortunately, both datasets contain the EThOS Service URL, which looks like this:

https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.755301

This (or just the uk.bl.ethos.755301 part) can be used as the ‘key’ for the merge, leaving us with one dataset that contains the download URLs alongside all the other fields. We can then process the text from each PDF, look up its URL in this metadata dataset, and merge the two together.
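As a sketch of that merge (the record fields here are illustrative, not the real EThOS schema), the extracted-text records can be joined to the metadata with a plain dictionary lookup keyed on the download URL:

```python
from urllib.parse import parse_qs, urlparse

def ethos_uin(service_url):
    """Pull the uk.bl.ethos.NNNNNN identifier out of an EThOS Service URL."""
    return parse_qs(urlparse(service_url).query)["uin"][0]

# Illustrative records only -- not the real EThOS schema.
ethos_metadata = [
    {"uin": "uk.bl.ethos.755301", "title": "An Example Thesis",
     "pdf_url": "http://repository.example.ac.uk/items/1/thesis.pdf"},
]
extracted_text = [
    {"url": "http://repository.example.ac.uk/items/1/thesis.pdf",
     "word_count": 48213},
]

# Index the metadata by download URL, then join each extracted-text
# record to its metadata record with a straight dictionary lookup.
by_url = {m["pdf_url"]: m for m in ethos_metadata}
merged = [{**by_url[t["url"]], **t}
          for t in extracted_text if t["url"] in by_url]
print(merged[0]["title"], merged[0]["word_count"])
```

Note that this joins on the download URL exactly as recorded, which only works if the URL we crawled is byte-for-byte the URL held in the metadata.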

Except… it doesn’t work.

The web is a messy place: those PDF URLs may have been direct downloads in the past, but now many of them are no longer simple links, but chains of redirects. As an example, this original download URL:

http://repository.royalholloway.ac.uk/items/bf7a78df-c538-4bff-a28d-983a91cf0634/1/10090181.pdf

Now redirects (HTTP 301 Moved Permanently) to the HTTPS version:

https://repository.royalholloway.ac.uk/items/bf7a78df-c538-4bff-a28d-983a91cf0634/1/10090181.pdf

Which then redirects (HTTP 302 Found) to the actual PDF file:

https://repository.royalholloway.ac.uk/file/bf7a78df-c538-4bff-a28d-983a91cf0634/1/10090181.pdf

So, to bring this all together, we have to trace these links between the EThOS records and the actual PDF documents.

Re-tracing Our Steps

While the crawler we built to download these PDFs worked well enough, it isn’t quite as sophisticated as our main crawler, which is based on Heritrix 3. In particular, Heritrix offers detailed crawl logs that can be used to trace crawler activity. This functionality would be fairly easy to add to Scrapy, but that’s not been done yet. So, another approach is needed.

To trace the crawl, we need to be able to look up URLs and then analyse what happened. In particular, for every starting URL (a.k.a. seed) we want to check if it was a redirect and if so, follow that URL to see where it leads.

We already use content (CDX) indexes to allow us to look up URLs when accessing content. In particular, we use OutbackCDX as the index, and then the pywb playback system to retrieve and access the records and see what happened. So one option is to spin up a separate playback system and query that to work out where the links go.

However, as we only want to trace redirects, we can do something a little simpler. We can use the OutbackCDX service to look up what we got for each URL, and use the same warcio library that pywb uses to read the WARC record and find any redirects. The same process can then be repeated with the resulting URL, until all the chains of redirects have been followed.

This leaves us with a large list, linking every URL we crawled back to the original PDF URL. This can then be used to link each item to the corresponding EThOS record.
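The chain-following logic can be sketched as a small loop over a lookup table; here a plain dict stands in for the OutbackCDX/warcio lookup described above:

```python
def resolve_redirects(start_url, redirect_of, max_hops=10):
    """Follow a recorded chain of redirects to its final URL.

    `redirect_of` maps each crawled URL to its redirect target,
    or to None if the response was not a redirect.
    """
    url = start_url
    seen = {url}
    for _ in range(max_hops):
        target = redirect_of.get(url)
        if target is None:        # not a redirect: this is the document
            return url
        if target in seen:        # guard against redirect loops
            raise ValueError("redirect loop via " + target)
        seen.add(target)
        url = target
    raise ValueError("too many redirects from " + start_url)

# A chain like the Royal Holloway example above (URLs shortened, hypothetical).
chain = {
    "http://repo.example.ac.uk/items/1.pdf": "https://repo.example.ac.uk/items/1.pdf",
    "https://repo.example.ac.uk/items/1.pdf": "https://repo.example.ac.uk/file/1.pdf",
    "https://repo.example.ac.uk/file/1.pdf": None,
}
print(resolve_redirects("http://repo.example.ac.uk/items/1.pdf", chain))
```

Running this for every seed URL yields exactly the kind of look-up table that links each final PDF back to its original EThOS download URL.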

This large look-up table allowed the full-text and metadata to be combined. It was then imported into a new Solr index that replaced the original service, augmenting the records with the new metadata.

Updating the Interface

The new fields are accessible via the same API as before – see this simple search as an example.

The next step was to update the UI to take advantage of these fields. This was relatively simple, as it mostly involved exchanging one field name for another (e.g. from last_modified_year to year_i), and adding a few links to take advantage of the fact we now have access to the URLs to the EThOS records and the landing pages.

The result can be seen at:

EThOS Faceted Search Prototype

The Results

This new service provides a much better interface to the collection, and really demonstrates the benefits of combining machine-generated and manually curated metadata.

New openVirus EThOS search results interface
New improved openVirus EThOS search results interface

There are still some issues with the source data that need to be resolved at some point. In particular, there are now only 88,082 records, which indicates that some gaps and mismatches emerged during the process of merging these records together.

But it’s good enough for now.

The next question is: how do we integrate this into the openVirus workflow? 

 

14 May 2020

Searching eTheses for the openVirus project


This is a guest post by Andy Jackson (@anjacks0n), Technical Lead for the UK Web Archive and enthusiastic data-miner.

Introduction

The COVID-19 outbreak is an unprecedented global crisis that has prompted an unprecedented global response. I’ve been particularly interested in how academic scholars and publishers have responded:

It’s impressive how much has been done in such a short time! But I also saw one comment that really stuck with me:

“Our digital libraries and archives may hold crucial clues and content about how to help with the #covid19 outbreak: particularly this is the case with scientific literature. Now is the time for institutional bravery around access!”
– @melissaterras

Clearly, academic scholars and publishers are already collaborating. What could digital libraries and archives do to help?

Scale, Audience & Scope

Almost all the efforts I’ve seen so far are focused on helping scientists working on the COVID-19 response to find information from publications that are directly related to coronavirus epidemics. The outbreak is much bigger than this. In terms of scope, it’s not just about understanding the coronavirus itself. The outbreak raises many broader questions, like:

  • What types of personal protective equipment are appropriate for different medical procedures?
  • How effective are the different kinds of masks when it comes to protecting others?
  • What coping strategies have proven useful for people in isolation?

(These are just the examples I’ve personally seen requests for. There will be more.)

Similarly, the audience is much wider than the scientists working directly on the COVID-19 response: from medical professionals wanting to know more about protective equipment, to journalists looking for context and counter-arguments.

As a technologist working at the British Library, I felt like there must be some way I could help this situation. Some way to help a wider audience dig out any potentially relevant material we might hold?

The openVirus Project

While looking out for inspiration, I found Peter Murray-Rust’s openVirus project. Peter is a vocal supporter of open source and open data, and had launched an ambitious attempt to aggregate information relating to viruses and epidemics from scholarly publications.

In contrast to the other efforts I’d seen, Peter wanted to focus on novel data-mining methods, and on pulling in less well-known sources of information. This dual focus on text analysis and on opening up underutilised resources appealed to me. And I already had a particular resource in mind…

EThOS

Of course, the British Library has a very wide range of holdings, but as an ex-academic scientist I’ve always had a soft spot for EThOS, which provides electronic access to UK theses.

Through the web interface, users can search the metadata and abstracts of over half a million theses. Furthermore, to support data mining and analysis, the EThOS metadata has been published as a dataset. This dataset includes links to institutional repository pages for many of the theses.

Although doctoral theses are not generally considered to be as important as journal articles, they are a rich and underused source of information, capable of carrying much more context and commentary than a brief article[1].

The Idea

Having identified EThOS as a source of information, the idea was to see if I could use our existing UK Web Archive tools to collect and index the full text of these theses, build a simple faceted search interface, and perform some basic data-mining operations. If that worked, it would allow relevant theses to be discovered and passed to the openVirus tools for more sophisticated analysis.

Preparing the data sources

The links in the EThOS dataset point to the HTML landing page for each thesis, rather than to the full text itself. To get to the text, the best approach would be to write a crawler to find the PDFs. However, it would take a while to create something that could cope with the variety of ways the landing pages tend to be formatted. For machines, it’s not always easy to find the link to the actual thesis!

However, many of the universities involved have given the EThOS team permission to download a copy of their theses for safe-keeping. The URLs of the full-text files are only used once (to collect each thesis shortly after publication), but have nevertheless been kept in the EThOS system since then. These URLs are considered transient (i.e. likely to ‘rot’ over time) and come with no guarantees of longer-term availability (unlike the landing pages), so are not included in the main EThOS dataset. Nevertheless, the EThOS team were able to give me the list of PDF URLs, making it easier to get started quickly.

This is far from ideal: we will miss theses that have been moved to new URLs, and from universities that do not take part (which, notably, includes Oxford and Cambridge). This skew would be avoided if we were to use the landing-page URLs provided for all UK digital theses to crawl the PDFs. But we need to move quickly.

So, while keeping these caveats in mind, the first task was to crawl the URLs and see if the PDFs were still there…

Collecting the PDFs

A simple Scrapy crawler was created, one that could read the PDF URLs and download them without overloading the host repositories. The crawler itself does nothing with them, but by running behind warcprox the web requests and responses (including the PDFs) can be captured in the standardised Web ARChive (WARC) format.

For 35 hours, the crawler attempted to download the 130,330 PDF URLs. Quite a lot of URLs had already changed, but 111,793 documents were successfully downloaded. Of these, 104,746 were PDFs.

All the requests and responses generated by the crawler were captured in 1,433 WARCs each around 1GB in size, totalling around 1.5TB of data.

Processing the WARCs

We already have tools for handling WARCs, so the task was to re-use them and see what we get. As this collection is mostly PDFs, Apache Tika and PDFBox are doing most of the work, but the webarchive-discovery wrapper helps run them at scale and add in additional metadata.

The WARCs were transferred to our internal Hadoop cluster, and in just over an hour the text and associated metadata were available as about 5GB of compressed JSON Lines.
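JSON Lines output like this (one JSON object per line, gzip-compressed) can be consumed with just the standard library; the field names below are invented for illustration:

```python
import gzip
import io
import json

# Simulate a compressed JSON Lines file in memory.
buf = io.BytesIO()
with gzip.open(buf, "wt", encoding="utf-8") as f:
    f.write(json.dumps({"url": "https://example.org/1.pdf",
                        "text": "face masks and other equipment"}) + "\n")
    f.write(json.dumps({"url": "https://example.org/2.pdf",
                        "text": "coping strategies in isolation"}) + "\n")

# Read it back: one record per line.
buf.seek(0)
with gzip.open(buf, "rt", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]
print([r["url"] for r in records])
```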

A Legal Aside

Before proceeding, there’s a legal problem that we need to address. Despite these documents being freely available over the open web, the rights and licences under which they are made available can be extremely varied and complex.

There’s no problem gathering the content and using it for data mining. The problem is that there are limitations on what we can redistribute without permission: we can’t redistribute the original PDFs, or any close approximation.

However, collections of facts about the PDFs are fine.

But for the other openVirus tools to do their work, we need to be able to find out what each thesis is about. So how can we make this work?

One answer is to generate statistical summaries of the contents of the documents. For example, we can break the text of each document up into individual words, and count how often each word occurs. These word frequencies are no substitute for the real text, but are redistributable and suitable for answering simple queries.
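The summary step is essentially a word count; a minimal version (with deliberately crude tokenisation, not the pipeline’s actual rules) looks like this:

```python
import re
from collections import Counter

def word_frequencies(text):
    """Lowercase the text, split it into alphabetic words, and count them."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(words)

freqs = word_frequencies("The mask protects others; a face mask is not a respirator.")
print(freqs.most_common(2))
```

Note that all word order is discarded in the result, which is exactly why phrase search stops working in an index built from these summaries.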

These simple queries can be used to narrow down the overall dataset, picking out a relevant subset. Once the list of documents of interest is down to a manageable size, an individual researcher can download the original documents themselves, from the original hosts[2]. As the researcher now has local copies, they can run their own tools over them, including the openVirus tools.

Word Frequencies

A second, simpler Hadoop job was created, post-processing the raw text and replacing it with the word frequency data. This produced 6GB of uncompressed JSON Lines data, which could then be loaded into an instance of the Apache Solr search tool [3].

While Solr provides a user interface, it’s not really suitable for general users, nor is it entirely safe to expose to the World Wide Web. To mitigate this, the index was built on a virtual server well away from any production systems, and wrapped with a web server configured in a way that should prevent problems.

The API this provides (see the Solr documentation for details) enables us to find which theses include which terms. Here are some example queries:

This is fine for programmatic access, but with a little extra wrapping we can make it more useful to more people.
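Under the hood, such queries are just Solr select URLs. A sketch of composing one (the hostname and the text field name are placeholders, not the real service):

```python
from urllib.parse import urlencode

def solr_query_url(base, terms, rows=10):
    """Build a Solr /select URL that ANDs together a list of search terms."""
    q = " AND ".join("text:" + t for t in terms)
    return base + "/select?" + urlencode({"q": q, "rows": rows, "wt": "json"})

url = solr_query_url("https://solr.example.org/ethos", ["face", "mask"])
print(url)
```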

APIs & Notebooks

For example, I was able to create live API documentation and a simple user interface using Google’s Colaboratory:

Using the openVirus EThOS API

Google Colaboratory is a proprietary platform, but those notebooks can be exported as more standard Jupyter Notebooks. See here for an example.

Faceted Search

Having carefully exposed the API to the open web, I was also able to take an existing browser-based faceted search interface and modify it to suit our use case:

EThOS Faceted Search Prototype

Best of all, this is running on the Glitch collaborative coding platform, so you can go look at the source code and remix it yourself, if you like:

EThOS Faceted Search Prototype – Glitch project

Limitations

The main limitation of using word-frequencies instead of full-text is that phrase search is broken. Searching for face AND mask will work as expected, but searching for “face mask” doesn’t.
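A toy index makes the limitation concrete: boolean AND works on bags of words, but the word positions needed for phrase matching are gone. (The documents and counts below are invented.)

```python
# Word-frequency records for two documents: only document 1 actually
# contains the phrase "face mask"; document 2 mentions the words apart.
docs = {
    1: {"face": 3, "mask": 2, "cotton": 1},
    2: {"face": 1, "paint": 2, "mask": 1},
}

def search_and(terms):
    """Return the ids of documents containing every query term."""
    return {doc_id for doc_id, freqs in docs.items()
            if all(term in freqs for term in terms)}

# AND search matches both documents; a true phrase search would match
# only document 1, but word positions were discarded at indexing time,
# so the two documents cannot be told apart.
print(search_and(["face", "mask"]))
```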

Another problem is that the EThOS metadata has not been integrated with the raw text search. This would give us a much richer experience, like accurate publication years and more helpful facets[4].

In terms of user interface, the faceted search UI above is very basic, but for the openVirus project the API is likely to be of more use in the short term.

Next Steps

To make the search more usable, the next logical step is to attempt to integrate the full-text search with the EThOS metadata.

Then, if the results look good, we can start to work out how to feed the results into the workflow of the openVirus tool suite.

 


1. Even things like negative results, which are informative but can be difficult to publish in article form. ↩︎

2. This is similar to the data-sharing pattern used by Twitter researchers. See, for example, the DocNow Catalogue. ↩︎

3. We use Apache Solr a lot so this was the simplest choice for us. ↩︎

4. Note that since writing this post, this limitation has been rectified. ↩︎

 

14 April 2020

BL Labs Artistic Award Winner 2019 - The Memory Archivist - Lynda Clark


Posted on behalf of Lynda Clark, BL Labs Artistic Award Winner 2019 by Mahendra Mahey, Manager of BL Labs.

My research, writing and broader critical practice are inextricably linked. For example, the short story “Ghillie’s Mum”, recently nominated for the BBC Short Story Award, was an exploration of fraught parent / child relationships, which fed into my interactive novella Writers Are Not Strangers, which was in turn the culmination of research into the way readers and players respond to writers and creators both directly and indirectly. 

“The Memory Archivist”, BL Labs Artistic Award winner 2019, offers a similar blending of creative work, research and reflection. The basis for the project was the creation of a collection of works of interactive fiction for the UK Web Archive (UKWA), as part of an investigation into whether it was possible to capture interactive works with existing web archiving tools. The project used WebRecorder and Web ACT to add almost 200 items to the UKWA. An analysis of these items was then undertaken, which identified various recurring themes, tools and techniques used across the works. These were then incorporated into “The Memory Archivist” in various ways.

Memory Archivist
Opening screen for the Memory Archivist

The interactive fiction tool Twine was the most widely used by UK creators across the creative works, and was therefore used to create “The Memory Archivist”. Key themes such as pets, public transport and ghosts were used as the basis for the memories the player character may record. Elements of the experience of, and challenges relating to, capturing interactive works (and archival objects more generally) were also incorporated into the narrative and interactivity. When the player-character attempts to replay some of the memories they have recorded, they will find them captured only partially, or with changes to their appearance.

There were other, more direct, ways in which the Library’s digital content was included too, in the form of repurposed code. ‘Link select’ functionality was adapted from Jonathan Laury’s Ostrich, and CSS style sheets from Brevity Quest by Chris Longhurst were edited to give certain sections their distinctive look. An image from the Library’s Flickr collection was used as the central motif for the piece, not only because it comes from an online digital archive, but because it is itself a motif from an archive – a French 19th century genealogical record. Sepia tones were used for the colour palette to reflect the nostalgic nature of the piece.

Example screenshots from The Memory Archivist

Together, these elements aim to emphasise that archives are a way to connect memories, people and experiences across time and space, in spite of technological challenges, while also acknowledging that they can only ever be partial and decontextualised.

The research into web archiving was presented at the International Internet Preservation Consortium conference in Zagreb and at the Digital Preservation Coalition’s Web Archiving & Preservation Working Group event in Edinburgh.

Other blog posts from Lynda's related work are also available on this blog.

06 April 2020

Poetry Mobile Apps


This is a guest post by Pete Hebden, a PhD student at Newcastle University undertaking a practice-led PhD, researching and creating a poetry app. Pete has recently completed a three-month placement in Contemporary British Published Collections at the British Library, where he assisted curators working with the UK Web Archive, artists' books and emerging formats collections. You can follow him on Twitter as @Pete_Hebden.

As part of my PhD research, I have been investigating how writers and publishers have used smartphone and tablet devices to present poetry in new ways through mobile apps. In particular, I’m interested in how these new ways of presenting poetry compare to the more familiar format of the printed book. The mobile device allows poets and publishers to create new experiences for readers, incorporating location-based features, interactivity, and multimedia into the encounter with the poem.

Since smartphones and tablet computers became widespread in the early 2010s, a huge range of digital books, e-literature, and literary games have been developed to explore the possibilities of this technology for literature. Projects like Ambient Literature and the work of Editions at Play have explored how mobile technology can transform storytelling and narrative, and similarly my project looks at how this technology can create new experiences of poetic texts.

Below are a few examples of poetry apps released over the past decade. For accessibility reasons, this selection has been limited to apps that can be used anywhere and are free to download. Some of them present work written with the mobile device in mind, while others take existing print work and re-mediate it for the mobile touchscreen.

Puzzling Poetry (iOS and Android, 2016)

Dutch developers Studio Louter worked with multiple poets to create this gamified approach to reading poetry. Existing poems are turned into puzzles to be unlocked word by word, as the reader uses patterns and themes within each text to figure out where each word should go. As a result, readers often notice new meanings and possibilities that might have been missed in a traditional linear reading.

Screen capture image of the Puzzling Poetry app

This video explains and demonstrates how the Puzzling Poetry app works:

 

Translatory (iOS, 2016)

This app, created by Arc Publications, guides readers in creating their own English translations of contemporary foreign-language poems. Using the digital display to see multiple possible translations of each phrase, the reader gains a fresh understanding of the complex work that goes into literary translation, as well as the rich layers of meaning included within the poem. Readers are able to save their finished translations and share them through social media using the app.

Screen capture image of the Translatory app

 

Poetry: The Poetry Foundation app (iOS and Android, 2011)

At nearly a decade old, the Poetry Foundation’s Poetry app was one of the first mobile apps dedicated to poetry, and it has been steadily updated by the editors of Poetry magazine ever since. It contains a huge array of both public-domain work and poems published in the magazine over the past century. To help users navigate this, Poetry’s developers created an entertaining and useful interface for finding poems with unique combinations of themes via a roulette-wheel-style ‘spinner’. The app also responds to users shaking their phone by serving up a random poem.

Screen capture image of The Poetry Foundation app

 

ABRA: A Living Text (iOS, 2014)

A collaboration between the poets Amaranth Borsuk and Kate Durbin and the developer Ian Hatcher, the ABRA app presents readers with a range of digital tools to use (or spells to cast) that transform the text and create a unique experience for each reader. It is a fun and unusual way to encounter a collection of poems, giving the reader the opportunity to contribute to an ever-shifting, crowd-edited digital poem.

Screen capture image of the ABRA app

The artistic video below demonstrates how the ABRA app works. Painting your finger and thumb gold is not required!

I hope you feel inspired to check out these poetry apps, or maybe even to create your own.