Digital scholarship blog

07 December 2018

Introducing an experimental format for learning about content mining for digital scholarship

This post by the British Library’s Digital Curator for Western Heritage Collections, Dr Mia Ridge, reports on an experimental format designed to provide more flexible and timely training on fast-moving topics like text and data mining.

This post covers two topics – firstly, an update to the established format of sessions on our Digital Scholarship Training Programme (DSTP) to introduce ‘strands’ of related modules that cumulatively make up a ‘course’, and secondly, an overview of subjects we’ve covered related to content mining for digital scholarship with cultural heritage collections.

Introducing ‘strands’

The Digital Research team have been running the DSTP for some years now. It's been very successful, but we know that it's hard for people to get away for a whole day, so we wanted to break courses that might previously have taken 5 or 6 hours into smaller modules. Shorter sessions (talks or hands-on workshops) of an hour, or at most two, seemed to fit more flexibly into busy diaries. We can also reach more people with talks than with hands-on workshops, which are limited by the number of training laptops and the need to offer more individual support.

A 'strand' is a new, flexible format for learning and maintaining skills, with training delivered through shorter modules that combine to build attendees' knowledge of a particular topic over time. We can repeat individual modules – for example, a shorter 'Introduction to' session might run more often, while more advanced sessions might target people with some existing knowledge. I haven't formally evaluated it, but I suspect that the ability to pick and choose sessions means that attendees for each module are more engaged, which makes for a better session for everyone. We've seen a lot of uptake – in some cases the 40 or so places available go almost immediately – so offering shorter sessions seems to be working.

Designing courses as individual modules makes it easier to update individual sections as technologies and platforms change. This format has several other advantages: staff find it easier to attend hour-long modules, and they can try out methods on their own collections between sessions. It takes time for attendees to collect and prepare their own data for processing with digital methods (not to mention preparation time and complexity for the instructor), so we've stayed away from this in traditional workshops.

New topics can be introduced on a 'just in time' basis as new tools and techniques emerge. This addressed many of the issues I was having in putting together a new course on content mining. It also makes tackling a new subject easier than the established 5–6 hour format did, as I can pilot short sessions and use the lessons learnt to plan the next module.

The modular format also means we can invite international experts and collaborators to give talks on their specialisms with relatively low organisational overhead, as we regularly run ‘21st Century Curatorship’ talks for staff. We can link relevant staff talks, or our monthly ‘Hack and Yack’ and Digital Scholarship Reading Groups sessions to specific strands.

We originally planned to start each strand with an introductory module outlining key concepts and terms, but in reality we dived into the first one as we already had suitable talks lined up.

Content mining for digital scholarship with cultural heritage collections

Tom and Nora trying out AntConc

From the course blurb: ‘Content mining (sometimes ‘text and data mining’) is a form of computational processing that uses automated analytical techniques to analyse text, images, audio-visual material, metadata and other forms of data for patterns, trends and other useful information. Content mining methods have been applied to digitised and digital historic, cultural and scientific collections to help scholars answer new research questions at scale, analysing hundreds or hundreds of thousands of items. In addition to supporting new forms of digital scholarship that apply content mining methods, methods like Named Entity Recognition or Topic Modelling can make collection items more discoverable. Content mining in cultural heritage draws on data science, 'distant reading' and other techniques to categorise items; identify concepts and entities such as people, places and events; apply sentiment analysis and analyse items at scale.’
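As a quick, concrete illustration of one of these methods, the sketch below runs Named Entity Recognition over a sample sentence using the open-source spaCy library. spaCy is just one example tool, not necessarily what a given session uses, and the snippet assumes you've installed the small English model with 'python -m spacy download en_core_web_sm'.

    # Minimal NER sketch using spaCy (an example tool, chosen for
    # illustration). Entities come back with labels such as ORG
    # (organisation), GPE (place) or DATE.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The British Library digitised playbills from the Theatre Royal, "
              "Margate, between 1790 and 1860.")

    for ent in doc.ents:
        print(ent.text, ent.label_)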

An easily updatable mixture of introductory talks, tutorial sessions, hands-on workshops and case studies from external experts fits perfectly into the modular format, and it's worked out well, with a range of topics and formats offered so far. Sessions have included:

  • An Introduction to Machine Learning;
  • Computational models for detecting semantic change in historical texts (Dr Barbara McGillivray, Alan Turing Institute);
  • Computer vision tools with Dr Giles Bergel, from the University of Oxford's Visual Geometry Group;
  • Jupyter Notebooks/Python for simple processing and visualisations of data from In the Spotlight;
  • Listening to the Crowd: Data Science to Understand the British Museum's Visitors (Taha Yasseri, Turing/OII);
  • Visualising cultural heritage collections (Olivia Fletcher Vane, Royal College of Art);
  • An Introduction to Corpus Linguistics for the Humanities (Ruth Byrne, BL and Lancaster PhD student);
  • Corpus Analysis with AntConc.

What’s next?

My colleagues Nora McGregor, Stella Wisdom and Adi Keinan-Schoonbaert have some great ‘strands’ planned for the future, including Stella’s on ‘Emerging Formats’ and Adi’s on ‘Place’, so watch this space for updates!

19 November 2018

The British Library / Qatar Foundation Partnership Imaging Hack Day

The British Library/Qatar Foundation Partnership (BL/QFP) is digitising archive material related to Persian Gulf history, as well as Arabic scientific manuscripts. In the past four years we have added in excess of 1.5 million images to the Qatar Digital Library. Our team of ~45 staff includes a group of eight dedicated imaging professionals, who between them produce 30,000 digitised images each month, to exacting standards that focus on presenting the information on the page in a visually clear and consistent manner.

Our imaging team are a highly-skilled group, with a variety of backgrounds, experiences and talents, and we wished to harness these. Therefore, we decided to set aside a day for our Imaging team to use their creative and technical skills to ‘hack’ the material in our collection.

By dedicating a whole day to experimenting with different ways of capturing the material we are digitising, we hoped to reveal some interesting aspects of the collection that are not seen through our standardised capture process. It also gave the Imaging team a chance to show off and share their skills, both amongst themselves and with the wider BL/QFP team.

This was how we conceived of our first Imaging Hack Day, and the rest of this blog post outlines how we promoted and organised it.

From its conception the Imaging team were keen for the wider team to be involved, so we asked colleagues to nominate material from the collections we are digitising that they thought could be ‘hacked’, and to state their reasons why.

To begin with it was mostly members of the Imaging team who nominated items, so we decided to wage a PR campaign. Firstly, the Imaging team delivered a presentation on 9 October at one of the BL/QFP’s all-staff meetings, outlining some of the techniques and ideas they had for the hack day and appealing to the rest of the team for nominations. Additionally, on the morning of the 9th, members of the Imaging team snuck into the office and planted some not-so-subtle propaganda:

Posters

The impact of the posters and presentation was really pronounced. Before 9 October we had received only a handful of nominations from people outside the Imaging team; within days that number had increased more than fivefold (see graph below). The posters also became highly sought after amongst the team.

Nominations
Graph showing how many shelfmarks were nominated each day, with cumulative totals for members of the imaging team vs non-imaging teams.
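For anyone curious how a graph like this is put together, here is a minimal sketch of the underlying counting in Python with pandas; the dates and groups below are invented placeholders rather than our actual nomination records.

    # Hypothetical sketch: cumulative nominations per group per day,
    # as plotted above. The records below are placeholders.
    import pandas as pd

    noms = pd.DataFrame({
        "date": pd.to_datetime(["2018-10-08", "2018-10-09",
                                "2018-10-09", "2018-10-10"]),
        "group": ["imaging", "non-imaging", "non-imaging", "non-imaging"],
    })

    # Count nominations per day per group, then accumulate over time.
    daily = noms.groupby(["date", "group"]).size().unstack(fill_value=0)
    print(daily.cumsum())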

The day before the Hack Day, anyone who had nominated an item was invited to a prep session with the Imaging team, where the nominated items were presented along with the ideas for hacks. Judicious use of Post-its and Sharpies facilitated feedback, and by the end of the session the Imaging team were armed with lots of ideas and encouragement, and knew they could call on curatorial expertise from the rest of the BL/QFP team if necessary.

Postits

As a final surprise, and a sign of appreciation, Hack Sacks filled with goodies were secreted into the imaging studio late on the eve of the Hack Day:

Hacksacks

The images and hacks resulting from the Hack Day will be covered in an upcoming post by our studio manager Renata Kaminska. The non-material results, however, were manifold. Throughout the lead-up and on the day itself there was a palpable buzz amongst the Imaging team, evidence of the positive impact on their morale. It also led to a greater exchange of knowledge between the Imaging team and their colleagues throughout the BL/QFP. The day allowed different areas of the team to come together, combine their expertise and find new ways of working and innovative ways of capturing our collections. Finally, it demonstrated the fantastic experience and skills of our imaging technicians, much of which had not previously been visible to the rest of the team. It was a real celebration of both the material that we are digitising and our talented imaging studio.

This is a guest post by Sotirios Alpanis, Head of Digital Operations for the British Library's Qatar Project, on Twitter as @SotiriosAlpanis.

02 November 2018

Digital Conversation: History and Games

It is very nearly International Games Week, an initiative run by volunteers from around the world to reconnect communities through their libraries around the educational, recreational and social value of all types of games. Here at the British Library we are excited to be hosting the narrative games convention AdventureX on Saturday 10 and Sunday 11 November, and to get the party started on Thursday 8 November we are delighted to be running, in partnership with The National Archives and Wellcome, a Digital Conversation event on the topic of History and Games.

Our star Digital Conversation panel features:
  • Toni Brasting, Creative Partnerships Manager at Wellcome Trust, who collaborates with games studios, designers and scientific researchers to create games that inspire conversations about health.
  • Andrew Burn, Professor of Media Education at the UCL Institute of Education, who will launch MissionMaker Beowulf, a digital platform which empowers students to make 3-D adventure games.

A video showing the process of making a game in MissionMaker Beowulf, followed by a video capture of the game
  • James Delaney, founder and Managing Director of BlockWorks, who built Minecraft maps for Great Fire 1666 at the Museum of London, to mark the 350th anniversary of the Great Fire of London. This summer they also teamed up with English Heritage on a castle building project.

Kenilworth Castle in Minecraft

Trailer of Winter Hall by Lost Forest Games

  • Nick Webber, Associate Professor at Birmingham City University, whose research explores the impact of virtual worlds and online games on the practice of history.
  • Stella Wisdom, Digital Curator for Contemporary British Collections at the British Library, who has collaborated on multiple games initiatives.

The Digital Conversation event takes place in The Knowledge Centre at the British Library on Thursday 8 November, 18.30–20.30; for more details including booking, visit: https://www.bl.uk/events/digital-conversation-history-and-games. Hope to see you there.

This post is by Digital Curator Stella Wisdom, on Twitter as @miss_wisdom.

29 October 2018

Using Transkribus for automated text recognition of historical Bengali Books

In this post Tom Derrick, Digital Curator, Two Centuries of Indian Print, explains the Library's recent use of Transkribus for automated text recognition of Bengali printed books.

Are you working with digitised printed collections that you want to 'unlock' for keyword search and text mining? Maybe you have already heard about Transkribus but thought it could only be used for automated recognition of handwritten texts. If so you might be surprised to hear it also does a pretty good job with printed texts too. You might be even more surprised to hear it does an impressive job with printed texts in Indian scripts! At least that is what we have found from recent testing with a batch of 19th century printed books written in Bengali script that have been digitised through the British Library’s Two Centuries of Indian Print project.

Transkribus was developed by the READ project and is available as a free tool for users who want to automate recognition of historical documents. The British Library has already had some success using Transkribus on manuscripts from our India Office collection, and it was that which inspired me to see how it would perform on the Bengali texts, which present an altogether different type of challenge.

For a start, most text recognition solutions either do not support Indian scripts at all, or do not come close to the level of recognition they achieve with documents written in English or other Latin-script languages. In part this is down to supply and demand: mainstream providers of tools have prioritised Western customers, and there is also a relative lack of digitised Indian texts that can be used to train text recognition engines.

These text recognition engines have also typically been trained on modern dictionaries, and a collection of historical texts like the Bengali books will often contain words which are no longer in use. The books' aged physicality also brings with it the delights of faded print, blotchy paper and other paper-based gremlins that keep conservationists in work yet disrupt automated text recognition. Throw in an extensive alphabet that contains more diverse and complicated character forms than English, and you can start to piece together how difficult it can be to train recognition engines to achieve comparable results with Bengali texts.

So it was with more hope than expectation that I approached Transkribus. We began by selecting 50 pages from the Bengali books, representing as much as possible the variety of typographical and layout styles within the wider collection of c. 500,000 pages. Not an easy task! We uploaded these to Transkribus, manually segmenting paragraphs into text regions and automating line recognition. We then manually transcribed the texts to create a ground truth which, together with the scanned page images, was used to train the recurrent neural network within Transkribus, creating a model based on the 5,700 transcribed words.

View of a segmented page from one of the British Library's Bengali books, along with its transcription, within the Transkribus viewer.

The model was tested on a few pages from the wider collection, with the results summarised in the graph below. The model achieved an average character error rate (CER) of 21.9%, which is comparable to the best results we have seen from other text recognition services. Word accuracy of 61% was based on the number of words that were misspelled in the automated transcription compared to the ground truth. Eventually we would like to use automated transcriptions to support keyword searching of the Bengali books online, and the higher the word accuracy, the greater the chances of users pulling back all relevant hits from their keyword search. We noticed the results often missed the upper zone of certain Bengali characters, i.e. the part of the character or glyph which resides above the matra line that connects characters in Bengali words. Further training focused on recognition of these characters may improve the results.

Graph showing the learning curve of the Bengali model using the Transkribus HTR tool.
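For anyone wondering how CER is calculated: it is the edit (Levenshtein) distance between the automated transcription and the ground truth, divided by the length of the ground truth. A minimal sketch in Python follows; the sample strings are placeholders rather than our Bengali data.

    # Minimal sketch of character error rate (CER): Levenshtein distance
    # between prediction and ground truth, normalised by the length of
    # the ground truth. Sample strings are placeholders.

    def levenshtein(a: str, b: str) -> int:
        # Classic dynamic-programming edit distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    def cer(ground_truth: str, prediction: str) -> float:
        return levenshtein(ground_truth, prediction) / len(ground_truth)

    print(f"{cer('the quick brown fox', 'the quick brwn fx'):.1%}")  # 10.5%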

Our training set of 50 pages is very small compared to other projects using Transkribus and so we think the accuracy could be vastly improved by creating more transcriptions and re-training the model. However, we're happy with these initial results and would encourage others in a similar position to give Transkribus a try.

11 September 2018

Building Library Labs around the world – join the event and complete our survey!

Posted by Mahendra Mahey, BL Labs Manager.

Building Library Labs

Around the world, leading national, state, university and public libraries are creating 'digital lab'-type environments so that their digitised and born-digital collections and data can be opened up and re-used for creative, innovative and inspiring projects by everyone: digital researchers, artists, entrepreneurs, educators and more.

BL Labs, which has now been running for five years, is organising what we believe is the first ever event of its kind in the world! We are bringing together national, state and university libraries with existing or planned digital 'Labs-style' teams for an invite-only workshop this Thursday 13 and Friday 14 September 2018.

A few months ago, we sent out special invitations to these organisations. We were delighted by the excitement generated, and by the tremendous response we received. Over 40 institutions from North America, Europe, Asia and Africa will be attending the workshop at the British Library this week. We have planned plenty of opportunities for networking, sharing lessons learned, and telling each other about innovative projects and services that are using digital collections / data in new and interesting ways. We aim to work together in the spirit of collaboration so that we can continue to build even better Library Labs for our users in the future.

Our packed programme includes:

  • 6 presentations covering topics such as those in our international Library Labs Survey;
  • 4 stories of how national Library Labs are developing in the UK, Austria, Denmark and the Netherlands;
  • 12 lightning talks with topics ranging from 3D-Imaging to Crowdsourcing;
  • 12 parallel discussion groups focusing on subjects such as funding, technical infrastructure and user engagement;
  • 3 plenary debates looking at the value to national Libraries of Labs environments and digital research, and how we will move forward as a group after this event.

We will collate and edit the outputs of this workshop in a report detailing the current landscape of digital Labs in national, state, university and public Libraries around the world.

If you represent one of these institutions, it's still not too late to participate, and you can do so in a few ways:

  • Our 'Building Library Labs' survey is still open, and if you work in or represent a digital Library Lab in one of our sectors, your input will be particularly valuable;
  • You may be able to participate remotely in this week's event in real time through Skype;
  • You can contribute to a collaborative document which delegates are adding to during the event.

If you are interested in one of these options, contact: mahendra.mahey@bl.uk.

Please note that the event is being videoed, and we will be putting clips up on our YouTube channel soon after the workshop.

We will also return to this blog and let you know how we got on, and how you can access some of the other outputs from the event. Watch this space!

06 September 2018

Visualising the Endangered Archives Programme project data on Africa, Part 3. Finishing up

Sarah FitzGerald is a linguistics PhD researcher at the University of Sussex investigating the origins and development of Cameroon Pidgin English. She is currently a research placement student in the British Library’s Digital Scholarship Team, using data from the Endangered Archives Programme to create data visualisations.

This summer I have taken a break by working hard; I’ve broadened my academic horizons by ignoring academia completely; and I’ve felt at home while travelling hundreds of miles a week. But above all else, I’ve just had a really nice time.

In my last two blogs I covered the early stages of my placement at the British Library, and discussed the data visualisation tools I’ve been exploring.

In this final blog post I am going to outline the later stages of my project. I am also going to talk about my experience of undertaking a British Library placement: what I’ve learned, and whether it was worth it (spoiler alert: it was).

What I’ve been doing

The final stages of my project have mostly consisted of two separate lines of investigation.

Firstly, I have been working on finding out as much as I can about the Endangered Archives Programme (EAP)’s projects in Africa, and on finding the best ways to visualise that information, in order to create a bank of visualisations that the EAP team can use when they talk about the work they do. Visualisations, such as the one below showing the number of applications related to each region of Africa by year, can make tables of data much easier to understand.

Chart
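As a sketch of how a chart like this can be produced, the snippet below uses pandas and matplotlib with made-up counts; the real figures come from the EAP application data.

    # Hypothetical sketch: applications per African region per year.
    # Regions, years and counts below are invented for illustration.
    import pandas as pd
    import matplotlib.pyplot as plt

    data = pd.DataFrame(
        {"West Africa": [4, 7, 9],
         "East Africa": [3, 5, 8],
         "Southern Africa": [2, 3, 4]},
        index=[2016, 2017, 2018],
    )
    data.plot(kind="bar", stacked=True)
    plt.xlabel("Year")
    plt.ylabel("Number of applications")
    plt.title("EAP applications by region and year (sample data)")
    plt.tight_layout()
    plt.savefig("applications_by_region.png")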

Secondly, I was curious about why some project applications get funded and some do not. I wanted to know if I could identify any patterns in the reasons why projects get rejected.

This gave me the opportunity to apply my skills as a linguist to the data, albeit on a small scale. I decided to examine the feedback given to unsuccessful applicants by the panel that awards the EAP grants to see if I could identify any patterns. To do this I created a corpus, or electronic database, of texts. This could then be run through corpus analysis software to look for patterns.

AntConc

This image shows a word list created for my corpus using AntConc software, which is a free and open source corpus analysis tool.
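AntConc is a point-and-click tool, but the core idea behind a word list is simple to sketch in code. The snippet below builds token frequencies across a folder of plain-text files; the 'corpus' folder name is a placeholder, not my actual corpus of panel feedback.

    # Minimal word-list sketch: token frequencies across a folder of
    # plain-text files, similar in spirit to AntConc's word list.
    from collections import Counter
    from pathlib import Path
    import re

    def word_list(corpus_dir: str) -> Counter:
        counts = Counter()
        for path in Path(corpus_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8").lower()
            counts.update(re.findall(r"[a-z']+", text))
        return counts

    for word, freq in word_list("corpus").most_common(20):
        print(f"{freq:6d}  {word}")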

My analysis allowed me to identify a number of issues common to many unsuccessful applications. In addition to applications that fall outside the scope of the EAP, there are proposals which would make excellent projects, but whose applications lack the information needed to award a grant.

Based on my analysis I was able to make a number of recommendations about additional information EAP could provide for applicants which might help to prevent potentially valuable archives being lost due to poor applications.

What I’ve learned

As well as learning about visualisation software I’ve learned a lot this summer about the EAP archives.

I’ve found out where applications are coming from, and which African countries have the most associated applications. I’ve learned that there are many great data visualisation tools available for free online. I’ve learned that there are over 70 different languages represented in the EAP archived projects from Africa.

EAP656
James Ssali and an unknown woman, from the Ham Musaka archive, Uganda (EAP656)

One of the most interesting things I’ve learned is just how much archival material is available for research – and on an incredibly broad range of topics. The materials digitised and preserved in Africa over the last 13 years are remarkably varied.

This wealth of information provides so much opportunity for research – and these are just the archives from Africa. The EAP funds projects all over the world.

EAP143
Shui manuscript from China (EAP143)

In addition to learning about the EAP archives, I’ve learned a lot from working in the British Library more generally. The scale of the work carried out here is immense; I don’t think I fully appreciated, before working here for three months, just how large the challenges the Library faces are.

In addition to preserving a copy of every book published in the UK, the BL is also working to create large digital archives to support the ways modern scholarship has developed. It is digitising books, audio and websites, as well as historical documents such as the records of the East India Company.

East India House
View of East India House by Thomas Hosmer Shepherd

Was it worth it?

A PhD is an intense thing to undertake and you have a time limit to complete it. At first glance, taking three months out to work on a placement with little direct relevance to my PhD might seem a bit foolish, particularly when it means a daily commute from Brighton to London.

Far from wasting my time, however, this placement has been an enriching experience. My PhD is on the origins and development of Cameroon Pidgin English. This placement has given me a break from my work while broadening my understanding of African culture and the context in which the language I study is spoken.

I’ve always had an interest in data visualisation and my placement has given me time to play with visualisation tools and gain a real understanding of the resources available. I feel refreshed and ready for the new term despite having worked full time all summer.

The break has also given me thinking space; it has allowed ideas to percolate and given me new skills which I can apply to my work. Taking a break from academia has given me more perspective on my work and more options for how to develop it.

BL
The British Library, St Pancras

Finally, the travelling has been demanding, but my supervisors have been very flexible, allowing me to work from home two days a week. The upside of coming to London regularly has been getting to work with interesting people.

Working in a large institution could be an intimidating and isolating experience, but it has been anything but. The Digital Scholarship team have been welcoming and interested; in particular, I have had two very supportive supervisors. The British Library is really keen to support and develop placement students, and there is a lovely community of PhD students at the BL, some on placements, some doing their PhDs here.

I have had a great time at the British Library this summer and can only recommend the scheme to anyone thinking of applying for a placement next year.

23 August 2018

BL Labs Symposium (2018): Book your place for Mon 12-Nov-2018

The BL Labs team are pleased to announce that the sixth annual British Library Labs Symposium will be held on Monday 12 November 2018, from 9:30 - 17:30 in the British Library Knowledge Centre, St Pancras. The event is free, and you must book a ticket in advance. Last year's event was a sell out, so don't miss out!

The Symposium showcases innovative and inspiring projects which use the British Library’s digital content, providing a platform for development, networking and debate in the digital scholarship field, as well as a focus on the creative reuse of digital collections and data in the cultural heritage sector.

We are very proud to announce that this year's keynote will be delivered by Daniel Pett, Head of Digital and IT at the Fitzwilliam Museum, University of Cambridge.

Daniel Pett
Daniel Pett will be giving the keynote at this year's BL Labs Symposium. Photograph Copyright Chiara Bonacchi (University of Stirling).

  Dan read archaeology at UCL and Cambridge (but played too much rugby) and then worked in IT on the trading floor of Dresdner Kleinwort Benson. Until February this year, he was Digital Humanities lead at the British Museum, where he designed and implemented digital practices connecting humanities research, museum practice and the creative industries. He is an advocate of open access, open source and reproducible research. He designed and built the award-winning Portable Antiquities Scheme database (which holds records of over 1.3 million objects) and enabled collaboration through projects working on linked and open data (LOD) with the Institute for the Study of the Ancient World at New York University (ISAW) and the American Numismatic Society. He has worked with crowdsourcing and crowdfunding (MicroPasts), and developed the British Museum's reputation for 3D capture. He holds honorary posts at the UCL Institute of Archaeology and the UCL Centre for Digital Humanities, and publishes regularly in the fields of museum studies, archaeology and digital humanities.

Dan's keynote will reflect on his years of experience in assessing the value, impact and importance of experimenting with, re-imagining and re-mixing cultural heritage digital collections in Galleries, Libraries, Archives and Museums. Dan will follow in the footsteps of previous prestigious BL Labs keynote speakers: Josie Fraser (2017); Melissa Terras (2016); David De Roure and George Oates (2015); Tim Hitchcock (2014); and Bill Thompson and Andrew Prescott in 2013.

Stella Wisdom (Digital Curator for Contemporary British Collections at the British Library) will give an update on some exciting and innovative projects she and other colleagues have been working on within Digital Scholarship. Mia Ridge (Digital Curator for Western Heritage Collections at the British Library) will talk about 'Living with Machines', a major and ambitious data science/digital humanities project the British Library is about to embark upon in collaboration with the Alan Turing Institute for data science and artificial intelligence.

Throughout the day, there will be several announcements and presentations from nominated and winning projects for the BL Labs Awards 2018, which recognise work that has used the British Library’s digital content in four areas: Research, Artistic, Commercial and Educational. The closing date for the BL Labs Awards is 11 October 2018, so it's not too late to nominate someone or a team, or enter your own project! There will also be a chance to find out who has been nominated and recognised for the British Library Staff Award 2018, which showcases the work of an outstanding individual (or team) at the British Library who has worked creatively and originally with the British Library's digital collections and data (nominations close 12 October 2018).

Adam Farquhar (Head of Digital Scholarship at the British Library) will give an update about the future of BL Labs and report on a special event held in September 2018 for invited attendees from national, state, university and public libraries and institutions around the world, where they were able to share best practices in building 'labs-style environments' for their institutions' digital collections and data.

There will be a 'sneak peek' of an art exhibition in development entitled 'Imaginary Cities' by the visual artist and researcher Michael Takeo Magruder. His practice draws upon information systems such as live and algorithmically generated data, 3D printing and virtual reality, combining modern and traditional techniques such as gold and silver gilding and etching. Michael's exhibition will build on the work he has been doing with BL Labs over the last few years using digitised 18th- and 19th-century urban maps, bringing analogue and digital outputs together. The exhibition will be staged in the British Library's entrance hall in April and May 2019 and will be free to visit.

Finally, we have an inspiring talk lined up to round the day off (more information about this will be announced soon), and - as is our tradition - the symposium will conclude with a reception at which delegates and staff can mingle and network over a drink and nibbles.

So book your place for the Symposium today and we look forward to seeing new faces and meeting old friends again!

For any further information, please contact labs@bl.uk.

Posted by Mahendra Mahey and Eleanor Cooper (BL Labs Team)

13 August 2018

The Parts of a Playbill

Beatrice Ashton-Lelliott is a PhD researcher at the University of Portsmouth studying the presentation of nineteenth-century magicians in biographies, literature, and the popular press. She is currently a research placement student on the British Library’s In the Spotlight project, cleaning and contextualising the crowdsourced playbills data. She can be found on Twitter at @beeashlell and you can help out with In the Spotlight at playbills.libcrowds.com.

In the Spotlight is a brilliant tool for spotting variations between playbills across the eighteenth and nineteenth centuries. The site provides participants with access to thousands of digitised playbills, and the sheets in the site’s collections often list the cast, the scenes, and any innovative ‘machinery’ involved in the production. Whilst the most famous actors obviously needed to be emphasised because they drew more crowds (e.g. any playbills featuring Mr Kean tend to have his name in huge letters), the playbills in In the Spotlight’s volumes suggest that wasn’t always the case for playwrights. Sometimes they’re mentioned by name, but in many cases famous playwrights aren't named on the playbill. I’ve speculated previously that this is because these playwrights were so famous that audiences would hear by word of mouth, or through the press, that a new play of theirs was out, so it was assumed there was no point adding the name as audiences would already know.

What can you expect to see on a playbill?

The basics of a playbill are: the main title of the performance, a subtitle, often the current date, future or past dates of performances, the cast and characters, scenery, short or long summaries of the scenes to be acted, whether the performance is to benefit anyone, and where tickets can be bought from. There are definitely surprises though: the In the Spotlight team have also come across apologies from theatre managers for actors who were scheduled to perform not turning up, or performing drunk! The project forum has a thread for interesting things 'spotted on In the Spotlight', and we always welcome posts from others.
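For the technically minded, the parts listed above lend themselves to structured data. Below is a hypothetical sketch of how a playbill's parts might be modelled in Python; the field names are my own illustration, not In the Spotlight's actual schema, and the example values are invented.

    # Hypothetical model of a playbill's parts (illustrative field
    # names, not In the Spotlight's real schema).
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Playbill:
        main_title: str
        subtitle: Optional[str] = None
        performance_date: Optional[str] = None
        theatre: Optional[str] = None
        cast: dict = field(default_factory=dict)   # character -> performer
        scenes: list = field(default_factory=list)
        benefit_for: Optional[str] = None
        ticket_outlets: list = field(default_factory=list)

    bill = Playbill(
        main_title="The Castle Spectre",    # invented example entry
        theatre="Theatre Royal, Margate",
        performance_date="1797",
    )
    print(bill.main_title, "at", bill.theatre)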

Crowds would often react negatively if the scheduled performers weren’t on stage. Gilli Bush-Bailey also notes in The Performing Century (2007) that crowds would be used to seeing the same minor actors reappear across several parts of the performance and playbills, stating that ‘playbills show that only the lesser actors and actresses in the company appear in both the main piece and the following farce or afterpiece’ (p. 185), with bigger names at theatres royal committing only to either a tragic or comic performance.

Our late 18th-century playbills on the site show quite a standard format in structure and font.

In this 1797 playbill from the Margate volume, the font is uniform, with variations in size to emphasise names and performance titles.

How did playbills change over time?

In the 19th century, all kinds of new and exciting fonts were introduced, along with more experimentation in the structuring of playbills. The type of performance also influenced the layout: for instance, a circus playbill would often be divided into a grid-like structure to describe each act and feature illustrations, and early magician playbills often change orientation half-way down the bill to give more space to describing their tricks and stage.

1834 Birmingham playbill

This 1834 Birmingham playbill is much lengthier than the previous example, showing a variety of fonts and featuring more densely packed text. Although this may look like information overload, the mix of fonts and variations in size still make the main points of the playbill eye-catching to passersby.

James Gregory’s ‘Parody Playbills’ article, stimulated by the In the Spotlight project, contains a lot of great examples and further insights into the deeper meaning of playbills and their structure.

Works Cited

Davis, T. C. and P. Holland, eds. (2007). The Performing Century: Nineteenth-Century Theatre History. Basingstoke: Palgrave Macmillan.

Gregory, J. (2018). ‘Parody Playbills: The Politics of the Playbill in Britain in the Eighteenth and Nineteenth Centuries’, Electronic British Library Journal (eBLJ).