Extracting Place Names from Web Archives at Archives Unleashed Vancouver
By Gethin Rees, Lead Curator of Digital Mapping, The British Library
I recently attended the Archives Unleashed hackathon in Vancouver. The fantastic Archives Unleashed project aims to help scholars research the recent past using big data from web archives. The project organises a series of datathons where researchers work collaboratively with web archive collections over the course of two days. Participants divide into small teams, each aiming to produce a piece of research using the archives that they present at the end of the event in competition for a prize. One of the most important tools we used at the datathon was the Archives Unleashed Toolkit (AUT).
The team I was on chose to use a dataset documenting a series of wildfires in British Columbia in 2017 and 2018 (ubc-bc-wildfires). I came to the datathon with an interest in visualising web archive data geographically: place names, or toponyms, contained in the text from web pages would form the core of such a visualisation. I had little experience of natural language processing before the datathon but, keen to improve my Python skills, I decided to take on the challenge in the true spirit of unleashing archives!
Group formation is always a fun part about #HackArchives! Look at all the brainstorming going on. Excited to see how teams will develop. pic.twitter.com/XXWP5TnziL
— The Archives Unleashed Project (@unleasharchives) November 1, 2018
My plan to produce such a visualisation consisted of several steps:
1) Pre-process the web archive data (Clean)
2) Extract named entities from the text (NER)
3) Determine which are place names (Geoparse)
4) Add coordinates to place names (Geocode)
5) Visualise the place names (Map)
This blog post is concerned primarily with steps 2 and 3.
An important lesson from the datathon for me is that web archive data are very messy. To get decent results from steps 2 and 3 it is important to clean the data as thoroughly as possible. Luckily, the AUT contains several methods that can help to do this (outlined here). The analyses that follow were all run on the output of the AUT ‘Plain text minus boilerplate’ method.
There is a wealth of options available to achieve steps 2 and 3; the discussion that follows does not aim to be exhaustive but to evaluate the methods we attempted at the datathon.
AUT NER
The first method we tried was the AUT NER method (discussed here). The AUT does a great job of packaging up the Stanford Named Entity Recognizer for easy use with a simple Scala command. We ran the method on the AUT derivative of the 2017 section of our wildfires dataset (around 300 MB) using the powerful virtual machines helpfully provided by the organisers. However, we found it difficult to get results as the analysis took a long time and often crashed the virtual machine. These problems persisted even when running the NER method on a small subset of the wildfires dataset, making it difficult to use on even a smallish set of WARCs.
The results came back in the following format:
(20170809,dns:www.nytimes.com,{"PERSON":[],"ORGANIZATION":[],"LOCATION":[]})
This output required processing with a simple Python script.
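A minimal sketch of the kind of post-processing involved might look like the following. This is a reconstruction rather than the script we used at the datathon, and it assumes the NER output has been collected into a plain text file with one record per line (the file name is illustrative):

import json
import re

def parse_ner_line(line):
    # Parse one line of the form (20170809,dns:www.nytimes.com,{"PERSON":[],...,"LOCATION":[]})
    # into (crawl_date, domain, entities_dict); return None for lines that do not match.
    match = re.match(r'^\((\d+),([^,]+),(\{.*\})\)$', line.strip())
    if match is None:
        return None
    crawl_date, domain, entities_json = match.groups()
    return crawl_date, domain, json.loads(entities_json)

locations = []
with open('ner-output.txt', encoding='utf-8') as f:  # illustrative file name
    for line in f:
        parsed = parse_ner_line(line)
        if parsed:
            locations.extend(parsed[2].get('LOCATION', []))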
When we did obtain results, the “LOCATION” arrays seemed to contain only a fraction of the total place names that appeared in the text.
AUT
- Positives: Simple to execute, tailored to web archive data
- Negatives: Time consuming, processor intensive, output requires processing, not all locations returned
Geoparser
We next turned our attention to the Edinburgh Geoparser and the excellent accompanying tutorial, which I have used to great effect on other projects. Unfortunately, the analysis resulted in several errors that prevented the Geoparser from returning results, and we were not able to resolve them in the time available at the datathon. The Geoparser appeared unable to deal with the output of the AUT’s ‘Plain text minus boilerplate’ method. I attempted other ways of cleaning the data, including changing the encoding and removing control characters. The following Python commands:
import re

# read the derivative text, stripping any UTF-8 byte order mark
s = open('9196-fulltext.txt', mode='r', encoding='utf-8-sig').read()
# strip ASCII control characters (such as 0x1f) and trailing whitespace
s = re.sub(r'[\x00-\x1F]+', '', s)
s = s.rstrip()
removed these errors:
Error: Input error: Illegal character <0x1f> immediately before file offset 6307408
in unnamed entity at line 2169 char 1 of <stream>
Error: Expected whitespace or tag end in start tag
in unnamed entity at line 4 char 6 of <stream>
However, the following error remained, which we could not fix even after breaking the text into small chunks:
Error: Document ends too soon
in unnamed entity at line 1 char 1 of <stream>
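The chunking mentioned above amounted to something like the sketch below (the chunk size and output file names are arbitrary illustrations, not anything the Geoparser requires):

import os

def chunk_text(path, out_dir, chunk_size=50000):
    # split a large plain-text file into smaller files of roughly chunk_size characters
    os.makedirs(out_dir, exist_ok=True)
    text = open(path, mode='r', encoding='utf-8-sig').read()
    for i in range(0, len(text), chunk_size):
        out_path = os.path.join(out_dir, 'chunk-{}.txt'.format(i // chunk_size))
        with open(out_path, 'w', encoding='utf-8') as out:
            out.write(text[i:i + chunk_size])

chunk_text('9196-fulltext.txt', 'chunks')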
I would be grateful for any input on how to overcome this ‘Document ends too soon’ error, as I would love to use the Geoparser to extract place names from WARC files in the future.
Geoparser
- Positives: Well-documented, powerful software. Fairly easy to use. Excellent results with OCRed or plain text.
- Negatives: Didn’t seem to deal well with the scale and/or messiness of web archive data.
The #archivesunleashed toolkit is an open-source platform for analyzing web archives with Apache Spark.....and is the underlying basis for the Archives Unleashed Cloud (AUK). Demo Time w/ @ruebot !! #webarchives #hackarchives
Tour also available here: https://t.co/bo9Ezgcz2u pic.twitter.com/c9se9Cn4ls
— The Archives Unleashed Project (@unleasharchives) November 1, 2018
NLTK
My final attempt to extract place names involved using the Python NLTK library with the following packages: 'averaged_perceptron_tagger', 'maxent_ne_chunker' and 'words'. The initial aim was to extract the named entities from the text. A preliminary script designed to achieve this can be found here.
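The core of that extraction looks roughly like the sketch below. This is a reconstruction rather than the linked script, and note that NLTK’s tokenizers also need the 'punkt' package:

import nltk

# the models mentioned above, plus 'punkt' for sentence and word tokenisation
for pkg in ('punkt', 'averaged_perceptron_tagger', 'maxent_ne_chunker', 'words'):
    nltk.download(pkg, quiet=True)

def extract_named_entities(text):
    # return the named-entity strings found by NLTK's default chunker
    entities = []
    for sentence in nltk.sent_tokenize(text):
        tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sentence)))
        for subtree in tree:
            if hasattr(subtree, 'label'):  # chunked entities are subtrees; plain tokens are tuples
                entities.append(' '.join(token for token, tag in subtree.leaves()))
    return entities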
This extraction does not separate place names from other named entities, such as the names of people and organisations, so a second stage involved checking whether the entities returned by NLTK were present in a gazetteer. We found a suitable gazetteer with a wealth of different information, and in the final hours of the datathon I attempted to hack together something to match the NER results with the gazetteer.
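A minimal sketch of that matching step, assuming the gazetteer can be reduced to a simple CSV with place names in the first column (the file name and example entities are purely illustrative):

import csv

def load_gazetteer(path):
    # read place names from the first column of a CSV into a lowercase set for fast lookup
    with open(path, encoding='utf-8') as f:
        return {row[0].strip().lower() for row in csv.reader(f) if row}

def keep_place_names(entities, gazetteer):
    # keep only those named entities that match a gazetteer entry
    return [e for e in entities if e.lower() in gazetteer]

gazetteer = load_gazetteer('bc-place-names.csv')  # illustrative file name
print(keep_place_names(['Kamloops', 'Justin Trudeau', 'Williams Lake'], gazetteer))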
Unfortunately, I ran out of time both to write the necessary code and to run the script over the dataset. The script badly needs improvement using dataframes and other optimisations. Notwithstanding its preliminary nature, it is clear that this method of extracting place names is slow. The quality of results is also highly dependent on the quality and size of the gazetteer: only place names found within the gazetteer will be extracted, so if the gazetteer is biased or deficient in some way the resulting output will be skewed. Furthermore, as the gazetteer becomes larger, the extraction of place names will become painfully slow.
The method described replicates the functionality of geoparser tools yet is a little more flexible, allowing the user to take account of the idiosyncrasies of web archive data, such as unusual characters.
NLTK
- Positives: Flexible, works
- Negatives: Slow, reliant on the gazetteer, requires Python skills
Concluding Remarks
Despite the travails that I have outlined, my teammates, adopting a non-programmatic approach, came up with this brilliant map by doing some nifty things with a gazetteer, Voyant Tools and QGIS.
From a programmatic perspective, it appears that there is still work required to develop a method for extracting place names from web archive data at scale, particularly in the hectic and fast-paced environment of a datathon. The main challenge is the messiness of the data, with many tools throwing errors that were difficult to rectify. For future datathons, speed of analysis and implementation is a critical consideration, as datathons aim to deal with big data in a short amount of time. Of course, the preceding discussion has hardly considered the quality of the information output by the tools; this is another essential consideration and requires further work. Another future direction would be to examine other tools such as spaCy, Polyglot and NER-Tagger, as described in this article.