THE BRITISH LIBRARY

Digital scholarship blog

18 posts categorized "LIS research"

21 April 2018

On the Road (Again)


Flickr image: Wanderer
Image from the British Library’s Million Images on Flickr, found on p 198 of 'The Cruise of the Land Yacht “Wanderer”; or, thirteen hundred miles in my caravan, etc' by William Gordon Stables, 1886.

Now that British Summer Time has officially arrived, and with it some warmer weather, British Library Labs are hitting the road again with a series of events in Universities around the UK. The aim of these half-day roadshows is to inspire people to think about using the library's digitised collections and datasets in their research, art works, sound installations, apps, businesses... you name it!

A digitised copy of a manuscript is a very convenient medium to work on, especially if you are unable to visit the library in person and order an original item up to a reading room. But there are so many other uses for digitised items! Come along to one of the BL Labs Roadshows at a University department near you and find out more about the methods used by researchers in Digital Scholarship, from data-mining and crowdsourcing to optical character recognition for transcribing the words from an imaged page into searchable text.

At each of the roadshow events, there will be speakers from the host institution describing some of the research projects they have already completed using digitised materials, as well as members of the British Library who will be able to talk with you about proposed research plans involving digitised resources. 

The locations of this year's roadshows are: 

Mon 26th March - BL Labs Roadshow 2018 (CityLIS) - internal event

Mon 9th April - BL Labs Roadshow 2018 (Open University) - internal event

Thu 12th April - BL Labs Roadshow 2018 (University of Bristol & Cardiff Digital Cultures Network)

Tue 24th April - BL Labs Roadshow 2018 (UCL)

Wed 25th April - BL Labs Roadshow 2018 (University of Kent)

Wed 2nd May - BL Labs Roadshow 2018 (University of Edinburgh)

Tue 15th May - BL Labs Roadshow 2018 (University of Wolverhampton)

Wed 16th May - BL Labs Roadshow 2018 (University of Lincoln)

Tue 5th June - BL Labs Roadshow 2018 (University of Leeds)

BL Labs Roadshows 2018
See a full programme and book your place using the Eventbrite page for each event.

If you want to discover more about the Digital Collections, and Digital Scholarship at the British Library, follow us on Twitter @BL_Labs, read our Blog Posts, and get in touch with BL Labs if you have some burning research questions!

12 April 2018

British Library Labs application for Digital Research support


BL Labs supports researchers, artists, entrepreneurs and educators who want to use the British Library's digital collections and data

We are proud to announce the launch of a new service where we will be able to provide up to 5 days of support to help you develop a project idea that uses our digital collections and data. In that time, we will help you understand the collection(s) you want to work with and will provide technical, curatorial and legal advice about your project. We can also help you with scope, costs, time-frames, risks and any other relevant issues.

Get support to develop an idea using the British Library's Digital Collections & Data

We will review and select applications at the beginning of each month. If your application is selected, we will work with you to provide targeted support and help you develop your project further.

We strongly recommend that before you submit your idea you explore the digital collections and data you are interested in and contact us at labs@bl.uk for some initial guidance.

You can also visit our previous ideas and projects pages for inspiration.

Once you're ready to go, send in your application using this form.

The 2018 BL Labs Awards: enter before midnight Thursday 11th October!


With six months to go before the submission deadline, we would like to announce the 2018 British Library Labs Awards!

The BL Labs Awards are a way of formally recognising outstanding and innovative work that has been created using the British Library’s digital collections and data.

Have you been working on a project that uses digitised material from the British Library's collections? If so, we'd like to encourage you to enter that project for an award in one of our categories.

This year, the BL Labs Awards is commending work in four key areas:

  • Research - A project or activity which shows the development of new knowledge, research methods, or tools.
  • Commercial - An activity that delivers or develops commercial value in the context of new products, tools, or services that build on, incorporate, or enhance the Library's digital content.
  • Artistic - An artistic or creative endeavour which inspires, stimulates, amazes and provokes.
  • Teaching / Learning - Quality learning experiences created for learners of any age and ability that use the Library's digital content.

BL Labs Awards 2017 winners: Research Award winner 'A large-scale comparison of world music corpora with computational tools' (top left); Commercial Award winner 'Movable Type: The Card Game' (top right); Artistic Award winner 'Imaginary Cities' (bottom left); and Teaching / Learning Award winner 'Vittoria’s World of Stories' (bottom right)

There is also a Staff award which recognises a project completed by a staff member or team, with the winner and runner up being announced at the Symposium along with the other award winners.

The closing date for entering your work for the 2018 round of BL Labs Awards is midnight BST on Thursday 11th October 2018. Please submit your entry and/or help us spread the word to all interested and relevant parties over the next few months. This will ensure we have another year of fantastic digital-based projects highlighted by the Awards!

The entries will be shortlisted after the submission deadline (11/10/2018) has passed, and selected shortlisted entrants will be notified via email by midnight BST on Friday 26th October 2018. 

A prize of £500 will be awarded to the winner and £100 to the runner up in each of the Awards categories at the BL Labs Symposium on 12th November 2018 at the British Library, St Pancras, London.

The talent of the BL Labs Awards winners and runners up from 2017, 2016 and 2015 has resulted in a remarkable and varied collection of innovative projects. You can read about some of the 2017 Awards winners and runners up in our other blogs, links below:

British Library Labs Staff Award Winner – Two Centuries of Indian Print


  • Research category Award (2017) winner: 'A large-scale comparison of world music corpora with computational tools', by Maria Panteli, Emmanouil Benetos and Simon Dixon, Centre for Digital Music, Queen Mary University of London

  • Research category Award (2017) runner up: 'Samtla' by Dr Martyn Harris, Prof Dan Levene, Prof Mark Levene and Dr Dell Zhang
  • Commercial Award (2017) winner: 'Movable Type: The Card Game' by Robin O'Keeffe
  • Artistic Award (2017) winner: 'Imaginary Cities' by Michael Takeo Magruder
  • Artistic Award (2017) runner up: 'Face Swap', by Tristan Roddis and Cogapp
  • Teaching and Learning (2017) winner: 'Vittoria's World of Stories' by the pupils and staff of Vittoria Primary School, Islington
  • Teaching and Learning (2017) runner up: 'Git Lit' by Jonathan Reeve
  • Staff Award (2017) winner: 'Two Centuries of Indian Print' by Layli Uddin, Priyanka Basu, Tom Derrick, Megan O’Looney, Alia Carter, Nur Sobers khan, Laurence Roger and Nora McGregor
  • Staff Award (2017) runner up: 'Putting Collection metadata on the map: Picturing Canada', by Philip Hatfield and Joan Francis

For any further information about BL Labs or our Awards, please contact us at labs@bl.uk.

22 January 2018

BL Labs 2017 Symposium: Data Mining Verse in 18th Century Newspapers by Jennifer Batt


Dr Jennifer Batt, Senior Lecturer at the University of Bristol, reported on an investigation that used text- and data-mining methods to find verse in the British Library’s Burney Collection of digitised eighteenth-century newspapers, recovering a complex, expansive, ephemeral poetic culture that has been lost to us for well over 250 years. The collection runs to around one million pages: some 700 bound volumes covering 1,271 titles of newspapers and news pamphlets published in London, along with some English provincial, Irish and Scottish papers and a few examples from the American colonies.

A video of her presentation is available below:

Jennifer's slides are available on SlideShare by clicking on the image below or following the link:

Datamining for verse in eighteenth-century newspapers

https://www.slideshare.net/labsbl/datamining-for-verse-in-eighteenthcentury-newsapers

04 August 2017

BL Labs Awards (2017): enter before midnight Wednesday 11th October!


Posted by Mahendra Mahey, Manager of British Library Labs.

The BL Labs Awards formally recognises outstanding and innovative work that has been created using the British Library’s digital collections and data.

The closing date for entering the BL Labs Awards (2017) is midnight BST on Wednesday 11th October. So please submit your entry and/or help us spread the word to all interested and relevant parties over the next few months or so. This will ensure we have another year of fantastic digital-based projects highlighted by the Awards!

This year, the BL Labs Awards is commending work in four key areas:

  • Research - A project or activity which shows the development of new knowledge, research methods, or tools.
  • Commercial - An activity that delivers or develops commercial value in the context of new products, tools, or services that build on, incorporate, or enhance the Library's digital content.
  • Artistic - An artistic or creative endeavour which inspires, stimulates, amazes and provokes.
  • Teaching / Learning - Quality learning experiences created for learners of any age and ability that use the Library's digital content.

After the submission deadline of midnight BST on Wednesday 11th October has passed, the entries will be shortlisted. Selected shortlisted entrants will be notified via email by midnight BST on Friday 20th October 2017.

A prize of £500 will be awarded to the winner and £100 to the runner up of each Awards category at the BL Labs Symposium on 30th October 2017 at the British Library, St Pancras, London.

The talent of the BL Labs Awards winners and runners-up of 2016 and 2015 has led to the production of a remarkable and varied collection of innovative projects. In 2016, the Awards commended work in four main categories – Research, Commercial, Artistic and Teaching/Learning:

  • Research category Award (2016) winner: 'Scissors and Paste', by M. H. Beals. Scissors and Paste utilises the 1800-1900 digitised British Library Newspapers collection to explore the possibilities of mining large-scale newspaper databases for reprinted and repurposed news content.
  • Artistic Award (2016) winner: 'Hey There, Young Sailor', written and directed by Ling Low with visual art by Lyn Ong. Hey There, Young Sailor combines live action with animation, hand-drawn artwork and found archive images to tell a love story set at sea. The video draws on late 19th century and early 20th century images from the British Library's Flickr collection for its collages and tableaux and was commissioned by Malaysian indie folk band The Impatient Sisters and independently produced by a Malaysian and Indonesian team.
BL Labs Award Winners 2016
Image: 'Scissors and Paste', by M. H. Beals (top left); 'Curating Digital Collections to Go Mobile', by Mitchell Davis (top right); 'Hey There, Young Sailor', written and directed by Ling Low with visual art by Lyn Ong (bottom left); 'Library Carpentry', founded by James Baker and involving the international Library Carpentry team (bottom right)
  • Commercial Award (2016) winner: 'Curating Digital Collections to Go Mobile', by Mitchell Davis. BiblioBoard is an award-winning e-content delivery platform, with online curatorial and multimedia publishing tools that make it simple for subject-area experts to create visually stunning multimedia exhibits for the web and mobile devices without any technical expertise; the winning example used a collection of digitised 19th-century books.
  • Teaching and Learning (2016) winner: 'Library Carpentry', founded by James Baker and involving the international Library Carpentry team. Library Carpentry is software skills training aimed at the needs and requirements of library professionals. It takes the form of a series of modules that are available online for self-directed study, or for adaptation and reuse by library professionals in face-to-face workshops using British Library data and collections. Library Carpentry is in the commons and for the commons: it is not tied to any institution or person. For more information, see http://librarycarpentry.github.io/.
  • Jury’s Special Mention Award (2016): 'Top Georeferencer – Maurice Nicholson'. Maurice leads the effort to georeference over 50,000 maps that were identified through Flickr Commons; read more about his work here.

For any further information about BL Labs or our Awards, please contact us at labs@bl.uk.

24 February 2017

Library Carpentry: software skills workshops for librarians


Guest post by James Baker, Lecturer in Digital History and Archives, University of Sussex.

Librarians play a crucial role in cultivating world-class research, and in most disciplinary areas today world-class research relies on the use of software. Established non-profit organisations such as Software Carpentry and Data Carpentry offer introductory software skills training with a focus on the needs and requirements of research scientists. Library Carpentry is a comparable introductory software skills training programme with a focus on the needs and requirements of library professionals; by software skills, I mean coding and data manipulation that go beyond the use of familiar office suites. As librarians have substantial expertise working with data, we believe that adding software skills to their armoury is an effective and important use of professional development resources that benefits both library professionals and their colleagues and collaborators across higher education and beyond.

In November 2015 the first Library Carpentry workshop programme took place at City University London Centre for Information, generously supported by the Software Sustainability Institute as part of my 2015 Fellowship. Since then 21 workshops have run in 7 countries across 4 continents and the Library Carpentry training materials have been developed by an international team of librarians, information scientists, and information technologists. Our half-day lessons, which double up as self-guided learning materials, now cover the basics of data and computing, using a command line prompt to manipulate data, version control in Git, normalising data in OpenRefine, working with databases in SQL, and programming with Python.

What distinguishes these lessons from other learning materials is that the exercises and use cases that frame Library Carpentry are drawn from library practice and are based on data familiar to librarians: in most cases, open datasets of publication metadata released under an open licence by the British Library. Library Carpentry, then, is as much about daily practice as it is about novelty, about dealing with what is in front of us today as much as about preparing us for what is coming.

These lessons, and everything we do, are in the commons and for the commons, and are not tied to any institution or person. We are a community effort built and maintained by the community. For more on Library Carpentry and our future plans, see our recent article in LIBER Quarterly (Baker et al. Library Carpentry: software skills training for library professionals. 2016. DOI: http://doi.org/10.18352/lq.10176) and our website librarycarpentry.github.io.

James Baker, receiving the BL Labs Award for Teaching and Learning 2016 on behalf of the Library Carpentry community 

The Learning and Teaching Award given to Library Carpentry at the 2016 British Library Labs Awards has enabled us to extend this community. In November we launched a call for Library Carpentry workshops seeking financial support. We were humbled by the volume and diversity of the responses received and are delighted to be able to fund two very different workshops that will reach very different communities of librarians. The first is a collaboration between Somerset Libraries Glass Box Project, {Libraries:Hacked}, and Plymouth Libraries for a Library Carpentry workshop that will target public, academic, and specialist librarians. The second workshop will take place at University of Sheffield and will be coordinated by the White Rose Consortium for the benefit of university librarians across the region. Details of these events will be advertised at librarycarpentry.github.io in due course, along with four or five Library Carpentry workshops that we were unable to fund but that will still enjoy logistical support from members of the Library Carpentry community.

Library Carpentry has taken great strides in a short period of time. We continue to maintain and update our lesson materials to ensure that they fit with library practice, and we are working closely with Software Carpentry and Data Carpentry to map out a future direction for Library Carpentry that meets the needs of this valuable community. We are always looking for people to bring their expertise and perspective to this work. So if you want to get involved in any capacity, please post something in our Gitter discussion forum, raise an issue on or suggest an edit to one of our lessons, contact us via Twitter, or request support with a workshop. We'd love to hear from you.

21 December 2016

Mobius programme – on the beach of learning


This guest post is by Virve Miettinen, who spent four months with various teams at the British Library.

Every morning there’s a 100-metre queue in front of the British Library. It seems to say a lot about the unashamed nerdiness and love of learning in this city. Usually all the queuers have already put the things they might need in the Reading Room into a clear plastic bag, so they can head straight down to the lockers, stow away their coats, handbags and laptop cases, and secure a place on the beach of learning.

Virve Miettinen

The Mobius fellowship programme, organised by the Finnish Institute in London, enables mobility for visual arts, museum, library and archives professionals, and customised working periods as part of the host organisation’s staff, in my case the British Library. The programme is a great opportunity to break away from daily routines, to think about one’s professional identity, find fresh ideas, compare the practices and methods between two countries, share knowledge and build meaningful networks.

Learn, relearn and unlearn from each other

Learning isn’t a destination, it’s a never-ending road of discovery, challenge, inspiration and wonder. Each learning moment builds character, shapes thoughts, guides futures. But what makes us learn? For me the answer is other people, and during the Mobius Fellowship I’ve been blessed with the chance to work with talented people willing to share their knowledge at the British Library.

I’ve familiarised myself with the British Library Learning Team, which is responsible for the library’s engagement with all kinds of learners. The Learning Team offers workshops, activities and resources for schools, teachers and learners of all ages.

I’ve been following the work of the Digital Scholarship team and BL Labs project to learn more about the incredible digital collections the library has to offer, and how to open them up for the public through various activities such as competitions, events and projects.

I’ve worked with the Knowledge Quarter, a network of (currently) 76 partners within a one-mile radius of King’s Cross who actively create and disseminate knowledge. Partners include over 49 academic, cultural, research, scientific and media organisations large and small: from the British Library and University of the Arts London to the School of Life, Connected Digital Economy Catapult, Francis Crick Institute and Google.

I’ve assisted the Library’s Community Engagement Manager Emma Morgan. She has been working as a community engagement manager for six months now and the aim of her work is to create meaningful, long-lasting, mutually beneficial relationships with the surrounding community, i.e. residents, networks and organisations.

Inside the British Library

I’ve observed the library’s marketing and communications unit in action, and learned for example how they measure and research the customer experience, i.e. who visits and uses the BL, what they think of their experience and how the BL might improve it.

I’ve got many 'mental souvenirs' to take back home with me - if they interest you, read more from my Mobius blog: http://itssupercalafragilistic.tumblr.com/. 

100 digital stories about Finnish-British relations

As part of the Mobius programme I’ve been working on a co-operative project between the British Library, the National Library of Finland, the Finnish National Archives, the Finnish Institute in London and the Finnish Embassy. In the last three decades, contacts between Finland and the UK, two relatively distant nations, have multiplied. At the same time, the network of cultural relationships has tightened into a seamless 'love story' – something that would not have been easy to predict just 50 years ago. In 2017, the Finnish Institute celebrates the centennial anniversary of Finland’s independence by telling the story of two nations – the aim is to make the history, the interaction and the links between these two countries tangible and visible.

We are collaborating to create a digital gallery open to all, which offers its visitors carefully curated pieces of the shared history of the two countries and their political, cultural and economic relations. It will offer new information on the relations and influences between the two countries. It consists of digitised historical materials, like letters, news, cards, photographs, tickets and maps. The British Library and other partners will select 100 digitised items to create the basis of the gallery.

The gallery will be expanded further through co-creation. In the spirit of the theme of Finland’s centenary 'together', the gallery is open to all and easily accessible. With the call 'Wanted – make your own heritage' we invite people to share their own stories and interpretations, and record history through them. The gallery feeds curiosity, creates interaction and engages users to share their own memories relating to Finnish-British experiences. The users are invited to interpret recent history from a personal point of view.

The work continues after my Mobius-period and the gallery will open in September 2017. Join us and share your memories. Be frank, withdrawn, furious, imaginative, witty or sad. Through your story you create history.

P.S. The British Library Reading Room is actually far from the Beach of Learning; it’s more like the Coolest Place To Be. I found myself freezing in the air-conditioned Rare Books Reading Room despite wearing my leather jacket and an extra pair of leggings.

Virve Miettinen works at Helsinki City Library / Central Library as a participation planner. Her job is to engage citizens and partners in designing the library of the future. For Helsinki City Library, co-operative planning and service design mean designing premises and services together with library users while taking advantage of user-centric methods. Her interests include co-design, service design, community engagement and community-led city development. At the moment she is also working on her PhD, entitled 'Co-creative practices in library services'.

22 August 2016

SherlockNet: tagging and captioning the British Library’s Flickr images


Finalists of the BL Labs Competition 2016, Karen Wang, Luda Zhao and Brian Do, update us on the progress of their SherlockNet project:


This is an update on SherlockNet, our project to use machine learning and other computational techniques to dramatically increase the discoverability of the British Library’s Flickr images dataset. Below is some of our progress on tagging, captioning, and the web interface.

Tags

When we started this project, our goal was to classify every single image in the British Library's Flickr collection into one of 12 tags -- animals, people, decorations, miniatures, landscapes, nature, architecture, objects, diagrams, text, seals, and maps. Over the course of our work, we realised the following:

  1. We were achieving incredible accuracy (>80%) in our classification using our computational methods.
  2. If our algorithm assigned two tags to an image with approximately equal probability, there was a high chance the image had elements associated with both tags.
  3. However, these tags were in no way enough to expose all the information in the images.
  4. Luckily, each image is associated with text on the corresponding page.

We thus wondered whether we could use the surrounding text of each image to help expand the “universe” of possible tags. While the text around an image may or may not be directly related to the image, this strategy isn’t without precedent: Google Images uses text as its main method of annotating images! So we decided to dig in and see how this would go.

As a first step, we took all digitised text from the three pages surrounding each image (the page before, the page of, and the page after) and extracted all noun phrases. We figured that although important information may be captured in verbs and adjectives, the main things people will be searching for are nouns. Besides, at this point this is a proof of principle that we can easily extend later to a larger set of words. We then constructed a composite set of all words from all images, and only kept words present in between 5% and 80% of documents. This was to get rid of words that were too rare (often misspellings) or too common (words like ‘the’, ‘a’, ‘me’ -- called “stop words” in the natural language processing field).
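The document-frequency filter described above can be sketched in a few lines of Python. The toy documents below are invented for illustration; the real pipeline would run over the noun phrases extracted from the OCR text of the three pages around each image.

```python
from collections import Counter

# Toy stand-ins for the OCR text from the pages around each image.
docs = [
    "the ship sailed past the lighthouse at dawn",
    "the caravan crossed the moor near the lighthouse",
    "a map of the coast with the lighthouse marked",
    "the page of advertisements and notices",
]

# Document frequency: in how many documents does each word occur?
df = Counter()
for doc in docs:
    df.update(set(doc.split()))

n = len(docs)
# Keep words present in between 5% and 80% of documents: too-rare words
# are often OCR misspellings, too-common ones are stop words like "the".
vocab = sorted(w for w, count in df.items() if 0.05 * n <= count <= 0.80 * n)
print(vocab)
```

Here "the" appears in all four documents and is dropped, while "lighthouse" (three of four) survives; on a corpus of only four documents the thresholds are crude, but on a million pages the same cut-offs separate noise from useful vocabulary.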

With this data we were able to use a tool called Latent Dirichlet Allocation (LDA) to find “clusters” of images in an automatic way. We chose the original 12 tags after manually going through 1,000 images on our own and deciding which categories made the most sense based on what we saw; but what if there are categories we overlooked or were unable to discern by hand? LDA solves this by trying to find a minimal set of tags where each document is represented by a set of tags, and each tag is represented by a set of words. Obviously the algorithm can’t provide meaning to each tag, so we provide meaning by looking at the words that are present or absent in each tag. We ran LDA on a sample of 10,000 images and found tag clusters for men, women, nature, and animals. Not coincidentally, these are similar to our original tags and represent a majority of our images.

This doesn’t solve our quest for expanding our tag universe though. One strategy we thought about was to just use the set of words from each page as the tags for each image. We quickly found, however, that most of the words around each image are irrelevant to the image, and in fact sometimes there was no relation at all. To solve this problem, we used a voting system [1]. From our computational algorithm, we found the 20 images most similar to the image in question. We then looked for the words that were found most often in the pages around these 20 images. We then use these words to describe the image in question. This actually works quite well in practice! We’re now trying to combine this strategy (finding generalised tags for images) with the simpler strategy (unique words that describe images) to come up with tags that describe images at different “levels”.
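The neighbour-voting step can be sketched as follows. The image ids, two-dimensional feature vectors, and page words are all invented for illustration; the real system uses the 20 most similar images under CNN-derived features rather than the 2 neighbours used here.

```python
from collections import Counter

# Invented feature vectors (standing in for CNN features) and the
# noun phrases found on the pages around each image.
features = {
    "img1": [1.0, 0.0],
    "img2": [0.9, 0.1],
    "img3": [0.8, 0.2],
    "img4": [0.0, 1.0],
}
page_words = {
    "img1": ["ship", "sea", "harbour"],
    "img2": ["ship", "sea", "sailor"],
    "img3": ["ship", "sea", "wave"],
    "img4": ["cathedral", "spire"],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def vote_tags(target, k=2, n_tags=2):
    """Tag `target` with the words most common around its k most similar images."""
    neighbours = sorted(
        (i for i in features if i != target),
        key=lambda i: cosine(features[target], features[i]),
        reverse=True,
    )[:k]
    votes = Counter(w for i in neighbours for w in page_words[i])
    return [w for w, _ in votes.most_common(n_tags)]

print(vote_tags("img1"))  # -> ['ship', 'sea']
```

Because the tags come from the neighbours' pages rather than the image's own page, a page of irrelevant surrounding text cannot mislabel its image on its own; several similar images have to "agree" on a word before it becomes a tag.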

Image Captioning

We started with a very ambitious goal: given only the image input, can we give a machine-generated, natural-language description of the image with a reasonably high degree of accuracy and usefulness? Given the difficulty of the task and our timeframe, we didn’t expect to get perfect results, but we hoped to come up with a first prototype to demonstrate some of the recent advances and techniques that we believe will be promising for research and application in the future.

We planned to look at two approaches to this problem:

  • Similarity-based captioning. Images that are very similar to each other under a distance metric often share common objects, traits, and attributes that show up in the distribution of words in their captions. By pooling words together from a bag of captions of similar images, one can come up with a reasonable caption for the target image.
  • Learning-based captioning. By utilising a CNN similar to what we used for tagging, we can capture higher-level features in images. We then attempt to learn the mappings between the higher-level features and their representations in words, using either another neural network or other methods.
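The first, similarity-based approach can be sketched in a few lines; all the feature vectors and captions below are invented, and a real system would pool over far more neighbours and filter stop words before assembling a caption.

```python
from collections import Counter

# Invented feature vectors and human captions for a few "known" images.
known = {
    "a": ([0.9, 0.1], "ship at sea under stormy sky"),
    "b": ([0.8, 0.3], "sailing ship near sea harbour"),
    "c": ([0.1, 0.9], "portrait of woman in dark dress"),
}
query = [0.85, 0.2]  # features of the image we want to caption

def distance(u, v):
    # Euclidean distance as a stand-in for whatever metric the system uses.
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

# Pool caption words from the k nearest images; words shared by several
# neighbouring captions are the most trustworthy building blocks.
k = 2
neighbours = sorted(known, key=lambda i: distance(query, known[i][0]))[:k]
pool = Counter(w for i in neighbours for w in known[i][1].split())
print(pool.most_common(3))
```

The query image is nearest the two nautical images, so "ship" and "sea" dominate the pooled word counts and would anchor the generated caption.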

We have made some promising forays into the second technique. As a first step, we used a pre-trained CNN-RNN architecture called NeuralTalk to caption our images. As the models are trained on the Microsoft COCO dataset, which consists of photographs that differ significantly from the British Library's Flickr dataset, we expected the transfer of knowledge to be difficult. Indeed, the resulting captions of some ~1,000 test images show that weakness, with the exclusively black-and-white nature of the British Library illustrations and the more abstract character of some of them being major roadblocks to the quality of the captioning. Many of the captions would comment on the “black and white” quality of the photo or “hallucinate” objects that did not exist in the images. However, there were some promising results that came back from the model. Below are some hand-picked examples. Note that these were generated with no other metadata; only the raw image was given.

From a rough manual pass, we estimate that around 1 in 4 captions are of useable quality: accurate, containing interesting and useful information that would aid search, discovery, cataloguing etc., with occasional gems (like the elephant caption!). More work will be directed at increasing this metric.

Web Interface

We have been working on building the web interface to expose this novel tag data to users around the world.

One thing that’s awesome about making the British Library dataset available via Flickr is that Flickr provides an amazing API for developers. The API exposes, among other functions, the site’s search logic via tags as well as free-text search using the image title and description, and the capability to sort by a number of factors including relevance and “interestingness”. We’ve been working with the Flickr API, along with AngularJS and Node.js, to build a wireframe site. You can check it out here.
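As a sketch of what such a query looks like: flickr.photos.search is a real Flickr REST method, but the API key and user id below are placeholders and the machine-tag namespace is an illustrative assumption, not a confirmed value. The request URL can be assembled with the standard library alone.

```python
from urllib.parse import urlencode

# Everything marked as a placeholder must be replaced before real use.
params = {
    "method": "flickr.photos.search",
    "api_key": "YOUR_API_KEY",            # placeholder
    "user_id": "PHOTOSTREAM_OWNER_ID",    # placeholder for the album owner's NSID
    "tags": "sherlocknet:category=maps",  # hypothetical namespaced machine tag
    "sort": "relevance",
    "format": "json",
    "nojsoncallback": 1,
}
url = "https://api.flickr.com/services/rest/?" + urlencode(params)
print(url)  # fetching this URL (e.g. with urllib.request) returns JSON results
```

Namespacing the machine tags (here, hypothetically, `sherlocknet:`) is what lets a query distinguish SherlockNet-generated tags from the user-contributed tags already on each image.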

If you look at the demo or the British Library's Flickr album, you’ll see that each image has a relatively sparse set of tags to query from. Thus, our next steps will be adding our own tags and captions to each image on Flickr. We will prepend these with a custom namespace to distinguish them from existing user-contributed and machine tags, and utilise them in queries to find better results.

Finally, we are interested in what users will use the site for. For example, we could track users’ queries and which images they click on or save; these images are presumably more relevant to those queries, so we could rank them higher in future searches. We also want to track general analytics, such as the most popular queries over time. Incorporating user analytics will be the final step in building the web interface.

We welcome any feedback and questions you may have! Contact us at teamsherlocknet@gmail.com

References

[1] Johnson J, Ballan L, Fei-Fei L. Love Thy Neighbors: Image Annotation by Exploiting Image Metadata. arXiv (2016)