THE BRITISH LIBRARY

Science blog


29 August 2017

I4OC: The British Library and open data


In August the British Library joined the Initiative for Open Citations (I4OC) as a stakeholder. The I4OC’s aim of promoting the availability of structured, separable, open citation data fits perfectly with the Library's established strategy for open metadata, which has just marked its seventh anniversary.

In August 2010, responding to UK Government calls for increased access to public data to promote transparency, economic growth and research, the British Library launched the strategy by offering over 16 million CC0-licensed records from its catalogue and national bibliography datasets. This initiative aimed to remove constraints created by restrictive licensing and library-specific standards to enable wider community re-use. In doing so the Library aimed to unlock the value of the data while improving access to information and culture in line with its wider strategic objectives.
 
The initial release was followed in 2011 by the launch of the Library’s first Linked Open Data (LOD) bibliographic service. The Library believed Linked Open Data to be a logical evolutionary step for the established principle of freedom of access to information, offering trusted knowledge organisations a central role in the new information landscape. The development proved influential among the library community in moving the Linked Data debate from theory to practice.
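To give a concrete sense of how a Linked Open Data service is consumed, here is a minimal sketch of querying a SPARQL endpoint from Python. The endpoint URL and query are illustrative assumptions, not the Library's documented interface:

```python
# A minimal sketch of consuming a Linked Open Data service, assuming a
# SPARQL endpoint at this (illustrative) URL.
# Requires the SPARQLWrapper package: pip install sparqlwrapper
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://bnb.data.bl.uk/sparql")  # assumed endpoint URL
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?book ?title WHERE {
        ?book dct:title ?title .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Each binding pairs a resource URI with its title, ready to be linked
# against other datasets - the point of Linked Data.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["book"]["value"], "-", row["title"]["value"])
```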

Over 1,700 organisations in 123 countries now use the Library’s open metadata services, with many more taking single files. The value of the Library’s open data work was recognised by the British National Bibliography linked dataset receiving a five-star rating on the UK Government's Data.gov.uk site and certification from the Open Data Institute (ODI). In 2016 the Library launched the http://data.bl.uk/ platform, offering copies of a range of its datasets for research and creative purposes. In addition, the BL Labs initiative continues to explore new opportunities for public use of the Library’s digital collections and data in exciting and innovative ways. The British Library therefore remains committed to an open approach, to enable the widest possible re-use of its rich metadata and generate the best return on the investment in its creation.

I4OC users by country

 

As the example of the British Library’s open data work shows, opening up metadata facilitates access to information, creates efficiencies and allows others to enhance existing services and develop new ones. This is particularly important for researchers and others who do not work for organisations with subscriptions to commercial citation databases. The British Library believes that opening up metadata on research facilitates both improved research information management and original research, and therefore benefits all.

The I4OC’s recent call to arms for its stakeholders is therefore very much in tune with the British Library’s open data work in promoting the many benefits of freely accessible citation data for scholars, publishers and wider communities. Such benefits proved compelling enough for the I4OC to secure publisher agreement to make nearly half of indexed scholarly citation data openly accessible. This data is now being used in a range of new projects and services including OpenCitations and Wikidata. It is encouraging to see I4OC spreading the open data ideal so successfully, and it is to be hoped that it will also succeed in making open citations the default in future.

Correction: Image shows users of BL open data services by country, not I4OC

03 February 2017

HPC & Big Data


Matt and Philip attended the HPC & Big Data conference on Wednesday 1st February, an annual one-day conference on the uses of high-performance computing, especially for big data. “Big data” is used widely to mean very large collections of data in science, social science and business.

There were some very interesting presentations over the day. Anthony Lee from our friends the Turing Institute discussed the Institute’s plans for the future and the potential of big data in general. With ever more data being created in “big science” experiments and the world at large, the hard part of research has shifted from collecting the data to processing it: sheer volume now overwhelms processing capability.

A presentation from the Earlham Institute and Verne Global revealed that Iceland could become a centre for high-performance computing in the future, thanks to its combination of cheap, green electricity from hydroelectric and geothermal power, high-bandwidth data links to other continents, and a cool climate which reduces the need for active cooling of equipment. HPC worldwide now consumes more energy than the entire airline industry, and more than whole countries of the size and development level of Italy or Spain.

Dave Underwood of the Met Office described the Met Office’s acquisition of the largest high-performance computer in Europe. He also pointed out the extremely male-biased demographic of the event, something that both Matt and Philip had noticed (although we admit, one of our female team members could have gone instead of Philip).

Luciano Floridi of Oxford University discussed the ethical issues of big data and pointed out that as intangibles become a greater portion of companies’ value, so scandal becomes more damaging to them. Current controversies involving behaviour on the internet suggest that the moral principles of security, privacy and freedom of speech increasingly conflict with one another, leading to difficult questions of how to balance them.

JISC gave a presentation on their existing and planned shared HPC data centres, and invited representatives from our friends and neighbours at the Crick Institute and the Wellcome Trust’s Sanger Institute to speak about their IT plans. Alison Davis from Crick pointed out that an under-rated problem for academic IT departments is individual researchers’ desire to carry huge quantities of digital data with them when they move institutions, causing extra demand on storage and raising difficult issues of ownership.

Finally, Richard Self of the University of Derby gave an illuminating presentation on the potential pitfalls of “big data” in social science and business, such as the fact that the size of a sample does not guarantee that it is representative of the whole population, the probability of finding apparent correlations in a large sample that are created by chance and not causation, and the lack of guaranteed veracity. (For example, in one investigation 14% of geographical locations from mobile phone data were 65km or more out of place.)
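That pitfall about chance correlations is easy to demonstrate. The sketch below, which uses purely random invented data, generates 50 unrelated variables and counts how many pairs nonetheless correlate strongly enough to look like real effects:

```python
# A minimal sketch of one pitfall Self described: with enough variables,
# some pairs of pure noise will correlate "significantly" by chance alone.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 50))   # 100 observations of 50 unrelated variables

strong = 0
for i in range(50):
    for j in range(i + 1, 50):
        r = np.corrcoef(data[:, i], data[:, j])[0, 1]
        if abs(r) > 0.3:            # would look like a "real" effect at n=100
            strong += 1

print(f"{strong} of {50 * 49 // 2} random pairs show |r| > 0.3")
```

With 1,225 pairs to test, a handful of spurious "discoveries" are expected even though every variable is independent noise; scaling this to big-data-sized variable counts makes the problem far worse.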

Philip Eagle, Content Expert - STM

05 September 2016

Social Media Data: What’s the use?


Team ScienceBL is pleased to bring you #TheDataDebates - an exciting new partnership with the AHRC, the ESRC and the Alan Turing Institute. In our first event on 21st September we’re discussing social media. Join us!

Every day people around the world post a staggering 400 million tweets, upload 350 million photos to Facebook and view 4 billion videos on YouTube. Analysing this mass of data can help us understand how people think and act, but there are also many potential problems. Ahead of the event, we looked into a few interesting applications of social media data.

Politically correct? 

During the 2015 General Election, experts used a technique called sentiment analysis to examine Twitter users’ reactions to the televised leadership debates1. But is this type of analysis actually useful? Some think that tweets are spontaneous and might not represent the more calculated political decisions of voters.
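As a rough illustration of the technique, the sketch below scores two invented tweets with NLTK's off-the-shelf VADER sentiment lexicon; real studies of the debates would have used far larger datasets and more sophisticated models:

```python
# A minimal sketch of lexicon-based sentiment analysis, the kind of technique
# applied to debate tweets. The sample tweets are invented for illustration.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
scorer = SentimentIntensityAnalyzer()

tweets = [
    "Great answer on the economy tonight!",
    "That was an evasive, disappointing response.",
]
for tweet in tweets:
    scores = scorer.polarity_scores(tweet)
    # The compound score ranges from -1 (negative) to +1 (positive).
    print(f"{scores['compound']:+.2f}  {tweet}")
```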

On the other side of the pond, Obama’s election strategy in 2012 made use of social media data on an unprecedented scale2. A huge data analytics team looked at social media data for patterns in past voter characteristics and used this information to inform their marketing strategy - e.g. broadcasting TV adverts in specific slots targeted at swing voters and virtually scouring the social media networks of Obama supporters on the hunt for friends who could be persuaded to join the campaign as well. 

Image from Flickr

In this year's US election, both Hillary Clinton and Donald Trump are making the most of social media's huge reach to rally support. The Trump campaign has recently released the America First app which collects personal data and awards points for recruiting friends3. Meanwhile Democrat nominee Clinton is building on the work of Barack Obama's social media team and exploring platforms such as Pinterest and YouTube4. Only time will tell who the eventual winner will be.

Playing the market

You know how Amazon suggests items you might like based on the items you’ve browsed on their site? This is a common marketing technique that allows companies to re-advertise products to users who have shown some interest in the brand but might not have bought anything. Linking browsing history to social media comments has the potential to make this targeted marketing even more sophisticated4.

Credit where credit’s due?

Many ‘new generation’ loan companies don’t use traditional credit checks but instead gather other information on an individual - including social media data - and then decide whether to grant the loan5. Opinion is divided as to whether this new model is a good thing. On the one hand it allows people who might have been rejected by traditional checks to get credit. But critics say that people are being judged on data that they assume is private. And could this be a slippery slope to allowing other industries (e.g. insurance) to gather information in this way? Could this lead to discrimination?

Image from Flickr

What's the problem?

Despite all these applications there’s lots of discussion about the best way to analyse social media data. How can we control for biases and how do we make sure our samples are representative? There are also concerns about privacy and consent. Some social media data (like Twitter) is public and can be seen and used by anyone (subject to terms and conditions). But most Facebook data is only visible to people specified by the user. The problem is: do users always know what they are signing up for?

Image from Pixabay

Lots of big data companies are using anonymised data (where obvious identifiers like name and date of birth are removed) which can be distributed without the user's consent. But there may still be the potential for individuals to be re-identified - especially if multiple datasets are combined - and this is a major problem for many concerned with privacy.
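A toy example makes the re-identification risk concrete. In the sketch below (all records invented), an "anonymised" health table is joined to a public listing on shared quasi-identifiers, re-attaching names to sensitive attributes:

```python
# A minimal sketch of a linkage attack: joining an anonymised dataset to a
# public one on quasi-identifiers. All records here are invented.
import pandas as pd

anonymised = pd.DataFrame({
    "postcode": ["N1 9", "SW1A"], "birth_year": [1980, 1975],
    "sex": ["F", "M"], "condition": ["diabetes", "asthma"],
})
public = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"], "postcode": ["N1 9", "SW1A"],
    "birth_year": [1980, 1975], "sex": ["F", "M"],
})

# The quasi-identifiers uniquely match, re-attaching names to conditions.
reidentified = anonymised.merge(public, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "condition"]])
```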

If you are an avid social media user, a big data specialist, a privacy advocate or are simply interested in finding out more, join us on 21st September to discuss further. Tickets are available here.

Katie Howe

15 March 2016

Tunny and Colossus: Donald Michie and Bletchley Park


In honour of British Science Week Jonathan Pledge explores the work of Donald Michie, a code-breaker at Bletchley Park from 1942 to 1945. The Donald Michie papers are held at the British Library.

Donald Michie (1923-2007) was a scientist who made key contributions in the fields of cryptography, mammalian genetics and artificial intelligence (AI).

Copy of a photograph of Donald Michie taken while he was at Bletchley Park (Add MS 89072/1/5). Copyright the estate of Donald Michie/Crown Copyright.

In 1942, Michie began working at Bletchley Park in Buckinghamshire as a code-breaker under Max H. A. Newman. His role was to decrypt the German Lorenz teleprinter cypher - codenamed ‘Tunny’.

The Tunny machine was attached to a teleprinter and encoded messages via a system of two sets of five rotating wheels, named ‘psi’ and ‘chi’ by the code-breakers. The starting position of the wheels, known as a wheel pattern, was decided by a predetermined code before the operator entered the message. The encryption worked by generating an additional letter, derived from adding each letter generated by the psi and chi wheels to each letter of the unencrypted message entered by the operator. The addition worked by a simple rule, represented here as dots and crosses:

• + • = •

x + x = •

• + x = x

x + • = x

Using these rules, M in the teleprinter alphabet, represented as • • x x x, added to N, • • x x •, gives • • • • x, the letter T.
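In modern terms the dot/cross rule is exclusive-or (XOR): dot behaves as 0 and cross as 1, applied independently to each of the five positions. A minimal sketch of the addition, using the example above:

```python
# A minimal sketch of Tunny's addition rule: dot = 0, cross = 1, and each
# of the five positions follows exclusive-or (XOR).
def add_letters(a: str, b: str) -> str:
    """Add two five-symbol dot/cross patterns position by position."""
    return "".join("x" if p != q else "•" for p, q in zip(a, b))

M = "••xxx"
N = "••xx•"
print(add_letters(M, N))  # prints ••••x, the pattern for T
```

Because XOR is self-inverse, adding the same key letter a second time recovers the original; this is why knowing the wheel settings allowed the code-breakers to strip the key back off an intercepted message.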

Detail of the Lorenz machine showing the encoding wheels. Creative Commons Licence.

In order for messages to be decrypted it was necessary to know the position of the encoding wheels before the message was sent. These positions were initially established by the use of ‘depths’. A depth occurred when the Tunny operator mistakenly repeated the same message with subtle textual differences without first resetting the encoding wheels.

A depth was first intercepted on 30 August 1941 and the encrypted text was deciphered by John Tiltman. From this the working details of Tunny were established by the mathematician William Tutte without his ever having seen the machine itself - an astonishing feat. Using Tutte’s deduction the mathematician Alan Turing came up with a system for devising the wheel patterns, known as ‘Turingery’.

Turing, known today for his role in breaking the German navy’s ‘Enigma’ code, was at the time best known for his 1936 paper ‘On Computable Numbers’, in which he had theorised about a ‘Universal Turing Machine’ which today we would recognise as a computer. Turing’s ideas on ‘intelligent machines’, along with his friendship, were to have a lasting effect on Michie and his future career in AI and robotics.

Between July and October 1942, all German Tunny messages were decrypted by hand. However, changes to the way the cypher was generated meant that finding the wheel settings by hand was no longer feasible. It was again William Tutte who came up with a statistical method for finding the wheel settings, and it was the mathematician Max Newman who suggested using a machine for processing the data.

Colossus computer [c 1944]. By the end of the War there were ten such machines at Bletchley. Crown Copyright.

Initially an electronic counter dubbed ‘Heath Robinson’ was used for data processing. However, it was not until the engineer Thomas Flowers designed and built Colossus, the world’s first large-scale electronic computer, that wheel patterns - and therefore the messages - could be decrypted at speed. Michie too, along with Jack Good, played a part, discovering a way of using Colossus to dramatically reduce the processing time for ciphered texts.

The decrypting of Tunny messages was critical in providing the Allies with information on high-level German military planning, in particular for the Battle of Kursk in 1943 and the preparations surrounding the D-Day invasion of 1944.

One of the great ironies is that much of this pioneering and critical work remained a state secret until 1996. It was only through Donald Michie’s tireless campaigning that the General Report on Tunny, written in 1945 by Michie, Jack Good and Geoffrey Timms, was finally declassified by the British Government, providing proof of the code-breakers’ collective achievements during the War.

Pages from Donald Michie’s copy of the General Report on Tunny. (Add MS 89072/1/6). Crown Copyright.

 Donald Michie at the British Library

The Donald Michie Papers at the British Library comprise three separate tranches of material gifted to the Library in 2004 and 2007. They consist of correspondence, notes, notebooks, offprints and photographs and are available to researchers through the British Library’s Explore Archives and Manuscripts catalogue at Add MS 88958, Add MS 88975 and Add MS 89072.

 

Jonathan Pledge: Curator of Contemporary Archives and Manuscripts, Public and Political Life

Read more about ciphers in the British Library's collections on Untold Lives

13 October 2015

‘Your Puzzle-Mate’: Ada Lovelace and Charles Babbage


On Ada Lovelace Day, Alexandra Ault explores the British Library's collection of correspondence between Ada Lovelace and Charles Babbage.                    

Did you know that the British Library holds an incredible set of letters from Ada Lovelace to Charles Babbage? Dating from 1836 to 1851, the letters from the mathematician and only daughter of Lord Byron to the inventor of the first successful automatic calculator record a working relationship and friendship between two great minds. Despite Lovelace’s young age when she began writing to Babbage, who was twenty-four years her senior, her letters reveal not only an incredible mathematical talent but also an organised sensibility.

Letter from Ada Lovelace to Charles Babbage, 10 July 1843, Add MS 37192.


Add MS 37192 contains 29 letters from Lovelace to Babbage which sit with letters to Babbage from other great Victorian inventors, writers and politicians including Charles Dickens, Sir Robert Peel, Michael Faraday and Isambard Kingdom Brunel.

Looking at excerpts from letters written by Lovelace to Babbage in 1843, it is possible to see not just collaboration between the two mathematicians, but a friendship whereby Lovelace chastised and encouraged Babbage:

On 19 (?) July 1843 Lovelace wrote:

“My dear Babbage, It is quite evident to me that you have been looking over the superseded sheet 4, instead of the corrected one.”

And on 13 July 1843:

“Will you come at mine on Saturday morning and stay as long as we find requisite. I name so early an hour because we shall have much to do I think. And it certainly must not be later than ten o’clock”. 

Ada Lovelace by William Henry Mote, after Alfred Edward Chalon, published 1839. National Portrait Gallery, NPG D5123, CC BY.

 In her letters, Lovelace displays both a keen sense of humour and dedication to mathematical investigation. On 10 July 1843 she wrote:

“My Dear Babbage, I am working very hard for you; like the Devil in fact (which perhaps I am). I think you will be pleased. I have made what appears to me some very important exclusions and improvements”.

21 July (?) 1843:

“My Dear Babbage, I am in much dismay at having got into so amazing a quagmire and botheration with these numbers”.

In this letter Lovelace signs herself off as “Your puzzle-mate” showing both the professional and friendly nature of their relationship.

The British Library has featured one of the Lovelace letters on its Treasures page: http://www.bl.uk/highlights/articles/science

Alexandra Ault, Curator, Modern Archives and Manuscripts 1601-1850.

07 October 2015

The Ugly Truth


On 28th September the British Library hosted the 10th Annual Sense About Science lecture, entitled "The Ugly Truth" and delivered by Sense About Science director Tracey Brown. The British Library's mission is to make our intellectual heritage accessible to everyone for research, inspiration and enjoyment. This key purpose aligns with that of Sense About Science, which makes research accessible by equipping people to make sense of science and evidence. In this guest post, Voice of Young Science member Sheena Cowell summarises the lecture highlights.

Towards the end of my PhD I was often asked by interested friends and family “So, what have you found out then?” I knew this question was innocent enough, but in the complexity of my project and the stress of trying to write up, I would often revert to something along the lines of “we had this nice idea, but in the end it didn’t quite work”. This was not the truth. I was distilling my results, removing the nuances of my research and giving an answer that was simpler, easier. Science rarely has definitive answers. Scientists spend their days finding evidence to support or disprove arguments and hypotheses within their fields. Uncertainty is accepted. Probabilities and error bars are scrutinised alongside results. But when it comes to explaining a body of scientific work to a wider audience, this uncertainty is often left out. Evidence is simplified. Results and outcomes are over- or understated in order to get a point across. But what harm does this do?

On Monday 28th September at the Sense About Science Annual Lecture, Tracey Brown gave a talk exploring just that: the difficulty of telling the whole ‘truth’, or challenging ‘truths’, in the public arena. As scientists, or even as advocates of evidence, we can sometimes alter the evidential ‘truth’ in favour of a simplified explanation or an uncomplicated argument. However, in her talk Tracey argued that evidence should be presented warts and all, including the uncertainty and unknowns that it can expose. “The Ugly Truth” explored the concept that the oversimplification of evidence, and the lack of critical scrutiny of established claims, can be detrimental to public accountability and to the scientific community itself.

At the beginning of her lecture, Tracey Brown quoted Henning Mankell’s book ‘The White Lioness’:

“The truth is complicated, multi-faceted, contradictory. On the other hand, lies are black and white.”

This quote, to me, sums up the messy nature of scientific ‘truths’. We do not live in a world of black and white, but one of endless shades of grey, where what we know as ‘true’ is constantly changing as science advances and technology evolves.

Tracey explored the many reasons that evidence can be overstated or uncertainties ignored. Often the truth can be difficult. If we look for instance at clinics offering miracle cures for cancer, as Tracey did in her talk, we can see that the evidence for these ‘cures’ may be limited. In reality, however, it is hard to question these ‘cures’ and destroy the hope they can provide. Other times it may appear in the public’s interest to simplify the evidence to make a point. This is often the case for public health campaigns - who cares about the evidence if the outcome is positive? Take the ‘5 A DAY’ campaign: the numbers touted vary from country to country, but we can all agree that eating more fruit and vegetables is a good thing. And finally, it may be that a ‘fact’ or claim is so well established we don’t even think to question it, or put it under critical scrutiny.

Tracey Brown. Photo: Richard Lakos

While these reasons can be compelling, they can become problematic. If uncertainty and accountability for evidence are not present at every level of public life, how can we introduce them in more nuanced scientific areas? By denying people the opportunity to understand scientific uncertainty, we can become trapped by our oversimplifications. We are left with the fear that uncertainty will be misused by critics and we begin to dread the question “But, are you sure?”

In the end Tracey’s argument comes down to mutual trust. The public needs to be trusted with uncertainty. As a scientific community we must be trustworthy and present the uncertainty that accompanies our work. We need to give the public the tools to ask for and demand evidence and accountability. There will be missteps and misunderstandings along the way. Opinion and motive will always find a way to clash with evidence. But by promoting the true nature of scientific evidence, people will be free to make fully informed decisions in a world where evidence and accountability cannot be ignored.

To listen to Tracey Brown’s talk in full (without any oversimplifications) visit the Guardian website or download the podcast here. To learn more about Sense About Science, or get involved in their Ask for Evidence campaign visit http://www.senseaboutscience.org/.

Sheena Cowell recently completed her PhD at Imperial College London in Medicinal Chemistry and Cancer Imaging. Sheena is a member of Voice of Young Science, a programme to encourage early career researchers to play an active role in public debates about science. Sense About Science is a charity that works with scientists and members of the public to change public debates and to equip people to make sense of science and evidence.

24 September 2015

A novel use of PhD data: Investigating the state of the Dementia Workforce


Katie Howe explains how data from the British Library’s electronic thesis service EThOS has been used in a report into the state of dementia research in the UK.

EThOS is the British Library’s electronic theses service. By working with universities across the UK, EThOS is able to provide records for over 400,000 UK PhD theses going back as far as the 19th century. For 165,000 of these PhD theses it is also possible to access a full-text version of the document. A key feature of EThOS is that you don’t have to come to the BL to use it - in fact it is accessible from anywhere in the world.

In previous blog posts we have described how EThOS could be a valuable resource for scientific researchers (see here and here). However, as an extensive source of information on PhDs undertaken in the UK, EThOS data can also be used to look at trends in PhD research over time. A recent report by the Alzheimer’s Society illustrates this approach.

The Alzheimer’s Society appointed RAND Europe to produce a report on the state of dementia research in the UK. RAND wished to investigate the dementia workforce pipeline - how many researchers are working on dementia and how this is changing over time. As EThOS contains records for a high (and growing) proportion of recent PhD theses, RAND contacted the EThOS team to ask for their help with this investigation. EThOS Metadata Manager Heather Rosie and her colleagues undertook bespoke analysis for RAND and produced a list of theses awarded from 1970 onwards. The graph above shows the results. Dementia-related PhD research has been steadily increasing over the last 30 years. However, cancer-related PhDs have skyrocketed over the same time frame. Five times more PhD researchers now choose to work on cancer than dementia.
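The underlying analysis is conceptually simple. Here is a minimal sketch of the kind of trend counting described, assuming a hypothetical CSV export of EThOS records; the file name and column names are invented for illustration:

```python
# A minimal sketch of counting topic-related theses per year, assuming a
# hypothetical CSV export of EThOS records with "year" and "title" columns.
import pandas as pd

records = pd.read_csv("ethos_records.csv")  # hypothetical export file

# Flag theses whose titles mention the topic (a real study would also
# search abstracts and subject keywords).
dementia = records[records["title"].str.contains("dementia|alzheimer",
                                                 case=False, na=False)]

per_year = dementia.groupby("year").size()
print(per_year.tail(10))  # theses per year over the last decade in the file
```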

RAND were also interested in what proportion of PhD students studying dementia stay in the field. To investigate this they traced about 1,500 dementia PhD researchers to find out about their careers since finishing their PhDs. The results show that of those who do complete a PhD in dementia, retention in the field is poor, with 70% leaving the field within four years. Only 21% are still researching dementia. (The results are summarised in the infographic opposite; a full version can be seen here.)

The researchers gave a number of reasons for leaving the field of dementia but amongst the most common was a concern over the increasing competition for senior faculty positions. This is not a problem unique to dementia research but spans all of academia. This is a familiar issue for us in team ScienceBL and a previous series of blog posts outlines some alternative career options for those undertaking biomedical PhDs (here and here).

As well as being a great source of detailed information for scientific researchers, PhD theses accessed through EThOS can be used to find out about individual researchers or to help students structure their own PhD thesis. This report shows another novel use of PhD data enabled by the size and national scope of the EThOS resource. The full report can be seen here.

Katie Howe

06 February 2015

DataCite Case Study: ForestPlots.net at the University of Leeds


In June last year, we held a DataCite workshop hosted by the University of Glasgow. We've now turned our speaker's use of Digital Object Identifiers (DOIs) for rainforest data into a video and printed case study.

You can still find a short summary of that event here. Our thanks go to Gabriela Lopez-Gonzalez for taking the time to come and film with us.

 

We hope that this case study will help institutions promote the idea of data citation and use of DOIs for data to their researchers, and that this in turn will encourage more submission of data to institutional repositories.

 

A DataCite DOI is not just for data

During January we had also been trying to spread the word that DOIs from DataCite aren't necessarily just for data. We've been working with the British Library's EThOS service to look at how UK institutions might give DOIs to their electronic theses and dissertations.

There was an initial workshop to divine the issues in November 2014, and on 16th January we held a bigger workshop, bringing more institutions together to look at how we might start to establish a common way of identifying e-theses in the UK.

The technical step of assigning a DOI to a thesis is relatively straightforward. Once an institution is working with DataCite (or CrossRef), they can use their established systems to assign a DOI to a thesis. But the policies surrounding the issue and management of this process are more complex. We're hoping that these workshops have helped everyone to pull in the same direction and collaborate on answers to common questions.
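For a flavour of that technical step, here is a minimal sketch against DataCite's present-day REST API (institutions at the time of these workshops used the older MDS interface; the repository ID, credentials, DOI and metadata below are all placeholders):

```python
# A minimal sketch of minting a thesis DOI via the DataCite REST API.
# All identifiers and credentials are placeholders; real deposits need a
# DataCite account and an agreed DOI prefix.
import requests

payload = {
    "data": {
        "type": "dois",
        "attributes": {
            "doi": "10.5072/example-thesis-001",  # 10.5072 is a test prefix
            "titles": [{"title": "An Example E-Thesis"}],
            "creators": [{"name": "Doe, Jane"}],
            "publisher": "Example University",
            "publicationYear": 2015,
            "types": {"resourceTypeGeneral": "Text"},
            "url": "https://repository.example.ac.uk/thesis/001",
            "event": "publish",                   # register and make findable
        },
    }
}

response = requests.post(
    "https://api.datacite.org/dois",
    json=payload,
    auth=("REPO.ID", "password"),                 # placeholder credentials
    headers={"Content-Type": "application/vnd.api+json"},
)
print(response.status_code)
if response.ok:
    print("Minted:", response.json()["data"]["id"])
```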

This work has given rise to a proposal to look at how to improve the connection between a thesis and the data it is built on. By triggering the consideration of sharing the data supporting a thesis, maybe we can "get 'em young" and introduce good data sharing practice as early in the research career as possible. Connecting the thesis and its data also increases the visibility of both, helping early career researchers to reap the benefits of their hard work sooner.

Watch this space to see what happens next!

 

12 December 2014

Wishing you a Merry Crystal-mas from DataCite UK


As 2014 draws to a close, it has been another busy year for us here at the Library running DataCite UK. Over the past 12 months the number of organisations that are now using DataCite DOIs in the UK has gone up to 26.

One highlight from earlier in the year was the minting of the three millionth DOI, which you can find here: http://doi.org/10.5517/CCPHZ37. This was minted as part of the work by the Cambridge Crystallographic Data Centre to assign DOIs to their crystallographic datasets. This has been a particularly nice milestone to have, as 2014 has been the International Year of Crystallography.

In this year of crystallography, the CCDC are by no means the only ones getting DOIs for crystallographic data. Both eCrystals (http://ecrystals.chem.soton.ac.uk/), based at Southampton, and the SPECTRa project at Imperial (https://spectradspace.lib.imperial.ac.uk:8443/handle/10042/13) are doing the same thing.

This work now means that there are DOIs available for the crystal structure of caffeine (http://doi.org/10.5517/CCNH4QZ), paracetamol (http://doi.org/10.5517/CC4C64T) and theobromine (http://doi.org/10.5517/CC4D14P), all things that you might want to (or might need to) partake of this Christmas.
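Resolving any of these is as simple as following the doi.org redirect, as this minimal sketch shows:

```python
# A minimal sketch of DOI resolution: doi.org redirects the persistent
# identifier to the dataset's current landing page.
import requests

resp = requests.get("https://doi.org/10.5517/CCNH4QZ", allow_redirects=True)
print(resp.url)  # the landing page for the caffeine crystal structure
```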

Theobromine is a key flavour compound in milk and dark chocolate, and the reason you can't feed chocolate to your pets: theobromine is particularly toxic to animals. Image from Flickr, CC-BY-NC-SA. https://www.flickr.com/photos/jhard/11399049754

 

 

29 August 2014

Seeing Is Believing: Picturing the Nation's Health


Our latest Beautiful Science video looks back at a fantastic evening in which we welcomed Professor David Spiegelhalter and Dame Sally Davies to the Library for a discussion with Michael Blastland about the way in which public health messages are communicated.

In our recent Beautiful Science exhibition, we brought together some classics of data visualisation in the field of public health, showing the impact that powerful images can have in transforming the way we think about our own health and that of our society. But is John Snow's map of cholera deaths, or Florence Nightingale's rose diagram of deaths in the Crimean War, really better than a table of numbers, like John Graunt’s Table of Casualties, based on his amalgamation of the data contained within the London Bills of Mortality? When it comes to our health, how and why do we make decisions to reform, or not reform, our unhealthy behaviours?

Discussing this important question are:

Sir David Spiegelhalter, Winton Professor for the Public Communication of Risk at Cambridge University

Dame Sally Davies, Chief Medical Officer for England

Michael Blastland, writer, broadcaster and author of The Tiger That Isn't

 

 

Johanna Kieniewicz