03 October 2016
The future of radio: no. 3 - Paul Bennun
The British Library is working with the UK radio industry to develop a national radio archive and has invited experts from across the radio and music industry to consider what the future of radio might look like.
Paul Bennun, former CCO and non-executive director of UK content agency Somethin’ Else, considers how technology will affect the future of radio and the implications of this for the British Library's radio archive.
The future of radio from a technology perspective
Since its inception, radio has always been the testing ground for human endeavour in communications technology. If we’re asking ‘where next?’ for any medium, one could ask ‘where next?’ for technology – but radio will give you the answer first.
Maybe this is because at its root, radio is just the unalloyed human voice – the defining human technology – and it’s always been easier to apply ‘the new shiny’ to radio first.
The initial task was engineering: recording, broadcasting, amplification. With the basics down, radio tackled culture (and dogma): production, commerce and licensing – even the mechanics of celebrity. All these served radio first.
With transistors came portability. With micron-scale transistors inside PCs came digital production and delivery – more people could make more media, faster. This made a huge difference to radio producers (who previously chopped up tape) but made only a minor qualitative difference to the listener. Again, radio led the way when nanoscale transistors in consumer computer processors and widespread packet-switched networks (i.e. the internet) came along.
The second-order quantitative and qualitative effects of communications technology mean that hardware, content, network and author form a complex system, not a stack, and so the voices it carries are both immediate and innumerable. This is where we are today: any audio you want, anywhere you are. You’re the broadcaster, too, if you want to be.
So what’s next for radio? I’d point at two interesting technologies: firstly, the set of technologies we call ‘artificial intelligence’ (AI) – systems that learn; secondly, audio as an unobtrusive human-system interface (don’t laugh at Apple’s AirPods).
The first-order effects give us bots and basic voice control of devices, like Siri on the iPhone. But combined, these technologies can bring us autonomous cars that come when you whistle, systems that make sophisticated cultural choices about content – choices previously restricted to humans – and systems that seamlessly merge human intention and desire with changes in a database. How far this can take us is very hard to predict.
Imagine a pop-up audio service programmed by bots that ensures everyone in an ephemeral social group (e.g. ‘trip to the pub’ or ‘Claire’s Hen Night’) can share a hilarious ‘radio show’ in their respective Ubers, where a personality-rich robot slags off their selfies and makes rude comments about Claire’s ex. You didn’t ask for it; it’s just there for a while and then it’s gone.
Your politics and cultural preferences are just another pattern to be interpreted. If we connect them to a survey of current events, a news-and-music service presented by a bot, keeping you in the bubble of what you want to hear, is easily imagined.
We can be sure that in the domain of time-based media these things will happen in ‘radio’ first. All this poses the most enormous challenges for a library of record that wishes to capture public culture in a way that is useful for future eyes and ears.
If, ten years ago, ‘radio’ was a simple concept, that is no longer the case. Linear, audio-only broadcast has given way to podcasts, audio-on-demand, extended universes of supporting content surrounding the audio ‘programme’ and cloud music services – all make a claim to being ‘radio’ to some degree.
This volume and variety of content poses unique challenges in discovery, selection and capture, if a useful, usable record of radio’s contribution is to be made for future generations. In particular, it is currently impossible to capture every second of audio published. We’re limited by available resources – measured not in money but in storage density, network speeds and computing power (although this is in all likelihood a temporary problem, given the projected increase in computing horsepower).
We can most certainly put ourselves on the path to an archive that does not act under any such constraint. And if we take a limited sample of radio content, we can still create an archive that may have many of the benefits of a complete set (if we choose carefully, and we augment our sample with data about the audio we did not capture).
Let us start from the assumption that a complete archive of radio content is impossible: we therefore have to choose what to sample. We have to ask: ‘what, in the future, will we find interesting or important?’
Given we are not restricted to the capture of audio, and indeed want to capture audio in its full context, the question of what our sample should contain becomes interesting and challenging. The descriptive words used by its authors, the individuals featured on it and its categorisation by third-party publishers are just some of the useful data which can help us analyse what radio in the late 2010s can tell us about ourselves. In the future it will help tell us how we got to where we’re going.
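As a purely hypothetical illustration of what such a contextual record might hold, a minimal metadata schema could pair each audio item with the descriptive words of its authors, the people featured, and third-party categorisations – and keep that context even when the audio itself is not captured. All field names below are assumptions for illustration, not an actual British Library format.

```python
from dataclasses import dataclass, field

@dataclass
class RadioItem:
    """One published audio item plus the context we may want to archive."""
    title: str
    publisher: str
    published: str                # ISO 8601 date
    author_description: str       # the descriptive words used by its authors
    people_featured: list = field(default_factory=list)
    third_party_tags: list = field(default_factory=list)  # categorisation by third-party publishers
    audio_captured: bool = False  # metadata can survive even when the audio is not kept

item = RadioItem(
    title="Morning show, 3 Oct 2016",
    publisher="Example FM",
    published="2016-10-03",
    author_description="news, music and chat",
    people_featured=["Presenter A"],
    third_party_tags=["talk", "breakfast"],
)
```

The point of the sketch is the last field: a record can enter the archive as data about audio we did not capture, which is exactly the augmentation the paragraph above proposes.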
In fact, building on the theme of AI raised earlier, it may be that the choice of sample is not something best left entirely to humans. ‘What did we think was important in 2016?’ is an interesting question, and building a sample based on a human’s understanding of this question is important – even if it tells us more about the selectors than the creators – but AI could take us further.
Indicators of ‘importance’ and ‘novelty’ may be less useful than indicators of evolution – we can turn to good old physics to say, with certainty, that something is changing. Evolution is propelled ultimately by gradients between actors in an environment. Patterns, and changes in patterns, in the properties of our cultural output are most likely to reveal information of importance to someone accessing the archive in the future. Complex patterns are best unearthed by computers.
Neural networks, ‘deep learning’ and machine learning are all excellent at pattern recognition. Using this set of technologies to assist the decision making process may be crucial.
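A toy sketch of that idea: score each new item by how far its feature vector sits from the running average of everything seen before it – a crude proxy for ‘the pattern is changing’ – and flag the most novel items for capture. A real system would use learned embeddings from a neural network; the hand-made vectors and the scoring rule below are illustrative assumptions only.

```python
import math

def novelty_scores(vectors):
    """Score each feature vector by its Euclidean distance from the
    running mean of all vectors seen before it."""
    scores = []
    mean = None
    count = 0
    for vec in vectors:
        if mean is None:
            scores.append(0.0)           # nothing to compare the first item to
            mean = list(vec)
        else:
            scores.append(math.dist(vec, mean))
            # update the running mean to include this vector
            mean = [(m * count + v) / (count + 1) for m, v in zip(mean, vec)]
        count += 1
    return scores

# Three similar items, then one outlier: the outlier scores highest,
# so it would be prioritised for capture.
features = [[0.1, 0.2], [0.1, 0.1], [0.2, 0.2], [5.0, 5.0]]
scores = novelty_scores(features)
most_novel = scores.index(max(scores))  # index 3, the outlier
```

The design choice worth noting is that novelty is measured against what the archive has already seen, not against a fixed human taxonomy – which is the essay’s point about patterns rather than pre-judged ‘importance’.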
While not without its own potential issues, it may be that an automated daily (or hourly) census of available material will yield the sample of captured information of greatest utility to a future researcher. The surveys of the information space can themselves be of enormous value in the future, even if only a small sample of the content in that space is captured.
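One way to read that census-plus-sample idea in code: record metadata for every item seen in a census run, but capture audio for only a deterministic fraction of them, so the survey survives in full even where the audio does not. The function names and the hash-based sampling rule are assumptions for illustration.

```python
import hashlib

def should_capture_audio(item_id: str, rate: float = 0.1) -> bool:
    """Deterministic sampling: hash the item id into [0, 1) and keep
    roughly `rate` of items. Unlike random(), this is repeatable, so
    successive census runs agree on which items were sampled."""
    digest = hashlib.sha256(item_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < rate

def run_census(item_ids, rate=0.1):
    """Archive metadata for every item; flag a sampled subset for audio capture."""
    return {i: {"metadata_archived": True,
                "audio_captured": should_capture_audio(i, rate)}
            for i in item_ids}

census = run_census([f"programme-{n}" for n in range(1000)], rate=0.1)
captured = sum(1 for rec in census.values() if rec["audio_captured"])
# `captured` lands near 100 of 1000; every record keeps its metadata.
```

Hash-based sampling is chosen here only to make the census reproducible; a real selection policy would presumably layer human curation and the pattern-based scoring discussed above on top of any such baseline.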
The views and opinions in these blog posts are those of their authors and do not necessarily reflect the views of the British Library.
Listen to a special British Library podcast discussion of The Future of Radio