Transforming the experience of sound and music

The O'Reilly Radar Podcast: Poppy Crum on sensory perception, algorithm design, and fundamental changes in music.

By Jenn Webb
September 30, 2015
Part of the soundboard (or sounding board) of a Vose & Sons upright piano that has been disassembled. (source: Ragesoss on Wikimedia Commons)

Subscribe to the O’Reilly Radar Podcast to track the technologies and people that will shape our world in the years to come.

In this week’s Radar Podcast, author and entrepreneur Alistair Croll, who also co-chairs our Strata + Hadoop World conference, talks music science with Poppy Crum, senior principal scientist at Dolby Laboratories and a consulting professor at Stanford.

Their wide-ranging discussion covers fundamental changes in the music industry, sensory perception and algorithm design, and what the future of music might look like.

Here are a few snippets from their conversation:

As we see transformations to the next stage of how we consume content, what's becoming very prevalent is more and more metadata: more and more information about the sounds, information about personalization. You aren't given the answer; you're given information and opportunities to have a closer tie to the artist's intent, because more information about the artist's intent can be captured. So when you actually experience the sound or the music, that information is there to dictate how it deals with your personal environment.
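
To give a flavor of what that kind of intent-carrying metadata could look like in practice, here is a minimal sketch. The field names, the `IntentMetadata` and `PlaybackEnvironment` structures, and the adaptation rule are hypothetical illustrations, not any real Dolby or streaming-service format.

```python
# Illustrative sketch: content metadata describing artist intent,
# applied against a listener's playback environment. The field names
# and adaptation rule here are hypothetical, not a real Dolby format.
from dataclasses import dataclass

@dataclass
class IntentMetadata:
    target_loudness_lufs: float   # loudness the mix was mastered for
    min_dynamic_range_db: float   # smallest dynamic range the artist accepts
    dialog_boost_allowed: bool    # whether dialog enhancement is permitted

@dataclass
class PlaybackEnvironment:
    device: str                   # e.g. "earbuds", "soundbar", "cinema"
    ambient_noise_db: float       # estimated background noise level

def choose_dynamic_range(meta: IntentMetadata, env: PlaybackEnvironment) -> float:
    """Pick a dynamic range that narrows in noisy environments
    but never drops below the artist's stated floor."""
    # Noisier environments call for a smaller dynamic range so quiet
    # passages stay audible; the artist's minimum is still respected.
    desired = 30.0 if env.ambient_noise_db < 40 else 15.0
    return max(desired, meta.min_dynamic_range_db)

meta = IntentMetadata(target_loudness_lufs=-24.0, min_dynamic_range_db=12.0,
                      dialog_boost_allowed=True)
env = PlaybackEnvironment(device="earbuds", ambient_noise_db=65.0)
print(choose_dynamic_range(meta, env))  # -> 15.0
```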

Today, Dolby Atmos and other technologies have transformed [how we experience sound in the cinema] quite substantially. If I'm a mixer, instead of mixing to, say, seven channels, I can now mix 128 sounds, and each one of those sounds has a data stream associated with it. That data stream carries information. It's not going to a particular set of speakers; it has x, y, z coordinates, it has information about the diffusivity of that sound. … Every speaker is treated independently, but the environment is also treated in a very specialized way. … It's the rendering algorithms: you can carry information not just about where, but about how loud, how wide, how diffuse, and it's going to figure out the right amalgamation of different speakers.
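
To make the object-based idea concrete, here is a minimal sketch of a renderer working from that kind of per-sound metadata: each object carries a position and a diffusivity value, and the renderer derives a gain for every speaker in whatever layout is present. The inverse-distance gain law and the `AudioObject` fields are simplifications for illustration, not the actual Dolby Atmos rendering algorithm.

```python
# Minimal sketch of object-based rendering: each sound is an "object"
# with position and diffusivity metadata, and a renderer decides how
# to spread it across whatever speakers are present. The gain law
# (inverse-distance panning flattened by diffusivity) is a
# simplification for illustration only.
import math
from dataclasses import dataclass

@dataclass
class AudioObject:
    position: tuple      # (x, y, z) in room coordinates, not a channel
    diffusivity: float   # 0.0 = point source, 1.0 = fully diffuse

def render_gains(obj: AudioObject, speakers: list) -> list:
    """Return one normalized gain per speaker for this object."""
    # Closer speakers get more of the signal; diffusivity flattens the
    # differences so the sound spreads across the whole layout.
    weights = []
    for spk in speakers:
        d = math.dist(obj.position, spk)
        w = 1.0 / (d + 1e-6)
        weights.append(w ** (1.0 - obj.diffusivity))
    total = sum(weights)
    return [w / total for w in weights]

# A four-speaker layout and a slightly diffuse object above the listener.
speakers = [(-1, 1, 0), (1, 1, 0), (-1, -1, 0), (1, -1, 0)]
obj = AudioObject(position=(0.5, 0.5, 1.0), diffusivity=0.3)
print([round(g, 3) for g in render_gains(obj, speakers)])
```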

The number of companies that use GSR, galvanic skin response, maybe some simple EEG metrics to try to drive your playlist and measure your heart rate and keep you in a certain state that you're trying to achieve: there are multiple people looking at that. The key problems, though, are, obviously, the baseline. There's a lot of other sensor integration you need to actually get to a useful state of that type of information. … It's just: are physiological states the right thing to track? Or do we get to a better place through a computational model that doesn't include anything about my sensory physiology?
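
The baseline problem Crum mentions is easy to illustrate: raw GSR or heart-rate values mean little on their own, so a system would first normalize against a per-listener resting baseline and then act on deviations. The window, thresholds, and playlist actions below are made-up assumptions for the sketch.

```python
# Sketch of the baseline problem: estimate a per-listener resting
# baseline, convert live readings to deviations from it, and only then
# decide how to steer the playlist. Thresholds are illustrative.
from statistics import mean, stdev

def z_scores(signal, baseline_window):
    """Normalize a physiological signal against a resting baseline."""
    mu = mean(baseline_window)
    sigma = stdev(baseline_window) or 1.0
    return [(x - mu) / sigma for x in signal]

def adjust_playlist(arousal_z, target=0.0, tolerance=0.5):
    """Nudge the playlist toward calmer or more energetic tracks."""
    if arousal_z > target + tolerance:
        return "queue_calmer_track"
    if arousal_z < target - tolerance:
        return "queue_more_energetic_track"
    return "keep_current_direction"

resting_gsr = [2.1, 2.0, 2.2, 2.1, 2.0]   # microsiemens at rest
live_gsr = [2.6, 2.8, 2.9]                # current readings
latest_z = z_scores(live_gsr, resting_gsr)[-1]
print(adjust_playlist(latest_z))          # -> queue_calmer_track
```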

One of the labs that my group works with at Dolby is involved in closed-loop feedback sensing, to think about a lot of different technological interfaces. Emotion sensing is definitely one of them. Attentional processing would be another. It's a matter of integration of a lot of these different sensors and how you put them together to get to something that's more useful than, say, tracking just my heart rate.
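
As a rough sketch of that sensor-integration point, the snippet below fuses several normalized signals (heart rate, GSR, a crude attention index) into a single score that a closed-loop system could act on. The fixed weights and the linear combination are placeholders; a real system would presumably use a learned or model-based fusion rather than hand-set coefficients.

```python
# Sketch of sensor integration: several noisy, already-normalized
# signals are combined into one state estimate before any feedback
# decision is made. The weights are placeholders, not a real model.
def fuse_state(heart_rate_z, gsr_z, attention_index, weights=(0.3, 0.3, 0.4)):
    """Combine normalized sensor readings into a single engagement score."""
    w_hr, w_gsr, w_att = weights
    return w_hr * heart_rate_z + w_gsr * gsr_z + w_att * attention_index

# Mildly elevated arousal, but attention is drifting: the fused score
# is what a closed-loop renderer or playlist would actually act on.
score = fuse_state(heart_rate_z=0.8, gsr_z=1.1, attention_index=-0.4)
print(round(score, 2))  # -> 0.41
```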
