Here’s the latest blog post in the New Play Space series. If you’d like to stay updated on what’s next in play, sign up here.

When we listen, we tend to think about what we’re hearing, not how. Spatial audio, or the ability to place sounds within a 3D space, is changing that tune.

As a storytelling medium, spatial audio is equally powerful at replicating real-world scenarios and at adding texture to our world, without the need for visual stimulation. And as the latest generation of headphones brings these AR capabilities to the mass market, now is the time for creators, storytellers and communicators to explore the possibilities for their audiences. This is how spatial audio is changing the way we listen.

Spatialising Sound

Let’s begin with the fundamental part of spatial audio: the placement of sound within our environment. Spatialisation is the ability to play a sound positioned at a specific rotation, elevation and distance. This is what enables spatial audio to deliver immersive experiences: compared to conventional stereo, 3D sounds provide realistic cues that let listeners hear them as if they were part of their natural environment.

Spatialisation conveys a sense of direction and distance – the farther away something is, the quieter it should sound. Beatsy, an AR music visualiser, recently added spatial audio to allow its music experiences to live in, and interact with, specific locations. Imagine a hide-and-seek album release, with tracks pinned to secret locations that only reveal themselves once fans get close enough to find them.

Beatsy uses spatial audio to create location-specific music visualisations.
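Under the hood, a spatialiser needs surprisingly little to start with: a direction and a distance per sound. A toy sketch of the two cues mentioned above – inverse-distance attenuation and left/right positioning – with made-up function names (real engines handle elevation, reverb and much more):

```python
import math

def spatialise(azimuth_deg, distance_m, min_distance=1.0):
    """Toy spatialiser: inverse-distance attenuation plus constant-power pan.

    azimuth_deg: 0 = straight ahead, -90 = hard left, +90 = hard right.
    Returns (left_gain, right_gain) to apply to a mono source.
    """
    # The farther away a sound is, the quieter it should be: gain
    # halves every time the distance doubles (inverse-distance law).
    attenuation = min_distance / max(distance_m, min_distance)

    # Constant-power panning keeps perceived loudness steady as the
    # sound moves between the ears.
    pan = math.radians((azimuth_deg + 90) / 2)  # map [-90, 90] -> [0, 90] deg
    left = attenuation * math.cos(pan)
    right = attenuation * math.sin(pan)
    return left, right

# A sound straight ahead at 2 m: equal gains at half the reference level.
l, r = spatialise(0, 2.0)
```

Doubling the distance halves both gains; swinging the azimuth to -90° routes everything to the left ear.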


The placement of sound is not limited to your local area. Geo-spatial technologies are breaking ground for planet-scale layers of audio that add depth and texture to the world from a 360° perspective. Standing in the woods, for example, you can’t see the birds flying overhead, but you can hear them. While visual AR tends to create tunnel vision (you’re naturally constrained to the device’s field of view), audio-led experiences present a low-threshold way to engage audiences beyond what’s visible on screen.

The convergence of location-based tech and spatial audio offers new ways to navigate space and enables greater control over how we hear, through the active filtering of real-world soundscapes. A great example of this is Microsoft Soundscape, an audio map that uses 3D sound to create wayfinding tools for the visually impaired.


In the same way that your eyes are unique, so are your ears. Still, most headphones and speakers are built to a one-size-fits-all design. The rapid development of immersive technologies such as HRTFs (head-related transfer functions) and beamforming is fixing this by augmenting our hearing in personalised ways.

HRTFs are models of how the human head and ears interact with sound. When we hear a sound, we determine its direction and distance from the differences in timing and level between our two ears, and from the way the shape of our head and outer ears filters it. Sony has developed a program that analyses a photo of your ear to offer a personalised HRTF. At CES 2021, Sony also launched 360 Reality Audio, an HRTF-based audio format that puts the listener on a 3D soundstage, experiencing music as if it were performed just for them. Audio startup Syng is exploring this too, creating immersive speakers for our homes.
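In code, applying an HRTF boils down to convolving a mono signal with a pair of measured impulse responses (HRIRs), one per ear. A minimal sketch, with hand-made stand-in responses where a real system would use a measured (or personalised) set:

```python
import numpy as np

def apply_hrtf(mono, hrir_left, hrir_right):
    """Render a mono signal binaurally by convolving it with a pair of
    head-related impulse responses, one per ear."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Stand-in HRIRs (same length so the channels line up) for a source on
# the listener's right: the left ear hears it two samples later (the
# head is in the way) and at a lower level than the right ear.
hrir_right_ear = np.array([0.9, 0.1, 0.0, 0.0])
hrir_left_ear = np.array([0.0, 0.0, 0.5, 0.1])

mono = np.random.default_rng(0).standard_normal(1024)
stereo = apply_hrtf(mono, hrir_left_ear, hrir_right_ear)  # shape (2, 1027)
```

Those tiny interaural delay and level differences are exactly the cues a personalised HRTF captures; the filters just encode them per direction.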

Facebook's future AR glasses will use beamforming to isolate sounds in real-world environments.

Although a few years away from a consumer rollout, beamforming is another exciting area for spatial audio, one that will transform how we perceive virtual sound and objects in our world. Beamforming is used to understand, shape and isolate the right audio signals, making sure that what we see is what we hear. Imagine being in a noisy bar but, thanks to your AR headset, still clearly hearing the friend you’re talking to. This capability is foundational to head-mounted AR, and Facebook Reality Labs is leading the audio research as part of Project Aria, its prototype AR glasses.
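The core trick behind beamforming is easy to sketch: delay each microphone’s signal so that sound arriving from one chosen direction lines up, then sum. The steered direction adds coherently; everything else arrives misaligned and partially cancels. A toy delay-and-sum version with integer-sample delays (real arrays use fractional delays and many more mics):

```python
import numpy as np

def delay_and_sum(mic_signals, steering_delays):
    """Delay-and-sum beamformer: shift each mic by its steering delay
    (in samples) and average. Sound from the steered direction adds
    coherently; off-axis sounds arrive misaligned and partially cancel."""
    n = min(len(s) - d for s, d in zip(mic_signals, steering_delays))
    aligned = [s[d:d + n] for s, d in zip(mic_signals, steering_delays)]
    return np.mean(aligned, axis=0)

# Two mics: the same voice reaches mic 1 three samples after mic 0,
# so steering with delays (0, 3) realigns it exactly.
voice = np.random.default_rng(1).standard_normal(256)
mic0 = np.concatenate([voice, np.zeros(3)])
mic1 = np.concatenate([np.zeros(3), voice])
out = delay_and_sum([mic0, mic1], steering_delays=[0, 3])
```

Steer the delays towards your friend’s voice and the bar’s background chatter, arriving from other angles, is what gets smeared out.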


Music tends to make us move, and spatial audio is no different. The latest generation of headphones, such as Apple’s AirPods Pro and AirPods Max, have built-in gyroscopes and accelerometers to track the motion of the user’s head as well as the device. This allows for the continual remapping of sound fields, ensuring they stay anchored to the device even as the user moves.
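At its simplest, keeping a sound anchored in place while your head turns is a counter-rotation: whatever yaw the headphones’ sensors report gets subtracted from each source’s azimuth before spatialising. A hypothetical sketch of that remapping step:

```python
def world_anchored_azimuth(source_azimuth_deg, head_yaw_deg):
    """Counter-rotate a source by the tracked head yaw so it stays
    fixed in place while the listener turns.

    If you turn your head 30 degrees to the right, a sound that was
    straight ahead must now be rendered 30 degrees to your left.
    """
    relative = source_azimuth_deg - head_yaw_deg
    return (relative + 180) % 360 - 180  # wrap into [-180, 180)

# A sound straight ahead; the listener turns 30 degrees to the right:
az = world_anchored_azimuth(0, 30)  # rendered at -30 (to the left)
```

Run this remapping continuously against the gyroscope stream and the sound field holds still while the head moves through it.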

For immersive storytelling, this freedom of movement is essential. In combination with volumetric sounds, sound fields that know where your head is at all times allow us to explore our home environments in new ways, and broadcasters such as Netflix are already experimenting with this.

Netflix is testing Apple's spatial audio support for AirPods Pro and Max.


The rollout of new headphones and the adoption of progressive sound formats will bring a totally new way to experience audio: one that can deliver deeper immersion, enhance storytelling, and add new ways to orientate ourselves within XR experiences.

As the accuracy of geo-positioning increases, rich, planet-scale augmented audio experiences will become more viable, supported by headphones that understand their position and orientation in the world. The revolution will start in the living room with Apple’s new headphones and speakers, but this is only the sound of the beginning.

Header image © Apple

Phil Stuart

Phil is PRELOADED's founder and Executive Creative Director. He is passionate about the possibility space created by emerging and converging technologies, and inventing new forms of play with purpose.