Google has launched a new spatial audio SDK called Resonance Audio. It's based on technology from Google's VR Audio SDK, and it works at scale across mobile and desktop platforms for AR, VR, games and 360 video.
The SDK uses optimised digital signal processing algorithms based on higher order ambisonics to spatialise hundreds of simultaneous 3D sound sources, without compromising audio quality, even on mobile. A new feature in Unity enables precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback.
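The ambisonic encoding at the heart of this approach can be sketched in a few lines. Resonance Audio works with the AmbiX convention (ACN channel ordering, SN3D normalisation); the first-order encoder below is an illustrative sketch of that idea, not the SDK's API, and the function name is an assumption. The SDK itself supports higher orders, which add more channels and sharper spatial resolution:

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode a mono sample into first-order ambisonics (AmbiX: ACN
    ordering, SN3D normalisation).

    azimuth: radians, counter-clockwise from straight ahead.
    elevation: radians, up from the horizontal plane.
    Returns channels in ACN order: [W, Y, Z, X].
    """
    w = sample                                            # omnidirectional
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    return [w, y, z, x]
```

Because every source is mixed into the same small set of ambisonic channels, hundreds of sources can share one spatialisation and binaural-rendering pass, which is what keeps the per-source CPU cost low.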
Multiplatform SDKs
Resonance Audio comes with cross-platform SDKs for the most popular game engines, audio engines, and digital audio workstations (DAWs), streamlining workflows so that it integrates seamlessly with audio middleware and sound design tools.
The SDKs run on Android, iOS, Windows, macOS and Linux, with integrations for Unity, Unreal Engine, FMOD, Wwise and DAWs. There are native APIs for C/C++, Java, Objective-C and the web.
This multi-platform support lets developers implement a sound design once and deploy their project with consistent-sounding results across the top mobile and desktop platforms.
Beyond Time And Space
The SDK lets developers control the direction in which acoustic waves propagate from sound sources. The example cited in a blog post by Product Manager Eric Mauskopf is a guitar player: standing behind the player, the guitar can sound quieter than when standing in front, and it can sound louder when you face it than when your back is turned.
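The directivity behaviour described above comes down to a small amount of maths. Resonance Audio parameterises directivity with an alpha (pattern shape) and a sharpness (pattern order); the weighting below is a sketch of that cardioid-family model, and the function name is an illustrative assumption:

```python
import math

def directivity_gain(alpha, sharpness, theta):
    """Cardioid-family directivity weighting.

    alpha: 0.0 = omnidirectional, 0.5 = cardioid, 1.0 = figure-of-eight.
    sharpness: >= 1; higher values narrow the pattern.
    theta: angle (radians) between the source's forward axis and the
    direction to the listener.
    """
    return abs((1.0 - alpha) + alpha * math.cos(theta)) ** sharpness

# A cardioid guitar source: full level in front, silent directly behind.
front = directivity_gain(0.5, 1.0, 0.0)
behind = directivity_gain(0.5, 1.0, math.pi)
```

With alpha at 0.5 and sharpness at 1.0 this gives the guitar-player behaviour from the example: full gain in front of the source, falling smoothly to zero directly behind it.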
Another SDK feature is the automatic rendering of near-field effects when sound sources come close to the listener's head, giving an accurate perception of distance even for sources right at the ear. The SDK also supports sound source spread: by specifying the width of a source, sound can be simulated as anything from a tiny point in space up to a wall of sound.
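Distance handling is simpler to illustrate. The sketch below is a generic inverse-distance rolloff with a clamped near-field region, a common simplification rather than Resonance Audio's exact near-field model; the function name and default distances are assumptions:

```python
def distance_gain(distance, min_distance=1.0, max_distance=50.0):
    """Illustrative inverse-distance rolloff with a clamped near field.

    Inside min_distance the gain is held constant rather than growing
    without bound as the source touches the ear; beyond max_distance it
    stops falling. This is a common simplification, not Resonance
    Audio's exact model.
    """
    d = max(min_distance, min(distance, max_distance))
    return min_distance / d
```

The clamp at `min_distance` is the key design choice: a pure 1/d law would blow up as a source reaches the listener's head, which is exactly the region where dedicated near-field processing takes over.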
Google has also released an ambisonic recording tool that spatially captures sound design directly within Unity and saves it to a file, which can then be used anywhere ambisonic soundfield playback is supported, from game engines to YouTube videos.
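To give a feel for what ambisonic soundfield playback involves, here is a minimal sketch that decodes a horizontal first-order AmbiX frame to stereo using a pair of virtual cardioid microphones aimed left and right. Real decoders, including Resonance Audio's binaural renderer, are considerably more sophisticated; the names and microphone angle here are illustrative:

```python
import math

def decode_to_stereo(w, y, x, mic_angle=math.pi / 2):
    """Decode one horizontal first-order ambisonic frame (AmbiX W/Y/X)
    to stereo with two virtual cardioid microphones.

    A virtual cardioid aimed at azimuth `az` mixes the omni channel W
    with the figure-of-eight channels X (front-back) and Y (left-right).
    """
    def cardioid(az):
        return 0.5 * w + 0.5 * (x * math.cos(az) + y * math.sin(az))

    left = cardioid(mic_angle)    # aimed 90 degrees to the left
    right = cardioid(-mic_angle)  # aimed 90 degrees to the right
    return left, right
```

A source encoded hard left (W = 1, Y = 1, X = 0 under SN3D) lands entirely in the left channel, which is the behaviour a soundfield player has to reproduce regardless of the engine or platform it runs on.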