Summary

Speech and non-speech audio metadata allow people to identify sounds and locate their sources within XR environments, even when those sources are outside the user's field of view.

How it solves user need

People who cannot use the audio track need to know when a sound occurs and the direction it is coming from, so they can orient their view toward its source.
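As a rough illustration of how such metadata could drive a visual indicator, the sketch below defines a hypothetical metadata shape (not a standardized WebXR API) and a small function that classifies a sound's direction relative to the user's facing vector, so a UI could display "left", "right", "ahead", or "behind" for sounds outside the field of view. All names and the coordinate convention are assumptions for illustration.

```typescript
// Hypothetical metadata shape for a sounding object — an assumption,
// not a standardized XR or Web Audio structure.
interface SoundMetadata {
  label: string;                      // human-readable description, e.g. "dog barking"
  kind: "speech" | "non-speech";
  position: { x: number; z: number }; // source location on the horizontal plane
}

// Classify where a sound lies relative to the user's facing direction,
// so a visual indicator can be placed even when the source is out of view.
function relativeDirection(
  forward: { x: number; z: number }, // user's facing vector (need not be normalized)
  user: { x: number; z: number },    // user's position
  sound: SoundMetadata
): "ahead" | "behind" | "left" | "right" {
  const dx = sound.position.x - user.x;
  const dz = sound.position.z - user.z;
  // Signed angle between the forward vector and the direction to the sound.
  const dot = forward.x * dx + forward.z * dz;
  const cross = forward.x * dz - forward.z * dx; // positive = left, by our convention
  const angle = Math.atan2(cross, dot);
  const abs = Math.abs(angle);
  if (abs <= Math.PI / 4) return "ahead";
  if (abs >= (3 * Math.PI) / 4) return "behind";
  return angle > 0 ? "left" : "right";
}

// Example: user at the origin facing +z; a bark comes from the right.
const bark: SoundMetadata = {
  label: "dog barking",
  kind: "non-speech",
  position: { x: 5, z: 0 },
};
console.log(relativeDirection({ x: 0, z: 1 }, { x: 0, z: 0 }, bark)); // "right"
```

In a real system the classification would feed a caption or directional arrow overlay, updating as the user's pose changes.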