HRTF (Head-Related Transfer Function)
AudioGraph
Emitter – a spatial audio source. Can be an input node or a submix node. The number of supported emitters depends on multiple factors. Properties: Orientation, Position, Velocity.
Listener – the spatial point of audio 'reception'. Only one listener is supported per AudioGraph. Properties: Orientation, Position, Velocity.
AudioGraph
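As a minimal sketch of the listener side (assuming an `AudioDeviceOutputNode` named `deviceOutput` whose graph contains emitter-backed input nodes; `AudioNodeListener` is the Windows.Media.Audio type, and the starting position/orientation shown are illustrative):

```csharp
using System.Numerics;
using Windows.Media.Audio;

// The single listener lives on the device output node
// (only one listener per AudioGraph, as noted above).
AudioNodeListener listener = deviceOutput.Listener;

// Place the listener at the origin of the right-hand coordinate system,
// facing down -Z, the "front" direction the emitter snippets below use.
listener.Position = new Vector3(0f, 0f, 0f);
listener.Orientation = Quaternion.Identity;
```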
AudioGraphSettings settings = new AudioGraphSettings(AudioRenderCategory.GameEffects);
settings.EncodingProperties = AudioEncodingProperties.CreatePcm(48000, 2, 16);
settings.EncodingProperties.Subtype = MediaEncodingSubtypes.Float;

// Create the new graph
CreateAudioGraphResult createResult = await AudioGraph.CreateAsync(settings);
if (createResult.Status == AudioGraphCreationStatus.Success)
    audioGraph = createResult.Graph;
else
    throw new Exception("Failed to successfully create AudioGraph");

AudioGraph
// Create the output node
CreateAudioDeviceOutputNodeResult outputNodeResult = await audioGraph.CreateDeviceOutputNodeAsync();
if (outputNodeResult.Status == AudioDeviceNodeCreationStatus.Success)
    outputNode = outputNodeResult.DeviceOutputNode;
else
    throw new Exception("Failed to create AudioGraph Output");

// Create the submix node for reverb
AudioSubmixNode submix = audioGraph.CreateSubmixNode();
submix.AddOutgoingConnection(outputNode);
submix.OutgoingGain = 0.125d;

// Add reverb to the submix
ReverbEffectDefinition reverb = ReverbEffectDefinitionFactory.GetReverbEffectDefinition(audioGraph, ReverbEffectDefinitions.LargeRoom);
submix.EffectDefinitions.Add(reverb);

AudioGraph
// Create the emitter
AudioNodeEmitter audioNodeEmitter = new AudioNodeEmitter(
    AudioNodeEmitterShape.CreateOmnidirectional(),
    AudioNodeEmitterDecayModels.CreateDefault(),
    AudioNodeEmitterSettings.None);

// X(-Left,+Right), Y(+Above,-Below), Z(-Front,+Back)
// in a right-hand coordinate system
audioNodeEmitter.Position = new Vector3(-225f, -0.5f, -225f);

// Create the input node; all input nodes with emitters must be mono, 48 kHz
StorageFile file = await GetAudioFile();
CreateAudioFileInputNodeResult result = await audioGraph.CreateFileInputNodeAsync(file, audioNodeEmitter);
if (result.Status == AudioFileNodeCreationStatus.Success)
{
    AudioFileInputNode fileInputNode = result.FileInputNode;
    fileInputNode.AddOutgoingConnection(submix);
    fileInputNode.AddOutgoingConnection(outputNode);
}

AudioGraph
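Nothing is audible until the graph is started; a short follow-on sketch (assuming the `audioGraph`, `fileInputNode`, and `audioNodeEmitter` from the snippets above; `LoopCount = null` is how `AudioFileInputNode` expresses infinite looping):

```csharp
// Loop the source indefinitely (a null LoopCount means loop forever).
fileInputNode.LoopCount = null;

// Start the graph; Stop() pauses the whole graph.
audioGraph.Start();

// The emitter can be moved at any time while the graph runs; the HRTF
// processing picks up the new position on subsequent audio quanta.
audioNodeEmitter.Position = new Vector3(0f, -0.5f, -10f);
```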
Powered by: Jabra Intelligent Headset – advanced sensor pack, dynamic 3D audio, and an exciting new apps platform!
https://intelligentheadset.com/
http://aka.ms/spatialspheredemocode
Demo – Head Tracking
// Get the current listener orientation from the headphones.
// X(-Left,+Right), Y(+Above,-Below), Z(-Front,+Back)
// in a right-hand coordinate system
Vector3 listenerOrientation = await GetYawPitchRollAsync();

// Convert yaw, pitch, roll in degrees from the headset to radians
Vector3 radialListenerOrientation = new Vector3(
    (float)(Math.PI * listenerOrientation.X / 180f),
    (float)(Math.PI * listenerOrientation.Y / 180f),
    (float)(Math.PI * listenerOrientation.Z / 180f));

// Create a quaternion from the yaw, pitch, roll orientation (in radians).
// A quaternion is a four-dimensional vector (x, y, z, w) used to efficiently
// rotate an object about the (x, y, z) axis by the angle theta.
Quaternion q = Quaternion.Normalize(Quaternion.CreateFromYawPitchRoll(
    radialListenerOrientation.X,
    radialListenerOrientation.Y,
    radialListenerOrientation.Z));

_deviceOutput.Listener.Orientation = q;

Demo – Head Tracking
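The demo's update loop around that snippet is not shown on the slide; a hedged sketch of one way to drive it (assuming the conversion code above is wrapped in a hypothetical `UpdateListenerAsync` helper; `DispatcherTimer` is the XAML UI-thread timer):

```csharp
using System;
using Windows.UI.Xaml;

// Poll the headset's IMU several times per second and push the resulting
// orientation into the AudioGraph listener.
DispatcherTimer headTrackingTimer = new DispatcherTimer
{
    Interval = TimeSpan.FromMilliseconds(50) // ~20 Hz is ample for head tracking
};
headTrackingTimer.Tick += async (s, e) =>
{
    // Hypothetical helper: reads yaw/pitch/roll, sets Listener.Orientation
    // exactly as in the snippet above.
    await UpdateListenerAsync();
};
headTrackingTimer.Start();
```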
Applications of Spatial Audio
Cities Unlocked – The Guide Dogs Project
Lighting up the World through Sound

Mission: Can technology help us be more present, more human?

Strategy: Use spatial audio to present location information to users who have visual impairments, making it easy for the user to know where they are, what's around them, and how to get where they want to go. Increase confidence; increase enjoyment.

Key technologies:
- Spatial audio – convey distance & direction, minimize cognitive load
- Background audio
- Headset and remote – connected to the phone via BT and BLE
- Mapping services

Applications of Spatial Audio
copyright 2008, Blender Foundation / www.bigbuckbunny.org
Virtual Speaker Positions
// Create a MediaClip from the video file and apply our spatial audio effect
MediaClip clip = await MediaClip.CreateFromFileAsync(videoFile);
clip.AudioEffectDefinitions.Add(new AudioEffectDefinition(typeof(SpatialAudioEffect).FullName));

// Create a MediaComposition object that will allow us to generate a stream source
MediaComposition compositor = new MediaComposition();
compositor.Clips.Add(clip);

// Set the stream source to the MediaElement control
this.Player.SetMediaStreamSource(compositor.GenerateMediaStreamSource());

Virtual Speaker Positions
One lifecycle – run in the foreground or background; playback sponsors execution.
One knowledge model – no playback state to sync.
One process – the framework auto-releases its own resources (XAML textures) and asks the app to release its resources if needed (caches, UI views).
When backgrounded: the app is notified that its visibility has changed and that its memory usage level has changed.
When foregrounded: the app is notified so it can restore its UI.
Playback-sponsored execution requires a manifest capability.

Background Audio
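A hedged sketch of wiring those notifications up (assuming a UWP `Application` subclass; `EnteredBackground`/`LeavingBackground` and the `MemoryManager` events are Windows 10 APIs, while `ReleaseUiResources`/`RestoreUi` are hypothetical helpers):

```csharp
using Windows.System;
using Windows.UI.Xaml;

// In App.xaml.cs – subscribe once, e.g. in the App constructor.
Application.Current.EnteredBackground += (s, e) =>
{
    // Visibility changed: drop caches and UI views so the process stays lean.
    ReleaseUiResources(); // hypothetical helper
};
Application.Current.LeavingBackground += (s, e) =>
{
    // Foregrounded again: restore the UI.
    RestoreUi(); // hypothetical helper
};
MemoryManager.AppMemoryUsageIncreased += (s, e) =>
{
    // Memory usage level changed: shed more state if a limit was crossed.
    if (MemoryManager.AppMemoryUsageLevel == AppMemoryUsageLevel.OverLimit)
        ReleaseUiResources();
};
```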
void Awake()
{
    // Get the AudioSource component on the game object and set room size.
    audiosource = gameObject.GetComponent<AudioSource>();
    audiosource.SetSpatializerFloat(1, (float)ROOMSIZE.Small);
}
public GameObject[] AudioSources;

void Awake()
{
    // Find all objects tagged SpatialEmittersSmall
    AudioSources = GameObject.FindGameObjectsWithTag("SpatialEmittersSmall");
    foreach (GameObject source in AudioSources)
    {
        var audiosource = source.GetComponent<AudioSource>();
        audiosource.spread = 0;
        audiosource.spatialBlend = 1;
        audiosource.rolloffMode = AudioRolloffMode.Custom;

        // Plugin: set the spatializer floats here to avoid a possible pop
        audiosource.SetSpatializerFloat(1, (float)ROOMSIZE.Small); // 1 is the roomSize param
        audiosource.SetSpatializerFloat(2, _minGain);              // 2 is the minGain param
        audiosource.SetSpatializerFloat(3, _maxGain);              // 3 is the maxGain param
        audiosource.SetSpatializerFloat(4, 1);                     // 4 is the unityGain param – distance to 0 attn.
    }
}
Different devices support a different number of spatial voices. When Penrose is not present, AudioGraph falls back to a speaker-panning algorithm that is optimized for headphones. You are in charge of estimating the number of voices for your app and the devices you target.
Mitigations: co-location using submixes; "traditional" (non-spatial) AudioGraph; remove emitters.
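Co-location via a submix can be sketched like this (assuming the `CreateSubmixNode` overload that accepts an `AudioNodeEmitter`, so several nearby sources share one spatial voice; `outputNode`, `audioGraph`, and the `clusteredFiles` collection are illustrative names):

```csharp
using System.Numerics;
using Windows.Media.Audio;
using Windows.Media.Effects;
using Windows.Storage;

// One emitter shared by many sources = one spatial voice instead of many.
AudioNodeEmitter sharedEmitter = new AudioNodeEmitter(
    AudioNodeEmitterShape.CreateOmnidirectional(),
    AudioNodeEmitterDecayModels.CreateDefault(),
    AudioNodeEmitterSettings.None);
sharedEmitter.Position = new Vector3(-5f, 0f, -5f);

// The submix carries the emitter; plain (non-spatial) input nodes feed it.
AudioSubmixNode coLocated = audioGraph.CreateSubmixNode(
    new IAudioEffectDefinition[0], sharedEmitter);
coLocated.AddOutgoingConnection(outputNode);

foreach (StorageFile f in clusteredFiles) // hypothetical collection
{
    var r = await audioGraph.CreateFileInputNodeAsync(f); // no per-file emitter
    if (r.Status == AudioFileNodeCreationStatus.Success)
        r.FileInputNode.AddOutgoingConnection(coLocated);
}
```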