Motion Simulation in the Environment for Auditory Research
Braxton B. Boren, Mark Ericson
U.S. Army Research, Development and Engineering Command
Nov. 1, 2011


Introduction
ARL's Environment for Auditory Research (EAR) contains state-of-the-art facilities for auditory simulations.
Realistic auditory environments should contain both static and moving sources.
Moving sources are much more difficult to simulate in a 57-channel audio system:
–Multichannel audio editors
–Max/MSP
–Matlab
Using streaming audio buffers, the EAR's Sphere Room has been equipped to simulate moving sources by automatically generating source paths and processing each source's motion in real time.

ENVIRONMENT FOR AUDITORY RESEARCH: Sphere Room
The Sphere Room is a 140 m³ (5.3 m × 5.4 m × 4.9 m) auditory virtual reality space designed to facilitate investigations of:
–Integrity of auditory virtual spaces
–Realism of complex auditory simulations
–Effects of changes in Head-Related Transfer Functions on auditory perception
–Effect of helmets and other headgear on spatial orientation
The room contains 57 loudspeakers separated vertically by 30°, constituting a sphere surrounding the listener. This configuration enables virtual sound source movement and sound projection in a nearly full 360° sphere. Unlimited stationary or moving sound sources may be presented to any combination of the 57 loudspeakers, permitting generation of realistic and dynamically changing acoustic environments.

Streaming Audio in Matlab
PortAudio API: allows low-level control of multichannel audio devices from the Matlab programming environment
–Low system latency
–High audio fidelity
Streaming audio: short buffers are updated every 11.5 milliseconds with new audio data
–Loudspeaker gains are pre-calculated
–Signal processing can be applied in real time
Static sources can be placed in the background at specific positions.
Additional moving sources can be added within a single virtual environment.
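The buffer-by-buffer update described above can be sketched as follows. This is an illustrative Python/NumPy model of the streaming logic, not the actual Matlab/PortAudio code; the function name `stream_source`, the 512-sample buffer (≈ 11.6 ms at 44.1 kHz, close to the ~11.5 ms described), and the gain-schedule representation are all assumptions made for the sketch.

```python
import numpy as np

BUFFER = 512        # samples per buffer; ~11.6 ms at 44.1 kHz (illustrative)
N_SPEAKERS = 57     # loudspeakers in the Sphere Room array

def stream_source(signal, gain_schedule):
    """Feed a mono signal to the 57-channel array one buffer at a time.

    `gain_schedule` holds one 57-element gain vector per buffer,
    pre-computed offline from the source's motion path, so the
    per-buffer work is just a multiply before the buffer is loaded.
    """
    n_buffers = len(signal) // BUFFER
    out = np.zeros((n_buffers * BUFFER, N_SPEAKERS))
    for b in range(n_buffers):
        sl = slice(b * BUFFER, (b + 1) * BUFFER)
        # apply this buffer's pre-computed loudspeaker gains
        out[sl] = signal[sl, None] * gain_schedule[b][None, :]
    return out
```

In a real-time system the loop body would run inside the audio callback; pre-computing the gain schedule is what keeps the per-buffer cost low.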

Source Motion Paths
Virtual source motion paths are defined parametrically over time:
–Circular
–Elliptical
–'Dogbone'
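Parametric paths of this kind can be sketched as functions of time. The radii, periods, and function names below are illustrative assumptions, not values from the slides; the 'dogbone' path is omitted since its exact parametrization is not given.

```python
import numpy as np

def circular_path(t, radius=2.0, period=10.0):
    """x-y-z position(s) of a source circling the listener (z = 0)."""
    w = 2 * np.pi * np.asarray(t) / period
    return np.stack([radius * np.cos(w),
                     radius * np.sin(w),
                     np.zeros_like(w)], axis=-1)

def elliptical_path(t, a=3.0, b=1.5, period=10.0):
    """Elliptical path with semi-axes a and b in the horizontal plane."""
    w = 2 * np.pi * np.asarray(t) / period
    return np.stack([a * np.cos(w),
                     b * np.sin(w),
                     np.zeros_like(w)], axis=-1)
```

Evaluating such a path at each buffer's timestamp yields the source positions from which loudspeaker gains are computed.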

Panning Algorithms
Distance-Based Amplitude Panning (DBAP), Lossius et al., 2009
–Loudspeaker gains are determined by each speaker's distance from the virtual audio source
–Independent of listener position
–Provides smooth motion panning for virtual sources located on the loudspeaker array
–Cannot simulate sources outside the array
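The distance-based gain rule can be sketched as below. This is a simplified version in which amplitude falls off as a power of distance, with the exponent set by a rolloff in dB per doubling of distance; the full DBAP formulation in Lossius et al. (2009) also includes spatial blur and speaker weights, which are omitted here.

```python
import numpy as np

def dbap_gains(source, speakers, rolloff_db=6.0):
    """Simplified distance-based amplitude panning.

    Each speaker's gain falls off with its distance from the virtual
    source (6 dB per distance doubling by default); gains are then
    normalized so the total radiated power is constant.
    """
    a = rolloff_db / (20.0 * np.log10(2.0))   # rolloff exponent (~1 for 6 dB)
    d = np.linalg.norm(speakers - source, axis=1)
    d = np.maximum(d, 1e-6)                   # avoid division by zero at a speaker
    g = 1.0 / d**a
    return g / np.linalg.norm(g)              # sum of squared gains = 1
```

Because the gains depend only on source-to-speaker distances, the result is listener-position-independent, matching the property noted above.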

Panning Algorithms
Vector Base Amplitude Panning (VBAP), Pulkki, 1997
–Defines each loudspeaker as a position vector
–Given a set of three linearly independent speaker vectors, VBAP can simulate a source within the speaker triangle as a linear combination of the three vectors
–The coefficient of each vector is the gain of the corresponding speaker

Panning Algorithms
Vector Base Amplitude Panning (VBAP), Pulkki, 1997
–VBAP is more robust than DBAP for a fixed listener position
–Allows efficient simulation of virtual sources outside the loudspeaker array
–Requires an algorithm for detecting ray/triangle intersections
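The core VBAP computation for one loudspeaker triangle is a small linear solve: express the source direction p as p = g1·l1 + g2·l2 + g3·l3 and take the coefficients as the speaker gains (Pulkki, 1997). The sketch below assumes unit-vector speaker positions and constant-power normalization.

```python
import numpy as np

def vbap_gains(source_dir, speaker_triangle):
    """Gains for one speaker triangle via Vector Base Amplitude Panning.

    `speaker_triangle` holds the three speaker direction vectors as rows.
    Solving p = g1*l1 + g2*l2 + g3*l3 gives the gains; a negative gain
    means the source direction lies outside this triangle.
    """
    L = np.asarray(speaker_triangle, dtype=float)       # rows l1, l2, l3
    p = np.asarray(source_dir, dtype=float)
    g = np.linalg.solve(L.T, p)                         # p = L.T @ g
    if np.any(g < 0):
        return None                     # source outside this triangle
    return g / np.linalg.norm(g)        # constant-power normalization
```

In the full system this is evaluated against whichever triangle of the 57-speaker mesh the source's direction ray intersects, which is why the triangle-assignment step described next is needed.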

Assigning triangles to sources
Ray-Triangle Intersection, Sunday, 2003
1) Parametrically define the triangle's plane: V(s, t) = V0 + s(V1 − V0) + t(V2 − V0)
2) If a given ray intersects the plane, find the parametric coordinates s and t of the intersection point
3) If s ≥ 0, t ≥ 0, and (s + t) ≤ 1, the intersection point lies inside the triangle

Assigning triangles to sources
Ray-Triangle Intersection, Sunday, 2003
–Requires 5 distinct dot product operations
–Not as efficient as other algorithms for dynamic environments
–More efficient for static sets of triangles, because the planes' normal vectors can be pre-computed
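The parametric test above can be implemented directly. The sketch below follows Sunday's (2003) formulation; the five distinct dot products (u·u, u·v, v·v, w·u, w·v) are visible in the code, and for a static speaker mesh the first three, along with the plane normal, could be pre-computed per triangle.

```python
import numpy as np

def ray_triangle_intersect(p0, direction, v0, v1, v2, eps=1e-9):
    """Return the point where the ray p0 + r*direction (r >= 0) hits
    triangle (v0, v1, v2), or None if it misses."""
    u = v1 - v0
    v = v2 - v0
    n = np.cross(u, v)                     # triangle plane normal
    denom = np.dot(n, direction)
    if abs(denom) < eps:
        return None                        # ray parallel to the plane
    r = np.dot(n, v0 - p0) / denom
    if r < 0:
        return None                        # triangle is behind the ray origin
    point = p0 + r * direction             # intersection with the plane
    # parametric coordinates (s, t) of the hit point within the triangle
    w = point - v0
    uu, uv, vv = np.dot(u, u), np.dot(u, v), np.dot(v, v)
    wu, wv = np.dot(w, u), np.dot(w, v)
    d = uv * uv - uu * vv
    s = (uv * wv - vv * wu) / d
    t = (uv * wu - uu * wv) / d
    if s < 0 or t < 0 or s + t > 1:
        return None                        # on the plane but outside the triangle
    return point
```

For the panning application, the ray origin is the listener position and the ray direction is the virtual source direction; the hit triangle selects the three active loudspeakers.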

Signal Processing
High-quality vehicular recordings are available with included x-y-z coordinates
–These already contain attenuation, air absorption, and Doppler shift
–Position data can be read and interpolated to determine pan positions
To allow arbitrary movement of any signal, signal processing is added
–Attenuation and air absorption coefficients are pre-computed
–Signal gain and a one-pole filter are updated in real time before loading the streaming audio buffer
–Doppler shift is more computationally expensive: Matlab is not fast enough to compute it in real time; it may later be implemented in C++
Better constant-velocity recordings of vehicular motion are needed
–Current recordings of idling vehicles are unconvincing
–Less important for slower sources whose sound does not change with velocity
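The per-buffer gain and one-pole filter update can be sketched as below. This is an illustrative stand-in: the actual coefficient values would come from the pre-computed attenuation and air-absorption tables mentioned above, and the function name and interface are assumptions made for the sketch.

```python
import numpy as np

def process_buffer(buffer, gain, alpha, state):
    """Apply a distance gain and a one-pole lowpass (a rough stand-in
    for air absorption) to one streaming buffer.

    `gain` and `alpha` are looked up for the source's current distance
    before each buffer; `state` carries the filter memory across
    buffers so the filtered signal stays continuous.
    """
    out = np.empty_like(np.asarray(buffer, dtype=float))
    y = state
    for i, x in enumerate(buffer):
        y = alpha * x + (1.0 - alpha) * y   # one-pole lowpass step
        out[i] = gain * y
    return out, y                           # return updated filter state
```

Because only `gain` and `alpha` change between buffers, the real-time cost per buffer stays small, consistent with the design described in the slides.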

Discussion
With a full signal processing load, this system can process up to four independent moving sources at once.
Pre-calculations can take longer if different sources' velocities and path lengths have very high least common multiples.
Static sources can be added in specific channels to provide background ambience.

Conclusions
Real-time streaming audio allows simulated motion along any parametric path.
Two different panning algorithms have been implemented:
–DBAP is simpler and better for listener-independent reproduction
–VBAP is more robust and better for a fixed listener position
Attenuation and air-absorption filtering can be applied in real time to give more realistic distance cues.
This system will be used in a series of auditory simulations and experiments ongoing at the EAR.

References
Henry, P., Amrein, B., & Ericson, M., "The Environment for Auditory Research", Acoustics Today, 5(3).
Kleiner, M., Brainard, D., & Pelli, D., "What's new in Psychtoolbox-3?", Perception, 36, ECVP Abstract Supplement.
Lossius, T., Baltazar, P., & de la Hogue, T., "DBAP – Distance-Based Amplitude Panning", Proceedings of the 2009 International Computer Music Conference.
Pulkki, V., "Virtual Sound Source Positioning Using Vector Base Amplitude Panning", J. Audio Eng. Soc., 45.
Sunday, D., "Intersections of Rays, Segments, Planes and Triangles in 3D", 2003.