Virtualized Audio as a Distributed Interactive Application Peter A. Dinda Northwestern University Access Grid Retreat, 1/30/01.



2 Overview
What I believe, and why I care:
– Audio systems are pathetic and stagnant
– We can do better: Virtualized Audio (VA)
– VA can exploit distributed environments
– VA demands interactive response

3 Traditional Audio (TA) System
[Diagram: a performer in the performance room (Sound Field 1) is captured by microphones, routed through a mixer and amp, and reproduced over loudspeakers or headphones for a listener in the listening room (Sound Field 2).]

4 TA Mixing and Filtering
[Diagram: the TA chain as a cascade of filters. Performer -> performance room filter -> microphone filter -> sampling -> mixing (reduction) -> amp filter -> loudspeaker filter -> listening room filter -> listener's location and HRTF -> perception of loudspeaker-reproduced sound. A headphone path ends in perception of headphone-reproduced sound; the reference is the perception of real sound.]

5 Virtualized Audio (VA) System

6 VA: Filtering, Separation, and Auralization
[Diagram contrasting the VA forward problem with the VA reverse problem.]

7 The Reverse Problem - Source Separation
[Diagram: sounds in the human space are captured by microphones; recovery algorithms take the microphone signals, microphone positions, and other inputs, and output the sound source signals, sound source positions, and room geometry and properties.]
Microphone signals are a result of the sound source signals and positions, the microphone positions, and the geometry and material properties of the room. We seek to recover these underlying producers of the microphone signals.

8 The Reverse Problem
Blind source separation and deconvolution
A statistical estimation problem
Can “unblind” the problem in various ways
– Large number of microphones
– Tracking of performers
– Separating room deconvolution from source location
– Directional microphones
– Phased arrays
Potential to trade off computational requirements and specialized equipment
Much existing research to be exploited

9 Transducer Beaming
[Diagram: a single transducer radiating a wave, with the beam pattern shown relative to the wavelength.]

10 Phased Arrays of Transducers
[Diagram: a phased array and its physical equivalent.]
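A phased array steers its beam by delaying each element so the individual wavefronts add coherently in the chosen direction. A minimal sketch for a uniform linear array; the function name and parameters are illustrative, not from the talk:

```python
import math

def steering_delays(n_elements, spacing_m, angle_deg, c=343.0):
    """Per-element delays (seconds) to steer a uniform linear array.

    To send a plane wave at angle_deg from broadside, element i must be
    delayed by i * spacing * sin(angle) / c, shifted so the smallest
    delay is zero. Assumes c = 343 m/s (air).
    """
    theta = math.radians(angle_deg)
    raw = [i * spacing_m * math.sin(theta) / c for i in range(n_elements)]
    base = min(raw)
    return [d - base for d in raw]

# Steer an 8-element array with 4 cm element spacing 30 degrees off broadside.
delays = steering_delays(8, 0.04, 30.0)
```

The same delay pattern applied on receive turns the array into a directional microphone, which is one way the reverse problem can be "unblinded" with specialized equipment.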

11 The Forward Problem - Auralization
[Diagram: auralization algorithms take the sound source positions, room geometry/properties, sound source signals, and listener positions, and output listener signals for listeners wearing headphones (or an HSS scheme).]
In general, all inputs are a function of time. Auralization must proceed in real time.

12 Ray-based Approaches to Auralization
For each sound source, cast some number of rays, then collect the rays that intersect listener positions
– Geometric simplification exists for rectangular spaces and specular reflections
Problems
– Non-specular reflections require exponential growth in the number of rays simulated
– Most interesting spaces are not rectangular
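One standard way to exploit the rectangular-room simplification is the image-source method, which enumerates specular reflection paths directly instead of casting rays. A minimal sketch, reduced to one axis for brevity; the function name and parameters are our own:

```python
def image_source_distances(src, lst, room_len, max_order):
    """Specular reflection path lengths along one axis of a rectangular room.

    For a source at position src in [0, room_len], its mirror images lie at
    2*n*room_len + src and 2*n*room_len - src for integer n; the distance
    from each image to the listener is the length of one specular path.
    Illustrative 1-D sketch only.
    """
    dists = []
    for n in range(-max_order, max_order + 1):
        for img in (2 * n * room_len + src, 2 * n * room_len - src):
            dists.append(abs(img - lst))
    return sorted(dists)

# Source at 1 m, listener at 3 m, 4 m room, reflections up to order 2.
paths = image_source_distances(1.0, 3.0, 4.0, 2)
# paths[0] is the direct path (2 m); longer entries are reflected paths.
```

Each path length maps to a delay (divide by the speed of sound) and an attenuation, which is how this construction yields an echo pattern without any ray casting.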

13 Wave Propagation Approach
Captures all properties except absorption (absorption adds first-order partial-derivative terms)
∂²p/∂t² = c²(∂²p/∂x² + ∂²p/∂y² + ∂²p/∂z²)
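Discretizing this equation with second-order central differences in time and space leads to the stencil shown on slide 14. A sketch of the derivation in our own notation (2D, with r = (cΔt/Δx)²):

```latex
% Central differences: (p^{n+1} - 2p^n + p^{n-1})/\Delta t^2 = c^2 \nabla^2_h p^n
% Solving for the next time level, with r = (c\,\Delta t/\Delta x)^2:
p^{n+1}_{j,i} = r\left(p^{n}_{j+1,i} + p^{n}_{j-1,i}
              + p^{n}_{j,i+1} + p^{n}_{j,i-1}\right)
              + 2(1 - 2r)\,p^{n}_{j,i} - p^{n-1}_{j,i}
```

The coefficients r and 2(1-2r) match those in the stencil code; the extra 0.99 factor there acts as a mild damping term.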

14 Method of Finite Differences
Replace differentials with differences
Solve on a regular grid
Simple stencil computation (2D example in Fx)
Do it really fast

pdo i=2,Y-1
  pdo j=2,X-1
    workarray(m0,j,i) = (.99) * (
   $      R*temparray(j+1,i)
   $    + 2.0*(1-2.0*R)*temparray(j,i)
   $    + R*temparray(j-1,i)
   $    + R*temparray(j,i+1)
   $    + R*temparray(j,i-1)
   $    - workarray(m1,j,i) )
  endpdo
endpdo
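The Fx stencil above can be transcribed into plain Python for readers unfamiliar with Fortran-style parallel loops. This is an illustrative transcription, not the original code; boundaries are held at zero:

```python
def fdtd_step(prev, cur, R, damp=0.99):
    """One time step of the 2-D wave-equation stencil from the slide.

    prev and cur are the two most recent pressure grids (lists of lists);
    returns the next grid. R plays the role of (c*dt/dx)^2 and damp is the
    slide's 0.99 damping factor. Boundary cells are left untouched (zero).
    """
    ny, nx = len(cur), len(cur[0])
    nxt = [row[:] for row in prev]  # start from a copy; interior is overwritten
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            nxt[i][j] = damp * (
                R * cur[i][j + 1] + R * cur[i][j - 1]
                + R * cur[i + 1][j] + R * cur[i - 1][j]
                + 2.0 * (1 - 2.0 * R) * cur[i][j]
                - prev[i][j]
            )
    return nxt

# Drop an impulse in the middle of a small quiet grid and advance one step.
N = 5
prev = [[0.0] * N for _ in range(N)]
cur = [[0.0] * N for _ in range(N)]
cur[2][2] = 1.0
nxt = fdtd_step(prev, cur, 0.25)
```

A production version would vectorize or tile this loop nest; the point here is only to show that each grid point is updated from its four neighbors and the two previous time levels.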

15 How Fast is Really Fast?
O(xyz(kf)^4 / c^3) stencil operations per second are necessary
– f = maximum frequency to be resolved
– x,y,z = dimensions of the simulated space
– k = grid points per wavelength (2 to 10 typical)
– c = speed of sound in the medium
For air with k=2, f=20 kHz, and x=y=z=4 m, we need to perform 4.1 x 10^12 stencil operations per second (~30 FP operations each)
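The quoted figure can be sanity-checked with the formula above (assuming c = 343 m/s for air, which the slide leaves implicit):

```python
# Back-of-envelope check of the slide's operation count.
k, f = 2, 20e3        # grid points per wavelength, max frequency (Hz)
x = y = z = 4.0       # room dimensions (m)
c = 343.0             # speed of sound in air (m/s), assumed
ops_per_sec = x * y * z * (k * f) ** 4 / c ** 3
# ops_per_sec is about 4.1e12 stencil operations per second,
# each costing roughly 30 floating-point operations.
```

The (kf)^4 term is what makes this so punishing: the three spatial grid dimensions and the time step all scale with frequency, so doubling the resolved bandwidth multiplies the work by sixteen.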

16 LTI Simplification
Consider the system as LTI - Linear and Time-Invariant
We can characterize an LTI system by its impulse response h(t)
In particular, for this system there is an impulse response from each sound source i to each listener j: h(i,j,t)
Then for sound sources s_i(t), the output m_j(t) listener j hears is m_j(t) = Σ_i h(i,j,t) * s_i(t), where * is the convolution operator
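In discrete time this is a sum of convolutions, one per source. A small self-contained sketch for a single listener; the helper names are our own:

```python
def convolve(h, s):
    """Discrete convolution: (h * s)[n] = sum over k of h[k] * s[n-k]."""
    out = [0.0] * (len(h) + len(s) - 1)
    for k, hk in enumerate(h):
        for n, sn in enumerate(s):
            out[k + n] += hk * sn
    return out

def listener_signal(h_rows, sources):
    """m_j = sum over sources i of h(i,j) * s_i, for one fixed listener j.

    h_rows[i] is the impulse response from source i to this listener,
    sources[i] is that source's signal. Illustrative sketch.
    """
    n = max(len(h) + len(s) - 1 for h, s in zip(h_rows, sources))
    m = [0.0] * n
    for h, s in zip(h_rows, sources):
        for idx, v in enumerate(convolve(h, s)):
            m[idx] += v
    return m

# Two sources with toy impulse responses: a direct path plus one delayed
# echo for the first source, a pure attenuation for the second.
m = listener_signal([[1.0, 0.0, 0.5], [0.8]], [[1.0, -1.0], [2.0, 2.0]])
```

Real impulse responses from a simulated room are thousands of taps long, so a practical implementation would use FFT-based (frequency-domain) convolution instead of these nested loops.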

17 LTI Complications
Note that h(i,j) must be recomputed whenever the space properties or sound source positions change
The system is not really LTI
– A moving sound source produces no Doppler effect in this model
Provided that sound source and listener movements, and space property changes, are slow, the approximation should be close, though
Possible “virtual source” extension

18 Where do the h(i,j,t)’s come from?
Instead of using input signals as boundary conditions to the wave propagation simulation, use impulses (Dirac deltas)
Only run the simulation when an h(i,j,t) needs to be recomputed due to movement or a change in space properties

19 Exploiting a Remote Supercomputer or the Grid

20 Interactivity in the Forward Problem
[Diagram: the auralization pipeline with the listener in the loop; listener positions feed back into the auralization algorithms along with the sound source positions, room geometry/properties, and sound source signals, producing listener signals for listeners wearing headphones.]

21 Full Example of Virtualized Audio
[Diagram: three instances of the reverse-problem pipeline (human space -> microphones -> recovery algorithms), each producing sound source signals, sound source positions, and room geometry and properties; a combine stage merges their outputs and feeds the auralization algorithms.]

22 VA as a Distributed Interactive Application
Disparate resource requirements
– Low-latency audio input/output
– Massive computation requirements
Low-latency control loop with a human in the loop
Response time must be bounded
Adaptation mechanisms
– Choice between full simulation and LTI simplification
– Number of listeners
– Frequency limiting versus delay
– Truncation of impulse responses
– Spatial resolution of impulse response functions

23 Conclusion
We can and should do better than the current state of audio
Lots of existing research to exploit
– The basis of virtualized audio
Trade off computation and specialized hardware
VA is a distributed interactive application
The VA forward problem is currently being implemented at Northwestern