3-D Spatialization and Localization and Simulated Surround Sound with Headphones. Lucas O’Neil, Brendan Cassidy.

Presentation transcript:

3-D Spatialization and Localization and Simulated Surround Sound with Headphones. Lucas O’Neil, Brendan Cassidy

Overview
3D with headphones
– HRTF model
– Convolution
– 360° + elevation panning
Upmixing
– Pro Logic
– Delays
– Filters
– Sub
– Autopanning
Downmixing with HRTF

Mathematical HRTF Model Beyond simple ITD and IID panning, we perceive 3-D direction through three main cues: pinna reflections, shoulder and torso reflections, and head shadow together with ITD. Each cue can be modeled with filters and delays.

The shoulder/torso reflection is simulated by a single echo; the pinna reflections are simulated with a tapped delay line.
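
A minimal sketch of these two stages in Python, assuming a single shoulder echo and a handful of pinna taps; the delay times and gains below are placeholders, not the values used in the project:

```python
import numpy as np

def shoulder_echo(x, fs, delay_ms=1.2, gain=0.5):
    """Add one delayed copy of the dry signal to mimic the shoulder/torso
    reflection. delay_ms and gain are illustrative, not the project's values."""
    d = int(fs * delay_ms / 1000)
    y = x.copy()
    if d > 0:
        y[d:] += gain * x[:-d]
    return y

def pinna_taps(x, fs, taps=((0.1, 0.5), (0.2, 0.3), (0.3, 0.15))):
    """Tapped delay line: each (delay_ms, gain) pair adds one pinna reflection.
    Tap times and gains are placeholders; in practice they depend on elevation."""
    y = x.copy()
    for delay_ms, gain in taps:
        d = int(fs * delay_ms / 1000)
        if d > 0:
            y[d:] += gain * x[:-d]
    return y
```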

Head shadow: the head diffracts the incoming sound wave. This is simulated in the digital domain with a first-order IIR filter. The ITD due to ear separation is obtained with an allpass filter whose group delay equals the required interaural delay.
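
The following sketch assumes a Brown/Duda-style structural model: a single-pole, single-zero head-shadow filter discretized with the bilinear transform, and (for simplicity) the ITD realized as a whole-sample delay rather than the allpass group-delay filter the slide mentions. The head radius and the alpha(theta) mapping are illustrative assumptions:

```python
import numpy as np
from scipy.signal import lfilter

C = 343.0    # speed of sound, m/s
A = 0.0875   # assumed head radius, m

def head_shadow(x, fs, theta_deg):
    """Single-pole, single-zero head-shadow filter (Brown/Duda-style).
    theta_deg: angle between the source and this ear (0 = source directly at
    this ear, 180 = source on the opposite side). The alpha(theta) mapping is
    an illustrative assumption: boost toward the ear, shelf down when shadowed."""
    theta = np.radians(theta_deg)
    beta = 2.0 * C / A
    alpha = 1.05 + 0.95 * np.cos(theta)    # ~2.0 at 0 deg, ~0.1 at 180 deg
    k = 2.0 * fs                           # bilinear transform of (alpha*s + beta)/(s + beta)
    b = [alpha * k + beta, beta - alpha * k]
    a = [k + beta, beta - k]
    return lfilter(b, a, x)

def itd_delay(x, fs, azimuth_deg):
    """Interaural time difference for a source at azimuth_deg from straight
    ahead (Woodworth spherical-head approximation), applied here as a
    whole-sample delay to the far ear; the project instead used an allpass
    filter whose group delay equals tau, giving sub-sample accuracy."""
    phi = np.radians(abs(azimuth_deg))
    tau = (A / C) * (phi + np.sin(phi))    # seconds
    d = int(round(tau * fs))
    return np.concatenate([np.zeros(d), x])[:len(x)]
```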

Input the azimuth and elevation angle. Apply the shoulder echo, add the tapped delay line for the pinna reflections, then filter through the head-shadow and ITD filters: spatialization! (A sketch composing the pieces above follows.)
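
A sketch of the complete per-source chain, composing the functions from the two blocks above (so it is runnable only alongside them); the angle conventions are assumptions, and the elevation dependence of the pinna taps is omitted:

```python
def spatialize(mono, fs, azimuth_deg):
    """Compose the sketch functions above into a (left, right) pair.
    azimuth_deg: 0 = straight ahead, +90 = to the listener's right.
    Each ear's head-shadow filter sees the angle between the source and that
    ear; the far (contralateral) ear additionally receives the ITD delay."""
    def fold(a):                                    # fold an angle into 0..180 degrees
        return abs((a + 180.0) % 360.0 - 180.0)

    y = pinna_taps(shoulder_echo(mono, fs), fs)     # direction dependence of taps omitted
    right = head_shadow(y, fs, fold(azimuth_deg - 90.0))
    left = head_shadow(y, fs, fold(azimuth_deg + 90.0))
    itd_az = min(fold(azimuth_deg), 180.0 - fold(azimuth_deg))
    if 0.0 < azimuth_deg % 360.0 < 180.0:           # source on the right: left ear is far
        left = itd_delay(left, fs, itd_az)
    else:                                           # source on the left (or centered)
        right = itd_delay(right, fs, itd_az)
    return left, right
```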

Convolving with HRIR HRIR = Head-Related Impulse Response, measured with the KEMAR dummy head (MIT data set). The audio is convolved with the left/right impulse responses corresponding to the desired angle (see the sketch below).
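
A minimal sketch of the convolution approach, assuming the MIT KEMAR HRIRs for the chosen direction have already been loaded into the arrays hrir_l and hrir_r (file parsing is not shown):

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono, hrir_l, hrir_r):
    """Convolve a mono signal with the left/right HRIRs for one direction,
    producing a (samples, 2) stereo array for headphone playback."""
    left = fftconvolve(mono, hrir_l)
    right = fftconvolve(mono, hrir_r)
    n = max(len(left), len(right))
    out = np.zeros((n, 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out
```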

360° Corkscrew Panning A demo to show point-source spatialization. Pick a rotation frequency for the azimuth and for the elevation. The signal is broken into blocks and HRTF processing is applied with a different angle on each block, simulating 360° rotation around the head while the elevation sweeps from -90° to +90°. Done with both the mathematical model and the convolution technique; a block-based sketch follows.
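
A block-based sketch of the corkscrew demo using the convolution technique; hrir_for(azimuth, elevation) is a hypothetical lookup helper returning the nearest measured HRIR pair, and overlap-add smoothing between blocks is omitted:

```python
import numpy as np
from scipy.signal import fftconvolve

def corkscrew_pan(mono, fs, hrir_for, block_s=0.05, rot_hz=0.25, elev_hz=0.05):
    """Split the signal into short blocks and render each block at a slowly
    rotating azimuth (0..360 deg) while the elevation sweeps -90..+90 deg.
    hrir_for(az, el) is a hypothetical lookup returning (left, right) HRIRs."""
    block = int(block_s * fs)
    out_l, out_r = [], []
    for start in range(0, len(mono), block):
        t = start / fs
        az = (360.0 * rot_hz * t) % 360.0
        el = 90.0 * np.sin(2.0 * np.pi * elev_hz * t)
        hl, hr = hrir_for(az, el)
        x = mono[start:start + block]
        out_l.append(fftconvolve(x, hl)[:len(x)])   # tails truncated; overlap-add omitted
        out_r.append(fftconvolve(x, hr)[:len(x)])
    return np.column_stack([np.concatenate(out_l), np.concatenate(out_r)])
```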

5.1 Surround Sound

Upmixing to 5.1 Surround We investigated the Dolby Pro Logic decoder. Initially a gain/phase-shift matrix was used to split up the stereo signal; this was then tweaked by adding delays to the center and surround channels. A passive-matrix sketch follows.
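
A sketch of the initial matrix step using the standard passive-matrix relationships (center from the in-phase sum, surround from the difference); the exact gains, phase shifts and channel delays the project settled on are not reproduced here:

```python
import numpy as np

def passive_matrix_upmix(left, right):
    """Split a stereo pair into front L/R, center and surround:
    C = (L + R) / sqrt(2), S = (L - R) / sqrt(2). Front L/R pass through;
    the delays added to C and S in the project are applied afterwards."""
    center = (left + right) / np.sqrt(2)
    surround = (left - right) / np.sqrt(2)
    return left, right, center, surround
```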

Pro Logic II has three modes of operation:
– Movie (not used in this project)
– ‘Pro Logic’
– Music
The surround channel uses a 7 kHz LPF in Pro Logic mode.
The surround channel uses a shelving filter in Music mode
– We used a 4 kHz cutoff for the shelving filter.
The surround channel has a 20 ms delay in Pro Logic mode, but not in Music mode. A sketch of this surround-channel conditioning follows.
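
One way to realize this surround-channel conditioning, as a sketch; the filter orders and the 6 dB shelf depth are assumptions:

```python
import numpy as np
from scipy.signal import butter, lfilter

def surround_condition(surround, fs, mode="prologic"):
    """Post-process the derived surround channel according to decoder mode."""
    if mode == "prologic":
        b, a = butter(2, 7000.0, btype="low", fs=fs)   # 7 kHz LPF
        y = lfilter(b, a, surround)
        d = int(0.020 * fs)                            # 20 ms delay
        return np.concatenate([np.zeros(d), y])[:len(y)]
    else:  # "music" mode
        # Approximate first-order high-shelf cut above 4 kHz:
        # keep the lows at unity gain, attenuate the highs (assumed -6 dB).
        b, a = butter(1, 4000.0, btype="low", fs=fs)
        lows = lfilter(b, a, surround)
        return lows + 0.5 * (surround - lows)
```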

Subwoofer Simulation The 5-channel surround mix was losing some low-frequency content due to crosstalk correlation and phase cancellation in the surround channels. This was solved by cloning the low frequencies of the signal (using a 300 Hz LPF) and mixing them back into the stereo channels after downmixing the five channels, as sketched below.
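
A sketch of the bass-restoration step, assuming a second-order Butterworth low-pass at 300 Hz and arrays shaped (samples, channels):

```python
import numpy as np
from scipy.signal import butter, lfilter

def restore_bass(downmix_stereo, original_stereo, fs, cutoff=300.0, gain=1.0):
    """Clone the low frequencies of the original stereo signal with a 300 Hz
    low-pass and mix them back into the stereo downmix, compensating for bass
    lost to phase cancellation in the surround channels."""
    b, a = butter(2, cutoff, btype="low", fs=fs)
    lows = lfilter(b, a, original_stereo, axis=0)   # filter each channel independently
    return downmix_stereo + gain * lows
```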

Autopanning Pro Logic mode uses autopanning to detect directionality and adjust the 5-speaker mix while preserving the RMS energy of the signal (the equal-power gain law is sketched below).
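
The RMS-preserving (equal-power) gain law itself is simple; the direction-detection logic that drives it is not shown here:

```python
import numpy as np

def equal_power_gains(pan):
    """Equal-power panning gains for pan in [0, 1] (0 = fully one speaker,
    1 = fully the other). g1**2 + g2**2 == 1, so RMS energy is preserved.
    e.g. equal_power_gains(0.5) -> (0.707..., 0.707...)"""
    theta = pan * np.pi / 2
    return np.cos(theta), np.sin(theta)
```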

Sub-band Autopanning The scope of the project did not allow for implementation. A sub-band surround upmixer breaks the signal into frequency bands and pans each band to the appropriate location. It can detect different instruments in music (like frequency keying in DAW software) and localize each one separately.

Dolby Pro Logic Decoder

Our Implementation of Upmixer

Downmixing with HRTF
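
A sketch of the HRTF downmix, assuming the five main channels are rendered at the standard ITU speaker azimuths and summed into a headphone pair; hrir_for is the same hypothetical lookup helper as in the corkscrew sketch, all signals are assumed equal length, and the bass path is handled separately as on the subwoofer slide:

```python
import numpy as np
from scipy.signal import fftconvolve

# Standard ITU-R BS.775 azimuths for the five main channels (degrees, + = right)
SPEAKER_AZIMUTHS = {"L": -30, "R": 30, "C": 0, "Ls": -110, "Rs": 110}

def downmix_to_binaural(channels, hrir_for):
    """channels: dict mapping speaker name -> mono signal (all equal length).
    Each channel is convolved with the HRIR pair for its speaker position
    (elevation 0) and the results are summed into a stereo headphone signal."""
    out_l, out_r = 0.0, 0.0
    for name, sig in channels.items():
        hl, hr = hrir_for(SPEAKER_AZIMUTHS[name], 0)
        out_l = out_l + fftconvolve(sig, hl)
        out_r = out_r + fftconvolve(sig, hr)
    return np.column_stack([out_l, out_r])
```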

The resulting stereo sound file has convincing spatialization effects. Pro Logic mode autopanning implemented without sub-band separation tends to make vocals jump back and forth between the left and right channels. Music mode sounded better (for music).

Questions?