Auralization
Lauri Savioja (Tapio Lokki)
Helsinki University of Technology, TKK
AGENDA, 8:45 – 9:20
- Auralization, i.e., sound rendering
- Impulse response: basic principle + Marienkirche demo
- Source signals and modeling of source directivity
- Modeling from a perceptual point of view
- Dynamic auralization
- Evaluation of auralization quality
- Spatial sound reproduction: headphones, loudspeakers
Impulse response of a room (example room, 10 meters x 7 meters)
Impulse response of a room
Impulse response
A linear time-invariant (LTI) system can be modeled with its impulse response. The output y(t) is the convolution of the input x(t) and the impulse response h(t): y(t) = x(t) * h(t) = ∫ x(τ) h(t − τ) dτ. In discrete form the convolution becomes a sum: y[n] = Σ_k x[k] h[n − k].
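As a sketch of this discrete convolution, and of the basic auralization operation of convolving a dry (anechoic) signal with an impulse response, here is a minimal Python example (assuming NumPy/SciPy; the signals are toy placeholders):

```python
import numpy as np
from scipy.signal import fftconvolve

def auralize(dry, ir):
    """Convolve an anechoic (dry) signal with an impulse response.

    Direct form: y[n] = sum_k x[k] * h[n - k].
    fftconvolve evaluates the same sum efficiently via the FFT.
    """
    return fftconvolve(dry, ir)

# Toy example: a two-tap "room" adds one delayed, attenuated reflection.
x = np.array([1.0, 0.5, 0.25])       # dry input x[n]
h = np.array([1.0, 0.0, 0.0, 0.3])   # impulse response h[n]: direct sound + one reflection
y = auralize(x, h)
print(y)                              # length len(x) + len(h) - 1
```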
Measured (binaural) impulse response of Tapiola concert hall
Two goals of room acoustics modeling
- Goal 1: room acoustics prediction
  - Static source and receiver positions
  - No real-time requirement
- Goal 2: auralization, sound rendering
  - Possibly moving source(s) and listener, even geometry
  - Both off-line and interactive (real-time) applications
  - Need for anechoic stimulus signals
(Binaural rendering, Lokki, 2002)
Goal 2: Auralization / sound rendering
- "Auralization is the process of rendering audible, by physical or mathematical modeling, the sound field of a source in a space, in such a way as to simulate the binaural listening experience at a given position in the modeled space." (Kleiner et al. 1993, JAES)
- Sound rendering: plausible 3-D sound, e.g., in games
- 3-D model → spatial IR; spatial IR * dry signal = auralization
Auralization
- Goal: plausible 3-D sound, authentic auralization
- The most intuitive way to study room acoustic prediction results
  - Not only for experts
- Anechoic stimulus signal
- Reproduction with binaural or multichannel techniques
- The impulse response also has to contain spatial information
Auralization, input
Input data:
- Anechoic stimulus signal(s)!
- Geometry + material data
- Source(s) and receiver(s): locations and orientations
Auralization, modeling
- Source(s): omnidirectional, sometimes directional
- Medium:
  - physically-based sound propagation in a room
  - perceptual models, i.e., artificial reverb
- Receiver: spatial sound reproduction (binaural or multichannel)
Marienkirche, concert hall in Neubrandenburg (Germany)
source – medium – receiver (Savioja et al. 1999, Väänänen 2003)
Source Modeling – stimulus signal
Stimulus:
- Sound signal synthesis
- Anechoic recordings
Source Modeling – Radiation
Directivity is a measure of the directional characteristic of a sound source.
- Point sources: omnidirectional, or with frequency-dependent directivity characteristics
- Line and volume sources
- Database of loudspeakers: http://www.clfgroup.org/
Anechoic stimulus signals
- In a concert hall the typical sound source is an orchestra
- Anechoic recordings needed
- Directivity of instruments also needed
- We have just completed such recordings (demo)
  - All recordings with 22 microphones
  - Recordings are publicly available for academic purposes
- Contact: Tapio.Lokki@tkk.fi, http://auralization.tkk.fi
Sound field decomposition (Svensson, AES 22nd Conference 2002): diffuse reflections handled by surface sources
Computation vs. human perception: frequency resolution and time resolution (Svensson & Kristiansen 2002)
Two approaches: perceptually-based and physically-based (Väänänen, 2003)
Auralization: Two approaches (1)
Perceptually-based modeling
- The impulse response is not computed from a geometry
- A "statistical" response is applied
- Psychoacoustical (subjective) parameters are applied in tuning the response, e.g. reverberation time, clarity, warmth, spaciousness
- Applications: music production, teleconferencing, computer games, ...
Auralization: Two approaches (2)
Physically-based modeling
- Sound propagation and reflections from boundaries are modeled based on physics
- The impulse response is predicted based on the geometry; its properties depend on the surface materials, the directivity and position of the sound source(s), as well as the position and orientation of the listener
- Applications: prediction of acoustics, concert hall design, virtual auditory environments for games and virtual reality applications, education, ...
Dynamic auralization (≈ sound rendering)
- Method 1: a grid of impulse responses is computed and convolution is performed with interpolated responses (see the sketch below)
  - Applied in the CATT software (http://www.catt.se)
- Method 2: "parametric rendering"
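A minimal sketch of Method 1 under simplifying assumptions: a precomputed 1-D grid of impulse responses and linear interpolation between the two responses nearest to the listener position (the grid, names, and signals are illustrative, not the CATT implementation):

```python
import numpy as np
from scipy.signal import fftconvolve

def interpolated_ir(ir_grid, positions, listener_x):
    """Linearly interpolate between the two grid IRs nearest to listener_x.

    ir_grid: list of equal-length impulse responses on a 1-D grid of positions.
    positions: sorted 1-D array of the grid positions (same length as ir_grid).
    """
    i = np.searchsorted(positions, listener_x)
    i = np.clip(i, 1, len(positions) - 1)
    w = (listener_x - positions[i - 1]) / (positions[i] - positions[i - 1])
    return (1.0 - w) * ir_grid[i - 1] + w * ir_grid[i]

# Dry signal convolved with the interpolated response for the current position.
positions = np.array([0.0, 1.0, 2.0])
ir_grid = [np.random.randn(4800) * np.exp(-np.arange(4800) / 800.0) for _ in positions]
dry = np.random.randn(48000)
y = fftconvolve(dry, interpolated_ir(ir_grid, positions, listener_x=0.3))
```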
Typical Auralization System
1. Scene definition
2. Parametric representation of sound paths
3. Auralization with a parametric DSP structure
Auralization parameters
For the direct sound and each image source the following set of auralization parameters is provided (a sketch of such a parameter record follows below):
- Distance from the listener
- Azimuth and elevation angles with respect to the listener
- Source orientation with respect to the listener
- Reflection data, e.g. as a set of filter coefficients which describe the material properties in the reflections
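A minimal sketch of how such a per-image-source parameter set might be represented in code; the field names are illustrative assumptions, not a published format:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ImageSourceParams:
    """Auralization parameters for the direct sound or one image source."""
    distance: float          # distance from the listener (m)
    azimuth: float           # direction of arrival, azimuth (degrees)
    elevation: float         # direction of arrival, elevation (degrees)
    source_azimuth: float    # source orientation w.r.t. the listener (degrees)
    source_elevation: float
    # FIR coefficients describing the combined material absorption of the reflections
    reflection_filter: np.ndarray = field(default_factory=lambda: np.array([1.0]))

direct_sound = ImageSourceParams(distance=10.0, azimuth=30.0, elevation=0.0,
                                 source_azimuth=180.0, source_elevation=0.0)
```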
Treatment of one image source – a DSP view
- Directivity
- Air absorption
- Distance attenuation
- Reflection filters
- Listener modeling
Because the blocks form a linear system, they commute and can be cascaded into one filter chain (adapted from Strauss, 1998); a minimal filter-chain sketch follows below.
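A minimal sketch of this cascade for one image source, under simple stand-in models: 1/r distance attenuation, a one-pole low-pass in place of a proper air-absorption filter, and an FIR reflection filter (all coefficients are illustrative):

```python
import numpy as np
from scipy.signal import lfilter

def process_image_source(x, distance, reflection_fir, fs=48000, c=343.0):
    """Cascade the per-image-source filters on a dry input block x."""
    # Propagation delay (rounded to whole samples for simplicity).
    delay = int(round(distance / c * fs))
    y = np.concatenate([np.zeros(delay), x])
    # Distance attenuation: 1/r law.
    y = y / max(distance, 1.0)
    # Air absorption stand-in: gentle one-pole low-pass (more loss at high frequencies).
    y = lfilter([0.9], [1.0, -0.1], y)
    # Reflection (material) filter: FIR given by the auralization parameters.
    y = lfilter(reflection_fir, [1.0], y)
    return y

x = np.random.randn(4800)
y = process_image_source(x, distance=12.5, reflection_fir=np.array([0.7, 0.2]))
```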
Auralization block diagram
Treatment of each image source
Late reverberation algorithm
A special version of a feedback delay network (Väänänen et al. 1997); a generic feedback-delay-network sketch follows below.
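A minimal sketch of a generic feedback delay network (not the specific Väänänen et al. 1997 structure): four delay lines with a low-pass loss in the feedback loop, mixed through an orthogonal Hadamard feedback matrix; delay lengths and gains are illustrative assumptions:

```python
import numpy as np

def fdn_reverb(x, delays=(1031, 1327, 1523, 1871), g=0.85):
    """Simple 4-line feedback delay network late reverb (sample-by-sample)."""
    N = len(delays)
    # Scaled orthogonal feedback matrix (normalized Hadamard) keeps the loop stable for g < 1.
    A = g * np.array([[1, 1, 1, 1],
                      [1, -1, 1, -1],
                      [1, 1, -1, -1],
                      [1, -1, -1, 1]]) / 2.0
    buffers = [np.zeros(d) for d in delays]
    idx = [0] * N
    lp_state = np.zeros(N)        # one-pole low-pass per line: frequency-dependent decay
    y = np.zeros(len(x))
    for n in range(len(x)):
        outs = np.array([buffers[i][idx[i]] for i in range(N)])
        y[n] = outs.sum() / N
        # Low-pass the line outputs, mix them through A, add the input, write back.
        lp_state = 0.6 * lp_state + 0.4 * outs
        feedback = A @ lp_state + x[n]
        for i in range(N):
            buffers[i][idx[i]] = feedback[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return y

# One-second impulse response of the FDN at 48 kHz.
late = fdn_reverb(np.concatenate([[1.0], np.zeros(47999)]))
```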
A Case Study: a Lecture Room
Image sources 1st order
Image sources up to 2nd order
Image sources up to 3rd order
Distance attenuation
Distance attenuation (zoomed)
Gain + air absorption
Gain + air and material absorption
All monaural filtering
All monaural filtering (zoomed)
Treatment of each image source
Only ITD for pure impulse
Only ITD for pure impulse (zoomed)
ITD + minimum phase HRTF
Monaural filtering + ITD
Monaural filtering + ITD + HRTF
Auralization block diagram
Reverb
Image sources + reverberation
Dynamic Sound Rendering
- Dynamic rendering: the properties of the image sources are time variant
  - The filter coefficients are changing all the time
  - Every single parameter has to be interpolated (a minimal interpolation sketch follows below)
  - At delay-line pick-ups, fractional delay filters have to be used to avoid clicks and artifacts
- Late reverberation is static
- Update rate vs. latency
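A minimal sketch of one such interpolation: linearly cross-fading a per-image-source gain across one audio block when the rendering parameters are updated (block size and gain values are illustrative):

```python
import numpy as np

def ramp_gain(x_block, gain_old, gain_new):
    """Apply a gain that ramps linearly across the block to avoid clicks
    when an auralization parameter (here: a distance/directivity gain) is updated."""
    ramp = np.linspace(gain_old, gain_new, num=len(x_block), endpoint=False)
    return x_block * ramp

fs, block = 48000, 512                            # parameter update every 512 samples (~10.7 ms)
x = np.random.randn(block)
y = ramp_gain(x, gain_old=0.08, gain_new=0.05)    # listener moved away: gain decreases smoothly
```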
Auralization quality
- What is the wanted quality?
- Assessment of quality is possible only by case studies
- Objectively: acoustical attributes, with auditory modeling
- Subjectively: listening tests
A case study, lecture hall T3
Quality of auralization (Lokki, 2002)
- Stimuli: clarinet, drum
- Results: recording vs. auralization for each stimulus
Spatial auditory display
Nicolas Tsingos, Lauri Savioja
Spatial Sound Reproduction Techniques
- Reproduce the correct perceived location/direction of a virtual sound source at the ears of the listener
- Headphone or speaker based: binaural stereo, or multiple speakers
Binaural and Transaural Stereophony
- Natural filtering of the ears and torso
- Apply a directional filtering to the signal: Head-Related Transfer Functions (HRTFs)
- Headphones (binaural) or a speaker pair (transaural)
Head-Related Transfer Functions
- Modeling: finite element techniques
- Measuring: dummy heads or a human listener
- HRTFs strongly depend on the listener (morphological differences); adaptation by scaling in the frequency domain
HRTF filter design
Filters separated into two parts (a minimal application sketch follows below):
1. Inter-aural time difference (ITD)
2. Minimum-phase FIR filter
In movements: linear interpolation of the ITD, bilinear interpolation for the FIR
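A minimal sketch of applying such a two-part HRTF filter to one source direction: an integer-sample delay for the ITD on the far ear plus a short minimum-phase FIR per ear (the ITD and FIR coefficients are made-up placeholders, not measured HRTF data):

```python
import numpy as np
from scipy.signal import lfilter

def apply_hrtf(x, itd_samples, fir_left, fir_right):
    """Binaural rendering of a monaural signal for one direction:
    ITD as an integer delay on the far ear + minimum-phase FIRs per ear."""
    left = lfilter(fir_left, [1.0], x)
    right = lfilter(fir_right, [1.0], x)
    # Source on the left side: delay the right (far) ear by the ITD.
    right = np.concatenate([np.zeros(itd_samples), right[:len(right) - itd_samples]])
    return np.stack([left, right])

x = np.random.randn(4800)
fir_l = np.array([1.0, 0.3, 0.1])     # placeholder minimum-phase FIRs
fir_r = np.array([0.6, 0.25, 0.1])
binaural = apply_hrtf(x, itd_samples=30, fir_left=fir_l, fir_right=fir_r)  # ~0.6 ms ITD at 48 kHz
```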
Implementing HRTFs
- Principal component analysis: an HRTF is a linear combination of eigenfilters
- Allows for smooth interpolation
- Allows for reducing the number of operations (a minimal PCA sketch follows below)
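A minimal sketch of the PCA idea on a placeholder HRIR set: derive a few eigenfilters and express every response as the mean plus a weighted sum of them; interpolating the weights then interpolates the HRTFs (the data here is random stand-in data, not real measurements):

```python
import numpy as np

# Placeholder "HRIR set": 72 directions x 128 taps (random stand-in for measured data).
hrirs = np.random.randn(72, 128)

mean_hrir = hrirs.mean(axis=0)
U, S, Vt = np.linalg.svd(hrirs - mean_hrir, full_matrices=False)

K = 8                                            # keep a few eigenfilters
eigenfilters = Vt[:K]                            # (K, 128) basis filters
weights = (hrirs - mean_hrir) @ eigenfilters.T   # (72, K) per-direction weights

# Any HRIR is approximated as mean + weighted sum of eigenfilters; interpolating
# the K weights between directions gives smooth HRTF interpolation at low cost.
approx = mean_hrir + weights @ eigenfilters
err = np.linalg.norm(approx - hrirs) / np.linalg.norm(hrirs)
```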
Transaural Stereophony
- Crosstalk cancellation
- H_ll and H_rr are HRTFs (speaker to same-side ear); H_rl and H_lr are the crosstalk paths that have to be cancelled (a minimal sketch follows below)
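A minimal frequency-domain sketch of crosstalk cancellation: per frequency bin, invert the 2x2 matrix of speaker-to-ear transfer functions so the desired binaural spectra arrive at the ears (the transfer functions are placeholder values, and the regularization a practical canceller needs is omitted):

```python
import numpy as np

def crosstalk_canceller(B_l, B_r, H):
    """Compute speaker feed spectra G so that H @ G reproduces the binaural spectra B.

    B_l, B_r: desired left/right ear spectra (arrays over frequency bins).
    H: array of shape (bins, 2, 2) with H[k] = [[H_ll, H_lr], [H_rl, H_rr]] at bin k.
    """
    B = np.stack([B_l, B_r], axis=-1)[..., None]   # (bins, 2, 1)
    G = np.linalg.inv(H) @ B                       # (bins, 2, 1): per-bin matrix inversion
    return G[..., 0, 0], G[..., 1, 0]              # left and right speaker spectra

bins = 257
H = np.empty((bins, 2, 2), dtype=complex)
H[:, 0, 0] = H[:, 1, 1] = 1.0                      # placeholder direct paths H_ll, H_rr
H[:, 0, 1] = H[:, 1, 0] = 0.3                      # placeholder crosstalk paths H_lr, H_rl
g_left, g_right = crosstalk_canceller(np.ones(bins, complex), np.zeros(bins, complex), H)
```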
Amplitude/Intensity Panning
- The common "surround sound" approach
- Apply the proper gain to every speaker to reproduce the proper perceived direction
  - In 2D: a pair of loudspeakers
  - In 3D: a loudspeaker triangle
- Vector Base Amplitude Panning (VBAP) (image from Ville Pulkki, TKK); a minimal 2-D sketch follows below
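A minimal 2-D VBAP sketch for one loudspeaker pair: express the desired source direction in the basis of the two loudspeaker direction vectors and normalize the gains (the speaker angles are illustrative):

```python
import numpy as np

def vbap_2d(source_deg, spk1_deg, spk2_deg):
    """Gains for a loudspeaker pair so the panned direction matches source_deg."""
    def unit(deg):
        a = np.radians(deg)
        return np.array([np.cos(a), np.sin(a)])
    L = np.column_stack([unit(spk1_deg), unit(spk2_deg)])   # loudspeaker base matrix
    g = np.linalg.solve(L, unit(source_deg))                # p = L g  ->  g = L^-1 p
    return g / np.linalg.norm(g)                            # constant-power normalization

g1, g2 = vbap_2d(source_deg=10.0, spk1_deg=30.0, spk2_deg=-30.0)
# The +30 degree speaker gets the larger gain, since the source is closer to it.
```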
Ambisonics
- Spherical-harmonic decomposition of the pressure field at a given point
- 1st-order spherical harmonics: the sound field can be reproduced from 4 components, 1 omnidirectional and 3 orthogonal figure-of-8
- Allows for manipulating the sound field (rotations, etc.); a minimal encoding sketch follows below
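A minimal first-order (B-format) encoding sketch for a single plane-wave source: the omnidirectional W component plus the three figure-of-8 components X, Y, Z, using the traditional convention with W scaled by 1/sqrt(2) (the source direction is illustrative):

```python
import numpy as np

def encode_bformat(x, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order B-format (W, X, Y, Z)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    W = x / np.sqrt(2.0)                 # omnidirectional component
    X = x * np.cos(az) * np.cos(el)      # front-back figure-of-8
    Y = x * np.sin(az) * np.cos(el)      # left-right figure-of-8
    Z = x * np.sin(el)                   # up-down figure-of-8
    return np.stack([W, X, Y, Z])

mono = np.random.randn(4800)
bformat = encode_bformat(mono, azimuth_deg=45.0, elevation_deg=0.0)
# Rotating the sound field is a linear operation on (X, Y, Z), e.g. a rotation matrix about Z.
```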
Wave Field Synthesis
- Reproduce the exact wave field in the reproduction region
- Use speakers on the boundary (Kirchhoff integral theorem)
- Sound field valid everywhere in the room
- Heavy resources; in practice limited to a planar configuration
Comparison

Technique          | Setup (# chans) | DSP      | Elevation      | Imaging | Sweet spot | Recording
HRTF (binaural)    | light (2)       | moderate | yes            | v. good | n/a        | yes
Transaural         | light (2+)      | moderate | yes            | good    | small      | yes
Amplitude panning  | average (5+)    | low      | yes (3D array) | average | medium     | no
Ambisonics         | average (4+)    | moderate | yes (3D array) | good    | small      | yes
WFS                | heavy (100+)    | high     | ?              | v. good | n/a        | ?
Which Setup for which Environment?
- Binaural systems for desktop use (includes stereo transaural)
- Multi-speaker systems for multi-user settings
  - Well suited to immersive projection-based VR systems
  - Projection screens act as low-pass filters
  - Video projection constraints
Other Issues for Immersive Environments
- Overall system latency: less than 100 ms is OK
- Tracking the user's head
  - Update binaural/transaural filters
  - Correction of loudspeaker gains
- Room problems: reflective surfaces
Summary
- Auralization
  - Direct convolution with full directional impulse responses: computationally too heavy in practice
  - Parametric impulse response rendering: early reflections treated separately, statistical late reverberation
- Spatial sound reproduction
  - Headphones: HRTFs
  - Loudspeakers: VBAP, Ambisonics, Wave Field Synthesis
Thank you for your attention! Contact: Lauri.Savioja@tkk.fi http://auralization.tkk.fi