Sonification of Fluid Field Data
October 11, 2006
Outline
- Sonification background
- Example sonifications
  - General
  - Data-specific
- Project
  - Background
  - Sound options and considerations
  - Specifics
  - Example
- Further work
Sonification Background
- Sonification is the use of non-speech audio to convey information [B.N. Walker]: data → sound.
- Used as an alternative or complement to visual and possibly other displays (e.g. haptic), increasing information bandwidth and providing reinforcement.
- Allows recognition of features that are not obvious from visual displays.
- Two senses can concentrate on different, complementary information: global events through sound, local details through visual cues; visual → spatial information, sound → sequential/temporal information.
- A multi-modal sensory experience improves perception, and spreading variables across different senses reduces the effect of the curse of dimensionality: temperature – visual, pressure – haptic stimulation, vorticity – sound.
Sonification Background
What is being sonified:
- General sonification toolboxes vs. methods specific to particular data sets.
- Time-dependent or static data.
How:
- Prerecorded sound.
- Modifying physical properties of sound: pressure, density, particle velocity (sound particles are created for each sound source).
- Modifying pitch, envelope, duration, timbre, etc. (modifying the sound wave itself).
- Sonification in real time or offline.
Generic algorithms convert generic datasets into sound with predefined mapping functions; they can sonify all sorts of datasets, but the user needs to define how the dataset is mapped to the synthesized sound. Data-specific algorithms are concerned with sonification in a specific area and try to create sounds that are meaningful in that area by mapping data to sound in a particular way, taking advantage of data properties specific to the domain.
Sonification Sandbox [B.N. Walker]
- Input: several data sets in the form of an M×N matrix.
- Each data set can be mapped to pitch, timbre, volume, or pan; one data set is mapped to time.
- A GUI lets the user manipulate the mappings.
- The length for which each data point is played corresponds to the relative spacing of the data points in the time dimension.
- Program download:
- Very simple: creating descriptive sounds for a vector flow field would require non-trivial tuning of the linear mapping function, making it ill-suited to the rendering of CFD data. (A sketch of this style of mapping follows.)
Sonification Example I
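The Sandbox's pitch mapping is, in essence, a linear rescaling of data values onto a note range. A minimal sketch of that idea, assuming a MIDI note range and rounding that are illustrative choices, not the Sandbox's actual code:

```python
import numpy as np

def map_to_pitch(values, lo_note=48, hi_note=84):
    """Linearly map data values onto a MIDI note range (Sandbox-style)."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    scaled = (v - v.min()) / span if span else np.zeros_like(v)  # normalize to [0, 1]
    return np.round(lo_note + scaled * (hi_note - lo_note)).astype(int)

def midi_to_hz(notes):
    """Convert MIDI note numbers to frequencies (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((np.asarray(notes) - 69) / 12.0)

column = [0.1, 0.5, 0.2, 0.9, 0.7]        # one column of the MxN input matrix
print(midi_to_hz(map_to_pitch(column)))   # one frequency per data point
```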
Data Sonification and Sound Visualization [H.G. Kaper]
- DIASS (a Digital Instrument for Additive Sound Synthesis).
- Sound is produced by summation of simple sine waves (partials P_i, each with an amplitude, a frequency, and a phase); static and dynamic control parameters are applied at the level of individual partials and of the collected sound. (A sketch of this summation follows.)
- Various mappings lead from the degrees of freedom in the data to the synthesis parameters.
- Another generic algorithm.
Sonification Example II
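A minimal sketch of the summation of sine partials that additive synthesis rests on; this is the basic operation, not the DIASS implementation, and the normalization step is an assumption:

```python
import numpy as np

def additive_synth(partials, dur=1.0, sr=44100):
    """Sum sine-wave partials (amplitude, frequency, phase) into one signal."""
    t = np.arange(int(dur * sr)) / sr
    sig = np.zeros_like(t)
    for amp, freq, phase in partials:
        sig += amp * np.sin(2 * np.pi * freq * t + phase)
    peak = np.max(np.abs(sig))
    return sig / peak if peak > 0 else sig   # normalize to avoid clipping

# three partials: (amplitude, frequency in Hz, phase in radians)
wave = additive_synth([(1.0, 220.0, 0.0), (0.5, 440.0, 0.0), (0.25, 660.0, 0.0)])
```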
Data Sonification and Sound Visualization
Sonification (cont'd):
- Sound creation is not done in real time.
- Sound examples:
Sound visualization:
- Used to detect sound structure; a one-to-one mapping between control parameters and visual attributes, done in real time.
- What makes this project stand out from the others is the idea of using sound visualization to better understand the produced sonification.
Sonification Example II
Data Sonification and Sound Visualization
Visualization (cont'd):
- The grid indicates the frequency spectrum: 8 octaves, corresponding approximately to the range of a piano.
- Partials are mapped to spheres: diameter – amplitude; height – frequency; color – amount of reverberation.
- Reverberation: re-echoed sound; the persistence of a sound after its source has stopped [syn: echo, sound reflection].
Sonification Example II
Sonification of Time-Dependent Data [M. Noirhomme-Fraiture]
- 2D and 3D time-dependent graphs; values are mapped to frequency.
- Outliers are discarded and the curves are pre-smoothed.
- Their experiments show that a musical or computer-science background gives a minor advantage in using sonification of 2D curves, and makes no difference in the 3D case.
- First example of data-specific sonification; a simple case.
- Main idea: sound is good for sequential/temporal data.
Sonification Example III
Sonification Example IV
Cell Music [K. Metze]: sonification of digitized fast-Fourier-transformed microscopic cell images:
- The luminance of each pixel is mapped to amplitude; the distance from the central point, to spatial frequency.
- A vector moves clockwise from the 0 to the 6 o'clock position, like the pointer of a clock, over 30 seconds; the sound of each pixel is played when the vector strikes it. (A sketch of this sweep follows.)
- Sound duration is inversely proportional to frequency.
- The most important frequencies are filtered out by geodesic reconstruction: a method defining subregions in the FFT image around regional maxima, with a luminance difference of up to 3 gray levels below the maximum.
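A simplified sketch of the clock-hand scan over a centered FFT image, emitting (spatial frequency, amplitude) events in angular order; this is a stand-in for the published pipeline, and the step count and half-plane sweep are assumptions:

```python
import numpy as np

def cell_music_sweep(fft_mag, n_steps=360):
    """Sweep a clock-hand vector from the 0 to the 6 o'clock position over a
    centered FFT magnitude image, collecting one event per struck pixel."""
    h, w = fft_mag.shape
    cy, cx = h // 2, w // 2
    events = []
    for step in range(n_steps):
        theta = np.pi * step / n_steps           # 0 (12 o'clock) to pi (6 o'clock)
        for r in range(1, min(cy, cx)):
            y = int(round(cy - r * np.cos(theta)))
            x = int(round(cx + r * np.sin(theta)))
            amp = fft_mag[y, x]                  # pixel luminance -> amplitude
            if amp > 0:
                events.append((r, amp))          # radius r -> spatial frequency
    return events

image = np.random.rand(64, 64)                   # stand-in for a cell image
fft_mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
events = cell_music_sweep(fft_mag)
```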
Sonification Example IV
Cell Music (cont'd):
- Lower frequencies predominate in malignant cells, so these cells can easily be recognized in the cell sound as slowly moving chords of lower frequencies with intense amplitudes.
- Only pixels within domes of gray-level values in the FFT image contribute to the generation of sound.
- Figures: chromatin texture of a normal epithelial cell of bronchial mucosa and of a malignant neoplastic cell (adenocarcinoma), the corresponding FFT images, and the corresponding domes (clusters) of the FFT images.
Heart Rate Sonification [M. Ballora]
- Characteristics of the interbeat intervals from an electrocardiographic recording of the heartbeat are mapped to sound characteristics.
- Each interbeat interval is mapped to a pitch, rendered as short sine waves ("grains"); higher heart rates give higher pitches.
- Successive intervals differing by more than 50 ms receive an additional annotation: a "tinkling" sound produced by phase-modulation synthesis.
- The current interval is the center of a sliding window of 300 values (≈ 5 min), sonified by a pulsing, spectrally rich waveform with all harmonics at amplitude equal to the fundamental; the window's standard deviation is mapped to the pulsing speed and the number of harmonics.
- Two smaller windows sonify running means to make changes audible on shorter time scales: the first of 15 values, sounded by a glassy hum; the second of 5 rounded values, so that changes occur with a coarser degree of precision, sounded by a clarinet-like timbre.
- Several sound files present the differences between normal, healthy heartbeats and unhealthy ones. (A sketch of the core mapping follows.)
Sonification Example V
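A sketch in the spirit of this mapping: pitch from the interbeat interval, plus the 300-value sliding-window standard deviation and the >50 ms "tinkle" flag. The pitch range and heart-rate bounds are illustrative assumptions, not Ballora's values:

```python
import numpy as np

def heart_sonification_params(ibi_ms, win=300):
    """Per-beat sonification parameters from interbeat intervals (ms)."""
    ibi = np.asarray(ibi_ms, dtype=float)
    rate = 60000.0 / ibi                                  # beats per minute
    # higher heart rate -> higher pitch (assumed 200-800 Hz range)
    pitch = np.interp(rate, [40, 180], [200.0, 800.0])
    events = []
    for i in range(len(ibi)):
        lo, hi = max(0, i - win // 2), i + win // 2 + 1
        sd = np.std(ibi[lo:hi])                           # variability around beat i
        tinkle = i > 0 and abs(ibi[i] - ibi[i - 1]) > 50  # >50 ms jump annotation
        events.append({"pitch_hz": pitch[i], "window_sd_ms": sd, "tinkle": tinkle})
    return events

beats = 800 + 60 * np.random.randn(1000)   # synthetic interbeat intervals, ms
params = heart_sonification_params(beats)
```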
LHEM for Interactive Sonification [T. Bovermann]
Local Heat Exploration Model:
- Each data item has a position, a feature vector, and a heat value.
- Data selection: an interactive process of navigating position space; an item x is selected if it is inside the selection aura, a sphere with center c and radius r.
- Exploration model: a dynamical model whose configuration is determined by the selected data. Motivated by a physical model of heat: feature vectors similar to each other produce high heat values, dissimilar ones lower values, so this local heat causes interactions between data items in feature space, depending on their local vicinity.
- For each data item, different nearby items are randomly selected to calculate the interaction with; the overlap of two heats results in an output h > 0 (green in the figure). (A toy version of this heat step follows.)
Sonification Example VI
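A toy version of one local-heat step: select items inside the aura, then let randomly chosen nearby pairs accumulate heat based on feature similarity. The Gaussian similarity kernel and the parameter names are assumptions, not the published model:

```python
import numpy as np

def lhem_heat(positions, features, center, radius, k=5, sigma=1.0):
    """Accumulate heat for items inside the selection aura from randomly
    chosen partners; similar feature vectors -> high heat contributions."""
    pos = np.asarray(positions, float)
    feat = np.asarray(features, float)
    inside = np.linalg.norm(pos - center, axis=1) <= radius   # selection aura
    idx = np.flatnonzero(inside)
    heat = np.zeros(len(pos))
    rng = np.random.default_rng(0)
    for i in idx:
        partners = rng.choice(idx, size=min(k, len(idx)), replace=False)
        for j in partners:
            if i == j:
                continue
            d = np.linalg.norm(feat[i] - feat[j])             # feature distance
            heat[i] += np.exp(-d**2 / (2 * sigma**2))         # similar -> hot
    return heat

pts = np.random.rand(200, 3)                   # positions in 3D space
fts = np.random.rand(200, 4)                   # 4D feature vectors
h = lhem_heat(pts, fts, center=np.array([0.5, 0.5, 0.5]), radius=0.25)
```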
LHEM for Interactive Sonification
Exploration display:
- The perceptual front end for the user; it links the temporal evolution of the exploration model to perceivable units.
- Many short grains (~5 ms) are superimposed to compose a grain cloud. (A sketch of such a cloud follows.)
- Grain-cloud parameters: m – the number of data items or mixed sine oscillators; e – envelope; f – frequency; a – maximum amplitude; o – onset delay (the time delaying the amplitude envelopes).
- Example explorations and example sound files:
Sonification Example VI
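A sketch of a grain cloud following the slide's parameters (frequency f, maximum amplitude a, onset delay o, ~5 ms grains); the Hann envelope stands in for the unspecified envelope e:

```python
import numpy as np

def grain_cloud(freqs, amps, onsets, grain_dur=0.005, sr=44100, total=0.5):
    """Superimpose short enveloped sine grains into one output buffer."""
    out = np.zeros(int(total * sr))
    n = int(grain_dur * sr)
    t = np.arange(n) / sr
    env = 0.5 * (1 - np.cos(2 * np.pi * np.arange(n) / (n - 1)))  # Hann window
    for f, a, o in zip(freqs, amps, onsets):
        start = int(o * sr)                    # onset delay -> sample offset
        if start + n <= len(out):
            out[start:start + n] += a * env * np.sin(2 * np.pi * f * t)
    return out

m = 100                                        # number of grains/oscillators
cloud = grain_cloud(np.random.uniform(300, 1200, m),
                    np.random.uniform(0.1, 1.0, m),
                    np.random.uniform(0.0, 0.45, m))
```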
Vortex Sound Synthesis [Y. Shin]
- 3D time-varying scalar field data; sound propagates from sound sources.
- Physically based sound synthesis: the data is mapped to acoustic parameters such as density and particle velocity.
- Several sound sources; example application: cosmic explosions.
Sonification Example VII
Vortex Sound Synthesis
Three steps:
- Synthesis: capture user movement and compute the sounds generated by the sources.
- Rendering: compute the sound heard by the listener, taking into account effects like sound-source distance.
- Localization: virtual sounds are mapped to a distribution of audio signals for real-world speakers.
Example movie file: reflection effect.
Sonification Example VII
Sonification of Numerical Fluid Flow Simulations [E. Childs]
- Real-time sonification of a computational fluid dynamics (CFD) solution process, to gain insight into the solver by listening to its progress. The solution should converge, and so should the sound.
- Mappings:
  - Pitch: velocity values in the x and y dimensions are mapped to frequency and major triads.
  - Envelope: attack, sustain, and decay are derived from the matrix coefficients for each variable at each node.
  - Delays between nodes, between columns, and at the end of each iteration convey the calculation stage.
- Sine oscillators are set up to correspond to each column of the "live" grid. The solver (for u, v, then p) sweeps through the domain from left to right: first the column at i = 1 sounds, for j = 1 through 5, and so on, with a slight pause after each column, through i = 5. Thus each iteration produces 25 notes per variable, 75 in total. (A schematic of this sweep follows.)
Sonification Example VIII
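A schematic of the left-to-right solver sweep just described: one note per live cell and variable per iteration (5 × 5 grid → 25 notes per variable, 75 per iteration). The frequency mapping is an illustrative assumption:

```python
def sweep_notes(u, v, p, base_hz=220.0, hz_per_unit=100.0):
    """Emit (variable, i, j, frequency) events column by column, with a
    pause marker after each column, mirroring the sweep order."""
    notes = []
    for var_name, field in (("u", u), ("v", v), ("p", p)):
        for i in range(5):                    # columns, left to right
            for j in range(5):                # rows within the column
                freq = base_hz + hz_per_unit * field[i][j]
                notes.append((var_name, i + 1, j + 1, freq))
            notes.append(("pause", i + 1, None, None))   # slight pause per column
    return notes

grid = [[0.1 * (i + j) for j in range(5)] for i in range(5)]
iteration_notes = sweep_notes(grid, grid, grid)
assert sum(n[0] != "pause" for n in iteration_notes) == 75   # 25 per variable
```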
Sonification of Numerical Fluid Flow Simulations, example
- Two-dimensional developing flow in a planar duct.
- 5 × 5 = 25 internal or "live" cells at which the values of u, v, and p are updated by the solver at each iteration.
- The solver converges in about 20 iterations.
- Sound file:
Sonification Example VIII
Sonification of Vector Fields [E. Klein]
- Rectilinear grids of vectors; a sphere is centered at the listener's position and random samples are taken within that sphere.
- Mapping the sampled particle's vector direction and magnitude:
  - Vector direction → sound location: the inverse of the velocity vector defines where the sound comes from.
  - Vector magnitude → sound level and pitch: the greater the magnitude of the vector, the higher the pitch, indicating faster-blowing wind.
- The authors experimented with several different base static sound samples: white noise, variations of colored noise, pre-sampled wind noises, and cabin noise from a Boeing 747. Plain white noise introduced the least distortion of the data and was easiest to interpret.
Sonification Example IX
Sonification of 3D Vector Fields
- Two consecutive vector samples are taken at random locations within the listener's head volume.
- A Hermite curve achieves C1 continuity between the two sound positions.
- The speed of the transition from one vector position to the next depends on the magnitude of the vectors involved.
Sonification Example IX
Sonification of 3D Vector Fields
Vorticity (turbulence) in the sampled area:
- If all samples in the area have roughly the same magnitude and direction, the sound is constant and smooth: low vorticity.
- If the vectors vary widely, the sound appears to shift more, giving the impression of higher turbulence; faster flow also shifts more rapidly, since the speed of the transition from one vector to the next depends on their magnitude.
- Headphones vs. surrounding speakers: headphones require a good HRTF, while speakers suffer from sound interaction between them.
- The ratio of the sample-volume size to the density of vectors within the field plays an important role. Too small a volume, and most or all samples fall very close to the same vector, so extremely low vorticity is heard even if neighboring vectors differ; too large, and so many vectors are included that the sound always sounds turbulent, because not only the relatively nearby vectors are considered. The key to a meaningful vorticity sound is a reasonable sample neighborhood for the data being examined. (A variance-based sketch of this idea follows.)
Sonification Example IX
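A sketch of the "perceived vorticity" the slides describe: sample random vectors inside the listener sphere and measure how much they disagree in direction and magnitude. This is a statistical stand-in for the audible effect, not true vorticity (the curl of the field), and the spread measures are assumptions:

```python
import numpy as np

def turbulence_proxy(field, center, radius, n_samples=64, rng=None):
    """Directional and magnitude spread of random samples in a sphere."""
    rng = rng or np.random.default_rng()
    # random points uniformly inside a sphere around the listener
    dirs = rng.normal(size=(n_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = radius * rng.random(n_samples) ** (1 / 3)
    pts = center + dirs * radii[:, None]
    vecs = np.array([field(p) for p in pts])
    mags = np.linalg.norm(vecs, axis=1)
    units = vecs / np.maximum(mags[:, None], 1e-12)
    dir_spread = 1.0 - np.linalg.norm(units.mean(axis=0))  # 0 if all aligned
    mag_spread = mags.std() / max(mags.mean(), 1e-12)
    return dir_spread + mag_spread

swirl = lambda p: np.array([-p[1], p[0], 0.1])             # a simple swirling field
t = turbulence_proxy(swirl, center=np.zeros(3), radius=0.5)
```

Note how the sample radius plays the role the slide warns about: shrinking it drives the proxy toward zero, enlarging it mixes in distant vectors and keeps it high.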
Comparison of sonification methods
Project Background
Input:
- A fluid field with velocity vector and pressure, plus potentially density, temperature, and other data.
- The field changes with time.
Output:
- Sound characterizing the given fluid field:
  - Ambient: global to the whole field.
  - Local: at the point or area of interaction.
Project: sound options
- Global: every particle in the field contributes to the sound; the farther a sound source is from the virtual pointer, the less it contributes and the quieter it is. (A sketch of this weighting follows the list.)
- Local point: only the field characteristics at the virtual pointer position contribute to the sound.
- Local region: particles within a specific subset area around the pointer contribute to the sound; a zoom factor could be added to expand or contract the space of interaction.
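A sketch of the global and local-region options under stated assumptions: the inverse-power distance falloff and the function names are illustrative, not the project's actual code:

```python
import numpy as np

def global_mix(source_pos, source_amp, pointer, falloff=2.0):
    """Global option: every particle contributes, weighted down with its
    distance from the virtual pointer (assumed inverse-power falloff)."""
    d = np.linalg.norm(source_pos - pointer, axis=1)
    weights = 1.0 / (1.0 + d ** falloff)            # farther -> quieter
    return np.sum(weights * source_amp)

def local_region_mix(source_pos, source_amp, pointer, radius, zoom=1.0):
    """Local-region option: only particles inside the (zoomed) interaction
    sphere contribute."""
    d = np.linalg.norm(source_pos - pointer, axis=1)
    mask = d <= radius * zoom                       # zoom expands/contracts the region
    return np.sum(source_amp[mask])

pos = np.random.rand(500, 3)                        # particle positions
amp = np.random.rand(500)                           # per-particle sound level
level = global_mix(pos, amp, pointer=np.array([0.5, 0.5, 0.5]))
```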
Project: sound options
- Sonification along pathlines, streaklines, streamlines, or streamtubes: map field characteristics along these traces to sound parameters, possibly starting from the virtual pointer position.
- Map changes in streamtube appearance (twist, direction, cross-sectional radius, etc.) to changes in sound parameters.
- In an unsteady flow, streamlines, streaklines, and pathlines are not necessarily the same; in a steady flow, all three coincide [C. Wassgren].
Definitions [by C. Wassgren]
- Pathline: the line that a single particle traces out over time; the line you would get from a long-exposure photograph highlighting a single particle.
- Streakline: the locus of all particles that have passed through a prescribed fixed point during a specific interval of time; the line traced by continuous injection of dye, smoke, or bubbles at a certain point.
- Streamline: a curve everywhere tangential to the instantaneous velocity vectors, i.e. everywhere parallel to the instantaneous flow direction.
Demonstration:
Sound parameters
Possibility to map field data to:
- Frequency, pitch
- Duration; envelope – the attack, sustain, and decay of the sound
- Spatial location – the direction the sound is coming from
- Loudness – intensity, amplitude of vibration
- Timbre, consonance, dissonance, beats, roughness, density, volume, vibrato, silent pauses
Perceptual measures: loudness; pitch – the base frequency of a sound; timbre – the harmonic frequencies of the sound.
Psychoacoustics
Sound parameters require a certain percentage of change for the change to be noticed; examples:
- Minimum audible angle
- Minimal noticeable intensity change
- Tone duration
A softer tone is usually masked by a louder tone if their frequencies are similar. The relations between subjective sound traits and their physical representations (e.g. how loudness depends on intensity and frequency) have to be taken into consideration.
Psychoacoustics
Curves of equal loudness level: (figure)
Project specs
Components (hardware, software, libraries):
- Max/MSP: maps data values to sound.
- Omni haptic device: navigation through the 3D fluid data.
- SGI OpenGL Performer library: graphical representation of the given field and of the virtual pointer.
- Quanta libraries: read data from the main server; blocking reads and writes, good for reading the dataset.
- VRPN libraries: non-blocking connections between the different parts of the program – between the haptic and sound programs, and between the haptic and visual programs.
Structure
Components: Solution Data Server, Max/MSP (sound) program, visualization (image) program, and haptic program with the haptic device.
- The main program runs as a Max/MSP object. Max/MSP is a graphical programming environment for sound manipulation in which you can write your own objects.
- Pink in the diagram marks direct input/output to/from the program.
- The haptic program sends the pointer position to both the main and visualization programs through the VRPN libraries.
- The solution server sends data through the Quanta libraries.
- Each rendering program is independent of the others.
Haptic program
Reads from the haptic device:
- Position
- Orientation
- Buttons
Converts the position to data-field coordinates and sends the pointer info to the sound and visual programs using the VRPN libraries:
- Pointer position
- Pointer orientation
- Interaction-sphere diameter (defining the local region)
Haptic program
Gives force feedback:
- Creates virtual walls around the dataset: provides a force disallowing movement of the device outside the data-field boundary. (A sketch of such a wall force follows.)
Other feedback is possible:
- Produce a force proportional to the flow density and its direction.
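A sketch of spring-style virtual walls: push the haptic cursor back in proportion to how far it penetrates the data-field bounding box. The linear spring model and stiffness value are illustrative assumptions:

```python
import numpy as np

def wall_force(pos, lo, hi, stiffness=500.0):
    """Restoring force that grows with penetration beyond the box [lo, hi]."""
    pos = np.asarray(pos, float)
    force = np.zeros(3)
    force += np.maximum(lo - pos, 0.0) * stiffness   # push back in from below
    force -= np.maximum(pos - hi, 0.0) * stiffness   # push back in from above
    return force

f = wall_force([1.05, 0.5, -0.02], lo=np.zeros(3), hi=np.ones(3))
# -> force pointing back into the unit-cube data field
```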
Visualization program
- Receives the dataset from the Solution Data Server (Quanta libraries).
- Receives the virtual pointer position and orientation, as well as the sphere diameter, from the haptic program (VRPN libraries).
- Displays the vector field, the virtual pointer (microphone), and the interaction sphere.
Max/MSP
Max/MSP is a graphical programming environment for sound manipulation. It allows you to write your own objects and offers large capability for very sophisticated programs. Various built-in audio signal-processing objects are used here:
- noise~ – generates white noise: a signal of uniformly distributed random values between -1 and 1.
- reson~ – filters the input signal given a center frequency and Q. The filter equation is y[n] = gain * (x[n] - r * x[n-2]) + c1 * y[n-1] + c2 * y[n-2], where r, c1, and c2 are parameters calculated from the center frequency and Q; for this bandpass filter, Q (roughly the sharpness of the filter) is defined as the center frequency divided by the filter bandwidth. (An implementation sketch of this equation follows.)
- *~ – multiplies two inputs; in this case it scales a signal's amplitude by a constant, a changing value, or another audio signal.
- ezdac~ – sends the signal received in its inlet to the corresponding audio output channel.
- line~ – outputs (left outlet) the current target value, or a ramp moving toward the target value according to the currently stored value and the target time.
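A direct implementation of the two-pole filter equation quoted above. The way r, c1, c2, and gain are derived from the center frequency and Q follows a common two-pole resonator recipe and may differ in detail from reson~'s internals:

```python
import numpy as np

def reson(x, center_hz, q, sr=44100):
    """y[n] = gain*(x[n] - r*x[n-2]) + c1*y[n-1] + c2*y[n-2]."""
    bw = center_hz / q                                # bandwidth from Q
    r = np.exp(-np.pi * bw / sr)                      # pole radius (< 1: stable)
    c1 = 2 * r * np.cos(2 * np.pi * center_hz / sr)
    c2 = -r * r
    gain = (1 - r * r) / 2                            # rough unity-gain scaling
    y = np.zeros_like(x)
    for n in range(len(x)):
        x2 = x[n - 2] if n >= 2 else 0.0
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y[n] = gain * (x[n] - r * x2) + c1 * y1 + c2 * y2
    return y

noise = np.random.uniform(-1, 1, 44100)               # one second of noise~
wind = reson(noise, center_hz=800.0, q=5.0)           # band-limited "wind" noise
```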
Max/MSP object
- Receives the dataset from the Solution Data Server (Quanta libraries).
- Receives the virtual pointer position and orientation, as well as the sphere-diameter value, from the haptic program (VRPN libraries).
- Calculates the velocity vector at the position of the virtual microphone, depending on the interaction-sphere radius: with a small (or no) sphere it interpolates from the velocity values at the vertices of the grid cell; with a large sphere, from all the vertices inside the influence sphere.
Max/MSP object
- Calculates the velocity vector at the position of the virtual microphone using Schaeffer's interpolation scheme: the farther a point is, the less influence it has on the overall value (no sphere: interpolate from the cell vertices; with a sphere: from all vertices inside it). (A sketch of this distance-weighted interpolation follows.)
- From the velocity vector at the point of interaction it derives the velocity value at the virtual cursor position and the angle between the pointer vector and the velocity vector.
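The slide's "Schaeffer's interpolation" is presumably a Shepard-style inverse-distance weighting (farther points have less influence); the sketch below assumes that, and the exponent is an illustrative choice:

```python
import numpy as np

def idw_velocity(points, velocities, query, power=2.0):
    """Inverse-distance-weighted interpolation of the velocity at the cursor."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < 1e-12):                    # query sits exactly on a vertex
        return velocities[np.argmin(d)]
    w = 1.0 / d ** power                     # farther -> less influence
    return (w[:, None] * velocities).sum(axis=0) / w.sum()

def cursor_values(points, velocities, query, pointer_dir):
    """The two derived outputs: velocity magnitude at the cursor and the
    angle between the pointer vector and the interpolated velocity."""
    v = idw_velocity(points, velocities, query)
    speed = np.linalg.norm(v)
    cosang = np.dot(v, pointer_dir) / (speed * np.linalg.norm(pointer_dir) + 1e-12)
    return speed, np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

verts = np.random.rand(8, 3)                 # cell vertices (or sphere points)
vels = np.random.rand(8, 3)
speed, angle = cursor_values(verts, vels, np.array([0.5, 0.5, 0.5]),
                             pointer_dir=np.array([1.0, 0.0, 0.0]))
```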
Max/MSP object
Two output values for both the angle and the velocity:
- Output = value / max value – a linear mapping, used for frequency.
- Output = (value / max value)^(5/3) – used for amplitude. The relationship between loudness level S and amplitude is S ~ a^(3/5) [B. Gold], so mapping values to amplitude as a = const * (data value)^(5/3) makes S ~ data value, i.e. it enforces a linear mapping to loudness level. (A short derivation follows.)
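Why the 5/3 exponent linearizes loudness, restating the slide's reasoning (S – loudness level, a – amplitude, x – the normalized data value, c – a constant):

```latex
S \propto a^{3/5}
\;\Longrightarrow\;
a \propto S^{5/3};
\qquad
a = c\,x^{5/3}
\;\Longrightarrow\;
S \propto \bigl(c\,x^{5/3}\bigr)^{3/5} = c^{3/5}\,x .
```

So with the 5/3 power applied to the data value, the perceived loudness grows linearly with the data.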
Max/MSP program
White band noise is modified in amplitude and frequency to simulate a wind effect, using the four outputs:
1) (velocity / max value)^(5/3)
2) (angle / max value)^(5/3)
3) velocity / max value
4) angle / max value
Frequency ≈ 500 + 1000·v, i.e. v ∈ [0, 1] is mapped onto [500, 1500] Hz.
Amplitude ≈ 0.5 + 0.5·v^(5/3), i.e. v^(5/3) ∈ [0, 1] is mapped onto the amplitude range [0.5, 1]. (A sketch of these mappings follows.)
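A sketch of the four outputs as reconstructed above: frequency linear in v over 500-1500 Hz, amplitude following the 5/3 loudness-linearizing power law over 0.5-1. How the two angle outputs are used inside the patch is not specified on the slide, so they are simply returned alongside:

```python
import numpy as np

def wind_params(velocity, angle, v_max, a_max):
    """Normalized mappings driving the wind-effect noise."""
    v = np.clip(velocity / v_max, 0.0, 1.0)
    a = np.clip(angle / a_max, 0.0, 1.0)
    freq_hz = 500.0 + 1000.0 * v             # linear map of v onto [500, 1500] Hz
    amp = 0.5 + 0.5 * v ** (5 / 3)           # 5/3 power law onto [0.5, 1]
    return freq_hz, amp, a ** (5 / 3), a     # plus the two angle outputs

freq, amp, angle_pow, angle_lin = wind_params(velocity=2.3, angle=45.0,
                                              v_max=5.0, a_max=180.0)
```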
Exploration example
Further work
Refining the program:
- A mesh in the visual program.
- Other possible set-ups for the Max/MSP sound program, corresponding to other choices of parameter mapping: other fluid-field parameters mapped to other sound parameters; the sphere of influence.
- Using headphones or speakers to convey spatial sound: more immersion.
Experiments.
Sound Localization: HRTFs (Head-Related Transfer Functions)
- An HRTF describes the changes in amplitude and phase of a sound as it travels from a sound source toward the outer ear [W.A. Yost].
- ILD – interaural level difference; IPD – interaural phase difference; ITD – interaural time difference. (A sketch of a classic ITD model follows.)
- Intracranial lateralization (the sound occurs inside the listener's head, right to left) vs. extracranial localization (the sound occurs in space, with azimuth, vertical, and distance dimensions).
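As an example of one of these cues, the ITD for a spherical-head model is given by Woodworth's classic approximation ITD = (r/c)(sin θ + θ); the head radius of 8.75 cm and speed of sound of 343 m/s below are typical textbook values:

```python
import numpy as np

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference (seconds) vs. source azimuth."""
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (np.sin(theta) + theta)

for az in (0, 30, 60, 90):
    print(f"azimuth {az:3d} deg -> ITD {itd_woodworth(az) * 1e6:6.1f} us")
# ITD grows from 0 straight ahead to roughly 650 us at 90 degrees
```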
Horizontal HRTF [12]
The spectrum of a sound depends on the direction it came from [S.A. Gelfand].
Further work
- Refining the program.
- Experiments: defining them, setting them up, and collecting useful information.
- Goal of the experiments: the sound has to convey useful information to the listener.
References
[1] B.N. Walker, J.T. Cothran, "Sonification Sandbox: A Graphical Toolkit for Auditory Graphs," Proceedings of the 2003 International Conference on Auditory Display, Boston, MA, July 2003.
[2] H.G. Kaper, S. Tipei, E. Wiebel, "Data Sonification and Sound Visualization," July 2000.
[3] K. Metze, R.L. Adam, N.J. Leite, "Cell Music: The Sonification of Digitalized Fast-Fourier Transformed Microscopic Images."
[4] M. Ballora, B. Pennycook, P.C. Ivanov, L. Glass, A.L. Goldberger, "Heart Rate Sonification: A New Approach to Medical Diagnosis," Leonardo, Vol. 37, No. 1, pp. 41-46, 2004.
[5] M. Noirhomme-Fraiture, O. Schöller, C. Demoulin, S. Simoff, "Sonification of Time Dependent Data."
References (cont'd)
[6] Y. Shin, C. Bajaj, "Auralization I: Vortex Sound Synthesis," Joint EUROGRAPHICS–IEEE TCVG Symposium on Visualization, 2004.
[7] E. Childs, "The Sonification of Numerical Fluid Flow Simulations," Proceedings of the 2001 International Conference on Auditory Display, Espoo, Finland, July 29–August 1, 2001.
[8] E. Klein, O.G. Staadt, "Sonification of Three-Dimensional Vector Fields," Proceedings of the SCS High Performance Computing Symposium, p. 8, 2004.
[9] G. Kramer, B. Walker, T. Bonebright, P. Cook, J. Flowers, N. Miner, J. Neuhoff, R. Bargar, S. Barrass, J. Berger, G. Evreinov, W.T. Fitch, M. Gröhn, S. Handel, H. Kaper, H. Levkowitz, S. Lodha, B. Shinn-Cunningham, M. Simoni, S. Tipei, "Sonification Report: Status of the Field and Research Agenda."
References (cont'd)
[10] C. Wassgren, C.M. Krousgrill, P. Carmody, "Development of Java Applets for Interactive Demonstration of Fundamental Concepts in Mechanical Engineering Courses."
[11] W.A. Yost, Fundamentals of Hearing: An Introduction, Fourth Edition, 2000.
[12] S.A. Gelfand, Hearing: An Introduction to Psychological and Physiological Acoustics, Fourth Edition, Revised and Expanded, 2004.
[13] T. Bovermann, T. Hermann, H. Ritter, "The Local Heat Exploration Model for Interactive Sonification," International Conference on Auditory Display, Limerick, Ireland, July 2005.
[14] B. Gold, N. Morgan, Speech and Audio Signal Processing: Processing and Perception of Speech and Music, 2000.