Crafting Sound and Space

Presentation transcript:

Crafting Sound and Space
Working with Ambisonics using blue and Csound
Linux Audio Conference 2009, Parma, Italy
Jan Jacob Hofmann – jjh@sonicarchitecture.de
Steven Yi – stevenyi@gmail.com

Overview
- Brief overview of Ambisonics
- Analyze the requirements for working with Ambisonics for composition
- Discuss different strategies of working with blue and Csound to compose using Ambisonics

Ambisonics – Characteristics
- Supports periphony (full-sphere reproduction, including height)
- The image is fairly stable and precise, independent of the position of the virtual sound relative to the speakers
- The position of the listener is not critical for getting a fairly good impression of localisation
- It can be combined with distance cues for the virtual sound source
- Once encoded, the material can be decoded for performance to any desired speaker setup in a versatile way, as long as the setup is symmetric
- Ambisonics is free and efficient

Ambisonics – Encoding/Decoding
- Encoding stores the position of the maximum of the sound energy in a set of audio files, each of which relates to a different direction in space. In this way both the energy and the velocity of a particle of air affected by a sound wave can be stored.
- Once encoded, the signal can be decoded to different speaker setups.

Ambisonics – Encoding/Decoding
- The higher the order of encoding, the higher the accuracy with which the spatial information is stored.
- The higher the number of speakers used for decoding, the higher the accuracy with which the stored spatial information is reproduced.

Ambisonics – Encoding Orders
Introduction to the Furse-Malham set of equations:
- First-order Ambisonics: encoding into 4 audio files (W, X, Y, Z) – sketched below
- Second-order Ambisonics: encoding into 9 audio files (W, X, Y, Z, R, S, T, U, V)
- Third-order Ambisonics: encoding into 16 audio files (W, X, Y, Z, R, S, T, U, V, K, L, M, N, O, P, Q)
- Fourth-order Ambisonics: …
The higher the order, the higher the minimum number of speakers recommended for a periphonic decode.
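As a concrete illustration of the first-order case, here is a minimal Python sketch of Furse-Malham-weighted first-order encoding of a mono signal; the function name and the angle conventions are assumptions made for this sketch, not details taken from the presentation.

```python
import numpy as np

def fuma_encode_first_order(signal, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order FuMa B-format (W, X, Y, Z).

    Angle conventions assumed for this sketch: azimuth measured
    counter-clockwise from the front, elevation upwards.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = signal / np.sqrt(2.0)               # omnidirectional component, FuMa -3 dB weight
    x = signal * np.cos(az) * np.cos(el)    # front/back
    y = signal * np.sin(az) * np.cos(el)    # left/right
    z = signal * np.sin(el)                 # up/down
    return w, x, y, z

# Example: encode one second of noise at 45 degrees to the left, 30 degrees up
sig = np.random.uniform(-1.0, 1.0, 44100)
w, x, y, z = fuma_encode_first_order(sig, 45.0, 30.0)
```

Higher orders add further channels whose weights are higher-order functions of the same two angles, which is why the channel count grows as listed above.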

Enhancements to Ambisonics
Because the Furse-Malham set of equations encodes only the direction of a sound, additional processing that helps convey the distance of the sound can further enhance the spatialisation by generating a sonic environment. These enhancements are modifications of the synthesized signal, not a property of recorded Ambisonics.

Common Distance Cues
- Attenuation (sketched below)
- Filtering (sketched below)
- Local reverb (in periphonic distribution)
- Global reverb (in periphonic distribution)
- Early reflections (in periphonic distribution)
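As an illustration of the first two cues, the following Python sketch applies inverse-distance attenuation and a distance-dependent one-pole low-pass to a mono signal before encoding; the 1 m reference distance and the cutoff curve are illustrative assumptions, not values from the presentation.

```python
import numpy as np

def apply_distance_cues(signal, distance_m, sample_rate=44100):
    """Attenuate and darken a mono signal according to source distance.

    Inverse-distance gain and a distance-dependent one-pole low-pass stand
    in for the attenuation and filtering cues; the constants are illustrative.
    """
    # Attenuation: 1/r law, with gain clamped at a 1 m reference distance
    gain = 1.0 / max(distance_m, 1.0)

    # Filtering: cutoff falls from ~16 kHz at 1 m towards 1 kHz far away
    cutoff_hz = max(1000.0, 16000.0 / max(distance_m, 1.0))
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)

    out = np.empty_like(signal)
    state = 0.0
    for i, x in enumerate(signal * gain):
        state += alpha * (x - state)        # one-pole low-pass, sample by sample
        out[i] = state
    return out

# Example: a source 4 m away is both quieter and duller than one at 1 m
sig = np.random.uniform(-1.0, 1.0, 44100)
near = apply_distance_cues(sig, 1.0)
far = apply_distance_cues(sig, 4.0)
```

The reverb and early-reflection cues, as the slide notes, are rendered in periphonic distribution, i.e. as additional encoded signals spread around the listener.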

Composing Strategies for Ambisonics
- Constant position vs. changing position
- Distribution of continuous sound events vs. distribution of discrete sound events

Entering Positional Data for Csound
- Positional data static within the instrument code
- Positional data given per note through p-fields (see the score-generation sketch below)
- Positional data for the instrument read from an external signal generated by a separate instrument
- Positional data for the instrument read from a score generated by a separate program
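As an illustration of the per-note approach, the following Python sketch writes Csound score lines in which position is passed through p-fields; the instrument number and the p-field layout (p6 azimuth, p7 elevation, p8 distance) are assumptions made for this sketch, not the layout used by the authors.

```python
import random

def make_score(num_notes, seed=1):
    """Generate Csound 'i' statements with position p-fields.

    Layout assumed for this sketch: p4 = amplitude, p5 = frequency,
    p6 = azimuth (deg), p7 = elevation (deg), p8 = distance (m).
    """
    random.seed(seed)
    lines = []
    for n in range(num_notes):
        start = n * 0.25
        dur = 0.5
        amp = 0.3
        freq = random.choice([220, 330, 440, 550])
        az = random.uniform(-180.0, 180.0)
        el = random.uniform(-30.0, 60.0)
        dist = random.uniform(1.0, 8.0)
        lines.append(f"i 1 {start:.3f} {dur:.3f} {amp} {freq} "
                     f"{az:.1f} {el:.1f} {dist:.2f}")
    return "\n".join(lines)

print(make_score(8))
```

Generating the score with a separate program, as in the last option above, keeps the trajectory logic outside the orchestra code entirely.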

Limitations of Using Csound Alone
- Entering note and positional data as text can be onerous, to the detriment of the compositional experience.
- Controlling a very large number of sonic events by hand is practically impossible that way.
- Text alone may not represent information about the musical material visually in an optimal way, requiring extra mental tracking of the overall piece when composing.

Using blue and Csound
- Offers a timeline to organize musical ideas in time
- Various score SoundObjects allow different ways to enter musical data (PianoRoll, Python scripting, Jmask, etc.)
- blue Instruments allow widgets to be used to configure instrument parameters
- Widgets are automatable and adjustable in real time
- And more!

Examples Using blue and Csound
- Spatialisation of a continuous sound source
- Spatialisation of discontinuous sound events (spatial granular synthesis) – sketched below
- Spatialisation in real time
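To illustrate the spatial granular case, the following Python sketch scatters short Hann-windowed grains at random directions around the listener and accumulates them into first-order FuMa B-format, reusing the encoding weights from the earlier sketch; grain length, density, and the spread of directions are illustrative assumptions.

```python
import numpy as np

def spatial_granular(source, num_grains=200, grain_len=2048,
                     sample_rate=44100, duration_s=4.0, seed=1):
    """Scatter Hann-windowed grains at random azimuth/elevation and
    accumulate them into first-order FuMa B-format (W, X, Y, Z)."""
    rng = np.random.default_rng(seed)
    out_len = int(duration_s * sample_rate)
    bformat = np.zeros((4, out_len))
    window = np.hanning(grain_len)

    for _ in range(num_grains):
        src_pos = rng.integers(0, len(source) - grain_len)
        out_pos = rng.integers(0, out_len - grain_len)
        grain = source[src_pos:src_pos + grain_len] * window

        az = rng.uniform(-np.pi, np.pi)          # random direction per grain
        el = rng.uniform(-np.pi / 6, np.pi / 3)
        w = grain / np.sqrt(2.0)
        x = grain * np.cos(az) * np.cos(el)
        y = grain * np.sin(az) * np.cos(el)
        z = grain * np.sin(el)

        for ch, comp in enumerate((w, x, y, z)):
            bformat[ch, out_pos:out_pos + grain_len] += comp
    return bformat

# Example: granulate four seconds of noise into a B-format grain cloud
src = np.random.uniform(-1.0, 1.0, 44100 * 4)
bf = spatial_granular(src)
```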

Conclusion
- Comprehensive spatialisation leads to rather complex environments, which demand a convenient way to enter the data that controls them.
- Various ways can be chosen to enter the data (parameters) for spatialisation.
- The appropriate method has to be chosen to achieve the desired result: each method offers different opportunities.