Music Processing Roger B. Dannenberg

Overview
- Music Representation
- MIDI and Synthesizers
- Synthesis Techniques
- Music Understanding

Music Representation
- Acoustic Level: sound, samples, spectra
- Performance Information: timing, parameters
- Notation Information: parts, clefs, stem direction
- Compositional Structure: notes, chords, symbolic structure

Performance Information
- MIDI bandwidth is 3 KB/s, or 180 KB/min
- More typical: 3 KB/minute, 180 KB/hour
  - Complete Scott Joplin: 1 MB
  - Output of 50 composers (400 days of music): 500 MB (1 CD-ROM)
- Synthesis of acoustic instruments is a problem

Music Notation
- Compact, symbolic representation
- Does not capture performance information
- Expressive “performance” not fully automated

Compositional Structure
- Example: Nyquist (free software!)

    ;; a melody: four notes in sequence, each stretched by q (a quarter note)
    (defun melody1 ()
      (seq (stretch q (note a4) (note b4) (note cs5) (note d5))))

    ;; a second voice (elided on the slide)
    (defun counterpoint () …)

    ;; play both voices at the same time
    (defun composition ()
      (sim (melody1) (counterpoint)))

    ;; transpose up 4 semitones and render
    (play (transpose 4 (composition)))

MIDI: Musical Instrument Digital Interface
- Musical performance information (piano keyboard model):
  - key presses and releases
  - “instrument” selection (by number)
  - sustain pedal, switches
  - continuous controls: volume pedal, pitch bend, aftertouch
  - very compact (human gesture < 100 Hz bandwidth)

MIDI (cont’d)
- Point-to-point connections: MIDI IN, OUT, THRU
  - Channels
- No time stamps: (almost) everything happens in real time
- Asynchronous serial, 8-bit bytes + start + stop bits, 31.25K baud = 1 MHz/32
  (at 10 bits per byte, that is roughly 3,125 bytes/s, the 3 KB/s figure above)

MIDI Message Formats

    Status (hex)   Data bytes          Message
    8 ch           key#   velocity     Key Up (Note Off)
    9 ch           key#   velocity     Key Down (Note On)
    A ch           key#   pressure     Polyphonic Aftertouch
    B ch           ctrl#  value        Control Change
    C ch           index#              Program Change
    D ch           pressure            Channel Aftertouch
    E ch           lo 7   hi 7         Pitch Bend
    F0             … DATA … F7         System Exclusive
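
To make the byte layout concrete, here is a minimal Python sketch (not from the lecture) that packs and unpacks a couple of the channel messages above; the helper names note_on, pitch_bend, and describe are my own.

    # Sketch: packing and unpacking MIDI channel messages as tabulated above.
    # Function and variable names are illustrative, not from the lecture.

    def note_on(channel, key, velocity):
        """Build a 3-byte Key Down (Note On) message: status 0x90 | channel."""
        return bytes([0x90 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

    def pitch_bend(channel, value):
        """Build a Pitch Bend message: 14-bit value split into lo-7 and hi-7 bytes."""
        value &= 0x3FFF
        return bytes([0xE0 | (channel & 0x0F), value & 0x7F, (value >> 7) & 0x7F])

    def describe(msg):
        """Decode the status byte of a channel message."""
        kinds = {0x8: "Key Up", 0x9: "Key Down", 0xA: "Poly Aftertouch",
                 0xB: "Control Change", 0xC: "Program Change",
                 0xD: "Channel Aftertouch", 0xE: "Pitch Bend"}
        status = msg[0]
        return kinds.get(status >> 4, "System"), status & 0x0F, list(msg[1:])

    print(describe(note_on(0, 60, 100)))    # ('Key Down', 0, [60, 100])
    print(describe(pitch_bend(3, 0x2000)))  # ('Pitch Bend', 3, [0, 64])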

Standard MIDI Files
- Key point: must encode timing information
- A file contains 1 or more tracks; a track is a sequence of events; each event is a delta time followed by MIDI data or a meta-event (status FF)
- Delta times use a variable-length encoding (leading bytes are omitted when zero)
- Interleave time differences with MIDI data...
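
A minimal Python sketch (not from the lecture) of the variable-length delta-time encoding: seven data bits per byte, with the top bit set on every byte except the last.

    # Sketch: variable-length delta times, as used in Standard MIDI Files.
    # Seven data bits per byte; the top bit is set on every byte except the last.

    def encode_delta(value):
        """Encode a non-negative delta time as a variable-length byte string."""
        out = [value & 0x7F]
        value >>= 7
        while value:
            out.append((value & 0x7F) | 0x80)
            value >>= 7
        return bytes(reversed(out))

    def decode_delta(data, pos=0):
        """Decode a variable-length value starting at data[pos]; return (value, next_pos)."""
        value = 0
        while True:
            byte = data[pos]
            pos += 1
            value = (value << 7) | (byte & 0x7F)
            if not byte & 0x80:
                return value, pos

    assert encode_delta(0) == b"\x00"
    assert encode_delta(480) == b"\x83\x60"     # 480 ticks = 0x1E0
    assert decode_delta(b"\x83\x60") == (480, 2)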

Music Synthesis Introduction
- Primary issue is control
  - No control → Digital Audio (start, stop, ...)
  - Complete control → Digital Audio (S[0], S[1], S[2], ...)
  - Parametric control → Synthesis

Music Synthesis Introduction (cont’d)
- What parameters?
  - pitch
  - loudness
  - timbre (e.g. which instrument)
  - articulation, expression, vibrato, etc.
  - spatial effects (e.g. reverberation)
- Why synthesize?
  - high-level representation provides precision of specification and supports interactivity

Additive Synthesis
- amplitude A[i] and frequency f[i] specified for each partial (sinusoidal component)
- potentially 2n times more control samples than signal samples!

Additive Synthesis (cont’d)
- often use piece-wise linear control envelopes to save space
- still difficult to control because of so many parameters
- and parameters do not match perceptual attributes
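
A minimal numpy sketch of the idea, assuming a 44.1 kHz sample rate and made-up partials and breakpoints: each partial gets a frequency and a piece-wise linear amplitude envelope, and the sinusoids are summed.

    # Sketch: additive synthesis with piece-wise linear amplitude envelopes.
    # All specific numbers (sample rate, breakpoints, partials) are illustrative.
    import numpy as np

    def additive(partials, duration, sample_rate=44100):
        """Sum sinusoidal partials; each partial is (frequency_hz, [(time, amp), ...])."""
        t = np.arange(int(duration * sample_rate)) / sample_rate
        out = np.zeros_like(t)
        for freq, breakpoints in partials:
            times, amps = zip(*breakpoints)
            envelope = np.interp(t, times, amps)     # piece-wise linear envelope
            out += envelope * np.sin(2 * np.pi * freq * t)
        return out

    # Three harmonics of A3 (220 Hz), each with its own attack/decay envelope.
    signal = additive(
        [(220.0, [(0.0, 0.0), (0.05, 1.0), (1.0, 0.0)]),
         (440.0, [(0.0, 0.0), (0.02, 0.5), (1.0, 0.0)]),
         (660.0, [(0.0, 0.0), (0.01, 0.3), (0.5, 0.0), (1.0, 0.0)])],
        duration=1.0)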

Table-Lookup Oscillators
- If signal is periodic, store one period
- Control parameters: pitch, amplitude, waveform
- Each sample: add the frequency (phase increment) to the phase, look up the table at that phase, multiply by the amplitude
- Efficient, but...
- Spectrum is static
- (Note that phase and frequency are fixed point or floating point numbers)
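
A minimal Python sketch of the phase-accumulator structure described above; the table size, sample rate, and use of linear interpolation are my assumptions, not details from the lecture.

    # Sketch: table-lookup (wavetable) oscillator with a floating-point phase accumulator.
    # Table size, sample rate, and linear interpolation are illustrative choices.
    import numpy as np

    TABLE_SIZE = 2048
    SAMPLE_RATE = 44100
    # One period of the stored waveform (here a plain sine; any single cycle works).
    wavetable = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

    def oscillator(frequency_hz, amplitude, num_samples):
        """Generate samples by stepping a phase through the wavetable."""
        out = np.empty(num_samples)
        phase = 0.0
        increment = frequency_hz * TABLE_SIZE / SAMPLE_RATE  # table slots per sample
        for n in range(num_samples):
            i = int(phase)                    # integer part: table index
            frac = phase - i                  # fractional part: interpolate
            a, b = wavetable[i], wavetable[(i + 1) % TABLE_SIZE]
            out[n] = amplitude * (a + frac * (b - a))
            phase = (phase + increment) % TABLE_SIZE
        return out

    tone = oscillator(440.0, 0.5, SAMPLE_RATE)  # one second of A4 at half amplitude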

FM Synthesis
- Usually use sinusoids
- “carrier” and “modulator” are both at audio frequencies
- If frequencies are in a simple ratio (R), output spectrum is periodic
- Output varies from sinusoid to complex signal as MOD increases

    out = AMPL · sin(2π · FREQ · t + MOD · sin(2π · R · FREQ · t))

FM Synthesis (cont’d)
- Interesting sounds,
- Time-varying spectra, and...
- Low computation requirements
- Often uses more than 2 oscillators
… but …
- Hard to recreate a specific waveform
- No successful analysis procedure
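
A minimal numpy sketch of the two-oscillator formula above, with the modulation index MOD decaying over time so the spectrum evolves; the carrier frequency, ratio R, and envelope are illustrative choices, not values from the lecture.

    # Sketch: simple two-oscillator FM,
    # out = AMPL * sin(2*pi*FREQ*t + MOD(t) * sin(2*pi*R*FREQ*t)).
    # Frequency, ratio R, and the modulation-index envelope are illustrative.
    import numpy as np

    def fm_tone(freq, ratio, mod_peak, duration, sample_rate=44100, ampl=0.8):
        """Carrier at freq, modulator at ratio*freq, index decaying from mod_peak to 0."""
        t = np.arange(int(duration * sample_rate)) / sample_rate
        mod_index = np.linspace(mod_peak, 0.0, t.size)  # spectrum evolves as MOD decreases
        modulator = np.sin(2 * np.pi * ratio * freq * t)
        return ampl * np.sin(2 * np.pi * freq * t + mod_index * modulator)

    # An inharmonic ratio gives a bell-like, non-periodic spectrum.
    bell_like = fm_tone(freq=220.0, ratio=1.4, mod_peak=5.0, duration=2.0)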

Sample-based Synthesis
- Samplers store waveforms for playback
- Sounds are “looped” to extend duration (Attack | Loop | Loop again | ...)
- Spectrum is static (as in table-lookup), so:
  - different samples are used for different pitches
  - simple effects are added: filter, vibrato, amplitude envelope
  - attack portion, where spectrum changes fastest, added to front
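
A minimal Python sketch of attack-then-loop playback as described above; the loop boundaries, the synthetic "recording", and the lack of crossfading at the splice are illustrative simplifications.

    # Sketch: sample playback that plays the attack once, then repeats a loop region.
    # Loop boundaries and the absence of crossfading are illustrative simplifications.
    import numpy as np

    def play_looped(sample, loop_start, loop_end, num_samples):
        """Output the attack [0:loop_start] once, then loop [loop_start:loop_end] as needed."""
        attack = sample[:loop_start]
        loop = sample[loop_start:loop_end]
        pieces = [attack]
        length = len(attack)
        while length < num_samples:
            pieces.append(loop)
            length += len(loop)
        return np.concatenate(pieces)[:num_samples]

    # A short "recorded" note (here just synthetic data) sustained to 3x its length.
    recorded = np.random.randn(10000) * np.linspace(1.0, 0.3, 10000)
    sustained = play_looped(recorded, loop_start=4000, loop_end=9000, num_samples=30000)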

Physical Models
- Additive, FM, and sampling: more-or-less perception-based.
- Physical Modeling is source-based: compute the wave equation, simulate attached reeds, bows, etc.
- Example: Reed → Bore → Bell

Physical Models (cont’d)
- Difficult to control, and...
- Can be very computationally intensive
… but...
- Produce “characteristic” acoustic sounds.
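
The reed/bore/bell model above is too involved for a short sketch, but the flavor of source-based synthesis can be shown with the much simpler Karplus-Strong plucked string (a delay line with an averaging lowpass filter in its feedback loop). Note this is a different instrument model than the lecture's example, and every parameter here is illustrative.

    # Sketch: Karplus-Strong plucked string, a very simple physical model
    # (delay line + averaging lowpass in a feedback loop). Illustrative only;
    # the lecture's reed/bore/bell example is a more elaborate model of this kind.
    import numpy as np

    def pluck(frequency_hz, duration, sample_rate=44100, decay=0.996):
        """Excite a delay line with noise, then recirculate it through a lowpass filter."""
        period = int(sample_rate / frequency_hz)           # delay-line length in samples
        delay_line = np.random.uniform(-1.0, 1.0, period)  # the "pluck": random initial state
        out = np.empty(int(duration * sample_rate))
        for n in range(out.size):
            out[n] = delay_line[n % period]
            # feedback: average of two adjacent samples, slightly attenuated
            nxt = (n + 1) % period
            delay_line[n % period] = decay * 0.5 * (delay_line[n % period] + delay_line[nxt])
        return out

    string_tone = pluck(196.0, duration=2.0)  # roughly a G3 pluck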

Music Understanding
- Introduction
- Score Following, Computer Accompaniment
- Interactive Performance
- Style Recognition
- Conclusions