EE2F2 - Music Technology 4. Effects

Effects (FX)

Effects are applied to modify sounds in many ways – we will look at some of the more common ones. Effects processes can be broadly categorised as:
- Filtering/equalisation effects: altering the frequency content of a sound
- Dynamic effects: altering the amplitude of a sound
- Delay effects: modifying a sound using time delays or phase shifts

Equalisation Effects

Equalisation is probably the most widely used effect – so much so that it is usually provided as standard on most mixing desks. We looked at equalisation in some detail during lecture 2 on mixers. As a reminder, however, it is used for many purposes, including:
- Correcting a non-uniform microphone response
- Suppressing resonant modes
- Enhancing vocal clarity
- Suppressing high-frequency noise (hiss)
- Suppressing low-frequency rumble (e.g. traffic)
- Modifying wide-band sounds (e.g. cymbals) to avoid masking other parts
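One of the simplest equalisation building blocks is a first-order low-pass filter, which could serve the hiss-suppression job listed above. This is a generic sketch, not a design from the lecture; the function name and the `alpha` value are illustrative.

```python
def one_pole_lowpass(samples, alpha=0.2):
    """First-order (one-pole) low-pass filter.

    Smooths the signal, attenuating high-frequency content such
    as hiss. alpha in (0, 1] sets the cutoff: smaller alpha cuts
    more treble. (Illustrative value, not from the lecture.)
    """
    y = 0.0
    out = []
    for x in samples:
        y += alpha * (x - y)  # move a fraction of the way toward the input
        out.append(y)
    return out
```

Fed a constant signal, the output settles to that constant; fed a rapidly alternating signal, the output stays close to zero, which is exactly the low-pass behaviour.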

Dynamic Effects

The 'dynamics' of a musical signal refer to how loud or soft it sounds. Dynamic effects can be thought of as automatic volume controls: they mostly work by turning the volume down for loud signals and back up again for soft ones. The differences between dynamic effects lie in:
- How quickly they respond
- The length of the window over which the input volume is estimated
- How much the gain is altered in response to volume changes
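The first two of those differences live inside an envelope follower, the part of a dynamic effect that estimates the input volume. A minimal sketch, with illustrative attack/release coefficients (the names and values here are assumptions, not from the lecture):

```python
def envelope_follower(samples, attack=0.5, release=0.99):
    """Track the amplitude envelope of a signal.

    attack and release are smoothing coefficients in (0, 1):
    a smaller attack makes the follower respond faster to rising
    levels, a larger release makes it decay more slowly.
    (Illustrative values, not from the lecture.)
    """
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        if level > env:
            env = attack * env + (1 - attack) * level    # rising: respond quickly
        else:
            env = release * env + (1 - release) * level  # falling: decay slowly
        out.append(env)
    return out
```

A compressor or limiter would then compute its gain from this envelope rather than from the raw samples.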

Limiting

An everyday analogy: if the sound from the TV is below a threshold, everyone's happy; if it goes above that threshold, the volume needs turning down. A limiter applies this rule automatically.

Limiting Example

[Figure: input/output level curve – output follows input below the threshold ("no effect") and is held at the threshold above it ("limited"); time plots show the input signal and the output signal with its peaks flattened at the threshold.]

Compression

Compression is a less severe form of limiting. Note that 'compression' in this context is not the same as data compression.

[Figure: input/output level curve – below the threshold there is no effect; above it, limiting clamps the output flat, while compression reduces the slope instead.]
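The two level curves can be sketched as static gain functions. The threshold and ratio values below are illustrative, not from the lecture:

```python
def limit(level, threshold=0.7):
    """Limiter curve: the output level never exceeds the threshold."""
    return min(level, threshold)

def compress(level, threshold=0.7, ratio=4.0):
    """Compressor curve: above the threshold, level increases are
    reduced by the ratio (4:1 here) rather than clamped outright,
    which is why compression is the less severe effect."""
    if level <= threshold:
        return level
    return threshold + (level - threshold) / ratio
```

For any input above the threshold the compressor output sits between the unprocessed level and the limited level, matching the slide's picture.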

Application

Compression and limiting are used to reduce the dynamic range of a signal: they "smooth off" the peaks, so compressed sounds can be made louder on average without overpowering a mix. Compression is very commonly used for vocals and bass guitars. Usually, compression is a subtle effect – you shouldn't really notice it.

Overdrive & Distortion

Compression and limiting work by monitoring the average level, or the envelope, of the input. If the input voltage is monitored directly, with no averaging, a different effect is produced: the non-averaged version of compression is known as overdrive, and the equivalent of limiting is known as distortion. Overdrive and distortion don't just affect the signal level – they also change the shape of the waveform and thereby alter its timbre (how it sounds). They are very popular effects with electric guitarists and on electric organs.
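Acting on each sample directly makes these effects simple waveshapers. A common textbook sketch uses a tanh soft clip for overdrive and hard clipping for distortion; these specific curves are standard choices, not necessarily the ones the lecture has in mind:

```python
import math

def overdrive(x, drive=3.0):
    """Soft clipping: smoothly squashes peaks sample by sample,
    the non-averaged analogue of compression. The drive value
    is illustrative."""
    return math.tanh(drive * x)

def distortion(x, threshold=0.5):
    """Hard clipping: samples beyond the threshold are flattened
    outright, the non-averaged analogue of limiting."""
    return max(-threshold, min(threshold, x))
```

Both map a sine wave to a flattened, squarer shape, which adds harmonics and is why they change timbre as well as level.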

Dynamic Effects Summary

[Figure: the same input waveform shown after compression, limiting, overdrive and distortion.]

Delay Effects

This group of effects all work by combining two or more time-delayed versions of the input signal. Delay effects are particularly useful as they model many 'real-world' environments. The differences between them are mostly concerned with the length of the delay:
- Very short delays: chorus, flanger, phaser
- Medium delays (>100 ms): echo
- Long delays (several seconds): reverberation

Echo

The simplest possible delay effect models a single, fixed echo: the input signal is attenuated and time-delayed, and the output is the sum of the delayed signal and the original. This creates a very crude echo.

[Diagram: In splits into a direct path and a Delay → Attenuation path, summed to give Out.]
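The diagram translates almost directly into code. The delay here is in samples and the attenuation value is illustrative:

```python
def echo(samples, delay, attenuation=0.5):
    """Single fixed echo: out[n] = in[n] + a * in[n - delay].

    delay is measured in samples; attenuation (a) scales the
    delayed copy. (Illustrative default, not from the lecture.)
    """
    out = []
    for n, x in enumerate(samples):
        delayed = samples[n - delay] if n >= delay else 0.0
        out.append(x + attenuation * delayed)
    return out
```

An impulse fed in comes out twice: once immediately at full level, and once `delay` samples later at the attenuated level.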

Chorus, Flanger and Phaser

Using much the same single-delay structure are:
- Chorus: very short modulated time delay; sounds like more than one instrument
- Flanger: like chorus but slower modulation; creates a 'swirling' effect
- Phaser: like flanger but uses phase shift rather than time delay
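A chorus, then, is the echo structure with the delay length swept by a slow sine (an LFO). This sketch rounds the delay to whole samples for simplicity, where a real chorus would interpolate fractionally; all parameter names and default values are illustrative assumptions, not from the lecture:

```python
import math

def chorus(samples, rate_hz=1.5, depth=10, base_delay=20, sample_rate=44100):
    """Chorus sketch: mix the input with a copy whose delay length
    is modulated by a slow sine wave (LFO). A flanger uses the
    same structure with a slower LFO and a shorter delay.
    """
    out = []
    for n, x in enumerate(samples):
        # LFO-modulated delay in samples, rounded to the nearest sample
        lfo = math.sin(2 * math.pi * rate_hz * n / sample_rate)
        d = int(round(base_delay + depth * lfo))
        delayed = samples[n - d] if n >= d else 0.0
        out.append(0.5 * (x + delayed))
    return out
```

Because the delayed copy's pitch wobbles slightly as the delay length changes, the mix sounds like two slightly detuned instruments playing together.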

Reverberation

Real echoes are the result of multiple reflections from several surfaces. This is reverberation.

Modelling Reverberation

The most realistic way to model a reverberant environment is to:
1. Go there
2. Measure the impulse response of the room
3. Convolve that with the input

Example: the basilica at Foligno, Italy.

[Audio examples: the basilica's impulse response (about 1.5 seconds long), a synthetic organ, and the organ convolved with the reverb.]
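Step 3 is ordinary discrete convolution. The direct form below is O(N·M) and only practical for short signals (real convolution reverbs use FFT-based methods), but it shows the idea:

```python
def convolve(dry, impulse_response):
    """Convolution reverb: each output sample is a sum of past
    input samples weighted by the room's impulse response."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for n, x in enumerate(dry):
        for k, h in enumerate(impulse_response):
            out[n + k] += x * h
    return out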

Comb Filter

Processing-wise, a more economical method is to simulate the multiple reflections using comb filters. A comb filter can simulate the multiple back-and-forth reflections between a pair of parallel surfaces.

[Diagram: In is summed with a feedback path and sent to Out; the feedback path takes the output through Delay → Attenuation back to the sum. Time plots show a single input impulse becoming a decaying train of output impulses.]
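The feedback structure in the diagram looks like the echo effect, except the attenuated delay is fed back from the output, so one impulse produces an endless decaying train of echoes. Delay and attenuation values here are illustrative:

```python
def comb_filter(samples, delay, attenuation=0.6):
    """Feedback comb filter: out[n] = in[n] + a * out[n - delay].

    A single impulse comes out as a geometrically decaying train
    of echoes, like sound bouncing back and forth between a pair
    of parallel walls. (Illustrative default attenuation.)
    """
    out = []
    for n, x in enumerate(samples):
        fb = out[n - delay] if n >= delay else 0.0
        out.append(x + attenuation * fb)
    return out
```

With attenuation below 1 the echo train dies away; at or above 1 it would grow without bound, which is why the feedback gain must stay under unity.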

An Economical Reverberation Model

To model a typical room, several comb filters are used in parallel to simulate different pairs of surfaces. The delay and feedback attenuation of each filter is different in order to mix up the reflections.

[Diagram: In feeds several comb filters in parallel; their outputs are summed to give Out.]
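The parallel bank can be sketched as below. The delay lengths are chosen mutually prime so the echo trains interleave rather than pile up; the specific delay/attenuation pairs are illustrative, in the spirit of a Schroeder reverberator, not values taken from the lecture:

```python
def comb(samples, delay, attenuation):
    """Feedback comb filter: one pair of parallel surfaces."""
    out = []
    for n, x in enumerate(samples):
        fb = out[n - delay] if n >= delay else 0.0
        out.append(x + attenuation * fb)
    return out

def simple_reverb(samples, combs=((29, 0.7), (37, 0.68), (41, 0.66), (43, 0.64))):
    """Average several parallel comb filters with different
    (mutually prime) delays and attenuations so that the
    individual echo trains blur into a reverberant tail.
    (Illustrative delay/gain values.)
    """
    outs = [comb(samples, d, a) for d, a in combs]
    return [sum(vals) / len(outs) for vals in zip(*outs)]
```

Real designs typically follow the comb bank with allpass filters to increase echo density further, but the parallel combs already capture the slide's idea.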

Summary

Effects are applied for many reasons, e.g.:
- EQ: corrective treatment; creative control of tonal colour
- Dynamic effects: aid to mixing vocals (compression); modifying sounds (overdrive and distortion)
- Delay effects: special effects (chorus, flanging etc.); adding realism to synthetic sounds (reverberation)