
1 Audio
Audio is a wave resulting from an air-pressure disturbance that reaches our eardrum, generating the sound we hear.
–Humans can hear frequencies in the range 20–20,000 Hz.
‘Acoustics’ is the branch of physics that studies sound.

2 Reading Assignment
Media Coding and Content Processing: Chapter 3 (Sections 3.1 … 3.4)
Multimedia Signals and Systems: Chapter 2
Fundamentals of Multimedia: Chapter 6 (Section 6.1)
Self-reading material: Section 2.2 of Multimedia Signals and Systems (Human Auditory System) will be on Quiz 2.

3 Characteristics of Audio
Audio has normal wave properties:
–Reflection
–Refraction
–Diffraction
A sound wave has several different properties:
–Amplitude (loudness/intensity)
–Frequency (pitch)
–Envelope (waveform)

4 Audio Amplitude
Audio amplitude is often expressed in decibels (dB).
Sound pressure levels (loudness or volume) are measured on a logarithmic scale (the decibel, dB), which describes a ratio.
–Suppose we have two loudspeakers, the first playing a sound with power P1 and the second playing a louder version of the same sound with power P2, with everything else (distance, frequency) kept the same.
–The difference in decibels between the two is defined to be 10 log10(P2/P1) dB.
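As a quick check of the formula (a minimal sketch; the values are illustrative), doubling the power adds about 3 dB:

% Level difference between two sounds of power P1 and P2
P1 = 1.0;                  % reference power (arbitrary units)
P2 = 2.0;                  % twice the power
dB = 10*log10(P2/P1)       % ~= 3.01 dB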

5 Audio Amplitude
In microphones, audio is captured as an analog signal (continuous in amplitude and time) that responds proportionally to the sound pressure p.
The power in a sound wave, all else being equal, goes as the square of the pressure.
–Pressure is expressed in dynes/cm2.
The difference in sound pressure level between two sounds with pressures p1 and p2 is therefore 20 log10(p2/p1) dB.
The “acoustic amplitude” of a sound is measured in reference to p1 = pref = 0.0002 dynes/cm2.
–The human ear is insensitive to sound pressure levels below pref.
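Since power goes as pressure squared, the 10 in the power formula becomes a 20 here. A small sketch (the measured pressure is made up for illustration):

% Sound pressure level relative to the reference pressure
pref = 0.0002;             % dynes/cm^2, threshold of hearing
p    = 0.2;                % a measured pressure, dynes/cm^2
SPL = 20*log10(p/pref)     % = 60 dB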

6 Audio Amplitude
Intensity      Typical Examples
0 dB           Threshold of hearing
20 dB          Rustling of paper
25 dB          Recording studio (ambient level)
40 dB          Residence (ambient level)
50 dB          Office (ambient level)
60–70 dB       Typical conversation
80 dB          Heavy road traffic
90 dB          Home audio listening level
120–130 dB     Threshold of pain
140 dB         Rock singer screaming into microphone

7 Audio Frequency
Audio frequency is the number of high-to-low pressure cycles that occur per second.
–In music, frequency is referred to as pitch.
Different living organisms have different abilities to hear high-frequency sounds:
–Dogs: up to 50 kHz
–Cats: up to 60 kHz
–Bats: up to 120 kHz
–Dolphins: up to 160 kHz
–Humans: 20 Hz–20 kHz, called the audible band. The exact audible band differs from person to person and deteriorates with age.

8 Audio Frequency
The frequency range of sounds can be divided into:
–Infra sound: 0 Hz – 20 Hz
–Audible sound: 20 Hz – 20 kHz
–Ultrasound: 20 kHz – 1 GHz
–Hypersound: 1 GHz – 10 GHz
Sound waves propagate at a speed of around 344 m/s in humid air at room temperature (20 °C).
–Hence, audio wavelengths typically vary from 17 m (corresponding to 20 Hz) down to 1.7 cm (corresponding to 20 kHz).
Sound can be divided into periodic (e.g. whistling wind, bird songs, sound from music) and nonperiodic (e.g. speech, sneezes, and rushing water).
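The wavelengths follow from lambda = v/f. A one-line check:

% Wavelength = speed of sound / frequency
v = 344;                   % m/s in air at 20 °C
f = [20 20e3];             % edges of the audible band, Hz
lambda = v ./ f            % ~= [17.2 0.0172] m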

9 Audio Frequency
Most sounds are combinations of different frequencies and wave shapes. Hence, the spectrum of a typical audio signal contains one or more fundamental frequencies, their harmonics, and possibly a few cross-modulation products.
The harmonics and their amplitudes determine the tone quality, or timbre.
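A minimal sketch of this idea (the frequencies and amplitudes are arbitrary): a fundamental plus two weaker harmonics already has a distinct timbre compared to a pure sine.

% A 220 Hz fundamental plus its 2nd and 3rd harmonics
fs = 8000; t = 0:1/fs:1;             % one second at an 8 kHz sampling rate
f0 = 220;                            % fundamental frequency, Hz
y = sin(2*pi*f0*t) + 0.5*sin(2*pi*2*f0*t) + 0.25*sin(2*pi*3*f0*t);
sound(y/max(abs(y)), fs)             % normalize and play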

10 Audio Envelope
A generated sound does not last forever; the rise and fall of its intensity over time is known as the envelope.
A typical envelope consists of four sections: attack, decay, sustain, and release.

11 Audio Envelope
Attack: the intensity of a note increases from silence to a high level.
Decay: the intensity decreases to a middle level.
Sustain: the middle level is sustained for a short period of time.
Release: the intensity drops from the sustain level to zero.
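A piecewise-linear sketch of such an envelope (all durations and levels are illustrative):

% ADSR envelope built from straight-line segments
fs = 8000;                             % samples per second
attack  = linspace(0, 1,   0.05*fs);   % silence -> peak
decay   = linspace(1, 0.6, 0.10*fs);   % peak -> sustain level
sustain = 0.6*ones(1,      0.50*fs);   % hold the middle level
release = linspace(0.6, 0, 0.20*fs);   % sustain -> silence
env = [attack decay sustain release];
plot((0:numel(env)-1)/fs, env)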

12 Audio Envelope
Different instruments have different envelope shapes:
–Violin notes have slower attacks and a longer sustain period.
–Guitar notes have quick attacks and a slower release.

13 Audio Signal Representation
Waveform representation
–Focuses on the exact representation of the produced audio signal.
Parametric representation
–Focuses on modeling the signal-generation process.
–Two major forms:
  Music synthesis (the MIDI standard)
  Speech synthesis

14 Waveform Representation
Audio generation and playback pipeline:
Audio Source → Audio Capture → Sampling & Digitization → Storage or Transmission → Receiver → Digital-to-Analog → Playback (speaker) → Human Ear

15 Digitization To get audio (or video for that matter) into a computer, we must digitize it (convert it into a stream of numbers). This is achieved through sampling, quantization, and coding.

16 Example Signal
[Figure: an example analog signal, amplitude plotted against time]

17 Sampling
Sampling: the process of reading the values of a continuous-time signal at discrete instants of time.

18 Sampling Process
1. The time axis is divided into fixed intervals.
2. A reading of the instantaneous value of the analog signal is taken at the beginning of each time interval (the interval is determined by a clock pulse).
3. The frequency of the clock is called the sampling rate or sampling frequency.
The sampled value is held constant for the next time interval (sample-and-hold circuit), as sketched below.
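A small sketch of sampling with a hold (signal and rates chosen arbitrarily):

% Sample a 5 Hz sine at fs = 50 Hz and hold each value until the next sample
f = 5; fs = 50;
t_cont = 0:1e-4:1;                 % dense grid standing in for the analog signal
t_samp = 0:1/fs:1;                 % sampling instants
x_samp = sin(2*pi*f*t_samp);       % instantaneous readings
stairs(t_samp, x_samp); hold on    % staircase = sample-and-hold output
plot(t_cont, sin(2*pi*f*t_cont), '--')   % original signal for comparison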

19 Sampling Example
[Figure: the example signal with its sampled values, amplitude plotted against time]

20 Quantization
The process of converting continuous sample values into discrete values.
–The size of a quantization interval is called the quantization step.
–How many values can a 4-bit quantization represent? 2^4 = 16. 8-bit? 256. 16-bit? 65,536.
The higher the quantization resolution, the better the resulting sound quality.
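A minimal sketch of uniform quantization (the signal and bit depth are chosen for illustration):

% Uniform 4-bit quantization of a sine wave in [-1, 1]
bits = 4;
step = 2 / 2^bits;                 % quantization step over a range of width 2
x  = sin(2*pi*5*(0:1/500:1));      % sampled signal
xq = step * round(x/step);         % round each sample to the nearest level
plot(x); hold on; plot(xq)         % original vs. quantized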

21 Quantization Example Amplitude

22 Coding
The process of representing quantized values digitally, e.g. as fixed-length binary codewords.
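For example (a sketch; the level indices are arbitrary), 4-bit quantization level indices map to 4-bit codewords:

% Encode quantization level indices as 4-bit binary codewords
idx = [0 5 12 15];                 % example level indices
codes = dec2bin(idx, 4)            % -> 0000, 0101, 1100, 1111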

23 Analog to Digital Conversion
[Figure: the complete analog-to-digital chain — sampling, quantization, and coding — applied to the example signal]

24 MIDI Interface
Musical sound, unlike other types of sound, can be generated. Therefore, the Musical Instrument Digital Interface (MIDI) standard has been developed.
–The standard emerged in its final form in August 1982.
–It is a music description language in binary form.
A given piece of music is represented by a sequence of numbers that specify how the musical instruments are to be played at different time instants.

25 MIDI Components
A MIDI studio consists of:
–Controller: a musical performance device that generates a MIDI signal when played.
  MIDI signal: a sequence of numbers representing a certain note.
–Synthesizer: a piano-style keyboard musical instrument that simulates the sound of real musical instruments.
–Sequencer: a device or a computer program that records a MIDI signal.
–Sound module: a device that produces pre-recorded samples when triggered by a MIDI controller or sequencer.

26 MIDI Components
[Figure: how the controller, sequencer, synthesizer, and sound module are connected in a MIDI studio]

27 MIDI Data
Describes:
–Start/end of a score
–Intensity
–Instrument
–Basic frequency
–…
MIDI file organization: a header chunk followed by one track chunk per track (Track 1, Track 2, …); each track chunk consists of a track header and the actual music data, a stream of status bytes, each followed by its data bytes.
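For concreteness, here is what one status byte plus its data bytes look like for a Note On message, one of the standard MIDI channel messages (the channel, note, and velocity values are illustrative):

% A MIDI Note On message: status byte 0x9n (n = channel), then two data bytes
channel  = 0;                      % channels are numbered 0-15 on the wire
note     = 60;                     % note number: 60 = middle C
velocity = 100;                    % how hard the key was struck (0-127)
msg = [bitor(144, channel), note, velocity]   % 144 decimal = 0x90, Note On
% dec2hex(msg) -> 90 3C 64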

28 MIDI Data
The MIDI standard specifies 16 channels.
–A MIDI device is mapped onto one channel, e.g. a MIDI guitar controller, a MIDI wind machine, or a drum machine.
–128 instruments are identified by the MIDI standard, for example:
  Electric grand piano (2)
  Telephone ring (124)
  Helicopter (125)
  Applause (126)
  Gunshot (127)

29 MIDI Instruments
Instruments can play a single note at a time (e.g. flute) or multiple notes concurrently (e.g. organ).
The maximum number of notes that can be played concurrently is an important property of a synthesizer:
–3 to 16 notes per channel.

30 3D Sound Projection
Experiments have shown that two-channel (stereo) sound produces the best hearing effect for most people.

31 Spatial Sound
Direct sound path: the shortest path between the sound source and the listener.
–It carries the first sound waves to the listener’s head.
All other sound paths are reflected.
All sound paths leading to the human ear are influenced by the listener’s individual head-related transfer function (HRTF).
–The HRTF is a function of the path’s direction (horizontal and vertical angles) relative to the listener.
[Figure: pulse response in a closed room]

32 Question What determines the quality of the digitization process?

33 Basic Types of a Digital Signal
Unit impulse function δ[n]: equals 1 at n = 0 and 0 elsewhere.
Unit step function u[n]: equals 1 for n ≥ 0 and 0 elsewhere.
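Both are easy to generate and plot (a short sketch):

% Unit impulse delta[n] and unit step u[n] on n = -5..5
n = -5:5;
delta = double(n == 0);            % 1 at n = 0, else 0
u     = double(n >= 0);            % 1 for n >= 0, else 0
subplot(2,1,1); stem(n, delta); title('\delta[n]')
subplot(2,1,2); stem(n, u); title('u[n]')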

34 Sinc Function
sinc(x) = sin(πx)/(πx), with sinc(0) = 1.
[Figure: plot of the sinc function]

35 To plot the sinc function in Matlab
% sinc (Signal Processing Toolbox) computes sin(pi*x)/(pi*x)
x = linspace(-5,5);   % 100 points from -5 to 5
y = sinc(x);
plot(x,y);

36 Determining the Sampling Rate Suppose we are sampling a sine wave. How often do we need to sample it to figure out its frequency?

37 Sampling Theorem
If the highest frequency contained in an analog signal is B and the signal is sampled at a rate F > 2B, then the signal can be exactly recovered from its sample values.
F = 2B is called the Nyquist rate.
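Sampling below the Nyquist rate causes aliasing: the samples are indistinguishable from those of a lower-frequency sine. A quick sketch (frequencies chosen for illustration):

% A 7 Hz sine sampled at 10 Hz, below its Nyquist rate of 14 Hz
f = 7;
t_fine = 0:1e-3:1;                 % dense grid standing in for the analog signal
t_low  = 0:0.1:1;                  % fs = 10 Hz < 2*7 Hz
plot(t_fine, sin(2*pi*f*t_fine)); hold on
stem(t_low, sin(2*pi*f*t_low))     % these samples also fit a 3 Hz sinusoid (the alias)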

38 Quantization Levels
The number of quantization levels determines the amplitude fidelity of the digital signal relative to the original analog signal.
Quantization error (noise) is the maximum difference between the quantized sample values and the analog signal values.
The digital signal quality relative to the original signal is measured by the signal-to-noise ratio (SNR):
–SNR = 20 log10(S/N), where S is the maximum signal amplitude and N is the quantization noise (= the quantization step).
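Plugging in the step size of an n-bit quantizer shows the familiar rule of thumb that each extra bit adds about 6 dB of SNR (a sketch; the full-scale range of 2S is an assumption):

% SNR = 20*log10(S/N) with N = quantization step = 2S / 2^n
n = [4 8 16];                      % bits per sample
S = 1;                             % maximum signal amplitude
N = 2*S ./ 2.^n;                   % quantization step
SNR = 20*log10(S./N)               % ~= [18 42 90] dB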

