Richard Dobson and Dr Archer Endrich
Composers Desktop Project
CAS Wiltshire Hub, Kingdown School, Warminster
14 November 2012



The Science of Sound – a Micro-history
We stand on the shoulders of many giants. "Musical training is a more potent instrument than any other, because rhythm and harmony find their way into the inward places of the soul, on which they mightily fasten, imparting grace, and making the soul of him who is rightly educated graceful." (Plato)
Pythagoras, Guido d'Arezzo, Hermann von Helmholtz, Max Mathews

Some topics in SMC
- Digital Audio: sampling, synthesis, processing
- Music Representation and Analysis
- Performance and Interactive Composition
- Languages for Music, and Algorithmic Composition
- Software and Hardware Design
- Acoustics and Psychoacoustics
- Sonification and Audification

The Shapes of Sound
A sound wave is bipolar: it comprises alternating displacements from a central zero position. For a sound wave, the zero line corresponds to silence. Displacements are both positive and negative, and over time should sum to zero (area above the line = area below).

Sampling Sound 1
The overall process is generally called digitising, and it has two aspects: we need to digitise both amplitude and time.
- Quantisation of amplitude to discrete levels (represented in N-bit words)
- Sampling, which strictly refers to the discretising of time (the sampling rate); hence the technical literature refers to periodic sampling, discrete-time signals, etc.
Quantisation introduces quantisation error, which manifests as quantisation noise.
Sampling depends on a very accurate clock. Errors in timing are known as jitter, though this is not something we usually need to worry about: soundcard clocks are based on crystals, just as CPU clocks are. Nothing is perfect, however; two nominally identical clocks will never match exactly, so independent devices drift out of sync over time.
Sound Example 1: quantisation noise for N = 16, 12, 10, 8, 6, 4, 2, 1
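The quantisation step can be sketched in a few lines of Python. This is an illustrative sketch only (not CDP code): a signed value in the normalised ±1.0 range is rounded to one of 2^N integer levels and converted back, and the difference between input and output is the quantisation error heard as noise.

```python
import math

def quantise(x, nbits):
    """Mid-tread uniform quantisation of a normalised sample x in [-1.0, 1.0]
    to an nbits signed-integer level, then back to a float.
    The difference x - quantise(x, nbits) is the quantisation error,
    which is heard as quantisation noise."""
    levels = 2 ** (nbits - 1)             # e.g. 32768 for 16-bit
    q = math.floor(x * levels + 0.5)      # round to the nearest integer level
    q = max(-levels, min(levels - 1, q))  # clamp to the asymmetric integer range
    return q / levels

# The maximum error roughly halves (improves by ~6 dB) per extra bit
for n in (16, 8, 4, 2):
    err = max(abs(x / 100 - quantise(x / 100, n)) for x in range(-100, 101))
    print(n, err)
```

Re-quantising a 16-bit recording through this function at N = 12, 10, 8, ... 1 reproduces the degradation heard in Sound Example 1.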

Quantisation – the Challenge
With integers, N bits gives us 2^N levels: an even number. So where is the middle?
Standard quantisation is called mid-tread: qval = floor(val + 0.5)
- Includes a zero-valued sample (two's complement arithmetic)
- Asymmetrical: e.g. the 16-bit range is -32768 to +32767
- Tiny values quantise to zero, so are lost
- The standard choice for audio codecs
The alternative is mid-rise quantisation: qval = floor(val)
- No zero value
- Symmetric for all level values
- Bipolar one-bit quantisation is possible
- Tiny values quantise to a quasi square wave
[Figure: 4-bit quantisation]
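The two schemes above differ only in where the rounding boundaries sit. A minimal sketch, using the slide's own formulas on an integer level grid (step = 1.0), shows the behaviour for tiny inputs:

```python
import math

def mid_tread(x, step=1.0):
    # qval = floor(val + 0.5): rounds to the nearest level.
    # Zero is a valid output level, so tiny inputs quantise to silence.
    return step * math.floor(x / step + 0.5)

def mid_rise(x, step=1.0):
    # qval = floor(val), output taken at the half-step points.
    # There is no zero level, so tiny inputs quantise to +-step/2,
    # producing a quasi square wave.
    return step * (math.floor(x / step) + 0.5)
```

Feeding a very quiet signal through `mid_tread` yields all zeros (the signal is lost); through `mid_rise` it yields an alternating ±0.5-step square wave.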

Sampling Sound 2: Nyquist
The modified Nyquist-Shannon sampling theorem: perfect reconstruction requires phase independence.
- Cosine phase: amplitude = 1 (what the textbooks usually show)
- Sine phase: amplitude = 0 (what the textbooks usually don't show)
For perfect reconstruction, input frequencies must be below Nyquist. The Nyquist limit itself (sr/2) defines the onset of frequency aliasing, where the Nyquist frequency aliases with DC. Put another way: we need more than two samples per cycle.
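The phase dependence at the Nyquist limit is easy to verify numerically. In this sketch (toy values: an 8 Hz sample rate), sampling a cosine at exactly sr/2 captures the full ±1 amplitude, while a sine of the same frequency lands on its zero crossings every time:

```python
import math

sr = 8            # tiny sample rate, purely for illustration
f = sr / 2        # the Nyquist frequency itself

# Cosine phase: every sample lands on a peak or trough (+1, -1, +1, ...)
cosine = [math.cos(2 * math.pi * f * n / sr) for n in range(8)]

# Sine phase: every sample lands on a zero crossing, so the
# sampled signal is indistinguishable from silence
sine = [math.sin(2 * math.pi * f * n / sr) for n in range(8)]
```

This is why reconstruction is only guaranteed strictly below sr/2: at the limit, what you capture depends entirely on phase.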

Sampling Sound 3: Anti-aliasing
Aliasing is now all but impossible to demonstrate using consumer hardware. [Figure: a Cirrus Logic 8-channel 192 kHz sigma-delta oversampling ADC, with its crystal indicated.] The anti-alias filter is integrated into the device, and even cheap chips do this now, so whatever sample rate you set, the input is correctly filtered.

Examples of aliasing
To sample analogue audio without anti-alias filters we can use older types of ADC (e.g. those using the method of successive approximation), or industrial data-acquisition systems. These examples were prepared by Dr R. W. Stewart, University of Strathclyde, for his CD-ROM project DSPedia.
On the other hand, aliasing is very easy to demonstrate using digital sound synthesis: the dominant sources of aliasing these days are synthesis and processing, not recording.
Sound Examples 2

Aliasing – a synthetic example
We need a program to generate a plain sine frequency sweep (a chirp signal) and write it to a sound file.
- Use a low sampling rate
- Let the sweep rise to an extreme frequency, far beyond the Nyquist limit
- Listen to it... and view it in the frequency domain (Audacity)
Sound Example 3
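Such a program fits in a page of Python using only the standard library. The specific rates below (8000 Hz sample rate, sweep to 12 kHz) are assumed values for illustration, since the slide's original figures did not survive; any sweep target above sr/2 will fold back audibly and show the characteristic descending "mirror" in a spectrogram.

```python
import math
import struct
import wave

sr = 8000                     # deliberately low sample rate (assumed value)
dur = 2.0                     # seconds
f0, f1 = 100.0, 12000.0       # sweep well past the 4000 Hz Nyquist limit
nsamps = int(sr * dur)

phase = 0.0
samples = []
for n in range(nsamps):
    f = f0 + (f1 - f0) * n / nsamps   # instantaneous frequency of the chirp
    phase += 2 * math.pi * f / sr     # phase accumulation avoids clicks
    samples.append(int(32767 * 0.8 * math.sin(phase)))

with wave.open("chirp.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)         # 16-bit PCM
    w.setframerate(sr)
    w.writeframes(struct.pack("<%dh" % nsamps, *samples))
```

Opening `chirp.wav` in Audacity's spectrogram view shows the sweep rising to sr/2 and then reflecting downwards: the aliased frequencies.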

Reconstruction: the Digital-to-Analogue Converter
The DAC is strangely absent from most CS curricula, but it is more important than the ADC: we can manage without audio input, but not without audio output! With oversampling (as in the ADC), the final analogue reconstruction filter can be very simple, and cheap. It restores the required smooth curves of the underlying waveform.

Periodic Waves: Time v Distance (the Time Domain)
The speed of sound in air is approximately 340 m/s, so we can measure frequency either in terms of distance or in terms of time.
- Wavelength: literally the length of one cycle
- Period: the duration of one cycle
- Frequency = speed of sound / wavelength
- Frequency = 1 / period
Frequency is not itself a measure of either length or duration. It is therefore best to avoid labelling either wavelength or period directly as "frequency".
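The two formulas above can be captured directly; a minimal sketch (function names are ours, not from the slides):

```python
SPEED_OF_SOUND = 340.0   # m/s, the approximate value quoted in the slide

def freq_from_wavelength(wavelength_m):
    """Frequency (Hz) from wavelength (metres): f = c / lambda."""
    return SPEED_OF_SOUND / wavelength_m

def freq_from_period(period_s):
    """Frequency (Hz) from period (seconds): f = 1 / T."""
    return 1.0 / period_s
```

For example, a 1-metre wavelength gives 340 Hz, and a 10 ms period gives 100 Hz; concert A (440 Hz) has a wavelength of about 0.77 m.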

Audio Data Representation – Time Domain
Two basic forms: a data stream, and a file format. Two primary number representations:
- Integer (e.g. -32768 to 32767)
- Floating point
These days, the ±1.0 floating-point normalised representation is the most important. We can display amplitude (the vertical scale) either as normalised sample values (as in Audacity) or on a logarithmic decibel (dB) scale (as in Adobe Audition).
To convert an amplitude a to dB: dBval = 20 · log10(a)
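The dB conversion (and its inverse) can be sketched as follows; the function names are illustrative:

```python
import math

def amp_to_db(a):
    """Convert a normalised linear amplitude (0 < a <= 1.0) to decibels
    relative to full scale: dB = 20 * log10(a)."""
    return 20.0 * math.log10(a)

def db_to_amp(db):
    """Inverse conversion: a = 10 ** (dB / 20)."""
    return 10.0 ** (db / 20.0)
```

So full scale (1.0) is 0 dB, half amplitude is about -6 dB, and one tenth of full scale is exactly -20 dB.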

Amplitude Display – the dB Log Scale
Using the standard linear display, most of the signal is invisible. The ear senses both loudness and frequency on a logarithmic scale, e.g. from a maximum of 1.0 (0 dB) down to less than 0.0001 (-80 dB). Where does the sound finish? Not where the linear display suggests; the dB display shows it decaying far longer.

Representation – Frequency Domain
We have two primary and complementary ways to represent sound:
- Time domain: amplitude / time
- Frequency domain, in two related forms:
  - Spectrum: amplitude / frequency
  - Spectrogram (or sonogram): spectrum / time
Audacity supports both, with linear/logarithmic options. Again, the log frequency scale reflects how we hear, e.g. axis marks in octaves, or musical notes. The figures below display a sine log-frequency sweep (without aliasing), once with a log vertical scale and once with a linear vertical scale.
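Moving from the time domain to a spectrum is a Fourier transform. As a sketch of the idea (real tools use the much faster FFT), a naive discrete Fourier transform in pure Python computes the amplitude of each frequency bin up to Nyquist:

```python
import cmath
import math

def magnitude_spectrum(x):
    """Naive O(N^2) DFT of a real signal; returns the normalised magnitude
    of each frequency bin from DC up to the Nyquist bin (N/2).
    Illustrative only - practical software uses the FFT."""
    N = len(x)
    mags = []
    for k in range(N // 2 + 1):
        s = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
        mags.append(abs(s) / N)
    return mags

# A 64-sample sine completing exactly 5 cycles should peak in bin 5
sig = [math.sin(2 * math.pi * 5 * n / 64) for n in range(64)]
```

A spectrogram is then simply this spectrum computed repeatedly over short, successive windows of the signal.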

Digital Sound Synthesis
A basic definition: using algorithms (and some maths) to generate audio data. Two computer-based approaches:
- Real-time, e.g. using software or hardware-based synthesisers
- Offline: writing data to a soundfile for later playback
Many approaches are possible; most are technically difficult, maths-heavy and (especially in real time) computationally demanding, needing fast hardware and compilers able to generate very fast code. One (relatively) simple but classic approach identifies three fundamental ingredients:
- Sine waves
- Noise
- Time-varying control functions (automation, breakpoint data)
Together, these form the basis of additive synthesis. This means, quite literally, arbitrarily or algorithmically adding sound waves together, also known as mixing.
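Additive synthesis really is just summation. A minimal offline sketch (our own illustrative function, not CDP code) mixes one sine oscillator per partial into a single sample list:

```python
import math

def additive(partials, sr=44100, dur=0.1):
    """Additive synthesis: literally sum (mix) one sine wave per partial.
    partials is a list of (frequency_hz, amplitude) pairs."""
    n = int(sr * dur)
    return [sum(a * math.sin(2 * math.pi * f * t / sr)
                for f, a in partials)
            for t in range(n)]

# First three partials of a crude sawtooth-like tone on 220 Hz:
# amplitudes fall off as 1/k, as in a sawtooth's harmonic series
tone = additive([(220, 1.0), (440, 0.5), (660, 1 / 3)])
```

Adding time-varying control functions means replacing the fixed amplitudes with breakpoint envelopes evaluated per sample; the summation itself is unchanged.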

Music Synthesis and Algorithmic Composition
This concentrates on the control aspect. The most common route is the algorithmic generation of MIDI data, in real time or written to a standard MIDI file. Many free domain-specific languages are available; some support both direct synthesis and algorithmic score generation using a library of freely arranged modules. The (arguably) pre-eminent example is Csound.
Algorithmic composition can be very complex, but it can also be very simple, such as loop-based generation of scale and chord patterns. The auto-arpeggiator built into many synths and home organs is a simple example of a musical automaton.
- MIT Scratch: supports basic soundfile playback and MIDI note generation; loose timing limits its scope to simple patterns.
- Python: many extension libraries are available, for both synthesis and MIDI programming, and it includes standard modules for basic soundfile i/o.
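A loop-based arpeggiator of the kind described needs only a few lines. This is a toy sketch (all names are ours): it walks the 1st, 3rd and 5th degrees of a major scale up through two octaves, emitting MIDI note numbers that any MIDI library could then play or write to file.

```python
# Semitone offsets of the major scale degrees within one octave
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]

def arpeggio(root=60, chord=(0, 2, 4), octaves=2):
    """Broken-chord pattern: the given scale degrees (default 1st, 3rd, 5th),
    repeated up through the given number of octaves.
    root is a MIDI note number (60 = middle C)."""
    notes = []
    for octave in range(octaves):
        for degree in chord:
            notes.append(root + 12 * octave + MAJOR_SCALE[degree])
    return notes

print(arpeggio())   # C major arpeggio rising over two octaves
```

This is exactly the kind of musical automaton an auto-arpeggiator implements in hardware: a loop over scale degrees, retriggered on a clock.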

Sonification and Audification
The rendering of non-audio data as sound in order to reveal patterns and features.
- Audification: the source data already has a time dimension, e.g. seismic, volcanic or astrophysical data, even stock-price movements.
- Sonification: applied to any arbitrary numeric data.
Sonification is generally applied to large data sets which are already a challenge to analyse. For example, we have worked on particle-collision data from the Large Hadron Collider (searching for the Higgs boson), as part of the LHCsound outreach project. However, it can be applied to small data sets and processes too:
- Any algorithms involving lists, iteration and loops
- The shapes of mathematical functions and formulae
Whether the output is sonification or algorithmic composition depends entirely on your intention and interest: the process itself is the same. (Examples of simple sonification were presented in Scratch.)