Introduction To XAudio2 © Allan C. Milne Abertay University v14.1.15.

Agenda.
- The XAudio2 pipeline.
- Playing a sound.
- XAUDIO2_BUFFER.
- Sound elements.

General Audio Pipeline.
Original sound → microphone → ADC → .wav file → pre-production → audio program code → soundcard → DAC → amp → speakers → sound.

XAudio2 Pipeline.
.wav file → XAudio2 buffer / wave format → source voice → submix voice → mastering voice → soundcard.

Source Voices.
- Operate on audio data provided by the client program.
- Send output to
  – 1 or more submix voices; and/or
  – the mastering voice.

Submix Voices.
- Mix the audio from all voices feeding them.
- Operate on the result of this mix.
- Send output to
  – 1 or more other submix voices; and/or
  – the mastering voice.

Mastering Voice.
- Mixes the audio from all voices feeding it.
- Operates on the result of this mix.
- Writes the audio data to an audio device.
- There will normally be only one mastering voice.

Audio Processing Graph.
- The voice objects with their connections form an audio processing graph.
- Source voice objects are the entry points into this graph.
- The mastering voice is the exit from the graph to the audio device.
- The XAudio2 engine processes and manages this graph.
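
As a worked sketch of wiring such a graph (not from these slides: the submix parameters, the error-handling-free style and the variable names are illustrative assumptions; gEngine and gWFmt are created as on the following slides):

IXAudio2SubmixVoice *gSubmix;
// 2 input channels at 44100 Hz; adjust to match your source material.
gEngine->CreateSubmixVoice (&gSubmix, 2, 44100);

// Route a source voice into the submix voice instead of the default mastering voice.
XAUDIO2_SEND_DESCRIPTOR send = { 0, gSubmix };
XAUDIO2_VOICE_SENDS sendList = { 1, &send };
IXAudio2SourceVoice *gSource;
gEngine->CreateSourceVoice (&gSource, &gWFmt, 0, XAUDIO2_DEFAULT_FREQ_RATIO, NULL, &sendList);

The submix voice itself sends its output to the mastering voice by default, giving the source → submix → mastering chain of the pipeline slide.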

Playing A Sound.
- The following slides go through the coding steps for playing a sound from a .wav file.
- In summary we have to
  – create an XAudio2 engine;
  – create a mastering voice;
  – create a source voice;
  – submit the sound sample.

Setting Up.
#include <xaudio2.h>
#include "PCMWave.hpp"
using AllanMilne::Audio::PCMWave;
using AllanMilne::Audio::WaveFmt;
- Also requires the relevant libraries in the project set-up.
- PCMWave is my encapsulation.

Create The XAudio2 Engine.
IXAudio2 *gEngine;
… … …
CoInitializeEx( NULL, COINIT_MULTITHREADED );
XAudio2Create( &gEngine );
- The managing processor for the audio graph.
- All XAudio2 function calls return an HRESULT value that should be checked for success.
- The CoInitializeEx call allows XAudio2 to run in a separate thread.
- The engine is the only COM object in XAudio2.

Create The Mastering Voice.
IXAudio2MasteringVoice *gMaster;
… … …
gEngine->CreateMasteringVoice( &gMaster );
- The final rendering component, connected to the audio device.
- Created by the XAudio2 engine.
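
The last two slides combined, with the HRESULT checking recommended above (a minimal sketch; the error handling and early returns are assumed, not the lecture's own code):

IXAudio2 *gEngine = NULL;
IXAudio2MasteringVoice *gMaster = NULL;

HRESULT hr = CoInitializeEx (NULL, COINIT_MULTITHREADED);
if (FAILED (hr)) return hr;                                       // COM could not be initialised.

hr = XAudio2Create (&gEngine);
if (FAILED (hr)) { CoUninitialize (); return hr; }                // engine creation failed.

hr = gEngine->CreateMasteringVoice (&gMaster);
if (FAILED (hr)) { gEngine->Release (); CoUninitialize (); return hr; }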

Creating A Source Voice.
- To create a source voice we need a wave format struct that defines the attributes of the wave audio data.
- To create this wave format struct we need to load the .wav file and extract the attributes from its fmt chunk.
- Therefore we need to
  – load the .wav file;
  – create a WAVEFORMATEX struct;
  – create the source voice object.

Load A .wav File.
PCMWave *gWave;
… … …
gWave = new PCMWave ("MySound.wav");
if (gWave->GetStatus() != PCMWave::OK) { … … … }
- My own class to wrap file loading functionality.
- Creating a PCMWave object reads the fmt and data chunks.
- Check explicitly for success: since this is my own encapsulation, no HRESULT value is returned.

Define Wave Format.
WAVEFORMATEX gWFmt;
… … …
memset ((void*)&gWFmt, 0, sizeof (WAVEFORMATEX));
memcpy_s ((void*)&gWFmt, sizeof (WaveFmt), (void*)&(gWave->GetWaveFormat()), sizeof (WaveFmt));
- WAVEFORMATEX is a Windows struct.
- Defines the attributes of the audio data.
- Is copied from the PCMWave WaveFmt struct.
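
If you were not using PCMWave, the same struct could be filled in by hand; the sketch below assumes 16-bit stereo PCM at 44.1 kHz and is not part of the lecture code:

WAVEFORMATEX fmt;
memset ((void*)&fmt, 0, sizeof (WAVEFORMATEX));
fmt.wFormatTag      = WAVE_FORMAT_PCM;                          // uncompressed PCM.
fmt.nChannels       = 2;                                        // stereo.
fmt.nSamplesPerSec  = 44100;                                    // 44.1 kHz sample rate.
fmt.wBitsPerSample  = 16;
fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;   // bytes per sample frame.
fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;     // bytes per second.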

Create A Source Voice.
IXAudio2SourceVoice *gSource;
… … …
gEngine->CreateSourceVoice( &gSource, &gWFmt);
gSource->Start();
- Managed by the XAudio2 engine.
- The wave format defines the format of all sound samples submitted to it.
- Called with only 2 arguments, this routes the source voice to the mastering voice.
- Note Start() activates the source voice, but since we have not submitted any audio data to it nothing will be played yet.
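
For reference, the two-argument call relies on the default arguments declared in the XAudio2 header; written out in full (with an assumed HRESULT check) it is equivalent to:

HRESULT hr = gEngine->CreateSourceVoice (
    &gSource, &gWFmt,
    0,                            // no flags.
    XAUDIO2_DEFAULT_FREQ_RATIO,   // maximum pitch-shift ratio.
    NULL,                         // no voice callback.
    NULL, NULL);                  // default send list (the mastering voice); no effect chain.
if (FAILED (hr)) { /* report and abort */ }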

Submit The Sound Sample.
- To play a sound sample we must submit it to a source voice.
- We need an XAUDIO2_BUFFER that defines the audio data and how it is to be handled.
- Therefore we need to
  – create an XAUDIO2_BUFFER;
  – submit the buffer to the source voice.

Define An XAudio2 Buffer.
XAUDIO2_BUFFER gXABuffer;
… … …
memset ((void*)&gXABuffer, 0, sizeof (XAUDIO2_BUFFER));
gXABuffer.AudioBytes = gWave->GetDataSize ();
gXABuffer.pAudioData = (BYTE*)(gWave->GetWaveData ());
- Used to define the audio data buffer and its characteristics.
- Here we only define the audio data size and the audio data itself.
- All other characteristics are set to 0.
- Note the audio data pointer points to the buffer in the PCMWave object.

Play The Sound.
gSource->SubmitSourceBuffer (&gXABuffer);
- Submits a sound sample to the audio graph.
- The sound sample will be played only once.
- Multiple calls to Submit will queue the requests.
- This is an asynchronous operation.
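
Because the call is asynchronous it returns before the sound has finished. One way to wait for completion (a sketch only; a real application would more likely use voice callbacks) is to poll the voice state:

XAUDIO2_VOICE_STATE state;
do {
    gSource->GetState (&state);
    Sleep (10);                     // yield while the buffer queue drains.
} while (state.BuffersQueued > 0);  // 0 queued buffers: playback has finished.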

Tidying Up.
gSource->Stop ();
gEngine->Release();
CoUninitialize();
delete gWave;
- Stop de-activates the source voice from the audio graph.
- Only the engine is released, since this is the only COM object.
- PCMWave is deleted as this is an object of my own class.
  – Note this will free the audio data buffer.
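
A slightly fuller teardown, if you prefer to destroy the voices explicitly rather than let Release() reclaim them (the ordering shown is an assumption, not the lecture's code):

gSource->Stop ();
gSource->FlushSourceBuffers ();   // discard any queued, unplayed buffers.
gSource->DestroyVoice ();         // voices are not COM objects: destroy, don't Release.
gMaster->DestroyVoice ();
gEngine->Release ();              // the engine is the one COM object.
CoUninitialize ();
delete gWave;                     // also frees the audio data buffer.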

XAUDIO2_BUFFER.
- XAUDIO2_BUFFER defines to the source voice the audio data and how to handle it.
- It defines
  – the audio data samples;
  – where to begin and stop playing within the audio data;
  – whether to loop, what to loop, and how many times.

.AudioBytes  // Number of bytes of audio data.
.pAudioData  // Pointer to the audio data samples.
.Flags       // Almost always 0.
.PlayBegin   // First sample in the buffer that should be played.
.PlayLength  // Number of samples to play; 0 = entire buffer (PlayBegin must also be 0).
.LoopBegin   // First sample to be looped; must be < (PlayBegin + PlayLength); can be < PlayBegin.
.LoopLength  // Number of samples in the loop; 0 = entire sample; PlayBegin < (LoopBegin + LoopLength) <= (PlayBegin + PlayLength).
.LoopCount   // Number of times to loop; XAUDIO2_LOOP_INFINITE to loop forever; if 0 then LoopBegin and LoopLength must be 0.
.pContext    // Pointer to a context to be passed to the client in callbacks.
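
For example, a sketch (not from the slides) that loops the whole sample, using the field rules above:

XAUDIO2_BUFFER buf;
memset ((void*)&buf, 0, sizeof (XAUDIO2_BUFFER));
buf.AudioBytes = gWave->GetDataSize ();
buf.pAudioData = (BYTE*)(gWave->GetWaveData ());
buf.LoopBegin  = 0;   // loop from the first sample ...
buf.LoopLength = 0;   // ... over the entire buffer ...
buf.LoopCount  = 2;   // ... twice more after the first pass, so the sample is heard three times.
gSource->SubmitSourceBuffer (&buf);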

Sound Elements.
- We will now summarise the elements that define a sound.
- These elements include bare data, XAudio2 components and my framework components.
- The main objects are of type:
  – PCMWave;
  – XAUDIO2_BUFFER;
  – WAVEFORMATEX;
  – IXAudio2SourceVoice.

(Diagram: how the sound elements connect.)
- PCMWave: string filename; Status status (an enum); char *sample_data; WaveFmt format.
- SAMPLE DATA: just raw data.
- XAUDIO2_BUFFER: info on looping and the sample data.
- WAVEFORMATEX: (look it up!)
- IXAudio2SourceVoice: Submit( … ).