Java Audio.

Java Sound API 1.0
Specifies a software layer that allows application programs to communicate with an audio and MIDI engine. "API" is an abbreviation of application programming interface: a set of routines, protocols, and tools for building software applications. A good API makes it easier to develop a program by providing ready-made building blocks; the programmer puts the blocks together. You have already used the AWT API. The Java Sound API 1.0 is part of the Java 2 Platform, Standard Edition (J2SE), version 1.3, which you can download for free. The Java Software Development Kit (SDK) is the Java Runtime Environment (JRE) plus the compiler, debugger and other coding tools. An integrated development environment (IDE) is a programming environment integrated into an application; for example, Microsoft Office applications support various versions of the BASIC programming language, so you can develop a WordBasic application while running Microsoft Word.

Java Sound API 1.0
Includes support for both sampled audio and MIDI data, provided in two separate packages: javax.sound.sampled, which specifies interfaces for the capture, mixing and playback of digital (sampled) audio, and javax.sound.midi, which provides interfaces for MIDI synthesis, sequencing and event transport. Unfortunately the Java Sound API is still in its infancy, so there are no books yet, but there is some information available on the Sun website; see the links on the course web site.

javax.sound.sampled
The central task that the Java Sound API addresses is how to move bytes of formatted audio data into and out of the system. To play or capture audio using the Java Sound API you need at least two things: formatted audio data and a line.

Formatted Audio Data
Formatted audio data refers to sound in any of a number of standard formats. The Java Sound API distinguishes between data formats and file formats.

Data Formats
A data format tells you how to interpret a series of bytes of "raw" sampled audio data. In the Java Sound API, a data format is represented by an AudioFormat object, which includes the following attributes: the encoding technique, usually pulse code modulation (PCM); the number of channels (1 for mono, 2 for stereo, etc.); the sample rate (samples per second, per channel); the number of bits per sample (per channel); the frame size in bytes; the frame rate; and the byte order (big-endian or little-endian). You might need to know, for example, how many bits constitute one sample (the representation of the shortest instant of sound), and similarly you might need to know the sound's sample rate (how fast the samples are supposed to follow one another). When setting up for playback or capture, you specify the data format of the sound you are capturing or playing. A frame contains the data for all channels at a particular time; for PCM-encoded data, the frame is simply the set of simultaneous samples in all channels, for a given instant in time, without any additional information.
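As a concrete illustration (not from the original slides), the attributes listed above map directly onto one of AudioFormat's constructors; the class name FormatDemo is just for this sketch:

```java
import javax.sound.sampled.AudioFormat;

public class FormatDemo {
    public static void main(String[] args) {
        // CD-quality settings: 44.1 kHz sample rate, 16 bits per sample,
        // 2 channels (stereo), signed PCM samples, little-endian byte order
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);

        // A frame holds one sample per channel: 2 channels * 2 bytes = 4 bytes
        System.out.println(fmt.getFrameSize()); // 4
        // For PCM, the frame rate equals the sample rate
        System.out.println(fmt.getFrameRate()); // 44100.0
    }
}
```

Note that with this constructor the encoding is PCM and the frame size and frame rate are derived automatically from the other attributes.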

File Formats
Specifies the structure of a sound file, including the format of the raw audio data in the file, and other information that can be stored in the file. A file format is represented by an AudioFileFormat object, which contains: the file type (WAVE, AIFF, etc.); the file's length in bytes; the length, in frames, of the audio data contained in the file; and an AudioFormat object that specifies the data format of the audio data contained in the file.

File Formats
The AudioSystem class provides methods for reading and writing sounds in different file formats, and for converting between different data formats. By reading a sound file as an AudioInputStream, you get immediate access to the samples, without having to worry about the sound file's structure (its header, chunks, etc.). The AudioSystem class acts as a clearinghouse for audio components, including built-in services and separately installed services from third-party providers; it serves as an application program's entry point for accessing these installed sampled-audio resources. Some of its methods let you access a file's contents through a kind of stream called an AudioInputStream. An AudioInputStream is a subclass of the generic Java InputStream class, which encapsulates a series of bytes that can be read sequentially. To its superclass, the AudioInputStream class adds knowledge of the bytes' audio data format (represented by an AudioFormat object).

AudioInputStream
An audio input stream is an input stream with a specified audio format and length. It enables you to: obtain an audio input stream from an external audio file, stream, or URL; write an external file from an audio input stream; and convert an audio input stream to a different audio format.
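The three capabilities above can be sketched in a single round trip. This example is illustrative rather than from the slides: it synthesizes one second of 16-bit mono silence in memory, writes it out in WAVE format, and reads it back, so it runs without any sound file on disk (the class name StreamDemo is hypothetical):

```java
import javax.sound.sampled.*;
import java.io.*;

public class StreamDemo {
    public static void main(String[] args) throws Exception {
        // One second of 16-bit mono silence at 8 kHz, synthesized in memory
        // (8000 frames * 2 bytes per frame = 16000 bytes)
        AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);
        AudioInputStream in = new AudioInputStream(
                new ByteArrayInputStream(new byte[16000]), fmt, 8000);

        // Write the stream out as a WAVE "file" (here, a byte buffer)
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        AudioSystem.write(in, AudioFileFormat.Type.WAVE, out);

        // Read it back and confirm the format survived the round trip
        AudioInputStream back = AudioSystem.getAudioInputStream(
                new ByteArrayInputStream(out.toByteArray()));
        System.out.println(back.getFormat().getSampleRate()); // 8000.0
        System.out.println(back.getFrameLength());            // 8000
    }
}
```

In a real program the ByteArrayInputStream would typically be replaced by a File or URL passed to AudioSystem.getAudioInputStream.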

Reading from an Audio File
File soundFile = new File(filename);
AudioInputStream audioInputStream = null;
audioInputStream = AudioSystem.getAudioInputStream(soundFile);
The first line creates a new File instance for an existing audio file whose name is held in the String filename. The second declares the AudioInputStream reference. The third obtains an audio input stream from the given File, which must point to valid audio file data, and returns an AudioInputStream object based on that data. We have now created a stream for the audio file.

Finding Data Format
From the AudioInputStream, i.e. from the sound file, we fetch information about the format of the audio data:
AudioFormat audioFormat = audioInputStream.getFormat();
This information is needed to ask Java Sound for a suitable line for this audio file.

Line
A line is an element of the digital audio "pipeline", that is, a path for moving audio into or out of the system. A Clip is a kind of line into which you can load audio data prior to playback. A SourceDataLine is a line that accepts a real-time stream of audio data for playback. Usually a line is a path into or out of a mixer (although technically the mixer itself is also a kind of line). Audio input and output ports are lines; these are analogous to the microphones and speakers connected to a physical mixing console. Another kind of line is a data path through which an application program can get input audio from, or send output audio to, a mixer; these data paths are analogous to the tracks of a multitrack recorder connected to the physical mixing console. (Figure: a possible configuration of lines for audio output and audio input.)

Line Interface Hierarchy
The base interface, Line, describes the minimal functionality common to all lines: controls (data lines and ports often have a set of controls that affect the audio signal passing through the line), open and closed status, and events. A line generates events when it opens or closes, and subinterfaces of Line can introduce other types of events. When a line generates an event, the event is sent to all objects that have registered to "listen" for events on that line. An application program can create these objects, register them to listen for line events, and react to the events as desired.

Line Types
Ports are simple lines for input or output of audio to or from audio devices; some common types of ports are the microphone, line input, CD-ROM drive, speaker, headphone, and line output. A TargetDataLine receives audio data from a mixer. The TargetDataLine interface provides methods for reading the data from the target data line's buffer and for determining how much data is currently available for reading. I'm not going to include mixers here, as they are a special type of line that we will discuss in more detail later. A target data line is used for capture (recording); note that the naming convention for this interface reflects the relationship between the line and its mixer. A target data line is a type of DataLine from which audio data can be read. The most common example is a data line that gets its data from an audio capture device. (The device is implemented as a mixer that writes to the target data line.)

Line Types
A SourceDataLine receives audio data for playback. It provides methods for writing data to the source data line's buffer for playback, and for determining how much data the line is prepared to receive without blocking. A Clip is a data line into which audio data can be loaded prior to playback; the data is pre-loaded rather than streamed, so the clip's duration is known before playback, any starting position in the media can be chosen, and clips can be looped. A source data line is used for playback: it is a data line to which data may be written, and it acts as a source to its mixer. An application writes audio bytes to a source data line, which handles the buffering of the bytes and delivers them to the mixer. The mixer may mix the samples with those from other sources and then deliver the mix to a target such as an output port (which may represent an audio output device on a sound card). A clip is essentially a line into which audio data is stored before it is sent to the mixer; if we are only interested in outputting the sound from the clip, then we don't use a mixer and the clip is an output line too. Note that, from the perspective of an application, a source data line acts as a target for audio data, even though it is a source to its mixer.

Getting a Line
Asking for a line is a rather tricky thing. We have to construct an Info object that specifies the desired properties of the line. First, we have to say which kind of line we want: SourceDataLine (for streamed playback), Clip (for pre-loaded, repeatable playback) or TargetDataLine (for recording). Then we have to pass an AudioFormat object, so that the line knows which format the data passed to it will have.
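A small sketch, not from the slides, of what such an Info object looks like before any line is actually requested (the class name LineInfoDemo is illustrative; whether a matching line exists depends on the machine, so that result is printed without a promised value):

```java
import javax.sound.sampled.*;

public class LineInfoDemo {
    public static void main(String[] args) {
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);

        // Describe the kind of line we want: a Clip able to play this format
        DataLine.Info info = new DataLine.Info(Clip.class, fmt);
        System.out.println(info.getLineClass().getSimpleName()); // Clip

        // Ask whether any installed mixer can supply such a line;
        // the answer is machine-dependent (often false on headless systems)
        System.out.println(AudioSystem.isLineSupported(info));
    }
}
```

Only once the Info object is built do we hand it to AudioSystem.getLine to obtain the line itself, as the next slide shows.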

Getting a Clip
Clip soundclip = null;
DataLine.Info clipinfo = new DataLine.Info(Clip.class, audioFormat);
try {
    soundclip = (Clip) AudioSystem.getLine(clipinfo);
    soundclip.open(audioInputStream);
} catch (Exception e) { /* handle LineUnavailableException, IOException, ... */ }
soundclip.start();
Here we are just interested in simple playback, so we use a Clip. We create a DataLine.Info object (remember that DataLine is a superinterface of Clip) containing the audio format information of our sound file, try to get a line matching clipinfo, open the line with our audio input stream, and start the clip playing. So now we can play a clip, because output can come directly from the clip. Let's now look at streaming playback with a SourceDataLine.
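Putting the fragments above together, here is one way the whole playback path might look as a self-contained class. This is a sketch, not the slides' own program: the file name sound.wav is a placeholder, and the play helper returns false on any failure (missing file, unsupported format, no available line) rather than throwing:

```java
import javax.sound.sampled.*;
import java.io.File;

public class ClipPlayer {

    // Try to play the whole file through a Clip.
    // Returns false on any failure: missing file, bad format, no line.
    public static boolean play(File soundFile) {
        try (AudioInputStream audioIn = AudioSystem.getAudioInputStream(soundFile)) {
            DataLine.Info info = new DataLine.Info(Clip.class, audioIn.getFormat());
            Clip soundclip = (Clip) AudioSystem.getLine(info);
            soundclip.open(audioIn);  // pre-load the data; duration now known
            soundclip.start();        // returns immediately; playback is asynchronous
            soundclip.drain();        // block until queued data has been played
            soundclip.close();
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // "sound.wav" is a placeholder path, not a file from the slides
        System.out.println(play(new File("sound.wav")));
    }
}
```

Because start() is asynchronous, some blocking step (here drain(), or alternatively a LineListener waiting for a STOP event) is needed if the program would otherwise exit before the sound finishes.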

Getting a SourceDataLine
This is the same process as getting a Clip. However, the SourceDataLine does not store all the audio data; rather, it buffers it in real time. So to output the audio we have to read it from the AudioInputStream in real time.

Getting a SourceDataLine
byte[] buffer = new byte[EXTERNAL_BUFFER_SIZE];
int nBytesRead = 0;
while (nBytesRead != -1) {
    nBytesRead = audioInputStream.read(buffer, 0, buffer.length);
    if (nBytesRead >= 0)
        nBytesWritten = line.write(buffer, 0, nBytesRead);
}
The first line creates a buffer of EXTERNAL_BUFFER_SIZE bytes. The method read(byte[] b, int off, int len) reads up to len bytes of data from the audio stream into the byte array b, starting at offset off from the beginning of the array, and returns the total number of bytes read into the buffer, or -1 if there is no more data because the end of the stream has been reached. Finally, line.write writes the bytes from the buffer into the SourceDataLine line, which delivers them to the mixer for playback.
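The loop above can be exercised without a sound card by letting a ByteArrayOutputStream stand in for line.write. This is an illustrative sketch (the class StreamCopy, its copy helper, and the EXTERNAL_BUFFER_SIZE value are assumptions, not from the slides); the loop itself is exactly the streaming pattern described above:

```java
import javax.sound.sampled.*;
import java.io.*;

public class StreamCopy {
    static final int EXTERNAL_BUFFER_SIZE = 4096;

    // Pump the stream chunk by chunk; returns the total bytes copied.
    // With a real SourceDataLine, sink.write(...) would be line.write(...).
    static int copy(AudioInputStream audioInputStream, OutputStream sink)
            throws IOException {
        byte[] buffer = new byte[EXTERNAL_BUFFER_SIZE];
        int total = 0;
        int nBytesRead = 0;
        while (nBytesRead != -1) {
            nBytesRead = audioInputStream.read(buffer, 0, buffer.length);
            if (nBytesRead >= 0) {
                sink.write(buffer, 0, nBytesRead);
                total += nBytesRead;
            }
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        // One second of 16-bit mono silence, synthesized so no file is needed
        AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);
        AudioInputStream in = new AudioInputStream(
                new ByteArrayInputStream(new byte[16000]), fmt, 8000);
        System.out.println(copy(in, new ByteArrayOutputStream())); // 16000
    }
}
```

Note that for a 16-bit format, reads happen in whole frames, which is why the buffer size is kept a multiple of the 2-byte frame size.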

Fin