Java Audio
Java Sound API 1.0

Specifies a software layer that allows application programs to communicate with an audio and MIDI engine.

API is an abbreviation of application programming interface: a set of routines, protocols, and tools for building software applications. A good API makes it easier to develop a program by providing the building blocks; the programmer puts the blocks together. You have already used the AWT API.

The Java Sound API 1.0 is part of the Java 2 Platform, Standard Edition (J2SE) version 1.3, which you can download for free. The Java Software Development Kit (SDK) is the Java Runtime Environment (JRE) plus the compiler, debugger, and other development tools. An integrated development environment (IDE) is a programming environment integrated into an application; for example, Microsoft Office applications support various versions of the BASIC programming language, so you can develop a WordBasic application while running Microsoft Word.
Java Sound API 1.0

Includes support for both digital and MIDI data, provided in two separate packages:

javax.sound.sampled - specifies interfaces for the capture, mixing, and playback of digital (sampled) audio.
javax.sound.midi - provides interfaces for MIDI synthesis, sequencing, and event transport.

Unfortunately the Java Sound API is still in its infancy, so there are no books yet, but some information is available on the Sun website - see the links on the course web site.
javax.sound.sampled

The central task that the Java Sound API addresses is how to move bytes of formatted audio data into and out of the system. To play or capture audio using the Java Sound API you need at least two things:

Formatted audio data
A line
Formatted Audio Data

Formatted audio data refers to sound in any of a number of standard formats. The Java Sound API distinguishes between data formats and file formats.
Data Formats

A data format tells you how to interpret a series of bytes of "raw" sampled audio data. In the Java Sound API, a data format is represented by an AudioFormat object, which includes the following attributes:

Encoding technique, usually Pulse Code Modulation (PCM)
Number of channels (1 for mono, 2 for stereo, etc.)
Sample rate (samples per second, per channel)
Number of bits per sample (per channel)
Frame size in bytes
Frame rate
Byte order (big-endian or little-endian)

You might need to know, for example, how many bits constitute one sample (the representation of the shortest instant of sound), and similarly you might need to know the sound's sample rate (how fast the samples are supposed to follow one another). When setting up for playback or capture, you specify the data format of the sound you are capturing or playing.

A frame contains the data for all channels at a particular time. For PCM-encoded data, the frame is simply the set of simultaneous samples in all channels, for a given instant in time, without any additional information.
File Formats

A file format specifies the structure of a sound file, including the format of the raw audio data in the file, and other information that can be stored in the file. A file format is represented by an AudioFileFormat object, which contains:

The file type (WAVE, AIFF, etc.)
The file's length in bytes
The length, in frames, of the audio data contained in the file
An AudioFormat object that specifies the data format of the audio data contained in the file
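These attributes can be inspected through an AudioFileFormat object. A self-contained sketch (it fabricates one second of silence, writes it out as a WAVE file to a temporary location, then reads the file format back - the 8 kHz mono format is purely an illustrative choice):

```java
import javax.sound.sampled.*;
import java.io.ByteArrayInputStream;
import java.io.File;

public class FileFormatDemo {
    public static void main(String[] args) throws Exception {
        // One second of silent 8 kHz, 16-bit, mono, signed little-endian PCM.
        AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);
        byte[] silence = new byte[8000 * 2]; // 8000 frames x 2 bytes per frame
        AudioInputStream in = new AudioInputStream(
                new ByteArrayInputStream(silence), fmt, 8000);

        // Write it out as a WAVE file, then ask for the file's format.
        File wav = File.createTempFile("demo", ".wav");
        AudioSystem.write(in, AudioFileFormat.Type.WAVE, wav);
        AudioFileFormat aff = AudioSystem.getAudioFileFormat(wav);

        System.out.println("type:         " + aff.getType());
        System.out.println("byte length:  " + aff.getByteLength());
        System.out.println("frame length: " + aff.getFrameLength());
        System.out.println("data format:  " + aff.getFormat());
        wav.delete();
    }
}
```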
File Formats

The AudioSystem class provides methods for reading and writing sounds in different file formats, and for converting between different data formats. By reading a sound file as an AudioInputStream, you get immediate access to the samples, without having to worry about the sound file's structure (its header, chunks, etc.).

The AudioSystem class acts as a clearinghouse for audio components, including built-in services and separately installed services from third-party providers. AudioSystem serves as an application program's entry point for accessing these installed sampled-audio resources. Some of its methods let you access a file's contents through a kind of stream called an AudioInputStream. An AudioInputStream is a subclass of the generic Java InputStream class, which encapsulates a series of bytes that can be read sequentially. To its superclass, the AudioInputStream class adds knowledge of the bytes' audio data format (represented by an AudioFormat object).
AudioInputStream

An audio input stream is an input stream with a specified audio format and length. It enables you to:

obtain an audio input stream from an external audio file, stream, or URL
write an external file from an audio input stream
convert an audio input stream to a different audio format
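The conversion case can be sketched without any audio hardware. Here a little-endian stream is converted to big-endian (the 8 kHz mono format is an arbitrary choice; whether a given conversion is available depends on the installed format converters, hence the isConversionSupported check):

```java
import javax.sound.sampled.*;
import java.io.ByteArrayInputStream;

public class ConvertDemo {
    public static void main(String[] args) {
        AudioFormat src = new AudioFormat(8000f, 16, 1, true, false); // little-endian
        AudioFormat dst = new AudioFormat(8000f, 16, 1, true, true);  // big-endian

        // 100 frames of silent 16-bit mono PCM, wrapped as an AudioInputStream.
        AudioInputStream in = new AudioInputStream(
                new ByteArrayInputStream(new byte[200]), src, 100);

        if (AudioSystem.isConversionSupported(dst, src)) {
            AudioInputStream converted = AudioSystem.getAudioInputStream(dst, in);
            System.out.println("converted to: " + converted.getFormat());
        } else {
            System.out.println("conversion not supported by installed providers");
        }
    }
}
```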
Reading from an Audio File

File soundFile = new File(filename);
AudioInputStream audioInputStream = null;
audioInputStream = AudioSystem.getAudioInputStream(soundFile);

File soundFile = new File(filename); - creates a new File instance for the existing audio file whose name is held in the String filename. (Note that you pass the variable itself; new File(String filename) is the constructor's signature, not the call.)

AudioInputStream audioInputStream = null; - declares a reference that will hold the AudioInputStream object (it is assigned on the next line, not created here).

audioInputStream = AudioSystem.getAudioInputStream(soundFile); - obtains an audio input stream from the provided File. The File must point to valid audio file data. Returns an AudioInputStream object based on the audio file data pointed to by the File. This call can throw UnsupportedAudioFileException and IOException, so in real code it must be wrapped in a try/catch block.

We have now created a stream for the audio file.
Finding the Data Format

From the AudioInputStream, i.e. from the sound file, we fetch information about the format of the audio data:

AudioFormat audioFormat = audioInputStream.getFormat();

This information is needed to ask Java Sound for a suitable line for this audio file.
Line

A line is an element of the digital audio "pipeline" - that is, a path for moving audio into or out of the system. A Clip is a kind of line into which you can load audio data prior to playback. A SourceDataLine accepts a real-time stream of audio data for playback.

Usually the line is a path into or out of a mixer (although technically the mixer itself is also a kind of line). Audio input and output ports are lines; these are analogous to the microphones and speakers connected to a physical mixing console. Another kind of line is a data path through which an application program can get input audio from, or send output audio to, a mixer. These data paths are analogous to the tracks of the multitrack recorder connected to the physical mixing console.

(Figure: a possible configuration of lines for audio output and audio input.)
Line Interface Hierarchy

The base interface, Line, describes the minimal functionality common to all lines:

Controls - data lines and ports often have a set of controls that affect the audio signal passing through the line
Open and closed status
Events

Ports are simple lines for input or output of audio to or from audio devices. As mentioned earlier, some common types of ports are the microphone, line input, CD-ROM drive, speaker, headphone, and line output.

A line generates events when it opens or closes. Subinterfaces of Line can introduce other types of events. When a line generates an event, the event is sent to all objects that have registered to "listen" for events on that line. An application program can create these objects, register them to listen for line events, and react to the events as desired.
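The event mechanism can be illustrated without opening a real device. In this sketch, a LineListener prints each event type it receives; the stand-in Line implementation is purely hypothetical, just enough to construct a LineEvent by hand:

```java
import javax.sound.sampled.*;

public class ListenerDemo {
    public static void main(String[] args) {
        // A listener that reports open/start/stop/close events on a line.
        LineListener listener = new LineListener() {
            public void update(LineEvent event) {
                System.out.println("Line event: " + event.getType());
            }
        };

        // Minimal stand-in Line so we can fire an event with no audio hardware.
        Line dummy = new Line() {
            public Line.Info getLineInfo() { return new Line.Info(Line.class); }
            public void open() {}
            public void close() {}
            public boolean isOpen() { return true; }
            public Control[] getControls() { return new Control[0]; }
            public boolean isControlSupported(Control.Type t) { return false; }
            public Control getControl(Control.Type t) { return null; }
            public void addLineListener(LineListener l) {}
            public void removeLineListener(LineListener l) {}
        };

        // Normally the line itself fires this when it opens.
        listener.update(new LineEvent(dummy, LineEvent.Type.OPEN, 0));
        // prints: Line event: Open
    }
}
```

In real code you would call addLineListener(listener) on a line obtained from AudioSystem, and the line delivers the events itself.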
Line Types

Ports are simple lines for input or output of audio to or from audio devices. Some common types of ports are the microphone, line input, CD-ROM drive, speaker, headphone, and line output.

A TargetDataLine receives audio data from a mixer. The TargetDataLine interface provides methods for reading the data from the target data line's buffer and for determining how much data is currently available for reading.

I'm not going to include mixers here, as they are a special type of line that we will discuss in more detail later. Note that the naming convention for this interface reflects the relationship between the line and its mixer: the line is a target of the mixer, while the application reads from it, so a target data line is used for capture (recording). A target data line is a type of DataLine from which audio data can be read; the most common example is a data line that gets its data from an audio capture device. (The device is implemented as a mixer that writes to the target data line.)
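Capture with a TargetDataLine might look like the following sketch. Hedged: a capture line only exists if the machine has an audio input device, so the code degrades to a message otherwise, and the 8 kHz mono format is an illustrative choice:

```java
import javax.sound.sampled.*;

public class CaptureDemo {
    public static void main(String[] args) throws Exception {
        AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, fmt);

        if (!AudioSystem.isLineSupported(info)) {
            System.out.println("No capture line available for " + fmt);
            return;
        }

        TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
        line.open(fmt);
        line.start();

        // Read roughly a quarter of the line's internal buffer per call.
        byte[] buf = new byte[line.getBufferSize() / 4];
        int n = line.read(buf, 0, buf.length); // blocks until data arrives
        System.out.println("captured " + n + " bytes");

        line.stop();
        line.close();
    }
}
```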
Line Types

A SourceDataLine receives audio data for playback. It provides methods for writing data to the source data line's buffer for playback, and for determining how much data the line is prepared to receive without blocking.

A Clip is a data line into which audio data can be loaded prior to playback. The data is pre-loaded rather than streamed, so the clip's duration is known before playback. Any starting position in the media can be chosen, and clips can be looped.

A source data line is a data line to which data may be written; it acts as a source to its mixer, so (despite the name) the application uses it for playback, not capture. An application writes audio bytes to a source data line, which handles the buffering of the bytes and delivers them to the mixer. The mixer may mix the samples with those from other sources and then deliver the mix to a target such as an output port (which may represent an audio output device on a sound card). A clip is essentially a line into which audio data is stored before it is sent to the mixer. If we are only interested in outputting the sound from the clip then we don't use a mixer explicitly and the clip acts as an output line too. From the perspective of an application, a source data line acts as a target for audio data, even though the mixer sees it as a source.
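The pre-loaded nature of a Clip is what makes positioning and looping possible. A hedged sketch (it fabricates a short silent stream so it is self-contained; on a machine with no audio output the guard or the catch block reports the problem instead of playing):

```java
import javax.sound.sampled.*;
import java.io.ByteArrayInputStream;

public class ClipDemo {
    public static void main(String[] args) throws Exception {
        // A short fabricated stream: 4000 frames of silent 8 kHz mono PCM.
        AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);
        AudioInputStream stream = new AudioInputStream(
                new ByteArrayInputStream(new byte[8000]), fmt, 4000);

        DataLine.Info info = new DataLine.Info(Clip.class, fmt);
        if (!AudioSystem.isLineSupported(info)) {
            System.out.println("No Clip available on this machine");
            return;
        }

        try {
            Clip clip = (Clip) AudioSystem.getLine(info);
            clip.open(stream);            // pre-loads all the data
            System.out.println("frames: " + clip.getFrameLength()); // known before playback
            clip.setFramePosition(0);     // choose any starting position
            clip.loop(2);                 // play through three times in total
            clip.drain();
            clip.close();
        } catch (LineUnavailableException e) {
            System.out.println("Clip unavailable: " + e.getMessage());
        }
    }
}
```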
Getting a Line

Asking for a line is a rather tricky thing. We have to construct an Info object that specifies the desired properties of the line:

First, we have to say which kind of line we want: SourceDataLine (for streamed playback), Clip (for pre-loaded, repeatable playback), or TargetDataLine (for recording).
Then, we have to pass an AudioFormat object, so that the line knows which format the data passed to it will have.
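As a sketch, the three kinds of request differ only in the class passed to DataLine.Info (the 44.1 kHz stereo format is illustrative):

```java
import javax.sound.sampled.*;

public class LineInfoDemo {
    public static void main(String[] args) {
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);

        // The Info object pairs the desired line type with the data format.
        DataLine.Info clipInfo = new DataLine.Info(Clip.class, fmt);
        DataLine.Info playInfo = new DataLine.Info(SourceDataLine.class, fmt);
        DataLine.Info recInfo  = new DataLine.Info(TargetDataLine.class, fmt);

        // Whether a matching line actually exists depends on the machine.
        System.out.println("Clip supported?           " + AudioSystem.isLineSupported(clipInfo));
        System.out.println("SourceDataLine supported? " + AudioSystem.isLineSupported(playInfo));
        System.out.println("TargetDataLine supported? " + AudioSystem.isLineSupported(recInfo));
    }
}
```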
Getting a Clip

Clip soundclip = null;
DataLine.Info clipinfo = new DataLine.Info(Clip.class, audioFormat);
try {
    soundclip = (Clip) AudioSystem.getLine(clipinfo);
    soundclip.open(audioInputStream);
} catch (LineUnavailableException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
soundclip.start();

Here we are just interested in simple playback, so we use a Clip:
Create a Clip reference.
Create a DataLine.Info object (remember DataLine is a superinterface of Clip) that contains the audio format information for our sound file.
Try to get a line matching clipinfo for our sound clip.
Open the line, which loads the audio data from the stream.
Start the soundclip playing.

So now we can play a clip, because output can come directly from the clip. Let's now look at streaming sound with a SourceDataLine.
Getting a SourceDataLine

This is the same process as getting a Clip. However, the SourceDataLine does not store all the audio data; rather, it buffers it in real time. So to output the audio, we have to read it from the AudioInputStream in real time.
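Obtaining the line itself mirrors the Clip code. A sketch (in practice the format would come from audioInputStream.getFormat() as on the earlier slide; the isLineSupported guard covers machines without audio output):

```java
import javax.sound.sampled.*;

public class SourceLineDemo {
    public static void main(String[] args) throws Exception {
        // Illustrative format; in practice use audioInputStream.getFormat().
        AudioFormat audioFormat = new AudioFormat(44100f, 16, 2, true, false);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);

        if (!AudioSystem.isLineSupported(info)) {
            System.out.println("No playback line available");
            return;
        }

        SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
        line.open(audioFormat); // allocate system resources for this format
        line.start();           // line now consumes data written to it
        // ... the real-time write loop goes here ...
        line.drain();           // wait until all buffered data has played
        line.close();
    }
}
```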
Getting a SourceDataLine

int nBytesRead = 0;
int nBytesWritten = 0;
byte[] buffer = new byte[EXTERNAL_BUFFER_SIZE];
while (nBytesRead != -1) {
    nBytesRead = audioInputStream.read(buffer, 0, buffer.length);
    if (nBytesRead >= 0)
        nBytesWritten = line.write(buffer, 0, nBytesRead);
}

This creates a buffer of size EXTERNAL_BUFFER_SIZE (a constant you define) and loops until the stream is exhausted. Because read can throw an IOException, in real code the loop must be wrapped in a try/catch block.

public int read(byte[] b, int off, int len) throws IOException - reads up to a specified maximum number of bytes of data from the audio stream, putting them into the given byte array.
Parameters:
b - the buffer into which the data is read
off - the offset, from the beginning of array b, at which the data will be written
len - the maximum number of bytes to read
Returns: the total number of bytes read into the buffer, or -1 if there is no more data because the end of the stream has been reached.

line.write(buffer, 0, nBytesRead) - writes the bytes from the buffer into the SourceDataLine, line.
Fin