Chapter 4: Playing Back Audio

Playback is sometimes referred to as presentation or rendering. These are general terms that are applicable to other kinds of media besides sound. The essential feature is that a sequence of data is delivered somewhere for eventual perception by a user. If the data is time-based, as sound is, it must be delivered at the correct rate. With sound even more than video, it's important that the rate of data flow be maintained, because interruptions to sound playback often produce loud clicks or irritating distortion. The Java™ Sound API is designed to help application programs play sounds smoothly and continuously, even very long sounds. The previous chapter discussed how to obtain a line from the audio system or from a mixer. This chapter shows how to play sound through a line.
There are two kinds of line that you can use for playing sound: a Clip and a SourceDataLine. The chief difference between the two is that a Clip is preloaded with all of its audio data before playback begins, whereas a SourceDataLine accepts a stream of data that you keep writing to it during playback. A Clip is convenient for short sounds that you might play more than once or play from arbitrary positions; a SourceDataLine is suited to longer sounds, or sounds whose data isn't all available in advance.
Using a Clip
You obtain a Clip as described in the previous chapter: construct a DataLine.Info object with Clip.class as the first argument, and pass this DataLine.Info as an argument to the getLine method of AudioSystem or Mixer.

Setting Up the Clip for Playback
Obtaining a line just means you've gotten a way to refer to it; getLine doesn't actually reserve the line for you. To reserve the clip for your program's use, and to load it with audio data, you open it with one of the following Clip methods:

    void open(AudioInputStream stream)
    void open(AudioFormat format, byte[] data, int offset, int bufferSize)

Despite the bufferSize argument in the second open method above, Clip (unlike SourceDataLine) includes no methods for writing new data to the buffer. The bufferSize argument here just specifies how much of the byte array to load into the clip. It's not a buffer into which you can subsequently load more data, as you can with a SourceDataLine's buffer.
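The steps above can be sketched as follows. This is a minimal sketch, not code from this guide: the class name ClipSetup is arbitrary, and to keep the example self-contained it synthesizes one second of silence in memory rather than reading a file (in a real program you would more likely pass the result of AudioSystem.getAudioInputStream(new File(...)) to open). The playback calls are guarded with isLineSupported so the example degrades gracefully on systems with no suitable mixer.

```java
import javax.sound.sampled.*;
import java.io.ByteArrayInputStream;

public class ClipSetup {
    // Builds a short AudioInputStream in memory: 16-bit mono PCM,
    // 44100 frames (one second) of silence, 2 bytes per frame.
    static AudioInputStream makeStream() {
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        byte[] data = new byte[44100 * 2];
        return new AudioInputStream(
                new ByteArrayInputStream(data), format,
                data.length / format.getFrameSize());
    }

    public static void main(String[] args) throws Exception {
        AudioInputStream stream = makeStream();
        DataLine.Info info = new DataLine.Info(Clip.class, stream.getFormat());
        if (AudioSystem.isLineSupported(info)) {
            Clip clip = (Clip) AudioSystem.getLine(info);
            clip.open(stream);   // reserves the line and loads all the data
            clip.start();
            clip.drain();        // wait until playback has finished
            clip.close();
        }
    }
}
```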
After opening the clip, you can specify at what point in the data it should start playback, using Clip's setFramePosition or setMicrosecondPosition methods; otherwise playback starts at the beginning. You can also configure the clip to play a region repeatedly, using the setLoopPoints method.

Starting and Stopping Playback
When you're ready to start playback, simply invoke the start method. To play the clip repeatedly instead, invoke loop, specifying how many additional times the data should be played, or Clip.LOOP_CONTINUOUSLY for indefinite looping. To stop or pause the clip, invoke the stop method.
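As a sketch of the looping calls, assuming an already-opened clip (the class name ClipLooper and the helper method are illustrative, not part of the API):

```java
import javax.sound.sampled.*;

public class ClipLooper {
    // Starts looping playback of an already-opened clip over the given
    // region. The frame numbers are placeholders for your own values.
    static void playInLoop(Clip clip, int startFrame, int endFrame) {
        clip.setLoopPoints(startFrame, endFrame); // region to repeat
        clip.setFramePosition(startFrame);        // begin at the loop start
        clip.loop(Clip.LOOP_CONTINUOUSLY);        // repeat until stop()
    }
}
```

Passing -1 as the endFrame makes the loop region extend to the end of the clip.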
Using a SourceDataLine
Obtaining a SourceDataLine is similar to obtaining a Clip: construct a DataLine.Info object with SourceDataLine.class as the first argument, and pass it to the getLine method of AudioSystem or Mixer.

Setting Up the SourceDataLine for Playback
Opening the line reserves it for your program's use, just as with a Clip:

    void open(AudioFormat format)

Notice that when you open a SourceDataLine, you don't associate any sound data with the line yet, unlike opening a Clip. Instead, you just specify the format of the audio data you want to play. The system chooses a default buffer length.
You can also stipulate a certain buffer length in bytes, using this variant:

    void open(AudioFormat format, int bufferSize)
For consistency with similar methods, the bufferSize argument is expressed in bytes, but it must correspond to an integral number of frames.
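The open call with an explicit buffer size might look like the following sketch. The class name OpenSourceLine and the helper halfSecondBuffer are illustrative; the isLineSupported guard keeps the example harmless on systems with no suitable mixer.

```java
import javax.sound.sampled.*;

public class OpenSourceLine {
    // Returns enough bytes for half a second of audio in the given
    // format, which is automatically an integral number of frames.
    static int halfSecondBuffer(AudioFormat format) {
        return (int) (format.getFrameRate() / 2) * format.getFrameSize();
    }

    public static void main(String[] args) throws Exception {
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
        if (!AudioSystem.isLineSupported(info)) {
            return; // no suitable mixer on this system
        }
        SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
        line.open(format, halfSecondBuffer(format)); // request 88200 bytes
        System.out.println("actual buffer: " + line.getBufferSize());
        line.close();
    }
}
```

The buffer size you request is only a request; invoke getBufferSize after opening to learn the size the line actually chose.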
How would you select a buffer size? It depends on your program's needs. To start with, shorter buffer sizes mean less latency: when you send new data, you hear it sooner. For some application programs, particularly highly interactive ones, this kind of responsiveness is important. For example, in a game, the onset of playback might need to be tightly synchronized with a visual event. Such programs might need a latency of less than 0.1 second. As another example, a conferencing application needs to avoid delays in both playback and capture. However, many application programs can afford a greater delay, up to a second or more, because it doesn't matter exactly when the sound starts playing, as long as the delay doesn't confuse or annoy the user. This might be the case for an application program that streams a large audio file using one-second buffers. The user probably won't care if playback takes a second to start, because the sound itself lasts so long and the experience isn't highly interactive.

On the other hand, shorter buffer sizes also mean a greater risk that you'll fail to write data fast enough into the buffer. If that happens, the audio data will contain discontinuities, which will probably be audible as clicks or breakups in the sound. Shorter buffer sizes also mean that your program has to work harder to keep the buffers filled, resulting in more intensive CPU usage. This can slow down the execution of other threads in your program, not to mention other programs.
So an optimal value for the buffer size is one that minimizes latency just to the degree that's acceptable for your application program, while keeping it large enough to reduce the risk of buffer underflow and to avoid unnecessary consumption of CPU resources. For a program like a conferencing application, delays are more annoying than low-fidelity sound, so a small buffer size is preferable. For streaming music, an initial delay is acceptable, but breakups in the sound are not, so a larger buffer size, say a second's worth, is preferable. (Note that high sample rates make the buffers larger in terms of the number of bytes, which are the units for measuring buffer size in the DataLine API.)
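The arithmetic behind this trade-off can be made concrete. In the sketch below (the class and method names are illustrative), a buffer size is derived from a target latency in seconds and rounded down to an integral number of frames, as the open method requires:

```java
import javax.sound.sampled.AudioFormat;

public class BufferSizing {
    // Returns a buffer size in bytes holding roughly `seconds` of audio,
    // rounded down to an integral number of frames.
    static int bufferSizeFor(AudioFormat format, double seconds) {
        int bytes = (int) (format.getFrameRate()
                * format.getFrameSize() * seconds);
        return bytes - (bytes % format.getFrameSize());
    }

    public static void main(String[] args) {
        // CD-quality audio: 44100 frames/s, 4 bytes per stereo frame
        AudioFormat cd = new AudioFormat(44100f, 16, 2, true, false);
        System.out.println(bufferSizeFor(cd, 1.0)); // prints 176400
        System.out.println(bufferSizeFor(cd, 0.1)); // prints 17640
    }
}
```

Note how the one-second buffer for CD-quality stereo is 176400 bytes; doubling the sample rate would double that byte count for the same latency.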
Instead of using the open methods described above, it's also possible to open a SourceDataLine with Line's open() method, taking no arguments. In this case, the line is opened with a default audio format and buffer size, which you can query with DataLine's getFormat and getBufferSize methods.

Starting and Stopping Playback
Once the SourceDataLine is open, you can begin playback by invoking DataLine's start method. The start method permits the line to begin playing sound as soon as there's any data in its buffer. You place data in the buffer with the following method:

    int write(byte[] b, int offset, int length)

The offset into the array is expressed in bytes, as is the array's length.
The line begins sending data as soon as possible to its mixer. When the mixer itself delivers the data to its target, the SourceDataLine generates a START event. (In a typical implementation of the Java Sound API, the delay between the moment that the source line delivers data to the mixer and the moment that the mixer delivers the data to its target is negligible, much less than the duration of one sample.)
So how do you know how much data to write to the buffer, and when to send the second batch of data? Fortunately, you don't need to time the second invocation of write to synchronize with the end of the first buffer! Instead, you can take advantage of the write method's blocking behavior. The method returns as soon as your data has been written to the buffer; it doesn't wait until all the data in the buffer has finished playing. If you try to write more data than the buffer can currently hold, write blocks (doesn't return) until room becomes available, so you can simply keep invoking it in a loop.
The following code fragment illustrates one way to use a SourceDataLine for playback:
    // read chunks from a stream and write them to a source data line
    line.start();
    while (total < totalToRead && !stopped) {
        numBytesRead = stream.read(myData, 0, numBytesToRead);
        if (numBytesRead == -1) break;
        total += numBytesRead;
        line.write(myData, 0, numBytesRead);
    }

If you don't want the write method to block, you can first invoke the available method (inside the loop) to find out how many bytes can be written without blocking, and then limit the numBytesToRead variable to this number, before reading from the stream. In the example given, though, blocking won't matter much, since the write method is invoked inside a loop that won't complete until the last buffer is written in the final loop iteration. Whether or not you use the blocking technique, you'll probably want to invoke this playback loop in a separate thread from the rest of the application program, so that your program doesn't appear to freeze when playing a long sound. On each iteration of the loop, you can test whether the user has requested playback to stop. Such a request needs to set the stopped boolean, used in the code above, to true.
When you've written the last of your data, you should wait until it has finished playing before closing the line. You can do this by invoking DataLine's drain method, which blocks until all unplayed data has been drained from the buffer:

    line.write(b, offset, numBytesToWrite); // the final invocation of write
    line.drain();
    line.stop();
    line.close();
    line = null;

You can intentionally stop playback prematurely, of course. For example, the application program might provide the user with a Stop button. Invoke DataLine's stop method to stop playback immediately, even in the middle of a buffer. This leaves any unplayed data in the buffer, so that if you subsequently invoke start, the playback resumes where it left off. If that's not what you want to happen, you can discard the data left in the buffer by invoking flush.
Monitoring a Line's Status
Once you have started a sound playing, how do you find out when it's finished? We saw one solution above: invoking the drain method, which doesn't return until playback is complete. Another approach is to register to receive notifications from the line whenever it changes its state.
Any object in your program that implements the LineListener interface can register to receive such notifications, using this Line method:

    public void addLineListener(LineListener listener)

Whenever the line opens, closes, starts, or stops, it sends an update message to all its listeners. Your object can query the LineEvent that it receives. First you might invoke LineEvent.getLine to make sure the line that stopped is the one you care about. In the case we're discussing here, you want to know if the sound is finished, so you see whether the LineEvent is of type STOP. If it is, you might check the sound's current position, which is also stored in the LineEvent object, and compare it to the sound's length (if known) to see whether it reached the end and wasn't stopped by some other means (such as the user's clicking a Stop button, although you'd probably be able to determine that cause elsewhere in your code).
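A minimal listener along these lines might look like the following sketch. The class name StopWatcher and the use of a CountDownLatch are illustrative choices, not part of the Java Sound API; the latch simply gives another thread something to wait on until the STOP event arrives.

```java
import javax.sound.sampled.*;
import java.util.concurrent.CountDownLatch;

// Lets another thread block until the line fires a STOP event.
public class StopWatcher implements LineListener {
    final CountDownLatch stopped = new CountDownLatch(1);

    public void update(LineEvent event) {
        // If this listener were registered on several lines,
        // event.getLine() could be checked here as well.
        if (event.getType() == LineEvent.Type.STOP) {
            stopped.countDown();
        }
    }
}
```

Typical use: register the watcher with line.addLineListener(watcher), invoke start, then call watcher.stopped.await() on another thread; it returns when the line stops, whether because the data played out or because stop was invoked.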
Along the same lines, if you need to know when the line is opened, closed, or started, you use the same mechanism.

Synchronizing Playback on Multiple Lines
If you're playing back multiple tracks of audio simultaneously, you probably want to have them all start and stop at exactly the same time. Some mixers facilitate this behavior with their synchronize method, which lets you apply operations such as open, close, start, and stop to a group of data lines instead of a single line at a time.
To find out whether a particular mixer offers this feature for a specified group of data lines, invoke the Mixer interface's isSynchronizationSupported method:

    boolean isSynchronizationSupported(Line[] lines, boolean maintainSync)

The first parameter specifies a group of specific data lines, and the second parameter indicates the accuracy with which synchronization must be maintained. If the second parameter is true, the query is asking whether the mixer is capable of maintaining sample-accurate precision in controlling the specified lines at all times; otherwise, precise synchronization is required only during start and stop operations, not throughout playback.
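A query over the installed mixers might be sketched as follows. The class name SyncQuery is arbitrary, and the empty line array is only a placeholder so the example runs anywhere; in a real program you would pass the group of data lines you intend to start and stop together.

```java
import javax.sound.sampled.*;

public class SyncQuery {
    public static void main(String[] args) {
        for (Mixer.Info mixerInfo : AudioSystem.getMixerInfo()) {
            Mixer mixer = AudioSystem.getMixer(mixerInfo);
            // Placeholder group; substitute your own data lines here.
            Line[] group = new Line[0];
            boolean sampleAccurate =
                    mixer.isSynchronizationSupported(group, true);
            boolean startStopOnly =
                    mixer.isSynchronizationSupported(group, false);
            System.out.println(mixerInfo.getName()
                    + ": sample-accurate=" + sampleAccurate
                    + ", start/stop-only=" + startStopOnly);
        }
    }
}
```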
Processing the Outgoing Audio

Some source data lines have signal-processing controls, such as gain, pan, reverb, and sample-rate controls. Similar controls, especially gain controls, might be present on the output ports as well. For more information on how to determine whether a line has such controls, and how to use them if it does, see Chapter 6, "Processing Audio with Controls."

Copyright © 2000, Sun Microsystems Inc. All rights reserved.