About Audio Buffers
In audio programming, an audio buffer is a portion of memory used to temporarily store audio data for processing or playback. It is usually represented as a two-dimensional array of samples, where each row represents a channel of audio data (e.g., left and right channels for stereo audio). The size of the buffer is determined by the buffer length, which is typically specified in samples or milliseconds. The buffer length is chosen to balance processing latency, memory usage, and the risk of buffer underruns.
Using the AudioBuffer Class
The AudioBuffer class provides a convenient abstraction layer on top of the raw audio data arrays. It keeps the audio data and its properties together:
- data type of samples (float, int16)
- number of channels
- number of frames
- sample rate
- interleaved ([LRLRLR]) or non-interleaved ([LLL], [RRR]).
You can initialize an AudioBuffer instance using the AudioData class as preallocated memory (this is safe to do on the real-time thread). The AudioData class handles memory allocation and deallocation internally.
```cpp
// Preallocating memory on a non-realtime thread
AudioData<float> data(MONO, 480);

// Creating audio buffer from preallocated memory on the real-time thread
const AudioBuffer<float> myAudioBuffer(MONO, 480, SAMPLE_RATE_48k, data.getBuffer());
```
Converting Data Types
Different data types can be used for audio buffers depending on the application and the requirements for processing and playback. The most common data types used for audio buffers are integer and floating-point data types.
To convert between these data types, you can create an AudioBuffer of each type:

```cpp
AudioData<float> floatData(STEREO, 480);
AudioData<int16> shortData(STEREO, 480);

AudioBuffer<float> floatBuffer(STEREO, 480, SAMPLE_RATE_48k, INTERLEAVED, floatData.getBuffers());
AudioBuffer<int16> shortBuffer(STEREO, 480, SAMPLE_RATE_48k, INTERLEAVED, shortData.getBuffers());
```
Interleaving and Deinterleaving
Interleaved and deinterleaved audio refer to different ways of storing and processing multi-channel audio data.
In interleaved audio, the samples for each channel are stored consecutively in memory. For example, in stereo interleaved audio, the left and right channel samples are stored alternately, like this: LRLRLRLR. Interleaved audio is more memory-efficient and can be processed more efficiently by some audio processing algorithms, as the data for each channel is stored consecutively.
In deinterleaved audio, the samples for each channel are stored separately in memory. For example, in stereo deinterleaved audio, the left and right channel samples are stored in separate arrays. Deinterleaved audio can be more convenient for some processing tasks, as the data for each channel is stored separately and can be processed independently.
To interleave and deinterleave audio buffers, you can create an AudioBuffer with the desired channel layout:

```cpp
AudioData<float> interleavedData(STEREO, 480);
AudioData<float> nonInterleavedData(STEREO, 480);

AudioBuffer<float> interleavedBuffer(STEREO, 480, SAMPLE_RATE_48k, INTERLEAVED, interleavedData.getBuffers());
AudioBuffer<float> nonInterleavedBuffer(STEREO, 480, SAMPLE_RATE_48k, NONINTERLEAVED, nonInterleavedData.getBuffers());
```
All conversion operations are vectorized and therefore very fast.