Audio Queue

 

What is it?

The audio queue is an implementation of an event queue: a queue of events, messages, or any other objects that require processing. Using this technique, you decouple when an event is sent from when it is processed.

The events are stored in a FIFO order using a circular buffer.

A linear array holds the data, with pointers to the head and tail of the buffer. The head is incremented when something is removed from the array and the tail when something is added, and both wrap around to the beginning of the array when they reach the end.
Linear implementation of a circular buffer (used in the audio queue).

When the buffer is full, new data is written starting at the beginning of the buffer, overwriting the old. References to the head (where the next item is read from) and the tail (where new items are added) are maintained. This avoids the need for dynamic allocation, as well as the unnecessary shifting of elements that would occur with built-in lists.
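To make the head/tail mechanics concrete, here is a minimal, Unity-independent sketch of such a ring buffer. The names (`RingQueue`, `Enqueue`, `TryDequeue`) are illustrative and not part of the article's `AudioPlayer2D` code; because `head == tail` means "empty", the buffer holds at most one fewer item than its capacity before the oldest entry is dropped.

```csharp
// Minimal fixed-size ring buffer mirroring the head/tail scheme described
// above. Illustrative sketch only; names are not from the article's code.
public class RingQueue<T>
{
    private readonly T[] items;
    private int head; // index of the next item to read
    private int tail; // index at which the next item is written

    public RingQueue(int capacity)
    {
        items = new T[capacity];
    }

    // head == tail is the "empty" convention used by the audio queue.
    public bool IsEmpty { get { return head == tail; } }

    public void Enqueue(T item)
    {
        items[tail] = item;
        tail = (tail + 1) % items.Length;
        // If the tail catches up with the head, the oldest pending item is
        // dropped by advancing the head (the overwrite policy described above).
        if (tail == head)
            head = (head + 1) % items.Length;
    }

    public bool TryDequeue(out T item)
    {
        if (IsEmpty)
        {
            item = default(T);
            return false;
        }
        item = items[head];
        head = (head + 1) % items.Length;
        return true;
    }
}
```

Note that both indices only ever move forward and wrap with a modulo, so no elements are shifted and no allocation happens after construction.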

Audio clips are added to the queue using an event system (see this post for more information). However, you can add audio to the queue in any manner you see fit.

 

Why should you use it?

Decoupling! In our audio example, systems (such as enemy AI, the environment, etc.) can simply add an audio clip to the queue. They do not need to know how to play clips. The sender and receiver are decoupled.

The systems adding audio clips to the queue are not blocked while the program attempts to play the clips. A system adds a clip to the queue and then continues on its merry way, knowing that it is no longer responsible for the clip.

By implementing an audio queue, you can prevent the same audio clip being played multiple times in the same frame. For example, in a shooter, if you’re surrounded by ten enemies and they all fire at you at the same time, then ten requests to play the same clip are initiated. Without some way to handle this, the clips’ waveforms are added together and the shooting audio clip sounds ten times louder than you intended. Ouch! The audio queue can check for existing clips of the same type and play only one.

You can easily manage the resources of your audio engine by limiting the number of clips that can be added to the queue. This is useful on mobile and other platforms with limited resources. You can also run the audio system on a separate thread to increase performance further.

 

The code

The full code listing for the audio queue is reproduced below. It can also be found on GitHub (link included at the top of the page).

using UnityEngine;

/// <summary>
/// Listens for audio events. Adds audio events to a circular array of size
/// MaxPending and plays the associated audio clip each time step.
/// </summary>
[RequireComponent(typeof(AudioSource))]
public class AudioPlayer2D : MonoBehaviour
{
    /// <summary>
    /// The maximum number of queued clips. Oldest clips are overwritten when max reached.
    /// </summary>
    public int MaxPending = 30;

    private AudioSource source;

    /// <summary>
    /// Queue of events to play.
    /// </summary>
    private IAudioEvent2D[] pending;

    /// <summary>
    /// Reference to current head of circular array (index of the next audio event to play).
    /// </summary>
    private int head;

    /// <summary>
    /// Index at which to add new events.
    /// </summary>
    private int tail;

    void Awake()
    {
        source = GetComponent<AudioSource>();
        source.spatialBlend = 0f;
    }

    void OnEnable()
    {
        head = tail = 0;
        pending = new IAudioEvent2D[MaxPending];
        Events.instance.AddListener<AudioEvent2D>(OnAudio);
    }

    void OnDisable()
    {
        Events.instance.RemoveListener<AudioEvent2D>(OnAudio);
    }

    /// <summary>
    /// Plays one pending clip per frame.
    /// </summary>
    void Update()
    {
        if (head == tail)
            return;

        Debug.Log("Playing AudioClip: " + pending[head].Audio.name);
        source.PlayOneShot(pending[head].Audio);
        head = (head + 1) % MaxPending;
    }

    void OnAudio(IAudioEvent2D e)
    {
        // Do not add duplicate events. Prevents the situation where the same
        // audio clips are played in parallel, increasing the effect's volume.
        for (int i = head; i != tail; i = (i + 1) % MaxPending)
        {
            if (pending[i].Audio.name.Equals(e.Audio.name))
            {
                return;
            }
        }

        pending[tail] = e;
        tail = (tail + 1) % MaxPending;
    }
}

 

How do you use it?

Although this article is not about the event system that the audio queue uses, I thought it could prove useful to include a quick example of how events work in relation to the queue.

All audio events inherit from the same interface.

/// <summary>
/// Implemented by all audio events. Provides access to audio clip.
/// </summary>
public interface IAudioEvent2D
{
    AudioClip Audio { get; }
}

An example of a concrete audio event class.

/// <summary>
/// Raised when audio clip should be played. Picked up by AudioPlayer2D.
/// </summary>
public class AudioEvent2D : GameEvent, IAudioEvent2D
{
    private AudioClip audioClip;

    public AudioClip Audio
    {
        get
        {
            return audioClip;
        }
    }

    public AudioEvent2D(AudioClip audioClip)
    {
        this.audioClip = audioClip;
    }
}

Separate events are created for 2D and 3D data. The 3D audio clips also include spatial data in the form of a vector representing where the clip originated.
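The post does not reproduce the 3D variant, but based on the 2D event above it might look something like the following. The names (`IAudioEvent3D`, `AudioEvent3D`, `Position`) are assumptions for illustration, not taken from the article's repository.

```csharp
using UnityEngine;

/// <summary>
/// Hypothetical 3D counterpart to IAudioEvent2D: exposes the clip plus the
/// world position it originated from. Names are illustrative only.
/// </summary>
public interface IAudioEvent3D
{
    AudioClip Audio { get; }
    Vector3 Position { get; }
}

public class AudioEvent3D : GameEvent, IAudioEvent3D
{
    private AudioClip audioClip;
    private Vector3 position;

    public AudioClip Audio { get { return audioClip; } }
    public Vector3 Position { get { return position; } }

    public AudioEvent3D(AudioClip audioClip, Vector3 position)
    {
        this.audioClip = audioClip;
        this.position = position;
    }
}
```

A 3D player could then consume the position when playing the clip, for example via Unity's `AudioSource.PlayClipAtPoint(e.Audio, e.Position)`.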

Once created, the events can be raised by any class as shown below (as long as the event scripts have been imported from the above GitHub project).

Events.instance.Raise(new AudioEvent2D(audioClip));

And that’s it! The audio clip will be added to the audio queue.
