This was going to be part of a longer blog that's coming soon, but I decided to write a separate post for this topic since it is important and I plan to use it in other projects in the future. If you are reading this from a future blog, hello from the past!
While the WebAudio API provides a variety of built-in nodes for common audio tasks (like gain control, filtering, and panning), there are cases where developers might need custom audio processing that isn’t covered by the built-in nodes.
`AudioWorkletNode` is a part of the WebAudio API that is designed to facilitate custom audio processing in web applications. It bridges the main thread (where your usual JavaScript code runs) with the audio processing thread (where the actual sound manipulation happens).
The system consists of two main parts:

- `AudioWorkletNode`, which resides on the main JavaScript thread.
- `AudioWorkletProcessor`, which runs on the audio processing thread.

The `AudioWorkletNode` and its associated `AudioWorkletProcessor` can communicate with each other. This allows parameters to be passed from the main thread to the audio processing thread, and any kind of custom messaging to be sent between them. That messaging goes through a `MessagePort`, much like communication with Web Workers.
The `AudioWorkletNode` is perfect for developing custom audio effects, synthesizers, audio visualizers, and more. Anytime you need to step outside the bounds of the standard WebAudio nodes, `AudioWorkletNode` gives you a good place to do so with optimal performance.
You can customize audio processing by extending the `AudioWorkletProcessor` class. To communicate with this processor, you'd typically use an `AudioWorkletNode` on the main thread. Below is a simplified example of how to implement this, with documentation inline.
```js
// my-processor.js
class MyProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors() {
    return [
      {
        name: "param1",
        defaultValue: 0.5,
        minValue: 0,
        maxValue: 1,
      },
      {
        name: "param2",
        defaultValue: 1000,
        minValue: 20,
        maxValue: 20000,
      },
      {
        // AudioParams are numeric, so a boolean-style parameter is expressed as 0/1.
        name: "param3",
        defaultValue: 1,
        minValue: 0,
        maxValue: 1,
      },
    ];
  }

  process(inputs, outputs, parameters) {
    // Accessing the current value of params
    const param1Value = parameters.param1[0];
    const param2Value = parameters.param2[0];
    const param3Value = parameters.param3[0];

    // For the sake of this demonstration, we won't apply these parameters to the example.
    // No-op: just pass the input directly to the output.
    for (let channel = 0; channel < inputs[0].length; ++channel) {
      // The inputs and outputs are arrays of multi-channel audio. For instance:
      // inputs[0] accesses the first input (a stereo input would have 2 channels: left and right).
      // inputs[0][0] accesses the first channel (left) of the first input.
      // inputs[0][0][i] accesses the ith sample of the first channel of the first input.
      outputs[0][channel].set(inputs[0][channel]);
    }
    return true;
  }
}

registerProcessor("my-processor", MyProcessor);
```
```js
// On the main thread:
const audioContext = new AudioContext();
await audioContext.audioWorklet.addModule("path/to/my-processor.js");

const myNode = new AudioWorkletNode(audioContext, "my-processor");

// Set parameter values
myNode.parameters.get("param1").setValueAtTime(0.75, audioContext.currentTime); // Set param1 to 0.75 immediately
myNode.parameters.get("param2").setValueAtTime(440, audioContext.currentTime);
myNode.parameters.get("param3").setValueAtTime(0, audioContext.currentTime + 2); // Set param3 after 2 seconds
```
The code above also shows the custom parameters you've defined for your processor being set from the main thread. The parameter passing system provides a mechanism to send data from the main thread to the audio processing thread. The `setValueAtTime` function allows for precise scheduling of parameter changes; you can also schedule changes in the future and they will be applied at the appropriate time! I have not tried to schedule parameter changes in the past, I'm guessing that is cursed in some way ⛧.
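To make the scheduling part concrete, here's a small sketch reusing the `myNode` and `audioContext` from the snippet above; it ramps `param1` smoothly to a new value with the standard `linearRampToValueAtTime` method instead of jumping there instantly:

```js
// Ramp param1 from 0.25 up to 1.0 over three seconds.
const param1 = myNode.parameters.get("param1");
param1.setValueAtTime(0.25, audioContext.currentTime);
param1.linearRampToValueAtTime(1.0, audioContext.currentTime + 3);
```

One thing to keep in mind on the processor side: while a parameter is being automated like this, the array the processor receives (e.g. `parameters.param1`) can contain one value per sample frame (typically 128) rather than a single value, so it's safest to check its length before assuming index `[0]` tells the whole story.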
You can also chain `AudioWorkletProcessor`s! When chaining `AudioWorkletProcessor`s, the output of one processor becomes the input to the next in the chain. Think of this like a conveyor belt in a factory: a product (or in this case, audio data) moves from one station to the next, getting modified at each step along the way.
Here’s an illustrative example in code.
```js
// step1-processor.js
class Step1Processor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < input.length; ++channel) {
      for (let i = 0; i < input[channel].length; ++i) {
        // Double the amplitude in Step 1
        output[channel][i] = input[channel][i] * 2;
      }
    }
    return true;
  }
}
registerProcessor("step1-processor", Step1Processor);

// step2-processor.js
class Step2Processor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < input.length; ++channel) {
      for (let i = 0; i < input[channel].length; ++i) {
        // Invert the phase in Step 2
        output[channel][i] = -input[channel][i];
      }
    }
    return true;
  }
}
registerProcessor("step2-processor", Step2Processor);

// step3-processor.js
class Step3Processor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < input.length; ++channel) {
      for (let i = 0; i < input[channel].length; ++i) {
        // Simply pass the audio data through in Step 3
        output[channel][i] = input[channel][i];
      }
    }
    return true;
  }
}
registerProcessor("step3-processor", Step3Processor);
```
```js
async function setupAudio() {
  const audioContext = new AudioContext();

  // Load the processors
  await audioContext.audioWorklet.addModule("path/to/step1-processor.js");
  await audioContext.audioWorklet.addModule("path/to/step2-processor.js");
  await audioContext.audioWorklet.addModule("path/to/step3-processor.js");

  // Create the nodes
  const step1Node = new AudioWorkletNode(audioContext, "step1-processor");
  const step2Node = new AudioWorkletNode(audioContext, "step2-processor");
  const step3Node = new AudioWorkletNode(audioContext, "step3-processor");

  // Chain the nodes: Source -> Step 1 -> Step 2 -> Step 3 -> Destination
  // (A real app would also assign source.buffer with audio data before starting.)
  const source = audioContext.createBufferSource();
  source
    .connect(step1Node)
    .connect(step2Node)
    .connect(step3Node)
    .connect(audioContext.destination);
  source.start();
}

setupAudio();
```
Sending data between an `AudioWorkletProcessor` (which runs on the audio rendering thread) and the main thread is achieved using the `MessagePort` interface.
```js
class MyProcessor extends AudioWorkletProcessor {
  sendToMainThread(data) {
    this.port.postMessage(data);
  }

  process(inputs, outputs, parameters) {
    // Example: sending data to the main thread
    this.sendToMainThread({ message: "Hello from processor!" });
    return true;
  }
}

registerProcessor("my-processor", MyProcessor);
```
On the main thread, after you create an instance of the `AudioWorkletNode` that corresponds to the `AudioWorkletProcessor`, you can set up an event listener to receive messages from the processor:
```js
// On the main thread:
const audioContext = new AudioContext();
await audioContext.audioWorklet.addModule("path/to/my-processor.js");

const myNode = new AudioWorkletNode(audioContext, "my-processor");

// Set up an event listener to receive messages from the processor
myNode.port.onmessage = (event) => {
  console.log("Received from processor:", event.data);
};
```
⚠️ Performance Warning ⚠️ Be cautious about sending messages from the `process` method, especially if they're sent frequently! It's usually best to send messages from `process` in response to specific conditions or events rather than on a regular, ongoing basis.
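As a rough illustration of that advice, here's a sketch of a hypothetical pass-through processor (the name and the once-per-second interval are arbitrary choices for this example, not anything mandated by the API) that counts processed sample frames using the `sampleRate` global available inside the worklet and only posts a message about once per second:

```js
class ThrottledProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.framesSinceLastMessage = 0;
  }

  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < input.length; ++channel) {
      // Pass the audio straight through.
      output[channel].set(input[channel]);
    }
    if (input.length > 0) {
      this.framesSinceLastMessage += input[0].length;
    }

    // Only message the main thread roughly once per second,
    // instead of on every (typically 128-frame) processing block.
    if (this.framesSinceLastMessage >= sampleRate) {
      this.port.postMessage({ message: "still processing" });
      this.framesSinceLastMessage = 0;
    }
    return true;
  }
}

registerProcessor("throttled-processor", ThrottledProcessor);
```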
And that's it! Hopefully this helps you get started with `AudioWorkletNode` and `AudioWorkletProcessor` if you found this blog without context. I'll be referencing back to this post in future blogs where I use `AudioWorkletNode` and `AudioWorkletProcessor` in my projects!

<3