Audio Worklets (#16449)
* Add first implementation of Wasm Audio Worklets, based on Wasm Workers.

Fix Chrome not continuing the AudioWorklet processing if a number 1 is returned from the callback - must return 'true' specifically.

Add new tone generator sample.

Adjust comment.

Use MessagePort.onmessage instead of add/removeEventListener(), since onmessage .start()s the MessagePort automatically.

Fix name noise-generator to tone-generator

Improve assertions.

* Add src/audio_worklet.js to eslint ignore

* Optimize code size

* Add emscripten_current_thread_is_audio_worklet(), remove ENVIRONMENT_IS_AUDIO_WORKLET.

* Fix to work with Closure

* Simplify MINIMAL_RUNTIME shell module preamble generation.

* Fix Closure and simple AudioWorklet creation

* Fix Module import for AudioWorklets, and move towards globalThis

* Disable -sAUDIO_WORKLET + -sTEXTDECODER=2 combo

* Fix shell case

* Mark AUDIO_WORKLET and SINGLE_FILE not mutually compatible

* Default runtime support

* Revert unnecessary code

* Fix merge

* AudioWorklets functioning in default runtime.

* Remove #if WASM_WORKERS checks since Wasm Workers is unconditionally depended on.

* Add more interactive tests

* Test Audio Worklets with Closure
juj committed Feb 5, 2023
1 parent 44b2c2a commit 5402fc9
Showing 33 changed files with 1,525 additions and 53 deletions.
1 change: 1 addition & 0 deletions .eslintrc.yml
@@ -22,6 +22,7 @@ ignorePatterns:
- "src/emrun_postjs.js"
- "src/worker.js"
- "src/wasm_worker.js"
- "src/audio_worklet.js"
- "src/wasm2js.js"
- "src/webGLClient.js"
- "src/webGLWorker.js"
2 changes: 2 additions & 0 deletions ChangeLog.md
@@ -40,6 +40,8 @@ See docs/process.md for more on how version tagging works.
- --pre-js and --post-js files are now fed through the JS preprocessor, just
like JS library files and the core runtime JS files. This means they can
now contain #if/#else/#endif blocks and {{{ }}} macro blocks. (#18525)
- Added support for Wasm-based AudioWorklets for realtime audio processing
(#16449)
- `-sEXPORT_ALL` can now be used to export symbols on the `Module` object
when used with `-sMINIMAL_RUNTIME` and `-sMODULARIZE` together. (#17911)
- The llvm version that emscripten uses was updated to 17.0.0 trunk.
40 changes: 35 additions & 5 deletions emcc.py
@@ -2360,7 +2360,7 @@ def phase_linker_setup(options, state, newargs):
if settings.WASM_WORKERS:
# TODO: After #15982 is resolved, these dependencies can be declared in library_wasm_worker.js
# instead of having to record them here.
wasm_worker_imports = ['_emscripten_wasm_worker_initialize']
wasm_worker_imports = ['_emscripten_wasm_worker_initialize', '___set_thread_state']
settings.EXPORTED_FUNCTIONS += wasm_worker_imports
building.user_requested_exports.update(wasm_worker_imports)
settings.DEFAULT_LIBRARY_FUNCS_TO_INCLUDE += ['_wasm_worker_initializeRuntime']
@@ -2369,6 +2369,19 @@ def phase_linker_setup(options, state, newargs):
settings.WASM_WORKER_FILE = unsuffixed(os.path.basename(target)) + '.ww.js'
settings.JS_LIBRARIES.append((0, shared.path_from_root('src', 'library_wasm_worker.js')))

settings.SUPPORTS_GLOBALTHIS = feature_matrix.caniuse(feature_matrix.Feature.GLOBALTHIS)

if settings.AUDIO_WORKLET:
if not settings.SUPPORTS_GLOBALTHIS:
exit_with_error('Must target recent enough browser versions that will support globalThis in order to target Wasm Audio Worklets!')
if settings.AUDIO_WORKLET == 1:
settings.AUDIO_WORKLET_FILE = unsuffixed(os.path.basename(target)) + '.aw.js'
settings.JS_LIBRARIES.append((0, shared.path_from_root('src', 'library_webaudio.js')))
if not settings.MINIMAL_RUNTIME:
# MINIMAL_RUNTIME exports these manually, since this export mechanism is placed
# in global scope that is not suitable for MINIMAL_RUNTIME loader.
settings.EXPORTED_RUNTIME_METHODS += ['stackSave', 'stackAlloc', 'stackRestore']

if settings.FORCE_FILESYSTEM and not settings.MINIMAL_RUNTIME:
# when the filesystem is forced, we export by default methods that filesystem usage
# may need, including filesystem usage from standalone file packager output (i.e.
@@ -3143,6 +3156,17 @@ def phase_final_emitting(options, state, target, wasm_target, memfile):
minified_worker = building.acorn_optimizer(worker_output, ['minifyWhitespace'], return_output=True)
write_file(worker_output, minified_worker)

# Deploy the Audio Worklet module bootstrap file (*.aw.js)
if settings.AUDIO_WORKLET == 1:
worklet_output = os.path.join(target_dir, settings.AUDIO_WORKLET_FILE)
with open(worklet_output, 'w') as f:
f.write(shared.read_and_preprocess(shared.path_from_root('src', 'audio_worklet.js'), expand_macros=True))

# Minify the audio_worklet.js file in optimized builds
if (settings.OPT_LEVEL >= 1 or settings.SHRINK_LEVEL >= 1) and not settings.DEBUG_LEVEL:
minified_worker = building.acorn_optimizer(worklet_output, ['minifyWhitespace'], return_output=True)
open(worklet_output, 'w').write(minified_worker)

# track files that will need native eols
generated_text_files_with_native_eols = []

@@ -3800,11 +3824,16 @@ def modularize():
return %(return_value)s
}
%(capture_module_function_for_audio_worklet)s
''' % {
'maybe_async': async_emit,
'EXPORT_NAME': settings.EXPORT_NAME,
'src': src,
'return_value': return_value
'return_value': return_value,
# Given the async nature of how the Module function and Module object come into existence in AudioWorkletGlobalScope,
# store the Module function under a different variable name so that AudioWorkletGlobalScope will be able to reference
# it without aliasing/conflicting with the Module variable name.
'capture_module_function_for_audio_worklet': 'globalThis.AudioWorkletModule = Module;' if settings.AUDIO_WORKLET and settings.MODULARIZE else ''
}

if settings.MINIMAL_RUNTIME and not settings.USE_PTHREADS:
@@ -3864,14 +3893,15 @@ def module_export_name_substitution():
logger.debug(f'Private module export name substitution with {settings.EXPORT_NAME}')
src = read_file(final_js)
final_js += '.module_export_name_substitution.js'
if settings.MINIMAL_RUNTIME and not settings.ENVIRONMENT_MAY_BE_NODE and not settings.ENVIRONMENT_MAY_BE_SHELL:
if settings.MINIMAL_RUNTIME and not settings.ENVIRONMENT_MAY_BE_NODE and not settings.ENVIRONMENT_MAY_BE_SHELL and not settings.AUDIO_WORKLET:
# On the web, with MINIMAL_RUNTIME, the Module object is always provided
# via the shell html in order to provide the .asm.js/.wasm content.
replacement = settings.EXPORT_NAME
else:
replacement = "typeof %(EXPORT_NAME)s !== 'undefined' ? %(EXPORT_NAME)s : {}" % {"EXPORT_NAME": settings.EXPORT_NAME}
src = re.sub(r'{\s*[\'"]?__EMSCRIPTEN_PRIVATE_MODULE_EXPORT_NAME_SUBSTITUTION__[\'"]?:\s*1\s*}', replacement, src)
write_file(final_js, src)
new_src = re.sub(r'{\s*[\'"]?__EMSCRIPTEN_PRIVATE_MODULE_EXPORT_NAME_SUBSTITUTION__[\'"]?:\s*1\s*}', replacement, src)
assert new_src != src, 'Unable to find Closure syntax __EMSCRIPTEN_PRIVATE_MODULE_EXPORT_NAME_SUBSTITUTION__ in source!'
write_file(final_js, new_src)
shared.get_temp_files().note(final_js)
save_intermediate('module_export_name_substitution')

4 changes: 4 additions & 0 deletions site/source/docs/api_reference/index.rst
@@ -25,6 +25,9 @@ high level it consists of:
- :ref:`wasm_workers`:
Enables writing multithreaded programs using a web-like API.

- :ref:`wasm_audio_worklets`:
Allows programs to implement audio processing nodes that run in a dedicated real-time audio processing thread context.

- :ref:`Module`:
Global JavaScript object that can be used to control code execution and access
exported methods.
@@ -64,4 +67,5 @@ high level it consists of:
fiber.h
proxying.h
wasm_workers
wasm_audio_worklets
advanced-apis
188 changes: 188 additions & 0 deletions site/source/docs/api_reference/wasm_audio_worklets.rst
@@ -0,0 +1,188 @@
.. _wasm_audio_worklets:

=======================
Wasm Audio Worklets API
=======================

The AudioWorklet extension to the `Web Audio API specification
<https://webaudio.github.io/web-audio-api/#AudioWorklet>`_ enables web sites
to implement custom AudioWorkletProcessor Web Audio graph node types.

These custom processor nodes process audio data in real time as part of the
audio graph processing flow, and enable developers to write low-latency
audio processing code in JavaScript.

The Emscripten Wasm Audio Worklets API is an Emscripten-specific integration
of these AudioWorklet nodes with WebAssembly. Wasm Audio Worklets enable
developers to implement AudioWorklet processing nodes in C/C++ code that
compiles down to WebAssembly, rather than implementing them in JavaScript.

Developing AudioWorkletProcessors in WebAssembly provides the benefit of
improved performance compared to JavaScript, and the Emscripten
Wasm Audio Worklets runtime has been carefully developed to guarantee
that no temporary JavaScript-level VM garbage is generated, eliminating
the possibility of GC pauses impacting audio synthesis performance.

The Audio Worklets API is based on the Wasm Workers feature. It is possible
to also enable the ``-pthread`` option while targeting Audio Worklets, but the
audio worklets will always run in a Wasm Worker, and not in a Pthread.

Development Overview
====================

Authoring Wasm Audio Worklets is similar to developing Audio Worklet based
applications in JS (see `MDN: Using AudioWorklets <https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Using_AudioWorklet>`_), with the exception that users do not manually implement
the JS code for the AudioWorkletProcessor classes that run in the AudioWorkletGlobalScope.
This is managed automatically by the Emscripten Wasm AudioWorklets runtime.

Instead, application developers only need to implement a small amount of JS <-> Wasm
(C/C++) interop to interact with the AudioContext and AudioNodes from Wasm.

Audio Worklets operate on a two-layer "class type and its instances" design:
first, one or more node types (or classes) called AudioWorkletProcessors are
defined, and then these processors are instantiated one or more times in the
audio processing graph as AudioWorkletNodes.

Once a class type is instantiated on the Web Audio graph and the graph is
running, a C/C++ function pointer callback is invoked for every 128
samples of the processed audio stream that flows through the node.

This callback executes on a dedicated audio processing thread with
real-time priority. Each Web Audio context uses only a single audio
processing thread: even if there are multiple audio node instances
(possibly from several different audio processors), they all share the
AudioContext's one dedicated audio thread rather than each running in a
separate thread of its own.

Note: audio worklet node processing is pull-mode, callback based. Audio
Worklets do not allow the creation of general-purpose real-time prioritized
threads: the audio callback code should execute as quickly as possible and
must not block. In other words, spinning in a custom ``for(;;)`` loop is not
possible.
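
To illustrate the data that the processing callback sees, here is a minimal sketch of indexing individual samples inside an ``AudioSampleFrame``. It assumes the layout used by the Emscripten glue code, where ``data`` holds each channel's 128 samples consecutively (planar, not interleaved); verify this against the ``<emscripten/webaudio.h>`` header shipped with your Emscripten version.

.. code-block:: cpp

   #include <emscripten/webaudio.h>

   // Assumed layout: output->data holds numberOfChannels consecutive blocks of
   // 128 samples each, i.e. sample k of channel c lives at data[c*128 + k].
   static void WriteSilence(AudioSampleFrame *output)
   {
     for(int c = 0; c < output->numberOfChannels; ++c)
       for(int k = 0; k < 128; ++k)
         output->data[c*128 + k] = 0.0f;
   }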

Programming Example
===================

To get hands-on experience with programming Wasm Audio Worklets, let's create a
simple audio node that outputs random noise through its output channels.

1. First, we will create a Web Audio context in C/C++ code. This is achieved
via the ``emscripten_create_audio_context()`` function. In a larger application
that integrates existing Web Audio libraries, you may already have an
``AudioContext`` created via some other library, in which case you would instead
register that context to be visible to WebAssembly by calling the function
``emscriptenRegisterAudioObject()``.

Then, we will instruct the Emscripten runtime to initialize a Wasm Audio Worklet
thread scope on this context. The code to achieve these tasks looks like:

.. code-block:: cpp

   #include <emscripten/webaudio.h>

   uint8_t audioThreadStack[4096];

   int main()
   {
     EMSCRIPTEN_WEBAUDIO_T context = emscripten_create_audio_context(0);

     emscripten_start_wasm_audio_worklet_thread_async(context, audioThreadStack, sizeof(audioThreadStack),
                                                      &AudioThreadInitialized, 0);
   }

2. When the worklet thread context has been initialized, we are ready to define our
own noise generator AudioWorkletProcessor node type:

.. code-block:: cpp

   void AudioThreadInitialized(EMSCRIPTEN_WEBAUDIO_T audioContext, EM_BOOL success, void *userData)
   {
     if (!success) return; // Check browser console in a debug build for detailed errors
     WebAudioWorkletProcessorCreateOptions opts = {
       .name = "noise-generator",
     };
     emscripten_create_wasm_audio_worklet_processor_async(audioContext, &opts, &AudioWorkletProcessorCreated, 0);
   }

3. After the processor has been initialized, we can instantiate and connect it as a node in the graph. Since audio playback
on web pages can only be initiated in response to user input, we also register an event handler
that resumes the audio context when the user clicks on the Canvas DOM element on the page.

.. code-block:: cpp

   void AudioWorkletProcessorCreated(EMSCRIPTEN_WEBAUDIO_T audioContext, EM_BOOL success, void *userData)
   {
     if (!success) return; // Check browser console in a debug build for detailed errors

     int outputChannelCounts[1] = { 1 };
     EmscriptenAudioWorkletNodeCreateOptions options = {
       .numberOfInputs = 0,
       .numberOfOutputs = 1,
       .outputChannelCounts = outputChannelCounts
     };

     // Create node
     EMSCRIPTEN_AUDIO_WORKLET_NODE_T wasmAudioWorklet = emscripten_create_wasm_audio_worklet_node(audioContext,
                                                            "noise-generator", &options, &GenerateNoise, 0);

     // Connect it to audio context destination
     EM_ASM({emscriptenGetAudioObject($0).connect(emscriptenGetAudioObject($1).destination)},
            wasmAudioWorklet, audioContext);

     // Resume context on mouse click
     emscripten_set_click_callback("canvas", (void*)audioContext, 0, OnCanvasClick);
   }

4. The code to resume the audio context on click looks like this:

.. code-block:: cpp

   EM_BOOL OnCanvasClick(int eventType, const EmscriptenMouseEvent *mouseEvent, void *userData)
   {
     EMSCRIPTEN_WEBAUDIO_T audioContext = (EMSCRIPTEN_WEBAUDIO_T)userData;
     if (emscripten_audio_context_state(audioContext) != AUDIO_CONTEXT_STATE_RUNNING) {
       emscripten_resume_audio_context_sync(audioContext);
     }
     return EM_FALSE;
   }

5. Finally, we can implement the audio callback that generates the noise:

.. code-block:: cpp

   #include <emscripten/em_math.h>

   EM_BOOL GenerateNoise(int numInputs, const AudioSampleFrame *inputs,
                         int numOutputs, AudioSampleFrame *outputs,
                         int numParams, const AudioParamFrame *params,
                         void *userData)
   {
     for(int i = 0; i < numOutputs; ++i)
       for(int j = 0; j < 128*outputs[i].numberOfChannels; ++j)
         outputs[i].data[j] = emscripten_random() * 0.2 - 0.1; // Warning: scale down audio volume by factor of 0.2, raw noise can be really loud otherwise
     return EM_TRUE; // Keep the graph output going
   }

And that's it! Compile the code with the linker flags ``-sAUDIO_WORKLET=1 -sWASM_WORKERS=1`` to enable targeting AudioWorklets.
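
Note that in step 1 above, ``emscripten_create_audio_context(0)`` created the context with default options. As a minimal sketch (not part of the sample above), the context can also be created with explicit attributes, or an ``AudioContext`` created elsewhere in JS can be registered for use from Wasm. The ``EmscriptenWebAudioCreateAttributes`` fields and the availability of ``emscriptenRegisterAudioObject()`` inside ``EM_ASM`` blocks are assumptions based on ``<emscripten/webaudio.h>`` and ``library_webaudio.js``:

.. code-block:: cpp

   #include <emscripten/emscripten.h>
   #include <emscripten/webaudio.h>

   EMSCRIPTEN_WEBAUDIO_T CreateContext(bool wrapExistingJsContext)
   {
     if (!wrapExistingJsContext)
     {
       // Create the AudioContext from C/C++ with explicit attributes
       // instead of passing 0 (defaults) as in step 1.
       EmscriptenWebAudioCreateAttributes attrs = {
         .latencyHint = "interactive", // "balanced", "interactive" or "playback"
         .sampleRate = 48000
       };
       return emscripten_create_audio_context(&attrs);
     }
     // Alternatively, wrap an AudioContext that was created on the JS side
     // (e.g. by another audio library) so that it becomes visible to Wasm.
     // The returned handle can then be passed to
     // emscripten_start_wasm_audio_worklet_thread_async() exactly as in step 1.
     return (EMSCRIPTEN_WEBAUDIO_T)EM_ASM_INT({
       return emscriptenRegisterAudioObject(new AudioContext({ sampleRate: 48000 }));
     });
   }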

Synchronizing audio thread with the main thread
===============================================

The Wasm Audio Worklets API builds on top of the Emscripten Wasm Workers feature. This means that the Wasm Audio Worklet thread is modeled as if it were a Wasm Worker thread.

To synchronize information between an Audio Worklet node and other threads in the application, there are three options:

1. Leverage the Web Audio "AudioParams" model. Each Audio Worklet Processor type is instantiated with a custom-defined set of audio parameters that can affect the audio computation with sample-precise accuracy. These parameters are passed to the audio processing function in the ``params`` array; a sketch of declaring and reading such a parameter appears after this list.

The main browser thread that created the Web Audio context can adjust the values of these parameters whenever desired. See `MDN function: setValueAtTime <https://developer.mozilla.org/en-US/docs/Web/API/AudioParam/setValueAtTime>`_ .

2. Data can be shared with the Audio Worklet thread using GCC/Clang lock-free atomic operations, Emscripten atomic operations, and the Wasm Worker API thread synchronization primitives. See :ref:`wasm_workers` for more information.

3. Utilize the ``emscripten_audio_worklet_post_function_*()`` family of event passing functions. These functions operate similarly to the ``emscripten_wasm_worker_post_function_*()`` family. Posting functions enables a ``postMessage()`` style of communication, where the audio worklet thread and the main browser thread can send messages (function call dispatches) to each other; a sketch appears after this list.
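
For illustration, here is a minimal sketch of option 3 that extends the noise generator above to report its progress to the main browser thread. The ``ReportSamplesProduced()`` helper and the sample counter are hypothetical additions, and the ``emscripten_audio_worklet_post_function_vi()`` signature and the ``EMSCRIPTEN_AUDIO_MAIN_THREAD`` target are assumed to be as declared in ``<emscripten/webaudio.h>``:

.. code-block:: cpp

   #include <stdio.h>
   #include <emscripten/emscripten.h>
   #include <emscripten/webaudio.h>

   // Hypothetical helper; the posted call executes on the main browser thread.
   void ReportSamplesProduced(int samplesProduced)
   {
     printf("Audio thread has produced %d samples so far.\n", samplesProduced);
   }

   // Runs on the real-time audio thread for every 128-sample block.
   EM_BOOL GenerateNoiseAndReport(int numInputs, const AudioSampleFrame *inputs,
                                  int numOutputs, AudioSampleFrame *outputs,
                                  int numParams, const AudioParamFrame *params,
                                  void *userData)
   {
     static int samplesProduced = 0;
     for(int i = 0; i < numOutputs; ++i)
       for(int j = 0; j < 128*outputs[i].numberOfChannels; ++j)
         outputs[i].data[j] = emscripten_random() * 0.2 - 0.1;
     samplesProduced += 128;

     // Dispatch a function call to the main thread. EMSCRIPTEN_AUDIO_MAIN_THREAD (0)
     // targets the main browser thread; the "_vi" suffix denotes a void(int) signature.
     // For brevity this posts on every 128-sample block; a real application would throttle this.
     emscripten_audio_worklet_post_function_vi(EMSCRIPTEN_AUDIO_MAIN_THREAD,
                                               ReportSamplesProduced, samplesProduced);
     return EM_TRUE; // Keep the graph output going
   }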

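Similarly, for option 1, a processor type can declare custom audio parameters when it is created, and the processing callback can read them from the ``params`` array. A minimal sketch, assuming the ``WebAudioParamDescriptor``, ``WebAudioWorkletProcessorCreateOptions`` and ``AudioParamFrame`` definitions from ``<emscripten/webaudio.h>``, and reusing the ``AudioWorkletProcessorCreated()`` callback from the tutorial above:

.. code-block:: cpp

   #include <emscripten/webaudio.h>

   // Declare one custom audio parameter for the processor type. On the C side,
   // parameters are identified by their index (0 here); the "gain" meaning is
   // just an illustrative label.
   void CreateProcessorWithParams(EMSCRIPTEN_WEBAUDIO_T audioContext)
   {
     WebAudioParamDescriptor paramDescriptors[1] = {
       { .defaultValue = 0.5f, .minValue = 0.0f, .maxValue = 1.0f,
         .automationRate = WEBAUDIO_PARAM_K_RATE }
     };
     WebAudioWorkletProcessorCreateOptions opts = {
       .name = "noise-generator",
       .numAudioParams = 1,
       .audioParamDescriptors = paramDescriptors
     };
     emscripten_create_wasm_audio_worklet_processor_async(audioContext, &opts, &AudioWorkletProcessorCreated, 0);
   }

   // In the processing callback, params[0] carries the current value(s) of the
   // parameter; for a k-rate parameter, params[0].data[0] holds one value per
   // 128-sample block.
   float ReadGain(int numParams, const AudioParamFrame *params)
   {
     return numParams > 0 ? params[0].data[0] : 1.0f;
   }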

More Examples
=============

See the directory ``tests/webaudio/`` for more code examples of the Web Audio API and Wasm Audio Worklets.

