Overview
The audio sampler allows you to sample an audio track in your effect graph.
| Property | Type | Description |
| --- | --- | --- |
| ChannelGroup | string | Name of the global audio channel to sample. |
| Mode | enum | Sampling mode for the audio: “Spectrum” (audio frequencies) or “Waveform”. |

| Output pins | Type | Description |
| --- | --- | --- |
| Data | dataAudio | The audio data that can be plugged into the AudioSample node. |

In addition to the above, the node exposes all the common sampler properties.
The main difference with other sampler data nodes is that the audio track it samples is specified directly in the “Scene properties” panel, not in the SamplerDataAudio node itself. The node only specifies which audio channel to sample from; by default, it samples from the “master” audio source.
Usage
Start by importing a simple audio file into your PopcornFX project. You can then play it in the scene using the “Scene properties” panel.
Enable the “Sound” scene backdrop:
And select your audio file in the “Sound Path” property:
To test the particles reacting to audio data, we will create a basic effect with an event multiplier and a particle layer:
The event multiplier spawns 20 particles per second, indefinitely:
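If it helps to see the mechanics, here is a minimal sketch of what such a constant-rate spawner does (plain Python for illustration, not PopcornFX script; `spawn_particle` and the frame loop are hypothetical):

```python
# Minimal sketch of a constant-rate spawner (20 particles/second, forever).
# PopcornFX handles this inside the event multiplier; this is illustrative only.

SPAWN_RATE = 20.0  # particles per second

def spawn_particle():
    pass  # a real implementation would initialize a particle here

def spawn_loop(frame_durations):
    accumulator = 0.0
    for dt in frame_durations:        # dt = frame duration in seconds
        accumulator += SPAWN_RATE * dt
        whole = int(accumulator)      # emit only whole particles
        accumulator -= whole          # keep the fractional remainder
        for _ in range(whole):
            spawn_particle()
```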
You can now jump into the “MyParticles” layer to drive the particles' behavior using the audio track.
We’ll change the particles' “Life” to 5 seconds and build a simple graph that sends a straight line of particles off sideways. A ribbon renderer will connect all those particles into a continuous strip:
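Conceptually, the layout amounts to something like this (a Python sketch under stated assumptions; the axis, speed, and function names are illustrative, not PopcornFX API):

```python
# Each particle drifts away from the effect origin at a constant speed, so a
# steady spawn rate yields a straight line that a ribbon renderer can connect.

LIFE = 5.0                   # seconds, matches the "Life" set on the layer
SIDE_AXIS = (1.0, 0.0, 0.0)  # assumed "sideways" direction
SPEED = 1.0                  # arbitrary units per second

def position(age, origin=(0.0, 0.0, 0.0)):
    # Older particles are farther from the origin along the side axis.
    return tuple(o + a * SPEED * age for o, a in zip(origin, SIDE_AXIS))
```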
We will now offset the particles along the “up” axis based on the audio sampler. To do that, let’s drop the “Audio sample” (rig) node into our graph:
In the “Audio” node, make sure the “Channel” property has the same name as the one specified in your “Scene properties”, and select a “Mode” (by default, the mode is set to “Spectrum”, which means you will sample the audio frequencies).
We will plug self.lifeRatio as the cursor of the AudioSample node, meaning that freshly spawned particles (closest to the effect origin) sample the low frequencies, then sample higher and higher frequencies over their lifetime.
We then multiply the output value of the AudioSample node by scene.axisUp (scaled by 30 to make the small frequency values easier to see) and use the result to offset the particle positions upward: the higher the sampled value, the larger the offset.
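Per particle, the graph computes something like the following (a Python sketch; the spectrum array stands in for the AudioSample node's data, and the helper names are hypothetical):

```python
# Offset each particle upward by the audio intensity sampled at its lifeRatio.

UP_AXIS = (0.0, 1.0, 0.0)  # stands in for scene.axisUp
SCALE = 30.0               # makes the small frequency values visible

def sample_audio(spectrum, life_ratio):
    # life_ratio in [0, 1] is the cursor: young particles read the low
    # frequencies, old particles read the high ones.
    index = min(int(life_ratio * len(spectrum)), len(spectrum) - 1)
    return spectrum[index]

def displaced_position(base_position, spectrum, life_ratio):
    intensity = sample_audio(spectrum, life_ratio)
    return tuple(p + up * intensity * SCALE
                 for p, up in zip(base_position, UP_AXIS))
```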
We end up with the following graph:
That produces a fuzzy frequency pattern:
You can now play with the different “AudioSample” template options to get a smoother result. You can right-click the “AudioSample” node and select “Show documentation” to get a description of the different properties used by this node.
The most interesting options are in the Filtering property category.
The Convolution Level allows you to average the output values. For example, a convolution of 0.5 will average the samples two by two, so you end up with half as many values to sample; a convolution of 1 will just give you the average intensity over all the frequency ranges (or the average intensity of the waveform, depending on the audio mode) for your sound. If you are familiar with 3D rendering APIs, you can think of this as a kind of mipmap filtering of the output data.
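Here is one plausible reading of that behavior as a sketch (plain Python; this is an interpretation of the description above, not the engine's actual code):

```python
# "Convolution Level" interpreted as mipmap-style averaging:
#   level 0.0 -> raw samples
#   level 0.5 -> neighbouring pairs averaged, half as many values
#   level 1.0 -> a single value: the average over the whole range

def convolve(samples, level):
    out_count = max(1, round(len(samples) * (1.0 - level)))
    bucket = len(samples) / out_count  # input samples per output value
    return [
        sum(samples[int(i * bucket):int((i + 1) * bucket)]) /
        max(1, int((i + 1) * bucket) - int(i * bucket))
        for i in range(out_count)
    ]
```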
In there, you can also change the Filter type to interpolate between the values:
Point filtering:
Linear filtering:
Cubic filtering:
The Convolution property in the “AudioSample” node is ignored for now.
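As a rough illustration of the three filter types, here is a sketch of sampling an array of values with each of them (Python; Catmull-Rom is an assumed cubic kernel, the engine may use a different one):

```python
# Sample an array of values at a cursor in [0, 1] with each Filter type.

def sample(values, cursor, mode="linear"):
    x = cursor * (len(values) - 1)
    i = int(x)
    t = x - i

    if mode == "point":                  # nearest value, stair-stepped result
        return values[min(round(x), len(values) - 1)]

    if mode == "linear":                 # straight segments between values
        j = min(i + 1, len(values) - 1)
        return values[i] * (1 - t) + values[j] * t

    if mode == "cubic":                  # smooth curve through the values
        def v(k):                        # clamp index to the array bounds
            return values[max(0, min(k, len(values) - 1))]
        p0, p1, p2, p3 = v(i - 1), v(i), v(i + 1), v(i + 2)
        # Catmull-Rom spline evaluated at t (one common cubic kernel)
        return 0.5 * ((2 * p1) + (-p0 + p2) * t
                      + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                      + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

    raise ValueError(mode)
```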