Check out the pattern “Blinkfade - Sound”. It is based on a PI controller, which is the base “engine” used to detect sound variations.
The pattern’s algorithm works as follows:
- Light up a pixel: luminosity based on the volume, hue based on the frequency
- Fade this pixel to black over time
- When the pixel gets to zero luminosity, re-light it up
The PI controller applies a coefficient to the “light up” step so it adapts to the average sound volume, keeping the average luminosity of all your LEDs at a desired level.
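To make that concrete, here is a minimal sketch of the core loop (my simplification: the fade rate, the hue scaling, and the fixed sensitivity are illustrative stand-ins; the real PI-driven version is in the full pattern below):

// Minimal sketch of the blinkfade engine (illustrative constants)
export var energyAverage // overall volume, set by the sensor board
export var maxFrequency // loudest frequency, set by the sensor board
vals = array(pixelCount)
hues = array(pixelCount)
sensitivity = 1 // stand-in: the PI controller continuously retunes this gain

export function beforeRender(delta) {
  for (i = 0; i < pixelCount; i++) {
    vals[i] -= .001 * delta // fade each pixel to black over time
    if (vals[i] <= 0) { // once dark, "light up" again:
      vals[i] = random(1) * energyAverage * sensitivity // luminosity from volume
      hues[i] = clamp(maxFrequency / 5000, 0, 1) // hue from frequency
    }
  }
}

export function render(index) {
  hsv(hues[index], 1, clamp(vals[index], 0, 1))
}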
You can easily adapt it so it reacts only to bass (I used indexes 1-3 of the frequency array; it depends on the music, and sometimes it’s better to also drop the first index to avoid triggering on sub-bass from pads), and then program any kind of animation on top: use the “light up” value as a coefficient applied to the luminosity on each loop.
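As a sketch of the bass-only adaptation (the exact bins are something to tune by ear, and the helper name is mine):

// Sketch: take only a few low bins of frequencyData as the "volume".
// Starting at index 1 (instead of 0) skips sub-bass rumble from pads.
export var frequencyData
function bassLevel() {
  return frequencyData[1] + frequencyData[2] + frequencyData[3]
}

You would then use bassLevel() in place of energyAverage when relighting a pixel.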
I don’t have the time to simplify it, but here is my “improved version” of Blinkfade:
- Reacts more strongly to the kick (with a little energyAverage mixed in to avoid going fully black during breaks)
- Supports control via 2 buttons (hue, though it’s mostly useless since the hue rotates automatically, and luminosity)
- Monochrome (with very slight color variation from the frequencies), but rotating
- A button to add some movement, making it more “intense” if needed
/*
Sound - blink fade
This pattern is designed to use the sensor expansion board.
First please check out the "blink fade" pattern. With that as background, the
goal now is to make it sound reactive.
We're going to use something called a PI controller to perform an important
function common to many sound-reactive patterns: Adjusting the sensitivity
(the gain).
Imagine a person who observes the pixels reacting to sound and is continuously
tuning the overall brightness knob to keep things looking good. They would
turn the brightness up when the sound is faint and everything's too dark, and
turn it down if the sound is loud and the LEDs are pegged too bright. The PI
controller is code to perform this job. This form of Automatic Gain Control
allows the pattern to adapt over time so it can be in a visual Goldilocks
zone, whether the environment's sound is soft, loud, or changing.
The Wikipedia article is more approachable than some:
https://en.wikipedia.org/wiki/PID_controller#PI_controller
*/
/* ______________BUTTON MANAGEMENT________________*/
// Buttons connected to GPIO input pins
var buttonOneValue, buttonTwoValue
var BUTTON_ONE_PIN = 26
var BUTTON_TWO_PIN = 25
pinMode(BUTTON_ONE_PIN, INPUT_PULLDOWN)
pinMode(BUTTON_TWO_PIN, INPUT_PULLDOWN)
//Initialize the button-adjusted hue offset and luminosity scale
hBT=0
vBT=1
export function getButtonsStatus(){
buttonOneValue = digitalRead(BUTTON_ONE_PIN)
buttonTwoValue = digitalRead(BUTTON_TWO_PIN)
}
export function processButtons(index) {
getButtonsStatus()
if (index == 0) {
if (buttonOneValue == 1) {
hBT=hBT+0.0005 //Loop hue
if (hBT>1){hBT=0}
}
}
if (index == 1) {
if (buttonTwoValue == 1) {
vBT=vBT+0.001 //Loop luminosity
if (vBT>1){vBT=0.01}
}
}
}
/* _______________________________________________*/
/*
By exporting these special reserved variable names, they will be set to
contain data from the sensor board at about 40 Hz.
By initializing energyAverage to a value that's not possible when the sensor
board is connected, we can choose when to simulate sound instead.
*/
export var energyAverage = -1 // Overall loudness across all frequencies
export var maxFrequency // Loudest detected tone with about 39 Hz accuracy
export var frequencyData
// Slider to adjust the speed and direction of rotation (middle for no speed)
export function sliderRotationSpeed(_v) { slRotationSpeed = (_v-0.5) }
// Toggle ON to enable rotation, OFF to keep the pattern static
export function toggleRotation(c) { tgRotation = c}
export var pointer=0 //pointer used for rotation of the index
vals = array(pixelCount)
hues = array(pixelCount)
sats = array(pixelCount)
// The PI controller will work to tune the gain (the sensitivity) to achieve
// the average pixel brightness set by the fill gain slider below
export var slFillGain = .5
// Slider to adjust the target average brightness (the fill level)
export function sliderFillGain(_v) { slFillGain = (_v) }
/*
We'll add up all the pixels' brightness values in each frame and store the sum
in brightnessFeedback. The difference between this (per pixel) and targetFill
will be the error that the PI controller is attempting to eliminate.
*/
brightnessFeedback = 0
/*
The output of a PI controller is the movement variable, which in our case is
the `sensitivity`. Sensitivity can be thought of as the gain applied to the
current sound loudness. It's a coefficient found to best chase our targetFill.
You can add "export var" in front of this to observe it react in the Vars
Watch. When the sound gets quieter, you can watch sensitivity rise. If it's
always at its maximum value of 150 (ki * max), try increasing the accumulated
error's starting value and max in makePIController().
*/
export var sensitivity = 0
/*
With these coefficients, it can take up to 20 seconds to fully adjust to a
sudden change, for example, from a long period of very loud music to silence.
Export this to watch pic[2], the accumulated error.
*/
pic = makePIController(.05, .15, 300, 0, 1000) //max def = 1000
// Makes a new PI Controller "object", which is 4 parameters and a state var for
// the accumulated error
function makePIController(kp, ki, start, min, max) {
var pic = array(5)
// kp is the proportional gain coefficient - the weight placed on the current
// difference between where we are and where we want to be (targetFill)
pic[0] = kp
/*
ki is the integral gain - the weight placed on correcting a situation where
the proportional corrective pressure isn't enough, so we want to use the
fact that time has passed without us approaching our target to step up
the corrective pressure.
*/
pic[1] = ki
/*
pic[2] stores the error accumulator (a sum of the historical differences
between where we want to be and where we were then). This is an integral,
the area under a curve. While you could certainly store historical samples
and evict the oldest, it's simpler to just have a min and max for what the
area under this curve could be.
We initialize it to a starting value of 300, and keep it within 0..1000.
*/
pic[2] = start
pic[3] = min
pic[4] = max
return pic
}
/*
Calculate a new output (the manipulated variable `sensitivity`), given
feedback about the current error. The error is the difference between the
current average brightness and `targetFill`, our desired setpoint.
Notice that the error can be negative when the LEDs are fuller than desired.
This happens when the sensitivity was in a steady state and the sound is now
much louder.
*/
function calcPIController(pic, err) {
// Accumulate the error, subject to a min and max
pic[2] = clamp(pic[2] + err, pic[3], pic[4])
// The output of our controller is the new sensitivity.
// sensitivity = Kp * err + Ki * ∫err
// Notice that with Ki = 0.15 and a max of 1000, the output range is 0..150.
return max(pic[0] * err + pic[1] * pic[2], .3)
}
export function beforeRender(delta) {
targetFill = slFillGain
sensitivity = calcPIController(pic,
targetFill - brightnessFeedback / pixelCount)
// Reset the brightnessFeedback between each frame
brightnessFeedback = 0
if (energyAverage == -1) { // No sensor board is connected
//simulateSound()
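// (note: simulateSound() from the original Blinkfade isn't included here,
// so without a sensor board the pixels will simply stay dark)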
} else { // Load the live data from the sensor board
//_energyAverage = energyAverage
_energyAverage = ((frequencyData[1]+frequencyData[2]+frequencyData[3])+energyAverage)/4
_maxFrequency = maxFrequency
}
for (i = 0; i < pixelCount; i++) {
// Decay the brightness of each pixel proportional to how much time has
// passed as well as how loud it is right now
vals[i] -= .00095 * delta + abs(_energyAverage * sensitivity / 2000) //sensitivity/5000
//Increase saturation for the pixels that have been desaturated
sats[i] += .01 * delta + abs(_energyAverage * sensitivity / 2000)
// If a pixel has faded out, reset it with a random brightness value that is
// scaled by the detected loudness and the computed sensitivity
if (vals[i] <= 0) {
vals[i] = random(1) * _energyAverage * sensitivity
//If the strongest frequency at the moment is bass, desaturate the newly
//reset pixel to increase its impact
if(_maxFrequency<130){
sats[i] = 1-.31 * _energyAverage * sensitivity
}
/*
The reinitialized pixel's color will be selected from a rotating
palette. The base hue cycles through the hue wheel with time. Then some
variation (small, so the colors still match) is added based on the loudest
frequency present. More varied sound produces more varied colors.
*/
hues[i] = fixH((time(3)+clamp(_maxFrequency / 7000,0,0.05))%1)
}
}
if (tgRotation==1){
rotationSpeed=slRotationSpeed//+energyAverage*100
}
else {
rotationSpeed=0
}
pointer=(pixelCount+pointer+rotationSpeed)%(pixelCount)
}
/* ---------------------------------------------------- RENDERING ------------------------------------------------------------ */
export function render(index) { render3D(index, 1, index / pixelCount, 1) } //If the pattern runs in 1D, use the scaled pixel index as y, with x=1 and z=1
export function render2D(index, x, y) { render3D(index, x, y, 1) } //If the pattern runs in 2D, z defaults to 1
export function render3D(index, x, y, z) {
if(z==0){rgb(0,0,0);return} //Disable all LEDs at coordinate z=0
processButtons(index) //Apply the hue and luminosity adjustments based on the buttons' state
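// Map the pixel's x/y position onto a 1D index into the arrays, shifted by
// the rotating pointer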
index2D=(pixelCount+0.5*(y+x)*pixelCount+pointer)%(pixelCount)
v = vals[index2D]
v = v * v * v // This could also go below the feedback calculation
// if (index>pointer && index<((pointer+200))){
vLed=v
// }
// else {
// vLed=0
// }
/*
Accumulate the brightness value from this pixel into an overall sum that
will be averaged across all pixels. This average will be fed back into the
PI controller so it can adjust the sensitivity continuously, trying to make
the average v equal the targetFill
*/
brightnessFeedback += clamp(v, 0, 1)
//hsv(h, s, v)
hsv(hues[index2D]+hBT, sats[index2D], clamp(vLed,0,1)*vBT)
}
// Return a more perceptually rainbow-ish hue
function fixH(pH) {
return wave((mod(pH, 1) - .5) / 2)
}