Working with sound

I’m trying to take some of the built-in sound reactive patterns and tweak them to be more responsive to certain frequency ranges. I’m having a hard time parsing out what’s already there (ki, kp, pic, etc.)

Can you give a primer on what’s going on in there?

Much appreciated!

The PI controller (2/3rds of a PID controller) takes output from the display side of things and feeds it back into a sensitivity variable, so the response to sound adapts automatically.

kp is the proportional coefficient - controls how strongly it reacts to the current error right now
ki is the integral coefficient - controls how strongly it reacts to error accumulated over time

start, min, and max set the starting value and allowed range of the integral term, to keep it from running away under sustained positive or negative feedback.

targetFill is the desired average brightness, and is used to calculate the error fed into the PI controller. brightnessFeedback is recalculated each animation frame by adding up all of the pixel brightnesses (clamped between 0 and 2 per pixel).

In other words, it raises sensitivity when the LEDs are lit less than targetFill, and lowers it when they are lit more than targetFill.
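To make that concrete, here’s a minimal sketch of that feedback loop. It follows the general shape of the built-in sound patterns, but the coefficients and the floor value below are just illustrative:

// PI controller state packed into a small array: [kp, ki, integral, min, max]
function makePIController(kp, ki, start, min, max) {
  var pic = array(5)
  pic[0] = kp; pic[1] = ki; pic[2] = start; pic[3] = min; pic[4] = max
  return pic
}

function calcPIController(pic, err) {
  // accumulate error into the integral term, clamped so it can't run away
  pic[2] = clamp(pic[2] + err, pic[3], pic[4])
  // output = proportional reaction to the current error + integral reaction over time,
  // with a small floor so sensitivity never collapses to zero
  return max(pic[0] * err + pic[1] * pic[2], .3)
}

pic = makePIController(.05, .15, 300, 0, 400) // illustrative coefficients
targetFill = .15       // desired average brightness (~15%)
brightnessFeedback = 0 // render() adds each pixel's brightness (clamped 0..2) into this

export function beforeRender(delta) {
  // error = desired average brightness - measured average brightness from last frame
  sensitivity = calcPIController(pic, targetFill - brightnessFeedback / pixelCount)
  brightnessFeedback = 0
  // ...then use sensitivity when scaling frequencyData into averages and vals...
}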

You could hard-code sensitivity to some value instead, but it wouldn’t adapt to quiet and loud environments.

Aside from general sensitivity, the averages array is used as a slow-reacting frequency filter. This keeps sustained frequencies from dominating the animation (constant bass, long lead-ups, etc.) while highlighting shorter bursts (beats, notes, etc.).
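To see it concretely: that slow filter is just an exponential moving average computed in beforeRender. A rough sketch, where dw is the per-frame smoothing weight and the 500 ms window is only illustrative:

averageWindowMs = 500  // bigger window = slower-moving baseline

// in beforeRender(delta): dw is the fraction of the averaging window this frame covers
dw = clamp(delta / averageWindowMs, 0, 1)
for (i = 0; i < 32; i++) {
  // sustained energy folds into the baseline; short bursts stand out above it
  averages[i] = averages[i] * (1 - dw) + frequencyData[i] * dw * sensitivity
}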

You could create a PI controller for each frequency bucket, so that each one adapts within its own band. That would give an adaptively flat response, but it would also end up amplifying noise in mostly unused frequencies.
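If you did want to try that, here’s a rough sketch of the idea, reusing the makePIController/calcPIController helpers sketched above (the per-band target level and coefficients are made up):

bandPICs = array(32)
bandSensitivity = array(32)
bandTarget = .2  // made-up per-band output level to aim for
for (i = 0; i < 32; i++) {
  bandPICs[i] = makePIController(.05, .15, 300, 0, 400)
  bandSensitivity[i] = 1
}

// then each frame, before computing averages and vals:
for (i = 0; i < 32; i++) {
  // drive every band toward the same output level - an adaptively "flat" response
  err = bandTarget - frequencyData[i] * bandSensitivity[i]
  bandSensitivity[i] = calcPIController(bandPICs[i], err)
  // ...use bandSensitivity[i] in place of sensitivity for this bucket...
}

This is also exactly where the noise problem comes from: a band with almost nothing in it will keep cranking its sensitivity up until the noise floor reaches the target.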

Another way is to create an array or function that modifies sensitivity based on frequency. Like an EQ, you could tune it to pick out the frequencies or bands you’re most interested in.

something like

eq = array(32);
eq[0] = .5;   // attenuate the lowest bucket
eq[1] = .6;
//...
eq[31] = 1.5; // boost the highest bucket

//then in beforeRender, where it calculates averages and vals, multiply this with sensitivity and use it instead:

for (i = 0; i < 32; i++) {
  // per-bucket sensitivity: the adaptive global sensitivity scaled by the EQ curve
  eqSensitivity = sensitivity * eq[i]
  // slow-moving average per bucket, floored so it never reaches zero
  averages[i] = max(.00001, averages[i] * (1 - dw) + frequencyData[i] * dw * eqSensitivity)
  // subtract 2x the moving average so only bursts above the baseline remain, then scale up
  vals[i] = (frequencyData[i] * eqSensitivity - averages[i] * 2) * 10 * (averages[i] * 1000 + 1)
}

Sweet! Very informative. Thank you!


Hi,

Have you come up with anything interesting so far?

I also just got the IO board but have not played with it yet. Looking to re-make a sound-reactive backpack.

Thanks,

@anthonyn, there are some built-in patterns - anything with “sound” in the name. There may also be a few uploads on the pattern website:

https://electromage.com/patterns


Thanks!

Will check them out

I was playing around with the expansion board and the sound reactive demos.

I connected a 16 x 16 pixel matrix, set the strip length to 256 and ran the “sound - spectro kalidastrip” pattern.

Nothing new to share, but I really liked the bright flashes when a new frequency is detected. After it’s averaged out and the music doesn’t change for a bit, it looks kind of flat, so there’s some tweaking to do, but for the intro to a song, or anywhere there’s some variance, this looks pretty cool!

In most of the sound reactive patterns I wrote, the averageWindowMs setting changes how aggressively it filters out background notes. Change it to a lower number to make it more adaptive/aggressive at filtering, and a larger number for a more constant response to notes.
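If you want to tweak it live, you could expose it as a UI slider; a small sketch, where the 100-2000 ms mapping is arbitrary:

// maps the slider's 0..1 value onto a 100..2000 ms averaging window
export function sliderAverageWindow(v) {
  averageWindowMs = 100 + v * 1900
}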


Thanks!

Will check it out