Volume Meter and Real Time / Live pattern updates

Recently came across Pixelblaze and it seems like one of the most straightforward ways to program LEDs, which is great.

I’m looking to put together some LED lights that act as indicators for specific things in a recording studio setting. This seems achievable for simple things like turning a strip of lights green, red, etc. on Play, Record, etc., via a webhook into the API from some scripting in my DAW.

What I am wondering though is:

  1. How fast can the PixelBlaze 3 respond to variable/scene changes?

  2. Will the PixelBlaze respond to “real time” commands targeted at specific LEDs, e.g. “turn on LEDs 1 through 40 and set them to green”, then 1 through 5, or 12, or 64, or make them all red, etc.?

Essentially I am looking to use it as a volume meter: my DAW will post the track peak to the API every 15–30 ms or so, and through a variable I can set the number of LEDs from left to right to turn on, or do this on segments for different tracks, potentially turning them red if it goes over a certain peak.

Is this achievable?

What’s a DAW?

I suspect it won’t be fast enough, due to the overhead of having to process the variables repeatedly… Seems like a poor fit for PB, and I’d suggest looking at a controller/firmware with true e1.31 protocol support, like WLED.

You really don’t need or want a smart controller, you want a fast dumb one that listens and converts received input into LEDs, which is what e1.31 is for.

I mean you can try to make a PB do it, but… I wouldn’t. It’s a screwdriver to your nail.

Digital Audio Workstation. The only one I know of offhand is Ardour because I’m a free software zealot.

You might want to consider sending your audio output to the sensor board and deciding which LEDs to turn on in a simple pattern. With a bit of calibration it would be accurate and responsive. You could combine the analog input with e.g. a peak value submitted via websocket which would have a bit more latency.

If you want it stereo, get two PBs and two sensor boards :relaxed:
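
As a rough sketch (untested), the whole pattern could be as small as this — the energyAverage scale factor and the 75% colour split are just guesses you’d calibrate for your own levels:

// Sensor board VU sketch. energyAverage comes from the sensor board;
// the * 20 scale factor is a placeholder to calibrate by ear.
export var energyAverage

var level = 0

export function beforeRender(delta) {
  level = clamp(energyAverage * 20, 0, 1)
}

export function render(index) {
  var pos = index / pixelCount
  var v = (pos < level) ? 1 : 0   // only light pixels up to the current level
  if (pos < 0.75) hsv(0.33, 1, v) // green for most of the bar
  else hsv(0, 1, v)               // red for the top end
}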

I like your idea of using pixelblaze for this, because if your software supports making websocket requests then you’ve got nice simplicity in the software/hardware needed to achieve both your recording status and your volume meter. Having fewer moving parts in a system is ideal (in my eyes).

However I don’t know the latency myself. @wizard can most definitely tell us. If he doesn’t chime in I’ll offer to test the latency for you with my PB. I’m thinking recording status no problem, and we’ll see about the volume meter.

I had a few minutes free and thought I’d do a preliminary test of the latency with a UI slider, and it’s pretty quick, considering the UI is (I think) saving the state to memory alongside updating the variable, and also handling a bunch of websocket traffic for the whole Pixelblaze interface. With just a single request changing a variable and nothing else, it should be faster.

It’s fast, but it’s not 15 ms fast… It depends on the pattern and the number of LEDs, but I think you’ll find most render loops, as measured by the delta value (which is in milliseconds), are slightly longer than that. Perhaps not with a simple, short LED strip…

I think for the practical purpose of a volume meter (at least any of the ones I’ve ever seen :man_shrugging:) it can be averaged (or max’d) over some delta and updated if the new value is different than the last value (saving a bunch of redundant websocket requests which may slow it down). So this meter with averaging and latency could still be within a 10th of a second, conceivably?

Edit: I just re-read OP and noticed he says track peak.
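
For what it’s worth, the sending side of that could be as simple as something like this (just a sketch — it assumes ws is an already-open WebSocket to the Pixelblaze, usually port 81, and that the pattern exports a peak variable):

// Hold the max peak over a short window and only send it when it changes
let windowMax = 0
let lastSent = -1

function onPeakFromDaw(value) {   // call this each time the DAW reports a peak
  windowMax = Math.max(windowMax, value)
}

setInterval(() => {
  if (windowMax !== lastSent) {
    ws.send(JSON.stringify({ setVars: { peak: windowMax } }))
    lastSent = windowMax
  }
  windowMax = 0
}, 100)                           // ~10 updates per second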

Thanks for the replies! Really loving how helpful this community is!

Timings
I should probably clarify the timings; I actually made a mistake in my post (I don’t think I can edit my main post now?). My DAW posts at 15 Hz by default, not every 15 ms like I originally thought, but I can seemingly raise this quite a bit.

With some napkin math, at the most extreme of playing 16th notes in 4/4 time at 180 bpm (though usually it’s anywhere from 90–150 bpm and whole to 8th notes), that’s 180 × 4 = 720 sixteenth notes per minute, or roughly one note every ~83 ms.

Now, that’s for the kind of things I play that would be considered very fast; granted, by other genres’ standards that’s probably quite average, but for me it makes sense.

Given what @gmcmicken has got, a 10th of a second or 100 ms, I’d say that’s close enough! Granted this isn’t “perfect”, but really I’m just doing it for fun!

Making this work (Pseudo Code)
Latency aside, is something like this possible? Can you essentially flick individual LEDs on and off with a variable?

So the number of LEDs ON or OFF horizontally corresponds to the volume or peak sent by the DAW, and the colour of the LEDs depends on (1) their position within the strip and (2) the playstate.

My very amateur pseudo-code is something like:

//DAW//
Var DAW Volume (int)
Var Playstate (string)


//Playstate is a variable within the DAW but for the purpose of this, an array below
Playstate = [Stopped, Recording, Paused, Playing]

//While the DAW is either Paused, Playing or Recording, send both the Playstate and current track's peak or volume to the Pixelblaze. Depending on how the DAW works, this will keep the LEDs lit up on Track Pause, which is ideal.
While (Playstate != Stopped) {
   Send "Playstate" to Pixelblaze
   Send "DAW Volume" to Pixelblaze
}


//Pixelblaze / API post - approx 100 RGBs//


Var PixelOnCount (int) //The number of pixels to light up horizontally
Var Playstate (string) //The playstate string

//Pulls the PixelOnCount from the DAW volume and Playstate from Playstate. From what I can see these can be set via the webhook?

PixelOnCount = DAW Volume
Playstate = Playstate


//Turn off all lights by default
Base Lighting {
    set LEDs 1 - 100 to "off"
}

//While recording, set the Lights "ON" colours, but don't turn them on.
While (Playstate = "Recording") {
    Set LEDS 1 - 50 to "White"
    Set LEDs 51 - 75 to "Yellow"
    Set LEDs 76 - 100 to "Red"
    IF PixelOnCount >= 99 {  //IF the volume goes above "100", that would mean I have clipped and need to do a retake <- THIS IS THE MAIN BIT I WANT
        Set LEDs 1 - 100 to "Red"  
    }
}

//Set the colours while just Playing or if I'm Paused
While (Playstate = "Playing" OR "Paused") {
    Set LEDS 1 - 50 to "Green"
    Set LEDs 51 - 75 to "Orange"
    Set LEDs 76 - 100 to "Red"
}

//While Recording, Playing or Paused, turn on the number of LEDs from left to right associated with the Volume or Peak
While (Playstate != Stopped) {
    Set 1 - (PixelOnCount) # Leds to "ON", rest OFF
}

//Default Colour (while Stopped)
While (Playstate = Stopped) {
    Set all LEDs to Blue
    Turn all LEDs on
}
 

Hopefully that makes some semblance of sense?

That’s not a bad idea. I could probably connect an additional output from my Audio Interface, one way or another.

I presume I could then control the other parts through the websocket like you mentioned?

Absolutely. Your pseudocode makes sense and you could even push more of the logic to the PB side. When you change modes, you send just the state through the websocket (the PB can figure out the colors from that). The PB could get the PixelOnCount from your audio signal or also from the websocket. The PB can easily do decaying-peak-bars, too.

I’m not sure how, uh, audiophilic the PB audio input is, so maybe your DAW could send a constant-frequency sine wave with the total sound pressure of your signal.
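
To make that concrete, here’s a rough sketch of what the pattern side could look like (untested — the numeric playstate codes, thresholds, and colours are just placeholder assumptions):

// Websocket-driven meter sketch. The DAW sets these two exported vars via
// setVars. Playstate codes assumed here: 0 = stopped, 1 = playing/paused,
// 2 = recording.
export var playstate = 0
export var peak = 0          // 0..1 track peak from the DAW

var bar = 0                  // displayed level; decays between updates

export function beforeRender(delta) {
  // snap up to new peaks, let the bar fall slowly between updates
  bar = max(peak, bar - delta * 0.001)
}

export function render(index) {
  var pos = index / pixelCount

  if (playstate == 0) {      // stopped: everything blue
    hsv(0.66, 1, 1)
    return
  }

  if (peak >= 0.99) {        // clipped: whole strip red
    hsv(0, 1, 1)
    return
  }

  var v = (pos < bar) ? 1 : 0          // only light pixels up to the bar
  if (pos < 0.5) {
    // low zone: white while recording, green otherwise
    if (playstate == 2) rgb(v, v, v)
    else hsv(0.33, 1, v)
  } else if (pos < 0.75) {
    // mid zone: yellow while recording, orange otherwise
    hsv(playstate == 2 ? 0.16 : 0.08, 1, v)
  } else {
    hsv(0, 1, v)             // top zone: red
  }
}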

You might be able to get closer to realtime perfection with some other way of driving LEDs, but with PixelBlaze you will have more fun, finish the project and be able to tweak it and still have time for your actual job.

If you’re setting up a pixelblaze for the recording status, it’s so easy to implement other cool stuff with the strips and you’ll be able to test the volume meter to see if it is acceptable to you without much effort.

I find too that with everything I’ve done with PB it’s given me more ideas of things I want to try, and for a recording studio I think you’ll have fun with it. If you strictly get a fast “dumb” controller it won’t be any fun!

You would most likely code more of this on the pixelblaze, but there are quite a few ways to do it. We’ll help you when you’re there.

It depends on a ton of factors! First there’s the frame rate of the animation, which depends on how many pixels, the type of LED, and of course the pattern complexity. Let’s say you have 60 FPS; that means even with data already available you’d still be looking at around 16.7 ms best case, and twice that worst case (if data arrived right after rendering started, that render would have to finish, then a new render would start before the data was used).

The LED type matters too, WS2812 and the like get data at a fixed rate, and have internal refresh cycles that add small amounts to the apparent latency.

Then of course there’s getting the data there in the first place. This is the big one.

WiFi tends to hiccup from time to time (it has to share airspace), and anything that does audio/video over WiFi well usually uses a buffer, which adds to latency. Otherwise they drop frames, and you get animation artifacts that look like stutters or freezes. This is true of e1.31 over WiFi as well; folks generally go with wired Ethernet when using these protocols.

If the signal is nice and strong and there isn’t too much radio interference / noise, then it’s not too bad but still has networking overhead. Figure on another 3-15ms, with spikes over 100ms. In some areas radio interference / noise can be quite bad, and you can end up with much worse performance.

So WiFi is generally the weakest link / most variable aspect of the setup.

The other option is using a physical link to send data, and currently that means using the sensor board, or emulating the sensor board. The sensor board sends data about 39 times a second, every 25.6 ms, covering the audio from that 25.6 ms window. I forget the exact details, but it does take some time to transfer the data. There’s more bandwidth available on the serial link, so an emulated sensor board could send data more frequently. This could be done on your computer and streamed out over a USB serial adapter.

The good news is that Pixelblaze generates pixel data via pattern code that runs locally, so it can keep pumping out smooth FPS even in choppy WiFi or between sensor board updates.

For 15Hz, I think it would work great. I’d definitely send over the peaks, and let PB do the pixels. Any small hiccup will be much less noticeable when the pixels still update between getting new data.
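
For example, the pattern can keep easing toward the last value it received each frame, so a late packet just means the bar glides a little longer (a tiny sketch, assuming an exported peak variable):

export var peak = 0    // last value received over the websocket
var shown = 0          // what actually gets displayed

export function beforeRender(delta) {
  // move part of the way toward the target each frame (delta is in ms)
  shown += (peak - shown) * min(1, delta * 0.01)
}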

That all said, if you do want to play around with sending pixels, a good place to start would be @zranger1’s proxy: Lightshowpi (and others) integration: sACN/e1.31 proxy


BTW, @gmcmicken, there’s a 100ms debounce on the UI sliders. It uses the most recent data, but limits the update rate to 10/sec. They could technically be much more responsive, but the UI limits this to be conservative with the websocket bandwidth. Saves/persists happen after a separate 1s debounce timer to reduce writes.

If we send a websocket request to update a variable outside of the pixelblaze UI and turn off “save”/persist, is there anything that might slow it down or add latency?

That’s just on the UI side of things; the websocket has no restriction like that. The lower-level limits/factors described above would still apply (FPS, WiFi latency).
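
For reference, an update frame is just JSON like this (assuming the pattern exports a variable named peak):

{ "setVars": { "peak": 0.82 } }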