Volume Meter and Real Time / Live pattern updates

It depends on a ton of factors! First there’s the frame rate of the animation, which depends on how many pixels you’re driving, the type of LED, and of course the pattern complexity. Let’s say you have 60 FPS: even with data instantly available, you’d still be looking at around 16.7ms best case and twice that worst case (if data arrives right after a render starts, that frame has to finish, then a whole new render has to complete before the data is used).
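To make that arithmetic concrete, here it is as a tiny JavaScript snippet (60 FPS is just the example figure from above):

```javascript
// Frame-timing arithmetic: new data waits for the current render to
// finish, then needs one full render before it reaches the LEDs.
const fps = 60;
const frameMs = 1000 / fps;     // ~16.7 ms per frame at 60 FPS
const bestCase = frameMs;       // data arrived just before a render began
const worstCase = 2 * frameMs;  // data arrived just after one began
console.log(bestCase.toFixed(1), worstCase.toFixed(1)); // 16.7 33.3
```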

The LED type matters too: WS2812 and the like receive data at a fixed rate, and have internal refresh cycles that add small amounts to the apparent latency.
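For a sense of scale, WS2812-class LEDs clock data at a fixed 800 kbps with 24 bits per pixel, so wire time alone grows with strip length. A quick back-of-the-envelope:

```javascript
// WS2812 wire time: 24 bits per pixel at a fixed 800 kbps data rate.
const pixels = 300;
const bitsPerPixel = 24;
const bitsPerSecond = 800000;
const frameMs = (pixels * bitsPerPixel / bitsPerSecond) * 1000;
console.log(frameMs); // 9 ms just to shift out 300 pixels
// ...plus the latch/reset pause (roughly 50-280 us depending on variant)
```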

Then of course there’s getting the data there in the first place. This is the big one.

WiFi tends to hiccup from time to time (it has to share airspace), and any good audio/video-over-WiFi setup usually uses a buffer, which adds to latency. Otherwise frames get dropped, and you get animation artifacts that look like stutters or freezes. This is true of E1.31 over WiFi as well; folks generally go with wired Ethernet when using these protocols.

If the signal is nice and strong and there isn’t too much radio interference/noise, it’s not too bad, but there’s still networking overhead: figure on another 3-15ms, with occasional spikes over 100ms. In noisy RF environments you can end up with much worse performance.
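If you want to eyeball that jitter on your own network, one quick approach is timing a websocket round trip. A minimal Node sketch, assuming the `ws` npm package, the Pixelblaze websocket on port 81, and the documented `getVars` request (swap in your Pixelblaze’s address):

```javascript
const WebSocket = require('ws');
const ws = new WebSocket('ws://192.168.1.50:81'); // your Pixelblaze's IP

let sentAt = 0;
ws.on('open', () => {
  setInterval(() => {
    sentAt = Date.now();
    ws.send(JSON.stringify({ getVars: true })); // cheap request/response
  }, 1000);
});

// Note: unrelated status messages land here too, so treat this as a
// rough view of round-trip jitter, not a precise benchmark.
ws.on('message', () => console.log('round trip:', Date.now() - sentAt, 'ms'));
```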

So WiFi is generally the weakest link / most variable aspect of the setup.

The other option is using a physical link to send data, and currently that means using the sensor board, or emulating it. The sensor board sends data about 39 times a second: one frame every 25.6ms, covering the audio captured during that 25.6ms window. I forget the exact details, but it does take some time to transfer the data. There’s more bandwidth available on the serial link, so an emulated sensor board could send data more frequently. This could be done on your computer and streamed out through a USB serial adapter.
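As a rough sketch of that emulated-sensor-board idea, here’s Node pacing binary frames out a USB serial adapter on the same ~25.6ms cadence. The port path and baud rate are assumptions, and the payload is a placeholder: a real emulator would have to match the actual sensor board frame layout.

```javascript
// Assumes the serialport (v10+) npm package. Port path and baud rate
// are guesses for illustration; adjust for your adapter and the real
// sensor board protocol.
const { SerialPort } = require('serialport');
const port = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 115200 });

function buildFrame(bins) {
  // Placeholder payload: 32 16-bit values standing in for frequency bins.
  // The real frame format is NOT reproduced here.
  const buf = Buffer.alloc(bins.length * 2);
  bins.forEach((v, i) => buf.writeUInt16LE(v, i * 2));
  return buf;
}

setInterval(() => {
  // Replace this random data with real audio analysis (e.g. an FFT).
  const bins = Array.from({ length: 32 }, () => (Math.random() * 65535) | 0);
  port.write(buildFrame(bins));
}, 25.6); // match the sensor board's update period
```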

The good news is that Pixelblaze generates pixel data via pattern code that runs locally, so it can keep pumping out smooth FPS even in choppy WiFi or between sensor board updates.

For 15Hz, I think it would work great. I’d definitely send over just the peaks, and let PB do the pixels. Any small hiccup will be much less noticeable when the pixels keep animating between updates.
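Here’s what that can look like on the pattern side: a minimal volume-meter sketch where a host pushes a peak value a few times a second (e.g. via the websocket’s `setVars`), and the pattern eases toward it every frame so motion stays smooth between updates. The 100ms easing constant is just a starting point.

```javascript
// Exported so a host can set it over the websocket, e.g.
// {"setVars": {"peak": 0.8}} at ~15 Hz.
export var peak = 0
var level = 0

export function beforeRender(delta) {
  // Ease toward the most recent peak; delta is ms since the last frame,
  // so this keeps animating even when no new data has arrived.
  level += (peak - level) * min(1, delta / 100)
}

export function render(index) {
  var lit = 0
  if (index / pixelCount < level) lit = 1
  hsv(0.33, 1, lit) // green bar that fills with the volume level
}
```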

That all said, if you do want to play around with sending pixels, a good place to start would be @zranger1’s proxy: Lightshowpi (and others) integration: sACN/e1.31 proxy


BTW, @gmcmicken, there’s a 100ms debounce on the UI sliders. It uses the most recent data, but limits the update rate to 10/sec. They could technically be much more responsive, but the UI limits this to be conservative with the websocket bandwidth. Saves/persists happen behind a separate 1s debounce timer to reduce writes.
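For anyone curious, that behavior (always use the latest value, cap the rate) is a trailing throttle. An illustrative sketch, not Pixelblaze’s actual UI code:

```javascript
// Fire fn at most once per `ms`, always with the most recent value.
function throttleLatest(fn, ms) {
  let timer = null, latest;
  return (value) => {
    latest = value;
    if (!timer) timer = setTimeout(() => { timer = null; fn(latest); }, ms);
  };
}

const sendSlider = throttleLatest(v => console.log('send', v), 100);
for (let v = 0; v <= 1; v += 0.01) sendSlider(v); // burst collapses to one send
```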
