Two Questions about a Sound Reactive Piece - WiFi and Latency

I’m building a sound-reactive piece for Unscruz called “Aurora” - it’s basically a bunch of LED grids driving fiber-optic mesh covering the ceiling of a building.

I will be using 5-6 different Pixelblaze Picos and/or V3s (playing with both, we’ll see).

I have a few questions that I couldn’t easily find answers to in the docs. Hoping I can get some help:

  1. Is there a way to get a complete list of sound reactive patterns in the pattern library? I’d like to find various examples so I can build my own.

  2. I will be able to set up a Wi-Fi router, but I’m not sure if I can get it connected to the internet. Does Pixelblaze work on just a LAN, or does it require internet connectivity to work?

  3. I’m noticing in testing at home that there is some latency between when sound hits the sensor board (microphone) and when the pattern actually reacts. Any tips to reduce that latency? Ideally the reaction is as close to when the sound hits as possible. It’s noticeable, on the order of half a second at least. Does connecting the sound directly via the 1/8″ line-in help?

If I can’t fix the latency, I might need to buy a sound delay or something for the actual sound output…but this isn’t ideal.

Hi @jgladbach,
Welcome to the forum!

  1. Finding sound patterns
    Most have “sound” in the name, and we hope to improve the website in the future to allow for better searching / categorization. For now, it’s a lot of annoying paging (sorry!).
  2. Work without Internet?
    You can use Pixelblaze without internet - either connected to an access point on an isolated network, or set up in AP mode to create a network you can connect to anywhere. The discovery service won’t work, and the clock/time functions won’t be able to get the current time, but everything else, including live coding, works without internet access. If a Pixelblaze is running in AP mode, you can reach it directly at its default AP address.
  3. Microphone latency
    The latency should be fairly minimal. Audio is processed roughly 39 times a second, in 25.6 ms sample windows. It does take a little while to process that audio and send it up to the Pixelblaze, perhaps another 15-20 ms. Nowhere near half a second. My guess is that the delay comes from how that pattern’s code responds to the stimulus; other patterns will be more responsive.
    Frame rates can also play a part, since audio is only updated once per animation frame, and it takes additional time to process an animation frame and send it out to the pixels.
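Putting rough numbers on that budget (the 25.6 ms window and 15-20 ms processing figures are from above; the 60 FPS frame rate is a hypothetical example, since the real rate depends on pixel count and pattern complexity):

```javascript
// Rough worst-case audio-to-pixels latency for a Pixelblaze + sensor board.
// Assumes sound arrives just after an analysis window starts, and the fresh
// data just misses an animation frame, so it waits one full window + one frame.
const sampleWindowMs = 25.6;        // one audio analysis window (from above)
const processAndSendMs = 20;        // DSP + transfer to the PB (upper bound from above)
const frameRateFps = 60;            // hypothetical animation frame rate
const frameMs = 1000 / frameRateFps;

const worstCaseMs = sampleWindowMs + processAndSendMs + frameMs;
console.log(worstCaseMs.toFixed(1) + " ms"); // ~62.3 ms, well under half a second
```

So even in the worst case the pipeline itself accounts for well under 100 ms; anything approaching half a second is coming from the pattern code.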

Are you using the microphone, or a line out from your sound system? The line out is usually better, unless that audio is delayed for some reason.


@wizard Tried the line in and it’s much better!

As for patterns, I looked through the entire 10 pages and found maybe 10 total that had the word “sound” in them. Am I looking correctly?

If so, I might need to get to work creating some new ones 🙂
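For anyone starting on their own, a common trick for making a pattern feel immediate is a fast-attack / slow-decay envelope follower: jump up instantly on a loud hit, then fade down smoothly. This is a plain-JS sketch of the idea; on a Pixelblaze you would feed it an audio level from the sensor board inside `beforeRender()`, and the decay factor here is a hypothetical value to tune by eye:

```javascript
// Fast-attack / slow-decay envelope follower (simulation).
let envelope = 0;
const decayPerFrame = 0.92; // hypothetical per-frame decay factor; tune to taste

function follow(level) {
  // Attack: snap up instantly to any louder input.
  // Decay: otherwise fall off by a constant factor each frame.
  envelope = Math.max(level, envelope * decayPerFrame);
  return envelope;
}

follow(0.0);
console.log(follow(1.0));             // a loud transient registers immediately: 1
console.log(follow(0.0).toFixed(2)); // then it decays: 0.92
console.log(follow(0.0).toFixed(2)); // 0.85
```

The instant attack is what makes the response feel tight; the slow decay keeps the brightness from strobing off between beats.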


Hi Jonathan - that sounds about right. We’d love to have more!

While it’s a bit of work to break them out, there are several audio-reactive patterns in Music Sequencer / Choreography.

The patterns within it that use the sensor board for audio reactivity (vs the ones that look good primarily from timing and an internal tempo metronome) are:

  1. splotchOnBeat
  2. rain
  3. flashPosterize
  4. elastic
  5. analyzer
  6. fizzleGrains
  7. soundrays

One more question - and maybe I should be asking the dev of Firestorm. If there are, say, 8 Pixelblazes connected to a network, is there a way through the UI to control only 6 of them and let the other two play different patterns?

I’m running a global pattern on 6 of them and having a few other PBs run things like the sign or other lights.

I could use the new firmware, but I’m pretty much done building/coding etc and am putting this up next week.

I am just a contributor to Firestorm, but I am noodling on an implementation that lets you exclude a controller from syncing in the Firestorm UI, so Firestorm only sends patterns to the controllers you choose; by default, they’d all be enabled. I have a very similar use case where I’d like one specific controller not synced with the same patterns my other PBs are using.

However, in the interim, you could take the two PBs you’re running solo, put them in AP mode to keep them off your network, and run their patterns directly in their own UIs as a temporary fix. It’s not ideal or exactly streamlined, but it will get the job done for those two while using Firestorm in its current implementation.

Firestorm only activates a pattern on controllers where a pattern with that name exists. You can name your patterns to take advantage of this, effectively creating groups outside of, or in combination with, the new sync group feature.

Could be as simple as prefixing something like “group A” or “group B” to the pattern names.
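A minimal sketch of that convention (the controller and pattern names here are hypothetical; the point is that a controller without a matching pattern name simply keeps playing whatever it was doing):

```javascript
// Hypothetical pattern lists on each controller. Firestorm activates a
// pattern by name, so shared name prefixes act as de facto groups.
const controllers = {
  ceilingA: ["group A - splotchOnBeat", "group A - analyzer"],
  ceilingB: ["group A - splotchOnBeat", "group A - analyzer"],
  sign:     ["group B - marquee"],
};

// Which controllers would respond to a given pattern activation?
function respondingControllers(patternName) {
  return Object.keys(controllers)
    .filter((id) => controllers[id].includes(patternName));
}

console.log(respondingControllers("group A - splotchOnBeat")); // ["ceilingA", "ceilingB"]
console.log(respondingControllers("group B - marquee"));       // ["sign"]
```

Activating a “group A” pattern drives the ceiling controllers while the sign is untouched, and vice versa, with no extra tooling needed.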

Cool, this helped immensely! Btw, we used 7 Pixelblazes in our “Aurora” sound-reactive piece for the Unscruz regional burn. I’d love to do a write-up/shared social situation if you all are interested…it would make a great Instructable or just a fun piece to talk about your new firmware.


Yes yes yes yes! I would love to see that, and a write up would be freaking amazing!