Mesh Networking - Sync Pixelblazes

Hi … I see @wizard has introduced Firestorm to control multiple Pixelblazes on the same network.

WLED has something similar via its ‘sync’ function, which doesn’t need a separate controller: any WLED device can be the controller and the rest follow. But it still requires all units to be on the same pre-existing WiFi network.

I’m wondering about control of multiple Pixelblazes where there is no separate network, i.e. each unit creating its own mesh network.

Thoughts? Ideas? Comments?

Sync without Firestorm is a bit trickier, since you can load different patterns on each unit and there’s no pre-defined set. Firestorm works by activating patterns by name.

Sync without a WiFi network is also possible, but tricky, especially in conjunction with WiFi.

These are definitely things I’m working on, and there’s a lot of different ways to handle it that require more or less setup beforehand.


I know I’ve been saying I’ve been working on this for a (long) while, but lately it’s been much closer to the top of the list.

Here’s a quick demo of a proof of concept with 1 PB publishing and 3 subscribers.

My goals are roughly:

  1. Let multiple PBs cooperate on a pattern more easily. You can do this now with the animation time syncing stuff, but it’s fairly manual to set up and you have to copy the patterns around. There are already tools to clone across PBs, but I’d rather not require any setup.
  2. Live code across multiple PBs. That’s what it’s all about, right? Kinda goes hand in hand with #1
  3. Not require external apps, browser connections, Firestorm, etc.
  4. Be fairly robust in various network conditions.
  5. Allow each PB to work on a mapped piece of a larger whole. You can sort of do this now with map tricks, but it’s hacky.
  6. Provide some way for the pattern code to do something different on each instance beyond map coordinates. Maybe some kind of chip ID, or member ordinal number.
  7. Sharing of expansion (sensor) board data, obviously.
  8. Sync controls.
  9. Forward button presses to the main PB, so any could effectively switch patterns.
  10. Sharing data or message passing between peers in a group in pattern code. This would also allow you to share a random seed for synced pseudo-random animations, share GPIO or analog (non-sensor board) data, etc.
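On #10, a shared random seed alone is enough to keep “random” animations in lockstep: if every PB in the group seeds the same deterministic PRNG, they all draw identical sequences with no further traffic. A plain-JavaScript sketch (mulberry32; how the seed gets distributed is assumed to be the sync layer’s job, not shown):

```javascript
// Deterministic PRNG (mulberry32): the same seed always yields the same
// sequence, so PBs that share a seed can make identical "random" choices
// with no further communication. Seed distribution is assumed to be
// handled by the sync layer and is not modeled here.
function mulberry32(seed) {
  let s = seed >>> 0
  return function () {
    s = (s + 0x6D2B79F5) >>> 0
    let t = s
    t = Math.imul(t ^ (t >>> 15), t | 1)
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61)
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

// Two "PBs" seeded identically stay in lockstep frame after frame:
const pb1 = mulberry32(12345)
const pb2 = mulberry32(12345)
const framesMatch = [...Array(100)].every(() => pb1() === pb2())
```

Each PB would only need the one shared seed value at pattern start; everything after that is local and deterministic.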

The proof of concept gets to about #3 above.

I’m keeping the requirements light, but it would mean connecting multiple PBs to the same network and subscribing the 2nd+ PBs to the first. It should work fine for a handful (< 9 or so) with the main PB in AP mode.

I’d love to hear everyone’s thoughts and feedback. Anything stand out?

(FYI @Sunandmooncouture, :point_up:)


All I was hoping for was #3 and #8, so when the PB in AP mode switches to a different pattern, the other PBs joined to that AP will also switch to the same pattern. I fully expected all the pre-setup to make sure they all have the same named patterns available.

This, what you’re working on, LIVE CODE ACROSS MULTIPLE PBs? Never even imagined that was a possibility. It’s so beautiful I might cry. Amazing work, really.

#5 would be very cool to have too, as FPS always falls even with multiple PBs when they all have to calculate the full map.


OH! And if there’s any way I can help, lmk. I’ve been doing front end dev forever, so great with UI/UX in HTML/JS/CSS, just not so much on the back end hardware level programming.


Amazing! Let me know when you’re ready for beta testers!


Sounds good! Here are some thoughts on how shared variables might be exposed.

Re: 6,7,8,10: Suppose there are magic variables for “who am I” and “how many PBs are we in total”, a new type of variable, “shared” which is broadcast to all PBs, and a new type of function “master” which is only exported on the master. Then suppose you have a bunch of PBs in a circle and you want a slider to move the bright point around a circle …

var idCount // magic variable, number of PBs
var myId // magic variable, this PB's id, 0 = master
shared brightness // array synced across all PBs

master function sliderAngle(v) { // master functions only run on the master
   for (id = 0; id < idCount; id++) {
      brightness[id] = (cos(id/idCount * PI2) + 1.0) / 2.0
   }
}

export function render(index) {
  rgb(brightness[myId], brightness[myId], brightness[myId])
}

This is probably easy-ish if the master is the only one who can write to a shared array. It could be broadcast after any function which alters it to keep bandwidth usage low.

For bonus points … allow any PB to update a shared array, and broadcast only the elements that it changes when it changes them. To share specific sensor data, stuff it into shared_array[myId]. This would give plenty of flexibility as to where the calculation happens. For example the PB with the sensors could smooth the data, stuff it into the array, and then each of the others would do further local calculations.
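As a rough illustration of that “broadcast only the elements that change” idea, here’s a plain-JavaScript sketch (the class and method names are invented for illustration, not a real Pixelblaze API) of a shared array that tracks dirty indices and emits only changed (index, value) pairs:

```javascript
// Sketch of per-element change broadcasting for a shared array: each write
// marks its element dirty, and flush() emits only the changed (index, value)
// pairs since the last flush. All names here are invented for illustration.
class SharedArray {
  constructor(size) {
    this.values = new Array(size).fill(0)
    this.dirty = new Set()
  }
  set(i, v) {
    if (this.values[i] !== v) { // unchanged writes cost no bandwidth
      this.values[i] = v
      this.dirty.add(i)
    }
  }
  flush(send) {
    // send only the elements that changed since the last flush
    for (const i of this.dirty) send(i, this.values[i])
    this.dirty.clear()
  }
}

const shared = new SharedArray(8)
shared.set(3, 0.7)  // e.g. smoothed sensor data stuffed into this PB's slot
shared.set(3, 0.7)  // same value again -> nothing extra to broadcast
const sent = []
shared.flush((i, v) => sent.push([i, v]))
```

In the sensor example above, the PB with the sensor board would be the only writer to its own slot, which sidesteps most write conflicts.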

Race conditions are the responsibility of the pattern author. :slight_smile:

(and Re: 9 it would be great if patterns could trigger pattern switches, something that would be awesome independent of this other awesomeness.)

Yeah, I tossed this around in my head a lot. Anything I could come up with short of what I’m doing in the proof of concept had a lot of compromises, complexity, and potential snags.

I think we should chat and troubleshoot, because this shouldn’t be the case. If you want, I think I can get your FPS back.

There are ways to code patterns that still won’t work super well even when the full list is complete, like KITT, which uses timers based on the delta from when the pattern started (still some ms apart between units) instead of the synced time() calls. With #10, the main PB could use some new tools to help sync paired PBs, but that would need to be added pattern-by-pattern.
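To make that concrete, here’s a plain-JavaScript model (not Pixelblaze code; the 7 ms start offset is an arbitrary example) contrasting a phase accumulated from the local pattern start with a phase derived from a shared, synced clock:

```javascript
// Why delta-accumulated timers fall out of sync: each PB starts the pattern
// a few ms apart, and a timer measured from the local start preserves that
// offset forever. A phase derived from a shared, synced clock does not.
// Plain-JS model; times are in milliseconds.

function deltaBasedPos(startMs, nowMs, periodMs) {
  // phase accumulated from the local pattern start -> inherits start offset
  return ((nowMs - startMs) % periodMs) / periodMs
}

function syncedClockPos(sharedNowMs, periodMs) {
  // phase derived from the group-synced clock -> identical on every unit
  return (sharedNowMs % periodMs) / periodMs
}

const period = 1000
const sharedNow = 50000       // the synced clock reads the same everywhere
const startA = 0, startB = 7  // unit B started its pattern 7 ms after unit A

// the two units disagree when phase comes from local start times...
const driftDelta = Math.abs(
  deltaBasedPos(startA, sharedNow, period) - deltaBasedPos(startB, sharedNow, period))
// ...but agree exactly when phase comes from the shared clock
const driftSynced = Math.abs(
  syncedClockPos(sharedNow, period) - syncedClockPos(sharedNow, period))
```

This is why patterns built on the synced time() calls cooperate across units essentially for free, while start-relative timers need per-pattern fixes.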

Good ideas. I’m leaning toward message passing, or one-way memory copies, and want to avoid adding keywords that would break ES6 compatibility. I also want to avoid using master/slave terminology.

Ideally the show will continue even through some interruption, which will tend to happen on any WiFi network. The main PB can help coordinate, but things should keep running without it, or with it temporarily unavailable.

Taking this example:

   for (id = 0; id < idCount; id++) {
      brightness[id] = (cos(id/idCount * PI2) + 1.0) / 2.0
   }

I would flip this around (and I’m guessing you meant to use v in there somewhere so the slider moved the wave):

// control handlers are invoked for every PB in the group
export function sliderAngle(v) {
  brightness = (cos((groupIndex/groupCount + v) * PI2) + 1.0) / 2.0
}

That way no state synchronization is required for a shared brightness array, but instead relies on control info being relayed down. Each PB has enough information to act on it individually and calculate their own brightness instance value given the magic groupIndex and optionally some knowledge about the total groupCount.

Just like pixelCount, knowing how many PBs belong to the group is useful at initialization time for things like allocating memory and/or figuring out how to divvy up work. That likely means rigidifying group membership, restarting patterns when new members are added, and possibly statefulness. It might make it more difficult to support some use cases, like roaming groups of bikes that go in and out of range, or future p2p ad-hoc type groups.

Pushing data back up would be very useful too: not just relaying controls/button/sensor board data, but also sending data from local pins or code. That gets a bit trickier, especially if it goes full p2p and any kind of consistency needs to happen.

Magic syncing data/arrays would be cool, and is quite nice for local sensor board data, but an explicit API might be better for distributed patterns so that the code can control the atomicity. An RPC-style approach could be interesting too, so the data can be acted on when it arrives instead of polled for (where a value might be overwritten before it’s read).

With typical message/event interfaces your code registers interest in (observes) some event based on name/type and binds that to a callback, but we might be able to skip some of that boilerplate since we are running the exact same code, doing something more RPC-like with local semantics.

Something like this might be neat and let you do more p2p-like patterns without subscription management:

function messageHandler(v1, v2) {...}

notify(messageHandler, someValue, somethingElse)

// or lambda style
notify((v1, v2) => {...}, someValue, somethingElse)

So here notify takes a function reference and any arguments, and runs that function on every PB in the group.

If you pass in your own ID as an argument, the receiver could filter based on sender, though I think more often than not you wouldn’t need to filter like that.
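The proposed notify() semantics can be simulated in-process like this (plain JavaScript; notify, the group, and the peer-as-first-argument convention are all stand-ins for illustration, not a real API):

```javascript
// In-process simulation of the proposed notify() semantics: the same handler
// runs on every member of the group with the same arguments. In a real build
// the call would be serialized and broadcast over the network; here "peers"
// are local objects, passed to the handler so it can update per-member state.
const group = [
  { id: 0, hue: 0 },
  { id: 1, hue: 0 },
  { id: 2, hue: 0 },
]

function notify(handler, ...args) {
  // broadcast stand-in: invoke the handler once per group member
  for (const peer of group) handler(peer, ...args)
}

// set every member's hue, like a control change relayed to the whole group
notify((peer, h) => { peer.hue = h }, 0.5)
const allUpdated = group.every(p => p.hue === 0.5)
```

Because every PB runs identical pattern code, the function reference itself can identify which handler to invoke on the receiving side, which is what lets the boilerplate subscription step disappear.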


This would be a massive improvement!
PB is already Best-in-Class and any or all of these proposed enhancements would just be icing on the cake. I remember the very first time that Firestorm synced all of my PBs (4 V3 Standards and 2 V3 Picos), and it was mind-blowing! I could hardly believe the visuals I was seeing, all (almost) synced to the music playing in the room with no programming.
Thank you @wizard for your dedication to ongoing improvement and making a good product even better!
Another possible improvement would allow us to use these upgrades to better synchronise multiple PBs. For example, as all of my units are free-standing, non-wired devices (12V Li-ion pack, 12V-5V converter, PB controller, RGB LED strip/matrix), this improvement would allow the two Pico units to share the sensor input and react properly instead of just displaying the same pattern as the others.
There could also be another benefit in the form of reduced latency between audio in and visual out. Early on in my design stages I scrapped wired units in favour of wireless simply because they allowed almost unlimited placement and gave aesthetics a huge priority over logistics. I loved the crisp response times of my first two wired units - every beat, pause, crescendo, silence, etc, was translated into a corresponding visual output. However, the slight lag between the speed of light and the speed of sound on the wireless units is still quite acceptable to me, and most people don’t notice the delay.
What sounds exciting to me is perhaps we could use the line-in option and wire the closest/easiest unit to the audio source, make it the leader in the network and reduce the latency in all the other wireless units on the network. Wow!
Once again thanks to all of you who make this stuff work. I don’t understand 90% of what it is you do to make this happen, but I certainly do appreciate the magic.

@Glitch glad to hear about your success! I’d love to see your setup if you have a video! It sounds amazing.

Yep, that is definitely something I want to do! (Number 7 on the list above). I don’t know what the latency will look like just yet, but it should usually be pretty good, a handful of ms usually. WiFi is a shared radio though, so I expect some jitter.

@wizard Sorry for the redundant suggestion. On second reading I see #7 on your list above. If it can be done, I would be quite happy with a few milliseconds delay.
Regarding a video of my setup, it’s something I’m working on, but I’m not happy with the results so far. Two of my devices are back-lit black LED plexiglass and the camera does not seem to see them the same way as my eyes. Still working on getting it right. :thinking:


No worries, just wanted to mention that I agree about the feature!
Yeah, LED photography is tricky!

To be fair, if anyone will understand what it should look like, it’s probably people on here :slight_smile:


Big +1 for sensor board sharing.

I’ve been using my homebrew multicast solution daily for a while now. I can’t overstate how handy it is to have the sensor board plugged in to my main audio source, and distributed to Pixelblazes in various configurations, all over the house.

Using multicast UDP between ESP8266s, I’ve seen no noticeable latency while doing this, and it’d likely be even lower going Pixelblaze->Pixelblaze.
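For anyone curious what such a homebrew broadcast can look like on the wire, here’s a plain-JavaScript sketch (all field names and the frame layout are invented for illustration, not the actual sensor board protocol) that packs a few sensor-board-style values into a compact little-endian binary frame, the sort of thing that fits in a single multicast UDP payload, and parses it back:

```javascript
// Sketch of a compact binary frame for broadcasting sensor-board-style data
// over multicast UDP. The fields and layout are invented for illustration;
// a real implementation would mirror the actual sensor board's data.
function packFrame(energy, maxFreq, light) {
  const buf = new ArrayBuffer(12)
  const view = new DataView(buf)
  view.setFloat32(0, energy, true)   // overall audio energy, little-endian
  view.setFloat32(4, maxFreq, true)  // dominant frequency in Hz
  view.setFloat32(8, light, true)    // ambient light level
  return buf
}

function unpackFrame(buf) {
  const view = new DataView(buf)
  return {
    energy: view.getFloat32(0, true),
    maxFreq: view.getFloat32(4, true),
    light: view.getFloat32(8, true),
  }
}

// A 12-byte payload per frame keeps even high-rate broadcasts tiny:
const frame = unpackFrame(packFrame(0.5, 440, 0.25))
```

At, say, 40 frames a second that’s well under 1 KB/s of payload, which is consistent with seeing no noticeable latency on ESP8266-class hardware.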


Just curious, but did y’all figure out what the issue here was? I’m starting a large wearable project that will require multiple PBs to get the frame rate I’m targeting, and I’d love to avoid any potential pitfalls here!

Also, is there any update on this mesh networking change and/or possible beta on the horizon? For another project I’m looking at this summer, this would significantly simplify things, so I’m curious if there’s any chance of it landing in the next few months.

Good news: it’s highly likely to be released in the next few months. It will first be released for v3 devices, then a determination will be made whether the v2 boards are powerful enough to support the feature.


And it’s here! People can try out the beta firmware now!