A Firestorm of Pixelblaze - spread patterns and control via MIDI/OSC

Hello!
Let’s say you are setting up a stage using 8 Pixelblazes connected to handfuls of LED strips, and running Firestorm to synchronize their animations and switch between patterns/sets.
What’s the best way to:

  • Spread patterns across 8 Pixelblazes (so they appear as one large version of the pattern, not 8 copies)
  • Control settings over the network using some kind of common signal type (MIDI or OSC?) compatible with lighting control software like Lightkey and Resolume.

I’ll kick this off with a link to a comment about spreading a pattern animation across multiple PBs:

Also interesting: node-red-contrib-osc - npm
I know folks have used Node-RED to interface with PB; it might be a way to integrate the two protocols.
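For what it’s worth, the Pixelblaze end of that bridge can stay very simple: exported pattern variables can be set over the websocket API (e.g. with a setVars frame sent by a Node-RED flow after it translates an incoming OSC or MIDI message). A minimal sketch with made-up variable names:

// Exported vars can be written remotely, e.g. by a Node-RED flow that turns an
// incoming OSC/MIDI message into {"setVars": {"extHue": 0.5}} on the websocket.
// The variable names here are placeholders.
export var extHue = 0          // 0..1, set remotely
export var extBrightness = 1   // 0..1, set remotely

export function render(index) {
  hsv(extHue, 1, extBrightness)
}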

I think the recent (and now semi-obvious) approach is likely the best answer: map ALL of the pixels in the entire pattern, but put the “section” of the current controller FIRST (so if controller #3 has 80 pixels, those 80 go first in the map), and set pixelCount to the total amount.
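As a rough sketch of that mapper layout (the coordinates are placeholders, not a real rig):

// Every controller gets the same overall map, but lists its own pixels first.
function (pixelCount) {
  var thisSection = [[0, 0], [1, 0], [2, 0]]             // pixels wired to this PB...
  var everythingElse = [[3, 0], [4, 0], [5, 0], [6, 0]]  // ...then the rest of the rig
  return thisSection.concat(everythingElse)
}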

Timing: it’ll run through all pixels on all controllers, so the total prerender and render calls should take the same time on each. Having different amounts of “real pixels” might be a small issue, yes? [On second thought, it shouldn’t matter. If PB calculates 1000 pixels, but the strand actually connected is 100 or 200, PB still pushes out 1000 pixels of data to each, right?]

Obviously this must be a mapped (2D) version of a pattern rather than a 1D one that uses index. So again, porting all index-driven patterns to 2D makes sense, as this would be another use case.

Potential issue: too many pixels overall, so that a controller can’t calculate the whole pattern and hits timeouts. (Since each calculates its own section first, this might not be an issue if they all time out at the same point.)

Congrats, you’ve made me want to build a multi-controller project, if only to test out the above. I do wonder if a v3 Pico will perform differently than a standard v3 PB (I suspect yes; you’d want identical models and firmware on each).

@Scruffynerf ,
You mean make each PB do the work of the entire combined rendering? One of the advantages of a multi-PB system is increased rendering throughput over a single large PB install with output expander ports.

True, some patterns like KITT may be much harder to render small portions of, but I think most of them would be better off with a partial render. I think the virtualized KITT could still render a single “swoosh” that render draws from, but perhaps the leader location should be based on time() instead of accumulation of direction * delta.
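Something along these lines (not the stock KITT code; the constants are just illustrative):

export function render(index) {
  // triangle(time(...)) sweeps 0..1..0, so the leader bounces end to end,
  // and every time-synced PB computes the same leader position each frame
  var leader = triangle(time(.03)) * (pixelCount - 1)
  var v = max(0, 1 - abs(index - leader) / 10)   // bright core with a ~10 pixel falloff
  hsv(0, 1, v * v)                               // classic red swoosh
}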

I like the map idea, include the whole thing in the mapper just put the “local” pixels first. This would let you see the big picture too.

How about for index based patterns? Most use index and pixelCount to get a ratio, so tweaking those could do the trick, except where patterns use the index 1:1 with an array like KITT and blinkfade. Still, I think it could make a good first pass for most patterns.

Let’s say a pattern uses index/pixelCount - this results in a number between 0 and nearly 1.0 at the far end. If we wanted to split that across 3 PBs in 3rds, we could just scale it down to 33%, and then offset where it is.

So on the first PB, we’d swap that out with ((1/3) * index / pixelCount). Now we have a number between 0 and .333, roughly the first 1/3rd of the animation.

For the second PB, we take that and “move” it to the right by 1/3rd: ((1/3) * index/pixelCount + 1/3).

And likewise for the last one, still scaled to 1/3rd but offset 2/3rds: ((1/3) * index/pixelCount + 2/3).
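That per-controller math can be written once with a couple of constants (a sketch, not a drop-in; the hsv() line is just a placeholder body):

numSegments = 3
segment = 0   // 0 on the first PB, 1 on the second, 2 on the third

export function render(index) {
  // equivalent to (1/numSegments) * index/pixelCount + segment/numSegments
  var pos = (index / pixelCount + segment) / numSegments
  hsv(pos, 1, 1)   // placeholder: one hue sweep spread across all controllers
}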

I think 2D and 3D will be easier since they work in world units instead of pixels. For example, to change a render2D where you had 3 PBs in the same kind of arrangement:

localScale = 1/3
localOffset = 0/3 // for the first segment. 1/3 for the 2nd. 2/3 for the 3rd
export function render2D(index, x, y) {
  x = (x * localScale) + localOffset
  // ... the rest of the original render2D body runs here, using the adjusted x ...
}

I think converting render to a render1D might be even better. Swap out all instances of index/pixelCount with x in a render1D(index, x); then the coordinate math is similar to 2D/3D since you are working in “world units” again.

export function render(index) {
  render1D(index, index / pixelCount)
}

localScale = 1/3
localOffset = 0/3 // for the first segment. 1/3 for the 2nd. 2/3 for the 3rd
export function render1D(index, x) {
  x = (x * localScale) + localOffset
  // ... the original pattern body goes here, written in terms of x ...
}

I do mean this, and yes, it’s a trade-off. A synchronized pattern with individual sections of 4 PBs vs one PB with 4 connected channels. Maybe you can’t do the latter so you do the former.

The problem with synchronizing is that one PB might “loop” faster if it’s got less to do… Multiply that complexity by N PBs. Of course, if you can ensure timing remains constant across them all, so that frame rates are identical, that’s a different story. But that sounds WAY more complicated.

If PB1 is working on 100 pixels out of the big picture, and only those 100, call that X frames per second.
PB2 and PB3 are working on 200 pixels… If the time to prerender/render is double that of PB1, they’ll be at X/2 frame rate, so you’d want to slow PB1 down, right? And I doubt the timing math is that simple, given potential differences in calculation time (maybe PB2 has more edge cases with more math or logic involved).

I like the render 1D approach. The biggest problem I see with teaching everyone index based pattern building is that while it’s simpler, as soon as you move to 2D, you need to use a whole different sort of logic/method (or hardcode matrix height/width as you did recently with the 2D->1D array method).

So I’m not saying remove the index-based pattern ability (there are cases where it’s far simpler and better), but replacing it as the default with render1D-style learning/patterns means people learn one technique/description and can then repeat it with added dimensions when ready.

@Scruffynerf

Yep, that is the main benefit of using time(interval) for animations - not only does it provide consistent animation regardless of the frame rate, it will synchronize across all Pixelblazes when used with Firestorm or with a Pixelblaze in AP mode. It typically synchronizes down to a few milliseconds. When combined with math or a map to split which part of the animation is drawn, the animation is seamless even with varying pixel counts and frame rates. This has been a core feature/benefit of Firestorm since its initial creation - the timesync stuff was added at the same time, in v2.10.

Using delta is another way to provide consistent animation regardless of frame rate, but will not synchronize animations over a network by default. It could still be used for some effects, like fading out pixels, as long as the main driving animation was time() based.

So for example, let’s say you have 2 PBs, one running at 100 FPS and one at 50 FPS. The faster one will run the code twice as often, but given the same time(interval) call in both patterns, the values returned will increase half as much each frame on the faster PB. When time synced, they will return roughly the same value at the same time as well.
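A minimal sketch of a pattern built that way (the .015 interval is just an example value):

export function render(index) {
  var t = time(.015)                        // ~1 second sawtooth, shared once time-synced
  hsv(t, 1, wave(index / pixelCount - t))   // rotating pulse; same phase on every PB
}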

Here’s an example with a really contrasting setup. One PB drives a 16x16 panel, and the other drives a really tiny 8x8 panel prototype (ignore those dead pixels on the bottom row). Different pixel density, frame rates that differ by about 2:1, and each is set to only its own number of pixels.

This is using the mapper trick, where I include the pixel map for both, and re-order them so that the used pixels are first. I had also attempted just pattern math, but my numbers were off and the side-by-side in the preview was helpful.

The small one is set to extend the display, just on the bottom corner.

This is running “cube fire 3D” unmodified:


Can you document this a bit better? I’m not sure I understood this before, nor do I quite understand how the PB in AP mode controls this? (In Firestorm, it makes sense that a sync would happen, even if I don’t quite grasp it.)

Current docs say only this:

time(interval)

A sawtooth waveform between 0.0 and 1.0 that loops about every 65.536*interval seconds. e.g. use .015 for an approximately 1 second interval.

And in the Firestorm Readme

Beacon and Time Sync Server

Pixelblaze v2.10 and above send out broadcast UDP packets that are used for discovery, and accept reply packets for time synchronization. The server participates in a time sync algorithm similar to NTP, allowing any number of Pixelblazes to have synchronized animations.

That’s pretty much it: the impact on patterns is synchronization of the time call results.

It works like this:
Once a second, a PB sends out a beacon broadcast. Firestorm, or a Pixelblaze in AP mode, will reply with a “timesync” packet which contains the time base (a 32-bit number in milliseconds) and will thus act as a time sync source. The Python clients [1], [2] can act as time sync sources as well.

The actual time base number doesn’t matter so much as long as it increments by 1 for every millisecond. For Firestorm it’s the lower 32 bits of the current time in milliseconds since the Unix Epoch, and for a Pixelblaze in AP mode, it’s the number of milliseconds it has been running for.

When a PB gets a timesync reply, it takes the time base from the time sync source plus half of the network round-trip time, then adjusts its own time accordingly to try to match. If the difference is huge, it will jump to the base time (e.g. when a PB first comes online); otherwise it adjusts its local time slightly in the right direction. There’s a bit of filtering in there to prevent the odd network blip or jitter from throwing this off, but that’s the gist. After a handful of seconds it’s within a few milliseconds of true.
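In rough pseudocode (purely an illustration of the description above; the names and constants are made up, not actual firmware):

function onTimesyncReply(sourceTimeBase, roundTripMs) {
  var target = sourceTimeBase + roundTripMs / 2   // estimate of "now" at the sync source
  var diff = target - localTimeMs
  if (abs(diff) > 2000) {
    localTimeMs = target        // way off (e.g. just booted): jump straight to it
  } else {
    localTimeMs += diff * 0.1   // otherwise nudge gently; filtering smooths out jitter
  }
}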

The base time is then used when calculating return values for time(interval) which creates a sawtooth waveform roughly like this:
(currentTimeMs % intervalMs) / intervalMs

So if 2 PBs have roughly the same currentTimeMs, they will return similar values for calls to time(interval) with the same interval parameter.
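Working that through with the documented example value (this just expands the formula above, reusing the same names):

intervalMs = interval * 65536   // time()'s scale: 1.0 of interval ≈ 65.536 seconds
// e.g. interval = .015 → intervalMs ≈ 983, so time(.015) loops about once per second
value = (currentTimeMs % intervalMs) / intervalMs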

By the way, currentTimeMs is a snapshot taken for each frame, so multiple calls to time(interval) will always return the same result within a single animation frame, even if the pattern code took a while to run.


Thanks. That does help.

I’m going to use time(interval) as the latest theme for this week’s task, if only to better help folks (including me) grasp how to build a variety of animations using that as the “drive wheel”

I feel like these are the keys, between this and the “map it all” approach, whether it’s a huge pattern running on an output expander, or multiple PBs, or even multiple PBs each with output expanders.

I’m personally thinking about some hoop projects where the hoops can sync up, even as individual PBs.

Where can I find more info on how you setup your PB to create the effect in the video above? What is the “mapper trick” you’re using? Does each PB have the same pixel map, but each is playing a different section?

Hey slippers -

That’s just two Pixelblazes, each running its own matrix of LEDs. One is driving a big 16x16, the other is driving the smaller 8x8 in the bottom right.

Ben’s also using a setup where the two Pixelblazes are synchronizing their animation timing. The three ways to do that are:

  1. Set up one as an access point and join the others to the WiFi network it provides
  2. Use Firestorm on a separate computer to catalog the Pixelblazes on a network, sync their animations, and launch the same animations on all of them
  3. (Future - likely before Summer 2023) An upcoming sync feature in future firmware will also make this possible.

 

The mapper trick is described above. To summarize, wizard decided to define all the pixels for both matrices in both Pixelblazes (so you can see both in the map preview, and so that the map is scaled correctly across both devices), but he swapped which points were defined first so that each Pixelblaze only consumes the pixel positions it’s responsible for rendering.

If a map says, “Here’s where 320 points are in space”, but the Settings page only says there are 256 pixels connected, patterns will only use the first 256 map points.

Therefore, the two maps might be like this:

// Map for the 16x16 (256 pixel) matrix
function (pixelCount) {
  // (Pseudocode)
  // These get used
  generateMapForSquare(perSide=16, location=[0,0], scale=200%)

  // These only show up on the map preview
  generateMapForSquare(perSide=8, location=[16,0], scale=50%)
}

 

// Map for the 8x8 (64 pixel) matrix
function (pixelCount) {
  // (Pseudocode)
  // These get used
  generateMapForSquare(perSide=8, location=[16,0], scale=50%)

  // These only show up on the map preview
  generateMapForSquare(perSide=16, location=[0,0], scale=200%)
  
}
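For reference, here’s one way the hypothetical generateMapForSquare could look as a real map function (the pitch and position numbers are placeholders, and a real matrix map may also need serpentine wiring handled):

// Map for the 16x16 (256 pixel) matrix
function (pixelCount) {
  function square(perSide, xStart, yStart, pitch) {
    var pts = []
    for (var y = 0; y < perSide; y++)
      for (var x = 0; x < perSide; x++)
        pts.push([xStart + x * pitch, yStart + y * pitch])
    return pts
  }
  // These get used: the local 16x16 at full pitch
  var map = square(16, 0, 0, 1)
  // These only show up in the preview: the 8x8 at half pitch, off to the right
  return map.concat(square(8, 16, 0, 0.5))
}
// The 8x8's map would be the same two calls with the concat order reversed.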

A late housekeeping note for people who find this topic via forum search:

Any project looking to spread rendering across multiple Pixelblazes should be sure to check out the new sync feature. This feature is in firmware v3.40 and above. It allows a single Pixelblaze to sync patterns and playlists across multiple Pixelblazes (as well as to send sensor board data wirelessly to other Pixelblazes).

See how to use the sync feature in the announcement.

As of April 2023, the mapping discussion above still applies: If using 2D/3D patterns, and you want an overall effect to flow across the LEDs connected to different Pixelblazes, you’ll want to choose between two approaches:

  1. Add phantom pixels in space to each Pixelblaze’s map. This can shift one set of LEDs relative to the other, as in the announcement’s example.
  2. Use a different spatial transformation (translate, rotate, scale) in each Pixelblaze’s pattern code. In this way, you might have mapped each object in full world coordinates (all points are between 0 and 1), but then you transform one to only have X coordinates between 0 and .5, and the other to be in .5 to 1 (see the sketch below).
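A minimal sketch of approach 2 for a two-Pixelblaze split (the variable name and the hsv() body are placeholders):

xOffset = 0   // 0 on the left Pixelblaze, 0.5 on the right one
export function render2D(index, x, y) {
  x = x * 0.5 + xOffset   // this PB now owns half of the overall 0..1 X range
  hsv(x, 1, 1)            // placeholder body: hue sweep across both fixtures
}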