WLED pattern porting?

So I hooked up a few of the new Hex panels I got to a new Atom lite (in the same shipment), and flashed it with the latest WLED. And I was reminded of a few things:

  1. WLED has almost no matrix/2D support… there is some in the WLED reactive fork, but not yet in the main. PB blows it away if you have anything but a string of lights. Mapping in PB, especially as we make it easier to map for newbies, is far and away the killer feature nobody else is close to… if someone codes in FastLED, they can use a map generator but they still have to write the code by hand, and it’s not nearly as easy to do so as in PB.

  2. WLED builds in dozens of ‘effects’, combined with dozens of ‘palettes’. Mixing and matching (most but not all effects use the palette selected) means it’s got a large number of built-ins, but really, many of the effects are variants, so it’s not as diverse as it seems. I suspect if you counted the number of patterns in the PB library, we probably have more actual different ‘effects’, by far.

That said, I’d like to propose a group effort, open to anyone who wants to help:

Port ALL the things!

The “re-org” of the pattern library into more of a GitHub style is coming (soon?).
WLED bundles all of its built-in effects into just a file or two. Split into concrete and fairly small tasks, we could build PB equivalents (or better) for each one.

This could be a good way for folks learning to code in PB to feel really accomplished and contribute back. Some of those patterns are pretty easy to do in PB, perhaps easier than in WLED.

Who’s interested?


That’s a great idea, and as a starting point, can I suggest putting together a compatibility library? IIRC, WLED’s effects are written against FastLED, so some concepts are completely orthogonal to the PB way, while other things can be fairly easily ported as library-able functions.

What I’m thinking is that we’d probably want a skeleton program that defines a FastLED-like environment, with pixel framebuffers and a standard render() function to output them, and as many compatibility functions as we can provide, to make the process of “porting” mostly a copy-and-paste into the beforeRender() function plus some minor syntax fixups.

I’ve already started playing with a couple of FastLED patterns so I made my own FastLED-like skeleton:

//  Framebuffers
var pixelReds = array(pixelCount);
var pixelGreens = array(pixelCount);
var pixelBlues = array(pixelCount);
// standard renderer
export function render(index) { rgb(pixelReds[index], pixelGreens[index], pixelBlues[index]) }

and a PB function to mimic the “EVERY_N_MILLISECONDS” macro:

//  FASTLED emulation functions (mine)
var maxTimers = 10;
var elapsedTime = array(maxTimers);
function EVERY_N_MILLISECONDS(delta, accumulator, target, func) {
  elapsedTime[accumulator] += delta;
  if (elapsedTime[accumulator] > target) {
    elapsedTime[accumulator] = 0;
    func();
  }
}

Then to use it:

// Just an example; a real pattern ought to do something more interesting!
export var hue = 0;
function changeHue() {
  hue = (hue + 1/6) % 1;
  for (index = 0; index < pixelCount; index++) {
    pixelReds[index] = hue;
    pixelGreens[index] = hue;
    pixelBlues[index] = hue;
  }
}

export var brightness = 0;
function changeBrightness() {
  brightness = (brightness + 1/10) % 1;
}

export function beforeRender(delta) {
  EVERY_N_MILLISECONDS(delta, 0, 500, changeHue);
  EVERY_N_MILLISECONDS(delta, 1, 50, changeBrightness);
}

Easy patterns like chasers will be simple enough to port, but a lot of the nicest patterns depend on things like palettes, blending, blurring and fading functions that we don’t have yet.

And at the moment I’m struggling with the difference in floating-point representations and what FastLED functions do internally with them. For instance, the pattern code may calculate a 16.16 number which it then passes to “beatsin88”, which expects an 8.8 number, so the C compiler silently does a downcast and/or truncation; the parameters to beatsin88 are then treated as an integer if less than 256, but as an 8.8 fixed-point number if greater. With all the side effects it can be very difficult to tell by eye what some numbers passed as parameters to FastLED functions are really doing; sometimes it’s necessary to single-step through in a debugger to see how it winds up.
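To make that fixed-point ambiguity concrete, here is a plain-JavaScript sketch (not Pixelblaze code) of reading a Q8.8 value, plus the "small values are whole numbers" convention described above. The helper names are mine, and whether FastLED applies exactly this convention varies by function, so treat it as illustration only.

```javascript
// Q8.8: high byte is the integer part, low byte the fraction.
function q88ToFloat(v) {
  return v / 256;
}

// Mirrors the convention described above: values below 256 are taken
// as plain integers, larger values as Q8.8 fixed point.
function interpretParam88(v) {
  return v < 256 ? v : q88ToFloat(v);
}

console.log(q88ToFloat(0x7800));       // 120
console.log(interpretParam88(120));    // 120 (treated as a whole number)
console.log(interpretParam88(0x7880)); // 120.5 (treated as Q8.8)
```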

Still, I think it would be a very worthwhile exercise, and when we’ve squeezed all the goodness we can out of the WLED pattern library we can move on to the other FastLED-based engines (I’ve seen quite a few patterns I like in the SoulMate library).

Embrace and extend!

While a FastLED compatibility library is a good idea, it’s such a radically different method of programming LEDs that doing simple tasks (like most of the patterns in WLED) becomes more about emulating WLED than actually doing the pattern.

So let’s divide this in two pieces:

  1. Duplicate the pattern, staying as close to the ‘PB way’ of doing it as possible.

For example, look at WLED/FX.cpp at master · Aircoookie/WLED · GitHub

and take a simple case: blink() is used for a few patterns.

blink(color1, color2, bool strobe, bool do_palette)
alternate between color 1+2, maybe strobe, maybe use a palette of colors.
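As a sketch of how little code the core of blink() needs (an assumption about how a PB port might be structured, not the actual WLED implementation): the phase selection below is plain JavaScript so it can run standalone; on a Pixelblaze you’d feed it from time() in beforeRender() and emit the chosen color with hsv() or rgb() in render().

```javascript
// Decide which of two colors to show for a given 0..1 phase.
// Returns 1 for color2, 0 for color1. A strobe shows color2 only
// for a brief slice of the cycle; a plain blink is 50/50.
function blinkPhase(t, strobe) {
  var duty = strobe ? 0.1 : 0.5;
  return (t % 1) < duty ? 1 : 0;
}

console.log(blinkPhase(0.2, false)); // 1 (first half of a plain blink)
console.log(blinkPhase(0.7, false)); // 0 (second half)
console.log(blinkPhase(0.5, true));  // 0 (strobe is off most of the cycle)
```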

Porting this exactly, including segmenting code (which is used a lot in the patterns) seems pointless IF we can port the functionality.

We have ‘segmenting’ code now, the Multisegment pattern and Multimap multi-pattern will do this in either 1d or 2d, and just need a code fragment of a pattern to be installed into them.

Palette aside, doing a blink() is pretty easy.

  2. Build a FastLED library for porting things where it’s just easier to emulate than to do it the PB way.

I’m all in favor of doing this, but don’t want it in the way of the much simpler task of just ensuring that Pattern X from WLED has a PB equiv. In fact, having a PB equiv rather than a pure port can help teach people the PB way to write things.

I hear what you’re saying, but a quick look through the PB pattern library shows that a lot of the non-chaser patterns – most of the Cylon/KITT variants, Snake, Doom Fire and others – calculate into pixel buffers and then output them at render time. If we port most of the FastLED helper functions for dealing with buffers, then those WLED patterns can work on PB with a minimum of fuss (which is one consideration as a training exercise). Those who want to code golf can continue rewriting the patterns to try to eliminate the pixel buffers, but in many cases it won’t be possible.
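To make “port the helper functions” concrete, here is a hedged sketch of two of those buffer helpers. The names mirror FastLED’s fadeToBlackBy() and blur1d(), but these are reimplementations over 0..1 floats (PB-style) rather than 0..255 CRGB bytes, written as plain JavaScript so they run standalone.

```javascript
// Dim every element of a brightness buffer toward zero.
// amount is 0..1 rather than FastLED's 0..255 fract8.
function fadeToBlackBy(buf, amount) {
  for (var i = 0; i < buf.length; i++) buf[i] *= (1 - amount);
}

// Simple 1-2-1 kernel blur, blended in by amount (0..1).
function blur1d(buf, amount) {
  var prev = 0;
  for (var i = 0; i < buf.length; i++) {
    var next = (i + 1 < buf.length) ? buf[i + 1] : 0;
    var blurred = (prev + 2 * buf[i] + next) / 4;
    prev = buf[i];                  // save pre-blur value for the next pixel
    buf[i] = buf[i] * (1 - amount) + blurred * amount;
  }
}

var buf = [0, 1, 0];
blur1d(buf, 1);
console.log(buf); // [0.25, 0.5, 0.25]
```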

Don’t get me wrong, I love the PB way of doing things and I’ve never seen better for getting something amazing up and running with a minimum of code, but there are times when you need a little more access to the render pipeline…

I don’t have a problem with a pixel buffer, in fact, I’m working on code for one now…

but the KITT variants are a good example of why I don’t want to just port it. We have not only an array method of doing it (Classic KITT by Ben), but we have new variants that do it on matrixes, using waves, with no array required.

Doing it again seems pointless, especially if we’re just faking it via FastLED ‘emulation’.

I think we agree, making it easier to do FastLED style patterns is a good thing. I’m just saying, don’t port it wholesale, pick and choose.

As an example, look at the wonderful pattern ldirko did here:

Yes, you could port that FastLED code OR you could just do it the PB way. The PB way is so much cleaner. I’d way prefer that to just adding functions to fake doing FastLED.

(I will port this if someone else doesn’t do it first, but my PB allocated time is busy right now with a few things ahead of it)

Oh, and I think we should come up with a way to ensure all of the folks like ldirko get a PB into their hands.

following up on the idea of just porting FastLED in general:


Some of FastLED will be portable, some is only partially portable, and some is absolutely not portable. The 8-bit functions definitely are; the 16-bit ones often, but not always. When something takes a full 16-bit input, for example, it may allow values up to 65535, which we can’t represent in PB. Other features (currently) just can’t be emulated, or doing so is an awkward fit at best (millis being one example, since doing the INSTANTIATE_EVERY_N_TIME_PERIODS isn’t quite possible with PB, though we can fake a good attempt).
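The millis() case can be faked about as well as noted above by accumulating beforeRender()’s delta. One caveat worth labeling: Pixelblaze numbers are 16.16 fixed point, so a millisecond counter would overflow past 32767 (roughly 33 seconds) and has to wrap. Plain JavaScript sketch so it can run standalone:

```javascript
// Accumulate elapsed milliseconds the way a PB pattern would in
// beforeRender(delta), wrapping below the 16.16 fixed-point limit.
var accumMs = 0;

function beforeRender(delta) {
  accumMs = (accumMs + delta) % 32768;
}

function millis() {
  return accumMs;
}

beforeRender(1000);
beforeRender(2000);
console.log(millis()); // 3000
```

Code that relies on absolute millis() comparisons would need rewriting to tolerate the wrap, which is part of why a direct emulation is an awkward fit.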

I suspect we can always (almost always?) port a pattern. Between WLED and various collections of FastLED code, such as ldirko’s Pastebin - Pastebin.com and FastLED Arduino Code Examples with Simulation (and many more Wokwi-hosted emulations of patterns), there’s no shortage of source material.

I think the idea of ‘port all the things’ wrt patterns is a good step: find a pattern and bring it into the PB ecosystem, rewriting as needed (with credit to the original source), so basically we end up with a huge library of patterns.

Thru that, examples of how to port FastLED effectively will happen anyway, and some standard functions will likely emerge to replace/emulate pieces of FastLED API as needed.

It’s all portable - in that you can convert the pattern to run essentially the same through changes in the code.

Where FastLED uses 8- or 16-bit integers to represent a fraction from 0 to 1 (or rather (2^n - 1) / 2^n), you can convert it to a fractional value. This is a common trick: fixed-point math, which Pixelblaze also uses but hides the implementation complexity.
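Spelling that conversion out (the helper names are mine, just for illustration):

```javascript
// A FastLED uint8_t "fraction" covers 0..255, i.e. 0..(255/256) of a
// cycle; a uint16_t covers 0..65535, i.e. 0..(65535/65536).
function frac8ToFloat(v)  { return v / 256; }
function floatToFrac8(f)  { return Math.floor(f * 256) % 256; }
function frac16ToFloat(v) { return v / 65536; }

console.log(frac8ToFloat(128));    // 0.5
console.log(floatToFrac8(0.5));    // 128
console.log(frac16ToFloat(32768)); // 0.5
```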

Have an example you want to port?

Ok, I agree, Ben, you can convert the pattern to work, regardless. My discussion above was on the idea of instead focusing on porting the FastLED API, ala @pixie 's desire to make “the process of ‘porting’ mostly a copy-and-paste into the beforeRender() function and some minor syntax fixups.”

I don’t think that’s feasible.

Right, BTW I didn’t see it linked, but there’s some more thoughts in this post that are relevant:

While I’m all for doing Pixelblaze versions of interesting patterns, here’s an additional reason to not take the fastLED API compatibility approach:

I’ve been working a couple of client projects with OpenGL. Call me the slow person in the room, but it finally dawned on me that what goes on in an OpenGL fragment shader is NEARLY IDENTICAL to what goes on in a Pixelblaze render function.

Both present you with a single pixel at a time, in an environment where floating point numbers are your fundamental data type. Both support a similar set of native functions. The language subsets are even roughly similar. The main difference: a GPU-based shader runs its version of render() on thousands of tiny processors simultaneously while today’s Pixelblaze runs only one copy.

The implication though… if I write my render() function without side effects, it can inherently run in parallel at any scale without me having to change the code at all. Not far down the road as small, inexpensive GPUs trickle down from the phone world, this could actually be a huge advantage for future Pixelblazes.

(Also, it means that you can possibly port some shaderToy shaders to Pixelblaze without too much work. Don’t know how far you’d get with the ray marching stuff – haven’t tried yet. But some of the plasma/noise/color pattern stuff should be pretty approachable.)


Yup, this is why the IQ stuff is so interesting to me. I hadn’t made the explicit shader connection though. Nice.

Oh, multiple PBs with the big map trick (all normalized to 0…1, but each maps a different section, just has two pixels as corner references so it all is one big map). Multiple “PBGPUs” in parallel.
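A sketch of how one of those section maps might be generated (the function name and values here are hypothetical). Each controller maps only its own strip, but appends two reference “pixels” at the global extents so every Pixelblaze normalizes to the same 0..1 space; render code would then ignore the last two indices.

```javascript
// Build a 2D map for one section of a larger installation.
// sectionStartX: where this controller's strip begins, in world units.
// spacing: distance between pixels. globalWidth: the whole installation.
function sectionMap(pixelCount, sectionStartX, spacing, globalWidth) {
  var map = [];
  for (var i = 0; i < pixelCount; i++) {
    map.push([sectionStartX + i * spacing, 0]);
  }
  // two corner references spanning the full installation, so
  // normalization comes out identical on every controller
  map.push([0, 0]);
  map.push([globalWidth, 1]);
  return map;
}

var m = sectionMap(10, 100, 1, 1000);
console.log(m.length); // 12 (10 real pixels + 2 corner references)
```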

This is intentional 🙂 I intend to open that up with the ESP32’s dual core, so I could run 2 pixel render pipelines in parallel, and perhaps some kind of fork(fn) API so you could do some parallel stuff in init code and/or beforeRender.

I’ve also wanted to try designing a Pixelblaze compatible GPU using FPGAs. I got to dip my toe in the water a bit with the Supercon FPGA badge hacking, and Esden’s ICEBreaker. It’s fun stuff, I just don’t have the experience to design a GPU at the HDL level yet. The instruction set would be native at that point, and even a few cores of that would give completely insane performance.

That feels more approachable than trying to get openGL to run on a microcontroller, and certainly more timely than waiting for openGL compatible GPUs to arrive in the microcontroller world. Otherwise that language/toolset would be a clear choice over reinventing the wheel.

Raspberry Pi also has a GPU, and the thought has crossed my mind of doing some kind of GPU backed Pixelblaze setup there. While GPUs may not run really complicated renders efficiently (like with branches or loops), it would be possible to convert the simpler PB language to GLSL.


So we discussed porting patterns (aka effects).

We haven’t discussed palettes yet.
On PB, we really don’t have a palette library/need (yet). We have a good color picker, but not much in the way of presets.

Open to ideas and suggestions as to the best approach to palettes. We don’t really have “globals” that span patterns (set it once and it works on multiple patterns), so any palette code would have to live in each pattern. That means we either have a huge chunk of code in each pattern or come up with a good way of optimizing the code used. So if we had palette-using code in the pattern, maybe the actual palette is a cut-and-pasted bit of definition code at the top of the pattern, and changing a palette requires changing that code? A bit ugly.
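One possible shape for that cut-and-pasted definition block (the layout and the paletteGet() name are my assumptions, loosely modeled on FastLED’s ColorFromPalette): a flat array of [position, r, g, b] stops plus an interpolating lookup, pasted at the top of each pattern. Plain JavaScript here so it runs standalone; in a real PB pattern the result would feed rgb().

```javascript
// Palette as flat [pos, r, g, b] stops; pos runs 0..1 and must ascend.
var palette = [
  0.0, 1.0, 0.0, 0.0,  // red
  0.5, 1.0, 1.0, 0.0,  // yellow
  1.0, 1.0, 0.0, 0.0   // back to red
];

// Linearly interpolate a color for t in 0..1, returning [r, g, b].
function paletteGet(t) {
  var stops = palette.length / 4;
  for (var i = 0; i < stops - 1; i++) {
    var p0 = palette[i * 4], p1 = palette[(i + 1) * 4];
    if (t >= p0 && t <= p1) {
      var f = (t - p0) / (p1 - p0);
      var c = [];
      for (var j = 1; j <= 3; j++) {
        c.push(palette[i * 4 + j] * (1 - f) + palette[(i + 1) * 4 + j] * f);
      }
      return c;
    }
  }
  return [palette[1], palette[2], palette[3]]; // fallback: first stop
}

console.log(paletteGet(0.25)); // [1, 0.5, 0]
```

Swapping palettes then means replacing just the data array, which keeps the ugly part down to one block per pattern.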

When we get map access from patterns, it might be possible to extend the “extra fake pixel” concept to store a little global data in the map. (Actually, you could do it now: just use the first render pass to extract the palette from the “extra” map data, and have your render function ignore it on subsequent passes. It’d take some thinking to figure out how to control normalization so you could easily de-normalize the palette data.)

In general, I see palettes as an unnecessary legacy thing, but if you want to use one to extend a color scheme to multiple patterns, that might be a way to do it.

The same pattern can look very different with a different palette. Obvious example: Fire is all reds/oranges/yellows… change that to blues/greens/violets and it’s a very different look.

I see palettes as a valid way to say “use these colors”, which we don’t otherwise have now.

Well, I’d still prefer to let the user select one or two base colors with UI sliders, then generate the rest computationally as part of the pattern. But having said that, here’s a palette-stored-in-map proof of concept, just as something to think about.

The main simplifying assumption this makes is that because we’re theoretically running ported WLED patterns, we’re driving a 1D strip, as most stock WLED patterns do. That leaves the whole 3D map available to us as storage. To try this out, install the test mapper, and run the pattern. The test palette isn’t especially inspired, but it is um, quite visible!

Here’s the test mapper, with a 5 element RGB palette:

function (pixelCount) {
  paletteSize = 5;

  // construct palette here.  First entry is palette size,
  // repeated in x,y,z.  The next entries are the
  // palette RGB colors, followed by low and high
  // sentinel values to control normalization.
  // (The original post's palette entries were elided here;
  // these colors are just example values.)
  var map = [
    [5, 5, 5],        // palette size
    [255, 0, 0],      // example palette colors
    [255, 255, 0],
    [0, 255, 0],
    [0, 0, 255],
    [255, 0, 255],
    [-1, -1, -1],     // low sentinel
    [256, 256, 256]   // high sentinel
  ]

  // fill the rest of the map with zeros
  for (i = paletteSize + 3; i < pixelCount; i++) {
    map.push([0, 0, 0])
  }
  return map
}

And here’s the demo pattern:

// This is a sneaky way of storing a palette in 3D map data
// for use in 1D patterns

var MAX_PALETTE_LENGTH = 16;  // assumed maximum; not shown in the original
var paletteLength = 0;
var paletteR = array(MAX_PALETTE_LENGTH);
var paletteG = array(MAX_PALETTE_LENGTH);
var paletteB = array(MAX_PALETTE_LENGTH);

var drawFrame = renderGetPalette;
var paletteRetrieved = 0;

// use this renderer on the first frame to retrieve the palette from the
// 3D map data
function renderGetPalette(index, x, y, z) {
  // de-normalize palette data ((x * range) - low value)
  x = (x * 257) - 1;
  y = (y * 257) - 1;
  z = (z * 257) - 1;

  // palette length is duplicated in the first x,y,z
  // entry, so if these are all identical, it's a good
  // bet we've got a palette instead of a "real" map.
  if (index == 0) {
    if ((x == y) && (y == z)) paletteLength = floor(x);
  }
  else if (index <= paletteLength) {
    var n = index - 1;
    paletteR[n] = floor(x + 0.5) / 255;
    paletteG[n] = floor(y + 0.5) / 255;
    paletteB[n] = floor(z + 0.5) / 255;
  }
  paletteRetrieved = 1;
}

// once the palette is retrieved, use this renderer to draw
// onto our 1D strip.
function renderRunPattern(index, x, y, z) {
  render(index);
}

export function beforeRender(delta) {
  if (paletteRetrieved) drawFrame = renderRunPattern;
}

export function render3D(index, x, y, z) {
  drawFrame(index, x, y, z);
}

// proof-of-concept -- just divides the strip into equal segments
// with a palette color in each segment!
export function render(index) {
  var n = floor((index / pixelCount) * paletteLength);
  rgb(paletteR[n], paletteG[n], paletteB[n])
}

While I get your idea… changing a map is non-trivial…

I really dislike the idea. Still mulling alternatives.

True enough!

For now, if anybody’s crazy enough to do something like this, it could easily be generalized a bit to store several palettes, which patterns could select via slider. It’s in no way an ideal solution, but it does show one way to share global (constant) data between patterns.
