WLED pattern porting?

I don’t have a problem with a pixel buffer; in fact, I’m working on code for one now…

but the KITT variants are a good example of why I don’t want to just port it. We have not only an array-based method of doing it (Classic KITT by Ben), but also new variants that do it on matrices, using waves, with no array required.
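(For flavor, here’s an illustrative array-free sketch - not Ben’s actual code, and the constants are made up - that sweeps a KITT-style “eye” using triangle():)

// array-free KITT-style scanner: triangle() sweeps the eye back and forth
export function beforeRender(delta) {
  t1 = time(0.05)  // sweep speed
}

export function render(index) {
  pos = triangle(t1)                 // bounces 0 -> 1 -> 0
  d = abs(index / pixelCount - pos)  // distance from the eye
  v = max(0, 1 - d * 8)              // 8 sets the tail length
  hsv(0, 1, v * v)                   // red, with a rough gamma curve
}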

Doing it again seems pointless, especially if we’re just faking it via FastLED ‘emulation’.

I think we agree that making it easier to do FastLED-style patterns is a good thing. I’m just saying, don’t port it wholesale - pick and choose.

As an example, look at the wonderful pattern ldirko did here:

https://www.reddit.com/r/FastLED/comments/o4a7uv/let_me_show_you_my_last_distortion_waves_pattern

Yes, you could port that FastLED code, OR you could just do it the PB way. The PB way is so much cleaner. I’d much prefer that to just adding functions that fake FastLED.

(I will port this if someone else doesn’t do it first, but my PB-allocated time is tied up right now with a few things ahead of it.)

Oh, and I think we should come up with a way to ensure all of the folks like ldirko get a PB into their hands.

Following up on the idea of porting FastLED in general:

http://fastled.io/docs/3.1/index.html

Some of FastLED will be portable, some is only partially portable, and some is absolutely not portable. The 8-bit math is definitely portable; the 16-bit math often is, but not always. When something takes a full 16-bit input, for example, values can run up to 65535, which we can’t represent in the PB. Other features (currently) just can’t be emulated, or doing so means accepting that it’s an awkward fit at best. (millis being one example, since doing the INSTANTIATE_EVERY_N_TIME_PERIODS machinery isn’t quite possible with PB, though we can fake a good attempt.)
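(For the timing case, a hedged sketch of faking EVERY_N_MILLISECONDS by accumulating beforeRender’s delta - the 500 ms interval is just an example value:)

// approximate EVERY_N_MILLISECONDS(500): delta is the elapsed
// milliseconds since the previous frame
var accum = 0
export function beforeRender(delta) {
  accum += delta
  if (accum >= 500) {
    accum -= 500
    // ...do the periodic work here...
  }
}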

I suspect we can always (almost always?) port a pattern. Between WLED and various collections of FastLED code, such as ldirko’s Pastebin,
https://editor.soulmatelights.com/gallery,
and Wokwi’s FastLED Arduino code examples with simulation (plus many more Wokwi-hosted pattern emulations),

I think the idea of ‘port all the things’ with respect to patterns is a good step: find a pattern and bring it into the PB ecosystem, rewriting as needed (with credit to the original source), so that we end up with a huge library of patterns.

Through that, examples of how to port FastLED effectively will emerge anyway, and some standard functions will likely develop to replace/emulate pieces of the FastLED API as needed.
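(As a taste of what such emulation shims might look like - untested sketches on my part, built on PB’s wave(), which runs one full cycle as its input goes from 0 to 1:)

// hedged sketches of FastLED-ish 8-bit helpers in Pixelblaze terms
function sin8(x) { return 255 * wave(x / 256) }         // 0..255 in, ~0..255 out
function cos8(x) { return 255 * wave(x / 256 + 0.25) }  // quarter cycle ahead of sin8
function scale8(v, s) { return v * s / 256 }            // scale v by s/256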

It’s all portable - in that you can convert any pattern to run essentially the same way through changes in the code.

Where FastLED uses 8- or 16-bit integers to represent a fraction from 0 to 1 (really 0 to (2^n - 1)/2^n), you can convert it to a fractional value. This is a common trick - fixed-point math, which Pixelblaze itself uses while hiding the implementation complexity.
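(Concretely, crossing between the two worlds is just a multiply or divide - v8 here is a hypothetical FastLED-style 8-bit value:)

v8 = 200                  // a FastLED-style fraction, 0..255
v = v8 / 256              // now a PB 0..1 fraction (tops out at 255/256)
v8again = floor(v * 256)  // and back the other way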

Have an example you want to port?

Ok, I agree, Ben, you can convert the pattern to work, regardless. My discussion above was about the idea of instead focusing on porting the FastLED API, à la @pixie’s desire to make “the process of ‘porting’ mostly a copy-and-paste into the preRender() function and some minor syntax fixups.”

I don’t think that’s feasible.

Right. BTW, I didn’t see it linked, but there are some more thoughts in this post that are relevant:

While I’m all for doing Pixelblaze versions of interesting patterns, here’s an additional reason to not take the fastLED API compatibility approach:

I’ve been working a couple of client projects with OpenGL. Call me the slow person in the room, but it finally dawned on me that what goes on in an OpenGL fragment shader is NEARLY IDENTICAL to what goes on in a Pixelblaze render function.

Both present you with a single pixel at a time, in an environment where floating point numbers are your fundamental data type. Both support a similar set of native functions. The language subsets are even roughly similar. The main difference: a GPU-based shader runs its version of render() on thousands of tiny processors simultaneously while today’s Pixelblaze runs only one copy.

The implication though… if I write my render() function without side effects, it can inherently run in parallel at any scale without me having to change the code at all. Not far down the road as small, inexpensive GPUs trickle down from the phone world, this could actually be a huge advantage for future Pixelblazes.

(Also, it means that you can possibly port some Shadertoy shaders to Pixelblaze without too much work. Don’t know how far you’d get with the ray marching stuff – haven’t tried yet. But some of the plasma/noise/color pattern stuff should be pretty approachable.)
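(A minimal illustrative sketch of that side-effect-free contract - a toy, not from any real pattern. Nothing in render2D() writes shared state, so each pixel is independent, exactly the fragment-shader situation:)

export function beforeRender(delta) {
  t1 = time(0.1)  // all shared state is fixed before rendering starts
}

export function render2D(index, x, y) {
  h = x + y + t1    // hue drifts diagonally over time
  v = wave(x - t1)  // a traveling brightness wave
  hsv(h, 1, v * v)
}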


Yup, this is why the IQ stuff is so interesting to me. I hadn’t made the explicit shader connection though. Nice.

Oh, multiple PBs with the big map trick (all normalized to 0…1, but each maps a different section, with two pixels as corner references so it all forms one big map). Multiple “PBGPUs” in parallel.

This is intentional 🙂 I intend to open that up with the ESP32’s dual core, so I could run 2 pixel render pipelines in parallel, and perhaps some kind of fork(fn) API so you could do some parallel stuff in init code and/or beforeRender.

I’ve also wanted to try designing a Pixelblaze-compatible GPU using FPGAs. I got to dip my toe in the water a bit with the Supercon FPGA badge hacking, and Esden’s ICEBreaker. It’s fun stuff; I just don’t have the experience to design a GPU at the HDL level just yet. The instruction set would be native at that point, and even a few cores of that would give completely insane performance.

That feels more approachable than trying to get OpenGL to run on a microcontroller, and certainly more timely than waiting for OpenGL-compatible GPUs to arrive in the microcontroller world. Otherwise that language/toolset would be a clear choice over reinventing the wheel.

Raspberry Pi also has a GPU, and the thought has crossed my mind of doing some kind of GPU backed Pixelblaze setup there. While GPUs may not run really complicated renders efficiently (like with branches or loops), it would be possible to convert the simpler PB language to GLSL.


So we discussed porting patterns (aka effects).

We haven’t discussed palettes yet.
On PB, we really don’t have a palette library (or an established need for one) yet. We have a good color picker, but not much in the way of presets.

Open to ideas and suggestions as to the best approach to palettes. We don’t really have “globals” that span patterns (set it once and it works in multiple patterns), so any palette code would have to live in each pattern. That means either we have a huge chunk of code in each pattern or we come up with a good way of optimizing the code used. So if we had “palette-using code” in the pattern, maybe the actual palette is a cut-and-pasted bit of definition code at the top of the pattern, and changing a palette requires changing that code? A bit ugly.

When we get map access from patterns, it might be possible to extend the “extra fake pixel” concept to store a little global data in the map. (Actually, you could do it now: just use the first render pass to extract the palette from the “extra” map data, and have your render function ignore it on subsequent passes. It’d take some thinking to figure out how to control normalization so you could easily de-normalize the palette data.)

In general, I see palettes as an unnecessary legacy thing, but if you want to use one to extend a color scheme to multiple patterns, that might be a way to do it.

The same pattern can look very different with a different palette. Obvious example: Fire is all reds/oranges/yellows… change that to blue/green/violet and it’s a very different look.

I see palettes as a valid way to say “use these colors”, which we don’t otherwise have now.
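(To make “use these colors” concrete, here’s a tiny illustrative sketch - made-up colors, and it assumes array literals are available - of an in-pattern palette with linear interpolation between stops:)

// minimal in-pattern palette: per-channel stop arrays, red -> orange -> yellow
var palLen = 3
var palR = [1, 1, 1]
var palG = [0, 0.5, 1]
var palB = [0, 0, 0]

// look up t (0..1) in the palette, blending between adjacent stops
function palGet(t) {
  var pos = t * (palLen - 1)
  var i = floor(pos)
  var f = pos - i
  var j = min(i + 1, palLen - 1)
  rgb(palR[i] + f * (palR[j] - palR[i]),
      palG[i] + f * (palG[j] - palG[i]),
      palB[i] + f * (palB[j] - palB[i]))
}

export function render(index) {
  palGet(index / pixelCount)
}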

Well, I’d still prefer to let the user select one or two base colors with UI sliders, then generate the rest computationally as part of the pattern. But having said that, here’s a palette-stored-in-map proof of concept, just as something to think about.

The main simplifying assumption this makes is that because we’re theoretically running ported WLED patterns, we’re driving a 1D strip, as most stock WLED patterns do. That leaves the whole 3D map available to us as storage. To try this out, install the test mapper and run the pattern. The test palette isn’t especially inspired, but it is, um, quite visible!

Here’s the test mapper, with a 5 element RGB palette:

function (pixelCount) {
  paletteSize = 5;

  // construct palette here.  First entry is palette size,
  // repeated in x,y,z.  The next entries are the
  // palette RGB colors, followed by low and high
  // sentinel values to control normalization.
  var map = [
    [5,5,5],
    [216,0,0],
    [0,131,84],
    [238,75,106],
    [0,59,200],
    [15,113,115],
    [-1,-1,-1],
    [256,256,256]
  ]
  
  // fill the rest of the map with zeros
  for (i = paletteSize + 3; i < pixelCount; i++) {
    map.push([0, 0, 0])
  }
  return map
}

And here’s the demo pattern:

// This is a sneaky way of storing a palette in 3D map data
// for use in 1D patterns

var MAX_PALETTE_LENGTH = 6;
var paletteLength = 0;
var paletteR = array(MAX_PALETTE_LENGTH);
var paletteG = array(MAX_PALETTE_LENGTH);
var paletteB = array(MAX_PALETTE_LENGTH);

var drawFrame = renderGetPalette;
var paletteRetrieved = 0;

// use this renderer on the first frame to retrieve the palette from the
// 3D map data
function renderGetPalette(index,x,y,z) {
  // de-normalize palette data ((x * range) - low Value)
  x = (x * 257)-1;
  y = (y * 257)-1;
  z = (z * 257)-1;  

  // palette length is duplicated in first x,y,z
  // entries, so if these are all identical, it's a good
  // bet we've got a palette instead of a "real" map.
  if (index == 0) {
    if ((x == y) && (y == z)) paletteLength = floor(x);
  } 
  else if (index <= paletteLength) {
    var n = index - 1;
    paletteR[n] = floor(x + 0.5) / 255;
    paletteG[n] = floor(y + 0.5) / 255;
    paletteB[n] = floor(z + 0.5) / 255;
  }
  paletteRetrieved = 1;
}

// once the palette is retrieved, use this renderer to draw
// onto our 1D strip.
function renderRunPattern(index,x,y,z) {
  render(index);
}

export function beforeRender(delta) {
  if (paletteRetrieved) drawFrame = renderRunPattern;
}

export function render3D(index,x,y,z) {
  drawFrame(index,x,y,z);
}

// proof-of-concept -- just divides strips into equal segments
// with a palette color in each segment!
export function render(index) {
  var n = (index / pixelCount) * paletteLength;
  rgb(paletteR[n], paletteG[n], paletteB[n]);
}

While I get your idea… Changing a map is non-trivial…

I really dislike the idea. Still mulling alternatives.

True enough!

For now, if anybody’s crazy enough to do something like this, it could easily be generalized a bit to store several palettes, which patterns could select via slider. It’s in no way an ideal solution, but it does show one way to share global (constant) data between patterns.


Yeah, we’re in a bit of a bootstrap problem:

We don’t have palettes, so there’s no demonstrated need to add palette support to the API/firmware - even though that’s exactly what would make this much easier.

Honestly, a global variable API (some way to stash variables shared across patterns) is really the missing bit. Palette storage is just one possible use.

I took a shot at porting Distortion Waves last night. First I did a more-or-less straight transliteration of the FastLED code (though I got rid of the gamma LUT and replaced the cosine LUT with a PB wave function) and that was OK, though there were lots of artefacts in the middle which I think were due to numeric overflows.

Then I started hacking away to make it more PB-like. I’ve now got it very simple and PB-friendly, but in the process of scaling from FastLED’s 0…255 intensities and Soulmate’s 1-20 coordinates to PB’s 0…1 world I lost track of what the scaling coefficients should be at each stage, so it’s now 95% correct but 100% wrong (i.e. the code looks good but the results don’t resemble the original).

I’m tired of looking at it, and feeling distinctly inadequate after seeing @zranger1’s Great Metaballs of Fire, so I’ll post it here as a starting point if someone with fresh eyes wants to carry it forward…

//  Cobbled together from the original at: https://editor.soulmatelights.com/gallery/1089-distorsion-waves

//  simple replacement for LUT
function cos_wave(proportion) { return 1-wave(proportion+0.25); }
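// 0.91552734375 = 60 / 65.536: converts BPM to time()'s units of 65.536 seconds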
function beatsin(bpm) { return wave(time(0.91552734375/bpm)); }

// adjustments
timeBase = 0.1;
speed = 5;
w = 2;

export function beforeRender(delta) {
  a1=time(timeBase); a2=time(2*timeBase); a3=time(3*timeBase);
  cx1 = beatsin(10-speed); cy1 = beatsin(12-speed); 
  cx2 = beatsin(13-speed); cy2 = beatsin(15-speed);
  cx3 = beatsin(17-speed); cy3 = beatsin(14-speed);
}

export function render2D(index, x, y) {
/*
  byte rdistort = cos_wave[   (cos_wave[((x << 3) + a1) & 255]    + cos_wave[   ((y << 3) - a2) & 255]    + a3     ) & 255   ] >> 1;
  byte gdistort = cos_wave[   (cos_wave[((x << 3) - a2) & 255]    + cos_wave[   ((y << 3) + a3) & 255]    + a1 + 32) & 255   ] >> 1;
  byte bdistort = cos_wave[   (cos_wave[((x << 3) + a3) & 255]    + cos_wave[   ((y << 3) - a1) & 255]    + a2 + 64) & 255   ] >> 1;
*/
  coeff1 = 0.06; 
  r1 = coeff1*cos_wave(x+a1); 
  g1 = coeff1*cos_wave(x-a2); 
  b1 = coeff1*cos_wave(x+a3);
  
  coeff2 = 1; 
  r2 = coeff2*cos_wave(y-a2); 
  g2 = coeff2*cos_wave(y+a3); 
  b2 = coeff2*cos_wave(y-a1);
  
  coeff3 = 1; 
  rdistort = coeff3*cos_wave(r1+r2+a3); 
  gdistort = coeff3*cos_wave(g1+g2+a1+1/8);  
  bdistort = coeff3*cos_wave(b1+b2+a2+1/4); 

/*
  byte valueR = rdistort + w * (a1 - (((xoffs - cx1) * (xoffs - cx1) + (yoffs - cy1) * (yoffs - cy1)) >> 7));
  byte valueG = gdistort + w * (a2 - (((xoffs - cx2) * (xoffs - cx2) + (yoffs - cy2) * (yoffs - cy2)) >> 7));
  byte valueB = bdistort + w * (a3 - (((xoffs - cx3) * (xoffs - cx3) + (yoffs - cy3) * (yoffs - cy3)) >> 7));
*/
  dx1 = x-cx1; dy1 = y-cy1; dx2 = x-cx2; dy2 = y-cy2; dx3 = x-cx3; dy3 = y-cy3;
  r = cos_wave(rdistort + w*(a1-(dx1*dx1 + dy1*dy1)));
  g = cos_wave(gdistort + w*(a2-(dx2*dx2 + dy2*dy2)));
  b = cos_wave(bdistort + w*(a3-(dx3*dx3 + dy3*dy3)));

  rgb(r*r, g*g, b*b); 
}

@Pixie – you had this! The only thing missing is that WLED’s coordinates come in as integer pixel numbers, while Pixelblaze’s are normalized and scaled. To get the numbers back to something like the proper scaling for this pattern, try adding the line:

scale(0.5,0.25)

somewhere in the initialization section. (YMMV on the specific values – this looked good to me, and seems to correct for the aspect ratio of the rectangular display he’s running it on in the video.)
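(If it helps anyone trying this: assuming the v3 coordinate transform API, the call can sit at the top level of the pattern, where it runs once at startup:)

scale(0.5, 0.25)  // runs once, before any render2D() calls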


I don’t know how I (re)missed the Palette Utility in the pattern collection.

Discussion here (which I even participated in)

I think the next step is to try porting a WLED pattern that assumes palette usage and see how awkward it is… I still think maps aren’t the answer, but I’m not sure we have a good answer yet. If we had includes (for example), we could have a palette include with the desired palettes.