In the interest of coordinating efforts for PB, just know that I think Ben is considering adding built-in palette support to the roadmap!
I was searching for exactly this, and got to this thread via a Google query – I’d also love a re-implementation of FastLED’s color palette functions!
Replying to myself here – for anyone else looking – the pattern “Utility: Palettes” within https://electromage.com/patterns/ (BTW: if there’s a deep-link option to a given pattern, I don’t see it?) does basically what you’d want if you’re used to FastLED palettes.
I came across this, as part of the amazing stuff from Inigo:
His version of a simple palette method, using cosines.
And found in the comments on the Shadertoy demo is this:
Another super powerful way to define a palette, and better yet, it’s in the 0…1 format we want for PB. By default it uses RGB (but I’m curious, and will play with it in HSV).
I’ll write up a demo pattern for this shortly.
The randomize button shows me that basically, if you just generate a random set of values between -PI and PI, you end up with a decent palette almost every single time (and the rest mostly have too small a variety of color, like light green-blue to darker green-blue, or one slightly varied shade of orange). That’s phenomenal.
Seems to work, and it will only get easier with array initializers on the way…
// Define palette.
var paramA = array(3); var paramB = array(3); var paramC = array(3); var paramD = array(3);
// rainbow1: [[0.500 0.500 0.500] [0.500 0.500 0.500] [1.000 1.000 1.000] [0.000 0.333 0.667]]
paramA[0] = 0.500; paramA[1] = 0.500; paramA[2] = 0.500;
paramB[0] = 0.500; paramB[1] = 0.500; paramB[2] = 0.500;
paramC[0] = 1.000; paramC[1] = 1.000; paramC[2] = 1.000;
paramD[0] = 0.000; paramD[1] = 0.333; paramD[2] = 0.667;
// Calculate the palette color at a particular point and return its RGB components.
function paletteColorAt(t, coeffA, coeffB, coeffC, coeffD, retVal) {
for (i=0;i<retVal.length;i++) retVal[i] = coeffA[i] + coeffB[i] * cos(PI2 * (coeffC[i] * t + coeffD[i]));
}
var paletteRGB = array(3);
export function render(index) {
paletteColorAt(index/pixelCount, paramA, paramB, paramC, paramD, paletteRGB);
rgb(paletteRGB[0], paletteRGB[1], paletteRGB[2]);
}
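To illustrate the randomize observation above, here’s a minimal sketch (mine, not from the original post) that fills the same coefficient arrays with random values in [-PI, PI] and renders them through the same paletteColorAt():

// Sketch: random cosine-palette coefficients in [-PI, PI], per the observation above.
function randomizePalette() {
  paramA.mutate((v) => random(PI2) - PI);
  paramB.mutate((v) => random(PI2) - PI);
  paramC.mutate((v) => random(PI2) - PI);
  paramD.mutate((v) => random(PI2) - PI);
}
randomizePalette(); // call once at startup, or wire it to a UI control to reroll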
The cpt-city library is another good source of palettes; you can use PaletteKnife for FastLED to extract the gradient definitions and then render them like so:
// Define palette as a variable number of bands, each containing a startIndex, R, G, and B component.
function newPaletteRGB(numBands) { ary = array(numBands); ary.mutate((v) => array(4)); return ary; }
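// Note: on Pixelblaze's 16.16 fixed-point numbers, >> 8 divides by 256, so the 0-255 color components below end up in the 0..1 range that rgb() expects.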
function setPaletteBandRGB(ary, index, offset, R, G, B) { ary[index][0] = offset; ary[index][1] = R >> 8; ary[index][2] = G >> 8; ary[index][3] = B >> 8; }
function LERP(percent, low, high) { return low + percent * (high - low); }
export var saveV;
function rgbFromFastLedPalette(v, palette, retVal) {
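// << 8 multiplies the 0..1 position by 256 (16.16 fixed point), matching the 0-255 band offsets.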
v = clamp(v << 8, 0, 255);
for (idx=0;idx<palette.length;idx++) {
if (v <= palette[idx][0]) {
// We're in this band, so LERP to find the appropriate shade.
if (v == 0) { retVal[0] = palette[idx][1]; retVal[1] = palette[idx][2]; retVal[2] = palette[idx][3]; }
else {
scale = (v - palette[idx-1][0]) / (palette[idx][0] - palette[idx-1][0]);
retVal[0] = LERP(scale, palette[idx-1][1], palette[idx][1]); retVal[1] = LERP(scale, palette[idx-1][2], palette[idx][2]); retVal[2] = LERP(scale, palette[idx-1][3], palette[idx][3]);
}
break;
}
}
}
// Gradient palette "quagga_gp" originally from http://soliton.vm.bytemark.co.uk/pub/cpt-city/rc/tn/quagga.png.index.html; converted for FastLED with gammas (2.6, 2.2, 2.5)
var paletteQuagga = newPaletteRGB(6);
setPaletteBandRGB(paletteQuagga, 0, 0, 1, 9, 84);
setPaletteBandRGB(paletteQuagga, 1, 40, 42, 24, 72);
setPaletteBandRGB(paletteQuagga, 2, 84, 6, 58, 2);
setPaletteBandRGB(paletteQuagga, 3, 168, 88, 169, 24);
setPaletteBandRGB(paletteQuagga, 4, 211, 42, 24, 72);
setPaletteBandRGB(paletteQuagga, 5, 255, 1, 9, 84);
// Pattern
var pixelRGB = array(3);
export function render(index) {
rgbFromFastLedPalette(index/pixelCount, paletteQuagga, pixelRGB);
rgb(pixelRGB[0], pixelRGB[1], pixelRGB[2]);
}
This too will become simpler with v3.19+…happy days!
Nice. Using array iterative functions instead of the for loop, and moving the rgb() call into the function, you can make it even shorter and cleaner. (I’ll do it with 3.19b shortly.)
Additionally:
It seems a huge waste to store four 0-255 values (position, R, G, B) as a four-element array, given that PB stores numbers in 16.16 format and could compact those four numbers into one 32-bit value; the function could extract the four 8-bit values easily enough.
Plus, in code you could write it as two 0x hex numbers (since we can’t do hex fractionals directly), which keeps it slightly human-readable if you know hex color codes. That seems better than the binary option with 32 ones and zeros. Decimal is still more human-readable, though, so I’ll likely figure out a clean way to populate the single array.
Yes, for HDR LEDs you’d want more bits… but for ‘normal’ usage, reducing the size would mean you could store more palettes in far less space (using fewer variables, too).
So I’ll implement this as tightly as I can, in the hope that @wizard will just adopt the code/method and add basic palette selection to the firmware. (Assuming people want this, of course.)
@wizard, any chance of adding lerp() as a built-in to 3.19?
Is it just me, or do you wish that instead of random-seeming names, palettes were sorted/ordered by some sort of general scheme?
I don’t mind seeing Lava, but Red Lava would at least sort with other reds. Red Orangey is fine too. Even Red Yellow…
Bluegreens… I’m fine with Blue Ocean, Blue Clouds, Blue Atlantica, etc.
Redgreens… Etc
Rainbow
Browns, Yellows…
At least then I’d have an idea without having to look up a palette color scheme or “try it out blindly” (over and over and over…)
If I’m wanting a palette for a Fire effect, looking at a handful of Red palettes is easy, rather than scrolling through dozens trying to pick out the ones that might work.
In that vein, when I crib these (and I will), I’m gonna rename them appropriately.
If the palette bands are expressed as RGB values, they’re already sortable…
Off the top of my head, if sorting were done based on the component values of the bands presented in descending order (dropping any zeroed components), then similar color transitions would sort lexicographically to be near one another:
Bands | Description |
---|---|
bFF->b80g40 | BlueOnly to BlueGreen |
bFF->g80b40 | BlueOnly to SeaGreen |
bFF->gFF | BlueOnly to GreenOnly |
b80g40->gFF | BlueGreen to GreenOnly |
g80b40->gFF | SeaGreen to GreenOnly |
gFF->bFF | GreenOnly to BlueOnly |
rFF->r80b80 | RedOnly to Purple |
rFF->r80b80->bFF | RedOnly to Purple to BlueOnly |
+1 for lerp(); I use it all over the place for scaling sliders and wave functions.
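For instance, a hypothetical slider-scaling sketch (the names and range are mine, not from the post):

function lerp(pct, low, high) { return low + pct * (high - low); }

var speed = 2;
export function sliderSpeed(v) {
  // map the slider's 0..1 position onto a 0.5..10 speed range
  speed = lerp(v, 0.5, 10);
}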
…and I like the improved preview renderer for WLED. Does Pixelblaze generate a preview once after each edit, or does it send the preview repeatedly over the websocket? Other things like Firestorm and @zranger1’s python library seem to have trouble communicating to PB when the editor is open.
I like that, but any rainbow ones would get missed, etc. Then again, I suspect any autosort will miss some. I just want names to be grouped into human categories instead of spread across the spectrum.
I believe the ‘generating’ delay we see is the Pixelblaze re-rendering the jpg preview on compile, but what about the ‘live’ feed? @wizard, is that available other than via the UI? WLED has a /json/live URL, which is how Scott is grabbing data in his script.
And it does:
// palettes copied directly from http://dev.thi.ng/gradients/
var paletteGreenRed = [ [0.5, 0.5, 0], [0.5, 0.5, 0], [0.5, 0.5, 0], [0.5, 0, 0] ];
var paletteRainbow1 = [ [0.500, 0.500, 0.500], [0.500, 0.500, 0.500], [1.000, 1.000, 1.000], [0.000, 0.333, 0.667] ];
var paletteHarvest = [ [0.5, 0.5, 0.5], [0.5, 0.5, 0.5], [1.0, 1.0, 0.5], [0.8, 0.9, 0.3] ];
// Calculate and render the palette color at a particular point.
function paletteAt(t, palette) {
_r = palette[0][0] + palette[1][0] * cos(PI2 * ((palette[2][0] * t) + palette[3][0]));
_g = palette[0][1] + palette[1][1] * cos(PI2 * ((palette[2][1] * t) + palette[3][1]));
_b = palette[0][2] + palette[1][2] * cos(PI2 * ((palette[2][2] * t) + palette[3][2]));
rgb(_r, _g, _b);
}
export function render(index) {
t = time(0.05);
x = index/pixelCount;
if (t < 0.3) paletteAt(x, paletteRainbow1);
else if (t < 0.6) paletteAt(x, paletteGreenRed);
else paletteAt(x, paletteHarvest);
}
Nice. Thanks @wizard and thanks @scruffynerf!
@pixie, are you running the editor and Firestorm/python on the same machine? If so, once the editor is up, that’d stop any other program from connecting from the same IP address. The web browser very sensibly doesn’t want to share its open sockets with other apps.
If you need to have the editor and another websocket connection going at the same time, you can set up multiple IP addresses on the same NIC and run Firestorm, etc. on the secondary IP.
I do this all the time. It’s handy for separating traffic for various tasks and services. It also gives you an easy way to get more than one port 80 on a machine, for programs that absolutely insist on that.
On recent Linux w/GUI, just go to the IPv4 settings in the Network manager and add as many addresses as you need.
It’s not much more complicated in Windows, but it’s harder to find the right place to do it. Here’s a walkthrough of a couple of methods.
To summarize, you’ll need to go to the TCP/IP v4 properties dialog for your network adapter, switch yourself to a static IP address, then click the “Advanced” button, and you can add addresses from there. Or you can just use one of the handy command-line options in the walkthrough above.
Nice, I’m actually wondering if reversing the way this is put together makes sense…
Right now, A, B, C, and D are each an [R, G, B] triple, so you end up with four 3-element (RGB) arrays, and you calculate a value for each of R, G, and B.
But then http://dev.thi.ng/gradients/ offers a global tweak as well, which is kinda nifty.
What if it’s done as R, G, B instead, storing A, B, C, D for each? That’s 3 arrays with 4 values in each, but now the color channels are easily separated… so I could modify the R channel as a whole array, rather than parts of 4 arrays. Not to mention that A, B, C, and D are all relative to PI.
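A minimal sketch of that channel-major layout (my own, not from the thread), using the rainbow1 numbers from above with one [A, B, C, D] array per color channel:

var chanR = [0.500, 0.500, 1.000, 0.000];
var chanG = [0.500, 0.500, 1.000, 0.333];
var chanB = [0.500, 0.500, 1.000, 0.667];

// Evaluate one channel of the cosine palette at position t
function channelAt(t, chan) {
  return chan[0] + chan[1] * cos(PI2 * (chan[2] * t + chan[3]));
}

export function render(index) {
  t = index / pixelCount;
  rgb(channelAt(t, chanR), channelAt(t, chanG), channelAt(t, chanB));
}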
Either way, I’m considering whether compacting these into 16.16 values also makes sense…
That would make an entire palette reducible down to just a scant 3 or 4 single values. That seems an amazing reduction… and totally worthwhile.
“oh, that palette? It’s 0x12345678, 0x345AE444, 0x13405544, 0xFF00EEDD”
In the end, I didn’t bother with bit-packing the band values, because array initializers make the palette code short and sweet enough on its own:
function LERP(percent, low, high) { return low + percent * (high - low); }
// A palette is a variable number of bands, each containing a startIndex, R, G, and B component.
function fastLedPaletteAt(v, palette) {
v = clamp(v << 8, 0, 255);
for (idx=0;idx<palette.length;idx++) {
if (v <= palette[idx][0]) { // We're at the beginning of the band.
if (v == 0) { rgb(palette[idx][1] >> 8, palette[idx][2] >> 8, palette[idx][3] >> 8); }
else { // We're in the middle of this band, so LERP to find the appropriate shade.
scale = (v - palette[idx-1][0]) / (palette[idx][0] - palette[idx-1][0]);
rgb(LERP(scale, palette[idx-1][1], palette[idx][1]) >> 8, LERP(scale, palette[idx-1][2], palette[idx][2]) >> 8, LERP(scale, palette[idx-1][3], palette[idx][3]) >> 8);
}
break;
}
}
}
// Gradient palette "quagga_gp" originally from http://soliton.vm.bytemark.co.uk/pub/cpt-city/rc/tn/quagga.png.index.html; converted for FastLED with gammas (2.6, 2.2, 2.5)
var paletteQuagga = [ [ 0, 1, 9, 84], [ 40, 42, 24, 72], [ 84, 6, 58, 2], [ 168, 88, 169, 24], [211, 42, 24, 72], [255, 1, 9, 84] ];
// Gradient palette "scoutie_gp", originally from http://soliton.vm.bytemark.co.uk/pub/cpt-city/rc/tn/scoutie.png.index.html; converted for FastLED with gammas (2.6, 2.2, 2.5)
var paletteScoutie = [ [ 0, 255,156, 0], [127, 0,195, 18], [216, 1, 0, 39], [255, 1, 0, 39] ];
// Pattern
export function render(index) {
t = time(0.05);
x = index/pixelCount;
if (t < 0.5) fastLedPaletteAt(x, paletteScoutie);
else fastLedPaletteAt(x, paletteQuagga);
}
@zranger1, I don’t think that’s the case.
A WebSocket client such as the one inside a web browser can only connect to a single server at a time, but a WebSocket server such as the one inside the PB’s web server can handle multiple clients. If a server gets confused by multiple client connections from the same IP address, then it’s not identifying connections appropriately – that is to say, uniquely.
At any rate, PB can currently handle simultaneous connections from the same Windows computer (web UI inside Edge, and your python library connecting from a WSL session), but the python connection frequently disconnects with an error message if the preview band is running in the Editor window; it doesn’t seem to mind if the preview band is running atop the Pattern List page.
Also, I sometimes forget that I have an editor window open and open a second one, but that slows the response time so drastically that I eventually notice and check the other browser tabs…
This is interesting. You can… I just tested it… open a browser window to a Pixelblaze from multiple different browsers on the same machine simultaneously. I had Edge, Chromium and Firefox going, and everything appears to be working, all simultaneously watching variables change.
I’ll look into this - it’d be interesting to know if the browsers were doing anything different with the socket, or just using a better websocket library. (And actually, being able to run the Python client while you had a browser window open didn’t work at all when I started. Python just refused to connect if the browser already had a websocket connection. This is an improvement!)
I took a closer look half an hour ago and the python library was throwing an exception from inside a call to waitForEmptyQueue(1000) following a call to setActivePattern(). I replaced the waitForEmptyQueue(1000) call with a sleep(1000) and it’s run without error since…
I don’t think this should be the case. Older versions had only global state to track whether or not preview data and updates should be sent, but this has improved, and it wouldn’t have prevented a WS connection anyhow. Sure, each connection does add load, and there is a system limit of 5 (IIRC?) simultaneous connections. That includes any zombie connections waiting to time out. Older versions would also boot every connection if any stalled, but those issues are all resolved to the best of my knowledge.
@Scruffynerf, yes, preview data for the first 100 pixels is streamed over the websocket if it’s enabled for that connection. These come in as binary preview frames. WLED seems to do something similar these days, but as a hex string in JSON.
At some point I will lift the 100-pixel restriction and use the pixel map to render 2D/3D live previews. There will be some limit, and I might send a subset by skipping pixels.
@Wizard, thanks for keeping up with the small, edge-case network issues! Things have been a lot smoother in the last couple of firmware versions.
And thanks for localizing the Python problem, @pixie. I’ll see if I can find a graceful way to make waitForEmptyQueue() behave itself if the connection momentarily stalls, and will get it checked in ASAP. I’ve got a couple of bug fixes sitting around waiting to go into PyPI anyway.