Feature Request: Raw RGB values in preview JPG

I’ve been admiring the new animated pattern previews on the WLED site:

FX_0021

and decided to do something similar to document PB patterns with animated PNGs. It’s not as fancy as the live preview project @zRanger1 is working on, but does have the benefit of being lightweight and automatic.

Since the exported pattern file (.EPE) contains a preview JPG with the first 100 pixels of the first 150 passes through the render cycle, it’s easy enough to cycle through the JPG, extract the pixels, and then scale and format them.
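
Extracting it takes only a few lines. Here’s a minimal sketch (assuming, as the full script further down does, that the .epe file is JSON with a base64-encoded JPEG in its preview field):

import base64, io, json
from PIL import Image

with open('Cylon.epe') as f:   # any exported pattern file
    epe = json.load(f)
preview = Image.open(io.BytesIO(base64.b64decode(epe['preview'])))
print(preview.size)            # e.g. (100, 150): one column per pixel, one row per frame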

Since the deviceSettings.json config specifies the pixelCount, it’s even possible to crop the preview (for devices with pixelCount less than 100) and, given the presence of a pixelMap.txt, assume that patterns with a render2D function are for a 2D square matrix:

DAFTPUNK Jeff's sinusoidal waves 2D Breakout

I’ll share it here when it’s finished, but I’m blocked by a problem that will affect anyone who uses it. Most of my own patterns came out so dark as to be unrecognizable:

xorcery 2D_3D

After running a few experiments re-saving and re-exporting this pattern:

export var pixels = array(pixelCount);
export function render(index) {
  value = 1 - index/pixelCount;
  pixels[index] = value;
  rgb(value,value,value);
}

and examining the preview JPGs I found out why. It turns out that while the renderer is generating 8-bit color values of 255, 254, 253, ... the color values stored in the preview JPG within the pattern file are downscaled according to the values of the Brightness slider and the maxBrightness setting AT THE TIME THE PATTERN WAS SAVED.
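
For illustration, assuming the two settings simply multiply (the exact formula inside Pixelblaze may differ):

maxBrightness = 0.25   # maxBrightness setting at save time (25%)
slider = 0.50          # Brightness slider at save time (50%)
rendered = 255         # what render() actually produced
stored = round(rendered * maxBrightness * slider)
print(stored)          # 32 -- the stored preview pixel is ~12% of its true value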

I happen to think that’s an unfortunate implementation; the pattern is the pattern is the pattern, whereas the maxBrightness and Brightness are specific to a particular device and to transient local conditions such as ambient light. Taking the separation of concerns involved in the OSI layer model for networking as an analogy, the render() function generates pure, invariant application-layer data which is then adapted as needed by the presentation layer (Brightness slider) and physical layer (maxBrightness setting) before being deployed onto the hardware. The preview should be the same regardless of the current hardware.

My development PB is powered by USB and connected to a matrix sitting underneath my monitor, about 20 inches from my face; I don’t want to melt the traces so the maxBrightness is down at 25% and even then it’s blindingly bright with the Brightness slider higher than 50%.

If I want to generate accurate previews, I would need to first set all the brightness settings to 100%, then load, re-save, and re-export all the patterns, and finally restore the brightness settings. If I later make a change to a pattern and save it without remembering to maximize all the brightness settings, the preview becomes unrepresentative again.

So…could we have the preview JPGs NOT be scaled according to the brightness settings?

The preview bar at the top of the editor is also dimmed according to the brightness slider/settings; I’m ambivalent about the value of that because I’ve been fooled many times into thinking my pattern code wasn’t working right when it was just that the resulting values were too low for the current brightness settings. If there were a discrepancy between what I saw on the preview bar (actual RGB values) and on the actual hardware (brightness-adjusted RGB values), then I’d realize much sooner that the brightness settings were too low to be representative.

Given a choice between reality and a simulation of reality, I’d prefer reality.


I second the idea that scaling by brightness seems odd. It means data is lost, when normalization would likely make more sense, if not leaving the preview unfiltered by brightness (which I’d suspect would be close to normalized anyway).
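
Something like this could even be applied to the stored preview after the fact (a rough sketch; it assumes the dimming was uniform and can’t recover detail that was crushed to zero):

from PIL import Image

def normalize(img):
    # rescale so the brightest channel in the preview maps back to 255
    peak = max(high for low, high in img.getextrema())
    return img.point(lambda v: round(v * 255 / peak)) if peak else img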

The render2D() output looks great. I know @wizard was working on figuring out a 2D render, so hopefully it’ll look like this. With pixel map data normalized to some fixed scale, it should look good.

The pixel box render looks nice!

It’s an unfortunate artifact of where I tap into the pixels; even the auto-off timer will black them out. I have plans to address that too.

In the meantime as a workaround I switch to “no leds” type for generating previews so I can crank up brightness without blowing out my eyes or power supply :laughing:

Also BTW, I’ve tested about 1024 preview pixels. 2k was a bit too much, but 1k works nicely, and it will skip pixels so you get a sampling over the whole thing instead of just the first 1k.
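
Roughly this idea (an illustration of the sampling, not the actual firmware code):

pixelCount, maxPreview = 4096, 1024
# take every Nth pixel so the preview samples the whole strip instead of truncating
indices = [i * pixelCount // maxPreview for i in range(maxPreview)]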


@wizard, are you saying we could have more pixels in the preview? If you’re willing to accept the knock-on impact on the pattern library or editor UI code, it might be nice if the preview matched the pixelCount of the device. Plenty of off-the-shelf strings (144/m), matrices (8x32, 16x16), and rings (241, 256, Fibonacci) have more than 100 pixels.

@scruffynerf, I’m not using the pixel map at the moment (just transposing each pixel into a rectangular height x width grid) but if @wizard wants to tell me how to get the final map coordinates from the pixelmap.dat blob I could try to put the pixels in the right place for more complex layouts.

~-~-~-~-~

Anyway, if the previews will be fixed eventually then I might as well release it now. The forum won’t let me post .PY or .ZIP files, so here it is as a text blob:

animatePreview.py
import argparse, os, io, json, base64, struct
from PIL import Image

# ------------------------------------------------

# Here's where the magic happens.
def animatePreview(patternPreview, patternWidth, patternHeight, patternFrames, outputScale, borderColor, filenameBase, verbose):

    # Unpack the base64-encoded preview into an Image so we can process it as RGB pixels.
    with Image.open(io.BytesIO(base64.b64decode(patternPreview))) as imgPreview:

        # Calculate how big things need to be.
        if patternWidth == 0:
            patternWidth = imgPreview.width
        if patternFrames == 0:
            patternFrames = imgPreview.height
        outputWidth = patternWidth * (1 + outputScale) + 1
        outputHeight = patternHeight * (1 + outputScale) + 1

        # Debugging output
        if verbose:
            print("  imageWidth: ", imgPreview.width)
            print("  imageHeight: ", imgPreview.height)
            print("  patternWidth: ", patternWidth)
            print("  patternHeight: ", patternHeight)
            print("  patternFrames: ", patternFrames)
            print("  outputScale: ", outputScale)
            print("  outputWidth: ", outputWidth)
            print("  outputHeight: ", outputHeight)

        # Start pantographing pixels from the JPG into the animated PNG.
        animationFrames = []
        maxBrightness = 0    # track the brightest channel value across ALL frames
        for iterRow in range(patternFrames):
            # Create a new blank frame.
            animationFrame = Image.new('RGB', (outputWidth, outputHeight), (borderColor, borderColor, borderColor))

            for iterCol in range(patternWidth * patternHeight):
                r, g, b = imgPreview.getpixel((iterCol, iterRow))
                maxBrightness = max(maxBrightness, r, g, b)
                pixelX = 1 + (iterCol % patternWidth) * (1 + outputScale)
                pixelY = 1 + (iterCol // patternWidth) * (1 + outputScale)
                for hPixel in range(outputScale):
                    for vPixel in range(outputScale):
                        animationFrame.putpixel((pixelX + hPixel, pixelY + vPixel), (r, g, b))

            # save the frame.
            animationFrames.append(animationFrame)

        # save the output file.
        outputFilename = filenameBase + '.png'
        print("Saving", len(animationFrames), "frames to", outputFilename)
        animationFrames[0].save(outputFilename, save_all=True, append_images=animationFrames[1:], duration=40, loop=0)

        # Test for excessive dimming of preview image.
        if maxBrightness < 128:
            print("Warning: the brightest pixel in the preview image is less than 50%")
            print("  ...to get an accurate representation of the pattern, it is necessary ")
            print("  ...to set both the Brightness slider and maxBrightness (under Settings)")
            print("  ...to 100%, and then save and re-export the pattern.")

# ------------------------------------------------

# main
if __name__ == "__main__":

    # Parse command line.
    parser = argparse.ArgumentParser()
    parser.add_argument("patternFile", help="The pattern (EPE) to be animated")
    parser.add_argument("--width", type=int, default=0, help="The width of the pattern (if less than the preview size of 100)")
    parser.add_argument("--height", type=int, default=1, help="The height of the pattern if 2D; defaults to 1")
    parser.add_argument("--frames", type=int, default=0, help="The number of frames to render (if less than the preview size of 150)")
    parser.add_argument("--scale", type=int, default=5, help="How much to magnify each pixel of the preview; defaults to 5")
    parser.add_argument("--border", type=int, default=63, help="8-bit grayscale value for the border lines; defaults to 63")
    parser.add_argument("--verbose", action='store_true', help="Display debugging output")
    args = parser.parse_args()

    # Read the pattern archive (.EPE) file and pass the preview image to our converter.
    with open(args.patternFile, 'rb') as inputFile:

        # Debugging output
        if args.verbose:
            print("Extracting preview image from", args.patternFile)
        EPE = json.load(inputFile)

        # Check whether this ought to be a 2D render, but isn't.
        if "render2D(" in EPE['sources']['main']:
            if args.height == 1:
                print("Warning: this pattern contains a 2D renderer; if you want a 2D preview you need to specify the height.")

        # Create the preview animation.
        inFilename, inExtension = os.path.splitext(args.patternFile)
        animatePreview(EPE['preview'], args.width, args.height, args.frames, args.scale, args.border, inFilename, args.verbose)


I wrote it in Python because I also call it from inside the Python-based backup script I built using the Pixelblaze client libraries developed by @zranger1 and @Nick_W. Anyone on a Mac should already have Python (though you may need to pip install Pillow for the PIL import); anyone on a PC can use Python for Windows or run it in their favorite Linux distribution under WSL.

Syntax is as follows:

usage: animatePreview.py [-h] [--width WIDTH] [--height HEIGHT] [--frames FRAMES] [--scale SCALE]
                         [--border BORDER] [--verbose]
                         patternFile

positional arguments:
  patternFile      The pattern (EPE) to be animated

optional arguments:
  -h, --help       show this help message and exit
  --width WIDTH    The width of the pattern (if less than the preview size of 100)
  --height HEIGHT  The height of the pattern if 2D; defaults to 1
  --frames FRAMES  The number of frames to render (if less than the preview size of 150)
  --scale SCALE    How much to magnify each pixel of the preview; defaults to 5
  --border BORDER  8-bit grayscale value for the border lines; defaults to 63
  --verbose        Display debugging output

Some examples:

$ python3 animatePreview.py Cylon.epe
Saving 150 frames to Cylon.png

Cylon

There’s not much going on with the rightmost 36 pixels because the pattern file was exported from an 8x8 matrix. The animation also jumps at the end because the default preview length of 150 frames isn’t an integer multiple of the pattern length. That’s easily fixed:

$ python3 animatePreview.py Cylon.epe --width=64 --frames=136
Saving 136 frames to Cylon.png

Cylon

Or how about a 2D pattern?

$ python3 animatePreview.py --border=32 Breakout.epe
Warning: this pattern contains a 2D renderer; if you want a 2D preview you need to specify the height.
Saving 150 frames to Breakout.png

Breakout

Oops! Trying again with the --width and --height options to use the 2D renderer:

$ python3 animatePreview.py --border=32 --width=8 --height=8 Breakout.epe
Saving 150 frames to Breakout.png

Breakout

And if you like bigger pixels and black borders:

$ python3 animatePreview.py --scale=10 --border=0 --width=8 --height=8 cube\ fire\ 3D.epe
Saving 150 frames to cube fire 3D.png

cube fire 3D

So there it is. If you give it a try, leave any feedback here.


For V3, yes, up to 1k. It may depend on bandwidth / signal.

Generated preview jpgs are potentially much larger. I might need to increase compression, use fewer frames, or scale non-live previews below 1k to keep them to reasonable storage sizes.

The pixel map structure is pretty simple. It starts with a header:

typedef struct {
    uint32_t fileVersion;
    uint32_t dimensions;
    uint32_t dataSize;
} PixelMapFileHeader;

Then comes element data of dataSize bytes, as either 8-bit (fileVersion=1) or 16-bit (fileVersion=2) values, packed as dimensions elements per pixel. That’s all little-endian.
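
A minimal Python sketch of a reader for that layout (assuming the elements are fixed-point fractions normalized by 2^8 or 2^16, per the post below):

import struct

def readPixelMap(path):
    # header: fileVersion, dimensions, dataSize -- three little-endian uint32s
    with open(path, 'rb') as f:
        fileVersion, dimensions, dataSize = struct.unpack('<III', f.read(12))
        data = f.read(dataSize)
    if fileVersion == 1:    # 8-bit elements
        values, scale = struct.unpack('<%dB' % len(data), data), 2 ** 8
    elif fileVersion == 2:  # 16-bit little-endian elements
        values, scale = struct.unpack('<%dH' % (len(data) // 2), data), 2 ** 16
    else:
        raise ValueError('unexpected fileVersion %d' % fileVersion)
    coords = [v / scale for v in values]
    # group the flat list into one tuple of `dimensions` coordinates per pixel
    return [tuple(coords[i:i + dimensions]) for i in range(0, len(coords), dimensions)]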

Great, I’ll give it a try over the weekend.

(On pixelmap.dat - I figured this out a little while ago, but didn’t know if @wizard wanted us messing with it.)

Here’s Java for a general purpose fixed-to-float converter that will give you the proper normalized coordinates for both the 8-bit and 16-bit encodings if you need it. This is going to be even easier in Python – one of those times when Java’s lack of support for unsigned numbers really bites it in the rear.

// x is the input integer, e is the number of bits after the decimal point
float fixed_to_float(int x, int e) {
  return (float) (x / Math.pow(2, e));
}

// after you've read the header, the read loop looks something like this:
// (just reads the raw numbers - doesn't account for dimensionality)

int valOne = 16; // the number of bits you'd have to left shift to produce a value of 1.0,
                 // i.e., 8 for the 1-byte coords, 16 for the 2-byte coords.

byte data[] = new byte[2];
ByteBuffer buf = ByteBuffer.wrap(data);
buf.order(ByteOrder.LITTLE_ENDIAN);

while (mapFile.read(data) != -1) {
    float n = fixed_to_float(0xFFFF & buf.getShort(0), valOne);
    System.out.print(n + " ");
}

Thanks, @zranger1! That will save me a few hours of head-scratching…

After a long delay, I’ve finally gotten back to finishing this.

I rewrote the pattern renderer to place each dot according to the pixelmap, but when I tried it on a range of different layouts I found that the new previews in v3.24 (which are wonderful, by the way) are only available in the pattern editor (probably for either compatibility or space reasons) – the preview JPEGs stored inside the exported EPE are still only 100 pixels wide.

So for an 8x8 matrix, the preview JPEG contains the whole pattern plus some blank pixels:

cube fire 3D produces cube fire 3D

But for any configuration with more than 100 pixels there isn’t enough information in the preview JPEG to reconstruct the full pattern:

cube fire 3D produces cube fire 3D

Now, if I capture the websocket preview frames (which do contain the full pixelCount of the pattern) and assemble them into a facsimile of the preview JPEG, I can run them through the same process to produce an animation of any size and shape (1D or 2D):

16x16
cube fire 3D produces cube fire 3D

8x32
cube fire 3D produces cube fire 3D

32x32
cube fire 3D produces cube fire 3D

…which works very well, except that occasionally a websocket packet gets delayed or dropped, which causes the pattern to skip a frame (as you can see above in the 32x32 preview).
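
For anyone curious, the reassembly is simple once the frames are captured (a sketch; frames is assumed to be a list of equal-length bytes objects of packed RGB triples pulled off the websocket, with the capture code omitted):

from PIL import Image

def framesToStrip(frames):
    width = len(frames[0]) // 3                     # pixels per frame
    strip = Image.new('RGB', (width, len(frames)))  # same layout as the EPE preview: one row per frame
    for row, frame in enumerate(frames):
        strip.paste(Image.frombytes('RGB', (width, 1), frame), (0, row))
    return strip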

I’m still holding out hope for getting the RAW (not scaled for brightness) pixel values, but my new question for @wizard is whether there’s room for a larger preview JPEG inside the exported EPEs. It makes the previews look much nicer:

Lissajous curve tracer Line Dance 2D 2D Doom Fire v2.1

versus:

Lissajous curve tracer Line Dance 2D Doom Fire (v2.0) 2D

Looking at the size of the embedded JPEG for the “cube fire 3D” pattern as regenerated and saved on various size matrices, the image sizes scale more or less linearly:

JPEG quality    64px     256px     1024px
as shipped      8,558    8,558     8,558
50%             2,990    8,238     32,224
60%             3,281    9,176     36,016
70%             3,747    10,567    41,607
75%             4,035    11,441    45,085

and the bytecode (566 bytes) and source code (2,295 bytes) take up a further 2,861 bytes.

Increasing the preview size makes each pattern take up more space on the Pixelblaze, which will reduce the number of patterns that can be stored (and the larger the pixelCount, the greater the reduction). Extrapolating wildly from the numbers above, depending on the pixelCount and JPEG compression the Pixelblaze filesystem (1572K bytes) might fit the following number of patterns:

Quality    64px    256px    1024px
50%        258     135      42
60%        245     125      38
70%        228     112      33
75%        218     105      31

Or perhaps the quality factor could be scaled down as the pixelCount goes up to keep the image sizes reasonable: if you have 64 pixels, you get a JPEG quality of 75%; for 256 pixels, 50%; for 1024 pixels, maybe 10% or 25%.
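
Something like this hypothetical schedule, just to make the idea concrete:

def previewQuality(pixelCount):
    # hypothetical thresholds matching the suggestion above
    if pixelCount <= 64:
        return 75
    if pixelCount <= 256:
        return 50
    return 25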

Is the tradeoff worth it? It depends on how many pixels and how many patterns people have on their Pixelblazes.

Very cool!

I had played around with storing the full thing, and wanted to reconstruct the 2d/3d preview for the pattern list, like you have.
Right now space is tight, and a cause of some filesystem issues, so I scaled it down to 100px. It will also aggressively increase JPEG compression to try for 5K, going as low as 50% quality.

I played around with using the browser to compress to webm. Webm does a fantastic job of compressing small videos… when compressed outside of the browser. The browsers have a hard-coded minimum bitrate regardless of pixel count, which unfortunately makes the videos pretty big even for small pixel sizes. I have some ideas there, but haven’t had time to see if they would work.

Do you mean using a non-native encoder like webm-wasm?

If you want to keep the preview as an image there’s also webp-wasm-js or webp-js-encode.

No, I mean using the browser built-ins like MediaRecorder. webm-wasm would be too large to bundle, but would probably work.