Input needed: functions wishlist - Part 2

Actually, @zranger1 , it might be worth considering how much of the webgl API we can emulate/simulate…

https://xem.github.io/articles/webgl-guide.html

One step closer to shaders… Ok, 10 steps closer to shaders.

Maybe a checklist of what webgl does vs PB. Obviously, we can’t do it all, but I wonder if enough can be done (in code first, of course)

Also, Canvas API - Web APIs | MDN

How about an in-browser Git client so we can backup and version-control patterns? If you pull it down from a CDN it won’t take up any flash space…

1 Like

Hmm, add a setting for GitHub username and repo, and you’d be all set. I like that.

…and if I can go further outside the scope of “functions” to suggest some enhancements to the Pattern Library:

  • Filtering/searching for pattern names
  • Batch downloading of the entire pattern library as a .ZIP – else how can we keep up to date with other people's contributions? Could regenerate the ZIP after each pattern submission and then push it to a CDN like Cloudflare so downloading won't chew up your site bandwidth.
  • Detect the presence of render{1,2,3}D() functions and display badges for 1D/2D/3D/accelerometer/audio on each listing so we don't need to put it into the name (see the sketch after this list)
  • Allow the original submitter to update the pattern code; also update the modifiedDate so it shows up as a recent pattern again
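
To illustrate the badge idea, something like this would do the detection. This is purely a sketch assuming the library can see each pattern's exported source text; the function name and the exact sensor globals checked are illustrative, not an existing API:

import re

def badges_for(source):
    # scan a pattern's source text for render entry points and sensor globals
    badges = []
    if re.search(r"\bexport\s+function\s+render3D\s*\(", source):
        badges.append("3D")
    if re.search(r"\bexport\s+function\s+render2D\s*\(", source):
        badges.append("2D")
    if re.search(r"\bexport\s+function\s+render\s*\(", source):
        badges.append("1D")
    if "accelerometer" in source:
        badges.append("accel")
    if "frequencyData" in source or "energyAverage" in source:
        badges.append("audio")
    return badges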
1 Like

I believe the next gen of the pattern library is planned to be a git setup, including both .epe and source…

That will help greatly with most of your points.

With the new change that lets render3D work with 2D patterns (always using 0.5 for Z, I believe), automatic detection won't be accurate. I'm fine with labels saying 1D/2D/3D and audio (sensor?)

That would be great; at the moment it’s almost impossible to keep straight what’s in the library versus what’s on each PB. With today’s v2 firmware release bringing the language versions up to parity, now I’ve got to export all my patterns off all my 'blazes, convert them to text files and diff them before I’ll know where I need to remove v2 shims and add in arrays and matrices…

To that end, I’ve spent the afternoon knocking together a little applet to export all patterns off all PBs to a folder tree so I can check in a baseline to git; then after I do my language upgrades I can use the commit log to know what I need to re-upload to which PB (sadly, still a manual process).

If anybody wants to do the same, here’s the code:

#!/usr/bin/env python3

import requests, pathlib, json, base64, time, struct
from pixelblaze import *
from lzstring import LZString


if __name__ == "__main__":
    # create a PixelblazeEnumerator object, listen for a couple of seconds, then list the Pixelblazes we found.
    pbList = PixelblazeEnumerator()
    print("Listening for Pixelblazes...")
    time.sleep(3)

    # Make a top-level folder to hold the backups.
    pathlib.Path('./Backups').mkdir(parents=True, exist_ok=True)

    for pixelblazeIP in pbList.getPixelblazeList():

        # create a Pixelblaze object.
        print("Connecting to Pixelblaze @", pixelblazeIP)
        pb = Pixelblaze(pixelblazeIP)
        pb.stopSequencer() # so the patterns don't change while we're copying them
        hardwareConfig = pb.getHardwareConfig()
        pixelblazeName = hardwareConfig['name']
        print("  Connected to ", pixelblazeName)
        time.sleep(1)

        # Make a subfolder for each Pixelblaze.
        devicePath = './Backups/' + pixelblazeName
        pathlib.Path(devicePath).mkdir(parents=True, exist_ok=True)

        # Save the hardware characteristics in case we need to rebuild it at a later date.
        hardwareFilename = devicePath + '/deviceSettings.json'
        with open(hardwareFilename, 'w') as outfile:
            json.dump(hardwareConfig, outfile)

        # Make a subfolder for Patterns.
        patternPath = devicePath + '/Patterns'
        pathlib.Path(patternPath).mkdir(parents=True, exist_ok=True)

        # Fetch the patterns and save them.
        print("  Fetching patterns:")
        for patternID, patternName in pb.getPatternList().items():
            # Set the active pattern so we can query it.
            print("    Saving pattern '%s'" % patternName)
            pb.setActivePattern(patternName)
            pb.waitForEmptyQueue(1000)
            time.sleep(1)

            # save the pattern as a binary blob (.BIN)
            patternFilename = patternPath + '/' + patternName.replace("/","_")
            suffix = ''
            url = 'http://' + pixelblazeIP + '/p/' + patternID + suffix
            r = requests.get(url)
            binaryData = r.content
            with open(patternFilename + '.bin', 'wb') as outfile:
                outfile.write(binaryData)

            # also save the pattern as a portable JSON archive (.EPE)
            header_size = 36
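            # the 36-byte header is nine little-endian uint32s, including the
            # offsets and lengths of the name, preview JPEG, and source sections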
            offsets = struct.unpack('<9I', binaryData[:header_size])
            name_offset = offsets[1]
            name_length = offsets[2]
            jpeg_offset = offsets[3]
            jpeg_length = offsets[4]
            source_offset = offsets[7]
            source_length = offsets[8]
            epe = {'name': binaryData[name_offset:name_offset+name_length].decode('UTF-8'), 'id': patternID}
            epe['sources'] = json.loads(LZString.decompressFromUint8Array(binaryData[source_offset:source_offset+source_length]))
            epe['preview'] = base64.b64encode(binaryData[jpeg_offset:jpeg_offset+jpeg_length]).decode('UTF-8')

            with open(patternFilename + '.epe', 'w') as outfile:
                outfile.write(json.dumps(epe, indent=2))

            # also save the pattern as text (.JS)
            with open(patternFilename + '.js', 'w') as outfile:
                outfile.write(epe['sources']['main'])

            # save the control states, if any.
            if len(pb.getControls(patternName)) > 0:
                suffix = '.c'
                url = 'http://' + pixelblazeIP + '/p/' + patternID + suffix
                r = requests.get(url)
                with open(patternFilename + '.c', 'wb') as outfile:
                    outfile.write(r.content)
                time.sleep(1)

        pb.close()

    print("Complete!")
    exit()

You’ll just need to put it in a folder along with a copy of @zranger1’s pixelblaze-client class and @wizard’s lzstring class.

1 Like

On webgl: I’d completely ignore the vertex shader portion for now. It really complicates the pipeline, and geometry isn’t really that interesting given the low spatial resolution of most LED things. I’d save the full implementation for when we’ve got more CPU and RAM, possibly a dedicated GPU, and everybody’s using generally higher res displays.

The fragment shader however…

What you see on shadertoy is done almost exclusively in fragment shaders. They're enormously powerful. And Pixelblaze is already nearly identical in functionality. Even the limitations are similar. The only painful bit when porting would be dealing with the vector types, which is partly why I asked for them. (The other reason is performance – you can do vectors in Pixelblaze code, but it's exactly the kind of thing that performs marvelously better when done in the VM.)

If you find something that looks good on a low resolution display, are willing to deal with the vectors yourself, and can manage the (fairly 1:1) translation from GLSL C to Pixelblaze js, it’s already possible to port things from shadertoy to Pixelblaze without too much trouble.

2 Likes

I can add vector / matrix APIs. You’d call APIs like vec3_add(dest, v1, v2) or something like that. I could also add helper aliases for making fixed size arrays like v = vec3(1,2,3).
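
For a sense of that calling convention, here's a rough userland equivalent, sketched in Python to match the script above; the names simply mirror the proposal and aren't an existing API:

def vec3(x, y, z):
    # a fixed-size, flat 3-element array
    return [x, y, z]

def vec3_add(dest, v1, v2):
    # component-wise add into a preallocated destination, no allocation per call
    for i in range(3):
        dest[i] = v1[i] + v2[i]

v = vec3(1, 2, 3)
out = vec3(0, 0, 0)
vec3_add(out, v, vec3(10, 20, 30))   # out is now [11, 22, 33]

Done as built-ins, that loop disappears into the VM, which is where the performance win would come from.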

Extending the language operators themselves so that you can do somevec2 + anothervec2 and that sort of thing requires significant upgrades to the language / engine.

Would something like that be interesting?

3 Likes

Absolutely, see the other new thread on object property faking. If we had vectors, property objects, and classes done any more easily than the kinda-sorta way we can fake them now, that would be awesome.

1 Like

I wanted “objects” (not necessarily classes) when I was writing my own vector math to do coordinate transformations. Now that the transformations are baked into the language for both v2 and v3, I don’t need it any longer for that particular reason…but it would still be useful as syntactic sugar to keep related ‘things’ together in one place, like the coordinate positions and color components of objects that are being rendered. I’m working on a firefly simulator and it needs about a dozen separate arrays to handle the coordinates, velocities, lifecycle timing, flash cycle timing, and other attributes. Depending on how you implement address references in the VM, you might get a little speed increase from dereferencing the array pointer once to find an ‘object’ rather than repeating the lookup for each property access.
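
As a concrete illustration of the "fake object" approach (a sketch in Python for readability; the pattern-code version would use plain arrays and numeric index constants the same way, and all names here are made up):

X, Y, VX, VY, FLASH_TIMER, NUM_FIELDS = range(6)

NUM_FIREFLIES = 20
fireflies = [[0.0] * NUM_FIELDS for _ in range(NUM_FIREFLIES)]

def update(i, dt):
    f = fireflies[i]        # dereference the "object" once...
    f[X] += f[VX] * dt      # ...then access each "property" by offset
    f[Y] += f[VY] * dt
    f[FLASH_TIMER] -= dt

versus a dozen separate parallel arrays (xs[], ys[], vxs[], ...) that each need their own indexed lookup on every access.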

If you want a full JS implementation for ESPs, Moddable is the best I've seen, but their execution model is very different (the bytecode compiler resides on the desktop, though they do support downloading 'libraries' at runtime), so you'd probably need to do a fair bit of work to have a browser-based pattern compiler. Or you could offer them a boxful of PBs to help with the changes!

1 Like

@wizard, I’m good with any implementation you come up with. The performance advantage and reduction of complexity in pattern code are what I think matters most.

(BTW, there are a couple more functions that are common in shaders that somehow absented themselves from my brain when I was writing about this yesterday. They're generally useful in any graphics context, and would be good to have around in some form: the blending/interpolation functions smoothstep() and mix().)
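
For reference, the standard GLSL definitions are tiny and easy to carry in userland until built-ins exist (sketched here in Python; the same few lines translate directly into pattern code):

def mix(a, b, t):
    # linear interpolation: blend a toward b by weight t
    return a + (b - a) * t

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def smoothstep(edge0, edge1, x):
    # 0 below edge0, 1 above edge1, smooth Hermite (3t^2 - 2t^3) in between
    t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)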

2 Likes

Easing covers most of this, with smoothstep being a lerp with easing, and mix being a lerp with a weighting.

Same as with other bits, I'm in favor of doing it in userland code, deciding what is clean and useful, and then adding the best bits into the expression list.

I think things we can’t do in userland, like reading the pixel map, are far more important. Adding what’s missing from JS functionality is another good example.

Nothing stops adding something we have a userland solution/library/example for … Of course. But things we can't do? I want those more.

1 Like

@pixie
Had a chance to play around with Moddable on the esp32. Some of their tools are pretty neat. I was curious how its JS performed doing the kinds of things you'd need to do on a PB. It's slower in places, and faster in others. Arrays are very slow (curiously so), and things like Math.sin are predictably slow (tuned for accuracy & compatibility over performance). I could see it being an interesting platform for doing JS in embedded/IoT.

The core JS VM they use was open sourced around the same time I started working on Pixelblaze, but it was completely off my radar despite searching for this kind of thing specifically.

I’ll keep playing with it and poking around.

3 Likes

I played with Moddable a bit too. I like it. It’s an impressive tool for general purpose, non-real-time IoT development. You’d have to tear a lot of it down to get it to blink lights as fast as Pixelblaze though.

I just had a look at the array code in their vm, starting here

It looks like they do the normal modern high level language thing – arrays aren’t flat chunks of memory, they’re linked lists of some sort. Means a ton of pointer dereferencing for every access. Haven’t looked at the memory management stuff yet, but it looks like they’re using a heap manager, which implies garbage collection, which means fast real time scheduling is more-or-less out the window.

There’s also a lot of code in the array system for handling multiple data types, which is useful for general purpose computing, but contributes to the general overhead.

1 Like

I mentioned Fade in another post, and the array handling was one of the few bits where I wondered how he implemented it… It's not JS though; it looks like he invented his own language.

I’ve actually been trying to figure out all of the possible ways to write an LED pattern (let’s limit it to ESP32/8266 for now)

JS: Pixelblaze, Moddable and so on… (PB is far and away the winner)

C: FastLED, and related options (WLED, etc etc)

Python: MicroPython or CircuitPython - libraries to do ws2812 and so on. Even FastLED-compatible functions in some cases (and not in others)

Homebrew: I'd put Fade in that realm… Not hugely interested, as one of the advantages of language compatibility is that you leverage code from others. So while someone could add LED control to DOS running on an ESP32 (yup, it exists), I'm not hugely interested.

If you outsource the logic etc., so that a Pi/PC does the rendering and just sends the data to an ESP controller via E1.31 or the like, then that's another category, and beyond the scope here. Processing would fall into that, for example.

Anything besides the above any of you have seen?

The ESP Lua implementation has library support for LEDs as well.

Lately I’ve been thinking about LED controllers as small, specialized GPUs rather than as general purpose computers. I see people here using more and more LEDs in their projects. And higher density products, like those nifty 300+ pixels/meter CoB strips are arriving every week.

Moddable, MicroPython, etc, are headed down the general purpose PC path, which is great if you’re doing general computing things, and dealing with relatively small numbers of lights.

But if a device’s main purpose is to drive LED displays, it just makes sense to follow the development path of the GPU, increase the number of cores/threads/output channels, and run in parallel as much as possible.

1 Like

Thanks, I didn’t have Lua on my list, but it absolutely fits.

I agree with your sentiment about scaling.

My working analogy is this right now:

Artists can work in any medium, but part of the challenge is to work in a given limited medium.
Using watercolors is different from using oils which are both different from using color pencils. Similar techniques/notions apply to all but each has unique qualities and looks.

In the "display" world, you can use a high resolution display, with Processing, GLSL, or other languages, and create "art", but that's just one medium. Using low-res pixels like LEDs, you can create things similar to but different from displays… You can add a 3rd dimension, as one option; you can mix mediums like sculpture or form… Put them on walls, or use them behind something and cast light reflectively… LEDs aren't monitor pixels, and throwing up a massive panel is far harder than driving the equivalent number of pixels in a monitor, and it doesn't scale well.

We need to treat it like the different medium it is.
Just like you can use some techniques/ideas no matter what "paint" you use (oil/watercolor/spray paint/whatever), there are techniques you can apply from one pixel all the way up to super high resolution video displays with 33 million pixels (8K). But there are things that will look amazing at 8K that fail at 256 pixels (16x16) or 64 (8x8), and the task is always to find the right look and feel for the medium you are using.

Thinking of a PB as a GPU is useful, and the analogy works because GPUs also manipulate data, in this case to decide what pixels to light… But at the end of the day, they are different mediums.

This is why I find tools like Tixy so neat: they simulate a lower res environment (similar to LEDs in some ways but also different). It’s like using colored pencils to draw a watercolor. Similar but in the end, different.

I just bought a TV backlight as a gift for a relative. It has a 1080p fisheye camera that mounts above the TV looking at the screen, and a box that takes that image and figures out the lighting to match the screen using a ws2812 strip on the back of the TV, so that your movie/show is enhanced by a glow that matches what's on the screen, extending the high-res display with an offscreen background. In this analogy, that's a mixed-medium art project. High-density pixels in the center, low-density pixels surrounding it. People flash ESP32s for this purpose (usually using a Pi or PC to generate the data and sending it to the ESP32 to drive the LEDs), but if someone asked "is that doable with a Pixelblaze?", I'd say no, it's not, and shouldn't be.

If you have these set up on an esp32, do y’all want to do some basic performance tests, compared to PB, and let me know? Stuff like loops, basic math, some trig, array work?

I'm all for using existing more advanced languages, but have to be mindful of how they perform when trying to render tens of thousands of pixels a second. It's completely possible there's a faster VM/runtime, or something close, with a more full-featured language.
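
For the MicroPython/CircuitPython side, a rough sketch like this would cover loops, basic math, trig, and array work. The loop counts and labels are arbitrary, and the timing helpers fall back to desktop Python when MicroPython's ticks_us() isn't available:

import math
try:
    from time import ticks_us, ticks_diff      # MicroPython
    now, elapsed_us = ticks_us, ticks_diff
except ImportError:
    from time import monotonic                 # desktop fallback
    now = lambda: int(monotonic() * 1000000)
    elapsed_us = lambda end, start: end - start

def bench(label, fn):
    start = now()
    fn()
    print(label, elapsed_us(now(), start), "us")

N = 10000
arr = [0.0] * 256

def loop_math():
    acc = 0.0
    for i in range(N):
        acc += i * 0.001 + 1.0

def trig():
    acc = 0.0
    for i in range(N):
        acc += math.sin(i * 0.01)

def array_work():
    for i in range(N):
        arr[i & 255] = arr[(i + 1) & 255] * 0.5 + 0.1

bench("loop + math", loop_math)
bench("trig", trig)
bench("array", array_work)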

1 Like

I haven't set these all up (yet), but I'm looking at options for the book(s), with the idea that different languages might be different books, teaching the same skills.

Beware of the slippery slope. There is no perfect language – knowing more computer languages just gives you more to complain about! (@scruffynerf, this is in reference to “existing more advanced languages” above, not to your book plan. I agree totally that languages all have things to teach, and people might find core concepts easier to learn in a particular language that appeals to them.)

I haven’t (yet) tried driving LEDs with eLua and microPython, but I do know for certain that they can’t escape the hardware platform’s limits either – dereferencing variables in the language slows things down, there’s not a ton of memory left over for user processes after the vm and its resources have loaded, and any need for fast hardware I/O means you have to drop to C or assembly and turn interrupts off, cutting into the language vm’s time slice even more.

I did see one clever thing in MicroPython for ESP32 though. Its neopixel driver uses the RMT (infrared remote control transmitter) peripheral to drive the LEDs. The transmitting code is here, setup code here.

I’ve not seen this trick before, so no idea if this is faster than what you’re doing now, but it looks like they’re able to buffer quite a bit of data, which might let you get back to rendering faster.
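
For anyone who wants to try it out and compare, the user-facing side of that driver is just MicroPython's stock neopixel module; a minimal sketch (the pin number and pixel count below are arbitrary placeholders):

from machine import Pin
from neopixel import NeoPixel

NUM_PIXELS = 60
np = NeoPixel(Pin(18), NUM_PIXELS)   # GPIO18 driving 60 ws2812 pixels

np.fill((0, 0, 0))
np[0] = (255, 32, 0)                 # set one pixel as an (r, g, b) tuple
np.write()                           # push the whole buffer out to the strip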

1 Like