Input needed: functions wishlist - Part 2

On webgl: I’d completely ignore the vertex shader portion for now. It really complicates the pipeline, and geometry isn’t really that interesting given the low spatial resolution of most LED things. I’d save the full implementation for when we’ve got more CPU and RAM, possibly a dedicated GPU, and everybody’s using generally higher res displays.

The fragment shader however…

What you see on shadertoy is done almost exclusively in fragment shaders. They’re enormously powerful. And Pixelblaze is already nearly identical in functionality. Even the limitations are similar. The only painful bit when porting would be dealing with the vector types, which is partly why I asked for them. (The other reason is performance – you can do vectors in Pixelblaze code, but it’s exactly the kind of thing that performs marvelously better when done in the VM.)

If you find something that looks good on a low resolution display, are willing to deal with the vectors yourself, and can manage the (fairly 1:1) translation from GLSL C to Pixelblaze js, it’s already possible to port things from shadertoy to Pixelblaze without too much trouble.
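
As a rough illustration of what that hand-translation looks like – this is a made-up sketch, not a port of any particular shader, with the vec2 math unrolled into scalars:

```
export function beforeRender(delta) {
  t1 = time(0.05)  // plays the role of iTime, but wraps 0..1
}

// render2D(index, x, y) stands in for mainImage(fragCoord):
// x and y arrive already mapped to 0..1 world coordinates
export function render2D(index, x, y) {
  var dx = x - 0.5                     // center, like uv = fragCoord/iResolution - 0.5
  var dy = y - 0.5
  var d = sqrt(dx * dx + dy * dy)      // length(uv), unrolled by hand
  var v = 0.5 + 0.5 * sin((d * 3 - t1) * PI * 2)  // the usual sin() ripple
  hsv(t1 + d, 1, v * v)                // hue and value instead of fragColor
}
```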

2 Likes

I can add vector / matrix APIs. You’d call APIs like vec3_add(dest, v1, v2) or something like that. I could also add helper aliases for making fixed size arrays like v = vec3(1,2,3).
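
Until then, a userland stand-in with roughly that shape already works – just a sketch, with plain length-3 arrays playing the role of a vec3:

```
// A sketch: arrays of length 3 stand in for a vec3, and dest is written
// in place so nothing is allocated per frame. Call vec3() at setup time,
// not per pixel.
function vec3(x, y, z) {
  var v = array(3)
  v[0] = x
  v[1] = y
  v[2] = z
  return v
}

function vec3_add(dest, v1, v2) {
  dest[0] = v1[0] + v2[0]
  dest[1] = v1[1] + v2[1]
  dest[2] = v1[2] + v2[2]
}

var a = vec3(1, 2, 3)
var b = vec3(0.5, 0.5, 0.5)
var sum = array(3)
// vec3_add(sum, a, b) leaves 1.5, 2.5, 3.5 in sum
```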

Extending the language operators themselves so that you can do somevec2 + anothervec2 and that sort of thing requires significant upgrades to the language / engine.

Would something like that be interesting?

3 Likes

Absolutely, see the other new thread on object property faking. If we had vectors, property objects, and classes, done any more easily than the kinda-sorta faking we can manage now, that would be awesome.

1 Like

I wanted “objects” (not necessarily classes) when I was writing my own vector math to do coordinate transformations. Now that the transformations are baked into the language for both v2 and v3, I don’t need it any longer for that particular reason…but it would still be useful as syntactic sugar to keep related ‘things’ together in one place, like the coordinate positions and color components of objects that are being rendered. I’m working on a firefly simulator and it needs about a dozen separate arrays to handle the coordinates, velocities, lifecycle timing, flash cycle timing, and other attributes. Depending on how you implement address references in the VM, you might get a little speed increase from dereferencing the array pointer once to find an ‘object’ rather than repeating the lookup for each property access.
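
A rough sketch of the firefly layout I mean, assuming arrays can hold references to other arrays (the field offsets here are invented):

```
numFlies = 20
// invented field offsets
F_X = 0
F_Y = 1
F_VX = 2
F_VY = 3
F_PHASE = 4
F_FIELDS = 5

flies = array(numFlies)
for (i = 0; i < numFlies; i++) {
  var f = array(F_FIELDS)
  f[F_X] = random(1)
  f[F_Y] = random(1)
  f[F_VX] = random(0.2) - 0.1
  f[F_VY] = random(0.2) - 0.1
  flies[i] = f
}

export function beforeRender(delta) {
  for (i = 0; i < numFlies; i++) {
    var f = flies[i]                    // find the 'object' once...
    f[F_X] += f[F_VX] * delta / 1000    // ...then hit its fields directly
    f[F_Y] += f[F_VY] * delta / 1000
    f[F_PHASE] = (f[F_PHASE] + delta / 1000) % 1
  }
}
```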

If you want a full JS implementation for ESPs, Moddable is the best I’ve seen, but their execution model is very different (the bytecode compiler resides on the desktop, though they do support downloading ‘libraries’ at runtime), so you’d probably need to do a fair bit of work to have a browser-based pattern compiler. Or you could offer them a boxful of PBs to help with the changes!

1 Like

@wizard, I’m good with any implementation you come up with. The performance advantage and reduction of complexity in pattern code are what I think matters most.

(BTW, there are a couple more functions that are common in shaders that somehow absented themselves from my brain when I was writing about this yesterday. They’re generally useful in any graphics context and would be good to have around in some form – the blending/interpolation functions smoothstep() and mix().)

2 Likes

Easing covers most of this, with smoothstep being a lerp with easing, and mix being a lerp with a weighting.
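
In userland they’re tiny – here’s a rough sketch (names follow GLSL; clamp done with min/max):

```
// mix(a, b, t): plain lerp, t = 0 gives a, t = 1 gives b
function mix(a, b, t) {
  return a + (b - a) * t
}

// smoothstep(e0, e1, x): 0 below e0, 1 above e1, eased with 3t^2 - 2t^3
function smoothstep(e0, e1, x) {
  var t = min(1, max(0, (x - e0) / (e1 - e0)))
  return t * t * (3 - 2 * t)
}
```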

Same as with other bits, I’m in favor of doing it in userland code, deciding what is clean and useful, and then adding the best bits into the expression list.

I think things we can’t do in userland, like reading the pixel map, are far more important. Adding what’s missing from JS functionality is another good example.

Nothing stops us from adding something we already have a userland solution/library/example for, of course. But things we can’t do? I want those more.

1 Like

@pixie
Had a chance to play around with Moddable on the ESP32. Some of their tools are pretty neat. I was curious how its JS performed doing the kinds of things you’d need to do on a PB. It’s slower in places, and faster in others. Arrays are very slow (curiously so), and things like Math.sin are predictably slow (tuned for accuracy & compatibility over performance). I could see it being an interesting platform for doing JS in embedded/IoT.

The core JS VM they use was open sourced around the same time I started working on Pixelblaze, but it was completely off my radar despite searching for this kind of thing specifically.

I’ll keep playing with it and poking around.

3 Likes

I played with Moddable a bit too. I like it. It’s an impressive tool for general purpose, non-real-time IoT development. You’d have to tear a lot of it down to get it to blink lights as fast as Pixelblaze though.

I just had a look at the array code in their vm, starting here

It looks like they do the normal modern high level language thing – arrays aren’t flat chunks of memory, they’re linked lists of some sort. Means a ton of pointer dereferencing for every access. Haven’t looked at the memory management stuff yet, but it looks like they’re using a heap manager, which implies garbage collection, which means fast real time scheduling is more-or-less out the window.

There’s also a lot of code in the array system for handling multiple data types, which is useful for general purpose computing, but contributes to the general overhead.

1 Like

I mentioned Fade in another post, and the array handling was one of the few bits where I wondered how he implemented it… It’s not JS though; it looks like he invented his own language.

I’ve actually been trying to figure out all of the possible ways to write an LED pattern (let’s limit it to ESP32/8266 for now)

JS: Pixelblaze, Moddable and so on… (PB is far and away the winner)

C: FastLED, and related options (WLED, etc etc)

Python: MicroPython or CircuitPython - libraries to do ws2812 and so on. Even FastLED-compatible functions in some cases (and not in others)

Homebrew: I’d put Fade in that realm… Not hugely interested, as one of the advantages of language compatibility is that you leverage code from others. So while someone could add LED control to DOS running on an ESP32 (yup, it exists), I’m not hugely interested.

If you outsource the logic/etc., so that a Pi/PC does the work and just sends the signals to an ESP controller via E1.31/etc., then that’s another category, and beyond the scope here. Processing would fall into that, for example.

Anything besides the above any of you have seen?

The ESP Lua implementation has library support for LEDs as well.

Lately I’ve been thinking about LED controllers as small, specialized GPUs rather than as general purpose computers. I see people here using more and more LEDs in their projects. And higher-density products, like those nifty 300+ pixels/meter CoB strips, are arriving every week.

Moddable, MicroPython, etc, are headed down the general purpose PC path, which is great if you’re doing general computing things, and dealing with relatively small numbers of lights.

But if a device’s main purpose is to drive LED displays, it just makes sense to follow the development path of the GPU, increase the number of cores/threads/output channels, and run in parallel as much as possible.

1 Like

Thanks, I didn’t have Lua on my list, but it absolutely fits.

I agree with your sentiment about scaling.

My working analogy is this right now:

Artists can work in any medium, but part of the challenge is to work in a given limited medium.
Using watercolors is different from using oils, and both are different from using colored pencils. Similar techniques/notions apply to all, but each has unique qualities and looks.

In the “display” world, you can use a high-resolution display with Processing, GLSL, or other languages and create “art”, but that’s just one medium. Using low-res pixels like LEDs, you can create things similar to, but different from, displays… You can add a 3rd dimension, as one option, or mix mediums like sculpture or form… Put them on walls, or use them behind something and cast light reflectively… LEDs aren’t monitor pixels, and throwing up a massive panel is far harder than driving the equivalent number of pixels in a monitor, nor does it scale as well.

We need to treat it like the different medium it is.
Just like you can use some techniques/ideas no matter what “paint” you use (oil/watercolor/spray paint/whatever), there are techniques you can apply from one pixel all the way up to super-high-resolution video displays with 33 million pixels (8K). But there are things that will look amazing at 8K that fail at 256 pixels (16x16) or 64 (8x8), and the task is always to find the right look and feel for the medium you are using.

Thinking of a PB as a GPU is useful, and the analogy works because GPUs also manipulate data, in this case to decide what pixels to light… But at the end of the day, they are different mediums.

This is why I find tools like Tixy so neat: they simulate a lower res environment (similar to LEDs in some ways but also different). It’s like using colored pencils to draw a watercolor. Similar but in the end, different.

I just bought a TV backlight as a gift for a relative. It has a 1080p fisheye camera that mounts above the TV looking at the screen, and a box that takes that image and figures out the lighting to match the screen using a ws2812 strip on the back of the TV, so that your movie/show is enhanced by a glow that matches what’s on screen, extending the high-res display with an offscreen background. In this analogy, that’s a mixed-medium art project. High-density pixels in the center, low-density pixels surrounding it. People flash ESP32s for this purpose (usually using a Pi or PC to generate the data and sending it to the ESP32 to drive the LEDs), but if someone said “is that doable with a Pixelblaze?”, I’d say no, it’s not, and it shouldn’t be.

If you have these set up on an ESP32, do y’all want to do some basic performance tests compared to PB, and let me know? Stuff like loops, basic math, some trig, array work?

I’m all for using existing, more advanced languages, but I have to be mindful of how they perform when trying to render tens of thousands of pixels a second. It’s completely possible there’s a faster VM/runtime, or something close, with a more full-featured language.
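
To make the comparison concrete, something like this is what I’d run on each VM – the workload and numbers are arbitrary, as long as they’re identical everywhere:

```
// Same fixed workload per frame on each VM: loops, basic math, trig,
// array reads/writes. Compare FPS, or time the loop with delta.
var N = 500
var buf = array(N)

export function beforeRender(delta) {
  acc = 0
  for (i = 0; i < N; i++) {
    buf[i] = 0.5 + 0.5 * sin(i / N * PI * 2)  // trig + basic math
    acc += buf[i] * buf[i]                    // array read + multiply-add
  }
}

export function render(index) {
  hsv(acc % 1, 1, 1)  // use the result so the loop visibly does something
}
```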

1 Like

I haven’t set these all up (yet), but I’m looking at options for the book(s), with the idea that different languages might be different books, teaching the same skills.

Beware of the slippery slope. There is no perfect language – knowing more computer languages just gives you more to complain about! (@scruffynerf, this is in reference to “existing more advanced languages” above, not to your book plan. I agree totally that languages all have things to teach, and people might find core concepts easier to learn in a particular language that appeals to them.)

I haven’t (yet) tried driving LEDs with eLua or MicroPython, but I do know for certain that they can’t escape the hardware platform’s limits either – dereferencing variables in the language slows things down, there’s not a ton of memory left over for user processes after the VM and its resources have loaded, and any need for fast hardware I/O means you have to drop to C or assembly and turn interrupts off, cutting into the language VM’s time slice even more.

I did see one clever thing in MicroPython for ESP32 though. Its neopixel driver uses the RMT (infrared remote control transmitter) to drive the LEDs. The transmitting code is here, setup code here.

I’ve not seen this trick before, so no idea if this is faster than what you’re doing now, but it looks like they’re able to buffer quite a bit of data, which might let you get back to rendering faster.

1 Like

Actually, I just noticed CircuitPython only works with the ESP32-S2 (while MicroPython does have an ESP32 port).

Which leads me to ask the question of @wizard:

Any plans to do an ESP32-S2 version of PB?

Specs of the S2 vs the ESP8266 and ESP32:

Specs seem favorable if you aren’t using dual core anyway. (I think the original esp32 and Pico are still far and away the best chips)

At the risk of reviving a moribund thread, here’s a new direction I hadn’t seen before: WebGL compiled to WebAssembly running on an ESP32 interpreter.

Most of the projects I’ve seen using WebAssembly on ESP32 (like this and that) do the compilation from a human-friendly source language (of which there are many) into WASM on a desktop device, but it looks like AssemblyScript might support in-browser compilation to WASM, which is closer to PB’s architecture.

The WebAssembly interpreter has a page of benchmarks, but without running the same test suite on an ESP32 using PBscript and compiled C, we don’t have any basis for comparison.

This project instantiates the WebAssembly interpreter and runs a WASM app to generate a simple FastLED-style 1D pattern on a 16 MHz Cortex M0; a completely unscientific comparison, assuming that instruction timings for the MCUs are roughly similar and ignoring the effects of Neopixel protocol timing, would suggest that a 240 MHz ESP32 could be up to 15 times faster:

device      clock     30 LEDs           144 LEDs
Cortex M0   16 MHz    110 (actual)      23.5 (actual)
ESP32       240 MHz   1650 (estimated)  352.5 (estimated)

More food for thought…

Ah, neat, BUT… it’s porting Three.js, and while it’s talking WebGL, that’s in a browser, so it’s using the browser to render, not actually doing WebGL itself.

What are your favorite library functions that you’ve written in Pixelblaze that you use across several patterns that you’d like to see included in the core API and sped up?

One idea that keeps popping into my head is how handy it would be to be able to include() (or hijack require()). The ‘library’ code I’m interested in including is more to do with WebSockets than performance gains, so building it into the core doesn’t make sense. In theory, simply inlining the code should work in most cases.
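
For example, a hypothetical desktop-side preprocessor could do the inlining before you paste the pattern in – the // #include marker, filenames, and layout below are made up for illustration:

```
// Hypothetical desktop-side preprocessor (plain Node.js). Expands lines like
//   // #include "websocketHelpers.js"
// by pasting that file's contents in place, producing one flat pattern to
// paste into the editor.
const fs = require('fs')
const path = require('path')

function inlineIncludes(srcPath) {
  const dir = path.dirname(srcPath)
  return fs.readFileSync(srcPath, 'utf8')
    .split('\n')
    .map(line => {
      const m = line.match(/^\s*\/\/\s*#include\s+"(.+)"\s*$/)
      return m ? fs.readFileSync(path.join(dir, m[1]), 'utf8') : line
    })
    .join('\n')
}

// usage: node inline.js myPattern.js > myPattern.inlined.js
process.stdout.write(inlineIncludes(process.argv[2]))
```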

Ah…I see that my brain got ahead of my fingers.

I wasn’t intending to suggest that there was already a ready-made WebGL shader for ESP32; I was trying to say that:

  • there is a way to compile WebGL to WebAssembly;
  • WebAssembly is designed to be a fairly fast virtual machine, and
  • there’s a WebAssembly interpreter for ESP32.

It’s true that the end goal of the Lume project involves feeding WebGL-compiled-to-WASM back to a browser, but their WebGL-to-WASM compiler is standalone. If their library of WebGL bindings were replaced with a different set of endpoints, then calls to WebGL functions could be executed by an interpreter instead of a browser’s GPU.

I mentioned this because WebAssembly has potential as a candidate for “faster virtual machine”, and the widespread-and-increasing support of WebAssembly as a compilation target gives a lot of future scope for writing patterns in ANY language – today PBscript or JS, tomorrow ???.

1 Like

It’s worth testing for sure. Compile a few simple JS snippets to run under WASM, and compare them to the PB VM.

The GLAS project might have me confused, but I’ll try to share my understanding of it.

Three.js relies on WebGL APIs, and GLAS looks like a port of Three.js to TypeScript, then compiled to WASM, but it would still need an underlying WebGL API. That makes the Three.js interface more portable (requiring WASM instead of JS), but it doesn’t seem to help on the WebGL side. I don’t think it compiles WebGL to WASM. Put another way, I don’t think it implements any of the lower-level building blocks that make up WebGL; rather, it uses them to present a higher-level API compatible with Three.js for consumption in a WASM environment.

Still, WASM itself is interesting as a potential VM on the ESP32. Being able to write LEDs in ANY language that can compile to WASM would be pretty sweet.

2 Likes