I’m going to build a new 60x60 matrix using pre-made AliExpress strips to avoid the 2+ weeks of sheer pain it was to make my own by hand back in 2018:
I have around 10,000 lines of code I’ve written for my ESP32 2D library:
So, I was considering using a PB plus expander board, but I’m not really interested in rewriting dozens of 2D demos from C++ to JavaScript.
I realize the board comes with 2 or 3 usable 2D demos, but is there a reasonable collection of extra 2D demos, so that I might be OK just running those?
I should add that I can see there are some patterns on https://electromage.com/patterns
but is there a good way to get all the 2D patterns at once, as one download and one upload to the board, or do I have to find them one by one, download them one by one, and then upload them one by one to my PB?
Thanks for that update.
For fonts, there may not be a need to re-do all the work Adafruit has already done with Adafruit::GFX and the many fonts that exist:
Have you considered importing them, or part of them? Sure, they are bitmap fonts in pre-computed sizes, but honestly that’s typically all one needs.
And if you need to zoom on a font for a fun effect, you can import multiple sizes of a font, and switch between them:
I guess my main point is we have a lot of 2D code in the Arduino world, and I’ve actually written a lot of it myself, including multi-API support, glue libs, and plenty of demo code.
Is there any way to leverage any of it in PB, or, because of the JavaScript interface, is it pretty much like starting over from scratch, even though, ironically, it’s the exact same ESP32 and NeoPixel matrix that my thousands of lines of 2D code are already running on?
Cc @wizard
On the differences between Pixelblaze and FastLED-related firmware, there’s a lot to consider:
The key thing is that Pixelblaze compiles its JavaScript to a specialized bytecode in real time and runs your patterns in its internal bytecode engine. This has a couple of effects:
Development is wonderfully fast. Pixelblaze is the best tool I’ve ever had for thinking about graphics. You get instant feedback without the compile/upload cycle constantly eating time.
Since it’s not native code, it doesn’t generate frames as fast as compiled C++. This becomes noticeable at higher pixel counts. IMO, the practical per-Pixelblaze limit for fast moving patterns is 800-1000 LEDs. (For a 60x60, if you want to keep the frame rates reasonably high, I’d recommend using 3 or 4 Pixelblazes.)
It’s not likely that you’ll be able to reuse much frame buffer-based Arduino code. The Pixelblaze’s architecture is more along the lines of “functional” graphics. It doesn’t expose a system frame buffer - pixel colors are calculated at frame time as a function of pixel position, time, analog inputs, or sound, or whatever else you provide it. It is very much like writing shaders for a single core GPU.
The huge benefit of this architecture is that you can write 2D and 3D patterns that work regardless of the physical geometry of your LEDs. The downside is that the things that people typically do w/frame buffers are sometimes awkward to implement, and often don’t work as well as more “shader-ish” methods.
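To make that concrete, here’s roughly what a trivial “shader-style” 2D pattern looks like. This is just a toy sketch using the standard beforeRender/render2D hooks, not anything from the built-in pattern set:

```
// A toy "shader-style" pattern: color is a pure function of (x, y) and time.
// There is no frame buffer anywhere - render2D() is called once per pixel per
// frame with that pixel's mapped 2D coordinates.
export function beforeRender(delta) {
  t1 = time(0.05)            // shared 0..1 sawtooth, advances once per frame
}

export function render2D(index, x, y) {
  h = x + y + t1             // hue sweeps diagonally and drifts over time
  v = wave(x - t1)           // brightness ripples across the matrix
  hsv(h, 1, v * v)
}
```

Because the pattern only ever sees normalized (x, y) from the pixel map, the same code runs unchanged on a strip, a matrix, or an irregular 3D sculpture.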
Sure, it would be trivial to import a bitmap font, but I wanted more fluid arbitrarily transformable (e.g. I have a slider for italicization) fractional-pixel-resolution antialiased characters: https://youtu.be/GWxssA3JNBc … which would be even cooler on a higher resolution matrix.
Yeah, it’s like starting over. I bet an LLM could translate it for you. Those newfangled things seem rather capable. On the other hand, working in a higher-level language lets you entirely rethink how you make patterns. Most of the things I’ve done with PB just took an evening or two. Pixelblaze could really use an in-device function library and variable namespaces, but copypasta always works.
The scroller in the above video is according to cloc 165 lines of PB JS code, including the glyph and string data, antialiasing line drawing, and HSVtoRGB function. Oh, plus the 25 lines of Ruby that extract the glyphs from the Brutalita JSON.
Oh, so I bought a 60x60 net array (well, 10 nets of 6x10 put together).
I honestly thought 3600 would not be “that many” pixels, I mean it’s barely bigger than a few sprites displayed on my Amstrad CPC with Z80 8bit CPU 40 years ago.
Are you saying the JS layer adds so much overhead that, despite an ESP32 being many times more capable than the Z80 I just mentioned, I’ll notice slowdowns?
Or were you talking about NeoPixel speeds that would be slow for a 4k-ish single line? If you meant the latter, I was definitely going to break that up into 8 or 16 substrings with an expander board. Back when I built my 4096-pixel array, now 6-7 years ago, I had 16 strings of 256 and got over 100fps.
Yves Bazin even managed to drive 22 output lines with I2S and then 5 outputs per line via shift registers, for a massive ~100 lines of parallel output on a single ESP32 (!) https://old.reddit.com/r/esp32/comments/bkyeq0/20000_ws2812b_pushed_at_130fps_with_esp32_and/
(From the last message on that thread:
“You can drive 20 shift registers with that version. Hence 100 data lines. There is a new version coming soon that will allow you to drive 8 virtual pins per esp32 pin over 15 pins hence 120 lines.”)
So basically an ESP32 can drive lots of lines if needed, and even PB should support plenty with multiple expander boards on a single chip.
It’s always useful to estimate one of the two speed limits by using the spec’d “48,000 points per second” (which applies to the relatively simple patterns the board ships with).
So, regardless of data transmission speed, 6,000 pixels will be limited to 8 FPS max before you start splitting the workload with parallel processing via sync mode on multiple PBs.
And certainly, as others have hinted, you’re going to get much less than 48KPPS when you port over procedural, frame buffer style code instead of rewriting it as a functional shader-like computation. You may also run out of memory or stack.
This isn’t an ESP32 limitation, it’s more what zranger was saying about the bytecode runtime. WLED on an ESP32 will be much faster computing those 6000 pixels, but then you give up the realtime joy and creative unlock of PB’s instant keydown compilation.
I’ve had good results with populating a framebuffer array in beforeRender() and outputting the results in a one-line render(), but it would be great if this final dump-array-to-output step could be optimized into a call to something like renderArray( ... array of RGB triples ... ) at the end of beforeRender() to avoid pixelCount calls to the render() function.
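For reference, the shape of that approach is roughly this toy sketch (the renderArray() I’m wishing for is hypothetical and doesn’t exist today):

```
// Toy sketch of the frame buffer workaround: paint into an array once per
// frame in beforeRender(), then render() just reads it back out per pixel.
var fb = array(pixelCount)       // one hue value per pixel (could be packed RGB)

export function beforeRender(delta) {
  t1 = time(0.1)
  for (i = 0; i < pixelCount; i++) {
    fb[i] = i / pixelCount + t1  // stand-in for the "procedural" drawing code
  }
  // A hypothetical renderArray(fb) here could replace the pixelCount
  // per-pixel render() calls below - that's the optimization being wished for.
}

export function render(index) {
  hsv(fb[index], 1, 1)           // the one-line dump of the precomputed buffer
}
```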
Sorry if I missed something: if using the PB expander, which gets around the PPS limit of NeoPixels on a single string, why is there a 48kPPS limit?
I realize PB isn’t using a 16- or 20-line parallel output solution like the ones I mentioned above, just the max output that can be pushed over its own protocol to the expander board, but… ooooh, I see that the 2Mbps protocol is limited to only 66kPPS:
“The input serial protocol runs at 2Mbps. This allows up to 66k pixels/sec to be drawn per serial line, about twice the speed of typical WS2812.” from https://electromage.com/docs/output-expander
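(That 66k number presumably just falls out of the serial framing: 2 Mbps with start/stop bits is about 200 kB/s, and at 3 bytes per RGB pixel that’s roughly 66,666 pixels/sec per line, versus WS2812’s 800 kbps / 24 bits ≈ 33k pixels/sec.)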
Ok, so with a 60x60 array, I’ll get around 20fps in the very best case. I assume PB doesn’t use one ESP32 core just to push that 2Mbps while the other core computes pixels for its internal framebuffer, because one of the 2 cores is likely allocated to WiFi already, correct?
I could live with around 20fps. If I set up an expander board all wired up nicely and, after playing with the PB, I decide I want to switch back to my own ESP32 C++ code, is there an ESP32 driver somewhere I can import to push my framebuffer to the expander board over one wire?
I looked in pixelblaze_output_expander/firmware/Drivers at v3.x · simap/pixelblaze_output_expander · GitHub but didn’t find what I was looking for.
Worst case, I would rewire everything on the ESP32 to do my own parallel output and bypass the expander board.
The 48k pixels/sec is a CPU thing. Rough average of the built-in patterns. Even with infinite LED data bandwidth, that will come into play. Expanders won’t help with the CPU limit.
I’ve spent a lot of time optimizing that, but it is still going to be below native code speed. Pixelblaze can cheat a little in that any top level supported functionality can be implemented natively. So if a text and/or raster bitmap framebuffer API was added, it could do some of those things much faster than if implemented in pattern code alone.
I would be up for adding text, raster, bitmap, gif, sprites, etc. at some point. Anything with a permissive open source license is game for integration.
Currently the way around that CPU limit is to use multiple PBs in a sync group. Each PB adds another unit of processing power for the LEDs attached to it, so divide and conquer.
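For a 60x60, that would mean, say, 4 PBs each owning a 900-pixel quadrant: 48,000 / 900 ≈ 53 FPS of compute headroom per board (for patterns of similar complexity to the built-ins) before the data transmission limits even come into play.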
Some future ideas include writing a JIT compiler, an external/hosted native compiler, or new hardware that would allow running existing optimized JIT JavaScript runtimes (and a ton more compute otherwise).
Some of those ideas are more difficult than others. I’m unsure whether a ton of time invested in an Xtensa JIT would just end up thrown away if the next big MCU turns out to be RISC-V or ARM. Running a cloud service to compile stuff would probably mean some kind of nominal subscription. And finally, cramming a Raspberry Pi or tablet into something the size of a PB, keeping it relatively low power, and spinning up a whole system around it is a fairly big chunk of work.
In the nearer term, adding a few more sync primitives would go a long way in making a cluster of PB a very feasible way to scale up and close the gap that exists for patterns that aren’t solely relying on time() for their animations. That’s what I intend to work on next.
thanks for the details, much appreciated.
On the front of “if I have to revert to ESP32 and my own native code”, is there an up to date open source driver for expander board output for ESP32?