@wizard What’s the timeline on a v2 firmware update to bring it up to v3 parity?
Mostly because I own a pile of v2s and realized that if I upgrade code to v3 standards, those new patterns won’t run on v2
One of us should really publish a pattern with a nice set of polyfills in the meantime. The new coordinate transforms would be a fun and meaty challenge.
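For anyone tempted to take that on, here's roughly what I mean, as a sketch in TypeScript-flavored pseudocode rather than real Pixelblaze pattern syntax (the `rotateZ` and `renderPixel` names are made up for illustration): rotate the mapped coordinates in the pattern itself instead of relying on a firmware transform API.

```typescript
// Rough sketch of a pattern-side "polyfill" for one coordinate transform:
// rotate each pixel's mapped (x, y) about the center in pattern code,
// instead of relying on a firmware transform API. Names are illustrative,
// not real Pixelblaze API.

let angle = 0; // rotation angle in radians, animated elsewhere

// Rotate (x, y) about the origin by `angle`; z passes through unchanged.
function rotateZ(x: number, y: number, z: number): [number, number, number] {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return [x * c - y * s, x * s + y * c, z];
}

// Hypothetical per-pixel callback (stand-in for a render3D-style function)
// that applies the polyfilled transform before picking a color value.
function renderPixel(index: number, x: number, y: number, z: number): number {
  const [rx, ry] = rotateZ(x - 0.5, y - 0.5, z); // rotate about the center
  return (rx + ry + 1) / 2; // fold the result into a 0..1 value
}
```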
It’s definitely on the list, but not at the top just yet.
I have an update to V2 that hasn’t been released yet; it’s largely the same as V2.25 (no transform APIs, etc.), but I did port over a few small improvements:
I’m happy to send that your way if you want to give it a spin.
Of course the real meat would be the array, math, and transformation APIs so that patterns are more portable. The transformation API uses floats internally, and the ESP8266 lacks an FPU, so it would take a much bigger performance hit than on V3. Still, it may be better than a pattern-based implementation.
+1 for just adding API compatibility for the v2. IMO, optimized performance on the older platform would be nice, but it’s something that can be done over time.
Since we can stack up to 31 transforms, one thing I’d like in the API eventually is the ability to easily “pop”, or remove, one or more transforms from the current set. This would make it easy to implement coordinate frames of reference for object subassemblies – the usual example is a car moving through world space, with wheels that are also rotating on their own set of car-relative axes (see the sketch at the end of this post).
Not critical immediately, but something to think about. It’s a good bet that in the near future the line between “LED lighting controller” and “display controller” is going to become even blurrier than it is now! Pixel cloth, anyone?
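Here's the kind of thing I'm picturing, as a rough sketch in plain TypeScript (not Pixelblaze API; the matrix layout and function names are just for illustration) of a transform stack where push composes onto the current transform and pop restores the enclosing frame of reference:

```typescript
// Illustrative transform stack for the car-and-wheels example. This is not
// Pixelblaze API; it's a generic sketch of push/pop frames of reference.
// Transforms are 2D affine matrices stored as 9 numbers, row-major.

type Mat3 = number[];

const IDENTITY: Mat3 = [1, 0, 0, 0, 1, 0, 0, 0, 1];

function multiply(a: Mat3, b: Mat3): Mat3 {
  const out = new Array(9).fill(0);
  for (let r = 0; r < 3; r++)
    for (let c = 0; c < 3; c++)
      for (let k = 0; k < 3; k++)
        out[r * 3 + c] += a[r * 3 + k] * b[k * 3 + c];
  return out;
}

const translation = (tx: number, ty: number): Mat3 => [1, 0, tx, 0, 1, ty, 0, 0, 1];
const rotation = (t: number): Mat3 => {
  const c = Math.cos(t), s = Math.sin(t);
  return [c, -s, 0, s, c, 0, 0, 0, 1];
};

// The stack: push composes a new transform onto the current one, pop
// discards the top and restores the enclosing frame of reference.
const stack: Mat3[] = [IDENTITY];
const current = () => stack[stack.length - 1];
const push = (m: Mat3) => stack.push(multiply(current(), m));
const pop = () => { if (stack.length > 1) stack.pop(); };

// Placeholder draw hooks and scene values, just so the example is complete.
const drawBody = (m: Mat3) => console.log("body transform", m);
const drawWheel = (m: Mat3) => console.log("wheel transform", m);
const carX = 0.4, carY = 0.1, wheelAngle = Math.PI / 3;

// Car moving through world space...
push(translation(carX, carY));
//   ...with a wheel spinning on its own car-relative axis.
push(rotation(wheelAngle));
drawWheel(current());
pop();                 // back to the car's frame
drawBody(current());
pop();                 // back to world space
```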
I had push and pop to save and restore the transform, but it didn’t make sense. You’d have to change it mid-render, and the change would only take effect from the next pixel. It would be different if it were a canvas-like API.
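Roughly what I mean, as a simplified hypothetical sketch (not how the firmware actually works): in the per-pixel model the active transform is consumed one pixel at a time, so a pop inside render only affects pixels that haven't been drawn yet, whereas a canvas-like API brackets whole draw calls.

```typescript
// Simplified hypothetical sketch of the difference; the "transform" is
// reduced to a single scale factor so the timing issue is easy to see.

// Per-pixel model: the engine applies the currently active transform to
// each pixel's coordinate before handing it to render(). If render() were
// to pop a transform partway through the frame, only later pixels would
// see the change, so a push/pop's scope would be an accident of pixel
// order rather than an explicit bracket.
function perPixelFrame(coords: number[], transforms: number[],
                       render: (index: number, x: number) => void) {
  for (let i = 0; i < coords.length; i++) {
    const scale = transforms[transforms.length - 1]; // re-read each pixel
    render(i, coords[i] * scale);
  }
}

// Canvas-like model: push/pop bracket whole draw calls, so each transform
// cleanly scopes one piece of geometry regardless of pixel order.
function canvasFrame(transforms: number[],
                     drawPart: (scale: number) => void) {
  transforms.push(transforms[transforms.length - 1] * 0.25); // child frame
  drawPart(transforms[transforms.length - 1]);
  transforms.pop();                                          // parent frame
  drawPart(transforms[transforms.length - 1]);
}
```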
You’re right of course, @wizard. I’ll ask again later when we get around to transforming arbitrary vectors outside of render!