Receive pixel data through websocket API?


I’ve got some patterns which are too complex (state, look-ahead, complex maths) to render on the fly, so I’m currently pre-computing them and rendering them to an image where each row of the image is a frame of my LED strips. This happens on a networked computer with no memory or CPU constraints; I don’t think it’s possible to port them to the Pixelblaze.
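Concretely, the pre-rendered image gets consumed one row at a time, along these lines (just a sketch: the sizes and the generated stand-in data are placeholders, and in practice the rows would be loaded from the actual image file, e.g. with Pillow):

```python
import itertools

# Hypothetical stand-in for the pre-rendered image: one row per animation
# frame, one (r, g, b) tuple per LED. In practice this would be loaded
# from an image file rather than generated.
NUM_FRAMES, NUM_LEDS = 120, 60
image = [[(f % 256, led % 256, 0) for led in range(NUM_LEDS)]
         for f in range(NUM_FRAMES)]

def frames(img):
    """Yield one LED frame per image row, looping forever."""
    yield from itertools.cycle(img)
```

Each value from `frames(image)` is then one complete strip update, ready to be shipped to the hardware at whatever frame rate the playback loop runs.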

What I’d like to do, however, is send a frame of LED data directly to the hardware through some kind of API, probably the websocket one (although I’ll be bridging that to MQTT to fit with the rest of the system I’m building). Is this possible in some way, or am I going to have to write my own firmware for the hardware?

I’m happy to do the software from scratch if needed, but if anyone can think of a workaround using the current system, that would obviously be easier! I bought the v3 mostly because it saved me from having to design my own carrier board with level shifting etc. The web interface and other software is very neat, but I don’t think it can do what I need in this case.



Hi and welcome!

You can definitely do this, and many of the pieces already exist!

If you can use Python, here’s a link to the forum thread for @Nick_W’s async Python library and MQTT bridge.

And for some guidance on encoding pixels, sending them via the websocket API, and decoding them in Pixelblaze pattern code, here’s a pointer to my “proof-of-concept” E1.31 proxy.


Great, that’s exactly what I suspected might be possible, so thank you! I’d missed the set-variable operation in the API.

I’m using Python for everything else and had already had a look at the async API library. I’ll need to write my own MQTT bridge since I’m defining my own set of messages, but that’s trivial enough: I just need to respond to a request to play a particular pattern from my pattern server and then keep pumping frame updates out to the Pixelblaze.
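For what it’s worth, the core of such a bridge can be a tiny dispatch. This sketch invents its own topic names and payload shape (they’re placeholders, not part of any existing scheme); in a real bridge it would be wired to an MQTT client’s message callback, and the resulting state would drive the frame-pumping loop:

```python
import json

def handle_message(topic, payload, state):
    """Update playback state from a hypothetical MQTT message scheme.

    The topics and payload fields here are invented placeholders; hook
    this up to a real MQTT client's on_message callback (e.g. paho-mqtt)
    and have the frame loop watch state["pattern"]."""
    if topic == "leds/play":
        state["pattern"] = json.loads(payload)["pattern"]
    elif topic == "leds/stop":
        state["pattern"] = None
    return state
```

Keeping the dispatch pure like this also makes it easy to test without a broker.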

I’m curious whether this is better than just dumping MicroPython onto the Pixelblaze and writing an MQTT listener directly; I’ll do a comparison though. I don’t actually need more LEDs than your proxy can handle, but if I did, I think it’d be simpler to go through either MicroPython (loses some memory and efficiency, but simple) or new firmware (more complex, I guess; I’d need to dust off my C coding…).



Just a quick note: the number of LEDs the proxy supports is limited by the maximum number of pixels in a single DMX universe (plus, it was written for the somewhat slower Pixelblaze v2).
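Concretely, that limit works out like this:

```python
# A DMX universe carries 512 channels; an RGB pixel consumes 3 of them.
DMX_CHANNELS_PER_UNIVERSE = 512
RGB_CHANNELS_PER_PIXEL = 3
pixels_per_universe = DMX_CHANNELS_PER_UNIVERSE // RGB_CHANNELS_PER_PIXEL
print(pixels_per_universe)  # 170
```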

With a v3, if you skip E1.31 and the rate-limiting code and just use the websocket API directly, you can send many more pixels. I haven’t tested it fully, but as a guess, the practical limit is somewhere around 1000, maybe more, depending on your frame-rate requirement.
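A rough back-of-envelope for where a limit like that might come from (the ~6 bytes per JSON channel value and the link throughput are assumptions, not measurements):

```python
def frame_bytes(pixels, bytes_per_channel=6):
    """Approximate size of one JSON setVars frame: three channel values
    per pixel, each rendered as text like '0.123,' (assumed ~6 bytes)."""
    return pixels * 3 * bytes_per_channel

def max_fps(pixels, link_bytes_per_sec=1_000_000):
    # Assumes ~8 Mbit/s of effective Wi-Fi throughput to the device.
    return link_bytes_per_sec / frame_bytes(pixels)
```

At 1000 pixels that’s roughly 18 kB per frame, or on the order of 50 fps before the link saturates — so the real ceiling depends heavily on how fast you need to push frames.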