Mapping irregular layouts

What’s the best way to automate the mapping of an “irregular” arrangement of pixels? Think strings of “bead” pixels looped through MOLLE-esque webbing on a garment. Relatively dense, but with no grid, isometric, or strictly circular arrangement.

The standard trick is to video the controller lighting each pixel in sequence, then have the computer note down the coordinates. Are there tools that do this for Pixelblaze? The added hurdle is that it’s a wearable, so you’d need to shoot from a few different angles to completely map the (roughly cylindrical) surface.

Hey @ratkins! Welcome to the forum!

This is a big unsolved problem in the LED world that I’ve been following. Right now there are no video-based mapping tools specifically for Pixelblaze.

I’ve gotten a good head start mapping some projects in 3D from video using this tool:

You might be aware of some of these tools already, but if not, check out the approaches listed by me and some others in these comments:

As well as some referenced here: Blinkfade in Cube Frame - #4 by Scruffynerf

Good luck, and post pics/video of what you end up with, please!


This was deep in the links above, but I thought it should be at the top.
Ben made an online 2D mapping tool where you take a picture of the item and upload it. Then you mouse-click each LED in order to build a map as a JSON array. It doesn’t do 3D, but for my wearables I just laid them out flat and it came out great. So if you can, just unzip the back of the cylinder, lay it out flat, then 2D-map that. If not, do two maps, front and back, and append the arrays. You’ll get mirrored effects on front and back, but it will still look good.


Adding to basic photo pixel mapping techniques, one way to map more complex things that don’t lie flat is to take multiple photos from different angles, like front + back, and then use a photo editor to arrange them into a single image. Doing that side-by-side would give you something close to a cylindrical wrap. That might be especially handy if your wiring crosses between the front and back too.

The tricky part of this is you have to be able to see each LED in the photo (or make a decent guess), and click them in order.
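If you go the side-by-side route, you can optionally turn that flat map into a rough 3D cylinder afterwards by treating x as the angle around the body. A hedged sketch (function and parameter names are mine, not from any particular mapping tool):

```python
import math

def flat_to_cylinder(points2d, width, radius=1.0):
    """Wrap flat 2D map coordinates onto a cylinder.
    `width` is the full unrolled width of the side-by-side image;
    x maps to an angle around the cylinder, y stays as height."""
    points3d = []
    for x, y in points2d:
        theta = 2 * math.pi * x / width  # fraction of the way around
        points3d.append([radius * math.cos(theta),
                         radius * math.sin(theta),
                         y])
    return points3d
```

That keeps the click-to-map workflow 2D while still giving you a usable 3D map for cylindrical effects.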

You can see a good example of that in this project:


Great suggestions, thanks for the help everyone, you’ve got me oriented.

I spent a bit of time last year working on this. I’ve put together a proof of concept that works off multiple (2+) cameras. It automatically detects their relative positions (they don’t have to be at, e.g., 90º) and can locate the pixels as long as they show up in at least two cameras.
The idea is similar to the one described above: light the pixels up one by one, detect them automatically in every camera’s 2D image, and then, knowing the cameras’ relative positions, triangulate.
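For anyone curious how the triangulation step works: once you know each camera’s position and the direction of the ray toward a detected pixel, the 3D point is (approximately) where the two rays pass closest to each other. A rough sketch of that midpoint method in pure Python (my own illustration, not code from the demo):

```python
def triangulate(c1, d1, c2, d2):
    """Midpoint of the closest approach between two rays.
    c1/c2 are camera centres, d1/d2 ray directions toward the
    pixel detected in each camera's 2D image."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    sub = lambda u, v: [a - b for a, b in zip(u, v)]
    w0 = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # near zero means the rays are (nearly) parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [ci + s * di for ci, di in zip(c1, d1)]  # closest point on ray 1
    p2 = [ci + t * di for ci, di in zip(c2, d2)]  # closest point on ray 2
    return [(u + v) / 2 for u, v in zip(p1, p2)]
```

With more than two cameras you’d average (or least-squares fit) over all pairs that saw the pixel.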

See the demo: 3d pixel mapping with multiple cameras - YouTube

Theoretically, the process could be simplified to work with just a single camera and no separate calibration steps/patterns, but I never got to it. I’d be happy to discuss this in more detail.

This is not specific to Pixelblaze; the particular controller doesn’t matter as long as it can be driven over the network.


Oh, very cool! What is the output of the process? A bunch of points in a 3D space, quantised to some kind of grid? I don’t understand “… the particular controller doesn’t matter as long as it can be driven over network”—don’t you just load up the points as data when you’re programming the controller? Why does it need network?

The output is point coordinates in some coordinate reference system (technically, that of the first camera). You can rotate/scale them with an external tool to match the real dimensions/directions.

The network requirement is for the detection process only – the software sends commands to turn the pixels on and off. Since the output is just a static list of coordinates (e.g. CSV), no network is needed to use it.
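The “rotate/scale with an external tool” step could be as small as this sketch (the x,y,z CSV layout, the Z rotation axis, and the function name are all my assumptions):

```python
import csv
import io
import math

def transform_points(csv_text, scale=1.0, rot_deg=0.0):
    """Rotate (about Z) and uniformly scale an x,y,z CSV point list.
    A stand-in for the 'external tool' step mentioned above."""
    rot = math.radians(rot_deg)
    out = []
    for row in csv.reader(io.StringIO(csv_text)):
        x, y, z = map(float, row)
        xr = x * math.cos(rot) - y * math.sin(rot)
        yr = x * math.sin(rot) + y * math.cos(rot)
        out.append([xr * scale, yr * scale, z * scale])
    return out
```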
