Automatic Mapping Preview

Here’s a first look at my automatic 2D map generator. I’ve got a few features to add before general release, like map rotation and the ability to manually edit points the vision engine can’t figure out, but it’s already pretty robust. Should be ready to try in a week or two!

18 Likes

You did it!!! First boss defeated!

This is super cool. When this is 3D and done via a mobile phone (perhaps with the help of the spatial awareness inferences in ARKit - scene understanding, ARAnchor/ARNode and hit testing) - oh my god you’ll have a utility app that everyone working with LEDs will need to have. Holy grail. I have several links saved to other people’s prior attempts on LAA and YouTube!

3 Likes

What. The. F.

This is absolutely insane. Impressive work! I actually have an idea for an art piece where this would prove, once again, that Pixelblaze is the right choice to power it!

Wow! That's amazing! Do I see Python in the background? Are you using a CV library and websockets to poke each pixel?
Very very cool!

This might be the most exciting new feature I’ve seen in a long, long time. It’d be interesting to see how it responds to light diffusion (e.g., a piece of white cloth) over the LEDs when you’re done with it – because that’d allow mapping of LEDs already installed into wearables or other items. So impressively cool already.

@wizard, yes, I’m using the Python client with OpenCV. It’s super handy to be able to control the Pixelblaze while taking images.
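For anyone curious, here’s a minimal sketch of that kind of setup – not the actual generator. It assumes the pixelblaze-client package, a pattern running on the Pixelblaze that lights only the LED named by an exported variable (called “pixelToLight” here, a made-up name), and OpenCV’s minMaxLoc to find the brightest spot. The IP address is a placeholder, and the setVars call should be checked against your client version.

    import cv2
    from pixelblaze import Pixelblaze   # pixelblaze-client package

    pb = Pixelblaze("192.168.1.50")     # placeholder address for the Pixelblaze
    cap = cv2.VideoCapture(0)           # default webcam

    def locate_pixel(pb, cap, index, threshold=100, settle_frames=10):
        """Light LED `index` and return its (x, y) spot in the camera frame, or None."""
        # Assumes a pattern is running that lights only the LED whose index matches
        # the exported variable "pixelToLight" (hypothetical name).
        pb.setVars({"pixelToLight": index})
        for _ in range(settle_frames):  # drop frames while auto-exposure settles
            cap.read()
        ok, frame = cap.read()
        if not ok:
            return None
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (11, 11), 0)    # smooth out sensor noise
        _, max_val, _, max_loc = cv2.minMaxLoc(gray)  # brightest spot ~ the lit LED
        return max_loc if max_val > threshold else None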

A couple of things I found while researching this:

  • When you light up an LED, you actually have to throw away the next 10 or so frames while the camera tries to “help” you – adjusting exposure, white balance, etc. This takes time. When looking at other people’s detectors, I saw a lot of weird lighting and sampling problems that I think were related to this sort of issue.
  • Huge cheat #1: The Pixelblaze told me how many pixels it has, and I know exactly which one I’m trying to light up. I know the pixel is there. If I don’t get a good detection I can adjust parameters and automatically go back and try again (a rough retry loop is sketched after this list).
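Not the author’s code, just an illustration of that second point using the hypothetical locate_pixel() from the sketch above: because the pixel count comes straight from the Pixelblaze and every index is known to exist, a failed detection can simply be retried with looser parameters.

    def map_all_pixels(pb, cap, pixel_count, max_attempts=3):
        """Try every pixel index; relax the detection threshold and retry on failure."""
        points = {}
        for i in range(pixel_count):            # pixel_count comes from the Pixelblaze itself
            for attempt in range(max_attempts):
                threshold = 100 - 30 * attempt  # assumed knob: loosen detection on each retry
                loc = locate_pixel(pb, cap, i, threshold=threshold)
                if loc is not None:
                    points[i] = loc
                    break                       # found it; move on to the next pixel
        return points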
3 Likes

@ZacharyRD, I’m working on diffusion now. I think paper and cloth – anything thin enough to let you distinguish the LEDs when they’re running fairly bright – will work. Complete diffusion over a large surface, maybe not though.

“Hard” reflection off mirrored surfaces is a bear too. Infinity objects are right out for the moment.

I mean, complete diffusion over a large surface seems almost impossible – I think you’re solving it at the right level. Thin diffusion where you can still visually distinguish individual LEDs seems like enough! Thanks.

Incredible, and relevant to my interests. “Why can’t the computer do this?” I remember thinking as I clicked on all 400 LEDs in an image the last time I did this with @wizard’s tool.

Do you have a “quantise” or “snap to grid” feature? It’d be useful for wearables like mine, which are theoretically regular arrangements but might not look that way from any particular camera angle, given how the fabric lies.

I’ll have some kind of quantization for sure. Not sure about explicit snap-to-grid. Kinda… there is no grid. I’m trying hard not to be picky about lighting, camera angle, or moderate amounts of camera movement.
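Not the author’s implementation, but one minimal way “some kind of quantization” could look, assuming the map points are already normalized to 0..1: snap each coordinate to the nearest 1/steps increment so a nearly-regular layout comes out perfectly regular.

    def quantize_map(points, steps=16):
        """Snap normalized (x, y) points to a regular grid of `steps` cells per axis."""
        return [[round(x * steps) / steps, round(y * steps) / steps] for x, y in points]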

Right now, the plan is this: it’ll tag the pixels, then you’ll be able to rotate the whole map to the angle you want, drag misbehaving pixels to where they should be, and then either send the map to the Pixelblaze or export it as JSON or CSV so you can work on it further.
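A minimal sketch of that export step, assuming the detected points are stored as (x, y) pairs; a Pixelblaze 2D map is just an array of [x, y] coordinates, so JSON and CSV each take a few lines. The file names here are placeholders.

    import csv, json

    def export_map(points, json_path="map.json", csv_path="map.csv"):
        """Write the map both as Pixelblaze-style JSON ([[x, y], ...]) and as CSV."""
        with open(json_path, "w") as f:
            json.dump([[x, y] for x, y in points], f)
        with open(csv_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["x", "y"])
            writer.writerows(points)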

3 Likes