Persistence of Vision

One of the projects I’ve made over the years is a set of persistence-of-vision (POV) poi. The one challenge with them is changing out the patterns: it’s always a bit of a process to take an image, convert it to code, compile the new code, and upload it to the poi.
The Pixelblaze V3 is a nice small package which would fit easily into a new set of poi. From what I can see, it doesn’t look like it supports any persistence-of-vision displays, though. Are there any plans to add this?

Correct me if I’m wrong, but you’d need to add an accelerometer (or attach the sensor expansion board, soldered to the pads on the back), at which point the standard Pixelblaze is likely just as workable, if a bit larger.

I am interested in building a POV POI, so I’m curious what you envision.

I’m also trying to build a couple POV projects with the PixelBlaze and am very interested in this topic.

“take an image, convert it to code, compile the new code and upload it to the poi.”

I don’t think PixelBlaze was designed for this, but it should be possible? I’d love to hear what Ben has to say on this subject.

@karade,
I’d love to hear about what you’ve made.

It really depends on the look you are going for. There are a lot of LED POV poi out there.

Pixelblaze doesn’t yet support images, but would be great for generated patterns.

In a general sense, the basic requirement is that you need to be able to spit out pixels at a fast enough frame rate to change the LEDs while it’s spinning. For a Pixelblaze, the FPS depends a lot on the number of pixels and pattern complexity. If you have 100 pixels, you can get 120-400 FPS on V2, 2-2.5X that for V3. Cut down to 32 pixels, and you can get almost 3x more.
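As a rough sanity check on those numbers (my own arithmetic, not from the thread), the achievable frame rate sets how many distinct image columns you can draw per revolution at a given spin speed:

```python
# Rough POV timing math (illustrative; the spin rate is an assumed figure).
def columns_per_revolution(fps, revs_per_second):
    """How many distinct image columns one revolution can display."""
    return fps / revs_per_second

# Poi are often spun somewhere around ~3 revolutions per second:
for fps in (120, 400, 800):
    print(f"{fps} FPS -> ~{columns_per_revolution(fps, 3):.0f} columns/rev")
```

So at 120 FPS you’d get a fairly coarse ~40 columns per revolution, while the V3’s higher rates leave room for denser images or faster spins.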

Many poi just spit things out at a fixed rate. If you spin it faster, the image stretches. Spin slower and it squishes the pattern tighter together. You can absolutely do that with a Pixelblaze V3 Pico.

Usually that doesn’t matter much for patterns (or images that are just repeating textures), slightly for text, and more for logos. But it all depends on what you are trying to do.

Crosspost from Pixelblaze V3?

Can you run the Pixelblaze on a 3.7 V 18650? Otherwise this whole project is going to fail before it starts. Is there a 5 V battery that fits inside a poi stick or staff?

What I made was basically this: https://learn.adafruit.com/genesis-poi-dotstar-led-persistence-of-vision-poi with 32 pixels. I used some plumbing fittings so I can screw the poi into a staff if I want to.
I’d like to re-make them using some APA102 2020 strips to increase the pixel density while making the whole poi a lot shorter. For the look I’m going for, a fixed output rate will work well enough; the uploading of an image is the biggest hassle currently.

I was more asking to see if you had something in the works, but these things often hit the ground hard, and I’d honestly rather trash a cheap eBay special than a good controller. I haven’t managed to break them yet, which is lucky.

So three ideas for images… @wizard might offer better insight into the possibility and limits of them all:

1: Websocket setting of a variable holding an encoded image. Concerns: size and saving
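A minimal sketch of idea #1, assuming the image is flattened to an array of 0..1 channel values and pushed as an exported pattern variable over the Pixelblaze websocket (the variable name and encoding here are my invention):

```python
import json

def make_setvars_frame(var_name, flat_rgb):
    """Build a 'setVars' websocket frame carrying image data as a flat
    list of 0..1 channel values. The receiving pattern would need a
    matching 'export var' of the same name (name is hypothetical)."""
    return json.dumps({"setVars": {var_name: flat_rgb}})

# A 2-pixel, 1-column "image": one red pixel, one blue pixel.
frame = make_setvars_frame("povImage", [1.0, 0.0, 0.0, 0.0, 0.0, 1.0])
```

The size concern is real: a large image serialized as JSON floats gets big quickly, and (as noted above) values set this way wouldn’t survive a restart without some way to save them.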

2: Generating code from Static Patterns

Write a simple image to data converter and paste output into new pattern. Could be combined with #1 to make a “image pattern” maker/pusher.
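A sketch of such a converter (the variable names, 0..1 scaling, and column-major layout are all my assumptions): it takes already-decoded pixel rows and emits a Pixelblaze-style flat array literal you could paste into a pattern.

```python
def image_to_pattern_source(pixels, name="image"):
    """pixels: list of rows, each a list of (r, g, b) 0-255 tuples.
    Emits Pixelblaze-style source declaring a flat array of 0..1 values,
    one column per POV frame (layout is an assumption)."""
    h = len(pixels)
    w = len(pixels[0])
    vals = []
    for x in range(w):          # iterate column by column
        for y in range(h):
            r, g, b = pixels[y][x]
            vals.extend(f"{c / 255:.3f}" for c in (r, g, b))
    body = ", ".join(vals)
    return (f"var {name}Width = {w}\n"
            f"var {name}Height = {h}\n"
            f"var {name} = [{body}]")
```

Feeding it the decoded pixels from any image library would give you a pattern snippet to paste in, which could indeed be combined with #1 to push the array live instead.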

3: The preview image is really a small JPG.
See "Improving" Patterns
It’s sent to the Pixelblaze as part of uploading an epe…
If there is a way to access that from within Pixelblaze code (and if there isn’t, I wonder if @wizard could easily add it, perhaps along with a checkbox to turn off preview regeneration for an edited pattern, so once it’s set that way it won’t be rebuilt from the pattern itself), then a small JPG containing exactly what the current one does (150 frames of limited pixel data) could be used to populate a POV display. Most are short strands, and that JPG is already being stored for every pattern. We’d want a function to pull the data into a useful format, one line at a time, just as it’s done on the web interface now. This seems like the most promising approach: it needs some work from Ben, but it would make Pixelblaze much more POV-friendly.

@Scruffynerf

  1. Yep, you could pipe over an array with the new image contents from a tool that can read images.
  2. Right! There’s a tool and pattern code started!
  3. It is, and the chip can read the data, but I’d need a JPEG decoder (not small or fast). Storage format aside, the resulting uncompressed image can be quite large for arbitrarily sized images and would need to stay in memory, which is tight on the V2 with everything else that is going on, but relatively plentiful on the V3.

Before I wrote Pixelblaze, I had a project that served up a page where you uploaded an image, set the number of pixels (to scale the height), how many frames (the width), and a speed, and it would crossfade between columns of pixels from the image.
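That crossfade-between-columns approach can be sketched in a few lines (a hypothetical helper, with columns stored as lists of (r, g, b) floats in 0..1):

```python
def crossfaded_column(columns, t, speed):
    """Linearly crossfade between adjacent image columns at a fixed rate.
    columns: list of columns, each a list of (r, g, b) floats 0..1.
    t: time in seconds; speed: columns advanced per second (assumed API)."""
    pos = (t * speed) % len(columns)
    i = int(pos)
    frac = pos - i
    a = columns[i]
    b = columns[(i + 1) % len(columns)]  # wrap around at the image's end
    return [tuple(av * (1 - frac) + bv * frac for av, bv in zip(pa, pb))
            for pa, pb in zip(a, b)]
```

The fixed rate means the same stretch/squish behavior described earlier: spin faster and each blended column covers more arc, spin slower and they bunch up.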

I think V3 will evolve over time to support these kinds of things, and make working with images, text, sprites, 2D textures, coordinate transformations, and that sort of thing much easier.

A bump because I would love to play around with image-based aspects of this!