I’ve never been happy with the look of LED previews. Sure, they show you which pixels will light up, in which approximate colors, but they’re totally missing the pop – the vibrant feel of an LED display. So I’ve had an “improvement” project in the works for a while. Here’s a first look at what’s going in the next build of Pixel Teleporter, which should be out around the beginning of November.
This is the “Line Dancer” pattern from my repo, running on a PB3 and feeding into PixelTeleporter. The MP4 was made with Processing’s built-in movie maker. No actual LEDs or cameras were harmed (or used) in the making of this film. It doesn’t even require a particularly powerful computer – this demo runs at 60 fps on my venerable i7-2600K with an NVIDIA GTX 1080.
To be clear, Pixelblaze’s in-browser live previews are fantastically useful, and wouldn’t really benefit from this sort of treatment. They’re a fast visual index for finding and selecting a pattern on your Pixelblaze.
What I’m working on is a better way to build a visual catalog for a pattern library. The 2D previews I’ve seen on the web so far really don’t do justice to the patterns they’re representing.
Anyway, here’s how it works:
There are two layers:
The background, a textured plane. I started to model it in detail, then realized… oh… and grabbed a photo of shiny, slightly scratched black plastic from the web instead.
A tiny bit of per-LED instanced geometry. Just a flattened cube with a lighter color circle drawn on the top. The LED object’s material is set to be emissive, with maximum shininess, and a bright white specular color. This let me take advantage of Processing’s built-in lighting to produce the “bloom” effect that simulates a slightly oversaturated camera.
Individual LEDs are “lit” with a large precalculated brightness map, which is built at startup time, then scaled and applied per-LED. The map includes the very bright LED center, interaction with the geometry, and light falloff across the scene.
The whole scene is lit with a very dim white directional light, shining from the viewer location into the scene. This provides just enough light that it sets off the “bloom” effect – crazy bright white specular highlights, really – when the light contribution from the brightness map is high.
(The recessed look in the video is a mistake – I set the LED geometry slightly inside the background instead of atop it – but it looked good enough that I’m keeping it as an option.)
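The precalculated brightness-map idea is easy to sketch. Here’s a minimal, hypothetical Java version – not the actual Pixel Teleporter code, and the falloff curve is just one plausible choice – showing the key point: all the expensive distance math happens once at startup, and each LED only pays for a multiply per frame.

```java
// Hypothetical sketch of the precalculated brightness-map idea.
// Not the actual Pixel Teleporter implementation.
public class BrightnessMap {
    final int size;     // map is size x size texels
    final float[] map;  // precomputed falloff, 1.0 at the center

    BrightnessMap(int size) {
        this.size = size;
        this.map = new float[size * size];
        float cx = (size - 1) / 2.0f, cy = (size - 1) / 2.0f;
        float maxDist = (float) Math.sqrt(cx * cx + cy * cy);
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                float dx = x - cx, dy = y - cy;
                float d = (float) Math.sqrt(dx * dx + dy * dy) / maxDist;
                // very bright center, inverse-square-ish falloff toward the edges
                map[y * size + x] = 1.0f / (1.0f + 8.0f * d * d);
            }
        }
    }

    // Per LED, per frame: just scale the precomputed map by the LED's brightness.
    float lit(int x, int y, float ledBrightness) {
        return ledBrightness * map[y * size + x];
    }
}
```

At 60 fps that per-frame multiply is what keeps this cheap enough to run on older hardware; the square roots are paid exactly once.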
It got me thinking - I know we have a roadmap for how the pattern library should support tags, be backed by a GitHub repo, and allow uploader updates…
But given that I almost always record a short video and link it in the code for anything 2D, I wonder if a shorter, more useful path would be an optional field on the pattern upload for a video demo. If it’s not provided, we can fall back to the 1D preview, or any 2D previews in the future. Granted, there are hard decisions to make about accepting YouTube links (as Discourse does), preview size, transcoding, storage…
Way back in the back of my mind, I’m thinking about a stand-alone Java app that could be run on the server backend to extract EPE preview data and generate a detailed preview GIF. If that works, instead of a random link, we could just have people check a 1D/2D box and maybe specify the optimal dimensions. They can still mess up, but nobody’s going to have to watch Rick Astley for a preview.
Here’s what I’ve got along those lines so far:
The videos compress extremely well and are quite small – the one at the start of this thread is about 4MB at full quality, before uploading to YouTube.
Processing’s movie maker can already generate animated GIFs as well as MP4s. They are (if you keep the length to 5 seconds or so) also very small.
The upcoming Pixel Teleporter version will also have the ability to save incoming LED data frames to a JSON file and let you play it back later. Adding the ability to look at EPE files is relatively easy at this point.
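To make the record/playback idea concrete, here’s an illustrative sketch of what serializing one frame of LED data to JSON might look like. The actual Pixel Teleporter file layout isn’t published yet, so the field names (`t`, `pixels`) and the one-object-per-frame shape here are purely my invention:

```java
// Illustrative sketch only: the real Pixel Teleporter JSON layout may differ.
public class FrameRecorder {
    // Serialize one frame of LED data as a single JSON object.
    // pixels is a flat array of packed 0xRRGGBB values; millis is the
    // frame timestamp, so playback can reproduce the original timing.
    static String frameToJson(long millis, int[] pixels) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"t\":").append(millis).append(",\"pixels\":[");
        for (int i = 0; i < pixels.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(pixels[i]);
        }
        return sb.append("]}").toString();
    }
}
```

Writing one such object per line makes the capture file trivially appendable and streamable, which is one reason a format along these lines is attractive for playback tools.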
If things continue to go well, I’ll have beta code out for the fearless to try later this week.
(Edit: Added Snake 2D video running on the current beta)
Ok, I’ve just put the demo out as a beta release on GitHub. The direct link is below - v1.1.9995b. To use it, download PixelTeleporter.zip and copy the contents to your Processing libraries folder as usual (instructions are in the README.md in the PixelTeleporter repo if you haven’t done it before.) Or…
If you’ve updated to Processing 4, you can use the new easy installer. Just download PixelTeleporter.pdex and drop it on Processing (or double click it). You’ll be asked if you want to install “PixelTeleporter”. Respond “Yes!” and you’re done.
You can try out the new renderer with the “LEDRenderTest” example. It’s set up for a 16x16 matrix, but can easily be changed to other sizes. In addition to the normal PixelTeleporter UI, you can set the level of camera oversaturation (the glowing, blown-out look you see in many LED videos) by pressing the 0 (off), 1, 2, 3, or 4 keys.
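Conceptually, you can think of those keys as selecting a gain applied to LED brightness before clamping, so bright pixels blow out to white the way an overexposed camera does. The sketch below is my own model of that idea – the names and the specific gain values are mine, not the library’s, and the real renderer does this work on the GPU:

```java
// Hypothetical model of the 0-4 oversaturation levels: a gain applied
// to LED brightness before clamping. Values that exceed 1.0 clip to
// white, producing the blown-out "camera" look.
public class Oversaturation {
    // Map a key press to a gain level. Gains here are illustrative.
    static float gainForKey(char key) {
        switch (key) {
            case '0': return 1.0f;  // off: no extra gain
            case '1': return 1.5f;
            case '2': return 2.0f;
            case '3': return 3.0f;
            case '4': return 4.0f;
            default:  return 1.0f;  // ignore other keys
        }
    }

    // Apply the gain and clamp to the displayable range.
    static float apply(float brightness, float gain) {
        return Math.min(1.0f, brightness * gain);
    }
}
```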
I’d appreciate any feedback, especially on performance, as I’m asking quite a lot more of both CPU and GPU…
As this is a prerelease, there are known bugs. Here they are:
Although PixelTeleporter’s rotate/translate/zoom mouse UI works as usual, rotating the rendered image can have …weird… results. Just press ‘r’ to reset the camera. Rotating about the Z axis (hold down alt) works fine. It’s just X and Y that cause the depth buffer to get confused. This’ll be fixed by release.
There are occasional color blending anomalies - stuff that should be smooth, but isn’t. This is due to some constraints on rendering order. It’s not terrible. Just not perfect. I’m still working on a fix that doesn’t kill performance.
If you restart LEDRenderTest very quickly after stopping it, it may take a few seconds to connect, and may even start the grey “I’m not connected” blink. Give it 20 seconds or so before trying again.
Real LEDs pump out WAY more light than monitors, but monitors have better dynamic range. This means that some patterns can be too bright or too dim to work well at first.
If a pattern is too bright and looks blown out, just turn the brightness down a little on your Pixelblaze. If it’s too dim (as in the case of SpinWheel 2D, for example), just turn the Pixelblaze to 100% brightness. If that’s not enough, scale up the final brightness in the pattern. SpinWheel, for example, looks better everywhere if you multiply the final output brightness by 4.
(Snake 2D running on a PB3, as captured by the Windows Game Bar. No editing, no post-processing.)
Pixel Teleporter HD progress report: Almost done. Just fine tuning for robustness and working to push as much work as possible out to the GPU.
Here are a few relaxing minutes of the new Pixel Teleporter simulating the awesome Evil Genius Labs Fibonacci 256 board. It’s being driven live by a Pixelblaze3 using the mapping function developed in this thread. A video of an actual board from the Evil Genius website is shown at right for comparison.
@twilight, I’ll post these to the library as I get them polished – nice controls, the code decently commented, etc.
Meanwhile, most of the patterns in the video are in the repo that @timster mentioned above – either in the “2D and 3D” folder (for finished patterns) or the “Experimental” folder (things that work, but need cleanup before release). Please take a look, play with them and make suggestions.
Some of my recent favorites are:
the three “rgbplasma-noX” patterns (in the 2D and 3D folder)
Twinsuns2D in the Experimental folder
“Agitator” in the Experimental folder
and if you want something brand new, try the two negative space patterns (black designs against a lit background), darkstar2d and seastar2D in the Experimental folder. I’m still working on improving these, but they’re already really different and interesting.