Yes, that is the idea. Though in general you'd want to call one of the color functions at least once for every pixel and not think of it as an optional paint. If you want black, call `rgb(0, 0, 0)` or something to be explicit. It definitely does not remember the pixel value from the last animation frame. You can, however, overwrite the current pixel, so it's fine to call `rgb(0, 0, 0)` first, then later conditionally call `rgb` or `hsv` and overwrite black with some color.
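A minimal sketch of that "paint black first, conditionally overwrite" idea. The `rgb()`/`hsv()` stubs and the driving loop below are only there to make the sketch self-contained; on a real Pixelblaze those are built in and only `render` is needed:

```javascript
// Pixelblaze-style sketch: explicit black, then conditional overwrite.
// rgb()/hsv() are stubbed; a later call overwrites the earlier one, like the engine.
var pixelCount = 8
var current
function rgb(r, g, b) { current = ['rgb', r, g, b] }
function hsv(h, s, v) { current = ['hsv', h, s, v] }

function render(index) {
  rgb(0, 0, 0)                         // explicit black for every pixel
  if (index % 2 == 0) {
    hsv(index / pixelCount, 1, 1)      // overwrite black on even pixels only
  }
}

var frame = []
for (var i = 0; i < pixelCount; i++) {
  render(i)
  frame.push(current)                  // last call wins, like the engine
}
```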
Yes and no. The engine only requires that you feed it some pixel data via one of the render functions. It is totally fine and valid to pre-process that in `beforeRender`, storing pixel data in an array or something, and then having a lightweight `render` that does little more than echo back pixels from your pre-calculated data. In some cases this is much easier and more performant (such as selective painting) than doing all the work in `render`.

@Scruffynerf is totally right that if you are doing per-pixel work, it is generally faster to do that in `render` rather than incurring the overhead of a buffer and loop in `beforeRender`.
Consider the performance difference between these 2 patterns:
This is the basic “new pattern” rainbow. It calculates a hue for every pixel.
```
export function beforeRender(delta) {
  t1 = time(.1)
}

export function render(index) {
  h = t1 + index / pixelCount
  s = 1
  v = 1
  hsv(h, s, v)
}
```
This modified version does pretty much the same thing, but does all the work in `beforeRender`, storing hues to a buffer array, which is then later used as-is. The array iterator method `mutate` is used here for speed; it's faster than a `for` loop.
```
var pixelHues = array(pixelCount)

export function beforeRender(delta) {
  t1 = time(.1)
  pixelHues.mutate((value, index) => {
    return t1 + index / pixelCount // the same hue calculation as before
  })
}

export function render(index) {
  h = pixelHues[index] // use the pre-calculated hue
  s = 1
  v = 1
  hsv(h, s, v)
}
```
Here’s the performance comparison of these 2 methods on my current setup with 500 pixels, no LEDs (just benchmark CPU).
| test | FPS | Pixels/sec | Relative |
|---|---|---|---|
| rainbow | 242 | 121,000 | 100% |
| rainbow buffer | 188 | 94,000 | 78% |
This is a relatively trivial example with a very low cost calculation, where the loop and buffer overhead dominate the performance.
Consider a closer-to-real-world example using KITT. Your PB probably has this installed, and there's a YouTube video where I live-coded this using a buffer.
Some time later, @jeff improved this with additional comments and fixed the skip-pixel issue (where the leader would move more than one pixel per animation frame). This version ships on V3.
Again, some time after that we all got to talking about how to do this without a buffer, and @jeff came up with this beauty. It’s perhaps a little more reliant on math, but comes in at only 15 lines of code, about half of the previous implementation (not counting comments).
Here’s the performance:
| test | FPS | Pixels/sec | Relative |
|---|---|---|---|
| KITT buffer (v3) | 115 | 57,500 | 100% |
| KITT bufferless | 73 | 36,500 | 63% |
So in this case, the buffered version is faster. The reason for this is that the bufferless KITT has to do the math for 2 pulses so that it looks like it bounces off the edge. The buffered version only has to fade out pixel values and draw a single leader pixel.
Compared to the cost of the rest of the pattern, the buffer and loop overhead are minimal.
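A minimal sketch of that buffered fade-and-leader idea (this is an illustration, not the shipped V3 KITT code; the fade factor and frame-driving loop are my own choices, and `hsv()` is stubbed so it runs anywhere):

```javascript
// Buffered approach: fade the whole buffer each frame, then paint one
// full-brightness leader pixel that bounces between the strip ends.
var pixelCount = 10
function hsv(h, s, v) {}                        // stub for the Pixelblaze built-in

var buffer = new Array(pixelCount).fill(0)
var leader = 0
var direction = 1

function beforeRender(delta) {
  for (var i = 0; i < pixelCount; i++) buffer[i] *= 0.8  // fade out the trail
  leader += direction
  if (leader >= pixelCount - 1 || leader <= 0) direction = -direction
  buffer[leader] = 1                                     // draw the single leader
}

function render(index) {
  hsv(0, 1, buffer[index])                               // red, trail brightness
}

// drive a few animation frames
for (var f = 0; f < 5; f++) {
  beforeRender(0)
  for (var i = 0; i < pixelCount; i++) render(i)
}
```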
You can also hybrid, painting into a buffer/canvas during render. For an example, see the “Lissajous curve tracer” pattern. The distance gradient (which also serves to anti-alias it a bit) to a dot with a radius is calculated for each pixel, and painted to a canvas.
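Here's a rough sketch of that hybrid approach in one dimension (the names `canvas` and `dot`, the fade rate, and the radius are illustrative choices, not taken from the Lissajous pattern; `hsv()` is stubbed so it runs anywhere):

```javascript
// Hybrid: render() paints a distance gradient to a moving dot into a
// persistent canvas, which fades over time, and outputs the canvas.
var pixelCount = 8
function hsv(h, s, v) {}                        // stub for the Pixelblaze built-in

var canvas = new Array(pixelCount).fill(0)
var dot = 0                                     // the dot's current pixel position

function beforeRender(delta) {
  dot = (dot + 1) % pixelCount                  // move the dot one pixel per frame
}

function render(index) {
  var d = Math.abs(index - dot)                 // distance to the dot
  var paint = Math.max(0, 1 - d / 2)            // gradient within a 2-pixel radius
  canvas[index] = Math.max(canvas[index] * 0.9, paint)  // fade old paint, keep new
  hsv(0.5, 1, canvas[index])
}

// drive a few animation frames
for (var f = 0; f < 3; f++) {
  beforeRender(0)
  for (var i = 0; i < pixelCount; i++) render(i)
}
```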
On the topic of supersampling, it comes in handy if you can generate more detail at a higher resolution than you could otherwise generate directly. Or perhaps if you need to work in a large canvas due to the nature of your pattern, such as ones that move pixels around or blur.
To implement anti-aliasing, you need to have a downsampling method in `render` that uses multiple source input pixels.
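A sketch of one such downsampling method: a supersampled buffer `SS`× larger than the strip, box-filtered down to one output pixel in `render`. The factor `SS`, the ramp fill, and the `out` array are illustrative assumptions, and `hsv()` is stubbed so the sketch runs anywhere:

```javascript
// Supersampling with a box-filter downsample in render().
var pixelCount = 4
var SS = 4                                      // supersampling factor
var ssBuffer = new Array(pixelCount * SS)       // high-resolution source buffer
var out = new Array(pixelCount)                 // captures downsampled output
function hsv(h, s, v) {}                        // stub for the Pixelblaze built-in

function beforeRender(delta) {
  // fill the high-resolution buffer (here just a simple ramp)
  for (var i = 0; i < ssBuffer.length; i++) ssBuffer[i] = i / ssBuffer.length
}

function render(index) {
  // average SS source samples per output pixel: a box filter
  var sum = 0
  for (var k = 0; k < SS; k++) sum += ssBuffer[index * SS + k]
  out[index] = sum / SS
  hsv(0, 0, out[index])
}

beforeRender(0)
for (var i = 0; i < pixelCount; i++) render(i)
```

A fancier filter (tent, Gaussian) just replaces the equal-weight average with weighted samples.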
This is the easiest to follow paper I could find: Filters for Common Resampling Tasks - Ken Turkowski
You are using nearest neighbor, and won't get any of the anti-aliasing benefits of supersampling. You could effectively render that pattern directly without any supersampling or buffers.
If you are only looking to solve this problem, I think you can go with a 1:1 buffer, selectively painting pixels as needed.
I hope some of that helps, but I think I might be missing part of what problem you are trying to solve for!