Using beforeRender to generate an array of HSV values for supersampling

I just read the topic again: Using beforeRender to generate an array of HSV values for supersampling, and I thought I should mention that supersampling H doesn’t really make sense, since hue is a circular value. I fill my framebuffer with RGB (possibly using the HSVtoRGB fn you’ll see in my patterns).
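To see why averaging hue directly goes wrong, here is a quick plain-JavaScript sketch (the `hsvToRgb` helper below is an illustrative, fully-saturated conversion written for this example, not the HSVtoRGB function mentioned above):

```javascript
// Hue is circular: two nearly identical reds, hue 0.98 and 0.02,
// naively average to 0.5, which is cyan.
const naiveAvgHue = (0.98 + 0.02) / 2;

// Averaging in RGB space instead keeps the result red.
// Minimal HSV->RGB for full saturation and value (illustrative only).
function hsvToRgb(h) {
  const i = Math.floor(h * 6), f = h * 6 - i;
  const p = 0, q = 1 - f, t = f;
  switch (i % 6) {
    case 0: return [1, t, p];
    case 1: return [q, 1, p];
    case 2: return [p, 1, t];
    case 3: return [p, q, 1];
    case 4: return [t, p, 1];
    case 5: return [1, p, q];
  }
}

const a = hsvToRgb(0.98), b = hsvToRgb(0.02);
const avg = a.map((v, k) => (v + b[k]) / 2);  // still predominantly red
```

Averaging (supersampling) component-wise is safe in RGB because each channel is linear, while hue wraps around at 1.0.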

2 Likes

I’ve definitely come to the conclusion by now that this entire idea was both bad in general and not necessary. However, I’m still glad I at least got it to work in some way. A changing pattern that was generated in beforeRender scrolled across the pixels at 14 whole fps! When I started working on it I knew it would probably end up being completely scrapped. It was just a fun goal to keep me engaged in “sitting in front of my screen with my eyes closed trying to imagine what the hell I’m doing” programming.

2 Likes

Most certainly worthwhile! I usually do that away from the computer, then I just sit down and write the code. For example, the Sierpinski pattern took me a week of thinking about it while falling asleep and a couple of hours of coding.

1 Like

Correct me if I am wrong, but this is true only if the time() call is gated on a particular pixel index x.
Something like this:

if (x == index) {
  t1 = time(whatever)
}

Otherwise time() will be called once for each pixel, not once per frame.
But you are correct that whatever is in beforeRender is called only once per frame.

Incorrect. The value of time(whatever) won’t change inside any given frame.

It’s a static value while render happens.

So time(.15) always reads the same spot in its cycle, no matter when you call it inside render().

Yes, time(.3) produces a different value, but it also won’t change within the frame.

I am pretty sure the documentation on time() says this, but if not, we should make it clearer.
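To make the snapshot behavior concrete, here is a minimal sketch in plain JavaScript (not actual Pixelblaze internals; `startFrame` and the simulation loop are illustrative, though the 65.536-second base period of time() matches the documented behavior): the engine reads the clock once per frame, and every time() call derives its value from that snapshot.

```javascript
// Clock snapshot taken once per frame, before beforeRender() runs.
let frameSnapshotMs = 0;

function startFrame(nowMs) {
  frameSnapshotMs = nowMs;
}

// time(interval) loops 0..1 once every interval * 65.536 seconds,
// computed from the frame snapshot rather than the live clock.
function time(interval) {
  const periodMs = interval * 65536;
  return (frameSnapshotMs % periodMs) / periodMs;
}

// Simulate one frame: 300 render() calls all see the same value.
startFrame(1234);
const first = time(0.15);
let same = true;
for (let i = 0; i < 300; i++) {
  if (time(0.15) !== first) same = false;
}
```

Because every call reads the same snapshot, it doesn’t matter whether time() runs in beforeRender or at the last pixel of render.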

Here’s a bit from a previous post.

1 Like

This must be true if the timer runs (much) slower than render.
A time(.1) loop takes ~6.5 s, or 6500 ms (I am guessing the exact loop time is 6553.5 ms).
For 300 LEDs the max render loop is ~9.5 ms, i.e. render will run nearly 700 times per timer loop. This way the time(x) value will be (about) the same for any single render cycle, but it will slowly drift because the render loop and the timer loop are not synchronized.
I am guessing the expected range for the time(x) argument is 1/65535 to 1.
Am I correct? (A 16-bit hardware counter with a 1 kHz reference clock is the related hardware.)
So, what happens to a nice-looking pattern if the timer argument is 1/65535? In this case the timer loop will be 1 ms, which is about 10 times faster than the render loop for the same 300-LED strip, and then it will make a difference whether time(x) runs in beforeRender or inside render.
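The arithmetic above can be checked with a quick sketch, assuming time(interval) completes one 0..1 cycle every interval × 65.536 seconds (the 9.5 ms/frame figure for 300 LEDs is taken from the post, not measured here):

```javascript
// Period of one time(interval) cycle, in milliseconds.
const periodMs = (interval) => interval * 65536;

const slowPeriod = periodMs(0.1);             // 6553.6 ms, ~6.5 s per cycle
const renderMs = 9.5;                          // ~9.5 ms/frame for 300 LEDs
const framesPerCycle = slowPeriod / renderMs;  // ~690 render frames per cycle

const fastPeriod = periodMs(1 / 65536);        // 1 ms: shorter than one frame
```

With a ~690:1 ratio, the snapshot drifts only slightly between frames; with a 1 ms period the pattern would loop several times within a single frame, which is exactly where per-frame snapshotting (rather than the call site) determines what you see.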

Please don’t take me wrong.
I am trying to come up with some sort of universal approach for smoothly controlling the timing of a running pattern.
I played with a few of the preloaded patterns and quickly figured out how easily a pattern can be broken by a very minor adjustment to its timing variables.
I clearly understand that the buffered approach limits the max FPS, but on the other hand it provides very easy timing control with a single variable, and easy pixel manipulation with arrays. And of course, I assume a PB with LEDs is not a substitute for an HD TV.

@Vitaliy, see my previous post, which links to my comment in another topic.

The time() calls work off a snapshot taken once per animation frame. This has many advantages.

There’s no difference between a call in beforeRender and one at the last pixel in render (for the same interval parameter), no matter how long rendering takes.

Values larger than 1 are just fine. The maximum interval is something like 24 days.

The time() function is based on milliseconds, so things start to get poorly quantized with extremely short intervals.

For POV with very high frame rates (like 1000 fps), using the delta provided in beforeRender will yield better results, with resolution down to a clock cycle.
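The delta-based alternative can be sketched like this in plain JavaScript (mirroring the Pixelblaze pattern API; the 1-second period and the simulated 1000 fps frame loop are illustrative assumptions):

```javascript
// Accumulate the per-frame delta yourself instead of calling time(),
// which is quantized to milliseconds. delta may be fractional.
let t = 0;
const PERIOD_MS = 1000;  // hypothetical 1-second cycle

function beforeRender(delta) {
  t = (t + delta) % PERIOD_MS;
}

// Simulate ~1000 fps: 2500 frames of 1.001 ms each.
for (let frame = 0; frame < 2500; frame++) {
  beforeRender(1.001);
}
const phase = t / PERIOD_MS;  // 0..1, like time(), but with finer resolution
```

Because delta carries sub-millisecond precision, the accumulated phase stays smooth even when each frame is only about as long as one tick of the millisecond clock.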

2 Likes