Using beforeRender to generate an array of HSV values for supersampling

TL;DR:
Has someone else already done supersampling in a way I can steal, one that makes animating motion in a direction relatively easy? I think I have it sort of working, and I'm fairly satisfied that my remaining issue is mostly math-related, so I could probably get mine to actually work, but I don't want to reinvent the wheel, and I'm pretty much at my limit for figuring out the math of the motion. All of the weird choices I've made are there because I think they're necessary to avoid iterating over the entire supersampling array in every beforeRender. If each LED always sampled the same supersampled pixel, the render function would be simpler, but I didn't think that was possible without going over the whole array. So my plan was to constantly overwrite the array at some set speed, always moving in complete rows/columns, and then map the array to the pixels with a "wrapping" line that moves at the same speed as the updates to the supersampling array.

I think my question is… does my end-goal make sense? Does my method make sense? Does any of my code make sense? Is there a 100x easier way to do this that I’m missing? Has someone else already done this, but correctly?

Here’s my code:

// ▼ Array Size Things ▼
xmax = 8
ymax = 8
ss = 8 //number of samples
arrayLen = (xmax*ss)*(ymax*ss) //total size of ssarray
hueArray = array(arrayLen) //create an array big enough to store supersampled hues
valArray = array(arrayLen) //for values
// ▼ SS Array Rendering Things ▼
speed = 0.1 //grids per second
lastRendered = 15 //the last pixel that the beforeRender function rendered
lastUsed = 7 //the last pixel that was used for rendering
min = ymax*ss //this should be based on speed 

export function beforeRender(delta){
  delt = delta
  t1 = time(0.015/speed)
  rowTime = 1000/(arrayLen/ymax) //how often a row should be rendered in ms
  colTime = 1000/(arrayLen/xmax)
  if(delta > rowTime){
    for(i=0; i<min; i++){ //I have no idea where +9 comes from
      xss = i % ymax*ss
      yss = i % xmax*ss
      hueArray[i] = 1/xss + t1
      valArray[i] = 1
    }
  }

  
}

// export function beforeRender(delta) {
//   t1 = time(0.1)
//   diff = abs(lastRendered - lastUsed) //distance between lastRendered and lastUsed.
//   if(diff < min || diff > arrayLen-min){ //check if lastUsed is too close or too far from lastRendered
//     nextPix = lastRendered + 1 % arrayLen
//     for(i = nextPix; i <= nextPix+2*min; i++){
//       x = i % ymax*ss
//       y = i % xmax*ss
//       hueArray[i] = t1
//       valArray[i] = 0
//       lastRendered = i
//     }
//   }
  
// }

export function render2D(index, x, y) {
  lastUsed = t1*min*speed //min gives us the width of the ss array, speed is how many we need per second, and t1 loops every second.
  x2 = round(x*ss) //finding the x-coordinate of the nearest ss pixel
  y2 = round(y*ss) //finding the y-coordinate of the nearest ss pixel
  h = hueArray[y2*ymax+x2]
  s = 1
  v = valArray[x2*xmax+y2]
  hsv(h, s, v)
}

I honestly don’t even know if it’s working, but the color changes one column at a time and moves left to right, which implies to me that it’s doing something right. Looking at the code, though, I have no idea why it would be working yet. There are clearly some math issues: the first column should actually be the last column, and the bottom row and rightmost column never get updated.

Long Version:

In my last pattern I ended up thinking a lot about the trade-offs between the on-the-fly, scale-independent, mapped 2d rendering of the pixelblaze, and more traditional ways of animating pixel grids.

My biggest issue was that anything that changes over time changes across the whole pixel space at once (if there’s a built-in way to deal with this automatically and I missed it, uh… please tell me). If I want the image on the screen to persist once it has been displayed, updating only via some directional “scrolling”, but without losing the fine, fast per-pixel adjustment you get when computing things from a time() function, I’d basically need supersampling.

Attempting that would be way, way above my skill level, and I had no idea if it was even possible to implement on a Pixelblaze the way I was imagining. So I decided to begin, and step 1 was googling “how to declare 2d arrays in javascript”, because that’s how lost I was. I remembered that in the “intro to pixelblaze” pattern there was something about using beforeRender() to create a buffer of pixels that render() would use, and a pointer to the KITT pattern. So I believed it was possible, but didn’t want to look at the KITT pattern.

Instead of a 2D array, I’m using a 1D array and just doing the math to figure out the X and Y coordinates when I need them.
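In plain JavaScript terms, the index math I'm relying on looks something like this sketch (W and H stand in for the supersampled dimensions xmax*ss and ymax*ss; the helper names are just for illustration):

```javascript
// Row-major mapping between a flat array and a W x H grid.
// W and H stand in for xmax*ss and ymax*ss from the pattern.
var W = 64, H = 64

// (x, y) -> flat index: each full row occupies W consecutive entries
function idxFor(x, y) {
  return y * W + x
}

// flat index -> (x, y): modulo recovers the column,
// integer division recovers the row
function xyFor(i) {
  return { x: i % W, y: Math.floor(i / W) }
}
```

One gotcha worth noting: in an expression like `i % ymax*ss`, the `%` and `*` operators have equal precedence and associate left to right, so it parses as `(i % ymax)*ss`; the column within the big grid is `i % (ymax*ss)`.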

Instead of one huge array of arrays for HSV values, I made two arrays, one for H and one for V. I tell myself I don’t want to animate saturation, but I couldn’t anyway because that would require too much RAM. I did it this way because when I tried to loop through the array and nest size-2 arrays inside it, the Pixelblaze got mad at me. I have no idea if this is dumb or not.

I tried some code with a variable for the last pixel cached, and the last one used by the render function, and having some moving dividing line along the array that determined which was the “end” and “beginning” for where to display on the pixel grid, so I could just overwrite the array as I went, but I started thinking about how to make that work with motion and decided to try again.

And that’s where I am now, after “starting again” and trying the method of whole rows and columns being updated.

See my bubble popping code for something like your idea.

Circles, circles, circles, and did I say Circles?

Code labeled as v0.7 of Soap Bubbles

It makes a buffer sized larger than the real matrix so that it can move the bubbles upwards to float “on screen”, and then adjusts values so the bubbles “pop” in a way that lasts multiple frames, with some sense of decaying history.

From seeing a few of your posts, I think you’re at the level you can definitely understand and use 2D arrays. For a recent example of their syntax, see this post.

I think I can imagine a scenario where you’re right, where it’s best to use supersampling, perhaps when trying to composite multiple sprites that were computationally expensive to compute the first time. However, I think it’s worth starting with knowing what your ultimate goal is in concrete terms for a particular pattern; supersampling might not be worth the complexity (or FPS) if there’s a different way.

Perhaps the most helpful thing I can do is go through your code and comment some things I found counterintuitive.

// ▼ Array Size Things ▼
xmax = 8 // If this is meant to imply an 8-LED-wide matrix, based on the 0-indexed math used below, consider 7 instead.
ymax = 8 // ditto
ss = 8 //number of samples 
arrayLen = (xmax*ss)*(ymax*ss) //total size of ssarray = 4096, ok. 64 each dimension, rendered down to 8 per dimension.
hueArray = array(arrayLen)
valArray = array(arrayLen) 
// ▼ SS Array Rendering Things ▼
speed = 0.1 //grids per second -> Is this supposed to mean rows or columns per second? Pixels in the large array per second? .1 "anything" per second implies we want one of a thing to take 10 seconds. 
lastRendered = 15 
// Currently unused - I see it in the commented-out beforeRender and I see the intent, a bit. 
// I could really benefit from a comment here explaining if the range for this and lastUsed is
// supposed to be in (0..ss), (0..ymax), (0..min), (0..arrayLen) etc.

lastUsed = 7 
//this isn't currently used in the code below. I see it in the commented out beforeRender and that it was intended to help find the 
// difference / distance between which have been rendered vs which have been used.

min = ymax*ss // So, the number of samples available to sample from for any column. Unsure why this is named min. min = 64


export var delt
export function beforeRender(delta){
  delt = delta // Currently unused
  t1 = time(0.015/speed) 
  // See the comment above for the definition of speed. time(.015) loops about once per second.
  // If speed is ".1 grids per second", .015/.1 means t1 will loop 0->1 every 10 seconds. 
  rowTime = 1000/(arrayLen/ymax) //how often a row should be rendered in ms
  // I thought there were 8 rendered rows in the real matrix, and 8 sample rows per real row,
  // so I'm expecting either 1000 ms divided by 8, or by 64.
  // Instead, this is 1000 ms divided by (4096/8) = 512, i.e. ~2 ms... seems too fast.
  
  colTime = 1000/(arrayLen/xmax)
   
  if(delta > rowTime){ 
  // From above, rowTime is about 2ms, yet delta is commonly 1-20ms on Pixelblaze3
  // and on my PBv3 with 64 APA102 pixels in an 8x8 matrix, I'm getting 470FPS and delta's
  // almost always 2.05 ms. So.. This loop is basically always running, but wouldn't run
  // if delta somehow experienced a very quick cycle under 1.94ms. 
  
    for(i=0; i<min; i++){ //I have no idea where +9 comes from
    //  i from 0 to 63 based on min above...
    
      xss = i % ymax*ss // % and * share precedence, so this parses as (i % ymax)*ss: 0, 8, 16, ... 56. Probably meant i % (ymax*ss).
      yss = i % xmax*ss // ditto - and for a square grid this computes the same thing as xss
      hueArray[i] = 1/xss + t1 
      // note the divide-by-zero problem here: xss can be zero.
      // Ignoring that, the hue value can exceed 1, so it wraps the hue wheel more than once.
      
      valArray[i] = 1
    }
  }

  
}

// Note from Jeff: I'm ignoring this commented beforeRender for now.

// export function beforeRender(delta) {
//   t1 = time(0.1)
//   diff = abs(lastRendered - lastUsed) //distance between lastRendered and lastUsed.
//   if(diff < min || diff > arrayLen-min){ //check if lastUsed is too close or too far from lastRendered
//     nextPix = lastRendered + 1 % arrayLen
//     for(i = nextPix; i <= nextPix+2*min; i++){
//       x = i % ymax*ss
//       y = i % xmax*ss
//       hueArray[i] = t1
//       valArray[i] = 0
//       lastRendered = i
//     }
//   }
// }

export function render2D(index, x, y) {
  lastUsed = t1*min*speed 
  // Original comment: //min gives us the width of the ss array, speed is how many we need per second, and t1 loops every second.
  // Jeff's notes: Other than setting this here, lastUsed isn't... used.
  // t1 is currently configured to go 0->1 every 10 seconds. 
  // min is the number of rows in the big array; since speed is .1, this saw 
  // wave goes from 0 to "10% of the number of rows to sample from" every 10 seconds
  // so I'm a little confused about what the eventual purpose of this is.
  // Let's also note that this doesn't depend on index, x, or y, so whatever the
  // design intent is, we are recalculating it a LOT - once for every single pixel,
  // every single frame (since it's here in render()). Even if it's correct as written,
  // It's definitely more efficient to do this in beforeRender() once per frame.
  
  x2 = round(x*ss) //finding the x-coordinate of the nearest ss pixel
  y2 = round(y*ss) //finding the y-coordinate of the nearest ss pixel
  // Jeff: OK... an integer between 0 and ss (8) based on x (0..1) from the map.
  // I can't help wondering if what was intended here is that x2 should
  // go from 0 to the width (number of columns, so xmax, not ss)
  // But it happens to be close since they're both 8 for now.
  // Even further, I wonder if the intent was more like: 
  //    x2 = round(x*xmax*ss)
  // IE, a decimal coordinate in 0..63 (64 samples each in x and y) to pluck from the 
  // big arrays, but then rounded so it's an integer for array indexing. Note
  // I'd use floor() for this, because if they round() up to 64, hueArray[64*64] is out 
  // of bounds for a 4096-sized array.
  
  h = hueArray[y2*ymax+x2]
  // Jeff: OK - here I finally see the array layout scheme. Storing by rows, and
  // 256 potential column entries in each row. However, hueArray has 8*8*8*8 = 4096
  // entries; x2 and y2 can as-written, only be 0..8. So the largest index you can get 
  // from this in your 4096-element array is 8*8+8 = 72.
  // I'm tempted to think what's desired is:
  //    h = hueArray[y2*ymax]
  
  s = 1
  v = valArray[x2*xmax+y2]
  // Jeff: But here I get confused - x2 and y2 are transposed; why would the scheme for values
  // be different than the scheme for hues?
  
  hsv(h, s, v)
}

Thanks a lot, I really appreciate it. I definitely should have cleaned things up more before posting the code, at least taking out things that don’t do anything anymore, so really thanks for giving me feedback even with the mess that goes beyond just sloppy code.

A lot of the things you pointed out as needing clarification or having math issues became clear to me as I worked on it more last night. I wanted it to be as generalizable as possible, so that anything that could live in a render2D() function could sit inside the loop in beforeRender() that runs once for each pixel in the supersampling array that should be updated, computing some pattern from the X and Y values derived from that index.

I wrote out what I’m trying to accomplish, since the mistakes in my code make it hard to tell what is something I actually want to happen, and what is just bad math/code.

xmax = the length of the x-side of the physical LED grid
ymax = the width of the y-side of the physical LED grid
ss = the supersampling factor 
arraysize = xmax*ss*ymax*ss (each side is multiplied by the ss factor, so each real LEDs has ss^2 samples)

beforeRender(delta) 
    determine how many supersampled pixels need to be updated based on delta and desired speed
    determine the starting index of these supersampled pixels, constantly overwriting the oldest entries in the array (FIFO)
    iterate through all the supersampled pixels that need updating and calculate their X and Y coordinates in the supersampled array.
    use those X and Y coordinates to generate a 2D pattern

render2D(index, x, y)
    offset the x and y values for this LED based on a time variable that matches the speed of updates from beforeRender
    this offset should make it so that the LEDs that were generated most recently in beforeRender are always displayed first on the "leading edge" of the direction of motion
    determine the pixel in the supersample array that is closest to the offset x and y coordinate of this LED
    use the values from the supersampled arrays for HSV

In my current code I have an xspeed and a yspeed, but I think that’s a little optimistic. Part of why I like using a 1D array and treating it as 2D is that, for x-motion, if I know the index of the next pixel to render, I can just iterate through the indexes with i++ if I assume the 1D array maps into the 2D array row by row rather than column by column (I’m definitely not convinced it’s better to do it this way; that’s just my reasoning for keeping the 1D array). And if I’m only going to allow one dimension of motion at a time, then I can’t think of a reason to separate it into X and Y motion instead of just rotating the pixel mapping or the LED grid.

I’m definitely struggling with switching between variables that represent some amount of something, and variables that represent the index of something in 0-indexed arrays.

Again, really appreciate the feedback. It was very helpful.

I appreciate the effort, but you don’t want to do it that way.

beforeRender() is meant for non-specific pixel related things. So per “animation frame” it runs once.

render() (and that includes render2D, 3D) is meant for specific pixels. It literally runs thru the entire pixel set once for each pixel. So in a 8x8 matrix, it’ll run 64 times. In a string of 400 pixels, it’ll run 400 times.

If your goal is to reduce calculations (example: I only want to update mathematically expensive even pixels every even frame, and mathematically expensive odd pixels every odd frame, so I want to cache values and thus only do half the math each frame), then you’d just set a variable in beforeRender (is this an even or odd frame?), and, with an array to cache pixel info, either pull from the cache OR calculate the value and push it into the cache for the next frame; in either case, set the pixel as it needs to be set.
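A minimal sketch of that even/odd caching idea, in plain JavaScript with `hsv()` stubbed so it runs standalone (the names `cache`, `evenFrame`, and `expensiveHue` are illustrative, not Pixelblaze built-ins):

```javascript
var pixelCount = 64
var cache = new Array(pixelCount).fill(0)
var evenFrame = false

// Stub for the Pixelblaze hsv() call so the sketch is self-contained
function hsv(h, s, v) {}

// Stand-in for some expensive per-pixel calculation
function expensiveHue(index) {
  return (index / pixelCount) % 1
}

function beforeRender(delta) {
  evenFrame = !evenFrame // flip once per frame
}

function render(index) {
  var isEvenPixel = (index % 2) === 0
  if (isEvenPixel === evenFrame) {
    // recompute this pixel only on its matching frame, and cache it
    cache[index] = expensiveHue(index)
  }
  // either way, set the pixel: from fresh math or from the cache
  hsv(cache[index], 1, 1)
}
```

After two frames, every pixel has been computed exactly once, but each frame only did half the expensive math.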

Correct me if I’m wrong, @wizard, but for an unset value (say pixel 11), if I don’t do an RGB or HSV call during that loop, it’s populated with a zero, since there is no buffer of the old value, and WS2812 can’t skip a pixel; there must be a value to send, right?

Trying to stuff lots of pixel calc (like a loop thru all pixels) into beforeRender will make your PB very slow, that’s NOT the way the engine works.

Yes, that is the idea. Though in general you’d want to call one of the color functions at least once for every pixel and not think of it as an optional paint. If you want black, call rgb(0,0,0) or something to be explicit. It definitely does not remember the pixel value from the last animation frame. You can however overwrite the current pixel, so it’s fine to call rgb(0,0,0) first, then later conditionally call rgb or hsv and overwrite black with some color.

Yes and no. The engine only requires that you feed it some pixel data via one of the render functions. It is totally fine and valid to pre-process that in beforeRender, storing pixel data in an array or something, and then having a light weight render that does little more than echo back pixels from your pre-calculated data.

In some cases this is much easier and performant (such as selective painting) than doing all the work in render.

@Scruffynerf is totally right that if you are doing per-pixel work it is generally faster to do that in render rather than incurring the overhead of a buffer and loop in beforeRender.

Consider the performance difference between these 2 patterns:

This is the basic “new pattern” rainbow. It calculates a hue for every pixel.

export function beforeRender(delta) {
  t1 = time(.1)
}

export function render(index) {
  h = t1 + index/pixelCount
  s = 1
  v = 1
  hsv(h, s, v)
}

This modified version does pretty much the same thing, but does all the work in beforeRender, storing hues to a buffer array, which is then later used as-is. The array iterator method mutate is used here for speed, it’s faster than a for loop.

var pixelHues = array(pixelCount)

export function beforeRender(delta) {
  t1 = time(.1)
  pixelHues.mutate((value, index) => {
    return t1 + index/pixelCount //the same hue calculation as before
  })
}

export function render(index) {
  h = pixelHues[index] //use the pre-calculated hue
  s = 1
  v = 1
  hsv(h, s, v)
}

Here’s the performance comparison of these 2 methods on my current setup with 500 pixels, no LEDs (just benchmark CPU).

test             FPS   Pixels/sec   Relative
rainbow          242   121,000      100%
rainbow buffer   188   94,000       78%

This is a relatively trivial example with a very low cost calculation, where the loop and buffer overhead dominate the performance.

Consider a closer real-world example using KITT. Your PB probably has this installed, and there’s a youtube video where I live-coded this using a buffer.

Some time later, @jeff improved this with additional comments and fixed the skip-pixel issue (where the leader would move more than one pixel per animation frame). This version ships on V3.

Again, some time after that we all got to talking about how to do this without a buffer, and @jeff came up with this beauty. It’s perhaps a little more reliant on math, but comes in at only 15 lines of code, about half of the previous implementation (not counting comments).

Here’s the performance:

test               FPS   Pixels/sec   Relative
KITT buffer (v3)   115   57,500       100%
KITT bufferless    73    36,500       63%

So in this case, the buffered version is faster. The reason for this is that the bufferless KITT has to do the math for 2 pulses so that it looks like it bounces off the edge. The buffered version only has to fade out pixel values and draw a single leader pixel.

Compared to the cost of the rest of the pattern, the buffer and loop overhead are minimal.

You can also go hybrid, painting into a buffer/canvas during render. For an example, see the “Lissajous curve tracer” pattern. For each pixel, the distance gradient to a dot with a radius (which also serves to anti-alias it a bit) is calculated and painted to a canvas.


On the topic of supersampling: it comes in handy if you can generate more detail at a higher resolution than you could otherwise generate directly, or if you need to work in a large canvas due to the nature of your pattern, such as ones that move pixels around or blur.

To implement anti-aliasing, you need to have a downsampling method in render that uses multiple source input pixels.

This is the easiest-to-follow paper I could find: Filters for Common Resampling Tasks - Ken Turkowski

You are using nearest neighbor, so you won’t get any of the anti-aliasing benefits of supersampling. You could effectively render that pattern directly without any supersampling or buffers.
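As a hedged sketch of what such a downsampling step could look like, here is the simplest version: a box filter that averages the ss x ss samples behind each LED (names are illustrative; sizes mirror the thread's 8x8 matrix with ss = 8):

```javascript
// Box-filter downsampling from a supersampled buffer to one LED.
var xmax = 8, ymax = 8, ss = 8
var bigW = xmax * ss
var valArray = new Array(bigW * ymax * ss).fill(0)

// Average the ss x ss block of samples behind LED (lx, ly).
// This is the simplest anti-aliasing filter (a box filter);
// Turkowski's paper covers better ones.
function downsample(lx, ly) {
  var sum = 0
  for (var sy = 0; sy < ss; sy++) {
    for (var sx = 0; sx < ss; sx++) {
      sum += valArray[(ly * ss + sy) * bigW + (lx * ss + sx)]
    }
  }
  return sum / (ss * ss)
}
```

Note this costs 64 reads per LED per frame, which is exactly the kind of overhead to weigh against rendering the pattern directly.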

If you are only looking to solve this problem, I think you can go with a 1:1 buffer, selectively painting pixels as needed.

I hope some of that helps, but I think I might be missing part of what problem you are trying to solve for!


Thanks @wizard , that’s exactly why I tagged you. Your answer covered a lot of bits I skipped over.

Just a quick question about this, because I don’t really know how mutate works: is it still faster if the for loop only goes through exactly the items in the array that need to change? Like if there are only 64 pixels in the whole 4096-pixel array that I want to change, and the for loop only iterates through those 64 pixels? To use mutate on those 64 values, wouldn’t I have to put mutate inside a loop that also iterated 64 times, with the index in the mutate function = i?
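For concreteness, the kind of partial for loop I mean would be something like this sketch (sizes match my pattern; the wrap-around is an assumption about how I'd use it):

```javascript
// Only touch the 64 entries (one column's worth) of the
// 4096-entry array that actually changed this frame.
var arrayLen = 4096
var colHeight = 64
var hueArray = new Array(arrayLen).fill(0)

function updateColumn(startIndex, t1) {
  for (var i = startIndex; i < startIndex + colHeight; i++) {
    hueArray[i % arrayLen] = t1 // wrap so we never run off the end
  }
}
```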

I’m pretty sure I’m creating this problem for myself, but I’m not sure.

Let’s say I want a sine wave scrolling across the screen to change in wavelength over time. When the wavelength changes, the next frame renders with every peak on the screen now closer together, rather than just the newest pixels to appear on screen having the new wavelength. I’m guessing this is possible with some variable based on both time and index, with all of the things that change over time offset by some other time variable, but I’ve come up short trying to work out what that might look like.

So since I don’t want changes to be instantaneous across the whole screen, I figured I need an array to store the values to move down the grid, and a 1:1 buffer was definitely my first thought. The problem I saw was that the array would have to be supersampled along the axis of motion, because each frame update is much faster than the time it takes for the value from one pixel to have “moved” completely to the next pixel, and I want that smooth X-motion. So I didn’t see how I could have animation that only changed when new pixels were added AND had smooth X-motion without an array much wider than the real pixel grid. I figured this would also allow me to do more typical manual animation of individual pixels on top of the generated pattern, since I could manually animate pixels in the supersample array.

The actual use-case I had in mind when starting this is as the display for a random number generator, where something that looks like scrolling through a sine-gaussian wave moves across the screen. There would be an input from a geiger tube, and each detection would freeze one column of the pixels, so after x detections, the entire screen would be frozen with some random pattern, which would then be used to generate a number within a given range, and then a number could be displayed.

So I basically wanted temporal anti-aliasing through supersampling of X-values. I was planning to have the Y-axis of the supersample array match the number of real pixels, which makes the array 8x smaller and works the same as long as the motion is only in the X direction. Realistically, I expected to base the SS factor on the total number of pixels, so that a higher-resolution display would have less supersampling and the render time would stay relatively constant up to a point.

I’m definitely still working through the rest of the info in your post, but I at least wanted to reply to that part.

I see, so you are effectively moving all the pixels in your canvas over for each frame, then painting a new sine wave in the newest column.

Many ways to do this specific thing. If you generally want a scrolling 2D canvas like that, you can certainly do it. Nothing you said would make me think you need a supersampled canvas, though. If you did, but still used nearest neighbor, you would get animation aliasing. You wouldn’t need Y supersampling if you paint your sine wave in a way that is anti-aliased: you could draw into 2 pixels whenever it doesn’t land on a whole-number pixel index, splitting the strength proportionally. So for a value of 4.25 you would draw 75% in index 4, and 25% in index 5.
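That fractional split might be sketched like so (the helper name `paintAA` is mine):

```javascript
// Anti-aliased painting: a sine value lands at a non-integer
// row, so split its brightness between the two neighbors.
function paintAA(column, yPos, strength) {
  var lo = Math.floor(yPos)
  var frac = yPos - lo
  column[lo] += strength * (1 - frac)              // e.g. 75% into index 4
  if (frac > 0) column[lo + 1] += strength * frac  // and 25% into index 5
  return column
}
```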

Three methods come to mind. First, you could move every pixel over left or right, copying pixels, and always paint into the left- or rightmost column.

You could also implement a circular buffer, where you have an index pointing to the current “head” column, do your drawing there. To move, you just increase the head (wrapping if needed), and paint a new sine wave there. This has the advantage that you don’t need to move any pixels at all, just a single variable that points to the head. Rendering likewise starts at the head (or ends there depending on which direction you want it to move).

The more mathy version would be to store only the sine wave value/sample, instead of pixels, and render that to pixels in the render function. You could combine this with a circular buffer for speed.
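The circular-buffer option in particular might look roughly like this sketch (column-major storage and all names are illustrative choices, not a prescribed layout):

```javascript
// Circular buffer of columns: the pixel data never moves,
// only the head index does.
var COLS = 8, ROWS = 8
var canvas = new Array(COLS * ROWS).fill(0)
var head = 0 // index of the newest column

function scrollAndPaint(newColumn) {
  head = (head + 1) % COLS // advance, wrapping around
  for (var y = 0; y < ROWS; y++) {
    canvas[head * ROWS + y] = newColumn[y] // overwrite the oldest column
  }
}

// Map an on-screen x (0 = oldest column) back to buffer storage
function bufferColumn(screenX) {
  return (head + 1 + screenX) % COLS
}
```

Rendering reads through `bufferColumn()`, so scrolling costs one column write and one variable update per step instead of copying the whole canvas.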

Thanks! That gives me a lot to think about. The circular buffer is what I was planning originally for the code I posted in this thread, since it seemed like it would keep beforeRender relatively fast by not having to go through the whole array. Glad to know it will at least work in theory. I need to spend some more time thinking about the math of the whole thing and looking at other patterns.

I think I just like when the LEDs look like they’re holes with a more complex pattern behind them shining through as it moves under them, and that’s the look I’m trying to achieve.


Again, the bubble code did something similar, moving most of the array upwards for a scrolling effect.

Using the new array functions (mutate and so on) is going to be faster than a manual loop, if only because it lets the engine use optimized loop code to iterate through. So to make a scrolling array, you’d copy all elements in the right direction, and then set the edge ones as desired.


I think you can get this effect with many math-based approaches using gradients. You hit aliasing issues once you start working in whole-number pixels, quantized samples, etc. Unless aliasing is part of the desired look, of course!

That looks like it’s 2x vertical resolution?


Yeah, so I could build a bubble “offscreen” and float it upwards. Seemed like the easiest method for the effect I wanted.

I think I realized that this is much, much easier than I’m making it out to be, as long as the overall pattern I want is a loop. I can just adjust X over time to act as a window into a much larger pattern, without wasting time rendering pixels that won’t be used unless they are the nearest neighbor, or RAM on an array. Not sure why it was so hard for me to realize that was possible.

I’m guessing that’s along the lines of what you meant by:

Even if it’s not at all along the same lines as what you meant, trying to figure out what exactly that means and how to do it led me to this:

export function beforeRender(delta) {
  xmax = 8 //size of the actual x-side of LED array
  t1 = time(.2)
  
}

export function render2D(index, x, y) {
  v = 0
  x = x/50+(t1*(xmax-1)/(xmax-1)) // 1/(xmax-1) gives the separation between x values in the pixelmap, so this will make each full screen of real pixels = 1/xmax of the total length of the repeating pattern
  gauss = wave(x) // create a sine wave with a single peak over the entire length of the pattern
  detrend = wave(1-x) // create an offset  wave to keep the line centered
  y2 = wave(x*30) //draw a sine wave with frequency of 30 per full pattern
  y2 = y2-y2*gauss+0.5*gauss // scale y2's amplitude down as gauss grows, re-centering toward 0.5
  h = x //helps visualize the looping point
  if (abs(y2 - y) < .1){ // if distance of y from y2 is less than threshold
    v = 1-abs(y2 - y) // set v = value that is proportional to distance.
  }
  s = 1
  hsv(x, s, v*v)
}

Small tweak that will speed it up a wee bit:

Move (t1*(xmax-1)/(xmax-1)) into beforeRender
The value is consistent over the whole frame, you don’t need to keep recalcing it every pixel.
That right there will reduce math greatly

Coming back, because I just noticed that math is bogus anyway: it’s always equal to t1

Uh, or just use t1 since (xmax-1)/(xmax-1) == 1 :yum:


I am not a software guru nor a computer-graphics one; I am just an EE.
So, from the EE (HW) point of view, FPS depends heavily on the LED data rate and the number of
serially connected LEDs. For example, the WS2812b bit time is 1.25 µs and each LED
requires 24 bits, so each LED frame (pixel frame) takes 30 µs.
This means the per-pixel time budget for all math in render is limited to this 30 µs window.
My good guess is that in many cases (but not all) this should be OK. But if the math takes
longer than 30 µs, at least one pixel will be skipped and most likely will be off.
Assuming all per-pixel math is faster than 30 µs, for a strip with 300 LEDs the
entire strip frame will be 9.5 ms (a minimum of 0.5 ms is required for the strip reset between
frames). The resulting max FPS will be no faster than 105.3.
And this is the physical limitation for 300 WS2812b LEDs.
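The arithmetic above, written out with the same numbers (1.25 µs per bit, 24 bits per LED, 0.5 ms reset, 300 LEDs):

```javascript
// WS2812b frame-time arithmetic from the post above
var bitTimeUs = 1.25                      // µs per bit
var bitsPerLed = 24
var ledTimeUs = bitTimeUs * bitsPerLed    // 30 µs per LED frame
var leds = 300
var resetUs = 500                         // 0.5 ms latch between frames
var frameUs = leds * ledTimeUs + resetUs  // 9500 µs = 9.5 ms per strip frame
var maxFps = 1e6 / frameUs                // ~105.3 FPS hardware ceiling
```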

If all the math is done in the “beforeRender” function, it implies the need for a frame buffer,
and that may (and will) have an impact on max FPS: frame timing will be 9.5 ms plus whatever
it takes for all the calculations.
However, this (buffered) approach will guarantee processing for all pixels.
Many things (I guess even scrolling) could be done simply by manipulating
array index(es), which is fast.

I have no idea how tasks are split between the 2 cores, but if one core is used
just to support all HW functions, the second could be dedicated entirely
to processing the frame buffer. In that case 9.5 ms should be enough for
relatively complex animation.

In addition, to my eyes, timing control (i.e. FPS) is much easier with a function like this:

//
// ***** Frame Rate Timing *****
//

elapsedMs = 0
frameRate = 33 // desired minimum ms between buffer updates (pick to taste)

export function beforeRender(delta)
{
  elapsedMs = elapsedMs + delta
   
  if (elapsedMs > frameRate)
  {
    elapsedMs = 0
    updatePixelArray() // defined elsewhere: recalculates the frame buffer
  }
}

This is nothing more than my thought from HW point of view.

If the slope is > 1, there could be more than 2 pixels. So you might want to calculate sin() between the pixel columns, draw a line between left and right, and then light each pixel according to the length of the line segment in it. Or you could do the integral …

Just to be clear: if you do t1=time(whatever), or wave(time(whatever)), or any other thing like this, the value won’t change until the next frame, regardless of how long the render takes. It doesn’t matter where you do it, in render() or otherwise. Doing it in beforeRender() is where it will change once per frame. Doing it in render(), it’s still the same result whether you are looking at pixel #1 or pixel #3000, even if the loop between them takes a long time to run.
