Sneak peek: coordinate transformation API

Continuing the discussion from Input needed: functions wishlist:

Here’s a sneak peek at the new APIs.

And part 2: port cubefire to use the new API and get an FPS boost!


Awesome. When can we expect this to arrive?

Reviewing the original thread, I see that this will likely be accompanied by some form of pixelmap API, which means the coordinate library issue I had (that you only have access to x,y(,z) info during render) will go away. Yay!

Strangely, having a scale/transform API means that cached values like polar coordinates (radius/theta) would be affected by those changes, and would likely require invalidating the cache, since the underlying x,y(,z) values would have changed.
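The invalidation concern can be sketched in plain JavaScript (none of these names are the real Pixelblaze API, they're purely illustrative):

```javascript
// Illustrative sketch: cache polar coordinates per pixel, and flush the
// cache whenever a scale/translate changes the underlying x,y values.
const polarCache = new Map();

function cachedPolar(index, x, y) {
  let entry = polarCache.get(index);
  if (!entry) {
    // First lookup for this pixel: compute and remember radius/theta
    entry = { r: Math.hypot(x, y), theta: Math.atan2(y, x) };
    polarCache.set(index, entry);
  }
  return entry;
}

// Any transform change invalidates every cached radius/theta pair,
// because the x,y values the cache was built from are no longer valid.
function onTransformChanged() {
  polarCache.clear();
}
```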

I’m looking forward to this API, even if it’ll entirely break what I’ve done so far. (thankfully, plenty will remain useful)

Probably 2+ weeks out. There are a few other features that need to get in, and I need to polish a few others, add docs, etc. I haven't added walking the map yet, but I hope to add it in a way that keeps the coordinate transformation in effect.

Do you find a large performance boost caching values vs calculating them?

Huge savings… With 256 pixels, easily 20-30 FPS saved, if not more (my example code drawing radius/angle stuff runs at almost 90 FPS), so with more pixels the savings would only get bigger.

There is no question: caching is worthwhile, and if we can populate the cache earlier than render, so that render doesn't even have to check for a valid cache on every loop through the pixels, all the better. Moving that work to beforeRender or even setup would only help.

Even if the cache is invalidated (via a scale/transform), rebuilding it occasionally is still faster than recalculating on every frame.

Ideally, you'd only calc/cache the bits you need. I'm planning on my example doing everything: polar, cylindrical, 2D, 3D.
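The "cache once, render cheap" idea above can be sketched in plain JavaScript (illustrative names, not the real API): fill the cache on the first full pass through the pixels, then flip a single flag so later frames skip the per-pixel validity check entirely.

```javascript
// Assumptions: 256 pixels (from the post above) and coordinates
// centered on (0.5, 0.5), as in the example patterns in this thread.
const pixelCount = 256;
const rCache = new Array(pixelCount);
const aCache = new Array(pixelCount);
let cacheReady = false;

function renderPixel(index, x, y) {
  if (!cacheReady) {
    // Only the first frame pays for the trig
    rCache[index] = Math.hypot(x - 0.5, y - 0.5);
    aCache[index] = Math.atan2(y - 0.5, x - 0.5);
    // After one full pass, stop checking per pixel
    if (index === pixelCount - 1) cacheReady = true;
  }
  return { r: rCache[index], a: aCache[index] };
}
```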

@Scruffynerf,
I wonder if there are performance differences between V2 and V3 that would make it a large improvement.

On a V3, if I compare your caching pattern vs. this one, I see a ~5% speedup for caching. Just taking out the translation step flips that to a ~5% speedup for the calculated version over the cached one (and makes me wonder about optimizing operations like x -= 0.5).

I’m getting 89.5 FPS on my 16x16 SK6812 panel using your code.

This gets me 85.2 FPS, centering and calculating every time (~5% slower):

function render2DCalc(index, x, y) {
  // Center coordinates on the middle of the panel
  x = x - .5
  y = y - .5
  // Convert to polar for every pixel, every frame
  r = hypot(x, y)
  a = atan2(y, x)
  // Sweep a hue-shifting ring segment between radius .3 and .4
  if (r <= .4 && r >= .3 && (a >= (t1 * PI2) - PI)) {
    h = t1 + index / pixelCount
    hsv(h, 1, 1)
  } else {
    hsv(0, 0, 0)
  }
}

Using the new transform stuff to center coordinates, still calculating polar per pixel, gets me 90.4 FPS. Not too bad if you consider that this enables a full 3D affine transform for every pixel. The code looks a little different, setting up the centering once in the main code body:

// Centering is set up once, not per pixel
resetTransform()
translate(-.5, -.5)
//...
function render2DCalc(index, x, y) {
  // x,y arrive pre-centered by the transform
  r = hypot(x, y)
  a = atan2(y, x)
  if (r <= .4 && r >= .3 && (a >= (t1 * PI2) - PI)) {
    h = t1 + index / pixelCount
    hsv(h, 1, 1)
  } else {
    hsv(0, 0, 0)
  }
}

What if we had render2DPolar(index, x, y, radius, angle), where the framework does all the math for every pixel, transforms included? I got curious, hacked render2D, and it looks like we'd get something like 98.4 FPS!
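A plain-JavaScript sketch of what that could look like; the render2DPolar name and signature come from the post, but the framework-side wiring here is an assumption:

```javascript
// Hypothetical framework loop: apply the centering transform and compute
// radius/angle once per pixel, then hand everything to the pattern,
// so pattern code gets polar coordinates for free.
function frameworkRender(pixels, pattern) {
  for (const p of pixels) {
    const x = p.x - 0.5, y = p.y - 0.5;    // centering transform
    pattern(p.index, x, y, Math.hypot(x, y), Math.atan2(y, x));
  }
}

// A pattern can then test the ring condition without any per-pixel trig
function render2DPolar(index, x, y, radius, angle) {
  return (radius <= 0.4 && radius >= 0.3) ? angle : 0;
}
```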


I’m using a v3, so…

Let me do up a good test case. I agree: optimizing and using things like translate could make a lot of this obsolete… The real question is caching vs. no caching.