Apologies if this has been discussed before. I searched for “polar” and “spherical” but didn’t see a definitive answer to my question.
I’d like to create a playlist of patterns for pixels that sit on the surface of a sphere (the project is described in “Show and Tell” here). Some patterns use Cartesian coordinates via
render2D(index, x, y); others (particularly those that use the sensor board’s accelerometer) will use each pixel’s spherical coordinates.
The Cartesian pixel map works fine. For the patterns that use spherical coordinates, I’m trying to avoid recomputing them in every single call to
render2D, since I assume the inverse-trig calls are computationally expensive.
Is there a best practice for storing the spherical coordinates somewhere in the pattern code or elsewhere?
I’ve seen the trick of simply storing the spherical coordinates in the mapper instead of the Cartesian ones, but I’ll need both Cartesian AND spherical coordinates, depending on the pattern.
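For reference, the mapper-side version of that trick might look something like this. (The map function runs as full JavaScript in the browser, so standard `Math` calls are available there; `sphericalMap`, its input positions, and the normalization to 0..1 are all my own assumptions, not anything from the Pixelblaze docs.)

```javascript
// Hypothetical mapper helper: convert Cartesian positions on a unit sphere
// into normalized (azimuth, inclination) pairs, so that render2D(index, x, y)
// would receive spherical coordinates directly instead of Cartesian ones.
function sphericalMap(positions) {
  // positions: array of [x, y, z] points on a unit sphere centered at origin
  return positions.map(function (p) {
    var azimuth = Math.atan2(p[1], p[0]) / (2 * Math.PI) + 0.5; // 0..1 around the equator
    var inclination = Math.acos(p[2]) / Math.PI;                // 0 at north pole, 1 at south
    return [azimuth, inclination];
  });
}
```

The limitation is exactly the one above: once the map stores angles, the Cartesian positions are gone, so patterns that want both are out of luck.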
My plan is to create global arrays in the pattern code for the normalized spherical coordinates and initialize every entry to a negative value (valid normalized coordinates are non-negative, so a negative entry means “not yet computed”). Then, in each call to
render2D(index, x, y), check whether the values for that pixel index have already been computed; if not, compute and store them in the global arrays before returning from render2D.
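Here’s a minimal sketch of that lazy-cache plan, written as plain JavaScript so it runs outside Pixelblaze. In actual pattern code I’d use Pixelblaze’s own `array(pixelCount)` and bare math built-ins (no `Math.` prefix, and assuming `atan2` is available in the firmware); the conversion shown and the 0.5-centered coordinates are assumptions about my map, and `computeCount` is instrumentation for the sketch only.

```javascript
// Hypothetical stand-ins for Pixelblaze built-ins so this runs as plain JS.
var pixelCount = 3;
function array(n) { return new Array(n).fill(0); }

var UNSET = -1;                  // normalized angles are >= 0, so -1 is a safe sentinel
var azimuth = array(pixelCount); // one cached value per pixel
for (var i = 0; i < pixelCount; i++) azimuth[i] = UNSET;

var computeCount = 0;            // counts how often the expensive branch runs

function render2D(index, x, y) {
  if (azimuth[index] < 0) {
    // First visit to this pixel: pay the inverse-trig cost exactly once.
    // (Assumes map coordinates centered on 0.5 -- adjust for your map.)
    azimuth[index] = Math.atan2(y - 0.5, x - 0.5) / (2 * Math.PI) + 0.5;
    computeCount++;
  }
  // ...use azimuth[index] to drive the pattern, e.g. as a hue...
}
```

On every frame after the first, render2D skips straight to the cached value, so the inverse trig runs pixelCount times total rather than once per pixel per frame.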
This seems workable, but I was wondering if there might be a better way within the Pixelblaze system?