Input needed: functions wishlist

Oooh Nice - Just wanted to lend my +1 to accessing map data in beforeRender, and sound normalizing (perhaps on the sensor board itself).


Sorry, couldn’t resist. I’ll mull this over and reply for real in the next few days. Brainstorming.

Pixel map dimensions

Remapping 2D and 3D pixels is very nice, though we have a lot of 1D patterns out there without a good system for remapping. The index/pixelCount idiom is also somewhat unintuitive to new pixel enchanters. So I’m thinking either render should be passed x, or I should add render1D(index, x). At some point I will add support for 1D pixel maps too, with a default pixel map where x is index/pixelCount, which would make it very easy to upgrade existing patterns.
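A minimal sketch of that default mapping (pixelCount is stubbed here so it runs standalone; render1D itself is only a proposal):

```javascript
// Proposed default 1D map: with no pixel map installed, x would simply be
// index / pixelCount. pixelCount is hard-coded for this sketch.
var pixelCount = 8

function defaultX(index) {
  return index / pixelCount
}

// An existing index/pixelCount pattern could then take x directly, e.g.:
// export function render1D(index, x) { hsv(x, 1, 1) }
```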

It would be good to have information about an installed pixel map, so I would like to add something to help out:

  • has2DMap() - true/false
  • has3DMap() - true/false

or perhaps

  • pixelMapDimensions() returns 1, 2, or 3. Even if no map is installed, we’d assume the default pixel map of index/pixelCount.

Oh look, a poll! Vote for an API to get info about an installed pixel map

Choose an API to get info about an installed pixel map
  • has2DMap() / has3DMap()
  • pixelMapDimensions()
  • 🤷 why not both?

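If it helps picture it, here’s how a hybrid pattern might branch on that (pixelMapDimensions() is stubbed below since it’s only a proposal):

```javascript
// Stub standing in for the proposed pixelMapDimensions(); on a real
// Pixelblaze this would reflect the installed map (1, 2, or 3).
function pixelMapDimensions() { return 2 }

// Pick which renderer a hybrid pattern should rely on
function pickRenderer() {
  switch (pixelMapDimensions()) {
    case 3: return "render3D"
    case 2: return "render2D"
    default: return "render"  // default 1D map: x = index / pixelCount
  }
}
```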


Ooh, I’ve got one I just hit: there’s a built-in assumption in most code that the 2D map is equally dimensioned (the x and y dimensions are the same size, i.e. a square matrix).

I have an 8x32 APA102 panel I hadn’t played with yet, and I just tested the 18x32 mini WS2812 panel I got (yes, that’s a weird size)… And of course, things are stretched out to fit even with the map. Adjusting the ratio is going to take some work, and I’ll try to come up with a clean way. (Since my goal is to write 2D patterns, a “squashed” display will look weird unless it’s corrected for.)

Some of the suggestions above include a way to access the non-normalized map… that could help. Still brainstorming how best to fix this.

In the vein of Ben’s additions, I can see exposing either the dimensional sizes (I have 18 in X and 32 in Y, and PB learns that either as part of the map or as a value in settings), or a ratio.

BTW, this problem won’t be an issue for polar mapping… that’s one awesome advantage of it. “Wide” or irregularly sized matrices “just work”.

I do not plan to access the unnormalized map, but do plan for APIs to access the pixel map with the same kind of data that is available to render.

For aspect ratio, I have some ideas :)

Map walking functions.

This lets you walk through the pixel map outside of a render call and could be used to make a pattern 2D/3D aware yet still only export render or pre-process pixels.

  • mapPixels(fn) - walk through the pixels with pixel map coordinates. fn is invoked with 4 arguments: (index, x, y, z) though you can specify fewer. If no pixel map is installed, x will be the same as index/pixelCount, and y and z will be 0. For 2D pixel maps z would be 0.

And/or perhaps:

  • map2DPixels(fn) - walk through the 2D pixel map, if installed (if not, nothing happens). fn is invoked with 3 arguments: (index, x, y).
  • map3DPixels(fn) - walk through the 3D pixel map, if installed (if not, nothing happens). fn is invoked with 4 arguments: (index, x, y, z).

Oh look a poll!

Vote for an API to walk a pixel map outside of render
  • mapPixels(fn) one size fits all
  • map2DPixels(fn) / map3DPixels(fn) an API for every dimension
  • 🤷 why not both?

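Roughly, a reference implementation of the mapPixels idea might look like this (pixelMap’s shape and the fallback behavior are my assumptions about the normalized map data):

```javascript
var pixelCount = 4
var pixelMap = null  // e.g. [[0,0,0], [0.5,0,0], ...] when a map is installed

// Sketch of what mapPixels(fn) could do internally: walk every pixel,
// passing map coordinates, or the index/pixelCount default with no map.
function mapPixels(fn) {
  for (var i = 0; i < pixelCount; i++) {
    if (pixelMap) {
      var p = pixelMap[i]
      fn(i, p[0], p[1] || 0, p[2] || 0)  // 2D maps get z = 0
    } else {
      fn(i, i / pixelCount, 0, 0)
    }
  }
}
```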

Pixel Map Coordinate Transformation

The idea here is that the pixel map can be transformed before coordinates are given to render2D/3D and mapPixels, and perhaps to render/render1D’s x parameter as well. This will help reduce common coordinate transformations that have to be done in pattern code. A set of transformations could be applied globally on the mapper tab, and perhaps per pattern as well, as a setting like controls. This would let you center the coordinate space around 0, apply a non-square aspect ratio, or scale and translate the map to a segment of the overall world coordinate space for multi-PB cooperative rendering, all without code.

In addition to global transforms, the pattern would have access to an API to apply pattern-specific transformations as well. It would make things like GlowFlow much simpler and faster, and allow for cool effects with less code. Rotate, zoom, and pan around with ease and speed in 2D or 3D. Even scale and offset in 1D would be cool.

  • resetTransform() - resets coordinate transformations back to the default or global transformation state. I might also need an applyTransforms() due to the non-commutative nature of matrix composition, but I’ll cross that bridge when I get to it :)

For the API, it’s tempting to have separate functions for 2D and 3D, though for a hybrid-compatible pattern, working with the individual axes might reduce overall code footprint.

  • translateX(dx) - move on the X axis. Works on 1D and up
  • translateY(dy) - move on the Y axis. Works on 2D and up
  • translateZ(dz) - move on the Z axis. 3D only
  • scaleX(x) - scale the X axis. Works on 1D and up
  • scaleY(y) - scale the Y axis. Works on 2D and up
  • scaleZ(z) - scale the Z axis. 3D only

This gets a bit unintuitive with rotation. There isn’t much point to rotating in 1D, right? Rotation in 2D transforms both X and Y (about the Z axis). It might be weird to see rotateZ() in 2D patterns, so perhaps rotate() would be more natural for 2D.

  • rotate(a) aka rotateZ(a) - rotate along the Z axis. Works on 2D and up
  • rotateX(a) - rotate along the X axis. Works in 3D only
  • rotateY(a) - rotate along the Y axis. Works in 3D only
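To see why something like applyTransforms() might be needed, here’s a tiny sketch (plain functions, angles in radians; Pixelblaze’s actual angle units may differ) showing that translate-then-rotate is not rotate-then-translate:

```javascript
// Each transform is a function on a 2D point; compose applies left to right.
function translate(dx, dy) {
  return (p) => [p[0] + dx, p[1] + dy]
}
function rotate(a) {  // about the Z axis, angle in radians
  return (p) => [p[0] * Math.cos(a) - p[1] * Math.sin(a),
                 p[0] * Math.sin(a) + p[1] * Math.cos(a)]
}
function compose(...fns) {
  return (p) => fns.reduce((q, f) => f(q), p)
}

// Same two transforms, opposite order, different results:
var a = compose(translate(1, 0), rotate(Math.PI / 2))([1, 0])  // ~[0, 2]
var b = compose(rotate(Math.PI / 2), translate(1, 0))([1, 0])  // ~[1, 1]
```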

+1 to several of the above regarding access to the map from beforeRender – it would certainly be useful to know the map’s “real” dimensions in order to determine aspect ratio.

Also, I can think of times I’d like to be able to do calculations on pixels in an array outside of render and know where that was going to fall in coordinate space on the display, regardless of wiring.

Re @scruffynerf’s fade(): it would actually be nice to have an iterator that lets us hand any expression to the VM to perform over an array. (And some way to get array size would be good too.)


I’d rather tackle the aspect ratio with the scaling methods mentioned above. So, e.g., a 32x16 array is 2:1, so you could either get X coordinates with a magnitude of 2 (0 to 2, or -1 to +1) or Y coordinates with a magnitude of 0.5 (0 to 0.5, or -0.25 to +0.25, or something like that).

There are times where you do want pixel level coordinates, the tixy stuff comes to mind, or any 1:1 pixel art, but I think these can still be done with appropriate map scaling (and perhaps snapping to integer values).

Another way to do it would be to allocate an image buffer that could be drawn into. This could be a simple array where you know the pixel arrangement, such as rows and columns, or a canvas-like object with an API. You could then subsample or supersample from that image buffer inside render, which would still let you have arbitrary coordinates for your physical pixels. Your 32x32 Nyan Cat could then be rendered to any pixel setup (which would already have aspect ratio corrections).
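As a rough sketch of that subsampling idea (row-major array, nearest-neighbor lookup by normalized coordinates; the dimensions and function names are arbitrary):

```javascript
var W = 4, H = 2
var buf = new Array(W * H).fill(0)  // row-major image buffer

function setPixel(px, py, v) {
  buf[py * W + px] = v
}

// Nearest-neighbor subsample: x and y in 0..1 regardless of buffer size,
// so physical pixels with arbitrary map coordinates can read from it.
function sample(x, y) {
  var px = Math.min(W - 1, Math.floor(x * W))
  var py = Math.min(H - 1, Math.floor(y * H))
  return buf[py * W + px]
}
```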


Can you explain mapPixels() better? I’m not sure I grasp how you mean it to be used.

I agree, scaling is the right way, and ideally you would scale based on the largest dimension equaling 1, to preserve the 0…1 scale universally. (You could scale to the smaller and have values above 1, but that requires all of your patterns to be aware of that… I’d rather count on 0…1 being boundaries. Far easier.)
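Concretely, that “scale by the largest dimension” rule could look like this (the W/H framing and integer pixel coordinates are my own):

```javascript
// Normalize integer pixel coordinates on a W x H grid: both axes share one
// scale so the aspect ratio is preserved and nothing exceeds 0..1.
function normalize(px, py, W, H) {
  var m = Math.max(W, H)
  return [px / m, py / m]
}

// On a 32x16 panel, x runs 0..31/32 while y only reaches 15/32 (~0.47),
// keeping 0..1 as a universal boundary.
```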


There are definitely advantages to having a pixel buffer where you could read/write the state. Otherwise, for those applications, you just end up building an array to do it anyway.

If you add it, and it’s faster than:

  • set up / create an array,
  • populate the array with initial values if needed,
  • manipulate the array in beforeRender,
  • push the array to pixels in render,

then it’s worthwhile to have a built-in buffer instead.

Maybe a (faster?) render() replacement that just flushes the buffer to pixels?

On direct frame buffer access: sometimes, I want this. Then I start thinking about color order, the whole RGBW thing, the extra APA brightness bits… Once your pattern has to worry about hardware details, portability becomes a lot more work. (I’d take access to the final pre-hardware 0-1 scaled hsv buffer if you gave it to me though.)

Transforms: I’ve come to like Processing’s pushMatrix()/popMatrix() model – call pushMatrix() to save the current set of transforms, then do whatever additional things you’re doing in local coordinate space with translate() and rotate(). When you’re done, call popMatrix() to restore the previous context.
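A toy version of that stack (offsets only, to keep it short; real transforms would compose matrices, and these names mirror Processing rather than any existing Pixelblaze API):

```javascript
var stack = []
var current = { dx: 0, dy: 0 }  // the active transform (translation only here)

function pushMatrix() { stack.push({ dx: current.dx, dy: current.dy }) }
function popMatrix() { current = stack.pop() }
function translate(dx, dy) { current.dx += dx; current.dy += dy }

// Apply the current transform to a point
function apply(x, y) { return [x + current.dx, y + current.dy] }
```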

Simplicity: Accessibility to new users is, I think, my favorite Pixelblaze feature. I give these things to more-or-less non-technical friends. And they love them!

Whatever you do in the advanced 2D/3D space, it should impose no new burden on users with more basic needs. Hopefully, they’ll see these new tools, be curious, and learn them in their own time, but explaining nested coordinate systems, proper order of scaling, rotation & translation… it’s a lot to take in up front.


inoise1d(x), inoise2d(x,y), inoise3d(x,y,z) implemented as simplex noise, the successor to Perlin noise.
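Simplex itself is a fair bit of code, but for a feel of the API shape, here’s a tiny 1D value-noise stand-in (not simplex; the sin-based hash is just a common shader trick):

```javascript
// Hash an integer lattice point to a repeatable pseudo-random 0..1 value
function hash(i) {
  var h = Math.sin(i * 127.1) * 43758.5453
  return h - Math.floor(h)
}

function smooth(t) { return t * t * (3 - 2 * t) }  // smoothstep easing

// 1D value noise: interpolate smoothly between hashed lattice values
function inoise1d(x) {
  var i = Math.floor(x), f = x - i
  return hash(i) * (1 - smooth(f)) + hash(i + 1) * smooth(f)
}
```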


The pixel map is a list of coordinates where the index of the element matches the pixel index. This function lets you iterate over the coordinates. It would be very similar to doing forEach() on the object your pixel code/json map returned (after normalization).


//during init
mapPixels((i, x, y, z) => {
  // called once for every pixel in the map, just like render2D/3D would be
})

export function beforeRender(delta) {
  //during beforeRender
  mapPixels((i, x, y, z) => {
    // called once for every pixel in the map, just like render2D/3D would be
    // maybe implement a radial fade for those cool tree toppers
  })
}

export function sliderNoiseDensity(scale) {
  //in a control, e.g. to regenerate a noise map
  mapPixels((i, x, y, z) => {
    noiseMap[i] = generateNoise(x * scale, y * scale)
  })
}

mapPixels() would pretty much cover my universal iterator request. Great!

a function like FastLED’s beatsin/beatsin16(beats_per_minute, low_val, high_val, timebase, phase_offset)

I think that’s doable in code now. wave() returns 0…1, so you’d have to add the range, offset, and timebase yourself (it’s all math, but I admit I don’t know how that last one works).

Yes, it’d be faster if it were internal, but maybe just add options to wave()? And call it equivalent?

@wizard posted this a while back:

//       beatsin(BPM, low, high) returns a value that
//                    rises and falls in a sine wave, 'BPM' times per minute,
//                    between the values of 'low' and 'high'.
function beatsin(bpm, low, high) {
  // 0.91552734375 = 60 / 65.536: time(x) has a period of x * 65.536 seconds,
  // so this makes one full cycle take 60/bpm seconds
  return wave(time(0.91552734375 / bpm)) * (high - low) + low
}
Just applying a little arithmetic to wave(). I’ve used something along these lines in a few patterns to build low frequency waves using a time unit that’s easy to think about in terms of how it works with musical tempo. Totally could do without having this as a built-in though.
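For the timebase and phase_offset arguments from the FastLED version, a sketch built on plain Math.sin so it runs anywhere (nowMs stands in for the pattern’s clock; on Pixelblaze you’d lean on time()/wave() instead, and all names here are mine):

```javascript
// FastLED-style beatsin: sine wave between low and high, bpm times a minute.
// timebaseMs shifts where "beat zero" falls; phase is in whole-wave units (0..1).
function beatsin(bpm, low, high, nowMs, timebaseMs, phase) {
  timebaseMs = timebaseMs || 0
  phase = phase || 0
  var beats = ((nowMs - timebaseMs) / 60000) * bpm  // beats elapsed so far
  var w = (Math.sin((beats + phase) * 2 * Math.PI) + 1) / 2  // 0..1
  return w * (high - low) + low
}
```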


@wizard, the clock functions need something for sub-seconds… if I could call a function like ClockSeconds for smaller values, like milliseconds, that would be fine.

Stepping away from graphics entirely, is there anything we can do in the realm of inter-device communication? We have this marvelous situation where there are small, inexpensive computers and sensors to do almost any job you can name. Pixelblaze shouldn’t have to do all the heavy lifting on its own!

I’m still new to the microcontroller universe – are there software standards for inter-device data exchange that Pixelblaze could implement? If there’s no existing standard, maybe you could let users configure a GPIO pin for a low-speed serial variable get/set (and pattern-changing) protocol. Just making the existing websocket protocol available over serial would be useful.
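For flavor, the kind of line-based get/set framing that could ride over such a serial link might look like this (command names and framing are entirely made up, just to make the idea concrete):

```javascript
var vars = {}  // exported pattern variables, keyed by name

// Parse one text line of a hypothetical serial protocol; return the reply
function handleLine(line) {
  var parts = line.trim().split(/\s+/)
  if (parts[0] === "set" && parts.length === 3) {
    vars[parts[1]] = parseFloat(parts[2])
    return "ok"
  }
  if (parts[0] === "get" && parts.length === 2) {
    return parts[1] in vars ? String(vars[parts[1]]) : "err unknown"
  }
  return "err syntax"
}
```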