Feature Req: Additional Dimension in Mapping

Feature request that I think we’ve talked about in person before:

Adding an additional dimension to the mapping function, so each pixel would have an x, y, z, and e mapping. This would allow adding ‘masks’ to patterns based on this 4th dimension.
The use cases I have for this are:

  • Turn signals and brake lights on my 3D mapped bike and connected wearables, without totally overriding the pattern being shown
  • Pathway lighting for whole-room lighting grids (like my Cloud Ceiling)
    • I want to be able to ‘flip a switch’ and have parts of my room/hallway (walkways) light up white for better visibility, while the pattern still runs
  • Adding text to a display (e.g. the There U Glow sign)

This extra dimension would be particularly useful alongside the ‘global’ code feature that others have requested.
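To sketch what that could look like (the e element and an extended render signature are hypothetical here, not an existing Pixelblaze API), the mask logic simulated in plain JavaScript might be something like:

```javascript
// Hypothetical sketch: "e" is an imagined 4th map element per pixel
// (0 = normal pixel, 1 = walkway/mask pixel). Not a real Pixelblaze API.
// wave() mimics Pixelblaze's wave(): a sinusoid scaled to 0..1.
function wave(x) { return (Math.sin(x * Math.PI * 2) + 1) / 2; }

function maskedBrightness(x, e, t1) {
  var v = wave(x + t1);             // the pattern's own brightness
  if (e == 1) v = Math.max(v, 0.5); // walkway pixels stay at least half on
  return v;
}
```

The idea being that the running pattern still shows through on masked pixels; it just gets a brightness floor (or a color override) applied on top.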

Workarounds I’ve tried/thought about:
For my bike and ceiling:

  • I could distort the mapping, adding a small value (e.g. 0.00001) to an existing dimension to mark the pixels for special use, but filtering for this later is clunky
  • Use very specific logic based on 3D coords to try to add the turn signal/brake light effect only to the areas I want, but this is not a general solution and would cause unintended effects with each new wearable I connect

For the sign:

  • I believe Hannah used the Z-dimension mapping to hard-code the text for the There U Glow sign. That strategy doesn’t work for 3D mapped objects, meaning I can’t have my 3D mapped wearables/art set up as followers of the sign

Hi @Allyctric!

Yeah, I could see that being handy. I think it would probably make sense to leave that 4th data element unnormalized, since it's mostly for tacking on extra data rather than representing coordinate space. Currently everything is set up for 16 bits per element, and I think integers make the most sense here, but maybe an 8.8 fixed-point number could work too (±127 with 1/256 resolution).
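For reference, 8.8 fixed point just means storing value × 256 in a signed 16-bit word: one byte of integer part, one byte of fraction. A quick sketch of the packing (plain JavaScript, for illustration only):

```javascript
// 8.8 fixed point in a signed 16-bit word: roughly -128..+127.996
// in steps of 1/256.
function toFixed88(x) {
  var i = Math.round(x * 256);          // scale by 2^8
  return Math.max(-32768, Math.min(32767, i)); // clamp to int16 range
}
function fromFixed88(i) { return i / 256; }
```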

Alternatively, I think a global dataset that was outside of the coordinate system might be cleaner and could be used for all kinds of things, like large data tables, images, etc.

The current workaround is to embed an array with the data you need into the pattern. We did that for the Lux Lavalier to encode the Fibonacci pixel index sequence for animations that rely on that instead of the 2D coordinates.
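In pattern code that workaround is just a literal array indexed by pixel, something like this (the array contents here are invented for illustration, not the actual Lux Lavalier data):

```javascript
// Embed-an-array workaround: per-pixel metadata lives in the pattern
// source instead of the map. Values below are made up for illustration.
var remap = [2, 0, 3, 1]  // e.g. a custom draw order for 4 pixels

function drawOrder(index) {
  return remap[index]
}
```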

Another mathy trick/workaround: break the pixels up into segments or quadrants that are spaced evenly apart. This could be done on the X and/or Y axes. Then apply a scaling factor n with the coordinate transformation system; use trunc() to get the segment and frac() to get the real coordinate. E.g. with 3 segments:

Map
[
[4,6],
[14,6],
[30,5],
[56,5],
[80,4],
[116,4],
[147,4],
[171,5],
[208,5],
[236,5],
[278,6],
[303,6],
[308,5],
[308,30],
[308,57],
[308,68],
[307,90],
[310,113],
[307,133],
[306,154],
[305,175],
[305,194],
[306,214],
[307,237],
[306,257],
[306,278],
[305,298],
[305,307],
[285,308],
[271,307],
[239,308],
[206,307],
[193,306],
[171,305],
[152,307],
[114,305],
[94,305],
[62,304],
[28,301],
[14,305],
[4,297],
[7,285],
[7,270],
[8,256],
[7,235],
[6,222],
[6,197],
[7,183],
[7,160],
[7,149],
[4,124],
[5,104],
[6,86],
[5,72],
[9,54],
[10,42],
[12,21],
[350,42],
[410,104],
[447,140],
[491,176],
[533,214],
[566,244],
[605,293],
[711,95],
[864,103],
[861,222],
[709,222],
[784,221],
[863,156],
[781,96],
[709,160],
[748,93],
[710,128],
[820,98],
[865,133],
[858,193],
[830,220],
[742,218],
[0,1],
[948,316]
]
scale(3, 1)

export function beforeRender(delta) {
  t1 = time(.1)
}

export function render2D(index, xi, y) {
  x = frac(xi)       // real x coordinate within the segment
  h = trunc(xi) * .2 // segment number, used here as a hue offset
  s = 1
  v = wave(x + t1)
  hsv(h, s, v*v)
}
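To spell out the split: after scale(3, 1) a pixel's mapped x lands in 0..3, so trunc() gives the segment (0, 1, or 2) and frac() gives the position inside it. Checked in plain JavaScript, with trunc/frac standing in for the Pixelblaze built-ins:

```javascript
// Stand-ins for Pixelblaze's trunc()/frac() for non-negative coordinates
function trunc(x) { return Math.trunc(x); }
function frac(x)  { return x - Math.trunc(x); }

// a pixel mapped at x = 1.35 is in segment 1, at ~0.35 within that segment
var segment = trunc(1.35);
var xLocal  = frac(1.35);
```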
