Programming Question

Hi @Vitaliy! Welcome to the forums!

(PS: I don’t have access to a running Pixelblaze at the moment to validate these - if anyone sees a problem, feel free to DM me and I’ll correct it)

I like to put this kind of code before beforeRender(delta); or, if I wrap it in a function, I can put it anywhere and then call it on the last line of the pattern. For example:

var hues = array(pixelCount)

// Let's initialize the hues to a random red-yellow hue per pixel. This only runs once.
hues.mutate(() => random(.2))

export function beforeRender(delta) {
  pulse = time(5/65.536) // 5 second sawtooth cycle
}

export function render(index) { 
  hsv(hues[index], 1, pulse)
}

Wrapped and called on the last line:

var hues = array(pixelCount)

function init() {
  // Let's initialize the hues to a random red-yellow hue per pixel. This only runs once.
  hues.mutate(() => random(.2))
}

export function beforeRender(delta) {
  pulse = time(5/65.536) // 5 second sawtooth cycle
}

export function render(index) { 
  hsv(hues[index], 1, pulse)
}

init()


Sure - there are a few ways. Hopefully these are clear:

var arr2D = [[0, 0], [0, 1], [1, 0], [1, 1]]
arr2D[3][1] = 2 // now [[0, 0], [0, 1], [1, 0], [1, 2]]

var arr2D = array(3)
arr2D.mutate(() => array(2))
arr2D[2][1] = 2 // now [[null, null], [null, null], [null, 2]]

// A for loop works too
var width = 16
var height = pixelCount / width
var arr2D = array(width)
for (i = 0; i < arr2D.length; i++) arr2D[i] = array(height)
// Can access with arr2D[col][row]



Nope! We usually create our own if one is needed, though you’ll find some occasional functional bliss in discovering situations where you thought you needed one but you don’t.



🙋‍♂️ Welcome to the club. Except in my case, I’m an inexperienced EE.

Hi @Vitaliy,
Here’s my take on a 2D canvas in a pattern; I’ve used something similar in many patterns.

2D canvas example.epe (7.9 KB)

/* Rendering into a "canvas" example
 *
 * In this example, an array is created to represent a 2D canvas with
 * x and y values in world coordinates, which are values from 0 to 1 exclusive.
 * The canvas is then scaled and drawn to the LEDs. The LEDs could match 1:1
 * or could be some other size, or even a non uniform layout.
 * The canvas is set up with 2 arrays, one for values, one for hues, which
 * are then fed to hsv() during render2D.
 * 
 * This example draws a dot traveling around the circumference of a circle,
 * leaving a fading trail of color.
 */

var width = 8
var height = 8
var numPixels = width * height
var canvasValues = array(numPixels) //make a "canvas" of brightness values
var canvasHues = array(numPixels) //likewise for hues
var fade = .95


//find the pixel index within a canvas array
//pixels are packed row by row (row-major order)
function getIndex(x, y) {
  return floor(x*width) + floor(y*height)*width
}

function isIndexValid(index) {
  return index >= 0 && index < numPixels
}

export function beforeRender(delta) { 
  //fade out any existing pixels
  canvasValues.mutate(p => p*fade) //TODO fade based on delta for consistent fade

  //draw into the canvas here
  //this draws a pixel moving in a circle
  //radius 1/3rd, centered at 0.5, 0.5
  var a = time(.01) * PI2
  var r = 1/3
  var x = sin(a) * r + 0.5 
  var y = cos(a) * r + 0.5
  
  //optionally, you can make the pixels "wrap" around to the other side if out of bounds
  // x = mod(x,.99999)
  // y = mod(y,.99999)
  
  //calc this pixel's index in the canvas based on position of our coordinate
  var index = getIndex(x, y)
  
  //check that the coordinate is within bounds of the canvas before using it
  if (isIndexValid(index)) {
    canvasValues[index] = 1
    canvasHues[index] = time(.015)
  }
}

export function render2D(index, x, y) {
  index = getIndex(x, y) //calc this pixel's index in the canvas based on position
  h = canvasHues[index]
  v = canvasValues[index]
  hsv(h, 1, v*v)
}
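
A possible tweak for that fade TODO: scale the fade exponent by delta so the decay per second stays constant at any frame rate. A minimal sketch, assuming the original fade constant was tuned for roughly 60 FPS (16.67 ms frames):

//replace the fade line in beforeRender with a delta-scaled version
canvasValues.mutate(p => p * pow(fade, delta / 16.67))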



Thank you very much for the quick response and valuable tips.

I had tried putting the init() function everywhere except at the end.
I will try this and see how it works.

For the 2-dimensional array I tried many different ways, but
I never saw an example like yours: arr2D[x][y].
For reading my static array I tried arr2D[x][y]; this worked
for getting values but failed for setting them.
My current implementation uses a for loop.

I will try your example.

So, you are saying that normally I don’t need a frame buffer.
This is a very interesting thought.
For my first exercise I tried to create a Color Wipe pattern:
fill the strip with one color from left to right, change the color
at the end, then fill the strip with the new color from right to left.
With a frame buffer it was all done in 10 minutes.
But even after looking through many different examples,
I still have a hard time implementing this algorithm without
a frame buffer. It looks like my EE thinking is getting in the way.


Hi @wizard,

Thank you very much for the example.
I will need some time to understand the details.


Heh, you aren’t the only one who does that… I post all the time without a PB in front of me.


Bare code not inside a function just runs once, at pattern start. Using @jeff’s init() example is good practice though.

From personal experience, performance of 2D (nested) arrays can be very slow. You might want to use separate 1D arrays if possible. Sometimes a frame buffer makes things easier, but often it’s unneeded. For an example of frame-buffer-like behavior, see Multimap multi-pattern.
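
For example, one flat array with computed indices instead of an array of arrays; a quick sketch (the width and height values here are just for illustration):

var width = 16
var height = pixelCount / width
var vals = array(width * height) // one flat array for the whole grid

// index math replaces the nested lookup
function setVal(x, y, v) { vals[x + y * width] = v }
function getVal(x, y) { return vals[x + y * width] }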


Hi @Vitaliy,
Here’s an example of one way to do a bi-directional color wipe without using a frame buffer. This approach has the advantage of using very little memory and is completely resolution independent. It takes a little adjustment to start thinking in this way, using waveforms and timers instead of loop counters, but it is very flexible and rewarding in the long run.

// random bi-directional color wipe
// ZRanger1 12/22/2021

var bgHue,wipeHue;
var head = 0;
var lastHead = 0;
var dir = 0;
var indexCompare;

export function beforeRender(delta) {
  // use a sawtooth wave generated by the time() function to determine the leading
  // edge of our wipe as a percentage (0.0 to 1.0) of the total number of pixels.
  // time(0.015) traverses the strip in about a second.  Use a lower value to go faster, higher to go
  // slower.  
  head = floor(time(0.015) * pixelCount);
  
  // the sawtooth wave will restart when we've completed a pass.  Rather than
  // trying to catch the wave at zero, which might be difficult, we just see if
  // the new value is less than the old one, indicating that we've restarted.
  if (head < lastHead) {
     
    wipeHue = bgHue;
    bgHue = time(0.05);
    
    // switch directions on every pass
    dir = ~dir;
  }
  
  lastHead = head;        
  
  // set indexCompare to the actual pixel index at the front of our wipe.
  indexCompare = (dir) ? head : pixelCount - head;
}

export function render(index) {
  // hue is determined by the position of the current pixel relative to the
  // "head" pixel.
  var h = (index <= indexCompare) ? wipeHue : bgHue;
  
  // set the pixel.
  hsv(h, 1, 1)
}

Hi @zranger1,

Thank you very much for the example.
I will play with it tomorrow and try to understand the buffer-free programming approach.

BTW,
I am using your Pixelblaze driver for the HE.
It works very well.
Thank you for it.


Hi @zranger1,

I tried your code as-is and yes, it worked as expected.
I played a bit with the time parameters and found that the
timing control is not intuitive; it is very easy to break the
entire pattern’s behavior. I understand this code was a very
quick demo sketch of how to achieve the same result without
a frame buffer. So yes, what I was planning to do can be done
without a frame buffer. But it looks to me (I might be wrong)
like the code is not intuitive or easily maintainable, because
there are too many dependencies between different portions of
the code. On the other hand, to my (EE, not SW) eyes, using a
frame buffer is a very straightforward, easy to understand
and maintain approach.
I have no idea how to upload my example code, but what I
achieved is that timing control is a single variable in ms
(basically the frame refresh period).
All lighting effects deal with the frame buffer outside the
beforeRender and render functions. This could be just a single
function or multiple functions. Actually, creating multiple
functions amounts to building a library of code that is easily
reused from project to project.

I guess I am all set for now, but I will keep trying to learn
the technique of getting by without a frame buffer.

Many BIG thanks to everyone who responded to my questions.

  • Vitaliy

Perhaps it would help to think of unbuffered patterns as math. Each pixel is calculated based on inputs like the pixel’s index or position, and time.

In simplest terms, you could think of the render function as the insides of a loop that would otherwise update the pixel buffer. In @zranger1’s example:

export function render(index) {
  var h = (index <= indexCompare) ? wipeHue : bgHue;
  hsv(h, 1, 1)
}

This is the same as this kind of pseudocode using a loop and updating a buffer:

for (int index = 0; index < pixelCount; index++) {
  float h = (index <= indexCompare) ? wipeHue : bgHue;
  leds[index] = hsv2rgb(h, 1, 1)
}

If the next pixel value depends on the previous pixel value, then a buffer comes into play, but for many patterns the pixel value can be calculated without one.
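
For illustration, here’s a minimal sketch of that buffered case: a pattern that shifts last frame’s values down the strip, which can’t be expressed as pure per-pixel math (the hue and timing values are arbitrary):

var vals = array(pixelCount)

export function beforeRender(delta) {
  // each value depends on the previous frame's neighbor, so a buffer
  // carrying history between frames is required
  for (i = pixelCount - 1; i > 0; i--) vals[i] = vals[i - 1]
  vals[0] = triangle(time(.02)) // feed a slow pulse in at the head
}

export function render(index) {
  hsv(.6, 1, vals[index])
}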


And timing is much simpler than I made it look: the cycle time of the sawtooth wave generated by time(n) is n*65.536 seconds. The value I used in the demo, 0.015, is roughly 1/65.536, giving a one second cycle time. This is how long it will take to traverse the LED strip in one direction, regardless of the number of pixels. I should’ve been clearer about how I arrived at that value.
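
If it helps, that conversion can be done once in code so the pattern thinks in seconds directly. A sketch; the slider and its 0.25 to 4 second range are just an illustration, not part of the original pattern:

var sweepSeconds = 1 // wipe speed expressed in seconds per sweep

// optional UI control: Pixelblaze calls slider functions with 0..1
export function sliderSweepTime(v) {
  sweepSeconds = 0.25 + v * 3.75 // map 0..1 to 0.25..4 seconds
}

export function beforeRender(delta) {
  head = floor(time(sweepSeconds / 65.536) * pixelCount)
  // ...rest of beforeRender unchanged
}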

You can’t actually break the pattern by changing the time parameter; it’s just that this sort of pattern only looks good in a relatively narrow range of speeds. Above a couple of seconds it looks like it’s not moving, and below about half a second it just looks like it’s flashing.

(Also, choosing colors at random is not the best possible method: at times it can pick two colors that are visually identical, making it look as though the pattern has stalled. Better to pick the first color, then add a controlled offset to it to generate the second.)
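
Something along these lines, for instance (the quarter-turn minimum offset is arbitrary):

// pick the next color a controlled distance around the hue wheel from
// the last one, so consecutive colors are never visually identical
wipeHue = bgHue;
bgHue = mod(wipeHue + 0.25 + random(0.5), 1); // 0.25..0.75 away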

I think it’s inherently more intuitive to conceptualize writing a pattern as “code/generate the pattern, then display it on the LEDs” rather than this “on the fly” mathematical approach. But that’s not to say it’s better; quite the contrary. It’s probably rooted in the minds of many of us from prior experiences with code, electronics, etc.


I agree with this statement 100+%.
A few (5+) years ago I designed an FPGA-based WS2812B controller for
instruments at the company I work for. It significantly reduced
and simplified internal system wiring. To the SW, each LED was a
24-bit memory-mapped location. The SW team immediately fell in love with this controller.

I clearly understand the PixelBlaze buffer-free approach, where the render(index)
function provides a place to process each pixel on the fly.
I did try to use it, but I quickly realized two things:

  • Timing control is not really intuitive and a bit complicated;
  • The code is not easily portable, i.e. each case needs special handling.

So I switched to the frame buffer approach.
Here are my beforeRender(delta) and render(index) functions.

//
// Frame Buffer Processor
//

// Step Timing Control
elapsedMs       = 0

export function beforeRender(delta)
{
  elapsedMs = elapsedMs + delta
   
  if (elapsedMs > timeDelay)
  {
    elapsedMs = 0
    fillPixelArray()
  }
}

// Display Frame Buffer
export function render(index)
{
  getRGBfromArray(index * 3)
  rgb(Red, Green, Blue)
}
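
fillPixelArray() and getRGBfromArray() are effect-specific and not shown above. Roughly, they look something like this (a simplified illustration, not my actual effect code):

var timeDelay = 25 // frame period in ms, set per effect
var frameBuffer = array(pixelCount * 3) // packed R, G, B per pixel
var Red, Green, Blue

function fillPixelArray() {
  // effect-specific: compute the next frame into frameBuffer here
}

function getRGBfromArray(i) {
  Red   = frameBuffer[i]
  Green = frameBuffer[i + 1]
  Blue  = frameBuffer[i + 2]
}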

Now I can use these two functions as-is in every project.
All pixel processing is done outside these two main functions.
I can create pixel-processing functions that are completely
independent, and collect them into a reusable library.
Timing control (i.e. the frame rate) is easily handled by a
single, very intuitive timeDelay variable (I should rename it
to framePeriod). However, the minimum value of timeDelay is
limited by the strip length (~9 ms for a 300-pixel WS2812B
strip) plus whatever the buffer processing function takes
(fillPixelArray() in my example above).
On the other hand, the per-pixel processing time for the same
300-LED strip is limited to only 30 µs. I have no idea how long
a complex per-pixel algorithm might take, but 30 µs does not
sound like much.

Before I came across PixelBlaze I played a bit with the Adafruit and
NeoPixelBus Arduino libraries. Both use a frame buffer.

Anyway,
I already love PixelBlaze and find it much easier to use than Arduino.
Plus there is the ready-to-go Pixelblaze driver for the Hubitat
Elevation home automation controller, designed by @zranger1.
I tested this driver and it works very well.


Unfortunately yes, the timing is/was very easy to break.
It took me just a few minutes to realize this.

Yes, this was very clear when I started to play with PixelBlaze.
What became confusing was how to use this approach efficiently.
I agree, simple, nice-looking patterns are not very difficult to
create this way. But I am/was thinking ahead about how to use
PixelBlaze more efficiently (e.g. creating something like a custom
library of functions; yes, there are many built-in functions
already available).

A request to you and others: please apply the code format to your code (highlight it and click the </> symbol, or put 3 backticks on a line before and after it).

As for the difficulty or ease of framebuffer vs. non, I’m in the midst of writing up lessons on how to program the PB. Once you really understand the way PB works, the bufferless stuff makes more sense, and if you do need some form of history (between one frame and another), a buffer approach isn’t very hard to add.

But if you run into issues, do ask; those of us who frequent the forum are usually glad to help figure out problems.

We’ve also discussed adding library functions to make things easier. But without a good way to include them, the library approach is awkward for those not already versed in PB. If you search for “library”, you’ll see lots of related posts.

I am sorry. I will definitely follow the rules in the future.
Unfortunately, different forums have different formatting options.
This is a bit strange, but it is what it is.

I will definitely go through your lessons and tutorials.
Actually, I have read as much as I could find, and I did get the idea of how to program the PB.
I am coming from the HW design side, and the bufferless programming style is very unusual,
but I am trying to get on this train. Yes, adding a frame buffer is not rocket science;
I did it in basically no time after a few unsuccessful attempts to create
something bufferless.
The example provided by @zranger1 worked very well as-is, but was easily broken
when I started to play with the timing parameters. I am planning to use the PB
with the Hubitat Elevation home automation controller. The device driver for the
HE integration was already developed by @zranger1; I tested it and it works OK.
Thank God both toys (PB and HE) are not cloud-based.
For this I need very simple and intuitive timing control, with a single timing
parameter passed from the HE to the PB. In many of the examples I checked there is
more than one timing parameter, and it is not easy to understand the relationship
between them. My approach (re-displaying a frame buffer at a fixed refresh rate;
I am sure it is not ideal) solved this problem instantly.

As far as libraries go, including libraries and calling functions would be preferable.
But just having a library of functions and using copy-and-paste should
be acceptable (certainly better than nothing).

@Vitaliy,
Hey, no worries. There’s no “right way” or “wrong way”, just different options. By all means use whatever works for you!

By the way, I edited your post and added the formatting. If you edit it you can see the syntax used, just 3 backticks around the code.

To make it easier in the future, I’ve added a simple template with a code block for this category.


I agree 100+%.

The buffered approach was/is immediately intuitive to my EE eyes.
But I am trying to learn the “per-pixel math” approach in depth as we speak.

The buffered approach definitely requires more memory, but that is OK if you have enough memory.
The time available to the buffer processing function depends on, and is limited by, the required frame rate.
For very smooth animation the frame period should be no more than about 25 ms.
Refreshing a 300-pixel WS2812B strip takes around 9 ms, which leaves about 16 ms
for frame processing. That looks like plenty of time for fast modern CPUs
(assuming this is not live video rendering).

Correct me if I am wrong, but with the “per-pixel math” approach on the same strip,
all calculations must be done within 30 µs per pixel. That should be enough for
simple math, but complex math could run out of time.

But once again, you are absolutely correct: use whatever works.

I have to admit, PixelBlaze is the best lighting controller I have seen and tested.
A very BIG plus is that it can easily be integrated with the HE home automation hub.
And thank God, both of these toys are not cloud-based; all control is local.

!!! HAPPY HOLIDAYS !!!

PS.
I am sorry, this comment is slightly off topic.
For my home automation projects I am using the ESP8266-based HubDuino project.
HubDuino is very well integrated with the HE hub and lets you build a variety of
custom sensors and controllers. One of the custom controllers is a pixel LED
strip controller based on the free NeoPixelBus library (Arduino IDE).
This library uses the buffered approach. As a result, any lighting effect designed
for the buffered display method can easily be used with either NeoPixelBus/ESP8266
or PixelBlaze. This is another reason why I am leaning toward buffered effect design.

PS2.
I have no idea which of the ESP32’s built-in HW peripherals PixelBlaze uses to
create the serial output stream, but NeoPixelBus uses the built-in DMA engine.
This naturally implies a frame buffer; i.e. the SW deals only with frame buffer
modifications, while the actual serial stream is produced by the built-in HW.
In other words, the CPU is not busy at all creating the serial output stream.

The low-level drivers are open source on my GitHub.

For APA102, I wrote my own simple SPI-based bufferless driver so that I could drive more LEDs than the ESP8266 had memory for. The other advantage is that rendering and transmitting can be easily pipelined without a buffer: the next pixel is calculated while the previous pixel is being sent, so the frame rate is improved. To do the same with a buffered approach, a double buffer is needed so transmission can happen from one buffer while the next frame is rendered into another.

Later I added WS2812 for Pixelblaze V2, originally using a UART, just like NeoPixelBus had done. It could be buffered or unbuffered. Unbuffered can pipeline, but it’s too easy to underfeed WS2812 and cause an early latch between 2 pixels, and requires using a slow data rate that ends up being the bottleneck. For most purposes buffered mode is best.

For V3 WS2812 support I switched to using the RMT peripheral, based on a fixed fork of the driver used in FastLED (which was broken at the time). There are 2 options for driving WS2812 with RMT. One is DMA to the RMT, but this is incredibly memory intensive since the RMT needs 32 bits for every bit sent. That’s 96 bytes for every RGB pixel, about 490 KB of RAM to support the same 5k-pixel max, which is more than the entire ESP32 has. That buffer also wouldn’t be useful as a rendering-engine frame buffer, since it has to be in the RMT peripheral’s timing format. The other option is to feed the RMT via interrupt a little at a time, so no giant 32× buffer is needed, just a normal pixel buffer to ensure the pixel data is always available when the interrupt needs it.
