Programming Question

Hi @zranger1,

I tried your code “as is” and yes, it worked as expected.
I played a bit with the timing parameters and found that
the timing control is not intuitive: it is very easy to
break the entire pattern’s behavior. I understand this
code was a very quick demo sketch of how to achieve the
same results without using a frame buffer. So yes,
what I was planning to do could be done without a
frame buffer. But it looks to me (I might be wrong) like the
code is not intuitive or easily maintainable, because there are too
many dependencies between different portions of the code.
On the other hand, to my (EE, not SW) eyes, using a
frame buffer is a very straightforward approach that is easy
to understand and maintain.
I have no idea how to upload my example code, but what
I achieved is that timing control comes down to a single variable in ms
(basically the frame refresh period).
All lighting effects deal with the frame buffer
outside the “beforeRender” and “render” functions.
This could be just a single function or multiple functions.
Actually, creating multiple functions amounts to building
an easily reusable, project-to-project library of code.

I guess for now I am all set, but I will try to learn the technique
of bypassing the frame buffer.

Many BIG Thanks to everyone who responded to my questions.

  • Vitaliy

Perhaps it would help to think of unbuffered patterns as math. Each pixel is calculated based on inputs like the pixel’s index or position, and time.

In simplest terms, you could think of the render function as the insides of a loop that would otherwise update the pixel buffer. In @zranger1’s example:

export function render(index) {
  var h = (index <= indexCompare) ? wipeHue : bgHue;
  hsv(h, 1, 1)
}

That is the same as this kind of pseudocode using a loop and updating a buffer:

for (int index = 0; index < pixelCount; index++) {
  float h = (index <= indexCompare) ? wipeHue : bgHue;
  leds[index] = hsv2rgb(h, 1, 1)
}

If the next pixel value depends on the previous pixel value, then a buffer comes into play, but for many patterns the pixel value can be calculated without it.
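As a hedged illustration of that buffered case (the names and numbers below are assumptions for the sketch, not from anyone’s actual pattern), a fading trail needs state carried between frames:

```javascript
// Sketch: a fading trail, where each pixel's new value depends on
// the value it had in the previous frame. pixelCount and the decay
// factor are assumed values for illustration.
var pixelCount = 8
var levels = new Array(pixelCount).fill(0)  // per-pixel state kept between frames

function step(headIndex, decay) {
  // Fade what was drawn last frame...
  for (var i = 0; i < pixelCount; i++) levels[i] *= decay
  // ...then relight the moving head at full brightness
  levels[headIndex] = 1
}
```

Pure index-and-time math can’t express this directly, because each frame reads what the previous frame wrote.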

1 Like

And timing is much simpler than I made it look - the cycle time of the sawtooth wave generated by time(n) is n*65.536 seconds. The value I used in the demo, 0.015, is approximately 1/65.536, giving a one-second cycle time. This is how long it will take to traverse the LED strip in one direction, regardless of the number of pixels. I should’ve been clearer about how I arrived at that value.
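That conversion can be sketched as a tiny helper (the function name is mine for illustration, not part of the Pixelblaze API):

```javascript
// time(n) produces a sawtooth whose period is n * 65.536 seconds,
// so the argument for a desired period of T seconds is T / 65.536.
function timeArgForSeconds(seconds) {
  return seconds / 65.536
}

// timeArgForSeconds(1) is about 0.01526, which the demo rounded to 0.015
```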

You can’t actually break the pattern by changing the time parameter – it’s just that this sort of pattern only looks good in a relatively narrow range of speeds. At more than a couple of seconds it looks like it’s not moving, and below about half a second it just looks like it’s flashing.

(Also, choosing colors at random is not the best possible method - at times it can pick two colors that are visually identical, making it look as though it has stalled. Better to pick the first color, then add a controlled offset to it to generate the second.)
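A minimal sketch of that suggestion (the helper name and the offset value are mine, not from the original demo):

```javascript
// Derive the second hue from the first with a fixed offset, wrapping
// around the hue circle, so the two colors are never visually identical.
function pickSecondHue(firstHue, offset) {
  return (firstHue + offset) % 1
}

// e.g. a first hue of 0.9 with an offset of 0.3 wraps around to about 0.2
```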

I think it’s inherently more intuitive to conceptualize writing a pattern as “generate the pattern, then display it on the LEDs” rather than this “on the fly” mathematical approach. But that’s not to say it’s better - quite the contrary. It’s probably rooted in the minds of many of us from prior experience with code, electronics, etc.

2 Likes

I agree with this statement 100+%.
A few (5+) years ago I designed an FPGA-based WS2812b controller for
instruments at the company I work for. It significantly reduced
and simplified the internal system wiring. To the SW, each LED was a
24-bit memory-mapped location. The SW folks immediately fell in love with this controller.

I clearly understand Pixelblaze’s buffer-free approach, where the “render(index)”
function provides a place to process each pixel on the fly.
I did try to use it, but I quickly realized two things:

  • Timing control is not really intuitive and is a bit complicated;
  • The programming is not easily portable, i.e. each case needs special handling.

So I switched to the frame buffer approach.
Here are my “beforeRender(delta)” and “render(index)” functions.

//
// Frame Buffer Processor
//

// Step Timing Control
timeDelay = 250  // frame period in ms (example value; not given in the original post)
elapsedMs = 0

export function beforeRender(delta)
{
  elapsedMs = elapsedMs + delta
   
  if (elapsedMs > timeDelay)
  {
    elapsedMs = 0
    fillPixelArray()
  }
}

// Display Frame Buffer
export function render(index)
{
  getRGBfromArray(index * 3)
  rgb(Red, Green, Blue)
}
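The post doesn’t show fillPixelArray() and getRGBfromArray(), so here is one hypothetical way they could look. The buffer layout (3 entries per pixel), the strip length, and the solid-red example effect are all assumptions for illustration only:

```javascript
var pixelCount = 4                                  // assumed strip length for the sketch
var frameBuffer = new Array(pixelCount * 3).fill(0) // R, G, B per pixel
var Red = 0, Green = 0, Blue = 0                    // globals read by render()

// Hypothetical example effect: fill the whole frame buffer with solid red
function fillPixelArray() {
  for (var i = 0; i < pixelCount; i++) {
    frameBuffer[i * 3] = 1      // R
    frameBuffer[i * 3 + 1] = 0  // G
    frameBuffer[i * 3 + 2] = 0  // B
  }
}

// Copy one pixel's R, G, B out of the buffer into the globals
function getRGBfromArray(offset) {
  Red = frameBuffer[offset]
  Green = frameBuffer[offset + 1]
  Blue = frameBuffer[offset + 2]
}
```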

Now I can use these two functions “as is” in every project.
All pixel processing is done outside these two main functions.
I can create and reuse any pixel-processing functions completely
independently and collect them into a reusable library.
Timing control (i.e. the frame rate) is easily set by a single, very
intuitive “timeDelay” variable (I should rename it to “framePeriod”).
However, the minimal value for “timeDelay” is limited by the
strip length (~9 ms for a 300-pixel WS2812b strip)
plus whatever the buffer-processing function takes (fillPixelArray()
in my example above).
On the other hand, with the bufferless approach the pixel-processing
time for the same 300-LED strip is limited to only about 30 µs per pixel.
I have no idea how long a complex pixel-processing algorithm might take,
but 30 µs does not sound like much.
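The ~30 µs figure works out like this (a sketch; the helper name is mine):

```javascript
// If refreshing the strip takes ~9 ms for 300 pixels, a bufferless
// renderer that keeps pace with transmission gets roughly
// 9 ms / 300 = 30 µs of math per pixel.
function perPixelBudgetUs(stripRefreshMs, pixelCount) {
  return (stripRefreshMs * 1000) / pixelCount
}
```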

Before I came across Pixelblaze I played a bit with the Adafruit and
NeoPixelBus Arduino-targeted libraries. Both use a frame buffer.

Anyway,
I already love Pixelblaze and find it much easier to use than Arduino.
Plus, there is a ready-to-go Pixelblaze driver for the Hubitat
Elevation home automation controller, designed by @zranger1.
I tested this driver and it works very well.

2 Likes

Unfortunately yes, the timing is/was very easy to break.
It took me just a few minutes to realize this.

Yes, this was very clear when I started to play with Pixelblaze.
What became very confusing was how to use this approach
efficiently. I agree, simple, nice-looking patterns are not
very difficult to create this way. But I am/was thinking
ahead about how to use Pixelblaze more efficiently (i.e. creating
something like a custom library of functions. Yes,
there are many built-in functions already available).

A request to you and others: please add the “code” format to your code (click the </> symbol after highlighting it, or just put 3 backticks on a line before and after).

As for the difficulty or ease of framebuffer vs non, I’m in the midst of writing up lessons on how to program the PB. Once you really understand the way PB works, the bufferless stuff makes more sense, and if you do need some form of history (between one frame and another), a buffer approach isn’t very hard to add.

But if you run into issues, do ask - those of us who frequent the forum are usually glad to help figure out problems.

We’ve also discussed adding library functions to make things easier. But without a good way to include those, it makes the library approach awkward for those not already versed in PB. If you search for library, you’ll see lots of related posts.

I am sorry. I will definitely follow the rules in the future.
Unfortunately, different forums have different formatting options.
This is a bit strange, but it is what it is.

I will definitely go through your lesson(s) and tutorial(s).
Actually, I read as much as I could find, and I did get the idea of how to program PB.
I am coming from the HW design side. The bufferless programming style is very unusual,
but I am trying to get on this train. And yes, adding a frame buffer is not rocket science;
I did it in basically no time after a few unsuccessful attempts to create
something bufferless.
The example provided by @zranger1 worked very well “as is” but was easily broken
when I started to play with the timing parameters. I am planning to use PB with the Hubitat
Elevation home automation controller. The device driver for the HE integration
was already developed by @zranger1; I tested this driver and it works OK.
Thank God both toys (PB and HE) are not cloud-based.
For this I need very simple and intuitive timing control, with a single
timing parameter passed from HE to PB. In many examples I checked there is
more than one timing parameter, and it is not easy to understand the relationship
between them. My approach (re-displaying a frame buffer at a fixed refresh rate -
I am sure it is not ideal) solved this problem instantly.

As far as a library goes, including libraries and calling functions would be preferable.
But just having a library of functions and using a copy-and-paste technique would
be acceptable (certainly better than nothing).

@Vitaliy ,
Hey no worries. There’s no “right way” or “wrong way” just different options. By all means use whatever works for you!

By the way, I edited your post and added the formatting. If you edit it you can see the syntax used, just 3 backticks around the code.

To make it easier in the future, I’ve added a simple template with a code block for this category.

1 Like

I agree 100+%

The buffered approach was/is immediately intuitive to my EE eyes.
But I am trying to learn the “per-pixel math” approach in depth as we speak.

The buffered approach definitely requires more memory, but that is OK if you have enough memory.
The time available to the buffer-processing function depends on, and is limited by, the frame-rate requirement.
For very smooth animation the frame period should be no more than about 25 ms (40 fps).
Refreshing a 300-pixel WS2812b strip takes around 9 ms, which leaves about 16 ms for
frame processing. That looks like plenty of time for fast modern CPUs (assuming
this is not live video rendering).
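The arithmetic above, as a sketch (the helper name is assumed):

```javascript
// Frame period minus strip refresh time leaves the budget for
// frame (buffer) processing.
function processingBudgetMs(framePeriodMs, stripRefreshMs) {
  return framePeriodMs - stripRefreshMs
}

// 25 ms frame period - 9 ms strip refresh leaves about 16 ms for processing
```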

Correct me if I am wrong, but with the “per-pixel math” approach, for the same strip, all
calculations must be done within about 30 µs per pixel. That should be enough for simple
math, but complex math could run into the time limit.

But once again - you are absolutely correct: use whatever works.

I have to admit, Pixelblaze is the best lighting controller I have seen and tested.
A very BIG plus: it can be easily integrated with the HE home automation hub.
And thank God, both of these toys are not cloud-based - all control is local.

!!! HAPPY HOLIDAYS !!!

PS.
I am sorry, this is a slightly off-topic comment.
For my home automation projects I am using the ESP8266-based HubDuino project.
HubDuino is very well integrated with the HE hub and lets you build a variety of
custom sensors and controllers. One of the custom controllers is a
pixel-LED strip controller based on the free NeoPixelBus library (Arduino IDE).
This library uses a buffered approach. As a result, any lighting effect designed
for the buffered display method can easily be used either with NeoPixelBus/ESP8266
or with Pixelblaze. This is another reason why I am leaning toward buffered effect design.

PS2.
I have no idea which of the ESP32’s built-in HW peripherals Pixelblaze uses to create
the serial output stream, but NeoPixelBus uses the built-in DMA engine.
This naturally implies a frame buffer.
I.e., the SW deals only with frame buffer modifications, while the actual serial stream is
produced by the built-in HW. In other words, the CPU is not busy at all creating the
serial output stream.

The low level drivers are open source on my github.

For apa102, I wrote my own simple SPI based bufferless driver so that I could drive more LEDs than the esp8266 had memory for. The other advantage is that rendering and transmitting can be easily pipelined without a buffer. The next pixel is calculated while the previous pixel is being sent, so the frame rate is improved. To do the same for a buffer approach, a double buffer is needed so transmit can happen from one buffer as the next frame is rendered to another buffer.
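The double-buffer idea described above can be sketched like this - a minimal JavaScript illustration of the pointer swap, not the actual driver code:

```javascript
// Two pixel buffers: transmit from the front buffer while the next
// frame is rendered into the back buffer, then swap roles each frame.
// The buffer size is an assumed value for the sketch.
var bufA = new Array(300).fill(0)
var bufB = new Array(300).fill(0)
var front = bufA  // buffer currently being transmitted
var back = bufB   // buffer currently being rendered into

function swapBuffers() {
  var t = front
  front = back
  back = t
}
```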

Later I added WS2812 for Pixelblaze V2, originally using a UART, just like NeoPixelBus had done. It could be buffered or unbuffered. Unbuffered can pipeline, but it’s too easy to underfeed WS2812 and cause an early latch between 2 pixels, and requires using a slow data rate that ends up being the bottleneck. For most purposes buffered mode is best.

For V3 WS2812 support I switched to using the RMT peripheral, based on a fixed fork of the driver used in FastLED (which was broken at the time). There are 2 options for using RMT for WS2812. One is using DMA to the RMT, but this is incredibly memory intensive since RMT needs 32 bits for every bit sent. That’s 96 bytes for every RGB pixel - about 480KB of RAM to support the same 5k pixels max, which is more than the entire ESP32 has. That buffer wouldn’t be useful as a pixel rendering engine frame buffer since it has to be in the RMT peripheral’s timing format. The other option is to feed RMT via interrupt a little at a time, so no giant 32x buffer is needed - just a normal pixel buffer to ensure the pixel data is always available when the interrupt needs it.
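The RMT memory math works out as follows (a sketch; the helper names are mine):

```javascript
// RMT uses 32 bits (4 bytes) of its buffer per wire bit, and a WS2812
// RGB pixel is 24 bits on the wire, so each pixel costs 24 * 4 = 96
// bytes of RMT buffer.
function rmtBytesPerPixel() {
  return 24 * 4
}

function rmtBufferBytes(pixelCount) {
  return pixelCount * rmtBytesPerPixel()
}

// At the 5k-pixel maximum that's roughly 480,000 bytes - nearly half a
// megabyte, and none of it reusable as a pixel frame buffer.
```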

2 Likes

Hi @wizard,

WOW!
Thank you very much for the details on the low-level drivers.
I wish I could be as comfortable with SW projects as I am with HW projects.

I get this completely.

I am also mostly a hardware guy. So I quickly hacked together a basic line driver circuit to drive strings of WS2811 controlled pixels down 60 feet of power cable (over 2000pF of capacitance between data and power lines). It just worked, and to be honest, I would have been disappointed if it hadn’t. And the oscilloscope also told me I had got it about right.

But I am also still getting my head around the various strategies for pattern generation. I just about manage with arrays versus maths, but when 2D mapping gets involved (never mind 3D) it all gets a bit sketchy, and the transition from a structured approach to a trial and error approach occurs all too quickly. But I’ll get there, I am sure.

I shall post a little video of my efforts when I get a chance (too much rain this evening).

1 Like

Hello another EE,

I am curious why you need 60 ft lines for the control signals.
The controller itself (specifically the Pixelblaze) is small enough to
be mounted right next to the strip, and communication is over WiFi.
I have all my strip lighting controllers mounted within 1-2 inches
of the corresponding strip. This way I am OK with signal integrity
for the control signals.
As far as strip power goes, I am using 24 V RGB(W,W) strips,
but I will replace all of them with WS2812b (or similar).
A higher voltage is very preferable because of the voltage drop across
the strip itself. WS2812b strips are only 5 V and can draw up to
18 A if the entire strip is turned on 100% white. In my experiment the strip
was powered from the beginning, connected right to the 5 V PS
terminals, but the measured voltage at the end of the strip was below 3 V.
Surprisingly, the strip still worked, but the brightness at the end
dropped significantly and the color there became almost orange instead of white.
No surprise there.
So, my power strategy for low-voltage strips is to use a local
24/12 V-to-5 V @ 15 A down converter, also mounted right next
to the strip. Since the input range is 12-24 V, the size of this converter
is only 2x2x0.75 inches. Plus, I can run a relatively long power line from
the main 24 V @ 6 A power brick. In this case I really don’t care about the
voltage drop across the 24 V power line, and this is a very safe voltage to
run anywhere I want.
In addition, the strip must be powered from both ends.
I am running 16-gauge flat audio cable along the entire strip.
The result: the entire strip lights up very cleanly. Every single pixel
is bright white with no noticeable brightness difference. I expected
some in the middle but did not notice any. The strip is silicone-wrapped,
so I cannot measure the voltage in the middle of the strip.

To answer your question… Mostly, it was a case of the fact that I already had a nice weatherproof box for the AC-to-12 V supply, which already had a 5 V regulator as well. It also had all the power and signal cables already mounted. This was used for my pre-Pixelblaze efforts in previous years. That box sits on one side of a locked gate right by the house with no public access, but the tree that features the lights is right by the pavement at the front of the house. Low voltage wires are OK, but I did not want anyone to get curious and start messing with boxes (unlikely, really, I know). If I had been starting from scratch I suppose I might have done things differently. But, as mentioned, hardware guy here, so a quick hand-wired circuit board to implement a line driver or two was for me the easier solution. Also, I was not so confident about the wi-fi signal at that location in front of the house compared to where I am able to put the weatherproof box.

As for the rest - I am using 300 C9 sized pixel bulbs in six series connected strings of 50 (mine came from Holidaycoro.com in the US before they stopped shipping to the UK due to our Brexit nonsense). These are 12V devices, and so I inject power at 100, 200 and 300. I have another string of 50 on a separate Pixelblaze, also driven in the same way from the same box. It all works well.

For next year I want to improve the range of patterns that I am using as well as explore the possibilities of synchronisation. But I have time for that. I am recently retired, so I have a bit more time for toys. :laughing:

Thank you for the use case clarification.
Obviously, different installations require different approaches.

You’re not alone. One of the biggest goals of my “tutorial” series is to help people understand how this all works. @zranger1 added yet another “layer” when he connected GPU shader thinking to the PB, on top of the already existing methods we had (of which there are quite a few). It’s still evolving, and we’re all still learning and sharing tips and tricks.

1 Like

A post was split to a new topic: Pin Interrupts?

Thanks everyone. I learned sooo much from this post. It really helped put some foundational pieces in place for me.

1 Like