"Improving" Patterns

Interesting approach in that repo… Using the .epe as the definitive version and having to blow away the js src makes it viewable, but it’s an awkward method. I see the .epe as the final step, “production ready” so to speak. If there were a good way to build an .epe from the js (the name, id, and graphic, added to an escaped version of the code), that would reverse the order. I hope that’s the direction of your git RFC.

Splitting a pattern in non-epe form into two parts might help: pattern.js is the code, and pattern.info is the rest. That would allow rebuilding an epe from the two parts. A “build” (or “combine”) script and a blank template .info file would be easy: name, a random id that doesn’t collide with existing ones, and an empty image. Once that epe is loaded onto a PB, a pattern is generated, and a real epe is built, a “split” script would break that epe back into code and info, which would allow diffs, including to the info part, like the name or id. (The image is the annoying part, as it’s binary encoded and not really editable.)
So let me revise the proposed two parts into three parts:

  • Pattern.js is the code
  • Pattern.info is the non-code: name, id
  • Pattern.gif is the image (is that the right format? Whatever it is)

Running “build Pattern” would build Pattern.epe.
Running “split Pattern” would do the opposite, creating/updating those three files from Pattern.epe.
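
Something like this rough sketch is what I have in mind for “build” in node; I’m assuming the .epe is just a JSON file with name/id/sources/preview fields and that Pattern.info is itself JSON, so treat the field names as placeholders rather than a spec:

```js
// build.js - sketch: combine Pattern.js + Pattern.info + Pattern.jpg into Pattern.epe
// Assumes the .epe is JSON with name/id/sources/preview fields (unverified).
const fs = require('fs');

function build(base) {
  const code = fs.readFileSync(`${base}.js`, 'utf8');
  const info = JSON.parse(fs.readFileSync(`${base}.info`, 'utf8')); // { name, id }
  const preview = fs.existsSync(`${base}.jpg`)
    ? fs.readFileSync(`${base}.jpg`).toString('base64')
    : ''; // empty until a real PB generates a preview
  const epe = { name: info.name, id: info.id, sources: { main: code }, preview };
  fs.writeFileSync(`${base}.epe`, JSON.stringify(epe, null, 2));
}

build(process.argv[2]); // e.g. node build.js MyPattern
```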

In fact, a “push” script to take “raw” code and upload it would now be easy: build first, then upload. If the graphic (or code, name, or id) is updated by the PB, then running an “extract” script would download the epe and split it into components, and any changes would be diffable.

Ideally, an id change could be seen by the script as a potentially different pattern (“hey, the id is different, do you want to overwrite this code or extract it as a new/different pattern?”).
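
A rough sketch of that id check during “extract” might look like this (the file names and fields are just illustrative):

```js
// During "extract": compare the id in the freshly downloaded .epe against the
// id already stored locally, and ask before overwriting. Illustrative only.
const fs = require('fs');

function checkId(base, downloadedEpe) {
  const infoPath = `${base}.info`;
  if (!fs.existsSync(infoPath)) return 'new';          // nothing local yet
  const local = JSON.parse(fs.readFileSync(infoPath, 'utf8'));
  if (local.id === downloadedEpe.id) return 'update';  // same pattern: safe to overwrite
  return 'conflict'; // different id: ask whether to overwrite or save as a new pattern
}
```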

Having non-1D images (a 2D and/or 3D preview) might mean additional files if Ben ever adds that option (nudge nudge). The build/split scripts could accommodate that easily.

For myself, and for the repo, I plan on doing some intentional sorting into directories (which is why the src method you used feels ultra awkward to me).

Potential Tree:

patterns

  • tutorials+example code
  • sound [these might live inside 1d,2d, TBD]
  • movement [same]
  • other sensors [same]
  • 1d (single long strip)
  • 1d (other) [circle, multiple strips, etc]
  • 2d (matrix)
  • 2d (polar)
  • 3d (surface)
  • multiPB [sync patterns]
  • other
  • helper code (functions/techniques to include)

mappers

  • 1d (x only)
  • 2d (x+y)
  • 2d (polar)
  • 3d (matrix)
  • 3d (polar)
  • other

tools

  • build [build/split scripts live here]
  • firestorm
  • emulator
  • userscripts
  • etc…

docs

  • etc…

Ha! Don’t worry, this is exactly the direction Ben articulated: the .epe isn’t the definitive record.

I’ve started a repo with what I have in mind. It’s still lacking the tools to extract and reassemble .epe files, but it’s a start that can evolve to become a backing store for the pattern site.

In the repo you’ll find an example that outlines my current thinking.

So a tool that takes .epe files and generates files in this structure would help update the repo from a PB-authored pattern. I think @jeff’s extract tool is a good start.

A tool could be made that assembles these pieces into an .epe file as well.
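
For illustration, the split direction could be as simple as the node sketch below; the field names (name, id, sources.main, preview) and output file names are my assumptions about the .epe format and the repo layout, not a final spec:

```js
// split.js - sketch: break an .epe into code, metadata, and preview files.
// Field names are assumptions about the .epe JSON format.
const fs = require('fs');
const path = require('path');

function split(epePath, outDir) {
  const epe = JSON.parse(fs.readFileSync(epePath, 'utf8'));
  fs.mkdirSync(outDir, { recursive: true });
  fs.writeFileSync(path.join(outDir, 'pattern.js'), epe.sources.main);
  fs.writeFileSync(path.join(outDir, 'meta.json'),
    JSON.stringify({ name: epe.name, id: epe.id }, null, 2));
  if (epe.preview) {
    fs.writeFileSync(path.join(outDir, 'preview.jpg'), Buffer.from(epe.preview, 'base64'));
  }
}

split(process.argv[2], process.argv[3]); // e.g. node split.js MyPattern.epe patterns/1d/my-pattern
```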

At some point the pattern sharing site could be integrated, such that changes merged to the main line update the site, and uploads to the website create automatic PRs.

What do you think?

Ah, it’s a jpg, not a gif? Interesting…

Yeah, metadata (name/id) staying in meta.json makes sense, better than the pattern.info I suggested, since it’s already json.
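
Something like this is what I’d picture for a minimal meta.json; the exact fields the repo expects are a guess on my part, name and id at minimum:

```json
{
  "name": "example pattern",
  "id": "REPLACE-WITH-A-UNIQUE-ID"
}
```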

Otherwise pretty close to my (fresh eyeballs) take, so that’s good.

Making the pattern back end a git repo makes sense, and would allow both structured changes and new submissions, even from code newbies, since all of the files can be added/edited even from GitHub itself, if need be.

Internal pattern storage has to be very compact, and is usually around 5-10K per pattern w/ preview + source + compiled. Both V2 and V3 have flash chips that are 4MB, and that has to cover 2x copies of firmware (to support updates w/o bricking), the entire front-end browser app, configuration, and pattern storage. GIFs are large and have color limitations, even 1px tall ones. A 5 second GIF can be hundreds of KB even when compressed to terrible-looking quality.

I’m creatively repurposing the y axis to cover time (height = 150, 5 s @ 30 FPS) across a 1-dimensional picture :smiley: and compressing using JPEG. JPEG does some neat tricks to save space by compressing away some of the fine detail that is less perceptible, and it so happens that with animations being things that change over time, there’s a lot of detail that can be compressed across time. A movie codec like H.264 does similar tricks with 2D images over time, but getting that to work across browsers, and small enough to embed in a stand-alone web page stored on a tiny flash chip, has been a challenge.
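
To make the layout concrete: 5 s × 30 FPS = 150 rows, so each row of the preview is one frame and each column is one pixel along the strip. Reading it back (assuming the JPEG has already been decoded to raw RGB, the decode step is omitted here) looks roughly like:

```js
// Preview layout: width = pixel count, height = 150 (5 seconds at 30 FPS),
// so row y is frame y and column x is pixel x along the strip.
function pixelAt(rgb, width, pixelIndex, frameIndex) {
  // rgb: flat Uint8Array of decoded RGB bytes, row-major
  const offset = (frameIndex * width + pixelIndex) * 3;
  return { r: rgb[offset], g: rgb[offset + 1], b: rgb[offset + 2] };
}
// Row 60 is the strip's state 2 seconds into the 5-second preview (60 frames / 30 FPS).
```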

I think this does a good job outlining the idea for a roadmap, so I won’t start a new thread.

I know that personally I’ll start with a PR to merge my jvyduna/pb-examples into this new repo, but I suspect it’ll be a little after v3 launches that we’ll all have the time to make progress on migrating the existing pattern library over. There’s plenty of work to do with the two-way parsing, but I think there’s enough of a spec that some of us could chip away at it.

@wizard, for scripting tools, if we are willing to try any of the following, would you prefer us to try to write things in node/js, Python, bash, whatever we know best, or something else?

I’m well versed in node/js. Less so in python. Some tools are already in js. Might be some crossover with firestorm? So a few minor points toward node. Otoh, your tool exists and it’s almost half of the tools needed and seems easy enough. Hard to pass up on that!

This looks entirely sane and reasonable to me! I’ll make a parallel repository with the new structure so I can start playing with the toolset as it gets built. And let me know where I can help – javascript isn’t my strongest language, but I’m willing to pitch in and get better at it.

We do need some provision for mappers as well – looks like lots of people are using rings, pyramids, helical coils, etc. and it would be great to have one place to point them for mapper code.

Are we still missing an .epe generator?

@Nick_W, correct me if I’m wrong, but the python library can fetch epe data from a PB and put it into a file, but it can’t create an .epe from source alone, right?

I mean, maybe something could take PB-level source code, push that to a fresh pattern, let the PB generate the binary data and a jpg preview, and then download an .epe from the PB, right? But as of today, we don’t have a PB-independent way of making an epe. (Which is what I meant above… to automate generating an epe from .js-type files, say in a repo situation.)

Yes, what it does is download the components from the PB (.jpg, binary code, plain-text code), assemble them into an .epe file, and then save it.

It can’t make an .epe file from plain text.

It’s unsurprising that the only source of an .epe is a PB. We can’t make the bytecode binary without it. So any repo that wants to provide an epe (ready to run) needs, at some point, to send/cut-paste the plain text to a PB.

@wizard, is that a feasible, if not currently possible, feature? It’s akin to compiling a binary, as I see it. I can get a git repo of C code, but if it’s written in Visual C (random example), I need a Visual C compiler to generate a binary, and that binary will only run on supported machines/OSes. So if we want repo libraries of PB patterns that let people “grab an epe and load it on their PB”, we need some scriptable method (as opposed to doing it by hand) of generating one: push the plain text to a PB as a fresh pattern, have it save, and then retrieve an epe with the jpg, binary, and id.

Yes?

It’s kind of the opposite.

If you have the binary code, you can load just the binary code onto another PB; the PB will then generate the plain text and jpg (preview) image from that.

This is how cloning a pattern from one PB to another works (.epe files are not used).

I don’t know if the plain text is encoded in the binary code itself, but the only thing you actually need is the binary code.

Of course you can’t edit that, and the only way of making the binary code is via a PB, but a repo with just the binary code in it would work.

You wouldn’t get a preview image though.

Yes, I’m aware the bin is the key piece, and that you can transfer a bin from PB #1 to a file, and then to PB #2.

But you can’t do anything with that code in plain text, like push it to a git repo, patch it, and push it back onto a PB, without (as of now) literally cutting and pasting it into a web page. That’s a huge blocker, in my mind, to using git… you can’t test and iterate your code, and git commit, without likely multiple manual steps.

The binary file is really a few files together. It has the bytecode to run, the preview animation (jpg), the compressed sources, and the name. The bytecode also has a list of exports.

Epe files are portable source and preview, and are the intended way of sharing between Pixelblazes.

No guarantee that the binary files will be compatible between models/versions.

So if you had the binary, you could extract the plain text and have a preview image, but in a read-only format.

For compatibility, you could just load the extracted text - in fact, you could recreate the .epe file just from the binary if we knew how to deconstruct the binary.

@Nick_W yes, which is what the UI effectively does. It requests the compressed source segment with a websocket request, decompresses with LZString, and puts together a json file with the .epe extension.
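
For a node tool, the lz-string npm package exposes the same algorithm; here’s a rough sketch, with the caveat that which variant the PB actually uses (Uint8Array vs. string) and the exact shape of the sources object are assumptions to verify against real data:

```js
// Sketch: decompress a sources segment with lz-string, and compress it back.
const LZString = require('lz-string'); // npm install lz-string

function decodeSources(compressedBytes) {
  // compressedBytes: Uint8Array pulled from a PB or a binary pattern file
  const json = LZString.decompressFromUint8Array(compressedBytes);
  return JSON.parse(json); // e.g. { main: "...pattern source..." } (shape is a guess)
}

function encodeSources(sources) {
  return LZString.compressToUint8Array(JSON.stringify(sources));
}
```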

I’m still trying to figure out a workflow to use .epe s with something like github.

Right now, if I embrace putting PB patterns into GitHub, I’m fine with editing the source text… but trying to integrate any sort of .epe is a nightmare. Any change in source would require a manual step to replace the .epe (cut and paste the source text into the UI, download the .epe, assuming it actually works, then put the .epe in, replacing the existing binary, and finally commit it all and push).

vs. the “I only provide source” workflow:

Edit source, git commit, push, done. Accept a patch? Done.

What’s missing is some script/ability to push that source into a PB and get back an .epe, OR a script to build an epe from it directly (and I realize this is much less likely since it would mostly require reinventing/opening up PB bytecode building).

To be clear, there is nothing wrong with an epe-less GitHub repo… but I see the plan is to provide epe files in GitHub, and I’m wondering (other than a huge manual process, leading to a lack of GitHub updates/patches) how you were envisioning this.

I’m thinking of how to build out my PB GitHub repo, a task I keep putting off, and so far I can’t see how I’d support adding .epes.

Yes, this is how I build the .epe file, once I figured out how to decompress LZString in python.

What you could do is have the plain text in the GitHub repo. To load it into a PB, you would have to have a script that makes this into an .epe, but all that script has to do is make a binary file with a dummy .jpg preview image and dummy bytecode, say with an old version number or something.

So the constructed .epe would have a dummy preview jpg, the actual plain text, and a manufactured binary containing the dummy jpg, the actual text LZString-compressed, and some fake bytecode.

Hopefully the PB has a mechanism for recognizing that the bytecode is out of date, decompressing the text, and recreating the preview image and bytecode from that.

Would that work?