Tutorial/Questions: Music Sequencer (V3 only) by Jeff Vyduna, used to run a DJ set

Hello everybody,

After hours and hours with this pretty tough piece of code, I managed to modify it for my needs without breaking anything. I'm posting here to document my process (it might be useful to beginners like me), and I plan to edit this post in the future with new findings. I'm still far from understanding 100% of it, but I got the logic (brilliant, by the way).

My goal is to use the sound processing framework (which is more advanced than the usual PI controller used in other sound patterns) during a small festival stage DJ set (techno), with little to no intervention on the parameters. At the moment I know how to make a pattern that always reacts to music in the same way, and I would like to understand how to bring in variations during breaks or drops.

The original code has been simplified a bit:

  1. Removed most of the built-in patterns (except volume tuning, which is very useful for seeing how the variables move with the sound)
  2. Not using the “sequencer” at the moment, as I technically want only one pattern running with little to no intervention during the DJ set. I kept it for future use, as I sense it is the key to detecting drops/breaks.

Here is my version of the pattern (the original author is Jeff Vyduna):
_1-SOUND_Music Sequencer Full Engine(1).epe (37.1 KB)

___ CODE STRUCTURE ___ (in order from top to bottom)
You’re not required to modify anything before section “6. Renderers”. Everything above that section is the sound processing and sequencing framework.

  1. Variable declarations (incl. sensor board)

  2. Before render/render functions
    These are the classic functions used in all patterns. beforeRender() contains all the sound processing. The render functions are nested so that the code works with 1D, 2D or 3D mappings. None of your own pattern code goes here; you add your patterns at the end of the code (section 6). A minimal sketch of this delegation follows.
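
To make that concrete, here is a minimal sketch of how the delegation works (an illustration of the idea only, not the framework's exact code; currentPattern is a hypothetical name):

export function beforeRender(delta) {
  //the framework does all its sound processing here, then runs the active
  //pattern, which installs its own `renderer` lambda (see section 6)
  currentPattern(delta)
}

export function render3D(index, x, y, z) {
  renderer(index, x, y, z) //whatever lambda the active pattern assigned
}

export function render2D(index, x, y) {
  render3D(index, x, y, 0) //2D and 1D mappings fall through with defaults
}

export function render(index) {
  render3D(index, index / pixelCount, 0, 0)
}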

  3. SOUND, BEAT AND TEMPO DETECTION
    This is the really tough part of the code. It contains all the code that updates the variables used to make the pattern react to sound.
    More detail here: https://electromage.com/docs/sensor-expansion-board#sound

  • function processVolume(delta): pretty well commented. It uses a moving average to compare the last sampled volume to a more “smoothed” average, which lets you detect, for example, whether the current sound is getting louder (a sketch of the idea follows this list).
  • function debounce(trigger, fn, timerIdx, duration, elapsed): when the code detects a beat, for example, it sees it over several consecutive samples. Debouncing avoids triggering the same reaction several times for a single beat.
  • function processInstruments(delta): here you have the frequency analysis from the sensor board. I found that the detection of hi-hats and claps does not work very well with techno, due to the lack of treble. This is where you adjust the core code, if I understand correctly:
    Hi-hats use sensor board bins 29-30 = 7-9 kHz range
    Claps use SB bins 18-24 = 1.8-4 kHz range (looks too high for techno, but I still need to run some tests)
    Bass uses SB bins 1-3 = 50-100 Hz range (pretty accurate)
  • :question: The rest of the functions calculate the tempo / beat detection. I still struggle to understand how these work.
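
To make the volume and debounce ideas above concrete, here is a minimal standalone sketch (frequencyData and energyAverage are the real sensor-board variables; every threshold and name here is a hypothetical illustration, not the framework's actual code, and it uses a simple rising-edge guard in place of the timer-based debounce()):

export var frequencyData //32 frequency bins from the sensor board
export var energyAverage //overall volume from the sensor board

fastEma = 0 //fast average: tracks the current sound
slowEma = 0 //slow average: the "smoothed" reference it is compared to
bassEma = 0
bassWasOn = 0

//sum the energy in a range of frequencyData bins, e.g. bass = bins 1-3
function bandEnergy(startBin, endBin) {
  var sum = 0
  for (var i = startBin; i <= endBin; i++) sum += frequencyData[i]
  return sum
}

export function beforeRender(delta) {
  //two exponential moving averages of the volume
  fastEma += (energyAverage - fastEma) * 0.2
  slowEma += (energyAverage - slowEma) * 0.01
  gettingLouder = fastEma > slowEma * 1.2 //crude "sound is rising" flag

  //bass spike = band energy well above its own average
  bass = bandEnergy(1, 3)
  bassEma += (bass - bassEma) * 0.05
  bassOn = bass > bassEma * 1.5 + 0.02

  //rising-edge guard: fire once per spike, not on every sample of it
  if (bassOn && !bassWasOn) onBeat()
  bassWasOn = bassOn
}

function onBeat() {
  //react here: flash, advance a step, etc.
}

export function render(index) {
  hsv(0, 0, bassOn) //white flash while a bass spike is active (demo only)
}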
  4. PATTERN AND COMMAND QUEUE
    These control how your patterns are executed: basically, how you jump from one pattern (below) to the next. A lot of the functions are actually aliases that are used when you call the patterns below.

  5. YOUR PATTERNS
    Some helper functions that can be used in the patterns. I don’t really touch anything here.

  • The PI controller is used if you want to use non-normalized sound variables. For example, beat and beatDetected work as 0…1 or on/off values, but if you want to use a volume EMA or a frequency EMA, this adjusts a gain. It allows you, for example, to get full brightness at a drop; the gain is then adjusted and the brightness is reduced a bit, which gives more impact to sound spikes (see the sketch after this list).
  • :question: I struggle to understand how to use the other helpers efficiently, even when looking at the built-in patterns.
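
For reference, here is a generic PI-gain sketch of that idea (illustrative values only; not the framework's actual implementation):

target = 0.5  //desired average displayed level (hypothetical tuning)
kp = 0.2      //proportional coefficient
ki = 0.002    //integral coefficient
integral = 1  //integral term doubles as the slowly-adapting base gain
gain = 1

function calcGain(err) {
  integral = clamp(integral + ki * err, 0.01, 10) //slow adaptation, kept in bounds
  return max(kp * err + integral, 0.01)           //fast correction + base gain
}

//usage in beforeRender(), with volumeEma as the raw (non-normalized) input:
//  v = clamp(volumeEma * gain, 0, 1) //brightness actually displayed
//  gain = calcGain(target - v)       //too loud -> gain drops, too quiet -> it rises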
  6. RENDERERS
    This is where you write your patterns.
    Let’s see an example:
function autoplay(delta) {
  //Instructions here are equivalent to beforeRender() instructions in a "normal" pattern

  //Then you assign a renderer and write inside it all the instructions you usually find in render3D(index, x, y, z)
  renderer = (i, x, y, z) => {
    //Render instructions below
    pct = i / pixelCount //pct is a number 0..1 giving this pixel's position along the strip (index 0..pixelCount-1 mapped to 0..1)

    hArr[i] = 1 //Hue for the pixel at this index
    sArr[i] = 1 //Saturation for the pixel at this index
    vArr[i] = 1 //Value (brightness) for the pixel at this index

    hsv(hArr[i], sArr[i], vArr[i]) //Display pixel at index i
  }
} //Don't forget to close the pattern function itself
  7. MANUAL PATTERN SELECTION
    This basically creates a slider that lets you manually cycle through the patterns without modifying the queue. You need to fill the array with the function name of each pattern. I have commented and improved the code a bit to make it easier to adapt (a minimal sketch of the idea is below).
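
To illustrate, a stripped-down version of that idea could look like this (a sketch only, assuming patterns are stored as function references; anotherPattern and the slider name are hypothetical, not the actual code from the .epe):

numPatterns = 2
patterns = array(numPatterns)
patterns[0] = autoplay       //fill with the function name of each pattern
patterns[1] = anotherPattern //hypothetical second pattern
patternIdx = 0

//exported functions whose names start with "slider" appear as UI sliders (0..1)
export function sliderPattern(v) {
  patternIdx = min(floor(v * numPatterns), numPatterns - 1)
}

//each frame, the framework would then run patterns[patternIdx](delta), which sets `renderer`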

  8. PROGRAM SEQUENCE
    All the patterns to execute are written below this point. In my code there is nothing here because I use the manual selector, but it works exactly the same way:

q("Name of the pattern function") //Executes the pattern

There are a bunch of ways to execute patterns depending on conditions (this is what section “4. Pattern and Command Queue” is for). I have not dived into it in detail yet; I’ll update this occasionally.

Hopefully this will help beginners who want to use this awesome code written by Jeff Vyduna.
I will ask my questions in a separate message below, to keep this post as an explanation only.
