Music Sequencer / Choreography

What is it?

The “Music Sequencer” pattern is a framework to help you synchronize your patterns with music. There are separate versions for Pixelblaze v2 and v3.

Demo

Features

  • Pattern queue
  • Timing helpers: Understands phrases, measures, and note durations
  • Debounced beat detection
  • Beat, claps, and highhat callbacks / triggers
  • Tempo estimation + confidence flag
  • Note detection
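The callbacks work by reassignment: the framework's instrument hooks start as no-op functions, and a pattern opts in by overwriting them. A minimal plain-JavaScript sketch of that idea (the names mirror the framework's beatDetected / clapsDetected / hhDetected; everything else here is illustrative, not the pattern's actual code):

```javascript
// Sketch: instrument callbacks start as no-ops; a pattern reassigns them.
let beatDetected = () => {}
let clapsDetected = () => {}
let hhDetected = () => {}

let flashes = 0
// A pattern opts in by reassigning the callback:
beatDetected = () => flashes++

// The detector side just calls whatever is currently assigned
// (onBassRisingEdge is a made-up stand-in for the real detector):
function onBassRisingEdge() { beatDetected() }

onBassRisingEdge()
onBassRisingEdge()
// flashes is now 2
```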

New sound-reactive patterns

It also has a bunch of new 1D sound-reactive patterns. Here are some of my favorites:

Cautionary note

I’ve been working on this for about 8 months. The version with demo code is 1300 LOC - please save often on your Pixelblaze (using ‘clone’ for version control), and also in a separate text document. A pattern this large is more likely to occasionally save in a corrupted state.

It’s up in the pattern library.

8 Likes

For V3 only perhaps…

I was debugging loading this on my v2, and it’s crashing the controller badly. It could be my controller starting to fail, but… it happened on loading this.

Sorry about that. I’ll be working on a stripped down version that’s v2 safe soon. I think it has to do with memory for globals or mishandling a missing sensor board.

It’s not the sensor board - it crashes with or without it. Your demo is amazing, BTW.

This is mind blowing! Very cool effects, and I can’t wait to dive in to your effects engine a bit more! Beat detection, triggers, tempo, and notes? :star_struck:

White space to preserve the mood just a moment..








BTW I’ve taken the pattern off the pattern site temporarily just in case it can brick a V2, until we know more and have a fix or workaround. 1300 lines is likely quite a lot for V2.

2 Likes

OK, I found the issue and have created a workaround. It has to do with the limit on the number of globals or functions (which is 128 on a v2 and 256 on a v3).

I’ve reposted 2 versions - one for v2, and one for v3. They include an invalid line in the .epe that will prevent parsing unsafe code on a v2.

[Edit 3 weeks later]: Note: Firmware 3.19 includes fixes such that exceeding the number of allowed globals will not crash the board.

5 Likes

I have now recovered the v2 from loading the original version. Thank you to @wizard for the time and effort. I have a programmer board, and enough geek/tech experience that I was able to get this recovered. (I’ve also repaired my v3 that went haywire, cause unknown, a few days earlier… it was a bad week for me. That required reflashing too, as it was unwilling to run or update correctly.)

That said, having finally sat down to play with this, Jeff: it’s awesome code, but my biggest frustration is that you overloaded it to the point where it’s not really usable without major effort. You made a wonderful sequencing tool, for example, but I suspect the average user will feel even more overwhelmed than I am - and I’m overwhelmed (in a bad way).

I realize a lot of this code would be redundant, and perhaps a lot of (large?) shared code, but I’d really like to see either a handful of patterns made from this - each pattern on its own, so I could use the PB playlist - or something to much more easily select a pattern. The single slider is really non-intuitive, and it’s not even a simple integer count of patterns where I can see ‘oh, I’m on pattern #7’.

maybe a really stripped down version of this, with ONE pattern, and no sequencer?

The sequencer looks great and super flexible, but I’m still finding it hard to figure out how to use this, in the state it’s in.

I tried it on a matrix, and my first thought was: oh… it’s all 1D patterns. But then why did he overload it onto the 2D and 3D renders (it all ends up using render3D)? It absolutely didn’t do what I expected (it didn’t render the 1D look in a way that took advantage of my 2D layout at all), and again, if I’m confused, I think most people will be even more confused.

Please don’t take any of the above as negative… it’s a fresh set of eyes, and all of the feedback is intended in the best possible way to make this really powerful code more useful to a wide audience.

I’m sure I’ll be digging into this for weeks… lots of good stuff.

I spent a few hours working on a new sound pattern (Chladni sound forms), got frustrated with the noise/gain issue popping up, and said… “ok, let’s see how Jeff handled it in that new thing”, loaded it onto my v3 (finally!), and quickly realized how different this approach was…

Hey, no offense taken. It’s a fair warning: it’s probably not usable for beginners, and it might be frustrating for experts too.

It’s a complex pattern - probably too many renderers in one thing. A demo would definitely be clearer if it kept just a handful of the simple 4-line patterns and left out the stuff with physics, the rain sim, and the smeared spectrum analyzer.

The overloading of the 3D and 2D renderers is just so that it was ready to be used with 2D or 3D applications. I might have made the wrong call on that decision.

I agree that most of it belongs as modules / libs.

I understand the desire to use the separate renderers in PB playlist mode, but mostly my bigger goal was to be able to sequence things down to beat-level accuracy and demo some new sound detection tools. The patterns were more to showcase stuff like using halfnote, phrase, or beatDetected = () => {}.
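The timing helpers mentioned here reduce to simple math on elapsed seconds. A rough plain-JavaScript sketch of how halfnote and beat fall out of the BPM (modeled on the framework's updateTimers(), with 1 standing in for its fixed-point ONE_MINUS_EPSILON constant):

```javascript
// Sketch of the note-fraction timers: each ramps 1 -> 0 over its interval.
const BPM = 120
const SPB = 60 / BPM                 // seconds per beat

function noteTimers(elapsedS) {
  const wholenote = 1 - (elapsedS / (4 * SPB)) % 1 // 1 -> 0 over a whole note
  return {
    wholenote,
    halfnote: (2 * wholenote) % 1,   // 1 -> 0 twice per whole note
    beat:     (4 * wholenote) % 1,   // 1 -> 0 every quarter note
  }
}

// 0.25 s into a pattern at 120 BPM (a whole note spans 2 s):
noteTimers(0.25) // → { wholenote: 0.875, halfnote: 0.75, beat: 0.5 }
```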

Maybe I could/should break out all the separate renderers into their own patterns; I’m not sure how I feel about putting the same initial 450 lines of framework at the top of 10 different separate patterns (or trying to individually pare down each to only the sound and timing bits each one uses). As you pointed out recently, people are frustrated enough trying to grok the PI controller; this might not be a much better experience.

Hey, if anyone names a renderer they really liked in it, I’ll factor one out for anyone who asks, time permitting.

2 Likes

Hi @Jeff ,

I find the sequencer fantastic and would love to separate the renderers, but I find it quite complex and way above my knowledge. If you can spare some time to do a stripped-down version of just one renderer (I personally love flashPosterize!), I believe I could do the same for the rest based on that.

Thanks!

1 Like

Sure! I did it at three levels.

First, with all the framework and helpers preserved, but all other patterns taken out. This one is still large, so I uploaded it to the pattern library.

Then I took out all the parts of the framework that aren’t used by this pattern (tempo detection, note identification, start-on-beat, and claps/highhat detection are all removed), but left the ability to sequence new renderers.

Flash Posterize, minimal framework
/*
  Music Sequencer, with only the Flash Posterize pattern, and all unused
  parts of the framework code removed. Requires the Pixelblaze Sensor Board.
*/


// Values that come from the Sensor Board
export var frequencyData = array(32)
export var energyAverage, maxFrequency, maxFrequencyMagnitude


export function beforeRender(delta) {
  processSound(delta)
  updateTimers(delta)
  currentBeforeRenderer(delta)
}

export function render(index) { render3D(index, index / pixelCount, 0, 0) }
export function render2D(index, x, y) { render3D(index, x, y, 0) }
export function render3D(index, x, y, z) {
  // `renderer()` will be reassigned by your patterns (every beforeRenderer)
  renderer(index, x, y, z);
}



// ************************************************************************
// * SOUND, BEAT AND TEMPO DETECTION                                      *
// ************************************************************************

function processSound(delta) {
  processInstruments(delta)
  inferTempo(delta)
}



// Debounced detectors
{  // Brackets are used at times to enable code folding in the editor.
var minBeatRetrigger = .2 // How much of a currently defined quarter note beat must pass before a detected instrument will retrigger? E.g. use .2 to allow .25 retrigger (e.g. to catch sixteenth note drums)
var beatTimerIdx = 0, clapsTimerIdx = 1, hhTimerIdx = 2
var debounceTimers = array(3)
var beatsToMs = (_beats) => 1000 / BPM * 60 * _beats
debounceTimers.mutate(() => beatsToMs(minBeatRetrigger))
}
function debounce(trigger, fn, timerIdx, duration, elapsed) {
  if (trigger && debounceTimers[timerIdx] <= 0) { 
    fn()
    debounceTimers[timerIdx] = duration
  } else { 
    debounceTimers[timerIdx] = max(-3e4, debounceTimers[timerIdx] - elapsed)
  }
}


var bass, maxBass, bassOn    // Bass and beats
var bassSlowEMA = .001, bassFastEMA = .001 // Exponential moving averages to compare to each other
var bassThreshold = .02      // Raise this if very soft music with no beats is still triggering the beat detector
var maxBass = bassThreshold  // Maximum bass detected recently (while any bass above threshold was present)

// Redefine these in your patterns to do something that reacts to these instruments
function beatDetected() {}

function processInstruments(delta) {
  // Assume Sensor Board updates at 40Hz (25ms); Max BPM 180 = 333ms or 13 samples; Typical BPM 500ms, 20 samples
  // Kickdrum fundamental 40-80Hz. https://www.bhencke.com/pixelblaze-sensor-expansion
  bass = frequencyData[1] + frequencyData[2] + frequencyData[3]
  maxBass = max(maxBass, bass)
  if (maxBass > 10 * bassSlowEMA && maxBass > bassThreshold) maxBass *= .99 // AGC - Auto gain control
  
  bassSlowEMA = (bassSlowEMA * 999 + bass) / 1000
  bassFastEMA = (bassFastEMA * 9 + bass) / 10
}

var bassVelocitiesSize = 5 // 5 seems right for most. Up to 15 for infrequent bass beats (slower reaction, longer decay), down to 2 for very fast triggering on doubled kicks like in drum n bass
var bassVelocities = array(bassVelocitiesSize) // Circular buffer to store the last 5 first derivatives of the `fast exponential avg/MaxSample`, used to calculate a running average
var lastBassFastEMA = .5, bassVelocitiesAvg = .5
var bassVelocitiesPointer = 0 // Pointer for circular buffer


function inferTempo(delta) {
  bassVelocities[bassVelocitiesPointer] = (bassFastEMA - lastBassFastEMA) / maxBass // Normalized first derivative of fast moving expo avg
  bassVelocitiesAvg += bassVelocities[bassVelocitiesPointer] / bassVelocitiesSize
  bassVelocitiesPointer = (bassVelocitiesPointer + 1) % bassVelocitiesSize
  bassVelocitiesAvg -= bassVelocities[bassVelocitiesPointer] / bassVelocitiesSize
  bassOn = bassVelocitiesAvg > .51 // `bassOn` is true when bass is rising
  
  debounce(bassOn, beatDetected, beatTimerIdx, beatsToMs(minBeatRetrigger), delta)

  lastBassFastEMA = bassFastEMA
}


// ************************************************************************
// * PATTERN AND COMMAND QUEUE                                            *
// ************************************************************************


export var BPM = 120        // Nominal BPM. Can be set mid-sequence with setBPM(bpm) or setBPMToDetected()
var SPB                     // Inferred "seconds per beat" from BPM
var beatsPerMeasure = 4
var beatsPerPhrase = 32     // A phrase is the default duration in beats when `enqueue(pattern)` is called with no second argument for duration.
var currentPatternMs = 0    // ms into the current pattern, wrapped back to 0 after 32000 ms
var currentPatternS = 0     // Seconds into the current pattern
var currentPatternDuration = 0  // Current pattern's total duration, in seconds
var currentPatternBeats = 0     // Current pattern's total duration, in beats
var currentPatternPct = 0       // 0..1 like time(), this is the percentage of the current pattern that has run
var beatCount                   // Number of beats into the current pattern, as an increasing decimal counter
var phrasePct                   // Percent into the current phrase, as defined by beatsPerPhrase

// Percent remaining in wholenote, halfnote, beat (quarternote), 8th, 16th, and measure (as defined by `beatsPerMeasure`)
// These go from 1 to 0 and jump back suddenly to 1 on the next interval, i.e., they have the opposite ramp of `time()`, `currentPatternPct`, and `phrasePct`
var measure, wholenote, halfnote, beat, note_8, note_16 
var currentBeforeRenderer       // A reference to the current pattern's beforeRender() equivalent. This is always called in beforeRender, and it should set `renderer` to a function like render3d(i, x, y, z)
var totalPatternCount = 0       // `enqueue()` increments this as patterns are added to the queue
var currentPatternIdx = 0             // Main index for the queue (which pattern is currently playing).
var beforeRendererQueue = array(256)  // This is the main pattern queue, storing beforeRenderers()
var durationQueue = array(256)        // Stores the duration for each corresponding pattern in beforeRendererQueue, in units of beats. Otherwise, for commands (immediate single execution), the entry is an argument passed to the function.
var continueModeQueue = array(256)    // Modality for when to proceed to the next entry in the queue
var continueMode                      // 0: Continue after a specified duration.  1: After beat detected or the duration.  2: After volume spikes or the duration.  9: Execute once immediately and proceed.


// enqueue(BRFn) - Add an action (a pattern, delay, or command) to the queue. Aliased as `q(BRFn)`.
/*
  BRFn: A beforeRender(delta) function. It should assign a `renderer = (index, x, y, z) => {}`
        If continueMode == 9, this is a command function that will be executed once, after which the queue proceeds

  _beats: The duration this renderer will execute for, in beats at the current BPM
          If continueMode == 1 or 2, the pattern may plan for less time if an audio condition is detected from an attached sensor board
          if continueMode == 9, this value will be passed as an argument to the command function

  continueMode: 0 - Proceed to the next pattern or command in the queue once the duration has expired
                1 - Like 0, but if a bass pulse is detected, then proceed to the next pattern early
                2 - Like 0, but if a volume spike is detected, such as from silence to any sound, then proceed to the next pattern early
                3-8 - Reserved for future use
                9 - Execute BRFn() once immediately with _beats passed as an argument, then proceed
                Anything else - expect a function reference. Execute the function and proceed if it evaluates truthy.
*/

function enqueue(BRFn, _beats, continueMode) {
  beforeRendererQueue[totalPatternCount] = BRFn
  durationQueue[totalPatternCount] = _beats
  continueModeQueue[totalPatternCount] = continueMode
  totalPatternCount++
}
q = enqueue     // Shorthand you'll appreciate when using this a lot

// These are "commands" in that they may make a change to a global (like the BPM tempo), but they execute once instantly instead of running for a specified duration.

// Note: setBPM(<30) screws up beat detection, missing beats. It's complicated why, and isn't worth refactoring for.
function setBPM(_bpm) {
  enqueue((__bpm) => BPM = __bpm, _bpm, 9)
}

// updateTimers() is called in beforeRender(). Updates timers that patterns can use, like 
// `beat` (% of beat remaining), etc. Also determines when a pattern is complete and it's 
// time to go to the next thing in the queue.
var ONE_MINUS_EPSILON = (0xFF >> 16) + (0xFF >> 8) // One minus the smallest number. Highest result of `x % 1`.

function updateTimers(delta) {
  currentPatternMs += delta
  if (currentPatternMs > 32000) { currentPatternMs -= 32000 }
  currentPatternS += delta / 1000
  if (currentPatternS >= currentPatternDuration) next()
  currentPatternPct = currentPatternS / currentPatternDuration

  SPB = 60 / BPM  // Seconds per beat
  beatCount = currentPatternS / SPB
  phrasePct = currentPatternS / (beatsPerPhrase * SPB) % 1
  measure = ONE_MINUS_EPSILON - currentPatternS / (beatsPerMeasure * SPB) % 1
  wholenote = ONE_MINUS_EPSILON - currentPatternS / (4 * SPB) % 1
  halfnote = 2 * wholenote % 1
  beat = 4 * wholenote % 1
  note_8 = 8 * wholenote % 1
  note_16 = 16 * wholenote % 1
}

// Code to run once between patterns to reset shared state
function beforeNext() {
  // Clear shared variables
  for (i = 0; i < pixelCount + 1; i++) {
    hArr[i] = 0; sArr[i] = 1; vArr[i] = 0
  }
  setupDone = 0    // Allow any setup block defined to execute once
  lastTrigger = -1 // Clear the rising/falling edge trigger
  beatDetected = clapsDetected = hhDetected = () => {}  // Unassign any instrument-reactive functions
}

// Start the next pattern in the queue, beginning `startAtMs` milliseconds into it (usually 0)
function next(startAtMs) {
  beforeNext()
  
  currentPatternMs = startAtMs
  currentPatternS = currentPatternMs / 1000
  currentPatternIdx++
  
  if (currentPatternIdx >= totalPatternCount) {
    begin() // loop
    return
  }
  
  currentPatternBeats = durationQueue[currentPatternIdx]
  continueMode = continueModeQueue[currentPatternIdx]
  
  if (continueMode <= 2) { // Run pattern for specified duration (0) or until beat detected (1) or until volume spikes (2)
    currentBeforeRenderer = beforeRendererQueue[currentPatternIdx]
    currentPatternBeats = currentPatternBeats || beatsPerPhrase
    currentPatternDuration = currentPatternBeats * 60 / BPM
  }
}

function begin() { // This must appear at the very end of the sequence / queue definition, usually the last line of the entire pattern
  currentPatternIdx = -1
  next()
}







// ************************************************************************
// * YOUR PATTERNS                                                        *
// ************************************************************************


//  SHARED VARIABLES that multiple patterns might use

var hArr = array(pixelCount + 1) // An extra value avoids index errors in interpolation loops
var sArr = array(pixelCount + 1)
var vArr = array(pixelCount + 1)


// HELPERS - Code that multiple patterns might use


// RENDERERS

// For the demo version of this pattern, these are very dense to reduce LOC. Sorry.

function off(delta) { renderer = (i, x, y, z) => hsv(0, 0, 0) }

// Gradients that posterize quickly into segments
// Creates segments by detecting zero-crossings in this: https://www.desmos.com/calculator/9z8twmylka
function gapGen(x, p) { return (wave(x/5/p)*wave(x/2) + wave(x/3/p)*wave(x/7)) / 2 - .25 }
function fillHue(l, r, h) { for (i = l; i < r; i++) hArr[i] = h }
var posterize = 0
function togglePosterize() { posterize = !posterize }
function flashPosterize() {
  beatDetected = togglePosterize
  gapParam = 1 + .4 * triangle(phrasePct) // This animates the posterized segments lengths
  FPLastSign = -1 // Init value. Will be 0 for gapGen(x) <= 0, 1 for positive gapGen(x)
  FPSegmentStart = 0
  FPHFn = (pct) => .3 + pct + time(5 / 65.536) // Hue function for gradient
  
  renderer = (i, x, y, z) => {
    pct = i / pixelCount

    if (posterize) {
      hsv(hArr[i], 1, (hArr[i] == hArr[max(0, i - 1)]))
    } else {
      hsv(FPHFn(pct), .75, .7)
    }

    // Calculate posterized segments for next frame
    var f = gapGen(5 + 10 * pct, gapParam) // Animate pct's coefficient to vary segment frequency
    if (FPLastSign != f > 0) { // Detect a zero crossing in the gap function
      fillHue(FPSegmentStart, i, FPHFn((FPSegmentStart + i) / 2 / pixelCount))
      FPSegmentStart = i
    }
    FPLastSign = f > 0
  }
}

// ************************************************************************
// * PROGRAM SEQUENCE - A series of calls to enqueue()                    *
// ************************************************************************

setBPM(120)

q(flashPosterize, 512)       // 512 beats. Pattern then resets / loops

begin()                       // Every sequence must end with `begin()` :)
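As an aside, the queue plumbing above can be boiled down to a few lines of plain JavaScript to see the control flow in isolation - a sketch only, with durations, timers, and rendering omitted:

```javascript
// Minimal sketch of the enqueue/next/begin flow from the framework above.
const queue = []          // each entry: { fn, beats }
let totalPatternCount = 0
let currentPatternIdx = 0
let currentFn = null

function enqueue(fn, beats) {
  queue[totalPatternCount++] = { fn, beats }
}

function next() {
  currentPatternIdx++
  if (currentPatternIdx >= totalPatternCount) return begin() // loop
  currentFn = queue[currentPatternIdx].fn
}

function begin() {        // must be the last call, like in the pattern
  currentPatternIdx = -1
  next()
}

enqueue(() => 'A', 32)
enqueue(() => 'B', 16)
begin()
// currentFn() is 'A'; after next() it's 'B'; one more next() wraps back to 'A'
```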


And finally, here is the most minimal version possible, with the entire sequencer framework removed:

Flash Posterize only
/*
  Flash Posterize only: the entire sequencer framework is removed, keeping
  just the bass beat detection. Requires the Pixelblaze Sensor Board.
*/


// Values that come from the Sensor Board
export var frequencyData = array(32)
export var energyAverage, maxFrequency, maxFrequencyMagnitude

export var hArr = array(pixelCount + 1) // An extra value avoids index errors in interpolation loops

// Gradients that posterize quickly into segments
// Creates segments by detecting zero-crossings in this: https://www.desmos.com/calculator/9z8twmylka
function gapGen(x, p) { return (wave(x/5/p)*wave(x/2) + wave(x/3/p)*wave(x/7)) / 2 - .25 }
function fillHue(l, r, h) { for (i = l; i < r; i++) hArr[i] = h }
var posterize = 0
function beatDetected() { posterize = !posterize }

export function beforeRender(delta) {
  processBass(delta)

  gapParam = 1 + .4 * triangle(time(16 / 65.536)) // This animates the posterized segments lengths
  FPLastSign = -1 // Init value. Will be 0 for gapGen(x) <= 0, 1 for positive gapGen(x)
  FPSegmentStart = 0
  FPHFn = (pct) => .3 + pct + time(5 / 65.536) // Hue function for gradient
}

export function render(i) {
  pct = i / pixelCount

  if (posterize) {
    hsv(hArr[i], 1, (hArr[i] == hArr[max(0, i - 1)]))
  } else {
    hsv(FPHFn(pct), .75, .7)
  }

  // Calculate posterized segments for next frame
  var f = gapGen(5 + 10 * pct, gapParam) // Animate pct's coefficient to vary segment frequency
  if (FPLastSign != f > 0) { // Detect a zero crossing in the gap function
    fillHue(FPSegmentStart, i, FPHFn((FPSegmentStart + i) / 2 / pixelCount))
    FPSegmentStart = i
  }
  FPLastSign = f > 0
}


// ************************************************************************
// * SOUND, BEAT AND TEMPO DETECTION                                      *
// ************************************************************************


var bass, maxBass, bassOn    // Bass and beats
var bassSlowEMA = .001, bassFastEMA = .001 // Exponential moving averages to compare to each other
var bassThreshold = .02      // Raise this if very soft music with no beats is still triggering the beat detector
var maxBass = bassThreshold  // Maximum bass detected recently (while any bass above threshold was present)

var bassVelocitiesSize = 5 // 5 seems right for most. Up to 15 for infrequent bass beats (slower reaction, longer decay), down to 2 for very fast triggering on doubled kicks like in drum n bass
var bassVelocities = array(bassVelocitiesSize) // Circular buffer to store the last 5 first derivatives of the `fast exponential avg/MaxSample`, used to calculate a running average
var lastBassFastEMA = .5, bassVelocitiesAvg = .5
var bassVelocitiesPointer = 0 // Pointer for circular buffer
var bassDebounceTimer = 0

function processBass(delta) {
  // Assume Sensor Board updates at 40Hz (25ms); Max BPM 180 = 333ms or 13 samples; Typical BPM 500ms, 20 samples
  // Kickdrum fundamental 40-80Hz. https://www.bhencke.com/pixelblaze-sensor-expansion
  bass = frequencyData[1] + frequencyData[2] + frequencyData[3]
  maxBass = max(maxBass, bass)
  if (maxBass > 10 * bassSlowEMA && maxBass > bassThreshold) maxBass *= .99 // AGC - Auto gain control
  
  bassSlowEMA = (bassSlowEMA * 999 + bass) / 1000
  bassFastEMA = (bassFastEMA * 9 + bass) / 10

  bassVelocities[bassVelocitiesPointer] = (bassFastEMA - lastBassFastEMA) / maxBass // Normalized first derivative of fast moving expo avg
  bassVelocitiesAvg += bassVelocities[bassVelocitiesPointer] / bassVelocitiesSize
  bassVelocitiesPointer = (bassVelocitiesPointer + 1) % bassVelocitiesSize
  bassVelocitiesAvg -= bassVelocities[bassVelocitiesPointer] / bassVelocitiesSize
  bassOn = bassVelocitiesAvg > .51 // `bassOn` is true when bass is rising

  if (bassOn && bassDebounceTimer <= 0) { 
    beatDetected()
    bassDebounceTimer = 100 // ms
  } else { 
    bassDebounceTimer = max(-3e4, bassDebounceTimer - delta)
  }
  lastBassFastEMA = bassFastEMA
}
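The debounce at the end of processBass() is what suppresses retriggers. Isolated as plain JavaScript (a sketch of the same logic, not the pattern code itself), it behaves like this:

```javascript
// Sketch of the debounce around beatDetected(): once triggered, further
// triggers are ignored until `duration` ms have elapsed.
let timer = 0
let fires = 0

function debounce(trigger, fn, duration, elapsedMs) {
  if (trigger && timer <= 0) {
    fn()
    timer = duration
  } else {
    timer = Math.max(-3e4, timer - elapsedMs) // clamp so the timer can't underflow far
  }
}

// Simulate 10 frames, 25 ms apart, with the trigger held high the whole time:
for (let frame = 0; frame < 10; frame++) {
  debounce(true, () => fires++, 100, 25)
}
// With a 100 ms window and 25 ms frames, the callback fires on frames 0 and 5 → 2 times
```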

3 Likes

Fantastic, thanks a lot for sparing the time!

Having 3 different versions makes for a better understanding at what each section of the code is responsible for.

So, does that work on a v2?! I also tried it a couple of times, but right after loading it, the controller crashed and I had to restart it. Meanwhile, when I do it on my sister’s v3, everything goes just fine! Could you explain why that happens? I even tried it on another v2, and the result is still the same. I would like to use the sequencer with my royalty-free music library - I’ve got an insane variety of music there, and I want to see the effects while listening to it!

You shouldn’t use this main code on a v2 - there is a known bug that can cause problems that aren’t easily fixed. @jeff released a v2-specific version, but if you use the v3 version on a v2, you are doing so at your own risk, since he added a line you must manually uncomment to enable the pattern.

2 Likes

Also, recent firmware fixed that specific bug, so assuming people upgraded first, they would receive a red error in the editor about having too many global variables on the stack.

2 Likes

I’ve only just had a chance to look at this and it’s an impressive bit of work Jeff, thank you for sharing it!

I was looking at the way you’ve implemented the beat detection in particular and most of it makes sense. There’s one bit I’m not sure about though which is this line here:

if (beatIntervals[0] != 0) estimateBPM() // We have all 8 beatIntervalSamples, so estimate tempo

It seems at odds with the comment and I’m wondering if that should instead be:

if (beatIntervals[beatIntervalSamples - 1] != 0) estimateBPM()

or do I misunderstand what it’s trying to do?

Hey Chris! Thanks for checking it out and diving in.

Maybe this isn’t the most straightforward program flow, but this is how I made it:

  1. Store sample
  2. Advance pointer
  3. Test if position 0 has a nonzero value

It needs to have 9 detected beats to compute 8 intervals between them. Therefore, in its initial state (or if more than 5 seconds has passed), once it detects the first beat it will always store a zero interval in index 0. The first non-zero interval is found and stored after the second beat in index 1. On the 9th beat, it stores a non-zero interval back in index 0 and knows it can now try to estimate the BPM.

I think I could have done it in a clearer way too where interval 0 is stored only after the second beat is detected. Then that part would read better. I definitely should have commented this.
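That store → advance → test flow can be sketched in plain JavaScript to see why the 9th beat is the first one that arms the estimate (variable names here are illustrative, not the pattern's actual code):

```javascript
// Sketch of the ring buffer: 8 interval slots; the first beat stores a 0,
// so index 0 only holds a real interval once the 9th beat wraps around.
const beatIntervalSamples = 8
const beatIntervals = new Array(beatIntervalSamples).fill(0)
let ptr = 0
let lastBeatMs = null
let ready = false

function onBeat(nowMs) {
  const interval = lastBeatMs === null ? 0 : nowMs - lastBeatMs // first beat: 0
  lastBeatMs = nowMs
  beatIntervals[ptr] = interval          // 1. store sample
  ptr = (ptr + 1) % beatIntervalSamples  // 2. advance pointer
  ready = beatIntervals[0] !== 0         // 3. test if position 0 is nonzero
}

for (let beat = 0; beat < 9; beat++) onBeat(beat * 500) // steady 120 BPM
// `ready` only becomes true on the 9th beat, when index 0 is overwritten
// with a real 500 ms interval
```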

1 Like

Ah OK I see where you’re coming from now, thank you. But wouldn’t that only be the correct thing to do if you were to always store the time since the start, then in estimateBPM() take diffs between neighbouring elements in the beatIntervals array (e.g. beatIntervals[1] - beatIntervals[0], …) to determine the intervals. Instead you’re storing the interval since the last beat (since you reset beatIntervalTimer each time), and later in estimateBPM() you diff the intervals directly against the mean one array element at a time. So only 8 samples are required?

[EDIT: OK now I think I see what I’m missing. It looks like the first element/sample does in fact get set to zero rather than the interval, so that one effectively gets ignored/overwritten by the 9th sample before calling estimateBPM()]
[EDIT #2: Hmm, not quite - it only gets set to zero on the first iteration if bassOn is true, which I don’t think it is. So I still think something is amiss here?]

The reason I’ve been looking at this is to see if I can find a way to make it recover from bad samples a bit more quickly/smoothly. Currently a single extraneous (or missed) beat causes the reliability flag to reset and then take another 8 beats (i.e. a few seconds) to recover. I’ll probably play around with throwing out the min & max of the 8 values in the estimateBPM() code, change the number of samples, and other things like that to see if it helps any.