Proof of concept: Shipping audio analysis from my laptop to a sensorboard-less Pixelblaze

This library is very much a WIP, but this is too fun not to share. There’s definitely still work to do on tuning the FFT analysis, which you might notice in the somewhat spotty beat detection of the pattern that’s playing (itself a WIP sound-reactive pattern targeted at round layouts).

Code is available on GitHub and has only been tested on OSX. If anyone is able to get it running on Windows/Linux, a PR with directions would be so welcome.

Many thanks to @wizard for sharing the OTA sensor data protocol details.

I don’t have a Pixelblaze at hand, but cargo build got me a working binary and the following, so I expect it to work!

  0: default
  1: samplerate
  2: oss
  3: pulse
  4: hw:CARD=PCH,DEV=2
  5: hw:CARD=C920,DEV=0
  6: hw:CARD=N2,DEV=0
Multiple input devices available, select by number:
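
(That menu is just cpal’s input-device enumeration, by the look of it. In case it’s useful, here’s a rough sketch of how a list like that gets produced — my guess at the approach, not the project’s actual code:)

    use cpal::traits::{DeviceTrait, HostTrait};

    fn main() {
        let host = cpal::default_host();
        // List every capture device the active backend exposes (ALSA here),
        // numbered so the user can pick one.
        let devices = host.input_devices().expect("device enumeration failed");
        for (i, device) in devices.enumerate() {
            println!("  {i}: {}", device.name().unwrap_or_else(|_| "<unknown>".into()));
        }
        println!("Multiple input devices available, select by number:");
    }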

That’s great to hear! Any chance you’re willing to try creating an audio loopback with a split stream going to it? I don’t have a Linux box around to try it on. On OSX it takes a third-party program; I used BlackHole.

On Linux I can do that with the standard PulseAudio volume control dialog. I’ll give it a shot next time I have some hacking time, but don’t hold your breath for it. :wink:

Nice!

Not working right now on Windows, because cpal doesn’t quite do the right thing with fixed buffer sizes on the default (WASAPI) audio backend. What it seems to be doing is this:

  1. allocate a buffer of at least the specified size
  2. read audio data incrementally, so the buffer exposed to the Rust callback may be (and usually is) shorter than the requested size, which of course breaks the easy FFT codepath

This is a known set of issues, and could probably be solved by using the fancier (ASIO) backend. That requires quite a lot more setup, though.
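
For the record, here’s a minimal sketch of what I’m seeing (it assumes cpal 0.15’s four-argument build_input_stream; the 1024-frame request is just a placeholder):

    use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};

    fn main() {
        let host = cpal::default_host();
        let device = host.default_input_device().expect("no input device");
        let mut config: cpal::StreamConfig =
            device.default_input_config().expect("no default config").into();
        // Ask for a fixed 1024-frame buffer; wasapi treats this as a minimum.
        config.buffer_size = cpal::BufferSize::Fixed(1024);

        let stream = device
            .build_input_stream(
                &config,
                |data: &[f32], _: &cpal::InputCallbackInfo| {
                    // On wasapi this is often < 1024 despite the Fixed request,
                    // which is what breaks a power-of-two-only FFT.
                    println!("callback got {} samples", data.len());
                },
                |err| eprintln!("stream error: {err}"),
                None, // no timeout
            )
            .expect("failed to build stream");
        stream.play().expect("failed to start stream");
        std::thread::sleep(std::time::Duration::from_secs(3));
    }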

Hmm, interesting. The easy-mode FFT library I’m using wraps microfft and is no_std in order to support embedded dev (I’ll probably put that to use at some point). With those constraints it’s no wonder they opt for power-of-two-only sizes. I could pretty easily add zero-padding, though of course the signal quality would likely be significantly worse.

The joy of cross-platform development! You can always count on at least one of the major platforms to do something weird. Zero-padding as a fallback would likely be OK - the smallest buffer I saw was around 960 bytes.

Also, if anybody really wants to use a Windows machine for this, they can set up the ASIO backend. It’s a bit of work, but latency is much, much lower and channel counts can be a lot higher too. It’s very fast and very nice - I used it with Pro Tools, Cubase, etc., back when I was doing a lot of audio work.

Ok, pushed up some zero-padding - mind giving it a run?

It works! Hurrah!

I had to make a small change to the code that does the padding, though - it wasn’t happy using copy_from_slice(audio) on the zero-length target, since copy_from_slice requires the source and destination lengths to match exactly. Here’s what I wound up with (bearing in mind that I haven’t spent much time with Rust, and might’ve got it spectacularly wrong!):

        // `padding_buf` is a reusable Vec<f32> scratch buffer owned by the caller.
        let audio = if audio.len().is_power_of_two() {
            audio
        } else {
            // The default windows machinery doesn't always send a full buffer.
            // Might as well pad instead of panicking.
            let new_len = audio.len().next_power_of_two();
            padding_buf.clear();
            padding_buf.extend_from_slice(audio);
            // Zero-fill the tail out to the next power of two.
            padding_buf.resize(new_len, 0.0);
            &padding_buf[..]
        };

Fixed on main, thanks so much!

I already told you this on your post in the LEDs are Awesome group, but I’m so excited for this pattern, and it looks amazing.

It’s a perfect fit for what I’m trying to do for a helmet for Burning Man, assuming (1) I get that project done (looks likely; I just finished the proof of concept), (2) the pattern looks as awesome as I think it will, and (3) it works as just a normal pattern on a v3 with the Sensor Expansion Board.

I’ll report back with photos and documentation if so!

My stretch goal for my project was a sound-reactive pattern that “worked” and looked great, and yours will be better than anything in the current pattern library for something “centered” like the helmet will be.

Works great! I hooked it up to my webcam audio (via the PulseAudio GUI) and was able to make pulses by clapping. No video because it involved alligator clips on a maybe-too-embedded-already WIP, so you’ll have to take my word for it. :wink:

It took me a while to notice that the sound pattern I was using cleverly makes up its own input when light == -1, that is, when no sensor board is connected.

@wizard, @hbeck, @sorceror - have you seen the following on v3.45:

  • set the Pixelblaze to a sound-reactive pattern, run the sound packet sender, and watch the pretty lights
  • leaving the packet sender running, unplug the Pixelblaze for a few seconds, then plug it back in. It will power up with the same pattern you had running before, but…
  • there is now no sound reactivity. It doesn’t appear to be receiving the packets.
  • if you go to the web UI and reload the pattern, it starts working again (the same if you reload the running pattern using the Python or another client)

Behavior is the same whether I’m sending broadcast or point-to-point. Is this a known issue? Can we poke the Pixelblaze some way other than reloading the current pattern to get it listening again, or is it a matter of implementing more of the leader/follower protocol?
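
(For anyone who wants to script the reload workaround in the meantime, here’s a rough, untested sketch. It assumes the tungstenite crate and the activeProgramId command used by the community websocket clients; the IP and pattern id are placeholders:)

    use tungstenite::{connect, Message};

    fn main() {
        // Pixelblaze exposes a JSON-over-websocket API on port 81.
        let (mut socket, _response) =
            connect("ws://192.168.1.50:81").expect("websocket connect failed");
        // Re-sending the current pattern's id reloads it, which restarts
        // sensor-packet processing (the same workaround as the web UI reload).
        let cmd = r#"{"activeProgramId": "PLACEHOLDER_PATTERN_ID"}"#;
        socket.send(Message::from(cmd)).expect("send failed");
    }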

(While everybody was off getting muddy, I got started building a new audience participation engine for the next-gen Titanic’s End. When done, it’ll (at the very least) send audio, beat, and other control information from Chromatik to all Pixelblazes on its wifi segment. General-purpose standalone prototype soon! I want to get the protocol dead reliable first, though.)
