Have you ever seen a Philips TV with Ambilight technology on board? That is pretty much what addressable LED strips are capable of. It looks gorgeous behind a flatscreen, but it could look even better filling an entire room, with the light coming, for instance, from the ceiling. You might find it an unnecessary toy, but once you think about possible applications (other than converting your living room into a club), it starts to make sense.
I must be bored or something
I do not quite recall how I came across this technology, as even today it does not seem to have gained much popularity. I believe it was around 2015-2016. My first experience was with an Arduino and the NeoPixel library, but I quickly ran into the library's limitations. Upgrading to a Raspberry Pi did not do the trick either. I was looking for a way to control long strips at high frame rates, and it turned out I had to work out a custom solution to achieve that. To kick off my addressable LED journey, I also needed a way to test performance against some dynamic input. Ambilight uses the image; I decided to go with sound.
The current architecture is the most basic client-server setup. The ESP32 worker module (written in C) listens for UDP packets from the client, a Python program running on a computer with Linux, Windows, or macOS (I have not run it on a Mac yet, though). The client sends ready-to-use color data to the server's socket, and the server emits it to the LED strip. As always, it is all about communication. Implementing simple opcodes on top of the UDP protocol, plus proper use of timeouts on both the server and the client, did the trick. Over 2.4 GHz wireless, I achieve anywhere between 65 and 140 FPS while controlling 512 pixels (3 color channels per pixel).
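To make the idea concrete, here is a minimal sketch of what such a client could look like. The actual opcodes of my firmware are not shown in this post, so `OPCODE_FRAME`, `pack_frame`, and `send_frame` are hypothetical names I made up for illustration; the demo talks to a local stand-in socket instead of a real ESP32.

```python
import socket

# Hypothetical opcode value; the real firmware's opcode table is not public.
OPCODE_FRAME = 0x01

def pack_frame(pixels):
    """Pack one opcode byte followed by raw RGB triplets."""
    payload = bytearray([OPCODE_FRAME])
    for r, g, b in pixels:
        payload += bytes((r, g, b))
    return bytes(payload)

def send_frame(sock, addr, pixels, timeout=0.05):
    """Fire-and-forget a frame; a short timeout keeps the loop from stalling."""
    sock.settimeout(timeout)
    sock.sendto(pack_frame(pixels), addr)

# Demo against a local stand-in "server" socket instead of the ESP32:
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))       # OS picks a free port
server.settimeout(1.0)
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pixels = [(255, 0, 0)] * 4          # four red pixels
send_frame(client, addr, pixels)

data, _ = server.recvfrom(2048)
print(data[0], len(data) - 1)       # → 1 12 (opcode, payload bytes)
```

UDP fits this job well: a dropped frame is invisible at 60+ FPS, so there is no point paying TCP's retransmission cost.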
Hear me out
For testing purposes, it was more convenient to generate input for the strip from sound than from image or video (I will implement those some time anyway). The PyAudio library comes in handy here. Implementing a simple real-time amplitude analysis was fast and easy with PyAudio (as long as you remember to stay on Python <= 3.6), but let's complicate things a bit. I wanted freedom of choice regarding the source of the music: PyAudio should listen to all sounds on my computer, so that whether I play music from Spotify, YouTube, iTunes, or even Cubase, I would not need to change a thing in the configuration. This is the tricky part, as you get it done by configuring a loopback device and feeding the input from that device to PyAudio. The configuration differs from platform to platform and might require installing additional drivers.
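The amplitude analysis itself can be sketched in a few lines. This is my own minimal take, not the post's actual code: `rms_amplitude` is a hypothetical helper, and since I cannot open a real loopback device here, the demo feeds it a synthetic sine chunk; the commented snippet shows roughly where a PyAudio stream would plug in.

```python
import math
import struct

CHUNK = 1024          # frames per buffer, as you'd pass to PyAudio
SAMPLE_MAX = 32768.0  # int16 full scale

def rms_amplitude(chunk_bytes):
    """Root-mean-square amplitude of a chunk of 16-bit mono samples,
    normalized to the 0.0-1.0 range."""
    count = len(chunk_bytes) // 2
    samples = struct.unpack("<%dh" % count, chunk_bytes)
    return math.sqrt(sum(s * s for s in samples) / count) / SAMPLE_MAX

# In the real setup the chunk comes from a PyAudio stream opened on the
# loopback device (device index found via get_device_info_by_index):
#   pa = pyaudio.PyAudio()
#   stream = pa.open(format=pyaudio.paInt16, channels=1, rate=44100,
#                    input=True, input_device_index=loopback_index,
#                    frames_per_buffer=CHUNK)
#   chunk_bytes = stream.read(CHUNK)

# Demo: a full-scale 440 Hz sine; the RMS of a sine is 1/sqrt(2) ≈ 0.707.
sine = struct.pack("<%dh" % CHUNK,
                   *(int(32767 * math.sin(2 * math.pi * 440 * i / 44100))
                     for i in range(CHUNK)))
print(round(rms_amplitude(sine), 2))
```

RMS is a reasonable default because it tracks perceived loudness better than the raw peak of each chunk does.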
The measured input amplitude is then interpolated to a set number of pixels, and the magic begins.
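One simple way to read "interpolated to a set number of pixels" is a VU-meter-style mapping; this is my assumption about the effect, not the post's actual algorithm, and `amplitude_to_pixels` is a name I invented for the sketch:

```python
NUM_PIXELS = 512  # matches the strip size mentioned above

def amplitude_to_pixels(amplitude, num_pixels=NUM_PIXELS):
    """VU-meter sketch: light the first `amplitude * num_pixels` pixels,
    fading from green at the bottom to red at the top of the strip."""
    lit = round(max(0.0, min(1.0, amplitude)) * num_pixels)
    pixels = []
    for i in range(num_pixels):
        if i < lit:
            t = i / max(1, num_pixels - 1)   # 0.0 -> 1.0 along the strip
            pixels.append((int(255 * t), int(255 * (1 - t)), 0))
        else:
            pixels.append((0, 0, 0))         # off
    return pixels

frame = amplitude_to_pixels(0.5)
print(sum(1 for p in frame if p != (0, 0, 0)))  # → 256
```

The returned list of RGB triplets is exactly the "ready color data" shape the client ships to the ESP32 each frame.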
One of the most obvious next steps is integration with inaAmbient. So far, inaPixel does not feature any front-end (it did not need one), and the one from inaAmbient is convenient; I have already implemented support for addressable LED strips there. The only thing I need to migrate is the communication layer from inaAmbient, as it already has a specific method for registering devices, and inaPixel should adapt to it. When that is done, I will definitely move on to implementing more input types and sources.