
It takes about 10 hours per minute of song to make a sequence like that. Imagine if AI could help speed that up!


I wonder if you can break down the sequences into segments (parts) and then the AI doesn't have to know how to control LEDs directly, but can instead put sequences together in accordance with the music.

Maybe even process MIDI files somehow ...
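The segment idea could be sketched like this: keep a library of hand-built effects and have the model (or a rule set) choose which pre-built segment covers each labeled span of the song, so it never touches raw LED channel values. All the names and labels below are hypothetical, not from any actual sequencer.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    name: str   # a hand-built effect from the sequencer's library
    beats: int  # how many beats the effect spans

def assemble(song_sections, library):
    """Map each labeled song section (e.g. from MIDI markers or a beat
    tracker) to a pre-built effect; the model only picks *which* segment
    plays when, not how to drive the LEDs directly."""
    return [(start_beat, library[label]) for start_beat, label in song_sections]

# (start beat, section label) pairs, e.g. derived from a MIDI marker track
sections = [(0, "verse"), (32, "chorus")]
library = {
    "verse": Effect("slow_fade", 32),
    "chorus": Effect("strobe_burst", 16),
}
timeline = assemble(sections, library)
```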


That's basically how we do it as humans. We may have an overall theme in mind for the whole song, but usually you're zoomed into a few seconds of music, adding light effects to it.


I assume you've seen this related thread: https://news.ycombinator.com/item?id=47675446


I had not, thanks! Interestingly, FFT-based approaches to this have been around for a long time, but combining them with transformers could yield interesting new results.
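A minimal sketch of the FFT side of that: split each audio frame into a few frequency bands and let each band's energy drive one group of lights (bass for floods, mids for arches, highs for twinkles). The band edges and the LED mapping here are arbitrary illustrative choices, not from the thread.

```python
import numpy as np

def band_energies(samples, sample_rate,
                  bands=((0, 200), (200, 2000), (2000, 8000))):
    """Return the spectral energy in each frequency band for one frame.

    Each band's value could be scaled into a brightness level for a
    group of LED channels on every animation tick.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for lo, hi in bands]

# A 440 Hz test tone should concentrate its energy in the mid band.
rate = 8000
t = np.arange(1024) / rate
energies = band_energies(np.sin(2 * np.pi * 440 * t), rate)
```

A transformer could then consume these per-frame band vectors as its input sequence instead of raw audio.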


It's interesting to me that you can have something like this that is "hard to build" but "easy to verify" - humans are really good at telling if something is "off" about the visualization.



