By analyzing brain signals, you can build a model that continuously gauges how a listener reacts to music. This data is not only ideal to send to a music streaming service for better recommendations; it enables new recommendation methods altogether. If you can tell which audio features correspond to higher listener ratings, you can recommend music directly by "listening" to it with a model of the listener's preferences. This is in stark contrast to the collaborative filtering algorithms behind modern music recommendation systems, which do not analyze the music they recommend but instead suggest songs liked by people with similar listening patterns. That indirection is one reason the hit rate of music recommendation systems is still fairly low. With a model of a listener's preferences, you can also frame a reinforcement learning problem and make progress in an almost entirely unexplored domain: generating new, personalized music.
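To make the contrast concrete, here is a minimal sketch of the content-based idea: fit a preference model from audio features to listener ratings (here, hypothetical ratings derived from brain responses), then score unheard songs directly from their audio features, with no other users' listening histories involved. All data, feature names, and the linear model are illustrative assumptions, not a proposed implementation.

```python
import numpy as np

# Hypothetical data: per-song audio feature vectors (e.g. tempo, energy,
# spectral centroid) paired with ratings inferred from brain responses.
rng = np.random.default_rng(0)
features = rng.normal(size=(50, 3))     # 50 rated songs, 3 audio features
true_pref = np.array([1.5, -0.7, 0.3])  # listener's (unknown) taste
ratings = features @ true_pref + rng.normal(scale=0.1, size=50)

# Fit a linear preference model by least squares.
pref_model, *_ = np.linalg.lstsq(features, ratings, rcond=None)

# "Listen" to unheard candidate songs: score them from audio features alone.
candidates = rng.normal(size=(10, 3))
scores = candidates @ pref_model
best_song = int(np.argmax(scores))
```

A real system would replace the linear model with something richer, but the key property is the same: recommendations come from the audio itself, so no large user network is required.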

This would be revolutionary for anybody who values music. There is simply too much music to listen to it all, so recommendations are an incredibly important window into the space of available music. State-of-the-art music recommendation systems are not great, but people still use them because they are better than nothing. I believe we can do an order of magnitude better by analyzing brain data in conjunction with raw audio: you would not need a network of a million users before making any recommendations, and each recommendation would have a much higher average hit rate. Furthermore, we could push the fields of music, neuroscience, and computer science toward a breakthrough in artificially generated content.

Team members

Alex Cuozzo, Computer Science, 2021