The neural correlates of listening to music have been investigated in several ways. However, mapping ongoing brain activity during naturalistic music listening, combined with detailed models of musical features, is an emerging approach. So-called “encoding models” capture the effect of multiple stimulus variables on brain responses; the fitted models can subsequently be used to decode or identify stimuli from brain activity.
Here, we apply this combined encoding/decoding approach to identify musical content from brain activity. Furthermore, we explore the effects of spatial and temporal sparsity on identification accuracy.
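To make the encoding/decoding idea concrete, the following is a minimal sketch on simulated data, not the paper's actual method, features, or dimensions: a linear encoding model is fit by ridge regression from stimulus features to voxel responses, and a held-out stimulus is then identified by finding the candidate whose predicted response correlates best with the measured response. All variable names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 200 training time points, 10 musical
# features, 50 voxels, 20 candidate test stimuli.
n_train, n_feat, n_vox, n_test = 200, 10, 50, 20

# Simulated stimulus features and a ground-truth linear mapping.
X_train = rng.standard_normal((n_train, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((n_train, n_vox))

# Encoding model: closed-form ridge regression from features to voxels.
alpha = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_feat),
                        X_train.T @ Y_train)

# Held-out stimuli and their (noisy) measured brain responses.
X_test = rng.standard_normal((n_test, n_feat))
Y_test = X_test @ W_true + 0.1 * rng.standard_normal((n_test, n_vox))

# Decoding/identification: predict a response for every candidate
# stimulus, then pick the candidate whose prediction best matches
# the measured response (Pearson correlation).
Y_pred = X_test @ W_hat

def identify(measured, predictions):
    corrs = [np.corrcoef(measured, p)[0, 1] for p in predictions]
    return int(np.argmax(corrs))

correct = sum(identify(Y_test[i], Y_pred) == i for i in range(n_test))
accuracy = correct / n_test
```

With low noise the identification accuracy in this toy setup is high; in real fMRI data, accuracy depends on noise levels and on which voxels and time points are included, which is where spatial and temporal sparsity come into play.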