Publications

Learning Affective Correspondence between Music and Image

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019

Publication date: February 1, 2019

Gaurav Verma, Eeshan Gunesh Dhekane, Tanaya Guha

We introduce the problem of learning affective correspondence between audio (music) and visual data (images). For this task, a music clip and an image are considered similar (having true correspondence) if they have similar emotional content. To estimate this cross-modal, emotion-centric similarity, we propose a deep neural network architecture that learns to project the data from the two modalities into a common representation space and performs a binary classification task: predicting whether the affective correspondence is true or false. To facilitate this study, we construct a large-scale database containing more than 3,500 music clips and 85,000 images labeled with three emotion classes (positive, neutral, negative).
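The two-branch design described in the abstract can be sketched as follows. This is a minimal NumPy forward pass, not the paper's actual architecture: the layer sizes, the ReLU projections, the concatenation-based fusion, and the single sigmoid output head are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper's actual layer sizes are not given here.
AUDIO_DIM, IMAGE_DIM, EMBED_DIM = 128, 512, 64

# Each modality gets its own projection into a shared embedding space.
W_audio = rng.standard_normal((AUDIO_DIM, EMBED_DIM)) * 0.01
W_image = rng.standard_normal((IMAGE_DIM, EMBED_DIM)) * 0.01

# A binary classifier head scores the fused embedding.
W_out = rng.standard_normal((2 * EMBED_DIM,)) * 0.01

def predict_correspondence(audio_feat, image_feat):
    """Probability that a music clip and an image share emotion content."""
    a = np.maximum(audio_feat @ W_audio, 0.0)   # ReLU projection of audio features
    v = np.maximum(image_feat @ W_image, 0.0)   # ReLU projection of image features
    joint = np.concatenate([a, v])              # fuse the two modality embeddings
    logit = joint @ W_out
    return 1.0 / (1.0 + np.exp(-logit))         # sigmoid -> correspondence probability

p = predict_correspondence(rng.standard_normal(AUDIO_DIM),
                           rng.standard_normal(IMAGE_DIM))
print(float(p))  # a probability in (0, 1)
```

In this sketch the network would be trained with a binary cross-entropy loss on (music, image) pairs labeled true or false according to whether their emotion classes match.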