
Audio content-based framework for emotional music recognition

Ciaramella A.; Nardone D.; Staiano A.
2020-01-01

Abstract

Music is a language of emotions, and music emotion recognition has been addressed by different disciplines (e.g., psychology, cognitive science and musicology). Nowadays, the way music is consumed is evolving, with a growing focus on the music content. In this work, a framework for processing, classification and clustering of songs on the basis of their emotional content is described. On one hand, the main emotional features are extracted after a pre-processing phase in which both Sparse Modeling and Independent Component Analysis based methodologies are applied. The approach makes it possible to summarize the main sub-tracks of an acoustic music song (e.g., information compression and filtering) and to extract the main features from these parts (e.g., music instrumental features). On the other hand, a system for music emotion recognition based on Machine Learning and Soft Computing techniques is introduced. A user can submit a target song, representing his or her conceptual emotion, and obtain a playlist of audio songs with similar emotional content. In the classification case, the playlist is retrieved from songs belonging to the same class. In the other case, the playlist is suggested by the system by exploiting the content of the audio songs, and it could also contain songs from different classes. Experimental results are reported to show the performance of the developed framework.
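
The abstract describes a pre-processing stage in which ICA- and sparse-modeling-based methods summarize a song's sub-tracks and feed feature extraction. The paper's exact pipeline is not given here, so the following is only a minimal illustrative sketch, assuming a log-mel spectrogram representation, FastICA from scikit-learn for the summarization step, and simple summary statistics as song-level features; all library choices and parameter values are assumptions.

```python
# Illustrative sketch (not the paper's implementation): ICA-based
# summarization of a song and extraction of a compact feature vector.
import numpy as np
import librosa
from sklearn.decomposition import FastICA


def extract_emotion_features(path, n_components=8, sr=22050):
    """Summarize a track with ICA over log-mel spectrogram frames and
    return a compact feature vector (means and stds of the components)."""
    y, sr = librosa.load(path, sr=sr, mono=True)

    # Time-frequency representation: log-mel spectrogram, one row per frame.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel).T          # shape: (frames, 64)

    # ICA compresses/filters the frames into a few independent components,
    # standing in for the ICA / sparse-modeling pre-processing of the paper.
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(log_mel)          # shape: (frames, n_components)

    # Summary statistics over time act as the song-level emotion features.
    return np.concatenate([sources.mean(axis=0), sources.std(axis=0)])
```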
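The abstract also distinguishes two ways of building the playlist for a target song: retrieving songs from the predicted emotion class, or suggesting content-similar songs that may span several classes. The sketch below mirrors that distinction under assumed choices (an SVM classifier and k-nearest-neighbour retrieval); neither is stated in the record.

```python
# Illustrative sketch (not the paper's implementation): classification-based
# vs. content-based playlist construction for a target song.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors


def playlist_by_class(X_train, y_train, x_target, k=10):
    """Classification case: return indices of k catalogue songs drawn from
    the emotion class predicted for the target song."""
    clf = SVC().fit(X_train, y_train)
    label = clf.predict(x_target.reshape(1, -1))[0]
    same_class = np.flatnonzero(y_train == label)
    return same_class[:k]


def playlist_by_content(X_train, x_target, k=10):
    """Content-based case: return the k most similar songs regardless of
    class, so the playlist may mix emotion classes."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x_target.reshape(1, -1))
    return idx[0]
```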
2020
978-3-030-51869-1
978-3-030-51870-7
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11367/87246
Citations
  • Scopus 1