Music is a language of emotions, and music emotion recognition has been addressed by several disciplines (e.g., psychology, cognitive science, and musicology). Nowadays, the way music is consumed is evolving, with a growing focus on music content. In this work, a framework for the processing, classification, and clustering of songs on the basis of their emotional content is described. On the one hand, the main emotional features are extracted after a pre-processing phase in which methodologies based on Sparse Modeling and Independent Component Analysis are applied. This approach makes it possible to summarize the main sub-tracks of an acoustic music song (e.g., information compression and filtering) and to extract the main features from these parts (e.g., instrumental features). On the other hand, a system for music emotion recognition based on Machine Learning and Soft Computing techniques is introduced. A user can submit a target song, representing their conceptual emotion, and obtain a playlist of audio songs with similar emotional content. In the case of classification, the playlist is retrieved from songs belonging to the same class. In the case of clustering, the playlist is suggested by the system on the basis of the audio content, and it may also contain songs from different classes. Experimental results are presented to show the performance of the developed framework.
|Title:||Audio content-based framework for emotional music recognition|
CIARAMELLA, Angelo (Corresponding)
|Publication date:||2020|
|Appears in typology:||2.1 Contribution in a volume (Chapter or Essay)|
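The abstract describes a content-based retrieval step: a user submits a target song and the system returns a playlist of songs with similar emotional content. The following is a minimal sketch of that idea, assuming songs are already represented by feature vectors (e.g., the emotional/instrumental features obtained after the pre-processing phase); the feature dimension, similarity measure, and function names are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np


def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def suggest_playlist(target, library, k=3):
    # Rank library songs by similarity of their (hypothetical)
    # emotional feature vectors to the target song's vector and
    # return the names of the k most similar songs.
    scored = sorted(
        library.items(),
        key=lambda item: cosine_similarity(target, item[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]


# Toy data: 10 songs with random 8-dimensional feature vectors.
rng = np.random.default_rng(0)
library = {f"song_{i}": rng.random(8) for i in range(10)}
target = rng.random(8)
playlist = suggest_playlist(target, library, k=3)
print(playlist)
```

In the clustering scenario mentioned in the abstract, such a similarity-driven ranking naturally mixes songs from different classes, since only the audio content (feature proximity) drives the suggestion.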