
A Historical View of the Progress in Music Mood Recognition

Swati Goel, Parichay Agrawal, Sahil Singh, Prashant Sharma

Section: Research Paper, Product Type: Journal Paper
Volume-7, Issue-3, Page no. 39-45, Mar-2019

CrossRef DOI: https://doi.org/10.26438/ijcse/v7i3.3945

Published online on Mar 31, 2019

Copyright © Swati Goel, Parichay Agrawal, Sahil Singh, Prashant Sharma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.




Abstract

This paper assesses the state of, and the progress made in, classifying emotions in music. Music is known as the “language of emotions”; it is therefore logical to consider it a medium for determining emotions and to categorize pieces by the emotions they bring forth [1]. Different segments of a particular piece may express different emotions, and since emotions are interpreted by humans, conflicts may arise in settling on a well-defined answer. The ability to deduce the emotions exhibited by music is of great significance: for example, it can help in understanding patients suffering from alexithymia, and online music vendors such as Spotify and iTunes can provide customized playlists based on mood. Emotion determination falls under Music Information Retrieval, henceforth referred to as MIR. The paper explores methods of emotion retrieval, including those that use textual information (lyrics, tags, etc.), content-based approaches, and systems that combine multiple methods [2].
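
As a concrete illustration of the content-based approaches surveyed here, the sketch below extracts the acoustic features named in the index terms (MFCCs, spectral centroid, flux, rolloff, and chroma) and feeds their summary statistics to a support vector machine. This is a minimal sketch under stated assumptions, not the pipeline of any specific system reviewed: it assumes the librosa and scikit-learn libraries, and the file paths and mood labels are hypothetical placeholders.

```python
# Minimal content-based mood classification sketch (illustrative only).
# Assumes librosa and scikit-learn; paths/labels below are placeholders.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(path):
    """Summarize a clip with MFCC, centroid, rolloff, flux, chroma stats."""
    y, sr = librosa.load(path, duration=30.0)        # analyze first 30 s
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    flux = librosa.onset.onset_strength(y=y, sr=sr)[np.newaxis, :]
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    # Collapse each frame-level feature to its mean and standard deviation,
    # the common "bag of frames" summary used by many MER systems.
    return np.concatenate(
        [np.r_[f.mean(axis=1), f.std(axis=1)]
         for f in (mfcc, centroid, rolloff, flux, chroma)])

# Hypothetical labelled training clips (path, mood tag).
train = [("clip_happy.wav", "happy"), ("clip_sad.wav", "sad")]
X = np.array([extract_features(p) for p, _ in train])
labels = [tag for _, tag in train]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict([extract_features("new_clip.wav")]))  # hypothetical query
```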

Key-Words / Index Term

Acoustic features, Music Emotion Recognition, MIREX, Social Tagging, MFCC, Centroid, Flux, Rolloff, Chroma, Gaussian Mixture Model, Support Vector Machine, VA Model, PAD Values
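
Two of these index terms, the VA model and PAD values, refer to dimensional emotion models that place a piece on continuous valence and arousal (and, for PAD, dominance) axes rather than assigning a discrete tag. As a rough illustration only (the quadrant names below reflect common usage, not a fixed standard), a predicted valence-arousal pair scaled to [-1, 1] can be discretized into a coarse mood label:

```python
# Illustrative mapping from a (valence, arousal) pair in [-1, 1]^2
# to one of four coarse mood quadrants of the VA plane.
def va_quadrant(valence: float, arousal: float) -> str:
    if valence >= 0:
        return "happy/excited" if arousal >= 0 else "calm/content"
    return "angry/anxious" if arousal >= 0 else "sad/depressed"

print(va_quadrant(0.7, 0.5))    # -> happy/excited
print(va_quadrant(-0.4, -0.6))  # -> sad/depressed
```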

References

[1] C. C. Pratt, Music as the language of emotion. The Library of Congress, December 1950.
[2] Youngmoo E. Kim, Erik M. Schmidt, Raymond Migneco, Brandon G. Morton, Patrick Richardson, Jeffrey Scott, Jacquelin A. Speck, and Douglas Turnbull, “Music emotion recognition: a state of the art review,” in ISMIR, 2010.
[3] Yang Liu, Yan Liu, Yu Zhao, and Kien A Hua, “What strikes the strings of your heart? Feature mining for music emotion analysis,” IEEE Transactions on Affective Computing, Vol. 6, No. 3, pp. 247–260, 2015.
[4] Mathieu Barthet, Gyorgy Fazekas, and Mark Sandler, “Music emotion recognition: From content- to context-based models,” Computer Music Modeling and Retrieval, pp. 228–252, 2012.
[5] Alicja A. Wieczorkowska, Piotr Synak, and Zbigniew W. Raś, “Multi-label classification of emotions in music,” in Proc. of Intelligent Information Processing and Web Mining, Vol. 35, pp. 307–315, 2006.
[6] J. S. Downie, “The music information retrieval evaluation exchange (2005–2007): A window into music information retrieval research,” Acoustical Science and Technology, Vol. 29, No. 4, pp. 247–255, 2008.
[7] X. Hu, J. Downie, C. Laurier, M. Bay, and A. Ehmann, “The 2007 MIREX audio mood classification task: Lessons learned,” in Proc. of the Intl. Conf. on Music Information Retrieval, Philadelphia, PA, 2008.
[8] K. Hevner, “Experimental studies of the elements of expression in music,” American Journal of Psychology, Vol. 48, No. 2, pp. 246–267, 1936.
[9] M. Zentner, D. Grandjean, and K. R. Scherer, “Emotions evoked by the sound of music: Characterization, classification, and measurement.” Emotion, Vol. 8, pp. 494, 2008.
[10] E. Bigand, “Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts,” Cognition and Emotion, Vol. 19, No. 8, pp. 1113, 2005.
[11] U. Schimmack and R. Reisenzein, “Experiencing activation: energetic arousal and tense arousal are not mixtures of valence and activation,” Emotion, Vol. 2, No. 4, pp. 412, 2002.
[12] K. Trohidis, G. Tsoumakas, G. Kalliris, and I. Vlahavas, “Multilabel classification of music into emotion,” in Proc. of the Intl. Conf. on Music Information Retrieval, Philadelphia, PA, 2008.
[13] F. Miller, M. Stiksel, and R. Jones, “Last.fm in numbers,” Last.fm press material, February 2008.
[14] E. L. M. Law, L. von Ahn, R. B. Dannenberg, and M. Crawford, “TagATune: A game for music and sound annotation,” in Proc. of the Intl. Conf. on Music Information Retrieval, Vienna, Austria, 2007.
[15] M. I. Mandel and D. P. W. Ellis, “A web-based game for collecting music metadata,” in Proc. of the Intl. Conf. on Music Information Retrieval, Vienna, Austria, 2007, pp. 365–366.
[16] Y. Kim, E. Schmidt, and L. Emelle, “MoodSwings: A collaborative game for music mood label collection,” in Proc. of the Intl. Conf. on Music Information Retrieval, Philadelphia, PA, September 2008.
[17] O. Celma, P. Cano, and P. Herrera, “Search sounds: An audio crawler focused on weblogs,” in Proc. of the Intl. Conf. on Music Information Retrieval, Victoria, Canada, 2006.
[18] P. Knees, T. Pohle, M. Schedl, D. Schnitzer, and K. Seyerlehner, A Document-Centered Approach to a Natural Language Music Search Engine. Springer Berlin / Heidelberg, 2008, pp. 627–631.
[19] A. Mehrabian and J. A. Russell, An Approach to Environmental Psychology. MIT Press, 1974.
[20] M. M. Bradley and P. J. Lang, “Affective norms for English words (ANEW),” The NIMH Centre for the Study of Emotion and Attention, University of Florida, Tech. Rep., 1999.
[21] R. H. Chen, Z. L. Xu, Z. X. Zhang, and F. Z. Luo, “Content based music emotion analysis and recognition,” in Proc. of the Intl. Workshop on Computer Music and Audio Technology, 2006.
[22] O. C. Meyers, “A mood-based music classification and exploration system,” Master’s thesis, Massachusetts Institute of Technology, June 2007.
[23] Y. Hu, X. Chen, and D. Yang, “Lyric-based song emotion detection with affective lexicon and fuzzy clustering method,” in Proc. of the Intl. Society for Music Information Retrieval Conf., Kobe, Japan, 2009.
[24] L. Mion and G. D. Poli, “Score-independent audio features for description of music expression,” IEEE Transactions on Audio, Speech and Language Processing, Vol. 16, No. 2, pp. 458–466, 2008.
[25] E. M. Schmidt, D. Turnbull, and Y. E. Kim, “Feature selection for content-based, time-varying musical emotion regression,” in MIR ’10: Proc. of the Intl. Conf. on Multimedia Information Retrieval, Philadelphia, PA, 2010, pp. 267–274.
[26] T. Li and M. Ogihara, “Detecting emotion in music,” in Proc. of the Intl. Conf. on Music Information Retrieval, Baltimore, MD, October 2003.
[27] L. Lu, D. Liu, and H. J. Zhang, “Automatic mood detection and tracking of music audio signals,” IEEE Transactions on Audio, Speech and Language Processing, Vol. 14, No. 1, pp. 5–18, 2006.
[28] G. Tzanetakis, “Marsyas submissions to MIREX 2007,” MIREX 2007.
[29] G. Peeters, “A generic training and classification system for MIREX08 classification tasks: Audio music mood, audio genre, audio artist and audio tag,” MIREX 2008.
[30] C. Cao and M. Li, “Thinkit’s submissions for MIREX2009 audio music classification and similarity tasks.” ISMIR, MIREX 2009.
[31] D. Shrestha and D. Solomatine, “Experiments with AdaBoost.RT, an improved boosting scheme for regression,” Neural Computation, Vol. 18, No. 7, pp. 1678–1710, 2006.
[32] T. Eerola, O. Lartillot, and P. Toiviainen, “Prediction of multidimensional emotional ratings in music from audio using multivariate regression models,” in Proc. of the Intl. Society for Music Information Retrieval Conf., Kobe, Japan, 2009.
[33] E. M. Schmidt, D. Turnbull, and Y. E. Kim, “Feature selection for content-based, time-varying musical emotion regression,” in MIR ’10: Proc. of the Intl. Conf. on Multimedia Information Retrieval, Philadelphia, PA, 2010, pp. 267–274.
[34] D. Turnbull, L. Barrington, M. Yazdani, and G. Lanckriet, “Combining audio content and social context for semantic music discovery,” ACM SIGIR, 2009.
[35] K. Bischoff, C. S. Firan, R. Paiu, W. Nejdl, C. Laurier, and M. Sordo, “Music mood and theme classification-a hybrid approach,” in Proc. of the Intl. Society for Music Information Retrieval Conf., Kobe, Japan, 2009.
[36] D. Yang and W. Lee, “Disambiguating music emotion using software agents,” in Proc. of the Intl. Conf. on Music Information Retrieval. Barcelona, Spain: Universitat Pompeu Fabra, October 2004.
[37] Y.-H. Yang, Y.-C. Lin, H.-T. Cheng, I.-B. Liao, Y.-C. Ho, and H. Chen, Advances in Multimedia Information Processing - PCM 2008, ser. Lecture Notes in Computer Science. Springer Berlin / Heidelberg, December 2008, Ch. 8, pp. 70–79.
[38] C. Laurier, J. Grivolla, and P. Herrera, “Multimodal music mood classification using audio and lyrics,” in Proc. of the Intl. Conf. on Machine Learning and Applications. Universitat Pompeu Fabra, 2008, pp. 1–6.