Joyful for you and tender for us: the influence of individual characteristics and language on emotion labeling and classification

Main Authors: Gómez-Cañón, Juan Sebastián, Cano, Estefanía, Herrera, Perfecto, Gómez, Emilia
Format: Proceeding eJournal
Language: eng
Published: 2020
Subjects: music emotion recognition, group-based annotations, individual characteristics, annotation analysis
Online Access: https://zenodo.org/record/4076720
ctrlnum 4076720
fullrecord <?xml version="1.0"?>
<dc schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <creator>Gómez-Cañón, Juan Sebastián</creator>
  <creator>Cano, Estefanía</creator>
  <creator>Herrera, Perfecto</creator>
  <creator>Gómez, Emilia</creator>
  <date>2020-10-12</date>
  <description>Tagging a musical excerpt with an emotion label may result in a vague and ambivalent exercise. This subjectivity entangles several high-level music description tasks when the computational models built to address them produce predictions on the basis of a "ground truth". In this study, we investigate the relationship between emotions perceived in pop and rock music (mainly in Euro-American styles) and personal characteristics from the listener, using language as a key feature. Our goal is to understand the influence of lyrics comprehension on music emotion perception and use this knowledge to improve Music Emotion Recognition (MER) models. We systematically analyze over 30K annotations of 22 musical fragments to assess the impact of individual differences on agreement, as defined by Krippendorff's \(\alpha\) coefficient. We employ personal characteristics to form group-based annotations by assembling ratings with respect to listeners' familiarity, preference, lyrics comprehension, and music sophistication. Finally, we study our group-based annotations in a two-fold approach: (1) assessing the similarity within annotations using manifold learning algorithms and unsupervised clustering, and (2) analyzing their performance by training classification models with diverse "ground truths". Our results suggest that a) applying a broader categorization of taxonomies and b) using multi-label, group-based annotations based on language, can be beneficial for MER models.</description>
  <identifier>https://zenodo.org/record/4076720</identifier>
  <identifier>10.5281/zenodo.4076720</identifier>
  <identifier>oai:zenodo.org:4076720</identifier>
  <language>eng</language>
  <relation>info:eu-repo/grantAgreement/EC/H2020/770376/</relation>
  <relation>doi:10.5281/zenodo.4076719</relation>
  <relation>url:https://zenodo.org/communities/ismir</relation>
  <rights>info:eu-repo/semantics/openAccess</rights>
  <rights>https://creativecommons.org/licenses/by/4.0/legalcode</rights>
  <subject>music emotion recognition</subject>
  <subject>group-based annotations</subject>
  <subject>individual characteristics</subject>
  <subject>annotation analysis</subject>
  <title>Joyful for you and tender for us: the influence of individual characteristics and language on emotion labeling and classification</title>
  <type>Journal:Proceeding</type>
  <recordID>4076720</recordID>
</dc>
language eng
format Journal:Proceeding
Journal
Journal:eJournal
author Gómez-Cañón, Juan Sebastián
Cano, Estefanía
Herrera, Perfecto
Gómez, Emilia
title Joyful for you and tender for us: the influence of individual characteristics and language on emotion labeling and classification
publishDate 2020
topic music emotion recognition
group-based annotations
individual characteristics
annotation analysis
url https://zenodo.org/record/4076720
contents Tagging a musical excerpt with an emotion label may result in a vague and ambivalent exercise. This subjectivity entangles several high-level music description tasks when the computational models built to address them produce predictions on the basis of a "ground truth". In this study, we investigate the relationship between emotions perceived in pop and rock music (mainly in Euro-American styles) and personal characteristics from the listener, using language as a key feature. Our goal is to understand the influence of lyrics comprehension on music emotion perception and use this knowledge to improve Music Emotion Recognition (MER) models. We systematically analyze over 30K annotations of 22 musical fragments to assess the impact of individual differences on agreement, as defined by Krippendorff's \(\alpha\) coefficient. We employ personal characteristics to form group-based annotations by assembling ratings with respect to listeners' familiarity, preference, lyrics comprehension, and music sophistication. Finally, we study our group-based annotations in a two-fold approach: (1) assessing the similarity within annotations using manifold learning algorithms and unsupervised clustering, and (2) analyzing their performance by training classification models with diverse "ground truths". Our results suggest that a) applying a broader categorization of taxonomies and b) using multi-label, group-based annotations based on language, can be beneficial for MER models.
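note The abstract above measures annotator agreement with Krippendorff's alpha. As an illustration only (the record does not name any tooling, and the annotation matrix below is invented), a minimal Python sketch using the third-party krippendorff package shows how such an agreement score is typically computed for nominal emotion labels:

import numpy as np
import krippendorff  # third-party package: pip install krippendorff

# Hypothetical reliability data: rows = annotators, columns = musical fragments.
# Cells hold nominal emotion-label codes (e.g. 1 = joy, 2 = tenderness, ...);
# np.nan marks fragments an annotator did not rate.
ratings = np.array([
    [1,      2, 2, 3, 1, np.nan, 4, 2],
    [1,      2, 3, 3, 1, 2,      4, 2],
    [np.nan, 2, 2, 3, 1, 2,      4, 1],
])

# Krippendorff's alpha for nominal data: 1.0 means perfect agreement,
# 0.0 means agreement no better than chance.
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.3f}")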
id IOS17403.4076720
institution Universitas PGRI Palembang
institution_id 189
institution_type library:university
library
library Perpustakaan Universitas PGRI Palembang
library_id 587
collection Marga Life in South Sumatra in the Past: Puyang Concept Sacrificed and Demythosized
repository_id 17403
city KOTA PALEMBANG
province SUMATERA SELATAN
repoId IOS17403
first_indexed 2022-07-26T01:57:24Z
last_indexed 2022-07-26T01:57:24Z
recordtype dc