It’s the end of the gold standard as we know it. On the impact of pre-aggregation on the evaluation of highly subjective tasks
Conference proceedings contribution
Publication date:
2020
Abstract:
Supervised machine learning, in particular in Natural Language Processing, is based on the creation of high-quality gold standard datasets for training and benchmarking. The de facto standard annotation methodologies work well for traditionally relevant tasks in Computational Linguistics. However, critical issues are surfacing when applying old techniques to the study of highly subjective phenomena such as irony and sarcasm, or abusive and offensive language. This paper calls for a paradigm shift, away from monolithic, majority-aggregated gold standards, and towards an inclusive framework that preserves the personal opinions and culturally-driven perspectives of the annotators. New training sets and supervised machine learning techniques will have to be adapted in order to create fair, inclusive, and ultimately more informed models of subjective semantic and pragmatic phenomena. The arguments are backed by a synthetic experiment showing the lack of correlation between the difficulty of an annotation task, its degree of subjectivity, and the quality of the predictions of a supervised classifier trained on the resulting data.
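As a minimal illustration of the pre-aggregation the abstract argues against (not taken from the paper; the items, labels, and three-annotator setup are hypothetical), the following Python sketch contrasts majority-vote aggregation into a single gold label with keeping the disaggregated, per-annotator labels as a distribution:

# Minimal sketch (hypothetical data): majority-vote pre-aggregation vs.
# keeping the disaggregated, per-annotator labels.
from collections import Counter

# Hypothetical annotations: each item is labelled by three annotators
# for a subjective phenomenon (e.g. "is this tweet ironic?").
annotations = {
    "item_1": ["ironic", "ironic", "not_ironic"],
    "item_2": ["ironic", "not_ironic", "not_ironic"],
    "item_3": ["not_ironic", "not_ironic", "not_ironic"],
}

def majority_aggregate(labels):
    """Collapse the annotators' labels into a single 'gold' label."""
    return Counter(labels).most_common(1)[0][0]

# Traditional, pre-aggregated gold standard: one label per item,
# discarding minority opinions.
gold = {item: majority_aggregate(labels) for item, labels in annotations.items()}

# Disaggregated alternative: keep every annotator's vote as a label
# distribution, preserving disagreement on subjective items.
soft_labels = {
    item: {lab: count / len(labels) for lab, count in Counter(labels).items()}
    for item, labels in annotations.items()
}

print(gold)         # {'item_1': 'ironic', 'item_2': 'not_ironic', 'item_3': 'not_ironic'}
print(soft_labels)  # e.g. item_1 -> {'ironic': 2/3, 'not_ironic': 1/3}

The pre-aggregated gold standard erases exactly the disagreement that, for subjective tasks such as irony or offensiveness, may itself be the signal worth modelling; the soft-label form keeps it available to downstream training and evaluation.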
CRIS type:
04A-Conference paper in volume
Keywords:
Inclusive Machine Learning; Linguistic Annotation; Subjectivity
Authors:
Basile V.
Link to full record:
Link to full text:
Book title:
CEUR Workshop Proceedings
Published in: