Mult-IT Multiple Choice Questions on Multiple Topics in Italian: A CALAMITA Challenge
Contribution in conference proceedings
Publication date:
2024
Abstract:
Multiple-choice question answering (MCQA) is a powerful tool for evaluating the factual knowledge and reasoning capabilities of Large Language Models (LLMs). However, there is a lack of large-scale MCQA datasets originally written in Italian. Existing Italian MCQA benchmarks are often automatically translated from English, an approach with two key drawbacks: first, automatic translations may sound unnatural, contain errors, or use linguistic constructions that do not align with the target language; second, they may introduce topical and ideological biases reflecting Anglo-centric perspectives. To address this gap, we present Mult-IT, an MCQA dataset comprising over 110,000 manually written questions across a wide range of topics. All questions are sourced directly from preparation quizzes for Italian university entrance exams or for public sector employment exams in Italy. We hope that this contribution enables a more comprehensive evaluation of LLMs' proficiency, not only in the Italian language, but also in Italian cultural and contextual knowledge.
CRIS type:
04A-Conference paper in volume
Keywords:
Benchmarking; CALAMITA Challenge; Italian; LLMs; Multiple-Choice Questions
Authors:
Rinaldi M.; Gili J.; Francis M.; Goffetti M.; Patti V.; Nissim M.
Link to full record:
Book title:
Proceedings of the Tenth Italian Conference on Computational Linguistics (CLiC-it 2024), Pisa, Italy, December 4-6, 2024
Published in: