Publication Date:
2024
Abstract:
The rapid development of Large Language Models (LLMs) has called for robust benchmarks to assess their abilities, track progress, and compare iterations. While existing benchmarks provide extensive evaluations across diverse tasks, they predominantly focus on English, leaving other languages underserved. For Italian, the EVALITA campaigns have provided a long-standing tradition of classification-focused shared tasks. However, their scope does not fully align with the nuanced evaluation required for modern LLMs. To address this gap, we introduce “Challenge the Abilities of LAnguage Models in ITAlian” (CALAMITA), a collaborative effort to create a dynamic and growing benchmark tailored to Italian. CALAMITA emphasizes diversity in task design to test a wide range of LLM capabilities through resources natively developed in Italian by the community. This initiative includes a shared platform, live leaderboard, and centralized evaluation framework. This paper outlines the collaborative process, initial challenges, and evaluation framework of CALAMITA.
CRIS Type:
04A-Conference paper in volume
Keywords:
Italian Benchmark; Language Models; Shared Task
Author List:
Attanasio G.; Basile P.; Borazio F.; Croce D.; Francis M.; Gili J.; Musacchio E.; Nissim M.; Patti V.; Rinaldi M.; Scalena D.
Book Title:
Proceedings of the Tenth Italian Conference on Computational Linguistics (CLiC-it 2024), Pisa, Italy, December 4-6, 2024