Publication Date:
2023
Abstract:
This scientific paper aims to investigate how a single large language model, such as ChatGPT, can be used to mimic lexical resources and generate ad hoc lexical knowledge in real time by incorporating contextual information. We conduct a comprehensive study on ChatGPT's ability to capture various aspects of lexical semantics such as synonyms, antonyms, hypernyms, and hyponyms, and compare it with well-known resources such as WordNet. We also evaluate ChatGPT's performance on tasks that require knowledge of lexical semantics, such as semantic similarity. Our results show that ChatGPT is able to capture a significant amount of lexical semantic information, with its performance on lexical semantic tasks being highly dependent on the quality and relevance of the contextual information. We also observe that ChatGPT's ability to generate ad hoc lexical knowledge in real time is a major advantage over traditional lexical resources, which may not be able to keep up with the constantly evolving nature of language. Overall, our study sheds light on the potential of large language models such as ChatGPT to mimic and even surpass traditional lexical resources in capturing and generating lexical semantic knowledge. This has important implications for natural language processing applications that require real-time access to up-to-date lexical information.
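The abstract describes comparing ChatGPT's generated lexical relations (synonyms, antonyms, hypernyms, hyponyms) and semantic similarity judgments against WordNet. As a rough, illustrative sketch of the kind of gold-standard WordNet data such a comparison would draw on (not the paper's code; the word "car", the helper name `wordnet_relations`, and the similarity pair are assumptions for illustration), using NLTK's WordNet interface:

```python
# Illustrative sketch: collecting the WordNet relations that an LLM's output
# could be compared against. Assumes NLTK with the WordNet corpus installed:
#   pip install nltk && python -c "import nltk; nltk.download('wordnet')"
from nltk.corpus import wordnet as wn

def wordnet_relations(word: str) -> dict:
    """Collect WordNet relations for `word` (hypothetical helper, not from the paper)."""
    synonyms, antonyms, hypernyms, hyponyms = set(), set(), set(), set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            synonyms.add(lemma.name())
            antonyms.update(a.name() for a in lemma.antonyms())
        hypernyms.update(l.name() for h in synset.hypernyms() for l in h.lemmas())
        hyponyms.update(l.name() for h in synset.hyponyms() for l in h.lemmas())
    return {"synonyms": synonyms, "antonyms": antonyms,
            "hypernyms": hypernyms, "hyponyms": hyponyms}

if __name__ == "__main__":
    print(wordnet_relations("car"))  # e.g. hypernyms include 'motor_vehicle'
    # WordNet also provides graded semantic similarity, one of the evaluated task types:
    print(wn.synsets("car")[0].wup_similarity(wn.synsets("bicycle")[0]))
```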
CRIS Type:
04A-Conference paper in volume
Keywords:
Large Language Models; Lexical Semantics
Author List:
Di Caro L.; Ventrice L.; Mignone R.; Locci S.
Link to Full Record:
Link to Full Text:
Book Title:
CEUR Workshop Proceedings
Published in: