Publication Date:
2022
Abstract:
The crucial role played by interpretability in many practical scenarios has led a large part of machine learning research towards the development of interpretable approaches. In this work, we present PRL, a game-theory-based method capable of achieving state-of-the-art accuracy while keeping the focus on the interpretability of its predictions. The proposed approach is an instance of the more general preference learning framework. By design, the method identifies the most relevant features even when dealing with high-dimensional problems, thanks to an online feature generation mechanism. Moreover, the algorithm is shown to be theoretically well-founded through a game-theoretic analysis of its convergence. To assess the quality of the proposed approach, we compared PRL against state-of-the-art methods in a wide range of classification settings. The experimental evaluation focuses on interpretability, with an in-depth analysis of visualization, feature selection, and explainability.
CRIS Type:
03A - Journal Article
Keywords:
Feature selection; Game theory; Interpretability; Large margin; Preference learning
Author List:
Polato M.; Faggioli G.; Aiolli F.
Link to Full Record:
Link to Full Text:
Published in: