Fairness, Debiasing and Privacy in Computer Vision and Medical Imaging
Conference proceedings contribution
Publication Date:
2023
Abstract:
Deep Learning (DL) has become one of the predominant tools for solving a variety of problems, often with performance superior to previous state-of-the-art methods. DL models are often able to learn meaningful and abstract representations of the underlying data; however, they have also been shown to learn additional features that are not necessarily relevant or required for the desired task. This can pose a number of issues, as these additional features may encode bias or sensitive and private information (e.g. gender, race, age) that the model should not take into account. We refer to this information as collateral. The presence of collateral information translates into practical issues when deploying DL models, especially when they involve users' data. Learning robust representations that are free of biased, private, and collateral information is highly relevant to a variety of fields and applications, for example medical applications and decision support systems. In this work we present our group's activities aimed at devising methods to ensure that the representations learned by DL models are robust to collateral features and biases, and privacy-preserving with respect to sensitive information.
CRIS Typology:
04A-Conference paper in volume
Keywords:
Debiasing; Deep Learning; Fairness; Privacy; Representation Learning
Authors:
Barbano C.A.; Duchesnay E.; Dufumier B.; Gori P.; Grangetto M.
Link to Complete Record:
Link to Full Text:
Book Title:
CEUR Workshop Proceedings
Published in: