Boosting the Federation: Cross-Silo Federated Learning without Gradient Descent
Contribution in Conference Proceedings
Publication Date:
2022
Abstract:
Federated Learning (FL) has been proposed to develop better AI systems without compromising the privacy of end users and the legitimate interests of private companies. Initially deployed by Google to predict text input on mobile devices, FL has since been adopted in many other industries. Since its introduction, Federated Learning has mainly exploited the inner workings of neural networks and other gradient descent-based algorithms, by exchanging either the weights of the model or the gradients computed during learning. While this approach has been very successful, it rules out applying FL in contexts where other models are preferred, e.g., models that are easier to interpret or known to work better.
This paper proposes FL algorithms that build federated models without relying on gradient descent-based methods. Specifically, we leverage distributed versions of the AdaBoost algorithm to build strong federated models. In contrast with previous approaches, our proposal does not put any constraint on the client-side learning models. We perform a large set of experiments on ten UCI datasets, comparing the algorithms in six non-IID settings.
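To make the gradient-free idea concrete, here is a minimal toy sketch of a federated boosting round in the spirit described by the abstract: each silo trains a weak learner on its locally reweighted data, silos exchange hypotheses and scalar error statistics (never raw data or gradients), and the server selects and weights the best hypothesis AdaBoost-style. All names, the decision-stump learner, and the protocol details are illustrative assumptions, not the paper's actual algorithms.

```python
import math

def train_stump(X, y, w):
    # Toy weak learner: exhaustively pick the best 1-feature threshold stump
    # under the current sample weights. Labels are assumed to be in {-1, +1}.
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for sign in (1, -1):
                pred = [sign if x[f] >= t else -sign for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, f, t, sign)
    _, f, t, sign = best
    return lambda x, f=f, t=t, sign=sign: sign if x[f] >= t else -sign

def weighted_error(h, X, y, w):
    return sum(wi for wi, x, yi in zip(w, X, y) if h(x) != yi)

def federated_adaboost(clients, T=5):
    # clients: list of (X, y) local datasets held by separate silos.
    n_total = sum(len(y) for _, y in clients)
    W = [[1.0 / n_total] * len(y) for _, y in clients]  # global sample weights
    ensemble = []  # list of (alpha, hypothesis)
    for _ in range(T):
        # 1. Each silo trains a weak hypothesis on its local, reweighted data.
        hyps = [train_stump(X, y, w) for (X, y), w in zip(clients, W)]
        # 2. Each silo reports the weighted error of every hypothesis on its
        #    own data; the server only aggregates these scalars.
        errs = [sum(weighted_error(h, X, y, w) for (X, y), w in zip(clients, W))
                for h in hyps]
        e, h = min(zip(errs, hyps), key=lambda p: p[0])
        e = min(max(e, 1e-10), 1 - 1e-10)  # clamp to avoid log(0)
        alpha = 0.5 * math.log((1 - e) / e)
        ensemble.append((alpha, h))
        # 3. Silos update their sample weights and renormalize globally
        #    (only the local weight sums need to be exchanged).
        for (X, y), w in zip(clients, W):
            for i in range(len(y)):
                w[i] *= math.exp(-alpha * y[i] * h(X[i]))
        Z = sum(sum(w) for w in W)
        for w in W:
            for i in range(len(w)):
                w[i] /= Z
    def predict(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return predict
```

Note that nothing in this loop assumes the weak learners are differentiable: `train_stump` could be swapped for any local training routine, which is the flexibility the abstract claims over weight- or gradient-exchange schemes.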
CRIS Type:
04A-Conference paper in volume
Keywords:
federated learning, cross-silo, boosting, adaboost, ensemble learning
Authors:
Mirko Polato, Roberto Esposito, Marco Aldinucci
Book Title:
Proceedings of the International Joint Conference on Neural Networks (IJCNN 2022)