Publication Date:
2025
Abstract:
In recent years, Federated Learning applied to neural networks has garnered significant attention, yet applying this approach to other machine learning algorithms remains underexplored. Support Vector Machines (SVMs), in particular, have seen limited exploration within the federated context, with existing techniques often constrained by the necessity to share the weight vector of the linear classifier. Unfortunately, this constraint severely limits the method's utility, restricting its application to linear feature spaces. This study addresses and overcomes this limitation by proposing an innovative approach: instead of sharing weight vectors, we advocate sharing support vectors while safeguarding client data privacy through vector perturbation. Simple random perturbation works remarkably well in practice, and indeed we provide a bound on the approximation error of the learned model which goes to zero as the number of input features grows. We also introduce a refined technique that strategically moves the support vectors along the margin of the decision function, which we empirically show to slightly improve performance. Through extensive experimentation, we demonstrate that our proposed approach achieves state-of-the-art performance and consistently enables the federated classifier to match the performance of classifiers trained on the entire dataset.
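The core idea in the abstract, sharing perturbed support vectors instead of weight vectors, can be sketched as follows. This is an illustrative assumption-laden sketch in NumPy: the paper's exact perturbation scheme and the margin-based refinement are not reproduced here, and the function name, noise model (zero-mean Gaussian), and scale parameter are hypothetical choices for demonstration only.

```python
import numpy as np

def perturb_support_vectors(support_vectors, noise_scale=0.1, seed=0):
    """Add zero-mean Gaussian noise to each support vector before sharing.

    Illustrative only: a simple additive-noise stand-in for the vector
    perturbation described in the abstract. Each client would apply this
    locally, then transmit only the perturbed vectors to the server.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=noise_scale, size=support_vectors.shape)
    return support_vectors + noise

# Hypothetical client-side support vectors: 5 vectors with 100 features.
local_sv = np.ones((5, 100))
shared_sv = perturb_support_vectors(local_sv)
print(shared_sv.shape)  # shape is preserved; only the values are noised
```

Intuitively, as the number of features grows, the noise averages out in the kernel evaluations, which is consistent with the abstract's claim that the approximation error bound vanishes as the input dimensionality increases.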
CRIS Type:
03A-Journal Article
Keywords:
Federated learning; kernel method; support vector machines
Authors:
Polato M.; Esposito R.; Sciandra L.
Link to full record:
Link to Full Text:
Published in: