
A Novel Approach for Securing Federated Learning: Detection and Defense Against Model Poisoning Attacks

Cristiano G. M., D'Antonio S., Uccello F.
2024-01-01

Abstract

Federated Learning (FL) holds great promise for collaborative model training across distributed devices. However, it faces a significant threat: model poisoning attacks. In particular, Byzantine attacks can severely compromise the accuracy of FL systems. Through experimental analysis, we demonstrate a significant degradation in network accuracy as the percentage of malicious participants increases, underscoring the critical need for robust defense mechanisms. Our proposed detection strategy, based on clustering algorithms, exhibits promising results in identifying outliers and potential attackers, offering a proactive approach to safeguarding FL systems against adversarial manipulation. This work highlights the necessity of robust detection and mitigation strategies to improve the resilience of Federated Learning against increasingly sophisticated and pervasive attacks.
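The abstract does not specify which clustering algorithm the detection strategy uses. As a purely illustrative sketch of the general idea, the following function (hypothetical, not the paper's implementation) scores each client's flattened model update by its median cosine similarity to the other clients' updates, then flags clients whose score is a robust (MAD-based) outlier — honest clients trained on similar data tend to form one cluster, while poisoned updates drift away from it.

```python
import numpy as np

def flag_suspicious_clients(updates, z_thresh=3.5):
    """Flag clients whose model update is an outlier relative to the others.

    updates: array of shape (n_clients, n_params) -- flattened model updates.
    Returns a boolean mask, True for suspected malicious clients.
    (Illustrative sketch only; the paper's actual clustering method may differ.)
    """
    # Normalize each update, then take pairwise cosine similarities.
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    unit = updates / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T
    np.fill_diagonal(sim, np.nan)  # ignore self-similarity

    # Each client's score: median similarity to its peers.
    score = np.nanmedian(sim, axis=1)

    # Flag scores far below the group median via a modified z-score
    # (median absolute deviation, standard cutoff ~3.5).
    med = np.median(score)
    mad = np.median(np.abs(score - med)) + 1e-12
    return (med - score) / (1.4826 * mad) > z_thresh

# Toy demo: 10 well-aligned honest clients, 2 sign-flipping attackers.
honest = np.tile(np.ones(16), (10, 1))
malicious = -np.tile(np.ones(16), (2, 1))
flags = flag_suspicious_clients(np.vstack([honest, malicious]))
print(np.where(flags)[0])  # indices of suspected clients
```

In practice the score could feed a proper clustering step (e.g. DBSCAN over the similarity matrix) rather than a single threshold; the MAD-based cutoff is just a minimal, dependency-free stand-in for that step.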

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11367/140141
Citations: Scopus 0