Intelligent Trust-Based Weighted Fusion with Blockchain for Adversarial Resilience in Federated Learning
Keywords:
Federated Learning, Blockchain Verification, Trust-Based Fusion, Adversarial Attacks, Medical Imaging Security, Model Poisoning, Secure Decentralized Learning, AI Robustness, Cybersecurity in FL

Abstract
Federated Learning (FL) is a model training scheme that preserves data privacy, but it is susceptible to model poisoning and manipulation attacks, and its performance can degrade under adversarial attack. In this work, a Blockchain-Based Verification with Fusion Mechanism (BVFM) is designed to enhance FL’s security and robustness. A blockchain layer guarantees tamper-evident model updates, while a trust-based weighted fusion mechanism assigns trust values to participating nodes and dynamically weights their contributions to the global model. Experimental evaluation under Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Carlini & Wagner (C&W) attack scenarios on medical imaging tasks confirms the efficacy of BVFM: compared with baseline FL techniques, it improves accuracy to 94.3%, outperforming local training nodes (88.7%–90.1%). Under adversarial conditions, BVFM reduces the Adversarial Success Rate (ASR) from 59.3% to 25.0% (C&W attack) and from 49.8% to 20.1% (PGD attack), significantly enhancing model robustness. Furthermore, t-SNE visualizations illustrate BVFM’s ability to maintain the separability of benign and malignant classifications despite adversarial perturbations. Compared to existing FL approaches, BVFM achieves the highest accuracy (94.4%), precision (92.5%), recall (93.1%), and F1-score (92.8%), while requiring only 85 seconds of training time, 29% faster than leading methods. These results highlight BVFM as a scalable and secure FL solution for adversarial resilience in medical imaging, autonomous systems, and cybersecurity applications.
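The trust-based weighted fusion that the abstract describes can be sketched in a few lines. The function name, the trust scores, and the simple sum-to-one normalization below are illustrative assumptions for exposition, not the paper's exact aggregation rule:

```python
import numpy as np

def trust_weighted_fusion(updates, trust_scores):
    """Fuse local model updates into a global update, weighting each
    node's contribution by its trust score.

    Hypothetical sketch of trust-based weighted fusion: trust scores are
    normalized to sum to one, so low-trust (potentially poisoned) nodes
    contribute proportionally less to the global model.
    """
    trust = np.asarray(trust_scores, dtype=float)
    weights = trust / trust.sum()       # normalize trust values to weights
    stacked = np.stack(updates)         # shape: (n_nodes, n_params)
    return np.average(stacked, axis=0, weights=weights)

# Three nodes; the third sends an outlier (poisoned-looking) update but
# holds a low trust score, so it barely influences the fused result.
updates = [np.array([1.0, 1.0]),
           np.array([1.2, 0.8]),
           np.array([9.0, -9.0])]
global_update = trust_weighted_fusion(updates, [1.0, 0.9, 0.1])
```

With trust scores (1.0, 0.9, 0.1), the normalized weights are (0.5, 0.45, 0.05), so the outlier node shifts the fused update only slightly; in the full system, such trust values would be maintained dynamically from blockchain-verified node behavior.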