Federated learning for autonomous systems needs a multi-layered defense: detect poisoned models at the neuron level, exclude malicious participants based on their history, and actively repair the global model after removing attackers.
FedTrident protects federated learning systems for road condition classification from poisoning attacks in which malicious vehicles deliberately mislabel their training data. The system detects compromised models through neuron-level analysis, removes the bad actors, and uses machine unlearning to repair the corrupted global model, maintaining safety-critical performance even under attack.
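The three stages described above can be sketched in simplified form. This is not FedTrident's actual algorithm (the source gives no implementation details); it is a minimal illustration under assumed stand-ins: a robust z-score over per-neuron update magnitudes for detection, a strike-count reputation for exclusion, and gradient-subtraction-style unlearning for repair. All function names and thresholds are hypothetical.

```python
import numpy as np

def flag_anomalous_updates(updates, z_thresh=2.5):
    """Detection stage (illustrative): flag clients whose per-neuron update
    magnitudes deviate strongly from the cohort's robust median."""
    ids = list(updates)
    mat = np.stack([updates[c] for c in ids])           # clients x neurons
    med = np.median(mat, axis=0)
    mad = np.median(np.abs(mat - med), axis=0) + 1e-12  # robust spread per neuron
    z = np.abs(mat - med) / mad                         # robust z-score
    # a client is suspicious if a majority of its neurons are outliers
    return {c: bool((z[i] > z_thresh).mean() > 0.5) for i, c in enumerate(ids)}

class Reputation:
    """Exclusion stage (illustrative): ban clients flagged repeatedly
    across training rounds."""
    def __init__(self, max_strikes=2):
        self.strikes = {}
        self.max_strikes = max_strikes

    def record(self, flags):
        for c, bad in flags.items():
            self.strikes[c] = self.strikes.get(c, 0) + int(bad)

    def banned(self):
        return {c for c, s in self.strikes.items() if s >= self.max_strikes}

def unlearn(global_w, contributions, banned, n_clients):
    """Repair stage (illustrative): roll back the banned clients' averaged
    contributions from the global model -- a crude stand-in for the
    machine-unlearning step the summary refers to."""
    w = global_w.copy()
    for c in banned:
        w -= contributions[c] / n_clients
    return w
```

A typical round would run `flag_anomalous_updates` on the received updates, feed the flags into `Reputation.record`, and call `unlearn` once a client crosses the strike limit; real systems would of course operate on full model tensors and far subtler attack signatures.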