You can now build federated learning systems that defend against Byzantine attacks and privacy breaches simultaneously, without relying on unrealistic assumptions such as bounded gradients or an auxiliary dataset held by the server.
This paper tackles two critical security issues in federated learning: robustness against malicious clients that send corrupted updates (Byzantine attacks) and protection against data leakage from the shared updates (via differential privacy).
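To make the two goals concrete, here is a minimal sketch of how a server might combine them: clip each client update, aggregate with a coordinate-wise median (a standard Byzantine-robust aggregator), and add Gaussian noise for differential privacy. This is an illustrative composition of well-known building blocks, not the paper's actual algorithm; the function name, clipping threshold, and noise scale are all hypothetical choices.

```python
import numpy as np

def private_robust_aggregate(updates, clip_norm=1.0, noise_std=0.01, rng=None):
    """Illustrative sketch (not the paper's method):
    1. Clip each client update to a fixed L2 norm, bounding any one client's influence.
    2. Take the coordinate-wise median, which tolerates a minority of Byzantine clients.
    3. Add Gaussian noise to the aggregate for differential privacy.
    """
    rng = rng or np.random.default_rng(0)
    clipped = [
        u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
        for u in updates
    ]
    agg = np.median(np.stack(clipped), axis=0)  # robust to outlier updates
    return agg + rng.normal(0.0, noise_std, size=agg.shape)

# Three honest clients plus one Byzantine client sending a huge update.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
byzantine = [np.array([100.0, -100.0])]
agg = private_robust_aggregate(honest + byzantine, clip_norm=2.0, noise_std=0.01)
# The aggregate stays close to the honest updates despite the attacker.
```

The key tension the paper addresses is visible even in this toy sketch: clipping and noise protect privacy, while the median provides robustness, and naive combinations of the two typically require the bounded-gradient or server-dataset assumptions the paper avoids.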