This paper develops new methods for stochastic optimization by combining variance reduction, which lowers the noise in stochastic gradient estimates, with the forward-reflected-backward splitting scheme. The resulting algorithms come with convergence guarantees, reduce the number of gradient evaluations needed to solve such problems, and are demonstrated on machine-learning applications.
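To make the combination concrete, here is a minimal sketch of the general idea, not the paper's exact algorithm: a forward-reflected-backward update applied to a composite finite-sum problem, with the exact gradient replaced by a loopless-SVRG-style variance-reduced estimator. The lasso-type objective, step size, snapshot probability, and function names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def vr_forb_lasso(A, b, mu, step, snapshot_prob, n_iters, rng):
    """Illustrative variance-reduced forward-reflected-backward loop for
        min_x (1/2n) ||A x - b||^2 + mu * ||x||_1.

    Each iteration performs
        x_{k+1} = prox_{step * mu ||.||_1}(x_k - 2*step*v_k + step*v_{k-1}),
    where v_k is a loopless-SVRG-style estimate of the gradient at x_k.
    """
    n, d = A.shape
    x = np.zeros(d)
    w = x.copy()                        # snapshot point
    full_grad = A.T @ (A @ w - b) / n   # full gradient at the snapshot
    v_prev = full_grad.copy()           # gradient estimate from the previous step

    for _ in range(n_iters):
        i = rng.integers(n)
        # Variance-reduced gradient estimate at the current point
        grad_i_x = A[i] * (A[i] @ x - b[i])
        grad_i_w = A[i] * (A[i] @ w - b[i])
        v = grad_i_x - grad_i_w + full_grad

        # Forward-reflected-backward step using the stochastic estimates
        x = soft_threshold(x - 2 * step * v + step * v_prev, step * mu)
        v_prev = v

        # Occasionally refresh the snapshot and its full gradient
        if rng.random() < snapshot_prob:
            w = x.copy()
            full_grad = A.T @ (A @ w - b) / n
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50); x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = vr_forb_lasso(A, b, mu=0.1, step=1e-3, snapshot_prob=0.05,
                          n_iters=20000, rng=rng)
    obj = 0.5 * np.mean((A @ x_hat - b) ** 2) + 0.1 * np.abs(x_hat).sum()
    print("objective:", obj)
```

The point of the sketch is the structure of the update: only one or two component gradients are evaluated per iteration, while the occasional full-gradient snapshot keeps the estimator's variance small, which is what yields the savings in gradient computations the paper targets.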