Sequential neural network and sampling computations can be parallelized across sequence length using Newton's method, but success depends on the system's dynamical stability properties.
This work shows how to parallelize sequential computations such as RNN evaluation and MCMC sampling by reformulating them as equation-solving problems amenable to Newton's method. It develops faster, more stable parallel algorithms and proves when parallelization actually yields a speedup, a question governed by the system's Lyapunov exponent.
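To make the reformulation concrete, here is a minimal sketch (my own illustration, not the paper's implementation) for a scalar recurrence s_t = f(s_{t-1}). The whole trajectory is treated as the root of F(s)_t = s_t - f(s_{t-1}); each Newton step linearizes f at the current iterate, turning the update into a linear recurrence whose coefficients for all timesteps are computed in parallel. The function `f` and all names below are hypothetical:

```python
import numpy as np

def f(s):
    # Hypothetical contractive recurrence step (|f'| <= 0.5 everywhere).
    return 0.5 * np.tanh(s) + 0.25

def df(s):
    # Derivative of f, used to build the Newton linearization.
    return 0.5 * (1.0 - np.tanh(s) ** 2)

def sequential(s0, T):
    # Baseline: evaluate the recurrence one step at a time.
    out, prev = np.empty(T), s0
    for t in range(T):
        prev = f(prev)
        out[t] = prev
    return out

def newton_parallel(s0, T, iters=None, tol=1e-12):
    # Solve F(s)_t = s_t - f(s_{t-1}) = 0 for the whole trajectory at once.
    # Each Newton sweep yields the linear recurrence
    #     s_t <- a_t * s_{t-1} + b_t,
    # with a_t = f'(s_{t-1}) and b_t = f(s_{t-1}) - a_t * s_{t-1} evaluated
    # at the previous iterate. Computing a_t, b_t is embarrassingly parallel
    # over t; the remaining linear solve is an associative scan that parallel
    # hardware can do in O(log T) depth (here a plain loop stands in for it).
    iters = T if iters is None else iters  # T sweeps suffice in exact arithmetic
    s = np.zeros(T)
    for _ in range(iters):
        prev = np.concatenate(([s0], s[:-1]))
        a = df(prev)               # parallel over all t
        b = f(prev) - a * prev     # parallel over all t
        s_new, acc = np.empty(T), s0
        for t in range(T):         # stand-in for a parallel (associative) scan
            acc = a[t] * acc + b[t]
            s_new[t] = acc
        if np.max(np.abs(s_new - s)) < tol:
            return s_new
        s = s_new
    return s

seq = sequential(0.3, 64)
par = newton_parallel(0.3, 64)
```

For this contractive map the Newton iterates converge rapidly and `par` matches `seq` to machine precision; for unstable dynamics (positive Lyapunov exponent) the linearizations are poor over long horizons and many more sweeps are needed, which is the stability dependence the summary above refers to.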