You can mathematically guarantee that independently trained models will converge toward the same predictions by scaling up ensemble size or boosting iterations.
This paper shows how to make different machine learning models agree with each other using a technique called anchoring. The researchers prove that when multiple models are trained with common methods like stacking, boosting, or neural networks, the disagreement between them can be reduced by adjusting simple parameters such as the number of models in an ensemble or the number of training iterations.
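The ensemble-size effect can be illustrated with a toy sketch. This is not the paper's anchoring method, just a hedged simulation under a simplifying assumption: each independently trained model behaves like the true target function plus independent training noise, so averaging more models shrinks that noise and two separately built ensembles drift toward the same predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
true_f = np.sin(2 * np.pi * x)  # the function all models are trying to learn

def train_ensemble(k):
    """Simulate an ensemble of k independently trained models.

    Assumption (hypothetical): each model's prediction is the true
    function plus independent Gaussian noise from its own training run.
    The ensemble prediction is the average over the k models.
    """
    preds = true_f + rng.normal(scale=0.5, size=(k, x.size))
    return preds.mean(axis=0)

def disagreement(k):
    # Mean absolute gap between two independently built ensembles of size k.
    return float(np.mean(np.abs(train_ensemble(k) - train_ensemble(k))))

small_k = disagreement(1)    # single models disagree a lot
large_k = disagreement(100)  # large ensembles nearly coincide
print(f"disagreement, k=1:   {small_k:.3f}")
print(f"disagreement, k=100: {large_k:.3f}")
```

Because the averaged noise has standard deviation proportional to 1/sqrt(k), the printed disagreement drops by roughly a factor of ten when moving from k=1 to k=100, mirroring the paper's claim that scaling up the number of models reduces disagreement.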