Separate the model that solves problems from the model that explains them to avoid accuracy loss when making AI outputs verifiable.
When AI models must show their work so humans can verify it, they often get worse at solving the problem, a cost known as the "legibility tax." This paper avoids that cost by splitting the job: one model focuses solely on solving the problem correctly, then a second model rewrites the finished solution so it is easy to check. Neither model has to juggle accuracy and explainability at once.
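The split described above can be sketched as a two-stage pipeline. This is a minimal illustration, not the paper's implementation: the `solver` and `explainer` functions are hypothetical stand-ins (here, a toy arithmetic solver), and the key property is that the explainer receives the solver's answer and may only rewrite the justification, never change the answer, so adding legibility cannot reduce accuracy.

```python
def solver(problem: str) -> int:
    # Stand-in for the first model: optimizes for correctness only,
    # with no legibility constraint. Toy example: evaluate arithmetic.
    return eval(problem)  # e.g. "12*(3+4)" -> 84

def explainer(problem: str, answer: int) -> str:
    # Stand-in for the second model: given the already-correct answer,
    # produce a checkable justification. It never alters the answer.
    return f"Claim: {problem} = {answer}. Verify by re-evaluating each sub-expression."

def solve_then_explain(problem: str) -> tuple[int, str]:
    answer = solver(problem)                 # stage 1: accuracy only
    rationale = explainer(problem, answer)   # stage 2: legibility only
    return answer, rationale

answer, rationale = solve_then_explain("12*(3+4)")
print(answer)  # 84
```

Because the stages are decoupled, the solver can be trained or prompted purely for task performance while the explainer is optimized purely for verifiability.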