For AI agents writing code, showing them which tests to check matters more than telling them to follow test-driven development procedures—context beats process.
TDAD is a tool that helps AI coding agents avoid breaking existing tests when fixing bugs. It uses code analysis to identify which tests might be affected by a change, then guides the agent to verify those specific tests before submitting the fix. In evaluation on real-world code, it cut regressions by 70% and improved fix success rates.
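The core selection step can be sketched as a simple intersection between a coverage map and the set of changed functions. This is an illustrative sketch, not TDAD's actual implementation: the `COVERAGE` map, test names, and function names below are hypothetical, and a real system would derive the map from coverage tooling or static call-graph analysis rather than hard-code it.

```python
# Hypothetical coverage map: test name -> functions it exercises.
# (A real tool would build this from coverage data or a call graph.)
COVERAGE = {
    "test_parse_empty": {"parser.parse", "parser.tokenize"},
    "test_parse_nested": {"parser.parse", "parser.tokenize", "parser.reduce"},
    "test_render_html": {"renderer.render"},
}

def affected_tests(changed_functions, coverage=COVERAGE):
    """Return the tests that exercise any of the changed functions."""
    changed = set(changed_functions)
    return sorted(
        test for test, covered in coverage.items()
        if covered & changed  # non-empty intersection -> test may be affected
    )

# A fix touching only parser.reduce should flag only the nested-parse test.
print(affected_tests({"parser.reduce"}))    # ['test_parse_nested']
# A change to the tokenizer touches both parsing tests but not the renderer.
print(affected_tests({"parser.tokenize"}))  # ['test_parse_empty', 'test_parse_nested']
```

The agent would then be told to run exactly these tests before submitting, rather than being given a generic instruction to "follow TDD" — which is the context-over-process point the opening sentence makes.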