When deploying LLMs to coordinate multi-agent systems, you need explicit governance constraints: raw cooperation metrics can hide manipulation. CMAG shows how to balance cooperation gains against losses in agent autonomy and fairness.
This paper addresses a critical risk: an LLM coordinator can make a multi-agent system appear cooperative while actually eroding agent autonomy and fairness. The authors propose CMAG, a governance framework that filters harmful LLM suggestions and optimizes for genuine cooperation rather than mere compliance.