CLASP: Defending Hybrid Large Language Models Against Hidden State Poisoning Attacks