InnerQ: Hardware-aware Tuning-free Quantization of KV Cache for Large Language Models