A quantization method that represents model weights using only 6 bits per value (2^6 = 64 distinct levels), reducing memory roughly 5.3× compared with standard 32-bit floating-point storage.
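The idea can be sketched with a minimal symmetric, per-tensor uniform quantizer: each weight is scaled into the signed 6-bit range [-31, 31], rounded to an integer code, and later reconstructed by multiplying by the scale. This is an illustrative assumption, not a specific library's scheme; real 6-bit formats typically add per-group scales and pack codes into a dense bit stream.

```python
import numpy as np

BITS = 6
QMAX = 2 ** (BITS - 1) - 1  # 31: largest signed 6-bit magnitude

def quantize_6bit(weights: np.ndarray):
    """Map float weights to signed 6-bit integer codes plus one scale.

    Symmetric uniform quantization: scale is chosen so the largest
    absolute weight maps to +/-QMAX (hypothetical helper, for illustration).
    """
    scale = np.abs(weights).max() / QMAX
    codes = np.clip(np.round(weights / scale), -QMAX, QMAX).astype(np.int8)
    return codes, scale

def dequantize(codes: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float weights from 6-bit codes."""
    return codes.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
codes, scale = quantize_6bit(w)
w_hat = dequantize(codes, scale)
# Reconstruction error is bounded by half a quantization step (scale / 2).
```

Note that the codes here still occupy one byte each in memory; the 6-bits-per-value saving is realized only once codes are bit-packed (e.g. four codes per three bytes), which this sketch omits for clarity.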