Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models