Enhancing Large Language Models through Adaptive Tokenizers