Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization