How Transformers Utilize Multi-Head Attention in In-Context Learning? A Case Study on Sparse Linear Regression