Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free