Better Language Model Inversion by Compactly Representing Next-Token Distributions