2025-01-22
5 papers were accepted at ICLR 2025 (3 as a co-first author)! We proposed long-context perplexity and invariant in-context learning for better training and usage of LLMs. We also looked into some fundamental questions, such as the OOD generalization of in-context learning, the interplay between monosemanticity and robustness, and the nature of projection heads.