Yifei Wang
I am a Member of Technical Staff at Amazon AGI SF Lab. I currently focus on post-training LLMs to build more capable agents at scale, improving their general reasoning across diverse real-world tasks including search, coding, and computer use.
I was a postdoc at MIT CSAIL (2023-2025), advised by Stefanie Jegelka. I received my Ph.D. in Applied Mathematics from Peking University, advised by Yisen Wang, Zhouchen Lin, and Jiansheng Yang. I also completed my B.S. and B.A. at Peking University.
My research interests lie broadly in self-supervised learning, representation learning, and reasoning. My work has received 5 best paper awards and has been featured by MIT News and Anthropic. I serve as an Area Chair for ICLR and ICML.
news
| January, 2026 | New papers were accepted at ICLR 2026: explaining the scaling laws of CoT length, showing that AR models can rival diffusion models for any-order generation, and exploring real-world benefits of sparsity via ultra-sparse embeddings, sparse feature attention, and predicting LLM transferability. |
|---|---|
| December, 2025 | Our paper G1 has received the Best Paper Award at the NeurIPS 2025 NPGML Workshop. |
| August, 2025 | I gave an invited talk, Two New Dimensions of Sparsity for Scaling LLMs, at Google DeepMind, covering sparse long-context training (ICLR 2025) and sparse embeddings (ICML 2025 Oral). |
| June, 2025 | Our ICML 2025 paper was featured in the MIT News article Unpacking the bias of large language models, where we identified the root causes of position bias in Transformers and proved them theoretically. |
| June, 2025 | I gave an invited talk at the ASAP Seminar, Your Next-Token Prediction and Transformers Are Biased for Long-Context Modeling; the recording is available on YouTube. |
| May, 2025 | Three papers were accepted to ICML 2025. Our oral presentation (top 1%) introduces contrastive sparse representations (CSR) to compress state-of-the-art embedding models to just 32 active dimensions, enabling ~100× faster retrieval with minimal accuracy loss and low training cost for large-scale vector databases and RAG systems. |
selected papers (see full publication list)
- When More is Less: Understanding Chain-of-Thought Length in LLMs. ICLR, 2026. 🏆 Best Paper Runner-up Award at the ICLR 2025 Workshop on Reasoning and Planning for LLMs.
- A Theoretical Understanding of Self-Correction through In-context Alignment. NeurIPS, 2024. 🏆 Best Paper Award at the ICML 2024 ICL Workshop. We proposed the first theoretical explanation of how LLM self-correction works (as in OpenAI o1) and showed its effectiveness against social bias and jailbreak attacks.
- Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap. ICLR, 2022. A new augmentation overlap theory for understanding the generalization of contrastive learning. Cited over 150 times. An extended version was accepted at JMLR.