Yifei Wang
Postdoc at MIT CSAIL
I am currently a postdoctoral researcher at MIT CSAIL, advised by Stefanie Jegelka. I am interested in principled, scalable, and safety-aware machine learning algorithms for building self-supervised foundation models, with applications to vision, language, graph, and multimodal domains.
My first-author papers received the sole Best ML Paper Award at ECML-PKDD 2021, the Silver Best Paper Award at the ICML 2021 AdvML workshop, and the Best Paper Award at the ICML 2024 ICL workshop.
I obtained my PhD in Applied Mathematics from Peking University in 2023, advised by Yisen Wang, Zhouchen Lin, and Jiansheng Yang. Prior to that, I received bachelor's degrees in mathematics and philosophy from Peking University.
I am on the job market! Please reach out if you are aware of exciting opportunities.
news
December, 2024 | I will give a talk at the Department of Applied Mathematics and Statistics at Johns Hopkins University. |
November, 2024 | I gave a guest lecture on Towards Test-time Self-supervised Learning (slides) at Boston College. |
October, 2024 | 3 new preprints are out, demystifying 1) why perplexity fails to represent long-context abilities of LLMs (paper), 2) how sparse autoencoders can potentially improve model robustness (paper), and 3) whether ICL can truly extrapolate to OOD scenarios (paper). |
October, 2024 | 6 papers were accepted to NeurIPS 2024. We investigated how LLMs are capable of self-correction (paper), how to enable representation-space in-context learning through joint embedding models (paper), how Transformers avoid feature collapse with LayerNorm (paper), and why predicting data corruptions (e.g., Gaussian noise) helps learn good representations (paper). |
September, 2024 | I gave a talk at NYU Tandon on Building Safe Foundation Models from Principled Understanding. |
August, 2024 | I will be organizing the ML Tea seminar at MIT CSAIL this fall, a weekly 30-minute talk series featuring members of the machine learning community around MIT. Join us on Mondays in 32-G882! |
August, 2024 | I gave a talk at Princeton University on Reimagining Self-Supervised Learning with Context. |
August, 2024 | I will continue to serve as an Area Chair for ICLR 2025. |
July, 2024 | I will be organizing the NeurIPS 2024 workshop on Red Teaming GenAI: What Can We Learn from Adversaries? Join us to discuss the brighter side of red teaming. |
selected publications
- A Theoretical Understanding of Self-Correction through In-context Alignment. In NeurIPS, 2024. Best Paper Award at ICML 2024 ICL Workshop.
  We introduced the first theoretical explanation of how self-correction works in LLMs (as in o1) and showed its effectiveness against social bias and jailbreak attacks.
- In-Context Symmetries: Self-Supervised Learning through Contextual World Models. In NeurIPS, 2024. Oral Presentation (top 4) at NeurIPS 2024 SSL Workshop.
  We introduced in-context learning abilities to joint embedding methods, making them more general-purpose and efficiently adaptable to downstream tasks.
- A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training. In ICLR, 2022. Silver Best Paper Award at ICML 2021 AdvML Workshop.
  From an energy-based perspective, we formulated contrastive learning as a generative model and established a connection between adversarial training and maximum likelihood, thus bridging generative and discriminative models.
- Reparameterized Sampling for Generative Adversarial Networks. In ECML-PKDD, 2021. Best ML Paper Award (1/685), invited to Machine Learning.
  We explored using the GAN discriminator (as a good reward model) to bootstrap sample quality through an efficient MCMC algorithm, which not only guarantees theoretical convergence but also improves sample efficiency and quality in practice.