research
* denotes shared first authorship
2024
- A Theoretical Understanding of Self-Correction through In-context Alignment. In NeurIPS, 2024. Best Paper Award at ICML 2024 ICL Workshop.
  We introduced the first theoretical explanation of how self-correction works in LLMs (as in o1) and showed its effectiveness against social bias and jailbreak attacks.
- In-Context Symmetries: Self-Supervised Learning through Contextual World Models. In NeurIPS, 2024. Oral Presentation (top 4) at NeurIPS 2024 SSL Workshop.
  We introduced in-context learning abilities to joint embedding methods, making them more general-purpose and efficiently adaptable to downstream tasks.
- Reasoning in Reasoning: A Hierarchical Framework for Better and Faster Neural Theorem Proving. In NeurIPS 2024 Workshop on Mathematical Reasoning and AI, 2024.
- The Multi-faceted Monosemanticity in Multimodal Representations. In NeurIPS 2024 Workshop on Responsibly Building the Next Generation of Multimodal Foundational Models, 2024.
- Rethinking Invariance in In-context Learning. In ICML Workshop on Theoretical Foundations of Foundation Models (TF2M), 2024.
2023
- Equilibrium Image Denoising with Implicit Differentiation. IEEE Transactions on Image Processing (IEEE TIP), 2023.
- Unbiased Stochastic Proximal Solver for Graph Neural Networks with Equilibrium States. In ICLR, 2023.
- On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization. In AAAI, 2023. Oral.
2022
- A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training. In ICLR, 2022. Silver Best Paper Award at ICML 2021 AdvML Workshop.
  From an energy-based perspective, we formulated contrastive learning as a generative model and established a connection between adversarial training and maximum likelihood, thus bridging generative and discriminative models (see the sketch below).
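The connection can be summarized with a standard energy-based identity. This is a rough sketch in generic EBM notation, not necessarily the paper's exact formulation:

```latex
% An energy-based model defines p_\theta(x) = e^{-E_\theta(x)} / Z(\theta).
% Its maximum-likelihood gradient splits into a positive and a negative phase:
\nabla_\theta \log p_\theta(x)
  = -\nabla_\theta E_\theta(x)
  + \mathbb{E}_{x' \sim p_\theta}\!\left[ \nabla_\theta E_\theta(x') \right]
% The intractable negative phase is typically estimated with gradient-based
% MCMC samples (e.g. Langevin dynamics), whose update rule takes the same
% form as the inner maximization step of adversarial training.
```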
2021
- Reparameterized Sampling for Generative Adversarial Networks. In ECML-PKDD, 2021. Best ML Paper Award (1/685), invited to the Machine Learning journal.
  We explored using the GAN discriminator (as a good reward model) to bootstrap sample quality through an efficient MCMC algorithm that not only guarantees theoretical convergence but also improves sample efficiency and quality in practice; a minimal sketch follows.
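To illustrate the general idea of discriminator-guided MCMC, here is a minimal independence Metropolis-Hastings sketch in the spirit of the description above, not the paper's exact reparameterized sampler. The `generator` and `discriminator` callables are assumptions supplied by the user:

```python
import numpy as np

# Sketch of discriminator-guided MCMC sampling for a GAN (independence
# Metropolis-Hastings). Assumed interfaces: `generator` maps a latent z to a
# sample, `discriminator` returns P(real | x) in (0, 1).

def density_ratio(discriminator, x, eps=1e-8):
    """For a well-calibrated discriminator, D(x) / (1 - D(x)) is proportional
    to p_data(x) / p_g(x), so it serves as an unnormalized reward."""
    d = discriminator(x)
    return d / (1.0 - d + eps)

def mh_gan_sample(generator, discriminator, latent_dim, n_steps=100, rng=None):
    """Run an independence MH chain with generator proposals; its stationary
    distribution approximates the data distribution."""
    if rng is None:
        rng = np.random.default_rng()
    x = generator(rng.standard_normal(latent_dim))  # initial state
    r_x = density_ratio(discriminator, x)
    for _ in range(n_steps):
        x_prop = generator(rng.standard_normal(latent_dim))  # fresh proposal
        r_prop = density_ratio(discriminator, x_prop)
        # Accept with probability min(1, r(x') / r(x)): proposals the
        # discriminator scores as "more real" are kept more often.
        if rng.uniform() < min(1.0, r_prop / r_x):
            x, r_x = x_prop, r_prop
    return x

# Toy usage with stand-in callables (purely illustrative):
# sample = mh_gan_sample(lambda z: z,
#                        lambda x: 1.0 / (1.0 + np.exp(-x.sum())),
#                        latent_dim=8)
```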