news

December, 2024 I will give a talk at the Department of Applied Mathematics and Statistics at Johns Hopkins University.
November, 2024 I gave a guest lecture on Towards Test-time Self-supervised Learning (slides) at Boston College.
October, 2024 Three new preprints are out, demystifying 1) why perplexity fails to reflect the long-context abilities of LLMs (paper), 2) how sparse autoencoders can potentially improve model robustness (paper), and 3) whether in-context learning can truly extrapolate to OOD scenarios (paper).
October, 2024 Six papers were accepted to NeurIPS 2024. Among them, we investigated how LLMs are capable of self-correction (paper), how to enable representation-space in-context learning through joint embedding models (paper), how Transformers avoid feature collapse with LayerNorm (paper), and why predicting data corruptions (e.g., Gaussian noise) helps learn good representations (paper).
September, 2024 I gave a talk at NYU Tandon on Building Safe Foundation Models from Principled Understanding.
August, 2024 I will be organizing the ML Tea seminar at MIT CSAIL this fall, a weekly 30-minute talk series featuring members of the machine learning community around MIT. Join us on Mondays at 32-G882!
August, 2024 I gave a talk at Princeton University on Reimagining Self-Supervised Learning with Context.
August, 2024 I will continue to serve as an Area Chair for ICLR 2025.
July, 2024 I will be organizing the NeurIPS 2024 workshop on Red Teaming GenAI: What Can We Learn from Adversaries? Join us to discuss the brighter side of red teaming.

Announcement 1

Title: Exciting New Paper Accepted
Date: November 2024
We are thrilled to announce that our new paper on self-supervised learning has been accepted to NeurIPS 2024.


Announcement 2

Title: Workshop on AI Safety
Date: October 2024
Join us at the upcoming workshop on AI Safety, where we will discuss robust and interpretable models. The event will take place in November 2024.


Announcement 3

Title: Grant Awarded for Research on GNNs
Date: September 2024
We have been awarded a major grant for our research on graph neural networks. This funding will support our work for the next three years.