news
Date | News |
---|---|
December, 2024 | I gave a talk on Principles of Foundation Models at Johns Hopkins University. |
November, 2024 | I gave a guest lecture on Towards Test-time Self-supervised Learning (slides) at Boston College. |
October, 2024 | 3 new preprints are out, exploring 1) how existing long-context training of LLMs is problematic and how to address it (paper), 2) how sparse autoencoders can significantly improve robustness in noisy and few-shot scenarios (paper), and 3) whether in-context learning (ICL) can truly extrapolate to out-of-distribution (OOD) scenarios (paper). |
October, 2024 | 6 papers were accepted to NeurIPS 2024. Among them, we investigated how LLMs perform self-correction at test time (paper), how to build dynamic world models through joint embedding methods (paper), how Transformers avoid feature collapse with LayerNorm and attention masks (paper), and why equivariant prediction of data corruptions helps learn good representations (paper). |
September, 2024 | I gave a talk at NYU Tandon on Building Safe Foundation Models from Principled Understanding. |
August, 2024 | I gave a talk at Princeton University on Reimagining Self-Supervised Learning with Context. |
Announcement 1
Title: Exciting New Paper Accepted
Date: November 2024
We are thrilled to announce that our new paper on self-supervised learning has been accepted to NeurIPS 2024.
Announcement 2
Title: Workshop on AI Safety
Date: October 2024
Join us at the upcoming workshop on AI Safety, where we’ll discuss robust and interpretable models. The event will take place in November 2024.
Announcement 3
Title: Grant Awarded for Research on GNNs
Date: September 2024
We have been awarded a major grant for our research on graph neural networks. This funding will support our work for the next three years.