May 2, 2025

Three papers were accepted at ICML 2025. We proposed CSR (Oral Presentation), which builds state-of-the-art shortened embeddings (image/text/multimodal) via sparse coding. We also characterized the causes of Transformers' position bias and improved length generalization through output-space alignment.