Boosting Unsupervised Contrastive Learning Using Diffusion-Based Data Augmentation From Scratch
Date:
Poster presentation at ICML 2024 showcasing the DiffAug framework for unsupervised contrastive learning. Demonstrated how diffusion models can generate high-quality augmented views that preserve semantic content while introducing beneficial variation. Engaged with leading researchers in self-supervised learning, discussing the theoretical foundations of diffusion-based augmentation and its advantages over traditional geometric and color transformations. Received positive feedback on the method’s generalizability across domains, including natural images, medical imaging, and scientific visualization. The work addresses a fundamental challenge in representation learning: obtaining robust visual features without labeled data. A sketch of the core idea follows.
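To make the idea concrete, below is a minimal sketch of how a diffusion model can supply positive views for a contrastive objective. This is an illustration under assumptions, not the DiffAug implementation: `diffusion_augment`, the `nn.Identity` placeholder standing in for a pretrained denoiser, and the toy encoder are hypothetical, and the loss shown is standard InfoNCE rather than DiffAug's exact objective.

```python
# Minimal sketch (not the authors' code) of diffusion-based augmentation
# feeding a contrastive loss. `denoiser` stands in for a pretrained
# diffusion network; the real DiffAug pipeline may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

def diffusion_augment(x, denoiser, t=0.3):
    """Hypothetical augmenter: partially noise the image (forward
    diffusion), then let the denoiser reconstruct it. The reconstruction
    keeps semantics while introducing generative variation."""
    noise = torch.randn_like(x)
    x_noisy = (1 - t) ** 0.5 * x + t ** 0.5 * noise  # DDPM-style forward step
    return denoiser(x_noisy)                          # approximate reverse step

def info_nce(z1, z2, temperature=0.2):
    """Standard InfoNCE loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy stand-ins so the sketch runs end to end.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
denoiser = nn.Identity()  # placeholder for a pretrained diffusion model

x = torch.randn(8, 3, 32, 32)              # batch of images
view1 = diffusion_augment(x, denoiser)     # one diffusion-based view
view2 = diffusion_augment(x, denoiser)     # a second stochastic view
loss = info_nce(encoder(view1), encoder(view2))
loss.backward()
```

In a real setup the placeholder denoiser would be a pretrained diffusion network, and the noising strength `t` would control how much generative variation the augmented view carries relative to the semantics it preserves.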
