TriCoLo: Trimodal Contrastive Loss for Fine-grained Text to Shape Retrieval

Yue Ruan, Han-Hung Lee, Ke Zhang, Angel X. Chang
2022, arXiv preprint
Recent work on contrastive losses for learning joint embeddings over multimodal data has been successful at downstream tasks such as retrieval and classification. In contrast, work on joint representation learning for 3D shapes and text has so far focused mostly on improving embeddings through modeling of complex attention between representations, or through multi-task learning. We show that with large-batch contrastive learning we achieve SoTA on text-shape retrieval without complex mechanisms or losses. Prior work on 3D and text representations has also focused on bimodal representation learning, using either voxels or multi-view images paired with text. We therefore propose a trimodal learning scheme that achieves even higher performance and better representations for all modalities.
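As a rough illustration of the kind of objective the abstract describes, the sketch below applies a symmetric InfoNCE-style contrastive loss across all three modality pairs (text, voxel, multi-view image). This is a minimal sketch under assumed conventions, not the authors' implementation: the function names, the pairwise summation, and the fixed temperature are all assumptions, and embeddings are assumed to be L2-normalized rows of matched batches.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """InfoNCE loss from modality a to modality b.

    a, b: (N, D) L2-normalized embeddings; row i of a matches row i of b.
    """
    logits = a @ b.T / temperature
    # numerically stable log-softmax over each row
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the matching pair sits on the diagonal
    return -np.mean(np.diag(log_probs))

def trimodal_loss(text, voxel, image, temperature=0.07):
    """Sum of symmetric InfoNCE losses over the three modality pairs
    (hypothetical formulation, not taken from the paper)."""
    pairs = [(text, voxel), (text, image), (voxel, image)]
    return sum(info_nce(x, y, temperature) + info_nce(y, x, temperature)
               for x, y in pairs)
```

With a large batch, each matched triple is contrasted against every other sample in the batch across all three pairings, which is what makes the batch size itself do much of the work that attention mechanisms or auxiliary losses do in prior approaches.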
arXiv:2201.07366v1