Cross-Probe BERT for Fast Cross-Modal Search

Tan Yu, Hongliang Fei, Ping Li
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022)
Owing to the effectiveness of cross-modal attention, text-vision BERT models have achieved excellent performance in text-image retrieval. Nevertheless, cross-modal attention in text-vision BERT models incurs an expensive computation cost in text-vision retrieval because of its pairwise input. Deploying these models for large-scale cross-modal retrieval in real applications is therefore normally impractical. To address this inefficiency in existing text-vision BERT models, in this work we develop a novel architecture, cross-probe BERT. It devises a small number of text and vision probes, and cross-modal attention is achieved efficiently through the interactions between these text and vision probes. It incurs only a lightweight computation cost while still effectively exploiting cross-modal attention. Systematic experiments on public benchmarks demonstrate the excellent effectiveness and efficiency of our cross-probe BERT.
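The abstract gives no implementation details, but the probe idea can be illustrated with a minimal PyTorch sketch: each modality's tokens are first summarized into a small, fixed set of learnable probe vectors, and cross-modal attention then runs only over the probes rather than the full token sequences. All names and dimensions below (ProbePooling, CrossProbeScorer, num_probes=4) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class ProbePooling(nn.Module):
    """Summarize a variable-length token sequence into a small, fixed set of
    probe vectors: learnable probes act as queries over the tokens."""
    def __init__(self, dim: int, num_probes: int, num_heads: int = 4):
        super().__init__()
        self.probes = nn.Parameter(torch.randn(num_probes, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim) -> pooled: (batch, num_probes, dim)
        queries = self.probes.unsqueeze(0).expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(queries, tokens, tokens)
        return pooled

class CrossProbeScorer(nn.Module):
    """Score a text-image pair via cross-attention between the two small
    probe sets instead of between the full token sequences."""
    def __init__(self, dim: int = 256, num_probes: int = 4):
        super().__init__()
        self.text_pool = ProbePooling(dim, num_probes)
        self.vision_pool = ProbePooling(dim, num_probes)
        self.cross_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.score_head = nn.Linear(dim, 1)

    def forward(self, text_tokens: torch.Tensor,
                vision_tokens: torch.Tensor) -> torch.Tensor:
        t = self.text_pool(text_tokens)      # (batch, num_probes, dim)
        v = self.vision_pool(vision_tokens)  # (batch, num_probes, dim)
        # Pairwise work is O(num_probes^2), not O(text_len * vision_len).
        fused, _ = self.cross_attn(t, v, v)
        return self.score_head(fused.mean(dim=1)).squeeze(-1)

# Usage: score a toy batch of 2 text-image pairs.
scorer = CrossProbeScorer(dim=256, num_probes=4)
scores = scorer(torch.randn(2, 32, 256), torch.randn(2, 49, 256))
print(scores.shape)  # torch.Size([2])
```

In a sketch like this, each modality's probes depend only on that modality, so they can be precomputed offline for a whole corpus; at query time only the cheap probe-level cross-attention runs per candidate pair, which is consistent with the efficiency claim in the abstract.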
doi:10.1145/3477495.3531826