617 Hits in 2.9 sec

Quadruplet-Wise Image Similarity Learning

Marc T. Law, Nicolas Thome, Matthieu Cord
2013 2013 IEEE International Conference on Computer Vision  
From these quadruplet-wise constraints, we propose a similarity learning framework relying on a convex optimization scheme.  ...  Working with inequality constraints involving quadruplets of images, our approach aims at efficiently modeling similarity from rich or complex semantic label relationships.  ...  (5) is similar to the constraints used in triplet-wise approaches [10, 25, 28] with the exception that we use quadruplets of images.  ... 
doi:10.1109/iccv.2013.38 dblp:conf/iccv/LawTC13 fatcat:ivibwcplwfhxric6fxopy7nuey
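To make the quadruplet-wise constraints in this first entry concrete, here is a minimal, hedged sketch (not the authors' released code): for a quadruplet (i, j, k, l) the constraint asks that the pair (i, j) end up closer than the pair (k, l) by some margin, which can be enforced with a hinge penalty. Names, tensor shapes and the margin value are illustrative assumptions.

import torch
import torch.nn.functional as F

def quadruplet_wise_hinge(xi, xj, xk, xl, margin=1.0):
    # xi..xl: (batch, dim) embeddings; the pair (xi, xj) should be closer
    # than the pair (xk, xl) by at least `margin`.
    d_close = F.pairwise_distance(xi, xj)   # distance we want small
    d_far = F.pairwise_distance(xk, xl)     # distance we want large
    return F.relu(margin + d_close - d_far).mean()

# toy usage with random embeddings
embeddings = [torch.randn(8, 128) for _ in range(4)]
print(quadruplet_wise_hinge(*embeddings).item())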

Learning a Distance Metric from Relative Comparisons between Quadruplets of Images

Marc T. Law, Nicolas Thome, Matthieu Cord
2016 International Journal of Computer Vision  
Classic metric learning approaches focus on constraints that involve pairs or triplets of images.  ...  We propose a general Mahalanobis-like distance metric learning framework that exploits distance constraints over up to four different images.  ...  We investigate the impact of these strategies as a function of the number of exploited constraints. 3 Quadruplet-wise Similarity Learning Framework Quadruplet Constraints As explained in Section 2.5,  ... 
doi:10.1007/s11263-016-0923-4 fatcat:gilt2ozf6jffhgqvmg33ucp2he
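The IJCV entry above works with a Mahalanobis-like metric. As a hedged illustration (the convex optimization scheme itself is not reproduced here), the sketch below evaluates d_M(x, y) = (x - y)^T M (x - y) for a symmetric PSD matrix M and checks a single quadruplet constraint; the random PSD construction is purely for demonstration.

import torch

def mahalanobis_sq(x, y, M):
    # squared Mahalanobis-like distance (x - y)^T M (x - y)
    diff = x - y
    return diff @ M @ diff

dim = 16
L = torch.randn(dim, dim)
M = L @ L.T                      # random symmetric PSD matrix for the demo

xi, xj, xk, xl = (torch.randn(dim) for _ in range(4))
satisfied = mahalanobis_sq(xi, xj, M) < mahalanobis_sq(xk, xl, M)
print("quadruplet constraint d_M(xi, xj) < d_M(xk, xl):", bool(satisfied))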

Margin Sample Mining Loss: A Deep Learning Based Method for Person Re-identification [article]

Qiqi Xiao, Hao Luo, Chi Zhang
2017 arXiv   pre-print
Recently, deep learning with a metric learning loss has become a common framework for ReID.  ...  In this paper, we also propose a new metric learning loss with hard sample mining called margin sample mining loss (MSML) which can achieve better accuracy compared with other metric learning losses, such  ...  Quadruplet loss adds a new negative pair, and a quadruplet samples four images from three identities.  ... 
arXiv:1710.00478v3 fatcat:x6t5vc4nyfautfy6gbv52k6ovu
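The snippet above notes that a quadruplet loss "adds a new negative pair" built from four images of three identities. A generic sketch of that idea follows; the margins and MSML's exact hard-sample-mining strategy are not reproduced, so treat the values below as assumptions.

import torch
import torch.nn.functional as F

def quadruplet_loss(a, p, n1, n2, margin1=1.0, margin2=0.5):
    # a and p share an identity; n1 and n2 come from two further identities.
    d_ap = F.pairwise_distance(a, p)       # positive pair
    d_an = F.pairwise_distance(a, n1)      # usual anchor-negative pair
    d_nn = F.pairwise_distance(n1, n2)     # the extra negative pair
    term1 = F.relu(margin1 + d_ap - d_an)  # triplet-style term
    term2 = F.relu(margin2 + d_ap - d_nn)  # push d_ap below negative-pair distances
    return (term1 + term2).mean()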

Understanding Fashion Trends from Street Photos via Neighbor-Constrained Embedding Learning

Xiaoling Gu, Yongkang Wong, Pai Peng, Lidan Shou, Gang Chen, Mohan S. Kankanhalli
2017 Proceedings of the 2017 ACM on Multimedia Conference - MM '17  
Specifically, we present QuadNet, an effective CNN based image embedding network driven by both multi-task classification loss and neighbor-constrained similarity loss.  ...  The latter loss function is computed with a novel quadruplet loss function, which considers both hard and soft positive neighbors as well as a negative neighbor for each anchor image.  ...  Extending from pairwise or triplet-wise approaches, Law [18] introduced an image similarity learning framework with the quadruplet-wise constraints, while Ustinova [27] presented a Histogram loss for  ... 
doi:10.1145/3123266.3123441 dblp:conf/mm/GuWPS0K17 fatcat:qeqq77tuyvberp7iyuqvthrw5a
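One plausible reading of the neighbor-constrained quadruplet loss described above: each anchor is paired with a hard positive, a soft positive and a negative neighbor, and the three distances are ordered with margins. The ordering and margin values below are assumptions for illustration, not QuadNet's exact formulation.

import torch
import torch.nn.functional as F

def neighbor_quadruplet_loss(anchor, hard_pos, soft_pos, neg, m1=0.2, m2=0.5):
    # encourage d(anchor, hard_pos) + m1 <= d(anchor, soft_pos)
    # and       d(anchor, soft_pos) + m2 <= d(anchor, neg)
    d_hp = F.pairwise_distance(anchor, hard_pos)
    d_sp = F.pairwise_distance(anchor, soft_pos)
    d_n = F.pairwise_distance(anchor, neg)
    return (F.relu(m1 + d_hp - d_sp) + F.relu(m2 + d_sp - d_n)).mean()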

PCCT: Progressive Class-Center Triplet Loss for Imbalanced Medical Image Classification [article]

Kanghao Chen, Weixian Lei, Rong Zhang, Shen Zhao, Wei-shi Zheng, Ruixuan Wang
2022 arXiv   pre-print
Furthermore, the class-center involved triplet loss is extended to the pair-wise ranking loss and the quadruplet loss, which demonstrates the generalization of the proposed framework.  ...  Extensive experiments support that the PCCT framework works effectively for medical image classification with imbalanced training images.  ...  For extension, the class-center based pair-wise ranking loss and quadruplet loss can be obtained by simply replacing positive and negative samples with the corresponding class centers, which is similar  ... 
arXiv:2207.04793v1 fatcat:reado6j5zfcn3mvcg2t2h6crgm
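The PCCT snippet says the quadruplet variant is obtained "by simply replacing positive and negative samples with the corresponding class centers". A hedged sketch of that substitution; the center update rule, margins and the progressive schedule are illustrative assumptions, not the paper's recipe.

import torch
import torch.nn.functional as F

def class_centers(embeddings, labels, num_classes):
    # per-class mean embedding computed from the current batch
    centers = torch.zeros(num_classes, embeddings.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centers[c] = embeddings[mask].mean(dim=0)
    return centers

def center_quadruplet_loss(anchor, y, neg1, neg2, centers, m1=1.0, m2=0.5):
    # y, neg1, neg2: (batch,) class indices; neg1/neg2 index classes
    # different from y (and from each other).
    d_ap = F.pairwise_distance(anchor, centers[y])             # anchor vs. own center
    d_an = F.pairwise_distance(anchor, centers[neg1])          # anchor vs. negative center
    d_nn = F.pairwise_distance(centers[neg1], centers[neg2])   # two negative centers
    return (F.relu(m1 + d_ap - d_an) + F.relu(m2 + d_ap - d_nn)).mean()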

Cross-modal Deep Metric Learning with Multi-task Regularization [article]

Xin Huang, Yuxin Peng
2017 arXiv   pre-print
similarity in a unified multi-task learning architecture.  ...  The quadruplet ranking loss can model the semantically similar and dissimilar constraints to preserve cross-modal relative similarity ranking information.  ...  Cross-modal similarity learning focuses on exploiting semantic correlation among multiple modalities like image and text.  ... 
arXiv:1703.07026v2 fatcat:osuc6rg5qnfang5fsqxvwui4ce
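A rough, hedged sketch of a cross-modal quadruplet ranking constraint in the spirit of this entry: an image and its matching text should be closer than either is to a mismatched item from the other modality. The pairing scheme and margin below are assumptions for illustration only, not the paper's exact loss.

import torch
import torch.nn.functional as F

def cross_modal_quadruplet(img, txt_pos, txt_neg, img_neg, margin=0.5):
    d_match = F.pairwise_distance(img, txt_pos)              # matching image-text pair
    d_txt_mismatch = F.pairwise_distance(img, txt_neg)       # image vs. wrong text
    d_img_mismatch = F.pairwise_distance(txt_pos, img_neg)   # text vs. wrong image
    return (F.relu(margin + d_match - d_txt_mismatch)
            + F.relu(margin + d_match - d_img_mismatch)).mean()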

OSCARS: An Outlier-Sensitive Content-Based Radiography Retrieval System

Xiaoyuan Guo, Jiali Duan, Saptarshi Purkayastha, Hari Trivedi, Judy Wawira Gichoya, Imon Banerjee
2022 Proceedings of the 2022 International Conference on Multimedia Retrieval  
Our goal is to identify both intra/inter-class similarities for fine-grained retrieval.  ...  CCS CONCEPTS • Information systems → Learning to rank.  ...  Accordingly, we sample both the intra-class and interclass negative images to construct quadruplets for intra-class and inter-class similarity learning. (3) We demonstrate the model effectiveness with  ... 
doi:10.1145/3512527.3531425 fatcat:iiqp22hksfdttprkye2fophb5e

Learning Transformation-Aware Embeddings for Image Forensics [article]

Aparna Bharati, Daniel Moreira, Patrick Flynn, Anderson Rocha, Kevin Bowyer, Walter Scheirer
2020 arXiv   pre-print
Our approach learns transformation-aware descriptors using weak supervision via composited transformations and a rank-based quadruplet loss.  ...  This paper introduces a novel deep learning-based approach to provide a plausible ordering to images that have been generated from a single image through transformations.  ...  Hence, we propose learning embeddings using quadruplets to better facilitate ordering among the related images.  ... 
arXiv:2001.04547v1 fatcat:4vz2jdn62bfcze5w3qagdpytra
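A hedged illustration of the rank-based ordering idea this entry alludes to: images derived from the same source by progressively more composited transformations should land progressively farther from the source in embedding space. The exact quadruplet construction in the paper may differ; this is a sketch under that assumption.

import torch
import torch.nn.functional as F

def rank_ordering_loss(src, edit1, edit2, edit3, margin=0.2):
    # src -> edit1 -> edit2 -> edit3 is a transformation chain; distances
    # from src should grow monotonically along the chain.
    d1 = F.pairwise_distance(src, edit1)
    d2 = F.pairwise_distance(src, edit2)
    d3 = F.pairwise_distance(src, edit3)
    return (F.relu(margin + d1 - d2) + F.relu(margin + d2 - d3)).mean()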

OSCARS: An Outlier-Sensitive Content-Based Radiography Retrieval System [article]

Xiaoyuan Guo, Jiali Duan, Saptarshi Purkayastha, Hari Trivedi, Judy Wawira Gichoya, Imon Banerjee
2022 arXiv   pre-print
Our goal is to identify both intra/inter-class similarities for fine-grained retrieval.  ...  We suggest a weighted metric learning objective to balance the intra and inter-class feature learning. We experimented on two representative public radiography datasets.  ...  All the images in a quadruplet are fed into the feature extractor to learn their latent embeddings (e_a, e_p, e_n_intra, e_n_inter).  ... 
arXiv:2204.03074v1 fatcat:yg3mrwfrbre6neeu7rxkllhwia
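Both OSCARS entries describe quadruplets whose anchor e_a comes with a positive e_p, an intra-class negative e_n_intra and an inter-class negative e_n_inter, combined through a weighted objective. A hedged sketch of that structure; the margins and the weighting scheme are illustrative assumptions, not the paper's settings.

import torch
import torch.nn.functional as F

def oscars_style_quadruplet(e_a, e_p, e_n_intra, e_n_inter,
                            m_intra=0.2, m_inter=0.5, w=0.5):
    d_ap = F.pairwise_distance(e_a, e_p)
    d_intra = F.pairwise_distance(e_a, e_n_intra)  # negative from the same class
    d_inter = F.pairwise_distance(e_a, e_n_inter)  # negative from another class
    intra_term = F.relu(m_intra + d_ap - d_intra)
    inter_term = F.relu(m_inter + d_ap - d_inter)
    return (w * intra_term + (1.0 - w) * inter_term).mean()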

Learning Quintuplet Loss for Large-scale Visual Geo-Localization [article]

Qiang Zhai
2020 arXiv   pre-print
While perspective deviation almost inevitably exists between training images and query images because of the arbitrary perspective.  ...  To cope with this situation, in this paper, we in-depth analyze the limitation of triplet loss which is the most commonly used metric learning loss in state-of-the-art LSVGL framework, and propose a new  ...  Luo [11] proposed a webly-supervised learning method for salient object detection with no pixel-wise annotations. Deep metric learning.  ... 
arXiv:1907.11350v2 fatcat:owkmdcsrarclvp6ppywuy2knma

Quadruplet Network with One-Shot Learning for Fast Visual Object Tracking [article]

Xingping Dong and Jianbing Shen and Yu Liu and Wenguan Wang and Fatih Porikli
2018 arXiv   pre-print
According to the similarity metric, we select the most similar and the most dissimilar instances as the positive and negative inputs of triplet loss from each multi-tuple.  ...  In the same vein of discriminative one-shot learning, Siamese networks allow recognizing an object from a single exemplar with the same class label.  ...  In this paper, we introduce a novel quadruplet network for one-shot learning. Our quadruplet network is a discriminative model to one-shot learning.  ... 
arXiv:1705.07222v2 fatcat:toqmnji66zhm5alzbksjlveu74
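A hedged sketch of the instance-selection step this tracking entry describes: within each multi-tuple of candidate instances, the instance most similar to the exemplar (under the current metric) is taken as the positive and the least similar as the negative for a standard triplet loss. The cosine-similarity choice and shapes below are assumptions.

import torch
import torch.nn.functional as F

def select_and_triplet(exemplar, candidates, margin=1.0):
    # exemplar: (dim,) embedding; candidates: (k, dim) embeddings from one multi-tuple
    sims = F.cosine_similarity(exemplar.unsqueeze(0).expand_as(candidates), candidates)
    pos = candidates[sims.argmax()]   # most similar candidate -> positive
    neg = candidates[sims.argmin()]   # most dissimilar candidate -> negative
    d_ap = torch.dist(exemplar, pos)
    d_an = torch.dist(exemplar, neg)
    return F.relu(margin + d_ap - d_an)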

Quadruplet Selection Methods for Deep Embedding Learning [article]

Kaan Karaman, Erhan Gundogdu, Aykut Koc, A. Aydin Alatan
2019 arXiv   pre-print
In order to improve the recognition strength of the learned features, we present a novel feature selection method specifically designed for four training samples of a quadruplet.  ...  samples are utilized both for classification and a quadruplet-based loss function.  ...  Moreover, the loss function for the quadruplets is similar to the triplet based methods [6] .  ... 
arXiv:1907.09245v1 fatcat:hm6zm2ul2bderpbddwddcsqrqi

Fantope Regularization in Metric Learning

Marc T. Law, Nicolas Thome, Matthieu Cord
2014 2014 IEEE Conference on Computer Vision and Pattern Recognition  
metric learning algorithms.  ...  This paper introduces a regularization method to explicitly control the rank of a learned symmetric positive semidefinite distance matrix in distance metric learning.  ...  We focus on quadruplet-wise constraints [13] that encompass pairwise and triplet-wise constraints.  ... 
doi:10.1109/cvpr.2014.138 dblp:conf/cvpr/LawTC14 fatcat:lzrkmw6jkrbcpbejaygd5knaqy
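For the Fantope-regularization entry: the regularizer that explicitly controls the rank of the learned PSD matrix M equals the sum of the k smallest eigenvalues of M, so driving it to zero encourages rank(M) <= d - k. The sketch below evaluates it via an eigendecomposition, which is one straightforward route and not necessarily how the paper's solver computes it.

import torch

def fantope_regularizer(M, k):
    # sum of the k smallest eigenvalues of a symmetric PSD matrix M
    eigvals = torch.linalg.eigvalsh(M)   # ascending order for symmetric matrices
    return eigvals[:k].sum()

# toy example: a rank-2 PSD matrix in dimension 5
U = torch.randn(5, 2)
M = U @ U.T
print(fantope_regularizer(M, k=3))   # ~0 because rank(M) = 2 <= 5 - 3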

Visual Similarity Attention [article]

Meng Zheng, Srikrishna Karanam, Terrence Chen, Richard J. Radke, Ziyan Wu
2022 arXiv   pre-print
While there has been substantial progress in learning suitable distance metrics, these techniques in general lack transparency and decision reasoning, i.e., explaining why the input set of images is similar  ...  Furthermore, we make our proposed similarity attention a principled part of the learning process, resulting in a new paradigm for learning similarity functions.  ...  First, our design is not limited to a particular type of similarity learning architecture; we show applicability to and results with three different types of architectures: Siamese, triplet, and quadruplet  ... 
arXiv:1911.07381v2 fatcat:3v2preesgbak7h4qjvxybjrql4

A Quadruplet Loss for Enforcing Semantically Coherent Embeddings in Multi-output Classification Problems [article]

Hugo Proença, Ehsan Yaghoubi, Pendar Alirezazadeh
2020 arXiv   pre-print
triplet loss formulation, our proposal also privileges small distances between positive pairs, but at the same time explicitly enforces that the distance between other pairs corresponds directly to their similarity  ...  Also, in opposition to its triplet counterpart, the proposed loss is agnostic with regard to any demanding criteria for mining learning instances (such as the semi-hard pairs).  ...  Precondition: M: CNN, t_e: total epochs, s: mini-batch size, b: batch size, I: learning set, n images. for 1 to t_e do: for 1 to n/s do: b ← randomly sample b out of n images from I; c ← create b 4 quadruplet  ... 
arXiv:2002.11644v3 fatcat:l7qmfohawrgy3ljjq4erstyzly
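A hedged sketch of the "distance corresponds directly to similarity" idea in this last entry: for two pairs drawn from a quadruplet, whichever pair shares more labels should also be closer in embedding space. The label-similarity measure (shared attribute count), the hinge form and the margin are assumptions, not the paper's exact loss.

import torch
import torch.nn.functional as F

def coherent_pair_ranking(x1, x2, x3, x4, y1, y2, y3, y4, margin=0.1):
    # x*: (batch, dim) embeddings; y*: (batch, n_labels) binary label vectors.
    # Pair (x1, x2) vs. pair (x3, x4): the pair with higher label overlap must
    # have the smaller embedding distance.
    sim_12 = (y1 * y2).sum(dim=1).float()   # shared labels in the first pair
    sim_34 = (y3 * y4).sum(dim=1).float()   # shared labels in the second pair
    d_12 = F.pairwise_distance(x1, x2)
    d_34 = F.pairwise_distance(x3, x4)
    sign = torch.sign(sim_12 - sim_34)      # +1 if pair (1,2) is the more similar one
    mask = (sign != 0).float()              # skip ties in label similarity
    return (mask * F.relu(margin + sign * (d_12 - d_34))).mean()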
Showing results 1 — 15 out of 617 results