Instance Cross Entropy for Deep Metric Learning
[article]
2019
arXiv
pre-print
However, our Recall@1 on SOP is 77.3%, which is only 0.9% lower than the 78.2% of (Wang et al., 2019c). ...
∀a, c, ‖f_a^c‖₂ = 1. (6) The feature L₂-normalisation layer is implemented according to Wang et al. (2017a). ...
arXiv:1911.09976v1
fatcat:ae3enq5rxzfa7dmxustmbmsa5a
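The snippet's Eq. (6) constrains every embedding f_a^c to unit L₂ norm. A minimal numpy sketch of such a normalisation layer (the function name and the epsilon guard are illustrative, not from the paper):

```python
import numpy as np

def l2_normalise(features, eps=1e-12):
    """Scale each row (one embedding f_a^c) to unit L2 norm,
    so that ||f_a^c||_2 = 1 for every a, c."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.maximum(norms, eps)  # eps guards against zero vectors

embeddings = np.random.randn(4, 8)
unit = l2_normalise(embeddings)
# every row now lies on the unit hypersphere
assert np.allclose(np.linalg.norm(unit, axis=1), 1.0)
```

After this layer, cosine similarity between embeddings reduces to a plain dot product, which is why many metric-learning losses assume it.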
ID-aware Quality for Set-based Person Re-identification
[article]
2019
arXiv
pre-print
Set-based person re-identification (SReID) is a matching problem that aims to verify whether two sets are of the same identity (ID). Existing SReID models typically generate a feature representation per image and aggregate them to represent the set as a single embedding. However, they can easily be perturbed by noise (perceptually or semantically low-quality images), which is inevitable due to imperfect tracking/detection systems, or overfit to trivial images. In this work, we present a novel and simple solution to this problem based on ID-aware quality, which measures the perceptual and semantic quality of images guided by their ID information. Specifically, we propose an ID-aware Embedding that consists of two key components: (1) Feature learning attention, which learns robust image embeddings by focusing on 'medium'-hard images; this prevents overfitting to trivial images and alleviates the influence of outliers. (2) Feature fusion attention, which fuses the image embeddings in a set to obtain the set-level embedding; it ignores noisy information and pays more attention to discriminative images so as to aggregate more discriminative information. Experimental results on four datasets show that our method outperforms state-of-the-art approaches despite its simplicity.
arXiv:1911.09143v1
fatcat:kvwq7qloq5brhlsa4smi4cinii
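The feature fusion attention described above weights each image's embedding by a quality score before pooling, so noisy frames contribute little. A minimal sketch of that idea, assuming softmax-normalised quality weights (the function and score names are illustrative; the paper's actual attention module is learned):

```python
import numpy as np

def fuse_set(embeddings, quality_scores):
    """Fuse per-image embeddings into one set-level embedding.
    Each image is weighted by a softmax over its quality score,
    so low-quality/noisy images contribute less to the set vector."""
    w = np.exp(quality_scores - quality_scores.max())  # stable softmax
    w = w / w.sum()
    return (w[:, None] * embeddings).sum(axis=0)

set_embs = np.random.randn(5, 16)                 # 5 images of one identity
quality = np.array([2.0, 1.5, -3.0, 1.8, -2.5])   # low scores ~ noisy frames
set_vec = fuse_set(set_embs, quality)             # shape (16,)
```

With equal quality scores this degenerates to plain average pooling, which is the baseline aggregation the abstract contrasts against.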
Mutual Distillation of Confident Knowledge
[article]
2022
arXiv
pre-print
Mutual knowledge distillation (MKD) improves a model by distilling knowledge from another model. However, not all knowledge is certain and correct, especially under adverse conditions. For example, label noise usually leads to less reliable models due to undesired memorisation; wrong knowledge misleads the learning rather than helping it. This problem can be handled from two aspects: (i) improving the reliability of the model the knowledge comes from (i.e., the knowledge source's reliability); (ii) selecting reliable knowledge for distillation. In the literature, making a model more reliable is widely studied, while selective MKD receives little attention; we therefore focus on studying selective MKD. Concretely, we design a generic MKD framework, Confident knowledge selection followed by Mutual Distillation (CMD). The key component of CMD is a generic knowledge-selection formulation, making the selection threshold either static (CMD-S) or progressive (CMD-P). Additionally, CMD covers two special cases, zero knowledge and all knowledge, leading to a unified MKD framework. Extensive experiments are presented to demonstrate the effectiveness of CMD and thoroughly justify its design. For example, CMD-P obtains new state-of-the-art results in robustness against label noise.
arXiv:2106.01489v2
fatcat:wmdxzb4eznfjpejixlam26gnri
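The static variant of the selection step can be sketched as a confidence mask: only peer predictions whose maximum probability clears a threshold are used for distillation. This is a sketch of the selection idea only, assuming max-probability as the confidence measure; the threshold value and function name are illustrative:

```python
import numpy as np

def select_confident(peer_probs, tau=0.7):
    """CMD-S style static selection: keep only the peer predictions
    whose maximum softmax probability exceeds the threshold tau.
    tau = 0 keeps all knowledge; tau = 1 keeps none (the two
    special cases the abstract mentions)."""
    return peer_probs.max(axis=1) > tau

probs = np.array([[0.9, 0.05, 0.05],
                  [0.4, 0.35, 0.25],
                  [0.8, 0.1, 0.1]])
mask = select_confident(probs, tau=0.7)
# mask -> [True, False, True]
```

A progressive variant (CMD-P) would raise tau over training, e.g. interpolating from a loose to a strict threshold as the peer model becomes more trustworthy.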
Ranked List Loss for Deep Metric Learning
2021
IEEE Transactions on Pattern Analysis and Machine Intelligence
Xinshao Wang, Yang Hua, Elyor Kodirov ...
Xinshao Wang has been working on core deep learning techniques with diverse applications: (1) Deep metric learning: to learn discriminative and robust image/video representations for downstream tasks, ...
doi:10.1109/tpami.2021.3068449
pmid:33760730
fatcat:onbyudurbfhwjgk3lagd3azhqa
Ranked List Loss for Deep Metric Learning
2019
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Xinshao Wang is working on core deep learning techniques with applications to visual recognition: (1) Deep metric learning: to learn discriminative and robust image/video representations for downstream ...
Wang is currently a PhD student at Queen's University Belfast. ...
doi:10.1109/cvpr.2019.00535
dblp:conf/cvpr/WangHKHGR19
fatcat:jrkbck4ljvfixii7dvlt2h3cwm
Deep Metric Learning by Online Soft Mining and Class-Aware Attention
[article]
2019
arXiv
pre-print
For mining negatives, mining difficult negatives is applied in (Wang and Gupta 2015; Simo-Serra et al. 2015; Oh Song et al. 2016) . ...
Therefore, a variety of sample mining strategies have been studied recently (Schroff, Kalenichenko, and Philbin 2015; Oh Song et al. 2016; Wang and Gupta 2015; Simo-Serra et al. 2015; Yuan, Yang, and ...
arXiv:1811.01459v3
fatcat:ntm2v4vjcfdfhmlhhjjkle7y7u
Deep Metric Learning by Online Soft Mining and Class-Aware Attention
2019
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
For mining negatives, mining difficult negatives is applied in (Wang and Gupta 2015; Simo-Serra et al. 2015; Oh Song et al. 2016) . ...
Introduction: With the success of deep learning, deep metric learning has attracted a great deal of attention and has been applied to a wide range of visual tasks such as image retrieval (Wang et al. 2014; Huang ...
doi:10.1609/aaai.v33i01.33015361
fatcat:wklvfundbjd25lb2pthegegzam
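Both snippets above refer to mining difficult negatives, i.e. selecting, for an anchor, the negative examples that currently violate the margin the most. A minimal sketch of hardest-negative selection under Euclidean distance (illustrative only; the cited papers use various semi-hard and distance-weighted variants):

```python
import numpy as np

def hardest_negative(anchor, negatives):
    """Hard negative mining: return the index of the negative
    embedding closest to the anchor in Euclidean distance --
    the most difficult negative for a margin-based loss."""
    d = np.linalg.norm(negatives - anchor, axis=1)
    return int(np.argmin(d))

negatives = np.array([[0.0, 1.0], [5.0, 5.0], [0.1, 0.1]])
idx = hardest_negative(np.zeros(2), negatives)
# distances: 1.0, ~7.07, ~0.14 -> index 2 is the hardest
```

Online soft mining, as proposed in this paper, instead assigns continuous weights to examples rather than making a hard keep/drop decision.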
Ranked List Loss for Deep Metric Learning
[article]
2021
arXiv
pre-print
Xinshao Wang is working on core deep learning techniques with applications to visual recognition: (1) Deep metric learning: to learn discriminative and robust image/video representations for downstream ...
Wang is currently a PhD student at Queen's University Belfast. He has started a PDRA role at the University of Oxford. ...
arXiv:1903.03238v8
fatcat:rgr6dnthfnhffptpqvyqanmueu
Derivative Manipulation for General Example Weighting
[article]
2020
arXiv
pre-print
Xinshao Wang, Yang Hua, Elyor Kodirov, Guosheng Hu, and Neil M. Robertson. Deep metric learning by online soft mining and class-aware attention. In AAAI, 2019a. ...
Xinshao Wang, Yang Hua, Elyor Kodirov, and Neil M Robertson. Proselflc: Progressive self label correction for training robust deep neural networks. arXiv preprint arXiv:2005.03788, 2020. ...
arXiv:1905.11233v10
fatcat:bu2wxnw4d5d6bdhngu53zocv64
ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks
[article]
2021
arXiv
pre-print
To train robust deep neural networks (DNNs), we systematically study several target-modification approaches, which include output regularisation, and self and non-self label correction (LC). Two key issues are discovered: (1) Self LC is the most appealing, as it exploits the model's own knowledge and requires no extra models; however, how to automatically decide the trust degree of a learner as training goes on is not well answered in the literature. (2) Some methods penalise while others reward low-entropy predictions, prompting us to ask which is better. To resolve the first issue, taking two well-accepted propositions--deep neural networks learn meaningful patterns before fitting noise [3] and the minimum-entropy regularisation principle [10]--we propose a novel end-to-end method named ProSelfLC, which is designed according to learning time and entropy. Specifically, given a data point, we progressively increase trust in its predicted label distribution versus its annotated one if the model has been trained for enough time and the prediction is of low entropy (high confidence). For the second issue, according to ProSelfLC, we empirically show that it is better to redefine a meaningful low-entropy status and optimise the learner toward it; this serves as a defence of entropy minimisation. We demonstrate the effectiveness of ProSelfLC through extensive experiments in both clean and noisy settings. The source code is available at https://github.com/XinshaoAmosWang/ProSelfLC-CVPR2021. Keywords: entropy minimisation, maximum entropy, confidence penalty, self knowledge distillation, label correction, label noise, semi-supervised learning, output regularisation
arXiv:2005.03788v6
fatcat:hwo7trw4fjgeld4cwoeu5hu5mq
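The blending step the abstract describes — trusting the predicted label distribution more as training time grows and prediction entropy falls — can be sketched as below. This is only the blending idea under assumed names; the paper defines the exact global-trust schedule, which is not reproduced here:

```python
import numpy as np

def proselflc_target(y_onehot, p_pred, global_trust, eps=1e-12):
    """Blend the annotated one-hot label with the model's prediction.
    Local trust is high when the prediction has low entropy (high
    confidence); global_trust (in [0, 1]) grows with training time.
    The combined trust decides how far the target moves toward p_pred."""
    k = len(p_pred)
    entropy = -np.sum(p_pred * np.log(p_pred + eps))
    local_trust = np.clip(1.0 - entropy / np.log(k), 0.0, 1.0)
    g = global_trust * local_trust
    return (1.0 - g) * y_onehot + g * p_pred
```

Early in training (global_trust near 0) or for high-entropy predictions, the target stays at the annotated label; only confident, late-training predictions are allowed to correct it.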
IMAE for Noise-Robust Learning: Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude's Variance Matters
[article]
2020
arXiv
pre-print
Implementation details. Following (Liu et al., 2017; Wang et al., 2019b), we train GoogLeNet V2. ...
We follow the settings of recent SL (Wang et al., 2019e) and train ResNet44 (He et al., 2016) for a fair comparison with their reported results. ...
arXiv:1903.12141v9
fatcat:h5pbekokxjddvhhpypbcxe52ty
Deep learning in agriculture: A survey
2018
Computers and Electronics in Agriculture
G., et al., 2010) (Xinshao & Cheng, 2015). ...
However, testing time is generally faster than other ML-based methods (Chen, Lin, Zhao, Wang, & Gu, 2014). ...
doi:10.1016/j.compag.2018.02.016
fatcat:6ku7oneaorbm3miekfenus6lxe
Evaluation of soil fertility in the succession of karst rocky desertification using principal component analysis
2015
Solid Earth
We are grateful to the Forestry Bureau of Lianyuan, Longhui, Shaodong, Xinhua, and Xinshao counties of Hunan for providing the sampling sites. ...
Climate changes and anthropogenic driving forces (land overuse) are responsible for the development of aeolian/sandy desertification (Wang et al., 2013a; Wang et al., 2013b) which can cause dust storms ...
The permissions for sampling locations were approved by the forestry bureaus of Lianyuan, Longhui, Shaodong, Xinhua, and Xinshao counties. ...
doi:10.5194/se-6-515-2015
fatcat:wokat7itmzew7p4rlulrc3qwre
Evaluation of soil fertility in the succession of karst rocky desertification using principal component analysis
2014
Solid Earth Discussions
We are grateful to the Forestry Bureau of Lianyuan, Longhui, Shaodong, Xinhua, and Xinshao counties of Hunan for providing the sampling sites. ...
Climate changes and anthropogenic driving forces (land overuse) are responsible for the development of aeolian/sandy desertification (Wang et al., 2013a; Wang et al., 2013b) which can cause dust storms ...
The permissions for sampling locations were approved by the forestry bureaus of Lianyuan, Longhui, Shaodong, Xinhua, and Xinshao counties. ...
doi:10.5194/sed-6-3333-2014
fatcat:7vovrwxgvne6lgiugmwawue5w4
An assessment for health education and health promotion in chronic disease demonstration districts: a comparative study from Hunan Province, China
2019
PeerJ
The RSR method is a comprehensive evaluation tool for multi-indicators with the advantages of having no data type restrictions or bias of abnormal values (Wang et al., 2015b; Sun & Xu, 2014; Wang et al ...
There were four non-NCD demonstration districts: Anhua County (G), Xinhua County (H), Xinshao County (I), and Jishou County (J). ...
doi:10.7717/peerj.6579
pmid:30867995
pmcid:PMC6409084
fatcat:t25hsgnfj5djbls3rr4aq6ubmi
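The RSR (rank-sum ratio) method the snippet mentions ranks every unit (e.g. district) on each indicator and normalises the rank sum. A minimal sketch under the basic formulation RSR_i = Σ_j R_ij / (m·n); tie handling is omitted, and the function name is illustrative:

```python
import numpy as np

def rsr(scores):
    """Rank-Sum Ratio for multi-indicator evaluation.
    scores: (n units, m indicators); higher score -> higher rank.
    Ranks each column 1..n, then RSR_i = sum of ranks / (m * n),
    so RSR lies in (0, 1] and is free of the indicators' units."""
    n, m = scores.shape
    # double argsort yields ranks 1..n per column (no tie correction)
    ranks = np.argsort(np.argsort(scores, axis=0), axis=0) + 1
    return ranks.sum(axis=1) / (m * n)

districts = np.array([[62.0, 0.71],
                      [85.0, 0.90],
                      [74.0, 0.80]])   # two hypothetical indicators
print(rsr(districts))                  # best district has the largest RSR
```

Because only ranks enter the score, RSR is insensitive to outlying raw values and mixed data types, which is the advantage the cited study relies on.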
Showing results 1–15 of 29