19,364 Hits in 7.1 sec

Can Visual Recognition Benefit from Auxiliary Information in Training? [chapter]

Qilin Zhang, Gang Hua, Wei Liu, Zicheng Liu, Zhengyou Zhang
2015 Lecture Notes in Computer Science  
We examine an under-explored visual recognition problem, where we have a main view along with an auxiliary view of visual information present in the training data, but merely the main view is available  ...  The efficacy of our proposed auxiliary learning approach is demonstrated through three challenging visual recognition tasks with different kinds of auxiliary information.  ...  Then a question naturally emerges: can visual recognition on the main view benefit from such auxiliary information that only exists in the training data?  ... 
doi:10.1007/978-3-319-16865-4_5 fatcat:wfci3gergnaa7etsaiijknq5qm
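
A minimal sketch (PyTorch; all module and variable names are hypothetical) of the training/testing asymmetry this entry describes: both the main and auxiliary views are available for training, but only the main view is consumed at test time. The auxiliary view enters here through a simple regression head that is discarded at inference; the paper's actual method is CCA-based, so this illustrates the setting rather than their algorithm, and the loss weight alpha is an arbitrary choice.

```python
import torch
import torch.nn as nn

class MainViewClassifier(nn.Module):
    """Classifier over the main view, with an auxiliary head used only during training."""
    def __init__(self, main_dim, aux_dim, num_classes, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(main_dim, hidden), nn.ReLU())
        self.cls_head = nn.Linear(hidden, num_classes)  # used at train and test time
        self.aux_head = nn.Linear(hidden, aux_dim)      # used at train time only

    def forward(self, x_main):
        z = self.encoder(x_main)
        return self.cls_head(z), self.aux_head(z)

def training_step(model, x_main, x_aux, y, alpha=0.1):
    """Supervised loss on the main view plus regression onto the auxiliary view."""
    logits, aux_pred = model(x_main)
    return nn.functional.cross_entropy(logits, y) + \
           alpha * nn.functional.mse_loss(aux_pred, x_aux)

@torch.no_grad()
def predict(model, x_main):
    """Test phase: the auxiliary view is entirely missing, only the main view is used."""
    logits, _ = model(x_main)
    return logits.argmax(dim=1)
```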

Auxiliary Training Information Assisted Visual Recognition

Qilin Zhang, Gang Hua, Wei Liu, Zicheng Liu, Zhengyou Zhang
2015 IPSJ Transactions on Computer Vision and Applications  
The efficacy of our proposed auxiliary learning approach is demonstrated through four challenging visual recognition tasks with different kinds of auxiliary information.  ...  To effectively leverage the auxiliary information to train a stronger classifier, we propose a collaborative auxiliary learning framework based on a new discriminative canonical correlation analysis.  ...  We have verified that information from the auxiliary view in the training data can indeed lead to better recognition in the test phase even when the auxiliary view is entirely missing.  ... 
doi:10.2197/ipsjtcva.7.138 fatcat:6wzftkfairhsba7uyhsju4qq64
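
The snippet mentions a new discriminative canonical correlation analysis without giving its formulation, so the following NumPy sketch shows only textbook (non-discriminative) CCA between the two training views, as a reference point for the kind of shared subspace such a framework builds on; the regularisation constant and the dimensionality k are arbitrary.

```python
import numpy as np

def cca(X, Y, k, eps=1e-6):
    """Textbook CCA between a main view X (n x dx) and an auxiliary view Y (n x dy).
    Returns projection matrices Wx, Wy and the top-k canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / (n - 1) + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / (n - 1) + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    Wx = Kx @ U[:, :k]       # project the main view:      X @ Wx
    Wy = Ky @ Vt[:k].T       # project the auxiliary view: Y @ Wy
    return Wx, Wy, s[:k]
```

At test time only the main-view projection X @ Wx can be computed, since the auxiliary view is missing; the collaborative framework in the paper builds further machinery on top of such a shared subspace.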

EmoBed: Strengthening Monomodal Emotion Recognition via Training with Crossmodal Emotion Embeddings

Jing Han, Zixing Zhang, Zhao Ren, Björn Schuller
2019 Zenodo  
This paper was published in IEEE Transactions on Affective Computing.  ...  In contrast to these works, we exploit the hidden correlation of multiple modalities in an implicit fusion manner, and thus it later can be implemented in a more flexible setting, as information from auxiliary  ...  In multi-task learning, during the training phase, an auxiliary task benefits the main task by updating the parameters in the shared frontend feature-learning network.  ... 
doi:10.5281/zenodo.3661155 fatcat:7tpenwbfqnho3cypcjsmdxul7m
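
As a rough illustration of the implicit-fusion idea in the snippet (a shared front-end updated by an auxiliary modality during training), here is a hypothetical PyTorch sketch in which an audio emotion classifier is trained jointly with a video branch and an embedding-alignment term; the feature sizes, loss weights, and shared emotion head are placeholders, not EmoBed's actual architecture or losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder feature sizes and architectures, not EmoBed's actual networks.
audio_enc = nn.Sequential(nn.Linear(88, 64), nn.ReLU(), nn.Linear(64, 32))
video_enc = nn.Sequential(nn.Linear(136, 64), nn.ReLU(), nn.Linear(64, 32))
emo_head  = nn.Linear(32, 4)   # shared emotion classifier over 4 hypothetical classes

def joint_loss(x_audio, x_video, y, beta=0.5):
    """Emotion losses on both branches plus a crossmodal embedding-alignment term.
    Only the audio branch is kept at test time, so the video branch plays the role
    of auxiliary training information."""
    za, zv = audio_enc(x_audio), video_enc(x_video)
    task = F.cross_entropy(emo_head(za), y) + F.cross_entropy(emo_head(zv), y)
    align = F.mse_loss(za, zv)   # implicit fusion: pull the two embeddings together
    return task + beta * align
```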

Dynamic Difficulty Awareness Training for Continuous Emotion Prediction

Zixing Zhang, Jing Han, Eduardo Coutinho, Bjorn W. Schuller
2018 IEEE Transactions on Multimedia  
In this paper, motivated by the benefit of difficulty awareness in a human learning procedure, we propose a novel machine learning framework, namely, Dynamic Difficulty Awareness Training (DDAT), which  ...  The obtained difficulty level is then used in tandem with original features to update the model input in a second learning stage with the expectation that the model can learn to focus on high difficulty  ...  Curriculum learning presents the data from easy to hard during the training process so that the model can better avoid being caught in local minima in the presence of non-convex training criteria.  ... 
doi:10.1109/tmm.2018.2871949 fatcat:ves5ogal3fgddeswwfzkqeju5i
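
A highly simplified sketch of the two-stage idea described in the snippet: a first model's per-sample error is taken as a difficulty level and appended to the original features for a second learning stage. It uses ground-truth targets to compute difficulty, which is only possible on training data; at test time the difficulty signal would itself have to be predicted, which is where the DDAT framework goes beyond this toy version. The ridge-regression models and the random data are placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical continuous-emotion data: features X and a real-valued target y.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 20)), rng.normal(size=500)

# Stage 1: fit a first model and take its per-sample absolute error as a difficulty level.
stage1 = Ridge(alpha=1.0).fit(X, y)
difficulty = np.abs(stage1.predict(X) - y)

# Stage 2: use the difficulty level in tandem with the original features, so the second
# model can learn to focus on the high-difficulty regions of the input space.
X2 = np.hstack([X, difficulty[:, None]])
stage2 = Ridge(alpha=1.0).fit(X2, y)
```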

Reconstructing Training Data from Diverse ML Models by Ensemble Inversion [article]

Qian Wang, Daniel Kurz
2021 arXiv   pre-print
Model Inversion (MI), in which an adversary abuses access to a trained Machine Learning (ML) model attempting to infer sensitive information about its original training data, has attracted increasing research  ...  We achieve high quality results without any dataset and show how utilizing an auxiliary dataset that's similar to the presumed training data improves the results.  ...  We believe that existing model inversion approaches can also benefit from integrating the techniques proposed in this paper.  ... 
arXiv:2111.03702v1 fatcat:x6m7us5xlja2lgz77bxsfxbppi
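
For context, a toy single-model inversion loop in PyTorch: an input is optimised by gradient descent so that a trained classifier assigns it to a chosen class. Ensemble inversion as proposed in the paper additionally aggregates this signal over several diverse models and uses a generator plus image priors, all of which are omitted here; `model`, the step count, and the weak pixel prior are assumptions.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target_class, steps=500, lr=0.05, img_shape=(1, 3, 32, 32)):
    """Optimise an input so that a trained classifier assigns it to `target_class`."""
    model.eval()
    x = torch.randn(img_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), torch.tensor([target_class]))
        loss = loss + 1e-4 * x.pow(2).mean()   # weak prior keeping pixel values small
        loss.backward()
        opt.step()
    return x.detach()
```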

Pareto Self-Supervised Training for Few-Shot Learning [article]

Zhengyu Chen, Jixie Ge, Heshen Zhan, Siteng Huang, Donglin Wang
2021 arXiv   pre-print
Previous works benefit from sharing inductive bias between the main task (FSL) and auxiliary tasks (SSL), where the shared parameters of tasks are optimized by minimizing a linear combination of task losses  ...  PSST explicitly decomposes the few-shot auxiliary problem into multiple constrained multi-objective subproblems with different trade-off preferences, and here a preference region in which the main task  ...  To encourage that the few-shot task benefits from auxiliary tasks, some parameters are shared across tasks to inductive knowledge transfer.  ... 
arXiv:2104.07841v2 fatcat:gywa2f3ikvf6fluxbdhjjjcoea
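
The baseline criticised in the snippet is a linear combination of the few-shot (main) and self-supervised (auxiliary) losses; a minimal sketch of that scalarised objective and of a sweep over trade-off preferences follows. PSST itself solves constrained multi-objective subproblems rather than a single weighted sum, so this only fixes ideas, and the preference values are arbitrary.

```python
import torch

def combined_loss(loss_main, loss_aux, preference=(0.8, 0.2)):
    """Linear scalarisation of the few-shot (main) and self-supervised (auxiliary) losses."""
    w_main, w_aux = preference
    return w_main * loss_main + w_aux * loss_aux

# A hypothetical sweep over trade-off preferences that favour the main task; PSST would
# instead solve one constrained subproblem per preference vector.
preferences = [(0.9, 0.1), (0.8, 0.2), (0.7, 0.3)]
```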

What Makes Training Multi-Modal Classification Networks Hard? [article]

Weiyao Wang, Du Tran, Matt Feiszli
2020 arXiv   pre-print
Consider end-to-end training of a multi-modal vs. a single-modal network on a task with multiple input modalities: the multi-modal network receives more information, so it should match or outperform its  ...  action recognition, and acoustic event detection.  ...  And only with G-Blend, it benefits from both visual and audio signals, performing better than both.  ... 
arXiv:1905.12681v5 fatcat:3mfnumdwkrdtlnkqfpuoenz52y
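
A hypothetical PyTorch sketch of the kind of joint objective the abstract alludes to: per-modality heads plus a fused head, trained with a weighted sum of their losses. In G-Blend the weights are estimated from each branch's overfitting-versus-generalisation behaviour; here they are fixed placeholders, and the layer sizes are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamNet(nn.Module):
    """Visual and audio streams with per-modality heads and a fused head (sizes invented)."""
    def __init__(self, dv=512, da=128, hidden=256, num_classes=10):
        super().__init__()
        self.visual = nn.Sequential(nn.Linear(dv, hidden), nn.ReLU())
        self.audio  = nn.Sequential(nn.Linear(da, hidden), nn.ReLU())
        self.head_v = nn.Linear(hidden, num_classes)
        self.head_a = nn.Linear(hidden, num_classes)
        self.head_f = nn.Linear(2 * hidden, num_classes)

    def forward(self, xv, xa):
        zv, za = self.visual(xv), self.audio(xa)
        return self.head_v(zv), self.head_a(za), self.head_f(torch.cat([zv, za], dim=1))

def blended_loss(net, xv, xa, y, weights=(0.3, 0.2, 0.5)):
    """Weighted sum of the visual, audio and fused losses (weights are placeholders here;
    G-Blend estimates them from each branch's overfitting behaviour)."""
    lv, la, lf = (F.cross_entropy(o, y) for o in net(xv, xa))
    wv, wa, wf = weights
    return wv * lv + wa * la + wf * lf
```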

Category-orthogonal object features guide information processing in recurrent neural networks trained for object categorization [article]

Sushrut Thorat, Giacomo Aldegheri, Tim C. Kietzmann
2022 arXiv   pre-print
Using diagnostic linear readouts, we find that: (a) information about auxiliary variables increases across time in all network layers, (b) this information is indeed present in the recurrent information  ...  Recurrent neural networks (RNNs) have been shown to perform better than feedforward architectures in visual object categorization tasks, especially in challenging conditions such as cluttered images.  ...  In the primate visual cortex, recurrence is believed to underlie computations that can benefit from contextual signals, such as assigning local features to a figure or the background based on global shape  ... 
arXiv:2111.07898v2 fatcat:ky2ogo342vfzledtqsk65shvwe
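
The diagnostic linear readouts mentioned in the snippet can be sketched as independent linear probes fitted on the RNN's hidden state at each timestep; decoding accuracy over timesteps then tracks how information about an auxiliary variable builds up. The sketch below (scikit-learn, hypothetical array shapes) omits the cross-validation one would use in practice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def diagnostic_readout(hidden_states, aux_labels):
    """hidden_states: (n_samples, n_timesteps, n_units) activations from a trained RNN;
    aux_labels: one auxiliary variable per sample (e.g. a binned object location).
    Fits one linear probe per timestep and returns its decoding accuracy, tracking how
    auxiliary information develops across time."""
    accs = []
    for t in range(hidden_states.shape[1]):
        probe = LogisticRegression(max_iter=1000).fit(hidden_states[:, t, :], aux_labels)
        accs.append(probe.score(hidden_states[:, t, :], aux_labels))
    return np.array(accs)
```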

Training Hierarchical Feed-Forward Visual Recognition Models Using Transfer Learning from Pseudo-Tasks [chapter]

Amr Ahmed, Kai Yu, Wei Xu, Yihong Gong, Eric Xing
2008 Lecture Notes in Computer Science  
In this paper, we present a framework for training hierarchical feed-forward models for visual recognition, using transfer learning from pseudo tasks.  ...  In addition to being extremely simple to implement, and adaptable across different domains with little or no extra tuning, our approach achieves promising results on challenging visual recognition tasks  ...  We note that the framework can generally benefit from all kinds of pseudo task constructions that comply with our prior knowledge for the recognition task at hand.  ... 
doi:10.1007/978-3-540-88690-7_6 fatcat:ljjwtpdi6rfupa3qxnw7laxo7y
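
One way to picture transfer learning from pseudo-tasks is an auxiliary regression head on the shared features whose targets are cheap, automatically generated signals. In the sketch below the pseudo-targets are pooled responses of fixed random filters; the paper constructs its pseudo tasks differently, and the tiny backbone is a placeholder, so treat this purely as an illustration of the shared-backbone-plus-pseudo-head pattern.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Fixed random filters whose pooled responses serve as pseudo-task regression targets.
random_filters = torch.randn(8, 3, 7, 7)

def pseudo_targets(images):
    """Global-average-pooled responses of the fixed random filters (an 8-d target per image)."""
    with torch.no_grad():
        return F.conv2d(images, random_filters, stride=4).mean(dim=(2, 3))

class SharedBackbone(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.main_head = nn.Linear(16, num_classes)   # the actual recognition task
        self.pseudo_head = nn.Linear(16, 8)           # auxiliary pseudo-task regression

    def loss(self, images, labels, gamma=0.1):
        z = self.features(images)
        return F.cross_entropy(self.main_head(z), labels) + \
               gamma * F.mse_loss(self.pseudo_head(z), pseudo_targets(images))
```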

Modularity Optimization as a Training Criterion for Graph Neural Networks [chapter]

Tsuyoshi Murata, Naveed Afzal
2018 Complex Networks IX  
We incorporate the objectives in two ways, through an explicit regularization term in the cost function in the output layer and as an additional loss term computed via an auxiliary layer.  ...  Such layers only consider attribute information of node neighbors in the forward model and do not incorporate knowledge of global network structure in the learning task.  ...  Acknowledgement This work was supported by Tokyo Tech -Fuji Xerox Cooperative Research (Project Code KY260195), JSPS Grant-in-Aid for Scientific Research(B) (Grant Number 17H01785) and JST CREST (Grant  ... 
doi:10.1007/978-3-319-73198-8_11 fatcat:avi7smb6xjcrbeetpusgmplmxq
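
A sketch of how a soft modularity term can be added to a GNN training criterion, in the spirit of the regularisation variant described in the snippet: with adjacency A, degree vector d, 2m the sum of degrees and soft assignments S, modularity is Q = trace(S^T B S) / 2m with B = A - d d^T / 2m, and -Q is added to the supervised loss. The weighting constant and the use of class probabilities as assignments are assumptions.

```python
import torch

def modularity_loss(adj, soft_assign):
    """Negative soft modularity as an auxiliary loss term.
    adj: dense adjacency matrix (n x n); soft_assign: community/class probabilities (n x k)."""
    deg = adj.sum(dim=1)
    two_m = deg.sum()                               # 2m, twice the total edge weight
    B = adj - torch.outer(deg, deg) / two_m         # modularity matrix
    Q = torch.trace(soft_assign.t() @ B @ soft_assign) / two_m
    return -Q

# total_loss = supervised_loss + lam * modularity_loss(adj, logits.softmax(dim=1))
```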

Auxiliary Training: Towards Accurate and Robust Models

Linfeng Zhang, Muzhou Yu, Tong Chen, Zuoqiang Shi, Chenglong Bao, Kaisheng Ma
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
In the training stage, a novel distillation method named input-aware self distillation is proposed to facilitate the primary classifier to learn the robust information from auxiliary classifiers.  ...  Extensive experiments on CIFAR10, CIFAR100 and ImageNet show that noticeable improvements on both accuracy and robustness can be observed by the proposed auxiliary training.  ...  more benefits of robustness information from the auxiliary classifiers.  ... 
doi:10.1109/cvpr42600.2020.00045 dblp:conf/cvpr/ZhangYCSBM20 fatcat:gmn4363535brjab2nwn5yoojw4
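
The distillation direction described in the snippet, with the primary classifier learning from auxiliary classifiers, can be sketched as a temperature-softened KL term; the paper's input-aware weighting of the auxiliary classifiers and their construction from transformed inputs are omitted here, and the temperature is an arbitrary choice.

```python
import torch
import torch.nn.functional as F

def distillation_term(primary_logits, aux_logits_list, T=3.0):
    """Temperature-softened KL divergence pulling the primary classifier's predictions
    toward those of the auxiliary classifiers (which are not updated by this term)."""
    log_p = F.log_softmax(primary_logits / T, dim=1)
    loss = 0.0
    for aux_logits in aux_logits_list:
        q = F.softmax(aux_logits.detach() / T, dim=1)
        loss = loss + F.kl_div(log_p, q, reduction="batchmean") * (T * T)
    return loss / len(aux_logits_list)
```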

Learning by expansion: Exploiting social media for image classification with few training examples

Sheng-Yuan Wang, Wei-Shing Liao, Liang-Chi Hsieh, Yan-Ying Chen, Winston H. Hsu
2012 Neurocomputing  
We propose an image expansion framework to mine more semantically related training images from the auxiliary image collection provided with very few training examples.  ...  The expansion is based on a semantic graph considering both visual and (noisy) textual similarities in the auxiliary image collections, where we also consider scalability issues (e.g., MapReduce) as constructing  ...  Lacking textual information in the training images, we correlate them to visually similar images from the social media for further graph-based expansion.  ... 
doi:10.1016/j.neucom.2011.05.043 fatcat:uyv5apblujasnkk36cegz2faky
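
A rough sketch of graph-based expansion from a handful of seed training images: visual and textual similarities over the auxiliary collection are mixed into one affinity matrix, and a personalised-PageRank-style walk from the seeds ranks candidate images to add as extra training examples. The equal mixing weights, the walk parameters, and the use of PageRank rather than the paper's exact expansion scheme are all assumptions.

```python
import numpy as np

def expand_seeds(sim_visual, sim_text, seed_idx, alpha=0.85, iters=50, top_k=100):
    """Rank images of the auxiliary collection for expansion, starting from a few seeds.
    sim_visual, sim_text: (n x n) non-negative similarity matrices over the collection."""
    W = 0.5 * sim_visual + 0.5 * sim_text            # combined semantic affinity matrix
    P = W / W.sum(axis=1, keepdims=True)             # row-stochastic transition matrix
    r = np.zeros(W.shape[0])
    r[seed_idx] = 1.0 / len(seed_idx)                # restart on the seed training images
    scores = r.copy()
    for _ in range(iters):
        scores = (1 - alpha) * r + alpha * P.T @ scores
    seeds = set(seed_idx)
    return [int(i) for i in np.argsort(-scores) if i not in seeds][:top_k]
```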

Video versus lecture: effective alternatives for orthodontic auxiliary training

M S Chen, E N Horrocks, R D Evans
1998 British Journal of Orthodontics  
A meta-analysis of outcome studies of visual-based instruction, Educational Communication and Technology, 29, 26-36.  ...  where visual effects can be maximized.  ...  In recognition of the need to rationalise resources, it was recommended that in Britain the orthodontic working team should be expanded to include orthodontic auxiliaries (Nuffield Foundation, 1993) .  ... 
doi:10.1093/ortho/25.3.191 pmid:9800017 fatcat:vqvbrkg36bgvvd632z7m2zsf2q

IoT Networks Assisted Baduanjin Auxiliary Training System Based on Depth Camera

Pengfei Wan, Yongqiang Liu, Hongjie Gao, Muhammad Arif
2022 Security and Communication Networks  
than that of Kinect, motion intervention, and behavior recognition auxiliary training system, respectively.  ...  Baduanjin auxiliary training system has insufficient feature definition for similar actions, which affects the recognition accuracy of effective action data.  ...  to measure the practicability of the designed system for Baduanjin auxiliary training. The auxiliary training systems based on Kinect, motion intervention, and behavior recognition in the literature are  ... 
doi:10.1155/2022/4429898 fatcat:ebvjvmwgrrdhviehxkrondhs4y

Episodic Training for Domain Generalization

Da Li, Jianshu Zhang, Yongxin Yang, Cong Liu, Yi-Zhe Song, Timothy Hospedales
2019 2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
This shows that DG training can benefit standard practice in computer vision.  ...  In this paper we build on this strong baseline by designing an episodic training procedure that trains a single deep network in a way that exposes it to the domain shift that characterises a novel domain  ...  Our approach benefits from end-to-end learning, while being model agnostic (architecture independent), and simple and fast to train; in contrast to most existing DG techniques that rely on non-standard  ... 
doi:10.1109/iccv.2019.00153 dblp:conf/iccv/LiZYLSH19 fatcat:a4fhygn7cjagrfybv4pboqguwy
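
The core episode can be pictured as routing features from one source domain through a classifier trained on a different domain and kept frozen, so the feature extractor is penalised unless its representations survive the domain shift; a minimal PyTorch sketch is below. The full method also keeps an ordinary aggregated supervised loss and a mirror-image episode for the classifier, which are not shown.

```python
import torch
import torch.nn.functional as F

def episodic_step(feat_i, clf_j, x_i, y_i):
    """Features from domain i are classified by a frozen classifier trained on domain j,
    so feat_i is updated to produce representations that survive the domain shift."""
    for p in clf_j.parameters():
        p.requires_grad_(False)
    return F.cross_entropy(clf_j(feat_i(x_i)), y_i)
```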
Showing results 1 — 15 out of 19,364 results