27,646 Hits in 11.1 sec

Constructing Multiple Tasks for Augmentation: Improving Neural Image Classification With K-means Features [article]

Tao Gui, Lizhi Qing, Qi Zhang, Jiacheng Ye, Hang Yan, Zichu Fei, Xuanjing Huang
2019 arXiv   pre-print
However, constructing multiple related tasks is difficult, and sometimes only a single task is available for training in a dataset.  ...  Multi-task learning (MTL) has received considerable attention, and numerous deep learning applications benefit from MTL with multiple objectives.  ...  Acknowledgments The authors wish to thank the anonymous reviewers for their helpful comments. This  ... 
arXiv:1911.07518v1 fatcat:5tdz6wotyrh6tcuqjeqjuhcura

Constructing Multiple Tasks for Augmentation: Improving Neural Image Classification with K-Means Features

Tao Gui, Lizhi Qing, Qi Zhang, Jiacheng Ye, Hang Yan, Zichu Fei, Xuanjing Huang
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
However, constructing multiple related tasks is difficult, and sometimes only a single task is available for training in a dataset.  ...  Multi-task learning (MTL) has received considerable attention, and numerous deep learning applications benefit from MTL with multiple objectives.  ...  Acknowledgments The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by China National Key R&D Program  ... 
doi:10.1609/aaai.v34i07.6719 fatcat:6v72hy7rpbepdkymm7t7rxxpty

Few-Shot Charge Prediction with Data Augmentation and Feature Augmentation

Peipeng Wang, Xiuguo Zhang, Zhiying Cao
2021 Applied Sciences  
Therefore, we propose a model with data augmentation and feature augmentation for few-shot charge prediction.  ...  Then, the charge information heterogeneous graph is introduced, and a novel graph convolutional network is designed to extract distinguishability features for feature augmentation.  ...  Therefore, we propose a novel model with data augmentation and feature augmentation for few-shot charge prediction.  ... 
doi:10.3390/app112210811 fatcat:j7h3udh2zjfy3ojhpchh7t2bbi

Unsupervised feature learning by augmenting single images [article]

Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox
2014 arXiv   pre-print
Finally we train a convolutional neural network to discriminate between these surrogate classes. The feature representation learned by the network can then be used in various vision tasks.  ...  In this paper we investigate if it is possible to use data augmentation as the main component of an unsupervised feature learning architecture.  ...  While deep convolutional neural networks have been known to yield good results on supervised image classification tasks such as MNIST for a long time [18], the recent successes are made possible through  ... 
arXiv:1312.5242v3 fatcat:v47c7tutjncbbeqlmhru3li55i

Memory Augmented Matching Networks for Few-Shot Learnings

Kien Tran, Hiroshi Sato, Masao Kubo (Department of Computer Science, National Defense Academy of Japan)
2019 International Journal of Machine Learning and Computing  
Index Terms-Few shot learning, matching network, memory augmented neural network, prototypical network.  ...  In our research, we propose a metric learning method for few-shot learning tasks by taking advantage of NTMs and Matching Network to improve few-shot learning task's learning accuracy on both Omniglot  ...  Meta Learning with Memory Augmented Neural Network for One-Shot Learning Task Recent works have suggested the Memory Augmented Neural Network (MANN) for one-shot learning tasks via meta-learning approach  ... 
doi:10.18178/ijmlc.2019.9.6.867 fatcat:27wrqnpmorg6pmexvmmtoknllu

An overview of mixing augmentation methods and augmentation strategies [article]

Dominik Lewy, Jacek Mańdziuk
2022 arXiv   pre-print
Deep Convolutional Neural Networks have made an incredible progress in many Computer Vision tasks.  ...  This survey focuses on two DA research streams: image mixing and automated selection of augmentation strategies.  ...  last task usually in combination with Recurrent Neural Networks).  ... 
arXiv:2107.09887v2 fatcat:isue7dmwxzdihgwiq2efj3k3qm

Augmentation Pathways Network for Visual Recognition [article]

Yalong Bai, Mohan Zhou, Yuxiang Chen, Wei Zhang, Bowen Zhou, Tao Mei
2021 arXiv   pre-print
Unlike the traditional single pathway, augmented images are processed in different neural paths. The main pathway handles light augmentations, while other pathways focus on heavy augmentations.  ...  By interacting with multiple paths in a dependent manner, the backbone network robustly learns from shared visual patterns among augmentations, and suppresses noisy patterns at the same time.  ...  For example, the shared feature learned from Blur(k = 5) can benefit the recognition of images with Blur(k < 5).  ... 
arXiv:2107.11990v1 fatcat:p4gvpdhi6rh7jcdhs5dtt7mysq

Hypothesis-driven Online Video Stream Learning with Augmented Memory [article]

Mengmi Zhang, Rohil Badkundri, Morgan B. Talbot, Rushikesh Zawar, Gabriel Kreiman
2021 arXiv   pre-print
Second, hypotheses in the augmented memory can be re-used for learning new tasks, improving generalization and transfer learning ability.  ...  image features to avoid catastrophic forgetting.  ...  We tried initializing the memory bank with normalized k-means clustered centers trained on CIFAR100 [24] but there were no improvements.  ... 
arXiv:2104.02206v4 fatcat:2yi3jeo32ndnno2cs6m7tr76ka

Automatic Dataset Augmentation [article]

Yalong Bai, Kuiyuan Yang, Tao Mei, Wei-Ying Ma, Tiejun Zhao
2018 arXiv   pre-print
Large scale image datasets and deep convolutional neural networks (DCNNs) are two primary driving forces for the rapid progress made in generic object recognition tasks in recent years.  ...  Experiments show our method can automatically scale up existing datasets significantly from billions of web pages with high accuracy, and significantly improve the performance on object recognition tasks  ...  With ImageNet, DCNN first proves its success and improves most object recognition tasks by the learned feature extractors [14].  ... 
arXiv:1708.08201v2 fatcat:lf6lkvlgmnenph5rd57qvbpj6u

Deep Multi-Task Augmented Feature Learning via Hierarchical Graph Neural Network [article]

Pengxin Guo, Chang Deng, Linjie Xu, Xiaonan Huang, Yu Zhang
2020 arXiv   pre-print
In this paper, we propose a Hierarchical Graph Neural Network (HGNN) to learn augmented features for deep multi-task learning. The HGNN consists of two-level graph neural networks.  ...  Moreover, for classification tasks, an inter-class graph neural network is introduced to conduct similar operations on a finer granularity, i.e., the class level, to generate class embeddings for each  ...  Finally the task embeddings are used to augment the feature representation of the data to improve the learning performance. For classification tasks, we can learn augmented features  ... 
arXiv:2002.04813v1 fatcat:tajh3bxyibec5cp2g7ohzdconu

TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning [article]

Sung Whan Yoon, Jun Seo, Jaekyun Moon
2019 arXiv   pre-print
We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning.  ...  At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning.  ...  Acknowledgements This work is supported in part by the ICT R&D program of Institute for Information & Communications Technology  ... 
arXiv:1905.06549v2 fatcat:2ucimrdmurgizavjvqu2j2ynwy

Data augmentation and image understanding [article]

Alex Hernandez-Garcia
2020 arXiv   pre-print
A central subject of this dissertation is data augmentation, a commonly used technique for training artificial neural networks to augment the size of data sets through transformations of the images.  ...  Throughout this dissertation, I use these insights to analyse data augmentation as a particularly useful inductive bias, a more effective regularisation method for artificial neural networks, and as the  ...  deep neural network model pre-trained for large scale, image object recognition tasks, with additional features optimised for salience prediction.  ... 
arXiv:2012.14185v1 fatcat:qcip4vstzvbxzo4qevek5marrm

Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases [article]

Senthil Purushwalkam, Abhinav Gupta
2020 arXiv   pre-print
Somewhat mysteriously, the recent gains in performance come from training instance classification models, treating each image and its augmented versions as samples of a single class.  ...  Self-supervised representation learning approaches have recently surpassed their supervised learning counterparts on downstream tasks like object detection and image classification.  ...  The Top-K hidden units are chosen for each class separately and the mean task-dependent invariance score is computed.  ... 
arXiv:2007.13916v2 fatcat:hx5oblvkyvdr7gkonykndxhmxe

Text Data Augmentation for Deep Learning

Connor Shorten, Taghi M. Khoshgoftaar, Borko Furht
2021 Journal of Big Data  
with transfer and multi-task learning, and ideas for AI-GAs (AI-Generating Algorithms).  ...  We highlight studies that cover how augmentations can construct test sets for generalization. NLP is at an early stage in applying Data Augmentation compared to Computer Vision.  ...  In our previous survey of Image Data Augmentation, we explored works that use Neural Style Transfer for augmentation.  ... 
doi:10.1186/s40537-021-00492-0 fatcat:bcbaqkpicnd6dcwc34pdijosby

Sill-Net: Feature Augmentation with Separated Illumination Representation [article]

Haipeng Zhang, Zhong Cao, Ziang Yan, Changshui Zhang
2021 arXiv   pre-print
Sill-Net learns to separate illumination features from images, and then during training we augment training samples with these separated illumination features in the feature space.  ...  For visual object recognition tasks, the illumination variations can cause distinct changes in object appearance and thus confuse the deep neural network based recognition models.  ...  For instance, in conventional classification tasks, we use the real training images as support samples; in one-shot classification tasks, we construct the support set with template images (i.e., graphic  ... 
arXiv:2102.03539v2 fatcat:doseqoy3irbh7kwka7nxe7sofq
Showing results 1 — 15 out of 27,646 results