
Learning Augmentation Network via Influence Functions

Donghoon Lee, Hyunsin Park, Trung Pham, Chang D. Yoo
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
By backpropagating the influence to the augmentation network, the augmentation network parameters are learned.  ...  Based on this function, a differentiable augmentation network is learned to augment an input training sample to reduce validation loss.  ...  The differentiable augmentation network is learned to generate augmented samples that maximize the influence function, thereby minimizing the validation loss.  ... 
doi:10.1109/cvpr42600.2020.01097 dblp:conf/cvpr/LeePPY20 fatcat:n656x6j6qjb4tavmylphwsbajy

Social Influence Prediction with Train and Test Time Augmentation for Graph Neural Networks

Hongbo Bo, Ryan McConville, Jun Hong, Weiru Liu
2021 International Joint Conference on Neural Networks (IJCNN)
the effectiveness of train- and test-time augmentation on graph neural networks for social influence prediction.  ...  Data augmentation has been widely used in machine learning for natural language processing and computer vision tasks to improve model performance.  ...  Representation Learning AugInf will primarily use a GAE to learn a representation of the graph via a transformation of the graph structure into a low-dimensional latent space.  ...
doi:10.1109/ijcnn52387.2021.9533437 fatcat:twwp4yjn35btra6er2iogzpwji

Social Influence Prediction with Train and Test Time Augmentation for Graph Neural Networks [article]

Hongbo Bo, Ryan McConville, Jun Hong, Weiru Liu
2021 arXiv   pre-print
the effectiveness of train- and test-time augmentation on graph neural networks for social influence prediction.  ...  Data augmentation has been widely used in machine learning for natural language processing and computer vision tasks to improve model performance.  ...  Representation Learning AugInf will primarily use a GAE to learn a representation of the graph via a transformation of the graph structure into a low-dimensional latent space.  ... 
arXiv:2104.11641v1 fatcat:uojnwjdahvcnhakb2widuc27ka

NeuralSim: Augmenting Differentiable Simulators with Neural Networks [article]

Eric Heiden, David Millard, Erwin Coumans, Yizhou Sheng, Gaurav S. Sukhatme
2021 arXiv   pre-print
In this work, we study the augmentation of a novel differentiable rigid-body physics engine via neural networks that is able to learn nonlinear relationships between dynamic quantities  ...  and present an approach for automatically discovering useful augmentations.  ...  At the same time, the resulting neural network augmentation converges to weights that clearly show that the joint velocity q̇_i influences the residual joint force τ_i that explains the observed effects  ...
arXiv:2011.04217v2 fatcat:awwqb4fzqzb6piwuxa3zim6xnm

LADA: Look-Ahead Data Acquisition via Augmentation for Active Learning [article]

Yoon-Yeong Kim, Kyungwoo Song, JoonHo Jang, Il-Chul Moon
2020 arXiv   pre-print
Hence, this paper proposes Look-Ahead Data Acquisition via augmentation, or LADA, to integrate data acquisition and data augmentation.  ...  Besides active learning, data augmentation is also an effective technique to enlarge the limited amount of labeled instances.  ...  Bayesian Generative Active Deep Learning (BGADL) integrates acquisition and augmentation by selecting data instances via f_acq, then augmenting the selected instances via f_aug, which is a VAE  ...
arXiv:2011.04194v3 fatcat:csx6adug6vauzggvxrtaelnfty

Augment & Valuate: A Data Enhancement Pipeline for Data-Centric AI [article]

Youngjune Lee, Oh Joon Kwon, Haeju Lee, Joonyoung Kim, Kangwook Lee, Kee-Eung Kim
2021 arXiv   pre-print
This pipeline contains data valuation, cleansing, and augmentation.  ...  Data scarcity and noise are important issues in industrial applications of machine learning.  ...  Understanding black-box predictions via influence functions. In International Conference on Machine Learning, pages 1885–1894. PMLR, 2017.  ... 
arXiv:2112.03837v1 fatcat:wxku3ppvzzfflhichdy2hoe76q

LADA: Look-Ahead Data Acquisition via Augmentation for Deep Active Learning

Yoon-Yeong Kim, Kyungwoo Song, JoonHo Jang, Il-Chul Moon
2021 Neural Information Processing Systems  
One possible approach is a pipelined combination, which selects informative instances via the acquisition function and generates virtual instances from the selected instances via augmentation.  ...  The scarcity of labeled datasets leads us to consider the integration of data augmentation and active learning.  ...  Data Augmentation Policy Learning InfoMixup is an adaptive version of Mixup to train the data augmentation via the feedback from an acquisition function.  ...
dblp:conf/nips/KimSJM21 fatcat:lil723efn5b7rbcs2fktl7uhhm

Combating Noisy Labels in Long-Tailed Image Classification [article]

Chaowei Fang, Lechao Cheng, Huiyan Qi, Dingwen Zhang
2022 arXiv   pre-print
To deal with this problem, we propose a new learning paradigm based on matching between inferences on weak and strong data augmentations to screen out noisy samples and introduce a leave-noise-out regularization  ...  Existing noise-robust learning methods cannot work in this scenario as it is challenging to differentiate noisy samples from clean samples of tail classes.  ...  Instead of using two networks [17] , we construct only one network with the help of the dual-branch batch normalization, which can avoid the influence of strongly augmented images on feature distribution  ...
arXiv:2209.00273v1 fatcat:7uy7gtnvw5atnjv6bvjjnrvnau

Strongly Augmented Contrastive Clustering [article]

Xiaozhi Deng, Dong Huang, Ding-Hua Chen, Chang-Dong Wang, Jian-Huang Lai
2022 arXiv   pre-print
Deep clustering has attracted increasing attention in recent years due to its capability of joint representation learning and clustering via deep neural networks.  ...  Particularly, we utilize a backbone network with triply-shared weights, where a strongly augmented view and two weakly augmented views are incorporated.  ...  finding the K nearest neighbors, and then trains the network via a loss function that aims to pull each sample and its K nearest neighbors closer.  ... 
arXiv:2206.00380v2 fatcat:4klq6zqc2vf2fa33axier5xsu4

Multiple shooting with neural differential equations [article]

Evren Mert Turan, Johannes Jäschke
2021 arXiv   pre-print
Constraints introduced by multiple shooting can be satisfied using a penalty or augmented Lagrangian method.  ...  Supervised learning of neural networks is typically performed by the unconstrained optimization of a proxy cost function φ which has the constraints as penalty terms: φ = C(x, θ) + ρQ(h(x, θ)) (8), where  ...  The neural network has one hidden layer of 16 nodes, tanh is used as the activation function in the input and hidden layers, and initial weights are set via Glorot initialization [11] .  ...
arXiv:2109.06786v1 fatcat:cf7dyqm5ivetrna7zuheihwh7a
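The snippet above quotes the paper's Eq. (8), a penalty form of the multiple-shooting constraints. As a minimal illustrative sketch (the function names, the quadratic choice of Q, and the toy cost/constraint below are assumptions, not taken from the paper), the penalized objective can be written as:

```python
import numpy as np

def penalty_objective(theta, x, cost, constraint, rho):
    """phi = C(x, theta) + rho * Q(h(x, theta)).

    Q penalizes violations of the constraint residuals h; here Q is
    assumed to be a simple quadratic penalty (sum of squares).
    """
    h = constraint(x, theta)      # constraint residuals h(x, theta)
    Q = np.sum(h ** 2)            # quadratic penalty Q(.)
    return cost(x, theta) + rho * Q

# Toy usage (hypothetical cost/constraint, not the paper's model):
# fit a scalar theta while forcing theta - 1 = 0 as rho grows.
cost = lambda x, th: np.sum((th * x - x) ** 2)
constraint = lambda x, th: np.atleast_1d(th - 1.0)
phi = penalty_objective(2.0, np.array([1.0, 2.0]), cost, constraint, rho=10.0)
```

Increasing ρ drives the minimizer toward constraint satisfaction; an augmented Lagrangian method, as the abstract notes, achieves the same without ρ → ∞.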

Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks [article]

Yuanku Xu, Dong Huang, Chang-Dong Wang, Jian-Huang Lai
2022 arXiv   pre-print
First, many of them focus on some distribution-based clustering loss, lacking the ability to exploit sample-wise (or augmentation-wise) relationships via contrastive learning.  ...  Specifically, with two random augmentations performed on each image, the backbone network with two weight-sharing views is utilized to learn the representations for the augmented samples, which are then  ...  With the backbone and the auto-encoder trained via the loss function L 1 with the instance-level and cluster-level contrastive learning enforced, this section focuses on the neighborhood structure learning  ... 
arXiv:2207.07173v1 fatcat:jrr3mxg525ekvnluq32dl2uo3q

Sensor Transfer: Learning Optimal Sensor Effect Image Augmentation for Sim-to-Real Domain Adaptation [article]

Alexandra Carlson, Katherine A. Skinner, Ram Vasudevan, Matthew Johnson-Roberson
2019 arXiv   pre-print
To address this, we propose a learned augmentation network composed of physically-based augmentation functions.  ...  We provide experiments that demonstrate that augmenting synthetic training datasets with the proposed learned augmentation framework reduces the domain gap between synthetic and real domains for object  ...  That is, the network learns to transfer the real sensor effect domain -blur, exposure, noise, color cast, and chromatic aberration -to synthetic images via a generative augmentation network.  ... 
arXiv:1809.06256v2 fatcat:kblrvcs5cfhh5ovs2ohszodo2u

Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification

Justin Salamon, Juan Pablo Bello
2017 IEEE Signal Processing Letters  
a "shallow" dictionary learning model with augmentation.  ...  The ability of deep convolutional neural networks (CNN) to learn discriminative spectro-temporal patterns makes them well suited to environmental sound classification.  ...  Deep neural networks, which have a high model capacity, are particularly dependent on the availability of large quantities of training data in order to learn a non-linear function from input to output  ... 
doi:10.1109/lsp.2017.2657381 fatcat:dxk24gafxfakddxoo5cqd4ur3u

Neural Graph Similarity Computation with Contrastive Learning

Shengze Hu, Weixin Zeng, Pengfei Zhang, Jiuyang Tang
2022 Applied Sciences  
Specifically, we utilize vanilla graph convolutional networks to generate the graph representations and capture the cross-graph interactions via a simple multilayer perceptron.  ...  Recent years have witnessed a rapid increase in neural-network-based methods, which project graphs into embedding space and devise end-to-end frameworks to learn to estimate graph similarity.  ...  On Graph Augmentation in Contrastive Learning Next, we examine the influence of different data augmentation strategies in contrastive learning on the results.  ... 
doi:10.3390/app12157668 fatcat:uppopez255addkk26lcwsbp4fm

Hyperspectral Image Classification With Contrastive Graph Convolutional Network [article]

Wentao Yu, Sheng Wan, Guangyu Li, Jian Yang, Chen Gong
2022 arXiv   pre-print
In addition, an adaptive graph augmentation technique is designed to flexibly incorporate the spectral-spatial priors of HSI, which helps facilitate the subsequent contrastive representation learning.  ...  , which is termed Contrastive Graph Convolutional Network (ConGCN), for HSI classification.  ...  SimCLR [7] is a well-known contrastive learning method and is anchored on a Siamese network that learns latent representations via maximizing agreement between differently augmented views of the same  ... 
arXiv:2205.11237v1 fatcat:3ih5erimlfdxlh46ldrhrrfe74
Showing results 1 — 15 out of 105,460 results