1,607 Hits in 4.1 sec

Winner-Take-All Autoencoders [article]

Alireza Makhzani, Brendan Frey
2015 arXiv   pre-print
We then propose the convolutional winner-take-all autoencoder which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations.  ...  We first introduce fully-connected winner-take-all autoencoders which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units.  ...  We first introduce the fully-connected winner-take-all autoencoders that learn to do sparse coding by directly enforcing a winner-take-all lifetime sparsity constraint.  ... 
arXiv:1409.2752v2 fatcat:h52wgsohlng2nofn4gfwdnzrm4
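The lifetime-sparsity constraint this abstract describes is simple to state concretely. A minimal NumPy sketch (the 5% rate, batch size, and layer width are illustrative choices, not values from the paper):

```python
import numpy as np

def lifetime_sparsity(h, rate=0.05):
    """Winner-take-all lifetime sparsity: for each hidden unit
    (column), keep only its `rate` fraction of largest activations
    across the mini-batch and zero out the rest."""
    k = max(1, int(round(rate * h.shape[0])))
    thresh = np.sort(h, axis=0)[-k]      # per-unit k-th largest activation
    return np.where(h >= thresh, h, 0.0)

rng = np.random.default_rng(0)
h = rng.standard_normal((100, 8))        # (batch, hidden units)
s = lifetime_sparsity(h)                 # each unit fires on ~5 samples
```

During training, the gradient would flow only through the surviving activations; for that batch the losing units behave as if dropped.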

Anomaly Detection using a Convolutional Winner-Take-All Autoencoder

Hanh Tran, David Hogg
2017 Proceedings of the British Machine Vision Conference 2017   unpublished
We propose a method for video anomaly detection using a winner-take-all convolutional autoencoder that has recently been shown to give competitive results in learning for classification tasks.  ...  and (2) introducing a spatial winner-take-all step after the final encoding layer during training to introduce a high degree of sparsity.  ...  Convolutional Winner-Take-All autoencoder The Convolutional Winner-Take-All Autoencoder (Conv-WTA) [16] is a non-symmetric autoencoder that learns hierarchical sparse representations in an unsupervised  ... 
doi:10.5244/c.31.139 fatcat:qufkonavvbd5lhuuhqjfj4lize
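The spatial winner-take-all step mentioned in the abstract keeps, for every feature map, only its single largest activation. A NumPy sketch (shapes are illustrative):

```python
import numpy as np

def spatial_wta(fmaps):
    """Spatial winner-take-all: within each (sample, channel)
    feature map, keep the single largest activation, zero the rest.
    `fmaps` has shape (batch, channels, height, width)."""
    b, c, h, w = fmaps.shape
    flat = fmaps.reshape(b, c, h * w)
    winners = flat.argmax(axis=2)                    # (b, c) winner indices
    out = np.zeros_like(flat)
    ib, ic = np.meshgrid(np.arange(b), np.arange(c), indexing="ij")
    out[ib, ic, winners] = flat[ib, ic, winners]
    return out.reshape(b, c, h, w)

x = np.random.default_rng(1).random((2, 3, 4, 4))
y = spatial_wta(x)                                   # one winner per map
```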

Almost Unsupervised Learning for Dense Crowd Counting

Deepak Babu Sam, Neeraj N Sajjan, Himanshu Maurya, R. Venkatesh Babu
2019 Proceedings of the AAAI Conference on Artificial Intelligence  
Motivated by these challenges, we develop Grid Winner-Take-All (GWTA) autoencoder to learn several layers of useful filters from unlabeled crowd images.  ...  This implies creating large-scale annotated crowd data is expensive and directly takes a toll on the performance of existing CNN based counting models on account of small datasets.  ...  We develop Grid Winner-Take-All (GWTA) autoencoder to learn useful features from unlabeled images.  ... 
doi:10.1609/aaai.v33i01.33018868 fatcat:fuuvzlulfrehdgcxufxh5z2wm4

A K-Competitive Autoencoder for Aggression Detection in Social Media Text

Promita Maitra, Ritesh Sarkhel
2018 International Conference on Computational Linguistics  
In this paper, we have described the effects of introducing a winner-takes-all autoencoder for the task of aggression detection, reported its performance on four different datasets, and analyzed some of its  ...  A winner-takes-all autoencoder, called Emoti-KATE, is proposed for this purpose.  ...  In this paper we have proposed Emoti-KATE, a winner-takes-all autoencoder for representing social media text.  ... 
dblp:conf/coling/MaitraS18 fatcat:cndgk6ydh5cgvhgeqhpa5l5esu

KATE: K-Competitive Autoencoder for Text [article]

Yu Chen, Mohammed J. Zaki
2017 arXiv   pre-print
A comprehensive set of experiments show that KATE can learn better representations than traditional autoencoders including denoising, contractive, variational, and k-sparse autoencoders.  ...  In this paper, we propose a novel k-competitive autoencoder, called KATE, for text documents.  ...  The non-linearity in KATE's encoding comes from the tanh activation function and the winner-take-all operation (i.e., top k selection and amplifying energy reallocation).  ... 
arXiv:1705.02033v2 fatcat:fnyg4t3oovei5pg7p6ruu7mbaq
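The encoding the abstract describes (tanh activation, top-k selection, energy reallocation) can be sketched as follows. This is a deliberately simplified, magnitude-based version: the actual KATE layer handles positive and negative activations separately, and the `k` and `alpha` values here are illustrative:

```python
import numpy as np

def k_competitive(z, k=6, alpha=6.26):
    """Simplified k-competitive step: apply tanh, keep the k
    largest-magnitude units per sample, and reallocate the summed
    "energy" of the losing units (scaled by alpha) onto the winners."""
    a = np.tanh(z)
    out = np.zeros_like(a)
    for i, row in enumerate(a):
        winners = np.argsort(np.abs(row))[-k:]
        loser_energy = np.abs(row).sum() - np.abs(row[winners]).sum()
        # push each winner further from zero by its share of the energy
        out[i, winners] = row[winners] + alpha * loser_energy / k * np.sign(row[winners])
    return out

z = np.random.default_rng(2).standard_normal((4, 32))
code = k_competitive(z)                  # 6 active units per row
```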

Vector Quantized Temporally-Aware Correspondence Sparse Autoencoders for Zero-Resource Acoustic Unit Discovery

Batuhan Gundogdu, Bolaji Yusuf, Mansur Yesilbursa, Murat Saraclar
2020 Interspeech 2020  
In this paper, we extend this system by incorporating vector quantization and an adaptation of the winner-take-all networks.  ...  Previously, we introduced recurrent sparse autoencoders fine-tuned with corresponding speech segments obtained by unsupervised term discovery.  ...  Winner-Take-All Network The architecture and the training of CoRSA are very similar to other correspondence autoencoder-based [7, 8] and VQ-VAE-based [14, 15] systems.  ... 
doi:10.21437/interspeech.2020-2765 dblp:conf/interspeech/GundogduYYS20 fatcat:523ivydvhzao3gjagvfljvrbuy

Author Correction: Employing fingerprinting of medicinal plants by means of LC-MS and machine learning for species identification task

Pavel Kharyuk, Dmitry Nazarenko, Ivan Oseledets, Igor Rodin, Oleg Shpigun, Andrey Tsitsilin, Mikhail Lavrentyev
2020 Scientific Reports  
takes all" approach.  ...  species." now reads: "The most obvious increase was shown by Bayesian networks on Test 2, where emergence of correct labels in Top5 jumped by around 20% compared to "winner takes all" approach.  ... 
doi:10.1038/s41598-020-67201-4 pmid:32641689 fatcat:53bfsx7pbbg27hbne4gypyhkje

Page 629 of Neural Computation Vol. 5, Issue 4 [page]

1993 Neural Computation  
Section 2.1 presents a novel method that encourages locally represented classes (like with winner-take-all networks).  ...  It is possible to show that the first term on the right-hand side of equation 2.1 is maximized subject to equation 2.2 if each input pattern is locally represented (just like with winner-take-all networks  ... 

Learnable despeckling framework for optical coherence tomography images

Saba Adabi, Elaheh Rashedi, Anne Clayton, Hamed Mohebbi-Kalkhoran, Xue-wen Chen, Silvia Conforto, Mohammadreza Nasiriavanaki
2018 Journal of Biomedical Optics  
The architecture of LDF includes two main parts: (i) an autoencoder neural network, and (ii) a filter classifier.  ...  The autoencoder learns the figure of merit based on the quality assessment measures obtained from the OCT image, including  ...  Subsequently, the filter classifier identifies the most efficient filter from the  ...  The NLM filter algorithm changes the value of the target pixel by taking the average value of all or selected pixels in the image and weighting them based on their similarity to the target pixel.  ... 
doi:10.1117/1.jbo.23.1.016013 pmid:29368458 fatcat:e5mddyzvm5geld5oazwqrc6yim
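The non-local means (NLM) weighting described in the last snippet can be made concrete. A brute-force, single-pixel sketch (the patch size and smoothing parameter `h` are illustrative):

```python
import numpy as np

def nlm_pixel(img, y, x, patch=3, h=0.1):
    """Non-local means estimate for pixel (y, x): average all pixels
    in the image, weighted by how similar their surrounding patch is
    to the target pixel's patch (Gaussian of the patch distance)."""
    r = patch // 2
    pad = np.pad(img, r, mode="reflect")
    ref = pad[y:y + patch, x:x + patch]          # target pixel's patch
    H, W = img.shape
    weights = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            d = ((pad[i:i + patch, j:j + patch] - ref) ** 2).mean()
            weights[i, j] = np.exp(-d / (h * h))
    return float((weights * img).sum() / weights.sum())

img = np.full((8, 8), 0.5)
val = nlm_pixel(img, 3, 3)                       # constant image stays 0.5
```

Real implementations restrict the search to a window around the target pixel and vectorize the patch comparisons; this loop form is only meant to show the weighting.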

Exploiting Spatio-Temporal Structure with Recurrent Winner-Take-All Networks [article]

Eder Santana, Matthew Emigh, Pablo Zegers, Jose C Principe
2017 arXiv   pre-print
We propose a convolutional recurrent neural network, with Winner-Take-All dropout, for high-dimensional unsupervised feature learning in multi-dimensional time series.  ...  Our contributions can be summarized as a scalable reinterpretation of the Deep Predictive Coding Networks trained end-to-end with backpropagation through time, an extension of the previously proposed Winner-Take-All  ...  Makhzani and Frey proposed Winner-Take-All (WTA) Autoencoders [8] which use aggressive Dropout, where all the elements but the strongest of a convolutional map are zeroed out.  ... 
arXiv:1611.00050v2 fatcat:osouttaz4vgrpft7s3lea6qgli

Anomaly Detection With Multiple-Hypotheses Predictions [article]

Duc Tam Nguyen, Zhongyu Lou, Michael Klar, Thomas Brox
2019 arXiv   pre-print
We propose to learn the data distribution of the foreground more efficiently with a multi-hypotheses autoencoder.  ...  In one-class-learning tasks, only the normal case (foreground) can be modeled with data, whereas the variation of all possible anomalies is too erratic to be described by samples.  ...  The winner-takes-all loss will encourage each hypothesis branch to predict a constant image with one value from [0,255].  ... 
arXiv:1810.13292v5 fatcat:obj63vegvvhptpk5rpoqjv2fcy
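The winner-takes-all loss mentioned here trains only the best-matching hypothesis branch per sample. A minimal sketch (branch count and shapes are illustrative):

```python
import numpy as np

def wta_loss(hypotheses, target):
    """Winner-takes-all loss: each branch proposes a reconstruction,
    but only the branch closest to the target contributes to the
    loss (and would receive gradient during training)."""
    errs = ((hypotheses - target) ** 2).reshape(len(hypotheses), -1).mean(axis=1)
    best = int(np.argmin(errs))
    return errs[best], best

target = np.zeros((2, 2))
hyps = np.stack([np.full((2, 2), 1.0),
                 np.full((2, 2), 0.1),
                 np.full((2, 2), 0.5)])
loss, best = wta_loss(hyps, target)      # branch 1 is closest and wins
```

Because the other branches receive no gradient for that sample, each branch is free to specialize; the abstract's observation is that, unconstrained, this can degenerate into branches predicting constant images.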

Duel-based Deep Learning system for solving IQ tests

Paulina Tomaszewska, Adam Zychowski, Jacek Mandziuk
2022 International Conference on Artificial Intelligence and Statistics  
The classifier takes as input a pair of filled-in RPMs that participate in a duel.  ...  Therefore, we first of all changed the number of input panels.  ... 
dblp:conf/aistats/TomaszewskaZM22 fatcat:s5hoz7irfvbwlfvwkkt6vwn3vq

Anomaly Detection Based on Multiple-Hypothesis Autoencoder [article]

JoonSung Lee, YeongHyeon Park
2021 arXiv   pre-print
Obtaining abnormal data in the industrial field takes a lot of cost and time, so the model trains only on normal data and detects abnormal data in the inference phase.  ...  Recently, autoencoder (AE) based models are widely used in the field of anomaly detection. A model trained with normal data generates a larger restoration error for abnormal data.  ...  Hypothesis Pruning Generative Adversarial Network (HP-GAN), which is based on Winner-Take-All theory, consists of an adversarial neural network trained through matching of multiple hypotheses and latent vectors  ... 
arXiv:2107.08790v1 fatcat:oais5ukqyfa2fms7gkrq6f72ku

On DNN posterior probability combination in multi-stream speech recognition for reverberant environments

Feifei Xiong, Stefan Goetze, Bernd T. Meyer
2017 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
The method is tested in known and unknown environments against approaches based on inverse entropy and autoencoders, with average relative word error rate improvements of 46% and 29%, respectively, when  ...  In cases when some streams carry detrimental information, it might be better to pursue a winner-takes-all approach, which has also been explored in autoencoder approaches [12], and is also investigated  ...  Stable results were obtained independently of the specific combination strategy (weighting or winner-takes-all) and the temporal context (frame-wise vs. utterance-level), indicating that the method is  ... 
doi:10.1109/icassp.2017.7953158 dblp:conf/icassp/XiongGM17a fatcat:exgdeg3m7vftznk2ozpaighh4y
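The weighting vs. winner-takes-all stream combination compared in this paper can be sketched as follows. Per-frame entropy is used here as the reliability proxy, an assumption made for illustration in the spirit of the inverse-entropy baseline:

```python
import numpy as np

def combine_streams(posteriors, weights=None, winner_takes_all=False):
    """Frame-wise combination of per-stream DNN posteriors.
    posteriors: (streams, frames, classes), each row summing to 1.
    winner_takes_all=True: per frame, use only the lowest-entropy
    (most confident) stream; otherwise average with `weights`."""
    p = np.asarray(posteriors, dtype=float)
    if winner_takes_all:
        ent = -(p * np.log(p + 1e-12)).sum(axis=2)   # (streams, frames)
        best = ent.argmin(axis=0)                    # winning stream per frame
        return p[best, np.arange(p.shape[1])]
    w = np.ones(p.shape[0]) if weights is None else np.asarray(weights, float)
    return np.tensordot(w / w.sum(), p, axes=1)

sharp = np.tile([0.9, 0.05, 0.05], (2, 1))           # confident stream
flat = np.tile([1 / 3, 1 / 3, 1 / 3], (2, 1))        # uncertain stream
p = np.stack([sharp, flat])
wta = combine_streams(p, winner_takes_all=True)      # picks the sharp stream
```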

Binomics - Where Metagenomics meets the Binary World

Inman Harvey, Nicholas Tomko
2010 Workshop on the Synthesis and Simulation of Living Systems  
Here the recombination is described in terms of 'infecting' the Loser with genetic material from the Winner, and we can note that this rate of infection can take different values.  ...  Experimental Results For making comparisons, we take the significant factor to be the number of autoencoders that need evaluating before a perfect score is achieved.  ... 
dblp:conf/alife/HarveyT10 fatcat:bzbwzahwvzcv5jr63m3osxfh4m
Showing results 1 — 15 out of 1,607 results