1,828 Hits in 5.8 sec

Zero-bias autoencoders and the benefits of co-adapting features [article]

Kishore Konda, Roland Memisevic, David Krueger
2015 arXiv   pre-print
We show that negative biases are a natural result of using a hidden layer whose responsibility is to both represent the input data and act as a selection mechanism that ensures sparsity of the representation  ...  Regularized training of an autoencoder typically results in hidden unit biases that take on large negative values.  ...  ACKNOWLEDGMENTS This work was supported by an NSERC Discovery grant, a Google faculty research award, and the German Federal Ministry of Education and Research (BMBF) in the project 01GQ0841 (BFNT Frankfurt  ... 
arXiv:1402.3337v5 fatcat:fhvwm7wbhvhiplum5qxonka3uu
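The zero-bias idea in this entry can be illustrated with a minimal sketch: hidden units carry no bias term, and a fixed activation threshold plays the selection role that large negative biases would otherwise learn during regularized training. The threshold value, layer sizes, and tied linear decoder below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def zae_forward(x, W, theta=1.0):
    """Zero-bias autoencoder forward pass with a thresholded rectifier:
    hidden units have no bias; a fixed threshold theta acts as the
    sparsity/selection mechanism instead of learned negative biases."""
    a = W @ x                 # linear activation, no bias term
    h = a * (a > theta)       # thresholded rectifier (TRec-style)
    return W.T @ h            # tied-weight linear reconstruction

# Toy usage: 8-dim inputs, 16 hidden units.
W = rng.standard_normal((16, 8)) * 0.1
x = rng.standard_normal(8)
x_hat = zae_forward(x, W)
```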

Far-Field Speech Enhancement Using Heteroscedastic Autoencoder for Improved Speech Recognition

Shashi Kumar, Shakti P. Rath
2019 Interspeech 2019  
Here, we propose a more generalized loss based on a non-zero mean and heteroscedastic covariance distribution for the residual variables.  ...  We observe a relative improvement of 7.31% in WER compared to conventional DA and, overall, a relative improvement of 14.4% compared to the mismatched train and test scenario.  ...  distribution of the residual term is N (0, βn), i.e., zero-mean with heteroscedastic covariance.  ... 
doi:10.21437/interspeech.2019-2032 dblp:conf/interspeech/KumarR19 fatcat:jbfcxjq4mnfcplgk3zr6gazrsi
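The heteroscedastic residual loss described above can be sketched as a Gaussian negative log-likelihood in which the model predicts both a (possibly non-zero) residual mean and a per-dimension log-variance, rather than assuming a zero-mean, fixed-variance residual as a plain MSE denoising loss does. The function name and NumPy formulation are assumptions for illustration.

```python
import numpy as np

def hetero_nll(y, mu, log_var):
    """Negative log-likelihood of residuals under N(mu, diag(exp(log_var))):
    the log_var term penalizes over-inflated variances, while the scaled
    squared error down-weights dimensions the model flags as noisy."""
    var = np.exp(log_var)
    return 0.5 * np.mean(log_var + (y - mu) ** 2 / var)

y = np.array([0.5, -0.2, 1.0])
# With mu == y and unit variance the loss is exactly zero.
loss_zero = hetero_nll(y, y, np.zeros(3))
```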

Effect of Additive Noise for Multi-Layered Perceptron with AutoEncoders

Motaz SABRI, Takio KURITA
2017 IEICE transactions on information and systems  
It is shown that internal representation of learned features emerges and sparsity of hidden units increases when independent Gaussian noises are added to inputs of hidden units during the deep network  ...  This paper investigates the effect of noises added to hidden units of AutoEncoders linked to multilayer perceptrons.  ...  Acknowledgments This work was supported by JSPS KAKENHI Grant number 16K00239 and 16H01430.  ... 
doi:10.1587/transinf.2016edp7468 fatcat:dhbxv6kcqng5llrsly4ivtsv5m
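The setup this entry studies, independent Gaussian noise added to the inputs of the hidden units during training, can be sketched as follows; the ReLU nonlinearity, layer sizes, and noise scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def hidden_noise_forward(x, W1, W2, sigma=0.1, train=True):
    """Forward pass where independent Gaussian noise is injected into the
    hidden pre-activations during training only; the paper reports that
    this increases the sparsity of the learned hidden representation."""
    pre = W1 @ x
    if train:
        pre = pre + rng.normal(0.0, sigma, size=pre.shape)
    h = np.maximum(pre, 0.0)   # ReLU hidden layer (an assumption here)
    return W2 @ h

W1 = rng.standard_normal((32, 10)) * 0.1
W2 = rng.standard_normal((10, 32)) * 0.1
x = rng.standard_normal(10)
noisy = hidden_noise_forward(x, W1, W2, train=True)
clean = hidden_noise_forward(x, W1, W2, train=False)
```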

A Convolutional Decoder for Point Clouds using Adaptive Instance Normalization

Isaak Lim, Moritz Ibing, Leif Kobbelt
2019 Computer graphics forum (Print)  
The results are evaluated in an autoencoding setup to offer both qualitative and quantitative analysis.  ...  Our convolutional autoencoder with Adaptive Instance Normalization was trained to output 2500 points for inputs with 2500 points.  ...  The MLP that estimates the density and classifies whether a grid cell contains points or not is constructed as FC16-FC8-FC4-FC2.  ... 
doi:10.1111/cgf.13792 fatcat:zx5z3u3oj5esxddmfuo6zg2d7m
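Adaptive Instance Normalization, the conditioning mechanism named in this title, normalizes each feature channel to zero mean and unit variance and then rescales and shifts it with parameters predicted from the shape's latent code. A minimal sketch, with shapes and parameter values chosen purely for illustration:

```python
import numpy as np

def adain(content, gamma, beta, eps=1e-5):
    """Adaptive Instance Normalization: per-channel normalization of
    `content`, followed by an affine transform (gamma, beta) that would
    be predicted from the latent code in the decoder."""
    mu = content.mean(axis=-1, keepdims=True)
    std = content.std(axis=-1, keepdims=True)
    return gamma * (content - mu) / (std + eps) + beta

feat = np.random.default_rng(7).standard_normal((8, 2500))  # 8 channels, 2500 points
gamma = np.ones((8, 1)) * 2.0
beta = np.zeros((8, 1))
out = adain(feat, gamma, beta)
```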

Semisupervised Autoencoder for Sentiment Analysis [article]

Shuangfei Zhai, Zhongfei Zhang
2015 arXiv   pre-print
To reduce the bias introduced by a single classifier, we define a posterior probability distribution on the weights of the classifier, and derive the marginalized loss of the autoencoder with a Laplace approximation  ...  Traditional autoencoders suffer in at least two respects: scalability with the high dimensionality of the vocabulary, and dealing with task-irrelevant words.  ...  the bias for simplicity.  ... 
arXiv:1512.04466v1 fatcat:oe3vnmnmpfbmrisshjuuojhsh4

Semisupervised Autoencoder for Sentiment Analysis

Shuangfei Zhai, Zhongfei Zhang
2016 Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference  
To reduce the bias introduced by a single classifier, we define a posterior probability distribution on the weights of the classifier, and derive the marginalized loss of the autoencoder with a Laplace approximation  ...  Traditional autoencoders suffer in at least two respects: scalability with the high dimensionality of the vocabulary, and dealing with task-irrelevant words.  ...  the bias for simplicity.  ... 
doi:10.1609/aaai.v30i1.10159 fatcat:evv6a7ophbdv5k3vemujs2ead4
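The marginalized loss mentioned in both versions of this paper can be illustrated in its simplest form: once the classifier weights carry a Gaussian posterior (which a Laplace approximation yields), the expected loss has a closed form with an extra uncertainty penalty. The squared loss and the names below are simplifying assumptions for illustration, not the paper's exact derivation.

```python
import numpy as np

def marginalized_sq_loss(x, y, w_mean, w_cov):
    """Expected squared loss E_w[(w.x - y)^2] when the classifier weights
    follow a Gaussian posterior N(w_mean, w_cov): the point-estimate loss
    plus an x^T Cov x term that penalizes directions of high weight
    uncertainty."""
    point = (w_mean @ x - y) ** 2
    return point + x @ w_cov @ x

x = np.array([1.0, 2.0])
w_mean = np.array([0.5, 0.25])
w_cov = 0.1 * np.eye(2)
# Point-estimate loss is (0.5 + 0.5 - 1)^2 = 0; uncertainty adds 0.1 * (1 + 4).
loss = marginalized_sq_loss(x, 1.0, w_mean, w_cov)
```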

Multimodal Co-learning: Challenges, Applications with Datasets, Recent Advances and Future Directions [article]

Anil Rahate, Rahee Walambe, Sheela Ramanna, Ketan Kotecha
2021 arXiv   pre-print
We present a comprehensive taxonomy of multimodal co-learning based on the challenges addressed by co-learning and the associated implementations.  ...  The various techniques employed, including the latest ones, are reviewed along with some of the applications and datasets.  ...  Declaration of Competing Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared  ... 
arXiv:2107.13782v2 fatcat:s4spofwxjndb7leqbcqnwbifq4

scGNN is a novel graph neural network framework for single-cell RNA-Seq analyses

Juexin Wang, Anjun Ma, Yuzhou Chang, Jianting Gong, Yuexu Jiang, Ren Qi, Cankun Wang, Hongjun Fu, Qin Ma, Dong Xu
2021 Nature Communications  
Single-cell RNA-sequencing (scRNA-Seq) is widely used to reveal the heterogeneity and dynamics of tissues, organisms, and complex diseases, but its analyses still suffer from multiple grand challenges  ...  an effective representation of gene expression and cell–cell relationships.  ...  Acknowledgements This work was supported by awards R35-GM126985 and R01-GM131399 from the National Institute of General Medical Sciences of the National Institutes of Health.  ... 
doi:10.1038/s41467-021-22197-x pmid:33767197 pmcid:PMC7994447 fatcat:imdw6wdgnjaf5o5qbbasjuwuta

Towards Artificial Intelligence Serving as an Inspiring Co-Creation Partner

Kevin German, Marco Limm, Matthias Wölfel, Silke Helmerdig
2019 EAI Endorsed Transactions on Creative Technologies  
Our aim is to be an inspiring co-creation partner by suggesting unexpected design variations and by learning the designer's taste.  ...  Besides the potential of AI, we also point out and discuss moral threats posed by the latest developments in AI with respect to the creative sector.  ...  Therefore, each of the 20 individuals in the population is assigned a fitness value between zero and one by the user.  ... 
doi:10.4108/eai.26-4-2019.162609 fatcat:fj7ijmm2mjbv7g5caoaedaymfq

Marginalized Denoising Autoencoders for Domain Adaptation [article]

Minmin Chen, Zhixiang Xu (Washington University), Kilian Weinberger, Fei Sha (University of Southern California)
2012 arXiv   pre-print
Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation.  ...  In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features.  ...  We hope that our work on mSDA will inspire future research on efficient training of SDA, beyond domain adaptation, and impact a variety of research problems.  ... 
arXiv:1206.4683v1 fatcat:57a2i5zn7fbixjiga7a5ryobhy
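The marginalization trick that makes mSDA cheap can be sketched for a single linear layer under blank-out (dropout-style) corruption with probability p: the expectation of the reconstruction loss over all corruptions is computed analytically, so the denoising weights come out in closed form with no SGD and no sampled corrupted copies. Variable names and the bias-free formulation below are simplifications of the paper's mDA layer.

```python
import numpy as np

def mda_weights(X, p, eps=1e-6):
    """Closed-form weights of a marginalized linear denoising autoencoder:
    W = E[P] E[Q]^{-1}, where the expectations over blank-out corruption
    of the scatter matrix are written down directly. X is d x n."""
    d = X.shape[0]
    S = X @ X.T                          # scatter matrix of clean inputs
    q = np.full(d, 1.0 - p)              # per-feature survival probability
    Q = S * np.outer(q, q)               # E[x_corrupt x_corrupt^T], off-diagonal
    np.fill_diagonal(Q, q * np.diag(S))  # a feature co-occurs with itself w.p. 1-p
    P = S * q                            # E[x_clean x_corrupt^T] (scales columns)
    return P @ np.linalg.inv(Q + eps * np.eye(d))

X = np.random.default_rng(2).standard_normal((5, 200))
W = mda_weights(X, p=0.5)
```

As a sanity check, with zero corruption (p = 0) the optimal denoiser is the identity map.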

Domain-invariant features for mechanism of action prediction in a multi-cell-line drug screen

2019 Bioinformatics  
For this, we propose multi-task autoencoders, including a domain-adaptive model used to construct domain-invariant feature representations across cell lines.  ...  The contribution of this article is 2-fold.  ...  Conflict of Interest: none declared.  ... 
doi:10.1093/bioinformatics/btz774 pmid:31608933 pmcid:PMC7058179 fatcat:4mrsgk473vexxosqlguzc6ofh4
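The multi-task autoencoder idea above, one shared encoder whose representation is reused across cell lines (tasks) with a decoder per task, can be sketched as follows; the linear layers, tanh activation, and task names are illustrative assumptions, not the article's architecture.

```python
import numpy as np

rng = np.random.default_rng(4)

def multitask_ae_forward(x, W_enc, decoders):
    """Shared-encoder / per-task-decoder sketch of a multi-task autoencoder:
    the shared representation h is pushed toward features that reconstruct
    well across all tasks (here: cell lines), i.e. domain-invariant ones."""
    h = np.tanh(W_enc @ x)                                   # shared code
    return {task: W_dec @ h for task, W_dec in decoders.items()}

W_enc = rng.standard_normal((4, 12)) * 0.2
decoders = {"cell_line_A": rng.standard_normal((12, 4)) * 0.2,
            "cell_line_B": rng.standard_normal((12, 4)) * 0.2}
x = rng.standard_normal(12)
recons = multitask_ae_forward(x, W_enc, decoders)
```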

Marginalizing stacked linear denoising autoencoders

Minmin Chen, Kilian Q. Weinberger, Zhixiang Eddie Xu, Fei Sha
2015 Journal of machine learning research  
In this paper, we propose marginalized Stacked Linear Denoising Autoencoder (mSLDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features  ...  Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation.  ...  Acknowledgements We would like to thank Laurens van der Maaten for pointing out the alternative Ridge Regression formulation of mSLDA under blank-out corruption.  ... 
dblp:journals/jmlr/ChenWXS15 fatcat:5jawfi3gnrdstid4gwn5fm4l7y

Multi-Modal Adversarial Autoencoders for Recommendations of Citations and Subject Labels

Lukas Galke, Florian Mai, Iacopo Vagliano, Ansgar Scherp
2018 Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization - UMAP '18  
We demonstrate, however, that the two tasks differ in the semantics of item co-occurrence in the sense that item co-occurrence resembles relatedness in case of citations, yet implies diversity in case  ...  We analyze the effects of adversarial regularization, sparsity, and different input modalities.  ...  Autoencoders retain this benefit and may learn to put appropriate weights in the bias parameters if it is helpful for the overall objective.  ... 
doi:10.1145/3209219.3209236 dblp:conf/um/GalkeMVS18 fatcat:yfevueeu2zarjdffri4z2vdyhq

Collaborative Reflection-Augmented Autoencoder Network for Recommender Systems [article]

Lianghao Xia, Chao Huang, Yong Xu, Huance Xu, Xiang Li, Weiguo Zhang
2022 arXiv   pre-print
The network architecture of CRANet is formed of an integrative structure with a reflective receptor network and an information fusion autoencoder module, which endows our recommendation framework with  ...  To address the issues, we develop a Collaborative Reflection-Augmented Autoencoder Network (CRANet), that is capable of exploring transferable knowledge from observed and unobserved user-item interactions  ...  the exploration of observed and unobserved interactive relations across users and items to address the sparsity bias challenge.  ... 
arXiv:2201.03158v1 fatcat:qm4pposhvfa4ligy5prsvj7fxa

Evolutionary Hierarchical Sparse Extreme Learning Autoencoder Network for Object Recognition

Yujun Zeng, Lilin Qian, Junkai Ren
2018 Symmetry  
Nevertheless, the input weights and biases of the hidden nodes in ELM are generated according to a random distribution and may lead to the occurrence of non-optimal and redundant parameters that deteriorate  ...  When extended to the stacked autoencoder network, which is a typical symmetrical representation learning model architecture, ELM manages to realize hierarchical feature extraction and classification, which  ...  ∆θ_r2 represent the difference of two pairs of other individuals, which are chosen randomly, and C_m is a constant used to control and adapt the mutation strength: µ_{θ_i,k} = θ_{i,k} if rand < C_m  ... 
doi:10.3390/sym10100474 fatcat:bgmt5gtjonerrkynpynn5akq7q
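The evolutionary update fragment quoted above is the standard differential-evolution pattern: a mutant built from differences of randomly chosen individuals, followed by binomial crossover controlled by a crossover constant (the paper's C_m). A generic sketch, with F, C_m, and population sizes as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def de_mutate_crossover(pop, i, F=0.5, Cm=0.9):
    """Differential-evolution style update, as used to tune the randomly
    initialized ELM autoencoder weights/biases: mutate with a scaled
    difference of random individuals, then keep each mutant component
    with probability Cm (binomial crossover)."""
    n, dim = pop.shape
    r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    mask = rng.random(dim) < Cm          # rand < C_m keeps the mutant gene
    return np.where(mask, mutant, pop[i])

pop = rng.standard_normal((10, 6))       # 10 individuals, 6 parameters each
trial = de_mutate_crossover(pop, 0)
```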
Showing results 1 — 15 out of 1,828 results