9,870 Hits in 3.0 sec

Hierarchical Importance Weighted Autoencoders [article]

Chin-Wei Huang, Kris Sankaran, Eeshan Dhekane, Alexandre Lacoste, Aaron Courville
2019 arXiv   pre-print
Importance weighted variational inference (Burda et al., 2015) uses multiple i.i.d. samples to obtain a tighter variational lower bound.  ...  The hope is that the proposals coordinate to make up for one another's errors, reducing the variance of the importance estimator.  ...  Importance weighted hierarchical variational inference. 2018. Tucker, G., Lawson, D., Gu, S., and Maddison, C. J.  ... 
arXiv:1905.04866v1 fatcat:rtj2lqxlxbe2zprvz2xvif2ll4
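The importance weighted bound the snippet above refers to can be sketched in a few lines. The function below is a minimal pure-Python illustration (the function name and list-based interface are my own, not from the paper): it computes log (1/K) Σ_k p(x, z_k)/q(z_k|x) with a log-sum-exp for numerical stability.

```python
import math

def iwae_bound(log_p_joint, log_q):
    """Importance weighted lower bound sketch (Burda et al., 2015).

    log_p_joint[k] = log p(x, z_k) for the k-th sampled latent z_k,
    log_q[k]       = log q(z_k | x) under the proposal.
    Returns log of the average importance weight, computed stably.
    """
    log_w = [lp - lq for lp, lq in zip(log_p_joint, log_q)]
    m = max(log_w)  # log-sum-exp trick: factor out the largest term
    return m + math.log(sum(math.exp(w - m) for w in log_w) / len(log_w))
```

With K = 1 this reduces to a single-sample ELBO estimate; averaging more weights inside the log is what tightens the bound.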

Deep Learning for Molecular Graphs with Tiered Graph Autoencoders and Graph Prediction [article]

Daniel T. Chang
2021 arXiv   pre-print
We briefly review graph prediction and the QM9 dataset for background information, and discuss the use of tiered graph embeddings for graph prediction, particularly weighted group pooling.  ...  In this paper, we discuss the use of tiered graph autoencoders together with graph prediction for molecular graphs.  ...  This assumes FGs are most important, RGs are less important, and a CCG is the least important for predicting molecular properties / activities.  ... 
arXiv:1910.11390v2 fatcat:xzllciv4qvaqfp2yjqj3dqtfqm

Variational Composite Autoencoders [article]

Jiangchao Yao, Ivor Tsang, Ya Zhang
2018 arXiv   pre-print
In this paper, we propose a variational composite autoencoder to sidestep this issue by amortizing on top of the hierarchical latent variable model.  ...  Previous variational autoencoders can be less effective due to their straightforward encoder-decoder structure.  ...
Methods      Generative modeling   Structured prediction
VAE          -112.8                -77.1
Concrete-s   -105.3                -66.3
VAE-Con      -111.5                -62.4
VCAE         -95.7                 -62.1
Table 1: The importance-weighted estimate of  ... 
arXiv:1804.04435v1 fatcat:6q5cr6qo5bfdjbf5qgwtdlovee

Evolutionary Hierarchical Sparse Extreme Learning Autoencoder Network for Object Recognition

Yujun Zeng, Lilin Qian, Junkai Ren
2018 Symmetry  
In this paper, a novel sparse autoencoder derived from ELM and differential evolution is proposed and integrated into a hierarchical hybrid autoencoder network to accomplish the end-to-end learning with  ...  When extended to the stacked autoencoder network, which is a typical symmetrical representation learning model architecture, ELM manages to realize hierarchical feature extraction and classification, which  ...  Moreover, Bartlett [29] has proven that the norm of the weights in a neural network has a particularly important effect on generalization performance: the smaller the norm, the better.  ... 
doi:10.3390/sym10100474 fatcat:bgmt5gtjonerrkynpynn5akq7q

Machine Learning for Feature Selection and Cluster Analysis in Drug Utilisation Research

Sara Khalid, Daniel Prieto-Alhambra
2019 Current Epidemiology Reports  
The weight of a variable d at a given node j signified its importance in activating that node; the greater the weight of a variable, the more important it was for the activation.  ...  In the case study, hierarchical and k-means clustering were performed on the dataset, in search of a suitable number of clusters k, using the features selected by the autoencoder model.  ... 
doi:10.1007/s40471-019-00211-7 fatcat:543fpk76c5hffklh6mtpvwu6h4
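The weight-magnitude notion of importance described in this entry admits a very small sketch. In the code below, the matrix layout (hidden nodes × input variables) and the sum-of-absolute-weights aggregation are my assumptions for illustration, not details taken from the paper.

```python
def variable_importance(W):
    """Per-variable importance from first-layer autoencoder weights.

    W[j][d] = weight from input variable d to hidden node j.
    A variable's importance at node j is |W[j][d]|; here we aggregate
    by summing absolute weights over all nodes (one common convention).
    """
    n_vars = len(W[0])
    return [sum(abs(row[d]) for row in W) for d in range(n_vars)]
```

The resulting scores can then feed a feature-selection step before clustering, as in the case study.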

Boosting Gene Expression Clustering with System-Wide Biological Information: A Robust Autoencoder Approach [article]

Hongzhu Cui, Chong Zhou, Xinyu Dai, Yuting Liang, Randy Paffenroth, Dmitry Korkin
2017 bioRxiv   pre-print
We tested our approach on two distinct gene expression datasets and compared the performance with two widely used clustering methods, hierarchical clustering and k-means, as well as with a recent deep  ...  The approach benefits from a new deep learning architecture, Robust Autoencoder, which provides a more accurate high-level representation of the feature sets, and from incorporating prior biological information  ...  The specific strategy of assigning a weight to the distance between a pair of genes is of critical importance.  ... 
doi:10.1101/214122 fatcat:cxwgxfb4tfeh7isvklyqfjv4p4

A Hierarchical Sparse Discriminant Autoencoder for Bearing Fault Diagnosis

Mengjie Zeng, Shunming Li, Ranran Li, Jiantao Lu, Kun Xu, Xianglian Li, Yanfeng Wang, Jun Du
2022 Applied Sciences  
In response to this problem, this research proposes a hierarchical sparse discriminant autoencoder (HSDAE) method for fault diagnosis of rotating components, which is a new semi-supervised autoencoder  ...  By considering the sparsity of autoencoders, a hierarchical sparsity strategy was proposed to improve stacked sparse autoencoders, and the particle swarm optimization algorithm was used to obtain  ...  Proposed Methodology Hierarchical Sparse Parameter Strategy The loss function is very important in the training of the neural network.  ... 
doi:10.3390/app12020818 fatcat:5zz5o2z5m5gozmcnmkzealuxxi
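The sparsity component of such loss functions is, in the standard sparse-autoencoder formulation, a KL term between a target activation rate and each hidden unit's mean activation. The sketch below shows only that standard term; the paper's hierarchical weighting and PSO-based parameter tuning are not reproduced here.

```python
import math

def kl_sparsity_penalty(activations, rho=0.05):
    """Standard sparse-autoencoder penalty: sum_j KL(rho || rho_hat_j),
    where rho_hat_j is the mean activation of hidden unit j over a batch.

    activations: one list of batch activations per hidden unit,
    rho:         target (desired) mean activation rate.
    """
    penalty = 0.0
    for unit_acts in activations:
        rho_hat = sum(unit_acts) / len(unit_acts)
        rho_hat = min(max(rho_hat, 1e-8), 1 - 1e-8)  # numerical safety
        penalty += rho * math.log(rho / rho_hat) + \
                   (1 - rho) * math.log((1 - rho) / (1 - rho_hat))
    return penalty
```

The penalty is zero when a unit's mean activation matches the target rate and grows as activations drift away from it, which is what pushes the hidden code toward sparsity during training.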

Autoencoder Trees [article]

Ozan İrsoy, Ethem Alpaydın
2014 arXiv   pre-print
We also see that the autoencoder tree captures hierarchical representations at different granularities of the data on its different levels and the leaves capture the localities in the input space.  ...  We use the soft decision tree where internal nodes realize soft multivariate splits given by a gating function and the overall output is the average of all leaves weighted by the gating values on their  ...  Furthermore, the decoding process is also hierarchical in autoencoder trees.  ... 
arXiv:1409.7461v1 fatcat:frakejnpczdzpc7sfu2rj7n7ky
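The gated-average output rule described in this entry can be sketched directly. In the toy tree below, the class names and the single-sigmoid gate parameterization are illustrative assumptions: each internal node blends its subtrees' responses by the gating value, so the overall output is a soft average of the leaves.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class Leaf:
    def __init__(self, value):
        self.value = value              # leaf response vector
    def output(self, x):
        return self.value

class Node:
    def __init__(self, w, b, left, right):
        self.w, self.b = w, b           # soft multivariate split params
        self.left, self.right = left, right
    def output(self, x):
        # gating value decides how much each subtree contributes
        g = sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)) + self.b)
        l = self.left.output(x)
        r = self.right.output(x)
        return [g * a + (1.0 - g) * b for a, b in zip(l, r)]
```

Because the gates are differentiable, the whole tree can be trained end-to-end with gradient descent, unlike a hard decision tree.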

Learning a hierarchical representation of the yeast transcriptomic machinery using an autoencoder model

Lujia Chen, Chunhui Cai, Vicky Chen, Xinghua Lu
2016 BMC Bioinformatics  
Conclusions: Contemporary deep hierarchical latent variable models, such as the autoencoder, can be used to partially recover the organization of transcriptomic machinery.  ...  An important and yet challenging task in systems biology is to reconstruct cellular signaling systems in a data-driven manner.  ...  A gene is regarded as being regulated by a hidden unit if its weight is in the top 15% of all weights.  ... 
doi:10.1186/s12859-015-0852-1 pmid:26818848 pmcid:PMC4895523 fatcat:wqxqiguxmbcmxi7kqpx2kcc7u4
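The top-15% heuristic mentioned in the snippet is easy to sketch. In the code below, the weight-matrix orientation (hidden units × genes) and the use of absolute values are my assumptions for illustration.

```python
def regulated_genes(W, top_fraction=0.15):
    """Flag gene/hidden-unit pairs whose |weight| falls in the top
    fraction of all weights, following the paper's 15% heuristic.

    W[h][g] = weight between hidden unit h and gene g.
    Returns a boolean matrix of the same shape.
    """
    flat = sorted((abs(w) for row in W for w in row), reverse=True)
    k = max(1, int(len(flat) * top_fraction))
    threshold = flat[k - 1]                 # k-th largest |weight|
    return [[abs(w) >= threshold for w in row] for row in W]
```

Each True entry then marks a putative regulatory link between a hidden unit and a gene.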

INFLUENCE ANALYSIS OF WATERLOGGING BASED ON DEEP LEARNING MODEL IN WUHAN

Y. Pan, Z. Shao, T. Cheng, Z. Wang, Z. Zhang
2017 The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences  
This paper analyses in depth a large number of factors related to the degree of urban waterlogging influence, and constructs a Stacked Autoencoder model to explore the relationship between the waterlogging  ...  Then, according to the relative importance of each sub-element in the hierarchical model, the judgment matrix is constructed.  ...  models; the detailed models are shown in Figure 2; (2) Each hierarchical model corresponds to a judgment matrix whose values are obtained by comparing the relative importance of the two sub-elements  ... 
doi:10.5194/isprs-archives-xlii-2-w7-1313-2017 fatcat:xcrcxjmg2nc6bcimmlgxopc6au

Construction and reduction methods of vulnerability index system In power SCADA

Yuancheng Li, Shengnan Chu
2014 International Journal of Security and Its Applications  
Autoencoder network.  ...  The autoencoder method can obtain optimal initial weights in pre-training and then back-propagate error derivatives, adjusting the weights from these initial values to minimize the reconstruction error.  ...  The autoencoder can automatically adjust the weights: important indicators are given larger weights while redundant indexes are given smaller weights, improving the objectivity of the weight assignment  ... 
doi:10.14257/ijsia.2014.8.6.29 fatcat:low5nueijnhtpakhikjpb3zvye

Visualizing hierarchies in scRNA-seq data using a density tree-biased autoencoder [article]

Quentin Garrido
2022 arXiv   pre-print
We then introduce DTAE, a tree-biased autoencoder that emphasizes the tree structure of the data in low dimensional space.  ...  Given that many cellular differentiation processes are hierarchical, their scRNA-seq data is expected to be approximately tree-shaped in gene expression space.  ...  In contrast, the hierarchical structure is important for the biological interpretation of the data: it is much less important if an embedding is placed close to two centroids that are on the same branch  ... 
arXiv:2102.05892v3 fatcat:eman6nuo45fc7p5nd4pcsheloa

TopicAE: A Topic Modeling Autoencoder

2019 Acta Polytechnica Hungarica  
In this paper, we propose TopicAE, a simple autoencoder designed to perform topic modeling on input texts.  ...  Several topic models exist for extracting standard topics, their evolution through time, and the hierarchical structure of the topics.  ...  Hierarchical TopicAE The composition of TopicAE autoencoders is also applicable to the extraction of the hierarchical structure of topics.  ... 
doi:10.12700/aph.16.4.2019.4.4 fatcat:2pd7kl4tv5d77adixhpwecnuuq

Hierarchical Self Attention Based Autoencoder for Open-Set Human Activity Recognition [article]

M Tanjid Hasan Tonmoy, Saif Mahmud, A K M Mahbubur Rahman, M Ashraful Amin, Amin Ahsan Ali
2021 arXiv   pre-print
The source code is available at: github.com/saif-mahmud/hierarchical-attention-HAR  ...  The decoder in this autoencoder architecture incorporates self-attention based feature representations from the encoder to detect unseen activity classes in an open-set recognition setting.  ...  Explainable feature attention maps are obtained from hierarchical self-attention layers to demonstrate dominant sensor placement and temporal importance within a session to classify specific physical activity  ... 
arXiv:2103.04279v1 fatcat:mfm5ao4hxzaxvipfvgvizv4rse

Deep Learning with Anatomical Priors: Imitating Enhanced Autoencoders in Latent Space for Improved Pelvic Bone Segmentation in MRI [article]

Duc Duy Pham, Gurbandurdy Dovletov, Sebastian Warwas, Stefan Landgraeber, Marcus Jäger, Josef Pauli
2019 arXiv   pre-print
The autoencoder is additionally enhanced by means of hierarchical features extracted by a U-Net module.  ...  We propose a 2D Encoder-Decoder based deep learning architecture for semantic segmentation that incorporates anatomical priors by imitating the encoder component of an autoencoder in latent space.  ...  The adaptation restriction to the U-Net weights ensures generation of hierarchical features in the contracting path, which are suitable for segmentation.  ... 
arXiv:1903.09263v1 fatcat:fncapycepza3rdopuvi4valebq
Showing results 1 — 15 out of 9,870 results