1,104 Hits in 2.6 sec

Signal reconstruction by means of Embedding, Clustering and AutoEncoder Ensembles

Corrado Mio, Gabriele Gianini
2019 2019 IEEE Symposium on Computers and Communications (ISCC)  
We study the denoising and reconstruction of corrupted signals by means of AutoEncoder ensembles.  ...  The processing pipeline using Local Linear Embedding, k-means, then k Convolutional Denoising AutoEncoders reduces the reconstruction error by 35% w.r.t. the baseline approach.  ...  The work was partially funded by the EU H2020 Research Programme, within the projects Toreador (Grant Agreement No. 688797), Threat-Arrest (Grant Agreement No. 786890) and Concordia (Grant Agreement No  ... 
doi:10.1109/iscc47284.2019.8969655 dblp:conf/iscc/MioG19 fatcat:utquw5pleffnxnze5dn7vxpolu
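The pipeline this entry describes (embed, cluster, then train one denoiser per cluster) can be sketched in a toy form. This is an illustrative assumption-laden stand-in, not the paper's method: the Local Linear Embedding step is skipped, k-means is hand-rolled for determinism, and the convolutional denoising autoencoders are replaced by per-cluster PCA reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k=2, iters=20):
    # Farthest-point initialisation keeps this toy example deterministic.
    centers = [X[0]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(iters):  # Lloyd iterations
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

def fit_denoiser(X_clean, n_components=2):
    # "Denoiser" here is plain PCA reconstruction: project the noisy input
    # onto the top principal directions of this cluster's clean signals.
    mu = X_clean.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_clean - mu, full_matrices=False)
    V = Vt[:n_components].T
    return lambda Xn: (Xn - mu) @ V @ V.T + mu

# Synthetic "signals": two families of sinusoids at different frequencies,
# offset so that clustering is easy.
t = np.linspace(0, 1, 64)
clean = []
for f, off in [(2, 3.0), (7, -3.0)]:
    for _ in range(50):
        a, b = rng.standard_normal(2)
        clean.append(off + a * np.sin(2 * np.pi * f * t) + b * np.cos(2 * np.pi * f * t))
clean = np.array(clean)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

labels = kmeans(noisy, k=2)
denoisers = {j: fit_denoiser(clean[labels == j]) for j in range(2)}
recon = np.vstack([denoisers[labels[i]](noisy[i]) for i in range(len(noisy))])

baseline = fit_denoiser(clean)(noisy)  # single global denoiser as baseline
err_cluster = np.mean((recon - clean) ** 2)
err_global = np.mean((baseline - clean) ** 2)
print(f"per-cluster MSE {err_cluster:.4f} vs global {err_global:.4f}")
```

On this toy data the per-cluster denoisers beat the single global one, which mirrors the paper's motivation for clustering before reconstruction.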

Autoencoder-based cluster ensembles for single-cell RNA-seq data analysis

Thomas A. Geddes, Taiyun Kim, Lihao Nan, James G. Burchfield, Jean Y. H. Yang, Dacheng Tao, Pengyi Yang
2019 BMC Bioinformatics  
an autoencoder artificial neural network, and finally apply ensemble clustering across all encoded datasets to generate clusters of cells.  ...  clusters when applied with both the standard k-means clustering algorithm and a state-of-the-art kernel-based clustering algorithm (SIMLR) designed specifically for scRNA-seq data.  ...  Acknowledgements The authors thank their colleagues at the School of Mathematics and Statistics; and School of Life and Environmental Sciences for informative discussion and valuable feedback.  ...
doi:10.1186/s12859-019-3179-5 pmid:31870278 fatcat:djtk24pjg5ebvjospqu2u43hda

Autoencoder-based cluster ensembles for single-cell RNA-seq data analysis [article]

Thomas A Geddes, Taiyun Kim, Lihao Nan, James G Burchfield, Jean Yee Hwa Yang, Dacheng Tao, Pengyi Yang
2019 bioRxiv   pre-print
using an autoencoder artificial neural network, and finally apply ensemble clustering across all encoded datasets for generating clusters of cells.  ...  clusters when applied with both the standard $k$-means clustering algorithm and a state-of-the-art kernel-based clustering algorithm (SIMLR) designed specifically for scRNA-seq data.  ...  Acknowledgements The authors thank their colleagues at the School of Mathematics and Statistics; and School of Life and Environmental Sciences for informative discussion and valuable feedback.  ... 
doi:10.1101/773903 fatcat:iuee56svofa5lgl32sbrpym5n4

Deep Double-Side Learning Ensemble Model for Few-Shot Parkinson Speech Recognition [article]

Yongming Li, Lang Zhou, Lingyun Qin, Yuwei Zeng, Yuchuan Liu, Yan Lei, Pin Wang, Fan Li
2020 arXiv   pre-print
As to speech sample reconstruction, a deep sample learning algorithm is designed in this paper based on iterative mean clustering to conduct samples transformation, so as to obtain new high-level deep  ...  As to feature reconstruction, an embedded deep stacked group sparse auto-encoder is designed in this paper to conduct nonlinear feature transformation, so as to acquire new high-level deep features, and  ...  , a deep double-side learning ensemble model is constructed by combining embedded deep stacked group sparse autoencoder and deep sample learning algorithm, which is helpful to improve the accuracy of speech  ... 
arXiv:2006.11593v1 fatcat:qn33ijpmincjtelkvwqlg2jjvu

Interpretable embeddings from molecular simulations using Gaussian mixture variational autoencoders

Yasemin Bozkurt Varolgunes, Tristan Bereau, Joseph F. Rudzinski
2020 Machine Learning: Science and Technology  
We illustrate our approach on two toy models, alanine dipeptide, and a challenging disordered peptide ensemble, demonstrating the enhanced clustering effect of the GMVAE prior compared to standard VAEs  ...  The GMVAE performs dimensionality reduction and clustering within a single unified framework, and is capable of identifying the inherent dimensionality of the input data, in terms of the number of Gaussians  ...  Acknowledgments The authors thank Kiran H Kanekal and Omar Valsson for critical reading of the manuscript.  ... 
doi:10.1088/2632-2153/ab80b7 fatcat:cjiljqv75ff5bnrthunr77fbeu

Regularized Deep Clustering Method for Fault Trend Analysis

Yongzhi Qu, Yue Zhang, David He, Miao He, Dude Zhou
2019 Proceedings of the Annual Conference of the Prognostics and Health Management Society, PHM  
In previous works, it is shown that some embedding methods and unsupervised deep learning methods have the ability to extract fault features from raw signals directly, such as PCA and deep autoencoder.  ...  In this paper, a regularized deep clustering algorithm is proposed to guide the optimization process of feature extraction which combines embedding method and semi-guided learning.  ...  process of embedded layer parameters, which is initialized by stack autoencoder to form the embedded network; the second part is clustering optimization by using the self-learning method.  ... 
doi:10.36001/phmconf.2019.v11i1.813 fatcat:6f4dmm5dorcwhg65pnieah34ya

Semantic Anomaly Detection in Medical Time Series [chapter]

Sven Festag, Cord Spreckelsen
2021 Studies in Health Technology and Informatics  
The best performing system reached an adjusted Rand index of 0.11 on real-world ECG signals labelled by medical experts.  ...  The cluster ensemble method called cluster-based similarity partitioning proved itself well suited for this task when used in combination with density-based spatial clustering of applications with noise  ...  SF acknowledges the valuable feedback by members of the Chair of Computer Science 5, RWTH Aachen University. The authors state that they have no conflict of interests.  ... 
doi:10.3233/shti210059 pmid:34042884 fatcat:msgt2leys5awhcrrmg5c4o73he

Interpretable Embeddings From Molecular Simulations Using Gaussian Mixture Variational Autoencoders [article]

Yasemin Bozkurt Varolgunes, Tristan Bereau, Joseph F. Rudzinski
2019 arXiv   pre-print
We illustrate our approach on two toy models, alanine dipeptide, and a challenging disordered peptide ensemble, demonstrating the enhanced clustering effect of the GMVAE prior compared to standard VAEs  ...  The GMVAE performs dimensionality reduction and clustering within a single unified framework, and is capable of identifying the inherent dimensionality of the input data, in terms of the number of Gaussians  ...  TB acknowledges financial support by the Emmy Noether program of the Deutsche Forschungsgemeinschaft (DFG) and the long program Machine Learning for Physics and the Physics of Learning at the Institute  ... 
arXiv:1912.12175v1 fatcat:uzy3zhhafbfwrhs5skr4o57uxy

Neural Approaches to Short-Time Load Forecasting in Power Systems—A Comparative Study

Stanislaw Osowski, Robert Szmurlo, Krzysztof Siwek, Tomasz Ciechulski
2022 Energies  
The important point in getting high-quality results is the composition of many solutions in the common ensemble and their fusion to create the final forecast of time series.  ...  They include such networks as multilayer perceptron, radial basis function, support vector machine, self-organizing Kohonen networks, deep autoencoder, and recurrent deep LSTM structures.  ...  reconstruction of the signals.  ...
doi:10.3390/en15093265 fatcat:pz4zbvgrfrctxg7kec4l43vlq4

Challenges for Unsupervised Anomaly Detection in Particle Physics [article]

Katherine Fraser, Samuel Homiller, Rashmish K. Mishra, Bryan Ostdiek, Matthew D. Schwartz
2021 arXiv   pre-print
One way to define a score is to use autoencoders, which rely on the ability to reconstruct certain types of data (background) but not others (signals).  ...  In exploring the networks, we uncover a connection between the latent space of a variational autoencoder trained using mean-squared-error and the optimal transport distances within the dataset.  ...  Acknowledgments We thank Jack Collins, Philip Harris, and Sang Eon Park for useful discussions and comments on a previous version of this manuscript. This work is supported by the National  ... 
arXiv:2110.06948v1 fatcat:eclesqo5z5agxczxp7cmqct7gu

Unsupervised Abnormality Detection Using Heterogeneous Autonomous Systems [article]

Sayeed Shafayet Chowdhury, Kazi Mejbaul Islam, Rouhan Noor
2020 arXiv   pre-print
Moreover, the IMU data are used in autoencoder to predict abnormality. Finally, the results from these two algorithms are ensembled to estimate the final degree of abnormality.  ...  But the nature and degree of abnormality may vary depending upon the actual environment and adversary.  ...  And yi and ŷi are ground truth and reconstructed output for IMU/mag samples. And training loss is defined by the linear addition of the two losses, L1+L2.  ... 
arXiv:2006.03733v2 fatcat:g3nn4pavijcwjdrkuiwxu7pksa

A Comprehensive Survey on Community Detection with Deep Learning [article]

Xing Su, Shan Xue, Fanzhen Liu, Jia Wu, Jian Yang, Chuan Zhou, Wenbin Hu, Cecile Paris, Surya Nepal, Di Jin, Quan Z. Sheng, Philip S. Yu
2021 arXiv   pre-print
Despite the classical spectral clustering and statistical inference methods, we notice a significant development of deep learning techniques for community detection in recent years with their advantages  ...  The main category, i.e., deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks and autoencoders.  ...  CDMEC (Stacked Autoencoder-Based Community Detection Method via Ensemble Clustering) [110], an ensemble clustering framework; CNN, Convolutional Neural  ...
arXiv:2105.12584v2 fatcat:matipshxnzcdloygrcrwx2sxr4

Time-Window Group-Correlation Support vs. Individual Features: A Detection of Abnormal Users [article]

Lun-Pin Yuan, Euijin Choo, Ting Yu, Issa Khalil, Sencun Zhu
2020 arXiv   pre-print
ACOBE leverages a novel behavior representation and an ensemble of deep autoencoders and produces an ordered investigation list.  ...  Most existing approaches typically build models by reconstructing single-day and individual-user behaviors.  ...  Alam et al. proposed AutoPerf, an ensemble of autoencoders accompanied by a K-means clustering algorithm [25]. Mirsky et al. proposed Kitsune, an ensemble of lightweight autoencoders [5].  ...
arXiv:2012.13971v1 fatcat:7ajtgcengvhidhkll6voi53ow4

Model selection for deep audio source separation via clustering analysis [article]

Alisa Liu, Prem Seetharaman, Bryan Pardo
2020 arXiv   pre-print
Results show our confidence-based ensemble significantly outperforms the random ensemble over general mixtures and approaches oracle performance for music mixtures.  ...  We compare our confidence-based ensemble approach to using individual models with no selection, to an oracle that always selects the best model and to a random model selector.  ...  [5] produces a selection method for speech enhancement by training an ensemble of autoencoders and selecting the model with the lowest reconstruction error.  ... 
arXiv:1910.12626v2 fatcat:6mjthmobabeefjmcze2uw6tnhi
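The selection idea this snippet attributes to [5] — run the input through several candidate models and keep the one with the lowest reconstruction error — is simple enough to sketch. Everything here is an illustrative assumption: real systems use trained autoencoders on audio, while each "model" below is a fixed low-rank PCA projector fit on one toy domain.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_model(X_train, n_components=4):
    # A stand-in "autoencoder": reconstruct via the top principal
    # directions of the training domain.
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    V = Vt[:n_components].T
    return lambda X: (X - mu) @ V @ V.T + mu

def select_model(models, X):
    # Pick the model that reconstructs the input with the lowest MSE.
    errors = [np.mean((m(X) - X) ** 2) for m in models]
    return int(np.argmin(errors)), errors

# Two toy "domains": smooth sinusoids vs white noise, one model per domain.
t = np.linspace(0, 1, 32)
smooth = np.array([np.sin(2 * np.pi * (1 + i % 3) * t) for i in range(60)])
rough = rng.standard_normal((60, 32))

models = [make_model(smooth), make_model(rough)]
best, errs = select_model(models, smooth[:10])
print(best)  # the smooth-trained model wins on smooth inputs
```

The same selection loop applies unchanged when the candidates are trained neural networks: only `make_model` changes.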

Unsupervised Abnormal Sensor Signal Detection with Channelwise Reconstruction Errors

Mingu Kwak, Seoung Bum Kim
2021 IEEE Access  
The reconstruction errors of abnormal and normal channels are shown to be different; therefore, it can be considered as an appropriate feature for anomaly detection.  ...  However, they lose valuable channel information inherent in the reconstruction errors by merely averaging the errors for both the channel and time, then consider the average value as an anomaly score.  ...  By coping with electrocardiography (ECG) data, an ensemble of autoencoders joined by the sparse connections of recurrent neural networks (RNNs) was developed [36] .  ... 
doi:10.1109/access.2021.3064563 fatcat:wbovlgrnjvf5fpvodillyiwlxq
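The channelwise idea in this entry — keep a per-channel reconstruction-error vector instead of averaging the errors over channels and time into one scalar — can be sketched as follows. The "reconstruction" here is just a moving-average smoother standing in for a trained autoencoder; the data and the corrupted-channel setup are invented for illustration.

```python
import numpy as np

def smooth(x, w=5):
    # Per-channel moving average, a crude stand-in for an autoencoder's
    # reconstruction of multichannel sensor signals.
    kernel = np.ones(w) / w
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 1, x)

rng = np.random.default_rng(2)
t = np.linspace(0, 4 * np.pi, 200)
signal = np.stack([np.sin(t), np.cos(t), np.sin(2 * t)])  # 3 channels x 200 samples
signal[1, 90:110] += 3.0 * rng.standard_normal(20)        # noise burst on channel 1

recon = smooth(signal)
err = (signal - recon) ** 2

per_channel = err.mean(axis=1)      # channelwise scores: anomaly stays localised
scalar = err.mean()                 # single averaged score: channel info lost
print(int(np.argmax(per_channel)))  # -> 1, the corrupted channel
```

Averaging to `scalar` still flags that something is wrong, but only the `per_channel` vector identifies which sensor misbehaved — the distinction the paper builds on.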
Showing results 1 — 15 out of 1,104 results