48,264 Hits in 4.9 sec

Discovering Hidden Factors of Variation in Deep Networks [article]

Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, Bruno A. Olshausen
2015 arXiv   pre-print
Furthermore, we demonstrate that these deep networks can extrapolate 'hidden' variation in the supervised signal.  ...  But there has been less exploration of learning the factors of variation apart from the classification signal.  ...  We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPUs used for this research. Bruno Olshausen was supported by NSF grant IIS-1111765.  ...
arXiv:1412.6583v4 fatcat:tw5zd5az75dspkiargqwzjmboe
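
A sketch of the disentangling mechanism this entry describes: the paper separates the supervised (class) signal from the remaining latent factors, and a cross-covariance penalty of the kind it proposes drives the two sets of units toward decorrelation over a batch. The function and variable names below are illustrative, not the authors' code.

    import numpy as np

    def xcov_penalty(y_pred, z):
        """Sum of squared cross-covariances between labels and latent units.

        y_pred : (batch, n_classes) softmax outputs carrying the class signal
        z      : (batch, n_latent)  unsupervised latent factors
        """
        y_c = y_pred - y_pred.mean(axis=0)    # center both over the batch
        z_c = z - z.mean(axis=0)
        cov = y_c.T @ z_c / y_pred.shape[0]   # (n_classes, n_latent) matrix
        return 0.5 * np.sum(cov ** 2)         # zero when labels and latents decorrelate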

Deep Learning of Representations: Looking Forward [article]

Yoshua Bengio
2013 arXiv   pre-print
or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data.  ...  Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts.  ...  Generic Priors for Disentangling Factors of Variation.  ... 
arXiv:1305.0445v2 fatcat:cyfgf5trljfopcjsapicf4ay3q

Causal Phenotype Discovery via Deep Networks

David C Kale, Zhengping Che, Mohammad Taha Bahadori, Wenzhe Li, Yan Liu, Randall Wetzel
2015 AMIA Annual Symposium Proceedings  
We illustrate this idea with a two-stage framework that combines the latent representation learning power of deep neural networks with state-of-the-art tools from causal inference.  ...  The rapid growth of digital health databases has attracted many researchers interested in using modern computational methods to discover and model patterns of health and illness in a research program known  ...  Neural network training. We implemented all neural networks in Theano [13] as variations of a multilayer perceptron with 3-5 hidden layers (of the same size) of sigmoid units.  ...
pmid:26958203 pmcid:PMC4765623 fatcat:hgt64kjk7zbn5jo2fntmnwh5ma
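
The excerpt specifies the architecture concretely: multilayer perceptrons with 3-5 equally sized sigmoid hidden layers, implemented in Theano. A minimal forward-pass sketch in plain NumPy (sizes and initialization are illustrative, not the authors' setup):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def mlp_forward(x, weights, biases):
        """Sigmoid hidden layers followed by a linear output layer."""
        h = x
        for W, b in zip(weights[:-1], biases[:-1]):
            h = sigmoid(h @ W + b)             # equally sized hidden layers
        return h @ weights[-1] + biases[-1]

    # e.g. four hidden layers of 128 units, 64 inputs, 10 outputs
    rng = np.random.default_rng(0)
    sizes = [64, 128, 128, 128, 128, 10]
    weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(b) for b in sizes[1:]]
    logits = mlp_forward(rng.normal(size=(32, 64)), weights, biases)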

An empirical evaluation of deep architectures on problems with many factors of variation

Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, Yoshua Bengio
2007 Proceedings of the 24th international conference on Machine learning - ICML '07  
Here, we present a series of experiments which indicate that these models show promise in solving harder learning problems that exhibit many factors of variation.  ...  These models are compared with well-established algorithms such as Support Vector Machines and single hidden-layer feed-forward neural networks.  ...  Research in incorporating factors of variation into learning procedures has been abundant.  ... 
doi:10.1145/1273496.1273556 dblp:conf/icml/LarochelleECBB07 fatcat:bpa44sjckrefhf6ttg34zmqsqe
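
The comparison the abstract describes, deep models against SVMs and single-hidden-layer networks, can be reproduced in miniature with scikit-learn (assumed here; the dataset is a stand-in, not the paper's factors-of-variation benchmarks):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)   # stand-in benchmark
    models = {
        "SVM (RBF)": SVC(kernel="rbf"),
        "1-hidden-layer net": MLPClassifier(hidden_layer_sizes=(100,), max_iter=500),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=3)
        print(f"{name}: mean accuracy {scores.mean():.3f}")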

Deep Kernel Machines via the Kernel Reparametrization Trick

Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh
2017 International Conference on Learning Representations  
In particular, we construct a hierarchy of increasingly complex kernels that encode individual hidden layers of the network.  ...  In this paper, we develop a novel method for efficiently capturing the behaviour of deep neural networks using kernels.  ...  Guided by the idea of disentangling factors of variation, we choose the sets $\{\xi_{im}^{(l)}\}_m$ for each neuron $i$ at layer $l$ in a supervised fashion.  ...
dblp:conf/iclr/MitrovicST17 fatcat:tkqxwhm375d6hgfsaq4sojjwbm
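
The entry's "hierarchy of increasingly complex kernels" encoding successive hidden layers can be illustrated with the classic arc-cosine kernel recursion of Cho & Saul, a substitute construction rather than this paper's method, where each recursion step plays the role of one hidden layer:

    import numpy as np

    def arc_cosine_step(K):
        """Compose one 'layer' of kernel on top of a Gram matrix K."""
        d = np.sqrt(np.diag(K))
        norm = np.outer(d, d)                   # ||x|| * ||x'||
        cos = np.clip(K / norm, -1.0, 1.0)
        theta = np.arccos(cos)
        return norm / np.pi * (np.sin(theta) + (np.pi - theta) * cos)

    X = np.random.default_rng(0).normal(size=(5, 3))
    K = X @ X.T                  # base (linear) kernel
    for _ in range(3):           # three "hidden layers" of composition
        K = arc_cosine_step(K)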

Opening the Black Box: Discovering and Explaining Hidden Variables in Type 2 Diabetic Patient Modelling

Leila Yousefi, Stephen Swift, Mahir Arzoky, Lucia Saachi, Luca Chiovato, Allan Tucker
2018 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)  
Identifying and understanding groups of patients with similar disease profiles (based on discovered hidden variables) makes it possible to better understand disease progression in different patients while  ...  of different discovered sub-groups.  ...  an enhanced variation of our previous work in [18].  ...
doi:10.1109/bibm.2018.8621484 dblp:conf/bibm/YousefiSASCT18 fatcat:q3k6f4pw6jfk5gumjjkas77iha

Making deep learning models transparent

Lujia Chen, Xinghua Lu
2018 Journal of Medical Artificial Intelligence  
Using deep learning to model the hierarchical structure and function of a cell. Nat Methods 2018;15:290-8.  ...  Conflicts of Interest: The authors have no conflicts of interest to declare.  ...  The DCell captures system structure by inferring the biological information of the hidden representation and studying the mechanisms leading to the outcome of variations in phenotype.  ...
doi:10.21037/jmai.2018.07.01 fatcat:4bts7qef5vgz3bh3mjw6q5jppi

Representation Learning with Autoencoders for Electronic Health Records: A Comparative Study [article]

Najibesadat Sadati, Milad Zafar Nezhad, Ratna Babu Chinnam, Dongxiao Zhu
2019 arXiv   pre-print
Our method uses different deep architectures (stacked sparse autoencoders, deep belief networks, adversarial autoencoders and variational autoencoders) for feature representation in higher-level abstraction  ...  Our focus is to present a comparative study to evaluate the performance of different deep architectures through supervised learning and provide insights into the choice of deep feature representation techniques  ...  We trained different deep networks, which differ in the number of neurons in the hidden layers, for all autoencoder types, and then selected the best-performing autoencoder across all networks.  ...
arXiv:1908.09174v2 fatcat:67w6e435abc7xgfzobid4rweza
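
A compact sketch of the common core behind the architectures this study compares: an autoencoder is a network trained to reproduce its input, and the narrow middle layer becomes the learned patient representation. scikit-learn's MLPRegressor is used as a stand-in here; the study's actual models (sparse, adversarial, variational) add further machinery.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    X = np.random.default_rng(0).random((200, 50))      # stand-in EHR features
    ae = MLPRegressor(hidden_layer_sizes=(32, 8, 32),   # 8-unit bottleneck
                      activation="relu", max_iter=2000)
    ae.fit(X, X)                                        # reconstruct the input

    def encode(X, ae):
        """Forward pass to the bottleneck using the fitted weights."""
        h = np.maximum(X @ ae.coefs_[0] + ae.intercepts_[0], 0.0)
        return np.maximum(h @ ae.coefs_[1] + ae.intercepts_[1], 0.0)

    codes = encode(X, ae)   # (200, 8) low-dimensional representations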

Deep Belief Networks using discriminative features for phone recognition

Abdel-rahman Mohamed, Tara N. Sainath, George Dahl, Bhuvana Ramabhadran, Geoffrey E. Hinton, Michael A. Picheny
2011 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
Deep Belief Networks (DBNs) are multi-layer generative models.  ...  These features can be used to initialize the hidden units of a feed-forward neural network that is then trained to predict the HMM state for the central frame of the window.  ...  If there are many hidden layers in the neural network and many hidden units in each layer, it is easy for the neural network to overfit.  ... 
doi:10.1109/icassp.2011.5947494 dblp:conf/icassp/MohamedSDRHP11 fatcat:56pjmtyzuvdopan4laxo3du3bi
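
The generative pre-training the excerpt refers to stacks restricted Boltzmann machines; one contrastive-divergence (CD-1) update of a single binary RBM layer looks roughly like this (biases omitted for brevity; the dimensions are illustrative of acoustic features, not the paper's exact setup):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0, W, lr=0.01):
        """One CD-1 weight update on a batch of binary visible vectors."""
        h0_p = sigmoid(v0 @ W)                       # infer hidden probabilities
        h0 = (rng.random(h0_p.shape) < h0_p) * 1.0   # sample hidden states
        v1_p = sigmoid(h0 @ W.T)                     # reconstruct visibles
        h1_p = sigmoid(v1_p @ W)                     # re-infer hiddens
        return W + lr * (v0.T @ h0_p - v1_p.T @ h1_p) / v0.shape[0]

    W = rng.normal(0, 0.01, (39, 256))   # e.g. 39 features -> 256 hidden units
    v = (rng.random((64, 39)) < 0.5) * 1.0
    W = cd1_step(v, W)                   # learned W then initializes the DNN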

Generic Feature Learning in Computer Vision

D. Kanishka Nithin, P. Bagavathi Sivakumar
2015 Procedia Computer Science  
Manually, we might never be able to produce the best and most diverse set of features that closely describe all the variations that occur in our data.  ...  All these issues are resolved by learning deep representations.  ...  The highest abstraction will be invariant to geometric factors of variation.  ...
doi:10.1016/j.procs.2015.08.054 fatcat:wsinii7ionfvvomuvr3dhmalp4

Stochastic Variational Deep Kernel Learning [article]

Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, Eric P. Xing
2016 arXiv   pre-print
Specifically, we apply additive base kernels to subsets of output features from deep neural architectures, and jointly learn the parameters of the base kernels and deep network through a Gaussian process  ...  Deep kernel learning combines the non-parametric flexibility of kernel methods with the inductive biases of deep learning architectures.  ...  The result is a deep probabilistic neural network, with a hidden layer composed of additive sets of infinite basis functions, linearly mixed to produce correlated output variables.  ... 
arXiv:1611.00336v2 fatcat:tdi46dwdejd3teezh3gkoqdjhm
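
The additive structure the abstract describes, base kernels applied to subsets of the network's output features, reduces, for the kernel alone, to something like the following (an illustration only; the full method also learns the network and Gaussian process jointly):

    import numpy as np

    def rbf(A, B, lengthscale=1.0):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * sq / lengthscale ** 2)

    def additive_deep_kernel(F, n_subsets=4):
        """Sum of RBF base kernels over equal slices of deep features F (n, d)."""
        slices = np.array_split(np.arange(F.shape[1]), n_subsets)
        return sum(rbf(F[:, s], F[:, s]) for s in slices)

    F = np.random.default_rng(0).normal(size=(10, 16))  # pretend network outputs
    K = additive_deep_kernel(F)                         # (10, 10) Gram matrix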

Prognostic Gene Discovery in Glioblastoma Patients using Deep Learning

Kelvin Wong, Robert Rostomily, Stephen Wong
2019 Cancers  
Univariate and multivariate Cox survival models are used to assess the predictive value of deep-learned features in addition to clinical, mutation, and methylation factors.  ...  This study aims to discover genes with prognostic potential for glioblastoma (GBM) patients' survival in a patient group that has gone through standard-of-care treatments, including surgeries and chemotherapies  ...  Acknowledgments: The results published here are in whole or part based upon data generated by the TCGA Research Network: http://cancergenome.nih.gov/.  ...
doi:10.3390/cancers11010053 pmid:30626092 pmcid:PMC6356839 fatcat:mulpm7waanctvm337fxerllsoi
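
The downstream assessment step the abstract names, Cox survival models over deep-learned features plus clinical factors, might look like this with the lifelines package (column names and data are illustrative, not the study's):

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "deep_feature_1": rng.normal(size=100),           # deep-learned feature
        "age": rng.integers(40, 80, size=100),            # clinical factor
        "survival_months": rng.exponential(20, size=100),
        "event": rng.integers(0, 2, size=100),            # 1 = death observed
    })
    cph = CoxPHFitter()
    cph.fit(df, duration_col="survival_months", event_col="event")
    cph.print_summary()   # hazard ratios gauge each feature's prognostic value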

Energy-based Models for Video Anomaly Detection [article]

Hung Vu, Dinh Phung, Tu Dinh Nguyen, Anthony Trevors, Svetha Venkatesh
2017 arXiv   pre-print
and network intrusion detection.  ...  Automated detection of abnormalities in data has been an active research area in recent years because of its diverse applications in practice, including video surveillance, industrial damage detection  ...  For further extension, we aim to develop a deep abnormality detection system that is a deep generative network specialising in the problem of anomaly detection, instead of adapting the popular deep networks  ...
arXiv:1708.05211v1 fatcat:fkb2m2vdx5fs5grz3mbbmlyctu
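
A sketch of energy-based anomaly scoring in the spirit of this entry: under a restricted Boltzmann machine, inputs with high free energy are poorly explained by the model and can be flagged as anomalies. The weights below are random placeholders; in practice the model is first trained on normal data.

    import numpy as np

    def free_energy(v, W, b_v, b_h):
        """RBM free energy F(v) = -v.b_v - sum_j softplus((v @ W + b_h)_j)."""
        return -(v @ b_v) - np.logaddexp(0.0, v @ W + b_h).sum(axis=1)

    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, (100, 64))
    b_v, b_h = np.zeros(100), np.zeros(64)
    frames = rng.random((16, 100))                   # e.g. flattened video patches
    scores = free_energy(frames, W, b_v, b_h)
    anomalous = scores > np.percentile(scores, 95)   # threshold on the energy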

Predictive Analysis of Water Quality Parameters using Deep Learning

Archana Solanki, Himanshu Agrawal, Kanchan Khare
2015 International Journal of Computer Applications  
The comparison of results shows that robustness can be achieved by the denoising autoencoder and deep belief network, which also successfully handle the variability in the data.  ...  Unfortunately, these important resources are being polluted and the quality of water is being influenced by numerous factors.  ...  intermediate layers of the MLP are fed as input to the autoencoders.  ...  Deep belief networks are deep neural networks with many hidden layers which are  ...
doi:10.5120/ijca2015905874 fatcat:fn5zncuy4vac3gdfia6ipzfy6e
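
The denoising-autoencoder idea the excerpt credits with robustness can be shown in a few lines: corrupt the input with noise and train the network to recover the clean signal. MLPRegressor is a stand-in here; the paper's exact architecture is not reproduced.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.random((300, 12))                     # e.g. 12 water-quality parameters
    X_noisy = X + rng.normal(0, 0.1, X.shape)     # Gaussian corruption

    dae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000)
    dae.fit(X_noisy, X)                           # learn to undo the corruption
    denoised = dae.predict(X + rng.normal(0, 0.1, X.shape))
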
Showing results 1 — 15 out of 48,264 results