83 Hits in 5.0 sec

Learning Sparse Latent Representations with the Deep Copula Information Bottleneck [article]

Aleksander Wieczorek and Mario Wieser and Damian Murezzan and Volker Roth
2018 arXiv   pre-print
Deep latent variable models are powerful tools for representation learning.  ...  To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space.  ...  ACKNOWLEDGEMENTS This work was partially supported by the Swiss National Science Foundation under grants CR32I2 159682 and 51MRP0 158328 (SystemsX.ch).  ... 
arXiv:1804.06216v2 fatcat:43trwikyq5fnbgc7yhw5uw4pxi
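
As a sketch of the copula transformation this abstract mentions: a rank-based probability-integral (normal-scores) transform gives each feature a standard Gaussian marginal while preserving the dependence structure between features, which is the invariance the copula construction restores. The function below is our illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import norm, rankdata

def copula_transform(x):
    """Normal-scores transform of each column of x (n_samples, n_features).

    Ranks map every marginal onto (0, 1); the Gaussian quantile function
    then yields standard-normal marginals while leaving the copula
    (the dependence structure between features) untouched.
    """
    n = x.shape[0]
    u = rankdata(x, axis=0) / (n + 1)  # empirical CDF values in (0, 1)
    return norm.ppf(u)                 # Gaussian quantile (probit) function
```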

On the Difference between the Information Bottleneck and the Deep Information Bottleneck

Aleksander Wieczorek, Volker Roth
2020 Entropy  
Combining the information bottleneck model with deep learning by replacing mutual information terms with deep neural nets has proven successful in areas ranging from generative modelling to interpreting  ...  The two assumed properties of the data, X and Y, and their latent representation T, take the form of two Markov chains T - X - Y and X - T - Y.  ...  The Gaussian information bottleneck has been further extended to sparse compression and to meta-Gaussian distributions (multivariate distributions with a Gaussian copula and arbitrary marginal densities  ... 
doi:10.3390/e22020131 pmid:33285906 pmcid:PMC7516540 fatcat:nn54fza57fawnbnig6sg2q4dnm
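
For reference, the information bottleneck objective both of the Wieczorek-Roth papers build on, together with the two Markov-chain assumptions the abstract refers to (standard notation, not a quote from the paper):

```latex
% Compression is assumed under the chain T - X - Y; prediction quality is
% judged against X - T - Y. The representation T is found by solving
\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)
```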

Advances in Variational Inference

Cheng Zhang, Judith Bütepage, Hedvig Kjellström, Stephan Mandt
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
with atypical divergences, and (d) amortized VI, which implements the inference over local latent variables with inference networks.  ...  Many modern unsupervised or semi-supervised machine learning algorithms rely on Bayesian probabilistic models. These models are usually intractable and thus require approximate inference.  ...  the manuscript.  ... 
doi:10.1109/tpami.2018.2889774 fatcat:xffyfbw5w5c4dklgs3uvwynp3u
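
The inference networks mentioned in (d) amortize the per-datapoint optimization of the evidence lower bound (ELBO); in standard notation (our summary, not a formula quoted from the survey):

```latex
% Amortized VI: a single network with parameters phi maps each x to
% q_phi(z | x); theta and phi jointly maximize the ELBO, a lower bound
% on the marginal log-likelihood:
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\!\bigl[\log p_\theta(x \mid z)\bigr]
  \;-\; \mathrm{KL}\!\bigl(q_\phi(z \mid x) \,\|\, p(z)\bigr)
```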

Advances in Variational Inference [article]

Cheng Zhang, Judith Bütepage, Hedvig Kjellström, Stephan Mandt
2018 arXiv   pre-print
with atypical divergences, and (d) amortized VI, which implements the inference over local latent variables with inference networks.  ...  Many modern unsupervised or semi-supervised machine learning algorithms rely on Bayesian probabilistic models. These models are usually intractable and thus require approximate inference.  ...  the manuscript.  ... 
arXiv:1711.05597v3 fatcat:st53lmyx5ndpvezmw6vhw4fnhy

Content-Based Image Retrieval and Feature Extraction: A Comprehensive Review

Afshan Latif, Aqsa Rasheed, Umer Sajid, Jameel Ahmed, Nouman Ali, Naeem Iqbal Ratyal, Bushra Zafar, Saadat Hanif Dar, Muhammad Sajid, Tehmina Khalil
2019 Mathematical Problems in Engineering  
We analyzed the main aspects of various image retrieval and image representation models from low-level feature extraction to recent semantic deep-learning approaches.  ...  In this paper, we aim to present a comprehensive review of the recent developments in the area of CBIR and image representation.  ...  Based on this approach, the supervised deep hashing technique constructs a hash function from a latent layer in the deep neural network, and the binary code is learned from the objective functions that  ... 
doi:10.1155/2019/9658350 fatcat:dncplhkm6vcrvfh3q7ifxkrdkq
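
The latent-layer hashing recipe the snippet describes reduces, in its simplest form, to thresholding latent activations into bits and ranking by Hamming distance. A minimal sketch (our illustration of the general idea, not the reviewed method's exact objective):

```python
import numpy as np

def binary_codes(latent):
    """Threshold latent-layer activations (n_images, d) into d-bit codes."""
    return (latent > 0).astype(np.uint8)

def retrieve(query_latent, db_codes, top_k=10):
    """Rank database images by Hamming distance to the query's code."""
    q = binary_codes(query_latent[None, :])[0]
    dists = np.count_nonzero(db_codes != q, axis=1)  # Hamming distances
    return np.argsort(dists)[:top_k]
```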

The Emerging Trends of Multi-Label Learning [article]

Weiwei Liu, Xiaobo Shen, Haobo Wang, Ivor W. Tsang
2020 arXiv   pre-print
Besides these, there are tremendous efforts on how to harvest the strong learning capability of deep learning to better capture the label dependencies in multi-label learning, which is the key for deep  ...  Exabytes of data are generated daily by humans, leading to the growing need for new efforts in dealing with the grand challenges for multi-label learning brought by big data.  ...  XML-CNN [36] applies convolutional neural network (CNN) and dynamic pooling to learn the text representation, and a hidden bottleneck layer much smaller than the output layer is used to achieve computational  ... 
arXiv:2011.11197v2 fatcat:hu6w4vgnwbcqrinrdfytmmjbjm
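
Why a "hidden bottleneck layer much smaller than the output layer" saves computation is simple parameter arithmetic; the sizes below are hypothetical, chosen to resemble an extreme multi-label setting:

```python
# Classifier-head weight counts with and without a bottleneck layer
# (hypothetical sizes: 512-d pooled text features, 670,000 labels,
#  256-d bottleneck layer).
d_feat, n_labels, d_mid = 512, 670_000, 256

direct = d_feat * n_labels                        # 343,040,000 weights
bottlenecked = d_feat * d_mid + d_mid * n_labels  # 171,651,072 weights

print(direct, bottlenecked)  # the bottleneck roughly halves the head
```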

From Dependence to Causation [article]

David Lopez-Paz
2016 arXiv   pre-print
This thesis advances the art of causal inference in three different ways. First, we develop a framework for the study of statistical dependence based on copulas and random features.  ...  Machine learning is the science of discovering statistical dependencies in data, and the use of those dependencies to perform predictions.  ...  and a linear amount of data when learned using deep representations.  ... 
arXiv:1607.03300v1 fatcat:img5m23n5ncx5mfejgqkjft2ua
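
The copulas-plus-random-features framework for studying dependence can be sketched as follows: copula-transform each sample, lift it through random sinusoidal features, and report the largest canonical correlation between the two feature sets. This is a compact illustration under our own hyperparameter choices, not the thesis's reference implementation.

```python
import numpy as np
from scipy.stats import rankdata

def dependence(x, y, k=8, s=0.5, seed=0):
    """Copula + random-feature dependence measure for two 1-D samples (sketch)."""
    rng = np.random.default_rng(seed)

    def lift(v):
        u = rankdata(v) / (len(v) + 1.0)       # empirical copula transform
        w = rng.normal(0.0, s, (1, k))         # random projection directions
        b = rng.uniform(0.0, 2 * np.pi, k)     # random phases
        return np.sin(u[:, None] * w + b)      # sinusoidal random features

    fx, fy = lift(x), lift(y)
    qx, _ = np.linalg.qr(fx - fx.mean(0))      # orthonormal bases for CCA
    qy, _ = np.linalg.qr(fy - fy.mean(0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]  # top canonical corr.
```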

TwiSE at SemEval-2016 Task 4: Twitter Sentiment Classification

Georgios Balikas, Massih-Reza Amini
2016 Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)  
Without his support and continuous encouragement several of the results presented in this thesis would not have been obtained.  ...  This thesis is the result of efforts of a ten-year university student period. During this endeavor I was supported by several people, whom I would like to deeply thank.  ...  Text Representations using deep neural networks The skip-gram and CBOW models of the previous section learn word embeddings using a prediction task that relies on how words co-occur with their contexts  ... 
doi:10.18653/v1/s16-1010 dblp:conf/semeval/BalikasA16 fatcat:w7o56n5ny5hkjgtnqghp2sdeua
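
A minimal skip-gram run matching the snippet's description; the parameter names below follow gensim >= 4.0, and the two-sentence corpus is a toy stand-in for the tweets used in the paper:

```python
from gensim.models import Word2Vec

sentences = [["great", "movie", "loved", "it"],
             ["terrible", "movie", "hated", "it"]]

# sg=1 selects skip-gram; sg=0 would give CBOW. Both learn embeddings
# from a prediction task over word-context co-occurrences.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)
vec = model.wv["movie"]  # 50-dimensional learned embedding
```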

An Overview of Bayesian Methods for Neural Spike Train Analysis

Zhe Chen
2013 Computational Intelligence and Neuroscience  
On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation.  ...  With the advancement of multielectrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity.  ...  Acknowledgments The author was supported by an Early Career Award from the Mathematical Biosciences Institute, Ohio State University.  ... 
doi:10.1155/2013/251905 pmid:24348527 pmcid:PMC3855941 fatcat:nkst6mt3sfcqheuxheda3wq4wq
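
A representative latent state-space model for ensemble spike trains, of the kind the review's approximate Bayesian inference techniques target (our generic example, not a specific model from the paper):

```latex
% Linear-Gaussian latent dynamics driving Poisson spike counts of neuron c:
x_t = A x_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, Q),
\qquad
y_{c,t} \mid x_t \sim \mathrm{Poisson}\!\bigl(\exp(w_c^\top x_t + d_c)\bigr)
```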

A Class of Algorithms for General Instrumental Variable Models [article]

Niki Kilbertus, Matt J. Kusner, Ricardo Silva
2020 arXiv   pre-print
Little exists in terms of bounding methods that can deal with the most general case, where the treatment itself can be continuous.  ...  Moreover, bounding methods generally do not allow for a continuum of assumptions on the shape of the causal effect that can smoothly trade off stronger background knowledge for more informative bounds.  ...  MK and RS acknowledge support from The Alan Turing Institute under EPSRC grant EP/N510129/1.  ... 
arXiv:2006.06366v3 fatcat:w4lbxu574zbv5ncsco3ptr5bme
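
The general instrumental-variable setting such bounds are derived in, written in standard structural form (our notation):

```latex
% Instrument Z, (possibly continuous) treatment X, outcome Y, unobserved
% confounder U; Z is independent of U and affects Y only through X:
Z \perp\!\!\!\perp U, \qquad X = g(Z, U), \qquad Y = f(X, U)
```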

Intelligence, physics and information – the tradeoff between accuracy and simplicity in machine learning [article]

Tailin Wu
2020 arXiv   pre-print
In the information bottleneck, we theoretically show that these phase transitions are predictable and reveal structure in the relationships between the data, the model, the learned representation and the  ...  Secondly, for representation learning, when can we learn a good representation, and how does learning depend on the structure of the dataset?  ...  We use 4 classes since it is simpler than the full 10 classes, but still potentially possesses phase transitions.  ... 
arXiv:2001.03780v2 fatcat:piduzlhoafcjhhsgthulbbhtke

Learning Invariant Representations for Deep Latent Variable Models

Mario Wieser
2020
This is a human-readable summary of (and not a substitute for) the license.  ...  Structured Representations The last experiment illustrates the difference between the learned representations of the latent spaces of the Deep Information Bottleneck models with and without the copula  ...  To address this limitation, we extend the deep information bottleneck with a copula construction.  ... 
doi:10.5451/unibas-ep79859 fatcat:txprg5oyifee3d7pwolgwgcika

Privacy-Preserving High-dimensional Data Collection with Federated Generative Autoencoder

Xue Jiang, Xuebing Zhou, Jens Grossklags
2021 Proceedings on Privacy Enhancing Technologies  
With a local privacy guarantee ε = 8, the machine learning models trained with the synthetic data generated by the baseline algorithm cause an accuracy loss of 10%-30%, whereas the accuracy loss is significantly  ...  With the combination of a generative autoencoder, federated learning, and differential privacy, our framework is capable of privately learning the statistical distributions of local data and generating  ...  We thank the anonymous reviewers for their constructive comments for improving this paper. In particular, we thank our Shepherd, Thomas Humphries, for his valuable suggestions during the revision.  ... 
doi:10.2478/popets-2022-0024 fatcat:z55qkdtnc5dmxp3hatsmmkolwe
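
For scale, the textbook ε-DP Laplace mechanism shows why a local budget of ε = 8 leaves utility largely intact; this is a generic illustration, not the paper's federated autoencoder scheme:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Release value + Laplace(sensitivity / epsilon) noise (epsilon-DP)."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

# One bounded attribute in [0, 1] (sensitivity 1) at the paper's budget:
noisy = laplace_mechanism(0.42, sensitivity=1.0, epsilon=8.0)  # noise scale 0.125
```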

Measuring Dependence with Matrix-based Entropy Functional [article]

Shujian Yu, Francesco Alesiani, Xi Yu, Robert Jenssen, Jose C. Principe
2021 arXiv   pre-print
In this work, we summarize and generalize the main idea of existing information-theoretic dependence measures into a higher-level perspective by Shearer's inequality.  ...  We also show the impact of our measures in four different machine learning problems, namely the gene regulatory network inference, the robust machine learning under covariate shift and non-Gaussian noises  ...  When parameterizing the IB objective with a DNN, T refers to the latent representation of one hidden layer.  ... 
arXiv:2101.10160v1 fatcat:tmhsnfc7o5bdfpiitme3lyvkzq
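
The matrix-based entropy functional evaluates a Rényi α-entropy on the eigenvalues of a unit-trace Gram matrix, so no density estimate is needed. A sketch with an RBF kernel; the bandwidth choice is ours:

```python
import numpy as np

def matrix_entropy(x, alpha=1.01, sigma=1.0):
    """Matrix-based Renyi alpha-entropy of samples x (n, d), RBF kernel (sketch)."""
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    gram = np.exp(-sq / (2.0 * sigma ** 2))    # K_ij = kappa(x_i, x_j)
    a = gram / np.trace(gram)                  # unit-trace normalization
    eig = np.linalg.eigvalsh(a)
    eig = eig[eig > 1e-12]                     # drop numerical zeros
    return np.log2(np.sum(eig ** alpha)) / (1.0 - alpha)
```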

Distinguishing Cause from Effect Using Quantiles: Bivariate Quantile Causal Discovery [article]

Natasa Tagasovska, Valérie Chavez-Demoulin, Thibault Vatter
2020 arXiv   pre-print
Through the minimum description length principle, we link the postulate of independence between the generating mechanisms of the cause and of the effect given the cause to quantile regression.  ...  This study shows that bQCD is robust across different implementations of the method (i.e., the quantile regression), computationally efficient, and compares favorably to state-of-the-art methods.  ...  Learning sparse latent representations with the deep copula information bottleneck. ICLR, 2018. Yu, H. and Dauwels, J. Modeling Spatio-Temporal Extreme Events Using Graphical Models.  ... 
arXiv:1801.10579v4 fatcat:lnsyw6w7jvatvjn7rvc5xfphoe
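
The quantile regressions bQCD scores are fit under the pinball loss; for reference (standard definition, not quoted from the paper):

```latex
% Pinball (quantile) loss at level tau, minimized in expectation by the
% tau-th conditional quantile of the target given the covariates:
\rho_\tau(u) = u \,\bigl( \tau - \mathbf{1}\{u < 0\} \bigr)
```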
Showing results 1 — 15 out of 83 results