7,861 Hits in 8.7 sec

Squared-loss Mutual Information Regularization: A Novel Information-theoretic Approach to Semi-supervised Learning

Gang Niu, Wittawat Jitkrittum, Bo Dai, Hirotaka Hachiya, Masashi Sugiyama
2013 International Conference on Machine Learning  
We propose squared-loss mutual information regularization (SMIR) for multi-class probabilistic classification, following the information maximization principle (IMP).  ...  It offers all of the following four abilities to semi-supervised algorithms: analytical solution, out-of-sample classification, multi-class classification, and probabilistic output.  ...  Following IMP, we propose an information-theoretic approach to semi-supervised learning.  ... 
dblp:conf/icml/NiuJDHS13 fatcat:q27inx2isjbzngejj53frbjt4e
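
For orientation across the squared-loss MI entries in this listing: SMI is the Pearson (chi-squared) divergence between the joint density and the product of the marginals (the standard definition used in this line of work, not quoted from the snippet):

$$\mathrm{SMI}(X,Y) \;=\; \frac{1}{2}\iint p(x)\,p(y)\left(\frac{p(x,y)}{p(x)\,p(y)}-1\right)^{2}\mathrm{d}x\,\mathrm{d}y .$$

SMI vanishes if and only if X and Y are independent and, unlike Shannon mutual information, admits an analytic least-squares estimator (LSMI), which is what makes the "analytical solution" claims in these abstracts possible.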

Information-Maximization Clustering based on Squared-Loss Mutual Information [article]

Masashi Sugiyama, Makoto Yamada, Manabu Kimura, Hirotaka Hachiya
2011 arXiv   pre-print
In this paper, we propose an alternative information-maximization clustering method based on a squared-loss variant of mutual information.  ...  Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized.  ...  Acknowledgments We would like to thank Ryan Gomes for providing us his program code of information-maximization clustering. MS was supported by SCAT, AOARD, and the FIRST program.  ... 
arXiv:1112.0611v1 fatcat:jwtqlwvfxzhk7h3ufpd3tlx4oi
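
The paper shows that maximizing a kernel approximation of SMI reduces to an eigendecomposition of the kernel matrix over the data. A minimal spectral-style simplification in the spirit of SMIC (the Gaussian kernel, its width, and the argmax assignment are illustrative choices; the actual method also tunes the kernel width by maximizing an SMI estimate):

```python
import numpy as np

def smic_like_clustering(X, c, sigma=1.0):
    """Spectral sketch in the spirit of SMIC: maximizing a kernel
    approximation of SMI reduces to an eigendecomposition of the
    (degree-normalized) kernel matrix over the data X of shape (n, d)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise sq. distances
    K = np.exp(-sq / (2 * sigma ** 2))                     # Gaussian kernel matrix
    d = K.sum(axis=1)
    Kn = K / np.sqrt(np.outer(d, d))                       # degree normalization
    _, vecs = np.linalg.eigh(Kn)
    V = vecs[:, -c:]                                       # top-c eigenvectors
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12  # row-normalize
    return np.abs(V).argmax(axis=1)                        # crude cluster assignment
```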

Information-Maximization Clustering Based on Squared-Loss Mutual Information

Masashi Sugiyama, Gang Niu, Makoto Yamada, Manabu Kimura, Hirotaka Hachiya
2014 Neural Computation  
In this paper, we propose an alternative information-maximization clustering method based on a squared-loss variant of mutual information.  ...  Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized.  ...  Acknowledgments We would like to thank Ryan Gomes for providing us his program code of information-maximization clustering.  ... 
doi:10.1162/neco_a_00534 pmid:24102125 fatcat:7siksjb7vzc4blotmrbajvak4m

Canonical dependency analysis based on squared-loss mutual information

Masayuki Karasuyama, Masashi Sugiyama
2012 Neural Networks  
The proposed method, which we call least-squares canonical dependency analysis (LSCDA), is based on a squared-loss variant of mutual information, and it has various useful properties besides its ability to capture higher-order correlations: for example, it can simultaneously find multiple projection directions (i.e., subspaces), it does not involve density estimation, and it is equipped with a model selection  ...  As a criterion of dependency, we employed squared-loss mutual information (SMI), which can be accurately and analytically estimated by least-squares mutual information (LSMI).  ... 
doi:10.1016/j.neunet.2012.06.009 pmid:22831849 fatcat:ckrkn5wwsvdc7ewfpzav2dqoy4
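
The LSMI estimator named here fits a kernel model of the density ratio p(x,y)/(p(x)p(y)) by least squares, which yields a closed-form solution. A minimal numpy sketch, assuming Gaussian product kernels centered at the paired samples (in the papers, sigma and lam are chosen by cross-validation):

```python
import numpy as np

def gauss_kernel(A, B, sigma):
    # Pairwise Gaussian kernel between the rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def lsmi(x, y, sigma=1.0, lam=1e-3):
    """Closed-form least-squares SMI estimate from paired samples
    x, y of shape (n, d_x) and (n, d_y)."""
    n = x.shape[0]
    Kx = gauss_kernel(x, x, sigma)
    Ky = gauss_kernel(y, y, sigma)
    H = (Kx.T @ Kx) * (Ky.T @ Ky) / n ** 2   # empirical E_{p(x)p(y)}[phi phi^T]
    h = (Kx * Ky).mean(axis=0)               # empirical E_{p(x,y)}[phi]
    theta = np.linalg.solve(H + lam * np.eye(n), h)
    return h @ theta - 0.5 * theta @ H @ theta - 0.5
```

For independent draws, e.g. lsmi(np.random.randn(200, 1), np.random.randn(200, 1)), the estimate should be close to zero.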

Information theoretic regularization for semi-supervised boosting

Lei Zheng, Shaojun Wang, Yan Liu, Chi-Hoon Lee
2009 Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '09  
Our approach is based on extending the information regularization framework to boosting, bearing loss functions that combine log loss on labeled data with information-theoretic measures to encode unlabeled  ...  We present novel semi-supervised boosting algorithms that incrementally build linear combinations of weak classifiers through generic functional gradient descent using both labeled and unlabeled training  ...  The authors wish to thank researchers at the 711th HPW/RHCP lab of the Wright-Patterson Air Force Base for providing them the EEG human mental workload classification dataset.  ... 
doi:10.1145/1557019.1557129 dblp:conf/kdd/ZhengWLL09 fatcat:vfrnwoonyrhqhle57c3s36fjpm
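
As one concrete reading of the loss family described in this snippet (an illustrative instance only; the paper studies several information-theoretic measures on the unlabeled part, not just conditional entropy):

```python
import numpy as np

def semisup_boosting_objective(p_labeled, y, p_unlabeled, lam=0.1):
    """Log loss on labeled posteriors plus an information-theoretic
    penalty (here: conditional entropy) on unlabeled posteriors.
    p_labeled: (n_l, c), y: (n_l,) int labels, p_unlabeled: (n_u, c)."""
    eps = 1e-12
    log_loss = -np.log(p_labeled[np.arange(len(y)), y] + eps).mean()
    cond_entropy = -(p_unlabeled * np.log(p_unlabeled + eps)).sum(axis=1).mean()
    return log_loss + lam * cond_entropy
```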

Duality Regularization for Unsupervised Bilingual Lexicon Induction [article]

Xuefeng Bai, Yue Zhang, Hailong Cao, Tiejun Zhao
2019 arXiv   pre-print
In this paper, we propose to train primal and dual models jointly, using regularizers to encourage consistency in back-translation cycles.  ...  For example, EN-IT and IT-EN induction can be mutually primal and dual problems. Current state-of-the-art methods, however, consider the two tasks independently.  ...  Non-adversarial approaches have also been explored. For instance, Mukherjee et al. (2018) use squared-loss mutual information to search for optimal cross-lingual word pairing.  ... 
arXiv:1909.01013v1 fatcat:nqndyhydtrbabg6uwejs3krqei
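
A minimal sketch of the back-translation-cycle regularizer the snippet describes, assuming linear embedding maps W_st and W_ts (hypothetical names; the paper's joint objective additionally contains the lexicon-induction losses of the primal and dual models):

```python
import numpy as np

def cycle_consistency_penalty(W_st, W_ts, X_s):
    """Penalize source embeddings X_s (n, d) that are not recovered
    after mapping to the target space and back: ||X W_st W_ts - X||^2."""
    round_trip = X_s @ W_st @ W_ts
    return ((round_trip - X_s) ** 2).sum(axis=1).mean()
```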

Regularized Optimal Transport for Dynamic Semi-supervised Learning [article]

Mourad El Hamri, Younès Bennani
2021 arXiv   pre-print
In this paper, we propose a novel approach for transductive semi-supervised learning, using a complete bipartite edge-weighted graph.  ...  Semi-supervised learning provides an effective paradigm for leveraging unlabeled data to improve a model's performance.  ...  Furthermore, we plan to develop a theoretical analysis of semi-supervised learning with optimal transport theory.  ... 
arXiv:2103.11937v2 fatcat:i6au5d3xbzcrflba4jh2mz2ava
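
A self-contained sketch of optimal-transport label propagation in the spirit of the snippet's bipartite-graph setup: entropically regularized transport from unlabeled to labeled points, with class scores read off the transport plan. This is a minimal variant, not the paper's exact (dynamic) formulation:

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.1, iters=200):
    """Entropy-regularized OT via Sinkhorn iterations (log-domain
    updates would be numerically safer; kept simple here)."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]               # transport plan

def ot_label_propagation(X_l, y_l, X_u, n_classes, reg=0.1):
    """Move uniform mass from unlabeled points to labeled points over a
    complete bipartite cost matrix; per-class column sums give scores."""
    M = ((X_u[:, None, :] - X_l[None, :, :]) ** 2).sum(-1)
    a = np.full(len(X_u), 1.0 / len(X_u))
    b = np.full(len(X_l), 1.0 / len(X_l))
    plan = sinkhorn(a, b, M, reg)
    scores = np.stack([plan[:, y_l == c].sum(axis=1)
                       for c in range(n_classes)], axis=1)
    return scores.argmax(axis=1)
```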

Mutual information deep regularization for semi-supervised segmentation

Jizong Peng, Marco Pedersoli, Christian Desrosiers
2020 International Conference on Medical Imaging with Deep Learning  
Experimental results show our method to outperform recently proposed approaches for semi-supervised segmentation and to yield performance comparable to fully-supervised training.  ...  Since mutual information does not require a strict ordering of clusters in two different cluster assignments, we propose to incorporate another consistency regularization loss which forces the alignment  ...  Mutual information corresponds to our method without the loss term L_reg, and Consistency regularization to our KL-based method without L_MI.  ... 
dblp:conf/midl/PengPD20 fatcat:oqptzj3jl5dzlnzpaqzqsaztma
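
The permutation-invariance point in this snippet is exactly what a mutual-information objective over paired soft cluster assignments provides. A minimal numpy version of such a loss (the paper applies it to segmentation networks and adds the consistency term L_reg on top):

```python
import numpy as np

def mi_consistency_loss(P1, P2, eps=1e-12):
    """Negative MI of the empirical joint over cluster pairs, where
    P1, P2 are (n, c) soft assignments for two views of the same inputs.
    Minimizing it maximizes agreement up to a cluster permutation."""
    J = P1.T @ P2 / P1.shape[0]        # empirical joint distribution (c, c)
    J = (J + J.T) / 2.0                # symmetrize
    Pr = J.sum(axis=1, keepdims=True)  # row marginal
    Pc = J.sum(axis=0, keepdims=True)  # column marginal
    return -(J * (np.log(J + eps) - np.log(Pr + eps) - np.log(Pc + eps))).sum()
```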

GAR: An efficient and scalable Graph-based Activity Regularization for semi-supervised learning [article]

Ozsel Kilinc, Ismail Uysal
2018 arXiv   pre-print
In this paper, we propose a novel graph-based approach for semi-supervised learning problems, which considers an adaptive adjacency of the examples throughout the unsupervised portion of the training.  ...  Our results show comparable performance with state-of-the-art generative approaches for semi-supervised learning on an easier-to-train, low-cost framework.  ...  In this paper, we propose a novel framework for semi-supervised learning which can be considered a variant of the graph-based approach.  ... 
arXiv:1705.07219v2 fatcat:wo25ralrafe5jkhxqnkhecyt74
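
The snippet does not spell out GAR's specific penalty terms, so the following is only the generic Laplacian-smoothness core common to graph-based regularizers, with the adjacency built adaptively from current activations (an assumption for illustration, not the paper's exact objective):

```python
import numpy as np

def graph_smoothness_penalty(F, H, sigma=1.0):
    """F: (n, c) predictions; H: (n, h) activations. Build an adjacency
    from activation similarity and penalize predictions that differ
    across strongly connected examples via tr(F^T L F)."""
    sq = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W     # unnormalized graph Laplacian
    return np.trace(F.T @ L @ F) / F.shape[0]
```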

A Systematic Survey of Regularization and Normalization in GANs [article]

Ziqiang Li, Xintian Wu, Muhammad Usman, Rentuo Tao, Pengfei Xia, Huanhuan Chen, Bin Li
2021 arXiv   pre-print
Regularization and normalization are common methods of introducing prior information to stabilize training and improve discrimination.  ...  Although a handful of regularization and normalization methods have been proposed for GANs, to the best of our knowledge, there exists no comprehensive survey that primarily focuses on objectives  ...  Based on detailed background information and theoretical analysis of GAN training, we propose a novel taxonomy  ... 
arXiv:2008.08930v5 fatcat:xk7vbmy2ebagjg7hzbhb7hfvty

Deep Supervised Hashing Network with Integrated Regularization

J.X. Liao, Baoran Li, Jingyu Wang, Qi Qi, Jing Wang
2019 IET Image Processing  
In this study, a new method for training a deep hashing system to learn compact binary codes is presented.  ...  Existing methods use similarity and regularity losses to train deep hashing systems, but these two functions usually work together without cooperating, which may lead to inadequate performance of the whole  ...  Supervised hashing methods differ from image classification methods by using image similarity information to learn the generation of hash codes, which shows better performance compared to unsupervised  ... 
doi:10.1049/iet-ipr.2018.6644 fatcat:nwsrjl4x7nedzkhgalsbd5nhv4
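
For context on the similarity and regularity losses this snippet contrasts, here is a sketch of a standard pairwise deep-hashing objective (the contrastive form and constants are illustrative; the paper's contribution is making such terms cooperate rather than merely coexist):

```python
import numpy as np

def hashing_pair_loss(u_i, u_j, s_ij, margin=2.0, alpha=0.01):
    """u_i, u_j: relaxed (real-valued) hash codes; s_ij = 1 if the
    images are similar, 0 otherwise. Similar pairs are pulled together,
    dissimilar pairs pushed at least `margin` apart, and a quantization
    regularizer drives code entries toward +-1."""
    d = ((u_i - u_j) ** 2).sum()
    sim_term = s_ij * d + (1 - s_ij) * max(margin - np.sqrt(d), 0.0) ** 2
    quant_term = np.abs(np.abs(u_i) - 1).sum() + np.abs(np.abs(u_j) - 1).sum()
    return sim_term + alpha * quant_term
```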

Combating noisy labels by agreement: A joint training method with co-regularization [article]

Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An
2020 arXiv   pre-print
Deep learning with noisy labels is a practically challenging problem in weakly supervised learning.  ...  Specifically, we first use two networks to make predictions on the same mini-batch data and calculate a joint loss with Co-Regularization for each training example.  ...  Semi-supervised learning. Semi-supervised learning also belongs to the family of weakly supervised learning frameworks [15, 18, 22, 26, 27, 31, 47].  ... 
arXiv:2003.02752v3 fatcat:4bmbhdgvwzap3j4hdet4eq353e
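
A minimal sketch of the per-example joint loss described in this snippet, assuming the co-regularizer is a symmetric KL divergence between the two networks' predictions (training would then keep the small-loss examples, which are more likely to be correctly labeled):

```python
import numpy as np

def joint_loss(p1, p2, y, lam=0.5, eps=1e-12):
    """p1, p2: (n, c) softmax outputs of the two networks; y: (n,) labels.
    Returns per-example losses: supervised cross-entropy for both
    networks plus a symmetric-KL agreement (co-regularization) term."""
    idx = np.arange(len(y))
    ce = -np.log(p1[idx, y] + eps) - np.log(p2[idx, y] + eps)
    kl12 = (p1 * (np.log(p1 + eps) - np.log(p2 + eps))).sum(axis=1)
    kl21 = (p2 * (np.log(p2 + eps) - np.log(p1 + eps))).sum(axis=1)
    return (1 - lam) * ce + lam * (kl12 + kl21)
```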

Graph-regularized multi-view semantic subspace learning

Jinye Peng, Peng Luo, Ziyu Guan, Jianping Fan
2017 International Journal of Machine Learning and Cybernetics  
MvSL learns a nonnegative latent space and tries to capture the semantic structure of data by a novel graph embedding framework, where an affinity graph characterizing intra-class compactness and a penalty  ...  Although label information has been exploited for guiding multi-view subspace learning, previous approaches did not fully capture the underlying semantic structure in data.  ...  ) and those using label information (semi-supervised or supervised).  ... 
doi:10.1007/s13042-017-0766-5 fatcat:jk47427hyjfarknicxuyom4a3y
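
The affinity/penalty graph pair mentioned in the snippet is the classic graph-embedding construction; a trace-ratio sketch of that criterion follows (MvSL couples it with nonnegative multi-view factorization, omitted here):

```python
import numpy as np

def graph_embedding_objective(F, W_aff, W_pen):
    """F: (n, k) projected data. The affinity graph W_aff pulls
    same-class points together (small tr(F^T L_aff F)); the penalty
    graph W_pen pushes different-class points apart (large denominator)."""
    L_aff = np.diag(W_aff.sum(axis=1)) - W_aff
    L_pen = np.diag(W_pen.sum(axis=1)) - W_pen
    return np.trace(F.T @ L_aff @ F) / np.trace(F.T @ L_pen @ F)
```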

Regularized clustering for documents

Fei Wang, Changshui Zhang, Tao Li
2007 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '07  
In this paper, we propose a novel method for clustering documents using regularization.  ...  with a global smoothness regularizer.  ...  The idea of incorporating both local and global information into label prediction is inspired by recent works on semi-supervised learning [31], and our experimental evaluations on several real document  ... 
doi:10.1145/1277741.1277760 dblp:conf/sigir/WangZL07 fatcat:6md6hp3b6bevllkzdys2vi4dvm
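
A compact reading of the local-plus-global idea in this snippet: cluster indicators should match locally learned predictions while staying smooth on the document similarity graph. A sketch, assuming a fixed similarity matrix W (the paper derives the local predictions from neighborhood regression):

```python
import numpy as np

def local_global_objective(f, local_pred, W, lam=1.0):
    """f: (n,) cluster indicator values; local_pred: (n,) outputs of the
    local models; W: (n, n) document similarities. Combines a local
    fitting term with the global smoothness regularizer f^T L f."""
    L = np.diag(W.sum(axis=1)) - W
    return ((f - local_pred) ** 2).sum() + lam * (f @ L @ f)
```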

Functional Regularization for Representation Learning: A Unified Theoretical Perspective [article]

Siddhant Garg, Yingyu Liang
2020 arXiv   pre-print
Unsupervised and self-supervised learning approaches have become a crucial tool to learn representations for downstream prediction tasks.  ...  We propose a discriminative theoretical framework for analyzing the sample complexity of these approaches, which generalizes the framework of (Balcan and Blum, 2010) to allow learnable regularization functions  ...  Acknowledgements The authors would like to thank the anonymous reviewers and the meta-reviewer for their valuable comments and suggestions which have been incorporated for the camera ready version.  ... 
arXiv:2008.02447v3 fatcat:fgbm27uokjcc3ktas4q4r74fwm
Showing results 1 — 15 out of 7,861 results