
Flexible Manifold Embedding: A Framework for Semi-Supervised and Unsupervised Dimension Reduction

Feiping Nie, Dong Xu, Ivor Wai-Hung Tsang, Changshui Zhang
2010 IEEE Transactions on Image Processing  
For semi-supervised dimension reduction, we aim to find the optimal prediction labels F for all the training samples, the linear regression function f(X), and the regression residue F_0 = F - f(X) simultaneously  ...  Comprehensive experiments on several benchmark databases demonstrate the significant improvement over existing dimension reduction algorithms.  ...  In total, we have 3970 documents. We extract an 8014-dimensional token frequency-inverse document frequency (tf-idf) feature for each document.  ...  (A minimal tf-idf sketch follows this entry.)
doi:10.1109/tip.2010.2044958 pmid:20215078 fatcat:bsf6sdcn35fotdbftotjb3ju54
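The snippet above describes extracting a token frequency-inverse document frequency (tf-idf) feature vector per document. A minimal sketch of that kind of featurization is given below, assuming scikit-learn; the toy document list is made up, and the paper's 8014-dimensional vocabulary is specific to its 3970-document corpus.

# Hypothetical tf-idf featurization sketch (scikit-learn assumed).
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "graph based semi supervised learning on documents",
    "manifold regularization for dimension reduction",
    "label propagation over a document similarity graph",
]

vectorizer = TfidfVectorizer()      # token frequency-inverse document frequency
X = vectorizer.fit_transform(docs)  # sparse matrix of shape (n_documents, vocabulary_size)

print(X.shape)                      # the paper reports a 3970 x 8014 matrix for its corpus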

Robust feature extraction via information theoretic learning

Xiao-Tong Yuan, Bao-Gang Hu
2009 Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09  
In this paper, we present a robust feature extraction framework based on information-theoretic learning.  ...  This objective function reaps the advantages in robustness from both the redescending M-estimator and manifold regularization, and can be efficiently optimized via half-quadratic optimization in an iterative  ...  Half-Quadratic Optimization: Based on the theory of convex conjugate functions (Rockafellar, 1970), we can trivially derive the following proposition that forms the basis for solving problem (3) in an HQ  ...  (A generic half-quadratic sketch follows this entry.)
doi:10.1145/1553374.1553526 dblp:conf/icml/YuanH09 fatcat:q6l6rfccvratxdpasr2cvqzrui
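The half-quadratic optimization mentioned in the snippet alternates between a closed-form update of auxiliary weights and a weighted least-squares step. The sketch below shows that generic pattern for a Welsch-type redescending M-estimator in plain linear regression; it is not the paper's exact information-theoretic objective, and the bandwidth sigma and synthetic data are assumptions.

import numpy as np

def robust_fit(X, y, sigma=1.0, iters=20):
    # Half-quadratic (iteratively reweighted least squares) loop:
    # alternate between the regression coefficients and the auxiliary weights.
    n, d = X.shape
    w = np.ones(n)                      # auxiliary HQ variables (per-sample weights)
    beta = np.zeros(d)
    for _ in range(iters):
        W = np.diag(w)                  # weighted least-squares step for beta
        beta = np.linalg.solve(X.T @ W @ X + 1e-8 * np.eye(d), X.T @ W @ y)
        r = y - X @ beta                # closed-form weight update for the Welsch loss
        w = np.exp(-(r ** 2) / (2 * sigma ** 2))
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
y[:5] += 10.0                           # gross outliers that the M-estimator should downweight
print(robust_fit(X, y))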

Towards feature selection in network

Quanquan Gu, Jiawei Han
2011 Proceedings of the 20th ACM international conference on Information and knowledge management - CIKM '11  
In this paper, we present a supervised feature selection method based on Laplacian Regularized Least Squares (LapRLS) for networked data.  ...  The resultant optimization problem is a mixed integer programming problem, which is difficult to solve.  ...  Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. We thank the anonymous reviewers for their helpful comments.  ...  (A sketch of the standard LapRLS solution follows this entry.)
doi:10.1145/2063576.2063746 dblp:conf/cikm/GuH11 fatcat:lisenoxmj5cz5jkg3co4rk5cle
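Several entries in this list, including this one, build on Laplacian Regularized Least Squares. For reference, the sketch below implements the standard closed-form LapRLS coefficients from Belkin et al. (2006), not this paper's network-specific mixed-integer extension; the regularization parameters gamma_A and gamma_I and the dense-matrix treatment are illustrative choices.

import numpy as np

def laprls(K, L, y_labeled, gamma_A=1e-2, gamma_I=1e-2):
    # K: (n, n) kernel matrix over labeled + unlabeled points, labeled points first.
    # L: (n, n) graph Laplacian over the same points.
    # y_labeled: labels (e.g. in {-1, +1}) for the first l points.
    n = K.shape[0]
    l = len(y_labeled)
    J = np.zeros((n, n))
    J[:l, :l] = np.eye(l)               # selects the labeled block
    Y = np.zeros(n)
    Y[:l] = y_labeled
    A = J @ K + gamma_A * l * np.eye(n) + (gamma_I * l / n ** 2) * (L @ K)
    alpha = np.linalg.solve(A, Y)
    return alpha                        # f(x) = sum_i alpha_i k(x_i, x)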

Graph transduction via alternating minimization

Jun Wang, Tony Jebara, Shih-Fu Chang
2008 Proceedings of the 25th international conference on Machine learning - ICML '08  
This paper introduces a propagation algorithm that more reliably minimizes a cost function over both a function on the graph and a binary label matrix.  ...  Experiments are shown for synthetic and real classification tasks including digit and text recognition.  ...  We compared our method with LGC and GFHF, LapRLS, and LapSVM. The error rates are averaged over 20 trials.  ... 
doi:10.1145/1390156.1390300 dblp:conf/icml/WangJC08 fatcat:4gezshk4obhnfhplmknftcydwa

Regularized Co-Clustering with Dual Supervision

Vikas Sindhwani, Jianying Hu, Aleksandra Mojsilovic
2008 Neural Information Processing Systems  
In this paper, we develop two novel semi-supervised multi-class classification algorithms motivated respectively by spectral bipartite graph partitioning and matrix approximation formulations for co-clustering  ...  from the classical Representer theorem applied to regularization problems posed on a collection of Reproducing Kernel Hilbert Spaces.  ...  LapRLS, which uses a graph Laplacian based on document similarity for semi-supervised learning.  ... 
dblp:conf/nips/SindhwaniHM08 fatcat:efvghaqj65g75ffd3gyfwlsdji

Teaching Semi-Supervised Classifier via Generalized Distillation

Chen Gong, Xiaojun Chang, Meng Fang, Jian Yang
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
...  the output of the teaching function under an optimization framework.  ...  The superiority of our algorithm over the related state-of-the-art methods has also been empirically demonstrated by the experiments on different datasets with various sources of privileged knowledge.  ...  Based on the optimized Θ, we may compute the label vector of a test example x_0 as F_0 = x_0 Θ, and then x_0 is classified into the j-th class with j = argmax_{j' ∈ {1,...,C}} [F_0]_{j'}.  ...  (A minimal sketch of this prediction rule follows this entry.)
doi:10.24963/ijcai.2018/298 dblp:conf/ijcai/0002CFY18 fatcat:czsnzyvpbrf23dusm3czue4gzi
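The snippet's prediction rule can be written out directly: with the optimized parameter matrix Θ, a test example's label scores are its feature vector times Θ, and the predicted class is the index of the largest score. The shapes below (d features, C classes) and the random placeholders for Θ and x_0 are assumptions for illustration only.

import numpy as np

d, C = 5, 3
rng = np.random.default_rng(0)
Theta = rng.normal(size=(d, C))        # stands in for the optimized Theta
x0 = rng.normal(size=d)                # a test example

F0 = x0 @ Theta                        # label score vector F_0 = x_0 Theta
predicted_class = int(np.argmax(F0))   # j = argmax_{j'} [F_0]_{j'}
print(F0, predicted_class)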

Augmented hashing for semi-supervised scenarios

Zalán Bodó, Lehel Csató
2014 The European Symposium on Artificial Neural Networks  
One can distinguish between unsupervised, supervised and semi-supervised codeword generation, based on the information they use to obtain the embedding [5].  ...  The 20Newsgroups data set (20 classes, 11314 training and 7532 test documents) was processed as described in  ...  Error-correcting output coding is used in machine learning to perform multi-class classification  ...  (A minimal ECOC sketch follows this entry.)
dblp:conf/esann/BodoC14 fatcat:6wd7uhsqrbesnmxjhnooaokn6m
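As a pointer for the error-correcting output coding mentioned in the snippet: each class is assigned a binary codeword, one binary classifier is trained per bit, and a test point receives the class whose codeword is nearest to the predicted bit string. The sketch below uses scikit-learn's OutputCodeClassifier with a linear SVM on toy digits data; none of this is the paper's actual setup.

from sklearn.datasets import load_digits
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
# Random binary codewords per class; one LinearSVC per code bit.
ecoc = OutputCodeClassifier(LinearSVC(), code_size=1.5, random_state=0)
ecoc.fit(X[:1000], y[:1000])
print(ecoc.score(X[1000:], y[1000:]))  # accuracy on the held-out digits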

Parameter-Free Spectral Kernel Learning [article]

Qi Mao, Ivor W. Tsang
2012 arXiv   pre-print
Hence, the proposed algorithm does not require any numerical optimization solvers.  ...  Moreover, by maximizing kernel target alignment on labeled data, we can also learn model parameters automatically with a closed-form solution.  ...  The experimental setup is based on (Belkin et al., 2006) and (Chapelle et al., 2006): We compare our proposed methods with Laplacian SVM (LapSVM) and Laplacian RLS (LapRLS) (Belkin et al., 2006).  ...  (A sketch of kernel target alignment follows this entry.)
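The kernel target alignment mentioned in the snippet is, in its standard form (Cristianini et al.), the normalized Frobenius inner product between a kernel matrix and the ideal label kernel yy^T. The sketch below gives that textbook definition; the paper's exact parameterization and closed-form solution may differ.

import numpy as np

def kernel_target_alignment(K, y):
    # A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F), with y in {-1, +1}^l.
    T = np.outer(y, y)
    return np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T))

y = np.array([1, 1, -1, -1])
K_ideal = np.outer(y, y).astype(float)       # a perfectly aligned kernel
print(kernel_target_alignment(K_ideal, y))   # 1.0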
arXiv:1203.3495v1 fatcat:5rohnaxa5za2vkdct4m4sveyka

Semi-supervised Learning Based on Semiparametric Regularization [chapter]

Zhen Guo, Zhongfei (Mark) Zhang, Eric P. Xing, Christos Faloutsos
2008 Proceedings of the 2008 SIAM International Conference on Data Mining  
various choices of the original RKHS and the loss function. (3) We provide experimental comparisons showing that the proposed approach leads to state-of-the-art performance on a variety of classification  ...  Furthermore, the proposed approach, which naturally extends to out-of-sample data, is inductive in nature. (2) This approach allows a family of algorithms to be developed based on  ...  Therefore, we choose the kernel parameters based on the performance on a small grid of parameter values and apply the same parameters to the LapSVM and LapRLS algorithms.  ... 
doi:10.1137/1.9781611972788.12 dblp:conf/sdm/GuoZXF08 fatcat:crtfgcyyazg6neiyxqppbcsy6i

Dynamic Label Propagation for Semi-supervised Multi-class Multi-label Classification

Bo Wang, Zhuowen Tu, John K. Tsotsos
2013 IEEE International Conference on Computer Vision
In graph-based semi-supervised learning approaches, the classification rate is highly dependent on the size of the available labeled data, as well as the accuracy of the similarity measures.  ...  Significant improvement over the state-of-the-art methods is observed on benchmark datasets for both multi-class and multi-label tasks.  ...  Moreover, nice properties enjoyed by graph-based (built on the distance metric) two-class semi-supervised classification [36] become less obvious in multi-class classification situations [11],  ... 
doi:10.1109/iccv.2013.60 dblp:conf/iccv/WangTT13 fatcat:ghrqskn3pbez7l5c23ce4fw4be

Dynamic label propagation for semi-supervised multi-class multi-label classification

Bo Wang, John Tsotsos
2016 Pattern Recognition  
In graph-based semi-supervised learning approaches, the classification rate is highly dependent on the size of the available labeled data, as well as the accuracy of the similarity measures.  ...  Significant improvement over the state-of-the-art methods is observed on benchmark datasets for both multi-class and multi-label tasks.  ...  Moreover, nice properties enjoyed by graph-based (built on the distance metric) two-class semi-supervised classification [36] become less obvious in multi-class classification situations [11],  ... 
doi:10.1016/j.patcog.2015.10.006 fatcat:ziujocjve5ab7ncvzqbr3myyey

Dissimilarity in Graph-Based Semi-Supervised Classification

Andrew B. Goldberg, Xiaojin Zhu, Stephen J. Wright
2007 Journal of machine learning research  
We present a semi-supervised classification algorithm that learns from dissimilarity and similarity information on labeled and unlabeled data.  ...  Our approach uses a novel graph-based encoding of dissimilarity that results in a convex problem, and can handle both binary and multiclass classification. Experiments on several tasks are promising.  ...  Acknowledgments: We thank Fernando Pérez-Cruz for helpful discussions on multiclass SVMs.  ... 
dblp:journals/jmlr/GoldbergZW07 fatcat:mdt4e75qvzh4vjumwweqbnyiee

Graph Transduction as a Noncooperative Game

Aykut Erdem, Marcello Pelillo
2012 Neural Computation  
Building on this assumption, traditional graph-based approaches formalize graph transduction as a regularized function estimation problem on an undirected graph (Joachims, 2003; Zhu et al., 2003; Zhou  ...  In contrast to the traditional view, in which the process of label propagation is defined as a graph Laplacian regularization, this paper proposes a radically different perspective that is based on game-theoretic  ...  Specifically, the experiments are performed on the link matrix W = (w_ij), where w_ij = 1 if document i cites document j and w_ij = 0 otherwise.  ...  (A small sketch of this link matrix follows this entry.)
doi:10.1162/neco_a_00233 fatcat:l4z3pt7jtrgazbql65uyyzagie
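The link matrix in the snippet is straightforward to build from a list of citation pairs, as in the small sketch below; the document count and citation list are made up for illustration, and in practice such a matrix is often symmetrized before being used as a graph affinity.

import numpy as np

n_docs = 5
citations = [(0, 2), (1, 2), (3, 0), (4, 1)]   # (citing, cited) pairs, hypothetical

W = np.zeros((n_docs, n_docs), dtype=int)
for i, j in citations:
    W[i, j] = 1                                # w_ij = 1 iff document i cites document j

print(W)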

Graph Transduction as a Non-cooperative Game [chapter]

Aykut Erdem, Marcello Pelillo
2011 Lecture Notes in Computer Science  
Building on this assumption, traditional graph-based approaches formalize graph transduction as a regularized function estimation problem on an undirected graph (Joachims, 2003; Zhu et al., 2003; Zhou  ...  In contrast to the traditional view, in which the process of label propagation is defined as a graph Laplacian regularization, this paper proposes a radically different perspective that is based on game-theoretic  ...  Specifically, the experiments are performed on the link matrix W = (w_ij), where w_ij = 1 if document i cites document j and w_ij = 0 otherwise.  ... 
doi:10.1007/978-3-642-20844-7_20 fatcat:l226dbckbzaxhm7dwkrfi6dp6i

Probabilistic Labeled Semi-supervised SVM

Mingjie Qian, Feiping Nie, Changshui Zhang
2009 IEEE International Conference on Data Mining Workshops
In this paper, we propose a multi-class method called Probabilistic Labeled Semi-supervised SVM (PLSVM), in which the optimal decision surface is taught by probabilistic labels of all the training data  ...  LapSVM [3] is a classical method that is based on a form of regularization exploiting the geometry of the marginal distribution.  ...  Although binary LapSVM can handle multi-class problems using one-versus-rest or one-versus-one schemes, this is rather expensive, especially with a large number of classes.  ...  (A minimal one-versus-rest sketch follows this entry.)
doi:10.1109/icdmw.2009.14 dblp:conf/icdm/QianNZ09 fatcat:n43spyeirbe2xjatjsu4desxge
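The one-versus-rest reduction that the snippet calls expensive works by training C binary classifiers, one per class, and assigning a test point to the class whose classifier returns the largest decision value. The sketch below shows that reduction with scikit-learn, using a plain kernel SVM as a stand-in for binary LapSVM; the iris data and parameters are illustrative only.

from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
ovr = OneVsRestClassifier(SVC(kernel="rbf", C=1.0))   # trains C binary SVMs
ovr.fit(X, y)
print(ovr.predict(X[:5]))                             # classes via argmax of decision values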
Showing results 1-15 of 39