Nonconvex Regularizations for Feature Selection in Ranking With Sparse SVM
2014
IEEE Transactions on Neural Networks and Learning Systems
In this work, we propose a general framework for feature selection in learning to rank using SVM with a sparse regularization term. ...
Feature selection in learning to rank has recently emerged as a crucial issue. ...
Sparse regularized SVM for preference ranking: To achieve feature selection in the context of SVM, a common solution is to introduce a sparse regularization term. ...
doi:10.1109/tnnls.2013.2286696
fatcat:li45e4zbxbbp3hgkz2uzdcngpq
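For orientation, a generic instance of the kind of objective such a framework covers (a sketch in our notation, not the paper's exact formulation) is the pairwise ranking SVM with a sparsity-inducing regularizer:

    \min_{w}\; \Omega(w) + C \sum_{(i,j) \in P} \max\bigl(0,\; 1 - w^{\top}(x_i - x_j)\bigr),

where P collects the preference pairs "x_i should rank above x_j" and \Omega can be the convex \ell_1 norm or a nonconvex surrogate such as an \ell_p pseudo-norm with 0 < p < 1.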
Direct convex relaxations of sparse SVM
2007
Proceedings of the 24th international conference on Machine learning - ICML '07
We propose two direct, novel convex relaxations of a nonconvex sparse SVM formulation that explicitly constrains the cardinality of the vector of feature weights. ...
all available features in the input space. ...
Acknowledgments The authors thank the anonymous reviewers for insightful comments, and Sameer Agarwal for helpful discussions. ...
doi:10.1145/1273496.1273515
dblp:conf/icml/ChanVL07
fatcat:apgkb6fgwzgo5lri3fdaunsb74
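As a hedged sketch of the cardinality-constrained formulation the abstract alludes to (our notation, not necessarily the authors'):

    \min_{w,b,\xi}\; \tfrac{1}{2}\|w\|_2^2 + C \sum_i \xi_i \quad \text{s.t.} \quad y_i(w^{\top}x_i + b) \ge 1 - \xi_i,\; \xi_i \ge 0,\; \|w\|_0 \le r,

where the nonconvex constraint \|w\|_0 \le r bounds the number of selected features; the proposed relaxations replace it with convex surrogates.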
Sparse Support Vector Infinite Push
[article]
2012
arXiv
pre-print
In this paper, we address the problem of embedded feature selection for ranking on top of the list problems. ...
We pose this problem as a regularized empirical risk minimization with p-norm push loss function (p=∞) and sparsity inducing regularizers. ...
so as to perform feature selection in a top-ranking learning problem. ...
arXiv:1206.6432v1
fatcat:kfkq4ib2gzevvcuyt5z7nvnwda
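For context, the p-norm push family concentrates the ranking loss at the top of the list as p grows; in the limit p = \infty one common form (our rendering, details may differ from the paper) is

    L_{\infty}(f) = \max_{1 \le j \le n}\; \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}\bigl[f(x_i^{+}) \le f(x_j^{-})\bigr],

i.e. the fraction of relevant examples ranked below the highest-scoring irrelevant one, minimized here together with a sparsity-inducing regularizer.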
Learning to rank using 1-norm regularization and convex hull reduction
2010
Proceedings of the 48th Annual Southeast Regional Conference on - ACM SE '10
We also propose a 1-norm regularization approach to simultaneously find a linear ranking function and to perform feature subset selection. The proposed method is formulated as a linear program. ...
We present in this paper a convex hull reduction method to reduce this impact. ...
ACKNOWLEDGMENTS Xiaofei Nan, Yixin Chen, and Dawn Wilkins were supported in part by the US National Science Foundation under award number EPS-0903787. ...
doi:10.1145/1900008.1900052
dblp:conf/ACMse/NanCDW10
fatcat:tzkeynse5jg2bkebyc7osgycdq
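The reduction to a linear program follows the standard \ell_1 variable-splitting trick; a minimal sketch in our notation (the paper's constraints may differ):

    \min_{u,v,\xi \ge 0}\; \mathbf{1}^{\top}(u + v) + C\,\mathbf{1}^{\top}\xi \quad \text{s.t.} \quad (u - v)^{\top}(x_i - x_j) \ge 1 - \xi_{ij} \;\; \text{for all preference pairs } (i,j),

with w = u - v, so that \mathbf{1}^{\top}(u + v) equals \|w\|_1 at the optimum and every term in the problem is linear.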
Primal explicit max margin feature selection for nonlinear support vector machines
2014
Pattern Recognition
Embedding feature selection in nonlinear SVMs leads to a challenging non-convex minimization problem, which can be prone to suboptimal solutions. ...
We devise an alternating optimization approach to tackle the problem efficiently, breaking it down into a convex subproblem, corresponding to standard SVM optimization, and a non-convex subproblem for ...
‖·‖∞ < tol. We can use any convex solver for the SVM subproblem and use the bound-constrained trust-region algorithm described in Section 3.1 to solve the non-convex feature selection subproblem. ...
doi:10.1016/j.patcog.2014.01.003
fatcat:tf6qe2abpjewbkectfyoogvoci
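A schematic of the alternating scheme the snippet describes, with the ∞-norm stopping test it quotes; fit_svm and update_theta are hypothetical placeholders, not functions from the paper:

    import numpy as np

    def alternate(X, y, fit_svm, update_theta, tol=1e-4, max_iter=100):
        theta = np.ones(X.shape[1])           # feature weights, all features active
        model = None
        for _ in range(max_iter):
            model = fit_svm(X * theta, y)          # convex subproblem: standard SVM fit
            new_theta = update_theta(model, X, y)  # non-convex feature-selection step
            if np.max(np.abs(new_theta - theta)) < tol:  # ||change||_inf < tol
                return new_theta, model
            theta = new_theta
        return theta, model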
Multiple Indefinite Kernel Learning for Feature Selection
2017
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
In the algorithm, we reformulate the non-convex optimization problem of primal IKSVM as a difference of convex functions (DC) programming and transform the non-convex problem into a convex one with the ...
Multiple kernel learning for feature selection (MKL-FS) utilizes kernels to explore complex properties of features and performs better in embedded methods. ...
Tan et al. focused on sparse support vector machines (SVM) with the ℓ0-norm, whose convex relaxation can be further formulated as an MKL problem, where each kernel corresponds to a sparse feature subset ...
doi:10.24963/ijcai.2017/448
dblp:conf/ijcai/XueSX17
fatcat:z2mgcyayiff3nbreltgsjpxzjm
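The DC reformulation mentioned here follows a standard pattern: write the nonconvex objective as a difference of two convex functions and linearize the subtracted part at the current iterate, so each step solves a convex problem. Schematically (our notation):

    f(w) = g(w) - h(w), \qquad w^{k+1} \in \arg\min_{w}\; g(w) - \langle \nabla h(w^{k}),\, w \rangle,

where g and h are convex; the linearization turns each subproblem into a convex majorizer of f at w^k.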
Learning Sparse SVM for Feature Selection on Very High Dimensional Datasets
2010
International Conference on Machine Learning
A sparse representation of Support Vector Machines (SVMs) with respect to input features is desirable for many applications. ...
In this paper, by introducing a 0-1 control variable to each input feature, ℓ0-norm Sparse SVM (SSVM) is converted to a mixed integer programming (MIP) problem. ...
Acknowledgments This research was in part supported by Singapore MOE AcRF Tier-1 Research Grant (RG15/08). ...
dblp:conf/icml/TanWT10
fatcat:qsmntopnm5hehkkm4vdl7j6pcy
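A hedged sketch of the 0-1 reformulation described in the abstract, using a big-M coupling that is our assumption rather than the authors' exact construction:

    \min_{w,b,\xi,z}\; \tfrac{1}{2}\|w\|_2^2 + C \sum_i \xi_i \quad \text{s.t.} \quad y_i(w^{\top}x_i + b) \ge 1 - \xi_i,\; |w_j| \le M z_j,\; z_j \in \{0,1\},\; \sum_j z_j \le r,

where z_j = 0 forces feature j out of the model, so the size of the selected feature set is controlled by \sum_j z_j.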
Self-calibrated Brain Network Estimation and Joint Non-Convex Multi-Task Learning for Identification of Early Alzheimer's Disease
2020
Medical Image Analysis
The learning process is completed by a non-convex regularizer, which effectively reduces the penalty bias of the trace norm and approximates the original rank-minimization problem. ...
Finally, the most relevant disease features are classified using a support vector machine (SVM) for MCI identification. ...
In the feature selection stage, a non-convex regularizer is used to complete the subspace learning of samples. ...
doi:10.1016/j.media.2020.101652
pmid:32059169
fatcat:w6qno5ofufdpjdipqjardolkhe
Regularization and feature selection for large dimensional data
[article]
2019
arXiv
pre-print
The focus of our research here is five embedded feature selection methods, which use ridge regression, Lasso regression, or a combination of the two in the regularization part of the optimization ...
Feature selection has evolved to be an important step in several machine learning paradigms. ...
The top-ranked 1000 features are selected for model selection of the SVM using 5-fold cross-validation on the datasets comprising both training and validation sets. ...
arXiv:1712.01975v3
fatcat:leq5e6mzb5c37omybgexwiel6a
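A minimal, hypothetical sketch of the pipeline step the snippet describes (rank features by an embedded method, keep the top 1000, tune an SVM by 5-fold cross-validation); the dataset, alpha, and the C grid below are illustrative assumptions:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=5000, random_state=0)

    # Embedded selection: Lasso drives most coefficients to exactly zero.
    lasso = Lasso(alpha=0.01).fit(X, y)
    top = np.argsort(-np.abs(lasso.coef_))[:1000]   # top-ranked 1000 features

    # Model selection for the SVM on the reduced set via 5-fold cross-validation.
    grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5).fit(X[:, top], y)
    print(grid.best_params_)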
Sparse Logistic Regression with Lp Penalty for Biomarker Identification
2007
Statistical Applications in Genetics and Molecular Biology
In this paper, we propose a novel method for sparse logistic regression with non-convex regularization Lp (p <1). ...
Biomarkers identified with our methods are compared with those in the literature. Our computational results show that Lp logistic regression (p < 1) outperforms L1 logistic regression and the SCAD SVM. ...
Bradley and Mangasarian (1998) proposed the L 1 SVM for feature selection. ...
doi:10.2202/1544-6115.1248
pmid:17402921
fatcat:sbdavwwabravxjdmndfn643jr4
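For reference, the Lp-penalized logistic regression objective (our notation, consistent with the snippet) is

    \min_{w}\; \sum_{i=1}^{n} \log\bigl(1 + \exp(-y_i\, w^{\top} x_i)\bigr) + \lambda \sum_{j} |w_j|^{p}, \qquad 0 < p < 1,

which is nonconvex for p < 1 but typically yields sparser solutions than the convex \ell_1 (p = 1) penalty.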
Similarity Learning for High-Dimensional Sparse Data
[article]
2019
arXiv
pre-print
The core idea is to parameterize the similarity measure as a convex combination of rank-one matrices with specific sparsity structures. ...
In this paper, we propose a method that can efficiently learn a similarity measure from high-dimensional sparse data. ...
Acknowledgments This work was in part supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense U.S. ...
arXiv:1411.2374v3
fatcat:qjlg5iyz6vbbbjgqdeyaifsoci
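The parameterization in the snippet can be written (our rendering; the particular bases the paper uses are not specified here) as

    S = \sum_{i} \lambda_i\, b_i b_i^{\top}, \qquad \lambda_i \ge 0,\; \sum_i \lambda_i = 1,

a convex combination of rank-one matrices built from sparse basis vectors b_i, so the learned similarity S inherits their sparsity structure while the simplex constraint keeps the problem convex in \lambda.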
Sturm: Sparse Tubal-Regularized Multilinear Regression for fMRI
[article]
2018
arXiv
pre-print
In this work, we study t-SVD for sparse multilinear regression and propose a Sparse tubal-regularized multilinear regression (Sturm) method for fMRI. ...
Recent sparse multilinear regression methods based on tensor are emerging as promising solutions for fMRI, yet existing works rely on unfolding/folding operations and a tensor rank relaxation with limited ...
In Lasso + SVM, ENet + SVM, Remurs + SVM, and Sturm + SVM, we rank the selected features by their associated absolute values of W in descending order and feed the top η% of the features to the SVM. ...
arXiv:1812.01496v1
fatcat:llyqce27jrdgba3j7dkk6nofgm
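A small sketch of the feature-ranking step the snippet describes; the W and eta values here are illustrative assumptions rather than values from the paper:

    import numpy as np

    def top_eta_percent(W, eta):
        """Indices of the top eta% entries of W by absolute value, descending."""
        flat = np.abs(np.asarray(W)).ravel()
        k = max(1, int(np.ceil(eta / 100.0 * flat.size)))
        return np.argsort(-flat)[:k]

    W = np.random.randn(8, 8, 4)      # e.g. a learned regression weight tensor
    idx = top_eta_percent(W, eta=5)   # keep the top 5% of features for the SVM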
Feature selection in machine learning: an exact penalty approach using a Difference of Convex function Algorithm
2014
Machine Learning
We develop an exact penalty approach for feature selection in machine learning via the zero-norm (ℓ0) regularization problem. ...
The algorithm is implemented for feature selection in SVM, which requires solving one linear program at each iteration and enjoys interesting convergence properties. ...
feature selection in learning to rank with sparse SVM, etc. ...
doi:10.1007/s10994-014-5455-y
fatcat:b4bnlo4rcbhwhgjnuhpa5wcfce
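One standard exact-penalty construction for the zero norm (a sketch under our assumptions; the paper's construction may differ in detail) introduces indicator variables u_j and penalizes their non-integrality:

    \min_{w,\; u \in [0,1]^d}\; f(w) + \lambda \sum_j u_j + t \sum_j u_j (1 - u_j) \quad \text{s.t.} \quad |w_j| \le M u_j,

where for a sufficiently large penalty parameter t the continuous relaxation is exact; the added term is concave, so DCA applies, and when f is polyhedral (as for an ℓ1-loss SVM) each DCA iteration reduces to a linear program, matching the snippet.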
ℓp − ℓq Penalty for Sparse Linear and Sparse Multiple Kernel Multitask Learning
2011
IEEE Transactions on Neural Networks
Then, for the more general case when 0 < p < 1, we solve the resulting non-convex problem through a majorization-minimization approach. ...
Our contribution in this context is to provide an efficient scheme for computing the ℓ1 − ℓq proximal operator. ...
Algorithms for jointly sparse multi-task SVM: In this section, we propose some algorithms for solving the sparse multi-task SVM problem when using Ω_{p,q} as a regularizer with values p ≤ 1 and 1 ≤ q ≤ 2. ...
doi:10.1109/tnn.2011.2157521
pmid:21813358
fatcat:cdjradqjhrgdjmozxo4crrlhra
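The mixed-norm regularizer behind the title can be written (our notation) as

    \Omega_{p,q}(W) = \sum_{j} \Bigl( \sum_{t} |W_{jt}|^{q} \Bigr)^{p/q} = \sum_{j} \|W_{j\cdot}\|_q^{\,p},

where rows j index features and columns t index tasks: the inner q-norm couples a feature's weights across tasks, while p ≤ 1 drives entire rows to zero; p = 1 gives the convex ℓ1 − ℓq case with the efficient proximal operator mentioned in the snippet, and 0 < p < 1 is handled by majorization-minimization.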
Multi-Task Joint Sparse and Low-Rank Representation for the Scene Classification of High-Resolution Remote Sensing Image
2016
Remote Sensing
In this paper, we introduce a multi-task joint sparse and low-rank representation model to combine the strength of multiple features for HRS image interpretation. ...
The proposed model is optimized as a non-smooth convex optimization problem using an accelerated proximal gradient method. ...
The problem is intractable due to the two non-smooth convex regularization terms P(W) and Q(W). ...
doi:10.3390/rs9010010
fatcat:phdn4lsq5zhtbjonmsh3taf73q
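As an illustration of the composite objective with the two non-smooth terms P(W) and Q(W) the snippet mentions (the specific norms below are our assumptions), a typical joint sparse and low-rank model reads

    \min_{W}\; L(W) + \lambda_1 \|W\|_{2,1} + \lambda_2 \|W\|_{*},

with L a smooth data-fitting loss, the ℓ2,1 norm enforcing joint sparsity across tasks, and the nuclear norm enforcing low rank. Each regularizer alone admits a closed-form proximal step (row-wise soft-thresholding and singular-value thresholding, respectively); handling both at once is what makes the problem challenging, as the snippet notes.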
Showing results 1 — 15 out of 3,613 results