28,241 Hits in 5.9 sec

Orthogonal Least Squares Based Fast Feature Selection for Linear Classification [article]

Sikai Zhang, Zi-Qiang Lang
2021 arXiv   pre-print
An Orthogonal Least Squares (OLS) based feature selection method is proposed for both binomial and multinomial classification.  ...  It is also shown that the OLS based feature selection method has speed advantages when applied for greedy search.  ...  OLS based fast feature selection for binomial classification: If the N instances of X belong to two classes and the n variables in X represent n features, the feature selection problem for the binomial  ... 
arXiv:2101.08539v3 fatcat:2xlkm6ugivcqrlb4cfgzpj2bsy
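
The greedy search the abstract mentions is easy to illustrate. Below is a minimal numpy sketch of generic OLS-style forward feature selection (score each remaining column, pick the best, orthogonalize the rest against it), not the authors' exact method; all names and the toy data are illustrative.

```python
import numpy as np

def ols_forward_select(X, y, k):
    """Greedy OLS forward selection: repeatedly pick the feature whose
    component orthogonal to the already-selected ones best explains the
    (binary +/-1) target y. Returns selected column indices."""
    X = X.astype(float).copy()
    y = y.astype(float)
    selected = []
    for _ in range(k):
        scores = np.full(X.shape[1], -np.inf)
        for j in range(X.shape[1]):
            if j in selected:
                continue
            w = X[:, j]
            denom = w @ w
            if denom < 1e-12:          # column already absorbed
                continue
            scores[j] = (w @ y) ** 2 / denom   # error-reduction score
        best = int(np.argmax(scores))
        selected.append(best)
        # Orthogonalize remaining columns against the chosen one (Gram-Schmidt).
        w = X[:, best] / np.linalg.norm(X[:, best])
        X -= np.outer(w, w @ X)
    return selected

# Toy usage: 2 informative features out of 10.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sign(X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=200))
print(ols_forward_select(X, y, 2))     # expected to contain 3 and 7
```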

Automatic radiometric normalization of multitemporal satellite imagery

Morton J Canty, Allan A Nielsen, Michael Schmidt
2004 Remote Sensing of Environment  
Normalization by means of the ordinary least squares regression method is compared with normalization using orthogonal regression.  ...  The linear scale invariance of the multivariate alteration detection (MAD) transformation is used to obtain invariant pixels for automatic relative radiometric normalization of time series of multispectral  ...  Poul Thyregod, IMM, Technical University of Denmark, for many good discussions on normalization and calibration.  ... 
doi:10.1016/j.rse.2003.10.024 fatcat:dfy36tr7nvadfk3z2gfyfa5gty
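
For intuition, here is a minimal sketch of the orthogonal-regression half of the comparison: a total least squares line fit between invariant pixel intensities of two acquisitions. This is a generic illustration, not the MAD-based procedure itself; the data and names are invented.

```python
import numpy as np

def orthogonal_regression(x, y):
    """Fit y = a*x + b by orthogonal (total least squares) regression:
    the line through the centroid along the principal axis of the
    (x, y) scatter, minimizing perpendicular distances."""
    mx, my = x.mean(), y.mean()
    cov = np.cov(np.vstack([x - mx, y - my]))
    # Eigenvector of the largest eigenvalue gives the line direction.
    vals, vecs = np.linalg.eigh(cov)
    dx, dy = vecs[:, np.argmax(vals)]
    a = dy / dx
    b = my - a * mx
    return a, b

# Toy usage: rescale a "subject" band to a "reference" band using
# intensities of (hypothetical) invariant pixels.
rng = np.random.default_rng(1)
subject = rng.uniform(10, 200, size=500)
reference = 1.1 * subject + 5 + rng.normal(0, 2, size=500)
a, b = orthogonal_regression(subject, reference)
normalized = a * subject + b   # subject image on the reference scale
print(round(a, 2), round(b, 2))
```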


A Kernel-Based Two-Class Classifier for Imbalanced Data Sets

Xia Hong, Sheng Chen, Chris J. Harris
2007 IEEE Transactions on Neural Networks  
This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and the model selection criterion of maximal leave-one-out area under curve  ...  It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new regularized orthogonal weighted least squares parameter estimator, without  ...  Lee for providing the austempered ductile iron (ADI) data set and the reviewers for their valuable comments.  ... 
doi:10.1109/tnn.2006.882812 pmid:17278459 fatcat:qqcj6is45fepnoiikna6k3lkki
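
The ROWLS estimator and the LOO-AUC criterion are specific to this paper, but the underlying idea of weighting the squared loss by inverse class frequency can be shown generically. The following numpy example is a hedged sketch of weighted regularized kernel least squares for an imbalanced set, not the authors' algorithm; the kernel, weights, and data are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_weighted_kernel_ls(X, y, lam=1e-2, gamma=1.0):
    """Weighted regularized kernel least squares with class weights
    inversely proportional to class frequency, so the minority class
    contributes as much total loss as the majority class.
    Minimizes sum_i w_i (y_i - (K a)_i)^2 + lam * a^T K a, which (for
    nonsingular K) reduces to (W K + lam I) a = W y."""
    w = np.where(y > 0, 1.0 / np.sum(y > 0), 1.0 / np.sum(y < 0))
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(np.diag(w) @ K + lam * np.eye(len(y)), w * y)
    return alpha

rng = np.random.default_rng(2)
# Imbalanced toy set: 190 negatives, 10 positives.
Xn = rng.normal(-1.0, 1.0, size=(190, 2))
Xp = rng.normal(+1.5, 0.5, size=(10, 2))
X = np.vstack([Xn, Xp])
y = np.array([-1.0] * 190 + [+1.0] * 10)
alpha = fit_weighted_kernel_ls(X, y)
scores = rbf_kernel(X, X) @ alpha      # decision values; sign() classifies
print((np.sign(scores) == y).mean())
```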

Multiclass Classification and Feature Selection Based on Least Squares Regression with Large Margin

Haifeng Zhao, Siqi Wang, Zheng Wang
2018 Neural Computation  
Least squares regression (LSR) is a fundamental statistical analysis technique that has been widely applied to feature learning.  ...  As a consequence, we pay attention to the concepts of large margin and orthogonal constraint to propose a novel algorithm, orthogonal least squares regression with large margin (OLSLM), for multiclass  ...  Acknowledgments This research is supported in part by the National Natural Science Foundation of China (61402002, 61502002, 61300057); the Project Sponsored by the Scientific Research Foundation for the  ... 
doi:10.1162/neco_a_01116 pmid:30021086 fatcat:htn4lr3i7zc5jo4fa5gcvx2xjm
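
As context for what OLSLM improves on, here is the plain least squares regression baseline for multiclass classification: regress one-hot targets on the features and predict by argmax. A minimal ridge-regularized sketch; the names and toy data are illustrative, and this is not the paper's code.

```python
import numpy as np

def lsr_multiclass_fit(X, labels, n_classes, lam=1e-3):
    """Ridge-regularized least squares regression to one-hot targets --
    the LSR baseline that large-margin/orthogonal variants build on.
    Returns a weight matrix W whose last row is the bias."""
    Xb = np.hstack([X, np.ones((len(X), 1))])     # append bias column
    T = np.eye(n_classes)[labels]                 # one-hot target matrix
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ T)

def lsr_predict(X, W):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W, axis=1)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.5, size=(50, 2)) for c in (-2, 0, 2)])
labels = np.repeat([0, 1, 2], 50)
W = lsr_multiclass_fit(X, labels, 3)
print((lsr_predict(X, W) == labels).mean())
```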

Generalized Regularized Least-Squares Learning with Predefined Features in a Hilbert Space

Wenye Li, Kin-Hong Lee, Kwong-Sak Leung
2006 Neural Information Processing Systems  
Based on the representer theorem, the solution consists of a linear combination of translates of a kernel. This paper investigates a generalized form of the representer theorem for kernel-based learning.  ...  Using a squared-loss function in calculating the empirical error, a simple convex solution is obtained which combines predefined features with translates of the kernel.  ...  Haixuan Yang for useful discussions.  ... 
dblp:conf/nips/LiLL06 fatcat:pqxu2yludnhkrhzlvw7ujfkwku
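
The generalized representer theorem described here has a compact computational form. Under the usual semiparametric squared-loss setup (an assumption on my part; the notation k, Phi, lam is mine, not the paper's), stationarity yields a block linear system, sketched below with an RBF kernel.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def fit_semiparametric_rls(X, y, Phi, lam=1e-1, gamma=0.5):
    """Regularized least squares whose solution combines kernel translates
    with predefined (unpenalized) features Phi, per the generalized
    representer theorem:  f(x) = sum_i a_i k(x_i, x) + Phi(x) @ b.
    Stationarity of the squared loss gives the block system
        [K + lam*I  Phi] [a]   [y]
        [Phi^T       0 ] [b] = [0]."""
    n, m = Phi.shape
    K = rbf(X, X, gamma)
    A = np.block([[K + lam * np.eye(n), Phi],
                  [Phi.T, np.zeros((m, m))]])
    sol = np.linalg.solve(A, np.concatenate([y, np.zeros(m)]))
    return sol[:n], sol[n:]        # kernel coeffs a, feature coeffs b

# Toy usage: a linear trend (predefined feature) plus a nonlinear bump.
rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(80, 1))
y = 0.8 * X[:, 0] + np.exp(-X[:, 0] ** 2) + 0.05 * rng.normal(size=80)
Phi = np.hstack([np.ones((80, 1)), X])   # predefined features: 1, x
a, b = fit_semiparametric_rls(X, y, Phi)
print(b)   # b roughly recovers the trend (second entry near 0.8)
```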

A fast linear-in-the-parameters classifier construction algorithm using orthogonal forward selection to minimize leave-one-out misclassification rate

X. Hong, S. Chen, C. J. Harris
2008 International Journal of Systems Science  
We also thank the reviewers for their valuable comments.  ...  An analytic formula for the LOO misclassification rate is first derived, based on the regularized orthogonal least squares (ROLS) parameter estimates (Chen et al. 2004).  ...  The orthogonal least squares algorithm (Chen et al. 1989) was developed as a practical construction algorithm for linear-in-the-parameters models.  ... 
doi:10.1080/00207720701727822 fatcat:7cuxciouafdwnfhx2x376uksfe
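
The analytic-LOO idea is standard enough to demonstrate generically: for (regularized) least squares, every leave-one-out residual follows from the hat matrix without refitting. A sketch for ridge regression follows, not the paper's ROLS formulation; the data are synthetic.

```python
import numpy as np

def loo_residuals_ridge(X, y, lam=1e-2):
    """Leave-one-out residuals for regularized least squares, computed
    analytically (no n refits) via the hat-matrix identity
        e_loo_i = (y_i - yhat_i) / (1 - H_ii),
    where H = X (X^T X + lam I)^{-1} X^T.  The same trick is what makes
    LOO-based model selection cheap in (R)OLS-type algorithms."""
    G = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    H = X @ G
    return (y - H @ y) / (1.0 - np.diag(H))

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 0.0]) + 0.1 * rng.normal(size=100)
e = loo_residuals_ridge(X, y)
print(np.mean(e ** 2))   # analytic LOO mean squared error

# Brute-force check for one point (refit without sample 0):
mask = np.ones(100, dtype=bool); mask[0] = False
w = np.linalg.solve(X[mask].T @ X[mask] + 1e-2 * np.eye(5), X[mask].T @ y[mask])
print(y[0] - X[0] @ w, e[0])     # the two values agree
```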

Constrained Extreme Learning Machines: A Study on Classification Cases [article]

Wentao Zhu, Jun Miao, Laiyun Qing
2015 arXiv   pre-print
Extreme learning machine (ELM) is an extremely fast learning method and has shown powerful performance on pattern recognition tasks, as demonstrated by numerous researchers and engineers.  ...  In this paper, we propose new ways, named "constrained extreme learning machines" (CELMs), to randomly select hidden neurons based on the sample distribution.  ...  Least Mean Square (LMS) based methods, such as the Radial Basis Function network [11] and the No-Prop network [12] based on the LMS algorithm [13].  ... 
arXiv:1501.06115v2 fatcat:4e72pju7ivg77fa2o2fvlwzs2q
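
For readers unfamiliar with ELM, the baseline this paper builds on is short: a random, fixed hidden layer plus a single batch least squares solve for the output weights. A minimal sketch of vanilla ELM (not the proposed constrained variants; names and data are illustrative):

```python
import numpy as np

def elm_fit(X, T, n_hidden=100, seed=0):
    """Basic ELM: random input weights stay fixed; only the output
    weights are learned, by one least squares solve (pseudo-inverse)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    H = np.tanh(X @ W + b)           # random hidden-layer outputs
    beta = np.linalg.pinv(H) @ T     # batch least squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(-1, 0.7, (100, 2)), rng.normal(1, 0.7, (100, 2))])
labels = np.repeat([0, 1], 100)
T = np.eye(2)[labels]                # one-hot targets
params = elm_fit(X, T)
print((elm_predict(X, *params) == labels).mean())
```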

Incremental Local Linear Fuzzy Classifier in Fisher Space

Armin Eftekhari, Hamid Abrishami Moghaddam, Mohamad Forouzanfar, Javad Alirezaie
2009 EURASIP Journal on Advances in Signal Processing  
In addition, rule consequent parameters are optimized using a local least squares approach.  ...  In this paper, we introduce a novel incremental training algorithm for the class of neurofuzzy systems that are structured based on local linear classifiers.  ...  Aliyari for their constructive discussions and useful ideas. This research was supported in part by Iran Telecommunication Research Center under Grant T-500-9516.  ... 
doi:10.1155/2009/360834 fatcat:6lrkwkf55jgvxnxb4ftdujq764
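
The "local least squares" step for rule consequents can be illustrated generically: each local linear model is fit by weighted least squares, with the rule's membership values as sample weights. A sketch under that assumption (the validity function, data, and names are invented):

```python
import numpy as np

def local_wls(X, y, weights):
    """Weighted least squares for one local linear model (rule consequent):
    each sample is weighted by the rule's membership/validity value.
    Solves min_w sum_i m_i * (y_i - [1, x_i] @ w)^2 in closed form."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    sw = np.sqrt(weights)
    w, *_ = np.linalg.lstsq(sw[:, None] * Xb, sw * y, rcond=None)
    return w

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, size=200)
y = np.where(x < 5, 2 * x, 10 - 0.5 * (x - 5)) + 0.1 * rng.normal(size=200)
# Gaussian validity function of a (hypothetical) rule centered at x = 2:
m = np.exp(-0.5 * ((x - 2) / 1.5) ** 2)
print(local_wls(x[:, None], y, m))   # close to [0, 2]: local slope near x = 2
```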

Guest editorial: Special issue on Extreme learning machine and applications (I)

Zhihong Man, Guang-Bin Huang
2015 Neural computing & applications (Print)  
weights of the SLFN are globally optimized, by using the batch learning type of least squares, with a set of training data pairs selected to sufficiently and globally represent the input-and-output data  ...  The idea of uniformly randomly assigning input weights in a range for SLFNs is to realize such a linear separability of feature vectors in the high-dimensional feature space.  ...  In "Improving ELM-based microarray data classification by diversified sequence features selection," the authors present a diversified sequence feature selection-based ELM and the experimental results  ... 
doi:10.1007/s00521-015-2086-6 fatcat:vo6m4zgqffdpzattdmfnzsoguu
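
In the standard ELM notation (H for the hidden-layer output matrix, T for the targets, beta for the output weights; these symbols are assumed here, not quoted from the editorial), the batch least squares step described above is

```latex
\hat{\beta}
  = \arg\min_{\beta}\,\lVert H\beta - T\rVert_2^2
  = H^{\dagger} T,
\qquad
\hat{\beta}_{\mathrm{ridge}} = \bigl(H^{\top}H + \lambda I\bigr)^{-1} H^{\top} T,
```

where H-dagger is the Moore-Penrose pseudo-inverse and the second form is the common ridge-regularized variant.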

Computational Advances of Tumor Marker Selection and Sample Classification in Cancer Proteomics

Jing Tang, Yunxia Wang, Yongchao Luo, Jianbo Fu, Yang Zhang, Yi Li, Ziyu Xiao, Yan Lou, Yunqing Qiu, Feng Zhu
2020 Computational and Structural Biotechnology Journal  
First, a number of popular feature selection methods are reviewed, with objective evaluation of their advantages and disadvantages.  ...  To facilitate cancer diagnosis/prognosis and accelerate drug target discovery, a variety of methods for tumor marker identification and sample classification have been developed and successfully applied  ...  Analysis (LDA), Partial Least Squares Discriminant Analysis (PLS-DA), Orthogonal Partial Least Squares Discriminant Analysis (OPLS-DA), Sparse Partial Least Squares Discriminant Analysis (sPLS-DA), Discriminant  ... 
doi:10.1016/j.csbj.2020.07.009 pmid:32802273 pmcid:PMC7403885 fatcat:2ajwdqhsgvcifm724dr7uftoou
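
Of the discriminant methods the review lists, PLS-DA is the easiest to sketch: regress one-hot class labels with PLS and classify by the largest predicted score. A toy scikit-learn illustration (the synthetic "marker" data are invented, not from the review):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# PLS-DA in a few lines: PLS-regress one-hot class labels on a wide
# feature matrix, then classify by the largest predicted score.
rng = np.random.default_rng(8)
n, p = 60, 500                   # few samples, many features (proteomics-like)
labels = np.repeat([0, 1], 30)
X = rng.normal(size=(n, p))
X[labels == 1, :10] += 1.0       # 10 informative "marker" features

T = np.eye(2)[labels]            # one-hot coding of the classes
pls = PLSRegression(n_components=2).fit(X, T)
pred = pls.predict(X).argmax(axis=1)
print((pred == labels).mean())
```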

A Hybrid Data Mining Technique for Improving the Classification Accuracy of Microarray Data Set

Sujata Dash, Bichitrananda Patra, B.K. Tripathy
2012 International Journal of Information Engineering and Electronic Business  
Experimental results show that the Partial Least Squares (PLS) regression method is an appropriate feature selection method and that a combined use of different classification and feature selection approaches  ...  This paper provides a comparison between a dimension reduction technique, namely the Partial Least Squares (PLS) method, and a hybrid feature selection scheme, and evaluates the relative performance of four different  ...  In this study we developed a novel feature selection technique based on the Partial Least Squares (PLS) algorithm [30] [31] [32], which we call SIMPLS.  ... 
doi:10.5815/ijieeb.2012.02.07 fatcat:hrrl7pnrlbhylnti5gh22bhkiy
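
The combined pattern the paper evaluates, PLS for supervised dimension reduction followed by a separate classifier on the latent scores, looks roughly like this in scikit-learn. Note the substitution: sklearn's PLSRegression (NIPALS) stands in for the SIMPLS variant, and the microarray-like data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import KNeighborsClassifier

# PLS as supervised dimension reduction, then an ordinary classifier
# trained on the latent component scores.
rng = np.random.default_rng(9)
n, p = 80, 1000                   # microarray-like: n << p
labels = np.repeat([0, 1], 40)
X = rng.normal(size=(n, p))
X[labels == 1, :20] += 0.8        # a block of informative genes

pls = PLSRegression(n_components=3).fit(X, labels.astype(float))
Z = pls.transform(X)              # latent component scores (n x 3)
clf = KNeighborsClassifier(n_neighbors=5).fit(Z, labels)
print(clf.score(Z, labels))       # training accuracy on the scores
```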

Fractal Image Compression via Nearest Neighbor Search [chapter]

Dietmar Saupe
1998 Fractal Image Encoding and Analysis  
The fast search has been integrated into an existing state-of-the-art classification method, thereby accelerating the searches carried out in the individual domain classes.  ...  This result is useful for accelerating the encoding procedure in fractal image compression.  ...  The author thanks Klaus Bayer, Kai Uwe Barthel, Amitava Datta, Raouf Hamzaoui, Thomas Ottmann, and Sven Schuierer for fruitful discussions, Sunil Arya and Dave Mount for the fast nearest neighbor search  ... 
doi:10.1007/978-3-662-03512-2_6 fatcat:t6to6p2fgvewris4b2tbqx7c5y
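
The core trick can be sketched briefly: map each image block to a mean-subtracted, normalized feature vector so that small Euclidean distance between features corresponds to small least squares collage error, then use a kd-tree for the nearest neighbor search. An illustrative scipy sketch (random blocks stand in for real image data):

```python
import numpy as np
from scipy.spatial import cKDTree

def block_feature(block):
    """Map an image block to a feature: subtract the mean and normalize,
    so nearest neighbors in feature space are the best least squares
    matches (up to the per-block affine fit)."""
    v = block.astype(float).ravel()
    v -= v.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(10)
domains = rng.integers(0, 256, size=(5000, 4, 4))   # candidate domain blocks
ranges_ = rng.integers(0, 256, size=(100, 4, 4))    # range blocks to encode

tree = cKDTree(np.array([block_feature(d) for d in domains]))
dist, idx = tree.query(np.array([block_feature(r) for r in ranges_]), k=1)
print(idx[:5], dist[:5])    # best-matching domain index per range block
```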

Random Forests Feature Selection with K-PLS: Detecting Ischemia from Magnetocardiograms

Long Han, Mark J. Embrechts, Boleslaw K. Szymanski, Karsten Sternickel, Alexander Ross
2006 The European Symposium on Artificial Neural Networks  
In this paper, the random forests approach is extended for variable selection with other learning models, in this case Partial Least Squares (PLS) and Kernel Partial Least Squares (K-PLS), to estimate the  ...  Random Forests were introduced by Breiman for feature (variable) selection and improved predictions for decision tree models. The resulting model is often superior to AdaBoost and bagging approaches.  ...  Partial Least Squares (PLS) and K-PLS: Partial Least Squares Regression (PLS) was introduced by Herman Wold [1] for econometric modeling of multivariate time series.  ... 
dblp:conf/esann/HanESSR06 fatcat:l5ffrkzgqzf3nibumc2m2ieg6y
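
A rough sketch of the pipeline: rank variables by random-forest importances, keep the top few, and fit a PLS model on them. Plain PLS stands in for K-PLS below (scikit-learn ships no K-PLS); the data, the subset size, and the threshold are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_decomposition import PLSRegression

# Random-forest importances rank the variables; the top-ranked subset
# is then handed to a PLS model.
rng = np.random.default_rng(11)
n, p = 300, 50
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)   # few relevant inputs

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5] # keep 5 best variables
print("selected:", sorted(top))

pls = PLSRegression(n_components=2).fit(X[:, top], y.astype(float))
pred = (pls.predict(X[:, top]).ravel() > 0.5).astype(int)
print("train acc:", (pred == y).mean())
```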

Separable Linear Classifiers for Online Learning in Appearance Based Object Detection [chapter]

Christian Bauckhage, John K. Tsotsos
2005 Lecture Notes in Computer Science  
We demonstrate that separability not only leads to rapid runtime behavior but also enables very fast training.  ...  In this paper, we present an iterative optimization algorithm that learns separable linear classifiers from a sample of positive and negative example images.  ...  Based on positive and negative example images, we propose an iterative least mean squares technique for learning separable linear classifiers.  ... 
doi:10.1007/11556121_43 fatcat:dkwzgmviijdehicvd7gahuqw7q
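
Separable here means the 2-D filter is rank-1, W = u v^T, so evaluating it costs two 1-D correlations instead of one 2-D correlation. One simple way to learn such a classifier, an illustration rather than necessarily the authors' iterative scheme, is alternating regularized least squares over u and v:

```python
import numpy as np

def fit_separable_classifier(imgs, y, iters=20, lam=1e-3):
    """Learn a rank-1 (separable) linear classifier  score = u^T I v + b
    by alternating least squares: with v fixed, u solves an ordinary
    ridge problem on the projected images I @ v, and vice versa."""
    n, h, w = imgs.shape
    v = np.ones(w) / np.sqrt(w)
    for _ in range(iters):
        P = np.hstack([imgs @ v, np.ones((n, 1))])           # (n, h+1)
        sol = np.linalg.solve(P.T @ P + lam * np.eye(h + 1), P.T @ y)
        u = sol[:h]
        Q = np.hstack([np.einsum('nhw,h->nw', imgs, u), np.ones((n, 1))])
        sol = np.linalg.solve(Q.T @ Q + lam * np.eye(w + 1), Q.T @ y)
        v, b = sol[:w], sol[w]
    return u, v, b

rng = np.random.default_rng(12)
u0, v0 = rng.normal(size=8), rng.normal(size=8)
imgs = rng.normal(size=(400, 8, 8))
y = np.sign(np.einsum('nhw,h,w->n', imgs, u0, v0))   # separable ground truth
u, v, b = fit_separable_classifier(imgs, y)
pred = np.sign(np.einsum('nhw,h,w->n', imgs, u, v) + b)
print((pred == y).mean())
```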
Showing results 1 — 15 out of 28,241 results