
Risk-Based DEA Efficiency and SSD Efficiency of OECD Members Stock Indices

Yonca Erdem Demirtaş, Neslihan Fidan Keçeci
2018 Alphanumeric Journal  
In the Risk-Based DEA, traditional and modern risk measures are used as inputs of the model and the mean return as an output.  ...  We use Data Envelopment Analysis (DEA) methodology and the Second Order Stochastic Dominance (SSD) criteria as efficiency metrics.  ...  However, contrary to their findings, when we plug the traditional risk measure, the standard deviation of index returns, into the Risk-Based DEA model, the total number of efficient indices increases  ... 
doi:10.17093/alphanumeric.345483 fatcat:b2b3hfwlhzhdpifkwrpfrro4ai
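As an illustration of the Risk-Based DEA setup described in this abstract (risk measures as DEA inputs, mean return as the output), the following is a minimal input-oriented CCR sketch; the toy data, function name, and SciPy solver choice are assumptions made here for illustration, not the paper's exact model.

    # Minimal input-oriented CCR DEA sketch: inputs = risk measures per index,
    # output = mean return per index. Illustrative only, not the paper's model.
    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_input_oriented(X, Y):
        """X: (n_dmus, n_inputs) risk measures; Y: (n_dmus, n_outputs) mean returns.
        Returns an efficiency score theta per DMU (theta == 1 means efficient)."""
        n, m = X.shape
        _, s = Y.shape
        scores = np.empty(n)
        for o in range(n):
            # decision variables: [theta, lambda_1, ..., lambda_n]
            c = np.r_[1.0, np.zeros(n)]
            # sum_j lambda_j * x_ij - theta * x_io <= 0   (input constraints)
            A_in = np.c_[-X[o].reshape(m, 1), X.T]
            b_in = np.zeros(m)
            # -sum_j lambda_j * y_rj <= -y_ro             (output constraints)
            A_out = np.c_[np.zeros((s, 1)), -Y.T]
            b_out = -Y[o]
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[b_in, b_out],
                          bounds=[(0, None)] * (n + 1), method="highs")
            scores[o] = res.x[0]
        return scores

    # toy example: 4 indices, inputs = [std dev, CVaR], output = [mean return]
    X = np.array([[0.12, 0.05], [0.18, 0.09], [0.15, 0.06], [0.20, 0.12]])
    Y = np.array([[0.08], [0.07], [0.10], [0.06]])
    print(dea_ccr_input_oriented(X, Y))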

Multi-class pairwise linear dimensionality reduction using heteroscedastic schemes

Luis Rueda, B. John Oommen, Claudio Henríquez
2010 Pattern Recognition  
The experimental results obtained on benchmark datasets demonstrate that the proposed methods are not only efficient, but that they also yield accuracies comparable to those obtained by the optimal Bayes  ...  classifier.  ...  Experimental Results: In order to evaluate the performance of the new LDRC multi-class schemes, we present an empirical analysis based on measuring the accuracies of the classifiers tested.  ... 
doi:10.1016/j.patcog.2010.01.018 fatcat:7dixpwn6mzf3dg2h7bwhfgb4pe

Pairwise Costs in Multiclass Perceptrons

Sarunas Raudys, Aistis Raudys
2010 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Minimization of the loss requires a smaller number of training epochs. The efficacy of cost-sensitive methods depends on the cost matrix, the overlap of the pattern classes, and sample sizes.  ...  Experiments with real-world pattern recognition (PR) tasks show that employment of the novel loss function usually outperforms three benchmark methods.  ...  Duin and R. Somorjai for providing data for the experiments and the anonymous reviewers for their useful and challenging remarks.  ... 
doi:10.1109/tpami.2010.72 pmid:20489234 fatcat:aaxll3svancadf6674howlex6a

LSD-C: Linearly Separable Deep Clusters [article]

Sylvestre-Alvise Rebuffi, Sebastien Ehrhardt, Kai Han, Andrea Vedaldi, Andrew Zisserman
2020 arXiv   pre-print
Our algorithm first establishes pairwise connections in the feature space between the samples of the minibatch based on a similarity metric.  ...  Then it regroups the connected samples into clusters and enforces a linear separation between clusters.  ...  This work is supported by the EPSRC Programme Grant Seebibyte EP/M013774/1, Mathworks/DTA DFR02620, and ERC IDIU-638009.  ... 
arXiv:2006.10039v1 fatcat:7il5piv2zbbwnpmcgtjolh3b4m
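A hedged sketch of the two steps summarized in this abstract, pairwise connection followed by regrouping of connected samples; the cosine similarity, the fixed threshold, and the connected-components regrouping are illustrative assumptions rather than the exact LSD-C procedure.

    # Sketch: connect mini-batch samples by similarity, then regroup connected
    # samples into clusters. Similarity metric and threshold are assumptions.
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def cluster_minibatch(feats, threshold=0.9):
        """feats: (batch, dim) feature vectors. Returns a cluster label per sample."""
        # step 1: pairwise connections from a similarity metric (here: cosine)
        normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        sim = normed @ normed.T
        adj = csr_matrix(sim > threshold)
        # step 2: regroup connected samples into clusters
        _, labels = connected_components(adj, directed=False)
        return labels

    feats = np.random.randn(8, 16).astype(np.float32)
    print(cluster_minibatch(feats))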

Learning from Noisy Similar and Dissimilar Data [article]

Soham Dan, Han Bao, Masashi Sugiyama
2020 arXiv   pre-print
One such kind of supervision is provided pairwise---in the form of Similar (S) pairs (if two examples belong to the same class) and Dissimilar (D) pairs (if two examples belong to different classes).  ...  Finally, we perform experiments on synthetic and real-world datasets and show that our noise-informed algorithms outperform noise-blind baselines in learning from noisy pairwise data.  ...  Further, we see that the clean S-D performances match the best P-N performance, which empirically verifies that the optimal classifiers for learning from noise-free standard P-N and pairwise S-D data coincide  ... 
arXiv:2002.00995v1 fatcat:wmt75wfy5jfwbkmyjeazbpa2iu
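To make the pairwise supervision concrete, here is a small sketch that builds Similar/Dissimilar pairs from class labels and flips a fraction of them to simulate label noise; the symmetric noise model and flip rate are assumptions for illustration, not the paper's noise model.

    # Build S/D pairs from labels, then inject symmetric pair-label noise.
    import numpy as np
    from itertools import combinations

    def make_noisy_pairs(y, flip_prob=0.2, seed=0):
        rng = np.random.default_rng(seed)
        pairs, labels = [], []
        for i, j in combinations(range(len(y)), 2):
            pairs.append((i, j))
            labels.append(1 if y[i] == y[j] else 0)   # 1 = Similar, 0 = Dissimilar
        labels = np.array(labels)
        flip = rng.random(len(labels)) < flip_prob     # symmetric label noise
        return pairs, np.where(flip, 1 - labels, labels)

    y = np.array([0, 0, 1, 1, 1])
    pairs, noisy = make_noisy_pairs(y)
    print(list(zip(pairs, noisy)))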

On data depth and distribution-free discriminant analysis using separating surfaces

Anil K. Ghosh, Probal Chaudhuri
2005 Bernoulli  
A very well-known traditional approach in discriminant analysis is to use some linear (or nonlinear) combination of measurement variables which can enhance class separability.  ...  One of these classifiers is closely related to Tukey's half-space depth, while the other is based on the concept of regression depth.  ...  Recall that in our case, for any pairwise classification, an observation x is classified depending on the sign of the linear score α̂ᵀz + β̂.  ... 
doi:10.3150/bj/1110228239 fatcat:swtygz77uvcvthiqqtqjsj2dgy

Customer-Centric Decision Support

Stefan Lessmann, Stefan Voß
2010 Business & Information Systems Engineering  
Therefore, this paper contributes to the literature by conducting an empirical study that compares the performance of modern classifiers to that of established ones in terms of predictive accuracy and economic  ...  Classification analysis contributes to the support of several corporate decision-making tasks.  ...  To that end, an empirical benchmark experiment is undertaken, which contrasts several established and novel classifiers regarding a monetary accuracy measure.  ... 
doi:10.1007/s12599-010-0094-8 fatcat:ku6yftrmmveuhm6urngnqnmfji

Pairwise Margin Maximization for Deep Neural Networks [article]

Berry Weinstein, Shai Fine, Yacov Hel-Or
2021 arXiv   pre-print
In this paper, we explain why this commonly used principle is not optimal and propose a new regularization scheme, called Pairwise Margin Maximization (PMM), which measures the minimal amount of displacement  ...  We demonstrate empirically a substantial improvement when training a deep neural network with PMM compared to the standard regularization terms.  ...  MARGIN ANALYSIS FOR BINARY AND MULTI-CLASS CLASSIFICATION: The maximal margin principle is traditionally presented in the context of a shallow linear classifier [8], [11].  ... 
arXiv:2110.04519v1 fatcat:paw4icqstnfyhlrddq5ajmchfi

Support vector machine with hypergraph-based pairwise constraints

Qiuling Hou, Meng Lv, Ling Zhen, Ling Jing
2016 SpringerPlus  
The comprehensive experimental results on twenty-five datasets demonstrate the validity and advantage of our approach.  ...  into the linear case.  ...  Acknowledgements: This work is supported by the National Natural Science Foundation of China (No. 11371365, No. 11671032).  ... 
doi:10.1186/s40064-016-3315-x pmid:27722068 pmcid:PMC5035294 fatcat:rpw4szhxr5dgfb5zhapstfoani

A multi-objective evolutionary algorithm-based soft computing model for educational data mining

Choo Jun Tan, Ting Yee Lim, Chin Wei Bong, Teik Kooi Liew
2017 AAOU Journal  
It is tested on benchmark data pertaining to student activities and achievement obtained from the University of California at Irvine machine learning repository.  ...  (DT)-based classifier, in classifying and optimising the students' online interaction activities as a classifier of student achievement.  ...  An empirical comparison between the standard C4.5 classifier and the proposed model on benchmark tests. Notes: mean ± standard deviation over 30 experimental runs, with computed p-values.  ... 
doi:10.1108/aaouj-01-2017-0012 fatcat:ckczt5prr5henegsz76npix5wa

Overcome Support Vector Machine Diagnosis Overfitting

Henry Han, Xiaoqian Jiang
2014 Cancer Informatics  
We found that disease diagnosis under an SVM classifier would inevitably encounter overfitting under a Gaussian kernel because of the large data variations generated from high-throughput profiling technologies  ...  Finally, we propose a novel biomarker discovery algorithm, Gene-Switch-Marker (GSM), to capture meaningful biomarkers by taking advantage of SVM overfitting on single genes.  ...  For example, the sparseness degree is currently selected in a purely empirical rather than optimal way.  ... 
doi:10.4137/cin.s13875 pmid:25574125 pmcid:PMC4264614 fatcat:wyo6kvhl65dwxaaey3q5u74nrq

Ensemble-based discriminant learning with boosting for face recognition

J. Lu, K.N. Plataniotis, A.N. Venetsanopoulos, S.Z. Li
2006 IEEE Transactions on Neural Networks  
In this paper, we propose a novel ensemble-based approach to boost the performance of traditional Linear Discriminant Analysis (LDA)-based methods used in face recognition.  ...  Index Terms: Boosting, face recognition (FR), linear discriminant analysis, machine learning, mixture of linear models, small-sample-size (SSS) problem, strong learner.  ...  National Institute of Standards and Technology (NIST) for providing the FERET database.  ... 
doi:10.1109/tnn.2005.860853 pmid:16526485 fatcat:6ou23yccqrf2jghauja46lk5bm

Learning to Diversify via Weighted Kernels for Classifier Ensemble [article]

Xu-Cheng Yin and Chun Yang and Hong-Wei Hao
2014 arXiv   pre-print
Extensive experiments on 32 UCI classification benchmark datasets show that the proposed approach consistently outperforms state-of-the-art ensembles such as Bagging, AdaBoost, Random Forests  ...  Given a list of available component classifiers, how to combine them adaptively and diversely into an ensemble remains a major challenge in the literature.  ...  In this paper, we focus on the linear combination of classifiers.  ... 
arXiv:1406.1167v1 fatcat:tvaqkcoefbc2dajcwfs5rxobt4

MBA: Mini-Batch AUC Optimization [article]

San Gultekin, Avishek Saha, Adwait Ratnaparkhi, John Paisley
2018 arXiv   pre-print
Area under the receiver operating characteristic curve (AUC) is an important metric for a wide range of signal processing and machine learning problems, and scalable methods for optimizing AUC have recently  ...  This paper proposes a novel approach to AUC maximization, based on sampling mini-batches of positive/negative instance pairs and computing U-statistics to approximate a global risk minimization problem  ...  THEORETICAL ANALYSIS: Solving the regularized empirical risk minimization problem in Eq. (14) requires processing N pairwise samples.  ... 
arXiv:1805.11221v2 fatcat:6ptlxlguj5h23ojczcfjrhdzyy
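A rough sketch of the mini-batch idea in this abstract: approximate the pairwise AUC risk by averaging a surrogate loss over all positive/negative score pairs in a mini-batch (a U-statistic estimate); the squared-hinge surrogate and the batch sizes are assumptions made here, not the paper's exact objective.

    # U-statistic estimate of pairwise AUC risk over a mini-batch of scores.
    import numpy as np

    def minibatch_auc_risk(scores_pos, scores_neg):
        """Average surrogate loss over all positive/negative score pairs."""
        diff = scores_pos[:, None] - scores_neg[None, :]   # s(x+) - s(x-)
        return np.mean(np.maximum(0.0, 1.0 - diff) ** 2)   # squared hinge on the margin

    rng = np.random.default_rng(0)
    pos = rng.normal(1.0, 1.0, size=32)   # mini-batch of positive scores
    neg = rng.normal(0.0, 1.0, size=32)   # mini-batch of negative scores
    print(minibatch_auc_risk(pos, neg))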

Dissecting cancer heterogeneity based on dimension reduction of transcriptomic profiles using extreme learning machines

Kejun Wang, Xin Duan, Feng Gao, Wei Wang, Liangliang Liu, Xin Wang, Kwong-Kwok Wong
2018 PLoS ONE  
Define a standard SLFN with Ñ hidden nodes, which is also the dimension of the expected feature space, and an activation function g(x).  ... 
doi:10.1371/journal.pone.0203824 pmid:30216380 pmcid:PMC6138406 fatcat:sdh4lzkiircdlb6lvhspb2ojf4
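Matching the SLFN description quoted above, here is a minimal extreme learning machine sketch: Ñ randomly initialized hidden nodes with activation g, and output weights fit in closed form by least squares; the sigmoid activation, hidden-layer size, and toy data are illustrative assumptions rather than the paper's configuration.

    # Minimal ELM: random hidden layer, output weights via pseudo-inverse.
    import numpy as np

    def elm_fit(X, T, n_hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (not trained)
        b = rng.normal(size=n_hidden)                  # random hidden biases
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # g(x): sigmoid hidden outputs
        beta = np.linalg.pinv(H) @ T                   # output weights by least squares
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta

    X = np.random.randn(100, 20)
    T = np.eye(3)[np.random.randint(0, 3, 100)]        # one-hot targets for 3 classes
    W, b, beta = elm_fit(X, T)
    print(elm_predict(X[:5], W, b, beta).argmax(axis=1))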