A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. The file type is application/pdf.
Unsupervised Domain Adaptation: A Multi-task Learning-based Method
[article]
2018
arXiv
pre-print
Two novel algorithms are proposed based on the method, using Regularized Least Squares and Support Vector Machines respectively. ...
Specifically, the source and target domain classifiers are jointly learned by considering the geometry of the target domain and the divergence between the source and target domains based on the concept of ...
Regularized Least Squares Algorithm: the Regularized Least Squares algorithm (denoted mtUDA-RLS) can be expressed as $\min_{f_s, f_t \in \mathcal{H}_K} \frac{1}{n_s} \sum_{i=1}^{n_s} \left(y_i - f_s(x_i^s)\right)^2 + \frac{\gamma_I}{n_t^2} \mathrm{tr}\left(f_t^\top L f_t\right) + \gamma \ldots$
arXiv:1803.09208v1
fatcat:cdvom35pwnhudk5wnwe35tbvcm
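The truncated objective above is a Laplacian-regularized least squares: a squared source-domain loss, a graph-smoothness penalty on target predictions, plus (per the elided term) a norm regularizer. A minimal numpy sketch of that kind of objective, collapsing the paper's separate source and target classifiers into one linear model for brevity; names like mtuda_rls_objective, Xs, L_t, gamma_I are illustrative assumptions, not the paper's code:

    import numpy as np

    def mtuda_rls_objective(w, Xs, ys, Xt, L_t, gamma_I, gamma_A):
        """Sketch of a Laplacian-regularized least-squares objective.

        Mirrors the min_{f_s,f_t} formulation above with a single linear
        model w standing in for both classifiers (a simplification)."""
        ns, nt = Xs.shape[0], Xt.shape[0]
        f_s = Xs @ w                                   # source predictions
        f_t = Xt @ w                                   # target predictions
        data_term = np.sum((ys - f_s) ** 2) / ns       # (1/n_s) sum (y_i - f_s(x_i^s))^2
        smooth_term = (gamma_I / nt ** 2) * (f_t @ L_t @ f_t)  # (gamma_I/n_t^2) tr(f_t^T L f_t)
        return data_term + smooth_term + gamma_A * (w @ w)     # assumed ridge-style norm term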
Unsupervised Multi-class Regularized Least-Squares Classification
2012
2012 IEEE 12th International Conference on Data Mining
In this work we present an efficient implementation for the unsupervised extension of the multiclass regularized least-squares classification framework, which is, to the best of the authors' knowledge, ...
The regularized least-squares classification is one of the most promising alternatives to standard support vector machines, with the desirable property of closed-form solutions that can be obtained analytically ...
The authors would like to thank the anonymous reviewers for valuable comments and suggestions on an early version of this work. ...
doi:10.1109/icdm.2012.71
dblp:conf/icdm/PahikkalaAGK12
fatcat:oefkgx6itrbsfhaycqus2rcnwe
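The closed-form property the snippet highlights is easy to illustrate. A minimal sketch (not the authors' implementation) of multi-class regularized least squares with a one-hot label matrix:

    import numpy as np

    def rls_fit(X, Y, lam=1.0):
        """Multi-class regularized least squares, solved analytically:
        W = (X^T X + lam I)^{-1} X^T Y, with Y an (n x c) one-hot matrix."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

Prediction is then the argmax over the c columns of X_test @ W; no iterative optimization is needed, which is the efficiency argument made for RLS over standard SVMs.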
Dictionary Learning Based on Laplacian Score in Sparse Coding
[chapter]
2011
Lecture Notes in Computer Science
The results on binary-class and multi-class datasets from the UCI repository demonstrate the effectiveness and efficiency of our method. ...
Sparse coding, which represents a vector as a sparse linear combination of dictionary atoms, is widely applied in signal processing, data mining and neuroscience. ...
is based on iteratively solving two least-squares optimization problems: $\ell_1$-norm regularized and $\ell_2$-norm constrained. ...
doi:10.1007/978-3-642-23199-5_19
fatcat:ryfasojsfbhl7npxsv64ndj3im
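The alternation the snippet describes, an ℓ1-regularized code update followed by an ℓ2-constrained dictionary update, can be sketched as follows; this is a rough ISTA-based illustration under assumed names, not the paper's algorithm:

    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def sparse_coding_step(D, X, lam, n_iter=100):
        """ISTA for the l1-regularized subproblem
        min_A ||X - D A||_F^2 + lam ||A||_1, with dictionary D fixed."""
        lr = 1.0 / np.linalg.norm(D, 2) ** 2          # step from the spectral norm of D
        A = np.zeros((D.shape[1], X.shape[1]))
        for _ in range(n_iter):
            A = soft_threshold(A - lr * D.T @ (D @ A - X), lr * lam)
        return A

    def dictionary_step(X, A):
        """l2-constrained least-squares update: D = X A^T (A A^T)^{-1},
        with columns renormalized to approximate the norm constraint."""
        D = X @ A.T @ np.linalg.pinv(A @ A.T)
        return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

Alternating the two steps until the reconstruction error stabilizes is the generic scheme; the paper's contribution (Laplacian-score-based selection) sits on top of this loop.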
Multi-view Orthonormalized Partial Least Squares: Regularizations and Deep Extensions
[article]
2020
arXiv
pre-print
Building on the least squares reformulation of OPLS, we propose a unified multi-view learning framework to learn a classifier over a common latent space shared by all views. ...
We establish a family of subspace-based learning methods for multi-view learning using least squares as the fundamental basis. ...
Decision values of multi-class classifiers. ...
arXiv:2007.05028v1
fatcat:porzpnszvbhbbgwsvuwuz3c5py
Deep Matching Autoencoders
[article]
2017
arXiv
pre-print
We show promising results in image captioning, and on a new task that is uniquely enabled by our methodology: unsupervised classifier learning. ...
We simultaneously optimise parameters of representation learning auto-encoders and the pairing of unpaired multi-modal data. ...
To handle non-linearity in unsupervised multi-modal learning, kernel-based approaches were proposed, including Kernelized sorting (KS) [10, 26, 39] and least-squares object matching [46, 47]. ...
arXiv:1711.06047v1
fatcat:cujzxw5pwbg53oyb4w2x3jje4q
Lung and Pancreatic Tumor Characterization in the Deep Learning Era: Novel Supervised and Unsupervised Learning Approaches
2019
IEEE Transactions on Medical Imaging
Motivated by the radiologists' interpretations of the scans, we then show how to incorporate task-dependent feature representations into a CAD system via a graph-regularized sparse multi-task learning ...
In the second approach, we explore an unsupervised learning algorithm to address the limited availability of labeled training data, a common problem in medical imaging applications. ...
Thus, a least-squares regression function with $\ell_1$ regularization can be represented as: $\min_W \|XW - Y\|_2^2 + \lambda \|W\|_1$. (1) In the above equation the sparsity level of the coefficient to the bottom (attribute ...
doi:10.1109/tmi.2019.2894349
pmid:30676950
fatcat:woorhrucqjbcxhulknvh2irjta
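Equation (1) above is a standard lasso. As a hedged illustration, here is the same objective solved with scikit-learn's Lasso rather than whatever solver the paper employs; note sklearn's 1/(2n) loss scaling, so its alpha corresponds to lambda/(2n) in the notation of (1):

    import numpy as np
    from sklearn.linear_model import Lasso

    # Synthetic data purely for illustration: only the first 5 of 50
    # features carry signal, so the l1 penalty should zero out the rest.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 50))
    y = X[:, :5] @ np.ones(5) + 0.1 * rng.standard_normal(100)

    model = Lasso(alpha=0.1).fit(X, y)
    print((model.coef_ != 0).sum(), "non-zero coefficients")  # sparsity from ||W||_1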
Sparse, Efficient, and Semantic Mixture Invariant Training: Taming In-the-Wild Unsupervised Sound Separation
[article]
2021
arXiv
pre-print
To handle larger numbers of sources, we introduce an efficient approximation using a fast least-squares solution, projected onto the MixIT constraint set. ...
The recent mixture invariant training (MixIT) method enables training on in-the-wild data; however, it suffers from two outstanding problems. ...
To enable training with a large number of output sources, we proposed an efficient least-squares-based MixIT implementation. ...
arXiv:2106.00847v2
fatcat:spol4alhbjbypgz4ec6phf666a
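A plausible reading of the "fast least-squares solution, projected onto the MixIT constraint set" is: solve an unconstrained least-squares remixing problem, then snap each separated source to exactly one reference mixture. A sketch under that assumption (not the authors' code):

    import numpy as np

    def mixit_ls_assign(est_sources, mixtures):
        """est_sources: (M, T) separated sources; mixtures: (K, T) references.
        Solve min_A ||A S - X||_F^2 without constraints, then project each
        column of A onto the MixIT constraint (one mixture per source)."""
        S, X = est_sources, mixtures
        A_t, *_ = np.linalg.lstsq(S.T, X.T, rcond=None)  # solves S^T A^T ~= X^T
        A = A_t.T                                        # (K, M) continuous mixing matrix
        hard = np.zeros_like(A)
        hard[np.argmax(A, axis=0), np.arange(A.shape[1])] = 1.0  # one-hot per source
        return hard

This avoids enumerating all binary mixing matrices, which is what makes training with a large number of output sources tractable.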
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
[article]
2020
arXiv
pre-print
We generalize our results to the multi-class case, providing the first multi-class classification algorithm that is certifiably robust to label-flipping attacks. ...
Machine learning algorithms are known to be susceptible to data poisoning attacks, where an adversary manipulates the training data to degrade performance of the resulting classifier. ...
for pointing us to Dicker (2014) for help choosing the appropriate regularization term. ...
arXiv:2002.03018v4
fatcat:rm6pxwtzczhwng6n6usywzf5ti
Dual-Tree Wavelet Scattering Network with Parametric Log Transformation for Object Classification
[article]
2017
arXiv
pre-print
The advantages of the proposed network over other supervised and some unsupervised methods are also presented using experiments performed on different training dataset sizes. ...
We introduce a ScatterNet that uses a parametric log transformation with Dual-Tree complex wavelets to extract translation invariant representations from a multi-resolution image. ...
Next, orthogonal least squares (OLS) selects the subset of object-class-specific dimensions across the training data, similar to the fully connected layers in CNNs [1]. The presence of outliers ...
arXiv:1702.03267v1
fatcat:qx4udnpcgbg3vnamejuz7rdwge
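As a rough illustration of the feature-selection step described above, here is a greedy least-squares selector in the OLS/OMP family; the function name and interface are assumptions, not the paper's implementation:

    import numpy as np

    def ols_select(X, y, k):
        """Greedily pick k columns of X: at each step take the column most
        correlated with the current residual, re-fit least squares on the
        selected subset, and update the residual."""
        selected = []
        residual = y.astype(float).copy()
        for _ in range(k):
            scores = np.abs(X.T @ residual) / np.linalg.norm(X, axis=0)
            scores[selected] = -np.inf                 # never re-pick a column
            selected.append(int(np.argmax(scores)))
            Xs = X[:, selected]
            coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            residual = y - Xs @ coef                   # orthogonalize against subset
        return selected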
Discriminatively regularized least-squares classification
2009
Pattern Recognition
In this paper, we propose a novel regularization algorithm in the least-squares sense, called Discriminatively Regularized Least-Squares Classification (DRLSC) method, which is specifically designed for ...
Furthermore, by embedding equality type constraints in the formulation, the solutions of DRLSC can follow from solving a set of linear equations and the framework naturally contains multi-class problems ...
Acknowledgment: This work was supported by the National Natural Science Foundation of China (60773061). ...
doi:10.1016/j.patcog.2008.07.010
fatcat:2ikbssa3mvhqdmhi7cg52ebg2a
Dual Adversarial Co-Learning for Multi-Domain Text Classification
[article]
2019
arXiv
pre-print
We conduct experiments on multi-domain sentiment classification datasets. The results show the proposed approach achieves state-of-the-art MDTC performance. ...
under a discrepancy based co-learning framework, aiming to improve the classifiers' generalization capacity with the learned features. ...
This method uses two forms of loss to train the domain discriminator: the negative log-likelihood loss (MAN-NLL) and the least-squares loss (MAN-L2).
• MT-CNN: the deep neural network model which can learn ...
arXiv:1909.08203v1
fatcat:rrnw2zpkqbeaxlatfhpvjerhg4
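The two discriminator losses named in the snippet differ only in how predicted domain probabilities are scored against the true domain; a minimal sketch (illustrative, not the MAN authors' code):

    import numpy as np

    def nll_loss(probs, d):
        """MAN-NLL style: negative log-likelihood of the true domain labels d,
        with probs an (n, num_domains) row-stochastic matrix."""
        return -np.mean(np.log(probs[np.arange(len(d)), d] + 1e-12))

    def l2_loss(probs, onehot):
        """MAN-L2 style: least-squares loss against one-hot domain targets."""
        return np.mean(np.sum((probs - onehot) ** 2, axis=1))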
Unsupervised Domain Adaptation with Regularized Domain Instance Denoising
[chapter]
2016
Lecture Notes in Computer Science
We also exploit the source class labels as another way to regularize the loss, by using a domain classifier regularizer. ...
We report our findings and comparisons with state-of-the-art methods. ...
In all cases, the regularization term belongs to the class of squared loss functions. ...
doi:10.1007/978-3-319-49409-8_37
fatcat:f4apodgxhvc5db6h2y2mnnbueq
Ensemble deep learning: A review
[article]
2022
arXiv
pre-print
into ensemble models like bagging, boosting and stacking, negative-correlation-based deep ensemble models, explicit/implicit ensembles, homogeneous/heterogeneous ensembles, decision fusion strategies, unsupervised ...
This paper reviews the state-of-art deep ensemble models and hence serves as an extensive summary for the researchers. ...
The output of the models is combined via majority voting, least-squares estimation weighting, and a double-layer hierarchical approach. ...
arXiv:2104.02395v2
fatcat:lq73jqso5vadvnqfnnmw4zul4q
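The "least-squares estimation weighting" fusion strategy mentioned above amounts to regressing held-out targets on the base models' predictions; a minimal sketch with assumed names, not code from the review:

    import numpy as np

    def ls_ensemble_weights(preds, y):
        """preds: (n, m) matrix whose columns are m base models' predictions
        on a validation set. Solves min_w ||preds @ w - y||_2^2, giving the
        least-squares combination weights."""
        w, *_ = np.linalg.lstsq(preds, y, rcond=None)
        return w

At test time the combined prediction is simply preds_test @ w; majority voting is the analogous rule for hard class labels.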
Multi-category news classification using Support Vector Machine based classifiers
2020
SN Applied Sciences
Here, we are particularly interested in studying the behaviour of Least-squares Support Vector Machines, Twin Support Vector Machines and Least-squares Twin Support Vector Machines (LS-TWSVM) classifiers ...
Since all these are binary classifiers, they are extended using the One-Against-All approach to handle multi-category data. ...
Since there is a huge difference in the training and testing times of the least-squares classifiers and TWSVM, the y-axis is shown on a logarithmic scale. ...
doi:10.1007/s42452-020-2266-6
fatcat:2lbdqfitsjbp5ag7xoje4nuwp4
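The One-Against-All extension described in the snippet can be sketched generically; fit_binary is an assumed callable that trains on +1/-1 labels and returns a scoring function, not something from the paper:

    import numpy as np

    def one_against_all_fit(X, y, fit_binary):
        """Train one binary scorer per class, with that class as +1
        and all other classes as -1."""
        return {c: fit_binary(X, np.where(y == c, 1, -1)) for c in np.unique(y)}

    def one_against_all_predict(models, X):
        """Score X under every per-class model; the class whose model
        returns the highest decision value wins."""
        classes = sorted(models)
        scores = np.column_stack([models[c](X) for c in classes])
        return np.asarray(classes)[np.argmax(scores, axis=1)]

Any of the binary classifiers studied in the paper (LS-SVM, TWSVM, LS-TWSVM) can be dropped in as fit_binary, which is exactly why the One-Against-All wrapper is a fair basis for comparing them.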
Unsupervised and Supervised Visual Codes with Restricted Boltzmann Machines
[chapter]
2012
Lecture Notes in Computer Science
Firstly, we steer the unsupervised RBM learning using a regularization scheme, which decomposes into a combined prior for the sparsity of each feature's representation as well as the selectivity for each ...
Incorporated within the Bag of Words (BoW) framework, these techniques optimize the projection of local features into the visual codebook, leading to state-of-the-art performance on many benchmark datasets ...
Finally, we trained a linear SVM to perform multi-class classification of the test images. ...
doi:10.1007/978-3-642-33715-4_22
fatcat:vjoe6a7qlrdoxhlm424gnptu6y
Showing results 1 — 15 out of 12,040 results