A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
Dataset Meta-Learning from Kernel Ridge-Regression
[article]
2021
arXiv
pre-print
networks and kernel ridge-regression (KRR). ...
We introduce a meta-learning algorithm called Kernel Inducing Points (KIP) for obtaining such remarkable datasets, inspired by the recent developments in the correspondence between infinitely-wide neural ...
This leads to our first-order meta-learning algorithm KIP (Kernel Inducing Points), which uses kernel-ridge regression to learn ε-approximate datasets. ...
arXiv:2011.00050v3
fatcat:4mcohhti7ndcnig4w37nlhfrau
A Stacking Ensemble Learning Framework for Genomic Prediction
2021
Frontiers in Genetics
For each trait, SELF performed better than base learners, which included support vector regression (SVR), kernel ridge regression (KRR) and elastic net (ENET). ...
Machine learning (ML) is perhaps the most useful tool for the interpretation of large genomic datasets. ...
Kernel Ridge Regression: The difference between KRR and ridge regression is that KRR exploits the kernel trick to define a higher-dimensional feature space and then builds the ridge regression model in ...
doi:10.3389/fgene.2021.600040
pmid:33747037
pmcid:PMC7969712
fatcat:772jktwrrbg2fhwligpow6t6ju
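The KRR definition in the snippet above (ridge regression lifted into a kernel-induced feature space via the kernel trick) admits a closed-form solution. The following is a generic illustration, not code from the cited paper; the RBF kernel, regularization strength, and toy data are all assumed:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix of the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit_predict(X_train, y_train, X_test, lam=1e-2, gamma=1.0):
    # Closed-form KRR: alpha = (K + lam*I)^{-1} y, prediction = K_test @ alpha
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# Toy 1-D regression: noisy samples of sin(x)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
X_new = np.linspace(-3, 3, 5)[:, None]
print(krr_fit_predict(X, y, X_new))
```

The higher-dimensional feature space never appears explicitly: only the Gram matrix K is needed, which is what distinguishes KRR from plain ridge regression.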
Pointwise probability reinforcements for robust statistical inference
2014
Neural Networks
Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). ...
Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. ...
Reinforced kernel ridge regression is allowed a maximum of 20 iterations to learn a robust model. ...
doi:10.1016/j.neunet.2013.11.012
pmid:24300550
fatcat:knh7gtgn2jgj3bgctuazzozkx4
Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization
[article]
2022
arXiv
pre-print
Specifically, the algorithms for bi-level optimization are also being used in meta learning approaches for few-shot learning. ...
We propose a novel 3d shape representation for 3d shape reconstruction from a single image. ...
Here, we can draw from multiple options. In particular, we experimented with kernel SVMs and kernel ridge regression. ...
arXiv:2010.08276v2
fatcat:xm7vvyxizndkpisl7sgse6mthu
Prognosis of Yield of Crop using Machine Learning Techniques
2021
Zenodo
In this paper we use advanced regression techniques such as Kernel Ridge, Lasso, E-net and polynomial regression algorithms to predict yield, and a stacked regression concept to enhance the algorithms ...
With improvement in the machine learning models the ability to improve the prediction of crop yield is also high. ...
Kernel Ridge Regression: The non-parametric form of ridge regression is known as kernel ridge regression. It combines ridge regression with the kernel trick. ...
doi:10.5281/zenodo.5441061
fatcat:tqlipzufofflpdkxhemurusbc4
A Comparative Analysis of the Ensemble Methods for Drug Design
[article]
2020
arXiv
pre-print
Ensemble-based machine learning approaches have been used to overcome limitations and generate reliable predictions. Ensemble learning creates a set of diverse models and combines them. ...
In this configuration, 57 algorithms were developed and compared on 4 different datasets. Thus, a technique for complex ensemble method is proposed that builds diversified models and integrates them. ...
Kernel ridge regression (KRR): Kernel ridge regression combines ridge regression (linear least squares with l2-norm regularization) with the kernel trick. ...
arXiv:2012.07640v1
fatcat:p4scqkd5wrgurb7hbmjlyqrkh4
Meta-matching: a simple framework to translate phenotypic predictive models from big to small data
[article]
2020
bioRxiv
pre-print
Here, we propose a simple framework - "meta-matching" - to translate predictive models from large-scale datasets to new unseen non-brain-imaging phenotypes in boutique studies. ...
Therefore, a unique phenotype from a boutique study likely correlates with (but is not the same as) some phenotypes in some large-scale datasets. ...
outperforms predictions from classical kernel ridge regression (KRR). ...
doi:10.1101/2020.08.10.245373
fatcat:naqrbeqw7zailpw6wyr7zqdvyy
Learning to Learn Kernels with Variational Random Features
[article]
2020
arXiv
pre-print
In this work, we introduce kernels with random Fourier features in the meta-learning framework to leverage their strong few-shot learning ability. ...
Experimental results on a variety of few-shot regression and classification tasks demonstrate that MetaVRF delivers much better, or at least competitive, performance compared to existing meta-learning ...
Method We first describe the base-learner based on the kernel ridge regression in meta-learning for few-shot learning, and then introduce kernel learning with random features, based on which our meta variational ...
arXiv:2006.06707v2
fatcat:cnw4cqcj4jcddculybknqgmqcq
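The random Fourier features mentioned in the MetaVRF snippet approximate a shift-invariant kernel with an explicit finite-dimensional feature map (in the Rahimi–Recht style), so kernel methods like the KRR base-learner scale linearly in the number of samples. A minimal sketch, assuming an RBF kernel; the dimensions and sampling scale are illustrative:

```python
import numpy as np

def rff_features(X, W, b):
    # Random Fourier features: z(x) = sqrt(2/D) * cos(W x + b),
    # so that z(x) . z(x') approximates k(x, x') in expectation
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, D, gamma = 3, 2000, 0.5
# For k(x, x') = exp(-gamma * ||x - x'||^2), the spectral density is
# Gaussian, so frequencies are sampled as W ~ N(0, 2*gamma*I)
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

X = rng.normal(size=(10, d))
Z = rff_features(X, W, b)
K_approx = Z @ Z.T
K_exact = np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())  # small for large D
```

With the explicit map z(x) in hand, the ridge-regression base-learner can be solved in the D-dimensional feature space instead of on the n-by-n Gram matrix.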
Epileptic seizure detection: a comparative study between deep and traditional machine learning techniques
2020
Journal of Integrative Neuroscience
Traditional machine learning techniques such as decision tree, random forest, extra tree, ridge classifier, logistic regression, K-Nearest Neighbor, Naive Bayes (gaussian), and Kernel Support Vector Machine ...
The collection of electroencephalography recordings contained in the dataset comprises 179 attributes and 11,500 instances. ...
Acknowledgment We are thankful to the authors of the freely available dataset used in this paper.
Conflict of Interest The authors declare no conflict of interest. ...
doi:10.31083/j.jin.2020.01.24
pmid:32259881
fatcat:3z4uifm2cjcibbnx5fqhddcpde
Meta-Learning With Differentiable Convex Optimization
2019
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. ...
We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition ...
"RR" stands for ridge regression. ...
doi:10.1109/cvpr.2019.01091
dblp:conf/cvpr/LeeMRS19
fatcat:gcrmqlejsrdgdnrnrtp5zhcq6a
Meta-Learning with Differentiable Convex Optimization
[article]
2019
arXiv
pre-print
Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. ...
We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition ...
"RR" stands for ridge regression. ...
arXiv:1904.03758v2
fatcat:lgpeaofwhnf7bjdf6h2d7qpnd4
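Both versions of this paper use a ridge-regression ("RR") solver as a differentiable base learner for few-shot episodes. The closed-form solve that makes this end-to-end differentiable can be sketched as follows; the episode shapes and the value of lam are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def ridge_base_learner(X_support, Y_support, lam=1.0):
    # Closed-form ridge regression over an episode's support set:
    # W = (X^T X + lam*I)^{-1} X^T Y, differentiable w.r.t. the features X
    d = X_support.shape[1]
    A = X_support.T @ X_support + lam * np.eye(d)
    return np.linalg.solve(A, X_support.T @ Y_support)

# Toy 5-way 1-shot episode with 16-dimensional features
rng = np.random.default_rng(0)
X_sup = rng.normal(size=(5, 16))
Y_sup = np.eye(5)                 # one-hot class labels
W = ridge_base_learner(X_sup, Y_sup)
logits = X_sup @ W                # query scores (here: the support itself)
print(logits.shape)
```

In a meta-learning framework, gradients flow through this linear solve into the feature extractor that produced X_support, which is the sense in which the convex inner optimization is "differentiable".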
Transformative Machine Learning
[article]
2018
arXiv
pre-print
human gene expression (across different tissue types and drug treatments), and meta-learning for machine learning (predicting which machine learning methods work best for a given problem). ...
Recently it has been demonstrated that, given sufficient data, deep neural networks can learn effective implicit representations from simple input representations. ...
Acknowledgements The authors would like to thank Rafael Mantovani for generating the original meta-learning data used in this study. ...
arXiv:1811.03392v1
fatcat:ka3albyamrdpza3p6he2pewymi
Improving range shift predictions: enhancing the power of traits
[article]
2021
bioRxiv
pre-print
We evaluate the predictive performance of four different machine learning approaches that can capture nonlinear relationships (ridge-regularized linear regression, ridge-regularized kernel regression, support vector regression, and random forests). ...
Agreement is sometimes strongest within regression (OLS, Ridge) and machine learning (Kernel Ridge, SVR) type models. ...
doi:10.1101/2021.02.15.431292
fatcat:rfrgv7cp3nah5di3zlez3necs4
Efficient Pairwise Learning Using Kernel Ridge Regression: an Exact Two-Step Method
[article]
2016
arXiv
pre-print
In this work we analyze kernel-based methods for pairwise learning, with a particular focus on a recently-suggested two-step method. ...
In addition, the two-step method allows us to establish novel algorithmic shortcuts for efficient training and validation on very large datasets. ...
We will also study the different learning algorithms from a spectral filtering point of view, showing that two-step kernel ridge regression uses a special decomposable filter. ...
arXiv:1606.04275v1
fatcat:am3yswdiqvcozjkkrjilopagia
Meta-Learning Priors for Efficient Online Bayesian Regression
[article]
2018
arXiv
pre-print
We find our approach outperforms kernel-based GP regression, as well as state of the art meta-learning approaches, thereby providing a promising plug-in tool for many regression tasks in robotics where ...
ALPaCA extracts all prior information directly from the dataset, rather than restricting prior information to the choice of kernel hyperparameters. ...
Our work also shares common features with recent work on meta-learning with closed form solvers [37] , in which the authors do meta-learning with a ridge regression last layer. ...
arXiv:1807.08912v2
fatcat:l2m2oohlyzdrxbcsk7a2t5piii
Showing results 1 — 15 out of 1,355 results