292,880 Hits in 5.5 sec

The gradient complexity of linear regression [article]

Mark Braverman, Elad Hazan, Max Simchowitz, Blake Woodworth
2021 arXiv   pre-print
We investigate the computational complexity of several basic linear algebra primitives, including largest eigenvector computation and linear regression, in the computational model that allows access to  ...  Our lower bound is based on a reduction to estimating the least eigenvalue of a random Wishart matrix.  ...  Acknowledgements MB's research is supported in part by the NSF Alan T.  ... 
arXiv:1911.02212v3 fatcat:7p2b6vhbqfhpzodlih5fk3lwem
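
The lower bound in this paper is obtained via a reduction to estimating the least eigenvalue of a random Wishart matrix. As a rough illustration of that object (not the paper's construction), the sketch below samples a Wishart matrix in NumPy and computes its smallest eigenvalue; the dimensions are arbitrary choices.

```python
import numpy as np

# Illustration only: a Wishart matrix W = G^T G with G an n x d standard
# Gaussian matrix, and its least eigenvalue. Sizes are arbitrary, not taken
# from the paper.
rng = np.random.default_rng(0)
n, d = 500, 100
G = rng.standard_normal((n, d))
W = G.T @ G                              # d x d Wishart matrix, n degrees of freedom
least_eig = np.linalg.eigvalsh(W)[0]     # eigvalsh returns eigenvalues in ascending order
print(f"least eigenvalue of W: {least_eig:.2f}")
```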

Front Matter [chapter]

2015 Machine Learning in Python®  
Acknowledgments I'd like to acknowledge the splendid support that people at Wiley have offered  ...  Whereas penalized linear regression evolved from overcoming the limitations of ordinary regression, ensemble methods evolved to overcome the limitations of binary decision trees.  ...  to Random Forest Regression in Python, 275; Assessing Performance and the Importance of Coded Variables, 278; Coding the Sex of Abalone for Gradient Boosting Regression in Python, 278; Assessing Performance  ... 
doi:10.1002/9781119183600.fmatter fatcat:f2hzkauhhfcrdgteqrlip32gee

PREDICTION OF RETENTION TIME FOR GRADIENT ELUTION USING STEPWISE ELUTION EQUATION

SUN HO HAN, SEUNG SOO KIM, KIH SOO JOE, MOO YUL SUH, TAE YOON EOM
1991 Analytical Sciences  
The capacity factor for each metal-ligand complex was obtained from the measured k' by non-linear regression. The same calculation was performed on the retention time of metal ions for gradient elution.  ...  The slope from a plot of log k' against log [OH⁻] was obtained by a linear regression. The calculated k' was applied to predict the retention time of each inorganic anion for a gradient elution.  ... 
doi:10.2116/analsci.7.supple_1363 fatcat:hspc36twz5a7djr56gu3ex365m
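
The snippet above describes obtaining the slope of log k' against log [OH⁻] by linear regression. A minimal NumPy sketch of that fit is shown below; the concentration and k' values are invented placeholders, not data from the paper.

```python
import numpy as np

# Invented eluent concentrations and measured capacity factors (k') for illustration.
oh_conc = np.array([0.01, 0.02, 0.05, 0.10])   # [OH-] in mol/L (placeholder values)
k_prime = np.array([12.0, 6.3, 2.7, 1.4])      # measured k' (placeholder values)

# Slope and intercept of log k' vs log [OH-] by ordinary least squares.
slope, intercept = np.polyfit(np.log10(oh_conc), np.log10(k_prime), 1)
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
```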

WEATHER FORECASTING USING REGRESSION

Sahil Makhijani, Anurag Dubey, Ankush Makhijani
2020 International Journal of Engineering Applied Sciences and Technology  
Weather forecasting has traditionally been done with complex physical simulation models built from the equations of fluid dynamics and thermodynamics.  ...  Simple linear regression predicts a dependent variable value (y) from a given independent variable (x).  ...  The most common form of regression analysis is linear regression, in which a researcher finds the line (or a more complex linear function) that most closely fits the data according to a specific mathematical  ... 
doi:10.33564/ijeast.2020.v05i06.022 fatcat:hl5mdegrenathfsjrfpo72izee
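
The snippet above defines simple linear regression as predicting a dependent value y from a single independent variable x. A minimal least-squares fit on synthetic data (not the paper's weather records) might look like this:

```python
import numpy as np

# Synthetic data for illustration: y = 2 + 0.8*x plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.8 * x + rng.normal(scale=0.5, size=x.size)

# Ordinary least-squares estimates of intercept (b0) and slope (b1).
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_pred = b0 + b1 * x                     # predicted dependent values
print(f"intercept = {b0:.3f}, slope = {b1:.3f}")
```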

Intelligible models for classification and regression

Yin Lou, Rich Caruana, Johannes Gehrke
2012 Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '12  
Since the shape functions can be arbitrarily complex, GAMs are more accurate than simple linear models.  ...  Complex models for regression and classification have high accuracy, but are unfortunately no longer interpretable by users.  ...  Any opinions, findings, conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the sponsors.  ... 
doi:10.1145/2339530.2339556 dblp:conf/kdd/LouCG12 fatcat:msvii6wxnnfnzpsm4gojcbsrby
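
A GAM predicts the response as an intercept plus a sum of per-feature shape functions. The sketch below fits crude piecewise-constant shape functions by backfitting on synthetic data; it only illustrates the additive structure and is not the shape-function learners studied in the paper.

```python
import numpy as np

def fit_gam_backfit(X, y, n_bins=10, n_iters=20):
    """Fit y ~ intercept + sum_j f_j(x_j) with piecewise-constant shape functions."""
    n, d = X.shape
    intercept = y.mean()
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)) for j in range(d)]
    bins = [np.searchsorted(edges[j][1:-1], X[:, j]) for j in range(d)]
    f = np.zeros((d, n_bins))
    for _ in range(n_iters):
        for j in range(d):
            # Partial residual: everything except feature j's current contribution.
            resid = y - intercept - sum(f[k][bins[k]] for k in range(d) if k != j)
            for b in range(n_bins):
                mask = bins[j] == b
                if mask.any():
                    f[j, b] = resid[mask].mean()
            f[j] -= f[j][bins[j]].mean()       # keep each shape function centered
    return intercept, edges, f

# Synthetic additive data (invented): y = x0^2 + sin(x1) + noise.
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(500, 2))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)
intercept, edges, shapes = fit_gam_backfit(X, y)
```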

A computing method of predictive value based on fitting function in linear model

Hao Zhong, Huibing Zhang, Fei Jia
2020 EAI Endorsed Transactions on Collaborative Computing  
Linear models are common prediction models in collaborative computing, which mainly generate a fitting function to express the relationship between feature vectors and the predicted value.  ...  In the process of computing the predicted value from the fitting function and feature vector, this paper mainly conducted the following research.  ...  The complexity of Batch Gradient Descent and Mini-Batch Gradient Descent is O(N_s²). According to the above analysis, the minimum complexity of the linear model is O(N_s).  ... 
doi:10.4108/eai.2-10-2020.166542 fatcat:n27w33dpura2xkrvtlzkoms5pe
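
The snippet above contrasts the cost of Batch and Mini-Batch Gradient Descent for fitting a linear model. A minimal mini-batch version on synthetic data is sketched below; the learning rate, batch size, and epoch count are arbitrary choices, not values from the paper.

```python
import numpy as np

def minibatch_gd(X, y, lr=0.01, batch_size=32, epochs=100, seed=0):
    """Fit weights w for the linear model y ~ X @ w with mini-batch gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            grad = 2.0 * X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)  # MSE gradient
            w -= lr * grad
    return w

# Synthetic data (invented): true weights [1.5, -2.0, 0.5].
rng = np.random.default_rng(3)
X = rng.standard_normal((1000, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)
print(minibatch_gd(X, y))
```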

Accelerated Stochastic Block Coordinate Gradient Descent for Sparsity Constrained Nonconvex Optimization

Jinghui Chen, Quanquan Gu
2016 Conference on Uncertainty in Artificial Intelligence  
We prove that the algorithm converges to the unknown true parameter at a linear rate, up to the statistical error of the underlying model.  ...  The core of our algorithm is leveraging both stochastic partial gradient and full partial gradient restricted to each coordinate block to accelerate the convergence.  ...  Acknowledgements We would like to thank the anonymous reviewers for their helpful comments.  ... 
dblp:conf/uai/ChenG16 fatcat:a4bf2izbeffgzcq4azxxtid6qu
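
The algorithm in this paper mixes stochastic and full partial gradients over coordinate blocks under a sparsity constraint. A far simpler relative of that idea, shown below, is plain iterative hard thresholding: take a gradient step and keep only the s largest-magnitude coordinates. This is only a baseline sketch, not the authors' accelerated block-coordinate method.

```python
import numpy as np

def hard_threshold(w, s):
    """Zero out all but the s largest-magnitude entries of w."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-s:]
    out[keep] = w[keep]
    return out

def iht_regression(X, y, s, lr=0.01, n_iters=500):
    """Sparsity-constrained least squares via iterative hard thresholding."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n
        w = hard_threshold(w - lr * grad, s)
    return w

# Synthetic sparse problem (invented): only the first 5 coefficients are nonzero.
rng = np.random.default_rng(4)
X = rng.standard_normal((300, 100))
w_true = np.zeros(100)
w_true[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]
y = X @ w_true + rng.normal(scale=0.05, size=300)
w_hat = iht_regression(X, y, s=5)
```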

A Comparison of Gradient Estimation Methods for Volume Rendering on Unstructured Meshes

C D Correa, R Hero, Kwan-Liu Ma
2011 IEEE Transactions on Visualization and Computer Graphics  
Through a number of benchmarks, we discuss the effects of mesh quality and scalar function complexity on the accuracy of the reconstruction, and their impact on lighting-enabled volume rendering.  ...  The second heuristic improves the efficiency of its GPU implementation by restricting the computation of the gradient to a fixed-size local neighborhood.  ...  of Energy through the SciDAC program with Agreement No.  ... 
doi:10.1109/tvcg.2009.105 pmid:21233515 fatcat:avfx5l2ar5egpmay6rfqyc6cjy
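
Gradient estimation on an unstructured mesh is commonly done by fitting a local linear model of the scalar field over a vertex's neighbors. The sketch below shows that generic least-squares estimator with invented sample points; it does not reproduce the specific heuristics or GPU restrictions evaluated in the paper.

```python
import numpy as np

def ls_gradient(p0, neighbors, f0, f_neighbors):
    """Least-squares gradient of a scalar field at p0 from neighboring samples.

    Solves min_g sum_i ((p_i - p0) . g - (f_i - f0))^2.
    """
    A = neighbors - p0          # displacement vectors, shape (k, 3)
    b = f_neighbors - f0        # scalar differences, shape (k,)
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g

# Invented example: f(x, y, z) = 2x - y + 3z sampled at a few nearby points.
field = lambda p: 2 * p[..., 0] - p[..., 1] + 3 * p[..., 2]
p0 = np.array([0.0, 0.0, 0.0])
neighbors = np.array([[0.1, 0.0, 0.0],
                      [0.0, 0.1, 0.0],
                      [0.0, 0.0, 0.1],
                      [0.1, 0.1, 0.1]])
print(ls_gradient(p0, neighbors, field(p0), field(neighbors)))   # close to [2, -1, 3]
```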

An Optimized way to Solve Regression Problems

Jyothi Vishnu Vardhan Kola (B.Tech student, Department of Computer Science and Engineering, GITAM University, Visakhapatnam, Andhra Pradesh)
2021 International Journal of Engineering and Advanced Technology  
In this paper, we work with complex data using traditional machine learning regression algorithms, by performing data cleaning and data transformation according to the working principle of those  ...  In the case of noisy (inconsistent) and incomplete datasets, a large number of previous works adopted complex, non-traditional machine learning approaches in order to get accurate predictions.  ...  Linear regression is the simplest of all the contemporary regression techniques.  ... 
doi:10.35940/ijeat.e2873.0810621 fatcat:cczcbckkxrffpmhaocy6jc6cpy

Gradient-based optimization for regression in the functional tensor-train format

Alex A. Gorodetsky, John D. Jakeman
2018 Journal of Computational Physics  
e.g., symmetric kernels with moving centers, within each core can outperform the standard approach of using a linear expansion of basis functions.  ...  The FT is represented by a set of matrix-valued functions that contain a set of univariate functions, and the regression task is to learn the parameters of these univariate functions.  ...  Now that we have described the computational complexity of the gradient computation, we summarize the computational complexity of the proposed optimization algorithms.  ... 
doi:10.1016/j.jcp.2018.08.010 fatcat:6uyyo2ccjrc57fxftvbw4lnwbm

PID TUNING OF FOPDT SYSTEM USING MULTIVARIATE LINEAR REGRESSION WITH GRADIENT DESCENT

Thaker Maharsh Kalpeshbhai, Patel Vinod Purushottamdas
2021 International Journal of Engineering Applied Sciences and Technology  
The Multiple Linear Regression algorithm from machine learning and a Gradient Descent based optimization algorithm are used to obtain the PID parameters.  ...  This paper presents a method of obtaining an FOPDT model from a system's transient specifications, generating a data-set from the FOPDT model, and then applying Multivariate Linear Regression with Gradient Descent to  ...  These techniques are more complex for a simple linear system. Linear regression with gradient descent is studied in papers [10] and [11] for first-order and second-order systems respectively.  ... 
doi:10.33564/ijeast.2021.v05i09.030 fatcat:exyp2uzcvvbx5etrepklly7dra
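
The paper maps FOPDT model parameters to PID gains with multivariate linear regression trained by gradient descent. The sketch below shows that general pattern on an invented dataset; the feature set (K, tau, theta), the synthetic targets, and the hyperparameters are placeholders, not the paper's data or tuning rules.

```python
import numpy as np

# Invented dataset: features are FOPDT parameters (K, tau, theta), targets are
# PID gains (Kp, Ki, Kd) generated from an arbitrary linear map for illustration.
rng = np.random.default_rng(5)
fopdt = rng.uniform([0.5, 1.0, 0.1], [2.0, 10.0, 2.0], size=(200, 3))
true_map = rng.uniform(-1.0, 1.0, size=(3, 3))
pid = fopdt @ true_map + rng.normal(scale=0.01, size=(200, 3))

X = np.hstack([np.ones((200, 1)), fopdt])   # prepend an intercept column
W = np.zeros((4, 3))                        # weights: 4 features -> 3 gains
lr = 1e-3
for _ in range(20000):
    grad = 2.0 * X.T @ (X @ W - pid) / len(X)   # full-batch MSE gradient
    W -= lr * grad
```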

Alternating Minimization Converges Super-Linearly for Mixed Linear Regression [article]

Avishek Ghosh, Kannan Ramchandran
2020 arXiv   pre-print
In this paper, we close this gap between theory and practice for the special case of a mixture of 2 linear regressions.  ...  We have unlabeled observations coming from multiple linear regressions, and each observation corresponds to exactly one of the regression models.  ...  The idea of superlinear convergence of AM originated with Ashwin Pananjady for the problem of real phase retrieval. We also thank the reviewers of AISTATS 2020 for their insightful remarks.  ... 
arXiv:2004.10914v2 fatcat:osnqsvt7uzabflet76y4cbncbe
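
Alternating minimization for a mixture of two linear regressions repeats two steps: assign each observation to whichever regressor currently fits it better, then refit each regressor by least squares on its assigned points. The sketch below uses a random initialization for brevity; results in this literature typically assume a careful (e.g., spectral) initialization, so treat it as an illustration of the iteration only.

```python
import numpy as np

def am_mixed_regression(X, y, n_iters=50, seed=0):
    """Alternating minimization for a mixture of two linear regressions."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    w1, w2 = rng.standard_normal(d), rng.standard_normal(d)
    for _ in range(n_iters):
        # Assignment step: label each point by the model with smaller squared residual.
        mask = (y - X @ w1) ** 2 <= (y - X @ w2) ** 2
        # Refit step: ordinary least squares on each group (skip empty groups).
        if mask.any():
            w1 = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
        if (~mask).any():
            w2 = np.linalg.lstsq(X[~mask], y[~mask], rcond=None)[0]
    return w1, w2

# Synthetic mixture (invented): each observation follows one of two regressors.
rng = np.random.default_rng(6)
X = rng.standard_normal((400, 5))
labels = rng.integers(0, 2, size=400)
w_a, w_b = rng.standard_normal(5), rng.standard_normal(5)
y = np.where(labels == 0, X @ w_a, X @ w_b) + rng.normal(scale=0.05, size=400)
w1, w2 = am_mixed_regression(X, y)
```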

Index [chapter]

2015 Machine Learning in Python®  
variables, 278, 282-284; gradient boosting regression, 278-282; random forest regression, 275-278; ensemble packages, 255-256; random forest model, 256-270; errors, out-of-sample, 80; factor variables  ...  regression, 111; complex models, compared to simple models, 82-86; complexity balancing, 102-103; simple problems versus complex problems, 80-82; complexity parameter, 110; confusion matrix, 91; contingency  ... 
doi:10.1002/9781119183600.index fatcat:ruawxijkkrdm3opj76ipi4wxsq

Correction: The complex dynamics of products and its asymptotic properties

Orazio Angelini, Matthieu Cristelli, Andrea Zaccaria, Luciano Pietronero
2017 PLoS ONE  
We characterize it and call it the asymptotic market; we find that its shape depends on the Complexity value of the product.  ...  We couple the new economic dimension Complexity, which captures how sophisticated products are, with an index called logPRODY, a weighted average of the Gross Domestic Products per capita of a product's  ...  All the regressions are linear, with the zero-order coefficient set to zero.  ... 
doi:10.1371/journal.pone.0186436 pmid:29020048 pmcid:PMC5636154 fatcat:3hjmxaydnfb6rp2mpol3bk3h4u
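
The last fragment notes that all regressions in the correction are linear with the zero-order (intercept) coefficient fixed at zero. For a single regressor, the least-squares slope of such a regression through the origin reduces to sum(x_i*y_i) / sum(x_i^2), as in this small sketch with invented numbers:

```python
import numpy as np

# Regression through the origin: fit y ~ m * x with no intercept term.
x = np.array([1.0, 2.0, 3.0, 4.0])   # invented data
y = np.array([2.1, 3.9, 6.2, 7.8])
m = np.dot(x, y) / np.dot(x, x)      # least-squares slope with intercept fixed at 0
print(f"slope through origin: {m:.3f}")
```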

MACHINE-LEARNING MODELS FOR PREDICTING PATIENT SURVIVAL AFTER LIVER TRANSPLANTATION

Wojciech Jarmulski, Alicja Wieczorkowska, Mariusz Trzaska, Michal Ciszek, Leszek Paczek
2018 Computer Science  
As the second contribution, we have identified that full-complexity models such as random forests and gradient boosting lack sufficient interpretability despite having the best predictive power, which is  ...  linear models.  ...  The authors would like to express their gratitude to the Department of Immunology, Transplantology, and Internal Diseases at the Medical University of Warsaw for providing the dataset.  ... 
doi:10.7494/csci.2018.19.2.2746 fatcat:scfw4kcedjhg7lryeearhpjn6q
Showing results 1 — 15 out of 292,880 results