4,044 Hits in 5.2 sec

On the Power of Preconditioning in Sparse Linear Regression [article]

Jonathan Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi
2021 arXiv   pre-print
In this work, we give upper and lower bounds clarifying the power of preconditioning in sparse linear regression.  ...  First, we show that the preconditioned Lasso can solve a large class of sparse linear regression problems nearly optimally: it succeeds whenever the dependency structure of the covariates, in the sense  ...  We thank Ankur Moitra, Pablo Parrilo, Arsen Vasilyan, Philippe Rigollet, Guy Bresler, Dylan Foster, Tselil Schramm, and Matthew Brennan for valuable conversations on related topics.  ... 
arXiv:2106.09207v1 fatcat:2tgcs3sidfb6rokxwuqw4jnyza
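The preconditioned Lasso described in this abstract amounts to running the Lasso on a reparameterized design matrix. A minimal numpy sketch, assuming a simple diagonal (column-normalizing) preconditioner S and a basic ISTA solver — both illustrative choices, not the paper's construction:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize 0.5*||y - X w||^2 + lam*||w||_1 via ISTA."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ w - y)
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

def preconditioned_lasso(X, y, S, lam, n_iter=500):
    """Run the Lasso on the reparameterized design X @ S, then map back."""
    v = lasso_ista(X @ S, y, lam, n_iter)
    return S @ v
```

With an ill-scaled design, normalizing the columns through S conditions the problem so that ISTA converges quickly; the estimate is mapped back to the original coordinates via w = S v.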

Towards Neural Sparse Linear Solvers [article]

Luca Grementieri, Paolo Galeone
2022 arXiv   pre-print
Large sparse symmetric linear systems appear in several branches of science and engineering thanks to the widespread use of the finite element method (FEM).  ...  In addition, they are inherently sequential, making them unable to leverage the GPU processing power entirely.  ...  Using this input representation, we can cast the resolution of a sparse linear system as a node regression task.  ... 
arXiv:2203.06944v1 fatcat:2rtfjeankbgztawvasgww5c7bu

Using Spatial Data Mining to Predict the Solvability Space of Preconditioned Sparse Linear Systems

Shuting Xu, SangBae Kim, Jun Zhang
2016 Computer Technology and Application  
The solution of large sparse linear systems is one of the most important problems in large scale scientific computing.  ...  of preconditioned iterative methods.  ...  This statistic will be equal to one if the fit is perfect, and to zero when the regressors X have no explanatory power. σ^2 is the standard error of the regression.  ... 
doi:10.17265/1934-7332/2016.03.003 fatcat:s5i5kezqibailmvsmylrjeprqy
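The R^2 statistic the snippet refers to can be computed directly from an ordinary least-squares fit; a small illustrative helper, not code from the paper:

```python
import numpy as np

def r_squared(X, y):
    """Coefficient of determination for an OLS fit of y on X (with intercept).

    Equals 1 for a perfect fit and is near 0 when X has no explanatory power.
    """
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid                    # residual sum of squares
    ss_tot = ((y - y.mean()) ** 2).sum()      # total sum of squares
    return 1.0 - ss_res / ss_tot
```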

Numerical Methods of Complex Valued Linear Algebraic System

Shi-Liang Wu, Shu-Qian Shen, Masoud Hajarian, Jia Liu, Lev A. Krukier
2015 Journal of Applied Mathematics  
large sparse linear systems.  ...  Complex linear systems are an important branch of linear systems, arising in a number of scientific computing and engineering applications such as wave propagation and diffuse optical  ...  Acknowledgments The guest editors of this special issue express their sincere gratitude to all the authors and anonymous reviewers who have generously contributed to this special issue.  ... 
doi:10.1155/2015/979234 fatcat:gys22hqi7fbsrivy66tqpter3i

Hierarchical Matrices Method and Its Application in Electromagnetic Integral Equations

Han Guo, Jun Hu, Hanru Shao, Zaiping Nie
2012 International Journal of Antennas and Propagation  
In this paper, a novel sparse approximate inverse (SAI) preconditioner in multilevel fashion is proposed to accelerate the convergence rate of Krylov iterations for solving H-matrices systems in electromagnetic  ...  Finally, numerical experiments are given to demonstrate the advantages of the proposed multilevel preconditioner compared to conventional "single level" preconditioners and the practicability of the fast  ...  Acknowledgments This work is supported by the Fundamental Science Research Foundation of National Central University for Doctoral Program (E022050205) and partly supported by NSFC (no. 60971032), the Programme  ... 
doi:10.1155/2012/756259 fatcat:qy5vyffcjjedldgk6573mpyziy

The Bayesian lasso for genome-wide association studies

Jiahan Li, Kiranmoy Das, Guifang Fu, Runze Li, Rongling Wu
2010 Computer applications in the biosciences : CABIOS  
Our approach obviates the choice of the lasso parameter by imposing a diffuse hyperprior on it and estimating it along with the other parameters, and is particularly powerful for selecting the most relevant  ...  Method: We propose a two-stage procedure for multi-SNP modeling and analysis in GWASs, first producing a 'preconditioned' response variable using supervised principal component analysis and then formulating  ...  Lasso penalized regression Given phenotypical measurements and genotype information, we can obtain the preconditioned response ỹ based on the generic form of linear regression (1).  ... 
doi:10.1093/bioinformatics/btq688 pmid:21156729 pmcid:PMC3105480 fatcat:rlkyn4jy4bgvraccck62fh7vre

Personalized Predictive Models for Symptomatic COVID-19 Patients Using Basic Preconditions: Hospitalizations, Mortality, and the Need for an ICU or Ventilator [article]

Salomon Wollenstein-Betech, Christos G. Cassandras, Ioannis Ch. Paschalidis
2020 biorxiv/medrxiv  
The rapid global spread of the virus SARS-CoV-2 has provoked a spike in demand for hospital care.  ...  Results: Interpretable methods (logistic regression and support vector machines) perform just as well as more complex models in terms of accuracy and detection rates, with the additional benefit of elucidating  ...  Discussion of the results can be found in Section 4 and Conclusions in Section 5.  ... 
doi:10.1101/2020.05.03.20089813 pmid:32511489 pmcid:PMC7273257 fatcat:ledvmy5hqffebifwhlo57dgt2a

Energy Analysis of a Solver Stack for Frequency-Domain Electromagnetics

Emmanuel Agullo, Luc Giraud, Stephane Lanteri, Gilles Marait, Anne-Cecile Orgerie, Louis Poirel
2019 2019 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)  
This solver stack combines a high order finite element discretization framework of the system of three-dimensional frequency-domain Maxwell equations with an algebraic hybrid iterative-direct sparse linear  ...  To highlight this difficulty on a concrete use-case, we perform an energy and power analysis of a software stack for the simulation of frequency-domain electromagnetic wave propagation.  ...  As for the linear solver, the best option in terms of energy might not be the best one in terms of power.  ... 
doi:10.1109/empdp.2019.8671555 dblp:conf/pdp/AgulloGLMOP19 fatcat:7lnjilmcdjaopcvtiappsxdaiq

A Preconditioned Variant of the Refined Arnoldi Method for Computing PageRank Eigenvectors

Zhao-Li Shen, Hao Yang, Bruno Carpentieri, Xian-Ming Gu, Chun Wen
2021 Symmetry  
In this paper, we propose a novel preconditioning approach for solving the PageRank model. This approach transforms the original PageRank eigen-problem into a new one that is more amenable to solve.  ...  The PageRank model computes the stationary distribution of a Markov random walk on the linking structure of a network, and it uses the values within to represent the importance or centrality of each node  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/sym13081327 fatcat:d343yiylozb7flyqznvw6sway4
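For context, the PageRank eigen-problem that this paper preconditions is typically solved at baseline by power iteration on the Google matrix; a dense-matrix sketch for illustration (the paper's preconditioned refined Arnoldi variant is more elaborate):

```python
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10):
    """Power iteration for the PageRank vector of adjacency matrix A.

    A[i, j] = 1 if page i links to page j. Dangling nodes are given
    uniform outlinks, as in the standard Google-matrix construction.
    """
    n = A.shape[0]
    out = A.sum(axis=1)
    # Row-stochastic transition matrix; dangling rows teleport uniformly.
    P = np.where(out[:, None] > 0, A / np.maximum(out, 1)[:, None], 1.0 / n)
    x = np.full(n, 1.0 / n)
    while True:
        x_new = alpha * (P.T @ x) + (1 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
```

The returned vector is the stationary distribution of the damped random walk; its entries rank the nodes by centrality.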

Near-Optimal Entrywise Sampling of Numerically Sparse Matrices [article]

Vladimir Braverman, Robert Krauthgamer, Aditya Krishnan, Shay Sapir
2021 arXiv   pre-print
This measure of a row/column a is smooth and is clearly no larger than its number of non-zeros.  ...  Finally, we demonstrate two applications of these sampling techniques: faster approximate matrix multiplication, and ridge regression using sparse preconditioners.  ...  Using iterative methods to solve the preconditioned problem requires applying P^{-1}M to a vector in each iteration. In the case of ridge regression, M = A^⊤A + λI.  ... 
arXiv:2011.01777v2 fatcat:q6krepcf7vbhderxnoi4wjomf4
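The ridge-regression application can be made concrete: preconditioned iterative solvers touch M = A^⊤A + λI only through matrix-vector products. A sketch using conjugate gradients with a Jacobi (diagonal) preconditioner standing in for the paper's sampled sparse preconditioner:

```python
import numpy as np

def ridge_pcg(A, b, lam, tol=1e-10, max_iter=500):
    """Solve (A^T A + lam*I) x = A^T b by preconditioned conjugate gradients.

    M is never formed explicitly; each iteration applies M and the
    preconditioner inverse to a vector.
    """
    rhs = A.T @ b
    d = (A * A).sum(axis=0) + lam            # diag(A^T A) + lam (Jacobi)
    mv = lambda v: A.T @ (A @ v) + lam * v   # matrix-vector product with M
    x = np.zeros(A.shape[1])
    r = rhs - mv(x)
    z = r / d
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Mp = mv(p)
        alpha = rz / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        if np.linalg.norm(r) < tol:
            break
        z = r / d
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```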

Distributional Hardness Against Preconditioned Lasso via Erasure-Robust Designs [article]

Jonathan A. Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi
2022 arXiv   pre-print
Recent work has shown that, for certain covariance matrices, the broad class of Preconditioned Lasso programs provably cannot succeed on polylogarithmically sparse signals with a sublinear number of samples  ...  This leaves open the possibility that, for example, trying multiple different preconditioners solves every sparse linear regression problem.  ...  The Preconditioned Lasso The classical approach to solving sparse linear regression is by solving a convex program known as the Lasso [Tib96] .  ... 
arXiv:2203.02824v1 fatcat:kdhb4ccoafeipbz6figrqutae4

Faster p-Norm Regression Using Sparsity [article]

Mehrdad Ghadiri, Richard Peng, Santosh S. Vempala
2021 arXiv   pre-print
We show that recent progress on fast sparse linear solvers can be leveraged to obtain faster-than-matrix-multiplication algorithms for any p > 1, i.e., running in time Õ(pn^θ) for some θ < ω, the matrix multiplication  ...  This algorithm runs in time Õ(nnz(A) + d^4) for any 1 < p ≤ 2, and in time Õ(nnz(A) + d^θ) for p close to 2, improving on the previous best bound, where the exponent of d grows with max{p, p/(p-1)}.  ...  Recent progress on linear systems [25] shows how to solve sufficiently sparse linear systems faster than n^ω, i.e., with asymptotic complexity that grows as a power of n strictly smaller than ω.  ... 
arXiv:2109.11537v2 fatcat:arut4ej46zexzdg2rxpi4l7c5e
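As background, a classical (slower) baseline for p-norm regression is iteratively reweighted least squares; the sketch below is this textbook method, not the sparse-solver-based algorithm of the paper:

```python
import numpy as np

def pnorm_regression(A, b, p, n_iter=100, eps=1e-8):
    """Approximate argmin_x ||A x - b||_p via iteratively reweighted
    least squares (IRLS); reliable for 1 < p < 2 with the eps guard."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # p = 2 warm start
    for _ in range(n_iter):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)  # per-row weights |r|^(p-2)
        Aw = A * w[:, None]                        # W A
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)    # weighted least squares
    return x
```

Each IRLS step is itself a linear-system solve with matrix A^T W A, which is exactly where fast sparse linear solvers enter the picture.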

A Comparative Study on Different Parallel Solvers for Nonlinear Analysis of Complex Structures

Lei Zhang, Guoxin Zhang, Lixiang Wang, Zhaosong Ma, Shihai Li
2013 Mathematical Problems in Engineering  
preconditioned Krylov subspace solver based on MPI, (3) a parallel sparse equation solver based on OpenMP, and (4) a parallel GPU equation solver.  ...  A comparative study on these parallel solvers is made, and the results show that the parallelization makes SAPTIS more efficient, powerful, and adaptable.  ...  Acknowledgments The authors would like to acknowledge the financial support of the National Natural Science Foundation of China  ... 
doi:10.1155/2013/764237 fatcat:ftigdfgnq5fwnprcyqr7qxi36e

Quantum assisted Gaussian process regression [article]

Zhikuan Zhao, Jack K. Fitzsimons, Joseph F. Fitzsimons
2015 arXiv   pre-print
Gaussian processes (GP) are a widely used model for regression problems in supervised machine learning. Implementation of GP regression typically requires O(n^3) logic gates.  ...  We show that even in some cases not ideally suited to the quantum linear systems algorithm, a polynomial increase in efficiency still occurs.  ...  Even if y cannot be replaced by a sparse vector, the variance of the estimate of the linear predictor will scale only linearly in n, meaning that the estimation process must be repeated a linear number of  ... 
arXiv:1512.03929v1 fatcat:ieujkmst5bfulfomtrre7pcjqm
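The O(n^3) cost mentioned above comes from factorizing the n × n kernel matrix in exact GP regression; a minimal classical (non-quantum) implementation of the posterior mean, with an RBF kernel chosen purely for illustration:

```python
import numpy as np

def gp_posterior_mean(X, y, X_star, lengthscale=1.0, noise=0.1):
    """Exact GP regression mean: m(X_*) = K(X_*, X) (K + sigma^2 I)^{-1} y.

    The Cholesky factorization of the n x n matrix is the O(n^3) step.
    """
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    K = rbf(X, X) + noise ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                                # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))      # (K + s^2 I)^{-1} y
    return rbf(X_star, X) @ alpha
```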

Preconditioning Kernel Matrices [article]

Kurt Cutajar, Michael A. Osborne, John P. Cunningham, Maurizio Filippone
2016 arXiv   pre-print
We show this approach is exact in the limit of iterations and outperforms state-of-the-art approximations for a given computational budget.  ...  Even so, conjugate gradient is not without its own issues: the conditioning of kernel matrices is often such that conjugate gradients will have poor convergence in practice.  ...  Acknowledgements KC and MF are grateful to Pietro Michiardi and Daniele Venzano for assisting the completion of this work by providing additional computational resources for running the experiments.  ... 
arXiv:1602.06693v2 fatcat:5yzr4ph72jcfxjw4ade2ueibrq
Showing results 1 — 15 out of 4,044