86,981 Hits in 4.4 sec

High-dimensional estimation via sum-of-squares proofs [article]

Prasad Raghavendra, Tselil Schramm, David Steurer
2019 arXiv   pre-print
Understanding and characterizing the power of sum-of-squares proofs for estimation problems has been a subject of intense study in recent years.  ...  High-dimensional estimation problems arise naturally in statistics, machine learning, and complexity theory.  ...  degree-4k sum-of-squares proof.  ... 
arXiv:1807.11419v2 fatcat:edxdn2ctrbhyzd7ufy5qtrhdty

List Decodable Subspace Recovery [article]

Prasad Raghavendra, Morris Yau
2020 arXiv   pre-print
resilient" capturing state of the art results for list decodable mean estimation and regression.  ...  Given a dataset where an α fraction (less than half) of the data is distributed uniformly in an unknown k dimensional subspace in d dimensions, and with no additional assumptions on the remaining data,  ...  Nevertheless, there is a generous class of nonnegative polynomials that admit a proof of positivity via a proof in the form of a sum of squares.  ... 
arXiv:2002.03004v1 fatcat:hwlg67ldsrbwhcjtbqnwj5lmkm
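
The last fragment refers to certifying polynomial nonnegativity by exhibiting a sum-of-squares decomposition. A minimal numpy sketch of such a certificate (our own toy polynomial and Gram matrix, not code from the paper): a PSD Gram matrix Q with p(x) = z(x)ᵀ Q z(x) proves p ≥ 0 everywhere, and a Cholesky factor of Q spells out the squares.

```python
import numpy as np

# Gram matrix for p(x) = x^4 - 2x^3 + 3x^2 - 2x + 1 in the basis z = [1, x, x^2]:
# z^T Q z reproduces p, and Q being PSD certifies p >= 0 everywhere.
Q = np.array([[ 1.0, -1.0,  0.5],
              [-1.0,  2.0, -1.0],
              [ 0.5, -1.0,  1.0]])

assert np.all(np.linalg.eigvalsh(Q) >= 0)          # Q is PSD
L = np.linalg.cholesky(Q)                          # Q = L L^T, so p = ||L^T z||^2
print(np.round(L.T, 3))                            # rows give the squared polynomials' coefficients

x = np.linspace(-3, 3, 7)
z = np.vstack([np.ones_like(x), x, x**2])          # monomial vector z(x)
p_from_gram = np.einsum('in,ij,jn->n', z, Q, z)    # z^T Q z at each sample point
p_direct = x**4 - 2*x**3 + 3*x**2 - 2*x + 1
assert np.allclose(p_from_gram, p_direct)
```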

Covariance Estimation in High Dimensions Via Kronecker Product Expansions

Theodoros Tsiligkaridis, Alfred O. Hero
2013 IEEE Transactions on Signal Processing  
This paper presents a new method for estimating high-dimensional covariance matrices.  ...  Gaussian random sample, we establish high-dimensional rates of convergence to the true covariance as both the number of samples and the number of variables go to infinity.  ...  We proposed a least-squares estimator in a permuted linear space with nuclear norm penalization, named PRLS. We established high-dimensional consistency for PRLS with guaranteed rates of convergence.  ... 
doi:10.1109/tsp.2013.2279355 fatcat:xnrnyh4hhne2vbz6pqs7plkyka
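
The PRLS estimator in the snippet penalizes a least-squares fit in a permuted (rearranged) space. As a hedged illustration of the underlying rearrangement idea only, here is the classical Van Loan–Pitsianis construction: the best single Kronecker pair A ⊗ B is a rank-1 SVD of the rearranged matrix. The function name nearest_kronecker and the toy dimensions are ours; PRLS itself adds nuclear-norm penalization and multiple Kronecker terms.

```python
import numpy as np

def nearest_kronecker(S, p, q):
    """Best Frobenius-norm approximation S ~ A (x) B with A p-by-p, B q-by-q,
    via the Van Loan-Pitsianis rearrangement plus a rank-1 SVD."""
    R = S.reshape(p, q, p, q).transpose(0, 2, 1, 3).reshape(p * p, q * q)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(p, p)
    B = np.sqrt(s[0]) * Vt[0].reshape(q, q)
    return A, B

rng = np.random.default_rng(0)
p, q = 3, 4
A0 = rng.standard_normal((p, p)); A0 = A0 @ A0.T     # symmetric PSD factors
B0 = rng.standard_normal((q, q)); B0 = B0 @ B0.T
Sigma = np.kron(A0, B0) + 0.01 * rng.standard_normal((p * q, p * q))
A, B = nearest_kronecker(Sigma, p, q)
print(np.linalg.norm(Sigma - np.kron(A, B)) / np.linalg.norm(Sigma))
```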

On Communication Cost of Distributed Statistical Estimation and Dimensionality [article]

Ankit Garg and Tengyu Ma and Huy L. Nguyen
2014 arXiv   pre-print
Specifically, we study the problem of estimating the mean θ⃗ of an unknown d-dimensional Gaussian distribution in the distributed setting.  ...  ) for the bits of communication needed to achieve the minimax squared loss, in the interactive and simultaneous settings respectively.  ...  Main Results: High-Dimensional Lower Bound via Direct Sum. Our main theorem roughly states that if one can solve the d-dimensional problem, then one must be able to solve the one-dimensional problem with  ... 
arXiv:1405.1665v2 fatcat:mhnpmijsjnharnk6ag2ihc2kvu
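
A toy simulation, under our own assumptions (a uniform scalar quantizer and a one-shot simultaneous protocol; the paper proves information-theoretic lower bounds rather than analyzing this scheme), of how the per-coordinate communication budget affects the squared loss of distributed Gaussian mean estimation:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 10, 20, 50
theta = rng.standard_normal(d)

def quantize(x, bits, lo=-4.0, hi=4.0):
    # Uniform scalar quantizer: each coordinate is sent with `bits` bits.
    levels = 2 ** bits
    t = np.clip((x - lo) / (hi - lo), 0, 1)
    return lo + (hi - lo) * np.round(t * (levels - 1)) / (levels - 1)

local_means = theta + rng.standard_normal((m, d)) / np.sqrt(n)  # each machine's sample mean
for bits in (2, 4, 8):
    est = quantize(local_means, bits).mean(axis=0)              # simultaneous protocol
    print(bits, "bits/coord:", np.sum((est - theta) ** 2))
print("unquantized:", np.sum((local_means.mean(axis=0) - theta) ** 2))
```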

Polynomial-Time Sum-of-Squares Can Robustly Estimate Mean and Covariance of Gaussians Optimally [article]

Pravesh K. Kothari, Peter Manohar, Brian Hu Zhang
2021 arXiv   pre-print
On the other hand, [KS17b] introduced a general framework for robust moment estimation via a canonical sum-of-squares relaxation that succeeds for the more general class of certifiably subgaussian and  ...  In this work, we revisit the problem of estimating the mean and covariance of an unknown d-dimensional Gaussian distribution in the presence of an ε-fraction of adversarial outliers.  ...  thus inefficient in high-dimensional settings.  ... 
arXiv:2110.11853v1 fatcat:feeyqmbqzbdzdd4s3wksk3gmry
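
For contrast with the sum-of-squares algorithm this entry describes, here is the simplest robust-mean baseline: the coordinate-wise median survives an ε-fraction of adversarial outliers, but its error scales like ε√d rather than the near-optimal dimension-independent guarantees pursued in the paper. The data-generation choices below are ours:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, eps = 50, 1000, 0.1
mu = np.zeros(d)
X = rng.standard_normal((n, d)) + mu
X[: int(eps * n)] = 10.0          # adversarial outliers planted in an eps-fraction
print("mean error:  ", np.linalg.norm(X.mean(axis=0) - mu))
print("median error:", np.linalg.norm(np.median(X, axis=0) - mu))
```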

Approximating mixed Hölder functions using random samples [article]

Nicholas F. Marshall
2019 arXiv   pre-print
,X_l chosen uniformly at random from the unit square. Let the location of these points and the function values f(X_1),...,f(X_l) be given.  ...  It would be interesting to develop quantitative high-probability estimates on the singular values of B.  ...  So far, we have established L∞-norm and L2-norm error estimates of Theorem 1.1 for an approximation f̂ defined via the least squares solution v of the linear system Av = b.  ... 
arXiv:1810.00823v3 fatcat:3wkopstnufadrcq2vkdmxmt6d4
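
A minimal sketch of the least-squares construction the snippet mentions, assuming a tensor-product polynomial basis (the paper works with mixed Hölder functions and its own basis; the smooth target f below is a stand-in):

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x, y: np.sin(3 * x) * np.cos(2 * y)         # stand-in smooth target

l, deg = 400, 5
X = rng.random((l, 2))                                  # X_1..X_l uniform in the unit square
b = f(X[:, 0], X[:, 1])

# Tensor-product polynomial features: A[i, (j,k)] = x_i^j * y_i^k.
A = np.stack([X[:, 0] ** j * X[:, 1] ** k
              for j in range(deg + 1) for k in range(deg + 1)], axis=1)
v, *_ = np.linalg.lstsq(A, b, rcond=None)               # least-squares solution of A v = b

# Evaluate the approximation on fresh random points and report the empirical L2 error.
G = rng.random((2000, 2))
A_test = np.stack([G[:, 0] ** j * G[:, 1] ** k
                   for j in range(deg + 1) for k in range(deg + 1)], axis=1)
print("empirical L2 error:",
      np.sqrt(np.mean((A_test @ v - f(G[:, 0], G[:, 1])) ** 2)))
```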

Active operator inference for learning low-dimensional dynamical-system models from noisy data [article]

Wayne Isaac Tan Uy, Yuepeng Wang, Yuxiao Wen, Benjamin Peherstorfer
2021 arXiv   pre-print
The analysis also motivates an active operator inference approach that judiciously samples high-dimensional trajectories with the aim of achieving a low mean-squared error by reducing the effect of noise  ...  Numerical experiments with high-dimensional linear and nonlinear state dynamics demonstrate that predictions obtained with active operator inference have orders of magnitude lower mean-squared errors than  ...  We also thank Jonathan Niles-Weed for directing us to references for deriving upper bounds on moments of the norm of Gaussian random matrices. This  ... 
arXiv:2107.09256v2 fatcat:z6ekwdi5jvfdhdu7bpruqy34mq
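
Plain (non-active) operator inference reduces to a least-squares problem over trajectory snapshots; the paper's contribution is choosing which trajectories to sample so that noise hurts less. A toy linear-dynamics sketch, with our own noise levels and stabilization trick:

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, noise = 8, 500, 0.01
A_true = rng.standard_normal((n, n))
A_true *= 0.9 / np.max(np.abs(np.linalg.eigvals(A_true)))   # make the dynamics stable

# Generate a trajectory x_{k+1} = A x_k (with small process noise so the
# trajectory stays exciting) and observe it with additive measurement noise.
X = np.zeros((T, n)); X[0] = rng.standard_normal(n)
for k in range(T - 1):
    X[k + 1] = A_true @ X[k] + 0.1 * rng.standard_normal(n)
Y = X + noise * rng.standard_normal(X.shape)

# Operator inference is least squares: find A minimizing ||Y_{k+1} - A Y_k||.
A_hat = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)[0].T
print("relative error:", np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))
```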

Koopman-Based Neural Lyapunov Functions for General Attractors [article]

Shankar A. Deka, Alonso M. Valle, Claire J. Tomlin
2022 arXiv   pre-print
Additionally, when the dynamics are polynomial and when neural networks are replaced by polynomials as a choice of function approximators in our approach, one can further leverage Sum-of-Squares programs  ...  In such a polynomial case, our Koopman-based approach for constructing Lyapunov functions uses significantly fewer decision variables compared to directly formulating and solving a Sum-of-Squares optimization  ...  sum-of-squares (SOS) polynomial, which we denote by Σ[ ].  ... 
arXiv:2203.12303v1 fatcat:5i2ewwucwbfo3d4ic6phsrmn4y
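
For the special case of linear dynamics and a quadratic V(x) = xᵀPx, the Lyapunov conditions reduce to a linear matrix equation solvable directly; the paper's Koopman/neural and SOS machinery generalizes this certificate idea to nonlinear systems and general attractors. A self-contained numpy check (the spectrum shift used to make A Hurwitz is our own device):

```python
import numpy as np

# For xdot = A x with A Hurwitz, V(x) = x^T P x is a Lyapunov function when
# P solves A^T P + P A = -Q with Q positive definite.
rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 0.5) * np.eye(n)  # shift spectrum left
Q = np.eye(n)

# Vectorize the Lyapunov equation: (A^T (x) I + I (x) A^T) vec(P) = -vec(Q).
I = np.eye(n)
P = np.linalg.solve(np.kron(A.T, I) + np.kron(I, A.T), -Q.flatten()).reshape(n, n)

assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)        # V is positive definite
assert np.allclose(A.T @ P + P @ A, -Q)                     # Vdot = -x^T Q x < 0
```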

Outlier-robust moment-estimation via sum-of-squares [article]

Pravesh K. Kothari, David Steurer
2017 arXiv   pre-print
Our algorithms are based on a standard sum-of-squares relaxation of the following conceptually simple optimization problem: Among all distributions whose moments are bounded in the same way as for the  ...  We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers.  ...  Algorithm 4.3 (Algorithm for moment estimation via sum-of-squares).  ... 
arXiv:1711.11581v2 fatcat:5smhx6jrxzdufj7o7wi2xyashq
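
A much simpler cousin of outlier-robust moment estimation, for intuition only: a spectral filter that repeatedly discards points with outsized energy along the top covariance direction. This is a filtering-style heuristic, not the paper's sum-of-squares relaxation, and the thresholds below are arbitrary:

```python
import numpy as np

def filtered_mean(X, eps, iters=10):
    """Crude spectral filter: drop the points scoring highest along the top
    eigenvector of the empirical covariance until it looks Gaussian-ish."""
    X = X.copy()
    for _ in range(iters):
        mu = X.mean(axis=0)
        w, V = np.linalg.eigh(np.cov(X.T))
        if w[-1] < 1.5:                       # covariance close to identity: stop
            break
        scores = ((X - mu) @ V[:, -1]) ** 2   # energy along the top direction
        X = X[scores < np.quantile(scores, 1 - eps / 2)]
    return X.mean(axis=0)

rng = np.random.default_rng(6)
d, n, eps = 20, 2000, 0.1
X = rng.standard_normal((n, d))
X[: int(eps * n)] += 5.0                      # planted outliers
print("naive mean error:   ", np.linalg.norm(X.mean(axis=0)))
print("filtered mean error:", np.linalg.norm(filtered_mean(X, eps)))
```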

Parameter Estimation with the Ordered ℓ2 Regularization via an Alternating Direction Method of Multipliers

Mahammad Humayoo, Xueqi Cheng
2019 Applied Sciences  
This paper explores the problem of parameter estimation with the ordered ℓ2-regularization via the Alternating Direction Method of Multipliers (ADMM), called ADMM-Oℓ2.  ...  Prior studies have found that modern ordered regularization can be more effective in handling highly correlated, high-dimensional data than traditional regularization.  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/app9204291 fatcat:7krw2ickcvb2tn72vf6eep34ne
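
A generic ADMM template for penalized least squares, shown here with the familiar ℓ1 (soft-thresholding) prox because its closed form is standard; ADMM-Oℓ2 as described in the paper swaps the proximal operator of the ordered ℓ2 penalty into the z-update, while the x-update and dual update keep the same shape:

```python
import numpy as np

def admm(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for min 0.5||Ax-b||^2 + g(z) s.t. x = z, with g = lam*||.||_1 here.
    Replacing the z-update prox yields other penalties (e.g. ordered l2)."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))    # cache the x-update solve
    for _ in range(iters):
        x = M @ (A.T @ b + rho * (z - u))
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)  # prox of lam*||.||_1
        u = u + x - z
    return z

rng = np.random.default_rng(7)
A = rng.standard_normal((100, 30))
beta = np.zeros(30); beta[:5] = 3.0
b = A @ beta + 0.1 * rng.standard_normal(100)
print("support recovered:", np.nonzero(np.abs(admm(A, b)) > 0.5)[0])
```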

A New Look at Compressed Ordinary Least Squares

Ata Kaban
2013 2013 IEEE 13th International Conference on Data Mining Workshops  
The prospect of carrying out data mining on cheaply compressed versions of high-dimensional massive data sets holds tremendous potential and promise.  ...  However, our understanding of the performance guarantees available from such computationally inexpensive dimensionality reduction methods for data mining and machine learning tasks is currently lagging  ...  Thus, we seek to bound, with high probability w.r.t. the random draw of R, the difference between the expected square loss of the k-dimensional OLS estimate obtained from S R and the square loss of the  ... 
doi:10.1109/icdmw.2013.152 dblp:conf/icdm/Kaban13 fatcat:z3caiafpn5fstcobgubbw6vsve
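
The quantity the snippet bounds is easy to simulate: compress the design with a random projection R, run OLS in the k-dimensional space, and compare its square loss to full OLS. A toy sketch with our own dimensions; the gap between the two losses is the excess risk that compressed-OLS guarantees control:

```python
import numpy as np

rng = np.random.default_rng(8)
n, d, k = 500, 200, 40
X = rng.standard_normal((n, d))
w = rng.standard_normal(d)
y = X @ w + 0.1 * rng.standard_normal(n)

R = rng.standard_normal((d, k)) / np.sqrt(k)     # random projection
w_k = np.linalg.lstsq(X @ R, y, rcond=None)[0]   # OLS on the compressed data

# Compare square loss of the compressed fit against full OLS on fresh data.
X_test = rng.standard_normal((2000, d))
y_test = X_test @ w
w_full = np.linalg.lstsq(X, y, rcond=None)[0]
print("compressed OLS loss:", np.mean((X_test @ R @ w_k - y_test) ** 2))
print("full OLS loss:      ", np.mean((X_test @ w_full - y_test) ** 2))
```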

Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions

Alekh Agarwal, Sahand Negahban, Martin J. Wainwright
2012 Annals of Statistics  
We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems.  ...  The observations are noisy realizations of a linear transformation X of the sum of an (approximately) low-rank matrix Θ* with a second matrix Γ* endowed with a complementary form of low-dimensional structure  ...  Introduction. In this paper, we study a class of high-dimensional matrix decomposition problems.  ... 
doi:10.1214/12-aos1000 fatcat:cpquuzywgzhsliosoa6raaiufa
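
A minimal alternating-proximal sketch for one instance of this family, low rank plus sparse, using singular-value thresholding for the nuclear norm and soft-thresholding for the ℓ1 part. The regularization weights and iteration count below are ad hoc, and this simple scheme is an illustration, not the paper's estimator or analysis:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def decompose(Y, lam_L=2.0, lam_S=0.1, iters=200):
    # Block minimization of 0.5||Y-L-S||_F^2 + lam_L*||L||_* + lam_S*||S||_1.
    L = np.zeros_like(Y); S = np.zeros_like(Y)
    for _ in range(iters):
        L = svt(Y - S, lam_L)                                    # nuclear-norm prox
        S = np.sign(Y - L) * np.maximum(np.abs(Y - L) - lam_S, 0)  # l1 prox
    return L, S

rng = np.random.default_rng(9)
m = 60
L0 = rng.standard_normal((m, 3)) @ rng.standard_normal((3, m))   # rank-3 part
S0 = np.zeros((m, m))
mask = rng.random((m, m)) < 0.05
S0[mask] = 10 * rng.standard_normal(mask.sum())                  # sparse part
Y = L0 + S0 + 0.01 * rng.standard_normal((m, m))
L, S = decompose(Y)
print("rel. error on low-rank part:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```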

Consistent regression when oblivious outliers overwhelm [article]

Tommaso d'Orsi, Gleb Novikov, David Steurer
2021 arXiv   pre-print
The other proof highlights a connection between the Huber loss estimator and high-dimensional median computations.  ...  One proof is phrased in terms of strong convexity, extending work of [Tsakonas et al. '14], and is particularly short.  ...  High-dimensional estimation via median algorithm. To get an estimate of the form ‖β* − β̂‖₂² ≤ … ‖β*‖₂², we use the algorithm below. The performance of the algorithm is captured by the following theorem.  ... 
arXiv:2009.14774v2 fatcat:uwrlkxth5nee5bvs5zgsvbytcu
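
The Huber loss estimator at the center of this entry is easy to run: its gradient is linear near zero and clipped in the tails, so rows with huge residuals contribute only bounded gradient. A gradient-descent sketch under our own toy model (oblivious outliers on 40% of responses), compared against ordinary least squares:

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # psi(r): gradient of the Huber loss, quadratic near zero, linear tails.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

rng = np.random.default_rng(10)
n, d = 2000, 10
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d)
y = X @ beta
out = rng.random(n) < 0.4                     # oblivious outliers corrupt 40% of rows
y[out] += 100 * rng.standard_normal(out.sum())

b = np.zeros(d)
lr = 1.0 / np.linalg.norm(X, 2) ** 2          # step size from the spectral norm
for _ in range(3000):
    b = b - lr * X.T @ huber_grad(X @ b - y)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("Huber error:", np.linalg.norm(b - beta))
print("OLS error:  ", np.linalg.norm(b_ols - beta))
```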

A "black-box" re-weighting analysis can correct flawed simulation data, after the fact [article]

F. Marty Ytreberg, Daniel M. Zuckerman
2007 arXiv   pre-print
Successful implementations of the strategy, which reduce both statistical error and bias, are developed for a one-dimensional system and a 50-atom peptide, for which the correct 250-to-1 population ratio  ...  There is a great need for improved statistical sampling in a range of physical, chemical and biological systems.  ...  The canonical estimates (squares) were computed via (4), and the black-box estimates (circles) via Eqs. (5) and (6) with a bin size of 0.005.  ... 
arXiv:physics/0609194v2 fatcat:34lx5cqxbvdyhieo7fmc5dhpty
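
The core re-weighting idea is ordinary importance weighting: configurations sampled from a flawed ensemble get weights proportional to the ratio of target to simulated Boltzmann factors. A one-dimensional toy of our own (double-well potential, wrong simulation temperature); the paper's black-box scheme works on binned populations rather than this direct per-sample reweighting:

```python
import numpy as np

rng = np.random.default_rng(11)
U = lambda x: (x ** 2 - 1) ** 2              # double-well potential
T_sim, T_target = 2.0, 1.0                   # flawed vs. intended temperature

# Draw from the flawed ensemble exp(-U/T_sim) with a simple rejection sampler.
x = rng.uniform(-2, 2, 200000)
accept = np.exp(-U(x) / T_sim)               # acceptance probability (max is 1 at U=0)
x = x[rng.random(x.size) < accept]

# Importance weights proportional to target/simulated Boltzmann factors.
w = np.exp(-U(x) * (1 / T_target - 1 / T_sim))
w /= w.sum()
print("flawed estimate of <x^2>:     ", np.mean(x ** 2))
print("re-weighted estimate of <x^2>:", np.sum(w * x ** 2))
```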

SoS Degree Reduction with Applications to Clustering and Robust Moment Estimation [article]

David Steurer, Stefan Tiegel
2021 arXiv   pre-print
We develop a general framework to significantly reduce the degree of sum-of-squares proofs by introducing new variables.  ...  To illustrate the power of this framework, we use it to speed up previous algorithms based on sum-of-squares for two important estimation problems, clustering and robust moment estimation.  ...  Degree reduction of sum-of-squares proofs via linearization. In this section, we show how we can reduce the degree of sum-of-squares proofs by introducing new variables.  ... 
arXiv:2101.01509v1 fatcat:yzywzuy7gfh3ndh7wzsf5e43wa
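
The linearization trick in the last fragment can be shown in miniature: a degree-4 polynomial in x becomes degree-2 after introducing new variables W_ij standing for the monomials x_i·x_j (the SoS framework additionally carries constraints tying W to x inside the proof system). Our own toy check:

```python
import numpy as np

# Linearization in miniature: p(x) = ||x||^4 is degree 4 in x, but equals
# (trace W)^2, a degree-2 polynomial in the new variables W = x x^T.
rng = np.random.default_rng(12)
x = rng.standard_normal(5)

W = np.outer(x, x)                 # new variables constrained to equal x_i * x_j
p_deg4 = np.sum(x ** 2) ** 2       # original degree-4 evaluation
p_deg2 = np.trace(W) ** 2          # same value as a degree-2 polynomial in W
assert np.isclose(p_deg4, p_deg2)
```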
Showing results 1 — 15 out of 86,981 results