69 Hits in 6.8 sec

On the Performance of Sparse Recovery via ℓ_p-minimization (0 ≤ p ≤ 1) [article]

Meng Wang and Weiyu Xu and Ao Tang
2010 arXiv   pre-print
recovery of ℓ_p-minimization (0 ≤ p < 1), where the aim is to recover all the sparse vectors on one support with fixed sign pattern.  ...  Besides analyzing the performance of strong recovery, where ℓ_p-minimization needs to recover all the sparse vectors up to a certain sparsity, we also for the first time analyze the performance of "weak"  ...  Moreover, we assume that x is exactly sparse, i.e., most of its entries are exactly zero.  ... 
arXiv:1011.5936v1 fatcat:l474gbwzy5bptezntu5gao4boq
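
As a rough companion to this entry, here is a minimal iteratively reweighted least squares (IRLS) sketch for approximating the ℓ_p-minimization problem min ‖x‖_p^p subject to Ax = y; the smoothing parameter eps, the iteration count, and the problem sizes in the usage lines are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def irls_lp(A, y, p=0.5, eps=1e-3, iters=50):
        """Approximate min ||x||_p^p s.t. Ax = y by iteratively reweighted least squares."""
        x = np.linalg.lstsq(A, y, rcond=None)[0]          # least-squares initialization
        for _ in range(iters):
            # weights (x_i^2 + eps)^(p/2 - 1) make the weighted l_2 term mimic the l_p objective
            w = (x**2 + eps) ** (p / 2.0 - 1.0)
            W_inv = np.diag(1.0 / w)
            # closed-form solution of min sum_i w_i x_i^2 subject to Ax = y
            x = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, y)
        return x

    # illustrative use: random Gaussian measurements of a 5-sparse vector
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
    x_hat = irls_lp(A, A @ x_true, p=0.5)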

The Gelfand widths of ℓp-balls for 0<p≤1

Simon Foucart, Alain Pajor, Holger Rauhut, Tino Ullrich
2010 Journal of Complexity  
We provide sharp lower and upper bounds for the Gelfand widths of ℓ_p-balls in the N-dimensional ℓ_q^N-space for 0 < p ≤ 1 and p < q ≤ 2.  ...  Such estimates are highly relevant to the novel theory of compressive sensing, and our proofs rely on methods from this area.  ...  The third and fourth authors acknowledge support by the Hausdorff Center for Mathematics, University of Bonn. The third author acknowledges funding through the WWTF project SPORTS (MA07-004).  ... 
doi:10.1016/j.jco.2010.04.004 fatcat:ljt54zphdfgexo2koo2uo2ivwu
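
For orientation, results of this type give a two-sided estimate whose general form (up to constants depending only on p and q) is sketched below in LaTeX; the precise logarithmic factor and the constants should be taken from the paper itself, so treat this as a reminder of the shape of the bound rather than a verbatim statement.

    % order of the Gelfand widths of the unit l_p ball measured in l_q^N,
    % for 0 < p <= 1 and p < q <= 2 (constants depending on p, q suppressed)
    d^n\bigl(B_p^N, \ell_q^N\bigr) \;\asymp\;
        \min\Bigl\{1,\ \frac{\ln(eN/n)}{n}\Bigr\}^{\,1/p - 1/q},
        \qquad 1 \le n \le N .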

Recovery analysis for weighted mixed ℓ_2/ℓ_p minimization with 0<p≤ 1 [article]

Zhiyong Zhou, Jun Yu
2017 arXiv   pre-print
We study the recovery conditions of weighted mixed ℓ_2/ℓ_p (0 < p ≤ 1) minimization for block sparse signal reconstruction from compressed measurements when partial block support information is available.  ...  We show that the block p-restricted isometry property (RIP) can ensure the robust recovery.  ...  Figure 3: Performance of weighted mixed ℓ_2/ℓ_p recovery with p = 0.5 in terms of SNR for exactly block sparse signal x depending on ω with k = 20, d = 2, while varying the size of the block support  ... 
arXiv:1709.00257v2 fatcat:yfsuo5uuvbg4rk7b6rywejiekm
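
A small sketch of the weighted mixed ℓ_2/ℓ_p objective this entry studies is given below; the block partition, the weight ω < 1 applied to blocks in the (possibly inaccurate) support estimate, and all variable names are illustrative assumptions, not the paper's notation.

    import numpy as np

    def weighted_mixed_l2lp(x, blocks, support_est, p=0.5, omega=0.5):
        """Sum over blocks of w_i * ||x[block_i]||_2^p, with weight omega on blocks
        in the estimated block support and weight 1 elsewhere."""
        val = 0.0
        for i, idx in enumerate(blocks):
            w = omega if i in support_est else 1.0
            val += w * np.linalg.norm(x[idx]) ** p
        return val

    # example: 10 blocks of length d = 2, the first three blocks flagged as estimated support
    x = np.random.randn(20)
    blocks = [list(range(2 * i, 2 * i + 2)) for i in range(10)]
    print(weighted_mixed_l2lp(x, blocks, support_est={0, 1, 2}))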

Non-Convex Compressed Sensing Using Partial Support Information [article]

Navid Ghadermarzy, Hassan Mansour, Ozgur Yilmaz
2013 arXiv   pre-print
We show that weighted ℓ_p minimization with 0<p<1 is stable and robust under weaker sufficient conditions compared to weighted ℓ_1 minimization.  ...  Moreover, the sufficient recovery conditions of weighted ℓ_p are weaker than those of regular ℓ_p minimization if at least 50% of the support estimate is accurate. We also review some algorithms which exist to solve the non-convex ℓ_p problem  ...  Several works have attempted to close the gap in the required number of measurements for recovery via ℓ_0 and ℓ_1 minimization problems, including solving a non-convex ℓ_p minimization problem with 0 < p < 1  ... 
arXiv:1311.3773v1 fatcat:y6gubunpufgulnymmyl3eagpha
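
In the same spirit, the weighted ℓ_p objective analyzed here can be written as a plain weighted sum; setting w = 1 recovers regular ℓ_p minimization. The weight value and the support estimate below are placeholders for illustration only.

    import numpy as np

    def weighted_lp(x, support_est, p=0.5, w=0.5):
        """sum_i w_i |x_i|^p with w_i = w on the support estimate and 1 elsewhere."""
        weights = np.ones_like(x)
        weights[list(support_est)] = w
        return np.sum(weights * np.abs(x) ** p)

    print(weighted_lp(np.random.randn(30), support_est={0, 3, 7}))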

Stable Cosparse Recovery via ℓ_p-analysis Optimization [article]

Shubao Zhang and Hui Qian and Xiaojin Gong and Jianying Zhou
2018 arXiv   pre-print
In this paper we study the ℓ_p-analysis optimization (0 < p ≤ 1) problem for cosparse signal recovery. We establish a bound for recovery error via the restricted p-isometry property over any subspace.  ...  We further prove that the nonconvex ℓ_q-analysis optimization can do recovery with a lower sample complexity and in a wider range of cosparsity than its convex counterpart.  ...  Numerical Analysis: In this section we compare the performance of the ℓ_q-analysis minimization in the case q < 1 and q = 1 on cosparse vector recovery.  ... 
arXiv:1409.4575v4 fatcat:wyuc3h2o2nfwpjfc5x6t3xh5ii
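
A minimal sketch of the ℓ_p-analysis objective for cosparse vectors follows; the 1-D finite-difference matrix stands in for the analysis operator Ω (an assumption for illustration, not the operator used in the paper).

    import numpy as np

    def lp_analysis_value(x, Omega, p=0.5):
        """||Omega x||_p^p: small when Omega x has many zeros, i.e. x is cosparse."""
        return np.sum(np.abs(Omega @ x) ** p)

    # first-order finite differences as an example analysis operator
    n = 50
    Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    x_piecewise_const = np.repeat([1.0, -2.0, 0.5], [20, 15, 15])   # only two jumps
    print(lp_analysis_value(x_piecewise_const, Omega, p=0.5))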

Impulsive Noise Robust Sparse Recovery via Continuous Mixed Norm [article]

Amirhossein Javaheri, Hadi Zayyani, Mario A. T. Figueiredo, Farrokh Marvasti
2018 arXiv   pre-print
In this paper, we exploit a Continuous Mixed Norm (CMN) for robust sparse recovery instead of the ℓ_p-norm.  ...  fidelity on the residual error.  ...  In this part, we would like to examine the effect of noise power on the performance of robust sparse recovery algorithms.  ... 
arXiv:1804.04614v1 fatcat:d4smy4wxmbhs5lvadil3ulaiuu

Performance Guarantees for Schatten-p Quasi-Norm Minimization in Recovery of Low-Rank Matrices [article]

Mohammadreza Malek-Mohammadi, Massoud Babaie-Zadeh, Mikael Skoglund
2014 arXiv   pre-print
Firstly, using null space properties of the measurement operator, we provide a sufficient condition for exact recovery of low-rank matrices.  ...  Based on this theorem, we provide a few RIP-based recovery conditions.  ...  Furthermore, [18] proves a similar equivalence between RIP-based conditions for recovery of sparse vectors via ℓ_p quasi-norm minimization and recovery of low-rank matrices using Schatten-p quasi-norm minimization (pSNM).  ... 
arXiv:1407.3716v2 fatcat:t36cqv4t5jbhtnzryclntzpnce
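
For reference, the Schatten-p quasi-norm these guarantees concern is just the ℓ_p quasi-norm of the singular values; a minimal sketch (the example matrix is made up):

    import numpy as np

    def schatten_p(X, p=0.5):
        """(sum_i sigma_i(X)^p)^(1/p), the Schatten-p quasi-norm for 0 < p <= 1."""
        s = np.linalg.svd(X, compute_uv=False)
        return np.sum(s ** p) ** (1.0 / p)

    X = np.outer(np.arange(1, 5), np.arange(1, 4))    # a rank-1 example matrix
    # p = 1 recovers the nuclear norm
    print(schatten_p(X, p=0.5), schatten_p(X, p=1.0), np.linalg.norm(X, 'nuc'))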

The high-order block RIP for non-convex block-sparse compressed sensing [article]

Jianwen Huang, Xinling Liu, Jinyao Hou, Jianjun Wang
2020 arXiv   pre-print
We establish high-order sufficient conditions based on the block RIP to ensure the exact recovery of every block s-sparse signal in the noiseless case via the mixed l_2/l_p minimization method, and the stable  ...  This paper concentrates on the recovery of block-sparse signals, which are not only sparse but whose nonzero elements are arranged into blocks (clusters) rather than being arbitrarily distributed all over  ...  We establish a sufficient condition that guarantees stable and robust signal reconstruction via the mixed l_2/l_p minimization method.  ... 
arXiv:2006.06344v1 fatcat:4fe3xvqzpbdv7od77ymaw3v4e4

On the gap between RIP-properties and sparse recovery conditions [article]

Sjoerd Dirksen, Guillaume Lecué, Holger Rauhut
2015 arXiv   pre-print
We consider the problem of recovering sparse vectors from underdetermined linear measurements via ℓ_p-constrained basis pursuit.  ...  First, one may need substantially more than s log(en/s) measurements (optimal for p=2) for uniform recovery of all s-sparse vectors.  ...  The restricted isometry property (RIP) is a well-established tool to analyze the performance of sparse recovery methods.  ... 
arXiv:1504.05073v1 fatcat:xmxhr52dybagponvw4mlbaeijm
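
The ℓ_p-constrained basis pursuit program referred to here is, in the usual compressed sensing notation (written out only for orientation; η denotes the assumed noise level):

    \min_{z \in \mathbb{R}^N} \|z\|_1
    \quad\text{subject to}\quad \|Az - y\|_p \le \eta,
    \qquad y = Ax + e, \quad \|e\|_p \le \eta .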

Complete Dictionary Learning via ℓ_p-norm Maximization [article]

Yifei Shen, Ye Xue, Jun Zhang, Khaled B. Letaief, Vincent Lau
2020 arXiv   pre-print
Extensive experiments demonstrate that the ℓ_p-based approaches enjoy higher computational efficiency and better robustness than conventional approaches, and that p=3 performs the best.  ...  In this paper, we investigate a family of ℓ_p-norm (p > 2, p ∈ N) maximization approaches for the complete dictionary learning problem from theoretical and algorithmic aspects.  ...  during preparation of this manuscript.  ... 
arXiv:2002.10043v3 fatcat:q6cbw65n4vbgbhpckhaqyqhzmm
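
As a rough illustration of the general idea of ℓ_p-norm maximization over the sphere (not the paper's algorithm), here is a projected gradient ascent sketch for max ‖Y^T q‖_p^p over unit vectors q; the step size, iteration count, and data shape are assumptions.

    import numpy as np

    def lp_max_sphere(Y, p=4, iters=200, step=0.1, seed=0):
        """Projected gradient ascent for max_{||q||_2 = 1} ||Y^T q||_p^p."""
        rng = np.random.default_rng(seed)
        q = rng.standard_normal(Y.shape[0])
        q /= np.linalg.norm(q)
        for _ in range(iters):
            z = Y.T @ q
            grad = p * (Y @ (np.sign(z) * np.abs(z) ** (p - 1)))
            q = q + step * grad
            q /= np.linalg.norm(q)       # project back onto the unit sphere
        return q

    q_hat = lp_max_sphere(np.random.default_rng(1).standard_normal((20, 500)), p=3)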

A Simplified Approach to Recovery Conditions for Low Rank Matrices [article]

Samet Oymak, Karthik Mohan, Maryam Fazel, Babak Hassibi
2011 arXiv   pre-print
Various reconstruction algorithms have been studied, including ℓ_1 and nuclear norm minimization as well as ℓ_p minimization with p<1.  ...  Recovering sparse vectors and low-rank matrices from noisy linear measurements has been the focus of much recent research.  ...  Assume property S on matrices R^n → R^m implies perfect recovery of all vectors with sparsity at most 2k via ℓ_p quasi-norm minimization, where 0 < p < 1.  ... 
arXiv:1103.1178v3 fatcat:42zzlv4yn5gdraqlmmfglf7az4

Multi-band Weighted l_p Norm Minimization for Image Denoising [article]

Yanchi Su and Zhanshan Li and Haihong Yu and Zeyu Wang
2019 arXiv   pre-print
Extensive experiments on additive white Gaussian noise removal and realistic noise removal demonstrate that the proposed MBWPNM achieves better performance than several state-of-the-art algorithms.  ...  To address this problem, we propose a flexible and precise model named multi-band weighted l_p norm minimization (MBWPNM).  ...  of X, and 0 < p ≤ 1.  ... 
arXiv:1901.04206v4 fatcat:dxjf52zqhrh5fj3rbiuailfa2i

Nonconvex Sorted ℓ_1 Minimization for Sparse Approximation

Xiao-Lin Huang, Lei Shi, Ming Yan
2015 Journal of the Operations Research Society of China  
The numerical experiments demonstrate the better performance of assigning weights by sort compared to ℓ_p minimization.  ...  As one method for solving ℓ_p minimization problems, iteratively reweighted ℓ_1 minimization updates the weight for each component based on the value of the same component at the previous iteration.  ...  From both Fig. 2 and Fig. 3, one can see that, compared with setting weights by value, setting weights according to the sort enhances the sparse recovery performance.  ... 
doi:10.1007/s40305-014-0069-4 fatcat:ynsesjwk5rhh7npzaqpi4jwaam
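
The iteratively reweighted ℓ_1 scheme mentioned in this abstract can be sketched as below; the inner weighted-ℓ_1 subproblem is solved with a plain proximal-gradient (ISTA) loop for an unconstrained least-squares data term, and the weight rule, eps, and all parameter values are illustrative assumptions rather than the paper's choices.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def reweighted_l1(A, y, p=0.5, lam=0.05, outer=10, inner=200, eps=1e-2):
        """Sketch of iteratively reweighted l_1 for lam*||x||_p^p + 0.5*||Ax - y||_2^2:
        each outer pass fixes weights from the previous iterate, then runs ISTA
        on the resulting weighted l_1 problem."""
        x = np.zeros(A.shape[1])
        L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part
        for _ in range(outer):
            w = p / (np.abs(x) + eps) ** (1.0 - p)    # weights from the previous iterate
            for _ in range(inner):
                grad = A.T @ (A @ x - y)
                x = soft_threshold(x - grad / L, lam * w / L)
        return x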

SPOQ ℓ_p-Over-ℓ_q Regularization for Sparse Signal Recovery applied to Mass Spectrometry [article]

Afef Cherni, Emilie Chouzenoux, Laurent Duval, Jean-Christophe Pesquet
2020 arXiv   pre-print
It consists of a Lipschitz-differentiable surrogate for ℓ_p-over-ℓ_q quasi-norm/norm ratios with p∈ ]0,2[ and q≥ 2.  ...  However, the latter does not exhibit the desirable property of scale invariance for sparse data.  ...  Marc-André Delsuc (University of Strasbourg, France), for helping with MS problem modeling.  ... 
arXiv:2001.08496v2 fatcat:tqkyhud34rgpvhvcixschcw6au
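
The raw quantity behind the SPOQ penalty is the scale-invariant ratio ‖x‖_p / ‖x‖_q; SPOQ itself replaces it with a Lipschitz-differentiable smoothed surrogate, which is not reproduced here. A plain sketch of the un-smoothed ratio, with a small guard constant added purely for numerical safety:

    import numpy as np

    def lp_over_lq(x, p=0.75, q=2.0, tiny=1e-12):
        """Raw l_p / l_q ratio: invariant to rescaling of x and smaller for sparser x."""
        num = np.sum(np.abs(x) ** p) ** (1.0 / p)
        den = np.sum(np.abs(x) ** q) ** (1.0 / q)
        return num / (den + tiny)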

Efficient Tracking of Sparse Signals via an Earth Mover's Distance Dynamics Regularizer [article]

Nicholas P. Bertrand, Adam S. Charles, John Lee, Pavel B. Dunn, Christopher J. Rozell
2020 arXiv   pre-print
However, the tracking regularizers are often based on the ℓ_p-norm which cannot account for important geometrical relationships between neighboring signal elements.  ...  We propose a practical approach to using the earth mover's distance (EMD) via the earth mover's distance dynamic filtering (EMD-DF) algorithm for causally tracking time-varying sparse signals when there  ...  to div(M ) + v − v = 0, 0 ≤ v ≤ x, 0 ≤ v ≤ x, v 1 = v 1 = u, u ≤ x 1 , u ≤ x 1 . (10) The complex variant of EMD-DF in [15] can also be trivially converted to adopt this formulation, though it is not  ... 
arXiv:1806.04674v5 fatcat:7sewnnbwbnfjhfprk5c7ussvi4
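
As a toy illustration of the geometric point made in this abstract (using SciPy's 1-D Wasserstein distance as a stand-in for the EMD used in EMD-DF; the signals are made up): ℓ_p-type distances between two shifted spikes do not depend on how far the spike moved, whereas the EMD grows with the shift.

    import numpy as np
    from scipy.stats import wasserstein_distance

    positions = np.arange(100)
    x0 = np.zeros(100); x0[40] = 1.0                 # spike at index 40
    x_shift1 = np.zeros(100); x_shift1[41] = 1.0     # shifted by 1 sample
    x_shift50 = np.zeros(100); x_shift50[90] = 1.0   # shifted by 50 samples

    # l_2 distance is identical for both shifts: it carries no notion of "how far"
    print(np.linalg.norm(x0 - x_shift1), np.linalg.norm(x0 - x_shift50))
    # EMD grows with the shift, capturing the geometry between neighboring elements
    print(wasserstein_distance(positions, positions, x0, x_shift1))    # 1.0
    print(wasserstein_distance(positions, positions, x0, x_shift50))   # 50.0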
Showing results 1 — 15 out of 69 results