Optimizing Matrices For Compressed Sensing Using Existing Goodness Measures: Negative Results, And An Alternative [article]

Alankar Kotwal, Ajit Rajwade
2017 arXiv   pre-print
The bound that arises from sparse recovery analysis in compressed sensing involves the input signal sparsity and some property of the sensing matrix. Efforts have therefore been made in the literature to optimize sensing matrices for optimal recovery using such properties. We discover, in the specific case of optimizing codes for the CACTI camera, that the popular method of mutual coherence minimization does not produce optimal results: codes designed to optimize effective dictionary coherence can perform worse than random codes in terms of mean squared reconstruction error. This surprising phenomenon leads us to investigate the reliability of the coherence bound for matrix optimization, in terms of its looseness. We examine, on simulated data, the looseness of the bound as it propagates across the steps of the inequalities in the derivation leading to the final bound. We then similarly examine an alternative bound derived by Tang et al., based on the ℓ_1/ℓ_∞ notion of sparsity, which is a compromise between coherence and the restricted isometry constant (RIC). We also perform a bound-looseness analysis for the RIC as derived by Cai et al. The conclusion of these efforts is that coherence optimization is problematic not only because of the coherence bound on the RIC, but also because of the RIC bound itself. These negative results imply that, despite the success of previous work in designing sensing matrices by optimizing a matrix quality factor, one must exercise caution in using such measures for practical sensing matrix design. We then introduce a paradigm for optimizing sensing matrices that overcomes the looseness of compressed sensing upper bounds by taking an average-case error approach. We show a proof-of-concept design using this paradigm that performs convincingly better than coherence-based design in the CACTI case, and no worse for general matrices.
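To make the central quantity concrete: the mutual coherence discussed above is the largest absolute normalized inner product between distinct columns of the sensing matrix (or effective dictionary). The sketch below is illustrative only and is not the authors' code; the function name and the Gaussian test matrix are assumptions for demonstration.

```python
import numpy as np

def mutual_coherence(A):
    # Normalize each column to unit ell-2 norm, then take the largest
    # off-diagonal entry of the absolute Gram matrix |A_n^T A_n|.
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Example: a random Gaussian sensing matrix (20 measurements, 50 atoms).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
mu = mutual_coherence(A)  # a value in (0, 1] for a generic matrix
```

Coherence-based design methods minimize this quantity over the choice of sensing matrix; the abstract's point is that a smaller value of `mu` for the effective dictionary does not guarantee lower mean squared reconstruction error in practice.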
arXiv:1707.03355v1