A Unified Primal-Dual Algorithm Framework Based on Bregman Iteration

Xiaoqun Zhang, Martin Burger, Stanley Osher
Journal of Scientific Computing 46:20-46, 2011. doi:10.1007/s10915-010-9408-8
In this paper, we propose a unified primal-dual algorithm framework for two classes of problems that arise from various signal and image processing applications. We also show the connections to existing methods, in particular methods based on Bregman iteration (Osher et al., Multiscale Model. Simul. 4(2):460-489, 2005), such as linearized Bregman (Osher et al.). The convergence of the general algorithm framework is proved under mild assumptions. The applications to $\ell_1$ basis pursuit, TV-$L^2$ minimization,
and matrix completion are demonstrated. Finally, the numerical examples show that the proposed algorithms are easy to implement, efficient, stable, and flexible enough to cover a wide variety of applications.

Keywords Saddle point · Bregman iteration · $\ell_1$ minimization · Inexact Uzawa methods · Proximal point iteration

Introduction

The main goal of this paper is to propose a unified algorithm framework for two classes of convex optimization problems arising from sparse reconstruction. The framework proposed here continues work started in [51], where a Bregmanized operator splitting (BOS) method is proposed for nonlocal total variation regularization. In addition to unifying some existing algorithms, we also propose new ones, such as an extension of split Bregman [32] that linearizes quadratic penalties to yield simpler iterations. This work originated from Bregman iteration [41], but we can trace connections to other classical optimization concepts, such as the augmented Lagrangian method [44] and proximal point minimization.

Bregman iteration for image processing problems was originally proposed by Osher, Burger, Goldfarb, Xu and Yin in [41] to improve the classical Rudin-Osher-Fatemi [45] total variation (TV) regularization model for image restoration. For a given closed, proper convex functional $J(u): \mathbb{R}^N \to \mathbb{R} \cup \{+\infty\}$, the Bregman distance [7] is defined as

$$D_J^p(u, v) = J(u) - J(v) - \langle p, u - v \rangle, \qquad (1.1)$$

where $p \in \partial J(v)$ is some subgradient of $J$ at the point $v$ and $\langle \cdot, \cdot \rangle$ denotes the canonical inner product in $\mathbb{R}^N$. It is well known that the Bregman distance (1.1) is not a distance in the usual sense, since it is generally not symmetric. However, it measures the closeness of two points, since $D_J^p(u, v) \geq 0$ for any $u$ and $v$. Furthermore, if the functional $J$ is strictly convex, the following relation is satisfied:

$$D_J^p(u, v) = 0 \iff u = v.$$

Consider the unconstrained problem

$$\min_x J(x) + \frac{\lambda}{2} \|Ax - b\|_2^2 \qquad (1.2)$$

and the constrained problem

$$\min_x J(x) \quad \text{subject to} \quad Ax = b. \qquad (1.3)$$

Using the Bregman distance (1.1), an iterative regularization method is proved in [41] to solve (1.3):

$$x^{k+1} = \arg\min_x D_J^{p^k}(x, x^k) + \frac{1}{2} \|Ax - b\|_2^2, \qquad p^{k+1} = p^k - A^\top (Ax^{k+1} - b), \qquad (1.4)$$

where $p^{k+1} \in \partial J(x^{k+1})$ and $A^\top$ is the adjoint operator of $A$. By a change of variables, under certain assumptions the above algorithm can be simplified as

$$x^{k+1} = \arg\min_x J(x) + \frac{1}{2} \|Ax - y^k\|_2^2, \qquad y^{k+1} = y^k + b - Ax^{k+1}, \qquad (1.5)$$

for $k = 0, 1, \ldots$, starting with $x^0 = 0$, $y^0 = b$. From (1.5), the constrained problem (1.3) can be solved by a sequence of unconstrained subproblems of the form (1.2) and gradient ascent steps. There are two main convergence results for the sequence $\{x^k\}$ generated by (1.4): $\|Ax^k - b\| \to 0$ and $D_J^{p^k}(x^*, x^k) \to 0$, where $x^*$ is a true solution of the problem (1.3). In practice, when the data are noisy, this algorithm can still be applied with a stopping criterion based on a discrepancy principle of the form $\|Ax^k - b\| \leq \sigma$, where $\sigma$ estimates the noise level.
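To make iteration (1.5) concrete, here is a minimal Python sketch applying it to the $\ell_1$ basis pursuit problem, i.e. $J(x) = \mu \|x\|_1$. This is an illustration reconstructed from (1.5), not code from the paper: the inner subproblem is solved inexactly by ISTA (proximal gradient) steps, and the function names, the inner solver, and all parameter values are assumptions made for this example.

```python
import numpy as np


def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def bregman_l1(A, b, mu=10.0, outer_iters=50, inner_iters=200, tol=1e-6):
    """Bregman iteration (1.5) for min mu*||x||_1 subject to Ax = b.

    Each outer step approximately solves the subproblem
        min_x mu*||x||_1 + 0.5*||Ax - y^k||^2
    by ISTA steps, then takes the gradient ascent step
        y^{k+1} = y^k + b - A x^{k+1}.
    Solver choice and parameter values are illustrative, not from the paper.
    """
    x = np.zeros(A.shape[1])             # x^0 = 0
    y = b.astype(float).copy()           # y^0 = b
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the quadratic's gradient
    for _ in range(outer_iters):
        for _ in range(inner_iters):     # inexact inner solve of a (1.2)-type subproblem
            x = soft_threshold(x - A.T @ (A @ x - y) / L, mu / L)
        y += b - A @ x                   # add the residual back into y
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break                        # constraint satisfied to tolerance
    return x


if __name__ == "__main__":
    # Synthetic demo: recover a sparse vector from Gaussian measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 256))
    x_true = np.zeros(256)
    x_true[rng.choice(256, size=8, replace=False)] = rng.standard_normal(8)
    b = A @ x_true
    x_hat = bregman_l1(A, b)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Roughly speaking, linearized Bregman (mentioned in the abstract) trades the inexact inner loop above for a single shrinkage step on an accumulated subgradient variable, one of the simplifications the unified framework is designed to capture.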