The Little Engine That Could: Regularization by Denoising (RED)

Yaniv Romano, Michael Elad, Peyman Milanfar
2017 SIAM Journal on Imaging Sciences
Removal of noise from an image is an extensively studied problem in image processing. Indeed, the recent advent of sophisticated and highly effective denoising algorithms has led some to believe that existing methods are touching the ceiling in terms of noise removal performance. Can we leverage this impressive achievement to treat other tasks in image processing? Recent work has answered this question positively, in the form of the Plug-and-Play Prior (P³) method, showing that any inverse problem can be handled by sequentially applying image denoising steps. This relies heavily on the ADMM optimization technique in order to obtain this chained denoising interpretation. Is this the only way in which tasks in image processing can exploit the image denoising engine? In this paper we provide an alternative, more powerful, and more flexible framework for achieving the same goal. As opposed to the P³ method, we offer Regularization by Denoising (RED): using the denoising engine in defining the regularization of the inverse problem. We propose an explicit image-adaptive Laplacian-based regularization functional, making the overall objective functional clearer and better defined. With complete flexibility to choose the iterative optimization procedure for minimizing this functional, RED is capable of incorporating any image denoising algorithm, can treat general inverse problems very effectively, and is guaranteed to converge to the globally optimal result. We test this approach and demonstrate state-of-the-art results in the image deblurring and super-resolution problems.

In our notation, we consider an image as a vector of length n (after lexicographic ordering). In the above description, the noise vector is normally distributed: e ∼ N(0, σ²I). In the most general terms, the image denoising engine is a function f : [0, 255]^n → [0, 255]^n that maps an image y to another image of the same size, x̂ = f(y), with the hope of getting as close as possible to the original image x. Ideally, such functions operate on the input image y to remove the deleterious effect of the noise while maintaining the edges and textures beneath it.
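The denoising engine f above is treated as a black box, and the image-adaptive regularization functional RED builds from it takes the form ρ(x) = ½ xᵀ(x − f(x)). A minimal numerical sketch of this idea follows; the moving-average "denoiser" is our own toy stand-in (any real engine such as NLM or BM3D could replace it), and all variable names are ours:

```python
import numpy as np

def denoise(y, h=3):
    """Toy denoising engine f: a moving-average smoother.
    A stand-in for any black-box denoiser (NLM, BM3D, a trained CNN, ...)."""
    kernel = np.ones(h) / h
    return np.convolve(y, kernel, mode="same")

def red_regularizer(x, f=denoise):
    """RED functional rho(x) = 0.5 * x^T (x - f(x)).
    Under the paper's conditions (local homogeneity, symmetric Jacobian),
    its gradient is simply the denoising residual x - f(x)."""
    return 0.5 * x @ (x - f(x))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))    # a smooth signal
noisy = clean + 0.3 * rng.standard_normal(64)    # its noisy version

# The prior charges little for the smooth signal and much more for the
# noisy one, since the denoising residual x - f(x) is larger for the latter.
print(red_regularizer(clean), red_regularizer(noisy))
```

Note the appeal of this functional: both its value and its gradient require nothing more than calls to the denoiser itself, which is what lets RED plug in arbitrary engines.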
The claim made above about the denoising problem being solved is based on the availability of algorithms proposed in the past decade that can treat this task extremely effectively and stably, getting very impressive results, which also tend to be quite concentrated (see, for example, the work reported in [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 42]). Indeed, these documented results have led researchers to the educated guess that these methods are getting very close to the best achievable denoising performance [21, 22, 23]. This aligns well with the unspoken observation in our community in recent years that investing more work to improve image denoising algorithms seems to lead to diminishing returns. While the above may suggest that work on denoising algorithms is turning to a dead-end avenue, a new opportunity emerges from this trend: seeking ways to leverage the vast progress made on the image denoising front in order to treat other tasks in image processing, bringing their solutions to new heights. One natural path towards addressing this goal is to take an existing and well-performing denoising algorithm and generalize it to handle a new problem. This has been the logical path that has led to contributions such as [24, 25, 26, 27, 28, 29] and many others. These papers, and others like them, offer an exhaustive manual adaptation of existing denoising algorithms, each carefully re-tailored to handle a specific alternative problem. This line of work, while often successful, is quite limited, as it offers no flexibility and no general scheme for diverting image denoising engines to treat new image processing tasks. Could one offer a more systematic way to exploit the abundance of high-performing image denoising algorithms to treat a much broader family of problems?
The recent work by Venkatakrishnan, Bouman, and Wohlberg provides a positive and tantalizing answer to this question, in the form of the Plug-and-Play Prior (P³) method [30, 31, 32, 33]. This technique builds on the use of an implicit prior for regularizing general inverse problems. When handling the obtained optimization task via the ADMM optimization scheme [57, 58, 59], the overall problem decomposes into a sequence of image denoising tasks, coupled with simpler L2-regularized inverse problems that are much easier to handle. While the P³ scheme may sound like the perfect answer to our prayers, reality is somewhat more complicated. First, this method is not always accompanied by a clear definition of the objective function, since the regularization being effectively used is only implicit, implied by the denoising algorithm. Indeed, it is not clear at all that there is an underlying objective function behind the P³ scheme if arbitrary denoising engines are used [31]. Second, parameter tuning of the ADMM scheme is a delicate matter, and especially so under a nonprovable convergence regime, as is the case when using sophisticated denoising algorithms. Third, being intimately coupled with the ADMM, the P³ scheme does not offer easy and flexible ways of replacing the iterative procedure. For these reasons, the P³ scheme is neither a turn-key tool, nor is it free from emotional involvement. Nevertheless, the P³ method has drawn much attention (see, e.g., [31, 32, 33, 34, 35, 36, 37, 38]), and rightfully so, as it offers a clear path towards harnessing a given image denoising engine for treating more general inverse problems.
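The ADMM decomposition just described can be sketched in a few lines: the x-update is the easy L2-regularized least-squares subproblem, the z-update is a single call to the denoising engine, and u is the scaled dual variable. Everything below (the 3-tap toy blur, the mild smoother used as the "denoiser", the parameter values) is our own illustrative choice under these assumptions, not the authors' experimental setup:

```python
import numpy as np

def denoiser(v):
    """Stand-in black-box denoiser: a mild symmetric smoother.
    The P3 scheme treats this step as an interchangeable module."""
    avg = np.convolve(v, np.ones(3) / 3, mode="same")
    return 0.5 * (v + avg)

def pnp_admm(y, H, beta=1.0, iters=50):
    """Plug-and-Play ADMM sketch for 0.5 * ||Hx - y||^2 + implicit prior."""
    n = H.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    A = H.T @ H + beta * np.eye(n)  # fixed system matrix for the x-update
    for _ in range(iters):
        # x-update: L2-regularized inverse problem (closed-form solve).
        x = np.linalg.solve(A, H.T @ y + beta * (z - u))
        # z-update: one application of the denoising engine.
        z = denoiser(x + u)
        # u-update: scaled dual ascent.
        u = u + x - z
    return x

# Toy 1D deblurring: H is a 3-tap blur, y the blurred noisy observation.
n = 32
H = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0
rng = np.random.default_rng(1)
x_true = np.sin(np.linspace(0, 2 * np.pi, n))
y = H @ x_true + 0.01 * rng.standard_normal(n)
x_hat = pnp_admm(y, H)
print("data-fit residual:", np.linalg.norm(H @ x_hat - y))
```

This sketch also makes the criticisms above concrete: the prior never appears explicitly (it is only implied by `denoiser`), and the whole structure is welded to the ADMM iteration, so swapping in a different solver is not straightforward.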
doi:10.1137/16m1102884