Single image super-resolution based on approximated Heaviside functions and iterative refinement

PLOS ONE, Jan 2018

One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods.



Xin-Yu Wang, Ting-Zhu Huang, Liang-Jian Deng

School of Mathematical Sciences / Research Center for Image and Vision Computing, University of Electronic Science and Technology of China, Chengdu, Sichuan, P. R. China

Editor: Li Zeng, Chongqing University, China

Data Availability Statement: All relevant data are within the paper. Funding: The authors received no specific funding for this work. Competing interests: The authors have declared that no competing interests exist.

Introduction

Image super-resolution (SR) aims to generate or recover high-resolution (HR) images from one or multiple low-resolution (LR) images. If the HR image is generated or recovered from only one LR image, the problem is called single-frame SR; otherwise it is multiple-frame SR. Multiple-frame SR methods take as input several LR images of the same scene with different sub-pixel shifts, and have direct applications in video SR problems (see [1]). Single-frame SR methods are popular and challenging because only one LR image is available; in this paper we focus on single-frame SR.

To produce an HR image, the simplest and most direct way is to interpolate, e.g., with bicubic or nearest-neighbor interpolation. Recently, more interpolation-based methods have been proposed (see [2-8]). For instance, Cha et al. [8] utilize a generalized curvature source term estimated from the LR image to construct an HR image; the resulting HR image has a reliable curvature profile that minimizes ringing artifacts. They also propose an iterative application of the curvature interpolation method [9], which uses the gradient-weighted curvature measured from the LR image as an interpolator to suppress texture over-smoothing. In [10], Wang et al. present a fast image upsampling method that preserves sharpness via two-scale sharpness-preserving operations: the low-frequency component of the image is recovered from a well-constructed displacement field, while the local high-frequency structures are reconstructed via a sharpness-preserving reconstruction algorithm. However, because interpolation-based zooming estimates values at unknown locations from local information alone, without other priors, these methods usually introduce jagged artifacts or blurring. Learning-based methods (see [11-14]) have attracted attention in image processing.
Recently, they have also performed impressively on image SR problems. A key issue for learning-based methods is to learn high-frequency correspondences from a database of LR-HR image patch pairs; these correspondences are then applied to the LR input to obtain its HR output. Purkait et al. [11] develop fuzzy rules to find different possible HR patches and combine them according to the rule strengths to obtain the estimated HR patches. The rule parameters are learned from LR-HR patch pairs, and the Takagi-Sugeno (TS) model [15] is then used with the rule parameters expressed as a linear combination of the different candidate HR patches. Yang et al. [12] apply the theory of sparse coding to SR problems effectively: two dictionaries are jointly trained for LR and HR image patches, and the LR dictionary is then used to generate sparse representations of the LR input from which the corresponding HR output is obtained. Dong et al. [13] first learn a mapping between low-resolution and high-resolution images, represented by a deep convolutional neural network (CNN), and then pass the LR image through the CNN to generate the HR output. However, these methods typically rely on the similarity between the test images and the database, and they also involve expensive computation. In recent years there has been considerable interest in statistics-based methods [14, 16, 17], such as the popular Maximum a Posteriori (MAP) and Maximum Likelihood (MLE) estimators. In [14], Peleg and Elad assume that the prediction of high-resolution patches can be obtained by MMSE estimation, and the resulting scheme has the useful interpretation of a feedforward neural network. Apart from the methods mentioned above, a hybrid method [18], reconstruction methods [19-21], and other methods [22-25] have been used, and these approaches are not completely independent of each other. For instance, Peleg and Elad propose an SR method that combines a statistical method with a learning-based method.

In this paper, we study an effective single-frame SR approach that improves on the so-called approximated Heaviside function method (AHFM) proposed in [7]. There it is shown that the underlying image can be viewed as an intensity function that can be approximately represented by two classes of AHFs. Deng et al. cast the image super-resolution problem as an intensity function estimation problem: defined on a continuous domain, the underlying intensity function, which belongs to a space with a redundant basis, can be approximately represented via two classes of AHFs. Using only one LR image as input, the representation coefficients are computed by the iterative AHF method, and the high-resolution image is then generated by combining these coefficients with the two classes of AHFs evaluated on the fine (high-resolution) grid. However, two classes of AHFs may not describe all the information in an image and may generate over-sharp structures in the final HR image. Motivated by this, we extend the AHFM algorithm to a more general form in order to suppress over-sharpened textures in the result and improve the quality of the super-resolved images. We consider that an image normally consists of different smooth components and non-smooth components; in particular, details such as edges and corners have different degrees of sharpness.
Based on this, we propose that the smooth components be represented by multiple classes of AHFs with smooth transitions, and the non-smooth components by multiple classes of AHFs with sharp transitions. In particular, owing to the sparsity of the non-smooth components, we formulate an l1 regularization model and solve it via the block-wise alternating direction method of multipliers (ADMM) [26]. Furthermore, we design a novel iterative refinement algorithm to pick up more image details. Finally, the proposed method is shown numerically to be competitive with several state-of-the-art methods.

The paper is organized as follows. A brief review of the AHFM algorithm [7] is given in Section 2. In Section 3, we present the proposed model and its corresponding algorithms. In Section 4, we present numerical results for the different methods, demonstrating that the proposed algorithms are more effective. Finally, we conclude the paper in Section 5.

Some preliminaries

This section gives general remarks on approximated Heaviside functions; a more detailed review of the method based on approximated Heaviside functions can be found in [7].

2.1 General remarks on the Heaviside function

The Heaviside function, or Heaviside step function, is a discontinuous function whose value is zero for a negative argument and one for a non-negative argument (see Fig 1(a)):

\phi(x) = \begin{cases} 0, & x < 0, \\ 1, & x \ge 0. \end{cases}   (1)

In practical applications, smooth approximations of the Heaviside function, called approximated Heaviside functions (AHFs), are often used, such as the logistic function

\psi(x) = \frac{1}{1 + e^{-2x/\xi}},   (2)

or the arctangent-based approximation

\psi(x) = \frac{1}{2} + \frac{1}{\pi} \arctan\!\left(\frac{x}{\xi}\right).   (3)

Fig 1. (a) Heaviside function. (b) Two approximated Heaviside functions with ξ = 1 (blue solid line) and ξ = 0.01 (red dashed line).

As illustrated in Fig 1(b), a smaller ξ corresponds to a sharper transition at x = 0. In our work, we employ Eq (3) to approximate Heaviside functions. In [27], Kainen et al. show that any function in L^p([0,1]^d), p \in [1, \infty), can be well approximated by linear combinations of m characteristic functions of half-spaces, where m is any positive integer. Let H_d be the set of functions on [0,1]^d defined as

H_d = \{ f : [0,1]^d \to \mathbb{R} : f(x) = \psi(v \cdot x + c),\ v \in \mathbb{R}^d,\ c \in \mathbb{R} \},

where ψ is an approximated Heaviside function; H_d corresponds to the set of characteristic functions of closed half-spaces of \mathbb{R}^d.

Theorem 1 [27]. For any positive integer d, define \mathrm{span}_m H_d as \{\sum_{i=1}^{m} \omega_i \psi(v_i \cdot x + c_i)\}, where \omega_i \in \mathbb{R}, v_i \in \mathbb{R}^d, and c_i \in \mathbb{R}. Then \bigcup_{m \in \mathbb{Z}^{+}} \mathrm{span}_m H_d is dense in (L^p([0,1]^d), \|\cdot\|_p), p \in [1, \infty).

Theorem 2 [27]. For any positive integers m, d and every p \in [1, \infty), \mathrm{span}_m H_d is an approximatively compact subset of (L^p([0,1]^d), \|\cdot\|_p).

Consequently, we can use \mathrm{span}_m H_d with a finite m in practical computing.
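To make the role of the sharpness parameter ξ concrete, the short Python sketch below (our own illustration, not code from the paper; the function names are ours) evaluates the logistic approximation of Eq (2) and the arctangent approximation of Eq (3) for the two values of ξ plotted in Fig 1(b).

```python
import numpy as np

def ahf_logistic(x, xi):
    # Logistic approximation of the Heaviside function, Eq (2):
    # psi(x) = 1 / (1 + exp(-2x/xi)).
    return 1.0 / (1.0 + np.exp(-2.0 * x / xi))

def ahf_arctan(x, xi):
    # Arctangent-type approximation, Eq (3): psi(x) = 1/2 + (1/pi) * arctan(x/xi).
    return 0.5 + np.arctan(x / xi) / np.pi

x = np.linspace(-1.0, 1.0, 401)
for xi in (1.0, 0.01):                       # the two values shown in Fig 1(b)
    y = ahf_arctan(x, xi)
    # The transition around x = 0 becomes sharper as xi decreases.
    print(f"xi = {xi:4.2f}: psi(-0.05) = {y[x.searchsorted(-0.05)]:.3f}, "
          f"psi(+0.05) = {y[x.searchsorted(0.05)]:.3f}")
```

With ξ = 0.01 the two printed values are close to 0 and 1, i.e., the approximation behaves almost like the ideal step, whereas ξ = 1 gives a gradual transition; this is exactly the distinction the method exploits between non-smooth and smooth image components.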
2.2 Single image super-resolution via the iterative AHF method (AHFM)

The iterative AHF method (AHFM) proposed in [7] for image super-resolution selects sharpness-related terms measured from the LR input image and applies them on fine grids to generate the HR image. The underlying image intensity function f is assumed to be defined on [0,1]^2, so f \in L^p([0,1]^2) with p \in [1, \infty). According to the theorems stated in Section 2.1, f can be approximated by

f(z) = \sum_{j=1}^{m} \omega_j\, \psi(v_j \cdot z + c_j),   (4)

where \omega_j \in \mathbb{R}, v_j \in \mathbb{R}^2 with v_j \in \{(\cos\theta_t, \sin\theta_t)',\ t = 1, 2, \ldots, p\} denoting p different directions, c_j \in \{\tfrac{1}{q}, \tfrac{2}{q}, \tfrac{3}{q}, \ldots, 1\} denoting discrete positions, m = pq, z = (x, y)', and q the total number of pixels of the input image. The family \{\psi(v_j \cdot z + c_j)\}_{j=1}^{m} is called a class of AHFs with a specific ξ.

For an image L \in \mathbb{R}^{n_1 \times n_2}, we assume it is a discretization of the intensity function f on [0,1]^2, i.e., L_{i,j} = f(x_i, y_j) with x_i = i/n_1, y_j = j/n_2 (i = 1, \ldots, n_1; j = 1, \ldots, n_2). Therefore, Eq (4) can be rewritten in matrix-vector form as L \approx C\omega, with L \in \mathbb{R}^{n}, \omega \in \mathbb{R}^{m}, n = n_1 n_2, and m = pq. We compute the coefficient vector ω and obtain the high-resolution image as \tilde{C}\omega, where s is the upscaling factor and \tilde{C} \in \mathbb{R}^{N \times m} is constructed on the fine grid with N = s^2 n_1 n_2.

Based on the observation that an image consists of smooth components and non-smooth components, two classes of AHFs are used to depict an image. Different components of an image may be described by different orientations θ_t at the locations c_j with two parameters ξ_1, ξ_2: a larger ξ_1 represents the smooth components (forming C_1), while a smaller ξ_2 represents the non-smooth components (forming C_2). Thus, the vector-form image L can be approximated by the discrete formula

L \approx C_1\beta_1 + C_2\beta_2.   (5)

Since the non-smooth components are sparse, l1 regularization is imposed on the coefficient β_2, giving the optimization model

\min_{\beta_1, \beta_2} \|L - C_1\beta_1 - C_2\beta_2\|_2^2 + \lambda_1\|\beta_1\|_2^2 + \lambda_2\|\beta_2\|_1.   (6)

Eq (6) is solved via ADMM [28-31] in [7]. The resulting single-image super-resolution method (AHFM) is outlined in Algorithm 1. We close this section with the following remarks.

· In order to pick up more details, such as edges, an iterative refinement strategy is used: the difference L − DH is treated as a new low-resolution input to Eq (6) to recompute a residual high-resolution image.

· The AHFM algorithm performs well for natural images. However, for images with smooth backgrounds, ringing artifacts usually appear along large-scale edges; these come mainly from the added non-smooth components (see step (b) in Algorithm 1). To discard the ringing artifacts of the non-smooth component E, bicubic interpolation is used as an intermediate step to build a mask: in step (b), only E needs to be updated to E_new, obtained from Eq (7), and the ringing artifacts are then reduced significantly,

E_{\mathrm{new}} = \mathrm{Mask} .* E, \qquad \mathrm{Mask}_{i,j} = \begin{cases} 0, & \text{if } 0 \le \|G_{i,j}\| \le t, \\ 1, & \text{otherwise}, \end{cases}   (7)

where G_{i,j} is the (vector-form) gradient at location (i, j) of the image B generated by bicubic interpolation, .* denotes the element-wise (dot) product, and t is a threshold value (t = 0.05 in the experiments).

Algorithm 1 (Single image super-resolution via the iterative AHF method (AHFM) [7])
Input: one vector-form low-resolution image L \in \mathbb{R}^{n \times 1}; \lambda_1 > 0, \lambda_2 > 0; s: upscaling factor; τ: maximum number of iterations.
Output: high-resolution image \hat{H} \in \mathbb{R}^{N \times 1}
1. According to Eq (4), construct matrices C_1, C_2 \in \mathbb{R}^{n \times m} on the coarse grid and \tilde{C}_1, \tilde{C}_2 \in \mathbb{R}^{N \times m} on the fine grid, where N = s^2 n.
2. Initialization: L^{(1)} = L.
   for k = 1:τ
   (a) Compute the coefficients: (\beta_1^{(k)}, \beta_2^{(k)}) = \arg\min \|L^{(k)} - C_1\beta_1 - C_2\beta_2\|_2^2 + \lambda_1\|\beta_1\|_2^2 + \lambda_2\|\beta_2\|_1.   (8)
   (b) Update the high-resolution image: H^{(k)} = S^{(k)} + E^{(k)}, where S^{(k)} = \tilde{C}_1\beta_1^{(k)}, E^{(k)} = \tilde{C}_2\beta_2^{(k)}.
   (c) Downsample H^{(k)} to the coarse grid: \tilde{L} = DH^{(k)}.
   (d) Compute the residual: L^{(k+1)} = L^{(k)} - \tilde{L}.
   end
3. Assemble the high-resolution outputs: S = \sum_{i=1}^{\tau} S^{(i)}, E = \sum_{i=1}^{\tau} E^{(i)}.
4. Compute the final high-resolution image: \hat{H} = S + \mathrm{conv}(E, p), where conv denotes a convolution operator and p is a Gaussian kernel of small size.

Modified AHFM and iterative refinement

This section presents a new image super-resolution algorithm, which is an extension of the AHFM algorithm. Note that two classes of AHFs alone are not enough to represent all image components. Hence, taking into account the varying sharpness across an image, we consider a modified AHFM with components of different sharpness and present its implementation in the next section. In addition, the modified AHFM algorithm contains a new iterative refinement strategy, aiming to pick up more information about the non-smooth components.

3.1 Modified AHFM with different sharpness components

The AHFM has the advantage of being a purely single-image super-resolution method requiring no extra training data. However, it uses only two classes of AHFs to represent all image components, so the sharper or smoother components, though pivotal, are not represented well. Motivated by this and by the work in [7], we propose a modified AHFM algorithm that captures the information of an image as completely as possible.

First, we assume that a low-resolution image L consists of smooth components and non-smooth components, and we exploit different values of ξ to form different AHFs describing them. Hence, the low-resolution image L can be approximated by the discrete formula

L \approx \sum_{i=1}^{l} C_i \alpha_i + \sum_{j=1}^{k} F_j \beta_j,   (9)

where l and k are the numbers of smooth and non-smooth components, respectively. The matrices C_i, F_j \in \mathbb{R}^{n \times m} (i = 1, \ldots, l; j = 1, \ldots, k) are obtained according to Eq (4): C_i represent smooth components formed with larger ξ, and F_j represent non-smooth components formed with smaller ξ. The vectors \alpha_i, \beta_j \in \mathbb{R}^{m \times 1} are the corresponding representation coefficients. After computing the representation coefficients, we apply them in the following equation to obtain the high-resolution image:

H \approx \sum_{i=1}^{l} \tilde{C}_i \alpha_i + \sum_{j=1}^{k} \tilde{F}_j \beta_j,   (10)

where \tilde{C}_i, \tilde{F}_j \in \mathbb{R}^{N \times m} (N = s^2 n) are obtained on the fine grid and s is the upscaling factor.

Since the non-smooth components, such as edges and corner details, are sparse in generic images, we apply l1 regularization to \beta_j (j = 1, \ldots, k) to characterize this feature. The optimization model can thus be written as

\min_{\alpha_i, \beta_j} \Big\| L - \sum_{i=1}^{l} C_i \alpha_i - \sum_{j=1}^{k} F_j \beta_j \Big\|_2^2 + \sum_{i=1}^{l} \lambda_i \|\alpha_i\|_2^2 + \sum_{j=1}^{k} \gamma_j \|\beta_j\|_1.   (11)

Stacking the dictionaries as C = (C_1, \ldots, C_l, F_1, \ldots, F_k) and the coefficients as u = (\alpha_1; \ldots; \alpha_l; \beta_1; \ldots; \beta_k), and letting M_i and B_j denote the selection matrices with M_i u = \alpha_i and B_j u = \beta_j, Eq (11) becomes

\min_{u} \| L - Cu \|_2^2 + \sum_{i=1}^{l} \lambda_i \|M_i u\|_2^2 + \sum_{j=1}^{k} \gamma_j \|B_j u\|_1.   (12)

Introducing substitution variables p_j = B_j u (j = 1, \ldots, k), we obtain

\min_{u, p_j} \| L - Cu \|_2^2 + \sum_{i=1}^{l} \lambda_i \|M_i u\|_2^2 + \sum_{j=1}^{k} \gamma_j \|p_j\|_1, \quad \text{s.t. } p_j = B_j u,   (13)

where p_j (j = 1, \ldots, k) are the substitution variables. In particular, Eq (13) is separable with respect to (u, p_j). Many methods are available for solving the l1 problem (see [28-33]); the work in [7] solves it via the alternating direction method of multipliers (ADMM) [31]. Here, we solve Eq (13) via block-wise ADMM [26]. The augmented Lagrangian of Eq (13) is

\mathcal{L}(u, p_j, b_j) = \| L - Cu \|_2^2 + \sum_{i=1}^{l} \lambda_i \|M_i u\|_2^2 + \sum_{j=1}^{k} \Big( \gamma_j \|p_j\|_1 + \frac{\tau_j}{2} \|p_j - B_j u + b_j\|_2^2 \Big),   (14)

where b_j (j = 1, \ldots, k) are Lagrange multipliers of proper size.
The problem of minimizing \mathcal{L}(u, p_j, b_j) can be solved by alternating iteratively between the following subproblems:

u\text{-subproblem:} \quad \min_{u} \| L - Cu \|_2^2 + \sum_{i=1}^{l} \lambda_i \|M_i u\|_2^2 + \sum_{j=1}^{k} \frac{\tau_j}{2} \|p_j - B_j u + b_j\|_2^2,   (15)

p_j\text{-subproblem:} \quad \min_{p_j} \gamma_j \|p_j\|_1 + \frac{\tau_j}{2} \|p_j - B_j u + b_j\|_2^2 \quad (j = 1, \ldots, k).   (16)

Hence, problem (11) is solved by block-wise ADMM, as shown in Algorithm 2.

Algorithm 2
Input: the low-resolution image L \in \mathbb{R}^{n \times 1}; C_i, F_j \in \mathbb{R}^{n \times m}; \lambda_i > 0 (i = 1, \ldots, l); \gamma_j > 0, \tau_j > 0 (j = 1, \ldots, k).
Output: coefficient vector u
1. Initialization: t = 0, p_j^{(0)} = 0, b_j^{(0)} = 0.
2. while not converged do
3.   t \leftarrow t + 1
4.   u^{(t)}: solve subproblem (15) with p_j = p_j^{(t-1)}, b_j = b_j^{(t-1)}, i.e., u^{(t)} = K^{-1} q with
     K = C^{T}C + \sum_{i=1}^{l} \lambda_i M_i^{T} M_i + \sum_{j=1}^{k} \frac{\tau_j}{2} B_j^{T} B_j \in \mathbb{R}^{(k+l)m \times (k+l)m}, \qquad q = C^{T}L + \sum_{j=1}^{k} \frac{\tau_j}{2} \big( B_j^{T} p_j + B_j^{T} b_j \big) \in \mathbb{R}^{(k+l)m \times 1}.
5.   p_j^{(t)}: solve subproblem (16) component-wise via (p_j)_i = \mathrm{shrink}\big( (B_j u)_i - (b_j)_i,\ \gamma_j/\tau_j \big).
6.   b_j^{(t)} = b_j^{(t-1)} + \big( p_j^{(t)} - B_j u^{(t)} \big).
7. end while

The p_j-subproblem (j = 1, \ldots, k) has a closed-form solution for each (p_j)_i (see [33]), where \mathrm{shrink}(a, b) = \mathrm{sign}(a)\max(|a| - b, 0) and the convention 0 \cdot (0/0) = 0 is assumed. By Algorithm 2, we compute the representation coefficients \alpha_i, \beta_j (i = 1, \ldots, l; j = 1, \ldots, k) and obtain the high-resolution image \hat{H} via Eq (10).

3.2 Modified AHFM with iterative refinement

Eq (9) takes the different smooth and non-smooth components into consideration. A natural remedy for extracting more details is to exploit the structure of the difference between the LR image and the last updated result, namely the residual R = L - D\hat{H}, where \hat{H} is the last updated HR image and D is the downsampling operator. We find that the residual consists mostly of non-smooth components. To pick up more edge details and make the image less blurry, we design a new iterative refinement model based on this special structure. In the model, we use two smaller parameters \xi_3, \xi_4 (forming F_3, F_4) to depict the residual image. Since the non-smooth components are also sparse in the residual, we apply l1 regularization to the corresponding coefficients. The iterative refinement model can be written as

\min_{\beta_3, \beta_4} \| R - F_3 \beta_3 - F_4 \beta_4 \|_2^2 + \mu_1 \|\beta_3\|_1 + \mu_2 \|\beta_4\|_1,   (19)

where \beta_3, \beta_4 are the coefficients of the non-smooth components and \mu_1, \mu_2 are regularization parameters. Since Eq (19) involves the l1 norm, we solve it by an ADMM scheme. We make two variable substitutions for \beta_3, \beta_4 and rewrite Eq (19) as

\min \| R - F_3 \beta_3 - F_4 \beta_4 \|_2^2 + \mu_1 \|u_1\|_1 + \mu_2 \|u_2\|_1, \quad \text{s.t. } u_1 = \beta_3,\ u_2 = \beta_4,   (20)

which can be written compactly as

\min \| R - D^{*} \beta^{*} \|_2^2 + \mu_1 \|u_1\|_1 + \mu_2 \|u_2\|_1, \quad \text{s.t. } u_1 = A\beta^{*},\ u_2 = B\beta^{*},   (21)

where D^{*} = (F_3, F_4), \beta^{*} = (\beta_3; \beta_4), A = (I, 0), and B = (0, I); here 0 is the zero matrix and I the identity matrix. The optimization problem (21) is separable with respect to (\beta^{*}, u_1, u_2). The augmented Lagrangian of Eq (21) is

\mathcal{L}_{\tilde{\tau}_1, \tilde{\tau}_2}(\beta^{*}, u_1, u_2, b_1, b_2) = \| R - D^{*}\beta^{*} \|_2^2 + \mu_1 \|u_1\|_1 + \mu_2 \|u_2\|_1 + \frac{\tilde{\tau}_1}{2}\|u_1 - A\beta^{*} + b_1\|_2^2 + \frac{\tilde{\tau}_2}{2}\|u_2 - B\beta^{*} + b_2\|_2^2,   (22)

where b_1, b_2 are Lagrange multipliers. Minimizing Eq (22) alternately over \beta^{*} (a least-squares subproblem, (23)) and over u_1 and u_2 (closed-form shrink subproblems, (24) and (25)), followed by the multiplier updates, gives Algorithm 3.

Algorithm 3
1. Initialization: t = 0, u_1^{(0)} = u_2^{(0)} = 0, b_1^{(0)} = b_2^{(0)} = 0.
2. while not converged do
3.   t \leftarrow t + 1
4.   \beta^{*(t)}: solve subproblem (23) with u_1 = u_1^{(t-1)}, u_2 = u_2^{(t-1)}, b_1 = b_1^{(t-1)}, b_2 = b_2^{(t-1)}.
5.   u_1^{(t)}: solve subproblem (24) with \beta^{*} = \beta^{*(t)}, b_1 = b_1^{(t-1)}.
6.   u_2^{(t)}: solve subproblem (25) with \beta^{*} = \beta^{*(t)}, b_2 = b_2^{(t-1)}.
7.   b_1^{(t)} = b_1^{(t-1)} + \big( u_1^{(t)} - A\beta^{*(t)} \big).
8.   b_2^{(t)} = b_2^{(t-1)} + \big( u_2^{(t)} - B\beta^{*(t)} \big).
9. end while

3.3 Single image super-resolution based on approximated Heaviside functions and iterative refinement

Taking the different behaviours of Eqs (9) and (19) into consideration, we obtain the modified single image super-resolution method based on approximated Heaviside functions and iterative refinement, shown in Algorithm 4. The details of the proposed method can be found in the flow chart shown in Fig 2.

Fig 2. The flow chart of the proposed work.
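Before stating the combined method in Algorithm 4, the following minimal Python sketch (our own illustration under simplifying assumptions, not the authors' Matlab code) shows the update pattern shared by Algorithms 2 and 3 for the simplest case of one smooth dictionary C1 and one sparse dictionary F1, i.e., a model of the form of Eq (6): a least-squares step for the stacked coefficient vector, a soft-thresholding (shrink) step for the auxiliary variable, and a multiplier update. The variable names, parameter values, and iteration count are ours.

```python
import numpy as np

def shrink(a, b):
    # Soft-thresholding operator: shrink(a, b) = sign(a) * max(|a| - b, 0).
    return np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

def admm_sr_coeffs(L, C1, F1, lam=1e-2, gamma=1e-3, tau=1.0, iters=50):
    """Minimize ||L - C1*a - F1*b||^2 + lam*||a||^2 + gamma*||b||_1 (cf. Eq (6))
    via ADMM with the splitting p = b. Returns the coefficients a and b."""
    n, m1 = C1.shape
    m2 = F1.shape[1]
    C = np.hstack([C1, F1])                           # stacked dictionary, u = [a; b]
    M = np.hstack([np.eye(m1), np.zeros((m1, m2))])   # selects a from u
    B = np.hstack([np.zeros((m2, m1)), np.eye(m2)])   # selects b from u
    K = C.T @ C + lam * (M.T @ M) + 0.5 * tau * (B.T @ B)
    u = np.zeros(m1 + m2)
    p = np.zeros(m2)
    d = np.zeros(m2)                                  # scaled multiplier
    for _ in range(iters):
        q = C.T @ L + 0.5 * tau * B.T @ (p + d)
        u = np.linalg.solve(K, q)                     # least-squares u-subproblem
        p = shrink(B @ u - d, gamma / tau)            # closed-form p-subproblem
        d = d + (p - B @ u)                           # multiplier update
    return M @ u, B @ u                               # a, b
```

The same pattern extends to several l2- and l1-regularized blocks by adding one selection matrix M_i or B_j per block, which is what the block-wise ADMM of Algorithm 2 does.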
Algorithm 4
Input: the low-resolution image L (in vector form); \lambda_i, \gamma_j > 0 (i = 1, \ldots, l; j = 1, \ldots, k); \mu_1 > 0, \mu_2 > 0; \tilde{\tau}_1 > 0, \tilde{\tau}_2 > 0; s: the upscaling factor; t: maximum number of iterations of the iterative refinement.
Output: high-resolution image H
1. According to Eq (4), construct matrices C_i, F_j \in \mathbb{R}^{n \times m} and F_3, F_4 \in \mathbb{R}^{n \times m} on the coarse grid, and \tilde{C}_i, \tilde{F}_j (i = 1, \ldots, l; j = 1, \ldots, k), \tilde{F}_3, \tilde{F}_4 \in \mathbb{R}^{N \times m} on the corresponding fine grid, where N = s^2 n.
2. Compute the coefficients (\alpha_i, \beta_j) according to Algorithm 2:
   (\alpha_i, \beta_j) = \arg\min \big\| L - \sum_{i=1}^{l} C_i \alpha_i - \sum_{j=1}^{k} F_j \beta_j \big\|_2^2 + \sum_{i=1}^{l} \lambda_i \|\alpha_i\|_2^2 + \sum_{j=1}^{k} \gamma_j \|\beta_j\|_1.
3. Get the high-resolution image \hat{H} via Eq (10).
4. for k = 1:t
   (a) Compute the residual image R^{(k)}: R^{(k)} = L - DH^{(k)}.
   (b) Compute the refinement coefficients according to Algorithm 3.
   end
5. Assemble the refinement components E_1 and E_2 on the fine grid from the coefficients obtained in step 4 (analogously to step 3 of Algorithm 1).
6. Compute the final high-resolution image: H = \hat{H} + \mathrm{conv}(E_1, p) + \mathrm{conv}(E_2, p), where conv denotes a convolution operator and p is a Gaussian kernel of small size.

We use a small Gaussian kernel p for the convolution to avoid over-sharp information in the non-edge parts, and D is the bicubic downsampling operator.

However, the computation of Algorithm 4 is very expensive, owing to the large-scale, non-sparse matrix C in Eq (12). For example, if an LR image is 512 × 512 and we choose 10 different directions, the size of matrix C is (512² × 10) × 512², which is obviously large. We observe that increasing the number of smooth and non-smooth components usually improves image quality at the cost of increased computation time and memory; for a large number of components, it is difficult to perform SR on limited hardware. We have experimentally found that using more than l = 1, k = 2 in Algorithm 4 does not improve image quality significantly, while l = 1, k = 2 works well on a regular desktop. Hence, we choose l = 1, k = 2 in our work. The optimization model then simplifies to

\min_{\alpha_1, \beta_1, \beta_2} \| L - C_1\alpha_1 - F_1\beta_1 - F_2\beta_2 \|_2^2 + \lambda_1 \|\alpha_1\|_2^2 + \gamma_1 \|\beta_1\|_1 + \gamma_2 \|\beta_2\|_1,   (29)

and its augmented Lagrangian \mathcal{L}(\tilde{u}, p_1, p_2, b_1, b_2) follows the form of Eq (14), where b_1, b_2 are Lagrange multipliers of proper size. Minimizing \mathcal{L}(\tilde{u}, p_1, p_2, b_1, b_2) is carried out iteratively and alternately through the following subproblems:

\tilde{u}\text{-subproblem:} \quad \min_{\tilde{u}} \| L - C'\tilde{u} \|_2^2 + \lambda_1 \|M\tilde{u}\|_2^2 + \frac{\tau_1}{2}\|p_1 - B'_1\tilde{u} + b_1\|_2^2 + \frac{\tau_2}{2}\|p_2 - B'_2\tilde{u} + b_2\|_2^2,

where C' = (C_1, F_1, F_2) is the stacked coarse-grid dictionary, \tilde{u} = (\alpha_1; \beta_1; \beta_2), M\tilde{u} = \alpha_1, and B'_j\tilde{u} = \beta_j. The \tilde{u}-subproblem is a smooth quadratic problem, which we solve by least squares as \tilde{u} = K_1^{-1} q_1 with

K_1 = (C')^{T}C' + \lambda_1 M^{T}M + \frac{\tau_1}{2}(B'_1)^{T}B'_1 + \frac{\tau_2}{2}(B'_2)^{T}B'_2, \qquad q_1 = (C')^{T}L + \sum_{j=1}^{2} \frac{\tau_j}{2}(B'_j)^{T}(p_j + b_j).

The p_j-subproblem (j = 1, 2) has the closed-form solutions (39) and (40) for each component (p_j)_i:

(p_1)_i = \mathrm{shrink}\big( (B'_1\tilde{u})_i - (b_1)_i,\ \gamma_1/\tau_1 \big),   (39)

(p_2)_i = \mathrm{shrink}\big( (B'_2\tilde{u})_i - (b_2)_i,\ \gamma_2/\tau_2 \big).   (40)

With l = 1, k = 2, the overall method (Algorithm 5, i.e., MAHFM) proceeds as follows:
1. According to Eq (4), construct matrices C_1, F_1, F_2, F_3, F_4 \in \mathbb{R}^{n \times m} on the coarse grid, and \tilde{C}_1, \tilde{F}_1, \tilde{F}_2, \tilde{F}_3, \tilde{F}_4 \in \mathbb{R}^{N \times m} on the corresponding fine grid, where N = s^2 n.
2. Compute the coefficients according to Algorithm 2:
   (\alpha_1, \beta_1, \beta_2) = \arg\min \| L - C_1\alpha_1 - F_1\beta_1 - F_2\beta_2 \|_2^2 + \lambda_1 \|\alpha_1\|_2^2 + \gamma_1 \|\beta_1\|_1 + \gamma_2 \|\beta_2\|_1.
3. Get the high-resolution image \hat{H}: \hat{H} = S_1 + \sum_{j=1}^{2} E_j, where S_1 = \tilde{C}_1\alpha_1 and E_j = \tilde{F}_j\beta_j (j = 1, 2).
4. Compute the refinement coefficients according to Algorithm 3 and form the final output as in Algorithm 4.

Numerical results

4.1 Quantitative comparisons

We have applied our algorithm to several benchmark images whose high-resolution versions are available. For a quantitative comparison, we downscale the actual HR images to their LR versions via bicubic interpolation and then generate HR images with the MAHFM algorithm and the other methods.
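The evaluation protocol just described can be sketched in a few lines of Python (our own illustration; `super_resolve` is a placeholder for any SR routine such as MAHFM, and scipy's cubic-spline `zoom` stands in for bicubic resampling). The RMSE and PSNR helpers use the standard definitions, consistent with Eqs (41) and (42) below, assuming 8-bit images (peak value 255).

```python
import numpy as np
from scipy.ndimage import zoom

def rmse(h_true, h_est):
    # Root mean square error between ground truth and reconstruction.
    return np.sqrt(np.mean((h_true.astype(float) - h_est.astype(float)) ** 2))

def psnr(h_true, h_est, peak=255.0):
    # Peak signal-to-noise ratio in dB, assuming the given peak intensity.
    mse = np.mean((h_true.astype(float) - h_est.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def evaluate(hr, super_resolve, s=2):
    """Downscale the ground-truth HR image by factor s (cubic spline, standing in
    for bicubic), super-resolve the LR image back, and report RMSE/PSNR.
    `super_resolve(lr, s)` is a placeholder for any SR method, e.g. MAHFM."""
    lr = zoom(hr.astype(float), 1.0 / s, order=3)
    est = super_resolve(lr, s)
    return rmse(hr, est), psnr(hr, est)

# Example with plain cubic upsampling as the "SR method" (a baseline only):
if __name__ == "__main__":
    hr = np.random.rand(64, 64) * 255.0
    r, p = evaluate(hr, lambda lr, s: zoom(lr, s, order=3), s=2)
    print(f"RMSE = {r:.3f}, PSNR = {p:.2f} dB")
```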
For gray images, we apply the proposed algorithm directly. For color images, the input image is first converted from RGB to YCbCr, and we perform the MAHFM algorithm only on the luminance channel, because humans are more sensitive to brightness information. The other two channels contain chromaticity information and can be upsampled by bicubic interpolation. Finally, we convert the three channels back to RGB to obtain the estimated color HR image.

We make quantitative comparisons between the recovered HR images and the actual HR images, using the root mean square error (RMSE) and the peak signal-to-noise ratio (PSNR) on the luminance channel:

\mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (h_i - \hat{h}_i)^2 },   (41)

\mathrm{PSNR} = 10 \log_{10} \left( \frac{(2^n - 1)^2}{\frac{1}{IJ} \sum_{i,j} (h_{ij} - \hat{h}_{ij})^2} \right),   (42)

where h and \hat{h} are the vector forms of the ground-truth image and the resulting high-resolution image, respectively, N is the number of pixels, I × J is the size of the original image, and n is the number of bits per sample. A smaller RMSE usually indicates better performance; PSNR has the opposite property.

As mentioned in [7], applying the MAHFM algorithm to a whole image involves expensive computation, so we use image patches to reduce computation and storage significantly. In our work, we set the patch size to 6 × 6 with an overlap of 3.

First, we compare the MAHFM algorithm with the AHFM algorithm [7]; the numerical results are shown in Table 1. The performance of the MAHFM algorithm is then compared with that of bicubic interpolation, a kernel regression method (denoted "07'TIP" [34]), a fast upsampling method (denoted "08'TOG" [22]), a state-of-the-art learning-based method (denoted "10'TIP" [12]), a fast image upsampling method via the displacement field (denoted "14'TIP" [10]), and AHFM [7]. The numerical results of these methods are displayed in Table 2. Furthermore, we have also carried out experiments for the upscaling factor s = 3; the quantitative measurements are shown in Table 3. These results reveal that the MAHFM algorithm is effective.

4.2 Visual comparisons

The test images (LR images) in Figs 3 and 4 are shown in Figs 3(a) and 4(a), respectively. The images "baboon" and "peppers" were downloaded from http://decsai.ugr.es/cvg/dbimagenes/c512.php; the bottom of that web page states that these images are copyright free. The versions of the images in our paper are in other formats, converted from the sources above with Photoshop. We show results on these images with different upscaling factors; for example, the upscaling factor is s = 3 in Fig 3. The proposed algorithm is compared with bicubic interpolation, a learning-based method ("10'TIP" [12]), a state-of-the-art interpolation-based method ("11'IPOL" [2]), the fast upsampling method based on the displacement field ("14'TIP" [10]), and AHFM [7]. As can be seen, the bicubic interpolation method produces blurring. The learning-based method gives comparable visual quality, but it needs extra training data to learn the relation between the test images and the training images. The interpolation-based method ("11'IPOL") also shows impressive performance. In particular, "14'TIP" shows superior visual quality on image edges and runs quite fast, but as shown in the figures it not only loses small-scale texture but also introduces slight jagged artifacts and smooths non-edge regions.
By contrast, the HR images recovered by the MAHFM algorithm preserve sharpness on the edges, such as the texture of the mustache, without ringing or blurring artifacts. Compared with the ground-truth image, the MAHFM algorithm produces better results than the other methods, both visually and quantitatively.

Fig 3. Results of "baboon" with the upscaling factor s = 3. (a) LR image, (b) bicubic, (c) 11'IPOL [2], (d) 14'TIP [10], (e) 10'TIP [12], (f) AHFM [7], (g) MAHFM.

4.3 Comparisons between the AHFM and the proposed algorithm (MAHFM)

In this section we illustrate the comparisons between the AHFM algorithm [7] and the MAHFM algorithm. First, it is necessary to illustrate the difference between Algorithm 2 (with l = 1, k = 2) and the AHFM algorithm. The only difference between the two methods is that the MAHFM algorithm includes an additional term that characterizes sharper components. This can be seen clearly in Fig 5, which shows that the HR image produced by Algorithm 2 (with l = 1, k = 2) is sharper than that of the AHFM algorithm, especially at the edges in Fig 5(c).

Fig 4. Results of "peppers" with the upscaling factor s = 2. (a) LR image, (b) ground truth, (c) bicubic, (d) 11'IPOL [2], (e) 14'TIP [10], (f) 10'TIP [12], (g) AHFM [7], (h) MAHFM.

We also compare the performance of the proposed iterative refinement with that of the iterative AHFM algorithm [7]. To measure the effect of the iterative refinement, we employ two relative errors defined as

\mathrm{Re}_{\mathrm{error}} = \frac{\| L - DH \|_F}{\| L \|_F},   (43)

and

\mathrm{Re}_{\mathrm{error1}} = \frac{\| H_{\mathrm{true}} - H \|_F}{\| H_{\mathrm{true}} \|_F},   (44)

where L is the low-resolution image, D is the downsampling operator, H is the last updated result, H_true is the true high-resolution image, and ||·||_F is the Frobenius norm. After computing the relative errors of all the test images, we take the average of all the relative errors as the final result. The results, compared with the iterative AHFM algorithm in [7], are plotted in Fig 6, illustrating that Algorithm 3 produces more details and less error than the original iterative refinement method; the precision of the proposed method is also higher. From Fig 6 we can also read off the number of iterative refinement steps: only two iterations are needed to achieve the goal of the refinement, which also reduces the running time.

Fig 5. Comparisons between MAHFM and AHFM with factor s = 2. (a) MAHFM (RMSE = 8.7501, PSNR = 29.2905 dB). (b) AHFM [7] (RMSE = 9.0810, PSNR = 28.9681 dB). (c) residual (for better visualization, we add 0.5 to the intensities of the residual).

Fig 6. Comparisons between two iterative refinement methods. (a) Re_error. (b) Re_error1.

4.4 Parameters

As mentioned in Section 3, the MAHFM algorithm has many parameters, e.g. ξ1, ξ2, ξ3, λ1, λ2, λ3 and others. Although there are many of them, they are easy to select; in our work we select them mainly according to experience. The parameters ξ1, ξ2, ξ3 are especially important, because they determine the sharpness of the different components of an image. First, we test ξ1 by fixing ξ2, ξ3 and tuning ξ1. In this way we obtain a rough estimate of the three parameters (see Fig 7(a)); from some distinctive points we can roughly estimate ξ1 ∈ [0.8, 1], ξ2 ∈ [10^{-2}, 10^{-1}], and ξ3 ∈ [10^{-4}, 10^{-2}]. Fig 7(a) also reveals that the results are not sensitive to the selection of ξ1 and ξ3.
Thus we can select ξ1 = 0.8 and ξ3 = 10^{-4} first, and then tune ξ2 from 10^{-2} to 10^{-1} in steps of 10^{-2}, as plotted in Fig 7(b). We find that ξ2 = 6 × 10^{-2} is the best fit. The same procedure is used to select the other parameters. The final parameters of the MAHFM algorithm are listed in Table 4.

Fig 7. (a) The RMSE trend of the MAHFM algorithm as each parameter varies while the other parameters are roughly estimated (in this figure, e1, e2, e3 denote ξ1, ξ2, ξ3, respectively). (b) The RMSE trend of the MAHFM algorithm as the parameter ξ2 varies, with ξ1 = 0.8, ξ3 = 10^{-4}.

4.5 Computation time

From Figs 8 and 9, we can see that the proposed method is somewhat slower in running time. The main cost of our algorithm lies in computing the coefficients u; because many components have to be characterized, the Matlab implementation contains many loops. There is therefore considerable room to speed up the computation, for example by rewriting the loop-heavy parts as C MEX functions in Matlab. From Fig 8 we also see that Algorithm 3 takes more time than Algorithm 2, since it involves more matrix-vector operations. In addition, we use patch-based processing to speed up the computation, and parallel computing could also be considered.

Fig 8. (a) Computation time of Algorithm 2 (l = 1, k = 2) and Algorithm 3 vs. upscaling factor for the low-resolution image "butterfly" with LR size 60 × 60. (b) Computation time of Algorithm 2 (l = 1, k = 2) and Algorithm 3 vs. the size of the low-resolution image "butterfly"; the size of the low-resolution image is increased from 30 × 30 to 110 × 110 and the upscaling factor is fixed at 4. Note that, to reduce the instability of Matlab timings, the computation time is the average of 10 runs.

Fig 9. (a) Computation time of MAHFM (i.e., Algorithm 5) vs. upscaling factor for the low-resolution image "butterfly" with LR size 60 × 60. (b) Computation time of MAHFM vs. the size of the low-resolution image "butterfly"; the size of the low-resolution image is increased from 30 × 30 to 110 × 110 and the upscaling factor is fixed at 4. Note that, at the point shared with Fig 8 (upscaling factor 4 and 60 × 60 LR size), the computation time differs slightly owing to the instability of Matlab timings; we therefore report the average of 10 runs to reduce the gap.

4.6 The differences between the Heaviside function and wavelets

In the image processing field there exist other basis functions, such as wavelets [35], so we discuss the difference between the Heaviside function and wavelets. To test the difference, we wrote a program for super-resolution based on the Haar wavelet representation; the quantitative comparisons are shown in Table 5. Comparing Fig 10 with Fig 4, we can see that the algorithm based on Heaviside functions recovers more details of the non-smooth components than the wavelet-based one does. The wavelet-based algorithm mainly splits an image into different sub-bands: the first-level scaling coefficients and wavelet coefficients of the image are extracted from the wavelet decomposition, and the different sub-bands capture information in different directions, such as the horizontal, vertical, and other directions.

Fig 10. Results of "peppers" with the upscaling factor s = 2 via the representation of Haar wavelets.
Every sub-band retains information only about its corresponding direction. If operators such as bicubic interpolation are applied, errors are generated; these errors then make pixels overlap, causing Gibbs effects. By contrast, in our method the basis is generated by the corresponding Heaviside functions and the representation coefficients are extracted from the low-resolution image. The MAHFM algorithm can represent components of different sharpness via different classes of Heaviside functions as well as possible, which minimizes the loss of information.

Conclusion

In this paper, we have proposed a general framework of approximated Heaviside function models that can describe the different smooth components and non-smooth components of an image. The different components of an image can be approximately represented by different classes of approximated Heaviside functions (AHFs). With only one LR image as input, we compute the representation coefficients of the AHFs and then use them to obtain the high-resolution image. In addition, to pick up more image details, we employed an iterative refinement algorithm based on the residuals between the LR input and the downsampled result. To reduce computation and storage, we applied the MAHFM algorithm to image patches. We also discussed how to choose the parameters. Extensive experiments demonstrate the effectiveness of the MAHFM algorithm both quantitatively and visually.

Author Contributions

Conceptualization: Xin-Yu Wang.
Formal analysis: Xin-Yu Wang, Ting-Zhu Huang, Liang-Jian Deng.
Methodology: Xin-Yu Wang, Ting-Zhu Huang, Liang-Jian Deng.
Software: Xin-Yu Wang.
Supervision: Ting-Zhu Huang, Liang-Jian Deng.
Validation: Xin-Yu Wang.
Visualization: Xin-Yu Wang.
Writing - original draft: Xin-Yu Wang.
Writing - review & editing: Xin-Yu Wang.

References

1. Elad M, Feuer A. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Transactions on Image Processing. 1997; 6:1646-1658. https://doi.org/10.1109/83.650118 PMID: 18285235
2. Getreuer P. Image interpolation with contour stencils. Image Processing On Line. 2011; 1. https://doi.org/10.5201/ipol.2011.g_igcs
3. Mueller N, Lu Y, Do MN. Image interpolation using multiscale geometric representations. Proc SPIE. 1973; 13:341-353.
4. Zhang L, Wu X. An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Transactions on Image Processing. 2006; 15:2226-2238. https://doi.org/10.1109/TIP.2006.877407 PMID: 16900678
5. Li X, Orchard MT. New edge-directed interpolation. IEEE Transactions on Image Processing. 2001; 10:1521-1527. https://doi.org/10.1109/83.951537 PMID: 18255495
6. Getreuer P. Contour stencils: Total Variation along curves for adaptive image interpolation. SIAM Journal on Imaging Sciences. 2011; 4:954-979. https://doi.org/10.1137/100802785
7. Deng LJ, Guo WH, Huang TZ. Single image super-resolution by approximated Heaviside functions. Information Sciences. 2016; 348:107-123. https://doi.org/10.1016/j.ins.2016.02.015
8. Cha Y, Lee GY, Kim S. Image zooming by curvature interpolation and iterative refinement.
SIAM Journal on Imaging Sciences. 2014; 7:1284-1308. https://doi.org/10.1137/130907057
9. Kim H, Cha Y, Kim S. Curvature interpolation method for image zooming. IEEE Transactions on Image Processing. 2011; 20:1895-1903. https://doi.org/10.1109/TIP.2011.2107523 PMID: 21257378
10. Wang LF, Wu HY, Pan CH. Fast image upsampling via the displacement field. IEEE Transactions on Image Processing. 2014; 23:5123-5135. https://doi.org/10.1109/TIP.2014.2360459 PMID: 25265631
11. Purkait P, Pal NR, Chanda B. A fuzzy-rule-based approach for single frame super resolution. IEEE Transactions on Image Processing. 2014; 23:2277-2290. https://doi.org/10.1109/TIP.2014.2312289
12. Yang J, Wright J, Huang TS, Ma Y. Image super-resolution via sparse representation. IEEE Transactions on Image Processing. 2010; 19:2861-2873. https://doi.org/10.1109/TIP.2010.2050625 PMID: 20483687
13. Dong C, Chen CL, He K, Tang X. Learning a deep convolutional network for image super-resolution. European Conference on Computer Vision (ECCV). 2014; 184-199.
14. Peleg T, Elad M. A statistical prediction model based on sparse representations for single image super-resolution. IEEE Transactions on Image Processing. 2014; 23:2569-2582. https://doi.org/10.1109/TIP.2014.2305844 PMID: 24815620
15. Takagi T, Sugeno M. Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics. 1985; 15:387-403.
16. Farsiu S, Robinson MD, Elad M. Fast and robust multiframe super resolution. IEEE Transactions on Image Processing. 2004; 13:1327-1344. https://doi.org/10.1109/TIP.2004.834669
17. Fattal R. Image upsampling via imposed edge statistics. ACM Transactions on Graphics. 2007; 26:95. https://doi.org/10.1145/1276377.1276496
18. Glasner D, Bagon S, Irani M. Super-resolution from a single image. IEEE International Conference on Computer Vision. 2009; 349-356.
19. Chambolle A, Pock T. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision. 2011; 40:120-145. https://doi.org/10.1007/s10851-010-0251-1
20. Irani M, Peleg S. Super resolution from image sequences. International Conference on Pattern Recognition. 1990; 2:115-120.
21. Komatsu T, Igarashi T, Aizawa K. Very high resolution imaging scheme with multiple different-aperture cameras. Signal Processing: Image Communication. 1993; 5:511-526. https://doi.org/10.1016/0923-5965(93)90014-K
22. Shan Q, Li ZR, Jia JY, Tang CK. Fast image/video upsampling. ACM Transactions on Graphics (TOG). 2008; 27. https://doi.org/10.1145/1409060.1409106
23. Fernandez GC, Candès EJ. Super-resolution via transform-invariant group-sparse regularization. IEEE International Conference on Computer Vision. 2013; 3336-3343.
24. Deng LJ, Guo W, Huang TZ. Single image super-resolution via an iterative reproducing kernel Hilbert space method. IEEE Transactions on Circuits and Systems for Video Technology. 2016; 26(11):2001-2014. https://doi.org/10.1109/TCSVT.2015.2475895
25. Deng LJ, Guo H, Huang TZ. A fast image recovery algorithm based on splitting deblurring and denoising. Journal of Computational and Applied Mathematics. 2015; 287:88-97. https://doi.org/10.1016/j.cam.2015.03.035
26. He B, Tao M, Yuan X. Block-wise alternating direction method of multipliers for multiple-block convex programming and beyond. SMAI Journal of Computational Mathematics. 2015; 1:145-174. https://doi.org/10.5802/smai-jcm.6
27. Kainen PC, Kůrková V, Vogt A.
Best approximation by linear combinations of characteristic functions of half-spaces. Journal of Approximation Theory. 2003; 122:151-159. https://doi.org/10.1016/S0021-9045(03)00072-8
28. Zhao XL, Wang F, Ng MK. A new convex optimization model for multiplicative noise and blur removal. SIAM Journal on Imaging Sciences. 2014; 7:456-475. https://doi.org/10.1137/13092472X
29. Xu Y, Huang TZ, Liu J. An augmented Lagrangian algorithm for total bounded variation regularization based image deblurring. Journal of the Franklin Institute. 2014; 351:3053-3067. https://doi.org/10.1016/j.jfranklin.2014.02.009
30. Liu G, Huang TZ, Liu J. High-order TVL1-based images restoration and spatially adapted regularization parameter selection. Computers and Mathematics with Applications. 2014; 67:2015-2026. https://doi.org/10.1016/j.camwa.2014.04.008
31. Chen C, Ng MK, Zhao XL. Alternating direction method of multipliers for nonlinear image restoration problems. IEEE Transactions on Image Processing. 2015; 24:33-43. https://doi.org/10.1109/TIP.2014.2369953 PMID: 25398177
32. Zhang YD, Peterson BS, Ji GL, Dong ZC. Energy preserved sampling for compressed sensing MRI. Computational and Mathematical Methods in Medicine. 2014; 5:546814, 12 pages.
33. Wang Y, Yang J, Yin W, Zhang Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM Journal on Imaging Sciences. 2008; 1:248-272. https://doi.org/10.1137/080724265
34. Takeda H, Farsiu S, Milanfar P. Kernel regression for image processing and reconstruction. IEEE Transactions on Image Processing. 2007; 16:349-366. https://doi.org/10.1109/TIP.2006.888330 PMID: 17269630
35. Zhang YD, Wang SH, Ji GL, Dong ZC. Exponential wavelet iterative shrinkage thresholding algorithm with random shift for compressed sensing magnetic resonance imaging. IEEJ Transactions on Electrical and Electronic Engineering. 2015; 1:115-132.



Xin-Yu Wang, Ting-Zhu Huang, Liang-Jian Deng. Single image super-resolution based on approximated Heaviside functions and iterative refinement, PLOS ONE, 2018, DOI: 10.1371/journal.pone.0182240