Signal Processing: Image Communication 31 (2015) 86–99
Super-resolution by polar Newton–Thiele's rational kernel in centralized sparsity paradigm

Lei He, Jieqing Tan, Zhuo Su, Xiaonan Luo, Chengjun Xie

College of Computer and Information, Hefei University of Technology, Hefei 230009, China
National Engineering Research Center of Digital Life, State-Province Joint Laboratory of Digital Home Interactive Applications, School of Information Science and Technology, Sun Yat-sen University, Guangzhou 510006, China
Article history: Received 22 July 2014; received in revised form 9 December 2014; accepted 9 December 2014; available online 31 December 2014.

Abstract

In general, rectangular windows are used by many super-resolution reconstruction approaches; however, they are not suitable for the arc regions of images. In view of this, a novel reconstruction algorithm is proposed in this paper, based on Newton–Thiele's rational interpolation by continued fractions in the polar coordinates. To obtain better reconstructed results, we also present a novel model in which the Newton–Thiele's rational interpolation scheme, used to magnify images/videos, is combined with a sparse representation scheme used to refine the reconstructed results. Plenty of experiments on images and video sequences demonstrate that the new method produces high-quality resolution enhancement and achieves much better results than state-of-the-art methods in terms of both visual effect and PSNR.
Keywords: Continued fractions; Nonlinear interpolation; Polar coordinates; Sparse representation; Super-resolution
1. Introduction

Super-resolution (SR) reconstruction is an active area of research which aims to reconstruct a high-resolution (HR) image from one or more low-quality input images. HR means that an image has a high pixel density and provides more details, which play an important role in practical applications. The reconstruction task is difficult because many details are not contained in the input low-resolution (LR) images. SR image reconstruction is a typical ill-posed inverse problem [43], since for a given LR input image there are many HR images satisfying the reconstruction constraint. It is generally modeled as

$$Y = DHX + V, \qquad (1)$$
Corresponding authors: J. Tan, Z. Su, X. Luo, C. Xie. E-mail addresses: [email protected] (J. Tan), [email protected] (Z. Su), [email protected] (X. Luo), [email protected] (L. He).
http://dx.doi.org/10.1016/j.image.2014.12.003
where Y is a given LR input image, X is the HR image, D is a down-sampling operator, H is a blurring operator, and V is additive noise.

Currently there are many SR image reconstruction methods [1–35], including interpolation-based methods, regression-based methods, sparse-representation-based methods, etc. The simplest way of attaining more details is to reconstruct new pixels by using interpolation approaches [9,20,21], e.g., bilinear and bicubic interpolation. However, these simple interpolation methods commonly produce artifacts along the edges of the reconstructed images. In order to overcome this, a new edge-directed interpolation method via the local covariance was presented in [21], where the covariance-based adaptive interpolation method and the bilinear interpolation method were combined to reduce computational complexity. Zhou et al. [9] proposed a novel interpolation-based SR reconstruction framework using multisurface fitting, which generated one surface for every LR pixel and combined the surfaces with different weights to retain image details.
Recently, regression approaches [4,5,31] have attracted wide attention. Yang et al. [4] proposed a robust super-resolution method based on in-place example regression. This method took full advantage of two SR approaches: learning from an external training database and learning from self-examples. Tang et al. [5] found Yang's algorithm [6] to be a linear regression method in a special feature space, and proposed a novel SR method using a more flexible regression model. Zhang et al. [31] proposed a maximum a posteriori probability framework to realize SR recovery by using a non-local regularization prior based on the non-local means filter and a local regularization prior based on steering kernel regression. Aiming at an efficient algorithm for single-image SR, Jiang et al. [44] proposed gCLSR (graph-Constrained Least Squares Regression), formulated as a nonnegativity-constrained least squares program, to learn a projection matrix mapping LR image patches to the HR image patch space. Using this projection matrix, the HR image could be simply reconstructed from a single LR image without the need of HR–LR training pairs. The advantage of gCLSR is that it can preserve the intrinsic geometric structure of the original HR image patch manifold. These approaches are feasible and capable of reproducing more details. Nevertheless, learning such a regression function is extremely difficult because SR is ill-posed by nature, and good image priors are needed to constrain the solution.

Motivated by its successful applications in many image-related inverse problems, sparse representation, as a powerful model, has also been applied to SR reconstruction [1,6–8,17,22,26]. Ren et al. [1] utilized the contextual information of neighboring image patches and proposed a unified framework based on a context-aware prior model and a Markov random field model to realize image denoising and super-resolution reconstruction. Yang et al. [7] proposed an SR approach using coupled dictionaries which share the same sparse representation and are trained from HR and LR image patch pairs, respectively; the SR patch is then attained from the HR dictionary and the sparse representation computed over the LR dictionary. Considering that the optimal sparse domains of natural images can vary significantly across different images, or across different patches in a single image, Dong et al. [17] proposed an algorithm with adaptive sparse domain selection and adaptive regularization to further improve the quality of reconstructed images. Subsequently, Dong et al. [26] put forward a novel image reconstruction method using nonlocal regularization and local adaptive sparsity. These sparse-representation methods achieve good results in both edge and textural regions, but rely heavily on the training dictionary and the relevant parameters. We refer to [2,3,10–16,18,19,23–25,27–30,32–35] for other SR methods. Sun et al. [11] used a gradient profile prior learned from plenty of natural images; with this prior, a gradient field constraint, which mainly sharpens details and suppresses ringing or jaggy artifacts along edges, was enforced to reconstruct the image. Considering the fact that an example-based super-resolved image inevitably contains artifacts, Liang et al. [24] proposed a new single-image SR algorithm by integrating an enforced
similarity-preserving process into an example-based SR approach. Li et al. [18] utilized the information of a set of LR images and proposed a multi-frame SR reconstruction algorithm based on local image characteristics. Su et al. [13] proposed an adaptive block-based SR framework where each block was assigned a suitable method among rule-based and learning-based image/video classification, feature extraction, and image enhancement. Tai et al. [14] combined edge-directed SR with learning-based SR to obtain good results, and extended edge-directed SR to recover detail from a single exemplar image. After analyzing the reasons for the inaccuracy of motion estimation, Lu et al. [25] proposed a multi-lateral filter to regularize the process of motion estimation, introduced a non-local prior to regularize the HR image reconstruction, and finally incorporated the two regularizations into one maximum a posteriori estimation model. In contrast to traditional exemplar-based hallucination methods, Yue et al. [12] proposed a novel SR scheme for landmark images by retrieving correlated images from the Internet. Wanner and Goldluecke [34] developed a continuous framework for light field analysis and proposed novel variational methods for spatial and angular super-resolution. A blind unified method for multi-image super-resolution was presented in [35], based on alternating minimization of a well-defined cost function, with the Huber–Markov random field model used for the HR image. Purkait and Chanda [32] proposed a new edge-preserving super-resolution reconstruction method using multiscale morphology and Bregman iterations.

Rectangular windows are used in many of the reconstruction approaches above [4,6,7,13,15,16,23]. However, observing natural images, we find that they contain many arc regions. If we adopt conventional rectangular windows in the cartesian coordinates, these edges cannot be processed well.
Inspired by this observation, we seek a novel SR model for these arc regions. In this paper we propose a new SR reconstruction technique that extends the existing sparse-representation-based framework, and show how to select an appropriate window in the polar coordinates and how to form a novel SR model that combines a nonlinear interpolation scheme, used to magnify images/videos, with a sparse representation scheme used to refine the reconstructed results. The windows in the polar coordinates, which capture more details and texture regions of images than rectangular windows in the cartesian coordinates, are used to acquire better visual effect and prominent texture. The main contributions of this paper are as follows: the Newton–Thiele's rational interpolation function in the polar coordinates is proposed; the nonlinear interpolation in the polar coordinates is applied to image and video SR reconstruction; and the novel SR model by polar Newton–Thiele's rational kernel in the centralized sparsity paradigm is presented.

The rest of the paper is organized as follows. In Section 2, an overview of the proposed method is given. The rational interpolation based on continued fractions, the polar Newton–Thiele's rational interpolation, and its error estimation are presented in Section 3. The centralized sparse representation model for fine-scale reconstruction is presented in Section 4. The implementation and experimental analysis of
the proposed algorithm are presented in Section 5, and the paper is concluded in Section 6.

2. Overview of the proposed method

In this section we summarize the proposed method, which consists of two phases, namely a reconstructing phase and a refining phase. In the reconstructing phase, we construct the polar Newton–Thiele's rational interpolation function. When a denominator becomes zero in the course of applying the polar Newton–Thiele's rational interpolation function, the Thiele-type continued fraction is replaced by a Newton-type polynomial. The error estimation of Newton–Thiele's interpolation in the polar coordinates is given. In order to apply the polar Newton–Thiele's rational interpolation function to SR reconstruction, we analyze the relationship between the cartesian coordinates and the polar coordinates, as shown in Fig. 1. Finally, we get the initial reconstructed image by using the polar Newton–Thiele's rational interpolation function.

In the refining phase, we adopt the centralized sparse representation (CSR) model [22]. A local dictionary is learned for each patch or each cluster of similar patches. Through iteration, we update the dictionary, the regularization parameters, and the sparse codes. After updating all parameters, the estimation of the original image can be obtained. Finally, the reconstructed image by interpolation and the estimated image by CSR are given different weights. By balancing the weight coefficient, we get the final SR results.

3. SR via polar Newton–Thiele's rational interpolation

3.1. Rational interpolation based on the continued fractions

Continued fractions are a classical branch of mathematics, systematically expounded by Wall [36], and have exerted a significant influence in the computer-aided industry. Numerous research works have demonstrated that Thiele-type continued fractions perform very well in practical applications. Mathematically, they can be defined as follows.

Suppose $X = \{x_0, x_1, \ldots, x_m\}$ is a set of real or complex points and a function $f(x)$ is defined in a domain $D \supset X$, where $x_0, x_1, \ldots, x_m$ do not have to be distinct from one another. To approximate $f(x)$, Thiele's interpolating continued fraction can be constructed as follows:

$$T_m(x) = b_0 + \cfrac{x - x_0}{b_1 + \cfrac{x - x_1}{b_2 + \cdots + \cfrac{x - x_{m-1}}{b_m}}}, \qquad (2)$$

where $b_i = \varphi[x_0, x_1, \ldots, x_i]\ (i = 0, 1, \ldots, m)$ are the inverse differences of $f(x)$ at the points $x_0, x_1, \ldots, x_i$. It is not difficult to show that $T_m(x)$ is a rational function with the degrees of the numerator and denominator polynomials not exceeding $[(m+1)/2]$ and $[m/2]$, respectively, where $[x]$ denotes the largest integer less than or equal to $x$, and that $T_m(x_i) = f(x_i)\ (i = 0, 1, \ldots, m)$.

An alternative way to approximate $f(x)$ is Newton's interpolating polynomial, which can be expressed as

$$N_m(x) = f(x_0) + \sum_{i=0}^{m-1} f[x_0, \ldots, x_{i+1}]\,(x - x_0)\cdots(x - x_i), \qquad (3)$$

where $f[x_0, \ldots, x_{i+1}]\ (i = 0, \ldots, m-1)$ is the divided difference of $f(x)$ at the points $x_0, \ldots, x_{i+1}$.

Let $Y = \{y_0, y_1, \ldots, y_n\}$ be another set of real or complex points and $\prod_{m,n} = X \times Y = \{(x_i, y_j)\ (i = 0, 1, \ldots, m;\ j = 0, 1, \ldots, n)\}$. Then the Thiele-type branched continued fraction for a two-variable function $f(x, y)$ can be defined as [37–40]

$$TT_{m,n}(x, y) = b_0(y) + \cfrac{x - x_0}{b_1(y) + \cfrac{x - x_1}{b_2(y) + \cdots + \cfrac{x - x_{m-1}}{b_m(y)}}}, \qquad (4)$$

where $b_i(y)\ (i = 0, 1, \ldots, m)$ are the Thiele's interpolants based on continued fractions, defined by

$$b_i(y) = c_{i,0} + \cfrac{y - y_0}{c_{i,1} + \cfrac{y - y_1}{c_{i,2} + \cdots + \cfrac{y - y_{n-1}}{c_{i,n}}}} \qquad (i = 0, 1, \ldots, m), \qquad (5)$$

where $c_{i,j} = \varphi_{TT}[x_0, \ldots, x_i; y_0, \ldots, y_j]\ (i = 0, \ldots, m;\ j = 0, \ldots, n)$ are the partial inverted differences of $f(x, y)$ at the grid points $\{x_0, \ldots, x_i\} \times \{y_0, \ldots, y_j\}$. Then $TT_{m,n}(x, y)$ satisfies

$$TT_{m,n}(x_i, y_j) = f(x_i, y_j) \qquad (i = 0, \ldots, m;\ j = 0, \ldots, n). \qquad (6)$$

Fig. 1. The relationship between the cartesian coordinates (a) and the polar coordinates (b).
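To make Eq. (2) concrete, the inverse differences and the continued-fraction evaluation can be written in a few lines. The sketch below is our Python illustration, not code from the paper; the function names `inverse_differences` and `thiele_eval` are our own, and the test function f(x) = 1/(1 + x²) is chosen so that no inverse difference degenerates.

```python
def inverse_differences(xs, fs):
    """Inverse differences b_i = phi[x_0, ..., x_i] of Eq. (2).

    At level k, row[idx] holds phi[x_0, ..., x_{k-1}, x_j] with idx = j - (k - 1).
    """
    row = list(fs)
    b = [row[0]]
    for k in range(1, len(xs)):
        row = [(xs[j] - xs[k - 1]) / (row[j - (k - 1)] - row[0])
               for j in range(k, len(xs))]
        b.append(row[0])
    return b


def thiele_eval(xs, b, x):
    """Evaluate T_m(x) of Eq. (2) from the innermost term outwards."""
    val = b[-1]
    for k in range(len(b) - 2, -1, -1):
        val = b[k] + (x - xs[k]) / val
    return val
```

With nodes {0, 1, 2, 3} and f(x) = 1/(1 + x²), the resulting T₃ reproduces f at every node, in line with the interpolation property T_m(x_i) = f(x_i).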
Integrating Newton's interpolating polynomial with Thiele's continued fractions yields Newton–Thiele's rational interpolation [41,42] as follows:

$$NT_{m,n}(x, y) = A_0(y) + (x - x_0)A_1(y) + \cdots + (x - x_0)\cdots(x - x_{m-1})A_m(y), \qquad (7)$$

$$A_i(y) = a_{i,0} + \cfrac{y - y_0}{a_{i,1} + \cfrac{y - y_1}{a_{i,2} + \cdots + \cfrac{y - y_{n-1}}{a_{i,n}}}} \qquad (i = 0, 1, \ldots, m), \qquad (8)$$

where $a_{i,j} = \varphi_{NT}[x_0, \ldots, x_i; y_0, \ldots, y_j]\ (i = 0, 1, \ldots, m;\ j = 0, 1, \ldots, n)$ are the blending differences of $f(x, y)$ at the grid points $\{x_0, \ldots, x_i\} \times \{y_0, \ldots, y_j\}$; we refer to [42] for details. Essentially, Newton–Thiele's rational interpolation is formed jointly by Newton's polynomial in $x$ and Thiele's continued fractions in $y$.

3.2. Polar Newton–Thiele's rational interpolation

Considering the complexity of Thiele-type branched continued fractions for two-variable functions, we adopt Newton–Thiele's rational interpolation to reconstruct images. Through observation we find that the rectangular windows in the cartesian coordinates are not suitable for the arc regions of images, such as those in Fig. 2(b). If we adopt rectangular windows and the Newton–Thiele's rational interpolant in the cartesian coordinates [41,42], these arc regions cannot be processed well. Hence, we propose a novel polar Newton–Thiele's rational interpolation function as follows:
$$R_{m,n}(r, \theta) = T_0(\theta) + (r - r_0)T_1(\theta) + (r - r_0)(r - r_1)T_2(\theta) + \cdots + (r - r_0)(r - r_1)\cdots(r - r_{m-1})T_m(\theta), \qquad (9)$$

where $r\ (r > 0)$ and $\theta\ (0 \le \theta < 2\pi)$ are the radius and the angle of the interpolation kernel, respectively, and $R_{m,n}(r, \theta)$ is the output intensity of the interpolated point $(r, \theta)$. The Thiele's rational interpolants $T_i(\theta)$ by continued fractions are defined as follows:

$$T_i(\theta) = p(r_0, \ldots, r_i; \theta_0) + \cfrac{\theta - \theta_0}{p(r_0, \ldots, r_i; \theta_0, \theta_1) + \cfrac{\theta - \theta_1}{p(r_0, \ldots, r_i; \theta_0, \theta_1, \theta_2) + \cdots + \cfrac{\theta - \theta_{n-1}}{p(r_0, \ldots, r_i; \theta_0, \ldots, \theta_n)}}}, \qquad i = 0, 1, \ldots, m, \qquad (10)$$

where $p(r_0, \ldots, r_i; \theta_0, \ldots, \theta_j)$, $i = 0, 1, \ldots, m$, $j = 0, 1, \ldots, n$, are the blending differences, which can be calculated recursively as follows:

$$p(r_i, \theta_j) = f(r_i \cos\theta_j,\ r_i \sin\theta_j) \qquad (i = 0, 1, \ldots, m;\ j = 0, 1, \ldots, n),$$

$$p(r_i, r_j; \theta_k) = \frac{p(r_j, \theta_k) - p(r_i, \theta_k)}{r_j - r_i},$$

$$p(r_p, \ldots, r_q, r_i, r_j; \theta_k) = \frac{p(r_p, \ldots, r_q, r_j; \theta_k) - p(r_p, \ldots, r_q, r_i; \theta_k)}{r_j - r_i},$$

$$p(r_p, \ldots, r_q; \theta_k, \theta_l) = \frac{\theta_l - \theta_k}{p(r_p, \ldots, r_q; \theta_l) - p(r_p, \ldots, r_q; \theta_k)},$$

$$p(r_p, \ldots, r_q; \theta_r, \ldots, \theta_s, \theta_k, \theta_l) = \frac{\theta_l - \theta_k}{p(r_p, \ldots, r_q; \theta_r, \ldots, \theta_s, \theta_l) - p(r_p, \ldots, r_q; \theta_r, \ldots, \theta_s, \theta_k)}.$$

Then it is not difficult to show that $R_{m,n}(r, \theta)$ determined by (9) and (10) satisfies

$$R_{m,n}(r_i, \theta_j) = f(r_i \cos\theta_j,\ r_i \sin\theta_j), \qquad \forall (r_i, \theta_j) \in \prod\nolimits_{r,\theta}^{m,n}, \qquad (11)$$

where

$$\prod\nolimits_{r,\theta}^{m,n} = \{(r_i, \theta_j)\ |\ i = 0, 1, \ldots, m;\ j = 0, 1, \ldots, n\}.$$

3.3. Newton-type approximation

In order to reconstruct SR images, we interpolate every point's intensity by using Eqs. (9)–(11). However, a denominator might become zero in the course of the calculations. In this case, the corresponding Thiele-type continued fraction $T_i(\theta)$ defined in Eq. (10) should be replaced by the following Newton-type polynomial:

$$T_i(\theta) = p_n(r_0, \ldots, r_i; \theta_0) + p_n(r_0, \ldots, r_i; \theta_0, \theta_1)(\theta - \theta_0) + \cdots + p_n(r_0, \ldots, r_i; \theta_0, \ldots, \theta_n) \prod_{j=0}^{n-1} (\theta - \theta_j), \qquad i = 0, 1, \ldots, m,$$

where

$$p_n(r_0, \ldots, r_i; \theta_0, \ldots, \theta_j) = \frac{p_n(r_0, \ldots, r_i; \theta_0, \ldots, \theta_{j-2}, \theta_j) - p_n(r_0, \ldots, r_i; \theta_0, \ldots, \theta_{j-2}, \theta_{j-1})}{\theta_j - \theta_{j-1}},$$

$$p_n(r_0, \ldots, r_i; \theta_k) = \frac{p_n(r_0, \ldots, r_{i-2}, r_i; \theta_k) - p_n(r_0, \ldots, r_{i-2}, r_{i-1}; \theta_k)}{r_i - r_{i-1}},$$

$$p_n(r_i, \theta_j) = f(r_i \cos\theta_j,\ r_i \sin\theta_j) \qquad (i = 0, 1, \ldots, m;\ j = 0, 1, \ldots, n).$$
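Under our reading of the recursions above, the blending differences can be computed in two passes: Newton divided differences along r for each fixed angle θ_j, then Thiele inverse differences along θ for each order i. The following self-contained Python sketch is our illustration, not the authors' code (all function names are our own); it evaluates R_{m,n}(r, θ) for a 3 × 3 polar grid (m = n = 2):

```python
import math


def divided_differences(rs, vals):
    """Newton divided differences of vals over the radii rs (r-direction of Eq. (9))."""
    d = list(vals)
    out = [d[0]]
    for k in range(1, len(rs)):
        d = [(d[j + 1] - d[j]) / (rs[j + k] - rs[j]) for j in range(len(d) - 1)]
        out.append(d[0])
    return out


def inverse_differences(ts, vals):
    """Thiele inverse differences of vals over the angles ts (theta-direction, Eq. (10))."""
    row = list(vals)
    b = [row[0]]
    for k in range(1, len(ts)):
        row = [(ts[j] - ts[k - 1]) / (row[j - (k - 1)] - row[0])
               for j in range(k, len(ts))]
        b.append(row[0])
    return b


def thiele_eval(ts, b, t):
    """Evaluate a Thiele continued fraction from the innermost term outwards."""
    val = b[-1]
    for k in range(len(b) - 2, -1, -1):
        val = b[k] + (t - ts[k]) / val
    return val


def polar_newton_thiele(rs, ts, f):
    """Return R(r, theta) interpolating f(r cos t, r sin t) on the polar grid rs x ts."""
    grid = [[f(r * math.cos(t), r * math.sin(t)) for t in ts] for r in rs]
    # blending differences: divided differences in r for every angle column ...
    cols = [divided_differences(rs, [grid[i][j] for i in range(len(rs))])
            for j in range(len(ts))]
    d = list(zip(*cols))  # d[i][j] = p(r_0, ..., r_i; theta_j)
    # ... then inverse differences in theta for each order i
    coeff = [inverse_differences(ts, list(d_i)) for d_i in d]

    def R(r, t):
        val, w = 0.0, 1.0
        for i in range(len(rs)):
            val += w * thiele_eval(ts, coeff[i], t)  # Newton form in r, Eq. (9)
            w *= (r - rs[i])
        return val

    return R
```

For a smooth test function with f(r cos θ, r sin θ) = 1/(r + θ) on radii {1, 1.5, 2} and angles {0.1, 0.5, 0.9}, the constructed R reproduces the nodal values, in line with the interpolation property (11).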
3.4. Relationship and error estimation

As mentioned above, we transform the cartesian coordinates to the polar coordinates, and require all the points in the polar coordinates to satisfy Eq. (10). Mathematically, a point with coordinates $(x, y)$ and intensity $f(x, y)$ in the cartesian coordinates is transformed to the corresponding polar coordinates $(r, \theta)$, where

$$r = \sqrt{x^2 + y^2}, \qquad \theta = \arctan(y/x),$$

and the intensity becomes $f(r \cos\theta, r \sin\theta)$ in the polar form.

To further analyze the essence of our method, we describe and prove the error estimation of Newton–Thiele's interpolation in the polar coordinates. Let $F(r, \theta) = f(r \cos\theta, r \sin\theta)$. If $F(r, \theta)$ is an $(m + n + 1)$-times differentiable function defined in $G \supset \prod_{r,\theta}^{m,n}$ and its Newton–Thiele's blending differences exist in $\prod_{r,\theta}^{m,n}$, then there exist $m + 2$ numbers $\xi, \xi_0, \ldots, \xi_m$ in the smallest interval $I[r_0, r_1, \ldots, r_m, r]$ containing $r$ and all support radius coordinates $r_i$, and $m + 1$ numbers $\eta_0, \eta_1, \ldots, \eta_m$ in the smallest interval $I[\theta_0, \theta_1, \ldots, \theta_n, \theta]$ containing $\theta$ and all support angle coordinates $\theta_j$, such that for each pair of arguments $(r, \theta) \in G$:

$$F(r, \theta) - R_{m,n}(r, \theta) = \frac{\omega_{m+1}(r)}{(m+1)!}\,\frac{\partial^{m+1} F(\xi, \theta)}{\partial r^{m+1}} + \frac{\varpi_{n+1}(\theta)}{(n+1)!} \sum_{i=0}^{m} \frac{\omega_i(r)}{i!\, b_i(\theta)}\,\frac{\partial^{n+i+1} [b_i(\eta_i) F(\xi_i, \eta_i)]}{\partial r^i\, \partial\theta^{n+1}},$$
Fig. 2. Polar Newton–Thiele's rational interpolation points and windows in different coordinates. (a) Interpolation points in the polar coordinates; (b) rectangular windows in the cartesian coordinates; and (c) windows in polar coordinates.
Fig. 3. The comparisons with different parameter settings: (a), (b) Δθ = 0.001 and (c), (d) Δr = 0.4.
where

$$T_i(\theta) = p(r_0, \ldots, r_i; \theta_0) + \cfrac{\theta - \theta_0}{p(r_0, \ldots, r_i; \theta_0, \theta_1) + \cdots + \cfrac{\theta - \theta_{n-1}}{p(r_0, \ldots, r_i; \theta_0, \ldots, \theta_n)}} = \frac{a_i(\theta)}{b_i(\theta)}, \qquad i = 0, 1, \ldots, m.$$

Proof. Using Newton's expansion formula gives

$$F(r, \theta) = F(r)(\theta) = F(r_0)(\theta) + (r - r_0)F(r_0, r_1)(\theta) + \cdots + (r - r_0)\cdots(r - r_{m-1})F(r_0, \ldots, r_m)(\theta) + (r - r_0)\cdots(r - r_m)F(r_0, \ldots, r_m, r)(\theta)$$

$$= \sum_{i=0}^{m} \omega_i(r)\, F(r_0, \ldots, r_i)(\theta) + \frac{\omega_{m+1}(r)}{(m+1)!}\,\frac{\partial^{m+1} F(\xi, \theta)}{\partial r^{m+1}},$$

where $\omega_0(r) = 1$, $\omega_i(r) = (r - r_0)(r - r_1)\cdots(r - r_{i-1})$, $i = 1, 2, \ldots, m+1$, and $\xi$ is a number located in the smallest interval $I[r_0, r_1, \ldots, r_m, r]$ containing $r$ and all support radius coordinates $r_i$.

For $i = 0, 1, \ldots, m$, let $c_i(\theta) = b_i(\theta)\,(F(r_0, r_1, \ldots, r_i)(\theta) - T_i(\theta))$. Since

$$T_i(\theta_j) = p(r_0, r_1, \ldots, r_i; \theta_j) = \sum_{k=0}^{i} \frac{p(r_k, \theta_j)}{\omega'_{i+1}(r_k)} = \sum_{k=0}^{i} \frac{F(r_k, \theta_j)}{\omega'_{i+1}(r_k)} = F(r_0, \ldots, r_i)(\theta_j),$$
Table 1
Time cost of two schemes with different image sizes by up-scaling factor of 2. All the records are evaluated in MATLAB 2010b (unit: s).

Time cost          (64)²       (86)²       (128)²       (256)²
Nonlinear scheme   0.169904    0.261127    0.598681     2.222045
CSR scheme         23.807674   42.183716   101.607127   617.221249
Fig. 4. The PSNR value (a) and SSIM index (b) with different parameter ε.
Fig. 5. Reconstructed raccoon images by different methods. (a) LR image; (b), (c) with scaling factor 2: (b) by Dong et al. [22] (PSNR: 16.103932) and (c) by the proposed method (PSNR: 27.395443); (d)–(f) with scaling factor 3: (d) by Dong et al. [22] (PSNR: 21.534390), (e) by the nonlinear interpolation method (PSNR: 25.555892), and (f) by the proposed method (PSNR: 26.359839).
Fig. 6. The detailed display of reconstructed girl images by up-scaling factor of 2. (a) The input image; (b) by bilinear interpolation; (c) by NEDI; (d) by Patch; (e) by LSS; (f) by SDS; (g) by our method; and (h) the original image.
we can get $c_i(\theta_j) = 0$, $i = 0, 1, \ldots, m$, $j = 0, 1, \ldots, n$, which leads to

$$c_i(\theta) = \frac{\varpi_{n+1}(\theta)}{(n+1)!}\,\frac{\partial^{n+1} c_i(\eta_i)}{\partial\theta^{n+1}} = \frac{\varpi_{n+1}(\theta)}{(n+1)!}\,\frac{\partial^{n+1} (b_i(\eta_i) F(r_0, r_1, \ldots, r_i)(\eta_i))}{\partial\theta^{n+1}} = \frac{\varpi_{n+1}(\theta)}{(n+1)!}\,\frac{\partial^{n+i+1} [b_i(\eta_i) F(\xi_i, \eta_i)]}{i!\,\partial r^i\, \partial\theta^{n+1}},$$

where $\varpi_{n+1}(\theta) = (\theta - \theta_0)(\theta - \theta_1)\cdots(\theta - \theta_n)$, $\xi_i$ is a number in the smallest interval $I[r_0, r_1, \ldots, r_i]$ containing $r_0, r_1, \ldots, r_i$, and $\eta_i$ is a number in the smallest interval $I[\theta_0, \theta_1, \ldots, \theta_n, \theta]$ containing $\theta_0, \theta_1, \ldots, \theta_n, \theta$. Since

$$R_{m,n}(r, \theta) = T_0(\theta) + (r - r_0)T_1(\theta) + (r - r_0)(r - r_1)T_2(\theta) + \cdots + (r - r_0)(r - r_1)\cdots(r - r_{m-1})T_m(\theta) = \sum_{i=0}^{m} \omega_i(r)\, T_i(\theta),$$

we have

$$F(r, \theta) - R_{m,n}(r, \theta) = \sum_{i=0}^{m} \omega_i(r)\,[F(r_0, \ldots, r_i)(\theta) - T_i(\theta)] + \frac{\omega_{m+1}(r)}{(m+1)!}\,\frac{\partial^{m+1} F(\xi, \theta)}{\partial r^{m+1}}$$

$$= \frac{\omega_{m+1}(r)}{(m+1)!}\,\frac{\partial^{m+1} F(\xi, \theta)}{\partial r^{m+1}} + \frac{\varpi_{n+1}(\theta)}{(n+1)!} \sum_{i=0}^{m} \frac{\omega_i(r)}{i!\, b_i(\theta)}\,\frac{\partial^{n+i+1} [b_i(\eta_i) F(\xi_i, \eta_i)]}{\partial r^i\, \partial\theta^{n+1}}.$$

This completes the proof. □
4. Centralized sparse representation for fine scale reconstruction

In order to refine the result reconstructed by the polar Newton–Thiele's rational interpolation, we adopt the centralized sparse representation (CSR) scheme [22] for its ability to recover prominent texture. Through plenty of experiments, we find that, when noise is not considered, the results reconstructed by [22] have more prominent texture details, but their overall visual effect is not good. In view of this, we propose a novel framework via the polar Newton–Thiele's rational interpolation kernel in the CSR paradigm, where the reconstruction result by interpolation and the estimation result by CSR are given different weights. By balancing the weight coefficient, we obtain final SR results that inherit both the prominent texture details of [22] and the good visual effect of Newton–Thiele's rational interpolation in the polar coordinates.

Here it is necessary to introduce the reconstruction scheme in [22], which is used to obtain the estimation result with prominent texture details. In [22], the centralized sparse representation (CSR) model exploiting nonlocal image statistics was proposed, the concept of sparse coding noise (SCN) was introduced, and the local sparsity and nonlocal sparsity constraints were unified in one optimization. According to the degraded image model, in order to reconstruct $X$ from $Y$, $Y$ is first sparsely coded over $\phi$ by solving the following minimization problem:

$$\alpha_Y = \arg\min_{\alpha}\ \{\lVert Y - DH\phi\circ\alpha\rVert_2^2 + \lambda \lVert\alpha\rVert_1\}. \qquad (12)$$

The image can then be reconstructed as $\hat{X} = \phi\circ\alpha_Y$. The sparse coding noise (SCN) is defined as $v_\alpha = \alpha_Y - \alpha_X$, which determines the reconstructed image quality of $\hat{X}$. In order to improve the accuracy of $\alpha_Y$ and
Fig. 7. The edge display of reconstructed parrot image patches by up-scaling factor of 4. (a) By bilinear interpolation; (b) by Patch; (c) by LSS; (d) by SDS; (e) by our method; and (f) the original image.
suppress the SCN, the CSR model was proposed:

$$\alpha_Y = \arg\min_{\alpha}\ \{\lVert Y - DH\phi\circ\alpha\rVert_2^2 + \lambda \lVert\alpha\rVert_1 + \gamma \lVert\alpha - E[\alpha]\rVert_{\ell_p}\}. \qquad (13)$$

Let $\theta = \alpha - E[\alpha]$. According to the characteristics of the additive noise, the SCN can be well characterized by the Laplacian distribution, and then formula (13) can be expressed as

$$\alpha_Y = \arg\min_{\alpha}\ \Big\{\lVert Y - DH\phi\circ\alpha\rVert_2^2 + \sum_i \lambda_i \lVert\alpha_i\rVert_1 + \sum_i \gamma_i \lVert\theta_i\rVert_1\Big\}, \qquad (14)$$

where

$$\lambda_i = \frac{2\sqrt{2}\,\sigma_n^2}{\sigma_i}, \qquad \gamma_i = \frac{2\sqrt{2}\,\sigma_n^2}{\delta_i}. \qquad (15)$$

In the process of selecting the dictionary, patches similar to a given patch are collected, and PCA is applied to each cluster of similar patches to learn a local dictionary. Through iteration, the dictionary $\phi$ and the regularization parameters ($\lambda$ and $\gamma$) are updated. By combining formula (15) with (14), $\alpha_Y$ can be calculated, and the estimation $\hat{X}$ of the original image can be obtained.
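For intuition about the $\ell_1$ terms in (12)–(14): when $D = H = I$ and the dictionary is orthonormal, the minimizer of (12) is a coordinate-wise soft-thresholding of the transform coefficients $\phi^T Y$. The sketch below is our simplified Python illustration, not the authors' implementation; in particular, `centralized_step` applies the two shrinkages of (14) sequentially, whereas the paper solves both penalties jointly and iteratively.

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of t * |.|."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0


def sparse_code_orthonormal(coeffs, lam):
    """Solve min_a ||y - Phi a||_2^2 + lam * ||a||_1 for an orthonormal Phi,
    where coeffs = Phi^T y; the solution is coordinate-wise soft-thresholding."""
    return [soft(c, lam / 2.0) for c in coeffs]


def centralized_step(coeffs, means, lam, gamma):
    """Illustrative CSR-style shrinkage (cf. Eq. (14)): shrink toward zero by
    lam/2 (local sparsity) and toward the nonlocal mean E[alpha] by gamma/2
    (centralized term), applied in sequence as a simplification."""
    out = []
    for c, m in zip(coeffs, means):
        c = soft(c, lam / 2.0)             # lam * ||alpha||_1
        c = m + soft(c - m, gamma / 2.0)   # gamma * ||alpha - E[alpha]||_1
        out.append(c)
    return out
```

The second shrinkage pulls each coefficient toward its nonlocal mean $E[\alpha]$, which is what suppresses the sparse coding noise.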
5. Implementation and experimental analysis

5.1. Algorithm implementation

According to the characteristics of the polar coordinates, in order to better construct the Newton–Thiele's rational interpolation, we adopt 9 interpolation points in our algorithm, as shown in Fig. 2. In Fig. 2(a), the solid dot denotes the intensity we want to get, and its position in the polar coordinates is $(r, \theta)$, where $r$ is the radius and $\theta$ the angle. We extend its radius to $r + \Delta r$ and $r - \Delta r$ along the $r$ direction, and its angle to $\theta + \Delta\theta$ and $\theta - \Delta\theta$ along the $\theta$ direction, respectively. At this point we have 5 interpolation points, including the point itself. In a similar way, we extend the attained points $(r, \theta - \Delta\theta)$ and $(r, \theta + \Delta\theta)$ along the $r$ direction, and finally we get 9 interpolation points.
Fig. 8. The magnification of butterfly images with a factor of 4 by different SR approaches. (a) The input image; (b) by bilinear interpolation; (c) by NEDI; (d) by Patch; (e) by LSS; (f) by SDS; (g) by our method; and (h) the original image.
With the 9 points, we can construct the Newton–Thiele's rational interpolation surface by the interpolation function in the polar coordinates, and then up-scale the input images. The detailed process is summarized in Algorithm 1.

Algorithm 1. Reconstructing image via Newton–Thiele's rational interpolation.
Input: Y: a low-resolution image; k: the up-scaling factor.
Output: I1: the reconstructed image.
1: [m, n] = size(Y); M = (int)(m*k); N = (int)(n*k); I1 = zeros(M, N)  % Initialization
2: for i1 = 0 : M-1, j1 = 0 : N-1
3:   i = i1/k; j = j1/k  % Compute the corresponding position (i, j) in the input image
4:   r = sqrt(i^2 + j^2); θ = arctan(j/i)  % Corresponding position (r, θ) in the polar coordinates
5:   Form the 9 interpolation points (r-Δr, θ-Δθ), (r, θ-Δθ), (r+Δr, θ-Δθ), (r-Δr, θ), (r, θ), (r+Δr, θ), (r-Δr, θ+Δθ), (r, θ+Δθ), (r+Δr, θ+Δθ)  % Extend point (r, θ)
6:   Compute R_{m,n}(r, θ), which is the pixel value of point (i1, j1), by using the formulas (9)–(11) and the 9 interpolation points' information
7: end for
8: return I1
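The coordinate bookkeeping of steps 3–5 can be sketched as follows (our Python illustration; the name `stencil` is ours, and we use `atan2` rather than a literal arctan(j/i) so the column i = 0 does not divide by zero):

```python
import math


def stencil(i1, j1, k, dr, dt):
    """Map output pixel (i1, j1) to LR coordinates, then to polar coordinates,
    and return the 9 interpolation points of Algorithm 1, step 5."""
    i, j = i1 / k, j1 / k                # step 3: position in the input image
    r = math.hypot(i, j)                 # step 4: radius
    theta = math.atan2(j, i)             # step 4: angle (atan2 handles i == 0)
    points = [(r + a * dr, theta + b * dt)
              for b in (-1, 0, 1) for a in (-1, 0, 1)]  # step 5: 9-point window
    return points, (r, theta)
```

For example, output pixel (3, 3) with k = 2 maps to LR position (1.5, 1.5), i.e. r = 1.5√2 ≈ 2.121 and θ = π/4; the values Δr = 0.4 and Δθ = 0.001 echo the parameter study of Fig. 3.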
In order to refine the reconstructed image I1 and obtain a better result, we add the CSR scheme to the polar Newton–Thiele's rational interpolation. That is, the result of Algorithm 1 is fed into Algorithm 2, and the image from Algorithm 1 and the estimation image from Algorithm 2 are given different weights to form the final SR result; in other words, Algorithm 1 produces an initial magnified image and Algorithm 2 optimizes it. Algorithm 2 is summarized as follows.
Algorithm 2. Refining image via CSR.
Input: Y: a low-resolution image; L: iteration number; I1: the reconstructed image.
Output: new: the SR result.
1: Set an initial estimation X̂  % Initialization
2: for i = 1 : L
3:   Update the dictionary ϕ
4:   Update the regularization parameters (λ and γ) by using formula (15)
5:   Update α_Y by using formulas (15) and (14)
6: end for
7: Compute the estimation image X̂ = ϕ∘α_Y
8: Compute the SR result: new = (1-ε)*I1 + ε*X̂  (0 ≤ ε ≤ 1)
9: return new
5.2. Analysis of the windows in different coordinates

Shown in Fig. 2 is a 3 × 3 patch of an image whose intensities are 0 or 255. We aim to get the intensity of the solid dot (1.3, 1.3), whose true pixel value is 0. If we adopt a conventional rectangular window, say 3 × 3, to interpolate, as shown in Fig. 2(b), only 6 points are located in the black region and the other points are located in the white region. The interpolation result by Newton–Thiele's rational interpolation in the cartesian coordinates [27–29] is 18.3, which does not conform to the true pixel value 0. However, if we adopt the interpolation points of our algorithm, as shown in Fig. 2(c), all points are located in the black region. In this case, the corresponding coordinates of the solid dot in the polar coordinates are (1.84, 45°), with Δr = 0.16 and Δθ = 10°. The interpolation result of our algorithm is 0, which is exactly the true value. This example shows that our algorithm is more flexible than Newton–Thiele's rational
Fig. 9. The texture patches display of reconstructed house images by up-scaling factor of 2. (a) The input image; (b) by bilinear interpolation; (c) by NEDI; (d) by Patch; (e) by LSS; (f) by SDS; (g) by our method; and (h) the original image patch.
interpolation in the cartesian coordinates and more suitable for the arc regions of images.
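The coordinates quoted in this example are easy to verify numerically (a quick Python check of the paper's numbers, written by us):

```python
import math

# The solid dot of Fig. 2 lies at cartesian coordinates (1.3, 1.3).
x = y = 1.3
r = math.hypot(x, y)                    # 1.3 * sqrt(2) ~ 1.8385, quoted as 1.84
theta = math.degrees(math.atan2(y, x))  # 45 degrees: the dot lies on the diagonal
```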
5.3. Computational complexity and implementation time

In Algorithm 1, the whole algorithm requires a double loop, so the computational complexity of the loop is O(mn), where m and n denote the sizes of the reconstructed image. In step 1(4), the computational complexity of evaluating R_{m,n}(r, θ) is O(1 + rθ + r²θ²). In Algorithm 2, the most complicated computation is the renewal of the dictionary. In step 2(1), the computational complexity of assigning each patch a PCA dictionary is O(L(b²−1)), where b × b denotes the size of the patch, and L is the number of pixel points that meet the conditions, excluding the pixel points of the b × b patch itself.

To compare the cost of the two algorithms, we record the time for specific sizes of the house image. All experiments are run on a PC with an Intel(R) Core(TM) i3-4130 3.4 GHz CPU, 8 GB RAM, an NVIDIA graphics card with 1 GB memory, and MATLAB 2010b. Table 1 shows the records in detail. We find that the nonlinear scheme is more efficient than the CSR scheme, especially as the image size increases.

5.4. Experimental results and analysis
In this section, we first analyze the parameter settings, and then demonstrate the effectiveness and superiority of our method through extensive experiments. In our experiments, we mostly magnify the input images, and noise is not considered. The downscaled original images are used as the input images. For color images and video sequences, we apply our algorithm to each of the red, green, and blue channels respectively. In order to demonstrate the superiority and robustness of our algorithm, we use a large number of images and video sequences, and choose five state-of-the-art SR algorithms for comparison: the schemes of [23] (LSS for short), [21] (NEDI for short), [15] (Patch for short), [17] (SDS for short), and the conventional bilinear interpolation method. In the experiments, the parameters of [17] are set as k = 16, γ = 0.0894, η = 0.2, where k is the number of iterations. The
Fig. 10. The intensity distribution of a column from Fig. 9. (a) By bilinear interpolation; (b) by NEDI; (c) by Patch; (d) by LSS; (e) by SDS; and (f) by our method.

Table 2
The PSNR of Figs. 6–9.

PSNR     Bilinear    NEDI        Patch       LSS         SDS         Ours
Fig. 6   26.504846   30.051532   29.276496   31.609562   28.304274   32.660093
Fig. 7   25.964764   26.593763   25.614543   27.362397   25.233517   28.687004
Fig. 8   17.767077   18.154271   17.855995   18.833057   19.134468   19.664951
Fig. 9   29.312666   29.555886   29.453694   30.230676   27.311428   31.176653
Table 3
The SSIM of Figs. 6–9.

SSIM     Bilinear   NEDI     Patch    LSS      SDS      Ours
Fig. 6   0.7815     0.7821   0.7773   0.8320   0.7778   0.8406
Fig. 7   0.7019     0.7125   0.6954   0.7592   0.6810   0.7782
Fig. 8   0.6071     0.6119   0.6278   0.6533   0.6791   0.6876
Fig. 9   0.8561     0.8537   0.8609   0.8613   0.8387   0.8743
parameters of [15] are set as P = 5×5, α = k, where k is the up-scaling factor.

In order to select suitable parameters, we measure the results reconstructed by our algorithm and record the PSNR value and SSIM index under different settings, as shown in Fig. 3. From the comparisons in Fig. 3 we find that the PSNR and SSIM reach their maxima when Δr is 0.4 and Δθ is 0.001. The parameter ε of the proposed method can be freely selected, and from the comparisons in Fig. 4 we find that the PSNR and SSIM are maximal when ε is about 0.1. This can also be explained theoretically: if ε is larger, the details are overly emphasized and the results look unnatural, which means that the reconstruction then depends mainly on the nonlinear interpolation. The sparse representation has achieved good results in both edge and textural regions. However, in the
Fig. 11. The SR video sequences by an up-scaling factor of 2. (a) The input video; (b) by bilinear; (c) by NEDI; (d) by Patch; (e) by LSS; (f) by SDS; (g) by our method; and (h) the original video.
Fig. 12. The PSNR comparisons of each frame by different SR approaches. (a) 2× upsampling; (b) 3× upsampling; and (c) 4× upsampling.
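Per-frame curves like those in Fig. 12 are obtained by scoring every reconstructed frame against its original with PSNR. A minimal sketch follows; the two tiny "frames" below are made-up stand-ins, not data from the car sequence:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

# Synthetic stand-ins: each "frame" is a flat list of 8-bit pixel values.
originals = [[100, 120, 140, 160], [90, 110, 130, 150]]
reconstructed = [[102, 118, 141, 160], [90, 112, 129, 151]]

# One PSNR value per frame, as plotted for each method in Fig. 12.
curve = [psnr(o, r) for o, r in zip(originals, reconstructed)]
print([round(v, 2) for v in curve])
```

Plotting such a curve for each competing method over all frames of a sequence yields exactly the kind of comparison shown in Fig. 12.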
experiments, we find that if noise is not considered, the results reconstructed by [22] look unnatural due to excessive texture details.

From Fig. 5, we can clearly see the superiority of the proposed method. We magnify the raccoon image with scale factors of 2 and 3, respectively. Only the image reconstructed by [22] has better luminance, but it looks unnatural because of its excessive texture details, which is especially obvious around the beard and hair. The result of the Newton–Thiele's rational interpolation algorithm has a good visual effect; however, its PSNR is lower than that of our mixture model. The result reconstructed by our model not only has a good visual effect but is also closer to the original image, with the details better preserved.

We now compare our method with the other methods by both subjective and objective evaluation. The results reconstructed by the different methods are shown in Figs. 6–9. The property of prominent texture details is shown in Figs. 6 and 9, and the better handling of arc regions and edges is shown in Fig. 7. The SR color images are shown in Figs. 6–8 and the SR gray images in Fig. 9. In order to illustrate the prominence of texture regions produced by our method, we display the intensities of a column selected from Fig. 9. As shown in Fig. 10, the intensities produced by the different methods are compared with those of the original image patch: the closer the values and frequency of the distributed intensities are to those of the original patch, the closer the reconstructed image is to the original. From Fig. 10 we find that the intensity distribution of our method is closest to that of the original image patch. Listed in Tables 2 and 3 are the PSNR and SSIM results of the images reconstructed by the different methods, where the maximum values are marked in italic type. Through visual comparisons of Figs.
6–9, we find that the results of bilinear interpolation are not very good because texture regions are smoothed. With the NEDI method, the boundaries are processed well, but the details in the reconstructed results are insufficient. With the LSS method, the visual effect is not good because the details are overly smoothed. With the SDS method, the results look unnatural because excessive details are introduced. The results of the Patch method and our method have better visual effects, and the color produced by our method is more natural than that of the Patch method. Through
objective comparisons of Tables 2 and 3, we see that our PSNRs and SSIMs are all higher than those of the other methods.

In the experiments, we also apply our algorithm to video sequences, and show comparisons by both subjective and objective evaluation. The videos in our experiments are downloaded from the Internet.¹ We perform many experiments on video sequences, and here we show only one of them. We take the 1st, 10th, 20th, and 22nd frames of the reconstructed car video sequence, as shown in Fig. 11. The PSNR comparisons of each frame of the reconstructed car video sequence by the different approaches are shown in Fig. 12. In summary, applied to image and video SR, our method not only has a good visual effect, but also preserves details and texture regions very well. The PSNR and SSIM comparisons also show that our method outperforms the others.

6. Discussions and conclusions

A novel SR reconstruction method using nonlinear rational interpolation and sparse representation was presented. Our approach is based on the observation that arc regions frequently occur in images and that conventional rectangular windows are not suitable for such regions. Inspired by the applications of Newton–Thiele's rational interpolation, we proposed a novel Newton–Thiele's rational interpolation formula in the polar coordinates to reconstruct images. In order to obtain better reconstructed results, we integrated the centralized sparse representation (CSR) scheme [22] into the nonlinear interpolation by continued fractions. Our method has been tested on a series of images and videos, and the results show that it performs significantly better than the other SR approaches. Our results depend heavily on the parameter ε, and a larger weight ε leads to a smaller PSNR. Because the parameters r and θ are constants, the window size in the polar coordinates remains unchanged, which is not flexible.
We believe that the use of Newton–Thiele's rational interpolation in the polar coordinates can better preserve

¹ http://www.wisdom.weizmann.ac.il/vision/SingleVideoSR.html
the edges of images while adaptively selecting the parameters r and θ, and we will pursue this interesting work in our future research.

Acknowledgments

This work is supported by the NSFC-Guangdong Joint Foundation (Key Project) under Grant no. U1135003 and the National Natural Science Foundation of China under Grant nos. 61070227 and 61472466.

References

[1] Jie Ren, Jiaying Liu, Zongming Guo, Context-aware sparse decomposition for image denoising and super-resolution, IEEE Trans. Image Process. 22 (4) (2013) 1456–1469.
[2] Haichao Zhang, Yanning Zhang, Haisen Li, Thomas S. Huang, Generative Bayesian image super resolution with natural image prior, IEEE Trans. Image Process. 21 (9) (2012) 4054–4067.
[3] Xinbo Gao, Kaibing Zhang, Dacheng Tao, Xuelong Li, Image super-resolution with sparse neighbor embedding, IEEE Trans. Image Process. 21 (7) (2012) 3194–3205.
[4] Jianchao Yang, Zhe Lin, Scott Cohen, Fast image super-resolution based on in-place example regression, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 1059–1066, http://dx.doi.org/10.1109/CVPR.2013.141.
[5] Yi Tang, Yuan Yuan, Pingkun Yan, Xuelong Li, Greedy regression in sparse coding space for single-image super-resolution, J. Vis. Commun. Image Represent. 24 (2) (2013) 148–159.
[6] Jianchao Yang, John Wright, Thomas Huang, Yi Ma, Image super-resolution as sparse representation of raw image patches, in: IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–8, http://dx.doi.org/10.1109/CVPR.2008.4587647.
[7] Jianchao Yang, John Wright, Thomas Huang, Yi Ma, Image super-resolution via sparse representation, IEEE Trans. Image Process. 19 (11) (2010) 2861–2873.
[8] Jianchao Yang, Zhaowen Wang, Zhe Lin, Scott Cohen, Thomas Huang, Coupled dictionary training for image super-resolution, IEEE Trans. Image Process. 21 (8) (2012) 3467–3478.
[9] Fei Zhou, Wenming Yang, Qingmin Liao, Interpolation-based image super-resolution using multisurface fitting, IEEE Trans. Image Process. 21 (7) (2012) 3312–3318.
[10] Xinbo Gao, Kaibing Zhang, Dacheng Tao, Xuelong Li, Joint learning for single-image super-resolution via a coupled constraint, IEEE Trans. Image Process. 21 (2) (2012) 469–480.
[11] Jian Sun, Jian Sun, Zongben Xu, Heung-Yeung Shum, Image super-resolution using gradient profile prior, in: IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[12] Huanjing Yue, Xiaoyan Sun, Jingyu Yang, Feng Wu, Landmark image super-resolution by retrieving web images, IEEE Trans. Image Process. 22 (12) (2013) 4865–4878.
[13] Heng Su, Liang Tang, Ying Wu, Daniel Tretter, Jie Zhou, Spatially adaptive block-based super-resolution, IEEE Trans. Image Process. 21 (3) (2012) 1031–1045.
[14] Yu-Wing Tai, Shuaicheng Liu, Michael S. Brown, Stephen Lin, Super resolution using edge prior and single image detail synthesis, in: IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[15] Daniel Glasner, Shai Bagon, Michal Irani, Super-resolution from a single image, in: IEEE 12th International Conference on Computer Vision, 2009.
[16] Hong Chang, Dit-Yan Yeung, Yimin Xiong, Super-resolution through neighbor embedding, in: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004.
[17] Weisheng Dong, Lei Zhang, Guangming Shi, Xiaolin Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization, IEEE Trans. Image Process. 20 (7) (2011) 1838–1857.
[18] Xuelong Li, Yanting Hu, Xinbo Gao, Dacheng Tao, Beijia Ning, A multi-frame image super-resolution method, Signal Process. 90 (2) (2010) 405–414.
[19] Jian Sun, Jiejie Zhu, Marshall F. Tappen, Context-constrained hallucination for image super-resolution, in: IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[20] J. Sun, N.N. Zheng, H. Tao, H. Shum, Image hallucination with primal sketch priors, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 2003, pp. 729–736.
[21] Xin Li, Michael T. Orchard, New edge-directed interpolation, IEEE Trans. Image Process. 10 (10) (2001) 1521–1527.
[22] Weisheng Dong, Lei Zhang, Guangming Shi, Centralized sparse representation for image restoration, in: IEEE International Conference on Computer Vision, 2011.
[23] Gilad Freedman, Raanan Fattal, Image and video upscaling from local self-examples, ACM Trans. Graphics 30 (2) (2011) 1–11.
[24] Yan Liang, Pong C. Yuen, Jian-Huang Lai, Image super-resolution by textural context constrained visual vocabulary, Signal Process.: Image Commun. 27 (2012) 1096–1108.
[25] Jian Lu, HongRan Zhang, Yi Sun, Video super resolution based on non-local regularization and reliable motion estimation, Signal Process.: Image Commun. 29 (4) (2014) 514–529.
[26] Weisheng Dong, Guangming Shi, Xin Li, Lei Zhang, Xiaolin Wu, Image reconstruction with locally adaptive sparsity and nonlocal robust regularization, Signal Process.: Image Commun. 27 (2012) 1109–1122.
[27] Dinh-Hoan Trinh, Marie Luong, Francoise Dibos, Jean-Marie Rocchisani, Canh-Duong Pham, Truong Q. Nguyen, Novel example-based method for super-resolution and denoising of medical images, IEEE Trans. Image Process. 23 (4) (2014) 1882–1895.
[28] Ce Liu, Deqing Sun, On Bayesian adaptive video super resolution, IEEE Trans. Pattern Anal. Mach. Intell. 36 (2) (2014) 346–360.
[29] Takayuki Katsuki, Akira Torii, Masato Inoue, Posterior-mean super-resolution with a causal Gaussian Markov random field prior, IEEE Trans. Image Process. 21 (7) (2012) 3182–3193.
[30] Qiangqiang Yuan, Liangpei Zhang, Huanfeng Shen, Regional spatially adaptive total variation super-resolution with spatial information filtering and clustering, IEEE Trans. Image Process. 22 (6) (2013) 2327–2342.
[31] Kaibing Zhang, Xinbo Gao, Dacheng Tao, Xuelong Li, Single image super-resolution with non-local means and steering kernel regression, IEEE Trans. Image Process. 21 (11) (2012) 4544–4556.
[32] Pulak Purkait, Bhabatosh Chanda, Super resolution image reconstruction through Bregman iteration using morphologic regularization, IEEE Trans. Image Process. 21 (9) (2012) 4029–4039.
[33] Heng Su, Ying Wu, Jie Zhou, Super-resolution without dense flow, IEEE Trans. Image Process. 21 (4) (2012) 1782–1795.
[34] Sven Wanner, Bastian Goldluecke, Variational light field analysis for disparity estimation and super-resolution, IEEE Trans. Pattern Anal. Mach. Intell. 36 (3) (2014) 606–619.
[35] Esmaeil Faramarzi, Dinesh Rajan, Marc P. Christensen, Unified blind method for multi-image super-resolution and single/multi-image blur deconvolution, IEEE Trans. Image Process. 22 (6) (2013) 2101–2114.
[36] H.S. Wall, The Analytic Theory of Continued Fractions, Van Nostrand, Princeton, NJ, 1948.
[37] Khristina I. Kuchminskaya, Wojciech Siemaszko, Rational approximation and interpolation of functions by branched continued fractions, in: J. Gilewicz, M. Pindor, W. Siemaszko (Eds.), Rational Approximation and Its Applications in Mathematics and Physics, Lecture Notes in Mathematics, vol. 23, Springer, Berlin, 1987, pp. 24–40.
[38] Wojciech Siemaszko, Thiele-type branched continued fractions for two-variable functions, J. Comput. Appl. Math. 9 (1983) 137–153.
[39] Annie Cuyt, Brigitte Verdonk, Multivariate reciprocal differences for branched Thiele continued fraction expansions, J. Comput. Appl. Math. 21 (1988) 145–160.
[40] Wojciech Siemaszko, Branched continued fractions for double power series, J. Comput. Appl. Math. 6 (1980) 121–125.
[41] Jieqing Tan, Shuo Tang, Composite schemes for multivariate blending rational interpolation, J. Comput. Appl. Math. 144 (1–2) (2002) 263–275.
[42] Jieqing Tan, Yi Fang, Newton–Thiele's rational interpolants, Numer. Algorithms 24 (2000) 141–157.
[43] M. Bertero, P. Boccacci, Introduction to Inverse Problems in Imaging, IOP, Bristol, UK, 1998.
[44] Junjun Jiang, Ruimin Hu, Zhen Han, Tao Lu, Efficient single image super-resolution via graph-constrained least squares regression, Multimed. Tools Appl. 72 (3) (2014) 2573–2596.