Accepted Manuscript — Information Sciences

A new multiframe super-resolution based on nonlinear registration and a spatially weighted regularization

Amine Laghrib, Aissam Hadri, Abdelilah Hakim, Said Raghay

PII: S0020-0255(19)30343-3 · DOI: https://doi.org/10.1016/j.ins.2019.04.029 · Reference: INS 14451

Received: 18 May 2017 · Revised: 12 April 2019 · Accepted: 17 April 2019
A new multiframe super-resolution based on nonlinear registration and a spatially weighted regularization
Amine Laghrib (1,*), Aissam Hadri (1), Abdelilah Hakim (2), Said Raghay (2)
(1) LMA, FST Béni-Mellal, Université Sultan Moulay Slimane, Morocco.
(2) LAMAI, FST Marrakech, Université Cadi Ayyad, Morocco.
(*) Corresponding author: [email protected]
Abstract
Increasing the size of an image is an extensively studied problem in image processing. In recent years, many studies have been devoted to image super-resolution (SR). Since super-resolution techniques depend on the precision of the motion estimation, we investigate the use of a nonlinear elastic (so-called hyperelastic) image registration. We also propose a spatially weighted second order SR algorithm, which takes into account the distribution of spatial information in different image areas. The hyperelastic image registration is used to handle subpixel errors between the unregistered images, while the spatially weighted second order regularization increases the robustness of the restoration step with respect to the degradation factors (blur and noise). As a result, the registration model is more efficient and easier to implement, and the proposed SR algorithm reduces artifacts in flat regions of the image while preserving sharp edges. The efficiency of the proposed model is demonstrated on simulated and real tests, together with comparisons against other competitive SR methods.
Keywords: Super-resolution, Spatially weighted, Hyperelastic registration, Variational regularization.
1. Introduction

Recently, high-resolution (HR) images have been required in many areas, and super-resolution (SR) is considered an effective technique to improve the spatial resolution of an image. The resolution of an image plays an important role in image processing: the extraction of information from a given image relies on its quality, and the sharper the image, the more information can be extracted. If the image is degraded, its interpretation becomes difficult and misleading. We are therefore forced to zoom in to enlarge regions of the image. Unfortunately, if we zoom beyond its resolution, the interpolation process produces a blurred, low-resolution (LR) result; it is, in fact, tricky to create information that does not exist in the original image. An alternative way to recover this information is the use of super-resolution techniques [42, 45, 17]. The aim of multiframe super-resolution is to build a high-resolution (HR) image from a sequence of LR images that are degraded by noise, blur and decimation [34]. SR techniques are used in various fields, such as medical diagnostics [22], satellite imaging [8], face recognition [26], network analysis [2] and video surveillance [47]. Moreover, the public and industrial demand for converting low-resolution and old digital videos into high definition (HD) increases day after day.
1.1. Related works

The first step of multiframe SR is to use motion information to register all LR images in a common coordinate frame [31]. This stage is the key to the success of multiframe SR algorithms: without a good estimation of the motion between the LR frames, super-resolution becomes very limited [7]. To overcome the difficulties of the registration step, many approaches have been proposed, including projection onto convex sets (POCS) [25], iterative back projection (IBP) [9, 18], and optical flow [6]. However, these methods suffer from the non-uniqueness of solutions and from the fact that they consider only translational and rotational motion between LR frames. More robust approaches were then proposed [24, 39] to tackle misregistration errors. One of the most successful is the nonlocal-means method proposed by Protter et al. [38], which handles non-global motion. To treat nonparametric deformations between the LR images, an elastic registration was proposed in [29], but it is restricted to small deformations and does not handle larger ones. Other techniques use the maximum a posteriori (MAP) framework [41] with an accurate spatial-domain observation model to reconstruct the HR image. More recently, other methods have been proposed [5, 27], but they still suffer from misregistration errors.

Afterwards, the selection of a regularization function in the deblurring and denoising process (the last step of the SR technique) is very important to avoid various artifacts; great care must therefore be taken in the choice of this function. There are numerous super-resolution approaches that rely on a regularization framework [4, 48, 28]. Even if these methods give promising results, they have some shortcomings, among them the staircasing effect and the appearance of blur in flat regions. Although the noise and blur in smooth regions can be reduced by adjusting the regularization parameter, the texture information is then blurred. Furthermore, deep learning has been widely used to address the single-image super-resolution task. One of the best-known works is that of Dong et al. [16], who introduced the CNN-based SR method (SRCNN), organized in three convolutional steps: patch extraction, non-linear mapping, and reconstruction. SRCNN has shown attractive results; however, the need for large sets of HR training images remains a limiting issue. Dong et al. then introduced an accelerated version of SRCNN called FSRCNN [15], whose aim is to incorporate the upsampling operation into the network. As a result, the restoration quality is good with a remarkably short execution time; in contrast, this approach remains relatively shallow. Other approaches have since been developed to fix the issues of the previous one, see [23] for more details. Recently, Zhang et al. [46] used further advances in deep learning to propose a related network (DnCNN), which has shown great success. More recently, Chen et al. [12] investigated a deep convolutional network-based SR framework for compressed images (CISRDCNN), which gives promising results.
1.2. The contributions

The main goal of this paper is to increase super-resolution performance with respect to misregistration errors, blurring effects and noise. Inspired by the efficiency of nonlinear elastic regularization in the image registration problem [44] and by the combination of first and second order regularization in image denoising [37], we propose a new SR method that avoids, as much as possible, misregistration errors and annoying artifacts. First, to tackle misregistration errors, we propose a hyperelastic registration, which handles large deformations between the LR images better than the elastic registration proposed in [29] and the diffusion registration of [27]. Second, to address the ill-posedness of the restoration step, we propose a spatially weighted second order regularization based on the second derivative (difference curvature) indicator proposed by Chen et al. [13]. This indicator can efficiently distinguish edges from flat areas, which preserves the texture of the restored image and reduces the smoothing effect near boundaries.
The rest of the paper is organized as follows. In Section 2, we present the general super-resolution problem. Section 3 is devoted to the nonlinear registration part of the super-resolution algorithm. In Section 4, we present the proposed SR restoration algorithm, and in Section 5 we report synthetic and real results and compare our approach with some competitive SR methods from the literature. We end the paper with a conclusion.

2. Problem formulation
As discussed in the introduction, the performance of super-resolution approaches decreases in applications involving non-parametric transformations. There are, in fact, two formulations of the super-resolution problem: the classical linear model, which is intensively used when the transformations are parametric, and a modified super-resolution model based on an optical flow formulation, represented by a dense flow field that describes a deformation for every pixel [20]. In this work, we consider the classical SR model with separate steps, in order to better perform the registration process. Taking into account the various degradation factors, the unknown HR image $X$ of size $[r^2N^2 \times 1]$ is related to the captured LR images $Y_k$ (each represented by a column vector of size $[N^2 \times 1]$) by the formulation
$$Y_k = D H F_k X + V_k, \qquad \forall k = 1, 2, \ldots, n, \qquad (1)$$
where $H$ is the blurring operator of size $[r^2N^2 \times r^2N^2]$, $D$ represents the decimation matrix of size $[N^2 \times r^2N^2]$, $F_k$ are the geometric warp matrices of size $[r^2N^2 \times r^2N^2]$, representing a transformation that differs from frame to frame, and $V_k$ is a vector of size $[N^2 \times 1]$ representing the additive noise. We suppose that the noise is independent and identically distributed (i.i.d.), following a Gaussian distribution with standard deviation $\sigma$.

One of the difficulties in applying the SR process is how to compute the warp matrices $F_k$ in the super-resolution model (1), especially when the deformations are non-parametric and generally unknown. Trying to simultaneously estimate the warps and super-resolve the given sequence is very complicated and computationally demanding. We therefore first estimate the warp matrices using a hyperelastic registration algorithm, and in a second step we use the obtained motion to compute the HR image $X$. Given a sequence of LR images $Y_k$, $k = 1, \ldots, n$, the main steps of the SR process are enumerated as follows:
1. Interpolate one LR image by an up-scaling factor $r$ using bilinear interpolation to obtain the initial reference image $X_0$.
2. Compute the warp matrices $F_k$ between the current reference image and the $(n-1)/2$ previous and $(n-1)/2$ next LR frames.
3. Register the $(n-1)/2$ previous and next images with respect to the reference image $X_0$ using the displacements estimated in the previous step.
4. Estimate the super-resolution image $X$, which is considered as the restoration step of the super-resolution model, through the resolution of the convex minimization problem described in (2):
$$\hat{X} = \arg\min_X \left\{ \sum_{k=1}^{n} \|D F_k H X - Y_k\|_2^2 + \delta R(X) \right\}, \qquad (2)$$
where $\delta$ is the regularization parameter and $R$ is a regularization term that will be defined in the restoration step (Section 4).
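To make the observation model (1) concrete, the following Python sketch simulates one LR frame from an HR image. It is only an illustration under stated assumptions: the warp $F_k$ is replaced by a simple translation, and the blur width, decimation factor and noise level are arbitrary choices, not values prescribed by the model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def degrade(hr, dx, dy, blur_sigma=2.0, r=2, noise_sigma=0.05, rng=None):
    """Simulate one LR frame Y_k = D H F_k X + V_k from an HR image X.

    F_k is approximated here by a translational shift (dx, dy); the paper
    estimates general nonparametric warps with hyperelastic registration.
    """
    rng = np.random.default_rng() if rng is None else rng
    warped = shift(hr, (dy, dx), order=1, mode="nearest")   # F_k X
    blurred = gaussian_filter(warped, sigma=blur_sigma)     # H F_k X
    decimated = blurred[::r, ::r]                           # D H F_k X
    return decimated + rng.normal(0.0, noise_sigma, decimated.shape)  # + V_k

# Example: generate n = 5 LR frames with random sub-pixel shifts.
rng = np.random.default_rng(0)
hr = rng.random((128, 128))
frames = [degrade(hr, *rng.uniform(-1, 1, size=2), rng=rng) for _ in range(5)]
```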
First of all, we must carefully define the warp operators $F_k$; this constitutes the registration step.
3. The construction of the warp matrix Fk
We compute the warp matrix $F_k$ for each frame through a hyperelastic registration algorithm, after a transition from discrete to continuous images using 2-linear (bilinear) interpolation. Let us denote by $Y_k(x)$ the intensity of the $k$th image at coordinate $x \in \Omega \subset \mathbb{R}^2$, where $\Omega$ is the image domain. The expression of the continuous image $Y_k(x)$ is then given by
$$Y_k(x) = \sum_{\kappa \in \{0,1\}^2} Y_k\!\left(\frac{E(n_1 x_1)+\kappa_1}{n_1},\, \frac{E(n_2 x_2)+\kappa_2}{n_2}\right) \prod_{j=1}^{2} (-1)^{\kappa_j}\bigl(E(x_j n_j) + 1 - \kappa_j - x_j n_j\bigr),$$
where $N = (n_1, n_2)$ is the size of $Y_k$ and $E(x)$ denotes the integer part of $x$.
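As a small illustration of evaluating a discrete frame at continuous coordinates, the sketch below performs bilinear (order-1) interpolation with scipy; the normalized-coordinate convention and the boundary handling are assumptions made only for this example.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def continuous_image(Yk, x1, x2):
    """Evaluate the discrete frame Yk at continuous coordinates (x1, x2) in [0, 1]^2
    using bilinear interpolation (order=1), i.e. a 2-linear interpolation."""
    n1, n2 = Yk.shape
    rows = np.asarray(x1) * (n1 - 1)   # map normalized coordinates to pixel indices
    cols = np.asarray(x2) * (n2 - 1)
    return map_coordinates(Yk, [rows, cols], order=1, mode="nearest")

# Example: sample a 4x4 frame on a twice-finer grid.
Yk = np.arange(16.0).reshape(4, 4)
xx, yy = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8), indexing="ij")
Y_fine = continuous_image(Yk, xx, yy)
```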
During the first step, we arbitrarily choose one image $Y_r$ from the sequence $Y_k$ as a reference image and seek the deformations $u_k$ between $Y_r$ and the other images, such that
$$Y_r(x) = Y_k(u_k(x)) \quad \text{for } k \neq r \text{ and } \forall x \in \Omega. \qquad (3)$$
An intuitive way to find the deformation $u_k$ between the images is to minimize a distance measure between the two images. Since the problem is ill-posed, we have to choose an appropriate regularization. In this paper, we introduce a hyperelastic regularization term, which handles the deformed LR images better than the elastic and diffusion regularizations proposed, respectively, in [29] and [27]. The main difference between the proposed hyperelastic registration algorithm and the elastic one of [29] is the introduction of a nonlinear term in the strain tensor, which can handle non-smooth deformations between the LR images better than the classical linear tensor. The registration process is given by the following minimization problem:
$$\min_{u_k} J_{hyp}(u_k), \qquad (4)$$
with
$$J_{hyp}(u_k) = D_{SSD}(Y_r, Y_k, u_k) + \beta S_{hyp}(u_k), \qquad (5)$$
where
$$D_{SSD}(Y_r, Y_k, u_k) = \int_\Omega \bigl(Y_k(u_k(x)) - Y_r(x)\bigr)^2 dx, \qquad (6)$$
and $S_{hyp}$ is the hyperelastic regularization based on the nonlinear strain tensor. This tensor is generally defined via the displacement $v_k$: supposing that $u_k(x) = x + v_k(x)$, we have $\nabla u_k(x) = I_2 + \nabla v_k(x)$. We denote this tensor by $V$, defined by
$$V(v_k) = \frac{1}{2}\bigl(\nabla v_k + \nabla v_k^\top + \nabla v_k^\top \nabla v_k\bigr). \qquad (7)$$
Using these notations, we define the hyperelastic regularization $S_{hyp}$ as follows:
$$S_{hyp}(u_k) = \int_\Omega \frac{\lambda}{2}(\operatorname{trace} V)^2 + \mu \operatorname{trace}(V^2)\, dx, \qquad (8)$$
where $\mu$ and $\lambda$ are the Lamé parameters [33]. The registration problem is now well defined in (4). First, the existence and uniqueness of a solution to problem (4) must be ensured. The functional space is chosen as the Sobolev space $T = H^1(\Omega)$ [1].
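A minimal numerical sketch of the nonlinear strain tensor (7) and the hyperelastic energy (8) for a discrete displacement field follows. It assumes a St. Venant–Kirchhoff-type density $(\lambda/2)(\operatorname{trace} V)^2 + \mu\operatorname{trace}(V^2)$ as reconstructed in (8), with finite differences via np.gradient and a unit grid spacing as illustrative choices.

```python
import numpy as np

def hyperelastic_energy(v1, v2, mu, lam, h=1.0):
    """Discrete hyperelastic energy S_hyp for a displacement field v = (v1, v2)."""
    d1_dy, d1_dx = np.gradient(v1, h)
    d2_dy, d2_dx = np.gradient(v2, h)
    # Jacobian of v at every pixel, shape (H, W, 2, 2).
    J = np.stack([np.stack([d1_dx, d1_dy], axis=-1),
                  np.stack([d2_dx, d2_dy], axis=-1)], axis=-2)
    Jt = np.swapaxes(J, -1, -2)
    V = 0.5 * (J + Jt + Jt @ J)                       # nonlinear strain tensor (7)
    trV = np.trace(V, axis1=-2, axis2=-1)
    trV2 = np.trace(V @ V, axis1=-2, axis2=-1)
    return np.sum(0.5 * lam * trV**2 + mu * trV2) * h * h   # energy (8)

# Example: a small smooth displacement field on a 64x64 grid.
yy, xx = np.mgrid[0:64, 0:64] / 63.0
S = hyperelastic_energy(0.01 * np.sin(np.pi * xx), 0.01 * np.cos(np.pi * yy), mu=1.0, lam=1.0)
```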
Theorem 3.1. Let $\Omega$ be a regular bounded open subset of $\mathbb{R}^2$. Then, the minimization problem
$$\min_{u_k \in T} J_{hyp}(u_k) \qquad (9)$$
admits a unique solution.
To prove this theorem, we have to show that $J_{hyp}$ is coercive and weakly sequentially lower semi-continuous (l.s.c.).

Proof. Step 1: Existence. We check that $J_{hyp}$ is coercive and weakly sequentially l.s.c. Coercivity amounts to proving that
$$\lim_{\|u_k\|_{H^1(\Omega)} \to +\infty} J_{hyp}(u_k) = +\infty.$$
Let $u_k \in T$. Then
$$J_{hyp}(u_k) = \int_\Omega \frac{\lambda}{2}(\operatorname{trace} V)^2 + \mu \operatorname{trace}\bigl(V(u_k)^\top V(u_k)\bigr)\, dx + \int_\Omega \bigl(Y_k(u_k(x)) - Y_r(x)\bigr)^2 dx \geq \mu \int_\Omega \operatorname{trace}\bigl(V(u_k)^\top V(u_k)\bigr)\, dx + \int_\Omega \bigl(Y_k(u_k(x)) - Y_r(x)\bigr)^2 dx. \qquad (10)$$
Let us define the semi-norm
$$|\varepsilon(u_k)|_0 = \left( \int_\Omega \operatorname{trace}\bigl(\varepsilon(u_k)^\top \varepsilon(u_k)\bigr)\, dx \right)^{1/2},$$
where $\varepsilon$ is the linear strain tensor, $\varepsilon(v_k) = \frac{1}{2}(\nabla v_k + \nabla v_k^\top)$. Using the fact that $\operatorname{trace}(V(u_k)^\top V(u_k)) \geq \operatorname{trace}(\varepsilon(u_k)^\top \varepsilon(u_k))$, inequality (10) becomes
$$J_{hyp}(u_k) \geq \mu \int_\Omega \operatorname{trace}\bigl(\varepsilon(u_k)^\top \varepsilon(u_k)\bigr)\, dx + \int_\Omega \bigl(Y_k(u_k(x)) - Y_r(x)\bigr)^2 dx \geq \mu |\varepsilon(u_k)|_0^2 + \int_\Omega \bigl(Y_k(u_k(x))\bigr)^2 dx + \int_\Omega \bigl(Y_r(x)\bigr)^2 dx - 2\int_\Omega Y_k(u_k(x))\, Y_r(x)\, dx \geq \mu |\varepsilon(u_k)|_0^2 - 2\|Y_r\|_{L^\infty(\Omega)} \int_\Omega |Y_k(u_k(x))|\, dx \geq \mu |\varepsilon(u_k)|_0^2 - 2 m_\Omega \|Y_r\|_{L^\infty(\Omega)} \|Y_k(u_k(x))\|_{L^2(\Omega)}, \qquad (11)$$
where $m_\Omega$ is a constant verifying $\|Y_k(u_k(x))\|_{L^1(\Omega)} \leq m_\Omega \|Y_k(u_k(x))\|_{L^2(\Omega)}$. Since $Y_k$ is a linear and continuous operator of $u_k$, we have
$$\|Y_k(u_k(x))\|_{L^2(\Omega)} \leq \||Y_k|\|\, \|u_k\|_{L^2(\Omega)}, \quad \forall u_k \in L^2(\Omega), \qquad (12)$$
where $\||Y_k|\| = \min\{k \geq 0;\ \forall u_k \in L^2(\Omega),\ \|Y_k(u_k(x))\|_{L^2(\Omega)} \leq k\|u_k\|_{L^2(\Omega)}\}$. This implies
$$J_{hyp}(u_k) \geq \mu |\varepsilon(u_k)|_0^2 - 2 m_\Omega \|Y_r\|_{L^\infty(\Omega)} \||Y_k|\|\, \|u_k\|_{L^2(\Omega)} \geq \mu |\varepsilon(u_k)|_0^2 - 2 m_\Omega \|Y_r\|_{L^\infty(\Omega)} \||Y_k|\|\, \|u_k\|_{H^1(\Omega)}. \qquad (13)$$
We set $b = 2 m_\Omega \|Y_r\|_{L^\infty(\Omega)} Y_k$, so that $\||b|\| = 2 m_\Omega \|Y_r\|_{L^\infty(\Omega)} \||Y_k|\|$. On the other hand, by Korn's inequality [35], there exists a constant $\beta > 0$ such that $|\varepsilon(u_k)|_0 \geq \beta \|u_k\|_{H^1(\Omega)}$. We can deduce that
$$J_{hyp}(u_k) \geq \mu\beta \|u_k\|^2_{H^1(\Omega)} - \||b|\|\, \|u_k\|_{H^1(\Omega)}. \qquad (14)$$
Using Young's inequality, we obtain
$$J_{hyp}(u_k) \geq \frac{\mu\beta C}{2}\|u_k\|^2_{H^1(\Omega)} - C(\epsilon)\||b|\|^2 - \frac{\epsilon}{2}\|u_k\|^2_{H^1(\Omega)} \geq \Bigl(\frac{\mu\beta C}{2} - \frac{\epsilon}{2}\Bigr)\|u_k\|^2_{H^1(\Omega)} - C(\epsilon)\||b|\|^2, \qquad (15)$$
where $\epsilon$ is chosen such that $\mu\beta C - \epsilon > 0$. We then get that $J_{hyp}(u_k) \to +\infty$ as $\|u_k\|_{H^1(\Omega)} \to +\infty$. The weak sequential lower semi-continuity of $J_{hyp}$ is immediate, since this functional is continuous and convex with respect to $u_k$.

Step 2: Uniqueness. To prove uniqueness, we have to check that $J_{hyp}$ is strictly convex. Indeed, $u_k \mapsto S_{hyp}(u_k)$ is strictly convex (the norm $\|\cdot\|_{L^2(\Omega)}$ is strictly convex) and $D_{SSD}$ is convex with respect to $u_k$. As a result, $J_{hyp}$ is strictly convex with respect to $u_k$, which concludes the proof.
To solve problem (9), we use the Gauss–Newton method [40]. Since $u_k$ is given as a column vector of size $[r^4N^4 \times 1]$, it is then reshaped into the warp matrix $F_k$ of size $[r^2N^2 \times r^2N^2]$. The main algorithm to solve problem (9) is given as follows.
Algorithm for solving (9)
Input: the initial deformation $u^0$.
Initialization: set $J_{hyp}(u) = \|J(u)\|_2^2$, where $J(u) = \sqrt{\mu + \lambda}\,(\operatorname{trace} V)$; choose $\lambda > 0$, $\mu > 0$ and the iteration number $N_{max}$.
Compute: for $k = 1, \ldots, N_{max}$, the search direction $d$ such that $(DJ(u^k))^\top DJ(u^k)\, d = -(DJ(u^k))^\top J(u^k)$, where $DJ$ is the Jacobian of the function $J$.
Output: the deformation $u$ defined by $u^{k+1} = u^k + d$.
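The following is a generic Gauss–Newton sketch mirroring the iteration above, not the authors' registration code: the residual and Jacobian callables, the least-squares solve via np.linalg.lstsq, and the toy 1D example are all illustrative assumptions.

```python
import numpy as np

def gauss_newton(u0, residual, jacobian, n_max=20, tol=1e-6):
    """Minimize ||J(u)||_2^2 by Gauss-Newton: solve DJ d = -J(u) in the
    least-squares sense (equivalent to the normal equations) and set u <- u + d."""
    u = np.asarray(u0, dtype=float)
    for _ in range(n_max):
        r = residual(u)                                 # J(u)
        Jm = jacobian(u)                                # DJ(u)
        d, *_ = np.linalg.lstsq(Jm, -r, rcond=None)
        u = u + d
        if np.linalg.norm(d) < tol:
            break
    return u

# Toy example: estimate a 1D shift between two signals (a stand-in for the warp).
t = np.linspace(0, 2 * np.pi, 200)
ref, mov = np.sin(t), np.sin(t - 0.3)
res = lambda s: np.interp(t - s, t, mov) - ref
jac = lambda s: -np.gradient(np.interp(t - s[0], t, mov), t).reshape(-1, 1)
shift_est = gauss_newton(np.array([0.0]), res, jac)
```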
4. The restoration step
4.1. The spatially weighted second order regularization

In this section, we describe the proposed restoration algorithm. We use a spatially weighted convex combination of the $TV$ and $TV^2$ regularizations. This approach is inspired by work in the denoising and deblurring context [37]. There have also been other previous attempts to combine first and second order regularization, see [10]. A remarkable one is proposed in [11], where the authors consider total variation minimization together with weighted versions of the Laplacian. Although this regularization has the advantage of preserving edges and avoiding blocky effects, it still suffers from the staircasing effect in flat areas. Moreover, even if the reduction of noise in flat regions is significant when the regularization parameter is well chosen, edges and texture will certainly be blurred. Another complication comes from the choice of the regularization parameter that balances the first and second order terms, which reduces the efficiency of the combined $TV$ and $TV^2$ model. To avoid these weaknesses, a spatially weighted regularization model is proposed, which takes into account the spatially dependent properties of the image. We use the difference curvature information [13] to discern edges from smooth areas in a given image.
We first recall the difference curvature coefficient $C_i$ for the $i$th pixel of the image,
$$C_i = \bigl|\, |s_{\eta\eta}| - |s_{\xi\xi}| \,\bigr|, \qquad (16)$$
where
$$s_{\eta\eta} = \frac{\mu_x^2 \mu_{xx} + 2\mu_x\mu_y\mu_{xy} + \mu_y^2\mu_{yy}}{\mu_x^2 + \mu_y^2}, \qquad (17)$$
and
$$s_{\xi\xi} = \frac{\mu_y^2 \mu_{xx} - 2\mu_x\mu_y\mu_{xy} + \mu_x^2\mu_{yy}}{\mu_x^2 + \mu_y^2}. \qquad (18)$$
Here $\eta$ and $\xi$ denote, respectively, the direction of the gradient and the direction perpendicular to the gradient; $\mu_x$, $\mu_y$, $\mu_{xx}$, $\mu_{yy}$ and $\mu_{xy}$ denote the first and second order derivatives at each pixel, and $|\cdot|$ is the absolute value. Using the difference curvature properties given by (16), we now define the proposed restoration super-resolution model as follows:
$$\hat{X} = \arg\min_X \left\{ \sum_{k=1}^{n} \|D F_k H X - Y_k\|_2^2 + R_{WSO}(X) \right\}, \qquad (19)$$
where $R_{WSO}$ is given by
$$R_{WSO}(X) = L\|\nabla X\|_1 + V\|\nabla^2 X\|_1. \qquad (20)$$
ED
L is a spatially weighted matrix defined by the difference curvature as follows L1 0 . . . 0 0 L2 . . . 0 L = .. (21) .. . . .. , . . . . 0 0 . . . Lr 2 N 2
AC
CE
where Li is the spatial gradient weight of the ith pixel in the HR image X, which is defined depending on the difference curvature coefficient in the following way 1 Li = 1 − , (22) 1 + σCi2 where σ is contrast factor. V is also a spatially weighted matrix defined by V1 0 . . . 0 0 V2 . . . 0 V = .. (23) .. . . .. , . . . . 0 0 . . . Vr 2 N 2 10
and $V_i$ is related to the difference curvature coefficient as follows:
$$V_i = \exp\left(\frac{-C_i}{\delta}\right), \qquad (24)$$
where $\delta$ is a given threshold, whose choice was validated on several test images. Based on the expressions of $L_i$ and $V_i$ in equations (22) and (24), we can see that if the coefficient $C_i$ is high, which signifies that the pixel lies in an edge or texture area, the coefficient $V_i$ will be close to 0. This promotes the $TV$ term at the expense of the second order one, so that the noise in textured regions is well suppressed without blurring the smooth areas controlled by the coefficient $V_i$. Conversely, for pixels in smooth areas, the value of $C_i$ is small and $V_i$ is high, which means that a strong second order regularization is applied to these pixels. As a result, edges, smooth areas and texture are all well preserved. Based on this analysis, the spatially weighted second order regularization $R_{WSO}$ automatically selects the appropriate term with the corresponding spatial information at each pixel, which efficiently reduces the noise in smooth regions while preserving sharp edges and complex texture.
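To illustrate (16)-(24), the sketch below computes the difference curvature $C_i$ and the weights $L_i$, $V_i$ from image derivatives. The central-difference scheme, the small eps guard against division by zero, and the parameter values are assumptions made for illustration only.

```python
import numpy as np

def spatial_weights(img, sigma=1.0, delta=10.0, eps=1e-8):
    """Compute the difference curvature C and the weights L, V of (16), (22), (24)."""
    uy, ux = np.gradient(img)              # first order derivatives
    uyy, uyx = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    g2 = ux**2 + uy**2 + eps
    s_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2   # along the gradient
    s_tt = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2   # across the gradient
    C = np.abs(np.abs(s_nn) - np.abs(s_tt))                        # difference curvature (16)
    L = 1.0 - 1.0 / (1.0 + sigma * C**2)                           # TV weight (22)
    V = np.exp(-C / delta)                                         # TV^2 weight (24)
    return C, L, V

# On edges C is large, so L -> 1 (promote TV) and V -> 0 (suppress the second order term);
# in flat areas C is small, so V -> 1 and the second order term dominates.
```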
4.2. Numerical solution of the restoration problem

In this section, we use the split Bregman algorithm [21] to solve the minimization problem (19). The first step of the split Bregman algorithm is to observe that this minimization problem is equivalent to the following constrained minimization problem:
$$\arg\min_{X,B,Z} \left\{ \sum_{k=1}^{n} \|D F_k H X - Y_k\|_2^2 + L\|B\|_1 + V\|Z\|_1 \right\}, \quad \text{such that} \quad \begin{cases} B = \nabla X, \\ Z = \nabla^2 X. \end{cases} \qquad (25)$$
AC
(X k+1 , B k+1 , Z k+1 ) = arg min X,B,Z
bk+1 1 bk+1 2
n X 1
kDFk HX k+1 − Yk k22 + LkB k k1 + V kZ k k1
λ k λ kb1 + ∇X k+1 − B k k2 + kbk2 + ∇2 X k+1 − Z k k2(26) + 2 2 k k+1 k+1 = b1 + ∇X −B , (27) k 2 k+1 k+1 = b2 + ∇ X −Z , (28) 11
ACCEPTED MANUSCRIPT
The Euler–Lagrange equation is then used to compute the solution of (26), which is given by
$$\sum_{l=1}^{n} (D F_l H)^* (D F_l H X^{k+1} - Y_l) - L\lambda\, \operatorname{div}(\nabla X^{k+1} + b_1^k - B^k) - V\lambda\, \operatorname{div}^2(\nabla^2 X^{k+1} + b_2^k - Z^k) = 0, \qquad (29)$$
where $(D F_l H)^*$ is the adjoint operator of $D F_l H$, and $\operatorname{div}$ is the first order divergence operator verifying the adjointness relation
$$\operatorname{div} X \cdot Y = -X \cdot \nabla Y, \quad \forall Y \in \mathbb{R}^{N^2 \times 1},\ X \in \mathbb{R}^{2N^2 \times 1}, \qquad (30)$$
while $\operatorname{div}^2$ is the second order divergence operator with the adjointness property
$$\operatorname{div}^2 X \cdot Y = X \cdot \nabla^2 Y, \quad \forall Y \in \mathbb{R}^{N^2 \times 1},\ X \in \mathbb{R}^{4N^2 \times 1}, \qquad (31)$$
and the dot "$\cdot$" denotes the Euclidean inner product. We can now compute $X^{k+1}$ using (29), which leads to
$$X^{k+1} = \Theta^{-1}\left( \sum_{l=1}^{n} (D F_l H)^* Y_l + V\lambda\, \operatorname{div}^2(b_2^k - Z^k) - L\lambda\, \operatorname{div}(b_1^k - B^k) \right), \qquad (32)$$
where
$$\Theta = 1 - V\lambda\, \operatorname{div}^2(\nabla^2) - L\lambda\, \operatorname{div}(\nabla). \qquad (33)$$
Since the operators $\operatorname{div}(\nabla)$ and $\operatorname{div}^2(\nabla^2)$ are negative semi-definite, the operator $\Theta$ is diagonally dominant; taking into account that the parameters $\lambda$, $V$ and $L$ are very small (close to zero), $\Theta$ is then invertible. To compute the vectors $Z^k$ and $B^k$, we use the shrinkage operators
$$B^{k+1} = \operatorname{shrink}\Bigl(\nabla X^{k+1} + b_1^k, \frac{1}{\lambda}\Bigr), \qquad (34)$$
$$Z^{k+1} = \operatorname{shrink}\Bigl(\nabla^2 X^{k+1} + b_2^k, \frac{1}{\lambda}\Bigr), \qquad (35)$$
where
$$\operatorname{shrink}(x, y) = \frac{x}{|x|}\max(|x| - y, 0). \qquad (36)$$
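The shrinkage (soft-thresholding) operator (36) takes only a few lines; the small eps guard against division by zero is an implementation assumption.

```python
import numpy as np

def shrink(x, y, eps=1e-12):
    """Soft-thresholding operator of (36): shrink(x, y) = x/|x| * max(|x| - y, 0)."""
    mag = np.abs(x)
    return x / (mag + eps) * np.maximum(mag - y, 0.0)

# Updates (34)-(35) then read, with threshold 1/lambda:
# B_next = shrink(grad_X + b1, 1.0 / lam)
# Z_next = shrink(lap2_X + b2, 1.0 / lam)
```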
Finally, we summarize the algorithm associated with the new restoration model in Algorithm 1.

Algorithm 1: Split Bregman algorithm
Inputs: the LR images $Y_k$, $b_1$, $b_2$ and the parameter $\lambda$.
The procedure:
$$X^{k+1} = \arg\min_X \sum_{k=1}^{n} \|D F_k H X - Y_k\|_2^2 + \frac{\lambda}{2}\|b_1^k + \nabla X - B^k\|^2 + \frac{\lambda}{2}\|b_2^k + \nabla^2 X - Z^k\|^2, \qquad (37)$$
$$B^{k+1} = \arg\min_B L\|B\|_1 + \frac{\lambda}{2}\|b_1^k + \nabla X^{k+1} - B\|_2^2, \qquad (38)$$
$$Z^{k+1} = \arg\min_Z V\|Z\|_1 + \frac{\lambda}{2}\|b_2^k + \nabla^2 X^{k+1} - Z\|_2^2, \qquad (39)$$
$$b_1^{k+1} = b_1^k + \nabla X^{k+1} - B^{k+1}, \qquad (40)$$
$$b_2^{k+1} = b_2^k + \nabla^2 X^{k+1} - Z^{k+1}. \qquad (41)$$
Output: the HR deblurred and denoised image $X$.
To implement Algorithm 1, we need to introduce the discrete setting. Let $X_{i,j}$ be the discrete version of the image $X$, such that $X_{i,j} = X(i,j)$, $i = 1, \ldots, M$, $j = 1, \ldots, M$, where $M = r^2N^2$. We start with
the discretization of the operator $\nabla$, whose two components are given by
$$(\nabla X)^1_{i,j} = \begin{cases} X_{i+1,j} - X_{i,j} & \text{if } i < M, \\ 0 & \text{if } i = M, \end{cases} \qquad (42)$$
$$(\nabla X)^2_{i,j} = \begin{cases} X_{i,j+1} - X_{i,j} & \text{if } j < M, \\ 0 & \text{if } j = M. \end{cases} \qquad (43)$$
For the second order operator $\nabla^2$, we have
$$\nabla^2 X = \nabla_{xx} X + 2\nabla_{xy} X + \nabla_{yy} X. \qquad (44)$$
We define the second order differential operators $\nabla_{xx}$, $\nabla_{yy}$ and $\nabla_{xy}$ as convolutions with the following kernels:
$$k_{xx} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & -2 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \quad k_{yy} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & -2 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad k_{xy} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & -1 & 1 \end{pmatrix}.$$
As a consequence, the operator $\nabla^2$ can be interpreted as the kernel $k_2$ defined as
$$k_2 = k_{xx} + k_{yy} + 2k_{xy} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & -2 & -1 \\ 0 & -1 & 3 \end{pmatrix}.$$
Therefore, the result of $\nabla^2 X$ is merely the convolution of $X$ with the linear kernel $k_2$.
5. Experimental results
In the following experiments, the Lamé coefficients $\mu$ and $\lambda$, the parameters $\sigma$ and $\delta$, and the parameters of the other SR models are selected manually with respect to the best SR result. In the simulated experiments, the result with the highest PSNR value is selected, while for the real data the most visually pleasing result is kept. The main experiments are implemented in MATLAB 2013b on a Pentium Dual-Core 2.0 GHz computer with 2.0 GB of RAM.
5.1. The simulation results

5.1.1. The effectiveness of the registration part

In the first experiment, we construct 16 synthetic LR images from the original Cameraman image as follows: each frame is deformed by a random vector field and blurred by a 4 × 4 Gaussian low-pass filter with a standard deviation of 2. Then, the blurred frames are down-sampled in both directions by a factor of $r = 2$ and Gaussian noise is added with a standard deviation $\sigma = 20$. In this experiment, we fix the regularization term to the BTV term [19] and we compare our registration algorithm with other classical registration methods used in the SR task: SR with probabilistic optical flow (POF) [20], SR with least-squares based flow (LSOF) [3], SR with consistent flow (CF) [49], SR with the diffusion registration proposed in [29], and Median Shift and Add (MSA) [14]. The obtained SR results are shown in Fig. 1 to illustrate the efficiency of the proposed registration part. We can see that the proposed registration method gives a slightly better result than the other methods.

5.1.2. The effectiveness of the second order regularization term

To evaluate the restoration step of the proposed super-resolution approach, we now fix the registration step, taken to be the hyperelastic registration for both our method and the other SR approaches.
[Figure 1: Comparisons of different SR methods (Cameraman image, magnification factor r = 2, random motion vectors) to evaluate the registration step. Panels: (a) one LR image, (b) SR with LSOF [3], (c) SR with MSA [14], (d) SR with CF [49], (e) SR with diffusion [29], (f) our registration.]
[Figure 2: Comparisons of different SR methods (Cameraman image) when the noise level is σ = 50 and the magnification factor is r = 4. Panels: (a) one LR image, (b) TV reg. [32], (c) BTV reg. [19], (d) TV+BTV reg. [28], (e) NLM reg. [38], (f) the proposed model.]
We measure the performance of the proposed second order regularization term by considering high levels of Gaussian noise, while the blur is taken to be a 4 × 4 Gaussian low-pass filter with a standard deviation of 2. We consider LR frames generated from the original Cameraman image and corrupted by Gaussian noise with $\sigma = 50$, 60 and 70, respectively. The results obtained with a magnification factor $r = 4$, using different regularization terms (TV [32], nonlocal-means (NLM) [38], BTV [19] and TV+BTV [28]), are illustrated in Figs. 2, 3 and 4. Visually, the proposed second order regularization preserves the image features better than the other terms for the three noise levels. In the second experiment, we keep the same SR procedure, except that this time the interest is in how the proposed regularization term deals with a high level of blur. We therefore increase the blur level, now a 7 × 7 Gaussian low-pass filter with a standard deviation of 2, while the noise level is $\sigma = 25$.
[Figure 3: Comparisons of different SR methods (Cameraman image) when the noise level is σ = 60 and the magnification factor is r = 4. Panels: (a) one LR image, (b) TV reg. [32], (c) BTV reg. [19], (d) TV+BTV reg. [28], (e) NLM reg. [38], (f) the proposed model.]
[Figure 4: Comparisons of different SR methods (Cameraman image) when the noise level is σ = 70 and the magnification factor is r = 4. Panels: (a) one LR image, (b) TV reg. [32], (c) BTV reg. [19], (d) TV+BTV reg. [28], (e) NLM reg. [38], (f) the proposed model.]
[Figure 5: Comparisons of different SR methods (Cameraman image) when the blur kernel is of size 7 × 7 and the magnification factor is r = 4. Panels: (a) one LR image, (b) TV reg. [32], (c) BTV reg. [19], (d) TV+BTV reg. [28], (e) NLM reg. [38], (f) the proposed model.]
Once again, as illustrated in Fig. 5, the proposed regularization term outperforms the others in restoration quality. For the last test, we keep the Cameraman sequence. To measure the robustness of the proposed regularization term against misregistration and blur estimation errors, we simulate misregistration by injecting a registration parameter error into one selected LR image, corresponding to a random pixel error on the HR image grid, while the PSF is assumed to be a normalized 3 × 2 Gaussian kernel instead of the true 4 × 4 Gaussian blur kernel. In addition, to generate more outliers, we add white Gaussian noise with $\sigma = 20$. The SR images obtained with the different SR methods are shown in Fig. 6. We can see that the proposed regularization is more robust in reducing registration and blur errors than the other regularizations. However, some blur still appears, especially in homogeneous regions, which is also the case for the other methods.
[Figure 6: Comparisons of different SR methods (Cameraman image) with misregistration and PSF errors when the magnification factor is r = 4. Panels: (a) one LR image, (b) TV reg. [32], (c) BTV reg. [19], (d) TV+BTV reg. [28], (e) NLM reg. [38], (f) the proposed model.]
5.1.3. The effectiveness of the main SR method

In this section, the proposed SR method is tested on simulated and real data. We first consider six simulated tests with known HR images; four real experiments are presented afterwards. In these experiments, we assess the effectiveness of our method by comparing it, on the same data, with several competitive SR methods: SR with BTV regularization [19], the adaptive regularization term (ART) [5], the noise-suppressing and edge-preserving SR (NSEP) [30], the region-based weighted-norm approach (RBWN) [36], the generalized detail-preserving SR (GDP) [48], and the combined first order and BTV (TV+BTV) regularization [28]. In the following experiments, the motion model is assumed to be a global translational model for the other methods, while we use the hyperelastic registration for the proposed SR. To evaluate the performance of the proposed algorithm, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) criterion [43] are used. The proposed super-resolution algorithm is tested on the six benchmark images presented in Fig. 7, with different sizes and content. We construct $n = 20$ synthetic LR images from each original image as follows: each frame is slightly deformed and blurred by a 3 × 5 Gaussian low-pass filter with a standard deviation of 1.5; then, the blurred frames are down-sampled vertically and horizontally by a factor of $r = 4$ and Gaussian noise is added with a standard deviation $\sigma = 30$. Figs. 8-13 show the reconstruction results obtained for each simulated test compared with the other SR approaches. As we can see, the reconstructed HR images of the proposed algorithm have good visual quality compared with the others. Indeed, the proposed method with hyperelastic registration avoids misregistration errors and the staircasing effect better than the other SR techniques; moreover, sharp edges and texture are effectively preserved while blur and noise are reduced. To confirm this, a quantitative study is needed. In Tables 1 and 2, the PSNR and SSIM values are reported for the six tested images with different noise levels $\sigma$; the best PSNR and SSIM value in each row is shown in bold in the original tables. The proposed SR approach is always better than the others, which confirms the efficiency of our algorithm. Table 3 reports the execution time of the proposed method and of the other SR methods for ten simulated images, including the ones used in the previous simulated tests.
[Figure 7: The original six images: (a) Train, (b) Cameraman, (c) Lion, (d) Men, (e) Car, (f) Air-plane.]
We can see that the proposed algorithm always has the highest execution time. This can be explained by the complexity of the registration step, even though the Bregman iteration converges quickly compared to the other methods.
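For completeness, the PSNR and SSIM metrics used in Tables 1 and 2 can be computed with scikit-image as shown below; the data_range value is an assumption about the image intensity scale.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(hr_true, hr_estimated, data_range=255.0):
    """Return (PSNR, SSIM) between the ground-truth HR image and an SR estimate."""
    psnr = peak_signal_noise_ratio(hr_true, hr_estimated, data_range=data_range)
    ssim = structural_similarity(hr_true, hr_estimated, data_range=data_range)
    return psnr, ssim
```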
5.2. The real experiments

In this subsection, we validate the ability of our method to avoid misregistration errors on real data, in comparison with the TV+BTV method with the diffusion registration proposed in [29] and the GDP SR approach. The first two sequences used are the "Emily" and "Disk" videos downloaded from the dataset website¹.

¹ https://users.soe.ucsc.edu/~milanfar/software/sr-datasets.html
[Figure 8: Comparisons between different SR methods and the proposed one for the Train image. Panels: (a) one LR image, (b) BTV reg. [19], (c) ART reg. [5], (d) NSEP SR [30], (e) RBWN SR [36], (f) GDP SR [48], (g) TV+BTV reg. [28], (h) the proposed model.]
[Figure 9: Comparisons between different SR methods and the proposed one for the Cameraman image. Panels: (a) one LR image, (b) BTV reg. [19], (c) ART reg. [5], (d) NSEP SR [30], (e) RBWN SR [36], (f) GDP SR [48], (g) TV+BTV reg. [28], (h) the proposed model.]
[Figure 10: Comparisons between different SR methods and the proposed one for the Lion image. Panels: (a) one LR image, (b) BTV reg. [19], (c) ART reg. [5], (d) NSEP SR [30], (e) RBWN SR [36], (f) GDP SR [48], (g) TV+BTV reg. [28], (h) the proposed model.]
[Figure 11: Comparisons between different SR methods and the proposed one for the Man image. Panels: (a) one LR image, (b) BTV reg. [19], (c) ART reg. [5], (d) NSEP SR [30], (e) RBWN SR [36], (f) GDP SR [48], (g) TV+BTV reg. [28], (h) the proposed model.]
[Figure 12: Comparisons between different SR methods and the proposed one for the Airplane image. Panels: (a) one LR image, (b) BTV reg. [19], (c) ART reg. [5], (d) NSEP SR [30], (e) RBWN SR [36], (f) GDP SR [48], (g) TV+BTV reg. [28], (h) the proposed model.]
[Figure 13: Comparisons between different SR methods and the proposed one for the Car image. Panels: (a) one LR image, (b) BTV reg. [19], (c) ART reg. [5], (d) NSEP SR [30], (e) RBWN SR [36], (f) GDP SR [48], (g) TV+BTV reg. [28], (h) the proposed model.]
Table 1: PSNR values of the super-resolution process evaluated on the six images of the simulated tests, when the magnification factor is 4.

| Image | Method | σ = 10 | σ = 20 | σ = 30 |
| Train | BTV reg. [19] | 31.55 | 31.06 | 30.77 |
| Train | ART reg. [5] | 31.80 | 31.19 | 30.55 |
| Train | NSEP SR [30] | 31.70 | 31.22 | 30.87 |
| Train | RBWN SR [36] | 31.95 | 31.37 | 30.91 |
| Train | GDP SR [48] | 31.84 | 31.44 | 30.99 |
| Train | TV+BTV reg. [28] | 31.72 | 31.22 | 30.88 |
| Train | Proposed approach | 32.83 | 32.51 | 32.03 |
| Cameraman | BTV reg. [19] | 28.91 | 28.06 | 27.66 |
| Cameraman | ART reg. [5] | 29.03 | 28.60 | 27.90 |
| Cameraman | NSEP SR [30] | 29.48 | 29.01 | 28.66 |
| Cameraman | RBWN SR [36] | 29.58 | 28.88 | 28.29 |
| Cameraman | GDP SR [48] | 29.76 | 28.43 | 28.36 |
| Cameraman | TV+BTV reg. [28] | 29.28 | 28.77 | 28.04 |
| Cameraman | Proposed approach | 31.08 | 30.71 | 30.16 |
| Lion | BTV reg. [19] | 32.22 | 31.69 | 31.18 |
| Lion | ART reg. [5] | 32.52 | 31.66 | 31.02 |
| Lion | NSEP SR [30] | 33.08 | 32.75 | 32.02 |
| Lion | RBWN SR [36] | 33.03 | 32.74 | 32.20 |
| Lion | GDP SR [48] | 33.06 | 32.81 | 32.41 |
| Lion | TV+BTV reg. [28] | 33.18 | 32.92 | 32.49 |
| Lion | Proposed approach | 34.01 | 33.61 | 33.15 |
| Man | BTV reg. [19] | 32.44 | 32.09 | 31.73 |
| Man | ART reg. [5] | 32.60 | 32.22 | 31.80 |
| Man | NSEP SR [30] | 32.65 | 32.26 | 31.90 |
| Man | RBWN SR [36] | 32.69 | 32.28 | 31.94 |
| Man | GDP SR [48] | 33.11 | 32.96 | 32.55 |
| Man | TV+BTV reg. [28] | 33.02 | 32.66 | 32.14 |
| Man | Proposed approach | 34.21 | 33.90 | 33.73 |
| Air-plane | BTV reg. [19] | 35.11 | 34.69 | 34.03 |
| Air-plane | ART reg. [5] | 35.18 | 34.79 | 34.12 |
| Air-plane | NSEP SR [30] | 35.82 | 35.44 | 35.09 |
| Air-plane | RBWN SR [36] | 35.88 | 35.40 | 35.01 |
| Air-plane | GDP SR [48] | 36.00 | 35.69 | 35.19 |
| Air-plane | TV+BTV reg. [28] | 35.82 | 35.52 | 35.18 |
| Air-plane | Proposed approach | 36.22 | 35.94 | 35.33 |
| Car | BTV reg. [19] | 33.41 | 33.05 | 32.73 |
| Car | ART reg. [5] | 33.50 | 33.09 | 32.90 |
| Car | NSEP SR [30] | 33.70 | 33.42 | 33.06 |
| Car | RBWN SR [36] | 33.95 | 33.60 | 33.12 |
| Car | GDP SR [48] | 34.08 | 33.79 | 33.47 |
| Car | TV+BTV reg. [28] | 33.92 | 33.69 | 33.44 |
| Car | Proposed approach | 34.82 | 34.55 | 34.16 |
Table 2: SSIM values of the super-resolution process evaluated on the six images of the simulated tests, when the magnification factor is 4.

| Image | Method | σ = 10 | σ = 20 | σ = 30 |
| Train | BTV reg. [19] | 0.901 | 0.888 | 0.833 |
| Train | ART reg. [5] | 0.911 | 0.866 | 0.8149 |
| Train | NSEP SR [30] | 0.936 | 0.900 | 0.871 |
| Train | RBWN SR [36] | 0.939 | 0.908 | 0.877 |
| Train | GDP SR [48] | 0.944 | 0.919 | 0.890 |
| Train | TV+BTV reg. [28] | 0.935 | 0.918 | 0.900 |
| Train | Proposed approach | 0.969 | 0.940 | 0.917 |
| Cameraman | BTV reg. [19] | 0.877 | 0.853 | 0.839 |
| Cameraman | ART reg. [5] | 0.880 | 0.850 | 0.839 |
| Cameraman | NSEP SR [30] | 0.899 | 0.873 | 0.851 |
| Cameraman | RBWN SR [36] | 0.891 | 0.867 | 0.849 |
| Cameraman | GDP SR [48] | 0.911 | 0.883 | 0.863 |
| Cameraman | TV+BTV reg. [28] | 0.903 | 0.882 | 0.869 |
| Cameraman | Proposed approach | 0.939 | 0.919 | 0.899 |
| Lion | BTV reg. [19] | 0.905 | 0.882 | 0.861 |
| Lion | ART reg. [5] | 0.913 | 0.881 | 0.852 |
| Lion | NSEP SR [30] | 0.936 | 0.909 | 0.891 |
| Lion | RBWN SR [36] | 0.925 | 0.903 | 0.884 |
| Lion | GDP SR [48] | 0.933 | 0.913 | 0.893 |
| Lion | TV+BTV reg. [28] | 0.939 | 0.919 | 0.996 |
| Lion | Proposed approach | 0.972 | 0.933 | 0.910 |
| Man | BTV reg. [19] | 0.899 | 0.865 | 0.841 |
| Man | ART reg. [5] | 0.906 | 0.868 | 0.844 |
| Man | NSEP SR [30] | 0.918 | 0.894 | 0.870 |
| Man | RBWN SR [36] | 0.918 | 0.895 | 0.866 |
| Man | GDP SR [48] | 0.929 | 0.905 | 0.882 |
| Man | TV+BTV reg. [28] | 0.936 | 0.919 | 0.900 |
| Man | Proposed approach | 0.958 | 0.930 | 0.908 |
| Air-plane | BTV reg. [19] | 0.922 | 0.894 | 0.870 |
| Air-plane | ART reg. [5] | 0.929 | 0.900 | 0.873 |
| Air-plane | NSEP SR [30] | 0.944 | 0.918 | 0.893 |
| Air-plane | RBWN SR [36] | 0.949 | 0.920 | 0.892 |
| Air-plane | GDP SR [48] | 0.953 | 0.929 | 0.906 |
| Air-plane | TV+BTV reg. [28] | 0.955 | 0.922 | 0.902 |
| Air-plane | Proposed approach | 0.978 | 0.950 | 0.928 |
| Car | BTV reg. [19] | 0.908 | 0.881 | 0.853 |
| Car | ART reg. [5] | 0.910 | 0.894 | 0.860 |
| Car | NSEP SR [30] | 0.931 | 0.910 | 0.883 |
| Car | RBWN SR [36] | 0.928 | 0.901 | 0.975 |
| Car | GDP SR [48] | 0.933 | 0.910 | 0.991 |
| Car | TV+BTV reg. [28] | 0.948 | 0.920 | 0.902 |
| Car | Proposed approach | 0.964 | 0.931 | 0.922 |
Table 3: CPU times (in seconds) of the different super-resolution methods and the proposed method, when the magnification factor is 4.

| Image | BTV reg. [19] | ART reg. [5] | NSEP SR [30] | RBWN SR [36] | GDP SR [48] | TV+BTV reg. [28] | Our Method |
| Cameraman | 8.20 | 10.26 | 11.66 | 10.90 | 11.42 | 8.80 | 12.50 |
| Train | 10.60 | 11.10 | 12.90 | 11.70 | 13.66 | 10.14 | 13.01 |
| Lion | 10.01 | 12.40 | 13.80 | 12.80 | 14.02 | 12.06 | 15.66 |
| Man | 12.60 | 13.75 | 14.30 | 14.09 | 15.17 | 12.82 | 17.13 |
| Car | 28.18 | 31.80 | 32.80 | 32.70 | 35.26 | 27.99 | 35.90 |
| Airplane | 29.26 | 31.10 | 32.88 | 33.03 | 34.70 | 28.06 | 35.54 |
| San-Diego | 30.46 | 34.33 | 35.20 | 36.40 | 33.11 | 30.90 | 36.80 |
| Baboon | 10.85 | 11.90 | 13.20 | 13.07 | 14.80 | 10.01 | 14.00 |
| Peppers | 9.60 | 10.23 | 11.70 | 11.55 | 12.14 | 8.86 | 12.35 |
| Lena | 10.22 | 11.99 | 11.66 | 11.90 | 12.58 | 9.88 | 14.12 |
| Average time | 15.99 | 17.88 | 19.01 | 18.81 | 19.68 | 15.95 | 20.70 |
These are challenging examples, since they contain a high level of blur and noise and different deformations. We select the first ten low-resolution images from the Disk video, of size 49 × 57. These low-resolution images are only translated and do not contain any rotation. Note that we use the proposed registration to estimate the motion for our method. The images reconstructed with a factor $r = 3$ by the different methods are presented in Fig. 14. We can observe that the image restored by our SR method is visually better than the others. In the second example, we take the first ten images of the Emily video, of size 66 × 101, double its size, and show the obtained results in Fig. 15. Once again, the proposed SR method gives a better result than the other methods. In the third and fourth experiments, our interest is to improve the readability of the licence plates of two cars. The camera is static and fixed alongside the road. The two video sequences contain fifty frames each; we select only the first thirty, with sizes of 52 × 61 and 52 × 48, respectively. The main difficulty in these videos is that it is not possible to register the corresponding image patches using a classical parametric registration because of their limited extent, even though the licence plate area is actually planar in 3D. Another difficulty comes from the rapid motion of the cars, which introduces non-rigid motions between the LR frames. The images reconstructed with a factor $r = 4$ by the different methods are presented in Figs. 16 and 17.

6. Conclusion
[Figure 14: The results obtained by applying different SR methods to the LR Emily sequence. Panels: (a) one LR image, (b) TV+BTV reg. [28], (c) GDP SR [48], (d) our method.]
[Figure 15: The results obtained by applying different SR methods to the LR Disk sequence. Panels: (a) one LR image, (b) TV+BTV reg. [28], (c) GDP SR [48], (d) our method.]
[Figure 16: The results obtained by applying different SR methods to the LR Car 1 sequence. Panels: (a) one LR image, (b) TV+BTV reg. [28], (c) GDP SR [48], (d) our method.]
[Figure 17: The results obtained by applying different SR methods to the LR Car 2 sequence. Panels: (a) one LR image, (b) TV+BTV reg. [28], (c) GDP SR [48], (d) our method.]
In this paper, we have proposed a novel multi-frame SR reconstruction method based on hyperelastic registration and a spatially weighted second order regularization, which improves both the registration and the restoration steps in the presence of various outliers. Synthetic and real results show that the proposed SR method gives promising results, with reduced noise and blur, which is confirmed quantitatively using the PSNR and SSIM measures. Moreover, the proposed approach efficiently avoids misregistration errors, especially when the transformations between the LR images are non-parametric. However, the high computational cost of the proposed algorithm remains a weakness; an accelerating scheme based on projection techniques will therefore be investigated. Finally, the selection of appropriate parameters affects the performance of the proposed approach; a learning method would thus be desirable to set the hyper-parameters used in our algorithm.

Acknowledgements
The authors are grateful to the anonymous reviewers for their insightful remarks and corrections.

Compliance with Ethical Standards
• Funding: This research was entirely funded by institutions of the authors.
• Conflict of interest: The authors declare that they have no conflict of interest.
• Neither human participants nor animals are involved in this research.
References
[1] Adams, R. A. and Fournier, J. J. (2003). Sobolev spaces, volume 140. Academic Press.
[2] Amancio, D. R. (2015). Probing the topological properties of complex networks modeling short written texts. PloS one, 10(2):e0118394.
[3] Anandan, P., Bergen, J., Hanna, K., and Hingorani, R. (1993). Hierarchical model-based motion estimation. In Motion Analysis and Image Sequence Processing, pages 1-22. Springer.
[4] Babacan, S. D., Molina, R., and Katsaggelos, A. K. (2011). Variational bayesian super resolution. IEEE Transactions on Image Processing, 20(4):984–999.
[5] Bahy, R. M., Salama, G. I., and Mahmoud, T. A. (2014). Adaptive regularization-based super resolution reconstruction technique for multi-focus low-resolution images. Signal Processing, 103:155-167.
[6] Baker, S. and Kanade, T. (1999). Super-resolution optical flow. Carnegie Mellon University, The Robotics Institute.
[7] Baker, S. and Kanade, T. (2002). Limits on super-resolution and how to break them. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1167–1183.
[8] Cai, K., Shi, J., Xiong, S., and Wei, G. (2011). Edge adaptive image resolution enhancement in video sensor network. Optics Communications, 284(19):4446–4451.
[9] Capel, D. and Zisserman, A. (1998). Automated mosaicing with super-resolution zoom. In Computer Vision and Pattern Recognition, 1998. Proceedings. 1998 IEEE Computer Society Conference on, pages 885-891. IEEE.
[10] Chambolle, A. and Lions, P.-L. (1997). Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167–188.
[11] Chan, T., Marquina, A., and Mulet, P. (2000). High-order total variation-based image restoration. SIAM Journal on Scientific Computing, 22(2):503–516.
[12] Chen, H., He, X., Ren, C., Qing, L., and Teng, Q. (2018). CISRDCNN: Super-resolution of compressed images using deep convolutional neural networks. Neurocomputing, 285:204-219.
[13] Chen, Q., Montesinos, P., Sun, Q. S., Heng, P. A., et al. (2010). Adaptive total variation denoising based on difference curvature. Image and Vision Computing, 28(3):298-306.
[14] Chiang, M.-C. and Boult, T. E. (2000). Efficient super-resolution via image warping. Image and Vision Computing, 18(10):761–771.
[15] Dong, C., Loy, C. C., and Tang, X. (2016). Accelerating the super-resolution convolutional neural network. In European Conference on Computer Vision, pages 391-407. Springer.
AN US
[17] El Mourabit, I., El Rhabi, M., Hakim, A., Laghrib, A., and Moreau, E. (2017). A new denoising model for multi-frame super-resolution image reconstruction. Signal Processing, 132:51–65. [18] Elad, M. and Hel-Or, Y. (2001). A fast super-resolution reconstruction algorithm for pure translational motion and common space-invariant blur. IEEE Transactions on Image Processing, 10(8):1187–1193.
M
[19] Farsiu, S., Robinson, M. D., Elad, M., and Milanfar, P. (2004). Fast and robust multiframe super resolution. IEEE Transactions on Image processing, 13(10):1327–1344.
ED
[20] Fransens, R., Strecha, C., and Van Gool, L. (2007). Optical flow based super-resolution: A probabilistic approach. Computer vision and image understanding, 106(1):106–115.
PT
[21] Goldstein, T. and Osher, S. (2009). The split bregman method for l1regularized problems. SIAM journal on imaging sciences, 2(2):323–343.
CE
[22] Greenspan, H. (2009). Super-resolution in medical imaging. The Computer Journal, 52(1):43–63.
AC
[23] Hayat, K. (2018). Multimedia super-resolution via deep learning: a survey. Digital Signal Processing. [24] He, Y., Yap, K.-H., Chen, L., and Chau, L.-P. (2007). A nonlinear least square technique for simultaneous image registration and super-resolution. IEEE Transactions on Image Processing, 16(11):2830–2841.
[25] Irani, M. and Peleg, S. (1991). Improving resolution by image registration. CVGIP: Graphical models and image processing, 53(3):231–239. 38
ACCEPTED MANUSCRIPT
[26] Jiang, J., Chen, C., Huang, K., Cai, Z., and Hu, R. (2016). Noise robust position-patch based face super-resolution via tikhonov regularized neighbor representation. Information Sciences, 367:354–372.
CR IP T
[27] Laghrib, A., Ghazdali, A., Hakim, A., and Raghay, S. (2016). A multiframe super-resolution using diffusion registration and a nonlocal variational image restoration. Computers & Mathematics with Applications, 72(9):2535–2548.
AN US
[28] Laghrib, A., Hakim, A., and Raghay, S. (2015). A combined total variation and bilateral filter approach for image robust super resolution. EURASIP Journal on Image and Video Processing, 2015(1):1–10.
[29] Laghrib, A., Hakim, A., Raghay, S., and El Rhabi, M. (2014). Robust super resolution of images with non-parametric deformations using an elastic registration. Appl. Math. Sci, 8(179):8897–8907.
M
ED
[31] Marai, G. E., Laidlaw, D. H., and Crisco, J. J. (2006). Super-resolution registration using tissue-classified distance fields. IEEE transactions on medical imaging, 25(2):177–187.
PT
[32] Marquina, A. and Osher, S. J. (2008). Image super-resolution by tvregularization and bregman iteration. Journal of Scientific Computing, 37(3):367–382.
CE
[33] Modersitzki, J. (2004). Numerical methods for image registration. Oxford University Press on Demand.
AC
[34] Nasrollahi, K. and Moeslund, T. B. (2014). Super-resolution: a comprehensive survey. Machine vision and applications, 25(6):1423–1468. [35] Nitsche, J. A. (1981). On korn’s second inequality. RAIRO-Analyse num´erique, 15(3):237–248. [36] Omer, O. A. and Tanaka, T. (2011). Region-based weighted-norm with adaptive regularization for resolution enhancement. Digital Signal Processing, 21(4):508–516. 39
ACCEPTED MANUSCRIPT
[37] Papafitsoros, K. and Sch¨onlieb, C.-B. (2014). A combined first and second order variational approach for image reconstruction. Journal of mathematical imaging and vision, 48(2):308–338.
CR IP T
[38] Protter, M., Elad, M., Takeda, H., and Milanfar, P. (2009). Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Transactions on image processing, 18(1):36–51. [39] Robinson, D., Farsiu, S., and Milanfar, P. (2009). Optimal registration of aliased images using variable projection with applications to superresolution. The Computer Journal, 52(1):31–42.
AN US
[40] Schweiger, M., Arridge, S. R., and Nissil¨a, I. (2005). Gauss–newton method for image reconstruction in diffuse optical tomography. Physics in medicine and biology, 50(10):2365.
[41] Shen, H., Zhang, L., Huang, B., and Li, P. (2007). A map approach for joint motion estimation, segmentation, and super resolution. IEEE Transactions on Image processing, 16(2):479–490.
ED
M
[42] Tsai, R. Y. and Huang, T. S. (1984). Multiframe image restoration and registration. In: Advances in Computer Vision and Image Processing, ed. T.S.Huang. Greenwich, CT, JAI Press.
PT
[43] Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612.
CE
[44] Yanovsky, I., Le Guyader, C., Leow, A., Toga, A., Thompson, P., and Vese, L. (2008). Unbiased volumetric registration via nonlinear elastic regularization. In 2nd MICCAI workshop on mathematical foundations of computational anatomy.
AC
[45] Yue, L., Shen, H., Li, J., Yuan, Q., Zhang, H., and Zhang, L. (2016). Image super-resolution: The techniques, applications, and future. Signal Processing, 128:389–408. [46] Zhang, K., Zuo, W., Chen, Y., Meng, D., and Zhang, L. (2017). Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155. 40
ACCEPTED MANUSCRIPT
[47] Zhang, L., Zhang, H., Shen, H., and Li, P. (2010). A super-resolution reconstruction algorithm for surveillance images. Signal Processing, 90(3):848–859.
CR IP T
[48] Zhao, S., Liang, H., and Sarem, M. (2016). A generalized detailpreserving super-resolution method. Signal Processing, 120:156–173.
[49] Zhao, W. and Sawhney, H. S. (2002). Is super-resolution with optical flow feasible? In European Conference on Computer Vision, pages 599– 613. Springer.