Image reconstruction from random samples using multiscale regression framework


Neurocomputing (2016)


Neurocomputing journal homepage: www.elsevier.com/locate/neucom

Brief Papers

Image reconstruction from random samples using multiscale regression framework
Susmi Jacob, Madhu S. Nair*
Department of Computer Science, University of Kerala, Kariavattom, Thiruvananthapuram 695581, Kerala, India

Article info

Abstract

Article history: Received 24 June 2014 Received in revised form 26 August 2015 Accepted 30 October 2015 Communicated by Kaushik Sinha

Preserving edge details is an important issue in most image reconstruction problems. In this paper, we propose a multiscale regression framework for image reconstruction from sparse random samples. The multiscale framework is used to combine the modeling strengths of parametric and non-parametric statistical techniques in a pyramidal fashion. The algorithm is designed to preserve edge structures using an adaptive filter, where the filter coefficients are derived using locally adapted kernels which take into account both the local density of the available samples and the actual values of these samples. As such, they are automatically directed and adapted to both the given sampling geometry and the samples' radiometry. Both the upscaling and the missing pixel recovery processes are made locally adaptive so that the image structures are well preserved. Experimental results demonstrate that the proposed method improves on the state-of-the-art algorithms in terms of both subjective and objective quality. © 2015 Elsevier B.V. All rights reserved.

Keywords: Image modeling Image reconstruction Kernel regression Inverse problem

1. Introduction

The emergence of high-definition displays in recent years, along with the rapid spread of cheaper digital imaging devices, has resulted in the need for fundamentally new image processing algorithms. In this work, we address, in a different way, the problem of information lost to the limitations of the imaging system as well as to degradation processes such as compression [1]. This work concentrates on a data-adaptive multiscale regression framework for the reconstruction and enhancement of randomly sampled images. The projection onto convex sets (POCS) based Papoulis–Gerchberg (PG) algorithm [2,3] and Delaunay triangulation based interpolation [4] are two classic image reconstruction algorithms. Michael and David proposed a sparse representation-based morphological component analysis (MCA) method [5], which separates the image into texture and piecewise smooth (cartoon) parts. It exploits both the variational and the sparsity mechanisms, combining the basis pursuit denoising (BPDN) algorithm and the total-variation (TV) regularization scheme. The maximum likelihood estimation by random sampling and local optimization (MLESAC) method [6] is a robust estimator which adopts maximum likelihood theory with local optimization

* Corresponding author. E-mail addresses: [email protected] (S. Jacob), [email protected] (M.S. Nair).

(LO). Guided-MLESAC [7], introduced by Tordoff and Murray, fully utilizes matching prior probabilities, which makes sampling more efficient and finally achieves Bayesian maximum likelihood estimation, but at a high computational cost. AMLESAC, proposed by Konouchin [8], adopts a modified median estimator to estimate the initial parameter values of the model and enhances the likelihood of the output model. MLESAC generally relies on an iterative optimization algorithm; an accelerated variant of MLESAC embeds LO into the iterative steps and then guides the iteration using the LO result [6]. Classical kernel regression [10] is another well-known non-parametric point estimation procedure. The KR approach has proved useful for handling image reconstruction from very sparse samples [10,13]. The non-parametric KR method for image processing, a variant of the famous Nadaraya–Watson (NW) estimator [9], was introduced in a nonlocal means denoising algorithm [18,19]. By using kernel functions driven by the distances between existing pixels within a large neighborhood, these nonlocal non-parametric image models made the algorithm more robust in recovering very sparse image samples [10]. However, the classic KR-based method does not exploit the inter-scale similarity. Takeda et al. generalized this technique to spatially adaptive steering kernel regression [10], which preserves and restores details with minimal assumptions on local signal and noise models. The hybrid image reconstruction (HIR) algorithm [11] proposed by Guangtao and Zang combined the linear

http://dx.doi.org/10.1016/j.neucom.2015.10.127 0925-2312/© 2015 Elsevier B.V. All rights reserved.

Please cite this article as: S. Jacob, M.S. Nair, Image reconstruction from random samples using multiscale regression framework, Neurocomputing (2016), http://dx.doi.org/10.1016/j.neucom.2015.10.127


autoregressive (AR) parametric model [12] and the kernel regression (KR) non-parametric model [9] systematically to improve the modeling efficiency. The HIR algorithm utilized the context vectors in the vicinity without considering the local orientation along the image structures. By incorporating the steering KR technique into the multiscale framework, both in the restoration of missing pixels and in the upscaling of the lower-scale image to higher resolutions, we can reconstruct different edge structures more efficiently. From the above analysis, it is noted that a unified scheme for reconstructing the edge, smooth and texture regions with affordable computational complexity is desirable. The essence of this reconstruction is to assign the missing pixels according to the rest of the image, i.e., it is a conditional expectation estimation problem. The missing pixels are considered as a vector, and their expectation is given by the surrounding available pixels. Moreover, the multiscale approach [13,14] is utilized to combine the parametric and non-parametric techniques in a single framework. The missing pixels are successively recovered from significant pixel loss (from low to high frequencies), and the restored image at a particular level is in turn used for estimation at the next level. This approach is shown to be extremely effective for sparse random samples. Many large-scale structures can be well recovered from the progressively computed low-level results, which is impossible for traditional single-level reconstruction algorithms. For the signal recovery and upscaling, a data-adaptive filter [9] is used here, which directs the kernel along the edges rather than spreading it across them. The filter coefficients are derived from the dominant orientation calculated from the local covariance matrices in the selected window.
The original image with only the randomly selected samples is successively downsampled to form a multiscale pyramid by replacing each low-resolution pixel with the average of the available high-resolution pixels. The missing pixels in the lowest-resolution image (highest level, L2) can be recovered using a data-adaptive non-parametric KR model. This recovery can be done iteratively to improve the estimation accuracy. The parametric AR model embedded into a data-adaptive KR framework is then used to upsample the recovered image to a higher resolution. The estimates at each level are refined by a non-parametric KR model which uses the upsampled image from the previous level as a prior estimate at the next level. The refined estimate is upscaled again by a parametric data-adaptive KR model, which in turn can be used as a prior for the next level of reconstruction to obtain the final result. The rest of this paper is organized as follows. Section 2 describes the underlying theory behind our work: the concepts of parametric and non-parametric image modeling, multiscale approximation, kernel regression and soft-decision interpolation are discussed. The proposed multiscale regression framework, with an algorithmic description, is presented in Section 3. The implementation details and simulation results are given in Section 4, where the proposed method is also compared with classical and state-of-the-art image reconstruction algorithms in terms of both subjective and objective quality. Finally, the paper is concluded in Section 5.
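The masked downsampling step described above, where each low-resolution pixel is the average of only the available high-resolution pixels in its 2 × 2 block, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code; the function name `downsample_masked` is hypothetical.

```python
import numpy as np

def downsample_masked(img, mask):
    """Halve the resolution by averaging only the AVAILABLE pixels in
    each 2x2 block; a LR pixel stays missing when all four HR pixels
    in its block are missing."""
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    sums = (img * mask)[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).sum(axis=(1, 3))
    counts = mask[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).sum(axis=(1, 3))
    lr_mask = counts > 0
    lr = np.zeros((h2, w2))
    lr[lr_mask] = sums[lr_mask] / counts[lr_mask]
    return lr, lr_mask

# Build a two-level pyramid (L1, L2) from 20% random samples.
rng = np.random.default_rng(0)
L0 = rng.random((8, 8))
m0 = rng.random((8, 8)) < 0.2          # 20% available samples
L1, m1 = downsample_masked(L0, m0)
L2, m2 = downsample_masked(L1, m1)
```

Because each downsampling merges up to four sample positions, the fraction of available pixels grows toward the top of the pyramid, which is what makes recovery at the highest level tractable.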

2. Background theory

2.1. Image models

The estimation of the conditional expectation of Y given an observation of the context X = x_i is as follows:

$$E(Y \mid X = x_i) = \sum_{i=1}^{n} y_i\, P(y_i \mid x_i) \qquad (1)$$

where {y_i}, 1 ≤ i ≤ n, and {x_i}, 1 ≤ i ≤ n, are two sets of image pixel samples. Y is a dependent variable and X ∈ R^m is an independent variable, representing a pixel and a set of pixels (context vector), respectively. In this paper, for image reconstruction, we use the existing pixel samples from the observation.

2.1.1. Parametric image model

The linear autoregressive (AR) model is widely used in signal processing. It is an effective structural constraint for solving various image processing problems such as predictive coding and image interpolation. A parametric image model can be derived by assuming a parametric relationship between y_i and x_i. A linear regression model of y_i on x_i can be written as

$$y_i = \sum_{j=1}^{m} \alpha_j x_{ij} + e_i \qquad (2)$$

where x_{ij} is the j-th component of the context vector x_i, α_j, j = 1, 2, …, m, are the regression coefficients, and e_i is an additive zero-mean Gaussian noise term. This is known as the AR model. Writing (2) in vector form, we have

$$y = Ax + e \qquad (3)$$

where A = [x_1, x_2, …, x_n]^T ∈ R^{n×m} is the design matrix, whose rows are the context vectors, and x = [α_1, α_2, …, α_m]^T ∈ R^m is the parameter vector. The parameter vector can be estimated by the l_2 minimization problem (minimization of the error vector)

$$\hat{x} = \arg\min_{x \in \mathbb{R}^m} \lVert y - Ax \rVert_2^2 \qquad (4)$$

The AR model is solved by the classical least squares (LS) method. Under the normality assumption on the noise term e_i, the least squares estimate is also the maximum likelihood estimate of the model parameters. The LS problem in (4) is equivalent to solving the incompatible linear system

$$Ax = y \qquad (5)$$

where y = [y_1, y_2, …, y_n]^T, which has the closed-form solution

$$\hat{x}_{LS} = (A^T A)^{-1} A^T y \qquad (6)$$

for the over-determined system (n > m). Numerical stability is a major issue with the LS solution of the AR model. The problem is related to the rank condition of the design matrix. For natural images, the probability of numerical rank deficiency is rather high due to the discrete nature and structures of digital images. Without proper care, numerical rank deficiency can adversely affect the parameter estimation of the AR model. To overcome this, a rank-revealing QR factorization is used to select an optimal subset from the design matrix. A truncated solution to the linear system can be calculated by removing the ill-conditioned part of the right orthogonal matrix of the rank-revealing QR decomposition [15].

2.1.2. Non-parametric image model

KR is a widely known non-parametric technique used for point estimation of probability functions, where the estimated distributions are smooth. The conditional expectation in (1) can be expressed as

$$\hat{y} = E(Y \mid X = x_i) \approx \frac{\sum_{j=1}^{n} K_h(\lVert x_i - x_j \rVert)\, y_j}{\sum_{j=1}^{n} K_h(\lVert x_i - x_j \rVert)} \qquad (7)$$

where the kernels K_h(x) = (1/h) K(x/h) can be chosen from functions that are non-negative, sum to 1, and are symmetric around 0. The non-parametric conditional expectation estimator in (7) is known as the Nadaraya–Watson (NW) estimator, which is the weighted average of the observations y_1, y_2, …, y_n with weights in inverse proportion to the distances between x_j, 1 ≤ j ≤ n, and x_i.


The NW estimator can also be derived in another way. Recall from (1) that the conditional expectation of y_i can be written as a weighted sum of the samples y_1, y_2, …, y_n. If we denote by y_p a global fit to all the samples y_1, y_2, …, y_n and weight the residuals with a kernel measuring the distance between the contexts x_i and x_j, 1 ≤ j ≤ n, then the weighted residual sum of squares (WRSS) can be written as

$$WRSS = \sum_{j=1}^{n} (y_j - y_p)^2\, K_h(\lVert x_i - x_j \rVert) \qquad (8)$$

By simply setting the derivative of (8) to zero, we can solve the minimization of the WRSS as

$$y_p = \arg\min_{y_p} (WRSS) = \frac{\sum_{j=1}^{n} K_h(\lVert x_i - x_j \rVert)\, y_j}{\sum_{j=1}^{n} K_h(\lVert x_i - x_j \rVert)} \qquad (9)$$

which is equal to the NW estimator given in (7). Simply put, the NW estimator is nothing but a global best fit to all the available samples y_1, y_2, …, y_n with the weights determined by the corresponding kernel. In the NW estimator (at the lower levels), prior information about the context vectors can also be incorporated. This prior-based NW estimator is used for the lower levels of the proposed multiscale reconstruction algorithm.

2.2. Multiscale approximation

The multiscale approach devised in the HIR algorithm and in image coding algorithms [13,14] is well structured for restoring information across the levels of a pyramid. The missing pixels are successively recovered using non-parametric kernel regression, and the restored pixels at a particular level are in turn used for estimation at the next level. This approach is shown to be extremely effective at high pixel loss rates. Many large-scale structures can be well recovered from the progressively computed low-level results, which is impossible for traditional single-level reconstruction algorithms. This means that the application window at the highest levels of the pyramid has more pixel members than that at the lowest levels.
Moreover, if y_i and its observations x_i contain missing pixels, the number of effective members in the point-wise estimation is further reduced, which causes an unstable estimate of P(y_i | x_i). Consequently, using an upscaled image with more effective pixel members generally leads to higher estimation accuracy and, therefore, better reconstruction performance. The basic idea is first to recover some rough information at the highest level (lowest resolution), and then to keep refining the details in later approximations. Thus, this multiscale approach improves the reconstruction efficiency successively. We also use the estimates at lower levels as guidance for higher levels, while excluding missing pixels when computing the l_2 distance. It is to be noted that the restoration result can also be fed back to the same level as a prior estimate for another iteration. Since the priors used in the multiscale algorithm are formed by coarsely up-sampling the low-resolution image, this new prior is expected to bring further improvement in estimation accuracy.

2.3. Kernel regression

Kernel regression is a useful technique for data fitting. It provides high-level control over data reconstruction while allowing data smoothing up to a limit. The data model in 2-D can be considered an estimation problem in which the measured data y_i at coordinate position x_i = [x_{1i}, x_{2i}]^T is given by

$$y_i = z(x_i) + \varepsilon_i, \quad i = 1, 2, \ldots, p, \quad x_i = [x_{1i}, x_{2i}]^T \qquad (10)$$

where z(·) is the regression function to be estimated, p is the number of measured pixels, and the ε_i's are independent and


identically distributed noise values. If we assume that z(·) is locally smooth to some order N, then to estimate the value of the function at any given point x we can rely on a generic local expansion of the function about this point. Specifically, if x is near the sample at x_i, we have the N-term Taylor series

$$z(x_i) \approx \alpha_0 + \alpha_1^T (x_i - x) + \alpha_2^T\, \mathrm{vech}\{(x_i - x)(x_i - x)^T\} + \cdots \qquad (11)$$

where α_1 and α_2 are the gradient and Hessian operators, respectively, and vech(·) is the half-vectorization operator. The classical regression methods estimate the coefficients {α_n}, n = 0, …, N, from the data while giving the nearby samples higher weights than samples farther away, since this approach is based on local approximations. It is also appropriate to weight samples based on their relative location with respect to a local edge, performing the regression along the edges rather than across them. This is the basis of modern adaptive methods that combine geometric as well as radiometric weighting. A general formulation is to solve the following optimization problem:

$$\min_{\{\alpha_n\}} \sum_{i=1}^{P} \left[ y_i - \alpha_0 - \alpha_1^T (x_i - x) - \alpha_2^T\, \mathrm{vech}\{(x_i - x)(x_i - x)^T\} - \cdots \right]^2 K_H(x_i - x) \qquad (12)$$

where K(·) is the kernel function, which penalizes both geometric and radiometric distances and will be described in detail in Section 3.1. Using the matrix form, the optimization problem (12) can be treated as a weighted LS (WLS) problem:

$$\hat{b} = \arg\min_{b} (y - Xb)^T K (y - Xb) \qquad (13)$$

where

$$y = [y_1, y_2, \ldots, y_p]^T, \quad b = [\alpha_0, \alpha_1^T, \ldots, \alpha_N^T]^T \qquad (14)$$

$$K = \mathrm{diag}\left[ K_H(x_1 - x),\; K_H(x_2 - x),\; \ldots,\; K_H(x_p - x) \right] \qquad (15)$$

$$X = \begin{bmatrix} 1 & (x_1 - x)^T & \mathrm{vech}^T\{(x_1 - x)(x_1 - x)^T\} & \cdots \\ 1 & (x_2 - x)^T & \mathrm{vech}^T\{(x_2 - x)(x_2 - x)^T\} & \cdots \\ \vdots & \vdots & \vdots & \\ 1 & (x_p - x)^T & \mathrm{vech}^T\{(x_p - x)(x_p - x)^T\} & \cdots \end{bmatrix} \qquad (16)$$

with 'diag' defining the diagonal elements of a diagonal matrix. A locally linear adaptive filter is derived, which is known as the Nadaraya–Watson (NW) estimator [16]. Generally, lower-order approximations result in smoother images (large bias and small variance) as there are fewer degrees of freedom. On the other hand, over-fitting occurs in regressions using higher orders of approximation, resulting in small bias and large estimation variance.

2.4. Soft-decision adaptive interpolation (SAI)

Soft-decision adaptive interpolation (SAI) [12] is a single-image super-resolution technique that estimates missing pixels in groups rather than one at a time. The technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution (LR) image. The pixel structure learned by the AR model is then enforced by the soft-decision estimation process onto a block of pixels in the high-resolution (HR) image, including both observed and estimated pixels. SAI uses the duality between the LR and HR covariance for super-resolution [17]. The covariance between neighboring pixels in a local window around the LR source is used to estimate the


covariance between neighboring pixels in the HR target image. The covariance used is between pixels and their four diagonal neighbors and between pixels and their vertical and horizontal neighbors. This covariance determines the optimal way of blending the four diagonals into the center pixel. This optimal value in the LR window is used to blend a new pixel in the SR image. For the purpose of adaptive image interpolation, we model the image as a piecewise autoregressive (PAR) process. Rewriting (2), we have

$$x(i, j) = \sum_{(p,q) \in T} \beta(p, q)\, x(i + p, j + q) + \varepsilon_{i,j} \qquad (17)$$

where T is an application window for the regression. The term ε_{i,j} is a random perturbation independent of spatial location, and it accounts for measurement noise. The relevance of the PAR model depends on a mechanism that adjusts the model parameters β(p, q) to local pixel structures. Specifically, semantically meaningful image constructs such as edges and surface textures are formed by spatially coherent contiguous pixels. This means that the parameters remain constant, or nearly constant, in a small locality. This piecewise stationarity makes it possible to learn pixel structures such as edges, textures and flat regions by fitting the samples of a small window to the PAR model. The SAI algorithm [12] interpolates the missing pixels in two passes. To interpolate the missing pixels in the first pass, the model parameters characterizing the diagonal correlations of the image signal in a local window are calculated using the least squares (LS) approach. The vertical and horizontal correlations are modeled using another set of parameters. Using these two sets of model parameters, the missing pixels in a hexagonal window can be well estimated to form a quincunx lattice in the first and second passes. In this paper, we use a data-adaptive SAI method to upscale the image from a lower resolution to a higher one, which may in turn be used as a prior estimate for missing pixel recovery at the next

level. The data-adaptive SAI used for upsampling is described in Section 3.2.
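The PAR parameter estimation of Eq. (17) reduces to an ordinary least-squares fit inside a training window. The sketch below assumes a four-diagonal-neighbor window T, as in the first SAI pass; function names are illustrative, not from the paper.

```python
import numpy as np

def fit_par_params(win):
    """Least-squares fit of the four diagonal-neighbor PAR coefficients
    beta over all interior pixels of a training window (Eq. (17))."""
    offsets = [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # diagonal neighbors
    rows, rhs = [], []
    h, w = win.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # one regression equation per interior pixel
            rows.append([win[i + p, j + q] for p, q in offsets])
            rhs.append(win[i, j])
    beta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return beta

def par_predict(neigh, beta):
    """Predict a missing pixel from its four diagonal neighbors."""
    return float(neigh @ beta)
```

On a locally stationary patch, the same beta fitted in the LR window can be reused to predict HR pixels from their diagonal neighbors, which is the LR/HR covariance duality that SAI exploits.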

3. Proposed method

The HIR algorithm was strongly built on a multiscale approach that could efficiently reconstruct the missing pixels at different levels. The missing pixel reconstruction was based on kernel functions driven by the distances between existing pixels within a large neighborhood. Even though HIR could reconstruct different image structures, it failed to recover highly directional features such as edges and texture areas. The proposed method is based on data-adaptive KR based image pixel recovery as well as data-adaptive soft-decision KR image interpolation, which is capable of reconstructing the pixel structures more accurately (Fig. 1). Rather than spreading the kernel weights across the local window, the proposed method spreads the weights along the pixel structures according to the local orientation information, as shown in Fig. 2. Classical kernel regression uses a uniform kernel (as shown in Fig. 2(a)) in all parts of the image, whether smooth, textured or containing sharp edges. The circles in the figure show that the kernel is of uniform size throughout the image regions. In our work, by contrast, we have incorporated steering kernel regression in both the recovery and the upscaling stages, to better estimate different parts of the image according to their features. In smooth areas the kernel weights are distributed uniformly, in texture areas they are confined toward the pixel of interest, and in the case of edges the kernel weights are distributed along the edges. This is shown in Fig. 2(b) using a large circle, a small circle and an elongated ellipse for smooth, texture and edge regions, respectively. Fig. 2(c) and (d) shows the kernel spread in KR and SKR, respectively. The data-adaptive KR is locally adaptive, deriving the filter weights from the dominant orientation calculated from the local covariance matrices in a local window.

Fig. 1. Sketch showing the steps of multiscale regression framework for image reconstruction from 20% random samples.

Please cite this article as: S. Jacob, M.S. Nair, Image reconstruction from random samples using multiscale regression framework, Neurocomputing (2016), http://dx.doi.org/10.1016/j.neucom.2015.10.127i

S. Jacob, M.S. Nair / Neurocomputing ∎ (∎∎∎∎) ∎∎∎–∎∎∎

5

Fig. 2. Kernel spread in normal KR and steering KR on the Lena image and in a local window. (a) Uniformly spread across the local window. (b) Adaptive kernels elongate with respect to the edge. (c) Kernels in the classic method depend only on the sample density. (d) Data-adapted kernels elongate with respect to the edge.

The multiscale approach combines the inter-pixel dependencies at various scales as well as the inter-scale dependencies, which cannot be considered in single-scale methods. Classic single-scale algorithms are not capable of recovering edges across heavy pixel losses, as they retain only very few pixels with edge information. The multiscale approach with steering kernel regression incorporated in it, however, can preserve those edges well, thanks to both the inter-scale and intra-scale dependencies and the estimation accuracy of the data-adapted kernel across the different regions (edge, texture or smooth) of the image. Hence, the proposed method combines the parametric and non-parametric modeling techniques using the multiscale approach devised by Guangtao et al. to achieve much better estimation accuracy. The parametric approach enables the model to learn and adapt the structures from lower to higher resolutions through the explicit linear model parameters. It is noted that the LS solution of the linear model is determined by the second-order statistics of the design matrices. The structural information of the lower-resolution images is encoded in the linear model coefficients, and those coefficients are used to propagate the structures from lower to higher resolutions. The proposed method is also shown to be quite effective because the data-adapted kernels incorporated in the SAI interpolation reduce the outliers, and the data-adaptive missing pixel recovery effectively improves the estimation accuracy.

3.1. Data-adaptive kernel regression for pixel recovery

The steering kernel approach is purely based on the idea of robustly obtaining local signal structures by analysing the local radiometric (pixel value) distances, and feeding this structure information to the kernel function in order to control its shape and size. The choice of kernel function greatly affects the quality of the reconstruction.
In this section, a review of the non-adaptive kernel function (classic kernel regression) and of two adaptive kernel functions generalized from it is presented.

3.1.1. Classic kernel regression

In classic kernel regression, samples are weighted based only on their spatial distances to the pixel of interest, which simplifies the kernel K(·) in (12) to

$$K(x_i - x, y_i - y) \equiv K_{H_i}(x_i - x) \qquad (18)$$

where K_{H_i}(·) is defined as

$$K_{H_i}(t) = \frac{1}{\det(H_i)} K(H_i^{-1} t) \qquad (19)$$

which penalizes distance away from the local position where the approximation is centered. The smoothing matrix is defined as H_i = h μ_i I_2, where μ_i is a scalar that specifies the local density of the data samples and h is the global smoothing parameter, which extends the kernel to contain enough samples. It is reasonable to use smaller kernels in areas with more available samples, whereas larger kernels are more suitable for the more sparsely sampled areas of the image. Since the shape of the classic kernels is independent of the radiometric distances, classic kernel based regression methods suffer from an inherent limitation due to their local linear action on the data. The adaptive kernel functions depend not only on the sample location and density, but also on the radiometric distances of these samples. Therefore, the effective size and shape of the regression kernel are adapted locally to image features such as edges and textures. Fig. 2 illustrates the difference between classical and adaptive kernel shapes in the presence of edges, textures and flat areas.

3.1.2. Bilateral kernel regression

An intuitive choice of the adaptive kernel K(·) is to use separate terms for penalizing the geometric and radiometric distances. This is the logic behind the bilateral filter. The bilateral kernel choice is then

$$K(x_i - x, y_i - y) \equiv K_{H_i}(x_i - x)\, K_{h_r}(y_i - y) \qquad (20)$$

where h_r is the radiometric smoothing parameter that controls the bias, and K_{H_i}(·) and K_{h_r}(·) are the spatial and radiometric kernel functions, respectively. Splitting K(·) into spatial and radiometric terms, as in the bilateral case, weakens the estimation accuracy because of the limited degrees of freedom and the absence of correlations between the positions of the pixels and their values. This limitation can be overcome by using an initial estimate of y obtained by an appropriate classical KR interpolation.

3.1.3. Steering kernel regression

The filtering procedure can be generalized one step further. The effect of computing K_{h_r}(y_i - y) in (20) is to implicitly measure a function of the local gradient estimated between neighboring values, and this estimate can be used to weight the respective measurements.
For example, if a pixel is located near an edge, then pixels on the same side of the edge will have a stronger influence in the filtering. This approach can be summarized in two steps. First, an initial estimate of the image gradients is made using some gradient estimator, say classical KR. Next, this estimate is used to measure the dominant orientation of the local gradients in the image. In the second filtering stage, this orientation information is used to adaptively "steer" the local kernel, resulting in elongated, elliptical contours spread along the directions of the local edge structure. With these locally adapted kernels, the reconstruction acts most strongly along the edges, rather than across them, resulting in strong preservation of details in the final output. The steering weights spread wider in flat areas, spread along edges, and are kept small in texture regions. Specifically, the steering


kernel takes the form

$$K(x_i - x, y_i - y) \equiv K_{H_i^s}(x_i - x) \qquad (21)$$

where the H_i^s are data-dependent smoothing matrices, called "steering" matrices. They are defined by modifying the H_i of bilateral KR as

$$H_i^s = h \mu_i C_i^{-1/2} \qquad (22)$$

where the C_i are covariance matrices depending on the local intensity values. A good choice of C_i will effectively spread the kernel function along the local edges, as shown in Fig. 2. If we choose a Gaussian kernel, the steering kernel is mathematically represented as

$$K_{H_i^s}(x_i - x) = \frac{\sqrt{\det(C_i)}}{2\pi h^2} \exp\left( -\frac{(x_i - x)^T C_i (x_i - x)}{2 h^2} \right) \qquad (23)$$

The local edge structure is related to the local dominant orientation, or gradient covariance. A better way to represent the covariance is to decompose it into three components as follows:

$$C_i = \gamma_i U_{\theta_i} \Lambda_i U_{\theta_i}^T \qquad (24)$$

$$U_{\theta_i} = \begin{bmatrix} \cos\theta_i & \sin\theta_i \\ -\sin\theta_i & \cos\theta_i \end{bmatrix}, \quad \Lambda_i = \begin{bmatrix} \rho_i & 0 \\ 0 & \rho_i^{-1} \end{bmatrix} \qquad (25)$$

where U_{θ_i} is the rotation matrix and Λ_i is the elongation matrix. The covariance matrix is given by the three parameters γ_i, θ_i and ρ_i, which are the scaling, rotation and elongation parameters, respectively, and whose effects are as follows. First, the initial kernel is elongated by the elongation matrix Λ_i, with ρ_i and ρ_i^{-1} on the semi-minor and semi-major axes, respectively. Second, the elongated kernel is rotated by the matrix U_{θ_i}. Finally, the kernel is scaled by the scaling parameter γ_i.

3.2. Soft-decision KR interpolation

The soft-decision interpolation technique [12] recovers the missing pixels in a block-by-block manner and is shown to be effective in improving the visual quality. The soft-decision KR (SKR) interpolation [16] proposed by Guangtao et al. uses the framework of SAI to accommodate the kernel regression model. SKR is composed of two stages: (1) the first pass interpolates the diagonal unknowns; (2) the second pass fills in the remaining unknowns (see [16] for more details). In each stage, the model is first trained using LR patches with the same geometric structure, and the unknown pixels are then predicted under the model constraints. The estimation of the model parameters assumes that the spatial correlation between the HR pixels is approximately the same as that between the LR pixels. Both the training process and the prediction process involve two kinds of neighborhood relationships: the center HR point is related not only to its 8-connected LR neighbors but also to its 4-connected HR neighbors.
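The construction of the steering covariance in Eqs. (24)–(25) and the resulting anisotropic weighting of Eq. (23) can be made concrete as follows. The parameter values here are illustrative, not those estimated from an actual image, and the function names are hypothetical.

```python
import numpy as np

def steering_covariance(theta, rho, gamma):
    """Assemble C_i = gamma * U_theta * Lambda * U_theta^T (Eqs. (24)-(25))."""
    U = np.array([[np.cos(theta),  np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])   # rotation matrix
    Lam = np.diag([rho, 1.0 / rho])                   # elongation matrix
    return gamma * U @ Lam @ U.T

def steering_weight(dx, C, h=1.0):
    """Gaussian steering kernel of Eq. (23) evaluated at offset dx = x_i - x."""
    return (np.sqrt(np.linalg.det(C)) / (2 * np.pi * h ** 2)
            * np.exp(-(dx @ C @ dx) / (2 * h ** 2)))

# With theta = 0 and rho = 4, C = diag(4, 1/4): the quadratic form
# penalizes horizontal offsets more, so the weights (and hence the
# kernel footprint) elongate along the vertical direction.
C = steering_covariance(theta=0.0, rho=4.0, gamma=1.0)
w_along = steering_weight(np.array([0.0, 1.0]), C)   # small penalty
w_across = steering_weight(np.array([1.0, 0.0]), C)  # large penalty
```

Note that weights spread along the small-eigenvalue direction of C_i, which in steering KR corresponds to the edge direction, while they decay quickly across it.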

4. Implementation and results

4.1. Implementation details and algorithm description

An example of image reconstruction on a 512 × 512 Lena image with 80% pixel loss is given in Fig. 1. Algorithm 1 gives a step-by-step description of the proposed method. The Lena image with 20% randomly selected samples (80% missing pixels) is taken as input. The randomly sampled input image is directly downsampled twice to form a multiscale pyramid, where the downsampled images are named L1 and L2 (see Fig. 1). Downsampling is done by averaging only the available pixels in a 2 × 2 window of the HR image and placing the result in the LR image. It is found that more image details are recovered gradually as the pyramid goes into higher levels (lower resolutions). The different resolution images are named level 0, level 1 and level 2 in order of decreasing resolution. The missing pixels are then recovered using data-adaptive KR as described in Section 3.1.3. Steps 3–12 of Algorithm 1 describe the missing pixel recovery for level 2, and steps 14–18 the missing pixel recovery for levels 0 and 1. Using non-parametric modeling, only the missing pixel locations of L2 are filled. This estimation can also be done iteratively by considering all the radiometric values in the application window once the missing pixel locations are filled (by feeding L̂2 back to the non-parametric KR estimator). Only the missing pixel locations are iteratively refined. The iterative mode of the algorithm further improves the estimation accuracy. We find through experiments that the algorithm provides better pixel recovery results in two iterations. In the first iteration, only the available pixel values in a neighborhood are used for recovering the pixel of interest. In the second iteration, the missing pixels recovered in the previous iteration as well as the available pixels in the kernel are used for re-estimation. The non-parametric steering kernel regression used kernel sizes of 17 × 17, 21 × 21, 25 × 25 and 33 × 33 for 80%, 85%, 90% and 95% pixel losses, respectively. The smoothing parameter used for the Gaussian kernel is between 2.3 and 2.9, and the standard deviation is 4.

Algorithm 1: Image reconstruction using multiscale regression framework

1: Downsample the randomly sampled image twice to form a pyramid (L1 and L2, respectively, in Fig. 1)
2: For each level (level 2 to 0) do
   (Missing pixel reconstruction)
3:   If Level = 2
4:     For Iteration = 1:2 do
5:       For all missing pixels y do
6:         If Iteration = 1
7:           Recover the missing pixels using the steering KR function (use Eq. (23))
8:         Else
9:           Update only the missing pixel locations, using both the available pixels and the pixels filled in the previous iteration
10:        End If
11:      End For
12:    End For
13:  Else
14:    For Iteration = 1:2 do
15:      For all missing pixels y do
16:        Recover the missing pixels in L_Level using the steering KR function, incorporating the upsampled image from the previous level as the prior estimate for the context in L_Level (use Eq. (23))
17:      End For
18:    End For
19:  End If
20:  If Level = 0
21:    Return L̂0
22:  End If
   (Upsampling the lower-level image using soft-decision KR)
23:  Apply soft-decision KR interpolation (use Eq. (6) with adaptive KR; see [16] for more details) to upsample the recovered image L̂ at the current level to the next higher resolution
(23)) 8: Else 9: Update only the missing pixel locations using both available and filled pixels in the previous Iteration 10: End If 11: End For 12: End For 13: Else 14: For Iteration¼ 1:2 do 15: For all missing pixels y do 16: Recover the missing pixels in Llevel using Steering KR function by incorporating L0 level as the prior estimate for the context in Llevel (Use Eq. (23)) 17: End For 18: End For 19: End If 20: If Level¼0 21: Return L00 22: End If Upsampling the lower level image using Soft-Decision KR 23: Apply Soft-Decision KR Interpolation (Use Eq. (6) with Adaptive KR. See [6] for more details) for upsampling the missing pixel recovered image from Level (i.e., upsample L^ to L^ ) Level

Level þ 1

24: End For Now, the reconstructed image in the highest scale of the pyramid (L^ 2 ) is upscaled to the next lower resolution (L^ 1 ) using a

Please cite this article as: S. Jacob, M.S. Nair, Image reconstruction from random samples using multiscale regression framework, Neurocomputing (2016), http://dx.doi.org/10.1016/j.neucom.2015.10.127i

S. Jacob, M.S. Nair / Neurocomputing ∎ (∎∎∎∎) ∎∎∎–∎∎∎

parametric model (step 23 of Algorithm 1 summarizes this interpolation). For this parametric model, an eighth-order linear AR model is adopted, and the pixels in a 5×5 neighborhood are involved in the linear systems. More specifically, the soft-decision KR interpolation described in Section 3.2 is used for image interpolation. First, two sets of model parameters are trained using LR patches, one exploiting diagonal correlations and the other vertical and horizontal correlations. Then, the HR image pixels are estimated by enforcing the learnt model on the HR patch. The lowest level image, L̂2, is interpolated in this way to obtain L̂1. The upsampled image L̂1 is then used as a prior estimate, which is incorporated with the downsampled image L1 at the same level; specifically, L̂1 is used as a prior for calculating the adaptive KR weights in the non-parametric estimation. Here also, iterative refinement can be used to improve the effectiveness of reconstruction, and two iterations per level are found to be effective in this work. The iteratively refined image L′1 is then upscaled again to L̂0 with soft-decision KR interpolation. The upscaled estimate L̂0 is combined with L0 in a data-adaptive KR based reconstruction towards the final output L′0. The kernel size and the window size for data estimation and local gradient estimation in data-adaptive KR may vary depending on the pixel loss rate and the amount of detail in the image: for high-detail images, smaller kernel and window sizes are sufficient, but for low-detail images larger window and kernel sizes are needed. The smoothing parameter h also varies depending on the density of pixel loss.

4.2. Experimental results and discussion

Extensive experiments were conducted to evaluate the multiscale regression technique for image reconstruction in comparison with its predecessors. For thoroughness and fairness of the comparison study, ten standard images were used.
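As an aside to the implementation details above, the masked 2×2 averaging that builds the pyramid (step 1 of Algorithm 1) can be sketched in a few lines of NumPy; the function and variable names are illustrative, not the authors' implementation:

```python
import numpy as np

def masked_downsample(img, mask):
    """Halve resolution by averaging only the AVAILABLE pixels in each
    non-overlapping 2x2 window.  `mask` is True where a sample exists
    (missing entries of `img` may hold any finite value; they are zeroed
    out).  An LR pixel remains missing only when all four HR pixels in
    its window are missing."""
    H, W = img.shape
    H2, W2 = H // 2, W // 2
    vals = (img * mask)[:2*H2, :2*W2].reshape(H2, 2, W2, 2)
    cnts = mask[:2*H2, :2*W2].reshape(H2, 2, W2, 2)
    s = vals.sum(axis=(1, 3))          # sum of available samples per window
    n = cnts.sum(axis=(1, 3))          # number of available samples per window
    lr_mask = n > 0
    lr = np.zeros((H2, W2))
    lr[lr_mask] = s[lr_mask] / n[lr_mask]
    return lr, lr_mask
```

Applying this twice to the randomly sampled input yields L1 and L2, each markedly denser than the level above it, which is why coarse levels can be recovered more reliably.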
The proposed method has been compared against a number of classic algorithms, such as Delaunay interpolation [4], as well as state-of-the-art methods: classical kernel regression [9], steering kernel regression [10], sparse representation based MCA [5] and the HIR algorithm [11]. Table 1 tabulates the PSNR comparison of the six different methods applied on different test images with


80%, 85%, 90% and 95% pixel loss rates. As shown in Table 1, the proposed multiscale adaptive regression technique outperforms all the other methods at all pixel loss rates. Visual results of the proposed method at 80%, 85%, 90% and 95% pixel loss rates for the Peppers image are given in Fig. 3, which shows that both smooth areas and sharp edges of the Peppers image can be recovered effectively by the proposed data-adaptive method at various pixel loss rates. Even at a high loss rate, say 95%, where many missing pixels are clustered together, our data-adaptive regression technique is still capable of restoring the major edges and the repeated textures of the image. A visual comparison of the proposed strategy with HIR [11] on enlarged portions of Lena, Peppers and Zelda is shown in Fig. 4. Looking at the marked (red arrows) portions of those three images, it is clearly evident that most of the over-blurring and checkerboarding produced by the existing multiscale reconstruction [11] is overcome by the new method, which combines inter-pixel and inter-scale dependencies using data-adaptive kernel regression. For a thorough comparison, zoomed-in portions of the Lena and Butterfly images are shown in Figs. 5 and 6 with 85% and 80% pixel loss, respectively. As is evident from Fig. 5, the proposed reconstruction method eliminates most of the visual defects associated with the other methods. Edges are sharper due to the data-adaptive KR (non-parametric modeling) used for pixel recovery, and stable due to the soft-decision KR interpolation (parametric modeling). Sharp edges with severe pixel losses have been reconstructed clearly (see the marked portions of the Lena and Butterfly images). In Fig. 5, the edges of Lena's hat and the repeated texture areas of her hair have been well preserved by the proposed method because of the data-adaptive regression incorporated in both the missing pixel recovery and the interpolation. From Fig. 6, it is seen that the proposed method reconstructs sharp edges much better than the existing methods (see the forewing and wing veins in the Butterfly image). Fig. 7 compares the results of five image reconstruction methods on the test images Lena, Peppers, Boats and Zelda. The evaluated methods exhibit different visual characteristics near edges and fine textures. It can be readily seen that the multiscale approach achieves better image

Table 1
PSNR (dB) comparison of the reconstructed images by the proposed method and other image reconstruction methods.

80% pixel loss:
Image      Proposed  HIR    MCA    SKR    KR     Delaunay
Airplane   29.75     28.53  27.45  27.47  26.51  28.16
Barb       24.74     24.45  26.72  23.23  23.10  24.01
Boat       29.41     28.37  26.84  27.85  26.62  27.55
Baboon     24.69     22.37  21.07  20.92  20.71  22.01
Butterfly  28.94     28.39  26.57  27.18  26.93  28.11
Goldhill   29.97     29.11  27.63  28.50  27.84  28.69
Lena       32.14     31.31  28.26  30.91  29.08  29.83
Man        29.20     28.43  26.16  27.87  26.94  27.86
Peppers    31.91     30.55  28.41  29.56  28.25  29.74
Zelda      34.87     34.41  31.70  34.03  33.23  33.81

85% pixel loss:
Image      Proposed  HIR    MCA    SKR    KR     Delaunay
Airplane   28.62     27.40  26.20  26.55  25.93  26.90
Barb       24.28     24.01  25.48  22.92  22.76  23.45
Boat       28.58     27.53  25.93  26.97  25.99  26.40
Baboon     23.63     21.73  20.32  20.46  20.32  21.45
Butterfly  27.77     27.32  26.21  26.67  25.89  26.95
Goldhill   29.07     28.26  26.58  27.82  27.27  27.55
Lena       30.72     30.47  27.18  30.08  28.51  28.71
Man        28.74     27.69  25.16  27.21  26.40  26.87
Peppers    30.78     29.33  27.41  28.61  27.67  28.79
Zelda      33.90     33.64  30.65  33.23  32.60  32.54

90% pixel loss:
Image      Proposed  HIR    MCA    SKR    KR     Delaunay
Airplane   27.12     26.02  24.74  25.36  24.99  25.54
Barb       23.74     23.35  23.95  22.06  22.00  22.96
Boat       27.05     26.31  24.51  25.76  25.05  25.31
Baboon     22.38     20.98  19.84  19.67  19.64  20.78
Butterfly  26.06     25.81  23.65  24.69  24.24  24.43
Goldhill   28.04     27.27  25.47  26.71  26.34  26.57
Lena       29.34     28.99  25.61  28.44  27.27  27.30
Man        27.29     26.48  24.14  25.99  25.44  25.84
Peppers    29.51     27.95  25.91  27.50  26.77  27.52
Zelda      32.59     32.27  29.28  31.88  31.56  31.29

95% pixel loss:
Image      Proposed  HIR    MCA    SKR    KR     Delaunay
Airplane   24.15     23.44  22.28  21.73  21.96  23.17
Barb       22.84     22.42  21.67  19.40  20.17  21.93
Boat       24.51     24.28  22.33  22.42  22.36  23.38
Baboon     21.02     19.73  19.09  16.88  17.47  19.86
Butterfly  23.53     23.32  22.02  22.43  22.23  22.98
Goldhill   25.98     25.38  23.43  23.52  23.79  24.67
Lena       26.97     26.91  23.89  24.78  24.69  25.38
Man        24.72     24.57  22.46  22.77  23.02  24.03
Peppers    26.36     25.51  22.97  24.14  24.11  25.16
Zelda      30.25     29.88  26.77  27.99  28.70  29.27


Fig. 3. Reconstructed results of the Peppers image: (a), (b), (c) and (d) show the images with 80%, 85%, 90% and 95% pixel loss, with the corresponding reconstructed images.

Fig. 4. Enlarged reconstructed images (85% pixel loss) comparing the HIR algorithm and the proposed data-adaptive method. (a), (c) and (e) show the proposed method; (b), (d) and (f) show HIR.

quality through combining steering kernel regression in both the upscaling and the missing pixel recovery processes of the multiscale approach. As summarized in Section 2.2, the multiscale approach combines the inter-pixel dependencies at various scales (levels) as well as the inter-scale dependencies, which cannot be considered in single-scale methods. Classic single-scale algorithms are not capable of recovering edges across heavy pixel losses, as they retain only very few pixels with edge information. The multiscale approach with steering kernel regression, however, can preserve those edges well, owing to both the inter-scale and intra-scale dependencies and the estimation accuracy of the data-adapted kernel across the different regions (edge, texture or smooth) of the image. Although the HIR algorithm [11] reconstructs image structures well at the highest pixel loss rates, it tends to produce some ringing artifacts across smooth areas. At higher pixel loss rates the MCA method [5] fails to recover edges well, and speckle noise appears along different directions. The KR method [9] causes some spurious high-frequency artifacts along sharp edges. The Delaunay interpolator [4] produces irregular outliers along various image structures such as edges and


Fig. 5. Examples of the proposed multiscale reconstruction on the Lena image with 85% missing pixels: (a) Lena original (part), (b) lossy image, (c) Proposed, (d) HIR [11], (e) KR [9], (f) SKR [10], (g) Delaunay [4], (h) MCA [5].

Fig. 6. Examples of the proposed data-adaptive multiscale reconstruction on the Butterfly image with 80% pixel loss: (a) Butterfly original (part), (b) lossy image, (c) Proposed, (d) HIR [11], (e) KR [9], (f) SKR [10], (g) Delaunay [4], (h) MCA [5].

textures. It is observed that, using large kernels and an optimal smoothing parameter with data-adaptive regression, the proposed approach yields better reconstruction. The proposed method is almost consistently better than the test algorithms under all the testing conditions in Table 1; the proposed data-adaptive multiscale algorithm outperforms all competitors with average PSNR gains of about 0.86 dB, 0.80 dB, 0.78 dB and 0.34 dB for 80%, 85%, 90% and 95% pixel loss, respectively, over the best methods of the test algorithm set. The SAI interpolation is very competitive because it keeps geometric regularity well. However, in high-frequency areas where the geometric duality fails, it generates speckle noise and ringing artifacts. This limitation is overcome by soft-decision KR interpolation [16], which is adaptive to the data in a local window. Moreover, parametric and non-parametric modeling techniques, properly combined with data-adaptive KR, yield better reconstruction results, as demonstrated by the superior subjective and objective quality of the proposed data-adaptive reconstruction method over the tested classic algorithms and hybrid methods. The proposed method suppresses most of the visual artifacts associated with the other methods and reproduces visually better reconstructed images.
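For reference, the PSNR figures reported in Tables 1 and 2 follow the standard definition for 8-bit images; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a reconstruction: 10*log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A gain of about 0.8 dB, as reported above, corresponds to a roughly 17% reduction in mean squared error.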


Fig. 7. Enlarged results of reconstruction methods on Boats, Lena, Peppers and Zelda images with 85% missing pixels. (a) Original images, (b) images with 85% pixel loss, (c) Proposed method, (d) HIR [11], (e) KR [9], (f) SKR [10] and (g) Delaunay Interpolator [4].

Table 2 shows the reconstruction results using different values of the smoothing parameter h in steering kernel regression. Extensive experiments were conducted to determine the optimum range of h. PSNR values are shown for five different pixel loss rates: 75%, 80%, 85%, 90%, and 95%. Almost all the images in the test and training sets reach their peak PSNR in the interval 2.3–2.6. But for a few images, the PSNR


Table 2
PSNR convergence for a range of smoothing parameters at 75%, 80%, 85%, 90% and 95% pixel loss on the image Elaine.

h     75%      80%      85%      90%      95%
1.2   30.5481  29.6421  24.2388  16.1231   9.7830
1.3   31.0076  30.1020  26.6646  19.5320  11.7232
1.4   31.3500  30.4310  28.2251  22.5640  13.3488
1.5   31.6040  30.7275  29.2041  24.2317  16.7320
1.6   31.8007  30.9880  29.8167  26.1102  18.3205
1.7   31.9470  31.1797  30.2039  27.4584  19.0231
1.8   32.0566  31.3302  30.4467  28.3423  21.3035
1.9   32.1367  31.4428  30.6296  28.8815  22.8994
2.0   32.1917  31.5297  30.8793  29.4334  23.8994
2.1   32.2285  31.5913  30.9563  29.6229  25.2339
2.2   32.2502  31.6619  31.0694  29.9790  26.6991
2.3   32.2585  31.6784  31.0816  30.0452  27.1210
2.4   32.2587  31.6872  31.0856  30.1116  27.4338
2.5   32.2511  31.6869  31.0832  30.1456  27.6518
2.6   32.2379  31.6812  31.0728  30.1307  27.8261
2.7   32.2197  31.6691  31.0587  30.1197  27.9659
2.8   32.1995  31.6529  31.0392  30.0729  28.1702
2.9   32.1735  31.6331  31.0135  30.0441  28.2519
3.0   32.1471  31.6099  30.9856  30.0145  28.2909
3.1   32.1162  31.5829  30.9536  29.9761  28.2831
3.2   32.0852  31.5530  30.8760  29.9365  28.2662
3.3   32.0525  31.5220  30.8010  29.8903  28.2389
3.4   32.0186  31.4340  30.7860  29.8043  28.1920

converges when the parameter lies in the range 2.5–3.0 at severe pixel loss rates (such as 95% pixel loss). Table 2 shows the experimental results for the image Elaine. For the 75%, 80%, 85% and 90% cases, the PSNR converges in the range 2.3–2.6; but at very high pixel loss rates (95%), the PSNR converges in the interval 2.5–3.2, because less than 10% of the pixels are available for reconstruction. Values of h below 1.5 can hardly recover the image in such cases.
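To make the role of h concrete, the following is a minimal zeroth-order (Nadaraya–Watson) kernel regression estimate of a missing pixel from its available neighbors, in the spirit of the classical KR baseline [9]; the steering variant of Section 3.1 would replace the isotropic Gaussian with the data-adapted covariance. Function and variable names are illustrative:

```python
import numpy as np

def nw_estimate(patch, mask, h):
    """Estimate the centre pixel of `patch` as the Gaussian-weighted
    average of its available neighbours (`mask` is True where a sample
    exists).  Small h shrinks the effective support; large h averages
    more broadly -- the trade-off swept in Table 2.  Assumes at least
    one available neighbour in the patch."""
    k = patch.shape[0] // 2
    ys, xs = np.mgrid[-k:k + 1, -k:k + 1]
    w = np.exp(-(xs**2 + ys**2) / (2.0 * h**2)) * mask
    w[k, k] = 0.0                      # the centre itself is the unknown
    return (w * patch).sum() / w.sum()
```

At 95% loss the nearest available samples are far from the centre, which is why only a large h (broad support) gives the kernel anything to average — consistent with the convergence range shifting upward in Table 2.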

5. Conclusion

In this paper, we proposed a multiscale regression framework for image reconstruction from random samples with data-adaptive kernel regression. A multiscale framework combining the strengths of parametric and non-parametric modeling was used to improve the modeling efficiency. A data-adaptive KR technique is incorporated into both the missing pixel reconstruction stage and the soft-decision adaptive interpolation stage to improve the estimation accuracy. The proposed image reconstruction technique outperforms the existing methods in both PSNR and subjective visual quality over a wide range of scenes, preserving the finer details of the reconstructed image.

References

[1] P. Sen, S. Darabi, A novel framework for imaging using compressed sensing, in: Proceedings of the IEEE International Conference on Image Processing (ICIP), 2009, pp. 2133–2136.
[2] A. Papoulis, A new algorithm in spectral analysis and bandlimited signal extrapolation, IEEE Trans. Circuits Syst. 22 (9) (1975) 735–742.
[3] R.W. Gerchberg, Super-resolution through error energy reduction, Opt. Acta 21 (9) (1974) 709–720.
[4] A. Okabe, B. Boots, K. Sugihara, Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, Wiley, New York, 1992.
[5] J.L. Starck, M. Elad, D. Donoho, Image decomposition via the combination of sparse representations and a variational approach, IEEE Trans. Image Process. 14 (10) (2005) 1570–1582.
[6] W. Tian, H. Wang, Q. Cai, Maximum likelihood estimation by random sample and local optimization, SPIE MIPPR, Pattern Recognition and Computer Vision, vol. 7496, 2009.
[7] B.J. Tordoff, D.W. Murray, Guided-MLESAC: faster image transform estimation by using matching priors, IEEE Trans. Pattern Anal. Mach. Intell. 27 (10) (2005) 1523–1535.
[8] A. Konouchine, V. Gaganov, V. Veznevets, AMLESAC: a new maximum likelihood robust estimator, in: Conference Proceedings Graphicon, 2005.
[9] E.A. Nadaraya, On estimating regression, Theory Probab. Appl. 9 (1) (1964) 141–142.
[10] H. Takeda, S. Farsiu, P. Milanfar, Kernel regression for image processing and reconstruction, IEEE Trans. Image Process. 16 (2) (2007) 349–366.
[11] G. Zhai, X. Yang, Image reconstruction from random samples with multiscale hybrid parametric and non-parametric modeling, IEEE Trans. Circuits Syst. Video Technol. 22 (11) (2012) 1554–1563.
[12] X. Zhang, X. Wu, Image interpolation by adaptive 2D autoregressive modeling and soft-decision estimation, IEEE Trans. Image Process. 17 (6) (2008) 887–896.
[13] G. Zhai, Z. Yang, W. Lin, W. Zhang, Bayesian error concealment with DCT pyramid for images, IEEE Trans. Circuits Syst. Video Technol. 20 (9) (2010) 1224–1232.
[14] X. Wu, X. Zhang, X. Wang, Low bit-rate image compression via adaptive down-sampling and constrained least squares upconversion, IEEE Trans. Image Process. 18 (3) (2009) 552–561.
[15] T.F. Chan, Rank revealing QR factorizations, Linear Algebra Appl. 88–89 (1987) 67–82.
[16] J. Liu, X. Yang, G. Zhai, L. Chen, Hybrid image interpolation with soft-decision kernel regression, 2013 (unpublished).
[17] X. Li, M.T. Orchard, New edge-directed interpolation, IEEE Trans. Image Process. 10 (10) (2001) 1521–1527.
[18] A. Buades, B. Coll, J. Morel, Nonlocal image and movie denoising, Int. J. Comput. Vis. 76 (2) (2008) 123–139.
[19] A. Buades, B. Coll, J. Morel, A review of image denoising algorithms, with a new one, Multiscale Model. Simul. 4 (2) (2005) 490–530.

Susmi Jacob received her B.Tech degree from Mahatma Gandhi University, Kerala, India, in 2006 and M.Tech with specialization in Digital Image Computing from Department of Computer Science, University of Kerala, India, in 2013. She is currently working as Assistant Professor in the Department of Computer Science and Engineering, SCMS School of Engineering and Technology, Kochi, India. Her research interests include Image Processing, Pattern Recognition and Computer Vision.

Dr. Madhu S. Nair received his Bachelors Degree in Computer Applications (BCA) from Mahatma Gandhi University with First Rank in the year 2000, Masters Degree in Computer Applications (MCA) from Mahatma Gandhi University with First Rank in the year 2003 and Masters Degree in Technology (M.Tech) in Computer Science (with specialization in Digital Image Computing) from University of Kerala with First Rank in the year 2008. He obtained his PhD in Computer Science (Image Processing) from Mahatma Gandhi University in the year 2013. He also holds a Post Graduate Diploma in Client Server Computing (PGDCSC) from Amrita Institute of Computer Technology. He has published research papers in reputed International Journals and Conferences, which includes IEEE, Springer, Elsevier, IAENG, CSI, etc. He is a Life Member of Computer Society of India (CSI), Member of Institute of Electrical and Electronics Engineers (IEEE), Member of International Association of Engineers (IAENG) and Member of International Association of Computer Science and Information Technology (IACSIT). He is also a reviewer of various International Journals published by IEEE, Elsevier, Springer etc. His areas of interest include Digital Image Processing, Pattern Recognition, Computer Vision, Data Compression and Soft Computing.
