
HIGHLIGHTS
• To recover the haze-free image.
• To deal with a poor vision system by retrieving the salient feature information of the ground truth data.


Localization of Radiance Transformation for Image Dehazing in Wavelet Domain

Hira Khan, Muhammad Sharif, Nargis Bibi, Jamal H. Shah, Sajjad A. Haider, Saira Zainab, Muhammad Usman, Yasir Bashir, Nazeer Muhammad

Abstract—A hazy image is a serious observation phenomenon caused by the random scattering of atmospheric elements. This reduces scene prominence in terms of color contrast and degrades the vision system. To overcome this issue, dehazing of digital images has received much attention in the field of computer vision. Dense scattering in a hazy image results in loss of the original spectral information of the object. The unwanted weather artifacts introduced during the image acquisition process must be removed to improve the visual insight of the acquired image. It is found that most existing methods fail to retrieve sufficient information from hazy images. To address this problem, a robust single image dehazing method is introduced to recover the haze-free image. This is performed by calculating the atmospheric light of the given hazy image through a wavelet-domain decomposition in which the high frequency sub-bands are retained. Dense haze elimination is performed on the approximated low frequency sub-band of the given hazy image. On reconstruction, a well-refined dehazed image is obtained. Moreover, transmission maps are also produced as a byproduct of this method.

Index Terms—Single image dehazing, Image defogging, Image restoration, Image de-noising, Image enhancement, Air light estimation, Wavelet transformation.



1 INTRODUCTION

Due to the scattering of atmospheric particles in foggy or bad weather conditions, light from the scene is significantly scattered, which results in very limited contrast [1]. Thus, captured outdoor images and videos may appear blurred and faded in color, making objects look hazy or foggy. Low image contrast and loss of color fidelity directly affect human visual perception and reduce the performance of computer vision applications, e.g. object detection, classification, scene analysis, outdoor surveillance, driver assistance, satellite imaging, and underwater imaging [2, 3, 4, 5, 6]. Dehazing is therefore required by such vision applications as a preprocessing step. Different dehazing algorithms have been proposed to recover haze-free images. Visually, haze offers very limited contrast, and several methods rely on this observation. The dehazing techniques introduced by researchers are usually divided into two classes, namely single image and multi-image (additional information) dehazing. Early dehazing approaches used a physical model that requires additional information or multiple images of the same scene [7, 8].

• H. Khan, M. Sharif, J. H. Shah, S. A. Haider, Y. Bashir, and N. Muhammad are with COMSATS University Islamabad, Wah Campus, G. T. Road, 47040, Wah Cantt., Pakistan.

• N. Bibi is with the Department of Computer Science, Fatima Jinnah Women University, Rawalpindi, Pakistan.

• S. Zainab is with the Department of Mathematics, University of Wah, Wah Cantt, Pakistan.

• M. Usman is with the Department of Engineering Sciences, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Swabi, 23640, Pakistan.

Manuscript received January 30, 2018; revised July 4, 2019; accepted October 15, 2019.

Currently, image dehazing approaches can be generally classified into three major categories based on their formulation: some work on image enhancement [9, 10], some on image restoration [11], while others are categorized as hybrid approaches [12, 13, 14].

The first type of method adopts direct image enhancement techniques to increase the contrast of the given images for removing haze, including histogram equalization [9, 15], the wavelet transform [16, 17, 18], retinex [19], and image fusion [10, 20, 21]. These methods often cannot fully remove haze because they do not model the degradation process.

The second category of image dehazing methods models the image degradation process and uses depth information [8, 22], multiple images of the same scene under different weather conditions [23, 24], or a polarization filter to recover the true image by reversing the degradation process [25]. These methods may perform better than enhancement based methods by maintaining the desired information. However, depth information or multiple images might not be available in practical situations.

The third type contains hybrid approaches with haze prior assumptions, which include maximizing the local contrast [13, 14, 26, 27]. Moreover, the dark channel prior [11] is tuned to adjust color attenuation as a prior [28]. The transmission map is estimated while removing haze through the atmospheric degradation model. These studies classify the listed approaches into the restoration based category, because the estimated transmission may not strictly meet the physical properties of haze degradation [7, 10, 29]. Furthermore, the dehazed results often have a good visual effect by taking advantage of both enhancement and restoration approaches. Therefore, much recent work has been devoted to this category. Tan [12] observed that haze-free images have higher contrast than images that are plagued by bad weather; moreover, by maximizing the local contrast of the restored image, the visual appeal of the result is greatly improved.


Based on a simple optical model that assumes the air light is constant [27], Kim et al. [14] constructed a contrast cost function for air light estimation, which is subtracted to correct the degraded image. The most common single image dehazing methods are contrast maximization, independent component analysis (ICA) of color lines, and the dark channel prior. Contrast maximization [30] increases the contrast under a constraint, but it does not physically improve the brightness. Lin et al. [31] adopted the contrast maximization approach for real-time image and video processing. Dubok et al. [32] presented a method based on estimating the haze amount from the difference between the RGB channels of a single image. As haze increases, this difference decreases; however, their assumption does not work well in gray areas. Zhu et al. [33] proposed a novel approach for image dehazing using a color attenuation prior, characterizing hazy regions by high brightness and low saturation. Likewise, ICA based contrast maximization is also very common for image dehazing. ICA based methods are generally applied directly to color images; besides contrast maximization, they are time consuming and fail to deal with real-time imagery [13]. In contrast to these algorithms, color lines and the dark channel prior are frequently used to solve the same problem. Berman et al. [34] presented a non-local image dehazing algorithm in which, through haze lines, the color image is converted into small image patches that form 1D distributions in color space.

The human visual system (HVS) has a specific response sensitivity to a narrow interval of light wavelengths [35]. Fig. 2 shows the wavelength segment where the HVS has its maximum sensitivity. In this figure, one curve signifies the photopic vision sensitivity and the other the scotopic vision. Because of the much higher luminous efficiency of scotopic vision compared to photopic vision, both reach their extreme sensitivity between the red and blue observation patterns, and the mutual overall sensitivity ranges from 505 nm to 555 nm.

Various significant features can be observed in Fig. 4, which demonstrates the important curve features learned by the random forest regressor [11]. To show the trend more clearly, the importance score is plotted on a log scale. The most important features are the multi-scale dark channel features, and the most important large scale dark channel feature is D10, which supports the dark channel prior. However, the regression model does not depend on a single dark channel value. It exploits the high-order relationship between the dark channel features in a local patch (5 × 5), and the high-order relationship between different kinds of features within the local neighborhood. This model improves the transmission approximation. Each kind of feature is organized within its own scale, and the regressor leans towards the maximum or minimum statistics of these features [11]. This information is used to approximate the haze transmission, which works efficiently in dense haze. In case of similar objects in the scene, this method may produce invalid results.


Tarel et al. [26] used a fast smoothness prior on the air light, based on the median filter, while keeping the image contrast high. The complexity is linear, so the main improvement is speed rather than time-consuming optimization. Several methods work on patch-based priors by assuming that radiance and transmission are piecewise constant [13, 36, 11]. Fattal et al. [13] presented an optical transmission based assumption used to eliminate the scattered light. This method relies on the supposition that transmission and surface shading are not correlated. He et al. [11] generalized the dark channel prior: it assumes that, in a small neighborhood, there will be at least one dark channel pixel. This holds in many regions of an image, although bright pixels may occupy large regions of an image. This minimal value is used for estimating the haze; however, the prior does not hold in the bright areas of the scene. Fattal et al. [36] presented a new image dehazing algorithm that works on color lines to recover the scene transmission. In this model, false predictions can easily be identified and avoided through the lack of color lines; the color ellipsoids are fitted per patch in RGB space across the whole image. Rong et al. [17] proposed a direct image enhancement technique using the wavelet transform: the wavelet transform is applied directly to dehaze an image, after which the Retinex algorithm is extended for color enhancement and effects [37]. The method of [37] is based on the patch recurrence property in order to calculate the air light color; it compares the transmission of pairs of patches that occur at different depths in an image. Cai et al. [38] introduced an end-to-end system called DehazeNet for evaluating the medium transmission. This system uses a convolutional neural network (CNN) as a deep architecture whose layers are designed to represent priors in image dehazing [39]. In the haze model of [40], two-layer Gaussian process regression was used for learning from local image priors. This creates a direct mapping from the input image to a depth estimate; however, the method requires significant improvement in terms of training efficiency. In the haze model of [41], a multi-scale deep neural network was adopted to learn the mapping between input images and their corresponding transmission maps. This method did not demonstrate much effectiveness for night-time hazy images. A light-weight AOD-Net model was proposed by [42]. This model is based on a reformulated atmospheric model, as a substitute for estimating the transmission matrix and the atmospheric light separately. In addition, Lu et al. [43] optimized an image dehazing method that efficiently evaluates the atmospheric air light; it eliminates haze through the approximation of an adaptive filter. State of the art image dehazing methods often suffer from quantization artifacts and noise in hazy sky regions [44, 45, 46], which results in degraded image quality or the loss of spectral data [47]. To address this issue, a perception oriented transmission estimation technique [48] has been introduced. First, a transmission model was proposed by posing single image dehazing as a local contrast optimization problem. This transmission model can flexibly adjust the haze removal to accommodate local contrast gain; its solution is similar to the dark channel solution, but it is not confined to the DCP.
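As a concrete illustration of the dark channel prior used by [11] and by the dense haze elimination step later in this paper, the following minimal Python sketch computes a dark channel as the per-pixel minimum over color channels followed by a local minimum filter. The 15 × 15 patch size and the use of SciPy's minimum filter are illustrative assumptions, not details taken from the cited implementations.

```python
import numpy as np
from scipy.ndimage import minimum_filter


def dark_channel(image, patch_size=15):
    """Minimal dark channel sketch: image is an H x W x 3 array scaled to [0, 1].

    The patch size is an assumed value for illustration only.
    """
    # Per-pixel minimum across the three color channels.
    min_channel = image.min(axis=2)
    # Local minimum over a patch_size x patch_size neighborhood.
    return minimum_filter(min_channel, size=patch_size)
```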


In this paper, we attempt to recover a haze-free image from a hazy image by removing noise and artifacts, for accurate retrieval of the spectral information and enhancement of the image quality under complex circumstances. In the proposed work, a hybrid approach is introduced to dehaze the image: the overwhelming image noise and artifacts are suppressed for precise retrieval of the spectral information in the wavelet domain, which is used to enhance the image quality under a complex environment. The rest of the paper is organized as follows: Section 2 presents the formulation of the proposed image dehazing method; Sections 2.1 to 2.4 describe its implementation; Section 3 provides the experimental outcomes and analysis; and Section 4 draws a conclusion.

2 THE PROPOSED METHOD FORMULATION

A detailed description of the nomenclature and mathematical formulation of the proposed dehazing method, as shown in the flow chart of Fig. 1, is given in this section.

2.1 Light estimated haze line construction

The air light estimation model, also known as the haze imaging model, was proposed by [13] and further developed by [49] and [10]. The atmospheric scattering model can be represented as:

Λ(κ) = υ(κ) ψ(κ) + ϑ (1 − ψ(κ)).   (1)

In (1), Λ(κ) represents the input hazy image and υ(κ) is the scene radiance. The transmission map ψ(κ) = e^(−β ∆(κ)) of the haze medium gives the portion of light that reaches the camera without being scattered, imaged at index κ, while ϑ represents the atmospheric light. The real scene radiance υ(κ) can be recovered after estimating ϑ and ψ(κ). Here, ∆(κ) represents the distance of the scene point from the camera at pixel κ, and β is known as the scattering or attenuation coefficient. The given expression of ψ(κ) shows that as the distance ∆(κ) increases, the transmission ψ(κ) decreases, so that, using (1):

ϑ = Λ(κ),  ∆(κ) → ∞.   (2)

In practice, the distance ∆(κ) of the imaging view cannot be infinite, but it can be a long distance with a very low transmission ψ0. The atmospheric light ϑ can be estimated more stably by the following rule:

ϑ = max_{ω ∈ {κ | ψ(κ) ≤ ψ0}} Λ(ω).   (3)

The above discussion shows that, in order to remove haze from a scene or to make a scene clear (free of haze), it is essential to approximate precise transmission maps of the medium. The brightest pixels are considered to be the most hazy pixels in hazy images, as presented by [12]; however, this is only valid when sunlight is ignored and the weather is overcast. This case gives us a scene lit using only the atmospheric air light. The scene radiance for every color channel is then obtained using dense haze elimination υ as:

υ(κ) = Γ(κ) ϑ.   (4)

In the given equation, Γ(κ) is the reflectance of the scene points, where Γ ≤ 1. The haze imaging model (1) can then be written as (5):

Λ(κ) = Γ(κ) ϑ · ψ(κ) + (1 − ψ(κ)) ϑ.   (5)

The brightest pixels are considered the most haze-opaque in the hazy image when pixels in the image are at infinite distance, ψ ≈ 0. In practice, however, sunlight cannot be ignored. Equation (4) can be modified by considering the sunlight ξ as follows:

υ(κ) = Γ(κ) (ξ + ϑ),   (6)

and (6) can be extended as follows:

Λ(κ) = Γ(κ) ξ · ψ(κ) + Γ(κ) ϑ · ψ(κ) + (1 − ψ(κ)) ϑ.   (7)

In this situation, the brightest pixels of the whole image can be brighter than the atmospheric light. To deal with this issue, we retrieve the dense pixels from the original hazy image by constructing haze lines; after calculating the air light ϑ, Λϑ is calculated as:

Λϑ(κ) = Λ(κ) − ϑ.   (8)
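For illustration, the following Python sketch applies the haze imaging model of equation (1) to synthesize a hazy observation and forms the air-light-translated image of equation (8). The constant transmission and the numeric values are toy assumptions used only to show the shapes and broadcasting involved.

```python
import numpy as np


def apply_haze_model(radiance, transmission, airlight):
    """Equation (1): hazy = radiance * transmission + airlight * (1 - transmission)."""
    t = transmission[..., np.newaxis]      # broadcast the H x W transmission over color channels
    return radiance * t + airlight * (1.0 - t)


def airlight_translated(hazy, airlight):
    """Equation (8): shift the hazy image so that the air light sits at the origin."""
    return hazy - airlight


# Toy usage with assumed values.
rng = np.random.default_rng(0)
radiance = rng.random((4, 4, 3))            # stand-in scene radiance in [0, 1]
transmission = np.full((4, 4), 0.6)         # assumed constant transmission e^(-beta * depth)
airlight = np.array([0.8, 0.8, 0.8])        # assumed global atmospheric light
hazy = apply_haze_model(radiance, transmission, airlight)
hazy_minus_airlight = airlight_translated(hazy, airlight)
```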

2.2 Low Frequency Sub band retrieving

The given image Λ tends to have better low frequency information, yet its high frequency contents Λ_{j−1}^{(hl)}, Λ_{j−1}^{(lh)}, Λ_{j−1}^{(hh)} may have been degraded; these can be separated for the air light estimation by the wavelet decomposition:

W(Λ) = ( Λ_{j−1}^{(ll)}, Λ_{j−1}^{(hl)}, Λ_{j−1}^{(lh)}, Λ_{j−1}^{(hh)} ).   (9)

Therefore, in the wavelet decomposition of Λ, the high frequency sub-bands of Λ are discarded, since they may contain unwanted noise. The approximation sub-band Λ_{j−1}^{(ll)} is processed using dense haze elimination, which efficiently retrieves the salient information of the smooth data in the form of Λ̃_{j−1}^{(ll)}. This exploits the property that the approximation sub-band contains the maximum energy [51]. Dense haze elimination [11] is performed on the lower frequency sub-band in order to attain the estimated dense haze value Λ̃_{j−1}^{(ll)}. It relies on the observation that, where dense haze exists in the given hazy image, at least one channel has low intensity in some regions, and this intensity should be very low compared to the other regions.

2.3 High Frequency Sub band retrieving

Some of the important information of the hazy image Λ may be lost during the haze estimation process. This can be recovered using a sub-band replacement process. The method is based on replacing particular sub-band information of Λϑ from (10) with that of Λ in the wavelet domain. In fact, treating the wavelet coefficients (smooth and edge details) directly with a dehazing approach can develop ringing artifacts and wavelet-shaped noise [51]. In this regard, the wavelet decomposition is applied to the haze line constructed image Λϑ as shown in (10):

W(Λϑ) = ( Λϑ_{j−1}^{(ll)}, Λϑ_{j−1}^{(hl)}, Λϑ_{j−1}^{(lh)}, Λϑ_{j−1}^{(hh)} ).   (10)

The processed image Λϑ tends to have better high frequency information; however, its low frequency contents may have been degraded. Therefore, in the wavelet decomposition of Λϑ, the low frequency sub-band Λϑ_{j−1}^{(ll)} is ignored. In this case, only the high frequency details Λϑ_{j−1}^{(hl)}, Λϑ_{j−1}^{(lh)} and Λϑ_{j−1}^{(hh)} are processed.
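The following Python sketch illustrates the single-level 2-D wavelet decompositions of equations (9) and (10) using PyWavelets. The Haar wavelet, the single decomposition level, and the random stand-in arrays are assumptions for illustration; the paper does not commit to a particular wavelet in this section.

```python
import numpy as np
import pywt


def decompose(channel, wavelet="haar"):
    """Single-level 2-D DWT: returns the approximation band and the three detail bands."""
    approx, (detail_h, detail_v, detail_d) = pywt.dwt2(channel, wavelet)
    return approx, (detail_h, detail_v, detail_d)


# Toy usage on one channel of the hazy image and of the air-light-translated image.
hazy_channel = np.random.rand(256, 256)        # stand-in for one channel of the hazy image Lambda
haze_line_channel = np.random.rand(256, 256)   # stand-in for one channel of Lambda_theta
ll_hazy, _ = decompose(hazy_channel)                  # only the approximation band of Lambda is kept (Eq. 9)
_, details_haze_line = decompose(haze_line_channel)   # only the detail bands of Lambda_theta are kept (Eq. 10)
```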


Fig. 1: Flowchart of the proposed dehazing method.

Fig. 2: Photopic and scotopic response of the HVS [50].

Fig. 3: Airlight centered spherical representation. The sphere was sampled uniformly using 500 points. The color at each point [θ(κ), φ(κ)] specifies the number of pixels κ with these angles when writing Λϑ(κ) in spherical coordinates (image size 768 × 1024) [34].


TABLE 1: Abbreviations.

Acronym   Definition
CV        Computer Vision
NL        Non-local
DCP       Dark Channel Prior
MRF       Markov Random Field
PSNR      Peak Signal to Noise Ratio
MSE       Mean Square Error
RMSE      Root Mean Square Error
PCC       Pearson Correlation Coefficient
SNR       Signal to Noise Ratio
MAE       Mean Absolute Error

2.4 Sub band replacement reconstruction process

The detail data obtained through the higher frequency sub-bands in Eq. (10) is combined with the processed data of the lower frequency sub-band in Eq. (9), obtained using dense haze elimination. Subsequently, the inverse wavelet transform W^{-1} yields the final dehazed image Λ̂, as shown in (11):

Λ̂ = W^{-1}( Λ̃_{j−1}^{(ll)}, Λϑ_{j−1}^{(hl)}, Λϑ_{j−1}^{(lh)}, Λϑ_{j−1}^{(hh)} ).   (11)
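A minimal Python sketch of the sub-band replacement reconstruction in (11): the dense-haze-eliminated approximation band is recombined with the retained detail bands of Λϑ and inverted by a single-level inverse DWT. The Haar wavelet and the random stand-in arrays are assumptions for illustration.

```python
import numpy as np
import pywt


def reconstruct(processed_ll, haze_line_details, wavelet="haar"):
    """Equation (11): inverse DWT of the processed approximation band and the retained detail bands."""
    return pywt.idwt2((processed_ll, haze_line_details), wavelet)


# Toy usage: combine the processed approximation band with the borrowed details.
ll_processed = np.random.rand(128, 128)                       # stand-in for the dense-haze-eliminated LL band
details = tuple(np.random.rand(128, 128) for _ in range(3))   # stand-in for the detail bands of Lambda_theta
dehazed_channel = reconstruct(ll_processed, details)          # one channel of the final dehazed image
```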

The haze density of a hazy image can be approximated through the dense haze elimination Λ̃_{j−1}^{(ll)}; that is why the 0.1% most dense pixels are picked. These pixels are the most haze-opaque ones, and among them the pixels of the input image Λ having the highest intensities are selected as the atmospheric light ϑ. Note that these selected pixels may not be the brightest pixels of the input image.
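The selection rule described above can be sketched as follows in Python: the 0.1% most haze-opaque pixels are taken from a haze density estimate, and the atmospheric light is read from the brightest corresponding pixel of the input image. Passing a generic density map is an assumption for illustration; in the paper the density comes from the dense haze elimination Λ̃_{j−1}^{(ll)}.

```python
import numpy as np


def estimate_airlight(hazy, haze_density, fraction=0.001):
    """Pick the `fraction` most haze-opaque pixels and return the brightest input pixel among them."""
    flat_density = haze_density.ravel()
    num_pixels = max(1, int(flat_density.size * fraction))
    # Indices of the most haze-opaque pixels (largest haze density values).
    candidate_idx = np.argpartition(flat_density, -num_pixels)[-num_pixels:]
    candidates = hazy.reshape(-1, 3)[candidate_idx]
    # Among the candidates, take the pixel with the highest overall intensity as the air light.
    return candidates[np.argmax(candidates.sum(axis=1))]
```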

Algorithm 1 Haze reduction
Input: Hazy image Λ(κ), atmospheric light ϑ
1: Initialize Λ(0) = Λ. The atmospheric light based haze estimate Λϑ(κ) is obtained using (8). Transform Λϑ(κ) to spherical coordinates [θ(κ), φ(κ)] to obtain the required cluster intensities, as shown in Fig. 3.
2: for each cluster θ do
3:    Estimated haze line output: Λϑ(κ)
4:    for the wavelet decompositions of Λϑ(κ) and Λ(κ) do
5:       Estimate the radius: ρmax(κ) = max_{κ∈θ} ρ(κ)
6:       Estimate the dense haze value Λ̃_{j−1}^{(ll)}
7:    end for
8:    Obtain the fused data.
9:    Estimate the transmission output: ψ(κ)
10: end for
Output: Dehazed image Λ̂(κ) and transmission map ψ(κ)
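As a sketch of the haze line construction step in Algorithm 1, the air-light-translated pixels Λϑ(κ) can be expressed in spherical coordinates (ρ, θ, φ), following the non-local formulation of [34]. The coordinate convention and the use of precomputed integer cluster labels (for example from K-means over the angles) are assumptions for illustration.

```python
import numpy as np


def to_spherical(haze_line_image):
    """Convert Lambda_theta pixels (H x W x 3) into spherical coordinates (rho, theta, phi)."""
    x, y, z = (haze_line_image[..., c] for c in range(3))
    rho = np.sqrt(x**2 + y**2 + z**2)                                # distance of the pixel from the air light
    theta = np.arctan2(y, x)                                         # azimuth angle
    phi = np.arccos(np.clip(z / np.maximum(rho, 1e-8), -1.0, 1.0))   # polar angle
    return rho, theta, phi


def per_line_max_radius(rho, line_labels, num_lines):
    """For each haze line (cluster of pixels with similar angles), record the maximum radius rho_max."""
    rho_max = np.zeros(num_lines)
    np.maximum.at(rho_max, line_labels.ravel(), rho.ravel())         # line_labels: integer cluster per pixel
    return rho_max
```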

3 EXPERIMENTAL RESULTS

To determine the efficiency of the proposed algorithm, this paper first presents some quantitative metrics to estimate image dehazing quality. Second, the performance of the algorithm on synthetic hazy images is evaluated. Lastly, this study compares the proposed algorithm with some recent methods on a set of hazy images. The pixels having the maximum intensities in an input image are picked together with the atmospheric air light factor ϑ, with the gamma value set to 0.5. Moreover, K-means clustering is performed on the input image in order to find the haze lines, calculated as in [34]. Our approach is mainly compared with the several state of the art algorithms presented in Fig. 5.

3.1 Quantitative evaluation

The proposed method is evaluated on a benchmark data set commonly used by [11, 34, 41, 40, 42]. It comprises well-known natural images associated with various kinds of fog. The synthetic dataset of natural images is available online at http://www.cs.huji.ac.il/∼raananf/projects/defog/index.html and http://www.eng.tau.ac.il/∼berman/NonLocalDehazing/. Our proposed method performs better than the listed methods in terms of the well-known metrics MSE, PSNR, SSIM, RMSE, and MAE. The PSNR measures the squared error between the original and reconstructed image and hence the quality of reconstruction of a lossy or hazy image [53]. PSNR can also be defined in terms of MSE: since PSNR is inversely proportional to MSE, a higher PSNR corresponds to a lower MSE. Table 2 to Table 6 show comparisons on the five metrics MSE, PSNR, SSIM, RMSE, and MAE, respectively. An outstanding image dehazing method should clearly remove haze and simultaneously suppress quantization artifacts or noise for the accurate retrieval of the original structure and high image quality [54]. This means a well-dehazed image is likely to have high PSNR and SSIM values and small MSE, RMSE, and MAE values. Tables 3 and 4 show that our method produces the highest PSNR and SSIM values, respectively, compared with He's, Berman's, Ren's, and Fan's [11, 34, 41, 40] and Li's method [42]. Meanwhile, no quantization artifacts or noise are introduced, which reveals that our proposed method has improved dehazing ability in low scattering regions. Notice that these methods yield some serious degradation in the recovered images [11, 34]. Tables 2, 5, and 6 demonstrate that our method has the lowest MSE, RMSE, and MAE values in comparison to the listed methods. This shows that our method retrieves the significant information rather than over-sharpening, which produces smoother recovered images. Other methods, however, introduce some artifacts with over-saturated color. In comparison with these methods, our method can eliminate haze in nearby areas and proficiently improve the discontinuous depth map through soft matting. Additionally, our method introduces no veil effect in the dehazed imagery.


Fig. 4: Importance of different features output by the random forest regressor [52].

Fig. 5: Example of true (haze-free) training images used as a reference images by [34, 36, 11].

Fig. 6: Comparison of the estimated transmission (disparity) maps of the listed methods with the proposed method.

Fig. 7: Comparisons of haze removal outputs (fragments of Fig. 5) of the listed dehazing methods with the proposed method.

MSE measures the cumulative squared error between the estimated and ground truth images. For dehazing, MSE is always non-negative and should be close to zero, i.e., MSE should be minimal for high quality images. As in [38], MSE is measured between the predicted and true transmission on different images. It is calculated using the following equation:

MSE = ( Σ_{M,N} [I1(m, n) − I2(m, n)]^2 ) / (M × N),   (12)

where M and N are the number of rows and columns of the input image. Moreover, PSNR is the ratio between the maximum power of a signal and the power of the corrupting noise, i.e., the error produced by distortion or compression, which affects the quality of the estimated image and its representation. It can be represented as shown in (13):

PSNR = 10 log10( R^2 / MSE ),   (13)

where R is the maximum fluctuation of the input image data type. A good quality image has a high PSNR, while the PSNR is low for degraded images. The proposed algorithm outperforms the previous methods by producing good quality images with high PSNR. Moreover, RMSE measures the difference between the observed and ground truth images. The individual differences can be positive or negative, as the predicted value can be over or under the actual value, as shown in (14):

RMSE = sqrt( Σ (f − o)^2 / n ),   (14)

where f is the predicted value, o is the observed value, and n is the number of samples. The RMSE values of our images are also the lowest compared with the other methods. Moreover, MAE measures the average magnitude of the errors in the given set of test images. RMSE and MAE can be used together to measure the variation in the errors; our results show the minimum MAE and RMSE compared with the other algorithms. The higher the PSNR, the better the result, because the unwanted noise is small compared with the useful signal. Similarly, the SNR is the ratio of the power of the signal (useful data) to the power of the background noise (unwanted or irrelevant data), SNR = P_signal / P_noise, where P denotes the average power of the signal and of the noise, respectively; like PSNR, a greater SNR indicates better image quality.
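A minimal Python sketch of the evaluation metrics in (12)-(14) together with MAE, assuming 8-bit images so that the peak value R is 255.

```python
import numpy as np


def mse(reference, estimate):
    """Equation (12): mean squared error over all pixels."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    return np.mean(diff ** 2)


def psnr(reference, estimate, peak=255.0):
    """Equation (13): PSNR in dB, where `peak` is the maximum fluctuation R."""
    return 10.0 * np.log10(peak ** 2 / mse(reference, estimate))


def rmse(reference, estimate):
    """Equation (14): root mean squared error."""
    return np.sqrt(mse(reference, estimate))


def mae(reference, estimate):
    """Mean absolute error between the reference and the estimate."""
    return np.mean(np.abs(reference.astype(np.float64) - estimate.astype(np.float64)))
```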


Fig. 8: Image dehazing results for satellite images compared with the state-of-the-art methods.

Fig. 9: Results on challenging natural images compared with the state-of-the-art methods.

Fig. 10: Dehazed results on synthetic hazy images using stereo images: Bowling and Aloe.

3.2 Qualitative evaluation

We compare the proposed algorithm with the state-of-the-art dehazing methods [11, 34, 41, 40, 42] using the well-known metrics. Fig. 6(a) and Fig. 6(b) show the ground truth and hazy transmission maps, while Fig. 6(h) shows the improved depth maps. The depth maps are measured through ψ(κ) and are determined up to an unknown scaling parameter β. The atmospheric air lights in these images are automatically predicted by means of the method defined in Section 2.1. As can be seen, the proposed method can reveal the details and improve vivid color information even in very dense haze regions. The projected depth maps are sharp and consistent with the input images, in contrast to the methods of [11, 34, 41, 40, 55] and [42]. Some transmissions at areas of rapid depth change in Fig. 6(c), Fig. 6(d), Fig. 6(e) and Fig. 6(f) are unsmooth and irregular. The transmissions of the proposed method in Fig. 6(h) are smoother in these regions than the others. The key reason is that the transmissions predicted by the proposed method are smoothed by the soft matting, which improves the transmission maps. Fig. 7 shows experimental results on the fragmented portions of Fig. 5. Fig. 7(a) and Fig. 7(b) show the ground truth and hazy images. Moreover, in Fig. 7(c), He's method proficiently eliminates haze; however, it introduces severe degradation and a dark appearance in the sky regions. It produces very small transmission values, which significantly increase image noise and decrease the illumination of these sky regions [56, 57, 58].

Berman's technique [34] produces over-enhancement and introduces color distortion into the dehazed images. Although He's [11], Ren's [41], and Fan's [40] methods efficiently eliminate haze, the color in most regions is over-saturated. Some distinct quantization artifacts also appear in the dense scattering patches. On the contrary, although our technique has less raw dehazing strength than Fan's method [40], it attains an improved balance between haze removal and artifact or noise suppression [59, 60, 61, 62]. The proposed results achieve good restoration of the image information. Therefore, the proposed method has a decent ability of haze removal and structural information retrieval. Fig. 7 illustrates dehazing outcomes on six heavy haze images, in which objects in the distant region (near the sky patches) can barely be discriminated. He's method [11] and Berman's method [34] eliminate only slight haze, produce degradation, and introduce golden-yellow artifacts in some regions (Fig. 7(c) and Fig. 7(d)). Moreover, the methods of [40] and [42] have the finest dehazing capability, so that the distant areas can be differentiated; however, they also produce some serious degradation (Fig. 7(f) and Fig. 7(g)). In comparison with the listed methods, our method introduces no degradation in sky regions and proficiently eliminates haze in near regions. Furthermore, our proposed method has a stronger dehazing capability than these methods. In Fig. 8, we evaluate the performance of our method and the other four approaches on hazy satellite imagery obtained from [63]. In Fig. 8, the image size is 600 × 600, and the spatial resolution is about 0.60 m. These images comprise numerous high buildings with their significant shadows, several automobiles moving on the streets, dark green trees, farm lands, dispersed houses, and rivers. The image also comprises numerous house buildings, a concrete square, and a large aquatic area of a river.


Our method avoids the problematic excessive color saturation and distortion well. Even the local contrast of the areas underneath the shadows of tall buildings has been improved. In contrast, the results attained by the other methods still suffer from darkened luminance and color distortion. For example, Fattal's method [13] leads to excessively saturated color, particularly in Fig. 8(d). In comparison with the high reflection of air light by buildings in city regions, the relatively low reflectivity of farm lands makes such images look deep-colored. For such images, our method can also attain satisfying results. However, the actual relatively low reflection of air light invalidates the corresponding assumptions of He et al.'s [11], Tarel et al.'s [26], and Fattal et al.'s [13] methods. Thus, the results obtained by these three methods appear hazy. Ni's method [63], on the other hand, over-dehazes these images, with saturated color in local regions. As revealed by Fig. 9, He's method [11] suffers from overly-enhanced visual artifacts, while [34, 41] produce unrealistic color tones on one or several images, such as the MSCNN results in the second row (notice the stone color). Fan's [40] and AOD-Net [42] have the most competitive visual results among all, with plausible details. Yet, on closer inspection, we still observe that AOD-Net sometimes blurs image textures and darkens some regions. The proposed method, however, recovers richer and more saturated colors while suppressing most artifacts. Moreover, AOD-Net does not explicitly consider the handling of white scenes, as can be seen in the first row. Fig. 10(a) shows the input hazy images, which are synthesized from haze-free images. As the method by He et al. [11] assumes that the dark channel values of clear images are zero, it tends to overestimate the haze thickness and produces darker results, as shown in Fig. 10(b). We note that the dehazed images generated by [34, 41, 40] tend to have some color distortions. For example, the colors of the Aloe leaves in the second row become darker, as shown in Fig. 10(b) and (d). Although the dehazed results of AOD-Net [42] are better, the colors are still darker than the ground truth and the image textures are blurred. In contrast, the dehazed results of the proposed algorithm in Fig. 10(g) are close to the ground truth haze-free images, which indicates that better transmission maps are estimated.

4 CONCLUSION

This paper presents an innovative method for image dehazing that hybridly estimates an approximate transmission map and provides sufficient dehazing. The proposed method is based on the estimation of the atmospheric air light of the hazy image. After calculating this, the haze line constructed image is obtained. This is decomposed into its sub-bands through the wavelet transformation to enhance the salient information of the detail sub-bands of the hazy image. Moreover, to retrieve the significant information of the smooth data, the wavelet decomposition is also applied to the given hazy image. From this, an approximation sub-band is processed to obtain a good estimate of the hazy image using the dense haze estimation method. Finally, a dehazed image is obtained by applying the inverse wavelet transform.

REFERENCES

[1] S. G. Narasimhan and S. K. Nayar, "Vision and the atmosphere," International Journal of Computer Vision, vol. 48, no. 3, pp. 233–254, 2002.
[2] N. Hautière, J.-P. Tarel, and D. Aubert, "Mitigation of visibility loss for advanced camera-based driver assistance," IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, pp. 474–484, 2010.
[3] N. Bibi, S. Farwa, N. Muhammad, A. Jahngir, and M. Usman, "A novel encryption scheme for high-contrast image data in the fresnelet domain," PLOS ONE, vol. 13, no. 4, p. e0194343, 2018.
[4] Y. Li, Q. Miao, J. Song, Y. Quan, and W. Li, "Single image haze removal based on haze physical characteristics and adaptive sky region detection," Neurocomputing, vol. 182, pp. 221–234, 2016.
[5] B. Mughal, M. Sharif, and N. Muhammad, "Bi-model processing for early detection of breast tumor in cad system," The European Physical Journal Plus, vol. 132, no. 6, p. 266, 2017.
[6] Z. Mahmood, T. Ali, N. Muhammad, N. Bibi, I. Shahzad, and S. Azmat, "EAR: Enhanced augmented reality system for sports entertainment applications," KSII Transactions on Internet & Information Systems, vol. 11, no. 12, 2017.
[7] S.-C. Huang, B.-H. Chen, and W.-J. Wang, "Visibility restoration of single hazy images captured in real-world weather conditions," IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 10, pp. 1814–1824, 2014.
[8] N. Hautière, J.-P. Tarel, and D. Aubert, "Towards fog-free in-vehicle vision systems through contrast restoration," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2007), pp. 1–8, IEEE, 2007.
[9] Z. Xu, X. Liu, and X. Chen, "Fog removal from video sequences using contrast limited adaptive histogram equalization," in International Conference on Computational Intelligence and Software Engineering (CiSE 2009), pp. 1–4, IEEE, 2009.
[10] C. O. Ancuti and C. Ancuti, "Single image dehazing by multi-scale fusion," IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3271–3282, 2013.
[11] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
[12] R. T. Tan, "Visibility in bad weather from a single image," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), pp. 1–8, IEEE, 2008.
[13] R. Fattal, "Single image dehazing," ACM Transactions on Graphics (TOG), vol. 27, no. 3, p. 72, 2008.
[14] D. Kim, C. Jeon, B. Kang, and H. Ko, "Enhancement of image degraded by fog using cost function based on human visual model," in IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2008), pp. 64–67, IEEE, 2008.
[15] M. Irshad, N. Muhammad, M. Sharif, and M. Yasmeen, "Automatic segmentation of the left ventricle in a cardiac mr short axis image using blind morphological operation," The European Physical Journal Plus, vol. 133, no. 4, p. 148, 2018.


TABLE 2: MSE comparisons for the proposed and recently listed methods.

Method     Road1   Road2   Lawn1   Lawn2   Flower1  Church
[11]       7578    5069    13439   17471   2301     10064
[34]       8180    7833    8880    7887    3608     8499
[41]       75246   83892   63459   63700   39264    60930
[40]       8109    7265    7190    7842    1379     1003
[42]       6210    5311    5672    6019    1145     8179
Proposed   5084    4717    4234    4871    807      6509

TABLE 3: PSNR comparisons for the proposed and recently listed methods.

Method     Road1   Road2   Lawn1   Lawn2   Flower1  Church
[11]       14.10   15.88   11.65   10.51   19.31    12.90
[34]       13.80   13.99   13.45   13.96   17.36    13.64
[41]       4.17    3.69    4.91    4.89    6.99     5.08
[40]       12.22   13.14   14.59   12.78   19.05    11.23
[42]       14.98   15.78   16.01   14.65   21.83    13.54
Proposed   15.87   16.20   16.68   16.06   23.90    14.80

TABLE 4: SSIM comparisons for the proposed and recently listed methods.

Method     Road1   Road2   Lawn1   Lawn2   Flower1  Church
[11]       0.6392  0.8202  0.6319  0.7284  0.8243   0.5881
[34]       0.6960  0.8260  0.6319  0.7694  0.8549   0.5777
[41]       0.4862  0.4153  0.4357  0.3664  0.2456   0.4383
[40]       0.5021  0.6731  0.6922  0.6398  0.7931   0.6128
[42]       0.6131  0.7242  0.8011  0.7022  0.8745   0.6551
Proposed   0.8000  0.8267  0.8895  0.8421  0.9232   0.7169

TABLE 5: RMSE comparisons for the proposed and recently listed methods.

Method     Road1   Road2   Lawn1   Lawn2   Flower1  Church
[11]       87.05   71.20   115.9   132.1   47.97    100.3
[34]       90.44   71.20   94.23   88.81   60.07    92.19
[41]       274.31  289.64  251.91  252.38  198.15   246.84
[40]       86.97   74.39   71.26   75.78   31.82    94.56
[42]       83.15   71.02   69.99   73.23   30.12    90.16
Proposed   71.30   68.69   65.07   69.79   28.42    80.68

TABLE 6: MAE comparisons for the proposed and recently listed methods.

Method     Road1   Road2   Lawn1   Lawn2   Flower1  Church
[11]       110.6   99.7    146.1   168.7   66.68    137.5
[34]       117.5   113.3   115.5   107.6   83.69    118.4
[41]       420.0   454.2   405.0   393.6   381.3    319.3
[40]       90.24   85.38   99.16   76.88   49.12    113.15
[42]       88.17   81.29   93.96   74.76   40.01    109.02
Proposed   84.71   87.52   67.80   76.04   37.84    103.6

[16] N. Muhammad and N. Bibi, "Digital image watermarking using partial pivoting lower and upper triangular decomposition into the wavelet domain," IET Image Processing, vol. 9, no. 9, pp. 795–803, 2015.
[17] Z. Rong and W. L. Jun, "Improved wavelet transform algorithm for single image dehazing," Optik - International Journal for Light and Electron Optics, vol. 125, no. 13, pp. 3064–3066, 2014.
[18] N. Muhammad, N. Bibi, I. Qasim, A. Jahangir, and Z. Mahmood, "Digital watermarking using hall property image decomposition method," Pattern Analysis and Applications, pp. 1–16, 2017.
[19] J. Zhou and F. Zhou, "Single image dehazing motivated by retinex theory," in 2nd International Symposium on Instrumentation and Measurement, Sensor Network and Automation (IMSNA 2013), pp. 243–247, IEEE, 2013.
[20] L. K. Choi, J. You, and A. C. Bovik, "Referenceless prediction of perceptual fog density and perceptual image defogging," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3888–3901, 2015.

[21] F. K. G. A, T. Akram, B. Laurent, S. R. Naqvi, M. M. Alex, and N. Muhammad, "A deep heterogeneous feature fusion approach for automatic land-use classification," Information Sciences, 2018.
[22] J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, "Deep photo: Model-based photograph enhancement and viewing," ACM Transactions on Graphics (TOG), vol. 27, p. 116, ACM, 2008.
[23] S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), vol. 1, pp. 598–605, IEEE, 2000.
[24] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 6, pp. 713–724, 2003.


[25] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Polarization-based vision through haze," Applied Optics, vol. 42, no. 3, pp. 511–525, 2003.
[26] J.-P. Tarel and N. Hautière, "Fast visibility restoration from a single color or gray level image," in 2009 IEEE 12th International Conference on Computer Vision, pp. 2201–2208, IEEE, 2009.
[27] J. P. Oakley and H. Bu, "Correction of simple contrast loss in color images," IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 511–522, 2007.
[28] Q. Zhu, J. Mai, and L. Shao, "A fast single image haze removal algorithm using color attenuation prior," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522–3533, 2015.
[29] B. Mughal, N. Muhammad, M. Sharif, T. Saba, and A. Rehman, "Extraction of breast border and removal of pectoral muscle in wavelet domain," Biomedical Research, vol. 28, no. 11, 2017.
[30] J.-B. Wang, N. He, L.-L. Zhang, and K. Lu, "Single image dehazing with a physical model and dark channel prior," Neurocomputing, vol. 149, Part B, pp. 718–728, 2015.
[31] Z. Lin and X. Wang, "Dehazing for image and video using guided filter," Open Journal of Applied Sciences, vol. 2, no. 4, p. 123, 2013.
[32] P. Dubok, J. Changwon, et al., "Fast single image dehazing using characteristics of RGB channel of foggy image," IEICE Transactions on Information and Systems, vol. 96, no. 8, pp. 1793–1799, 2013.
[33] Q. Zhu, J. Mai, and L. Shao, "Single image dehazing using color attenuation prior," in BMVC, 2014.
[34] D. Berman, S. Avidan, et al., "Non-local image dehazing," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1674–1682, 2016.
[35] C. T. Thurow, "Real-time image dehazing," 2011.
[36] R. Fattal, "Dehazing using color-lines," ACM Transactions on Graphics (TOG), vol. 34, no. 1, p. 13, 2014.
[37] Y. Bahat and M. Irani, "Blind dehazing using internal patch recurrence," in ICCP, 2016.
[38] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "DehazeNet: An end-to-end system for single image haze removal," IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187–5198, 2016.
[39] S. R. Naqvi, T. Akram, S. Iqbal, S. A. Haider, M. Kamran, and N. Muhammad, "A dynamically reconfigurable logic cell: from artificial neural networks to quantum-dot cellular automata," Applied Nanoscience, vol. 8, no. 1, pp. 89–103, 2018.
[40] X. Fan, Y. Wang, X. Tang, R. Gao, and Z. Luo, "Two-layer gaussian process regression with example selection for image dehazing," IEEE Transactions on Circuits and Systems for Video Technology, vol. 27, no. 12, pp. 2505–2517, 2016.
[41] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, "Single image dehazing via multi-scale convolutional neural networks," in European Conference on Computer Vision, pp. 154–169, Springer, 2016.
[42] B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, "AOD-Net: All-in-one dehazing network," in Proceedings of the IEEE International Conference on Computer Vision, vol. 1, p. 7, 2017.
[43] H. Lu, Y. Li, S. Nakashima, and S. Serikawa, "Single image dehazing through improved atmospheric light estimation," Multimedia Tools and Applications, vol. 75, no. 24, pp. 17081–17096, 2016.


[44] N. Muhammad, N. Bibi, A. Jahangir, and Z. Mahmood, "Image denoising with norm weighted fusion estimators," Pattern Analysis and Applications, pp. 1–10, 2017.
[45] N. Bibi, N. Muhammad, and B. Cheetham, "Inverted wrap-around limiting with bussgang noise cancellation receiver for ofdm signals," Circuits, Systems, and Signal Processing, Jun 2017.
[46] N. Muhammad, N. Bibi, A. Wahab, Z. Mahmood, T. Akram, S. R. Naqvi, H. S. Oh, and D.-G. Kim, "Image de-noising with subband replacement and fusion process using bayes estimators," Computers & Electrical Engineering, 2017.
[47] N. Muhammad, N. Bibi, Z. Mahmood, and D.-G. Kim, "Blind data hiding technique using the fresnelet transform," SpringerPlus, vol. 4, no. 1, p. 832, 2015.
[48] Z. Ling, G. Fan, J. Gong, Y. Wang, and X. Lu, "Perception oriented transmission estimation for high quality image dehazing," Neurocomputing, vol. 224, pp. 82–95, 2017.
[49] S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, pp. 820–827, IEEE, 1999.
[50] D. Nan, D.-y. Bi, C. Liu, S.-p. Ma, and L.-y. He, "A bayesian framework for single image dehazing considering noise," The Scientific World Journal, vol. 2014, 2014.
[51] N. Muhammad, N. Bibi, Z. Mahmood, T. Akram, and S. R. Naqvi, "Reversible integer wavelet transform for blind image hiding method," PLOS ONE, vol. 12, pp. 1–17, 2017.
[52] K. Tang, J. Yang, and J. Wang, "Investigating haze-relevant features in a learning framework for image dehazing," in 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2995–3002, June 2014.
[53] N. Bibi, A. Kleerekoper, N. Muhammad, and B. Cheetham, "Equation-method for correcting clipping errors in ofdm signals," SpringerPlus, vol. 5, p. 931, Jun 2016.
[54] Y. Bashir, A. Aslam, M. Kamran, M. I. Qureshi, A. Jahangir, M. Rafiq, N. Bibi, and N. Muhammad, "On forgotten topological indices of some dendrimers structure," Molecules, vol. 22, no. 6, 2017.
[55] B. Mughal, N. Muhammad, M. Sharif, A. Rehman, and T. Saba, "Removal of pectoral muscle based on topographic map and shape-shifting silhouette," BMC Cancer, vol. 18, no. 1, p. 778, 2018.
[56] B. Mughal, M. Sharif, N. Muhammad, and T. Saba, "A novel classification scheme to decline the mortality rate among women due to breast tumor," Microscopy Research and Technique, vol. 81, no. 2, pp. 171–180, 2018.
[57] Z. Mahmood, O. Haneef, N. Muhammad, and S. Khattak, "Towards a fully automated car parking system," IET Intelligent Transport Systems, vol. 13, no. 2, pp. 293–302, 2019.
[58] B. Mughal, N. Muhammad, and M. Sharif, "Adaptive hysteresis thresholding segmentation technique for localizing the breast masses in the curve stitching domain," International Journal of Medical Informatics, vol. 126, pp. 26–34, 2019.


[59] B. Mughal, N. Muhammad, and M. Sharif, "Deviation analysis for texture segmentation of breast lesions in mammographic images," The European Physical Journal Plus, vol. 133, no. 11, p. 455, 2018.
[60] S. Khalid, N. Muhammad, and M. Sharif, "Automatic measurement of the traffic sign with digital segmentation and recognition," IET Intelligent Transport Systems, vol. 13, no. 2, pp. 269–279, 2018.
[61] H. Khan, M. Sharif, N. Bibi, and N. Muhammad, "A novel algorithm for the detection of cerebral aneurysm using sub-band morphological operation," The European Physical Journal Plus, vol. 134, no. 1, p. 34, 2019.
[62] Z. Mahmood, N. Bibi, M. Usman, U. Khan, and N. Muhammad, "Mobile cloud based-framework for sports applications," Multidimensional Systems and Signal Processing, pp. 1–29, 2019.
[63] W. Ni, X. Gao, and Y. Wang, "Single satellite image dehazing via linear intensity transformation and local property analysis," Neurocomputing, vol. 175, pp. 25–39, 2016.


Biographies

Hira Khan received the B.S. degree in computer science from Fatima Jinnah Women University, Rawalpindi, Pakistan, in 2015. She is currently pursuing the M.S. degree in computer science with the COMSATS Institute of Information Technology, Wah Cantt, Pakistan. Hira Khan’s research expertise encompasses topics, such as Digital image processing, Image denoising models, Image dehazing, Medical image analysis, Video processing and Mathematical modeling.

Muhammad Sharif is Associate Professor at the Department of Computer Science, COMSATS Institute of Information Technology, Pakistan. His interests are in digital signal processing, and he has more than 100 publications.

Nargis Bibi received a Ph.D. degree in Computer Science from the School of Computer Science, University of Manchester, UK in 2014. She received her M.Sc. from Fatima Jinnah Women University (FJWU), Rawalpindi, Pakistan, and is currently Assistant Professor at the Department of Computer Science, FJWU. Her interests are digital signal processing, OFDM, coding theory, and information theory.

Jamal Hussain Shah is a PhD Scholar at the University of Science and Technology of China (USTC), China. He graduated from COMSATS Institute of Information Technology, Pakistan in 2011. His areas of interest are digital image processing and networking. Mr. Jamal has more than five years of experience in teaching and IT related projects.

Sajjad Ali Haider is working as Assistant Professor at the Department of Electrical Engineering, COMSATS Institute of Information Technology (CIIT), Wah Cantt. He completed his BS (Computer Engineering) from CIIT Islamabad in 2005. He completed in MS in Embedded systems and Control engineering from Leicester University, UK in 2007. He received his PhD degree from Chongqing University, China in 2014. He is associated with EE department, CIIT Wah since October 2005. His research interests include Control Systems, System Identification and Machine Learning.

Saira Zainab is working as Assistant Professor at the Department of Mathematics, University of Wah, Wah Cantt, Pakistan. She completed her Ph.D. in Mathematics, in 2012 from CIIT, Islamabad, Pakistan. Her research interests include Functional Analysis and Variation Inequality of digital data.

Muhammad Usman received his Bachelor of Science in Electrical Engineering with major in telecommunications in 2007 from Pakistan. He has worked in the telecommunication industry in Pakistan from 2008 to 2012. Meanwhile he completed his Master of Science degree in Engineering Management in Pakistan. In 2015, he completed his doctoral studies from Hanyang University, South Korea. Currently, the author is working as Assistant Professor in Ghulam Ishaq Khan Institute of Engineering Sciences & Technology, Pakistan. The author is also a member of prestigious professional societies such as Institute of Electrical and Electronics Engineer, Optical Society of America and Japanese Applied Physics Society. The author has also served as a reviewer for various research projects and student grants.

Yasir Bashir received his PhD degree under the supervision of Prof. Dr. Tudor Zamfirescu from Abdus Salam School of Mathematical Sciences, Government College University Lahore, Pakistan in 2014. Currently, he is Assistant Professor at the Department of Mathematics, COMSATS Institute of Information Technology, Wah Cantt, Pakistan. He received his BS Mathematics degree from University of Punjab, Lahore, Pakistan. His area of interest is Discrete Mathematics and Computational Geometry.

Nazeer Muhammad received a Ph.D. degree in Applied Mathematics from Hanyang University, South Korea in 2015. He received the prestigious Pakistan Government higher education commission (HEC) scholarship award for MS and Ph.D. Currently, he is Assistant Professor at the Department of Mathematics, COMSATS University Islamabad, Wah Campus, Pakistan. His interests are digital signal processing, data hiding, image denoising, digital holography, OFDM, and information theory.
