Digital Signal Processing 23 (2013) 1337–1343
Contents lists available at SciVerse ScienceDirect
Digital Signal Processing www.elsevier.com/locate/dsp
The symmetric logarithmic image processing model

Laurent Navarro (a), Guang Deng (b,*), Guy Courbebaisse (c)

(a) Ecole Nationale Supérieure des Mines, CSI-EMSE, CNRS UMR 5307, LGF, F-42023 Saint-Etienne, France
(b) Department of Electronic Engineering, La Trobe University, Victoria 3083, Australia
(c) CREATIS, CNRS UMR 5220, Inserm U 1044, UCB Lyon 1, INSA Lyon, University of Lyon, 7 Av. J. Capelle, 69621 Villeurbanne, France
Article info

Article history: Available online 16 July 2013

Keywords: Logarithmic image processing model; Generalized linear system; Vector space; Unsharp masking
Abstract

In this paper, a new extension of the logarithmic image processing (LIP) model, called the symmetric logarithmic image processing (SLIP) model, is proposed. Inspired by previously developed symmetric models, the SLIP model defines a vector space on a symmetric bounded set. The development is aimed at (1) maintaining the physical interpretation of the LIP model and (2) solving the potential out-of-range problem of the LIP model. The SLIP model is also able to deal with transmitted and reflected light images. The advantage of the SLIP model is demonstrated through an image enhancement application using the generalized unsharp masking algorithm.

© 2013 Elsevier Inc. All rights reserved.
1. Introduction Logarithmic image processing has been extensively studied. For example, Oppenheim et al. [1] developed the homomorphic filter theory to deal with signals combined by multiplication or convolution operations. Stockham [2] proposed an image enhancement method based on the homomorphic theory. Jourlin and Pinoli developed the logarithmic image processing (LIP) model [3]. The main idea of the LIP model is to provide a mathematical theory for the processing of transmitted-light images in a bounded intensity range. It has been shown that the LIP model is consistent with some physical laws and certain characteristics of the human visual system [4,5]. Recent developments include: adaptive neighborhood [6], an information theoretic connection [7], Bregman divergence minimization [8], parametrization [9], and the giga-vision sensor model [10]. The LIP model and its extensions have found applications in a number of areas [5,8,11–18]. However, an inherent problem of the LIP model is that it could lead to the out-of-range problem. This is due to the specific vector space used. More specifically, the intensity of an image is modeled by its gray tone function valued in the bounded set [0, M ). The addition and scalar multiplication of the gray tone functions are first defined as a positive linear cone on [0, M ) [19]. In order to define a vector space, the interval [0, M ) is extended to (−∞, M ) such that the scalar multiplication with a real scalar and the subtraction operation can be defined. However, the definition of the vector space on the interval (−∞, M ) means that the result of the LIP model processing is also in the interval (−∞, M ). This leads
* Corresponding author. Fax: +61 3 9471 0524.
E-mail addresses: [email protected] (L. Navarro), [email protected] (G. Deng), [email protected] (G. Courbebaisse).
1051-2004/$ – see front matter © 2013 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.dsp.2013.07.001
to the potential out-of-range problem. For example, an 8-bit image has the range of [0, 256). The pixel value of the processed image could be out of this range. A careful re-scaling process is needed to deal with this problem. Other developments, either aiming at solving the out-of-range problem for image restoration or inspired by the LIP model, include the log-ratio model [20], the homomorphic LIP model [21] and the pseudo-LIP model [22,23]. Unlike the LIP model, these models usually do not have a clear physical interpretation. However, these models do not have the out-of-range problem. From a generalized linear system point of view, the fundamental difference between these models is in the way the generating function is defined (refer to Table 1). A comparison of these generating functions shows that the generating function of the LIP model is asymmetrical about 0, while the generating functions of other models presented in Table 1 are symmetrical about the center of the domain. Although the LIP model has been extensively studied, the potential out-of-range problem has not been systematically addressed. This work is inspired by the mathematical theory and the physical relevance of the LIP model. We aim at developing a modified LIP model such that it keeps the mathematical elegance and physical interpretation of the original LIP model while avoiding the out-of-range problem. Specifically, we address the out-of-range problem by proposing an extension of the interval from [0, M ) to (− M , M) such that the generating function is symmetrical about zero. As such, we call the modified LIP model the symmetric LIP (SLIP) model. The SLIP model has the following desirable properties: (1) the vector space is properly defined, (2) in the interval [0, M ), it is exactly the same as the LIP model, and (3) it does not suffer from the out-of-range problem. The rest of this paper is organized as follows. 
In Section 2, we briefly introduce the LIP model and other existing symmetric
models. In Section 3, we describe the proposed SLIP model. We show that the SLIP model is able to deal with both transmitted and reflected light images. We also show how it solves the out-of-range problem by defining the new vector space. In Section 4, we demonstrate an application of the SLIP model in image enhancement using the generalized unsharp masking algorithm. In Section 5, we present a summary of this paper.

Table 1
A comparison of the LIP model with the proposed symmetric LIP (SLIP) model and other symmetric models.

            | LIP [3]         | SLIP (proposed)          | LR [20]       | HLIP [21]                 | SPLIP [22]
ψ(f)        | −M ln(1 − f/M)  | −sgn(f) M ln(1 − |f|/M)  | ln(f/(M − f)) | (1/2) ln((1 + f)/(1 − f)) | f/(1 − |f|)
Domain of ψ | (−∞, M)         | (−M, M)                  | (0, M)        | (−1, 1)                   | (−1, 1)
Range of ψ  | (−∞, ∞)         | (−∞, ∞)                  | (−∞, ∞)       | (−∞, ∞)                   | (−∞, ∞)

2. The LIP model and symmetric models

In this section, we first present a brief review of the key concepts in linear vector space. We then describe the main ideas of the LIP model and some symmetric models.

2.1. Linear vector space

A linear vector space S over a field K, generally R or C, is equipped with a vector addition + and a scalar multiplication × of each element of S by each element of K on the left [19]. These two operations satisfy a number of classical properties. A positive linear cone S⁺ over K⁺, generally R⁺ or C⁺, is equipped with a vector addition + and a scalar multiplication × of each element of S⁺ by each element of K⁺ on the left, where K⁺, R⁺ and C⁺ are constituted by the positive elements of K, R and C respectively. A positive linear cone has the same properties and operation rules as those of a linear vector space. Thus it is closed for addition and for positive scalar multiplication. A neutral element exists for the addition, which is commutative and associative. The positive scalar multiplication also has a neutral element and follows the associative and distributive laws.

2.2. The LIP model
2.2.1. The gray tone function

In the LIP model, an image is represented by its associated gray tone function, denoted f, which is defined on a non-empty spatial domain D in R^2. A gray tone function takes values in the bounded real interval [0, M), where M is strictly positive, called the gray tone range. Elements of [0, M) are called gray tones. The gray tone function f is related to the gray level function f̃ as follows:

f = M − f̃   (1)

The intensity scale is inverted such that 0 represents total whiteness or transparency and M represents absolute blackness or opacity. In the LIP model, the addition operation is defined for adding two transmitted-light images. Physically, the addition of two transmitted-light images follows the classical transmittance law, and total blackness cannot be reached. As such, the scale inversion is justified, because 0 corresponds to total transparency and is the neutral element for the addition in this case.

2.2.2. The structure of the LIP model

The initial aim of the LIP model was to define an addition operation which is closed in the bounded positive real intensity range [0, M) [3,28], i.e., a positive linear cone on [0, M). The addition of two gray tone functions f and g is defined as:

f ⊕ g = f + g − fg/M   (2)

and the multiplication by a positive real scalar λ ∈ R⁺ is defined as:

λ ⊗ f = M − M(1 − f/M)^λ   (3)

In order to extend the positive linear cone to a vector space, the gray tone range is extended from [0, M) to (−∞, M). Jourlin and Pinoli [3] defined the negative of a gray tone function f as:

⊖f = −M f/(M − f)   (4)

and extended the scalar multiplication to any real number λ ∈ R. The subtraction operation can then be defined as:

f ⊖ g = M (f − g)/(M − g)   (5)

The set of gray tone functions valued in the range (−∞, M) is related to R through the fundamental isomorphism [28] defined as:

ψLIP(f) = −M ln((M − f)/M)   (6)

However, the set on which the vector space is defined is asymmetric. The LIP operations are bounded in the positive part [0, M) and unbounded in the negative part (−∞, 0]. For example, the result of the subtraction operation is unbounded when f < g, i.e.,

f ⊖ g ∈ (−∞, 0)   (7)
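The LIP operations (2), (3) and (5) and their out-of-range behavior can be checked numerically. The following sketch is ours (function names and the choice M = 256 are illustrative, not from the paper):

```python
# Sketch of the LIP operations for an 8-bit range; M = 256 is our choice.
M = 256.0

def lip_add(f, g):
    """LIP addition, eq. (2): closed in [0, M)."""
    return f + g - f * g / M

def lip_smul(lam, f):
    """LIP scalar multiplication, eq. (3)."""
    return M - M * (1 - f / M) ** lam

def lip_sub(f, g):
    """LIP subtraction, eq. (5): unbounded below when f < g."""
    return M * (f - g) / (M - g)

# Addition and positive scalar multiplication stay inside [0, M) ...
print(lip_add(200.0, 200.0))   # 243.75
print(lip_smul(2.0, 128.0))    # 192.0
# ... but subtraction can leave [0, M) when f < g:
print(lip_sub(50.0, 250.0))    # about -8533.3, far below 0
```

The last line is exactly the unboundedness stated in the text: the result lies in (−∞, 0) and can be arbitrarily far from the valid intensity range.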
This property leads to the potential out-of-range problem.

2.3. Symmetric models

A key component of the LIP model and the existing symmetric models is the function ψ(f), which is used to define the generalized addition ⊕ and scalar multiplication ⊗:
f ⊕ g = ψ⁻¹(ψ(f) + ψ(g))   (8)

and

λ ⊗ f = ψ⁻¹(λ ψ(f))   (9)
where λ ∈ R. In this paper, we call ψ( f ) the generating function of a generalized linear system. In the following we briefly discuss existing symmetric models using their respective generating functions. Unlike the LIP model, these symmetric models do not have clear physical or physiological justifications. A summary of these models is presented in Table 1. The log-ratio (LR) model [20] was developed to systematically solve the out-of-range problem in image restoration. In the development, a set of desirable mathematical properties for the generating function was proposed. The LR model’s generating function is a simple example that has the required properties. The main
idea of the so-called homomorphic LIP (HLIP) [21] is also purely mathematical. In the HLIP model, the generating function is defined such that the signal set in (−1, 1) is closed for the addition and scalar multiplication operations. The pseudo-LIP model [23] is a logarithmic-like image processing model with the signal set in [0, 1). However, the limitation of this model is that it cannot deal with scalar multiplication with a negative scalar. This is because the signal set and the associated operations do not form a vector space. Thus the subtraction operation cannot be defined. The symmetric pseudo-LIP (SPLIP) model [22] is developed in order to extend the pseudo-LIP to a vector space. The aim of the SPLIP model is similar to the aim of the HLIP. The generating function is a modification of that of the pseudo-LIP such that the signal set in (−1, 1) is closed for the addition and scalar multiplication operations.
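The generating-function construction of (8) and (9) is straightforward to implement. As a sketch (ours, not from the paper), the HLIP generating function from Table 1, ψ(f) = (1/2) ln((1 + f)/(1 − f)) = atanh(f), keeps the results of both operations inside (−1, 1):

```python
import math

# Generic generalized addition and scalar multiplication, eqs. (8) and (9),
# driven by a generating function psi and its inverse.

def gen_add(f, g, psi, psi_inv):
    return psi_inv(psi(f) + psi(g))      # eq. (8)

def gen_smul(lam, f, psi, psi_inv):
    return psi_inv(lam * psi(f))         # eq. (9)

# HLIP generating function (Table 1): psi(f) = 0.5*ln((1+f)/(1-f)) = atanh(f),
# whose inverse is tanh; signal range (-1, 1).
psi_hlip = math.atanh
psi_inv_hlip = math.tanh

s = gen_add(0.9, 0.9, psi_hlip, psi_inv_hlip)
print(s)   # close to 1 but strictly inside (-1, 1)
print(gen_smul(-2.0, 0.5, psi_hlip, psi_inv_hlip))   # -0.8, inside (-1, 1)
```

Because ψ maps the bounded signal set onto the whole real line, any sum or scaling in the ψ-domain maps back into the bounded set; this is exactly the closure property that the LIP model lacks.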
3. The symmetric logarithmic image processing model

We present the details of the proposed symmetric logarithmic image processing (SLIP) model, including the space of gray levels, the fundamental isomorphism, and the vectorial operations. We also discuss the relationship between the LIP model and the SLIP model in terms of mathematical properties, dealing with transmitted and reflected light images, and solving the out-of-range problem.

3.1. The space of gray levels

In the SLIP model, an image is represented by its associated gray level function, denoted f, defined on a non-empty spatial domain D in R^2. The gray level functions take values in the bounded symmetric real interval (−M, M), where M is strictly positive, called the gray level range. Elements of (−M, M) are called gray levels. M represents the maximum light intensity and −M the total light absorption.

3.2. The fundamental isomorphism

The SLIP model is based on an odd isomorphism inspired by the LIP model's isomorphism, in order to obtain a model which has the same behavior for positive and negative values. In the SLIP model, the fundamental isomorphism and its inverse are defined as:

ψSLIP(f) = −M sgn(f) ln((M − |f|)/M)   (10)
         = −M ln((M − f)/M)  for 0 ≤ f < M,
           M ln((M + f)/M)   for −M < f < 0   (11)

and

ψSLIP⁻¹(f) = M sgn(f) (1 − e^(−|f|/M))   (12)
           = M (1 − e^(−f/M))   for 0 ≤ f < M,
             −M (1 − e^(f/M))   for −M < f < 0   (13)

where the signum function is defined on R as: sgn(x) = 1 for x > 0, sgn(x) = −1 for x < 0, and sgn(x) = 0 for x = 0. The signum function must be understood in the sense of mathematical distributions [29].

3.3. The vectorial operations

In order to define a vector space, addition and scalar multiplication operations are defined below using the fundamental isomorphism (10) and the definitions in (8) and (9). The addition of two gray levels f and g is defined as:

f ⊕ g = M sgn(f + g) [1 − (1 − |f|/M)^γ1 (1 − |g|/M)^γ2]   (14)

where

γ1 = sgn(f)/sgn(f + g)   (15)

and

γ2 = sgn(g)/sgn(f + g)   (16)

The multiplication by a scalar λ (λ ∈ R) is defined as:

λ ⊗ f = M sgn(λf) [1 − (1 − |f|/M)^|λ|]   (17)

The set of gray level functions valued on the symmetric range (−M, M), structured with this addition and this multiplication by a real scalar, defines a real vector space.
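A numerical sketch of the SLIP definitions (function names and the choice M = 1 are ours): the isomorphism and its inverse cancel, the operations stay inside (−M, M), and, unlike the LIP scalar multiplication, λ ⊗ f remains bounded for negative λ:

```python
import math

M = 1.0  # gray level range (-M, M); our choice for the sketch

def sgn(x):
    return (x > 0) - (x < 0)

def psi_slip(f):
    """Fundamental isomorphism, eq. (10)."""
    return -M * sgn(f) * math.log((M - abs(f)) / M)

def psi_slip_inv(y):
    """Inverse isomorphism, eq. (12)."""
    return M * sgn(y) * (1 - math.exp(-abs(y) / M))

def slip_add(f, g):
    """SLIP addition, eq. (14), computed through the isomorphism as in (8)."""
    return psi_slip_inv(psi_slip(f) + psi_slip(g))

def slip_smul(lam, f):
    """SLIP scalar multiplication, closed form of eq. (17)."""
    return M * sgn(lam * f) * (1 - (1 - abs(f) / M) ** abs(lam))

def lip_smul(lam, f):
    """LIP scalar multiplication, eq. (3), extended to negative lam."""
    return M - M * (1 - f / M) ** lam

print(psi_slip_inv(psi_slip(0.7)))   # 0.7 (up to rounding)
print(slip_add(0.5, 0.5))            # 0.75, inside (-M, M)
# Negative scalar: LIP escapes toward -infinity, SLIP stays in (-M, 0):
print(lip_smul(-2.0, 0.5))           # -3.0
print(slip_smul(-2.0, 0.5))          # -0.75
```

The last two lines preview the comparison made in Section 3.4.2: for λ < 0 and f > 0 the LIP result lies in (−∞, 0) while the SLIP result lies in (−M, 0).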
3.4. Discussions

3.4.1. Properties of the SLIP operations
Since the SLIP model is defined from a vector space point of view, its addition operation ⊕ satisfies the following abelian group axioms. (1) The signal space is closed under the operation, i.e., f ⊕ g ∈ (−M, M) for all f, g ∈ (−M, M). (2) The operation is associative. (3) The operation is commutative. (4) The identity element is 0, such that 0 ⊕ f = f. (5) There is an inverse, denoted ⊖f, for all f ∈ (−M, M), such that f ⊕ (⊖f) = 0. It can easily be shown that ⊖f = (−1) ⊗ f = −f. In addition, the signal set is closed under the scalar multiplication operation ⊗, i.e., λ ⊗ f ∈ (−M, M) for λ ∈ R and f ∈ (−M, M). The operation ⊗ is also distributive, in that λ ⊗ (f ⊕ g) = (λ ⊗ f) ⊕ (λ ⊗ g) and β ⊗ (λ ⊗ f) = (β × λ) ⊗ f for β, λ ∈ R and f, g ∈ (−M, M). Its identity element is 1, i.e., 1 ⊗ f = f. Therefore, the signal set is closed for these two operations. These properties are essential for solving the out-of-range problem. Moreover, the subtraction operation can be defined as f ⊖ g = f ⊕ (⊖g).

3.4.2. Mathematical relationship between the LIP and the SLIP models
From a mathematical point of view, we can clearly see that when f ∈ [0, M) the two generating functions ψLIP(f) and ψSLIP(f) are exactly the same. Therefore, the SLIP model is the same as the LIP model for the pixel value range [0, M). However, the two models differ in the way they deal with negative values. For example, when λ < 0 and f > 0, the LIP product λ ⊗ f is in (−∞, 0) while the SLIP product is in (−M, 0). Similarly, when −∞ < f < g < M, the LIP difference f ⊖ g is in (−∞, 0), and when −M < f < g < M, the SLIP difference f ⊖ g is in (−M, 0).

3.4.3. Dealing with transmitted and reflected light images
We comment on the difference between the two models in terms of how they deal with transmitted and reflected light images. A transmitted-light image is the image formation model used in the development of the LIP model.
The model is basically a light filter with a spatially variant absorption function which is called the gray tone function. In the LIP model, an image is represented by the light filter denoted f which is related to the gray scale image t by f = M − t. As such, the light filter representation is a gray scale inversion of the gray scale representation. On the other hand, a reflected light image is defined as the light energy reflected by the scene and acquired by an imaging sensor. It has been demonstrated that the LIP model can also deal with the reflected light images [10]. Therefore, when using the LIP model,
Fig. 1. Original Lena image (left) and the image viewed through a light filter (right). The filter has 3 horizontal bands of 25% light absorption and the remaining two bands are without any absorption.
Fig. 2. Comparison of the function ψ of the LIP, SLIP, HLIP and SPLIP models. The two functions for the LIP and SLIP models have the same behavior for positive values, but for negative values the LIP model is not symmetric.
a transmitted-light image requires a scale inversion before further processing. For a reflected light image, the LIP operations can be performed directly on image gray levels. The SLIP model can deal with these two types of images, but in a different way. More specifically, the positive part [0, M ) of the interval (− M , M ) is dedicated to reflected light images, while the negative part (− M , 0] is dedicated to the transmitted-light images. It is easy to show that for two reflected light images denoted by g 1 and g 2 , the SLIP addition of them is given by
g = g1 ⊕ g2 = g1 + g2 − g1 g2/M   (18)

which is the same as that produced by the LIP model. Since the value of the light filter for a transmitted-light image is defined in the interval (−M, 0], the addition of two filters denoted by f̄1 and f̄2 is performed by the SLIP addition:

f̄ = f̄1 ⊕ f̄2   (19)

To add two transmitted-light images represented by their respective gray level representations t1 and t2, we need to convert them into their respective light filter representations using f̄1 = t1 − M and f̄2 = t2 − M; the resulting image can then be recovered by:

t = M + (f̄1 ⊕ f̄2)   (20)
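On a single pixel, the filter composition of (18)-(20) reproduces the classical multiplicative transmittance law. A sketch (ours, with M = 1): viewing a reflected-light pixel x = 0.6 through a 25% absorbing filter yields 0.6 × 0.75 = 0.45:

```python
M = 1.0  # gray level range (-M, M); our choice for the sketch

def sgn(x):
    return (x > 0) - (x < 0)

def slip_add(f, g):
    """SLIP addition, closed form of eqs. (14)-(16)."""
    s = sgn(f + g)
    if s == 0:
        return 0.0
    g1, g2 = sgn(f) * s, sgn(g) * s   # exponents gamma1, gamma2
    return M * s * (1 - (1 - abs(f) / M) ** g1 * (1 - abs(g) / M) ** g2)

x = 0.6            # reflected-light pixel, in [0, M)
y_bar = -M / 4     # filter with 25% absorption, in (-M, 0]
x_bar = x - M      # convert the image to its filter representation
t = M + slip_add(x_bar, y_bar)   # recover the gray level, as in eq. (20)
print(t)           # 0.45 = 0.6 * 0.75 (up to rounding): transmittance law
```

Note that for two filter values in (−M, 0], sgn(f + g) = −1 and both exponents equal 1, so the SLIP addition reduces to the product of the individual transmittances, as the physical interpretation requires.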
To demonstrate how the SLIP model deals with these two types of images, we consider the simple case of viewing a reflected light image denoted by x (0 ≤ x < M) through a light filter denoted by ȳ (−M < ȳ ≤ 0). In order to perform the addition of these two images, we must convert them into the same type. For example, we can convert the reflected light image into the filter representation by x̄ = x − M. A simulation example is shown in Fig. 1. In this figure, the Lena image x is viewed through the light filter ȳ, which has 3 horizontal bands of 25% light absorption, i.e., a value of −M/4 (non-absorptive bands have a value of 0). The resulting image is then obtained as M + (x̄ ⊕ ȳ) and is shown in Fig. 1.

3.4.4. Solving the out-of-range problem
It has been realized by researchers [8,20,21,23] that the key to solving the out-of-range problem is to define a bijective generating function such that
ψ : (a, b) → (−∞, ∞)   (21)
where a and b are the lower bound and upper bound for the amplitude of the signal. In fact, for f , g ∈ (a, b) and α ∈ R, we can easily show that the results of addition and scalar multiplication operations [defined in (8) and (9)] are bounded in the interval (a, b). In the original LIP model with a positive linear cone, we have a = 0 and b = M and λ ∈ [0, ∞). The result of the addition and
Fig. 3. The block diagram of a generalized unsharp masking algorithm [8].
scalar multiplication operations using a positive real scalar are bounded in [0, M). However, defining the subtraction operation, or equivalently extending the domain of λ from [0, ∞) to (−∞, ∞), requires extending the domain of the signal from [0, M) to (−∞, M). As such, the results of the two operations are not bounded in [0, M). This causes the out-of-range problem. In developing the LR, HLIP and SPLIP models, researchers defined the generating function such that condition (21) is satisfied. A drawback of this approach is that the generating functions do not have any direct physical interpretation. This is in contrast to the LIP model, which is developed from a physical image formation model and whose generating function has recently been shown to be a special case of the giga-vision sensor model [10]. The proposed SLIP model, on the other hand, is a modification of the LIP model in that (1) it is identical to the LIP model for f, g ≥ 0 and λ ≥ 0, and (2) it solves the out-of-range problem of the LIP model by using a symmetric extension such that the results of the two operations are in (−M, M). A comparison of these generating functions is shown in Fig. 2.

4. Application in image enhancement

Generalized unsharp masking (GUM) algorithms for image enhancement have been studied by many researchers. Recently, a generalized linear system based GUM algorithm [8] and a parametric LIP model based GUM algorithm [12] have been proposed. Image enhancement has also been studied based on mathematical morphology, wavelet design and logarithmic quality measures [24-27]. Since the LR model is closely related to the SLIP model, we borrow the idea of the LR model based GUM algorithm presented in [8] and develop an SLIP model based GUM algorithm. For easy reference, the block diagram of the algorithm is reproduced in Fig. 3. The output image is produced by the following equation:
v = z ⊕ [γ ⊗ (x ⊖ y)]   (22)
where y is the result of edge-preserved filtering (using iterative median filtering) of the original image x and z is the result of contrast enhancement of y using adaptive histogram equalization. The
Fig. 4. Comparison of results.
operations ⊕, ⊖ and ⊗ depend on the particular generating function ψ. For example, in [8] the log-ratio model and the tangent system are studied. In this paper, we study the application of the SLIP model in the GUM algorithm; in this case the SLIP operations (⊕, ⊖, ⊗) are used. The dynamic range of an N-bit digital image is [0, 2^N − 1], which can be conveniently normalized to the range (0, 1). The SLIP model cannot be directly applied since its dynamic range is (−1, 1). A simple way to get around this problem is to use the linear function 2x − 1 to scale the image x from (0, 1) to (−1, 1). The scaled image is then processed by the SLIP-based GUM (SLIP-GUM) algorithm, and the result is scaled back from (−1, 1) to (0, 1) to obtain the final result.
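The pipeline just described can be sketched in one dimension. This is our simplification of the algorithm of [8]: a plain running median stands in for the edge-preserving filter, and z = y replaces adaptive histogram equalization; only the scaling and the SLIP arithmetic follow the text:

```python
import statistics

M = 1.0

def sgn(x):
    return (x > 0) - (x < 0)

def slip_add(f, g):
    """SLIP addition, eqs. (14)-(16)."""
    s = sgn(f + g)
    if s == 0:
        return 0.0
    return M * s * (1 - (1 - abs(f) / M) ** (sgn(f) * s)
                      * (1 - abs(g) / M) ** (sgn(g) * s))

def slip_smul(lam, f):
    """SLIP scalar multiplication, eq. (17)."""
    return M * sgn(lam * f) * (1 - (1 - abs(f) / M) ** abs(lam))

def slip_gum(x, gamma=3.0, radius=1):
    """1-D sketch of eq. (22), v = z + gamma*(x - y) in SLIP arithmetic.
    Simplifications (ours): a running median for y, and z = y in place of
    adaptive histogram equalization."""
    u = [2.0 * v - 1.0 for v in x]                      # (0, 1) -> (-1, 1)
    y = [statistics.median(u[max(0, i - radius): i + radius + 1])
         for i in range(len(u))]                        # edge-preserving smoother
    v = [slip_add(yi, slip_smul(gamma, slip_add(ui, -yi)))
         for ui, yi in zip(u, y)]                       # eq. (22) with z = y
    return [(vi + 1.0) / 2.0 for vi in v]               # back to (0, 1)

out = slip_gum([0.1, 0.2, 0.95, 0.2, 0.1])
print(out)   # every value stays strictly inside (0, 1)
```

By construction the SLIP operations are closed in (−1, 1), so the output is guaranteed to map back into (0, 1) without any re-scaling step.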
We compare the results of the SLIP-GUM algorithm with those presented in [8]. We use the "canyon" image, which is a 16-bit image. The results are shown in Fig. 4: the original image, the enhanced image using Meylan's algorithm [30], Farbman's result [31], the result of adaptive histogram equalization (a MATLAB Image Processing Toolbox function called adapthisteq), and the results of the SLIP-GUM and log-ratio GUM (LR-GUM) algorithms. For the SLIP-GUM and LR-GUM algorithms, we use the same parameters as follows:
• a vertical–horizontal cross (3 × 3) mask for iterative median filtering with 5 iterations;
• a nonlinear adaptive gain control [8];
Fig. 5. Comparison of the generating function ψSLIP of the SLIP model with that of the LR model, ψLR. For comparison purposes, the domain of the function ψLR is linearly mapped from (0, 1) to (−1, 1).
• an adaptive histogram equalization (a MATLAB Image Processing Toolbox function called adapthisteq) with the contrast factor of 0.005.

Color images are first converted from the RGB color space to the HSI space. The I component is processed; the color components H and S are not processed, to avoid potentially changing the overall color appearance of the image. Meylan's algorithm [30] is a state-of-the-art retinex-type algorithm (1) which requires a sophisticated re-scaling process with carefully tuned parameters for every image. Farbman's result [31] is obtained using a multi-resolution tone manipulation method in which the problem of the halo effect is identified and edge-preserving filters are used to replace linear filters. The use of an iterative median filter in our work is inspired by Farbman's work.

(1) The source code is available at: ivrgwww.epfl.ch/supplementary_material.

In Fig. 4, we can see that the SLIP-GUM algorithm produces an enhanced image which is very similar to that produced by the LR-GUM algorithm. This is a direct result of the similarity between the generating function ψSLIP of the SLIP model and that of the LR model, denoted by ψLR. To visualize their similarity, we linearly map the domain of the function ψLR from (0, 1) to (−1, 1) so that the two functions can be plotted in one figure, shown in Fig. 5. We can see that the two functions are indeed very similar; similar image enhancement results are therefore expected. In addition, the overall contrast of these two images is to some extent better than that of Farbman's result and is definitely better than Meylan's result. The inferior result of Meylan's algorithm may be due to the lack of fine tuning of parameters for this particular image; in our simulation only the default parameters are used. The adaptive histogram equalization algorithm enhances the overall contrast of the image but does not enhance its sharpness. The SLIP-GUM and LR-GUM algorithms, however, enhance both the contrast and the sharpness of the image.

Next, we demonstrate the advantage of the SLIP over the LIP model using the generalized unsharp masking algorithm. Using the LIP model, the GUM algorithm can be written as:

v = z ⊕ [γ ⊗ (x ⊖ y)]   (23)

where ⊕, ⊗ and ⊖ denote the LIP operations. The image data is normalized such that the pixel value is in (0, 1). Since (x ⊖ y) ∈ (−∞, 1), the result of the LIP-GUM algorithm can be potentially out of the range (0, 1). This effect is shown in Fig. 6, which compares the original 100th line of the "cameraman" image with the processing results of the SLIP-GUM and LIP-GUM algorithms. The difference between the two algorithms is only in the operations being used. We can clearly see that while the SLIP-GUM algorithm is free of the out-of-range problem, the LIP-GUM algorithm does have this problem.
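The effect shown in Fig. 6 can be reproduced on a single pixel near a strong edge (a sketch, ours): with data normalized to (0, 1), the LIP combination leaves the range, while the SLIP combination, applied after the linear mapping 2u − 1, does not:

```python
M = 1.0

def sgn(x):
    return (x > 0) - (x < 0)

def lip_add(f, g):
    """LIP addition, eq. (2)."""
    return f + g - f * g / M

def lip_sub(f, g):
    """LIP subtraction, eq. (5)."""
    return M * (f - g) / (M - g)

def slip_add(f, g):
    """SLIP addition, eqs. (14)-(16)."""
    s = sgn(f + g)
    if s == 0:
        return 0.0
    return M * s * (1 - (1 - abs(f)) ** (sgn(f) * s)
                      * (1 - abs(g)) ** (sgn(g) * s))

def slip_sub(f, g):
    return slip_add(f, -g)

# One pixel near a strong edge: x dark, its smoothed value y bright.
x, y, z = 0.05, 0.95, 0.5                  # data normalized to (0, 1)
v_lip = lip_add(z, lip_sub(x, y))          # LIP-GUM combination, gamma = 1
print(v_lip)                               # about -8.5: out of (0, 1)

# SLIP works on (-1, 1): scale with 2u - 1, process, scale back.
xs, ys, zs = 2 * x - 1, 2 * y - 1, 2 * z - 1
v_slip = (slip_add(zs, slip_sub(xs, ys)) + 1) / 2
print(v_slip)                              # about 0.005: stays inside (0, 1)
```

This is the single-pixel version of the comparison in Fig. 6: the same unsharp-masking combination, with only the operations exchanged, either leaves or respects the valid intensity range.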
Fig. 6. A comparison of original signal (the 100th line of the cameraman image) with the processing results of the SLIP-GUM and the LIP-GUM algorithms.
5. Conclusion

In this paper, we propose the SLIP model, which is a modification of the LIP model. The SLIP model is the same as the LIP model in the pixel range [0, M) and thus enjoys the same physical interpretation as the LIP model. By defining a vector space on the symmetric bounded range (−M, M), the SLIP model systematically solves the out-of-range problem inherent to the LIP model. The advantage of the SLIP model has been demonstrated by using a recently published generalized unsharp masking algorithm for enhancing the contrast and sharpness of an image. We show that the performance of the SLIP-GUM algorithm is the same as that of the LR-GUM algorithm and is comparable to those of state-of-the-art algorithms. Further work on the SLIP model will be the development of mathematical objects that require a symmetric bounded vector space, such as the wavelet transform and the Fourier transform [15]. On the applications side, the physical properties of transmitted-light images modeled by light filters will be used in practical applications such as setting up multi-lighting filters in experimental mechanics. The SLIP model is a new tool for the mathematical modeling of these complex lighting systems.

References

[1] A.V. Oppenheim, R.W. Schafer, T.G. Stockham Jr., Nonlinear filtering of multiplied and convolved signals, Proc. IEEE 56 (8) (1968) 1264-1291.
[2] T. Stockham, Image processing in the context of a visual model, Proc. IEEE 60 (7) (1972) 828-842.
[3] M. Jourlin, J.-C. Pinoli, A model for logarithmic image processing, J. Microsc. 149 (1988) 21-35.
[4] J.C. Brailean, B.J. Sullivan, C.-T. Chen, M.L. Giger, Evaluating the EM algorithm for image processing using a human visual fidelity criterion, in: Proc. IEEE ICASSP, 1991, pp. 2957-2960.
[5] J.-C. Pinoli, A general comparative study of the multiplicative homomorphic, log-ratio and logarithmic image processing approaches, Signal Process. 58 (1) (1997) 11-45.
[6] J.-C. Pinoli, J. Debayle, Logarithmic adaptive neighbourhood image processing (LANIP): introduction, connections to human brightness perception, and application issues, EURASIP J. Adv. Signal Process. (2007), http://dx.doi.org/10.1155/2007/36105.
[7] G. Deng, An entropy interpretation of the logarithmic image processing model with application to contrast enhancement, IEEE Trans. Image Process. 18 (5) (2009) 1135-1140.
[8] G. Deng, A generalized unsharp masking algorithm, IEEE Trans. Image Process. 20 (5) (2011) 1249-1261.
[9] K. Panetta, S. Agaian, Y. Zhou, E.J. Wharton, Parameterized logarithmic framework for image enhancement, IEEE Trans. Syst. Man Cybern. B: Cybernetics 41 (2) (2011) 460-473.
[10] G. Deng, A generalized logarithmic image processing model based on the giga-vision sensor model, IEEE Trans. Image Process. 21 (3) (2012) 1406-1414.
[11] K. Panetta, E. Wharton, S. Agaian, Human visual system-based image enhancement and logarithmic contrast measure, IEEE Trans. Syst. Man Cybern. B: Cybernetics 38 (1) (2008) 174-188.
[12] K. Panetta, Y. Zhou, S. Agaian, H. Jia, Nonlinear unsharp masking for mammogram enhancement, IEEE Trans. Inf. Technol. Biomed. 15 (6) (2011) 918-928.
[13] C. Nercessian, K. Panetta, S. Agaian, Multiresolution decomposition schemes using the parameterized logarithmic image processing model with application to image fusion, EURASIP J. Adv. Signal Process. (2011), http://dx.doi.org/10.1155/2011/515084.
[14] G. Deng, J.-C. Pinoli, Differentiation-based edge detection using the logarithmic image processing model, J. Math. Imaging Vision 8 (2) (1998) 161-180.
[15] G. Courbebaisse, F. Trunde, M. Jourlin, Wavelet transform and LIP model, Image Anal. Stereol. 21 (2002) 121-125.
[16] M. Lievin, F. Luthon, Nonlinear color space and spatiotemporal MRF for hierarchical segmentation of face features in video, IEEE Trans. Image Process. 13 (1) (2004) 63-71.
[17] J.M. Palomares, J. Gonzalez, E.R. Vidal, A. Prieto, General logarithmic image processing convolution, IEEE Trans. Image Process. 15 (11) (2006) 3602-3608.
[18] S. Agaian, A. Almuntashri, A new edge detection algorithm in image processing based on LIP-ratio approach, in: Proc. IS&T/SPIE Electronic Imaging, 2010, pp. 1041-1047.
[19] N. Dunford, J.T. Schwartz, Linear Operators, Part 1: General Theory, Interscience, 1957.
[20] H. Shvaytser, S. Peleg, Inversion of picture operators, Pattern Recogn. Lett. 5 (1) (1987) 49-61.
[21] V. Patrascu, V. Buzuloiu, C. Vertan, Fuzzy image enhancement in the framework of logarithmic models, Stud. Fuzziness Soft Comput. 122 (2003) 219-236.
[22] C. Florea, C. Vertan, L. Florea, A. Oprea, On the use of logarithmic image processing models in total hip prostheses X-ray visualization and analysis, in: Proc. Medical Image Understanding and Analysis, 2008, pp. 127-131.
[23] C. Vertan, A. Oprea, C. Florea, L. Florea, A pseudo-logarithmic image processing framework for edge detection, in: Advanced Concepts for Intelligent Vision Systems, Springer, 2008, pp. 637-644.
[24] R.G. Kogan, S. Agaian, K. Panetta Lentz, Visualization using rational morphology and zonal magnitude reduction, in: Proc. SPIE Nonlinear Image Processing IX, vol. 3304, 1998, p. 154, http://dx.doi.org/10.1117/12.304595.
[25] S. Agaian, Visual morphology, in: Proc. SPIE Nonlinear Image Processing, vol. 3646, 1999, pp. 139-150.
[26] S. DelMarco, S. Agaian, The design of wavelets for image enhancement and target detection, in: Proc. SPIE Mobile Multimedia/Image Processing, Security, and Applications, vol. 7351, 2009, http://dx.doi.org/10.1117/12.816135.
[27] E. Wharton, S. Agaian, K. Panetta, A logarithmic measure of image enhancement, in: Proc. SPIE Mobile Multimedia/Image Processing for Military and Security Applications, vol. 6250, 2006, http://dx.doi.org/10.1117/12.665693.
[28] M. Jourlin, J.-C. Pinoli, Logarithmic image processing: the mathematical and physical framework for the representation and processing of transmitted images, Adv. Imaging Electron Phys. 115 (2001) 129-196.
[29] L. Schwartz, Méthodes mathématiques pour les sciences physiques, Hermann, 1961.
[30] L. Meylan, S. Susstrunk, High dynamic range image rendering using a retinex-based adaptive filter, IEEE Trans. Image Process. 15 (9) (2006) 2820-2830.
[31] Z. Farbman, R. Fattal, D. Lischinski, R. Szeliski, Edge-preserving decompositions for multi-scale tone and detail manipulation, in: Proc. ACM SIGGRAPH 2008, ACM Trans. Graph. 27 (3) (2008) 1-10.
Laurent Navarro is Research Engineer in Ecole Nationale Supérieure des Mines de Saint-Etienne. He is with the Center for Health Engineering, LPMG-UMR CNRS 5148. His research areas include time–frequency signal processing, mechanical simulation and image processing in the field of biomedical engineering.
Guang Deng is Reader and Associate Professor in Electronic Engineering, La Trobe University. His current research interests include Bayesian signal processing and generalized linear systems.
Guy Courbebaisse is the project leader and scientific coordinator of the FP7 European Project 'THROMBUS' (www.thrombus-vph.eu). He is with CREATIS, CNRS UMR 5220, INSA-Lyon. He has worked successively on airborne counter-measure systems, spark-ignition engines, and polymer processing. His current research is focused on medical imaging.