Discrete Tomography in Discrete Deconvolution: Deconvolution of Binary Images Using Ryser's Algorithm


Electronic Notes in Discrete Mathematics 20 (2005) 555–571 www.elsevier.com/locate/endm

Behzad Sharif a,1,2 and Behnam Sharif b

a Coordinated Science Laboratory and ECE Department, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA

b University of Manitoba, Winnipeg, Canada

Abstract

A new deconvolution algorithm for binary images based on the theory of discrete tomography is proposed. The proposed algorithm is inherently binary, as opposed to traditional filtering techniques such as Wiener filtering, which require thresholding to produce binary images. Time and space complexity of the proposed algorithm are polynomial in the image size, whereas the two-dimensional Viterbi method has exponential complexity. Application of the proposed method to equalization of two-dimensional inter-symbol interference channels such as page-oriented optical memories is demonstrated. Through numerical simulations, it is shown that the method can outperform traditional methods such as Wiener filtering, especially in low signal-to-noise scenarios.

Keywords: Discrete Tomography, Ryser's Theorem, Switching Components, Discrete Deconvolution, Two-dimensional Inter-symbol Interference Channels, Page-oriented Optical Memories, Two-dimensional Channel Equalization.

1 The first author would like to thank Professor Richard E. Blahut for introducing and motivating the problem of binary deconvolution to him.
2 Email: [email protected]

1571-0653/$ – see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.endm.2005.05.085


1 Introduction

In this work, we propose an inherently binary algorithm for deconvolution of binary images. We apply a fundamental result in discrete tomography, namely Ryser's theorem and algorithm, to the problem of deconvolution of binary images. In Section 2, using the structure of the point spread function, we reduce the deconvolution problem to a binary tomography problem with two available orthogonal projections. In Section 3, an initial reconstruction is computed using Ryser's algorithm. Next, Ryser's theorem is utilized to identify variant positions, and the blurred image is used to resolve the ambiguities. Perfect reconstruction conditions for an ideal noiseless case are given. Several ideas to battle the noise are presented, such as rounding the line sums or applying a Metropolis algorithm. In Section 4, application of the proposed method to equalization of two-dimensional inter-symbol interference (ISI) in page-oriented optical memories is described. Section 5 provides extensive numerical experiments to demonstrate the viability of the method. Finally, Section 6 summarizes the contributions and concludes the paper.

2 Problem Formulation

2.1 The Discrete Deconvolution Problem

A general two-dimensional deconvolution problem can be written as

(1)   b(x, y) = p(x, y) ∗∗ c(x, y) + w(x, y)

where c(x, y) is the unknown field, p(x, y) denotes the point spread function (PSF) or convolution kernel, and w(x, y) is the additive noise. We refer to b(x, y) as the "dirty image" and to c(x, y) as the "original image" or unknown field. For a nonanalytical solution, c(x, y) must be discretized. In what follows, it is assumed that the unknown field can be sufficiently represented by a weighted sum of shifted versions of a basis function Φ(x, y) as follows:

(2)   c(x, y) = Σ_i Σ_j c_{ij} Φ(x − i, y − j)

For instance, the basis functions are often chosen to be the set of unit-height boxes corresponding to a two-dimensional array of square pixels. In that case, {c_ij} are the heights of the square pixels, or simply the values of the pixels. Collecting all the observations into a matrix b and the unknowns {c_ij} into a matrix c of size m × n, and denoting the discretized kernel by P, gives the following matrix equation:

(3)   b(:) = P c(:) + w(:)

where (:) denotes the lexicographic ordering (vector form) of a matrix and w is the measurement noise. Since we are dealing with a binary inverse problem, the pixel values are restricted to be either one or zero. Figure 1 shows a sample phantom, a PSF and the resulting dirty image. The measurement noise is additive white Gaussian. The signal-to-noise ratio (SNR), defined as 20 log10(1/σ_w), is 10 dB, where σ_w² is the variance of the noise samples. Given the dirty image data b and the kernel P, the task of discrete deconvolution is to find the binary field c such that P c(:) ≈ b(:).

Fig. 1. (a) Original 64 × 64 binary image (original image) (b) The point spread function (symmetric blurring kernel) (c) The blurred and noisy image (dirty image).
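To make the discretized model concrete, the following sketch generates a dirty image from a binary phantom, using SciPy's convolve2d in place of an explicit P matrix. The random phantom, the function names, and the parameter values (g = 0.44, h = 0.11, 10 dB SNR, borrowed from the first experiment in Section 5) are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def dirty_image(c, g, h, sigma_w, rng=None):
    """Blur a binary image c (entries in {0, 1}) with the 3x3 symmetric PSF
    [[h, g, h], [g, 1, g], [h, g, h]] and add white Gaussian noise of
    standard deviation sigma_w, i.e., b = P c(:) + w(:) in image form."""
    rng = np.random.default_rng() if rng is None else rng
    psf = np.array([[h, g, h],
                    [g, 1.0, g],
                    [h, g, h]])
    blurred = convolve2d(c, psf, mode="same", boundary="fill", fillvalue=0)
    return blurred + sigma_w * rng.standard_normal(c.shape)

# Illustrative 64 x 64 phantom with a frame of zeros (cf. Section 2.2),
# blurred at SNR = 20*log10(1/sigma_w) = 10 dB:
rng = np.random.default_rng(0)
c = (rng.random((64, 64)) < 0.5).astype(float)
c[0, :] = c[-1, :] = c[:, 0] = c[:, -1] = 0
b = dirty_image(c, g=0.44, h=0.11, sigma_w=10 ** (-10 / 20), rng=rng)
```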


2.2 Reducing the Deconvolution Problem to a Tomography Problem

Throughout the paper, we assume that the PSF has the 3 × 3 symmetric form

      h  g  h
      g  1  g
      h  g  h

for some g, h ∈ R+. The P matrix in Eq. (3) is the discrete convolution kernel corresponding to this PSF. Assuming that the first and last rows and the first and last columns of c are zero (that is, c has a frame of zeros at its borders), it can be shown that

      Σ_i b_{i,j} = (g + 2h) · Σ_i c_{i,j−1} + (1 + 2g) · Σ_i c_{i,j} + (g + 2h) · Σ_i c_{i,j+1}

for 1 ≤ j ≤ n, where for j = 1 the left term is absent and for j = n the right term is absent. This forms a well-posed system of linear equations that can be solved to find the column sums s_j = Σ_{i=1}^{m} c_{i,j} and, similarly, the row sums r_i = Σ_{j=1}^{n} c_{i,j}. It can be shown that a general symmetric discrete PSF can be treated by the same token (assuming a thick enough frame of zeros for c). Having the row and column sums, the problem is reduced to a binary tomography problem, in other words, the reconstruction of a binary matrix c from its row and column sums. We are now in a position to apply the discrete tomography framework to solve the discrete deconvolution problem.
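As a minimal sketch of this reduction (assuming the 3 × 3 symmetric PSF and the frame of zeros described above), the column sums of b satisfy a tridiagonal linear system in the column sums of c, and similarly for the rows; the function below simply solves the two systems with NumPy. The function name is a hypothetical choice.

```python
import numpy as np

def recover_line_sums(b, g, h):
    """Recover the (real-valued) row sums r and column sums s of the binary
    image c from the dirty image b, assuming the 3x3 symmetric PSF of
    Section 2.2 and a frame of zeros around c."""
    def solve_tridiagonal(line_sums_of_b):
        n = line_sums_of_b.size
        A = (np.diag(np.full(n, 1.0 + 2.0 * g))
             + np.diag(np.full(n - 1, g + 2.0 * h), k=1)
             + np.diag(np.full(n - 1, g + 2.0 * h), k=-1))
        return np.linalg.solve(A, line_sums_of_b)

    s = solve_tridiagonal(b.sum(axis=0))   # column sums of c
    r = solve_tridiagonal(b.sum(axis=1))   # row sums of c
    return r, s

# In the noiseless case the exact integer line sums are obtained by rounding:
# r, s = recover_line_sums(b, g, h)
# R, S = np.rint(r).astype(int), np.rint(s).astype(int)
```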

3 The Proposed Method

3.1 Background on Reconstruction from Two Projections

In this subsection, we briefly state theoretical results by Ryser [4,15] which provide an answer to the uniqueness and existence questions in the binary tomography problem. This subsection closely follows [12]. Let R = (r_1, . . . , r_m) and S = (s_1, . . . , s_n) be nonnegative vectors. The class of all binary matrices c = (c_ij) ∈ {0, 1}^{m×n} with row and column sums (line sums) equal to R and S, respectively, is called the tomographic equivalence class U(R, S).

Definition 3.1 Consider the matrix c̄ whose i-th row consists of r_i ones followed by n − r_i zeros. Such a matrix is called maximal and is uniquely determined by its row sums.

Now, let S̄ be the column sum vector of c̄. Also, denote the nonincreasing permutations of the elements of R and S by Ŕ and Ś, respectively.


The following theorem answers the existence question, that is, whether U(R, S) is empty or not:

Theorem 3.2 (Ryser, 1957) Let R and S be a pair of compatible row and column sum vectors. The class U(R, S) is nonempty if and only if

(4)   Σ_{j=ℓ}^{n} ś_j ≥ Σ_{j=ℓ}^{n} s̄_j,   2 ≤ ℓ ≤ n.
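For illustration, condition (4) can be checked directly from the definitions above; the helper name and the simple range checks for compatibility are assumptions of this sketch.

```python
import numpy as np

def ryser_nonempty(R, S):
    """Check condition (4) of Theorem 3.2: is the class U(R, S) nonempty?
    R and S are integer row and column sum vectors."""
    R = np.asarray(R, dtype=int)
    S = np.asarray(S, dtype=int)
    m, n = R.size, S.size
    if R.sum() != S.sum() or R.min() < 0 or S.min() < 0 or R.max() > n or S.max() > m:
        return False                         # not a compatible pair
    s_bar = np.array([(R >= j).sum() for j in range(1, n + 1)])  # column sums of the maximal matrix
    s_sorted = np.sort(S)[::-1]                                  # nonincreasing permutation of S
    # tail sums of the sorted S must dominate those of s_bar for l = 2..n
    return all(s_sorted[l:].sum() >= s_bar[l:].sum() for l in range(1, n))

# Example with hypothetical line sums:
# ryser_nonempty([2, 1, 1], [2, 1, 1])   # -> True
```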

For the proof and more details see [12]. Ryser proposed an algorithm for the construction of c when the condition of Theorem 3.2 is satisfied, which we refer to as Ryser's algorithm and quote here:

Ryser's Algorithm:
Input: a compatible pair (R, S) satisfying (4).
Step 1: Construct Ś from S by permutation π.
Step 2: Let T = c̄ and k = n.
Step 3: While (k > 1),
  While (ś_k > Σ_{i=1}^{m} t_{i,k}),
    let j_0 = max{j < k | t_{i,j} = 1, t_{i,j+1} = · · · = t_{i,k} = 0};
    let row i_0 be where such a j_0 was found;
    set t_{i_0,j_0} = 0 and t_{i_0,k} = 1 (i.e., shift the 1 to the right).
  k = k − 1.
Step 4: c = π^{−1}(T).
Step 5: Output c.

The complexity of the algorithm is O(mn + n log n); a code sketch of this construction is given after Definition 3.3 below.

Definition 3.3 A "switching component" (SC) of a binary matrix c is a 2 × 2 submatrix of either of the following forms: A1 = [1 0; 0 1] or A2 = [0 1; 1 0].
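Referring back to the algorithm quoted above, the following is a direct Python transcription, intended only as an illustration (0-based indexing; raising an error when no shift exists is a choice made here, since the quoted algorithm assumes a valid input pair).

```python
import numpy as np

def ryser(R, S):
    """Construct one member of U(R, S) by a direct transcription of the
    quoted algorithm. Assumes (R, S) is compatible and satisfies (4)."""
    R = np.asarray(R, dtype=int)
    S = np.asarray(S, dtype=int)
    m, n = R.size, S.size
    pi = np.argsort(-S, kind="stable")        # Step 1: permutation sorting S nonincreasingly
    S_sorted = S[pi]
    T = np.zeros((m, n), dtype=int)           # Step 2: the maximal matrix c_bar
    for i in range(m):
        T[i, :R[i]] = 1
    for k in range(n - 1, 0, -1):             # Step 3: columns n .. 2
        while S_sorted[k] > T[:, k].sum():
            i0, j0 = -1, -1                   # find the largest j0 < k with t[i, j0] = 1
            for i in range(m):                # and only zeros from j0 + 1 up to column k
                ones = np.flatnonzero(T[i, :k])
                if ones.size and T[i, ones[-1] + 1:k + 1].sum() == 0 and ones[-1] > j0:
                    i0, j0 = i, int(ones[-1])
            if i0 < 0:
                raise ValueError("no valid shift exists; U(R, S) is empty")
            T[i0, j0], T[i0, k] = 0, 1        # shift the 1 to the right
    c = np.empty_like(T)                      # Step 4: undo the column permutation
    c[:, pi] = T
    return c
```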

A "switching operation" is a transformation of the elements of c that changes a submatrix of type A1 into type A2 or vice versa. Collecting the row and column sums into vectors R and S and denoting the tomographic equivalence class by U(R, S), Ryser's theorem can be stated as follows:

Theorem 3.4 (Ryser's Theorem) If ∃ c1, c2 ∈ U(R, S) such that c1 ≠ c2, then c1 is transformable into c2 by a finite number of switching operations.

The following corollary characterizes the "variant positions" (for a definition see [12]):


Corollary 3.5 If position (i, j) is variant, then it is part of an SC. That is, ∃ i°, j° such that either (c_ij = c_i°j° = 1 and c_i°j = c_ij° = 0) or (c_ij = c_i°j° = 0 and c_i°j = c_ij° = 1).

3.2 Resolving the Switching Component Ambiguities

Having the row and column sums, we can get a reconstruction, say ĉ, by running Ryser's algorithm stated above. Next, using Corollary 3.5, we know that all of the ambiguities (variant positions) in ĉ are SCs. For each SC in ĉ with corners at (i, j) and (i°, j°), we can "correct" it using the following rule:

Discrete Deconvolution (DD) rule: if b_{i,j} + b_{i°,j°} ≥ b_{i,j°} + b_{i°,j}, then decide ĉ_{i,j} = ĉ_{i°,j°} = 1 and ĉ_{i,j°} = ĉ_{i°,j} = 0, and vice versa.

Note that here we are using information from the dirty image b. This is the key difference between the deconvolution and tomography problems. Using the extra information available in the dirty image, we aim to resolve the SC ambiguities. A "good" SC in ĉ is also an SC in the original image c, i.e., one of the cases in Corollary 3.5; if it is not an SC in c, we refer to it as a "bad" SC. The following lemma asserts that, assuming the PSF given in Section 2.2 with h = 0, the DD rule makes a correct decision for every good SC in the noiseless case, provided that g is small enough.

Lemma 3.6 If g < 1/4, then the DD rule will make the correct decision for every good SC in the noiseless case.

Proof. Consider a good SC with corners at (i, j) and (i°, j°). Without loss of generality, assume c_{i,j} = c_{i°,j°} = 1 and c_{i,j°} = c_{i°,j} = 0. The worst-case situation for the DD rule happens when the (i, j°) and (i°, j) positions are surrounded by ones in c (original image) and, because of the non-ideal PSF, each of the surrounding ones contributes a nonzero value to the (i, j°) and (i°, j) positions in b (dirty image). Considering the form of the PSF given above and assuming h = 0, the maximum possible value of b_{i°,j} or b_{i,j°} is hence 4g. Therefore, the DD rule makes a correct decision if 4g + 4g < 1 + 1, which is equivalent to g < 1/4. □

The Noiseless-DD algorithm is the following:
Step 1. Solve for the row and column sums using the method in Section 2.2.
Step 2. Apply Ryser's algorithm to obtain an initial reconstruction ĉ.
Step 3. For each undecided SC at (i, j) and (i°, j°) do the following:
  If b_{i,j} + b_{i°,j°} ≥ 2 or b_{i,j°} + b_{i°,j} ≥ 2, then this is a good SC, so apply the DD rule;
  else do not change the SC and continue.
Step 4. Return ĉ as the reconstruction result.

Under the ideal conditions mentioned above, one can detect the good SCs and correct only those. That is, although we do not have access to c, we can say whether an SC in ĉ is an SC in c. It is straightforward that this can be done simply by summing up the corresponding diagonal elements in b, as in Step 3 of the algorithm (note that we have assumed g < 1/4 and h = 0). Since the only difference between c and ĉ is in good SCs (according to Ryser's theorem), and because the DD rule corrects any good SC (Lemma 3.6), it follows that the Noiseless-DD algorithm gives a perfect reconstruction of c. A code sketch of this procedure is given below.

Figure 2 shows the simulation result for the noise-free scenario, where the blurring kernel is 3 × 3 symmetric with g = 0.24 and h = 0. Panel (a) shows the original binary image. Panel (b) is the original Ryser's algorithm reconstruction. The image in (c) is derived from (b) by one switching operation based on the DD rule. Similarly, the next iteration step gives the image in (d) and, finally, with one more switching operation, we get back the original image of (a).

Fig. 2. (a) Original binary image (b) Original Ryser's algorithm reconstruction. Panels (c) and (d) show the evolution of the proposed method. Notice that the difference between (b) and (c), (c) and (d), or (d) and (a) is only one switching operation. The final reconstruction result is exactly the one in (a) and we have perfect reconstruction.
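A sketch of one DD pass over the switching components of ĉ, assuming a naive scan over all 2 × 2 submatrices (the paper does not prescribe a scanning strategy); with noiseless=True only SCs certified as good via the dirty image are corrected, as in Noiseless-DD.

```python
import numpy as np
from itertools import combinations

def dd_pass(c_hat, b, noiseless=True):
    """One pass of the DD rule over the switching components (SCs) of c_hat,
    using a naive O(m^2 n^2) scan over 2x2 submatrices. Modifies c_hat in place."""
    m, n = c_hat.shape
    for i, i2 in combinations(range(m), 2):
        for j, j2 in combinations(range(n), 2):
            a, d = c_hat[i, j], c_hat[i2, j2]      # main-diagonal corners
            x, y = c_hat[i, j2], c_hat[i2, j]      # anti-diagonal corners
            if a != d or x != y or a == x:
                continue                           # not a switching component
            diag, anti = b[i, j] + b[i2, j2], b[i, j2] + b[i2, j]
            if noiseless and max(diag, anti) < 2:
                continue                           # cannot certify a good SC; leave it
            if diag >= anti:                       # the DD rule
                c_hat[i, j] = c_hat[i2, j2] = 1
                c_hat[i, j2] = c_hat[i2, j] = 0
            else:
                c_hat[i, j] = c_hat[i2, j2] = 0
                c_hat[i, j2] = c_hat[i2, j] = 1
    return c_hat

# Multiple passes (cf. Section 3.3) until the reconstruction stops changing:
# while True:
#     prev = c_hat.copy()
#     c_hat = dd_pass(c_hat, b, noiseless=False)
#     if np.array_equal(prev, c_hat):
#         break
```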


3.3 Rounding and Modified Ryser's Algorithm

In practice, the dirty image contains noise and the perfect-reconstruction results of the last part do not apply. The simplest idea to battle the noise is rounding the row and column sums, which gives the exact line sums only if the noise power is small enough. Otherwise, the reconstruction will be based on perturbed projections and will be imperfect. A more severe problem occurs when, after rounding, the resulting (R, S) pair fails to satisfy the condition of Theorem 3.2. In that case, the class U(R, S) is empty and Step 3 of Ryser's algorithm may fail, since it is possible that no (j_0, i_0) exists that satisfies the required condition. One may think of a way to optimally alter the faulty line sums to get a nonempty class; for instance, the optimality measure may be the minimum ℓ1 perturbation of the measured line sums, which leads to a combinatorial optimization problem. Adopting a different approach, we propose to modify Ryser's algorithm in order to get a reconstruction, and then hope that application of the DD rule will reduce the number of errors. The modification is to add a line in the inner loop of Step 3 as follows:

Step 3: While (k > 1),
  While (ś_k > Σ_{i=1}^{m} t_{i,k}),
    ...
    If no such j_0 is found, decrease k by 1 and go to the start of Step 3.
    ...

Furthermore, it is no longer possible to detect good SCs, and the algorithm applies the DD rule to every SC. This may cause complications in the case of bad SCs; the theoretical analysis of this issue is still open. However, through numerical experiments, we learned that instead of just scanning ĉ once for SCs and deciding for each of them, it is better to do multiple passes. In all of the experiments of Section 5, the number of bit errors, i.e., |c − ĉ|, converges in at most three passes. A summary of the proposed algorithm, referred to as the DD algorithm, is shown in Figure 3.

Fig. 3. Flowchart of the proposed algorithm (DD algorithm).
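A sketch of the modified construction for rounded, possibly inconsistent line sums; it repeats the earlier Ryser sketch with the single added guard, and the clipping of the rounded sums to a feasible range is an extra assumption of this illustration.

```python
import numpy as np

def modified_ryser(R, S):
    """Ryser's construction with the added line of Section 3.3: when no valid
    j0 exists for the current column (possible after rounding noisy line sums),
    give up on that column and move on instead of failing."""
    R = np.asarray(R, dtype=int)
    S = np.asarray(S, dtype=int)
    m, n = R.size, S.size
    R = np.clip(R, 0, n)                     # keep rounded sums in a feasible range
    S = np.clip(S, 0, m)
    pi = np.argsort(-S, kind="stable")
    S_sorted = S[pi]
    T = np.zeros((m, n), dtype=int)
    for i in range(m):
        T[i, :R[i]] = 1
    for k in range(n - 1, 0, -1):
        while S_sorted[k] > T[:, k].sum():
            i0, j0 = -1, -1
            for i in range(m):
                ones = np.flatnonzero(T[i, :k])
                if ones.size and T[i, ones[-1] + 1:k + 1].sum() == 0 and ones[-1] > j0:
                    i0, j0 = i, int(ones[-1])
            if i0 < 0:       # the added line: no such j0 exists,
                break        # so decrease k and go to the start of Step 3
            T[i0, j0], T[i0, k] = 0, 1
    c = np.empty_like(T)
    c[:, pi] = T
    return c

# Typical use with noisy data (r, s from the line-sum sketch in Section 2.2):
# c0 = modified_ryser(np.rint(r), np.rint(s)); then apply the DD rule to every SC.
```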


3.4 Further Discussion on Battling the Noise

It is known that for an ill-posed inverse problem such as limited-angle tomography, forcing the solution to satisfy all the noisy measurements is not a good strategy [3]. Reconstruction of binary images from their line sums in a finite number of directions is also ill-posed: even a small perturbation in the measurements can lead to dramatically different yet still unique solutions. In [2], the authors prove theorems asserting that in a general discrete tomography problem a perfect match to the noisy line sums may be far from the true image, while a nonperfect match can be quite close. In [1], it is asserted that the instability result of [2] is not valid for reconstruction from two projections, which is our case. The results there are proved for perturbations of size 2 (in the ℓ1 norm), and a general result is still to come. It should be noted that studies on the stability of reconstruction algorithms which incorporate a priori knowledge also exist in the literature [7,13]. In [5], the authors focus on stability results for special lattice sets, especially convex ones. In [8], experimental studies on the reconstruction of convex binary bodies from noisy projections are presented. It is worth mentioning that the applications we focus on do not allow for any prior assumption on the structure of the binary field, e.g., convexity or connectivity. Here we introduce some practical ideas that can be combined with the DD algorithm:

• Metropolis Algorithm on Ryser's Graph: In the Ryser graph of a tomographic equivalence class U, two vertices are adjacent if and only if the images corresponding to the two vertices differ by only one switching operation [11]. Given a real positive function defined on the vertices of the graph, the Metropolis algorithm seeks to optimize the function using an iterative stochastic method (for details see [11]). In the problem at hand, one can postprocess the reconstructed binary image ĉ using the Metropolis algorithm with the following objective function (a code sketch is given after this list):


(5)   E(ĉ_i) ≈ | (1/(m·n)) ‖b − Pĉ_i‖₂² − σ̂_w² |₁

where ĉ_i is the image associated with the i-th vertex and σ̂_w² is the estimated noise variance (|·|₁ and ‖·‖₂ denote the ℓ1 and ℓ2 norms, respectively). The idea behind the proposed objective function is that if ĉ_i ≈ c, then E((1/(m·n))‖b − Pĉ_i‖₂²) ≈ E((1/(m·n))‖w‖₂²) ≈ σ̂_w². This is similar to the discrepancy principle used in parameter estimation. Since the computational complexity of the Metropolis algorithm is much higher than that of Ryser's algorithm, i.e., it takes much longer to reach a close-to-optimal solution, it makes sense to apply it as a postprocessor to the output of the DD algorithm.

• Two-dimensional Wiener Denoising: Assuming Gaussian noise statistics, one can denoise the dirty image b prior to computing the line sums and then apply the DD algorithm. The problem with this idea is that the Wiener filter tends to smooth edges, and if the original binary image is not smooth, it may worsen the situation by introducing extra errors into the dirty image. Therefore, for applications where the underlying images are not typically smooth, one should avoid filtering of any kind.

• Divide and Conquer: For data storage applications, it is in our power to predesign the system so that, instead of having to reconstruct the whole unknown field c(x, y), the problem decomposes into smaller subproblems. This can be accomplished by dividing the c matrix into, say, N² submatrices and enforcing frames of zeros around them (which wastes a negligible number of storage cells). In this way, we can deal with each of the N² subproblems individually, which reduces the number of terms in the line sums by a factor of N. Assuming independent, identically distributed noise, this reduces the noise power in a row sum by a factor of N.
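As an illustration of the Metropolis post-processing, the sketch below proposes random switching operations (which preserve the line sums, i.e., moves along edges of the Ryser graph) and accepts them with a Metropolis rule that drives the objective (5) toward zero. The proposal scheme, temperature and iteration count are assumptions of this sketch, not choices made in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def metropolis_postprocess(c_hat, b, psf, sigma_w2, n_iter=20000, temperature=0.01, rng=None):
    """Random walk on the Ryser graph: each proposal is one switching operation,
    accepted with the Metropolis rule applied to the objective (5)."""
    rng = np.random.default_rng() if rng is None else rng
    c = c_hat.astype(float)
    mn = c.size

    def objective(x):
        resid = b - convolve2d(x, psf, mode="same", boundary="fill", fillvalue=0)
        return abs((resid ** 2).sum() / mn - sigma_w2)

    E = objective(c)
    ones = np.argwhere(c == 1)
    if len(ones) < 2:
        return c.astype(int)
    for _ in range(n_iter):
        (i, j), (i2, j2) = ones[rng.choice(len(ones), 2, replace=False)]
        if i == i2 or j == j2 or c[i, j2] != 0 or c[i2, j] != 0:
            continue                       # the four corners do not form a switching component
        c[i, j] = c[i2, j2] = 0            # propose one switching operation
        c[i, j2] = c[i2, j] = 1
        E_new = objective(c)
        if E_new <= E or rng.random() < np.exp((E - E_new) / temperature):
            E = E_new                      # accept: refresh the list of ones
            ones = np.argwhere(c == 1)
        else:                              # reject: undo the switch
            c[i, j] = c[i2, j2] = 1
            c[i, j2] = c[i2, j] = 0
    return c.astype(int)
```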

4 Applications

As stated in the introduction, the main application of this method is in equalization (deconvolution) of two-dimensional communication or storage systems. One instance of such problems is the removal of ISI in page-oriented (page-access) optical memories, where the ISI pattern is two-dimensional and the unknown field is inherently binary [6,9].

During the past decade, volume holographic data storage has emerged as a promising digital storage technology because of its ideal properties such as high data storage density, high data rate, and short access times. A typical setup for a volume holographic data storage system is shown in Figure 4.


Fig. 4. Typical setup of a digital holographic data storage system

Data are input to the system via a spatial light modulator (SLM) and, in the simplest case, each bit is mapped to a single SLM pixel. Several data pages are recorded holographically by a reference beam. The ability to multiplex several pages into a fixed volume allows for high data densities. As storage density increases, the performance of the storage channel is degraded due to ISI and an increase in noise level. Any such deviation from ideal imaging results in inter-pixel cross talk and, generally, in a higher bit-error rate (BER). Recently, researchers have proposed the use of signal processing and estimation techniques to decrease the BER of the retrieved data, which eases the design tolerances of the optical components and permits the use of large data pages. The techniques range from Wiener filtering, or linear minimum mean-squared-error estimation (LMMSE) [6,10], to a combination of the Viterbi algorithm with a decision-feedback equalizer [9]. The filtering-type methods are unable to incorporate the knowledge that the original image is binary, while the problem with Viterbi-type algorithms is their exponential computational complexity.

The developed deconvolution algorithm (Figure 3) seems like an ideal match for this application: it is inherently binary, has polynomial time complexity, and has linear space complexity. In all cases it is assumed that the input data page is much larger than the optical system blur, so that edge effects can be neglected. This validates the assumption made in Section 2.2 about the frame of zeros around the image. The blurring in page-access optical memories is due to optical defocusing, aberrations of the optical system, or blooming at the CCD array. The blurring kernel can be reasonably modeled by a 3 × 3 symmetric kernel and can be reliably estimated a priori [9]. Therefore, the method of Section 2.2 is applicable. Another interesting point is that the DD algorithm does not assume any specific noise statistics, which makes it applicable to both Gaussian and non-Gaussian noise situations.


Fig. 5. Reconstruction results for the 1st experiment. The original image is shown in Figure 1. (a) The thresholded Wiener filtering (LMMSE) result (# bit errors = 597) (b) Original Ryser's algorithm reconstruction (# bit errors = 1227) (c) Final DD algorithm result (# bit errors = 434) (d) Output of the Metropolis algorithm performed on the image in (c) (# bit errors = 394).

In fact, the Gaussian noise assumption is only valid for electronic-noise-dominated storage channels; if the channel is dominated by optical noise, the output obeys Rician statistics [6].

Restoration of astronomical images [14] in situations where the PSF can be accurately estimated is another possible application. A typical astronomical image consists of sparse point sources on a smooth background, which can be estimated and subtracted from the measurements, leaving an image with sparse nonzero pixels. In some cases the intensities are almost uniform which, with appropriate preprocessing, results in a discrete deconvolution problem. However, the application of the developed framework is limited to situations where the underlying image can be modeled as a binary image.


Fig. 6. (a) Original binary image of the 2nd experiment (b) The dirty image (PSF same as 1st experiment) with SNR = 10dB (c) The thresholded Wiener filtering (LMMSE) result (# bit errors = 154) (d) Final DD algorithm result (# bit errors = 64).

5 Simulation Results

In this section we provide simulation results corresponding to realistic situations in the optical storage application described in Section 4. The first experiment is the one introduced in Figure 1. The PSF is of the form in Section 2.2 with g = 0.44 and h = 0.11, and the SNR (defined in Section 2.1) is 10 dB. Figure 5 shows the simulation results comparing standard thresholded Wiener filtering to the DD algorithm. Note the dramatic improvement in panel (c) compared to panel (b) that is accomplished by applying the DD rule. The figure also depicts the output of the Metropolis algorithm described in Section 3.4. As can be seen from the images and the number of bit errors, the DD algorithm outperforms the Wiener filter in this case, and the Metropolis algorithm performed on the DD algorithm's output gives a slight further improvement. The explanation is that the DD algorithm is able to exploit the prior knowledge that the underlying field is binary, whereas Wiener filtering uses knowledge of the noise statistics. The second experiment is shown in Figure 6.


Fig. 7. (a) Bit-error rate (BER) for the 1st experiment as a function of blurring kernel intensity g, comparing the DD and Wiener methods for 3 different SNRs (28, 21 and 14 dB) (b) BER as a function of SNR, comparing the DD and Wiener methods for 3 different kernels (g = 0.3, 0.4 and 0.7).


Fig. 8. (a) Bit-error rate (BER) for the 2nd experiment as a function of blurring kernel intensity g, comparing the DD and Wiener methods for 3 different SNRs (28, 21 and 14 dB) (b) BER as a function of SNR, comparing the DD and Wiener methods for 3 different kernels (g = 0.3, 0.4 and 0.7).

The SNR and blurring kernel are the same as in the first experiment, but this time the original image is not as random as before and is much smoother. As can be seen from the results, the improvement factor of DD over Wiener filtering is almost twice that of the first experiment (2.4 compared to 1.3). This implies that not only the SNR and the kernel parameters are factors in the performance of the DD algorithm, but also the properties of the original image itself, e.g., smoothness or connectivity, can have a significant effect on the reconstruction performance.


In order to compare the performance of Wiener filtering and the DD algorithm in more detail, we have computed the BER as a function of g and SNR for both methods (h is taken to be g/4). The results for the first experiment are shown in Figure 7. Each point in the plots is the average BER of 20 Monte Carlo runs. As can be seen from the figure, for low SNRs (less than 14 dB) and moderate values of g (between 0.3 and 0.5), the DD algorithm outperforms the Wiener filter; in the other cases its performance is worse. Figure 8 shows the same results for the second experiment. As expected, here the DD algorithm outperforms Wiener filtering even for moderate SNRs (20 dB and lower) and over almost all of the g range (except for g < 0.3). It should be noted that the first experiment is much closer to reality for a data storage application, whereas the second experiment resembles an imaging application. It is important to note that the DD algorithm implemented here does not use any knowledge of the statistics of the noise (except for the Metropolis result in Figure 5). Overall, the results of this section demonstrate that the proposed algorithm is a viable candidate for computationally efficient deconvolution of binary images when the underlying inverse problem is severely ill-posed.
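For reference, the bit-error rate used throughout this section is simply the fraction of mismatched pixels; the Monte Carlo averaging can be sketched as follows (dd_reconstruct stands for the full pipeline of Figure 3 and is hypothetical, as is the reuse of the dirty_image sketch from Section 2).

```python
import numpy as np

def bit_error_rate(c_true, c_est):
    """Fraction of mismatched pixels between the true and reconstructed binary images."""
    return float(np.mean(c_true.astype(int) != c_est.astype(int)))

# Each plotted point averages 20 Monte Carlo runs, e.g. (hypothetical pipeline):
# bers = [bit_error_rate(c, dd_reconstruct(dirty_image(c, g, g / 4, sigma_w), g, g / 4))
#         for _ in range(20)]
# avg_ber = float(np.mean(bers))
```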

6 Summary and Conclusions

We have proposed a new deconvolution algorithm for binary images that is built upon the theory of discrete tomography, in particular Ryser's theorem. Extensive numerical results have shown that the method can outperform traditional methods such as Wiener filtering, especially in low-SNR scenarios. Overall, the attributes of the proposed DD algorithm are as follows:

(i) The proposed algorithm utilizes the fact that the unknown field is binary; other standard inverse-filtering methods such as Wiener filtering are incapable of doing so.

(ii) The time complexity of the proposed algorithm is polynomial, as opposed to the two-dimensional Viterbi method, which has exponential complexity.

(iii) The proposed algorithm does not use any assumed knowledge of the statistics of the noise or signal. In contrast, all standard ISI equalization methods assume prior statistics for the noise and/or the underlying field.

(iv) To our knowledge, the proposed method is the first combinatorial approach to the binary deconvolution problem.


(v) It was experimentally shown that the algorithm works well under noisy conditions, but theoretical results on the stability and robustness of the algorithm are still open for investigation.

References

[1] Alpers, A., and S. Brunetti, On the stability of reconstructing lattice sets from X-rays along two directions, Lecture Notes in Computer Science; Digital Geometry for Computer Imagery 3429 (2005), 92–103.

[2] Alpers, A., P. Gritzmann, and L. Thorens, Stability and instability in discrete tomography, Lecture Notes in Computer Science; Digital and Image Geometry 2243 (2001), 175–186.

[3] Bertero, M., and P. Boccacci, "Introduction to Inverse Problems in Imaging," Institute of Physics Publishing, London, 1998.

[4] Brualdi, R. A., and H. J. Ryser, "Combinatorial Matrix Theory," Cambridge University Press, Cambridge, UK, 1991.

[5] Brunetti, S., and A. Daurat, Stability in discrete tomography: some positive results (2004), to appear in Discrete Appl. Math.

[6] Chugg, K. M., X. Chen, and M. Neifeld, Two-dimensional equalization in coherent and incoherent page-oriented optical memory, J. Opt. Soc. Am. A 16 (1999), 549–562.

[7] Frese, T., C. A. Bouman, and K. Sauer, Multiscale Bayesian methods for discrete tomography, in: G. T. Herman and A. Kuba, editors, Discrete Tomography: Foundations, Algorithms, and Applications, Birkhäuser, Boston, 1999, pp. 237–264.

[8] Gesu, V. D., and C. Valenti, The stability problem and noisy projections in discrete tomography, Journal of Visual Languages and Computing 15 (2004), 361–371.

[9] Heanue, J. F., K. Gurkan, and L. Hesselink, Signal detection for page-access optical memories with intersymbol interference, Applied Optics 35 (1996), 2431–2438.

[10] Keskinoz, M., and B. V. K. V. Kumar, Application of linear minimum mean-squared-error equalization for volume holographic data storage, Applied Optics 38 (1999), 4387–4393.


[11] Kong, T. Y., and G. T. Herman, Tomographic equivalence and switching operations, in: G. T. Herman and A. Kuba, editors, Discrete Tomography: Foundations, Algorithms, and Applications, Birkhäuser, Boston, 1999, pp. 59–84.

[12] Kuba, A., and G. T. Herman, Discrete tomography: A historical overview, in: G. T. Herman and A. Kuba, editors, Discrete Tomography: Foundations, Algorithms, and Applications, Birkhäuser, Boston, 1999, pp. 3–34.

[13] Matej, S., A. Vardi, G. Herman, and E. Vardi, Binary tomography using Gibbs priors, in: G. T. Herman and A. Kuba, editors, Discrete Tomography: Foundations, Algorithms, and Applications, Birkhäuser, Boston, 1999, pp. 191–212.

[14] Molina, R., J. Nunez, F. J. Corijo, and J. Mateos, Image restoration in astronomy: a Bayesian perspective, IEEE Signal Processing Magazine (2001), 11–29.

[15] Ryser, H. J., Combinatorial properties of matrices of zeros and ones, Canad. J. Math. 9 (1957), 371–377.