Micron, Vol. 25, No. 4, pp. 309-315, 1994
Copyright © 1994 Elsevier Science Ltd. Printed in Great Britain. All rights reserved.
0968-4328/94 $7.00+0.00
0968-4328(94)00013-1
RESEARCH PAPER
Quantization and Dithering Techniques Applied to Digital Microscopy

PRIMO COLTELLI* and PAOLO GUALTIERI†

*CNR, CNUCE, via S. Maria 36, 56126 Pisa, Italy
†CNR, Istituto di Biofisica, via S. Lorenzo 26, 56127 Pisa, Italy

(Received 14 February 1994; Accepted 6 April 1994)
Abstract-The complexity of 24 bit true colour technology is the main problem associated with the display of colour images. However, the average user in digital microscopy possesses only an 8 bit workstation (256 colours). In these cases quantization and dithering algorithms represent a valid solution for displaying images with the same quality as 16 million colour images. In this paper we present and analyse these algorithms, showing the results of their application to digital microscopy images.

Key words: Video microscopy, image analysis, colours, quantization, dithering.

†To whom correspondence should be addressed.

INTRODUCTION

The complexity of the 24 bit true colour technology is the main problem associated with the display of colour images (Foley and VanDam, 1990). Didactic, archival, retrieval and analysis applications in digital microscopy demand a representation of colour images of the same quality as either the microscope image or its photographic reproduction. True colour devices and techniques, for both image input and output, are becoming available (Joseph and Mehl, 1991). The 24 bit frame buffer has a few disadvantages, since image transmission over networks, image handling in windowed user interface environments, and image feature determination are highly demanding in computation and transmission time and in storage requirements. Some of these problems could be solved with the use of specialized hardware or high speed networks, but such solutions create further problems (portability, cost, etc.) for the average user in our field, who normally has access to an 8 bit digital microscopy workstation. In computer graphics, colours can be defined as linear combinations of different intensity values of three primary colours, namely blue, green and red. Each combination corresponds to a point in a three-dimensional colour space (i.e. the RGB cube). Since true colour systems use 256 levels for each primary colour (8 bits), more than 16 million colours can be displayed (Glassner, 1990). The human eye is capable of recognizing about 100 levels for each primary colour; therefore, it can see a million of
these different linear combinations (Pratt, 1978). Thus, it is possible to create and display many more colours than those distinguishable by the human eye. Conversely, low cost graphic computers can simultaneously display only 256 colours, selected from the palette of the whole RGB space; therefore, the image represented by the computer has fewer colours than the same image before acquisition (Liping and Rundquist, 1988; Diaz-Cayeros, 1990). To overcome this problem, much work has been done since the late 1970s on the representation of the chromatic gamut of images and on the relation between the true colours of images and the subset of colours selected from the palette (Floyd and Steinberg, 1975; Heckbert, 1982; Gervautz and Purgathofer, 1990; Hawley, 1990; Wu, 1991; Coltelli et al., 1993). Since then, several algorithms have become available in the literature dealing with the process of reducing the number of colours of an image to a small set of representative colours (quantization), while still maintaining a high information content. Moreover, due to the sensitivity of the human eye to minimal colour variations, images generated by these algorithms present zones of colour discontinuity that decrease image quality. This phenomenon, called 'contouring', can be eliminated by means of a class of algorithms that enhance image quality by representing a larger number of colours with the same small subset of colours (dithering). In this paper, quantization and dithering algorithms are presented and compared with respect to their applicability in the domain of digital microscopy. This solution to the problem of colour image display is practical and useful in most cases, and has only a negligible cost in terms of information loss.
MATERIALS AND METHODS
Hardware and software

Slides of the microscope field were acquired by means of a table scanner (UMAX UC840) that provides 24 bits of colour information per pixel in the RGB space. The acquisition was made taking into account all the elements that can influence the digitization. The sampling step (400 x 400 dots per inch) was chosen to guarantee an optimal compromise between the need to store image data of a reasonable size (about 600 x 400 pixels) and the requirement of microscope resolution (0.5 μm); sensitivity was adjusted and input gamma corrected for the radiometric accuracy of the generated image (Foley and VanDam, 1990). Under these conditions, it is assumed that the acquisition phase does not introduce perceivable errors. Consequently, the acquired images are considered the original images to be processed by the quantization and dithering algorithms, whose source code is furnished upon request. The hardware platform used for the quantization and dithering algorithm tests is a UNIX system (Silicon Graphics Personal IRIS) with a 24 bit direct colour display, used for comparing the images. The algorithms were implemented in the KHOROS environment, which provides a development system and a visual language for image processing (Rasure and Young, 1992). The procedures are written in the C language and divided into:
-quantization procedures;
-total and average quantization error determination procedures;
-dithering procedures;
-image difference procedures.

Quantization and quantization error determination procedures

Here, the term 'quantization' refers to the process of selecting the set of colours, usually called the set of representatives, that gives the best representation of the colour gamut of an image, and of computing the correspondence between the colours in the image and the colours in the selected set. We can state that the best set of representatives minimizes the total quantization error E_t, defined by the following formula:

    E_t = Σ_{(i,j) ∈ M×N} d(c_{i,j}, q(c_{i,j}))    (1)
where d(x, y) is the colour metric in the predefined colour space, which measures the difference between corresponding colours in the original and final images, respectively denoted by c_{i,j} and q(c_{i,j}), and M × N is the set of pixels in the images. The average quantization error E_a is defined as E_t divided by the number M × N of pixels. Therefore, a colour space with an associated metric has to be defined in order to compare the results given by different algorithms. For the lowest computational complexity the RGB colour space is usually chosen; however, the choice of metric should be based on the characteristics of human perception, since it is the human eye that judges the quality of the displayed image. Therefore, the best representation of the chromaticity of an image is computed by using a uniform colour space, in which any two pairs of colours separated by the same Euclidean distance give the same perceived colour difference. For this reason our algorithms use the CIELuv colour space, defined to closely match human perception (Travis, 1991). Further, the computer-aided analysis of microscope images requires that each set of pixels of similar colour in the original image is bound to a unique colour in the final image, and that each colour in the final image uniquely determines the set of colours it represents. This requirement constrains the applicability in microscopy to those quantization algorithms that satisfy this property while still minimizing the quantization error. In the following we briefly describe the quantization algorithms we implemented. The comparison between them is made by computing the average quantization errors produced, both in the CIELuv and RGB spaces, by each of these algorithms applied to the same image.
1. Uniform quantization

The simplest solution for the quantization of colour images is to use a predefined set of representatives and a fixed relationship between them and the image colours. Typical assignments of the output colours are 3 bits for red, 3 bits for green, and only 2 bits for blue, because of the lower sensitivity of the human eye to this primary colour. The 256 colours are therefore combinations of 8 red hues, 8 green hues and 4 blue hues. In our case, however, 6 hues for each primary colour are used, giving 216 equally distributed colours. This very fast solution suffers from the false contouring problem in the final image, since a great number of colours are displayed by the same combination of primary colours. The cost in computation time of the colour map is O(k), where k is the number of representatives.
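A minimal sketch of the 6-level uniform scheme described above: each 8 bit channel is mapped to the nearest of six equally spaced levels (0, 51, 102, 153, 204, 255), and the three quantized channels index a 216-entry colour map. The function names are illustrative:

```c
#include <stdint.h>

/* Map an 8 bit channel value to the nearest of 6 equally spaced
 * levels: 0, 51, 102, 153, 204, 255. */
static uint8_t quantize_channel(uint8_t v) {
    int level = (v * 5 + 127) / 255;   /* nearest level index 0..5 */
    return (uint8_t)(level * 51);
}

/* Index of the representative in a 216-entry colour map
 * (6 red hues x 6 green hues x 6 blue hues). */
int palette_index(uint8_t r, uint8_t g, uint8_t b) {
    return ((r * 5 + 127) / 255) * 36
         + ((g * 5 + 127) / 255) * 6
         + ((b * 5 + 127) / 255);
}
```

Because the mapping ignores the image content entirely, it runs in O(k) for the colour map but produces the false contouring described above.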
2. Median cut quantization

For this algorithm a prequantization step is used: the number of bits is reduced from 8 to 5 or 6 for each primary colour before the representatives are chosen. This solution reduces the computation time, which can otherwise be prohibitive, but has the drawback of cutting the information content. The basic concept of the median cut algorithm is that every representative maps an equal number of pixels of the original image. The algorithm divides the RGB colour space repeatedly into two smaller boxes along the median plane until 256 boxes are generated. The colour that represents a given box is calculated as the weighted average of all the colours in that box (Heckbert, 1982). The cost in computation time of the colour map is O(D log k), where D is the number of different colours in the original image and k is the number of representatives.
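The recursive split can be sketched as follows; this is a reduced illustration of the median cut idea (after Heckbert, 1982), not the paper's own procedure. Boxes are split at the median along their widest channel until k boxes remain, and each box is represented by its mean colour. All names are illustrative, and the comparator's global axis makes the sketch non-reentrant:

```c
#include <stdlib.h>

typedef struct { int c[3]; } Col;   /* c[0..2] = r, g, b */

static int g_axis;                  /* channel used by the comparator */
static int cmp_axis(const void *a, const void *b) {
    return ((const Col *)a)->c[g_axis] - ((const Col *)b)->c[g_axis];
}

/* Recursively split cols[0..n) into `k` boxes; append one mean colour
 * per box to reps, counting them in *nreps. */
static void median_cut(Col *cols, size_t n, int k, Col *reps, int *nreps) {
    if (k == 1 || n <= 1) {         /* box done: representative = mean */
        long sum[3] = {0, 0, 0};
        for (size_t i = 0; i < n; i++)
            for (int ch = 0; ch < 3; ch++) sum[ch] += cols[i].c[ch];
        Col rep;
        for (int ch = 0; ch < 3; ch++)
            rep.c[ch] = (int)(sum[ch] / (long)(n ? n : 1));
        reps[(*nreps)++] = rep;
        return;
    }
    /* find the channel with the widest range in this box */
    int lo[3] = {255, 255, 255}, hi[3] = {0, 0, 0};
    for (size_t i = 0; i < n; i++)
        for (int ch = 0; ch < 3; ch++) {
            if (cols[i].c[ch] < lo[ch]) lo[ch] = cols[i].c[ch];
            if (cols[i].c[ch] > hi[ch]) hi[ch] = cols[i].c[ch];
        }
    g_axis = 0;
    for (int ch = 1; ch < 3; ch++)
        if (hi[ch] - lo[ch] > hi[g_axis] - lo[g_axis]) g_axis = ch;
    /* sort along that channel and cut at the median */
    qsort(cols, n, sizeof(Col), cmp_axis);
    size_t mid = n / 2;
    median_cut(cols, mid, k / 2, reps, nreps);
    median_cut(cols + mid, n - mid, k - k / 2, reps, nreps);
}
```

Cutting at the median pixel count (rather than at the geometric centre of the box) is what guarantees that every representative maps roughly the same number of pixels.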
3. Statistical quantization

This algorithm slightly modifies the previous one by quantizing the colours according to statistical computation. In the median cut algorithm the two boxes generated can be of different size; it is reasonable that the bigger box should need more than one representative, since its colours are more dispersed. This algorithm moves the cutting plane so as to minimize the sum of the variances of the two boxes, and repeatedly cuts the box having the greater variance (Wu, 1991). The cost in computation time of the colour map is O(√D k²), where D is the number of different colours in the original image and k is the number of representatives.

4. Octree quantization

The quantization by means of an Octree data structure, as we implemented it, does not reduce the number of bits from 8 to 6 for each primary colour as the other quantization algorithms do. This algorithm is based on the construction of an Octree whose nodes and leaves are dynamically allocated during the sequential scanning of the original image. It takes the first 256 different colours of the original image as representatives. Whenever a new colour is found, the algorithm merges two neighbouring representatives and substitutes their average for them (Gervautz and Purgathofer, 1990). The cost in computation time of the colour map is of the order of O(N + Dk), where N is the number of pixels in the image, D is the number of different colours in the original image and k is the number of representatives.

Dithering procedures
Clever quantization algorithms, such as the statistical algorithm, generate quite satisfactory images. However, when we examine such an image very closely, by zooming, we would like to improve its colour quality, and for this purpose we use dithering techniques. These techniques hide the contouring effect introduced by the quantization process, and produce the illusion of a number of colours greater than the number actually present in the image. This is due to the human eye's capability of performing a spatial integration over small areas, so that only their global brightness is perceived while the details are lost. Because of the different number of colours in the original image (16 million colours) and in the final image (256 colours), a representative colour of the final image must map a subset of similar colours of the original image. In digital microscopy this substitution must remain unobtrusive: in short, the human eye should not notice, or at least should tolerate, the replacement of the true colour by its representative colour. In the following section we briefly describe the dithering algorithms we implemented. The comparison between them is made by computing the average quantization errors produced, both in the CIELuv and RGB spaces, by each of these algorithms applied to the same image.

5. Error diffusion dithering

Error diffusion algorithms are widespread in their use.
Once the set of representative colours has been computed by the quantization process, the image is processed sequentially, for example from left to right and from top to bottom. For each pixel the quantization error is calculated and distributed to the neighbouring pixels with different weights, according to one of the following matrices:
           [ 0 0 0 ]
    (1/16) [ 0 0 7 ]
           [ 3 5 1 ]

           [ 0 0 0 0 0 ]           [ 0 0 0 0 0 ]
           [ 0 0 0 0 0 ]           [ 0 0 0 0 0 ]
    (1/48) [ 0 0 0 7 5 ]    (1/42) [ 0 0 0 8 4 ]
           [ 3 5 7 5 3 ]           [ 2 4 8 4 2 ]
           [ 1 3 5 3 1 ]           [ 1 2 4 2 1 ]
where the central element of each matrix marks the pixel whose quantization error is calculated and distributed, for each primary colour (Floyd and Steinberg, 1975; Schumacher, 1991). These algorithms, by distributing the quantization errors, hide the regular pattern of false contours that quantization could generate, without affecting image size. They give different results depending on the scanning pattern of the image, and do not preserve the colour correspondence between the original and final image.

6. Ordered dithering

The ordered dithering algorithm that we developed (Coltelli et al., 1993) requires a quadruplicate memory allocation, since it substitutes a 2 x 2 matrix, the dithering pattern, for each pixel. Moreover, it preserves the correspondence between the true colour space and the representative colour space. These two facts imply a greater complexity and a lower computational efficiency, but since we are working with sub-images, these drawbacks are outweighed by the accuracy of the results. Once the set of representative colours has been computed by the quantization process, for each colour c found in the true colour image, a fixed number of representative colours r_i (in our case 16) is chosen so that the distance d(c, r_i) computed in CIELuv colour space is minimized. The chosen subset of representative colours is used to generate all possible dithering patterns. Among these, the dithering pattern that best represents the colour to be displayed is the one with the minimum distance in CIELuv space that belongs to the subset of similar colours to which the true colour belongs. This representation gives the illusion of showing the image with many more colours compared to the image obtained by using 256 colour representatives.

Fig. 1. Digital image of the ciliate Blepharisma japonicum represented with 16 million colours. The framed vacuole zone is shown in Fig. 4a after zooming.

Fig. 2. 256 colour outputs of the uniform quantization algorithm (a), median cut quantization algorithm (c), and statistical quantization algorithm (e) with the corresponding difference images (b, d, f). Arrows in (f) indicate several errors in mapping particular subsets of colours.

Fig. 3. 256 colour outputs of the Octree quantization algorithm (a), error diffusion dithering algorithm after median cut quantization (c), and ordered dithering algorithm after Octree quantization (e) with the corresponding difference images (b, d, f). Arrow in (c) marks ghost appearance following error diffusion dithering algorithm.

Image difference procedure
The image difference is calculated by substituting, for each pixel, the distance value between its true colour and its representative colour. The image is displayed using a 256 grey level scale. Since the values of the pixels are very low, an offset value is added to each pixel for visualization purposes.
RESULTS AND DISCUSSION

The image of the ciliate Blepharisma japonicum has been used to highlight the capability of the colour restoration algorithms. Figure 1 shows the ciliate digital image represented with 16 million colours. This is the image used by the quantization and dithering procedures as the original image. Figures 2a to 3b show the 256 colour outputs of the four quantization algorithms (uniform, median cut, statistical, Octree) and the corresponding difference images. Figure 3c,d shows the 256 colour output of the error diffusion dithering algorithm after median cut quantization and its corresponding difference image, whereas Fig. 3e,f shows the 256 colour output of the ordered dithering algorithm after Octree quantization and its corresponding difference image. The average quantization errors for each quantization and dithering procedure are summarized in Table 1. The errors have been calculated both in RGB space and in CIELuv space, and the results are quite similar.

Table 1. Average quantization errors

Algorithm                              E_a in RGB space    E_a in CIELuv space
Uniform quantization                        26.49                9.67
Median cut quantization                      7.58                1.92
Statistical quantization                     3.16                1.41
Octree quantization                          4.19                1.54
Error diffusion after median cut             5.10                1.69
Ordered dithering after Octree               4.51                1.13

The uniform quantization algorithm has a very high average error (E_a); since the selection of the representatives is independent of the original image the procedure is fast, but, due to the use of only a few representative colours to display the quantized image, it produces manifestly false contours and does not allow the recognition of true subcellular domains (Fig. 2a,b). This algorithm must be avoided in any case. The other three quantization algorithms produce almost similar results. The analysis of the different E_a values shows that the statistical quantization algorithm is the best, since it has the lowest average error, whereas the median cut is the worst. The statistical algorithm still produces false contour effects, although less visible ones (Fig. 2c-f). However, since these three algorithms do not preserve colour similarity between the original and quantized image, there is the possibility of several noticeable errors in mapping particular subsets of colours of the original image (arrows in Fig. 2f). The Octree quantization algorithm has the advantage of an accurate choice of representatives, and therefore the errors are evenly distributed in the image (Fig. 3a). False contours are also a problem in this case. The median cut, statistical and Octree quantization algorithms can be used for didactic and archival applications, whereas the Octree algorithm is recommended for retrieval and analytical applications in digital microscopy, due to its colour preservation capability. The error diffusion algorithm produces ghost effects, attributed to the random introduction of noisy errors (arrow in Fig. 3c), and false contours. The ordered dithering algorithm produces an image with more defined contours, due to the oversampling effect. To show in detail the difference in colour preservation between the error diffusion dithering and ordered dithering algorithms we zoomed a zone of the original cell image, i.e. the vacuole (arrow in Fig. 1). Figure 4a shows the zoomed image of the vacuole; Fig. 4b,c show this zoomed zone after the application of the error diffusion dithering and ordered dithering algorithms. The false colour correspondence of the error diffusion algorithm can hide the true structure of subcellular domains, in our case the ciliary rows of Blepharisma.

Fig. 4. These figures show the correspondence of colours between the original image and the processed images. The zoomed cell vacuole framed in Fig. 1 is shown in (a); (b) and (c) show the zoomed images of this same cell zone of Fig. 3c (error diffusion dithering algorithm after median cut quantization) and Fig. 3e (ordered dithering algorithm after Octree quantization).
In comparison, the ordered dithering algorithm, although quadrupling the number of pixels, preserves the colour similarity with high accuracy. We can observe, for instance, in Fig. 3f the very low error level together with the almost even distribution of these errors throughout the image. In this case the false contour effect
is negligible and the information content of the original image is almost entirely preserved. In conclusion, using the CIELuv space (i.e. the colour perception space), the best results with respect to both the average error and visual inspection are obtained by our novel algorithms, namely Octree quantization followed by ordered dithering. These algorithms produce an image of the same quality as the original image using only an 8 bit colour representation.
REFERENCES

Coltelli, P., Faconti, G. and Marfori, F., 1993. On application of quantization and dithering techniques to history of arts. Comput. Graph. Forum, 12, 351-362.
Diaz-Cayeros, L., 1990. Quality colour composite image display using low cost equipment. Photogram. Eng. Remote Sens., 56, 1621-1629.
Floyd, R. W. and Steinberg, R., 1975. An adaptive algorithm for spatial gray scale. In: International Symposium Digest of Technical Papers, 36, pp. 223-234.
Foley, J. D. and VanDam, A., 1990. Computer Graphics. Addison-Wesley, Reading, MA.
Gervautz, M. and Purgathofer, W., 1990. A simple method for colour quantization: octree quantization. In: Graphics Gems I, Glassner, A. S. (ed.). Academic Press, San Diego, pp. 287-293.
Glassner, A. S., 1990. Frame buffers and colour maps. In: Graphics Gems I, Glassner, A. S. (ed.). Academic Press, San Diego, pp. 215-218.
Hawley, D., 1990. Ordered dithering. In: Graphics Gems I, Glassner, A. S. (ed.). Academic Press, San Diego, pp. 176-178.
Heckbert, P. S., 1982. Colour image quantization for frame buffer display. Comput. Graph., 16, 297-304.
Joseph, I. and Mehl, M., 1991. Computer Graphics Hardware: Introduction and State of the Art. Eurographics '91, Tutorial Note 9.
Liping, R. and Rundquist, D., 1988. Colour composite image generation on an eight bit graphics workstation. Photogram. Eng. Remote Sens., 54, 1741-1748.
Pratt, W. K., 1978. Digital Image Processing. John Wiley and Sons, New York.
Rasure, J. and Young, S., 1992. An open environment for image processing software development. In: SPIE/IS&T Symposium on Electronic Imaging, SPIE Proceedings, 1659, pp. 37-49.
Schumacher, R. A., 1991. A comparison of digital halftoning techniques. In: Graphics Gems II, Arvo, J. (ed.). Academic Press, San Diego, pp. 57-71.
Travis, D., 1991. Effective Colour Display: Theory and Practice. Academic Press, San Diego.
Wu, X., 1991. Efficient statistical computations for optimal colour quantization. In: Graphics Gems II, Arvo, J. (ed.). Academic Press, San Diego, pp. 126-133.