Color restoration algorithm for dynamic images under multiple luminance conditions using correction vectors

Yutaka Hatakeyama *, Kazuhiko Kawamoto, Hajime Nobuhara, Shin-ichi Yoshida, Kaoru Hirota

Department of Computational Intelligence and Systems Science, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502, Japan

Pattern Recognition Letters 26 (2005) 1304–1315, www.elsevier.com/locate/patrec

Received 4 December 2003; received in revised form 27 July 2004; available online 15 December 2004

Abstract

An algorithm for color restoration under multiple luminance conditions is proposed. It automatically produces correction vectors to restore the color information in the L*a*b* color metric space, using color values of a target object within the well-illuminated region of a given dynamic image. The use of the correction vectors provides better image quality than that obtained by the restoration algorithm using color change vectors. An experiment is done with two real dynamic images, in which a walking person in a building is observed, to evaluate the performance of the proposed algorithm in terms of color-difference. The experimental results show that the image restored by the proposed algorithm decreases the color-difference by 30% compared to the restoration algorithm using color change vectors. The proposed algorithm presents the foundation to identify a person captured by a practical security system using a low-cost CCD camera. © 2004 Elsevier B.V. All rights reserved.

Keywords: Instance-based reasoning; Color restoration; Surveillance system; Dynamic image processing

1. Introduction

Video color images from surveillance systems are usually used to monitor events and to identify a person in the scene. In real situations, however, sufficient illumination for surveillance is not always available, e.g., in an office at night. This low-illumination problem has not been studied enough. Although color constancy (Bauml, 1999, 2001; Brainard et al., 1997) and color invariance (Geusebroek et al., 2001) address color restoration under varying illumination, these algorithms are likely to fail in color restoration in the

* Corresponding author. Tel.: +81 459245682; fax: +81 459245676. E-mail address: [email protected] (Y. Hatakeyama).

0167-8655/$ - see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.patrec.2004.11.018


poorly-illuminated scene, because insufficient illumination prevents reliable calculation of surface reflectance; that is, low illumination leads to a loss of the physical properties. This makes it difficult to restore the color of low-illuminated objects using physics-based models. To solve this problem, an algorithm that automatically restores the color information of still images taken under low illumination has been proposed (Hatakeyama and Hirota, 2002; Hatakeyama et al., 2001, 2002). In (Hatakeyama and Hirota, 2002), color change vectors are defined at sample points in the L*a*b* color metric space by measuring a color chart composed of color scheme cards (Color scheme card 175b (sinhaisyokukado in Japanese), 1989). These vectors restore the color information of a given target image in a pointwise manner. In (Hatakeyama and Hirota, 2002; Hatakeyama et al., 2001, 2002) it is confirmed that the restored image quality is good enough for a surveillance system.
This paper focuses on dynamic color images under multiple luminance conditions (i.e., low and standard illumination). The purpose of the proposed algorithm is to generate an image that is easy to use for real surveillance, whether by manual operation or by computer (e.g., tracking and identification). In the setting of the L*a*b* color metric space, which harmonizes with the space used in (Hatakeyama and Hirota, 2002; Hatakeyama et al., 2001, 2002), a restoration algorithm for dynamic color images under multiple luminance conditions is proposed here. The proposed algorithm restores input frames using color values of other input frames: it divides a given target region into sub-regions, and correction vectors are then constructed using the color information of the target in a well-illuminated region. These vectors improve the result obtained by the algorithm in (Hatakeyama and Hirota, 2002).
An experiment is done with three real video images to compare the proposed algorithm with the conventional algorithm (Hatakeyama and Hirota, 2002) in terms of color-difference, i.e., the Euclidean distance in the L*a*b* color metric space. The processing time of the proposed algorithm is also evaluated in order to confirm


that the algorithm can be applied to a real-time system. In Section 2, a restoration process using the correction vectors for dynamic images under multiple luminance conditions is presented. Section 3 shows the experimental results on dynamic image color restoration.
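All processing in this paper takes place in the L*a*b* color metric space. As a minimal illustrative sketch (not part of the paper's method), the camera's RGB values can be mapped to L*a*b* with the standard sRGB → XYZ (D65) → L*a*b* formulas; the function name is ours:

```python
# Sketch: convert an 8-bit sRGB pixel to (L*, a*, b*) under a D65 white point,
# using the standard sRGB linearization and CIE L*a*b* formulas.

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB values to (L*, a*, b*)."""
    def lin(c):
        # Inverse sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear sRGB -> XYZ (D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # Normalize by the D65 reference white
    xn, yn, zn = 0.95047, 1.0, 1.08883

    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = srgb_to_lab(255, 255, 255)   # white maps to roughly (100, 0, 0)
```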

2. Color restoration algorithm using correction vectors

2.1. Color restoration for dynamic images using color information in well-illuminated regions

Fig. 1 shows an overview of the proposed algorithm for the restoration of dynamic color images based on the color change vectors (Hatakeyama and Hirota, 2002) and the correction vectors. For the proposed algorithm, the experimental target consists of dynamic images taken by a fixed camera inside rooms. Each color change vector is defined at every 5 units in the L*a*b* color metric space (L* = 0, 5, …, 95; a*, b* = −50, −45, …, 45), using a color chart; in this case, the 175 color scheme cards are used as the color chart. The target region in a given frame is determined using the difference between the given frame and a ''background'' image taken in advance. Given frames are restored by adding only the color change vectors to the given color information before the correction vectors are generated, as shown in Frames 1 and

Fig. 1. Restoration process for dynamic images.


2 of Fig. 1. The correction vectors are generated in a frame in which the target is in a well-illuminated area, as shown in Frame 3 of Fig. 1 (an experimental example is shown in Fig. 2). The frame processed by the color change vectors is restored by adding the correction vectors, as shown in Frames 4 and 5 of Fig. 1.

2.2. Extraction of target regions in dynamic images

As mentioned in Section 2.1, all given frames are restored using the color change vectors of (Hatakeyama and Hirota, 2002). A given image in which no target exists is referred to as the ''background''. The background is restored using the color change vectors in advance. The target region in a given frame is extracted using the difference between the given frame restored by adding the color change vectors and the background. The well-illuminated region in the background is selected by manual operation in advance; since the position of the camera is fixed, this manual operation is performed only once. If a target region lies in the well-illuminated region, as shown in Frame 3 of Fig. 1, the correction vectors are calculated (cf. Section 2.4).

Fig. 2. Divided target regions in the well-illuminated region in the frame.

Table 1
Average target color values in the L*a*b* color metric space (scale ({0, 100}, {−100, 100}, {−100, 100})) in the well-illuminated region of the given frame and in the image taken under standard illumination

                    Red           Green         Blue
Standard            (58, 22, 8)   (64, −8, 9)   (66, 8, −13)
Well-illuminated    (55, 18, 6)   (70, −5, 8)   (64, 5, −9)

2.3. Dynamic images taken under multiple luminance conditions

Color restoration algorithms (Hatakeyama and Hirota, 2002; Hatakeyama et al., 2001, 2002) employ the 175 color scheme cards as color instances. In these color instance-based algorithms, the color scheme cards (color instances) are sufficient for color restoration because the color measurements of the cards are uniformly distributed over the L*a*b* color metric space. Although color information in the background taken under low and standard illumination could also be employed as color instances, the color instances of the background cannot always be usefully exploited for color restoration.
It is assumed that there is always a well-illuminated region in the target dynamic images. The squared blocks in Fig. 2 show an example of the well-illuminated region. Table 1 shows average color values (L*a*b* color metric space, scale ({0, 100}, {−100, 100}, {−100, 100})) for targets with red, green, and blue shirts in the well-illuminated region and under the standard luminance condition. In Table 1, ''standard'' means the color values of an image taken under standard illumination (e.g., Fig. 9); that is, ''standard'' gives the color values of the goal image. The color values of the target under standard illumination are similar to those in the well-illuminated region (e.g., Fig. 2), and the color values of the goal image exist within the input dynamic image. Thus, the proposed algorithm restores the target in the low-illuminated region using the color values of the target in the well-illuminated region: the correction vectors are defined using the color information in the well-illuminated region, and the frame obtained by the color change vectors is restored by adding them.

2.4. Generation of correction vectors

Color values of the target in a well-illuminated region of the frame are used to generate the correction vectors. The correction vectors are defined


in the L*a*b* color metric space to establish relations between the distribution of color values in the frame restored by adding the color change vectors and that in the well-illuminated region. Because the target region in a given frame includes different segments (e.g., skin and clothes), false correction vectors would be obtained by using the color information of all pixels of the target region. In order to determine appropriate correction vectors, the target region is divided into sub-regions in consideration of human body characteristics, as shown in Fig. 2 (there are 12 sub-regions in the experiments of Section 3). The pixel at position (x, y) in a frame of size X × Y is denoted by

Frame(x, y)  (x = 1, 2, …, X; y = 1, 2, …, Y),   (1)

where the frame is simply written as Frame (Frame(x, y) ∈ Frame). The color value of a pixel, denoted pixel, is

CI(pixel) = {L*(pixel), a*(pixel), b*(pixel)},
0 ≤ L*(pixel) ≤ 100,  −100 ≤ a*(pixel) ≤ 100,  −100 ≤ b*(pixel) ≤ 100;   (2)

that is, the color value of Frame(x, y) is denoted by CI(Frame(x, y)). The target region in a given frame is denoted by

TR (⊆ Frame).   (3)

The correction vectors are calculated as follows (cf. Fig. 3).

Fig. 3. Generation of correction vectors.

Calculation algorithm for correction vectors

Step 1 (Dividing the target region). Divide the target region TR in a well-illuminated region of the frame into sub-regions TRn:

TRn (n ∈ {1, 2, …, N}),  TRn(x, y) ∈ TRn (⊆ TR),
TRi ∩ TRj = ∅ (i ≠ j),  ⋃_{n=1}^{N} TRn = TR   (4)

(e.g., N = 12 in Fig. 2); the pixel of a sub-region is TRn(x, y). The target region is divided into 12 (4 × 3) sub-regions. The width ratio of the columns is 1:2:1 (e.g., Fig. 2), defined in consideration of the torso; the heights of the sub-regions are equal.

Step 2 (Acquisition of color information in the target regions). Define the set of color values of TRn by

TCIn = {CI(TRn(x, y))}.   (5)

The number of pixels in TRn with the same color value as CI(TRn(x, y)) is written

#pixel(CI(TRn(x, y))).   (6)

Step 3 (Calculation of correction vectors). The proposed algorithm defines the correction vectors CVn(CI) for arbitrary color values in each target region in the L*a*b* color metric space (e.g., Fig. 8). The number of these vectors is 8000 for a target region in the experiments of Section 3.1; that is, each correction vector is defined at every 5 units in the L*a*b* color metric space (L* = 0, 5, …, 95; a*, b* = −50, −45, …, 45). To establish a relation between the color values of the pixels restored by the color change vectors and those in the well-illuminated region, a neural network, an evolutionary computation mechanism, and so on could be employed, but such methods are difficult to apply to surveillance because of their processing time and their need for learning data. This paper instead notes that the color values in the frame restored by adding the color change vectors are located in a certain neighborhood of the color values in the well-illuminated region. In this step, calculate the correction vector CVn(CI) in TRn for an arbitrary color value CI using the neighborhood of CI(TRn(x, y)) around CI. The neighborhood of CI(TRn(x, y)) around CI is defined



as NTCIn(CI); Fig. 4 shows this step. The set NTCIn(CI) is given by

NTCIn(CI) = {CI(TRn(x, y)) : |L*(TRn(x, y)) − L*| ≤ Dis,
  |a*(TRn(x, y)) − a*| ≤ Dis,  a*(TRn(x, y)) · a* > 0,
  |b*(TRn(x, y)) − b*| ≤ Dis,  b*(TRn(x, y)) · b* > 0},
NTCIn(CI) ⊆ TCIn,  CI = {L*, a*, b*}.   (7)

Fig. 4. Calculation of correction vectors (CVn(CI)).

The value of the distance threshold Dis in (7) is set to 10 in the experiments of Section 3.1. The calculation of CVn(CI) is done by

CVn(CI) = {ΔL_CI^(n), Δa_CI^(n), Δb_CI^(n)} = aven(CI) − CI,
aven(CI) = Σ (NTCIn(CI) · #pixel(NTCIn(CI))) / Σ #pixel(NTCIn(CI)),   (8)

where the sums run over the elements of NTCIn(CI). In Step 3, the correction vectors are defined; however, as shown in Fig. 2, the segmentation is not complete, so a smoothing process is necessary to cope with segmentation errors and noise.

Step 4 (Smoothing correction vectors in the color metric space). Smooth the correction vector for CI using the other correction vectors in the neighborhood of CI:

CVn(CI) = { CVn(CI)                                (|CVn(CI)| ≠ 0),
            Σ_{NCI ∈ NCIn(CI)} CVn(NCI) / #NCI(CI)  (|CVn(CI)| = 0),   (9)

where NCIn(CI) is the set of neighborhood color values for CI, defined analogously to (7):

NCIn(CI) = {CI(TRn(x, y)) : |L*(TRn(x, y)) − L*| ≤ Dis,
  |a*(TRn(x, y)) − a*| ≤ Dis,  a*(TRn(x, y)) · a* > 0,
  |b*(TRn(x, y)) − b*| ≤ Dis,  b*(TRn(x, y)) · b* > 0},

and #NCI(CI) is the number of its elements.

Step 5 (Smoothing correction vectors over the target sub-regions). Smooth the correction vector for CI in TRn using the correction vectors for CI in the upper, lower, right, and left sub-regions of TRn:

CVn(CI) = { CVn(CI)                     (|CVn(CI)| ≠ 0),
            (1/m) Σ_{l=1}^{m} CVl(CI)   (|CVn(CI)| = 0),   (10)

where m is the number of upper, lower, right, and left sub-regions of TRn (Fig. 5).

2.5. Color restoration process with correction vectors

The given frames are restored using the color change vectors (Hatakeyama and Hirota, 2002). The processed frames are then restored by adding the

Fig. 5. Smoothing correction vectors in Steps 4 and 5.
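The sub-region division of Step 1 and the neighborhood averaging of Steps 2–3 can be sketched as follows. This is a minimal illustration with our own names; the target box is assumed axis-aligned, the chroma sign constraints of Eq. (7) are dropped for brevity, and the smoothing of Steps 4 and 5 is omitted:

```python
# Sketch of correction-vector generation (Section 2.4, Steps 1-3),
# under simplifying assumptions stated in the text above.

def split_target_region(x0, y0, w, h):
    """Step 1: divide a target box into 4x3 = 12 sub-regions,
    column widths in ratio 1:2:1, four equal-height rows."""
    xs = [x0, x0 + w // 4, x0 + (3 * w) // 4, x0 + w]   # 1:2:1 columns
    ys = [y0 + (i * h) // 4 for i in range(5)]          # 4 equal rows
    return [(xs[c], ys[r], xs[c + 1], ys[r + 1])
            for r in range(4) for c in range(3)]

def correction_vector(ci, sub_region_colors, dis=10):
    """Step 3 (simplified): average the well-illuminated colors within
    `dis` of `ci` on every axis (cf. Eq. (7)) and return ave - ci
    (cf. Eq. (8)). Zero vectors are left to the smoothing steps."""
    L, a, b = ci
    near = [(l2, a2, b2) for (l2, a2, b2) in sub_region_colors
            if abs(l2 - L) <= dis and abs(a2 - a) <= dis and abs(b2 - b) <= dis]
    if not near:
        return (0.0, 0.0, 0.0)
    n = len(near)
    ave = tuple(sum(c[i] for c in near) / n for i in range(3))
    return tuple(ave[i] - ci[i] for i in range(3))
```

For example, a 40 × 40 target box yields 12 sub-boxes with column widths 10, 20, 10, and a color (50, 0, 0) surrounded by well-illuminated samples near (55, 5, 5) receives the correction vector (5, 5, 5).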



correction vectors of the target regions. The restoration process with the correction vectors is summarized as follows (cf. Fig. 6).

Fig. 6. Restoration process using correction vectors.

Restoration algorithm

Step 1 (Restoration with color change vectors). Restore the given frames with the color change vectors (Hatakeyama and Hirota, 2002). The color information of the restored frame is denoted by

CICCV(pixel) = CI(pixel) + CCV(CI(pixel)),   (11)

where CCV(CI(pixel)) (∈ CCV) is the color change vector for CI(pixel).

Step 2 (Dividing the target region). The target region RTR in the restored frame is determined from the difference between the restored background image and the restored frame. Divide this target region into sub-regions:

RTRn (n ∈ {1, 2, …, N}),  RTRn ⊆ RTR ⊆ Frame,  RTRn(x, y) ∈ RTRn.   (12)

The sub-regions of the target region are defined in the same way as in Step 1 of Section 2.4 (e.g., Fig. 6).

Step 3 (Restoration using correction vectors). Restore the color information CICCV(RTRn(x, y)) of RTRn(x, y) by adding the correction vectors:

CIrestore(RTRn(x, y)) = CICCV(RTRn(x, y)) + CVn(CICCV(RTRn(x, y))).   (13)

The restoration process is divided into Steps 1 and 3 because the target region should be determined before the correction vectors are used. It is difficult to extract the target region directly from the input images; therefore, each input frame is first restored using the color change vectors.
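The two-stage per-pixel restoration of Steps 1 and 3 can be sketched as follows, assuming (our assumption, not stated in the paper) that the color change vectors and the correction vectors are stored in dictionaries keyed by the nearest 5-unit grid point; all names are illustrative:

```python
# Sketch of the per-pixel restoration of Section 2.5: add the color
# change vector (Eq. (11)), then, for target pixels, the sub-region's
# correction vector (Eq. (13)). Vector tables are assumed to be dicts
# keyed by 5-unit L*a*b* grid points.

def grid_key(ci, step=5):
    """Snap an (L*, a*, b*) value to the 5-unit lattice the vectors live on."""
    return tuple(step * round(c / step) for c in ci)

def add_vec(ci, v):
    return tuple(c + d for c, d in zip(ci, v))

def restore_pixel(ci, ccv, cv_n=None):
    """Step 1: add the color change vector; Step 3: if the pixel lies in
    target sub-region n, also add that sub-region's correction vector."""
    zero = (0.0, 0.0, 0.0)
    ci_ccv = add_vec(ci, ccv.get(grid_key(ci), zero))         # Eq. (11)
    if cv_n is None:                                          # background pixel
        return ci_ccv
    return add_vec(ci_ccv, cv_n.get(grid_key(ci_ccv), zero))  # Eq. (13)
```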

3. Experiments on color restoration for dynamic images taken under multiple luminance conditions

3.1. Experimental results

The purpose of the experiments is to check how reflecting the color distribution of the color scheme cards affects the color-difference, that is, to compare the proposed algorithm with the conventional algorithm (Hatakeyama and Hirota, 2002). The input to the proposed algorithm is a dynamic image under multiple luminance conditions, taken with a CCD camera (OLYMPUS CAMEDIA C-2020Z, 30 fps) and processed in the L*a*b* color metric space. The input image is supposed to include a person in a known background, and the target person is supposed to face the fixed CCD camera. The low luminance condition of the input image is such that a human operator can just recognize the colors of the target (person); that is, the contours of a target in the input are visible to a human operator. It is not useful to define the low luminance condition quantitatively, because quantitative conditions (i.e., lux values) depend on the performance of the CCD camera and the brightness of the lens. The well-illuminated region is lit by a fluorescent lamp. In the restoration process for a dynamic image, the luminance conditions of the input frames are fixed. The output of the proposed algorithm is an image restored using the 175 color scheme cards, for use in criminal investigation. The purpose of the proposed algorithm is to generate an image that is



easy to use for real surveillance, by manual operation or by computer (e.g., tracking and identification). Tracking and identification require the color values of a target to be the same as those under standard illumination. Therefore, the evaluation of color restoration is done using the color-difference in the L*a*b* color metric space.
The size of the dynamic images is 320 × 240 pixels/frame. The proposed algorithm is implemented in Visual C++ on a Pentium 3 (1 GHz) PC.
Fig. 7 shows the color change vectors for the input images.

Fig. 7. Color change vectors in restoration process.

Fig. 2 is the image in which the target is in the well-illuminated region. Fig. 8 shows the correction vectors obtained from Fig. 2 (the left image is for the middle region and the right image for the top region). Fig. 9 shows the target person taken under standard illumination; this image is used as comparison data. Fig. 10 shows the 10th and the 93rd frames of an input dynamic image. Fig. 11 is the image restored with the color change vectors (Fig. 7). Fig. 12 is the restoration result of Fig. 11 using the correction vectors (Fig. 8).
Fig. 13 shows another target person in the well-illuminated region. Fig. 14 shows the

Fig. 9. The image in which the target is taken under standard illumination.

Fig. 8. Correction vectors in the divided target regions (left: center region; right: top region).



Fig. 10. The 10th and the 93rd frames of an input dynamic image.

Fig. 11. The restored image of Fig. 10 using color change vectors (Fig. 7).

Fig. 12. The restoration result of Fig. 11 using correction vectors (Fig. 8).

correction vectors obtained from Fig. 13 (the left image is for the middle region and the right image for the top region). Fig. 15 shows (a) the 16th, (b) the 26th, (c) the 36th, and (d) the 45th frames of the input dynamic image. Fig. 16 shows the restored frames for Fig. 15 using the color change vectors (Fig. 7). Fig. 17 is the restoration result of Fig. 16 using the correction vectors (Fig. 14).

3.2. Comparison experiment in a fixed luminance condition

Color restoration experiments are done for three dynamic images in which the target wears a red, a green, and a blue shirt, respectively. The numbers of frames are 45, 93, and 48, respectively. The dynamic images with the red and the green shirts include Figs. 15



Fig. 13. The image in which the target is in the well-illuminated region.

and 10, respectively. Moreover, the effect of the distance Dis in Eq. (7) on the color restoration is checked.
The color-difference between the target region of an image under standard illumination and that of the restored frame is shown in Table 2. The maximal value of the color-difference is 300. The color-differences in Table 2 are the medians of the color-differences over the three dynamic images (45, 93, and 48 frames, respectively). The color-difference of the proposed algorithm is 30% or more below that of (Hatakeyama and Hirota, 2002).
The luminance condition of the low-illuminated regions in the given frames is darker than that of the color scheme cards used to generate the color change vectors. A low luminance condition reduces chromatic information more than intensity information. For example, the median color value of the cloth in Fig. 16 is (48, 3, 0) in the L*a*b* color metric space, while the median color value of the cloth in Fig. 17 is (54, 12, 6). Fig. 17 shows that the correction vectors restore parts of the image that the color change vectors do not always restore; that is, the correction vectors restore the chromatic information of the image already restored with the color change vectors. For tracking and identification, the color values of targets are required to be the same as those under standard illumination; therefore, the proposed algorithm is suitable for real surveillance in terms of color-difference.
Table 3 shows the color-difference between the target region of an image under standard illumination and that of the frame restored using correction vectors with different values of Dis in Eq. (7). As described in Section 2.4, the correction vectors are defined at every 5 units, and the check of Dis is performed at every 5 units in a similar way. For Dis = 5, the set NTCIn(CI) is small, and a small NTCIn(CI) does not generate appropriate correction vectors. From this result, an appropriate value for Dis is 10.
The restoration process of Section 3.1 takes about 0.4 s/frame of CPU time on average to calculate the correction vectors, and about 0.06 s/frame to restore using the color change vectors and the correction vectors. The processing time for creating the color change vectors is 3 s of CPU

Fig. 14. Correction vectors from Fig. 13.


Fig. 15. The (a) 16th, (b) 26th, (c) 36th, and (d) 45th frames of an input dynamic image.

Fig. 16. The restored frames for Fig. 15 using color change vectors (Fig. 7).




Fig. 17. The restoration result of Fig. 16 using correction vectors (Fig. 14).

Table 2
Color-difference between the target region of an image under standard illumination and that of the restored frame (CCV: color change vectors, CV: correction vectors)

           Red    Green   Blue
CCV        15.4   10.1    13.4
CCV + CV    7.1    6.8     8.9

Table 3
Color-difference between the target region of an image under standard illumination and that of the restored frame, for each value of Dis in Eq. (7)

           Red    Green   Blue
Dis = 5    14.4    7.6     9.9
Dis = 10    7.5    6.8     6.3
Dis = 15    7.1    6.4     7.1
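The color-difference used throughout Section 3 is the Euclidean distance in the L*a*b* color metric space, and the per-sequence figure is the median over frames. A minimal sketch, with helper names of our own:

```python
# Sketch: Euclidean color-difference in L*a*b* and its median over a
# sequence of frames, as used for the evaluation tables.

import statistics

def delta_e(ci1, ci2):
    """Euclidean color-difference between two (L*, a*, b*) values."""
    return sum((p - q) ** 2 for p, q in zip(ci1, ci2)) ** 0.5

def sequence_color_difference(restored_colors, reference):
    """Median per-frame color-difference against the color of the target
    under standard illumination."""
    return statistics.median(delta_e(c, reference) for c in restored_colors)

print(delta_e((0, 0, 0), (3, 4, 0)))   # -> 5.0
```

With L* in [0, 100] and a*, b* in [−100, 100], the largest possible distance is √(100² + 200² + 200²) = 300, matching the maximal color-difference of 300 quoted above.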

time. Since the creation of the color change vectors is preprocessing, this processing time does not influence the restoration time. In terms of processing time, the proposed algorithm is thus suitable for real surveillance, e.g., when implemented on a chip device.
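Per-frame CPU times of the kind reported above could be measured with a small harness like the following; `restore_frame` is a placeholder for the restoration routine, not an API from the paper:

```python
# Sketch: average wall-clock seconds per frame for a restoration routine.

import time

def time_per_frame(restore_frame, frames):
    """Average seconds spent restoring one frame."""
    start = time.perf_counter()
    for f in frames:
        restore_frame(f)
    return (time.perf_counter() - start) / len(frames)
```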

Thus, the proposed algorithm restores dynamic images under multiple luminance conditions for use in real surveillance.

4. Conclusions

A problem of color restoration in dynamic images under multiple luminance conditions is investigated, and a color restoration algorithm is proposed. The proposed algorithm enhances the image quality using color values of the input video image: the correction vectors are constructed using the color information of a target in the well-illuminated region, and they restore the frames already processed with the color change vectors. In order to confirm the performance of the proposed algorithm, three experiments are done using general real-world dynamic images taken inside rooms. First, for an input frame, the restored frame obtained by the proposed algorithm is compared with that obtained by the existing algorithm (Hatakeyama and Hirota, 2002) in terms of color-difference. The experimental results show that the


image restored by the proposed algorithm decreases the color-difference by about 30% compared to that of (Hatakeyama and Hirota, 2002). The proposed algorithm is successfully applied to images in which chromatic information is lost, and a restored image in which personal identification is possible is obtained. Second, an experiment is done to determine the parameter for generating the correction vectors.
In the restoration of low-illuminated still images, manual operation has disadvantages compared with automatic processing, because it is time-consuming and depends highly on individual skills. Moreover, restoring dynamic images under low illumination by manual operation is difficult in real-time operation. The proposed algorithm automatically restores dynamic images under multiple luminance conditions in real time, and it can be applied to tracking a person using color information in dynamic images. The proposed algorithm aims to establish the foundation for identifying a person in practical security systems with a low-cost CCD camera.


References

Bauml, K.-H., 1999. Color constancy: the role of image surfaces in illuminant adjustment. JOSA A 16 (7), 1521–1530.
Bauml, K.-H., 2001. Increments and decrements in color constancy. JOSA A 18 (10), 2419–2429.
Brainard, D.H., Brunt, W.A., Speigle, J.M., 1997. Color constancy in the nearly natural image. 1. Asymmetric matches. JOSA A 14 (9), 2091–2110.
Color scheme card 175b (sinhaisyokukado in Japanese), 1989. Nihonsikiken.
Geusebroek, J.M., van den Boomgaard, R., Smeulders, A.W.M., Geerts, H., 2001. Color invariance. IEEE Trans. Pattern Anal. Machine Intell. 23 (12), 1338–1349.
Hatakeyama, Y., Hirota, K., October 2002. Color restoration algorithm in low luminance conditions with color change vectors. In: Joint 1st Internat. Conf. on Soft Computing and Intelligent Systems, Tsukuba (CD-ROM proceedings, 22P4-5).
Hatakeyama, Y., Takama, Y., Hirota, K., 2001. Instance-based color restoration algorithm for still image under low illumination. In: 2nd Internat. Sympos. on Advanced Intelligent Systems (ISIS), pp. 185–188.
Hatakeyama, Y., Takama, Y., Hirota, K., 2002. Instance-based color restoration algorithm for still/dynamic images taken under low illumination. J. Japan Soc. Fuzzy Theory Systems 14 (3), 320–328 (in Japanese).