Neurocomputing 35 (2000) 123–135
Target detection through image processing and resilient propagation algorithms

L.M. Patnaik (Microprocessor Applications Laboratory, Indian Institute of Science, Bangalore 560 012, India)
K. Rajan (Department of Physics, Indian Institute of Science, Bangalore 560 012, India)

Received 14 May 1999; accepted 13 April 2000
Abstract

This paper deals with target detection using both an image processing method and a resilient propagation-based neural network paradigm. In the resilient propagation-based approach, the pre-processing step that extracts the relevant features is performed using the moment invariance method. These features are then fed as input to the resilient propagation neural network. RPROP (resilient propagation) is an adaptive training technique based on the standard backpropagation algorithm. The RPROP algorithm is also implemented in ADSP-21062 assembly language, since execution on a digital signal processor (DSP) is much faster than on an ordinary PC, and speed is desirable in real-time applications. It is observed that resilient propagation-based target detection performs better than the image processing method of target detection. The main objectives of the paper are to demonstrate the applicability of moment invariant features to neural network-based target detection and to implement the technique on a DSP chip, the ADSP-21062. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Automatic target detection; Resilient propagation; Backpropagation; Moment invariance
1. Introduction

Neural networks represent a recent technology rooted in many disciplines. They are endowed with some unique attributes: universal approximation (input–output mapping), and the ability to learn from and adapt to their environment.
Detection of targets is a specific field of study within the general scope of image processing and image understanding. From a sequence of images (usually within the visual or infrared spectral bands), it is desired to recognize a target such as a car. Automatic detection of a target can be a very difficult task, particularly if there is a noisy background and a low signal-to-noise ratio. Neural network technology provides a number of tools which could form the basis for a potentially fruitful approach to the detection of a target.

Target detection needs methods to represent targets and backgrounds that are both sufficiently descriptive and robust to signature and environmental variations. Effective target detection can be achieved if prior knowledge about target signatures and backgrounds is used as much as possible. Neural network learning offers two main advantages for target detection: automatic knowledge acquisition and continuous system refinement. The use of learning can save the user the enormous amount of time necessary to derive rule-based databases for targets and environments. System refinement can then be incorporated to make any changes necessary to improve the performance of the recognition system.

A multilayer neural network has been adopted for training and testing purposes. The training algorithm employed to train and test the neural network is the resilient propagation (RPROP) algorithm [16]. The results of training and testing the adopted neural network are presented in this paper. The RPROP algorithm has been implemented on an ADSP-21062 SHARC (Super Harvard Architecture Computer), since such an implementation is faster than one on a PC. Faster execution of the automatic target detection algorithm is desirable in real-time applications.

A number of automatic target detection methods have been developed and tested. However, these efforts have been only partially successful and the percentage success rate has not been very high. For an excellent review of the related problems and methods, see [4]. The problem of automatic target detection involves extraction of critical information from complex and uncertain data, for which traditional approaches of signal processing, pattern recognition, etc., have not been able to provide adequate solutions. Thus, in this paper, we explore the possibility of using a neural network-based approach and compare the results of this method with those of the traditional image processing method.

The rest of the paper is organized as follows. Section 2 deals with the image processing method of target detection. In Section 3, resilient propagation-based detection of a target is discussed. After presenting these two methods, a discussion of the DSP implementation of the RPROP algorithm is presented in Section 4. Section 5 presents a discussion of the results of target detection, and the concluding remarks are presented in Section 6.
2. Image processing method of target detection

In this section, we discuss image enhancement, image restoration and image analysis algorithms [8], which are the building blocks of the image processing method of target detection. We use gray-level car image data for extensive experimentation.
2.1. Image enhancement

Image enhancement [8,9] refers to the accentuation or sharpening of image features such as edges, boundaries, or contrast to make a display more useful for viewing and machine analysis. Prominent among these techniques are contrast enhancement and edge enhancement. The method used in this paper is histogram equalization [8]. Histogram equalization applies the greatest contrast enhancement to the most populated range of brightness in the image, and automatically reduces the contrast in the very light or dark parts of the image.

2.2. Image restoration

Image restoration [20] is concerned with filtering the observed image to minimize the effect of degradations. Degradation may take the form of sensor noise, blur due to camera misfocus, relative object–camera motion, random atmospheric turbulence, and so on. The effectiveness of image restoration filters depends on the extent and accuracy of the knowledge of the degradation process as well as on the filter design criterion. A frequently used criterion is the mean square error of the degradations. Image restoration models are obtained using different filtering techniques. Some of the common methods [8] are low-frequency filtering, high-pass filtering, band-pass filtering, median filtering, and unsharp masking and crispening. The method used in this paper is unsharp masking and crispening [8], in which a signal proportional to the low-pass-filtered version of the image is subtracted from the image (a minimal sketch of both operations is given after Section 2.3).

2.3. Image analysis

The ultimate aim in a large number of image processing applications is to extract important features from image data, from which a description, interpretation, or understanding of the scene can be provided by the machine. More sophisticated vision systems are able to interpret the results of analysis and describe the various objects and their relationships in the scene. Image analysis basically involves the study of feature extraction [17], segmentation [13] and classification techniques. In our case, the actual features used are boundary features, which are computationally quite efficient. A region-based segmentation has been followed, wherein a window of 12×12 pixels is considered and segmented on the basis of the location of the target. Region-based segmentation is further used while scanning the image for automatic detection of the target.
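The two pre-processing operations named in Sections 2.1 and 2.2 can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the gain k, and the 3×3 box filter used as the low-pass component are our choices.

```python
import numpy as np

def histogram_equalize(img):
    """Spread the most populated brightness range over the full gray scale.

    img: 2-D uint8 array; returns a uint8 array of the same shape.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Classic equalization mapping through the normalized cumulative histogram.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def unsharp_mask(img, k=0.5, size=3):
    """Unsharp masking/crispening: subtract a scaled low-pass version of the
    image; the (1 + k) rescaling preserves the overall brightness."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    # Simple box filter as the low-pass component (an assumption of this sketch).
    low = sum(padded[i:i + h, j:j + w]
              for i in range(size) for j in range(size)) / size ** 2
    return np.clip((1 + k) * img - k * low, 0, 255).astype(np.uint8)
```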
2.4. Automatic detection of targets

A problem of much significance in image analysis is the detection of a position change or of the presence of an object in a given scene. The presence of a known object can be detected by searching for the location of a match between the object template and the scene. In a typical algorithm for automatic detection of targets, the window size is the size of the window considered for comparison between the template and the given image, and the template size is the size of the template which is considered as the target. The comparison is done by considering the intensity values of the original image and the template. Each pixel is compared, and the scanning is shifted to the next window even if one of the pixel intensity comparisons fails. It is well known that such a mean-squared difference measure using templates is very susceptible to noise. Correlation filters such as the matched spatial filter or the synthetic discriminant function (SDF) have been shown to be more tolerant to noise [12]. However, owing to its implementation simplicity, we have used the mean-squared difference method.
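As an illustration of the matching step, a minimal numpy sketch of the mean-squared difference scan follows. The early-exit, pixel-by-pixel comparison described above is approximated here by a per-window MSE test; the acceptance threshold is an assumed free parameter.

```python
import numpy as np

def detect_target(image, template, threshold):
    """Slide the template over the image and flag every window whose
    mean-squared difference from the template falls below a threshold."""
    th, tw = template.shape
    t = template.astype(float)
    hits = []
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            window = image[r:r + th, c:c + tw].astype(float)
            mse = np.mean((window - t) ** 2)
            if mse < threshold:
                hits.append((r, c, mse))
    return hits  # (row, col, score) of candidate target locations
```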
3. Resilient propagation-based target detection

There are different paradigms for artificial neural networks [18]. These differ in the structure of the neural network and in the way the network learns. Some examples [18] are perceptrons, Adaline/Madaline systems, Hopfield nets, backpropagation networks, competitive learning networks, Kohonen feature maps, adaptive resonance theory, bidirectional associative memories, and counterpropagation.

3.1. Training an artificial neural network

Learning denotes changes in a system that are adaptive in the sense that they enable the system to perform the same task, or tasks drawn from the same population, more efficiently and more effectively the next time. There are two types of learning in neural networks: supervised learning and unsupervised learning. The backpropagation network [18] is probably the best known and most widely used among the current types of neural network systems. Backpropagation is a gradient descent technique with backward error propagation. The training algorithm employed here to train and test the neural network is the resilient propagation (RPROP) algorithm [16].

3.1.1. Problems associated with standard backpropagation

The problems encountered with standard backpropagation [19] are:
• slow or no convergence;
• the possibility of the network getting stuck in local minimum solutions.

Many computational techniques have been proposed to overcome the first problem. Empirical results have shown that with ample hidden units embedded in the network, backpropagation can usually escape local minima owing to the large number of degrees of freedom. However, increasing the number of hidden units may not be an appealing idea, since an unnecessarily large number of hidden units is likely to decrease the generalization capability of the network.
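For contrast with the sign-based scheme introduced next, the standard backpropagation step can be written as (notation ours):

\Delta w_{ij}(t) = -\eta \, \frac{\partial E}{\partial w_{ij}}(t),

where the fixed learning rate η ties the step size to the magnitude of the partial derivative; RPROP removes exactly this dependence.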
3.2. Resilient propagation (RPROP)

RPROP [16] is an adaptive technique based on the backpropagation algorithm that achieves faster learning. The technique was proposed by Riedmiller and Braun [16] and is suitable for multilayer feedforward networks. The resilient propagation algorithm overcomes the inherent disadvantages of the pure gradient-descent method and gives better performance than other adaptation techniques. The weight adaptation process is not affected by the unforeseeable behavior of the size of the derivative; it depends only on the sign of the derivative. Most adaptation techniques modify the learning rate according to some function of the output error, and the adapted learning rate is then used to calculate the weight update size. It should be noted, however, that the weight update size depends not only on the learning rate but also on the partial derivative of the error function with respect to the weight. The weight update size can therefore be adversely affected by the magnitude of the derivative even when the learning rate is chosen appropriately by the adaptation algorithm. The details of the algorithm can be found in [16] and are thus not presented here.

3.2.1. Parameters of the RPROP algorithm

The parameters which can be altered to study the performance of the algorithm are Δ₀, Δmax, Δmin, η⁺ and η⁻. Δ₀ is the initial update value and is used with all weights; it usually lies between 0.1 and 1.0. As the update value grows during the course of training, it may reach very high, unacceptable values; to prevent this, it is limited from above by the parameter Δmax. The lower limit of the update value is the parameter Δmin.

The parameter η⁺ is the growth factor and η⁻ is the decay factor for the update value. A larger value of η⁺ results in faster growth of the update values, and a smaller value of η⁻ results in faster decay of the update values. But neither η⁺ nor η⁻ by itself decides how big the update value becomes, because that depends on the manner in which training proceeds and on how often the weight updates are reverted during training. The algorithm is found to be quite robust against the choice of these parameters; variations in Δ₀, Δmax and Δmin did not yield appreciable variations in training speed. The choice of η⁺ and η⁻ was guided by the following considerations: if a jump over a minimum occurs, the previous update value was too large, and selecting η⁻ = 0.5 halves the update value. Varying η⁻ around 0.5 neither improved nor deteriorated the convergence time. The growth factor η⁺ has to be large enough for fast convergence in shallow regions of the error function, but too large an η⁺ leads to persistent changes of the direction of the weight step. In all experiments with RPROP [16], the choice η⁺ = 1.2 gave very good results.

Only the initial weight vector has a noticeable effect on the speed of convergence. To begin with, random numbers between 0 and +0.6 were used for the initial weight settings. Later, a more evenly balanced weight set between +0.6 and −0.6 gave better results. The limits were then varied, and it was found that a weight set of random values between +0.9 and −0.9 gave the best results.

RPROP is a local adaptation algorithm which guarantees convergence at least to within a neighborhood of a local minimum; once the learning process approaches the local minimum, training becomes slow. The convergence percentage obtained for RPROP [16] is high, nearly 100%. A sketch of the sign-based update rule follows.
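The sketch below is a minimal numpy rendering of the sign-dependent update (a simplified variant without the weight-backtracking step of the full algorithm in [16]). The growth and decay factors match the values reported above; the bounds DELTA_MAX and DELTA_MIN are illustrative assumptions.

```python
import numpy as np

ETA_PLUS, ETA_MINUS = 1.2, 0.5     # growth/decay factors reported above
DELTA_MAX, DELTA_MIN = 50.0, 1e-6  # assumed bounds on the update value

def rprop_step(w, grad, prev_grad, delta):
    """One RPROP iteration: adapt the per-weight step size from the SIGN of
    the gradient only, then step against the current gradient's sign."""
    same = grad * prev_grad
    # Gradient kept its sign: grow the update value (capped at DELTA_MAX).
    delta = np.where(same > 0, np.minimum(delta * ETA_PLUS, DELTA_MAX), delta)
    # Sign flipped: the previous step jumped over a minimum, so shrink it ...
    delta = np.where(same < 0, np.maximum(delta * ETA_MINUS, DELTA_MIN), delta)
    # ... and skip the step (the zeroed gradient also suppresses re-adaptation).
    grad = np.where(same < 0, 0.0, grad)
    w = w - np.sign(grad) * delta
    return w, delta, grad  # returned grad is fed back as prev_grad next call
```

In use, delta would be initialized to Δ₀ for every weight, and the three arrays carried from one iteration to the next.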
3.2.2. Logarithmic error function

The error function normally used is the quadratic error function

E_Q = \frac{1}{2} \sum_{p} \sum_{j} (t_{pj} - O_{pj})^2,   (1)

where t_{pj} is the target output and O_{pj} the actual output. A logarithmic error function [11] not only reduces the learning time but also alleviates the problem of getting stuck in local minima by reducing the density of local minima. The logarithmic error function given below also reduces the computational overhead:

E = \sum_{p} \sum_{j} \left[ t_{pj} \ln\frac{t_{pj}}{O_{pj}} + (1 - t_{pj}) \ln\frac{1 - t_{pj}}{1 - O_{pj}} \right].   (2)
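As a numeric sketch of Eq. (2), the following helper evaluates the logarithmic error; the clipping guard eps is our addition to keep the logarithms finite at the boundaries.

```python
import numpy as np

def log_error(t, o, eps=1e-12):
    """Logarithmic error of Eq. (2), summed over patterns p and outputs j;
    t (targets) and o (network outputs) are arrays with values in (0, 1)."""
    t = np.clip(t, eps, 1.0 - eps)
    o = np.clip(o, eps, 1.0 - eps)
    return float(np.sum(t * np.log(t / o) + (1 - t) * np.log((1 - t) / (1 - o))))
```

Its derivative with respect to an output, −t/O + (1 − t)/(1 − O), cancels the O(1 − O) factor of a sigmoid unit during backpropagation, which is where the computational saving noted in [11] comes from.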
Eq. (2) is the error function used in our implementation of the resilient propagation algorithm [16]. It should be stressed that the change of the error function from Eq. (1) to Eq. (2) does not increase the computational load [11].

3.3. Inputs to the neural network

The seven inputs to the RPROP network for target detection in a given image are the moment invariant values of the target to be detected. Moment invariance was first introduced by Hu [7]; Parameswaran et al. [14] successfully used moment invariants for symmetric images. The theory of moments provides an interesting and useful alternative to series expansions for representing the shape of objects. Moment invariants are certain non-linear functions, defined on the geometrical moments of the image of an object, which are invariant to translation, scale, orientation and aspect angle. Given a two-dimensional M×M image g(x, y), x, y = 0, 1, ..., M − 1, the (p + q)th-order geometrical moment is defined as [10]

m_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{M-1} x^p y^q g(x, y), \quad p, q = 0, 1, 2, 3, \ldots, n.

To keep the dynamic range of m_{pq} consistent for different image sizes, the M×M image plane is first mapped onto the square defined by x ∈ [−1, +1], y ∈ [−1, +1].
This implies that the grid locations are no longer integers but take real values in the [−1, +1] range, which changes the definition of m_{pq} to

m_{pq} = \sum_{x=-1}^{+1} \sum_{y=-1}^{+1} x^p y^q g(x, y), \quad p, q = 0, 1, 2, 3, \ldots, n.

To make these moments invariant to translation, one can define the central moments

\mu_{pq} = \sum_{x=-1}^{+1} \sum_{y=-1}^{+1} (x - \bar{x})^p (y - \bar{y})^q g(x, y), \quad p, q = 0, 1, 2, 3, \ldots, n,

with \bar{x} = m_{10}/m_{00} and \bar{y} = m_{01}/m_{00}. Central moments can be normalized to become invariant to scale change:

\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}}, \quad \gamma = \frac{p+q}{2} + 1.

A set of non-linear functions defined on the η_pq which are invariant to rotation, translation, and scale change has been derived [10]. The first seven functions are

\phi_1 = \eta_{20} + \eta_{02},
\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,
\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2,
\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2,
\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2],
\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}),
\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2].

In the above, φ_7 is skew-invariant and is useful in distinguishing mirror images. The numerical values of φ_1 to φ_7 are very small; to avoid precision problems, the logarithms of the absolute values of these seven functions, log |φ_i|, i = 1, ..., 7, are selected as the features representing the image.

These moments are noise-sensitive. Although the moments serve the useful purpose of being invariant to the operations discussed above, the odd-order moments should not vanish, because in a noisy environment one needs more non-zero features for successful classification, since most classifiers are trained with only noiseless images. In our case, since all seven moments are non-vanishing, the moments reduce the effects of noise. Thus these moments can be used for representing
symmetrical and non-symmetrical images with and without noise. Some related studies can be found in [5]. The geometric invariants are applied to the binary thresholded image.
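A compact numpy sketch of the feature extraction is given below. For simplicity it computes the normalized central moments η_pq directly on the pixel grid rather than via the [−1, +1] remapping described above (the two routes yield the same translation- and scale-invariant features); the small offset guarding log(0) is our addition.

```python
import numpy as np

def hu_features(img):
    """Return the seven log|phi_i| features of a binary (thresholded) patch."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    g = img.astype(float)
    m00 = g.sum()                       # assumed non-zero (non-empty patch)
    xb, yb = (x * g).sum() / m00, (y * g).sum() / m00

    def eta(p, q):                      # scale-normalized central moment
        mu = (((x - xb) ** p) * ((y - yb) ** q) * g).sum()
        return mu / m00 ** ((p + q) / 2 + 1)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    a, b = n30 + n12, n21 + n03         # recurring third-order sums
    phi = np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        a ** 2 + b ** 2,
        (n30 - 3 * n12) * a * (a ** 2 - 3 * b ** 2)
            + (3 * n21 - n03) * b * (3 * a ** 2 - b ** 2),
        (n20 - n02) * (a ** 2 - b ** 2) + 4 * n11 * a * b,
        (3 * n21 - n03) * a * (a ** 2 - 3 * b ** 2)
            - (n30 - 3 * n12) * b * (3 * a ** 2 - b ** 2),
    ])
    return np.log(np.abs(phi) + 1e-30)  # offset avoids log(0)
```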
4. Hardware implementation of the RPROP algorithm

This section discusses the hardware implementation of the RPROP algorithm. The hardware used is a DSP board (ADSP-21062) with the SHARC processor. Digital signal processors [2] are high-performance floating-point processors optimized for speech, sound, graphics and imaging applications. Because of their high computational speed they can also be used in real-time applications. The ADSP-21062 SHARC (Super Harvard Architecture Computer) is a high-performance 40 MIPS (25 ns instruction cycle time) 32-bit digital signal processor in the Analog Devices ADSP-21000 family of floating-point DSPs. The SHARC is built on the ADSP-21000 family DSP core to form a complete system-on-a-chip, adding dual-ported on-chip SRAM and integrated I/O peripherals supported by a dedicated I/O bus. With its on-chip instruction cache, the processor can execute every instruction in a single cycle. Four independent buses for dual data, instructions, and I/O, plus crossbar-switch memory connections, comprise the Super Harvard Architecture of the ADSP-21062. The ADSP-21062 SHARC represents a new standard of integration for digital signal processors, combining a high-performance floating-point DSP core with integrated on-chip features including a host processor interface, DMA controller, serial ports, link ports and shared-bus connectivity for glueless DSP multiprocessing. A DSP architecture is better suited than a general-purpose microprocessor architecture for neural network implementation because of its fast, flexible arithmetic units and the extended precision and dynamic range of its computation units. The user interface of the system was written in the C language. The executable code of the RPROP algorithm, written in assembly language, was downloaded to the board using a host interface function provided as part of the interface library.

4.1. Software implementation using the DSP

The algorithms and methods for implementing the application were coded using the assembly language instructions of the DSP. The DSP implementation requires the following modules.

(i) Optimizing C compiler: The C compiler reads source files written in ANSI-standard C [1] and outputs ADSP-21062 assembly language files. It comes with a standard library of C-callable routines.

(ii) ASM files: These are the source code files written in ADSP-21062 assembly language. These files define the different memory segments which store the main program, subroutines, and the input and output data in data memory and program memory.
(iii) Assembler: The assembler inputs a text file of ADSP-21062 source code and assembler directives and outputs a relocatable object file. The assembler supports standard C preprocessor directives as well as its own directives.

(iv) ACH files: The architecture file (with extension .ach) is an ASCII text file that contains a definition of both the physical and the logical aspects of the system at runtime. The architecture file is read by the linker and the C compiler for the size and placement of code segments in memory, register allocation, placement of memory banks and memory wait states, and other options that affect code generation.

(v) Linker: The linker processes separately assembled object and library files to create a single executable program. It assigns memory locations to code and data in accordance with the user-defined architecture file. The linker inputs one or more object files and outputs an executable file.

(vi) Library: The assembly library contains standard arithmetic and DSP routines that can be called from the source programs, saving development time. Routines can also be added to the library.

(vii) Simulator: The simulator executes an ADSP-21062 executable program in software in the same way that an ADSP-21062 processor would in hardware. The simulator also simulates the memory and I/O devices specified in the architecture file. It has a window-based user interface, which helps in interactively observing and altering the data in the processor and memory.
5. Results

In this paper, a set of seven non-linear moment invariant functions [10] is used to train and test the RPROP neural network. The network has been extensively trained and tested, and it was found that a four-layer network produces promising results for target detection. The algorithm has been implemented on an ADSP-21062 DSP board; the coding was carried out in the assembly language of the SHARC processor. The input layer contains seven neurons because the input to the neural network consists of the seven moment invariant values. It is observed that resilient propagation-based target detection performs better than the image processing method of target detection.

Table 1 shows the test results for the image processing method of target detection, with Gaussian noise [6,15] of varying standard deviation (σ) and a constant mean (μ) of zero added to each pixel to generate a noisy image. From Table 1 it is found that the image processing method fails to detect the target when the standard deviation (σ) is 0.21 or more.
Table 1
Results of tests carried out with noise for the image processing method

Mean (μ)    Standard deviation (σ)    Result of testing
0           0.001                     Target detected
0           0.01                      Target detected
0           0.21                      Target not detected
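The noisy test images referred to in Tables 1 and 3 can be generated along the following lines; the pixel range [0, 1] assumed here is our choice, since the paper does not state the scale used.

```python
import numpy as np

def add_gaussian_noise(img, sigma, mu=0.0, seed=None):
    """Corrupt each pixel with Gaussian noise of mean mu and std. dev. sigma."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(mu, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep the result a valid image
```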
5.1. Results of RPROP training

For training the RPROP network, the input consists of 50 sets of moment invariants of 12×12 images: 25 images of a car and 25 images of objects other than a car. Table 2 shows the time taken for the RPROP network to converge.

Table 2
Results of training

Network size    No. of iterations    Total no. of inputs    Time taken to converge (min)
7-42-35-1       39,501               50                     152
7-49-35-1       42,751               50                     187
7-56-42-1       60,751               50                     246
7-63-49-1       64,501               50                     254
We have considered a four-layer network. Several methods for selecting the number of nodes in the hidden layers have been discussed in previous publications, especially in the context of generalization ability [3]. Another way to choose the architecture is to select the network with the smallest number of hidden-layer neurons that yields the best performance on the entire training set; we have followed such an approach. Note that in Table 2, 7-42-35-1 means that the input layer contains 7 neurons, the first hidden layer 42 neurons, the second hidden layer 35 neurons, and the output layer a single neuron. The number of layers in the network and the number of neurons in the hidden layers were fixed through extensive experimental studies to yield the best possible performance. As the network size is increased, the time taken for convergence also increases. It was found that 63 neurons in the first hidden layer and 49 neurons in the second hidden layer yield good results.

5.2. Results of RPROP testing

For testing the RPROP network, the seven moment invariant values of a given 12×12 image are presented to the network. The test results with Gaussian noise [6,15] of different standard deviations and a constant mean of zero, for different sizes of the network, are shown in Table 3. The test data were randomly selected over 60 sample cases, several of which were previously unseen. Several tests were performed for each noise level; the table shows only the limiting case of noise at which the target was detected/not detected.
Table 3
Test results for noisy input

Network size    Mean (μ)    Variance (σ²)    Result
7-42-35-1       0           7                Target detected
7-42-35-1       0           11               Target not detected
7-49-35-1       0           10               Target detected
7-49-35-1       0           16               Target not detected
7-56-42-1       0           15               Target detected
7-56-42-1       0           21               Target not detected
From Table 3 it is seen that as the network size increases, the network detects the target at higher noise levels. The neural network fails to detect the target under very heavy noise, as the image gets corrupted beyond recognition. All targets were detected when the noise level was minimal, and at higher values of variance the algorithm failed to detect them. As seen from Tables 1 and 3, the neural network-based approach detects targets under more noise than the image processing technique.
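Putting the pieces together, the neural route reduces to a short decision rule per scanned window. The sketch below is illustrative glue, not the paper's code: it reuses the hu_features sketch from Section 3.3 and assumes a mean-based binarization and a 0.5 output threshold, none of which are stated in the paper.

```python
def is_target(window, net, threshold=0.5):
    """Classify one 12x12 window: binarize, extract the seven log|phi|
    features, and threshold the trained network's single output.

    net: any callable mapping a length-7 feature vector to a scalar in
    (0, 1), e.g. the trained 7-63-49-1 RPROP network."""
    binary = (window > window.mean()).astype(float)  # assumed binarization rule
    features = hu_features(binary)                   # sketch from Section 3.3
    return float(net(features)) > threshold
```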
6. Conclusions

In this paper, a set of seven non-linear moment invariant functions is used to train and test a neural network based on the resilient propagation algorithm. It was found that a four-layer network produces promising results for target detection compared to the image processing technique. The algorithm has been implemented on an ADSP-21062 DSP board. In order to detect the target under translation, scaling and rotation transformations, the moment invariant technique has been used effectively. To the best of our knowledge, a similar approach/application has not been reported in the literature, and thus no direct comparison could be presented in this paper.
Acknowledgements

The authors would like to acknowledge useful discussions with Dr. K.V. Rao, Mr. B.V. Rao and Dr. K.N. Swamy. Mr. H. Shankaranarayana Adiga and Mr. A. Ranjan helped with the implementation details. The suggestions made by the anonymous reviewers and the Editor-in-Chief, Dr. V. David Sanchez A., have improved the presentation of the paper.
References

[1] ADSP-21000 Family C Tools Manual, Analog Devices, Inc., 1992.
[2] ADSP-2106x SHARC User's Manual, Vols. I and II, Analog Devices, Inc., 1995.
[3] E.B. Baum, D. Haussler, What size net gives valid generalization?, Neural Comput. 1 (1) (1989) 151–160.
[4] B. Bhanu, Automatic target recognition: state of the art survey, IEEE Trans. Aerospace Electron. Systems AES-22 (4) (1986) 364–379.
[5] M. Gruber, K.Y. Hsu, Moment-based image normalization with high noise tolerance, IEEE Trans. Pattern Anal. Mach. Intell. 19 (2) (1997) 136–138.
[6] S.V. Hoover, R.F. Perry, Simulation: A Problem Solving Approach, Addison-Wesley, Reading, MA, 1989.
[7] M.K. Hu, Visual pattern recognition by moment invariants, IRE Trans. Inform. Theory IT-8 (1962) 179–187.
[8] A.K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[9] J.R. Jensen, Introductory Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, 1986.
[10] A. Khotanzad, J.H. Lu, Distortion invariant character recognition by a multilayer perceptron and backpropagation learning, in: Proceedings of the IEEE International Conference on Neural Networks, Vol. 1, 1988, pp. 625–632.
[11] K. Matsuoka, J. Yi, Backpropagation based on the logarithmic error function and elimination of local minima, in: Proceedings of the IEEE International Conference on Neural Networks, 1991, pp. 1117–1122.
[12] S.P. Kozaitis, W.E. Foor, Performance of synthetic discriminant functions for binary phase-only filtering of thresholded imagery, Opt. Eng. 31 (4) (1992) 830–837.
[13] M. Lineberry, Image segmentation by edge tracing, in: Applications of Digital Image Processing IV, SPIE Vol. 359, 1982, pp. 361–368.
[14] R. Parameswaran, P. Ramaswamy, S. Omatsu, Regular moments for symmetric images, IEE Electron. Lett. 34 (15) (1998) 1481–1482.
[15] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C, Cambridge University Press, Cambridge, 1992.
[16] M. Riedmiller, H. Braun, A direct adaptive method for faster backpropagation learning: the RPROP algorithm, in: Proceedings of the IEEE International Conference on Neural Networks, 1993, pp. 586–591.
[17] E.M. Rounds, T. King, D. Steffy, Feature extraction from forward looking infrared (FLIR) imagery, in: Image Processing for Missile Guidance, SPIE Vol. 238, 1980, pp. 126–135.
[18] P.D. Wassermann, Neural Computing: Theory and Practice, Van Nostrand Reinhold, New York, 1989.
[19] Z. Tang, G.J. Koehler, A convergent neural network learning algorithm, in: Proceedings of the IEEE International Conference on Neural Networks, Vol. II, 1992, pp. 127–132.
[20] Y.T. Zhou, R. Chellappa, B.K. Jenkins, A novel approach to image restoration based on neural network, in: Proceedings of the IEEE International Conference on Neural Networks, Vol. 4, 1993, pp. 269–276.
L.M. Patnaik obtained his Ph.D. in 1978 in the area of Real-Time Systems and his D.Sc. in 1989 in the areas of Computer Systems and Architectures, both from the Indian Institute of Science, Bangalore, India. Currently, he is a Professor with the Department of Computer Science and Automation of the Electrical Sciences Division at the Indian Institute of Science, Bangalore, India. He directs his research group in the Microprocessor Applications Laboratory at the Institute. During the last 30 years of his service at the Institute, his teaching, research, and development interests have been in the areas of Parallel and Distributed Computing, Computer Architecture, CAD of VLSI Systems, Computer Graphics, Theoretical Computer Science, Real-Time Systems, Neural Computing, and Genetic Algorithms. In these areas, he has published over 140 papers in refereed International Journals and over 160 papers in refereed International Conference Proceedings on the theoretical, software, and hardware aspects of the above areas of Computer Science and Engineering. He is a co-editor/co-author of five books in the areas of VLSI System Design and Parallel Computing.
K. Rajan obtained his Ph.D. degree from the Indian Institute of Science in 1997. He has been a Principal Research Scientist in the Department of Physics, Indian Institute of Science, since 1995. His research interests are in signal processing, image processing, medical imaging and parallel computer architectures.