Control Engineering Practice 13 (2005) 509–518

Simulation study of artificial ocular movement with intelligent control

Jason J. Gu a,*, Max Meng b, Albert Cook c, Gary Faulkner d

a Department of Electrical and Computer Engineering, Dalhousie University, 1360 Barrington Street, Halifax, Canada B3J 2X4
b Department of Electrical and Computer Engineering, University of Alberta, Canada T6G 2G7
c Faculty of Rehabilitation Medicine, University of Alberta, Canada T6G 2G7
d Department of Mechanical Engineering, University of Alberta, Canada T6G 2G7

* Corresponding author. Tel.: +1-902-494-3163; fax: +1-902-422-7535. E-mail address: [email protected] (J.J. Gu).

Received 7 March 2003; accepted 13 April 2004. Available online 25 May 2004. doi:10.1016/j.conengprac.2004.04.012

Abstract

This paper is concerned with the design and construction of a biomedical assistive device that helps ophthalmic patients regain the natural movement of their ocular prostheses. Various aspects of the device are discussed in terms of sensor design, sensor data fusion techniques, artificial ocular actuation, the control system and the experimental prototype setup. Simulation and experimental studies are included to illustrate the effectiveness of the device. The paper concludes with future considerations. © 2004 Elsevier Ltd. All rights reserved.

Keywords: Biomedical systems; Sensor fusion; Infrared detectors; Neural networks; Automatic control; Electrodes; Sensor failure

1. Introduction

A person who has lost an eye, for whatever reason, may suffer severely both psychologically and physically, and reconstructive surgeons usually implant an artificial eye. There is a long history of artificial eyes, from antiquity to the present (Martin & Clodius, 1979). Before the 19th century, artificial eyes were made of metal; these were expensive, heavy and painful to wear, so metal eyes were replaced by glass eyes at the beginning of the 19th century. These ocular prostheses were created to replace lost eyes. Physically, an artificial eye can be made to appear as natural as a real one, yet it is static, as shown in Fig. 1. Research indicates that patients are generally not satisfied with a static artificial eye; an artificial eye that moves like a natural eye would make ophthalmic patients much more confident and psychologically happier. A few research groups have studied this problem (Massry & Holds, 1995; Sheilds, Shields, & De Potter, 1993; Qiangjuan, Jinglin, Hongfei, & Lageng, 1996). The advent of the hydroxyapatite orbital implant has improved

the motility of the ocular prostheses of ophthalmic patients who have undergone enucleation. Patients with such a motile implant have better cosmetic results than those with static implants, and even the limited, small-degree motility of the ocular prosthesis was enough to boost the patient's confidence psychologically. However, large-degree movement of the ocular prosthesis was impossible. In this paper, a novel project on developing a motile ocular system is presented, which will endow an artificial ocular prosthesis with natural movement like the real eye.

1.1. The background

The human eyeball can rotate around any axis, as shown in Fig. 2. Rotation around the x-axis moves the eye from side to side; rotation around the horizontal y-axis moves the eye upward or downward; torsional eye movement occurs around the z-axis. Three antagonistic pairs of muscles control natural eye movement: the lateral and medial recti, the superior and inferior oblique, and the superior and inferior recti, as shown in Fig. 3. The lateral and medial recti are mainly responsible for moving the eyes horizontally. Both the superior and inferior oblique and the superior and inferior recti can move the eye vertically as well as torsionally (Kandel, Schwartz, & Jessell, 1995). To endow the artificial eye with the same functionality as the muscles, the artificial eye was mounted onto a small servomotor. The aim of the present project is to sense the natural eye movement in the horizontal direction only, and then to control the motor so that the artificial eyeball moves correspondingly and naturally, matching the horizontal movement of the natural eye.

Fig. 1. The patient lost one of his eyes (left) and has it replaced by the ocular implant (right).

Fig. 2. Three principal axes of eye rotation.

Fig. 3. Superior view of the eye muscles.

1.2. Literature review on eye movement detection

The normal eye movement signal is used to control the movement of the artificial eye, so the first step is

to detect the eye movement of the natural eye. In the past 35 years, eye movement and eye blink detection have attracted much attention in various fields, and from as early as the 1950s a large number of researchers began to study eye movements. Young (1975) reviewed different types of research projects on eye movements and analyzed the advantages and disadvantages of the different methods. For eye movement sensing, the following devices and methods have been reported: electrodes, magnetic induction, optical sensing, photoelectric methods, infrared reflection, and video imaging. Many methods measure the potential induced by the eye movements indirectly, rather than directly measuring the mechanical movement of the eyeball itself; these include electroretinogram and electromyogram sensing. Such methods are invasive and thus not suitable for the purposes of this project. Fiber optic eye position sensing enables both horizontal and vertical eye positions to be determined with respect to the frame of a pair of eyewear worn by the patient (Drouin & Drake, 1987). The imaging lenses are mounted on the eyewear frame around the field of vision, which allows normal eye movement and unobstructed vision. The optical sensors detect light reflected from the left and right sides of the iris and send it through optical fibers to remote detectors. The instrument can achieve an accuracy of better than 1/2° in the horizontal direction and 3° in the vertical direction. Nakano and others measured eye blinks by image processing and applied the measurement to the detection of a driver's drowsiness (Nakano, Sugiyama, Mizuno, & Yamamoto, 1996). Tock and Craw (1996) described a computer vision system for tracking the eye movement of a car driver in order to


Fig. 4. Arrangement of MI element on the glasses frame.

measure eyelid separation to determine the drowsiness of the driver. Eadie (1995) used a limbus reflection method to measure eye movements: when the limbus is illuminated with infrared radiation, the method measures the difference in the reflected radiation as the eye moves. Practical realizations employ a pair of emitter/detectors directed at opposite sides of the eye and amplify the difference between the detected signals as the eye moves; the use of a pair has been shown to improve the linearity of the system. Takagi and others (Takagi, Mohri, Katoh, & Yoshino, 1994) constructed a displacement sensor for detecting eyelid movements using amorphous-wire magneto-inductive elements. It is an invasive method, as shown in Fig. 4. The above methods can detect eye movements, but the hardware systems are bulky and complex, and they are not practical for a small system such as the motile ocular device.

Fig. 5. A photograph of a light servomotor is shown compared to a penny.

Fig. 6. The system block diagram of motor control.

Fig. 7. EOG detection and data acquisition.

1.3. Organization of the paper

The organization of the paper is as follows. Section 2 describes the experimental system. Section 3 discusses a sensor data fusion method. Section 4 presents the experimental study. Section 5 addresses system integration and control. The last section concludes the paper.

2. The experimental system

2.1. Motor

A small servomotor is used to drive the artificial eye. The motor, shown in Fig. 5, is one of the smallest and lightest standard servomotors available, weighing only 3.5 g. At a 5 V supply the motor reaches its maximum output torque, which is sufficient to drive the shaft with the artificial eyeball mounted on it. The servomotor is controlled by pulse-modulated signals: the width of the pulse is the code that signifies to what position the shaft should turn, as illustrated below. Fig. 6 illustrates a control diagram of the servomotor system.
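As a concrete illustration of this pulse-width coding, the Python sketch below maps a desired shaft angle to a pulse width. The 1.0–2.0 ms pulse range, 20 ms frame and ±45° travel are assumptions borrowed from common hobby-servo conventions, not parameters given in the paper.

# Sketch: encoding a desired shaft angle as a servo pulse width.
# Assumed convention (not from the paper): 1.0 ms = -45 deg,
# 1.5 ms = center, 2.0 ms = +45 deg, one pulse every 20 ms.

def angle_to_pulse_ms(angle_deg, min_pulse=1.0, max_pulse=2.0,
                      angle_range=90.0):
    """Encode a shaft angle (0 deg = center) as a pulse width in ms."""
    angle = max(-angle_range / 2, min(angle_range / 2, angle_deg))
    return min_pulse + (max_pulse - min_pulse) * \
        (angle + angle_range / 2) / angle_range

# Example: command the eye 20 deg right of center (~1.72 ms pulse,
# repeated every 20 ms until a new position code is issued).
print(angle_to_pulse_ms(20.0))

Sent at the frame rate, this pulse train is the "code" that tells the servo where to turn.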

2.2. Sensor

When choosing a sensing method to detect natural eye movement, constraints on sensor size and complexity must be considered. In this paper, two suitable methods are applied to detect eye movements: the electrooculography method and the infrared reflection method. These sensors are simple to use; however, each method has its advantages and disadvantages, as shown in the next section.

2.2.1. Electrode sensor

Electrooculography (EOG) is a classical method that has long been used to detect eye movement. The EOG signal can be used to measure the positions of the eyes with respect to the head (Davison, 1980), the electrical axis of the eye corresponding to its visual axis. According to our and others' experimental data, EOG can record eye movements over ±70°, with a typical accuracy of approximately ±1.5° to ±2°; greater resolution is possible by averaging equivalent responses. An EOG recording system was set up to record the eye movements, as shown in Fig. 7.


Two small (6–8 mm) Ag/AgCl electrodes are used for EOG signal recording. A disposable low-impedance electrode, functioning as the reference, is placed on the midline of the forehead; the configuration of the electrodes is shown in Fig. 8. One horizontally placed pair of electrodes records the horizontal eye movements; a vertically placed pair can record vertical eye movements, which is not used in the current project. To record the horizontal movement of one eye, one electrode is moved to the midpoint between the two eyes. The potential difference between the two electrodes is amplified and sent to the computer through the A/D card at a 30 Hz sampling frequency, and the amplified signal is simultaneously displayed on an oscilloscope. The recorded data are plotted in Figs. 9 and 10, respectively. Fig. 9 depicts the two-eye horizontal movement signals, with an electrode placed close to the temple on each side of the head; Fig. 10 shows the left-eye horizontal movement signals. The one-eye movement signal was recorded to find out how it differs from the two-eye signal: experimental study shows that the amplitude of the two-eye signal is almost twice that of the one-eye signal. The labels "left" and "right" on the curves denote the smooth pursuit movement of the eye from the left side to the right side and back.

Fig. 9. Use EOG to detect two-eye movement (electrodes at each temple); voltage in volts vs. sample number, 30 Hz sampling frequency.

Fig. 10. Use EOG to detect left-eye movement (one electrode at the nose and one at the left temple); voltage in volts vs. sample number, 30 Hz sampling frequency.

The signal is close to linear, but drift is a problem; this is the main drawback associated with EOG. EOG is subject to a slow shift in baseline electrical potential, caused primarily by the polarization of the electrodes, and this is one main reason that EOG found less favor with researchers in the past. The following steps have been used to improve the signal quality by suppressing the drift (a filtering sketch follows the list):

* Preparation. The drift is usually caused by an accumulation of electrical charge in the recording electrodes, but it can be minimized, or even eliminated, by preparing the recording contact site (e.g., mild skin abrasion, conductive paste).
* Polarization of the electrodes. Let the subject settle down for at least an hour or so to allow the electrode polarization to stabilize.
* Filtering. Use a low-pass filter with the corner frequency set to a very low value, around 10 Hz.
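To make the filtering step concrete, the sketch below low-passes a 30 Hz EOG record near the 10 Hz corner mentioned above. The second-order Butterworth design and zero-phase filtering are assumptions; the paper states only the approximate corner frequency.

# Sketch: low-pass filtering an EOG trace sampled at 30 Hz.
# The ~10 Hz corner follows the text; the filter order and the use
# of zero-phase filtfilt are assumptions made for illustration.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                          # sampling frequency (Hz)
fc = 10.0                          # corner frequency (Hz)
b, a = butter(2, fc / (fs / 2))    # normalized to Nyquist (15 Hz)

t = np.arange(0, 60, 1 / fs)       # one minute of samples
rng = np.random.default_rng(0)
eog = np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(t.size)

eog_filtered = filtfilt(b, a, eog) # smoothed EOG trace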

Using the methods above, low-drift EOG signals can be obtained, as shown in Fig. 11.

Fig. 8. Configuration of the electrodes.

2.2.2. Infrared sensor

A pair consisting of an emitter and a detector was used first: the emitter sends out an infrared signal and the detector detects the reflected one. The emitter was mounted onto the upper front of the eyewear frame and the detector onto the lower front, and the angle between the emitter and detector was adjusted so that the detector receives the signals reflected by the eye. The emitter and detector are about 5 mm in diameter and 1.5 cm in length, which is fine for a preliminary study but too large for a final product.

The second method is the infrared reflection method, in which an array of emitter/detector pairs detects the eye movement. Fig. 12 shows the small infrared emitter array from Infineon Technologies. The detector array is around 3.5 mm high and 2.5 cm long and can easily be fitted onto an eyewear frame. The emitter sends out infrared light to illuminate the eye, and the reflected infrared light is detected by the detector array. The sensor placement is shown in Fig. 13: the emitter and detector arrays are each mounted on a rod, and the rods are mounted onto the frame of the eyeglasses. Each rod can translate and rotate. The emitter and detector are adjusted such that the emitter illuminates the eyeball and the detector array receives the reflected light, which changes in relation to the eye position. Because the reflected light may suffer interference from ambient light, a modulated signal is used to avoid the interference (a demodulation sketch is given below). The reflected light signal is converted to a digital signal and then transferred to a microcontroller. Through sensor data fusion, the eye position can be determined. The experimental system setup is shown in Fig. 13. In the next section, the sensor data fusion algorithm will be presented.
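The paper states only that the emitter signal is modulated; one common realization, shown here purely as an assumption, is to drive the emitter with a carrier and synchronously demodulate the detector output, so that ambient light averages out.

# Sketch: synchronous demodulation of a modulated infrared channel.
# The 1 kHz square-wave carrier and 10 kHz ADC rate are assumptions;
# the paper only says a modulated signal rejects ambient light.
import numpy as np

fs, f_carrier = 10_000.0, 1_000.0
t = np.arange(0, 0.05, 1 / fs)                 # 50 ms window

reflect = 0.3                                  # reflection strength to recover
carrier = np.sign(np.sin(2 * np.pi * f_carrier * t))
ambient = 0.5 + 0.2 * np.sin(2 * np.pi * 100 * t)
detector = reflect * carrier + ambient         # photodiode output

# Multiplying by the carrier reference and averaging cancels the
# ambient terms, which are uncorrelated with the carrier.
recovered = np.mean(detector * carrier)
print(recovered)                               # ~0.3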


Fig. 13. Eye position acquisition block diagram.

Fig. 11. Use EOG to detect right eye movement (one electrode at nose and one at right temple).


3. Sensor data fusion

Fig. 12. Infrared array.


In this section, a modeling approach is presented based on the sensor array introduced in the previous section. A linear photodiode array is used to receive the light reflected from an object; the received signal not only represents the distance between the object and the sensor, but also carries information about the surface texture of the object. This property is exploited to detect the eye movement. In the present case, the detector array always faces the moving eye, and for each specified position of the eye there is a corresponding output of the linear sensor array, so different eye positions generate different sensor outputs. Through experiments, the eye position data space and the sensor data space can be created.

3.1. Neural network approach

The work done on fusion algorithms and theories can be roughly divided into three categories: statistically based fusion algorithms, information-theoretic fusion algorithms, and neural network and fuzzy set based fusion. Sensor fusion with known statistics often relies on well-developed techniques such as maximum a posteriori and maximum-likelihood estimation, and adapts results from Kalman filtering, the Bayesian theorem, Dempster–Shafer evidence theory, and adaptive decision. Information-theoretic fusion


algorithms make use of a transformation or mapping between parametric data and a resultant identity declaration; the techniques include expert systems, rule-based methods and adaptive learning. Neural network and fuzzy set based fusion policies are distribution-free: no prior knowledge about the statistical distributions of the classes in the data source is needed in order to apply these methods for sensor fusion. Since the statistical distributions of the data in our sensor system could not be obtained, neural network and fuzzy set based fusion algorithms are appropriate. A great deal of research has applied fuzzy sets and neural networks to sensor data fusion (Wide & Driankov, 1996; Wang, Zheng, Yuan, & Fu, 1996; Zheng & Bhanu, 1996; Lee, 1996; van Dam, Krose, & Groen, 1996; Kabre, 1996). The fuzzy sets approach (Wide & Driankov, 1996) is typically used for classification. Neural network methods are often used for motion detection (Wang et al., 1996), object detection (Zheng & Bhanu, 1996), speech perception (Kabre, 1996), and signal processing (Chung & Merat, 1996). Lee (1996) presented a perception–action network, in which the network embeds feasible system behaviors at various levels of abstraction such that the system can rearrange and control its behaviors towards the goal. We chose a neural network instead of fuzzy sets for sensor fusion because of its simplicity.

In this section, an artificial neural network based approach to sensor data fusion is presented. An artificial neural network can learn the characteristics of a non-linear system, without an explicit model, from training samples; during the real application, the sensor signals can be used to feed and train the network. In theory, a neural network can also fit a curve to arbitrary accuracy, so it can be used to process successive sensor data and estimate the next sensor reading. In this paper, the network was designed both to detect sensor failures and to perform sensor fusion.

3.2. Two-layer neural network

An artificial neural network can learn the characteristics of a non-linear system through training. Assume there are n inputs X = [x1, x2, ..., xn] and m outputs Y = [y1, y2, ..., ym], related by a general unknown non-linear function Y = F(X). A neural network sketched in Fig. 14 can be used to learn the relationship between X and Y. The network assumes a two-layer structure: the input layer has neurons fed by the sensor measurements xi, i = 1, ..., n, with activation function Fin and biases Bin, and the output layer has neurons fed by the outputs of the input layer.

Fig. 14. Two-layer neural network (inputs X1, ..., Xn; hidden layer; output layer; outputs Y1, ..., Ym).

Fig. 15. Supervised learning (the network output is compared with the desired output Ydesired to generate the error).

The output neurons have activation function Fout and biases Bout. A set of weights is associated with each layer. Let Win be the weight matrix for the input layer and Wout that for the output layer; the output Y can then be expressed as

Y = Fout(Wout(Fin(Win X) + Bin)) + Bout.   (1)
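A minimal numpy sketch of Eq. (1) follows. The layer sizes and the sigmoid input activation are placeholders, since the paper does not specify them; the bias placement mirrors the equation exactly.

# Sketch: forward pass of the two-layer network,
# Y = Fout(Wout(Fin(Win X) + Bin)) + Bout  (Eq. (1)).
# Sizes and activations are assumptions for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, h, m = 9, 12, 1                 # inputs (sensors), hidden units, outputs
rng = np.random.default_rng(1)
W_in, B_in = rng.normal(size=(h, n)), np.zeros(h)
W_out, B_out = rng.normal(size=(m, h)), np.zeros(m)

def forward(x, f_in=sigmoid, f_out=lambda z: z):
    hidden = f_in(W_in @ x) + B_in        # Fin(Win X) + Bin
    return f_out(W_out @ hidden) + B_out  # Fout(Wout(...)) + Bout

y = forward(rng.random(n))         # one output from nine sensor readings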

3.3. Learning of neural network

Supervised learning is used for the proposed two-layer network. The outputs of the network are compared to the desired outputs, and the error is used to update the weights and the biases; the network is trained by minimizing this error. The block diagram of the neural network learning system is shown in Fig. 15.

3.4. Training the neural network to recover faulty sensor data

Using the proposed neural network and the back-propagation (BP) algorithm, the training process can be divided into the following steps (a training sketch follows the list):

* Normal sample selection. First, randomly select a set of sensor data samples collected while the sensors are working normally.
* Faulty sensor sample selection. Second, randomly select abnormal sensor data samples collected while the sensor data are faulty.
* Training. Use the normal data samples as the input and the true data of the faulty sensor as the desired output to train the network; use the BP algorithm or an adaptation algorithm until the network stabilizes.
* Faulty sensor data recovery. When a sensor failure is detected, feed the network with the sensor data excluding the faulty sensor's data; the output is the recovered sensor data. This second trained network can be used to recover the faulty data.
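The sketch below illustrates the recovery idea on synthetic data: a small network is trained with plain back-propagation to predict one sensor channel from the other eight, so that its prediction can stand in for the channel after a failure. The architecture, learning rate and epoch count are assumptions, not the paper's settings.

# Sketch: train a network to reconstruct sensor 5 from the other eight
# channels; its output substitutes for sensor 5 when it fails.
import numpy as np

rng = np.random.default_rng(2)
data = rng.random((2000, 9))                       # stand-in normal samples
data[:, 4] = 0.5 * data[:, 0] + 0.5 * data[:, 8]   # make channel 5 predictable

X = np.delete(data, 4, axis=1)     # the eight healthy channels (input)
y = data[:, [4]]                   # true data of the faulty sensor (target)

W1, b1 = rng.normal(0, 0.1, (8, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.1, (16, 1)), np.zeros(1)
lr = 0.05

for epoch in range(500):           # plain back-propagation on MSE
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y                 # gradient of MSE up to a constant
    dH = (err @ W2.T) * (1.0 - H ** 2)
    W2 -= lr * (H.T @ err) / len(X);  b2 -= lr * err.mean(axis=0)
    W1 -= lr * (X.T @ dH) / len(X);  b1 -= lr * dH.mean(axis=0)

recovered = np.tanh(X @ W1 + b1) @ W2 + b2   # replacement for channel 5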

Fig. 17. Scaled eye movement signal (deflection of the eye in degrees vs. scaled EOG samples).

3.5. Simulation

Fig. 18. Fusion block diagram.

3.5.1. Initial sensor data space creation and neural network model generation

The artificial eye model was used to carry out the simulation study. In our experimental setup, shown in Fig. 16, a linear sensor array detects the infrared reflection from an artificial eye. For each position of the artificial eye there is a corresponding output of the sensor; using this relationship, the eye position data space and the sensor data space are generated. A moving path for the eye is designed and used to control the servomotor that drives the artificial eyeball, while the sensor array records the movements and sends the data to the computer. The sensor array data are then used as inputs to the network, with the control signals as its outputs, and a model of the system is obtained through the training process. A second network can be trained similarly to recover faulty sensor data.

Fig. 19. Fusion output eye movement signal (deflection of the eye in degrees).

3.5.2. Sensor data fusion results

The two-eye movement signals are used here for the simulation. Assume that the eye deflection is 0° when the eye rests at the center position, and that the complete range of eye movement is 70°, from −35° to 35°. The linearly scaled eye movement signals are depicted in Fig. 17 and are used as the desired eye movement control signals. The sensor array generates the data array as inputs for the neural network based sensor data fusion block, and the outputs of the network are the eye movement signals. The fusion block diagram and the output eye movement signal are shown in Figs. 18 and 19, respectively.
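The linear scaling itself is a one-line mapping; in the sketch below the voltage extremes are hypothetical calibration values, since the paper does not list them.

# Sketch: linearly scaling a raw EOG trace (volts) onto the +/-35 deg
# deflection range used in the simulation. v_left and v_right are
# placeholder calibration extremes, not values from the paper.
import numpy as np

def scale_to_degrees(eog_volts, v_left=-2.0, v_right=1.5,
                     deg_min=-35.0, deg_max=35.0):
    frac = (np.asarray(eog_volts) - v_left) / (v_right - v_left)
    return deg_min + frac * (deg_max - deg_min)

print(scale_to_degrees([-2.0, -0.25, 1.5]))   # -> [-35., 0., 35.]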

Fig. 16. Eye position analysis diagram (sensor data, PMDI card, multi-channel display, servomotor, data analysis and fusion, system model, control signal).

3.5.3. Sensor failure detection and recovery

In the previous section it was assumed that the sensor array operates normally; in a real environment, sensor failures may occur. There are two kinds of sensor failure. One is hard sensor failure, in which the sensor does not work at all; in electronics this is usually defined as a stuck-at failure, where the sensor is stuck at one extreme of its signal range. The other is soft sensor failure, in which the sensor still works but the sensor data are noisy, exhibiting, for example, bias, drifting, and precision degradation; the amplitude of such sensor noise is typically very low.

Table 1
Structure of the faulty sensors

Time         DS1  DS2  DS3  DS4  DS5  DS6  DS7  DS8  DS9
1–280         1    0    0    0    0    0    0    0    0
281–560       0    1    0    0    0    0    0    0    0
561–840       0    0    1    0    0    0    0    0    0
841–1120      0    0    0    1    0    0    0    0    0
1121–1400     0    0    0    0    1    0    0    0    0
1401–1680     0    0    0    0    0    1    0    0    0
1681–1920     0    0    0    0    0    0    1    0    0
1921–2240     0    0    0    0    0    0    0    1    0
2241–2590     0    0    0    0    0    0    0    0    1

In our setup there are nine sensors in the sensor array. To make the simulation easy to understand, the sensor data record is divided into nine periods, with one sensor failure per period, as shown in Table 1; "1" indicates that a failure occurs and "0" that it does not. To recover the faulty data, the faulty-data detection and recovery block is used, as shown in Fig. 20. The sensor array data first pass through the faulty-data detection and recovery block, which implements the neural network based algorithm; the recovered data are then fed into the fusion block, as discussed in the previous section. Fig. 21 shows the true sensor data and the recovered data: the faulty data are completely recovered.
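The stuck-at pattern of Table 1 is easy to reproduce in simulation; the sketch below injects one hard failure per period into a synthetic nine-channel record, after which each faulty stretch would be rebuilt by the trained recovery network before fusion. The clean data here are random placeholders.

# Sketch: injecting the stuck-at failures of Table 1 into simulated
# nine-sensor data. Period boundaries follow the table; the stuck
# value (the channel maximum) matches the hard-failure definition.
import numpy as np

periods = [(0, 280), (280, 560), (560, 840), (840, 1120), (1120, 1400),
           (1400, 1680), (1680, 1920), (1920, 2240), (2240, 2590)]

rng = np.random.default_rng(3)
clean = rng.random((2590, 9))          # placeholder sensor array record
faulty = clean.copy()

for sensor, (start, end) in enumerate(periods):
    faulty[start:end, sensor] = 1.0    # DS(sensor+1) stuck at its maximum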

4. Experimental study

The goal of the project is to detect the natural eye movement and use the detected signal to control the movements of an artificial eye. The system block diagram is shown in Fig. 22. The infrared sensor detects the current position of the natural eye.

Fig. 20. Block diagram of the faulty data recovery and fusion (sensor array data pass through faulty-data detection and recovery, then fusion; the fused eye signal is compared with the true signal).

Fig. 21. Recovered sensor array data (true data vs. recovered data for each of the nine sensors).

Fig. 22. The control block diagram.

Fig. 23. Eye pit model with ocular prostheses.

Fig. 24. Eye pit with eyewear and ocular prostheses.

The desired angular position, angular velocity and angular acceleration are generated as the input signals. The area enclosed by the dashed line is the microcontroller. The dynamic error signal is used to generate the pulse width modulation (PWM) signal for the motor, and the motor then drives the ocular implant to the desired position, following and matching the movement of the natural eye (a control-cycle sketch is given below).

An eye pit model of the same size as a real eye pit was used in the experiment. The eyewear frame was mounted onto the eye pit model, and two artificial eyeballs were mounted into the eye pits. One eyeball was used to simulate the natural eye (the right eye in Fig. 23); it can be rotated manually or driven by an external signal. The other eyeball was mounted onto a motor (the left eye in Fig. 23) and simulates the artificial eye. The infrared emitter and detector were mounted onto the upper front of the eyewear frame, in front of the natural eye, as shown in Fig. 24. During the experiment, the simulated natural eye was rotated horizontally from one extreme to the other. The sensor detects the eye position and sends the corresponding signal to the controller, which then generates the PWM signal for the motor, driving the artificial eyeball to follow and match the movement of the natural eye.
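The paper does not state the internal controller's law; the sketch below gives one plausible reading of Fig. 22, in which a PD-style rule turns the position and velocity errors into the next pulse-width command. It reuses the angle_to_pulse_ms helper from the servo sketch in Section 2.1, and the gains are placeholders.

# Sketch: one control cycle consistent with Fig. 22. The PD form and
# gains kp, kd are assumptions; the paper specifies only that an error
# signal feeds the PWM signal generator.

def control_step(theta_d, theta_dot_d, theta, theta_dot,
                 kp=1.0, kd=0.2):
    """Return the pulse width (ms) commanded for this cycle."""
    e = theta_d - theta                 # position error (deg)
    e_dot = theta_dot_d - theta_dot     # velocity error (deg/s)
    command_deg = theta + kp * e + kd * e_dot
    return angle_to_pulse_ms(command_deg)

# Example: fused sensors place the natural eye at 15 deg; the motor
# encoder reports the artificial eye at 10 deg and at rest.
pulse = control_step(15.0, 0.0, 10.0, 0.0)   # next PWM command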

5. Future work in system integration and control

In order to minimize the volume of the device, the actuator and electronics need to be integrated into the eye socket model, so miniaturized programmable electronics are required. A miniaturized CMOS Pico microcontroller, one of the smallest controllers available and offered in surface-mount packaging, is used for the control system.

The eye movement signals are acquired by the sensor array and sent to the signal-processing unit, which processes the raw signals and provides the real eye position signal. This signal is sent to the control unit and is the input to the control loop. Because the artificial eye is mounted on the motor, the artificial eye position can be obtained through the motor's positional encoder; this is the feedback signal of the control loop. The Pico controller is the core of the circuit, and the control program is written and loaded into its memory.

6. Future considerations

This paper reports on the design of a biomedical assistive device that can detect natural eye movement and drive an artificial eye to follow and match it. A laboratory prototype has been designed and constructed. However, the work presented here can only be considered preliminary, since many challenging and possibly more important problems have not yet been touched. A number of problems are proposed as future directions.

Circuit miniaturization is very important in the design of this robotic eye system. Because the whole system (the motor, the artificial eyeball and the microcontroller) must be integrated within the volume of the eye pit, it is critical to find a tiny, powerful and energy-efficient motor. The controller should not only be small, but also able to process the sensor signal and control the motor so that the eyeball moves simultaneously with the real eye. Further directions include searching for a smaller motor, trying out smaller controllers and components, and integrating them into a small-size system.

Energy is a very important issue in the further design of the robotic eye system. The motor that drives the eyeball consumes considerable power, so low-power design techniques have to be incorporated into the system. One approach is to search for a high-capacity, rechargeable, smaller battery; in that case the whole system should be easily removable. Advanced recharging solutions, such as infrared and microwave charging, may also be taken into consideration.

The sensor is always the key to success. Although the second generation of the system can detect the eye movement and control the motor to drive the artificial eye accordingly, the robustness and stability of the system need to be further tested and improved. Further directions for eye movement detection may include using the residual signal of the injured eye socket and other multisensor fusion techniques; multisensor fusion can be used to guard against sensor failure and to recover faulty data.

References

Chung, D., & Merat, F. L. (1996). Neural network based sensor array signal processing. Proceedings of the 1996 IEEE/SICE/RSJ international conference on multisensor fusion and integration for intelligent systems (pp. 757–764).

Davison, H. (1980). Physiology of the eye. Edinburgh: Churchill Livingstone.

Drouin, D., & Drake, A. (1987). A two-dimensional fiber optic eye position sensors. Ninth annual conference of the engineering in medicine and biology society (pp. 1829–1830).

Eadie, A. S. (1995). Improved method of measuring eye movements using limbus reflection. Medical and Biological Engineering and Computing, 33, 107–112.

Kabre, H. (1996). On the active perception of speech by robots. Proceedings of the 1996 IEEE/SICE/RSJ international conference on multisensor fusion and integration for intelligent systems (pp. 765–774).

Kandel, E. R., Schwartz, J. H., & Jessell, T. M. (1995). Essentials of neural science and behavior (pp. 571–583).

Lee, S. (1996). Sensor fusion and planning with perception–action network. Proceedings of the 1996 IEEE/SICE/RSJ international conference on multisensor fusion and integration for intelligent systems (pp. 687–696).

Martin, O., & Clodius, L. (1979). The history of the artificial eye. Annals of Plastic Surgery, 3(2), 168–171.

Massry, G. G., & Holds, J. B. (1995). Coralline hydroxyapatite spheres as secondary orbital implants in anophthalmos. Ophthalmology, 161–166.

Nakano, T., Sugiyama, K., Mizuno, M., & Yamamoto, S. (1996). Blink measurement by image processing and application to detection of driver's drowsiness. Journal of the Institute of Television Engineers of Japan, 50, 1949–1956.

Qiangjuan, C., Jinglin, Y., Hongfei, L., & Lageng, W. (1996). Clinical application of a new mobile integrated orbital implant. Chinese Journal of Ophthalmology, 32(3), 182–184.

Sheilds, A., Shields, C. L., & De Potter, P. (1993). Hydroxyapatite orbital implant after enucleation: experience with 200 cases. Mayo Clinic Proceedings, 68(12), 1191–1195.

Takagi, M., Mohri, K., Katoh, M., & Yoshino, S. (1994). Magnet displacement sensor using magneto inductive elements for sensing eyelid movement. IEEE Transaction Journal on Magnetics in Japan, 9, 80–85.

Tock, D., & Craw, I. (1996). Tracking and measuring drivers' eyes. Image and Vision Computing, 14, 541–547.

van Dam, J. W. M., Krose, B. J. A., & Groen, F. C. A. (1996). Adaptive sensor models. Proceedings of the 1996 IEEE/SICE/RSJ international conference on multisensor fusion and integration for intelligent systems (pp. 705–712).

Wang, A., Zheng, N., Yuan, L., & Fu, X. (1996). Multiplicative inhibitory velocity detector (MIVD) and multi-velocity motion detection neural network model. Proceedings of the 1996 IEEE/SICE/RSJ international conference on multisensor fusion and integration for intelligent systems (pp. 476–483).

Wide, P., & Driankov, D. (1996). A fuzzy approach to multi-sensor data fusion for quality profile. Proceedings of the 1996 IEEE/SICE/RSJ international conference on multisensor fusion and integration for intelligent systems (pp. 215–221).

Young, L. (1975). Methods and designs: Survey of eye movement recording methods. Behavior Research Methods and Instrumentation, 7(5), 397–429.

Zheng, Y.-J., & Bhanu, B. (1996). Adaptive object detection from multisensor data. Proceedings of the 1996 IEEE/SICE/RSJ international conference on multisensor fusion and integration for intelligent systems (pp. 633–640).