International Congress Series 1230 (2001) 248 – 253

Digital microscope with augmented reality for neurosurgery

H. Varvaro a,b, M.C. Juan-Lizandra a,b,*, M. Alcañiz a, C. Monserrat a,b, V. Grau a, J.A. Gil a

a MedICLab (Medical Image Computing Laboratory), Universidad Politécnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain
b DSIC, Universidad Politécnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain

Abstract

Most neurosurgical procedures make use of a neuronavigator that guides the surgeon's tools through a patient anatomy model reconstructed from MRI and/or CT and, if desired, a previously planned modus operandi. It would be preferable for the surgeon not to have to look away from the operative scene for this information. The image guidance given by a neuronavigator can be provided through the microscope as overlays onto the surgical field of view. Existing solutions make use of an operating microscope and project the relevant information into its eyepieces, an approach we call "optics see-through"; it requires a different optics calibration for each microscope used. Today, low-cost Charge-Coupled Device (CCD) cameras with automated zoom lenses can provide the image quality necessary for surgery, personal computers can deliver real-time stereo video with overlay processing, and microdisplay technology has evolved into good integrated solutions. We propose a "video see-through" system that incorporates the neurosurgical guidance images, segmented structures, surgical planning, etc., as overlays to the surgeon's field of view. The main idea is to replace the optical microscope with a digital one composed of two CCD cameras, two microdisplays and their respective optics, all controlled by a personal computer. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Neurosurgery; Augmented reality; Camera calibration

* Corresponding author. Tel.: +34-96-387-9720; fax: +34-96-387-7359. E-mail address: [email protected] (M.C. Juan-Lizandra).

0531-5131/01/$ - see front matter © 2001 Elsevier Science B.V. All rights reserved. PII: S0531-5131(01)00049-8


1. Introduction

The use of neuronavigators has changed the way neurosurgeons operate. These systems display a 3D model of the patient's anatomy, reconstructed from preoperative images such as MRI and/or CT, and, most significantly, they allow the surgery to be planned and guide the surgeon through it. They register the coordinate system of a 3D localiser to image coordinates, so the surgeon can see the position of the tools in use displayed on the reconstructed 3D model of the patient's anatomy. Neurosurgeons agree that it would help to see this information overlaid directly on the surgical field of view, without having to look away from it (Chios et al. [1]). Some neuronavigation systems use the surgical microscope for this purpose and project the information onto the microscope eyepieces (Edwards et al. [2]); this is augmented reality with "optics see-through". An optics calibration is required for each microscope used, and the result is dedicated systems that are expensive, incompatible with each other and less versatile. Today, low-cost Charge-Coupled Device (CCD) colour cameras with motorised optics offer image quality and functionality similar to microscopes, microdisplay technologies have evolved considerably and present good visualisation solutions, and PC systems offer a good performance/cost ratio and are able to produce real-time stereoscopy with image overlay. Together, these components form an augmented reality system with "video see-through". In this article, such an augmented reality system is presented.

2. Methods

Four steps are necessary to achieve augmented reality: real-time video from two CCD cameras on two independent displays, camera calibration, real and virtual image fusion, and display projection. Our system is controlled by an 800 MHz Pentium III with 256 MB of RAM running MS Windows 2000.

2.1. Video capture

The first step is to synchronise the video (external V-sync) and show the result on two independent displays (two separate SVGA signals). A personal computer performs this, controlling the camera optics via RS-232C and processing the video imaging. Two Sony EVI401DR cameras, as shown in Fig. 1, are employed for this purpose (a minimal capture sketch is given below, after the calibration overview).

2.2. Camera calibration

The main problem for overlaying virtual images on real imaging is camera calibration, that is, determining the relationship between the world coordinate frame, the 3D reconstruction of the relevant object and the image plane of the camera viewing the object. A static and then a dynamic camera calibration is performed independently for each camera with Tsai's [3] method, using a coplanar calibration pattern as shown in Fig. 2.
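To make the dual-capture step of Section 2.1 concrete, the following is a minimal sketch, not the system's actual implementation: it assumes two generic video capture devices at hypothetical indices 0 and 1, and it does not model the external V-sync, the two separate SVGA outputs or the RS-232C optics control used in the prototype.

```python
# Illustrative sketch only: grab frames from two cameras and tile them
# side by side for inspection. Device indices are hypothetical.
import cv2
import numpy as np

left = cv2.VideoCapture(0)
right = cv2.VideoCapture(1)

while True:
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):
        break  # a camera stopped delivering frames
    # In the real system each stream is routed to its own display;
    # here both are simply stacked horizontally into one window.
    pair = np.hstack((frame_l, frame_r))
    cv2.imshow("stereo pair", pair)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

left.release()
right.release()
cv2.destroyAllWindows()
```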


Fig. 1. Sony EVI401DR CCD camera with a pencil sharpener to show its small size.

The two cameras are registered to each other using a Polaris active tracker. Eleven camera parameters are calibrated: six extrinsic parameters, i.e. the rigid-body transformation, and five intrinsic parameters, i.e. the focal length, the image centre coordinates, the (purely radial) lens distortion and the uncertainty scale factor for x. The cameras are also tracked, and error measures are taken during the process.
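As an illustration of this calibration step, the sketch below uses OpenCV's pinhole calibration on views of a coplanar pattern. This is an assumption-laden stand-in, not Tsai's method itself: it estimates a comparable extrinsic/intrinsic split but parameterises lens distortion differently and has no explicit scale-factor uncertainty for x. The function name and inputs are illustrative only.

```python
# Minimal planar-pattern calibration sketch (OpenCV pinhole model as a
# stand-in for Tsai's method; inputs and names are hypothetical).
import cv2
import numpy as np

def calibrate_from_plane(world_xy_mm, image_uv_px, image_size):
    """world_xy_mm: (N, 2) pattern point coordinates on the coplanar target (Z = 0).
    image_uv_px:  list of (N, 2) detected 2D points, one array per view.
    image_size:   (width, height) of the camera image in pixels."""
    # Lift the planar pattern into 3D by appending Z = 0.
    obj = np.hstack([world_xy_mm, np.zeros((len(world_xy_mm), 1))]).astype(np.float32)
    object_points = [obj for _ in image_uv_px]
    image_points = [p.astype(np.float32) for p in image_uv_px]

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    # K holds the focal lengths and image centre; rvecs/tvecs give the
    # rigid-body (extrinsic) transformation for each view of the pattern.
    return rms, K, dist, rvecs, tvecs
```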

Fig. 2. Calibration pattern.


2.3. Registration

2D calibration points are obtained from the video image with subpixel accuracy by segmenting the crosses of the calibration pattern and applying a cross morphology operator to the probable cross windows, as shown in Fig. 3 (a minimal detection sketch is given below, just before Fig. 3). The corresponding 3D coordinate points are given by a Polaris Optical Tracking System from Northern Digital with 0.35-mm RMS accuracy.

2.4. Image overlay

The third step is the fusion of the real and virtual images. The virtual image is taken from a 3D model reconstructed from CT and/or MRI data with a parallel segmentation and rendering software library (Blanquer et al. [4]). The corresponding 2D image is given by the neuronavigator (Roldan et al. [5]): since the 3D position of the cameras is known to the neuronavigator, it knows which perspective projection of the virtual model it must render for each camera. Once the virtual views are given, they must be scaled so that the sizes of the real and virtual images match; the resolution must be at least SVGA (800 × 600) with 24-bit colour depth.

2.5. Display

The display is composed of two monitors, which show the resulting video with the overlaid images. In the near future, the resulting images will be displayed on two microdisplays.
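Returning to the cross extraction of Section 2.3 (illustrated in Fig. 3), the following is a rough sketch under stated assumptions: dark crosses on a light target, an Otsu threshold, a cross-shaped morphological opening and a corner-style subpixel refinement. Kernel and window sizes are guesses, and this is only an approximation of the pipeline described above, not the original operator.

```python
# Sketch of extracting calibration-cross centres with subpixel refinement.
# Thresholds, kernel size and the use of cornerSubPix are assumptions.
import cv2
import numpy as np

def detect_cross_centres(gray):
    # Dark crosses on a light background: invert, then binarise (Otsu).
    _, binary = cv2.threshold(255 - gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Keep cross-shaped blobs using a cross structuring element.
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (9, 9))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Connected-component centroids give pixel-accurate candidates...
    n, _, _, centroids = cv2.connectedComponentsWithStats(opened)
    pts = centroids[1:].astype(np.float32).reshape(-1, 1, 2)  # drop background
    if len(pts) == 0:
        return pts.reshape(-1, 2)
    # ...which are then refined to subpixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    cv2.cornerSubPix(gray, pts, (5, 5), (-1, -1), criteria)
    return pts.reshape(-1, 2)
```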

Fig. 3. 2D calibration point coordinate registration.


The solution will use two "viewfinders" like those employed in camcorders: two Near-To-Eye (NTE) microdisplays, two Application-Specific Integrated Circuits (ASICs) to control the displayed images, and their optics (30° diagonal field of view). Housed together, the two act as binoculars.
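To make the fusion step of Section 2.4 concrete, the sketch below projects the reconstructed model through one calibrated camera and blends it over the live frame. It is illustrative only: the model is reduced to a point cloud and a constant-alpha blend is assumed, whereas the actual system renders full segmented surfaces supplied by the neuronavigator.

```python
# Sketch of real/virtual fusion for one camera (assumptions: point-cloud
# model, constant-alpha blending; function and parameter names are illustrative).
import cv2
import numpy as np

def overlay_model(frame_bgr, model_pts_3d, rvec, tvec, K, dist, alpha=0.4):
    """model_pts_3d: (N, 3) model vertices in the tracker/world frame.
    rvec, tvec, K, dist: extrinsics, intrinsics and distortion from calibration."""
    uv, _ = cv2.projectPoints(model_pts_3d.astype(np.float32), rvec, tvec, K, dist)
    layer = np.zeros_like(frame_bgr)
    h, w = frame_bgr.shape[:2]
    for u, v in uv.reshape(-1, 2):
        if 0 <= u < w and 0 <= v < h:
            cv2.circle(layer, (int(round(u)), int(round(v))), 2, (0, 255, 0), -1)
    # Constant-alpha fusion of the virtual layer and the real image.
    return cv2.addWeighted(frame_bgr, 1.0, layer, alpha, 0.0)
```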

3. Results

In order to evaluate the registration error introduced by the different elements and processes involved in the system, each one is studied in detail. Three kinds of error sources must be taken into account: (1) the 2D and 3D registration errors made by the neuronavigator when registering the calibration points, cameras and calibration tools and when extracting the 2D coordinate points; (2) the camera calibration error, evaluated by computing statistics of the magnitude of the error, in distorted image plane coordinates, between the measured location of a feature point in the image and the projection of the corresponding 3D feature point through the calibrated model (the same calculation is also done in undistorted image plane coordinates for the same data set); and (3) the object space error, measured as the distance of closest approach between the point in object space and the line of sight obtained by back-projecting the measured 2D coordinates through the camera model. The results show good accuracy: errors of 0.45 mm and 0.3 pixel are obtained for the 3D and 2D registration, respectively, the RMS error of the calibration process is below 2 pixels, and the object space error is below 2 mm, depending on the number of calibration points registered.
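The two camera-related measures can be sketched as follows: the image-plane error is the RMS distance between measured and reprojected feature points, and the object-space error is the closest distance between a 3D point and the viewing ray obtained by back-projecting its measured 2D location. This is a simplified illustration assuming undistorted pixel coordinates; names and inputs are hypothetical.

```python
# Sketch of the reported error measures (assumes undistorted 2D coordinates
# for the object-space computation; function names are illustrative).
import cv2
import numpy as np

def image_plane_rms(pts_3d, pts_2d, rvec, tvec, K, dist):
    proj, _ = cv2.projectPoints(pts_3d.astype(np.float32), rvec, tvec, K, dist)
    residuals = proj.reshape(-1, 2) - pts_2d
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))

def object_space_errors(pts_3d, pts_2d, rvec, tvec, K):
    R, _ = cv2.Rodrigues(rvec)
    # Express the measured 3D points in the camera frame.
    cam_pts = (R @ pts_3d.T + tvec.reshape(3, 1)).T
    # Back-project the measured pixels into unit-length viewing rays.
    ones = np.ones((len(pts_2d), 1))
    rays = (np.linalg.inv(K) @ np.hstack([pts_2d, ones]).T).T
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    # Distance from each 3D point to its line of sight through the camera centre.
    along = np.sum(cam_pts * rays, axis=1, keepdims=True)
    return np.linalg.norm(cam_pts - along * rays, axis=1)
```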

4. Conclusion

We have developed an augmented reality system. The prototype has two parallel CCD cameras housed together with automated zoom lenses. The resulting images are shown on two independent monitors, and a PC performs the overall control of the system, i.e. video capture, image processing and display. Several neurosurgeons have seen our system and reported a convincing illusion of seeing structures over or inside the patient's anatomy; the system is transparent to the user. Registration, calibration and object space errors have been calculated. The results show good accuracy: errors of 0.45 mm and 0.3 pixel are obtained for the 3D and 2D registration, respectively, the RMS error of the calibration process is below 2 pixels, and the object space error is below 2 mm, depending on the number of calibration points registered. These errors are within acceptable limits, so the system will be tested by surgeons in the near future. We are now incorporating two Near-To-Eye (NTE) microdisplays, like those used in head-mounted displays, mounted behind the cameras. This will make it possible, after validation, to use the system in daily surgical work. The whole system is less expensive than current optical microscopes with virtual image information projected into their eyepieces, and it is easier to maintain.

In conclusion, this solution can be used in a wide variety of medical applications with little software and no hardware modification.


In addition, more than one display system can be attached, so that assistants, other surgeons and/or the public can follow or simply watch the procedure.

References

[1] P. Chios, A.C. Tan, A.D. Linney, G.H. Alusi, A. Wright, G.J. Woodgate, D. Ezra, The potential use of an autostereoscopic 3D display in microsurgery, MICCAI (1999) 998-1009.
[2] P.J. Edwards, A.P. King, C.R. Maurer Jr., D.A. de Cunha, D.J. Hawkes, D.L.G. Hill, R.P. Gaston, M.R. Fenlon, S. Chandra, A.J. Strong, C.L. Chandler, A. Richards, M.E. Gleeson, Design and evaluation of a system for microscope-assisted guided interventions (MAGI), MICCAI (1999) 842-851.
[3] R.Y. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE J. Rob. Autom. RA-3 (4) (1987) 323-344 (August).
[4] I. Blanquer, V. Hernández, F.J. Ramírez, A. Vidal, M. Alcañiz, C. Montserrat, V. Grau, M.C. Juan-Lizandra, Parallel segmentation and rendering using clusters of PC's, MMVR'2000 (2000) 103-104.
[5] P. Roldan, J.L. Barcia-Salorio, F. Talamantes, M. Alcañiz, V. Grau, C. Montserrat, M.C. Juan-Lizandra, Interactive image-guided surgery system with high-performance computing capabilities on low-cost workstations: a prototype, Stereotact. Funct. Neurosurg. 72 (1999) 112-116.