Robotic and Image-Guided Knee Arthroscopy


Liao Wu1,2, Anjali Jaiprakash1,2, Ajay K. Pandey1,2, Davide Fontanarosa1, Yaqub Jonmohamadi1, Maria Antico1, Mario Strydom1, Andrew Razjigaev1,2, Fumio Sasazawa1, Jonathan Roberts1,2 and Ross Crawford1,2,3

1 Queensland University of Technology, Brisbane, QLD, Australia
2 Australian Centre for Robotic Vision, Brisbane, QLD, Australia
3 Prince Charles Hospital, Brisbane, QLD, Australia

ABSTRACT

Knee arthroscopy is a well-established minimally invasive procedure for the diagnosis and treatment of knee joint disorders and injuries, with more than 4 million cases performed each year at a cost to the global healthcare system of over US$15 billion annually. The complexities associated with arthroscopic procedures dictate relatively long learning curves for surgeons and create the potential not only for unintended damage during surgery but also for postsurgery complications. Advances in robotics and imaging technologies can reduce these shortcomings, alleviating some of the health-access and workforce stressors on the health system. In this chapter, we discuss several key platform technologies that form a complete system to assist in knee arthroscopy. The system consists of four components: (1) steerable robotic tools, (2) autonomous leg manipulators, (3) novel stereo cameras for intraknee perception, and (4) three-dimensional/four-dimensional ultrasound imaging for tissue and tool tracking. As a platform technology, the system is applicable to other minimally invasive surgeries, such as hip arthroscopy and intraabdominal surgery, and to any surgical site that can be accessed with a continuum robot and imaged with stereo vision, ultrasound, or a combination of both.

29.1 Introduction

Knee arthroscopy is the most common orthopedic procedure in the world, with more than 4 million cases per year costing the global healthcare system over US$15 billion annually. It is a type of minimally invasive surgery (MIS). More than 40 years after its clinical introduction, knee arthroscopy is a well-established diagnostic and therapeutic procedure in which a camera, that is, an arthroscope, and a surgical tool are introduced into the knee joint through small incisions in the skin. The arthroscope and instruments are placed in the "soft spot" on either side of the patellar tendon just below the patella. Each instrument is pushed into the knee to gain access, and cartilage inside the knee can be damaged at this stage if care is not taken. Once inside the knee, the arthroscope generates real-time images that are displayed on a screen. The surgeon then inspects the entire knee, looking for any unsuspected abnormalities before addressing the pathology detected by preoperative imaging. The inspection phase can involve complex manipulation of the limb to allow the surgeon access to different areas of the knee [1]. In patients with large muscular legs, this phase of the operation can be particularly difficult.

Despite several clinical advantages, the arthroscopic technique faces long-standing challenges: (1) physically demanding ergonomics during patient manipulation, (2) lack of depth perception, (3) limited field of view (FOV), and (4) counterintuitive hand-eye coordination between scope and surgical instruments. These challenges make knee arthroscopy a complex procedure. Like any new skill, there is a learning process to reach full competency; a recent publication from Oxford demonstrated that it was necessary to complete 170 cases to reach baseline competency [2]. Even experienced surgeons feel that harm can be caused in many procedures. In a recent paper examining surgeons' attitudes to knee arthroscopy, it was noted that unintended damage to the knee was common [3]. In this study, half the surgeons said unintentional damage to articular cartilage, the tissue that covers the ends of the bones that make up the joint, occurred in at least one in 10 procedures. About a third (34.4%) felt that the damage rate was at least one in five procedures, and 7.5% of the surgeons said such damage occurred in every procedure carried out [3].

The ergonomics of arthroscopy can make the procedure physically demanding. During a knee arthroscopy the surgeon needs to control the leg, the camera, and a surgical tool, all while watching a remote screen (Fig. 29.1). In the same survey, 59% of surgeons reported that they found the procedure physically challenging, and more than a fifth (22.6%) said they had experienced physical pain after performing a knee arthroscopy [3].

Arthroscopic tools, and the arthroscope itself, are usually straight and metallic, and the arthroscope has sharp edges. Moving a straight instrument inside a curved knee makes it difficult to navigate the environment without striking cartilage. Being able to use curved instruments with robotic assistance would be a significant advantage.

FIGURE 29.1 A typical scenario in a knee arthroscopy. Here the surgeon is reaching backward for a tool while looking sideways at a screen, holding the arthroscope, and controlling the leg with his body. This illustrates some of the complexity of the procedure that can be reduced with improved technology.


FIGURE 29.2 Proposed autonomous robotic knee arthroscopy system consisting of steerable robotic tools, autonomous leg manipulators, miniature stereo cameras, and 3D/4D ultrasound imaging systems. 3D, Three-dimensional; 4D, four-dimensional.

Arthroscopic cameras are angled at 30 or 70 degrees from the line of sight. Adjusting to a 30-degree angle is relatively straightforward, but many surgeons find it difficult to use the 70-degree camera. A robot-assisted system would not have this limitation and could in fact view at any angle.

Robotics started to focus on solving medical challenges in the 1940s but only began to grow out of industrial systems in the early 1980s [4]. Today medical robots are complex systems customized for specific medical procedures, using multiple sensors to measure, track, align, and understand patient and environmental parameters. Their role is to improve the safety, success, and consistency of unusually involved surgeries [5]. Over the past few decades, robots have grown in precision and complexity for a wide variety of medical applications, including orthopedic surgery [6]. Surgical support through robotics is developing fast [7] because of the increased demand for noninvasive surgery that stretches surgeon capabilities to the limit. It is evident that research into new techniques and technologies benefits the medical community as a whole [8]. Robot-assisted surgery has significant advantages for both novice and skilled surgeons in delivering a more precise operation. Improvements include, but are not limited to, reduced unintended trauma and shorter recovery times for patients [9].

In this chapter, we discuss four novel technologies that could assist surgeons in performing arthroscopy and improve outcomes: steerable robotic tools, autonomous leg manipulators, miniature stereo cameras, and three-dimensional/four-dimensional (3D/4D) ultrasound (US) imaging systems. We describe how each individual technology may assist surgeons in performing arthroscopy with enlarged accessibility, improved ergonomics, enhanced guidance, and better precision in the near future. We also discuss the possibility of a fully autonomous robotic and image-guided system in the far future (Fig. 29.2) that could be supervised by surgeons, drawing on their clinical expertise while minimizing the effects of human limitations such as hand tremor, fatigue, and limited precision. Though the chapter focuses on knee arthroscopy, we consider many of these concepts platform technologies that could eventually assist in other forms of arthroscopy, in laparoscopy (keyhole abdominal surgery), and ultimately in many procedures where access can be achieved by keyhole techniques.

29.2 Steerable robotic tools for arthroscopy

29.2.1 Why steerable robotic tools are necessary for arthroscopy

One of the main reasons behind the unintended cartilage damage that occurs commonly in arthroscopy is the rigidity of the tools, including arthroscopes and other instruments such as graspers and punches, that are used in the current procedure. While the rigidity of the tools provides good force transmission that is beneficial to some operations, it also limits accessibility and dexterity given the confined space of the joint and the keyhole surgery setting. In the current approach, surgeons usually make two to three portals for the tools to cover the surgical areas, and change the portals for each tool frequently during the procedure. As a consequence, the chance of damage to the cartilage is significantly increased. In addition, arthroscopes are often beveled at the tip to enlarge the range of view, which leads to a high probability of gouging the cartilage. Moreover, the lack of dexterity makes the manipulation of the tools through small portals and under limited vision extremely challenging, leading to a long learning curve and a high occurrence of unintended damage to patients [3].

Steerable robotic tools are excellent candidates to replace the current rigid designs. By making the tools steerable, their accessibility and dexterity can be tremendously improved. For example, by adding a rotational degree of freedom (DoF) to the tip of the tool, the surgeon is able to change the approach direction of the tool to the surgical target without moving the main shaft. As a result, the area that can be accessed by the tool through a single port is enlarged, and thus the need to change ports is decreased. The addition of DoFs also makes it possible to bypass some obstacles, thus holding the potential to extend the application of this keyhole procedure to a broader range of conditions.

Robotics, or more broadly mechatronics, is another technology that is revolutionizing the instruments used in surgery. Compared to purely mechanical, manually manipulated instruments, robotic instruments have better maneuverability, as more DoFs can be actuated simultaneously than could be directly handled by human hands. They are also more intelligent, since more sensors can be integrated and processed during the operation to monitor the process and assist in decision-making. As a consequence, robotic tools are increasingly adopted in surgery. This also applies to arthroscopy, where the tools need greater maneuverability and better sensing ability to increase accessibility, safety, and ease of use.

In this section, we introduce some prototypes of steerable robotic tools for arthroscopy. Some of the prototypes were designed for knees and some for other joints such as hips, but they share many characteristics, and we discuss their mechanical design, user interface, sensing, and evaluation together in the following.

29.2.2 Mechanical design

Mechanical design is the most important element in endowing arthroscopic tools with steerability. Three challenges are generally faced in the mechanical design of steerable arthroscopic tools:

1. Size. Restricted by the keyhole surgery setting and the confined space inside the knee joint, arthroscopic tools have to be made very small. This is especially challenging when multiple DoFs are to be added to the tools to increase dexterity. Current arthroscopic tools used in surgery usually have a shaft with a diameter of less than 5 mm, and steerable arthroscopic tools should be designed with comparable sizes.

2. Dexterity. In order to make the tools steerable and dexterous, complex structures and mechanisms need to be integrated into the mechanical design. There is generally a tradeoff between the dexterity of the tool and the compactness that can be achieved.

3. Force transmission ability. In some arthroscopic operations, the tools are used to exert forces on the hard/soft tissues inside the knee, and for these a good force transmission ability is necessary. This is, however, very challenging when the tools are made steerable; a compromise design must be adopted where both force transmission ability and steerability are desired.

To address these challenges, researchers have proposed different mechanisms, some of which are depicted in Fig. 29.3. Traditional serial-link robots have three structural components: links, joints, and motors. Links are the main body of a robot and are connected by joints; joints are actuatable mechanisms that can move the links they connect; motors are actuators that drive the motion of the joints. Steerable tools for arthroscopy usually do not have the same structures, but we can map their components to those of serial-link robots by mimicking their functions. In this way, the mechanical structures of the prototypes shown in Fig. 29.3 can be summarized as in Table 29.1.

The characterization of the prototypes shown in Fig. 29.3 is also summarized in Table 29.1. As discussed previously, size, dexterity, and force transmission ability are three important factors for steerable arthroscopic tools; in Table 29.1 these factors are embodied by the diameter of the tool, the number of DoFs, and the applied force, respectively. Generally, most of the prototypes can be made smaller than 6 mm in diameter. The prototype in Ref. [10] was slightly larger than the others, with a diameter of 8 mm; however, according to the authors, it could be reduced to 4 mm, which is plausible considering the simplicity of the design.


FIGURE 29.3 Mechanisms for constructing steerable arthroscopic tools: (A) SMA-based [10]; (B) hinged-joint-based [11]; (C) lobed-feature-based [12]; (D) notched-tube-based [13]; (E) tube-and-slider-based [14]; and (F) spine-and-hinge-based [15]. SMA, Shape-memory alloy.

TABLE 29.1 Mechanical structure and characterization of the prototypes shown in Fig. 29.3.

| Prototype | Links | Joints | Motors | Diameter (mm) | DoF | Applied force |
|---|---|---|---|---|---|---|
| A [10] | Plastic disks | Special arrangement of SMA wires | SMA wires | 8 | 1 | 1 N |
| B [11] | Disks and spines | Hinges | Cables | 4.2 | 1 | At least 1 N |
| C [12] | Disks and spines | Lobed features | Tendons | 4.2 | 1 | At least 3 N |
| D [13] | Two nested tubes | Asymmetric notches on the tubes | Cables | 5.99 | 1 | At least 1 N |
| E [14] | A distal link and two proximal tubes | A hinge composed of two sliders | Rotation of outer tube | 5 | 1 | Axial 100 N; lateral 20 N |
| F [15] | Disks and a central spine | Space between disks and deformation of spine | Cables | 3.6 | 3 | Unknown |

Links, joints, and motors describe the mechanical structure; diameter, DoF, and applied force characterize each prototype. DoF, Degree of freedom; SMA, shape-memory alloy.

In terms of dexterity, most designs chose to endow the device with only one bending DoF. Since the device is handheld, it naturally has four additional DoFs (three rotations and one translation) provided by the motion of the hand; in view of this, one DoF at the tip is sufficient for some operations inside the joint. The prototype in Ref. [15] added another bending DoF to the proximal part of the tool and a pivoting DoF to the distal tip. The additional DoFs enable the device to be bent in a different plane without changing the orientation of the handle, and to adjust the approach direction of the tip without affecting the bending segment.

Most cable-driven mechanisms, which form the majority of the steerable mechanisms, apply forces of less than 3 N. This is sufficient for examination purposes, when the mechanism is used to deliver a camera to the surgical site for inspection. However, for other operations such as cutting, a greater force transmission capability is desired. Unlike the other prototypes, the design in Ref. [14] does not rely on the pulling of cables but rather uses a special mechanism to transform the rotation of a tube into the translation of a hinge at the tip; as a consequence, the force it can exert increases significantly. The tradeoff, however, is a lack of flexibility that may cause damage to the cartilage.

29.2.3 Human-robot interaction

For handheld devices, the design of the interface between the surgeon and the device is critical for the final adoption of the device. Due to the additional DoFs, surgeons have to feed in more inputs when manipulating steerable devices than with conventional ones. A good interface should let surgeons maneuver the device intuitively and naturally, with little attention spared for the control of the device itself. Three types of interfaces have been proposed for steerable arthroscopic tools: purely mechanical, mechatronic, and a combination of both. In Ref. [14], the steerable arthroscopic cutter is controlled by a purely mechanical interface composed of a wheel for steering the tip and a handle for actuating the cutting. The advantages of purely mechanical interfaces include simplicity of implementation and ease of sterilization; the disadvantages include difficulty in controlling multiple DoFs simultaneously and in integrating additional sensing. Therefore more prototypes adopt mechatronic interfaces, as in Refs. [10,13,15]; usually, joysticks and buttons relay the control signals from the surgeon to onboard processors that actuate the associated motors. Some designs [11,12] provide both options, switchable by the surgeon, so that the best interface can be selected according to the task being performed. The redundancy is also beneficial for coping with unexpected failure of one of the interfaces.

29.2.4 Modeling

Modeling of steerable arthroscopic tools usually includes kinematics modeling and mechanics modeling. The former develops the mapping between the position of the actuators and the position of the tip (position kinematics), or between the motion of the actuators and the motion of the tip (velocity kinematics); the latter investigates how the force applied at the tip relates to the force generated by the actuators.

Kinematics modeling is useful for mechatronic control. If the position kinematics is derived, the surgeon can directly control the absolute position of the tip; if the velocity kinematics is obtained, the surgeon can control incremental motions of the tip from any state. Examples of kinematics modeling can be found in Refs. [10,13,15]. Generally, the position kinematics can be derived from the geometry of the mechanism, and the velocity kinematics can be obtained by differentiating the position kinematics. However, due to the involvement of elastic elements, the accuracy of the modeling is usually inferior to that of rigid robotic counterparts.

Mechanics modeling is helpful for safety monitoring. As arthroscopic tools usually make contact with the tissues inside the joint when performing operations, it is beneficial to be aware of how much force is being exerted by the tip. This force is generally difficult to measure directly; an indirect approach is to measure the force experienced by the actuators (such as the tension in the cables) and then estimate the force at the tip through the mechanics model. Some preliminary work has been done in Refs. [10,14], but how to efficiently and accurately monitor the force output in general remains an open research question.
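To make this route from geometry to position and velocity kinematics concrete, the sketch below models a single-segment cable-driven tip under the common constant-curvature approximation. The geometry (segment length, cable offset) and the helper names are illustrative assumptions, not parameters of the cited prototypes.

```python
import numpy as np

# Constant-curvature kinematics for a single-segment, cable-driven bending
# tip (a common simplification; real devices deviate due to elasticity).
# Assumed geometry: segment length L, cable offset r from the neutral axis.

def tip_position(delta_l, L=0.02, r=0.0015):
    """Map cable displacement delta_l (m) to tip position in the bending plane.

    Under constant curvature, pulling a cable offset by r shortens it by
    delta_l = r * theta, where theta is the total bending angle.
    """
    theta = delta_l / r                      # position kinematics (actuator -> angle)
    if abs(theta) < 1e-9:                    # straight configuration
        return np.array([0.0, L])
    rho = L / theta                          # radius of curvature
    x = rho * (1 - np.cos(theta))            # in-plane deflection
    y = rho * np.sin(theta)                  # distance along the insertion axis
    return np.array([x, y])

def tip_jacobian(delta_l, eps=1e-6, **kw):
    """Velocity kinematics by numerical differentiation of the position map."""
    return (tip_position(delta_l + eps, **kw) - tip_position(delta_l - eps, **kw)) / (2 * eps)

# Example: 1 mm of cable pull on a 20 mm segment with a 1.5 mm cable offset.
print(tip_position(0.001))   # tip (x, y) in meters
print(tip_jacobian(0.001))   # d(tip)/d(delta_l)
```

Differentiating the geometric map numerically, as in tip_jacobian, mirrors the derivation of velocity kinematics from position kinematics described above; the elastic effects mentioned in the text would add error terms that this rigid model ignores.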

29.2.5 Sensing

One of the advantages of robotic arthroscopic tools over traditional ones is the capability of integrating multimodality sensors. Six types of sensors can be used to guide the operation of robotic arthroscopic tools.

1. Encoder. The motion of the actuators, such as the cables, can be recorded by encoders fitted to the motors used for actuation. Since many motors have built-in encoders, these are the most convenient sensors to integrate. Knowing the motion of the actuators is a prerequisite for kinematics modeling and control.


2. Strain gauge. Strain gauges measure the force applied to them based on the deformation it causes. By coupling them with the actuators, such as the cables, it is possible to sense the forces the actuators exert; with the mechanics model, the force output at the tip of the device can then be estimated (a minimal sketch of this estimate is given after this list). These sensors can be used for safety monitoring to prevent overload.

3. Electromagnetic (EM) tracking system. An EM tracking system consists of two components: a field generator that generates a modulated EM field, and a coil whose current reacts to the EM field it is placed in. By attaching EM coil sensors to the tip of the device and placing them within the field of the generator, the position and orientation of the sensors can be accurately measured with respect to the coordinate frame of the generator. The EM tracking system provides an effective method to sense the spatial position, as well as the shape, of the device when it is inserted into the human body [16]. The disadvantage, however, is its incompatibility with ferrous materials.

4. Optical tracking system. An optical tracking system also comprises two parts: a camera system and a set of markers. The markers can be rigidly bound to the devices and tracked by the camera system in real time. Due to the size of the markers and the occlusion problem, these markers can usually only be used outside the human body, such as on the handle of the device. Although the optical tracking system cannot directly give information about the tip of a steerable device, it is useful for providing the position and orientation of the base of the device, which can be used to assist in mechatronic control, estimation of the tip position, and so on.

5. Inertial measurement unit (IMU). While the optical tracking system measures the absolute position and orientation of the device, the IMU gives the relative rotation and translation of the device with respect to an initial state by means of integration. When the initial state is calibrated, the absolute information can also be recovered. Compared with the optical tracking system, the IMU is much cheaper and is not hampered by the occlusion problem; however, its accuracy is not competitive and it often suffers from drift.

6. Camera. For arthroscopic operations, the arthroscope can not only provide direct visual feedback to the surgeon but also serve as a sensor for the control of the arthroscope and other tools. Visual servoing techniques can be employed to automatically move the arthroscope or the tools to a desired position, or to assist in the surgeon's operation, as demonstrated in Ref. [17]. Advanced developments in camera fabrication combined with computer vision techniques are enabling increasingly sophisticated sensing within the joints, as discussed later.
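As a concrete illustration of item 2, the sketch below estimates the tip force from a strain-gauge tension reading via the standard virtual-work (Jacobian-transpose) statics relation. Neglecting friction and treating the Jacobian as known are loud simplifying assumptions, and the numbers are illustrative only.

```python
import numpy as np

# Hedged sketch: tip-force estimation from a measured cable tension via the
# virtual-work relation tau = J^T f. Friction and elastic losses, which are
# significant in real cable-driven tools, are ignored here.
def estimate_tip_force(cable_tension_N, J):
    """J is d(tip position)/d(cable displacement), shape (2, 1), taken from
    the kinematics model. One cable underdetermines the planar force f, so
    the pseudoinverse returns the minimum-norm solution."""
    J = np.asarray(J, dtype=float).reshape(2, 1)
    return np.linalg.pinv(J.T) @ np.array([cable_tension_N])

# Example with an illustrative Jacobian value from a kinematics model.
print(estimate_tip_force(2.0, [[0.64], [0.46]]))  # planar force estimate (N)
```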

29.2.6 Evaluation

Evaluation of robotic steerable arthroscopes can be performed in two ways. The first, the quantitative way, is to evaluate the individual performance of the device through separate experiments; examples include tests of the kinematics modeling [10,11,13,15] and validation of the force output [11-14]. The second, the qualitative way, is to validate the designs via phantom, cadaver, or animal experiments. These experiments provide closer-to-reality conditions for the devices and can usually expose problems that remain latent in lab experiments. In Ref. [12], a cadaver study was carried out to validate the developed robotic steerable arthroscope, and it was found that "the quality of the onboard camera and light source were significantly inferior to the rigid endoscope and standard arthroscopes." Feedback from the surgeons can also be requested with these experiments. In Ref. [14], questionnaires were given to the surgeons who participated in the cadaver experiments, and it was confirmed that the steerability increased the reachability of the tools compared with their rigid counterparts in real surgery.

Fig. 29.4 shows a cadaver experiment conducted by the Medical and Healthcare Robotics group of the Australian Centre for Robotic Vision, Queensland University of Technology. In this experiment, a steerable arthroscope integrated with a tiny camera and a lead was inserted into a cadaver knee through a standard trocar.

FIGURE 29.4 Cadaver experiment to evaluate the steerable arthroscope compared with the standard arthroscope.


At the same time, a conventional straight rigid arthroscope was inserted into the knee from another portal. It can be seen that the quality of the image from the camera on the steerable arthroscope is comparable to that from the conventional arthroscope. Without sacrificing imaging quality, the robotic steerable arthroscope holds the potential to make arthroscopy easier and safer by incorporating multimodality sensing and intelligent control of the additional DoFs.

29.3 Leg manipulators for knee arthroscopy

29.3.1 Leg manipulation systems

The process of knee arthroscopy can be decomposed into three main stages: (1) inserting the arthroscope, (2) navigating to the affected location, and (3) removing damaged cartilage. To enable navigation of the arthroscope, the patient's leg is manipulated by the surgeon (Fig. 29.5) to create a gap (the instrument gap) [18] at a specific point in the knee joint. Automating leg movement has significant advantages for both novice and skilled surgeons in reducing their workload.

The knee joint's complexity and anatomic structure cause arthroscopic surgery to depend largely on the surgeon's skill level. The joint has six DoFs, allowing it to move in various directions; Zavatsky used the "Oxford rig" to show that the ankle and hip systems combine with knee movement to enable this six-DoF motion [19]. To provide access for surgical instruments during an arthroscopy, surgeons physically manipulate the patient's knee by bending, lifting, twisting, and rotating the leg. Moustris et al. [20] suggest learning from expert surgeon actions, where the surgeon's insight into how to manipulate the leg during surgery informs future robotic arthroscopic procedures. Tarwala and Dorr [21] note that current state-of-the-art robotic orthopedic technologies, such as the MAKO RIO surgical arm, are limited to robotic partial knee replacement, with the potential to be used for total hip, knee, and tunnel placement during anterior cruciate ligament (ACL) replacement surgery. However, these systems still rely on the surgeon to manually manipulate the patient's leg during the procedure.

To support a level of automation in knee surgery, a range of manual devices are found in operating theaters today, such as the DeMayo, Stryker, and SPIDER2 [7] leg manipulators, where the surgeon remains in control by manually moving the limb with the device. The main limitation of these devices is that each position change requires the surgeon to stop the surgery and move the leg manually, as automation is not incorporated. Manipulators such as the SPIDER2 are also used for surgery on other limbs (such as the shoulder), while the Stryker leg manipulator is mainly used for partial knee replacement. For knee arthroscopy, most surgeons today opt to maneuver the patient's leg themselves in the traditional manner, due to the low benefit and the disruptive nature of how these systems operate.

FIGURE 29.5 Leg manipulation during an arthroscopy.

29.3.2 Knee gap detection for leg manipulation

Although progress has been made in autonomous surgical maneuvers, optical coherence tomography guidance, and motion compensation, which are revolutionizing surgical procedures in robotic laparoscopic surgery [22], research has overlooked these technological advances in the context of knee arthroscopy [23]. There are currently no robotic or automated technologies for the meniscus or cartilage surgery performed in knee arthroscopy. The reason for this omission is the confined space within the knee joint and the intricate maneuvers of the leg that are required to access the joint and perform the surgery [24]. Other causes include the low level of standardization of routines, the limitations of surgical tools, inadequate procedures, and postsurgery issues such as iatrogenic vascular lesions. This highlights the significant complexity of developing robotic solutions for arthroscopic knee surgery. However, it also presents an opportunity for the integration and adoption of existing and new technologies to deliver the vast benefits of robotic surgery to knee arthroscopy patients, as detailed by Bartoli et al. for laparoscopic surgery [9].

To perform precision robotic surgery, it is necessary as a first step to automate today's leg manipulation systems so that they move the patient's leg and knee safely and in a controlled manner. As an initial step toward the development of robotic arthroscopy, it is essential to detect the instrument gap [18] for feedback to a robotic system. Strydom et al. reviewed, tested, and analyzed segmentation algorithms for their suitability to detect knee joint features such as the instrument gap. Arthroscopy videos were recorded during cadaver experiments, and these sequences were used to create 10 × 100 image sets, as detailed in Fig. 29.6, against which the segmentation algorithms were tested. Three segmentation algorithms were examined and implemented to test their suitability for segmenting the instrument gap. It was found that the Chan-Vese level set active contour algorithm is easy to initialize, has a high average accuracy, and is robust across all image sets; with its shape capability, the level set active contour could be a strong option for segmenting the instrument gap if its performance can be optimized. The watershed algorithm performed only sporadically well across the image sets and needs to be tuned for each set to work well; it is not suitable for segmenting the instrument gap. The Otsu adaptive thresholding algorithm performed quickly and accurately across the image range, as seen in Fig. 29.7 for two of the data sets [18]. Fig. 29.7D and H shows the difference images, color coded to highlight the true positives (green), true negatives (gray), false positives (red), and false negatives (blue) from the segmentation results. Overall, the Otsu algorithm outperformed the watershed and level set algorithms in segmenting the instrument gap. Future development of machine learning algorithms might improve on these methods, especially for complex areas in the knee. However, for analysis of selected areas in the knee joint with fast processing, the Otsu algorithm is effective and accurate.

Analyzing the knee gap is one step toward robotic leg manipulation to support automated surgery. Future research will need to include a range of computer vision and kinematic technologies and sensor solutions, to enable decisions on how, when, and in which direction to move the leg to support the surgeon at a specific point in time.
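As a concrete illustration of the Otsu-based approach, here is a minimal OpenCV sketch; it is not the exact pipeline evaluated in Ref. [18], and the file names are placeholders.

```python
import cv2
import numpy as np

# Minimal sketch of Otsu-based instrument-gap segmentation on an arthroscopy
# frame; illustrative only, not the pipeline evaluated in Ref. [18].
frame = cv2.imread("arthroscopy_frame.png")          # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)             # suppress sensor noise

# Otsu picks the threshold that minimizes intra-class intensity variance.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Keep the largest connected component as the candidate instrument gap.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip background label 0
gap_mask = np.uint8(labels == largest) * 255

cv2.imwrite("gap_mask.png", gap_mask)
```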

FIGURE 29.6 Segmentation data sets.

FIGURE 29.7 Otsu algorithm. Top row: L3 (A) Frame 2 of L3 arthroscope video, (B) L3 marked up image, (C) Otsu L3 mask, (D) L3 SAD output. Bottom row: L5 (E) Frame 2 of L5 arthroscope video, (F) L5 marked up image, (G) Otsu L5 mask, and (H) L5 SAD output.

29.4 Miniature stereo cameras for medical robotics and intraknee perception

Visual information plays a significant role in our everyday lives, as 80% of the information we receive comes from visual inputs, and vision undoubtedly improves our ability to make decisions. The vision system plays an equally important role in MIS. Current arthroscopes use a rigid rod-lens geometry to visualize the surgical area of interest, and the two-dimensional (2D) images they provide are used by surgeons to perform surgery. However, the current technology lacks depth perception, and for autonomous robotic arthroscopy access to depth information is an essential element. Autonomous robotic navigation relies heavily on stereo vision, and considerable research effort is underway in designing smart robots for security and surveillance operations in unstructured environments. The technological challenges of visualizing the interior of the human body present new opportunities for designing advanced vision systems, and 3D reconstruction of complex joints such as the knee cavity will benefit arthroscopic surgery. As a first step toward the realization of better medical imaging systems for diagnosis and therapy, miniature stereo vision cameras have been developed to bring depth perception to robotic surgical procedures.

29.4.1 Complementary metal-oxide semiconductor sensors for knee arthroscopy

Human vision is the most sophisticated and powerful vision solution for observing the environment and extracting location information. Akin to the human visual system, stereo vision in robotics forms a reliable depth perception technique for the successful navigation of robots in unknown and unstructured environments [25]. The stereo vision technique requires two cameras observing a scene from different locations, producing different image locations for the same objects; the disparity and the baseline of the system are used for distance estimation and 3D reconstruction of the scene. It is not surprising that modern imaging technologies now form an integral part of minimally invasive surgical procedures. However, the development of miniature cameras that can reach and see inside the tight spaces of complex body joints, such as the knee, is highly desirable for surgical planning, image-guided surgery, surgery simulation, and disease monitoring. Arthroscopic procedures such as meniscal repair and ACL reconstruction require extreme care, and a 3D reconstruction using stereo vision can serve a robotic system as its primary sensor for collision avoidance and mapping of the knee cavity.

There are four key steps by which an image sensor captures an image: first, photons emitted by a light source (in natural and artificially illuminated scenes) are absorbed by the photoactive material constituting the pixels, resulting in the generation of electron-hole pairs; second, the electrons and holes are driven by an external electric field toward opposite electrodes, where they are extracted, resulting in signal charge that is collected and accumulated at each pixel; third, the accumulated charge from each pixel in the 2D array is read out, and the process by which this occurs gives rise to the different image sensors on the current market, which include charge-coupled devices, charge-injection devices, complementary metal-oxide semiconductor (CMOS) image sensors, and image pickup tubes; finally, the detected charges are converted into digital data, which are processed to produce the final color image [26].

The current market for CMOS image sensors is nearly $14 billion, and market trends predict an ever-increasing demand for CMOS-based image sensors in the next few years, with new applications in medical, scientific, automotive, and industrial contexts accounting for a significantly larger share of the market (up to 25% by 2022). This expansion is being driven by the need for increasingly smaller and cheaper electronic devices with more functionality. Emerging alternatives to CMOS technology based on organic semiconductors and quantum dots are expected to further broaden the digital imaging market. Significant progress has already been made, to the point that organic photodetectors (OPDs) now complement CMOS technology in changing the landscape of the image sensor market. Organic semiconductor based imaging systems are particularly suitable for flexible designs in a soft packaging form, and it is not difficult to envisage them making their way into medical and robotic applications.

Bringing real-time situational awareness to medical robotics will require multimodal sensing, and for robot-assisted MIS the current imaging system has to change: there is a growing demand for a next generation of arthroscopes that can provide a volumetric representation in real time. Different components and characteristics of the image sensor have to be modified to facilitate their use in medical robotics. The areas that require modification include (1) image processing software to augment medical imaging with multimodal image fusion, (2) an image sensor architecture that enables 3D imaging, (3) fast frame rates with an active pixel arrangement, (4) the integration of "intelligence" at the chip level, and (5) multispectral imaging pixels that are able to see through blood and occlusions. Importantly, the type of photodetector material and the color separation system associated with medical imaging play an important role in defining the quality of the final image.
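To make the four capture steps concrete, the toy simulation below runs one frame through photon absorption, charge accumulation with a full-well clip, noisy readout, and 8-bit digitization. Every parameter (quantum efficiency, full-well capacity, read noise, bit depth) is an illustrative assumption, not the specification of any sensor discussed here.

```python
import numpy as np

# Toy simulation of the four image-capture steps described above:
# photon absorption -> charge accumulation -> readout -> digitization.
rng = np.random.default_rng(42)

photons = rng.poisson(lam=500, size=(250, 250))          # photon arrivals per pixel
electrons = rng.binomial(photons, 0.6)                   # QE = 0.6: photon -> electron-hole pair
electrons = np.minimum(electrons, 10_000)                # full-well capacity clip
readout = electrons + rng.normal(0, 5, electrons.shape)  # read noise (e- rms)
dn = np.clip(readout / 10_000 * 255, 0, 255).astype(np.uint8)  # 8-bit ADC output
```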

29.4.2 Emerging sensor technology for medical robotics

Current image sensors have many attractive features, but their mechanical rigidity limits their application in devices that require flexible packaging (e.g., within soft tubing). CMOS-based image sensors are traditionally planar and lack mechanical flexibility due to the strong nature of covalent bonding. The most advanced nonplanar inorganic electronic systems were recently demonstrated by Ko et al., who realized silicon-based hemispherical image sensors with a wide FOV (akin to the human eye) by employing elastomeric interconnects [27]. The development of mechanically flexible image sensors is particularly important for aberration-free conformal imaging with the wide FOV required in medical and security applications. Mechanical flexibility is therefore an important attribute in choosing the light-sensing material for the next generation of image sensors [28]. The weak van der Waals interactions among neighboring molecules in organic semiconductors enable intrinsic flexibility at the molecular scale, making this imaging technology particularly suitable for medical and soft robotics. Organic semiconductors offer cheaper processing methods; the fabrication of devices that are light, flexible, and manufacturable in large (or small) sizes; and the tuning of photophysical and optoelectronic properties.

Lighting plays an important role in the color constancy and the quality of 3D reconstruction that an imaging system can produce. A theoretical approach to circumventing the vision impairment of CMOS image sensors for machine vision was presented by Finlayson, in which a combination of narrow spectral responses and a logarithmic pixel was proposed to create an image that is invariant to changes in lighting conditions [29]. Along these lines, the practical feasibility of employing a set of organic absorbers to produce color with high purity using this approach has been reported [30,31]. Assuming Gaussian profiles for the sensing material and a full width at half maximum of <100 nm, it was reported that a combination of four sensors is capable of producing high-purity color information in an image. The work further identified organic chromophores that offer a close match to the ideal Gaussian profiles required for high color quality in image sensors; four narrow absorbers separately absorbing the blue, green, yellow-orange, and red colors of a scene were reported. Further development of this concept has led to demonstrations of narrow-spectrum OPDs suitable for producing high color purity in the blue and green regions [32]. Motivated by this work, researchers at Samsung have developed a narrow-spectrum sensor for the green channel [33]. Further developments in the material design and processing of spectrally selective organic semiconductor pixels will be beneficial for producing miniature cameras with the desired color purity.

FIGURE 29.8 Prototypes of the stereo cameras and their feasibility as an imaging system for knee arthroscopy. The NanEye stereo module is shown in (A); its miniature dimensions are ideal for arthroscopy, but it suffers from low image resolution and unreliable performance. The second prototype (B) is based on pairing two muC103A cameras.

29.4.3 Miniature stereo cameras for knee arthroscopy

In this section, we describe the feasibility of miniature stereo vision cameras for robotic knee arthroscopy. Three-dimensional reconstruction using stereo image pairs requires identification of the image pixels that correspond to the same point in the physical scene observed by the two cameras. Two arthroscope prototypes have been developed at Queensland University of Technology toward this aim: (1) a NanEye stereo camera (each sensor with 250 × 250 pixels) mounted on a 3D printed head and (2) a pair of muC103A cameras (each with 400 × 400 pixels) assembled as a stereo pair and mounted on a 3D printed head. Images of the arthroscope prototypes are shown in Fig. 29.8. In comparison to prototype 1, substantial improvements have been achieved with prototype 2 in terms of image resolution and the reliability of the camera during recordings. However, there is room for further improvement in the physical characteristics of the camera, such as adding LED illuminators and a multibaseline stereo arrangement.

29.4.3.1 Validation of stereo imaging in knee arthroscopy

Fig. 29.9 shows a set of stereo images acquired by arthroscope prototype 1. It is evident that, despite the stereo camera having an image resolution of only 250 × 250 pixels, the amount of detail present in these images is more informative than that provided by traditional oblique-view arthroscopes. An example of a stereo-rectified image is also shown, together with its disparity map. This proves the potential of directly using CMOS-based stereo image sensors in MIS.

The stereo vision concept was developed further with prototype 2. This prototype offers better resolution and provides detailed images, as can be seen from the set of images shown in Fig. 29.10, which were acquired from the same phantom knee. The robustness of this arthroscope for MIS was further verified in cadavers. Experiments in this direction have shown some very exciting results, where internal features and their associated textures were successfully captured and depth maps were created for every frame in real time. The depth map is an essential part of creating 3D surfaces of the internal knee structures. Fig. 29.10 also shows the corresponding depth maps for stereo images taken from a phantom knee and a cadaver knee. Images acquired from the cadaver knee further validate the use of stereo cameras in knee arthroscopy. The computed depth map shows parts of the femur and ACL that are close to the camera, with features up to 25 mm captured in good detail. It is also evident that prototype 2 is unable to fully reconstruct the knee anatomy, as there are gaps in the depth map. Much of this has to do with the lack of texture, and we also found that lighting conditions play an important role. For robotic knee arthroscopy it is apparent that more than one imaging mode is required to address situations where the main imaging system cannot provide detailed depth information. The use of 3D US is investigated as an additional imaging mode for obtaining volumetric information in real time to complement the stereo-camera-based imaging system. The suitability and potential of 3D US for robotic knee arthroscopy are discussed in the next section.
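Before moving on, here is a hedged sketch of the disparity-to-depth computation that underlies the depth maps above, using OpenCV's semiglobal block matcher; the focal length and baseline are placeholder values, not the calibration of the prototypes described here.

```python
import cv2
import numpy as np

# Hedged sketch: dense disparity and depth from a rectified stereo pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # must be divisible by 16
    blockSize=7,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point scale

# Triangulation: Z = f * B / d, with focal length f (pixels) and baseline B (m).
f_px, baseline_m = 320.0, 0.004  # assumed values for a ~4 mm stereo head
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]    # depth in meters
```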


FIGURE 29.9 The stereo images and their corresponding depth maps from a phantom knee using stereo endoscope prototype 1. Although the left and right images carry sufficient details, the reconstructed disparity map lacked features.

FIGURE 29.10 The stereo images and their corresponding depth maps from a phantom knee (A), and a cadaver knee (B). In (A) the meniscus is the main knee structure visible in the stereo images, whereas in (B) the ACL is clearly visible. ACL, Anterior cruciate ligament.


29.5 Ultrasound-guided knee arthroscopy

29.5.1 Ultrasound-based navigation

US is an imaging modality based on the use of high-frequency compressional sound waves. At tissue interfaces the wave fronts are partially reflected back to the US transducer, and from the time of flight it is possible to calculate the depth of the structures. Several lines of view are typically scanned sequentially and used to reconstruct a volume. Modern systems can scan several full volumes per second, making them suitable for real-time or quasi-real-time applications [34]; some recently introduced technologies even allow refresh rates of thousands of hertz [35]. US scanning has particular characteristics that make it interesting for intraoperative autonomous surgery applications: it is the only real-time, volumetric imaging modality currently clinically available in operating theaters; moreover, it is noninvasive, it provides superior soft-tissue contrast and high resolution, and it is cost-effective compared to other modalities such as CT and MRI. Advanced modalities such as elastography [36] or ultrasonic backscatter analysis [37] potentially allow for tissue typing/characterization, which can be used to inform the surgeon or the autonomous robotic system about the distribution of the different tissues in the operating region.

Navigation using US has been successfully explored in several medical fields. For example, US guidance in radiation therapy before or during treatment is currently a clinical standard [38,39]. In robotic needle procedures and MIS, US guidance is becoming increasingly common, either alone or in combination with other modalities such as MRI, CT, or vision systems. US is used to track both internal tissue distributions and tools, such as arthroscopes [40], catheters [41], or biopsy [42]/brachytherapy [43] needles. Although there is presently no clinically available system using US guidance for arthroscopic procedures in the knee, several studies in the literature have investigated procedures that could be adapted to (autonomous) arthroscopic applications, in particular for the identification and tracking of knee tissues and of surgical tools with characteristics similar to arthroscopes.
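The underlying depth calculation is straightforward; a minimal sketch, assuming the conventional soft-tissue sound speed of 1540 m/s:

```python
# Depth from ultrasound time of flight, assuming the conventional
# soft-tissue sound speed c = 1540 m/s. The factor 2 accounts for the
# round trip (transducer -> interface -> transducer).
def echo_depth_mm(time_of_flight_s, c=1540.0):
    return 1e3 * c * time_of_flight_s / 2.0

print(echo_depth_mm(39e-6))  # a 39 microsecond echo -> ~30 mm deep
```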

29.5.2 Ultrasound for the knee

29.5.2.1 Automatic and semiautomatic segmentation and tracking

In a clinical workflow where (autonomous) robotic arthroscopy for the knee is implemented, the first possible application for US imaging is the identification and segmentation of the structures of interest. This imaging modality allows most of the tissues in this region to be recognized and contoured, such as tendons [44], ligaments [45], menisci [46], nerves [46,47], and cartilage [48,49] (see Fig. 29.11 for an example of a US-based map of hard-tissue knee structures). Vessels (like the popliteal artery) can be segmented and tracked dynamically using duplex US [50,51].

FIGURE 29.11 On the left, sagittal 2D US B-mode scan of the knee through the patellar tendon. On the right, identification and segmentation of the main hard-tissue structures. 2D, Two-dimensional; US, ultrasound.


Bony structures cannot be fully imaged, due to the physical limitations of this imaging modality: since US reflection and transmission coefficients at an interface between tissues depend on the difference between their acoustic impedance values, at the muscle-bone interface, where this difference is large, almost all of the signal is reflected back. Still, it is possible to image the proximal surface of the bone and use registration techniques (e.g., with preoperative MRI or CT) to reconstruct and track in real time the correct position, orientation, and shape of the whole structure [52]. To facilitate the surgeon's work and provide a more familiar navigation tool, US can be used as a real-time source of information to navigate (rigidly, or in principle also elastically) within another volumetric imaging modality, a technique also referred to as "virtual sonography" [45].
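A minimal sketch of the rigid point-set registration step underlying such US-to-CT/MRI alignment follows (Arun's SVD method, assuming point correspondences are already known; a practical pipeline would wrap this in ICP or a feature-based matcher to establish them).

```python
import numpy as np

# Rigid registration of matched point sets (Arun et al.'s SVD method):
# finds R, t minimizing ||R @ us_pts + t - ct_pts||^2 over all pairs.
def rigid_register(us_pts, ct_pts):
    mu_us, mu_ct = us_pts.mean(axis=0), ct_pts.mean(axis=0)
    H = (us_pts - mu_us).T @ (ct_pts - mu_ct)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_ct - R @ mu_us
    return R, t

# Smoke test: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
R_est, t_est = rigid_register(pts, pts @ R_true.T + np.array([1.0, -2.0, 0.5]))
assert np.allclose(R_est, R_true) and np.allclose(t_est, [1.0, -2.0, 0.5])
```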

29.5.2.2 Ultrasound-guided interventions

Interventional US guidance is currently used in the knee mainly for injections, in particular when accurate positioning of the needle is required. Clinical indications include voluminous and painful cysts that need in situ injections; tendinopathies/bursopathies that are difficult to reach; and complex cases, such as when synovitis or prostheses are present and joint aspirations are prescribed [53]. For these procedures, the use of US has reportedly completely transformed the outcomes, significantly increasing accuracy and safety. Research has shown that there is also potential for needle placement guidance in the posterior cruciate ligament (PCL): some authors have investigated the aspiration of ganglion cysts [54], for example, but probably the most interesting application is the injection of advanced regenerative treatments such as cells, growth factors, or scaffolds [55], as this would open new scenarios for the role of orthobiologic agents in the treatment of injuries to this ligament. US has also been used to guide Baker's cyst aspiration, with significant clinical improvements in osteoarthritis patients [56]. Procedures for the patellar tendon using US guidance are also reported in the literature. In an application in sports medicine, Maffulli et al. describe a procedure to inject sclerosing agents into the vascularization site in patients with recalcitrant tendinopathy [57]. In this study, the authors show how US is a valuable and precise tool not only to identify the interface between the tendon and the Hoffa body, but also to follow in real time the distribution of the fluid and, with color Doppler, to assess the neovascularization state of the tendons to understand how effective the injection was. To diagnose and treat patellar tendon tendinosis, Kanaan's group proposed using US to identify sonographic features that correlate with the disease and to guide percutaneous tendon fenestration [58]; they concluded that this workflow produced clinical improvements or no change in all patients. Hirahara et al. [59] described a technique using US for percutaneous reconstruction of the anterolateral ligament (ALL), where sonography was used to accurately identify the origin and the insertion of the ALL.

29.5.2.3 Ultrasound-guided robotic procedures

US-guided anesthetic procedures, such as nerve blocks, could benefit from the introduction of robotics as other surgical procedures have. In the knee area, in order to guarantee stable insertion and a faster learning curve, Hemmerling et al. [60,61] developed a robotic system (Magellan) in which the needle is held and placed under US guidance by a robotic arm, remotely controlled with a joystick. To select the point for needle insertion, the combined information from a camera and from a manually operated 2D US system was visualized and used by the surgeon.

In computer-assisted orthopedic surgery (CAOS) [62], robots are usually an essential component of the workflow. A successful example of a commercial CAOS product for the knee is the Mako Total Knee System (Stryker, 2825 Airview Boulevard, Kalamazoo, Michigan, United States) [63] (see Fig. 29.12). Currently, most systems use CT for preoperative imaging and fluoroscopy during the operation, which exposes the patients and the operators to an unwanted X-ray dose. Moreover, to coregister the structures intraoperatively with the preoperative plan, heavily invasive tracking devices are typically used, possibly resulting in pain and longer recovery times; this also limits their use to larger, nonmobile bones. The reference markers are fixed directly to the bones (pelvis, femur, and tibia) through very small incisions (1-2 cm). Patients seldom complain about the small incisions because the main wounds for joint replacement are larger and more painful; however, the fact remains that more incisions (albeit small) are needed and extra holes are created in the healthy parts of the bones. In elderly patients, the markers might loosen during surgery because of osteoporotic bone, and if this happens the system does not work correctly. US has been proposed to significantly reduce these issues, potentially providing submillimetric accuracy and robust registration procedures that do not require segmentation [64]. Mozes et al. tested, on phantoms and cadavers, an optically tracked A-mode US probe used as a variable-length pointer to localize bony structures in real time in absolute (room) coordinates and register them with preoperative scans [65]. Some studies even suggest the possibility of avoiding preoperative CT scans altogether: using 3D US imaging, the surface of the femur, for example, can be reconstructed and registered to a generic, accurately segmented CT scan, and the resulting contours can be used for real-time navigation during surgery [66].


FIGURE 29.12 Medial unicompartmental knee arthroplasty using the MAKO RIO robotic system (Stryker) (Dr. William A. Leone, The Leone Center for Orthopedic Care at Holy Cross Hospital, Fort Lauderdale, Florida, United States).

29.5.3 Ultrasound guidance and tissue characterization for knee arthroscopy

There is a clear indication from clinical institutions that only automatization of interventions, at least the most standard ones, can in the near future be sustainable in terms of efficiency, minimization of error occurrence, and costeffectiveness. Arthroscopic procedures are among these types of intervention but, in particular for the knee, specific issues must be addressed before fully autonomous applications can be envisioned. First, the robots need to be aware of the position of all the relevant structures (targets, organs at risk, and tools) at all times with the level of accuracy and precision required by the specific application. It must also be noted that imaging for soft tissues (which is essential in robotic surgery to identify, segment, and track the structures of interest) has not been thoroughly investigated in the literature because most applications studied have focused on bony structures. Currently, internal vision systems, such as microcameras, are typically employed which, though, typically cannot provide any volumetric or spatially localized information. Volumetric X-ray systems (like CT) are not real time but time integrated, deliver extra unwanted radiation to the operators, have poor contrast and resolution, and are incompatible with normal operational conditions. Other clinically available options presently only include (open) MRI scanning, which has several remarkable disadvantages: it is hardly compatible with standard arthroscopic workflows due to the high-intensity magnetic fields required and the limited operating space, and it is extremely costly. US is instead a portable, cheap, harmless real-time volumetric imaging modality with advanced tissue characterization capabilities, superior soft-tissue contrast, and virtually unlimited resolution possibilities. Moreover, it has already been proven as a reliable image guidance tool in other applications with similar requirements, in the radiotherapy space [38,39]. These requirements are include the ability to distinguish automatically tissues; automatic segmentation of tissues and tools; automatic tracking of tissues and tools; and real-time volumetric navigation (possibly elastically registered to another modality such as a preoperative MRI or CT for the convenience of physicians). Most arthroscopic procedures in the knee are performed from the anterior side through the parapatellar portals (the soft spots on the sides of the patella) and only involve the anterior region of the joint. Therefore it is natural to assume the best option is to scan anteriorly the knee during the operation, using a 4D probe with adequate resolution (e.g., in the range 5 13 MHz, for optimal penetration and resolution). US can identify and track the cartilages, the ligaments, the meniscus, the bony structures, the tendons, possible pathological scar tissue, and possible liquid present (inflammation/joint fluid/saline solution). It is also advisable to perform a posterior preoperative scan to segment arteries, nerves, PCL, posterior part of the bones, posterior part of the cartilages, muscles, and tendons. The latter scan can be used to fix safety margins, in particular for vessels and nerves, although it is very rare that these structures enter the operating region. The only structures usually assessed during arthroscopy that cannot be visualized with US from the anterior view are the PCL and the patellar cartilage. 
While the PCL can generally be identified from the popliteal fossa, the patellar cartilage cannot be imaged since it lies behind the patella [67]. It has also been reported in the literature that US cannot demonstrate meniscal and ligamentous lesions [56]. Instead, several authors propose US (possibly combined with preoperative images) to find possible areas of pathology [68–72]. This information could be used to guide the robot to the disease. In combination with the US-based map of the tissues, a full set of active constraints could reduce damage to normal tissue and shorten the overall time required for the procedure.

This scenario presents some serious challenges. Traditionally, US has been regarded as a highly operator-dependent modality, so for autonomous applications a certain level of automatic interpretation of the images must be introduced; there is already some promising literature in this direction. For optimal US scanning, good coupling between the scanned surface and the surface of the US probe should be maintained throughout the whole procedure. Unfortunately, the knee has a complex, unevenly curved shape that does not match the curvature of a standard 4D US probe (see Fig. 29.13). Moreover, the leg can be moved during surgery, so the coupling must be maintained dynamically. In addition, the different tissues inside the joint might require different orientations of the probe, and possibly specific flexion angles, for optimal visualization, which in principle might not be optimal for arthroscope access to those structures. However, the US guidelines presently available for the knee focus on the visualization of each structure independently, with the most commonly used 2D probes; further studies are needed to evaluate whether 3D/4D volumetric US can overcome these limitations. Another parameter to take into account is the FOV of the probe: scanning from the anterior side means scanning through the patellar tendon, which provides only a small contact surface and significantly reduces the accessible FOV. Furthermore, several structures of interest in the knee might lie beyond the air gaps on the sides (Fig. 29.13). Finally, the arthroscopic tools must also be considered in designing the navigation system, because it is necessary to give them enough space to access the joint and move dynamically within it, without letting the US probe lose coupling. It then becomes apparent how important it is to select the most appropriate US probe, ideally with a small footprint, to minimize these problems. Adopting novel fill-in gel pads that fill the whole volume between the probe and the knee is a complementary approach that may mitigate the issues described.
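As an illustration of the kind of automatic image interpretation required, the sketch below shows a simple column-wise bone-surface detector for a single B-mode frame. It relies only on the generic appearance of bone in US (a bright interface with an acoustic shadow beneath it). This is a minimal sketch under assumed inputs (a normalized NumPy image); the function name and thresholds are illustrative, not a validated clinical algorithm, and a real pipeline would extend this per-frame estimate to 3D/4D volumes and feed it into the segmentation and tracking steps discussed above.

```python
import numpy as np

def detect_bone_surface(bmode, intensity_thresh=0.6, shadow_thresh=0.2):
    """Column-wise bone-surface detection in a normalized B-mode frame.

    Bone typically appears as a bright, roughly horizontal interface with
    an acoustic shadow (low echo) beneath it. For each scan line (column)
    we take the deepest bright pixel whose distal region is dark.
    Returns per-column surface depths (row indices), NaN where none found.
    """
    _, cols = bmode.shape
    surface = np.full(cols, np.nan)
    for c in range(cols):
        line = bmode[:, c]
        bright = np.where(line > intensity_thresh)[0]
        for r in bright[::-1]:                  # search deepest candidates first
            distal = line[r + 1:]
            if distal.size and distal.mean() < shadow_thresh:
                surface[c] = float(r)
                break
    return surface

# Toy usage: a synthetic frame with a bright interface at row 60.
frame = np.random.rand(128, 64) * 0.15          # speckle-like background
frame[60, :] = 0.9                              # simulated bone interface
print(np.nanmedian(detect_bone_surface(frame)))  # ~60.0
```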


FIGURE 29.13 On the left, the probe held in position on the knee: in yellow, the extent of the US probe surface and, in red, the actual surface of contact. On the right, the resulting 3D image, showing the expected (yellow) and actual (red) fields of view. 3D, Three-dimensional; US, ultrasound.

29.6 Toward a fully autonomous robotic and image-guided system for intraarticular arthroscopy

So far we have introduced four technologies that may change the way that arthroscopies are performed. These technologies can provide the surgery with improved accessibility, better ergonomics, enhanced guidance, and greater precision. We have described how each individual technology may assist surgeons in performing arthroscopy in the near future; in the longer term, these technologies have the potential to be combined into a fully autonomous robotic and image-guided system. Such a system could be supervised by surgeons, drawing on their clinical expertise, while minimizing the effects of human limitations such as hand tremor, fatigue, and limited precision. In this section, we discuss some of the ways these technologies may interlink toward the development of a fully autonomous robotic and image-guided system for intraarticular arthroscopy.




29.6.1 Sensor fusion of camera image and ultrasound guidance

The oblique-viewing design of current arthroscopes drastically reduces the FOV. The use of miniature cameras has been shown to circumvent this problem, and the development of arthroscopes with stereo vision will bring further advances: surgeons will benefit from an increased FOV as well as real-time depth perception. Such improvements to vision systems will improve visualization of the surgical space and reduce the long learning curves associated with knee arthroscopy. A limitation of this approach, in particular when robotic applications are investigated, is that microcameras cannot provide detailed volumetric information and that the information is not spatially localized in absolute coordinates. The ideal vision system for knee arthroscopy would therefore necessarily be multimodal. As discussed in the previous sections, 4D US imaging is currently the only imaging modality with all the characteristics needed to augment camera-based vision systems. Together, a multimodal imaging system, consisting of a set of stereo cameras fused with the volumetric view of a 4D US probe, can provide rich surface visualization of the surgical space that is accurately spatially localized.

The fusion between the two modalities can be performed in several different ways. For example, it is possible to identify the same structures in both modalities and overlap them using image processing techniques; it may be challenging, though, to recognize the essentially 2D surfaces provided by the cameras within 3D US volumes. Another, more practical, approach is to localize and track the metallic tools using US and then align the position and orientation of the camera depth maps with the position and orientation of the tools in the US scans [73]. Several works in the literature prove the feasibility of this strategy (although mainly in 2D); nevertheless, there are no established algorithms for reliable 3D tracking of surgical instruments. Moreover, to provide the robotic system with absolute coordinates for the structures of interest, the US probe itself must be accurately spatially localized. Multiple solutions are available, for example, optical tracking systems like the NDI Polaris (NDI, Waterloo, Ontario, Canada) or electromagnetic (EM) systems like the NDI Aurora. For each of these solutions, the geometric errors introduced along the localization chain must be carefully evaluated to establish the final precision and accuracy of the surgical system.

Working in tandem, such imaging systems will allow surgeons to fix safety margins around sensitive vessels and nerves. Integration of this technology with robotic platforms will bring further improvements, enforcing even greater precision and accuracy, and adding a degree of artificial intelligence to both the camera and US imaging modes would offer further advantages. Of course, successful clinical validation is paramount before such imaging systems are widely deployed.
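To make the alignment step in such a localization chain concrete, the following is a minimal sketch of point-based rigid registration: given paired observations of the same landmarks (e.g., a tool tip) in the camera and US coordinate frames, the least-squares rotation and translation are recovered with the Kabsch/SVD method. The data and function names here are illustrative assumptions and are not taken from any of the cited systems.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch): find R, t with dst ~ R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g., tool-tip
    positions observed simultaneously in the camera and US frames.
    """
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy usage: recover a known transform from noisy correspondences.
rng = np.random.default_rng(0)
pts_cam = rng.uniform(-50, 50, (20, 3))                  # mm, camera frame
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
pts_us = pts_cam @ R_true.T + np.array([10.0, -5.0, 30.0])
pts_us += rng.normal(0, 0.2, pts_us.shape)               # measurement noise
R, t = rigid_register(pts_cam, pts_us)
print(np.round(R, 2), np.round(t, 1))                    # ~R_true, ~[10, -5, 30]
```

In practice the residual error of such a registration contributes directly to the geometric error budget of the localization chain discussed above.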

29.6.2 Leg manipulation for better imaging and image-guided leg manipulation

The joint space inside the knee is so confined that leg manipulation is indispensable in arthroscopy: it makes space for the arthroscope and tools and exposes particular tissues to the imaging system. There is therefore a two-way link between leg manipulation and imaging. On the one hand, good imaging quality calls for a good leg-manipulation strategy during the procedure; on the other hand, leg manipulation can be guided by the real-time images acquired from the arthroscope, the US system, or both. To automate leg manipulation, it is essential to understand how the motion of the manipulator relates to the image feedback, as sketched below. Understanding the anatomic structure of the leg, especially its kinematic model, will help establish the mapping between the leg manipulator and the imaging. In Section 29.3 we described the knee gap detection and segmentation that could facilitate autonomous leg manipulation. The techniques enabling the automation of leg manipulation are also shared with those of the steerable robotic tools and are discussed in the following section.
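As a minimal sketch of how image feedback might drive the manipulator, the snippet below implements one proportional control step that adjusts the knee flexion angle to keep a segmented joint gap near a target width. The gap measurement is assumed to come from a segmentation routine such as that described in Section 29.3; the target value, gain, and step bound are illustrative assumptions, and a clinical system would add explicit safety limits on joint loading.

```python
def flexion_step(gap_mm, target_gap_mm=6.0, k_p=0.5, max_step_deg=1.0):
    """One proportional control step for image-guided leg flexion.

    gap_mm: knee-gap width measured from the current image (here an
    input; in a real system it would come from gap segmentation).
    Returns a bounded flexion-angle increment in degrees; a positive
    increment flexes the knee further to open the compartment.
    """
    error = target_gap_mm - gap_mm              # too-narrow gap -> flex more
    step = k_p * error
    return max(-max_step_deg, min(max_step_deg, step))

# Usage: gap currently 4 mm, target 6 mm -> flex further (capped at 1 degree).
print(flexion_step(4.0))
```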

29.6.3 Vision-guided operation with steerable robotic tools

Autonomous operation with steerable robotic tools becomes possible when abundant sensing feedback is available. In Section 29.2 we discussed how multimodality sensors may be fitted to the handheld robotic tools to make the operation more situation-aware. In particular, the camera image from the arthroscope provides the most direct information on the surgical site and on the interaction between the tools and the tissues. At the same time, the US imaging system is able to track the tools with respect to the anatomy of the joint in real time. The fusion of the two sensors, as discussed above, will provide a detailed description of the situation and thus enable autonomous motion of the steerable robotic tools.

Visual servoing is a technique that can make such vision-guided automation possible. It is essentially a control algorithm that takes images, either RGB images from the camera [74] or US images [75], as feedback. There are two frameworks for visual servoing: position based and image based. Position-based visual servoing first estimates the positions of the relevant components, such as the robot and the target, and then feeds these positions to a conventional closed-loop control algorithm. Image-based visual servoing directly establishes the relationship between the motion of the robot and the change in the image via a matrix called the image Jacobian, and updates this Jacobian as the robot moves toward the target until the desired configuration is reached. Whichever framework is used, the quality of the acquired images significantly affects the precision and robustness of the visual servoing implementation. The development of an autonomous robotic system for arthroscopy will therefore be built on the advancement of high-quality imaging systems.

Learning-based methods are another approach toward the development of a fully autonomous system. In recent years, significant advances have been made in machine learning and artificial intelligence, which have started to impact medicine. For autonomous surgical robotic systems, techniques such as deep learning may be used for scene understanding and decision-making. However, this requires a huge amount of data to train a neural network, and the acquisition of such data is extremely difficult in the context of surgical applications. Reinforcement learning does not require labeled training data, but it still suffers from sample inefficiency and the tricky design of reward functions, leading to problems in robustness, accuracy, and predictability. Even so, given the rapid progress in machine learning and artificial intelligence and its potential to revolutionize medicine, the development of fully autonomous robotic and image-guided systems for arthroscopy remains promising.
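To make the image-based framework concrete, the following sketch implements the classical image-based visual servoing control law v = -lambda * J^+ * (s - s*), where J is the image Jacobian (interaction matrix), s the current feature vector, and s* the desired one. The toy Jacobian and feature values are assumptions for illustration; in practice J would be estimated and updated online from the camera or US images as described above.

```python
import numpy as np

def ibvs_velocity(s, s_star, J, gain=0.5):
    """Image-based visual servoing step: v = -gain * pinv(J) @ (s - s_star).

    s:      current image-feature vector (e.g., stacked pixel coordinates)
    s_star: desired feature vector at the goal configuration
    J:      image Jacobian relating robot velocity to feature velocity,
            re-estimated as the robot moves
    Returns the commanded robot velocity.
    """
    return -gain * np.linalg.pinv(J) @ (s - s_star)

# Toy usage: two image features, two actuated degrees of freedom.
J = np.array([[1.0, 0.2],
              [0.0, 1.0]])           # illustrative interaction matrix
s = np.array([120.0, 80.0])          # current feature (pixels)
s_star = np.array([100.0, 100.0])    # desired feature (pixels)
print(ibvs_velocity(s, s_star, J))   # velocity command driving s toward s_star
```

Iterating this step while re-measuring s and re-estimating J is what drives the error (s - s*) toward zero, which is why image quality directly limits the precision and robustness of the controller.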

29.7 Discussion

In this chapter we have described knee arthroscopy, discussed some of the current difficulties with the procedure, and outlined some potential solutions as the procedure becomes more automated. The solutions we propose may never reach clinical practice, as we cannot predict whether a different strategy may prove more effective. Furthermore, the technologies will not all evolve at the same rate; some may be utilized and others not. That is the inevitable fate of new technologies. While we do not claim to be describing the future of arthroscopy, we believe that the technologies described will be increasingly utilized in surgical procedures and in other interventions such as cardiology. Technological advances have been so rapid in recent years that changes to the way knee arthroscopies and other procedures are performed are inevitable. However, change will inevitably be slower in the medical field than in others due to issues such as regulation and safety, as well as resistance from patients and the medical profession. The barriers to the introduction of medical devices are great. Developing even some of the technologies that we describe would be incredibly expensive and probably only achievable by large multinational corporations. Regulatory approval can be difficult to gain for new technology, as the standards of effectiveness and safety that must be reached are high. Finally, patients and healthcare workers need to be convinced of the safety of new technologies.

Though this chapter focuses on knee arthroscopy, all the technologies described are platform technologies; that is, they can be used in procedures beyond knee arthroscopy. Though pioneered in the knee, arthroscopy is now performed in many joints including the hip, shoulder, wrist, and ankle, to name a few. Arthroscopy in these joints can be even more difficult than in the knee, and the technologies described here have an opportunity to provide benefits there as well. Access is particularly difficult in the hip joint, and curved steerable cameras and instruments will be of even greater value when navigating around the spherical femoral head. Outside arthroscopy there are many other procedures where the technology described could be utilized, including inside the abdominal cavity (laparoscopy), the chest (bronchoscopy), or the bowel (endoscopy), to give just three examples. Finally, new fields could be opened up by variations of the technologies described, greatly expanding the role of minimally invasive, semiautonomous, and ultimately autonomous surgery.

29.8 Conclusion

Knee arthroscopy is a widespread yet complex procedure with a long learning curve, during which unintended harm can be caused to patients, and whose ergonomics place physical strain on surgeons. The use of autonomous technologies such as steerable instruments and leg manipulators, along with improved camera technology and imaging modalities such as US, will make the operation safer, more predictable, and easier.




References

[1] Phillips BB. General principles of arthroscopy. Campbell's Oper Orthop 2003;3:2497–514.
[2] Price AJ, Erturan G, Akhtar K, Judge A, Alvand A, Rees JL. Evidence-based surgical training in orthopaedics: how many arthroscopies of the knee are needed to achieve consultant level performance? Bone Joint J 2015;97-B:1309–15.
[3] Jaiprakash A, O'Callaghan WB, Whitehouse SL, Pandey A, Wu L, Roberts J, et al. Orthopaedic surgeon attitudes towards current limitations and the potential for robotic and technological innovation in arthroscopic surgery. J Orthop Surg 2017;25:2309499016684993.
[4] Catani F, Zaffagnini S. Knee surgery using computer assisted surgery and robotics. Springer Science & Business Media; 2013.
[5] Abolmaesumi P, Fichtinger G, Peters TM, Sakuma I, Yang G-Z. Introduction to special section on surgical robotics. IEEE Trans Biomed Eng 2013;60:887–91.
[6] Jabero M, Sarment DP. Advanced surgical guidance technology: a review. Implant Dent 2006;15:135–42.
[7] Gomes P. Medical robotics: minimally invasive surgery. Elsevier; 2012.
[8] Camarillo DB, Krummel TM, Salisbury Jr. JK. Robotic technology in surgery: past, present, and future. Am J Surg 2004;188:2S–15S.
[9] Bartoli A, Collins T, Bourdel N, Canis M. Computer assisted minimally invasive surgery: is medical computer vision the answer to improving laparosurgery? Med Hypotheses 2012;79:858–63.
[10] Dario P, Paggetti C, Troisfontaine N, Papa E, Ciucci T, Carrozza MC, et al. A miniature steerable end-effector for application in an integrated system for computer-assisted arthroscopy. In: Proceedings of the international conference on robotics and automation; 1997. https://doi.org/10.1109/robot.1997.614364.
[11] Dario P, Carrozza MC, Marcacci M, D'Attanasio S, Magnami B, Tonet O, et al. A novel mechatronic tool for computer-assisted arthroscopy. IEEE Trans Inf Technol Biomed 2000;4:15–29.
[12] Payne CJ, Gras G, Hughes M, Nathwani D, Yang G-Z. A hand-held flexible mechatronic device for arthroscopy. In: 2015 IEEE/RSJ international conference on intelligent robots and systems (IROS); 2015. https://doi.org/10.1109/iros.2015.7353466.
[13] Kutzer MDM, Segreti SM, Brown CY, Armand M, Taylor RH, Mears SC. Design of a new cable-driven manipulator with a large open lumen: preliminary applications in the minimally-invasive removal of osteolysis. In: 2011 IEEE international conference on robotics and automation; 2011. https://doi.org/10.1109/icra.2011.5980285.
[14] Horeman T, Schilder F, Aguirre M, Kerkhoffs GMMJ, Tuijthof GJM. Design and preliminary evaluation of a stiff steerable cutter for arthroscopic procedures. J Med Device 2015;9:044503.
[15] Paul L, Chant T, Crawford R, Roberts J, Wu L. Prototype development of a hand-held steerable tool for hip arthroscopy. In: 2017 IEEE international conference on robotics and biomimetics (ROBIO); 2017. https://doi.org/10.1109/robio.2017.8324546.
[16] Wu L, Song S, Wu K, Lim CM, Ren H. Development of a compact continuum tubular robotic system for nasopharyngeal biopsy. Med Biol Eng Comput 2017;55:403–17.
[17] Wu L, Wu K, Ren H. Towards hybrid control of a flexible curvilinear surgical robot with visual/haptic guidance. In: 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS); 2016. p. 501–7.
[18] Strydom M, Jaiprakash A, Crawford R, Peynot T, Roberts JM. Towards robotic arthroscopy: "Instrument gap" segmentation. Australian Robotics & Automation Association; 2016.
[19] Zavatsky AB. A kinematic-freedom analysis of a flexed-knee-stance testing rig. J Biomech 1997;30:277–80.
[20] Moustris GP, Hiridis SC, Deliparaschos KM, Konstantinidis KM. Evolution of autonomous and semi-autonomous robotic surgical systems: a review of the literature. Int J Med Robot 2011;7:375–92.
[21] Tarwala R, Dorr LD. Robotic assisted total hip arthroplasty using the MAKO platform. Curr Rev Musculoskelet Med 2011;4:151–6.
[22] Uecker DR, Lee C, Wang YF, Wang Y. Automated instrument tracking in robotically assisted laparoscopic surgery. J Image Guid Surg 1995;1:308–25.
[23] Picard F, DiGioia AM, Moody J, Martinek V, Fu FH, Rytel M, et al. Accuracy in tunnel placement for ACL reconstruction. Comparison of traditional arthroscopic and computer-assisted navigation techniques. Comput Aided Surg 2001;6:279–89.
[24] Ward BD, Lubowitz JH. Basic knee arthroscopy part 3: diagnostic arthroscopy. Arthrosc Tech 2013;2:e503–5.
[25] Murray D, Little JJ. Using real-time stereo vision for mobile robot navigation. Auton Robots 2000;8:161–71.
[26] Jansen-van Vuuren RD, Armin A, Pandey AK, Burn PL, Meredith P. Organic photodiodes: the future of full color detection and image sensing. Adv Mater 2016;28:4766–802.
[27] Ko HC, Stoykovich MP, Song J, Malyarchuk V, Choi WM, Yu C-J, et al. A hemispherical electronic eye camera based on compressible silicon optoelectronics. Nature 2008;454:748–53.
[28] Forrest SR. The path to ubiquitous and low-cost organic electronic appliances on plastic. Nature 2004;428:911–18.
[29] Finlayson GD, Hordley SD. Color constancy at a pixel. J Opt Soc Am 2001;18:253.
[30] Ratnasingam S, Collins S. Study of the photodetector characteristics of a camera for color constancy in natural scenes. J Opt Soc Am A Opt Image Sci Vis 2010;27:286–94.
[31] Jansen van Vuuren R, van Vuuren RJ, Johnstone KD, Ratnasingam S, Barcena H, Deakin PC, et al. Determining the absorption tolerance of single chromophore photodiodes for machine vision. Appl Phys Lett 2010;96:253303.
[32] Pandey AK, Johnstone KD, Burn PL, Samuel IDW. Solution-processed pentathiophene dendrimer based photodetectors for digital cameras. Sens Actuators B Chem 2014;196:245–51.
[33] Lim S-J, Leem D-S, Park K-B, Kim K-S, Sul S, Na K, et al. Organic-on-silicon complementary metal-oxide-semiconductor colour image sensors. Sci Rep 2015;5:7708.
[34] Bushberg JT, Anthony Seibert J, Leidholdt EM, Boone JM, Goldschmidt EJ. The essential physics of medical imaging. Med Phys 2003;30:1936.
[35] Montaldo G, Tanter M, Bercoff J, Benech N, Fink M. Coherent plane-wave compounding for very high frame rate ultrasonography and transient elastography. IEEE Trans Ultrason Ferroelectr Freq Control 2009;56:489–506.
[36] Bamber J, Cosgrove D, Dietrich C, Fromageau J, Bojunga J, Calliada F, et al. EFSUMB guidelines and recommendations on the clinical use of ultrasound elastography. Part 1: Basic principles and technology. Ultraschall in der Medizin Eur J Ultrasound 2013;34:169–84.
[37] Lizzi FL, Feleppa EJ, Kaisar Alam S, Deng CX. Ultrasonic spectrum analysis for tissue evaluation. Pattern Recognit Lett 2003;24:637–58.
[38] Fontanarosa D, van der Meer S, Bamber J, Harris E, O'Shea T, Verhaegen F. Review of ultrasound image guidance in external beam radiotherapy: I. Treatment planning and inter-fraction motion management. Phys Med Biol 2015;60:R77–114.
[39] O'Shea T, Bamber J, Fontanarosa D, van der Meer S, Verhaegen F, Harris E. Review of ultrasound image guidance in external beam radiotherapy part II: intra-fraction motion management and novel applications. Phys Med Biol 2016;61:R90–137.
[40] Tyryshkin K, Mousavi P, Beek M, Ellis RE, Pichora DR, Abolmaesumi P. A navigation system for shoulder arthroscopic surgery. Proc Inst Mech Eng H 2007;221:801–12.
[41] Brattain LJ, Loschak PM, Tschabrunn CM, Anter E, Howe RD. Instrument tracking and visualization for ultrasound catheter guided procedures. Lect Notes Comput Sci 2014;2014:41–50.
[42] Bluvol N, Shaikh A, Kornecki A, Del Rey Fernandez D, Downey D, Fenster A. A needle guidance system for biopsy and therapy using two-dimensional ultrasound. Med Phys 2008;35:617–28.
[43] Banerjee S, Kataria T, Gupta D, Goyal S, Bisht SS, Basu T, et al. Use of ultrasound in image-guided high-dose-rate brachytherapy: enumerations and arguments. J Contemp Brachyther 2017;2:146–50.
[44] Wong-On M, Til-Pérez L, Balius R. Evaluation of MRI-US fusion technology in sports-related musculoskeletal injuries. Adv Ther 2015;32:580–94.
[45] Oshima T, Nakase J, Numata H, Takata Y, Tsuchiya H. Ultrasonography imaging of the anterolateral ligament using real-time virtual sonography. Knee 2016;23:198–202.
[46] Faisal A, Ng S-C, Goh S-L, George J, Supriyanto E, Lai KW. Multiple LREK active contours for knee meniscus ultrasound image segmentation. IEEE Trans Med Imaging 2015;34:2162–71.
[47] Giraldo JJ, Alvarez MA, Orozco AA. Peripheral nerve segmentation using Nonparametric Bayesian Hierarchical Clustering. In: 2015 37th Annual international conference of the IEEE engineering in medicine and biology society (EMBC); 2015. https://doi.org/10.1109/embc.2015.7319048.
[48] Faisal A, Ng S-C, Goh S-L, Lai KW. Knee cartilage segmentation and thickness computation from ultrasound images. Med Biol Eng Comput 2017;56:657–69.
[49] Faisal A, Ng S-C, Goh S-L, Lai KW. Knee cartilage ultrasound image segmentation using locally statistical level set method. In: IFMBE proceedings; 2017. p. 275–81.
[50] Guerrero J, Salcudean SE, McEwen JA, Masri BA, Nicolaou S. Real-time vessel segmentation and tracking for ultrasound imaging applications. IEEE Trans Med Imaging 2007;26:1079–90.
[51] Shetty AA, Tindall AJ, Qureshi F, Divekar M, Fernando KWK. The effect of knee flexion on the popliteal artery and its surgical significance. J Bone Joint Surg Br 2003;85-B:218–22.
[52] Wein W, Karamalis A, Baumgartner A, Navab N. Automatic bone detection and soft tissue aware ultrasound CT registration for computer-aided orthopedic surgery. Int J Comput Assist Radiol Surg 2015;10:971–9.
[53] Morvan G, Vuillemin V, Guerini H. Interventional musculoskeletal ultrasonography of the lower limb. Diagn Interv Imaging 2012;93:652–64.
[54] DeFriend DE, Schranz PJ, Silver DAT. Ultrasound-guided aspiration of posterior cruciate ligament ganglion cysts. Skeletal Radiol 2001;30:411–14.
[55] Hackel JG, Khan U, Loveland DM, Smith J. Sonographically guided posterior cruciate ligament injections: technique and validation. PM&R 2016;8:249–53.
[56] Köroğlu M, Çallıoğlu M, Eriş HN, Kayan M, Çetin M, Yener M, et al. Ultrasound guided percutaneous treatment and follow-up of Baker's cyst in knee osteoarthritis. Eur J Radiol 2012;81:3466–71.
[57] Maffulli N, Del Buono A, Oliva F, Testa V, Capasso G, Maffulli G. High-volume image-guided injection for recalcitrant patellar tendinopathy in athletes. Clin J Sport Med 2016;26:12–16.
[58] Kanaan Y, Jacobson JA, Jamadar D, Housner J, Caoili EM. Sonographically guided patellar tendon fenestration: prognostic value of preprocedure sonographic findings. J Ultrasound Med 2013;32:771–7.
[59] Hirahara AM, Andersen WJ. Ultrasound-guided percutaneous reconstruction of the anterolateral ligament: surgical technique and case report. Am J Orthop 2016;45:418–60.
[60] Hemmerling TM, Taddei R, Wehbe M, Cyr S, Zaouter C, Morse J. First robotic ultrasound-guided nerve blocks in humans using the Magellan system. Anesth Analg 2013;116:491–4.
[61] Morse J, Terrasini N, Wehbe M, Philippona C, Zaouter C, Cyr S, et al. Comparison of success rates, learning curves, and inter-subject performance variability of robot-assisted and manual ultrasound-guided nerve block needle guidance in simulation. Br J Anaesth 2014;112:1092–7.
[62] Hernandez D, Garimella R, Eltorai AEM, Daniels AH. Computer-assisted orthopaedic surgery. Orthop Surg 2017;9:152–8.
[63] Gilmour A, MacLean AD, Rowe PJ, Banger MS, Donnelly I, Jones BG, et al. Robotic-arm assisted vs conventional unicompartmental knee arthroplasty. The 2-year clinical outcomes of a randomized controlled trial. J Arthroplasty 2018;33:S109–15.
[64] Chen TK, Abolmaesumi P, Pichora DR, Ellis RE. A system for ultrasound-guided computer-assisted orthopaedic surgery. Comput Aided Surg 2005;10:281–92.
[65] Mozes A, Chang T-C, Arata L, Zhao W. Three-dimensional A-mode ultrasound calibration and registration for robotic orthopaedic knee surgery. Int J Med Robot 2009. Available from: https://doi.org/10.1002/rcs.294.
[66] Barratt DC, Chan CSK, Edwards PJ, Penney GP, Slomczykowski M, Carter TJ, et al. Instantiation and registration of statistical shape models of the femur and pelvis using 3D ultrasound imaging. Med Image Anal 2008;12:358–74.
[67] Friedman L, Finlay K, Jurriaans E. Ultrasound of the knee. Skeletal Radiol 2001;30:361–77.
[68] Paczesny Ł, Kruczyński J. Ultrasound of the knee. Semin Ultrasound CT MRI 2011;32:114–24.
[69] Fuchs S, Chylarecki C. Sonographic evaluation of ACL rupture signs compared to arthroscopic findings in acutely injured knees. Ultrasound Med Biol 2002;28:149–54.
[70] Sohn C, Casser HR, Swobodnik W. Ultrasound criteria of a meniscus lesion. Die Sonogr Kriter einer Meniskuslasion. 1990;11(2):86–90. Available from: http://ovidsp.ovid.com/ovidweb.cgi?T=JS&PAGE=reference&D=med3&NEWS=N&AN=2192453 [accessed 26.08.18].
[71] De Flaviis L, Scaglione P, Nessi R, Albisetti W. Ultrasound in degenerative cystic meniscal disease of the knee. Skeletal Radiol 1990;19:441–5.
[72] Cook C, Stannard J, Vaughn G, Wilson N, Roller B, Stoker A, et al. MRI versus ultrasonography to assess meniscal abnormalities in acute knees. J Knee Surg 2014;27:319–24.
[73] Yang L, Wang J, Kobayashi E, Ando T, Yamashita H, Sakuma I, et al. Image mapping of untracked free-hand endoscopic views to an ultrasound image-constructed 3D placenta model. Int J Med Robot 2014;11:223–34.
[74] Azizian M, Khoshnam M, Najmaei N, Patel RV. Visual servoing in medical robotics: a survey. Part I: endoscopic and direct vision imaging – techniques and applications. Int J Med Robot 2014;10:263–74.
[75] Azizian M, Najmaei N, Khoshnam M, Patel R. Visual servoing in medical robotics: a survey. Part II: tomographic imaging modalities – techniques and applications. Int J Med Robot 2015;11:67–79.