CHAPTER 35

System integration

Andras Lasso (a), Peter Kazanzides (b)

(a) School of Computing, Queen’s University, Kingston, Ontario, Canada
(b) Computer Science, Johns Hopkins University, Baltimore, MD, United States
Contents

35.1. Introduction
35.2. System design
  35.2.1 Programming language and platform
  35.2.2 Design approaches
35.3. Frameworks and middleware
  35.3.1 Middleware
    35.3.1.1 Networking: UDP and TCP
    35.3.1.2 Data serialization
    35.3.1.3 Robot Operating System (ROS)
    35.3.1.4 OpenIGTLink
  35.3.2 Application frameworks
    35.3.2.1 Requirements
    35.3.2.2 Overview of existing application frameworks
35.4. Development process
  35.4.1 Software configuration management
  35.4.2 Build systems
  35.4.3 Documentation
  35.4.4 Testing
35.5. Example integrated systems
  35.5.1 Da Vinci Research Kit (dVRK)
    35.5.1.1 dVRK system architecture
    35.5.1.2 dVRK I/O layer
    35.5.1.3 dVRK real-time control layer
    35.5.1.4 dVRK ROS interface
    35.5.1.5 dVRK with image guidance
    35.5.1.6 dVRK with augmented reality HMD
  35.5.2 SlicerIGT-based interventional and training systems
    35.5.2.1 3D Slicer module design
    35.5.2.2 Surgical navigation system for breast cancer resection
    35.5.2.3 Virtual/augmented reality applications
35.6. Conclusions
References

Handbook of Medical Image Computing and Computer Assisted Intervention
https://doi.org/10.1016/B978-0-12-816176-0.00040-5

Copyright © 2020 Elsevier Inc. All rights reserved.

35.1. Introduction

Many computer assisted intervention (CAI) systems are created by integrating a number of different devices, such as imaging, robotics, visualization, and haptics, to solve a clinical problem. It is therefore important to consider the processes, tools, and best practices in the area of system integration. This chapter focuses on the software aspects pertaining to the integration of CAI systems. First, several design considerations are discussed, such as programming language and platform, followed by a discussion of object-oriented and component-based design approaches. In recent years, component-based design has become the predominant approach for larger systems due to the looser coupling between components as compared to the coupling between objects. Component-based software generally relies on middleware to handle the communication between components, and so the chapter continues with a review of different middleware options, highlighting two popular choices: Robot Operating System (ROS) [8] and OpenIGTLink [7]. While both packages support a variety of devices, ROS is especially attractive for CAI systems that integrate robots, while OpenIGTLink is widely used with medical imaging.

This is followed by a discussion of frameworks suitable for creating CAI applications, especially for translational research and product prototyping. Many potential frameworks are listed and more details are provided for two of the largest, most commonly used platforms, 3D Slicer [6] and MITK [23].

The next section discusses the software development process. Although research software is not subject to the strict design controls used for commercial medical device software development, there are several advantages to adopting good development practices. This section describes best practices in the areas of software configuration management, build systems, documentation, and testing.
Following these practices will lead to software that is easier to use, easier to maintain, and better prepared for clinical evaluation or commercialization. Finally, the chapter concludes with examples of integrated systems that illustrate many of the concepts presented in this chapter. The first is the da Vinci Research Kit (dVRK) [9], which is an open research platform created by integrating open source electronics and software with the mechanical components of a da Vinci Surgical Robot. Additional examples are shown for CAI applications implemented using the 3D Slicer application framework.

35.2. System design

Although system designs vary based on the problem being addressed, there are some key design decisions that are common to all projects. These include the choice of programming language, platform, and design approach. The following sections present a brief overview of these design choices.


35.2.1 Programming language and platform

Most computer-assisted intervention systems require interaction with the physical world and thus require some level of real-time performance. This often leads to the choice of a compiled language, such as C++, due to its faster execution times. On the other hand, compiled languages are not ideal for interactive debugging because it is necessary to recompile the program every time a change is made. In this case, an interpreted language such as Python may be more attractive. It is not unusual for systems to be constructed with a mix of languages to reap the benefits of each language. One common design is to use C++ for performance-critical modules, with Python scripts to “glue” these modules together. An example of this design is provided by 3D Slicer [6], which consists of Python scripts that utilize a large collection of C++ modules.

The platform choice refers not only to the operating system, but possibly also to an environment or framework. Two popular operating systems are Windows and Linux, though OS X is also used, and mobile operating systems, such as iOS and Android, are becoming more prevalent. Development environments often encapsulate the underlying operating system and thus can provide some amount of portability. Examples of these environments include Matlab (a registered trademark of The MathWorks), Qt, and 3D Slicer. In other cases, such as Robot Operating System (ROS) [8], the platform may require a specific operating system (e.g., ROS is best supported on Linux). Typically, decisions about programming language are based on developer familiarity, availability of existing software packages, and performance considerations. The situation is similar for choosing the platform, possibly with an additional constraint regarding the availability of drivers for specific hardware components.
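The “compiled code for speed, interpreted code as glue” pattern can be sketched with Python’s ctypes module, which lets an interpreted script call directly into a compiled C library. Here the standard C math library stands in for a hypothetical performance-critical module; the library lookup shown assumes a Unix-like system:

```python
# Sketch of Python "glue" calling compiled C code via ctypes.
# The standard C math library stands in for a performance-critical
# compiled module; library lookup is platform-dependent.
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m")       # locate the C math library
libm = ctypes.CDLL(libm_path if libm_path else "libm.so.6")

# Declare the C signature: double cos(double)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0 -- computed by compiled C code, driven from Python
```

In a real system the same mechanism (or a binding generator such as SWIG or pybind11, as used by 3D Slicer) exposes the project’s own C++ modules to Python scripts.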

35.2.2 Design approaches

Two popular design approaches are object-oriented design and component-based design. In object-oriented design, the basic software elements are objects that are instances of classes, and data exchange occurs by calling methods of these classes. Component-based design consists of components (which may be implemented as objects), but data exchange occurs by passing messages between components. Thus, a component-based design involves a looser coupling between its basic software elements. Both approaches have merits, and here too the choice of design approach may depend on developer preference, familiarity, and compatibility with other packages or tools. For example, use of ROS imposes a component-based design approach.

Component-based design is especially useful for larger systems because the message passing paradigm is generally more robust to concurrent execution. Modern software systems often benefit from parallel threads of execution; for example, one thread may wait for user input, another may wait for information from a device, and another


may perform some computation. Parallel execution may be implemented by multiple executable programs (multiprocessing) or by multiple threads within a single process (multithreading), or some combination of both (e.g., multiple threads in multiple processes). In an object-oriented design, use of multiple threads can lead to data corruption when objects in different threads invoke the same methods or invoke methods that operate on the same data. Programmers can use mutual exclusion primitives, such as semaphores, to guard against these issues, but it is easy to make a mistake. In contrast, message passing is generally implemented using structures, such as queues, that can be systematically protected against mutual exclusion problems. Thus, a component-based design can provide an advantage for concurrent systems.

While some developers define software components to be independent binary executables (i.e., each component is a separate process), we adopt a broader definition that allows multiple components to exist in each process. When multiple components exist in a single process, it is possible to use simple and efficient data exchange mechanisms, based on shared memory, that also provide thread safety (e.g., ensuring that one thread is not reading an element from shared memory while another thread is writing that same element); examples include OROCOS [19] and cisst [16,13,17]. In the multiprocessing case, it is common to adopt middleware to provide the communication between processes; this is especially true for component-based designs (such as ROS) where each component is a separate process.

There are two general classes of message-based communication: client/server and publish/subscribe. In the first case, the client initiates communication with the server, often in a manner similar to a remote procedure call. Specifically, the client provides input parameters to a routine on the server and receives the results as output parameters.
It is also possible for the server to send events to the client to indicate asynchronous actions, such as changes of state on the server. In the publish/subscribe model, each component can publish and/or subscribe to data. The component publishing the data does not need to know how many other components (if any) have subscribed to the data. Similarly, the subscribing component does not need to know the identity of the publisher. Additional information about middleware options is given in the following section.
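The publish/subscribe pattern, and the role of thread-safe queues in protecting message passing from concurrency problems, can be sketched in plain Python (independent of any particular middleware; the broker class and topic names are illustrative):

```python
# Minimal publish/subscribe sketch using thread-safe queues (plain Python,
# not tied to any middleware). Each subscriber owns a queue; the broker
# copies every published message into each subscriber's queue, so readers
# and writers in different threads never touch shared data directly.
import queue
import threading

class Broker:
    def __init__(self):
        self._subscribers = {}          # topic -> list of subscriber queues
        self._lock = threading.Lock()   # protects the subscriber table only

    def subscribe(self, topic):
        q = queue.Queue()               # queue.Queue is itself thread-safe
        with self._lock:
            self._subscribers.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message):
        # The publisher does not know how many subscribers exist (maybe none).
        with self._lock:
            queues = list(self._subscribers.get(topic, []))
        for q in queues:
            q.put(message)

broker = Broker()
pose_queue = broker.subscribe("robot/pose")     # subscriber side
broker.publish("robot/pose", (10.0, 20.0, 5.0)) # publisher side
print(pose_queue.get())  # (10.0, 20.0, 5.0)
```

In a multiprocess system the in-memory queues would be replaced by network transport, which is exactly the service that middleware such as ROS topics provides.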

35.3. Frameworks and middleware

35.3.1 Middleware

Middleware refers to a software package that provides communication services between software components or applications in distributed systems. Many middleware solutions attempt to be both platform and language independent, that is, to support a variety of operating systems and programming languages. In most cases, middleware packages rely on the widely used User Datagram Protocol (UDP) or Transmission Control Protocol (TCP), which both depend on the Internet Protocol (IP). Thus, it is helpful to first


review the capabilities and limitations of these two protocols, since the various middleware packages leverage the capabilities, but still suffer from the limitations. Furthermore, in some cases a middleware solution may not be available for a particular platform, or may impose other challenges, leaving direct use of UDP or TCP as the best alternative. In this case, the programmer must create and use sockets, which are supported (with minor differences) on all major platforms.

35.3.1.1 Networking: UDP and TCP

UDP is a packet-oriented protocol, where one software component sends a packet and one or more software components may receive it. There is no guarantee that the packet will be delivered and the receiver does not provide an acknowledgment. The advantage of UDP is that, due to its simplicity, it has low overhead and therefore low latency. Although there are no guarantees of delivery, in practice most packets are delivered. UDP is especially useful for real-time data streams (e.g., measurements of a robot’s position) where it would be better to transmit the latest value rather than to retransmit an older value that was not delivered.

TCP establishes a virtual connection between a client and server and provides guaranteed delivery of messages. Large messages are split into smaller packets and are guaranteed to be received in the correct order. The disadvantage of TCP is that these guarantees increase the overhead and latency of communication. Thus, TCP is more suitable when high reliability is required, although UDP may also be used if reliability is explicitly provided by the programmer, for example, by sending an acknowledgment packet via UDP.
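The connectionless, unacknowledged nature of UDP can be seen in a minimal sender/receiver pair using the socket API mentioned below; the loopback address and message content are arbitrary:

```python
# Minimal UDP sender/receiver on localhost. No connection is established
# and delivery is not acknowledged; on the loopback interface, datagrams
# are in practice always delivered.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"x=1.0 y=2.0 z=3.0", ("127.0.0.1", port))  # e.g., latest robot pose

data, addr = receiver.recvfrom(1024)     # blocks until a datagram arrives
print(data.decode())                     # x=1.0 y=2.0 z=3.0

sender.close()
receiver.close()
```

A TCP version would add `connect`/`accept` calls and a byte-stream (rather than datagram) interface, which is the price of its delivery and ordering guarantees.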

35.3.1.2 Data serialization

In addition to the networking aspect, it is also necessary to consider the representation of the data, which may differ based on the operating system (including whether 32 or 64 bit), compiler, and programming language. A typical solution is for the sender to serialize the data into a standard format before transmitting it and then for the receiver to deserialize from the standard format into the correct native representation. Some packages address only one of the two requirements of networking and data representation. For example, ZeroMQ (http://zeromq.org/) fulfills the networking requirement with a design that its developers call “sockets on steroids”, but programmers must either write the serialization/deserialization code by hand or use an additional package, such as Google protocol buffers (Protobufs, https://developers.google.com/protocol-buffers/) or FlatBuffers (https://google.github.io/flatbuffers/). Typically, packages that provide serialization require the programmer to define the data format using a text file; depending on the package, this may be called an Interface Definition Language (IDL), a schema, or similar. The package then provides a “compiler” that


processes this text file and generates the appropriate serialization/deserialization code for the particular platform and language.

Other middleware packages provide both networking and data representation. CORBA (www.corba.org) is an early example and it has been used for some computer-assisted intervention systems [4,5]. Internet Communications Engine (ICE) [20] is another example that has been used for robotics [21,13].
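The serialize/deserialize round trip described above can be illustrated with Python’s struct module, which packs values into an explicitly specified binary layout in network byte order, independent of each end’s native representation. The record layout here (a hypothetical tracked-tool sample) is made up for illustration:

```python
# Manual serialization/deserialization sketch using Python's struct module.
# The record layout -- a hypothetical tracked-tool sample: 32-bit id plus
# three 64-bit position coordinates -- is made up for illustration.
# "!" selects network byte order, so both ends agree on the representation
# regardless of their native endianness or word size.
import struct

RECORD_FORMAT = "!I3d"   # uint32 tool id, three float64 coordinates

def serialize(tool_id, x, y, z):
    return struct.pack(RECORD_FORMAT, tool_id, x, y, z)

def deserialize(payload):
    return struct.unpack(RECORD_FORMAT, payload)

wire_bytes = serialize(7, 10.5, -3.25, 102.0)
print(len(wire_bytes))           # 28 bytes: 4 + 3*8, no padding
print(deserialize(wire_bytes))   # (7, 10.5, -3.25, 102.0)
```

Packages such as Protobufs generate equivalent (but versioned and extensible) code automatically from the schema file, which is far less error-prone than maintaining format strings by hand.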

35.3.1.3 Robot Operating System (ROS)

Robot Operating System (ROS, www.ros.org) [8] is a more recent package that has become the de facto standard in robotics and provides both networking and data representation (in addition to many other features unrelated to middleware). ROS was initially created by Willow Garage, then transferred to the Open Source Robotics Foundation (OSRF), which has recently created an Open Robotics subsidiary. ROS introduced two methods of communication between components (called nodes): topics and services. Topics are implemented using a publish/subscribe paradigm, where the sender publishes a message and any number of nodes may subscribe to it. When a message arrives on a subscribed topic, ROS invokes a user-supplied callback to process the data. Services follow the client/server or remote procedure call (RPC) approach, where the client node fills in the Request field of the service data structure, invokes the service, and then waits for the server to provide the results via the Response field of the service data structure. Services are implemented using TCP, whereas topics may use either TCP or UDP. In practice, ROS topics are much more commonly used than ROS services, even in cases where services would seem to be a better choice (for example, commands to change the state of the robot or to move to a new location).

One limitation of ROS is that it is best supported on Ubuntu Linux and thus is difficult to use in cases where some parts of the system must use a different platform. This was one of the motivations for the development of ROS 2, which uses Data Distribution Service (DDS) as its middleware. DDS has been standardized by the Object Management Group (OMG) and is a popular cross-platform publish/subscribe middleware. At present, a large part of the robotics community still uses the original version of ROS (ROS 1), but this may change as ROS 2 becomes more mature.
Although ROS provides a standard middleware, via topics and services, there currently is little standardization of topic names and payloads (message types). It is not always clear which type to use; for example, ROS geometry_msgs provides Pose and Transform types, as well as the Stamped versions of these types. This makes it difficult to “plug and play” different types of robots and developers often must resort to writing small ROS “glue” nodes to translate from one type of message to another when integrating packages. One recent effort to provide standardization of message names and types, at least for the domain of teleoperated and cooperatively-controlled robots, is the Collaborative Robotics Toolkit (CRTK), https://github.com/collaborative-robotics. One


aspect of the standardization is to distinguish between different types of motion commands; for example, a servo motion command is intended to provide an immediate setpoint (e.g., position, velocity, or effort), whereas a move command provides a goal and assumes an underlying trajectory planner. Since CRTK is still evolving, it is best to check the project website for the most up-to-date information. Note that the CRTK standard API is not ROS-specific – it is an abstract definition and CRTK will include implementations for common robotics middleware such as ROS.
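The servo/move distinction can be sketched as two methods of a toy robot class. The method names follow the CRTK naming convention described above (servo_jp and move_jp for joint-position commands), but the class and its internals are purely illustrative, not an actual CRTK implementation:

```python
# Illustrative sketch of the CRTK-style distinction between "servo"
# (immediate setpoint, no trajectory planning) and "move" (goal with an
# underlying trajectory planner). The class and its internals are
# hypothetical; only the servo_jp/move_jp naming follows CRTK.
class SketchRobot:
    def __init__(self):
        self.position = 0.0   # single joint, for simplicity

    def servo_jp(self, setpoint):
        # servo: the caller streams setpoints at a high rate; each one is
        # forwarded immediately to the low-level controller.
        self.position = setpoint

    def move_jp(self, goal, step=0.25):
        # move: a single goal; the robot interpolates its own trajectory.
        trajectory = []
        while abs(goal - self.position) > step:
            self.position += step if goal > self.position else -step
            trajectory.append(round(self.position, 6))
        self.position = goal
        trajectory.append(goal)
        return trajectory

r = SketchRobot()
r.servo_jp(0.5)            # jumps straight to the commanded setpoint
print(r.position)          # 0.5
print(r.move_jp(1.5))      # interpolated steps toward the goal
```

With a shared vocabulary like this, a teleoperation console can stream servo commands to one robot and send move goals to another without robot-specific “glue” nodes.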

35.3.1.4 OpenIGTLink

Although early middleware such as CORBA was used for CAI systems, the complexity and the requirement for an IDL led to several efforts to produce a simpler alternative (though with some added functionality compared to UDP or TCP). A notable example is OpenIGTLink [7], which is a very simple and efficient message-based communication protocol over TCP or UDP. Message types are defined for all information objects commonly used in CAI applications, such as positions, transforms, images, strings, and sensor data (http://openigtlink.org/developers/spec). Messages have a static “header” that is quick and easy to parse, followed by a message “body”. The message body consists of “content” (a static structure defined by the OpenIGTLink standard) and an optional “metadata” section (a list of key/value pairs). This structure allows customization and extension of the protocol while maintaining basic interoperability with all OpenIGTLink-compliant software. While OpenIGTLink can be used in a completely stateless manner, OpenIGTLink version 2 introduced data query and publish/subscribe mechanisms, and OpenIGTLink version 3 added a generic command message (COMMAND).

OpenIGTLink is directly supported by some commercial products (BrainLab surgical navigation systems, some Siemens MRI scanners, KUKA robots, Waters intraoperative mass spectrometers, etc.) and many CAI tools and applications (3D Slicer, MITK-IGT, CustusX, IBIS, NifTK, ROS, etc.). Since many commercial products do not provide an OpenIGTLink interface out of the box, a common approach is to implement a communication bridge, which translates standard OpenIGTLink messages to proprietary device interfaces.
The Plus toolkit [1] implements a communication bridge to a wide range of devices used in CAI systems, such as optical and electromagnetic position tracking devices, ultrasound systems, general imaging devices (webcams, frame grabbers, cameras), inertial measurement units, spectrometers, and serial communication devices. The toolkit also implements various calibration, data synchronization, interpolation, image volume reconstruction, and simulation algorithms.
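The value of a static, fixed-size header is that it can be packed and parsed with a few lines of code. The sketch below uses a header modeled on (but simplified from) the OpenIGTLink layout; the exact field widths and the CRC field of the real protocol are omitted, so consult the specification for the authoritative format:

```python
# Sketch of packing/parsing a fixed-size, OpenIGTLink-style message header.
# The field layout below is modeled on, but simplified from, the real
# OpenIGTLink header (the CRC field is omitted); see the protocol
# specification for the exact format.
import struct

# big-endian: uint16 version, 12-byte type name, 20-byte device name,
# uint64 timestamp, uint64 body size
HEADER_FORMAT = "!H12s20sQQ"
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)   # 50 bytes

def make_header(version, msg_type, device, timestamp, body_size):
    return struct.pack(HEADER_FORMAT, version,
                       msg_type.ljust(12, b"\x00"),   # pad names with NULs
                       device.ljust(20, b"\x00"),
                       timestamp, body_size)

def parse_header(raw):
    version, msg_type, device, timestamp, body_size = \
        struct.unpack(HEADER_FORMAT, raw[:HEADER_SIZE])
    return (version, msg_type.rstrip(b"\x00").decode(),
            device.rstrip(b"\x00").decode(), timestamp, body_size)

hdr = make_header(2, b"TRANSFORM", b"Tracker", 123456789, 48)
print(parse_header(hdr))  # (2, 'TRANSFORM', 'Tracker', 123456789, 48)
```

Because the body size is carried in the header, a receiver can read the fixed-size header first and then read (or skip) exactly that many body bytes, even for message types it does not understand.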

35.3.2 Application frameworks

35.3.2.1 Requirements

All CAI systems require user interfaces that allow operators to create plans, carry out a procedure, and evaluate results after completion. The majority of user interfaces are


implemented as software applications that receive inputs from and give feedback to users, most commonly via standard computing peripherals such as mouse, keyboard, display, and touchscreen, and increasingly via mobile tablets, virtual/augmented reality headsets, and wireless controllers. This section gives guidance for implementing such software applications.

Successful software projects typically do not deliver “perfect” results, just software that is good enough for the specific purpose. Since development time and costs are always limited, it is necessary to make trade-offs between important aspects of the software, such as robustness, flexibility, extensibility, ease of use, performance, or maintainability. These choices then dictate all major software architectural, design, and development process decisions.

Robustness (program behavior is predictable; the software does not crash or hang) is important for all software that is used during medical interventions. Study protocols are set up in a way that a software crash would not impact patient safety. Still, frequent software errors are typically not tolerated because of the time delays and/or extra effort needed from operating room staff to deal with the issue.

Flexibility and extensibility (how easy it is to use existing features in different contexts and to add new features) are key requirements for systems used in research projects, where it is expected that the project may go in completely new directions as new questions and interesting results emerge. This is in stark contrast with product development projects, where the general direction of the project is expected to be known in advance, and significant product modifications are avoided to limit costs, the need to retrain users, and the risk of breaking existing functionality.

Maintainability (the effort required to keep existing features working while dealing with hardware and software obsolescence, fixing issues, and adding new features) is important for long-term projects.
These include product and research platform development projects.

Ease of use (the ability to use the software without specialized training, requiring very few interactions to perform common tasks) and optimal performance (responsiveness, fast computations) are essential for commercial products but typically not critical for research software.

Platform independence (the ability to easily switch between different hardware devices, operating systems, or software versions) is very desirable for research projects. On the other hand, platform independence is often not necessary for product development, since supporting multiple platforms, or switching between platforms, would be so expensive that it is usually not even considered.

License compatibility (making sure the resulting software conforms to all licensing requirements specified by its components) is a strong requirement for commercial products. A license that puts any limitation on how derivative work (the software that application developers implement) may be used, shared, or distributed – such as the GPL –


is typically banned from commercial software. Research software may utilize GPL software without requiring public disclosure of all source code, as long as the software is not distributed publicly. Still, relying on GPL software can be too restrictive even for research software, because it is not known in advance what the software will be used for later. It is less risky and more future-proof to build on alternative software libraries that come with nonrestrictive licenses, such as BSD, MIT, or Apache.

From the above paragraphs it is clear that there are significant differences between the requirements for research software and for commercial product development. In this chapter, we focus on requirements for research software that is suitable to be used during computer assisted interventions.

35.3.2.2 Overview of existing application frameworks

In the past 20 years, hundreds of CAI and medical image computing applications have been developed, many of them with the ambition of becoming generally usable frameworks [3]. Most of these software packages were abandoned when the main developers left the project or project funding stopped; others just remained small, focused efforts, kept alive by a few part-time enthusiasts. As a result, it is not easy to find application frameworks that fulfill the most important requirements for research application frameworks.

A number of application frameworks that saw significant investment earlier have been abandoned, or their publicly visible development activity has diminished to a minimal level in recent years. Examples include IGSTK (https://www.kitware.com/igstk), deVide (https://github.com/cpbotha/devide/), Gimias (http://www.gimias.org/), medInria (https://med.inria.fr/), and CamiTK (http://camitk.imag.fr/). Since both user and developer numbers are very low (a few commits per month, a few user questions per month), their survival in a crowded field is questionable, and so these application frameworks are probably not good candidates as the basis of new projects.

Relying on closed-source software severely limits the reproducibility of research results, and software platforms that require buying expensive licenses reduce the availability and flexibility of all applications developed on top of them. Therefore, commercial software frameworks such as MeVisLab, Amira-Avizo, OsiriX, Simpleware, Matlab/Simulink, or LabVIEW are not ideal as research software platforms. Open-source applications with restrictive licenses, such as ITK-SNAP (http://www.itksnap.org) or InVesalius (https://github.com/invesalius/invesalius3), are not well suited as platforms either, as they impose severe limitations on how derived software projects may be used or distributed.
There are excellent application frameworks that were originally not intended for medical imaging, but due to their flexibility they may be considered for this purpose. Such frameworks include simulation/technical computing applications, such as ParaView (https://www.paraview.org/), or gaming engines, such as Unity (https://unity3d.com/) or Unreal (https://www.unrealengine.com). The main problem with building on technologies created for a different purpose is that all features that are commonly needed in


CAI software need to be developed from the ground up, which takes many years of development effort; moreover, the roadmap and priorities of the platform may diverge from the needs of the application.

Frameworks that only work on a single operating system are not well suited for scientific research, where software may need to be used in a heterogeneous environment. A notable example is Horos (https://horosproject.org/), which runs only on macOS.

There is growing interest in using the web browser as an operating system for medical image computing applications. Efforts with promising results include the OHIF (http://ohif.org/), Cornerstone (https://cornerstonejs.org/), and Kitware Glance (https://kitware.github.io/paraview-glance/app/) viewers. However, running interactive, high-performance interventional software applications in a web browser is typically suboptimal, because the strong isolation from the rest of the system (hardware components and other software components) and the extra abstraction layer on top of the host operating system may decrease software performance.

There are also applications targeting very specific clinical use cases. These may be good choices as the basis of software applications that are guaranteed to be used only in a small number of predefined scenarios. Examples include the MITK-diffusion application (http://www.mitk.org/wiki/DiffusionImaging) for diffusion imaging, NifTK (http://www.niftk.org/) for general surgical navigation, or CustusX (https://www.custusx.org/) and IBIS (http://ibisneuronav.org/) for ultrasound-guided neuronavigation. The advantage of applications with reduced scope is that they may be simpler to learn and easier to use than more generic software. However, in research work it is often not known in advance what directions the project will take, and there is a high chance that some desirable features are not readily available in the software.
We are aware of only two frameworks with a sufficiently large user and developer community, feature set, and stability to make them solid bases for CAI applications: 3D Slicer (https://www.slicer.org) and MITK (http://mitk.org). Both frameworks have been continuously developed for about 20 years. Both are designed to be very generic: capable of loading, visualizing, and analyzing almost any kind of medical data set. They are open-source and come with BSD-type nonrestrictive licenses, which allow usage and customization for any purpose without limitations. They are built on the same foundation libraries: VTK for visualization, ITK for image processing, Qt and CTK for GUI and operating system abstraction, and DCMTK for DICOM. They are implemented in C++ and their features are also accessible from Python. Both application frameworks are used as the basis of commercial products, some of them with FDA approval or CE marking.

The two frameworks chose different approaches for feature integration. 3D Slicer makes all features available to end users in a single application. Users download the core application and can then choose from over 100 freely available extensions that add features


in specific domains, such as diffusion imaging, image-guided therapy, radiotherapy, chest imaging, or cardiac interventions. MITK is accessible to users via applications built from MITK core components by various groups. The German Cancer Research Center (DKFZ) provides a generic Workbench and a diffusion imaging application. The NifTK project at University College London (UCL) and King’s College London (KCL) provides a collection of applications for registration, segmentation, image analysis, and image-guided therapy [24].

Over time, adoption of 3D Slicer in the research and development community has grown significantly larger than that of MITK (as estimated from the number of citations, traffic on discussion forums, etc.), but both frameworks are constantly improved and their developers collaborate at many levels. Most notably, they created the CTK toolkit (http://www.commontk.org) to facilitate sharing of source code, and there are common efforts in improving DICOM support, web frameworks, and image-guided interventions.

35.4. Development process

Professional software development processes are the subject of many books and typically involve phases of requirements generation, specification, design, implementation, testing, and maintenance. A well-documented software development process is especially important in regulated industries, including the medical device industry. The focus of this chapter, however, is on the development of research software. Nevertheless, there is value in good development practices to enable robust and maintainable research systems, and possibly to facilitate future commercialization. The previous section presented some suggestions for good system design; in this section, we present some “best practices” for the implementation, testing, and maintenance of software.

35.4.1 Software configuration management

Software configuration management is one of the primary concerns during system implementation and maintenance. Fortunately, there are many available tools, beginning with systems such as Revision Control System (RCS) and Concurrent Versions System (CVS), which have been followed by newer tools such as Subversion (SVN), Mercurial (Hg), and Git. The basic approach is to create a software repository, usually on a network drive or online, and then to keep a working copy of the software on your local hard drive. Software changes are made in the working copy and then checked in to the repository. Conversely, software can be checked out from the repository to the working copy. This basic approach supports multiple developers, assuming that they all have access to the repository, but it is useful even for a single developer because the repository maintains a history of all software that was checked in. For example, if your latest change introduced an error, it is easy to revert to the previous version or to compare the current version of the software to the previously working version. Generally, it is also possible to

871

872

Handbook of Medical Image Computing and Computer Assisted Intervention

Figure 35.1 Typical Git workflow for open source project on GitHub. User forks original repository and commits changes to forked repository, then issues pull request to original repository.

label, or tag, a particular version of the software, for example, to indicate a particular release. Another common concept is branches, which enable parallel development of different versions or features of the software. For example, two programmers can each work on a separate branch, implement different features, and then later merge their branches back into the main branch (sometimes called trunk or master). One of the most popular software configuration management packages is Git, initially released in 2005. One reason for its popularity is the widespread use of GitHub (www.github.com), which is a code hosting platform that is based on Git. GitHub provides users with free public repositories and many popular open-source packages are hosted there. GitHub offers unlimited free hosting of private repositories for research groups and, for a fee, GitHub also provides private repositories for corporate customers. Many researchers create and use free GitHub repositories, even in cases where there is only a single developer. For collaborative development in large software projects, it is advantageous to have a well-defined workflow. One popular workflow, that is based on Git and GitHub, is illustrated in Fig. 35.1. This figure shows the scenario where an organization, org, maintains a public repository, sw_repo, on GitHub. Any researcher can clone the repository (Step 1), thereby creating a repository (and working copy) on their computer. For researchers who just wish to use the software, and do not plan to contribute back, this is sufficient. However, researchers who wish to contribute to sw_repo must do so indirectly, since write access to the primary repository is generally limited to a few trusted developers. Other researchers can instead create a forked copy of the repository on their GitHub account and push their changes to that repository, before issuing a pull request to the maintainers of the original repository. This mechanism gives the maintainers a


convenient way to review the proposed changes and possibly request modifications, before merging the changes to the original repository. Some steps shown in Fig. 35.1 can be done in a different order (e.g., the repository can be forked before being cloned) and other variations may be necessary. Note that in this workflow, the local repository has two remotes: the original repository and the forked repository. The reason for keeping a remote to the original repository is to enable merges from the original repository to incorporate changes that were done in parallel by other developers.
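The tag, branch, and merge operations described above can be tried locally in a throwaway repository; the following sketch (repository path, file names, and branch names are all illustrative) tags a release, develops a fix on a branch, and merges it back:

```shell
# Create a throwaway repository (path is illustrative).
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "dev@example.com"
git config user.name "Dev"

echo "version 1" > notes.txt
git add notes.txt
git commit -q -m "Initial version"
git tag release-1.0                # label this exact version for later

git checkout -q -b fix-overflow    # parallel development on a branch
echo "version 2" > notes.txt
git commit -q -am "Fix overflow bug"

git checkout -q -                  # return to the main branch
git merge -q fix-overflow          # merge the feature back
git log --oneline                  # history now shows both commits
```

In the fork-based workflow of Fig. 35.1, the branch would instead be pushed to the forked repository (a second remote) before opening a pull request.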

35.4.2 Build systems

In addition to source code control, it is necessary to consider the build system. For projects that will only ever be built on a specific platform, it is fine to use a native build system, such as Microsoft Visual Studio (on Windows) or Xcode (on OS X), especially since these typically provide an integrated development environment (IDE) that includes source code editors, debuggers, documentation, and other tools. In addition, there are cross-platform IDEs, such as Eclipse and Qt Creator. An alternative approach is to use a build tool such as CMake (www.cmake.org), which takes a text description of the build process (in a file called CMakeLists.txt) and generates the appropriate configuration for the native build tool. For example, CMake can generate Solution files for Microsoft Visual Studio or makefiles for many different compilers (e.g., gcc on Linux). CMake also provides a portable way to configure the build, which includes setting of different options and specifying paths to dependencies. Because the CMakeLists.txt files are plain text, it is also easy to review them and track changes within the source code control system.
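A minimal CMakeLists.txt (project name, file paths, and the option below are illustrative) shows the flavor of such a text description:

```cmake
cmake_minimum_required(VERSION 3.10)
project(MyCAIApp CXX)

# A user-configurable option, settable from the CMake GUI or command line
option(BUILD_TESTING "Build the test programs" ON)

# Locate a dependency; users can point CMake at its path if not found
find_package(Threads REQUIRED)

add_executable(my_cai_app src/main.cpp)
target_link_libraries(my_cai_app PRIVATE Threads::Threads)

if(BUILD_TESTING)
  enable_testing()
  add_executable(my_cai_tests test/test_main.cpp)
  add_test(NAME my_cai_tests COMMAND my_cai_tests)
endif()
```

From this single description, running CMake with different generators (e.g., "Visual Studio" or "Unix Makefiles") produces the corresponding native build files.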

35.4.3 Documentation

Documentation is critical for the medical device industry and comes in many forms, including requirements, design, verification and validation, and testing. These documents are tangible products of the development process that is necessary to obtain regulatory approval for medical devices, as well as to enable the company to continue to develop and maintain the software as the engineering team changes. This type of development process is generally too cumbersome for research; in fact, most companies distinguish between research and product development and apply their development process only to the latter. Nevertheless, for research software that is expected to be more widely used, some level of documentation is prudent and could also help if that software is ever considered for commercial release. Two important forms of documentation are an architectural overview of the design and detailed design descriptions. The former is best represented by block diagrams. While any block diagram is better than nothing, there is some advantage to using a standard format, specifically the Unified Modeling Language (UML). UML has grown in complexity over the years to support many different kinds of diagrams, but there are a


few more widely used types of diagrams. Kruchten [22] first defined the "4+1" view of architecture, where the "4" views are logical, physical, process, and development, and the "+1" view is use cases (or scenarios). However, this work predated both UML and the rise of component-based design. UML 2 now defines a component diagram that can be used to document the architecture of a component-based system. Detailed design documentation is best embedded in the source code because it makes it more likely that developers will update the documentation when they change the software. One popular tool is doxygen (http://www.doxygen.nl/), which embeds documentation in comments, using special syntax. The doxygen tool is able to extract this documentation and format it, including the automatic creation of class inheritance diagrams. User documentation is often written in text files using Markdown or reStructuredText markup. These text files are human-readable and so can be created using any text editor. There are a number of authoring tools available for Markdown, such as GitHub's web editor or Visual Studio Code (using Markdown plugins). Documentation with rich formatting can be published on the web using GitHub, ReadTheDocs, or other online services, or can be uploaded to any web server after converting to HTML files or PDF using Sphinx (http://www.sphinx-doc.org/) or pandoc (https://pandoc.org/).
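For example, a doxygen comment block on a function could look like the following (the \brief, \param, and \return tags are standard doxygen syntax; the function itself is a hypothetical illustration):

```cpp
#include <vector>

/*! \brief Compute the mean of a set of encoder readings.
    \param samples Raw encoder counts, one per control cycle.
    \return The arithmetic mean, or 0.0 if no samples were provided.

    Doxygen extracts this block into the generated HTML/PDF
    documentation for the enclosing file or class.
*/
double MeanEncoderCount(const std::vector<double>& samples)
{
    if (samples.empty()) {
        return 0.0;
    }
    double sum = 0.0;
    for (double s : samples) {
        sum += s;
    }
    return sum / static_cast<double>(samples.size());
}
```

Because the documentation lives next to the code it describes, a change to the function signature is likely to prompt an update of the adjacent comment.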

35.4.4 Testing

In the medical device industry, the terms verification and validation are often used, but both imply some form of testing. Generally, software verification ensures that specific software components or subsystems meet their design requirements, whereas the goal of validation is to demonstrate that the overall system satisfies the customer requirements under realistic conditions. Developers of many open source software packages have implemented automated testing; this can generally be considered as software verification since the tests cover operation of specific components in a virtual environment. In other words, automated testing is most practical when there is no interaction with the user or physical environment. Some ideal applications of automated testing include mathematical and image processing functions, where the computed results can be compared to the known, or assumed correct, results. Automated software testing typically includes use of a testing framework (i.e., a standard way to write tests), a tool for executing the tests on as many different platforms as possible, and then a tool to report the results to a dashboard (e.g., a web page). Examples of testing frameworks include the xUnit family, where x refers to the language, e.g., JUnit (https://junit.org) for Java and CppUnit (https://sourceforge.net/projects/cppunit/) for C++, and similar frameworks such as Google Test (https://github.com/google/googletest). The CMake package described earlier includes CTest to compile the software and execute the tests, and CDash to provide the dashboard for visualizing test results. Alternatively, Travis CI (https://travis-ci.org/)


and AppVeyor (https://www.appveyor.com/) are conveniently integrated with GitHub to support building, executing, and visualizing the results of automated testing for open source projects hosted on GitHub. Because automated testing is usually not feasible to verify large integrated systems or to validate the overall application, these tests are generally conducted manually. A typical approach is to first validate the application with a phantom, which is an inanimate object that represents the patient anatomy. Subsequently, the system may be validated on animal tissue (e.g., grocery store meat), in vivo (with animals), or with cadavers. In these scenarios, data collection can be critical because there often is not enough time to evaluate the results while the animal is under anesthesia or while the cadaveric procedure is being performed. Thus, it is advantageous to consider the use of frameworks, such as ROS, that provide integrated tools for data collection and replay.

35.5. Example integrated systems

35.5.1 Da Vinci Research Kit (dVRK)

The da Vinci Research Kit (dVRK) is an open source research platform created by integrating open source electronics and software with the mechanical components of the da Vinci Surgical Robot [9], as shown in Fig. 35.2. The mechanical components consist of the Master Tool Manipulator (MTM), Patient Side Manipulator (PSM), Endoscopic Camera Manipulator (ECM), Master Console stereo display, and footpedal tray. Because this chapter is about system integration, we do not delve into the mechanical hardware, electronics, or FPGA firmware of the dVRK, but instead focus on the open source software and its external interfaces (e.g., ROS).

35.5.1.1 dVRK system architecture

As shown in Fig. 35.3 (middle section), the dVRK architecture [11] consists of:
1. One hardware Input/Output (I/O) component, mtsRobotIO1394 (∼3 kHz), handling I/O communication over the FireWire bus,
2. Multiple low-level control components, mtsPID (3 kHz, one for each manipulator), providing joint-level PID control,
3. Multiple mid-level control components (1 kHz, different components for each type of manipulator, such as da Vinci MTM and PSM), managing forward and inverse kinematics computation, trajectory generation, and manipulator-level state transitions,
4. Teleoperation components, mtsTeleoperation (1 kHz), connecting MTMs and PSMs,
5. A console component (event-triggered) emulating the master console environment of a da Vinci system.
Fig. 35.3 also shows optional Qt widget components and ROS interface components. The ROS interfaces (bridge components) will be presented later.


Figure 35.2 Overview of the da Vinci Research Kit (dVRK): Mechanical hardware provided by da Vinci Surgical System, electronics by open-source IEEE-1394 FPGA board coupled with Quad Linear Amplifier (QLA), and software by open-source cisst package with ROS interfaces.

Figure 35.3 Robot teleoperation control architecture with two MTMs and two PSMs, arranged by functional layers and showing thread boundaries [11].

The dVRK component-based software relies on the cisst package [16,13,17], which provides a component-based framework to define, deploy, and manage components (cisstMultiTask). The cisst package contains other pertinent libraries, such as a linear algebra and spatial transformation library (cisstVector) and a robot kinematics, dynamics,


and control library (cisstRobot). The cisst libraries also form the basis of the Surgical Assistant Workstation (SAW) package [15]. SAW is a collection of reusable components, based on cisst, with standardized interfaces that enable rapid prototyping of CAI systems. SAW provides diverse components, ranging from hardware interface components (e.g., to robots, tracking systems, haptic devices, and force sensors) to software components (e.g., controllers, simulators, ROS bridges). The dVRK uses a combination of general-purpose SAW components (e.g., mtsPID, mtsRobotIO1394) and dVRK-specific SAW components (e.g., for the MTM and PSM), as shown in Fig. 35.3. The cisstMultiTask library was primarily designed to support real-time computing with multiple components in a single process. Each cisst component contains provided and required interfaces and a state table that maintains a time-indexed history of the component data (typically, the members of the component class). The state table is implemented as a ring buffer, with a single writer (the component) and potentially multiple readers (other components). Reading of the state table is implemented lock-free to minimize overhead. Correctness of the lock-free implementation has been proven using formal methods [14]. Most cisst components contain their own thread and execute either continuously, periodically, or in response to an external event. However, it is also possible to chain execution of multiple components into a single thread, using special component interfaces called ExecIn and ExecOut. If the ExecIn interface of the child component is connected to the ExecOut interface of the parent component, the parent executes the child component by issuing a run event; otherwise, separate threads are created for each component.
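A drastically simplified, single-threaded sketch of such a time-indexed ring buffer may help fix the idea (illustrative only; the real cisst state table is lock-free, holds arbitrary component data, and this is not its actual API):

```cpp
#include <cstddef>
#include <vector>

// One timestamped sample of (part of) a component's state.
struct StateSample {
    double timestamp;
    double value;
};

// Simplified state table: the single writer appends samples into a
// fixed-size ring buffer; readers fetch the most recent sample.
class StateTable {
public:
    explicit StateTable(std::size_t capacity)
        : buffer_(capacity), head_(0), count_(0) {}

    // Writer side: overwrite the oldest slot when the buffer is full.
    void Advance(double timestamp, double value) {
        buffer_[head_] = {timestamp, value};
        head_ = (head_ + 1) % buffer_.size();
        if (count_ < buffer_.size()) {
            ++count_;
        }
    }

    // Reader side: return the newest sample (table must be non-empty).
    StateSample Latest() const {
        std::size_t newest = (head_ + buffer_.size() - 1) % buffer_.size();
        return buffer_[newest];
    }

    std::size_t Size() const { return count_; }

private:
    std::vector<StateSample> buffer_;
    std::size_t head_;
    std::size_t count_;
};
```

The single-writer structure is what makes a lock-free implementation possible: only the component advances head_, while readers only inspect already-written slots.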

35.5.1.2 dVRK I/O layer

The dVRK I/O layer is represented by the mtsRobotIO1394 component and the underlying IEEE 1394 (FireWire) fieldbus. This section presents a brief overview of the fieldbus as background information for the design of the I/O layer software. One of the most desirable properties of a fieldbus is to provide deterministic performance with low latency. A major source of latency is the software overhead on the control PC. This can be minimized by limiting the number of communication transactions performed by the PC. This motivates use of a fieldbus that supports broadcast, multicast, and peer-to-peer transfers. A second desirable property is for the fieldbus to support "daisy chain" connection; that is, where one cable connects from the PC to the first interface board, another cable connects from the first board to the second board, and so on. This enables the bus topology to scale with the number of manipulators and facilitates reconfiguration to support different setups. From the software perspective, this leads to a situation where a single resource (i.e., FireWire port) is shared between multiple robot arms and where, for efficiency, data from multiple arms should be combined into a single packet.


Figure 35.4 UML class diagram of interface software (subset of class members shown): the design can scale and support different fieldbus implementations, such as FireWire and Ethernet.

Fig. 35.4 presents a UML class diagram of the interface software that supports the above design. Two base classes are defined: (1) BasePort, which represents the physical fieldbus port resource and, depending on the implementation, can be a FireWire or Ethernet port, and (2) the abstract BoardIO class, which represents the controller board. Currently, there is one derived class, AmpIO, that encapsulates the functionality of the FPGA/QLA board set. For a typical system, one port will connect to multiple FPGA nodes; thus the BasePort object maintains a list of BoardIO objects. The BasePort class contains two methods, ReadAllBoards and WriteAllBoards, which read all feedback data into local buffers and transmit all output data from local buffers, respectively. This allows the class to implement more efficient communication mechanisms, such as a broadcast write and consolidated read [10]. The AmpIO API provides a set of functions to extract feedback data, such as encoder positions, from the read buffer, and to write data, such as desired motor currents, into the write buffer.
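A skeletal C++ rendering of this structure can make the division of responsibilities concrete. This is a simplification, not the actual library API: the board class is made concrete, and the bus transfer is simulated by a loopback buffer rather than real FireWire/Ethernet I/O:

```cpp
#include <cstddef>
#include <vector>

typedef std::vector<unsigned char> Buffer;

// Controller board on the fieldbus (cf. BoardIO/AmpIO in Fig. 35.4):
// holds a read buffer (feedback) and a write buffer (commands).
class BoardIO {
public:
    void SetReadBuffer(const Buffer& data) { readBuffer_ = data; }
    const Buffer& GetReadBuffer() const { return readBuffer_; }
    void SetWriteBuffer(const Buffer& data) { writeBuffer_ = data; }
    const Buffer& GetWriteBuffer() const { return writeBuffer_; }
private:
    Buffer readBuffer_, writeBuffer_;
};

// Physical port resource shared by all boards (cf. BasePort).
class BasePort {
public:
    void AddBoard(BoardIO* board) { boards_.push_back(board); }

    // Gather feedback for every board (real code: consolidated read).
    void ReadAllBoards() {
        for (std::size_t i = 0; i < boards_.size(); ++i)
            boards_[i]->SetReadBuffer(busLoopback_);
    }

    // Combine all boards' output data into one packet and transmit it
    // (real code: a single broadcast write on the fieldbus).
    Buffer WriteAllBoards() {
        Buffer packet;
        for (std::size_t i = 0; i < boards_.size(); ++i) {
            const Buffer& b = boards_[i]->GetWriteBuffer();
            packet.insert(packet.end(), b.begin(), b.end());
        }
        busLoopback_ = packet;  // stand-in for actual bus transmission
        return packet;
    }

private:
    std::vector<BoardIO*> boards_;
    Buffer busLoopback_;
};
```

The key design point survives the simplification: the port, not the individual boards, owns the bus transactions, which is what allows all arms' data to travel in a single packet.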

35.5.1.3 dVRK real-time control layer

The real-time control layer includes the low-level and mid-level control components. The low-level control implements the joint controllers for the da Vinci manipulators and is typically configured to run at 3 kHz. The mid-level control incorporates the robot kinematics and contains a state machine that manages the robot states (e.g., homing, idle, moving in joint or Cartesian space); it typically runs at 1 kHz.


Figure 35.5 cisst/ROS bridge example: a cisst component interfaces with a ROS node using a bridge component.

The dVRK uses the ExecIn/ExecOut feature described above to logically separate the I/O component (mtsRobotIO1394) from the low-level control components (mtsPID), while still achieving synchronous communication and minimal latency for maximum control performance. In this case, the RunEvent is generated by the mtsRobotIO1394 component after it receives feedback from the controller boards and before it writes the control output. Thus, the mtsPID components receive the freshest feedback data and compute the control output, which is immediately sent to the hardware when the mtsPID components return the execution thread to the mtsRobotIO1394 component. On average, the latency for data transfer between the two components is 21.3 µs, with a maximum value of 115.2 µs.

35.5.1.4 dVRK ROS interface

Robot Operating System (ROS) is used to provide a high-level application interface due to its wide acceptance in the research community, its large set of utilities and tools for controlling, launching, and visualizing robots, and the benefits of a standardized middleware that enables integration with a wide variety of systems and well-documented packages, such as RViz and MoveIt!. ROS also provides a convenient build system. This section presents the bridge-based design that enables integration of the cisst real-time control framework within a ROS environment. The implementation includes a set of conversion functions, a cisst publisher and subscriber, and a bridge component. The bridge component is both a periodic component (inherits from mtsTaskPeriodic) and a ROS node. As an mtsTaskPeriodic component, it is executed periodically at a user-specified frequency and connected, via cisst interfaces, to the other cisst components. The bridge component also functions as a ROS node with a node handle that can publish and subscribe to ROS messages. To illustrate this design, consider the example in Fig. 35.5, which has one cisst component connected to a ROS node via a cisst-to-ROS bridge. The cisst component has a


Figure 35.6 System architecture for ultrasound-based, image-guided telerobotic system. The US-based Imaging module can provide ultrasound images to the Image Guidance module (3D Slicer plug-in) for visualization, along with live stereo endoscope video (via SVL-IGTL) and models of the drill, laser, and US transducer that are positioned based on kinematic position feedback from the da Vinci PSMs (via dVRK-IGTL). Visualizations from 3D Slicer are simultaneously presented on a separate screen and on the da Vinci stereo viewer using an additional information window.

periodic Run method that assigns mVal2 to mVal1. It also contains a provided interface with two commands: ReadVal1 and WriteVal2. The bridge component connects to the ReadVal1 command and publishes to the ROS topic /Val1. Similarly, the bridge subscribes to the ROS topic /Val2 and connects to the WriteVal2 command. On the ROS side, the node simply subscribes to /Val1, increments the received value, and publishes to /Val2. At runtime, the bridge node fetches data through the cisst interface, converts it to a ROS message, and then publishes the message to ROS. In the reverse direction, the ros::spinOnce function is called at the end of the Run method, which calls the subscriber callback function, converts data, and triggers the corresponding cisst write command. The bridge always publishes at its specified update rate. If the cisst component is faster than the bridge component, the bridge only fetches the latest data at runtime, thus throttling the data flow. If the bridge component updates faster, it publishes the latest data at the bridge’s rate. For certain applications that require publishing and subscribing at the exact controller update rate, programmers can either create a separate bridge for each cisst controller component or directly initialize a publisher node within the cisst component and call publish and ros::spinOnce manually. The initial dVRK ROS interface used a somewhat arbitrary collection of topic names and message types, but now conforms to the Collaborative Robotics Toolkit (CRTK) mentioned above and is actively updated as CRTK continues to evolve.
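Stripped of both the cisst and ROS APIs, the bridge's cycle reduces to: fetch through a read interface, convert, publish; and, in reverse, let subscriber callbacks trigger write commands. The framework-free sketch below illustrates this pattern (the class and the std::function hooks are illustrative stand-ins, not the actual cisst or roscpp types):

```cpp
#include <functional>

// Framework-free sketch of a cisst-to-ROS bridge component. The hooks
// stand in for a cisst read command, a ROS publisher, and a cisst
// write command, respectively (illustrative names, not real APIs).
class BridgeComponent {
public:
    std::function<double()> readCommand;        // cf. cisst "ReadVal1"
    std::function<void(double)> publishTopic;   // cf. ROS publish /Val1
    std::function<void(double)> writeCommand;   // cf. cisst "WriteVal2"

    // Called periodically at the bridge's configured update rate.
    void Run() {
        // cisst -> ROS direction: fetch latest data, convert, publish.
        publishTopic(readCommand());
        // The ROS -> cisst direction is driven by spinOnce() invoking
        // SubscriberCallback() for each pending /Val2 message.
    }

    // ROS -> cisst direction: a received message triggers a write.
    void SubscriberCallback(double rosMessageValue) {
        writeCommand(rosMessageValue);
    }
};
```

Because Run executes at the bridge's own rate, data from a faster cisst component is naturally throttled, exactly as described above.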


35.5.1.5 dVRK with image guidance

Although most members of the dVRK community use the ROS interface, it is possible to use bridges to other middleware, such as OpenIGTLink. This is especially attractive when it is necessary to interface the dVRK to 3D Slicer for medical image visualization, as was the case when photoacoustic image guidance was integrated with the dVRK [12]. Briefly, a pulsed laser was attached to one da Vinci instrument and an ultrasound probe to another. The energy from the pulsed laser was preferentially absorbed by blood, causing a large photoacoustic effect (ultrasound wave) to emanate from the blood vessel, even when behind other soft tissue or bone. The user interface included visualization of the endoscope video, photoacoustic image, and virtual models of the instrument, ultrasound probe, and (diffused) laser beam. The system architecture is shown in Fig. 35.6, where the "US-based" modules are capable of processing conventional (B Mode) ultrasound or photoacoustic images. The dVRK-IGTL and SVL-IGTL components are the OpenIGTLink bridges, where the first handles transforms (i.e., poses of the robot arms) and the second handles the endoscope images, which are acquired using the cisstStereoVision library (hence the svl prefix). A sample visualization (with a phantom) is shown in Fig. 35.7.

35.5.1.6 dVRK with augmented reality HMD

The dVRK was integrated with an optical see-through head-mounted display (HMD) to provide an application for augmented reality guidance, called ARssist, for the bedside assistant (also called First Assistant) in da Vinci robotic surgery [25,27]. Fig. 35.8 shows the normal (unassisted) view, followed by two different augmented reality views: (1) stereo endoscope video displayed in a virtual stereo monitor, and (2) stereo endoscope video displayed at the base of the virtual endoscope frustum. This system effectively gives the assistant "X-ray vision" to see robotic instruments inside the patient as well as the spatial location of the endoscope field of view with respect to the patient. The implementation of ARssist required integration of the dVRK with the Unity3D game engine, which drove the Microsoft HoloLens HMD. Fiducial marker tracking, using the HoloLens world camera, was implemented using a port of ARToolKit [26] to HoloLens (https://github.com/qian256/HoloLensARToolKit). The ROS interface to the dVRK was used to obtain the robot kinematic data and the stereo endoscope video. One ROS node (udp_relay) was created to receive the joint state ROS topic, serialize it to JSON, and then send it to the HoloLens via the UDP protocol. A second ROS node (arssist_streamer) was introduced to fetch the two channels of endoscope video and stream them to the HoloLens via the Motion-JPEG protocol. There are two points to note: (1) it would have been a significant effort to create a ROS interface on the HoloLens and thus a UDP socket was used (an alternative would have been OpenIGTLink, as illustrated in the next example in Section 35.5.2.3), and (2) it was more convenient to interface to the dVRK via its standard ROS interface than to modify the dVRK code to add


Figure 35.7 Visualization of photoacoustic-guidance for dVRK. Top shows left and right images displayed (in stereo) on dVRK Master Console; these consist of endoscope video with augmented overlay (Additional Information Window). Bottom shows main 3D Slicer display which is displayed on separate monitor.

Figure 35.8 (Left) Normal view of surgical field (no assistance); (Middle) Augmented display showing endoscope field of view (frustum), instrument inside body, and endoscope video on virtual monitor; (Right) Augmented display, but with endoscope video rendered at base of frustum [27].

a custom UDP interface. More recently, however, an open source package that uses a UDP interface, with JSON serialization, has been released for mixed reality visualization of the dVRK [28].
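The serialization step of such a relay can be quite simple. The sketch below builds a compact JSON payload of the kind a udp_relay-style node might hand to a UDP send call; the function name and message format are hypothetical, not the actual ARssist protocol:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Serialize joint positions into a compact JSON message (hypothetical
// format). The resulting string would then be passed to a UDP socket
// send call, e.g. POSIX sendto().
std::string JointStateToJson(const std::vector<double>& positions)
{
    std::string json = "{\"joint_positions\":[";
    char item[32];
    for (std::size_t i = 0; i < positions.size(); ++i) {
        // Fixed precision keeps the packet size small and predictable.
        std::snprintf(item, sizeof(item), "%s%.4f",
                      (i > 0 ? "," : ""), positions[i]);
        json += item;
    }
    json += "]}";
    return json;
}
```

On the receiving side (Unity/HoloLens), a standard JSON parser recovers the joint values, which avoids having to port any ROS message-handling code to the headset.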

35.5.2 SlicerIGT based interventional and training systems

SlicerIGT is a toolset for building medical interventional and training systems, typically using real-time imaging and position tracking, mainly for translational research


Figure 35.9 Overview of system components that can be combined to build complete medical interventional or training systems.

use. The toolset includes the 3D Slicer application platform for data visualization and interaction, the Plus toolkit (www.plustoolkit.org) for data acquisition and calibration, OpenIGTLink for real-time data transfer, SlicerIGT, PerkTutor, and other 3D Slicer extensions for interventional calibration, registration, surgical navigation, and skill assessment and feedback (Fig. 35.9). The SlicerIGT toolset allows building custom systems without any programming (just by configuring existing software components) for quick prototyping. These prototypes can be further developed into software highly optimized for specific clinical workflows, most often with minimal effort: only a simplified user interface layer needs to be implemented.

35.5.2.1 3D Slicer module design

Applications that contain many features developed by a diverse group of developers may become very complicated and unstable over time. Significant effort must be invested into designing an architecture that allows implementing highly cohesive and loosely coupled components. 3D Slicer goes through a major refactoring every 3–5 years. Each time, the design is improved based on lessons learned, realigned with changes in the software industry, and consistently applied to all software components. Currently, 3D Slicer is just starting its 5th generation.


Figure 35.10 Internal structure of a 3D Slicer module. The Module class is responsible for instantiating the other module classes. Data is stored in MRML node classes. Data processing is implemented in Logic classes (which may observe MRML node changes and perform processing automatically). The user interface is implemented in Widget classes, which usually modify data in MRML nodes and call Logic methods for data processing. It is important that the Logic class does not use (or is even aware of) the Widget class, so that the module can be used for bulk data processing without requiring a user interface.

The 3D Slicer framework consists of an application core, which is responsible for setting up the main window and standard user interface elements and for creating a central data container (the MRML scene), and modules that operate on the data. MRML is an acronym for Medical Reality Modeling Language, an extensible software library that can represent data structures (nodes) used in medical image computing, such as volumes (images), models (surface meshes), transforms, point sets, or segmentations. Modules rarely communicate or call each other's methods directly. Instead, they usually observe the scene and react to relevant data changes. This decoupling of modules from each other offers great flexibility, since modules can be combined to implement a complex workflow without any module needing to know how its inputs were generated or how its outputs will be used. The internal design of modules follows a model-view-controller type of separation of data (MRML node), graphical user interface (Widget), and behavior (Logic), as shown in Fig. 35.10. Decoupling of data and behavior is important because it allows adding various processing methods without making data objects overly complex. Separation of the user interface from the rest of the module is important because it allows using a module without any user interface (for batch processing, or when used by another module).
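The node/Logic/Widget separation can be sketched in plain C++ (illustrative classes, not the actual MRML/Slicer API): the node stores data and notifies observers, the logic observes the node and never touches the GUI, and the widget only writes to the node:

```cpp
#include <functional>
#include <vector>

// Data layer (cf. MRML node): stores state, notifies observers on change.
class VolumeNode {
public:
    void SetThreshold(double t) { threshold_ = t; Notify(); }
    double GetThreshold() const { return threshold_; }
    void AddObserver(std::function<void()> cb) { observers_.push_back(cb); }
private:
    void Notify() { for (auto& cb : observers_) cb(); }
    double threshold_ = 0.0;
    std::vector<std::function<void()>> observers_;
};

// Behavior layer (cf. Logic): observes the node; unaware of any Widget.
class ThresholdLogic {
public:
    explicit ThresholdLogic(VolumeNode* node) : node_(node) {
        node_->AddObserver([this]() { Process(); });
    }
    int ProcessCount() const { return processCount_; }
private:
    void Process() { ++processCount_; /* recompute result here */ }
    VolumeNode* node_;
    int processCount_ = 0;
};

// Presentation layer (cf. Widget): modifies the node; holds no logic.
class ThresholdWidget {
public:
    explicit ThresholdWidget(VolumeNode* node) : node_(node) {}
    void OnSliderMoved(double value) { node_->SetThreshold(value); }
private:
    VolumeNode* node_;
};
```

Because the logic reacts to node changes rather than to widget events, the same processing runs whether the threshold is set from a GUI slider or from a batch script, which is exactly the property Fig. 35.10 emphasizes.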

35.5.2.2 Surgical navigation system for breast cancer resection

A complete surgical navigation system for a breast cancer resection procedure (lumpectomy) was implemented based on SlicerIGT [2]. The system components are shown in Fig. 35.11. The positions of the ultrasound probe, cautery tool, and tumor are continuously acquired using an electromagnetic tracker. Ultrasound images and tracking information are recorded by the Plus toolkit and made available through OpenIGTLink to 3D Slicer.


Figure 35.11 Components of surgical navigation system for breast cancer resection. Only a single custom module, LumpNav, had to be developed for this clinical application.

Figure 35.12 Clinicians using electromagnetic navigation for breast lumpectomy procedure. Position of tumor relative to cautery is shown on overhead display. The software is implemented using 3D Slicer, SlicerIGT extensions, and a small custom module for providing a simplified user interface optimized to be used in the operating room.

A custom LumpNav module replaces the standard 3D Slicer graphical user interface with a simplified touch-screen friendly interface that surgeons can use during the procedure on a tablet. The LumpNav module uses basic visualization features of the Slicer core and specialized calibration and visualization features of the SlicerIGT extension to implement the clinical workflow. The system has been successfully used for tens of patient cases in the operating room (Fig. 35.12). The software was iteratively refined


Figure 35.13 Size of software components in terms of lines of source code. While the size, complexity, or effort to develop or maintain software is not directly measurable by counting source code lines, this example illustrates how small a fraction of custom development was needed to create a new software application (LumpNav) that was robust and capable enough to be used on human patients in the operating room. The amount of custom source code was only 0.01% of the total size of all software components used to build the application (operating system and system-level libraries excluded).

based on feedback received from surgeons. Most of the requested improvements were implemented and tested in about a week, since typically the features were already available in the Slicer core or extensions and only had to be enabled and exposed on the user interface. The amount of custom software that had to be developed specifically for this clinical procedure was extremely small. In terms of lines of source code, the LumpNav module was 0.01% of the total source code used for building the software (including all software toolkits and libraries, except system libraries), as shown in Fig. 35.13.

35.5.2.3 Virtual/augmented reality applications

Virtual and augmented reality (VR and AR) have been the focus of research for decades, but affordable and practically usable hardware has become available only recently. Therefore, many groups have started to explore the application of these technologies in CAI systems. In this section, we present examples of how VR- and AR-capable systems can be built based on the 3D Slicer application framework. 3D Slicer has direct support for virtual reality applications via the SlicerVirtualReality extension (https://github.com/KitwareMedical/SlicerVirtualReality) [18]. Users can view the content of any 3D view in virtual reality with a single click of a button, move around in space, and move and interact with objects in the scene. The feature uses the VTK

System integration

Figure 35.14 Virtual reality was used in ultrasound-guided needle insertion training. Needle and ultrasound probe positions and images were recorded and analyzed using the PerkTutor extension. Trainees reviewed their own performance in virtual reality in a close-up 3D view.

toolkit's OpenVR interface, which is compatible with all commonly used high-end virtual reality hardware, such as the HTC Vive, Windows Mixed Reality, and Oculus Rift headsets. All rendering modes (surface rendering, volume rendering) are available just as in regular desktop rendering, and the application remains fully responsive while immersive viewing is enabled. Virtual reality visualization has been evaluated in a number of applications: medical students used it to review their recorded ultrasound-guided needle insertion practice in a close-up 3D view (Fig. 35.14), pediatric cardiologists viewed complex congenital abnormalities in 4D ultrasound images and cardiac implant placement plans in 4D CT data sets, and it showed promising potential for simulating various minimally invasive procedures.

Current augmented reality hardware suffers from issues such as limited field of view, fixed focal distance, tracking instability, headsets that are too large and heavy for prolonged use, and the limited computational capacity of untethered devices. Due to these problems, augmented reality cannot yet reach its full potential in CAI. However, feasibility studies and a few clinical applications are possible even with current technology. Unfortunately, there is no augmented reality software interface that gives low-level access to a wide range of devices and can be used from existing application frameworks. Developers who do not want to be locked into proprietary software development kits can use the Unity (https://unity3d.com/) or Unreal (https://www.unrealengine.com) game engines. As mentioned before, a major limitation of these systems is that existing medical image computing software libraries are not available in these frameworks (especially when deploying applications on standalone headsets, such as the Microsoft HoloLens or Magic Leap). To reduce the burden of redeveloping all features, such as DICOM networking, image visualization, segmentation, and surgical planning, a promising



Handbook of Medical Image Computing and Computer Assisted Intervention

Figure 35.15 Implementing an augmented reality system using a gaming engine and an existing CAI application framework. 3D Slicer is used for image import, visualization, segmentation, and procedure planning. The created models are sent for visualization and patient registration to the augmented reality headset via OpenIGTLink.

approach is to use the gaming engine on the headset as a 3D viewer, which visualizes models (and possibly image slices and volumes) sent from existing CAI software through OpenIGTLink (Fig. 35.15). While various network protocols could be used, OpenIGTLink is a good choice: it can already transfer all commonly needed CAI data types; it is a very simple socket-based protocol, so it can be easily implemented in any software environment; and it is already implemented in many CAI applications.
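To make the "easily implemented" claim concrete, the sketch below packs an OpenIGTLink TRANSFORM message, the type typically used to stream a tracked pose or registration result to the headset. It follows the version 1 wire format (a 58-byte big-endian header followed by twelve 32-bit floats); for brevity the trailing CRC-64 field is left as zero, so a strict receiver that verifies checksums would reject it, and the timestamp handling is simplified relative to the specification.

```python
import struct
import time

# OpenIGTLink v1 header: version, type name (12 bytes), device name (20 bytes),
# timestamp, body size, CRC-64 -- all big-endian, 58 bytes total.
IGTL_HEADER_FMT = ">H12s20sQQQ"

def pack_transform_message(device_name, matrix4x4):
    """Pack a 4x4 homogeneous transform as an OpenIGTLink TRANSFORM message.

    Sketch only: CRC is set to 0 and the timestamp is whole seconds.
    """
    # Body: twelve big-endian float32 values -- the 3x3 rotation in
    # column-major order followed by the translation vector.
    rotation = [matrix4x4[r][c] for c in range(3) for r in range(3)]
    translation = [matrix4x4[r][3] for r in range(3)]
    body = struct.pack(">12f", *(rotation + translation))

    header = struct.pack(IGTL_HEADER_FMT, 1, b"TRANSFORM",
                         device_name.encode(), int(time.time()),
                         len(body), 0)
    return header + body
```

The resulting byte string can be written directly to a TCP socket (e.g., `sock.sendall(pack_transform_message("Headset", m))`), which is essentially all a minimal sender inside a game engine needs to do.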
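The patient registration step mentioned above is often solved by point-based rigid registration between landmarks localized in the image and on the patient. As background, the sketch below is a minimal NumPy implementation of the standard least-squares solution (Kabsch/Umeyama, via SVD); the function name is illustrative and is not taken from any of the cited toolkits.

```python
import numpy as np

def rigid_register(source_points, target_points):
    """Least-squares rigid registration (rotation + translation, no scaling).

    Returns a 4x4 homogeneous matrix mapping source_points onto target_points.
    """
    src = np.asarray(source_points, dtype=float)
    tgt = np.asarray(target_points, dtype=float)

    # Center both point sets, then solve for rotation via SVD of the
    # cross-covariance matrix.
    src_c = src - src.mean(axis=0)
    tgt_c = tgt - tgt.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)

    # Guard against a reflection in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt.mean(axis=0) - R @ src.mean(axis=0)

    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

The resulting matrix is exactly the kind of payload a TRANSFORM message would carry from the planning workstation to the headset.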

35.6. Conclusions

This chapter focused on the design approaches, tools, and development processes that facilitate the integration of various devices in computer assisted intervention (CAI) systems. The choice of programming language and platform generally depends on specific factors, such as development team experience and hardware constraints. A component-based approach, using a standard middleware to provide the messaging capabilities, is recommended, especially for large, complex systems. Two popular middleware choices are the Robot Operating System (ROS) and OpenIGTLink. ROS is widely used in robotics and provides a complete development environment, including a build system and package management in addition to middleware, though (at present) it is primarily focused on the Linux platform. In contrast, OpenIGTLink provides only a messaging system, but has the advantage of being lightweight, available on multiple platforms, and easy to implement on new platforms. It is widely used in CAI systems, especially when it is necessary to interface with common CAI devices such as medical imaging systems, visualization software (e.g., 3D Slicer), tracking systems, and some robots.


The chapter also reviewed application frameworks that support the development of CAI systems. The review did not cover general frameworks, such as Qt, Matlab, and LabView, though many of these are also used. Due to the prevalence of medical images in CAI systems, the two most popular CAI application frameworks, 3D Slicer and MITK, have a central focus on image visualization and user interaction, but are also readily extensible to integrate other CAI devices, such as tracking systems and robots.

Finally, the chapter concluded with a presentation of two different CAI systems, the da Vinci Research Kit (dVRK) and the SlicerIGT environment. These demonstrate many of the system integration concepts presented in this chapter, such as component-based design and the use of application frameworks (3D Slicer) and middleware (ROS and OpenIGTLink). The dVRK example demonstrates how to build a complex robotic system, with several manipulators, from the ground up, while leveraging middleware to facilitate integration with other devices. The SlicerIGT examples demonstrate the value of building on an existing application framework such as 3D Slicer, which enables the creation of CAI applications with a rich set of features at relatively low development effort. Furthermore, the development process described in this chapter is based on best practices in the community (especially the open source community) that were also used for large portions of the presented systems. The reader can observe this by visiting the projects' GitHub pages, which show evidence of the described Git workflow, the use of automated testing (when feasible), and reference manuals generated by Doxygen.

References

[1] A. Lasso, T. Heffter, A. Rankin, C. Pinter, T. Ungi, G. Fichtinger, PLUS: open-source toolkit for ultrasound-guided intervention systems, IEEE Transactions on Biomedical Engineering 61 (10) (Oct. 2014) 2527–2537, https://doi.org/10.1109/TBME.2014.2322864.
[2] T. Ungi, G. Gauvin, A. Lasso, C. Yeo, P. Pezeshki, T. Vaughan, K. Carter, J. Rudan, C. Engel, G. Fichtinger, Navigated breast tumor excision using electromagnetically tracked ultrasound and surgical instruments, IEEE Transactions on Biomedical Engineering (Aug. 2015).
[3] Open-source platforms for navigated image-guided interventions, Medical Image Analysis 33 (June 2016), https://doi.org/10.1016/j.media.2016.06.011.
[4] A. Bzostek, R. Kumar, N. Hata, O. Schorr, R. Kikinis, R. Taylor, Distributed modular computer-integrated surgical robotic systems: implementation using modular software and networked systems, in: Proc. Medical Image Computing and Computer Assisted Intervention, MICCAI 2000, Pittsburgh, PA, Oct. 2000, pp. 969–978.
[5] O. Schorr, N. Hata, A. Bzostek, R. Kumar, C. Burghart, R.H. Taylor, R. Kikinis, Distributed modular computer-integrated surgical robotic systems: architecture for intelligent object distribution, in: Proc. Medical Image Computing and Computer Assisted Intervention, MICCAI 2000, Pittsburgh, PA, Oct. 2000.
[6] A. Fedorov, R. Beichel, J. Kalpathy-Cramer, J. Finet, J.-C. Fillion-Robin, S. Pujol, C. Bauer, D. Jennings, F.M. Fennessy, M. Sonka, J. Buatti, S.R. Aylward, J.V. Miller, S. Pieper, R. Kikinis, 3D Slicer as an image computing platform for the quantitative imaging network, Magnetic Resonance Imaging 30 (9) (Nov. 2012) 1323–1341, PMID: 22770690, PMCID: PMC3466397.




[7] J. Tokuda, G.S. Fischer, X. Papademetris, Z. Yaniv, L. Ibanez, P. Cheng, H. Liu, J. Blevins, J. Arata, A.J. Golby, T. Kapur, S. Pieper, E.C. Burdette, G. Fichtinger, C.M. Tempany, N. Hata, OpenIGTLink: an open network protocol for image-guided therapy environment, The International Journal of Medical Robotics and Computer Assisted Surgery 5 (4) (2009) 423–434.
[8] M. Quigley, K. Conley, B. Gerkey, J. Faust, T.B. Foote, J. Leibs, R. Wheeler, A.Y. Ng, ROS: an open-source Robot Operating System, in: IEEE Intl. Conf. on Robotics and Automation (ICRA), Workshop on Open Source Software, 2009.
[9] P. Kazanzides, Z. Chen, A. Deguet, G.S. Fischer, R.H. Taylor, S.P. DiMaio, An open-source research kit for the da Vinci® Surgical System, in: IEEE Intl. Conf. on Robotics and Automation, ICRA, 2014, pp. 6434–6439.
[10] Z. Chen, P. Kazanzides, Multi-kilohertz control of multiple robots via IEEE-1394 (Firewire), in: IEEE Intl. Conf. on Technologies for Practical Robot Applications, TePRA, 2014.
[11] Z. Chen, A. Deguet, R.H. Taylor, P. Kazanzides, Software architecture of the da Vinci Research Kit, in: IEEE Intl. Conf. on Robotic Computing, Taichung, Taiwan, 2017.
[12] S. Kim, Y. Tan, A. Deguet, P. Kazanzides, Real-time image-guided telerobotic system integrating 3D Slicer and the da Vinci Research Kit, in: IEEE Intl. Conf. on Robotic Computing, Taichung, Taiwan, 2017.
[13] M.Y. Jung, A. Deguet, P. Kazanzides, A component-based architecture for flexible integration of robotic systems, in: IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, IROS, 2010.
[14] P. Kazanzides, Y. Kouskouslas, A. Deguet, Z. Shao, Proving the correctness of concurrent robot software, in: IEEE Intl. Conf. on Robotics and Automation, ICRA, St. Paul, MN, May 2012, pp. 4718–4723.
[15] P. Kazanzides, S. DiMaio, A. Deguet, B. Vagvolgyi, M. Balicki, C. Schneider, R. Kumar, A. Jog, B. Itkowitz, C. Hasser, R. Taylor, The Surgical Assistant Workstation (SAW) in minimally-invasive surgery and microsurgery, in: MICCAI Workshop on Systems and Arch. for Computer Assisted Interventions, Midas Journal (June 2010), http://hdl.handle.net/10380/3179.
[16] A. Kapoor, A. Deguet, P. Kazanzides, Software components and frameworks for medical robot control, in: IEEE Intl. Conf. on Robotics and Automation, ICRA, May 2006, pp. 3813–3818.
[17] M.Y. Jung, M. Balicki, A. Deguet, R.H. Taylor, P. Kazanzides, Lessons learned from the development of component-based medical robot systems, Journal of Software Engineering for Robotics (JOSER) 5 (2) (Sept. 2014) 25–41.
[18] S. Choueib, C. Pinter, A. Lasso, J.-C. Fillion-Robin, J.-B. Vimort, K. Martin, G. Fichtinger, Evaluation of 3D Slicer as a medical virtual reality visualization platform, Proceedings of SPIE Medical Imaging (2019) 1095113-1–1095113-8.
[19] H. Bruyninckx, P. Soetens, B. Koninckx, The real-time motion control core of the Orocos project, in: IEEE Intl. Conf. on Robotics and Automation, ICRA, Sept. 2003, pp. 2766–2771.
[20] M. Henning, A new approach to object-oriented middleware, IEEE Internet Computing 8 (1) (Jan.–Feb. 2004) 66–75.
[21] A. Brooks, T. Kaupp, A. Makarenko, S. Williams, A. Oreback, Towards component-based robotics, in: Intl. Conf. on Intelligent Robots and Systems, IROS, Aug. 2005, pp. 163–168.
[22] P. Kruchten, The 4+1 view model of architecture, IEEE Software 12 (1995) 42–50.
[23] M. Nolden, S. Zelzer, A. Seitel, D. Wald, M. Müller, A.M. Franz, D. Maleike, M. Fangerau, M. Baumhauer, L. Maier-Hein, K.H. Maier-Hein, H.P. Meinzer, I. Wolf, The medical imaging interaction toolkit: challenges and advances: 10 years of open-source development, International Journal of Computer Assisted Radiology and Surgery 8 (4) (2013) 607–620.
[24] M.J. Clarkson, G. Zombori, S. Thompson, J. Totz, Y. Song, M. Espak, S. Johnsen, D. Hawkes, S. Ourselin, The NifTK software platform for image-guided interventions: platform overview and NiftyLink messaging, International Journal of Computer Assisted Radiology and Surgery 10 (3) (2014) 301–316.


[25] L. Qian, A. Deguet, P. Kazanzides, ARssist: augmented reality on a head-mounted display for the first assistant in robotic surgery, Healthcare Technology Letters 5 (5) (2018) 194–200.
[26] H. Kato, M. Billinghurst, Marker tracking and HMD calibration for a video-based augmented reality conferencing system, in: IEEE/ACM Intl. Workshop on Augmented Reality, IWAR, 1999, pp. 85–94.
[27] L. Qian, A. Deguet, Z. Wang, Y.H. Liu, P. Kazanzides, Augmented reality assisted instrument insertion and tool manipulation for the first assistant in robotic surgery, in: IEEE Intl. Conf. on Robotics and Automation, ICRA, 2019, pp. 5173–5179.
[28] L. Qian, A. Deguet, P. Kazanzides, dVRK-XR: mixed reality extension for da Vinci Research Kit, in: Hamlyn Symposium on Medical Robotics, 2019, pp. 93–94.
