Computerized control system and interface for flexible micromanipulator control


Advances in Engineering Software 86 (2015) 107–114


Nicholas C. Hensel, Ryan M. Dunn, Michael G. Schrlau*
Rochester Institute of Technology, Mechanical Engineering, 76 Lomb Memorial Drive, Rochester, NY 14623, United States


Article history: Received 23 January 2015; received in revised form 27 March 2015; accepted 19 April 2015.

Keywords: Automation; Computer interface; Graphical user interfaces; Image processing; Micromanipulators; Microactuators

Abstract

Micro- and nanomanipulators are essential for a broad range of applications requiring precise micro- and nanoscopic spatial control, such as those in micromanufacturing and single cell analysis. These manipulators are often manually controlled using an attached joystick and can be difficult for operators to use efficiently. This paper describes a system developed in MATLAB to control a well-known, commercial micromanipulator in a user-friendly and versatile manner through a graphical user interface (GUI). The control system and interface allow several types of flexible movement control in three-axis Cartesian space, including single movements, multiple queued movements, and mouse-following continuous movements. The system uses image processing for closed-loop feedback to ensure precise and accurate control over the movement of the manipulator's end effector. The system can be used on any electronic device capable of running the free MATLAB Runtime Environment (MRE), and the system is extensible to simultaneously control other instruments capable of serial communication.

© 2015 Elsevier Ltd. All rights reserved.

1. Introduction

Micro- and nanomanipulators provide the ability to maneuver and position micro- and nanoscale objects to precise locations and orientations. Used in conjunction with optical and electron microscopes, these ultrahigh-precision positioning instruments enable functions critical for many micro- and nanoscale applications, such as manufacturing and biomedicine. In manufacturing, manipulator systems have been employed to automate the maneuvering and positioning of small objects in order to produce patterns, structures, and devices. For example, Cappelleri et al. developed caging grasps for microassembly capable of accurately positioning micro objects using an array of probes [1,2]. In biomedicine, manipulators are commonly utilized to maneuver and position small probes inside living cells and tissue. Cell injection is a large area of research; Wang et al. developed a method of carrying out cellular injection using probes at a high rate and with minimal user intervention [3]. Grippers are used in addition to pipettes for biological applications. For example, Kim et al. developed a nanonewton microgripper to analyze the properties of biomaterials [4].

* Corresponding author at: Rochester Institute of Technology, Mechanical Engineering, 76 Lomb Memorial Drive, GLE-2181, Rochester, NY 14623, United States. Tel.: +1 585 475 2139. E-mail address: [email protected] (M.G. Schrlau).
http://dx.doi.org/10.1016/j.advengsoft.2015.04.009
0965-9978/© 2015 Elsevier Ltd. All rights reserved.

The goal of micro- and nanomanipulator automation is to create a system requiring only minimal intervention, optimally none, to carry out a specific desired function. In this way, automation of predefined tasks increases precision and throughput while reducing variability and time. Most automation requires computer vision algorithms to determine the position of manipulation targets. Wang et al. also developed high-throughput automatic injection systems and demonstrated their use on zebrafish embryos [5–7] and contributed image processing techniques for injection automation [8]. Mattos et al. developed image processing techniques to improve their automated injection process of blastocyst cells [9–13].

However, situations exist, especially in the development and utilization of new tools and techniques, where the automation and control of these manipulators, and other related equipment, needs to be flexible and adaptable. For example, in our own work towards developing carbon nanotube (CNT)-based probes for single cell analysis [14–20], micro- and nanomanipulators are routinely used to maneuver the functional end of the probes in order to interface with single living cells in an undetermined manner, often requiring on-the-fly repositioning or customized movements based on qualitative visual feedback. Here, the tips of CNT-based probes are manually maneuvered in Cartesian space by the manipulator's joystick and positioned within the intracellular environments of single living cells under an optical or fluorescence microscope to perform functions or analysis with tertiary instruments.


New probe-based single cell analysis techniques, as well as traditional cell physiology techniques such as patch clamp electrophysiology, involve continuous interactions with multiple instruments simultaneously. The user is often required to switch attention and focus between the microscope, the micro- or nanomanipulator, tertiary instruments (e.g. electrophysiology amplifier), and computer screen (often displaying the field of view from a microscope camera and/or graphical user interface of tertiary instruments), making the work difficult and laborious. Although many commercial microscopes and tertiary instruments come equipped with some form of graphical user interface (GUI) for use on standard computer workstations, no such interface has been provided for micro- or nanomanipulator control. Moreover, no interface exists as an expandable platform for the inclusive control of multiple instruments.

The purpose of this paper is to present a new system layout developed in MATLAB to provide an intuitive and accessible GUI for micro- and nanomanipulator control through a standard computer workstation. Through the GUI, a user with minimal prior knowledge can directly control the manipulator or select from customizable control functions for real-time control with a mouse. The system and GUI can be adapted to multiple applications with relative ease and configured to include control over additional instruments as needed. The computer-based system is intended to provide a user-friendly, expandable control platform for micro- and nanomanipulators over a wide range of applications in both research and education.

2. Materials and methods

2.1. System layout

A typical configuration for performing micro- or nanomanipulator operations, as shown in Fig. 1, was used to develop the Manipulator Control system. The system consists of four primary components: a manipulator and its control unit (Eppendorf TransferMan NK 2), a microscope (Zeiss Observer.A1m), a camera (Point Grey Chameleon), and a computer (Dell with Intel Core i5-2400 @ 3.1 GHz) to interface with all of the controllable components. The software was developed in MATLAB R2011a. The microscope is mounted on a vibration isolation table in order to minimize detrimental vibrations during manipulation operations. The computer and manipulator controller are located near the microscope but separated from the vibration isolation table.

Fig. 1. System layout used in the development of the nanomanipulator GUI. The layout consists of a microscope, camera, manipulator with control unit, and a standard computer workstation containing the Manipulator Control software.

Each system component is a commercially available device with no hardware modifications. While each of the system components can be reasonably interchanged and the control software adapted to the new equipment, the current implementation assumes a number of things about each of the components. The camera must be a device recognizable by MATLAB, which requires that it provide video information and accept computer commands through MATLAB's Image Acquisition Toolbox. This restriction prevents cameras with proprietary communication protocols from being used with the GUI. The camera utilized in this particular hardware configuration was selected because the manufacturer provides control drivers, which specifically allow for open interfacing. There are, however, many microscope cameras which use restrictive or proprietary communication and control schema. To access information from more restrictive camera systems, it would be necessary to run a separate executable from the control software or develop device drivers which allow MATLAB to interface with the camera.

The control scheme of the manipulator controller and its method of digital communication are critical to the design of the GUI. The selected control unit utilizes an absolute positioning system, wherein all movement commands sent to the manipulator are interpreted as a request to place the needle tip at the specified location in three-dimensional space by travelling at a specified velocity along each axis. The control software is designed to generate movement commands according to this control scheme. However, coordinate information is maintained both for the current position of the needle and the movement location, so it would be possible to extend the software to support a manipulator which utilizes a relative positioning system. Additionally, the means by which the system is calibrated has been managed in such a way that it could be readily adapted to a relative positioning system.

The manipulator selected has a range of travel of approximately 20 mm along each axis and can travel at up to 7.50 mm/s. The finest possible resolution of movement is approximately 40 nm. This allows for sufficient movement of the manipulator tip over a wide range of magnifications while also providing fine resolution for accurate manipulations at high optical magnifications. The Eppendorf TransferMan control unit is programmed to receive serial communication. Besides movement commands and coordinate requests, the control unit can receive commands to perform a number of other functions, including connecting and disconnecting or toggling between manual and computerized control.
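The interaction pattern with such a controller can be sketched with MATLAB's standard serial interface (available in the release used here). In the sketch below, the port name, baud rate, terminator, and the ASCII command format are hypothetical placeholders for illustration only; the actual TransferMan NK 2 command set is defined in its programming documentation and is not reproduced in this paper.

    % Minimal sketch of issuing an absolute-position move over a serial link.
    % The command string is hypothetical; the real controller syntax must be
    % taken from the manufacturer's programming documentation.
    s = serial('COM3', 'BaudRate', 9600, 'Terminator', 'CR');  % assumed settings
    fopen(s);

    targetXYZ = [1250 830 -400];   % example target position in device units
    speed     = 1000;              % example axis speed

    cmd = sprintf('MOVE %d %d %d %d', targetXYZ(1), targetXYZ(2), targetXYZ(3), speed);
    fprintf(s, cmd);               % request the absolute move
    reply = fgetl(s);              % read the controller's response, if any

    fclose(s);
    delete(s);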

Fig. 2. System layout illustrating the transfer of information between system components. The camera observing the microscope slide is directly connected to the computer, which is in turn connected to the manipulator controller. The manipulator controller processes either signals from the computer or the manual input device depending on which method of control is enabled.


The schematic view of the system, shown in Fig. 2, illustrates the flow of information between components. The host computer controls the manipulator and camera using the control program developed in MATLAB. The computer interfaces with the manipulator controller over the serial port and with the camera over a USB port. Information from the camera is sent to the computer, which sends commands to and receives responses from the manipulator controller, and visual feedback from the movements is visible through the camera. In this way, a closed loop is created in the system. The microscope used for developing the software does not have any form of computer control, so this aspect of the system is not directly managed by the host software. More sophisticated commercial microscope systems do exist which provide programmatic control of the X, Y, and Z stage position.
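For the camera side of this loop, the Image Acquisition Toolbox calls below illustrate how frames can be pulled into MATLAB for display and feedback. The adaptor name and device ID are assumptions; imaqhwinfo reports what is actually installed on a given workstation.

    % Minimal sketch of frame acquisition through the Image Acquisition Toolbox.
    info  = imaqhwinfo;                    % list installed acquisition adaptors
    vid   = videoinput('winvideo', 1);     % adaptor name and device ID are assumed
    preview(vid);                          % live view, analogous to the GUI display panel
    frame = getsnapshot(vid);              % grab one frame for processing or feedback
    imwrite(frame, 'capture.png');         % optional still capture to disk
    stoppreview(vid);
    delete(vid);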

2.2. Program structure

Program flow is broken into three primary components: initialization, main loop execution, and termination. Program initialization consists primarily of the creation of the main control window and all of the control mechanisms for each of the graphics objects. After the end of the program initialization step, the graphics object is fully defined and the program enters the main loop.


The main loop of the program, shown in Fig. 3, acts as a control scheme and continuously queries the graphics object for the current state of information. Given a change in the state of the graphics object, reference functions are called to carry out different actions based on the observed update in the state of the system. Such system changes are generated by user input in a variety of ways. An example of this process might be the user pressing an interface button, which results in the execution of a callback function, which acts as an interrupt at the current point of program execution. In general, these callback functions can be executed at any time, but one callback cannot interrupt another callback. Within the callback, some element of the system is updated, such as changing the state of a figure object's value. The main loop then observes this update in the figure object when a check function is called. This check function is contained within the GUI object class and is used to observe the current state of some part of the system, possibly as newly updated by user input.

The hierarchy of program flow, as shown in Fig. 3, is established to allow for multiple control schemes. The main loop has multiple sub-procedures that are invoked differently depending on which control scheme is currently active. In the loop, the system checks for movement, updates the tip coordinates, updates the graphics displays, and restarts. If the user terminates the program, the loop terminates and the shutdown procedure is called.

The series of movement checks in the main loop is the aspect of the program that enables control of the manipulator, and is designed in such a way as to allow multiple control schemes. The first type of control checked in the movement cycle is continuous movement control, which allows for intuitive user control of the manipulator. In this control scheme, the program continually monitors the position of the mouse in the control window and sends commands to the manipulator to move to the queried position. This control scheme does not use any image processing feedback in order to provide real-time control with minimal movement and command lag. It does not check for completion of movements, so that the move command always corresponds to the exact desired position, without requiring completion of a possibly outdated command.

The second type of movement control checked in the main loop is driven by a series of user-specified point movements, which can be generated in a number of ways as described in the User Interface Layout section. This type of movement checks that the previous movement has been completed before starting the next movement. Furthermore, if feedback is enabled, the software adjusts the end effector position before the next move is loaded such that its observed position falls within a pre-defined distance from the commanded position. This is accomplished by loading the currently defined movement again using a new coordinate transform, which utilizes the calculated position data generated by tip detection in a proportional feedback control scheme.
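The poll-and-dispatch structure described above can be illustrated with the small, runnable MATLAB toy below. A figure with a toggle button stands in for the full GUI, and move commands are printed rather than sent over the serial port; all names, values, and the polling interval are illustrative assumptions, not the actual class and function names used in the software.

    % Runnable toy version of the poll-and-dispatch main loop.
    fig = figure('Name', 'Main loop demo');
    btn = uicontrol(fig, 'Style', 'togglebutton', 'String', 'Continuous move', ...
                    'Units', 'normalized', 'Position', [0.05 0.05 0.4 0.1]);

    while ishandle(fig)                         % loop until the user closes the window
        if ishandle(btn) && get(btn, 'Value')   % "check function": read the toggle state
            p = get(fig, 'CurrentPoint');       % last mouse position within the figure
            fprintf('move command -> x=%.0f, y=%.0f\n', p(1), p(2));  % stand-in for a serial send
        end
        drawnow;                                % let queued callbacks (button presses) run
        pause(0.05);                            % polling interval
    end
    % the shutdown procedure would release the serial port and camera here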

Fig. 3. Software flow of the main loop that processes movement commands and updates coordinate information. The progression of the main loop is from top to bottom.

2.3. User interface layout

In the design of the GUI layout, the major requirements were to provide convenient access to all of the GUI's functionality while maintaining an easy-to-learn and easy-to-use interface. To meet these requirements, the layout was designed to mimic the layouts found in common computer applications. As such, the GUI's largest component is the main viewing area that displays the microscope image, while the GUI's functions are accessible using buttons surrounding the viewing area and options are contained in a drop-down menu. The GUI, as shown in Fig. 4, allows for easy and quick control of the manipulator while providing visual feedback. The GUI window is broken into four regions: the menu and three control panels (Image Display, Manipulator Control, and Image Control).


Fig. 4. Software screenshot with annotations labeling the various regions of the GUI. The white space occupying a majority of the GUI is the region that displays the camera image.

The menu is used to configure the video display and manipulator connection, along with providing logging control options. The Microscope Video dropdown provides controls for the zoom level, the camera utilized (as selected from an automatically generated list of available connected cameras), and management of various properties for the field of view display. The Manipulator dropdown allows the user to select the port over which to establish serial communication with the manipulator; the list of available ports is automatically generated based on the detected available ports.
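The automatically generated device lists mentioned above can be produced with standard MATLAB queries, as in the short sketch below. instrhwinfo belongs to the Instrument Control Toolbox and imaqhwinfo to the Image Acquisition Toolbox; the indexing shown is only an example.

    serialInfo = instrhwinfo('serial');             % query serial hardware information
    portList   = serialInfo.AvailableSerialPorts;   % cell array of ports, e.g. {'COM1';'COM3'}

    camInfo  = imaqhwinfo;                          % installed image acquisition adaptors
    adaptors = camInfo.InstalledAdaptors;           % cell array of adaptor names
    devices  = imaqhwinfo(adaptors{1});             % devices available on the first adaptor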

The largest control panel is the Image Display Panel, which acts as the primary control region for the system. The video feed from the selected camera is displayed on this panel, and it is here that the user generates coordinates for manual movement commands.

The panel to the right of the display panel is the Image Control Panel. This region contains the capture camera sub-panel and is used to initialize the camera connection, as well as capture camera images and initialize camera recording. Also contained in this panel is the tip detection sub-panel, where the user can configure and load the template image and scanning parameters. This is also where the user can start tip detection and modify the feedback parameters.

The panel at the bottom of the GUI is the Manipulator Control Panel. This panel provides the manipulator controls and is broken into three sub-panels. The leftmost sub-panel is dedicated to displaying coordinate data for each of the possible frames. The data in this panel is continuously updated based on operational mode. The middle section of the Manipulator Control Panel contains all of the movement controls. The left column of buttons and controls is used for connecting to the manipulator and configuring the movement speed and vertical step size (used in continuous movement control). The vertical step size represents the distance in micrometers that the end effector will move when the user scrolls the mouse wheel one tick or presses the up or down arrow key on the keyboard. This is also where the user can enable feedback control if tip detection is enabled. The right column provides six different movement-type buttons, which are described below. The rightmost section of the Manipulator Control Panel is currently empty but has been reserved for future expansion. The small panel along the bottom of the figure is used to provide system feedback to the user, indicating what is currently being done by the system or describing the type of input the user needs to provide.

The sequence of events to initialize control of the system using the GUI is as follows:

1. System is started and the user configures the video feed and manipulator connection using the menu controls.
2. User connects to the camera using the Image Control Panel and to the manipulator using the Manipulator Control Panel.
3. User positions the end effector under the microscope's field of view by physically positioning it or using the Eppendorf manipulator's manual controller.
4. With all connections established and calibrated, the user can now carry out manipulation tasks.


2.4. Capabilities and options

The movement control sub-panel of the Manipulator Control Panel contains six different movement control schemes:

1. Manual Calibrate XY Center: Prompts the user to indicate the location of the tip in the field of view. This information is used to define the image-manipulator transformation, using a purely translational model. After obtaining the transformation, the tip is moved to the center of the image. This process can be carried out whenever the operator desires to calibrate the GUI and the hardware systems, and should only need to be carried out once each time the system is initialized.

2. Preconfigured Movement: Prompts the user to select an Excel spreadsheet file containing a series of Cartesian coordinates. These coordinates are loaded into the program as a list of movement commands to be immediately executed by the manipulator.

3. Single Move: Prompts the user to select a point in the camera field of view. The tip is moved to that point.

4. Return to Zero: Returns the tip to the center of the image.

5. Multi-Move: Prompts the user to specify a series of points using mouse clicks. When the user is finished and presses the "Enter" key, the manipulator moves to each of these points in series.


6. Continuous Move: This is a toggleable control scheme for manipulator control. While active, the tip of the manipulator is continuously driven to the currently detected mouse position within the field of view. This is done by repeatedly polling the current position of the mouse and sending the detected position as a movement command to the manipulator. The system does not check that the last movement is completed, so that it is possible to smoothly control the device. In this scheme, movement is bounded by the field of view of the Image Display Panel. This scheme does not use feedback, even if enabled, in order to minimize response delay and provide the user the same "feel" as if using the manipulator joystick.

Manual XY calibration is necessary because some manipulators, including the Eppendorf TransferMan NK 2, allow manual adjustments to the probe that are not measured by the device. For example, setting the probe to a different approach angle will change the tip's location relative to the manipulator's actuators. Once the XY position is calibrated, no other calibration is required because the movement scale factor is automatically calculated by the software using information such as camera resolution, Image Display Panel size, and zoom level. The user only needs to ensure that the GUI is set to the correct zoom level; scaling calibration is handled automatically to further maintain ease-of-use.

The interface allows for other simple commands for ease-of-use purposes. All movements can be immediately stopped by pressing the "Esc" key. Another movement cannot be started while a previous movement is still being executed. The movement speed affecting all movement types can be changed at any time between movement commands. This movement speed, adjustable in the GUI, represents the maximum speed in micrometers per second at which the end effector will be moved. When Continuous Move is enabled, the tip can be moved more slowly if the user moves the mouse slower than the specified speed; if the mouse is moved abruptly, the end effector will follow at the set speed. This limit is useful if the user wishes to limit the speed of objects being manipulated.
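The calibration described above amounts to a fixed translational offset plus a scale factor tied to the optics. The MATLAB sketch below illustrates that conversion from a pixel location to an absolute manipulator coordinate; all numerical values and variable names are illustrative assumptions (the 693 µm field-of-view width at 10× is taken from Section 4), not values computed by the software itself.

    % Scale factor from the optics and camera resolution (example values).
    fovWidth_um = 693;                          % field-of-view width at 10x magnification
    imgWidth_px = 1280;                         % camera image width in pixels
    umPerPixel  = fovWidth_um / imgWidth_px;    % roughly 0.54 um per pixel at 10x

    % Manual XY calibration: the user clicks the tip, fixing the translational offset.
    tipClick_px = [642 488];                    % pixel location of the tip indicated by the user
    tipManip_um = [10250 9800];                 % manipulator XY coordinate reported at that moment
    offset_um   = tipManip_um - tipClick_px * umPerPixel;

    % Converting a later mouse click into an absolute move target.
    click_px  = [700 300];
    target_um = click_px * umPerPixel + offset_um;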

3. Results and discussions

Fig. 5 demonstrates the use of Continuous Move and Multi-Move. The Continuous Move shown in the left column took 8 s to complete and was performed by lowering the tip with the mouse wheel and moving the mouse in a "W" pattern. The Multi-Move command was completed by lowering the tip and specifying the five points of an "M" shape. The manipulator completed the movement in 5 s.

Fig. 6 shows the result of an example movement command carried out by the system using the Preconfigured Movement feature. To demonstrate the capability of this command, the acronym of our research laboratory, Nano-Bio Interface Laboratory (NBIL), has been indented into a film of negative photoresist deposited on a slide using a glass pipette tip. The Preconfigured Movement feature requires a file with XYZ coordinates for each point.

Fig. 5. Demonstration of Continuous Move and Multi-Move functionality at 10× magnification. The medium is negative photoresist baked on a glass slide at 300 °C for 2 min.

Fig. 6. Demonstration of the GUI’s Preconfigured Movement functionality. The medium is negative photoresist baked on a glass slide at 300 °C for 1 min.


The file used to create Fig. 6 was generated with a MATLAB script that records mouse clicks and converts them to a coordinate file. For each mouse click, the resulting file contains a point that moves the tip to the specified position, then two more to raise and lower the tip. The process required the manipulator to make 43 indentations, which was completed in 35 s, representing a rate of 1.23 indentations per second. The patterns in Figs. 5 and 6 were completed while the manipulator speed was set to 1000 µm/s.
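A helper of this kind can be sketched in a few lines of MATLAB, as below. The grid size, tip heights, output file name, and the ordering of the three points per click are assumptions for illustration; only the overall pattern (one target point plus raise/lower points per click, written to a spreadsheet) follows the description above.

    figure; axis([0 1280 0 960]);               % blank axes standing in for the camera view
    [x, y] = ginput;                            % collect mouse clicks until Enter is pressed

    zUp = 50; zDown = 0;                        % assumed raised and lowered tip heights (um)
    coords = [];
    for k = 1:numel(x)
        coords = [coords;
                  x(k) y(k) zUp;                % travel to the target with the tip raised
                  x(k) y(k) zDown;              % lower the tip to make the indentation
                  x(k) y(k) zUp];               % raise the tip again before the next target
    end
    xlswrite('pattern.xls', coords);            % one XYZ coordinate per row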

The targeting accuracy and speed of the GUI were compared to those of traditional manipulator operation. Here, a user with prior manipulator experience but no prior GUI experience was asked to repeat a simple movement task with both the traditional manipulator joystick and the GUI. Three movement types were compared: Single Move with the GUI, Continuous Move with the GUI, and joystick movement without the GUI. The user was tasked with maneuvering the manipulator tip to seven predetermined targets on a 0.1 mm grid, as shown in Fig. 7a, over several cycles. Each cycle consists of eight target visits because target 1 is visited twice per cycle. The recorded movement was then analyzed using open-source software (Tracker, https://www.cabrillo.edu/~dbrown/tracker/) to obtain the location of the tip at each target. Each time the user visited a target, the tip's position was measured and the absolute deviation from the target was calculated. The absolute deviation of every target visit for Single Move and joystick control is shown in Fig. 7b. The deviations from all targets and all cycles for each movement type were averaged to calculate the mean error, E. The duration of each movement was also recorded and averaged to obtain the mean movement duration, Δt. The mean error and mean movement duration for each movement type are displayed in Table 1.

The GUI-based Continuous Move was designed to control the movement of the manipulator in the same fashion as the manipulator joystick. From the targeting results, as shown in Table 1, the user had better targeting accuracy (less mean error) using Continuous Move than using the manipulator joystick (3.80 µm vs. 4.58 µm, respectively). However, it took the user longer to make these targeting movements with Continuous Move than with the joystick (4.19 s vs. 3.17 s, respectively). Since it is common to compromise accuracy for speed and vice versa, these results suggest that controlling the manipulator with Continuous Move is comparable to controlling the manipulator using the manipulator joystick. These conclusions also take into account that the user had prior experience with the traditional manipulator but no prior GUI experience.

The GUI-based Single Move was designed to increase manipulator efficiency over traditional manipulator operation. From the results shown in Table 1, when comparing Single Move to the manipulator joystick, targeting error and movement time were both reduced (3.43 µm vs. 4.58 µm and 1.93 s vs. 3.17 s, respectively). In other words, when using the GUI compared to the joystick, the user had 25% better targeting accuracy and was able to complete targeting movements in 40% less time. Without knowing these results, the user reported that Single Move was the most effective control method of those compared. The user also reported that they preferred the tactile feel of the joystick to that of the simple computer mouse used to interface with the GUI. Although tactile comfort was not a focus of this paper, the tactile feel of the GUI is largely dependent on the control device; an advantage of the GUI is that it can be used with other cursor control devices compatible with the operating system.

4. Positioning feedback

Early versions of the control system suffered from an unacceptable amount of positioning error, a problem which is consistent with previous work [21]. This error could be attributed to a number of possible system components, including misalignment of the manipulator mounting relative to the microscope and imperfection in the manipulator's actuators. To quantify this error, a set of 200 random movements was generated using pseudorandom number generation in MATLAB. These commands were sent to the program as a Multi-Move command. For each movement, the commanded position, observed actual position, and movement duration were recorded. The 200 test movements were completed in 128.4 s at a rate of 93.5 moves/min with an average absolute error of 4.83 µm. The manipulator was configured to a movement speed of 1000 µm/s during the tests.

Table 1. Mean error E and mean move duration Δt for control with and without the GUI.

                          E (µm)    Δt (s)
GUI Single Move           3.43      1.93
GUI Continuous Move       3.80      4.19
Manipulator joystick      4.58      3.17

Fig. 7. Comparison of accuracy between using the GUI and the manipulator joystick. (a) Microscope image showing the manipulator tip and 0.1 mm grid with target order labeled; (b) plot of x and y errors of all target visits and all cycles using the manipulator joystick and the Single Move function in the GUI.


In completing this set of movements, the position of the tip was calculated using tip detection based on a normalized cross-correlation between a template image and a subsection of the field of view image. This technique was selected based on well-documented prior use in other systems [21,22]. The template image used for correlation is generated either by loading a pre-existing template image or by selecting the current tip in the field of view, which creates the template automatically. To extract the current tip position from the image, this loaded template image is compared with a subset of the current field of view around the assumed tip position using MATLAB's built-in "normxcorr2" function. This function compares the template and snapshot images and returns a normalized matrix of match values. The greatest value in this matrix corresponds to the best match between the two images and is used to extract the tip location. The correlation must exceed a software-specified limit, in this case defaulted to 0.8. If this threshold is not exceeded, the process is repeated until either the threshold is exceeded or the entire image has been sampled. This sub-sampling process is utilized to reduce detection time, as suggested by Mattos et al. [23]. The maximum match location is then translated to an overall image position and then to a coordinate location. This location is then stored as the true tip position. At 10× magnification, the field of view is 693 × 520 µm on a 1280 × 960 pixel image, for a pixel size of 0.542 µm/pixel.

Given successful tip detection, it is possible to use the calculated true position to provide feedback to the system in order to correct the movement position and eliminate the previously identified positioning error. This was accomplished through the implementation of a proportional feedback control scheme. Controls were created in the GUI for the definition of proportional, integral, and derivative control parameters, but in testing it was found that purely proportional control provided sufficiently accurate results. Moreover, proportional control uses visual feedback from the camera, so its accuracy scales with microscope zoom level and camera resolution. The control scheme operates by continuously updating a set of variable-translation coordinates between the computer and manipulator coordinate frames. Movement is broken into two phases, as suggested by Becattini et al. [21]: rough positioning and fine positioning. Rough positioning refers to movement without feedback. After each movement is completed in this step, the tip then undergoes fine positioning based on a proportional feedback scheme. The difference between the current and desired position is found using tip detection, and this error, multiplied by a gain factor Kp, is added to the variable-translation coordinates. The variable translation is then used to create an updated movement command, which incorporates the feedback result. The feedback loop continues until the tip error falls below a desired positioning threshold, in this case set to 5 µm divided by the zoom level, which is roughly one pixel at any zoom level.
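The detection and fine-positioning steps can be summarized by the following MATLAB sketch. normxcorr2 is the documented function named above; the synthetic image data, search window, calibration values, gain, and threshold are illustrative assumptions rather than the values used in the software.

    % Synthetic stand-in data so the sketch runs without hardware or saved images;
    % in the GUI these would be the live camera frame and the saved tip template.
    frame = zeros(960, 1280);
    frame(480:490, 640:650) = 1;                       % bright blob standing in for the tip
    template = frame(478:492, 638:652);                % template cut around the tip

    rowRange = 400:600;  colRange = 500:760;           % assumed search window near the expected tip
    umPerPixel = 0.542;  offset_um = [10000 9500];     % example calibration values
    target_um  = [10150 9620];                         % example commanded position

    win   = frame(rowRange, colRange);                 % sub-image, sampled to keep detection fast
    score = normxcorr2(template, win);                 % normalized cross-correlation map
    [peak, idx] = max(score(:));
    if peak < 0.8
        % correlation below the acceptance threshold: enlarge the window and repeat (omitted)
    end
    [pr, pc] = ind2sub(size(score), idx);
    tipCol = colRange(1) + pc - size(template, 2);     % approximate tip column in the full frame
    tipRow = rowRange(1) + pr - size(template, 1);     % approximate tip row in the full frame
    tip_um = [tipCol tipRow] * umPerPixel + offset_um; % observed tip position in manipulator units

    % Fine positioning: add the gained error to the variable translation and re-issue
    % the same commanded point until the residual error drops below the threshold.
    Kp        = 0.8;                                   % assumed proportional gain
    err_um    = target_um - tip_um;
    offset_um = offset_um + Kp * err_um;               % updated image-to-manipulator translation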
Using the feedback methodology described above, 200 random movements were carried out. The errors resulting from these movements are shown in Fig. 8. The 200 movements with feedback were completed in 615.5 s, a rate of 19.5 moves/min, with an average error of 0.30 µm. This set of movements was completed with the manipulator configured to a move speed of 1000 µm/s. At a pixel size of 0.541 µm/pixel, this corresponds to an average positioning error of 0.556 pixels. Compared against the average positioning error of 0.23 µm at a rate of 7.6 targets/min documented in [21] (which corresponds to an average pixel error of 0.554 pixels at 0.415 µm/pixel), the purely proportional feedback methodology implemented here operates approximately 2.5 times faster with comparable error.


Fig. 8. The first graph shows the error of 200 movement operations without any programmatic feedback, excluding outliers. The second graph shows the error of 200 movement operations with visual feedback as described in this section. The introduction of feedback dramatically improved the positioning accuracy of the manipulator.

Therefore, the proportional feedback methodology is sufficient for this vision-based GUI because of its fast computation and its accuracy to within a single pixel. It should be noted that a more computationally expensive control methodology may result in degraded performance if hosted on less powerful computing machines.

5. Conclusion

Given that the limiting factor in many manipulation tasks is the skill of the user, the development of user-friendly software is an important step in facilitating more efficient manipulation. The GUI and functions described herein allow users to perform manipulation procedures without needing to develop dexterity with a less intuitive operating method. The GUI corrects for error by detecting the manipulator's end effector using image processing to ensure the end effector reaches the desired destination with greater precision. Further development of the image processing protocols and device interfacing can allow additional types of procedures, including cell injection and micro-assembly, to be carried out through the GUI.

Currently, users are limited to controlling manipulators using devices supplied by the manufacturers. Computerization of the Manipulator Control processes facilitates the use of a wide array of alternative controllers.


The combination of a keyboard and mouse is familiar to most users, but other third-party devices can be utilized, including haptic joysticks and motion-sensing devices such as smart phones. Additionally, computerization of micro- and nanomanipulators provides the ability to enhance commercial instruments by accommodating third-party devices for users with special needs.

Acknowledgement

This work was supported by the Office of the Vice President for Research at the Rochester Institute of Technology (RIT), and the generous donations from Michael Bady at Eppendorf North America.

References

[1] Cappelleri DJ, Fatovic M, Fu ZB. Caging grasps for micromanipulation & microassembly. In: Presented at the IEEE/RSJ international conference on intelligent robots and systems, San Francisco, CA; 2011.
[2] Cappelleri DJ, Fatovic M, Shah U. Caging micromanipulation for automated microassembly. In: Presented at the IEEE international conference on robotics and automation, Shanghai; 2011.
[3] Wang WH, Sun Y, Zhang M, Anderson R, Langille L, Chan W. A system for high-speed microinjection of adherent cells. Rev Sci Instrum 2008;79:104302 (6pp).
[4] Kim K, Liu X, Zhang Y, Cheng J, Wu X, Sun Y. Manipulation at the nanonewton level: micrograsping for mechanical characterization of biomaterials. In: Presented at the IEEE international conference on robotics and automation, Kobe; 2009.
[5] Wang WH, Liu XY, Sun Y. Autonomous zebrafish embryo injection using a microrobotic system. In: Presented at the IEEE international conference on automation science and engineering; 2007.
[6] Wang WH, Liu XY, Gelinas D, Ciruna B, Sun Y. A fully automated robotic system for microinjection of zebrafish embryos. PLoS One 2007;2:e862.
[7] Wang WH, Liu XY, Sun Y. High-throughput automated injection of individual biological cells. IEEE Trans Automat Sci Eng 2009;6:209–19.
[8] Wang WH, Hewett D, Hann CE, Chase JG, Chen XQ. Machine vision and image processing for automated cell injection. In: IEEE/ASME international conference on mechatronic and embedded systems and applications, Beijing; 2008. p. 309–14.
[9] Mattos LS, Grant E, Thresher R, Kluckman K. Blastocyst microinjection automation. IEEE Trans Inf Technol Biomed 2009;13:822–31.
[10] Mattos LS, Grant E, Thresher R. Speeding up video processing for blastocyst microinjection. In: Presented at the IEEE/RSJ international conference on intelligent robots and systems, Beijing; 2006.
[11] Becattini G, Mattos LS, Caldwell DG. A fully automated system for adherent cells microinjection. IEEE J Biomed Health Inf 2013;18:83–93.
[12] Mattos LS, Caldwell DG. A fast and precise micropipette positioning system based on continuous camera-robot recalibration and visual servoing. In: Presented at the 2009 IEEE international conference on automation science and engineering, Bangalore, India; 2009.
[13] Becattini G, Mattos LS, Caldwell DG. A novel framework for automated targeting of unstained living cells in bright field microscopy. In: Presented at the 2011 8th IEEE international symposium on biomedical imaging: from nano to macro; 2011.
[14] Schrlau MG, Bau HH. Carbon-based nanoprobes for cell biology. Microfluidics Nanofluidics 2009;7:439–50.
[15] Schrlau MG, Brailoiu E, Patel S, Gogotsi Y, Dun NJ, Bau HH. Carbon nanopipettes characterize calcium release pathways in breast cancer cells. Nanotechnology 2008;19:325102 (5pp).
[16] Schrlau MG, Dun NJ, Bau HH. Cell electrophysiology with carbon nanopipettes. ACS Nano 2009;3:563–8.
[17] Schrlau MG, Falls EM, Ziober BL, Bau HH. Carbon nanopipettes for cell probes and intracellular injection. Nanotechnology 2008;19:015101 (4pp).
[18] Singhal R et al. Multifunctional carbon-nanotube cellular endoscopes. Nat Nanotechnol 2011;6:57–64.
[19] Niu JJ, Schrlau MG, Friedman G, Gogotsi Y. Carbon nanotube-tipped endoscope for in situ intracellular surface-enhanced Raman spectroscopy. Small 2011;7:540–5.
[20] Orynbayeva Z et al. Physiological validation of cell health upon probing with carbon nanotube endoscope and its benefit for single-cell interrogation. Nanomed: Nanotechnol, Biol, Med 2012;8:590–8.
[21] Becattini G, Mattos L, Caldwell D. A fully automated system for adherent cells microinjection. IEEE J Biomed Health Inf 2013;18:83–93.
[22] Wang WH, Sun Y, Zhang M, Anderson R, Langille L, Chan W. A system for high-speed microinjection of adherent cells. Rev Sci Instrum 2008;79:104302 (6pp).
[23] Mattos L, Grant E, Thresher R. Speeding up video processing for blastocyst microinjection. In: 2006 IEEE/RSJ international conference on intelligent robots and systems; 2006. p. 5825–30.