5th IFAC Symposium on Mechatronic Systems Marriott Boston Cambridge Cambridge, MA, USA, Sept 13-15, 2010
Hardware-Software Co-Design Tracking System for Predictable High-Speed Mobile Microrobot Position Control

Claas Diederichs *
* Division Microrobotics and Control Engineering, Oldenburg University, Oldenburg, Germany (e-mail: [email protected]).
Abstract: Fast closed-loop control is a key issue for high-throughput automated micro- and nanohandling. Currently, mobile robot position control relies on computer vision using FireWire or USB cameras. This approach has several drawbacks for closed-loop control, such as a limited update rate, high latency and unpredictable jitter. To overcome these drawbacks, a new hardware-based position tracking algorithm is presented and compared to current position control sensors. Additionally, several measurements of a closed-loop controlled mobile robot for micro- and nanohandling show the system's advanced performance in terms of speed and accuracy using the new sensor.

Keywords: Sensors, machine vision, FPGA, hardware, tracking, robot control

1. INTRODUCTION

Automated manipulation of nanoscale objects such as carbon nanotubes (CNTs) or nanowires is of high interest for academic research as well as industrial applications. Automated manipulation of nanoobjects has been successfully demonstrated using nanorobotic systems (Eichhorn et al., 2009; Fatikow et al., 2006). However, to achieve high throughput of nanohandling sequences, fast and precise sensor information on tool positions is a major issue.

1.1 Related work

Fast closed-loop control inside a scanning electron microscope (SEM) has been achieved (Jasper et al., 2010). The employed algorithm is used for fine positioning of a tool inside an SEM. It can only operate if the tool carried by the nanorobot is inside the SEM view and can therefore not be used for coarse positioning. Additionally, as the tracking relies on the use of the SEM beam, it cannot be applied to other systems such as optical microscopes. For coarse positioning of mobile robots, a sub-pixel accurate approach using LEDs at the bottom of the mobile robot was used (Dahmen et al., 2008). However, this approach uses software image processing carried out by a general-purpose CPU and therefore has a long latency and is vulnerable to jitter (see Section 2).

Hardware-based image processing has been applied by different groups. An approach to track hand motion was presented by Johnston et al. (2005). It relies on different colors of an object that must be held by the hand and is only pixel-accurate. An approach based on edge detection and a distance transform was reported by Arias-Estrada and Rodríguez-Palacios (2002). This approach reaches a maximal update
rate of 26 frames per second (fps) at VGA resolution and introduces an additional latency of at least one frame due to the distance transform approach. A hardware-software co-design architecture for object tracking using optical flow measurements with a performance of up to 30 fps was developed by Schlessman et al. (2006). However, no statements concerning jitter or latency were made. An approach aiming at high speed uses particle filters for pixel-precise object tracking (Cho et al., 2006). Using this approach, a frame rate of 57 fps could be achieved for VGA images.

1.2 Structure of this paper

In order to overcome the limitations of general-purpose CPU-based tracking for coarse robot positioning (update rate, latency and jitter), a new hardware-based approach for the LED tracking described in Dahmen et al. (2008) is presented. First, the problems of the current LED tracking approach are analyzed. In Section 3, the hardware-based approach is presented. Section 4 compares the performance of the two approaches. Section 5 gives a conclusion and an outlook.

2. SENSOR PERFORMANCE ISSUES OF SOFTWARE-BASED ROBOT TRACKING

2.1 Key issues of sensor performance

The speed and quality of closed-loop control are directly connected to the speed and quality of the connected sensors. Three main timing quality characteristics of an optical sensor are update rate, latency and jitter.
Fig. 1. a) Bottom-up view of the mobile robot. b) Image taken by the tracking camera with infrared filter.

The sensor's update rate is a limiting factor for the digital closed-loop control of a highly dynamic system. For vision-based sensor systems, the update rate is comparatively low, because a full image must be acquired and transferred. Common USB or FireWire cameras have update rates of 10 to 30 Hz. To improve the update rate, special high-speed cameras can be used. These cameras have special interfaces such as CameraLink or Low Voltage Differential Signaling (LVDS).

The latency of a sensor describes the age of a sensor value. With a high latency, the closed-loop control works with old data. Camera-based sensors have a high latency, because an object position is calculated only after a full image has been captured from the camera. The latency of vision-based object tracking is usually at least one update interval.

Jitter is a time variation in a periodic signal (e.g., the update rate), adding an uncertainty for closed-loop control. Jitter is a main problem in software-based object tracking on general-purpose CPUs, because of the unpredictable scheduling of the operating system.
2.2 System description

The mobile robots used in the experiments (Jasper and Edeler, 2008) have two infrared LEDs at the bottom. The robots move on a glass plate. A camera with an infrared filter is mounted underneath the glass plate. The camera parameters are modified to produce black images with bright regions at the LED positions (see Fig. 1). The current tracking system uses a Videology USB camera with VGA resolution (640 x 480 pixels) and an update rate of 25 Hz. The image is transferred to a computer vision library (Dahmen et al., 2008) running on a standard PC. The detected position is transferred to the mobile robot via USB (see Fig. 2).

Fig. 2. Architecture of the tracking system.
2.3 Algorithm

The tracking algorithm used by Dahmen et al. (2008) calculates the weighted center of gravity for each region
in the image. A region contains all neighboring pixels that are above a certain threshold g_th. The grayscale value of a pixel at (x, y) is denoted g(x, y). The weighted center of gravity (c_x, c_y) of a single region R is defined as follows:

\[ c_x = \frac{S_x}{S_t}, \qquad c_y = \frac{S_y}{S_t} \tag{1} \]

\[ S_x = \sum_{(x,y) \in R} w(x, y) \cdot x \tag{2} \]

\[ S_y = \sum_{(x,y) \in R} w(x, y) \cdot y \tag{3} \]

\[ S_t = \sum_{(x,y) \in R} w(x, y) \tag{4} \]

\[ w(x, y) = \max\big(g(x, y) - g_{th},\, 0\big) \tag{5} \]
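For illustration, Eqs. (1)-(5) translate directly into a few lines of software. The sketch below is a minimal reference implementation under assumed data structures (a list of pixel coordinates per region and a row-major grayscale image); it is not the code used in the original system.

```python
# Minimal sketch of Eqs. (1)-(5): weighted center of gravity of one region.
# `region` is an iterable of (x, y) pixel coordinates, `image[y][x]` holds
# grayscale values; names and data layout are illustrative assumptions.

def weighted_center_of_gravity(region, image, g_th):
    S_x = S_y = S_t = 0
    for (x, y) in region:
        w = max(image[y][x] - g_th, 0)   # Eq. (5): weight of the pixel
        S_x += w * x                     # Eq. (2)
        S_y += w * y                     # Eq. (3)
        S_t += w                         # Eq. (4)
    if S_t == 0:
        return None                      # no pixel above the threshold
    return S_x / S_t, S_y / S_t          # Eq. (1): (c_x, c_y)
```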
The software implementation finds a single pixel above the threshold and then grows the region by repeatedly checking each neighboring pixel of the already found pixels.
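As a sketch of this scan-and-grow procedure (assuming 4-connected neighborhoods and Python-style nested lists for the image; the original library code is not reproduced here), the region extraction could look as follows:

```python
# Illustrative sketch of the software approach: scan for a pixel above the
# threshold, then grow the region by repeatedly checking the neighbors of
# already found pixels (4-neighborhood assumed; not the original code).

def find_regions(image, g_th):
    height, width = len(image), len(image[0])
    visited = [[False] * width for _ in range(height)]
    regions = []
    for y in range(height):
        for x in range(width):
            if visited[y][x] or image[y][x] <= g_th:
                continue
            # grow a new region from this seed pixel
            region, stack = [], [(x, y)]
            visited[y][x] = True
            while stack:
                px, py = stack.pop()
                region.append((px, py))
                for nx, ny in ((px - 1, py), (px + 1, py), (px, py - 1), (px, py + 1)):
                    if 0 <= nx < width and 0 <= ny < height \
                            and not visited[ny][nx] and image[ny][nx] > g_th:
                        visited[ny][nx] = True
                        stack.append((nx, ny))
            regions.append(region)
    return regions
```

Each region returned by this search could then be fed into the weighted center of gravity computation sketched above. The runtime depends on the number, size and shape of the regions, which contributes to the unpredictable timing discussed in Section 3.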
2.4 Performance

To measure the update interval and the update interval jitter, updates were recorded at the closed-loop controller. Fig. 3a shows the update interval deviation. The average update interval is 40 ms, as expected. The update interval jitter has a standard deviation of 1.74 ms (4.4 % of the average update interval) and a min-to-max jitter of 17.2 ms.

To record the sensor's latency, the robot performed linear movements with constant speed and duration. The sensor position updates were recorded by the robot controller. Each movement started with a sensor update. After each movement the robot paused for ten sensor updates. The robot moved 100 times forward and 100 times back. Fig. 3b shows a section of the recorded movement. The average latency is 47.5 ms, composed of a full sensor update and additional computation and transfer time. The latency has a standard deviation of 2.3 ms and a min-to-max jitter of 14.7 ms.

Fig. 3. a) Update intervals of the software-based LED tracking. b) Real position and tracked position (dotted, green) for robot movements in x-direction with 400 µs duration (1 pixel ≈ 100 nm).
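The reported interval statistics can be reproduced from a list of recorded update timestamps with a few lines of post-processing; the following snippet is only an illustrative sketch of that evaluation, not the measurement code that was actually used.

```python
# Hypothetical post-processing of recorded update timestamps (in ms) to obtain
# the statistics reported above: mean interval, standard deviation of the
# interval jitter and min-to-max jitter.
import statistics

def interval_statistics(timestamps_ms):
    intervals = [t1 - t0 for t0, t1 in zip(timestamps_ms, timestamps_ms[1:])]
    mean = statistics.mean(intervals)
    stdev = statistics.pstdev(intervals)          # jitter as standard deviation
    min_to_max = max(intervals) - min(intervals)  # min-to-max jitter
    return mean, stdev, min_to_max
```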
3. HARDWARE-BASED LED TRACKING
The high jitter in update rate and latency is a main problem of software-based systems. Responsible for this jitter is the use of a general-purpose CPU with a common operating system and its unpredictable scheduling behavior.
To minimize the jitter, a system that is predictable and dedicated to the tracking is needed. To additionally reduce the latency, an embedded system that performs the major steps of the tracking calculation in hardware while the image is being captured is desirable.

Gribbon et al. (2006) define three different processing modes for hardware image processing. In offline mode, the image is written to memory, allowing random access to image pixels. The stream mode operates on the pixels while the image is streamed from the camera to the hardware processing unit, allowing for faster processing and lower hardware utilization at the cost of no random pixel access. The hybrid mode is a combination of both.

The standard software method to calculate the weighted center of gravity is to scan the image for pixels above the threshold and then flood-fill the region connected to each such pixel. The offline mode would allow for pixel search and flood-filling. However, this approach requires first transferring a full image to memory and finding the regions while the next image is captured, thus adding undesired latency to the system. Additionally, the runtime of this approach is hardly predictable, because it depends on unsteady factors such as region size and shape. Operating in stream mode, the tracking can be performed while the image is captured with no additional latency and is therefore highly predictable. To perform the weighted center of gravity calculation in stream mode, a new algorithm must be developed.

3.1 Algorithm

The main idea of this approach is to decide locally, for each single pixel that exceeds the threshold, whether it belongs to an already known region. To decide this, the region membership of the pixel to the left of the current one as well as the membership of the pixel above the current one must be known. If one of these belongs to a known region, the current pixel belongs to the same region. A new region is started if the current pixel is above the threshold and both adjacent pixels have no membership. To make this possible, the region membership of one line has to be stored (see Fig. 4).

Fig. 4. p is the current pixel, the dark region is the stored line. The algorithm takes the first and the last pixel of the stored line into consideration.

For each region, the values S_x, S_y and S_t are stored. These values are updated if the current pixel belongs to the region. When two regions are joined (see below), the values of the joined regions have to be summed.

With this limited knowledge, it can happen that one region is identified as two or more regions. This is detected when the region memberships of the left and the above pixel are different (Fig. 5a,b). In this case, the already collected values of both regions have to be summed, and only one of
the regions stays valid. Additionally, the joining of regions must be remembered, to avoid continuing already invalidated regions (see Fig. 5c). For a special region shape, simply remembering all joins is not sufficient, because it can lead to joining regions multiple times (see Fig. 5d).

Fig. 5. a) The algorithm has no knowledge of the right pixel, therefore the current pixel starts a new region. b) At this point the algorithm detects that region 1 and region 2 are the same region. c) Region 1 is found, but it was already added to region 2. d) Region 1 was added to region 2, and region 2 was added to region 3. Region 1 must not be added to region 3.

This algorithm can be implemented in a hardware-software co-design mechatronic system.

3.2 Architecture

The new tracking system aims at minimizing the sensor performance issues described in Section 2. First, a different camera with a higher update rate (58 Hz) is used. This camera can be controlled via the IIC bus to scan only a region of interest (ROI), which leads to an even higher update rate (up to 200 Hz). Second, the position calculation is performed on a dedicated field programmable gate array (FPGA) based system that detects the regions while the image is transferred pixel-wise from the camera. The system has detected all regions immediately after the processing of the last pixel. While most of the region detection is performed in hardware, some steps are executed on an embedded processor. The embedded processor uses an interrupt-driven architecture without an operating system and is therefore predictable in runtime and jitter. The detected robot positions are transferred to the mobile robot controller using the controller area network (CAN) bus (ISO/IEC, 2003). The CAN bus is a real-time capable bus for multiple controllers (Tindell et al., 1995). With this architecture, the PC and its unpredictable behavior are removed from the control loop. However, all signals as well as the acquired image are still transferred to the PC via USB. The extended architecture is shown in Fig. 6.
Fig. 6. System architecture including the new hardware-based robot tracking system.
The tracking system is implemented on a Xilinx Spartan-3E FPGA (XC3S1200E). A softcore embedded processor (MicroBlaze) with a clock frequency of 50 MHz is used.

3.3 Hardware implementation

To detect regions without random pixel access, the algorithm must decide the region membership of each pixel with limited information. The only information available is the region membership of the pixels to the left of and above the current pixel. The algorithm buffers the region membership of a full line. Each pixel above the threshold is mapped to a numerical region identifier R (0 ≤ R ≤ Rmax).

To allow for a high throughput of the system, each pixel is pipelined through different processing blocks that detect the regions and calculate the parameters for the weighted center of gravity. After the last pixel is processed, the found region parameters are transferred to an embedded processor to calculate the robot's position. Each of the pipelining units performs its task in a single clock cycle. There are two processing modes: the pixel processing mode for region detection and the controller mode for transferring the found regions to the embedded processor. The pipelining steps for the pixel processing mode are shown in Fig. 7. The grayed parts are described later in this section, when the controller mode is presented.

The first pipelining step is the threshold comparison and the calculation of w(x, y). If the pixel value is below g_th, the value of w(x, y) is set to 0, indicating that the pixel is not part of any region. The next step (neighborhood) matches the pixel to a region number. The unit buffers the region identifier of the last pixel and the region identifiers of a full line of the image. Eq. 6 describes the region identifier decision of this component.
\[
R_{x,y} =
\begin{cases}
0, & \text{when } w(x, y) = 0,\\
R_{x-1,y}, & \text{when } x > 0 \wedge R_{x-1,y} \text{ valid},\\
R_{x,y-1}, & \text{when } y > 0 \wedge R_{x,y-1} \text{ valid},\\
R_{new}, & \text{otherwise.}
\end{cases}
\tag{6}
\]

Fig. 7. Pipeline in processing mode. g = g(x, y), w = w(x, y), R = region identifier, Rj = region identifier to join. Red values indicate possible updates by the previous block.
If R(x−1, y) and R(x, y−1) are valid and different, two regions that were found as distinct regions need to be treated as a single region. In this case, the region identifier of the pixel above is sent to the next pipelining element as Rj to indicate a join.

To deal with the problem of discontinued regions, the joinFilter processing unit keeps track of regions that were already joined in the current line. If R contains a region that was already joined in this line, R is changed to the last valid region that passed through this unit. This procedure is correct due to the restriction that the region of the left pixel stays valid while the above pixel's region is joined. Together with the mapRegions block, the problem of nested joining (see Fig. 5d) can be overcome.

The controlMux processing unit is needed after region processing to collect the data for transfer to the embedded processor and will be described later in this section. In the image processing mode, it simply forwards the values from the joinFilter.

The mapRegions processing unit has two tasks. First, it keeps track of region joining and maps already joined regions to the correct region number. Second, as the neighborhood unit does not keep track of joins, it checks whether a join has already been performed earlier and controls whether the join region Rj is propagated to the next pipelining unit. Because of the joinFilter unit, nested joins are already resolved.

The nextWeight unit calculates x · w(x, y) and y · w(x, y). This is done in a separate pipelining unit to increase the maximum clock frequency of the overall system.

The valueBuffer unit sums up and stores the values needed for the center of gravity calculation (S_x, S_y and S_t) for each region. Additionally, the number of pixels (P) is stored for each region as an indicator of the region size. If regions need to be joined, the values of the two distinct regions are summed. The unit utilizes a single dual-port block RAM to store the region values. 32 bits are used for each of S_x, S_y and S_t, and 16 bits are used for the number of pixels, resulting in a total of 112 bits for each region. The block RAM has to be dual-ported, because the parameters of two distinct regions need to be read in the same clock cycle if a join is performed.

After the processing of the last pixel, the information about valid regions and their parameters is implicitly spread over the different units. The neighborhood unit contains the maximum region count, the joinRegion entity contains information about the validity of region identifiers, and the parameters are stored in the block RAM of the valueBuffer. To process this information and transfer valid region parameters to the embedded processor, two additional units are introduced (see Fig. 8). First, a controlMux pipeline unit is added. In normal operating mode, this unit simply forwards the signals from the joinFilter unit and can be switched to control mode by the controller. Working in control mode, x, y, w(x, y) and Rj are set to zero. Because Rj is zero, the mapRegions unit will simply map the incoming region identifier to the joined region identifier. The valueBuffer does not perform any summing if w(x, y) is zero and simply outputs S_x, S_y, S_t and P for the incoming region identifier.
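To make the interplay of these units easier to follow, the following Python snippet is a behavioral model of the pixel processing mode: it applies the decision rule of Eq. (6), resolves joins with a small join map (standing in for the joinFilter/mapRegions bookkeeping) and accumulates S_x, S_y, S_t and P per region as the valueBuffer does. It is a software sketch only; the names and the dictionary-based bookkeeping are illustrative and do not mirror the actual HDL structure.

```python
# Behavioral software model (not the FPGA code) of the stream-mode region
# detection: one pass over the pixel stream, Eq. (6) for the region identifier
# decision, a join map for invalidated regions, and running sums per region.

def stream_regions(image, g_th):
    width = len(image[0])
    line_ids = [0] * width          # region ids of the previous line (0 = none)
    acc = {}                        # region id -> [S_x, S_y, S_t, P]
    join = {}                       # joined (invalidated) id -> surviving id
    next_id = 1

    def resolve(r):                 # follow the join chain to the surviving region
        while r in join:
            r = join[r]
        return r

    for y, row in enumerate(image):
        left = 0                    # region id of the pixel left of the current one
        for x, g in enumerate(row):
            w = max(g - g_th, 0)    # Eq. (5)
            if w == 0:
                line_ids[x] = 0
                left = 0
                continue
            above = resolve(line_ids[x]) if line_ids[x] else 0
            if left:                            # Eq. (6): left pixel's region wins
                r = left
                if above and above != r:        # two labels meet -> join them
                    for i in range(4):
                        acc[r][i] += acc[above][i]
                    del acc[above]
                    join[above] = r             # left region stays valid
            elif above:                         # Eq. (6): region of the pixel above
                r = above
            else:                               # Eq. (6): start a new region
                r = next_id
                next_id += 1
                acc[r] = [0, 0, 0, 0]
            acc[r][0] += w * x                  # S_x
            acc[r][1] += w * y                  # S_y
            acc[r][2] += w                      # S_t
            acc[r][3] += 1                      # P
            line_ids[x] = r
            left = r
    return acc                                  # valid regions and their sums
```

In the FPGA, the same bookkeeping is distributed over the pipeline stages of Fig. 7, so that each pixel is handled in a single clock cycle while the image is streamed in.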
Table 1. Measured region sizes for r_valid = 2 for different thresholds and pixel count bounds.

g_th   P_min   r_join (min/max)   r_invalid (min/max)   Δt_control (min/max)
200    100     7 / 15             0 / 0                 54T / 86T
150    100     7 / 16             0 / 2                 54T / 104T
100    100     9 / 17             0 / 6                 62T / 136T
 80    200     11 / 22            2 / 8                 84T / 170T
Fig. 8. Pipeline in transfer mode. The controller applies Rc and matches it against R. On success, P (number of pixels) is checked and Sx, Sy and St are transferred to the software-accessible FIFO.

Rmax contains the number of distinct regions found by the neighborhood processing unit during the last processing mode. The controller unit is connected to the maximum regions output Rmax of the neighborhood unit. After the last pixel is processed, the controller switches the pipeline to controller mode. The controller then drives the first region identifier into the pipeline (Rc) and compares it to the region identifier output of the mapRegions unit. If the region identifiers do not match, the region identifier was joined and is no longer valid. In this case, the controller continues with the next region identifier. If the region identifiers match, the controller forwards the region parameters from the valueBuffer to a software-accessible FIFO and continues with the next region. The minimum and maximum number of pixels for valid regions can be specified so that only valid regions are added to the FIFO. When Rmax is reached, the controller resets all pipelining units for the next frame and triggers an interrupt at the embedded processor. The whole pipeline can run with a maximum pixel clock of 75 MHz, allowing for a maximum update rate of approx. 200 Hz at VGA resolution.
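The readout sequence of the controller can be summarized by the following software sketch. It is a model only: map_regions and value_buffer are placeholder callables standing in for the corresponding pipeline units, and in the FPGA the query happens by driving Rc through the pipeline rather than by function calls.

```python
# Sketch of the controller's readout in transfer mode: every region identifier
# up to Rmax is checked; joined identifiers are skipped, the remaining ones are
# filtered by the pixel-count bounds and pushed to the software-accessible FIFO.

def read_out_regions(r_max, map_regions, value_buffer, p_min, p_max, fifo):
    for rc in range(r_max + 1):          # region identifiers 0..Rmax as described above
        if map_regions(rc) != rc:        # identifier was joined -> no longer valid
            continue
        s_x, s_y, s_t, p = value_buffer(rc)
        if p_min <= p <= p_max:          # only plausibly sized regions are kept
            fifo.append((s_x, s_y, s_t))
```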
3.4 Software implementation

The interrupt service routine on the embedded processor first retrieves the region parameters from the hardware part through the software-accessible FIFO and calculates c_x and c_y for each region. It then calculates the robot pose and sends it onto the CAN bus, before transferring the robot pose and region information via USB to the PC.

3.5 Timing behavior

Hardware timing behavior. The total time overhead of the hardware part depends on the camera pixel clock cycle. Due to the pipelining, the time overhead during the region finding is marginal. Each pipelining unit adds a latency of one clock cycle, resulting in an overhead of nine clock cycles. The runtime of the controller depends on the number of regions. Each region that was joined to another region needs a runtime of 4 clock cycles, each region that does not match the validity bounds needs 7 clock cycles, and each valid region that is transferred to the FIFO needs 12 clock cycles. The total runtime of the controller is given in Eq. 7.

\[ \Delta t_{control} = (2 + 4\,r_{join} + 7\,r_{invalid} + 12\,r_{valid})\, T \tag{7} \]

Measurements were performed with the target application, which has two valid regions (r_valid = 2). The minimum/maximum values for each runtime component as well as for Δt_control are shown in Table 1. The camera used during the experiments has a pixel clock of 25 MHz (T = 40 ns). The maximum runtime overhead is 6.84 µs (g_th = 80, P_min = 200). The maximum jitter is 3.44 µs (g_th = 80, P_min = 200).

Embedded software timing behavior. On the embedded processor, the runtime depends on the number of valid regions as well as on the angle of the tracked mobile robot. For correct threshold and pixel count bounds, the number of valid regions is constant. The angle of the mobile robot is calculated using the atan2 function, whose runtime depends on the direction the robot is heading and lies between 180 µs and 200 µs. The calculated robot pose is then transferred in constant time to the embedded CAN controller. A full robot pose consists of two CAN messages, one containing the position and one containing the robot angle. The first CAN message is transferred to the CAN controller before the angle is calculated, so it can be transmitted while the time-consuming atan2 function is executed. The overall runtime of the software part for two valid regions, until the angle is transferred to the CAN controller, is 480 to 500 µs.

4. EVALUATION
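As a quick plausibility check (a worked example only, not part of the system), Eq. (7) reproduces the Δt_control bounds of Table 1:

```python
# Worked check of Eq. (7) against Table 1 (r_valid = 2 in the target application).

def controller_cycles(r_join, r_invalid, r_valid=2):
    # Eq. (7), expressed in pixel clock cycles T
    return 2 + 4 * r_join + 7 * r_invalid + 12 * r_valid

assert controller_cycles(7, 0) == 54      # first row of Table 1, minimum: 54T
assert controller_cycles(15, 0) == 86     # first row of Table 1, maximum: 86T
assert controller_cycles(22, 8) == 170    # last row of Table 1, maximum: 170T

# With T = 40 ns (25 MHz pixel clock), the last row gives a worst-case overhead
# of about 170 * 40 ns = 6.8 us and a jitter of (170 - 84) * 40 ns = 3.44 us,
# consistent with the values reported above.
```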
To evaluate the performance of the new sensor system, the same measurements as for the software-based system were performed (see Section 2.4). Initially, the calculated positions were transferred via USB to a PC and then forwarded to the mobile robot controller. The average update rate and latency improved because of the new camera, but the jitter of both did not. The update interval has a jitter with a standard deviation of 0.84 ms (4.5 % of the average) and a min-to-max jitter of 20.25 ms.

The results of the CAN transfer are very good regarding latency and jitter. Using the CAN bus, the update interval is constant at 18.81 ms with a negligible standard deviation of 1.4 µs (less than 0.01 % of the average) and a min-to-max jitter of 9.4 µs. The superior performance regarding the update interval deviation is shown in Fig. 9.

Concerning latency, the CAN communication is on average 1.2 ms faster than the USB transfer and has a negligible
jitter. The jitter has a standard deviation of 1.4 µs (less than 0.01 % of the average) and a min-to-max jitter of 9.4 µs. The latency is constant at the value of one update interval.

Fig. 9. Update intervals of the hardware-based LED tracking with CAN and PC-routed USB updates.

The system speed was increased further by selecting a region of interest with a size of 752 x 150 pixels, which increased the update rate to 145 Hz. At this speed, the system performed with the same stable jitter and latency as at full resolution. The results show that jitter is still a major problem with a common PC inside the control loop, even if the tracking is carried out by a different unit.

5. CONCLUSION AND FUTURE WORK

This paper presents a novel approach to calculate the weighted center of gravity that can be implemented as a stream-mode hardware-software co-design. The implementation was carried out on an FPGA to overcome the limitations of PC-based position tracking of a mobile robot. The implementation allows for update rates of up to 200 Hz at VGA resolution; even higher update rates can be achieved using a region of interest. It is also shown that the system is predictable regarding latency and jitter. Experiments underline the superior performance of the mechatronic system in terms of latency and jitter over a software-based approach.

The system will be improved with respect to several requirements. First, the tracking of multiple mobile robots in the same image will be implemented. Key challenges are the mapping of the individual LEDs to the correct robot as well as the transfer of pose information to the correct robot. Second, the algorithm implementation will be extended to operate on multiple pixels at the same time, to allow for the use of CameraLink-based cameras.
REFERENCES

Arias-Estrada, M. and Rodríguez-Palacios, E. (2002). An FPGA co-processor for real-time visual tracking. In FPL '02: Proceedings of the 12th International Conference on Field-Programmable Logic and Applications, 710-719. Springer-Verlag, London, UK.

Cho, J.U., Jin, S.H., Pham, X.D., Jeon, J.W., Byun, J.E., and Kang, H. (2006). A real-time object tracking system using a particle filter. In Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems, 2822-2827.

Dahmen, C., Wortman, T., and Fatikow, S. (2008). Olvis: A modular image processing software architecture and applications for micro- and nanohandling. In Proc. of IASTED International Conference on Visualization, Imaging and Image Processing (VIIP).

Eichhorn, V., Fatikow, S., Wortmann, T., Stolle, C., Edeler, C., Jasper, D., Sardan, O., Bøggild, P., Boetsch, G., Canales, C., and Clavel, R. (2009). NanoLab: A nanorobotic system for automated pick-and-place handling and characterization of CNTs. In Proc. of IEEE Int. Conference on Robotics and Automation (ICRA).

Fatikow, S., Eichhorn, V., Wich, T., Hülsen, H., Hänßler, O., and Sievers, T. (2006). Development of an automatic nanorobot cell for handling of carbon nanotubes. In Proc. IARP - IEEE/RAS - EURON Joint Workshop on Micron and Nano Robotics, Paris, France. http://iarp06.robot.jussieu.fr/Papers/Huelsen/.

Gribbon, K.T., Bailey, D.G., and Johnston, C.T. (2006). Using design patterns to overcome image processing constraints on FPGAs. In DELTA '06: Proceedings of the Third IEEE International Workshop on Electronic Design, Test and Applications, 47-56. IEEE Computer Society, Washington, DC, USA. doi:10.1109/DELTA.2006.93.

ISO/IEC (2003). Road vehicles – Controller area network (CAN) – Part 1 & 2. Technical report, International Organization for Standardization, Geneva, Switzerland.

Jasper, D., Diederichs, C., Edeler, C., and Fatikow, S. (2010). High-speed nanorobot position control inside a scanning electron microscope. In Proc. of Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology Conference (IEEE ECTI-CON).

Jasper, D. and Edeler, C. (2008). Characterization, optimization and control of a mobile platform. In Proc. of 6th Int. Workshop on Microfactories (IWMF).

Johnston, C.T., Gribbon, K.T., and Bailey, D.G. (2005). FPGA based remote object tracking for real-time control. In 1st International Conference on Sensing Technology, 66-72.

Schlessman, J., Chen, C.Y., Wolf, W., Ozer, B., Fujino, K., and Itoh, K. (2006). Hardware/software co-design of an FPGA-based embedded tracking system. In CVPRW '06: Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop, 123. IEEE Computer Society, Washington, DC, USA. doi:10.1109/CVPRW.2006.92.

Tindell, K., Burns, A., and Wellings, A.J. (1995). Calculating controller area network (CAN) message response times. Control Engineering Practice, 3(8), 1163-1169.