Computers & Industrial Engineering 124 (2018) 462–473
ARTool Zero: Programming trajectory of touching probes using augmented reality
Matteo Ragni, Matteo Perini, Amedeo Setti, Paolo Bosetti
University of Trento, Department of Industrial Engineering, 9 Sommarive, 38123 Trento (TN), Italy
PROM Facility, Trentino Sviluppo, 8 Fortunato Zeni, 38068 Rovereto (TN), Italy
ARTICLE INFO

Keywords: Touching probe; Augmented reality; Human machine interface; Part program generation

ABSTRACT
In manufacturing applications, process preparation is a time-consuming and error-prone operation, and its costs are especially relevant when dealing with small batches. One mandatory operation in machining preparation is the alignment of the raw material with respect to the machine reference frame, commonly performed through an electronic touching probe. Machine manufacturers implement in-controller preparatory codes for the identification of a range of geometric features (simple contacts, plane normals, circles, etc.) in order to make alignments faster, but those procedures rely on human-input parameters, and an error may have catastrophic consequences for the probe itself. ARTool Zero supports operators in programming touching probe trajectories, generating and simulating on-the-fly the part program that guides the probe in the identification of a geometric feature. The interface of ARTool Zero is an augmented reality application based on the ARTool framework. The application runs on a mobile device that communicates with the machine controller. ARTool Zero projects coordinates from the mobile device screen to the machine active reference frame by means of markers. One of the markers acts as an anchor for a virtual plane, i.e. the plane in the real world on which the mobile screen coordinates are projected, allowing a 2D screen tap to be converted into a 3D point in space. Ego-localization accuracy is one of the critical aspects of the application, thus the validation of the core ARTool library is discussed. A video that presents the application in a real-case scenario is available at https://youtu.be/wFF89pqfm6c.
1. Introduction

1.1. The costs of machining

Several factors should be considered in the definition of an efficient machining process, and not all of them are related to the actual machining operation (Grzesik, 2008). The total cost per part is a combination of: a. machining costs (e.g. effective operation, maintenance, man-hours); b. costs for machining setup (e.g. mounting cutters, setting fixtures, testing part programs); c. costs of loading and unloading; and d. tooling costs. It is clear that one of the major costs is related to setup time, which includes time for loading and unloading new workpieces, alongside the non-productive time needed for machine preparation (Kalpakjian, Schmid, & Kok, 2008). The optimization of the process is strongly related to the reduction of setup time, especially in the case of small production lots and for shop-floors with limited automation. The application of AR
technologies to manufacturing can boost efficiency (Meden, Knodel, & Bourgeois, 2014), while reducing the error rate. This work presents an Augmented Reality (AR) application for the generation of part programs for electronic touching probes, aimed at reducing the setup time related to raw material alignment. The application focuses on the identification of different geometric primitives, and may be extended to the identification of complex geometric features. The trajectory of the probe is generated starting from the information collected by the camera and from the user input; the device is a mobile tablet, which allows the simulation of the generated listing to be projected on the camera feed before it is sent to the machine controller for execution.

1.2. Recent AR works in manufacturing
⁎ Corresponding author.
E-mail addresses: [email protected] (M. Ragni), [email protected] (M. Perini), [email protected] (A. Setti), [email protected] (P. Bosetti).
https://doi.org/10.1016/j.cie.2018.07.026
Received 11 May 2017; Received in revised form 9 July 2018; Accepted 16 July 2018; Available online 27 July 2018
0360-8352/ © 2018 Published by Elsevier Ltd.

Modern machine tools implement specific preparatory codes for touching probes, based on the assumption that the operators already have a rough knowledge of the positions and orientations of the fixed workpieces in the working volume. At least some approximate measurements are required, and tolerance parameters for the approaches must be set. Some simulations can be seen on the CNC (Computer Numerical Control) screen before execution, although no feedback is provided with respect to the real objects in the workspace. These procedures require time and experience, since errors lead to catastrophic damage to the machine, workpieces, and touching probes. In the literature (Fiorentino, Uva, Gattullo, Debernardis, & Monno, 2014; Ramírez, Mendoza, Mendoza, & González, 2015), AR is used as an alternative training method. Educational and informational applications, such as augmented manuals and operator training, are now literally mainstream. In Monroy Reyes, Vergara Villegas, Miranda Bojórquez, Cruz Sánchez, and Nandayapa (2016), the authors use a marker solution to build interactive lectures on machinery handling for completely inexperienced students, revealing once again the high acceptance of the methodology, which improves understanding of programming caveats for complex paths (Ćuković et al., 2015). Syberfeldt, Danielsson, Holm, and Wang (2016) push towards the integration of AR in training setups and in expert systems to aid inexperienced operators in decision making. Both Product Design and Planning, and Workplace Design and Planning benefit from an AR developing environment (Ong, Pang, & Nee, 2007) that supports designers and engineers in designing new assembly lines, with AR interfaces (Bondrea & Petruse, 2013; Büttner, Sand, & Röcker, 2015) that guide the operator in the execution of a specific task, e.g. the projection of welding spots on workpieces (Doshi, Smith, Thomas, & Bouras, 2016). The ergonomics of the technology is evaluated in Vignais et al. (2013).
Few studies have started to develop ex-ante decision supporting tools (Elia, Gnoni, & Lanzilotto, 2016) for figuring out the effectiveness of the approach in a specific manufacturing process, since the costs of integrating such a new technology can be significant. On process machines, the most prolific field regards programming and collision avoidance for manipulators. The end-effector pose and the trajectory reached by a complex kinematic chain during a procedure are more intuitive when programmed (Fang, Ong, & Nee, 2012b) and visualized (Chong, Ong, Nee, & Youcef-Youmi, 2009; Fang, Ong, & Nee, 2012a) by means of an augmented user interface, e.g. mobile devices, projection on half-silvered glasses, or head-up displays (Jozef, Miroslav, & Ludmila, 2014). General surveys of industrial applications can be found in Ong, Yuan, and Nee (2008) and Nee and Ong (2013). The number of AR applications on machine tools is limited, and they may be referred to as proof-of-concept prototypes rather than proof-of-benefit ones. In Suárez-Warden, Mendívil, Ramírez, Garza Nájera, and Pantoja (2015), an AR appliance is used to expand the psycho-motor ability of the operator by incorporating AR in preparatory procedures for a pipe manufacturing machine. In Meden et al. (2014), AR is used to develop a framework for validating the dimensions of finished parts. The framework is marker based, which is one of the most reliable solutions when considering the precision required for manufacturing applications. The work also illustrates statistical evidence of the advantages, both economical and practical, induced by AR applications in manufacturing. Another approach, typically discussed in the literature, is the superposition of virtual images on the camera feed for the validation of complex paths (Weinert, Zabel, Ungemach, & Odendahl, 2008).
Virtual images contain augmented information about the process, and are visualized through different devices, such as stereo-projectors (Olwal, Gustafsson, & Lindfors, 2008) or mobile devices (Setti, Bosetti, & Ragni, 2018b, 2018a). In general, the idea is to use the augmented visualization to give the operators more insight into the process, usually before performing the actual machining operation (Zhang, Ong, & Nee, 2010), rather than to provide a user interface.
1.3. Paper structure

Before diving into the detailed description of the ARTool Zero procedures and internals, Section 2 shows the practical usage of the application in the case of a corner feature identification, in order to allow the reader to understand the human/machine interaction of the appliance. Section 3 contains a brief overview of the general ARTool framework, alongside the description of how ARTool has been employed to project the user input into the machine workspace, transforming the framework into a complete input-output interface. The section also describes the NC interface required by ARTool Zero. Section 4 details the part programs generated for each geometric feature. To get consistent programs, the ego-localization capability is fundamental, thus a benchmark of the core of ARTool, from which ARTool Zero receives coordinates, is summarized in Section 5.

2. Corner detection example

This section presents an example regarding the identification of a corner as the intersection of three orthogonal planes in space, in order to better illustrate the actual contribution of the application. The corner is one of the very first features that an operator learns to identify. The example is composed of a sequence of images that present the user interface, the simulated sequence, and the executed sequence. The procedure is a corner identification through three simple contacts, an operation that is described in detail in Section 4.6. In Fig. 1, the user overlaps a 3D asset, which is a gray mask, over the geometric feature to be recognized in the virtual world, scaling its dimensions accordingly, to let the part-program generator derive the sequence of operations in the form of a machine part program. The part program is then simulated through the augmented animation depicted in the left column of Fig. 2.
Once the simulation is completed, a dialog window engages the user, asking whether to check the simulation further, send the program to the machine controller, or completely abort the procedure. The result, in terms of actual axis movements, is shown in the right column of the same figure (see Fig. 2).

Fig. 1. The mask for the corner is a cube, with a dot on the target corner. The image is from an alpha version of the application, in which the pinch-to-zoom callback is not yet implemented, and the scaling parameter is defined manually in a configuration menu.

Fig. 2. The sequence on the left shows the simulated part program on the display of the tablet: users can frame the scene from different directions to check for collisions; on the right, the sequence of resulting machine movements is depicted. In each scene it is possible to see two markers: the feature marker that generates the virtual plane, and the machine marker, whose position is known with respect to the machine reference frame. The sequence can be seen in the video, where the system is adopted on an Alesa Monti milling machine, with a Fidia controller that communicates with the ARTool Backend: https://youtu.be/wFF89pqfm6c.

3. Platform description

3.1. ARTool framework

ARTool is a framework that allows the implementation of AR applications, specifically designed for manufacturing. It has already proved effective in:

• reducing setup and fixture time (e.g. fixing new bulks or testing new part programs);
• reducing maintenance time by supporting diagnosis and failure discovery on complex systems.

The platform implements a series of tools: information authoring for both technical offices and machine manufacturers, storage of data, and visualization of augmented information on mobile devices for shop-floor users. Machine tool controllers use clients, either an embedded system or a software service, to take part in the information network. The client reports machine status, simulated part-program hooks, the reference systems table, the tools table, and diagnostics, while it receives software updates and new part programs (Fig. 3). Data are centralized in SCADA servers that can be queried by the different authenticated assets of the network. Shop-floor operators are equipped with a tablet system providing the most prominent functionality of the ARTool framework. The machine workspace is almost uniquely reconstructed through markers (Gao, Hou, Tang, & Cheng, 2003). As of now, tablet computers meet the requirements needed for ego-localization with respect to the surrounding environment (camera, IMU) and are low-cost with respect to other solutions, e.g. head-mounted displays. The ARTool interface presents to operators:

• the bulk model and possibly fixture models;
• the spindle head and tool, or mechanical axes simulacra, in motion with respect to the simulated trajectory;
• reference systems;
• tool-paths;
• distances between markers and locations of markers with respect to the active reference frame;
• subsidiary text and element descriptions.

Fig. 3. The connections and the Application Programming Interface (API) technology used for the example presented in Fig. 2. The tablet communicates wirelessly (802.11) through an HTTPS authenticated connection, on which the client exposes a JSON RESTful interface. The client executes on the CNC controller using proprietary interfaces over an Ethernet (802.3) connection.

For a detailed description of the ARTool framework, the interface, a complete benchmark of the ego-localization capabilities, and the usability assessment, refer to Setti et al. (2018b, 2018a).

3.2. ARTool Zero description

The first version of ARTool ships an augmented output-only interface for shop-floor users. Inputs come in as models localized in the machine reference frame, prepared by machine manufacturers and technical personnel, e.g. CAD models, fixture elements, etc. With ARTool Zero, the capabilities of the framework are expanded to reach the paradigm of an input/output human-machine interface for CNC operators. The input allows the recognition of points and geometric features in space, without adding further hardware to the tablet device. The mobile device camera maps the perceived three-dimensional world to a two-dimensional image that is shown on the mobile screen. This projective property is approximated via camera models. Different camera models may be employed, but the most used ones are linear transformations via the so-called camera calibration matrix $C_B$. For this application, since the highest possible precision is required, the general projective camera matrix is applied, in the form of a 3 × 4 matrix with rank 3 and 11 degrees of freedom (Hartley & Zisserman, 2004). The parameters are specific for each camera and are experimentally identified. The machine workspace is reconstructed via markers. Each marker defines a virtual reference frame, whose $\hat{z}$ direction is the marker normal. Considering Fig. 4, which shows the virtual plane $\hat{x} \times \hat{y}$ that contains the origin, it is possible to cast through the camera matrix the two-dimensional screen coordinate to a three-dimensional point that lies on the virtual plane. In other words, a 2D screen point is projected along the line of sight onto the virtual plane. The marker dimension is known, thus it is possible to perceive also the world scale, and to express coordinates in metric units. The procedure is explained in Fig. 4.

Fig. 4. Using the mobile device as a 3D input system, through a mobile marker.

Perspective projection matrices are used as notation for reference systems. One reference frame with index $j$, defined with respect to another frame with index $i$, namely $T^i_j$, is composed of: a. a rotation matrix $R^i_j$, which is 3 × 3, where each column represents an orthogonal direction in space; b. an origin point in homogeneous coordinates, $o^i_j$, which is 4 × 1, where the first three elements are coordinates in space, and the last one represents a homogeneous scaling factor that is always considered 1; c. a row vector of zeros $0_{1\times3}$:

$$T^i_j = \begin{pmatrix} R^i_j & o^i_{j,1\dots3} \\ 0_{1\times3} & 1 \end{pmatrix} \qquad (1)$$

The inverse transformation is:

$$(T^i_j)^{-1} = \begin{pmatrix} (R^i_j)^\top & -(R^i_j)^\top o^i_{j,1\dots3} \\ 0_{1\times3} & 1 \end{pmatrix} \qquad (2)$$

Each machine has an absolute coordinate system, known as the machine reference $T_0 \in \mathbb{R}^{4\times4}$. Mathematically, this reference is an identity matrix $I_4$. Commercial CNCs save transformations in the so-called reference table. A point $p_r \in \mathbb{R}^{4\times1}$, described in the reference at index $r$ of the table ($T^0_r$), is reported internally in machine coordinates with the relation:

$$p_0 = T_0\, T^0_r\, p_r = T_r\, p_r \qquad (3)$$

By selecting a reference in the CNC, it is possible to specify a part program with coordinates relative to a position and an orientation of the part, which is the reason why alignment procedures are important. An ARTool-ready machine has a fixed machine marker $A$ that is associated with a known coordinate transformation $T^0_A$. The marker $A$ is used by the ARTool library to ego-localize the mobile device in $T_0$. The user fixes further free markers, e.g. $B$, in the working area: ARTool closes the chain between the machine active reference and the marker reference, passing through the fixed marker reference:

$$T^r_B = T^r_0\, T^0_A\, T^A_B \qquad (4)$$

The procedure is also known as anchoring: it allows ARTool to save the transformation between the free and the fixed marker, when both are framed. There is no need to keep both the machine marker and the free marker framed at all times: indeed, once the free marker is positioned and anchored, it can be used as a machine marker. This allows the creation of chains of anchored markers, extending the volume of view in which it is possible to input a position, although the accuracy of the ego-localization decreases exponentially at each chain hop. When the user taps the mobile screen, using an inverted camera matrix $C^\dagger_B$ and the previous transformation, the 2D coordinates of the tap on the screen $p_{tap}$ are transformed into the coordinates of a 3D point projected on the plane of $B$, $p_B$:

$$p_B = C^\dagger_B\, p_{tap} \qquad (5)$$

where $p_B \cdot \hat{z} = 0$. The input procedure is enough to interpret basic geometric features and to perform alignments. The camera feed cannot guarantee the precision required in manufacturing technology, i.e. ARTool showed a reliability in the order of ±1 mm, but the perceived space is precise enough to maneuver an electronic touching probe, which collects accurate measurements. This entails that a client with part-program generation capabilities is connected to the machine.

3.3. Client interface

The client software communicates with the CNC, exposing the table of reference frames, describing the actual end-effector machine coordinates, and commanding the execution of preparatory code. Unfortunately, each machine has a different part-program flavor (Siemens, Heidenhain, FANUC, FIDIA, etc.) and different touching probe preparatory blocks (Heidenhain, Renishaw, etc.), thus the client must abstract an intermediate post-processor: this approach is not different from what is currently done by commercial Computer Aided Manufacturing (CAM) software. For the sake of the argument, the following routines are assumed to be abstracted for the communication between client and NC:

safe() brings the machine end-effector to a safe position, e.g. maximizing the $\hat{z}$ coordinate in the machine reference.

goto(T, p) performs an interpolated movement of the end-effector. It takes as input a reference frame T and an arrival point p that lies in the same reference; the procedure projects the arrival point into the active reference and then executes a G01 preparatory block, at maximum feed.

getFrame(i) reads the reference matrix with respect to the machine reference, which is stored at index i in the reference table. With no argument, it returns the active reference frame.

probe(T, p, α) commands the touching probe to find a contact point on the line that connects the actual position of the probe and the point p specified in the reference T; the argument α commands the retraction distance, i.e. the distance that must be traveled back after contact, along the line that connects the starting point and the contact point. The routine takes into account the uncertainty that surrounds the contact point. This routine calls a preparatory code that usually raises an error on the NC when contact is not achieved.

align(T) aligns the $\hat{z}$ of the end-effector with the $\hat{z}$ of the argument frame, using interpolated movements. Align always performs a safe() before movements.

Each probe manufacturer defines a set of proprietary preparatory codes that perform the inspection of different geometric features. This forces the implementation of a series of basic procedures that rely on a single interface common to all manufacturers. From these basic functions, complex procedures to perform alignments are derived.
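The abstracted client above can be sketched as a thin post-processor object. The sketch below is illustrative only: the class name, the block strings, and the numpy-based signatures are our assumptions, not part of ARTool; a real client would translate each call into the dialect of the target controller.

```python
import numpy as np

class NCClient:
    """Hypothetical sketch of the post-processor abstraction: it hides the
    controller-specific part-program flavor behind the five routines assumed
    in the text. Method bodies only record the emitted blocks; a real client
    would emit Siemens/Heidenhain/FANUC/FIDIA code instead."""

    def __init__(self, reference_table):
        self.reference_table = reference_table  # {index: 4x4 transform}
        self.blocks = []                        # emitted part-program blocks

    def safe(self):
        # Bring the end-effector to a safe position (e.g. maximum z).
        self.blocks.append("G00 Z_MAX")

    def goto(self, T, p):
        # Interpolated move: project p from frame T into the active frame,
        # then emit a G01 block at maximum feed.
        p0 = T @ np.append(p, 1.0)
        self.blocks.append(f"G01 X{p0[0]:.3f} Y{p0[1]:.3f} Z{p0[2]:.3f}")

    def get_frame(self, i=None):
        # Active reference frame when i is None, else entry i of the table.
        return self.reference_table[0 if i is None else i]

    def probe(self, T, p, alpha):
        # Probing cycle towards p (in frame T) with retraction alpha;
        # the concrete preparatory code is controller-specific.
        p0 = T @ np.append(p, 1.0)
        self.blocks.append(
            f"PROBE X{p0[0]:.3f} Y{p0[1]:.3f} Z{p0[2]:.3f} R{alpha:.3f}")

    def align(self, T):
        # Align the end-effector z with the z axis of frame T,
        # always preceded by a safe() move, as specified above.
        self.safe()
        self.blocks.append("ALIGN")
```

In this design the higher-level feature procedures (simple contact, line, plane, and so on) never see the controller dialect: they compose calls to these five routines only.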
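Putting Eqs. (4) and (5) together, the conversion of a screen tap into a machine-frame point can be sketched as follows. This is a minimal illustration under our own assumptions: we restrict the camera matrix to the marker plane z = 0, so the plane-constrained back-projection of Eq. (5) reduces to inverting a 3 × 3 homography (columns 0, 1, and 3 of the camera matrix); the function name and API are ours, not ARTool's.

```python
import numpy as np

def tap_to_machine(C_B, T_rB, tap_xy):
    """Back-project a 2D screen tap onto the marker plane z = 0 of frame B,
    then express the point in the machine active reference frame r.

    C_B   : 3x4 projective camera matrix expressed in the marker frame B
    T_rB  : 4x4 homogeneous transform of frame B w.r.t. active frame r (Eq. 4)
    tap_xy: (u, v) pixel coordinates of the tap
    """
    # For points constrained to the plane z = 0, the camera matrix reduces
    # to a 3x3 homography built from columns 0, 1 and 3 of C_B.
    H = C_B[:, [0, 1, 3]]
    p = np.linalg.inv(H) @ np.array([tap_xy[0], tap_xy[1], 1.0])
    p = p / p[2]                             # normalize homogeneous coords
    p_B = np.array([p[0], p[1], 0.0, 1.0])   # point on the virtual plane
    return T_rB @ p_B                        # chain into the active frame r
```

With an anchored marker chain, `T_rB` is simply the product of the chained transforms, exactly as in Eq. (4).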
4. Alignment procedure

4.1. Features description

From a geometrical point of view, the most interesting features for defining a reference system are:

• simple touch;
• line;
• plane normal;
• inner and outer circles;
• corner;
• sphere.

A graphical example of the features is depicted in Fig. 5. The simple contact is the very basic feature that represents a touch between the touching probe and the workpiece, while the probe moves along a direction in space, in a predefined system of coordinates. This procedure is a cornerstone for all other routines, while being fundamental for users during the identification and alignment of free-form geometries. The line is identified by performing two simple contacts on one face of the workpiece, and it is typical for 2.5D machines. From the joining of the two touches it is easy to reckon the slope of edges with respect to an arbitrary system of coordinates. The plane normal is geometrically defined through three simple contacts. This procedure is actually the combination of two edges. Inner and outer circles are in many situations good alignment features, and are associated with a plane normal that determines an origin offset, where the identified circumference lies. For inner circles, the origin is typically placed on the aperture, while for outer ones it is located on top of the extruded material, in order to avoid collisions. Corners are the most used feature in the alignment of prismatic workpieces: edges can be aligned with the machine axes, with three faces representing the zero of each axis. The procedure to identify a corner expects at least three simple contacts that define the origin of the coordinate system. The last feature is the sphere. In several cases this feature can be useful in the identification of a position in space, while neglecting its orientation. The sphere center is identified by the solution of a minimization problem.

Fig. 5. The different features on the model of a sample bulk, with all the features described in this work.

4.2. Procedure

When a feature marker appears in the field of view of the mobile device camera, ARTool Zero detects the type of feature, and projects a localization mask, specific for that feature, on the marker reference frame. The user can translate, rotate, and scale the mask to nearly fit the real feature. Each mask is associated with a reference frame and a parameter σ that represents the scaling of the mask. The AR interface commands the simulation of the generated part program when the user confirms the mask position. Alignments are defined through procedures that enforce robustness with respect to misalignments and collisions. It must be clear that ARTool Zero is the interface only, and even if the definitions of the procedures try to reduce mistakes and impacts, the user always has to check through the augmented simulation whether the probe can reach the contact points while avoiding collisions. These procedures are constructed with respect to the coordinate frame of the marker, where the marker normal is assumed to be the $\hat{z}$ axis.

4.2.1. Simple contact

The simple contact feature is the very basic one, and it is employed in all other procedures, thus it has to be complete and versatile. When the user frames the marker related to the simple contact procedure, a mask shaped like a little circle is shown on the display, associated with a reference frame $T^B_\mu$. It is possible to translate the mask only on the plane $\hat{x} \times \hat{y}$ of the marker, thus the origin of the mask frame always has its third component equal to zero. Internally, the procedure transforms the mask reference origin into a contact point to be reached by the touching probe:

$$v = T^r_B\, o^B_\mu \qquad (6)$$

which in Fig. 6 is the red dashed vector. The actual trajectory of the probe is the vector $v - w$, and the parameter δ specifies the so-called retraction. The final position of the probe is:

$$p = (v - w) - \delta\, \frac{v - w}{|v - w|} \qquad (7)$$

The reconstructed reference frame has the same orientation of the active frame, and it is translated to the contact point $v$:

$$T^r_k = \begin{pmatrix} R_r & v \end{pmatrix} \qquad (8)$$

When δ is not specified, the probe retracts to its initial position. There are no constraints on the input reference, and internally it is possible to set a combination of reference and p that has an offset with respect to the marker plane. The procedure is described in the following function. The returned frame is always with respect to the machine reference.

Fig. 6. The simple contact procedure, with probe approaching and retracting trajectory.

function SIMPLECONTACT(T, p, δ)
  const k    ▷ Constant index for the last touched point
  probe(T, p, δ)
  return getFrame(k)
end function

4.3. Line

The line feature is a peculiar case of alignment, mainly introduced for 2.5D machines, where the angle between one straight face of the workpiece and a machine axis has to be compensated. The workpiece is contacted in two points, in such a way that they approximate a line belonging to a face of the workpiece. The mask has the shape of a vector, and the origin of the mask reference frame is in the application point of the vector. The contact procedure is shown in Fig. 7. The mask should be approximately aligned on one edge-line of the face of the workpiece, which is the actual feature to be reconstructed. Users move, rotate, and scale the mask in the $\hat{x} \times \hat{y}$ plane. While the rotation specifies the orientation of the vector, the scale parameter σ specifies the length, adjustable through pinch-to-zoom. The points to be touched with two simple contacts are:
$$v_1 = T^r_B T^B_\mu (\delta_1 \hat{x}) \qquad v_2 = T^r_B T^B_\mu (\delta_1 \hat{x} + \sigma \hat{y}) \qquad (9)$$

where δ1 is an offset imposed for safety reasons that prevents the probe from missing the workpiece. The δ2 offset sets the clearance between the probe tip and the workpiece surface, and determines the two dwell positions:

$$w_1 = T^r_B T^B_\mu (\delta_1 \hat{x} + \delta_2 \hat{z}) \qquad w_2 = T^r_B T^B_\mu (\delta_1 \hat{x} + \sigma \hat{y} + \delta_2 \hat{z}) \qquad (10)$$

Both parameters are modifiable through slider elements. The procedure reaches the point $w_1$ and performs a simple contact in $v_1$ with a complete retraction, then moves to $w_2$ and performs a second simple contact in $v_2$. The reconstructed reference frame has the $\hat{x}$ axis along the touched points; the $\hat{y}$ axis is the skew-symmetric product of the normal of the marker reference and $\hat{x}$; the last axis, $\hat{z}$, is the skew-symmetric product between $\hat{x}$ and $\hat{y}$.¹ The sequence of operations is summarized in the following function:

function LINECONTACT(T^B_μ, σ, δ1, δ2)
  for ξ ← [δ1 x̂, δ1 x̂ + σ ŷ] do
    goto(T^B_μ, ξ + δ2 ẑ)
    o⁰ᵢ ← simpleContact(T^B_μ, ξ)
  end for
  x̂ ← (o⁰₂ − o⁰₁) / ‖o⁰₂ − o⁰₁‖
  ŷ ← [[T^B_μ]₁…₄,₃]× x̂    ▷ [·]× is the skew operator
  ẑ ← [x̂]× ŷ
  return (x̂ ŷ ẑ o⁰₁)
end function

Fig. 7. The edge contact procedure: the mask is in full blue, while the actual contact points are in blue stroke and white fill. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

¹ The skew-symmetric operator is defined as:

$$[v]_\times = \begin{pmatrix} 0 & -v_3 & v_2 & 0 \\ v_3 & 0 & -v_1 & 0 \\ -v_2 & v_1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

4.4. Plane

To reconstruct a plane, the machine has to probe the workpiece three times. The user places a triangular mask, which may be re-oriented and scaled (σ factor). The assigned reference frame is centered in the centroid of the mask. The three points are at the corners of the mask, and it is simple to reconstruct their position in the mask reference frame:

$$v_{\mu,j} = \sigma \cos\left(\frac{2\pi j}{3}\right)\hat{x} + \sigma \sin\left(\frac{2\pi j}{3}\right)\hat{y} \qquad j = 0, 1, 2 \qquad (11)$$

while the cleared positions are:

$$w_{\mu,j} = v_{\mu,j} + \delta \hat{z} \qquad j = 0, 1, 2 \qquad (12)$$

with δ as the parameter for dwell clearance. The vectors are not expressed in the machine active reference: as before, it is possible to use the previously defined functions to reach them in the machine reference. Once the three points have been reached, the reconstructed frame has origin:

$$b = \frac{1}{3} \sum_{k=0}^{2} v_k \qquad (13)$$

The first orientations to be reconstructed are $\hat{x}$ and $\hat{z}$. The procedure can be configured to reach the secure position and to align the probe to be orthogonal to the mask. The procedure is summarized in the following function, while the sequence of movements is depicted in Fig. 8.

function PLANECONTACT(T^B_μ, σ, δ)
  if align? then align(T^B_μ)    ▷ configured by user
  goto(T^B_μ, δ ẑ)
  for ξ ← [0, 1, 2] do
    vᵢ ← σ cos(2πξ/3) x̂ + σ sin(2πξ/3) ŷ
    goto(T^B_μ, vᵢ + δ ẑ)
    oᵢ ← simpleContact(T^B_μ, vᵢ)
  end for
  b ← (1/3) Σᵢ oᵢ
  x̂ ← (o₀ − b) / ‖o₀ − b‖
  ẑ ← [(o₁ − o₀)/‖o₁ − o₀‖]× (o₂ − o₁)/‖o₂ − o₁‖
  ŷ ← [ẑ]× x̂
  return (x̂ ŷ ẑ b)
end function
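The frame reconstruction at the end of PLANECONTACT can be illustrated numerically. The sketch below is ours (function names are assumptions); it uses the 3 × 3 block of the skew operator of footnote 1, so that skew(a) @ b is the cross product a × b, and builds the frame of Eqs. (11)-(13) from three touched points.

```python
import numpy as np

def skew(v):
    """3x3 block of the skew-symmetric operator of footnote 1,
    so that skew(a) @ b equals the cross product a x b."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def plane_frame(o0, o1, o2):
    """Frame returned by PLANECONTACT from three touched points:
    origin in the centroid (Eq. 13), x towards the first contact,
    z normal to the plane, y completing the right-handed triad."""
    b = (o0 + o1 + o2) / 3.0                      # centroid, Eq. (13)
    x = (o0 - b) / np.linalg.norm(o0 - b)
    d1 = (o1 - o0) / np.linalg.norm(o1 - o0)
    d2 = (o2 - o1) / np.linalg.norm(o2 - o1)
    z = skew(d1) @ d2                             # plane normal (cross product)
    z = z / np.linalg.norm(z)
    y = skew(z) @ x                               # y = z x x
    T = np.eye(4)                                 # homogeneous frame, Eq. (1)
    T[:3, :3] = np.column_stack([x, y, z])
    T[:3, 3] = b
    return T
```

For three contacts on the plane z = 0, the reconstructed $\hat{z}$ axis is the plane normal (0, 0, 1) and the origin is the centroid of the contacts.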
Fig. 11. The touching point schemes derived from parameter δ2 , as a discrete rotation around z ̂ axis of π/4 steps, looking at the front face of the cube mask. Fig. 8. The plane contact procedure: the mask is in blue, and the first contact point is highlighted with the filled circle. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 12. The complete sequence of movements for the corner contact procedure.
Fig. 9. The inner circle procedure, over δ2 = 4 contact points. For clarity, only the movements for the first contact is shown.
identification. There are two masks that are consecutively placed on the workpiece surface: a plane normal mask and a circle mask that is specific for the procedure. The system engages the user in positioning a plane normal mask, since it is necessary to evaluate a precise axis of the circle in order to keep evaluation of circle consistent. Plane normal mask has reference Bμ1. The inner circle mask is the slice of a full cylinder that superposes graphically the convex hull of the probe reached space during touches (see Fig. 9). The mask has orientation defined through two fingers gesture and it is scaled through pinch-to-zoom. Other parameters are input through sliders: the final angle for the sequence (π /12 ⩽ δ1 ⩽ 2π ), the number of touches (δ2 ⩾ 3), and the depth at which the actual touch will be performed (δ3 > 0 mm ). This depth is the height of the visualized mask. The reference of the mask is Bμ2 that is centered along the axis of the cylinder. With respect to this reference, the points to be touched are:
$$ v_i = \sigma \cos\!\left(\frac{i\,\delta_1}{\delta_2}\right)\hat{x} + \sigma \sin\!\left(\frac{i\,\delta_1}{\delta_2}\right)\hat{y} - \delta_3\,\hat{z}, \qquad i = 0, \dots, \delta_2 - 1 \tag{14} $$

Fig. 10. The outer circle procedure, over δ2 = 4 contact points. For clarity, only the movements for the first contact are shown. The marker is positioned on the top of the feature.
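For illustration, the touch sequence of Eq. (14) can be generated numerically. The following is a minimal NumPy sketch; the function name and the example parameter values are ours, not part of the ARTool implementation:

```python
import numpy as np

def inner_circle_touch_points(sigma, delta1, delta2, delta3):
    """Touch points of Eq. (14): delta2 points spread over the angular
    span delta1, on a circle of radius sigma, at depth delta3 below the mask."""
    i = np.arange(delta2)                      # i = 0, ..., delta2 - 1
    ang = i * delta1 / delta2
    return np.stack([sigma * np.cos(ang),
                     sigma * np.sin(ang),
                     -delta3 * np.ones(delta2)], axis=1)

# Example: 4 touches over a full turn on a 10 mm radius, 5 mm deep
pts = inner_circle_touch_points(10.0, 2 * np.pi, 4, 5.0)
```

With these values the first touch lands at (10, 0, −5) mm and the second at (0, 10, −5) mm, i.e. a quarter turn apart.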
Before reaching the dwell position, which is at the center of the reference frame identified by the mask, the procedure forces an align(Bμ). The points collected by the touching probe fulfill the equation of a sphere with center q and radius ρ:
4.5. Inner and outer circles

Circles are the features for which the AR interface shows its true expressive potential. Let us first consider the case of an inner circle
$$ (v_i - q)^\top (v_i - q) = \rho^2 \tag{15} $$
Fig. 13. The subdivision scheme for the sphere contact procedure. The subdivision is used to create the moving strategy.
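The subdivision scheme of Fig. 13 and the spherical contact points of the sphere procedure (Section 4.7) can be sketched as follows. This is a NumPy sketch under our own assumptions: `subdivision` and `sphere_touch_point` are hypothetical names, and the (2 + δ5) samples per angular direction follow the (2 + δ5)² point count used by the sphere contact procedure:

```python
import numpy as np

def subdivision(d1, d2, d3, d4, d5):
    """Hypothetical (theta, psi) grid: polar angles starting at d1 over a
    span d2, azimuths starting at d3 over a span d4, with (2 + d5)
    samples per direction, i.e. (2 + d5)^2 pairs in total."""
    n = 2 + d5
    theta = np.linspace(d1, d1 + d2, n)
    psi = np.linspace(d3, d3 + d4, n)
    return [(t, p) for t in theta for p in psi]

def sphere_touch_point(sigma, theta, psi, d6):
    """Contact point on the hemisphere of radius sigma, elevated by d6
    along the z axis (spherical-to-Cartesian conversion)."""
    r = np.array([np.sin(theta) * np.cos(psi),
                  np.sin(theta) * np.sin(psi),
                  np.cos(theta)])
    return sigma * r + np.array([0.0, 0.0, d6])
```

The approach points wi of the procedure would then be obtained by offsetting each contact point by the dwell distance δ7 along the same radial direction.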
contact, where the frame is centered at the opening of the inner circle. The procedure can be summarized as follows:
function ICCONTACT(Bμ1, Bμ2, σ1, σ2, δ0, δ1, δ2, δ3)
    align(Bμ2)
    0z ← planeContact(Bμ1, σ1, δ0)
    goto(Bμ2, δ0 ẑ)
    goto(Bμ2, 0)
    for i ← [0, …, δ2 − 1] do
        vi ← σ cos(i δ1/δ2) x̂ + σ sin(i δ1/δ2) ŷ − δ3 ẑ
        wi ← simpleContact(Bμ2, vi)
    end for
    goto(Bμ2, δ0 ẑ)
    Φ ← Φ(w0, …, wδ2−1)                    ▷ cfr. Eq. (17)
    A ← A(w0, …, wδ2−1)
    [q, ρ² − q⊤q]⊤ ← (A⊤ A)−1 A⊤ Φ
    return (0z)−1 (qx x̂ + qy ŷ + 0zz ẑ)
end function

Fig. 14. The parameters for the sphere contact procedures.
For the outer circle, the second mask is a slice of a hollow cylinder, which represents the convex hull of the space reached by the touching probe. The inner radius of this mask should lie on the lateral surface of the geometric feature to be measured. With respect to the previous situation, a new parameter is necessary: the thickness of the cylinder of the mask. The procedure details are not reported for the sake of brevity, while the mask parameters are reported in Fig. 10.

4.6. Corner
Fig. 15. A frame of the video used for benchmarking.
Corners are identified through three simple contacts on orthogonal faces. The procedure shows a mask with a cubic shape. The cube has one highlighted corner, which is the target corner, and the mask reference is centered in it. The reconstructed reference has its origin at the intersection of the three planes of the active reference, which is the reason why this procedure has to be performed after a plane and a line procedure in order to be used with a general orientation. A double tap on the mask changes the highlighted target corner; this changes the approach sequence of the procedure. The mask dimensions show the extremes at which the touching probe approaches the material to perform the contacts. The cube has a depth that extends below the marker plane. The procedure has two parameters: δ1 > 0, which is the dwell position of the probe above the material in each direction, and δ2 ∈ [0, …, 3], which represents the corner to be touched, i.e. the corner of the front face of the cubic mask. The effect of δ2 is to apply a rotation of δ2 π/4 to all the declared points, as pictured in Fig. 11. The reference of the mask has its origin placed in the center of the selected corner. The procedure function is not reported for the sake of brevity.
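The corner selection can be illustrated numerically: applying the rotation of δ2 π/4 around ẑ to the declared points is a plain rotation-matrix product. A minimal NumPy sketch, with a function name of our own choosing:

```python
import numpy as np

def rotate_corner_points(points, delta2):
    """Rotate the declared contact points about the z axis by
    delta2 * pi/4, selecting the target corner (delta2 in 0..3)."""
    a = delta2 * np.pi / 4
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    return points @ rz.T
```

For example, δ2 = 2 rotates a point at (1, 0, z) to (0, 1, z), leaving the ẑ component untouched.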
It is possible to manipulate Eq. (15) into the following expression:

$$ \underbrace{\begin{pmatrix} v_0^\top v_0 \\ \vdots \\ v_{\delta_2-1}^\top v_{\delta_2-1} \end{pmatrix}}_{\Phi} = \underbrace{\begin{pmatrix} 2 v_0^\top & 1 \\ \vdots & \vdots \\ 2 v_{\delta_2-1}^\top & 1 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} q \\ \rho^2 - q^\top q \end{pmatrix}}_{Q} \tag{16} $$
The unknowns are contained in Q. If A has full rank, the solution is obtained through a left pseudo-inverse, that is, the solution of the associated least squares problem:

$$ Q = (A^\top A)^{-1} A^\top \Phi \tag{17} $$
To guarantee the existence of a solution with samples collected on the same plane, the ẑ component has to be dropped from the formulation, since it is useless in this particular case: the ẑ component is inherited from the plane procedure. The modified minimization problem is identified by over-lined symbols (Φ̄, Q̄, Ā). Even if the radius is reconstructed, it is currently dropped, since the CNC interface does not expose a unified method for saving variables. The returned reference frame has the same orientation as the plane.
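The fit of Eqs. (16) and (17), restricted to the plane by dropping the ẑ component, can be sketched with NumPy as follows; names and sample data are illustrative, not the paper's code:

```python
import numpy as np

def fit_circle_center(points_xy):
    """Least-squares center fit of Eqs. (16)-(17): rows of A are
    [2 v_i^T, 1], Phi stacks v_i^T v_i, and Q = [q; rho^2 - q^T q]."""
    v = np.asarray(points_xy, dtype=float)        # (n, 2), z dropped
    A = np.hstack([2.0 * v, np.ones((len(v), 1))])
    Phi = np.sum(v * v, axis=1)
    Q, *_ = np.linalg.lstsq(A, Phi, rcond=None)   # least-squares solution
    q = Q[:2]
    rho = np.sqrt(Q[2] + q @ q)                   # radius, recovered but unused
    return q, rho

# Noise-free check: points on a circle centered at (3, -2) with radius 5
t = np.linspace(0.0, 2.0 * np.pi, 7, endpoint=False)
q, rho = fit_circle_center(np.c_[3 + 5 * np.cos(t), -2 + 5 * np.sin(t)])
```

With exact samples the fit returns the true center and radius; with real probe touches it returns the least-squares estimate.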
Fig. 16. Projections of the 3D trajectory used as benchmark. The origins of the markers are also reported.
Fig. 12 shows the graphical representation of the procedure.
Table 1
Comparison of speed (in frames per second) and reliability (percentage of frames identified with respect to the total of 21,599).

               ARTool            ARUCO
Speed          114.5 fps         94.3 fps
Reliability    98.9% (21,380)    86.8% (18,739)
4.7. Sphere

The mask of the sphere procedure is a slice of a hemisphere. The center of the hemisphere is identified through the least squares problem expressed in Eq. (17). The resulting coordinate system inherits the same orientation of the currently active reference, while the origin is the center of the hemisphere. The procedure has several parameters, which determine the grid of contact points; they are shown in Fig. 14:

• the polar initial angle of the sphere: 0 < δ1 < π/2,
• the polar span of the sphere: 0 < δ2 < π/2 − δ1,
• the azimuthal initial angle: 0 < δ3 < 2π,
• the azimuthal span angle: 0 < δ4 < 2π − δ3,
• the subdivisions parameter: δ5 ⩾ 0,
• the ẑ elevation of the hemisphere: δ6 ⩾ 0,
• the dwell position for the probe: δ7 ⩾ 0.

Using the maximum possible values for δ2 and δ4, the mask is a full hemisphere. There are two additional procedures used inside the sphere contact. The first one evaluates the subdivisions for the angles to be reached, as in Fig. 13. The second is the moving strategy, which emulates a circular interpolation. The strategy uses simple goto calls, in such a way that the probe can move over the hemisphere surface without colliding.

function SPHERECONTACT(Bμ, σ, δ1, δ2, δ3, δ4, δ5, δ6, δ7)
    align(Bμ)
    for i, θi, ψi ← subdivision(δ1, δ2, δ3, δ4, δ5) do
        vi ← σ (sin(θi)cos(ψi) x̂ + sin(θi)sin(ψi) ŷ + cos(θi) ẑ) + δ6 ẑ
        wi ← vi + δ7 (sin(θi)cos(ψi) x̂ + sin(θi)sin(ψi) ŷ + cos(θi) ẑ)
        moveStrategy(wi)
        wi ← simpleContact(Bμ, vi)
    end for
    Φ ← Φ(w0, …, w(2+δ5)²)                 ▷ cfr. Eq. (17)
    A ← A(w0, …, w(2+δ5)²)
    [q, ρ² − q⊤q]⊤ ← (A⊤ A)−1 A⊤ Φ
    0r ← getFrame()
    return (0r (q, 1)⊤)
end function

5. Ego-localization accuracy

One of the critical aspects of the application is the ego-localization of the device with respect to its surroundings. This procedure, at the very core of the ARTool framework, has been extensively tested in Setti et al. (2018a, 2018b). This section summarizes the results about the ego-localization accuracy. The ground truth is provided by a professional-level Motion Capture system (MoCAP OptiTrack, equipped with 8 Prime 13 cameras running at 120 fps). For the localization test, a MoCAP 3D reference is attached to the iPad, which records a video of a board of 4 markers. At least one marker is always framed during the video (see Fig. 15). The recorded video is then used to benchmark the ARTool core library against the well-known ARUCO library (Garrido-Jurado, Muñoz-Salinas, Madrid-Cuevas, & Marín-Jiménez, 2014). The projections of the benchmark trajectory along the principal directions are depicted in Fig. 16, where the reference frames of the markers are also presented. The ARUCO localization presents instabilities, and on several occasions it is not able to reconstruct the pose of the tablet; in particular, between frames 5672 and 5807 it completely loses the tracking. The missing ARUCO trajectory is approximated linearly between the last known and the first new localization. This segment is the main cause of the differences reported in Table 1, where the reliability of ARTool is noticeable: it almost never drops track of the marker, scoring a quite high reliability index (98.9%). Fig. 17 shows a comparison of the three trajectories and the errors of the trajectories detected by the libraries under investigation with respect to the MoCAP one. In Fig. 18, histograms report the error probability distributions. As for positions, the error distribution of ARUCO tends to be larger, with a mode that diverges slightly from zero. The numerical analysis is reported in Table 2. Regarding attitude estimation, the performance of the two libraries can be considered comparable.

Fig. 17. Ego-localization errors. On the left, position plots and the errors between the marker libraries and the MoCAP are reported. On the right, Euler's angles and their errors are plotted. ARUCO fails the identification between frames 5,672 and 5,807 (vertical hatched band).
Fig. 18. Ego-localization error distributions. The left column contains positions, while the right column contains Euler's angles.
Table 2
Statistical indicators for the error distributions (mean μ, standard deviation σ and kurtosis k).

                  μ               σ               k
ARTool
x (mm)       −3.10            5.38            9.72
y (mm)        1.22            4.33            8.75
z (mm)        9.37 × 10⁻¹     5.66            3.59
α (rad)       1.92 × 10⁻²     2.99 × 10⁻¹     1.33 × 10²
β (rad)      −9.07 × 10⁻⁴     2.23 × 10⁻²     2.38
γ (rad)       2.13 × 10⁻²     3.84 × 10⁻¹     1.52 × 10²
ARUCO
x (mm)       −6.04            2.93 × 10¹      7.58 × 10¹
y (mm)        4.05            2.05 × 10¹      5.85 × 10¹
z (mm)        4.72            8.20            5.67
α (rad)       4.07 × 10⁻²     3.09 × 10⁻¹     1.09 × 10²
β (rad)      −4.20 × 10⁻³     5.17 × 10⁻²     4.75 × 10¹
γ (rad)       1.67 × 10⁻²     3.39 × 10⁻¹     1.53 × 10²

6. Conclusions

The work shows the implementation of a novel interface for manufacturing machines, which allows operators to quickly develop complex part-programs for touching probes in an augmented interface. The main objective of ARTool Zero is the reduction of the setup time for manufacturing processes that require machine alignment. Simulations in the augmented interface and 3D workspace input are the main achievements of the application. The ARTool framework, developed as a visualization system for shop-floor machinists, has been expanded with input capabilities without employing further hardware. A marker acts as a movable virtual plane in the machine workspace, onto which it is possible to project points of the tablet display. This capability has been employed to create an interface for declaring part-programs for touching probes, and the work has shown the implementation of specific procedures for peculiar geometrical entities (holes, corners, spheres, etc.), reducing programming time, process setup time and chances of collision. ARTool Zero requires an abstraction layer that allows commanding axes movements and retrieving the references determined by the touching probe from the machine reference table. Since this interface is extremely machine dependent, no specific implementation is provided, while a minimum set of operations is clearly stated in terms of Application Programming Interface functions. The ARTool specification already provides for a client that connects the machine to the ARTool network, thus such an interface is already included in the existing hardware. The ARTool framework allows simulating the trajectory of the touching probe, localized in the machine workspace. The simulation permits a clearer identification of possible contacts of the probe with the workpiece and fixtures. The references constructed using ARTool Zero are used to align the workpiece in the machine workspace, and to perform machining operations with respect to precisely measured workpiece geometrical entities.
Appendix A. Supplementary material
Supplementary data associated with this article can be found, in the online version, at https://doi.org/10.1016/j.cie.2018.07.026.

References

Bondrea, I., & Petruse, R. (2013). Augmented reality – An improvement for computer integrated manufacturing. Advanced Materials Research, 628, 330–336.
Büttner, S., Sand, O., & Röcker, C. (2015). Extending the design space in industrial manufacturing through mobile projection. In MobileHCI 2015 – Proceedings of the 17th international conference on human-computer interaction with mobile devices and services adjunct (pp. 1130–1133).
Chong, J., Ong, S., Nee, A., & Youcef-Youmi, K. (2009). Robot programming using augmented reality: An interactive method for planning collision-free paths. Robotics and Computer-Integrated Manufacturing, 25(3), 689–701.
Ćuković, S., Devedžić, G., Pankratz, F., Baizid, K., Ghionea, I., & Kostić, A. (2015). Augmented reality simulation of CAM spatial tool paths in prismatic milling sequences. IFIP Advances in Information and Communication Technology, 467, 516–525.
Doshi, A., Smith, R., Thomas, B., & Bouras, C. (2016). Use of projector based augmented reality to improve manual spot-welding precision and accuracy for automotive manufacturing. International Journal of Advanced Manufacturing Technology, 1–15.
Elia, V., Gnoni, M., & Lanzilotto, A. (2016). Evaluating the application of augmented reality devices in manufacturing from a process point of view: An AHP based model. Expert Systems with Applications, 63, 187–197.
Fang, H., Ong, S., & Nee, A. (2012a). Robot path and end-effector orientation planning using augmented reality. In Procedia CIRP (Vol. 3, pp. 191–196).
Fang, H., Ong, S., & Nee, A. (2012b). Interactive robot trajectory planning and simulation using augmented reality. Robotics and Computer-Integrated Manufacturing, 28(2), 227–237.
Fiorentino, M., Uva, A., Gattullo, M., Debernardis, S., & Monno, G. (2014). Augmented reality on large screen for interactive maintenance instructions. Computers in Industry, 65(2), 270–278.
Gao, X.-S., Hou, X.-R., Tang, J., & Cheng, H.-F. (2003). Complete solution classification for the perspective-three-point problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(8), 930–943.
Garrido-Jurado, S., Muñoz-Salinas, R., Madrid-Cuevas, F., & Marín-Jiménez, M. (2014). Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47(6), 2280–2292.
Grzesik, W. (2008). Chapter fourteen – Machining economics and optimization. In W. Grzesik (Ed.), Advanced machining processes of metallic materials (pp. 199–212). Amsterdam: Elsevier.
Hartley, R. I., & Zisserman, A. (2004). Multiple view geometry in computer vision (2nd ed.). Cambridge University Press. ISBN: 0521540518.
Jozef, N.-M., Miroslav, J., & Ludmila, N.-M. (2014). Augmented reality aided control of industrial robots. Advanced Materials Research, 1025–1026, 1145–1149.
Kalpakjian, S., Schmid, S. R., & Kok, C.-W. (2008). Manufacturing processes for engineering materials. Pearson-Prentice Hall.
Meden, B., Knodel, S., & Bourgeois, S. (2014). Markerless augmented reality solution for industrial manufacturing. In ISMAR 2014 – IEEE international symposium on mixed and augmented reality – Science and technology 2014. Proceedings (pp. 359–360).
Monroy Reyes, A., Vergara Villegas, O., Miranda Bojórquez, E., Cruz Sánchez, V., & Nandayapa, M. (2016). A mobile augmented reality system to support machinery operations in scholar environments. Computer Applications in Engineering Education, 24(6), 967–981.
Nee, A., & Ong, S. (2013). Virtual and augmented reality applications in manufacturing. In IFAC proceedings volumes (IFAC-PapersOnline) (pp. 15–26).
Olwal, A., Gustafsson, J., & Lindfors, C. (2008). Spatial augmented reality on industrial CNC-machines. In Proc. SPIE (Vol. 6804, pp. 680409–680409-9).
Ong, S., Pang, Y., & Nee, A. (2007). Augmented reality aided assembly design and planning. CIRP Annals – Manufacturing Technology, 56(1), 49–52.
Ong, S., Yuan, M., & Nee, A. (2008). Augmented reality applications in manufacturing: A survey. International Journal of Production Research, 46(10), 2707–2742.
Ramírez, H. B., Mendoza, E., Mendoza, M., & González, E. (2015). Application of augmented reality in statistical process control, to increment the productivity in manufacture. In Procedia computer science (Vol. 75, pp. 213–220).
Setti, A., Bosetti, P., & Ragni, M. (2018a). ARTool – Augmented reality human-machine interface for machining setup and maintenance. Cham: Springer International Publishing, pp. 131–155.
Setti, A., Bosetti, P., & Ragni, M. (2018b). ARTool – Augmented reality platform for machining setup and maintenance. Cham: Springer International Publishing, pp. 457–475.
Suárez-Warden, F., Mendívil, E., Ramírez, H., Garza Nájera, L., & Pantoja, G. (2015). Mill setup manual aided by augmented reality. In Mechanisms and machine science (Vol. 25, pp. 433–441).
Syberfeldt, A., Danielsson, O., Holm, M., & Wang, L. (2016). Dynamic operator instructions based on augmented reality and rule-based expert systems. In Procedia CIRP (Vol. 41, pp. 346–351).
Vignais, N., Miezal, M., Bleser, G., Mura, K., Gorecky, D., & Marin, F. (2013). Innovative system for real-time ergonomic feedback in industrial manufacturing. Applied Ergonomics, 44(4), 566–574.
Weinert, K., Zabel, A., Ungemach, E., & Odendahl, S. (2008). Improved NC path validation and manipulation with augmented reality methods. Production Engineering, 2(4), 371–376.
Zhang, J., Ong, S., & Nee, A. (2010). A multi-regional computation scheme in an AR-assisted in situ CNC simulation environment. CAD Computer Aided Design, 42(12), 1167–1177.