Parallel manipulator kinematics learning using holographic neural network models




Robotics and Computer-Integrated Manufacturing 14 (1998) 37 — 44

Roger Boudreau*, Glen Levesque, Salah Darenfed
École de génie, Université de Moncton, Moncton, New Brunswick, Canada E1A 3E9
Received 7 December 1995; accepted 13 June 1997

Abstract
The forward kinematic problem of parallel manipulators is resolved using a holographic neural paradigm. In a holographic neural model, stimulus–response (input–output) associations are transformed from the domain of real numbers to the domain of complex vectors. An element of information within the holographic neural paradigm has a semantic content represented by phase information and a confidence level represented by the magnitude of the complex scalar. Networks are trained on a database generated from the closed-form inverse kinematic solutions. After the learning phase, the networks are tested on trajectories which were not part of the training data. The simulation results, given for a planar three-degree-of-freedom parallel manipulator with revolute joints and for a spherical three-degree-of-freedom parallel manipulator, show that holographic neural network models are a feasible means of solving the forward kinematic problem of parallel manipulators. © 1998 Elsevier Science Ltd. All rights reserved.

Keywords: Parallel manipulators; Forward kinematics; Holographic neural networks; Learning systems

1. Introduction
Parallel manipulators [7,13,16] are composed of closed kinematic chains, in contrast to the simple open chains of serial manipulators. Parallel manipulators have the advantage of a much stiffer structure; the workspace, however, is smaller and the kinematics and dynamics are more complex than for serial manipulators. Contrary to serial manipulators, the inverse kinematic problem is simple for parallel manipulators, while the forward kinematic problem is complex. The non-linear mapping from known joint coordinates to an unknown Cartesian position is known as the forward kinematic problem, while the mapping in the inverse direction is known as the inverse kinematic problem. For parallel manipulators, the inverse kinematic problem can be solved in closed form, while the forward kinematic problem generally has no closed-form solution. Numerical iterative solutions [16], for example a Newton–Raphson scheme, and polynomial solutions

* Corresponding author. Tel.: 001 506 858 4300; Fax: 001 506 858 4082; e-mail: [email protected].
0736-5845/98/$19.00 © 1998 Elsevier Science Ltd. All rights reserved. PII: S0736-5845(97)00022-7

[11,12,18] have been used. For certain specific configurations or conditions, closed-form solutions have been found. For example, Stewart platforms have a closed-form solution when one rotational degree of freedom is decoupled from the other five degrees of freedom [25], or when additional displacement sensors are used [4,14]. Novel methods have also been applied to solve the forward kinematic problem of parallel manipulators. The forward kinematics of a Stewart platform was solved by Geng and Haynes [8] using a multiple neural network structure called a cascaded CMAC (Cerebellar Model Arithmetic Computer). Boudreau et al. [2] solved the forward kinematics of a planar three-degree-of-freedom parallel manipulator with revolute joints using polynomial learning networks. Boudreau and Turkkan [3] treated the forward kinematics of three three-degree-of-freedom parallel manipulators as an optimization problem using a genetic algorithm approach. In this paper, the forward kinematic problem of parallel manipulators is solved using a holographic neural paradigm [23,24]. Neural networks using a back-propagation approach [20] have been used to solve the kinematics of simple serial manipulators. Josin [15] and Nagarjuna and Soni [17] used this approach to solve the forward kinematics and the inverse kinematics of a serial


two-degree-of-freedom manipulator for which closed-form solutions are readily found. Boudreau et al. [2] tried this approach on a planar three-degree-of-freedom parallel manipulator with revolute joints without much success. The holographic models used here successfully learned the kinematic transformations of parallel manipulators. Example applications are given for a planar three-degree-of-freedom parallel manipulator with revolute joints and for a spherical three-degree-of-freedom parallel manipulator. The holographic network learns the topology of the mapping based on examples of the transformation. The performance of the networks can then be tested on unseen data. The accuracy of the networks depends on the number of training data points. First, holographic neural networks are introduced; then, for the two manipulators studied, a kinematic description and the learning procedure are presented, followed by results. Finally, a discussion of the structure of the networks and a conclusion are given.

2. Holographic neural model
Artificial neural networks (ANN) have recently emerged as a useful tool for learning a mapping from a given set of input–output associations. The network comprises a number of neurons organized in successive layers, with the outputs of the neurons in each layer connected by synapses of variable weights to the inputs of the neurons of the next layer. Each neuron is a non-linear input–output unit with a sigmoidal transfer function which, being easily differentiable, lends itself to a gradient descent method called back-propagation. The error at the output of each layer is propagated back to the previous layer, and the weights are changed so as to decrease that error. The major problems encountered when using ANN are the prescription of the number of neurons in each layer and the number of hidden layers, and the susceptibility of the gradient descent to local minima and flat spots, resulting in considerable learning time.
The holographic neural networks (HNN) introduced in Ref. [23] present a viewpoint that is different from that of the classical ANN. With the HNN, the input–output associations are both learned and recalled in a non-connectionist and non-iterative transformation. The theoretical basis of the HNN design is that it employs the mathematics of complex numbers in a manner similar to holography. HNN are described in general terms in the following; further details are given in Ref. [24].
A sequence of scalar input–output (stimulus–response) data fields is transformed into a set of complex vectors oriented about the origin in the complex plane and bounded within a unit circle (s_k → λ_k e^(iξ_k)). Each vector operates within two degrees of freedom, i.e. phase (ξ_k) and magnitude (λ_k). Each element of information within this complex domain contains a representation of both the analog information value (phase ξ_k), in this case representative of robot position and/or orientation, and an associated confidence level (magnitude λ_k). The values of magnitude are typically bounded within a probabilistic range (i.e. 0.0–1.0); the magnitude represents the weight allocated to the phase value. For phase angles to represent scalar values that extend over an unrestricted range, a scaling transformation is used, ξ_k → 2π(1 + e^((μ − s_k)/σ))^(−1), where μ and σ are the mean and the variance of the data distribution, respectively. The sigmoid function provides a direct means of mapping the stimulus–response data fields into a closed circular range [0, 2π] about the complex plane. The symmetrical form of the sigmoid transformation achieves a uniform probabilistic distribution of vectors oriented about the origin in the complex plane. In the case of small input sets, the transformed complex input data fields can be expanded to higher-order combinatorial product terms or statistics. This operation is analogous to hidden layers within conventional ANN [23].
The encoding (learning) operation involves an association between the complex representations of the stimulus and response fields and is captured within a correlation matrix. This matrix is the result of the inner product of the complex conjugate of the stimulus matrix and the complex response matrix. Each complex element of the correlation matrix has a magnitude and a phase orientation. The magnitude represents a weighting of the learned input–output association within the unit circle. The phase is the difference between the response phase and the stimulus phase; this phase difference enables the HNN to express a component of analog response when presented with a previously learned analog stimulus.
Thus, the holographic process preserves information within the aggregate complex scalars, superposed or enfolded by complex summation. The objective is the recall of input–output associations stored within the cells of the complex-valued correlation matrix. Decoding, or response recall, is achieved by the complex inner product of the new input matrix and the correlation matrix. The previously encoded stimulus patterns displaying the greatest similarity to the new input pattern produce a predominant recognition response (in phase and magnitude). The remaining terms have a smaller magnitude and follow a path characteristic of a random walk in the ideal symmetrical case. Further mathematical details of the holographic neural method are given in Ref. [24]. In this study, the stimulus fields consist of the joint coordinates of the parallel manipulators, whereas the response fields consist of the corresponding Cartesian coordinates.
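The encode/recall cycle described above can be sketched numerically. The following is a minimal, illustrative reading of the holographic scheme applied to a toy one-dimensional mapping, not the HNet package itself: the cubic feature expansion, the normalization choices and the single-pass encoding are our assumptions.

```python
import numpy as np

def to_phase(v, mu, sigma):
    # Sigmoid scaling xi = 2*pi / (1 + exp((mu - v)/sigma)):
    # maps unbounded real values into the circular range (0, 2*pi).
    return 2 * np.pi / (1 + np.exp((mu - v) / sigma))

rng = np.random.default_rng(0)
s_train = rng.uniform(0.0, np.pi, 200)   # stimulus: a toy 1-D variable
r_train = np.sin(s_train)                # response the network should learn

# Expand the stimulus into higher-order product terms ("statistics").
feats = np.stack([s_train, s_train**2, s_train**3], axis=-1)
mu_f, sig_f = feats.mean(axis=0), feats.std(axis=0)
mu_r, sig_r = r_train.mean(), r_train.std()

# Complex stimulus and response fields (unit magnitude = full confidence).
S = np.exp(1j * to_phase(feats, mu_f, sig_f))        # shape (200, 3)
R = np.exp(1j * to_phase(r_train, mu_r, sig_r))      # shape (200,)

# Encoding: correlation cells from the inner product of conj(S) and R.
X = S.conj().T @ R / len(s_train)                    # shape (3,)

# Decoding (recall) for new stimuli, then invert the sigmoid scaling
# to map the recalled phase back to a real-valued response.
s_new = np.array([0.3, 1.0, 2.0])
f_new = np.stack([s_new, s_new**2, s_new**3], axis=-1)
recall = np.exp(1j * to_phase(f_new, mu_f, sig_f)) @ X
xi = np.mod(np.angle(recall), 2 * np.pi)
r_pred = mu_r - sig_r * np.log(2 * np.pi / xi - 1)   # inverse sigmoid
```

The dominant term of `recall` comes from the encoded stimuli most similar to the new input; its magnitude acts as the confidence weight described above.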


3. Applications

3.1. Planar three-degree-of-freedom manipulator with revolute joints

3.1.1. Kinematic analysis
A planar three-degree-of-freedom manipulator with revolute joints [9,22] is shown in Fig. 1. Motors M_1, M_2 and M_3 are fixed and are located on the vertices of an equilateral triangle. Triangle ABC constitutes the end effector of the manipulator. The position and orientation of the end effector are given by its centroid (x, y) and by the angle φ. When x, y and φ are given, the inverse kinematic problem consists of the determination of the motor angles θ_1, θ_2 and θ_3. The solution for θ_i can be found from Fig. 2. For the ith leg,

θ_i = α_i ± ψ_i, i = 1, 2, 3   (1)

where

α_i = atan2(y_2i, x_2i)   (2)

and

ψ_i = cos⁻¹[(l_1² − l_2² + x_2i² + y_2i²) / (2 l_1 (x_2i² + y_2i²)^(1/2))]   (3)

with ψ_i chosen such that 0 ≤ ψ_i ≤ π. The coordinates x_2i and y_2i are given by

x_2i = x − l_3 cos φ_i − x_oi   (4)
y_2i = y − l_3 sin φ_i − y_oi   (5)

The angles φ_i, i = 1, 2, 3, are

φ_1 = φ + π/6   (6)
φ_2 = φ + 5π/6   (7)
φ_3 = φ − π/2   (8)

The coordinates of the motors in this application are

(x_o1, x_o2, x_o3) = (0, 100, 50)   (9)
(y_o1, y_o2, y_o3) = (0, 0, 100 sin(π/3))   (10)

Fig. 1. Planar 3 dof manipulator with revolute joints.
Fig. 2. Kinematic description of the first leg of the manipulator.
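A direct transcription of Eqs. (1)–(10) makes the inverse kinematics concrete. The sketch below uses the dimensions given in the paper (l_1 = l_2 = 50, l_3 = 28.9); the function name and the `elbow` argument, which selects the ± branch of Eq. (1), are our own notation.

```python
import numpy as np

# Link lengths and motor positions from Section 3.1 (Eqs. (6)-(10)).
L1, L2, L3 = 50.0, 50.0, 28.9
XO = np.array([0.0, 100.0, 50.0])                       # x_oi, Eq. (9)
YO = np.array([0.0, 0.0, 100.0 * np.sin(np.pi / 3)])    # y_oi, Eq. (10)
PHI_OFF = np.array([np.pi / 6, 5 * np.pi / 6, -np.pi / 2])  # Eqs. (6)-(8)

def inverse_kinematics(x, y, phi, elbow=+1):
    """Motor angles theta_i for an end-effector pose (x, y, phi).
    elbow = +1 or -1 selects the +/- branch of Eq. (1)."""
    x2 = x - L3 * np.cos(phi + PHI_OFF) - XO            # Eq. (4)
    y2 = y - L3 * np.sin(phi + PHI_OFF) - YO            # Eq. (5)
    d2 = x2**2 + y2**2
    alpha = np.arctan2(y2, x2)                          # Eq. (2)
    psi = np.arccos((L1**2 - L2**2 + d2) / (2 * L1 * np.sqrt(d2)))  # Eq. (3)
    return alpha + elbow * psi                          # Eq. (1)

# A pose inside the well-conditioned region used later in the paper.
theta = inverse_kinematics(50.0, 30.0, 0.1)
```

A quick closure check: the elbow point of each leg, placed at distance l_1 from the motor along θ_i, must sit at distance l_2 from the leg's attachment point on the platform.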


Thus, each branch has two possible solutions, giving eight possible solutions to the inverse kinematic problem, while the forward kinematic problem has six possible solutions [11,19]. Learning is achieved for a given configuration of the manipulator in each branch, i.e. elbow up or elbow down, such that one solution, corresponding to a certain assembly mode, is found for the forward kinematic problem. The distance between the motors was set at 100 units. The dimensions used for the manipulator were l_i = l_i′ = 50, i = 1, 2, 3, with sides of triangle ABC of 50, which gives l_3 = l_3′ = l_3″ = 28.9.

3.1.2. Learning procedure
A configuration of the manipulator was chosen which corresponded to the plus sign for all three branches of Eq. (1). The inverse kinematic equations were used to compute the motor angles (θ_1, θ_2, θ_3) corresponding to different Cartesian coordinates (x, y, φ), thus generating a database for training the holographic networks. Databases of different sizes were generated to examine the influence of the number of points used in training. Learning was performed with the motor angles as the stimulus fields and the Cartesian coordinates as the response fields. The performances of the trained networks were tested on an entirely different evaluation database. In this database, different trajectories in the workspace such as


lines, a circle and an ellipse, were generated and the motor angles were computed from the inverse kinematic equations. The motor angles were then input to the networks to compare the predicted trajectory of the networks with the desired one.
To generate a database for learning, a region was chosen where the manipulator has good kinematic accuracy. The accuracy of control can be evaluated by computing the condition number of the Jacobian matrix [1,21]. For orientation angles −30° ≤ φ ≤ 30°, this manipulator is relatively well-conditioned when 35 ≤ x ≤ 65 and 20 ≤ y ≤ 40. For this region, databases were generated with points chosen on grids of 3×3, 4×4, 6×6 and 8×8, where each grid consisted of equally spaced points on both the x- and y-axes, covering the whole domain. At each point, orientation angles were generated in increments of 5° between ±30°, giving 13 orientation angles at each position. The inverse kinematic equations were used to compute the motor angles corresponding to each Cartesian position and orientation.
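The training-set construction just described (here the 8×8 position grid with 13 orientations per node) can be sketched as follows; the resulting pose list would then be passed through the inverse kinematics to obtain the stimulus fields.

```python
import numpy as np
from itertools import product

# Well-conditioned region from the text: 35 <= x <= 65, 20 <= y <= 40,
# orientations from -30 deg to +30 deg in 5 deg increments (13 angles).
xs = np.linspace(35.0, 65.0, 8)                 # 8 equally spaced x values
ys = np.linspace(20.0, 40.0, 8)                 # 8 equally spaced y values
phis = np.deg2rad(np.arange(-30.0, 31.0, 5.0))  # 13 orientation angles

# One (x, y, phi) row per training sample: 8 * 8 * 13 = 832 poses.
poses = np.array(list(product(xs, ys, phis)))
```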

3.1.3. Results
Fig. 3. Performance of network trained on 8×8 grid, position.
Fig. 4. Performance of network trained on 8×8 grid, orientation.
Fig. 5. Performance of the networks and effect of the number of training points, position.

The performance of the networks is shown in Figs. 3–6. Fig. 3 shows an ellipse centered at (50, 30) with a large axis of 28 and a small axis of 18. One hundred points were generated on the ellipse. For each point, an orientation angle was randomly generated between ±30°. The computed motor angles were then input to the networks to see which position and orientation were predicted. Fig. 3 shows the positional training points used, the desired trajectory, as well as the trajectory produced by the networks. Fig. 4 shows the desired orientation angles as well as the predicted orientation angles for the 100 points on the ellipse. Figs. 5 and 6 demonstrate the effect of the number of points used for training. Fig. 5 shows how close the solution found by the networks is to the desired trajectory for each database used for training. The units from desired output on the abscissa represent the Euclidean distance between the desired and predicted trajectories. Fig. 6 shows the same results for orientation, where the units are in degrees. As expected, the number of points close to the desired location increases as the number of training points increases. With the 8×8 grid, the average


error is 0.25 units and 0.2°, while the maximum error is 1.32 units and 0.93°, for position and orientation, respectively. Also, more than 90% of the points have an error of less than 0.5 units in position and less than 0.5° in orientation. In Figs. 5 and 6, not all the points are shown for the 3×3 and 4×4 grids since some of the points exhibited errors of more than 2 units.

3.2. Spherical three-degree-of-freedom parallel manipulator

3.2.1. Kinematic analysis
A general spherical three-degree-of-freedom parallel manipulator [5,6] is shown in Fig. 7. Three identical chains connect the base to the end effector; the base and the end effector form two pyramid modules. All rotations occur about the geometric center of the manipulator. Fixed motors actuating the revolute joints are located on the edges of the base pyramid. On each chain, two free revolute joints connect the proximal link to the distal link and the distal link to the end effector. The triangles at the base of each pyramid are equilateral. The angles between two edges on the base pyramid and between two edges on the end-effector pyramid are γ_1 and γ_2, respectively. β_i is the angle between the edges of a pyramid and a perpendicular from the base of the pyramid passing through the geometric center. Angles β_i and γ_i satisfy the following relation:

sin β_i = (2√3/3) sin(γ_i/2), i = 1, 2.   (11)

Angles α_1 and α_2, the angular lengths of the proximal and the distal links, respectively, are the same on each chain for symmetry.

Fig. 7. Spherical 3 dof parallel manipulator.

To solve the inverse kinematic problem, the notation of Gosselin et al. [12] is used. The unit vectors u_i, i = 1, 2, 3, are directed along the axes of the revolute joints of the base, and the unit vectors v_i, i = 1, 2, 3, are directed along the axes of the revolute joints of the end effector. The origin of the reference frame is at the geometric center. The unit vectors corresponding to the revolute joints connecting the proximal and distal links on each leg i are designated by w_i, i = 1, 2, 3. When this vector is written in the reference frame, it can be shown [12] that

w_i = [λ_1i  λ_2i  λ_3i]^T, i = 1, 2, 3   (12)

where

λ_1i = cos η_i cos θ_i sin α_1 + cos α_1 sin β_1 sin η_i + cos β_1 sin α_1 sin η_i sin θ_i   (13)
λ_2i = −cos α_1 cos η_i sin β_1 + cos θ_i sin α_1 sin η_i − cos β_1 cos η_i sin α_1 sin θ_i   (14)
λ_3i = −cos α_1 cos β_1 + sin α_1 sin β_1 sin θ_i   (15)

Angles θ_i, i = 1, 2, 3, represent the actuator angles, while angles η_i, i = 1, 2, 3, represent the angles between an axis in the fixed reference frame and the projections of the axes of the actuated joints on the base plane. Due to symmetry, these angles are written

η_1 = 0, η_2 = 2π/3, η_3 = 4π/3.   (16)

Fig. 6. Performance of the networks and effect of the number of training points, orientation.


The inverse kinematic problem is solved by the constraint equations

w_i · v_i = cos α_2, i = 1, 2, 3.   (17)

For a given orientation of the end effector, the unit vectors v_i, i = 1, 2, 3, are known, and this leads to three uncoupled quadratic equations:

A_i T_i² + 2 B_i T_i + C_i = 0, i = 1, 2, 3   (18)

where

T_i = tan(θ_i/2), i = 1, 2, 3   (19)

and

A_i = −cos η_i sin α_1 v_ix + cos α_1 sin β_1 sin η_i v_ix − cos α_1 cos η_i sin β_1 v_iy − sin α_1 sin η_i v_iy − cos α_1 cos β_1 v_iz − cos α_2   (20)
B_i = cos β_1 sin α_1 sin η_i v_ix − cos β_1 cos η_i sin α_1 v_iy + sin α_1 sin β_1 v_iz   (21)
C_i = cos η_i sin α_1 v_ix + cos α_1 sin β_1 sin η_i v_ix − cos α_1 cos η_i sin β_1 v_iy + sin α_1 sin η_i v_iy − cos α_1 cos β_1 v_iz − cos α_2   (22)

with v_ix, v_iy and v_iz being the components of vector v_i. Two solutions are thus obtained for angle θ_i, i = 1, 2, 3, for a given orientation. Gosselin et al. [12] found the polynomial solution to the forward kinematic problem of this manipulator, which has a maximum of eight different solutions.
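Eqs. (12)–(22) can be checked numerically. The sketch below takes the isotropic dimensions introduced in the next subsection (α_1 = α_2 = γ_1 = γ_2 = π/2) purely as example values, and solves the quadratic of Eq. (18) for one leg; the function names are ours.

```python
import numpy as np

a1 = a2 = np.pi / 2                                      # alpha_1, alpha_2
b1 = np.arcsin(2 * np.sqrt(3) / 3 * np.sin(np.pi / 4))   # Eq. (11), gamma_1 = pi/2
etas = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])     # Eq. (16)

def w_vec(theta, eta):
    """Unit vector w_i along the intermediate joint axis, Eqs. (12)-(15)."""
    l1 = (np.cos(eta) * np.cos(theta) * np.sin(a1)
          + np.cos(a1) * np.sin(b1) * np.sin(eta)
          + np.cos(b1) * np.sin(a1) * np.sin(eta) * np.sin(theta))
    l2 = (-np.cos(a1) * np.cos(eta) * np.sin(b1)
          + np.cos(theta) * np.sin(a1) * np.sin(eta)
          - np.cos(b1) * np.cos(eta) * np.sin(a1) * np.sin(theta))
    l3 = -np.cos(a1) * np.cos(b1) + np.sin(a1) * np.sin(b1) * np.sin(theta)
    return np.array([l1, l2, l3])

def solve_theta(v, eta):
    """Both actuator-angle solutions of Eqs. (18)-(22) for one leg."""
    vx, vy, vz = v
    A = (-np.cos(eta) * np.sin(a1) * vx + np.cos(a1) * np.sin(b1) * np.sin(eta) * vx
         - np.cos(a1) * np.cos(eta) * np.sin(b1) * vy - np.sin(a1) * np.sin(eta) * vy
         - np.cos(a1) * np.cos(b1) * vz - np.cos(a2))                    # Eq. (20)
    B = (np.cos(b1) * np.sin(a1) * np.sin(eta) * vx
         - np.cos(b1) * np.cos(eta) * np.sin(a1) * vy
         + np.sin(a1) * np.sin(b1) * vz)                                 # Eq. (21)
    C = (np.cos(eta) * np.sin(a1) * vx + np.cos(a1) * np.sin(b1) * np.sin(eta) * vx
         - np.cos(a1) * np.cos(eta) * np.sin(b1) * vy + np.sin(a1) * np.sin(eta) * vy
         - np.cos(a1) * np.cos(b1) * vz - np.cos(a2))                    # Eq. (22)
    T = np.roots([A, 2 * B, C])          # A*T^2 + 2*B*T + C = 0, Eq. (18)
    return 2 * np.arctan(np.real(T))     # invert T = tan(theta/2), Eq. (19)
```

For any unit vector v_i satisfying w_i · v_i = cos α_2, one of the two returned angles reproduces the actuator angle that generated w_i, which makes the transcription self-checking.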

3.2.2. Learning procedure
A manipulator with an isotropic configuration was chosen to generate data in order to train the networks. For α_1 = α_2 = γ_1 = γ_2 = π/2, the Jacobian matrix is perfectly conditioned [10] for equal values of θ_i, i = 1, 2, 3. In regions close to this configuration, the average conditioning is also very good. This isotropic condition is attained when the vertices of the base and end-effector pyramids are aligned, i.e. v_1 = −u_3, v_2 = −u_1, v_3 = −u_2.
To generate data, unit vectors v_i were expressed in a reference frame fixed to the base of the manipulator. The vectors were then rotated about the fixed X, Y and Z axes by angles φ_1, φ_2 and φ_3, respectively. The rotation transform is expressed by

Q = [ cφ_3 cφ_2    cφ_3 sφ_2 sφ_1 − sφ_3 cφ_1    cφ_3 sφ_2 cφ_1 + sφ_3 sφ_1 ]
    [ sφ_3 cφ_2    sφ_3 sφ_2 sφ_1 + cφ_3 cφ_1    sφ_3 sφ_2 cφ_1 − cφ_3 sφ_1 ]   (23)
    [ −sφ_2        cφ_2 sφ_1                      cφ_2 cφ_1                  ]

where cφ_1 = cos φ_1, sφ_1 = sin φ_1, etc. The transformed vectors were used to compute the input angles from Eqs. (18)–(22). Angles were generated for −30° ≤ φ_i ≤ 30°. The angles were generated on different grid patterns of 4×4×4, 5×5×5, 6×6×6 and 8×8×8, where each grid consisted of equally spaced points on each φ-axis. For each combination of orientation angles within the chosen domain, the inverse kinematic equations were used to compute the corresponding motor angles. To train the networks, the motor angles were then used as the stimulus fields and the orientation angles made up the response fields.

3.2.3. Results
The performance of the networks is shown in Figs. 8–10. Fig. 8 shows a helicoidal curve consisting of 250 points. For each point on the curve, the corresponding angles φ_i, i = 1, 2, 3, were used to compute the input motor angles. These motor angles were then input to the networks to see which orientation angles were predicted. Figs. 8 and 9 show the results obtained when using grids of 6×6×6 and 8×8×8, respectively. Fig. 10 contains a histogram which shows the effect of the number of training points on the performance of the networks. The units from the desired output represent the square root of the sum of the squared errors of the orientation angles, i.e. the distance between two points on the curve, in degrees. With the 8×8×8 grid, the average distance between two points is 0.31°, with the maximum distance being 0.96°.

4. Holographic neural network structure
As mentioned previously, HNNs have a non-connectionist structure in which the ability to superimpose a set of input–output associations is defined by the correlation matrix. This represents the basic encoding method. The system used in this study, the HNet Discovery Package™ described in Ref. [24], employs a process whereby learning is a function of the memory previously enfolded within the correlation cells. This enhanced learning process consists of three main stages. The first stage executes a response recall operation, whereby the new input set is transformed through the correlation set to generate a response set. The second stage computes the difference between the recalled response and the desired response for this new input. The last stage performs an encoding whereby the new input field is mapped to the desired output. This process represents one reinforcement learning iteration and may be performed several times to achieve convergence to the set of desired input–output associations. In this application, sigmoidal transformations are applied in both input and output fields. The stimulus fields
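The three-stage reinforcement just described (recall, error, re-encode) can be sketched as an iterative update of the correlation cells. This is our own complex-valued least-mean-squares reading of the process, with an illustrative decay of the learning rate from 100% toward 10%; it is not the HNet implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 8                                   # 64 associations, 8 stimulus cells
S = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, m)))   # complex stimulus fields
R = np.exp(1j * rng.uniform(0, 2 * np.pi, n))        # desired complex responses

X = np.zeros(m, dtype=complex)                 # correlation cells, initially empty
lr = 1.0                                       # learning rate: 100% initially
for _ in range(200):
    recall = S @ X / m                         # stage 1: response recall
    err = R - recall                           # stage 2: difference from desired
    X += lr * S.conj().T @ err / n             # stage 3: encode the residual
    lr = max(0.1, lr * 0.98)                   # gradual reduction toward 10%

final_err = np.mean(np.abs(R - S @ X / m))
```

Because every association is enfolded into the same cells, repeated passes reduce the interference between stored patterns; with more associations than cells the recall error settles at a residual rather than vanishing.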


Fig. 8. Performance of network trained on 6×6×6 grid.

Fig. 9. Performance of network trained on 8×8×8 grid.

Fig. 10. Performance of networks and effect of the number of training points.


for both manipulators are expanded to higher-order product terms or "statistics" of 60 and 300. The learning rate is set to 100% initially and is reduced gradually to 10% during training.

5. Conclusion
A good learning scheme exhibits an ability to generalize when presented with a limited set of input–output associations. The holographic model displays such a property of generalization: following training on a small data set, a large region of the input angle space is mapped onto the desired output space. Figs. 5, 6 and 10 illustrate this generalization. The accuracy of the response on complex trajectory motions, following training on a small database, indicates that generalization has taken place. These figures also clearly indicate the learning ability of the networks, since their performance improves when more points are used. A major advantage of HNN compared to ANN is that the structure of the HNN does not have to be specified a priori. Learning time is also considerably reduced. Previous uses of ANN to solve this type of problem involved simple manipulators with closed-form solutions; the forward kinematic problems of both manipulators solved here have no closed-form solution. The simulation results show that this new neural network paradigm is a feasible means of solving the forward kinematic problem of parallel manipulators.

References
[1] Angeles J, Rojas AA. Manipulator inverse kinematics via condition number minimization and continuation. The Int. J. of Robotics and Automation 1987;2:61–9.
[2] Boudreau R, Darenfed S, Gosselin C. Efficient computation of the direct kinematics of parallel manipulators using polynomial networks. Proceedings of the ASME Mechanisms Conference 1994;72:263–70.
[3] Boudreau R, Turkkan N. Solving the forward kinematics of parallel manipulators with a genetic algorithm. Journal of Robotic Systems 1996;13(2):111–25.
[4] Cheok KC, Overholt JL, Beck RR. Exact methods for determining the kinematics of a Stewart platform using additional displacement sensors. Journal of Robotic Systems 1993;10(5):689–707.
[5] Cox DJ, Tesar D. The dynamic model of a three-degree-of-freedom parallel robotic shoulder module. Proceedings of the 4th International Conference on Advanced Robotics 1989;13–15.
[6] Craver WM. Structural analysis and design of a three-degree-of-freedom robotic shoulder module. Master's Thesis, The University of Texas at Austin, 1989.
[7] Fichter EF. A Stewart platform-based manipulator: general theory and practical construction. The Int. J. of Robotics Research 1986;5(2):157–82.
[8] Geng Z, Haynes LS. Neural network solution for the forward kinematics problem of a Stewart platform. Robotics and Comp.-Int. Manuf. 1992;9(6):485–95.
[9] Gosselin C, Angeles J. The optimum kinematic design of a planar three-degree-of-freedom parallel manipulator. ASME J. Mechanisms, Transmissions, Automation Des. 1988;111(2):202–07.
[10] Gosselin CM, Lavoie E. On the kinematic design of spherical three-degree-of-freedom parallel manipulators. The Int. J. of Robotics Research 1993;12(4):394–402.
[11] Gosselin CM, Sefrioui J, Richard MJ. Solutions polynomiales au problème géométrique direct des manipulateurs parallèles plans à trois degrés de liberté. Mechanism and Machine Theory 1992;27(2):107–19.
[12] Gosselin CM, Sefrioui J, Richard MJ. On the direct kinematics of general spherical three-degree-of-freedom parallel manipulators. Proceedings of the ASME Mechanisms Conference 1992;7–11.
[13] Hunt KH. Structural kinematics of in-parallel-actuated robot arms. ASME J. Mechanisms, Transmissions, Automation Des. 1983;105(4):705–12.
[14] Jin Y. Exact solution to the forward kinematics of the general Stewart platform using two additional displacement sensors. Proceedings of the ASME Mechanisms Conference 1994;72:491–95.
[15] Josin GM. Neural-space generalization of a topological transformation. Biological Cybernetics 1988;59:283–90.
[16] Merlet JP. Les robots parallèles, Traité des nouvelles technologies, série robotique. France: Hermès, 1990.
[17] Nagarjuna PK, Soni AH. Solution to the forward and inverse kinematic equations using multilayer back-propagation neural network. Proceedings of the ASME Flexible Assembly Conference 1994;73:29–33.
[18] Parenti-Castelli V, Innocenti C. Forward displacement analysis of parallel mechanisms: closed form solution of PRR-3S and PPR-3S structures. ASME J. Mech. Des. 1992;114:68–73.
[19] Peysah EE. Determination of the position of the member of three-joint and two-joint four member Assur groups with rotational pairs. Machinery 1985;5:55–61 (in Russian).
[20] Rumelhart DE, McClelland JL, the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols. 1 & 2. Cambridge, MA: MIT Press, 1986.
[21] Salisbury JK, Craig JJ. Articulated hands: force control and kinematic issues. The Int. J. of Robotics Research 1982;1(1):4–17.
[22] Shirkhodaie AH, Soni AH. Forward and inverse synthesis for a robot with three degrees of freedom. Proceedings of the 1987 Summer Computer Simulation Conference, Montreal, 1987;851–856.
[23] Sutherland JG. The holographic model of memory, learning and expression. Int. J. Neur. Syst. 1990;1(3):256–67.
[24] Sutherland JG. The holographic neural model. In: Soucek B and The IRIS Group (Eds.), Fuzzy, Holographic and Parallel Intelligence, The Sixth-Generation Computer Technology Series, Chap. 2. New York: Wiley, 1992.
[25] Zhang C, Song S-M. Forward kinematics of a class of parallel (Stewart) platforms with closed-form solutions. Proceedings of the IEEE International Conference on Robotics and Automation 1991;2676–2681.