Neurocomputing 74 (2011) 3335–3342
Application of arachnid prey localisation theory for a robot sensorimotor controller

S.V. Adams a,*, T. Wennekers a, G. Bugmann a, S. Denham b, P.F. Culverhouse a

a Centre for Robotics and Neural Systems, School of Computing and Mathematics, University of Plymouth, PL4 8AA Plymouth, United Kingdom
b School of Psychology, University of Plymouth, PL4 8AA Plymouth, United Kingdom
Article info

Article history: Received 14 December 2010; received in revised form 9 May 2011; accepted 15 May 2011; available online 28 June 2011. Communicated by R. Tadeusiewicz.

Abstract

We extend an existing spiking neural model of arachnid prey orientation sensing with a view to potentially using it in robotics applications. Firstly, we have added 'motor' behaviour by implementing a simulated arachnid in a physics simulation, so that sensory signals from the neural model can be translated into movement to orient towards the prey. We have also created a spiking neural distance estimation model with a complementary motor model that enables walking towards the prey. Results from testing the neural and motor aspects show that the neural models can represent the actual prey angle and distance to a high degree of accuracy: an average error of approximately 7° in estimating the prey angle and 1 cm in the estimation of distance to the prey. The motor models consistently show the correct turning and walking responses, but the overall accuracy is reduced, with an average error of around 15° for angle and 1.25 cm for distance. In the case of orientation this is still in line with the error of between 12° and 15° that has been observed in real arachnids. © 2011 Elsevier B.V. All rights reserved.
Keywords: Spiking neural network; Sensorimotor coordination; Robot controller; Arachnid prey localisation
1. Introduction

Navigation and localisation tasks in robotics generally require some sort of sensorimotor coordination. In the simplest case, infrared (IR) or bump sensors can be set up to send information directly to wheel motors to steer the robot toward or away from objects in the environment. An example of this type of architecture is the Braitenberg vehicle [1], where connections between sensors and wheel motors can be set up to produce various 'behaviours' such as attraction and avoidance. More sophisticated implementations have used evolutionary techniques to evolve more complex controllers; for example, the work described in Ref. [2] attempted to mimic how adaptation occurs in natural systems by evolving controllers for a progressively more demanding set of tasks, ranging from simple forward locomotion to distinguishing between two shapes in the environment. Following experiments with simulated agents, the methods were replicated on wheeled robots and performed equally well. Another approach, described in Ref. [3], concentrated on learning sensorimotor coordination by direct interaction with the environment. Here an elegant and simple spiking neural network model including synaptic plasticity was used for obstacle avoidance, implemented on several versions of a wheeled robot. The design of the neural network was inspired by
* Corresponding author. Tel.: +44 1752 586294; fax: +44 1752 232540. E-mail address: [email protected] (S.V. Adams).

0925-2312/$ - see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.neucom.2011.05.020
the behaviour of the sea slug (Aplysia), which, despite being a very simple organism, is capable of associative and non-associative learning. In a first set of experiments the robots were equipped only with left and right bump sensors, and the neural system was wired to include a reflex response to back away after bumping into an obstacle. In subsequent experiments the neural network was rewired to include input from infrared (IR) sensors, and presynaptic facilitation was added. The results of these experiments showed that first and second order classical conditioning were possible: the robot first 'learned' (via synaptic changes) to associate IR sensor input with bumping into objects and thus to avoid them, and subsequently it learned to associate obstacles with a second order stimulus (their shadow). An interesting approach not involving neural methods is described in Ref. [4], where vibration signals are used as a communication method between a group of wheeled robots to allow them to locate each other. Two signals in different frequency bands are transmitted, and they are detected by a matched filter technique which calculates the cross-correlation between the received and expected signals. To perform more complex tasks such as localisation (enabling the robot to sense its own location in the environment) and navigation (obstacle detection and avoidance), vision systems are usually employed. An overview of robot localisation and object recognition using two popular techniques, SLAM (Simultaneous Location and Mapping) and SIFT (Scale Invariant Feature Transform), is given in Ref. [5]. Metrical SLAM approaches construct a grid-based map where each cell has a probability of containing
an object. Downsides of this method are that it is memory-hungry and very dependent on sensor robustness. Topological SLAM instead uses a graph-based approach in which relationships between significant places are stored. This approach is less memory-hungry and of lower complexity than metrical SLAM, but graph updating is difficult to do accurately [5]. SIFT is concerned with both the detection and recognition of objects and requires that a database of object features (SIFT descriptors) is constructed for matching against real objects; an important feature of SIFT is that detection and recognition can be performed simultaneously. Although the approaches discussed above are successful, current robot capabilities still cannot match those of real organisms. The work presented here takes a slightly different approach to robot sensorimotor coordination and uses methods inspired by how a real organism solves a sensorimotor coordination problem. The field of Computational Neuroethology encompasses the modelling of real animal behaviour grounded in biologically realistic neural models [6]. Such modelling is extremely useful in robotics research as it can provide insights into how nature has equipped animals with efficient survival strategies and, moreover, how fairly complex behaviours can be generated by minimal neural architectures: natural systems achieve speed, fault tolerance and flexibility with low power requirements, and solve problems we find very difficult to implement on machines. An important component of Computational Neuroethology is to model situations where entire animal 'behaviours' are generated from interaction with the environment, i.e. 'closing the external feedback loop from motor output and sensory input' [6]. An additional benefit of making a serious study of natural sensorimotor systems is to anticipate the future direction of robotic hardware, in particular the field of Neuromorphic Engineering.
Advances in this area are now making it possible to simulate large neural networks in hardware in real time. Such 'neural chips' are massively parallel arrays of processors that can simulate thousands of neurons simultaneously in a fast, energy-efficient way and compute using methods similar to the way real neurons behave. It is therefore becoming much more feasible for researchers to implement biologically realistic models on board autonomous robots. The original contribution of the current work is to extend the neural model proposed in Refs. [7,8] in two ways: firstly, by creating a physics simulation and visualisation of a virtual arachnid and linking this to the neural model, so that vibration signals result in reflexive turning behaviour to face the direction of a virtual prey; secondly, by adding a neural distance sensing mechanism and a complementary motor system to cause walking towards the prey following an orientation movement. The work of Ref. [9] describes a previous implementation of localisation for a hexapod robot which borrows some of the orientation sensing ideas of Refs. [7,8] but does not use a neural approach; there, finding the location of a vibration source is done with a completely separate system based on radio beacons. In our work we add a prey distance estimation mechanism based upon the same biological neural network used for the orientation sensing, with the aim of explaining or predicting how the real animal might achieve distance sensing given what we know of its sensors and neurobiology. An integrated orientation and distance estimation method opens the future possibility of full localisation and tracking behaviour for robots using a minimal-architecture spiking neural network. The structure of this paper is as follows: Section 2 gives an overview of the biological theory behind the model of arachnid orientation behaviour and previous work that has developed computational models of it.
Section 3 describes how the model in Ref. [7] has been extended by adding orientation motor behaviour in a physics simulation of an arachnid.
Section 4 explains the rationale for and implementation of a distance estimation method based upon biological evidence from vibration detection experiments with real arachnids. Section 5 presents some results from testing the orientation and distance sensing behaviour of the model in response to a randomly placed prey. The final section summarises the performance of the current model and makes some suggestions for future work.
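Before turning to the biological model, the direct sensor-to-motor coupling of the Braitenberg vehicle [1] mentioned in the introduction can be sketched in a few lines. This is purely illustrative: the function name, gain and bias values below are our own and not taken from any of the cited works.

```python
# Minimal Braitenberg-style coupling: two sensor readings drive two wheel
# motors directly. Crossed excitatory wiring turns the vehicle towards the
# stronger stimulus ('attraction'); uncrossed wiring turns it away
# ('avoidance'). Gains and bias are illustrative.

def braitenberg_step(left_sensor, right_sensor, crossed=True, gain=1.0, bias=0.2):
    """Map two sensor readings (0..1) to (left, right) wheel speeds."""
    if crossed:   # each sensor excites the opposite wheel -> turn towards source
        left_motor = bias + gain * right_sensor
        right_motor = bias + gain * left_sensor
    else:         # each sensor excites the same-side wheel -> turn away
        left_motor = bias + gain * left_sensor
        right_motor = bias + gain * right_sensor
    return left_motor, right_motor

# Stimulus stronger on the right: crossed wiring speeds up the left wheel,
# steering the vehicle towards the stimulus.
print(braitenberg_step(0.1, 0.9))
print(braitenberg_step(0.1, 0.9, crossed=False))
```

Swapping `crossed` flips the behaviour from attraction to avoidance with no other change, which is the essential point of the Braitenberg architecture.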
2. Modelling prey orientation detection in arachnids

The work of Brownell et al. [10,11] examined the orientation behaviour of the desert scorpion, Paruroctonus mesaensis, which is nocturnal and able to locate prey purely by detecting vibrations carried by the sand substrate. The vibrations are picked up by detectors called Basitarsal Compound Slit Sensilla (BCSS), which are present on the tarsi of the scorpion's eight legs. The experiments consisted of measuring the orientation behaviour in response to artificially created mechanical vibration signals. In order to explain the neural basis of the orientation mechanism, they also looked at the results of blocking the signals to one or more legs at a time and observing the degradation in turning accuracy. The mathematical model described in Refs. [7,8] was based upon the findings of this experimental work and was able to reproduce results similar to those seen in the real animal. The model is based upon a Spiking Neural Network (SNN). In contrast to traditional Artificial Neural Networks (ANNs), which are rate-based, spiking neurons compute with pulses, much like real neurons do. In the simplest form of such models, the membrane voltage of a neuron increases as spikes are received from connected neurons. Once a threshold value is exceeded, the neuron spikes and the membrane voltage is reset. The neuron then gradually recovers during a refractory period until it is able to spike again. In Ref. [12], Maass demonstrated that SNNs are more powerful than ANNs as they can compute the same functions using fewer neurons. The original orientation model consists of a ring of eight sensory spiking neurons representing the BCSS mechanoreceptors present on each of the arachnid's legs. In the real animal the legs are held in a 'ready' stance at specific orientations relative to the body (±18°, ±54°, ±90°, ±140°).
These sensory neurons are linked by excitatory connections to eight command neurons that represent control structures in the Sub-Oesophageal Ganglion (SOG), a major component of the nervous system in arachnids. The model assumes that the command neurons are responsible both for integrating sensory signals and for executing motor commands. In reality these SOG neurons may relay sensory information to the arachnid 'brain' (located in the Supra-Oesophageal Ganglion), which then sends signals back to control the legs. Each BCSS/command neuron pair is linked to an inhibitory interneuron. Fig. 1 illustrates the arrangement and connectivity of the neurons; for clarity, only connections through three legs and one interneuron are shown. Command neurons connect in 'triads' to inhibitory interneurons (Fig. 1 illustrates one triad), which are in turn connected to a command neuron on the opposite side of the network. The placement of the legs, and thus the sensors, at intervals around the body determines the information available to the arachnid for estimating the prey orientation: the crucial information is the delay between activation of the sensors of each leg as the wave signal arrives. As shown in Fig. 1, each command neuron receives both excitatory and inhibitory signals from BCSS sensory neurons: excitatory signals come from the BCSS neuron directly linked to the command neuron, and inhibitory signals come from the inhibitory triad on the opposite side of the network. The 'time window' of activation of a command neuron depends upon the delay between activation and inhibition, and the number of spikes generated depends upon the length of the time window in
Fig. 1. The arachnid neural network.

which the signal is received. Command neurons at or near the prey orientation will in general receive more spikes, as the excitatory signals to these command neurons are not inhibited by the opposing interneuron quickly enough. Similarly, neurons on the opposite side to the prey are inhibited more quickly by the firing command neurons on the side of the prey and so produce fewer spikes. In Refs. [7,8] the time delay between sensors is modelled by

Δt(γ_k, γ_l | φ_s) = (R / v_r) [cos(φ_s − γ_l) − cos(φ_s − γ_k)]    (1)

where Δt is the time delay; φ_s is the actual prey angle; γ_k and γ_l are the angles of legs k and l, respectively; R is the scorpion radius (taken as 2.5 cm in the model); and v_r is the wave speed (approximately 50 m s⁻¹ for surface Rayleigh waves). In the implementation of the model, Eq. (1) is used to pre-calculate an array of delays for the sensor neurons, and this array is used to apply the appropriate delay to the wave signal when calculating the input voltage to the neurons. According to Refs. [10,11] the BCSS mechanoreceptors in the real animal are specifically activated by Rayleigh (surface) waves travelling through the sand. Although another type of surface wave exists (the Love wave), it was not observed to play any part in activating the BCSS sensors; therefore, in this work we use the terms Rayleigh wave and surface wave interchangeably. Using the physical characteristics of Rayleigh waves in sand from Ref. [10], the model of Refs. [7,8] represents the wave signal mathematically as a discrete Gaussian-weighted sum of cosine waves:

y(t) = 100 · [Σ_k D(f_k) cos(2π f_k t + ψ_k)] / [Σ_k D(f_k)]    (2)

where y(t) is the amplitude of the wave signal at time t; f_k is the wave frequency, calculated as 300 + (k − 150) Hz with 0 ≤ k ≤ 300; D is a Gaussian distribution with mean 300 Hz and standard deviation 50 Hz; and ψ_k is a random phase value taken from a uniform distribution between 0 and 2π. In the current implementation, y(t) is pre-calculated for each simulation time step using the range of values for f determined by the parameter k as given above. In line with the observed arachnid response time, the simulation runs for 500 ms at a time step interval of 0.1 ms. At each time step of the simulation, the wave signal at that time (minus the delay for the appropriate BCSS sensor, calculated using Eq. (1)) is applied during the calculation of the BCSS neuron voltage. The simulation output is a vector of spike counts from the neurons corresponding to the eight legs, and this information can be used to estimate the prey orientation using a standard population vector decoding technique such as that described in Ref. [13]:

θ = arg( Σ_{k=1..N} n_k e^{iγ_k} )    (3)

where θ is the estimated direction of the prey; N is the number of command neurons (in this case 8); n_k is the spike count from neuron k; and γ_k is the angle of leg k. It should be noted that a variant implementation of the same arachnid prey localisation model is described in Ref. [14], where a slightly different neural model is used and a sinusoidal array technique is employed to hold and process the sensory information; this is proposed to make the calculations easier and the model more robust to neuronal noise. In the current work the simpler model of Ref. [7] is adhered to. As the basis for our work, we use an implementation of Ref. [7] previously created in the freely available Brian spiking neural network simulator [15]. The original orientation model code is included as a code example with the Brian source distribution, so we do not repeat the implementation details of the wave modelling, the BCSS and command neuron models, the connectivity or the decoding of the signal here, as they can be freely obtained from this code.

3. Creating the arachnid simulation

3.1. Modelling software

As previously mentioned, the orientation neural model of Ref. [7] had already been implemented in the Brian spiking neuron simulator [15]. Brian is a freely available simulation tool for spiking neural networks based on the scripting language Python (http://www.python.org/). For implementing the motor parts of our system we have also used freely available software which can interface easily with Python. For initial prototyping, we used PyODE, a Python interface to the physics simulator Open Dynamics Engine (http://pyode.sourceforge.net/), and VPython, a Python visualisation module (http://vpython.org/), to create the full arachnid simulation. However, there were issues with the performance of the motor parts of the model, which were attributed to the physics simulation implementation: although the PyODE wrapper was easy to use, it transpired that ODE required careful parameter tuning to obtain a stable simulation, and even after some tuning it was not possible to get completely satisfactory motor behaviour. The results of this initial work are described in Ref. [16]. We improved the motor model using a different physics simulation based upon the commercial physics engine Nvidia PhysX (http://developer.nvidia.com/page/home.html) with the JPhysx Java wrapper (http://sourceforge.net/projects/jphysx/). This had already been successfully used to
create insect-like agents in Ref. [17] and showed none of the issues seen when using PyODE in the initial prototype. In the present work, interfacing to the Python code is done using JPype (http://jpype.sourceforge.net/), which allows the physics simulation Java classes to be called from Python. The neural and motor simulation parts do not run concurrently: the neural model code is spawned as a separate process from the main Python program, processes the sensory signals and returns them to the main program, which then calls the physics simulation and passes it the required information.
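The overall non-concurrent flow, in which the population vector decoding of Section 2 turns the eight spike counts into a prey angle before the physics step is invoked, can be sketched as follows. The leg angles correspond to the 'ready' stance given in Section 2, but the function names, example counts and the `physics_turn` stand-in are our own illustrations, not the actual simulation interface.

```python
import cmath
import math

# Leg angles (degrees) of the 'ready' stance: +/-18, +/-54, +/-90, +/-140.
LEG_ANGLES_DEG = [18, 54, 90, 140, -140, -90, -54, -18]

def decode_prey_angle(spike_counts):
    """Population-vector decode: the angle of the sum of unit vectors at
    the leg angles, weighted by each leg's spike count."""
    total = sum(n * cmath.exp(1j * math.radians(g))
                for n, g in zip(spike_counts, LEG_ANGLES_DEG))
    return math.degrees(cmath.phase(total))

def run_trial(spike_counts, physics_turn):
    """Non-concurrent pipeline: neural output is decoded first, then the
    motor simulation runs; `physics_turn` stands in for the physics call."""
    angle = decode_prey_angle(spike_counts)
    physics_turn(angle)   # no feedback from motor behaviour to the sensing
    return angle

# Legs nearest +54 degrees fire most, so the decoded angle lies nearby.
counts = [12, 20, 11, 3, 1, 2, 3, 8]
angle = run_trial(counts, physics_turn=lambda a: None)
print(round(angle, 1))
```

Because the decode happens once and the result is handed off, there is no opportunity for online correction during movement, matching the pipeline described above.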
Table 1
Tetrapod gait parameters.

Parameter                            Coxa-trochanter (swing)   Trochanter-femur (stance)
Amplitude (deg.)                     30                        30
Frequency (Hz)                       0.75                      0.75
Relative phases for legs 1–4 a,b     1.5, 1.0, 1.5, 1.0        0.5, 0.0, 0.5, 0.0
Relative phases for legs 5–8 a,b     0.5, 0.0, 0.5, 0.0        1.5, 1.0, 1.5, 1.0

a Legs 1–4 are on the right-hand side of the body and legs 5–8 are on the left-hand side.
b Phases are measured in number of cycles.
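The Table 1 parameters drive the sinusoidal set-point generators described in Section 3.2. A sketch of that scheme, with function and constant names of our own choosing:

```python
import math

# Table 1 parameters (phases in cycles) for the swing (coxa-trochanter)
# and stance (trochanter-femur) joints of legs 1-8.
AMPLITUDE_DEG = 30.0
FREQUENCY_HZ = 0.75
SWING_PHASES = [1.5, 1.0, 1.5, 1.0, 0.5, 0.0, 0.5, 0.0]
STANCE_PHASES = [0.5, 0.0, 0.5, 0.0, 1.5, 1.0, 1.5, 1.0]

def set_point(t, phase, amplitude=AMPLITUDE_DEG, freq=FREQUENCY_HZ):
    """Sinusoidal set point (degrees) for one 1-DOF hinge joint:
    A * sin(2*pi*freq*t + 2*pi*phase), as in Section 3.2."""
    return amplitude * math.sin(2 * math.pi * freq * t + 2 * math.pi * phase)

def gait_set_points(t):
    """Swing and stance joint angles for all eight legs at time t (seconds)."""
    swing = [set_point(t, p) for p in SWING_PHASES]
    stance = [set_point(t, p) for p in STANCE_PHASES]
    return swing, stance

# Adjacent legs are half a cycle out of phase, so away from the zero
# crossings their set points have opposite signs.
swing, stance = gait_set_points(1.0 / 3.0)
print(swing[0], swing[1])
```

The half-cycle phase offsets between adjacent legs and between body sides are what produce the tetrapod pattern: at any instant, four legs are in the opposite part of the cycle to the other four.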
3.2. A simplified arachnid model

In reality, the arachnid leg has several segments and joints. For the purposes of the current work it was decided that this would be simplified, and only two segments and two joints would be used per leg to control the 'swing' (forward–backward) and 'stance' (up–down) phases of locomotion. These correspond approximately to the coxa-trochanter (C-TR) and trochanter-femur (TR-F) joints in real arachnids [18]. The walking gait used by real arachnids is described in Refs. [18,19] and can be approximated by a tetrapod walk in which exactly four legs are on the ground and four off the ground at any one time, usually in an L1, L3, R2, R4/R1, R3, L2, L4 pattern. The legs have been modelled in the physics simulation with simple 1 Degree-of-Freedom (DOF) hinge joints. Movement is controlled by sinusoidal generators which calculate the 'set point', or angle, for each 1-DOF joint at each time step. These are of a form similar to that used in Ref. [20] for biologically inspired snake robots and are modelled by
y_i(t) = A sin(2πνt + 2πφ)    (4)
where y_i(t) is the set point for joint i at time t; A is the amplitude; ν is the frequency; φ is the phase. Amplitude controls the extent of swing of the joint, up and down or side to side, and is measured in degrees. Frequency sets the number of swing cycles executed per second. These parameters are set at predefined values that give a reasonable height of step and speed of movement. The phase parameter is an offset controlling how each joint executes the swing cycle relative to the other joints and is measured in cycles. To implement the tetrapod gait pattern, the phase parameters for the swing and stance joints are set so that adjacent legs cycle out of phase. Legs on the left and right sides of the body also operate out of phase. The parameters used to generate the basic arachnid gait are given in Table 1, and Fig. 2 shows a diagram of the simulated arachnid body and leg arrangement. The main body consists of four spheres linked by rigid joints, with a pair of legs attached to each sphere. The 'head' end of the arachnid is the sphere holding legs 1 and 8. Fig. 3 shows a snapshot of the constructed arachnid at its starting point in the simulated world. The arachnid starts the simulation aligned along the positive x axis, with the head placed at the centre of the world (0, 0, 0). The x axis is designated to be 0°, with clockwise movements being positive angles and anticlockwise movements negative angles. In the simulation, 1 distance unit is taken to be 1 cm. The prey is constructed as a simple white sphere and is generated at a random angle of up to ±180° with respect to the initial arachnid position. Please note that 'prey angle' or 'prey orientation' in the remainder of this document refers to this angle of the prey with respect to the arachnid. The prey is not able to move during the simulation.

Fig. 2. The arachnid model.

Fig. 3. The simulated world showing arachnid and prey.

3.3. Translating spikes to orientation motor behaviour

The original orientation neural model of Refs. [7,8] generates 'an answer', i.e. a prediction of the prey angle, but it remains
unclear how this information might be used in real animals to cause orientation towards the prey, or how the mechanism might be used to generate an orientation sensing behaviour for a robot. There is little or no precise information in the literature about how this happens in real animals, nor any previous work that attempts to model the motor control of arachnid prey localisation. Consequently, a main aim of the current work has been to extend the arachnid prey orientation neural model to include motor behaviour. In particular, we will demonstrate how sensory input (vibration waves caused by moving prey) can be linked to motor output (predator orients and moves towards prey) via a biologically realistic spiking neural network.
Although a motor model was not implemented in their work, the final discussion section of Ref. [11] considers the turning behaviour of spiders and mentions the results of previous work in Ref. [21], which examined walking and turning behaviour in jumping spiders. This work showed that turning is a simple modification of the standard walking gait. Legs that move in phase and in the same direction on opposite sides of the body during walking (i.e. the L1, L3, R2, R4/R1, R3, L2, L4 pattern described earlier) stay in phase but step in opposite directions during turning. It also confirmed that stepping frequency and amplitude on both sides remain the same during turning as for normal walking. We conclude from this that the legs are probably not controlled individually. It is likely that there is a Central Pattern Generator (CPG) system which determines the basic gait, but that this can receive an overriding tonic signal that temporarily changes the pattern to produce the turning movement. Such systems are ubiquitous in nature and have been extensively studied in the field of biologically inspired robotics; for an example see Ref. [22]. This approach is therefore used in the current work. The basic tetrapod gait described in Section 3.2 is used as the default walking pattern and is modified by a 'tonic signal', i.e. the sign and magnitude of the angle calculated by the neural system. When this information is passed back from the neural processing, the sign and magnitude of the required turn angle are extracted. The sign directly modulates the sign of the signal generated by the sinusoidal controllers, and this causes a change in direction. The magnitude of the angle is translated into a number of time steps to be spent turning, by multiplying by a factor determined from tests with the simulator: the simulation was run with a fixed angle to see how many time steps were needed to achieve the turn.
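The translation from decoded angle to turning behaviour described above can be sketched as follows. The `steps_per_degree` constant stands in for the empirically determined calibration factor; its value here is invented for illustration.

```python
def turn_plan(turn_angle_deg, steps_per_degree=4.0):
    """Translate the decoded turn angle into a gait modulation: the sign
    selects the turn direction, and the magnitude is multiplied by a
    calibration factor to give the number of simulation steps to spend
    turning. `steps_per_degree` is an illustrative constant."""
    direction = 1 if turn_angle_deg >= 0 else -1
    duration_steps = int(round(abs(turn_angle_deg) * steps_per_degree))
    return direction, duration_steps

def modulated_set_point(base_set_point, direction, turning):
    """During a turn the sinusoidal controller keeps its frequency and
    amplitude but its output sign is flipped on one side of the body."""
    return direction * base_set_point if turning else base_set_point

print(turn_plan(-45.0))
```

Note that only the sign of the controller output and the duration change; frequency and amplitude stay at their walking values, consistent with the jumping-spider observations of Ref. [21].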
It should be noted that the neural processing of the sensory signals and the physics simulation of the motor behaviour do not operate concurrently: the neural model is run first to generate the sensory information, which is then passed to the physics simulation to execute the motor operations. Furthermore, there is no feedback from the motor behaviour to the sensory processing, and hence no 'online' correction: the simulation ends once the motor behaviour has been completed.
4. The distance estimation model

4.1. Rationale for distance sensing in arachnids

According to Ref. [8] the desert scorpion has two distinct behaviours which involve an orientation response. These are the Defensive Orientation Response (DOR), where the animal orients only, and the Predator Orientation Response (POR), which involves orientation and movement towards the prey. The latter behaviour allows real scorpions to accurately catch prey in one movement if it is within a 20 cm radius. We assume that distance estimation must be an important element of this behaviour, in order to vary the amount of forward movement with prey distance. Although distance sensing is mentioned very briefly in some of the orientation sensing works, we are not aware of any research investigating the neural mechanism in real scorpions, or of any theoretical or software models of such a process. Therefore, an original contribution of the current work is to propose a possible mechanism involving a simple modification of the existing orientation neural model. A prey animal moving along the ground produces both surface transverse travelling waves (Rayleigh waves) and longitudinal travelling waves (P waves). The P waves travel approximately three times faster than the surface waves according to Ref. [10]. In theory, then, the scorpion could judge the distance of the prey by detecting the difference in arrival time of the two waves [7,11,14]. It was also proposed in Ref. [11] that the varying
amplitude of P waves across the leg sensors could be used as a measure, as P waves dissipate more quickly than Rayleigh waves over distance. Results from the experimental work described in Refs. [10,11] suggest that hairs present on the bottom of the tarsi, which are directly in contact with the sand, are the main mechanoreceptors for detecting P waves; the Basitarsal Compound Slit Sensilla (BCSS) appear to perform only surface (Rayleigh) wave detection. The orientation model of Ref. [7] used a 'time window' approach where command neuron activation is determined by the balance between excitation from BCSS sensors in the direction of the prey and inhibition from BCSS sensors on the opposite side. A similar principle could be used to estimate the prey distance using the interaction between P and Rayleigh waves reaching the BCSS and tarsal hair sensors, respectively. For instance, as the P waves are fastest, they arrive first and activate the tarsal hair sensors of all legs. Upon the later arrival of the Rayleigh waves, activation of the BCSS sensors could be used to inhibit the tarsal sensors, thus closing the 'window' of activity. The amount of tarsal hair sensor activity during this window would thus encode the distance to the prey and govern the strength of the forward response. According to Ref. [10], the P wave sensing mechanism in scorpions is fairly short range and operates best up to about 10 cm, whereas Rayleigh waves can be detected up to 50 cm. In the current work we have chosen not to model this distinction, and both waves are assumed to be detectable over the same range.

4.2. The distance estimation neural model

Following on from the ideas discussed in Section 4.1, the original neural orientation model was extended to include eight Tarsal Hair Sensor (THS) neurons and their corresponding command neurons. These have the same synapse connection structure between them as the BCSS sensory and command neurons shown in Fig. 1.
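The arrival-time arithmetic behind this rationale is easily checked: with P waves at roughly 150 m s⁻¹ and Rayleigh waves at roughly 50 m s⁻¹ [10], the gap between arrivals works out at about 1.3 ms per 10 cm, the figure quoted from Ref. [11]. The function names below are our own.

```python
V_P = 150.0   # P (longitudinal) wave speed in sand, m/s (Ref. [10])
V_R = 50.0    # Rayleigh (surface) wave speed in sand, m/s

def arrival_time_gap(distance_m, v_fast=V_P, v_slow=V_R):
    """Time difference (ms) between the fast and slow wave arrivals
    for a source at the given distance."""
    return (distance_m / v_slow - distance_m / v_fast) * 1000.0

def distance_from_gap(gap_ms, v_fast=V_P, v_slow=V_R):
    """Invert the relation: source distance (m) from the measured gap (ms)."""
    return gap_ms / 1000.0 / (1.0 / v_slow - 1.0 / v_fast)

# At 10 cm: 0.1/50 s - 0.1/150 s = 1.33 ms, matching the roughly
# 1.3 ms per 10 cm figure quoted from Ref. [11].
print(arrival_time_gap(0.1))
```

Since the gap grows linearly with distance, a sensor that integrates activity between the two arrivals (as proposed above) receives a signal that scales directly with the prey distance.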
In addition, the THS and BCSS sensory neurons in each leg are connected by strong inhibitory connections, so that as soon as the BCSS neurons fire they drastically reduce the activity of the THS neurons. The P wave has been modelled using the same method as the Rayleigh wave discussed in Section 2, but using information about P waves obtained from Ref. [8]: Eq. (1) is used with a P wave speed of 150 m s⁻¹, and Eq. (2) with D a Gaussian distribution with mean 1200 Hz and standard deviation 200 Hz and f_k the wave frequency, calculated as 1200 + (k − 200) Hz with 0 ≤ k ≤ 500. As the wave signals over time are generated by a computational model and do not actually travel, the speed difference between P and Rayleigh waves has been simulated by altering the time constant of the model equations for the tarsal hair sensors and their command neurons, so that their response compared to the BCSS sensors is proportionately faster dependent on the distance between arachnid and prey. It has been estimated elsewhere that the time difference is of the order of 1.3 ms per 10 cm of distance [11], and so this factor is used in the current work. The THS model time constant is calculated as

P_τ = R_τ / (13 · d_prey)    (5)
where P_τ is the THS model time constant in milliseconds; R_τ is the BCSS model time constant in milliseconds; d_prey is the actual prey distance in metres; and 13 is the wave arrival-time factor in milliseconds per metre. In contrast to the orientation sensing mechanism, which uses a population vector decoding method to estimate the prey direction, the distance neural model uses the total activity generated by the tarsal hair sensors (i.e. the sum of spikes from all legs) over the simulation time of 500 ms. Using a standalone version of the extended neural model, test runs were done to establish the quantitative relationship between THS activity and distance to prey, so that this could be used to
generate walking behaviour. The total numbers of THS and BCSS spikes generated during a 500 ms run were collected for a range of distances from 0.1 to 0.6 m. Fig. 4 shows a plot of activity (total spike count over 500 ms) vs. distance for both BCSS and THS sensors. This graph shows that the total activity of the BCSS sensors in this model (dashed line) is not distance sensitive (the orientation sensing relies only on differences in activity between legs). However, there is a distinct relationship between prey distance and THS activity (solid line): the greater the distance between arachnid and prey, the bigger the time window for the fast-travelling P waves to activate the THS sensors, and so more spikes are generated. At close distances, THS activity is almost instantaneously suppressed below the activity of the BCSS sensors due to the inhibitory nature of the connections between them. To turn this relationship into a quantitative model for use in determining the amount of walking required in the corresponding motor model, a linear regression was performed on the THS activity test data (THS activity against prey distance) to determine the parameters of an equation using THS activity as a predictor of distance. The p value generated by the fitting process was 1.93e−05 (well below the significance level of 0.05) and the R² value was 0.9928, showing that THS activity is a very good predictor of prey distance. The generated parameters resulted in

D_est = 0.00018 · THS + 0.06    (6)
where D_est is the predicted prey distance and THS is the total THS spike count. In the generation of walking behaviour, Eq. (6) is used to estimate the prey distance based upon the THS spike count returned from the neural processing. The distance prediction is then translated into a duration to be spent walking, simply by multiplying by a factor that was determined by tests in the simulation measuring how many time steps were needed to walk a fixed distance. In the extended motor model, the turning and walking behaviours are combined in a simple way: turning behaviour is executed first to face the direction of the prey, and walking behaviour is then executed to walk towards it, i.e. the two behaviours are not concurrent. Although this was done for simplicity, it was later realised that, according to Ref. [11], this kind of separation of the two behaviours has actually been observed in real arachnids.
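The two-step procedure described above can be sketched in Python. The calibration data and the steps-per-metre factor below are illustrative assumptions, not the values used in the original experiments; only the form of the fit (a linear regression as in Eq. (6)) follows the text.

```python
import numpy as np

# Sketch of the distance calibration described above: fit prey distance
# against total THS spike count, then convert a predicted distance into a
# walking duration. All numeric values here are assumed for illustration.

# Hypothetical calibration data: THS spike totals over 500 ms at known distances.
ths_counts = np.array([250, 800, 1400, 1950, 2500, 3050])
distances_m = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])

# Linear regression: distance = slope * THS + intercept (cf. Eq. (6)).
slope, intercept = np.polyfit(ths_counts, distances_m, 1)

def estimate_distance(ths_total):
    """Predict prey distance (m) from a total THS spike count."""
    return slope * ths_total + intercept

def walking_steps(distance_m, steps_per_metre=500):
    """Convert a distance estimate into simulation time steps.
    steps_per_metre is an assumed calibration factor."""
    return int(round(distance_m * steps_per_metre))

d_est = estimate_distance(1700)
n_steps = walking_steps(d_est)
```

With this made-up calibration data the fitted coefficients come out close to those of Eq. (6), but the actual values in the paper were obtained from the neural model's own test runs.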
5. Results

Once the orientation neural model had been extended with the distance sensing mechanism and motor behaviours added via the physics simulation, 100 trials of the software were run, recording the turning and walking behaviours in response to a prey placed randomly at an orientation of up to ±180° and a distance of 8–20 cm with respect to the arachnid starting position. The orientation and distance performance are discussed separately in the following subsections.

5.1. Orientation performance

The actual prey angle, the estimated prey angle from the neural model and the final angle of the simulated arachnid were collected over 100 trials. Fig. 5a shows a comparison of the actual prey angle against the prediction of the neural model, and Fig. 5b a comparison of the neural prediction against the final arachnid angle. Both graphs show a clear straight-line relationship with data points spread evenly about the linear regression line. In terms of the neural model alone, the average absolute error (with respect to the actual prey angle) over the 100 trials is 6.52°. In terms of the combined neural and motor model, the average absolute error (with respect to the actual prey angle) is 15.47°. Therefore, the neural model is capable of a very accurate prediction of prey orientation, but some accuracy is lost during actual movement. According to the graph in Fig. 3(a) in Ref. [7], the error in the real scorpion is in the region of ±12–15°. Therefore the system behaviour is close to the performance expected of the real animal.

5.2. Distance sensing performance

The actual prey distance, the estimated prey distance from the neural model and the final distance travelled by the simulated arachnid were collected over 100 trials. Fig. 6a shows a comparison of the actual prey distance against the prediction of the neural model, and Fig. 6b a comparison of the neural prediction against the final arachnid distance.
Again, both graphs show a clear straight-line relationship with data points spread evenly about the linear regression line. In terms of the neural model alone, the average absolute error (with respect to the actual prey distance) over the 100 trials is 1.00 cm. In terms of the combined neural and motor model, the average absolute error (with respect to the actual prey distance) is 1.25 cm. Although there are no results from a real animal to compare with in this case, this seems a reasonable level of performance given the distance over which the prey can be sensed (20 cm) and the size of the scorpion (2.5 cm).
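The error metric quoted above is the mean absolute error between actual and estimated values over the trials. A minimal sketch, with made-up trial data standing in for the 100 recorded trials:

```python
import numpy as np

# Mean absolute error between actual and estimated prey distance, as used
# in the results above. The five trial values here are invented solely to
# illustrate the computation.
actual_cm    = np.array([8.0, 12.0, 15.0, 18.0, 20.0])
estimated_cm = np.array([9.1, 11.2, 15.8, 16.9, 21.0])

mae = np.mean(np.abs(actual_cm - estimated_cm))
```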
Fig. 4. BCSS and THS sensor activity with prey distance.

6. Discussion and future work
This arachnid orientation model has several advantages as a basis for a legged robot sensorimotor controller. It has been described mathematically (in Refs. [7,8]) and is based upon actual experimental results. It provides simple but useful behaviour: locating an object without vision. It is also likely to be naturally fault tolerant: in the experiments described in Refs. [10,11] it was found that removing the signal from one or two legs did not completely destroy the localisation ability; the accuracy merely degraded. Furthermore, the mechanism is applicable to systems with different numbers of legs. Here we have used the original eight-legged model. The work of Ref. [9] described a six-legged robot implementation using similar
Fig. 5. Performance of the neural and motor orientation models: (a) actual vs predicted angle and (b) predicted vs final angle. Note: all angles are in decimal degrees.
Fig. 6. Performance of the neural and motor distance models: (a) actual vs predicted distance and (b) predicted vs final distance. Note: all distances are in metres.
orientation sensing theory. In Ref. [23] the orientation sensing mechanism is used to model turning behaviour for an organism that uses only four legs as vibration detectors. The results from our implementation have shown that the neural models for orientation and distance sensing are able to predict actual prey orientation and distance to a good level of accuracy: approximately 7° in the case of orientation and 1 cm in the case of distance. Including the performance of the motor models reduces the accuracy for both orientation and distance, but the orientation results still show comparable performance to a real animal. A deficiency of the current system is that the Rayleigh and P waves are mathematically simulated, following the original neural implementation of Refs. [7,8], instead of modelling the physics of the wave fronts, which should have been possible in the physics simulation. The consequence of this is that the representation of the relative speeds of the two waves is somewhat artificial: the neural model response is modified to simulate the different arrival
times of the waves. Future work could include incorporating the wave fronts into the physical simulation. There are several other enhancements that the authors believe would considerably improve the current work in terms of its applicability to robotics. In this initial implementation the neural processing and physics simulation do not operate concurrently: the neural model is run first to generate the sensory information, which is then passed to the physics simulation to execute the motor operations; there is no feedback between the two. Making the two processes concurrent would enable re-estimation of angle and distance 'online' and allow for corrective behaviour. It would also allow the possibility of a moving prey, so that the predator can orient and intercept in a reactive way. Secondly, it would be desirable to implement the motor parts of the model on an arachnid (or other) robot rather than in simulation. However, some consideration would need to be given to the appropriate sensory input, for example using either a vibration sensor or alternatively infrared or sound input.
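The non-concurrent control flow described above (sense once, then turn, then walk) can be summarised as follows. The sensing stage is stubbed out with exact geometry; in the real system it is the spiking neural model, and all function and variable names here are illustrative placeholders, not the actual interfaces of the implementation.

```python
import math

def sense(prey_xy):
    """Stub for the neural sensing stage: returns (angle in degrees,
    distance in metres). The real system estimates these from BCSS and
    THS spike activity; here exact geometry stands in."""
    x, y = prey_xy
    return math.degrees(math.atan2(y, x)), math.hypot(x, y)

def act(state, angle_deg, distance_m):
    """Stub for the physics-simulation stage: turning behaviour is
    executed first, then walking behaviour; they are not concurrent."""
    state["heading"] = angle_deg                  # turn to face the prey
    rad = math.radians(angle_deg)
    state["x"] += distance_m * math.cos(rad)      # then walk towards it
    state["y"] += distance_m * math.sin(rad)
    return state

# Sense once, then act: there is no feedback between the two stages,
# which is exactly the limitation discussed above.
state = {"x": 0.0, "y": 0.0, "heading": 0.0}
angle, dist = sense((0.1, 0.1))
state = act(state, angle, dist)
```

A concurrent version would interleave `sense` and `act`, re-estimating angle and distance each cycle so the agent could correct its course for a moving prey.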
Acknowledgements

The authors would like to thank the anonymous reviewers, whose comments and suggestions have helped to improve this paper.

References

[1] V. Braitenberg, Vehicles: Experiments in Synthetic Psychology, MIT Press, Cambridge, MA, 1984.
[2] I. Harvey, P. Husbands, D. Cliff, A. Thompson, N. Jakobi, Evolutionary robotics: the Sussex approach, Robotics and Autonomous Systems 20 (1997) 205–224.
[3] R.I. Damper, R.L.B. French, T. Scutt, ARBIB: an autonomous robot based on inspirations from biology, Robotics and Autonomous Systems 31 (4) (2000) 247–274.
[4] A. Silvola, R.A. Russell, Robot communication via substrate vibrations, in: Proceedings of the Australasian Conference on Robotics and Automation, 2005.
[5] A. Ramisa, Localization and Object Recognition for Mobile Robots, Ph.D. thesis, Universitat Autonoma de Barcelona, 2009.
[6] D. Cliff, Computational neuroethology: a provisional manifesto, in: Proceedings of the First International Conference on Simulation of Adaptive Behavior, From Animals to Animats, MIT Press, 1991, pp. 29–39.
[7] W. Stürzl, R. Kempter, J.L. van Hemmen, Theory of arachnid prey localization, Physical Review Letters 84 (2000) 5668–5671.
[8] P.H. Brownell, J.L. van Hemmen, Vibration sensitivity and a computational theory for prey-localizing behavior in sand scorpions, American Zoologist 41 (2001) 1229–1240.
[9] A. Wallander, R. Russell, K. Hyyppa, A robot scorpion using ground vibrations for navigation, in: Proceedings of the Australian Conference on Robotics and Automation, 2000.
[10] P.H. Brownell, Compressional and surface waves in sand: used by desert scorpions to locate prey, Science 197 (1977) 479–482.
[11] P.H. Brownell, R. Farley, Orientation to vibrations in sand by the nocturnal scorpion Paruroctonus mesaensis: mechanism of target localization, Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology 131 (1979) 31–38.
[12] W. Maass, Networks of spiking neurons: the third generation of neural network models, Neural Networks 10 (1997) 1659–1671.
[13] A. Georgopoulos, A. Schwartz, R. Kettner, Neuronal population coding of movement direction, Science 233 (1986) 1416–1419.
[14] D. Kim, Neural network mechanism for the orientation behavior of sand scorpions towards prey, IEEE Transactions on Neural Networks 17 (2006) 1070–1076.
[15] D.F. Goodman, R. Brette, Brian: a simulator for spiking neural networks in Python, Frontiers in Neuroinformatics 2 (2008).
[16] S.V. Adams, P.F. Culverhouse, T. Wennekers, G. Bugmann, S. Denham, A sensori-motor model of arachnid prey localisation, in: Proceedings of the Eleventh Towards Autonomous Robotic Systems Conference (TAROS 2010), Plymouth, UK, 2010, pp. 1–6.
[17] S.V. Adams, Hox Genes, Diversity and Adaptation in Evolutionary Robotics, unpublished MRes thesis, University of Plymouth, UK, 2009.
[18] T.M. Root, Chapter 9: Neurobiology, in: G.A. Polis (Ed.), The Biology of Scorpions, Stanford University Press, Stanford, California, 1990, pp. 341–411.
[19] J. Shultz, Walking and surface film locomotion in terrestrial and semi-aquatic spiders, Journal of Experimental Biology 128 (1987) 427–444.
[20] A. Crespi, A. Badertscher, A. Guignard, A.J. Ijspeert, Swimming and crawling with an amphibious snake robot, in: Proceedings of the IEEE International Conference on Robotics and Automation, 2005, pp. 3035–3039.
[21] M. Land, Stepping movements made by jumping spiders during turns mediated by the lateral eyes, Journal of Experimental Biology 57 (1972) 15–40.
[22] A.J. Ijspeert, A. Crespi, D. Ryczko, J.-M. Cabelguen, From swimming to walking with a salamander robot driven by a spinal cord model, Science 315 (2007) 1416–1419.
[23] J.L. van Hemmen, A. Schwartz, Population vector code: a geometric universal as actuator, Biological Cybernetics 98 (2008) 509–518.

Samantha Adams received the BSc degree in mathematics and physics from the Open University, UK, in 2003 and the MRes degree (with Distinction) in Computing from the University of Plymouth, UK, in 2009. She is a Ph.D. candidate at the Centre for Robotics and Neural Systems, University of Plymouth, UK. Her research interests centre around biologically inspired robotics, in particular applications of spiking neural networks for robotics control.
Thomas Wennekers is an Associate Professor (Reader) in Computational Neuroscience. His main research interest is the grounding of human perception and cognition in the neurons and connections of the nervous system, especially in terms of cortical microcircuits and systems-level brain networks. He recently contributed to several interdisciplinary projects centred on the development of neuro-inspired hardware for visual, auditory, and sonar processing based on cortical principles (The COLAMN, FACETS and SCANDLE projects).
Guido Bugmann is an Associate Professor (Reader) in the University of Plymouth’s School of Computing and Mathematics where he develops human-robot dialogue systems, vision-based navigation systems for wheeled and humanoid robots, and investigates computational properties of biological vision and decision making. He has three patents and more than 100 publications. Bugmann studied physics at the University of Geneva and received his Ph.D. in physics at the Swiss Federal Institute of Technology in Lausanne. He is a member of the Swiss Physical Society, the British Machine Vision Association, AISB, the board of EURON (2004–2008) and the EPSRC peer review college.
Susan Denham is a Professor of Cognitive Neuroscience in the School of Psychology, University of Plymouth. She received B.Sc. degrees in physics, 1980, and computer science, 1992, from the University of South Africa, and a Ph.D. from the University of Plymouth, 1995. Her research interests lie in understanding auditory cognition using perceptual experiments and computational models, and in applying this understanding in the development of computationally efficient implementations for practical technological applications, and in the creation of novel devices.
Phil Culverhouse is an Associate Professor (Senior Lecturer) in Computer Vision and Engineering Design and a member of the IEEE. He has been the principal investigator on four EU RTD contracts since 1992 and has more than seventy-nine academic publications. He is co-chair of the international Working Group 130 on automatic plankton identification sponsored by SCOR (Scientific Committee on Oceanic Research) and the Royal Society (in the UK). Culverhouse also leads the Bunny Robot project, a low-power multi-processor humanoid bipedal robot. He has also led a number of KTP programmes in control engineering and engineering design.