Real and apparent biological inspiration in cognitive architectures


Biologically Inspired Cognitive Architectures (2013) 3, 105– 116


INVITED ARTICLE

Owen Holland a,*, Alan Diamond a, Hugo Gravato Marques b, Bhargav Mitra a, David Devereux a

a Department of Informatics, University of Sussex, Falmer, Brighton BN1 9QJ, United Kingdom
b Bio-Inspired Robotics Lab, Department of Mechanical and Process Engineering, ETH, Zurich, Switzerland

* Corresponding author. E-mail address: [email protected] (O. Holland).

Received 28 June 2012; received in revised form 8 July 2012; accepted 8 July 2012

KEYWORDS: Biological inspiration; Cognitive architectures

Abstract This paper examines the role and nature of biological inspiration in the new field of biologically inspired cognitive systems. The aim of producing human-like systems is shown to require the consideration of normative, conscious, and embodied systems. In addition to real direct biological inspiration, it is shown that indirect and apparent biological inspiration can arise in a number of interesting and potentially important ways, particularly through the effects of constraints common to biological and artificial systems. Some of these points are illustrated using a robot with a uniquely human embodiment.

© 2012 Elsevier B.V. All rights reserved.

Introduction: the idea of BICA

BICA (Biologically Inspired Cognitive Architectures) is a new interdisciplinary initiative, and is currently in an exciting stage of rapid expansion, reminiscent of the early stages of the artificial life and adaptive behaviour movements. The focus of the enterprise is on "... the integration of many research efforts in addressing the challenge of creating a real-life computational equivalent of the human mind". Given the wide brief, 'Let a thousand flowers bloom' is undoubtedly the best policy at this juncture, but it is certainly not too soon to look around, back, and forward to see if there are any considerations of scope or direction that might require modification or clarification in the future in order to assist the developmental process. Of course, BICA will ultimately be defined by its outputs. This paper, however, will mainly pay attention to the inputs, and will further constrain the playing field by limiting itself to dealing with the biologically inspired component, rather than the cognitive architecture component, which can for the moment be taken for granted.

The paper is organised as follows: We first consider what is meant by biological inspiration, and suggest some extensions to current concepts. Next, we examine the influence on BICA systems of the proposed end use. A number of ways in which systems can acquire apparent biological characteristics are then identified and explored. In the next section we describe a robot with a uniquely human embodiment, which serves to illustrate some of the points raised in previous sections. Finally, we summarise our conclusions and recommendations for the future progress of the field.



What do we mean by biological inspiration?

Human-level or human-like?

"A real-life computational equivalent of the human mind" implies equivalence both in performance (human-level) and also in nature (human-like). Although 'human level' will usually be interpreted as 'at least at a human level', it is important to remember that human cognitive abilities have some quantitative limitations that do not necessarily apply to artificial systems, and that a system routinely capable of remembering a list of 50,000 random numbers from a single reading should not be considered as performing at a human level. (It is of course true that individuals have recited more than the first 50,000 digits of pi, but these were learned over a long period of time.) However, it is the idea of producing a 'human-like' system that really needs to be more closely examined.

The 'human-like' part of the challenge would not be an issue if humans were without any faults as cognitive systems, but they are not. Recent research, some of which led to a Nobel prize (Kahneman & Tversky, 1979), has shown that humans are riddled with cognitive defects, in that they routinely and frequently have thoughts and perceptions, and choose and execute actions, which with respect to various criteria (e.g. logic, or utility) are demonstrably incorrect or suboptimal. This whole paper is too short to contain anything like a complete list, but there are many examples. The very narrow field of behavioural finance studies the errors made in investment decisions; a recent handbook (Montier, 2007) lists overoptimism, the illusion of control, the illusion of knowledge, overconfidence, self-attribution bias, confirmation bias, hindsight bias, cognitive dissonance, conservatism bias, representativeness, framing, categorisation, anchoring, availability bias, cue competition, loss aversion, hyperbolic discounting, ambiguity aversion, regret theory, imitation, contagion, herding, and cascades, among others. While there are many situation-specific methods for counteracting these failures of rationality, the general prerequisite is for the individual to be aware of the nature of the errors. A truly human-like cognitive system involved in making decisions and judgments should be normative, in that it should include all the mechanisms giving rise to these errors, and should also have the capacity to acquire and use the knowledge enabling it to mitigate their effects.

However, the assumption that these biases always lead to irrationality may not be as well founded as was once thought. To take but one example, the concept of discounted utility was introduced into economics by Samuelson (Samuelson, 1937), and it later became generally accepted in the social sciences that it is rational to discount future rewards exponentially with time, and so the observed hyperbolic discounting typical of humans and many animals was, and often is, viewed as irrational. (In exponential discounting, future rewards are discounted with time at a constant rate, whereas in hyperbolic discounting the rate decreases with time.) However, the irrationality of hyperbolic discounting has been challenged in contexts where the future discount rate is uncertain – a context corresponding to many real life situations – where it has been shown by Farmer and Geanakoplos (2009) that hyperbolic discounting can be rational.
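To make the comparison concrete, here is one standard formalisation (the notation is ours, not the paper's). With discount rate parameter k > 0:

```latex
D_{\mathrm{exp}}(t) = e^{-kt},
  \qquad -\frac{D'_{\mathrm{exp}}(t)}{D_{\mathrm{exp}}(t)} = k
  \quad\text{(constant rate)} \\
D_{\mathrm{hyp}}(t) = \frac{1}{1+kt},
  \qquad -\frac{D'_{\mathrm{hyp}}(t)}{D_{\mathrm{hyp}}(t)} = \frac{k}{1+kt}
  \quad\text{(rate falls with delay)}
```

Farmer and Geanakoplos's argument can be stated in the same notation: if the rate k is itself uncertain, the expected discount factor decays more slowly than any single exponential; for instance, averaging e^{-kt} over an exponential prior on k with mean 1/λ gives E[e^{-kt}] = 1/(1 + t/λ), which is exactly hyperbolic.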


Inspired by conscious cognition?


A major and largely unaddressed problem in the field of cognitive architectures is that of consciousness. Most people identify human cognition with conscious thought, and this attitude carried over to early work on cognitive architectures, where the protocols of subjects solving problems while verbalising their thoughts were analysed in order to discern the mechanisms involved in problem solving. However, with a few honourable exceptions, such as Franklin's IDA (Franklin, 2003) and LIDA (Ramamurthy, Baars, D'Mello, & Franklin, 2006) architectures based on Baars' Global Workspace Theory (Baars, 1997), and perhaps CLARION's distinction between implicit and explicit processes (Sun, 2003), most existing cognitive architectures take no account of what is now known about conscious (and unconscious) processes. Given that consciousness is generally regarded as the highest level of evolutionary cognitive achievement, possibly unique to humans and a small set of mammals, it would seem impossible to claim that a cognitive architecture was human-like if it did not represent something like a functional analogue of consciousness. This is particularly important in the context of the previous subsection because consciousness, like human judgment and decision making, is very far from objective perfection as a rational system. Research in psychology and consciousness studies has thrown up a large number of apparent functional errors and deficits, some of the worst of which are succinctly summarised by the science journalist Nørretranders (Nørretranders, 1991):


'Consciousness is a peculiar phenomenon. It is riddled with deceit and self-deception; there can be consciousness of something we were sure had been erased by an anaesthetic; the conscious I is happy to lie up hill and down dale to achieve a rational explanation for what the body is up to; sensory perception is the result of a devious relocation of sensory input in time; when the consciousness thinks it determines to act, the brain is already working on it; there appears to be more than one version of consciousness present in the brain; our conscious awareness contains almost no information but is perceived as if it were vastly rich in information.'

The inclusion in a cognitive architecture of an analogue of consciousness accurate enough to generate these errors would certainly not make it a better performing system unless consciousness itself contributed some positive functionality that would outweigh them. Unfortunately no such functionality has yet been convincingly identified, although dozens of possibilities have been suggested – see for example the many online papers on the function of consciousness (Chalmers & Bourget, n.d.). Given these difficulties with consciousness, it is again clear that being more 'human-like' may actually militate against having higher levels of performance in many situations. Of course, this does not rule out the productive investigation of less human-like systems with superhuman performance along the way, but their relationship and relevance to the eventual BICA destination of a human-like system will then be rather more clearly defined.

A human-like mind in a human-like body?

All human minds are embodied in human bodies. In contrast, almost all the major cognitive architectures are effectively disembodied (or minimally embodied in computers), with the exceptions being embodied in real or virtual robots, of which only a few, such as the iCub (Metta et al., 2010), bear a greater or lesser resemblance to the human body. (Virtual robots are simulated robots operating in simulated environments. Although in the past much was made of the allegedly crucial differences between real and virtual robots, modern physics-based simulation techniques have narrowed the gap, and good simulations now play an important and accepted role in robotics, as they have done in industry for many years. Much useful work in the iCub project has been done using the iCub simulator – see for example http://www.youtube.com/watch?v=3KHjocH54qQ.) The degree to which the fact or nature of embodiment makes some essential functional difference to a biologically inspired cognitive architecture is still uncertain, although there are many thoughtful discussions of various kinds of embodiment (e.g. Ziemke, 2003) and on the nature of embodied cognition (Pfeifer, Lungarella, & Iida, 2007; Wilson, 2002). The recent interest in morphological computation (Pfeifer & Gomez, 2009) emphasises the ways in which some computation, and even cognition, may be effectively outsourced to the body. Information-based metrics have also been used to show quantitatively how the nature of a given embodiment can interact with the nature of a task to 'relieve the cognitive burden' on the agent of finding a solution (Polani, 2011). While there is as yet no hard evidence that a body, real or virtual, or like or unlike a human, is necessary to support a human-like mind in the sense intended by BICA, the theoretical landscape is becoming populated with potentially useful ideas about the nature and uses of embodiment, and of particular embodiments, and so the issue may one day be resolvable.

An aspect of embodied implementations that is often missed is that having a real mobile body in a real dynamic environment imposes real-time constraints (Krichmar, 2012). The computations necessary to notice and recognise an approaching predator (or vehicle), and to plan and implement suitable avoiding actions, must be completed before reaching the stage where attack or collision is inevitable. Since computational resources are finite, this must be taken into account in the design of the cognitive architecture of such a system; in the case of humans, the preferred use of 'fast and frugal' heuristics rather than classical rational procedures has been emphasised by Gigerenzer and Goldstein (1996). However, time does not matter for unembodied systems, and so they are not subject to this constraint; this may make their cognitive architectures less human-like.

Finally, it is abundantly clear that the structural, sensory, motor, and metabolic body plays a major role in the operations of the human brain, and that many higher level phenomena (e.g. emotions, and the sense of self) are strongly linked to the body (Damasio, 1999). While these higher level phenomena may not always be purely cognitive components in themselves, they can certainly have very strong effects on human cognition. Damasio (1994) has claimed that human decision making depends on the existence and integrity of the emotional system. If this is correct, then a human-like BICA will also require a functional analogue of the emotional system. However, while it makes sense to discuss some kind of body-centred emotional system in the context of a robotic cognitive system (e.g. Ziemke, 2008), it is difficult to see how this could be implemented in a disembodied cognitive architecture.

How might the end use affect the nature and role of the biological inspiration?

Scientific uses

A possible scientific use is that of deliberately experimenting with BICA designs, especially embodied examples, to cast light on cognition – Cordeschi's so-called synthetic method (Cordeschi, 2002), which goes well beyond traditional cognitive modelling. One unique advantage of embodied examples, especially mobile robots, is that they enforce the discipline of working with whole systems, of considering 'the whole iguana' (Dennett, 1978). The emphasis on architectures and whole systems, even if they fall well short of the human level, makes this distinct from the modelling of isolated components. There are several possible approaches. The first is experimental: does the artificial system perform as intended, matching the biological system in crucial respects? If so, it may validate the analysis of the biological system; if not, it may invalidate either the analysis or the implementation (McKinstry, Edelman, & Krichmar, 2006). The second approach is also experimental, but is less direct: if a system can achieve the same performance as a biological system without using some components thought to be essential for the biological system, then a reexamination of the biological system is indicated (Webb, 1995). Both of these approaches imply the application at some stage of specific biological expertise, rather than functional abstraction, and it is unlikely that they will be undertaken at a specifically human level in the near future.

Functional applications

Another option for end use is that of engineering BICA systems because they might be expected to outperform other cognitive systems in some particular application. For example, the BICA solution might be directed towards achieving human-level performance in some domain, predicated on the assumption that since humans are the best all-round cognitive systems known, drawing on the design of the human system might produce better results than using other principles. The application oriented focus would limit the biological input to whatever was required to achieve the targeted performance, and there would be no restrictions on adding non-biologically inspired enhancements to form a hybrid system. Biological fidelity would therefore be likely to be relatively low.


Interface applications

The third possible option is to target the attribute of being human-like, perhaps because such a system would respond to contingencies in the same way as a human (and would be useful in predicting or dealing with a human's responses) or because the similarity to a human would make the communication with and understanding of the system easier for humans. This would emphasise the normative aspects of human cognitive modelling. However, until systems of this type have been developed and deployed, there is no clear evidence that the human-like structure of the architecture would produce a better imitation or prediction of human behaviour than a non-biologically inspired architecture, nor that the system would be perceived by humans as being easier to use than a more conventional system.

Sources of apparent biological or human-like characteristics

There are a number of possible sources of biological or human-like characteristics that might appear in a cognitive architecture. The most obvious in the present context is by design, through the inclusion of features inspired by the structural or functional characteristics of real biological architectures; these may range from high level functional components or mechanisms, such as long-term memory, or action selection, to the components at the lowest level above the computational substrate, such as Izhikevich neurons (Izhikevich, 2003). This approach forms the mainstream of current investigations of BICA, and requires no further discussion. Instead, this section will point out three of the main ways in which cognitive systems may come to possess characteristics of biological systems without their being designed in.
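As a concrete illustration of that lowest level, the Izhikevich neuron is compact enough to reproduce in full. The following Python sketch is ours, not taken from any particular architecture; it simulates a 'regular spiking' cell using the published two-variable model and reset rule (Izhikevich, 2003).

```python
# Izhikevich (2003): v' = 0.04v^2 + 5v + 140 - u + I,  u' = a(bv - u),
# with the reset v <- c, u <- u + d whenever v reaches +30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # 'regular spiking' parameter set
v, u = -65.0, 0.2 * -65.0            # membrane potential (mV) and recovery variable
dt, I = 0.5, 10.0                    # Euler step (ms) and constant input current

spike_times = []
for step in range(2000):             # 1 s of simulated time
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike detected: record and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in 1 s")
```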

By influence or inheritance

The first way is quite trivial, but still worth noting. Some of the best known and most influential cognitive architectures (e.g. ACT-R, Soar) were explicitly inspired by, or were designed to model, human cognition, and therefore qualify as BICAs. Because of this, architectures derived from or inspired by these and similar architectures, however loosely, should also qualify as BICAs, whatever their intended application domain.

By using a biological design method: evolution and bricolage

Part of the reason for apparent biological imperfections is that evolution does not create systems from first principles, like modern engineers, but rather engages in bricolage, or tinkering, using or modifying the materials at hand to produce something good enough for the moment. If this turns out later to have been a false move, leading to difficulties, then there is no way of going back to rationalise the design – the fault must be coped with, as likely as not through the use of more bricolage. The vertebrate eye is a design disaster, with the neurons in front of the sensory cells, obstructing the light and having to be gathered together to leave the eyeball, creating the blind spot. In contrast, the eye of the octopus – a mere mollusc, from a different lineage – avoids these problems by placing the neurons behind the sensory cells.

Why might this be relevant to BICA? Because some cognitive abilities, such as human episodic memory, may also be the result of bricolage. It is possible, and perhaps even probable, that forms of imagination evolved before episodic memory, mainly because there is a clear evolutionary benefit in being able to anticipate the outcomes of actions, but no such benefit attaches to episodic memory per se. However, the simulation mechanisms supporting imagination may later have been used as a basis for episodic memory, constraining it to becoming an error-prone process of active reconstruction from sparse cues. In contrast, it is computationally simple to store memories at whatever resolution is required, and to retrieve and replay them perfectly accurately as many times as necessary; a toy version of this contrast is sketched below. The temptation for a designer of a cognitive system is to provide a better system than nature has contrived – and to do so with very little effort. This may benefit performance, but the end result will be less human-like. Which option is taken is really a matter for the comments in the section 'What do we mean by biological inspiration?'. The point we wish to make here is that the production of a cognitive architecture using biologically inspired methods of design, such as artificial evolutionary bricolage, may also be a justification for calling the architecture a BICA. This might seem a small and unimportant observation now, but the evolution of cognitive systems within complex physics-based simulations of embodied agents in complex worlds may soon be an everyday affair.
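The sketch below caricatures the contrast just drawn (all names and the 'schema' are invented for illustration): a verbatim store replays an episode perfectly, while a reconstructive store keeps only sparse cues and fills the gaps from generic schema knowledge, so its recollections are plausible but can be confidently wrong in the unstored details.

```python
episode = {"place": "kitchen", "agent": "Alice", "object": "red cup", "action": "drops"}

# The engineer's option: store everything, replay it exactly.
verbatim_store = dict(episode)

# The bricolage option: encode only a few cues at storage time...
cues = {"agent": "Alice", "action": "drops"}
# ...and fill the rest from generic schema knowledge at recall time.
schema = {"place": "office", "agent": "someone", "object": "glass", "action": "does something"}

def reconstruct(cues, schema):
    # Cues win where present; schema defaults quietly fill the gaps.
    return {key: cues.get(key, default) for key, default in schema.items()}

print(verbatim_store)             # exact replay, every time
print(reconstruct(cues, schema))  # {'place': 'office', 'object': 'glass', ...}: wrong in the details
```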

By being subject to the same constraints as biological systems

Convergent evolution is the phenomenon in which unrelated biological lineages evolve similar features in response to strong environmental and physical constraints. The usual examples are wings, which are similar in bats (mammals) and pterosaurs (reptiles), although no common ancestor had wings. In much the same way, parallels can be drawn between some engineered and some evolved structures: for example, energy-efficient fast travel through fluid environments requires some degree of streamlining in both animals and machines. It is also overwhelmingly likely that the constraints imposed on an information processing system, whether natural or artificial, will tend to produce both designed and evolved systems with some functional and even structural similarities, as long as the available building blocks permit this. The discovery of feedback control in engineering facilitated the understanding of homeostasis in animals; common problems had indeed led to common solutions. The active constraints may include speed and accuracy of response, appropriate use of energy, the optimisation of risk and reward, the need to profit from previous experience, and so on. For example, as many have observed, the ability to simulate a candidate action sufficiently well to assess its utility is a very valuable attribute, whether it happens in a chimpanzee or in a model-based predictive controller in a chemical plant. The implications of this train of thought for BICA are clear: the best design for a cognitive system which is subject to certain strong task and environmental constraints may be very similar to an existing biological system subject to the same constraints.


Fig. 1 (a) An anthropomimetic robot, the ECCEROBOT Design Study (EDS). (b) Static 3D structure captured in a Blender model. (c) Dynamic behaviour modelled in the Bullet physics engine.

The designed system may therefore appear to be a BICA, but there will have been no identifiable process of biological inspiration. Since a major source of constraints is the system's physical embodiment, and since cognitive systems are now being combined with humanoid robots with increasing frequency, this should perhaps encourage us to recognise another category of BICA in which the architecture is not inspired by a biological mind or brain but shaped by a biological embodiment, as noted in the subsection 'A human-like mind in a human-like body?'.

ECCEROBOT: a human-like embodiment

We recently took part in a Europe-wide project, ECCEROBOT (Embodied Cognition in a Compliantly Engineered Robot), which was intended to explore some of the consequences of having a specifically human embodiment. This part of the paper is not intended as a detailed description of the full project; instead, we will use parts of it to highlight some of the factors we have discussed in the section 'Sources of apparent biological or human-like characteristics'. The project centred on a series of robots, each of which copied the musculoskeletal structure of the human body, with a human-like skeletal torso, and analogues of muscles elastically coupled to the bones via elastic tendons. Fig. 1(a) shows an example from 2010, the ECCEROBOT Design Study (EDS). This anthropomimetic approach (Holland & Knight, 2006) contrasts with that of conventional humanoid robots, which, although they fit within a roughly human envelope, are constructed using the same technology as industrial robots, with stiff, precisely controlled motors and joints.

Aims of ECCEROBOT

The ECCEROBOT project (ECCEROBOT, 2012) had three explicit aims:

(a) To design and build an anthropomimetic mobile robot. An anthropomimetic robot (Holland & Knight, 2006) is one that copies as far as possible the structural and functional physical architecture of the human body – the skeleton, and the compliant muscles and tendons. The approach taken in designing and building the ECCEROBOTs has been described in detail elsewhere, and the principal differences between this approach and a conventional engineering approach have been identified and characterised (Marques et al., 2010; Wittmeier et al., 2012).

(b) To characterise its dynamics and control it successfully. Three approaches to control were investigated: classical control theory; heuristic methods (search, learning, evolutionary algorithms) using a physics-based simulation of the body; and sensory/motor control. None of these used methods explicitly formulated to match those thought to be used by humans; however, there were already clear parallels between some components of the artificial and the natural control architectures – for example, a physics-based body model could be said to correspond to the body schema thought to play a part in human motor control (Gallagher, 2005; Hoffmann et al., 2010). It is also worth noting that some of the features of anthropomimetic robots (e.g. multi-articular actuation and complex joints, as identified in Marques et al. (2010)) prevented the application of current classical methods (Potkonjak, Svetozarevic, Jovanovic, & Holland, 2010), thereby forcing our attention onto the second approach (Wittmeier et al., 2011). More generally, although the problem of control is often explicitly separated from cognition, almost every example of control above the simplest level includes some functional or infrastructural aspects of cognition – for example, abstractions from data representing perceptions, or schemes for action selection and movement planning. At the highest level – for example, adaptive model-based predictive control – some control schemes are predominantly cognitive in nature, often with a real-time emphasis that is absent in many cognitive systems.

(c) To explore and exploit its human-like characteristics to produce some human-like cognitive features. The human-like characteristics referred to in this aim are those of the embodiment, as expressed through the shaping of movements, actions, and some sensory inputs by the morphology (Pfeifer et al., 2007). Krichmar has already emphasised the importance, and indeed necessity, of a cognitive system being embodied (Krichmar, 2012). However, the emphasis in BICA on the human-like aspects of cognitive systems points to a human-like embodiment, and ECCEROBOT represents the current state of the art in this respect. Although no specifically human cognitive features were identified within the short timescale of the project, future work will aim to compare robot and human characteristics using a range of objective techniques, including information-based measurements related to cognition (Polani, 2011).

If and when (c) is successful, it will show how a system may acquire some human-like cognitive features through embodiment rather than by design. This will align well with the principles outlined in the subsection 'By being subject to the same constraints as biological systems'.

How is ECCEROBOT different from classical robots?

There are four key characteristics which distinguish anthropomimetic robots like ECCEROBOT from traditional humanoids: tendon-driven redundant actuation, multi-articular joint actuators, compliance, and complex joints (Marques et al., 2010). While these succeed in producing a distinctively human (or animal) embodiment, they also make it almost impossible to use the standard engineering control techniques which conventional humanoids are so carefully designed to facilitate. It is for this reason that a key part of the ECCEROBOT project was to investigate how such robots might be controlled – and of course, the control methodology necessarily both constrains and enables the possibilities for cognition.

Whatever control system is adopted, in order to act appropriately it needs information about the robot's state, the state of the environment, and the relation between the robot and the environment. Ideally, all of this information would be derived from sensors mounted on the robot, and those sensors and their associated processing architectures would be biologically inspired. We have satisfied most of the first requirement – the use of offboard sensors (for visual motion capture) is minimal – but the severe constraints of the physical embodiment, as well as our substantial ignorance about how the nervous system processes sensory information, have led us to adopt a more pragmatic approach to the second.

The key provider of information about the environment is vision. After initial investigations using a single camera (hence the single eye of the EDS), which is known to be capable of providing all the required information (Newcombe & Davison, 2010), we have adopted the Kinect (Microsoft, 2012) as the main visual sensor. The Kinect provides a depth map co-registered with an RGB image; these data are processed to produce a simplified texture-mapped depth map (Devereux, Mitra, Holland, & Diamond, 2011). Within this map, known objects can be recognised and localised, and can then be replaced with detailed precompiled physically and cosmetically correct models as described below. The position of the robot's head in relation to the environment is known from the Kinect data; the static and dynamic configuration of the rest of the body can be derived from a knowledge of the positions of the motors, the lengths of the muscle/tendon units, the motor currents, and the tensions in the tendons. All sensory and motor data are managed by a distributed control architecture (Jäntsch, Wittmeier, & Knoll, 2010).

How apparent biological inspiration may arise through common constraints

The part of the project best suited to illustrating the material in the subsection 'By being subject to the same constraints as biological systems' is the use of physics-based modelling to support the search for a suitable motor program to achieve a simple task – reaching for a known object. This is conceptualised as a form of functional imagination (Marques & Holland, 2009), which was designed, and is described, in engineering rather than biological terms. The strategy in this section is to describe each of the six basic stages in purely functional terms, and then to point out possible correspondences, weak and strong, with some of what is known about the constraints and characteristics of relevant biological systems.

Stage 1 is the detection and localisation of the target object, which we shall assume to be present. This is done using a Microsoft Kinect mounted on the robot's head. The Kinect does not correspond to any biological system. It uses structured infrared light to produce a depth map, and an RGB camera to produce an approximately co-registered visual image. The system is preferred to a conventional stereo imaging system because it is lightweight, self-contained, and very fast. The detection of the target object is carried out by applying an optimal trade-off maximum average correlation height (OT-MACH) filter to the RGB image. The location and range of the centroid of the object are then determined by examining the corresponding portion of the Kinect's depth map (Devereux et al., 2011).
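A rough functional sketch of Stage 1 is given below. It is our simplification, not project code: a plain frequency-domain matched filter stands in for the OT-MACH filter, `grey` is assumed to be a grey-level image, and `depth` a co-registered depth map with zeros marking missing readings.

```python
import numpy as np

def detect_and_range(grey, depth, template):
    """Locate the template's best match in the image, then read the
    object's range from the co-registered depth map."""
    # Cross-correlation computed in the frequency domain (stand-in for OT-MACH).
    corr = np.real(np.fft.ifft2(np.fft.fft2(grey) *
                                np.conj(np.fft.fft2(template, s=grey.shape))))
    y, x = np.unravel_index(np.argmax(corr), corr.shape)

    # Range: median of the valid depth readings in a window around the peak.
    window = depth[max(0, y - 5):y + 6, max(0, x - 5):x + 6]
    valid = window[window > 0]
    rng = float(np.median(valid)) if valid.size else None
    return (x, y), rng
```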

Because of the requirements and constraints of the task, there will inevitably be parallels between almost any artificial scheme for visual object recognition and localisation, and what is known of the biological arrangements, as can be seen, e.g. from Kreiman's review (Kreiman, 2008). In contrast, there is such a variety of methods available that any apparent biological correspondence could result from the arbitrary selection of one method rather than another. For example, our scheme features distinct processes of recognition and localisation, and this is reminiscent of the well known separation between the brain's dorsal 'where' pathway and the ventral 'what' pathway (Goodale & Milner, 1992). However, it is also typical of the current level of biological knowledge that this separation is not universally accepted (Cardoso-Leite & Gorea, 2010).

Stage 2 produces a simplified 3D representation of the environment, and inserts it into the appropriate position in the 3D physics-based modelling system. This is done by converting the point cloud (the 3D distribution of points obtained from the Kinect's depth sensor) to a surface by triangulation (meshing), simplifying the mesh by extracting plane areas, and inserting it into the modelling environment as a collision surface – a solid surface allowing any collisions with the simulated robot or any other moving component to be detected. The surface can be textured (or 'painted') if required using the RGB data from the Kinect; this can be useful for viewing the simulation. The reduction in the complexity of the modelled environment is necessary for computational reasons; even using GPU-accelerated techniques, it takes from tens to hundreds of milliseconds to process the Kinect data. The simplified model of the environment as a coloured surface is a consequence of severe computational constraints, and this may represent a parallel with the 'grand illusion' of biological vision (Noë, 2002), in which the apparent richness of the subjective visual world is not supported by a corresponding richness of the underlying representation. Although the textured surface models the real world in depth and colour, there is no representation beyond that, except for the location of the centroid of the target object.

Stage 3 is the replacement of the meshed representation of the target object by a precompiled physics-based model of the target object. This is not a simple process, and requires the extraction of the meshed representation of the object, the repair of the remaining mesh (by making assumptions about the continuity and orientation of surfaces) and the insertion into the modelled environment of the detailed and accurate physics-based model of the target object. In biological terms, the target object is both attended to (in that it is allocated more processing resources and is represented differently from the contents of the rest of the visual field) and familiar. In psychology there are many differences between the perception of, and memory for, attended versus unattended objects and familiar versus unfamiliar objects. Many such differences (e.g. clarity of perception) favour the attended and familiar objects, as is the case in our model.
The heuristics used for cleaning up the mesh involve various assumptions about what is likely to be the case; such assumptions have been familiar in studies of visual perception since the work of Helmholtz on ‘unconscious inference’ (Helmholtz & Southall, 1924).
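One ingredient of the Stage 2 simplification, deciding whether a patch of the point cloud can be collapsed to a plane, is compact enough to show. This least-squares version using the SVD is our illustration, not the project's actual meshing code.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) array of 3D points.
    Returns the centroid and the unit normal of the plane."""
    centroid = points.mean(axis=0)
    # The normal is the direction of least variance: the last right
    # singular vector of the centred point set.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def planarity_rms(points):
    """RMS point-to-plane distance; small values mark patches that can
    safely be replaced by a single flat collision polygon."""
    centroid, normal = fit_plane(points)
    return float(np.sqrt(np.mean(((points - centroid) @ normal) ** 2)))
```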


In Stage 4, the precomputed physics-based model of the robot is also inserted into the simulation environment, with the modelled Kinect sensor in the position calculated from the environmental model, and with the posture, force distribution and dynamics derived from the proprioceptive sensors. In fact there are two robot models. One is highly detailed, and corresponds closely to the functionally relevant dimensions of the real robot; this is used to calculate the movements of the model under motor actuation, gravity, and inertial loadings. The other is a much simpler envelope enclosing the robot, and is used for the computationally demanding task of detecting collisions between robot components, or between the robot and the environment. From a biological perspective, the detailed model corresponds in many respects to Gallagher's body schema (Gallagher, 2005); perhaps equally interestingly, it can be visually rendered, when it has some correspondences with Gallagher's body image (Gallagher, 2005). It should be remembered that the need for using the models of the robot and the environment was driven by the incompatibility between the robot's embodiment and the principles of classical control, so this, although indirect, is probably the strongest candidate for common constraints producing common solutions.

In Stage 5, the system searches for a sequence of simulated motor activations that will achieve the required goal, in this case the reaching for the simulated object by the simulated robot. The paradigm is essentially that of functional embodied imagination as described by Marques and Holland (2009), but the search process is guided by learning, and the search space is reduced through the emergence of muscle synergies. (A video showing the model learning to reach for an object in different positions can be seen at http://www.youtube/0D_pb8qjNKE.) The motor activations used to drive the model are controlled by the same instructions as those used to drive the robot (Jäntsch et al., 2010). The motivation for this was the planned use of the simulated robots as platforms for developing control systems for the real robots; using the same instructions eliminates any intermediate step of having to convert a controller discovered in simulation to one that will operate the real robot. However, the use of common instructions is noteworthy from the biological point of view, because there is a great deal of evidence (Lamm, Windischberger, Leodolter, Moser, & Bauer, 2001; Lamm, Windischberger, Moser, & Bauer, 2007; Richter et al., 2000; Wexler, Kosslyn, & Berthoz, 1998) suggesting that imagined movements (and perceptions) are controlled by the same brain circuitry that controls their real equivalents. In fact, this is the observation underlying the embodied imagination paradigm, and also its precursors such as Hesslow's simulation theory (Hesslow, 2002). The common constraint here is therefore that of achieving economy of means by reusing previously developed components.

Stage 6 should consist of the execution by the real robot of the sequence of motor activations (the motor program) found by the 'imagination' process using the model robot. In fact this has not yet been achieved with the full robot, because the task of calibrating the model against such a complex robot is not yet feasible. However, it has proved possible to calibrate a key anthropomimetic robot component – the shoulder and upper arm – against its physics-based model using evolutionary strategies (Wittmeier, Gaschler, et al., 2012), and to demonstrate with sufficient accuracy that the same motor instructions produce the same movements. Even if it is eventually possible to calibrate a full anthropomimetic robot model against the real robot, there is an additional problem: mainly because of the compliant materials used, many of the characteristics of the real robot vary with temperature, time, and use, and so any calibration process will have to be constantly active (online) under normal working conditions. This also applies to the human body, and it is very likely that any online calibration process developed for these robots will take account of recent work on sensorimotor learning (Wolpert, Diedrichsen, & Flanagan, 2011), which uses ideas from engineering-based optimal control theory to explain the adjustment of human internal models. In this case it will be difficult to identify the involvement of biological inspiration, but it will be clear that the common constraint of having a variable embodiment will have determined a common solution.

In summary, the detailed examination of a single basic task to be carried out by a robot with human-like features shows a variety of solutions more or less similar to some of those thought to be used in biological systems. Most, but not all, of these are due to common task constraints independent of the human-like nature of the robot. Although many of the parallels with natural and human systems might seem superficial or incidental, in that some such parallels could be made for some aspects of even the most abstract AI planning system, it seems highly likely that some features of future cognitive architectures, especially embodied examples, will eventually be understood to be linked to certain typically 'biological' constraints. In fact, we believe that it would be as well to begin the process of identification now in order to pick out what might be a valuable strand within the wider sweep of BICA.

How common constraints may shed new light on biological systems

The previous section considered how similar constraints of task or embodiment might lead to similarities in the structures of both artificial and biological cognitive systems. This section picks up an observation in the subsection dealing with the scientific uses of biologically inspired architectures: if a system can achieve the same performance on a given task as a biological system without using some components thought to be essential for the biological system, then a reexamination of the biological system may be indicated. The root problem here is that our understanding of biological systems is still quite limited, and so it is quite possible to design a system inspired by that understanding that will include components or processes that may not in fact exist in the biological system. For example, Webb's work (Webb, 1995) showed that the behaviour of a female cricket moving towards a male singing the appropriate species-specific song could be accurately mimicked by a robot controlled by a simple four-neuron circuit, and that there was no need for the conventionally hypothesised separate recognition and localisation circuitry. In this section, we examine the widespread assumption that an embodied cognitive system must contain a representation of its current state by considering the design of the motor planning system for ECCEROBOT.

The motor planning strategy for ECCEROBOT's compliant, complex and non-linear structure takes as its premise the assumption that, in our present state of knowledge, it is unlikely that either an adequate analytical model or an analytically derived control signal could be designed. We have therefore used a generic physics engine to build a detailed simulation model of the robot's structure and joints, including models of the passively compliant tendons, the motors and gearboxes. By stepping the physics model forward in time under the influence of simulated motor inputs, we can then use it as a forward model supporting search or learning strategies in kinodynamic space to obtain a sequence of open-loop motor inputs taking the model from a given starting state (the captured state of the robot and environment) to a target state (e.g. grasping an object). This sequence could then be downloaded to the real robot for execution.
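Under obvious assumptions (the `step` and `cost` callables are hypothetical stand-ins for the Bullet-based forward model and a task cost such as hand-to-target distance), a minimal version of this search is random shooting over open-loop motor sequences:

```python
import numpy as np

def plan_open_loop(step, cost, init_state, n_motors,
                   horizon=60, n_samples=500, seed=0):
    """Roll candidate open-loop plans through the forward model and
    keep the one whose final state is cheapest."""
    rng = np.random.default_rng(seed)
    best_plan, best_cost = None, np.inf
    for _ in range(n_samples):
        plan = rng.uniform(-1.0, 1.0, size=(horizon, n_motors))  # motor drives
        state = init_state
        for u in plan:                 # 'imagine' the movement in simulation
            state = step(state, u)
        c = cost(state)
        if c < best_cost:
            best_plan, best_cost = plan, c
    return best_plan                   # candidate sequence for the real robot
```

The learning and muscle synergies mentioned in Stage 5 above would, in effect, replace the uniform sampling with a biased, lower-dimensional one.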

Delay-compensating control architecture for ECCEROBOT

As with any control system, delays must be taken into account. The most important delay is the end-to-end delay between the actual state of the system at a given time, and the earliest time that a control output based on the sensing of that state can begin to act. The total end-to-end delay is (d_in + d_out), where d_in is the time to capture, transmit and process sensor readings to obtain the relevant state estimate, and d_out is the time taken to generate a new (or revised) motor activation plan plus the time to transmit this to the physical motors. Thus, if S(t) is the actual robot state at time t, then the motor planner must be initialised with the state S(t + d_in + d_out), as this is the earliest state of the system where any new motor plan can have any physical effect on its motion. Of course, during d_in and d_out the robot will continue to be moved under the existing motor plan, and so d_in must include not only the time for computing S(t) from the sensor data but also the time d_predict for rolling this state estimate forward to S(t + d_in + d_out).

The output side of the delay-compensation control architecture is summarised in the schematic Fig. 2, in which for convenience (d_in + d_out) is written as d. The current proposed motor plan to reach the goal state is quantised and queued into the motor timeline cache. Control signals are sent to the robot motors continuously, read from this single master queue. The model of the robot and its elastic actuators takes the estimated current state S and drives it with the current motor plan, obtained by reading out the set of upcoming signal sequences covering the period d from the timeline cache. A predicted future state S(t + d) can thus be obtained. The motor planner now locates a new best plan that will take the robot from S(t + d) to the goal state. Revised plans are loaded into the queue, overwriting the old values but starting from the time step at (t + d).
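In outline, one replanning cycle on the output side of Fig. 2 behaves as in the sketch below (our rendering; `forward_model` and `planner` are hypothetical stand-ins, and time is quantised so that the delay d corresponds to `d_steps` ticks of the timeline cache).

```python
def replan_cycle(s_now, timeline, d_steps, forward_model, planner):
    """One delay-compensating replanning cycle. `s_now` is the state
    estimate S(t); `timeline` holds the queued motor commands."""
    # Commands within the end-to-end delay d will reach the motors anyway.
    committed = timeline[:d_steps]
    s_future = s_now
    for u in committed:                # roll the model forward to S(t + d)
        s_future = forward_model(s_future, u)
    # Plan from the predicted state and rewrite the queue from t + d onwards.
    return committed + planner(s_future)
```

Only the tail of the master queue is overwritten; the committed head is retained because it is already beyond recall.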

Fig. 2 Output delay compensation control architecture for ECCEROBOT.

Modelling an ECCEROBOT

To create a sufficiently fast non-linear, dynamic model we chose to use the Bullet physics engine (bulletphysics.org), which was originally designed for fast 3D games. It is nevertheless a modern, customizable and open-source update on older engines such as ODE, with GPU-accelerated collision detection and constraint solving planned for release shortly. Custom extensions have been added to Bullet to model the behaviour of the elastic muscles, pulleys, gearboxes and motors (Wittmeier, Jäntsch, Dalamagkidis, & Knoll, 2011). A first-pass model, shown in Fig. 1(b), was produced using the Blender tool (blender.org) to create a static 3D model of the robot from extensive measurements, photographs and videos. This was exported in sections to Bullet, where joint constraints were then added to create a dynamic model, as shown in Fig. 1(c). Finally, motor attachment points and pulleys – or pulley-like behaviour where muscle cables wrap around the shoulder or scapula – were added.
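The muscle extensions were written against the C++ engine (Wittmeier, Jäntsch, Dalamagkidis, & Knoll, 2011), but the flavour of layering an elastic tendon on top of a stock rigid-body engine can be conveyed with Bullet's Python binding, pybullet. Everything here (the geometry, the stiffness, and the unilateral 'cables pull but cannot push' rule) is a toy example of ours, not the project's muscle model.

```python
import numpy as np
import pybullet as p

p.connect(p.DIRECT)                   # headless physics
p.setGravity(0, 0, -9.81)
box = p.createMultiBody(              # a 1 kg 'bone segment'
    baseMass=1.0,
    baseCollisionShapeIndex=p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.05] * 3),
    basePosition=[0, 0, 0.5])

anchor = np.array([0.0, 0.0, 1.0])    # fixed tendon attachment point
k, rest_length = 200.0, 0.4           # stiffness (N/m) and natural length (m)

for _ in range(240):                  # one second at the default 240 Hz step
    pos, _ = p.getBasePositionAndOrientation(box)
    cable = anchor - np.array(pos)
    stretch = np.linalg.norm(cable) - rest_length
    if stretch > 0:                   # elastic cables pull but cannot push
        force = k * stretch * cable / np.linalg.norm(cable)
        p.applyExternalForce(box, -1, force.tolist(), list(pos), p.WORLD_FRAME)
    p.stepSimulation()

print(p.getBasePositionAndOrientation(box)[0])  # the box settles hanging below the anchor
```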

Merging the robot and environment to create a simulated 'inner world'

Planning tasks where the robot must move about and interact with objects cannot take place without the robot model being situated accurately within the model of the environment, and in relation to the modelled target object. A significant attraction of a generic physics engine approach is that the simulation can be extended to incorporate not just the robot but also a three-dimensional model of the environment and the target object. Furthermore, using the Kinect sensor and object recognition as described previously (Devereux et al., 2011), these models can be added selectively as either homogeneous static 'collision shapes' (the environment, typically a room) or as full dynamic models in their own right – for example, a target object such as a bottle that is to be grasped and lifted. Once this is achieved, the now unified physics-based world model can be used to plan and select the best set of activation signals. Fig. 3 shows a schematic of this process. The world state W(t) is generated by merging the state of the robot model with a static collision mesh, along with explicit dynamic models of recognised potential target objects. W(t) is then stepped forward in the physics engine for a period (d_in + d_out) before motor planning is commenced.

The absence of the objective present

The ubiquity of processing delays means that any representation of the present in an artificial system must be a representation of a predicted present, and that this must also be true of any representation of the present in a natural system. Of course, given sufficient computational resources, it is certainly possible for a cognitive system, whether artificial or natural, to construct a representation of a predicted present, as is routinely done in certain engineering systems. However, in the ECCEROBOT system we have just described there is no requirement for any explicit representation of the present; instead, the system contains only data-driven representations of the recent past and predicted representations of the near future. If the human system for action planning and control is similar to the one being developed for ECCEROBOT – in other words, if both systems are constrained by the same action planning requirements – then this raises the intriguing possibility that no representation of the (predicted) present exists in the brain, because there is no use for one. If there is a representation of the predicted present, then it must be for some purpose unrelated to the brain's function as a control system.

There is of course a huge literature, philosophical and psychological, on the conscious perception of the apparent present, and on other aspects of the perception of time, such as duration, temporal order, and simultaneity. We cannot offer an opinion as to whether any of this depends on the explicit assumption that the brain maintains a representation of the instantaneous present, but since much of the literature is concerned with the temporal anomalies of perception, it might be useful to point out that the system does possess two representations of state, namely the past state S(t), available at (t + d_in - d_predict), and the future state S(t + d_in + d_out), available at (t + d_in). Although little is known of how, when, and why perceptions of state become conscious, it is at least possible that either of these states might be used for that purpose.

Fig. 3 Control architecture using the physics engine to merge environment capture with the robot model.

For example, a range of studies in both cognitive science and neuroscience directly support the notion that the state consciously perceived when planning or executing a motor task may correspond not to the state captured at the moment of sensory input, but rather to an estimate of a predicted future state. The flash-lag effect (Nijhawan, 1994), where a moving dot is incorrectly perceived to be ahead of a static one, is a well known simple example, although there are competing interpretations. Similarly, the auditory continuity (Grossberg, 1995) and phonemic restoration (Grossberg & Myers, 2000) illusions, where interruptions in sensory data are not perceived at all by a subject so long as the data resumes along a predictable path, demonstrate how some conscious perceptions appear to derive not from direct data but from a predicted state generated some time after a period of data acquisition. More interesting still, Ariff, Donchin, Nanayakkara, and Shadmehr (2002) found that the position of eye saccades tracking an unseen reaching movement appeared to reflect the output of a state predictor, rather than the actual position. The saccades correctly predicted the hidden hand position until the hand was subjected to a force field, when the eyes at first continued to track the predicted path until the saccades were briefly inhibited and a corrected estimated position was tracked. Similarly, Fourneret and Jeannerod (1998) found that subjects performing a motor movement were actually more conscious of the relevant stage in their planned movement than of that in their actual movement, which they had been induced to unconsciously distort.

Of course, the hypothesis that the availability of S(t + d_in + d_out) leads to its use for the perception of the apparent present cannot account for particular anomalies without considerable further work; nevertheless, further theoretical and experimental investigation might well be worthwhile. From the point of view of the second approach described in the subsection 'Scientific uses', this argument is an example of an artificial cognitive architecture being able to function without an attribute – a representation of the present state – commonly assumed to be present in the corresponding natural system, thus raising the question of whether the attribute is indeed present in the natural system.

Conclusions

Our examination of the ways in which a cognitive architecture aiming at human-level performance can include biological inspiration has uncovered a number of issues that should be taken into account in the future development of the BICA enterprise. However, none of these considerations should be thought of as attacking the legitimacy of current research activities within the BICA framework. Three distinct aspects of human and biological inspiration have been identified: the implicit requirement for human-like as well as human-level performance; the influence of the proposed end uses; and the ways in which biological characteristics can appear in an architecture other than by design.

The distinction between human-like and human-level performance probably has the most serious implications for future BICA work. We have identified normative, conscious, and embodied components of targeting human-like performance. The normative and conscious factors both imply the inclusion of a host of known defects in human cognition, with no clear advantages in performance, and they will certainly make the achievement of human-level performance more difficult. A useful comparison can be made with the artificial general intelligence movement within AI, which is also aiming at a human level of performance but is not handicapped by having to include the whole panoply of human cognitive defects. It would seem that the distinctiveness of BICA may hinge on a commitment to include the normative and conscious factors, but this emphasis is not yet apparent within the movement. The third component, embodiment, is certainly present within BICA, but its implications, especially in respect of the relevance of emotions to human cognition, have not yet been fully explored.

The effects on BICA of the proposed end uses of the systems do not seem to be particularly relevant to much current work, but they are likely to become more important as the field matures, especially when the scientific contribution is taken into consideration. Biologically inspired cognitive architectures do not necessarily require accurate cognitive modelling at the neural level, as exemplified for example by the work of McKinstry, Edelman, and Krichmar (2006), but such modelling in the pursuit of better systems may well contribute to scientific advancement.

The ways in which biological characteristics can appear in an architecture other than by design is at present more a matter of interest than of consequence to BICA. However, the emerging possibility of providing an increasingly human-like embodiment, as demonstrated by the ECCEROBOT project, may bring these factors to the fore.

In summary, the question of the balance between human-level and human-like performance in BICA is a strategic question requiring resolution in order for the enterprise to progress in a clearly defined direction. The other aspects of biological inspiration discussed in this paper are less important, but will become more salient as the initiative matures.

Acknowledgments

This paper includes material from two papers presented at the Second Annual Conference on Biologically Inspired Cognitive Architectures, 2011. The research leading to these results has received funding from the European Community's Seventh Framework Programme FP7/2007-2013 Challenge 2 (Cognitive Systems, Interaction, Robotics) under Grant Agreement No. 231864-ECCEROBOT. The ECCEROBOTs are designed and built by Rob Knight, The Robot Studio, Divonne-les-Bains, France.

References

Ariff, G., Donchin, O., Nanayakkara, T., & Shadmehr, R. (2002). A real-time state predictor in motor control: Study of saccadic eye movements during unseen reaching movements. Journal of Neuroscience, 22, 7721–7729.
Baars, B. J. (1997). Global workspace theory: A rigorous scientific theory of consciousness. Journal of Consciousness Studies, 4(4), 292–309.
Cardoso-Leite, P., & Gorea, A. (2010). On the perceptual/motor dissociation: A review of concepts, theory, experimental paradigms and data interpretations. Seeing and Perceiving, 23(2), 89–151.
Chalmers, D., & Bourget, D. (Eds.). Online papers on consciousness: The function of consciousness.
Cordeschi, R. (2002). The discovery of the artificial: Behavior, mind and machines before and beyond cybernetics. Dordrecht, The Netherlands: Kluwer.
Damasio, A. (1994). Descartes' error: Emotion, reason, and the human brain. Putnam.
Damasio, A. (1999). The feeling of what happens: Body and emotion in the making of consciousness. Harcourt.
Dennett, D. C. (1978). Why not the whole iguana? Behavioural and Brain Sciences, 1, 103–104.
Devereux, D., Mitra, B., Holland, O., & Diamond, A. (2011). Using the Microsoft Kinect to model the environment of an anthropomimetic robot. In Proceedings of the second IASTED international conference on robotics (Robo2011), Pittsburgh, USA.
ECCERobot. (2012). Embodied cognition in a compliant engineered robot.
Farmer, J. D., & Geanakoplos, J. (2009). Hyperbolic discounting is rational: Valuing the far future with uncertain discount rates. Cowles Foundation discussion papers 1719, Cowles Foundation for Research in Economics, Yale University.
Fourneret, P., & Jeannerod, M. (1998). Limited conscious monitoring of motor performance in normal subjects. Neuropsychologia, 36, 1133–1140.
Franklin, S. (2003). IDA: A conscious artifact? Journal of Consciousness Studies, 10, 47–66.
Gallagher, S. (2005). How the body shapes the mind. Clarendon Press.

115

Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103(4), 650–669. Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20–25. Grossberg, S., & Myers, C. W. (2000). The resonant dynamics of speech perception: Interword integration and duration-dependent backward effects. Psychological Review, 107(4), 735– 767. Grossberg, S. (1995). The attentive brain. American Scientist, 83, 438–449. Helmholtz, H. V., & Southall, J. P. C. (1924). Helmholtz’s treatise on physiological optics. In J. P. C. Southhall (Ed.), The optical society of America, Rochester. Hesslow, G. (2002). Conscious thought as simulation of behaviour and perception. Trends in Cognitive Sciences, 6(6), 242–247. Hoffmann, M., Marques, H., Arieta, A., Sumioka, H., Lungarella, M., & Pfeifer, R. (2010). Body schema in robotics: A review’. IEEE Trans- actions on Autonomous Mental Development, 2(4), 304–324. Holland, O., & Knight, R. (2006). The anthropomimetic principle. In J. Burn & M. Wilson (Eds.), Proceedings of the AISB06 symposium on biologically inspired robotics. Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14, 1569–1572. Ja ¨ntsch, M., Wittmeier, S., & Knoll, A. (2010). Distributed control for an anthropomimetic robot. In Proceedings of IEEE/RSJ international conference on Intelligent Robots and Systems IROS 2010 (pp. 5466–5471). Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, XLVII, 263–291. Kreiman, G. (2008). Biological object recognition. Scholarpedia, 3(6), 2667. . Krichmar, J. L. (2012). Design principles for biologically inspired cognitive robotics. Biologically Inspired Cognitive Architectures. http://dx.doi.org/10.1016/j.bica.2012.04.003. Lamm, C., Windischberger, C., Leodolter, U., Moser, E., & Bauer, H. (2001). Evidence for premotor cortex activity during dynamic visuospatial imagery from single-trial functional magnetic resonance imaging and event-related slow cortical potentials. NeuroImage, 14(2), 268–283. Lamm, C., Windischberger, C., Moser, E., & Bauer, H. (2007). The functional role of dorso-lateral premotor cortex during mental rotation: An event-related fMRI study separating cognitive processing steps using a novel task paradigm. NeuroImage, 36(4), 1374–1386. Marques, H. G., & Holland, O. (2009). Architectures for functional imagination. Neurocomputing, 72(4–6), 743–759. Marques, H., Jaentsch, M., Wittmeier, S., Holland, O., Alessandro, C., Diamond, A., et al. (2010). ECCE1: The first of a series of anthropomimetic musculoskeletal upper torsos. In 10th IEEE-RAS international conference on humanoid robots (Humanoids 2010) (pp. 391–396), IEEE. McKinstry, J. L., Edelman, G. M., & Krichmar, J. L. (2006). A cerebellar model for predictive motor control tested in a brainbased device. Proceedings of the National Academy of Sciences of the United States of America, 103(9), 3387–3392. Metta, G., Natale, L., Nori, F., Sandini, G., Vernon, D., Fadiga, L., von Hofsten, C., Santos-Victor, J., Bernardino, A., & Montesano, L. (2010). The iCub humanoid robot: An open-systems platform for research in cognitive development. Neural Networks, Special Issue on Social Cognition: From Babies to Robots, 23, 1125– 1134. Microsoft. (2012). The Kinect sensor. . Montier, J. (2007). Behavioural investing: A practitioner’s guide to applying behavioural finance. John Wiley and Sons.

116 Newcombe, R. A., & Davison, A. J. (2010). Live dense reconstruction with a single moving camera. In 2010 IEEE conference on Computer Vision and Pattern Recognition (CVPR), IEEE (pp. 1498–1505). Nijhawan, R. (1994). Motion extrapolation in catching. Nature, 370, 256–257. Norretranders, T. (1991). The user illusion: Cutting consciousness down to size. New York: Viking Press. Noe ¨, A. (2002). Is the visual world a grand illusion? Journal of Consciousness Studies, 9(5–6), 1–12. A. Noe ¨, Ed.. Pfeifer, R., & Gomez, G. (2009). Morphological computation – connecting brain, body, and environment. In B. Sendhoff, O. Sporns, E. Ko ¨rner, H. Ritter, & K. Doya (Eds.), Creating brainlike intelligence. From basic principles to complex intelligent systems (pp. 66–83). Berlin: Springer. Pfeifer, R., Lungarella, M., & Iida, F. (2007). Self-organization, embodiment, and biologically inspired robotics. Science, 318(5853), 1088–1093. Polani, D. (2011). An informational perspective on how the embodiment can relieve cognitive burden. In Proceedings of IEEE symposium series in computational intelligence 2011 – symposium on artificial life (pp. 78–85). Potkonjak, V., Svetozarevic, B., Jovanovic, K., & Holland, O. (2010). Biologically inspired control of a compliant anthropomimetic robot. In Proceedings of the 15th IASTED international conference on Robotics and Applications (RA2010) (pp. 182– 189). ACTA Press Scientific. Ramamurthy, U., Baars, B., D’Mello, S. K., & Franklin, S. (2006). LIDA: A working model of cognition. In D. Fum, F. Del Missier, & A. Stocco (Eds.), Proceedings of the seventh international conference on cognitive modeling (pp. 244–249). Trieste, Italy: Edizioni Goliardiche. Richter, W., Somorjai, R., Summers, R., Jarmasz, M., Menon, R. S., Gati, J. S., & Georgopoulos, A. P. (2000). Motor area activity during mental rotation studied by time-resolved single-trial fMRI. Journal of Cognitive Neuroscience, 12(2), 310–320.

O. Holland et al. Samuelson, P. (1937). A note on measurement of utility. Review of Economic Studies, 4, 155–161. Sun, R. (2003). A tutorial on CLARION 5.0. Technical Report, Cognitive Science Department, Rensselaer Polytechnic Institute. Webb, B. (1995). Using robots to model animals: A cricket test. Robotics and Autonomous Systems, 16(2–4), 117–134. Wexler, M., Kosslyn, S. M., & Berthoz, A. (1998). Motor processes in mental rotation. Cognition, 68, 77–94. Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin and Review, 9(4), 625–636. Wittmeier, S., Alessandro, C., Bascarevic, N., Dalamagkidis, K., Devereux, D., Diamond, A., et al. (in press). Towards anthropomimetic robotics: Development, simulation and control of a Musculoskeletal Torso. Artifical Life. Wittmeier, S., Gaschler, A., Ja ¨ntsch, M., Dalamagkidis, K., & Knoll, A. (2012). Calibration of a physics-based model of an anthropomimetic robot using evolution strategies. In Proceedings of IEEE/RSJ international conference on Intelligent Robots and Systems (IROS) (accepted). Wittmeier, S., Ja ¨ntsch, M., Dalamagkidis, K., & Knoll, A. (2011). Physics-based modeling of an anthropomimetic robot. In Proceedings of IEEE/RSJ international conference on Intelligent Robots and Systems (IROS2011) (pp. 4148–4153). Wittmeier, S., Ja ¨ntsch, M., Dalamagkidis, K., Rickert, M., Marques, H. G., & Knoll, A. (2011). CALIPER: A universal robot simulation framework for tendon-driven robots. In 2011 IEEE/RSJ international conference on Intelligent Robots and Systems (IROS) (pp. 1063–1068). Wolpert, D. M., Diedrichsen, J., & Flanagan, J. R. (2011). Principles of sensorimotor learning. Nature Reviews Neuroscience, 12, 739–751. Ziemke, T. (2003). What’s that thing called embodiment? In Proceedings of the 25th annual meeting of the cognitive science society (pp. 1305–1310). Lawrence Erlbaum. Ziemke, T. (2008). On the role of emotion in biological and robotic autonomy. Biosystems, 91, 401–408.