Development of computational models of emotions: A software engineering perspective




Journal Pre-proofs

Development of Computational Models of Emotions: A Software Engineering Perspective
Enrique Osuna, Luis-Felipe Rodríguez, J. Octavio Gutierrez-Garcia, Luis A. Castro

PII: S1389-0417(19)30510-8
DOI: https://doi.org/10.1016/j.cogsys.2019.11.001
Reference: COGSYS 915

To appear in: Cognitive Systems Research

Received Date: 20 October 2019
Accepted Date: 9 November 2019

Please cite this article as: Osuna, E., Rodríguez, L-F., Octavio Gutierrez-Garcia, J., Castro, L.A., Development of Computational Models of Emotions: A Software Engineering Perspective, Cognitive Systems Research (2019), doi: https://doi.org/10.1016/j.cogsys.2019.11.001

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

© 2019 Published by Elsevier B.V.

Development of Computational Models of Emotions: A Software Engineering Perspective

Enrique Osuna (a), Luis-Felipe Rodríguez (a,*), J. Octavio Gutierrez-Garcia (b), Luis A. Castro (a)

(a) ITSON, Av. Antonio Caso 2266, Ciudad Obregón 85137, Sonora, México
(b) ITAM, Río Hondo 1, Ciudad de México 01080, México

Abstract

Computational Models of Emotions (CMEs) are software systems designed to explain the phenomenon of emotions. The mechanisms implemented in this type of computational model are based on human emotion theories reported in the literature and are designed to provide intelligent agents with affective capabilities and to improve human-computer interaction. However, despite the growing interest in this type of model, the development process of CMEs does not seem to follow formal software methodologies. In this paper, we present an analysis of CMEs from a software engineering perspective. We aim to identify which elements of software engineering are used in their development process and to demonstrate how some software engineering techniques may support and improve that process. We discuss a series of challenges to be addressed in order to take advantage of software engineering techniques: 1) definition of guidelines to help decide which emotion theories should be implemented computationally, 2) homogenization of terms about human emotions, their components, phases, and cycles implemented in CMEs, 3) design of CMEs whose components can be reusable, 4) definition of standard criteria for comparative analysis between CMEs, 5) identification of software engineering principles, concepts, and design practices useful in the construction of CMEs, and 6) definition of standard frameworks to validate CMEs.

Keywords: Computational Model of Emotion, Software Engineering, Formal development process, Software methodology

* Corresponding author
Email addresses: [email protected] (Enrique Osuna), [email protected] (Luis-Felipe Rodríguez), [email protected] (J. Octavio Gutierrez-Garcia), [email protected] (Luis A. Castro)

Preprint submitted to Cognitive Systems Research, November 13, 2019

1. Introduction

Research in areas such as psychology and neuroscience has revealed the extensive interaction between emotion and cognition in human beings (e.g., in perception, attention, learning, and decision making) [67, 27, 76, 25, 55, 39]. Emotions are a crucial element of people's daily lives. The understanding of emotional signals in everyday environments is an important aspect that influences people's communication and verbal and non-verbal behavior [80, 45]. People's facial expressions, voice intonation, and body posture are also shaped by the emotional states they experience [77, 93]. Nevertheless, the definition of the term emotion is still debated among the research community [67]. One such definition states that emotion is the cognitive data resulting from the evaluation of internal and external events, used to prepare responses and attribute concepts and states to the perceived events [53]. Importantly, the study of human emotion has led to the emergence of several theories of emotion, including cognitive theories [49, 24, 89, 96], social constructivist theories [3, 75, 40], biological theories [28], and other, more integrative approaches to emotions [17, 68, 101].

Computational models of emotion (CMEs) are software systems that attempt to explain the phenomenon of emotions by implementing computational mechanisms for the evaluation of emotional stimuli, the elicitation of emotions, and the generation of emotional behaviors. This type of model is inspired by at least one emotion theory, thus proposing a computational version of such emotion theories. Computational models of emotion are designed to provide artificial agents with affective processing [84], thus contributing to a key objective in the field of artificial intelligence, where the research community seeks to create emotionally intelligent entities capable of detecting emotions from human and


artificial agents and expressing emotions in response to perceived events [45]. Importantly, the development of CMEs contributes to the way in which emotion theories are formulated and evaluated, as computational models require well-defined procedures and the systematization of emotion theories [58].

The software development process of CMEs is driven by both theoretical and computational aspects. The explanations provided by emotion theories about the functioning of human emotions help define the emotion mechanisms and architectures implemented in CMEs. Mechanisms such as the evaluation of emotional stimuli and the generation of emotions usually follow specific emotion theories. On the other hand, computational elements (mainly from the software engineering field) are used to ensure the correct technical functioning of the mechanisms implemented in CMEs. This involves aspects such as the programming language in which the computational algorithms are implemented and the software design techniques utilized to assist the several phases of the software development process [43, 78].

Despite the growing need for CMEs, there is a lack of formal software methodologies and software engineering standards appropriate for guiding the construction of this type of computational model. This lack of computational tools and techniques from areas such as software engineering has led to the construction of CMEs whose development process seems to follow informal procedures. In contrast to conventional software systems, whose design and development process is guided by formal, standard methodologies and techniques, the development process of CMEs still lacks well-structured software design procedures, formal methodologies, and the use of software engineering principles and best practices.
In particular, the development process of contemporary CMEs reported in the literature seems to follow the following general informal procedure [12, 53, 86, 84]:

Selection of theoretical foundations: the emotion theories that guide the development process of the CME are selected.


Formal interpretation: emotion theories are interpreted using formal languages.

Computational codification: the interpreted emotion theory is translated into computational algorithms using specific programming languages. These algorithms represent an executable version of the selected theories, plus additional software artifacts implemented to fill gaps in emotion theory and to meet specified requirements.

Embodiment in cognitive agent architectures: the resulting computational model is included in a larger software system that represents the underlying cognitive architecture of an intelligent agent. In this manner, affective mechanisms are provided so that such agents are capable of affective interaction and behavior.

Validation: the mechanisms implemented in the CME are validated in order to assess aspects associated with the evaluation of emotional stimuli and the generation of emotions. This validation may be carried out prior to the inclusion of the CME in a cognitive agent architecture.

Although this general procedure has led to CMEs that have proven useful in several application domains, the development process of CMEs is still subject to improvements that may lead to a formal procedure based on well-defined software methodologies, techniques, tools, principles, and best practices.

Software engineering is an area that provides a wide range of fundamental techniques, methods, practices, and principles that support the development process of software systems [99]. Research in the field of software engineering is focused on supporting the design, implementation, testing, and maintenance processes of computer systems, emphasizing the importance of the quality of the resulting software product [48]. Software systems that are developed under software engineering principles achieve key quality features in computer systems such as scalability, adaptability, extensibility, reusability, and modularity [107, 52, 103, 102].
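The "computational codification" step of the informal procedure above can be illustrated with a small theory fragment: the common assumption that emotion intensity decays toward a neutral baseline over time. The exponential form and the half-life parameter below are assumptions for illustration, not prescribed by any particular emotion theory or CME.

```python
import math

def decayed_intensity(initial: float, elapsed_s: float,
                      half_life_s: float = 10.0) -> float:
    """Exponential decay of an emotion's intensity toward zero.

    `half_life_s` is an assumed tuning parameter: the time after
    which the intensity has halved.
    """
    return initial * math.exp(-math.log(2) * elapsed_s / half_life_s)

# After one half-life, the intensity has halved.
print(round(decayed_intensity(0.8, 10.0), 2))  # 0.4
```

Such a fragment shows how codification forces choices (functional form, parameter values) that the source theory leaves open, which is precisely where additional software artifacts enter the model.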

A computational model of emotions is in essence a software system. In this sense, the key quality features mentioned above are also desirable in CMEs. In fact, many of the problems in the field of CMEs may be approached from a software engineering perspective. Considering that software engineering is currently a mature field, a pertinent question is: how could software engineering improve the development process of CMEs?

It is important to emphasize that there is no universal software development process; the process must be adapted to the nature of the software product being developed. In this sense, there have been some efforts to standardize some elements of the development process of CMEs [18, 13]. However, the development process of CMEs is still far from using formal methodologies and techniques such as those provided by the software engineering field. A formal software engineering approach to the development of CMEs could help address issues related to aspects such as the implementation of CMEs in diverse domains, the extension of functionalities of already implemented CMEs, and the definition of mechanisms through component-based approaches that facilitate internal data exchange in CMEs, among many other qualities. Although most of the software engineering principles, techniques, standards, and approaches mentioned above are not taken into account in most contemporary CMEs, some of these software engineering elements are partially utilized in the development of CMEs, such as the definition of requirements, the use of architectural design models, component-based approaches, and evaluation cases.

In this paper, we present an analysis of CMEs from a software engineering perspective.
We aim to identify which elements of software engineering are used in their development process and to demonstrate how some software engineering techniques may support and improve that process. This analysis is organized according to the general phases followed in the development process of any type of software system: the requirements analysis, design, and evaluation phases. We discuss a series of challenges to be addressed in order to take advantage of software engineering techniques: 1)

definition of guidelines to help decide which emotion theories should be implemented computationally, 2) homogenization of terms about human emotions, their components, phases, and cycles implemented in CMEs, 3) design of CMEs whose components can be reusable, 4) definition of standard criteria for comparative analysis between CMEs, 5) identification of software engineering principles, concepts, and design practices useful in the construction of CMEs, and 6) definition of standard frameworks to validate CMEs.

The papers analyzed in this review were found using the following keywords: computational model of emotion, or affect, or feeling, or sentiment, or cognition, in research databases such as IEEE Xplore, ACM Digital Library, ScienceDirect, and SpringerLink. We included papers reported in the literature that present CMEs designed to be included in the cognitive architecture of artificial agents and that provide details about their theoretical foundations, underlying design architectures, and the emotion generation process implemented. In general, we selected papers that allowed us to analyze and understand the functional requirements, architectural designs, and underlying components, among other software-engineering-related aspects of CMEs. Papers that reported CMEs as a secondary contribution were excluded, as these usually provide no details that contribute to the understanding of the software engineering elements utilized in the development process of contemporary CMEs.

The paper is organized as follows. In Section 2 we provide details about the software engineering approach utilized for the analysis of CMEs. In Sections 3, 4, 5, and 6 we present the results of the analysis of CMEs from a software engineering perspective. We discuss some challenges and research opportunities regarding the development process of CMEs in Section 7 and provide concluding remarks in Section 8.

2. A software engineering approach to analyze CMEs

Developing CMEs involves carrying out a set of activities in order to create a model capable of simulating human emotional behavior. This process involves a large number of aspects, ranging from the theoretical conception of CMEs to their implementation and simulation using artificial intelligence techniques. As in any software project, developing a CME should be carried out following formal methodologies, standards, and techniques that allow discovering, classifying, and organizing the requirements to be met by the design and final implementation of the CME. Hence, this paper presents a review of the software development process of CMEs, organized according to the software engineering phases stated by [11]: requirements analysis, design, implementation, testing, and maintenance.

1. Requirements analysis. This phase starts with the conception of the software. Its main objective is to specify what the CME must do. In software engineering, this phase covers all activities related to the discovery, elicitation, classification, organization, and analysis of requirements. This process usually involves defining one or more (semi-)formal models to completely understand and specify the software product (e.g., a CME) [56]. Section 3 describes the status quo of the requirements analysis techniques employed to develop CMEs, as well as to what extent these techniques and standards are followed.

2. Design. This phase takes place once all the requirements have been identified, negotiated, and analyzed. Its main objective is to specify how the previously identified parts of the software product (e.g., a CME) fit together to create a software architecture. The design of CMEs specifies the components, the operating cycle, the data flow, the computational techniques to be employed, and the particularities of the emotion theory used for the implementation of the CME. Section 4 reviews to what extent these techniques and standards are followed in the context of CMEs.

3. Implementation. Taking the blueprints of the software product (e.g., a CME) into account, the implementation phase deals with the translation of the design into program code. This phase involves selecting tools, programming languages, and coding standards, as well as deciding how programmers organize to implement the software product. Section 5 reviews current implementation practices for the development of CMEs.

4. Testing. Testing either parts of the software or the complete solution has the objective of validating the software product, i.e., demonstrating that the software does what it is expected to do (as stated in the requirements). The testing phase also has the purpose of finding bugs [81]. Section 6 reviews testing practices for validating the development of CMEs.

5. Maintenance. Software maintenance can be defined as the process in which a previously released software product is adapted to new requirements. Software maintenance may also involve correcting bugs and/or improving performance. It should be noted that we have not reviewed maintenance practices of CMEs because, based on a preliminary search, the literature on such practices is scarce. This may be because research papers on CMEs are mainly focused on presenting novel approaches rather than small, incremental changes to existing CMEs.
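As a concrete illustration of the testing phase's criterion (the software does what the requirements state), consider a hypothetical unit test for a latency requirement of the kind discussed in Section 3. `generate_emotion` below is a stub standing in for a CME's interface, not a call from any real library.

```python
import time

def generate_emotion(event: str) -> str:
    """Stub standing in for a CME's emotion-generation component."""
    return 'joy' if event == 'praise' else 'neutral'

def test_response_latency() -> None:
    # Requirement: the CME should produce an emotional response
    # in under 0.5 s (illustrative threshold).
    start = time.perf_counter()
    emotion = generate_emotion('praise')
    elapsed = time.perf_counter() - start
    assert emotion == 'joy'
    assert elapsed < 0.5

test_response_latency()
print('requirement satisfied')
```

Writing requirements in this executable form is one way the testing phase can trace validation results back to the requirements analysis phase.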

3. Requirements analysis

The elicitation and analysis of requirements play an important role in the development process of any computer system, and CMEs are no exception. To conduct requirements elicitation, there is a myriad of techniques, including interviews, document analysis, questionnaires, and surveys, among others. Similarly, as indicated in [1], requirements analysis can be conducted using multiple techniques ranging from affinity diagrams and storyboards to function allocation. However, according to [108], none of these techniques works for every situation, and often a combination of them is used. For instance, in the context of CMEs, document analysis is the technique most widely used for requirements elicitation.

In addition, given that the majority of CME developments (as software products) are still conducted within the context of research projects (as in


[59, 31, 57]), both the client role (from whom requirements are elicited) and the software provider role are played by researchers within the same research group. This may influence the selection of requirements elicitation and analysis techniques.

Requirements can be categorized into functional and non-functional. Functional requirements indicate what a computer system (e.g., a CME) should do, whereas non-functional requirements specify qualities of the computer system, such as maintainability and those related to performance. Within the context of CMEs, an example of a functional requirement is that the CME should take into account the OCEAN model of personality. An example of a non-functional requirement is that the CME of an avatar should display emotional responses in less than a given number of seconds.

The functional requirements of CMEs are mainly focused on two general purposes:

Imitating the process of human emotions. These models extract their functional requirements from emotion theories.

Generating a model of emotions for a domain-specific application. These models obtain their requirements from the application domain in which the CME will be implemented.

As part of the software engineering process, there is a refinement of requirements. This process starts when CMEs are conceived and ends when all the requirements have been discovered, elicited, and analyzed. Initially, researchers define the functional requirements that they wish to integrate into CMEs. At this point, the functional requirements may be general. An example of such high-level functional requirements is as follows: the CME must take personality into consideration in order to generate more realistic emotions. Once a high-level functional requirement is defined, an exhaustive analysis is performed to identify theories that could potentially meet the needs of that requirement. This analysis can be as exhaustive as required.
Within the context of the last example, a researcher should perform an in-depth analysis of the different existing personality theories. As a result of the theoretical analysis, a set of possible theories is identified, from which one is selected with the aim of meeting the requirement. Within the context of the personality example, the researcher may determine that the OCEAN personality model [26] will be incorporated into the CME. Figure 1 illustrates this process.

Figure 1: Requirements refinement process in CMEs.

The following subsections present an analysis of CMEs whose purpose is to imitate the process of human emotions and of those that seek to generate emotions in domain-specific applications. The analysis is performed taking into account the objective under consideration, the general operation, and the requirements. Our analysis of CMEs' requirements aims to throw new light on how the requirements analysis phase is conducted.

3.1. Domain-specific CMEs

Some CMEs (incorporated into affective agents) are conceived within a specific domain. The definition of requirements for a CME in this context is strongly related to a specific emotional behavior that is sought to be implemented in such affective agents. For examples of domain-specific CMEs, see [82, 22, 69, 9, 109]. The rest of this section analyzes the requirements analysis phase of this type of CME.

GAMYGDALA [14, 79] is an emotional appraisal engine for game developers aimed at incorporating emotions into non-player characters. The authors of GAMYGDALA identified three main requirements for the development of its CME: (1) the CME of the non-player characters should be capable

of evaluating events using a psychological foundation; as a consequence, they adopted the emotion model of Ortony, Clore, and Collins (OCC) [73]; (2) the CME should be modular and independent, so that developers using GAMYGDALA do not require extensive knowledge about appraisal theories; (3) the CME should be efficient, because games may have a large number of agents endowed with the CME. A video game that implements GAMYGDALA must assign objectives to non-player characters. Depending on these objectives, non-player characters evaluate events according to the level of fulfillment of their objectives, which results in appropriate emotions for the non-player characters. GAMYGDALA supports 16 of the 24 emotions offered by the OCC model, ranging from internal emotions to social emotions (i.e., emotions towards other non-player characters). GAMYGDALA's emotions are expressed as pleasure-displeasure, arousal-nonarousal, and dominance-submissiveness (PAD) values.

The Conscious-Emotional-Learning Tutoring System (CELTS) [32] is a cognitive tutoring agent inspired by neuroscientific theories. CELTS is capable of automatically building and updating emotional profiles in order to better interact with students. The requirements of CELTS' CME are as follows: (1) The CME must keep track of emotional episodes in CELTS; the emotional memory system must be based on LeDoux's principles [51] regarding the memory system of the amygdala. (2) The CME must learn/adapt from the emotional interaction between the tutor agent and the learners, so the tutor agent implements two types of emotional learning (pure emotional learning and emotionally modulated learning), which are based on the work by Squire and Kandel [100]. (3) The CME must be able to interpret stimuli quickly, so the tutor agent interprets external stimuli unconsciously based on the peripheral concepts of James et al.
[46]. (4) In addition, the CME must perform precise stimulus evaluations, so it uses the centralist concepts proposed by Cannon [20] to make conscious judgments of external stimuli.

Chowanda et al. [23, 22] propose ERiSA, a CME endowed with personality and social skills for non-player characters of a well-known commercial game (namely, The Elder Scrolls V: Skyrim). The objective of this CME is to provide a novel user experience when playing video games. The requirements for the CME are: (1) The CME must have a personality component to generate realistic interactions, so the OCEAN personality model [26] was used. (2) The CME must allow non-player characters to generate emotions, so the classification system of the six basic emotions of Ekman [30] was used. (3) The CME must take time into consideration, so an emotion decay function was included to return non-player characters to their original state after a period of time. (4) The CME must take into account potential social relationships, so a two-dimensional social relationship model based on [22] was used. (5) Finally, a dynamic interaction of the CME with the current state of the game and its rules was required. The proposed CME is incorporated into a framework for social game agents in order to perceive and interpret the emotions of users.

3.2. CMEs that imitate the process of human emotions

The main objective of this type of CME is to closely imitate some aspects of the process of human emotions; as a consequence, emotion theory plays a key role in defining its requirements. Unlike domain-specific CMEs, in this case the requirements are mostly formulated from emotion theories. In particular, the theoretical foundation dictates the functional behavior of the CME. Emotion theories are interpreted and translated into computational characteristics that are subsequently used for the development of the computational model. Emotion theories determine the components that are considered in the computational model, the execution order, the data flow, and other particular characteristics of the emotion theory.

The conception of CMEs that seek to imitate the process of human emotions begins with the formulation of high-level requirements (e.g., modeling mood dynamics) that are subsequently analyzed to identify emotion theories that potentially explain such requirements. This demands a solid level of understanding of the emotion theory that is used, since one is moving from a conceptual model to a computational model, where it is necessary to specify all the mechanisms and processes that are normally left implicit in a theory of

emotion. Rather, researchers are responsible for interpreting the requirements. Since the requirements are obtained only from the theories, researchers face other types of difficulties, such as the lack of guidelines indicating which theory they should use to obtain requirements. Another element to take into account is that CMEs can be based on more than one theory of emotion [41, 36, 106, 57, 7], which implies that the requirements are obtained from different theoretical sources. For example, a CME can adopt appraisal theory to evaluate events through appraisal variables and use the PAD model (explained in the section below) as a mechanism to generate a mood in the agent. Whereas domain-specific CMEs obtain requirements from agents' qualities, which are related to the environment in which they will be implemented, CMEs that imitate the process of human emotions adopt the rules that theories dictate and consider them as requirements. Next, we briefly summarize dimensional theories and appraisal theories, and then analyze the requirements analysis phase of CMEs that imitate the process of human emotions.

3.2.1. Emotion theories

The development of CMEs is commonly based on two types of emotion theories: dimensional theories of emotion and appraisal theories of emotion. Dimensional models characterize all emotions as coordinates within a multidimensional space (usually two or three dimensions, such as valence, arousal, and dominance) [74]. This type of theory postulates that affective states are not independent of each other but systematically related [91, 65, 63, 5]. Appraisal theories are the most utilized in CMEs and argue that emotions are the result of the assessment of perceived stimuli in terms of the individual's objectives, beliefs, values, and needs [49, 35, 94, 98, 87].
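Dimensional models make emotion elicitation a geometric question: the affective state is a point in the multidimensional space, and the elicited label can be taken as the nearest reference emotion. A minimal sketch of this generic mechanism, with illustrative PAD coordinates (not attributed to any specific CME):

```python
# Nearest-emotion lookup in PAD (pleasure, arousal, dominance) space.
# Reference coordinates are illustrative values for three emotions.

PAD_REFS = {
    'joy':   (0.40, 0.20, 0.10),
    'fear':  (-0.64, 0.60, -0.43),
    'anger': (-0.51, 0.59, 0.25),
}

def closest_emotion(p: float, a: float, d: float) -> str:
    """Return the reference emotion nearest to the point (p, a, d)."""
    def dist2(ref):
        return sum((x - y) ** 2 for x, y in zip((p, a, d), ref))
    return min(PAD_REFS, key=lambda name: dist2(PAD_REFS[name]))

print(closest_emotion(-0.6, 0.5, -0.3))  # fear
```

Appraisal-based CMEs, in contrast, would compute the point (or the label directly) from appraisal-variable values rather than from coordinates.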


Figure 2: Requirements elicitation for the development of a CME, an adaptation of [16].

Dimensional theories

Dimensional theories provide a framework for representing emotions from a structural perspective [84]. For example, Russell [91] proposed a two-dimensional model that places various emotional states along the axes of pleasure and arousal. Similarly, the dimensional model by Mehrabian [65] includes three dimensions called pleasure, arousal, and dominance (known as the PAD model). Each dimension corresponds to an orthogonal axis (pleasure-displeasure, arousal-nonarousal, and dominance-submissiveness), so that an event is evaluated along these three dimensions to consequently elicit a certain emotion. Importantly, Mehrabian [65] argues that the PAD space is an ideal model for representing emotions. Dimensional models may include additional dimensions to allow for more complexity and accuracy in the elicitation of emotions [33, 62].

The pleasure dimension determines how pleasant or unpleasant an emotion is. A person with a positive emotion (i.e., a high pleasure value) tends to process


Emotion          Pleasure   Arousal   Dominance   Mood type
Joy               0.40       0.20      0.10       +P+A+D Exuberant
Hope              0.20       0.20     -0.10       +P+A-D Dependent
Relief            0.20      -0.30      0.40       +P-A+D Relaxed
Pride             0.40       0.30      0.30       +P+A+D Exuberant
Gratitude         0.40       0.20     -0.30       +P+A-D Dependent
Love              0.30       0.10      0.20       +P+A+D Exuberant
Distress         -0.40      -0.20     -0.50       -P-A-D Bored
Fear             -0.64       0.60     -0.43       -P+A-D Anxious
Disappointment   -0.30       0.10     -0.40       -P+A-D Anxious
Remorse          -0.30       0.10     -0.60       -P+A-D Anxious
Anger            -0.51       0.59      0.25       -P+A+D Hostile
Hate             -0.60       0.60      0.30       -P+A+D Hostile

Table 1: Mapping of 12 OCC emotions to the PAD space

negative stimuli in an optimistic way when facing challenges. The arousal dimension determines how exciting or calming an emotion is, and it is conceived as a state of feeling. Finally, dominance represents the level of attention or rejection of an emotion and describes the degree to which a person feels free to act in a given situation [63]. Table 1 shows a mapping of 12 emotions defined by the Ortony, Clore, and Collins (OCC) model [72], which represent the set of emotions commonly used in dimensional theories of emotion.

Appraisal theories

Appraisal theory states that an individual evaluates an event in terms of his/her beliefs, desires, and intentions. This assessment generates an emotion that subsequently influences his/her behavior [49, 96]. The assessment is carried out through a set of appraisal variables, such as relevance, desirability, urgency, and unexpectedness, which seek to describe the individual-environment relationship in greater detail. Each appraisal variable takes a value, and the configuration of all the values corresponds to a specific emotion label. Different appraisal theories can use different variables and different numbers of them [96]. Most CMEs based on appraisal theory use at least six appraisal variables.
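The idea that a configuration of appraisal-variable values corresponds to an emotion label can be sketched as a small rule table. The variables, thresholds, and labels below are illustrative only; they are not taken from any particular appraisal theory, which would typically use more variables and finer distinctions.

```python
def appraise(desirability: float, likelihood: float, agency: str) -> str:
    """Toy appraisal rule table.

    desirability in [-1, 1], likelihood in [0, 1] (1.0 = confirmed),
    agency in {'self', 'other', 'event'} — all illustrative.
    """
    if desirability >= 0:
        if likelihood < 1.0:
            return 'hope'          # desirable but unconfirmed prospect
        return 'pride' if agency == 'self' else 'joy'
    if likelihood < 1.0:
        return 'fear'              # undesirable, unconfirmed prospect
    return 'remorse' if agency == 'self' else 'distress'

print(appraise(0.8, 1.0, 'self'))  # pride
```

Even this toy version shows the characteristic structure: the emotion label is fully determined by the configuration of appraisal-variable values, which is what a CME's codification step must make explicit.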

Figure 3: The Pleasure-Arousal-Dominance model [104].
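The octants of the PAD space shown in Figure 3 also underlie the mood-type column of Table 1, which follows mechanically from the signs of the three coordinates. A minimal sketch of this sign-to-octant mapping (the octant names follow the labels commonly used in PAD-based mood models; the helper itself is illustrative):

```python
# Map a PAD coordinate to one of the eight mood octants.
# Octant names as commonly used in PAD-based mood models;
# the function and dict are illustrative, not from any cited CME.

MOOD_OCTANTS = {
    ('+', '+', '+'): 'Exuberant',
    ('+', '+', '-'): 'Dependent',
    ('+', '-', '+'): 'Relaxed',
    ('+', '-', '-'): 'Docile',
    ('-', '-', '-'): 'Bored',
    ('-', '+', '-'): 'Anxious',
    ('-', '+', '+'): 'Hostile',
    ('-', '-', '+'): 'Disdainful',
}

def mood_octant(pleasure: float, arousal: float, dominance: float) -> str:
    signs = tuple('+' if v >= 0 else '-'
                  for v in (pleasure, arousal, dominance))
    return MOOD_OCTANTS[signs]

print(mood_octant(-0.64, 0.60, -0.43))  # Anxious (fear in Table 1)
```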

There are different appraisal theories [49, 73, 2, 34, 88]. Although they are very similar, there are certain differences in terms of how the appraisal process happens. For example, some appraisal theories argue that before an emotion is generated, there is cognitive processing that occurs unconsciously and automatically [96]. However, it has also been argued that there is a conscious attribution of emotion by the individual [66]. Despite these differences, an element that most appraisal theories share is that the appraisal process is influenced by cognition [49] and involves processes such as planning, explanation, perception, and memory [38].

3.3. Requirements phase of CMEs that imitate the human emotion process

WASABI [6] is a CME developed with the purpose of increasing the credibility of an agent in social interactions. It is implemented within a virtual agent called MAX, in which cognitive reasoning abilities are combined to achieve the simulation of primary and secondary emotions. WASABI proposes a series of basic requirements for its development: (1) The CME should generate a set of specific emotions, so five of the six primary emotions of Ekman [29] and a group of secondary emotions based on the prospect-based emotions of the OCC model were adopted. (2) The CME should assign specific values to each generated emotion, so it was decided to use the PAD

model [92] to assign these values. (3) There should be fast-unconscious and slow-conscious appraisals of the stimuli perceived by the agent, based on the low road and high road proposed by [50]. (4) The agent should have a long-term emotional state, achieved through the inclusion of a computational simulation of mood based on [95]. The operation of WASABI can be summarized in two simple procedures. The first procedure begins when the agent perceives an internal or external event: an unconscious appraisal of the event is made by sending an emotional impulse to the emotion dynamics component of the WASABI architecture, which subsequently results in primary emotions. The second procedure takes place when the agent makes a conscious appraisal of the event, in which an emotional impulse is sent to the emotion dynamics component, which, in turn, results in secondary emotions. EBDI [47] is a CME that aims to imitate the practical reasoning of humans by adding the influence of primary and secondary emotions to the decision-making process of a BDI agent architecture. Some of the requirements that were considered for the construction of EBDI were: (1) The computational model should be based on an architecture whose philosophical roots enjoy high acceptance, so the BDI architecture was chosen. This architecture provided a framework for modeling and reasoning; in addition, there is a relatively large set of software systems that employ the concepts of this architecture. (2) It should have a method for modeling emotions in agents, so the first-order and multidimensional logic of [37] was chosen. (3) It should have mathematical support to differentiate one emotion from another, so the PAD scale for emotions was used [64]. (4) Since a set of specific emotions was to be emulated in the agent, the primary and secondary emotions proposed by Damasio were used [27].
EBDI has two functions for updating emotions. The first corresponds to primary emotions, which give a quick affective response to the situation faced by the agent and represent an extremely useful option for decision making when time is limited. The second corresponds to secondary emotions, which appear subsequent to primary emotions. They represent an affective result of greater deliberation and are used to refine decision making if time permits. ALMA [36] is a CME developed with the objective of creating interactive virtual characters to be used as dialogue partners with realistic conversation skills. Since emotions play an important role in improving realism in conversational agents, ALMA raises three requirements related to affect. (1) Emotions that reflect short-term feelings, based on the emotions of the OCC model [73]. (2) A mood that reflects medium-term feelings, based on the PAD mood model proposed by [63]. (3) An agent personality that reflects long-term feelings, based on the five-factor personality model [61]. Under these three affective requirements, the functioning of ALMA begins when an event triggers an emotion. The information about the agent's personality traits is used to control the calculation of emotion intensities using the PAD model. Subsequently, the mood is defined as an average of the agent's emotions, also calculated using the PAD model. In conclusion, CMEs based on dimensional theories of emotion usually implement mechanisms that involve the mood of an agent and are commonly supported by appraisal processes (such as OCC) to trigger changes in it [58].
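The ALMA-style mood computation described above, a mood defined as the average of the agent's emotions in PAD space, can be sketched as follows; the PAD coordinates assigned to each emotion are illustrative placeholders, not ALMA's actual values:

```python
# Sketch of an ALMA-style mood update: the mood is the average of the
# agent's active emotions in PAD (pleasure-arousal-dominance) space.
# The PAD coordinates below are illustrative placeholders.

PAD = {
    "joy":    ( 0.4,  0.2,  0.1),
    "fear":   (-0.6,  0.6, -0.4),
    "relief": ( 0.2, -0.3,  0.1),
}

def mood(active_emotions):
    """Average the PAD vectors of the currently active emotions."""
    if not active_emotions:
        return (0.0, 0.0, 0.0)
    vectors = [PAD[e] for e in active_emotions]
    n = len(vectors)
    return tuple(round(sum(v[i] for v in vectors) / n, 3) for i in range(3))

print(mood(["joy", "fear"]))  # → (-0.1, 0.4, -0.15)
```

In ALMA itself, personality traits additionally bias the intensity calculation; the sketch only shows the averaging step.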

4. Design phase
The design process begins once the system requirements have been defined; its objective is to generate a representation of the software in terms of the architecture, data structures, interfaces, and components necessary to build it [81]. The central idea of the design phase is to generate a plan of an entity that will be built later. This process is iterative: initially, the requirements obtained are transformed into a high-level abstraction of the design; subsequently, as the iterations are executed, the design reaches lower levels of abstraction. Within the development process of CMEs, the design phase involves a series of decisions related to the process of human emotion and other aspects related to the requirements of the user and the system [84]. In this sense, we organize


the analysis of CMEs based on two design aspects that are highly influenced by software engineering. Architectural design of the model: a high-level abstraction that seeks to transform the functional and non-functional requirements into the components and data that the model will use (for example, appraisal variables). Design of the components of the model: a low-level abstraction that seeks to refine the architectural representation in terms of the internal functioning of each component, which subsequently leads to the algorithmic representation of the CME. There are many other design elements in software engineering that can be considered in CMEs. However, we have focused our analysis of the design phase on these two aspects because they are the most commonly adopted by researchers, whether formally or informally. In addition, performing an analysis of CMEs based on these terms has two purposes. First, it allows a clear view of how the design phase happens in the construction of CMEs. Second, it identifies the extent to which the design of CMEs complies with the procedures commonly used in software engineering for the design of conventional software.
4.1. Architectural design in CMEs
The objective of the architectural design is to provide an overview of a model, that is, a perspective of a set of interconnected components. The design process in CMEs begins with defining the general distribution of the components, their function, size, and relationships, as well as the input and output interfaces that allow the flow of data between the components. The architectural design can take into account other cognitive components that interact with the emotion model (if applicable). Each component can have its own architectural design if it is considered complex enough. For example, in a CME, a


stimulus evaluation component contains its own mechanisms that can be interpreted as components. This type of design also facilitates managing the dependencies between components. In the context of CMEs, the architectural design does not involve details about the internal functionality of the components; rather, the components are integrated to form a cohesive whole. In short, this type of design provides a way to structure the overall design of the model and generates an understandable view of the CME. There is a relatively large number of architectural styles widely used in the design of conventional software that can be applied to the design process of a CME. Examples are: data-centric architecture, data-flow architecture, call-and-return architecture, object-oriented architecture, and layered architecture, among others [81]. Another important advantage of designing a CME under the principles of software engineering is that it is possible to adopt techniques to express and facilitate the design; for example, the Unified Modeling Language (UML) allows specifying, visualizing, building, and documenting the artifacts of a software system [90] and is widely used for software modeling. While the software engineering area offers a large number of design techniques that can be transferred to the development of CMEs, some researchers have already incorporated various architectural designs, either explicitly or implicitly, to facilitate the development process of their CMEs. DeepEmotion [41, 42] is a CME that aims to explain the process of emotions through three main layers that can be interpreted as components. The first layer corresponds to the appraisal component, and its function is to generate an interoception based on an internal and external evaluation. The second layer has a memory that is responsible for adjusting the results obtained from the appraisal component to the surrounding environment that the agent faces. The third and last layer is a learning component that incorporates reinforcement learning and sequential learning for the agent. Figure 4 shows the components mentioned above, their function within the model, their size, and the relationship

Figure 4: First (detailed) architectural design of DeepEmotion [41].

that exists between each of them, so it can be considered a representation of an architectural design. However, as mentioned earlier, the design phase is an iterative process, and the architectural design in particular must be a coherent and understandable representation of the model. In the case of DeepEmotion, the authors first designed a relatively complex architecture (Figure 4) with a great level of detail about the behavior of the model. However, this first version of the architecture did not serve as a guide for its implementation. This led to a redesign of the architectural representation of the model (Figure 5) to generate a more comprehensible version that is useful for implementation. FLAME [31] is a CME that seeks to imitate the process of human emotions, emphasizing memory and learning processes as the basis for emotion dynamics. One of the most important features of FLAME is the use of fuzzy logic to interpret emotions by their intensity. The functionality of FLAME starts when the agent perceives an event in the environment through a decision-making component. The information about this event is sent simultaneously to a learning component and an emotional component. The emotional component executes a sequence of processes to generate an emotional behavior in the agent: the evaluation of the event, the appraisal of the event, the filtering of emotions, and


Figure 5: Final architectural design of DeepEmotion [41].

Figure 6: General design of the agent architecture in which FLAME is implemented [31].

the behavior selection. Figure 6 shows an overview of the agent architecture in which FLAME is implemented. This architecture can be interpreted as an architectural design; in addition, it is presented at two levels. The first architecture presents three components, a learning component, an emotional component, and a decision-making component, together with their relationships. The second architectural design (Figure 7), of the emotional component, includes four processes: event evaluation, event appraisal, emotional filtering, and behavior selection. The second architectural design also includes a mechanism of decay of emotions. In both architectures, the component functions and their relationships are defined. There are other CMEs, such as [70, 32, 22, 47, 83, 57, 59], which also use

Figure 7: FLAME emotional component architecture design [31].

architectural design approaches to create an overview of their model. However, it should be noted that these models do not use a rigorous nomenclature in terms of how an architectural design should be done; instead, we believe that the goal of performing this type of design is to create a representation that shows their components, their functions, and their relationships. This serves as a first step towards designing the architecture and facilitates its future implementation.
4.2. Component-level design
In software engineering, a component is an independent system entity that is responsible for providing a series of functions to other components through its interfaces [54]. Developing component-based systems offers a number of benefits related to software reusability: since no knowledge of the source code is required to use a component, it can be accessed as if it were another system. The different mechanisms that a component holds can be offered to other elements of the system through its interface; it is the interface that allows data to be requested from other components in order to execute its internal functions. Thus, a component generally has at least an input and an output interface, also

called the requires interface and the provides interface, respectively. Many CMEs found in the literature separate their different functionalities into components (for example, a component for event evaluation, another component for mood, etc.), so the component-based approach is nothing new in the context of CMEs. This approach is used since a component within a CME commonly provides data to one or more other components. Once we have an overview of the CME's components and their relationships, we can focus on defining the internal functioning of each component. In software engineering, this is known as component-level design and is another phase that we consider can be used to facilitate the design of CMEs. In this context, component-level design could fully describe the internal details of each component within the model by defining the data structures and algorithms that take place within each component. This type of design facilitates the definition of data structures that are accessed directly by one or more components of the CME. In addition, since most intelligent agents are built in object-oriented programming languages, it would provide a clear vision of the classes that are implemented in each component and how they collaborate with each other to achieve the internal mechanisms. The idea of this approach is to design each class of a component to include all the attributes and operations relevant to its implementation. As in architectural design, some researchers use this approach to design CMEs, but do not include evidence on the interfaces of each component or the internal data structures. However, we believe that a deeper approach to software engineering could add important design features (interface definitions and data structures) to the components of CMEs. The Infra (integrative framework) is a framework to build CMEs that has been developed under the component-based approach [83, 21].
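The requires/provides pattern described above can be sketched in an object-oriented language. The abstract interface and the General-Appraisal-like component below are assumptions for illustration; they are not taken from any published CME:

```python
# Sketch of the requires/provides interface pattern for a CME component.
# The interface name and the appraisal calculation are hypothetical.
from abc import ABC, abstractmethod

class ProvidesAppraisal(ABC):
    """Provides interface: offers calculated appraisal variables."""
    @abstractmethod
    def appraisal_variables(self, stimulus: dict) -> dict: ...

class GeneralAppraisal(ProvidesAppraisal):
    """Requires a stimulus (data perceived by the agent);
    provides the calculated appraisal variables."""
    def appraisal_variables(self, stimulus: dict) -> dict:
        # Hypothetical calculation from the perceived stimulus.
        return {
            "relevance": min(1.0, stimulus.get("intensity", 0.0)),
            "desirability": 1.0 if stimulus.get("helps_goal") else 0.0,
        }

component = GeneralAppraisal()
print(component.appraisal_variables({"intensity": 0.7, "helps_goal": True}))
```

Other components (an emotion filter, a behavior-selection component) would depend only on the abstract interface, which is what makes a component reusable without knowledge of its source code.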
The Infra incorporates input and output interfaces for data exchange with other cognitive processes of the agent. This gives the researcher the possibility of building integrative CMEs [85], that is, CMEs designed to communicate with other cognitive components of an agent architecture. This follows the premise that the more cognitive components are considered in the process of emotion generation,

the greater the level of similarity with that of humans. The Infra works through two parallel routes. The first corresponds to a direct route that can be interpreted as the agent's first reaction to a stimulus. The purpose of the direct route is to give the agent the ability to identify potential hazards in the environment and react quickly, assigning them the necessary attention. The second is an indirect route, which makes a second appraisal of the stimulus, incorporating processes of the agent's cognitive architecture to assign more precise emotional values. It allows the agent to properly handle social and emotional situations. The Infra includes a series of components that are briefly described below: 1. General Appraisal: determines the emotional significance of an event perceived by the agent through a set of appraisal variables. The requires interface of this component requests the stimulus, i.e., the data perceived by the agent, while the provides interface offers the calculated appraisal variables. 2. Emotion Filter: re-assesses the values of the appraisal variables considering elements of the agent's cognitive architecture, increasing, decreasing, or maintaining the emotional meaning of the stimulus perceived by the agent. Its input interface requests at least three elements: (1) data of pre-calculated appraisal variables, (2) the current emotional state of the agent, and (3) emotional information from previous stimuli. Its output interface provides a new calculation of the appraisal variables (now taking into consideration the elements of the agent's cognitive architecture). 3. Behavior Organization: selects the agent's emotional behavior based on the assessment obtained from the perceived stimulus. The requires interface of this component includes: (1) The values of the pre-calculated appraisal variables to generate reactive emotional behavior in the agent, corresponding to the direct route mentioned above (it is the General Appraisal component that provides these data). (2) The values of the pre-calculated appraisal variables to generate a more precise emotional behavior in the agent, corresponding to the previously mentioned indirect route


(it is the Emotion Filter component that provides these values). (3) Emotional information from previous stimuli. (4) The agent's current emotional state. The provides interface offers trends in emotional behavior with the idea that other components of the agent's cognitive architecture can use them. 4. Emotion and Mood States: maintains the emotional state of the agent in the long term. Its input interface includes the values of the pre-calculated appraisal variables (this includes the reactive calculation of the General Appraisal component and the cognitive calculation of the Emotion Filter component). The output interface provides information related to the agent's current mood. 5. Internal Memory: is responsible for storing the emotional information of the stimuli through the use of associative learning. Its requires interface obtains the values of the appraisal variables from the rapid calculation (provided by the General Appraisal component) and the values of the appraisal variables from the slow calculation (provided by the Emotion Filter component). Its provides interface offers emotional information related to previous stimuli. EEGS (Ethical Emotion Generation System) [70, 71] is a CME that follows a component-based approach. This model is focused on generating emotional responses for robots, adding an ethical part intended to provide them with the ability to make ethical judgments about real-world situations. Ethical behavior is generated by a compensation module, which compensates for any unethical emotion previously generated by an appraisal module. Sub-modules of ethics, personality, and mood skew the appraisal results by altering them in terms of intensity, with the ethics sub-module having the highest weight. In addition, it is the latter that is responsible for selecting the most ethical emotion among all candidate emotions. The components incorporated by EEGS are the following: 1.
Interaction Module: functions as the interface to interact with the system, providing a connection channel between the environment and the system; all the events perceived in this module activate the appraisal module. The requires interface of this component involves the information of the perceived event and information related to memory, while the provides interface offers ordered information such as the event, the type of event, the cause of the event, and data on who is affected by the event. 2. Appraisal Module: uses the information obtained from the interaction module to make an appraisal of the event. It has the ability to assess one or more events through appraisal variables (expectedness, unpleasantness, goal hindrance, coping potential, immorality, and self-consistency are some of the appraisal variables used). As mentioned earlier, the input interface of this component receives the name, type, cause, and related information about who is affected by the event. The output interface provides the resulting calculation of each of the appraisal variables used. 3. Compensation Module: is responsible for compensating for any unethical results from the appraisal module. It works through three sub-modules: an ethics module, a mood module, and a personality module. The three sub-modules are used to skew the result of the appraisal variables to a different extent. The requires interface obtains the result of the appraisal variables, while the provides interface offers the result of the biased calculation of the appraisal variables. 4. Relationship Module: is responsible for keeping track of the agent's relationships with other agents or humans. This module has a memory module that stores past experiences and a perception module that maintains the agent's perception of other agents or humans, as well as of the surrounding situation. The input interface receives information related to the emotion produced and its intensity. The output interface provides data related to the agent's memory. 5.
Affect Generation Module: performs two main functions through two sub-modules: affect derivation and affect intensity. The affect derivation sub-module is responsible for selecting a winning emotion among a number of candidate emotions based on the result of the calculation of the appraisal variables. The affect intensity sub-module assigns an intensity to each emotion, casting values between 0 and 1, where 1 represents a highly intense emotional state. The requires interface of this component receives the values of the appraisal variables. The provides interface offers the label of the emotion produced by an event and its respective intensity. 6. Expression Module: is responsible for demonstrating the emotion and intensity produced by an event through a system interface; the objective of this module is for a human user to understand what kind of emotions the EEGS system is generating and expressing. The input interface receives the emotion produced by an event, as well as its intensity, while the output interface provides information related to the expression resulting from the emotion produced.
5. Implementation phase
A crucial phase in the construction of a software system is the implementation phase, which aims to generate an executable version of the system [99]. The purpose of this phase is to capture the design of the CME using a programming language. This implies implementing the architectures and mechanisms generated in the design phase. Currently, object-oriented programming languages are the most widely used to develop any type of software system. In particular, the software engineering field provides several elements to guide this coding process. Tools: integrated development environments that offer comprehensive services to facilitate the coding process. They incorporate a source code editor with features such as smart auto-completion and a compiler, among others. Best practices: involve elements such as the documentation of the source code and of the way in which the software system operates.

Coding standards: used to define a homogeneous programming style that allows all participants to understand the code and thus achieve maintainable code. Despite the existence of software engineering principles to assist the implementation phase, there is currently no evidence on the practices used to implement contemporary CMEs. However, some CMEs explicitly report the programming language in which they were developed (e.g., [31, 105, 19, 15, 60, 10]). Importantly, although it could be assumed that developers of CMEs understand the principles of programming and have at least some experience with high-level programming languages, it cannot be assumed that the implementation phase of CMEs is carried out in a formal way. There are few open-source CMEs [4, 36, 44, 97], which nonetheless allow taking a look at the practices used during the implementation phase of these computational models. Derek [97] presents a project that seeks to create a computational model of appraisal of the emotions of a virtual character. This is a CME inspired by the appraisal theory of Roseman [89]. This computational model is executed in a web browser, where a robot waiter brings food to customers while obstacles are added and removed. The waiter's emotions are altered in response to these random events. The computational model works with eight especially relevant emotions that are organized in opposite pairs, whose values are represented by indicators that are updated as events unfold. The emotion with the most significant delta is represented on the robot's face as it crosses the map. Regarding the practices associated with the code of this computational model, documentation on the different functionalities of the agent is included, the objects and variables utilized are named homogeneously, and the implemented code has a logical structure. CakeChat [44] is another open-source computational model, implemented in the Python programming language. It is a dialogue system that generates answers in a text-based conversation. The dialogue is biased by 5 emotional states: happiness, anger, sadness, fear, and a neutral mood. CakeChat is based on a


sequence-to-sequence model trained on dialogue pairs of contexts and responses. This model incorporates software engineering practices at the code level, including the flexibility to condition the model's responses on an arbitrary categorical variable. The code is also well documented and modular, and presents specialized exception handling.
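The idea of conditioning a response on a categorical emotion variable can be illustrated with a toy sketch. This is not CakeChat's actual API or model; in CakeChat the conditioning happens inside a neural sequence-to-sequence model, while the sketch below only shows the conditioning concept with a lookup table:

```python
# Toy illustration of conditioning a dialogue response on a categorical
# emotion variable. Hypothetical responses; NOT CakeChat's API.

RESPONSES = {
    "happiness": "That sounds wonderful!",
    "sadness":   "I'm sorry to hear that.",
    "neutral":   "I see.",
}

def respond(context: str, emotion: str = "neutral") -> str:
    """Select a response conditioned on the requested emotion category."""
    return RESPONSES.get(emotion, RESPONSES["neutral"])

print(respond("How was your day?", emotion="happiness"))
# → That sounds wonderful!
```

The point is architectural: the emotion category is an explicit input parameter of the response-generation function, so the same dialogue context can yield different affectively colored outputs.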

6. Testing phase
Commonly, researchers validate the mechanisms of CMEs in order to ensure the proper functioning of the proposed model with regard to its affective capabilities. In the software engineering field, testing represents an essential activity throughout the development process that ensures the quality of the resulting software system [8]. There are a variety of techniques for testing a software system; however, there are two main approaches to testing a computer system [99]. The first approach aims to demonstrate, by carrying out test cases, that the software meets the previously defined requirements. The second approach aims to find conditions in which the behavior of the software is incorrect or undesirable. Such conditions are commonly a cause of software issues. It is difficult to adapt either technique directly to test CMEs, given that contemporary CMEs seek to evaluate how realistic the dynamics of emotions are in the agent in which they are implemented rather than to test technical functionality. Researchers usually follow the first approach to evaluate the functionalities of CMEs (i.e., test cases), which are called example scenarios or simulations. In software engineering, test cases help verify that the proposed model meets the defined requirements, therefore offering CMEs an option to test emotion dynamics. In particular, this approach may be applied in the context of CMEs by i) designing the test case, ii) defining the input and output data associated with the CME, iii) executing the underlying computer program of the CME, and iv) checking the test case results. This general procedure is commonly used to validate that a CME meets the requirements defined from emotion theories.
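The four-step procedure above can be sketched as a simple automated test. Here `appraise_event` is a hypothetical stand-in for the CME under test, and the expected emotion encodes a requirement derived from emotion theory:

```python
# Sketch of the four-step test-case procedure applied to a CME:
# i) design the case, ii) define input/output data, iii) execute the
# underlying program, iv) check the result. `appraise_event` is a
# hypothetical stand-in for the CME under test.

def appraise_event(event: dict) -> str:
    """Stand-in for the CME's underlying computer program."""
    return "fear" if event["threat"] > 0.5 else "neutral"

def run_test_case(event, expected_emotion):
    # i) the test case is the (event, expected_emotion) pair
    # ii) input and expected output data are defined explicitly
    actual = appraise_event(event)                       # iii) execute
    assert actual == expected_emotion, f"got {actual}"   # iv) check
    return True

# Requirement from emotion theory: a threatening event elicits fear.
print(run_test_case({"threat": 0.9}, "fear"))  # → True
```

In practice the "event" is a scenario or simulation step, and the expected output is the emotion label, intensity, or behavior that the chosen emotion theory predicts.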


PEACTIDM [57] is a CME that seeks to integrate emotion and cognition through appraisal theory, an abstract theory of cognition, and the use of the Soar cognitive architecture. PEACTIDM works through a series of abstract functions responsible for generating behavior: Perceive, Encode, Attend, Comprehend, Tasking, Intend, Decode, and Motor. It is through these functions that appraisal is done incrementally to generate emotions. The first step in carrying out the evaluation was to design the scenario. A Pacman-like domain called Eaters was used, which places an agent endowed with PEACTIDM in a 2-D grid. The agent can move from one cell to another, except where there are walls, which are placed as obstacles. The goal is to make the agent move from an initial location to a specific target location. The agent must deal with the obstacles posed in order to accomplish its goal. The second step was to prepare the test data, placing the agent at a specific point on the grid so that only one sequence of steps could take it to the target location, forcing the agent to evaluate a series of stimuli controlled by the obstacles in its way. The third step was to run the simulation. The agent moved from the initial location to the final location, which yielded a series of data as a result of the test. Finally, the last step was to compare the data obtained as a result of the test with the requirements defined for this CME and corroborate compliance with those requirements. EMotion and Adaptation (EMA) [59] is a CME based on appraisal theory that aims to explain the dynamics of emotions. Its functionality is based on the agent's perception of the individual-environment relationship using a structure called causal interpretation, which is responsible for giving an explicit representation of the environment, as well as of the agent's beliefs, desires, and intentions.
The causal interpretation is constituted by perceptual and inferential cognitive processes and translates each event into an appraisal frame (i.e., a set of appraisal variables). The configuration of values of the appraisal variables triggers an emotion label and its associated intensity level. The affective state of the agent is another important characteristic in EMA. It is calculated by adjusting all the active appraisal frames to the mood; the one with the highest intensity determines the response and mood of the agent. To evaluate the capabilities of EMA, a scenario was first determined in which propositions (environmental states) and the actions that can be executed by both the agent and the rest of the participants are defined; this corresponds to the first step of the sequence outlined above. The sequence of stimuli that the agent would interpret was then defined. Once the test was executed, the resulting appraisal frames were obtained for each event with their respective emotional labels, which corresponds to the third step of the sequence outlined above. Finally, the resulting appraisal frames, the emotional labels, and the mood experienced by the agent were compared with the requirements previously defined for EMA, which corroborates the capabilities of the model. In summary, most CMEs validate their capabilities by following the approach described above [70, 32, 7, 47, 36, 41, 31, 14]. Importantly, although the field of software engineering provides formal techniques for testing software systems, most of these techniques are still to be applied in the context of CMEs. As shown above, test cases are a software engineering technique that can be applied in the context of CMEs.
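EMA's mood-adjusted selection among active appraisal frames, described above, can be sketched as follows. The adjustment rule (a fixed mood-congruence bonus) is an illustrative assumption, not EMA's actual formula:

```python
# Sketch of EMA-style response selection: each active appraisal frame's
# intensity is adjusted toward the current mood, and the most intense
# adjusted frame determines the response. The mood-congruence bonus
# below is a hypothetical adjustment rule.

def select_frame(frames, mood_label, bonus=0.2):
    """frames: list of (emotion_label, intensity) pairs."""
    def adjusted(frame):
        label, intensity = frame
        return intensity + (bonus if label == mood_label else 0.0)
    return max(frames, key=adjusted)

frames = [("hope", 0.5), ("fear", 0.4)]
print(select_frame(frames, mood_label="fear"))  # → ('fear', 0.4)
```

Note how the mood breaks the tie in favor of mood-congruent frames: the fear frame wins despite its lower raw intensity.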

7. Discussion
There are several CMEs whose development process seems to adopt various software engineering artifacts. In this section, we discuss the challenges and research opportunities identified regarding the software engineering elements utilized in the development process of CMEs. Regarding the definition of requirements, we found that most of the CMEs analyzed define a set of initial requirements with some degree of detail [83, 70, 31, 59]. Nevertheless, some others just mention the final requirements (at a high level of description) that guided the development of a given CME [22, 32, 14]. Regarding the architectural designs, we found that several CMEs explain the functionality of their underlying implemented model using such a design element [7, 41, 83], whereas some CMEs describe their functionality just textually, without making use of this type of


representative models (i.e., architectural design diagrams) [14, 36, 32]. Nevertheless, it was noted that the development process of CMEs is not guided by a sequence of well-defined, precisely established steps, nor are methods or techniques applied systematically. In this sense, we may conclude that there is a very low degree of formality in the development process of CMEs. We identified a series of challenges to be addressed in order to take advantage of software engineering principles: 1) definition of guidelines to help decide which emotion theories should be implemented computationally; 2) homogenization of terms about human emotions, their components, phases, and cycles implemented in CMEs; 3) design of CMEs whose components can be reusable; 4) definition of standard criteria for comparative analysis between CMEs; 5) identification of software engineering principles, concepts, and design practices useful in the construction of CMEs; and 6) definition of standard frameworks to validate CMEs. The challenges of defining guidelines and homogenizing terms, despite being different aspects, pose similar situations. The choice of an emotion theory is a key decision associated with the requirements phase. An emotion theory provides clues to the researcher about how emotion mechanisms should be implemented within a CME. It provides explanations that range from simple concepts, such as what an emotion is, to more complex ones that must be interpreted by researchers, including the components of the emotion model and their respective functioning, as well as the operating cycle that the system must execute to produce credible emotional behavior. The difficulty of deciding on the emotion theory underlying the design of a CME is mainly a consequence of i) the origin of the theories (from different disciplines such as neuroscience and psychology), ii) the considerable number of emotion theories found

in the literature (e.g., appraisal theories, dimensional theories, and network theories of emotion), and iii) the diversity of purposes that drive the development of a particular CME. This also makes it difficult to achieve a set of homogeneous emotion terms among theories and among CMEs, which in turn leads to a lack of guidelines to determine which theory of emotion should be used to ensure the successful development of a CME. The definition of guidelines and the homogenization of terms for emotions, components, phases, and the operating cycle in emotion theory may facilitate the selection of the theoretical foundations that best meet the requirements of a CME. This would also reduce the effort devoted to analyzing emotion theories, allowing researchers to focus on those that are potentially suitable to meet the requirements sought for a given CME.

Regarding reusability in CMEs, it is important to note that, given their nature, CMEs share a high level of similarity since they are all computational representations of emotions. In this sense, it seems convenient for CMEs to be designed to be reusable. However, as shown in previous sections, the development of this type of model is not oriented toward achieving a reusable software system. We believe this is mainly because the available emotion theories differ in how they explain the process of emotions and may sometimes be contradictory. This has led researchers to focus on developing mechanisms based on certain emotion theories that, from a particular perspective, better represent the process of human emotions. However, these mechanisms end up being difficult to reuse by other researchers who based the design of a CME on different theoretical approaches. Importantly, reusability in CMEs may be achieved by following software engineering principles, as in conventional software systems.
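One way such principles could promote reusability is by having the CME core depend on an abstract component interface rather than on a theory-specific implementation. The following sketch is purely illustrative; the interface, class names, and the valence-based appraisal rule are our assumptions, not taken from any reported CME.

```python
from abc import ABC, abstractmethod


class AppraisalComponent(ABC):
    """Reusable appraisal interface: any theory-specific implementation
    can be plugged into a CME that depends only on this contract."""

    @abstractmethod
    def appraise(self, stimulus: dict) -> dict:
        """Map a perceived stimulus to emotion intensities in [0, 1]."""


class ValenceAppraisal(AppraisalComponent):
    """Toy dimensional-style implementation based on stimulus valence."""

    def appraise(self, stimulus: dict) -> dict:
        valence = stimulus.get("valence", 0.0)
        return {"joy": max(valence, 0.0), "distress": max(-valence, 0.0)}


def run_cme(component: AppraisalComponent, stimulus: dict) -> dict:
    # The CME core never depends on a concrete theory, only on the interface,
    # so swapping in an appraisal-theory or network-theory component
    # requires no change here.
    return component.appraise(stimulus)


print(run_cme(ValenceAppraisal(), {"valence": -0.4}))  # {'joy': 0.0, 'distress': 0.4}
```

A researcher basing a CME on a different theoretical approach would only need to provide another subclass of the interface, leaving the rest of the model untouched.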
Therefore, as software engineering best practices are migrated into the domain of CMEs, we will be able to develop CMEs that incorporate quality attributes such as reusability, scalability, modularity, and interoperability.

The complexity of achieving standard criteria for comparative analysis between CMEs also has to do with their theoretical foundations. For instance, it is common for researchers to incorporate more than one emotion theory to make up the operating cycle of a CME, which leads to CMEs that inherit the variety of terms and concepts in emotion theory. Furthermore, this leads to CMEs that model the same emotion mechanism using different underlying principles. For example, even researchers who use similar approaches, such as appraisal theory for the generation of emotions, may find it difficult to establish comparative criteria since they could use different appraisal variables depending on which variant of the theory is utilized. Nevertheless, by following a formal development process, CMEs could be compared using well-established software engineering metrics and quality attributes that developers already use to compare different software systems. Moreover, metrics related to realism could be defined for CMEs developed for specific application domains. In the case of a CME developed with the intention of imitating the process of human emotions, metrics related to the number of emotions that can be generated or the intensity of each generated emotion could be defined.

There are significant differences between requirements analysis in CMEs and in conventional software systems. For instance, the requirements of a software system are commonly provided by people who may represent the final users, whereas in the case of CMEs, the requirements are formulated by the researcher from emotion theories and specifications of the application domain. As shown above, the requirements of contemporary CMEs are described at a high level of abstraction. In fact, the requirement descriptions presented seem to correspond to a final version of the requirements analysis. This makes it difficult to understand how the evolution from initial requirements to their final version occurs, as well as its implications for the subsequent phases of design and computational implementation.
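The realism-related metrics suggested above could be made concrete, for instance, as a simple emotion-coverage measure. The sketch below is illustrative only; the reference emotion set and the two hypothetical CMEs are assumptions made for the example.

```python
def emotion_coverage(cme_emotions, reference_emotions):
    """Fraction of a reference emotion set that a CME can generate.

    A simple, theory-neutral metric of the kind discussed above;
    richer metrics could also weigh emotion intensities.
    """
    reference = set(reference_emotions)
    return len(set(cme_emotions) & reference) / len(reference)


# Illustrative reference set (an assumption, not an established standard).
REFERENCE = {"joy", "distress", "hope", "fear", "anger", "pride"}

cme_a = {"joy", "distress", "hope", "fear"}  # e.g., an OCC-subset model
cme_b = {"joy", "anger", "fear"}             # e.g., a basic-emotions model

print(emotion_coverage(cme_a, REFERENCE))  # 4/6 ≈ 0.67
print(emotion_coverage(cme_b, REFERENCE))  # 3/6 = 0.5
```

Such a metric would allow two CMEs built on different theories to be compared against the same yardstick, which is precisely what the heterogeneity of appraisal variables currently prevents.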
As an example, it would be useful to identify the patterns followed in the requirements analysis phase when it starts from a requirement such as the need for mechanisms to maintain a mood state, through the selection of the corresponding theory, and how such a theory helps to meet the basic requirement. Importantly, the analysis of CMEs reveals some patterns about the requirements, which are mostly associated with the

following elements:

- Specify emotions: the emotions to be implemented in the CME.
- Agent architecture: defines whether the CME is designed as part of a specific agent architecture.
- Cognitive functions: involves the integration of emotions with other cognitive elements of an agent (e.g., decision-making, learning, and perception).
- Emotional behavior: defines the specific emotional behavior that the agent seeks to express, such as emotional facial expressions and postures.
- Perceived stimuli: involves the type of stimulus that is expected to be perceived by an agent and that should be assigned an emotional significance by the CME.
- Application domain: specific requirements that come from the particular application domain in which the CME will be implemented and evaluated.

A design practice followed in the development of CMEs is to divide the functionality of the model into several components. However, this practice is still far from taking advantage of software engineering approaches such as component-based design. Similarly, design approaches based on modeling the software system from different perspectives are available. For example, the external perspective of the system models its interaction with the elements in the environment and, therefore, the dynamics of the system in relation to certain stimuli. As shown above, two of the software engineering design elements used in the development of CMEs are architectural design and the component-based approach. However, from the information about CMEs available in their corresponding papers, it is difficult to assert that CMEs actually follow these two paradigms closely as part of a formal design procedure. Nevertheless, by following a formal procedure based on well-defined software engineering elements, CMEs could obtain the benefits of conventional software systems:

1. Architectures created based on design patterns that have already been recognized, tested, and widely used.
2. Modular CMEs whose different functionalities are logically divided into components or subsystems, which would also promote the reusability of CMEs.
3. Definition of a representation of the data, architecture, interfaces, and components that will eventually be incorporated into the CME.
4. Definition of the different data structures for the classes that will be implemented in the coding of the model.
5. Definition of components with independent functional characteristics.
6. Definition of interfaces that facilitate communication between the different internal components of the CME as well as with components external to the CME (i.e., components of the agent's cognitive architecture).
7. Representations or models of requirements derived from the theories of emotion and the application domain in which the CME will be implemented.
8. Use of modeling notations to view, specify, build, and document a CME.

A thorough analysis of the implementation phase of CMEs is still difficult to carry out. The source code of CMEs reported in the literature is rarely made available, which is certainly associated with issues such as the reusability of CMEs. The papers that report CMEs provide little or no information regarding the implementation process. Certainly, some models present pseudocode to explain internal functionalities of the CME, but these usually provide insufficient detail to replicate the functionalities at the code level.
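As an illustration, the six recurring requirement elements identified above could be captured in a structured, machine-readable form at the start of a formal development process. The field names and example values below are our assumptions, not a proposed standard.

```python
from dataclasses import dataclass


@dataclass
class CMERequirements:
    """Structured representation of the six recurring requirement
    elements identified in the analyzed CMEs (names are illustrative)."""

    emotions: list             # emotions to be implemented in the CME
    agent_architecture: str    # target agent architecture, if any
    cognitive_functions: list  # cognitive processes emotions interact with
    emotional_behavior: list   # observable behaviors the agent should express
    perceived_stimuli: list    # stimulus types to be assigned emotional significance
    application_domain: str    # domain in which the CME will be evaluated


# A hypothetical requirements specification for a pedagogical agent.
spec = CMERequirements(
    emotions=["joy", "fear"],
    agent_architecture="BDI",
    cognitive_functions=["decision-making"],
    emotional_behavior=["facial expressions"],
    perceived_stimuli=["textual events"],
    application_domain="pedagogical agents",
)
print(spec.application_domain)  # pedagogical agents
```

An explicit artifact of this kind would make the evolution from initial to final requirements traceable, addressing the difficulty noted above.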
In this context, it seems that the implementation phase of CMEs still poses a great opportunity for improvement, and we believe that the integration of software engineering principles and best practices could help to formalize this crucial phase in the development process of software systems.

Regarding the definition of standard frameworks to validate CMEs, in the field of software engineering there is a variety of techniques available for testing functional and non-functional aspects of software systems (e.g., black-box, white-box, and unit testing). In the case of CMEs, beyond testing the technical functionality, it is pertinent to test elements related to the dynamics of emotions within the model. As shown above, there are currently no standard frameworks to guide the validation of CMEs. Usually, CMEs are validated in very specific environments under very specific conditions. This type of testing of CMEs seems to be related to the test cases used in software engineering; however, we found no evidence that this software engineering element is actually used formally in CMEs.

Adopting a formal methodology based on the principles of software engineering for the development of CMEs implies an important change in the way CMEs are perceived. In particular, in the validation phase it would no longer be enough to create a test case to demonstrate the functionalities of the CME (as is currently done), since there would be additional elements related to the requirements, the design, or other artifacts that also need to be tested. However, this would add formality not only to the development process of CMEs, but also to the way they are evaluated.
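To illustrate, a test case in the software engineering sense could check the dynamics of a CME's emotion generation in a black-box fashion. The appraisal function and its thresholds below are toy assumptions, not taken from any reported CME.

```python
import unittest


def appraise_valence(valence: float) -> str:
    """Toy stand-in for a CME's emotion-generation mechanism."""
    if valence > 0.2:
        return "joy"
    if valence < -0.2:
        return "distress"
    return "neutral"


class TestEmotionDynamics(unittest.TestCase):
    """Black-box test cases of the kind a standard validation
    framework for CMEs could prescribe."""

    def test_positive_stimulus_yields_positive_emotion(self):
        self.assertEqual(appraise_valence(0.8), "joy")

    def test_negative_stimulus_yields_negative_emotion(self):
        self.assertEqual(appraise_valence(-0.8), "distress")

    def test_weak_stimulus_yields_no_emotion(self):
        self.assertEqual(appraise_valence(0.1), "neutral")


suite = unittest.TestLoader().loadTestsFromTestCase(TestEmotionDynamics)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

A standard framework would go further, prescribing test cases for requirements and design artifacts as well, but even this minimal form makes the expected emotional dynamics explicit and repeatable.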

8. Conclusions

Although the modeling of emotions has been extensively studied in the last ten years under an artificial intelligence umbrella made up of multiple disciplines such as affective computing, psychology, human-computer interaction, and neuroscience, it is still considered a new research area that is taking its first steps. Particularly in CMEs, there are still many unresolved questions and challenges regarding their development process. While the CMEs currently reported in the literature provide a good starting point, developing a complex CME that aims to solve the problems raised in the present paper is still complicated. However, software engineering is a field that has proposed a variety of formal artifacts to assist the development of software systems, which can be carried over to the context of CMEs in order to help address these problems.

In this paper, we have analyzed CMEs from a software engineering perspective. We focused this review on the development life cycle in order to identify to what extent the elements of the software engineering field are taken into account to build this type of computational model. We have also identified how some software engineering techniques are adopted informally by researchers to develop CMEs. We believe that the proposal of a formal software engineering methodology appropriate for the development of CMEs could improve the procedures used to build this type of model. The problems raised in this paper could also be approached with such a methodology, based on the widely tested principles of software engineering as well as on the particularities inherent to the modeling of human functions (i.e., human emotions). Adopting a formal methodology for the development of CMEs could change the way in which research related to CMEs is reported, making it more formal with respect to the development process used and therefore providing ways to take advantage of the results in this domain by reusing already developed components in diverse application domains.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

J. O. Gutierrez-Garcia gratefully acknowledges the financial support from the Asociación Mexicana de Cultura, A.C. This work was supported by PROFEXCE 2020.

References

[1] Ali, N., & Lai, R. (2017). A method of requirements elicitation and analysis for global software development. Journal of Software: Evolution and Process, 29, e1830.

[2] Arnold, M. B. (1960). Emotion and personality. Columbia University Press.
[3] Averill, J. R. (1980). A constructivist view of emotion. In Theories of emotion (pp. 305–339). Elsevier.
[4] Azad, S., & Martens, C. (2019). Lyra: Simulating believable opinionated virtual characters. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (Vol. 15, pp. 108–115).
[5] Barrett, L. F. (2006). Are emotions natural kinds? Perspectives on Psychological Science, 1, 28–58.
[6] Becker-Asano, C. (2008). WASABI: Affect simulation for agents with believable interactivity (Vol. 319). IOS Press.
[7] Becker-Asano, C., & Wachsmuth, I. (2010). Affective computing with primary and secondary emotions in a virtual human. Autonomous Agents and Multi-Agent Systems, 20, 32.
[8] Bertolino, A. (2007). Software testing research: Achievements, challenges, dreams. In 2007 Future of Software Engineering (pp. 85–103). IEEE Computer Society.
[9] Bian, Y., Yang, C., Guan, D., Xiao, S., Gao, F., Shen, C., & Meng, X. (2016). Effects of pedagogical agent's personality and emotional feedback strategy on Chinese students' learning experiences and performance: A study based on virtual tai chi training studio. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 433–444). ACM.
[10] Bouazza, H., & Bendella, F. (2017). Adaptation of a model of emotion regulation to modulate the negative emotions based on persistency. Multiagent and Grid Systems, 13, 19–30.


[11] Braude, E. J., & Bernstein, M. E. (2016). Software engineering: Modern approaches. Waveland Press.
[12] Broekens, J., Bosse, T., & Marsella, S. C. (2013). Challenges in computational modeling of affective processes. IEEE Transactions on Affective Computing, 4, 242–245.
[13] Broekens, J., Degroot, D., & Kosters, W. A. (2008). Formal models of appraisal: Theory, specification, and computational model. Cognitive Systems Research, 9, 173–197.
[14] Broekens, J., Hudlicka, E., & Bidarra, R. (2016). Emotional appraisal engines for games. In Emotion in Games (pp. 215–232). Springer.
[15] Broekens, J., Jacobs, E., & Jonker, C. M. (2015). A reinforcement learning model of joy, distress, hope and fear. Connection Science, 27, 215–233.
[16] Bruegge, B., & Dutoit, A. H. (2009). Object-oriented software engineering: Using UML, patterns, and Java (3rd ed.). Prentice Hall.
[17] Buck, R. (1985). Prime theory: An integrated view of motivation and emotion. Psychological Review, 92, 389.
[18] Burkhardt, F., Pelachaud, C., Schuller, B. W., & Zovato, E. (2017). EmotionML. In Multimodal Interaction with W3C Standards (pp. 65–80). Springer.
[19] Cami, A., Lisetti, C., & Sierhuis, M. (2004). Towards the simulation of a multi-level model of human emotions. In Proceedings of the 2004 AAAI Spring Symposium. AAAI Press, Menlo Park, CA.
[20] Cannon, W. B. (1927). The James-Lange theory of emotions: A critical examination and an alternative theory. The American Journal of Psychology, 39, 106–124.


[21] Castellanos, S., Rodríguez, L.-F., Castro, L. A., & Gutierrez-Garcia, J. O. (2018). A computational model of emotion assessment influenced by cognition in autonomous agents. Biologically Inspired Cognitive Architectures, 25, 26–36.
[22] Chowanda, A., Blanchfield, P., Flintham, M., & Valstar, M. (2014). ERiSA: Building emotionally realistic social game-agents companions. In International Conference on Intelligent Virtual Agents (pp. 134–143). Springer.
[23] Chowanda, A., Blanchfield, P., Flintham, M., & Valstar, M. (2016). Computational models of emotion, personality, and social relationships for interactions in games. In The 2016 International Conference on Autonomous Agents & Multiagent Systems (pp. 1343–1344). International Foundation for Autonomous Agents and Multiagent Systems.
[24] Clore, G. L., & Ortony, A. (2008). Appraisal theories: How cognition shapes affect into emotion. Guilford Press.
[25] Clore, G. L., & Palmer, J. (2009). Affective guidance of intelligent agents: How emotion controls cognition. Cognitive Systems Research, 10, 21–30.
[26] Costa, P. T., & McCrae, R. R. (1996). Toward a new generation of personality theories: Theoretical contexts for the five-factor model. In The five-factor model of personality: Theoretical perspectives (pp. 51–87). Guilford Press.
[27] Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Putnam.
[28] Darwin, C. (1969). The expression of the emotions in man and animals (Original work published 1872, London: Murray). Culture et civilisation.
[29] Ekman, P. (1992). An argument for basic emotions. Cognition & Emotion, 6, 169–200.
[30] Ekman, P. (2004). Emotions revealed. BMJ, 328, 0405184.


[31] El-Nasr, M. S., Yen, J., & Ioerger, T. R. (2000). FLAME: Fuzzy logic adaptive model of emotions. Autonomous Agents and Multi-Agent Systems, 3, 219–257.
[32] Faghihi, U., Fournier-Viger, P., & Nkambou, R. (2013). CELTS: A cognitive tutoring agent with human-like learning capabilities and emotions. In Intelligent and Adaptive Educational-Learning Systems (pp. 339–365). Springer.
[33] Fontaine, J. R., Scherer, K. R., Roesch, E. B., & Ellsworth, P. C. (2007). The world of emotions is not two-dimensional. Psychological Science, 18, 1050–1057.
[34] Frijda, N. H. (1986). The emotions. Cambridge University Press.
[35] Frijda, N. H. (1988). The laws of emotion. American Psychologist, 43, 349.
[36] Gebhard, P. (2005). ALMA: A layered model of affect. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 29–36). ACM.
[37] Gershenson, C. (1999). Modelling emotions with multidimensional logic. In 18th International Conference of the North American Fuzzy Information Processing Society-NAFIPS (Cat. No. 99TH8397) (pp. 42–46). IEEE.
[38] Gratch, J., & Marsella, S. (2005). Evaluating a computational model of emotion. Autonomous Agents and Multi-Agent Systems, 11, 23–43.
[39] Gros, C. (2010). Cognition and emotion: Perspectives of a closing gap. Cognitive Computation, 2, 78–85.
[40] Harré, R. et al. (1986). The social construction of emotions (Vol. 42). Blackwell, Oxford.


[41] Hieida, C., Horii, T., & Nagai, T. (2018). Deep emotion: A computational model of emotion using deep neural networks. arXiv preprint arXiv:1808.08447.
[42] Hieida, C., & Nagai, T. (2017). A model of emotion for empathic communication. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (pp. 133–134). ACM.
[43] Hudlicka, E. (2011). Guidelines for designing computational models of emotions. International Journal of Synthetic Emotions (IJSE), 2, 26–79.
[44] Ivanov, N. (2019). CakeChat: Emotional generative dialog system. URL: https://github.com/lukalabs/cakechat#quick-start.
[45] Ivanović, M., Budimac, Z., Radovanović, M., Kurbalija, V., Dai, W., Bădică, C., Colhon, M., Ninković, S., & Mitrović, D. (2015). Emotional agents: State of the art and applications. Computer Science and Information Systems, 12, 1121–1148.
[46] James, W. (1983). What is an emotion? [1884]. Collected Essays and Reviews (pp. 244–275).
[47] Jiang, H., Vidal, J. M., & Huhns, M. N. (2007). EBDI: An architecture for emotional agents. In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (p. 11). ACM.
[48] Kolakowska, A., Landowska, A., Szwoch, M., Szwoch, W., & Wróbel, M. R. (2013). Emotion recognition and its application in software engineering. In 2013 6th International Conference on Human System Interactions (HSI) (pp. 532–539). IEEE.
[49] Lazarus, R. S., & Lazarus, R. S. (1991). Emotion and adaptation. Oxford University Press on Demand.
[50] LeDoux, J. (1996). The emotional brain. New York: Touchstone.


[51] LeDoux, J. E. (2000). Emotion circuits in the brain. Annual Review of Neuroscience, 23, 155–184.
[52] Lee, L. C., Nwana, H. S., Ndumu, D. T., & De Wilde, P. (1998). The stability, scalability and performance of multi-agent systems. BT Technology Journal, 16, 94–103.
[53] Lin, J., Spraragen, M., & Zyda, M. (2012). Computational models of emotion and cognition. In Advances in Cognitive Systems. Citeseer.
[54] Liu, C., van Dongen, B., Assy, N., & van der Aalst, W. M. (2018). Component interface identification and behavioral model discovery from software execution data. In Proceedings of the 26th Conference on Program Comprehension (pp. 97–107). ACM.
[55] Loewenstein, G., & Lerner, J. S. (2003). The role of affect in decision making. Handbook of Affective Science, 619, 3.
[56] Mall, R. (2018). Fundamentals of software engineering. PHI Learning Pvt. Ltd.
[57] Marinier III, R. P., Laird, J. E., & Lewis, R. L. (2009). A computational unification of cognitive behavior and emotion. Cognitive Systems Research, 10, 48–69.
[58] Marsella, S., Gratch, J., Petta, P. et al. (2010). Computational models of emotion. A Blueprint for Affective Computing: A Sourcebook and Manual, 11, 21–46.
[59] Marsella, S. C., & Gratch, J. (2009). EMA: A process model of appraisal dynamics. Cognitive Systems Research, 10, 70–90.
[60] Martínez-Miranda, J., & Alvarado, M. (2017). Modelling personality-based individual differences in the use of emotion regulation strategies. In Canadian Conference on Artificial Intelligence (pp. 361–372). Springer.


[61] McCrae, R. R., & John, O. P. (1992). An introduction to the five-factor model and its applications. Journal of Personality, 60, 175–215.
[62] McKeown, G., Valstar, M. F., Cowie, R., & Pantic, M. (2010). The SEMAINE corpus of emotionally coloured character interactions. In 2010 IEEE International Conference on Multimedia and Expo (pp. 1079–1084). IEEE.
[63] Mehrabian, A. (1996). Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament. Current Psychology, 14, 261–292.
[64] Mehrabian, A. (1998). Correlations of the PAD emotional scales with self-reported satisfaction in marriage and work. Genetic, Social, and General Psychology Monographs, 124, 311.
[65] Mehrabian, A., & Russell, J. A. (1974). An approach to environmental psychology. The MIT Press.
[66] Moors, A. (2009). Theories of emotion causation: A review. Cognition and Emotion, 23, 625–662.
[67] Moors, A., Ellsworth, P. C., Scherer, K. R., & Frijda, N. H. (2013). Appraisal theories of emotion: State of the art and future development. Emotion Review, 5, 119–124.
[68] Nábrády, M. (2006). Az érzelmektől a pozitív pszichológiáig [From emotions to positive psychology]. Pszichológiai eszközök az ember megismeréséhez (pp. 1–123).
[69] Neto, A. B. F., Pelachaud, C., & Musse, S. R. (2017). Giving emotional contagion ability to virtual agents in crowds. In International Conference on Intelligent Virtual Agents (pp. 63–72). Springer.
[70] Ojha, S., & Williams, M.-A. (2016). Ethically-guided emotional responses for social robots: Should I be angry? In International Conference on Social Robotics (pp. 233–242). Springer.

[71] Ojha, S., Williams, M.-A., & Johnston, B. (2018). The essence of ethical reasoning in robot-emotion processing. International Journal of Social Robotics, 10, 211–223.
[72] Ortony, A. (2002). On making believable emotional agents believable. In Trappl et al. (Eds.) (2002) (pp. 189–211).
[73] Ortony, A., Clore, G. L., & Collins, A. (1990). The cognitive structure of emotions. Cambridge University Press.
[74] Osgood, C. E. (1966). Dimensionality of the semantic space for communication via facial expressions. Scandinavian Journal of Psychology, 7, 1–30.
[75] Péter, B. (2002). On emotions: A developmental social constructionist account.
[76] Phelps, E. A. (2006). Emotion and cognition: Insights from studies of the human amygdala. Annual Review of Psychology, 57, 27–53.
[77] Planalp, S. (1996). Communicating emotion in everyday life: Cues, channels, and processes. In Handbook of Communication and Emotion (pp. 29–48). Elsevier.
[78] Plaut, D. C. (2000). Methodologies for the computer modeling of human cognitive processes. Handbook of Neuropsychology, 1.
[79] Popescu, A., Broekens, J., & Van Someren, M. (2013). GAMYGDALA: An emotion engine for games. IEEE Transactions on Affective Computing, 5, 32–44.
[80] Poria, S., Cambria, E., Bajpai, R., & Hussain, A. (2017). A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion, 37, 98–125.
[81] Pressman, R. S. (2005). Software engineering: A practitioner's approach. Palgrave Macmillan.

[82] Randhavane, T., Bera, A., Kapsaskis, K., Sheth, R., Gray, K., & Manocha, D. (2019). EVA: Generating emotional behavior of virtual agents using expressive features of gait and gaze. In ACM Symposium on Applied Perception 2019 (p. 6). ACM.
[83] Rodríguez, L.-F., Gutierrez-Garcia, J. O., & Ramos, F. (2016). Modeling the interaction of emotion and cognition in autonomous agents. Biologically Inspired Cognitive Architectures, 17, 57–70.
[84] Rodríguez, L.-F., & Ramos, F. (2014). Development of computational models of emotions for autonomous agents: A review. Cognitive Computation, 6, 351–375.
[85] Rodríguez, L.-F., & Ramos, F. (2015). Computational models of emotions for autonomous agents: Major challenges. Artificial Intelligence Review, 43, 437–465.
[86] Rodríguez, L.-F., Ramos, F., & García, G. (2011). Computational modeling of brain processes for agent architectures: Issues and implications. In International Conference on Brain Informatics (pp. 197–208). Springer.
[87] Roseman, I., & Evdokas, A. (2004). Appraisals cause experienced emotions: Experimental evidence. Cognition and Emotion, 18, 1–28.
[88] Roseman, I. J. (1996). Appraisal determinants of emotions: Constructing a more accurate and comprehensive theory. Cognition & Emotion, 10, 241–278.
[89] Roseman, I. J., Spindel, M. S., & Jose, P. E. (1990). Appraisals of emotion-eliciting events: Testing a theory of discrete emotions. Journal of Personality and Social Psychology, 59, 899.
[90] Rumbaugh, J., Jacobson, I., & Booch, G. (2004). The unified modeling language reference manual. Pearson Higher Education.


[91] Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161.
[92] Russell, J. A., & Mehrabian, A. (1977). Evidence for a three-factor theory of emotions. Journal of Research in Personality, 11, 273–294.
[93] Scherer, K. R. (2003). Vocal communication of emotion: A review of research paradigms. Speech Communication, 40, 227–256.
[94] Scherer, K. R. (2004). Feelings integrate the central representation of appraisal-driven response organization in emotion. In Feelings and Emotions: The Amsterdam Symposium (pp. 136–157).
[95] Scherer, K. R. (2005). Unconscious processes in emotion: The bulk of the iceberg. Guilford Press.
[96] Scherer, K. R., Schorr, A., & Johnstone, T. (2001). Appraisal processes in emotion: Theory, methods, research. Oxford University Press.
[97] Schultz, D. (2014). Computational model of appraisal. URL: https://github.com/derek-schultz/appraisal-model.
[98] Smith, C. A., & Ellsworth, P. C. (1985). Patterns of cognitive appraisal in emotion. Journal of Personality and Social Psychology, 48, 813.
[99] Sommerville, I. (2011). Software engineering (9th ed.). Pearson.
[100] Squire, L. R., & Kandel, E. R. (2003). Memory: From mind to molecules (Vol. 69). Macmillan.
[101] Strongman, K. T. (2003). The psychology of emotion: From everyday life to theory. Wiley-Blackwell.
[102] Subramanian, N., & Chung, L. (2001). Software architecture adaptability: An NFR approach. In Proceedings of the 4th International Workshop on Principles of Software Evolution (pp. 52–61). ACM.

[103] Sun, H., Ha, W., Teh, P.-L., & Huang, J. (2017). A case study on implementing modularity in software development. Journal of Computer Information Systems, 57, 130–138.
[104] Tarasenko, S. (2010). Emotionally colorful reflexive games. arXiv preprint arXiv:1101.0820.
[105] Velásquez, J. D. (1996). Cathexis: A computational model for the generation of emotions and their influence in the behavior of autonomous agents. Ph.D. thesis, Massachusetts Institute of Technology.
[106] Velásquez, J. (1997). Modeling emotions and other motivations in synthetic agents. In Proc. 14th Nat. Conf. Artif. Intell. (pp. 10–15).
[107] Younoussi, S., & Roudies, O. (2015). All about software reusability: A systematic literature review. Journal of Theoretical & Applied Information Technology, 76.
[108] Yousuf, M., & Asger, M. (2015). Comparison of various requirements elicitation techniques. International Journal of Computer Applications, 116.
[109] Zhou, H., Huang, M., Zhang, T., Zhu, X., & Liu, B. (2018). Emotional chatting machine: Emotional conversation generation with internal and external memory. In Thirty-Second AAAI Conference on Artificial Intelligence.
