A tool-supported development method for improved BDI plan selection

J. Faccin (a,*), I. Nunes (a,b)

(a) Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil
(b) TU Dortmund, Dortmund, Germany
(*) Corresponding author. E-mail addresses: [email protected] (J. Faccin), [email protected] (I. Nunes).

Engineering Applications of Artificial Intelligence 62 (2017) 195–213
http://dx.doi.org/10.1016/j.engappai.2017.04.008
Received 4 October 2016; received in revised form 28 March 2017; accepted 11 April 2017.

Keywords: Model-driven development; Agent-oriented software engineering; BDI

Abstract
Model-driven development (MDD) proposes the use of high-level abstractions to create models that describe concepts in a given domain, together with a series of model-to-model and model-to-text transformations targeting several platforms. Many agent-oriented methodologies have adopted this paradigm in the context of autonomous agents and multi-agent systems. Although the use of MDD relieves developers of much of the development effort, much is still left to them when we consider the need to provide agents with sophisticated, adaptive behaviour. Moreover, there is limited work evaluating the benefits provided by MDD in the context of agent-based systems. In this paper, we propose a tool-supported development method that applies MDD techniques to design and implement agents based on the belief-desire-intention (BDI) architecture with a sophisticated plan selection process. We evaluated our approach with a user study, and the results suggest that the use of our tool-supported development method can ease the process of understanding and developing agent-based systems when facing them for the first time.

1. Introduction

Multi-agent systems can be seen as a software development paradigm (Wooldridge, 2009) that provides adequate metaphors to model and implement complex and dynamic software systems. Several agent-based approaches have been proposed to support the development of such systems, including many agent-oriented methodologies (Wooldridge et al., 2000; Bresciani et al., 2004; Padgham and Winikoff, 2004; Pavón et al., 2005). These methodologies aim to guide the development process of autonomous agents and multi-agent systems by describing several steps, from requirement elicitation to agent implementation, which must be followed in a specified order to produce high-quality systems. Moreover, these methodologies provide supplementary notations for designing development models.

Many of these agent-oriented methodologies propose the use of model-driven development (MDD) (Stahl et al., 2006) to minimise the need for adapting existing development artefacts when targeting different programming languages or platforms. An MDD approach comprises the definition of meta-models describing high-level concepts, and sets of transformations from such meta-models to models with lower abstraction levels (model-to-model transformations) or directly to source code (model-to-text transformations). Using MDD, an agent or multi-agent system modelled for a single platform can have code generated for different target platforms by the development and execution of one or more transformations.


There are several tools that support agent-oriented methodologies adopting an MDD approach (Warwas and Hahn, 2009; Gascueña et al., 2012). Usually, these tools provide graphical environments supporting the modelling process of autonomous agents and multi-agent systems. Transformations from instantiated meta-models to their corresponding code are automated using code generation features.

Although system developers expend significantly less effort when code is automatically generated, much work is still left to them if we consider how MDD agent-based approaches support the development of complex agent behaviour, which requires the use of techniques such as adaptation and learning. For instance, an agent structured according to the Belief-Desire-Intention (BDI) architecture (Rao and Georgeff, 1995) performs the so-called plan selection process, in which a suitable plan able to achieve a given goal is selected, among candidate plans, to be executed. This process can be modified to provide sophisticated plan selection, e.g. by learning from the outcomes of previously executed plans. This kind of adaptation may demand from developers technical knowledge that they might not have, and which is not abstracted away by existing agent-based approaches and tools. Additionally, despite claims that the use of tool-supported approaches allows developers to save time and increase productivity, most of the existing work does not provide evidence, such as empirical evaluations, of the impact of using MDD in the context of agent-oriented development (Murukannaiah and Singh, 2014; Challenger et al., 2016a).


In this paper, we present a tool-supported method for developing BDI agents with learning capabilities. This method includes a modelling notation and model-to-text transformations, which are implemented in a tool called Sam. This tool demonstrates the practical use of our approach and is used to evaluate how our method improves agent-oriented development with respect to system understanding and maintenance. Our work is based on the BDI4JADE framework (Nunes et al., 2011), which implements the BDI architecture, and on an approach that provides agents with a learning-based plan selection technique (Faccin and Nunes, 2015). In summary, the main contributions of this work are (i) a tool-supported development method to provide agents with learning capabilities while hiding from developers the technical details associated with a sophisticated plan selection process, and (ii) an empirical study of the impact of using such a method while maintaining an agent-based system.

The remainder of this paper is organised as follows. In Section 2, we present an overview of the previous work on which this paper is based, namely the learning-based plan selection technique, which extends the BDI architecture, and the BDI4JADE framework. Section 3 introduces our development method and Section 3.3 presents the tool implemented to support it. The evaluation of our method and tool is presented in Section 4. In Section 5, we discuss related work proposing the use of model-driven techniques and tool-supported methods for agent-oriented development. Finally, we conclude this paper in Section 6.

2. Background

The Belief-Desire-Intention (BDI) architecture (Rao and Georgeff, 1995) is perhaps one of the most widely used architectures in agent development. It structures agents in terms of the three key concepts that name the architecture: beliefs, desires and intentions. Beliefs represent the information an agent has about the world, while desires, also referred to as goals, specify states of the world the agent wants to reach. Intentions are related to the agent's commitment to achieving its desires, which are carried out by the execution of typically predefined sets of actions—the so-called plans. The integration of these concepts and several abstract functions into a reasoning cycle provides agents with robust and flexible behaviour. This reasoning cycle can be generally described as follows. The BDI agent updates its beliefs and goals based on internal and external events that it perceives. From these updated goals, the agent selects those it will commit to achieve, transforming them into intentions. For each existing intention, a plan able to achieve the corresponding goal is selected and executed, and the cycle restarts.

One of the abstract functions that compose the BDI architecture is the plan selection process, which is responsible for selecting a suitable plan to achieve a given goal. Next, we present an approach that improves this plan selection process by allowing BDI agents—hereafter referred to simply as agents—to learn the relationship between their context and the outcomes of previous plan executions, and to select the plan expected to perform best in a given context.

2.1. A learning-based plan selection technique

A plan selection function within the BDI architecture receives as input a set or a list of plans that can achieve a given goal, and provides as output a plan selected to be executed to achieve this goal. Usually, BDI frameworks provide agents with a default First In, First Out (FIFO) plan selection process. However, this process could be more effective if different factors, e.g. the current context and preferences, were considered before making a decision.

In a previous work (Faccin and Nunes, 2015), we enhanced the default plan selection process by allowing agents to learn how the context influences plan execution outcomes. They use this knowledge to predict outcomes in different contexts, and to select the plan with the best predicted outcomes according to agent preferences. Fig. 1 depicts this process, which can be split into two stages. In the first one, an agent uses a default policy for plan selection with the aim of collecting information about the outcomes of different plan executions. When there is enough collected data, the agent builds a prediction model for each of its plans. In the second stage, the default plan selection strategy is replaced by one in which plans are selected according to their expected outcomes, which are provided by the prediction models built previously. As said above, the plan selected for execution is the one whose predicted outcomes best satisfy agent preferences. In this stage, plan execution outcomes are still recorded, and prediction models are updated periodically.

Fig. 1. Overview of the Learning-based Plan Selection Technique (Faccin and Nunes, 2015).

A meta-model that extends the typical BDI architecture includes the particular elements considered during this selection process, e.g. preferences (Nunes and Luck, 2014), softgoals (Bresciani et al., 2004) and outcomes, among others. We describe these elements next.

Softgoal: a secondary goal that cannot be fully achieved by a plan, but can be more or less satisfied according to the actions performed by the plan, e.g. minimising cost or maximising performance while achieving a goal.

Preference: a value between 0 and 1 expressed over an agent's softgoal. Preferences represent the trade-off among softgoals and indicate the importance of a given softgoal to the agent. The greater the importance given to a softgoal, the greater the preference value assigned to it.

Outcome: an observable plan output, i.e. a value that can be measured during and/or after the execution of a plan. Usually, an outcome is related to the use of a particular resource within a domain, such as time, fuel or money.

Influence Factor: a context attribute that influences plan outcomes. It is a variable that changes according to the context, is directly mapped to an agent's belief, and can affect one or more outcomes. The weather condition can influence the time taken to drive to work, for example.

Optimisation Function: defines how an outcome affects a softgoal. It states whether the value of an outcome must be maximised or minimised to better satisfy the agent's preferences over a given softgoal.

Plan Metadata Element: the element responsible for linking an outcome, its influence factors, a machine learning model/algorithm, an optimisation function and its respective softgoal to a plan.

This improved plan selection process can be embodied in any framework that implements the BDI architecture and allows the customisation of its different abstract functions. In fact, there are many such frameworks (Pokahr et al., 2005; Bordini et al., 2007; Nunes et al., 2011), each presenting different characteristics. Next, we present BDI4JADE, a BDI framework entirely based on the Java programming language, which we selected for our work.

2.2. The BDI4JADE framework

BDI4JADE (Nunes et al., 2011) is a framework that implements the BDI architecture on top of JADE (Bellifemine et al., 2007). It is purely Java-based, which allows developed agents to be extended or integrated with technologies currently in use, such as AspectJ (https://eclipse.org/aspectj/) and Hibernate (http://hibernate.org/), among others. The mutual interest in providing solutions capable of being adopted in industrial applications is the reason why this framework was chosen for our work.

Several classes comprising this framework correspond to the components of the BDI architecture. Developing an agent requires the instantiation of a class that implements the BDIAgent interface and contains at least one capability (Busetta et al., 2000), which represents the agent's ability to act towards goals. In BDI4JADE, capabilities are implemented as extensions of the Capability class and encompass sets of beliefs and plans. Beliefs are the informative components of the system and are implemented as subclasses of the Belief class. They can be added to capabilities both at design time and at runtime, as can plans. Plans, in turn, are instances of the Plan class and hold the information about the goals they can achieve. The set of actions a plan must execute to achieve a goal is specified by instantiating a class that implements the PlanBody interface, while goals are implementations of the Goal interface. In Listing 1, we present an example of an agent developed using the BDI4JADE framework.

The code snippet presented in Listing 1 represents an agent PersonAgent, which can achieve the goal GoHomeGoal through the execution of the plan goHomePlan. For simplicity of presentation, the Goal implementation and the PlanBody subclass are presented as inner classes of PersonAgent (lines 3–10), though they could be implemented as independent classes without any disadvantage to the final result. GoHomeGoal is an implementation of the Goal interface (line 3) with no further code, while GoHomePlanBody extends AbstractPlanBody—a default implementation provided by BDI4JADE for the PlanBody class (lines 5–10). The set of actions the plan goHomePlan performs must be detailed in the action() method of GoHomePlanBody. This platform makes extensive use of annotations to simplify the development task. In Listing 1, this resource is used in the @Belief and @Plan annotations, which add the belief raining (lines 12–13) and the plan goHomePlan (lines 15–16) to the agent's belief base and plan library, respectively. Although these annotations automate the process of accessing a capability and inserting beliefs and plans into their respective sets, users are still able to perform this operation manually.
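Listing 1 is not reproduced here; the following is a minimal reconstruction based on the description above, with line numbers matching those referenced in the text. The SingleCapabilityAgent, TransientBelief and DefaultPlan classes and the exact constructor signatures are assumptions drawn from BDI4JADE's public API, not the authors' original listing.

 1  public class PersonAgent extends SingleCapabilityAgent {
 2
 3      public static class GoHomeGoal implements Goal { }
 4
 5      public static class GoHomePlanBody extends AbstractPlanBody {
 6          @Override
 7          public void action() {
 8              // domain-specific actions for going home
 9          }
10      }
11
12      @Belief
13      private Belief<String, Boolean> raining = new TransientBelief<>("raining", false);
14
15      @Plan
16      private Plan goHomePlan = new DefaultPlan(GoHomeGoal.class, GoHomePlanBody.class);
17  }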

3. Development method

Developing autonomous agents or multi-agent systems does not differ from creating standard software projects: it demands that developers follow or iterate through different activities (Sommerville, 2010), which typically comprise (i) specification, (ii) design and implementation, (iii) validation and verification, and (iv) evolution. Several agent-oriented methodologies have been proposed addressing these activities and establishing guidelines for the agent development process, such as Tropos (Bresciani et al., 2004), Prometheus (Padgham and Winikoff, 2004) and GAIA (Wooldridge et al., 2000), among others. We propose a model-driven development method that specifically focuses on the design and implementation activity to create agents with learning capabilities. The remaining activities are out of the scope of this work; however, we assume that developers can exploit existing agent-oriented methodologies to address them.

3.1. Design activity

In our model-driven development method, designing an agent essentially means creating an instance of the meta-model mentioned in Section 2.1, which extends the BDI architecture with outcomes, influence factors, and other elements to support the learning-based plan selection technique. However, the existing work on which ours is based lacks a textual or graphical language to create models. Therefore, we next propose a notation to support the instantiation of such models.


3.1.1. Graphical notation

We propose the use of a graphical notation, presented in Fig. 2, to represent meta-model elements and their relationships. In this notation, agents and capabilities are represented as circles with solid and dashed border lines, respectively, containing their names (Fig. 2a). A belief is depicted as a rectangle containing the stereotype ≪belief≫ and the belief's name. Outcomes have the same representation as beliefs, with the stereotype ≪outcome≫ instead (Fig. 2d). Goals are represented by rounded rectangles containing a label that displays the goal's name, while a softgoal is represented by a cloud-like shape containing its name (Figs. 2c and d). Finally, plans and plan metadata elements are depicted as hexagons with solid and dashed border lines, respectively (Fig. 2b).

Fig. 2. Graphical Notation.

The elements of our meta-model can present two different relationship types: containment and association. A containment relationship occurs when an element contains another, e.g. an agent that has a capability. Association relationships, in turn, refer to elements that relate to others without containing them, e.g. a plan that can achieve a goal. These relationships are illustrated as directional lines connecting source and target elements, with an open arrowhead near the target. Containment relationships are characterised by a solid line, while association relationships are depicted as a dashed line connecting the elements. Optimisation functions are represented as labelled association relationships, which show the name of the given function. It is important to highlight that the introduced notation reuses representations of some BDI components provided by the Tropos methodology (Bresciani et al., 2004) in order to promote integration with existing BDI methodologies.

Using our graphical notation to model an agent involves creating two main artefacts. The first one is the agent diagram, which uses the notation to depict the modelled agent. The second is an agent model file containing specific information about this agent, such as preference values and properties concerning the learning-based plan selection technique. Each of these artefacts describes a single agent; as a result, modelling different agents implies having different agent diagrams and model files.

3.1.2. Agent modelling

The design activity starts with developers instantiating and organising the model elements comprising an agent in the agent diagram. We assume these elements were already identified in a previous analysis step, which might have been performed using an existing and applicable agent-oriented methodology.

Initially, developers must instantiate an agent element, which is the element that represents the highest level of abstraction in our meta-model and serves as the root node for the other elements. In our meta-model, an agent may have preferences over several softgoals, which can be seen as non-functional requirements to be satisfied. Therefore, for each identified softgoal, a softgoal element must be instantiated and related to the agent representation through an association connection. By convention, developers must assign a unique identifier to each element instantiated in this diagram. Specifying the agent preferences involves creating an agent model file, which contains all information related to the learning-based plan selection technique, and, in this model file, mapping a preference value to each instantiated softgoal.

An agent can also contain one or more capabilities, each of which encapsulates the agent's ability to achieve one or more goals and comprises a subset of the agent's belief base and plan library. Capability elements are instantiated for each identified capability and related to the existing agent representation through containment connections. For each capability, developers must add to the agent diagram the corresponding goal, belief and plan representations to which the capability is related, and connect each of them to the capability element using a containment connection.

3.1.3. Plan modelling

So far, the model resulting from our design activity does not differ from those created with most of the existing approaches for modelling BDI agents. The main difference in our proposal is the possibility to provide agents with the information needed to perform sophisticated plan selection. Such information is specified during the process of modelling plans.

A plan is responsible for defining the course of actions to be executed in order to achieve a given goal. In our modelling process, this relationship between a plan and the goal it can achieve is expressed by relating the plan and goal representations with an association connection. Additionally, plans can contribute differently to the satisfaction of agent preferences over softgoals. Such contribution is derived from the information provided by further metadata added to plans, which in our meta-model is represented by plan metadata elements. A plan metadata element must be instantiated for each softgoal to which a given plan contributes, with containment connections linking the plan element to its corresponding metadata representations. Developers are required to use an optimisation function representation to relate such metadata to the softgoal it refers to.

Plan metadata elements encapsulate the relationship between plan execution outcomes and their influence factors. Each plan outcome identified during the analysis step has to be instantiated in the agent diagram, with association connections being used to relate plan metadata elements to them. Influence factors are abstractions of beliefs and, for this reason, do not need particular representations in the diagram. The set of influence factors affecting an outcome is related to it through association connections, which link the beliefs representing these influence factors to the plan metadata element associated with the given outcome.

The end of our plan modelling process consists of assigning to each plan metadata element a set of values regarding (i) how many plan executions are used for collecting data before building the first prediction model, (ii) how many plan executions are performed between prediction model updates, and (iii) the machine learning model used to create such prediction model. This assignment is performed within the agent model file. Fig. 3 presents an example of an instantiated agent model. The information provided to the corresponding agent model file is presented between brackets.

Fig. 3. Example of an Instantiated Model.

As said above, the information that is mandatory for performing the learning-based plan selection technique—the relationship between influence factors and plan outcomes, in particular—is provided during this plan modelling process. Therefore, it is possible to model a traditional BDI agent without our plan selection approach and later evolve it to incorporate this approach; an example of such a scenario is presented in Section 4.1.3. Moreover, although the meta-model that is the focus of this MDD method can be seen as an independent meta-model able to describe the structural aspects of a single agent, it can also be integrated with existing BDI agent meta-models able to describe different aspects of agent systems, e.g. coordination features.

3.2. Implementation activity

Using the agent diagram and model file from the previous design activity, it becomes possible to perform the second activity of our development method. This task consists of implementing the designed agent, exploiting a model-to-text transformation process to generate the corresponding agent code. Before presenting this transformation process, let us introduce an extended version of the BDI4JADE framework that implements our learning-based plan selection technique. It is used to reduce the gap between the model and the generated code.

We adapted BDI4JADE by inserting the elements of our meta-model into its default implementation. Fig. 4 presents a class diagram depicting this extension. Classes in dark grey are those already existing in BDI4JADE, while the remaining classes are those we implemented. Within this extended version, a BDIAgent has Softgoals, and preferences over these softgoals are expressed by SoftGoalPreferences, which are extensions of beliefs. A PlanMetadataElement relates a plan to a softgoal it can satisfy, specifies the InfluenceFactors that affect the Outcome of the given plan, and specifies the OptimisationFunction that is considered when handling such outcome. Finally, all these relationships are established within an extension of the LearningBasedCapability, which sets a plan selection strategy implementing our learning-based approach.

Fig. 4. Meta-model as a BDI4JADE Extension (Faccin and Nunes, 2015).

In Fig. 4, classes presented in light grey are those transparent to developers, while classes in white are those they would explicitly manipulate if they manually implemented an agent without the aid of our MDD method. Note that the actual implementation of the learning-based plan selection technique is encapsulated in classes that are not handled during this activity, thus remaining hidden from developers. We must highlight that some classes, namely SoftGoal, NamedSoftGoal, and SoftGoalPreferences, were reused from the extension available on the BDI4JADE website (Nunes and Luck, 2014).

During the agent reasoning cycle, the plan selection technique encapsulated in the LearningBasedPlanSelectionStrategy initially selects plans randomly, in an exploration process that aims to build the initial knowledge about plan outcomes. Every plan is selected and executed several times until it reaches the minimum number of executions specified by developers. This process of collecting data from plan executions explicitly involves the InfluenceFactor and Outcome classes. The former provides the current status of the agent's beliefs, while the latter informs the impact of the plan execution on a given piece of data related to a particular softgoal. When a plan is selected, the implemented technique collects the current state of the plan's influence factors related to each softgoal. Right after the plan is executed, its outcomes are computed and the collected information is stored. When the plan reaches the minimum number of executions specified, the LearningBasedPlanSelectionStrategy activates the learning algorithm which, based on the recorded information, builds a prediction model for each plan outcome. From this moment on, the LearningBasedPlanSelectionStrategy uses the values provided by prediction models, optimisation functions (represented by the OptFunction enumeration) and SoftgoalPreferences to assign expected utility values to plans and to select the plan with the highest expected utility. A prediction model is updated every time its corresponding plan reaches the number of executions specified by developers. The behaviour of the remaining abstract functions comprising the BDI reasoning cycle is unmodified. A condensed sketch of this expected-utility computation is shown below.
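The sketch below illustrates the second-stage selection just described; it is a sketch under stated assumptions, not the authors' implementation. OptFunction, the prediction models and the softgoal preferences are the elements named in the text, while the accessor and helper names (metadataOf(), getPredictionModel(), currentInfluenceFactorValues(), normalise()) are ours.

    // Sketch of the expected-utility computation inside the plan selection
    // strategy. Accessor and helper names are assumptions.
    Plan selectPlan(Set<Plan> candidates) {
        Plan best = null;
        double bestUtility = Double.NEGATIVE_INFINITY;
        for (Plan plan : candidates) {
            double utility = 0.0;
            for (PlanMetadataElement pme : metadataOf(plan)) {
                // outcome value predicted from the current influence factor values
                double predicted = pme.getPredictionModel()
                                      .predict(pme.currentInfluenceFactorValues());
                // normalise so that higher is always better, per the optimisation function
                double satisfaction = (pme.getOptFunction() == OptFunction.MINIMISE)
                        ? 1.0 - normalise(predicted)
                        : normalise(predicted);
                // weight by the agent's preference over the associated softgoal
                utility += preferences.getPreference(pme.getSoftgoal()) * satisfaction;
            }
            if (utility > bestUtility) {
                bestUtility = utility;
                best = plan;
            }
        }
        return best;
    }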

From the infrastructure provided by the extended BDI4JADE framework, we specify our model-to-text transformation. It consists of a set of transformation rules that, applied to an instantiated agent model, generate the corresponding agent code. Table 1 presents these rules: the left column lists rules that, when triggered, produce the content described in the right column. Rules marked with an asterisk (*) create their content in a new file. The transformation process starts with the Agent rule being applied to the agent element instantiated in the agent diagram.

Table 1 Transformation Rules.

Rule                 Transformation
Agent*               name extends MultiCapabilityAgent
                     Constructor(): for each capability → AddCapability
                     init(): for each capability, for each goal → AddGoal; for each softgoal → AddSoftgoal
                     initPreferences(): for each softgoal → SetPreference
                     for each capability → Capability; for each goal → Goal; Softgoals
AddCapability        Add capability to agent
AddGoal              Add goal to agent
AddSoftgoal          Add softgoal to agent
SetPreference        Set preference belief value associated with the softgoal
Capability*          name extends LearningBasedCapability
                     for each belief → AddBelief; for each plan → AddPlan
                     init(): for each plan, for each plan metadata element → AddPME
                     for each plan → Plan
AddBelief            Add belief to capability
AddPlan              Add plan to capability
Plan*                name extends DefaultPlan
                     Constructor(): set goal to plan; set plan body to plan
                     → PlanBody
PlanBody*            name extends LearningBasedPlanBody
                     action()
AddPME               Instantiate plan metadata element
                     for each influence factor → AddInfluenceFactor
                     Add plan metadata element to plan metadata
                     → Outcome (if outcome file does not exist)
AddInfluenceFactor   Add influence factor to plan metadata element
Outcome*             name extends Outcome
                     getMeasurement()
Goal*                name implements Goal
Softgoals            Generate softgoals array

The transformation process described above results in an almost complete BDI4JADE agent code. Therefore, instead of manually implementing classes based on the BDI4JADE implementation, our MDD method is able to automatically generate code. However, even with our MDD method, developers still need to provide additional code pieces that cannot be generated from the application of our transformation rules. These code pieces refer to domain-specific code that must be added to each outcome and plan body class created. Outcome classes require developers to add the code responsible for collecting data from sensors. Plan body classes, in turn, require the code that represents the set of actions an agent will carry out when executing a plan related to the given plan body. Providing such domain-specific code culminates in an executable BDI4JADE agent. This concludes the description of our development method; the skeleton below illustrates the shape of the generated code.
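The following skeleton is a hypothetical illustration of what the Agent and Capability rules would emit for a toy agent with one capability, one goal and one softgoal. All names are ours, and the method signatures of MultiCapabilityAgent and LearningBasedCapability are assumptions inferred from Table 1.

    // Hypothetical skeleton produced by the Agent rule (all names are ours).
    public class TravelAgent extends MultiCapabilityAgent {
        public TravelAgent() {
            addCapability(new TravelCapability());        // AddCapability rule
        }
        protected void init() {
            addGoal(new TravelGoal());                    // AddGoal rule
            addSoftgoal(new SoftGoal("maxPerformance"));  // AddSoftgoal rule
        }
        protected void initPreferences() {
            setPreference("maxPerformance", 0.8);         // SetPreference rule
        }
    }

    // Hypothetical skeleton produced by the Capability rule.
    public class TravelCapability extends LearningBasedCapability {
        // fields annotated with @Belief and @Plan, emitted by the AddBelief
        // and AddPlan rules, plus plan metadata set up in init() by AddPME
    }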

Although the model-to-text transformation we provide here refers specifically to the BDI4JADE framework, our proposal is not limited to this single platform. The meta-model underlying our MDD technique abstracts from particular agent programming languages and execution platforms in such a way that it can be classified as a platform-independent meta-model (PIM). According to the Model-Driven Architecture (MDA) (OMG, 2003), a PIM must focus on the operational aspects of a system without providing details of its implementation on a particular platform. Model transformations can thus be applied from PIMs to different platform-specific meta-models (PSMs), which may contain the implementation details that are not present in the corresponding PIM. Transformations applied to a PSM result in platform-specific code. In our implementation activity, we present a direct transformation from our platform-independent meta-model to BDI4JADE code. However, it is also possible to specify a meta-model that represents the extended BDI4JADE framework to be used as a PSM, as well as to use other existing PSMs as intermediary models, such as the Jason meta-models proposed by Cossentino et al. (2012) and Tezel et al. (2016), to generate code targeting different agent platforms. Therefore, the only requirements for using our improved plan selection technique on various agent platforms are that (i) the target platform must be extensible and support the customisation of the plan selection process, and (ii) a particular model-to-text transformation matching that platform must be specified. Besides BDI4JADE, most existing agent platforms are open to customisations of the BDI reasoning cycle, e.g. Jadex and Jason. Next, we briefly describe how these two agent platforms can be extended and customised to support our technique.

Jason (Bordini et al., 2007) is a well-known Java-based interpreter for an extended version of the AgentSpeak (d'Inverno and Luck, 1998) language. It allows developers to customise agent behaviour by extending the Java class that represents agents and overriding particular methods that correspond to the abstract functions of the BDI reasoning cycle. It is possible to provide agents with our improved plan selection process by overriding the selectOption() method in such a class. The information required during plan selection, which refers to influence factors, outcomes, optimisation functions and other related data, is provided through Jason's annotated plans. Softgoals and corresponding preferences, in turn, are implemented as beliefs. A minimal sketch of such an override is shown below.
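This sketch assumes a hypothetical OutcomePredictor helper of ours that encapsulates the prediction models and preference handling; selectOption() and getBB() belong to Jason's Agent class.

    import jason.asSemantics.Agent;
    import jason.asSemantics.Option;
    import java.util.List;

    // Sketch: hosting the learning-based selection in Jason by overriding
    // selectOption(). OutcomePredictor is a hypothetical helper, not part of Jason.
    public class LearningAgent extends Agent {

        private final OutcomePredictor predictor = new OutcomePredictor();

        @Override
        public Option selectOption(List<Option> options) {
            if (options == null || options.isEmpty()) {
                return null;
            }
            Option best = options.get(0);
            double bestUtility = Double.NEGATIVE_INFINITY;
            for (Option option : options) {
                // expected utility estimated from the plan's annotations
                // (outcomes, optimisation functions) and the current belief
                // base, which holds the influence factors and preferences
                double utility = predictor.expectedUtility(option.getPlan(), getBB());
                if (utility > bestUtility) {
                    bestUtility = utility;
                    best = option;
                }
            }
            return best;
        }
    }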

Differently from Jason, Jadex (Pokahr et al., 2005) supports the development of agents using not only Java but also XML. It provides a feature called meta-level reasoning, which can be executed when there are several plans able to achieve a given goal. For existing goals, developers can provide a meta-level goal, whose achievement corresponds to the selection of a plan for execution, and plans to achieve such meta-level goals. In our case, the meta-level plan implements our learning-based plan selection technique, while the information corresponding to our plan metadata elements is provided through the parameter tag in the XML definition of plans. These parameters can be accessed by the meta-level plan and used during plan selection. Clearly, other adaptations must be made in each platform to fully support the technique, such as the implementation or adoption of libraries that support the use of machine learning models. Although we do not provide the complete implementation of our proposal in Jason and Jadex, we demonstrate that it is possible to target platforms other than BDI4JADE.

3.3. Sam: a model-driven development tool

To ease the adoption and demonstrate the practical applicability of our proposed method, we developed a tool that implements the modelling notation and transformation process presented above. This tool, named Sam, is a development environment that supports the design and implementation tasks introduced previously. (Although Sam is fully implemented, further tests should be carried out before it can be considered stable enough for broad adoption; currently, the tool can be obtained by contacting the corresponding author and, as soon as a stable version is produced, it will be made available on the BDI4JADE website.) Developed as an Eclipse (http://www.eclipse.org) plug-in built upon the Graphiti (http://www.eclipse.org/graphiti) framework and the XPand (http://wiki.eclipse.org/Xpand) template language, Sam provides a graphical editor, which supports the instantiation of agent models using our notation, and a code generation feature, which automates the model-to-text transformation process. Fig. 5 presents the general appearance of our tool.

Fig. 5. BookingAgent Model in the Sam Tool.

Sam's user interface comprises several elements playing different roles. At its centre is the editor view, which is responsible for displaying the content of agent diagrams. On the left-hand side of the editor view is the package explorer, through which developers can navigate between agent diagram and model files. On the right-hand side of the editor view stands the creation palette, which contains the meta-model elements that can be instantiated in a diagram. All information developers must insert into the agent model file is provided through the property view, which is positioned below the editor view. Such information includes preference values and other data regarding the learning-based plan selection technique. Finally, an additional view below the package explorer presents a miniature of the agent diagram displayed in the editor view.

Developers can instantiate an agent model by dragging meta-model elements from the creation palette and dropping them into a diagram in the editor view area. To improve diagram readability, Sam provides the so-called collapse feature, which allows developers to choose between showing a goal's name inside the plan element able to achieve it or explicitly displaying a connection between these elements. The same feature is available for plan metadata elements, which can display their connections to beliefs and softgoals, or show the names of these elements inside their representation. This feature focuses mainly on supporting the visualisation of models with many element instances, thus addressing scalability; however, further improvements can be made towards increased support. Recent work proposed the use of capability relationships to improve BDI agent modularisation (Nunes, 2014; Nunes and Faccin, 2016), thus allowing the construction of smaller composable BDI agent modules; supporting such modules is a natural extension of Sam to increase its support for scalable models.

Additionally, Sam implements a code generation feature that automates the model-to-text transformation process from Section 3.2. This feature is triggered when developers open the context menu of an agent model file and select the generate code option. Applying our transformation rules, Sam can produce the complete structure of packages and classes required for an agent targeting the extended BDI4JADE framework. However, developers still have to provide the domain-specific code concerning plan bodies and outcomes.

In Fig. 5, we present a model instance of an agent responsible for making hotel reservations, inspired by the scenario proposed by Visser et al. (2016). This agent, called BookingAgent, is able to book a hotel room in a given city considering the need to maximise guest comfort (MaxComfort softgoal) while minimising the cost of the stay (MinCost softgoal). The agent has a single capability, namely BookingCapability, which can achieve the BookingGoal by executing one of two available plans: BookHotelX or BookHotelY, which book a room in Hotel X and Hotel Y, respectively. The impact of a plan execution on the existing softgoals is measured through particular outcomes related to each plan. The final Price of a booking is related to the MinCost softgoal, while a Satisfaction rating, which is provided by guests after the stay in the selected hotel, is related to the MaxComfort softgoal. Several influence factors affect the final price of a booking as well as the comfort perceived by guests. The price outcome is affected by (i) the number of guests; (ii) the number of days of the stay; (iii) the current currency exchange rate; (iv) the time of the year in which the stay occurs; and (v) the location of the selected hotel, given that central places are typically more expensive. Guest satisfaction is influenced by almost the same factors as the price outcome; the only difference is the predicted weather during the stay, which is considered instead of the currency exchange rate. From this instantiated model, developers can generate the corresponding BookingAgent code. They need to provide only the code related to the body of the booking plans and the measurement of the price and satisfaction outcomes, as sketched below.
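The sketch below illustrates the kind of domain-specific code left to developers, here for the Price outcome. The Outcome base class and the getMeasurement() hook come from the Outcome rule in Table 1, while the method's signature and the booking-service call are assumptions.

    // Domain-specific measurement code added to a generated outcome class.
    // The Outcome base class and getMeasurement() hook are generated (see
    // Table 1); the BookingService API is a hypothetical placeholder.
    public class Price extends Outcome {

        @Override
        public double getMeasurement() {
            // query the (hypothetical) booking service for the final price
            // of the reservation made by the plan that has just executed
            return BookingService.lastReservation().totalPrice();
        }
    }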


Currently, Sam is able to generate code only for the BDI4JADE platform. However, as stated in Section 3.2, the model-driven development method supported by our tool is general enough to address different platform-specific meta-models and, as a consequence, different target platforms or programming languages. The only extension needed to provide such support is the development and integration of specific XPand templates representing the corresponding model-to-model (from our PIM to the desired PSM) and model-to-text (from the obtained PSM to code) transformations.

4. Evaluation of support to agent-based development

More than promoting the use of our development method, providing a tool like Sam is an attempt to achieve two key goals: (i) reducing the effort required to develop agents, and (ii) improving development quality. We therefore conducted a user study to verify the benefits that the use of our tool, with its underlying method, brings to the agent development process. In this study, we compared the use of our tool-supported method with a purely code-based implementation across two development activities. Next, we present the study settings, detailing the procedure followed in this experiment.

4.1. Study settings

Our user study is composed of two main activities, each of them regarding one specific concern: project understanding and development quality. We assigned two groups of volunteers to perform these activities using different resources. Both groups used the extended BDI4JADE framework presented in Section 3.2; however, only one of them was able to use Sam—that is, we performed a between-subjects study. From these activities, we extracted metrics that helped us measure the benefits of using our tool in the agent development process.

4.1.1. Goal and research questions

We used the Goal-Question-Metric (GQM) approach (Basili et al., 1994) to model and develop this experiment. GQM is a traditional method for software metrics planning; its purpose is to identify metrics that effectively contribute to answering specific questions, which in turn are related to a goal. Following this approach, we specified the statement below as the goal of our user study.

To assess the improvements provided by a tool-supported BDI-agent-based development method, evaluate the effectiveness of the use of this method to understand and evolve BDI-agent-based systems, from the perspective of the researcher, in the context of graduate and undergraduate students in Computer Science.

There are several ways in which a development technique or method can aid the software development process, e.g. saving time, improving code quality or reducing costs. Considering this, we limited the scope of our experiment to two main benefits: improvement of project understanding and development effectiveness.

The first benefit concerns how well developers can understand an existing agent project, i.e., how well they can correctly identify different components and their relationships throughout the code and models that are part of the given project. The second benefit refers to improvements in code or model quality, and in the time to implement them. Thus, considering this context and aiming to achieve the goal stated above, we list our research questions as follows.

RQ1. Does the use of our method facilitate the understanding of an existing BDI-agent project?
RQ2. Does the use of our method improve the evolution of an existing BDI-agent project?

Based on these questions, we selected the most appropriate metrics to be collected. To answer RQ1, we selected two metrics in the context of an existing agent project: (M1.1) the number of correctly answered questions about the project; and (M1.2) the time taken to correctly answer these questions. Additionally, three metrics are used to answer RQ2: (M2.1) the time taken to evolve the project according to a particular task; (M2.2) the number of compilation errors in the modified project; and (M2.3) the number of logical errors in the modified project.

4.1.2. Procedure

To evaluate our method and its associated tool, we performed a four-step study. In summary, we asked participants to answer a questionnaire regarding their experience in areas related to our experiment, and then gave them an introductory class providing an overview of the different concepts needed as background to perform our study. Afterwards, participants performed activities regarding project understanding and evolution, with or without the assistance of our tool. These steps are detailed next.

Step 1: Experience Survey. Participants were requested to answer a questionnaire regarding their previous knowledge of programming, the Java programming language, software agents and the BDI architecture. From their answers, we classified each participant as beginner, intermediary or expert. In each question, we assigned a weight to each alternative. This weight is related to the level of knowledge expressed by the alternative and is proportional to the number of alternatives in the question. For instance, a question with three alternatives would have weights 0.0, 0.5 and 1.0, respectively, given that the alternatives are sorted in ascending order of expressed knowledge. We calculated the arithmetic average of the answers and assigned a final score to each participant. Participants with a score ≤ 0.35 were classified as beginners, while those with a score > 0.65 were classified as experts. Participants whose score fell in the interval between beginners and experts were classified as intermediaries. A small sketch of this classification is given below. The complete demographic data of those who took part in our experiment is presented in Section 4.1.5.
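A minimal sketch of the scoring and classification just described, under the thresholds stated above:

    // Classification used in Step 1: the score is the arithmetic mean of the
    // weights of the chosen alternatives (e.g. 0.0, 0.5 and 1.0 for a
    // three-alternative question).
    static String classify(double[] answerWeights) {
        double sum = 0.0;
        for (double weight : answerWeights) {
            sum += weight;
        }
        double score = sum / answerWeights.length;
        if (score <= 0.35) {
            return "beginner";
        } else if (score > 0.65) {
            return "expert";
        } else {
            return "intermediary";
        }
    }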

Step 2: Introductory Class. We split participants into two groups and gave them an introductory class, which aimed to ensure that all participants had the same basic knowledge regarding the concepts and approaches addressed by this experiment. They had contact with concepts such as software agents, the BDI architecture, the learning-based plan selection technique and its meta-model, as well as the extended BDI4JADE framework, thus becoming able to recognise influence factors, plan metadata elements and other related elements. One of the groups also had a brief overview of using Sam for modelling and generating source code for agents. This group was able to use our tool during the entire experiment, while the other one had access to project source code only.

Step 3: Understanding Activity. This activity consists of a questionnaire about an implemented target system. The purpose of this questionnaire is to provide metrics to answer the first research question (RQ1), thus verifying the existence of any difference concerning project understanding when analysing a project from its source code or assisted by our tool and, consequently, by our modelling notation. Both participant groups performed this activity with access to the material provided in the introductory class. We recorded the time taken for each participant to answer each question as well as their responses, and extracted the corresponding metrics (M1.1 and M1.2).

Step 4: Project Evolution Activity. The second activity is an exercise of project evolution. In this activity, we used an initial implementation of a target system as the base project, which had to be evolved in certain aspects, aiming to provide metrics to answer the second research question (RQ2). Again, both participant groups were able to use the material from the introductory class. For each participant, we recorded the evolved project and the time taken to develop the task, from which we extracted metrics M2.1, M2.2 and M2.3. We present the target systems implemented to support the understanding and project evolution activities in the next section. Details of the questionnaire used in Step 3 and the project evolution exercise performed in Step 4 are presented in Section 4.1.4.

4.1.3. Target systems

Two different target systems were implemented to support the execution of this study. The first one, referred to as the transportation system, was used in the third step of our experiment. In this target system, an agent T has the goal transport(x, y), i.e. transporting a load from its origin x to its destination y. To achieve this goal, agent T can choose among three available plans: AirplanePlan, TruckPlan or ShipPlan. Each plan has its specific metadata, which provides information about influence factors and outcomes, and how they are related to each of the agent's softgoals, namely maxPerformance, minCosts and maxReliability. Fig. 6 shows this system modelled in Sam.

Fig. 6. The Transportation System Modelled in Sam.

The second implementation represents a sorting system, which is capable of sorting the elements of a given array. An agent A with the goal sort(x) is thus responsible for managing this system and selecting the best way to sort array x. Agent A can initially choose between two plans, each representing one particular sorting algorithm: InsertionSortPlan and SelectionSortPlan, referring to the insertion sort and selection sort algorithms, respectively. This target system is the base for the project evolution activity. Fig. 7 depicts the sorting system modelled with Sam. In that model, note that we refer to agent A's goal as SortArray; the same occurs in Fig. 6, where agent T's goal is presented as Transport instead of transport(x, y).

Fig. 7. The Sorting System Modelled in Sam.

Although there is no strong motivation or need for the adoption of an agent-based solution in our target systems, we chose them as subjects of our study mainly due to their simplicity. We selected systems for which specific domain knowledge would not be required to participate in the study, so that a possible lack of such knowledge would not affect the outcome. For example, a more realistic subject would be an agent-based system responsible for monitoring and preventing attacks on a computer network infrastructure; in such a system, both tasks could be carried out by executing different applicable actions, such as blocking malicious accesses using different strategies. However, this is an example of a system that requires specific knowledge about the computer network domain. Moreover, the focus of our study is not the adopted target systems but the tool-supported development method, and we considered that the use of complex target systems could affect the engagement of the participants in this experiment.

4.1.4. Questionnaire and exercise

The questionnaire associated with the understanding activity consists of 12 questions about the project implementing the transportation system. These questions aim to assess how well a participant understands the project and are directly associated with the elements of the learning-based plan selection technique, i.e., outcomes, influence factors, optimisation functions, and their relationships.

There are four different types of questions, each asking about specific relationships and following a particular template. These question templates are presented as follows.

• Which outcome is associated with the SG softgoal in the P plan?
• Which is the optimisation function used in the P plan, with respect to the SG softgoal?
• Which beliefs are influence factors of the O outcome in the P plan?
• Which influence factors and outcome are associated with the SG softgoal in the P plan?

From these templates we derived our questions, replacing SG, P and O with different softgoals, plans and outcomes of the transportation system project, respectively. A concrete question is, for instance, "Which outcome is associated with the maxPerformance softgoal in the AirplanePlan plan?" The questions were interleaved so that participants did not answer the same type of question in sequence, thus requiring them to learn how to interpret a model or code to give the correct answer.

The project evolution activity considers the ability of a participant to evolve an agent-based system and how this ability can be improved by using a tool-supported development method. In this exercise, participants have access to an initial Java project that implements the sorting system presented above. Initially, the agent managing this system does not feature any sophisticated technique for selecting which sorting algorithm should be executed to achieve its goal. We asked participants to evolve this project, providing an agent that uses the learning-based plan selection technique. To accomplish the task, they were instructed to add the softgoals maxPerformance and minNumberOfSwaps to the agent, and to set the proper metadata for each existing plan, creating and relating elements when necessary. Information about the metadata that should be added to the plans was provided to participants as presented in Tables 2 and 3.

Table 2 Metadata for the Insertion Sort Plan.

Influence Factors               Outcome        Optimisation Function  Softgoal
ArraySize, Additions, Removals  CPUTime        Minimise               maxPerformance
ArraySize, Additions, Removals  NumberOfSwaps  Minimise               minNumberOfSwaps

Table 3 Metadata for the Selection Sort Plan.

Influence Factors               Outcome        Optimisation Function  Softgoal
ArraySize, Additions, Removals  CPUTime        Minimise               maxPerformance
ArraySize, Additions, Removals  NumberOfSwaps  Minimise               minNumberOfSwaps
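For orientation, the sketch below shows how one row of Table 2 could be attached to the InsertionSortPlan in the extended framework; the PlanMetadataElement constructor and addPlanMetadata() are assumptions based on the class roles described in Section 3.2.

    // Sketch: expressing the first row of Table 2 as plan metadata.
    // Constructor and method names are assumptions.
    PlanMetadataElement cpuTimeMetadata = new PlanMetadataElement(
            new CPUTime(),                                  // outcome
            OptFunction.MINIMISE,                           // optimisation function
            maxPerformance,                                 // softgoal
            Arrays.asList(arraySize, additions, removals)); // influence factors
    insertionSortPlan.addPlanMetadata(cpuTimeMetadata);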

4.1.5. Participants

Participants in our study were all volunteers: graduate and undergraduate students in Computer Science. The experiment started with a total of 26 participants, of which seven were not present in Step 3 of our study. Of those remaining, one participant was not present in Step 4. Therefore, 19 participants performed the understanding activity, while 18 were present for the project evolution activity. We included answers from all participants who performed a given activity in the results presented in Section 4.2. Table 4 presents the demographic characteristics of the participants who completed the study, while Table 5 shows how participants were distributed into groups according to their classification as beginners, intermediaries and experts.

Table 4 Demographic Characteristics of Participants.

Characteristic                                      N    %
Education
  Undergraduate student                             5    27.8
  Master student                                    11   61.1
  PhD student                                       2    11.1
Programming experience (in years)
  < 1 year                                          0    0.0
  1–2 years                                         1    5.6
  2–5 years                                         9    50.0
  5–10 years                                        3    16.7
  > 10 years                                        5    27.8
Experience with the Java programming language (0–8)
  0                                                 0    0.0
  1                                                 0    0.0
  2                                                 1    5.6
  3                                                 0    0.0
  4                                                 4    22.2
  5                                                 3    16.7
  6                                                 5    27.8
  7                                                 3    16.7
  8                                                 2    11.1
Knowledge about software agents (0–8)
  0                                                 5    27.8
  1                                                 5    27.8
  2                                                 5    27.8
  3                                                 2    11.1
  4                                                 0    0.0
  5                                                 0    0.0
  6                                                 1    5.6
  7                                                 0    0.0
  8                                                 0    0.0
Knowledge about the BDI architecture (0–8)
  0                                                 12   66.7
  1                                                 2    5.6
  2                                                 3    16.7
  3                                                 2    11.1
  4–8                                               0    0.0

Scale labels: Java experience ranges over none, minimal experience, some experience, substantial experience and extensive experience; knowledge about software agents and about the BDI architecture ranges over fundamental awareness, novice, intermediary, advanced and expert.

Table 5 Distribution of Participants.

              Before Dropouts        After Dropouts
              Tool   Code only       Tool   Code only
Beginner      7      7               5      4
Intermediary  6      5               5      3
Expert        0      1               0      1

4.2. Results and analysis

To answer our research questions, we analysed the data collected while executing the steps of our procedure. First, we compared the performance of both groups—the one using our tool and its underlying development method, and the other using source code only—in the understanding activity, which provided the metrics to answer research question RQ1. Then, we analysed the data collected from the project evolution activity to answer research question RQ2.

4.2.1. RQ1: does the use of Sam facilitate the understanding of an existing BDI-agent project?

We analysed metrics M1.1 and M1.2, collected from the understanding activity, to answer our first research question. Metric M1.1, which corresponds to the number of correctly answered questions about the code, was analysed from two perspectives: (i) considering partially correct answers; and (ii) considering completely correct answers only. The first perspective deals with the fact that each question of the questionnaire associated with this activity has a specific answer composed of different elements; therefore, if at least one of these elements is correctly mentioned in an answer, the answer can be considered partially correct. The second perspective, in turn, assumes that an error in an answer invalidates the entire answer. The underlying idea is that a single mistake in a project can lead to a software bug or to a misconception of the entire system. We therefore assigned a score to each participant considering both perspectives.

Regarding partially correct answers, we calculated this score by dividing the number of correctly provided elements by the sum of all elements in an answer—hits and misses. We considered as a miss a wrong element in the answer or the absence of a correct one. For instance, in a question whose correct answer is AirplaneConditions, WeatherConditions and AirportConditions, a participant who answered AirplaneConditions, WeatherConditions and ChanceOfTheft provided two correct elements and made two misses (one wrong element and one absent element). Thus, the score assigned to this participant for this question would be equal to 2/(2 + 2) = 0.5.
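To make the scoring rule concrete, the following self-contained Java sketch (our own illustration, not part of the study materials) computes the partial-correctness score for the example above.

import java.util.Set;

public class PartialScore {

    // score = hits / (hits + misses), where a miss is either a wrong element
    // in the given answer or an expected element that is absent from it.
    static double score(Set<String> expected, Set<String> given) {
        long hits = given.stream().filter(expected::contains).count();
        long wrong = given.size() - hits;       // elements that should not be there
        long missing = expected.size() - hits;  // expected elements left out
        return (double) hits / (hits + wrong + missing);
    }

    public static void main(String[] args) {
        // The example from the text: two hits, one wrong element and one
        // missing element, i.e. 2 / (2 + 2) = 0.5.
        System.out.println(score(
                Set.of("AirplaneConditions", "WeatherConditions", "AirportConditions"),
                Set.of("AirplaneConditions", "WeatherConditions", "ChanceOfTheft")));
    }
}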

Table 6
Scores by participant (partially correct answers).

Table 6 shows the scores for each question as well as the general score (GS) of the participants of both groups, where Tn and Cn represent participants of the group using our tool and participants of the group using source code only, respectively. The table also presents a summary with the average (M) and standard deviation (SD) of each group, and highlights correct and partially correct answers by colouring their background.

Analysing the scores of each question, we notice that developers assisted by our tool presented a better understanding of the project when confronting it for the first time. The difference between the averages in Q1 supports this statement. Although the difference between groups decreases as participants answer more questions and become familiar with the project, the general score of the group using Sam is still higher than that of the group using source code only. Moreover, the increasing scores for questions of the same type, e.g. the sets Q1, Q5, Q9 and Q2, Q6, Q10, show that participants learn the project and the tool/code they are using and handling. This behaviour, which was already expected, also explains the decreasing differences between group scores. Note that in large projects, these differences would likely decrease much more slowly, because learning a high number of classes requires significant effort from developers.

Regarding the perspective of completely correct answers, participants received a score of 1 for a correct answer, and 0 otherwise. Table 7 presents the scores for each participant as well as a summary with the average (M) and standard deviation (SD) of each group, with correct answers highlighted. From this perspective, we observe that the group assisted by our tool still presents a better performance regarding its initial comprehension of the given project, while the difference between averages becomes more apparent in some questions. We also notice the same learning behaviour presented previously; however, the sudden growth of the learning curve presented by participants using Sam is remarkable. We must highlight the set of questions Q3, Q7 and Q11, where the average score increases from 0.55 to 1.00 (it almost doubles) and remains at the maximum in the last question of this type. These results strengthen the suggestion that participants using our tool might be able to easily learn a project after an initial contact. We have reasons to believe that errors made in initial questions are possibly due to a lack of understanding of how Sam works, given that, although participants from this group had a brief overview of our tool and notation, that was the first time they were effectively handling it.

Table 7
Scores by participant (completely correct answers).

Figs. 8a and b summarise the scores of participants of each group considering both perspectives, using box diagrams, with averages presented as black diamonds. Analysing the correctness of the provided answers, we observed that although the median of the general scores of participants of the code group is higher with respect to understanding, the variance of this group is also much higher—note that the average of the code group is lower, as discussed above. We further analysed why this larger variance occurred, and we concluded that experienced developers were able to correctly answer questions, while beginners made many mistakes. With our tool, on the other hand, most of the participants achieved similarly good results. These results are an indicator that our tool, with its underlying notation, might help project understanding when developers are not experts or are unfamiliar with the technology.

Fig. 8. Collected measurements by group.

We also analysed the results collected for metric M1.2, which corresponds to the time taken to correctly answer questions from the understanding activity. Table 8 shows the time taken by each participant to answer each question, as well as their total time to finish this activity. Similarly, participants from the group assisted by our tool are represented by Tn, while participants using source code only are represented by Cn. The table presents the time to answer a question even if the answer is wrong; thus, the time taken to correctly answer a question is highlighted with a light grey background, times for partially correct answers are presented with a dark grey background, and cells containing times of wrong answers remain with a white background. We present a summary of the time taken by each group to correctly answer a question, showing the average (M) and standard deviation (SD) for each question. Averages and standard deviations for the total time were omitted, given that participants may have a distinct number of correctly answered questions, which makes the comparison of these values unreasonable.

Table 8
Times by participant in understanding activity.

Considering the results provided by this metric, we notice that, when confronting a project for the first time, participants using Sam were able to correctly understand it faster than those using source code only. This behaviour is observed when analysing the time taken to answer the first questions, with a considerable difference between groups. However, this difference disappears over time, which we consider an effect of the participants' learning process explained previously. Regarding the total time to perform the understanding activity, we observe a trend for participants assisted by our tool to complete it faster than those from the code group. This trend can be noticed in Fig. 8c, which shows an analysis of the total time (in seconds) taken by participants to answer our questionnaire. More than presenting a lower median time, the group using Sam presents a much lower variance. An analysis of this difference between groups indicates that beginners are also able to quickly understand how to use and read the information provided by Sam, which results in fast and correct responses.

We performed a Shapiro-Wilk test to check for normality in the data collected for metrics M1.1 and M1.2. Results indicate that the samples for M1.1—the total scores for (i) partially correct and (ii) completely correct answers—do not follow a normal distribution (p < 0.05). Therefore, a Mann-Whitney U test was conducted to determine whether there was a difference between the total scores of participants assisted by our tool and of participants using code only in the understanding activity. The samples for M1.2, in turn, follow a normal distribution (p > 0.05); therefore, an independent-samples t-test was conducted to compare the total time taken by participants of both groups. Table 9 presents the results of each statistical test.

Although Table 9 indicates that there is no statistically significant difference for any of the measurements, as discussed above, participants using Sam presented better results than those from the code group, particularly in the first subset of questions. Therefore, although the overall difference is not statistically significant, the observed participant evolution revealed an interesting behaviour. Measurements from the group assisted by our tool also presented a much lower variance than those from the other group in both metrics, which represents a more consistent behaviour. However, the decreasing differences in these measurements may have attenuated the results of the statistical tests. Furthermore, as our samples were small, the measurements were sensitive to outliers.
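For reproducibility, the sketch below shows how the two reported tests can be run with the Apache Commons Math 3 library (MannWhitneyUTest and TTest are part of that library; the paper does not state which software the authors actually used, and the arrays are placeholder data, not the study's measurements; Commons Math provides no Shapiro-Wilk test, so the normality check would need another tool).

import org.apache.commons.math3.stat.inference.MannWhitneyUTest;
import org.apache.commons.math3.stat.inference.TTest;

public class GroupComparison {
    public static void main(String[] args) {
        double[] toolScores = {7.5, 8.0, 6.5, 9.0, 7.0};  // placeholder data
        double[] codeScores = {6.0, 8.5, 5.5, 7.0, 6.5};  // placeholder data

        // Mann-Whitney U test for the non-normally distributed M1.1 scores.
        MannWhitneyUTest u = new MannWhitneyUTest();
        System.out.printf("U = %.2f, p = %.2f%n",
                u.mannWhitneyU(toolScores, codeScores),
                u.mannWhitneyUTest(toolScores, codeScores));

        double[] toolTimes = {1200, 1350, 1100, 1500};    // placeholder data (s)
        double[] codeTimes = {1400, 1600, 1300, 1550};    // placeholder data (s)

        // Independent-samples t-test for the normally distributed M1.2 times.
        TTest t = new TTest();
        System.out.printf("t = %.4f, p = %.2f%n",
                t.t(toolTimes, codeTimes),
                t.tTest(toolTimes, codeTimes));
    }
}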

Table 9
Statistical test results.

Metric     Statistical test              Result
M1.1(i)    Mann-Whitney U                z = -0.7569, p = 0.47
M1.1(ii)   Mann-Whitney U                z = -0.5119, p = 0.65
M1.2       Independent-samples t-test    t(12) = 0.5786, p = 0.57

In summary, evidence suggests that the use of Sam may ease the initial understanding of an existing BDI-agent project from both perspectives: correctness and time. When confronting, for the first time, a target system managed by an agent that uses a sophisticated plan selection technique, participants assisted by our tool were able to correctly identify elements and their relationships, also presenting a better performance regarding time in comparison with those using source code only.
4.2.2. RQ2: does the use of Sam improve the evolution of an existing BDI-agent project?

After performing the project evolution activity, which consists of evolving an initial project based on the sorting system presented before, we analysed the obtained results. Table 10 summarises the results of the metrics collected from this activity, presenting the average (M) and standard deviation (SD) for each of them.

Table 10
Summary of metrics from the project evolution activity.

                   Tool-assisted            Code-only
                   M          SD            M          SD
M2.1               00:37:58   00:17:52      00:42:39   00:17:14
M2.2               1.50       3.24          0.00       0.00
M2.3 (quantity)    14.20      4.73          6.00       2.72
M2.3 (type)        5.00       2.45          3.12       1.46

Analysing metric M2.1, which represents the time taken to perform this activity, we observe that participants assisted by Sam were faster than those using source code only. On average, the former spent almost 38 min to evolve the sorting system, while the latter used approximately 43 min. This difference makes explicit the benefit of evolving a model instead of code: the use of models allows developers to focus on the specific task they are performing, without being distracted by the implementation details of a given language. Therefore, once one learns how to correctly model an agent in Sam, it becomes straightforward to obtain a working project.
Metric M2.2 provides the number of compilation errors in an evolved project. In our context, a compilation error relates to a code snippet that does not conform to the grammar of a given programming language, thus resulting in a compiler error. Considering the results, participants working only with code performed better than those assisted by our tool. We must highlight that the lack of errors in the group using source code only possibly occurred because participants from this group worked directly with the code, easily perceiving this kind of error and correcting it immediately, since the IDE they used automatically compiles the code and highlights compilation errors. Moreover, the compilation errors that appeared in projects from the group assisted by Sam were not located in the code generated by our tool, but in code snippets wrongly modified by the participants (mainly domain-specific code, which is expected to be provided by developers).
We also measured the number of logical errors that emerged in the evolved projects (M2.3). A logical error relates to a code snippet that is correct from the perspective of a given programming language grammar, but presents logical faults such as missing objects or wrong relationships between elements. We analysed this metric from two perspectives: the total number of occurrences and the number of different types of logical errors in a project. Considering the total number of occurrences, results show that participants assisted by our tool made more mistakes while evolving the sorting system than those working with code only. However, drawing a conclusion from this total number of occurrences can be misleading, given that making a mistake can mask the existence of others. Consider the following scenario regarding the code evolution activity. Assume that a participant correctly relates two plan metadata elements to a plan; however, this participant does not assign to these elements any value specifying their corresponding minimum number of executions and learning gap, which are mandatory in our approach. In this case, considering the total number of occurrences, this participant made 4 logical errors (two for each plan metadata element). If a second participant performing the same activity relates only one of the correct plan metadata elements to the plan and makes the same logical errors as the first participant, her final logical error count would be 3: one for not relating a plan metadata element and two for forgetting the minimum number of executions and learning gap values. However, if this second participant had correctly added the second plan metadata element to the plan, she would probably have made the same mistakes she made in the first one, and her final error count would also be 4, the same as the first participant's.

Considering the given example, we also analysed the number of different types of logical errors in the evolved projects. From this perspective, the difference between groups decreases; however, participants working with code only still present a better performance than those assisted by Sam. This observed difference can be related to the expertise of participants from this group, some of whom had more than ten years of programming experience (including beginners). Given that development experience is helpful when dealing only with code, these participants managed to adequately evolve the implemented system. Analysing the types of errors found in the projects of participants working with code only, we observe the predominance of modelling errors, such as missing elements or wrong relationships. Not adding softgoals and preferences to an agent and not correctly adding the whole set of metadata to the agent's belief base were also observed. Moreover, there were several cases in which the classes that implement the learning-based plan selection technique were not correctly extended.

Projects from the group using Sam present different types of logical errors. Most of them are related to code snippets automatically generated by our tool that lack information that should have been provided by participants according to the development method presented in Section 3 (e.g. how many plan executions are required before building the first prediction model and how many plan executions are performed between prediction model updates). Other logical errors are related to modelling mistakes. We can observe that most errors found in this group are directly related to the participants' experience in using Sam, and may have occurred due to the limited tool usage and tutorial time. With an adequate background, the number of errors is expected to decrease over time.
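To illustrate this class of error, the following self-contained Java sketch (our own illustration; PlanMetadata is a hypothetical stand-in, not the extended BDI4JADE API) shows two metadata elements that are correctly related to a plan but whose mandatory values are left unset: the code compiles and runs, yet it carries four logical errors under the counting scheme described above. A check of this kind is also what the model validation feature mentioned below could automate.

import java.util.List;

public class MetadataValidationSketch {

    // Hypothetical metadata element: minExecutions and learningGap are
    // mandatory in the approach, but nothing in the type system enforces them.
    static class PlanMetadata {
        final String softgoal;
        Integer minExecutions;  // mandatory, but left unset below
        Integer learningGap;    // mandatory, but left unset below
        PlanMetadata(String softgoal) { this.softgoal = softgoal; }
    }

    // Counts one logical error per missing mandatory value, mirroring the
    // counting scheme of metric M2.3 described in the text.
    static int countLogicalErrors(List<PlanMetadata> metadata) {
        int errors = 0;
        for (PlanMetadata md : metadata) {
            if (md.minExecutions == null) errors++;
            if (md.learningGap == null) errors++;
        }
        return errors;
    }

    public static void main(String[] args) {
        // Both metadata elements are related to the plan, but no values are
        // assigned: the project compiles, yet carries 4 logical errors.
        List<PlanMetadata> quickSortPlanMetadata = List.of(
                new PlanMetadata("maxPerformance"),
                new PlanMetadata("minNumberOfSwaps"));
        System.out.println(countLogicalErrors(quickSortPlanMetadata));  // prints 4
    }
}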

In a nutshell, evidence suggests that the use of Sam improves the evolution of an existing BDI-agent project when we consider the time to perform this activity. This benefit, as well as those observed in the understanding activity, comes mainly from the use of graphical models to design and generate source code, which allows developers to work without focusing on implementation particularities. However, participants using our tool did not present a great performance regarding the correctness of the evolved project. This possibly occurred because the IDE used by the other group pointed out errors while participants were managing the code, and a similar feature was not available in Sam. Moreover, we must highlight that most of the errors of the group using our tool originated from participants not entirely following our proposed development method in Sam, and from a lack of knowledge of the domain-specific adaptations that should be made in the generated code. Therefore, we identified the need for a detailed tutorial that, together with a longer use of our tool, should be able to address this correctness issue. We point out that verifying model inconsistencies (similarly to what compilers do for programming languages) would also allow participants using Sam to perform better. This is an issue that we were aware of and aim to address by providing our tool with a model validation feature.
Table 11
Our modelling language according to the language dimension.

Meta-model                               Graphical notation                       Transformation
Elements   Attributes   Relationships    Nodes   Attributes   Relationships      M2T templates
13         11           14               8       11           10                 15


4.3. Additional evaluation

Besides our concern in evaluating the impact of our tool-supported model-driven technique on agent development, we provide an assessment of aspects specifically related to our modelling language, i.e. the meta-model, graphical notation and transformation. For this purpose, we exploit the framework for evaluating domain-specific modelling languages for multi-agent systems proposed by Challenger et al. (2016a) to characterise and analyse our work. This framework is based on three dimensions: language, execution and quality. Here we focus particularly on the first two, which are related to a quantitative perspective.

The language dimension describes a modelling language regarding its structural complexity, focusing on language elements, transformations and generated artefacts. Table 11 characterises our modelling language according to this dimension. It presents the number of elements, attributes and relationships concerning our meta-model and proposed graphical notation, as well as the number of templates (rules) used in our model-to-text transformation process. It is possible to notice that the number of elements present in our meta-model reduces when we consider the corresponding notation, which is due to some elements, e.g. plan bodies, not being represented in instantiated models. The same occurs with some existing relationships. The number of templates for a model-to-model transformation was not considered, given that such a transformation is not executed in our approach. Although that is not our goal in this work, the information presented in Table 11 can be used for further comparison regarding expressiveness between our language and different modelling languages. However, for such a comparison, it is important to remember that our approach considers only the internal viewpoint of BDI agents.


Within the execution perspective, we evaluate the performance of our modelling language regarding the code generation process. For this purpose, we analyse the ratio between automatically generated and manually provided artefacts obtained from the evolved sorting system used in our user study. Given the input model, automatically generated artefacts are those resulting directly from the use of our model-to-text transformation, while manually provided artefacts are those manually created by developers. In this evaluation, we consider three particular artefacts: files, lines of code (LoC) and methods. Table 12 presents our results.

Table 12
Our modelling language according to the execution dimension.

Input model                              Generated artefacts           Provided artefacts
Elements   Relationships   Attributes    Files   LoC    Methods        Files   LoC    Methods
17         36              25            10      291    7              0       116    11

It is possible to note that, from a relatively small instantiated model (17 elements only), a considerable amount of lines of code is automatically generated (291). In fact, it corresponds to 71.5% of the lines of code of the whole system (407). One may argue that developers still need to provide a considerable amount of code; however, such an amount can vary from developer to developer according to their skills and programming style: more skilled developers may be able to develop solutions with fewer lines of code than those without such abilities. The same occurs with the number of generated methods (7), which is considerably exceeded by the number of manually provided methods (11); in this case, developers who prefer to have modularised code tend to create a higher number of methods. It is important to highlight that every piece of code provided by developers is related to the implementation of plan bodies and outcomes, and the whole code corresponding to our learning-based plan selection technique is automatically generated. Finally, no additional files have to be manually provided.

5. Related work

The use of model-driven techniques has been widely addressed in the context of agent development (Kardas, 2013), ranging from the specification of single model-to-text transformations to the definition of complete MDD methods. Most of the existing work is based on the Model-Driven Architecture (OMG, 2003), which defines transformation patterns between models of different abstraction levels and from these models to code. Agüero et al. (2009), for instance, proposed a model-driven approach to develop agents based on a platform-independent meta-model (PIM) called agent-π (Agüero et al., 2009). They specified transformations from this PIM to platform-specific meta-models (PSMs) representing the ANDROMEDA and JADE-Leap agent platforms. Model-to-text transformations are applied to each PSM targeting the corresponding platform code. As the ANDROMEDA PSM is specified based on the same concepts as the PIM used, both model transformations (PIM to PSM and PSM to code) are merged into a single step. This merging is similar to the transformation we present in this paper, in which an agent with learning capabilities has its BDI4JADE code generated directly from an instance of our PIM. Ayala et al. (2014) presented a related transformation process; however, they did not focus on defining a new PIM but leveraged an existing one, focusing on a platform for ambient intelligent systems. As in our work, their approach supports the design and implementation activities of agent development. The latter is addressed by means of code generation, while the former is supported using modelling features provided by the DSML4MAS Development Environment.

The DSML4MAS Development Environment (Warwas and Hahn, 2009) is a tool that supports the modelling and code generation of agents developed using DSML4MAS (Hahn, 2008), a modelling language that allows developers to graphically model multi-agent systems. The PIM underlying this modelling language, called PIM4Agents (Hahn et al., 2009), considers several viewpoints that cover distinct aspects of multi-agent systems, from a multi-agent perspective to environmental and behavioural aspects of a single agent. Our meta-model can be directly related to the agent viewpoint, which focuses on the concept of an agent and its corresponding capabilities. The MDD approach that introduces DSML4MAS also describes model-to-text transformations targeting several agent platforms, e.g. JACK (Busetta et al., 1999) and JADE (Bellifemine et al., 2007). The PSMs to which these transformations are applied are also defined by Hahn et al. (2009). In fact, much work has been done on the definition of PSMs for different agent platforms (Cossentino et al., 2012; Tezel et al., 2016); the extended BDI4JADE meta-model provided in this paper is such an example.

Gonçalves et al. (2015) presented a UML-based modelling language that supports the engineering of multi-agent systems composed of heterogeneous agents with distinct internal structures, i.e. BDI agents, reactive agents, among others. The authors presented a PIM together with a graphical tool called MAS-ML tool, which supports not only the modelling but also the validation of the created diagrams. However, the tool is unable to automatically generate code, which the authors point out as interesting future work. The Tool for Agent-Oriented Modelling (TAOM4E), proposed by Bertolini et al. (2006), has this same limitation; however, the authors suggested an integration between TAOM4E and existing tools to deal with the process of modelling and generating code for agents based on Tropos (Bresciani et al., 2004). The work of Gascueña et al. (2012) consists of an approach to apply model-driven techniques in the context of an existing methodology for agent development called Prometheus (Padgham and Winikoff, 2004). For this purpose, they developed the Prometheus Model Editor, which allows developers to graphically model agents based on the Prometheus meta-model, and to generate code targeting the JACK platform. An existing development methodology is also exploited by Pavón et al. (2006), who proposed a reformulation of the methodology called INGENIAS (Pavón et al., 2005). The authors took advantage of existing tools provided by the INGENIAS Development Kit (IDK) to apply MDD concepts to the methodology. This development kit supports agent modelling and code generation, and uses different model-to-text transformations to address specific agent platforms. Fragments of Prometheus and INGENIAS are also used in a methodology for the development of surveillance systems proposed by Gascueña et al. (2014). The authors suggested the use of automated model-to-model and model-to-text transformations to obtain the final agent code targeting a particular framework. Such transformations are performed using tools provided by both the existing methodologies and the authors. Kardas et al. (2009) specified a conceptual architecture as well as a PIM for multi-agent systems in the context of the Semantic Web. Their work presents model-to-model transformations from such a PIM to different existing PSMs, specifically SEAGENT and NUIN. Model-to-text transformations from these PSMs were also provided, with the corresponding SEAGENT transformation being implemented in a graphical tool.

Challenger et al. (2014) re-engineered the PIM specified by Kardas et al. (2009) and proposed a development process similar to ours. Their process is based on a domain-specific language called SEA_ML, and the PIM underlying their proposal covers several viewpoints, including an agent internal viewpoint that, similarly to that presented by PIM4Agents, can be related to the meta-model we present in this paper. The authors described different PSMs and a series of model-to-model and model-to-text transformations, which are automatically performed through a provided tool. Challenger et al. (2016b) presented a textual abstract syntax for a related domain-specific language, named SEA_L, together with the corresponding tool infrastructure for modelling and generating code for semantic web enabled agents. Fortino et al. (2015), in turn, proposed a model-to-text transformation able to generate BDI-based code from statechart-based agents modelled with a graphical tool called ELDATool (Fortino and Russo, 2012).

A different perspective is presented by Fuentes-Fernández et al. (2010), who proposed a tool-supported model-driven technique for the development of processes for engineering multi-agent systems. Such a technique is based on an already existing meta-model and comprises a series of activities to be performed in order to specify an agent-oriented engineering process. Two of these activities, specifically the definition of tasks for particular areas of concern and the specification of guidelines regarding individual modelling elements, are explicitly carried out in our work: the former when we specified the procedures for agent and plan modelling, and the latter when we specified, for example, how the elements of our meta-model must be related using the proposed notation. Wautelet and Kolp (2016) provided a framework that adds strategic and tactical layers to the agent-oriented development process, which is typically concerned only with the operational aspect. Our work can be related to this operational layer, corresponding specifically to the architectural model proposed by the authors. It is interesting to point out that many elements from their strategic and tactical layers are not represented in their architectural model, but are present in our work, e.g. an element to represent softgoals.


As presented above, most of the tools that support existing approaches provide code generation features to automate model-to-text transformations. However, the result of such code generation consists mostly of code skeletons; for example, they can be similar to the code obtained by generating code from UML class diagrams, possibly with an XML file. Our approach differentiates itself from those by being able to produce BDI4JADE code in which the plan selection process is fully generated. Such generated code encompasses machine learning algorithms that provide agents with a sophisticated behaviour, and only lacks domain-specific code to be manually provided by developers, such as the implementation of plan bodies.

Although there are several approaches that address the use of model-driven techniques in the context of agents and multi-agent systems, there is not much work investigating the impact of using such technology in this context (Challenger et al., 2016a). When they do, they consider the authors' own experiences in developing and implementing agent systems (Gascueña et al., 2012; Pavón et al., 2006), without any empirical evaluation. Exceptions are the work of Murukannaiah and Singh (2014) and Challenger et al. (2016a). The former present an empirical evaluation of the impact of using their proposed agent-oriented methodology, which extends Tropos and its models, to develop context-aware personal agents. In this evaluation, they found that the use of their agent-oriented methodology reduces the time taken to develop an agent-based system and increases the comprehensibility of an agent project when compared to a generic methodology. It is interesting to note how the results presented by Murukannaiah and Singh (2014) are directly related to those obtained in our study, where we found that the use of our tool and its underlying development method might reduce the time developers take to evolve and to correctly understand a project, while improving such understanding. Challenger et al. (2016a), in turn, propose a method for evaluating domain-specific modelling language environments for multi-agent systems. This work contemplates three evaluation dimensions, which consider the proposed meta-model, notation and transformations, the performance obtained using such a modelling language, and the user's opinion. A user study conducted by the authors produced results similar to ours regarding the benefits of using a tool-supported approach.

6. Conclusion

The use of MDD techniques is being increasingly explored in agent-oriented software development. In the context of AOSE, these techniques have been extensively adopted by several works, which propose their use by extending existing methodologies or by creating new development methods to address particular domains. Most of these approaches provide tools supporting different stages of the development process. Their authors claim that the use of such tool-supported approaches could bring several benefits to developers, e.g. reducing the time spent on agent design and implementation. However, there is a lack of empirical experimentation supporting the existence of such benefits.

In this paper, we proposed a development method to design and implement agents with learning capabilities. It involves a transformation from models to platform-specific code. To support this method, we presented a tool, named Sam, that allows developers to model agents based on a proposed graphical notation. From a modelled agent, our tool can automatically generate source code for an extended version of the BDI4JADE framework, delegating to developers only the task of implementing domain-specific code that cannot be represented using our tool. This whole process of designing and implementing an agent with learning capabilities is performed in such a way that the technical details of the underlying plan selection technique, which characterises such agents, become transparent to developers.

We also empirically evaluated the impact of using our tool in agent development, comparing the performance of two groups of volunteers performing the same activities with and without the assistance of Sam. The obtained results suggest that the use of our tool-supported development method using MDD may speed up the agent development process. Moreover, providing graphical modelling features may allow developers to identify and understand the elements and relationships that compose an agent system better and faster than without visual resources. Our results not only provided evidence of the potential of our approach, but also allowed us to identify its shortcomings. Therefore, performing this user study provided valuable insights into improvements that can be made to ease the development process of software agents with learning capabilities. This work also leaves many open possibilities, which include the development and addition of new model-to-text transformations to Sam, targeting different agent development platforms or frameworks.

Acknowledgments

This work received financial support from CNPq, project ref. 442582/2014-5: "Desenvolvendo Agentes BDI Efetivamente com uma Abordagem Dirigida a Modelos." João Faccin would like to thank CNPq for research grant ref. 141840/2016-1. Ingrid Nunes would like to thank CNPq for research grant ref. 303232/2015-3, CAPES ref. 7619-15-4, and the Alexander von Humboldt Foundation, ref. BRA 1184533 HFSTCAPES-P.

References

Agüero, J., Rebollo, M., Carrascosa, C., Julián, V., 2009. Does Android dream with intelligent agents? Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 194–204. http://dx.doi.org/10.1007/978-3-540-85863-8_24.
Agüero, J., Rebollo, M., Carrascosa, C., Julián, V., 2009. Agent design using model driven development. In: Proceedings of the International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS). Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 60–69. http://dx.doi.org/10.1007/978-3-642-00487-2_7.
Ayala, I., Amor, M., Fuentes, L., 2014. A model driven engineering process of platform neutral agents for ambient intelligence devices. Auton. Agents Multi-Agent Syst. 28 (2), 214–255. http://dx.doi.org/10.1007/s10458-013-9223-3.


Basili, V.R., Caldiera, G., Rombach, H.D., 1994. The goal question metric approach. In: Encyclopedia of Software Engineering. Wiley.
Bellifemine, F.L., Caire, G., Greenwood, D., 2007. Developing Multi-Agent Systems with JADE. John Wiley & Sons, Chichester, England.
Bertolini, D., Delpero, L., Mylopoulos, J., Novikau, A., Orler, A., Penserini, L., Perini, A., Susi, A., Tomasi, B., 2006. A Tropos model-driven development environment. In: CAiSE Forum, vol. 231.
Bordini, R.H., Hübner, J.F., Wooldridge, M., 2007. Programming Multi-Agent Systems in AgentSpeak Using Jason (Wiley Series in Agent Technology). John Wiley & Sons, Chichester, England.
Bresciani, P., Perini, A., Giorgini, P., Giunchiglia, F., Mylopoulos, J., 2004. Tropos: an agent-oriented software development methodology. Auton. Agents Multi-Agent Syst. 8 (3), 203–236.
Busetta, P., Howden, N., Rönnquist, R., Hodgson, A., 2000. Structuring BDI agents in functional clusters. In: Intelligent Agents VI. Agent Theories, Architectures, and Languages (ATAL '99). Springer-Verlag, London, UK, pp. 277–289. 〈http://dl.acm.org/citation.cfm?id=648206.749614〉.
Busetta, P., Rönnquist, R., Hodgson, A., Lucas, A., 1999. JACK intelligent agents: components for intelligent agents in Java. AgentLink Newsletter 2.
Challenger, M., Demirkol, S., Getir, S., Mernik, M., Kardas, G., Kosar, T., 2014. On the use of a domain-specific modeling language in the development of multiagent systems. Eng. Appl. Artif. Intell. 28, 111–141. http://dx.doi.org/10.1016/j.engappai.2013.11.012.
Challenger, M., Kardas, G., Tekinerdogan, B., 2016a. A systematic approach to evaluating domain-specific modeling language environments for multi-agent systems. Softw. Qual. J. 24 (3), 755–795. http://dx.doi.org/10.1007/s11219-015-9291-5.
Challenger, M., Mernik, M., Kardas, G., Kosar, T., 2016b. Declarative specifications for the development of multi-agent systems. Comput. Stand. Interfaces 43 (C), 91–115. http://dx.doi.org/10.1016/j.csi.2015.08.012.
Cossentino, M., Chella, A., Lodato, C., Lopes, S., Ribino, P., Seidita, V., 2012. A notation for modeling Jason-like BDI agents. In: Proceedings of the International Conference on Complex, Intelligent, and Software Intensive Systems, pp. 12–19.
d'Inverno, M., Luck, M., 1998. Engineering AgentSpeak(L): a formal computational model. J. Log. Comput. 8 (3), 233–260.
Faccin, J., Nunes, I., 2015. BDI-agent plan selection based on prediction of plan outcomes. In: Proceedings of the International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), vol. 2, pp. 166–173. http://dx.doi.org/10.1109/WI-IAT.2015.58.
Fortino, G., Russo, W., 2012. ELDAMeth: an agent-oriented methodology for simulation-based prototyping of distributed agent systems. Inf. Softw. Technol. 54 (6), 608–624. http://dx.doi.org/10.1016/j.infsof.2011.08.006.
Fortino, G., Rango, F., Russo, W., Santoro, C., 2015. Translation of statechart agents into a BDI framework for MAS engineering. Eng. Appl. Artif. Intell. 41, 287–297. http://dx.doi.org/10.1016/j.engappai.2015.01.012.
Fuentes-Fernández, R., García-Magariño, I., Gómez-Rodríguez, A.M., González-Moreno, J.C., 2010. A technique for defining agent-oriented engineering processes with tool support. Eng. Appl. Artif. Intell. 23 (3), 432–444. http://dx.doi.org/10.1016/j.engappai.2009.08.004.
Gascueña, J.M., Navarro, E., Fernández-Caballero, A., 2012. Model-driven engineering techniques for the development of multi-agent systems. Eng. Appl. Artif. Intell. 25 (1), 159–173. http://dx.doi.org/10.1016/j.engappai.2011.08.008.
Gascueña, J.M., Navarro, E., Fernández-Caballero, A., Martínez-Tomás, R., 2014. Model-to-model and model-to-text: looking for the automation of VigilAgent. Expert Syst.: J. Knowl. Eng. 31 (3), 199–212.
Gonçalves, E.J.T., Cortés, M.I., Campos, G.A.L., Lopes, Y.S., Freire, E.S., da Silva, V.T., de Oliveira, K.S.F., de Oliveira, M.A., 2015. MAS-ML 2.0: supporting the modelling of multi-agent systems with different agent architectures. J. Syst. Softw. 108, 77–109. http://dx.doi.org/10.1016/j.jss.2015.06.008.
Hahn, C., Madrigal-Mora, C., Fischer, K., 2009. A platform-independent metamodel for multiagent systems. Auton. Agents Multi-Agent Syst. 18 (2), 239–266. http://dx.doi.org/10.1007/s10458-008-9042-0.
Hahn, C., 2008. A domain specific modeling language for multiagent systems. In: Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS '08). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, pp. 233–240. 〈http://dl.acm.org/citation.cfm?id=1402383.1402420〉.

Kardas, G., 2013. Model-driven development of multiagent systems: a survey and evaluation. Knowl. Eng. Rev. 28 (4), 479–503.
Kardas, G., Goknil, A., Dikenelli, O., Topaloglu, N.Y., 2009. Model driven development of semantic web enabled multi-agent systems. Int. J. Coop. Inf. Syst. 18 (2), 261–308. http://dx.doi.org/10.1142/S0218843009002014.
Murukannaiah, P.K., Singh, M.P., 2014. Xipho: extending Tropos to engineer context-aware personal agents. In: Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS '14). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, pp. 309–316. 〈http://dl.acm.org/citation.cfm?id=2615731.2615783〉.
Nunes, I., 2014. Improving the design and modularity of BDI agents with capability relationships. In: Engineering Multi-Agent Systems: Second International Workshop, EMAS 2014, Revised Selected Papers. Springer International Publishing, Cham, pp. 58–80. http://dx.doi.org/10.1007/978-3-319-14484-9_4.
Nunes, I., Faccin, J., 2016. Modelling and implementing modularised BDI agents with capability relationships. Int. J. Agent-Oriented Softw. Eng. 5 (2–3), 203–231.
Nunes, I., Lucena, C.J.P.D., Luck, M., 2011. BDI4JADE: a BDI layer on top of JADE. In: Proceedings of the International Workshop on Programming Multi-Agent Systems (ProMAS), pp. 88–103.
Nunes, I., Luck, M., 2014. Softgoal-based plan selection in model-driven BDI agents. In: Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS '14). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, pp. 749–756.
OMG, 2003. Model Driven Architecture (MDA) Specification. 〈http://www.omg.org/mda/specs.htm〉 (accessed 03.01.2017).
Padgham, L., Winikoff, M., 2004. Developing Intelligent Agent Systems: A Practical Guide. John Wiley & Sons, Chichester, England.
Pavón, J., Gómez-Sanz, J., Fuentes, R., 2005. The INGENIAS methodology and tools. IGI Global, Hershey, PA, USA, pp. 236–276. http://dx.doi.org/10.4018/978-1-59140-581-8.ch009.
Pavón, J., Gómez-Sanz, J., Fuentes, R., 2006. Model driven development of multi-agent systems. In: European Conference on Model Driven Architecture - Foundations and Applications. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 284–298. http://dx.doi.org/10.1007/11787044_22.
Pokahr, A., Braubach, L., Lamersdorf, W., 2005. Jadex: a BDI reasoning engine. In: Multi-Agent Programming. Springer, pp. 149–174.
Rao, A.S., Georgeff, M.P., 1995. BDI agents: from theory to practice. In: Proceedings of the International Conference on Multi-Agent Systems (ICMAS), pp. 312–319.
Sommerville, I., 2010. Software Engineering, 9th edition. Addison-Wesley, USA.
Stahl, T., Voelter, M., Czarnecki, K., 2006. Model-Driven Software Development: Technology, Engineering, Management. John Wiley & Sons, Chichester, England.
Tezel, B.T., Challenger, M., Kardas, G., 2016. A metamodel for Jason BDI agents. In: Mernik, M., Leal, J.P., Oliveira, H.G. (Eds.), Symposium on Languages, Applications and Technologies, vol. 51. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany, pp. 1–9.
Visser, S., Thangarajah, J., Harland, J., Dignum, F., 2016. Preference-based reasoning in BDI agent systems. Auton. Agents Multi-Agent Syst. 30 (2), 291–330.
Warwas, S., Hahn, C., 2009. The DSML4MAS development environment. In: Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS '09). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, pp. 1379–1380. 〈http://dl.acm.org/citation.cfm?id=1558109.1558304〉.
Wautelet, Y., Kolp, M., 2016. Business and model-driven development of BDI multi-agent systems. Neurocomputing 182, 304–321. http://dx.doi.org/10.1016/j.neucom.2015.12.022.
Wooldridge, M., 2009. An Introduction to MultiAgent Systems, 2nd edition. John Wiley & Sons, Chichester, England.
Wooldridge, M., Jennings, N.R., Kinny, D., 2000. The Gaia methodology for agent-oriented analysis and design. Auton. Agents Multi-Agent Syst. 3 (3), 285–312. http://dx.doi.org/10.1023/A:1010071910869.
