Science of Computer Programming 167 (2018) 70–90
AdSiF: Agent driven simulation framework paradigm and ontological view

Mehmet F. Hocaoğlu, Istanbul Medeniyet University, Faculty of Engineering & Natural Sciences, Göztepe/Istanbul, Turkey
Article info

Article history: Received 22 March 2017; Received in revised form 27 June 2018; Accepted 9 July 2018; Available online xxxx

Keywords: Agent-driven simulation; Agent programming; Logic programming; Ontology; State-oriented programming
Abstract

AdSiF (Agent driven Simulation Framework) provides a programming environment for modeling, simulating, and programming agents, fusing agent-based, object-oriented, aspect-oriented, and logic programming into a single paradigm. The power of this paradigm stems from its ontological background and from the paradigms it embraces and integrates into a single paradigm called state-oriented programming. AdSiF commits to describing what exists and to modeling the agents' reasoning abilities, which thereby drive model behaviors. Basically, AdSiF provides a knowledge base and a depth-first search mechanism for reasoning. It is possible to model different search mechanisms for reasoning, but depth-first search is the default search mechanism for first-order reasoning. The knowledge base consists of facts and predicates. The reasoning mechanism is combined with a dual-world representation, defined as an inner representation of the simulated environment and constructed from time-stamped sensory data (or beliefs) obtained from that environment, even when these data contain errors. This mechanism allows models to make decisions using their historical data and their own states. The study provides a novel view of simulation and agent modeling using a script-based graph programming structure that organizes state-oriented programming with a multi-paradigm approach. The study also enhances simulation modeling and agent programming using logic programming and aspect orientation. It provides a solution framework for continuous and discrete-event simulation and allows modelers to use their own simulation time management, event handling, distributed, and real-time simulation algorithms. © 2018 Elsevier B.V. All rights reserved.
1. Introduction

Programming languages have their own world views, and each defines existence (the thing) based on ontological commitments. For example, the object-oriented paradigm envisions the world as a set of entities that interact with one another and implement definite functions. Its ontological commitment is rooted in Kant's noumena: objects are claimed to exist independently of human perception and are not ontologically exhausted by their relations with humans or other objects [1]. This allows modelers to describe a tangible world as if it were real. In contrast, the world view in the logic paradigm is a syntactic and semantic structure built from propositions and predicates, where the goal is to answer the queries of a user appropriately [2]. Logic programming is accommodated in AdSiF to drive behaviors depending on reasoning results obtained from a Horn clause subset of the first-order predicate calculus. AdSiF extends the object definition at three levels. At the first level, it wraps an object's functions by states and collects states in semantically
E-mail address: [email protected]
https://doi.org/10.1016/j.scico.2018.07.004
meaningful behavior descriptions. At the second level, it enhances the relation concept. A relation may exist between two entities persistently or temporarily, and it may force a set of specific actions in both the creation and breaking phases of the relation. The third level is about consciousness. An object is enhanced with a reasoning capability, which allows it to manage its behaviors and to understand the events and states around it, and with a memory that keeps the earlier states of entities in the world as well as its own. The agent-oriented programming (AgOP) paradigm makes application and simulation models more social, flexible, and interoperable because it focuses on responses, is capable of behavior planning, and has anthropomorphic and autonomous characteristics. The common feature of agents is that they act autonomously on behalf of other agents and/or modelers. The agent metaphor was used to describe certain types of software programs at least 30 years ago in (distributed) artificial intelligence [3,4]. Agents are typical inhabitants of artificial open systems like the Internet. Open systems were characterized by Hewitt [5] in the 1980s as systems with continuous availability, extensibility, modularity, arm's-length relationships, concurrency, asynchronous work, decentralized control, and inconsistent information. This characterization reflects the fact that an agent is a continuously running program whose work can be meaningfully described as the autonomous completion of orders or goals while the program interacts with its environment [6]. Aspect-oriented programming (AOP) is a way to modularize common or similar functionality needed by different parts of a program. Programmers describe the necessary behaviors in modules called aspects and rely on a specialized AOP mechanism to weave or compose them into a coherent program [7].
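As a generic illustration of this weaving idea (not AdSiF's own mechanism; the decorator and function names below are hypothetical), a cross-cutting concern such as logging can be described once and woven around otherwise unrelated functions:

```python
import functools

def logged(fn):
    # A minimal "aspect": logging logic described once, woven around any function.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"enter {fn.__name__}")
        result = fn(*args, **kwargs)
        print(f"exit {fn.__name__}")
        return result
    return wrapper

@logged
def move(distance):
    return f"moved {distance}"

@logged
def fire(target):
    return f"fired at {target}"

move(3)        # logging is woven around the call without touching its body
fire("T1")
```

Real AOP weavers, and AdSiF's behavior categorization, operate at a coarser declarative level, but the separation of the cross-cutting concern from the core functions is the same.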
The scattered and tangled requirements and code that arise in object-oriented programming can be handled with the vertical and hierarchical design provided by AOP [6]. AdSiF provides a means of handling aspects by categorizing behaviors according to their semantics and activating and deactivating them conditionally at run time. AdSiF is a general-purpose, multi-paradigm declarative scripting language. The main strengths of the language (reusability, interoperability, expressiveness, orthogonality, and the other software metrics examined in detail in Section 6) relate to the agents and simulation models developed in AdSiF. The ontologies supporting the multi-paradigm approach comprise the object-oriented, logic programming, agent-oriented [8,9], and aspect-oriented programming paradigms. In the literature, agent languages such as PROforma [10,11] combine logic programming, object-oriented programming, and agent-oriented programming. AdSiF is distinctly different from similar languages because, in AdSiF, all of these paradigms are structurally fused together in the State-Oriented Paradigm (SOP), and the richness of the paradigms enhances the world view that is provided to modelers. The reasons for incorporating each programming paradigm into the state-oriented programming paradigm are explained in the related sections. Broadly speaking, logic programming brings in a reasoning capability with a knowledge base that keeps older facts (facts with earlier time labels) about the world, and it provides inference-based behavior management such as abandoning a behavior in progress, starting a behavior, or repeating a behavior. Aspect orientation is accommodated for two main reasons. The first is to manage scattered requirements that are satisfied by a specific set of behaviors and tangled requirements that are satisfied by a group of behaviors.
The second reason is more meaningful for an agent and simulation development environment: to enable a course of action to be changed dynamically. In other words, it makes it possible to change the active behavior sets, each of which represents an aspect of the agent, using activation and deactivation conditions attached to the behavior containers. This allows modelers to develop agents that are capable of behaving in different ways depending on the conditions they are in. The agent-based programming approach enables an architectural view to be gained in simulation modeling. The paper is organized as follows. Section 2 is related to agent-based simulation concepts. Section 3 examines the ontological views and state-oriented programming concepts. Section 4 presents how time and cosmology concepts are taken into consideration in AdSiF. In Section 5, AdSiF is evaluated according to several software engineering criteria, with pros and cons. In Section 6, simulation examples are presented to clarify the concepts. How simulation algorithms are developed as behaviors and their functions as plugins, together with a time management approach, is explained briefly in Section 7. Finally, the paper ends with a discussion section.

2. Agent-based simulation

An agent-based model represents a system as a collection of autonomous decision-making entities called agents, each of which individually evaluates its situation and makes decisions based on a set of rules [12]. Agents are accommodated in a simulation, or a simulation model is designed as a set of agents, to provide a strong information exchange capability and robust decision-making. Interoperability standards between agents and specifications for agent-based systems are promoted [13].
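The run-time switching of behavior sets described in the introduction can be sketched as follows. This is a simplified illustration under assumed semantics; the class, attribute, and condition names are hypothetical, not AdSiF's actual scripting syntax:

```python
class BehaviorList:
    """A container of behaviors representing one aspect of an agent,
    switched on/off at run time by activation/deactivation conditions."""
    def __init__(self, name, behaviors, activate_if, deactivate_if):
        self.name = name
        self.behaviors = behaviors
        self.activate_if = activate_if      # predicate over the agent state
        self.deactivate_if = deactivate_if
        self.active = False

    def update(self, agent_state):
        # Evaluate the attached conditions against the current agent state.
        if not self.active and self.activate_if(agent_state):
            self.active = True
        elif self.active and self.deactivate_if(agent_state):
            self.active = False

# Two aspects of the same agent: normal patrol vs. emergency evasion.
patrol = BehaviorList("patrol", ["scan", "move"],
                      activate_if=lambda s: not s["threat"],
                      deactivate_if=lambda s: s["threat"])
evade = BehaviorList("evade", ["turn_away", "accelerate"],
                     activate_if=lambda s: s["threat"],
                     deactivate_if=lambda s: not s["threat"])

state = {"threat": False}
for bl in (patrol, evade):
    bl.update(state)            # only the patrol aspect is active

state["threat"] = True
for bl in (patrol, evade):
    bl.update(state)            # the evasion aspect takes over
active = [bl.name for bl in (patrol, evade) if bl.active]  # ['evade']
```

The design point is that the behaviors themselves stay untouched; only the containers' conditions decide which aspect currently drives the agent.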
Agents have abilities that affect their decision-making, such as personality, emotions, cultural background, irrationality, and dysrationalia; abilities that make them more intelligent, such as anticipation (proactiveness), understanding (avoiding misunderstanding), learning, and communication in natural and body language; and abilities that make them trustworthy, such as being rational, responsible, and accountable [14]. Together, these open a new area in simulation with agents. In this respect, the main reason to accommodate an agent in a simulation environment, or to model simulation entities as agents, is the fact that the core knowledge-processing abilities of agents include goal processing, goal-directed knowledge processing, reasoning, motivation, planning, and decision-making. The synergy of simulation and software agents is the essence of agent-directed simulation (ADS), and it has been shown to have important practical implications. ADS, as clarified in the following, consists of the contributions of simulation to agents (i.e., agent simulation) and the contributions of agents to simulation (i.e., agent-supported simulation and agent-based simulation) [15]. The goal-directed characteristics
of agents are completely suitable for gaming, which is known as one of the meanings of simulation: being goal-directed [16]. Training simulation games are used to enhance the decision-making and/or communication skills of players in complex environments that can be competitive, cooperative, or both (cooperative competition). Competitive simulation games are zero-sum games and include business games and war games, which are also called virtual simulation [17]. Agent-based models and simulation have recently been applied to solve optimization problems whose domains present several inter-related components in a distributed and heterogeneous environment [18]. Border patrol control optimization is one agent-based optimization implementation [19]. In this problem, an agent manages the simulation execution and dictates to the simulation components, each of which controls and carries a set of sensors, how they have to behave so that the coverage area is maximized while the traveling distance is minimized. The dictated behaviors consist of route plans, sensor selection, deployment strategies, and anything else affecting the optimization result. In production planning applications, agent implementations are used to support distributed production paradigms for supply chain management, outsourcing, and distributed planning and control [20]. In the solution developed, customers, contractors, suppliers, sub-suppliers, and carriers are modeled as agents for supply chain management simulations. Agent technologies are also used to control simulation experiments as intelligent entities [21–23]. Mobility is an important characteristic of agents, and some simulation environments provide a computational framework to develop mobile agents (Mobile Discrete Event System Specification (MDEVS)) and accommodate them in a simulation environment (AgentSim).
The MDEVS formalism supports structural changes of systems, including the creation, addition, deletion, and migration of models and dynamic changes of the couplings between models. AgentSim is a software environment for the simulation and execution of MDEVS models; it is implemented as a library built on Aglets [24]. The Reactive Action Packages (RAPs) agent programming approach and the DEVS formalism have been brought together to develop an agent-based simulation environment. In that study, the conceptualization of agent-based systems is shown in terms of how agents and simulation models may interact with one another (i.e., communicate and cooperate) [25]. X-machine is a formalism developed to design agents [26,27]. It is widely used to specify multi-agent systems. It handles systems defined in terms of states at different abstraction levels, and there are studies in the literature that use the X-machine formalism to manage complex events in multi-level modeling [28]. A strong relation between ontology-based modeling and agent development arises from the necessity of having an inner representation of the environment in which the agent is situated, and of having facts about objects and the relations between objects. For these reasons, domain-specific ontologies have been developed using Protégé [29]. AdSiF also provides ontology-based modeling support [30,31], which is explained in the next section. A limited ontological background is offered by UFO (Unified Foundational Ontology). Basically, UFO makes a fundamental distinction between individuals, which are things that exist in time and space in “the real world” and have a unique identity, and universals, which are feature-based classifiers that classify, at any moment in time, a set of individuals with common features. Individuals in the ontology are classified as substance individuals, trope individuals, and events [32,33].
Another generic simulation framework, known as iQ, is based on a queue-based modeling technique that targets design space exploration and optimization studies at the core component level; it was developed by Roca [34]. AdSiF has a more comprehensive ontology than UFO and iQ because of its reasoning mechanism, and it is more generic regarding the relation concept between entities and the definition of events and behaviors. As a parallel programming language and a C-based simulation language, Maisie aims to execute a discrete-event simulation model using several different asynchronous parallel simulation protocols on a variety of parallel architectures [35]. Maisie and AdSiF share the goal of separating the description of a simulation model from the underlying simulation protocol, whether sequential or parallel, used to execute it. In this sense, AdSiF allows modelers to model any simulation execution protocol using behavior descriptions. Song et al. [36] propose a text-based simulation language that describes simulation experimental data and a model description in a text-based representation that is interpreted by a generic simulation engine. Many works focus on designing generic simulation frameworks, but most of these frameworks have been developed for a specific area. Pisla [37], Wang [38], Adelantado [39], and Çelik [40] developed modular simulation frameworks for control systems with a layered architecture, manufacturing processes, air defense simulations, and simulation for mobile ad hoc networks based on DEVS, respectively. It is proposed in [41] that a layered architecture is preferable, especially in an agent-based simulation environment. As a virtual simulation environment, a hardware-in-the-loop simulation for interactive simulations has been developed by Kim [42].
The ontological commitment of AdSiF and state orientation provides a broad entity description. Flexibility in time management is provided by state-oriented programming enhanced with aspect-oriented programming, so AdSiF allows modelers to develop their own time management strategies by programming time management algorithms and event handling strategies (tie-breaking rules, etc.) in state programming scripts [43]. AdSiF provides a declarative means to specify a mathematical object called an ‘AdSiF system’, which we simply refer to as a ‘system’ in the remainder of this paper. In system design, a system basically has a time base, inputs (events), states, behaviors, a reasoning mechanism, and a mechanism that manages the dynamic characteristics of the system. In AdSiF, the dynamic characteristics of a system are represented by the behavior descriptions (in the specialized state charts). As a framework, AdSiF promotes a set of design rules and presumes a design skeleton, which is based on its programming paradigm, called the SOP paradigm, and on its ontological view. AdSiF provides an ontological view [30], which is defined in terms of the paradigms on which AdSiF is based.
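As a minimal sketch of this system notion (time base, inputs, states, behaviors, a reasoning hook), the following assumes discrete-event semantics; the field and method names are illustrative, not AdSiF's declaration format:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:
    name: str
    timestamp: float

@dataclass
class System:
    """Minimal sketch of an 'AdSiF system': a time base, inputs (events),
    states, behaviors, and a pluggable reasoning mechanism."""
    time: float = 0.0
    event_queue: List[Event] = field(default_factory=list)
    states: List[str] = field(default_factory=list)
    behaviors: List[str] = field(default_factory=list)
    reason: Callable[["System"], None] = lambda self: None

    def post(self, ev: Event):
        # Inputs arrive as time-stamped events, kept in timestamp order.
        self.event_queue.append(ev)
        self.event_queue.sort(key=lambda e: e.timestamp)

    def advance(self):
        # Advance the time base to the next event (discrete-event style).
        if self.event_queue:
            ev = self.event_queue.pop(0)
            self.time = ev.timestamp
            return ev
        return None

sys_ = System()
sys_.post(Event("hit", 5.0))
sys_.post(Event("detect", 2.0))
ev = sys_.advance()   # processes "detect" first; the time base moves to 2.0
```

Because AdSiF lets modelers script their own time management as behaviors, the `advance` policy above (earliest timestamp first) should be read as one replaceable strategy, not the framework's fixed rule.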
Entities live in a certain environment [44], have their own properties that distinguish them from each other, and have atomic actions that manage those properties. In this study, a simulation environment is defined as dynamic, episodic, observable if the ground-truth information is used directly, partially observable if agents pick up information via their sensors, multi-agent, and non-deterministic [3,44]. The atomic actions are wrapped by states that construct behaviors. A behavior is defined as a collection of semantically ordered states that generates a reasonable result, and an entity may execute many behaviors in parallel. The behaviors of an entity are grouped into behavior collections, each of which represents a behavioral aspect. Entities interact with one another using event transactions, which cause behavior sets to be executed. An entity has beliefs about the environment and about the other entities that share the same environment. These beliefs constitute a dual-world vision of facts, representing the whole environment as an inner representation. An entity may have a set of goals to achieve and a reasoning mechanism with a set of decision-making algorithms. An entity may have a specific relation with another entity; such a relation activates a defined set of behaviors in both the relation creation phase and the relation breaking phase. In AdSiF, an entity is considered an agent. In other words, agents are not merely accommodated in the simulation; instead, each entity is designed as an agent, whether or not it has cognitive ability. Designing an entity as an agent is useful for distributing the overall tasks to autonomous entities and organizing a framework of cooperation and interaction in a so-called multi-agent system. To act autonomously, entities must be able to perceive the situation in their environment.
They must also be able to choose the appropriate actions (from behavior lists) according to that situation (depending on the entry state and the received event, activation condition, or temporal relation) and according to their capabilities and skills (conditionally activated behavior lists). This means that they need some knowledge about the world and about themselves [45]. The simulation world is an environment for agents, and this environment is observable, reliable with respect to received information and actions, and dynamic. It is observable because an agent can collect any information it needs, whether that information is fully certain or only reachable with some probability. It is reliable because the agent believes its knowledge base, trusts any event it receives, and expects its behaviors to produce the planned results. It is dynamic because the entities in the environment change their states and attributes over the time horizon. A “horizontal” modularization, given by the so-called sense-think-act cycle, is promoted. An agent is assumed to receive and process incoming information (receiving events sent by other entities, reading state information that other entities share), to update its internal beliefs about the environment (updating and extending the knowledge base), to perform internal processing for the choice and preparation of the next behavior(s), and to perform the related output actions according to the decisions made in the think phase (sending events and/or changing state vectors) [45]. Each action takes time, determined by the duration of the state that fulfills the action, and the state is wrapped by a behavior. The details are given in the next section. From the ontological perspective, AdSiF embraces the object-oriented, logic, and aspect-oriented programming paradigms and combines them in the state-oriented programming paradigm.
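The sense-think-act cycle over a time-stamped fact base, as described above, can be sketched as follows; the method names and the query style are illustrative assumptions, not AdSiF's actual API:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    """A time-stamped belief about the environment."""
    predicate: str
    args: tuple
    time: float

class Agent:
    """Minimal sense-think-act sketch with a time-stamped fact base."""
    def __init__(self):
        self.kb = []          # the agent's inner (dual-world) representation
        self.now = 0.0

    def sense(self, percepts):
        # Update beliefs from (possibly erroneous) sensory data.
        for pred, args in percepts:
            self.kb.append(Fact(pred, args, self.now))

    def query(self, pred):
        # Scan facts newest first; historical facts are kept, so earlier
        # states of the world remain available for reasoning.
        return [f for f in sorted(self.kb, key=lambda f: -f.time)
                if f.predicate == pred]

    def think(self):
        # Choose the next behavior from the current beliefs.
        return "evade" if self.query("threat") else "patrol"

    def act(self, behavior):
        return f"executing {behavior}"

a = Agent()
a.sense([("threat", ("T1",))])
print(a.act(a.think()))   # executing evade
```

A fresh agent with no threat facts would instead select the patrol behavior; the point of the sketch is that the decision is driven entirely by the fact base built in the sense phase.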
The first step in fusing different paradigms into one ontological view is achieved by interpreting and re-wording the concepts of the paradigms, such as inheritance in OOP for SOP, and by embracing each paradigm's viewpoint. Nowadays, mobile simulation languages, such as Kiltera [46], are being developed because of the widespread use of mobile devices. Kiltera has a formal semantics based on a real-time extension of the π-calculus.

3. State-oriented programming

State charts were originally introduced in 1987 by D. Harel [47] as a visual formalism to model complex reactive systems and were adopted by the OMG in 1997 as part of the UML specification; state-oriented programming is based on state charts [48]. AdSiF enhances state-oriented programming using multiple programming paradigms and defines it as a programming language with a hybrid paradigm. As a language, AdSiF provides programming by states instead of programming the states themselves, as is done with state charts (e.g., in generic state chart libraries). It interprets the extended state charts and does not require coding of the chart itself. State-oriented programming handles the state transition process, which is declared in the form of AdSiF's state charts. In each state, the simulation model takes a certain amount of time to pass through the entire state (or at least, the model attempts to pass through the state) in an orderly fashion, and simulation models are capable of executing many behaviors in parallel at any time. The execution of a state-oriented program has a timeline due to the state durations. The fundamental elements of AdSiF's state orientation are attributes, classes, states, behaviors, behavior lists, events, the inheritance mechanism, temporal relations (constituted between states and behaviors and between behaviors), relations, the declaration of subscription sets, trace agents, and the declaration of predicates and facts.
This paper proceeds by providing further details on the SOP elements and their operational logic: Section 3.1 covers states and behaviors, Section 3.2 covers events, Section 3.3 presents the classes the framework provides, Section 3.4 explains the inheritance mechanism, Section 3.5 introduces the relation concept and its place in state-oriented programming, Section 3.6 defines how the reasoning mechanism is used in the framework and how the logic programming paradigm is combined with state-oriented programming, Section 3.7 explains what agent programming is and how it is achieved in the framework, and finally, Section 3.8 covers how and why aspect-oriented programming is accommodated in the state-oriented programming paradigm.
3.1. States and behaviors

A behavior that consists of a series of states is represented as a Hierarchical Finite State Automaton. While a behavior can consist of no states, a state cannot stand alone; it must be in a behavior. Both states and behaviors are fundamental components of a model's behavioral semantics. In this subsection, these two fundamental components are examined in detail.

3.1.1. States

States are the most atomic elements of State-Oriented Programming. As a building block of behaviors, a state is defined as a definite mode in which a simulation model is, and it represents a posture of an entity. The definite atomic condition that a state represents has a real-world counterpart with a duration and a definite description, e.g., sleeping, shaking hands, stepping, firing, or even doing nothing. The state description is similar to, but more formalized and extended than, the DEVS [49] state description. A state is responsible for a series of actions that depend on the phase it passes through during its life cycle. Each action invoked by a state potentially changes the inner description of the simulation model. A state is defined by three groups of descriptors: stative descriptors, operational descriptors, and descriptors of temporal relations. Operational descriptors describe the operations performed by an entity in the related state. Each operation is modeled as a method belonging to the entity and is either coded in or interfaced from the entity class derived from the base AdSiF object class, or similarly provided as a plug-in function. Stative descriptors show the modes of a state that are passed through during its life cycle.
In the state chart representation, a state is defined by: a name, a textual string that distinguishes it from all other states of the object; entry/exit actions, executed upon entering/exiting a (simple, substate, or composite) state; substates, the nested structure of a state, which may involve disjoint or concurrent substates; internal transitions, transitions that do not cause a state change; and deferred events, a list of events that are postponed/queued for handling in another state. In the AdSiF representation, in addition to entry/exit actions, a state has an action for the external transition, and instead of substates, a state has temporally related behaviors [50]. The operational descriptors are as follows: a) Guard constraint: a constraint defined as bind-info that returns a Boolean value and controls whether it is possible to enter the state. A bind-info is defined by a model's atomic functions, the output parameters of a predicate, the parameters of a trigger event that activates behaviors, behavior-scoped parameters, and a plug-in function return value. b) State actions (method references): references to the entity methods that the modeler wants the entity to perform in the state. A state has three method references, executed in the related phases. The first is the action-method, invoked during the state entry phase. The second is the exit-phase method, invoked when the model leaves the state by an internal transition. The third is the external-transition method, invoked during the state-exiting phase of an external transition. An internal transition is the transition from one state to the state located next to it, after the current state's duration is consumed. An external state transition occurs when an active state is the entry state of a behavior triggered by a received event, and the behavior that contains the state is canceled.
Consequently, the behavior does not attempt to pass to the succeeding state, and the state is forced to exit without consuming the remaining duration. A state has events attached to its exit phase and its external transition phase; these are sent during the internal transition and external transition phases, respectively. c) Duration computer (DC): the DC computes the duration of a state. Duration computation is a value calculation using bind-info. Usually, theoretical and experimental distributions are used as DCs in simulations. The computational result is the time that the model is expected to spend in the related state; in other words, it is the duration the model needs to achieve the related atomic actions, which are declared as methods to be invoked by the state, in real life. Although this duration is expected, it is sometimes not realized because of an external transition, i.e., an interaction interrupting the state. d) Trace agents: a trace agent declares what is to be traced during simulation executions. The value to be traced is declared as bind-info and is used as part of the state declaration. In the state exit phase, the return values of the trace agents are traced. The trace agent solution supports a low-coupling, high-cohesion design: there is low coupling with entities and no coupling with the simulation kernel or the language interpreter. Another advantage of trace agents is related to trace timing: the traces are obtained as soon as the related state is completed, to prevent recording more than necessary or less than the required amount. e) Facts: a fact is a piece of knowledge known to be true by an agent about other entities; it is stored in the agent knowledge base and populated by agent sensory data. A fact declared in a state declaration with valued parameters is stored at every state entry phase during execution of the behavior it is in. Synchronizers are a special type of state.
Functionally, they are placed in a behavior as states to prevent the behavior from passing to the following state until the other synchronizers with the same name are reached. As soon as all the active behaviors that have the same synchronizer state reach their synchronizers, all of them proceed to their next states. From the viewpoint of its identifiers, a synchronizer has a duration computer with infinite time and no method to invoke.

Fig. 1. Declaration of a temporal relation between behavior and state and between states.

3.1.2. Behaviors

A behavior represents a valid sentence, and its real-world counterpart is a complete behavior of the model to which it belongs, such as walking, eating, or fighting. A behavior is constituted by a series of states, temporal relations, drive conditions, trigger event/entry state couples, and attached events. A behavior may also consist of zero states; such a behavior may be designed to manage other behaviors via temporal relations. The behaviors that the simulation model/agent has are categorized under behavior lists (FSAList). Each behavior list consists of a set of behaviors grouped by their modeling aspects [51] and has activation and cancel conditions. At the beginning of the simulation, at least one behavior list is set as the initial one. Simulation models/agents pick the behaviors to activate from the active behavior lists. The operational descriptors of a behavior are defined as guard constraints, events, phase functions, and drive conditions. Guards work similarly to those of states, serving as a gate method that controls whether or not to activate a behavior. It is possible to attach events to a behavior and send these events to their destination entities when the behavior finishes; the events are not sent if the behavior is canceled. A behavior may be activated in three ways. Event-based activation: a trigger event and an entry state work as an activation rule. During execution, when a model receives an event, the behaviors whose trigger event matches the received event and whose entry state matches any active state are activated. An active state is a state being processed in the agenda; for behavior activation, it does not matter which behavior it belongs to. Conditional activation (activation drive condition): an activation drive condition is set on a behavior to activate it subject to a constraint. As soon as the condition is satisfied, the behavior is activated. Similarly, a behavior being processed can be canceled by a cancel drive condition, suspended by a suspension drive condition, and resumed by a resume drive condition (if it was suspended earlier). Only the activation condition works on the active behavior list(s); the other drive conditions work on currently processed active behaviors. Like all agents, simulation models behave in an environment and change the environment in which they live.
Changing the environment means changing the facts kept by the entities about the environment. Drive conditions bound to these facts, or indirectly constrained by the environmental truth, cause the model to behave in a determined way. Changing the environment affects the entities that live in it, and this effect appears as an indirect interaction between entities. Temporally associated activation: in this type of activation, a behavior is activated by another behavior via a temporal relation (Fig. 1); we call the activated behavior a temporally associated behavior. A temporal relation is created based on the stative descriptors of behaviors. The stative descriptors represent the phases a behavior passes through during its lifetime: Canceled, Finished, Active, Suspended, and Resumed. A behavior that reaches the goal expected from it passes through the Finished phase. The execution of a behavior should produce a complete behavior such as sitting, looking around, or walking, and it should move the entity from one definite condition to another. While a finished behavior sends the events attached to it, a canceled behavior never sends them. A suspended behavior holds itself in its active state for an indefinite duration until another behavior reactivates it or its resume drive condition does so. Reactivation is strictly related to suspension: it activates a behavior suspended earlier. A temporal relation has two sides: a causative side and a non-causative side. It is triggered by the phase of the causative-side behavior and manages the non-causative-side behavior; a behavior changing phase triggers the temporal relations connected to the phase it has passed. In Fig. 1, Rel. A represents a temporal relation between two behaviors.
The action and phase parameters take values from the sets {Activate, Cancel, Suspend, Reactivate} and {Activated, Canceled, Suspended, Reactivated, Finished}, respectively. The temporal relation is implemented after some time (Duration), which is computed by a duration computer, if Guard is satisfied. Rel. B represents a temporal relation between a state and a behavior. In this case, the phase parameter takes a value from the set of state phases, {In, Out}. For each phase, it is possible to define a set of actions; at the time of the phase transition, the related action set is executed.
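The phase-to-action coupling described above can be sketched as follows. This is an illustrative Python sketch, not AdSiF's actual API: the class names `Behavior` and `TemporalRelation`, and the mapping from actions to resulting phases, are assumptions based on the description in the text.

```python
# Illustrative sketch (not AdSiF's actual API): a temporal relation links a
# phase of the causative-side behavior to an action on the non-causative side.
class Behavior:
    def __init__(self, name):
        self.name = name
        self.phase = None          # e.g. "Activated", "Finished", ...
        self.relations = []        # outgoing temporal relations

    def set_phase(self, phase):
        # A behavior changing phase triggers the temporal relations
        # connected to the phase it has passed.
        self.phase = phase
        for rel in self.relations:
            rel.fire(phase)

class TemporalRelation:
    """Fires `action` on `target` when the causative side reaches `phase`."""
    def __init__(self, phase, action, target, guard=lambda: True):
        self.phase, self.action = phase, action
        self.target, self.guard = target, guard

    def fire(self, phase):
        if phase == self.phase and self.guard():
            self.target.phase = {"Activate": "Activated",
                                 "Cancel": "Canceled",
                                 "Suspend": "Suspended",
                                 "Reactivate": "Reactivated"}[self.action]

walk = Behavior("walk")
rest = Behavior("rest")
# When `walk` finishes, activate `rest` (guard always satisfied here).
walk.relations.append(TemporalRelation("Finished", "Activate", rest))
walk.set_phase("Finished")
print(rest.phase)  # -> Activated
```

A Duration delay and a duration computer are omitted here for brevity; in the sketch, the relation fires immediately when the guard holds.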
76
M.F. Hocaoğlu / Science of Computer Programming 167 (2018) 70–90
3.2. Events

Events are a means of communication between simulation entities/agents. Events are created as a result of state transitions (both internal and external) by a sender model, and they can be attached to both a behavior and a state. Sending an event to a set of models makes them execute a set of behaviors. Thus, from an agent-programming viewpoint, an event is a request from the sender model to the receiver model to make it behave in a specific way. The parameters necessary to achieve the desired behavior may be provided by the event parameters and/or the model attribute set. The event parameters are used by the activated behavior(s), which pass their values as parameters to the methods invoked by the states belonging to those behavior(s). An event consists of a unique name and a parameter set; each parameter has a name, a type, and an initial value. Event parameter types are selected from a type set including basic types (such as bool, int32, int64, double, float) and complex types. While the declaration of an event belongs to design time, the value setting belongs to run time and is performed when the event is sent, in the exit phase of the state or the finish phase of the behavior to which it is attached. The values of the event parameters are set as bind-info on the sender entity side. As an AdSiF standard, function return values, attribute values, predicate-out parameters, behavior-scoped parameters, plug-in function return values, and trigger-event parameters (the event that triggers the behavior) can be used as bind-info to set event parameter values. After value binding, the sending guard is checked, the event destinations are determined, and the event is sent. An event destination is defined according to
• Model type: to selected model types.
• Model hierarchy: to a type of models and the models derived from them.
• Model ID: to a specific model instance.
• Related models: to models that have declared relation(s) with the sender model.
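The four destination kinds above can be sketched as a single routing function. This is an illustrative sketch only; the class `Model`, its fields, and the keyword parameters of `send` are assumptions, not AdSiF's actual API.

```python
# Sketch of event routing by destination kind (type, hierarchy, ID, relation);
# all names here are illustrative, not AdSiF's actual API.
class Model:
    def __init__(self, mid, mtype, bases=(), relations=()):
        self.mid, self.mtype = mid, mtype
        self.types = {mtype, *bases}        # own type plus base types
        self.relations = set(relations)     # relation names this model holds
        self.inbox = []

def send(models, event, *, by_type=None, by_hierarchy=None,
         by_id=None, by_relation=None):
    for m in models:
        if ((by_type and m.mtype == by_type) or
            (by_hierarchy and by_hierarchy in m.types) or   # type or derived
            (by_id is not None and m.mid == by_id) or
            (by_relation and by_relation in m.relations)):
            m.inbox.append(event)

tank = Model(1, "Tank", bases=("Vehicle",))
jeep = Model(2, "Jeep", bases=("Vehicle",), relations={"EscortedBy"})
send([tank, jeep], "halt", by_hierarchy="Vehicle")      # both derive Vehicle
send([tank, jeep], "refuel", by_relation="EscortedBy")  # only related models
print(tank.inbox, jeep.inbox)  # -> ['halt'] ['halt', 'refuel']
```

Routing by relation, as the next paragraph explains, is what lets a sender address receivers without knowing their concrete types.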
Sending an event to related models is defined as polymorphism on relations, which is explained under agent-based programming. The idea is to be able to send an event to an entity through a relation constituted with it, without considering what that entity is. Because an entity can constitute the same relation with many other models, sending events over relations allows modelers to avoid dependencies in model design: entities send events to others without directly knowing each other. From the simulation model viewpoint, a model says, "I am sending event e to the simulation models with which I have a relation named Rel". The life cycle of an event starts with an internal or external state transition phase or with the finish phase of a behavior, depending on whether it is attached to a state or a behavior. This is the event creation time, and it also covers parameter binding. A received event triggers a set of behaviors. The triggered behaviors, and the behaviors they activate via temporal relations, keep a reference to the trigger event, which is reachable from the atomic functions invoked by the related behavior states. The life cycle of an event ends when all of the behaviors that reference it are canceled or finished.

3.3. Simulation model/agent classes

A simulation or agent model in AdSiF has two types of classes: the model entity class and the model builder class. The model entity class is executed during the simulation; it consists of atomic functions invoked as state methods (reached by states), class-scoped attributes, and a method factory section that exports the methods for the states. The model builder class is related to scenario design time; it is used to define attributes, to customize custom data fields, and to manage extension libraries. Thus, a model builder class prepares an entity for simulation execution. As atomic methods, the functions in a model entity class define the model resolution.
Atomic functions never keep any semantics between each other; all of the semantic information is kept in the behavior design. The behaviors constituted by the states that surround the atomic functions are at a higher design level than the atomic functions themselves. Thus, the classes keep notably primitive actions that, in principle, do not invoke each other. The functions have simple parameter sets: in general, they have no parameters and void or Boolean return values. They use attributes or the parameters of the event that triggered the behavior as inputs. From the programming viewpoint, programming in AdSiF is very similar to developing a software agent [8]. Atomic methods are considered a bridge between state-oriented programming and object-oriented programming because the states invoke atomic methods via the method-object factory. Attributes declared in a model script have a customizable declaration template, which allows modelers to extend an attribute declaration by adding new properties. From this point of view, an attribute declared in an AdSiF model script may have extended properties such as a maximum value, a minimum value, and visualization characteristics, depending on the design decisions. Moreover, an attribute has a name, a type, a default value, accessibility (public, protected, private), and published information, which determines whether its value is published to subscriber models at run time for both local and HLA-distributed execution. This template-based generic definition and the provided services ensure that attributes from plug-in extensions can be used without changing or recompiling the model source code, and that new attributes can be added both at design time and at run time without compiling the source code. This property shows AdSiF's flexibility, extensibility, and orthogonality.
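The template-based attribute declaration can be sketched as below. This is an illustrative sketch under the assumptions stated in the comments; the class name `Attribute` and its extended-property names (`minimum`, `maximum`) are hypothetical, chosen to mirror the properties listed in the text.

```python
# Sketch of a template-based attribute declaration with extensible properties
# (min/max value, published flag, accessibility); names are assumptions,
# not AdSiF's actual API.
class Attribute:
    def __init__(self, name, atype, default, accessibility="public",
                 published=False, **extended):
        self.name, self.atype = name, atype
        self.value = default
        self.accessibility = accessibility
        self.published = published          # published to subscriber models
        self.extended = extended            # e.g. minimum, maximum, visualization

    def set(self, value):
        # Clamp against extended min/max properties if they were declared.
        lo = self.extended.get("minimum")
        hi = self.extended.get("maximum")
        if lo is not None:
            value = max(lo, value)
        if hi is not None:
            value = min(hi, value)
        self.value = value

speed = Attribute("speed", "double", 0.0, published=True,
                  minimum=0.0, maximum=120.0)
speed.set(150.0)                 # clamped by the extended maximum property
print(speed.value)  # -> 120.0
```

Because the extended properties live in a generic mapping rather than in fixed fields, new properties can be added without changing the class, which mirrors the extensibility claim made above.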
Fig. 2. Declaration of behaviors driven by relation phases.
3.4. Inheritance mechanism

The concept of inheritance in OOP is interpreted for SOP to keep a common understanding. Inheritance is constituted between models and, in the model scope, between states, behaviors, and behavior lists. In the library scope, inheritance is constituted between events. As a common rule, there are two sides in an inheritance: the base element and the derived element. The derived element inherits the public and protected properties of the base element. The properties inherited
• between models: attributes, state declarations, method declarations, behaviors, logical expressions, trace agent declarations, and the attribute subscription set, but not the behavior lists;
• between states: temporal relations, methods, guard constraints, attached events, and trace agents;
• between behaviors: temporal relations, trigger events, the entry state, drive conditions, the guard condition, and attached events. In other words, all definitions that belong to a behavior except its states are inherited from the base behavior;
• between behavior lists: the protected behaviors contained by the base behavior list;
• between events: parameter sets with their default values.

The inheritance mechanism provides a hierarchical description and supports decoupling.

3.5. Relations as a declaration issue

As a part of the ontological commitment and of being declarative, entities may have relations with one another of two types: design time relations and behavioral (run time) relations. Design time relations are the inheritance mechanism, the aggregation relation, and the composition relation. Aggregation and composition are variants of the association relationship, "has a" and "owns a", respectively, and aggregation is more specific than association. The composition relationship is stronger than aggregation, and the relationship is considered non-separable. The most important distinction from the corresponding OOP concepts lies in the management of aggregated and composed models: the model that has or owns undertakes the time and event management of all its components. For both composition and aggregation, encapsulation is satisfied at the class level (OOP) and the model level (SOP). From an OOP viewpoint, encapsulation is satisfied at the class level by restricting access to some of the object's components and by bundling the data with the methods operating on that data. To ensure model-level encapsulation, direct invocation of model methods is not suggested. Instead, it is preferable to convey events to the sub-models (aggregated and composite models), following an event-driven architecture, and to let them execute their own behavior structures. Invoking a model class method directly carries the semantics of the sub-model into the owner model. Each model taking part in a relation may operate a set of actions and activate a set of behavior lists associated with the relation establishment and breaking phases. As seen in Fig. 2, when Relation A is set up, Model A activates the behavior list named A and executes action Action AA, while Model B activates the behavior lists name1 and name2 and executes actions Action1 and Action2. Similarly, when the relation is broken, the actions and behavior lists given in the onRelationPassive declarations in Fig. 2 are activated.

3.6. Reasoning and logical programming perspective

Logic programming is simply defined as the use of mathematical logic in programming; a logic program is a collection of Horn clause statements, and the logic programming part concerns agent deliberation. A Horn clause is a clause with at most one positive literal, which forms the then part (head) of the clause; the negative literals form its body. A Horn clause that contains only negative literals is sometimes called a goal clause or a query clause, especially in logic programming. The rules of inference are applied to determine whether the axioms are sufficient to ensure the truth of the goal statement. The execution of a logic program corresponds to the construction of a proof of the goal statement from the
axioms. In object-oriented logic programming, an object is simply defined as follows: "an object is what we know to be true of it" [52]. The idea is to bind what is known to be true for the model and the predicate parameters to behaviors. In AdSiF's logic programming perspective, the solution approach is to manage the behaviors using a knowledge base that keeps a fact dual equal world. As stated in the ontological commitment, the fact dual equal world is an inner representation of the simulation world, defined by facts about the entities in that world and predicates that allow the models to reason. The facts are created from attributes (both the model's own attributes and attributes published by other entities), model state transitions, shared events, behavior-scoped parameters, and function return values. The predicates use the facts to reason and to generate judgments about entities and situations. The facts are time-labeled and time-framed. Reasoning provides a truth value and values for predicate-out parameters to drive behaviors. A processed behavior changes the environment and the state of the entity itself, and this also allows the knowledge base (which consists of beliefs about the simulation world entities) to be updated with time-labeled information received by an event, read from published attributes (subscribed to by the receiver model), or picked up from the environment. This property solves two major problems of causal models in the simulation and artificial intelligence literature [53] using the time-labeled state-variable trajectory. Time-framed facts, preserved in the knowledge base for a given duration (for example, 500 min), are used to compute the state variables in a causal continuous model, and the causal order of the model determines the time-frame size of the state variables.
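A knowledge base of time-labeled, time-framed facts, queried by a predicate whose truth value and out parameters drive a behavior, can be sketched as follows. This is an illustrative sketch only: `FactBase`, `level_high`, and the fact layout are assumptions, not AdSiF's Prolog-based machinery.

```python
# Sketch of time-labeled, time-framed facts plus a predicate returning a
# truth value and out parameters; all names are assumptions, not AdSiF's API.
class FactBase:
    def __init__(self, time_frame):
        self.time_frame = time_frame       # e.g. keep facts for 500 min
        self.facts = []                    # (timestamp, name, value)

    def assert_fact(self, t, name, value):
        self.facts.append((t, name, value))

    def query(self, name, now):
        """Facts matching `name` inside the time frame, newest first."""
        return sorted((f for f in self.facts
                       if f[1] == name and now - f[0] <= self.time_frame),
                      reverse=True)

def level_high(kb, now, threshold):
    """Predicate: truth value plus an out parameter bound from the newest fact."""
    recent = kb.query("level", now)
    if recent and recent[0][2] >= threshold:
        return True, {"level": recent[0][2]}   # out parameter as bind-info
    return False, {}

kb = FactBase(time_frame=500)
kb.assert_fact(0, "level", 9.0)            # outside the frame at t = 600
kb.assert_fact(400, "level", 2.5)
ok, out = level_high(kb, now=600, threshold=2.0)
print(ok, out)  # -> True {'level': 2.5}
```

The time frame realizes the "fact kept for 500 min" idea from the text: stale beliefs drop out of reasoning automatically, while the out parameters play the role of bind-info for guards and durations.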
Thus, AdSiF provides a well-defined ontological view that welds logic programming into state-oriented programming, using the truth values and the predicate return values together with the programming paradigms it supports. A predicate truth value is used as the guard condition for states, behaviors, temporal relations (constituted among states and behaviors or among behaviors), and the drive conditions that manage behavior phase transitions. In AdSiF, the predicate call procedure is notably similar to method invocation; the idea is to call a predicate instead of an atomic function. The predicate return values are used as bind-info, similarly to a function call. A predicate-out parameter can be used anywhere a value needs to be set, such as event parameters, state durations, trace agents tracing the predicate-out parameter, etc.

3.7. Agent oriented programming – agenthood

Agent-based programming begins with the traditional anthropomorphic mapping that underlies the idea of computation and changes the mapping in two ways. First, it changes the nature of the mapping: instead of presenting the computer as a single anthropomorphic entity, the computer is considered a society of interacting autonomous agents. Second, it extends the mapping between the anthropomorphic representation and the underlying computational reality to include new characteristics, such as goals and emotions based on the state of the goals [54]. The most important technique to realize this concept of agents is to give them explicit goals, which are concrete and accessible representations of what they should accomplish. Goals help crystallize the anthropomorphic metaphor: an agent is not just any type of person, but a person with a job to do. Goals can support reactivity (the agents react to events that affect their goals) and support the detection of conflicts when one agent interferes with the goal of another agent [55].
To fulfill a task or to reach a goal, the agent may have to exchange further information with the environment and do internal processing (e.g., planning necessary intermediate steps and compiling intermediate and final results). It is common to think of a horizontal modularization in agent programs, provided by the so-called sense-think-act cycle [6]:

1. Sense: receive and process incoming information and update the internal belief about the environment. AdSiF senses the environment using events and published attributes; the agent also picks up information from the environment using sensors and stores it in the knowledge base as time-labeled facts to create a dual world representation. The facts represent what exists and what is known to be true.
2. Think: internal processing for the choice and preparation of the next action(s). In AdSiF, thinking is achieved using a rule set and an inference engine.
3. Act: perform the related output actions according to the decisions made during the thinking phase. In AdSiF, a decision results in a set of fired behaviors and the actions represented by those behaviors.

In the AdSiF representation, an entity may have a goal as an agent, and the goal is modeled as a set of higher-order behaviors or a set of behaviors with a logic representation. Agent-modeling languages such as RAP (Reactive Action Packages) [56,57], PROforma, Jason [58], and CosMos [59] allow goal declarations for agents, a behavior plan that dictates how to behave to achieve the assigned goal, and alternative plans for failure cases. In AdSiF, it is possible to set drive conditions for a behavior, including a goal success condition, a fail condition, and a suspend condition. An alternative behavior plan in case of failure, and a new plan following goal success, are modeled as behavior sets controlled by drive conditions. This makes it possible to define more than one goal for an agent and to declare the goals at the behavior script level.
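The sense-think-act cycle can be sketched minimally as three functions wired in sequence. This is an illustrative sketch: the rule in `think` and the fact name `obstacle_distance` are hypothetical, not part of AdSiF.

```python
# Minimal sense-think-act sketch for one simulation step; the rule set and
# fact names are illustrative, not AdSiF's.
def sense(beliefs, percepts, t):
    for name, value in percepts.items():
        beliefs[name] = (t, value)         # time-labeled belief update
    return beliefs

def think(beliefs):
    # A single hypothetical rule: if an obstacle is believed near, avoid it.
    _t, dist = beliefs.get("obstacle_distance", (None, float("inf")))
    return ["avoid"] if dist < 5.0 else ["advance"]

def act(decisions):
    # A decision results in a set of fired behaviors.
    return [f"behavior:{d}" for d in decisions]

beliefs = sense({}, {"obstacle_distance": 3.0}, t=10)
print(act(think(beliefs)))  # -> ['behavior:avoid']
```

In AdSiF terms, `sense` corresponds to updating the fact base from events, published attributes, and sensors; `think` to the inference engine evaluating rules; and `act` to firing the behaviors the decision selects.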
The mechanism that an agent uses to determine when and how to drop its intentions is known as a commitment strategy. Blind commitment, single-minded commitment, and open-minded commitment are commonly discussed in the literature on rational agents [60]. An agent has commitments both to the ends (i.e., the state of affairs that it wishes to achieve) and to the means (i.e., the mechanism that it wishes to use to achieve that state of affairs). AdSiF supports the modeling of commitment strategies using behavior categories as aspect modeling. Dropping an intention and selecting
an alternative intention, or choosing another way to achieve the intention, is controlled using behavior drive conditions and behavior list drive conditions (activation and cancel). AdSiF also provides a straightforward structure for agent learning. Situation-action pairs, an approach to agent learning [61], can be formulated as constraints over the state vector and as behavior activation drive conditions. A newly learned situation-action pair is stored in the knowledge base as a new fact associated with a behavior as an activation condition or a cancel condition (to trigger an action or to give up an ongoing one). Learning from states based on temporal reasoning has been studied by Hocaoğlu et al. [62]; the main idea is to collect the state vectors of the entities in the simulation environment and to create behaviors based on temporal logic formulations representing general rules about the system being simulated.

3.8. Behavior lists and aspect-oriented programming perspective

The aspect-oriented programming [63,64] perspective allows categorization and modularization abstractions that help separate design concerns at the source-code level [65]. In simulation modeling, this is related to abstracting behaviors with regard to what modelers expect an entity to do. In the AdSiF representation, each behavioral category of a model represents a different behavioral aspect of the model (a collection of concerns). The categories, called "Behavior Lists", can be seen as function bundles that satisfy many requirements, known as tangled requirements, and merge a collection of concerns into an aspect. As briefly stated in Section 3.1.2, an entity must have at least one active behavior list and may have more than one, forming a set of behavior collections, in other words, a set of collections of concerns. The lists are constrained by activation and deactivation conditions that manage their active/passive status at run time.
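A behavior list gated by an activation condition can be sketched as follows. The sketch is illustrative: `BehaviorList`, the `condition` callables, and the `mode` context key are assumptions, not AdSiF's script syntax.

```python
# Sketch of aspect switching: each behavior list carries an activation
# condition, and the entity runs only the behaviors of the lists whose
# conditions hold; names are assumptions, not AdSiF's API.
class BehaviorList:
    def __init__(self, name, behaviors, condition):
        self.name, self.behaviors, self.condition = name, behaviors, condition

def active_behaviors(lists, context):
    active = [bl for bl in lists if bl.condition(context)]
    return {b for bl in active for b in bl.behaviors}

# Two hypothetical aspects: training (GUI + real-time) vs. analysis (logging).
training = BehaviorList("training", {"updateGUI", "regulateRealTime"},
                        lambda ctx: ctx["mode"] == "training")
analysis = BehaviorList("analysis", {"logResults"},
                        lambda ctx: ctx["mode"] == "analysis")

print(sorted(active_behaviors([training, analysis], {"mode": "analysis"})))
# -> ['logResults']
```

Changing the context flips the active aspect at run time without touching the behaviors themselves, which is the separation-of-concerns point made above.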
From the software engineering viewpoint, behaviors are designed to satisfy functional requirements. By collecting mutually exclusive behaviors designed for categorically different requirements into separate categories, modelers can manage the models along different aspects. This method also supports distributing behaviors into behavior lists, and states into behaviors, to satisfy cross-cutting requirements, which addresses the scattered-requirement and code-scattering problems in a low-coupling, high-cohesion design. In a training simulation, user interfaces and real-time simulation execution are fundamental requirements. In an analysis simulation, the execution must be as fast as possible and back-end interfaces are a necessity, while graphical user interfaces shown during execution are unimportant. Both situations can be handled by organizing the model behaviors: for the training simulation, the behaviors that manage the user interfaces and regulate the execution to real time are categorized in one behavior list, while for the analysis simulation, the behaviors that manage the logging operations (as scattered requirements) are categorized under another behavior list. Switching from one behavior list to another is achieved by the behavior-list activation condition or by event passing, which changes the execution aspect at run time. Delayed-Loaded Atomic Function Plug-ins (Dl AFPs) are another powerful property supporting aspect orientation. As the name indicates, Dl AFPs provide run-time loading of the atomic functions invoked by behavior states. This property allows modelers to enhance a model even at run time. Each method invoked by states is loaded at run time, and the states are distributed into behaviors to satisfy cross-cutting (scattered) requirements and to minimize code dispersion.
For example, a logging requirement, as a scattered requirement, is satisfied by scattering the functions, through the states that capture them, into the behaviors that require logging. It is also possible to extend a behavior list, or to define one from scratch, using Dl AFPs to model a new aspect of the model.

4. Time management and cosmology in the AdSiF conceptual world

Since each behavior is constituted by states and each state requires a duration to complete its life cycle, the time concept is taken into consideration in this section, with a special interest in time and cosmology in AdSiF. AdSiF has two types of time representation: space-time-based representation and Euclidean time representation. In space-time-based time management, time is a dimension, in other words, an axis created by the state changes of entities. A state transition or a scheduled event causes time axis creation; if there is no change in the space, there is no time axis. Contrary to this approach, Euclidean time representation has a continuous time advance mechanism: whether or not there is a state transition, time passes in a continuous manner, and the entities change their state as time advances. If a conceptual world needs to advance in predefined, constant, or calculated time intervals, at least one entity is accommodated in the world, or one of the entities is assigned to advance time by changing its state at the desired intervals, to regulate the other entities in the world. This turns continuous world modeling into discrete modeling. The task is managed by script programming that defines behaviors handling time management and event handling. From this point of view, simulation management is achieved by behaviors inherited by entities. The behaviors are interpreted by the kernel interpreter, and they are in charge of event ordering (tie-breaking rules) and time requests, as seen in Fig. 3.
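The kernel's handling of time requests can be sketched as a single step of a grant cycle. This is an illustrative sketch under the assumption that the kernel grants the minimum requested time and delivers the events due by then; the function `kernel_step` and its data shapes are hypothetical.

```python
# Sketch of one kernel time-grant cycle: collect each entity's time request,
# grant the minimum, and deliver the events due by that time; the structure
# is illustrative, not AdSiF's kernel interface.
import heapq

def kernel_step(requests, event_queue):
    """requests: {entity: requested_time}; event_queue: heap of (t, dest, ev)."""
    grant = min(requests.values())         # grant the minimum requested time
    delivered = []
    while event_queue and event_queue[0][0] <= grant:
        t, dest, ev = heapq.heappop(event_queue)
        delivered.append((dest, ev))       # exchange events between entities
    return grant, delivered

events = [(3.0, "tube", "inflowStarted"), (9.0, "valve", "close")]
heapq.heapify(events)
granted, delivered = kernel_step({"tube": 5.0, "valve": 7.0}, events)
print(granted, delivered)  # -> 5.0 [('tube', 'inflowStarted')]
```

Tie-breaking rules and alternative synchronization algorithms would replace or wrap this step; in AdSiF they are expressed as behaviors interpreted by the kernel rather than as kernel code.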
The kernel collects time requests and grants the minimum one to the entities, exchanges events between the entities, and publishes the attributes declared as published to the subscribed entities. In the High Level Architecture (HLA) case, all of these services are provided using the HLA Runtime Infrastructure services. Performing the simulation itself through behaviors allows modelers to develop their own simulation event, time management, and synchronization algorithms. Algorithms based on event time stamps or state time requests are implemented as simulation execution behaviors. It is possible to develop optimistic and
Fig. 3. Agent software architecture with real system and HLA Interfaces.
conservative synchronization algorithms, such as time warp, null message passing, etc., as behaviors. This makes simulation execution itself a function set modeled by behaviors interpreted by the kernel engine [66]. Since the entities can extend and directly use simulation management behaviors that implement time and event handling algorithms, it becomes possible to interact with real systems through simulation entities providing interfaces to them, as a means of component-based development. As seen in Fig. 3, the behavior providing the interaction capability with real systems is inherited as a behavior collection, and this behavior set also contains real-time execution regulation behaviors.

5. Software metrics

A set of software metrics is chosen to evaluate the framework; the metrics, which reflect the demands actual users have on a language, are given below with short explanations of how AdSiF meets them. The software metrics presented in the table mostly focus on the developer's point of view [67].

Time management: It is possible to model parallel simulation synchronization algorithms, such as optimistic synchronization, by behavior design [68].
Conceptual model support: The whole development process is integrated with CASE tools.
Scenario verification: Based on user-defined rules, defined with domain-independent scripting.
Conceptual model verification: Checking the conceptual model's consistency by coverage analysis, event mapping (whether or not a generated event is consumed), and a series of tests.
Batch execution: Executing multiple scenarios as a batch; in this case, the system does not need any user interaction.
Support for replication: It is possible to make automated scenario replications using Boolean expression queries defined over user-defined predicate truth values (the logic programming part), atomic functions, plug-in functions, attributes, and behavior-local parameters.
Component based simulation management: AdSiF allows designers to define their own simulation execution algorithms (such as tie-breaking rules) by reaching kernel interpreter functions through behavior design. This also allows the simulation execution to be managed from the simulation models.
Abstraction: There are three types of relations between simulation entities and agents: generalization, aggregation, and composition. A simulation entity may have any other simulation entity as a composite or aggregated part, and it undertakes its simulation coordination.
Expressiveness: Because of its state-automaton-based programming paradigm and the logic programming paradigm it covers, AdSiF provides a modeling language that is visual, state-based, and close to human thinking.
Readability and writability: Because it uses visual programming and XML markup, it is highly readable and writable.
Rapid development: An entity is developed based on state diagrams and visual graph-based modeling, or alternatively on XML markup, so development is faster than direct coding. Because the functions written in C++ do not carry any semantic relations among themselves, development in the coding phase is still easier and faster than coding the whole model semantics in functions.
Efficiency: Execution speed (relative to real time), the number of events processed, and the number of states processed per unit time are the basic efficiency criteria. A performance measurement is given in the example section.
Reusability: Behavior diagrams, especially those written for scattered requirements or special behaviors executed by more than one entity, are modeled as root behaviors and inherited by the entities that use them. The reusability provided by the object-oriented programming paradigm is also fully available. Moreover, a plug-in function developed for a specific purpose can be used by any other agent or simulation model simply by calling it from a behavior, without any change to the agent/model source code and without rebuilding.
Extending a simulation model/agent: A simulation model or an agent can be extended by adding new attributes, new functions as plug-ins, and new behaviors without changing the model code. This is supported even at run time.
Verification: Analysis is possible at run time using the analysis model component.
Language integration: It supports C++, Python, and Prolog.
Overloading: Overloading is supported at the behavior level, function level, and class level.
Information hiding: In addition to class-level data hiding, behavior-level data hiding is supported.
Recursive programming: Supported by recursive behavior design.
Modeling time-delayed systems: Allows modelers to use an earlier value of a fact/variable in a decision or calculation process.
Execution speed regulation: Capable of changing the execution speed at run time.
Design patterns: AdSiF has a set of behavior design patterns such as facade, nested loops, behavior synchronization, model synchronization, etc. [69].
Extending models in run time: Using plug-ins and injecting new behaviors, attributes, predicates, and facts.
Orthogonality and easy maintenance: Since each behavioral aspect and each behavior are separated from each other, a change in a well-designed behavior does not disturb other behaviors.
Portability: The core interpreter kernel, which executes behavior diagrams, is compiled on Linux and Windows.
Reliability and safety: The entity classes provided by the framework have services to save snapshots during simulation execution at a period determined by modelers. A simulation execution interrupted by any problem can be restarted from a snapshot point.
Pedagogical value and learnability: Since state-oriented programming, the main paradigm of AdSiF, is already known as a modeling language, and the programming paradigms it encompasses are well known, it is easy to adopt, and it also pedagogically reinforces the concepts modelers want to learn.
Discrete and continuous event modeling support: Both modeling approaches are supported; basically, a continuous universe is interpreted as a discrete universe.
Event driven architecture: Interaction between entities is achieved by an event-driven architecture.

5.1. Pros, cons and programming practices

Currently, the framework is used in different projects, each with over a million lines of code. Depending on the usage, some situations raise performance problems. If all data sharing is achieved by event messages instead of publishing model attributes, the event traffic becomes high, which slows the simulation execution. Similarly, if model behaviors are designed at a very high resolution, the number of states in the behavior set becomes large, which directly affects the simulation execution speed. The logic programming paradigm enhances the modeling capability with a reasoning ability, but this brings an overhead. The balance between these advantages and the performance loss should be taken into consideration in the design period. Since AdSiF provides a programming environment, the execution performance depends on the design quality and modeling granularity. A simulation execution requiring very small time steps is a source of bottlenecks in a coordinated or conservative parallel simulation. It is preferable to use an optimistic parallel execution approach to avoid this problem, and a specific behavior list, designed as an aspect, can be created to handle the simulation execution itself by managing the simulation synchronization algorithms [43].

6. Simulation examples

We present two examples to show the main properties of AdSiF. A tube simulation example is given with two different modeling approaches, continuous event simulation and discrete event simulation; the behaviors valid for both cases are examined under common behaviors. The example suffices to show several properties of the system: discrete and continuous event simulation support, the relation concept, behavior design based on relation phases, conditional behavior activation, making a behavior parallel, and the event transfer mechanism using relations. In this example, the external transition property is used to manage continuous event simulation faster and more accurately and to model the activity scanning phase that continuous event simulations require to detect whether there is any action to process in the time slice. The second example focuses on how to use the reasoning mechanism to manage behaviors, in other words, how to integrate logic programming into state-oriented programming. The examples are not limited to these properties; they also include programming and messaging by relations, and continuous and discrete components.

6.1. Tube model

The tube problem is depicted in Fig. 4 with the component relations. The system has three valves and a tube. While two of the valves fill the tube, the third drains it. In the AdSiF representation, we have three valve instances (two valveFilling instances and one valveDraining instance) and one tube instance. The reason for choosing two valves for the inflow is to be able to show parallel execution of the tube-filling behavior.
M.F. Hocao˘glu / Science of Computer Programming 167 (2018) 70–90
Fig. 4. Tube problem representation.
Fig. 5. Tube valve controller behaviors.
For a valve, flow.flowrate represents the amount of water the valve fills or drains per time unit (depending on the relation it has); its value set is limited by a minimum and a maximum level ([0, Level.max]) and its unit is volume/time unit. The tube level is represented by the variable level, with value set [0, maxLevel]. The formulations are given below:
level = flow.Net × t if level < maxLevel, else level = maxLevel,   flow.Net = \sum_{i=1}^{N} flow.flowrate_i    (1)

and

flow.Net = flow.in − flow.out    (2)
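The time-slicing update implied by Eqs. (1) and (2) can be sketched as follows. This is purely illustrative Python; AdSiF models are written in C++ and XML scripts, and all function and variable names below are hypothetical, not AdSiF API.

```python
# Illustrative time-slicing update for the tube model (Eqs. (1) and (2)).
# Draining valves contribute negative flow rates, so flow.Net = flow.in - flow.out
# falls out of the plain sum. Names are hypothetical, not AdSiF API.

def net_flow(flow_rates):
    """flow.Net: sum of flowrate_i over all connected valves."""
    return sum(flow_rates)

def step_level(level, flow_rates, dt, max_level):
    """Advance the tube level by one time slice, clamped to [0, max_level]."""
    level += net_flow(flow_rates) * dt
    return max(0.0, min(level, max_level))

# Two filling valves (+10 each), one closed draining valve, dt = 1 time unit:
level = 0.0
for _ in range(3):
    level = step_level(level, [10.0, 10.0, 0.0], 1.0, 100.0)
# level is now 60.0
```

The clamp mirrors the "else level = maxLevel" branch of Eq. (1): once the tube is full, further inflow is discarded rather than accumulated.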
In Case #1 and Case #2, the simulation is executed as a time-slicing simulation for two different system designs. In Case #1 the tube is not allowed to drain, while in Case #2 draining is allowed by a Boolean-expression activation condition and a cyclic simulation scenario is designed. Case #2 also shows how two different aspects are merged and used conditionally in a scenario. In Case #3, a discrete event simulation-based execution is chosen: the simulation advances directly to discrete time points at each of which there is at least one meaningful system value, such as reaching the full or zero level, or opening or closing the valves.

Common behaviors: opening and closing valves, establishing relations. The decision to open and close valves is made by the tube. Fsa_Cont-1 (Fig. 5) has an activation drive condition and is activated whenever the attribute level equals 0. An activation drive condition is a Boolean expression; whenever it turns true it activates the behavior it is assigned to, as is done for Fsa_Cont-1. The behavior sends an openValve event to the valves connected by the relation Fills and a closeValve event to the valve connected by the relation Drains; the event addresses are determined by the relations between the tube and the valves. Because the attribute level is initially zero, Fsa_Cont-1 is activated at the beginning of the simulation and it turns valveFilling open and valveDraining closed. The relations between simulation entities activate a set of functions and a set of behavior lists. In Fig. 6, the relation "Drains" activates the FSAList named MinusFlow in the relation activation phase. The FSAList MinusFlow has a behavior called Fsa_MinusFlow (Fig. 8), which has a zero state and an activation phase action named Af_MinusFlow.
The action multiplies the related valve's flowrate attribute by −1 to drain water from the tube; the pseudo code is shown in Fig. 8. Whenever the "Drains" relation is established, the behavior list MinusFlow and its initial behavior Fsa_MinusFlow are activated. In this example, no action or behavior list is connected with the relation break phase.

Case #1, time slicing: The valves are open and allow a fixed flow in a single step (they have a constant flowrate represented as an attribute). Simulation execution starts with an empty tube, two open inflow valves, and a closed outflow valve. At the beginning of the simulation, Fsa_Cont-1 is activated because of its activation condition. Fig. 7 shows the behaviors of a valve. The behavior Fsa_On is a looping behavior; it makes an internal state transition every one (1) time unit, as declared by the duration computer, and sends a flow event to the related entities connected by the relations "Fills" and "Drains".
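The activation drive condition mechanism used by Fsa_Cont-1 can be sketched as a watched Boolean expression over model attributes: whenever the expression turns true, the associated behavior is activated. The Python below is a paradigm illustration, not AdSiF scripting syntax; all class and attribute names are hypothetical.

```python
# Sketch of an activation drive condition: a Boolean expression over model
# attributes that activates its behavior whenever it becomes true.
# All names here are illustrative, not AdSiF API.

class Behavior:
    def __init__(self, name, condition):
        self.name = name
        self.condition = condition   # callable: model -> bool
        self.activations = 0

    def check(self, model):
        """Evaluate the drive condition; activate the behavior if it holds."""
        if self.condition(model):
            self.activations += 1    # in AdSiF this would start the FSA
            return True
        return False

class Tube:
    def __init__(self):
        self.level = 0.0

tube = Tube()
# Fsa_Cont-1 is driven by the condition "level == 0".
fsa_cont_1 = Behavior("Fsa_Cont-1", lambda m: m.level == 0.0)
fsa_cont_1.check(tube)       # level is 0 -> behavior activated
tube.level = 50.0
fsa_cont_1.check(tube)       # condition false -> not activated
```

In AdSiF the condition would be re-evaluated by the engine as attributes change; the explicit `check` calls here stand in for that polling.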
Fig. 6. Behavioral declarations based on relation phases for valves.
Fig. 7. Behavioral representation of a valve.
The flow event has a flowrate parameter bound to the valve's flowrate attribute. As seen in Fig. 8, because the related model is the tube model, the flow event is received by the tube. The event triggers Fsa_Fill through its entry state On, which is actively processed by Fsa_Empty; Fsa_Empty is selected as the initial behavior of the tube and is activated at the beginning of the simulation. The flowrate parameter of the flow event determines how much water is drained from or filled into the tube per time unit. Declaring Fsa_Fill as parallel allows more than one event to be handled at the same time and lets multiple valves work together: the tube receives more than one flow event and processes all of them, activating one Fsa_Fill instance for each. The state ComputeLevel has a trace agent to trace the level attribute value. When the maximum level is reached, Fsa_Full is activated and Fsa_Empty is canceled at the finish phase of Fsa_Fill, without any time latency and with an always-true guard. Fsa_Cont-2 is not activated even when its activation drive condition is satisfied, since the active behavior list (FSAList default, Fig. 8) does not contain Fsa_Cont-2. After the maximum level is reached, Fsa_Full is active, Fsa_Fill and Fsa_Empty are canceled, and all the valves are closed. Since there is no event transition and no time request (the valves and the tube request infinite duration), the simulation waits for an action for an infinite time (space time). If a flow event is received, it triggers Fsa_Null. The Pouring state has a trace agent and records the flowrate parameter of the flow event (as a trigger event) to measure the level of poured water.
During flow, if a closeValve event for a filling valve is scheduled at any time point, it interrupts the On state of Fsa_On, cancels Fsa_On by an external state transition, and sends a flow event for the duration consumed up to receiving the event. This is the means of handling activities arising within time intervals.

Case #2, no pouring: In this case, Fsa_Cont-2 is added to the behavior list of the tube. To activate Fsa_Cont-2, the FSAList named Drain that contains it is enabled by assigning a true value to the attribute listSelected. When the maximum level is reached, the tube behavior Fsa_Cont-2 is activated, the event closeValve is sent to the inflow valves, and the event openValve is sent to valveDraining. Because of the negative flowrate of valveDraining, each flow event it sends lowers the tube level, down to zero. At the zero water level, Fsa_Cont-1 is activated again and the cycle repeats.

Case #3, discrete event interaction: Discrete event execution is quite different from the time-slicing approach. The initial behavior of the valve is declared as Fsa_Off in the initial behavior list named default. An openValve event, sent from the behavior Fsa_Cont-1, triggers the behavior Fsa_On. The behavior sends the tube an event called startFlow from the state startFlow at the exit phase and waits for an infinite time at state On (Fig. 9). The initial behavior of the tube is set as Fsa_Active in the initial behavior list named default. The event startFlow activates the behavior Fsa_RequestTime (whose entry state is kept active by Fsa_Active). The tube requests the duration needed to fill to full through the state RequestTime. If no interrupting event arrives before the requested duration is completely consumed, the maximum level is reached and computed by the function ComputeLevel, called at the exit phase.
ComputeLevel uses the elapsed duration (the duration consumed by the state) as a parameter to compute the water level in the tube. Any behavior with entry state RequestTime that is activated by an event interaction interrupts the state RequestTime (because the
Fig. 8. Tube behavior design.
Fig. 9. Discrete event simulation behavioral representation for valves.
Fig. 10. Discrete event simulation behavioral representation for tubes.
behavior's external transition is true). The behavior actively processing the state (in this case, Fsa_RequestTime) is canceled, and the external transition function of the state, ComputeLevel, is executed; the elapsed duration is the duration consumed by the state until it is externally exited. The water level is thus computed from the duration consumed up to cancellation. (See Figs. 9 and 10.)
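The two duration-based computations at work here can be sketched as follows: deriving the water level from the duration a state actually consumed (what ComputeLevel does at an external transition), and requesting the exact duration to the next significant state so the simulation can jump straight to it. The Python names are hypothetical illustrations, not the AdSiF API.

```python
# Sketch of the discrete event case: the level is a function of consumed
# duration, and the time request is computed analytically rather than
# stepped through. Names are illustrative, not AdSiF API.

def compute_level(start_level, net_flow_rate, elapsed, max_level):
    """Level after `elapsed` time units of net inflow, clamped to the tube."""
    return max(0.0, min(start_level + net_flow_rate * elapsed, max_level))

def duration_to_full(level, max_level, net_flow_rate):
    """Exact time until the tube is full; None if it is not filling."""
    if net_flow_rate <= 0:
        return None
    return (max_level - level) / net_flow_rate

# RequestTime asks for duration_to_full(0, 100, 10) = 10 time units.
# A closeValve event interrupts after 4 units, so the external transition
# computes the level from the consumed duration only:
interrupted_level = compute_level(0.0, 10.0, 4.0, 100.0)   # 40.0
```

Because the requested duration is exact, the simulation reaches "full" or "empty" in a single time request regardless of tube volume or flow rate, which is why Case #3's execution speed is insensitive to those parameters.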
As soon as the maximum level is reached, the Fsa_Full behavior is activated by a temporal relation associated with the out phase of the state Full. After Fsa_RequestTime finishes and Fsa_Full is activated, a flow event is processed by the behavior Fsa_Null to calculate and trace the pouring level (Fig. 10). Up to this point, the tube example has 50 lines of code developed in C++ (automatically generated declaration code excluded) and 350 lines of XML script. A performance measurement with different configurations for Case #1 and Case #2 is given below. Durations are defined at the level of seconds; for example, a valve flows water at 10 cm³/second. The execution speed is expressed as how many times faster than real time the simulation runs, measured over a half-hour execution. [# of filling valves, # of draining valves, # of tubes, speed]: [2, 1, 1, 38.7x], [6, 1, 1, 22.1x], [6, 2, 1, 29.73x], [6, 3, 1, 31.36x], [4, 3, 1, 34.80x], [4, 3, 2, 18.4x]. Notice that the execution speed in time slicing depends on the time slice width. In Case #3, the execution speed does not depend on the tube size and flow rate, because the durations necessary to reach states such as full or empty and event points such as valve closing and opening are calculated directly [70].

6.2. Air defense simulation example

The air defense simulation example is a hybrid simulation consisting of continuous and discrete components. The scenario includes a series of land-based air defense systems, fighter aircraft, ships, sensor systems (surveillance and tracking radars), commanders, missiles, free-fall bombs, and a set of communication devices. Each missile has its own seeker to follow the target it has engaged. A land-based defense system consists of launchers with a set of missiles, radars or other types of sensors, and a command and control (C2) unit, which is an intelligent agent with a set of communication devices connected to sensors and other C2 units.
Surveillance radars detect targets and send their detections to the C2 units to which they are connected by communication devices; the C2 units evaluate the threats, select a defense system and missile type for the engagement, and give the engagement order to the selected defense system. This cycle is known as the C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) cycle. A C2 unit may manage more than one defense system. A surveillance radar sweeps a predefined angle per time unit, for example 10°/second. This is a continuous event execution, but if there is a threat in its line of sight and within its look angle, it analyzes the threat and sends the detection information as a discrete event. Sending the threat information to a C2 unit (if it is detected properly) using wireless devices is again a discrete simulation process. As soon as the C2 unit receives threat information, it registers the threat in a threat list and starts a threat evaluation process. A simple behavior diagram is shown in Fig. 11. Fsa_Active is chosen as the initial behavior in the default behavior list named Sensor. The behavior Fsa_Active activates the behavior Fsa_Turn, which changes the antenna direction over the duration given by the attribute turnDur. Since Fsa_Turn is a looping behavior, it keeps turning until some other behavior cancels it. Whenever the radar receives an event named turnOff, it activates Fsa_Passive; because Fsa_Active is a behavior with an external transition, it is canceled and in turn cancels Fsa_Turn because of the temporal relation between them (declared as Cancel::Cancelled:[noguard]:-[after zero time]). The event inlos is sent by a server model in charge of calculating which entity is in which sensor's line of sight, and it activates Fsa_Analysis to analyze whether or not the radar detects the entity in its line of sight.
If the detection is successful (determined by the function Af_Analysis), the event targetDetected is sent to the entity or entities related to the radar by the relation "informs". The whole detection process takes a duration drawn from a normal distribution with a 100 msec mean and a 2 msec standard deviation. Since Fsa_Analysis is a parallel behavior, the radar can analyze more than one detection, activating one behavior per target. The total number of C++ lines of code for the functions given in Fig. 11 is approximately 110. In the example, a centralized C2 structure is chosen. Commander-1 controls the other commanders, collects detections from the commanders ranked below it, evaluates the targets, and decides who is to engage which target with which weapon system. When Commander-1 dies, the relations it has with the lower-ranked commanders (the relation commands) are broken. As defined in the relation programming declaration, the commanders then change their behavior list to an autonomous C2 structure: they evaluate targets themselves and make their own decisions. The whole process is explained in Section 6.2.1 to show how the reasoning mechanism is used to manage behaviors. A commander entity at a lower rank has a relation named "Reports" with the commander that controls it and is placed on the left-hand side of the relation. The behavior lists to be activated and the functions to be executed whenever the relation is established or broken are declared as seen in Fig. 12; this is also a switch from one aspect to another. The relation behavioral script reads as follows: whenever the relation "Reports" is established (activated), the commander model on the left-hand side activates the behavior list named "CentralizedC2" and executes the atomic function "Af_SetReportAddress"; whenever the relation is deactivated (broken), the commander activates the behavior list "C2List". As seen in Fig.
13, blue-side aircraft aim to drop bombs on the red target sensor systems (the figure shows the end of the middle phase of the course of action; Red and Blue forces are tagged R and B, respectively). Red land-based air defense systems detect and engage the blue aircraft. The deployment of the surveillance radars and the land-based air defense systems is depicted in the figure. To detect blue aircraft sooner, a surveillance mission is carried out by red aircraft. A performance measurement for the execution is given in Table 1, which represents several alternative scenarios with different asset configurations. The entities in the executed simulation scenarios are written in the format
Fig. 11. Radar behavior diagram.
Fig. 12. Programming by relation for commanders.
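The relation-phase declarations of Fig. 12 — one behavior list activated when "Reports" is established and another when it breaks — can be sketched as establish/break callbacks. The Python class and method names are hypothetical illustrations of the mechanism, not AdSiF script syntax.

```python
# Sketch of switching behavioral aspects by relation phases: establishing
# the "Reports" relation activates the CentralizedC2 list; breaking it
# switches the commander to the autonomous C2List. Names are illustrative.

class Commander:
    def __init__(self, name):
        self.name = name
        self.active_list = None

    def on_relation_established(self, relation):
        if relation == "Reports":
            # in AdSiF this also executes the atomic function Af_SetReportAddress
            self.active_list = "CentralizedC2"

    def on_relation_broken(self, relation):
        if relation == "Reports":
            self.active_list = "C2List"   # autonomous C2 aspect

cmdr = Commander("Commander-2")
cmdr.on_relation_established("Reports")   # Commander-1 takes control
cmdr.on_relation_broken("Reports")        # Commander-1 dies, relation breaks
# cmdr.active_list is now "C2List"
```

The point of the mechanism is that the aspect switch is declared on the relation, not coded into either commander, so the same commander model works in both centralized and autonomous structures.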
Table 1
Performance measurement for air defense example.

Scenario assets                 Time step statistics        Statistic of behaviors     Execution speed
                                Average       Minimum       #State       #Event
[8, 5, 4, 6, 6, 6, 13, 1, 1]    2.7 s         300 ms        65527        48492          8.17x
[7, 0, 0, 6, 6, 6, 13, 0, 1]    3 s           0 ms          45526        38492          9.86x
[1, 4, 0, 2, 2, 6, 2, 0, 0]     7.8 s         0 ms          5831         2420           12.1x
[1, 1, 0, 1, 1, 1, 0, 1, 0]     0.41 s        0 ms          4089         2265           8.63x

#State: number of states processed; #Event: number of events processed.
Table 2
Lines of code for the entities.

Model            C++ code (including generated code)        Capabilities                                                                      XML script
Radar            2000                                       Detection, tracking, search, loss calculations                                    3800
Launcher         500                                        Single fire, salvo fire, jamming                                                  5900
Commander        1500 + 228 Prolog code                     Target evaluation, fusion (identity and position), fire control                   11718
Missile          3400 (a six-degrees-of-freedom model)      3, 4 and 6 DOF flight, explosion, phase transitions, engine control, guidance     4177
Tactic picture   2500                                       Picturizing detections, geometries, entities                                      710
given below. The performance measurements are calculated based on behaviors, time resolutions, and execution speed. In the scenario, missile models and planes have 6 DOF (degrees of freedom), and sensors sweep with duration steps between 100 msec and 1 sec. Scenario assets are given in the form [#SR, #TR, #W, #C2, #L, #M, #AP, #SP, #LP]: #SR, number of surveillance radars; #TR, number of tracking radars; #W, number of wireless devices; #C2, number of commanders; #L, number of launchers; #M, number of missiles; #AP, number of air platforms; #SP, number of sea platforms; #LP, number of land platforms. To measure coding effectiveness, the lines of code for the simulation components are given in Table 2. It is seen from the table that the largest part of an agent or a simulation entity is developed by scripting: AdSiF moves the coding effort from compiled code to scripts, which are flexible and customizable without any need for compilation.
Fig. 13. Air defense scenario view. (For interpretation of the colors in the figure(s), the reader is referred to the web version of this article.)
6.2.1. Commander fire order

In this part of the air defense example, the focus is on giving the fire order, selecting a weapon type depending on the target type, by a commander who is in charge of defending certain regions on the map (the polygonal area in Fig. 13). The commander's knowledge base is populated by detection information sent by the sensors and C2 units to which it is connected, and by the points of the regions the commander is in charge of. The facts detection and regionPoints are designed as follows.
detection(TargetType, Position, Size, Velocity, Time) and regionPoints(RegionID, X, Y, Z)

As seen in Fig. 14, a targetInfo event received by the commander model activates Fsa_Order. The behavior populates the knowledge base with a detection fact at the entry phase of the state Detection. The parameters the fact needs are obtained from the trigger event (te), such as te.targetType, te.Position, te.Size, te.Velocity, te.detectionTime. At the exit phase of the state Detection, a fire event is sent if the sending constraint, defined over predicate parameters, is satisfied: if the first parameter of the predicate fireRule equals 1, a fire event is sent. The fireRule predicate is given below. The rule says that if a target is in a region of type #1, the output parameter of the whatOrder predicate is obtained as 1. The predicates and facts used in reasoning are listed in Appendix A. As the code comments indicate, region points (regionPoints) are asserted as facts into the knowledge base at simulation initiation (using initial behaviors). Whenever a detection is received through the event targetInfo, the fact detection is added to the knowledge base by Fsa_Order (state Detection) and, if the detected target is in a region of type #1, the fire event is sent to the destination weapon system. The ammunition is selected by the predicate fuse. (See Fig. 14.) As an alternative design, a constraint built on the predicate output parameter is used in the activation drive condition of the Fsa_FireOrder behavior, which sends a fire event to the related weapon system. In this second design, the fire order is given after any state transition performed by the commander, without depending on the behavior that populates the knowledge base with new detection facts (Fsa_Order). The weapon type is determined using the predicate named fuse.
The second parameter of the predicate fuse returns the ammunition type to be fired, and it is set to the fire event's ammunitionType parameter. Because logic programs such as Prolog use dynamic type binding, a predicate parameter can be used as an input in one usage and as an output in another; we call this declaration the predicate call set. Which weapon is fired at which target is represented by targetWeapon facts, which are accepted as built-in beliefs for the domain.
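The decision chain of Appendix A (detection facts, the decision rules, the targetWeapon facts, and fuse joining them) can be sketched in Python. The dictionary-and-function encoding of the facts and rules is purely illustrative; in AdSiF these live in the Prolog knowledge base, where the fuse parameter can be an input or an output thanks to dynamic binding, which the sketch imitates with a plain lookup.

```python
# Sketch of the Appendix A reasoning chain: a detection is classified by
# the decision rules, and fuse joins the classification with the
# targetWeapon facts to select the ammunition. The Python encoding is
# illustrative, not the AdSiF/Prolog representation.

# targetWeapon facts: which weapon is fired at which target type.
target_weapon = {"f16": "amraam", "corvette": "harpoon"}

def decision(size_w, size_l, velocity):
    """Mirror of the decision/2 rules: classify a detection by size/speed."""
    if 3 <= size_w <= 5 and 6 <= size_l <= 10 and 500 <= velocity < 2000:
        return "f16"
    if 7 <= size_w <= 20 and 6 <= size_l <= 30 and 20 <= velocity < 100:
        return "corvette"
    return None

def fuse(size_w, size_l, velocity):
    """fuse/2: ammunition type for a detection, or None if no rule fires."""
    target_type = decision(size_w, size_l, velocity)
    return target_weapon.get(target_type)

ammo = fuse(4, 8, 900)      # fighter-sized and fast -> "amraam"
```

In the Prolog original a single fuse/2 declaration covers both call modes (ammunition known, target unknown, and vice versa); the one-directional Python function shows only the output mode used by the fire event.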
Fig. 14. Activating a behavior by reasoning.
7. Discussion

One of the most important properties of AdSiF is that it has an ontological commitment with its own programming paradigm, called the state-oriented programming paradigm. Both make AdSiF independent of application domains and make it possible to define any entity and any interaction between entities. The ontological commitment covers the description of an entity (a thing) with its environment, its time-labeled perceptions, premises about other entities and events, relations with other entities, and a reasoning mechanism that gives an entity consciousness. In addition, the state-oriented paradigm provides a grammar to describe simulation entities and agents by their attributes, behaviors, event interactions, behavioral aspects, and their reasoning capability as intelligent agents with past knowledge about other entities and themselves. Support for multiple programming paradigms, including the aspect-oriented, object-oriented, and logic programming paradigms, is one of the most salient properties putting AdSiF forward as a powerful candidate for an agent-programming and simulation-modeling environment. Bringing all the supported paradigms together into a single paradigm named the state-oriented programming paradigm provides a graphic-based declarative programming language, and this disciplines the modeling process. Each paradigm has its own advantages, and the state-oriented paradigm, as a hybrid paradigm, merges all of them into a single representation that connects them well. For example, logic programming is a type of declarative programming close to human common-sense reasoning and thinking; the state-oriented paradigm uses this advantage to handle behaviors (activation, canceling, suspending, and resuming) directly by logic-based declarations. As is known, the logic-programming component provides a reasoning mechanism and a knowledge base.
The knowledge base keeps facts about entities, beliefs, and premises. The facts about the entities in the simulation are saved with time stamps, which means the agent has past knowledge; this can also be used for causal methods and inference based on past information. AdSiF provides a flexible solution for both continuous and discrete event simulations with two different time definitions: Euclidean time and space-time. This allows modelers to define time based on event occurrences, as in the space-time definition, or to advance the simulation execution by the time-slicing approach for continuous simulation. Being declarative is another salient property, especially from the agent-programming point of view. It gives modelers the opportunity to develop behaviors that handle the simulation execution itself, such as parallel simulation synchronization algorithms (null message passing, optimistic methods, etc.) designed as simulation operation-level behaviors. Behaviors are script declarations, layered and separated from entity behaviors and executed according to their priorities, enabling modelers to design their own simulation time management algorithms as behavior sets without changing the interpreter core [43]. AdSiF stores the simulation-management functions in its behavior declarations as a layer, which allows modelers to develop simulation algorithms such as synchronization algorithms or an event-handling mechanism using behavior design, without penetrating the simulation engine kernel. The simulation execution performance is directly related to the time definition. For example, since the simulation advance depends on the model parameters, time-slicing-based continuous event modeling is slower than discrete event execution. In the event interaction case, the execution speed does not depend on the simulation model parameters, as seen in tube example Case #3.
This is because the simulation advances directly to specific discrete event points, such as tube full or tube empty, and in this case the execution finishes in just a few steps, whatever the flow rate or tube volume. AdSiF provides a well-developed messaging mechanism between entities: messages can be sent 1) directly to entities (using their unique IDs), 2) to related entities, and 3) to entity types. In particular, messaging via relations provides an abstraction between communicating agents. The relation concept, which originates from the AdSiF ontological commitment, draws a well-abstracted interaction among the models: the models interact by describing each other through relations. This enhances the interoperability and the loose coupling among models
because they do not depend on one another's inner structures. As seen in both examples, the relation concept brings script programming to the models on both sides of a relation; the relation not only makes messaging easier but is also a programming tool. AdSiF provides a new solution to aspect-oriented programming using conditioned behavior containers and plug-ins managed by behaviors. Cross-cutting functions are satisfied both for states, which are scattered into the behaviors that handle the required function, and for behaviors, which are scattered into behavior containers. Switching among model behavioral aspects at run time is achieved via well-separated, conditioned behavior containers. AdSiF fosters software-engineering criteria such as reusability, thanks to its script language, its plug-in and event-based architecture, and its extendibility, interoperability, flexibility, and orthogonality, which result from the loosely coupled model architecture, the well-abstracted and separated agent-based model structures, and the ontological commitment that supports multiple paradigms.

Acknowledgements

Special thanks to Dr. Hessam S. Sarjoughian of Arizona State University, whom I see as my lifetime teacher, for his valuable support.

Appendix A. Decision making rulebase

targetWeapon(f16, amraam). /* Amraam is fired at an f16 */
targetWeapon(corvette, harpoon). /* Harpoon is fired at a corvette */
decision(f16, TargetId) :- detection(TargetId, Size, Velocity, Time) ∧ Size.width >= 3 ∧ Size.width <= 5 ∧ Size.Length >= 6 ∧ Size.Length <= 10 ∧ Velocity >= 500 ∧ Velocity < 2000.
decision(corvette, TargetId) :- detection(TargetId, Size, Velocity, Time) ∧ Size.width >= 7 ∧ Size.width <= 20 ∧ Size.Length >= 6 ∧ Size.Length <= 30 ∧ Velocity >= 20 ∧ Velocity < 100.
fuse(TargetId, Y) :- decision(Z, TargetId) ∧ targetWeapon(Z, Y).
targetInACO(RegionId, 1, Time):- /* select the action code associated with the region type */
whatOrder(TargetId, Position, Time, Order) :-
    inWhichACO(TargetId, Position, RegionId), /* find which region the target is in */
    targetInACO(RegionId, Order, Time). /* the order to be applied for the region the target is in */

References

[1] G. Harman, Object-Oriented Ontology: A New Theory of Everything, Pelican, 2018.
[2] A.K.M.A. Bhattacharya, A. Konar, Parallel and Distributed Logic Programming: Towards the Design of a Framework for the Next Generation Database Machines, Springer-Verlag, Berlin, Heidelberg, 2006.
[3] S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 1995.
[4] G. Weiss, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, MIT Press, 2000.
[5] C. Hewitt, The challenge of open systems, Byte Mag. 10 (4) (1985) 223–242.
[6] H. Burkhard, Agent oriented techniques for programming autonomous robots, Fundam. Inform. 102 (2010) 49–62.
[7] R. Kay, Aspect-oriented programming, Computerworld 37 (40) (2003).
[8] Y. Shoham, Agent-Oriented Programming, Technical Report STAN-CS-90-1335, Stanford University, 1990.
[9] Y. Shoham, Agent-oriented programming, Artif. Intell. (1993) 51–92.
[10] S. Das, J. Fox, Safe and Sound: Artificial Intelligence in Hazardous Applications, AAAI and MIT Press, 2000.
[11] J. Fox, M. Beveridge, D. Glasspool, Understanding intelligent agents: analysis and synthesis, AI Commun. 16 (2003) 139–152.
[12] E. Bonabeau, Agent-based modeling: methods and techniques for simulating human systems, Proc. Natl. Acad. Sci. USA 99 (2002) 7280–7287.
[13] FIPA, Foundation for Intelligent Physical Agents, 2009. [Online]. Available: www.fipa.org.
[14] T. Ören, L. Yilmaz, Synergies of simulation, agents, and systems engineering, Expert Syst. Appl. 39 (1) (Jan. 2012) 81–88.
[15] L. Yilmaz, T.I. Ören, On the synergy of simulation and agents: an innovation paradigm perspective.
Special issue on agent-directed simulation, Int. J. Intell. Control Syst. 14 (1) (2009) 4–19.
[16] T. Ören, Toward the body of knowledge of modeling and simulation, in: Proceedings of I/ITSEC (Interservice/Industry Training, Simulation and Education Conference), 2005.
[17] L. Yilmaz, T. Ören, N.-G. Aghaee, Intelligent agents, simulation, and gaming, Simul. Gaming 37 (3) (Sep. 2006) 339–349.
[18] M. Barbati, G. Bruno, A. Genovese, Applications of agent-based models for optimization problems: a literature review, Expert Syst. Appl. 39 (5) (Apr. 2012) 6020–6028.
[19] M.F. Hocaoğlu, et al., C4ISR Modeling and Simulation Project – Optimization Problem Solution Document, Gebze, 2004.
[20] A.G. Bruzzone, R. Mosca, R. Revetria, E. Bocca, E. Briano, Agent directed HLA simulation for complex supply chain modeling, Simulation 81 (9) (2005) 647–655.
[21] A. Jávor, Problem solving by the CASSANDRA simulation system controlled by combined mobile and static AI, in: Summer Computer Simulation Conference, 1997, pp. 723–728.
[22] A. Jávor, G. Szücs, Intelligent demons with hill climbing strategy for optimizing simulation models, in: Summer Computer Simulation Conference, 1998, pp. 99–104.
[23] A. Jávor, G. Szücs, An intelligent agent controlled methodology for determining adequate models, in: Summer Computer Simulation Conference, 2000, pp. 9–14.
[24] J.-H. Kim, T.G. Kim, DEVS-based framework for modeling/simulation of mobile agent systems, Simulation 76 (6) (Jun. 2001) 345–357.
[25] M.F. Hocaoğlu, C. Firat, H.S. Sarjoughian, DEVS/RAP: agent-based simulation, in: AI, Simulation and Planning in High-Autonomy Systems, 2000.
[26] S. Eilenberg, Automata, Languages and Machines, Academic Press, Oxford, 1974.
[27] M.H. Coakley, S.R. Smallwood, Using X-machines as a formal basis for describing agents in agent-based modelling, in: Agent-Directed Simulation, SpringSim 06, Huntsville, AL, USA, 2006.
[28] C.-C. Chen, C.D. Clack, S.B. Nagl, Identifying multi-level emergent behaviors in agent-directed simulations using complex event type specifications, Simulation 86 (1) (Oct. 2010) 41–51.
[29] V.R. Komma, P.K. Jain, N.K. Mehta, Ontology development and agent communication in agent-based simulation of AGVS, Int. J. Simul. Model. 11 (4) (Dec. 2012) 173–184.
[30] M.F. Hocaoğlu, Conceptual model and perdurantist modeling with reasoning, Simul. Notes Eur. 24 (2) (2014) 95–104.
[31] M.F. Hocaoğlu, Conceptual model and perdurantist modeling with reasoning, in: ASIM 2014, ASIM Symp. Simul. Tech., ASIM SST, 2014.
[32] G. Wagner, Ontologies and rules for enterprise modeling and simulation, in: Proc. IEEE Int. Enterp. Distrib. Object Comput. Workshop, EDOC, 2011, pp. 385–394.
[33] G. Guizzardi, G. Wagner, Dispositions and causal laws as the ontological foundation of transition rules in simulation models, in: 2013 Winter Simulations Conference (WSC), 2013, pp. 1335–1346.
[34] D. Roca, D. Nemirovsky, M. Casas, M. Moreto, M. Valero, M. Nemirovsky, IQ: an efficient and flexible queue-based simulation framework, in: 2017 IEEE 25th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, MASCOTS, 2017, pp. 143–149.
[35] R.L. Bagrodia, W.T. Liao, Maisie: a language for the design of efficient discrete-event simulations, IEEE Trans. Softw. Eng. 20 (4) (1994) 225–238.
[36] X. Song, H. Ji, W. Tang, X. Zhang, Z. Xiao, A simulation-language-compiler-based modeling and simulation framework, Int. J. Ind. Eng. 24 (2) (2017) 134–145.
[37] D.L. Pisla, et al., A simulation control interface for robotic structures used as flight simulators, in: 2008 IEEE Int. Conf. Autom. Qual. Testing, Robot., AQTR 2008, vol. 2, 2008, pp. 404–408.
[38] H. Wang, W. Zhu, Equipment simulation control system based on virtual language, in: International Conference on Mechatronics and Automation, 2009, pp. 4229–4234.
[39] M. Adelantado, P. Siron, Multi-resolution modeling and simulation of an air-ground combat application, in: Proceedings of the 2001 Simulation Interoperability Workshop, 2001.
[40] F. Çelik, DEVS-M: a discrete event simulation framework for MANETs, J. Comput. Sci. 13 (2016) 26–36.
[41] H.S. Sarjoughian, B.P. Zeigler, S.B. Hall, A layered modeling and simulation architecture for agent-based system development, Proc. IEEE 89 (2) (2001) 201–213.
[42] W. Kim, B. Lee, K. Kim, T. Yang, S. Kim, A real-time HWIL simulation control system architecture for implementing evaluation environment of complex embedded systems, in: Int. Conf. Adv. Commun. Technol., ICACT, 2011, pp. 254–259.
[43] M.F. Hocaoğlu, Aspect oriented programming perspective in agents and simulation, Int. J. Adv. Technol. 8 (3) (2017).
[44] D.L. Poole, A.K. Mackworth, Artificial Intelligence: Foundations of Computational Agents, Cambridge University Press, 2017.
[45] H.D. Burkhard, Agent oriented techniques for programming autonomous robots, Fundam. Inform. 102 (2010) 49–62.
[46] E. Posse, J. Dingel, Kiltera: a language for timed, event-driven, mobile and distributed simulation, in: Proc. IEEE Int. Symp. Distrib. Simul. Real-Time Appl., DS-RT, 2010, pp. 87–96.
[47] D. Harel, Statecharts: a visual formalism for complex systems, Sci. Comput. Program. 8 (1987) 231–274.
[48] A. Sterkin, State-oriented programming, in: 6th MPOOL Workshop, Cyprus, 2008.
[49] B.P. Zeigler, H. Praehofer, T.G. Kim, Theory of Modeling and Simulation, Academic Press, Florida, 2000.
[50] J.F. Allen, G. Ferguson, Actions and Events in Interval Temporal Logic, 1994.
[51] M.F. Hocaoğlu, Aspect oriented programming perspective in agent programming, in: USMOS 2013 – National Defense Application and Modeling & Simulation Conference, 2013 (in Turkish).
[52] F.G. McCabe, Logic and Objects, Prentice-Hall International Series in Computer Science, 1992.
[53] B.P. Zeigler, Theory of Modelling and Simulation, Robert E. Krieger Publishing Company, Malabar, Florida, 1976.
[54] N.R. Jennings, M.J. Wooldridge, Software agents, IEE Rev. (1996) 17–20.
[55] M. Travers, Programming with Agents: New Metaphors for Thinking About Computation, Program in Media Arts and Sciences, 1996.
[56] R.J. Firby, Adaptive Execution in Complex Dynamic Worlds, 1989.
[57] R.J. Firby, The RAP System Language Manual, Version 2.0, Neodesic Corporation, Evanston, IL, 2000.
[58] R.H. Bordini, J.F. Hübner, M. Wooldridge, Programming Multi-Agent Systems in AgentSpeak Using Jason, Wiley Series in Agent Technology, 2007.
[59] H.S. Sarjoughian, V. Elamvazhuthi, CoSMoS: a visual environment for component-based modeling, experimental design, and simulation, in: SIMUTools 2009, 2009.
[60] A.S. Rao, M.P. Georgeff, BDI agents: from theory to practice, in: Proceedings of the First International Conference on Multi-Agent Systems, ICMAS-95, 1995.
[61] R. Junges, F. Klügl, Programming agent behavior by learning in simulation models, Appl. Artif. Intell. 26 (4) (2012) 349–375.
[62] M.F. Hocaoğlu, B.P. Zeigler, H. Sarjoughian, Temporal System Identification and Model Verification: A Reverse Engineering Approach, 2004.
[63] G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C.V. Lopes, J.-M. Loingtier, J. Irwin, Aspect-oriented programming, in: Proceedings of the European Conference on Object-Oriented Programming, ECOOP, 1997.
[64] K. Lieberherr, D. Orleans, J. Ovlinger, Aspect-oriented programming with adaptive methods, Commun. ACM 44 (10) (2001) 39–41.
[65] N.C. Mendonça, C.F. Silva, I.G. Maia, M.A.F. Rodrigues, M.T.O. Valente, A loosely coupled aspect language for SOA applications, Int. J. Softw. Eng. Knowl. Eng. 18 (2) (Mar. 2008) 243–262.
[66] M.F. Hocaoğlu, Agent based distributed interactive simulation, in: USMOS 2015 – National Defense Application and Modeling & Simulation Conference, 2015, pp. 451–457 (in Turkish).
[67] R. Leblanc, A. Dingle, J.D. Hagar, J. Knight, Software Metrics, in: Fundamentals of Dependable Computing for Software Engineers, 2012.
[68] M.F. Hocaoğlu, Aspect oriented programming perspective in software agents and simulation, Int. J. Adv. Technol. 8 (3) (2017).
[69] M.F. Hocaoğlu, AdSiF: Developer Guide, Agena Information & Defense System Ltd., Istanbul, 2013, www.agenabst.com.
[70] M.F. Hocaoğlu, Qualitative reasoning for quantitative simulation, Model. Simul. Eng. 2018 (2018) 1–13.