Mobile agent evolution computing

Information Sciences 137 (2001) 53–73

www.elsevier.com/locate/ins

Timothy K. Shih *

Multimedia Information NEtwork (MINE) Lab., Department of Computer Science and Information Engineering, Tamkang University, Tamsui, Taipei Hsien 251, Taiwan, ROC

Abstract

The ecosystem is an evolutionary result of natural laws. The food web (or food chain) embeds a set of computation rules of natural balance. Based on the concepts of the food web, one of the laws that we may learn from nature besides neural networks and genetic algorithms, we propose a theoretical computation model for mobile-agent evolution on the Internet. We define an agent niche overlap graph and agent evolution states. We also propose a set of algorithms, used in our multimedia search programs, to simulate agent evolution. Agents are cloned to live on a remote host station based on three different strategies: the brute force strategy, the semi-brute force strategy, and the selective strategy. Evaluations of the different strategies are discussed, and guidelines for writing mobile-agent programs are proposed. The technique can be used in distributed information retrieval, where it adds computation load to servers but significantly reduces network communication traffic. In the literature on software agents, it is hard to find other similar models. The results of this research address only a small portion of the ice field. We hope that this problem will be further studied in the communities of network communications, multimedia information retrieval, and intelligent systems on the Internet. © 2001 Elsevier Science Inc. All rights reserved.

Keywords: Internet multimedia; Web services; Mobile network architectures

1. Introduction

Communication over the Internet is growing rapidly and will have profound implications for our economy, culture, and society. From mainframe-based

Tel.: +886-2621 5656 x2743, x2616; Fax: +886-2620-9749. E-mail address: [email protected] (T.K. Shih).

0020-0255/01/$ - see front matter © 2001 Elsevier Science Inc. All rights reserved. PII: S0020-0255(01)00109-8


numerical computing to decentralized downsizing, PCs and workstations connected by the Internet have become the trend for the next generation of computers. With the growing popularity of the World Wide Web, digital libraries over the Internet play an important role in the academic, business, and industrial worlds. To allow effective and efficient information retrieval, many search engines have been developed. However, due to the limited network communication bandwidth available today, researchers [15] suggest that distributed Internet search mechanisms should overcome the traditional information retrieval technologies, which perform the control of searching and data transmission on a single machine. Mobile agents are software programs that can travel over the Internet. Mobile search agents find the information specified by their original query user on a specific station, and send the search results back to the user. Only queries and results are transmitted over the Internet, so unnecessary transmission is avoided. In other words, mobile agent computing distributes computation loads among networked stations and reduces network traffic. A mobile agent, in general, can be more than just a search program. For instance, a mobile agent can serve as an emergency message broadcaster, an advertising agent, or a survey questionnaire collector. A mobile agent should have the following properties:
· It can achieve a goal automatically.
· It should be able to clone itself and propagate.
· It should be able to communicate with other agents.
· It has some evolution states to record the computation status.
The environment where mobile agents live is the Internet. Agents are distributed automatically or semi-automatically via some communication paths. Therefore, agents meet each other on the Internet. Agents who have the same goal can share information and cooperate.
However, if system resources (e.g., the network bandwidth or disk storage of a station) are insufficient, agents compete with each other. These phenomena are similar to those in the ecosystem of the real world. A creature is born with a goal to live and reproduce. To defend against their natural enemies, creatures of the same species cooperate. However, under a perturbation in an ecosystem, creatures compete with or even kill each other. The natural world has built in a law of balance. The food web (or food chain) embeds the law of creature evolution. With the growing popularity of the Internet where mobile agents live, it is our goal to learn from nature and propose an agent evolution computing model over the Internet. The model, even though it is applied only to the mobile-agent evolution discussed in this paper, can be generalized to solve other computer science problems: for instance, search problems in distributed Artificial Intelligence, network traffic control, or any computation that involves a large amount of concurrent/distributed computation. In general, an application of our food web evolution model should have the following properties:


· The application must contain a number of concurrent events.
· Events can be simulated by some processes, which can be partitioned into a number of groups according to the properties of the events.
· There must exist some consumer–producer relationships among groups so that dependencies can be determined.
· The number of processes must be large enough.
For instance, with the growing popularity of the Internet, Web-based documents are retrieved via search engines. Search processes can be conducted as several concurrent events distributed among Internet stations. Search events of the same kind (e.g., pursuing the same document) can be formed into a group. Within these agent groups, search agents can provide information to each other. Considering the number of Web sites in the future, the quantity of concurrent search events is reasonably large. We propose a logical network for agent connections/communications, called an Agent Communication Network (or ACN). An ACN is dynamic: it evolves as the agent communication proceeds. It also serves as a graph-theoretical model of agent evolution computing. Our research purposes include:
· Provide a model for agent evolution and define the associated rules.
· Construct simulation facilities to estimate agent evolution.
· Suggest guidelines for writing intelligent mobile-agent programs.
· Suggest strategies for constructing efficient ACNs, and
· Ensure network security in the simulation environment.
Given an ACN, the model finds which agent evolution policy produces the maximum throughput (i.e., the goals of agents achieved). Or, given a change to the structure of an ACN, the model is able to find out how to adjust the agent evolution policy in order to recover from the change (or how the throughput is affected). We have surveyed articles in the areas of mobile agents, personal agents, and intelligent agents. The related works are discussed in Section 2.
Some terminology and definitions are given in Section 3, where we also introduce the detailed concepts of the agent communication network. In our model, an agent evolves based on state transitions, which are discussed in Section 4. A graph-theoretical model which describes agent dependencies and competitions is given in Section 5. The agent evolution computing algorithms, which we used to construct our simulation, are addressed in Section 6. Simulations are then discussed in Section 7. Finally, we discuss our conclusions and possible extensions in Section 8.

2. Related work

The concept of agent-based software engineering is discussed in a survey paper [5]. The author presents two important issues: agent communication


language and agent architecture. Agent communication languages allow agents to share information and send messages to each other. Agent architecture, on the other hand, includes the network infrastructure and software architecture that support agent computing. An open agent architecture for kiosk-based multimedia information services is proposed in [3]. Surveys of different types of agents can be found in [1,2,16]. The author of [16] also indicates that Web-based intelligent agents are necessary and should be a trend in new engineering applications. However, application designers need to resolve the conflict between client–server-based designs and the peer-to-peer protocol. The Java Agent Template (JAT) is proposed, which allows application designers to write Java programs runnable on heterogeneous computer environments. Agent programs developed with this technique can send KQML messages to each other. In our simulation, we use JATLite, which is a version of JAT for Windows NT-based environments. In [1], the author summarizes three types of agents: intelligent autonomous agents, mobile agents, and cooperative agents. Agent applications and development tools are also discussed. A personalized agent system, BASAR (Building Agents Supporting Adaptive Retrieval), which maintains Web links based on personal bookmarks, is proposed in [21]. The system is able to support information updating when the content of a Web site that the user has bookmarked changes. The system also reduces the number of links by deleting seldom-used ones, and new links can be added by using search engines. A Web-based information browsing agent is proposed in [4]. Using KQML as the communication language, the proposed system significantly reduces network load by pushing the computation to the server. Structured meta-information is also used to decrease the complexity of browsing. A multi-agent system supporting intelligent information retrieval over the Internet is proposed in [14].
The system incorporates technologies from intelligent software agents, natural language understanding, and conceptual search mechanisms to access Earth and Space Science data. The concept of mobile agents is discussed in several articles [11–13,17]. Agent Tcl, a mobile-agent system providing navigation and communication services, security mechanisms, and debugging and tracking tools, is proposed in [6,7,9]. The system allows agent programs to move transparently between computers. A software technology called Telescript, with safety and security features, is discussed in [20]. The mobile-agent architecture MAGNA and its platform are presented in [11]. Another agent infrastructure implemented to support mobile agents is described in [12]. A mobile-agent technique to achieve load balancing in telecommunications networks is proposed in [18]; the mobile-agent programs discussed can travel among network nodes to suggest routes for better communications. Mobile service agent techniques and the corresponding architectural principles, as well as the requirements of a distributed agent environment,


are discussed in [10]. An evaluation of several commercial Java mobile agents is given in [8].

3. Agent communication network

Agents communicate with each other because they can help each other. For instance, agents sharing the same search query should be able to pass query results to each other so that redundant computation can be avoided. An Agent Communication Network (ACN) serves this purpose. Each node in an ACN (shown in Fig. 1) represents an agent on a computer network node, and each link represents a logical computer network connection (or an agent communication link). Since agents with the same goal want to pass results to each other, their communication relations can be described by a complete graph. Therefore, an ACN of agents holding different goals is a graph of complete graphs. In Fig. 1, there are six types of agents (i.e., species A–F). Since agents can have multiple goals (e.g., searching based on multiple criteria), an agent may belong to different complete graphs. For instance, agents a and b both belong to the complete graphs of species A and B. On the other hand, agents may have a unique goal (e.g., agents of species E or F). Agent e of species F is the only one of its kind. We now define some terminology used in this paper. A host station (or station) is a networked workstation on which agents live. A query station is a station where a user releases a query for achieving a set of goals. A station can hold multiple agents. Similarly, an agent can pursue multiple goals. An agent society (or society) is a set of agents fully connected by a complete graph, with a common goal associated with each agent in the society. A goal which belongs to different agents may have different priorities. An agent society with a common goal of the same priority is called a species. Since an agent may have multiple goals, it is possible that two or more societies (or species) have intersections. A communication cut set is a set of agents belonging to two

Fig. 1. An agent communication network.


distinct agent societies which share common agents (e.g., {a, b}, {c}, and {d} in Fig. 1). The removal of all elements of a communication cut set results in the separation of the two distinct societies. An agent in a communication cut set is called an articulation agent (e.g., agents a, b, c, and d in Fig. 1). Since agent societies (or species) are represented by complete graphs and these graphs have communication cut sets as intersections, articulation agents can be used to suggest a shortest network path between a query station and the station where an agent finds its goal. Another point is that an articulation agent can hold a repository, which contains the network communication statuses of the links of an agent society. Therefore, network resources can be evaluated when an agent checks its surviving environment to decide its evolution policy. It is necessary for us to give formal definitions of these terms to be used in our algorithms. In the following definitions, "==" reads as "is defined by" and "P X" represents a set of objects of type X. We also use a simple notation, "::=", to introduce a data type. The type name is given on the left of "::=" and the constants on the right are separated by optional "|" signs. Omitted items are represented as "...".

Goal_Achieved ::= TRUE | FALSE
Policy_Factors ::= NETWORK_BOUND | CPU_BOUND | MEMORY_BOUND | ...
Host_Station == URL × Resource × Agent_Set
Resource == Network × CPU × Memory × Information × ...
Agent == Goal_Set × Policy
Agent_Set == P Agent
Goal == Query_Return_URL × Query × Priority
Goal_Set == P Goal
Agent_Society == Agent_Set
Species == Agent_Society
URL == STRING
Network == REAL
CPU == REAL
Memory == REAL
Information == DATA
Policy == P Policy_Factors
Query_Return_URL == URL
Query == DATA
Priority == INTEGER

We use a Boolean type, Goal_Achieved, to indicate whether the goal of an agent is achieved. Policy_Factors is a data type which contains the alternatives for agent computation policies.
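The society, cut-set, and articulation-agent structure just described can be sketched in a few lines. The sketch below follows Fig. 1 in having agents a and b belong to both species A and B; the remaining agent names are filler of our own, not data from the paper.

```python
from itertools import combinations

# Miniature ACN: each society is the set of agents sharing one goal
# (conceptually a complete graph over that set).
societies = {
    "A": {"a", "b", "u", "v"},
    "B": {"a", "b", "w"},
    "E": {"x"},
}

def communication_cut_set(s1, s2):
    """Agents common to two societies; removing them separates the pair."""
    return societies[s1] & societies[s2]

def articulation_agents():
    """Agents that lie in the cut set of at least one pair of societies."""
    agents = set()
    for s1, s2 in combinations(societies, 2):
        agents |= communication_cut_set(s1, s2)
    return agents
```

Here `communication_cut_set("A", "B")` yields {a, b}, matching the {a, b} cut set of Fig. 1; species E, having no shared agents, contributes no articulation agents.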
A Host Station has a uniform resource locator


(i.e., URL),¹ which represents the station's unique network address. A host station has system resources (i.e., Resource) and can hold some agents (i.e., Agent_Set). Network represents the network facility available to a station. CPU represents the computation power of a station. Memory represents the storage of a station; it could be the main memory or the secondary memory. Information is the data available on a station. Other resources (omitted here) are also possible if human factors are considered. Each Agent has a set of goals (i.e., Goal_Set) and a Policy, which is a set of application-dependent factors (e.g., CPU bound, network bound, or memory bound) on which the agent depends to perform its evolution computation. Query_Return_URL is the URL to which an agent should return its query results. Query is an application-dependent specification which represents a user request to the agent. Priority is an integer which represents the priority of a goal: the larger the integer, the higher the goal priority. Agent_Society is a set of agents sharing a common goal. Species is an Agent_Society of the same goal priority. We also use four primitive data types, STRING, DATA, REAL and INTEGER, to represent ASCII strings, binary data, real numbers, and integers, respectively. The detailed definitions of the primitive data types are omitted. We use a simple notation to decompose/compose objects. For example, in our algorithms, if agent A is used, then A.Goal_Set represents the set of goals of this agent, where A is unique in the agent society (or species) to which it belongs. The agent society where agent A belongs is represented as A ↑ Agent_Society. In general, the "." and the "↑" operators perform the downward decomposition and the upward selection of an object. We will discuss the usage of these terms in our algorithms, which are given in Section 6. But first, we should address the concepts of agent evolution states and the species food web, in Sections 4 and 5, respectively.

4. Agent evolution states

An agent evolves. It can react to an environment, respond to another agent, and communicate with other agents. The evolution process of an agent involves some internal states. As shown in Fig. 2, an agent is in one of the following states after it is born and before it is killed or dies naturally:
· Searching: the agent is searching for a goal.
· Suspending: the agent is waiting for enough resources in its environment in order to search for its goal.
· Dangling: the agent has lost its goal of surviving and is waiting for a new goal.

¹ We could use an IP address. But, since our implementation of agents is based on the Web, a unique URL is used instead.


Fig. 2. Agent evolution state transition diagram.

· Mutating: the agent is changed to a new species with a new goal and a possible new host station.
An agent is born into a searching state to search for its goal (i.e., information of some kind). All creatures must have goals (e.g., to search for food). However, if its surviving environment (i.e., a host station) does not contain enough resources, the agent may transfer to a suspending state (i.e., the hibernation of a creature). The searching process will be resumed when the environment has better resources. But, if the environment lacks resources badly (i.e., natural disasters occur), the agent might be killed. When an agent finds its goal, the agent will pass the search results to the other agents of the same kind (or same society). The other agents will abort their search (since the goal is achieved) and transfer to a dangling state. An agent in a dangling state cannot survive for a long time. It will die after some days (i.e., a duration of time). Or, it will be re-assigned a new goal with a possible new host station, which is a new destination to which the agent should travel. In this case, the agent is in a mutating state and is re-born to search for the new goal. Agent evolution states keep the status of an agent. In order to maintain the activity of agents in a distributed computing environment, we use message passing as a mechanism to control agent state transitions. These agent messages are discussed in Section 4.1 and used in our algorithms.

4.1. Agent messages

Agent evolution computing is performed via actions in response to agent messages. There are two categories of messages. The first type of agent messages are sent from a source agent to a destination agent. Agents of the same species are looking for the same goal. When an agent finds its goal, there is no reason for the others of the same kind to keep on searching. Therefore, the agent

T.K. Shih / Information Sciences 137 (2001) 53±73

61

sends an abort_search message to the other agents of the same species. The message also includes the search results found by the source agent. After that, the agents are ready to process a new goal. They can be re-located at a new host station or stay at the original location. The new_host message is used in conjunction with the new_goal message. After a new_goal message is received, the destination agent transits to a mutating state.

Agent messages (from an agent):

abort_search(DestAgent, Search_results): The search process of DestAgent is aborted. The Search_results are passed to DestAgent. After that, the source and the destination agents both transit to the dangling state.
new_goal(DestAgent, Goal): A new search Goal is assigned to DestAgent.
new_host(DestAgent, Host_station): DestAgent is re-located at Host_station.
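The evolution states of Section 4, driven by these messages, form a small state machine. The sketch below is our reading of Fig. 2 and the message descriptions; the state names are the paper's, but the event labels and the exact transition set are assumptions.

```python
from enum import Enum, auto

class AgentState(Enum):
    SEARCHING = auto()
    SUSPENDING = auto()
    DANGLING = auto()
    MUTATING = auto()
    DEAD = auto()

# (state, event) -> next state; our reconstruction, not the paper's table.
TRANSITIONS = {
    (AgentState.SEARCHING, "suspend_search"): AgentState.SUSPENDING,
    (AgentState.SUSPENDING, "resume_search"): AgentState.SEARCHING,
    (AgentState.SEARCHING, "abort_search"): AgentState.DANGLING,
    (AgentState.DANGLING, "new_goal"): AgentState.MUTATING,
    (AgentState.MUTATING, "reborn"): AgentState.SEARCHING,
    (AgentState.DANGLING, "timeout"): AgentState.DEAD,
    (AgentState.SEARCHING, "kill_agent"): AgentState.DEAD,
    (AgentState.SUSPENDING, "kill_agent"): AgentState.DEAD,
}

def step(state, event):
    """Apply one transition; an unknown (state, event) pair is a no-op."""
    return TRANSITIONS.get((state, event), state)
```

For example, an agent whose society's goal is achieved goes SEARCHING → DANGLING on abort_search, then DANGLING → MUTATING on new_goal, and is re-born into SEARCHING.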

The second type of agent messages are sent from a host station. Depending on the resources available at a host station, the station may suspend and later resume the search process of a specific agent. In the case that the resources are lower than a watermark, the host station may kill an agent. In each host station there is an agent server, called a docking agent (discussed in Section 7). These agent servers control message passing among agents and their servers.

Host station messages (from a station):

suspend_search(DestAgent): The agent DestAgent is suspended.
resume_search(DestAgent): The agent DestAgent is resumed.
kill_agent(DestAgent): The agent DestAgent is killed.

5. Species food web and niche overlap graph

Agents can suspend/resume or even kill each other. We need a general policy to decide which agent is killed. By our definition, a species is a set of agents with the same goal at the same priority. It is the priority of a goal that we use to discriminate between two or more species.


We need to construct a directed graph which represents the dependencies between species. We call this digraph a species food web (or food web). Each node in the graph represents a species. All species of a connected food web (i.e., a graph component of the food web) have the same goal, with possibly different priorities. We assume that different users at different host stations may issue the same query with different priorities. Each directed edge in the food web has as its origin a species with a higher goal priority and as its terminus one with a lower priority. Since an agent (and thus a species) can have multiple goals, which may be shared with other agents, each goal of an articulation agent has an associated food web. The food web is thus used as a competition base for agents with the same goal at the same station. For instance, in Fig. 3, the agent communication network has three species. Species A has six agents (i.e., agents a to f). Species B has agents d, e, f, g, and h. Species C has agents d, e, and i. The graph also shows that species A and B have common goals g1, g2, and g3 at agents e, d, and f, respectively. Species A and C have common goals g1 and g2. Species B and C also have common goals g1 and g2. In the food web of g1, the goal priority of g1 at species A is 3 (shown as the subscript of A). But g1 has priorities 2 and 1 at species B and C, respectively. Similarly, the food webs of goals g2 and g3 are constructed. Based on the directed edges of these webs, agents of one species can send messages to suspend or kill agents of a different species. Each food web describes the goal priority dependencies of species. From a food web, we can further derive a niche overlap graph. In an ecosystem, two or

Fig. 3. An agent communication network and its species food webs.


more species have an ecological niche overlap (or niche overlap) if and only if they are competing for the same resource. A niche overlap graph can be used to represent the competition among species. The niche overlap graph is used in our algorithm to decide agent evolution policy and to estimate the effect when certain factors are changed in an agent communication network. Based on the niche overlap graph, the algorithm is able to suggest strategies to rearrange policies so that agents can achieve their highest performance efficiency. This concept is similar to the natural process that recovers from perturbations in ecosystems. Fig. 3 also shows the niche overlap graphs of goals g1, g2, and g3.

6. Agent evolution computing over an ACN

The algorithms proposed in this section use the agent evolution states and the niche overlap graphs discussed above for agent evolution computing. An agent wants to search for its goal. At the same time, since the searching process is distributed, an agent wants to find destination stations on which to clone itself. Searching and cloning essentially form a co-routine relation. A co-routine can be a pair of processes: while one process serves as a producer, the other serves as a consumer. When the consumer uses up the resource, the consumer is suspended. The producer is then activated and produces the resource until it reaches an upper limit, at which point the producer is suspended and the consumer is resumed. In our computation model, the searching process is a consumer, which needs new destinations to proceed with its search. The cloning process, on the other hand, is a producer which provides new URLs. Agent evolution on the agent communication network is an asynchronous computation. Agents living on different (or the same) stations communicate and work with each other via the agent messages discussed in Section 4.1. The searching and the cloning processes of an agent may run as a co-routine on a station.
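The producer/consumer co-routine relation between cloning and searching maps naturally onto generators, where the producer is suspended between items and resumed only when the consumer pulls. The URLs and the "goal found here" test below are made-up placeholders, not the paper's data.

```python
def cloning_producer(urls):
    """Yield one candidate destination URL at a time (the producer);
    execution suspends at each yield until the consumer asks again."""
    for url in urls:
        yield url

def searching_consumer(producer, is_goal_here):
    """Pull URLs from the producer until the goal is found (the consumer)."""
    visited = []
    for url in producer:
        visited.append(url)
        if is_goal_here(url):
            return url, visited
    return None, visited
```

Because the consumer stops pulling as soon as the goal is found, later stations are never produced, mirroring the suspend/resume alternation described above.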
However, different agents run on the same or separate stations concurrently. We use a formal specification approach to describe the logic of our evolution computation. Formal specifications use first-order logic, which is precise. In this paper, we use the Z specification language [19] to describe the model and the algorithms. Each algorithm or global variable in our discussion has two parts. The expressions above a horizontal line are the signatures of predicates, functions, or the data types of variables. Predicates and functions are constructed using quantifiers, logic operators, and other predicates (or functions). The signature of a predicate also indicates the types of its formal parameters. For instance, Agent × Goal × Host_Station are the types of the formal parameters of the predicate Agent_Search. The body, as the second part of the predicate, is specified below the horizontal line.
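Before turning to the formal algorithms, the food webs and niche overlap graphs of Section 5 can be reconstructed in a few lines. The species names and priorities below are taken from the g1 food web of Fig. 3 as described in the text; the rule of drawing an edge from every higher-priority species to every lower-priority one is our reading of the edge definition.

```python
from itertools import combinations

# Goal g1 is held by species A, B, C with priorities 3, 2, 1 (from Fig. 3).
g1_priorities = {"A": 3, "B": 2, "C": 1}

def food_web(priorities):
    """Directed edges from each higher-priority species to each
    lower-priority one for a single goal."""
    return {(s, t) for s in priorities for t in priorities
            if priorities[s] > priorities[t]}

def niche_overlap_graph(priorities):
    """Undirected edges between species competing for the same goal."""
    return {frozenset(pair) for pair in combinations(priorities, 2)}
```

Following the food web edges, agents of species A may suspend or kill agents of species B or C for goal g1, while the niche overlap graph records only that all three compete.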


We use some global variables throughout the formal specification. The variable goal_achieved is set to TRUE when the search goal is achieved, and FALSE otherwise. We also use two watermark variables, a and b, where a is the basic system resource requirement and b is the minimal requirement. Note that a must be greater than b, so that different levels of treatment are used when the resources are insufficient.

Global Variables and Constants
goal_achieved : Goal_Achieved
a : REAL
b : REAL
a > b

The algorithm Agent_Search is the starting point of the agent evolution simulation. If the system resources meet the basic requirement (i.e., a), the algorithm activates an agent in the searching state within a local station. If the search process finds its goal (e.g., the requested information is found), the goal is achieved. Achievement of the goal aborts the searches of all agents in the society and puts them all in a dangling state (including the agent who finds the goal). At the same time, the search result is sent back to the original query station via Query_Return_URL. If the goal cannot be achieved at an individual station, the agent is cloned to another station (agent propagation); the Agent_Clone algorithm is then used. On the other hand, the agent may be suspended or even killed if the system resources are below the basic requirement (i.e., Resource_Available(A, G, X) < a). In this case, the algorithm Agent_Suspend is used if the resources available are still sufficient for a future resumption of the agent. Otherwise, if the resources are below the minimal requirement, the algorithm Agent_Kill is used.
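The decision structure just described can be paraphrased as executable pseudocode. Only the ordering a > b comes from the text; the concrete watermark values and the two boolean inputs standing in for Resource_Available and Local_Search are placeholders of ours.

```python
A_WATERMARK = 0.6   # basic system resource requirement (a)
B_WATERMARK = 0.2   # minimal requirement (b); the text requires a > b

def agent_search_decision(resource_available, goal_found_locally):
    """Return the action Agent_Search would take for one agent."""
    if resource_available >= A_WATERMARK:
        # enough resources: search locally, else propagate by cloning
        return "goal_achieved" if goal_found_locally else "clone"
    if resource_available >= B_WATERMARK:
        # between the two watermarks: hibernate and wait for resources
        return "suspend"
    # below the minimal requirement: the station kills the agent
    return "kill"
```

The three-way split mirrors the three disjuncts of the Agent_Search predicate: achieve or clone above a, suspend between b and a, and kill below b.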
Agent Searching Algorithm

Agent_Search : Agent × Goal × Host_Station
------------------------------------------
∀ A : Agent; G : Goal; X : Host_Station •
Agent_Search(A, G, X) ⇔
  Resource_Available(A, G, X) ≥ a ⇒
    [ G ∈ Local_Search(A, X) ⇒ Abort_All(A ↑ Agent_Society) ∧
        send_result(X.URL, G.Query_Return_URL) ∧ goal_achieved = TRUE
    ∨ G ∉ Local_Search(A, X) ⇒ Agent_Clone(A, G, A ↑ Agent_Society) ]
  ∨ Resource_Available(A, G, X) ≥ b ⇒ Agent_Suspend(A, G, X)
  ∨ Resource_Available(A, G, X) < b ⇒ Agent_Kill(A, G, X)

Agent cloning is achieved by the Agent_Clone algorithm. When the cloning process wants to find new stations to which to broadcast an agent, two implementations can be considered. The first is to collect all URLs of stations found by one


search engine. But, considering the network resources available, the implementation may instead check for the common URLs found by two or more search engines. New URLs are collected by the Search_For_Stations algorithm, which is invoked in the agent cloning algorithm. The agent propagation strategy decides the computation efficiency of our model. In this research, we propose three strategies:
· the brute force agent distribution,
· the semi-brute force agent distribution, and
· the selective agent distribution.
The first strategy simply clones an agent onto a remote station if the potential station contains information that helps the agent to achieve its goal. The semi-brute force strategy, however, finds another agent on a potential station and assigns the goal to this agent. The selective approach not only tries to find a useful agent, but also checks the goals of this agent. The cloning strategies affect the size of agent societies and thus the efficiency of the computation.

Agent Cloning Algorithm: the Brute Force Strategy

Agent_Clone : Agent × Goal × Agent_Society
------------------------------------------
∀ A : Agent; G : Goal; S : Agent_Society •
Agent_Clone(A, G, S) ⇔
  [ ∀ X : Host_Station • X ∈ Search_For_Stations(G) ⇒
      ( ∃ A' : Agent • A' = copy(A) ∧ X.Agent_Set = X.Agent_Set ∪ {A'} ∧
        S = S ∪ {A'} ∧ Agent_Search(A', G, X) ) ]
  ∨ [ Search_For_Stations(G) = ∅ ⇒ goal_achieved = FALSE ]

The brute force agent distribution strategy makes a copy of agent A, using the copy function, at every station returned by the Search_For_Stations algorithm. The agent set at each station is updated, and the society S to which agent A belongs is changed. Agent A', a clone of agent A, is transmitted to station X for execution.
Agent Cloning Algorithm: the Semi-brute Force Strategy

Agent_Clone : Agent × Goal × Agent_Society
------------------------------------------
∀ A : Agent; G : Goal; S : Agent_Society •
Agent_Clone(A, G, S) ⇔
  [ ∀ X : Host_Station • X ∈ Search_For_Stations(G) ⇒
      [ ∃ A' : Agent • A' ∈ X.Agent_Set ⇒
          ( A'.Goal_Set = A'.Goal_Set ∪ {G} ∧ S = S ∪ {A'} ∧
            Agent_Search(A', G, X) ) ] ]
  ∨ [ Search_For_Stations(G) = ∅ ⇒ goal_achieved = FALSE ]

The semi-brute force agent distribution approach is similar to the brute force approach, except that it does not make a copy of the agent but gives the goal to an agent on the destination station. The agent which accepts this new goal (i.e., A') is activated for the new goal at its station.


Agent Cloning Algorithm: the Selective Strategy

Agent_Clone : Agent × Goal × Agent_Society
------------------------------------------
∀ A : Agent; G : Goal; S : Agent_Society •
Agent_Clone(A, G, S) ⇔
  [ ∀ X : Host_Station • X ∈ Search_For_Stations(G) ⇒
      [ ∃ A' : Agent • A' ∈ X.Agent_Set ⇒
          [ G ∈ A'.Goal_Set ⇒ S = S ∪ A' ↑ Agent_Society
          ∨ G ∉ A'.Goal_Set ⇒ ( A'.Goal_Set = A'.Goal_Set ∪ {G} ∧
              S = S ∪ {A'} ) ]
          ∧ Agent_Search(A', G, X) ]
      ∨ [ X.Agent_Set = ∅ ⇒
          [ ∃ A'' : Agent • A'' = copy(A) ∧ X.Agent_Set = {A''} ∧
            S = S ∪ {A''} ∧ Agent_Search(A'', G, X) ] ] ]
  ∨ [ Search_For_Stations(G) = ∅ ⇒ goal_achieved = FALSE ]

The last approach is more complicated. The selective cloning algorithm must check whether there is another agent at the destination station (i.e., X). If so, the algorithm checks whether the agent at this station (i.e., A') shares a goal with the agent to be cloned. If the two agents share the same goal, there is no need to clone another copy of the agent; the goal can simply be computed by the agent at the destination station. In this case, the union of the two societies is necessary (i.e., S = S ∪ A' ↑ Agent_Society). On the other hand, if the two agents do not have a common goal, to save computation resources, we may ask the agent at the destination station to help search for the additional goal. This case re-organizes the society to which the source agent belongs. The result also ensures that the number of agents on the ACN is kept at a minimum. Whether or not the two agents share the same goal, the Agent_Search algorithm is used to search for the goal again. In this case, agent A' is physically transmitted to station X for execution. When there is no agent running at the destination station, we need to increase the number of agents on the ACN by duplicating an agent at the destination station (i.e., the invocation of A'' = copy(A)). The society is re-organized, and the Agent_Search algorithm is called again.
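The three strategies can be contrasted in a small side-by-side sketch. Representing stations and agents as plain dictionaries is our simplification; each function mutates the station list in place and leaves out the recursive Agent_Search call.

```python
import copy

def brute_force(agent, goal, stations):
    """Clone the agent onto every candidate station."""
    for st in stations:
        clone = copy.deepcopy(agent)
        clone["goals"].add(goal)
        st["agents"].append(clone)

def semi_brute_force(agent, goal, stations):
    """Hand the goal to an agent already on the station (no copying)."""
    for st in stations:
        if st["agents"]:
            st["agents"][0]["goals"].add(goal)

def selective(agent, goal, stations):
    """Reuse a resident agent if one exists; clone only onto empty stations."""
    for st in stations:
        if st["agents"]:
            # adding the goal is a no-op when the resident already has it
            st["agents"][0]["goals"].add(goal)
        else:
            clone = copy.deepcopy(agent)
            clone["goals"].add(goal)
            st["agents"].append(clone)
```

Brute force grows every station's agent set; semi-brute force never adds agents; the selective strategy adds an agent only where none exists, keeping the agent count on the ACN at a minimum.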
In the case that no new station is found by the Search_For_Stations algorithm, the goal is not achieved. The agent search and agent clone algorithms use some auxiliary algorithms, which are discussed as follows. The estimation of available system resources depends on the agent policy, as defined in A.Policy. An agent policy is a set of factors indicated by name tags (e.g., NETWORK_BOUND). The estimate of resources is represented as a real number, computed from X.Resource of station X. Note that, in the algorithm, w1 and w2 are weights of factors (w1 + w2 = 1.0). We only describe some cases of using agent policies; other cases are possible but omitted. Moreover, we consider the priority of goal G.


If the priority is lower than some watermark (i.e., G.Priority < θ), we let r1 be a constant less than 1.0, so that resources are reserved for other agents. On the other hand, if the priority is high, the value returned by Resource_Available should be high, so that the agent can proceed with its computation immediately. The values of θ and ω depend on the agent application.

Auxiliary Algorithms

Resource_Available : Agent × Goal × Host_Station → REAL
∀A : Agent; G : Goal; X : Host_Station; R : REAL •
∃w1, w2, r1, r2 : REAL •
Resource_Available(A, G, X) = R ⇔
  [NETWORK_BOUND ∈ A.Policy ⇒ R = X.Resource.Network
   ∨ CPU_BOUND ∈ A.Policy ⇒ R = X.Resource.CPU
   ∨ MEMORY_BOUND ∈ A.Policy ⇒ R = X.Resource.Memory
   ∨ CPU_BOUND ∈ A.Policy ∧ MEMORY_BOUND ∈ A.Policy ⇒
     R = X.Resource.CPU × w1 + X.Resource.Memory × w2 ∧ w1 + w2 = 1.0
   ∨ …]
  ∧ ∃θ, ω : Priority •
    [G.Priority < θ ⇒ (R = R × r1 ∧ r1 < 1.0)
     ∨ G.Priority > ω ⇒ (R = R × r2 ∧ r2 > 1.0)]

The above algorithms describe how an agent evolves from one state to another. How agents affect each other depends on the system resources available. However, in an ACN, it is possible that agents suspend or even kill each other, as described in previous sections. The niche overlap graph of each goal plays an important role here. We use the Agent_Suspend and Agent_Kill algorithms to take the niche overlap graph of a goal (i.e., niche_compete(G)) into consideration. In the Agent_Suspend algorithm, if there exists a goal that has a lower priority than the goal of the searching agent, a suspend message is sent to that goal's agent to delay its search (i.e., via suspend(G′↑Agent)). The searching agent may be resumed afterwards, since system resources may be released by these suspensions. In the Agent_Kill algorithm, however, a kill message is sent instead (i.e., via terminate(G′↑Agent)). The available system resource is checked against the minimum requirement β. If resumption is feasible, the Agent_Search algorithm is invoked.
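The Resource_Available computation can be illustrated as follows; the watermarks θ and ω, the multipliers r1 and r2, and the weights are assumed values chosen for the example, not values from the paper.

```python
# Sketch of the Resource_Available policy computation; the watermark
# values, multipliers, and weights below are illustrative assumptions.

THETA, OMEGA = 3, 7      # priority watermarks (theta, omega)
R1, R2 = 0.5, 1.5        # r1 < 1.0 reserves resources; r2 > 1.0 boosts

def resource_available(policy, resource, priority, w1=0.6, w2=0.4):
    """policy: a set of name tags; resource: station levels in [0, 1];
    returns the estimated resource level R as a real number."""
    if "NETWORK_BOUND" in policy:
        r = resource["network"]
    elif "CPU_BOUND" in policy and "MEMORY_BOUND" in policy:
        assert abs(w1 + w2 - 1.0) < 1e-9      # w1 + w2 = 1.0
        r = resource["cpu"] * w1 + resource["memory"] * w2
    elif "CPU_BOUND" in policy:
        r = resource["cpu"]
    elif "MEMORY_BOUND" in policy:
        r = resource["memory"]
    else:
        r = min(resource.values())            # default: the bottleneck
    if priority < THETA:
        r *= R1        # low priority: reserve resources for other agents
    elif priority > OMEGA:
        r *= R2        # high priority: let the agent proceed at once
    return r

station = {"network": 0.8, "cpu": 0.5, "memory": 0.9}
```

A network-bound agent on this station sees 0.8; a low-priority CPU-bound goal sees only half of the raw CPU level, which is the reservation effect of r1.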
Otherwise, the system should terminate the searching agent.

Agent_Suspend : Agent × Goal × Host_Station
∀A : Agent; G : Goal; X : Host_Station •
Agent_Suspend(A, G, X) ⇔ ∃GS : Goal_Set •
  GS = niche_compete(G)
  ∧ (∀G′ : Goal • G′ ∈ GS ∧ G′.Priority < G.Priority ⇒ suspend(G′↑Agent))
  ∧ (Resource_Available(A, G, X) ≥ β ⇒ Agent_Search(A, G, X)
     ∨ Resource_Available(A, G, X) < β ⇒ suspend(A))

Agent_Kill : Agent × Goal × Host_Station
∀A : Agent; G : Goal; X : Host_Station •
Agent_Kill(A, G, X) ⇔ ∃GS : Goal_Set •
  GS = niche_compete(G)
  ∧ (∀G′ : Goal • G′ ∈ GS ∧ G′.Priority < G.Priority ⇒ terminate(G′↑Agent))
  ∧ (Resource_Available(A, G, X) ≥ β ⇒ Agent_Search(A, G, X)
     ∨ Resource_Available(A, G, X) < β ⇒ terminate(A))

The other auxiliary algorithms are relatively less complicated. The Local_Search function takes as input an agent and a station and returns the set of goals found by the agent on this station. A match predicate is used; this predicate is application-dependent. It could be a search program that locates a keyword in a Web page, or a request for information from a user (e.g., a survey questionnaire). The Abort_All predicate takes as input an agent society and terminates all agents within this society. The Search_For_Stations function takes as input a goal and returns a set of host stations. The stations are selected by the candidate_station function, which estimates the possibility of goal achievement at a station; this function can be implemented as a Web search engine that looks for candidate URLs. We have omitted some detailed definitions of the above auxiliary algorithms, as well as some primitive functions, which are self-explanatory.

Local_Search : Agent × Host_Station → Goal_Set
∀A : Agent; X : Host_Station; GS : Goal_Set •
Local_Search(A, X) = GS ⇔
  GS = {G : Goal | G ∈ A.Goal_Set ∧ match(G.Query, X.Resource.Information)}

Abort_All : Agent_Society
∀S : Agent_Society •
Abort_All(S) ⇔ ∀A : Agent • A ∈ S ⇒ terminate(A)

Search_For_Stations : Goal → ℙ Host_Station
∀G : Goal; X_Set : ℙ Host_Station •
Search_For_Stations(G) = X_Set ⇔
  X_Set = {X : Host_Station | candidate_station(G, X)}
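Since Agent_Suspend and Agent_Kill differ only in the message they send to competitors, a single hedged sketch can cover both; the Goal fields, the β value, and the callback signatures are assumptions made for the example.

```python
# Sketch of Agent_Suspend / Agent_Kill; Goal fields, beta, and the
# callback signatures are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    priority: int
    agent: str        # the agent that owns this goal (G 'up-arrow' Agent)

def preempt(agent, goal, niche_compete, resource_available, beta,
            action, agent_search):
    """Shared skeleton: Agent_Suspend passes a suspend action,
    Agent_Kill a terminate action; everything else is identical."""
    for g in niche_compete(goal):            # GS = niche_compete(G)
        if g.priority < goal.priority:
            action(g.agent)                  # suspend/terminate competitor
    if resource_available(agent, goal) >= beta:
        agent_search(agent, goal)            # resources freed: search again
        return "searching"
    action(agent)                            # still starved: act on self
    return "stopped"

# Usage: a high-priority search preempts a low-priority competitor,
# then resumes searching because enough resource is now available.
suspended, searched = [], []
low = Goal("ads", priority=1, agent="A1")
high = Goal("search", priority=5, agent="A2")
state = preempt("A2", high,
                niche_compete=lambda g: [low],
                resource_available=lambda a, g: 0.9,
                beta=0.2,
                action=suspended.append,
                agent_search=lambda a, g: searched.append(a))
```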


7. Simulation It is dicult to build our agent evolution computing environment or any agent systems from the scratch. Therefore, we choose JATLite as the supporting system for the design of our simulation. Another reason for choosing JATLite is that agents written in Java, which can be achieved using JATLite, can run on di€erent computer environments, since Java virtual machine is widely available and Java applet agents can be run from most Web browsers. JATLite is a software library developed at Stanford University for building software agents which communicate over the Internet. Developed in the Java language, JATLite uses Knowledge, Query and Manipulation Language (KQML) as the agent communication language. KQML is well developed and adapted to the agent research community. JATLite has ®ve layers: the abstract layer, the base layer, the KQML layer, the router layer, and the protocol layer. The lowest layer is the abstract layer which uses TCP/IP protocol. The base layer handles basic communication, which can be extended, for instance, to allow multiple message ports. The KQML layer parses and store KQML messages. The router layer is in charge of sending and receiving agent messages, as well as agent name registration. The protocol layer supports some standard Internet services, such as FTP, SMTP, HTTP, and POP3. Since our agent evolution model does not rely on any speci®c low level communication protocol, most of the functions we used are in the router layer and the protocol layer. The design of our agent simulation environment is illustrated in Fig. 4. Based on JATLite functions, it is convenient to develop agents which pass messages to each other. However, agent cloning is a technical problem. When an agent program (e.g., a search agent) is copied to a remote host station, the agent needs to be ``alive'' on this station. However, the station where the original agent program lives cannot initiate a process on a remote station. 
Therefore, in the simulation environment, we have docking agents to serve this purpose. A docking agent is an agent daemon that runs on a host station. The daemon is not part of agent evolution computing but is designed to support it. A docking agent is constructed in JATLite with the ability to launch another agent program (e.g., a search agent). Search agents are used as an example in our simulation. In our testbed, we also consider other types of agents, for instance agents that use network resources intensively, such as advertising agents that deliver multimedia information. The agent cloning process includes a number of steps. First, when a search agent is about to clone itself, it sends a query to a commercial search engine, which responds with a set of URLs. Based on these URLs, the search agent makes a copy of itself on a remote station with the help of the docking agent on that station (i.e., by sending a clone message). After that, the docking agent may start a new process


Fig. 4. Agent simulation environment based on JATLite.

to run the search agent program. Note that a docking agent is installed on each host station of our simulation environment. One of the reasons for using docking agents is that, when an agent program is copied, the docking agent has write access to the station where the agent program is duplicated. Docking agents send host station messages (see Section 4.1) to search agents; thus, a docking agent may kill a search agent if necessary. Search agents may send agent messages (see Section 4.1) to each other. Search results are sent back to the agent that initiated the search goal via query result messages. Each host station may hold some search agents, but holding a message router on a host station is optional. Message routers (or agent message routers) are programs provided in the JATLite tool. Their responsibilities include allowing agent name registrations, maintaining agent message queues, and sending/receiving agent messages. A message router in our simulation environment handles the following types of messages:
· Agent messages.
· Query result messages.
· Host station messages.
· Clone messages.
Since a message router should be in an efficient location to deliver messages, it is recommended to install a message router on a station where an articulation
search agent lives. Articulation search agents are connected to other search agents of the same societies; therefore, they suggest efficient routes of communication. A message router, besides holding agent registration information and an agent message queue, also holds a repository that contains useful information such as network status.

7.1. Evaluation of the simulation

Different agent distribution mechanisms have different impacts on the sizes of the agent societies and on the overall performance. In the brute force agent distribution, agent societies grow due to the duplication of agents launched by the same user, as well as by different users. Therefore, multiple copies of agents of the same priority exist in the ACN. The complexity of the agent cloning algorithm is lower, but network traffic as well as the size of the agent societies is increased. In general, this cloning strategy performs badly. In the semi-brute force agent distribution, agent societies increase in size when different users launch the same goal with different priorities, but agents from the same user do not increase the size of the agent societies. In the selective agent distribution, agent societies increase in size on demand. Even though the complexity of the agent cloning algorithm is higher, the number of agents, and thus network traffic, is reduced. In general, this strategy achieves the best performance.

8. Conclusions

Mobile agent-based software engineering is interesting. However, in the literature, we did not find any other similar theoretical approach that models how mobile agents should act on the Internet, especially how mobile agents can cooperate and compete. A theoretical computation model for agent evolution was proposed in this paper, and algorithms for the realization of our model were given. Consequently, our contributions in this paper are:
· We proposed a model for agent evolution computing based on the food web, the law of natural balancing.
· We developed a set of algorithms for the distributed computing of agent programs.
· We designed a simulation environment based on JATLite to support our theory.
However, there are other possible extensions to the evolution model. For instance, species in the natural world learn from their enemies. In a future model, agents could learn from each other. We can add a new state, the ``learning'' state, to the agent evolution states. When an agent is in the dangling state, it can communicate with other agents via some agent communication language.


Computing methods can be replicated from other agents, and the agent then transits to the mutating state to wait for another new goal. In addition, when a station lacks system resources, an agent in the suspending state can change its policy to adapt to the environment before it transits to the searching state. These are the things that agents can learn. On the other hand, in the cloning process, two agents on a station sharing a common goal could be composed into a new agent (i.e., a marriage of agents). This agent may have more goals than its parents. An agent composition state could be added to the agent evolution states, but the destination station where this new agent lives must be negotiated. The evolution of computers has moved from mainframe-based numerical computation to networked stations. In line with the success of Internet technologies, in the future, computation and information storage will not be limited to a single machine. It is possible that an individual will buy a primitive computer that only has a terminal connected to the Internet, with personal data and computation power embedded within the Internet. Mobile agents and agent evolution computing will be very interesting and important. Our agent evolution model addresses only a small portion of the ice field, which should be further studied in the communities of network communications, automatic information retrieval, and intelligent systems.

Acknowledgements

I would like to thank all members of the MINE Lab, especially Lawrence Y.G. Deng and Tseng-Hsiang Chang, for their discussions.

Bruten, Ant-like agents for load balancing in telecommunications networks, in: Proceedings of the 1997 1st International Conference on Autonomous Agents, Marina del Rey, CA, USA, 1997, pp. 209±216. [19] J.M. Spivey, The Z Notation: A Reference Manual. International Series in Computer Science, Prentice-Hall, Englewood Cli€s, NJ, 1989. [20] J. Tardo, L. Valente, Mobile-agent security and telescript, in: Proceedings of the 1996 41st IEEE Computer Society International Conference (COMPCON'96), Santa Clara, CA, USA, 1996, pp. 58±63. [21] C.G. Thomas, G. Fischer, Using agents to personalize the web, in: Proceedings of the 1997 ACM IUI Conference (IUI'97), Orlando, FL, USA, 1997, pp. 53±60.