Knowledge sharing and reuse for engineering design integration

Expert Systems With Applications 14 (1998) 399–408

Kuo-Ming Chao a, Peter Smith b,*, William Hills a, Barry Florida-James a, Peter Norman a

a Engineering Design Centre, University of Newcastle-upon-Tyne, Newcastle-upon-Tyne, U.K.
b School of Computing and Information Systems, University of Sunderland, Priestman Building, Green Terrace, Sunderland SR1 3PZ, U.K.

* Corresponding author.
Abstract

Completion of complex engineering designs (e.g. the design of offshore oil platforms, ships, etc.) often involves a number of agents who have to share or reuse design models within a distributed environment. This paper presents a knowledge sharing workbench which enables agents to share common domain knowledge, based on the problems which emerge in the design process. The workbench includes Application Programming Interfaces provided by expert system shells, an Object Request Broker, and a number of ontologies to facilitate the construction of new knowledge-based systems. In addition, the distributed knowledge acquisition tools generated by the workbench can maintain model consistency between agents when specification changes occur in any of the agents. A case study from the petrochemical industry is used to illustrate the use of the workbench for the integration of a number of agents. This case study demonstrates that a process flow knowledge model in an offshore petrochemical plant, designed for operational purposes (e.g. fault diagnosis), can be reused to form part of a new knowledge-based system which generates the data for plant layout design. © 1998 Elsevier Science Ltd. All rights reserved.

1. Introduction

Collaborations between agents in a multi-agent system tend to take one agent's output as another agent's input without directly utilising the agent's built-in knowledge model (Jennings et al., 1996). Such solutions may not be sufficient to solve problems where the agents have to share a common product model in a complex engineering design process. The difficulties of modelling a common, complex product model for different disciplinary design agents have been recognised by a number of research groups (Guenov, 1996; Van Heijst et al., 1997). The solutions proposed employ sharable ontologies (Neches et al., 1991) or automatic knowledge acquisition tools (Musen, 1992). The assumption underlying these solutions is that the agents use arbitrary terms in their problem-solving methods. The knowledge models, however, can only be reused within the frameworks for which they have been built. This limitation means that existing agents, which have been built on different software and hardware platforms to the target agents, cannot be reused. Previous researchers have not considered the issue of distributed environments as an area of major importance for their proposed frameworks. Work done by the authors of this paper demonstrates that recent progress in computer networks, object-oriented programming and information modelling has made possible the development of a layer above existing design tools (models) which allows the knock-on effects of design changes across distributed design agents to be traced (Guenov, 1996).

The NewSun knowledge sharing and reuse workbench described in this paper is involved in extracting process flow knowledge from a Process Flow Diagram (PFD) for use in an Associativity Data Generation (ADG) agent. The agent involved in the extraction of the knowledge is called the PFD agent. The workbench also provides a number of ontologies (e.g. cluster analysis method, equipment selection method and temperature ontology) to assist the user in constructing the ADG agent.

The next section of this paper outlines the mechanisms provided by the NewSun knowledge sharing and reuse workbench which have been developed by the authors. Section 3 summarises the case study adopted from the petrochemical industry for carrying out the collaborative design of a production plant. The PFD agent which generates the knowledge model of a PFD is discussed in Section 4. Section 5 includes details of the processes provided by the workbench to assist the user in constructing the ADG agent. A number of reusable ontologies provided by the workbench are also presented. Section 6 describes how the output of the ADG agent becomes a layout design agent's input to produce an effective process plant layout. Finally, conclusions from the research are drawn and future work is specified in Section 7.


Fig. 1. The NewSun workbench.

2. A knowledge sharing and reuse workbench

The NewSun workbench (see Fig. 1) allows existing Knowledge-Based Systems (KBSs) to be reused by other systems, even though the KBSs may run on different hardware platforms, be implemented in different software, and be located at different sites. In addition, the workbench provides an ontology library which contains three different types of reusable ontology (i.e. domain, communication and method ontologies) to facilitate the construction of a new KBS. The domain ontology describes a specific domain theory in a generic way: the users input the specific requirements of the system, and the system generates the specific domain knowledge for the application. A communication ontology, which sits between two agents, plays a knowledge transformation role where inconsistent terms may occur between two knowledge bases. In other words, the communication ontology ensures that the terms used by different agents are consistent. Finally, problem-solving methods often require a large number of inputs and produce a large number of outputs. The structure of a method ontology includes input and output objects, and an implemented method body. The method ontology formally defines the input which is required by the problem-solving methods, and the results of the problem-solving methods are also accommodated in it. A method ontology can be included by an agent as a part of its mechanisms through the NewSun workbench.
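The structure of a method ontology described above can be pictured as a small programming interface: formally declared inputs and outputs wrapped around an implemented method body. The C++ sketch below is purely illustrative; the type and member names are our own assumptions, not part of NewSun.

```cpp
#include <map>
#include <string>
#include <vector>

// A slot value and an object (a named collection of slots), as used by
// the frame-based knowledge models exchanged between agents.
using SlotValue = std::string;
using OntologyObject = std::map<std::string, SlotValue>;

// A method ontology: formally declared inputs and outputs wrapped
// around an implemented method body.
class MethodOntology {
public:
    virtual ~MethodOntology() = default;

    // Formal definition of the input slots the problem-solving method expects.
    virtual std::vector<std::string> requiredInputSlots() const = 0;

    // The implemented method body: consumes input objects and produces the
    // output objects that the target agent maps back into its own knowledge base.
    virtual std::vector<OntologyObject>
    run(const std::vector<OntologyObject>& inputs) = 0;
};
```

The equipment selection and cluster analysis methods discussed in Section 5 would, in such a sketch, be concrete subclasses of this interface.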

The mechanism used to retrieve the knowledge held within the existing KBS and to construct a new KBS utilises Application Programming Interfaces (APIs) provided by expert system shells and an Object Request Broker (ORB). The ORB allows distributed agents to communicate with each other by providing a standard interface definition language. The ORB and the APIs are combined to make KBSs accessible across a network such as the Internet.

Another important feature of the workbench is a relation mapping mechanism. The relation mapping mechanism includes renaming, transforming and levelling functions. The renaming function allows object and slot names to be changed between the source agent and the target agent. The transforming function uses a communication ontology to convert terms or values in a slot into the desired terms or values. The objects in the knowledge base can be changed to instances or classes by using the levelling function, in order to meet the new specifications within the target agent.

During the construction of a new KBS, every process entered by the user is recorded in the workbench for the generation of distributed knowledge acquisition tools. The tools create the objects in the target knowledge model based on the source agent's knowledge model, and retrieve the values from the source agent to form the new knowledge base. An interdependent relation between the two agents grows due to this characteristic. This mechanism is also used to update the knowledge model in the target agent once any change to the knowledge model in the source agent has been made.
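As an illustration of the relation mapping mechanism, the following minimal C++ sketch combines the three functions described above: slot renaming, value transformation through a communication ontology (here an assumed psi-to-bar conversion), and a levelling flag. All identifiers and units are illustrative assumptions rather than the workbench's actual API.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>

// An object is a map from slot names to (string-encoded) slot values.
using Object = std::map<std::string, std::string>;

// Levelling: whether the copied object becomes an instance or a class
// in the target agent.
enum class Level { Instance, Class };

struct MappingRule {
    std::map<std::string, std::string> slotRenames;                        // renaming function
    std::map<std::string,
             std::function<std::string(const std::string&)>> transforms;   // transforming function
    Level targetLevel = Level::Instance;                                   // levelling function
};

// Apply the mapping rule to one source object to obtain the target object.
// (The levelling flag would decide whether a class or an instance is created;
// it is only recorded here.)
Object applyMapping(const Object& source, const MappingRule& rule) {
    Object target;
    for (const auto& [slot, value] : source) {
        const std::string name =
            rule.slotRenames.count(slot) ? rule.slotRenames.at(slot) : slot;
        const auto t = rule.transforms.find(slot);
        target[name] = (t != rule.transforms.end()) ? t->second(value) : value;
    }
    return target;
}

int main() {
    MappingRule rule;
    rule.slotRenames["PRESSURE"] = "working-pressure";
    rule.transforms["PRESSURE"] = [](const std::string& psi) {
        // Example unit conversion via a communication ontology: psi -> bar.
        return std::to_string(std::stod(psi) / 14.5038);
    };
    Object pump{{"PRESSURE", "870"}, {"tag-name", "P0001"}};
    for (const auto& [slot, value] : applyMapping(pump, rule))
        std::cout << slot << " = " << value << "\n";
}
```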


Fig. 2. Agent head.

In order to make the agent's knowledge model accessible without changing the agent's internal structure, an agent head structure is imposed. The agent head includes an acquaintance module, a self module, and a communication module (see Fig. 2). The acquaintance module stores the knowledge in which the agent is interested and which the agent can obtain. The self module describes what knowledge the agent contains. The communication module consists of a set of communication programs which allow the agent to communicate with other agents. The APIs form an interface layer between the agent head and the agent body. As a result, the knowledge model held within the agent can be retrieved and used to form a new knowledge model.
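The agent head can be summarised by a small data structure such as the hedged C++ sketch below; the field and method names are assumptions made for illustration only.

```cpp
#include <string>
#include <vector>

// Where a particular piece of knowledge lives: which agent and which frame.
struct KnowledgeDescriptor {
    std::string agentName;
    std::string frameName;
};

// The agent head: an acquaintance module (knowledge the agent is interested
// in and can obtain from others) and a self module (knowledge the agent
// itself holds and is willing to export).
struct AgentHead {
    std::vector<KnowledgeDescriptor> acquaintance;
    std::vector<KnowledgeDescriptor> self;
};

// The communication module: the set of programs through which the agent
// talks to other agents (realised in NewSun via an ORB and the shell APIs).
class CommunicationModule {
public:
    virtual ~CommunicationModule() = default;
    virtual std::string requestSlotValue(const KnowledgeDescriptor& what,
                                         const std::string& instance,
                                         const std::string& slot) = 0;
};
```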

3. Case study

Fig. 3 shows a simplified Process Flow Diagram (PFD) of the gas–condensate separation process used on an offshore platform. Based on the PFD, a spatial layout of the plant must be designed. To accomplish this there are seven agents (design tools) which have been developed separately at different points in time, have different representations of the design model (or part of it), run on different operating systems (UNIX, Windows NT, or Windows 3.11), and reside in different geographical locations (Newcastle and Sunderland). These are:

• PFD agent (King, 1995);
• Power System agent (Guenov et al., 1996);
• Plant Operation agent (Guenov et al., 1996);
• Associativity Data Generation (ADG) agent (Chao et al., 1997a);
• Spatial Layout Design (SLD) agent (Smith et al., 1996);
• 3D Parametric CAD agent (Guenov et al., 1996); and
• Cost Estimating agent (Guenov et al., 1996).

Fig. 3. Process flow diagram.

The PFD agent (PFD simulation and design) is responsible for the design of the gas–condensate separation process and generates a model of the PFD in the form of frames and rules. The Plant Operation agent is based on the output of the PFD agent and produces diagnostic rules for the functioning of the plant. The Power System agent computes the power demand and sizes the power generators. Equipment and connectivity data are carried from the equipment catalogue and the PFD agent to the ADG agent via the NewSun workbench. The ADG agent calculates the strength of the relationship between any two pieces of equipment (connectivity, safety, function, etc.). This agent also generates the input for the Spatial Layout Design (SLD) agent, which is responsible for producing the graphical layout in 2.5 dimensions. Further improvements (e.g. reallocating equipment) can be made to the layout when the model is displayed in the 3D Parametric CAD system. The Parametric CAD system relates any design changes to the Cost Estimating agent, allowing a revision of costs following a design change (see Fig. 4).

Two major mechanisms are used in the architecture, namely knowledge sharing and reuse (Chao et al., 1996a,b) and the 'knock-on' effect (Guenov, 1996). The NewSun knowledge sharing and reuse workbench plays a significant role when the ADG agent shares knowledge with the PFD agent.


Fig. 4. Agent interaction.

When there is a need to change the specifications, the whole changing mechanism is based on 'knock-on' effect algorithms (Guenov, 1996). The case study can be divided into two stages. The first stage is to pass the knowledge from one mechanism to the other in order to generate the first draft of the design. The second stage involves changing the specifications, after any inappropriate equipment (e.g. oversized or too costly items) in the plant has been identified. This triggers the whole system into reconstructing some or all of the design (including PFD specifications and equipment selections) according to the new requirements. When a change to information held within one agent is made, changes to the related information held within other agents are indicated by the knock-on effect mechanism. The knock-on effect mechanism traces the information flow between models, and is based on the interdependent relationship between objects in the models (i.e. equipment items).

During the design process in the case study, the operating pressure of the condensate export pipes was required to be increased, because of insufficient delivery pressure. This was achieved by resizing the condensate export pump (see No. 15 in Fig. 3). The thickness of the walls of the pipes and the Pipe Inspection Gear (PIG) Launcher (No. 16) also required checking. The shaded area in Fig. 3 shows the affected part of the PFD.

The specifications of two condensate export pumps in the PFD agent had to be changed. The specifications of the PIG Launcher were also changed. The changed information was passed to the ADG agent to regenerate the associativity data for the layout design. The resized pumps demanded more electrical power, which may have required a larger turbine generator. Finally, the resized equipment had to be checked for interference in the spatial layout, and the corresponding load bearing structures also had to be checked.

The ADG agent is a front end to the SLD agent. A number of factors are taken into consideration in producing the layout. For instance, pipe lengths are kept to a minimum by siting equipment close together, whereas safety may require that space is left between items of equipment. Contradictory requirements mean that compromises have to be made. For instance, gas turbines in power generation and gas compression functions both have a gas input, so should be situated together; however, they are generally separated due to the dangers inherent in compression (e.g. explosion). Other considerations include grouping functionally similar equipment into a 'cluster' for more efficient servicing, for example in a 'pumproom' (Chao et al., 1997a; Hills et al., 1993).

Any possible partial interference in the spatial layout as a result of adding or resizing equipment will be shown by spatial relationships. Resizing of parts may mean that their spatial envelope (the area around an item of equipment) interferes with neighbouring equipment. Thus, a simulation is required to assess the number of affected parts, as within a certain area, items of equipment can be translated and/or rotated in order to provide more space. Where extra space cannot be freed, there may be a considerable ripple effect due to part interference (Smith et al., 1996).

The ADG agent includes the knowledge models of a PFD, an equipment catalogue, and a number of problem-solving methods. The knowledge model of the PFD is derived from the results of the PFD agent, and is generated by a Distributed Knowledge Acquisition (DKA) mechanism via the NewSun workbench. The PFD agent was developed for the purpose of process simulation and fault diagnosis, and part of it can be reused in the engineering design. The information relating to equipment specifications and process flows becomes the input to the ADG agent.
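A knock-on trace of the kind described above can be pictured as a breadth-first walk over a graph of interdependencies between design objects. The sketch below is not Guenov's published algorithm; the dependency edges are assumptions drawn loosely from the pump-resizing example.

```cpp
#include <iostream>
#include <map>
#include <queue>
#include <set>
#include <string>
#include <vector>

int main() {
    // "X has dependents Y" edges, loosely following the pump-resizing example.
    std::map<std::string, std::vector<std::string>> dependents{
        {"condensate-export-pump-15", {"export-pipework", "PIG-launcher-16",
                                       "power-demand"}},
        {"power-demand", {"turbine-generator"}},
        {"export-pipework", {"spatial-layout"}},
        {"PIG-launcher-16", {"spatial-layout"}},
        {"turbine-generator", {"spatial-layout"}},
        {"spatial-layout", {"load-bearing-structure", "cost-estimate"}},
    };

    // Breadth-first trace from the changed item: every reachable dependent
    // is flagged so that its owning agent can re-check it.
    std::queue<std::string> frontier;
    std::set<std::string> affected;
    frontier.push("condensate-export-pump-15");
    while (!frontier.empty()) {
        auto item = frontier.front();
        frontier.pop();
        for (const auto& dep : dependents[item])
            if (affected.insert(dep).second) frontier.push(dep);
    }
    for (const auto& item : affected)
        std::cout << "re-check: " << item << "\n";
}
```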

4. Process Flow Diagram (PFD) agent

The PFD agent includes an automatic Knowledge Acquisition (KA) tool which can extract information about the chemical plant production process from the export files produced by Computer Aided Design (CAD) tools (Design II for Windows, and Procede 2 for Windows) to form a new Knowledge-Based System (KBS). The extracted knowledge is automatically converted to the format of Gold Hill's GoldWorks by a software generator built into the PFD agent (see Fig. 5).


Fig. 5. PFD agent.

The resulting KBS is capable of detecting faults in simulated operational data generated by Design II and Procede 2. The knowledge extraction tool is implemented in C++.

The PFD agent contains two bodies of domain knowledge: the PFD and the P&ID (Piping and Instrumentation Diagram). A petrochemical plant includes many large items of equipment, such as reactors, columns and heat-exchangers, connected by a network of pipes, electrical and service utilities (water, steam lines, etc.), supported by a structural framework. The PFD is a flowsheet representing the exact route by which processing will take place, complete with details of heat and mass balancing, and large equipment items. The P&ID extends the PFD to include streams and equipment details which are taken from the PFDs, as well as additional equipment-specific information (e.g. the maximum working pressure of a particular expander). Further information contained within the P&ID concerns the instruments and valves which control the chemical processes in the working plant. However, in the early stages of P&ID design, there is no information relating to the actual lengths of piping runs, or bends in them.

The knowledge base can be classified into four predefined knowledge bases in the PFD agent. These are: Equipment-Based Knowledge; Stream-Based Knowledge; Fault-Based Knowledge; and Rule-Based Knowledge. The detailed information needed within the PFD design model is described in the Equipment-Based Knowledge and Stream-Based Knowledge. These two knowledge bases contain the specifications of the required equipment items and stream connections.

For the purposes of KA, the knowledge in the PFD agent can be divided into generic knowledge, intermediary knowledge, and extracted knowledge, according to the level of generality. In GoldWorks, the general knowledge of a process plant can be modelled as a set of frames which only include certain common attributes, without having any data in them. For example, a connections frame is associated with a set of slots representing the connectivity between equipment items. The slots hold the essential information which would be required for any stream, for example the stream number and the unique numbers of the equipment items. Intermediary knowledge inherits the attributes from generic knowledge, together with its own specific knowledge, to form another layer of knowledge. Intermediary knowledge represents a number of different possible types of equipment in a plant. For example, there is a frame of intermediary knowledge known as Heat-Exchanger, representing all types of heat-exchanger, which has an output-temperature (TEMOUT) attribute as a slot. The extracted knowledge base is specific to the equipment items for a particular plant. For example (see Fig. 6), the instance (named E-5) for the heat-exchanger includes the tag-name 105 (each equipment item having a unique name) and an output temperature of 290 units, which it inherits from the type of heat-exchanger represented in the HeaExc frame. The objects representing the individual equipment items which hold the extracted knowledge are a list of instances derived from the frame structure, and are populated with data. As a result, these instances are specific to a particular plant. The extracted knowledge base is formed by the automatic KA program in the PFD agent.
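The three levels of knowledge can be mirrored directly in code. The following C++ sketch stands in for the GoldWorks frames, using the Heat-Exchanger example from the text (instance E-5, tag-name 105, TEMOUT 290); the class names are illustrative.

```cpp
#include <iostream>
#include <string>

// Generic knowledge: common attributes only, no data.
struct EquipmentFrame {
    std::string tagName;   // unique name of the equipment item
};

// Intermediary knowledge: all heat-exchangers have an output temperature.
struct HeatExchangerFrame : EquipmentFrame {
    double temout = 0.0;   // the TEMOUT slot
};

// Extracted knowledge: an instance populated by the automatic KA program.
int main() {
    HeatExchangerFrame e5;
    e5.tagName = "105";
    e5.temout  = 290.0;
    std::cout << "instance E-5: tag-name " << e5.tagName
              << ", TEMOUT " << e5.temout << "\n";
}
```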

5. Associativity Data Generation (ADG) agent

Large-scale engineering design requires the selection and arrangement of bought-in equipment, as is the case when designing, for instance, a process plant. Prior to incorporating the physical sizes and shapes of equipment into the SLD agent (which generates the 2.5 dimensional layout), the associativity data, which determine the placement of the equipment, are generated. The Process Flow Diagram (PFD) specifies the connections between pieces of equipment, and is an important source of information during the layout design process. However, the information provided by the PFD is not sufficient to become a direct input for layout design. Due to the nature of the design process at this stage (that is, the engineer is mainly concerned with performance), the specifications of the equipment that are issued by engineers only include major functional parameters (e.g. working temperature, working pressure). There is no clear indication of the physical sizes of equipment or of the relationships required between pieces of equipment, apart from connectivity. The dimensions of a piece of equipment cannot be obtained until the engineers determine the selection criteria (e.g. cost, quality, and safety) and select the equipment from the suppliers' equipment catalogue. The specifications in the PFD allow the appropriate pieces of equipment to be selected from suppliers' equipment catalogues, modelled as equipment catalogue domain knowledge. These catalogues include the physical sizes of pieces of equipment, which can be used in the layout design.


Fig. 6. Modelling knowledge as frames.

The performance of the layout design process is significantly improved if the input data reflect the need for adjacency or the need to preclude adjacency. For example, a desired adjacency might result from the need to ensure that the length of connections in the system is kept to a minimum. Conversely, it may be necessary to ensure that, for safety reasons, one item of equipment is widely separated from another. Adjacency constraints are produced as a result of equipment clustering, performed in order to optimise (or to reach the best compromise for) global parameters and requirements such as safety, serviceability, cost and so forth. Three main types of clustering criteria can be identified, based on connectivity, common function (e.g. equipment type) and common process/system (e.g. condensate process), although this list can be expanded when, for example, other important factors such as safety are considered.

In this paper, the authors describe a system which includes three tasks and two bodies of domain knowledge. The tasks are: the equipment selection task, the equipment classification task, and the cluster analysis task. The two bodies of domain knowledge are the PFD knowledge model and the Equipment Catalogue knowledge model. After the methods and procedures to produce associativity data have been identified, the ADG agent can be generated by the NewSun workbench. The NewSun workbench provides two reusable methods associated with the method ontologies and their method bodies. Based on the equipment specifications knowledge model, an equipment selection task is designed to select the appropriate equipment from the equipment catalogue knowledge model.

The equipment specifications are derived from the PFD agent via the workbench. The equipment catalogue is created by a human expert and is modelled manually in the same way as the PFD specifications. The tasks of equipment selection and cluster analysis utilise the method ontologies via the NewSun workbench to determine the appropriate equipment and to group the items of equipment by their similarities, in terms of systems, function and characteristics.

The PFD agent generates a set of files, for example equip.lsp, eval4.lsp, op.lsp, op1.lsp, rules.lsp, etc. The equip.lsp file includes the defined frames for each piece of equipment in the PFD. The eval4.lsp file includes the instances of equipment, which inherit the slots from each individual equipment frame (e.g. FLA, representing FLAsh separator) specified for the particular PFD. Each frame has a number of instances representing the pieces of equipment required for the PFD. The slots in the instances are populated with the values which represent the equipment specifications used in the ADG agent. The names of the frames in the eval4.lsp file are registered with the agent head, which is stored in the file called PFD_head.txt. When the user makes a request to acquire the PFD knowledge from the PFD agent, the PFD_head.txt file is opened and its contents are passed to the workbench. The CONNECT frame and its instances represent the connections from one piece of equipment to another. This information is also required by the ADG agent, which creates a corresponding class called the CONNECTIVE class.
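A hedged sketch of the CONNECT-to-CONNECTIVE mapping is given below. The paper does not specify the slot names of the CONNECT frame, so the fields (stream number, from/to equipment numbers) are assumptions for illustration.

```cpp
#include <iostream>
#include <vector>

struct ConnectInstance {      // as held in the PFD agent (e.g. in eval4.lsp)
    int streamNumber;         // assumed slot names; the paper only states
    int fromEquipment;        // that CONNECT records equipment connections
    int toEquipment;
};

struct ConnectiveInstance {   // as created in the ADG agent's CONNECTIVE class
    int from;
    int to;
};

std::vector<ConnectiveInstance>
buildConnectivity(const std::vector<ConnectInstance>& connects) {
    std::vector<ConnectiveInstance> out;
    for (const auto& c : connects)
        out.push_back({c.fromEquipment, c.toEquipment});
    return out;
}

int main() {
    std::vector<ConnectInstance> pfd{{1, 14, 15}, {2, 15, 16}};
    for (const auto& c : buildConnectivity(pfd))
        std::cout << c.from << " -> " << c.to << "\n";
}
```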

Fig. 7. Example rule.


Fig. 8. Layout design.

The temperature slot in the PFD agent used Fahrenheit as its temperature unit. However, the temperature specification in the equipment catalogue uses Celsius as its unit. Thus, there is a need to convert from Fahrenheit to Celsius. A communication ontology, known as the temperature ontology, is used to do the conversion. The user has to select the temperature unit from which, and into which, he or she wishes to convert. The user in turn specifies all the required instances and slots from the PFD agent, until all required equipment has been selected, and transforms them via the NewSun knowledge sharing and reuse workbench into the PFD knowledge model held within the ADG agent.

The Distributed Knowledge Acquisition (DKA) tool is automatically generated by the workbench after the user has completed the mapping relation. The user also has to specify the target agent, and the expert system shell used by the target agent, in order to include the implemented program (e.g. APIs and Orbix) appropriate to the agent for accessing the knowledge base. When the user specifies a slot to be reused, the data type for the slot values must also be defined (e.g. number, string, Boolean). For the purpose of updating the knowledge in the target agent when any change to the source agent occurs due to changes in the specification, each frame is generated as a function in the DKA programs. The contents of the DKA programs come from the procedures input by the user when he or she selects the instances and slots, and uses the mapping relation mechanism. The DKA also incorporates a set of retrieval programs for a source agent and a target agent. The user has to copy the generated program to the target agent head and register all function names corresponding to the frames with the target agent's acquaintance module. The program includes two parts: one is for the creation of objects and their slots, and the other is for the retrieval of the values from the source agent to populate the created slots. The required header files for the utilisation of the APIs in the expert system shells and the ORB are included in the file. The code and header file attached to the target head program can be compiled and linked into an executable file. The ADG agent can then create the classes and slots and retrieve the values from the PFD agent.
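The two-part structure of a generated DKA program can be sketched as follows. The SourceAgent stand-in replaces the real ORB and expert-system-shell API calls, and the slot names and values are assumptions; only the create-then-populate structure and the Fahrenheit-to-Celsius conversion follow the description above.

```cpp
#include <iostream>
#include <map>
#include <string>

struct SourceAgent {           // stand-in for the ORB + expert-system-shell API
    double slotValue(const std::string& /*instance*/, const std::string& /*slot*/) const {
        return 290.0;          // e.g. TEMOUT of E-5, in Fahrenheit (assumed value)
    }
};

using Object = std::map<std::string, double>;

// Temperature communication ontology: Fahrenheit -> Celsius.
double fahrenheitToCelsius(double f) { return (f - 32.0) * 5.0 / 9.0; }

int main() {
    SourceAgent pfdAgent;
    std::map<std::string, Object> adgKnowledgeBase;

    // Part 1: create the object and its slot in the target knowledge base.
    adgKnowledgeBase["E-5"]["working-temperature"] = 0.0;

    // Part 2: retrieve the value from the source agent and populate the slot,
    // converting units through the temperature communication ontology.
    const double f = pfdAgent.slotValue("E-5", "TEMOUT");
    adgKnowledgeBase["E-5"]["working-temperature"] = fahrenheitToCelsius(f);

    std::cout << "E-5 working-temperature (Celsius) = "
              << adgKnowledgeBase["E-5"]["working-temperature"] << "\n";
}
```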

5.1. Equipment selection method ontology

The authors illustrate here an equipment selection method which can be reused in similar cases. Firstly, this approach requires two knowledge models: equipment specification and equipment catalogue. These two knowledge models are formulated into frames and instances according to their types. For example, a pump class includes a list of pump instances. Thus, the equipment specification knowledge model includes a list of equipment and their specifications, for the selection of the appropriate items from another knowledge model (the equipment catalogue). The equipment catalogue knowledge model includes a number of items of equipment, containing detailed specifications of existing equipment from different suppliers; it maintains the full details of the equipment. The equipment specification knowledge model is derived from the other agent (for example, process simulation), so in that case it consists only of the essential attributes for a process flowsheet: a pump entity only has input and output pressure and temperature. The equipment catalogue knowledge model also includes the equipment's size, cost, etc.

A rule is generated to determine the appropriate pump by the attribute of pressure. In the catalogue, each pump provided by a supplier has a pressure operating range, and this generates a range for selection. After the appropriate pump has been selected, the values of the size and cost attributes are copied to the selected instance. The valve selection rule not only considers the temperature, but also the output volume of the specific pump to which the valve connects. In this case, the pump named 'P0001' is its connection. In order to fire the rule, the pump must be selected first (see Fig. 7). The same principle can be applied to the selection of other equipment.

The equipment selection method incorporates a set of procedures, which are: mapping relation, rule generation, invocation of the method, and mapping the result to the knowledge base in the agent. The input to the method is derived from the knowledge base of the ADG agent, which has already classified the required equipment, and the equipment to be selected in the catalogue, into types.
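A selection rule of the kind shown in Fig. 7 can be sketched as a simple range test over catalogue entries. The attribute names, pressure figures and catalogue models below are invented for illustration; only the selection logic (match the specified pressure against each pump's operating range, then copy size and cost) follows the text.

```cpp
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct PumpSpec      { std::string name; double pressure; };            // from the PFD knowledge model
struct CataloguePump { std::string model; double minP, maxP, size, cost; };

// Return the first catalogue pump whose operating range covers the
// specified pressure (illustrative; a real rule might rank candidates).
std::optional<CataloguePump>
selectPump(const PumpSpec& spec, const std::vector<CataloguePump>& catalogue) {
    for (const auto& p : catalogue)
        if (spec.pressure >= p.minP && spec.pressure <= p.maxP)
            return p;
    return std::nullopt;
}

int main() {
    std::vector<CataloguePump> catalogue{
        {"XP-200", 10.0, 40.0, 1.2, 15000.0},
        {"XP-400", 35.0, 90.0, 2.1, 24000.0},
    };
    PumpSpec p0001{"P0001", 60.0};
    if (auto chosen = selectPump(p0001, catalogue))
        std::cout << p0001.name << " -> " << chosen->model
                  << " (size " << chosen->size << ", cost " << chosen->cost << ")\n";
}
```

A valve selection rule, as described above, would only fire once the pump has been chosen, since it needs the selected pump's output volume.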


So, the mapping relation mechanism is used to map these two bodies of information into the input class of the method ontology. The system generates the program for loading the input to the method, and also stores it as a set of variables for the rule generator. The method also includes a set of objects which guide the user to complete the task, such as constraints, dependency and logic. After the mapping has been done, the next step is for the rule generator to automatically produce the rules for the equipment selection, according to the specifications which are stored as a set of predefined variables. After the rules are generated, the method is ready to be operated. The output of the method is stored in an output area which is formatted into objects. The user utilises the mapping relation mechanism again to map the output objects in the method to the output objects in the application. The DKA program generated by the workbench is responsible for transforming the values from the output object in the method ontology to the corresponding object in the ADG agent.

5.2. System classification and locational attributes

After the appropriate equipment has been selected, the ADG agent starts to classify the selected equipment in the PFD by type (function), system, and connections. Each equipment class includes a number of slots which indicate its characteristics. These slots, called locational attributes, were created in the ADG agent. When the classification task is initiated, the users and the rules assign the appropriate values to them (0 or 1). For example, each item of equipment has a common locational attribute called hazard category, regarding safety, and a number of connection slots. The value of the hazard category locational attribute has to be input by the user. The value of the connection slot reflects the number of connections the item has to other items of equipment. Each system also has its own characteristics, so when the whole PFD is divided into three subsystems according to the products it generates, the characteristics in each piece of equipment inherit the values from the system to which it belongs. Moreover, the system allows the user to choose a preference for the layout in terms of type, system, or connectivity. The ADG agent takes these preferences into consideration by giving different weights, which are then multiplied by the values of the locational attributes. The constraints (for example disadjacency due to safety or other reasons) are made into rules to override the values given in the preceding steps.
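The weighting step can be illustrated with a few lines of C++: 0/1 locational attributes are multiplied by the user's preference weights, and a constraint rule overrides the result. The weight values and the override convention are assumptions made for the sketch.

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    // 0/1 locational attributes shared by a pair of equipment items.
    std::map<std::string, int> shared{{"type", 1}, {"system", 1}, {"connective", 0}};

    // The user's preference for the layout, expressed as weights (assumed values).
    std::map<std::string, double> weight{{"type", 1.0}, {"system", 2.0}, {"connective", 3.0}};

    // Weighted association between the pair of items.
    double score = 0.0;
    for (const auto& [attribute, value] : shared)
        score += weight[attribute] * value;

    // Constraint rule: a safety disadjacency overrides the value computed above.
    const bool safetyDisadjacency = false;
    if (safetyDisadjacency) score = 0.0;

    std::cout << "association strength = " << score << "\n";
}
```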

5.3. Cluster analysis method ontology

Table 1. Cluster analysis.

The cluster analysis method (Dillon and Goldstein, 1984) is a method which has been commonly used to classify and group entities which have the same attributes with different values. A cluster analysis method can be used to solve a number of problems where there is a need to determine the similarities between a set of objects. The input for this method is a collection of objects and their attributes. Each attribute is assigned a number from a range to represent its character. The output of the method is the calculated strengths of their relationships, which indicate their similarities.

Each item of equipment and its required locational attributes have to be mapped to the input object of the cluster analysis method ontology. The cluster analysis method ontology creates a list of instances called CLUST_IN1, CLUST_IN2, etc. to accommodate these data, and then invokes the method body to cluster the input equipment and generate the output object. The user then has to map the output objects to the required output object in the ADG agent. The numbers in Table 1 indicate the strength of the relationship between each pair of equipment items; the smaller the figure, the stronger the relationship. One method which is employed to cluster the equipment after the similarity measurement is to minimise the distance for each equipment pair. Items of equipment in the same subsystem are clustered prior to being grouped by the same function constraint. For example, equipment number 14 and equipment number 15 are in the same subsystem, and have similar functions, so they are clustered together. As a result, the ADG agent produces the output required by the layout design, which includes the physical size of each piece of equipment and the associativity data.
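A minimal clustering step over a distance table of the kind shown in Table 1 is sketched below: the pair with the smallest distance (i.e. the strongest relationship) is merged first. The distance values are invented; only the minimum-distance pairing follows the method described above.

```cpp
#include <iostream>
#include <map>
#include <utility>

int main() {
    // Pairwise distances between equipment items (smaller = stronger relation).
    // The figures are made up; they only mimic the structure of Table 1.
    std::map<std::pair<int, int>, double> distance{
        {{14, 15}, 0.2}, {{14, 16}, 0.6}, {{15, 16}, 0.5},
    };

    // Find the pair with the minimum distance and cluster it first.
    auto best = distance.begin();
    for (auto it = distance.begin(); it != distance.end(); ++it)
        if (it->second < best->second) best = it;

    std::cout << "cluster equipment " << best->first.first << " and "
              << best->first.second << " (distance " << best->second << ")\n";
}
```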


Fig. 9. 3D layout.

6. Other agents

The SLD agent can then take the output to produce an optimal layout within a limited space (see Fig. 8). Each rectangle in Fig. 8 represents an equipment item. The equipment items are located close to each other; in other words, the space between equipment items has been optimised, and there is no overlap between equipment. However, the layout displayed in Fig. 8 is still difficult for engineers to understand. Therefore, the 3D agent takes the information in the 2.5D layout from the SLD agent to generate a 3D layout (see Fig. 9). Fig. 9 shows the equipment items in the PFD clustered by the subsystems to which they belong. This demonstrates that the associativity data generated by the ADG agent are valid.

Within this example, the engineers identify that items No. 16 and No. 15 (see Fig. 3) have to be changed due to new requirements (e.g. higher pressure). As a consequence, the PFD specifications in the PFD agent are changed. The user of the ADG agent triggers the DKA program in its acquaintance module to update the data, and regenerates the associativity data for the SLD agent. The 3D diagram agent displays the new output for the new requirements. The change mechanism is based on the algorithms of the 'knock-on' effect (Guenov, 1996).

7. Conclusions and future work

The amount of information required by the agents (e.g. the PFD agent, ADG agent, SLD agent, etc.) in this case study is substantial. They play different roles, yet each of them contributes to the process of solving design problems. However, they do share some common domain knowledge. Each agent has a specific design model and methods with which its system is developed. The systems are implemented for each agent using suitable software (e.g. expert system shells, programming languages and CAD tools) and hardware (e.g. PCs, Sun workstations). This results in them using different tools and terms to develop their models. In a complex engineering design, the agents may be located at various geographical sites. A central processing method to integrate these agents would lead to a bottleneck developing in the central database, because of the heavy load on the central processor. The common domain knowledge needs to be shared by these agents. Seeking a solution to these problems has inspired the development of the NewSun workbench.

The NewSun workbench enables the ADG agent to extract the knowledge base from the PFD agent in order to construct a new knowledge base for the ADG agent. The ADG agent also utilises reusable methods through method ontologies, such as the equipment selection method ontology and the cluster analysis method ontology, using the workbench to solve the problems. The workbench, based on the user's design model, generates the DKA tool for the ADG agent. Consequently, the DKA tool brings a great deal of flexibility to the ADG agent when the equipment specifications of the design model in the PFD agent have been modified. Such modifications of equipment specifications often occur in the early stages of an engineering design. The DKA tool can rapidly acquire the new information and transport it to the ADG agent. The mapping relation mechanism allows the users to map the design model of the PFD agent to the one in the ADG agent, by providing renaming, levelling, transforming and mapping functions.

In this research, the authors have also identified the lack of a mechanism to deal with reasoning conflicts between design agents. For example, upgrading a piece of equipment might mean that the platform envelope cannot accommodate the new equipment in terms of size. To increase the size of the platform, cost has to be taken into consideration. This requires an optimisation mechanism to assist engineers in making such a decision. Therefore, the next step is to incorporate a multiple criteria decision support system (Yang and Sen, 1994) to deal with the optimisation between agents when their disciplines conflict. Moreover, a safety system (McAlinden et al., 1997) to evaluate safety will be involved. Both of these systems have been developed at the University of Newcastle. The safety system is expected to complement the ADG agent, whereby the ADG agent, in generating the associativity data, would take the detailed safety information into account.

Acknowledgements

This research work has been funded by the U.K. Engineering and Physical Sciences Research Council, Grant Ref. No. GR/J40270. Thanks to Dr. M. Guenov and Dr. B. King for their kind permission to use their systems.


References

Chao, K.-M., Guenov, M., Hills, B., Smith, P., Buxton, I. and Tsai, C.-F. (1996a). Sharing domain knowledge in distributed artificial intelligence. ECAI Workshop-96, 11–16 August, Budapest, Hungary.
Chao, K.-M., Guenov, M., Hills, B., Smith, P., Buxton, I. and Tsai, C.-F. (1996b). Sharing domain knowledge and reusing problem solving method. Conference Proceedings of EXPERSYS-96, 21–22 October, Paris, France.
Chao, K.-M., Guenov, M., Hills, B., Smith, P., Buxton, I. and Tsai, C.-F. (1997a). An expert system to generate associativity data for the layout design. Journal of AI in Engineering, 11 (2), 191–196.
Dillon, W. R. and Goldstein, M. (1984). Multivariate Analysis Methods and Applications (p. 157). New York: Wiley.
Guenov, M. (1996). Modelling design change propagation in an integrated design environment. Computer Modelling and Simulation in Engineering, 1, 353–367.
Guenov, M., Chao, K.-M., Florida-James, B., Smith, N., Hills, B. and Buxton, I. (1996). Tracing the effects of design changes across distributed design agents. The Second World Conference on Integrated Design and Process Technology, 1–4 December, Houston, TX.
Hills, W., Barlow, M. and Cleland, G. (1993). Layout design of large made-to-order products using a knowledge-based system. Proceedings of the International Conference in Engineering Design, The Hague, Netherlands, 17–19 August, pp. 431–436.
Jennings, N. R., Mamdani, E. H., Corera, J. M., Laresgoiti, I., Perriollat, F., Skarek, P. and Varga, L. Z. (1996). Using ARCHON to develop real-world DAI applications, Part 1. IEEE Expert, 11 (6), 64–70.
King, B. (1995). Automatic extraction of knowledge from design data. Ph.D. Thesis, University of Sunderland.
McAlinden, L. P., Sitoh, P. J. and Norman, P. (1997). Integrated information modelling strategies for safe design in the process industries. Computers and Chemical Engineering Journal (in press).
Musen, M. A. (1992). Dimensions of knowledge sharing and reuse. Computers and Biomedical Research, 25, 435–467.
Neches, R., Fikes, R., Finin, T., Gruber, T. R., Senator, T. and Swartout, W. R. (1991). Enabling technology for knowledge sharing. AI Magazine, 12 (3), 16–36.
Smith, N., Hills, W. and Cleland, G. (1996). A layout design system for complex made-to-order products. Journal of Engineering Design, 7.
Van Heijst, G., Schreiber, A. Th. and Wielinga, B. (1997). Using explicit ontology in KBS development. International Journal of Human–Computer Studies, 46 (2/3), 183–292.
Yang, J.-B. and Sen, P. (1994). A general multi-level evaluation process for hybrid MADM with uncertainty. IEEE Transactions on Systems, Man, and Cybernetics, 24 (10), 1458–1473.