Knowledge-Based Systems 9 (1996) 329-337
APACS: a multi-agent system with repository support

Huaiqing Wang a, Chen Wang b

a Department of Information Systems, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong
b Department of Computer Science, University of Toronto, Toronto, Canada M5S 1A4
Received 23 October 1995; revised 27 March 1996; accepted 16 May 1996
Abstract
In this paper we present APACS (Advanced Plant Analysis and Control System), a multi-agent system (MAS) with repository support. APACS has been designed and implemented for monitoring and diagnosing real-time nuclear power plant failures. Specifically, we demonstrate the importance of repository technology in achieving knowledge communication within a multi-agent system. We outline important design and conceptual problems encountered during our practical experience, such as the choice of data model, the portability problem, the transparency problem and the maintenance problem, which will no doubt be encountered by other designers of MASs, and we present our practical solutions and their benefits. It is hoped that this experience will be of interest to researchers for its innovative solutions and their impact on the conceptual frameworks of next-generation MASs, and to practitioners, who may benefit from the demonstration of both the feasibility and the economic and technical benefits of knowledge sharing using repositories in the design and implementation of MASs.

Keywords: Multiple agents; Information repository; Knowledge sharing
1. Introduction
The notion of knowledge sharing [1] involves the use of knowledge bases (or portions of knowledge bases) not merely within their intended application, but also at other sites within the same corporate information architecture or within the context of newly developed applications. System developers then only need to worry about creating the specialised portion of the knowledge base that is new to the specific task of their system. It is clear that knowledge sharing can improve the overall quality and reliability of knowledge-based systems, and can reduce the effort and the costs of building such systems [2]. Moreover, knowledge sharing has created a unique opportunity for the practical development of larger and more complex knowledge bases at a fraction of today's expected cost. Some researchers are building and maintaining large, sharable common-sense knowledge bases, a significant example of which is MCC's ongoing work to construct CYC [3-5]. On the other hand, other researchers are focusing on more specialised domains. For example,
E-mail: [email protected], [email protected]

0950-7051/96/$09.50 © 1996 Elsevier Science B.V. All rights reserved
PII S0950-7051(96)01043-X
some researchers have concentrated on sharing biomedical knowledge [6]. The developers of the APACS (Advanced Process Analysis and Control System) project [7] have concentrated on sharing and reusing knowledge in a focused domain (nuclear power plants) with well-defined tasks and knowledge. The success of the APACS project has demonstrated the benefits of knowledge sharing and knowledge reuse. As of today, the APACS project has made the successful transition from a working research prototype to an actual industrial installation and application in Ontario Hydro's Bruce B nuclear power plant. The benefits of installing a knowledge-based system such as APACS in a modern nuclear operating environment come from several areas. On the one hand, the predictive maintenance inherent in the APACS system enables notable reductions in the number of plant trips per year by diagnosing potential problems early and quickly. At a cost of about one million Canadian dollars per plant trip, the APACS installation has an extremely short economic payback period. On the other hand, the reusable APACS architecture, as provided by the integration of repository technology into the knowledge sharing framework, has paved the way for easy integration of new APACS components in the future, and will serve as a knowledge-based backbone in future nuclear power plant operation infrastructures.

This paper focuses on addressing the knowledge sharing problems mentioned above by integrating repository technology into the overall system architecture. The effectiveness of this solution is demonstrated by the authors' experience with the APACS project. The next section gives an overview of the APACS system. Section 3 presents a discussion of knowledge sharing in general and of numerous practical knowledge sharing problems. In Section 4, the common representation and common ontologies are introduced. The implementation and the operation of the APACS system are described in Sections 5 and 6. Section 7 discusses related work and presents conclusions.
2. Overview of APACS
This section describes one of the largest real-time knowledge-based system projects currently being undertaken in Canada. The 9.7 million Canadian dollar, five-year project began in the fall of 1990 and was expected to be completed in the fall of 1995. The goal of the APACS project is to build a prototype implementation for the feedwater system of Ontario Hydro's Bruce B nuclear generating station, and to develop a generic framework for building systems that assist human operators of power plants in noticing and diagnosing failures in continuous processes. The main agents being developed are knowledge-based systems to monitor and diagnose a
Fig. 1. APACS overview.
process; advanced simulation facilities to support the real-time diagnosis of plant malfunctions and the prediction of plant behaviour; and an operator-machine interface which complies with the operating requirements of a control room environment. An operating APACS consists of several different agents running on several different machines. Such agents are shown in Fig. 1. The various APACS components outlined in Fig. 1 are as follows:

• Data Acquisition (Daq): receives data from the plant control computer.
• Tracking: a real-time simulation that tracks the plant by continuously updating the conductivities of links associated with each of the sensor positions.
• Monitoring: takes as input the quantitative sensor values and control computer alarms, and produces as output symbolic descriptions of the behaviour of the state of the plant.
• Diagnosis: takes as input the output of Monitoring and attempts to generate a qualitative causal explanation of this input.
• Verification: a faster-than-real-time simulator that verifies the outputs of Diagnosis.
• Human Computer Interface (Hci): a visual interface that interacts with APACS users.

During operation, the Data Acquisition component receives real-time data frame by frame and passes the data to all the components in APACS. Based on the real-time data and the simulation model, the Tracking component produces simulated values and passes them to other APACS components. Based on the real-time data from Daq and the simulated values from the Tracking component, the Monitoring component performs inferencing and outputs a symbolic description of the behaviour of the state of the plant. For instance, the Monitoring component may report that the water_level of Boiler_1 is too high. The outputs (symptoms) of the Monitoring component may trigger the Diagnosis component's model-based inference engine.
When the Diagnosis component reaches one or more diagnoses, it sends them to the Verification component, which verifies them. The Hci component receives all of this information and presents it to the operators. It should be noted that each APACS component plays a specific role within the entire APACS framework, and relies on one or more other components' services. Consequently, it is necessary for each of the components to inter-operate with the others for the purpose of sharing information. Furthermore, the development of a fault detection and identification system remains a common task. The main challenge of APACS and other intelligent MASs involves the issue of knowledge sharing and knowledge reuse. The next section discusses knowledge sharing in APACS.
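The data flow among these components can be sketched as a simple publish-subscribe loop. The following modern Python sketch is purely illustrative: the `Router` class, topic names and handlers are our own, not part of the APACS implementation (which was written in C++ and FORTRAN).

```python
from collections import defaultdict

class Router:
    """Minimal stand-in for the APACS data paths: components publish
    events, and every component subscribed to a topic receives them."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

# Hypothetical wiring mirroring Fig. 1: Daq frames feed Tracking and
# Monitoring; Monitoring symptoms trigger Diagnosis.
router = Router()
log = []

router.subscribe("sensor_frame", lambda f: log.append(("tracking", f["frame"])))
router.subscribe("sensor_frame", lambda f: log.append(("monitoring", f["frame"])))
router.subscribe("symptom", lambda s: log.append(("diagnosis", s)))

router.publish("sensor_frame", {"frame": 1})
router.publish("symptom", "water_level of Boiler_1 too high")
```

A frame published by Daq thus reaches every subscribed consumer, while a symptom reaches only the Diagnosis handler.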
3. Knowledge sharing in APACS

One of the AI pioneers, Allen Newell, viewed knowledge as an abstraction that cannot be written down and that can never be in hand [8]. He also pointed out that the data structures (or knowledge representations) that we use to encode knowledge in knowledge bases are not equivalent to the knowledge that those data structures (symbols) represent. This distinction between symbols and knowledge (or between the symbol level and the knowledge level, as pointed out by Newell [8]) is essential for knowledge sharing. As Neches et al. [2] pointed out, there are four critical impediments to the sharing and reuse of knowledge.

(1) Heterogeneous representations. There is no single knowledge representation that is best for all problems, and one representation cannot be directly incorporated into another.
(2) Different dialects within language families. Even within a single family of knowledge representations, it can be difficult to share knowledge across systems if the knowledge has been encoded in different dialects.
(3) Lack of communication conventions. We lack an agreed-on protocol specifying how systems are to query each other and in what form answers are to be delivered.
(4) Model mismatches at the knowledge level. When different primitive terms (e.g. different vocabulary and domain terminology) are used, it can be difficult to combine two or more knowledge bases or to establish effective communication among them.

A number of researchers have been working on common ontologies [4,5,9]. An ontology is an explicit specification of a conceptualisation [10]; in other words, ontologies give formal descriptions of objects in the world, the properties of those objects and the relationships among them. During the initial phase of APACS development, the APACS team developed several prototypes (of APACS components) and an ADS (APACS Data Server) for communication among these prototypes.
In this phase, each APACS component had its own local knowledge base, which might not be consistent with the others. This potential inconsistency can be referred to as the "vocabulary problem". For instance, the first boiler in the feedwater system may have different identifiers within different APACS components, such as BO1, BO_1, BO-1, Boiler_1, etc. Therefore, when the Monitoring component outputs the message "The water_level of BO1 is too high", other components without the identifier "BO1" in their local vocabulary will not be able to decode the message. A simple solution to this "vocabulary problem" is to build a data dictionary to store all the different vocabularies. Based on this idea, a "matching table" in ADS was developed to store this information. For instance, the table knows that BO1 in the Monitoring
component is the same as BO_1 in the HCI component. Evaluation of this initial prototype revealed several problems:

• Portability problem: it is hard to port the APACS system to other applications, because the local knowledge is neither sharable nor reusable.
• Transparency problem: the APACS components all rely on one or more other components' services. Consequently, it is necessary for each of the components to "know" the services provided by the other components. For example, the APACS Diagnosis component must have knowledge of the services provided by the APACS Monitoring component, even as those services are dynamically updated.
• Maintenance problem: schema evolution is difficult. During the development stage, the schema in any APACS component may change from time to time, and each such change should result in a corresponding change in ADS's dictionary. The manual effort needed to perform such changes is expensive, and maintaining the integrity of the dictionary is difficult.

Faced with these problems, the APACS team decided to employ knowledge sharing technology and to store the sharable ontologies in a common repository. The repository provides an extensive set of services that allows components to interface with it [11,12]. The application of knowledge sharing technology enables the APACS development team to address the transparency problem by providing a centralised representation; consequently, control information can be shared among all APACS components in a transparent fashion. The maintenance problem is solved by having the central repository manage the schema evolution. Finally, and most importantly, the portability problem is tackled by having the repository store the common ontologies, which are sharable and reusable by all the APACS components, as well as by future systems.
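The "matching table" workaround for the vocabulary problem can be sketched as follows. The identifiers come from the text; the table layout and the `translate` function are our illustration, not the actual ADS code.

```python
# Hypothetical sketch of the ADS "matching table": each physical device
# has one row of per-component aliases (names as in the text).
matching_table = {
    "boiler_1": {
        "Monitoring": "BO1",
        "Hci": "BO_1",
        "Diagnosis": "Boiler_1",
    },
}

def translate(identifier, source, target):
    """Map a component-local identifier into another component's vocabulary."""
    for aliases in matching_table.values():
        if aliases.get(source) == identifier:
            return aliases[target]
    raise KeyError(f"{identifier} unknown in {source}'s vocabulary")

# "The water_level of BO1 is too high" emitted by Monitoring becomes
# a message the Hci component can decode:
print(translate("BO1", "Monitoring", "Hci"))
```

The sketch also makes the maintenance problem visible: every schema change in any component requires a manual update of this table, which is exactly what motivated the move to a common repository.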
The next section will describe the sharable ontologies in APACS.
4. Sharable ontologies in APACS

The development of large, sharable ontologies has recently become a major area of knowledge-based system research. Researchers view experiments to build and maintain common ontologies as an essential prerequisite to the long-term goal of creating large, sharable knowledge bases [1]. The application of repository technology enables the APACS development team to address the transparency problem by providing a centralised representation; consequently, control information can be shared among all APACS components in a transparent fashion. The maintenance problem is solved by having the central repository manage the schema evolution. Furthermore, unlike ADS, the repository plays an active role, brokering all information flows. Finally, the portability problem is tackled by having the repository separate its data dictionary from its management services.
4.1. Common representation and common dialect

As described in Section 3, it is hard to translate knowledge represented in one particular representation scheme into another. In the APACS framework, each APACS component plays a different role: some are conventional programs (e.g. the Data Acquisition component and the Human Computer Interface component), while others are knowledge-based systems. There are three alternatives for selecting knowledge representations for the APACS framework:

• all the APACS components use a global knowledge representation;
• the common repository and the knowledge server share a common representation, while other components may use their own representations;
• all the APACS components use different representations.

It is simply impractical to force all components within the APACS framework to employ the same knowledge representation, so the first alternative is not realistic. The last alternative is the most flexible and probably most closely matches the reality of today's MASs; the best-known example of such systems is the Knowledge Sharing Effort project supported by DARPA [2]. However, the last alternative is complicated and inefficient, and it does not address persistent storage. Therefore, the APACS team selected the second alternative, i.e. a common representation for the common repository and the knowledge server, while all the other components keep their own representations. Knowledge sharing and knowledge communication in APACS are based on this common representation, as described below.
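Under the second alternative, each component needs only a translator to and from the common representation; it never has to understand any other component's local scheme. A minimal modern-Python sketch follows; the frame layout, class and vocabulary names are our own illustration, not the APACS representation itself.

```python
# Common representation: plain attribute/value frames keyed by canonical
# names, as the repository and knowledge server might share them.
common_frame = {"object": "boiler_1", "attribute": "water_level", "value": "high"}

class MonitoringTranslator:
    """Hypothetical adapter between Monitoring's local facts (tuples in
    its own vocabulary) and the repository's common frames."""
    local_names = {"boiler_1": "BO1"}      # canonical -> local vocabulary
    common_names = {"BO1": "boiler_1"}     # local -> canonical vocabulary

    def to_common(self, fact):
        obj, attr, value = fact            # e.g. ("BO1", "water_level", "high")
        return {"object": self.common_names[obj], "attribute": attr, "value": value}

    def from_common(self, frame):
        return (self.local_names[frame["object"]], frame["attribute"], frame["value"])

t = MonitoringTranslator()
assert t.to_common(("BO1", "water_level", "high")) == common_frame
assert t.from_common(common_frame) == ("BO1", "water_level", "high")
```

With n components, this design needs n translators rather than the n(n-1) pairwise mappings implied by the third alternative.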
4.2. Reusable ontology of agents

All the application programs in APACS are intelligent agents, each of which plays a particular role in APACS. The sharable ontology of agent classes is shown in Fig. 2. The top-level class in Fig. 2 is Agents, which has three sub-classes: Repository I/O, Knowledge Server and Clients. Repository I/O is for the input and output from the repository; the Knowledge Server is for managing the whole APACS system.

Fig. 2. The ontology of APACS agent classes.

As can be seen, a client may receive data from more than one source; for instance, the client MONITORING receives data from DAQ and TRACKING. It should be noted that the output data from the client TRACKING is also based on the data received from DAQ. If APACS did not synchronise the data, the MONITORING component could receive data asynchronously. The following sequence of data received by the MONITORING component is an example: data_frame_1 from DAQ, data_frame_2 from DAQ, data_1 from TRACKING based on data_frame_1, data_2 from TRACKING based on data_frame_2, etc. Such asynchronous data are not acceptable for some APACS components. Therefore, the APACS Knowledge Server has a synchronisation mechanism. If a client is specified as a synchronous client, it will always receive data synchronously, i.e. it will never receive data in an asynchronous manner. On the other hand, if a client is an asynchronous client, it will receive data as the data arrive, without synchronisation.

There are two sub-classes of clients: Synchronous Clients and Asynchronous Clients. DAQ, TRACKING, MONITORING and VERIFICATION are four sub-classes of the class Synchronous Clients; HCI and DIAGNOSIS are two sub-classes of the class Asynchronous Clients. New agent classes can be created easily in future systems, as the presence of a sharable ontology will facilitate knowledge exchange between the new type of agent and the rest of the APACS agents. The usage of an agent class's ontology will be described in the system operation section (Section 6).
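The synchronisation rule described above — pair each TRACKING output with the DAQ frame it was computed from before delivering anything to a synchronous client — can be sketched as follows. The buffering scheme and names are our illustration, not the Knowledge Server's actual mechanism.

```python
class SyncBuffer:
    """Hold raw DAQ frames until the matching TRACKING output arrives,
    then deliver the pair to a synchronous client in frame order."""
    def __init__(self):
        self.frames = {}       # frame number -> raw DAQ data
        self.delivered = []    # (frame number, raw data, simulated data)

    def on_daq(self, n, data):
        self.frames[n] = data

    def on_tracking(self, n, simulated):
        # Deliver only once the raw frame this result is based on is paired.
        raw = self.frames.pop(n)
        self.delivered.append((n, raw, simulated))

buf = SyncBuffer()
buf.on_daq(1, {"DT210": 2.41})
buf.on_daq(2, {"DT210": 2.45})           # frame 2 arrives before tracking result 1
buf.on_tracking(1, {"DT210_sim": 2.40})
buf.on_tracking(2, {"DT210_sim": 2.44})
```

Even though data_frame_2 arrives before TRACKING's result for data_frame_1, a synchronous client sees matched (raw, simulated) pairs in frame order; an asynchronous client such as HCI would simply bypass the buffer.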
Fig. 3. Sub-classes of PLANT_STRUCTURE (three levels only).
4.3. Reusable ontology of domain objects

The ontology of the APACS application domain, a nuclear plant, contains a set of classes, instances and relations that constitute a physical plant model. At present there are about 350 classes and 4500 instances in the common repository. The top-level class COMMON_MODEL for the domain objects has four sub-classes: PLANT_STRUCTURE, PLANT_CONTROL, PLANT_STATUS and MESSAGE. Fig. 3 shows sub-classes of the class PLANT_STRUCTURE down to three levels. All the plant devices belong to ACTIVE_EQUIPMENT; these devices include valves, boilers, pipes, pumps, generators, heaters, transmitters, turbines, tanks, etc., and their relationships reflect the domain knowledge that is shared across all APACS clients. This particular ontology will contribute towards the creation of new APACS agents and of new systems targeting the nuclear power plant application domain. Fig. 4 shows a semantic graph slice of the repository. For example, the class BOILER has an attribute relation water_level whose value class is DCC_VALUE. At the instance level, the value of the attribute slot water_level of BO1, an instance of the class BOILER, is DT210, which is an instance of the class DCC_VALUE.
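The class/instance/attribute structure of the Fig. 4 slice can be expressed compactly in code. The class and instance names (BOILER, BO1, DCC_VALUE, DT210, water_level) come from the text; the dataclass modelling is our illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Cls:
    name: str
    attributes: dict = field(default_factory=dict)   # attribute -> value class

@dataclass
class Instance:
    name: str
    cls: Cls                                          # "instance of" link
    slots: dict = field(default_factory=dict)         # attribute -> value instance

DCC_VALUE = Cls("DCC_VALUE")
BOILER = Cls("BOILER", {"water_level": DCC_VALUE})

DT210 = Instance("DT210", DCC_VALUE)
BO1 = Instance("BO1", BOILER, {"water_level": DT210})

# Consistency of the slice: the slot value at the instance level is an
# instance of the value class declared at the class level.
assert BO1.slots["water_level"].cls is BOILER.attributes["water_level"]
```

The same two-level pattern (attribute relation between classes, slot value between instances) applies throughout the 350-class, 4500-instance plant model.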
Fig. 4. A semantic graph slice.

4.4. Reusable ontology of dynamic events

All the dynamic information is defined as events in the APACS system. The ontology of such dynamic objects includes system events, such as system alarms, and monitoring and diagnostic events, such as trends, warnings, diagnoses, etc.

5. Implementation

The architecture of APACS is shown in Fig. 5. There are three layers in the architecture: the agent layer, which includes all the APACS components mentioned in Fig. 1; the repository layer (the lowest), which provides the centralised representation; and the knowledge server layer (the middle), which manages the communication.

Fig. 5. The APACS architecture.

The APACS architecture supports two different communication protocols: ports and distributed method calls. A port is a point-to-point connection that allows an application program to send knowledge to another application program. The APACS architecture also supports a more direct means of peer-to-peer communication: an application program can call another application program's methods using distributed method calls.

A repository is used within the APACS framework to provide the knowledge management services needed for knowledge sharing. Conceptually, abstract knowledge sharing activities can then be conducted by decomposing them into a set of transactions that take advantage of the services provided by the central repository. The APACS framework supplies generic software and knowledge that aim to make the building of similar systems for other plants a matter of specifying the components and topology of the plant rather than a fully-fledged knowledge engineering effort. As all the components work on the same plant and communicate with a set of valid message objects, it became evident that it is essential for all APACS components to share a common vocabulary. Furthermore, meta-information such as the configurations and the design decisions of the APACS framework (i.e. generic software and
knowledge) can be shared and reused by other similar systems in the future. Although all APACS agents share a common ontology of the application domain, each individual APACS agent deploys widely different design and implementation methods to achieve its particular functionality. In particular, the Monitoring agent and the Diagnosis agent apply different reasoning methods (rule-based versus abductive model-based) over the shared APACS ontology.

The Monitoring component [7] has a reasoner, the Monitoring Inference Engine (MIE), that is based on the expert system tool CLIPS. Apart from the MIE, the Monitoring component has a module library containing several modules, each corresponding to a particular task. For instance, the module Quadrant_Level_Deviation compares the slopes of the boilers in a quadrant to the slope of the set point. The module library is stored in the common repository, so the Monitoring component is able to download modules from the repository to perform different monitoring tasks. The MIE uses a hybrid knowledge representation (KR) method, i.e. the knowledge is represented as objects and rules. Such a hybrid KR method is more powerful than a single KR method, e.g. production rules or frames alone.

The Diagnosis component employs a model-based reasoning method. It uses backward chaining on rules describing the causal sequences of events in order to come up with a causal network explaining a set of symptoms; forward chaining is used to confirm hypotheses and make predictions. The KR scheme in the Diagnosis component is based on the Telos KR language [13]. The Telos knowledge representation language adopts a representational framework which includes structuring mechanisms analogous to those offered by semantic networks and semantic data models, namely classification (inverse instantiation), aggregation (inverse decomposition) and generalisation (inverse specialisation).
Telos also provides an assertion language for writing integrity constraints and deductive rules, and for expressing temporal knowledge. The inference and representation scheme in the Diagnosis component has been implemented in C++ and is derived from a variant of KNOWBEL [14]. Apart from the two knowledge-based components described above, the other APACS components are conventional programs, as follows. The HCI component uses the object-oriented GUI tool InterViews and is written in C++. The Tracking and Verification components are simulation programs written in FORTRAN. The Data Acquisition component is written in C++. The repository is implemented on top of the commercial OODBMS Versant [15]; all the sharable knowledge is stored in a shared Versant database. Furthermore, the Knowledge Server uses the object-oriented communication environment XShell [16] and is written in C++.
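The Diagnosis strategy described above — backward chaining to propose causes for a symptom, forward chaining to check that a cause's predicted effects are actually observed — can be illustrated with a toy causal rule base. The rules, fault names and helper functions below are entirely our own sketch, not the Telos/KNOWBEL implementation.

```python
# Toy causal rules: each cause maps to the effects it would produce.
causal_rules = {
    "feedwater_valve_stuck": ["flow_low", "water_level_high"],
    "sensor_drift": ["water_level_high", "reading_noisy"],
}

def abduce(symptom):
    """Backward chaining: collect every cause that could explain a symptom."""
    return [cause for cause, effects in causal_rules.items() if symptom in effects]

def confirmed(cause, observations):
    """Forward chaining: a hypothesis stands only if all of its
    predicted effects are actually observed."""
    return all(effect in observations for effect in causal_rules[cause])

observations = {"water_level_high", "flow_low"}
hypotheses = abduce("water_level_high")
diagnoses = [c for c in hypotheses if confirmed(c, observations)]
```

Here both faults can explain the high water level, but forward chaining rejects sensor drift because its predicted noisy reading is absent, leaving the stuck valve as the surviving diagnosis.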
6. System operation

This section describes the phases and steps in an actual operation of the entire APACS system. As mentioned above, all the sharable knowledge is stored in the repository. Any particular instance of APACS operation can be divided into two distinct phases, namely the start-up phase and the running phase. During the start-up phase, all the APACS components establish peer-to-peer communication connections within the APACS framework; the running phase then involves the real-time data flow from the actual nuclear power plant.
6.1. Start-up phase

1. Create all the necessary agents (processes) based on the pre-defined reusable agent classes. At present, two UNIX workstations are needed to run the APACS system: a Sun SPARC workstation and an SGI workstation. The following agents are created:
   • a Repository IO: an instance of Repository I/O;
   • a Server agent: an instance of Knowledge Server;
   • a Daq agent: an instance of DAQ;
   • a Tracking agent: an instance of TRACKING;
   • one or two Monitoring agents: instances of MONITORING;
   • a Verification agent: an instance of VERIFICATION;
   • an Hci agent: an instance of HCI;
   • a Diagnosis agent: an instance of DIAGNOSIS.
2. Set up the necessary connections (i.e. ports and/or distributed methods) among those agents. Fig. 5 shows such connections.
3. The Repository IO opens the repository, retrieves sharable knowledge from it and translates the sharable knowledge into the common format.
4. The Server queries the Repository IO to get sharable knowledge via distributed method calls.
5. Each agent (e.g. Monitoring, Daq, etc.) queries the Server to retrieve the necessary knowledge.
6. Based on its own needs, each agent sends its requests to the Server via ports. The Server stores such requests for further use.
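The start-up steps can be sketched as follows; the agent names mirror the ontology of Section 4.2, while the classes and method names are our own illustration of the sequence, not the APACS code.

```python
class RepositoryIO:
    """Stand-in for step 3: opens the repository and serves sharable knowledge."""
    def retrieve(self, client):
        return f"ontology slice for {client}"

class KnowledgeServer:
    """Stand-in for steps 4-6: fetches knowledge for clients and records
    each client's data requests for the running phase."""
    def __init__(self, repository_io):
        self.repository_io = repository_io
        self.requests = {}

    def knowledge_for(self, client):
        # Step 4/5: the Server obtains knowledge via distributed method calls.
        return self.repository_io.retrieve(client)

    def register_request(self, client, topics):
        # Step 6: the Server stores each client's requests for further use.
        self.requests[client] = topics

server = KnowledgeServer(RepositoryIO())          # steps 1-2: agents and wiring
for client in ["Daq", "Tracking", "Monitoring", "Verification", "Hci", "Diagnosis"]:
    server.knowledge_for(client)                  # step 5
server.register_request("Monitoring", ["sensor_frame", "simulated_frame"])
```

The stored request table is what the Server consults when distributing objects during the running phase.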
Fig. 6. An APACS Hci screen.
6.2. Running phase
During the running phase, the Daq agent receives real-time data frame by frame and passes the data to the Server via a port. When the Server receives objects, it distributes them to the various APACS components based on their requests. Based on the real-time data and the simulation model, the Tracking agent produces an event object per frame, which contains simulated values, and passes the object to the Server agent via a port. Based on the real-time data and the simulated values, the Monitoring agent performs inferencing and outputs a set of events, a symbolic description of the behaviour of the state of the plant. The Diagnosis agent is usually in waiting mode; when it reaches one or more diagnoses, it sends them to the Server agent. When the Verification agent receives a diagnosis, it verifies the diagnosis and sends out its judgement on it. The judgements reach the Diagnosis agent via the Server agent. If the Verification agent rejects a diagnosis, the Diagnosis agent reconsiders its reasoning and processes further. The Hci agent receives all of the information and presents it to the operators.

Fig. 6 shows a screen dump of the Hci agent while APACS is running. In terms of actual user interaction, the operators can focus on any sub-system (e.g. REACTOR, TURBINE, CONDENSER, etc.) by clicking on the buttons at the top of the window. The buttons at the left side show the working status of key APACS agents, such as Monitoring, Diagnosis and Verification. At the right, a simplified schematic window displays the status of all the important devices within the feedwater system. The operators can click on any device to get its current status. For instance, if the operator clicks on the boiler BO1, all the attributes of BO1 are shown in a table. If APACS finds that any device is in an emergency situation, the colour of the device icon is changed.
The operators can see the device icon flash on the screen and hear the emergency alarm sound. A window then appears showing suggestions for dealing with the emergency.
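The Server's request-based distribution during the running phase can be sketched as follows; the topic names and request table are our own illustration of the mechanism, paired with the requests registered at start-up.

```python
# During the running phase the Server forwards each incoming object only
# to the clients that registered a request for its topic at start-up.
requests = {
    "Monitoring": ["sensor_frame", "simulated_frame"],
    "Hci": ["sensor_frame", "diagnosis", "judgement"],
    "Verification": ["diagnosis"],
}

def distribute(topic):
    """Return the clients the Server would deliver an object of this topic to."""
    return [client for client, topics in requests.items() if topic in topics]

# A diagnosis reaches both the operator interface and the Verification agent:
assert distribute("diagnosis") == ["Hci", "Verification"]
```

This is the same subscription idea as the routing sketch in Section 2, but driven by the request table the Server accumulated during start-up step 6.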
7. Conclusions

This paper's contribution to the knowledge-based system research community is to describe how to build a large multi-agent system and to demonstrate a practical solution to knowledge sharing and knowledge reuse through the successful use of a repository. The common repository in the APACS system contains several sharable and reusable ontologies, including the ontology of agents, the ontology of domain objects and the ontology of events. It is hoped that the process of building a successful APACS multi-agent system
will be valuable to KBS practitioners in recognising common repository designs and considerations in future knowledge sharing system development. It is also hoped that the process undertaken by the APACS project will be of value to KBS projects in the near future, achieving economic benefit through reductions in the number of preventable nuclear plant shutdowns, and technical benefit by serving as a backbone open framework ready to be integrated into the overall industrial plant information infrastructure.
Acknowledgements We would like to thank the whole APACS team for its efforts, consisting of M. Benjamin, J.Q.B. Chou, B. Diebold, A. Gullen, D. Elder, P. Kar, B. Kramer, J. Mylopoulos, R. Prager, R. Randhawa, C. Wang and H. Wang. Financial support is gratefully acknowledged from the Government of Canada, Ontario Hydro, CAE Electronics, Stelco Canada, Shell Canada, Hatch Associates and PRECARN Associates Inc.
References

[1] M.A. Musen, Dimensions of knowledge sharing and reuse, Comput. Biomed. Res., 25 (1992) 435-467.
[2] R. Neches, R. Fikes, T. Finin, T. Gruber, R. Patil, T. Senator and W. Swartout, Enabling technology for knowledge sharing, AI Mag. (Fall 1991) 36-56.
[3] D.B. Lenat and R.V. Guha, Building Large Knowledge-Based Systems, Addison-Wesley, Reading, MA, 1990.
[4] R.V. Guha and D.B. Lenat, CYC: a mid-term report, AI Mag. (Fall 1990) 32-59.
[5] R.V. Guha and D.B. Lenat, Re: CycLing paper reviews, Artif. Intell., 61 (1993) 149-174.
[6] M.A. Musen and S.W. Tu, Problem-solving models for generation of task-specific knowledge-acquisition tools, in J. Cuena (ed.), Knowledge-Oriented Software Design, Elsevier, Amsterdam, 1993.
[7] J. Mylopoulos, B. Kramer, H. Wang, M. Benjamin, Q.B. Chou and S. Mensah, Expert system applications in process control, in Proc. Int. Symp. on Artificial Intelligence in Material Processing Operations, Edmonton, Canada, August 1992.
[8] A. Newell, The knowledge level, Artif. Intell., 18 (1982) 87-127.
[9] M. Cutkosky, R.S. Engelmore, R.E. Fikes, T.R. Gruber, M.R. Genesereth, W.S. Mark, J.M. Tenenbaum and J.C. Weber, PACT: an experiment in integrating concurrent engineering systems, IEEE Comput., 26 (1993) 28-37.
[10] T.R. Gruber, Towards principles for the design of ontologies used for knowledge sharing, in N. Guarino and R. Poli (eds.), Formal Ontology in Conceptual Analysis and Knowledge Representation, Academic, New York, 1994.
[11] H.C. Lefkovits, IBM's Repository Manager/MVS, QED Information Sciences, 1991.
[12] H. Wang, Repositories for co-operative information systems, Inform. Soft. Technol., 38 (1996) 333-341.
[13] J. Mylopoulos et al., Telos: a language for representing knowledge about information systems, ACM Trans. Inform. Syst., 8 (1990) 325-362.
[14] J. Mylopoulos, H. Wang and B. Kramer, KNOWBEL: a hybrid expert system building tool and its applications, IEEE Expert, 8 (1993) 17-24.
[15] The Versant Manual, Version 3.0, Versant Inc., 1994.
[16] XShell User's Manual, Version 3.0, ExperSoft Inc., 1994.