Expert Systems with Applications 38 (2011) 360–370
Conceptual modeling of causal map: Object oriented causal map

Soon Jae Kwon
Department of Business Administration, Daegu University, Daegu, Naeriri 15, JinRayng, Kyong San 712-714, Korea
Keywords: Causal map; Conceptual modeling; Decision making; Ontology
Abstract

This study began with the question: "Is there a methodology that can turn a causal map, comprising causal relationships, into a database by using a conceptual modeling method?" In this research, a causal map is proposed to represent causal relations using a conceptual modeling method. We formalize causality as a cognitive rule that allows us to control changes in the decision making environment. Such causality is embedded in the real world (the application domain), and users (decision makers) employ it to represent the set of causal relations they have retrieved from their knowledge and experience in that domain. The causal relations that decision makers possess in their know-how and short- and long-term memory are usually formalized as a causal map, and it is then verified whether this causal map helps users solve their problems. By extending the basis of conceptual modeling theory (ontology theory, classification theory, decomposition theory, and semantic network theory), we introduce the concept of a causal entity diagram and explain why a causal map is needed to analyze specific domain knowledge for a given decision problem. Finally, an object oriented causal map (O2CM) is employed in this study to verify the usefulness of causal maps for users (decision makers).

© 2010 Elsevier Ltd. All rights reserved.
1. Introduction

Information system (IS) projects begin by examining and understanding the business and organizational domain in which the information system is to be embedded. The system analysis phase of information systems development is concerned with representing this business domain (often called the "real world domain"). Such a description is termed a conceptual model: "conceptual modeling is the activity of formally describing some aspects of the physical and social world around us for purposes of understanding and communication" (Mylopoulos, 1992). Conceptual models, which are mostly graphic, are used to represent both static phenomena (e.g., things and their properties) and dynamic phenomena (e.g., events and processes) in some domain. They have at least four purposes: (1) supporting communication between developers and users, (2) helping analysts understand a domain, (3) providing input for the design process, and (4) documenting the original requirements for future reference (Kung & Solvberg, 1986). But initial research on this topic suffered from a lack of theory (Davis, 1992). Without theory, researchers initially created conceptual modeling techniques based on personal experience or exploratory case studies (Olle et al., 1988). This led to a proliferation of techniques, denoted the "yet another modeling approach" (YAMA) syndrome (Wand & Weber, 2002). In the last decade, theory-driven research has emerged to evaluate the extent to which conceptual models communicate meaning. Wand and Weber (1993) proposed
that conceptual modeling grammars should be based on a theory of ontology—that is, a theory that articulates those constructs needed to describe the structure and behavior of the world in general (see also: Ashenhurst, 1996; Wimmer & Wimmer, 1992). Parsons (1996) has used concept theory, Auramaki, Lehtinen, and Lyytinen (1988) have used speech-act theory, and Stamper (1987) has used semiotics. These studies proposed frameworks that restructure conceptual modeling on a theoretical basis. Among them, Wand and Weber (2002) suggested a framework comprising four elements: conceptual modeling grammars—intergrammar (Agarwal, Sinha, & Tanniru, 1996; Kim, Hahn, & Hahn, 2000; Kim & March, 1995; Vessey & Conger, 1994) and intragrammar (Bodart, Sim, Patel, & Weber, 2001; Burton-Jones & Meso, 2006; Gemino, 2004; Shanks, Tansley, & Weber, 2004)—conceptual modeling methods (Basu & Blanning, 2000; Parsons, 1996; Parsons & Wand, 1992; Storey, 1991; Wand, Storey, & Weber, 1999; Wand & Woo, 1993), conceptual modeling scripts (Kesh, 1995; Lindland, Sindre, & Sølvberg, 1994; Moody & Shanks, 1998), and conceptual modeling contexts (Batra & Davis, 1992; Dunn & Grabski, 1998; Shanks, 1997; Sutcliffe & Maiden, 1992). In this sense, an object oriented causal map (O2CM) is proposed in this study to represent causal relations using a conceptual modeling method. Numerous studies have attempted to explore causal maps (Axelrod, 1976; Eden, 1988; Lee & Kwon, 2006; Montazemi & Conrath, 1986; Nelson, Nadkarni, Narayanan, & Ghods, 2000). Previous research on causal maps addressed how to establish causal relations after concepts about a specific problem area are drawn from interviews with experts. However, there
are a variety of methodologies for drawing causal maps. In addition, individuals' cognitive structures for perceiving objects differ, so it is not easy to draw a consistent causal map of a specific problem area. In this regard, this paper attempts to show how a causal map can be drawn centered on conceptual modeling. This methodology has the advantage that it can create various causal maps tailored to users by standardizing the various cognitive structures existing in a specific problem area and storing them in a database. Users can therefore easily draw a causal map that reflects how they think about a specific decision making problem and use it to solve the problem.
2. Theoretical background

We start with a basic assumption: an information system represents knowledge about things in some domain. Based on this, we apply four views of representation to interpret object concepts: ontology theory (Bunge, 1977, 1979), classification theory (Parsons & Wand, 1997), decomposition theory (Parsons & Wand, 1992), and semantic network theory (Collins & Quillan, 1969). The first view is that an information system represents things in the application domain, such as employees, products, and customers; therefore, a theory of the nature of things, an ontology, can guide us in using objects in systems analysis. The second view is that an information system represents human knowledge about a domain; hence, a theory of the structure and organization of knowledge about things, that is, a theory of concepts (classification theory), can also serve as our guide. The third view is that breaking a complex system down into smaller, relatively independent units is the main tool available to simplify the construction of complex man-made systems. The last view is that semantic network theory suggests causal maps lead analysts to construct efficient mental representations of a domain. Semantic network theory states that individuals store concepts in memory as nodes connected by paths (Ashcraft, 2002).

2.1. The ontology theory approach

Ontology has a tradition dating back to the ancient Greek philosophers; however, no universally accepted ontology has emerged. Hence, the first step in the analysis is choosing an ontological model. We base our analysis of object concepts on Mario Bunge's ontological work (Bunge, 1977, 1979). This work has already been used to formalize object concepts (Wand, 1989). Here we use it to interpret these concepts for systems analysis, beginning by introducing its main constructs and assumptions. Hereafter, "ontology" refers to Bunge's ontological model.

2.1.1. Things and properties
The basic assumption is that the world is made of things that possess properties, but a thing is not just a bunch of properties. A property can be intrinsic to a thing or mutual to several things. Properties can be restricted by laws relating one or several properties, for example, a limit on a salary, or on a salary at a certain rank. A special form of a law is precedence of properties, wherein the existence of one property implies the existence of another, preceding, property.

2.1.2. State of a thing
Things possess properties independently of what we know. In contrast, attributes are characteristics assigned to things by humans. Every property can, in principle, be represented by attributes. An attribute may, or may not, represent a property. The attributes used to describe a thing are called its "functional schema". Each of these attributes is a state variable, and their values at a certain time comprise the state of the thing. The possible states are constrained by the laws of the thing.

2.1.3. Dynamics of a thing
Ontology assumes that every thing must change. A principle called "nominal invariance" states that a thing can change properties and still be the same thing. Since the world is made of things, every change involves a change of things. Every change of a thing is manifested as a change of its state. Hence, dynamics is described in terms of state changes, termed events. Events are governed by transformation laws that define the allowed changes of state.

2.1.4. Interaction
Based on dynamics, interaction is defined as the ability of one thing to affect the state evolution of another. An interaction is a mutual property of the interacting things.
2.1.5. Composition of things
Things can be combined to form a composite thing. A property of a composite that is not possessed by any of its components is called an "emergent property". Ontology requires that every composite thing has emergent (holistic) properties.

2.1.6. Classification of things
A kind is the set of things possessing a given set of properties. A natural kind is defined by a set of properties and the laws connecting them. In the ontology-based view of objects, objects are representations of things (Wand, 1989). The attributes of the thing are represented as attributes of the object, and the allowed state transformations as the operations of the object. Interactions are represented as communications between objects.

2.2. The classification (concept) theory approach

Classification theory has its roots in psychological research on how humans structure knowledge about things by constructing concepts that describe categories of similar things. There are several theories of classification (Smith & Medin, 1982), differing in their view of what a class is. We use a view of classification which has previously been applied to object concepts (Parsons, 1996) to interpret object characteristics for systems analysis. The main elements of this view are instances and objects, properties, and concepts.

2.2.1. Instance and object
An instance denotes knowledge of the (believed) existence of a thing in the world. An object is a symbol designating the existence of an instance. To represent the existence of some set of instances, a set of corresponding surrogates is needed; hence, we need to be able to create objects to represent the existence of instances in some domain.

2.2.2. Property
Properties constitute knowledge about things beyond the fact that they exist.
An instance possesses properties of three types (Parsons, 1996): structural, which belong to the thing itself; relational, which relate the thing to other things; and behavioral, which determine possible changes to the values of structural and relational properties.

2.2.3. Concept
A concept is defined by the common properties possessed by a set of instances. A concept is intensional; in other words, it is defined by properties, not by the instances that possess them. Specialized concepts can be derived by adding properties to the definitions
of existing concepts. A concept is more than an arbitrary collection of properties. Concepts are meaningful only if they are useful to humans for efficient storage and retrieval of knowledge about things. Since usefulness depends on context and purpose, concepts are neither predetermined nor fixed, but may vary among individuals or over time. The usefulness of concepts is determined by the principles of cognitive economy and inference (Smith & Medin, 1982). Cognitive economy means reducing the mental work required to keep track of knowledge about instances. Inference refers to the ability to infer characteristics of instances from concepts.

2.2.4. Composite instances
According to classification theory, instances of some concepts may be composed of instances of other concepts. Some instances are composed of simpler instances. A composite instance possesses emergent properties. The view of objects based on classification theory is as direct representations of cognitive instances, having structural, relational, and behavioral properties (Parsons, 1996).

2.3. The decomposition theory approach

The decomposition process involves perceiving a "messy" real world domain (or users' perception of that domain) and breaking it down to identify its structure and behavior. Once an analyst has identified the domain's structure and behavior, s/he can engage in the mapping process to articulate her/his conception using the constructs of a modeling grammar (e.g., ER diagram, UML). An analyst could propose several decompositions to represent their conception of a domain. The aim of the good decomposition method (GDM) is to propose a necessary set of criteria for determining the relative quality of alternative decompositions (Wand & Weber, 1990). The GDM approach derives from a theory of ontology (Bunge, 1977, 1979) and memory (Collins & Quillan, 1969) and specifies five criteria for defining a good decomposition.
GDM evaluates decomposition quality according to five criteria that characterize the coherence and efficiency with which a system responds to dynamics: minimality, determinism, losslessness, weak coupling, and strong cohesion (Burton-Jones & Meso, 2006).

2.3.1. Minimality
A system cannot contain redundant state variables, i.e., variables that do not change state or are not used during a system's life (e.g., in UML, attributes represent state variables; Rumbaugh, Jacobson, & Booch, 1999). For every subsystem at every level in the level structure of the system, there are no redundant state variables.

2.3.2. Determinism
Determinism relates to a system's predictability in response to events (e.g., in UML, events are shown explicitly in statecharts and implicitly in activity and class diagrams). Bunge's (1977) ontology distinguishes between external and internal events. Every event at every level in the level structure of the system is either an external event or a well-defined internal event.

2.3.3. Losslessness
Losslessness requires that emergent properties (i.e., properties emerging from subsystems interacting) are not lost during decomposition. Every hereditary state variable and every emergent state variable in a system is preserved in the decomposition.

2.3.4. Weak coupling
Coupling refers to interactions among subsystems (Weber, 1997). In OO design, Chidamber and Kemerer (1994) operationalized GDM's definition of coupling as methods in one object using methods or instance variables in another object. The cardinality
of the totality of input for each subsystem of the decomposition is less than or equal to the cardinality of the totality of input for each equivalent subsystem in the equivalent decomposition.

2.3.5. Strong cohesion
Strong cohesion refers to the relationship between input sets and output sets in a transformation (Dromney, 1996). A conceptualization manifests a good decomposition only if, for every set of outputs, all output variables affected by the input variables are contained in the same set, and the addition of any other output to the set does not extend the set of inputs on which the existing outputs depend.

2.4. Semantic network theory

In this theory, human semantic memory is structured as a network of nodes linked via directed pathways. The nodes may be entities, attributes of entities, classes, or attributes of classes. Paths stand for some type of relationship between the things represented by the nodes (e.g., membership of a specific entity in a class, or possession of a property by an entity). Nodes and pathways may be in working memory because an individual has just encoded them (e.g., read them off a conceptual schema diagram). Alternatively, they may be in long-term memory because individuals have learned them already.

3. Object oriented causal map (O2CM)

3.1. Causality definition

The notion of causality is absolutely central to recent philosophical work in semantics (the philosophy of mind and intentionality), epistemology, and the philosophy of science (Dretske, 1981; Fodor, 1990). Causality is defined as the extent to which objects or notions are linked and interact with each other, identified through key words implying an explicit cause-effect relationship: "if," "then," "because," "so," "as," and so forth.
Causality has been extensively discussed in the field of psychology, from the perspectives of psychological causal attribution (Chandler, Shama, Wolf, & Planchard, 1981), causal dimensions (Russell, 1982), causality orientation (Deci & Ryan, 1985), and locus of causality (Ryan & Connell, 1989). Causality is often viewed as a specific phenomenon (such as a policy or strategy), not as a general cognitive assumption or theory. Some studies have used similar language, such as cause (Pearl, 2000), causal understanding (Asami, Takeuchi, & Otsuki, 1998), autonomous/control causality (Pullins, 2001), mental causation (Koons, 1998), and causal model (Teas & Laczniak, 2004), while others have used the term causality in a context that differs from that of this study. For example, Goldvarg and Laird (2001) used naïve causality to represent a mental model explaining causal meaning and reasoning, and Asami et al. (1998) used causality to describe relationships that can be observed in physical systems. Pearl (2000) used causality, cause, and causation to interpret scientific data, and Pullins (2001) described the orientation of individuals in the field of buyer-and-seller negotiation using the terms autonomous causality and control causality. To summarize, we formalize causality as a cognitive rule that allows us to control changes in the decision making environment. Such causality is embedded in the real world (application domain). Therefore, we use O2CM to represent the set of causal relations that decision makers have retrieved from their knowledge and experience in the application domain. The set of causal relations that decision makers possess in their know-how and long-term memory is usually formalized by O2CM.
Fig. 1. The types of causal relation in O2CM: unidirectional, broadcast, concentrate, and transitive.

3.2. A specification of O2CM

The O2CM model is based on Bunge's work on ontology (Bunge, 1977, 1979) and classification theory (Parsons, 1996; Smith & Medin, 1982). The constructs introduced above – concept, concept node, causal relation, causal map (diagram), and reachability (adjacency) matrix – form the basis of O2CM (refer to Fig. 1). Since the cognitive research reviewed above is not systematically formalized, we develop an independent formalism to define the O2CM in a manner consistent with the theory and research results summarized in the previous section.

3.2.1. Concept
A concept can be a single word such as "price" or "sale", or a phrase such as "consumer purchasing decision". In this sense, a concept is an ideational kernel – a single idea totally bereft of meaning except as it is connected to other concepts (Carley, 1986). Concepts are nothing more than symbols whose meanings depend on their use, i.e., their relationship to other symbols (Carley, 1986; Gollob, 1968; Heise, 1969, 1970; Minsky, 1975). A set of concepts is referred to as a vocabulary or lexicon. There is presumed to be a countable and generally finite number of concepts at any one time in any one socio-cultural environment. Concepts can be classified or typed, but there is, a priori, no one right classification scheme. Concepts provide two primary functions: cognitive economy and inference (Smith, 1988). Cognitive economy means that concepts reduce the cognitive burden associated with storing and organizing knowledge: knowing that a thing is an instance of some concept identifies its similarity to other instances of the concept. Inference means that conclusions may be drawn about unobserved properties by classifying an instance based on observed properties (Smith, 1988).

3.2.2. Concept node
A concept node denotes knowledge of the existence of a thing in the world. The concept node is a symbol designating the existence of an instance. To represent the existence of some set of instances, a set of corresponding surrogates is needed; hence, we need to be able to create concept nodes to represent the existence of instances in some application domain.

Definition 1. Let T denote a finite set of concept nodes, Q a finite set of instances, and f a 1:1 function f: T → Q. An element t ∈ T is a node representing q ∈ Q if and only if f(t) = q.

Theorem 1. For any finite set of instances, a set of concept nodes representing those instances can be constructed.

Proof. Let Q denote a finite set of instances, and N denote the cardinality of Q. The elements of Q can be indexed q1, q2, . . ., qN. Then {1, 2, . . ., N} is a set of concept nodes representing the instances in Q (each concept node n represents the instance qn), since the set of pairs {(1, q1), (2, q2), (3, q3), . . ., (N, qN)} is a 1:1 function such that f(n) = qn. Here, finiteness is equivalent to assuming that any person has knowledge of a finite number of things. The world is made of instances (things) that possess values. A concept node has a function from a set of instances to a set of values, where the values may be other instances. This is formalized via the notion of functional schema. ∎

Definition 2. A concept is modeled in terms of a functional schema Fs = [F1, F2, . . ., Fn], where each function Fi assigns a value to an observed concept node at time t. Let Tf denote a finite set of concept nodes and V a set of values. A concept node of Tf is a set of functions, F = {Tf → V}, where V = {0, 1}.

3.2.3. Causal relation
A causal relation is the tie that links two concept nodes together. A causal relation has directionality and a value (Carley, 1986a). Directionality means that the causal relation between two concept nodes can be unidirectional. Value means that the causal relation between two concept nodes can take values between −1 and 1. These causal relations may be conceptual, as in "causality" or "is-a" links, or may simply denote that the concept nodes are somehow cognitively related in the mind of the text's author or are proximally located in the text (Axelrod, 1976; Minsky, 1975; Winston, 1977).

Definition 3. Let Ci and Cj denote two concept nodes included in T, and let R be a finite set of causal relations. If Ci provides a direct causal relation for the occurrence of Cj, then Ci and Cj are said to be connected by a causal relation. The causal relation has a function from a set of values {−1, 1}, where the values may be other causal relations.

Definition 4. Let Rt denote a finite set of causal relations and V a set of values. A causal relation of Rt is a set of functions, T = {Rt → V}.

Before introducing the fifth definition, we must distinguish between causal relations and logical relations when making decisions about a specific problem on the basis of knowledge. Causal relations are unidirectional, i.e., rules have an ordering between the cause and the result. Logical relations, on the other hand, describe static relations between an antecedent and a consequent, and thus are bi-directional. For example, a logical relation P → Q can be used to infer Q from P, or to infer P from Q.
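Definitions 1–4 can be sketched as a small data model. The class and attribute names below (ConceptNode, CausalRelation) are illustrative stand-ins, not constructs from the paper:

```python
# Illustrative sketch of Definitions 1-4: concept nodes as surrogates for
# instances (a 1:1 mapping f), and signed causal relations between nodes
# with values drawn from {-1, +1}. Names are hypothetical.

class ConceptNode:
    def __init__(self, name, instance):
        self.name = name          # symbol designating the instance
        self.instance = instance  # the thing in the application domain

class CausalRelation:
    """Directed tie between two concept nodes with a value in {-1, +1}."""
    def __init__(self, cause, effect, value):
        assert value in (-1, +1), "Definition 4: value must be in {-1, +1}"
        self.cause, self.effect, self.value = cause, effect, value

# Definition 1: a 1:1 function f from concept nodes T to instances Q.
price = ConceptNode("price", "observed product price")
sales = ConceptNode("sales", "observed sales volume")
f = {price: price.instance, sales: sales.instance}  # f(t) = q

# Definition 3: price provides a direct (negative) causal relation for sales.
r = CausalRelation(price, sales, -1)
```

Note the causal relation is stored as an ordered (cause, effect) pair, reflecting the unidirectionality that distinguishes causal from logical relations.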
The following definition explains the close relationship between causal relations and logical relations.

Definition 5. Let T1 and T2 denote finite sets of concept nodes, and R denote a finite set of causal relations. If C1 → C2, then (C1 → C2) ∧ (C1 ∈ T1) → (C2 ∈ T2) for all T1 and T2.

3.2.4. Causal map
A causal map is a network formed from concept nodes. At a theoretical level, maps can be disaggregated into types. Hence, we take a more operational stance and simply use the term map to denote an interrelated set of concept nodes. By sharing concept nodes, networks can be formed; the resultant causal map is a representation of a mental model in a specific domain of knowledge. The objective of a causal map is to describe the domain knowledge for solving a given
decision problem. A causal map created by intersecting two individuals' causal maps can be thought of as a team causal map, i.e., a representation of the shared or team mental model. A causal map for a specific problem in a specific domain of knowledge consists of:

(1) Tf = {C1, C2, . . ., Cn},
(2) Rt = {rij}

where Tf is a set of observed concept nodes at time t in terms of a functional schema Fs, and Rt is a set of causal relations.

3.2.5. Reachability (adjacency) matrix
According to the relationship assignments between concept nodes, a causal map can be constructed. There are different methods of analyzing a causal map. The causal map can be converted to an adjacency matrix. The adjacency matrix A is a square matrix of size n × n, where n is the total number of concept nodes in the corresponding causal map. In this matrix, the 'causing' variables are listed along the left margin and the 'caused' variables are listed along the top margin. If a positive relationship, say from i to j, is present in the causal map, the cell where row i intersects column j has a positive value. If a negative relationship from i to j is present, the cell has a negative value. If there is no relationship between i and j, the cell is zero. By convention,
variables are assumed to have no direct effects on themselves, so the diagonal cells of an adjacency matrix are zero. The most useful property of the adjacency matrix is that it permits the computation of the reachability matrix. The adjacency matrix A indicates only direct relationships between concept variables, that is, concept linkage paths of length 1. The reachability matrix R, on the other hand, reflects the existence of indirect or deductive relationships, i.e., the indirect effect of a variable a on another variable d. For example, if four variables form the path a → b → c → d, then, for this path of length 3, there is an indirect effect of variable a on variable d. When there is an even number of negative direct relationships between the variables, the indirect effect of a on d is positive. When there is an odd number of negative direct relationships, the indirect effect of a on d is negative. If the adjacency matrix contains no feedback loops, and if it is binary rather than weighted, the cumulative indirect effects (the cumulative reachability matrix CR) are calculated using matrix addition and multiplication, according to the following formula:
CR = A + A^2 + A^3 + ... + A^(n-1)
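The cumulative reachability computation can be sketched directly; this is a minimal illustration under the stated assumptions (binary, loop-free adjacency matrix), with illustrative helper names:

```python
# Sketch of CR = A + A^2 + ... + A^(n-1) for a loop-free causal map.
# Signs propagate along paths, so an even number of negative links on a
# path yields a positive indirect effect, and an odd number a negative one.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def cumulative_reachability(A):
    n = len(A)
    CR = [[0] * n for _ in range(n)]
    P = A
    for _ in range(n - 1):                 # accumulate powers A^1 .. A^(n-1)
        CR = [[CR[i][j] + P[i][j] for j in range(n)] for i in range(n)]
        P = matmul(P, A)
    return CR

# Path a -> b -> c -> d with two negative links: the indirect effect of a
# on d should be positive (even number of negatives on the path).
A = [[0,  1,  0,  0],
     [0,  0, -1,  0],
     [0,  0,  0, -1],
     [0,  0,  0,  0]]
CR = cumulative_reachability(A)
print(CR[0][3])   # 1: positive indirect effect of a on d
```

The loop-free (acyclic) assumption matters: with feedback loops the power series would not terminate meaningfully at A^(n-1).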
Fig. 2. The object model for generating causal maps from website design.
Here, A is the adjacency matrix and n is the number of variables in the causal map. The row sum of the absolute values of the elements of CR for row i gives the outdegree (od) of variable i. Similarly, the column sum of the absolute values of the elements of CR for column i gives the indegree (id) of variable i. The total number of concept nodes reachable from concept i is represented by odi, and
Fig. 3. Query by O2CM.
idi represents the total number of concept nodes from which concept i is reachable. The sum of odi and idi gives the total degree of i (tdi), which is a useful operational measure of that variable's cognitive centrality in the opinion structure of the experts.
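The degree measures can be read off CR directly; a sketch, with the CR matrix for the path a → b → c → d (computed as A + A^2 + A^3) assumed as input:

```python
# Sketch of the degree measures: od_i is the row sum of absolute values of
# CR, id_i is the column sum, and td_i = od_i + id_i measures the cognitive
# centrality of variable i.

def degrees(CR):
    n = len(CR)
    od = [sum(abs(v) for v in CR[i]) for i in range(n)]          # outdegree
    id_ = [sum(abs(CR[i][j]) for i in range(n)) for j in range(n)]  # indegree
    td = [od[i] + id_[i] for i in range(n)]                      # total degree
    return od, id_, td

# CR for the path a -> b -> c -> d:
CR = [[0, 1, -1,  1],
      [0, 0, -1,  1],
      [0, 0,  0, -1],
      [0, 0,  0,  0]]
od, id_, td = degrees(CR)
print(od)  # [3, 2, 1, 0]: node a reaches three other nodes
```

In this simple chain every node has the same total degree, so centrality only discriminates between variables in richer maps.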
4. Drawing a causal map with O2CM

We now propose a four-step process for extracting a causal map from a specific application domain using O2CM, developed following the UML method. Recently, major methodologists of object oriented techniques have proposed the unified modeling language (UML) (Booch, Rumbaugh, & Jacobson, 1999). The UML combines the strengths of various object oriented methodologies (Booch, 1994; Jacobson, 1992, 1995; Rumbaugh, Blaha, Premerlani, Eddy, & Lorensen, 1991). The UML provides multiple diagrams from two basic perspectives: static and dynamic. It also provides class diagrams, state transition diagrams, and event trace diagrams. In addition, the UML offers object message diagrams that model the interaction between the objects within the system in terms of events and relation changes among objects.

First, the user (decision maker) identifies the set of concepts that will be used in O2CM. Second, the user decomposes concepts into concept nodes and values from a functional schema. Third, the user defines the types of causal relations that can exist between these concept nodes and uses a computer-assisted approach to code the information based on causal relations using these concept nodes. Finally, the user can display the resultant causal map, which serves as a representation of the specific application domain, graphically and analyze it statistically. Thus causal maps drawn from different application domains can be compared.

Let us discuss the process of converting a causal map using O2CM. For the sake of convenience, we refer to website design (Lee & Chung, 2006). Assume that a decision maker who intends to solve a website design development problem possesses causal relations of concepts as shown in Fig. 2 (Definition 2 and Theorem 1).
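The four steps above can be sketched end-to-end. The website-design concepts and relations below are illustrative stand-ins loosely echoing the examples discussed in the text, not the actual contents of Fig. 2:

```python
# Hypothetical sketch of the four-step O2CM process for the website-design
# example. Concept names and relations are illustrative, not from Fig. 2.

# Step 1: identify the set of concepts.
concepts = ["payment", "interactive", "usage behavior"]

# Step 2: decompose concepts into concept nodes (per the functional schema).
concept_nodes = {
    "payment": ["security", "payment method"],
    "interactive": ["bulletin interactivity", "speed of site access"],
    "usage behavior": ["revisit intention"],
}

# Step 3: define signed causal relations between concept nodes.
relations = [
    ("security", "payment method", +1),
    ("speed of site access", "revisit intention", +1),
    ("bulletin interactivity", "revisit intention", +1),
]

# Step 4: assemble the resultant causal map as an adjacency structure,
# ready for graphical display or statistical analysis.
nodes = sorted({n for ns in concept_nodes.values() for n in ns})
index = {n: i for i, n in enumerate(nodes)}
A = [[0] * len(nodes) for _ in nodes]
for cause, effect, value in relations:
    A[index[cause]][index[effect]] = value

print(sum(abs(v) for row in A for v in row))  # 3 coded causal relations
```

The adjacency structure produced in step 4 is exactly the matrix form described in Section 3.2.5, so reachability and degree analyses can follow directly.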
An O2CM consists of manifolds, concepts, concept nodes, concept objects, and causality. A manifold is a collection of website design concepts drawn from concept types in different causal maps that are conceptually associated. A concept is a collection of attributes from different object types which are associated. For example, an "icon" concept is composed of concept nodes; audios, images, animation, and motion pictures, as well as categories, frames, and web-pages, are part of the "icon" concept. Concept nodes are commonly referred to as concepts which have attributes such as color, size, and update. Concept nodes are parts of a concept that enclose some set of instances: "product information," "speed of site access," and "state" are examples of concept nodes. Concept objects are symbols designating the existence of an instance of a concept node. Causality is a collection of causal relations between objects, and most O2CMs consider objects that are in the same concept and concept nodes. For example, a "payment" concept may contain "security" and "payment method", whereas an "interactive" concept may contain "bulletin interactivity" and "speed of site access". The tool
to extract information from concept nodes is described as a type of concept object, and the collection of connected concept nodes forms a causality. These concept nodes are contained within one concept, though some of their attributes may be represented in different concepts. Causal relation types such as "increase" and "decrease" are then applied within the domain of the concept. The context of "increase" may be at the "speed of site access" concept node level. Concept nodes may exist across different concepts. For example, the "bulletin interactivity" concept node may be applied to the "Interactive" and "Usage Behavior" concepts. Therefore, "increase" has the same meaning in both the "Interactive" and "Usage Behavior" concepts when represented at the "interactive" concept node level. The generated source code for Fig. 2 is described in Appendix A. There are many ways of using these causal relation characteristics. However, to define causal relations, the user (decision maker) must specify how directionality and value will be used by O2CM. Appendix B presents the data definition language (DDL) statements and a snapshot for creating the database. Let us suppose that the causal relations which the user (decision maker) retrieves from his or her knowledge to solve this problem can be described as in Fig. 3 (Definitions 3–5). Our assumption here is that the causal relation (rij) takes values in {−1, 1}, which allows more generalized techniques such as fuzzy logic and neural networks to be applied. Drawing the causal maps above in a query of O2CM with decision maker A can be depicted as in Fig. 4. Another causal map, derived using O2CM with decision maker B, is shown in Fig. 5.

5. Discussion and conclusion
Fig. 4. Decision maker A’s causal map of website design, drawn by O2CM.
Fig. 5. Decision maker B’s causal map of website design, drawn by O2CM.

The essence of our proposal is that, for analysis, a causal map consisting of causal relations is more suitable than other conceptual modeling approaches. We used ontological and classification theories of representation to interpret the O2CM and to derive guidelines for using it with specific application domain knowledge. As we stated previously, the specific application domain is associated with causality, not with a static, logical description of the world. A causal relation differs from a logical relation in that the former is unidirectional (from cause to effect), while the latter is bi-directional. We suppose that individuals control their decision making environment by exercising the notion of causality. For example, when an individual tries to make a decision (or plan) to solve a problem, s/he will describe and invoke some (not all) of his or her causal relation knowledge based on the application domain. Since individuals are assumed to perform decision making based only on a subjectively determined set of causal relations, the qualification problem, which would require the set of rules to reflect reality perfectly, can be avoided. Furthermore, when individuals perform decision making based only on this subjective set of causal relations, they can confirm the result against the real world; when it is correct (possibly by chance), there is no need for further processing. Individuals can adjust their set of causal relations so that its predictions match reality in critical situations, and can only survive in a real world where those predictions match reality at critical points. We claim that individuals develop and use a causal map, in the form of a proper set of causal relations, as the result of analyzing a specific application domain. Such causal maps can be revealed when s/he attempts to make a decision about a problem.
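The adjustment of a causal relation set until its predictions match reality can be sketched as a simple update rule. The rule below is our own illustration of that idea, not a mechanism specified by O2CM: the relation value is nudged toward the observed effect and clamped to [−1, 1].

```java
public class CausalAdjustment {
    // Hypothetical update: move a causal relation's value toward the
    // observed effect whenever prediction and reality disagree,
    // keeping the value within [-1, 1].
    static double adjust(double value, double predicted, double observed,
                         double rate) {
        double error = observed - predicted;   // mismatch with reality
        return Math.max(-1.0, Math.min(1.0, value + rate * error));
    }

    public static void main(String[] args) {
        double r = 0.25;                       // current relation value
        // the predicted effect (1.0) overshoots the observed one (0.5),
        // so the relation value is reduced
        r = adjust(r, 1.0, 0.5, 0.5);
        System.out.println(r);                 // prints 0.0
    }
}
```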
Acknowledgement

This research was supported by the Daegu University Research Grant, 2010.

Appendix A. Generated source code of the causal map from the website design

package Web_Site_Design;

public class WEBSITE_DESIGN {
    String MANIFOLD;
    String Description;
    String Type;
    String Data_Source;
}

/** @persistent */
public class INTERACTIVE extends WEBSITE_DESIGN {
    long Interactive_ID;
    String Use_of_Communication;
    String Use_of_Com_Alias;
    double Use_of_Com_Value;
    String Bulletine_Interactivity;
    String Bulletine_Interactivity_Alias;
    double Bulletine_Interactivity_Value;
    String Speed_of_Site_Access;
    String Speed_of_Site_Acc_Alias;
    double Speed_of_Site_Acc_Value;
    String Feedback_Section;
    String Feedback_Section_Alias;
    double Feedback_Section_Value;
    private Concept_OBJECT lnkConcept_OBJECT;
}

/** @persistent */
public class SEARCHING extends WEBSITE_DESIGN {
    long Searcing_ID;
    String Serching_Function;
    String Serching_Function_Alias;
    double Serching_Function_Value;
    String Possibility_of_Comparison_Shopping_of_Various_Products;
    String Possibility_of_Comparison_Shopping_of_Various_Products_Alias;
    double Possibility_of_Comparison_Shopping_of_Various_Products_Value;
    private Concept_OBJECT lnkConcept_OBJECT;
}
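As a usage illustration (our own, not part of the paper's appendix), the persistent classes above are populated like plain Java objects before an object database stores them; the fragment below uses minimal stand-ins limited to a few of the field names shown above, with the Concept_OBJECT link omitted.

```java
public class WebsiteDesignDemo {
    // Minimal stand-ins for the Appendix A classes, reduced to the
    // fields used in this example.
    static class WEBSITE_DESIGN {
        String MANIFOLD;
    }

    static class INTERACTIVE extends WEBSITE_DESIGN {
        long Interactive_ID;
        String Speed_of_Site_Access;
        double Speed_of_Site_Acc_Value;
    }

    public static void main(String[] args) {
        INTERACTIVE i = new INTERACTIVE();
        i.MANIFOLD = "Website design";          // inherited manifold name
        i.Interactive_ID = 1L;
        i.Speed_of_Site_Access = "Speed of site access";
        i.Speed_of_Site_Acc_Value = 1.0;        // e.g. "increase"
        System.out.println(i.MANIFOLD + "/" + i.Speed_of_Site_Access);
    }
}
```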
Appendix B. Data definition language statements for creating the database of the website design
CREATE TABLE `CAUSALITY` (
    `Causality_ID` INTEGER,
    `Relation_From` VARCHAR(100),
    `Relation_To` VARCHAR(100),
    `Relation_Value` DOUBLE
);

CREATE TABLE `INTERACTIVE` (
    `Interactive_ID` INTEGER,
    `Use_of_Communication` VARCHAR(100),
    `Use_of_Com_Alias` VARCHAR(100)
);

References

Agarwal, R., Sinha, A. P., & Tanniru, M. (1996). Cognitive fit in requirements modeling: A study of object and process methodologies. Journal of Management Information Systems, 13(2), 137–164.
Asami, K., Takeuchi, A., & Otsuki, S. (1998). A dialogue method for assisting students in understanding causalities in physical systems. Systems and Computers in Japan, 29(6), 1–15.
Ashcraft, M. H. (2002). Cognition. Upper Saddle River, NJ: Prentice Hall.
Ashenhurst, R. L. (1996). Ontological aspects of information modeling. Minds and Machines, 6, 287–394.
Auramaki, E., Lehtinen, E., & Lyytinen, K. (1988). A speech-act-based office modeling approach. ACM Transactions on Office Information Systems, 6(2), 126–152.
Axelrod, R. (1976). Structure of decision: The cognitive maps of political elites. Princeton, NJ: Princeton University Press.
Basu, A., & Blanning, R. W. (2000). A formal approach to workflow analysis. Information Systems Research, 11(1), 17–36.
Batra, D., & Davis, J. G. (1992). Conceptual data modeling in database design: Similarities and differences between expert and novice designers. International Journal of Man–Machine Studies, 37, 83–101.
Bodart, F., Sim, M., Patel, A., & Weber, R. (2001). Should optional properties be used in conceptual modelling? A theory and three empirical tests. Information Systems Research, 12(4), 385–405.
Booch, G. (1994). Object-oriented analysis and design with applications. Redwood City, CA: Benjamin/Cummings Publishing.
Booch, G., Rumbaugh, J., & Jacobson, I. (1999). The unified modeling language user guide. Reading, MA: Addison-Wesley.
Bunge, M. (1977). Treatise on basic philosophy. Ontology I: The furniture of the world (Vol. 3). Boston, MA: Reidel.
Bunge, M. (1979). Treatise on basic philosophy. Ontology II: A world of systems (Vol. 4). Boston, MA: Reidel.
Burton-Jones, A., & Meso, P. N. (2006). Conceptualizing systems for understanding: An empirical test of decomposition principles in object-oriented analysis. Information Systems Research, 17(1), 38–60.
Carley, K. (1986). An approach for relating social structure to cognitive structure. Journal of Mathematical Sociology, 12, 137–189.
Chandler, T. A., Shama, D. D., Wolf, F. M., & Planchard, S. K. (1981). Multiattributional causality for social affiliation across five cross-national samples. The Journal of Psychology, 107, 219–229.
Chidamber, S. R., & Kemerer, C. F. (1994). A metrics suite for object oriented design. IEEE Transactions on Software Engineering, 20(6), 476–493.
Collins, A. M., & Quillian, M. R. (1969). Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior, 8, 240–247.
Davis, G. B. (1992). Systems analysis and design: A research strategy and macroanalysis. In W. W. Cotterman & J. A. Senn (Eds.), Challenges and strategies for research in systems development (pp. 9–21). Chichester, England: John Wiley and Sons.
Deci, E. L., & Ryan, R. M. (1985). The general causality orientations scale: Self-determination in personality. Journal of Research in Personality, 19, 109–134.
Dretske, F. I. (1981). Knowledge and the flow of information. Cambridge, MA: The MIT Press.
Dromey, R. G. (1996). Cornering the chimera. IEEE Software, 13(1), 33–43.
Dunn, C., & Grabski, S. (1998). The effect of field dependence on conceptual modeling performance. Advances in Accounting Information Systems, 6, 65–77.
Eden, C. (1988). Cognitive mapping: A review. European Journal of Operational Research, 36(1), 1–13.
Fodor, J. (1990). A theory of content and other essays. Cambridge, MA: The MIT Press.
Gemino, A. (2004). Empirical comparisons of animation and narration in requirements validation. Requirements Engineering, 9(3), 153–168.
Goldvarg, E., & Johnson-Laird, P. N. (2001). Naïve causality: A mental model theory of causal meaning and reasoning. Cognitive Science, 25, 565–610.
Gollob, H. F. (1968). Impression formation and word combination in sentences. Journal of Personality and Social Psychology, 10, 341–353.
Heise, D. (1969). Affectual dynamics in simple sentences. Journal of Personality and Social Psychology, 11, 204–213.
Heise, D. (1970). Potency dynamics in simple sentences. Journal of Personality and Social Psychology, 16, 48–54.
Jacobson, I. (1992). Object-oriented software engineering: A use case driven approach. Reading, MA: Addison-Wesley.
Jacobson, I. (1995). The object advantage: Business process reengineering with object technology. Reading, MA: Addison-Wesley.
Kesh, S. (1995). Evaluating the quality of entity-relationship models. Information and Software Technology, 37(12), 681–689.
Kim, J., Hahn, J., & Hahn, H. (2000). How do we understand a system with (so) many diagrams? Cognitive integration processes in diagrammatic reasoning. Information Systems Research, 11(3), 284–303.
Kim, Y. G., & March, S. T. (1995). Comparing data modeling formalisms. Communications of the ACM, 38(6), 103–115.
Koons, R. C. (1998). Teleology as higher-order causation: A situation-theoretic account. Minds and Machines, 8, 559–585.
Kung, C. H., & Solvberg, A. (1986). Activity modeling and behaviour modeling. In T. W. Olle, H. G. Sol, & A. A. Verrijn-Stuart (Eds.), Information system design methodologies: Improving the practice (pp. 145–171). Amsterdam, The Netherlands: North-Holland.
Lee, K. C., & Chung, N. H. (2006). Cognitive map-based web site design: Empirical analysis approach. Online Information Review, 30(2), 139–154.
Lee, K. C., & Kwon, S. J. (2006). The use of cognitive maps and case-based reasoning for B2B negotiation. Journal of Management Information Systems, 22(4), 337–376.
Lindland, O. I., Sindre, G., & Sølvberg, A. (1994). Understanding quality in conceptual modeling. IEEE Software, 11(2), 42–49.
Minsky, M. A. (1975). A framework for representing knowledge. In P. Winston (Ed.), The psychology of computer vision. New York, NY: McGraw-Hill.
Montazemi, A. R., & Conrath, D. W. (1986). The use of cognitive mapping for information requirements analysis. MIS Quarterly, 10(1), 45–56.
Moody, D. L., & Shanks, G. (1998). Improving the quality of entity-relationship models: An action research programme. Australian Computer Journal, 30, 129–138.
Mylopoulos, J. (1992). Conceptual modeling and Telos. In P. Loucopoulos & R. Zicari (Eds.), Conceptual modeling, databases, and cases. New York, NY: Wiley.
Nelson, K. M., Nadkarni, S., Narayanan, V. K., & Ghods, M. (2000). Understanding software operations support expertise: A revealed causal mapping approach. MIS Quarterly, 24(3), 475–507.
Olle, T. W., Hagelstein, J., Macdonald, I. G., Rolland, C., Sol, H. G., Van Assche, F. J. M., et al. (1988). Information systems methodologies: A framework for understanding. Wokingham, England: Addison-Wesley, IFIP.
Parsons, J., & Wand, Y. (1992). An automated approach to information systems decomposition. IEEE Transactions on Software Engineering, 18(3), 174–189.
Parsons, J., & Wand, Y. (1997). Using objects for systems analysis. Communications of the ACM, 40(12), 104–110.
Parsons, J. (1996). An information model based on classification theory. Management Science, 42(10), 1437–1453.
Pearl, J. (2000). Causality: Models, reasoning, and inference (1st ed.). Cambridge: Cambridge University Press.
Pullins, E. B. (2001). The interaction of reward contingencies and causality orientation on the introduction of cooperative tactics in buyer–seller negotiations. Psychology & Marketing, 18(12), 1241–1257.
Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F., & Lorensen, W. (1991). Object-oriented modeling and design. Englewood Cliffs, NJ: Prentice Hall.
Rumbaugh, J., Jacobson, I., & Booch, G. (1999). The unified modeling language reference manual. Reading, MA: Addison-Wesley.
Russell, D. (1982). The causal dimension scale: A measure of how individuals perceive causes. Journal of Personality and Social Psychology, 42(6), 1137–1145.
Ryan, R. M., & Connell, J. P. (1989). Perceived locus of causality and internalization: Examining reasons for acting in two domains. Journal of Personality and Social Psychology, 57(5), 749–761.
Shanks, G. (1997). Conceptual data modeling: An empirical study of expert and novice data modelers. Australian Journal of Information Systems, 4(2), 63–73.
Shanks, G., Tansley, E., & Weber, R. (2004). Representing composites in conceptual modeling. Communications of the ACM, 47(7), 77–80.
Smith, E., & Medin, D. (1982). Categories and concepts. Cambridge, MA: Harvard University Press.
Smith, E. (1988). Concepts and thought. In R. Sternberg & E. Smith (Eds.), The psychology of human thought. Cambridge, MA: Cambridge University Press.
Stamper, R. (1987). Semantics. In R. J. Boland & R. A. Hirschheim (Eds.), Critical issues in information systems research (pp. 43–78). New York: John Wiley and Sons.
Storey, V. (1991). Meronymic relationships. Journal of Database Administration, 2(3), 22–35.
Sutcliffe, A. G., & Maiden, N. A. M. (1992). Analysing the novice analyst: Cognitive models in software engineering. International Journal of Man–Machine Studies, 36, 719–740.
Teas, R. K., & Laczniak, R. N. (2004). Measurement process context effects in empirical tests of causal models. Journal of Business Research, 57, 162–174.
Vessey, I., & Conger, S. A. (1994). Requirements specification: Learning object, process, and data methodologies. Communications of the ACM, 37(5), 102–113.
Wand, Y., & Weber, R. (1993). On the ontological expressiveness of information systems analysis and design grammars. Journal of Information Systems, 3, 217–237.
Wand, Y., & Woo, C. (1993). Object oriented analysis – Is it really that simple? In Proceedings of the third workshop on information technologies and systems, Orlando, FL (pp. 186–195).
Wand, Y., Storey, V., & Weber, R. (1999). An ontological analysis of the relationship construct in conceptual modeling. ACM Transactions on Database Systems, 24, 494–528.
Wand, Y., & Weber, R. (1990). An ontological model of an information system. IEEE Transactions on Software Engineering, 16(11), 1282–1292.
Wand, Y., & Weber, R. (2002). Research commentary: Information systems and conceptual modeling: A research agenda. Information Systems Research, 13(4), 363–376.
Wand, Y. (1989). A proposal for a formal model of objects. In W. Kim & F. Lochovsky (Eds.), Object-oriented concepts, databases, and applications. Reading, MA: Addison-Wesley.
Weber, R. (1997). Ontological foundations of information systems. Melbourne, Australia: Coopers and Lybrand and Accounting Association of Australia and New Zealand. Wimmer, K., & Wimmer, N. (1992). Conceptual modeling based on ontological principles. Knowledge Acquisition, 4, 387–406. Winston, P. H. (1977). Artificial intelligence. Reading, MA: Addison-Wesley.