Grounded discovery of symbols as concept–language pairs


Computer-Aided Design 44 (2012) 901–915


Amitabha Mukerjee a, Madan Mohan Dabbeeru b,∗

a Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India
b Center for Robotics, Indian Institute of Technology Kanpur, Uttar Pradesh 208016, India

Keywords: Cognition in design; Design symbols; Chunking; Language learning

Abstract

In human designer usage, symbols have a rich semantics, grounded in experience, which permits flexible usage — e.g. design ideation is improved by meanings triggered by contrastive words. In computational usage, however, symbols are syntactic tokens whose semantics is mostly left to the implementation, resulting in brittle failures in many knowledge-based systems. Here we ask if one may define symbols in computational design as {label, meaning} pairs, as opposed to merely the label. We consider three questions that must be answered to bootstrap a symbol learning process: (a) which concepts are most relevant in a given domain, (b) how to define the semantics of such symbols, and (c) how to learn labels for these so as to form a grounded symbol. We propose that relevant symbols may be discovered by learning patterns of functional viability. The stable patterns are information-conserving codes, also called chunks in cognitive science, which relate to the process of acquiring expertise in humans. Regions of a design space that contain functionally superior designs can be mapped to a lower-dimensional manifold; the inter-relations of the design variables discovered thus constitute the chunks. Using these as the initial semantics for symbols, we show how the system can acquire labels for them by communicating with human designers. We demonstrate the first steps in this process in our Baby Designer approach, by learning two early grounded symbols, tight and loose.

© 2011 Elsevier Ltd. All rights reserved.

1. Symbols in design

The knowledge that designers have is private, and not easy to access. What is available externally is what designers say about their designs, or other depictions such as sketches. These external models of design use symbols, e.g. a word such as "tolerance" or a sign such as the schematic symbol for a resistor. The word symbol means "something that stands for or denotes something else" (OED, sense 2a). However, in formal use, it is merely a token used in an algebraic expression, and what it "stands for" is not part of the model. In constructing formal models of design, symbols are often treated as logical predicates; thus, one symbol is defined in terms of other symbols, and so on. On the other hand, a designer's mental concepts of symbols are grounded in experience; they encode context-dependent variations and permit flexible usage. This work proposes a mechanism for defining symbols as cognitive entities, determined by experience, as opposed to defining them manually. In this view, symbols constitute a tight coupling between a linguistic, graphical, or gestural label and an associated concept or idea, the semantics — referred to in cognitive science as its image schema [1].



∗ Corresponding author. Tel.: +91 988 966 7671. E-mail addresses: [email protected] (A. Mukerjee), [email protected] (M.M. Dabbeeru). URLs: http://www.cse.iitk.ac.in/users/amit (A. Mukerjee), http://terpconnect.umd.edu/∼mmd (M.M. Dabbeeru). doi:10.1016/j.cad.2011.06.004

The label is sometimes called the phonological pole of the symbol, and the image schema its semantic pole [2]. These label–meaning pairs follow an agreed convention within a group, so that others may recognize the image schema when exposed to the label. Thus, the resistor symbol evokes a rich suite of thoughts — that it is an electrical device, that its current varies linearly with voltage, that it gets hot over use, and so on. The more experienced a designer, the more richly nuanced this semantics. What lends flexibility to symbol usage among humans is the availability of this rich semantic base; without a semantics, or even with a sharply defined (boolean predicate) notion of semantics, symbol usage becomes very unstable. As a recent example, we may consider a design repository proposal [3] for suggesting similar design solutions during the ideation stage. Users may search the repository by typing in keywords. However, the design repository fails to produce any results when the user searches with the term "tank" instead of the standard term "reservoir" [4]. One solution suggested for this problem is to arrive at a "standardized vocabulary" of design [3,5]. This proposes to create an enormous inventory of symbols, organized in a relational graph (called an ontology or a taxonomy), to fit all kinds of design problems. However, this is unlikely to be effective, since very few symbols mean exactly the same thing in all contexts.


Fig. 1. (a) Human symbols are a tight coupling between a label and a meaning or "image schema". (b) Formal computer symbols are just the term or label. How such symbols map to a design domain is not part of the theory.

Deprived of a mechanism for representing the semantic–pragmatic context, any attempt to convert user input into a standardized set of synonymous terms is unlikely to be effective. Also, as the system expands, the different humans working on it would introduce overlapping definitions and hidden conflicts. Instead, we argue for a grounded symbol learning system. The mechanism proposed works in three steps: (a) determining which aspects of a system deserve symbol-hood, (b) representing the semantics of these proto-concepts, and (c) learning labels for them through interaction with other users.

A close coupling between the semantics and the term is also reflected in a number of processes within design. For example, a number of studies on concept generation have demonstrated a role for verbal cues — e.g. contrastive words [6–8]. Clearly, the human creative processes work at the semantic level to trigger a complex web of inter-related concepts, enabling designers to come up with divergent ideas. These types of interactions have led to an argument for a semantic relation between design concepts and the language of design [9]. This cognitive view of symbols is close to our approach, and differs substantially from the formal computational theory which has been adapted for use in knowledge-based design [5,10]. Formal symbols are merely syntactic place-holders, to be operated upon based on rules of a formal grammar. Where an implementation or a demonstration has to be produced, the symbols are related to the variables of design using a sharply defined (true–false) function. The structure of this function is not a part of the theory but is defined by the programmer at demo time. The result is that quite often the system works for the demo problems, but when given a new problem, the rules turn out to require extensive re-working — a problem known as brittleness in the Artificial Intelligence literature [11].

Thus, any attempt to understand the language of design, whether from a theory-of-design perspective or from a computational (implementational) viewpoint, must carefully consider, from the very early stages, the semantics of its symbols. This is our main objective — to try to develop design symbols in terms of label–meaning associations, as in human symbols (Fig. 1). There are three main difficulties in doing this: (a) the semantics (or image schema) for a symbol, even in the simplest situations, is often not known even to the user of the symbol; (b) what meaning a symbol has is crucially dependent on context (as in the "tank" example above); and (c) meaning is dynamic — i.e. the semantic structures undergo continuous change, small and large, with experience. This last point highlights the utility (and difficulty) of symbols — unlike static formal systems, every time a symbol is used or observed in a phrase, the image schema must also be updated, in the sense that some associations are strengthened, weakened, created, or deleted.

Given these difficulties and the novel nature of this enterprise in the design context, our objective in this initial work is very limited. We attempt to discover label–meaning pairs for only the very first symbols as encountered by a system, and not symbols in the full dynamic sense understood in design. Here the initial image schema is represented in terms of variables from a problem domain. In this initial stage, we consider

the process whereby an image schema is discovered, and a label for it learned by associating the new concept with units from a language. During the initial stages considered here, the image schema emerges independently of the label, based on functional aspects of the task domain. Thus, meaning emerges first and language is mapped to it later, as is being increasingly asserted in infant cognition [12].

We note several differences from other attempts to model domain knowledge in CAD systems. First, our set of symbols (the lexicon) is not pre-determined but discovered. Discovery implies that the lexicon is more likely to cover the kinds of distinctions needed in practice. A concept such as tolerance may emerge because we find that in order to maintain tight fits, we need to hold each part to a narrow range of dimensions. Symbols are discovered when such patterns are repeatedly observed to be relevant to function. For example, an engineer may find it important to distinguish fits with very small clearances that allow guided motion (say, a running fit) from fits that can transmit high torques without slipping (a press fit) — concepts that emerge in the domain of mechanical assembly. For an electrical engineer designing a power socket, similar dimensional variations have a different symbolic expression. Thus, function plays a key role in determining the lexicon of symbols. We call our algorithm that follows this process and discovers symbols a Baby Designer. The name reflects the very elementary level of learning achieved by the system.

The attempt to discover potential symbols starts by first identifying regions of the design space containing many "good" solutions; good solutions are determined by how well the design is doing on a set of performance measures — the performance metrics.1 The regions with good designs will be paid more attention (they will be salient) — we call such regions the Functionally Feasible Regions (FFRs). If there is a pattern — an inter-relation between the variables — that holds over the FFR, then it is likely to become encoded as an image schema. These inter-relations, known as chunks in the cognitive literature, constitute information-preserving abstractions on the design space — i.e. they describe compactly many constraints that must hold between variables. For example, when a beam's width increases, its thickness is likely to increase as well, for otherwise the strength properties would be unbalanced. The two variables of width and thickness may be reduced to a one-dimensional chunk (a manifold) for the space of successful designs. These inter-relations are thought to result in structures in long-term memory called chunks, which are characteristic of the mental models of experts across many domains, from chess to physics to fire-fighting [13]. The more information-compacting the chunk, the more it relates "to underlying principles, rather than focusing on the surface features of problems" [14].

In this work, we propose that the initial image schemas for symbols are these chunks, which compactly capture the functionally relevant aspects of a good design. For example, a schema that recognizes tight fits may emerge from multiple insertion experiences, as that class of fits where the inserted object is more likely to wedge into the hole. It is only later that such a model of meaning may get attached to a label such as "tight". In this work, we shall use the notation of small capitals for the image schema or semantics (e.g. tight), the string in quotes for its label ("tight"), and square brackets for the symbol as a whole, [tight].

1 Note that the "performance metric" does not need to be a mathematical metric. At the very least, we require transitivity — i.e. if A is preferred over B, and B over C, then A should be preferred over C. Thus, any stable ranking would suffice. Given such a measure, the approach for determining regions of good functionality is outlined in Section 5.


1.1. Learning initial symbols

More complex symbols may not be directly, experientially grounded, but they must be based on other grounded symbols. Thus the very initial symbols must be grounded. We can see this by considering the example of an engineering professor trying to teach about press fits. Suppose (very improbably) that a student arrives who, given an assembly, is not able to distinguish tight fits from loose. The professor may feel exasperated with such a student, saying that he may never understand the concepts of fits and tolerances. This is because in learning concepts such as press fit, one implicitly assumes that everyone knows what tight is. Learning about tight and loose is a part of human development, arising in attempts to put various objects into the mouth, balls into receptacles, bodies through openings, etc. This rich body of knowledge forms a substratum on which proper "design" knowledge, such as the idea of fits, has to build. In this work, we focus merely on the process of learning these very early concepts, and not the developed concepts of an adult designer. This is why we call our system a "Baby Designer".

This process of discovering patterns involves what Schön has called reflecting on designs [15]. Instead of stopping once it has found the design that is most optimal in some sense, we suggest that CAD systems also reflect on the design landscape, identify the traits of the better designs, and, when possible, consolidate some of these traits into chunks or incipient symbols. The computational process of discovering such chunks is outlined in Section 6.

Once such patterns are known, one may identify a label with which the concept may be indexed. As with human learners, labels are discovered based on exposure to other language users. Again, we demonstrate this acquisition process using an unsupervised approach that makes minimal assumptions regarding the structure of the language considered. We have competent humans, who are aware of the conceptual space involved, talk about objects in the space in terms of certain distinctions. For example, they may be asked to describe situations with tight fits and then some with loose fits. Without using any knowledge of grammar or syntax, we show how such narratives may be used to acquire the labels for the chunks learned earlier (Section 7). At the end of this process, we have a label–meaning pair that constitutes the incipient symbol.

2. Conflicts in formal symbol systems

The "symbolic" approach to design pioneered in the work of Simon [16] proposes that logical symbol processing constitutes the scaffolding for cognition [17]. A related stream, with its roots in value engineering ideas from the 1940s, was formalized by Pahl and Beitz [18], followed by studies such as Welch and Dixon [19], leading to ontological models for design knowledge, such as the functional basis models [5,10,3] or [20]. Both these approaches view symbols as formal entities, and while the latter is very concerned with semantics, the semantic model merely defines a symbol in terms of other empty symbols. Such formal approaches to design have been criticized rather persuasively by Schön and others [15,21] as being antithetical to how designers actually think. Designer thought involves subconscious processes which the designer is often unable to articulate explicitly. A recent review attempting a reconciliation [22] calls for information structures to emerge based on design interactions; this is indeed the spirit in which our work is placed.

The fact that formal symbols are devoid of semantics is often overlooked in the design community. Yet the term "symbol" as used in formal computational theory, and in phrases such as "symbol processing", refers to a purely syntactic entity — merely


the label of a symbol, empty of any semantics. Formal symbols are just strings taken from a finite alphabet of such strings. One may construct a formal semantics for such symbols, by assigning "models" in terms of a predicate, or an "intension" that defines it in terms of other symbols (for an example in design, see [5]). As mentioned above, this often leads to brittle failures. If we may present an analogy, computer usage of symbols is similar to how a blind man may understand the symbol "red"; he knows that it is a "color" (whatever that may be), that "green" and "blue" are other colors, and maybe even that "crimson" and "vermilion" are shades of "red", but his understanding is dramatically different from that of a sighted person, because the semantics is not connected to direct experience — and he is aware of this limitation.

In order for symbol usage to be flexible, it has been argued that symbols must be grounded, i.e. defined in terms of elements that are not just other symbols [23,24]. The grounded view also implies that a symbol must be an abstraction from a wide range of experiences, and not a user-given definition. This correlates with the fact that symbols as used by designers, whether expressed in speech, algebraic notation, or sketch, are based on layers of experience, and can be interpreted very flexibly depending on the context. In computational design, on the other hand, symbols are defined independently of context, causing the brittleness mentioned above. Furthermore, user-given definitions can model the same domain using differing sets of distinctions that make the two representations very difficult to reconcile. A case in point is the pair of design function ontologies developed by groups at NIST [20] and at Austin [25]. These two groups, both starting from the Pahl/Beitz flow/signal ontology, proposed divergent sets of symbols which differ not only in the set of terms, but in the way they decompose the design universe [26]. In attempting to reconcile these two formalisms, Hirtz et al. remark how the process of determining a set of terms depends on a subjective view: "it is important to note that the categorizations used in the taxonomies are not unique, but are rather a matter of convenience. The organization of the taxonomy . . . is a particular instance of a view of the terminology it contains. In practice, it is impossible to have a vocabulary that allows all concepts to be modeled in only one unique way, because [of] the flexibility required. . . ."

Similar discrepancies in nomenclature also exist between different design groups, individual designers, and other design communities, and occasionally lead to considerable misunderstanding; however, by and large, communication and interaction between humans remain possible because the symbols they use are grounded, and can be understood via a shared experiential basis. This is why the few occasions of misunderstanding are so memorable. Owing to these tendencies toward divergence, computational models of symbolic knowledge face considerable difficulties in scaling up. Practical demonstrations of symbolic reasoning in design [27,28] use a small set of symbols and rules that are relevant to that domain. The reason for using a small set of symbols is not merely to reduce computational costs, but to tune the interpretation of the symbols to the particular design view being adopted. Such models do not scale up, which has led to the call for standardizing the vocabulary of design; but, as we have argued earlier, this requires a grounded symbol system to succeed.


Fig. 2. Discovering symbols: In the pattern discovery step, given a performance metric, the space of functionally feasible regions (FFRs), or good designs, is identified. Often, FFRs may lie along some low-dimensional manifold (R^d) embedded in the high-dimensional design space R^D (d ≪ D). In the second step, high-dimensional FFRs can be reduced to low-dimensional chunks that encode only the good designs. In the final step, a label is acquired for a chunk by listening in on adult communication, and the pair becomes an initial symbol.

3. The symbol discovery process

A grounded symbol is a label–meaning pair. But what is meaning, and how does it evolve in a domain? This complex question is perhaps the main reason why formal models bypass semantics and prefer to deal with the label alone.

Recent advances in cognitive science have provided many insights into the nature of concepts, which lie at the heart of semantics. Cognitive concepts are modeled in terms of an image schema, which is a representation of the concept in an extra-symbolic domain — often the space of sensorimotor experience. An image schema identifies entities that belong to the concept, and also defines its relations with other entities in the domain. Within these constraints, the term image schema has several differing interpretations [2,12,1,29]; here, we focus mainly on the model in [29], and consider a conceptual category or an image schema as a region in a high-dimensional space of sensorimotor experience. Its relations with other symbols are established through sensorimotor simulation [30] and not through logical models. This makes the structure much more flexible. Also, the image schema may be implicit — one may not be consciously aware of its precise boundaries. This holds even for experts, who may often differ on definitions; yet they rarely disagree on its usage in context.

Complex symbols at the level of an expert designer have complex image schemas, built up layer upon layer of symbolization, and it would be foolhardy to expect an agent to acquire such complex symbols straightaway. We have given earlier the example of a professor who may become exasperated while trying to teach tolerances to a student who does not know the difference between tight and loose. Thus, our view of a grounded symbol system is one that starts from the very earliest experiences, and builds up to more complex conceptual structures. Our Baby Designer approach is merely a start in this direction. The goals are very modest, and we attempt to learn only the very first, initial symbols, discovered based on regularity observed in the functional behavior of simple tasks such as insertion. The system is implemented as a computational simulation which explores different designs, evaluating them based on a notion of function that is pre-specified. This is an important difference from a normal design situation, where functions are not given, but emerge

in the process of investigating the problem; but the baby designer is clearly an apprentice.

We wish to distinguish two senses of "function" that are often conflated in discussions of formal models of function. The first views a set of functional needs as independent of embodiment (e.g. "provide illumination", what [31] calls "function"). The second considers how these functions are evaluated or compared (e.g. "the light intensity variable in the user's room must be above a certain lumens value" [32], similar to what Gero calls expected behavior). In order to distinguish these more clearly, we use the terms performative behaviors and performance metric for these two senses. The performative behaviors are the subset of an object's behavior which is of interest to the user (e.g. torque, illumination); they are general characteristics, without comparison or evaluation. When we say that two design classes have the same function, we are interpreting "function" as performative behavior. A performance metric πi evaluates the degree to which a design satisfies a given performative behavior, and is defined for a given type of embodiment. This reflects a significant commitment in the design, but it is typical of an apprentice situation. We consider a design to be functionally feasible if it satisfies some acceptability condition on πi (e.g. a range of acceptable strengths/light lumens) [33].

Given these evaluative behaviors, the Baby Designer can simulate new designs and evaluate their performance. Each design instance is a data point in the design space. As different parts of the design space are explored, the system can reflect on these and attempt to come up with patterns. If these patterns are stable, they may become the initial image schemas for our symbol discovery process. The overall symbol discovery then operates as follows (Fig. 2):

Step 1. Functional feasibility and pattern discovery. The system explores various instances in the design space, identifying the "good" designs in terms of performance metrics. The high-performing solutions constitute the Functionally Feasible Region (FFR). As this space is being explored, the system also tries to learn a pattern whereby future design instances can be classified as good or bad without explicitly evaluating the performance. This is done using general-purpose function approximation — we use a single-layer


perceptron [34]. Such a classifier, fe(), may be considered an emergent property of the design space (Section 5). This classifier simulates the process by which experts are immediately aware of the "good" designs.

Step 2. Low-dimensional semantic representation. The good designs often lie along low-dimensional manifolds embedded in the high-dimensional FFRs, reflecting implicit constraints among the design variables required for high performance. These manifolds can be discovered using dimensionality reduction techniques, both linear and nonlinear. Each dimension in the resulting low-dimensional models represents a chunk, or an inter-relation between the design variables, that must hold among "good" designs (Section 6).

Step 3. Discovering labels. Assuming the chunks are useful, human users may be aware of them (they may be "reified"), and humans may already have a label for them in communication between adult speakers. If so, this label may be learned by our agent merely by listening in on adult communication. Since the Baby Designer is aware of the concept (in this case, the distinction between tight and loose), she can compute the conditional probabilities of each word in this context, and maximize the relative importance of words in each context, to obtain a possible (initial) label (Section 7). An important aspect of this step is that we require no knowledge of syntax or linguistic structure; hence, in principle, this can apply to any language.
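Read computationally, Steps 1 and 2 form a small pipeline. The sketch below is our own minimal rendering of that pipeline, not the authors' implementation: the sampling range, learning rate, and function names are assumptions, and `performance_ok` stands in for a pre-specified performance metric. Step 3 is treated separately in Section 7.

```python
import numpy as np

def explore_and_classify(performance_ok, n=200, seed=0):
    """Step 1: sample the design space and learn a feasibility classifier fe.

    fe is a single-layer perceptron, as suggested in the text; designs are
    points (w, t) in a 2D design space.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 30.0, size=(n, 2))            # candidate designs (w, t)
    y = np.array([performance_ok(w, t) for w, t in X])
    Xb = np.c_[X, np.ones(n)]                          # append a bias input
    wts = np.zeros(3)
    for _ in range(200):                               # perceptron updates
        for x, target in zip(Xb, y):
            pred = 1.0 if wts @ x > 0 else 0.0
            wts += 0.01 * (float(target) - pred) * x
    return X[y], wts                                   # FFR samples, classifier fe

def discover_chunk(ffr_samples):
    """Step 2: PCA on the FFR; the dominant eigenvector is the emergent chunk."""
    centered = ffr_samples - ffr_samples.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    return eigvecs[:, np.argmax(eigvals)]
```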
Steps 1 and 2 relate to the acquisition of expertise in a domain [13]. Expertise may relate to how a baby learns to walk or talk, as much as it relates to adult design experts. In both situations, a complex input space is analyzed based on function, and a new vocabulary emerges for dealing with the data by mapping them into a lower-dimensional space of chunks. This entire process may be subconscious, and is related to what has been called "reflection" [15].

A key premise of the low-dimensional representation is that the FFRs can be encoded as lower-dimensional manifolds. This is a new claim in the design literature and perhaps requires some justification. If there is a single performance goal, then the variation of this single optimum in the input parameter space will typically trace a 1D manifold. In the more common situation with multiple (say k) performance metrics, the good solutions constitute a Pareto front, a (k − 1)-dimensional surface in the objective space. When the function measures that map from the design variable space to the objective space are well-behaved (which is not always the case), their Jacobians would be well-posed, and near neighbors in the objective space may correspond to near neighbors in the design space. If this holds, we may expect the designs to lie along a (k − 1)-dimensional surface (or "manifold") in the design space (shown as a folded patch in Fig. 2). In practice, the mappings are rarely well-behaved, and no formal results can be claimed for the dimensionality of the manifold, but in many situations substantial dimensionality reduction is observed [35].

We have seen the example of beam design earlier. As a more complex example, consider a digital camera. In terms of all its components and assembly processes, it may have several hundred design variables, but only a handful of functions that are of user interest. Based on these functional measures, the good designs will arise only in a small region of the huge design space — the FFRs. An expert camera designer eventually comes to recognize these regions, often implicitly, so that the design ideas that immediately suggest themselves are all quite good [36]. This is what we call the FFR pattern learning in Step 1 above. Also, although the design is defined in terms of a hundred variables, for the class of "good designs" there are often many inter-relations between the input variables — e.g. lens diameter and the maximum film speed may be found to be correlated. Owing to such


inter-relations, the space of good solutions has far fewer degrees of freedom than the initial space. Each dimension in this manifold space is clearly an information-compacting representation. This is similar to results based on Kolmogorov complexity — any learning process results in the learned patterns having a smaller description length [37]. A consequence of this approach is that symbols would now reflect the appropriate level of detail for making functional distinctions in a given domain. This process is captured in Step 2 above.

The computational symbols that emerge in this process have four important attributes which one expects of cognitive symbols, and which are not present in formal symbols. We outline these below.

(a) Grounding. The meanings of terms used in design are to be grounded in experience, as opposed to being defined in terms of other symbols. The model for a semantic category is also not boolean — more frequently seen instances would be more prototypical; rare instances would not evoke the symbol as strongly. This is the sense in which cognitive symbols are graded.

(b) Dynamism. The term–meaning relation is not static, but keeps changing with experience. An important corollary is the notion that semantics is historical — i.e. its present meaning is determined by continuously updating layers of experience [38,39]. This makes it important also to capture the dynamics through which a complex symbol is acquired by the human designer, and particularly the earliest symbols, as we attempt here.

(c) Information density. The patterns encoded in symbols are stable and reflect functionally relevant distinctions. Discovering these regularities results in a lower-dimensional description. This ensures that the vocabulary acquired is capable of expressing a concept compactly.

(d) Functional saliency. Not all frequent patterns are relevant. Those that result in a functional difference are more salient and more likely to be focused on (reified) and finally symbolized.

We also note that the label-learning process is significantly different from existing formalisms for capturing design knowledge, where the name of the symbol is assigned manually, leading to the possibility of term mismatch and semantic overlap due to human variability. The proposed approach avoids these problems by allowing different symbols to overlap or even be synonyms — the mismatches can be detected by comparing the image schemas at the semantic level. To keep the learning scalable, we do not constrain the user narratives from which the label is learned, nor do we require syntactic knowledge of the language being learned, so that the approach may carry over to new situations and to other languages. We take our input in the form of text, and analyze each word for co-occurrence with occurrences of the concept.

We next review the approaches in the literature for each of the three steps in the symbol discovery process.

4. Related work: discovering patterns and learning labels

The three steps proposed above draw on differing disciplines. Chunks are related to the nature of design expertise; image schemas were largely developed in cognitive science, and dimensionality reduction in machine learning, while grounded language learning is a problem that has been addressed in AI. Clearly there is a diverse range of literature addressing these topics, and to our knowledge this is the first attempt to unify these domains. This work presents innovations in each of these aspects, which are discussed below.

4.1. Acquiring expertise and discovering chunks

We have seen how an expert designer is "confident of immediately choosing a good [design] based on experience" [36].


But how can we formalize the process by which an expert acquires this ability? Traditionally, knowledge-based systems have considered explicitly discovering patterns in a design space based on design knowledge ontologies. While some of these systems discover patterns that could become chunks, they do not consider the symbol emergence process. In the Kritik system [40], given a set of performative behaviors and a binary (true–false) evaluation, the system uses a body of hand-coded domain rules to come up with possible configurations that may exhibit this behavior. Moss et al. [27] present an attempt to learn good designs. Here performance metrics are pre-specified and multiple design objectives are combined into a single evaluation, which is then compared by an M-agent to determine which are the better designs. A configuration generator producing better designs will have a higher probability of suggesting designs, resulting in a notion similar to the learning of FFRs in this work. If the model were to consider learning a lower-dimensional map, it might result in structures similar to chunks. However, the input is in terms of user-defined symbols and rules, which makes it hard to scale up such an approach.

Design problem reformulation is also the focus of [41], which provides a significant step in permitting designers to identify patterns, by finding the key variables to retain as design variables (as opposed to relegating them to dependent parameters) in different function and constraint expressions. This relates to the information compaction step in our work; but while it uses language as a metaphor, the model does not attempt to relate its structures to terms in language. Another approach that considers discovering FFRs can be seen in [42]. Here the results of multi-objective optimization are analyzed manually to propose possible inter-relationships between design variables necessary for "good design". Our approach automates this process for both linear and nonlinear relationships, though only linear examples are considered here. In the lower-dimensional space, the emergent "dimensions" are proposed as "chunks" that are the putative basis for symbol discovery. While these proposals approach different aspects of the problem, none of them attempt to learn the semantics underlying the symbol in a situated manner.

Fig. 3. Peg-in-hole assembly and wedging: First, the Baby Designer learns that w must be greater than t for successful insertion. Next, it explores wedging, which occurs at angles greater than θ = (w − t)/(wµ), when opposing compressive contact forces can lock the peg in the hole [48] (µ = coefficient of friction).

4.2. Image schema and dimensionality reduction

Image schemas have been related to multi-dimensional spaces in the work of Gärdenfors [29]. However, while Gärdenfors makes a claim for the regions representing image schemas being convex, he does not extend this to manifold discovery and dimensionality reduction. Many aspects of image schemas are dealt with in [1], but the analyses are mostly psychological and do not address the question of how they may be formalized in a computational system. Our approach for formalizing image schemas is based on the novel proposal that chunks, as lower-dimensional manifolds, capture functionally important patterns in the design space. The discovery of lower-dimensional manifolds in high-dimensional data is an important branch of machine learning. If the inter-relations among the input variables are linear, one may use Principal Component Analysis (PCA). For nonlinear manifolds (imagine the spiral layer of jelly in a swiss roll), one may consider a manifold learning technique such as Locally Linear Embedding [43]. Whatever its local nature, discovering such a lower-dimensional manifold is a form of abstraction that relates the relevant aspects of the initial design space [44]. In learning the initial image schema tight, we use PCA to find a linear one-dimensional manifold in a 2D space.

4.3. Learning word mappings

Learning language labels builds on a separate body of work in Artificial Intelligence, where the attempt is to build systems that learn units of language based on exposure to fragments of language. The pioneering work of Steels [45] considers artificial languages in an attempt to model the very genesis of speech sounds as meaning-bearing units. Others have considered recognizing labels from phrases that refer to single objects [46], or have used elaborate hand-coded models for the semantics [47]. In contrast to these earlier models, the semantics for our approach is not hand-coded or defined in terms of single objects, but is abstracted from a design space based on functional considerations, in an unsupervised manner. Once these conceptual underpinnings are available, we use techniques similar to the earlier work to associate them with words. However, here too, we use unconstrained natural language narratives generated by human subjects, while earlier work often uses constrained or artificial languages.

5. Learning concepts: insertion

To start our consideration of discovering patterns in the design space, let us first consider the task of inserting an object into a hole. Here the Baby Designer is given a round peg of thickness (diameter) t, which it has to insert into a hole of width w. A very early discovery for human infants is that the inserted object must be narrower than the hole, i.e. the hole-width w must be greater than the inserted object's thickness t (Fig. 3). For the human designer, this conceptual primitive lies at the heart of the semantics for many symbols relating to containment, assembly, dimensioning, fit, etc. These design symbols are layered on top of our earliest conceptual achievements, and here we start to discover those very earliest stages. Also, note that we start initially with simple circular shapes for the peg and hole; as the symbol generalizes to other situations (which can now be related based on the linguistic label as well as intrinsic similarity), we may consider other kinds of shapes and situations, and eventually also abstractions.

We assess this learning through a simulation experiment, where we consider the learned FFR pattern after 10, 20, and 100 insertion instances (Fig. 4). Considering the figures qualitatively, we observe how, after experiencing just a few instances, the pattern is inchoate, so the Baby keeps trying to insert some fatter pegs into small holes; but this exploration itself keeps filling up the negative (black) area of the figure.


Fig. 4. Learning through experience that the inserted object must be smaller than the container (w > t). Feasible solutions in the (w, t)-space are marked "+" and failure regions as squares. (a, b) In the very early stages of exploration by two different designers on the same 10 instances, the learned pattern is unsure of which t can go into which w. (c) After 20 instances, the pattern w > t is beginning to emerge, and (d) it is quite clear after 100 instances.
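The experiment of Fig. 4 can be re-created schematically in a few lines. The sketch below is our own minimal reconstruction (the sampling range, learning rate, and epoch count are assumptions, not the published setup): it trains a single-layer perceptron on 10, 20, and 100 random insertion trials and reports how the learned boundary should approach w = t.

```python
import numpy as np

def insertion_trials(n, seed=0):
    """Random (w, t) insertion trials; success iff the peg fits, i.e. w > t."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(1.0, 25.0, size=(n, 2))   # columns: hole width w, peg thickness t
    return X, X[:, 0] > X[:, 1]

def train_perceptron(X, y, epochs=500, lr=0.05):
    """Single-layer perceptron; learns weights (a, b, c) of a*w + b*t + c > 0."""
    Xb = np.c_[X, np.ones(len(X))]
    wts = np.zeros(3)
    for _ in range(epochs):
        for x, target in zip(Xb, y):
            pred = 1.0 if wts @ x > 0 else 0.0
            wts += lr * (float(target) - pred) * x
    return wts

for n in (10, 20, 100):
    X, y = insertion_trials(n)
    a, b, c = train_perceptron(X, y)
    # Decision boundary: t = (-a/b) * w - c/b. With more instances the slope
    # should tend toward 1 and the intercept toward 0, i.e. the line w = t.
    print(n, "instances: slope", -a / b, "intercept", -c / b)
```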

After a sufficiently large number of examples, irrespective of the variation in the samples, the error in the learned function (how frequently a fat peg is thought to be insertible, or an insertible peg is misclassified) reduces drastically, converging asymptotically to the line w = t. As this defining boundary becomes sharper, the system may be said to know the principle, at least implicitly. A simple application of dimensionality reduction (in this case, linear) will result in a more precise formulation, which may then be reified as an explicit rule: that w must be greater than t for successful insertion.

Variation and convergence among different agents

A question that arises for the grounded-learning approach is that if different agents are exposed to differing inputs, their learning may be quite different, and thus the semantic poles of their symbols may differ, owing to variation in their genetic makeup, past learning, or other predispositions, and in their particular experiential trajectories. However, we observe that humans within the same cultural group are able to communicate effectively, indicating some degree of coherence in their semantics. Here we show how such coherence may arise, through our simulation of this simple example.

We consider this question by conducting two simulation experiments. In the first, we consider individuals who have different mental predispositions (e.g. genetic variations), and are exposed to the same randomly generated sequence of trials. This is done by having a different set of initial conditions (network weights) in the perceptron learner, but exposing these, over repeated trials, to the same sequence of instances (Fig. 5(a)). In the second run, we consider individuals who are exposed to different random trials. Here each instance (failure or success) is generated randomly, and each trial is therefore different (Fig. 5(b)). In each case, we perform both simulation experiments with 20 trial runs each, for 10, 20, 50, 100, and 500 instances.

We consider the error in the learned function fe as the misclassified fraction — i.e. those design instances that are actually infeasible but show up as accepted in the learned function (False Positives, FP), and those that are feasible but are rejected (False Negatives, FN). True Positives (TP) are correctly accepted design instances, and True Negatives (TN) are the ones correctly rejected by the learned function. The error rate is the number of errors (FP + FN) divided by the number of instances (FP + TP + FN + TN). In both experiments, this is observed to reduce with the number of instances.

We observe that even for the same data, with little experience, different individuals with different prior histories or genetic makeups may induce learned (emergent) patterns fe that are quite different (e.g. the 10-sample data in Fig. 4(a) and (b) are identical). This may be why experts can arrive at differing conclusions after seeing very little data. However, we note that along with the reduction of error, the standard deviation across iterations also decreases. Thus, with just 10 trials, the s.d. is observed to be 0.105, but this reduces to 0.006 and 0.002 after 100 and 500 trials respectively (Fig. 5). As expected, the standard deviation among designers is somewhat higher when they experience differing instances. For an individual executing the task repeatedly, this gives the system a measure of confidence in its function.

A significant cognitive ramification of this process is that it illustrates the convergence among mental models of different agents, as in design teams [49]. Note that different learners are exposed to different sets of insertion situations, or start with different initial assumptions, so that they vary in the conclusions they draw (often implicitly) from the initial data. With increasing exposure, the variation between trials reduces, and different agents, exposed to differing inputs, are likely to converge on a function that is increasingly precise.
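The error measure and the two variation experiments can be sketched as follows. This is a schematic re-creation with assumed ranges and a generic `train` function (assumed to return a predicate, e.g. built from the perceptron sketch above); the published s.d. figures come from the authors' own runs.

```python
import numpy as np

def error_rate(predict, X, y_true):
    """(FP + FN) / (FP + TP + FN + TN), i.e. the misclassified fraction."""
    y_pred = np.array([predict(x) for x in X])
    return np.mean(y_pred != y_true)

def variation_experiment(train, n_instances, n_designers=20, same_data=True):
    """Mean and s.d. of error across 'designers' who differ in initial weights
    (same_data=True) or in the instances they experience (same_data=False)."""
    rng = np.random.default_rng(42)
    shared = rng.uniform(1.0, 25.0, size=(n_instances, 2))
    X_test = rng.uniform(1.0, 25.0, size=(1000, 2))
    y_test = X_test[:, 0] > X_test[:, 1]      # ground truth: w > t
    errors = []
    for d in range(n_designers):
        X = shared if same_data else rng.uniform(1.0, 25.0, size=(n_instances, 2))
        y = X[:, 0] > X[:, 1]
        predict = train(X, y, seed=d)         # designer-specific predisposition
        errors.append(error_rate(predict, X_test, y_test))
    return np.mean(errors), np.std(errors)    # s.d. shrinks as n_instances grows
```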


Fig. 5. Quality of learning: The quality of learning improves with the number of experiences. Different designers are exposed to (a) the same randomly generated design instances, and (b) different randomly generated design instances. The vertical lines represent the standard deviation over 20 trials.

Thus, all mature agents may be thought to have converged on the same abstraction of the containment task — the rule that the hole-width w must be greater than the peg-thickness t. While this is exhibited here for an elementary task, similarities of this nature emerge in all task domains that are defined precisely enough to have a regularized, systematic behavior. In all such situations, after sufficient experience in a particular domain, designers come to acquire similar abstractions for the core behaviors of the system, as is often observed for members of the same design practice. However, in the margins, or in areas where there is not enough exposure, the possibility of non-convergent semantics can always arise.

Of course, part of the design challenge is that of arriving at such precise definitions. However, our contention is that in order to be able to abstract across different precise problems, the system must first be able to handle abstractions from each specific situation, or each routine design. Thus, at the Baby stage, our challenge is to learn at least under these regular situations, so that the greater abstraction across different situations (e.g. choosing the right embodiment for a design) can be addressed at a later stage. We cannot hope that our computers will become expert designers without going through some of the inept stages first.

6. Dimensionality reduction and chunking: tight vs. loose

We have demonstrated, in the context of an insertion task, how different agents are likely to converge to similar models of "good" solutions after sufficient exposure to a problem. Now we broaden the problem to that of learning fits and clearances, by considering the distinction between tight and loose. This distinction is not told to our learner, but is induced given the same functional need for successful insertion. Instead of learning from successes, however, here our Baby learns from failures. A second attribute of this learning, which we shall now demonstrate, is dimensionality reduction — how the problem description length can be reduced so long as we are considering only the "good" solutions. While linear dimensionality reduction has long been known via techniques such as PCA and MDS [50], lower-dimensional data is often encoded as nonlinear manifolds. Recent years have seen tremendous interest in nonlinear manifold learning [51], which discovers latent inter-relations in machine learning applications with voluminous data, and in computer vision. In design, nonlinear manifold discovery holds immense potential in situations with many design variables, but this problem is yet to gain much attention [35,44]. In this work, however, we limit ourselves to a simpler situation where linear manifold discovery proves sufficient. Each dimension in this low-dimensional space is a "chunk", or an abstraction that incorporates many inter-relations among the initial design variables. After learning these, we shall take these chunks as the initial image schemas, encoded as the semantic poles of the initial symbols.

6.1. Tight and loose cases in containment

Even in the w > t situation, the insertion task may sometimes fail when the peg gets "wedged" into the hole, at which point even very high insertion forces will not succeed in pushing it in (of course, a small wiggling motion or compliance may release the peg, but our Baby is yet to learn this). Wedging results in failure of insertion, and is initially undesirable for the baby, which must learn to avoid it; but eventually the baby may also learn the value of tight fits — they may be useful in grasping, or in extended manipulation.

Wedging occurs when the contact forces between peg and hole set up compressive forces within the peg, locking the peg in the hole. This will occur when the ratio l/w is less than the coefficient of friction (µ) under two-point contact (Fig. 3). The angle above which wedging occurs for a given hole clearance is given as θ = (w − t)/(wµ) [48], where µ is the coefficient of friction; the angle of insertion θ and the depth of insertion l are task variables which vary for different insertions. µ is assumed to be 0.5 for wood–wood surfaces. Whether wedging occurs is quantified over the task variables θ and l, in order to evaluate the design.

For a given µ then, we can consider some bounds on the acceptable θ range that would be designated a tight fit. These may be determined by the Baby Designer based on repeated trials, or may be given by a master designer under whom the Baby is serving as an apprentice. In this case, we take the range for the tight situation as (0° < θ < 11°), and for loose, (17° < θ < 20°). The range for tight reflects approximately one-eighth of the right-angle quadrant, and the loose region is just a higher band. Exploration in the design space results in two different functionally feasible regions (FFRs) corresponding to these two feasibility criteria, learned on the (w, t) space (Fig. 6). At this point, one has the FFR (gray region) for a single design situation. We may use a generalized function learner (e.g. a multi-layer perceptron) to determine whether a new design instance (w, t) belongs to the FFR for tight or not. This is the pattern learning step in Fig. 2.

6.2. FFRs as lower-dimensional manifolds

However, one may achieve much more by reflecting on the nature of the FFR so discovered. In order to obtain an abstraction for the process, we can attempt to discover a structure among the input variables for designs satisfying this functional requirement. This is expected to correspond to a lower-dimensional surface or manifold in the high-dimensional design space. Where the surface is linear, such manifolds may be discovered by linear dimensionality reduction (e.g. principal components analysis [34]) on this space of "good designs". The resulting lower-dimensional space will reveal the underlying chunks in the design space.
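The FFR samples that feed this reduction come from the wedging criterion of Section 6.1, which translates directly into code. A minimal sketch follows: the θ bands and µ = 0.5 are quoted from the text, while reading the small-angle formula as yielding radians, and the function names, are our own assumptions.

```python
import math

MU = 0.5  # coefficient of friction, wood on wood

def wedging_angle_deg(w, t, mu=MU):
    """Tilt angle above which a peg of thickness t wedges in a hole of width w,
    from theta = (w - t) / (w * mu); the small-angle result is taken as radians."""
    return math.degrees((w - t) / (w * mu))

def fit_chunk(w, t):
    """Assign a (w, t) design to the tight or loose FFR by its wedging angle."""
    theta = wedging_angle_deg(w, t)
    if 0.0 < theta < 11.0:
        return "CT"      # tight: wedging likely over much of the insertion range
    if 17.0 < theta < 20.0:
        return "CL"      # loose: wedging only at extreme tilts
    return None          # outside both functionally feasible regions

# e.g. peg 1 (about 22.4 mm) in hole A (about 23 mm) from Table 1:
print(fit_chunk(23.0, 22.4))   # small clearance -> low theta -> "CT"
```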


Fig. 6. Emergence of chunks for fit — "tight" vs. "loose": Situations where t is close to w may cause wedging, which can occur when the insertion angle is tilted more than θ = (w − t)/(wµ). Tight situations are those where wedging is more likely (θ is low). Using this as a performance metric, we can learn the Functionally Feasible Regions (FFRs) for both tight and loose. The FFRs (gray) learned after 20 instances (a, c) are poorer than after 100 instances (b, d). On reflective analysis, we find that these FFRs are well approximated as lower-dimensional (1D) linear manifolds (e–h, bottom row). Using PCA, we discover the principal eigenvector to be dominant, and the number of parameters reduces from (w, t) in 2D to an emergent 1D chunk representing invariance in w − t. This process results in two chunks, CT and CL — but we do not know that these are called "tight" or "loose" yet.

Such manifold discovery techniques average out considerable internal variation in the surface, and (like integration) reduce the noise in the data considerably. Thus, for linear manifolds, even on a small initial sample set (20 instances), the PCA results already point to a clear dominance of the first eigenvector over the second (the eigenvalues are 27.92 and 0.3043). Thus, the variation along the first eigenvector is expected to be about 90 times that along the second — i.e. the data (the good designs) are expected to lie along the first eigenvector, e1 = (0.7390, 0.6737), which is already hinting at a 47° axis for the FFR (Fig. 6(a), (e)). With increasing exposure, this pattern is consolidated, and we observe after 100 instances that the eigenvalues are now 29.78 and 0.1512 (the first axis is about 180 times as dominant). Thus, the data is overwhelmingly spread along a line with the orientation of the first eigenvector, (−0.7239, −0.6900) — i.e. the concept of a tight fit lies along a 43° line in the (w, t) space (as shown in Fig. 6(b), (f)). This axis is a one-dimensional abstraction of the 2D (w, t) input space and characterizes the region along which the tight instances lie. This compact representation constitutes the chunk CT, which will form the very first, initial image schema for the symbol tight.

Similarly, for the loose situations, where θ is more forgiving, with the same 20 instances we obtain the zone in the (w, t) space shown in Fig. 6(g), resulting in a one-dimensional characterization after PCA. Interestingly, the eigenvector here is (0.8950, 0.4462), and hence the loose fit in this case lies along a 26°

line in the (w, t) space, with the data shifted away from the w = t boundary. With the same 100 instances, the loose fit now lies along a 34° line in the (w, t) space; this constitutes our initial chunk for the symbol loose, CL. We note that both these models are graded, for as instances fall off toward one side or the other of this line, their membership in the corresponding class is weaker. We also note that the invariant along either line is the quantity w − t, different values of which become the learned chunks for tight and loose respectively. The quantity w − t eventually forms part of the semantics for the symbol clearance, which is a generalization over different degrees of fit.

From a machine learning perspective, these achievements are rather elementary. Our objective in presenting them is merely to emphasize the role of even the earliest knowledge in many advanced design situations. For humans, these concepts are also among our earliest knowledge achievements; typically, infants learn containment (peg-in-hole) by about 3 months, and tight vs. loose by 5 months [52]. Many cognitive scientists believe that our concepts of abstraction, including the is-a relation crucial to constructing hierarchies, are metaphorical extensions of containment [53]. In particular, the tight–loose distinction forms the basis for many important engineering concepts in fits and tolerances.

More significant than this specific case is how this example illustrates a process of discovering correlations between input variables.
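The PCA computation behind the eigenvalue figures above takes only a few lines. The sketch below generates its own stand-in tight-band samples using the wedging criterion of Section 6.1 (the sampling ranges are our assumption, so the numbers will not match the published ones exactly) and recovers the dominant eigenvector and its angle in the (w, t) plane:

```python
import numpy as np

MU = 0.5

def sample_tight_ffr(n=100, seed=0):
    """Rejection-sample (w, t) pairs whose wedging angle falls in the tight band."""
    rng = np.random.default_rng(seed)
    samples = []
    while len(samples) < n:
        w = rng.uniform(5.0, 25.0)
        t = rng.uniform(1.0, w)
        theta = np.degrees((w - t) / (w * MU))
        if 0.0 < theta < 11.0:
            samples.append((w, t))
    return np.array(samples)

def chunk_axis(ffr):
    """PCA: eigenvalues (descending) and the angle of the principal eigenvector."""
    centered = ffr - ffr.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    e1 = eigvecs[:, -1]                                  # dominant direction
    angle = np.degrees(np.arctan2(e1[1], e1[0])) % 180.0
    return eigvals[::-1], angle

eigvals, angle = chunk_axis(sample_tight_ffr())
# Expect a strongly dominant first eigenvalue and an axis near 45 degrees,
# i.e. the invariant w - t that defines the chunk CT.
print("eigenvalue ratio:", eigvals[0] / eigvals[1], "axis angle:", angle)
```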


Table 1
A–C are the blocks containing holes and 1–6 are the pegs. All units are in mm. w1 . . . w4 and t1 . . . t4 are the measurements at various points on the hole and the peg respectively.

Hole   w1      w2      w3      w4
A      23.15   22.9    22.8    23.0
B      17.25   17.2    17.15   17.1
C      12.5    12.9    12.7    12.7

Peg    t1      t2      t3      t4
1      22.25   22.45   22.2    22.6
2      19.75   21.6    20.5    21.4
3      17.2    17.1    16.9    16.8
4      14.75   14.4    14.6    14.7
5      12.4    12.2    12.4    12.25
6      10.5    10.95   10.8    10.7

Such correlations are extremely common across all kinds of problem domains. For example, if strength is to be maximized while conserving material, then different components should be balanced in strength, and one quickly realizes that many dimensions appear to rise (or fall) in tandem. Discovering this type of interdependence is thus a first step toward creating semantically rich models of design. In terms of symbol discovery, the communicative agent is interested in using a small, finite number of symbols to address salient issues in a wide range of situations; hence modeling these interdependent variables using a single symbol is a natural step. Thus, symbols in human discourse are often aligned to such discovered chunks.

However, not all chunks become immediately useful as symbols in a design context. Chunks are just useful patterns; among human designers, these are sometimes understood subconsciously, resulting in implicit relations that are justified vaguely (e.g. "looks right" [54]). If similar chunks are observed repeatedly, especially in different domains, one may become conscious of the pattern — the process known as reification [12] (Fig. 2). Subsequently, if a label is associated with this pattern, it becomes a true symbol, with both a semantics (the chunk learned earlier) and a phonological label. We assume this is what happens with the tight and loose chunks discovered above, and illustrate the process of discovering their labels next.

7. Learning labels: language mapping

At this stage, our Baby Designer has an implicit notion of the categories [tight] and [loose], in terms of the low-dimensional chunks CT and CL. While we know the relation of these categories to the (w, t) space (they are grounded), we cannot relate these ideas to other notions, because there is no indexing mechanism with which one may refer to them. The availability of such an index or label is a crucial step — without it, the chunk or image schema that has been learned cannot be related to a broad host of other concepts. Also, implicit rules in which these chunks play a role cannot be used explicitly to justify decisions, to reason about the effects of decisions, or to communicate ideas to others. Once a label is learned, these pre-linguistic chunks will become the initial image schemas for the symbols thus formed. With subsequent usage of the term, this image schema will be accessed and enriched, and will accumulate many other fine details.

For learning the labels, the system needs to interact with others who already have a label for the concept. We do this by exposing the system to human commentary on the same distinctions that were learned above. In this sense, this is similar to a baby overhearing, or listening in on, adult discourse about a situation. To facilitate this process, we constructed a simple apparatus — several flat pieces of wood with holes, and some cylindrical pegs that fit in these holes with different types of clearance (Table 1). The objective was to give different combinations of these to human subjects and have them describe their experience with them in unconstrained English. Then we would associate individual words

Then we would associate individual words appearing in these descriptions with the concepts, and see whether any good labels would emerge.

For the two contexts of tight vs. loose fits, we associate the words of the narrative (given as word-separated text) with the category distinction already available to the Baby Designer. Some knowledge of phonology (word-boundary separation) is assumed. We present two association schemes: one presuming no knowledge of morphology either, and the other based on stemming, which assumes that the root unit is distinguishable under different morphological forms. In all cases, no knowledge of syntax is assumed, and yet we show that relevant words from the language input are easily picked up to match the concepts.

To summarize, the above process presents an alternative to human-defined models of symbols for use in design reasoning. We demonstrate a process whereby the set of relevant semantic distinctions is learned by focusing on functional differences. These categories may become reified if the category models are mappable in terms of lower-dimensional manifolds. Finally, we match these proto-concepts with linguistic terms by unsupervised association with word-separated linguistic streams. In this last step, we also demonstrate the relevance of profiling in constructing the {label, image schema} mapping that is a symbol. While we develop this idea through a very limited demonstration, the bottom-up nature of the work, and its minimal use of domain-knowledge priors, makes it eminently scalable. Of course, a considerable amount of experience is needed, as well as a large number of adult human descriptions of the task; yet this requirement is very small compared to the total experience and language interaction of a human infant. The simulation process may be used to investigate the symbolization process as a whole; executed over a much larger corpus of experiential data, it may yield the set of symbols that are functionally relevant for addressing a given problem. Further, where the same symbol arises in different contexts or viewpoints, the instances would be bound by the context, and also by the profiling adopted.

7.1. Associating labels

We now outline the process used to discover a linguistic label for the initial image schema, which is nothing but the chunks CT and CL that have been learned. The association of a word ωi with a chunk Cj can be measured in many ways; the machine translation literature posits many association measures. One of the simplest is based on conditional probability, but here the issue of the direction of association arises: should one consider the conditional of ω given C, or of C given ω? We have an estimate for the conditional probability of ω given C, p(ω/C), as the ratio of the number of times ω occurred in the narratives for C to the total number of words in that corpus. For the conditional probability of C given ω we may adopt Bayes' rule, which gives us

p(C/ω) = p(ω/C) p(C) / p(ω).

Since there are only two concepts to be discriminated in this instance, the problem simplifies considerably; we see that

p(CL/ω) / p(CT/ω) = [p(ω/CL) p(CL)] / [p(ω/CT) p(CT)].

In our case, the number of instances of CL and CT in the training data is roughly the same, so p(CL) and p(CT) are roughly equal, and the ratio of the two posteriors reduces to the ratio of the two likelihoods. Thus, if we find the (ω, C) pair that maximizes this ratio, either conditional ratio is maximized.

Our objective, then, is to compute max over i of p(ωi/CL) / p(ωi/CT), so that ωi has the strongest association with CL; the inverse ratio is maximized for the strongest association with CT.
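Computationally, this reduces to word counting. The following is a minimal sketch in Python, assuming only that the combined narratives are available as word-separated strings; the function names and the smoothing constant are our illustrative choices, not part of the system described above. For ‘‘tight’’ in Table 3 (below), for instance, the ratio works out to 0.02366/0.00331 ≈ 7.1.

```python
from collections import Counter

def conditionals(text):
    """Estimate p(w/C) for every word in a concept's combined narrative
    as k/n: the word's count over the narrative's total word count."""
    words = text.lower().split()   # word-boundary separation is assumed
    n = len(words)
    return {w: k / n for w, k in Counter(words).items()}

def association_ranking(tight_text, loose_text, smooth=1e-6):
    """Rank words by Theta_T = p(w/C_T) / p(w/C_L); the inverse ratio
    ranks candidates for [loose]. The smoothing constant is our stand-in
    for words absent from one corpus (the experiments below instead
    restrict attention to frequent words present in both corpora)."""
    p_t, p_l = conditionals(tight_text), conditionals(loose_text)
    vocab = set(p_t) | set(p_l)
    theta = {w: p_t.get(w, smooth) / p_l.get(w, smooth) for w in vocab}
    return sorted(theta.items(), key=lambda wv: wv[1], reverse=True)

# Toy stand-ins for the combined narratives of each concept:
tight = "this peg is very tight and it cannot rotate at all"
loose = "this peg is very loose and it can move easily"
print(association_ranking(tight, loose)[:3])  # 'tight', 'cannot', 'rotate' rise to the top
```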

Fig. 7. Peg-in-hole assembly: (a) peg and hole; (b) peg-in-hole assembly. Subjects describe their experience while playing with different assembled peg-hole combinations.

Fig. 8. Apparatus peg and hole sizes. Blocks with holes A, B and C and pegs 1–6 are designed so that assemblies (A:1), (B:3) and (C:5) are tight and (A:2), (B:4) and (C:6) are loose (according to functional criteria learned earlier, Fig. 6). The shapes are rough; four measurements at various points are reported in Table 1.
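As a sanity check, the grouping stated in Fig. 8 can be reproduced from the Table 1 dimensions. The sketch below assumes, purely for illustration, that the learned chunks collapse to a band of the mean clearance w − t with a hand-set 0.8 mm cut-off; the actual chunks CT and CL were learned from functional criteria, not thresholded by hand.

```python
# Hole widths w and peg widths t (mm), copied from Table 1.
holes = {"A": [23.15, 22.9, 22.8, 23.0],
         "B": [17.25, 17.2, 17.15, 17.1],
         "C": [12.5, 12.9, 12.7, 12.7]}
pegs = {1: [22.25, 22.45, 22.2, 22.6],
        2: [19.75, 21.6, 20.5, 21.4],
        3: [17.2, 17.1, 16.9, 16.8],
        4: [14.75, 14.4, 14.6, 14.7],
        5: [12.4, 12.2, 12.4, 12.25],
        6: [10.5, 10.95, 10.8, 10.7]}

def mean(xs):
    return sum(xs) / len(xs)

def label(hole, peg, cutoff=0.8):
    """Classify an assembly by its mean clearance c = w - t; the 0.8 mm
    cut-off is a hand-set stand-in for the learned tight/loose chunk."""
    clearance = mean(holes[hole]) - mean(pegs[peg])
    return "tight" if clearance < cutoff else "loose"

for h, p in [("A", 1), ("A", 2), ("B", 3), ("B", 4), ("C", 5), ("C", 6)]:
    print(h, p, label(h, p))   # reproduces the grouping stated in Fig. 8
```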

For this association task, the user narratives are first transcribed as written text to simplify the problem. We combine all the narratives relating to the tight fit situations, and likewise those relating to the loose fit situations. Let nT and nL be the number of words in these two combined narratives, and let a particular word ωi have counts kT and kL in each. Based on this, one may estimate the conditional probability p(ωi/CT) as kT/nT, and similarly for the loose situation, and then compute the ratio. The results are reported in Tables 3 and 4.

We also observed that users used many morphological variations on some of the words, e.g. ‘‘tighter’’ (3 times), ‘‘tightly’’ (4 times), etc. In Natural Language Processing (NLP), stemming [55] is often used to identify the roots of such words; however, we found that even without stemming the correlations were quite strong. The top five ratios for both cases are presented below. Another step frequently adopted in NLP is to remove extremely frequent words, particularly particles and grammatical markers such as the articles the, a, an, prepositions such as of, in, to, and common verbs like is, am, etc. These are known as stop words. Again, we did not find it necessary to remove these words, and we feel that there is no need to impose any knowledge of language on the word association process.
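The stemmed rows of Tables 3 and 4 presuppose a real stemmer [55]; the toy normalizer below, with a hand-picked suffix list, is our deliberately crude stand-in, shown only to make the merging of morphological variants concrete.

```python
SUFFIXES = ("ations", "ation", "ing", "ly", "er", "est", "ed", "s")

def crude_stem(word):
    """Strip one common English suffix, then a trailing 'e', so that
    'tighter' and 'tightly' fold into 'tight', and 'rotate' and
    'rotating' fold into 'rotat'. A real stemmer [55] handles the many
    cases this toy rule misses."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) > len(suf) + 2:
            word = word[: -len(suf)]
            break
    return word[:-1] if word.endswith("e") else word

assert crude_stem("tighter") == crude_stem("tightly") == "tight"
assert crude_stem("rotating") == crude_stem("rotate") == "rotat"
```

Applying such a normalizer to each token before counting is the only change between the unstemmed and stemmed rows of the tables.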

7.2. Experiment: adult narratives

The purpose of this experiment is to collect spoken English data for a situation in which each subject is given a peg already inserted into a hole. We then collect their unconstrained English descriptions.

Apparatus
We use pegs and holes as shown in Fig. 7; six wooden pegs (1...6) and three blocks A, B, C with holes of varying diameters were prepared. The dimensions are chosen so that there are three tight cases (A:1, B:3, C:5), where the peg fits into the hole tightly, and three loose fit cases (A:2, B:4, C:6) (Table 1). Note that the dimensions of these pegs and holes are such that they fit the chunks of tight and loose learned earlier (Fig. 8). Thus, the agent already knows which three of these assemblies are tight, and which three are loose.

Participants
In order to expose the system to human commentary, we need humans who are familiar with the distinction between tight and loose. The apparatus is such that this distinction is quite clear to almost any adult, without any explicit instructions. We therefore collected narratives from eighteen adult speakers of Indian English, of both genders, in the age group 18–24, all students at this institute, who participated on a voluntary basis. Indian English is widely spoken in India, and is the predominant language used among professional designers who may not share any other language. It is the second language for most of the respondents, and the speech differs in grammatical and other aspects from other English varieties, with some variation within India as well. However, for the lexicon relevant to this analysis, it does not differ much from British or American English. Subjects were students of physics, engineering, or design. We retained the structure of the spoken utterances, even where they were incomplete sentences or otherwise ungrammatical.

Procedure
In all trials the blocks A–C were placed on a rigid table and the subject was instructed not to lift the blocks from the table. Each subject was given alternating tight and loose assemblies in the order (A:1)–(A:2)–(B:3)–(B:4)–(C:5)–(C:6) and asked to describe their experience in unconstrained English, as per the following instruction:


Table 2
Transcribed narrative samples. Note that some speakers describe the experience while completely avoiding the words ‘‘tight’’ and ‘‘loose’’, whereas others use them consistently. [. . . ] indicates a pause.

A:1
Speaker 1: This one is very sticky kind of thing [. . . ] it is not moving in the either of directions and its firmly hold it to the basement and [. . . ] I am not able to rotate or pull it to the any side of block.
Speaker 2: This peg is very tight [. . . ] and if you find [. . . ] give force and it rotates slightly

A:2
Speaker 1: This piece of art the later piece can be moved easily in all directions [. . . ] and can be removed out of the block and can be rotated either directions.
Speaker 2: This is very much loose and I can throw the peg [. . . ] and I [. . . ] will be moving in this hole

B:3
Speaker 1: This piece of block is not exactly lying on the plane of the table it is just [. . . ] it is wobbling around the center and rod [. . . ] the shaft is permanently fixed and seems to be permanently fixed to the basement.
Speaker 2: This is also tight but I can move this with force [. . . ] very slowly

B:4
Speaker 1: The center of the shaft is extremely [. . . ] loosely connected and it can be easily removed without much effort.
Speaker 2: This is very loose even then hole diameter is very much larger than the peg diameter [. . . ] very much loose

Table 3
Tight corpus: top five words by conditional ratio ΘT = p̂(ω/CT)/p̂(ω/CL), for nT = 1099.

Without stemming
Term     kT   p̂(ω/CT)   kL   p̂(ω/CL)   k(T+L)   p̂(ω)      p(ω)(TV)   ΘT
Tight    26   0.02366   3    0.00331   29       0.01447   0.00004    7.14
Cannot   7    0.00637   1    0.00110   8        0.00399   0.000121   5.76
To       33   0.03003   8    0.00884   41       0.02046   0.02810    3.40
Into     12   0.01092   3    0.00331   15       0.00749   0.00093    3.29
Am       9    0.00819   3    0.00331   12       0.00599   0.001294   2.47

With stemming
Term     kT   p̂(ω/CT)   kL   p̂(ω/CL)   k(T+L)   p̂(ω)      p(ω)(TV)   ΘT
Tight    34   0.03094   3    0.00331   37       0.01846   0.00004    9.33
To       33   0.03003   8    0.00884   41       0.02046   0.02810    3.4
Into     12   0.01092   3    0.00331   15       0.00749   0.00093    3.29
Not      19   0.01729   7    0.00773   26       0.01297   0.00660    2.24
Rotate   23   0.02093   12   0.01326   35       0.01747   3.99E−6    1.58

‘‘This is a peg and this is a hole [showing objects]. The peg is already inserted into the hole. Play with the assembly without lifting it from the table. Describe the interaction between peg and hole in English.’’

Before the actual collection of data, subjects were given two trial runs, one with a tight case and one with a loose case, so as to sensitize them to the differences between the two. The instructions to subjects had to be thought through very carefully. For example, the instruction not to lift the objects from the table turned out to be critical. Without it, subjects would lift up the assembly and play with both parts, and we discovered that they then tended to encode subjective aspects of how the situation is being viewed, i.e. how it is being profiled [56], rather than an intrinsic aspect of the assembly. Thus, action descriptions such as ‘‘easy’’ were emerging rather than state descriptions like ‘‘loose’’ [57]. This underlines a key notion in cognitive linguistics: the semantics of a symbol is much more than just its referent. Sample narratives from two speakers are given in Table 2.

7.3. Learned labels: initial symbols

After collecting the spoken English data from the subjects, we first categorized the textual data into the tight {(A:1), (B:3), (C:5)} and loose {(A:2), (B:4), (C:6)} fit cases. We now have estimates of the conditional probabilities, based on a small sample corpus (1099 words in the tight situation, and 904 words for loose). If the frequency of a word is too small, its estimate is unlikely to be reliable; also, many words appearing in one set may not appear in the other. So we decided to work with words that have a high frequency in either corpus. Fortunately, the top twenty-five words appearing in either set were also present in the other, so the ratios were computed for each of these 25-word sets. To further guard against small-sample effects, we also required that the conditional probability of a word appearing in a particular context be higher than its prior, i.e. that p(ω/C) be greater than p(ω).


For computing p(ω) we needed a corpus of spoken English; we used a TV-script corpus of 29 million words, which focuses on words as they are uttered [58]. The prior probability of each word is estimated from its frequency in this corpus, and is reported as p(ω)(TV) in the tables.

Table 3 shows the top five associations for the tight corpus ([tight]), in descending order of p̂(ω/CT)/p̂(ω/CL). Among the top terms appearing in the tight case, those with the lowest priors in spoken English (based on the TV corpus) are ‘‘peg’’ (0.0002), ‘‘hole’’ (0.0003), ‘‘rotate’’ (0.0000028), ‘‘tight’’ (0.00004), and ‘‘into’’ (0.00093). It is thus likely that these terms have a bearing on this situation, since their conditional probabilities far exceed their priors. Based on the ratio of the two conditional probabilities (final column), for the stemmed case, the most suitable terms for this concept, in descending order, are ‘‘tight’’, ‘‘to’’, ‘‘into’’, ‘‘not’’, ‘‘rotate’’, etc. Now, the prior probabilities of words such as ‘‘to’’ and ‘‘not’’ are very high, i.e. these words are used in a wide variety of situations. Thus the more appropriate words for this situation are ‘‘tight’’, ‘‘into’’, and ‘‘rotate’’.

Stemming seems to make only a small difference: the term ‘‘rotate’’ appears after stemming since variants such as ‘‘rotating’’ (5 times), ‘‘rotation’’ (once), and ‘‘rotations’’ (once) are now equated with ‘‘rotate’’ (16 times). The main change is that the domination by the term ‘‘tight’’ increases (and hence our confidence in it), from 1.4 times the next term in the unstemmed case to 3 times in the stemmed case. Qualitatively, we feel that ‘‘rotate’’ appears here because the subjects could not remove the block and try inserting the peg; they were mostly trying to rotate the peg in the hole to gauge the quality of fit. The word ‘‘into’’ may appear when subjects consider the task as an insertion task (‘‘it is not going into. . . ’’), whereas in the loose situation they do not use it as much.

Similarly, in the loose corpus (Table 4), the term ‘‘loose’’ appears as the most suitable term for the concept [loose]. We also observe that the top three associations remain unaffected by stemming, though the dominance of ‘‘loose’’ increases a little. The relevance of the term ‘‘easy’’ goes up with the incorporation of derivatives such as ‘‘easily’’ (10 times) and ‘‘easier’’ (7 times).
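A minimal sketch of the prior filter described above, assuming a word-to-count mapping tv_counts derived from the TV-script frequency list [58]. The counts shown are hypothetical, back-figured from the priors quoted in the text, and grounded_label is an illustrative name, not part of the system.

```python
def grounded_label(theta_ranked, p_context, tv_counts, tv_total):
    """From words ranked by the conditional ratio, keep those whose
    in-context conditional p(w/C) exceeds the prior p(w) estimated
    from the TV-script frequency list [58], and return the surviving
    word with the highest ratio as the label for the concept."""
    for word, theta in theta_ranked:
        prior = tv_counts.get(word, 0) / tv_total
        if p_context.get(word, 0.0) > prior:
            return word, theta
    return None

# Toy usage, with hypothetical counts chosen to echo the quoted priors:
tv_counts = {"tight": 4_000, "to": 2_810_000, "into": 93_000}
tv_total = 100_000_000
ranked = [("tight", 9.33), ("to", 3.4), ("into", 3.29)]   # from Table 3
p_tight_context = {"tight": 0.03094, "to": 0.03003, "into": 0.01092}
print(grounded_label(ranked, p_tight_context, tv_counts, tv_total))  # ('tight', 9.33)
```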


Table 4
Loose corpus: top five words by conditional ratio ΘL = p̂(ω/CL)/p̂(ω/CT), for nL = 904.

Without stemming
Term     kL   p̂(ω/CL)   kT   p̂(ω/CT)   k(T+L)   p̂(ω)      p(ω)(TV)   ΘL
Loose    27   0.02983   4    0.00364   31       0.01547   0.00045    8.2
Much     10   0.01105   2    0.00182   12       0.00599   0.00117    6.07
Can      28   0.03094   12   0.01092   40       0.01996   0.00912    2.83
Quite    8    0.00884   4    0.00364   12       0.00599   0.00018    2.43
Easily   10   0.01105   7    0.00637   17       0.00848   0.00002    1.73

With stemming
Term     kL   p̂(ω/CL)   kT   p̂(ω/CT)   k(T+L)   p̂(ω)      p(ω)(TV)   ΘL
Loose    30   0.03315   4    0.00364   34       0.01697   0.00003    9.11
Much     10   0.01105   2    0.00182   12       0.00599   0.00117    6.07
Can      28   0.03094   12   0.01092   40       0.01996   0.00912    2.83
Easy     17   0.01878   9    0.00819   26       0.01297   0.00032    2.29
In       15   0.01657   13   0.01183   28       0.01397   0.00966    1.4

7.4. Discussion

The experiments demonstrate that words like ‘‘tight’’ and ‘‘loose’’ are not difficult to associate with the chunks discovered from initial experience, using uninformed word association over the language input. Initially we had thought we might need standard NLP techniques like stemming and stop-word removal, but the readiness with which these words floated up was a pleasant surprise. Partly, this is because we are learning terms for two directly contrasting concepts, which considerably simplifies the association problem; the problem would clearly be harder if one attempted to learn associations for each concept separately.

Another critical issue is the design of the instructions to the subjects. Sometimes subjects would ask the experimenter for cues in the middle of their description. We had to iterate over several different instructions before arriving at items such as the instruction not to lift the apparatus from the table [57]. Again, we note that there were no restrictions on the language or the structure of the narratives, despite which the expected terms emerged as the top choices here.

While this approach works for tight and loose here, in situations where one of the contrasting terms is less frequently used, one may instead pick up negations. Thus, ‘‘cannot’’ appears as a strong second choice for the tight corpus (one narrative repeats it for emphasis: ‘‘very tight so the peg cannot be pulled out’’), and we also observe phrases such as ‘‘not loose’’, ‘‘not very loose’’, ‘‘not rotating’’, ‘‘not moving’’, etc. (‘‘not loose’’ appears four times). In this work we use no other knowledge of language, but it is conceivable that at some point the system would know that ‘‘not X’’ is a negation of X, which is one of the earliest acquisitions of human children.
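The system described here uses raw counts and leaves negation as future work; purely as an illustration of what knowing that ‘‘not X’’ negates X might buy, the sketch below credits ‘‘not X’’ bigrams to the contrasting concept. The crediting scheme and the function name are our assumptions, not part of the system.

```python
def counts_with_negation(narrative):
    """Count unigrams, but credit 'not X' bigrams to the opposite
    concept's tally instead of treating 'not' and 'X' as independent
    evidence. Purely illustrative of a conceivable extension."""
    words = narrative.lower().split()
    own, opposite = {}, {}
    i = 0
    while i < len(words):
        if words[i] == "not" and i + 1 < len(words):
            w = words[i + 1]
            opposite[w] = opposite.get(w, 0) + 1  # e.g. 'not loose' supports [tight]
            i += 2
        else:
            own[words[i]] = own.get(words[i], 0) + 1
            i += 1
    return own, opposite

own, opp = counts_with_negation("very tight so the peg cannot move it is not loose")
print(opp)  # {'loose': 1}: evidence against [loose], i.e. for the contrast class
```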


8. Conclusion

In this work we have presented a cognitively motivated approach to discovering symbols in design as a combination of a label and an image schema, a rich model of semantics. The image schemas are learned as abstractions over a set of ‘‘good designs’’. The process of abstraction discovers inter-relations among the initial design variables that must hold if the designs are to meet some minimal functional criteria. Such inter-relations often lie along low-dimensional manifolds in the design space. Each reduced dimension in the manifold corresponds to a constraint that holds between the input variables, and is learned as a chunk. It is claimed that such chunks are likely candidates for initial design symbols, and a simple example is demonstrated for learning tight and loose. Next, the labels corresponding to these symbols are learned by comparing human narratives obtained in the two cases. Thus, we have actually constructed two grounded symbols, by binding the labels ‘‘tight’’ and ‘‘loose’’ to the discovered initial image schemas (chunks) tight and loose (Fig. 9). These term–meaning pairs constitute a computational model that simulates the representation of these concepts in the core cognitive repertoire of the human designer. Further, their grounded, graded nature provides considerable plasticity in interpreting and using these symbols. This work completes our earlier work aimed at discovering symbols from direct design experience [44], where we had discovered only the semantic representation and not the labels.

In terms of learning the labels for these symbols, we observe that we have used no knowledge of grammar or of the language. Despite this, it was surprisingly easy to associate the contrastive language labels. Indeed, since we are not using language knowledge, we have also been able to use the same process to learn symbols in two completely different languages [59]. For us, the take-home lesson from this exercise is that by far the most difficult part of initial symbol discovery lies not in mapping the labels (i.e. the vocabulary), which come almost for free, but in capturing the image schemas, or conceptual distinctions, that are the most crucial part of the symbolic reasoning apparatus. In many discussions on capturing design knowledge, there is a tendency to concentrate on the labels (e.g. standardizing the set of terms), but the point worth emphasizing is that the difficulty lies not in the labels, but in what the labels refer to. The novel process suggested here, of discovering the functionally salient designs and mapping these to lower-dimensional encodings, is only initially demonstrated; the eventual viability of such an approach will require considerably more work.

We also note that the initial symbols discovered here satisfy the four criteria we had outlined in Section 3. The symbols are grounded, since the image schema is defined in the design space and not in the space of symbols. As of now the design space is rather simple, the space of circular pegs and circular holes, but extension into new domains will happen with future learning cycles, which can now be indexed using the linguistic label. The semantics is also graded, since some members of a class such as [tight] are more prototypical than others. That the term–meaning relation is dynamic is not explicitly demonstrated here, but we observe that the semantics of these incipient symbols is just the chunks learned before the label was associated. Thus, the initial image schema for tight is merely the chunk CT, defined as a one-dimensional region of the 2D space. In subsequent interactions where the symbol tight is encountered, this image schema would be considerably modified. We may speculate that this would involve two processes: broadening the chunk to include other domains that exhibit similar behavior (e.g. a ‘‘tight weave’’ or a ‘‘tight passageway’’), and specializing it to incorporate particular meanings; for a mechanical engineer, tight may even infringe into the w < t space, for concepts such as interference fits.


Fig. 9. Initial design symbols tight and loose. Note that the initial image schemas are just the chunks experienced in the initial task domain of peg-in-hole insertion. In subsequent usage of these symbols, the image schemas will broaden to other domains, and to other connotations, based on language-mediated interaction.

However, how exactly this modification would operate is a complex question, with many important ramifications for building computers that are able to reason like humans. Our aim in this work has been to present the very first stages of such an approach; the Baby Designer enterprise is truly a baby. The symbols are modeled in a lower-dimensional space than the original design space from which they emerged; hence they are informationally dense. Finally, they are also functionally salient, since the information compaction is based on functional considerations.

8.1. Beyond initial symbols

After the initial discovery of symbols, we may suggest some ways in which these label–meaning pairs may be used to extend the concept, i.e. the semantics, of a symbol. For example, the Baby Designer may already have noticed that a high force is required to assemble tight-fit assemblies. But this is an implicit relation that she knows only as a prior; she is unable to communicate it, and is hard pressed to generalize it to other situations, find analogies for it, and so on. On the other hand, if we have distinct symbols [high force] and [low force], each of which couples a linguistic unit with its grounded image schema, then the labels of these symbols, ‘‘high force’’ and ‘‘low force’’, act as handles that can be related to the label ‘‘tight’’ we have just learned, resulting in a possible rule that associates [tight fit] with [high force]. This enables the beginning designer to make other generalizations explicitly, and also to reflect on her actions when she uses a palm grasp (as opposed to a finger grasp) to pry apart two tightly fitted objects.

Even more than capturing her own experience, the baby can now use such new symbols to acquire new knowledge. Thus, if a mentor says that ‘‘the assembly may slip since it is not a tight fit’’, an early designer with a symbol for [slip] can understand from this that [tight fit] is associated with preventing [slip]. If the baby has the symbol (i.e. also the semantics) of [friction], then she may even be able to construct a theory of how slip is prevented in tight fits. All of this will also enrich the symbol [tight] that was learned here. All in all, symbols of this kind enable knowledge to be acquired at a very fast pace.

A large number of questions remain open. The demonstration was for an extremely simple problem, with only two design variables, and the chunks learned were linear and one-dimensional. How does this scale up to larger problems? We are not at all sure; it is one of the important questions to be explored. A related question is whether the incipient manifolds always reduce noise, as claimed in Section 6. We suspect that this may not always be so; noise may in fact significantly degrade the performance of some nonlinear manifold learning algorithms, which may ‘‘short-circuit’’ [43,60] between branches of the manifold.

8.2. Scaling up to real design

Perhaps for design more than for any other discipline, grounded learning of symbols is a necessity. While the proposed approach may resolve some of the problems, scaling up to real designer-level competence would clearly be a very elaborate and long-drawn process. What we propose here implies that, in the long run, to create viable computer vocabularies for design, we must train systems to learn the semantics by experiencing many designs in many real-world contexts. While this seems daunting at the outset, it is only through such intensive exposure that human designers have learned the structures underlying their knowledge systems. Machine exposure may take place in an accelerated manner, but the system must experience first-hand something akin to the vast array of bottom-up, function-driven task experiences of a human, or possibly many more, since the abstraction processes as we understand them so far may not be as efficient. Thus, the very rudimentary image schemas discovered in this work are proposed as just the start of a long process.

Having formed the first symbols, an immediate challenge is to create mechanisms for composing these semantics (and the terms themselves, i.e. the syntax) to obtain combinations of symbols. At this point, it may become possible to learn new symbols from communication, i.e. by being told. Just as designers can be taught certain relations, computer functional models may benefit from explicit awareness of pre-programmed relations, especially at levels of abstraction that would be hard to grasp otherwise. Here, existing machine ontologies may be used to provide input to such systems, provided that the semantics of the terms used in these rules is understood in terms of the machine's internal image schemas; without this, brittleness may again prevail.

Another aspect to be highlighted is that the resulting understanding of the vocabulary may not be uniform among machines (this, of course, is true also for human designers). Systems delivered from the factory would have identical image schemas to begin with, but since they continue to learn after deployment, the considerably differing design problems encountered would result in somewhat differing abstractions for the same symbols. However, owing to the graded nature of the schemas, such machines would presumably still be able to communicate, both with other machines and with differing human users. While the arguments presented here and the exploratory results indicate considerable potential for this approach, much work is clearly needed to delineate the approach and establish its scalability toward human-level performance.

References

[1] Hampe B, Grady J. From perception to meaning: image schemas in cognitive linguistics. Mouton de Gruyter; 2005.

[2] Langacker R. An introduction to cognitive grammar. Cognitive Science 1986;10:1–40.
[3] Bohm MR, Stone RB, Szykman S. Enhancing virtual product representations for advanced design repository systems. Journal of Computing and Information Science in Engineering 2005;5:360–72.
[4] Bohm MR, Stone RB. A natural language to component term methodology: towards a form based concept generation tool. In: Proceedings of the ASME 2009 international design engineering technical conferences & computers and information in engineering conference; 2009.
[5] Nanda J, Thevenot H, Simpson T, Stone R, Bohm M, Shooter S. Product family design knowledge representation, aggregation, reuse, and analysis. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 2007;21:173–92.
[6] Chiu I, Shu L. Using language as related stimuli for concept generation. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 2007;21:103–21.
[7] Nagai Y, Taura T, Mukai F. Concept blending and dissimilarity: factors for creative concept generation process. Design Studies 2009;30:648–75.
[8] Segers N, De Vries B, Achten H. Do word graphs stimulate design? Design Studies 2005;26:625–47.
[9] Dong A. The enactment of design through language. Design Studies 2007;28:5–21.
[10] Szykman S, Sriram RD, Regli WC. The role of knowledge in next-generation product development systems. Journal of Computing and Information Science in Engineering 2001;1:3–11.
[11] Holland J. Escaping brittleness: the possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In: Computation & intelligence. American Association for Artificial Intelligence; 1995. p. 275–304.
[12] Mandler JM. Foundations of mind: origins of conceptual thought. New York: Oxford University Press; 2004.
[13] Ericsson K. Expertise. MIT Press; 1999.
[14] Cross N. Expertise in design: an overview. Design Studies 2004;25:427–41.
[15] Schon DA. The reflective practitioner. New York: Basic Books; 1983.
[16] Simon H. The sciences of the artificial. The MIT Press; 1969/1981.
[17] Gero J, Coyne R. Knowledge-based planning as a design paradigm. Design Theory for CAD 1987;339–73.
[18] Pahl G, Beitz W. Engineering design: a systematic approach. London (Berlin): The Design Council/Springer-Verlag; 1977/1996. p. 199–400.
[19] Welch RV, Dixon JR. Guiding conceptual design through behavioural reasoning. Research in Engineering Design 1994;6:169–88.
[20] Szykman S, Racz J, Sriram R. The representation of function in computer-based design. In: Proceedings of the 1999 ASME design engineering technical conferences (11th international conference on design theory and methodology); 1999.
[21] Dorst K. Design problems and design paradoxes. Design Issues 2006;22:4–17.
[22] Visser W. Designing as construction of representations: a dynamic viewpoint in cognitive design research. Human–Computer Interaction 2006;21:103–52.
[23] Harnad S. The symbol grounding problem. Physica D: Nonlinear Phenomena 1990;42:335–46.
[24] Cangelosi A, Greco A, Harnad S. From robotic toil to symbolic theft: grounding transfer from entry-level to higher-level categories. Connection Science 2000;12:143–62.
[25] Stone RB, Wood KL. Development of a functional basis for design. Journal of Mechanical Design 2000;122(4):359–70.
[26] Hirtz J, Stone R, McAdams D, Szykman S, Wood K. A functional basis for engineering design: reconciling and evolving previous efforts. Research in Engineering Design 2002;13:65–82.
[27] Moss J, Cagan J, Kotovsky K. Learning from design experience in an agent-based design system. Research in Engineering Design 2004;15:77–92.
[28] Yaner P, Goel A. Analogical recognition of shape and structure in design drawings. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 2008;22:117–28.
[29] Gärdenfors P. Conceptual spaces: the geometry of thought. The MIT Press; 2004.
[30] Barsalou L. Perceptual symbol systems. Behavioral and Brain Sciences 1999;22:577–660.
[31] Gero JS. Design prototypes: a knowledge representation schema for design. AI Magazine 1990. p. 26–36. http://people.arch.usyd.edu.au/~john/publications/ger-prototypes/ger-aimag.html.

915

[32] Chandrasekaran B. Representing function: relating functional representation and functional modeling research streams. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 2005;19:65–74.
[33] Dabbeeru MM, Mukerjee A. Discovering implicit constraints in design. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 2011;25:57–75.
[34] Bishop C. Pattern recognition and machine learning. Springer; 2006.
[35] Mukerjee A, Dabbeeru MM. Multi-objective functional analysis for product portfolio optimization. In: Computational intelligence in multi-criteria decision-making, MCDM'09 IEEE symposium on. IEEE; 2009. p. 96–103.
[36] Gross MD. Design as exploring constraints. Ph.D. thesis. Department of Architecture, Massachusetts Institute of Technology; 1986.
[37] Grünwald P. The minimum description length principle. The MIT Press; 2007.
[38] Millikan R. Language, thought, and other biological categories: new foundations for realism. The MIT Press; 1984.
[39] Bloom P. Intention, history, and artifact concepts. Cognition 1996;60:1–29.
[40] Goel A, Gómez de Silva Garza A, Grué N, Murdock J, Recker M, Govindaraj T. Towards design learning environments I: exploring how devices work. In: Intelligent tutoring systems. Springer; 1996. p. 493–501.
[41] Sarkar S, Dong A, Gero JS. Design optimization problem reformulation using singular value decomposition. Journal of Mechanical Design 2009;131:081006-1–10.
[42] Deb K, Srinivasan A. Innovization: innovative design principles through optimization. Technical Report KanGAL 2005007. Kanpur: Indian Institute of Technology Kanpur; 2007.
[43] Saul LK, Roweis ST. Think globally, fit locally: unsupervised learning of low dimensional manifolds. The Journal of Machine Learning Research 2003;4:119–55.
[44] Mukerjee A, Dabbeeru MM. The birth of symbols in design. In: Proceedings of DETC'09, 2009 ASME design engineering technical conferences; 2009. Paper No. DETC2009/DTM-86709.
[45] Steels L. Evolving grounded communication for robots. Trends in Cognitive Sciences 2003;7:308–12.
[46] Roy D. Learning visually grounded words and syntax of natural spoken language. Evolution of Communication 2002;4(1):33–56.
[47] Siskind JM. Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic. Journal of Artificial Intelligence Research 2001;15:31–90.
[48] Whitney D. Quasi-static assembly of compliantly supported rigid parts. Journal of Dynamic Systems, Measurement, and Control 1982;104:65.
[49] Badke-Schaub P, Neumann A, Lauche K, Mohammed S. Mental models in design teams: a valid approach to performance in design collaboration? CoDesign 2007;3:5–20.
[50] Duda RO, Hart PE, Stork DG. Pattern classification. John Wiley and Sons; 2001.
[51] Roweis S, Saul L. Nonlinear dimensionality reduction by locally linear embedding. Science 2000;290:2323–6.
[52] Casasola M, Cohen LB, Chiarello E. Six-month-old infants' categorization of containment spatial relations. Child Development 2003;74:679–93.
[53] Lakoff G, Johnson M. Philosophy in the flesh: the embodied mind and its challenge to Western thought. New York: Basic Books; 1999.
[54] Ahmed S, Wallace KM, Blessing LT. Understanding the differences between how novice and experienced designers approach design tasks. Research in Engineering Design 2003;14:1–11.
[55] Jurafsky D, Martin J, Kehler A. Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition. MIT Press; 2000.
[56] Langacker R. Cognitive grammar: a basic introduction. USA: Oxford University Press; 2008.
[57] Dabbeeru MM, Mukerjee A. Learning concepts and language for a baby designer. In: Fourth international conference on design computing and cognition. Stuttgart (Germany): Springer; 2010.
[58] Keffe. Wiktionary: frequency lists for TV and movie scripts. 2006 [accessed 10.02.10].
[59] Mukerjee A, Dabbeeru MM. Using emergent symbols to discover multilingual translations in design. In: Proceedings of DETC'10, 2010 ASME design engineering technical conferences; 2010. Paper No. DETC2010/DTM-29216.
[60] Saxena A, Gupta A, Mukerjee A. Non-linear dimensionality reduction by locally linear isomaps. Lecture Notes in Computer Science 2004;1038–43.