An optimized and enhanced cognitive memory model for mechanical design problems


Knowledge-Based Systems 11 (1998) 197–211

I. Zeid*, C.G. Chinnappa
Department of Mechanical, Industrial and Manufacturing Engineering, Boston, MA 02115, USA
Received 24 June 1997; received in revised form 23 March 1998; accepted 3 April 1998

Abstract

Case-based reasoning (CBR) has been applied to mechanical design to aid designers in storing and later retrieving design cases when solving design problems. The development of an efficient memory model is crucial to the process of applying CBR to mechanical design. In previous work the authors developed a memory model based on the EMOP (Episodic Memory Organization Packet) structure. While the model has been successfully implemented, it suffers from a space explosion problem. This paper presents how to overcome the limitations and solve the problems of the current implementation of the EMOP memory model. The new implementation offers better space optimization and new enhancements. Enhancements include both probabilistic weights to retrieve events and editing features that allow the designer to add or remove indices and events. The new implementation has been done in LISP. The paper presents examples to demonstrate the capabilities of the new model. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: Case-based reasoning; EMOP memory model; Mechanical design; Optimization; Traversal; Elaboration; Enhancements; Gear; Pinch roller

1. Introduction

Mechanical design activities can be categorized into two classes: creating new designs for new problems, and modifying old designs to fit new problems. The vast majority of mechanical design activities fall into the latter class, and in most cases it is more effective to modify an existing design than to create a new one. The reuse of mechanical designs possibly accounts for over 60% of the mechanical design activities in industry. A design represents the embodiment of knowledge of the functions and interactions of the components involved in the design. In order to reuse these designs, some knowledge of what the 'critical design parameters' are is imperative, because these parameters can influence the design in unforeseen ways. It is for these reasons that modifying aspects of the design process (e.g. parameters and procedures) is more effective and safer than actually modifying the artifact itself. Over the last few years, much attention has been given to knowledge reuse [1-5]. More specifically, case-based reasoning (CBR) has been the principal technique used for reuse. CBR relies on past experiences for classifying or solving new problems.

* Corresponding author. Tel.: +1 617 373 3817.

CBR systems maintain focused sets of cases in their memory, and all the cases to be solved pertain to the same class of problems or situations. In case-based reasoning the most pervasive memory structure for the knowledge base of previous cases is the episodic memory organization packet (EMOP) [6]. The development of this memory structure is based on the concept that human knowledge is generated, largely, from personal experiences [7]. An EMOP memory uses indices to store events. Events in an EMOP-based memory are indexed by relevant features of that particular event. The memory is structured (in a tree-like manner) from the more general situations at the top level to the more specific cases at the deeper levels. Events are organized under EMOPs by the similarities that they share with one another; at the lowest level they are discriminated by their differences. The EMOP model has been applied to mechanical design by Bardasz and Zeid [8]. They developed an EMOP-based cognitive model of memory that allows the recall and storage of design plans. An overview of this model is given in this paper. The model has been implemented in the DEJAVU CBR system [9]. After testing and using the model for a while, it became clear that the initial implementation was not efficient: it caused the model to explode in space and made it difficult to edit.



Fig. 1. The old EMOP memory model.

The shortcomings are discussed in detail in this paper, which also offers solutions and a new implementation to overcome them. We present an optimized and enhanced EMOP-based memory model that alleviates the shortcomings of the existing model (referred to here as the old model). The new EMOP-based model optimizes the old one by preventing the memory from exploding as it stores more cases. The enhancements it provides include a facility to edit and prune the tree, as well as probabilistic measures to retrieve the 'best' existing case. The paper is organized as follows. It provides an overview of the old memory model, its shortcomings, and its optimizations and enhancements. It then presents the implementation of the new optimized and enhanced model. Sample examples are provided, followed by a comparison of the performance measures of the old and new models. The paper closes with a discussion and conclusion.

2. The old model

The old model [9] is an EMOP model. It has been used as part of the DEJAVU case-based reasoning designer's assistant shell. The model structure is fashioned in a hierarchical, tree-like manner where the nodes (EMOPs) at the top of the tree are the most general and the ones at the bottom are the most specific. Fig. 1 shows the schematic layout of this model. At the top, the memory is divided into four contexts: product, component, assembly, and recurring engineering designs.


Below the context level, the model arranges the events on the basis of the indices they contain. If more than one event happens to share an index, an EMOP is created. These EMOPs are created dynamically as events are introduced into the memory for storage after problem solving. Each EMOP contains a content frame that holds the set of features defining the common aspects of the events stored under it. The indices are the relations that connect EMOPs or events below them; each index references a feature that has been used to describe the event(s) represented below it, and the indices serve to discriminate between events and EMOPs at the same level in the tree. Events are the leaf nodes of the tree. An event contains a reference to the entity that is to be recalled when traversal of the tree reaches the corresponding leaf; the entities in this case are the stored design plans. A semantic net was incorporated into the framework of the model so that the model could adapt to changes in the vocabulary of the user. Index-fitting elaboration was also built into the system to deal with situations in which more than one event is found to fit the current problem description.
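For illustration only, the node types described above might be represented in LISP along the following lines; the structure and slot names here are hypothetical and are not taken from the DEJAVU implementation.

(defstruct emop
  content-frame   ; features common to all events stored under this EMOP
  indices)        ; alist of (feature . child), where child is an EMOP or an event

(defstruct design-event
  name            ; e.g. spur-gear
  design-plan)    ; the stored design plan recalled at this leaf

Traversal from a context root then follows, at each level, the index whose feature matches the current problem description until a leaf (a design event) is reached.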

3. Problems with the old model

When designing the previous model, the main objective was to retrieve existing design plans from the knowledge base as fast as possible. To accomplish fast retrieval, every possible access path was provided so that, when accessing an event, there would be no backtracking. This resulted in a very elaborate model that explodes in size. The problems encountered while using this model were:


1. Adding a new event into the model could sometimes require considerable reorganization of certain parts of the model.
2. Indices are multiply represented in the tree at various levels, depending on the features of the preceding events that were stored. This leads to a memory explosion problem as new events are added into the model.
3. Owing to the large size of the model, it was very difficult to edit and prune the tree. Editing features such as adding and removing an event or an index were not included in the previous model.

Keeping a more comprehensive objective in view, the authors have designed and implemented a much simpler memory model which provides quick access while eliminating all of the above problems.

4. The new model

The new model is a simpler model, as it removes redundant storage. It is designed keeping in mind both fast access and memory requirements. Extra attention has been given to the goals of the events stored in the memory. The goals are considered to be separate from the other indices, unlike what was done previously where all the indices were considered on an equal footing. As a result, a new layer of goal-level sorting was introduced into the structure of the new model. This feature is clearly demonstrated by its structural schema, shown in Fig. 2. After the context-level sorting, the new layer of goal-level sorting helps to prune the tree further so that the search space is reduced, which in turn reduces the access time. The goals are the EMOPs which serve to organize the events below them. In this implementation the EMOPs are not directly connected with a content frame; instead, they contain only the names of the events. The system maintains an event table that contains the events along with their descriptions. The content frame is then derived using the information in the event table and the EMOPs. This organization helps to conserve memory space, although it slightly increases the access time; the small increase is justified by the amount of memory space that is saved. The other difference between the two models is that, in the new model, every possible access path to an event is not built into the system (see Section 4.1 below). Instead, the paths have to be derived by moving from one index to another. Here the indices are allowed to point to more than one event, unlike the old model, so an EMOP is not created for every shared index. This cuts off any further extension of the model, thereby saving space.

4.1. Traversal

A new traversal algorithm has been developed to traverse the new memory model. To reach the index level of the tree, the traversal algorithm ensures that all the goals of the current design problem are met by the selected event(s). For each of the required goals, the system looks up the events that satisfy it and then takes the intersection of all these event lists, so that the final list contains only the events that satisfy all the required goals. When the leaf node is reached and it is found to have more than one event, the algorithm backs up one level and tries the next matching index. Here two possibilities may arise:

1. One event is found in the next index: the algorithm regards this as a list of one event and intersects it with the list found from the previous index, keeping the old list. If the result is not a null set (i.e. only one event remains in the intersection), the algorithm will return that event as the result of the search.
2. More than one event is found in the next index: here, too, a set intersection is done between the old and new lists, and the algorithm follows the same procedure as in the case above. However, if the set intersection contains more than one event, it proceeds to the next index.


The algorithm returns the first event that is found to match the current design problem. Fig. 3 shows the flowchart of this traversal algorithm.
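As a rough illustration of the intersection-based traversal described above, a minimal Common Lisp sketch is given below. The table layout and the names *goal-table*, *index-table*, *event-table* and retrieve-event are assumptions made for this example; they are not the authors' implementation.

(defvar *goal-table*  (make-hash-table :test #'equal))   ; goal  -> list of event names
(defvar *index-table* (make-hash-table :test #'equal))   ; index -> list of event names
(defvar *event-table* (make-hash-table :test #'equal))   ; event name -> feature plist

(defun events-satisfying-goals (goals)
  "Intersect the event lists of all the required goals."
  (reduce (lambda (acc goal)
            (intersection acc (gethash goal *goal-table*) :test #'equal))
          (rest goals)
          :initial-value (gethash (first goals) *goal-table*)))

(defun retrieve-event (goals indices)
  "Narrow the goal-level candidates one index at a time.  Returns the first
unique match, or the remaining candidate list if the indices are exhausted."
  (let ((candidates (events-satisfying-goals goals)))
    (dolist (index indices candidates)
      (let ((narrowed (intersection candidates
                                    (gethash index *index-table*)
                                    :test #'equal)))
        (when narrowed                ; keep the old list if the intersection is a null set
          (setf candidates narrowed))
        (when (= (length candidates) 1)
          (return (first candidates)))))))

When more than one candidate survives all the indices, the remaining list is handed to the elaboration step described in Section 4.3.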

4.2. Storage and editing

The storage of new events into the model does not require any reorganization of the model [9]. All the storage algorithm has to do is include the new event in the event list if its index or goal already exists in the model, or otherwise create a new index with only that event in its event list. It then updates the event table to include the new event. Editing and pruning features are equally easy to implement in the new model. All the editing algorithm has to do is edit the event lists of the indices and then make the appropriate changes in the event table. For example, adding or removing an event only requires adding it to, or removing it from, the relevant event lists. The EMOP model tree is always constructed using both the event list and the index list; therefore the model is easily edited and the tree is pruned via these two lists. The old model could not support any form of editing or pruning of the tree. The editing features supported by this model are:

1. adding an event,
2. removing an event,
3. adding an index or indices, and
4. removing an index or indices.

Figs. 4 and 5 show the storage and editing algorithms.
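As a rough illustration of the storage step, and reusing the hypothetical tables of the traversal sketch above (this is not the algorithm of Fig. 4), storage amounts to pushing the new event onto the relevant goal and index lists and recording its description in the event table.

(defun store-event (name goals indices)
  "Record a new event under each of its goals and indices and in the event table."
  (dolist (goal goals)
    (pushnew name (gethash goal *goal-table*) :test #'equal))
  (dolist (index indices)
    (pushnew name (gethash index *index-table*) :test #'equal))
  (setf (gethash name *event-table*) (list :goals goals :indices indices))
  name)

Storing the spur gear of Table 2, for instance, would then be a single store-event call listing its goals and indices.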

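The editing features can be sketched in the same hypothetical setting; again, these are illustrations only, not the algorithm of Fig. 5.

(defun remove-event (name)
  "Delete NAME from every goal and index list and from the event table."
  (dolist (table (list *goal-table* *index-table*))
    (maphash (lambda (key events)
               (setf (gethash key table) (remove name events :test #'equal)))
             table))
  (remhash name *event-table*))

(defun add-index (name index)
  "Attach an additional index to an existing event."
  (pushnew name (gethash index *index-table*) :test #'equal)
  (push index (getf (gethash name *event-table*) :indices)))

(defun remove-index (name index)
  "Detach an index from an existing event."
  (setf (gethash index *index-table*)
        (remove name (gethash index *index-table*) :test #'equal))
  (setf (getf (gethash name *event-table*) :indices)
        (remove index (getf (gethash name *event-table*) :indices) :test #'equal)))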


Fig. 2. The new EMOP memory model.

4.3. Elaboration

A default probability-based elaboration facility has been developed. In this method, new events are automatically given weights as they are stored into the model. The number of indices used to describe an event is taken as its default weight. When more than one event is found after retrieval, the elaboration algorithm selects the event with the highest weight. The reason that this event is chosen over the others is that the event with the highest number of indices is more likely to satisfy the current design problem than the others. The elaboration algorithm can be modified by the user into a weight-based system in which the user assigns the weights to the events.
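A minimal sketch of this default elaboration, in the same hypothetical setting as the earlier sketches, could be:

(defun event-weight (name)
  "Default weight: the number of indices describing the event.
A user-assigned :weight entry in the event table overrides the default."
  (or (getf (gethash name *event-table*) :weight)
      (length (getf (gethash name *event-table*) :indices))))

(defun elaborate (candidates)
  "Pick the candidate with the highest weight when retrieval is ambiguous."
  (first (sort (copy-list candidates) #'> :key #'event-weight)))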


Fig. 3. The traversal algorithm.


Fig. 4. The storage algorithm.


Fig. 5. The editing algorithm.

Table 1
Index types

Product indices: Class, Model, Goals, Goal, Input, Output, Assessment
Assembly indices: Class, Goals, Goal situation, Situation input, Output, Phenomenon, Assessment
Component indices: Class, Goals, Goal situation, Input, Output, Assessment, Phenomenon, Shape
Recurring-engineering-problems indices: Class, Goals, Goal situation, Input, Output, Participants, Phenomenon

The types of indices that are used in the model are the same as the ones used in the previous model [9]. Table 1 gives all index types along with their indices for quick reference.

5. Optimizations and enhancements

The model structure that has been implemented is much simpler than that of the old model. The most important benefit of using such a model is that the implementation prevents the model from exploding in space as new events are added; in fact, the model grows linearly as new events are added. Another benefit of using such a simple model is that editing features can be incorporated very easily into the framework of the model. All the required editing features have been provided in this new implementation of the memory model. They include:

1. adding an event into the model,
2. adding indices to an event in the model,
3. removing an event, and
4. removing indices from an event.

Table 2
Features for spur gear design plans

Feature type      Feature value
Goal              Transmit rotary motion, parallel shafts
Goal situation    Medium load
Input             Face width, addendum, dedendum, tooth thickness, number of teeth, pitch diameter, width of space
Output            Diametral pitch, whole depth, circular pitch
Assessment        Nil
Shape             Nil
Phenomenon        Nil


A default probability-based elaboration facility has also been incorporated in the implementation. In this elaboration system, if more than one event is found then the event with the highest number of indices is selected. The reason for this is that the next index that the user will enter is more likely to be an index for the event with more indices. This default facility can be modified into a weights-based facility, where events as a whole are weighted.

6. Examples

The model was tested using the same gear events as were used to test the old model, along with some other spring examples. Presented here are only the gear events, which allow a direct comparison of the two models. The gears that have been introduced into the model are spur gear, helical gear, bevel gear and pinch roller. All these mechanical artifacts are components that are used to transmit rotary motion between shafts. The spur gear, helical gear and pinch roller are used when the shafts are parallel to each other, whereas bevel gears are used when the axes of the shafts intersect. 'Transmit rotary motion' and 'parallel shafts' have been chosen as the goals of the spur gear, helical gear and pinch roller, and for the bevel gear 'transmit rotary motion' and 'intersecting shafts' have been chosen as the goals.

6.1. Spur gear

The features of the spur gear are shown in Table 2, and the EMOP memory model after introducing this event into the system is shown in Fig. 6. As all events come under the component context, only this branch of the tree is shown in the figure. Since this is the first event that has been added into the system, all leaf nodes contain only spur gear. As other events are added, the model will start branching into a more complex structure where leaf nodes will contain multiple events because of overlapping indices.

6.2. Helical gear

Table 3 gives all the indices that describe a helical gear, and the EMOP memory model structure after adding this event into the memory is shown in Fig. 7. Helical gears and spur gears have many common indices, and this is reflected in how the leaf nodes of these indices now contain both of these events instead of just one. Common indices point to P3 (both spur gear and helical gear), while non-common indices point to P2 (helical gear). In the previous model an EMOP would have been created for each of these overlapping indices, after which there would be a further extension of the model below this level. This difference can be clearly seen if the structures of the two models are compared [9]. At the goal layer, the tree remains the same since both these gears share the same goals (i.e. 'transmit rotary motion' and 'parallel shafts').


Fig. 6. Memory representation after storing spur gear.

6.3. Bevel gear

Unlike the previous two gears, this gear is used between intersecting shafts. This will result in the splitting of the model at the goal level to indicate this difference. On one side will be the events that are used between parallel shafts and, on the other, the events used for intersecting shafts. The model structure and the bevel gear description are given in Fig. 8 and Table 4.

Table 3
Features for helical gear design plans

Feature type      Feature value
Goal              Transmit rotary motion, parallel shafts
Goal situation    High load, high speed
Input             Helix angle, addendum, dedendum, number of teeth, tooth thickness, pitch diameter, transverse circular pitch, transverse diametral pitch, width of space, face width
Output            Axial pitch, whole depth, normal circular pitch, virtual number of teeth, normal diametral pitch
Assessment        Nil
Shape             Nil
Phenomenon        Nil

Table 4
Features for bevel gear design plans

Feature type      Feature value
Goal              Transmit rotary motion, intersecting shafts
Goal situation    Moderate load
Input             Face width, number of teeth, back cone, back cone radius, cone distance, pitch angle, circular pitch, tooth thickness, width of space, diametral pitch, whole depth, addendum, dedendum
Output            Virtual number of teeth, whole depth, circular pitch
Assessment        Nil
Shape             Nil
Phenomenon        Nil


Fig. 7. Memory representation after storing helical gear.

6.4. Pinch roller

This is the final event that is added into the model. As this artifact is used to transmit rotary motion between parallel shafts, it is placed under the 'parallel shaft' branch of the tree. The event description and the model structure are shown in Table 5 and Fig. 9. The new model now contains a total of 49 EMOPs and indices (the sum of 46 and 3 in Table 6), as compared with 1801 (the sum of 1697 and 104 in Table 6) generated by the old model. Table 6 shows the intermediate results for both models. If the growth of the two models is observed as the events are being added into them, it is seen that the old model grows exponentially and is unconstrained. This makes the old model occupy a lot of space and makes editing very difficult. The new model, on the other hand, is contained, which reduces the memory requirements and also makes editing easy.

Table 5
Features for pinch roller design plans

Feature type      Feature value
Goal              Transmit rotary motion, parallel shafts
Goal situation    Low load
Input             Outer diameter, inner diameter, face width, coefficient of friction
Output            Nil
Assessment        Low cost
Shape             Nil
Phenomenon        Nil

7. Tests performed

7.1. Retrieval tests

Presented here are a few retrieval tests performed on the new model to demonstrate its effectiveness. The goals of these tests are twofold: first, to evaluate the performance of the model algorithms; second, to measure how fast retrieval takes place. The tests are made on the new model after storing the information of the four gear examples presented above. The queries made on the new model are:

1. Retrieve a mechanical component that transmits rotary motion. The features of the retrieval are:
Goal: Transmit rotary motion
The model will identify that all the events in its memory transmit rotary motion and will proceed to use the default probability elaboration, giving the result of its search as bevel gear, as it has the highest number of indices.


Fig. 8. Memory representation after storing bevel gear.

To account for the fact that this may not be the event sought by the user, the system also returns a list of the other events. The old model would have returned helical gear. In this example the retrieval goal 'transmit rotary motion' is vague; unlike the old model, the new model returns all possible types of gears, with bevel gear as the most probable choice.

2. Retrieve a mechanical component that transmits rotary motion between intersecting shafts.
Goal: Transmit rotary motion, intersecting shafts
As these features uniquely identify the bevel gear, this event will be returned without the use of the elaboration facility. The old model would also have returned the bevel gear.

3. Retrieve a mechanical component that transmits rotary motion at high load.

Goal: Transmit rotary motion
Goal situation: High load
The helical gear design plan is returned in both the old and the new models.

4. Retrieve a mechanical component that transmits rotary motion between parallel shafts.
Goal: Transmit rotary motion, parallel shafts
Three events are found to match this description: helical gear, spur gear and pinch roller. Elaboration is then used, and helical gear is chosen as it has the highest number of indices. The old model would have returned helical gear as well.

5. Retrieve a mechanical component that transmits rotary motion between intersecting shafts at low load.
Goal: Transmit rotary motion, intersecting shafts
Goal situation: Low load
The bevel gear event is returned even though it is not used for low load, as it is the closest possible match that can be found. The old model would have returned bevel gear too.


Fig. 9. Memory representation after storing pinch roller.

6. Retrieve a mechanical component used to transmit rotary motion between intersecting and parallel shafts.
Goal: Transmit rotary motion, intersecting shafts, parallel shafts
No event will be returned, because there is no event that can be used for both parallel and intersecting shafts. The old model also would not have returned any event.
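In terms of the hypothetical sketches of Section 4, and assuming the four gear events of Tables 2-5 have been stored with store-event, query 4 above would correspond to a call such as:

(elaborate (retrieve-event '("transmit rotary motion" "parallel shafts") '()))
;; Three candidates remain (spur gear, helical gear and pinch roller);
;; elaboration picks the helical gear, which carries the largest number of indices.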

8. Retrieval time tests

As the time required for retrieval is crucial for a good design storage system, timing tests were performed on the new model. A set of eleven indices was chosen to retrieve an event: addendum, dedendum, face width, tooth thickness, number of teeth, width of space, pitch diameter, transmit rotary motion, parallel shafts, virtual number of teeth, and circular pitch. Of these eleven indices, only three are required to uniquely identify an event; the rest are very general indices that satisfy almost every event stored in the model. For different orderings of these indices, the model is tested by inputting the indices one at a time (i.e. for a particular ordering, the first trial uses the first index, the second trial uses the first two indices, and so on). Fig. 10 shows the retrieval time tests for two sets of orderings. In the incorrectly ordered set the three indices needed to uniquely identify an event are spread out, whereas in the correctly ordered set they are the first three. When very general indices are given, elaboration has to be done and this greatly increases the time for retrieval. There is a significant drop in the retrieval time (i.e. at the ninth index in the incorrectly ordered set and the third in the correctly ordered set) when only one event is reached during the search.


Fig. 10. Retrieval time versus number of indices.

In addition, when an event has been found, any number of additional indices does not affect the retrieval time. The results of the retrieval time tests can be summarized as follows:

1. The ordering of the indices can greatly influence the retrieval time.
2. The time required for elaboration is quite significant compared to the retrieval time.
3. Once an event has been found, the retrieval time is more or less independent of any additional indices included.

Access of events is fast, and is almost instantaneous in the test cases. This time will increase as more events are added into the system.

Table 6
Comparison between the old and new models

                              Event          Old model   New model
Number of features            Spur gear      13          13
                              Helical gear   20          19
                              Bevel gear     17          19
                              Pinch roller   9           8
Number of indices generated   Spur gear      13          13
                              Helical gear   153         22
                              Bevel gear     719         41
                              Pinch roller   1697        46
Number of events              Spur gear      13          11
                              Helical gear   143         20
                              Bevel gear     675         38
                              Pinch roller   1593        43
Number of EMOPs               Spur gear      0           2
                              Helical gear   10          2
                              Bevel gear     44          3
                              Pinch roller   104         3

9. Discussion

The new model has been carefully designed so that it eliminates any possibility of explosion in space as events are added. Comparisons were made between the two models, and it was found that the new model performed better. Although all aspects of the model have been discussed before, some of them warrant further investigation. These aspects are discussed below.

1. Finding the best event: The new model will find the first best event that matches the problem description. For example, consider the two queries:

Query A: Retrieve event with
Goal: Transmit rotary motion
Input: Coefficient of friction, back cone

Query B: Retrieve event with
Goal: Transmit rotary motion

Query A would return pinch roller, whereas Query B would return bevel gear. The order of the indices affects the event that will be found. In the above example the input 'Coefficient of friction' uniquely identifies pinch roller and 'Back cone' uniquely identifies bevel gear.


In Query A, when the system checks the input feature 'Coefficient of friction' and locates only one event, it neglects all others and returns pinch roller. The same happens with Query B, except that bevel gear is returned. The system could be made to search over all possible orderings of the indices; this would, however, greatly increase the search time.

2. Dependency of retrieval time on the ordering of the indices: As discussed previously, the ordering of the indices affects the retrieval time significantly. Unfortunately, this problem cannot be solved unless the system tries all the different possible orderings, and this again would increase the access time.

3. Weighting of individual indices: In this model, events as a whole are weighted, not each index separately. Both types of weighting scheme have their own advantages and disadvantages, and it would be better to have a system that is capable of employing both of them.

4. Negative searching: It is sometimes the case that the user would like to search for events that do not have a particular index. For example, the user may want to search for all events that transmit rotary motion but not at high load. This search is easily incorporated into the search mechanism by performing a set difference instead of a set intersection, as in the sketch below.
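A minimal sketch of such a negative search, in the same hypothetical setting as the sketches of Section 4, could be:

(defun retrieve-without (goals excluded-index)
  "Events that satisfy GOALS but do not carry EXCLUDED-INDEX (a set difference)."
  (set-difference (events-satisfying-goals goals)
                  (gethash excluded-index *index-table*)
                  :test #'equal))

;; Example: all events that transmit rotary motion, but not at high load.
;; (retrieve-without '("transmit rotary motion") "high load")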


10. Conclusion

A new EMOP memory model has been developed and implemented in this work. The new model offers better space optimization and new enhancements over the old model. Enhancements include both probabilistic weights to retrieve events and editing features that allow the designer to add or remove indices and events. The new model has been implemented in LISP and demonstrated successfully.

References

[1] C. Owens, Integrating feature extraction and memory search, Machine Learning 10 (1993) 311-339.
[2] M.M. Veloso, J.G. Carbonell, Derivational analogy in PRODIGY: automating case acquisition, storage, and utilization, Machine Learning 10 (1993) 249-278.
[3] B. Nebel, J. Koehler, Plan reuse versus plan generation: a theoretical and empirical analysis, Artificial Intelligence 76 (1995) 427-454.
[4] N. Amedeo, L. Claude, R. Ducournau, An object-based representation system for organic synthesis planning, International Journal of Human-Computer Studies 41 (1994) 25-32.
[5] S. Schocken, R. Hummel, On the use of the Dempster-Shafer model in indexing and retrieval applications, International Journal of Man-Machine Studies 39 (1990) 843-879.
[6] J. Kolodner, Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model, Lawrence Erlbaum, USA, 1984.
[7] R.C. Schank, R.P. Abelson, Scripts, Plans, Goals and Understanding, Lawrence Erlbaum, USA, 1984.
[8] T. Bardasz, I. Zeid, DEJAVU: a case-based reasoning designer's assistant shell, Artificial Intelligence 1 (1992) 477-496.
[9] T. Bardasz, I. Zeid, Cognitive model of memory for mechanical design problems, CAD Journal 24 (1992) 327-342.