The ICARUS Cognitive Architecture

Dongkyu Choi
Department of Aerospace Engineering, University of Kansas
1530 West 15th Street, Lawrence, KS 66049

Pat Langley
Institute for the Study of Learning and Expertise
2164 Staunton Court, Palo Alto, CA 94306
Abstract

Cognitive architectures aim to provide an infrastructure for general intelligence. Inspired by psychological evidence, researchers in this field use these systems to model various aspects of the human mind. This paper reviews the evolution of one such architecture, ICARUS, over its three decades of development. We describe different versions of the architecture in the context of related work and outline future directions for research with ICARUS.

Keywords: Cognitive architectures, ICARUS architecture, general intelligence
Introduction

The cognitive systems movement aims to understand the nature of the human mind as an integrated system (Langley, 2012). Through a series of inherently exploratory studies, scientists in this field build systems that possess high-level cognitive capabilities using structured representations and processes that work over them. These systems, often categorized as cognitive architectures, provide infrastructure for modeling human cognition by committing to a particular set of representations and memories, providing facilities to process knowledge and other structures, and often enabling their embodied agents to learn from various experiences.
Email addresses: [email protected] (Dongkyu Choi), [email protected] (Pat Langley)
ICARUS (Langley & Choi, 2006) is one such architecture, sharing its basic characteristics with other architectures like Soar (Laird et al., 1986), Prodigy (Minton, 1990), and ACT-R (Anderson & Lebiere, 1998). But unlike these architectures, which started with a theory of problem solving, research on the ICARUS architecture first focused on how an agent should carry out activities in the world. For this reason, problem-space search is not a core idea in the architecture. Rather, reactive but goal-directed (i.e., teleoreactive) execution has been the main thrust of our research. The ICARUS architecture also differs from other systems in its strong architectural commitment to the hierarchical organization of knowledge and to structures that support multiple levels of abstraction. ICARUS uses programs that resemble hierarchical versions of Horn clauses (Horn, 1951) and STRIPS (Nilsson, 1994) operators, with which the system provides teleoreactive behaviors, the ability to solve problems, and different modes of learning new knowledge.

In the sections that follow, we first review the theoretical claims of ICARUS, to help readers understand the goals of our research and the commitments we make within the architecture. Then we briefly describe the initial version of the architecture. Despite its notable differences from the later versions covered in the main part of this paper, our discussion of this early research provides important background and a historical account for the reader. After that, we explain the core ideas that constitute the architecture's theoretical foundations, which we have implemented within ICARUS incrementally over multiple versions. We also discuss some additional aspects that were introduced to the architecture more recently. Then we summarize by describing the latest version of the architecture, which ties all the core aspects together. Finally, we conclude after discussing related and future work.

Theoretical Claims of ICARUS

In designing the ICARUS architecture, we aim to provide a computational theory of intelligent behavior that emphasizes how the components of intelligence fit together to form a cognitive system. Our work is heavily inspired by ideas and results from cognitive psychology that concern high-level human cognition. For this reason,
ICARUS shares several assumptions with other architectures developed with a similar philosophy. These include: 1) the distinction between short-term and long-term memories; 2) the use of symbolic list structures in these memories; and 3) the cyclic operation from which cognitive behavior emerges. There are, however, some theoretical claims that distinguish ICARUS from other architectures. These include: 1) the notion that cognition is grounded in perception from, and action on, the physical environment; 2) the distinction between concepts and skills, which are stored separately; and 3) the hierarchical organization of long-term memory contents that reflects multiple levels of abstraction. Of course, the ongoing interactions among researchers in this field tend to reduce the functional differences among architectures, but these theoretical claims represent the distinct philosophy of the ICARUS architecture.

Early Research

ICARUS started as an architecture for physical agents, designed to support physical activities like manipulation and navigation. Despite many changes made to the architecture across subsequent versions, ICARUS still maintains this commitment, and it will be useful to review the original version briefly before we begin investigating the core elements of more recent versions.

Langley et al. (1991) describe the initial design of the architecture, which consisted of a perceptual system, a planner, and an execution module that worked over a single memory system similar to COBWEB (Fisher, 1987). The memory system, LABYRINTH (Thompson & Langley, 1991), stored probabilistic concepts at different levels of abstraction, from the most general ones at the top to the most specific ones at the bottom. The agent's experience was encoded in this hierarchy using an evaluation metric, category utility (Gluck, 1985), that decided where in this structure to store new knowledge and whether or not it should form a separate branch. To perceive its surroundings, the original version of ICARUS had a subsystem, ARGUS, that generated qualitative descriptions of objects and events based on sensory input. The architecture also included an attention mechanism that resembled CLASSIT
(Gennari, 1990), which reduced the size of ARGUS' output to LABYRINTH for classification. Once the system had inferred the state of the environment in this manner, it invoked a planner, DÆDALUS, to generate plans relevant to the current situation using means-ends analysis. The execution system, MÆANDER, then took the recursive plan and applied the implied primitive actions to the world in the specified steps while monitoring progress. Failures detected by the perceptual system during such executions resulted in replanning.

Learning has been an important subject investigated in the context of ICARUS. In the initial version of the architecture, the learning capability was embedded in the memory system. The process of storing new experience involved deciding where in memory the new knowledge belonged and assigning a probabilistic summary to the new node based on the existing structures. The memory system also had the ability to maintain the probabilities of its plans and motor skills by incrementally updating them from experience, leading to behavioral changes over time. Langley (1997) also explored statistical learning capabilities that detect when skill conditions remain satisfied, letting the system reduce its sensing load during operation.

Furthermore, ICARUS has been cast as an architecture that supports the control of physical agents through reactive logic programming. Shapiro & Langley (1999) described ICARUS as a reactive logic programming language with which automobile driving agents can be programmed for teleoreactive behaviors. The architecture supports both highly reactive control and non-trivial deliberate reasoning.

Core Aspects of ICARUS

Based on this previous work on the earlier versions, ICARUS evolved into a powerful cognitive architecture that still maintains the theoretical commitments covered above. Through continuous development efforts, the architecture has been extended to include four key aspects: hierarchical reactive execution, hierarchical conceptual inference, problem solving and skill learning, and, finally, goal reasoning. These extensions were introduced roughly in that order, and our discussion of these aspects therefore also serves as a chronological review of ICARUS's evolution over the last 15 years.
Hierarchical Reactive Execution

As noted earlier, ICARUS started out as a theory of how an agent should carry out activities in the world. For this reason, research on this architecture emphasizes embodied agents that exist over time. This implies that ICARUS agents should be capable of executing their skills in reaction to situations in the environment while staying relevant to their goals. This teleoreactive execution capability has been an important focus of our research from early on, and it led to an architecture with a strong commitment to the hierarchical organization of skills, namely, procedures that achieve certain situations in the environment, written at multiple levels of abstraction.

Table 1 shows some sample skills for the version of ICARUS described in Shapiro & Langley (2002). ICARUS's skills are generalized plans that consist of fields for one or more objectives, preconditions, and actions or subplans. The top-level skill, drive, specifies an ordered list of objectives for the agent to achieve in its :objective field, whereas a lower-level skill, avoid-trouble-ahead, includes the precondition for execution in its :requires field and the subplans in the :means field.

Table 1: A subset of the ICARUS plan for driving as provided in Shapiro & Langley (2002).

drive ( )
  :objective ( *not* (emergency-brake( ))
               *not* (avoid-trouble-ahead( ))
               get-to-target-speed( )
               *not* (avoid-trouble-behind( ))
               cruise( ) )

avoid-trouble-ahead ( )
  :requires ( bind(?c, car-ahead-center( ))
              velocity( ) > velocity(?c)
              bind(?tti, time-to-impact( ))
              bind(?rd, distance-ahead( ))
              bind(?rt, target-speed( ) - velocity( ))
              bind(?art, abs(?rt)) )
  :means ( safe-cruise(?tti, ?rd, ?art)
           safe-to-slow-down(?tti, ?rd, ?rt)
           move-right(?art)
           move-left(?art) )
During execution, the architecture evaluates its skills in a top-down manner, as shown in Figure 1. It starts at the top level (e.g., drive), checking all the objectives in sequence. The skill associated with each objective must be either irrelevant or satisfied before ICARUS can continue to the next objective. More specifically, the
skills for negated objectives like avoid-trouble-ahead will be executed until their preconditions are no longer true and the skills are hence irrelevant to the situation, whereas the skills for positive objectives like cruise will be executed until their satisfaction conditions are met. In both cases, executing a skill involves selecting among the alternatives listed in the skill, for example, safe-cruise, safe-to-slow-down, move-right, and move-left in the second example shown. This selection is controlled by values associated with each subplan or action.
Figure 1: Top-down evaluation of skills in ICARUS, from top-level skills through the skill hierarchy to direct actions.
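To make this control regime concrete, the following is a minimal sketch of top-down, teleoreactive evaluation in Python. The dictionary-based skill encoding, the field names, and the value-based selection rule are illustrative assumptions rather than the architecture's actual implementation.

    def run_cycle(skills, name, state):
        # Walk the skill hierarchy top-down for one cognitive cycle (simplified).
        skill = skills[name]
        if "action" in skill:                                 # primitive skill: act in the world
            skill["action"](state)
            return
        if not all(test(state) for test in skill.get("requires", [])):
            return                                            # preconditions false: skill is irrelevant
        if "means" in skill:                                  # alternatives: pick the highest-valued one
            best = max(skill["means"], key=lambda m: skills[m].get("value", 0.0))
            run_cycle(skills, best, state)
            return
        for objective, negated in skill.get("objectives", []):
            sub = skills[objective]
            relevant = all(test(state) for test in sub.get("requires", []))
            achieved = sub.get("achieved", lambda s: False)(state)
            if (negated and relevant) or (not negated and not achieved):
                run_cycle(skills, objective, state)           # work on the first unmet objective
                return

On each cycle the interpreter descends until it reaches a primitive skill, so execution remains reactive to the current state while staying directed toward the ordered objectives of the top-level skill.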
In Shapiro & Langley (2002), ICARUS relied on a separate reinforcement learning algorithm, SHARSHA, to learn the values associated with alternatives in skills. This learning system allowed the propagation of reward signals to update the skill values, and the system could learn different sets of values that correspond, for example, to different driving styles. This particular capability is not included in the latest version of ICARUS, which uses different methods to choose among alternatives. Nevertheless, the reactive execution of hierarchical skills remains an important cornerstone of the ICARUS architecture. We describe the details of the latest version separately later in this paper.
Hierarchical Conceptual Inference

Although the reactive execution of hierarchical skills enabled teleoreactive behaviors, the early versions of the architecture lacked the ability to capture state abstractions. This led to another important part of ICARUS programs, namely hierarchical concepts, which constitute the agent's vocabulary for describing situations in the environment. Choi et al. (2004) describe ICARUS's hierarchically organized concepts and the associated long-term memory in detail. As shown in Table 2, the Boolean concepts describe various categories of an object or relations among multiple objects. Primitive concepts like the first example, corner-ahead-left, specify perceptual matching conditions in the :percepts field and numeric tests against matched variables in the :tests field. In contrast, non-primitive concepts like the second and third examples, in-intersection and in-lane, include an additional field, :positives, that stores positive (or negated) references to other concepts as sub-relations.

Table 2: A subset of the ICARUS concepts for driving as provided in Choi et al. (2004).

(corner-ahead-left (?corner)
  :percepts ((corner ?corner r ?r theta ?theta))
  :tests ((< ?theta 0) (>= ?theta -1.571)))

(in-intersection (?self)
  :percepts ((corner ?ncorner theta ?theta) (self ?self))
  :positives ((near-block-corner ?ncorner)
              (corner-behind ?ncorner)
              (corner-straight-ahead ?scorner))
  :negatives ((far-block-corner ?fcorner)))

(in-lane (?lline)
  :percepts ((lane-line ?lline dist ?ldist))
  :positives ((on-right-side-of-road ?rline)
              (left-lane-line ?lline))
  :tests ((> ?ldist -7) (< ?ldist -3)))
Due to the dependencies that exist between primitive and non-primitive concepts, the inference of concept instances that are true in a situation naturally involves first matching primitive concepts against perceived objects and their attributes, and then matching higher-level concepts against already matched concept instances. Through this bottom-up inference process, shown in Figure 2, ICARUS finds all the concept instances, or beliefs, that hold in the current situation.
Figure 2: Bottom-up inference of concepts in ICARUS, from sensory input through primitive concepts to higher-level (non-primitive) concepts.
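The following Python sketch illustrates this two-stage, bottom-up process. The match and expand callables and the tuple encoding of beliefs are hypothetical simplifications; the architecture itself unifies variables over relational structures.

    def infer_beliefs(percepts, primitive, nonprimitive):
        # Return the set of concept instances (beliefs) that hold in the current state.
        beliefs = set()
        for concept in primitive:                       # 1. match primitives against percepts
            for obj in percepts:
                if concept["match"](obj):               # perceptual pattern plus numeric tests
                    beliefs.add((concept["name"], obj["id"]))
        changed = True
        while changed:                                  # 2. match higher-level concepts against
            changed = False                             #    existing beliefs until a fixpoint
            for concept in nonprimitive:
                for instance in concept["expand"](beliefs):
                    if instance not in beliefs:
                        beliefs.add(instance)
                        changed = True
        return beliefs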
This inference process occurs on every cycle of ICARUS's operation. As one can imagine, its complexity grows very quickly as the number of perceived objects increases. For example, running ICARUS agents in grid-based game domains like FreeCiv (http://www.freeciv.org/) results in a significant slowdown of cognitive cycles as more grid locations are perceived over the course of the game. To remedy this issue, we have tried inference mechanisms that can potentially reduce the inference time. The most notable is a version of a truth maintenance system that kept track of the supporting facts and sub-relations for each concept instance to allow systematic updates of beliefs across cycles. This inference strategy was particularly useful when the number of objects grew over time, since belief inference in all but the initial cycle involved incremental updates rather than bottom-up inference from scratch.
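A minimal sketch of such an incremental update, again with hypothetical data structures (a support table mapping each belief to the facts it depends on, and a derive helper that matches concepts only against new material), might look as follows.

    def update_beliefs(beliefs, support, removed_facts, new_facts, derive):
        # Revise beliefs incrementally instead of re-inferring everything from scratch.
        # 1. Retract beliefs whose support mentions a fact that is no longer true,
        #    and propagate the retraction to beliefs that depended on them.
        stale = {b for b in beliefs if support.get(b, set()) & removed_facts}
        while stale:
            belief = stale.pop()
            beliefs.discard(belief)
            stale |= {d for d in beliefs if belief in support.get(d, set())}
        # 2. Match concepts only against newly added facts plus surviving beliefs;
        #    derive() yields (belief, supporting-facts) pairs for new conclusions.
        for belief, facts in derive(beliefs, new_facts):
            support[belief] = facts
            beliefs.add(belief)
        return beliefs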
Problem Solving and Skill Learning

As seen so far, the introduction of hierarchical concepts and skills enabled ICARUS to categorize situations in a rich manner and to perform teleoreactive execution of plans at multiple levels of abstraction. However, skill execution and concept inference do not prepare an ICARUS agent for novel problems for which it has no skills to execute. For this, ICARUS needed a systematic method to decompose given problems into smaller pieces it can handle with its existing knowledge, and thus to carry out activities in unfamiliar situations by solving novel problems. From the early stages of its development, the ICARUS architecture has included a version of means-ends analysis as its default problem solver (e.g., DÆDALUS in Langley et al. (1991)), which enables the system to perform backward-chaining problem solving. Our work on this type of problem solving culminated in Langley & Choi (2006) and Langley et al. (2009), where the architecture uses its concepts and skills to backward chain off of its goals and solve novel problems.

For instance, Figure 3 shows a trace of successful means-ends problem solving in the driving domain, along with graphics that depict the changes in the environment. At first, ICARUS uses its skill, steer-for-right-turn, to chain off of the goal state, s_g, and reach the subgoal state, s_2. Then it uses another skill, in-intersection-for-right-turn, to chain off of that state and reach another state, s_1. ICARUS does not have a skill that achieves the subgoal, in-rightmost-lane, but the architecture knows the definition of this concept, which depends on two sub-relations, driving-in-segment and last-lane. Since the latter is already true in the current state, s_0, the architecture then chains off of the former, which, in turn, can be decomposed into five sub-relations. One of them, in-lane, can be achieved directly from the current state, so ICARUS executes the skill with the same name and makes in-lane true in the world. The system achieves the other two unsatisfied concepts, centered-in-lane and aligned-with-lane, in a similar manner. When this is done, the precondition, in-rightmost-lane, becomes true in the world, and the system can execute the skills in-intersection-for-right-turn and steer-for-right-turn, in that order, to achieve the top-level goal, in-segment.

Problem-solving traces like the one described in the example above are stored in a goal stack, and ICARUS invokes skill learning whenever it achieves a subgoal (e.g., in-lane) during the interleaved problem solving and execution.
Figure 3: A trace of successful problem solving in the driving domain as provided in Langley et al. (2009). The ellipses indicate (sub)goals and the rectangles denote primitive skills.
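The backward-chaining idea in this trace can be sketched as follows; the dictionary encodings of skills and concepts, and the absence of backtracking over alternative bindings, are simplifying assumptions rather than the architecture's actual problem solver.

    def solve(goal, state, skills, concepts, plan):
        # Backward-chain from 'goal', executing skills as their preconditions are met.
        if goal in state:
            return True
        for skill in skills:                              # chain through a skill that achieves it
            if goal in skill["effects"]:
                for pre in skill["conditions"]:
                    if not solve(pre, state, skills, concepts, plan):
                        break
                else:
                    plan.append(skill["name"])            # execute the skill, updating the state
                    state |= set(skill["effects"])
                    return True
        if goal in concepts:                              # otherwise chain through the concept
            for sub in concepts[goal]["relations"]:       # definition's sub-relations
                if not solve(sub, state, skills, concepts, plan):
                    return False
            state.add(goal)                               # all sub-relations hold, so the goal holds
            return True
        return False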
New skills learned from a chain that involves a skill, A, will include any achievement of A's preconditions as their first steps and the execution of A itself as the final step. New skills learned from a chain that involves a concept, B, will encode the achievement of B's sub-relations in the order they were satisfied during problem solving, while recording any sub-relations that were already true in the state as preconditions.
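A minimal sketch of this composition step, using hypothetical helper names, is shown below.

    def learn_from_skill_chain(goal, skill, precondition_steps):
        # Chain through a skill: first achieve its preconditions, then apply the skill.
        return {"name": "achieve-" + goal,
                "conditions": [],
                "subskills": precondition_steps + [skill["name"]],
                "effects": [goal]}

    def learn_from_concept_chain(goal, satisfied_in_order, already_true):
        # Chain through a concept: sub-relations achieved during problem solving become
        # ordered subskills; sub-relations already true at the start become preconditions.
        return {"name": "achieve-" + goal,
                "conditions": list(already_true),
                "subskills": ["achieve-" + sub for sub in satisfied_in_order],
                "effects": [goal]}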
Goal Reasoning

The last aspect of the ICARUS architecture that we consider essential is its goal reasoning capability. Previous versions of the architecture received their top-level goals from the user and did not have the ability to change them in response to environmental changes. Choi (2010) introduced a new capability that enables the architecture to nominate, retract, and prioritize its top-level goals, and more details were later described in Choi (2011). At the level of representation and memories, this work introduced the novel distinction between long-term and short-term goals. As shown in Table 3, ICARUS represents long-term goals as conditionalized rules that describe the goals to nominate under the associated relevance conditions.

Table 3: Some sample goal–condition pairs stored in ICARUS's long-term goal memory, as provided in Choi (2011).

((stopped-and-clear me ?ped)
  :nominate ((pedestrian-ahead me ?ped))
  :priority 10)

((clear me ?car)
  :nominate ((vehicle-ahead me ?car))
  :priority 5)

((cruising-in-lane me ?line1 ?line2)
  :nominate nil
  :priority 1)
The nomination of top-level goals occurs on each cycle after the inference of concept instances (beliefs) is complete. For instance, ICARUS instantiates the first long-term goal, stopped-and-clear, when the relevance condition, pedestrian-ahead, is true for the agent and a pedestrian in the world. Since there can be multiple pedestrians in front of the agent's car, multiple instances of this long-term goal can be nominated. If a long-term goal has a null relevance condition, as in the third example, it will always be nominated regardless of the situation. Later, if the supporting conditions for a long-term goal are no longer true in the state, previously nominated instances of the goal will not be re-nominated for that cycle and are therefore retracted from the list of the agent's top-level goals. Upon completing the nomination of goals, ICARUS prioritizes the nominated goals based on their associated values. Choi (2011) did not seriously discuss where these values come from, but the work assumed that there is some absolute measure of importance for goals, much like ethical principles that, in general, value human lives over material costs. However, these fixed values alone are not sufficient to account for dynamically changing priorities that are sensitive to situations in the world.
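The per-cycle nomination and retraction step can be sketched as follows; the ground-tuple belief encoding, the single-variable matcher, and the fixed priorities are illustrative assumptions, not the architecture's actual representation.

    def matches(pattern, beliefs):
        # Hypothetical matcher: yield the object filling the last argument of 'pattern',
        # assuming beliefs are ground tuples like ('pedestrian-ahead', 'me', 'ped3').
        for belief in beliefs:
            if belief[0] == pattern[0] and belief[1:-1] == pattern[1:-1]:
                yield belief[-1]

    def nominate_goals(long_term_goals, beliefs):
        # Instantiate every long-term goal whose relevance condition currently holds.
        nominated = []
        for goal in long_term_goals:
            if goal["nominate"] is None:                      # null condition: always nominated
                nominated.append((goal["head"], goal["priority"]))
                continue
            for obj in matches(goal["nominate"], beliefs):    # one instance per matching object
                nominated.append((goal["head"] + (obj,), goal["priority"]))
        # Goals whose conditions no longer hold are simply not re-nominated, and are
        # thereby retracted; the survivors are ordered by their (fixed) priorities.
        return sorted(nominated, key=lambda g: g[1], reverse=True)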
To address this limitation, the same work also introduced degrees of match for concept instances and used these continuous numbers to modulate goal priorities. More specifically, the degree of match (a number between zero and one) associated with the relevance condition of a long-term goal is multiplied by the fixed priority value of the goal to obtain the priority of the current goal instance. Table 4 shows some sample concepts that support this capability. These concepts include numeric tests, such as the speed lying between 15 and 20 or the angle being equal to 10. By marking the variables ?speed and ?angle in the special field (:pivot), the programmer can have ICARUS apply fuzziness to the boundaries of the numeric tests, allowing the calculation of how close the value of the specified variable is to a complete match of the concept. For example, under the usual Boolean matching the second concept would be true when ?angle equals 10 and false otherwise, but it can instead receive a degree of match such as 0.9 when ?angle is very close to 10 and 0.1 when it is far from 10.

Table 4: Sample ICARUS concepts for the driving domain that enable the calculation of continuous degrees of match, as provided in Choi (2011).

((at-turning-speed ?self)
  :percepts ((self ?self speed ?speed))
  :tests ((>= ?speed 15) (<= ?speed 20))
  :pivot (?speed))

((at-steering-angle-for-right-turn ?self)
  :percepts ((self ?self steering ?angle))
  :tests ((= ?angle 10))
  :pivot (?angle))
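As an illustrative guess at how such a degree of match might modulate priority (Choi (2011) does not commit to this particular fuzzification), consider the following sketch.

    import math

    def degree_of_match(value, low, high, softness=5.0):
        # 1.0 inside [low, high], decaying smoothly toward 0.0 with distance outside.
        if low <= value <= high:
            return 1.0
        distance = (low - value) if value < low else (value - high)
        return math.exp(-distance / softness)

    def goal_priority(base_priority, pivot_value, low, high):
        # Fixed priority scaled by how closely the pivot variable matches its test.
        return base_priority * degree_of_match(pivot_value, low, high)

    # Example: at-turning-speed tests 15 <= ?speed <= 20 with ?speed as the pivot;
    # goal_priority(10, 14.0, 15, 20) yields a slightly reduced priority.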
Other Notable Extensions

Although not included among the four main aspects of the ICARUS architecture, there have been several other lines of previous work that concern interesting capabilities in the architecture. In this section, we describe some of them in modest detail.

Learning from Constraint Violations

Based on Ohlsson's (1996) idea of acquiring new knowledge from violations of constraints, Choi & Ohlsson (2010) extended the ICARUS architecture with a new
learning capability using constraint violations. This work introduced a new representation of constraints as relevance–satisfaction pairs in the architecture (see Table 5) and added the capacity to detect violations of such constraints on every cycle.

Table 5: Some sample constraints for the Blocks World provided in Choi & Ohlsson (2010).

(color :relevance ((on ?a ?b))
       :satisfaction ((same-color ?a ?b)))

(width :relevance ((on ?a ?b))
       :satisfaction ((smaller-than ?a ?b)))
When the architecture detects a violation of one of its constraints, it augments the skill that caused the violation by adding preconditions that will prevent future violations. For example, if the agent has just executed a stacking skill and the width constraint (the second example) is now violated, ICARUS will add the satisfaction condition of the constraint, namely smaller-than, to the preconditions of the skill, preventing further violations of that constraint by this skill in the future.

Learning from Observations

Motivated by the possibility of dramatically reducing the search space compared to learning from problem solving, there have been continued efforts to introduce learning from observations of expert traces into the ICARUS architecture. Salomaki et al. (2005) first introduced learning from observations of expert traces in ICARUS, and Nejati et al. (2006), Li et al. (2009), and Nejati (2011) expanded the capability to varying degrees. In this work, ICARUS had the ability to generate explanations from expert traces that include information on the goals, the initial state, and the action sequence. The system first generates the sequence of states using the given initial state and the action sequence. After matching the generated states to the given actions, ICARUS attempts to generate explanations of how the goals were achieved in the trace. This process bears resemblance to the means-ends analysis used in learning from problem solving, except that it involves significantly less search, since the clues for skill selection are given in the form of primitive actions. Once an explanation is generated from the given expert trace, ICARUS learns new skills in a manner similar to skill learning from its own problem-solving traces.
Latest Version of ICARUS

The four core aspects of ICARUS we have covered earlier play central roles in the latest version of the architecture. But the representations ICARUS uses for its conceptual and procedural knowledge underwent several changes as the architecture was extended. In this section, we explain ICARUS's representation of knowledge in the latest version and compare it to those from the older versions of the architecture discussed so far. We also describe how the components of the architecture are put together to produce intelligent behavior, before covering some additional work that was done in ICARUS.

In the latest version of ICARUS, there are several possible variations for concepts, as shown in Table 6. On one hand, primitive concepts take one of the first two formats and describe situations only with pattern matches defined in :elements and tests against perceived objects and their attributes written in :tests. The first field was called :percepts in older versions of the architecture, but the recent unification of percepts and concept instances in ICARUS called for a more generic field label. On the other hand, non-primitive concepts include an additional field, :conditions, that was called :positives or :relations in older versions. Despite the label change, the field still stores references to other concept instances as sub-relations in the same manner.

Table 6: Syntax for ICARUS's concepts.

(⟨concept head⟩
  :elements (⟨pattern matches⟩))

(⟨concept head⟩
  :elements (⟨pattern matches⟩)
  :tests (⟨numeric tests⟩))

(⟨concept head⟩
  :elements (⟨pattern matches⟩)
  :conditions (⟨sub-relations⟩))

(⟨concept head⟩
  :elements (⟨pattern matches⟩)
  :tests (⟨numeric tests⟩)
  :conditions (⟨sub-relations⟩))
Table 7 shows two variations of ICARUS skills, where the former is a primitive skill that calls upon direct actions in the world stored in :actions and the latter is a non-primitive skill that provides a subgoal decomposition written in :subskills. Both can include pattern matches in :elements and preconditions in :conditions to define the situations in which they can be applied. The definitions also include the outcome of the skill's successful execution in :effects. In some older versions of the architecture, skills were indexed by their main (intended) effects, which served as the heads of the skills; in the latest version, however, arbitrary names can be used.

Table 7: Syntax for ICARUS's skills.

(⟨skill head⟩
  :elements (⟨pattern matches⟩)
  :conditions (⟨preconditions⟩)
  :actions (⟨direct actions⟩)
  :effects (⟨effects⟩))

(⟨skill head⟩
  :elements (⟨pattern matches⟩)
  :conditions (⟨preconditions⟩)
  :subskills (⟨subgoal decomposition⟩)
  :effects (⟨effects⟩))
Using the representations described so far, the current ICARUS architecture stores its long-term knowledge in conceptual and procedural memories. Combining all the core processes that work over these knowledge structures, ICARUS's cognitive cycle involves the operations shown in Figure 4. The architecture first receives perceptual data from the environment (1), infers the concept instances (beliefs) that are true in the current state (2), nominates and prioritizes its top-level goals based on that state (3), and selects executable skill instances (intentions) that achieve its goals (4). If no such skills are found, ICARUS invokes its means-ends problem solver to find a solution and learns new skills from the successful achievement of its goals (5).

In the sections above, we have discussed how the ICARUS architecture was initially designed and what theoretical commitments it made. We explained, in chronological order, the four core aspects of the architecture that were added over the years, and also described some of the notable extensions. We then briefly introduced the latest version of ICARUS. Next, we discuss related and future work and summarize our review of the architecture.
Figure 4: ICARUS's operations that occur on each cycle, linking perception, categorization and inference, goal reasoning, skill retrieval, problem solving and skill learning, and skill execution through the perceptual buffer, belief memory, goal memory, motor buffer, and the long-term conceptual and skill memories.
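Tying the earlier sketches together, one pass through this cycle might be organized as below; each stage is passed in as a callable, since the module boundaries here are an illustrative simplification of the architecture rather than its actual decomposition.

    def cognitive_cycle(perceive, infer, reason_about_goals, retrieve_skill,
                        solve_and_learn, execute, goals):
        # One pass through steps (1)-(5); each stage is supplied as a callable.
        percepts = perceive()                        # (1) deposit percepts in the buffer
        beliefs = infer(percepts)                    # (2) infer beliefs bottom-up
        goals = reason_about_goals(goals, beliefs)   # (3) nominate, retract, prioritize
        intention = retrieve_skill(goals, beliefs)   # (4) find an executable skill instance
        if intention is None:
            intention = solve_and_learn(goals, beliefs)  # (5) problem solve and learn skills
        if intention is not None:
            execute(intention)                       # place the selected action in the motor buffer
        return goals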
Related Work

The three decades of research on the ICARUS architecture were, without any doubt, influenced by much related work. Research on cognitive architectures like Soar (Laird et al., 1986), ACT-R (Anderson & Lebiere, 1998), Prodigy (Minton, 1990), CLARION (Sun, 2007), and others inspired and motivated our research on ICARUS to varying degrees over the years. If we examine the main characteristics of the architecture in more detail, however, much previous work beyond the cognitive architectures literature also influenced our own.

Nilsson's (1994) work on teleoreactive control motivated the reactive but goal-directed execution of skills in ICARUS. Like the work on STRIPS operators, our architecture started as a theory of controlling activities in the physical world, resulting in an architectural view different from others like Soar and ACT-R. Bonasso et al.'s (1997) research on reactive and deliberative frameworks also affected our commitment to teleoreactive
execution.

Forgy's (1982) Rete networks inspired our approach to conceptual knowledge in ICARUS. The hierarchical concepts in the ICARUS architecture form a lattice that is structurally similar to the Rete networks used for matching in production systems. Our research that cast ICARUS programs as teleoreactive logic programs has its basis in logic programming (Clocksin & Mellish, 1981). But the problem solving and learning capabilities in ICARUS bear the most resemblance to Reddy & Tadepalli's (1997) X-Learn, which acquired goal-decomposition rules from a sequence of training problems.

Previous studies like Simon (1967) and Sloman (1987) inspired our work on goal reasoning. The former argued for the need for an interruption mechanism that would serve an organism situated in the real world and outlined an information processing system that includes such a facility. The latter suggested that often-conflicting goals require a mechanism for resolution, a purpose that a motivational framework can serve in an agent system.

Future Work

Despite the continuous development efforts on the ICARUS architecture, it still lacks many aspects of human intelligence. Although many of the aspects we have covered so far are worth revisiting, our recent research focuses on episodic memory and related processes, for which we have extended the architecture with a new memory and various processes. This research is still in its infancy, and we will continue our efforts in this direction in the near future. It is especially timely and important given the recent interest in explainable autonomy, in which an artificial agent should provide justifications for its decision making, goal changes, and other mission-specific outcomes based on the episodic traces of its operations.

Conclusions

In this paper, we reviewed some early studies that gave birth to the ICARUS architecture and discussed four important aspects of the system in detail. We also provided a
glimpse of other notable capabilities introduced to ICARUS over the years. Our research on the architecture has been inspired by much related research and is still ongoing. We believe the cognitive systems approach is crucial for a proper understanding of general intelligence.

Acknowledgments

The research on the ICARUS architecture presented here was supported in part by various grants from NSF, DARPA, ONR, and KIST over the years. No endorsement should be inferred.

References

Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Erlbaum.

Bonasso, P. R., Firby, J. R., Gat, E., Kortenkamp, D., Miller, D. P., & Slack, M. G. (1997). Experiences with an architecture for intelligent, reactive agents. Journal of Experimental & Theoretical Artificial Intelligence, 9, 237–256.

Choi, D. (2010). Nomination and prioritization of goals in a cognitive architecture. In Proceedings of the 10th International Conference on Cognitive Modeling (p. 25).

Choi, D. (2011). Reactive goal management in a cognitive architecture. Cognitive Systems Research, 12, 293–308.

Choi, D., Kaufman, M., Langley, P., Nejati, N., & Shapiro, D. (2004). An architecture for persistent reactive behavior. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems (pp. 988–995). New York: ACM Press.

Choi, D., & Ohlsson, S. (2010). Learning from failures for cognitive flexibility. In Proceedings of the Thirty-Second Annual Meeting of the Cognitive Science Society. Portland, OR: Cognitive Science Society, Inc.
Clocksin, W., & Mellish, C. (1981). Programming in Prolog. New York: Springer-Verlag.

Fisher, D. H. (1987). Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2, 139–172.

Forgy, C. (1982). Rete: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence, 19, 17–37.

Gennari, J. H. (1990). An experimental study of concept formation. Ph.D. thesis, University of California, Irvine.

Gluck, M. (1985). Information, uncertainty and the utility of categories. In Proceedings of the Seventh Annual Conference of the Cognitive Science Society (pp. 283–287). Lawrence Erlbaum.

Horn, A. (1951). On sentences which are true of direct unions of algebras. Journal of Symbolic Logic, 16, 14–21.

Laird, J. E., Rosenbloom, P. S., & Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46.

Langley, P. (1997). Learning to sense selectively in physical domains. In Proceedings of the First International Conference on Autonomous Agents (pp. 217–226). ACM.

Langley, P. (2012). The cognitive systems paradigm. Advances in Cognitive Systems, 1, 3–13.

Langley, P., & Choi, D. (2006). A unified cognitive architecture for physical agents. In Proceedings of the Twenty-First National Conference on Artificial Intelligence. Boston: AAAI Press.

Langley, P., Choi, D., & Rogers, S. (2009). Acquisition of hierarchical reactive skills in a unified cognitive architecture. Cognitive Systems Research, 10, 316–332.

Langley, P., McKusick, K. B., Allen, J. A., Iba, W. F., & Thompson, K. (1991). A design for the ICARUS architecture. ACM SIGART Bulletin, 2, 104–109.
Li, N., Stracuzzi, D. J., Langley, P., & Nejati, N. (2009). Learning hierarchical skills from problem solutions using means-ends analysis. In Proceedings of the 31st Annual Meeting of the Cognitive Science Society. Amsterdam, Netherlands: Cognitive Science Society, Inc.

Minton, S. (1990). Quantitative results concerning the utility of explanation-based learning. Artificial Intelligence, 42, 363–391.

Nejati, N. (2011). Analytical goal-driven learning of procedural knowledge by observation.

Nejati, N., Langley, P., & Könik, T. (2006). Learning hierarchical task networks by observation. In Proceedings of the Twenty-Third International Conference on Machine Learning (pp. 665–672). Pittsburgh, PA.

Nilsson, N. (1994). Teleo-reactive programs for agent control. Journal of Artificial Intelligence Research, 1, 139–158.

Ohlsson, S. (1996). Learning from performance errors. Psychological Review, 103, 241–262.

Reddy, C., & Tadepalli, P. (1997). Learning goal-decomposition rules using exercises. In AAAI/IAAI (p. 843).

Salomaki, B., Choi, D., Nejati, N., & Langley, P. (2005). Learning teleoreactive logic programs by observation. In Proceedings of AAAI-05.

Shapiro, D., & Langley, P. (1999). Controlling physical agents through reactive logic programming. In Proceedings of the Third Annual Conference on Autonomous Agents (pp. 386–387). ACM.

Shapiro, D., & Langley, P. (2002). Separating skills from preference: Using learning to program by reward. In Proceedings of the International Conference on Machine Learning (pp. 570–577).

Simon, H. A. (1967). Motivational and emotional controls of cognition. Psychological Review, 74, 29–39.
Sloman, A. (1987). Motives, mechanisms, and emotions. Cognition & Emotion, 1, 217–233.

Sun, R. (2007). The motivational and metacognitive control in CLARION. In W. Gray (Ed.), Modeling Integrated Cognitive Systems (pp. 63–75). New York, NY: Oxford University Press.

Thompson, K., & Langley, P. (1991). Concept formation in structured domains. In Concept Formation: Knowledge and Experience in Unsupervised Learning (pp. 127–161).