A Review of the Icarus Cognitive Architecture

By: Ben de Jesus

SymSys 205

June 8th, 2005


The Icarus cognitive architecture, designed by Pat Langley and Seth Rogers here at Stanford, is an attempt to model the organization and problem-solving skills of the human mind. Drawing on several theories of human problem solving, Icarus uses incremental, cumulative learning over a “dynamic combination of existing knowledge elements” to develop a hierarchy of skills and concepts. From these, Icarus can learn which processes work best in certain situations and store this and other information for later use. In this paper, I will focus on the organizational structure of Icarus, the language it uses to encode information, and how the system demonstrates machine learning. I will also examine the concept of human problem solving, observing which techniques the Icarus system supports, as well as others that may have been neglected or that provide further insight into the layout of higher-level mental processing. Finally, I hope to draw on the knowledge I have acquired this quarter pertaining to simulation and show how the Icarus model simulates the function of the mind.

Cognitive architectures are systems of interwoven programs and functions that attempt to model knowledge, perception, and computation. Once created, an architecture like Icarus can be applied to a particular problem-solving environment, in which the architecture uses its knowledge to solve problems. How the architecture is organized determines the methods it employs. One distinguishing characteristic of the Icarus system is that it is divided into two types of memory, each containing different kinds of information about the surrounding world. Long-term memory stores constant principles or relations that are not affected by changes in world variables. For example, in the BlockWorld environment, concepts like “on” would be stored in this memory so that the architecture can distinguish the relations between blocks. The system divides its long-term memory into concepts and skills, each with a name and any number of arguments. These are used in conjunction with short-term memory, which is in charge of temporary goals controlled by a stack. Icarus’s short-term memory is also in charge of “perceiving” its environment, which includes acquiring current, temporary beliefs about a system. Thus, long-term memory would be in charge of knowing what constitutes a block in BlockWorld, or how a card can legally be moved in FreeCell, while short-term memory would track the current arrangement of blocks and cards.
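To make this division concrete, here is a minimal Python sketch of the two-memory split as I understand it. Every name and data structure here is my own invention for illustration, not actual Icarus code: stable relational concepts live in a long-term store, while current beliefs and the goal stack live in a short-term store.

```python
# Hypothetical sketch of Icarus's two-memory organization.
# All names are illustrative, not taken from the Icarus source.

long_term_memory = {
    # Stable relational concepts: a predicate name mapped to a test
    # over the current world state (here, "block -> what it sits on").
    "on": lambda state, a, b: state.get(a) == b,
    "clear": lambda state, a: a not in state.values(),  # nothing rests on a
}

# Short-term memory holds transient beliefs and a stack of goals.
short_term_memory = {
    "beliefs": {"A": "B", "B": "table"},   # A is on B; B is on the table
    "goal_stack": [("on", "A", "table")],  # current objective on top
}

def holds(concept, *args):
    """Evaluate a long-term concept against the current short-term beliefs."""
    test = long_term_memory[concept]
    return test(short_term_memory["beliefs"], *args)
```

The point of the sketch is only that the same “on” test survives unchanged no matter how the blocks are rearranged; only the beliefs and goal stack vary.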

The source code of Icarus is reminiscent of Lisp, which conforms nicely to the incremental learning style that Icarus attempts to emulate. Variables and functions are set up in a list format and are given up to five concept-defining fields. These fields further establish the meaning of a function’s protocol or define rules for how a variable can or cannot be used. Particularly interesting to me is how this method of approaching the world includes subfields such as :negatives, which state the concepts that must not match if a function is to be applied: if the state of the world matches a concept’s negative field, that concept will not be enacted. What makes this interesting to me is that it must model a fundamental underpinning of a functionalist approach to the mind, by which I mean that there must be an incredible array of subconscious acknowledgements that the brain keeps track of in understanding even the simplest of relations. For example, just to understand how an object can be “on” another object requires a variety of knowledge on the mind’s part, including all of the positive as well as negative connotations of the word. To understand the relation “on” requires understanding position, height, and even what it means to exist (a long-term memory concept). Perception then becomes a part-by-part evaluation of the world, which is stored in the architecture’s short-term memory for later manipulation and the achievement of goals.
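The gating role of a :negatives field can be sketched in a few lines of Python. The field names below mimic the Lisp-style keyword notation described above, but the structure is a simplified guess of mine, not Icarus source: a concept matches only when all its positive conditions hold and none of its negative conditions do.

```python
# Hypothetical sketch of concept matching with a :negatives field.
# Field names imitate the Lisp keyword style; the details are my own.

concept = {
    "name": "pickup-able",
    ":percepts": ["clear"],   # conditions that must hold in the world
    ":negatives": ["held"],   # conditions that must NOT hold
}

def matches(concept, facts):
    """A concept matches only if every percept holds and no negative does."""
    positives_ok = all(p in facts for p in concept[":percepts"])
    negatives_ok = not any(n in facts for n in concept[":negatives"])
    return positives_ok and negatives_ok
```

So a block that is clear matches, but a block that is clear and already held does not, even though the positive condition is satisfied.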

The functioning of Icarus is, in my opinion, just as impressive as the internal structure that organizes the system to carry out its intentions. The skills and concepts defined in the system’s memories build on one another, forming concatenations of skills and charting errors that can be used to prevent future mistakes. This process of building on existing knowledge occurs cyclically, as the system iterates over its stack of objectives and attempts to form solutions to problems. For example, if Icarus finds that a particular combination of functions achieves a goal a certain number of times, the architecture will define for itself a new function that accomplishes in one step what its lower-level concepts could only accomplish more slowly. Thus the system is constantly attempting to better itself and carry out its objectives as efficiently as possible, a distinguishing characteristic between ordinary programming and cognitive architectures.
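The idea of caching a repeatedly successful skill sequence as a new composite skill can be illustrated as follows. The threshold, names, and bookkeeping here are all assumptions of mine, not details from the Langley and Rogers paper; the sketch only shows the shape of the mechanism.

```python
from collections import Counter

# Hypothetical illustration: a sequence of skills that achieves a goal
# enough times gets defined as a new one-step macro-skill.
# The threshold and all names are my own assumptions.

SUCCESS_THRESHOLD = 3
success_counts = Counter()
learned_skills = {}

def record_success(goal, skill_sequence):
    """Count goal achievements; past the threshold, cache a macro-skill."""
    key = (goal, tuple(skill_sequence))
    success_counts[key] += 1
    if success_counts[key] >= SUCCESS_THRESHOLD and goal not in learned_skills:
        # The new skill reaches the goal directly instead of step by step.
        learned_skills[goal] = tuple(skill_sequence)

# The same three-step solution succeeds three times, so it is cached.
for _ in range(3):
    record_success("stack-A-on-B", ["unstack-C", "pickup-A", "putdown-A-on-B"])
```

After the loop, the goal can be satisfied by a single learned skill rather than by re-deriving the three-step sequence, which is the efficiency gain the paragraph describes.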

In explaining means-end problem solving, the author describes accomplishing sub-goals without any solid, holistic plan of what the system is trying to accomplish, and the problems that follow from such an approach. Without a plan, erroneous processes are free to occur so long as they satisfy a short-term goal, regardless of whether they advance the final objective. In my mind, it is unclear whether this means that the composition of accomplished goals does not necessarily result in the completion of a final, macro-level goal, or whether it simply refers to the sequence of actions that, in the real world, cannot be manipulated as a function of time. No one could say that carrying out a process at a given time can coincide with performing another action, at least in the sense that computation occurs in time and is bounded by it: walking down the street and chewing bubble gum are different processes within the cognitive architecture. Furthermore, Icarus chooses processes based on the current state of its environment, not by imposing goals on the world and attempting to satisfy them. Icarus is a reactionary system, bouncing off stimuli that incur different effects, rather than acting as a free agent to shape the world. I feel this reactionary approach models human problem solving much more closely than the goal-imposing alternative, for two reasons in particular. First, means-end problem solving allows for some degree of error, much in the same way that humans err. Icarus cannot choose a process to satisfy an objective that does not match certain conditions in the world. In the same way, when attempting to solve a problem, there are certain methods that simply do not apply or are impertinent to the task at hand. This, however, does not restrict the number of correct possible solutions to any one problem, and Icarus accounts for this facet of human behavior.
Second, by choosing methods based on the world rather than the goal, the Icarus architecture has more of a connection to its environment than to its coding. The mind functions in much the same way: the brain is not concerned with how perception works when it perceives, but rather with what it perceives. This, I feel, reflects the metaphor of language shaping thought, if it can be said that the cognitive architecture “thinks” as it chooses its processes based upon its environment. The code of Icarus is very task-oriented, but its implementation resembles trial and error, different from a strictly deterministic “finish the goal” approach.
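The reactionary selection I describe above can be sketched simply: rather than working backward from an imposed goal, the system asks which skills the current world state licenses. The skill names and condition sets below are purely illustrative assumptions.

```python
# Sketch of world-driven ("reactionary") skill selection: a skill is a
# candidate only if its conditions are satisfied by the current state.
# All skill names and conditions here are my own illustrations.

skills = [
    {"name": "pickup", "requires": {"clear", "hand-empty"}},
    {"name": "putdown", "requires": {"holding"}},
]

def applicable(skills, world):
    """Return the names of skills whose required conditions hold in the world."""
    return [s["name"] for s in skills if s["requires"] <= world]
```

Nothing in the selection step mentions the final goal at all: the world state alone filters the options, which is what distinguishes this from a strictly goal-driven approach.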

There are a number of other cognitive architectures that model other theories of the organization of the mind, including SOAR, ACT-R, and Eureka, each of which approaches problems with very different heuristics. What is interesting about this, as pointed out in Sloman’s article “Architecture-Based Conceptions of Mind,” is that there is great diversity in how minds work, yet the cognitive architectures that are created tend to model a specific group of people: the intellectual researchers who create them. It is an almost unavoidable problem that programmers face in trying to simulate a human mind: how can an artificially created system be free from the bias of its creator? Metaphorically speaking, this asks whether our minds are as diverse as we perceive them to be, or whether we are all fairly the same when it comes to computation. The question “How can we be so different when we’re made of the same substance?” comes to mind, and I feel that cognitive architectures offer insight into a possible answer: that our minds really aren’t that different, but through society and culture we perceive them to be so. What, then, could be said about the “culture” of artificial thinking machines? Are such automata cultureless? Or perhaps a better question is whether culture is a purely human characteristic, independent of the mind that generates it. Could different cognitive architectures be said to have different personalities? In the most basic sense, I think yes: our own human approaches to solving problems and carrying out tasks are reflective of our personalities, and the same could and should be applied to the spectrum of cognitive architectures.

Understanding how cognitive architectures work and how they are derived has given me a newfound appreciation for the diversity of our minds and the processes by which we solve problems. Icarus in particular raises the question of whether we simply react to our environment, have specific intentions to carry out, or some combination of the two. Perhaps as humans we can only be conscious of one or the other at a given time, or maybe they are one and the same: goals and objectives come out of our interaction with the world and our desire to create change. This “desire” is absent, or rather imposed, in a cognitive architecture as a result of its artificiality. Regardless, cognitive architectures offer us great insight into possible organizational structures of the mind and into processes like knowledge acquisition and higher-level problem solving.



Sources

Langley, Pat & Rogers, Seth. Cumulative Learning of Hierarchical Skills from Problem Solving and Execution.

Sloman, Aaron. Architecture-Based Conceptions of Mind.