Gorkem Ozbek

SymbSys 205

Spring 2005

Review of Aaron Sloman’s “Architecture-Based Conceptions of Mind”

 

The Problem of Unification in Defining Mental Concepts

 

Aaron Sloman begins his paper, “Architecture-Based Conceptions of Mind,” by describing a phenomenological confusion that he sees as an obstacle to ongoing research in the cognitive sciences and by proposing a methodology that he believes will clear this confusion. More specifically, he complains that because we have direct access to our mental life, we believe we have precise definitions for concepts like “experience” and “consciousness.” This is not true, he claims, since these definitions differ significantly depending on who formulates them; AI theorists define “consciousness” differently, for example, than cognitive psychologists do. Sloman believes that the confusion cannot be cleared by empirical research, since the concepts are ill-defined in the first place, making it difficult to decide what evidence is relevant across disciplines. As an alternative, he proposes to view these concepts as clusters with blurry boundaries, to be interpreted within the framework of an architecture-based conception of the mind. He claims that this approach can define such concepts with much higher precision by enabling the formulation of certain answerable questions. These questions would remain relevant across disciplines and, in answering them, different areas of cognitive science can arrive at unified definitions (Sloman 2002).

 

Sloman is a dedicated proponent of the cognitive architecture approach to studying the mind. Before elaborating further on his argument for architecture-based concepts, it would be appropriate to consider the domain of cognitive architectures and where his approach falls within it. In general, a cognitive architecture “refers to the design and organization of the mind” (Sloman online). Theories of cognitive architecture are usually divided into two groups depending on their computational basis: symbolic cognitive architectures and associative cognitive architectures. Symbolic models view the mind as a symbol-manipulating device and are implemented in the form of digital computers and programs (Lewis online); these models almost always employ the von Neumann architecture. Associative models, on the other hand, view the mind as a network of many associative links, where the computational power of the system stems from the parallel processing of information that these links enable (Sloman online).

           

The cognitive architecture Sloman proposes for clarifying concepts is a symbolic, information-processing model. Models of this type interpret cognitive processes as a sequence of stages, such as input or storage into memory (Sloman online). Sloman’s architecture has three such layers which are, in increasing order of sophistication: “reactive processes,” “deliberative processes,” and “reflective processes” (Sloman 2002). Reactive processes are of the most primitive kind and regulate reactions to the environment, which may or may not be voluntary. Deliberative processes concern planning, decision making, scheduling, and similar efforts of coordination. The most sophisticated layer comprises reflective or “meta-management” processes, which involve thinking and introspection and are essential for the mental capabilities that set humans apart from other animals (Sloman 2002).
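
To make the layering concrete, the following toy sketch arranges the three kinds of processes as components of a single agent. It is purely illustrative: the layer names follow Sloman (2002), but the data structures, rules, and method names are my own assumptions, not anything proposed in his paper.

```python
# A toy sketch of the three-layer organization described above.
# Illustrative only; not Sloman's own formalism.

class ThreeLayerAgent:
    def __init__(self):
        # Reactive layer: fixed stimulus -> response rules, no deliberation.
        self.reactive_rules = {"sudden noise": "startle", "bright light": "blink"}
        # Deliberative layer: candidate plans scored by a simple utility.
        self.plans = {"go to work": 2, "stay in bed": 1}
        # Meta-management layer: a record of past decisions for reflection.
        self.decision_log = []

    def react(self, stimulus):
        """Reactive processing: respond immediately to the environment."""
        return self.reactive_rules.get(stimulus, "no reaction")

    def deliberate(self):
        """Deliberative processing: compare options and commit to a plan."""
        choice = max(self.plans, key=self.plans.get)
        self.decision_log.append(choice)
        return choice

    def reflect(self):
        """Meta-management: inspect and evaluate the agent's own decisions."""
        return f"Recent decisions: {self.decision_log}"


agent = ThreeLayerAgent()
print(agent.react("sudden noise"))   # startle
print(agent.deliberate())            # go to work
print(agent.reflect())               # Recent decisions: ['go to work']
```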

 

Having defined these layers, Sloman makes his boldest claim in three steps. First, he argues that there are different kinds of minds (Sloman 2002). For example, the mind of an infant is different from the mind of an adult human, which in turn is different from the mind of an insect. This is not a controversial claim; almost everyone would agree that the mind of an adult human is sufficiently different from that of an infant or an insect that it is justifiable to consider these as different kinds of minds.

           

Second, he claims that different kinds of minds correspond to different information processing architectures (Sloman 2002). If we accept the information processing approach as a valid way of modeling cognition, as I do within the scope of this review and for the sake of argument, this too is an uncontroversial claim. In fact, within the information processing paradigm, only different architectures, with layers of varying sophistication and functionality, can account for the existence of different kinds of minds.

           

In the final step, Sloman argues that different architectures give rise to different clusters of concepts (Sloman 2002). This effectively suggests that because an infant and an adult have different kinds of minds, and therefore different information processing architectures, they also possess different groupings of mental concepts. Sloman demonstrates this point with the following example concerning the task of defining the mental cluster-concept “emotion”:

                         

Primary emotions, such as many kinds of fear responses, require only a reactive layer, possibly with an alarm system which can take global control under certain circumstances. Secondary emotions, such as apprehension and relief, require in addition a deliberative layer, with mechanisms supporting ‘what if’ reasoning capabilities. These internal processes can trigger disturbances without anything perceived being responsible. Some sub-classes of secondary emotions include external and peripheral bodily changes, whereas others do not. Tertiary emotions require the third layer performing meta-management tasks such as monitoring, evaluating and possibly redirecting some of the reactive and deliberative processes. Certain forms of disruption of these high level control mechanisms, for instance in grief, discussed at length in (Wright, Sloman & Beaudoin 1996) produce tertiary emotions. Other examples are infatuation, humiliation, and thrilled anticipation.

 

It is in this third step that Sloman’s argument runs into problems. The first problem concerns the arbitrary nature of his information processing layers. One can easily add another layer to his architecture to account for behavior in more specific terms; the vagueness with which each layer’s functionality is formulated allows for this. In fact, Sloman himself seems to encourage this approach in order to explain more fully the subtleties of an animal’s cognitive abilities (Sloman 2002). However, adding more layers to the architecture also implies that we can partition a cluster concept into more precisely defined, singular concepts. At first glance, this may not seem problematic. Once we realize that there is no limit to how finely we can construct the architecture by adding more and more layers, however, we see that we can have as many differently defined concepts associated with a general cluster concept as we like. To revisit Sloman’s example above, we can add a fourth layer to our architecture to correspond to a fourth group of emotions, which in turn may have its own sub-classes, and so on; the sketch below illustrates how easily the scheme extends.
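
The following sketch makes the worry concrete by mapping each class of emotion in the quoted passage to the layers it requires, and then extending the table with a hypothetical fourth layer and a hypothetical fourth class. The mapping is my paraphrase of the quotation; the fourth entries are my own inventions, added only to show how the scheme multiplies, and are not anything Sloman proposes.

```python
# Mapping emotion classes to the layers they require, after Sloman (2002).
# The "quaternary" entries are hypothetical, added purely for illustration.

LAYERS = ["reactive", "deliberative", "meta-management"]

emotion_classes = {
    "primary (e.g. many fear responses)":     {"reactive"},
    "secondary (e.g. apprehension, relief)":  {"reactive", "deliberative"},
    "tertiary (e.g. grief, infatuation)":     set(LAYERS),
}

# Nothing in the scheme prevents further refinement:
LAYERS.append("hypothetical fourth layer")
emotion_classes["quaternary (hypothetical)"] = set(LAYERS)

for emotion, required in emotion_classes.items():
    print(f"{emotion}: requires {sorted(required)}")
```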

 

Sloman does not consider this consequence of his argument a problem. In fact, he treats it as a central feature that clarifies the task of defining concepts (Sloman 2002). He explains that we can regard a human being at different stages of his or her mental development as having information processing architectures of different types, say A1, A2, and later A3. These types can then correspond to different specifications of the cluster concept “pain”: P1, P2, and P3. So instead of asking “can an infant experience pain?” we would ask “can an infant experience P1 and P2?” This, Sloman argues, gives us the kind of answerable questions that would help eliminate debates across different disciplines in the cognitive sciences regarding the definition of mental concepts, and would allow for advances in architecture-based modeling of the mind and in the creation of systems that operate on this principle (Sloman 2002).
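
A minimal sketch of how such answerable questions might be posed follows, assuming (my assumption, not Sloman’s) that each architecture type can simply be indexed by the concept specifications it supports. The labels A1–A3 and P1–P3 come from the paragraph above; everything else is illustrative.

```python
# Each architecture type is indexed by the "pain" specifications it supports.
# This indexing scheme is an assumption made for illustration, not Sloman's.

supported_concepts = {
    "A1": {"P1"},              # e.g. the infant's architecture
    "A2": {"P1", "P2"},        # a later developmental stage
    "A3": {"P1", "P2", "P3"},  # the adult architecture
}

def can_experience(architecture, pain_spec):
    """Replaces the ill-defined 'can X feel pain?' with a checkable question."""
    return pain_spec in supported_concepts.get(architecture, set())

print(can_experience("A1", "P2"))  # False: the infant architecture lacks P2
print(can_experience("A3", "P3"))  # True
```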

 

It seems, however, that Sloman’s methodology creates an impasse in unifying what is meant by certain mental concepts. Such unification is an essential part of the cognitive architecture approach to intelligence. Sloman criticizes current AI systems as being too primitive: they either fall short of comprehensive, human-level intelligence or, when they actually rival humans in their performance, they do so with respect to only one mental ability or task and no other (Sloman 2002). His methodology, however, works against establishing comprehensive, human-level intelligence. If a cognitive architecture is to imitate human intelligence, it needs some way of representing all that is implied by the concept “pain.” Having an indefinitely large set of singular concepts such as P1, P2, and so on will not help, since it reduces the task of creating human-level intelligence to producing many specialized systems that understand and manipulate singular concepts without any understanding of what “pain” is. It is a convincing argument that “pain” is a cluster concept for the human mind; the next step is to lay down the mechanisms that can represent it as a whole.

 

Overall, while he solves one problem in defining concepts, Sloman introduces a new and quite different problem that has significant implications for cognitive-architecture-based AI research. Nonetheless, his effort is worthy of praise and, if nothing else, it shows the magnitude of the difficulties associated with unveiling the complexities of the human mind.