SEHR, volume 4, issue 1: Bridging the Gap
Updated 8 April 1995

the embodiment of meaning

N. Katherine Hayles


I applaud Herbert Simon's effort to put cognitive science in conversation with literary criticism. It is an effort I have made myself from time to time. And I would like to agree that his definition of meaning makes sense in a literary context. But I get nervous with the implication that, given this definition of meaning, computers can be said not only to generate meanings, but also to understand them.

Simon's announced aim is to make available to literary critics a precise definition of meaning understood in operational terms. An unannounced aim, but one I think we are entitled to infer, is to define meaning in such a way as to advance his program of simulating human intelligence with computers. He writes, "Meanings flow from the intensions of people (or perhaps people and computers, a controversial issue)." If computers can generate meaning, then it follows (his parenthesis suggests) that computers possess intension. When a computer is programmed to achieve a goal, does it have intension toward that goal? Suppose we are willing to grant that proposition. Immediately another issue arises, for intension is only part of meaning; the other (and perhaps larger) part of meaning flows from understanding. Chance events may create juxtapositions that have meaning for observers, even though no intension was involved in producing the meaning. But meaning without understanding on someone's part is not meaning, for only when understanding occurs is it possible to say, "Oh, I see what it means."

I have difficulty imagining a computer having the "Ah-ha" experience that I associate with grasping meaning. If a computer activates in its memory the appropriate network of associations, is that a necessary and sufficient condition for "Ah-ha"? Or, as Marvin Minsky argues (Minsky, 1986), is an additional but still achievable requirement necessary, namely a subsystem whose function is to sense the state of the system as a whole? Imagine a computer with such a subsystem installed; we can call the program "self-awareness." When "self-awareness" notices that a large number of networks are activated, is it entitled to pronounce, "I (we?) have understood something significant"? Drawing from arguments that Simon advances, I will discuss three objections to this proposition. Then I will briefly consider the value that his definition of meaning has for literary criticism, independent of the role that it may play in debates about artificial intelligence.
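
Before turning to those objections, it may help to make the proposal concrete. What follows is a minimal sketch, not Minsky's model or anyone else's: a toy associative network paired with a monitoring subsystem that reports when many nodes are active at once. Every name, link, and threshold in it is invented for illustration.

```python
# A toy associative network plus a monitoring subsystem that "senses
# the state of the system as a whole." Hypothetical throughout.

class AssociativeNetwork:
    def __init__(self, links):
        self.links = links        # node -> list of associated nodes
        self.active = set()

    def activate(self, node):
        # Activate a node and spread activation one step outward.
        self.active.add(node)
        self.active.update(self.links.get(node, []))

class SelfAwareness:
    # Subsystem whose sole function is to watch the global state.
    def __init__(self, network, threshold=5):
        self.network = network
        self.threshold = threshold

    def report(self):
        # The question at issue: is crossing this threshold
        # understanding, or merely counting?
        if len(self.network.active) >= self.threshold:
            return "I (we?) have understood something significant"
        return "nothing notable yet"

net = AssociativeNetwork({
    "storm":  ["wind", "rain", "shipwreck"],
    "voyage": ["ship", "departure", "storm"],
})
net.activate("storm")
net.activate("voyage")
print(SelfAwareness(net).report())
```

The sketch makes the difficulty vivid: the monitor counts active nodes, but nothing in the counting resembles the felt quality of "Ah-ha."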

Simon correctly observes that meanings come from emotional experiences as well as from cognition. Emotion is generated by complex feedback loops between the lower brain and the endocrine system. Simply put, we feel emotion because we have bodies. Evocations of these embodied experiences are woven into the networks that create meaning. Even the most abstract concept may be tinged with emotional coloring; witness the testimony of mathematicians about the emotional responses they have to elegant proofs. Lacking the experiences of embodiment, a computer would necessarily construct different kinds of networks. In particular, the profound role that emotion plays in creating meaning would be absent. Perhaps it is possible to simulate emotional colorings by methodologies appropriate to computers. In a provocative argument, Valentino Braitenberg devises a series of simple hypothetical machines (his "vehicles") to demonstrate that they can display behavior to which observers would be likely to attribute emotion, from fear and aggression to values and altruism (Braitenberg, 1984). Whatever the case for these thought experiments, the "full-bodied meaning" that Simon writes about is just that: meaning that has embodied experience woven into it at every point.
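
Braitenberg's point can be made concrete in a few lines of code. The sketch below is a loose paraphrase of his Vehicle 2, with invented numbers rather than his specifications: two light sensors drive two motors, and the wiring pattern alone determines whether an observer reads the behavior as "fear" or "aggression."

```python
# A loose paraphrase of Braitenberg's Vehicle 2 thought experiment,
# with invented numbers: two light sensors drive two motors, and the
# wiring alone makes an observer read "fear" or "aggression."

def sensor_reading(sensor_pos, light_pos):
    # Stronger reading when the sensor is nearer the light.
    dx = light_pos[0] - sensor_pos[0]
    dy = light_pos[1] - sensor_pos[1]
    return 1.0 / (1.0 + (dx * dx + dy * dy) ** 0.5)

def motor_speeds(left_sensor, right_sensor, light, crossed=False):
    left = sensor_reading(left_sensor, light)
    right = sensor_reading(right_sensor, light)
    if crossed:
        # Crossed wiring: each sensor excites the opposite motor, so
        # the vehicle turns toward the light ("aggression").
        return right, left
    # Straight wiring: the side nearer the light speeds up, steering
    # the vehicle away from it ("fear").
    return left, right

# A light off to the vehicle's left:
left_motor, right_motor = motor_speeds((-1, 0), (1, 0), (-5, 0))
print("veers away from light:", left_motor > right_motor)  # True
```

Nothing in the vehicle feels fear; the emotion is attributed by the observer, which is exactly what makes the thought experiment provocative rather than conclusive.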

Embodiment also plays an important role in constructing the contexts that determine meanings. As Simon points out, meaning is context-dependent, for it is the context that determines which of the possible associations in a network are appropriate for a given instance. Specifying rules for determining context has proven formidably complex; even the most advanced computers have difficulty parsing a sentence such as "When the police came, John put the pot in the dishwasher." The problem is not unlike formulating a grammar adequate to account for well-formed utterances. Approached theoretically, the problem is so difficult that even the most advanced theorists cannot solve it; yet a three-year-old can negotiate the same territory without difficulty, and without explicit rules. Surely Hubert L. Dreyfus is correct in pointing out that an important way in which humans establish a sense of context is through embodied experience (Dreyfus, 1979). We know the appropriate contexts because we move and act in the world. Computers can't think, Dreyfus argues, because they lack bodies. What about computers that do have bodies? Rodney Brooks has specialized in creating mobile robots that learn about the world by interacting with it directly rather than by using a central representation created by programmers. "The world is its own best representation," Brooks is fond of asserting, implying that robots, like people, can best learn about meaning by living as embodied creatures in the world.
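
A toy example shows why the dishwasher sentence is so stubborn. The sketch below, with word lists invented purely for illustration, scores each sense of "pot" by counting overlaps with the surrounding words; the tie it produces is precisely the trouble with rule-based context.

```python
# A toy illustration of context-dependence: choosing a sense of "pot"
# in "When the police came, John put the pot in the dishwasher."
# The word lists and scoring are invented; real disambiguation is
# nothing this simple, which is the point.

SENSES = {
    "cooking vessel": {"dishwasher", "kitchen", "stove", "lid"},
    "marijuana":      {"police", "arrest", "smoke", "hide"},
}

def best_sense(context_words):
    scores = {sense: len(assoc & context_words)
              for sense, assoc in SENSES.items()}
    return max(scores, key=scores.get), scores

context = {"police", "came", "put", "dishwasher"}
print(best_sense(context))
# Both senses score 1: "police" pulls one way, "dishwasher" the
# other. Flat rules tie where an embodied reader does not hesitate.
```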

A third difficulty I have in imagining a computerized "Ah-ha" comes from the experience humans have of feeling a meaning before it is grasped. Just as intension can have unconscious or subconscious dimensions, so understanding can have resonances deeper than conscious awareness. Often we grope toward meanings that we sense but cannot yet fully articulate. So central is this phenomenon to literature that it has been the subject of texts by such major (and diverse) writers as Joseph Conrad, Henry James, and Stanislaw Lem. It may be possible to propose a mechanism for the phenomenon consistent with associative networks. For example, the feeling may come from accessing a network by a route so circuitous that the meaning eludes articulation. Whatever the explanation, the intuitive apprehension of a meaning not yet grasped is a characteristic human response to meaning that as yet has no counterpart in computer simulation.
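
One way to make that proposed mechanism concrete, purely as speculation: let activation decay with each hop through the network, so that a meaning reached by a long, circuitous route arrives too faintly to articulate yet strongly enough to register. All the names, thresholds, and decay rates below are invented.

```python
# A speculative sketch: spreading activation that decays per hop, so
# a distantly reached node is "felt" but falls short of the level
# needed for articulation. Numbers are invented for illustration.

def spread(links, start, decay=0.5):
    # Breadth-first spreading activation with per-hop decay.
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbor in links.get(node, ()):
                strength = activation[node] * decay
                if strength > activation.get(neighbor, 0.0):
                    activation[neighbor] = strength
                    nxt.append(neighbor)
        frontier = nxt
    return activation

links = {"a": ["b"], "b": ["c"], "c": ["meaning"]}
act = spread(links, "a")
FELT, ARTICULATED = 0.1, 0.5
level = act["meaning"]             # 0.125 after three hops
print(FELT < level < ARTICULATED)  # felt, but not yet articulable
```

Even granting such a mechanism, the sketch models only the signal strength, not the groping, anticipatory quality of the experience itself.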

In conclusion, I do not see computers that can go "Ah-ha" being constructed at present, though it may prove possible to build such devices in the future. For literary criticism in the present, Simon's definition of meaning fits together in interesting ways with poststructuralist views of the non-referential nature of language. Meaning constructed through networks of associations has no simple one-to-one correspondence with reference, even though many items in the network may have links with embodied experience. It is a fascinating paradox that this endorsement of non-referential language should have come from the sciences, with their proclaimed preference for referentiality. But that is another story, and another network of associated meanings.
