Let's start by asking how knowledge is transferred.
Consider a couple of light-hearted but revealing accounts. A comic strip in my possession concerns industrial espionage between firms which manufacture expert systems. One firm has gained a lead in the market by developing super expert systems and another firm employs a spy to find out how they do it. The spy breaks into the other firm only to discover that they are capturing human experts, removing their brains, slicing them very thin, and inserting the slices into their top-selling model. (Capturing the spy, they remove and slice his brain, enabling them to offer a line of industrial espionage expert systems!)
Another good story-the premise of a TV film whose title I cannot remember-involves knowledge being transferred from one brain to another via electrical signals. A Vietnam veteran has been brainwashed by the Chinese with the result that his brain has become uniquely receptive. When one of those colander-shaped metal bowls is inverted on his head, and joined via wires, amplifiers, and cathode ray displays to an identical bowl on the head of some expert, the veteran speedily acquires all the expert's knowledge. He is then in a position to pass himself off as, say, a virtuoso racing driver, or champion tennis player or whatever. Once he is equipped with someone else's abilities, the CIA can use him as a spy.1
The "double colander" model is attractive because it is the way that we transfer knowledge between computers. When one takes the knowledge from one computer and puts it in another, the second computer "becomes" identical to the first as far as its abilities are concerned. Abilities are transferred between computers in the form of electrical signals transmitted along wires or recorded on floppy disks. We give one computer the knowledge of another every day of the week-the crucial point being that the hardware is almost irrelevant. If we think a little harder about the model as it applies to humans, however, we begin to notice complications.
Let us imagine our Vietnam veteran having his brain loaded with the knowledge of a champion tennis player. He goes to serve in his first match-Wham!-his arm falls off. He just doesn't have the bone structure or muscular development to serve that hard. And then, of course, there is the little matter of the structure of the nerves between brain and arm, and the question of whether the brain of the champion tennis player contains tennis playing knowledge which is appropriate for someone of the size and weight of the recipient. A good part of the champion tennis player's tennis-playing "knowledge" is, it turns out, contained in the champion tennis player's body.2 Note that in talking this way tennis playing "knowledge" is being ascribed to those with tennis playing ability; this is the implicit philosophy of the Turing Test and, as we will see, it is a useful way to go.
What we have above is a literalist version of what is called "the embodiment thesis." A stronger form suggests that the way we cut up the physical world around us is a function of the shape of all our bodies. Thus, what we recognise as, say, a "chair"-something notoriously undefinable-is a function of our height, weight, and the way our knees bend. Thus both the way we cut up the world, and our ability to recognise the cuts, are a function of the shape of our bodies.3
We now have the beginnings of a classification system; there are some types of knowledge/ability/skill that cannot be transferred simply by passing signals from one brain/computer to another. In these types of knowledge the "hardware" is important. There are other types of knowledge that can be transferred without worrying about hardware.
Some aspects of the abilities/skills of humans are contained in the body. Could it be that there are types of knowledge that have to do with the brain's physicalness rather than its computerness? Yes: certain of our cognitive abilities have to do with the physical set-up of the brain. There is the matter of the way neurons are interconnected, but it may also have something to do with the brain as a piece of chemistry or a collection of solid shapes. Templates, or sieves, can sort physical objects of different shapes or sizes; perhaps the brain works like this, or like the working medium of analogue computers. Let us call this kind of knowledge "embrained." It is interesting to note that insofar as knowledge is embrained (especially if this knowledge were stored "holographically," to use another metaphor), the comic book story-about brains being cut up and inserted into expert systems-would be a better way of thinking about knowledge transfer than the "double colander" image.
We now have knowledge in symbols, in the body, and in the physical matter of the brain. What about the social group? Going back to our Vietnam veteran, suppose it was Ken Rosewall's brain from which his tennis playing knowledge had been siphoned. How would he cope with the new fibre-glass rackets and all the modern swearing, shouting, and grunting? Though the constitutive rules of tennis have remained the same over the last twenty years, the game has changed enormously. The right way to play tennis has to do with tennis-playing society as well as brains and bodies.
Natural languages are, of course, the paradigm example of bits of social knowledge. The right way to speak is the prerogative of the social group not the individual; those who do not remain in contact with the social group will soon cease to know how to speak properly. "To be or not to be, that is the question," on the face of it, a stultifyingly vacuous phrase, may be uttered without fear of ridicule on all sorts of occasions because of the cultural aura which surrounds Shakespeare, Hamlet, and all that. "What will it be then, my droogies?" may not be uttered safely, though it could have been for a short while after 1962. Let us agree for the sake of argument that when William Shakespeare and Anthony Burgess first wrote those phrases, their language-influencing ambitions were similar. That the first became a lasting part of common speech and the second has not, has to do with the way literate society has gone.4 One can see, then, that there is an "encultured" element to language and to other kinds of knowledge; it changes as society changes, it could not be said to exist without the existence of the social group that has it; it is located in society. Variation over time is, of course, only one element of social embeddedness.5
We now have four kinds of knowledge/abilities/skills:
1. Symbol-type knowledge. (That is, knowledge that can be transferred without loss on floppy disks and so forth.)
2. Embodied knowledge
3. Embrained knowledge
4. Encultured knowledge
We need to concentrate on the relationship between symbol-type knowledge and encultured knowledge. Understanding this relationship, I believe, will help us most in comparing the competences of human beings and those of current and foreseeable machines.
What, then, is the relationship between symbol-type knowledge and encultured knowledge? Over the last twenty years many empirical studies of knowledge-making and transfer have revealed the social aspect of what we know. For example, my own early field studies showed the place of "tacit knowledge" in the replication of scientific experiments and the implications of this for scientific experimentation; it turns out that before scientists can agree that an experiment has been replicated they have to agree about the existence of the phenomenon for which they are searching. Agreement on the existence of natural phenomena seems to be social agreement; it is not something forced upon individuals in lonely confrontation with nature nor something that can be verified by aggregating isolated reports.6 Most of what we once thought of as the paradigm case of "unsocial" knowledge-science and mathematics-has turned out to be deeply social; it rests on agreements to live our scientific and mathematical life a certain way.7 It is the symbol-type knowledge that is proving to be hard to find and hard to define.
Another juncture at which the cultural basis of knowledge has shown itself is in the attempt to transfer human skills to intelligent computers. Dreyfus's path-breaking book first explained the problem from the point of view of a philosopher, and Suchman has more recently emphasised the role of situated action, writing from the viewpoint of an ethnomethodologically inclined anthropologist.8 Some of my own papers from 1985 onwards argue a related case based on my work in the sociology of science.9
The trouble with all these approaches, including my own, is that they are so good at explaining the difficulties of transferring knowledge in written form and of embodying knowledge in computer programs, and so forth, that they fail to explain the residual successes of formal approaches. If so much knowledge rests upon agreements within forms of life, what is happening when knowledge is transferred via bits of paper or floppy disks? We know that much less is transferred this way than we once believed, but something is being encapsulated in symbols or we would not use them. How can it be that artifacts that do not share our forms of life can "have knowledge" and how can we share it? In the light of modern theories, it is this that needs explaining: What is "formal" knowledge?
To move forward I think we need to locate the difference between type 1 and type 4 knowledge in different types of human action.10
One way of looking at encultured knowledge is to say that there is no one-to-one mapping between human action and observable behavior; the same act can be instantiated by many different behaviors. For example, paying money can be done by passing metal or paper tokens, writing a cheque, offering a plastic card and signing, and so forth, and each of these can be done in many different ways. Furthermore, the same behavior may be the instantiation of many different acts. For example, signing one's name might be paying money, agreeing to a divorce, a part of the act of sending a love letter, the final flourish of a suicide note, or providing a specimen signature for the bank. That is what it is like to act in a society; the co-ordination of apparently uncorrelated behaviors into concerted acts is what we learn as we become social beings.
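The many-to-many relation between acts and behaviors can be pictured as a toy look-up structure. This is only an illustrative sketch; the acts and behaviors listed are the examples from the text, not a serious taxonomy:

```python
# Toy model of the many-to-many mapping between acts and behaviors.
# The same act can be instantiated by many behaviors, and the same
# behavior can instantiate many acts, so observing a behavior alone
# does not determine which act was performed.

act_to_behaviors = {
    "paying money": {"passing tokens", "writing a cheque", "signing a card slip"},
    "agreeing to a divorce": {"signing one's name"},
    "sending a love letter": {"signing one's name"},
}

def behaviors_for(act):
    """All behaviors that can instantiate a given act."""
    return act_to_behaviors[act]

def acts_for(behavior):
    """All acts that a given behavior might instantiate."""
    return {act for act, bs in act_to_behaviors.items() if behavior in bs}

# "Signing one's name" is ambiguous between at least two distinct acts:
assert len(acts_for("signing one's name")) >= 2
```

The point of the sketch is negative: an outside observer equipped only with `acts_for` can narrow the possibilities but cannot settle which act occurred without knowing the social context.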
To relate this point to the discussion of language, we can notice that there are many different ways of saying the same thing; the different verbal formulations are different "behaviors" corresponding to the same speech acts. To recognise which are appropriate and which inappropriate ways of saying something at any particular time and place, one has to be a member of the relevant linguistic community. "Droogies" was once a widely useful piece of linguistic behavior; now it is only narrowly useful.
Call action in which there is no straightforward correspondence between intention and behavior "regular action." Most of the time most of our acts are regular acts. As well as meaning "normal" or "everyday," "regular" also connotes "rule" and "routine." This is both useful and misleading. The useful part is that normal action is usually "rule following" and sometimes "rule establishing." The misleading part is that we tend to think that it is easy to understand or describe the rules which we follow when we are doing regular action; in fact, we cannot, and this causes all the big problems for the social sciences.
We know that normal action is rule-following because we nearly always know when we have broken the rules. For example, it is clear that there are rules applying to my actions as a pedestrian because I will get into trouble if I break them-perhaps by walking too close to the single person on an otherwise deserted beach, or by trying to keep too far away from others in a crowded street-but I cannot encapsulate all that I know about the proper way to walk in a formula. The little bits of rule that I can provide-such as those in the previous sentence-are full of undefined terms. I have not defined "close," "distant," nor "crowded," nor can I define all my terms on pain of regress. What is more, what counts as following the rule varies from society to society and situation to situation. A set recipe for walking will be found wanting on the first occasion of its use in unanticipated circumstances; perhaps the next people on the beach will be actors in a perfume advertisement playing out the mysterious attractiveness of a particular aroma, while the next people in the street will be living in the time of a contagious epidemic disease!
The problem of understanding regular action is well known among philosophers of social science and a proportion of social scientists; it explains why skills have to be transferred through interpersonal contact or "socialization" rather than through book learning; it underpins ideas such as tacit knowledge or apprenticeship. The philosophy of regular action shows why social science has not worked in the way many people expected it to work. The orderliness of action is not evident to observers who are not also members of the society under examination. What is more, the order is always changing.11
Note that to make society work many of our actions have to be executed in different ways. To give just one example, studies of factories have shown us that even on production lines informed by Taylorist "scientific management" principles, there must be subtle variations in the way the job is executed.12 Indeed, one effective form of industrial disruption is to act too uniformly-in Britain this form of action is known as a "work to rule."
I now introduce a special class of acts, "behavior-specific acts," which we reserve for maintaining routines. This class seems to have been overlooked in the rush to stress the context-boundedness of ordinary acts. In behavior-specific acts we attempt to maintain a one-to-one mapping between our actions and observable behaviors. It is important to note that the characteristic feature of behavior-specific acts is that we try to execute them with the same spatio-temporal behavior, not that we succeed; in this class of act this is what we prefer, and what we intend. The archetypical example of this kind of action is caricature production-line work, for example, as portrayed by Charlie Chaplin in "Modern Times."13 There are, however, much less obvious examples of behavior-specific action, such as the standard golf-swing or competition high-board diving or simple arithmetical operations. Certain actions are intrinsically behavior-specific (e.g., marching), certain actions are intrinsically non-behavior-specific (e.g., writing love letters), but many can be executed in either way depending on intention and desired outcome. Many regular acts contain elements of behavior-specific action. Because behavior-specific action is not always successfully executed, and because, in regular action, the same behavior may sometimes be the instantiation of quite different acts, it is not possible to be certain whether or not behavior-specific action is being executed merely by observation from the outside. It is clear enough, however, that such a class of acts exists, because I can try to do my production line work or my golf swing in a behaviorally repetitious way if I wish, or not if I don't wish.
The crucial point about behavior-specific action is that when it is successfully carried out, as far as an outside observer is concerned, the behavior associated with an act can be substituted for the act itself without loss. The consequences of all successfully executed behavior-specific acts are precisely the same as the consequences of those pieces of behavior which always instantiate the act. Take the intention away and, as far as an outside observer is concerned, nothing is lost. What this means is that anyone or anything that can follow the set of rules describing the behavior can, in effect, reproduce the act. Hence behavior-specific acts are transmittable even across cultures and are mechanisable. Compare this with regular action: in that case there is no way for an outsider to substitute behavior for action because the appropriate instantiation of the action depends on the ever changing social context; behavior is not tied to acts in a regular way.
There are many occasions when our attempts to execute behavior-specific action fail. Human beings are not very good at it. In these cases we count the substitution of the behavior for the act (for example, through mechanical automation) as an improvement.
If all action were behavior-specific action there would be a regular correlation between behavior and action. In that case the big problems of the social sciences would not have emerged; sociology could be a straightforwardly observational science like astronomy or geology or the idealised versions of economics or behaviorist psychology.
Because, in the case of behavior-specific action, behavior can substitute for the act as far as an outside observer is concerned, it is possible to replace the act with the behavior, to describe the act by describing the behavior, to transfer the description of the act in written form and, sometimes, to learn how to execute the act from a written description. That, as I have suggested above, is how we can have a limited systematic social science which observes those parts of human behavior which are predominantly behavior-specific,14 and how we can have machines such as pocket calculators which inscribe the behavior-specific parts of arithmetic in their programs,15 and how we can learn from books and manuals which describe the behavior which is to be executed if the act is to be successfully accomplished.16 The re-instantiation of a behavioral "repertoire," whether by a machine, or by other human beings (who either do or do not understand what they are doing), will mimic the original act. In this sense, behavior-specific action is decontextualisable. It is the only form of action which is not essentially situated.17
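The pocket-calculator example can be made concrete. The school algorithm for long addition is exactly the kind of explicit behavioral description at issue: a digit-by-digit rule set that a person or a machine can follow identically, with nothing lost in the substitution. A minimal sketch:

```python
# Long addition as a behavior-specific routine: an explicit
# digit-by-digit rule set. Any agent (schoolchild or calculator)
# that follows the rules reproduces the act without loss.

def long_add(a: str, b: str) -> str:
    """Add two decimal numerals by the school algorithm."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)          # align the columns
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry  # add one column
        digits.append(str(total % 10))     # write the digit
        carry = total // 10                # carry the rest
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

assert long_add("478", "694") == "1172"
```

Because the act is fully captured by the rule set, the written description transfers it: nothing about the executor's culture, body, or intentions needs to travel with the program.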
Consider how this way of dividing up action compares with approaches to human knowledge which are primarily concerned with the extent to which acts are self-consciously carried out. Both regular acts and behavior-specific acts can be executed with more or less self-consciousness. Figure 1 shows a 2x2 table which contrasts the two approaches. Inside the boxes is what follows about mechanisation from the theory of behavior-specific action. Other treatments differ.
Figure 1: TYPES OF ACT
In the treatment of skills by Dreyfus and Dreyfus, and many psychologists, the vertical dimension is all important.18 At least one influential model takes it that competence is attained when the skillful person no longer has to think about what they are doing, but "internalizes" the task. Dreyfus and Dreyfus argue that only novices use expressible rules to guide their actions while experts use intuitive, inexpressible competences. They use this argument to show why expert systems are able to mimic expertise only to the level of the novice.
These treatments have a large grain of truth but for different reasons. The psychological theory touches upon one of the characteristics of the human organism-namely that we are not very good at doing certain things when we think about them. We certainly do get better at many skills when we, as it were, short circuit the conscious part of the brain. The Dreyfus and Dreyfus model rests on the Wittgensteinian problem of rules that we have touched on in the early paragraphs of this paper. To prepare a full description of regular skilled action, ready to cope with every circumstance, would need an infinite regress of rules. Therefore, self-conscious rule following reproduces only a small subset of a skill and is the prerogative of the novice.
The large grain of truth is, however, not the whole truth. The psychological model is interesting only insofar as one is interested in the human being as an organism. One may well imagine other organisms that work perfectly without internalising the rules. Even among humans the ability to work fast and accurately while self-consciously following formulaic instructions varies immensely. One can even imagine a super-human who would not need to bother with internalisation at all for a range of tasks: suppose the person in the Chinese Room remembered the content of all the look-up tables as well as a table of pronunciation and learned to speak at normal speed by mentally referring to them in real time! What is more, there are some tasks, as we will see, that are not necessarily performed better without self-conscious attention, and others that can only be performed with attention. What I am suggesting is that, firstly, the psychological "internalisation" model does not apply to all skills, and that, secondly, insofar as it does, organism-specific findings are not all that interesting if one is concerned with the nature and structure of knowledge. For example, to take an entirely different kind of organism-specific rule, it is said to be good to hum the "Blue Danube" while playing golf in order to keep the rhythm slow and smooth, but this tells you about humans, not about knowledge.
The large grain of truth in the Dreyfus and Dreyfus model is precisely that most skills are based on regular action, and in those cases their model applies for all the reasons we have seen. Because we cannot formulate the rules we cannot self-consciously "know what we are doing" and therefore even the fastest thinker will not be able to perform calculatively. But it does not apply where the action is behavior-specific. That is why their model, with its stress on the vertical dimension of Figure 1, does not give an accurate prediction of what skills can be embedded in automated machines. If their model did give an accurate prediction, there would be no pocket calculators, for a lot of arithmetic is done without self-conscious attention to the rules.19
Let us now take a tour around Figure 1 and see what all this means. Consider first the left hand pair of boxes. We can all agree that there is a range of skills that cannot be expertly performed by following a set of explicable rules and that in the case of these skills, only novices follow rules. In expert car-driving, Dreyfus's paradigm case of "intuitive" skill, familiar journeys are sometimes negotiated without any conscious awareness. For instance, on the journey to work the driver might be thinking of the day ahead, responding to variations in traffic without attention and may not even be able to remember the journey. On the other hand, there are occasions when drivers do pay attention to details of traffic and the skills of car handling, perhaps even self-consciously comparing the current state of the traffic with previous experiences; on such occasions they would remember the details of the journey even if they were not self-consciously applying rules. This partitions non-rule-based skills into the upper and lower boxes on the left hand side of my diagram and allows us to say that skills which cannot be described in a set of rules can, on occasion, be executed self-consciously if not calculatively. Indeed, there is no reason to think that in these cases un-self-conscious performance is better.
Turn now to the right hand pair of boxes. Imagine a novice who had somehow learned to drive by following rules self-consciously but because of some kind of disability was unable to progress to the level of "intuitive expert." That person would always remain a poor driver even though it might be that they eventually "internalised" the novice's rules. In terms of the table, they would have moved from box 2 to box 4 but they would still be a novice. Thus lack of self-consciousness is not a condition of expertise, for inexpert actions may be un-self-consciously performed.
Think now about the golf swing, or parade-ground drill. Humans have to perform these skills without much in the way of conscious effort if they are to be performed well. Thus, box 4 contains skilled actions as well as unskilled actions.
Now try repeating the following at high speed and without error:
I'm not a pheasant plucker
I'm a pheasant plucker's son,
And I'm only plucking pheasant
Till the pheasant pluckers come
That requires skill and self-conscious deliberation. Or again, consider the test for alcoholic intoxication that, it was said, was used by the British police before the invention of the "breathalyser." One was allowed to use all the concentration one wanted to articulate "the Leith police dismisseth us."20 Thus there are skilled tasks as well as unskilled performances located in box 2-each requiring conscious effort.
Going back to the left hand side, it is not normal to refer to everything that happens there as "skilled," since it includes such things as being able to form a sentence in one's native language. Thus all four boxes contain actions that are normally referred to as both skilled and unskilled. The only convincing mapping is that nothing on the left hand side can be mastered without socialization (nor can it be mastered by machines), whereas everything on the right hand side, including the skillful performances, could be (at least in principle).21
The regular/behavior-specific analysis of human abilities seems to be new, or at least, "newish." We can see from the above analysis that the distinction does not map onto the distinction between self-consciousness and internalization. Nor does it map onto the difference between skilled and unskilled performance; there are a minority of activities that we refer to as skillful that are executed in a behavior-specific way.22 The difference between regular and behavior-specific is also not the same as the difference between acts that we value and acts that we do not value, nor between those that are meaningful as opposed to those that are demeaning; many acts that we normally prefer to execute in a behavior-specific way are highly valued-these include high-board diving and the golf swing. Some behavior-specific acts were once highly valued, but are less valued nowadays. An obvious case is the ability to do mental arithmetic, once the prerogative of the really clever, but devalued since the larger part of it can now be done by pocket calculators. Finally we may note that the difference between the two types of act is not the same as the difference between cognitive and sensory-motor abilities.23
Is spoken language behavior-specific action? Is chess behavior-specific action? These are not good questions. The term "behavior-specific" does not identify knowledge domains, it identifies types of action. It does not apply to language or chess as such, it applies to the way people use language or play chess. Thus, for most people, language use is not behavior-specific action, while for the controllers in George Orwell's novel 1984 the aim was to make language into behavior-specific action. In a 1984-like world, just as in Searle's "Chinese Room," and in the world that certain machine-translation enthusiasts would like to bring down upon us, language use would be behavior-specific action. There is more than one way to speak a language.
The same is true of something like chess-playing, though here it is less obvious. We tend to ask what sort of knowledge chess-knowledge is-is it formal or informal? We conclude that it is formal because in principle there is an exhaustive set of rules for winning the game. But humans do not play chess like computers-at least not all the time. Human chess-playing is part behavior-specific action and part not. The first few moves of chess openings are usually played by skilled players as behavior-specific action.24 Unfortunately, I know no openings and cannot play those first few moves in this way (but I wish I could). There is not the slightest doubt that in terms of what counts as good chess in contemporary culture, all chess computers play openings better than I. Some chess endings are also generally performed as behavior-specific action. Skill at chess openings and endings increases as the ability to accomplish behavior-specific action increases. The middle game of chess is not behavior-specific action as far as most good human players are concerned; at least some of the middle game has to do with the quintessentially non-behavior-specific skill of creating surprises. Machines, on the other hand, do play a middle game that mimics what human play would be like were it to be played as behavior-specific action.25 The great chess-playing competition between machines and humans over the last couple of decades has been between the regular action middle game of the best humans and the rule-based procedures of chess programs; slowly the programs are winning. If human brains were better at behavior-specific action, then that, I would guess, is how chess masters would now be playing.
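The behavior-specific character of opening play can be pictured as a small "opening book": fixed replies to fixed move sequences. This is an illustrative sketch only; the line shown is a fragment of the Ruy Lopez in coordinate notation, chosen simply as a familiar example:

```python
# A fragment of a chess opening "book": a look-up table of fixed
# responses to fixed move sequences. Playing from such a table is
# behavior-specific action; the middle game, where surprise matters,
# has no such table.

OPENING_BOOK = {
    (): "e2e4",                                # White's first move
    ("e2e4", "e7e5"): "g1f3",                  # 2. Nf3
    ("e2e4", "e7e5", "g1f3", "b8c6"): "f1b5",  # 3. Bb5 (Ruy Lopez)
}

def book_move(moves_so_far):
    """Return the booked reply, or None once play leaves the book."""
    return OPENING_BOOK.get(tuple(moves_so_far))

assert book_move([]) == "e2e4"
assert book_move(["e2e4", "c7c5"]) is None  # out of book: regular action takes over
```

The moment `book_move` returns `None`, the table is exhausted and something other than behavioral repetition has to take over, which is where, for human players, the regular-action middle game begins.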
On the other hand, what would happen to the culture of chess should a chess-playing machine be built that could play the game exhaustively and therefore win every time with the same moves?26 It could be that the nature of the game of chess would change; people would care less about winning and more about the aesthetics. In that case, human chess would become quintessentially non-behavior-specific. The idea of a competition between human players and machines would then seem as absurd as an adding competition between a human and a computer or a pulling competition between a strong man and a tractor.
One sees that special cases apart-I have mentioned marching and the writing of love letters-it does not make sense to say that there are domains of behavior-specific and regular action.27 Rather, one notes that elements in domains are generally performed in a behavior-specific way whereas other elements are performed as regular actions. The element of each in any domain may change for many reasons, some having to do with individual choice and some having to do with changes in the "form-of-life" which activities comprise.
One way of applying these ideas to the relationship between humans and machines is to reconsider the "Turing Test." The Turing Test is a controlled experiment. To decide whether a machine is "intelligent" we compare it with a control-usually a human; we check to see if it has the same linguistic abilities. The experimenter (or judge) is "blinded"-he or she does not know which is the experimental device and which is the control. If the experimenter cannot tell machine from control, then we say that the machine is as "intelligent" as the control.
Exactly what "intelligence" means under this approach depends upon how the test is set up. If the control were a block of wood rather than a human, then the test would show only that the experimental device was as "intelligent" as a block of wood. If, on the other hand, the judge were a block of wood rather than a human, we would not find the outcome very interesting. There is a range of possibilities for both control and judge varying from block of wood, through dim human, to very sensible human, which imply very different things about the ability of any machine that passes the test. There are other variations in the way the protocol can be arranged: Is the test short or long? Is there one run or more than one? Is the typing filtered through a spell-checker or not? and so forth. Depending on the protocol, the Turing Test tests for different things.
Take the version of the Turing Test implicit in Searle's "Chinese Room" critique. In this case, responses to written questions in the Chinese language are produced by a human operator who knows no Chinese but has recourse to a huge stock of look-up tables that tell him or her which strings of Chinese symbols are appropriate outputs to which inputs. (The control in the case of the Chinese Room is "virtual.") Searle hypothesises a Chinese Room that passes the test-i.e., produces convincing Chinese answers to Chinese questions. But, for The Room to do this there must be some constraints on the protocol. For example, having noticed the way that languages change over time, we can see that either the life span of The Room (and therefore the test), must be short so that the Chinese language doesn't change much, or the interrogators must conspire to limit their questions to that temporal cross-section of the Chinese language represented in the stack of look-up tables first placed in the room, or the look-up tables must be continually updated as the Chinese language changes. Under the first two constraints, the knowledge of Chinese contained in the Room is not type 4 (encultured), knowledge. It is, rather, a frozen cross-section of a language-an elaborated version of what is found in a computer spell-checker. It is a set of formulae for the reproduction of the behavior associated with behavior-specific action. It is easy to see that this might be encapsulated in symbols.28
Now suppose, for argument's sake, that the test is long enough for the Chinese language to change while the questions are being asked, or that it is repeated again and again over a long period, and that the interrogators do not conspire to keep their questions within the bounds of the linguistic cross-section encapsulated in the original look-up tables. If the stock of look-up tables, etc., remains the same, The Room will become outdated-it will begin to fail to answer questions convincingly.
Suppose instead, that the look-up tables are continually updated by attendants. Some of the attendants will have to be in day-to-day contact with changing fashions in Chinese-they will have to share Chinese culture. Thus, somewhere in the mechanism there have to be people who do understand Chinese sufficiently well to know the difference between the Chinese equivalents of "to be or not to be" and "what will it be, my droogies" at the time that The Room is in operation. Note that the two types of room-synchronic and diachronic-are distinguishable given the right protocol. It is true that the person using the look-up tables in the diachronic room still does not understand Chinese, but among the attendants there must be some who do.
Under the extended protocol, any Chinese Room that passed the test would have to contain type 4 knowledge, and I have argued that it is to be found in those who update the look-up tables.29 It is these people who link the diachronic room into society-who make it a social entity.30
Under the extended protocol, the Turing Test becomes a test of membership of social groups. It does this by comparing the abilities of experimental object and control in miniature social interactions with the interrogator. Under this protocol, passing the test signifies social intelligence or the possession of encultured knowledge.
Once one sees this point, it is possible to simplify the Turing Test greatly while still using it to check for embeddedness in society.31 The new test requires a determined judge, an intelligent and literate control who shares the broad cultural background of the judge, and the machine with which the control is to be compared. The judge provides both "Control" and "Machine" with copies of a few typed paragraphs (in a clear, machine-readable font) of somewhat mis-spelled and otherwise mucked-about English, which neither has seen before. It is important that the paragraphs are previously unseen, for it is easy to devise a program to transliterate an example once it has been thought through.
Once presented, Control and Machine have, say, an hour to transliterate the passages into normal English. Machine will have the text presented to its scanner and its output will be a second text. Control will type his/her transliteration into a word processor to be printed out by the same printer as is used by Machine. The judge will then be given the printed texts and will have to work out which has been transliterated by Control and which by Machine. Here is a specimen of the sort of paragraph the judge would present.
mary: The next thing I want you to do is spell a word that
means a religious ceremony.
john: You mean rite. Do you want me to spell it out loud?
mary: No, I want you to write it.
john: I'm tired. All you ever want me to do is write, write, write.
mary: That's unfair, I just want you to write, write, write.
john: OK, I'll write, write.
The point of this simplified test is that the hard thing for a machine to do in a Turing Test is to demonstrate the skill of repairing typed English conversation-the interactional stuff is mostly icing on the cake. The simplified test is designed to draw on all the culture-bound common sense needed to navigate the domain of error correction in printed English. This is the only kind of skill that can be tested through the medium of the typed word but it is quite sufficient, if the test is carefully designed, to enable us to tell the socialized from the unsocialized.32 It seems to me that if a machine could pass a carefully designed version of this little test all the significant problems of artificial intelligence would have been solved-the rest would be research and development.
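The contrast with the spell-checker can be illustrated with a small hypothetical sketch: a word-by-word dictionary check-the frozen-cross-section kind of knowledge-finds nothing wrong with a homophone error of exactly the kind the specimen dialogue trades on.

```python
# Hypothetical sketch: a dictionary-based spell check applied to a
# homophone error. Every word is individually well-spelled, so nothing
# is flagged; only a reader who understands the conversation can see
# that the sentence needs repair.
dictionary = {"no", "i", "want", "you", "to", "write", "rite", "right", "it"}

sentence = "no i want you to right it"   # context demands "write", not "right"
flagged = [w for w in sentence.split() if w not in dictionary]
print(flagged)   # nothing flagged: the error is invisible word-by-word
```

Choosing among "write," "rite," and "right" requires knowing what Mary and John are up to, which is just the encultured knowledge the test probes.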
What I have tried to do in this paper is divide up our knowledge and skills. The way I have done this is based on two types of human action. I believe the division is fundamental and lies at the root of the difference between "tacit" knowledge, knowledge that appears to be located in society, and "formal" knowledge which can be transferred in symbolic form and encoded into machines and other artifacts. Those who come out of a Wittgensteinian, ethnomethodological, or sociology of scientific knowledge tradition have, whether they know it or not, a problem with the notion of the formal or routine. In fact, everyone has this problem but it is less surprising that others have not noticed it. I think that the idea of behavior-specific action is at least the beginning of a solution.
My claim is that this way of looking at human action allows us to understand better the shifting patterns and manner of execution of various human competences. It also helps us understand the manner and potential for the delegation of our competences to symbols, computers, and other machinery. It also helps us see the ways in which machines are better than humans: it shows that many of the things that humans do, they do because they are unable to function in the machine-like way that they would prefer. It also shows that there are many activities where the question of machine replacement simply does not arise, or arises as, at best, an asymptotic approximation to human abilities. We notice, then, what humans are good at and what machines are good at, and how these competences meet in the changing pattern of human activity. It seems to me that the analysis of skill, knowledge, human abilities, or whatever one wants to call the sets of activities for which we use our minds and bodies, must start from this distinction and that understanding how much of what we do can be taken over by machines rests on understanding the same distinction.
Alan Turing in one of his famous papers said that by the end of the century he expected that "the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."33,34 There are at least four ways in which we might move toward such a state of affairs: i) Machines get better at mimicking us; ii) We become more charitable to machines; iii) We start to behave more like machines; iv) Our image of ourselves becomes more like our image of machines. Let us consider each of these in turn.
Unless machines can become members of our society they can appear to mimic our acts only by developing more and more ramified behaviors. This process is a good thing so long as it is not misconstrued. Ramification of behaviors makes for new and better tools, not new and better people. The rate at which intelligent tools can be improved is not easy to predict. The problem is analogous to predicting the likelihood of the existence of life on other planets. There are a large number of other planets but the probability that the conditions for life exist on a planet is astronomically small. Where two large numbers have to be compared a very small error can make a lot of difference to the outcome. The two large numbers in the case of intelligent machines are the exponential growth in the power of machines and the exponential increase in the number of rules that are needed to make behavior approximate to the appearance of regular action. My guess is that progress is slowing down fast, but the model sets no limit to asymptotic progress.
As we become more familiar with machines we repair their deficiencies without noticing-in other words we make good their inabilities in the same charitable way as we make good the inabilities of the inarticulate humans among us. Already the use of words and general educated opinion has changed sufficiently to allow us to talk of, say, calculators in the fashion that Turing predicted would apply to intelligence in general. We speak of calculators as being "better at arithmetic than ourselves" or "obviating the need for humans to do arithmetic" though close examination shows that neither of these sentiments is exactly right.35 If we generalise this process of becoming more charitable, we will lose sight of our special abilities.
In the US, it is usual practice amongst some large firms to send entire manuals for online translation on a mainframe computer. It works well. The manual writers are trained to use short sentences, cut out all ambiguities from the text (by repeating nouns instead of using pronouns for example) and to use a limited, powerful vocabulary. Europeans have shown little of this discipline.36
For "Europeans" there are, of course, no significant ambiguities in the texts they write, it is just that the texts are not written in a behavior-specific (1984-like) way. In what is usually counted as good writing, different words are used to represent the same idea and different ideas are represented by the same words. The parallel with regular action is complete. The problem is that translation machines cannot participate in the action of writing unless it is behavior-specific action. If we adjust our writing style to make it universally behavior-specific then mechanized translators will be as good as human translators and we are more likely to come to speak of machines "thinking" just as Turing predicted. What is true of translation is true of all our actions. The theory of action outlined above allows that the way we execute our actions can change. Change in the way we act is not necessarily bad even when it is change from regular to behavior-specific action, but we want to continue to be free to decide how best to carry out our acts. We do not want to lose our freedom of action so as to accommodate the behavior of machines.
If we think of ourselves as machines, we will see our departures from the machine-like ideal as a matter of human frailty rather than human creativity. We need to counter the tendency to think of humans as inefficient machines. There is a difference between the way humans act and the way machines mimic most of those acts. I have argued that machines can mimic us only in those cases where we prefer to do things in a behavior-specific way. Whether we come to speak of machines thinking without fear of contradiction will have something to do with whether this argument is more or less convincing than the arguments of those who think of social life as continuous with the world of things.
Intelligent machines are among the most useful and interesting tools that we have developed. But if we use them with too much uncritical charity, or if we start to think of ourselves as machines, or model our behavior on their behavior, or concentrate so hard on our own boundary-making and maintaining practices that we convince ourselves there is nothing to boundaries except what we make of them, we will lose sight of what we are.
I would like to thank Güven Güzeldere for suggestions and editorial assistance on this essay.
This paper is adapted from Harry M. Collins, "The Structure of Knowledge," Social Research, 60 (Spring, 1993) 95-116. The section on the Turing Test also contains elements taken from the final chapter of my book, Artificial Experts: Social Knowledge and Intelligent Machines (Cambridge, MA: MIT Press, 1990) and from another of my papers: "Embedded or Embodied: Hubert Dreyfus's What Computers Still Can't Do," Artificial Intelligence (forthcoming).
1 The mundane applications of Hollywood's brilliant scientific breakthroughs are depressing.
2 This is not just a matter of necessary conditions for tennis playing; we don't want to say that tennis playing knowledge is contained in the blood, even though a person without blood could not play tennis. Nor do we want to say that the body is like a tool and that tennis-playing knowledge is contained in the racket (after all, we can transfer a tennis racket with hardly any transfer of tennis-playing ability).
3 See, for example Hubert Dreyfus, What Computers Can't Do (1972; New York: Harper and Row, 1979).
4 The first is so well embedded in society I need not provide a reference for it; the second is from Anthony Burgess's A Clockwork Orange (London: Heinemann, 1962).
5 But it is the easiest to explain so I have stayed with this dimension throughout the paper. I argue elsewhere that skilled speakers of a language are able to make all kinds of "repairs" to damaged strings of symbols that the Chinese Room would not. For discussion of these other ways in which the social embeddedness of language shows itself, see Collins, and Harry M. Collins, "Hubert Dreyfus, Forms of Life, and a Simple Test For Machine Intelligence," Social Studies of Science, 22 (1992) 726-39.
6 Harry M. Collins, "The Tea Set: Tacit Knowledge and Scientific Networks," Science Studies, 4 (1974) 165-86; Harry M. Collins, "The Seven Sexes: A Study in the Sociology of a Phenomenon, Or the Replication of Experiments in Physics," Sociology, 9 (1975) 205-24; Harry M. Collins, Changing Order: Replication and Induction In Scientific Practice (London and Beverly Hills: Sage, 1985). For the origin of the term "Tacit Knowledge," Michael Polanyi, Personal Knowledge (London: Routledge, 1958).
7 This way of thinking is deeply rooted in the later philosophy of Wittgenstein. For example, see Ludwig Wittgenstein, Philosophical Investigations (Oxford: Blackwell, 1953); and David Bloor, Wittgenstein: A Social Theory of Knowledge (London: Macmillan, 1983).
8 Dreyfus; Lucy Suchman, Plans and Situated Action: The Problem of Human-Machine Interaction (Cambridge: Cambridge UP, 1987); see also Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design (New Jersey: Ablex, 1986).
9 See, for example, H.M. Collins, R.H. Green, and R.C. Draper, "Where's the Expertise: Expert Systems as a Medium of Knowledge Transfer," Expert Systems 85, ed. Martin J. Merry (Cambridge, UK: Cambridge UP, 1985) 323-334.
10 For the first introduction of the distinction between "regular action" and "behavior-specific action," see Collins, Artificial Experts. For an extended philosophical analysis which analyses types of action into further sub-categories see Harry M. Collins and M. Kusch, "Two Kinds of Actions: A Phenomenological Study," Philosophy and Phenomenological Research (forthcoming).
11 For an analysis of the way scientific order is changed, see Collins, Changing Order.
12 Kenneth C. Kusterer, Know-How on the Job: The Important Working Knowledge of "Unskilled" Workers (Boulder: Westview Press, 1978).
13 Note that the Taylorist ideal usually is a caricature, but this does not mean it cannot be found under special circumstances.
14 For example, it works for certain limited types of economic behavior or the behavior of people in specially arranged laboratory conditions. This kind of science is especially appropriate where behavior-specific action is enforced as in F. W. Taylor's "scientific management."
15 For a full analysis of what pocket calculators can and cannot do, and how to see what they do as behavior-specific action, see Collins, Artificial Experts.
16 So long as the behavioral repertoire we are to master is not too complicated.
18 See especially Hubert Dreyfus and Stuart Dreyfus, Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (New York: Free, 1986).
19 Dreyfus tries to sidestep the problem by dividing knowledge domains into two types, one of which is "formalisable." Unfortunately it is not possible to have a formalisable domain under the Wittgensteinian treatment which Dreyfus prefers. My "behavior-specific action" makes what Dreyfus calls a formal domain possible. Our theories are largely co-extensive until we try to predict how devices that do not use formal rules will perform. My theory leads one to be far more pessimistic about, say, neural nets. (See Harry M. Collins, "Will Machines Ever Think?" New Scientist, 1826 (June 20, 1992) 36-40.)
20 These tasks, incidentally, can be done without difficulty by existing and conceivable talking computers, showing again that the psychological and philosophical dimensions of the skill problem need to be pulled apart.
21 The model does not limit the role of machines to mimicking simple-minded repetitious morons; the ramifications of behavior-specific actions can be such that they approach regular action asymptotically. One of the programs of observation and research that the theory suggests, however, is to break down the performance of machines into its constituent behaviors. This program applies as much to neural nets and other exotic devices as to more straightforward programs.
22 Along the way we have established that self-conscious/internalized also does not map onto unskilled/skilled.
23 For a more detailed working out of the behavior-specific elements of arithmetic, see Collins, Artificial Experts.
24 I am now assuming away the elements of non-behavior-specificity that have to do with the shape of the pieces, the method of making moves, and so forth. I am thinking about long series of games using the same apparatus.
25 One has to be careful with one's locutions. We must not say that a machine "acts," and therefore we cannot say that a chess machine engages in behavior-specific action. Machines can only mimic behavior-specific action. Chess machines mimic the behavior-specific action of the first few moves of the skilled human chess player's game.
26 For argument's sake we will allow physicists to discover and learn to manipulate a new generation of sub-sub-atomic particles so small that 10^200 of them could fit into a shoe box.
27 Even these special cases are historically specific.
28 I simplify here. See footnote 5, above.
29 I discuss the protocol of the Turing Test at some length in Artificial Experts.
30 The Turing Test is usually thought of as involving language, but there is no reason to stop at this. We could, for example, use the test with washing machines. To find out if a washing machine was intelligent, we would set it alongside a washer person, concealing both from a judge. The judge would interrogate the two by passing in sets of objects to be washed and examining the response. (We must imagine some mechanism for passing the objects into the machine and starting the cycle.) An unimaginative judge might pass in soiled clothes and, examining the washed garments, might be unable to tell which had been washed by machine and which by human. A more imaginative interrogator might pass in, perhaps, some soiled clothes, some clothes that were ripped and soaked in blood, some clothes with paper money in the pockets, some valuable oil paintings on canvas, or whatever, and examine the response to these. Again, embeddedness in social life is needed if the appropriate response is to be forthcoming.
31 This section of the paper is taken from Collins, "Embedded or Embodied."
32 It is worth noting, for the combinatorially inclined, that a look-up table exhaustively listing all corrected passages of about the above length-300 characters-including those for which the most appropriate response would be "I can't correct that," would contain 10^600 entries, compared to the, roughly, 10^125 particles in the universe. The number of potentially correctable passages would be very much smaller of course but-I would guess-would still be beyond the bounds of brute strength methods. Note also that the correct response-of which there may be more than one-may vary from place to place and time to time as our linguistic culture changes.
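The footnote's arithmetic can be checked directly. Assuming an alphabet of about 100 distinct characters (an assumption; the source does not state the alphabet size), the space of 300-character passages has exactly 10^600 members:

```python
# Check of the footnote's order-of-magnitude claim. Assumption (not
# stated in the source): roughly 100 distinct characters -- upper- and
# lower-case letters, digits, punctuation, space.
alphabet = 100
length = 300

table_entries = alphabet ** length        # one table entry per possible passage
exponent = len(str(table_entries)) - 1    # decimal exponent of that count
print(exponent)        # 600: a 10^600-entry table, as the footnote states

# Against the footnote's ~10^125 particles in the universe, the table is
# larger by a factor of about 10^(exponent - 125).
print(exponent - 125)  # 475
```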
33 The following section is adapted from Chapter 15 of Artificial Experts.
34 Alan Turing, "Computing Machinery and Intelligence," Mind, LIX No. 236 (1950) 433-460. Reprinted in Douglas Hofstadter and Daniel Dennett, eds., The Mind's I (Harmondsworth: Penguin, 1982) 53-66, 57.
35 See Collins, Artificial Experts, ch. 4 and 5.
36 Derek J. Price, "The Advantages of Doubling in Dutch," The Guardian, 20 April 1989, 31.