August 17, 1994
Submitted to Human Communication Research
A number of recent studies performed under the Social Responses to Communication Technology (SRCT) paradigm (e.g., Nass & Steuer, 1993; Nass, Steuer, Henriksen & Dryer, 1994; Nass, Steuer & Tauber, 1994; Nass, Steuer, Tauber & Reeder, 1993) demonstrate that individuals apply social rules and social expectations to computers. That is, individuals use the same social rules to assess and respond to the performance of computers that they use when assessing or responding to other individuals, even when they are fully aware that they are interacting with machines (Nass & Steuer, 1993).
For example, individuals apply politeness norms to computers: Users asked by a self-praising computer about its own performance provide more positive responses than do those asked by a different computer or a paper-and-pencil questionnaire (Nass, Wade, Malick & Reiss, 1994). This result is fully consistent with what is found in human-human interaction (e.g., Finkel, Guterbock & Borg, 1991; Jones, 1964; Kane & Macaulay, 1993; Singer, Frankel & Glassman, 1983).
Nass, Steuer, Henriksen, and Dryer (1994) demonstrate that individuals use social rules concerning the assessment of "self-praise" and "other-praise" by computers. Specifically, after a computer has performed a task, individuals perceive the performance as superior when it is praised by a different computer rather than by the computer itself, consistent with the social rule "other-praise is more valid than self-praise" (Joshi & Rai, 1987; Jones, 1990; Meyer, Mittag & Engler, 1986; Wilson & Chambers, 1989). A computer that praises itself or criticizes other computers is also perceived to be less friendly than a computer that praises other computers or criticizes itself (Nass, Steuer, Henriksen, & Dryer, 1994), again precisely what one finds in human-human interaction (Amabile, 1983; Amabile & Glazebrook, 1981; Folkes & Sears, 1977; Powers & Zuroff, 1988).
Even gender stereotypes are applied to computers: Praise from a male-voiced computer is perceived as more friendly, and has a greater influence on the perception of the object of the praise, than is praise from a female-voiced computer (Green, 1993), consistent with most literature on gender differences (Ashmore, 1981; Basow & Silberg, 1987; Eagly & Wood, 1982; Frieze et al., 1978; Zarate & Smith, 1990). Female-voiced computers are also perceived as better teachers of love and relationships than are male-voiced computers, consistent with gender stereotypes.
Studies have also shown that individuals can be made to feel that computers are their teammates. Nass, Fogg, and Moon (1994) demonstrated that when individuals are told they are teammates of a computer, they perceive the computer's performance to be better, and the computer to be friendlier and more cooperative, than do individuals who are told they are working with a computer that is independent of them. Furthermore, individuals in the teammate condition cooperate more, conform more with the computer's suggestions, and perceive the computer as more similar to themselves. These results parallel Spears' (1989) findings in human-human interaction.
On the one hand, the breadth of these findings suggests that the application of rules of human-human interaction to human-computer interaction is not an isolated or rare phenomenon. Indeed, these responses seem ineradicable, commonplace, and relatively easy to generate among both experienced and novice computer users. On the other hand, it seems inappropriate for individuals to apply social rules to computers; after all, the intuitive bases for the social rules, such as emotional harm (the politeness study), suspect motives (the self-other study), differential socialization (the gender study), and feelings of Gemeinschaft (the teammate study), appear not to be relevant to computers. How, then, can we explain this pervasive phenomenon?
Three competing explanations have been offered for the application of social rules to computers: deficiency (e.g., Turkle, 1984; Winograd & Flores, 1987; Zuboff, 1988); parasocial interaction (e.g., Rafaeli, 1985); and social interaction (e.g., Nass & Steuer, 1993; Nass, Steuer, Henriksen & Dryer, 1994; Nass, Steuer & Tauber, 1994). Let us consider each, and then describe a critical test of these competing explanations.
Explanation 1: Deficiency
Perhaps the most common explanation for the application of social rules to computers is based on deficiency. Under this view, individuals who respond socially are unaware that it is inappropriate to apply social rules to computers, because of either youth (Turkle, 1984), ignorance (Winograd & Flores, 1987), or socio-emotional limitations (Turkle, 1984; Weizenbaum, 1976). A related explanation is that individuals are not clever enough to cope with the complexity (or the unprecedented nature) of the situation: They apply social rules because they cannot think of anything else to do (e.g., Dennett, 1991a, 1991b; Barley, 1988; Winner, 1977). The deficiency explanation can be ruled out for the aforementioned experiments, because the subjects were all college students who were experienced computer users. Furthermore, when debriefed, the subjects in all these studies said that they did not apply social rules to computers and that they did not think it would be appropriate to do so. These findings suggest that social responses to communication technology are unintentional and automatic (Nass, Steuer, Henriksen & Dryer, 1994).
Explanation 2: Parasocial Interaction
The concept of parasocial interaction has proven valuable in explaining individuals' social behavior toward media. Numerous definitions of "parasocial interaction" have appeared in the literature, beginning with Horton and Wohl's (1956) psychiatric observations (see also Houlberg, 1984; Rafaeli, 1990; Rubin & Perse, 1987; Rubin & Rubin, 1985), but they all have a single idea at their core: Parasocial interaction occurs when individuals interact with a mediated representation of a person as if the person were actually present. That is, individuals behave as if they are having an interaction with a source when in fact they are only relating to the medium. For example, some individuals behave as if they are having a two-way conversation with a television news anchorperson while watching the person on television (e.g., Houlberg, 1984; Levy, 1979; Rubin, Perse, & Powell, 1985); these individuals ignore the limitations of the mediated representation and behave as if the picture of the source is the actual source. Similarly, in cases in which individuals confuse an actor or actress and the role he or she plays (Horton & Wohl, 1956; Rubin, 1985; Rubin & Perse, 1987), these individuals are treating the performer, who is simply the medium by which the character is represented, as the actual source of the message; a similar effect has been found with "Wizards" in the computer game "Dungeons and Dragons" (Rafaeli, 1985).
Parasocial interaction can be used to explain social responses toward computers by arguing that the computer is merely a medium by which the programmer, the source, is represented. Under the parasocial interaction perspective, it is not unreasonable to apply social rules and attributions toward computers when they are viewed as programmers; the mistake is viewing the medium (the voice of the computer) as the source (the programmer). Human-computer interaction, then, is a mediated interaction, even if the individual behaves otherwise.
In communication research, Rafaeli (1985, 1990) has been the leading exponent of the idea that interactivity must be understood as a form of parasocial interaction. This perspective appears in other disciplines. Theorists in cognitive science argue that technology inherently reflects its creator (Dennett, 1988; Heidegger, 1977); hence, technology is responded to as a proxy for its creator (Cosmides, 1989; Dennett, 1988, 1991b; Searle, 1981). Similarly, computer scientists such as Laurel (1993) argue that computers are inherently metaphorical representations of human archetypes, which are in turn responded to as human.
Perhaps the best-known example of parasocial interaction with technologies is Heider and Simmel's (1944) classic experiment. They demonstrated that when geometric objects move around the screen, individuals describe the movements as if the objects had intentions and motivations. The authors argued that their subjects were responding to what the creator of the image wanted them to experience; that is, the individuals were engaging in a parasocial interaction with the human creator, not with the screen or the objects on it.
Nass and Steuer (1993; see also Green, 1993) attempted to dismiss the parasocial explanation for social responses to computers by asking subjects whether they thought about the programmer while participating in the experiment. Across a number of studies, virtually all subjects denied any thought of the programmer. However, asking this question assumes that individuals are conscious of their parasocial behavior; a number of studies suggest that this may not be the case (e.g., Cosmides, 1989; Horton & Wohl, 1956; Levy, 1979; Rafaeli, 1985). A study that tests for evidence of subconscious parasocial behaviors in human-computer interaction is required to definitively confirm or rule out the parasocial explanation.
Explanation 3: Social Interaction
In contrast to the parasocial explanation, Nass and colleagues, in a number of studies (see Nass, Steuer, & Tauber, 1994, for a review), argue that human-computer interaction is unmediated and directly social; that is, individuals respond to computers as a source in much the same way that individuals respond to other human beings as a source. In this unmediated model, individuals' social attributions are made directly to the machine. The programmer is psychologically irrelevant, in contrast to the parasocial model, in which the programmer is too relevant (i.e., individuals ignore the medium).
The Nass et al. argument hinges on the idea that when presented with physical cues related to fundamental human characteristics, individuals automatically respond socially. Among the primary cues that appear to be important are the use of language (Brown, 1988; Turkle, 1984), interactivity (Rafaeli, 1985), the filling of roles traditionally held by humans (Nass, Lombard, Henriksen, & Steuer, in press), and voice (Amalberti, 1993; Nass & Steuer, 1993; Steuer, 1990; Turkle, 1984). It is argued that these kinds of cues automatically and mindlessly (Langer, 1989) invoke schemata associated with human-human interaction, without the need to psychologically construct a programmer. In the words of Byron Reeves, "humans are not evolved to respond to twentieth-century technology" (personal communication, 1994); hence, when presented with human-like cues, individuals are swayed by the human characteristics and do not process the fact that the machine is not human (for a related discussion, see Langer, 1989). That is, they automatically respond as if it were appropriate to use social rules, but not because they believe that they are interacting with a programmer.
In sum, the social model argues that the mistake users make is in thinking too little about the computer as a machine; the parasocial model argues that the users' mistake is in thinking beyond the computer to the programmer.
Critical Test of the Competing Models
The essential difference between the parasocial and the social interaction models lies in the answer to the question, "Psychologically, who or what are individuals responding to when they interact with a computer?" (Sundar, 1993). The parasocial explanation suggests that, psychologically, they are interacting with the programmer, forgetting that the computer and voice are mere representations. Conversely, the social explanation suggests that they are having a real interaction with the computer qua machine.
Because both the parasocial and the social explanations are equally useful for explaining previous experimental results on human-computer interaction, including those outlined at the beginning of the paper, the present study is designed to provide a critical test between these two competing explanations. If the parasocial explanation is correct, there should be virtually no difference in responses to a computer whether the computer is referred to as "a computer" or as "a programmer." That is, referring to the computer as "a programmer" should simply confirm the heuristic that subjects are supposedly using. Conversely, if the social explanation is correct, there should be clear differences depending on whether the computer is referred to as "a computer" or as "a programmer." That is, calling the computer "a programmer" would lead subjects to make attributions quite different from those they would have made had they been left to think of the computer as a computer. In sum, we can draw the following testable conclusion:
If human-computer interaction is parasocial, one should not find differences when referring to the computer as "a computer" or "a programmer." If human-computer interaction is social, one should find clear differences.
To manipulate perceptions of the computer, subjects in one condition were told that they were working with computers; in the other condition, they were told that they were working with programmers.
The second round was identical to the first, with the following differences: 1) the subjects used a third computer, labeled "Computer #2" (or "Programmer #2"), for the tutoring and evaluation sessions; 2) the topic of the tutoring and testing was "American Teenagers"; and 3) in the evaluation session, the subjects were told that they had answered four of the five questions incorrectly and were criticized for their performance.
The first set of questions asked, "For each word below, please indicate how well it describes the tutor computer (programmer) you just worked with." This was followed by a list of 39 adjectives and a ten-point response scale that ranged from "Not at All" to "A Lot." Some of the adjectives addressed the quality of performance (e.g., "helpful," "clever") and some addressed social evaluations (e.g., "friendly," "likeable"). The final two items asked about subjects' perceived similarity between the teaching and evaluating style of the computer (programmer) and their own teaching and evaluating style.
Effectiveness was an index of the following adjectives: articulate, creative, clever, insightful, intelligent, helpful, responsive, competent, and analytical. This index was also highly reliable (first session: α = .88; second session: α = .86).
The Playful index consisted of four items: childlike, entertaining, enthusiastic, and playful. This index was also highly reliable (first session: α = .77; second session: α = .80).
The Style Similarity index consisted of two items: subjects' perceived similarity to the computer's style of teaching and subjects' perceived similarity to the computer's style of evaluation. The index was of acceptable reliability (first session: α = .55; second session: α = .47).
The final index, Computeresque, included adjectives that are traditionally viewed as characteristics of computers as opposed to people: efficient, rational, objective, and unbiased. The index was highly reliable (first session: α = .75; second session: α = .83).
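The reliability figures reported for these indices are Cronbach's alpha, computed from the inter-item covariation of each adjective set. The following Python sketch illustrates the computation; the rating matrix below is hypothetical and is not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) rating matrix:
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 subjects rating 4 adjectives on a ten-point scale
ratings = np.array([
    [8, 7, 9, 8],
    [3, 4, 2, 3],
    [6, 6, 7, 5],
    [9, 8, 9, 9],
    [2, 3, 2, 4],
    [7, 6, 8, 7],
])
print(round(cronbach_alpha(ratings), 2))  # → 0.97
```

Items that track one another closely across subjects, as in this made-up matrix, yield an alpha near 1; the .47 to .88 values reported above reflect progressively noisier indices.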
For the first round, subjects in the computer condition perceived the computer to be significantly more friendly than did subjects in the programmer condition, F(1, 28) = 8.5, p < .01, η² = .23, as indicated in Figure 1. "Computer" subjects also found the computer to be significantly more effective than did "programmer" subjects (see Figure 1), F(1, 28) = 4.2, p < .05, η² = .13.
Consistent with the social model, "computer" subjects perceived the playfulness of the computer differently than did "programmer" subjects, F(1, 28) = 6.5, p < .05, η² = .19; specifically, "computer" subjects perceived the computer to be more playful. Similarly, "computer" subjects felt that the computer was more similar to their style of teaching and evaluating than did "programmer" subjects, F(1, 28) = 5.3, p < .05, η² = .16.
Consistent with the idea that the computer is perceived as a machine rather than as a surrogate for a person when it is labeled a "computer" (rather than a "programmer"), "computer" subjects perceived the computer to be more "computeresque" than did "programmer" subjects, F(1, 28) = 7.0, p < .05, η² = .20.
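The reported effect sizes are consistent with the F statistics: for a two-group comparison among 30 subjects the degrees of freedom are (1, 28), and η² can be recovered as F·df1 / (F·df1 + df2). A short Python check (the function name is ours; the F values are taken from the text):

```python
def eta_squared(f_stat: float, df_between: int, df_within: int) -> float:
    """Recover eta-squared (SS_between / SS_total) from an F statistic."""
    return (f_stat * df_between) / (f_stat * df_between + df_within)

# First-round statistics from the text, each with df = (1, 28)
for f_stat, reported in [(8.5, .23), (4.2, .13), (6.5, .19), (5.3, .16), (7.0, .20)]:
    recovered = round(eta_squared(f_stat, 1, 28), 2)
    print(f"F = {f_stat}: recovered eta^2 = {recovered}, reported = {reported}")
```

All five recovered values match the reported ones to two decimals, which also confirms that the numerator degree of freedom is 1.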
The second round is simply treated as a replication. However, given the within-subjects manipulation of praise (first round) and criticism (second round), there was an expected difference in subjects' perceptions of the valence of the evaluations: on a ten-point scale, the evaluation session was perceived as significantly more favorable in the first round than in the second round, M = 5.4 vs. M = 2.2, respectively, F(1, 29) = 51.1, p < .001.
A similar pattern confirming the validity of the social model over the parasocial model can be found in the second round. That is, "computer" subjects perceived the computer very differently than did "programmer" subjects, despite the minimal manipulation.
In the second round, "computer" subjects found the computer to be significantly more friendly than did "programmer" subjects, F(1, 28) = 5.1, p < .05, η² = .15. Similarly, "computer" subjects found the computer to be more effective than did "programmer" subjects, F(1, 28) = 3.0, p < .10, η² = .10.
Figure 1: Mean ratings of the computer on a normalized ten-point scale as a function of condition (Computer/Programmer) and valence of evaluation (Praise/Criticism).
Consistent with the social model, "computer" subjects found the computer to be more playful than did "programmer" subjects, F(1, 28) = 3.8, p < .10, η² = .12. Furthermore, the computer was perceived to be more similar to "computer" subjects than to "programmer" subjects, F(1, 28) = 4.8, p < .05, η² = .14.
There was no significant difference between the two conditions in how "computeresque" the computer was in the second round, although the direction of the means was consistent with the first round, F(1, 28) = 1.4, p > .10, η² = .05.
The foregoing results provide highly consistent evidence for the social model as compared to the parasocial model: nine of the ten tests (five indices in each of two rounds) reveal at least marginally significant differences between the two conditions.
Although it was not predicted by either model, "computer" subjects perceived the computer more favorably than "programmer" subjects did on every index. That is, on the four indices for which one pole is clearly more favorable -- friendly, effective, playful, and similarity to the subject -- "computer" subjects perceived the computer as clearly superior. Additional evidence of the perceived favorability of the computer for "computer" subjects is that, of the thirty adjectives that had a clear positive or negative valence in each round, the computer was rated more favorably by "computer" subjects on 29 adjectives in the first round and 28 adjectives in the second round. These are clearly significant differences (first round: χ²(1 d.f.) = 13.1, p < .001; second round: χ²(1 d.f.) = 11.2, p < .001).
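The adjective tallies can also be cross-checked with an exact two-sided sign test, which asks how likely 29-of-30 (or 28-of-30) splits would be if each adjective were equally likely to favor either condition. A sketch using only the Python standard library (the helper function is ours, not the authors'):

```python
from math import comb

def sign_test_p(k: int, n: int) -> float:
    """Exact two-sided sign test p-value: the probability of k or more
    successes in n fair-coin trials, doubled (capped at 1.0)."""
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

print(f"first round (29 of 30):  p = {sign_test_p(29, 30):.2e}")
print(f"second round (28 of 30): p = {sign_test_p(28, 30):.2e}")
```

Both exact p-values fall well below .001, consistent with the chi-square approximations reported in the text.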
Does this mean that parasocial interaction only occurs with mass media technologies, and that it cannot inform human-computer interaction? Not necessarily. As argued by Nass and Mason (1990), social responses to technologies could be a function of the values assumed by technologies on certain variables. The experiments reviewed at the beginning of the paper and the experiment reported here used a computer with particular attributes that may be relevant to cueing social, as opposed to parasocial, responses. For example, all of these experiments used computers that simulated a relatively high level of interactivity; that is, the computer's responses were at least ostensibly contingent upon the user's responses. For the most part, the existing mass media are not interactive. It is possible that a computer system that did not interact would elicit parasocial rather than social responses.
One consistent finding that was not hypothesized was that the label "computer" seemed to generate more favorable responses than did the label "programmer." A possible explanation lies in the expectations associated with each label. The label "programmer" may encourage subjects to expect the computer to be as friendly, effective, playful, and similar to them as the average person. When the computer fails to meet these expectations, subjects perceive it as flawed. Conversely, the label "computer" may encourage subjects to compare the computer to the average machine. Because this computer had a human voice, a seemingly complex interaction style, and high-level teaching and evaluation skills, it was ranked as unusually superior when compared to the average computer. If this explanation is correct, it leads to the provocative conclusion that users will be frustrated if the computer appears to be human but does not perform like a human, while users will be pleasantly surprised if the computer appears to be a machine but performs like a human. Of course, the latter condition could also lead to underestimation of the capabilities of the machine. Thus, the ideal situation would be to have the appearance of the machine match its performance.
In sum, we conclude that when individuals interact with (certain types of) computers, they are interacting with independent social entities rather than imagined human beings. The social responses that individuals exhibit when interacting with computers are apparently different from the parasocial responses that have been found for the mass media: Human-computer interaction is fundamentally social.
Thus, if human-television interaction is the concern of mass communication, then human-computer interaction should be the concern of interpersonal communication. The converging evidence of SRCT studies suggests that human-computer interaction is not an example of a mediated interaction wherein a technology (computer) is responded to as a mere medium (or channel) between the human source and the human receiver (Shannon & Weaver, 1962). Rather, users seem to be treating the computer as a separate entity worthy of its own social attributions. That is, psychologically, computers themselves are social actors, just like human beings.
Amabile, T. M., & Glazebrook, A. H. (1981). A negativity bias in interpersonal evaluation. Journal of Personality and Social Psychology, 18, 1-22.
Amalberti, R. (1993). User representations of computer systems in human-speech interaction. International Journal of Man-Machine Studies, 38 (4), 547-566.
Ashmore, R. D. (1981). Sex stereotypes and implicit personality theory. In D. L. Hamilton (Ed.), Cognitive processes in stereotyping and intergroup behavior (pp. 37-82). Hillsdale, NJ: Lawrence Erlbaum Associates.
Barley, S. R. (1988). The social construction of a machine: Ritual, superstition, magical thinking, and other pragmatic responses to running a CT scanner. In M. Lock & D. Gordon (Eds.), Knowledge and practice in medicine: Social, cultural, and historical approaches. Hingham, MA: Reidel.
Basow, S. A., & Silberg, N. T. (1987). Student evaluations of college professors: Are female and male professors rated differently? Journal of Educational Psychology, 79 (3), 308-314.
Brown, B. (1988). The human-machine distinction as predicted by children's para-social interaction with toys. Unpublished doctoral dissertation, Stanford University, Stanford, CA.
Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31, 187-276.
Dennett, D. C. (1988). Précis of the intentional stance. Behavioral and Brain Sciences, 11, 495-546.
Dennett, D. C. (1991a). Consciousness explained. Boston: Little, Brown.
Dennett, D. C. (1991b). True believers: The intentional strategy and why it works. In D. M. Rosenthal (Ed.), The Nature of Mind. New York: Oxford University Press.
Eagly, A. H., & Wood, W. (1982). Inferred sex differences in status as determinant of gender stereotypes about social influence. Journal of Personality and Social Psychology, 43(5), 915-928.
Finkel, S. E., Guterbock, T. M., & Borg, M. J. (1991). Race-of-interviewer effects in a preelection poll: Virginia 1989. Public Opinion Quarterly, 55(3), 313-330.
Folkes, V. S. & Sears, D. O. (1977). Does everybody like a liker? Journal of Experimental Social Psychology, 13, 505-519.
Frieze, I. H., Parsons, J. E., Johnson, P. B., Ruble, D.N., & Zellman, G. L. (1978). Women and sex roles: A psychological perspective. New York: W. W. Norton & Co.
Green, N. (1993, May). Can computers have genders? Paper presented at the annual conference of the International Communication Association, Washington, D.C.
Heidegger, M. (1977). The question concerning technology, and other essays. New York: Harper & Row.
Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57, 243-259.
Horton, D., & Wohl, R. R. (1956). Mass communication and para-social interaction: Observations on intimacy at a distance. Psychiatry, 19, 215-229.
Houlberg, R. (1984). Local television news audience and the para-social interaction. Journal of Broadcasting, 28, 423-429.
Jones, E. E. (1964). Ingratiation: A social psychological analysis. New York: Meredith Publishing Company.
Jones, E. E. (1990). Interpersonal perception. New York: W. H. Freeman.
Joshi, K., & Rai, S. N. (1987). Effect of mode of praise and physical attractiveness upon interpersonal attraction. Psycho-Lingua, 17(2), 107-113.
Kane, E. W. & Macaulay, L. J. (1993). Interviewer gender and gender attitudes. Public Opinion Quarterly, 57(1), 1-28.
Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.
Laurel, B. (1993). Computers as theatre. Reading, MA: Addison-Wesley.
Levy, M. R. (1979). Watching TV news as para-social interaction. Journal of Broadcasting, 23, 69-80.
Meyer, W. U., Mittag, W., & Engler, U. (1986). Some effects of praise and blame on perceived ability and affect. Social Cognition, 4(3), 293-308.
Nass, C. I. & Mason, L. (1990). On the study of technology and task: A variable-based approach. In J. Fulk & C. Steinfeld (Eds.), Organizations and communication technology (pp. 46-67). Newbury Park: Sage.
Nass, C. I., Lombard, M., Henriksen, L., & Steuer, J. (1992). Anthropocentrism and computers. Unpublished manuscript, Stanford University.
Nass, C., & Steuer, J. (1993). Voices, boxes, and sources of messages: Computers and social actors. Human Communication Research, 19, 504-527.
Nass, C., Fogg, B. J., & Moon, Y. (1994). [Computers as team-mates]. Unpublished raw data.
Nass, C., Steuer, J., & Tauber, E. (1994, April). Computers are social actors. Paper presented to CHI'94 conference of the ACM/SIGCHI, Boston, MA.
Nass, C., Steuer, J., Henriksen, L., & Dryer, D. C. (1994). Machines, social attributions, and ethopoeia: Performance assessments of computers subsequent to "self-" or "other-" evaluations. International Journal of Human-Computer Studies, 40, 543-559.
Nass, C., Steuer, J., Tauber, E., & Reeder, H. (1993, April). Anthropomorphism, agency and ethopoeia: Computers as social actors. Paper presented at INTERCHI'93 conference of the ACM/SIGCHI and the IFIP, Amsterdam, Netherlands.
Nass, C., Wade, D., Malick, M., & Reiss, S. (1994). Can computers generate ingratiation effects? Unpublished manuscript, Stanford University.
Powers, T. A. & Zuroff, D. C. (1988). Interpersonal consequences of overt self-criticism: A comparison of neutral and self-enhancing presentations of self. Journal of Personality and Social Psychology, 54(6), 1054-1062.
Rafaeli, S. (1985). Interacting with media: Para-social interaction and real interaction. Unpublished doctoral dissertation, Stanford University.
Rafaeli, S. (1990). Interacting with media: Para-social interaction and real interaction. In B. D. Ruben & L. A. Lievrouw (Eds.), Mediation, information, and communication: Information and behavior (Vol. 3, pp. 125-181). New Brunswick, NJ: Transaction Press.
Rubin, A. M. (1985). Uses of daytime television soap operas by college students. Journal of Broadcasting & Electronic Media, 29, 241-258.
Rubin, A. M., & Perse, E. M. (1987). Audience activity and television news gratifications. Communication Research, 14, 58-84.
Rubin, A. M., & Rubin, R. B. (1985). Interface of personal and mediated communication: A research agenda. Critical Studies in Mass Communication, 2, 36-53.
Rubin, A. M., Perse, E. M., & Powell, R. A. (1985). Loneliness, para-social interaction, and local television news viewing. Human Communication Research, 12, 155-180.
Searle, J. R. (1981). Minds, brains, and programs. In D. R. Hofstadter & D. C. Dennett (Eds.), The mind's I (pp. 353-372). Toronto: Bantam.
Shannon, C. & Weaver, W. (1962). The mathematical theory of communication. Urbana, IL: University of Illinois Press.
Singer E., Frankel, M. R., & Glassman, M. B. (1983). The effect of interviewer characteristics and expectations on response. Public Opinion Quarterly, 47, 84-95.
Spears, S. C. (1989). Controlling for exposure to intragroup communication: Effects of heterogeneous social perspectives and dependence in intragroup behavior and attitudes. Unpublished doctoral dissertation, Stanford University.
Steuer, J. (1990). It's only talk: Speech as a possible determinant of the social categorization of computers. Unpublished manuscript, Stanford University.
Sundar, S. S. (1993, May). Communicator, programmer, or independent actor: What are computers? Paper presented at the annual conference of the International Communication Association, Washington, D.C.
Turkle, S. (1984). The second self: Computers and the human spirit. New York: Simon and Schuster.
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. San Francisco: Freeman.
Wilson, W. & Chambers, W. (1989). Effectiveness of praise of self versus praise of others. Journal of Social Psychology, 129, 555-556.
Winner, L. (1977). Autonomous technology. Cambridge, MA: MIT Press.
Winograd, T., & Flores, F. (1987). Understanding computers and cognition: A new foundation for design. Reading, MA: Addison-Wesley.
Zarate, M. A., & Smith, E. R. (1990). Person categorization and stereotyping. Social Cognition, 8(2), 161-185.
Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.