SEHR, volume 4, issue 2: Constructions of the Mind
Updated July 22, 1995

artificial intelligence research as art

Stephen Wilson


information arts: art as an independent center of research

The art world is in crisis about new technologies and scientific research. There is much confusion about ways to respond. My paper, entitled "Dark and Light Visions: the Relation of Cultural Theory to Art That Uses Emerging Technologies,"1 written for the SIGGRAPH 93 Art Show Catalog, tries to disentangle this confusion. It notes three varying stances artists can take toward the new technologies:

Deconstruction as Art Practice: Postmodern and structuralist analyses of contemporary culture have provided concepts, themes, and methodologies for creating art works that examine and expose the texts, narratives, and representations that underlie contemporary life. Technology and its associated cultural contexts are prime candidates for theory-based analysis because they play a critical role in creating the mediated sign systems and contexts that shape the contemporary world. In this kind of practice artists learn as much as they can about working with the technologies so that they can function as knowledgeable commentators. In one typical strategy, artists become technically proficient so they can produce works that look legitimately part of the output of that technological world while introducing discordant subversive elements that reflect upon technology.

Continue Modernist Practice of Art with Modifications for the Contemporary Era: Many in the art world reject substantial portions of critical theory, upholding the validity and cultural usefulness of a modernist, specialized art discourse with claims to universal aesthetic truth. They believe art can have an avant-garde function, that individual vision and genius are still relevant, and that artists can transcend their particular niches in cultural discourse. The work of some artists with emerging technology can be viewed as continuous with the work of artists who work with traditional media. They see themselves engaged in specialized aesthetic discourse and nurture their personal sensitivity, creativity, and vision. They aspire to be accepted by the mainstream world of museums, galleries, collectors, and critics (or for some, cinema and video). They work on concerns and in modes developed for art in recent decades, such as realism, expressionism, abstraction, surrealism, and conceptual work. Indeed, they see themselves as essential to progress in art, and seek to cultivate the unique and "revolutionary" expressive capabilities of their new media and tools.

Invention and Elaboration of New Technologies and their Cultural Possibilities as Art Practice: This century is characterized by an orgy of research, discovery, and invention. Branches of knowledge, industries, social contexts, and technologies have appeared that could not have been anticipated. These developments are affecting everything from the paraphernalia of everyday life to ontological categories. Artists can establish a practice in which they participate at the core of this activity rather than as distant commentators or consumers of the gadgets, even while maintaining postmodern reservations about the meaning of the technological explosion.

As I have described in previous works,2 artists can participate in the cycle of research, invention, and development in many ways, by becoming researchers and inventors themselves. From the time of Leonardo until recently, the merger of scientific and artistic activity was not uncommon. Free from the demands of the market and the socialization of particular technical disciplines, artists can explore and extend the principles and technologies in unanticipated ways. They can pursue lines of inquiry abandoned because they were deemed unprofitable, outside established research priorities, or strange. They can integrate disciplines and create events that expose the cultural implications, costs, and possibilities of the new knowledge and technologies. The arts can become an independent center of research.

A Fundamental Category Error--Research and Development as Cultural Concerns: Art derived from all these stances will continue to prosper and coexist. Art as research, however, is the most undeveloped and ultimately most crucial to the culture. The implications of scientific and technological research are so far-reaching in their effects on both the practical and the philosophical planes that it is an error to conceive of them as narrow technical enterprises. The full flowering of research requires a much wider participation in the definition of research agendas and in the pursuit of research questions than those in technical fields alone can provide. It needs the benefit of the perspectives from many disciplines including the humanities and the arts, not just in commentary but in actual research.

This kind of artistic practice is not easy and its outcomes are uncertain. It requires that artists educate themselves enough to function non-superficially in the world of science and technology. They must learn the language and knowledge base of the fields of interest and be connected to both the art and technical worlds--for example, by joining the information networks of journals, research meetings, and trade shows. It asks artists to be willing to abandon traditional concerns, media, and contexts if necessary. We have called this approach Information Arts.

I have worked as an artist for the last thirteen years within this tradition of information arts and made the monitoring of scientific and technological research a basic element of my art practice. To that end, I have read research journals, attended scientific meetings, received a patent, acted as a developer, been a co-principal investigator in NSF projects on new technologies and education, and been a co-editor of Leonardo, the international journal of art & science published by MIT Press. I have identified several areas of emerging technology that I feel are important such as: telecommunications, artificial intelligence, hypermedia, body sensing, new biology, and material science. This paper focuses specifically on my work with artificial intelligence as an exploration of one kind of art as research practice.

artificial intelligence as an art inquiry

Artificial intelligence is one of these fields of inquiry that reaches beyond its technical boundaries. At its root it is an investigation into the nature of being human, the nature of intelligence, the limits of machines, and our limits as artifact makers. I felt that, in spite of falling in and out of public favor, it was one of the grand intellectual undertakings of our times and that the arts ought to address the questions, challenges, and opportunities it generated.

I undertook to learn what I could about the research agendas, accomplishments, and unresolved problems of the field. I read extensively, took courses, dived into LISP, attended meetings, and corresponded with researchers. I identified areas of research that seemed undeveloped and entertained questions derived from this contact with the field. I produced art installations that focused on issues in artificial intelligence research. The sections that follow present some of the results of this research both in art works produced and critical analysis of AI research issues.

The section that follows focuses on my own personal art research. The reader should note that there is a growing number of artists addressing AI issues and that this article is not a comprehensive review. For example, Harold Cohen,3 Peter Beyls,4 and artists represented in the journal Languages of Design 5 are exploring the possibilities of developing algorithms that enable computers to generate behaviors that would be called creative. Extensive work has been undertaken on automatic composition systems in music. Joseph Bates 6 and his associates are working on a graphic world called OZ in which autonomous objects interact with each other trying to achieve private desires, reacting emotionally to events that occur, and forming simple relationships with other creatures. Naoko Tosa 7 is working on a project called "Neuro Baby" in which a graphic creature responds to feelings it detects in a human's voice and synthesizes appropriate facial expressions. Artistic activity in this area will undoubtedly continue to increase.8

art installations exploring issues in artificial intelligence

In my interactive art installations over the last 13 years, the audience acts as co-creators in affecting the flow of events. These installations have been shown internationally in galleries and in specialized art and technology settings such as SIGGRAPH and SIGCHI art shows. They have explored a variety of issues in the relationship of emerging technologies to culture. This review focuses on those installations in which artificial intelligence was one of the focal issues. More details on the installations can be obtained from articles about this art work.9

The artificial intelligence techniques used in these installations are often primitive; there are no breakthroughs for long-standing research questions. Often the installations use low level Eliza-like tricks or simulations of programs that will perhaps one day exist. Nonetheless, the works do provide new perspectives on long-standing research issues and identify fruitful areas for future research.

IS ANYONE THERE was an interactive telecommunications event that explored issues such as the linkage of telecommunications and alienation, and the possibilities for contacts with artificial characters. The installation was shown at the 1992 SIGCHI art show in Monterey, California, the 1992 SIGGRAPH art show in Chicago, Illinois, and at the 1993 Ars Electronica Show in Linz, Austria. It won the Golden Nica Prize of Distinction in Ars Electronica's international competitions for interactive art.

Five locations in San Francisco were chosen on the basis of socioeconomic diversity and their significance to the life of the city. For a week a computer-based system with digitized voice capabilities systematically called pay phones in these spots, at a particular time every hour, 24 hours a day. It used intelligent response programming to engage passersby curious enough to answer a ringing pay phone in a short discussion and digitally recorded the conversations. The topics focused on the lives of those who answered and whatever they considered noteworthy at that particular location. At other times video was used to capture representative images of the locales of the phones and the people who typically spent time near them.

Stephen Wilson. Is Anyone There? (Interactive Telephone Installation). SIGGRAPH 92 Art Show, Chicago; SIGCHI 92 Art Show, Monterey. A computer called five San Francisco pay phones every hour on the hour, 24 hours a day for a week. Digital characters tried to engage those who answered in conversations about their lives.

An interactive video installation set up months later allowed viewers to explore life near these phones by using this bank of stored sound and digital video to selectively call up recorded responses and images. Visitors used voice recognition to interact with the computer. An interactive hypermedia program encouraged viewers to devise strategies for exploring this information--for example, using a spatial/temporal framework to choose to hear the record of the people who answered a financial district pay phone location during the midnight to 3 a.m. period. Typical digital video of the phone locales accompanied the recordings and digitally manipulated images became metaphors for information about the recorded calls--for example, dynamic colorizing used to indicate the depth to which a particular answerer went in a conversation.
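The retrieval structure can be pictured concretely. The following is a minimal sketch in Python, with invented field names and sample records rather than the installation's actual data, of how a spatial/temporal framework for selecting stored calls might be organized:

    from datetime import datetime

    # A sketch of the spatial/temporal indexing described above: stored
    # calls keyed by location and time, queried by place and hour window.
    # Field names and sample records are illustrative assumptions.

    CALLS = [
        {"location": "financial_district",
         "answered_at": datetime(1992, 5, 3, 1, 40),
         "audio": "fd_0140.wav", "video": "fd_locale.mov", "depth": 3},
        {"location": "mission",
         "answered_at": datetime(1992, 5, 3, 14, 5),
         "audio": "mi_1405.wav", "video": "mi_locale.mov", "depth": 1},
    ]

    def query(location, start_hour, end_hour):
        """Return calls from one location answered within an hour window,
        e.g. the midnight-to-3 a.m. calls at the financial district phone."""
        return [c for c in CALLS
                if c["location"] == location
                and start_hour <= c["answered_at"].hour < end_hour]

    if __name__ == "__main__":
        for call in query("financial_district", 0, 3):
            print(call["audio"], call["video"], "depth:", call["depth"])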

The installations challenged the safety of passive art viewership by shifting occasionally into real-time mode and automatically placing live calls to the pay phones, linking the viewer with a real person on the street at the location on the screen. The event explored a variety of conceptual issues:

Telecommunications & Telematic Culture

Interactivity, Art Audiences, and the Safety of Art Spectatorship

Hypermedia and the Structure of Information

Artificial Characterization & Intelligence

Telecommunications & Telematic Culture: The telephone system is an artistically under-explored feature of contemporary culture. Telephones allow almost instantaneous linkage between people anywhere on earth. They enable new kinds of communication including linkages between people who wouldn't ordinarily know each other and the creation of unprecedented kinds of social interchanges such as wrong numbers, answering machines, telemarketing, and the like. IS ANYONE THERE explores both the concrete technological possibilities and the poetry of using pay phones to overcome anomie in contemporary mass society.

Interactivity, Art Audiences, and the Safety of Art Spectatorship: This event challenges two common features of art viewing: the typically elite nature of high culture consumption and the passivity of much art appreciation. All those on the street who answer the ringing pay phones--many of whom would be unlikely to attend any conventional art institutions--become participants in this art event. The drama of their dialogue with the computer system is an essential aesthetic focus. In addition, the event systematically questions the safety of passive art viewing by requiring viewers to generate strategies to search the images and sounds of the stored calls. More radically, the event periodically shifts the viewer in the gallery from the safety of spectator to the challenging position of full participant. It places live calls to the phone that the viewer had been vicariously experiencing and demands that the viewer engage in a real conversation with a live stranger.

Stephen Wilson. Is Anyone There? Digital characters embedded in an automated computer telephone calling program varied the gender of their voice and the conversational strategy they used in an attempt to engage those who answered the pay telephones in conversation.

Hypermedia and the Structure of Information: New computer systems enable the storage and non-linear retrieval of vast amounts of information including text, image, video, and sound. These systems, which can dynamically adapt to the idiosyncratic inquiry styles of each individual user, raise questions about the most fruitful ways to organize, interrelate, and access new kinds of multimedia information spaces. IS ANYONE THERE explores an innovative kind of "hypermedia" art in which the structure of information and the navigational interface design are as much the artistic focus as the images and sounds.

Artificial Characterization & Intelligence: Many fears and hopes are raised about the possibilities of computers simulating the full range of human intelligence and characterization. This event investigates some aspects of these possibilities by exploring how self-revealing those who answer the phone will be with the various digital characters programmed into the computer device and how gallery observers feel about these exchanges.

The calling program lacked any real language parsing capabilities; it did not understand what the people said. It did have to be sufficiently engaging, however, that answerers would continue with the increasingly personal discussion. To accomplish this, I incorporated information about typical conversations. I studied the phrasing, pacing, and repartee that was typical of telephone conversations. I tried to make the conversation believable as an interchange. Although most answerers seemed to recognize the canned nature of the calling voice, a significant minority seemed convinced that the telephone computer was listening and understanding.
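The flavor of such a scripted exchange can be suggested with a minimal sketch in Python; the prompts, acknowledgements, pacing, and interface functions below are illustrative assumptions, not the installation's actual dialogue or code:

    import random
    import time

    # A sketch of an Eliza-like calling script: no parsing or understanding,
    # just canned prompts, acknowledgements, and conversational pacing.
    # The prompts, pacing, and interface functions are illustrative
    # assumptions, not the installation's actual dialogue.

    PROMPTS = [
        "Hello. Is anyone there?",
        "What brings you to this corner today?",
        "What do you notice about this place that other people miss?",
        "If you could change one thing about your life, what would it be?",
    ]

    ACKNOWLEDGEMENTS = ["I see.", "Mmm. Go on.", "That is interesting."]

    def run_call(get_reply, speak, pause=1.5):
        """Walk the answerer through increasingly personal prompts.
        get_reply() returns the (unparsed) utterance, or '' on a hang-up;
        speak(text) would play a digitized voice in the installation."""
        for prompt in PROMPTS:
            speak(prompt)
            reply = get_reply()
            if not reply:              # silence or hang-up ends the call
                return
            time.sleep(pause)          # pacing stands in for real listening
            speak(random.choice(ACKNOWLEDGEMENTS))

    if __name__ == "__main__":
        run_call(get_reply=input, speak=print, pause=0.2)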

EXCURSIONS IN EMOTIONAL HYPERSPACE was an interactive installation that explored issues including artificial characterization and motion in a space as a way of communicating with computers. It was shown at the NCGA Art Show in San Jose, California in 1986.

Visitors entered a room inhabited by four mannequins dressed to represent four different characters. Each held a particular pose; each represented a different fictional person who had a specific set of attitudes. One of the characters was angry and rebellious; another was happy to be part of the event; another was reluctantly submissive; and another was philosophical and tried to take the big view.

Each mannequin wanted to tell its story and express its perspectives on being part of the event. Visitors were invited to walk around the room and look more closely at the mannequins. Standing in front of a mannequin caused it to start talking, in a digitized voice, about how it felt about being there. Continuing to stand there caused it to go deeper into those perspectives. Walking away caused it to stop talking. Visitors could direct this small ensemble by their physical movements. A computer read motion sensors and controlled speakers located inside each mannequin.

Stephen Wilson. Excursions in Emotional Hyperspace (Artificial Character Discussion Installation). CADRE Art Show, San Jose, 1986. Four computer controlled mannequins tried to speak their opinions and reacted to each other as they were activated by the movements of visitors.

Stephen Wilson. Excursions in Emotional Hyperspace. A mannequin went deeper into its feelings as long as the visitor stood nearby.

Walking to another mannequin, however, did not cause it simply to start anew. The new mannequin would first comment, from its own perspective, on what the previous mannequin had just said and would then move into its own remarks. The mannequins seemed to be actively listening to each other and tracking the conversation.

Visitors thus had the experience of encountering artificial characters with intelligence and points of view although this system lacked any real AI capabilities. The mannequins could not really recognize voice or parse the words of their fellow characters. All possible combinations of motion sequences were predetermined and all appropriate comments on previous statements were prerecorded. Nonetheless, the experience for visitors did simulate contacts with artificially intelligent characters and encouraged thought about these future possibilities.
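A minimal sketch in Python suggests how prerecorded clips might be selected without any real listening or parsing; the character names, clip identifiers, and reaction table are illustrative assumptions, not the installation's actual materials:

    # A sketch of clip selection by lookup: every reaction to every other
    # character is prerecorded in advance, and "depth" simply indexes how
    # far a character has gone into its monologue.

    MONOLOGUES = {
        "angry":       ["angry_1.wav", "angry_2.wav", "angry_3.wav"],
        "happy":       ["happy_1.wav", "happy_2.wav", "happy_3.wav"],
        "submissive":  ["submissive_1.wav", "submissive_2.wav"],
        "philosopher": ["philosopher_1.wav", "philosopher_2.wav"],
    }

    # Canned "reactions": what each character says about whoever spoke last.
    REACTIONS = {
        ("angry", "happy"): "angry_mocks_happy.wav",
        ("happy", "angry"): "happy_soothes_angry.wav",
        ("philosopher", "submissive"): "philosopher_on_submission.wav",
        # ... every ordered pair would be prerecorded in advance
    }

    def clips_for(current, previous, depth):
        """Clips to play when a visitor stops at `current` after having
        listened to `previous`, at the given monologue depth."""
        clips = []
        if previous and previous != current:
            clips.append(REACTIONS.get((current, previous)))
        monologue = MONOLOGUES[current]
        clips.append(monologue[min(depth, len(monologue) - 1)])
        return [c for c in clips if c]

    if __name__ == "__main__":
        print(clips_for("angry", previous="happy", depth=1))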

TIME ENTITY was an interactive computer graphic animation and sound installation that modeled an artificial creature. It was invited to be part of the CADRE computer art festival in San Jose, California in 1983 and was later also shown in the gallery of the Art Department at San Francisco State University. Altogether it ran for a month. California artists Matthew Kane, David Lawrence, and Eric Cleveland collaborated with me in its creation.

AI, focusing on the simulation of human intelligence, inevitably raises questions about the nature of non-human intelligence. As an artist I wanted to explore the creation of fictional intelligent species. I studied research on interspecies communication and SETI (the Search for Extraterrestrial Intelligence). I began to search for models of non-human intelligence. Simultaneously, I had been experimenting with the clock-calendar technology that had become available for microcomputers. I was fascinated by the capability of designing programs that knew the exact time and date much more precisely than humans could.

I decided to create a computer simulated creature that was obsessed with time. It would "know" how long it had been alive and be obsessed with its future and its "mortality." It would have intrinsic genetic predispositions to change as it grew older. Similar to biological organisms, it would have monthly, diurnal, and heartbeat length rhythms. It would interact with human visitors around this issue of time.

Stephen Wilson. Time Entity. CADRE Art Show, San Jose, 1983. Visitors interacted with an artificial creature which was obsessed with time. The entity lived in accordance with heartbeat and diurnal rhythms and a focus on life path issues of birth, development, and death.

Stephen Wilson. Time Entity. CADRE Art Show, San Jose, 1983. Visitors try to decide how to interact with an artificial creature.

Our design team spent months debating the nature of the creature. We surveyed the biology of time as manifested by real plants and animals and we played with open-ended fantasies of how organisms might relate to time. We probed the capabilities of the clock-calendar technology we were using. Examples of some of the questions facing us were: Should the creature sleep? Should it dream? Should it develop gradually or in punctuated stages? Should its pace get more excited or calm as it interacted with humans? Should there be events that occurred at the millisecond level that were beyond the perceptive capabilities of the human visitors?

Physically, the Entity was a computer graphic animation that moved on a video projection screen accompanied by computer synthesized sound. It also had a tactile and kinesthetic life. Humans interacted with it by touching specially constructed, pleasant-feeling touch pads. It lived in a forest of upside-down pine trees. The smell was overwhelming and many visitors remarked it was the first good-smelling computer art they had ever encountered. Its appearance and behavior changed with its age since birth, the time of day, and time of month. It had a regular heartbeat rhythm that pulsed its visuals and sounds.

Visitors could observe it move and grow or could actively affect its time life by touching the pads. For example, they could speed or slow its pace or choose to make part of it grow or die. They could choose to make the action happen immediately or at a specific minute in its future. At any given moment, its visual and sound appearance was the result of its intrinsic growth tendencies and all the interactions up to that point.
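A minimal sketch in Python suggests how such a clock-driven entity might be organized, with its state at any moment a function of its age, the time of day and month, and the interactions scheduled so far; the rhythm periods, pacing control, and event format are illustrative assumptions rather than the original program's design:

    import time

    # A sketch of a clock-driven "time entity": its state is derived from
    # its age, diurnal and monthly phases, a heartbeat pulse, and any
    # actions visitors have scheduled for its future.

    class TimeEntity:
        def __init__(self, heartbeat=1.0):
            self.birth = time.time()
            self.heartbeat = heartbeat      # seconds per pulse
            self.pace = 1.0                 # visitors can speed or slow this
            self.scheduled = []             # (fire_at, action) pairs

        def age(self):
            return (time.time() - self.birth) * self.pace

        def schedule(self, delay_seconds, action):
            """Visitors may ask for growth or death now or at a future minute."""
            self.scheduled.append((time.time() + delay_seconds, action))

        def state(self):
            now = time.localtime()
            due = [a for (t, a) in self.scheduled if t <= time.time()]
            return {
                "age": self.age(),
                "diurnal_phase": now.tm_hour / 24.0,
                "monthly_phase": now.tm_mday / 31.0,
                "pulse_on": int(self.age() / self.heartbeat) % 2 == 0,
                "pending_actions": due,
            }

    if __name__ == "__main__":
        entity = TimeEntity()
        entity.schedule(0, "grow_left_limb")
        print(entity.state())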

The intelligence of the program was rudimentary, although it did point toward interesting future research directions. Visitors reported that they indeed had a sense of an encounter with an unusual creature. Our work as artists was discontinuous with the prevalent artistic traditions. The integration of concepts from other disciplines and the probing of the technology were at least as important as the sensual qualities of the final products. We were constantly forced to invent new extensions of the technology and ended up discovering capabilities of which even the manufacturers were unaware. We were working simultaneously as artists, programmers, engineers, biologists, psychologists, and AI researchers. Inevitably, artists working with technology in the new context will have no choice but to integrate this kind of role diversity.

DEMON SEED was an interactive installation selected to be shown in the 1987 SIGGRAPH Art Show in Anaheim, California. Viewers controlled four computer-choreographed, moving, and talking robot arms. The robots moved on platforms in front of galleries of digitized images of demons from various world cultures.

The installation reflected on the perennial tendency of humans to project images of demons onto things they don't understand and explored the idea of kinesthetic and tactile intelligence. There are signs that robots may be the next recipients of our demon-projecting fears. Moreover, because the world is shrinking through communications, several different cultures may share the robot-demon imagery instead of having different monsters.

DEMON SEED asked the audience to experience this combined fear and fascination with robots. Its robots were simultaneously ominous and endearing. Each platform featured a particular culture with the robots moving in front of a gallery of repeated digitized images of spirit masks from the target culture. Each was dressed in materials from that culture. For example, an African robot was outfitted with a small woven hemp broom, fur, and colorful African cloth. The robots seemed like mechanized priests or shamans.

Stephen Wilson. Demon Seed (Interactive Robot Dance Troupe Installation). SIGGRAPH 87 Art Show, Anaheim. Visitors interacted with four computer controlled robot arms via a velvet squeeze rod.

The computer moved the robots through a series of ritualistic motions. Sometimes all four moved in unison; sometimes, in counterpoint. Sometimes the robot dance rippled through the group, starting with the top robot and then repeated in turn by each of the others.

The installation did not ask the audience just to passively observe the robots. Commenting on our ability to exercise control, DEMON SEED periodically gave viewers a chance to influence their actions. I constructed special squeeze rods that could read viewers' squeezes. I covered these rods in a lush purple and black velvet. I felt that the usual interfaces with computers, such as buttons, mice, and keyboards, connote a certain distance and coldness that is not always appropriate. Because of its subject matter, I wanted DEMON SEED to convey an intimacy, tactility, and warmth that is not usually associated with computers and robots.

I was interested in the question of what would an intelligence be like that had to rely only on touch for communication. I invited the audience to devise patterns of squeezes, rubs, caresses, etc. that they would use in communicating with the robots. I developed an expressive language of touch so that patterns of pressures and positions at the squeeze rod took on different meanings. I tried to construct the "language" so there was not an invariant mapping of specific actions to particular effects. Although this experiment was preliminary, it was clear that the exploration of artificial intelligences that do not rely on sound or sight for communication was a fruitful area for future investigation.
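A minimal sketch in Python suggests how such a context-dependent language of touch might work, with the same gesture meaning different things depending on the robot's current state; the gesture names, thresholds, and behaviors are illustrative assumptions, not the installation's actual mapping:

    # A sketch of a context-dependent "language of touch": rod samples are
    # classified into gestures, and the same gesture maps to different
    # behaviors depending on the robot's state.

    def classify(samples):
        """samples: list of (position 0..1, pressure 0..1) read from the rod."""
        positions = [p for p, _ in samples]
        pressures = [q for _, q in samples]
        travel = max(positions) - min(positions)
        if max(pressures) > 0.8 and travel < 0.1:
            return "squeeze"
        if travel > 0.5 and max(pressures) < 0.4:
            return "stroke"
        return "caress"

    # The mapping is deliberately not invariant: it depends on context.
    RESPONSES = {
        ("squeeze", "calm"):     "rear_up",
        ("squeeze", "agitated"): "freeze",
        ("stroke",  "calm"):     "sway_slowly",
        ("stroke",  "agitated"): "settle_down",
        ("caress",  "calm"):     "bow",
        ("caress",  "agitated"): "turn_toward_visitor",
    }

    def respond(samples, robot_state):
        return RESPONSES[(classify(samples), robot_state)]

    if __name__ == "__main__":
        print(respond([(0.5, 0.9), (0.5, 0.95)], "calm"))   # -> "rear_up"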

Finally, the system allowed viewers to enhance the robots' behaviors through their own speech. Periodically the robot they were controlling could "listen" through a microphone placed near the squeeze rods. It could pick up the viewer's voice and then speak with processed digitized recordings of the viewer's words. In an eerie linkage, the robot spoke with a demonized version of the viewer's voice. I hoped this sonic link would lead the audience to think about the role of projection in human creation of demons.

Stephen Wilson. Demon Seed. Computer controlled robot arms reacted to a language of touch communicated by visitors through a squeeze rod.

ai research: areas that call for non-technical, aesthetic decisions

In the course of this art work and analysis, I realized that many of the decisions to be made about the shape of AI programs are not purely technical.10 The simulation of human information processing outside of narrow realms, and the creation of machine partners which interest and satisfy humans, will depend on sophisticated artistic and psychological design choices as well.

Some discourse about AI seems to imply that intelligence can be viewed as an abstract, disembodied process. This view assumes that there is a "correct" way for the processes of natural language understanding, planning, problem solving, or vision to function and that there are "raw" meanings that programs can understand and manipulate. In this view, understanding and problem solving are technical accomplishments that can be assessed objectively. This section will explain why this view is erroneous.

In interactions with AI programs, technical correctness of response is often viewed as the only criterion for evaluating the interaction. Correctness means that the human interactor judges that the program's response indicates that it understood the gist of the human's communication. Except in extremely circumscribed contexts, this restricted interaction may be unacceptable: humans crave a texture in interactions with intelligent entities that goes beyond technical correctness--for example, personality, mood, purposiveness, sensitivity, fallibility, humor, style, emotion, self-awareness, growth, and moral and aesthetic values. Disaffection with limited interactions will become more severe as AI applications spread.

The texture of interaction is not a superficial decoration but is intrinsic to the basic fabric of human-like understanding and intelligence. For example, these qualities affect the ways that we as humans understand words, see things, go about problem solving, and organize our memories. AI researchers who believe they are objectively avoiding these issues may be deluding themselves. Similarly, those who believe they are only following the classic scientific strategy of defining manageable research problems are underestimating the nature of what is being defined away.

There is no way to avoid making choices. For example, a program without a sensitivity to humor is not just a program which has not addressed humor. This kind of program will have difficulty understanding some aspects of human communication, and it will be viewed as humorless by human partners. The attempt to ignore the issue of humor translates into a decision to structure the program's understanding and communication potential in a way that, if exhibited by a person, would be called humorlessness. This section identifies some specific areas in which AI design must begin to integrate humanistic perspectives:

1. The Physical Basis of Communication--Appearances and Sensual Modes of Contact

2. The Dangers of Limiting Domains

3. Computer Models of Self, World, Relationship, and Partner

The Physical Basis of Communication--
Appearances and Sensual Modes of Contact

Aside from traditional ergonomic considerations of physical and perceptual comfort, little attention has been paid to the physical context of communication between humans and computers. The details of computer appearance and the physical methods of communication are seen as trivial aspects of interaction compared to the actual information interchange. For a long time the domination of interaction by keyboard and video screen was not seen as especially significant. Interest then grew in other methods such as mouse pointers, digitizer tablets, touch screens, and light pens. There is speculation that future development of speech and virtual reality body-tracking technologies will make information exchange even easier.

The physical facts of interaction are more than peripherally important. They play subtle but important roles in our judgments. For example, the way an entity looks, moves, and sounds influences our assessments of its intelligence, sensibility, and our comfort with interaction. The appearance of computers and the keyboard/CRT communication dominance are more accidents of history than intrinsically necessary features. In the past, these forms were used because of familiarity and economic rationales. Though at first they did not seem to interfere with communication, this assumption deserves to be questioned as expectations for interactivity grow.

Why do we accept as foreordained that computers (or at least the terminal faces that they show to humans) must manifest themselves in high-tech metal, wood, and plastic cabinets? Who decreed that they need to look like electronic devices? Perhaps they should look like stuffed animals. Perhaps they should not be restricted to one physical locus but rather spread out so that whole spaces become active. Who said that typing or moving indicator devices is the best way for humans to communicate with a computer? Similarly who said that text on a CRT screen or piece of paper is the best way for the computer to respond to humans?

Future explorations in these areas may open up new interactive possibilities. Interactions are haunted by the appearances of computer paraphernalia inherited from the past. We have come to expect that computers should have certain appearances and should communicate in certain ways. Similarly, the expressive possibilities of other modes of communication, such as non-verbal signals, go unexploited because of our legacy of assumptions (for example, see Edward Hall's Silent Language and Hidden Dimension).11

New relationships between humans and computers may require that we free ourselves from these expectations. It may be impossible for us to accept computers as intelligent when they appear the way they currently do. They need not look humanoid, but a wider range of forms than currently exists is necessary.

Similarly, the physical means of communication needs to broaden. The channels are too narrow. Speech will open up significant new possibilities. The monotone or greatly restricted variation that characterizes the current generation of synthesizers is, however, a choice masquerading as a non-choice. Monotone is interpreted not as objective speech sounds without variation but rather as dull and machinelike with all the associated connotations of dumbness or maliciousness. AI developers will need to pay attention to previously neglected qualities such as inflection, tone, and timbre. Recognizers will need capabilities to read these qualities and synthesizers will need capabilities to generate these subtleties of sound and integrate them with meaning.
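A minimal sketch in Python suggests the kind of data a texture-sensitive synthesizer would need beyond the bare words; the field names and values are illustrative assumptions, not the interface of any real speech system:

    from dataclasses import dataclass, field

    # A sketch of prosodic qualities attached to an utterance: the words
    # alone are not enough, and the "neutral" defaults are themselves a
    # choice. Field names and value ranges are illustrative assumptions.

    @dataclass
    class Utterance:
        text: str
        pitch_contour: list = field(default_factory=lambda: ["mid"])
        rate: float = 1.0          # 1.0 = neutral pace
        loudness: float = 0.5      # 0..1
        timbre: str = "neutral"    # e.g. "breathy", "tense"

    def monotone(text):
        """The current default: every quality flattened to a neutral value."""
        return Utterance(text)

    def warm_question(text):
        """One of many possible textured renderings of the same words."""
        return Utterance(text, pitch_contour=["mid", "rise"],
                         rate=0.9, timbre="breathy")

    if __name__ == "__main__":
        print(monotone("Is anyone there?"))
        print(warm_question("Is anyone there?"))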

Communication modes similarly do not need to be humanoid. They could be modeled on animals or on totally artificial, fanciful forms. Perhaps in addition to traditional methods, computers could express themselves by changes in shape, color, size, texture, smell, or motion. Similarly, they might gather information via all the senses. Perhaps there would be appropriate situations in which we would communicate with the computer by rubbing, stroking, and snorting as well as key presses. Indeed, computer simulation of human understanding may require multimodal sensual data collection in ways we don't yet understand. Perhaps one day we will even know enough to allow communication via extrasensory perception and emanation or direct reading of brain waves.

These decisions about appearance and communication mode require psychological and artistic sensibility. What will be the impact on humans of various appearances? What kinds of forms of communication will allow maximum expressivity for the artificial entity? How should it move, make sounds, or receive human contact? These decisions are not just aesthetic, because the shape of human-machine communication is incomplete without attention to them.

The Dangers of Limiting Domains

In AI engineering, limiting the domain is a prominent strategy for creating problems that AI can solve. This strategy is a classic approach to scientific problem-solving--work on smaller problems on the way to larger ones. This section suggests that there are major shortcomings in applying this strategy to some AI problems because critical issues in the relationship of humans to machines are ignored. Programs will perhaps always need to work within limited predefined domains, but there is much more room for interesting variation to be built in.

In addition, many AI researchers underestimate the importance of enhancing variation. They believe AI programs can function neutrally and objectively without sacrificing effectiveness. They would like the programs to handle more variability but they see the sacrifices as minimal in the quest to develop functioning systems. Much of the variation in human-human contact is seen as unessential fluff. Unfortunately the result may be that humans are adjusting to the computer rather than vice versa.

For example, expert programs which claim to understand natural language require humans to limit vocabulary and syntax in the messages they type in. These limitations of expressive style and range of discourse represent a major unrecognized issue confronting the future of AI and human-computer contacts in general. Many may feel that these restrictions are not a great sacrifice, given the improved "friendliness" and responsiveness of computers in these limited zones of communication. They believe that it is acceptable to forego variety in language so that the computer can understand even this limited amount of natural language. They would claim that many human-human interchanges in specialized contexts are similarly limited in vocabulary and style.

This claim is misguided. Human interchange is punctuated with subtle variations in expression and vocabulary--even in the specialized, limited domain interchanges mentioned earlier such as business or consultations with experts. Similarly, human conversations meander, with fits and starts of intensity and slight topic changes, rather than proceeding in a straight line. We expect this flavor and texture in conversations and feel something is awry without it--even in serious conversations.

Imagine what it would be like to converse with a human who disregarded anything but the main utilitarian thrust of your statements and always replied with invariant, straight-on efficiency. Even though natural language systems may accurately understand our main meanings and produce their own appropriate responses, their performance will seem wanting and incomplete unless they can provide some "human" conversational texture.

Giving up texture in conversations will be more of a sacrifice than is often imagined. Some analysts suggest that people are already adjusting their communication and thought to the computer. I can imagine a nightmarish future where the natural language understanding requirements of the omnipresent computers force us to use stripped down, de-spiced language in communication with them. This style may spread to all our conversations.

There is another scenario. Computers should be adapted to us rather than we adapting to them. Developing AI programs that can understand the texture as well as the gist of our conversations is essential and possible. For example, there are understanding programs that reduce all mentions of humans consuming solid nourishment (eating) to a primitive internal concept. This strategy allows the programs to proceed with following stories and making inferences about meanings. Humans, however, do not just eat in one way. Sometimes we gobble, gluttonize, devour, gulp, nibble, sample, gnaw, feast, savor, and so on. These subtle distinctions are essential parts of human conversations and lives. Connotations are as important as denotations. Similarly different adjectives and adverbs create families of meanings from any one core sentence. Understanding these subtleties is an absolute prerequisite for future acceptance of AI programs. Similarly, the programs themselves will need to texture their language in order to maintain the interest of humans.
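A minimal sketch in Python contrasts reduction to a single primitive with a representation that keeps some of the connotation; the primitive name echoes conceptual-dependency-style systems, and the feature values are illustrative assumptions:

    # A sketch contrasting reduction to one primitive (the strategy
    # criticized above) with a representation that preserves texture.

    # Reduction: every verb of eating collapses to one internal concept.
    PRIMITIVE = {v: "INGEST" for v in
                 ["eat", "gobble", "devour", "gulp", "nibble", "savor", "feast"]}

    # Texture-preserving alternative: the core act plus connotative features.
    TEXTURED = {
        "gobble": {"act": "INGEST", "speed": "fast", "manner": "greedy"},
        "nibble": {"act": "INGEST", "speed": "slow", "quantity": "small"},
        "savor":  {"act": "INGEST", "speed": "slow", "attitude": "appreciative"},
        "devour": {"act": "INGEST", "speed": "fast", "quantity": "large"},
    }

    def parse(verb):
        return TEXTURED.get(verb, {"act": PRIMITIVE.get(verb, "UNKNOWN")})

    if __name__ == "__main__":
        print(PRIMITIVE["savor"])   # 'INGEST' -- the distinction is gone
        print(parse("savor"))       # the connotation survives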

Movement and vision programs face similar dilemmas. They can't deal with the complexity and unpredictability of normal human physical worlds. In factories where robots must work, the solution is often to simplify and standardize the setting. Some futurists believe that most settings will someday be adjusted for our artificially intelligent helpmates. For example, people dream of the day when robots will take over the drudgery of housework and household management. Our current houses are not set up for artificially intelligent robots. Vacuuming becomes a horrendously complex task when the vacu-robot must navigate around and under normal household paraphernalia. The standardization of construction would bring the dream closer.

At what cost? We must guard against the tendency to eliminate and streamline the texture of our verbal and physical lives just so they won't cause problems for AI programs.

Computer Models of Self, World, Relationship, and Partner

In order to understand natural language, AI programs must be provided with a repertoire of meanings, connections, and expectations required for dealing with the elliptical nature of normal human communication. In order to solve problems and learn, they must be provided with knowledge representation frameworks in which to store information and to make deductions. They must be given search strategies with which to look for solutions to problems. In order to visually recognize objects and scenes, they must be given expectations about the possible compositions they might encounter.

To succeed in settings that approach the complexity of everyday human life, AI programs need models of themselves, their human interactive partners, the relationship between themselves and the humans, and the world. Humans succeed in perceiving, understanding, and conversing about the world because they share expectations (often preconscious). As psychologists such as Jean Piaget and Jerome Bruner have noted, understanding is not just passive reception but rather active construction based on mental schema built from years of experience. Personal contact with experiences activates reserves of associations ready to fill in and make sense of the new perceptions and concepts. Similarly, computerized problem solving can proceed only when there are working models of the problem domain.

There are no "right" models to embed in the programs. There is no engineering solution to these problems. Human schema are influenced by personal life experience, temperament, culture, class, sex, philosophy, and the many other influences identified by social scientists--and centuries of artists.

For example, cultures differ in the distinctions they make. Eskimo culture uses over 20 words for snow, each describing different features. A designer of an AI program would need to decide if this part of a model of the world should be incorporated. This importance of culture means that AI work may not translate across cultures as easily as other technical innovations have. The "Fifth Generation" AI developers in Japan may be surprised one day to discover that some AI programs must be culturally specific.

Dramatists and novelists spend lifetimes developing characters who uniquely perceive themselves, others, and the world. They endow them with a fictional "life" that guides interactions so thoroughly that readers can predict how the characters might react even in situations not included in the author's works. Analogously, visual artists create rich, intriguing artificial worlds which have a deep life beyond their surface content. One could think of these centuries of work as development of knowledge representation schema.12

Certainly within very narrow, specialized areas of concern--such as those addressed by expert programs--a single model of the small relevant subworld can be agreed upon and used as the basis for program understanding. In more general contexts, however, technical consensus on the best model of the world to use is unlikely. Other bases must be used. And here is where AI research must incorporate other disciplines to help in the choice and design of the internal worlds.

The typical interactive computer program passively accepts text from a human and tries to respond appropriately. As mentioned earlier, this rhythm of question and answer within dry, efficient protocol will be seen as boring, dull, and intolerable as AI programs spread. Humans want more in interactions. They want their partners to initiate and volunteer. They expect contributions of personality, texture, and mood. Conversation is a kind of dance in which people take turns leading and following and try to anticipate the movements of their partners. Programs will not be able to join the dance if they are not endowed with characteristics that will serve as sources for their contributions.

The world models described above can partially provide program bases from which to originate communication. Programs with different world models would react differently to the same situation. Also, programs can be made different in their processes. Already AI researchers have discovered many design decisions that can affect the perceived personality of the programs. The crafting of program personality might require different decisions than those based on ideals of technical efficiency.

For example, in AI problem-solving programs decisions must be made about the process of searching among possible actions--for example, breadth-first versus depth-first search and the use of various evaluative criteria. One can well imagine the shaping of various "personalities" based on these design decisions--for example, timid and careful versus impetuous and risk-taking versus opinionated and closed programs. Just as humans differ, so could programs differ in the way they process information.
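A minimal sketch in Python suggests how search-control parameters might read as personality: the same best-first search tuned two different ways, with the node interface, scoring, and parameter values as illustrative assumptions:

    import heapq
    import itertools

    # A sketch of "personality" as search-control tuning: one search
    # procedure, two parameter settings that favor different behavior.

    PERSONALITIES = {
        "timid":     {"depth_bonus": 0.0, "risk_weight": 5.0},  # broad, risk-averse
        "impetuous": {"depth_bonus": 2.0, "risk_weight": 0.5},  # deep, risk-tolerant
    }

    def search(start, expand, is_goal, personality, max_nodes=10_000):
        """Best-first search. expand(node) -> [(child, value, risk)]."""
        p = PERSONALITIES[personality]
        tiebreak = itertools.count()
        frontier = [(0.0, next(tiebreak), 0, start)]   # (-score, tie, depth, node)
        examined = 0
        while frontier and examined < max_nodes:
            _, _, depth, node = heapq.heappop(frontier)
            examined += 1
            if is_goal(node):
                return node
            for child, value, risk in expand(node):
                # The evaluative criteria themselves encode the "character":
                # how much to reward commitment to a line, how much to fear risk.
                score = value + p["depth_bonus"] * depth - p["risk_weight"] * risk
                heapq.heappush(frontier, (-score, next(tiebreak), depth + 1, child))
        return None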

For centuries artists have been asking themselves related questions about the entities that they create. The advent of AI computer technology expands the arenas in which they can be asked and the repertoire of possible answers.

summary: artificial intelligence as a cultural research issue

At its core, artificial intelligence research is about much more fundamental issues than construction of the next year's model of expert system. The culture desperately needs the definitions of research agendas, the generation of hypotheses, and the pursuit of research questions in this field to reflect the perspectives and wisdom of people from a wide range of disciplines, including the arts and humanities.

If we are going to have artificially intelligent programs and robots, I would have sculptors and visual artists shaping their appearance, musicians composing their voices, choreographers forming their motion, poets crafting their language, and novelists and dramatists creating their character and interactions. To ignore these traditions is to discard centuries of experience and wisdom relevant to the research questions at hand.


Notes

1 Stephen Wilson, "Dark and Light Visions: The Relationship of Cultural Theory to Art That Uses Emerging Technologies," Siggraph 93 Visual Proceedings, Art Show Catalog (New York: ACM, 1993).

2 Stephen Wilson, "Industrial Research Artist: A Proposal," Leonardo, 17.2 (1984); "Research and Development as a Source of Ideas and Inspiration for Artists," Leonardo, 24.3 (1991); Using Computers to Create Art (Englewood Cliffs, NJ: Prentice, 1986); Multimedia Design With Hypercard (Englewood Cliffs, NJ: Prentice, 1991).

3 Pamela McCorduck, Aaron's Code: Meta-Art, Artificial Intelligence, and the Work of Harold Cohen (San Francisco: Freeman, 1991).

4 Peter Beyls, "Creativity and Computation: Tracing Attitudes and Motives," FISEA 93 Proceedings, 4th International Symposium on the Electronic Arts (information available from Professor Roman Verostko at Minneapolis College of Art and Design).

5 Languages of Design (Elsevier Press) (information available from editor Raymond Lauzzana, South Park Media Center, 544 Second Street, San Francisco, CA 94107).

6 Joseph Bates, "Edge of Intention," Siggraph 93 Visual Proceedings, Art Show Catalog (New York: ACM, 1993).

7 Tosa Naoko, "Neuro Baby," Siggraph 93 Visual Proceedings, Art Show Catalog (New York: ACM, 1993).

8 For information about this body of work see: Richard Zach, Gerhard Widmer and Robert Trappl, Artificial Intelligence: A Short Bibliography on AI and the Arts (information available from the Austrian Research Institute for AI, Schottengasse 3, A-1010 Vienna, Austria); Joseph Bates, Catalog from The First Artificial Intelligence Based Arts Exhibition, American Association for Artificial Intelligence 1992 Meetings, San Jose, CA (information available from Professor Bates at Carnegie Mellon University).

9 Stephen Wilson, "Interactive Art and Cultural Change," Leonardo, 23.2-3 (1990); "Environment-Sensing Artworks and Interactive Events: Exploring Implications of Microcomputer Developments," Leonardo, 16.4 (Autumn, 1983).

10 Stephen Wilson, Fleshing Out Artificial Intelligence: The Aesthetics of Computer-Human Contacts, work in progress.

11 Edward Hall, Silent Language (Greenwich, CT: Fawcett, 1963); Edward Hall, Hidden Dimension (New York: Doubleday, 1969).

12 Brenda Laurel, Computers as Theatre (Reading, MA: Addison, 1992).