Tony Tulathimutte

SymSys 205 – Response Paper #1 – Simulation in the Social Sciences

 

            In class we discussed the notion of “agents” in simulations, yet the term “agent” seems to carry different (and perhaps contrasting) connotations in different contexts. This response paper examines the roles of the two types of simulation agent we have discussed, and the differences between them.

In our discussion of the Axelrod article we introduced the first type of agent: the “locally interacting” entity responsible for the “emergent properties” of a system. This conception of the agent relates clearly to our discussion of simulations from a systems perspective, since systems are essentially networks of discrete interacting entities. What struck me about these agents (especially those used in the social science simulations) was their behavioral uniformity; for example, in the Schelling simulation, every agent has the same preference (at least two same-color neighbors). This uniformity is intended to isolate the effects of a single principle, a slight preference for similar neighbors, but it clearly does not adequately represent our society, in which preferences are far more likely to be distributed over a wide range. That is the basic problem with using a Wolfram-esque simulation to describe social phenomena such as racial preference: real people do not all follow the same simple rules. I’m curious to know if there are any variations of the Schelling simulation in which the preferences of each agent are determined with probabilistic weights. Moreover, I’m curious to know whether that approach would be effective in extrapolating any general principles at all, since the stark effects of the Schelling simulation arise from the very uniformity of the agents’ behavior.
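The variation I’m wondering about can at least be sketched. Below is a minimal, hypothetical Schelling-style model (not drawn from any of the readings) in which each agent samples its own similarity threshold from a uniform distribution instead of sharing one fixed preference; the grid size, vacancy rate, threshold range, and number of steps are all illustrative assumptions.

```python
import random

# Schelling-style segregation sketch with heterogeneous preferences:
# each agent draws its own "minimum fraction of same-color neighbors"
# threshold rather than every agent sharing a single rule.
# All parameters below are illustrative, not taken from the readings.

SIZE = 20          # grid is SIZE x SIZE, wrapped at the edges (a torus)
EMPTY_FRAC = 0.1   # roughly this fraction of cells starts empty
STEPS = 50         # number of relocation rounds to run

random.seed(0)

def make_grid():
    """Each occupied cell maps to (color, personal threshold)."""
    grid = {}
    for x in range(SIZE):
        for y in range(SIZE):
            if random.random() < EMPTY_FRAC:
                continue  # leave this cell empty
            color = random.choice("AB")
            threshold = random.uniform(0.0, 0.6)  # heterogeneous preference
            grid[(x, y)] = (color, threshold)
    return grid

def neighbors(pos):
    """The eight surrounding cells, wrapping around the grid edges."""
    x, y = pos
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            yield ((x + dx) % SIZE, (y + dy) % SIZE)

def unhappy(grid, pos):
    """An agent is unhappy if its same-color fraction is below its threshold."""
    color, threshold = grid[pos]
    occupied = [grid[n] for n in neighbors(pos) if n in grid]
    if not occupied:
        return False
    same = sum(1 for c, _ in occupied if c == color)
    return same / len(occupied) < threshold

def step(grid):
    """Move every unhappy agent to a randomly chosen empty cell."""
    movers = [p for p in grid if unhappy(grid, p)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if (x, y) not in grid]
    random.shuffle(movers)
    for p in movers:
        if not empties:
            break
        dest = empties.pop(random.randrange(len(empties)))
        grid[dest] = grid.pop(p)
        empties.append(p)  # the vacated cell becomes available

def mean_similarity(grid):
    """Average fraction of same-color neighbors across all agents."""
    scores = []
    for pos, (color, _) in grid.items():
        occ = [grid[n][0] for n in neighbors(pos) if n in grid]
        if occ:
            scores.append(sum(1 for c in occ if c == color) / len(occ))
    return sum(scores) / len(scores)

grid = make_grid()
before = mean_similarity(grid)
for _ in range(STEPS):
    step(grid)
after = mean_similarity(grid)
print(f"mean same-color neighbor fraction: {before:.2f} -> {after:.2f}")
```

Running this with different threshold distributions (say, tightly clustered versus widely spread) would be one way to test my question empirically: whether clustering still emerges once the agents’ preferences are no longer uniform.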

The second conception of the “agent” came up in our discussion of the Bailenson article; familiar to HCI researchers, it can be described as an entity that behaves autonomously according to certain rules. This agent is associated with a very different notion of simulation than Wolfram’s: rather than deterministic systems of locally interacting entities, these simulations are computer-generated environments intended to match the complexities of some real-world environment or principle as closely as possible. The agent typically acts on behalf of a “user”, a person who is actively involved in operating or acting within the simulation. The presence of a user makes the simulation non-deterministic, in that the outcome can be influenced by deliberate human actions. This is the sort of simulation we’ll be studying in the next two weeks of the class, and its non-determinism and active involvement of humans have entirely new implications for the use of simulation (which I’ll discuss in my next response paper). Because the agent acts on behalf of a user instead of “for its own sake”, its behavior is often contingent on the actions of the user, rather than on those of any of the other agents in the simulation.

What the two notions of agents have in common is that they each behave automatically – that is, according to preprogrammed rules and heuristics – and that their simplicity is what makes them useful in the simulation. In the first case, the simple rules governing the agents are what allow complex behavior to arise; in the second case, their simplicity translates into ease and enjoyability of interaction in the simulated environment.