Brendan O’Connor

Symbolic Systems 205, Spring 2005

Systems, Theory and Metaphor

Instructor: Todd Davies

 

Commentary on Chapters One and Three of Herbert Simon’s The Sciences of the Artificial

 

Simon’s first chapter jumps around a lot, but I’ll talk about a few points.

 

Simon describes natural science as follows:  “The central task of a natural science is... to show that complexity, correctly viewed, is only a mask for simplicity; to find pattern hidden in apparent chaos.”  In contrast, Simon presents the study of artificial things: objects not found in nature but synthesized by humans, and characterizable in terms of functions, goals, and adaptation, in both descriptive and normative terms.

 

Later, he talks about the value of simulation.  Everything in that section could apply to either artificial or natural sciences.  Simulation, Simon argues, can generate new knowledge by exploring the implications of known facts (e.g. weather modelling), or it can help explore poorly understood systems when we want to abstract over many ill-understood properties to discover things about a few others (e.g. learning chemistry while abstracting over quantum theory).  I’m not sure what this section has to do with artificial science.

 

Simulation surely makes sense as a scientific method for understanding man-made systems, since you can build an approximation just as other humans built the actual system.  Or perhaps you can recreate the whole system (a particularly effective simulation!).  Consider role-playing training and wargames: in these flesh-and-blood simulations, the participants learn a lot about the system, since the simulation is quite close to reality.  There are, of course, artificial systems that are harder to simulate: economic modelling is difficult because economies are big, complex systems that must be simulated on a computer, abstracted over many details.  An entire economy is much bigger and more complex than a role-playing scenario or a military exercise.

 

However, simulation is quite orthogonal to the aspects of functions and goals.  Functions and goals aren’t real in any phenomenon – they are convenient ways to describe sets of data.  “A bird’s wing has the function of flight” is just shorthand for “A bird’s wing can cause enough lift to cause flight”, or “the adaptive benefit of a bird’s wing is the flying ability it allows”, or “God intended in his design for a bird’s wing to allow flight”.  If you simulate a wing, you don’t discover the function of flight.  You get data in which you find flight, from which you might infer the function of flight.  If you simulate anything, you’re not finding functions or goals; you’re finding what causes what.  Finding functions and goals is an abstraction up from the causality found through data analysis.  Certainly you find them or make statements about them when running simulations, but you do so equally with non-simulation science.  I don’t understand what functions and goals have to do with simulations.

 

Just one more note.  I once read a paper by John McCarthy in which he complained that Herbert Simon opened some text with a beautiful exposition of the problems of intelligence, then depressingly reduced it all to cryptarithmetic puzzles.  It seems that in Chapter 3 of The Sciences of the Artificial I’ve now read the same thing McCarthy did, but I doubt even how accurate that beautiful exposition is.  Simon argues that human behavior is simple while the environment is complex: change the cryptarithmetic puzzles, and the same short-term memory and chunking constraints will lead to different, complex behavior.  McCarthy’s point is that the richness and complexity of common-sense knowledge belies any belief that the simple cognition behind cryptarithmetic puzzles has anything to do with true intelligence.  That’s absolutely true.  And one more problem: what about all those neurons?  Surely it is possible they could create complex cognitive processes.
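For readers unfamiliar with the genre, a concrete example may help show how small and constrained these puzzles are.  Below is a minimal brute-force solver for the classic SEND + MORE = MONEY puzzle.  This is my own illustrative sketch, not Simon’s model of human solving (his subjects relied on heuristics under memory constraints, not exhaustive search):

```python
from itertools import permutations

def solve(words, result):
    """Brute-force cryptarithmetic solver: find distinct digits for the
    letters so that sum(words) == result, with no leading zeros."""
    letters = sorted(set("".join(words) + result))
    if len(letters) > 10:
        return None  # impossible: more letters than digits
    # Net coefficient of each letter: +10^i for each addend position it
    # occupies, -10^i for each result position; a valid assignment makes
    # the dot product of coefficients and digits equal zero.
    coef = {c: 0 for c in letters}
    for w in words:
        for i, c in enumerate(reversed(w)):
            coef[c] += 10 ** i
    for i, c in enumerate(reversed(result)):
        coef[c] -= 10 ** i
    coefs = [coef[c] for c in letters]
    leading = {letters.index(w[0]) for w in words + [result]}
    for digits in permutations(range(10), len(letters)):
        if any(digits[i] == 0 for i in leading):
            continue  # leading letters may not be zero
        if sum(c * d for c, d in zip(coefs, digits)) == 0:
            return dict(zip(letters, digits))
    return None
```

For SEND + MORE = MONEY the solver finds the unique assignment 9567 + 1085 = 10652.  The point relevant to the commentary: the whole search space is tiny and rigidly defined, which is exactly why such simple cognitive constraints suffice to explain behavior on it – and why McCarthy doubted it says much about intelligence in general.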

 

Human cognition is implemented by neurons, and itself implements a number of other things, including common sense.  It is forbiddingly complicated to describe how cognition is implemented by neurons.  And it is forbiddingly complicated to describe how common sense is implemented by cognition (a fact I infer because common sense has proven forbiddingly complicated to implement in any information-processing system anyone has tried).  So we see incredible complexity below and above cognition... why be so sure we can assume simplicity in the middle?

 

This isn’t to put down the chapter too much – I thoroughly enjoyed it – but it can seem over the top at times.