Response to Wolfram (Processes of Perception and Analysis)

This will probably always be the most infuriating chapter in ANKS for me.  Randomness, complexity, the whole shmegaggle, are defined by reference to human perceptual processes.  That which is complex looks complex, and vice versa.  It is certainly desirable that a postulated definition of complexity count as complex the things that look complex, but it is a bit of a cheat to start from the goal.  I know definition is a terribly “traditional” way of doing science, but maybe, just maybe, we set up definitions so that our claims may be meaningful, testable, and replicable.  Instead of giving us a ruler to measure one foot, Wolfram has told us all to use the king’s foot.  And it remains unclear whether one could unambiguously assign natural phenomena to the Wolfram complexity classes.  In what complexity class is the phone right next to me?  At the molecular level it may be very complex and heterogeneous; at the macro scale it seems a fairly predictable structure.  If Wolfram wants to reframe science, he had better find a better way of mapping it onto the real world.

However, Wolfram does pave an important path.  He convinces me that it is possible to come up with a definition of complexity and that he is not just on a wild goose chase.  Clearly, something is going on such that some patterns are more complex than others.  It remains only to formulate the differences more clearly and distinctly, and my anxieties will subside.  

How might one do so?  If Wolfram is going to define complexity by reference to the processes of human perception, it seems justifiable (though perhaps not very congenial) to demand that he nail down what those perceptual processes are.  Is this possible without traversing the moderate speedbump of a full-scale theory of human perception?  I think so.  The complexity of a pattern should be some measure of the difficulty of inducing that pattern from a sequential feed of the data in which it resides.

So here’s a first cut at a precise definition of complexity for discrete, iterative processes.  Take a fully-connected (every node connected to every other) recurrent neural network.  Let the update function g be sigmoidal (arbitrarily): g(x) = 1/(1 + exp(-x)), where x is the weighted sum of the activations of all incident nodes at the previous time step.  Reserve k nodes to be input nodes for the process and k nodes to be output nodes.  Now, without consideration of how to train the network, what is the smallest number of nodes in the network (besides the 2k input and output nodes) that can approximate a series of n data points comprising the pattern (where one datum is a vector of k elements) from the process to within epsilon mean-squared error?
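To make that concrete, here is a minimal sketch in Python (using PyTorch).  The names and numbers are mine, and training by gradient descent merely stands in for the real question of whether suitable weights exist at all; the proposal above specifies only the architecture, not a training procedure.

```python
import torch

def run(W, series, k):
    """One pass over the series through a fully-connected recurrent net.
    The first k nodes are inputs (clamped to the data at each step), the
    next k are outputs, the rest are interior.  Update rule: sigmoid of
    the weighted sum of every node's activation at the previous step."""
    n = W.shape[0]
    state = torch.zeros(n)
    preds = []
    for x in series[:-1]:
        state = torch.cat([x, state[k:]])      # clamp the input nodes
        state = torch.sigmoid(state @ W)       # g(x) = 1/(1 + exp(-x))
        preds.append(state[k:2 * k])           # read the output nodes
    return torch.stack(preds)

def best_mse(series, k, hidden, steps=2000, lr=0.05):
    """Proxy for 'can a net of this size represent the pattern': train the
    weights by gradient descent to predict the next datum and report the
    lowest mean-squared error reached."""
    n = 2 * k + hidden
    W = (0.1 * torch.randn(n, n)).requires_grad_(True)
    opt = torch.optim.Adam([W], lr=lr)
    targets = series[1:]
    best = float("inf")
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((run(W, series, k) - targets) ** 2)
        loss.backward()
        opt.step()
        best = min(best, loss.item())
    return best
```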

Some patterns will be replicable with only a small number of nodes.  Some will require a much larger number of interior nodes to be represented even approximately.  The latter, I suggest, are more complex, and quantifiably so.  Complexity is now well-defined, and comparisons between patterns are transitive.
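Continuing the sketch above, the complexity of a series would then be the smallest interior size that fits it to within epsilon.  The data below are hypothetical toys chosen only to illustrate the comparison; one would expect the periodic series to need few or no interior nodes and the chaotic one to need noticeably more.

```python
def complexity(series, k, eps=1e-3, max_hidden=64):
    """Smallest number of interior nodes that fits the series within eps
    (None if nothing within the search budget does)."""
    for hidden in range(max_hidden + 1):
        if best_mse(series, k, hidden) <= eps:
            return hidden
    return None

# Toy comparison with k = 1: a period-two series versus a chaotic
# logistic-map series, both kept inside (0, 1) to suit the sigmoid outputs.
periodic = torch.tensor([[0.2], [0.8]] * 50)

x, chaotic = 0.37, []
for _ in range(100):
    x = 3.9 * x * (1 - x)
    chaotic.append([x])
chaotic = torch.tensor(chaotic)

print(complexity(periodic, k=1))
print(complexity(chaotic, k=1))
```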

Of course, there is no great reason to appeal to this particular time-series analysis algorithm as an oracle of complexity.  My suspicion is that there will never be a measure of complexity that does not make any unwarranted assumptions.  Still, it does offer a benchmark that can be implemented in hardware.