Although humans have an incredible ability to acquire new skills and alter choice behavior as a result of experience, the learning that emerges is often specific to the trained stimuli, context, and/or task. For instance, in the sub-domain of perceptual learning, enhancements do not typically generalize across low-level features of the trained stimulus (e.g., orientation, spatial frequency, retinal location). My graduate work tested the hypothesis that exposure to richer stimulus environments and variable task demands invokes more general learning. Working within the context of fast-paced action video games, we found that such experience does indeed transfer broadly, improving performance on tests of low-level vision, visual attention, short-term memory, executive function, and even general fluid intelligence.
However, while action video games are certainly effective teaching tools, they are also exceedingly complex, making it difficult to derive and test the underlying learning principles. Thus, my current research utilizes simpler, more easily characterized tasks to ask how the computational demands of the learning environment influence the generality or specificity of learning. I will argue that in order to observe transfer, the learning environment must require that the subject learn the generative process that links states, actions, and outcomes (i.e., an "internal model"). Behavioral tasks designed according to such principles, be they primarily perceptual (Gabor orientation learning) or cognitive (binary sequential decision making) in nature, lead to the predicted specificity or generality of learning.