This section describes the diverse areas of research being conducted at the MBC.
Discovering how humans and animals make decisions, and in particular how the underlying neural activity in the brain gives rise to emergent conformity to general laws of decision making, is a central focus of research at the MBC.
The role of neural activity in decision-making processes has been investigated in the Newsome laboratory using single-neuron recording methods for many years (Shadlen & Newsome, 2001). These studies provide direct support for the view, previously proposed in abstract statistical decision-making models, that animals and humans carry out a neural equivalent of the optimal sequential probability ratio test (Gold & Shadlen, 2001).
In parallel with this work, McClelland and colleagues have developed a model of decision making that translates the one-dimensional SPRT into a two-dimensional dynamical system that accounts in more detail for the time course of the underlying neural activity and can be extended to decisions among multiple alternatives (Usher & McClelland, 2001). This model, in turn, provides a bridge between models formulated directly in terms of large numbers of spiking neurons (Wong & Wang, 2006) on the one hand and optimal decision dynamics on the other (Bogacz et al, 2006).
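The core dynamics of such a two-dimensional accumulator model can be sketched in a few lines. The sketch below is a minimal illustration of a leaky competing accumulator in the spirit of Usher and McClelland (2001), not the published implementation; all parameter values (leak, inhibition, noise level, threshold) are illustrative assumptions.

```python
import numpy as np

def lca_trial(rho1, rho2, k=0.2, beta=0.2, sigma=0.1,
              dt=0.01, threshold=1.0, t_max=50.0, rng=None):
    """One simulated trial of a two-alternative leaky competing accumulator.

    rho1, rho2 : momentary evidence favoring each alternative
    k          : leak (decay) rate of each accumulator
    beta       : strength of mutual inhibition between accumulators
    sigma      : standard deviation of the accumulation noise
    Returns (choice, decision_time); choice is 0 if no bound is reached.
    """
    if rng is None:
        rng = np.random.default_rng()
    x1 = x2 = 0.0
    t = 0.0
    while t < t_max:
        n1, n2 = rng.normal(0.0, sigma, 2)
        dx1 = (rho1 - k * x1 - beta * x2) * dt + n1 * np.sqrt(dt)
        dx2 = (rho2 - k * x2 - beta * x1) * dt + n2 * np.sqrt(dt)
        x1 = max(x1 + dx1, 0.0)   # activations are bounded below at zero
        x2 = max(x2 + dx2, 0.0)
        t += dt
        if x1 >= threshold or x2 >= threshold:
            return (1 if x1 >= x2 else 2), t
    return 0, t
```

Averaging many such trials yields choice probabilities and response-time distributions that can be compared against behavioral and neural data.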
Recently the McClelland and Newsome labs have joined forces with labs in Princeton and London in a major effort to understand how populations of neurons distributed widely across the brain jointly contribute to the dynamics and outcome of decision processes. This effort will require integrating evidence from multi-unit neural recordings in animals with evidence from imaging methods, together with computational modeling of cross-regional integration of neural activity, to understand how parts of the brain work together as a decision is made. The work encompasses a wide range of factors that affect decision making, including payoff magnitude and uncertainty and differential roles of losses vs. gains, and it dovetails with work in the participating Knutson laboratory on the role of neuromodulators in emotion and decision making (Knutson et al, 2004).
This year, we have a new addition to the community of researchers investigating the neural basis of decision making, Samuel McClure. McClure’s work combines computational and functional brain imaging methods to investigate the neural basis of reward processing, delayed gratification, and reward-based decision making.
Our program also benefits from the opportunity for students to train jointly with Stanford faculty and Peter Dayan of the Gatsby Computational Neuroscience Unit. Dayan is a major contributor to reinforcement learning theory and to theories of neuromodulation and prefrontal control mechanisms in decision making (Schultz, Dayan & Montague, 1997).
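The reward-prediction-error idea at the heart of this line of theory (Schultz, Dayan & Montague, 1997) can be illustrated with a minimal temporal-difference learner. This is a toy sketch under simplifying assumptions (one state per time step, a perfectly reliable reward), not the published model.

```python
import numpy as np

def td_learn(n_steps=10, reward_step=8, n_trials=500, alpha=0.1, gamma=1.0):
    """Learn V[t], the predicted future reward at each time step of a trial
    in which a reward of magnitude 1 always arrives at time `reward_step`.

    Returns the learned values V and the TD errors from the final trial;
    the TD error delta = r + gamma * V[t+1] - V[t] is the quantity
    identified with phasic dopamine responses.
    """
    V = np.zeros(n_steps + 1)      # V[n_steps] is the terminal state, fixed at 0
    deltas = np.zeros(n_steps)
    for _ in range(n_trials):
        for t in range(n_steps):
            r = 1.0 if t == reward_step else 0.0
            deltas[t] = r + gamma * V[t + 1] - V[t]   # TD (prediction) error
            V[t] += alpha * deltas[t]
    return V, deltas
```

After training, the prediction error at the (now fully predicted) time of reward shrinks toward zero, mirroring the disappearance of the phasic dopamine response to expected rewards.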
How the brain successfully constructs a representation of the visual world has long been understood to be a hard problem, one requiring a synergistic combination of sophisticated computational approaches with psychophysical and neuroscientific investigation.
Research in the Baccus lab combines physiological investigations and computational modeling approaches to understand how the retina transforms and adapts to visual input. The Newsome laboratory continues to explore sensory and decision-making processes in motion perception, recording from neurons in behaving primates. Computational and fMRI approaches are used jointly in the Wandell laboratory to understand the neural basis of color perception (Wandell et al, 2006).
Turning to attention, John Huguenard’s group uses combined computational and electrophysiological methods to investigate the biophysical properties of thalamic relay neurons and how they may be regulated to modulate attention in mammalian nervous systems. Eric Knudsen’s laboratory investigates top-down attentional control of the corresponding circuitry in the barn owl, while Tirin Moore’s visual neurophysiology laboratory uses recording and microstimulation methods to investigate how attention modulates visual processing for both target feature selection and eye movements (Armstrong et al, 2006).
The recognition of visual patterns poses computational challenges that have, up to now, only been adequately solved in animal nervous systems. The effort to understand how these challenges are met has long been an important interface between computational and experimental neuroscience.
The laboratories of Ng in computer science and Grill-Spector in psychology take complementary approaches to the problem of high-level vision. Ng’s group relies on the neurally-grounded computational principle of sparse coding to enhance machine learning-based models of object recognition (Ng, 2004), while Grill-Spector’s lab employs very high resolution functional imaging methods to investigate the neural basis of high-level vision and object recognition in humans.
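Sparse coding can be made concrete with a small sketch: given a dictionary of basis functions, infer the few coefficients that reconstruct a signal. The iterative soft-thresholding solver below is a generic textbook method, offered only as an illustration of the principle, not as a description of the Ng lab's implementation.

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer a sparse coefficient vector a minimizing
       0.5 * ||x - D a||^2 + lam * ||a||_1
    by iterative soft thresholding (ISTA / proximal gradient descent).

    x : signal, shape (m,)
    D : dictionary of basis functions, shape (m, k)
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the reconstruction term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```

The L1 penalty drives most coefficients exactly to zero, so the signal is explained by a small number of active basis functions, as in sparse population codes.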
The Wandell laboratory’s recent focus has been to extend the use of advanced functional imaging methods including fMRI and Diffusion Tensor Imaging to understand the distributed brain systems involved in reading, and there are opportunities to combine such efforts with computational models of the reading process investigated by McClelland (Plaut et al, 1996).
In contrast to vision, the goal of which is to represent the visual world, the goal of neurons in motor pathways is ultimately to guide accurate movements. This difference in goals leads to profound differences between the two systems, and to the need for computational methods and physiological experiments complementary to those used in understanding representations in sensory systems. Modern computational motor control centers on feedback control theory and estimation theory, in which the distributed activity across populations of neurons is interpreted as implementing, in closed loop, optimal controller parameters, feed-forward predictors, optimal Bayesian estimators, and internal models.
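The estimation-theory side of this framework can be illustrated with a scalar Kalman filter, the optimal Bayesian estimator for a linear-Gaussian system. All model parameters below are illustrative assumptions, and the scalar form is for clarity only.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Minimum-variance state estimation for the scalar linear model
        x[t+1] = A x[t] + process noise ~ N(0, Q)
        y[t]   = C x[t] + sensory noise ~ N(0, R)
    Returns the sequence of posterior state estimates.
    """
    x, P = x0, P0
    estimates = []
    for obs in y:
        # predict the next state and its uncertainty
        x = A * x
        P = A * P * A + Q
        # update with the new observation
        K = P * C / (C * P * C + R)     # Kalman gain
        x = x + K * (obs - C * x)
        P = (1 - K * C) * P
        estimates.append(x)
    return np.array(estimates)
```

In motor-control models, the same recursion (in vector form) estimates limb state from delayed, noisy sensory feedback inside the control loop.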
Several labs at Stanford are heavily involved in computational motor control research. The Shenoy Lab investigates the relationship between the neural activity of many simultaneously recorded cortical neurons and motor behavior, using electrode array recordings in nonhuman primates performing reaching arm-movement tasks (e.g., Santhanam et al. Nature 2006; Churchland et al. Nature Neuroscience 2010). The Delp lab investigates person-specific biomechanical models of motion, directly complementing the neural control signals studied in the Shenoy Lab. The Ng and Thrun labs use advanced machine learning techniques to design robotic control systems with uniquely versatile capabilities (e.g., flying helicopters upside down, Abbeel et al.; Stanley, the robot that won the DARPA Grand Challenge, Thrun et al. 2007).
Radiating from these lines of investigation are the new methods that should enable the next generation of computational motor control studies that necessarily involve non-linear dynamical systems modeling (e.g., Yu et al. 2006), machine learning (Ng Lab), Bayes Nets (Koller Lab; Friedman et al. 2003), digital signal processing and discrete time models (Meng Lab, Zumsteg et al. 2005), adaptive filters in neural nets and analog signal processing (Widrow Lab), and large-scale hardware simulations of highly interconnected neurons (Boahen Lab, Boahen 2005).
A wide range of research at Stanford addresses aspects of learning and memory, from the synaptic and intra-cellular levels to the circuit, system and functional levels, and explores the role of activity-dependent physiological changes in both development and learning.
Work in the laboratories of Tsien and Malenka uses cellular and molecular research methods to investigate the basic mechanisms of synaptic transmission and the subsequent intracellular electrical, structural, molecular, and genetic processes that underlie neural plasticity. The Raymond Lab (Boyden et al. 2006) investigates the error signals used in the cerebellum in learning to calibrate eye movements. The Schnitzer lab (Jung et al. 2004) investigates the changes in neural circuit dynamics that occur as a result of learning, using two-photon imaging, fiber optic fluorescence microendoscopy and computational modeling approaches.
The role of activity-dependent processes in the early development of the visual system is the focus of the Shatz laboratory, and the adaptation of visual processes in response to experience has been the focus of many years of research in Eric Knudsen’s laboratory.
Much of the work in the laboratories mentioned here is primarily experimental, but computational methods are used as well, particularly in the Raymond and Schnitzer laboratories. The experimental efforts are complemented by work in the Boahen laboratory aimed at building synthetic circuits that mimic the biophysical properties of learning at the cellular and neural circuit levels (Arthur and Boahen, 2006). In addition, one line of work in McClelland’s laboratory now explores the consequences of cellular and circuit properties for experience-dependent strengthening of sensory processes and their reorganization after brain damage (Moldakarimov et al, 2006).
At a more cognitive level, the effort to understand learning and memory has required increasing computational sophistication since it became apparent that the brain basis of memory must be interpreted in terms of distributed representations. Combined computational and neurophysiological investigations, coupled with studies of the effects of brain lesions on memory performance, now support an evolving theory in which learned semantic representations and episodic memories depend on complementary brain systems associated with the neocortex on the one hand and the hippocampus and related structures in the medial temporal lobes on the other (McClelland and Rogers, 2003). Yet these are not separate systems: they must work together, since meaning affects episodic memory and new semantic learning relies heavily on medial temporal areas. Furthermore, it is understood that effective use of memory requires executive control processes that modulate memory processes (Badre and Wagner, 2005).
Ongoing research in the McClelland and Wagner laboratories addresses these ideas primarily from a computational perspective in the former case and primarily through fMRI methods in the latter, and affords many opportunities for the integration of computational and advanced functional imaging methods for the further investigation of the neural basis of learning and memory. The Widrow lab also offers a computational perspective on the organization of cognitive memory systems that makes testable experimental predictions for fMRI studies of the organization of conceptual knowledge in the brain.
In addition to the investigation of emergent functions such as those listed above, many of the participating labs in our program focus more directly on the underlying events that take place within cells and that may play roles in many different emergent functions of the nervous system. Work at this level is motivated by efforts to understand emergent functions, and contributes importantly to those efforts, but focuses on the basic building blocks, in hopes of understanding what role they play in allowing those emergent functions to be realized. Investigators working primarily at this level include Richard Tsien, Robert Malenka, Karl Deisseroth, and John Huguenard.
For example, one focus of investigation in the Tsien lab is the role of voltage-gated calcium channels found in the cell membrane. By allowing calcium to enter the cell, these channels play important roles in a wide variety of cellular processes, including synaptic transmission, electrical activity, and long-term changes that may underlie learning and memory.
John Huguenard’s laboratory is interested in the physiological processes in cells and circuits that give rise to oscillations and synchrony – neural processes that may play a role in a number of important cognitive functions, such as attentional gating, feature binding, and working memory. These labs offer ample opportunities for modeling work that incorporates known properties of the underlying physiology into models of many different types of emergent cognitive functions.
Virtually all of the investigators associated with our program are interested in emergent functions of neural systems. However, different laboratories seek to understand the basis of these functions at different levels. Some researchers investigate the mechanistic basis of neural processes at the sub-cellular (molecular, genetic, synaptic) level. Others investigate neural processes at the level of circuit interactions among diverse populations of neurons, using visualization, physiological measurement, and computer simulation. Still others focus on a more abstract level of analysis, seeking to understand how diverse brain areas represent information in the activity of populations of neurons, how these representations change in real time, and how they relate to overt behavior. Another group focuses on the functional roles of whole brain regions in emergent functions.
Partially cross-cutting all of these levels of analysis are complementary research methods. Some investigators rely primarily on classical electrophysiological and cellular/molecular biological techniques. Others rely extensively on advanced methods of visualization, either at a micro level using invasive or in vitro methods, or at a macro level using non-invasive methods.
Some researchers also rely on computational modeling, either directly in their own work or as a basis for guidance of their investigations based on the work of others. The computational modelers in our program ground their models in what we know about the brain, albeit at differing levels of granularity and computational abstraction.
Modeling is critical to the effort to understand how processes at one level of description give rise to emergent phenomena at another, and many investigators in our program use models in this way. Any mismatch between the predictions of a model and the observed emergent phenomena indicates areas where our understanding of the underlying processes may be imperfect, and provides an important source of hypotheses for experimental investigation. Students in our program will have ample opportunities to investigate the interplay between experiment and modeling, either within laboratories that combine the two approaches or between laboratories that specialize in one or the other.
While experiment and modeling are important elements of our program, there are also several other important contributing approaches.
Some researchers use what is known about neural representations and processes to inspire the development of artificial systems that may someday achieve parity with natural biological systems in such domains as locomotion, 3-D vision, object recognition, and object manipulation. This is the approach taken by Andrew Ng, who seeks to discover the learning algorithm the brain may use to find useful abstract representations of structure in a range of cognitive and perceptual domains. Ng’s work draws heavily on the work of sensory neuroscientists and other neurally inspired modelers for inspiration, and in turn provides a high-level characterization of the nature of neural representations that can guide ongoing research by neuroscientists.
Kwabena Boahen in Electrical Engineering also seeks to build artificial systems that mimic the brain – but in Boahen’s case, the goal is to achieve a far higher degree of fidelity to details of the processes that take place in real neurons and synapses during information processing and learning.
There are also researchers who use what we know about the brain to build useful neuroprosthetic devices, for example, as in Krishna Shenoy’s work, to allow the neural control of movement. Again this type of work is inseparable from the experimental and modeling approaches described above, since it both draws on and contributes to our understanding of the neural representations of intended actions in motor cortex. Similarly, Stephen Baccus’ effort to build a prosthetic retina draws heavily upon his experimental and computational investigations of neural processes in the retina.
Another approach taken by members of our training program is the development of advanced tools that facilitate research of all the types discussed above. We highlight here those who contribute to our ability to visualize neural processes. In coming years the best neuroscience research will combine information obtained from multiple modalities, including but not limited to single-unit and local field potential recordings, functional MRI, electrical and magnetic measurements from the scalp, calcium imaging, behavioral and neurological observations, and an array of new molecular-genomic imaging methods. Scientists will be challenged to explain how relatively low-resolution measurements, such as functional MRI, arise from the properties of the underlying cellular and molecular components.
Truly cutting-edge neuroscience will require visualization tools that span length scales, and several labs at Stanford do cutting-edge research relevant to this goal. Stephen Smith’s lab works on extremely high-resolution imaging of synaptic ultrastructure. Mark Schnitzer works on optical methods, such as fluorescence endoscopy, to visualize structure and activity at the cellular level in vivo. Karl Deisseroth has developed imaging techniques that allow the investigation of circuit dynamics with millisecond temporal resolution. Wandell and Glover use quantitative magnetic resonance imaging methods, such as diffusion-weighted imaging and chemical shift imaging, to measure cellular diffusivity and MR tissue properties (T1, T2) in human subjects and patients.
Complementary to visualization (as well as other data-intensive methods, such as electrode array recording from multiple individual neurons) is the development of advanced methods for analyzing complex multivariate data sets. While many of the faculty in our program make extensive use of such methods, several contribute at the cutting edge to the development of these methods and their extension to neuroscience applications.
For example, Daphne Koller’s work extends the probabilistic graphical models framework to represent, among other things, probabilistic relational structure, temporal structure, and mixtures of discrete and continuous variables. This work has already had an impact on functional brain imaging data analysis and there are important training opportunities for the further development of this application.
Similarly, Trevor Hastie in Statistics has contributed to the field of applied nonparametric regression and classification, and has developed computational tools that provide much of the basis for statistical modeling in the statistical analysis languages S, R, and S-plus. Two labs extensively involved in the extension of such advanced analysis tools to neuroscience research are the laboratories of Wandell, who uses these tools for analysis of MR data, and Shenoy, who makes extensive use of such tools for multi-neuronal data analysis. Shenoy’s collaboration with Maneesh Sahani of the Gatsby Computational Neuroscience Unit will further enhance training opportunities for students in this area; indeed fruitful research involving graduate students is already emerging from this collaboration.
Abbeel, P., Coates, A., Quigley, M., & Ng, A. Y. An application of reinforcement learning to aerobatic helicopter flight. Advances in Neural Information Processing Systems 19.
Armstrong, K. M., Fitzgerald, J. K., & Moore, T. (2006). Changes in visual receptive fields with microstimulation of frontal cortex. Neuron, 50(5), 791-798.
Arthur, J. V., & Boahen, K. (2006). Learning in silicon: Timing is everything. In B. Schölkopf & Y. Weiss (Eds.), Advances in Neural Information Processing Systems 17 (pp. 75-82). MIT Press.
Badre, D., & Wagner, A. D. (2005). Frontal lobe mechanisms that resolve proactive
interference. Cerebral Cortex, 15, 2003-2012.
Boahen, K. (2005). Neuromorphic microchips. Scientific American, 292(5), 56-63.
Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J. D. (2006). The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced choice tasks. Psychological Review, 113, 700-765.
Boyden, E. S., Katoh, A., Pyle, J. L., Chatila, T. A., Tsien, R. W., & Raymond, J. L. (2006). Selective engagement of plasticity mechanisms for motor memory storage. Neuron, 51(6), 823-834.
Churchland, M. M., Yu, B. M., Ryu, S. I., Santhanam, G., & Shenoy, K. V. (2006a). Neural variability in premotor cortex provides a signature of motor preparation. Journal of Neuroscience.
Churchland, M. M., Afshar, A., & Shenoy, K. V. (2006b). A central source of movement variability. Neuron, 52, 1085-1096.
Cunningham, J. P., Yu, B. M., Shenoy, K. V., & Sahani, M. (2008). Inferring neural firing rates from spike trains using Gaussian processes. In J. Platt, D. Koller, Y. Singer, & S. Roweis (Eds.), Advances in Neural Information Processing Systems 20. Cambridge,
Friedman, N., & Koller, D. (2003). Being Bayesian about Bayesian network structure: A Bayesian approach to structure discovery in Bayesian networks. Machine Learning,
Gold, J. I., & Shadlen, M. N. (2001). Neural computations that underlie decisions about sensory stimuli. Trends in Cognitive Sciences, 5, 10-16.
Jung, J. C., Mehta, A. D., Aksay, E., Stepnoski, R., & Schnitzer, M. J. (2004). In vivo mammalian brain imaging using one- and two-photon fluorescence microendoscopy. Journal of
Knutson, B., Bjork, J. M., Fong, G. W., Hommer, D. W., Mattay, V. S., & Weinberger,
D. R. (2004). Amphetamine modulates human incentive processing. Neuron, 43, 261-
McClelland, J. L., & Rogers, T. T. (2003). The parallel distributed processing approach to semantic cognition. Nature Reviews Neuroscience, 4, 310-322.
Moldakarimov, S. B., McClelland, J. L., & Ermentrout, G. B. (2006). A homeostatic rule for inhibitory synapses promotes temporal sharpening and cortical reorganization. Proceedings of the National Academy of Sciences, 103(44), 16526-16531.
Ng, A. Y. (2004). Feature selection, L1 vs. L2 regularization, and rotational invariance. In Proceedings of the Twenty-First International Conference on Machine Learning (ICML '04). ACM Press, New York, NY, USA.
Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson, K. (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review, 103, 56-115.
Santhanam, G, Ryu, SI, Yu, BM, Afshar, A, Shenoy, KV (2006) A high-performance
brain-computer interface. Nature. 442:195-198.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275, 1593-1599.
Shadlen, M. N., & Newsome, W. T. (2001). Neural basis of a perceptual decision in the
parietal cortex (area LIP) of the rhesus monkey. J Neurophysiol, 86(4), 1916-1936.
Sugrue, L. P., Corrado, G. S., & Newsome, W. T. (2004). Matching behavior and the representation of value in the parietal cortex. Science, 304(5678), 1782-1787.
Thrun, S, Montemerlo, M, Dahlkamp, H, Stavens, D, Aron, A, et al. (in press). Stanley,
the robot that won the DARPA Grand Challenge. Journal of Field Robotics.
Usher, M., & McClelland, J. L. (2001). The time course of perceptual choice: The leaky,
competing accumulator model. Psychological Review, 108, 550–592.
Wandell, B. A., Dumoulin, S. O., & Brewer, A. A. (2006). Computational neuroimaging: Color signals in the visual pathways. Neuro-ophthalmol. Jpn., 23, 324-343.
Wong, K. F., & Wang, X. J. (2006). A recurrent network mechanism of time integration in perceptual decisions. Journal of Neuroscience, 26, 1314-1328.
Yu, B. M., Afshar, A., Santhanam, G., Ryu, S. I., Shenoy, K. V., & Sahani, M. (2006). Extracting dynamical structure embedded in neural activity. In Y. Weiss, B. Schölkopf, & J. Platt (Eds.), Advances in Neural Information Processing Systems 18. MIT Press, Cambridge, MA.
Zumsteg, Z. S., Kemere, C., O'Driscoll, S., Santhanam, G., Ahmed, R. E., Shenoy, K. V., & Meng, T. H. (2005). Power feasibility of implantable digital spike sorting circuits for neural prosthetic systems. IEEE Transactions on Neural Systems and Rehabilitation Engineering.