Paul A. Merolla
I currently live in Northern California, where I am a postdoctoral scholar in the Brains in Silicon lab at Stanford University. I received a BS in electrical engineering from the University of Virginia in 2000, and a PhD in bioengineering from the University of Pennsylvania in 2006.
My academic interests include VLSI models of cortical networks, vision systems, and asynchronous digital interfaces for routing and interchip connectivity. My interests outside of the lab range from music to modeling financial markets. You can visit my personal webpage to learn more about my extracurricular activities.
Conventional information processing systems (computers, ASICs, embedded systems) compute by brute force, executing billions of instructions per second in a serial fashion. Although these traditional systems excel at raw number crunching, they fail miserably when asked to make inferences about real-world environments. I have always been impressed with the brain's ability to seamlessly integrate a large number of input variables (sensory information) with a large number of output variables (motor output, abstract thoughts and reasoning), putatively using both serial and parallel forms of processing. What is it about the brain that makes it better suited for real-world applications? Can we understand its strategy for coordinating massive amounts of information, and build similar systems in hardware? I am currently involved in a number of projects aimed at creating and exploring neural-based processing systems.
The focus of my thesis project was to explore how neurons in the primary visual cortex (area V1) come 'prewired' to process the visual world. Neurons in V1 respond selectively to oriented edges, and their preferred orientations change smoothly across the cortical surface. Surprisingly, V1 neurons are selective for orientation as soon as an animal opens its eyes.
I explored the idea that electrical pattern formation can seed orientation columns in V1 (click here for a detailed description of how). Building on the work of Ernst et al., I built a large-scale neuromorphic chip consisting of a recurrent network of silicon neurons with short-range excitation and long-range inhibition. The neurons in my network receive isotropic (unpatterned) afferent inputs. When the recurrent feedback is tuned to be sufficiently strong, bump-like patterns of neural activity form across the network; these bumps, which are seeded by random component heterogeneity, serve as the scaffold for the orientation map. As in animal maps, my chip has smoothly changing orientation domains (because the bumps have spatial extent) that repeat at regular intervals (because the bumps are periodic).
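The bump-forming mechanism can be sketched in a few lines of NumPy as a software analogue of the chip (the chip itself is analog silicon, not code): rate neurons on a ring, a difference-of-Gaussians recurrent kernel (short-range excitation, long-range inhibition), and uniform input with a small random jitter standing in for component mismatch. All parameter values below are illustrative assumptions, not measurements from the hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                                   # rate neurons on a 1-D ring
x = np.arange(N)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, N - d)                  # wrap-around distance on the ring

def gauss(d, sigma):
    return np.exp(-d**2 / (2 * sigma**2))

# Difference-of-Gaussians kernel: short-range excitation, long-range
# inhibition. Widths and gains are illustrative, not taken from the chip.
W = 0.25 * (1.0 * gauss(d, 4.0) - 0.9 * gauss(d, 12.0))

# Isotropic (unpatterned) afferent input; the small random jitter stands in
# for the transistor mismatch that seeds the bumps on silicon.
I = 1.0 + 0.02 * rng.standard_normal(N)

# Relax the rate dynamics dr/dt = -r + f(W r + I), with a saturating
# rectifier f, to a steady state.
r = np.zeros(N)
for _ in range(300):
    r += 0.1 * (-r + np.clip(W @ r + I, 0.0, 5.0))

# The steady state is strongly modulated even though the input is nearly
# uniform: periodic bumps whose positions are set by the heterogeneity.
active = r > 0.5 * r.max()
n_bumps = int(np.sum(active & ~np.roll(active, 1)))   # count rising edges
print(f"{n_bumps} bumps, modulation depth {r.max() - r.min():.2f}")
```

With the feedback gain turned down (e.g. scaling W by half), the uniform state is stable and no bumps appear; above the critical gain, the periodic pattern emerges just as described above, with its spatial period set by the kernel widths rather than the input.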
In certain parameter regimes, bumps form and disappear spontaneously. I explored how this spontaneous activity interacts with evoked visual activity to produce orientation selectivity. Interestingly, I discovered that the way in which a bump aligns itself to the stimulus determines whether that bump contributes to robust selectivity. Details are in a recent Cosyne abstract.
Higher-order feature maps using Neurogrid
My thesis work demonstrates that we can generate an orientation map in a neuromorphic chip quite easily: all we need is component mismatch (which is unavoidable in a physical system) and a rather simple cortical microcircuit. However, biological vision systems do not use orientation alone to represent the visual world; they use a number of other maps that are sensitive to progressively more complex visual features (such as texture, shape, and motion). I plan to use Neurogrid to explore whether the same pattern-formation principle that generates orientation selectivity also works for extracting these higher-order features. Neurogrid is an ideal platform for modeling a cascade of pattern-formation networks, since such large-scale systems are not practical on traditional computers.