Helena Wang


Research

Below is a high-level summary of some of the projects I have worked on, grouped roughly by topic.

Encoding of visual motion in human visual cortex

Multivariate pattern analysis is widely used to infer the perceptual and cognitive state of an observer from fMRI measurements. In vision experiments, for example, properties of a visual stimulus can be read out, or “decoded”, from the spatially distributed pattern of voxel responses in the brain. High decoding accuracy for a given visual property in a given visual area is commonly interpreted as evidence that neurons in that area encode (are selective for) that property. Our results provide evidence against this interpretation: we demonstrate that, in the case of visual motion, fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. This work also investigates the spatial pattern of neural representations necessary for successful fMRI-based decoding.
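
To give a flavor of what “decoding” means here, below is a minimal Python sketch of a linear classifier reading a two-alternative stimulus label out of synthetic voxel patterns. It is a generic illustration, not the analysis pipeline from the paper; the trial count, voxel count, and noise levels are arbitrary.

    # Minimal decoding sketch with synthetic data (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 200, 100            # hypothetical experiment size
    labels = rng.integers(0, 2, n_trials)    # two motion directions (e.g., left vs. right)

    # Each voxel has a small, fixed directional bias; a trial's response
    # pattern is that bias (signed by the stimulus) plus measurement noise.
    voxel_bias = rng.normal(0, 0.5, n_voxels)
    responses = (np.outer(2 * labels - 1, voxel_bias)
                 + rng.normal(0, 1, (n_trials, n_voxels)))

    # Cross-validated accuracy of reading the label from the spatial pattern.
    acc = cross_val_score(LogisticRegression(max_iter=1000), responses, labels, cv=5)
    print(f"decoding accuracy: {acc.mean():.2f}")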

This work was published in Wang HX, Merriam EP, Freeman J, and Heeger DJ, 2014.
A preliminary version of this work was also presented at the 2013 Society for Neuroscience conference in San Diego.


Encoding of visual motion in nonhuman primates

Neurons in area MT of the macaque monkey encode visual motion. Some of these neurons encode the true motion direction of a visual pattern, a non-trivial computation that requires nonlinear integration of the “component directions” corresponding to multiple local, oriented features within the pattern. In this work, I analyzed a previously studied population of ~800 MT cells to investigate how various response properties of these neurons contribute to their pattern direction selectivity. These statistical relationships help constrain functional models of the mechanisms of motion integration by MT neurons.
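
The standard way this literature quantifies pattern direction selectivity is to compare a cell’s plaid responses against “pattern” and “component” predictions derived from its grating tuning curve, via Z-transformed partial correlations. The Python sketch below illustrates that computation on a made-up Gaussian tuning curve; the tuning width, plaid angle, and noise level are arbitrary assumptions, not values from the dataset.

    # Pattern/component partial-correlation analysis on synthetic MT tuning.
    import numpy as np

    directions = np.arange(0, 360, 30)        # tested plaid directions (deg)

    def tuning(d, pref=90.0, width=40.0):
        """Hypothetical Gaussian direction tuning curve (grating response)."""
        delta = (d - pref + 180) % 360 - 180
        return np.exp(-0.5 * (delta / width) ** 2)

    # Predictions for a plaid whose two component gratings are 120 deg apart:
    pattern = tuning(directions)                                   # follows plaid direction
    component = 0.5 * (tuning(directions - 60) + tuning(directions + 60))

    def partial_r(resp, a, b):
        """Correlation of resp with a, partialing out b."""
        r_a, r_b = np.corrcoef(resp, a)[0, 1], np.corrcoef(resp, b)[0, 1]
        r_ab = np.corrcoef(a, b)[0, 1]
        return (r_a - r_b * r_ab) / np.sqrt((1 - r_b**2) * (1 - r_ab**2))

    # A fake cell that tracks the component prediction, plus noise.
    resp = component + np.random.default_rng(1).normal(0, 0.05, directions.size)
    Rp, Rc = partial_r(resp, pattern, component), partial_r(resp, component, pattern)
    n = directions.size
    Zp, Zc = [np.arctanh(r) * np.sqrt(n - 3) for r in (Rp, Rc)]
    print(f"pattern index Zp - Zc = {Zp - Zc:.2f}  (negative => component-selective)")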

This work was done in the laboratory of Tony Movshon and presented at the 2010 Society for Neuroscience conference in San Diego.


Using perception of second-order visual stimuli to infer neural computation

In pattern vision, “first-order” patterns are those with boundaries defined by changes in luminance (pixel intensity). The processing of first-order images is largely a linear computation known to take place in primary visual cortex (V1): perceptual sensitivity to these images is linked systematically to the responses of V1 neurons. “Second-order” patterns are those with boundaries defined by image statistics other than pixel intensity. The model commonly used to explain how neurons might compute, and how humans might perceive, such boundaries involves a cascade of linear computations with intervening nonlinearities (e.g., filtering, rectification, normalization, filtering). It has also been proposed that each of these component computations is “canonical”: ubiquitous across areas of cortex and combined in various ways to perform different sensory and cognitive functions. In this study, I tested for, and characterized, normalization in second-order vision by manipulating and measuring observers’ perceptual sensitivity to second-order visual stimuli. This allowed us to extrapolate from computational theories of first-order vision to make inferences about higher-order visual processing and areas of cortex beyond V1.
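
As a concrete (and deliberately simplified) illustration of the cascade, the Python sketch below runs a contrast-modulated noise image through filter, rectify, normalize, and filter stages. The filter scales and the normalization constant are arbitrary assumptions, not parameters estimated in the paper.

    # Filter-rectify-(normalize)-filter cascade on a second-order stimulus:
    # noise whose contrast (not luminance) is modulated by a coarse grating.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 128)
    envelope = 1 + 0.8 * np.sin(x)[None, :]          # second-order (contrast) signal
    img = envelope * rng.normal(size=(128, 128))     # mean luminance stays constant

    # Stage 1: fine-scale band-pass filter, then a rectifying nonlinearity.
    first = gaussian_filter(img, 1.0) - gaussian_filter(img, 2.0)
    energy = first ** 2

    # Divisive normalization: local energy divided by pooled surrounding energy.
    sigma_sq = 0.01                                  # semi-saturation constant (arbitrary)
    norm = energy / (sigma_sq + gaussian_filter(energy, 8.0))

    # Stage 2: a coarse-scale filter now "sees" the contrast boundary as a
    # luminance-like boundary in the normalized energy image.
    second = gaussian_filter(norm, 4.0) - gaussian_filter(norm, 8.0)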

This work was a collaboration with Mike Landy and was published in Wang HX, Heeger DJ, and Landy MS, 2012.
A preliminary version of this work was also presented at the 2011 Vision Sciences Society conference.


Temporal reliability of exploratory eye movements

How people move their eyes over complex, real-world visual stimuli likely depends on a variety of uncontrolled sensory and cognitive factors. Yet, surprisingly, eye movements in response to a certain class of dynamic, engaging stimuli are highly reliable across observers and across repeated viewings. In this study, we exploited this reliability to understand the temporal dynamics of eye movements. Specifically, we asked how the reliable (shared) component of eye movements depends on the accumulation of visual information over time. We systematically manipulated the temporal context of movie clips (by scrambling them), measured people’s eye movements, and developed a mathematical model (with a closed-form solution) that provided an excellent fit to the behavioral data. The model allowed us to infer the time scale of integration underlying eye movements to complex, dynamic stimuli.
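
The Python snippet below is a hypothetical illustration of the general idea of inferring an integration time scale from reliability measurements; it is not the closed-form model from the paper. It fits an assumed exponential-recovery form to fake data, and both the functional form and all parameter values are placeholders.

    # Hypothetical illustration: fit an exponential recovery of eye-movement
    # reliability after a scramble boundary to estimate a time constant.
    import numpy as np
    from scipy.optimize import curve_fit

    def reliability(t, r_max, tau):
        """Reliability climbs toward r_max with time constant tau (seconds)."""
        return r_max * (1 - np.exp(-t / tau))

    t = np.linspace(0.1, 8, 40)                # time since segment onset (s)
    rng = np.random.default_rng(0)
    data = reliability(t, 0.7, 1.5) + rng.normal(0, 0.03, t.size)  # fake measurements

    (r_max, tau), _ = curve_fit(reliability, t, data, p0=[0.5, 1.0])
    print(f"estimated integration time scale: {tau:.2f} s")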

This work was published in Wang HX, Freeman J, Merriam EP, Hasson U, and Heeger DJ, 2012.
A preliminary version of this work was also presented at the 2010 Vision Sciences Society conference.


Using microsaccades to infer neural computation

Our eyes are never completely still. Even when we attempt to maintain stable fixation, we still make tiny eye movements called microsaccades. Microsaccades are modulated by changes in visual stimulation and in the cognitive state of the observer. Neurophysiologically, they also reflect neural activity near the foveal region of a continuous visuomotor map, such as the one in the superior colliculus. Manipulating and measuring microsaccades behaviorally therefore provides an opportunity to infer neural computations and representations in these visuomotor maps. I made a series of measurements quantifying how the rate and direction of microsaccades depend on visual stimulation during prolonged fixation, and used these measurements to infer and constrain the spatiotemporal interactions of the underlying neural representations.
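
For reference, microsaccades in data like these are typically detected with the velocity-threshold algorithm of Engbert & Kliegl (2003). The Python sketch below implements that standard algorithm; the sampling rate, threshold multiplier, and minimum duration are common default choices, not necessarily the ones used in this study.

    # Velocity-threshold microsaccade detection (Engbert & Kliegl, 2003).
    import numpy as np

    def detect_microsaccades(x, y, fs=1000.0, lam=6.0, min_samples=6):
        """Return (start, end) sample indices of candidate microsaccades."""
        # Eye velocity from position (central differences), in deg/s.
        vx, vy = np.gradient(x) * fs, np.gradient(y) * fs

        # Robust, median-based velocity SD; one elliptic threshold per axis.
        def robust_sd(v):
            return np.sqrt(np.median(v**2) - np.median(v)**2)
        eta_x, eta_y = lam * robust_sd(vx), lam * robust_sd(vy)

        # A sample is "fast" if its velocity lies outside the ellipse.
        fast = (vx / eta_x) ** 2 + (vy / eta_y) ** 2 > 1

        # Keep runs of fast samples that last long enough.
        events, start = [], None
        for i, f in enumerate(np.append(fast, False)):
            if f and start is None:
                start = i
            elif not f and start is not None:
                if i - start >= min_samples:
                    events.append((start, i))
                start = None
        return events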

This work was presented at the 2013 Vision Sciences Society conference.