One of the primary visual areas in the brain that tracks motion has surprised scientists: Instead of directly mapping objects by the way their image falls on the retina, it is the first visual area to map them in a representation of the space surrounding the viewer.
The finding could be important both for treatment of visual complications of disorders such as Alzheimer’s disease and for understanding how the brain assembles a stable picture from the perpetual jerky scanning movements the eyes make.
“Our visual sensors are like careening video cameras operated by a drunkard,” said Giovanni d’Avossa, M.D., an instructor in neurology and lead author of a paper in a recent issue of Nature Neuroscience. “Our eyes sometimes move smoothly, as when they track the course of a thrown ball, but more frequently they engage in fast, ballistic movements called saccades that abruptly shift our gaze from one point to another.”
The brain must piece together the separate jigsaw-like snapshots formed at each brief fixation to create the perception of a coherent, stable representation of the world, d’Avossa said.
With colleagues at the School of Medicine and universities in Italy, d’Avossa studied the middle temporal cortex (MT), one of up to 32 brain areas involved in processing various aspects of vision.
Researchers had already recognized the MT as playing a role in tracking visual motion. In prior brain imaging studies of the MT, subjects focused on the center of their visual field while an object moved through the field. When the moving object was on the right side of the visual field, the MT became active on the left side of the brain, and vice versa.
This pattern matches the way the optic nerves feed data to the brain: Their fibers partially cross at the optic chiasm behind the eyes, so that data from each half of the visual field goes to the opposite hemisphere of the brain. This implied that the MT was dealing with data in retinotopic space — the neuroscientists’ term for a map of visual space that corresponds point by point with the light-sensing cells of the retina.
To form a stable picture of where things are in the world, the brain must also build a second representation of visual space, the spatiotopic map. This map draws on data from the retinas but must be constantly updated to compensate for the eyes’ incessant careening.
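The arithmetic behind that updating can be sketched simply: if the brain knows where the eyes are pointing, an object’s position in the world is just its retinal position shifted by the current gaze direction. The toy code below is an illustrative assumption, not the study’s model or analysis; the function name and the plain vector addition are hypothetical simplifications meant only to show how the same screen position can be recovered from two different retinal images taken before and after a saccade.

```python
# Minimal sketch (not the study's model): combining a retinotopic position with
# gaze direction to recover a spatiotopic (screen-centered) position.
# All names and the simple vector addition are illustrative assumptions.

def retinotopic_to_spatiotopic(retinal_xy, gaze_xy):
    """Return an object's position in screen coordinates.

    retinal_xy: (x, y) offset of the image from the center of the retina, in degrees
    gaze_xy:    (x, y) point on the screen the eyes are currently fixating, in degrees
    """
    rx, ry = retinal_xy
    gx, gy = gaze_xy
    # The object's place in the world is its retinal offset shifted by where the eyes point.
    return (gx + rx, gy + ry)

# After a saccade the retinal image jumps, but the recovered screen position is unchanged:
fix_left, fix_right = (-10.0, 0.0), (10.0, 0.0)   # two fixation points
obj_screen = (5.0, 0.0)                           # object stays put on the screen

retinal_when_left = (obj_screen[0] - fix_left[0], obj_screen[1] - fix_left[1])      # (15, 0)
retinal_when_right = (obj_screen[0] - fix_right[0], obj_screen[1] - fix_right[1])   # (-5, 0)

assert retinotopic_to_spatiotopic(retinal_when_left, fix_left) == obj_screen
assert retinotopic_to_spatiotopic(retinal_when_right, fix_right) == obj_screen
```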
For the new studies, d’Avossa and colleagues at Italy’s Pisa Vision Laboratory had subjects fixate to either side of a moving object, so that the same on-screen motion fell on different parts of the retina. They found that activity in the MT depended only on the object’s position on the screen, not on the position of its image on the retina.
“This suggests that the MT can analyze visual data in spatiotopic terms,” d’Avossa said. “It’s no small task to transform data from retinal coordinates to real-world coordinates, and it’s quite surprising to find that a brain area previously known only for motion tracking might have a major role in this component of spatial perception.”
Patients with Alzheimer’s disease sometimes experience spatial disorientation, and d’Avossa said the new results suggest the MT may be a good place to search for the source of this dysfunction. Problems in the MT have also been linked to dyslexia, so the findings could shed further light on that disorder as well.