Vision Squared

By Claudia Avalos

March 7, 2013

In a little more than half a day, your brain will have processed enough visual information to take up all of the available hard drive space on two computers. How does the brain do it? Imagine a digital image of a landscape. A computer stores this picture as a set of pixels; a human observer, on the other hand, perceives shadows, textures, and shapes. The brain identifies and sorts all of this data by converting signals received by the eye, which essentially encode a set of pixels, into more and more abstract forms through successive layers of neural networks.

Using computational modeling, UC Berkeley Associate Professor of Vision Science Bruno Olshausen is creating maps that show what the data output from each of these layers might look like. The model consists of simulated neurons organized into two layers. The dots on the map represent the different contrast elements captured by neurons in the first layer; their positions tie that contrast data back to actual locations in the image; and the colors show how the information passed from the first layer correlates with activity in the second layer of processing.

The color scheme ranges from red to blue: red indicates a positive correlation, blue a negative one, and gray no correlation at all. The model successfully combines the features output by first-layer neurons into a second layer, much as researchers think the brain's own visual processing does. If the model agrees with data from brain imaging experiments, it could be an important advance in understanding how our brains analyze what we see.
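
For readers curious about the mechanics, the sketch below is a minimal illustration of the idea, not Olshausen's actual model: a first layer that measures local contrast at a grid of image locations (the "dots"), a second layer that combines those outputs, and a coloring step that labels each pair of units red, blue, or gray by the sign of their correlation. The function names, the filter bank, the mixing matrix, and the 0.2 correlation cutoff are all hypothetical choices made for the example.

```python
import numpy as np

def first_layer_responses(image, filters, step=8):
    """Layer 1: measure local contrast at a grid of image locations.

    Each grid point plays the role of one 'dot' on the map: a simulated
    neuron looking at a small patch of the image.  `filters` is a bank
    of contrast-sensitive kernels, shaped (n_filters, k, k) -- a
    hypothetical stand-in for the model's first-layer receptive fields.
    """
    k = filters.shape[1]
    locations, responses = [], []
    for r in range(0, image.shape[0] - k + 1, step):
        for c in range(0, image.shape[1] - k + 1, step):
            patch = image[r:r + k, c:c + k].astype(float)
            patch -= patch.mean()  # keep contrast, discard overall brightness
            responses.append([np.sum(patch * f) for f in filters])
            locations.append((r, c))
    return np.array(locations), np.array(responses)

def second_layer_responses(layer1, weights):
    """Layer 2: combine first-layer outputs into more abstract features.

    `weights`, shaped (n_filters, n_units), is a made-up mixing matrix;
    in the real model these connections would be learned.
    """
    return np.maximum(layer1 @ weights, 0.0)  # simple rectified combination

def correlation_colors(layer1, layer2, cutoff=0.2):
    """Color each (first-layer, second-layer) unit pair by correlation:
    red for positive, blue for negative, gray for essentially none.
    The 0.2 cutoff is an arbitrary illustrative choice."""
    colors = np.empty((layer1.shape[1], layer2.shape[1]), dtype=object)
    for i in range(layer1.shape[1]):
        for j in range(layer2.shape[1]):
            r = np.corrcoef(layer1[:, i], layer2[:, j])[0, 1]
            colors[i, j] = "red" if r > cutoff else "blue" if r < -cutoff else "gray"
    return colors
```

Running an image through both layers and coloring the resulting correlation matrix would give a crude analogue of the published map, with each dot carrying the image location recorded by the first function.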

This article is part of the Spring 2010 issue.
