Prof. Shimon Ullman: "I expect the day will come when we can write a computer program that will learn on its own how to use vision to understand the world. The computer will be able to identify the situation it 'sees' through a camera (say a dog, ears back and tail up, facing a cat with its back arched) and to anticipate the outcome of the scene."
Aristotle realized that most of what we know of the world comes to us through our eyes. But how does that information – transferred from the eye via encoded signals – get deciphered, stored and retrieved in the brain? Prof. Shimon Ullman of the Computer Science and Applied Mathematics Department helped found a branch of science that investigates how our brains process visual input. "Vision is our model," he says. "It's a window we can use to peek at general cognitive processes such as thought and memory."
Ullman was among the first to use theoretical computation techniques to conduct brain research. Together with his research team, he builds mathematical models that simulate the brain's information-processing activities. Such models are only as good as their ability to "understand" what they "see." A simple model can pick out the figure of a person, even one embedded in the background. A more advanced model can recognize a known person and, at the next stage, can tell whether that person is happy or sad. A large part of this knack lies in the ability to compare the image in view to a stored collection of "interpreted images" in the memory banks. These insights might shed new light on thought, memory and learning in the brain. They may aid in developing new treatments for neurological diseases, as well as provide the basis for new products and advanced tools for the electro-optics, aviation and space industries.
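The memory-comparison idea described above can be sketched in miniature as a nearest-neighbor matcher: a seen image is assigned the interpretation of the stored image it most closely resembles. This is an illustrative simplification, not Ullman's actual models; the tiny 3x3 "images" and their labels are invented for the example.

```python
def distance(img_a, img_b):
    """Sum of squared pixel differences between two equal-sized images."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b))

def recognize(seen, memory):
    """Return the label of the stored 'interpreted image' closest to the seen one."""
    return min(memory, key=lambda label: distance(seen, memory[label]))

# A hypothetical memory bank: tiny 3x3 images flattened to lists, each with
# a stored interpretation (label).
memory = {
    "vertical bar":   [0, 1, 0,
                       0, 1, 0,
                       0, 1, 0],
    "horizontal bar": [0, 0, 0,
                       1, 1, 1,
                       0, 0, 0],
}

# A slightly noisy vertical bar still matches the right stored interpretation.
seen = [0, 1, 0,
        0, 1, 0,
        1, 1, 0]
print(recognize(seen, memory))  # → vertical bar
```

Real visual recognition is far harder, of course: the same object must be matched despite changes in viewpoint, lighting and pose, which is why comparing raw pixels, as above, only works in toy settings.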
Prof. Shimon Ullman is the incumbent of the Ruth and Samy Cohn Professorial Chair of Computer Sciences.