A new theory developed by a Weizmann Institute mathematician may explain one of the most remarkable and mysterious capacities of the brain -- its ability to recognize familiar objects even when conditions for viewing, such as lighting, distance or position, change dramatically.
Prof. Shimon Ullman of the Department of Applied Mathematics and Computer Science has developed a computational model that describes how the brain may process visual information to make such recognition possible. According to him, the brain stores not only "snapshots" of objects but also knowledge, gained from experience, about the way objects change under various viewing conditions. For example, after seeing many smiling faces, it can generate a smiling version of any glum-looking face.
Using this knowledge, the brain generates numerous versions of an image newly presented to it. In parallel, it creates multiple versions of an image stored in its memory. These two sets of versions are then compared, and when a close match is found between two images -- bingo! -- recognition occurs. According to the model, the process takes only a fraction of a second because the brain concurrently generates several thousand variations of each image.
"Recognition is not a straightforward comparison, it's an active trial-and-error process involving multiple transformations that take place before a comparison with a stored image is performed," Ullman says.
Ullman's Ph.D. student Assaf Zeira has used this theory to teach a computer to recognize faces. His program enables the machine to recognize a virtually unlimited number of views of a particular face based on several snapshots of that face stored in its memory. Ullman's model will be further tested in biological experiments, some of them to be conducted by Weizmann Institute neurobiologists.
Prof. Ullman is the incumbent of the Ruth and Samy Cohn Chair of Computer Science. Funding for this research was provided by the Israel Science Foundation.