Whether spellbinding or tedious, a video film may seem like a simple sequence of pictures. Yet it contains often-overlooked spatial and temporal information. Dr. Michal Irani of the Weizmann Institute's Computer Science and Applied Mathematics Department develops advanced methods for extracting and analyzing this information. She does so by formulating mathematical algorithms that make it possible to recover space-time visual information lying beyond the physical limits of various visual sensors, including the human eye.
For example, one such algorithm can align and integrate information from two video sequences recorded simultaneously by two (possibly non-identical) visual sensors. These sensors may differ in field of view, optical wavelength, space-time frequency, and even zoom, to a significant degree. Thus, aligning and fusing information across diverse optical wavelengths gives rise to new sensors capable of simultaneous day and night vision, while integrating information across a wide range of zooms allows for the detection of specific persons in a packed sports stadium. Part of the visual information is not apparent to us because it is characterized by higher space-time frequencies than the human eye or a video camera can detect. Integrating information from several such limited sensors can unveil fine details and very high-speed events that no individual sensor could see. From security to medicine, entertainment to advanced robotic visual systems, this technology opens the door to a new generation of visual sensors and greatly enhanced visual capabilities.
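To make the alignment-and-fusion idea concrete, here is a minimal sketch of registering two frames and averaging them. It assumes the two sensors differ only by a rigid pixel translation, and it uses phase correlation, a standard frequency-domain alignment technique, which is an illustrative stand-in and not necessarily the method developed in Dr. Irani's work.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation mapping frame b onto frame a.

    Classic phase correlation: the normalized cross-power spectrum of the two
    frames has a sharp peak at the relative translation.
    """
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # normalize to keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative offsets
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

def fuse(a, b, shift):
    """Register frame b onto frame a with the estimated shift, then average."""
    b_aligned = np.roll(b, shift, axis=(0, 1))
    return (a + b_aligned) / 2.0

# Demo: a synthetic frame and a circularly translated copy, standing in for
# two sensors viewing the same scene from slightly shifted positions.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -5), axis=(0, 1))

shift = phase_correlation_shift(frame, shifted)
fused = fuse(frame, shifted, shift)
print(shift)  # recovered translation
```

In practice, sensors of different wavelengths or zooms require far richer alignment models (parametric or dense motion, multi-scale matching) and fusion rules than simple averaging; this sketch only illustrates the registration-then-integration pattern.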
Dr. Irani is the incumbent of the Frances Hersh & Max Hersh Career Development Chair.