The trouble with many robots and other automated systems equipped with artificial vision is that their eyes "see" the world in only two dimensions. As a result, they have great difficulty assessing the relative positions of objects. Existing methods for reconstructing 3-D images tend to be slow and cumbersome, but two Weizmann Institute physicists have developed a 3-D imaging technique that greatly speeds up and simplifies the process.
The system -- developed by Drs. Daniel Zajfman and Oded Heber of the Particle Physics Department -- uses two regular video cameras, a light source and a transparent fluorescent screen placed between the cameras and the object to be filmed. When light is reflected off the object, it strikes the screen and creates a flash that the cameras record along with the image of the object.
One camera films continuously, while the other has a shutter that opens for only a billionth of a second at a time, registering just a tiny fraction of the light particles emitted by the flashes. Because both the speed of light and the time it takes for the flashes to fade on the screen are known, the exact distance between the screen and each point on the object's surface can be determined. This depth information is then combined with the 2-D picture to form a 3-D image.
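The underlying idea resembles time-of-flight ranging: because light travels at a known, fixed speed, a travel time translates directly into a distance. The sketch below is only an illustration of that general principle, not the Institute's actual algorithm; the function name and the round-trip assumption are ours.

```python
# Illustrative time-of-flight calculation (a sketch of the general
# principle, not the Weizmann system's actual method): if a light
# pulse's round trip takes t seconds, the one-way distance is c*t/2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(t_seconds: float) -> float:
    """One-way distance for a light pulse whose round trip took t_seconds."""
    return SPEED_OF_LIGHT * t_seconds / 2.0

# A one-nanosecond round trip corresponds to roughly 15 cm one way,
# which is why the shutter in the article must open for only about
# a billionth of a second to resolve centimetre-scale depth.
print(distance_from_round_trip(1e-9))
```

This also explains the extreme shutter speed quoted in the article: at the speed of light, depth differences of a few centimetres correspond to timing differences of well under a nanosecond.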
The new 3-D imaging system can be applied in such diverse fields as aerial photography, cartography and surveying.