Clockwise: Prof. Ehud Shapiro, Prof. Zvi Livneh, Dr. Tamar Paz-Elizur, Ph.D. student Yaakov Benenson and Dr. Rivka Adar.
Guinness World Record
Fifty years after the discovery of the structure of DNA, a Weizmann team has found a new use for this celebrated molecule – as fuel for molecular computation systems. They have developed a device – an improvement upon a molecular computing device reported by the team around a year ago – in which a single DNA molecule provides the computer with input data as well as all the necessary fuel. A spoonful (5 milliliters) of solution can contain 15,000 trillion such computers. The new device was awarded the Guinness World Record for “smallest biological computing device.”
The study was carried out by Prof. Ehud Shapiro, Prof. Zvi Livneh, Yaakov Benenson, Dr. Rivka Adar and Dr. Tamar Paz-Elizur of the Institute’s Biological Chemistry Department and the Computer Science and Applied Mathematics Department.
Prof. Ehud Shapiro’s research is supported by the Samuel R. Dweck Foundation; the Dolfi and Lola Ebner Center for Biomedical Research; the Benjamin and Seema Pulier Charitable Foundation; the Robert Rees Fund for Applied Research; and Yad Hanadiv.
Prof. Zvi Livneh is the incumbent of the Maxwell Ellis Professorial Chair in Biomedical Research. His research is supported by the Dolfi and Lola Ebner Center for Biomedical Research; the Levine Institute of Applied Science; the Dr. Josef Cohn Minerva Center for Biomembrane Research; and the M.D. Moross Institute for Cancer Research.
Left to right: Dr. Rami Marelly and Prof. David Harel.
Making computers play
Ever feel you have too much on your mind and just can't sort it out? Imagine, then, that you were told to build a computer system that could simulate a football game and predict its development. You’d have to work on each one of the game’s many elements separately. Specifically, you’d have to take into account the physical and behavioral characteristics of each of the players and of the referees, carefully laying out the way they react to each possible set of circumstances. You’d also have to consider the features of the ball and the field, the condition of the grass, the weather and so on. Then there are the rules of the game, as well as the possibility of human error in referee decisions.
The numerous elements involved make building such a computerized system through regular programming virtually impossible. This is true for many other kinds of systems as well – such as aerospace systems, complex communication networks, embryonic development and even work management in the office. All include many events that trigger multiple and diverse reactions, which is why they are called “reactive systems.”
In the past, Prof. David Harel, Dean of the Faculty of Mathematics and Computer Science at the Weizmann Institute, created a visual programming language called Statecharts, aimed at programming reactive systems. Now, together with Dr. Rami Marelly (who worked with Harel as a graduate student), he proposes a different method that may significantly advance this field.
Instead of describing every component of the game separately, in what is called “intra-object” style, Harel and Marelly propose to describe the game as a collection of scenarios depicting the possible interactions between the components (what they term “inter-object” style). The approach is similar to that of a radio or TV commentator, who rather than describing characteristics of the players and the ball, describes the dynamics among them.
The collection of scenarios includes desirable scenarios (for example, a series of passes leading to a goal), necessary scenarios (a ball kicked upward must eventually come down), forbidden scenarios (the presence of more than 22 players on the field), and the various possible ways in which the game can progress.
An important aspect of the method is that it’s easy to use. A person trying to input the possible scenarios does not need a background in programming. “There is no need to write a single line of code,” says Harel. Instead of writing a cryptic input command, the programmer “plays in” the scenarios in a natural, relatively simple way. If the programmer wants to play in the “behavior” of a telephone, for example, an image of a phone appears on-screen. The programmer – who could be anyone – would “teach” the computer how he or she wants the phone to work, feeding in the desirable, necessary and forbidden scenarios. The computer, in turn, would “understand” how the different scenarios are related, and their combination would constitute the system’s full behavioral repertoire.
This approach is supported by a tool built by the scientists for this purpose, called the Play-Engine. It can analyze the collection of behaviors as one unit and decide, using combinations of scenarios, how to respond to changing circumstances – even those that were not directly fed into the system.
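As a rough illustration only (the actual Play-Engine is far richer, and every class, method and event name below is invented for this sketch), the scenario-based "inter-object" idea can be mimicked in a few lines of Python: each scenario requests or forbids follow-up events for a given trigger, and an engine combines all played-in scenarios to decide what may happen next.

```python
# Illustrative sketch of scenario-based execution; not the Play-Engine's API.

class Scenario:
    """A scenario reacts to a triggering event by requesting or forbidding follow-up events."""
    def __init__(self, name, trigger, requests=(), forbids=()):
        self.name = name
        self.trigger = trigger
        self.requests = set(requests)
        self.forbids = set(forbids)


class Engine:
    """Combines every played-in scenario and picks follow-up events consistent with all of them."""
    def __init__(self, scenarios):
        self.scenarios = scenarios

    def step(self, event):
        requested, forbidden = set(), set()
        for s in self.scenarios:
            if s.trigger == event:
                requested |= s.requests
                forbidden |= s.forbids
        return sorted(requested - forbidden)


# Toy "telephone" behavior, played in as scenarios rather than written as code.
scenarios = [
    Scenario("incoming call rings", "incoming_call", requests={"ring", "show_caller_id"}),
    Scenario("silent mode forbids ringing", "incoming_call", forbids={"ring"}),
    Scenario("answering connects the call", "answer", requests={"connect"}),
]

engine = Engine(scenarios)
print(engine.step("incoming_call"))  # ['show_caller_id'] -- ringing is ruled out by the forbidden scenario
print(engine.step("answer"))         # ['connect']
```

The point of the toy is the combination step: no single scenario describes the full behavior, yet the engine's answer to each event emerges from all of them together.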
The new method has produced very encouraging results and is already being used to construct computer models of complex biological processes, such as the differentiation of a group of embryonic cells in the C. elegans roundworm. Although the method was only recently published in the scientific literature, numerous scientists have already found it efficient and convenient for programming reactive systems. The Play-Engine promises to produce a significant change in the programming of large systems. It may also allow people with no background in computers to easily design new programs on their PCs, specify the dynamics of their Web pages or reprogram their home appliances.
Harel and Marelly have authored a book called Come, Let’s Play that describes their work in detail. It will be published in May with the Play-Engine software attached so that readers can “play” on their own.
Out With Dead Ends
In a follow-up project to Harel and Marelly's work, a tool called "smart play-out" was developed by graduate student Hillel Kugler, working under the guidance of Harel and Prof. Amir Pnueli of the Computer Science and Applied Mathematics Department. Its aim is to prevent the Play-Engine from running into a dead end or other undesirable situations and to minimize surprises in the behavior of the system.
Prof. David Harel is the incumbent of the William Sussman Professorial Chair. His research is supported by the Arthur and Rochelle Belfer Institute of Mathematics and Computer Science; the Gulton Foundation; and the Ida Kohen Center for Mathematics Research.
Tiniest computers ever. A billion operations a second
This is the vision that inspires a team of Weizmann Institute scientists, who have recently made an important early step toward its realization, creating the first autonomous biological nanocomputer. As reported in Nature, as many as a trillion such computing devices can work in parallel in a single drop of water in the lab of Prof. Ehud Shapiro of the Computer Science and Applied Mathematics Department and the Biological Chemistry Department. The devices collectively perform a billion operations per second with greater than 99.8% accuracy per operation while requiring less than a billionth of a watt of power.
The nanocomputer created is computationally very limited and too simple to have immediate applications; however, it may pave the way to future computers with unique biological and pharmaceutical applications able to operate within the human body. 'It will probably take decades,' says Shapiro. 'But we believe this vision is realizable. The living cell contains incredible molecular machines that manipulate information-encoding molecules such as DNA and RNA in ways that are fundamentally very similar to computation. Since we don't know how to effectively modify these machines or create new ones just yet, the trick is to find naturally existing machines that, when combined, can be steered to actually compute.'
Shapiro challenged Ph.D. student Yaakov Benenson to do just that: to find a molecular realization of one of the simplest mathematical computing machines - a finite automaton that detects whether a sequence of 0's and 1's has an even number of 1's.
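For readers unfamiliar with finite automata, the abstract machine Benenson set out to realize can be sketched in a few lines of Python. This models only the mathematical automaton, not its molecular implementation.

```python
# A two-state finite automaton that accepts a sequence of 0's and 1's
# exactly when it contains an even number of 1's.

def even_ones(bits):
    state = "even"                        # start: zero 1's seen so far, which is even
    for b in bits:
        if b == 1:                        # a 1 flips the state; a 0 leaves it unchanged
            state = "odd" if state == "even" else "even"
    return state == "even"

print(even_ones([1, 0, 1, 1, 0, 1]))      # True  (four 1's)
print(even_ones([1, 0, 0]))               # False (one 1)
```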
Biological editing kit
Benenson came up with a solution using DNA molecules and two naturally occurring DNA-manipulating enzymes: Fok-I and Ligase. Operating much like a biological editing kit, Fok-I functions as a chemical scissors, slicing DNA into a specific pattern, whereas the Ligase enzyme seals DNA molecules together.
The nanocomputer can be programmed to perform several simple tasks, depending on the software molecules mixed into the solution. The software molecules can be used to create a total of 765 software programs. Several of these programs were tested in the lab, including the 'even 1's checker,' as well as programs that check whether, in a list of 0's and 1's, every 0 comes before every 1, or whether the list both starts with a 0 and ends with a 1.
Four letters
How do DNA strands come to contain the symbols 0 and 1? DNA strands are usually depicted as a scroll of recurring 'letters,' in varied combinations, which represent DNA's constituents (four chemical bases known as A, T, C, and G). The nanocomputer created by Shapiro's team uses these four DNA bases to encode the input data as well as the program rules underlying the computer 'software.' The team decided that the letter pattern 'CTGGCT' in the input molecule would signify 1 and 'CGCACG' would signify 0.
In addition to having two symbols, the input molecule, when mixed with hardware and software molecules, also has two states. When the hardware molecule Fok-I recognizes a symbol and 'cuts' the DNA, it leaves one strand longer than the other, resulting in a single-strand overhang called a 'sticky end' (see diagram). Since Fok-I makes its incision at the site of the symbol, the sticky end is what remains of the symbol. Fok-I may leave the symbol's 'head' or 'tail' attached. These are the two possible states. A computing device that has two possible states and two possible symbols is called a two-state, two-symbol finite automaton.
Input output
Two molecules with complementary sticky ends can temporarily stick to each other (a process known as hybridization). In each processing step the input molecule hybridizes with a software molecule that has a complementary sticky end, allowing the hardware molecule Ligase to seal them together using ATP molecules as energy.
Then comes Fok-I, which cleaves the input molecule again, in a location determined by the software molecule. Thus a sticky end is again exposed, encoding the next input symbol and the next state of the computation. Once the last input symbol is processed, a sticky end encoding the final state of the computation is exposed and detected - again by hybridization and ligation - by one of two 'output display' molecules. The resulting molecule, which reports the output of the computation, is made visible to the human eye in a process known as gel electrophoresis.
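The information flow of this cycle can be sketched in software. The sketch below is a loose simulation under simplifying assumptions (states named S0 and S1, the software molecules reduced to a transition table, cut positions and strand chemistry omitted); it reproduces the logic described above, not the chemistry.

```python
# Software-only simulation of the processing cycle: the exposed "sticky end"
# is modeled as a (state, symbol) pair, and each software molecule as a rule
# mapping that pair to the next state.

SYMBOL = {"CTGGCT": 1, "CGCACG": 0}       # input encodings quoted in the article

def run(input_sites, transitions, start_state="S0"):
    """input_sites: list of 6-letter recognition sites along the input molecule.
    transitions: dict {(state, symbol): next_state}. Returns the final state."""
    state = start_state
    for site in input_sites:
        symbol = SYMBOL[site]
        # Hybridization + ligation with the matching software molecule determines
        # where Fok-I cuts next, i.e. the next (state, symbol) pair.
        state = transitions[(state, symbol)]
    return state

# Transition rules for the 'even 1's checker': S0 = even so far, S1 = odd so far.
even_ones_program = {
    ("S0", 0): "S0", ("S0", 1): "S1",
    ("S1", 0): "S1", ("S1", 1): "S0",
}

molecule = ["CTGGCT", "CGCACG", "CTGGCT"]   # encodes the input 1, 0, 1
print(run(molecule, even_ones_program))      # 'S0': the input contains an even number of 1's
```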
Though their potential as miniature 'doctors' may be decades away from realization, these devices, if further advanced, could be used to screen DNA libraries in vitro. This in itself could contribute to the future development of gene therapies tailored to individuals according to their genetic make-up. Both visions have a long way to go, but these small beginnings are harbingers of a much enhanced quality of life.
This study was conducted in collaboration with Prof. Zvi Livneh, Dr. Tamar Paz-Elizur, and Dr. Rivka Adar of the Weizmann Institute's Biological Chemistry Department and Prof. Ehud Keinan of the Technion-Israel Institute of Technology's Chemistry Department.
Computers in the Dust
In 1954 researchers at the Weizmann Institute of Science built the first computer in Israel, and one of the first computers worldwide, which they fondly named WEIZAC. Thirty years later, silicon chip computers routinely functioned at rates thousands to millions of times faster than WEIZAC.
Today, research at the Institute targets the development of ever-faster, more compact chips designed according to the emerging principles of quantum electronics. These will inevitably leave silicon chips in the dust, much as the silicon chips once turned WEIZAC into a museum exhibit.
However, surprising as it may sound, even before quantum electronic chips have become a reality, they already have a potential successor - the tiny biological computer developed by Prof. Ehud Shapiro's team.
Prof. Shapiro received his Ph.D. from Yale University and joined the Weizmann Institute in 1982. During the 1980s he was involved with the Japanese Fifth Generation Computer Project and published numerous scientific papers in the area of concurrent logic programming languages.
The design of a universal molecular computer, which inspired the creation of the molecular automaton reported in Nature, was recently awarded U.S. Patent 6,266,569.
Prof. Ehud Shapiro, Dr. Rivka Adar, and Ph.D. student Yaakov Benenson
Prof. Shapiro's research is supported by the Dolfi and Lola Ebner Center for Biomedical Research, Yad Hanadiv, and the Samuel R. Dweck Foundation.
It is unusual for theoretical mathematicians to indulge in brain surgery, and certainly not recommended without prior training. "I was trying to open up engineers' skulls and mess around to see how they think," says David Harel. The accent here is on "see." Thus, the Statecharts visual language was born.
Professor David Harel is one of three recipients of the first Prime Minister's Prize for Software, awarded for his Statecharts language. The concept allows engineers and designers to visually describe the workings of complex systems; it's become the standard for computerized engineering projects worldwide.
In 1983, Harel was comfortably installed as a tenured faculty member in the Institute's Department of Applied Mathematics after completing degrees at Bar-Ilan and Tel Aviv Universities, with a Ph.D. from MIT earned in an as-yet unrivaled MIT record of one year and eight months.
When Israel Aircraft Industries called with an urgent request for help with the Lavi fighter jet project, Harel jumped at the chance. Apparently, there was a communications breakdown between the aircraft designers and the subcontractors charged with building parts for the bird. A totally new approach was desperately needed to cut down on the mammoth 1,000-page instruction manual, to increase clarity and simplicity. For a complex system such as an airplane, "There are zillions of scenarios and you can't just list them all and expect people to understand and go from there. I would ask 'deep and profound' questions like 'What happens if I press this button?'"
Sometimes they couldn't respond; many issues hadn't been previously raised. Harel took notes, simplifying what he saw, read and heard around him, developing his own generic descriptive language.
While writing and working at the mathematics of these descriptions, Harel was also adding illustrations. "I would doodle on the side. I'd tell them, 'Actually, what you are saying is we have this and this.' The pictures started to become more elaborate. Gradually the text became less important and we simply stopped using it."
Statecharts' popularity snowballed through word-of-mouth reports and the lectures Harel began delivering once he recognized the originality of his language. He started publishing on the subject in the scientific literature. The next step was the founding of a private company in 1984, where he and his colleagues developed Statemate, a sophisticated Statecharts-based software tool that lets engineers simulate complex systems on computer.
Harel has continued refining Statecharts. A second software tool, Rhapsody, was released in early 1997. Statecharts' place in the history books is now assured with its status as one of the three basic components of a new software development approach called Unified Modeling Language (UML). UML has recently been announced as the industry standard for object-oriented analysis and design.
Statecharts can be thought of as the visual punch behind the majority of computerized engineering projects, from aircraft to cellular telephones, BMWs to the Pathfinder Mars probe. "I even heard recently," laughs Harel, "that it's being used to design the controller for flushing toilets on Lufthansa aircraft!" What greater compliment could there be for an amateur brain surgeon?
Da Vinci's Mona Lisa rendered with different expressions
A new theory developed by a Weizmann Institute mathematician may explain one of the most remarkable and mysterious capacities of the brain -- its ability to recognize familiar objects even when conditions for viewing, such as lighting, distance or position, change dramatically.
Prof. Shimon Ullman of the Department of Applied Mathematics and Computer Science has developed a computational model that describes how the brain may process visual information to make such recognition possible. According to him, the brain stores not only "snapshots" of objects but also knowledge, gained from experience, about the way objects change under various viewing conditions. For example, after seeing many smiling faces, it can generate a smiling version of any glum-looking face.
Using this knowledge, the brain generates numerous versions of an image newly presented to it. In parallel, it creates multiple versions of an image stored in its memory. These two sets of versions are compared and when a close match is found between two images -- bingo! -- recognition occurs. According to the model, the process takes only a fraction of a second because the brain concurrently generates several thousand varieties of each image.
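A toy illustration of this matching idea might look like the following sketch. It is not Ullman's actual model: the "images" are tiny invented vectors, and the stored transformations (a brightness change and a small shift) stand in for the far richer knowledge the theory ascribes to the brain.

```python
# Recognition by comparing transformed versions of a new image with transformed
# versions of stored images; a close match between any pair means recognition.
import numpy as np

transformations = [
    lambda img: img,              # identity
    lambda img: img * 1.2,        # lighting change (brightness scaling)
    lambda img: np.roll(img, 1),  # small positional shift
]

def recognize(new_image, memory, threshold=0.5):
    for name, stored in memory.items():
        new_versions = [t(new_image) for t in transformations]
        stored_versions = [t(stored) for t in transformations]
        for a in new_versions:
            for b in stored_versions:
                if np.linalg.norm(a - b) < threshold:
                    return name        # close match found: recognition occurs
    return None

memory = {"face_A": np.array([0.2, 0.9, 0.4, 0.1])}      # a memorized "snapshot"
query = np.array([0.24, 1.08, 0.48, 0.12])               # the same face, seen brighter
print(recognize(query, memory))                           # 'face_A'
```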
Prof. Shimon Ullman
"Recognition is not a straightforward comparison, it's an active trial-and-error process involving multiple transformations that take place before a comparison with a stored image is performed," Ullman says.
Ullman's Ph.D. student Assaf Zeira has used this theory to teach a computer to recognize faces. His program enables the machine to recognize an endless number of views of a particular face based on several snapshots of this face stored in its memory. Ullman's model will be further tested in biological experiments, some of them to be conducted by Weizmann Institute neurobiologists.
Prof. Ullman is the incumbent of the Ruth and Samy Cohn Chair of Computer Science. Funding for this research was provided by the Israel Science Foundation.
Encoded visual information (top) is revealed by overlaying a second encoded transparency (bottom)
Upgrading the confidentiality and authenticity of data communications and of computer-stored information is a major goal of leading cryptography experts at the Weizmann Institute's Faculty of Mathematical Sciences.
By studying and improving encryption techniques, they are developing new tools to ensure that computer files cannot be altered without detection and unauthorized users do not enter the system. By designing unforgeable digital identification systems, they are enabling verification of the authenticity of computer-to-computer communications and of network users. Institute computer scientists also investigate the theory of cryptographic transformations and random number generation, expanding basic knowledge in the field.
Before coming to the Institute in 1982, Prof. Adi Shamir was one of the developers of the extremely sophisticated RSA public key system. This benchmark cryptographic approach is currently used in many commercial software products and in secure telephone and network systems. In Rehovot, he and Dr. Amos Fiat designed a method that provides identification, authentication and signature facilities for digital communications, enhancing computer security. The procedure was patented by the Institute's Yeda Research and Development Co. and is presently used in various applications, including the programming of "smart cards" to ensure that only authorized subscribers can access satellite pay-TV.
"Zero-knowledge interactive proofs," a theoretical concept that underlies the Fiat-Shamir approach, was designed by Prof. Shafi Goldwasser when she was at MIT, working with her colleagues there and at the University of Toronto. This technique enables, among other things the transmission of an identification password in a way that provides no information about that password to an unauthorized eavesdropper. The wide applicability of zero-knowledge proofs was shown by Goldwasser's colleague Prof. Oded Goldreich, then also at MIT, who studied this with MIT and Berkeley scientists.
Goldwasser and Goldreich, both now at the Weizmann Institute, are improving the encryption of computer files, so that encrypting small changes in a large file does not require rescrambling the complete file. This advance may speed the use of scrambling to foil the spread of computer viruses or for preparing multiple authenticated documents. Goldreich is also developing ways to disseminate database information through multiple computers under different auspices, so that the database owner cannot record the information being requested.
The security risks associated with data transmission over public communication lines are also being addressed by Dr. Moni Naor. Such communication interactions include bank and computer-purchase transactions, as well as the transmission of medical records, proprietary data, and telecommunications. Naor is designing improved cryptographic schemes for dealing with these issues.
He also investigates "secret-sharing" techniques, in which multiple keys held by different people are required to read or write confidential information. This idea is similar to the use of multiple signatures on checks. Naor and Shamir have recently implemented one of their concepts by designing a secret-sharing scheme for encrypting visual information.
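The visual secret-sharing idea can be sketched for a single row of pixels. The code below follows the spirit of the two-transparency construction attributed to Naor and Shamir (each secret pixel split into complementary subpixel patterns on two shares), with details simplified for illustration; the variable names are this sketch's own.

```python
# 2-out-of-2 visual secret sharing for one row of black/white pixels.
import random

def make_shares(secret_row):
    """secret_row: list of 0 (white) / 1 (black) pixels.
    Returns two shares, each a list of 2-subpixel patterns; either share alone looks random."""
    share1, share2 = [], []
    for pixel in secret_row:
        pattern = random.choice([(0, 1), (1, 0)])          # random subpixel pair
        share1.append(pattern)
        if pixel == 0:                                      # white pixel: identical patterns
            share2.append(pattern)
        else:                                               # black pixel: complementary patterns
            share2.append((1 - pattern[0], 1 - pattern[1]))
    return share1, share2

def overlay(share1, share2):
    """Stacking transparencies: a subpixel is black if it is black on either share.
    A white secret pixel overlays to one black subpixel; a black pixel overlays to two."""
    return [1 if sum(a | b for a, b in zip(p1, p2)) == 2 else 0
            for p1, p2 in zip(share1, share2)]

secret = [1, 0, 1, 1, 0]
s1, s2 = make_shares(secret)
print(overlay(s1, s2) == secret)        # True: the overlay reproduces the secret row
```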
Stimuli for experiments involving human and computer identification of faces
Institute mathematicians have made major strides in the continuing quest to transfer elements of human intelligence -- particularly vision and control of arm-movements -- to robots.
One of the main difficulties in endowing robots with "vision" is teaching them to recognize three-dimensional objects, which can be observed from an infinite number of viewing angles. In an approach developed by Prof. Shimon Ullman and Dr. Ronen Basri, a 3-D object is represented by a handful of 2-D snapshots. A set of mathematical calculations is then used to combine these snapshots so that the result coincides with a novel view from any desired angle.
This method has recently been successfully tested on computer models by Basri and Dr. Ehud Rivlin, who plan to try it on actual robots by storing in their memory 2-D pictures of a room containing contours of various objects, such as furniture and wall paintings. The robot, equipped with a camera, would then determine its location in the room, or reach a designated target, by taking pictures of various surrounding objects and then matching them against the contour bank in its memory.
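A rough sketch of the underlying idea, using invented toy data rather than anything from the published work: under orthographic projection, the point coordinates of a novel view of a rigid object lie in the span of the corresponding coordinates in a couple of stored views, so a small least-squares residual signals that the novel view belongs to the stored object.

```python
# Matching a novel view as a linear combination of stored 2-D views (toy data).
import numpy as np

def view_coords(points_3d, angle):
    """Orthographic x,y coordinates of 3-D points rotated about the y-axis."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return (points_3d @ rot.T)[:, :2]

points = np.random.rand(8, 3)                              # a toy rigid object: 8 points
stored = [view_coords(points, a) for a in (0.0, 0.4)]      # two memorized "snapshots"
novel = view_coords(points, 0.25)                          # a view from a new angle

# Basis: x and y coordinates of both stored views, plus a constant column.
basis = np.column_stack([v[:, i] for v in stored for i in (0, 1)] + [np.ones(8)])
residual = 0.0
for i in (0, 1):                                           # fit the novel x and y coordinates separately
    coeffs, *_ = np.linalg.lstsq(basis, novel[:, i], rcond=None)
    residual += np.sum((basis @ coeffs - novel[:, i]) ** 2)

print(residual < 1e-6)                                     # True: the novel view matches the stored object
```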
In related work, Dr. Shimon Edelman, collaborating with Prof. Heinrich Bulthoff of the Max Planck Institute for Biological Cybernetics, has shown that the human brain itself may recognize 3-D objects in this very fashion -- by comparing an object's 2-D image to a series of memorized "snapshots" of previously seen objects. The scientists have provided evidence that recognition takes place when the brain, through interpolation, selects the "snapshots" that most closely approximate the new object.
Prof. Tamar Flash focuses not on vision but on the brain's control of arm movements. She combines laboratory measurements with various modeling techniques to characterize the arm motions of healthy people. Furthermore -- together with Dr. Rivka Inzelberg of the Tel Aviv Medical Center and Prof. Amos Korczyn of this Center and of Tel Aviv University -- Flash studies the movements of people suffering from Parkinson's disease and other neurological disorders in order to pinpoint how they differ from the norm. Her research is also aimed at developing advanced motion control and planning schemes for robotic arms.
A Weizmann Institute-Brown University study has shown that object recognition by the human brain is far simpler than has been commonly thought, a finding that may facilitate the design of more effective vision-related systems, ranging from household robots to smart weapons.
Scientists have long been puzzled by the ability of the brain to reconstruct three-dimensional images from information conveyed by the two-dimensional retina. In a recently published study, Dr. Shimon Edelman of the Weizmann Institute's Department of Applied Mathematics and Computer Science and Prof. Heinrich Bulthoff of Brown University have determined that object recognition may be accomplished through a relatively simple process involving a comparison of two-dimensional views. Upon receiving input about an object, the brain compares its 2-D coordinates to a series of memorized "snapshots" of previously-seen objects. Recognition takes place when the brain, through interpolation, selects the "snapshots" that most closely approximate the new object.
In these studies, subjects were first shown computer images of a previously unseen 3-D object -- or "target" -- as viewed from various vantage points. The participants were then presented with single views of either the target or a "distractor" -- an object similar to but not identical to it. The researchers showed that the subjects could successfully identify the target only when its change in orientation was small enough for interpolation to take place, an indication that learned "snapshot" images, and not mentally rotatable three-dimensional models, underlie object recognition.
Funding for this research was provided by the U.S. Navy. Dr. Edelman holds the Elaine Blond Career Development Chair.