Science Feature Articles

When the Network is Defective

Sculpture: Igor Mitoraj
 
About one in 4,000 babies is born with DiGeorge syndrome, a congenital condition that causes various abnormalities, most often in the face and heart. This syndrome usually arises from the deletion of a small portion of chromosome 22, but strangely enough, the symptoms and their severity vary greatly among individuals. Facial defects can range from cleft palate and difficulty eating or speaking to mild muscle weakness on one side of the mouth; heart defects can include deformities in the wall separating the chambers or a misplaced aorta. Prof. Eldad Tzahor and research student Itamar Harel of the Institute’s Biological Regulation Department recently solved a piece of this puzzle by investigating the genetic network underlying the syndrome. Their findings appeared in the Proceedings of the National Academy of Sciences (PNAS).

Tzahor studies the developmental connection between the face and the heart. He has shown that at a certain stage in the early embryo, the cells that will give rise to parts of the heart as well as those that will form various facial muscles both originate in a common progenitor population in the mesoderm, an embryonic tissue that forms the heart, muscle, blood and skeleton. “It’s as though they all attend the same first grade class together,” says Tzahor, “and then move on to separate classrooms and different educational tracks.”
 
What genes are important for proper heart and head muscle development? Is more than one gene involved and, if so, how do they function together? Tzahor and Harel focused on transcription factors – proteins that bind to DNA and control the transcription (the first step in converting genetic information to proteins) of other genes. An initial screen for the factors that are active in these early mesoderm cells turned up some that were already associated with the development of these cells, as well as a new one called Lhx2. This transcription factor was known to be involved in other areas of development – including eyes, blood cells and hair follicles – but it had never been seen to play a role in heart or muscle development.
 

 

(l-r) Prof. Eldad Tzahor and Itamar Harel
 
Next, the researchers looked for the effects of these transcription factors in living organisms. They created mice in which each of the main transcription factors they had identified was individually knocked out, and they also managed to create several “double knockout” mice, in which two of the genes were inactivated at once. When just one gene was knocked out, the team observed various DiGeorge-like deformities in the heart and facial muscles – some mild and some more severe. But when two were knocked out at the same time, the embryonic cells failed to “graduate to second grade,” and specific muscles and heart structures did not form.

Combining these results with other experimental approaches, Tzahor and Harel were able to create a general model of the interactions between the various transcription factors. This model suggests that the actions of any one gene may be less vital than the workings of the transcription factor network as a whole. It appears that several of the genes regulate one another, either by direct action or indirectly, through other factors. The upshot, according to Tzahor, is that at least some of the slack might be transferred to other transcription factors, making the network relatively robust: “If one of the nodes is missing, it may still be able to function with only relatively minor defects,” he says. Nonetheless, because this network is involved in the construction and fine-tuning of the heart and facial structures, the loss of one transcription factor might result in too little (or too much) transcription of other genes further down the line and “slip-ups” in the orchestration of the final design.
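The kind of compensation Tzahor describes can be pictured with a toy calculation – a hypothetical illustration with invented factors and numbers, not the model published in PNAS. If each factor contributes to a target output and also boosts its active partners, removing one node merely dents the output, while removing two drops it below the level needed to build a structure:

```python
# Toy illustration of network robustness (hypothetical factors and values,
# not the published model). Each transcription factor contributes to a target
# "output" and also reinforces its active partners, so a single knockout is
# partially compensated while a double knockout is not.

def network_output(active):
    """active: dict mapping factor name -> True (present) or False (knocked out)."""
    base = {"TF_A": 1.0, "TF_B": 1.0, "TF_C": 1.0}   # intrinsic activity of each factor
    cross = 0.3                                      # mutual reinforcement per active partner
    total = 0.0
    for tf, on in active.items():
        if not on:
            continue
        partners = sum(1 for other, o in active.items() if o and other != tf)
        total += base[tf] + cross * partners
    return total

THRESHOLD = 2.0  # output needed for the structure to form (arbitrary)

for knockouts in ([], ["TF_A"], ["TF_A", "TF_B"]):
    state = {tf: tf not in knockouts for tf in ("TF_A", "TF_B", "TF_C")}
    out = network_output(state)
    verdict = "forms (perhaps with minor defects)" if out >= THRESHOLD else "fails to form"
    print(f"knocked out {knockouts or 'none'}: output = {out:.1f} -> structure {verdict}")
```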
 
In addition to revealing the network of interactions that directs heart and face development, the findings can help medical researchers understand how the various facial and cardiac defects link up in DiGeorge syndrome. In some cases, for instance, a mild deformity in the facial muscles could be tied to a more serious heart defect, and a better understanding of the connection has clinical implications.

Harel stresses that the picture is still incomplete. A small subset of people with the symptoms of DiGeorge syndrome does not carry the chromosomal deletion. This has led the researchers to believe that in humans, other genes beyond this segment of chromosome 22 are involved in the process; the regulatory network revealed in the mouse model also hints at this probability. Searching for such genes in a genome-wide association study (GWAS) is under way: Researchers are comparing the genomes of healthy people with those of the group who do not have the deletion but suffer from the syndrome, to identify genetic variants that might be involved.

This work was conducted together with Drs. Roi Avraham and Ariel Rinon of the Institute’s Biological Regulation Department, and Dr. Julius (Teddy) Hegesh of the Chaim Sheba Medical Center; as well as researchers from the Samuel Lunenfeld Research Institute, Toronto, Canada; Oregon State University; the Institute of Cancer Research, London, UK; Chaim Sheba Medical Center, Israel; the Tata Institute of Fundamental Research, Mumbai; and Universidad Pablo de Olavide, Seville.

 
Prof. Eldad Tzahor's research is supported by the Adelis Foundation; the Wolfson Family Charitable Trust; the European Research Council; the estate of Fannie Sherr; and the estate of Jack Gitlitz.
 
 
 

A Fragile Equilibrium

Prof. Vered Rom-Kedar
 
A baby with dangerously low levels of certain white blood cells recovers, while an adult with nearly twice the levels succumbs to bacterial infection. A new mathematical model proposes a possible explanation for such medical mysteries, sheds light on the infections that appear after chemotherapy, and even suggests how their occurrence could be reduced.
 
This study, reported recently in the Journal of Clinical Investigation, has shown that to properly assess the risk of infection, it is essential to evaluate not only the quantity of immune cells but also their quality, which varies from one person to another. This research may therefore lead to more personalized chemotherapy: Better precautions might be taken to prevent infection in high-risk patients, whereas those at a low risk could be spared unnecessary preventive treatments. The new model was developed by Weizmann Institute mathematicians in collaboration with physicians from the Meir Medical Center in Kfar Saba and from the Hoffmann-La Roche research center in Basel, Switzerland.
 
The model reveals how the immune system functions under conditions of neutropenia – a dangerously low level of white blood cells, mainly neutrophils. In this condition, which often emerges after chemotherapy or bone marrow transplant but can also be present at birth, severe infections can develop if the immune system fails to perform the crucial function of devouring and destroying bacteria.
 
 
The tug of war between the blood cells and the bacteria cannot be explained away by the simple bacteria-to-cell ratio, nor by a threshold that the blood cell count must exceed. Rather, the model shows that when neutrophil counts are low, the patient’s immune system enters a fragile equilibrium – described in mathematical terms as “bistability” – that can easily be disrupted, with dramatic consequences, by even minute changes in bacterial concentration or in the number of neutrophils. Other factors that can dramatically affect this equilibrium include the effectiveness of the neutrophil functioning and the tissues’ permeability to bacteria, which can increase due to cancer therapy.
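As a rough illustration of that fragile equilibrium – a schematic sketch in arbitrary units with invented parameters, not the equations published in the Journal of Clinical Investigation – one can let bacteria grow logistically while being killed at a rate set jointly by the neutrophil count and a per-neutrophil “effectiveness” factor:

```python
# Schematic bistability sketch in arbitrary units -- NOT the published model.
# Bacteria grow logistically and are killed at a rate set by the neutrophil
# count N and a per-neutrophil "effectiveness" e (the quality factor):
#     dB/dt = r*B*(1 - B/K) - e*N*B / (B + s)

def dBdt(B, N, e, r=1.0, K=1e4, s=50.0):
    return r * B * (1.0 - B / K) - e * N * B / (B + s)

def final_load(B0, N, e, dt=0.05, steps=60000):
    B = B0
    for _ in range(steps):
        B = max(B + dt * dBdt(B, N, e), 0.0)   # simple forward-Euler integration
    return B

cases = [
    ("ANC 200, strong neutrophils", 200, 0.5),   # illustrative case with effective cells
    ("ANC 380, weak neutrophils  ", 380, 0.1),   # illustrative case with poorly functioning cells
]
for label, N, e in cases:
    for B0 in (5.0, 100.0):   # a tiny vs. a somewhat larger initial bacterial load
        fate = "cleared" if final_load(B0, N, e) < 1.0 else "overwhelming infection"
        print(f"{label}, initial load {B0:5.0f}: {fate}")
```

In this toy picture the count and the quality enter the same kill term, which is why a lower count of strong neutrophils can fare better than a higher count of weak ones – and why, near the tipping point, a minute extra dose of bacteria changes the fate entirely.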

Thus, according to the model, in healthy people, the fact that the effectiveness of neutrophils varies from one person to another usually has no significance. By contrast, in patients with neutropenia, this individual variability can make a difference between life and death. For example, after chemotherapy, some cancer patients contract life-threatening infections even when they are maintained in isolation under sterile conditions. It turns out that if the neutrophils of these patients are "weak," even the smallest number of bacteria, such as those present in the gut, can tilt the fragile immune balance in favor of the bacteria.
 
 
The study also explains why certain patients following chemotherapy or a bone marrow transplant may develop an acute infection even when their neutrophil levels have returned to relatively normal levels. The chemotherapy lowers the neutrophil levels and function, in addition to making the tissues of these patients more penetrable to bacteria. As a result, in some patients the bacterial concentrations might increase so quickly that by the time the neutrophil counts rise back to “normal,” the rapidly multiplying bacteria have already gained a head start, so that the neutrophil recovery is insufficient for overcoming the severe infection. This scenario may eventually also shed light on the rare cases in which acute bacterial infections develop in individuals with normal immunological function. The model suggests that in such cases, a high growth rate of unusually virulent bacteria may have overcome even the appropriate quantitative and qualitative neutrophil response.
 
Neutrophil levels of three hypothetical patients with neutropenia. The patient with the "strong" neutrophils (P1) can overcome infections if treated with supportive medications. In contrast, the patient with the "weak" neutrophils (P4) cannot overcome even the minute bacteria load that comes from the gut. Source: the Journal of Clinical Investigation
 
A potential solution to two previously unsolved cases has already emerged from the model. A newborn baby treated at Meir Medical Center recovered from neutropenia even though his absolute neutrophil count (ANC) had fallen as low as 200 neutrophils per microliter of blood, whereas an adult whose ANC stood at 380 after chemotherapy died of infection. The model has suggested how various clinical parameters, such as the poor quality of the neutrophils, could have led to the death of the adult despite his higher ANC.

In addition, the model might help researchers understand the mechanism behind the development of severe recurrent infections in some patients. For example, of one thousand patients referred to Meir Medical Center because of such infections, diagnosis could be established in only one-third of the cases. Weizmann Institute mathematical modeling showed that at least some of the unexplained cases might have resulted from a combination of several mild defects, including variation in the function of neutrophils and other immune cells.
 
 
This research was based upon the blood analysis of four healthy volunteers. To use the model in the clinic, such analysis will have to be applied to large populations. “Our mathematical model has revealed previously unknown mechanisms responsible for the variability in the vulnerability to infections of neutropenia patients,” says research leader Prof. Vered Rom-Kedar of the Weizmann Institute’s Computer Science and Applied Mathematics Department.

The study was performed by researchers with an unusual combination of backgrounds. The mathematician, Rom-Kedar, specializes in the investigation of dynamic systems. The first author, electrical engineer Dr. Roy Malka, conducted this research as part of his Ph.D. studies in mathematics at Weizmann; he is now engaged in postdoctoral research on related subjects at Harvard Medical School. The idea for the project was first proposed by Dr. Eliezer Shochat, a senior oncologist who also has a Ph.D. in applied mathematics from Weizmann and now works in a research group at Hoffman-La Roche in Basel, Switzerland. The study was performed in collaboration with the Meir Medical Center team: Prof. Baruch Wolach, M.D., Head of the Laboratory for Leukocyte Function and Chair of Pediatric Immunology at Tel Aviv University’s Sackler Faculty of Medicine; and laboratory manager Ronit Gavrieli, M.Sc., who performed the experiments.
 
Says Prof. Wolach: “Our study suggests that to achieve optimal results – in applying chemotherapy and/or in treating patients with innate neutrophil dysfunction – it is of value to periodically assess the patient’s neutrophils as well as the bacterial concentration. Such assessments will help reduce the morbidity and mortality, as well as the cost, associated with unnecessary hospitalizations and the administration of expensive medications. Moreover, by cutting down on the use of antibiotics, these assessments can help prevent the rise in antibiotic resistance.”

Prof. Vered Rom-Kedar heads the Moross Research School of Mathematics and Computer Science; her research is supported by the Yeda-Sela Center for Basic Research. Prof. Rom-Kedar is the incumbent of the Estrin Family Chair of Computer Science and Applied Mathematics.

 
 
 

The Fat-Blood Connection

Microscope image of embryonic zebrafish blood vessels
 

 

 
Blocked or constricted blood vessels lead to high blood pressure, heart attack and stroke – some of the most common causes of death in the Western world. Blood vessels become blocked when fat is deposited on artery walls, building up until the flow of blood is restricted and the delivery of oxygen and nutrients curtailed. Today, such “clogged pipes” are cleared with drugs, angioplasty or bypass surgery to replace the blocked sections. But in the lab of the Weizmann Institute’s Dr. Karina Yaniv, researchers are looking for ways to keep the body’s main conduits from getting clogged up in the first place.
 
 
To prevent blockages, one must understand exactly how they occur. As opposed to the kitchen drainpipe, which gets clogged from the passive accumulation of gunk, the deposition of fat on blood vessel walls is an active process in which the cells lining those walls can sense fats carried in the bloodstream and respond according to amount and type. According to Yaniv, that fat and those vessel-lining cells – endothelial cells – engage in a two-way conversation that facilitates the fat build-up. Basically, the endothelial cells give the fat permission to cross the inner layer of the “pipe” to settle comfortably on the other side of that layer. As the fat layer grows, it presses on the blood vessel walls, constricting them and creating the conditions for blood vessel disease.
In the bottom microscope image, a mutation causes increased blood vessel growth, in contrast with those in a normal zebrafish embryo (top)
 
In research that appeared in Nature Medicine, Yaniv and her team in the Biological Regulation Department revealed how fat levels in the blood act on the endothelial cells. To travel in the bloodstream, fats are packaged into units along with proteins; these proteins hold the fats and help them enter the body’s cells. These units, called lipoproteins (fat-proteins), also contain cholesterol – either the “bad” cholesterol (LDL) or the “good” cholesterol (HDL). The research focused on units of the “bad,” LDL, type. Such lipoproteins have a high fat-to-protein ratio and are a known risk factor for the development of blood vessel disease. “Medicine is concerned with the fat content of the LDL lipoprotein, but we showed that the protein component of this unit plays a crucial role in the conversation with the endothelial cells,” says Yaniv.
 
Working with zebrafish embryos, the research team first discovered a genetic mutation that causes the overproduction of blood vessels – almost twice the normal number. Their research showed that the mutated gene is responsible for the packaging and secreting of the LDL: It ties the fat molecules up with the transport protein, called ApoB, and then sends the packaged unit off into the bloodstream. When this gene was damaged in the fish, their bodies didn’t produce the bad cholesterol. A similar mutation exists in certain humans. Carriers of this mutation don’t produce LDL, and they don’t suffer from such fat-related blood-system diseases as arteriosclerosis.
 

 
 
How does LDL or its lack affect the growth of new blood vessels? The researchers found that lowering the bad cholesterol levels resulted in an increased division of the endothelial cells, while raising LDL levels had the opposite effect: Cell division was delayed, as was the cells’ ability to migrate and contribute to the formation of new blood vessels. Further investigation revealed that LDL directly interferes with a main mechanism of endothelial cell proliferation, in which a growth factor called VEGF binds to the cell surface to activate the process. VEGF can bind to two different types of receptors on the cell. The first is the “regular” receptor, which responds to VEGF binding by initiating cell proliferation. The second is a “dummy” receptor, which binds but does not initiate division. The dummy receptors are a sort of double-check and regulation mechanism, but LDL appears to encourage the production of this type of receptor. Thus high LDL levels mean more of the dummy receptors and less endothelial cell proliferation, while low levels promote cell division and, thus, the creation of new blood vessels.
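A back-of-the-envelope way to picture the decoy effect – hypothetical numbers only, not data from the study – is to assume VEGF distributes between the signaling and “dummy” receptors roughly in proportion to their numbers; the more decoys LDL induces, the smaller the share of binding events that actually drives proliferation:

```python
# Hypothetical sketch of receptor competition -- illustrative numbers only.
# Assume VEGF binds the signaling and "dummy" (decoy) receptors with similar
# affinity, so the productive share is roughly R_signal / (R_signal + R_decoy).

def productive_fraction(r_signal, r_decoy):
    return r_signal / (r_signal + r_decoy)

R_SIGNAL = 100  # assumed number of signaling receptors per cell
for condition, r_decoy in (("low LDL", 20), ("normal LDL", 100), ("high LDL", 400)):
    frac = productive_fraction(R_SIGNAL, r_decoy)
    print(f"{condition:>10}: {frac:.0%} of VEGF binding drives proliferation")
```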
  
(l-r) Yona Ely, Oded Mayseless, Liron Gibbs Bar, Moshe Grunspan, Dr. Inbal Avraham-Davidi, Dr. Guy Malkinson and Dr. Karina Yaniv
 
It is the ApoB protein, which is packaged together with the cholesterol and triglycerides, that seems to act as a mediator between the lipoprotein units and the endothelial cells, and so its function is central to the formation of blockages. Understanding the mechanism by which ApoB and LDL contribute to endothelial cell damage may enable us, in the future, to adjust the process, possibly by promoting the production of new blood vessels to bypass the blocked sections.
 

 

Dr. Karina Yaniv's research is supported by the Karen Siem Fellowship for Women in Science; the Willner Family Center for Vascular Biology; the estate of Paul Ourieff; the Carolito Stiftung; Lois Rosen, Los Angeles, CA; the Adelis Foundation; and the Yeda-Sela Center for Basic Research. Dr. Yaniv is the incumbent of the Louis and Ida Rich Career Development Chair.

 

 

 
 
 

Arms Race


The scientists’ rendering of the process by which a cell imports protons and ejects antibiotics

 
 
The H.G. Wells classic War of the Worlds tells of a fictional invasion of Earth by evil Martians. Just when the Earthlings are about to give in to the aliens’ superior strength, the Martians pick up and leave, defeated by the tiny bacteria and viruses they encounter on Earth. When that book was written, humanity itself was engaged in a desperate struggle against the world of disease-causing microorganisms, which were still a leading cause of death. The tide began to turn only with the discovery of such antibiotics as penicillin. But bacteria have not admitted defeat; they have been developing resistance to our weapons at a pace that threatens to bring back the battle in full force. Since the first use of antibiotics, humans and bacteria have been engaged in an arms race in which bacteria find ways to overcome nearly every antibiotic we can develop.

One instrument of resistance employed by bacteria is a class of membrane proteins that remove antibiotics from the cell. Such proteins sit in the membranes that envelop bacterial cells, and they act something like vacuum cleaners, pulling harmful substances from the cells’ insides and ejecting them. They are called multidrug transporters, because each of these proteins is capable of removing a variety of antibiotics.

Like a vacuum cleaner, a transporter protein needs energy to work. That energy comes in the form of protons, which it can take in thanks to a gradient in the proton concentration across the cell membrane (with a higher proton concentration outside than in). Every time a transporter protein ejects an antibiotic molecule from the cell, it takes in one or more protons. Bringing in a proton involves binding it on the outside of the cell and releasing it on the inside; the mechanism for this binding and release has been a mystery.
 
(l-r) Osnat Tirosh, Nir Fluman and Prof. Eitan Bibi
 
Prof. Eitan Bibi and Ph.D. students Nir Fluman and Osnat Tirosh of the Biological Chemistry Department recently investigated this phenomenon. In two parallel studies on the MdfA transporter of the E. coli bacterium, they revealed the mechanism for binding a proton and the conditions necessary for its release. The scientists then continued their study using genetic engineering techniques to create mutated proteins with the ability to bind and release more than one proton when ejecting an antibiotic molecule.

Under a microscope, the cell membrane appears as a thick, nearly impenetrable wall of lipids (fat-like substances) with embedded proteins (like MdfA itself) that span it from the inside to the outside. The first study, which appeared in Molecular Cell, identified a site in the part of MdfA that is within the membrane – an acidic site (that is, it carries a negative charge) that binds the protons and releases them. This acidic site seems to act as a way station for protons passing between the membrane’s two sides. The research findings showed that this acidic site senses when an antibiotic is bound to the protein, releasing the proton into the cell when antibiotic binding takes place. In other words, getting rid of the antibiotic and taking in the proton are orchestrated actions. The mechanism works in the opposite direction as well, so that the binding of the proton causes the transporter protein to release its bound antibiotic to the outside of the cell. The scientists found that this phenomenon, known as competitive binding, is crucial for MdfA activity. But they were surprised to discover that the antibiotic and the proton bind at different locations in MdfA, with the information passed between the two binding sites. Processing this information causes the protein to undergo a structural change, and this, in turn, leads to the antibiotic’s ejection.  

This resistance mechanism works when the antibiotic molecules are exchanged for one proton, but does not help against certain substances that require the import of two protons. In the second study, Bibi and his students genetically engineered bacteria to be resistant to these substances, as well. That is, the transporters could transfer two protons instead of one. The researchers introduced genetic mutations into MdfA and found several that grant resistance. Interestingly, each of the mutated transporters had an additional acidic site created near the original proton-binding site.
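A generic bioenergetics estimate – not a calculation from these studies – gives a feel for what a second proton is worth: each proton imported down the electrochemical gradient contributes free energy on the order of the proton-motive force, so coupling two protons to each export event roughly doubles the driving force available per transport cycle.

```python
# Back-of-the-envelope estimate assuming a typical bacterial proton-motive
# force of about -180 mV; the values are illustrative, not taken from the papers.

FARADAY = 96485.0     # C/mol
PMF_VOLTS = -0.180    # proton-motive force, ~ -180 mV (textbook ballpark)

for n_protons in (1, 2):
    # Free energy released per mole of transport cycles when n protons enter (kJ/mol)
    delta_g = n_protons * FARADAY * PMF_VOLTS / 1000.0
    print(f"{n_protons} proton(s) per drug exported: ~{delta_g:.0f} kJ/mol of driving force")
```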

This appearance of a second proton-binding site in the mutant transporters suggests that the proton-transfer mechanism might be modular: By adding a second acidic site in the appropriate location, the protein gained the ability to transfer two protons at once. These findings, which appeared in the Proceedings of the National Academy of Sciences (PNAS), shed new light on the potential ease with which bacteria can develop antibiotic resistance.

This research is especially important right now, as antibiotic resistance is growing and new discoveries in the field are slowing down. A deeper understanding of the proteins that remove antibiotics from the cell will aid humanity in its eternal arms race against disease-causing bacteria.
 
Prof. Eitan Bibi's research is supported by the Dr. Josef Cohn Minerva Center for Biomembrane Research, which he heads; the Edmond J. Safra Philanthropic Foundation; the Willner Family Leadership Institute for the Weizmann Institute of Science; the Jeanne and Joseph Nissim Foundation for Life Sciences Research; Rudolfine Steindling; the estate of Harold Z. Novak; the estate of Irwin Mandel; and the estate of Raymond Lapon.  Prof. Bibi is the incumbent of the Ruth and Jerome A. Siegel and Freda and Edward M. Siegel Professorial Chair.
 

 
 
 

A Lesson in Sleep Learning

Anat Arzi and Prof. Noam Sobel
 
Is sleep learning possible? A new Weizmann Institute study that appeared recently in Nature Neuroscience has found that if certain odors are presented after tones are heard during sleep, people will start sniffing when they hear the tones alone – even when no odor is present – both during sleep and later, when awake. In other words, people can learn new information while they sleep, and this can unconsciously modify their waking behavior.

Sleep-learning experiments are notoriously difficult to conduct. For one thing, one must be sure that the subjects are actually asleep and stay that way during the “lessons.” The most rigorous trials of verbal sleep learning have failed to show any new knowledge taking root. While more and more research has demonstrated the importance of sleep for learning and memory consolidation, none had managed to show actual learning of new information taking place in an adult brain during sleep.

Prof. Noam Sobel and research student Anat Arzi of the Institute’s Neurobiology Department, together with other members of Sobel’s group and researchers from Loewenstein Hospital and the Academic College of Tel Aviv-Jaffa, chose to experiment with a type of conditioning that involves exposing subjects to a tone followed by an odor, so that they soon exhibit a response to the tone alone that is similar to their response to the odor. The pairing of tones and odors presented several advantages. Neither wakes the sleeper (in fact, certain odors promote sound sleep), yet the brain processes them and even reacts during slumber. Moreover, the sense of smell offers a unique non-verbal measure that can be observed – sniffing. The researchers found that in the case of smelling, the sleeping brain acts much as it does when awake: We inhale deeply when we smell a pleasant aroma but cut our inhalation short when assaulted by a bad smell. This variation in sniffing could be recorded whether the subjects were asleep or awake. Finally, this type of conditioning, while it may appear to be quite simple, is associated with some higher brain areas – including the hippocampus, which is involved in memory formation.
 
 
 
In the experiments, the subjects slept in a special lab while their sleep state was continuously monitored. (Waking up during the conditioning – even for a moment – disqualified the results.) As they slept, a tone was played, followed by an odor – either pleasant or unpleasant. Then another tone was played, followed by an odor at the opposite end of the pleasantness scale. Over the course of the night, the associations were partially reinforced, so that the subject was also exposed to just the tones. The sleeping volunteers reacted to the tones alone as if the associated odor were still present – by either sniffing deeply or taking shallow breaths.

The next day, the now awake subjects again heard the tones alone – with no accompanying odor. Although they had no conscious recollection of listening to them during the night, their breathing patterns told a different story. When exposed to tones that had been paired with pleasant odors, they sniffed deeply, whereas those tones associated with bad smells provoked short, shallow sniffs.

The team then asked whether this type of learning is tied to a particular phase of sleep. In a second experiment, they divided the sleep cycles into rapid eye movement (REM) sleep and non-REM sleep, and then induced the conditioning during only one phase or the other. Surprisingly, they found that the learned response was more pronounced during the REM phase, but the transfer of the association from sleep to waking was evident only when learning had taken place during the non-REM phase. Sobel and Arzi suggest that during REM sleep we may be more open to the influence of stimuli in our surroundings, but so-called “dream amnesia” – which makes us forget most of our dreams – may operate on any conditioning occurring in that stage of sleep. In contrast, non-REM sleep is the phase that is important for memory consolidation, so it might also play a role in this form of sleep-learning.

Sobel’s lab focuses on the sense of smell, but Arzi intends to further investigate brain processing in altered states of consciousness such as sleep and coma. “Now that we know that some kind of sleep learning is possible,” says Arzi, “we want to find where the limits lie – what information can be learned during sleep and what information cannot.”
 
Prof. Noam Sobel's research is supported by the estate of Lore Lennon; the Adelis Foundation; the Nadia Jaglom Laboratory for Research in the Neurobiology of Olfaction; the James S. McDonnell Foundation 21st Century Science Scholar in Understanding Human Cognition Program; the Minerva Foundation; and the European Research Council.


 
 

Quantum Effects in Cold Chemistry

(l-r) Sasha Gersten, Alon Henson, Dr. Ed Narevicius, Etay Lavert-Ofir, Julia Narevicius and Yuval Shagam

At very low temperatures, close to absolute zero, chemical reactions may proceed at a much higher rate than classical chemistry says they should: In this extreme chill, quantum effects enter the picture. A Weizmann Institute team recently confirmed this experimentally. Their results should not only provide insight into processes in the intriguing quantum world, in which particles act as waves, but might also explain how chemical reactions occur in the vast frigid regions of interstellar space.

Long-standing predictions say that quantum effects should allow the formation of a transient bond – one that will force colliding atoms and molecules to orbit each other instead of separating after the collision. Such a state would be very important, as orbiting atoms and molecules would then have multiple chances to interact chemically. In this theory, a reaction that would seem to have a very low probability of occurring would, at certain energies, proceed very rapidly.

Dr. Ed Narevicius and his team in the Institute’s Chemical Physics Department managed, for the first time, to experimentally confirm this elusive process in a reaction they performed at temperatures just a fraction of a degree above absolute zero – 0.01 K. Their results appeared recently in Science.
 
The experimental system: two supersonic valves followed by two skimmers. The blue beam passes through a curved magnetic quadrupole guide, and the merged beam (purple) enters a quadrupole mass spectrometer. B is a front view of the quadrupole guide
“The problem,” says Narevicius, “is that in classical chemistry, we think of reactions on the molecular level in terms of colliding billiard balls held together by springs. In the classical picture, reaction barriers sometimes block those billiard balls from approaching one another, whereas in the quantum physics world, reaction barriers can be penetrated by particles, as these acquire wave-like qualities at ultra-low temperatures.”

The quest to observe quantum effects in chemical reactions started over half a century ago with pioneering experiments by Dudley Herschbach and Yuan T. Lee, who later received a Nobel Prize for their work. They succeeded in observing chemical reactions at unprecedented resolution by colliding two low-temperature, supersonic beams. However, the collisions took place at relative speeds that were much too high to resolve many quantum effects: When two fast beams collide, the relative velocity sets the collision temperature at above 100 K, much too warm for quantum effects to play a significant role. Over the years, researchers had used various ingenious techniques, including changing the angle of the beams and slowing them down to a near-halt. These managed to bring the temperatures down to around 5 K – close, but still a miss for those seeking to observe chemical reactions in quantum conditions.

The innovation that Narevicius and his team, including Alon B. Henson, Sasha Gersten, Yuval Shagam and Julia Narevicius, introduced was to merge the beams rather than collide them. One beam was produced in a straight line, and the second was bent with a magnetic device until it ran parallel to the first. Even though both beams were racing along at high speed, the relative velocity of the particles in one beam with respect to those in the other was nearly zero, so a much lower collision temperature – only 0.01 K – could be achieved. One beam contained helium atoms in an excited state; the other contained either argon atoms or hydrogen molecules. In the ensuing chemical reaction, the argon atoms or hydrogen molecules became ionized – releasing electrons.
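A simple kinematic estimate shows why merging helps – a rough sketch with assumed beam speeds and masses, not the experiment’s actual parameters. The collision energy depends only on the relative velocity of the colliding partners, so two beams flying side by side at nearly the same velocity collide as “coldly” as two nearly stationary ones:

```python
import math

# Rough kinematic sketch with assumed numbers (not the experiment's parameters):
# the effective collision temperature scales with the beams' relative velocity,
#     E_coll = 0.5 * mu * v_rel**2,   T_coll ~ E_coll / k_B,
# where mu is the reduced mass of the colliding pair.

K_B = 1.380649e-23      # J/K
AMU = 1.66053907e-27    # kg

def collision_temperature(m1_amu, m2_amu, v_rel):
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU
    return 0.5 * mu * v_rel**2 / K_B

beam_speed = 1000.0     # m/s, a typical supersonic beam speed (assumed)
t_crossed = collision_temperature(4, 40, math.hypot(beam_speed, beam_speed))  # He + Ar, 90-degree crossing
t_merged = collision_temperature(4, 40, 10.0)                                 # residual 10 m/s velocity spread
print(f"crossed beams: ~{t_crossed:.0f} K;  merged beams: ~{t_merged:.3f} K")
```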

To see if quantum phenomena were in play, the researchers looked at reaction rates – a measure of how fast a reaction proceeds – at different collision energies. At high collision energies, classical effects dominated and the reaction rates slowed down gradually as the temperature dropped. But below about 3 K, the reaction rate in the merged beams suddenly took on peaks and valleys. This is a sign that a quantum phenomenon known as “scattering resonances due to tunneling” was occurring in the reactions. At low energies, particles started behaving as waves: Those waves that were able to tunnel through the potential barrier interfered constructively with the reflected waves upon collision. This created a standing wave that corresponded to particles trapped in orbits around one another. Such interference occurs at particular energies and is marked by a dramatic increase in reaction rates.

Narevicius: “Our experiment is the first proof that the reaction rate can change dramatically in the cold reaction regime. Beyond the surprising results, we have shown that such measurements can serve as an ultrasensitive probe for reaction dynamics. Our observations already prove that our understanding of even the simplest ionization reaction is far from complete; it requires a thorough rethinking and the construction of better theoretical models. We expect that our method will be used to solve many puzzles in reactions that are especially relevant to interstellar chemistry, which generally occurs at ultra-low temperatures.”
 
Dr. Edvardas Narevicius is the incumbent of the Ernst and Kaethe Ascher Career Development Chair.


 
 

The Coupling Protein

At any given moment in the cell, all sorts of genes are being activated and proteins are getting churned out one after the other. One of the big shots directing this activity is the protein NF-kappaB – a sort of production overseer responsible for the activation of many of the genes that are crucial for the cell’s daily operations. In healthy cells, NF-kappaB activity is transient and gives its orders on the basis of need. But in chronic inflammation and certain types of cancer, it keeps on working long after the need has passed. When its activity becomes abnormally constant, it can lead to the spread of the cancer cells that cause malignancy and increase the resistance of these cancer cells to chemotherapy. For this reason, NF-kappaB and the pathways it activates are thought to be attractive targets for cancer treatments.
 
Prof. Rivka Dikstein
 
NF-kappaB activity affects tumors in several ways: It both represses planned cell death (apoptosis) and promotes cellular processes that lead to unregulated growth. Many scientists studying the protein focus on the initial stages of NF-kappaB activation, when it receives an external signal that sends it from the cell’s body into its nucleus. But Prof. Rivka Dikstein of the Biological Chemistry Department investigates the last step in the protein’s chain of activity: the expression of groups of genes. Around 100 genes are known, at this point, to be activated by NF-kappaB; these hold the instructions for producing, among other things, proteins that play a role in inflammation and cell growth.

Another group of genes activated by NF-kappaB produces a negative feedback mechanism whose function is to bring the protein’s activity to a halt. In earlier research, Dikstein showed how NF-kappaB-regulated genes become activated very rapidly in response to outside signals, especially those that threaten the cell’s survival. She is now investigating how NF-kappaB affects the transcription of the genes it oversees.

Transcription is the first crucial stage of gene expression. It can be broken down into several steps: initiation, elongation (the production of RNA molecules), processing and termination. Dikstein discovered that NF-kappaB is an important regulator of the elongation step. Here, it acts as a sort of foreman, rounding up a “crew” of workers to get the job done. When an entire crew makes a combined effort, the upshot is a significant increase in the number of RNA molecules ready for protein production.
 
 
 
Dikstein and research student Gil Diamant recently uncovered a previously unknown mechanism that offers a possible explanation for the unrelenting NF-kappaB activity seen in chronic disease and cancer. The mechanism involves one member of the elongation work crew, a protein called DSIF. They found that this protein is responsible for the coupling that occurs between the elongation stage and RNA processing, which takes place in tandem with elongation. Without this coupling, RNA cannot be processed, and protein production is stalled. Together with NF-kappaB, DSIF promotes the expression of the negative feedback genes that put the brakes on NF-kappaB activity. In cells depleted of the DSIF protein, or in which its activity is perturbed, NF-kappaB activity becomes abnormally prolonged and the risk of developing NF-kappaB-associated diseases rises. The findings of the study recently appeared in Cell Reports.

The discovery of the DSIF mechanism could be good news in the war against cancer. Because NF-kappaB is entrusted with regulating so many biological processes, any attempt to interfere with it would spell disaster for the expression of many genes and thus cause any number of harmful side effects. DSIF, in contrast, has a much more limited effect on a smaller number of NF-kappaB-regulated genes, so treatments that affect its activities might be more selective, and undesirable side effects minimized.
 
Prof. Rivka Dikstein's research is supported by the Pearl Welinsky Merlo Foundation Scientific Progress Research Fund; the Yeda-Sela Center for Basic Research; the Wolfson Family Charitable Trust; and the Y. Leon Benoziyo Institute for Molecular Medicine. Prof. Dikstein is the incumbent of the Ruth and Leonard Simon Professorial Chair of Cancer Research.
 
 

Seeing Like a Baby


Infants soon learn to make sense of the complex world around them: Their understanding far surpasses any of the current attempts to design intelligent computerized systems. How do such young infants arrive at this understanding?

 

Answering this question has been a challenge for cognitive psychology researchers and computer scientists alike. On the one hand, babies cannot explain how they first learn to comprehend the world around them, and on the other, computers, for all their sophistication, need human help with labeling and sorting objects to make learning possible.  Many scientists believe that for computers to “see” the world as we do, they must first learn to classify and identify objects in much the same way that a baby does.
 

Mover event detected in the red cell: Motion (left) flows into the cell, (middle) stays briefly in the cell, and (right) leaves the cell, changing its appearance. Warmer colors indicate faster motion
 
Prof. Shimon Ullman and research students Daniel Harari and Nimrod Dorfman of the Institute’s Computer Science and Applied Mathematics Department set out to explore the learning strategies of the young brain by designing computer models based on the way that babies observe their environment. The team first focused on hands: Within a few months, babies can distinguish a randomly viewed hand from other objects or body parts, despite the fact that hands are actually very complex – they can take on quite a range of visual shapes and move in different ways. Ullman and his team created a computer algorithm for learning hand recognition. The aim was to see if the computer could learn independently to pick hands out of video footage – even when those hands took on different shapes or were seen from different angles. That is, nothing in the program said to the computer: “Here is a hand.” Instead, it had to discover from repeated viewing of the video clips what constitutes a hand.

The algorithm began with some basic insight into the stimuli that attract the attention of young infants. The scientists knew, for instance, that babies track movement from the moment they open their eyes, and that motion can be a visual cue for picking objects out of the scenery. The researchers then asked if certain types of movement might be more instructive to the infant mind than others, and whether these could provide enough information to form a visual concept. For instance, a hand makes a change in the baby’s visual field, generally by manipulating an object. Eventually the child might extrapolate, learning to connect the idea of causing-object-to-move with that of a hand. The team named such actions “mover events.”
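A minimal sketch of what a “mover event” detector might look like – hypothetical code, not the group’s actual algorithm – watches a region of the image and flags the moments when motion flows in, lingers, and then leaves while the region’s appearance has changed, much as the mover-event figure above describes:

```python
import numpy as np

# Hypothetical sketch of a "mover event" detector -- not the published algorithm.
# A mover event in a region: motion flows in, stays briefly, then leaves, and the
# region's static appearance has changed once the motion is gone.

def motion_energy(prev_frame, frame, region):
    y0, y1, x0, x1 = region
    return np.mean(np.abs(frame[y0:y1, x0:x1].astype(float) -
                          prev_frame[y0:y1, x0:x1].astype(float)))

def detect_mover_events(frames, region, motion_thresh=10.0, change_thresh=15.0):
    """frames: list of grayscale images (2-D numpy arrays); returns frame indices of events."""
    y0, y1, x0, x1 = region
    events, inside, entry_snapshot = [], False, None
    for t in range(1, len(frames)):
        moving = motion_energy(frames[t - 1], frames[t], region) > motion_thresh
        if moving and not inside:
            inside = True
            entry_snapshot = frames[t - 1][y0:y1, x0:x1].astype(float)   # motion enters the region
        elif not moving and inside:
            inside = False                                               # motion has left the region
            appearance_change = np.mean(np.abs(frames[t][y0:y1, x0:x1].astype(float) -
                                               entry_snapshot))
            if appearance_change > change_thresh:   # the region looks different: something was moved
                events.append(t)
    return events
```

Image patches that keep turning up right where such events fire would then be natural candidates for “hands.”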
 
 
Predicted direction of gaze: results of algorithm (red arrows) and two human observers (green arrows). Faces, top to bottom, belong to Prof. Shimon Ullman, Daniel Harari and Nimrod Dorfman
 
After designing and installing an algorithm for learning through detecting mover events, the team showed the computer a series of videos. In some, hands were seen manipulating objects. For comparison, others showed footage filmed as if from the point of view of a baby watching its own hand move, movement that did not involve a mover event, or typical visual patterns such as common body parts and people. These trials clearly showed that mover events – watching another’s hand move objects – were not only sufficient for the computer to learn to identify hands; they far outshone any of the other methods, including that of “self-movement.”

But the model was not yet complete. With mover events, alone, the computer could learn to detect hands but still had trouble with different poses. Again, the researchers went back to insights into early perception: Infants can not only detect motion, they can track it; they are also very interested in faces. Adding in mechanisms for observing the movements of already detected hands to learn new poses, and for using the face and body as reference points to locate hands, improved the learning process.

In the next part of their study, the researchers looked at another, related concept that babies learn early on but computers have trouble grasping – knowing where another person is looking. Here, the scientists took the insights they had already gained – mover events are crucial and babies are interested in faces – and added a third: People look in the direction of their hands when they first grasp an object. On the basis of these elements, the researchers created another algorithm to test the idea that babies first learn to identify the direction of a gaze by connecting faces to mover events. Indeed, the computer learned to follow the direction of even a subtle glance – for instance, the eyes alone turning toward an object – nearly as well as an adult human.

The researchers believe these models show that babies are born with certain pre-wired patterns – such as a preference for certain types of movement or visual cues. They refer to this type of understanding as proto-concepts – the building blocks with which one can begin to build an understanding of the world. Thus the basic proto-concept of a mover event can evolve into the concept of hands and of direction of gaze, and eventually give rise to even more complex ideas such as distance and depth.

This study is part of a larger endeavor known as the Digital Baby Project. The idea, says Harari, is to create models for very early cognitive processes. “On the one hand,” says Dorfman, “such theories could shed light on our understanding of human cognitive development. On the other hand, they should advance our insights into computer vision (and possibly machine learning and robotics).” These theories can then be tested in experiments with infants, as well as in computer systems.
 
Prof. Ullman is the incumbent of the Ruth and Samy Cohn Professorial Chair of Computer Sciences.





 
 


 


Spin Doctors


 

If you set a bunch of toy tops spinning closely together, they will collide and stop spinning after a while. But according to the laws of physics, if there are many tops all spinning extremely rapidly and all in exactly the same way, something else might happen first: They could enter into a collective circular movement about a central point. Setting up this scenario with actual toy tops would be impractical, but new Weizmann Institute research suggests that it is quite feasible for molecules in a gas. Circulating molecular flow – a gas vortex – could have a number of uses in biomedical technology and nanoscience.

Rotating tops are said to have angular momentum, a measure of their tendency to continue spinning. Several years ago, Profs. Ilya Averbukh and Yehiam Prior and their teams in the Chemical Physics Department (Faculty of Chemistry) proposed that the angular momentum of molecules could be controlled and manipulated by very short laser pulses. In Prior’s lab, lasers that flash in femtosecond pulses – a millionth of a billionth of a second long – are used to get groups of molecules all spinning in the same direction.

Uri Steinitz, a student in Averbukh's group, wondered what happens afterward, as the spinning progresses. Do the molecules collide like tops, and how would this affect the spins? Physics tells us that angular momentum is always conserved. When molecules that are initially spinning more or less in tandem bump into one another, their rotation slows down, and the directions of their rotational axes become random. However, the total angular momentum should be maintained somewhere within the system. The question was: where?
 
Profs. Ilya Averbukh and Yehiam Prior
 
To find the answer, Steinitz, Prior and Averbukh considered a dense gas in which all of the molecules are spinning in the same direction, and calculated (using computer simulations) what happens when the molecules collide repeatedly.

The researchers found that after a certain number of collisions, the angular momentum of the individual molecules was lost, but it reappeared on a larger scale: The molecules in the gas system began to rotate together in a vortex around a central point. This vortex could be millions of times larger than the size of a single, spinning molecule, and it could theoretically reach rates of tens or even hundreds of thousands of revolutions a second. While the angular momentum eventually diffuses within the larger system, repeated spikes of the laser pulses could keep the gas stirred up indefinitely.
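The bookkeeping behind this hand-off can be sketched with a conservation estimate – order-of-magnitude numbers assumed for illustration, not values from the published simulations. The total spin carried by the molecules must reappear as rigid-body-like rotation of the gas column, and dividing that total by the column’s moment of inertia gives the collective rotation rate:

```python
import math

# Order-of-magnitude sketch with assumed numbers -- not the published simulation.
# Conservation of angular momentum: the spin of N molecules, N * J * hbar, must
# reappear as rigid-body rotation of the gas column (moment of inertia
# I = 0.5 * N * m * R**2), so the density cancels and
#     Omega = L_total / I = 2 * J * hbar / (m * R**2)

HBAR = 1.054571817e-34   # J*s
AMU = 1.66053907e-27     # kg

m = 28 * AMU   # molecular mass, e.g. nitrogen (assumed)
J = 10         # rotational quanta per molecule after the laser kick (assumed)
R = 0.5e-6     # m, radius of the laser-excited gas column (assumed)

omega = 2 * J * HBAR / (m * R**2)   # rad/s
print(f"collective vortex: ~{omega / (2 * math.pi):,.0f} revolutions per second")
```

With these assumed numbers the estimate lands in the tens of thousands of revolutions per second, the regime mentioned above.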

These results, says Steinitz, demonstrate a principle normally seen in much larger systems. Atmospheric cyclones, for instance, start out as much smaller eddies that collide and scale up to create large, well-formed vortices. On a practical level, molecular vortices might be useful for manipulating all sorts of particles. For example, as the vortex continues to spin, its momentum drags nearby molecules into its wake. Thus one could use the method to move delicate small particles – for instance biological molecules that are harmed by direct laser manipulation – without actually touching them. In addition, microfluidic devices used in biomedical and pharmaceutical research and industry, which today are operated by tiny channels and gates, could be designed to work efficiently with laser-controlled molecular vortices.

Prof. Ilya Averbukh is the incumbent of the Patricia Elman Bildner Professorial Chair of Solid State Chemistry.

Prof. Yehiam Prior’s research is supported by the Ilse Katz Institute for Material Sciences and Magnetic Resonance Research, which he heads; the Leona M. and Harry B. Helmsley Charitable Trust; the Carolito Stiftung; the Willner Family Leadership Institute for the Weizmann Institute of Science; and Mr. Luis Stuhlberger, Mexico. Prof. Prior is the incumbent of the Sherman Professorial Chair of Physical Chemistry.

 
 

A Thing and Its Opposite

Here is a tangled tale of four research paths that intersected in a surprising way in the Weizmann Institute’s Braun Center for Submicron Research. It combines high drama with subatomic physics, things with their opposites and, above all, the never-ceasing struggle to understand the basic nature of our world.
Majorana
 
The first part of the story took place in 1937. In that year, Ettore Majorana, a 31-year-old Sicilian theoretical physicist, published a paper in which he expanded on an idea of another physicist, Paul Dirac. Just four years older than Majorana, Dirac had already received a Nobel Prize in physics in 1933. Among other things, he predicted the existence of antiparticles – for instance, the positron, which is identical to the electron but with an opposite, positive charge. Antimatter particles, which have since been observed in the lab, were at that time a radical idea. But it was an idea that changed the way physicists thought about the structure of matter in the universe.

In his 1937 paper, Majorana took Dirac’s formula one step further, proposing that it is possible, in theory, to have an elementary particle that carries no electric charge; such a particle would, in fact, be its own antiparticle. This theoretical particle became known as Majorana’s fermion (a fermion being a particle of matter, rather than a force-carrying particle such as a photon).

Majorana’s personal life adds intrigue to the story: Although a gifted scientist who published his first scientific paper as an undergraduate, he was mentally unstable. His mentor, Enrico Fermi, called him a “genius of Newton’s rank.” But even as Fermi managed to convince him to publish his antiparticle paper, Majorana was becoming increasingly reclusive. Finally, in 1938, he boarded a ship in Palermo bound for Napoli. But he never disembarked. Although it is fairly certain that Majorana committed suicide, rumors of his having escaped to a monastery helped fuel the mystery surrounding the theory of a particle that is its own antiparticle.
 
 

Imaginary particles


The story picks up in the early 1980s with Robert Laughlin, an American physicist. In proposing an explanation for a quantum phenomenon called the Fractional Quantum Hall Effect, which occurs in semiconductors, Laughlin claimed that in certain circumstances imaginary particles – arising from the collective behavior of electrons – are formed, and that these carry electric charges. Interestingly, the proposed charges were fractions of the basic charge of an electron.
 
Robert Laughlin
 

 

Such imaginary particles were expected to appear in Quantum Hall systems – systems in which electrons moving in a two-dimensional layer are exposed to a strong magnetic field. The proposed charges were odd fractions – one-third, one-fifth, etc. – of an electron’s charge.

This prediction was first proven experimentally in 1997 by the Weizmann Institute’s Prof. Moty Heiblum and his research group – an achievement that contributed to the decision to award Laughlin, Stormer and Tsui the Nobel Prize in Physics in 1998. But before long, other research into quantum phenomena began to hint at the existence of imaginary particles with completely different characteristics, those with fractional charges that were even – one-half, one-quarter, etc. Heiblum and his group succeeded in proving the existence of these charges, as well, a feat that depended on the growth of ultra-pure semiconductor crystals by the Institute’s Dr. Vladimir Umansky in the Braun Submicron Center.
 

Quantum switches


Imaginary particles come in two types: Abelian and non-Abelian. These mathematical terms, borrowed from topology, refer to what happens when two particles in a system change places. Unlike in the Abelian case, in a non-Abelian system (one in which the particles carry even-denominator fractional charges) such an exchange moves the system from one quantum state to another, and these quantum states are sensitive only to the topology of the path taken by the two particles in the exchange process.
 
Prof. Moty Heiblum
 
Because their quantum state is modified only when they switch positions, such exchanges of non-Abelian particles (often called “braiding”) should be resistant to local perturbation.  They could thus form the basis of a robust type of quantum computing known as topological quantum computation. Though abstract and theoretical, these findings have lately begun to garner interest, including a recent sizable contribution by software giant Microsoft to further the research.
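In symbols – a standard textbook formulation, not something taken from the papers described here – the difference between the two types can be stated compactly:

```latex
% Abelian exchange: the wavefunction acquires only a phase factor
|\Psi\rangle \;\longrightarrow\; e^{i\theta}\,|\Psi\rangle

% Non-Abelian exchange: the state is rotated within a degenerate ground-state subspace
|\Psi_a\rangle \;\longrightarrow\; \sum_b U_{ab}\,|\Psi_b\rangle ,
\qquad U \ \text{unitary, fixed only by the topology of the braid}
```

Because the matrix U depends only on the topology of the braiding path, small local disturbances leave the stored information untouched – which is what makes braiding attractive as a basis for quantum computing.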
 
 

Small is beautiful


Another research path, which opened in 2007, began with an insight more common to sculpture than to quantum physics: The level of detail that sculptors can achieve in their artwork is limited by the width of their finest carving tool. Past a certain point, the only way to introduce greater detail into the sculpture is to shape it from the bottom up, by piecing together very tiny bits of stone. The dimensions of the smallest details will then be determined by the size of the smallest stones.
 
Dr. Hadas Shtrikman
 
Dr. Hadas Shtrikman used this insight in her Weizmann Institute lab, moving from growing two-dimensional crystals, which were then further miniaturized, to growing nanowires that can be used directly in electronic devices. Working with Dr. Ronit Popovitz-Biro over several years, she managed to grow nanowires with a perfect crystal structure, so thin that, for all practical purposes, electrons flow through them in one dimension.

 

 

States


The final path in this story began in 2001, with Prof. Alexei Kitaev, a former visiting scientist at the Weizmann Institute. Kitaev suggested that quantum topological memory could be created using a new version of non-Abelian imaginary particles – particles without any charge that would have the properties of Majorana particles. These would not be true Majorana’s fermions, which are particles of matter, but complex imaginary particles in which the lack of charge would be both a “state” and its own “anti-state.”
 
Nanowires from the lab of Dr. Hadas Shtrikman
 
A few years later, in 2010, the Institute’s Prof. Yuval Oreg and his colleagues Prof. Felix von Oppen of the Free University of Berlin and Prof. Gil Refael of the California Institute of Technology described the possibility of creating quantum states containing complex imaginary particles that behave like Majorana particles. To do this, they proposed a relatively simple experimental system, based on one-dimensional semiconductor nanowires placed near a superconductor, with a weak magnetic field applied along the axis of the nanowires.
 

The paths converge


Oreg and his research student Yonatan Most had suggested that if Shtrikman’s nanowires were placed next to the superconductor, complex imaginary Majorana-like particles would appear at the ends of the wires. Heiblum, postdoctoral fellow Dr. Anindya Das and research student Yuval Ronen planned and built an experimental apparatus to test this theory, and within a few months they managed to find evidence for quantum states that fit the expected profile for the proposed Majorana states.
 
Just a short time before they completed their experiment, Dutch physicist Leo Kouwenhoven of Delft University announced a similar finding, supporting Oreg and his colleagues’ proposed realization of the experiment. If these results do point to a Majorana state, say Heiblum and Oreg, “they will get us a step closer to realizing the principles of quantum computing.”
 
Prof. Yuval Oreg
 

Open question


The quantum states observed at the Weizmann Institute and in Delft through these imaginary particles aroused much excitement. However, as noted, they are not true Majorana particles, which, as fermions, must be real particles of matter to qualify. In other words, the question is still open: Do the particles that Majorana predicted actually exist in nature?

Some think that neutrinos could be Majorana particles, and if this is the case it might help solve one of the biggest mysteries in physics. In the Big Bang, equal amounts of matter and antimatter should have been produced, yet we observe only matter in our universe. How did matter survive in the world when all antimatter disappeared? If neutrinos were Majorana particles, this might imply an as-yet-undetected force in the universe that prefers matter over antimatter.

 
Prof. Moty Heiblum’s research is supported by the Dan and Herman Mayer Fund for Submicron Research; the Willner Family Leadership Institute for the Weizmann Institute of Science; the Joseph H. and Belle R. Braun Center for Submicron Research, which he heads; the Maurice and Gabriela Goldschleger Center for Nanophysics, which he heads; the Wolfson Family Charitable Trust; the estate of Olga Klein Astrachan; and the European Research Council. Prof. Heiblum is the incumbent of the Alex and Ida Sussman Professorial Chair of Submicron Electronics.
 
Prof. Yuval Oreg’s research is supported by the Yeda-Sela Center for Basic Research.

 
 
 
 