Science Feature Articles

It Pays to Plan Ahead

English
 
 Prof. Yitzhak Pilpel, Dr. Orna Dahan and Amir Mitchell. Bacteria anticipate the future
 
We humans learn anticipation from a young age. For example, a baby may begin to calm down at the mere sight of its bottle; or a driver’s foot might twitch on the brake at a stoplight, ready to switch pedals as soon as the light turns green. It turns out that bacteria can also anticipate the future and even get ready to meet it. That’s the conclusion of research at the Weizmann Institute showing that, just as we’ve learned to expect that a red light will be followed by a green one, bacteria can “learn” to foresee certain regular changes in their environments and begin preparing for the next stage. This research recently appeared in Nature.
 
Prof. Yitzhak Pilpel, research student Amir Mitchell and Dr. Orna Dahan of the Weizmann Institute’s Molecular Genetics Department, and their research team asked whether natural selection can act as a “teacher,” conditioning single-celled organisms to respond to a “predictable” sequence of events. For instance, E. coli, normally harmless bacteria found in the digestive tract, experience such regular changes as they travel from one end of the tract to the other. In particular, they find that one type of sugar – lactose – is invariably followed soon afterward by a second sugar – maltose. Pilpel, Mitchell and Dahan checked the bacterium’s genetic response to lactose and discovered that, in addition to the genes that enable it to digest lactose, the gene network for utilizing maltose was simultaneously partially activated. When they switched the order of the sugars, giving the bacteria maltose first, there was no corresponding activation of lactose genes, implying that the bacteria have naturally “learned” to get ready for a serving of maltose after a lactose appetizer.
 
Another microorganism that experiences consistent change is wine yeast. As fermentation progresses, sugar and acidity levels change, alcohol levels rise, and the yeast’s environment heats up. The scientists found that when the wine yeast feels the heat, it begins activating genes for dealing with the stresses of the next stage. Further analysis showed that this anticipation and early response is an evolutionary adaptation that increases the organism’s chances of survival.
 
Up to this point, the bacteria and yeast were exhibiting the classic “conditioned response” made famous by the Russian scientist Ivan Pavlov, who trained his dogs to salivate in response to a stimulus by repeatedly ringing a bell before giving them food. In microorganisms, says Pilpel, “evolution over many generations replaces conditioned learning, but the end result is similar.” “In both evolution and learning,” adds Mitchell, “the organism adapts its responses to environmental cues, improving its ability to survive.”
 
But Pavlov, in further experiments, demonstrated the learned nature of the dogs’ response: It could be unlearned, as well. When he stopped giving the dogs food after ringing the bell, the conditioned response faded until they eventually ceased salivating at its sound. Could bacteria “unlearn” the conditioning developed over many generations of evolution? To answer this question, the scientists conducted another Pavlovian experiment: They tested E. coli grown by Dr. Erez Dekel in the lab of Prof. Uri Alon of the Molecular Cell Biology Department, in an environment containing the first sugar, lactose, but lacking the maltose chaser. After several months, the bacteria evolved to stop activating their maltose genes at the taste of lactose, only turning them on when maltose was actually available.
 
Just as Pavlov’s dogs eventually stopped wasting their saliva in the absence of a reward, the bacteria appeared to learn that activating genes for no reason was counterproductive. “These findings showed us that there is a cost to advanced preparation, but that the benefits to the organism outweigh the costs in the right circumstances,” says Pilpel. What are those circumstances? Based on the experimental evidence, the research team created a sort of cost/benefit model to predict the types of situations in which an organism could increase its chances of survival by evolving to anticipate future events. They are already planning a number of new tests for their model, as well as different avenues of experimentation based on the insights they have gained.
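
The logic of such a cost/benefit trade-off can be sketched in a few lines. This is only an illustration of the general idea, not the team's published model; the function, parameter names and numbers are all hypothetical:

```python
# Toy cost/benefit rule for anticipatory gene activation. Illustrative only:
# this is not the team's published model, and all parameter names and
# numbers are hypothetical.

def anticipation_pays_off(p_follow, benefit, cost):
    """Anticipation is favored when the expected gain from preparing early
    (the probability that the second stimulus really follows, times the
    fitness benefit of a head start) exceeds the metabolic cost of
    activating the genes in advance."""
    return p_follow * benefit > cost

# In the gut, maltose reliably follows lactose, so anticipation is favored:
print(anticipation_pays_off(p_follow=0.9, benefit=1.0, cost=0.2))
# In the lab, maltose never followed lactose, so the trait was lost:
print(anticipation_pays_off(p_follow=0.0, benefit=1.0, cost=0.2))
```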
 
Pilpel and his team believe that the genetic conditioned response may be a widespread means of evolutionary adaptation that enhances survival in many organisms – one that may also take place in the cells of higher organisms, including humans. These findings could have practical implications, as well. Genetically engineered microorganisms for fermenting plant materials to produce biofuels, for example, might work more efficiently if they gained the genetic ability to prepare themselves for the next step in the process.
 
Prof. Yitzhak Pilpel’s research is supported by the Ben May Charitable Trust; the Minna James Heineman Stiftung; and Huguette Nazez, France.
 
(l-r) Prof. Yitzhak Pilpel, Dr. Orna Dahan and Amir Mitchell. Lessons from evolution
Life Sciences

Checking the Dosage


Yoram Groner. Finding the causes of Down syndrome

Three decades of work on Down syndrome is bringing its cause into focus

 
In 1866, British physician John Langdon Down described the disorder now known as Down syndrome. Despite wide-scale prenatal testing, about 1 in 800 babies in the Western world is born with this disorder. In addition to developmental disabilities, people with Down syndrome suffer from poor muscle function and are at higher risk of diabetes, leukemia and Alzheimer's disease.
 
About 50 years ago, it was discovered that people with Down syndrome have an extra copy of chromosome 21. But how do extra copies of intact genes cause such problems? The case was far from clear-cut: Overexpression of a particular gene does not necessarily result in the overproduction of its encoded protein. In plants, for example, overexpression of genes is a widespread phenomenon that causes no harm.
 
Answers are beginning to emerge, thanks to the pioneering work of Prof. Yoram Groner of the Molecular Genetics Department.
 
In endeavoring to understand how a third copy of chromosome 21 causes Down syndrome, scientists have considered two possible scenarios: One widely held "developmental instability" hypothesis suggests that the symptoms are the direct consequence of a disturbance in the chromosome balance, resulting in a disruption of homeostasis. The alternative – the gene dosage effect hypothesis – suggests that symptoms of the syndrome are caused by overproduction of some or many of the proteins encoded by chromosome 21 genes, which then leads to a disturbance in the metabolic balance required for proper development and normal body function.

Groner favored the gene dosage effect hypothesis and made it his goal to prove it. The challenge he set himself was to isolate an individual gene from chromosome 21 and demonstrate that it causes known symptoms of Down syndrome. This undertaking was revolutionary in 1979: Information on chromosome 21 was scant; no molecular tests were available to measure gene expression; and the technology for cloning genes was in its infancy.
 
Groner and his team found that Down syndrome patients have an abnormally high level of an enzyme called SOD1 in their blood. But was the SOD1 gene located on chromosome 21? And if so, what role did it play in the disorder? In the early 1980s, tackling these issues, which are today routine in the field of genetic research, was almost beyond the scope of conventional science.
 
The efforts of Groner and his colleagues led to the first ever cloning of a gene from chromosome 21 and the decoding of its DNA sequence. Groner's group created genetically modified cells (so-called transgenic cells) that produce high levels of SOD1. These transgenic cells had physical defects and excess levels of hydrogen peroxide.
 
One such abnormality was the cells' inability to accumulate neurotransmitters (nervous system modulators). Groner and his team revealed the molecular mechanism that causes this abnormality, tracing the defect to a special protein pump that draws neurotransmitters into the cell. This discovery provided direct evidence that the gene dosage effect of chromosome 21 causes significant physical malfunction in the cell.
 
How does this malfunction tie in with Down syndrome? To answer this question, Groner and his team created, for the first time, a mouse model for gene dosage: a transgenic mouse containing an SOD1 gene. These transgenic mice had high levels of SOD1 and very low levels of the neurotransmitter serotonin in their blood – close to those found in Down syndrome infants, verifying that features of Down syndrome can be attributed to the gene dosage.
 
Further research revealed that overexpressed SOD1 produced significantly higher amounts of hydrogen peroxide in the mice, causing defects in the pumps that draw serotonin from the blood into certain blood cells, where it normally accumulates. When this pump fails, the serotonin remains in the bloodstream, where it is broken down, leading to reduced levels of serotonin in these cells and, later, in the brains of the transgenic mice – similar to what is observed in Down syndrome patients.
 
The intriguing finding that increased SOD1 impairs neurotransmitter uptake enabled Groner's team to unravel one of the long-standing puzzles in Down syndrome clinical treatment: Attempts were made in the late 1960s to raise the low blood serotonin levels in these patients, but many experienced severe seizures, bringing the studies to a halt. Why does the administration of serotonin cause spasms in infants with Down syndrome? The team was able to resolve the underlying mechanism: When the defective pumps failed to transport serotonin into the infants' blood cells, the added serotonin collected instead in the synapses – the spaces between nerve cells – bringing on the seizures.
 

Groundbreaking Studies

 
Prof. Yoram Groner established the Weizmann Institute of Science's Molecular Genetics Department (from the former Genetics and Virology Departments), and later served as the Institute's Deputy President. Groner was recently awarded the 2008 EMET Prize for Life Sciences for, among other achievements, "his groundbreaking studies in the molecular biology of Down Syndrome, which proved the gene dosage effect theory in the trisomy of chromosome 21."
 
Prof. Yoram Groner's research is supported by the J & R Center for Scientific Research; the David and Fela Shapell Family Center for Genetic Disorders Research; and the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation. Prof. Groner is the incumbent of the Dr. Barnet Berris Professorial Chair of Cancer Research.
 
Prof. Yoram Groner.
Life Sciences

Cell Detective

Dr. Guy Shakhar and his research team. Where do immune cells go?

Our immune systems are hugely complex cellular networks that depend on each member carrying out its task. Dr. Guy Shakhar of the Immunology Department is a sort of immune system sleuth, staking out immune system cells and tailing them to find out where they go, how they get there and who they talk to. His surveillance equipment consists of a sophisticated imaging microscope (see box) that enables him to observe these cells both in their normal daily routines and in the act of abetting disease.

 
One group of cells that Shakhar and his team track is dendritic cells. These cells hang around in organs like the gut, skin and lungs – places where pathogens like to enter the body. Dendritic cells sample any foreign matter they find and, if further action is called for, whisk their sample – a bit of protein from the pathogen called an antigen – off to the nearest lymph node to present it to other cells of the immune system. These are primarily T cells that can kick-start an immune response, kill dangerous cells and recruit other immune cells – such as the B cells that produce disease-fighting antibodies. Shakhar is investigating how dendritic cells mobilize, traveling from the skin or gut through the lymph ducts and into the lymph nodes. It's known that these cells don't travel well if their surface receptors for two molecules – one that allows them to sense their target and another that helps them adhere – are blocked. Shakhar is trying to find out exactly when these molecules come into play – whether they help the immune cells leave their patrol sites, steer them to their destinations or enable them to enter the lymph nodes.
 
But sometimes the pathogen invasion is more of a sneak attack: For instance, just a few malaria parasites in a mosquito's bite are enough to cause disease. If these parasites are unfamiliar to the immune system, only a few out of the billions of patrolling T cells may have surface molecules that can identify the new antigens. A race begins between pathogens and immune system, and this small number of cells must convince the immune system to launch a full-scale attack on short notice. The team's evidence suggests that the T cells and dendritic cells combine forces, the dendritic cells forming networks that coordinate their presentation. The T cells can then navigate these networks to find the most informative antigen-presenting dendritic cells. The team is currently exploring how this coordination is carried out.
 
Shakhar is also on the trail of a network of cells involved in a family of lower digestive system disorders that includes Crohn's disease. Evidence suggests these autoimmune diseases are rooted in the faulty regulation of immune responses to the normal population of microbes living in the intestines. This network involves three-way interactions between the inflammation-causing immune system cells, regulatory T cells, which damp down immune responses, and dendritic cells. He and his team are imaging the intestines of mice into which they have introduced both bacteria with specific antigens and immune system T cells that recognize and respond only to these antigens. They believe the dendritic cells are key players in this network, and the team's efforts to pin down their role might provide new targets for treating these diseases.  
 

True to Life
 

When Robert Hooke first observed a slice of cork under a microscope in the 17th century, he named the tiny empty compartments he saw "cells." In the ensuing years, microscopes have become much more powerful, revealing the world of cells in infinitely greater detail. But until recently, whenever researchers wanted to visualize cells inside complex tissues, they had to fix them firmly in place – as dead as those in Hooke's cork.
 
In the last few years, new kinds of microscopes have enabled scientists to observe living, moving cells deep inside live animals.
 
Dr. Guy Shakhar uses one called a two-photon microscope to track immune cells in anesthetized mice. Based on a physical phenomenon in which two infrared photons striking fluorescent molecules in rapid succession cause them to emit a flash of visible light, the method produces 3-D images of cells and tissues by shining ultra-quick pulses of highly focused infrared laser light. The infrared beam can penetrate tissue to a depth of several hundred microns, allowing Shakhar to observe the organization of the tissue without slicing into it and to follow the activities of cells over time.
 

Image of a Scientist
 

Dr. Guy Shakhar was born in Jerusalem and grew up in Givatayim, near Tel Aviv. After serving in the IDF, he enrolled in Tel Aviv University, receiving an M.Sc. in interdisciplinary studies and a Ph.D. in neuroscience, in which he researched interactions between the immune system and the nervous system. In his postdoctoral work with Prof. Michael Dustin at NYU School of Medicine, Shakhar turned to imaging immune cells to investigate their activities. In 2006, Shakhar joined the Weizmann Institute as a senior scientist.
 
Married to Keren, Shakhar is the father of two children – Amit, aged four, and Gal, aged six. He enjoys mountain biking and playing the Asian board game Go.
 
Dr. Guy Shakhar's research is supported by the Kirk Center for Childhood Cancer and Immunological Disorders; the Crown Endowment Fund for Immunological Research; the Linda Tallen and David Paul Kane Educational and Research Foundation; the Philip M. Klutznick Fund for Research; the Wolfson Family Charitable Trust; and the estate of David Turner.
 
(l-r) Top: Julia Farache, Dr. Tali Feferman, Dr. Guy Shakhar and Ira Gurevich. Bottom: Orna Tal and Idan Milo. Intercepting a sneak attack
Life Sciences

Passport Please

Prof. Rony Seger. A code to enter through the membrane

Border controls are the way a country regulates who and what crosses into its territory. Living cells also have "border controls" – membranes that surround the whole cell or especially sensitive or important parts of it. One of the cell's organelles – the nucleus – is even enclosed in a double membrane that serves to protect its highly valuable contents – the organism's genetic material. This membrane regulates the flow of molecules into and out of the nucleus via "border gates" – referred to as membrane-bound pores – helping to prevent erroneous DNA activation.

Those molecules authorized to enter the nucleus and carry out legitimate activities usually possess a special "localization code" – a specific amino acid sequence harbored within a particular region of the molecule. This code is recognized by special carrier proteins (importins), which then facilitate the transport of these molecules into the nucleus.

Scientists have only recently begun to notice that some molecules entering the nucleus lack the localization code. So how do they pass through the border? In a research article recently published in Molecular Cell, Prof. Rony Seger and Ph.D. student Dana Chuderland, with help from postdoctoral fellow Dr. Alexander Konson, all of the Weizmann Institute's Biological Regulation Department, identified a previously unknown mechanism that grants permission to some of these molecules to enter. Because certain molecules that cross over into the nucleus are overactive in such diseases as cancer, this new mechanism may present an effective means of "stopping them at the border."

Seger's team focused on signaling proteins named ERK. These proteins, which function mainly within the nucleus, are involved in such cellular activities as gene expression (the production of proteins) as well as cell proliferation and differentiation. Seger's group discovered, through experiments and bioinformatics analyses, that these signaling proteins do possess a localization code after all – but its sequence is different from that of the commonly recognized one. As opposed to the molecules possessing the regular localization code, molecules bearing the newly identified code do not have an automatic free pass; the team identified further entry criteria they need to meet. ERK proteins are usually held in "customs" – anchored to other proteins in the cell's cytoplasm – and can only get released upon extracellular stimulation of the cells with growth factors and hormones. Once this occurs, their particular entry code needs to get "stamped" with a phosphate group. (The addition or removal of a phosphate is an important regulatory mechanism of cellular activity.) Only then can these proteins bind to the particular carrier protein that helps them localize to the nucleus, where they can carry out their assignment.
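
The entry procedure the team describes works like a checklist at the border: every condition must be met before the carrier will ferry the protein in. A minimal sketch of that logic (the function and argument names are hypothetical, for illustration only):

```python
# Illustrative checklist of the entry conditions described above.
# All names here are hypothetical; this is not code from the study.

def may_enter_nucleus(has_entry_code: bool,
                      released_from_anchor: bool,
                      code_phosphorylated: bool) -> bool:
    """An ERK-like protein crosses the nuclear 'border' only if it carries
    the newly identified localization code, has been released from its
    cytoplasmic anchors (which happens upon stimulation by growth factors
    or hormones), and has had that code 'stamped' with a phosphate group."""
    return has_entry_code and released_from_anchor and code_phosphorylated

# A stimulated, phosphorylated ERK protein is waved through:
print(may_enter_nucleus(True, True, True))
# Remove the code, or block phosphorylation, and it is stranded outside:
print(may_enter_nucleus(False, True, True))
print(may_enter_nucleus(True, True, False))
```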

When the newly discovered entry code was removed from these proteins or the addition of the phosphate groups was prevented, the ERK proteins remained stranded outside the nuclear membrane, thus confirming that both are necessary for crossing the border. The scientists noted that these events inhibited cellular proliferation, hinting that the new code might be useful for fighting cancer, in which cell reproduction goes out of control.

It turns out that ERK is not the only type of protein to use this mechanism; a bioinformatics search revealed that about 40 proteins can translocate to the nucleus by the same route. In other words, Seger's group seems to have identified a unique general mechanism shared by these proteins.

 
Extracellular activation gets ERK proteins into the nucleus
 

These findings may have important implications for the development of new therapies: Inappropriate activation of ERK proteins and ensuing cell proliferation is common in human cancers. Because these proteins are important in so many other processes, however, existing treatments that target all ERK proteins lead to undesirable side effects. A drug that could selectively target the newly identified entry code might act mainly to prevent proliferation, resulting in a more effective anti-cancer drug with fewer adverse reactions.   

 

Prof. Rony Seger's research is supported by the M.D. Moross Institute for Cancer Research; and the Phyllis and Joseph Gurwin Fund for Scientific Advancement. Prof. Seger is the incumbent of the Yale S. Lewine and Ella Miller Lewine Professorial Chair for Cancer Research.

 
Prof. Rony Seger. Going through customs
Life Sciences

Optimum Performance

Dr. Tsvi Tlusty. Efficiency in the bacterial genome

To increase profits, one should lower costs, raise prices and increase efficiency. Simple enough advice; but ask any business manager to what extent the cost of employee bonuses is offset by greater efficiency. Chances are the answer will be anything but simple.

The bacterium Escherichia coli, one of the best-studied single-cell organisms around, has its own formula for industrial efficiency – one that serves it well. This bacterium can be thought of as a factory with just one product: itself. It exists to make copies of itself, and its business plan is to make them at the lowest possible cost with the greatest possible efficiency. Efficiency, in the case of a bacterium, can be judged by the energy and resources it uses to maintain its plant and produce new cells, versus the time it expends on these tasks.

Dr. Tsvi Tlusty and research student Arbel Tadmor of the Physics of Complex Systems Department developed a mathematical model for evaluating the efficiency of these microscopic production plants. Their model, which recently appeared in the online journal PLoS Computational Biology, uses only five remarkably simple equations to check the efficiency of these complex factory systems.

The equations look at two components of the protein production process: ribosomes – the machinery by which proteins are produced – and RNA polymerase – the enzyme that copies the genetic information for protein production onto strands of messenger RNA for further translation into proteins. RNA polymerase is thus a sort of work "supervisor" that keeps protein production running smoothly by checking the specs and setting the pace. The first equation assesses the production rate of the ribosomes themselves; the second, the protein output of the ribosomes; the third, the production of RNA polymerase. The last two equations deal with how the cell assigns the available ribosomes and polymerases to the various tasks of creating other proteins, more ribosomes or more polymerases.
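
To get a feel for the kind of bookkeeping such equations do, consider a stripped-down version of the ribosome side alone. This back-of-the-envelope sketch is not the paper's model – it ignores RNA polymerase, rRNA synthesis and inactive ribosomes, so it overestimates real growth rates – and the figures are rough, textbook-style values:

```python
import math

# A stripped-down "ribosomes making ribosomes" calculation. If a fraction
# phi of the cell's ribosomes is assigned to making ribosomal protein,
# ribosome numbers obey dR/dt = phi * k * R / n_R, i.e. exponential
# growth at rate mu = phi * k / n_R.

k = 20.0      # amino acids added per second by one ribosome (rough figure)
n_R = 7500.0  # amino acids of protein in one ribosome (rough figure)
phi = 0.5     # fraction of ribosomes making ribosomal protein (hypothetical)

mu = phi * k / n_R                           # growth rate, per second
doubling_time_min = math.log(2) / mu / 60.0
print(f"doubling time: {doubling_time_min:.1f} minutes")
```

The point of the exercise: the more capacity the cell diverts to building machinery, the faster the whole factory grows, but that capacity is then unavailable for everything else – which is exactly the allocation trade-off the full model optimizes.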

The theoretical model was tested in real bacteria. Do bacteria "weigh" the costs of constructing and maintaining their protein production machinery against the gains to be had from producing more proteins in less time? What happens when a critical piece of equipment – say, a main ribosome protein – is in short supply? When experimentally induced changes to genes with important cellular functions disrupted the work flow, Tlusty and Tadmor found that their model accurately predicted how an E. coli cell would change its production strategy to maximize efficiency.

 
Bacteria are models of efficiency
 

What's the optimum? The model predicts that a bacterium, for instance, should have seven genes for ribosome production. It turns out that's exactly the number an average E. coli cell has. Bacteria having five or nine receive a much lower efficiency rating. Evolution, in other words, is a master efficiency expert for living factories, meeting any challenges that arise as production conditions change.   

 

Dr. Tsvi Tlusty's research is supported by the Clore Center for Biological Physics.

 
Dr. Tsvi Tlusty. Evolved for efficiency
Math & Computer Science

Screen Saver

Prof. Rafael Malach and his research team. Brain at work, even while resting

What's happening in a brain that's disengaged from any focused task? The owner of that brain might be under the impression that when his feet are up and his eyes are closed, most mental activity comes to a standstill; but research at the Weizmann Institute shows that, in truth, the nerve cells in his brain – even those in the visual centers – are still humming with activity.

Previous research conducted by Prof. Rafael Malach and research student Yuval Nir of the Neurobiology Department used functional magnetic resonance imaging (fMRI) to measure brain activity in active and resting states. The results of their study and other similar studies around the world were as surprising as they were controversial: The magnitude of brain activity for the various senses (sight, touch, etc.) in a resting brain is quite similar to that of one exposed to a stimulus. Each sense exhibits a distinctive pattern, spread around different areas of both hemispheres in the brain; the fMRI images show activity in all the areas, whether an outside stimulus is present or not. But fMRI is an indirect measurement of brain activity; it can't catch the nuances of the pulses of electricity that characterize neuron activity.

Together with Prof. Itzhak Fried of the University of California at Los Angeles and a superb team in the EEG unit of the Tel Aviv Sourasky Medical Center, the researchers found a unique source of direct measurement of electrical activity in the brain: data collected from epilepsy patients who underwent extensive testing, including measurement of neuronal pulses in various parts of their brain, in the course of diagnosis and treatment.

An analysis of these data showed conclusively that electrical activity does, indeed, take place even in the absence of stimuli. On the other hand, the nature of the electrical activity differs between the two states. In results that appeared recently in Nature Neuroscience, the scientists showed that the resting activity consists of extremely slow fluctuations, as opposed to the short, quick bursts that typify a response associated with a sensory percept. This difference may explain why, even though our sensory neurons are constantly active, we don't experience non-existent stimuli – hallucinations or voices that aren't there – during rest. Moreover, according to the research, the rest-time oscillations appear to be strongest when we sense nothing at all – during dream-free sleep.

Malach compares the slow fluctuation pattern to a computer screen saver. Though its function is still unclear, the researchers have a number of hypotheses. One possibility is that neurons, like certain philosophers, must "think" in order to be. Survival, therefore, is dependent on a constant state of activity. Another suggestion is that the minimal level of activity enables a quick start when a stimulus eventually presents itself, something like a getaway car with the engine running. Nir: "In the old approach, the senses are 'turned on' by the switch of an outside stimulus. This is giving way to a new paradigm in which the brain is constantly active, and stimuli change and shape that activity."

Malach: "The use of clinical data from a hospital enabled us to solve a riddle of basic science in a way that would have been impossible with conventional methods. These findings could, in the future, become the basis of advanced diagnostic techniques." Such techniques might not necessarily require the cooperation of the patient, allowing them to be used, for instance, on people in a coma or on young children.
 

 

Replay

 
In research that appeared in Science, Malach, research student Hagar Gelbard-Sagiv with Dr. Michal Harel of the Institute's Neurobiology Department, and Fried with postdoctoral fellow Dr. Roy Mukamel of UCLA, showed how remembering, at least a few minutes after the original event, is something like a rerun of that event.

Working with epilepsy patients who had thin electrodes implanted in their brains for medical purposes, the scientists were able to measure the electrical activity of individual neurons. The participants were shown a series of film clips of everything from The Simpsons and Seinfeld episodes to historical events and classic movies. As they watched, the electrical activity in their brains was measured, particularly in groups of neurons in several brain areas associated with memory. These nerve cells showed preferences for some clips over others by increasing their activity; the researchers were able to connect specific patterns of electrical activity to the clips that elicited them.

A few minutes later, while their brain activity was still being measured, the subjects were invited to think about the clips they had viewed, and to report each time a new clip came to mind. The scientists found that the patterns of brain activity in remembering were so similar to those observed during the original viewing that they were able to tell which clip the subject was recalling – about a second and a half before he or she said it aloud. "It's possible that it takes a second and a half – not a short period by brain activity standards – for the subject to be able to articulate his memory. On the other hand, it might be that part of that time is spent in bringing that memory to the surface, while the conscious brain is still unaware of the event it will recall," says Gelbard-Sagiv.

Prof. Rafael Malach's research is supported by the Nella and Leon Benoziyo Center for Neurological Diseases; the Carl and Micaela Einhorn-Dominic Brain Research Institute; Vera Benedek, Israel; the Benjamin and Seema Pulier Charitable Foundation, Inc.; and Mary Helen Rowen, New York, NY. Prof. Malach is the incumbent of the Barbara and Morris Levinson Professorial Chair in Brain Research.
 
Standing: (l-r) Eran Dayan, Dr. Son Preminger, Dr. Guido Hesselman and Lior Fisch. Sitting: (l-r) Roye Salomon, Michal Ramot, Ido Davidesko, Prof. Rafael Malach and Michal Harel. Constant activity
Life Sciences

Surfin' DNA

Dr. Yaakov Levy. Sugar molecules shape proteins

Any good detective knows that to solve a mystery, the evidence left at the scene is never sufficient; one needs to piece together events that took place prior to the crime. Actions that may at first appear to be irrelevant or distant may turn out to be the key to revealing the culprit. Those investigating biological mysteries also often find that clues are hidden in ostensible red herrings – seemingly tangential events that take place before the main action, for instance.

Dr. Yaakov (Koby) Levy of the Structural Biology Department investigates the "histories" of proteins, looking for activities that bear on the final outcome. How, for example, do the events a protein experiences prior to folding affect the shape it ends up with? How do proteins navigate a coil of DNA before they attach themselves to the proper binding site? Using computational models and other theoretical tools, Levy creates simplified portraits of complex biological phenomena that enable him to seek answers to these and other questions about the activities of such large molecules as proteins, DNA and RNA. While shedding light on basic biological processes, Levy hopes to help identify perpetrators that sometimes disrupt the functions of these molecules and cause problems including neurological diseases and cancer.

The life of a protein is filled with action: It folds and unfolds; phosphate and sugar groups attach themselves to its backbone and detach themselves again. These chemical modifications affect a protein's function: They switch it on and off, and they control the intensity of its activities. But Levy and postdoctoral fellow Dr. Dalit Shental-Bechor recently showed that when sugar molecules bind to a protein, they may do more than that: They may help shape the very nature of the protein. In research that recently appeared in the Proceedings of the National Academy of Sciences (PNAS), USA, the researchers showed that the sugars that latch on to a protein directly affect its stability. They created a simple model of a protein molecule to which they attached two different types of sugar in various positions and amounts. Altogether, they produced about 60 versions of the sugar-bound protein.

Their model agreed with experimental results showing that the protein's stability rises as the number of sugar molecules increases. Further investigations of their model enabled them to explain why this happens. Like the colonel in a mystery plot who turns out to have been a double agent, the researchers found that the sugars, rather than being a stabilizing factor – as one might assume from the initial evidence – actually work to destabilize the protein, at least in one of its states. As more sugar groups bind to the unfolded protein, they increase its instability, driving it to "opt" for a stable, folded state. "By putting proteins through a range of chemical changes, nature has come up with an economical means of expanding the protein pool," says Levy. "These sugars are a control mechanism for managing the amounts of proteins in the folded and unfolded states."
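
The logic of that equilibrium shift can be sketched with a toy two-state folding model. Everything here is illustrative – the free-energy numbers, and the assumption that each sugar adds a fixed destabilization to the unfolded state, are stand-ins rather than values from the PNAS study:

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 298.0      # room temperature, K

def folded_fraction(dG_unfold):
    """Fraction of molecules in the folded state of a two-state model,
    where dG_unfold = G(unfolded) - G(folded), in kJ/mol."""
    K = math.exp(-dG_unfold / (R * T))   # equilibrium ratio [unfolded]/[folded]
    return 1.0 / (1.0 + K)

# Hypothetical effect: each attached sugar raises the free energy of the
# unfolded state by 1 kJ/mol, widening the folded/unfolded gap.
base_gap = 5.0
for n_sugars in (0, 2, 4, 8):
    print(n_sugars, round(folded_fraction(base_gap + n_sugars), 3))
```

Raising the energy of the unfolded state – rather than lowering that of the folded one – yields the same observable outcome: a larger fraction of the protein ends up folded.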

Illustration: Surfing on the DNA

Another subject Levy explores with his models concerns the interactions between proteins and DNA. Various protein molecules must bind to specific sites on the DNA for many of the cell's most crucial activities to be carried out. These activities include gene expression, repair of damaged DNA and packaging the DNA into compact structures in the cell nucleus. The binding must be both quick and accurate: Proteins typically have only a few seconds to pick the right address from millions, or even billions, of possible binding sites. How do they manage? In research that recently appeared in the Journal of Molecular Biology, Levy and research student Ohad Givaty used a theoretical model to evaluate possible search methods. The molecules could, for instance, conduct an in-depth search, trawling through the DNA letter by letter to find the right code. Alternatively, they might rapidly "surf" the DNA, skipping over large parts of the sequence. According to the model, the best way for proteins to conduct a DNA search is indeed by surfing. A protein should move in a spiral path around the DNA complex, skipping over about 80 percent of the encoded sequence as it goes. Levy and his team plan to continue investigating this model to try to ascertain whether the length of the DNA molecule affects the search method, whether there are differences in approach between single- and double-stranded molecules, and whether different types of proteins adopt different methods for surfing the DNA.
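
The trade-off can be illustrated with the classic "facilitated diffusion" estimate – a generic sketch, not Levy's spiral model, with all costs invented for illustration. Sliding along DNA is a random walk, so reading a stretch of s sites costs about s² slide steps, while skipping to a fresh stretch costs a fixed hop:

```python
def total_search_time(L, s, t_slide=1.0, t_hop=100.0):
    """Expected time to find one target among L sites when the protein
    alternates 1D sliding (covering s sites costs ~s^2 slide steps,
    because sliding is diffusive) with hops to random new stretches."""
    rounds = L / s          # stretches scanned, on average, before hitting the target
    return rounds * (s * s * t_slide + t_hop)

L = 10_000
best = min(range(1, 500), key=lambda s: total_search_time(L, s))
print(best)   # the optimum lands near sqrt(t_hop / t_slide) sites per slide
```

Pure letter-by-letter scanning and pure random hopping are both slow; a mix that skips most of the sequence wins, echoing the finding that surfing past large parts of the DNA pays off.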
 

Dr. Yaakov Levy's research is supported by the Clore Center for Biological Physics; the Helen and Martin Kimmel Center for Molecular Design; and the Helen and Milton A. Kimmelman Center for Biomolecular Structure and Assembly. Dr. Levy is the incumbent of the Lillian and George Lyttle Career Development Chair.

 

Scientist and Teacher

 

Tel Aviv-born Dr. Koby Levy received his B.Sc. in chemistry from the Technion in 1992. After completing a Ph.D. in theoretical-computational biophysics at Tel Aviv University in 2002, Levy went on to postdoctoral research at the Theoretical Biological Physics Center at the University of California, San Diego. He joined the Weizmann Institute's Structural Biology Department in 2006. In parallel, Levy has been involved in education: As a university student, he taught high school chemistry in Haifa, and he was later appointed to the Center for Advanced Teaching at Tel Aviv University. Today, he is a member of the executive committee of "Daniel," a non-profit organization for the promotion of anthroposophic education in Israel. Levy, his wife, Rinat, and their two children, Naama (8) and Arnon (2), live in Rehovot.  

Life Sciences

Supernannies

 

 Vicki Plaks, Tal Birnberg, Prof. Nava Dekel, Dr. Steffen Jung and Prof. Michal Neeman. Critical pregnancy stage
By the time a fertilized egg completes the trip from ovary to uterus, it has already been transformed into a tiny ball of cells called a blastocyst. Its next step is fraught with risks: Around half of all blastocysts will fail in their attempts to implant in the uterus. Some will be rejected due to abnormalities, but others won't make it because the uterus for some reason doesn't provide them with the conditions they need to thrive and grow.

From the moment a blastocyst settles on an inviting spot on the uterus lining, the surrounding cells begin to change and grow. Soon, a spongy, blood-rich mass called the decidua forms from the uterine tissue. Most of the decidua disappears once a proper placenta is formed, but the decidua tissue is crucial for nurturing the new embryo during the precarious first stage of pregnancy.

Changes in the uterine wall at this time include alterations in the population of immune cells that normally inhabit the uterus. For instance, cells called dendritic cells cluster near the newly implanted blastocyst. Dendritic cells of different types are found throughout the body, where they generally help to form one of its first lines of defense against invading microorganisms, scouting the tissues and alerting the immune system to potential threats.

So what are these immune system fighters doing in the vicinity of the newly implanted blastocyst? The prevailing hypothesis is that these cells somehow play an opposite, protective role in implantation. Uterine dendritic cells, so the theory goes, help to prevent other immune cells from attacking the tiny blastocyst, which shares only half of its genes with its mother and might therefore be regarded by her immune system as a foreign menace.

To test this theory, two research students, Tal Birnberg and Vicki Plaks from the laboratories of Dr. Steffen Jung at the Institute's Immunology Department, and Profs. Nava Dekel and Michal Neeman of the Biological Regulation Department, in collaboration with Dr. Gil Mor of the Yale University School of Medicine, cross-bred mice so that the embryos were genetically identical to the mothers. They then removed the dendritic cells from the uterus using an in vivo cell depletion model developed by Jung during his postdoctoral studies at New York University. Because the immune system could not identify the embryo as foreign, there was no cause for rejection of the blastocysts – but implantation failed anyway.

If they don't defend the blastocysts, what role do the dendritic cells play? To investigate, the scientists again depleted the uterus of dendritic cells, this time in mice that had been induced to develop a decidua without a blastocyst (a technique perfected by Dekel's team). In every case, in vivo MRI studies showed the decidua-forming cells multiplied more slowly and didn't differentiate properly, and new blood vessels were slow to grow.

These findings led the researchers to the somewhat startling conclusion that these particular dendritic cells had taken on a completely new function. Rather than acting as front-line warriors of the immune system, or even protectors of the new embryo, they seemed to be involved in remodeling the nursery, helping to reshape the tissue surrounding the implantation site to provide for the needs of the new embryo. The scientists hope to continue this line of research and to identify exactly what factors are involved in creating a viable decidua. As well as shedding light on this vital but little-understood stage of pregnancy, future studies based on this research may help to advance treatments for infertility.

Prof. Nava Dekel's research is supported by the Dwek Family Biomedical Research Fund; the Kirk Center for Childhood Cancer and Immunological Disorders; and the Dr. Pearl H. Levine Foundation for Research in the Neurosciences. Prof. Dekel is the incumbent of the Philip M. Klutznick Professorial Chair of Developmental Biology.
 

Dr. Steffen Jung's research is supported by the Kekst Family Center for Medical Genetics; the Kirk Center for Childhood Cancer and Immunological Disorders; the Swiss Society of Friends of the Weizmann Institute of Science; the Fritz Thyssen Stiftung; the estate of Edith Goldensohn; the Women's Health Research Center funded by: the Bennett-Pritzker Endowment Fund, the Marvelle Koffler Program for Breast Cancer Research, the Harry and Jeanette Weinberg Women's Health Research Endowment and the Oprah Winfrey Biomedical Research Fund; and the Center for Health Sciences funded by the Dwek Family Biomedical Research Fund and the Maria and Bernhard Zondek Hormone Research Fund. Dr. Jung is the incumbent of the Pauline Recanati Career Development Chair of Immunology.
 

Prof. Michal Neeman's research is supported by the Clore Center for Biological Physics; the Abisch Frenkel Foundation for the Promotion of Life Sciences; the Phyllis and Joseph Gurwin Fund for Scientific Advancement; the Ridgefield Foundation; the Women's Health Research Center funded by: the Bennett-Pritzker Endowment Fund, the Marvelle Koffler Program for Breast Cancer Research, the Harry and Jeanette Weinberg Women's Health Research Endowment and the Oprah Winfrey Biomedical Research Fund. Prof. Neeman is the incumbent of the Helen and Morris Mauerberger Chair in Biological Sciences.

 
Composite image. Decidua and dendritic cells

Image of an embryo overlaid with an immunofluorescent image of the decidua (purple). Dendritic cells are in green and blood vessels in red
Life Sciences

Designer Nerves

 
Dr. Assaf Rotem and Prof. Elisha Moses. Home-grown nerve networks
 

In think tanks around the world, analysts try to identify new developments in science and technology that might spur the next big technology wave. These trend spotters will undoubtedly be interested in new research conducted by Prof. Elisha Moses of the Physics of Complex Systems Department and his former research students Drs. Ofer Feinerman and Assaf Rotem. This team has taken the first step in creating circuits and logic gates made of live nerves grown in the lab. The research, which recently appeared in Nature Physics, could have implications for the future creation of biological interfaces between human brains and artificial systems.

Neurons are complex cells, and the networks they form are even more complex: In the brain, large numbers of connections to other neurons are what enable us to think. But when the same cells are grown in lab cultures, they turn out to be quite sluggish, reverting to a very primitive level of functioning. Why does this happen, and how can cells grown in culture be coaxed into becoming more "brainy"?

Moses, Feinerman and Rotem investigated the idea that the design of the nerve network's physical structure might be the key to improving its function. Being physicists – scientists who are used to working with simplified systems – they created a model nerve network in a single dimension, one defined by a straight groove etched on a glass plate. The neurons were "allowed" to develop only along this line. When grown this way, the researchers found the nerve cells could, for the first time ever, be stimulated with a magnetic field. (Previously, neurons in culture had been successfully stimulated only with electricity.)

The scientists then investigated the lines further to see if the width of the neuron stripe had any effect on the signals transmitted from cell to cell. Growing the neuronal networks on lines of varying widths, they discovered a threshold thickness – one that allowed for the development of about 100 axons (nerve cell extensions). Below this number, the chances of a neuron passing on a signal it received were iffy. This is because the cells are "programmed" to receive a minimum number of incoming signals before they fire off another one in response. A large number of axonal connections to other nerve cells – say, several hundred – will more or less ensure a reliable response, whereas cells that are connected to just a few others will fire only sporadically. With about 100 neuron connections, signals are sometimes passed on and sometimes not, but adding just a few more raises the odds of a response quite a bit.   
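
The sharpness of that threshold falls out of simple binomial statistics. In this toy version – the 50-percent transmission probability and the 50-input firing requirement are illustrative assumptions, not measured values – the chance of firing jumps from rare to near-certain over a narrow band of input counts:

```python
from math import comb

def fire_probability(n_inputs, p_active=0.5, threshold=50):
    """Chance a cell fires if it needs at least `threshold` of its
    n_inputs incoming axons to be active at once, each axon active
    independently with probability p_active."""
    return sum(comb(n_inputs, k) * p_active**k * (1 - p_active)**(n_inputs - k)
               for k in range(threshold, n_inputs + 1))

for n in (80, 100, 120, 200):
    print(n, round(fire_probability(n), 3))
```

With 80 inputs firing is rare, with 100 it is close to a coin toss, and by 120 it is nearly guaranteed – the "iffy" regime described above, where adding just a few more connections raises the odds considerably.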

Putting these findings to work, the scientists used two thin stripes of around 100 axons each to create a logic gate similar to one in an electronic computer. Both of these "wires" were connected to a small number of nerve cells. When the cells received a signal along just one of the "wires," the outcome was uncertain; but a signal sent along both "wires" simultaneously was assured of a response. This type of structure is known as an AND gate.
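
A minimal simulation shows why two stripes of roughly 100 unreliable axons behave like an AND gate; the axon counts, transmission probability and firing threshold here are all assumed for illustration:

```python
import random

def active_axons(wire_on, n_axons=100, p_transmit=0.5, rng=None):
    """Axons in one stripe that happen to deliver a signal, each
    succeeding independently with probability p_transmit."""
    if not wire_on:
        return 0
    return sum(rng.random() < p_transmit for _ in range(n_axons))

def and_gate(a, b, threshold=75, rng=None):
    """The output cell fires only if the active axons across both
    input stripes together cross its firing threshold."""
    rng = rng or random.Random()
    return active_axons(a, rng=rng) + active_axons(b, rng=rng) >= threshold

rng = random.Random(0)
print(and_gate(True, True, rng=rng))    # almost always True
print(and_gate(True, False, rng=rng))   # almost always False
```

One active stripe delivers about 50 signals on average, well under the threshold of 75; both stripes together deliver about 100, comfortably over it – uncertainty for a single input, a reliable response for both.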

The next structure the team created was yet more complex: Triangles fashioned from the neuron stripes were lined up in a row, point to rib, in a way that forced the axons to develop and send signals in one direction only. Several of these segmented shapes were then linked to one another in a loop to create a closed circuit. The regular relay of nerve signals around the circuit turned it into a sort of biological clock or pacemaker.
 
 
Circuits from nerve cells in the lab
 
Moses: "By creating simple computation tools from neurons with unreliable connections, we learn about the problems the brain must overcome when designing its own complicated 'thought machinery'. Simple animals rely on single nerves to direct their behavior, so these must be very reliable; but our nervous system is built of huge networks of nerve cells that enable a wide range of responses. The extreme sophistication of our neurons carries with it a high risk of error, and a large number of cells is needed to reduce the likelihood of mistakes. Right now we're asking: 'What do nerve cells grown in culture require to be able to carry out complex calculations?' As we find answers, we get closer to understanding the conditions needed for creating a synthetic, many-neuron 'thinking' apparatus."
 
Rotem: "Neuron-based circuits are still much slower than the electronic circuits existing today. But they might offer the promise of performing calculations in parallel, which are impossible on today's computers."
 
Today, this research is contributing to our understanding of the physical principles underlying thought and brain function. In the more distant future, it might serve as the basis for designing new methods of connecting the human brain to various artificial devices and systems.  

(l-r) Dr. Assaf Rotem and Prof. Elisha Moses. Smart nerve network
Math & Computer Science

Nanotube News

Ronen Kreizman, Dr. Maya Bar Sadan, Profs. Daniel Wagner, Reshef Tenne and Ernesto Joselevich and Dr. Ifat Kaplan-Ashiri. Stacking, stretching and bending
 
Picture-Perfect Nanotubes

 
A chain is only as strong as its weakest link. This saying is true for materials as well; it is the defects that often ultimately determine a material's strength.
 
Nanotubes may buck this trend, as their defects are limited. Because of their infinitesimal size, however, it's hard to prove this experimentally.
 
The first synthesis of inorganic nanotubes, composed of tungsten disulfide, took place in the lab of Prof. Reshef Tenne of the Materials and Interfaces Department more than 15 years ago. When Dr. Ifat Kaplan-Ashiri, recently a student in Tenne's group in the Institute's Faculty of Chemistry, decided to investigate the mechanical properties of these nanotubes – made of compounds other than carbon – she turned to Prof. Daniel Wagner of the same department.
 
Wagner researches carbon nanotubes, and he has developed special techniques to probe their mechanical properties. Applying Wagner's techniques to multiwalled tungsten disulfide nanotubes synthesized in Tenne's lab by Dr. Rita Rosentsveig, Kaplan-Ashiri put them through a series of stretching, bending and compression "exercises" while observing their behavior under a scanning electron microscope. When her results were compared with theoretical values obtained by the group of Prof. Gotthard Seifert of Dresden University of Technology, Germany, the values matched almost exactly. In other words, the nanotubes were as strong as the theory predicted – virtually defect-free. Also participating in the project were Drs. Sidney R. Cohen and Konstantin Gartsman of the Chemical Research Support Department.
 

The Missing Link

 
Dr. Maya Bar Sadan, also a former Tenne student, researches the structural properties of nanotubes. Now a postdoctoral fellow at the Julich Research Centre, Germany, Bar Sadan uses state-of-the-art electron microscopy techniques to determine the structure of multiwalled, inorganic nanotubes, atom by atom.
 
A nanotube basically consists of a sheet of atoms that has been rolled up into a seamless cylinder. But that sheet can roll in different ways: "armchair-wise" on the horizontal plane, "zigzag" on the vertical plane or "chiral" diagonally. Multiwalled nanotubes are, like Russian dolls, composed of multiple cylinders nested inside one another.
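
The three rolling geometries are conventionally labeled by a pair of chiral indices (n, m) that describe how the sheet's lattice vector wraps around the cylinder – a convention borrowed from carbon nanotubes that applies to inorganic ones as well:

```python
def chirality(n, m):
    """Classify a nanotube by its chiral indices (n, m): armchair
    tubes roll along one lattice symmetry direction, zigzag tubes
    along another, and everything in between is chiral."""
    if n == m:
        return "armchair"
    if m == 0:
        return "zigzag"
    return "chiral"

print(chirality(5, 5))   # armchair
print(chirality(9, 0))   # zigzag
print(chirality(6, 4))   # chiral
```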
 
Bar Sadan, working with Dr. Lothar Houben, found that the first two or three outer layers of multiwalled, inorganic nanotubes are always rolled identically, either armchair or zigzag. Inside is a chiral layer, with the innermost layers reverting to either the armchair or zigzag conformation. Knowing the tubes' structure can aid in refining their growth mechanism, helping to create more perfect nanotubes with even greater strength.

 

Double Twist

 
Tenne also approached Prof. Ernesto Joselevich, also of the Materials and Interfaces Department, and his postdoctoral fellow Nagapriya Kavoori, who had developed a method to test nanotube mechanics by twisting them.
 
Kavoori, together with Ohad Goldbart and Kaplan-Ashiri of Tenne's group, had a surprise: When twisted, the inorganic nanotubes started creaking like the hinges of an old door! This creaking – a type of friction known to physicists as "stick-slip" behavior – takes place in everything from earthquakes to violins, but it had never before been observed in twisting on the atomic scale.
 
What was causing this creaking? Preliminary observations showed that at the onset of twisting, the multiple walls "stick" and twist as one; but beyond a certain angle, the outer layer "slips" and twists around the inner walls.
 
Joselevich, Kavoori and Seifert came up with a simple theoretical model. Unlike smooth-walled carbon nanotubes, inorganic nanotubes have a bumpy, corrugated surface. As Bar Sadan's research had shown, the outer walls are rolled identically, so they stack up like sheets of corrugated tin. The scientists calculated that this stacking would cause the layers to stick initially; but when the force of the twisting became stronger than the "locking" force between the corrugated walls, it caused them to repeatedly stick and slip.
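
The sawtooth signature of stick-slip can be mimicked with a toy model – the stiffness and locking threshold below are arbitrary units, not values from the experiment. Torque builds linearly with twist until it beats the inter-wall locking force, then drops as the outer wall slips:

```python
def torque_trace(angles, k=1.0, slip_threshold=2.0):
    """Torque felt at each twist angle: the walls twist together
    (torque = k * twist accumulated since the last slip) until the
    locking threshold is exceeded, at which point the outer wall
    slips and the stored torque is released."""
    trace, stuck_at = [], 0.0
    for a in angles:
        torque = k * (a - stuck_at)
        if torque > slip_threshold:   # the "creak": the outer wall slips
            stuck_at = a
            torque = 0.0
        trace.append(torque)
    return trace

print(torque_trace([i * 0.5 for i in range(10)]))
# sawtooth: torque climbs to the threshold, collapses, and climbs again
```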

 
A Melting Pot

 
Another student in Tenne's group, Ronen Kreizman, working together with Oxford University student Sung You Hong under the supervision of Profs. Malcolm Green and Ben Davis, has discovered yet another way to get to the core of nanotubes – literally: By melting an inorganic material with a lower melting point than tungsten disulfide, Kreizman found that the liquid is drawn into a nanotube's straw-like hollow cavity, where it then solidifies into a new nanotube.
 
This is the first ever report of a perfectly crystalline inorganic nanotube being produced within a nanotube. Certain inorganic materials tend to be unstable in tubular form, resisting synthesis into nanotubes. Now, however, thanks to Kreizman, this feat seems to be possible.
 
Dr. Ronit Popovitz-Biro of Chemical Research Support and Dr. Ana Albu-Yaron of the Materials and Interfaces Department also participated in this research.  
 
Multiwalled inorganic nanotube
Prof. Ernesto Joselevich's research is supported by the Helen and Martin Kimmel Center for Nanoscale Science; and the Gerhardt Schmidt Minerva Center on Supramolecular Architectures.
 
Prof. Reshef Tenne's research is supported by the Helen and Martin Kimmel Center for Nanoscale Science; and the Phyllis and Joseph Gurwin Fund for Scientific Advancement. Prof. Tenne is the incumbent of the Drake Family Professorial Chair in Nanotechnology.
 

Prof. Daniel Hanoch Wagner is the incumbent of the Livio Norzi Professorial Chair.

 
 

Ifat and Maya

 
Dr. Ifat Kaplan-Ashiri pursued her M.Sc. and Ph.D. degrees in the lab of Prof. Reshef Tenne, garnering many prizes and honors, the most recent being the Outstanding Ph.D. Student Award of 2007, bestowed by the Israeli Chemical Society. Kaplan-Ashiri recently started her postdoctoral studies in the group of Dr. Katherine Willets at the University of Texas at Austin, where she intends to combine atomic force microscopy and Raman spectroscopy to study single molecules.
 
Israeli-born Kaplan-Ashiri is married to Elad; they have one daughter. Apart from nanotubes, she likes to research the "pleasurable properties" of playing the piano, pottery and reading.
 
Dr. Maya Bar Sadan received a B.Sc. in chemical engineering from the Technion and an M.Sc. from the Weizmann Institute, studying superconductors. Her Ph.D. research was carried out under Tenne. Investigating inorganic nanotube properties together with German colleagues, she found they can change from semiconductors to a metal-like state. Bar Sadan is a recent recipient of a National Postdoctoral Award for Advancing Women in Science.
 
The mother of an 8-year-old daughter and 6-year-old twins, Bar Sadan likes to hike and read in her spare time.
 
 
 
(l-r) Ronen Kreizman, Dr. Maya Bar Sadan, Profs. Daniel Wagner, Reshef Tenne and Ernesto Joselevich and Dr. Ifat Kaplan-Ashiri. Defect-free nanotubes
Chemistry
