Science Feature Articles

Acting Locally, Reporting Globally


Mapping of size distributions of a mouse’s gray matter by quantum-controlled proton MRI. (l) Brain proton MRI; (c) mean cellular size; (r) distribution peak

The smallest devices for storing information are qubits – quantum bits. Qubits are all around us – they are in the nuclei of atoms, for example. The trick is to learn to read and use the information they contain.

One technology that reads the information in quantum systems is magnetic resonance (MR). MR – including the familiar MRI scan – exploits the quantum nature of the protons found, for example, in the water molecules in our bodies. These protons have a property known as “spin”: states of rotation that take one of two discrete values, often referred to as “up” or “down.” Spin states endow the protons in a molecule with magnetism, transforming them into “quantum compass needles” that “point” along one of two orientations. These micro-magnets are what enable magnetic resonance: Protons placed in an external magnetic field will, under certain conditions, precess like a spinning top, emitting very weak – but observable – radio waves. MR equipment detects the distinctive radio waves emitted by the protons in molecules such as water, using their locations to build an image.
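For readers curious about the physics behind the “spinning top” picture, the precession rate is set by the strength of the magnetic field through the textbook Larmor relation (a standard result, not stated in the article itself):

```latex
% Larmor precession frequency of a nuclear spin in a magnetic field B_0
\omega_0 = \gamma B_0
% For protons, \gamma / 2\pi \approx 42.58 MHz per tesla, so in a
% typical 3 T clinical scanner the spins precess at roughly 128 MHz --
% the radio-frequency band in which MRI equipment "listens."
```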
 
 
Prof. Lucio Frydman
 
MRI could extract worlds of information beyond the spatial location of protons – if the protons’ emitting states could be efficiently preserved. Unfortunately, the quantum states of these micro-magnet protons are exquisitely delicate; the information they record is rapidly destroyed by interference from the tiny magnetic fluctuations that occur spontaneously in their surroundings.

How can one protect these delicate systems, on the one hand, and get them to reveal the subtle information they carry about their environment, on the other? Prof. Lucio Frydman, together with visiting scientist Dr. Gonzalo Alvarez and postdoctoral fellow Dr. Noam Shemesh, all in the Institute’s Chemical Physics Department, illustrated a new approach to answering this question, showing that some forms of interference can help, rather than hurt, in preserving this information. Their research, as reported in Physical Review Letters, reveals a new family of MR-based methods that may be used, among other things, to probe the size and shape of small pores, cells or tiny nanostructures.

The new methods operate by exposing nuclear spins to one kind of environmental interference while protecting them from all the others. The interference chosen by Frydman and his group comes from the fluctuating magnetic fields associated with the random Brownian motion of molecules, which arises from ordinary thermal collisions. This manipulation turns the nuclei into “spies” that, during their random walks, can “scout out” the confines of a cell or the boundaries of their local microstructures. Combined with the spatial imaging ability of a standard MRI, the result is a completely new way to measure microscopic structural architectures at extremely high resolution. As an added plus, the technique is noninvasive: Shemesh, Alvarez and Frydman demonstrated its ability to perform complex biological imaging – in this case, “virtual histologies” – mapping cell-size distribution in whole brains.
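The “random walk as scout” idea can be illustrated with a toy simulation (a sketch of the general principle of restricted diffusion, not the team’s actual quantum-control method): a freely diffusing particle wanders ever farther, but a particle confined inside a pore can only wander so far, so its mean squared displacement levels off at a plateau set by the pore size.

```python
import random

def confined_walk_msd(pore_size, n_steps, step, n_walkers=2000, seed=1):
    """Mean squared displacement (MSD) of 1D random walkers confined
    to a 'pore' [0, pore_size] with reflecting walls.

    Free diffusion would make the MSD grow without bound; confinement
    makes it level off at a plateau (~pore_size**2 / 6 in 1D), so the
    plateau reveals the pore size."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x0 = rng.uniform(0, pore_size)  # start anywhere in the pore
        x = x0
        for _ in range(n_steps):
            x += rng.choice((-step, step))   # random thermal kick
            if x < 0:                        # reflect off the walls
                x = -x
            elif x > pore_size:
                x = 2 * pore_size - x
        total += (x - x0) ** 2
    return total / n_walkers

# After enough steps the MSD saturates near pore_size**2 / 6, no matter
# how much longer the walk runs.
msd = confined_walk_msd(pore_size=1.0, n_steps=500, step=0.05)
```

The real experiment reads this plateau out of the protons’ magnetic signal rather than tracking particles directly, but the geometric principle is the same.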

Possible additional applications of this method range from solid-state physics to materials sciences and, of course, to the life sciences – where it could, for example, measure the changes in size and shape that the nerve cells’ axons experience due to neural stimulation or maturation. The method could also lead to the development of new contrast mechanisms for medical MRI scans, and to better diagnostic methods for observing physiological changes in diseased tissues or those caused by neurodegeneration.
 
Prof. Lucio Frydman’s research is supported by the Ilse Katz Institute for Material Sciences and Magnetic Resonance Research; the Helen and Martin Kimmel Award for Innovative Investigation; the Helen and Martin Kimmel Institute for Magnetic Resonance Research, which he heads; the Adelis Foundation; the Mary Ralph Designated Philanthropic Fund of the Jewish Community Endowment Fund; Gary and Katy Leff, Calabasas, CA; Paul and Tina Gardner, Austin TX; and the European Research Council.

 

 

Higher End, Lower Cost


 

(l-r) Profs. Gary Hodes, Henry Snaith and David Cahen

 

The goal: To create photovoltaic cells that are highly efficient, easy to produce and economical enough to put on every rooftop. If new materials that have recently come onto the solar cell scene live up to their promise, that goal could possibly be met within the next few years. Prof. David Cahen of the Materials and Interfaces Department says: “It will get to the point that the cells themselves will be so cheap that the external costs involved in producing photovoltaics and installing them will be what ultimately determine their price.”


Cahen and Prof. Gary Hodes of the same department are big fans of these materials, called perovskites. Based on such inexpensive metals as tin or lead, the compounds in this group of materials all share a particular crystal structure. “Perovskite cells are the first inexpensive type of solar cell to work in the high-energy part of the spectrum and give high voltage. And they are so easy to make, you can basically produce them on a hot plate,” says Hodes.
 

 

The rapid rise in efficiency for perovskites as compared to other photovoltaic materials. Graph: Hodes

 

Ninety percent of today’s photovoltaic cells are made with silicon, explains Cahen. Silicon is an inexpensive, abundant material that can allow a current of electrons to flow when photons from sunlight hit its surface. To complete the process, other materials added to the solar cell facilitate charge separation: sending the electrons to one side of the cell for use and the “holes” they leave behind to the other.


But even the very best future silicon module, working at its peak, can use only up to 25% of the sunlight that falls on its surface. Wavelengths at the lower end of the spectrum – for example infrared – do not have enough energy to free silicon’s electrons, while those at the higher end are so energetic that most of their energy is lost. Scientists trying to improve the efficiency of solar cells have been investigating materials that can make better use of those higher-energy photons, which could provide more electricity from the same surface area. Over the years many of these groups – Cahen and Hodes’s among them – have made progress, mostly incremental, in improving efficiency. But both the slow pace and the complexity of these improvements have often been frustrating.
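The “too little / too much energy” tradeoff can be put in rough numbers. The sketch below uses only textbook values – the photon-energy relation E ≈ 1240 eV·nm / λ and an approximate 1.1 eV band gap for crystalline silicon – none of which come from the article:

```python
SILICON_GAP_EV = 1.1  # approximate band gap of crystalline silicon (textbook value)

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron-volts: E = hc / lambda, with hc ~ 1240 eV*nm."""
    return 1240.0 / wavelength_nm

def wasted_fraction(wavelength_nm, gap_ev=SILICON_GAP_EV):
    """Fraction of a photon's energy a single-gap cell cannot use.

    Below-gap photons (long wavelengths) free no electrons at all;
    for above-gap photons, everything beyond the gap energy is lost as heat."""
    energy = photon_energy_ev(wavelength_nm)
    if energy < gap_ev:
        return 1.0  # too little energy: no electron freed
    return (energy - gap_ev) / energy

# A 1500 nm infrared photon (~0.83 eV) is wasted entirely, while a
# 450 nm blue photon (~2.76 eV) loses about 60% of its energy as heat.
```

Materials with a wider gap, such as the perovskites discussed below, keep more of each high-energy photon, which is what the article means by working efficiently “in the high-energy part of the spectrum.”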


Perovskites leapt onto the solar cell scene in 2009 when a group led by Prof. Tsutomu Miyasaka from Yokohama, Japan, used them in a special type of solar cell. While the efficiencies they achieved were respectable, the stability of their cells was very poor. Within a short time, the groups of Profs. Nam-Gyu Park and Sang-Il Seok in Korea, Prof. Henry Snaith (see below) in the UK and Prof. Michael Graetzel in Switzerland showed the way to better efficiencies and stability. Snaith, who had recently set up his lab at Oxford University, and others showed that cells using these materials gave relatively high voltages. Before long, these groups were creating experimental perovskite solar cells that rivaled silicon alternatives in efficiency and produced higher voltages.

 
Cahen and Hodes immediately recognized the potential of this material to fill in the “missing link” – that is, a cheap cell that yields high voltages, efficiently using the high-energy part of the solar spectrum. In a number of recent papers published together with Dr. Saar Kirmayer and PhD student Eran Edri in Nature Communications, with Tel Aviv University colleagues in Nano Letters, and with colleagues from Princeton University in Energy & Environmental Science, they elucidated the mechanisms by which perovskites manage to convert sunlight into useful electricity at high voltage, as well as demonstrating some methods of improving solar cells made of these materials. In two studies that appeared in The Journal of Physical Chemistry Letters, they and MSc student Michael Kulbak tested perovskites layered with various materials that act as hole conductors for improved charge separation. These yielded high-voltage cells that surpassed 1.5V (silicon yields around 0.7V, by comparison). Their results, they say, can point the way toward further improvements.


What is the secret of these materials? Cahen and Hodes say their research, and that of others, points to the crystal structure. The materials form high-quality crystals – that is, structures containing few irregularities. The behavior and efficiency of the perovskite cells fit a model that Cahen and his coworkers proposed some years back, which holds that this order is critical. Furthermore, the material simultaneously contains both inorganic (lead and iodide or bromide) and organic (carbon- and hydrogen-containing rings) components. It is the way these components fit together that makes perovskites so useful for photovoltaic cells: Like silicon, they form nicely ordered crystalline structures, but at the same time, the interactions between the organic parts are weak, creating surfaces that allow electrons to cross easily from grain to grain.


At present, very small perovskite photocells have achieved 16% efficiency, and it appears likely that they can reach, and possibly surpass, 20%. There are still a few hurdles to overcome, cautions Cahen. For one, these materials must prove they are stable over time. For another, the compound contains a small amount of lead, and care must be taken either to find a substitute or to ensure that this toxic element cannot be released into the environment. Neither of these issues, say Cahen and Hodes, has dampened their enthusiasm. Perovskites have renewed both interest in the field and the hope that solar energy may finally become a widespread alternative to fossil-fuel-generated electricity.
 


 

 

Visit


Prof. Henry Snaith was as surprised as anyone by the rapid success of perovskite photovoltaics. “We looked at a whole range of materials before beginning to work with them,” he says. “Normally, if you start with 1% efficiency, you’re happy just to have proved that the material has photovoltaic properties. We started at 6%; within six months we had achieved 10%; and now we are already at 16% – nearly the efficiency of silicon.”


Snaith was recently in Israel as a guest of the Weizmann Institute, and through the Institute, he appeared at the annual meeting of the Israel Chemical Society. During his visit – just two days – he met with Cahen and Hodes and their groups. Snaith and the Weizmann scientists are at present undertaking a collaborative project that is supported by an initiative of Weizmann UK. They are working on widening the range of light energy the material can absorb. Snaith notes that he is looking forward to working with the Institute’s scientists: “Fantastic work is being done here at the Weizmann Institute,” he says.  
 

Prof. David Cahen’s research is supported by the Mary and Tom Beck Canadian Center for Alternative Energy Research, which he heads; the Nancy and Stephen Grand Center for Sensors and Security; the Ben B. and Joyce E. Eisenberg Foundation Endowment Fund; the Monroe and Marjorie Burk Fund for Alternative Energy Studies; the Leona M. and Harry B. Helmsley Charitable Trust; the Carolito Stiftung; the Wolfson Family Charitable Trust; the Charles and David Wolfson Charitable Trust; and Martin Kushner Schnur, Mexico. Prof. Cahen is the incumbent of the Rowland and Sylvia Schaefer Professorial Chair in Energy Research.


Prof. Gary Hodes’s research is supported by the Nancy and Stephen Grand Center for Sensors and Security; and the Leona M. and Harry B. Helmsley Charitable Trust.
 

 

 

Cells Are Individuals Too

(l-r) Prof. Amos Tanay and Dr. Ido Amit
Stereotyping groups of people according to the behaviors and characteristics of a few may not accurately reflect reality – it ignores diversity within these groups. Could this also be true for cells? Traditionally, cells have been classified according to type depending on their looks (round, pointed, etc.) and broad functional characteristics – for example, the immune system’s T and dendritic cells. But new research from the Weizmann Institute suggests that such generalization of cells may be misleading; rather, cells should be judged on their individual merit. “A cell has the potential to independently change its behavior by switching genes ‘on and off’ in response to various developmental signals and cues it receives from its environment. We therefore need a way to distinguish their individual characteristics and function,” says Dr. Ido Amit of the Immunology Department. 
 
This is easier said than done. The average biological tissue contains billions of cells, comprising hundreds – if not thousands – of different “types.” How to sample each individual? A relatively new tool holds promise: single-cell sequencing. This method sequences the DNA and RNA of single cells, allowing scientists to trace and characterize their unique individual behavior – without cell-stereotype prejudices.
 
It can reveal which genes are being expressed, at what levels; whether proteins are being made, and under what conditions. This information will ultimately lead to a better understanding of how the cells work together to ensure the healthy functioning – or, in cases of disease, the malfunctioning – of tissues. 
 
But single-cell sequencing is still in its infancy, and scientists have so far been able to sequence the genetic material of only a small number of cells from tissue cultures consisting of one “type.” Now, as reported in Science, Amit, Prof. Amos Tanay of the Computer Science and Applied Mathematics and Biological Regulation Departments, and their teams, led by Dr. Diego Jaitin, Dr. Hadas Keren-Shaul and PhD student Ephraim Kenigsberg, together with Prof. Steffen Jung’s laboratory in the Immunology Department, have developed a new single-cell sequencing method that can automatically sequence and analyze the RNA of thousands of single cells at once from living samples of complex tissues.
 
To test the robustness of their new technique, the scientists sampled mouse spleen – an immune system organ whose cells are very well described – to see whether they could reverse engineer and correctly identify its cellular composition. After computationally dividing the cells into groups expressing similar genes, they ended up with seven “types” of cells in total. Some of these groups were comparable to those already described, including B cells, natural killer cells, macrophages, monocytes and dendritic cells. But to their surprise, they discovered new subpopulations of dendritic cells, with diverse functions that have, until now, gone unrecognized.
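“Computationally dividing the cells into groups expressing similar genes” is, at heart, a clustering problem. Below is a deliberately minimal sketch using plain k-means on two synthetic “marker genes” – the data, names and algorithm choice are ours for illustration; the published pipeline is considerably more sophisticated:

```python
import random

def kmeans(points, k, n_iter=20):
    """Bare-bones k-means: repeatedly assign each point (an expression
    profile) to its nearest center, then move each center to the mean
    of its cluster."""
    # Naive initialization: spread the starting centers across the data.
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Synthetic data: two "cell types" that differ in two marker genes.
random.seed(0)
cells = ([(9 + random.random(), 1 + random.random()) for _ in range(20)] +
         [(1 + random.random(), 9 + random.random()) for _ in range(20)])
groups = kmeans(cells, k=2)  # recovers the two groups of 20 cells each
```

With real data, each cell would be a vector of thousands of gene-expression counts rather than two numbers, but the grouping logic is the same.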
 
“The use of traditional methods can be likened to observing a football game from afar,” says Tanay. “All players in a team seem identical, both in terms of outward appearance and behavior, and, for the most part, they all seem to ‘hang around’ in the same spot in the middle of the pitch. Single-cell sequencing, however, allows you to ‘zoom in’ on the action and distinguish the unique position and role of each player within the team – defense, attack, goalkeeper.”
 
In a follow-up experiment intended to simulate an immune response, the scientists exposed mice to a substance that mimics infection in order to test whether their method is able to correctly identify immune cells “in action.”
 
In response to changes in environmental cues, the same “types” of immune cells were identified, but the relative proportion of each changed, as did the genes expressed and the amount of RNA produced within each cell, both within and between cell groups. “Taken together, these findings paint a more complex picture of cell identity, suggesting that ultimately, defining cells according to ‘type’ is somewhat irrelevant and we should be letting individual cells ‘speak for themselves’,” says Kenigsberg.
 
This is just the beginning. Jaitin: “Single-cell RNA sequencing has huge potential for answering previously unanswerable questions in biology, as well as having clinical implications in areas ranging from immune disorders to neurodegeneration, metabolism, stem cells and cancer. For example, we could apply these new tools to discover exactly which tumor cells are resistant to therapy.”

“The experimental protocol and analysis algorithms we developed make single-cell RNA sequencing of thousands of cells efficient, reliable and affordable,” says Keren-Shaul. Indeed, with a throughput about 30 times higher than that of current techniques, the team believes single-cell characterization of complex tissues is poised to reveal far greater detail and bring new insights into multiple fields in biology and medicine.
 
Dr. Ido Amit's research is supported by the M.D. Moross Institute for Cancer Research; the Abramson Family Center for Young Scientists; the Wolfson Family Charitable Trust; the Abisch Frenkel Foundation for the Promotion of Life Sciences; the Leona M. and Harry B. Helmsley Charitable Trust; Sam Revusky, Canada; Drs. Herbert and Esther Hecht, Beverly Hills, CA; the estate of Ernst and Anni Deutsch; and the estate of Irwin Mandel. Dr. Amit is the incumbent of the Alan and Laraine Fischer Career Development Chair.
 
Prof. Steffen Jung's research is supported by the Leir Charitable Foundations; the Leona M. and Harry B. Helmsley Charitable Trust; the Maurice and Vivienne Wohl Biology Endowment; the Adelis Foundation; Lord David Alliance, CBE; the Wolfson Family Charitable Trust; the estate of Olga Klein Astrachan; and the European Research Council.
 
Prof. Amos Tanay's research is supported by the Helen and Martin Kimmel Award for Innovative Investigation; Pascal and Ilana Mantoux, Israel/France; the Wolfson Family Charitable Trust; the Rachel and Shaul Peles Fund for Hormone Research; and the estate of Evelyn Wellner.


 
 
 

 

 

 

 
 

Rearrange as Needed

 
(l-r) Drs. Yishai Levin, Michal Sharon, Maria Füzesi-Levi and Gili Ben-Nisan
 
Anybody who has ever seen how plastic bottles or wastepaper are compressed before being recycled is familiar with the brute-force equipment that crushes these materials in preparation for the recycling process. In contrast, the molecular machinery that prepares proteins for being recycled in living cells is subtle and sophisticated – and, according to a new Weizmann Institute study, much more versatile than previously thought.

The Weizmann scientists, led by Dr. Michal Sharon, have revealed dynamic adjustments that take place in this machinery as it labels proteins intended for recycling. Understanding exactly how the recycling occurs is essential because mistakes in this mechanism are responsible for a host of widespread diseases. In fact, so central is the recycling machinery to the life of the cell that in 2004, the Nobel Prize in Chemistry was awarded to scientists who had clarified the function of ubiquitin, a small molecular tag that marks proteins slated for destruction by this machinery.

In virtually all the cells of our body, proteins that are worn, damaged or simply no longer needed are constantly being dismantled and reassembled for future use. Among the cellular structures that regulate this process is a large molecular complex called the signalosome. Itself consisting of eight proteins, it releases a command that ultimately results in tagging the targeted protein with ubiquitin.

Until now, the signalosome was thought to maintain a fairly permanent structure. But the new Weizmann study has shown that, in fact, this large complex is highly dynamic: Each of its eight subunits can assume different shapes; furthermore, the subunits can rearrange themselves into various compositions. In addition, the entire signalosome can move to different parts of the cell, depending on the task at hand.

Working with living human cells and using several technologies, including advanced mass spectrometry, the researchers clarified how the signalosome operates in a crisis as it responds to DNA damage inflicted by ultraviolet radiation. They found that the signalosome undergoes several adjustments. It rushes from the cytoplasm to the nucleus, where the damage has occurred; the more extensive the damage, the more massive its migration to the nucleus. Moreover, the signalosome’s subunits assume different shapes compared with the complex remaining in the cytoplasm. These findings reveal that the signalosome can adjust itself to match the cell’s changing needs.

The study, reported recently in Molecular and Cellular Biology, was performed by Sharon, and Drs. Maria G. Füzesi-Levi and Gili Ben-Nisan in the Institute’s Biological Chemistry Department, together with Dr. Yishai Levin of the Nancy and Stephen Grand Israel National Center for Personalized Medicine on the Weizmann campus, Dr. Elisabetta Bianchi of the Pasteur Institute in France, and Dr. Houjiang Zhou, Dr. Michael J. Deery and Prof. Kathryn S. Lilley of the University of Cambridge.
 

 

 

The study may in the future shed new light on molecular mechanisms involved in the cell’s repair process following DNA damage; errors in this process may ultimately lead to cancer.

On a more basic level, the new findings may have broad implications for our understanding of how humans and other living organisms function on the molecular level. They suggest that not only the signalosome but also other large cellular machines might be much more dynamic and versatile than currently believed.
 
Dr. Michal Sharon’s research is supported by the European Research Council; the Sergio Lombroso Award for Cancer Research; and Karen Siem, UK. Dr. Sharon is the incumbent of the Elaine Blond Career Development Chair in Perpetuity.
 
 

The Collagen Paradox

 
 
Lital Bentovim and Prof. Elazar Zelzer
 
Without collagen, we would all go to pieces – quite literally. This large molecule, the most abundant protein in our bodies, is the “glue” that holds tissues together. In fact, in our distant evolutionary past, it was collagen that enabled our single-celled ancestors to evolve into multi-celled creatures – by helping individual cells stick together.

During embryonic development, large amounts of collagen – the starting material for bone formation – are produced in the growing bone. Since collagen production is a complex, multi-stage process that requires a great deal of oxygen, one would expect to see a massive supply of oxygen to this developing body part.
 
Yet paradoxically, the growing bone is exceptionally low in oxygen: As an internal organ, it receives less oxygen than external embryonic tissues. In fact, oxygen-carrying blood vessels are actively pushed out of the cartilage that builds up in preparation for bone formation.  
 
Cartilage cells in the growth plate of a developing bone. When the expression of the Hif1a gene is blocked, the number of cells in the growth plate’s low-oxygen zone is reduced and their supporting matrix is diminished (right)
A study conducted at the Weizmann Institute of Science and published in Development sheds light on this paradox. Though it remains unknown why so little oxygen is available during bone formation, the Weizmann findings explain how collagen production can take place in such seemingly unfavorable conditions.

Prof. Elazar Zelzer, Lital Bentovim and then graduate student Dr. Roy Amarilio, all of the Molecular Genetics Department, investigated collagen formation in the growing bones of mouse embryos, focusing on a molecule, HIF1-alpha, which is known to regulate the cellular and physiological responses to low-oxygen conditions in tissues. The scientists discovered that HIF1-alpha also serves as a central regulator of collagen formation, rendering the process highly efficient so as to make the most of the available oxygen. When they blocked the action of this molecule, collagen release was impaired and the bone did not grow properly.
 
Cartilage cells that secrete collagen (red) under low-oxygen conditions, viewed under a fluorescent microscope (left). When the Hif1a gene is knocked out, collagen is not secreted properly (right)
 

 

 
The scientists found that, like a capable manager, HIF1-alpha creates optimal conditions for collagen production. First it increases the release of catalytic enzymes that speed up the process. Then it shuts down other metabolic processes in the tissue, so that the little oxygen available can be channeled in its entirety to the collagen-secreting cells, enabling them to focus exclusively on their main task.

In addition to clarifying the formation in the embryo of one of the body’s central building blocks, this study may contribute to our understanding of disease. It may, for example, help explain how certain cancers develop despite low oxygen levels in the malignant tissue.
 

Attached

Dr. Einat Blitz
 
Bones, muscles and tendons give the body form, keep it stable and enable it to move. But for these functions to be successfully performed, they must be assembled into a single, precisely regulated system – the musculoskeletal system. Another study conducted in Zelzer's lab, reported in Development, has revealed a crucial mechanism responsible for this assembly.

A fundamental step in the assembly process is the development of attachment units between a bone and a tendon – protrusions of various shapes and sizes known in technical language as bone eminences, which grow on the surface of the bone. These protrusions are vital for the proper functioning of the musculoskeletal system: They provide stable anchoring points for muscles that are inserted into the bone via tendons, as well as helping dissipate the stress exerted on the bone by contracting muscles.
 
Developing bone in normal (top) and mutant (bottom) embryos. Protrusions in a developing bone are formed by a distinct class of cells (green) that differ from the regular bone-forming cells (yellow-orange). Incorrect regulation and distribution of these cells leads to irregularities in the shape of the forming bone
In the new study, conducted in mouse embryos, the scientists have discovered that the bone protrusions are formed by a distinct, previously unknown class of cells that differ from the regular bone-forming cells. These protrusion-forming cells have a split personality of sorts: They are controlled simultaneously by two genetic programs – one characteristic of bone, the other of ligaments and tendons. This dual nature is what facilitates the attachment of the tendons, ligaments and muscles to the bone protrusions.

The scientists were able to establish the details of the genetic programs, including the molecular signaling that regulates them, by creating mutant mouse embryos lacking certain genes and tracing the embryonic development. The research was performed by Zelzer and then graduate student Dr. Einat Blitz.
A modular model: two distinct classes of cells – forming cartilage (orange) and attachment units (green) – take part in bone development

 
 
 
By revealing that bones are created in the embryo in a modular fashion, the study might help explain their mechanical properties – for example, the ability of different anatomic regions of bone to cope differently with stress and load, and their contribution to the skeleton’s overall sturdiness and flexibility.
 
Prof. Elazar Zelzer’s research is supported by the Y. Leon Benoziyo Institute for Molecular Medicine; the Jeanne and Joseph Nissim Foundation for Life Sciences Research; the Helen and Martin Kimmel Institute for Stem Cell Research; the Irving and Dorothy Rom Charitable Trust; and the estate of David Levinson.

All Together Now

 
Imagine a construction site submerged under water: Building blocks are floated to a facility where the scaffolding is being put together. That is how the skeleton and other mineral structures are created in the embryos of numerous animals. In sea creatures the building blocks are of minerals derived from sea water; in developing humans and other mammals, the minerals are supplied by the mother’s blood via the food she eats.

 

Live sea urchin embryo grown in sea water labeled with a green fluorescent dye; dye-labeled calcium carbonate granules are observed all over the embryo. (A) Fluorescence and bright-light images superimposed; (B) fluorescence image alone

Starting with fertilization, a Weizmann Institute-led team has traced the initial steps in the construction of the skeleton in live embryos – those of sea urchins. As reported recently in the Proceedings of the National Academy of Sciences, USA, they have gained surprising new insights into this elaborate process. 

 
A scanning electron microscope image showing a vesicle containing calcium carbonate nanospheres, 20 to 30 nanometers across, in a flash-frozen sea urchin embryo
 

 

To accumulate sufficient calcium and other minerals, each sea urchin embryo, which is about the size of a cross section of a human hair, needs to use all the calcium contained in hundreds of times its own volume of seawater. For decades, scientists had thought that the take-up of calcium and carbonate from the water and the actual building of the skeleton were performed exclusively by specialized embryonic cells. But in the new study, when the researchers observed the growth of the sea urchin embryo in sea water containing calcium ions labeled with green fluorescent dye, they were amazed to discover that the entire embryo was soon lit up in green: Evidently, many of its cells had absorbed the calcium ions.

To make sure the finding had not been produced accidentally, the scientists confirmed the wide distribution of the mineral using several advanced technologies: In addition to observing live embryos with a laser light microscope, they investigated frozen embryonic samples with a scanning electron microscope. Moreover, they used an innovative Israeli system – a multi-modal imaging station featuring fluorescence microscopy, elemental mapping and a scanning electron microscope that allows the examination of samples in open air rather than in a vacuum. The study was performed by Prof. Lia Addadi, Prof. Stephen Weiner and graduate student Netta Vidavsky, all of the Weizmann Institute’s Structural Biology Department, together with Weizmann Institute graduate Dr. Sefi Addadi of B-nano Ltd., Dr. Julia Mahamid, a Weizmann graduate currently in Germany, and Dr. Eyal Shimoni of Weizmann’s Chemical Research Support Department, as well as David Ben-Ezra and Prof. Muki Shpigel of the Israel Oceanographic and Limnological Research Institute in Eilat.
 
(l-r) Dr. Eyal Shimoni, Dr. Sefi Addadi, Prof. Lia Addadi, Netta Vidavsky and Prof. Stephen Weiner
 
 
The study also revealed that when calcium carbonate gets into the embryo’s cells, it forms solid granules composed entirely of nanospheres. This texture is characteristic of many amorphous minerals and is known to be an intermediate stage in the formation of the skeletal tissue of the sea urchin, as revealed in studies by Weiner and Addadi more than a decade ago. The granules, in turn, were found to be stored in vesicles, spherical container-like structures.
 
A scanning electron microscope image of a flash-frozen sea urchin embryo, showing a large number of intra-cellular vesicles
 

 

These findings point to new avenues for studying skeleton formation in a wide range of living organisms, including humans. The fact that the entire embryo is mobilized for construction of the skeleton suggests that in humans, contrary to accepted belief, cells other than the specialized osteoblasts might be involved in the construction of bones and teeth. Moreover, the temporary storage of calcium carbonate in vesicles might also occur in organisms other than sea urchins.

Elucidating these mechanisms is of enormous importance for understanding biological mineralization – an understanding that in turn could be crucial for future investigations into various diseases and structural abnormalities affecting bones and teeth.
 
 
Prof. Lia Addadi's research is supported by the Gerhardt M.J. Schmidt Minerva Center on Supramolecular Architecture; the Ilse Katz Institute for Material Sciences and Magnetic Resonance Research; and the Carolito Stiftung. Prof. Lia Addadi is the incumbent of the Dorothy and Patrick Gorman Professorial Chair.
 
Prof. Stephen Weiner's research is supported by the Helen and Martin Kimmel Center for Archaeological Science, which he heads; the J & R Center for Scientific Research; the Maurice and Vivienne Wohl Charitable Foundation; the Exilarch's Foundation; the estate of Hilda Jacoby-Schaerf; the estate of George and Beatrice F. Schwartzman; and the European Research Council. Prof. Weiner is the incumbent of the Dr. Walter and Dr. Trude Borchardt Professorial Chair in Structural Biology.
 
 
 
 

 

 
Life Sciences
English

Writ Large – and Small

English
 
 

When you jot notes on a pad, you move different sets of muscles than when you write in large letters on a blackboard. Yet your handwritten a on the pad is formed with the same unique curves as the a on the blackboard. How does our brain encode the complex motions of writing? Does it employ a single set of instructions for both notepads and boards – or multiple ones? This question has been debated in neurobiology circles for some time, but a team of Weizmann Institute scientists that included both mathematicians and neurobiologists has now provided a surprising answer. Their results appeared recently in Neuron.


One of those scientists is research student Naama Kadmon Harpaz, whose work spans both mathematics and neurobiology. She began her graduate studies in the Neurobiology Department and is now in the Computer Science and Applied Mathematics Department, working in the group of Prof. Tamar Flash. Also participating in the study was neurobiologist Dr. Ilan Dinstein, a former postdoctoral fellow in Prof. Rafi Malach’s lab who is now on the faculty of Ben-Gurion University of the Negev.
 
To conduct their study, Kadmon Harpaz, Dinstein and Flash asked volunteers to trace out three letters, in both large and small versions, on a concealed touch screen. While the volunteers were writing, their brains were being scanned in the Institute’s functional magnetic resonance imaging (fMRI) apparatus to capture the brain activation underlying the physical act. In addition, the team analyzed the movements themselves: the geometrical and temporal features of the writing and its kinematics (paths and speeds).

When the results were combined and analyzed, the findings were conclusive: Certain areas of the brain encode the act of writing both small and large letters in a highly similar way. In mathematical terms, the code is “scale invariant” – that is, the pattern of activation looks very similar for large and for small, fast and slow. A one-size-fits-all code meshes with earlier research, which had suggested that, even though the muscles used are different, the basic kinematics of writing large or small are the same.
 
 
Top: Movement traces from single trials of three representative subjects. Orange and blue are small and large letters, respectively. Bottom: Mean movement traces after performing the Procrustes transformation (small mean is scaled to the size of the large mean) revealing a similarity in the path shape across scales
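The cross-scale shape comparison described in the caption can be sketched with an ordinary Procrustes analysis (here via SciPy). The traces below are synthetic stand-ins for real pen trajectories, and the variable names are ours: the same loop is drawn at two scales with a little noise, mimicking “small” and “large” versions of one letter.

```python
import numpy as np
from scipy.spatial import procrustes

# Synthetic stand-ins for pen trajectories: one loop traced at two scales.
t = np.linspace(0, 2 * np.pi, 200)
shape = np.column_stack([np.sin(t), np.sin(2 * t)])
rng = np.random.default_rng(0)
small = 1.0 * shape + rng.normal(0, 0.02, shape.shape)
large = 5.0 * shape + rng.normal(0, 0.02, shape.shape)

# Procrustes recenters, rescales and rotates both traces to best overlap;
# the residual "disparity" measures shape difference after removing scale.
_, _, disparity = procrustes(small, large)
print(f"shape disparity after removing scale: {disparity:.4f}")
```

A disparity near zero, as here, indicates that the two traces share the same path shape once size is factored out – the sense in which the writing is “scale invariant.”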

 
 
 
 

Simplified control

 
 
(l-r) Prof. Tamar Flash and Naama Kadmon
 

 

The fMRI results highlighted two areas of the brain that exhibited this “scale-invariant” encoding: One, called the anterior intraparietal sulcus (aIPS), is known to be important for such functions as hand-eye coordination and the planning of movement; the other, the primary motor cortex (M1), is thought to be the executor of the hand’s action. In the hierarchy of the brain, aIPS is said to be “upstream” – that is, it deals with more abstract information. Thus one might expect this brain area to encode symbols in a scale-invariant manner. But the scientists were surprised to find that the downstream area, M1 – thought to be a source of purely instantaneous neural commands for the more mechanical aspects of movement that are sent to the spinal cord and from there to the muscles – also encodes the ensuing action with scale invariance.
 
The researchers think that the encompassing scale invariance they identified acts to simplify the “control problem” and to increase the efficiency of neural computation.

Prof. Flash: “Concerning the generation of movement, the brain is thought to function from the top down, from abstract representation to physical output; yet we found this abstract coding in a supposedly downstream area. We think, on the one hand, that the brain is more of a network and less of a strictly top-down hierarchy than previously thought and, on the other hand, that our brain extends its abstract coding patterns to our seemingly concrete actions in the world around us.”
 
 
This experiment, say the researchers, is among the first to use fMRI to investigate motor control in humans. Most fMRI research looks at the brain’s reaction to such input as photos or video clips, while research into output – e.g., motor control – most often involves electrodes recording the activity of single or multiple neurons in non-human primates. By using fMRI to examine motor control in human subjects, says Dinstein, “we were able to see many areas in the brain at once. We could observe what each was doing in relation to the other.” 
 
According to Flash, these findings may have relevance for a number of areas of scientific research. For example, the insight they provide into the workings of the brain could be applied to robotics and bio-robotics to improve efficiency and enable a wide range of complex movements. In addition, they may aid in understanding those movement disorders that originate in the brain, among them Parkinson’s disease. They may also lead to a better understanding of the “motor memory” we use daily without conscious thought. Further questions to be answered include: At what stage does learning to write give rise to scale-invariant activity? How does such motor learning take place in the brain?

Prof. Tamar Flash is the incumbent of the Dr. Hymie Moross Professorial Chair.
 
Life Sciences
English

Synchronized Speeds on the Straightaway

English
 
Imagine a 20-lane highway with no road signs, speed limits or white lines. The result, needless to say, would be chaos, with drivers constantly swerving, speeding up or slowing down to adjust to the changing flow. But a group at the Weizmann Institute recently looked at a similarly chaotic system and found a surprising underlying order that helps determine the movement on a tiny, busy “road.” Their findings, which recently appeared in Nature Physics, may help reveal hidden patterns in many kinds of complex chaotic systems, as well as provide new insight into the properties of flow for those working in the burgeoning field of microfluidics.
 

 

Itamar Shani and Prof. Roy Bar-Ziv
 
In such systems as traffic, long-range interactions become important – that is, the overall behavior is the result of all the individual players interacting with one another. Whether they are powered by car engines, liquid flow or the force of gravity, systems with long-range forces tend to be unpredictable and hard to characterize. Several years ago, Prof. Roy Bar-Ziv of the Institute’s Materials and Interfaces Department, together with former graduate student Dr. Tsevi Beatus and theoretical physicist Prof. Tsvi Tlusty, developed a setup for observing what happens to particles in a traffic-like system with many elements.

Their trick was to make their system two-dimensional: They began with a very thin conduit – so thin that microscopic drops of water were flattened into “pancakes” by the top and bottom of the channel. Individual drops were carried along in a stream of oil, but as they were held back by the friction created by the top and bottom of the conduit, the drops moved much more slowly than the friction-resistant oil. Having this simplified, two-dimensional system enabled them to observe and measure things that would be impossibly complex in a three-dimensional arrangement.
 

 

 

In the original system water drops “marched” in single file; in the present study, the passage was widened to enable two-dimensional “traffic” patterns. The channel was half a millimeter in width and a few centimeters long; the drops just fit the height of the channel – around 10 microns (one one-hundredth of a millimeter) – so they could move freely in two dimensions between the channel’s side-walls. Research student Itamar Shani in Bar-Ziv’s lab watched through a microscope and on computer recordings as the tiny drops, pushed by the oil, flowed in streams as chaotic as those of the hypothetical unmarked superhighway on a Monday morning. And yet, as the researchers discovered, there was an underlying pattern – one that revealed a new type of long-range order between particles in a non-equilibrium system. Essentially, their findings show how each of the seemingly independent actors in the scenario is, in fact, influenced by every other actor.
 
The order is in the velocities of the drops. This became apparent when the team used sophisticated software they had developed to instantaneously capture the velocities of many drops at once. Mapping them out and color-coding them according to relative speed – faster than average or slower than average – revealed an arrangement in which groups of drops were coordinating their velocities in a way that was unusually persistent.
 
 
An ensemble of droplets. Lines drawn from the center of each droplet are proportional to its velocity relative to the mean. Top: Red - fast droplets. Blue - slow droplets. Bottom: Yellow – upward moving. Purple – downward moving. The rectangular frames highlight the angles along which the colors are typically uniform or mixed, corresponding to positive and negative correlations
 
 
 
 
 
 

 

Correlations of droplet velocities (projection of velocities along x-direction) plotted against a pair of droplets' spatial separation in the x and y direction. Red stands for positive values signifying joint motion; blue stands for negative values signifying opposing motion
The drops’ collective action begins with the friction differential: If the oil were to flow alone, only the friction with the channel’s top, bottom and sides would affect its velocity. But when moving oil meets the friction-bound drops, some of the forward momentum transfers to the drops. Each time this happens, a bump appears in the velocity field in the channel; each distortion in the field then affects the movement of every other drop, to a greater or lesser extent.

To understand the phenomenon further, Bar-Ziv and Shani, along with Beatus* and Tlusty,* looked at how pairs of drops affect each other’s velocities. Taking large numbers of drop pairs, they looked for correlations between the measurements of their spatial separation and their relative velocities. Indeed, the resulting graph showed neat correlations – both positive and negative – depending on the drops’ relative positions. Positive and negative, in this case, meant how closely the drops moved in tandem; so, for example, positive correlations – pairs showing joint motion – were found between droplets lined up either perpendicular or parallel to the flow. The perpendicular drops were faster, while parallel ones were slower. Negative correlations – pairs with other types of motion, including those moving toward or away from each other, or rotating around each other – tended to be farther off the axes, along the diagonals. 
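A minimal numerical sketch of this kind of pair-correlation analysis, using synthetic, uncorrelated droplet data (so the correlations here come out near zero, unlike in the experiment). Positions, velocities and bin edges are illustrative, and for simplicity pairs are binned by separation distance, whereas the study resolved the separation’s x and y components.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the measured data: droplet positions in the channel
# plane and x-velocity fluctuations (velocity minus the ensemble mean).
n = 400
pos = rng.uniform(0, 100, size=(n, 2))   # (x, y) positions, arbitrary units
dvx = rng.normal(0, 1, size=n)           # x-velocity fluctuations

# Correlate the velocity fluctuations of every droplet pair and bin the
# products by the pair's separation distance.
i, j = np.triu_indices(n, k=1)
sep = np.linalg.norm(pos[i] - pos[j], axis=1)
prod = dvx[i] * dvx[j]

edges = np.linspace(0, 50, 11)
which = np.digitize(sep, edges)
corr = np.array([prod[which == b].mean() for b in range(1, len(edges))])
print(corr)  # near zero for uncorrelated data; structured in the experiment
```

Applied to real trajectories, the same bookkeeping yields the positive and negative correlation lobes the researchers mapped.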

 
Prof. Tsvi Tlusty
 
The team then turned to an elegant model based on previous work by the group, in which the water drops create patterns of flow similar to the invisible lines of force around everyday magnets: dipoles. That is, the distorted flow diverges from and converges toward two opposite points on each drop. Like the two poles of a magnet, these points in the current around each drop have push or pull effects on other drops in the stream. The team again looked at pairs – this time theoretically – and calculated the mutual effects of only two dipole drops positioned at different angles to each other.
 
 

This “two-body” solution fit some of the observed correlations, but not all – in particular, not the negative ones: The mutual interaction in the model fell off too quickly with distance. The scientists then realized that the velocities were correlated because the drops “felt” similar surroundings – through their interactions with all the other droplets – rather than through simple causation due to mutual interaction. Rather than try to work out the impact of every drop on every other drop, they added to the two-body model a third body representing all the other drops – another dipole drop that interfered, from various directions, with the relations between the pairs. Now, with a mere three bodies, the team’s theoretical model could describe the new long-range order that emerged from the chaotic flow from so-called “first principles” – using simple mathematics and the positions of just three drops.
Velocity fluctuations of two test droplets typically have parallel or opposite directions due to their interactions with a third droplet (green), depending on the angle between the pair
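The geometry in the caption can be illustrated with a toy calculation. Assuming the standard two-dimensional potential-dipole form for a droplet’s flow perturbation (a field that decays as 1/r²; the function and parameter names are ours), a single “mediator” droplet induces velocity fluctuations on a pair of test droplets that are parallel or anti-parallel depending on the angles involved.

```python
import numpy as np

def dipole_velocity(r_vec, p):
    """Flow perturbation of a droplet at displacement r_vec, modeled as a
    2D potential dipole with dipole vector p; the field decays as 1/|r|**2."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (2 * np.dot(p, r_hat) * r_hat - p) / r**2

p = np.array([1.0, 0.0])  # mediator's dipole, aligned with the oil flow (x)

# Three test positions relative to a mediator droplet at the origin:
downstream = np.array([1.5, 0.0])
upstream   = np.array([-1.5, 0.0])
sideways   = np.array([0.0, 1.5])

v_down = dipole_velocity(downstream, p)
v_up   = dipole_velocity(upstream, p)
v_side = dipole_velocity(sideways, p)

# A pair fore and aft of the mediator is pushed in the same direction
# (a positive velocity correlation); a fore/side pair in opposite directions.
print(np.dot(v_down, v_up) > 0)    # True
print(np.dot(v_down, v_side) < 0)  # True
```

In the experiment the sign of the measured correlation reflects an average over the mediator’s possible positions, but the sketch captures the essential mechanism: a shared third body is what correlates the pair.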
 

 

 
 
These findings may be relevant to any number of chaotic systems with long-range interactions. They may be especially useful to those designing microsystems based on the flow of particles in various fluids.

Bar-Ziv: “This research is unique in the study of many-body systems with long-range forces, in that it presents a neat solution, with mathematical simplicity.” Questions the team intends to ask in future research include: What happens when changes are introduced into the system? And can the underlying principle they discovered be applied to mapping chaos and turbulence?

*Dr. Tsevi Beatus is currently at Cornell University, Ithaca, New York; Prof. Tsvi Tlusty is at the Institute for Advanced Study, Princeton, New Jersey.  

 

Prof. Roy Bar-Ziv’s research is supported by the Yeda-Sela Center for Basic Research.


 
 

 

 

 

 

 
 


 

Chemistry
English

Killer Virus to the Rescue

English
HIV and cell membrane. Image: ThinkStock
 
When attacked by bacteria and viruses, the body’s “army” – the immune system – is called into action to fight off and destroy the harmful invaders and thus prevent disease and infection. But viruses, among them HIV, have evolved numerous and varied strategies to evade the immune response and infect cells. Weizmann Institute scientists have recently uncovered a new “weapon” in HIV’s arsenal that specifically targets the activity of the immune system’s “elite force” – certain white blood cells called T cells. These findings have enabled the scientists to imitate the virus’s manipulation of the immune system – an approach that has the potential to reduce the severity of such autoimmune diseases as multiple sclerosis, which arise from the “friendly fire” of T cells when they mistakenly attack healthy cells in the body.

 

 
The first step in HIV infection is to enter the T cells. To do this, the virus joins its outer membrane to that of a T cell. This is achieved with the help of a short molecular sequence found on a certain protein located within the HIV membrane. The point at which the two membranes fuse is found in the vicinity of T cell receptors (TCRs) – molecules on the surface of T cells that are usually responsible for recognizing harmful invaders and helping the T cells mount an effective response. Such a response includes T cell proliferation and the release of pro-inflammatory substances that kill the intruders.
Prof. Yechiel Shai
 
Prof. Yechiel Shai, former research student Avraham Ashkenazi and Omri Faingold of the Biological Chemistry Department found that the fusing sequence is conserved in this protein across different HIV strains, leading them to believe it must have additional, biologically important roles. Together with Prof. Avraham Ben-Nun and his research associate Dr. Nathali Kaushansky in the Immunology Department, they discovered, as reported in Blood, that this molecular pattern does indeed provide the virus with an added function: The sequence interacts directly with the TCRs, interfering with the TCR complex assembly. As a result, the activation of T cells is inhibited, preventing them from mounting an immune response.
 
 
Because multiple sclerosis is a T cell-mediated autoimmune disease, the scientists thought this action might prevent the harmful immune response in the disease. They worked with mouse models of this disease to gauge whether an isolated version of the sequence in the form of a peptide – a small piece of a protein sequence – would have the same inhibitory effect on T cells. They found that upon administration of the peptide, T cell activation was indeed suppressed and the severity of the disease was reduced. “As to the inhibitory effects on T cells, the peptide can do what the virus does, but without the virus,” says Ben-Nun.
Prof. Avi Ben-Nun
 
In a follow-up study, reported in the Journal of Biological Chemistry, the scientists engineered a more stable form of the peptide based on the original HIV sequence. Not only has this allowed them to further understand the unique molecular mechanisms of HIV infection, but it turns out that this so-called killer may also help save lives: The engineered peptide, which is based on the virus’s sequence, could potentially be used as a tool to manipulate the immune system and shut down T cell activation, thereby suppressing the development of various T cell-mediated autoimmune diseases.

“And because it specifically targets T cells, unlike existing immunosuppressive drugs that affect all types of white blood cells, it is likely to mount a more effective response with fewer unwanted side effects,” says Shai.
 
Prof. Avraham Ben-Nun’s research is supported by the Jeanne and Joseph Nissim Foundation for Life Sciences Research; the Croscill Home Fashions Charitable Trust; Ellie Adiel, New York, NY; Maria Halphen, France; and the estate of Fannie Sherr. Prof. Ben-Nun is the incumbent of the Eugene and Marcia Applebaum Professorial Chair.
 
Prof. Yechiel Shai’s research is supported by the Nella and Leon Benoziyo Center for Neurological Diseases; the Yeda-Sela Center for Basic Research; the Carolito Stiftung; the Helmsley Charitable Trust; and Mario Fleck, Brazil. Prof. Shai is the incumbent of the Harold S. and Harriet B. Brady Professorial Chair in Cancer Research.
 
 

 

 
 
 
Life Sciences
English

Molecules Tip Their Hands

English
 (l-r) Profs. Zeev Vager and Dirk Schwalm and Dr. Oded Heber
     

 

 
 
To understand the new finding arising from a recent collaboration between scientists at the Weizmann Institute and researchers in Germany, one must go back in time. First, one needs to go back to 1847, the year Louis Pasteur discovered that tartaric acid crystals come in two different forms – mirror images that, like hands, appear in “left” and “right” versions that cannot be superimposed on one another. Those molecules – known as chiral molecules – presaged a headache for drug manufacturers and the chemical industry in general, who soon found that the right- and left-handed forms could react very differently, especially in drug compounds. While researchers working in the field of stereochemistry, which deals specifically with the chemistry of chiral molecules, have made some progress in purifying compounds to contain just one form or the other, separating the two remains a problem in many cases.

Next, one needs to look back some 30 years, to the work being done in a Weizmann Institute lab. That’s when Prof. Zeev Vager of the then Nuclear Physics Department, together with the Institute’s Prof. Ron Naaman and researchers from the Argonne National Laboratory, Illinois, developed an innovative method of imaging molecules. Called Coulomb explosion imaging, it consists of accelerating molecular ions to a significant fraction of the velocity of light and shooting them onto a very thin sheet of carbon. As the molecules speed through this foil, their electrons are stripped away while the positively charged nuclei spread out in formation as they race toward a detector. The detector not only records the landing points of the nuclei but also times their touchdown, presenting a magnified, accurate, three-dimensional image of the molecule.
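In outline, the reconstruction is simple bookkeeping: each nucleus’s transverse explosion velocity follows from where it lands on the detector, and its longitudinal velocity from when it lands relative to the nominal flight time. A toy sketch of that conversion, with purely illustrative numbers and names of our own (real setups involve many further corrections):

```python
import numpy as np

v0 = 3.0e6      # beam velocity of the molecular ion, m/s (illustrative)
L = 3.0         # flight distance from foil to detector, m (illustrative)
T = L / v0      # nominal time of flight

# One detector hit per nucleus: landing point (x, y) relative to the
# center of mass, and arrival-time offset dt from the nominal time T.
hits = np.array([
    # x (m),    y (m),    dt (s)
    [ 1.0e-2,   0.0,      0.0    ],
    [-1.0e-2,   0.0,      0.0    ],
    [ 0.0,      0.5e-2,  -2.0e-9 ],
])

# Transverse explosion velocities from landing offsets; longitudinal ones
# from arrival-time offsets (an early arrival, dt < 0, means forward velocity).
vx = hits[:, 0] / T
vy = hits[:, 1] / T
vz = -hits[:, 2] * v0**2 / L

velocities = np.column_stack([vx, vy, vz])  # 3D explosion velocity per nucleus
print(velocities)
```

Scaling these velocities back gives the magnified three-dimensional arrangement of the nuclei at the instant of the explosion – including, for a chiral molecule, its handedness.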

Vager continued to work on the Coulomb explosion technique for many years; he and Prof. Daniel Zajfman introduced it to the Max Planck Institute for Nuclear Physics in Heidelberg, Germany, while Vager was on a sabbatical in the mid-1990s. While the method proved most effective for imaging very small molecules, Vager at some point thought it could also be used for identifying left- and right-handed chiral molecules, even though they tend to be larger and more complex. As he was searching for a feasible chiral species, another Weizmann Institute scientist, the late Prof. Emanuel Gil-Av, whose research dealt with various aspects of chirality, suggested a molecule called oxirane as a test object. Vager discussed this proposal with Prof. Volker Schurig (a former postdoc of Gil-Av’s) and with one of Schurig’s students, Oliver Trapp, at the Institute for Organic Chemistry of the University of Tübingen. Though the molecule was borderline as far as the Coulomb explosion technique was concerned, a more serious obstacle was the difficulty in producing the relatively large amounts of chiral oxirane (up to a gram) needed for these experiments. Thus the intriguing prospect of being the first to determine the spatial structure of a chiral molecule in its gas phase had to be put on ice.  
 
Coulomb explosion method
 
 
Fast forward to this past year: A group of physicists and chemists from the Weizmann Institute and Germany finally managed to systematically overcome the various hurdles to imaging chiral oxirane molecules with the Coulomb explosion method. Oliver Trapp, now a professor at the Institute of Organic Chemistry at Heidelberg University, together with his PhD student Kersten Zawatzky and his former mentor Schurig, devised and succeeded in carrying out a sophisticated, multi-step process to synthesize grams of isotope-labeled oxirane with a well-defined chirality. Dr. Holger Kreckel of the Max Planck Institute for Nuclear Physics and his PhD student Philipp Herwig, together with his institute colleagues Prof. Dr. Dirk Schwalm (see below) and Prof. Dr. Andreas Wolf, as well as Vager and Dr. Oded Heber of the Weizmann Institute, rejuvenated and modified the Coulomb explosion imaging setup at the Max Planck Institute to cope with the special requirements imposed on the detection system by the atomic break-up products of oxirane. After several test runs with non-chiral oxirane molecules, the team could finally perform the imaging measurements with chiral oxirane; its handedness was kept secret by the chemists until the final analyses of the experiment had been performed by the physicists. “Fortunately,” jokes Vager, “we got the right answer.”
 
The results appeared recently in Science. This was the first time that the spatial structure of isolated chiral molecules could be directly imaged. The method could prove highly useful to researchers and drug developers who need to unambiguously determine the handedness of their compounds.
 

Back to his roots

 
 
Prof. Dirk Schwalm
 
“I agreed to come to the Weizmann Institute for a year,” says Prof. Dirk Schwalm, “and I have already been here for seven.” Schwalm is a former director of the Max Planck Institute for Nuclear Physics in Heidelberg, Germany, where he worked on the ion storage ring – an evacuated tube about 56 m in circumference used for physics experiments. Among other things, this ring enables scientists to mimic atomic and molecular processes that occur in the emptiest stretches of outer space. That facility is what drew the Weizmann Institute’s Prof. Daniel Zajfman, a physicist who researches these processes, to frequently visit the Max Planck Institute.
 
In 2005, when Schwalm retired, Zajfman took over the directorship, spending part of his time in Heidelberg and part in his Weizmann Institute lab. But the next year, when Zajfman was appointed President of the Weizmann Institute, he resigned his position at the Max Planck Institute. He did not, however, give up his friendship with Schwalm and, indeed, asked Schwalm to join the Weizmann Institute as a visiting scientist. Schwalm now spends more than half of each year helping lead Zajfman’s research group, enabling Zajfman to invest more time and energy in his presidential duties.
 
Schwalm says that he is very happy to be at the Weizmann Institute. He especially enjoys the fact that the experimental tools and the research groups are comparatively small, which enables him to get involved again in hands-on physics. “I have the chance to know every screw in the equipment,” he says. “This is a fantastic opportunity to go back to my ‘physics roots.’”
 
Space & Physics
English
