Nano Electronics

01.03.2004

Introduction

 

illustration: Computers that weigh tons

 


The drive for ever-smaller, ever-more-powerful technologies


Computers in the future may weigh no more than 1.5 tons. - Popular Mechanics, 1949
 
For decades, “thinking big” in electronics has meant pursuing ever-smaller goals. Only a short while ago, for instance, the idea of a personal computer, phone and wireless web wrapped into a single handheld gadget would have sounded far-fetched.
 
Further miniaturization attempts, however, are fast approaching hard-to-crack barriers - from the difficulty of cramming increasingly sophisticated circuitry onto limited physical surfaces to steeply escalating production costs.
 
Reluctant to accept an end to miniaturization, Weizmann scientists and their colleagues worldwide are stretching creativity to its limits. They have developed a new approach that builds information out of atoms, which may offer an alternative to silicon chip technologies; they are working to integrate organic molecules into electronic circuitry as memory elements or transistor-like switches; and they are exploring a range of other approaches - all designed to vault over the impending miniaturization deadline.
 


 

Where nature meets electronics

 

illustration: How “micro” can we go?

 


Most proteins, the chemical workhorses of the cell, are roughly 10 nanometers wide; a DNA molecule is about 2.5 nanometers across.
 
Back in 1965, Intel co-founder Gordon Moore noticed that engineers were managing every year to double the number of transistors that could be fit onto the silicon chips used in computers. Now known as Moore's Law, this observation still holds true, though slightly stretched - to roughly every 18 months. Engineers can now cram some 30 million transistors onto a single microchip. However, miniaturization attempts will soon hit a wall, say experts, when transistors become so tiny that designing them turns into a messy, unreliable and highly costly venture.
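The arithmetic behind an 18-month doubling period is easy to sketch. In the snippet below, the 30-million starting figure comes from the text above; the ten-year projection is purely illustrative, not an industry forecast:

```python
# Toy illustration of Moore's Law: transistor counts double roughly
# every 18 months (1.5 years).
def transistors(start_count, years, doubling_period_years=1.5):
    """Project a transistor count forward assuming steady doubling."""
    return start_count * 2 ** (years / doubling_period_years)

# From ~30 million transistors per chip, a decade of 18-month doublings
# multiplies the count roughly a hundredfold.
projected = transistors(30e6, years=10)
print(f"{projected:,.0f}")
```

A decade at this pace takes a 30-million-transistor chip past the three-billion mark, which is why even a modest stretch of the doubling period changes long-range projections dramatically.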
 
A partnership between electronics and nature may turn things around. Organic molecules already include tiny yet remarkably precise functional structures, such as the ribosome, which reads incoming messages encoded in messenger RNA - transcribed from DNA - to build proteins. In microelectronics the aim would be to use organic molecules to enhance transistors, memory elements and other conventional components. Far more than just pushing back the impending miniaturization deadline, this merger may ultimately leave Moore’s Law in the dust.
 

Genes hooked up outside the cell

 
Shaped over evolutionary time, biological cells are the ultimate engineering systems, able to perform the most advanced information processing known while producing a wonderland of over 100,000 proteins. The cell pulls off these feats in a tiny setting that engineers can only dream of. How do its systems work? Might they be harnessed to build superfast computers or advance new biotechnologies?
 
In a step that might help address these questions, a Weizmann scientist has now designed the first synthetic circuit able to process genetic input to produce proteins. The circuit works on the principles of a conventional electrical circuit - that of a flashlight, for instance - but is constructed entirely of genes, proteins and other biological molecules. “Our goal was to determine whether an assembly of these components could be made to operate outside the context of a living cell,” says Dr. Roy Bar-Ziv of the Institute’s Materials and Interfaces Department who performed the work with Prof. Albert Libchaber and Dr. Vincent Noireaux of New York’s Rockefeller University.
 
The circuit inputs are genes “wired up” such that the protein encoded by one gene can either activate or repress the production of neighboring proteins. While other scientists have developed single-gene systems, this is the first time researchers have rigged up a multiple-gene circuit outside of the cell. Though rudimentary, this synthetic circuit offers an isolated and thus highly controllable environment in which to explore the workings of the cell; moreover, it may represent the first step toward streamlined protein production plants or advanced biocomputers. Unlike conventional computer systems, in which information is processed through a rigid digital 0-or-1, yes-or-no framework, biological networks are able to plod toward their goal using the multi-branched routes characteristic of parallel processing. This inherent property, researchers believe, might significantly fast-forward computer processing.
 
But this won’t happen any time soon. The system’s DNA-to-protein reactions can take an hour or more, and there is a delay until enough of the first material is produced to initiate the next stage. When too many stages are added to the sequence, the reactions tend to fizzle out as the available resources are used up.
 
The next step, says Bar-Ziv, is to try to introduce circuitries of this sort into different materials. Once it is possible to create positive and negative feedback systems to turn things on and off, one could potentially design artificial circuits that mimic transistors, sensors, memory elements and clocks. “The gene is hardware and software all rolled up into one,” says Bar-Ziv. “Scientists are busy trying to invent self-replicating nanotechnology, but why not use what already exists?”
 
 

The goal: self-assembling transistors made of DNA

 
Merging a tiny sliver of bacterial DNA with carbon nanotubes and nearly 5 years of brainstorming, Institute scientists have created a self-assembling transistor that can interact with biological molecules.
 
The approach consists of a simple two-step self-assembly process, enabling the simultaneous production of hundreds of transistors, the building blocks of electronics. The transistors can be designed to recognize a wide array of biological compounds - a feature that, in addition to computing and industrial applications, might advance the production of tiny sensors that would perform diagnostic tests in the body. The technique also offers the possibility, for the first time, of modifying the properties of the transistor even after its production is complete.
 
The study, appearing in Chemical Physics Letters, was performed by a multidisciplinary team led by chemist Prof. Ron Naaman, physicist Dr. Dmitry Shvarts, biochemist Dr. Miron Hazani and doctoral students Dana Peled and Victor Sidorov.
 
The circuit consists of a carbon nanotube, a short segment of DNA and a gold-bottomed surface connected to gold electrodes. To achieve the unique property of self-assembly, the design takes advantage of the structure of DNA, which consists of two winding strands linked together by matching base pairs. One strand is connected to the nanotube, whereas its matching segment is bound to the gold surface. When the tube is introduced, its DNA strand naturally binds with the strand connected to the gold surface, creating a closed circuit. The result is a transistor that can be switched on and off by an electric current.
 
Biological molecules, however, are poor conductors of electricity, which is why nanowires such as carbon nanotubes enter the picture. The tubes are a natural candidate, thanks to their excellent electrical conductance and tiny dimensions.
 
Previous biocircuits have been characterized by low production yields and short-lived functionality, due to design flaws that caused rapid destruction of the biological molecules. The newly designed circuit allows for relatively large-scale production, and can operate for months on end. It also conducts an electric current that is 100 times stronger than those of previous devices.
 
To modify the transistor’s properties even after it is produced, the team introduces enzymes that function as a biological editing kit, cutting the DNA strands at certain points along their chains and adding on new sections.
 
“Many obstacles remain to producing marketable self-assembling transistors,” Naaman emphasizes. “But the dream is to create a sophisticated system that can sense, feel and react.”
 
 

Recollections of a pioneer

 
In the late 1940s John Bardeen, William Shockley and Walter Brattain of Bell Laboratories in New Jersey were trying to develop a semiconductor device that would replace some of the vacuum tubes used in telecommunications. The tubes were impractical - they were energy-inefficient, generated heat and burned out rapidly.
 
The team was about to give up when a last attempt, using a purer substance, proved successful. Their discovery - the transistor - soon changed the world, becoming the building block for all modern electronics, including computers.
 
When asked if they had realized the importance of their discovery, for which they earned the 1956 Nobel Prize in Physics, John Bardeen once replied: “No. When we invented the transistor it cost around fifty dollars to make, while the vacuum tube cost only a dollar and a half. We couldn’t imagine it ever offering a competitive alternative.” Today, around 100 million transistors can be produced for less than a dollar.
 
 

Organic molecules to fine-tune electronics

 
One challenge to incorporating biological compounds into electronics has been the difficulty of examining the electrical properties of organic molecules that are chemically connected to semiconductors and metals.
 
The layer of organic molecules used to cover semiconductors contains “pinholes” - small defects that are very difficult to detect but offer an easier route for electrons to pass through, thus radically altering conductance patterns. As a result, scientists taking electric measurements could not tell whether they were tracking the current’s passage through the molecule itself or through these pinholes.
 
Researchers at the Institute’s Materials and Interfaces Department skirted this problem by constructing a single layer of very short organic molecules, which they placed on a semiconductor. The monolayer was so thin that an electrical current directed through the semiconductor generally passed by the monolayer without interacting with it - making it irrelevant whether the electrons were passing through a molecule or a pinhole.
 
The team, including then graduate student Ayelet Vilan and Prof. David Cahen, also found that changing the organic molecules used led to predictable changes in electrical characteristics, meaning they could control the properties of the resulting electronic device. The molecules essentially served as “doormen,” determining the ease with which electrons passed through the device. Applying this understanding, the team (in collaboration with Prof. Abraham Shanzer of the Organic Chemistry Department), is now studying the use of novel organic molecules to fine-tune electronic devices.
 
 

Silicon real estate

 
Researchers at the Weizmann Institute and select teams worldwide have created transistor “switches” that are only a single molecule in size. A functional molecular transistor would represent the ultimate in electronic miniaturization. For perspective: if the conventional transistor were scaled up so that it occupied this page, a molecular transistor would be the dot on this i.
 
Prof. Abraham Shanzer of the Institute’s Organic Chemistry Department has designed such a single-molecule memory switch. The helix-structured organic molecules have upper and lower “pockets,” each of which binds electrically charged iron or copper ions. Using standard chemical reactions, the charge of the metal ion can be raised or lowered, causing it to jump between the two binding sites. These distinct states correspond to the zero or one of the binary code used in digital computers, thus fulfilling the basic requirement of a memory unit.
 
Key to implementing these switches is the ability to activate them, known as the “addressing” problem - which is where Prof. Israel Rubinstein of the Materials and Interfaces Department entered the picture. Rubinstein, together with Dr. Alexander Vaskevich and then doctoral student Gregory Kalyuzhny, demonstrated that the “switch” in these molecules could be activated by electrochemical means. They also showed that the molecules could be arranged as single molecular layers on electrode surfaces. Both characteristics are crucial to creating working memory components.
 
Further development of this concept could lead to digital circuitry containing millions of ON/OFF elements and dramatically more compact electronic devices. Nevertheless, significant challenges remain, including the ability to effectively link up these molecular switches or detect whether a particular molecular switch is ON or OFF.
 

 

Probing the current of modern life

 

illustration: "New spin"

 


What are these particles? Are they atoms, or molecules, or matter in a still finer state of subdivision? - Sir Joseph John Thomson, the discoverer of the electron
 
Miniaturization attempts are largely dependent on a better understanding of the electron. Computer operations in electronic devices are based on breaking down complex challenges into “tasks” - a huge number of yes/no questions, the answers to which are represented as either a zero or a one. These tasks are performed by transistors that, through changes in polarity, open or close to control the flow of electrons.
 
Studies of the electron - including the way it moves through material and its extraordinary capacity to behave as both a particle and a wave - are an important focus at the Institute’s Braun Center for Submicron Research. Here are some examples.
 
 

New spin on electronics

 
In 1897 a group of Cambridge University scientists threw a party to celebrate the discovery of a subatomic particle: “May it never be of use to anybody,” teased a colleague as he toasted its discoverer, Sir Joseph John Thomson. This same particle, later known as the electron, would soon change the course of history.
 
As scientists would discover over time, electrons are particles that carry an electrical charge, creating an electric current - today, the river of life of the industrialized world. They also found that in a quantum setting, in addition to existing as both particles and waves, electrons have a characteristic spin - as if they were tiny spinning balls. In other words, two electrons of identical mass and charge can be different, depending on their spinning direction.
 
This capacity to occupy several states at once, known as superposition, might dramatically fast-forward computations, since in contrast to conventional electronics, where every bit of information has a definite value of 0 or 1, quantum bits, known as qubits, can exist as both 0 and 1 at the same time. Thus, say, eight “conventional” bits can represent any number from 0 to 255, but only one number at a time, whereas eight qubits would be able to simultaneously represent every number from 0 to 255 - vastly improving the speed and power of the computation process.
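The counting argument can be made concrete in a few lines of plain Python. This is only an illustrative sketch of the bookkeeping, not a quantum simulator: a classical register holds one value, while an 8-qubit state is described by 2^8 = 256 amplitudes whose squared magnitudes are probabilities:

```python
# A classical 8-bit register holds exactly one value from 0..255 at a time.
# An 8-qubit register is described by 2**8 = 256 complex amplitudes,
# one per basis state, which can all be nonzero simultaneously.
n = 8
num_states = 2 ** n  # 256

classical_register = 42  # one definite value

# Uniform superposition: every basis state 0..255 with equal amplitude.
amp = (1 / num_states) ** 0.5
quantum_state = [amp] * num_states

# Squared amplitudes are probabilities and must sum to 1.
total_probability = sum(a * a for a in quantum_state)
print(num_states, round(total_probability, 10))  # 256 1.0
```

Measuring such a register still yields just one of the 256 values; the power of quantum algorithms lies in steering the amplitudes before that measurement.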
 
Spintronics, however, has a long way to go. One of the key challenges, explored by Profs. Yuval Gefen and Amir Yacoby of the Institute’s Condensed Matter Physics Department, is the ability to measure the changes taking place in spinning direction. These changes would correspond to the output (i.e., the “answer”) to a requested computation. At present, in a Catch-22-like snarl, the mere act of monitoring a system affects its superposition qualities. The Institute team hopes to determine which measurements can be safely performed without electrons sensing the intervention and losing their superpositional capacity.
 
 

Quantum forecast

 
What makes one material break under pressure, another an excellent conductor of electricity and a third exceptionally transparent? What grants different materials such properties as heat conductivity and magnetism? All these properties are set by the atomic structure of the material - in other words, the number of protons, neutrons and electrons it includes and the way these particles are arranged in the material.
 
This being the case, one would expect scientists to be able to predict the properties of a material from its atomic arrangement. But it turns out that in many cases this is still tricky - which is where Dr. Leeor Kronik, who recently joined the Institute’s Materials and Interfaces Department, enters the picture.
 
To predict a material’s properties based on its components one must first understand the nature of the chemical bonds between its atoms - or, rather, the complex systems of interactions between the electrons that surround the atoms and create the bonds between them. Such an understanding can be obtained through calculations based on the laws of quantum theory, particularly on the application of Schroedinger’s equation. “The problem,” says Kronik, “is that Schroedinger’s equation is very difficult to solve, so in practice, our ability to use it is limited to small molecules.”
 
As often happens in cases in which a solution is elusive, scientists in Kronik’s field frequently opt for a compromise: Rather than trying to understand the entire system of relationships between the components of the electronic “cloud,” they create an approximate picture of the world.
 
Kronik attempts to predict the properties of real materials that may be central to new technologies. One example is that of so-called spintronic materials, which are natural candidates for building future quantum computers.
 
To perform his complex quantum computations, Kronik uses one of the most powerful supercomputers in Israel. Nicknamed “platypus,” after one of the two species of egg-laying mammals, the computer is based on 60 microprocessors working in parallel.
 
“The multidisciplinary research we perform on the new supercomputer, which involves chemistry, physics, mathematics and computer science, reminded us of the versatility of this rare mammal that also lays eggs,” says Kronik.
 
 

In the eye of the beholder

 
A central premise of quantum theory is that by the very act of watching, an observer affects reality. This premise was demonstrated in a highly controlled experiment led by Prof. Moty Heiblum, who heads the Institute’s Submicron Center.
 
Quantum mechanics states that particles can also behave as waves. This can be true for electrons at the submicron level - i.e., at distances measuring less than one micron, or one-thousandth of a millimeter. When behaving as waves, particles can simultaneously pass through several openings in a barrier and then meet again on the other side of the barrier. Strange as it may sound, this “meeting,” resulting in interference, can occur only when no one is “watching.”
 
Once an observer begins to watch the particles, the picture changes dramatically: if a particle is seen going through one opening, one can be certain it will not go through another. In other words, when under observation, electrons are “forced” to behave like particles and not like waves.
 
To demonstrate this phenomenon, the researchers built a tiny device, less than one micron in size, that had a barrier with two openings. They then sent a current of electrons toward the barrier. The “observer” in this experiment wasn't human. It was a tiny but sophisticated electronic detector that can spot passing electrons. Apart from observing the electrons, the detector had no effect on the current; however, the scientists found that its very presence caused changes in the behavior of electrons passing through. Moreover, this effect was dependent on the “amount” of observation: when the detector’s capacity to detect electrons was increased, the wavelike behavior of the electrons diminished, whereas it increased when the detector’s capacity was reduced.
 
 

Less-than-whole electron charges

 
In 1982 Nobel laureate Robert Laughlin proposed that under certain conditions an electrical current behaves as if it were made up of fractions of electronic charges.
 
This theory was first supported experimentally in 1997, when a Weizmann Institute team led by Prof. Moty Heiblum succeeded in measuring electronic charges equal to one-third and one-fifth that of a single electron.
 
The team obtained the results by creating a weak disturbance in the electric current and measuring the level of “electric noise” produced as a result. (The increase in electric noise is proportional to the unit of electrical charge: the smaller the charge, the weaker the noise.) Subsequent experiments conducted by the Institute team revealed that when such “one-third particles,” with one-third the charge of an electron, arrive en masse at a tall barrier, they create partnerships of three (together making a whole electron) and are then able to penetrate the barrier. Current studies seek to explain a recently revealed feature of these sub-electron particles that is even more remarkable. Much to their surprise, the Institute team has found that individual “one-third particles” arriving at the barrier one at a time are nonetheless able to cross over to the other side.
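The noise-based measurement described above rests on a textbook result, Schottky's shot-noise formula, in which the noise spectral density is directly proportional to the carrier charge. The sketch below uses that standard formula with an arbitrary illustrative current; it is not the team's actual analysis:

```python
# Schottky's shot-noise formula: for uncorrelated carriers, the
# current-noise spectral density is S = 2 * q * I, proportional to the
# carrier charge q. Measuring S at a known current I therefore reveals q -
# the principle behind detecting fractional charges.
E_CHARGE = 1.602176634e-19  # electron charge in coulombs

def shot_noise(charge, current):
    """Noise spectral density (A^2/Hz) for carriers of a given charge."""
    return 2 * charge * current

current = 1e-9  # an illustrative 1-nanoampere current
full = shot_noise(E_CHARGE, current)
third = shot_noise(E_CHARGE / 3, current)
print(third / full)  # one-third charges produce one-third the noise
```

The ratio of the two noise levels directly mirrors the ratio of the charges, which is why a weaker-than-expected noise signal points to carriers with a fraction of the electron's charge.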
 
 

Mapping electrons in a semiconductor

 
Prof. Israel Bar-Joseph of the Condensed Matter Physics Department and the Submicron Center developed a novel method for mapping electrons within a semiconductor that may aid the design of future electronic and electrooptic components.
 
When electrons in a semiconductor move to a higher energy level, they leave behind a “hole” - the mark of a missing electron. This hole carries a positive electrical charge - as opposed to the electron’s negative charge - and functions as a nucleus for an atom-like complex, the exciton. When a single electron moves around this hole, the exciton that forms is equivalent to a neutral hydrogen atom; but under certain conditions a second electron may join, creating a charged exciton.
 
The method developed by Bar-Joseph takes advantage of this unique property of excitons to map electron movements within semiconductors. Using a microscope he designed that contains a needle-fine optical fiber probe, Bar-Joseph was able to trace the motion of excitons within the semiconductor. By applying the probe to measure the light emitted by different, infinitesimally small regions on the semiconductor surface, his team determined the strength of the spectral line emitted by the charged exciton in each region. This information was then used to establish the local electron density and thus draw a precise map of the electron distribution throughout the semiconductor.
 
 

When super small affects superconductivity

 
Back in 1911, Dutch physicist Kamerlingh Onnes made a remarkable discovery. Having cooled mercury to the chilling temperature of liquid helium (minus 452 degrees F), he found that it had virtually no electrical resistance. An electric current set in a ring of material exhibiting this property would flow indefinitely.
The concept of superconductivity was born, and with it, over time, the fantasy of superfast trains levitating on magnetic cushions, life-saving medical scanners, long distance power lines, cutting-edge microelectronics and more.
 
Much research remains, however, before superconductivity becomes part of daily life. Scientists, for instance, have yet to discover materials that can superconduct at room temperature - all still check in at a shivering minus 220 degrees F or below, making them suitable for only highly specialized applications.
 
Weizmann Institute scientists recently explored another aspect of superconductivity of critical importance to the quest of electronic miniaturization. Working with tiny lead particles ranging from 4 to 1000 nanometers in size, they found that superconductivity is size-dependent. Lead particles, for instance, lose their superconducting property below a critical cutoff point of around 6 nanometers. The study, published in Physical Review Letters, was performed by Prof. Shimon Reich, Dr. Gregory Leitus and Dr. Ronit Popovitz-Biro of the Institute’s Materials and Interfaces Department, and Dr. Moshe Shechter of the Hebrew University of Jerusalem.
 
“Our results confirm a prediction, first made by Philip W. Anderson back in 1959, that the move to extremely small particles would impair superconductivity,” says Reich. Until the recent study, scientists were unable to create the experimental conditions needed to test this prediction, which required examining a large ensemble of minuscule particles kept isolated from one another during the experiment. The team overcame this obstacle with a method of its own design, which confines the lead particles in tiny, uniformly sized holes in a thin polymeric membrane.
 
 

The goal: room temperature superconductors

 
High-temperature superconductors offer a valuable advantage over classic superconductors, since cooling an object to these relatively high temperatures is simpler and much cheaper. Most are made of ceramic-based materials, but the search is on for novel materials with high-temperature superconductivity.
In previous research, Prof. Reich, working with graduate student Yitzhak Tsabba, found that a surprising non-ceramic material known as tungsten trioxide transforms into a superconductor at the relatively high temperature of minus 182 degrees C (minus 296 degrees F) when surface-doped with sodium.
 

How “micro” can we go?

 
Microelectronic devices are getting smaller every year. Increasing miniaturization will inevitably lead to chemical problems, such as the mixing of atoms between neighboring components. The key question, say scientists, is just how "micro" can electronic devices go?
 
WIS researchers have tackled this question by designing a new mapping method for evaluating the limits of miniaturization. Using this method, called scanning spreading resistance, they succeeded in predicting the minimal possible size of bipolar transistors, one of the basic types of transistors used in microelectronics. The team consisted of doctoral student Shachar Richter, Prof. Yishay Manassen of the Chemical Physics Department, Prof. David Cahen of the Materials and Interfaces Department and Dr. Sidney Cohen of the Chemical Research Support.
 
The team first manufactured a tiny bipolar transistor using the experimental semiconductor copper indium diselenide. With a total width of 50 nanometers - less than one-thousandth the width of a human hair - the device is smaller than today's smallest transistors of this type. They then applied their new mapping method to examine whether electricity would flow in this tiny transistor. Their approach, developed independently by Belgian researchers around the same time, is becoming an important tool for evaluating microelectronic devices and the limits of miniaturization.
 
 

If it ain’t broke - break it

 
When two waves of any kind - sound, light, ocean or even seismic waves from earthquakes - collide, they cause a physical phenomenon known as interference. The ability to control interference is central to various applications, including electronic miniaturization and the vision of quantum computing.
Interference can be constructive, destructive or anything in between. If the waves are in step with each other (have the same “phase”), their crests add together, forming a single, larger wave (constructive interference). If, in contrast, the two waves are out of phase, such that the crest of one coincides with the trough of the other, they cancel each other out (destructive interference).
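The two extremes are easy to verify numerically by adding a pair of unit-amplitude sine waves at different phase offsets (the amplitudes and sampling here are arbitrary illustrative choices):

```python
import math

# Sum two equal-amplitude waves offset by a phase shift: in phase they
# reinforce (constructive interference), half a cycle out of phase they
# cancel (destructive interference).
def superpose(phase_shift, steps=1000):
    """Peak amplitude of the sum of two unit waves over one cycle."""
    return max(
        abs(math.sin(2 * math.pi * t / steps)
            + math.sin(2 * math.pi * t / steps + phase_shift))
        for t in range(steps)
    )

print(round(superpose(0.0), 2))      # in phase: peak ~2.0 (constructive)
print(round(superpose(math.pi), 2))  # half-cycle shift: ~0.0 (destructive)
```

Intermediate phase shifts give peak amplitudes anywhere between these two extremes, which is what makes controlling the phase equivalent to controlling the interference.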
 
Prof. Moty Heiblum of the Institute’s Submicron Center and then doctoral students Amir Yacoby and Eyal Buks set out to better understand wave interference in a conducting material (electric currents consist of waves of electrons). In examining what happens when a wave passes through a quantum dot (a nanosized “box” that temporarily traps individual electrons) the researchers found that the passage causes a change in wave phase that naturally affects interference. They were even able to use this effect to destroy the wave phase.
 
Next Prof. Yoseph Imry, of the Institute’s Department of Condensed Matter Physics, entered the picture. A leading expert in the theory of solid state physics, Imry is now analyzing the implications of the remarkable ability to manipulate quantum phases made possible by Heiblum’s research. He hopes to apply Heiblum’s experimental insights into eliminating wave phases to achieve the opposite - namely, to predict how it might be possible to maintain phases and thus enhance the coherence of electron waves. “Sometimes breaking an object helps us understand how it works,” says Imry. “If we are clever, the knowledge of how to dephase a wave might help us better control and even improve wave coherence.”
 
 

Electrons on the move

The number 3 is big in electronics. Depending on their setting, electrons are able to move in as many as three dimensions in space: left and right, forward and backward, and up and down. And now, a new study by Institute scientists has confirmed a longstanding premise in physics - that at low temperatures, electric conductance can occur only in a three-dimensional setting.
 
The premise was based on the property of interference (see above). Scientists believed that constructive interference (in which electrons are carried far from their starting point - a key feature of conductance) could occur only in a three-dimensional setting over large distances, since by nature this setting has so many potential interference paths, the likelihood of constructive interferences occurring is very high.
 
Prof. Amir Yacoby and Shahal Ilani of the Institute’s Condensed Matter Physics Department and the Submicron Center have now confirmed this longstanding premise - while adding a new twist. It turns out that interference is not the whole story. A key element in determining whether conductance occurs is the formation of tiny “electron islands.”
 
Having designed a system that made it possible, for the first time, to observe how electrons arrange themselves in a two- or three-dimensional setting, the scientists discovered that when electron density is very low, electrons organize into tiny “islands” that are very stable. Electrons added to this system join one of the islands and remain there instead of exiting on the other side - i.e., conductance does not occur. In contrast, when an increasing number of electrons are added to the system, the “islands” expand to such an extent that the electrons spill over, joining separate islands and moving freely around the system (conductance).
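The island picture resembles a percolation transition, and a toy sketch can show the same switch from insulating to conducting as density rises. This is an illustrative model only, not the researchers' actual system: sites on a grid are randomly occupied with probability p, and "conductance" means an unbroken path of occupied sites from one edge to the other:

```python
import random
from collections import deque

# Toy "electron island" sketch: at low occupation density the occupied
# sites form isolated islands; past a threshold they merge into a
# spanning cluster connecting the left edge to the right - the analogue
# of the system switching from insulating to conducting.
def spans(p, size=40, seed=0):
    """True if occupied sites form a left-to-right path across the grid."""
    rng = random.Random(seed)
    grid = [[rng.random() < p for _ in range(size)] for _ in range(size)]
    # Breadth-first search from every occupied site in the left column.
    queue = deque((r, 0) for r in range(size) if grid[r][0])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if c == size - 1:
            return True  # reached the right edge: a "conducting" path
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < size and 0 <= nc < size
                    and grid[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

print(spans(0.3), spans(0.8))  # sparse islands stay isolated; dense ones span
```

In the toy model, as in the experiment, the transition is abrupt: below a critical density no path exists, while above it the islands merge and carriers can cross the whole system.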
 
The new measuring system enables observation of the microscopic changes occurring as an electrical system switches between conducting and insulating states. Such transitions occur millions of times per second in every computer through the famed zero-or-one scheme, in which transistors open or close to control the flow of electrons.
