No End in Sight for Moore’s Law and Its Impact

December 18, 2008 / by Covus
Category: Technology   Year: Beyond   Rating: 5 Hot

 

[Image: Intel roadmap]

Gordon E. Moore, in a landmark 1965 paper, observed that the density of transistors on an integrated circuit doubles roughly every two years, bringing with it increased performance and lower cost. This observation has been a hallmark of computers and information technology for decades. We have exploited the phenomenon to create amazing artifacts and tools, which are just now emerging to solve our exponentially growing problems, and it does not seem to be waning anytime soon.
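To make the exponential concrete, here is a back-of-the-envelope sketch (my own illustration, not from the article; the only assumed data point is the roughly 2,300 transistors of Intel's 1971 4004):

```python
# Doubling every two years compounds dramatically over decades.

def project_transistors(start_count: float, years: float,
                        doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward under Moore's Law."""
    return start_count * 2 ** (years / doubling_period_years)

# Assumed data point: Intel's 4004 (1971) had ~2,300 transistors.
start_year, start_count = 1971, 2_300
for elapsed in (10, 20, 30, 40):
    count = project_transistors(start_count, elapsed)
    print(f"{start_year + elapsed}: ~{count:,.0f} transistors")
# 40 years of doubling every two years is 2**20 -- a million-fold
# increase, which is roughly how chips actually scaled from the 4004
# to the multi-billion-transistor CPUs on Intel's roadmap.
```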

As we move into 2009 with Moore’s Law intact, we are pushing the boundaries of computational power. We’ve already reached the petaflop in processing power, and we have set our sights on the exaflop. While I remain optimistic, Moore’s Law has been in danger of hitting a brick wall for quite a while now. We have had trouble passing the 4 GHz barrier (in the consumer market) because of power consumption and heat, and it is getting increasingly difficult to fabricate transistors below 30 nm. However, the industry has sidestepped some of these barriers and kept Moore’s Law alive with multi-core technology (MCT) and high-k metal gate technology. While MCT has kept performance very high, it is creating some major headaches in the IT field.

There is hope. New research at the UK’s National Physical Laboratory (NPL) suggests that advanced techniques applied to magnetic semiconductors should extend Moore’s Law even longer than previously thought.

NPL Senior Research Scientist Dr Olga Kazakova said: “The solution lies in changing not only the material but also the structure of our transistors. We have worked mainly with germanium nanowires that we have made magnetic. Magnetic semiconductors don't exist in nature, so they have to be artificially engineered. Germanium is closely compatible with silicon, meaning it can easily be used with existing silicon electronics without further redesign. The resulting transistors based on NPL's germanium nanowire technology, which could revolutionize computing and electronic devices, could realistically be 10 years away."

 

As transistors on integrated circuits shrink below 20 nm (nanometers), heating and quantum effects, namely quantum tunneling (leakage), will impede the progress of faster chips. These problems make it nearly impossible for transistors to function correctly at near-atomic scales.

[Image: Intel’s microarchitecture and silicon cadence model]

As you’re probably aware, we are rapidly approaching 20 nm transistor fabrication. Intel is set to release its 32 nm processors next year (2009), with Sandy Bridge following in 2011. At that point we will be heading toward 22 nm and the end of CMOS (complementary metal-oxide-semiconductor) scaling as we know it. CMOS has been the standard for microprocessors, microcontrollers, static RAM, and other integrated circuits. As soon as 2013, Intel will have to use nanotechnology to construct 16 nm chips, the first of their kind. It is too early to speculate on what comes after 16 nm. Below 10 nm, nanotechnology and self-assembly may be the only way to create transistors. That is still at least 10 years away. After that, the future of computing is hazy.

Do not fret: Intel, IBM, and others are using clever new technologies to keep progress moving forward, and the scientists at NPL are not the only ones who see a way to continue Moore’s Law.

 

Multi-Core Madness

Multi-core technology describes processors with two or more cores on a single chip that work in tandem. The cores are threaded together to work as one system, rather than having a single core handle the whole workload. Spreading the work across cores lets the CPU do more at the same time; this is called parallel processing. While multi-core designs have created a way to scale up CPU power, the hard part is writing software that uses the chip to its full capacity (though most companies do make sure their programs run well on Intel chips). Intel is also looking at CPU/GPU combinations to boost performance even further, giving consumers and scientists a much more robust chip than current machines offer.
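To give a feel for what parallel processing means in practice, here is a minimal sketch in Python (my illustration, not Intel's tooling; `heavy_task` is a hypothetical stand-in for real CPU-bound work):

```python
import math
from multiprocessing import Pool

def heavy_task(n: int) -> float:
    """A stand-in for one CPU-bound chunk of a larger workload."""
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 8  # eight independent pieces of work

    # Single-core approach: one core grinds through the whole workload.
    serial_results = [heavy_task(n) for n in chunks]

    # Multi-core approach: the same chunks are spread across four worker
    # processes, roughly what a quad-core chip can run simultaneously.
    with Pool(processes=4) as pool:
        parallel_results = pool.map(heavy_task, chunks)

    assert serial_results == parallel_results  # same answer, computed in tandem
```

The catch mentioned above is visible even in this toy: the speedup only materializes when the workload splits into independent chunks, which is exactly what makes programming for multi-core chips hard.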

 

[Image: Intel’s Nehalem (Core i7) processor]

Intel’s latest chip is Nehalem, sold as the Core i7. It has 4 cores and 8 threads, with models clocked between 2.4 GHz and 3.2 GHz. According to Intel it is 8x as fast as a typical 2005 computer, and on performance benchmarks it has blown away everything before it.

Nehalem is scalable: future versions will have anywhere from 2 to 8 cores with simultaneous multithreading, for 4- to 16-thread capability. Nehalem will deliver 4 times the memory bandwidth of today's highest-performance Intel Xeon processor-based systems. With up to 8 MB of level-3 cache, 731 million transistors, QuickPath interconnects (up to 25.6 GB per second), an integrated memory controller, and optional integrated graphics, Nehalem will eventually scale from notebooks to high-performance servers. Other announced features include support for DDR3-800, 1066, and 1333 memory, SSE4.2 instructions, a 32 KB instruction cache and 32 KB data cache, 256 KB of low-latency L2 instruction and data cache per core, and a new 2-level TLB (translation lookaside buffer) hierarchy. These technical improvements will bring performance gains as well as flexibility for a wide range of eventual products based on the Nehalem architecture.

Also pushing the speed limit is the Tesla C1060. This GPU computing processor promises one teraflop of single-precision and 78 Gflops of double-precision processing power, based on Nvidia’s CUDA computing architecture. According to Nvidia, it is a supercomputer at 1/100th the cost of today’s traditional supercomputing clusters. How did they do it? Each GPU houses 240 scalar processor cores (up to 960 parallel cores across the four GPUs of a Tesla Personal Supercomputer) and supports integer, single-precision, and double-precision floating-point operations.

 

[Image: The GPU-based Tesla Personal Supercomputer, which promises to deliver the power of a traditional supercomputer cluster at 1/100th of the price. It is a platform built on NVIDIA's Tesla C1060 GPU Computing Processor, itself based on NVIDIA's CUDA parallel computing architecture.]

A hardware thread execution manager enables thousands of concurrent threads per GPU; parallel shared memory lets processor cores collaborate on shared data at local-cache speeds; GPU memory access peaks at 102 GB/s of bandwidth per GPU; and the hardware supports IEEE 754 single- and double-precision floating point. It is the latest step in the march toward personal supercomputers (PSCs).
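To give a feel for the data-parallel style that CUDA hardware rewards, here is a conceptual sketch in plain Python/NumPy (a CPU stand-in for illustration only, not NVIDIA's API; `saxpy_kernel` is a hypothetical example kernel): the per-element function a GPU would hand to thousands of threads is expressed as one vectorized operation.

```python
import numpy as np

def saxpy_kernel(a: float, x: float, y: float) -> float:
    """The per-element work one GPU thread would perform: a*x + y."""
    return a * x + y

n = 1_000_000
a = 2.0
x = np.random.rand(n)
y = np.random.rand(n)

# On a Tesla-class GPU, n lightweight threads would each run the kernel
# on one element. NumPy's vectorized form expresses the same
# single-instruction-multiple-data idea on the CPU:
result = a * x + y

# Spot-check one element against the per-thread kernel.
assert np.isclose(result[123], saxpy_kernel(a, x[123], y[123]))
```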

Superconductive Materials

While multi-core technology will keep chips moving forward, bandwidth problems will start to plague these behemoths as they grow even larger with more and more cores.

At the University of Bristol, a breakthrough experiment was performed that might lead to room-temperature superconductors.

Superconductivity is a process by which a pair of electrons travelling in opposite directions and with opposite spins suddenly become attracted to one another. By pairing up, the two electrons lose all their electrical resistance. This superconducting state means that current can flow without the aid of a battery.

Historically, this remarkable state had always been considered a very low-temperature phenomenon, so the origin of the superconductivity peculiar to the very unusual metallic materials termed ‘high-temperature superconductors’ still remains a mystery. Room-temperature physics is also why Intel et al. moved away from designs like NetBurst: it became impossible to produce faster chips without them melting or the power consumption becoming too taxing.

Nigel Hussey and his team used ultra-high pulsed magnetic fields – some of the most powerful in the world – to destroy the superconductivity and follow the form of the electrical resistance down to temperatures close to absolute zero.

They found that as the superconductivity becomes stronger, so does the scattering that causes resistance in the metallic host from which the superconductivity emerges. At some point, however, the interaction that promotes high-temperature superconductivity gets so strong that it ultimately destroys the very electronic states from which the superconducting pairs form. The next step will be to identify just what that interaction is and how it might be possible to get around its self-destructive tendencies.

In doing this experiment, the team was able to reveal information that will help theorists develop a more complete theory of the properties of high-temperature superconductors.

“Indeed,” said Hussey, “if researchers are able to identify what makes these superconductors tick, and what makes the electrons pair up, then materials scientists might be able to create a room-temperature superconductor. This holy grail of superconductivity research holds the promise of loss-free energy transmission; cheap, fast, levitated transport; and a whole host of other revolutionary technological innovations.”

Also this month, researchers at the University of Geneva created the first superconducting transistor, which promises a PC revolution. The team built the transistor at the interface between two crystals, using the lanthanum aluminate side as the source-drain channel and the strontium titanate layer as the gate (Nature, vol 456, p 624). "With no electric field, there is zero resistance between the source and drain as the device is superconducting," says team member Andrea Caviglia. But with an electric field applied to the strontium titanate, the dense electron gas is shifted away from the interface and the lanthanum aluminate stops conducting current. Caviglia said that computers using such transistors would be "much faster than the gigahertz speeds currently available." David Cardwell, a superconductor specialist at the University of Cambridge, thinks the work is an important breakthrough: "This is an exciting effect and has clear potential for a new generation of high-speed transistors." Hopefully, this will lead to a personal supercomputer running faster than 10 GHz with low power consumption.

 

Photon Torpedoes? Silicon Photonics!

Another effort to keep computing humming along is silicon photonics. By manipulating photons in a similar fashion to fiber optics, this new technology hopes to address the bandwidth limits of standard CMOS technologies.

[Image: A silicon photonic motherboard]

Silicon photonics is an emerging technology that uses standard silicon to send and receive optical information among computers and other electronic devices. It aims to address the future bandwidth needs of data-intensive applications such as remote medicine and lifelike 3-D virtual worlds. With computers gaining more and more cores, ultra-fast data transfer will be essential, and silicon photonics could deliver higher-speed mainstream computing at a lower cost.

This advance builds upon previous Intel breakthroughs such as fast silicon modulators and hybrid silicon lasers. Combined, these technologies could lead to entirely new kinds of digital machines capable of far greater performance than today's. Using light, just as fiber-optic communication does, could make today's electrical CMOS interconnects look the way ENIAC's vacuum tubes do to us now.

Quoted from Intel’s website:

In order to "siliconize" photonics, there are six main areas or building blocks for investigation. These include generating the light, selectively guiding and transporting it within the silicon, encoding light, detecting light, packaging the devices and finally, intelligently controlling all of these photonic functions. Intel is working to address these areas, and this research has produced a few recent success stories, including the first continuous-wave silicon laser and the first gigabit speed silicon modulator.

[Image: Silicon photonics roadmap. 2D is so 1999; 3D is 2019.]

However, there is no telling when silicon photonics will become practical. Until then, we will have to find novel ways to stretch the longevity of existing technologies, and 3-D chips are probably that novel solution.

 

Last year, IBM began pioneering 3-D chips to address this problem. The technology, called "through-silicon vias" (TSVs), allows different chip components to be packaged much closer together for faster, smaller, and lower-power systems.

[Image: A 3-D chip with a cooling solution]

 

The IBM breakthrough enables the move from horizontal 2-D chip layouts to 3-D chip stacking, taking chips and memory devices that traditionally sit side by side on a silicon wafer and stacking them on top of one another. The result is a compact sandwich of components that dramatically reduces the size of the overall chip package and boosts the speed at which data flows among the functions on the chip. As IBM put it, the technique "allows us to move 3-D chips from the 'lab to the fab' across a range of applications."

 

The new IBM method eliminates the long metal wires that connect today's 2-D chips, relying instead on through-silicon vias: vertical connections etched through the silicon wafer and filled with metal. These vias allow multiple chips to be stacked together, so far greater amounts of information can pass between the chips.

 

The technique shortens the distance that information on a chip needs to travel by a factor of 1,000, and allows up to 100 times more channels, or pathways, for that information to flow compared with 2-D chips. Think computer cubes.

 

Memristors? Instant-on PCs? Cool!

R. Stanley Williams of HP Labs, writing in IEEE Spectrum, doesn’t think we should focus only on shrinking electronics anymore. While we will continue down that path, the memristor, first built this year by HP (Hewlett-Packard), promises to increase computing power without increasing the density of transistors on a chip. According to scientists, the memristor is the missing fourth fundamental circuit element. Until now the element existed only in a series of mathematical equations written in 1971 by circuit theorist Leon Chua. Chua knew the circuit element should exist; he even accurately outlined its properties and how it would work. It has been theorized that memristors may lead to instant-on PCs as well as analog computers that process information the way the human brain does. If anything, the impact this will have on the computing industry is understated, which is why there is so much buzz about it.

[Image: Memristor]

R. Stanley Williams, in IEEE Spectrum, gives a perfect analogy for why the memristor will change things dramatically: think of a resistor as a pipe through which water flows. The water is electric charge. The resistor’s obstruction of the flow of charge is comparable to the diameter of the pipe: the narrower the pipe, the greater the resistance.

For the history of circuit design, resistors have had a fixed pipe diameter. But a memristor is a pipe whose diameter changes with the amount and direction of water that flows through it. If water flows through the pipe in one direction, it expands (becoming less resistive). Send the water in the opposite direction and the pipe shrinks (becoming more resistive). Further, the memristor remembers its diameter from when water last went through. Turn off the flow and the diameter of the pipe “freezes” until the water is turned back on. This dynamic behavior will lead to revolutionary new computers and technology that people cannot even dream of yet.
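The pipe analogy translates directly into a toy simulation (a minimal sketch of the analogy itself, not HP's actual device physics; all parameter values are made up for illustration):

```python
class ToyMemristor:
    """Resistance drifts with the charge that has flowed through it,
    and the state is remembered when the current stops."""

    def __init__(self, resistance: float = 100.0,
                 r_min: float = 10.0, r_max: float = 1000.0,
                 sensitivity: float = 50.0):
        self.resistance = resistance    # the current "pipe diameter"
        self.r_min, self.r_max = r_min, r_max
        self.sensitivity = sensitivity  # how quickly the pipe reshapes

    def apply_current(self, current: float, dt: float = 1.0) -> None:
        """Flow in one direction widens the pipe (lower resistance);
        the reverse narrows it. Zero current leaves the state frozen."""
        self.resistance -= self.sensitivity * current * dt
        self.resistance = max(self.r_min, min(self.r_max, self.resistance))

m = ToyMemristor()
m.apply_current(+1.0)  # 100 -> 50: the pipe widens
m.apply_current(+1.0)  # 50 -> 10: clamped at the minimum
m.apply_current(0.0)   # flow off: resistance stays frozen at 10
m.apply_current(-1.0)  # reverse flow: 10 -> 60, the pipe narrows again
print(m.resistance)    # 60.0 -- the device "remembered" its history
```

That last behavior, state surviving with no power applied, is the nonvolatile memory effect behind the instant-on PC speculation.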

Tell us something we don’t know!

The most promising side effect of this relentless drive toward greater computing speed is putting it to work for science. We’re all awed by how many FLOPS supercomputers can deliver, but the power is useless unless it is focused on something.

In a WIRED article about Cray’s Jaguar XT5 supercomputer, Thomas Zacharia, who heads computer science at Oak Ridge National Laboratory in Tennessee, said: "This new capability allows you to do fundamentally new physics and tackle new problems, and it will accelerate the transition from basic research to applied technology."

"It's getting to the point where simulation is actually the third branch of science," Seager said. "We say that nature is always the arbiter of truth, but it turns out our ability to observe nature is fundamentally limited.”

As we observe nature better, we are able to solve problems with greater efficiency, and so on. "It's very exciting to be alive today and doing computer science," Seager said. "Now we can do some spectacular things." With the exaflop on the horizon, I can only imagine.

Will Moore’s Law Ever End?

With these promising technological solutions, Moore’s Law should continue for another 20 to 30 years. Alas, no paradigm lasts forever; Gordon Moore himself has said that his law will eventually hit a fundamental wall.

However, breakthroughs in molecular transistors were made this year. British researchers unveiled the world's smallest transistor, which measures one atom thick and ten atoms across. The newly announced transistor is more than three times smaller than the 32-nanometer transistors at the cutting edge of silicon-based electronics. "It's molecular electronics with the standard top-down approach which can be used in any semiconductor factory," said Kostya Novoselov, a researcher at the University of Manchester and a co-author of a new paper on the transistor in the journal Science. The transistor is made of graphene, a material exactly one atom thick that was discovered by Novoselov's research team in 2004.

With the creation of what could be the smallest possible transistor, the long line of technology that extends from the first transistor, created at Bell Labs in 1947, could come to an end.

For all the new transistor's promise, Novoselov noted that it is currently impossible to produce graphene in large quantities. His team can only produce graphene crystals about 100 microns (0.1 millimeters) across, far too small for industrial production at Intel's scale. But the scientist believes that a process for producing graphene wafers is in the foreseeable future.

"Probably this problem will be solved in the next couple of years," he said. 

Even with the few questions that remain, Ralph Merkle, writing in IEEE Spectrum, sees the relentless march of Moore’s Law eventually leading to advanced nanotechnology. “Extrapolating these remarkably regular trends, it seems clear where we're headed: molecular computers with billions upon billions of molecular switches made by the pound. And if we can arrange atoms into molecular computers, why not a whole range of other molecularly precise products?” Merkle says.

[Image: Nanotechnology]

“Nanotechnology will make us healthy and wealthy. In a few decades, this emerging manufacturing technology will let us inexpensively arrange atoms and molecules in most of the ways permitted by physical law. It will let us make supercomputers that fit on the head of a pin and fleets of medical nanorobots smaller than a human cell able to eliminate cancer, infections, clogged arteries, and even old age. People will look back on this era with the same feelings we have toward medieval times—when technology was primitive and almost everyone lived in poverty and died young.”

According to the law of disruption, technology changes exponentially while social systems change only incrementally, so more often than not technology is very disruptive. Nanotechnology would take this to a whole new level.

[Image: The Law of Disruption]

If this does come to pass, then Moore’s Law may end, but we will end up with something greater than anything Moore could ever have predicted.

Comment Thread (5 Responses)

  1. Awesome comprehensive piece. As you demonstrate, there’s no shortage of paradigm candidates for continued acceleration of computation, which means we’re due for at least 20 years of sheer craziness.

    Posted by: Alvis Brigis   December 18, 2008

  2. Advanced nanotechnology will more than likely come to pass, not because we force it to (although we will try), but for the same reason technology has always advanced – we use our former tools to make the next ones. This is the primary cause of acceleration. If you connect the dots backwards, you eventually reach stone tools.

    The thing most people struggle with is connecting the dots forwards. There is a “middle step” which is necessary before we are in a position to build (never mind predict) the tools necessary for genuine molecular assembly. No doubt the computing industry will set the standard in the next ten years, by building the “middle” tools.

    The future after that becomes hazy as you said, simply because even the sharpest futurist can’t think too many steps ahead (a bit like playing chess or snooker). I am still content however in the knowledge that each piece of nanoscale research published in the literature pushes us one step closer to being able to manipulate matter at our leisure.

    This is why Michio Kaku describes us as passing from the age of discovery to one of control (although truly, discovery always continues and that makes it all the more interesting). I won’t even try to guess how the world will change when we have this control. Better to simply watch the many advancing fields you indicated to see.

    Posted by: CptSunbeam   December 19, 2008

  3. @Alvis and CptSunbeam – Thanks guys. It only gets better from here. Next I’m going to do a piece on Quantum Computers.

    Posted by: Covus   December 20, 2008

  4. Sadly (for me), I struggled with the professional jargon in this piece, and I was acutely aware that there was a grand wealth of knowledge just bypassing me like I was at a train crossing. The more I read, the more I had to look up and read :) Then at the end the bit on nanotech and the graph of the law of disruption gleamed clear and concise, and the language was no longer a mystery to me. Regardless, this article rocks, and I intend to reread soon.

    Posted by: Adam Cutsinger   December 20, 2008

  5. you forgot 2 mention spintronics

    Posted by: Jon Loehr   December 26, 2008
