Naturally, this led me to thinking about the problem of leakage in transistors, which in turn led me to think that a blog post about leakage would be a good idea. I’ll try to avoid the worst of the math, but we do need some… Years ago, when I went to school, transistor behavior was described by a “threshold voltage” (Vt) and equations for three regions: cutoff, linear, and saturation. These modes had a lot in common with a water faucet (or even a barrel spigot). Current was zero when the gate-to-source voltage (Vgs) was below Vt (cutoff, like “shut off”), then increased roughly linearly with the drain-to-source voltage (Vds) until Vds exceeded Vgs-Vt (opening the faucet lets more through), at which point the device entered saturation and the current peaked (the transistor is completely on).
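For reference, here is that textbook model as a minimal Python sketch; the threshold voltage and gain factor are illustrative placeholders, not values from any real process:

```python
def drain_current(vgs, vds, vt=0.5, k=1e-3):
    """Textbook long-channel (square-law) MOSFET model.
    vt (threshold voltage) and k (gain factor) are illustrative values."""
    if vgs <= vt:
        return 0.0                        # cutoff: the ideal model says zero current
    vov = vgs - vt                        # overdrive voltage, Vgs - Vt
    if vds < vov:
        # linear (triode) region: the device acts like a voltage-controlled resistor
        return k * (vov * vds - 0.5 * vds ** 2)
    return 0.5 * k * vov ** 2             # saturation: current levels off ("fully open")
```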
Exciting as these equations are, they no longer describe the transistors used in VLSI chips. In particular, the idea that a transistor is ever off has gone by the wayside. Transistors, like wine barrels, always leak.
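That always-leaking behavior is captured by the standard subthreshold current model, in which current falls off exponentially below Vt rather than snapping to zero. Here is a minimal sketch; i0, the ideality factor n, and Vt are made-up placeholder values:

```python
import math

def subthreshold_leakage(vgs, vt=0.5, i0=1e-7, n=1.5, temp_k=300.0):
    """Simplified subthreshold model: below Vt, drain current decays
    exponentially with Vgs instead of dropping to zero.
    All parameter values are illustrative."""
    v_thermal = 8.617e-5 * temp_k        # kT/q in volts (~26 mV at 300 K)
    return i0 * math.exp((vgs - vt) / (n * v_thermal))

print(subthreshold_leakage(0.0))         # nonzero even with the gate "off" (Vgs = 0)
```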
The graph above shows leakage in action. Notice that even when the gate is off (Vgs=0) the transistor is still conducting. There are many different reasons for this, but a key one is that in today’s tiny transistors, the source and drain regions are much closer together than in the past, so charge can move between them even before the gate turns on and creates a conducting channel. (Charge also tunnels through the gate oxide, but that’s a topic for another post.) Again, a picture is helpful:
The red region under the gate is the conducting channel, and it gets bigger with increasing Vgs. In nanometer-scale devices, the source (green) and drain (yellow) regions are close enough together that some conduction happens even without a channel. As you might expect, variations in transistor channel length play a big role in the amount of current flow. In the diagram below, the gate is shorter, bringing the source and drain closer together and increasing the channel current:
Small changes have a big effect. For 45nm generation devices, a 15% decrease in gate length can lead to a 50% increase in leakage, as shown below:
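To get a feel for how steep that sensitivity is, we can fit a one-parameter exponential leakage-vs-length model to that single data point. This is strictly a back-of-the-envelope sketch, not a foundry model:

```python
import math

# Fit leak(L) = exp((L0 - L) / lam) to the quoted data point:
# a 15% shorter gate leaks 1.5x more, so lam = 0.15 * L0 / ln(1.5).
L0 = 45.0                                # nominal gate length in nm (illustrative)
lam = 0.15 * L0 / math.log(1.5)

def leakage_ratio(length_nm):
    """Leakage relative to a nominal-length device."""
    return math.exp((L0 - length_nm) / lam)

print(leakage_ratio(0.85 * L0))          # 1.5: reproduces the 15% -> 50% figure
```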
This kind of variation is quite common, and statistically, this makes calculating leakage for a chip a bit complicated. Here’s why: Suppose your wine cellar consists of 100 bottles of $10 wine. A typical (“median”) bottle and an average (“mean”) bottle each cost $10. Now suppose your new friends, impressed with your correct pronunciation of “Meritage”, are coming over, so you replace 20% of your wine with $20 bottles. Your typical bottle is still $10, but the average bottle is now $12. When someone asks about your collection, which number should you use? This is essentially the problem in leakage modeling for libraries: the typical device model will give a typical number, but a “typical” chip will have enough statistical outliers among its transistors to skew its overall leakage higher. (Note that having some bottles or some transistors with significantly lower values changes the numbers a little, but the mean is still higher than the median.)
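The wine-cellar arithmetic is easy to check directly:

```python
bottles = [10] * 80 + [20] * 20          # replace 20% of 100 $10 bottles with $20 ones
bottles.sort()
median = (bottles[49] + bottles[50]) / 2 # the middle two bottles both cost $10
mean = sum(bottles) / len(bottles)
print(median, mean)                      # 10.0 12.0: the mean exceeds the median
```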
To demonstrate this, I ran a Monte Carlo simulation of within-die variation for 100,000 transistors in a 40nm process (see histogram below) and found that 10% of them were at least 50% leakier than the typical value, 1% were twice as leaky, and the worst one was over 4 times leakier than average. This led to an overall leakage about 8% higher than the “typical” value. (For the statistically inclined, the skewness of the distribution is 1.25 and its mode is 8% less than its median.) The problem exists in the worst-case (fast) corner as well: should the model use the worst-case value for any possible transistor, or the average value for a worst-case chip? Lately, foundries have addressed the worst-case issue explicitly with “global” corners, but the general observation that average chips have above-average leakage still holds.
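I don’t have the original simulation, but a lognormal spread is a common stand-in for within-die leakage variation, and choosing sigma so the mean lands about 8% above the median reproduces the flavor of those numbers (the exact tail fractions depend on the distribution shape):

```python
import numpy as np

rng = np.random.default_rng(42)

# Lognormal stand-in for within-die leakage variation (an assumption, not the
# model used in the post). For a lognormal, mean/median = exp(sigma^2 / 2),
# so we pick sigma to put the mean ~8% above the median.
sigma = np.sqrt(2 * np.log(1.08))
leak = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)  # median normalized to 1

print(leak.mean())                       # ~1.08: the "average" chip leaks above typical
print((leak >= 1.5).mean())              # fraction at least 50% leakier than typical
print((leak >= 2.0).mean())              # fraction at least twice as leaky
print(leak.max())                        # the single worst transistor on the die
```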
For timing, an 8% error in correlation is a big problem. But leakage is different. In addition to on-chip variation, the leakiest chips in a process can have close to an order of magnitude more leakage than the least leaky. Leakage also depends strongly on voltage, temperature (including local on-die temperature variations caused by the tests themselves), oxide degradation, the presence or absence of tiny defects, the mood of the equipment, and so on. All this makes it challenging to reconcile measured leakage on manufactured chips with predicted leakage. Large sample sizes and precise identification of process and test conditions are needed to correlate measurements with predictions, and getting within 8% of the expected value is doing pretty well.
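To give a feel for the temperature term alone: a common rule of thumb from this era is that subthreshold leakage roughly doubles for every 10°C or so of junction temperature rise. The doubling interval below is an assumed round number, not a measured one:

```python
def leakage_at_temp(i_nom, temp_c, temp_nom_c=25.0, doubling_c=10.0):
    """Rule-of-thumb scaling: leakage roughly doubles every ~10 C rise.
    The doubling interval is an assumption, not a foundry-characterized value."""
    return i_nom * 2.0 ** ((temp_c - temp_nom_c) / doubling_c)

print(leakage_at_temp(1.0, 85.0))        # a chip at 85 C leaks ~64x its 25 C value
```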
The good news is that just as winemakers learned to take advantage of evaporation loss to get more control over their wine, library designers can take advantage of leakage variability to build a better product. Deliberately designing gates in a 40nm process to be 25% longer leads to a 50% savings in leakage, no matter how you count it, at a cost of about 15% in performance. This is comparable to the leakage savings that can be obtained with high-Vt transistors, but with significantly better performance. Also, because they use the same implant, the long-channel devices vary across global corners the same way the short ones do, leading to simpler timing closure. Keeping the cell footprint the same in a multi-channel library lets the lower-leakage variants be easily swapped in post-placement.
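As a sanity check, the back-of-the-envelope exponential fit from the 45nm example above is roughly consistent with this trade-off; the nominal gate length cancels out of the ratio, so the same scaling applies at 40nm:

```python
import math

# Same exponential model as before, applied to a gate that is 25% longer
# than nominal: leakage ratio = 1.5 ** (-0.25 / 0.15).
ratio = math.exp(-0.25 * math.log(1.5) / 0.15)
print(1 - ratio)                         # ~0.49: close to the quoted 50% leakage savings
```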
If all of this makes your head spin, don’t worry. All you really need to know is that while leakage is unavoidable and difficult to measure, it can be brought under control with some simple design practices. And now I think I might investigate a nice pinot noir.
Rob Aitken, ARM Fellow, spends his days in the technology trenches with nanometer scale devices and picosecond timing, looking at the circuits that eventually get put together to make smart phones or mildly clever toasters. He is a fan of all aspects of chip design, from transistors on up, and also of the various tools and methods that enable efficient, productive and successful design and manufacturing.