
This static current is obviously undesirable, but it does not exist for true CMOS logic. This is one of the main reasons for the popularity of CMOS in low power applications.

Another situation that can lead to static power dissipation in CMOS is when a degraded voltage level (e.g., the “high” output level of an nMOS pass transistor) is applied to the inputs of a CMOS gate. A degraded voltage level may leave both the nMOS and pMOS transistors in a conducting state, leading to a continuous flow of short-circuit current. This again is undesirable, and care should be taken to avoid it in practice.

Reducing Power Consumption

The capacitive switching power (P_switching), as given by the formula in Eq. (14.1), is the dominant source of power consumption in CMOS circuits today. Research and design efforts aimed at low power are therefore largely focused on reducing P_switching. The parameters in its formula, V_dd, f, C, and N, provide the avenues for power reduction. The idea is either to reduce each of the parameters individually without adversely impacting the others, or to trade them off against each other.
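
As a rough illustration of how these parameters interact, the sketch below evaluates the switching-power relation, assuming Eq. (14.1) has the common form P_switching = (1/2) N C V_dd^2 f, with N the average number of transitions per node per clock cycle; the numeric values are placeholders, not data from this chapter.

# Sketch: evaluating the switching-power relation, assuming Eq. (14.1)
# has the common form P_switching = 0.5 * N * C * Vdd**2 * f.
# All numeric values below are illustrative placeholders.

def switching_power(n_transitions, c_farads, vdd_volts, f_hertz):
    """Average capacitive switching power in watts."""
    return 0.5 * n_transitions * c_farads * vdd_volts ** 2 * f_hertz

# Example: 1 nF of total switched capacitance, 3.3 V supply, 100 MHz clock,
# and an average of 0.4 transitions per node per cycle.
p = switching_power(n_transitions=0.4, c_farads=1e-9, vdd_volts=3.3, f_hertz=100e6)
print(f"P_switching = {p:.3f} W")  # about 0.218 W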

It should be noted that f is also a measure of the performance (speed) of a system. Therefore, reducing power simply by reducing f is an option only if it is acceptable to trade off speed for power.

Power is proportional to the square of V_dd, which makes reducing V_dd the most effective way to reduce power. This has motivated the acceptance of 3.3 V as the standard supply voltage, down from 5 V. The downward trend in V_dd continues, with processors with internal supply voltages of 1.5 V and lower already shipping.
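
A quick calculation makes the quadratic advantage concrete; the figures below simply apply the (V_new / V_old)^2 ratio, holding C, f, and switching activity fixed.

# Sketch: power ratio from the quadratic dependence on Vdd alone,
# holding C, f, and switching activity fixed.
ratio_3v3 = (3.3 / 5.0) ** 2
ratio_1v5 = (1.5 / 5.0) ** 2
print(f"5 V -> 3.3 V: power scales by {ratio_3v3:.2f}")  # about 0.44
print(f"5 V -> 1.5 V: power scales by {ratio_1v5:.2f}")  # about 0.09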

The problem with reducing V_dd is that it leads to an increase in circuit delay; to a first order of approximation, circuit delay is inversely proportional to V_dd. The increased delay can be overcome if device dimensions are also scaled down along with V_dd. In particular, in constant-field scaling, V_dd and the horizontal and vertical dimensions of the devices are scaled down by the same factor k, in order to maintain constant electric fields in the devices. To the first order of approximation, the power consumption then scales down by k^2 and the delays scale down by k. Reducing device dimensions (feature size reduction) is, however, a very costly proposition, requiring changes in fabrication technology and semiconductor processes.
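
These first-order constant-field scaling relations can be summarized in a short sketch; the scale factor chosen below is arbitrary.

# Sketch: first-order constant-field scaling by a factor k (k > 1).
# Dimensions, oxide thickness, and Vdd are divided by k; to first order,
# drain current and capacitance each scale as 1/k, so delay (~ C*V/I)
# scales as 1/k and power per gate (~ I*V) scales as 1/k**2.
def constant_field_scaling(power_watts, delay_seconds, k):
    return power_watts / k ** 2, delay_seconds / k

p_scaled, d_scaled = constant_field_scaling(power_watts=1.0, delay_seconds=1.0, k=1.4)
print(f"power x{p_scaled:.2f}, delay x{d_scaled:.2f}")  # about x0.51 and x0.71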

The other problem is that circuit delay rises rapidly as V_dd approaches the threshold voltage V_t. As a general rule, V_dd should be larger than 4V_t if speed is not to suffer excessively. V_t does not scale easily and, therefore, reducing V_dd much below 4V_t will be difficult. The speed degradation (increase in delay) may be reduced by circuit technologies that allow a lower V_t; however, a decrease in V_t leads to a significant increase in subthreshold leakage current: every 0.1 V reduction in V_t raises the subthreshold leakage current by roughly a factor of 10.
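
The factor-of-10-per-0.1-V rule corresponds to a subthreshold slope of about 100 mV per decade; the sketch below applies that rule to a couple of hypothetical V_t reductions.

# Sketch: subthreshold leakage increase from lowering Vt, using the
# rule of thumb above (x10 per 0.1 V, i.e., a 100 mV/decade slope).
def leakage_increase(delta_vt_volts, mv_per_decade=100.0):
    return 10 ** (delta_vt_volts * 1000.0 / mv_per_decade)

print(f"Vt lowered by 0.1 V: leakage x{leakage_increase(0.1):.0f}")  # x10
print(f"Vt lowered by 0.2 V: leakage x{leakage_increase(0.2):.0f}")  # x100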

A tradeoff is involved in choosing a very low V_dd. Some design options in this regard are either to vary V_t dynamically (lower V_t when the circuit is active, higher when it is not) or to use different V_t values for sub-circuits with different speed requirements. The speed degradation may also be compensated for by increasing the amount of parallelism in the system. This works well for applications such as digital signal processing, where throughput is used as the performance metric.
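
To see why parallelism can buy back the lost speed in throughput-oriented designs, the sketch below doubles the number of processing units so that each can run at half the clock rate and, consequently, at a reduced supply voltage, while total throughput stays constant; the particular voltages, the capacitance overhead, and the simple P ~ C V_dd^2 f model are illustrative assumptions rather than figures from the chapter.

# Sketch: trading parallelism for supply voltage at constant throughput.
# Assumption: two parallel units each run at half the original frequency,
# which permits a lower Vdd; switched capacitance doubles (two copies)
# plus a modest overhead for distributing and recombining the data.
# All numbers are illustrative only.
def relative_power(n_units, vdd_volts, f_relative, c_overhead=1.15):
    # P ~ (n_units * c_overhead) * Vdd**2 * f, in arbitrary units
    return n_units * c_overhead * vdd_volts ** 2 * f_relative

p_ref = relative_power(n_units=1, vdd_volts=5.0, f_relative=1.0, c_overhead=1.0)
p_par = relative_power(n_units=2, vdd_volts=2.9, f_relative=0.5)  # same throughput
print(f"parallel/reference power = {p_par / p_ref:.2f}")  # about 0.39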

It is also worth noting that V_dd reduction provides only temporary relief in the course of technological change. Please refer back to Fig. 14.1 and Table 14.1. The processors shown there represent recent data points in a trend that has persisted for over 25 years: increased performance through higher clock frequencies, an increased number of transistors, or both. This, however, directly increases the power consumption. Reductions in supply voltage and feature size do help to offset the effect of increased clock frequency and to reduce power.

For example, the original Intel Pentium processor had a 0.8 µm feature size, a 5 V supply voltage, a 66 MHz clock frequency, and 16 W power consumption. Reducing the feature size to 0.5 µm and the voltage to 3.3 V brought the power consumption down to 10 W, even at 100 MHz. But such reductions are only temporary, since the march toward increased clock frequencies shows no signs of slowing. Thus, the power consumption of the 200 MHz Pentium climbed back to 17 W.
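
As a rough check, the quadratic voltage dependence alone largely accounts for the drop from 16 W to about 10 W despite the higher clock rate; the sketch below applies P ~ V_dd^2 f to these published figures, deliberately ignoring any change in switched capacitance from the feature-size reduction (an assumption on our part).

# Sketch: plausibility check of the Pentium figures against P ~ Vdd**2 * f,
# ignoring any change in switched capacitance (an assumption on our part).
p_at_66mhz = 16.0                                # W at 5 V, 66 MHz
scale = (3.3 / 5.0) ** 2 * (100.0 / 66.0)        # voltage down, frequency up
print(f"predicted power at 3.3 V, 100 MHz: {p_at_66mhz * scale:.1f} W")  # about 10.6 W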

