Digital audio tape (DAT) cartridges were used for low-cost storage. The digital linear tape (DLT) cartridge became the most popular backup tape format. As with 8 mm, DAT, and other tape formats, DLT has undergone a number of revisions to increase tape capacity and improve the data throughput rate. The latest SuperDLT holds 113 GB on one cartridge with a throughput rate of 11 MB/s (about three hours to write one tape). The main consideration is that a backup tape system must keep pace with the increasing storage capacity of disk drives. The other device types listed in Table 4.6 for backup have generally not met the low media cost required for large-scale backup.
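
The three-hour estimate follows directly from the quoted capacity and throughput. The short calculation below (plain Python, not part of the handbook text) reproduces the arithmetic, treating the vendor's GB as 1000 MB.

    # SuperDLT figures quoted above: 113 GB per cartridge at 11 MB/s sustained.
    capacity_mb = 113 * 1000            # vendor GB taken as 10^3 MB
    throughput_mb_s = 11
    seconds = capacity_mb / throughput_mb_s
    print(f"{seconds / 3600:.1f} hours to fill one cartridge")   # prints ~2.9 hours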

4.5 Computer Systems—Small to Large

A typical general-purpose computer system consists of a processor, main memory, disk storage, a backup device, and possibly interface devices such as a keyboard and mouse, graphics adapter and monitor, sound card with microphone and speakers, communications interface, and a printer. The processor speed is normally used to characterize the system (a 1.5 GHz Pentium 4 system, for example), but the processor is only a portion of the cost of the system [5]. Using a slightly faster processor will likely cause little change in the performance seen by the owner of a personal computer, but it is important for a multiuser server.

General-purpose desktop PCs for single users are common. PCs are cheap because millions are produced each year by many competing vendors. Millions of portable laptops, notebooks, and PDA devices are also sold every year. Currently, laptops and notebooks are more expensive than desktop computers because of a more expensive screen and power management requirements. Workstations are similar to personal computers but are produced in smaller quantities, with better reliability and packaging; they are more expensive than PCs primarily because of the smaller quantities produced.

Servers have special features for supporting multiple simultaneous users, such as more rugged components, ECC memory, swappable disks and power supplies, and a good backup device. They normally have some method of adding extra processors, memory, storage, and I/O devices. This means more components, and more expensive components, than in a typical PC. Extra design work is required for the extra features even if they are not used, and fewer servers are sold than PCs. Thus, servers of similar capability will be more expensive.

In large servers, reliability and expandability are very important, because several hundred people may be using them at any time. Designing very high-speed interconnect busses to support cache coherence across many processors sharing common memory is expensive, and special bus interconnect circuitry is required for each board connecting to the system. Large servers are sold in small quantities, and the design costs form a large percentage of the selling price. Extensive reliability testing means that large servers are slower to adopt the latest, fastest CPUs, which may be the same parts used in PCs and workstations. Additional or upgraded processors may directly increase the number of users that can be supported, making the entire system more valuable. Chip manufacturers often charge double the price for the fastest CPU they produce compared with one only 10% slower, and people pay the premium for the overall system performance upgrade, particularly in the server market.

Another approach to improving computer system performance is to cluster a few computers [104]. One type of cluster provides cooperative sharing and fail-over for enhanced reliability and performance. Other clusters are collections of nodes with fast interconnections used to share computations. The TOP500 list [47] ranks the fastest computer systems in the world based on LINPACK benchmarks; the top entries contain many thousands of processors.

Beowulf clusters [105] use fairly standard home PCs to form affordable supercomputers. These clusters scale in peak performance roughly with price, but, like most cluster computers, they are difficult to program if usable performance is to be obtained on real problems.
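
To make the programming burden concrete, the sketch below splits a simple summation across cluster nodes using message passing. It is a minimal illustration, assuming the mpi4py bindings for MPI and a toy problem of my choosing, and is not drawn from the handbook itself; even in this small case the programmer must decide how work is partitioned and where results are combined.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()          # this node's index within the cluster job
    size = comm.Get_size()          # total number of nodes launched by mpiexec

    # Each node sums a strided share of the terms, so the work divides evenly.
    n = 10_000_000
    local_sum = sum(1.0 / (i + 1) for i in range(rank, n, size))

    # Partial sums are combined on rank 0; on a real cluster this collective
    # step is where interconnect speed and synchronization begin to matter.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"harmonic sum of {n} terms: {total:.6f}")

Run with, for example, mpiexec -n 8 python sum.py. The same pattern of partitioning, computing locally, and combining underlies most cluster codes, and real applications spend much of their effort minimizing the communication step.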

An interesting approach to low-cost computational power is to use the unused cycles of underutilized PCs. The SETI@home program distributed client programs to PCs connected to the Internet, allowing over 2 million computers to be used for computations that were distributed and collected by the SETI program [106]. SETI@home is likely the largest distributed computation problem in existence and forms the largest computational system.
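
The flow behind such volunteer computing is a work-unit model: a server splits the problem into independent chunks, each idle PC fetches a chunk, processes it, and returns the result. The sketch below imitates that flow on a single machine, with threads standing in for volunteer PCs; the chunk count and the analyze step are hypothetical placeholders, not SETI@home's actual protocol.

    import queue
    import random
    import threading
    import time

    def analyze(chunk_id):
        # Stand-in for the hours of signal processing a real client performs.
        time.sleep(random.uniform(0.01, 0.05))
        return chunk_id, random.random()      # (chunk, "interest" score)

    work = queue.Queue()
    results = queue.Queue()
    for chunk_id in range(100):               # the server splits the survey into chunks
        work.put(chunk_id)

    def volunteer():
        # Each "PC" pulls a chunk when idle, crunches it, and uploads the result.
        while True:
            try:
                chunk_id = work.get_nowait()
            except queue.Empty:
                return
            results.put(analyze(chunk_id))

    threads = [threading.Thread(target=volunteer) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(f"collected {results.qsize()} of 100 results")

Because the chunks are independent and the clients communicate only with the server, this model tolerates slow, unreliable, and intermittently connected machines far better than a tightly coupled cluster.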

