

Don't forget to check out the Online Learning Center, www.mhhe.com/forouzan for

additional resources!

Instructors and students using Data Communications and Networking, Fourth Edition

by Behrouz A. Forouzan will find a wide variety of resources available at the Online

Learning Center, www.mhhe.com/forouzan

Instructor Resources

Instructors can access the following resources by contacting their McGraw-Hill Representative

for a secure password.

o PowerPoint Slides. Contain figures, tables, highlighted points, and brief descriptions

of each section.

o Complete Solutions Manual. Password-protected solutions to all end-of-chapter

problems are provided.

o PageOut. A free tool that helps you create your own course website.

o Instructor Message Board. Allows you to share ideas with other instructors

using the text.

Student Resources

The student resources are available to those students using the book. Once you have

accessed the Online Learning Center, click on "Student Resources," then select a chapter

from the drop down menu that appears. Each chapter has a wealth of materials to

help you review communications and networking concepts. Included are:

o Chapter Summaries. Bulleted summary points provide an essential review of

major ideas and concepts covered in each chapter.

o Student Solutions Manual. Contains answers for odd-numbered problems.

o Glossary. Defines key terms presented in the book.

o Flashcards. Facilitate learning through practice and review.

o Animated Figures. Visual representations model key networking concepts, bringing

them to life.

o Automated Quizzes. Easy-to-use quizzes strengthen learning and emphasize important

ideas from the book.

o Web links. Connect students to additional resources available online.



DATA

COMMUNICATIONS

AND

NETWORKING


McGraw-Hill Forouzan Networking Series

Titles by Behrouz A. Forouzan:

Data Communications and Networking

TCP/IP Protocol Suite

Local Area Networks

Business Data Communications


DATA

COMMUNICATIONS

AND

NETWORKING

Fourth Edition

Behrouz A. Forouzan

DeAnza College

with

Sophia Chung Fegan

• Higher Education

Boston Burr Ridge, IL Dubuque, IA Madison, WI New York San Francisco St. Louis

Bangkok Bogota Caracas Kuala Lumpur Lisbon London Madrid Mexico City

Milan Montreal New Delhi Santiago Seoul Singapore Sydney Taipei Toronto


The McGraw-Hill Companies

Higher Education

DATA COMMUNICATIONS AND NETWORKING, FOURTH EDITION

Published by McGraw-Hill, a business unit of The McGraw-Hill Companies, Inc., 1221 Avenue

of the Americas, New York, NY 10020. Copyright © 2007 by The McGraw-Hill Companies, Inc.

All rights reserved. No part of this publication may be reproduced or distributed in any form or

by any means, or stored in a database or retrieval system, without the prior written consent of

The McGraw-Hill Companies, Inc., including, but not limited to, in any network or other

electronic storage or transmission, or broadcast for distance learning.

Some ancillaries, including electronic and print components, may not be available to customers

outside the United States.

This book is printed on acid-free paper.

1 2 3 4 5 6 7 8 9 0 DOC/DOC 0 9 8 7 6

ISBN-13 978-0-07-296775-3

ISBN-10 0-07-296775-7

Publisher: Alan R. Apt

Developmental Editor: Rebecca Olson

Executive Marketing Manager: Michael Weitz

Senior Project Manager: Sheila M. Frank

Senior Production Supervisor: Kara Kudronowicz

Senior Media Project Manager: Jodi K. Banowetz

Associate Media Producer: Christina Nelson

Senior Designer: David W. Hash

Cover Designer: Rokusek Design

Cover Image: Women ascending Mount McKinley, Alaska. Mount McKinley (Denali)

12,000 feet, ©Allan Kearney/Getty Images

Compositor: Interactive Composition Corporation

Typeface: 10/12 Times Roman

Printer: R. R. Donnelley Crawfordsville, IN

Library of Congress Cataloging-in-Publication Data

Forouzan, Behrouz A.

Data communications and networking / Behrouz A. Forouzan. - 4th ed.

p. cm. - (McGraw-Hill Forouzan networking series)

Includes index.

ISBN 978-0-07-296775-3 - ISBN 0-07-296775-7 (hard copy : alk. paper)

1. Data transmission systems. 2. Computer networks. I. Title. II. Series.

TK5105.F6617 2007

004.6--dc22 2006000013

CIP

www.mhhe.com


To my wife, Faezeh, with love

Behrouz Forouzan



Preface

xxix

PART 1 Overview 1

Chapter 1 Introduction 3

Chapter 2 Network Models 27

PART 2 Physical Layer and Media 55

Chapter 3 Data and Signals 57

Chapter 4 Digital Transmission 101

Chapter 5 Analog Transmission 141

Chapter 6 Bandwidth Utilization: Multiplexing and Spreading 161

Chapter 7 Transmission Media 191

Chapter 8 Switching 213

Chapter 9 Using Telephone and Cable Networks for Data Transmission 241

PART 3 Data Link Layer 265

Chapter 10 Error Detection and Correction 267

Chapter 11 Data Link Control 307

Chapter 12 Multiple Access 363

Chapter 13 Wired LANs: Ethernet 395

Chapter 14 Wireless LANs 421

Chapter 15 Connecting LANs, Backbone Networks, and Virtual LANs 445

Chapter 16 Wireless WANs: Cellular Telephone and Satellite Networks 467

Chapter 17 SONET/SDH 491

Chapter 18 Virtual-Circuit Networks: Frame Relay and ATM 517


PART 4 Network Layer 547

Chapter 19 Network Layer: Logical Addressing 549

Chapter 20 Network Layer: Internet Protocol 579

Chapter 21 Network Layer: Address Mapping, Error Reporting, and Multicasting 611

Chapter 22 Network Layer: Delivery, Forwarding, and Routing 647

PART 5 Transport Layer 701

Chapter 23 Process-to-Process Delivery: UDP, TCP, and SCTP 703

Chapter 24 Congestion Control and Quality of Service 761

PART 6 Application Layer 795

Chapter 25 Domain Name System 797

Chapter 26 Remote Logging, Electronic Mail, and File Transfer 817

Chapter 27 WWW and HTTP 851

Chapter 28 Network Management: SNMP 873

Chapter 29 Multimedia 901

PART 7 Security 929

Chapter 30 Cryptography 931

Chapter 31 Network Security 961

Chapter 32

Security in the Internet: IPSec, SSL/TLS, PGP, VPN,

and Firewalls 995

Appendix A Unicode 1029

Appendix B Numbering Systems 1037

Appendix C Mathematical Review 1043

Appendix D 8B/6T Code 1055

Appendix E Telephone History 1059

Appendix F Contact Addresses 1061

Appendix G RFCs 1063

Appendix H UDP and TCP Ports 1065

Acronyms 1067

Glossary 1071

References 1107

Index 1111


Preface

xxix

PART 1 Overview 1

Chapter 1 Introduction 3

1.1 DATA COMMUNICATIONS 3

Components 4

Data Representation 5

Data Flow 6

1.2 NETWORKS 7

Distributed Processing 7

Network Criteria 7

Physical Structures 8

Network Models 13

Categories of Networks 13

Interconnection of Networks: Internetwork 15

1.3 THE INTERNET 16

A Brief History 17

The Internet Today 17

1.4 PROTOCOLS AND STANDARDS 19

Protocols 19

Standards 19

Standards Organizations 20

Internet Standards 21

1.5 RECOMMENDED READING 21

Books 21

Sites 22

RFCs 22

1.6 KEY TERMS 22

1.7 SUMMARY 23

1.8 PRACTICE SET 24

Review Questions 24

Exercises 24

Research Activities 25

Chapter 2 Network Models 27

2.1 LAYERED TASKS 27

Sender, Receiver, and Carrier 28

Hierarchy 29


2.2 THE OSI MODEL 29

Layered Architecture 30

Peer-to-Peer Processes 30

Encapsulation 33

2.3 LAYERS IN THE OSI MODEL 33

Physical Layer 33

Data Link Layer 34

Network Layer 36

Transport Layer 37

Session Layer 39

Presentation Layer 39

Application Layer 41

Summary of Layers 42

2.4 TCP/IP PROTOCOL SUITE 42

Physical and Data Link Layers 43

Network Layer 43

Transport Layer 44

Application Layer 45

2.5 ADDRESSING 45

Physical Addresses 46

Logical Addresses 47

Port Addresses 49

Specific Addresses 50

2.6 RECOMMENDED READING 50

Books 51

Sites 51

RFCs 51

2.7 KEY TERMS 51

2.8 SUMMARY 52

2.9 PRACTICE SET 52

Review Questions 52

Exercises 53

Research Activities 54

PART 2 Physical Layer and Media 55

Chapter 3 Data and Signals 57

3.1 ANALOG AND DIGITAL 57

Analog and Digital Data 57

Analog and Digital Signals 58

Periodic and Nonperiodic Signals 58

3.2 PERIODIC ANALOG SIGNALS 59

Sine Wave 59

Phase 63

Wavelength 64

Time and Frequency Domains 65

Composite Signals 66

Bandwidth 69

3.3 DIGITAL SIGNALS 71

Bit Rate 73

Bit Length 73

Digital Signal as a Composite Analog Signal 74

Transmission of Digital Signals 74



3.4 TRANSMISSION IMPAIRMENT 80

Attenuation 81

Distortion 83

Noise 84

3.5 DATA RATE LIMITS 85

Noiseless Channel: Nyquist Bit Rate 86

Noisy Channel: Shannon Capacity 87

Using Both Limits 88

3.6 PERFORMANCE 89

Bandwidth 89

Throughput 90

Latency (Delay) 90

Bandwidth-Delay Product 92

Jitter 94

3.7 RECOMMENDED READING 94

Books 94

3.8 KEY TERMS 94

3.9 SUMMARY 95

3.10 PRACTICE SET 96

Review Questions 96

Exercises 96

Chapter 4 Digital Transmission 101

4.1 DIGITAL-TO-DIGITAL CONVERSION 101

Line Coding 101

Line Coding Schemes 106

Block Coding 115

Scrambling 118

4.2 ANALOG-TO-DIGITAL CONVERSION 120

Pulse Code Modulation (PCM) 121

Delta Modulation (DM) 129

4.3 TRANSMISSION MODES 131

Parallel Transmission 131

Serial Transmission 132

4.4 RECOMMENDED READING 135

Books 135

4.5 KEY TERMS 135

4.6 SUMMARY 136

4.7 PRACTICE SET 137

Review Questions 137

Exercises 137

Chapter 5 Analog Transmission 141

5.1 DIGITAL-TO-ANALOG CONVERSION 141

Aspects of Digital-to-Analog Conversion 142

Amplitude Shift Keying 143

Frequency Shift Keying 146

Phase Shift Keying 148

Quadrature Amplitude Modulation 152

5.2 ANALOG-TO-ANALOG CONVERSION 152

Amplitude Modulation 153

Frequency Modulation 154

Phase Modulation 155



5.3 RECOMMENDED READING 156

Books 156

5.4 KEY TERMS 157

5.5 SUMMARY 157

5.6 PRACTICE SET 158

Review Questions 158

Exercises 158

Chapter 6

Bandwidth Utilization: Multiplexing

and Spreading 161

6.1 MULTIPLEXING 161

Frequency-Division Multiplexing 162

Wavelength-Division Multiplexing 167

Synchronous Time-Division Multiplexing 169

Statistical Time-Division Multiplexing 179

6.2 SPREAD SPECTRUM 180

Frequency Hopping Spread Spectrum (FHSS) 181

Direct Sequence Spread Spectrum 184

6.3 RECOMMENDED READING 185

Books 185

6.4 KEY TERMS 185

6.5 SUMMARY 186

6.6 PRACTICE SET 187

Review Questions 187

Exercises 187

Chapter 7 Transmission Media 191

7.1 GUIDED MEDIA 192

Twisted-Pair Cable 193

Coaxial Cable 195

Fiber-Optic Cable 198

7.2 UNGUIDED MEDIA: WIRELESS 203

Radio Waves 205

Microwaves 206

Infrared 207

7.3 RECOMMENDED READING 208

Books 208

7.4 KEY TERMS 208

7.5 SUMMARY 209

7.6 PRACTICE SET 209

Review Questions 209

Exercises 210

Chapter 8 Switching 213

8.1 CIRCUIT-SWITCHED NETWORKS 214

Three Phases 217

Efficiency 217

Delay 217

Circuit-Switched Technology in Telephone Networks 218

8.2 DATAGRAM NETWORKS 218

Routing Table 220



Efficiency 220

Delay 221

Datagram Networks in the Internet 221

8.3 VIRTUAL-CIRCUIT NETWORKS 221

Addressing 222

Three Phases 223

Efficiency 226

Delay in Virtual-Circuit Networks 226

Circuit-Switched Technology in WANs 227

8.4 STRUCTURE OF A SWITCH 227

Structure of Circuit Switches 227

Structure of Packet Switches 232

8.5 RECOMMENDED READING 235

Books 235

8.6 KEY TERMS 235

8.7 SUMMARY 236

8.8 PRACTICE SET 236

Review Questions 236

Exercises 237

Chapter 9

Using Telephone and Cable Networks for Data

Transmission 241

9.1 TELEPHONE NETWORK 241

Major Components 241

LATAs 242

Signaling 244

Services Provided by Telephone Networks 247

9.2 DIAL-UP MODEMS 248

Modem Standards 249

9.3 DIGITAL SUBSCRIBER LINE 251

ADSL 252

ADSL Lite 254

HDSL 255

SDSL 255

VDSL 255

Summary 255

9.4 CABLE TV NETWORKS 256

Traditional Cable Networks 256

Hybrid Fiber-Coaxial (HFC) Network 256

9.5 CABLE TV FOR DATA TRANSFER 257

Bandwidth 257

Sharing 259

CM and CMTS 259

Data Transmission Schemes: DOCSIS 260

9.6 RECOMMENDED READING 261

Books 261

9.7 KEY TERMS 261

9.8 SUMMARY 262

9.9 PRACTICE SET 263

Review Questions 263

Exercises 264



PART 3 Data Link Layer 265

Chapter 10 Error Detection and Correction 267

10.1 INTRODUCTION 267

Types of Errors 267

Redundancy 269

Detection Versus Correction 269

Forward Error Correction Versus Retransmission 269

Coding 269

Modular Arithmetic 270

10.2 BLOCK CODING 271

Error Detection 272

Error Correction 273

Hamming Distance 274

Minimum Hamming Distance 274

10.3 LINEAR BLOCK CODES 277

Minimum Distance for Linear Block Codes 278

Some Linear Block Codes 278

10.4 CYCLIC CODES 284

Cyclic Redundancy Check 284

Hardware Implementation 287

Polynomials 291

Cyclic Code Analysis 293

Advantages of Cyclic Codes 297

Other Cyclic Codes 297

10.5 CHECKSUM 298

Idea 298

One's Complement 298

Internet Checksum 299

10.6 RECOMMENDED READING 301

Books 301

RFCs 301

10.7 KEY TERMS 301

10.8 SUMMARY 302

10.9 PRACTICE SET 303

Review Questions 303

Exercises 303

Chapter 11 Data Link Control 307

11.1 FRAMING 307

Fixed-Size Framing 308

Variable-Size Framing 308

11.2 FLOW AND ERROR CONTROL 311

Flow Control 311

Error Control 311

11.3 PROTOCOLS 311

11.4 NOISELESS CHANNELS 312

Simplest Protocol 312

Stop-and-Wait Protocol 315

11.5 NOISY CHANNELS 318

Stop-and-Wait Automatic Repeat Request 318

Go-Back-N Automatic Repeat Request 324



Selective Repeat Automatic Repeat Request 332

Piggybacking 339

11.6 HDLC 340

Configurations and Transfer Modes 340

Frames 341

Control Field 343

11.7 POINT-TO-POINT PROTOCOL 346

Framing 348

Transition Phases 349

Multiplexing 350

Multilink PPP 355

11.8 RECOMMENDED READING 357

Books 357

11.9 KEY TERMS 357

11.10 SUMMARY 358

11.11 PRACTICE SET 359

Review Questions 359

Exercises 359

Chapter 12 Multiple Access 363

12.1 RANDOM ACCESS 364

ALOHA 365

Carrier Sense Multiple Access (CSMA) 370

Carrier Sense Multiple Access with Collision Detection (CSMA/CD) 373

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) 377

12.2 CONTROLLED ACCESS 379

Reservation 379

Polling 380

Token Passing 381

12.3 CHANNELIZATION 383

Frequency-Division Multiple Access (FDMA) 383

Time-Division Multiple Access (TDMA) 384

Code-Division Multiple Access (CDMA) 385

12.4 RECOMMENDED READING 390

Books 391

12.5 KEY TERMS 391

12.6 SUMMARY 391

12.7 PRACTICE SET 392

Review Questions 392

Exercises 393

Research Activities 394

Chapter 13 Wired LANs: Ethernet 395

13.1 IEEE STANDARDS 395

Data Link Layer 396

Physical Layer 397

13.2 STANDARD ETHERNET 397

MAC Sublayer 398

Physical Layer 402

13.3 CHANGES IN THE STANDARD 406

Bridged Ethernet 406

Switched Ethernet 407

Full-Duplex Ethernet 408



13.4 FAST ETHERNET 409

MAC Sublayer 409

Physical Layer 410

13.5 GIGABIT ETHERNET 412

MAC Sublayer 412

Physical Layer 414

Ten-Gigabit Ethernet 416

13.6 RECOMMENDED READING 417

Books 417

13.7 KEY TERMS 417

13.8 SUMMARY 417

13.9 PRACTICE SET 418

Review Questions 418

Exercises 419

Chapter 14 Wireless LANs 421

14.1 IEEE 802.11 421

Architecture 421

MAC Sublayer 423

Addressing Mechanism 428

Physical Layer 432

14.2 BLUETOOTH 434

Architecture 435

Bluetooth Layers 436

Radio Layer 436

Baseband Layer 437

L2CAP 440

Other Upper Layers 441

14.3 RECOMMENDED READING 441

Books 442

14.4 KEY TERMS 442

14.5 SUMMARY 442

14.6 PRACTICE SET 443

Review Questions 443

Exercises 443

Chapter 15 Connecting LANs, Backbone Networks, and Virtual LANs 445

15.1 CONNECTING DEVICES 445

Passive Hubs 446

Repeaters 446

Active Hubs 447

Bridges 447

Two-Layer Switches 454

Routers 455

Three-Layer Switches 455

Gateway 455

15.2 BACKBONE NETWORKS 456

Bus Backbone 456

Star Backbone 457

Connecting Remote LANs 457




15.3 VIRTUAL LANs 458

Membership 461

Configuration 461

Communication Between Switches 462

IEEE Standard 462

Advantages 463

15.4 RECOMMENDED READING 463

Books 463

Site 463

15.5 KEY TERMS 463

15.6 SUMMARY 464

15.7 PRACTICE SET 464

Review Questions 464

Exercises 465

Chapter 16 Wireless WANs: Cellular Telephone and Satellite Networks 467

16.1 CELLULAR TELEPHONY 467

Frequency-Reuse Principle 467

Transmitting 468

Receiving 469

Roaming 469

First Generation 469

Second Generation 470

Third Generation 477

16.2 SATELLITE NETWORKS 478

Orbits 479

Footprint 480

Three Categories of Satellites 480

GEO Satellites 481

MEO Satellites 481

LEO Satellites 484

16.3 RECOMMENDED READING 487

Books 487

16.4 KEY TERMS 487

16.5 SUMMARY 487

16.6 PRACTICE SET 488

Review Questions 488

Exercises 488


Chapter 17 SONET/SDH 491

17.1 ARCHITECTURE 491

Signals 491

SONET Devices 492

Connections 493

17.2 SONET LAYERS 494

Path Layer 494

Line Layer 495

Section Layer 495

Photonic Layer 495

Device-Layer Relationships 495



17.3 SONET FRAMES 496

Frame, Byte, and Bit Transmission 496

STS-1 Frame Format 497

Overhead Summary 501

Encapsulation 501

17.4 STS MULTIPLEXING 503

Byte Interleaving 504

Concatenated Signal 505

Add/Drop Multiplexer 506

17.5 SONET NETWORKS 507

Linear Networks 507

Ring Networks 509

Mesh Networks 510

17.6 VIRTUAL TRIBUTARIES 512

Types of VTs 512

17.7 RECOMMENDED READING 513

Books 513

17.8 KEY TERMS 513

17.9 SUMMARY 514

17.10 PRACTICE SET 514

Review Questions 514

Exercises 515

Chapter 18 Virtual-Circuit Networks: Frame Relay and ATM 517

18.1 FRAME RELAY 517

Architecture 518

Frame Relay Layers 519

Extended Address 521

FRADs 522

VOFR 522

LMI 522

Congestion Control and Quality of Service 522

18.2 ATM 523

Design Goals 523

Problems 523

Architecture 526

Switching 529

ATM Layers 529

Congestion Control and Quality of Service 535

18.3 ATM LANs 536

ATM LAN Architecture 536

LAN Emulation (LANE) 538

Client/Server Model 539

Mixed Architecture with Client/Server 540

18.4 RECOMMENDED READING 540

Books 541

18.5 KEY TERMS 541

18.6 SUMMARY 541

18.7 PRACTICE SET 543

Review Questions 543

Exercises 543



PART 4 Network Layer 547

Chapter 19 Network Layer: Logical Addressing 549

19.1 IPv4 ADDRESSES 549

Address Space 550

Notations 550

Classful Addressing 552

Classless Addressing 555

Network Address Translation (NAT) 563

19.2 IPv6 ADDRESSES 566

Structure 567

Address Space 568

19.3 RECOMMENDED READING 572

Books 572

Sites 572

RFCs 572

19.4 KEY TERMS 572

19.5 SUMMARY 573

19.6 PRACTICE SET 574

Review Questions 574

Exercises 574

Research Activities 577

Chapter 20 Network Layer: Internet Protocol 579

20.1 INTERNETWORKING 579

Need for Network Layer 579

Internet as a Datagram Network 581

Internet as a Connectionless Network 582

20.2 IPv4 582

Datagram 583

Fragmentation 589

Checksum 594

Options 594

20.3 IPv6 596

Advantages 597

Packet Format 597

Extension Headers 602

20.4 TRANSITION FROM IPv4 TO IPv6 603

Dual Stack 604

Tunneling 604

Header Translation 605

20.5 RECOMMENDED READING 605

Books 606

Sites 606

RFCs 606

20.6 KEY TERMS 606

20.7 SUMMARY 607

20.8 PRACTICE SET 607

Review Questions 607

Exercises 608

Research Activities 609



Chapter 21

Network Layer: Address Mapping, Error Reporting,

and Multicasting 611

21.1 ADDRESS MAPPING 611

Mapping Logical to Physical Address: ARP 612

Mapping Physical to Logical Address: RARP, BOOTP, and DHCP 618

21.2 ICMP 621

Types of Messages 621

Message Format 621

Error Reporting 622

Query 625

Debugging Tools 627

21.3 IGMP 630

Group Management 630

IGMP Messages 631

Message Format 631

IGMP Operation 632

Encapsulation 635

Netstat Utility 637

21.4 ICMPv6 638

Error Reporting 638

Query 639

21.5 RECOMMENDED READING 640

Books 641

Site 641

RFCs 641

21.6 KEY TERMS 641

21.7 SUMMARY 642

21.8 PRACTICE SET 643

Review Questions 643

Exercises 644

Research Activities 645

Chapter 22

Network Layer: Delivery, Forwarding,

and Routing 647

22.1 DELIVERY 647

Direct Versus Indirect Delivery 647

22.2 FORWARDING 648

Forwarding Techniques 648

Forwarding Process 650

Routing Table 655

22.3 UNICAST ROUTING PROTOCOLS 658

Optimization 658

Intra- and Interdomain Routing 659

Distance Vector Routing 660

Link State Routing 666

Path Vector Routing 674

22.4 MULTICAST ROUTING PROTOCOLS 678

Unicast, Multicast, and Broadcast 678

Applications 681

Multicast Routing 682

Routing Protocols 684



22.5 RECOMMENDED READING 694

Books 694

Sites 694

RFCs 694

22.6 KEY TERMS 694

22.7 SUMMARY 695

22.8 PRACTICE SET 697

Review Questions 697

Exercises 697

Research Activities 699

PART 5 Transport Layer 701

Chapter 23

Process-to-Process Delivery: UDP, TCP,

and SCTP 703

23.1 PROCESS-TO-PROCESS DELIVERY 703

Client/Server Paradigm 704

Multiplexing and Demultiplexing 707

Connectionless Versus Connection-Oriented Service 707

Reliable Versus Unreliable 708

Three Protocols 708

23.2 USER DATAGRAM PROTOCOL (UDP) 709

Well-Known Ports for UDP 709

User Datagram 710

Checksum 711

UDP Operation 713

Use of UDP 715

23.3 TCP 715

TCP Services 715

TCP Features 719

Segment 721

A TCP Connection 723

Flow Control 728

Error Control 731

Congestion Control 735

23.4 SCTP 736

SCTP Services 736

SCTP Features 738

Packet Format 742

An SCTP Association 743

Flow Control 748

Error Control 751

Congestion Control 753

23.5 RECOMMENDED READING 753

Books 753

Sites 753

RFCs 753

23.6 KEY TERMS 754

23.7 SUMMARY 754

23.8 PRACTICE SET 756

Review Questions 756

Exercises 757

Research Activities 759



Chapter 24 Congestion Control and Quality of Service 761

24.1 DATA TRAFFIC 761

Traffic Descriptor 761

Traffic Profiles 762

24.2 CONGESTION 763

Network Performance 764

24.3 CONGESTION CONTROL 765

Open-Loop Congestion Control 766

Closed-Loop Congestion Control 767

24.4 TWO EXAMPLES 768

Congestion Control in TCP 769

Congestion Control in Frame Relay 773

24.5 QUALITY OF SERVICE 775

Flow Characteristics 775

Flow Classes 776

24.6 TECHNIQUES TO IMPROVE QoS 776

Scheduling 776

Traffic Shaping 777

Resource Reservation 780

Admission Control 780

24.7 INTEGRATED SERVICES 780

Signaling 781

Flow Specification 781

Admission 781

Service Classes 781

RSVP 782

Problems with Integrated Services 784

24.8 DIFFERENTIATED SERVICES 785

DS Field 785

24.9 QoS IN SWITCHED NETWORKS 786

QoS in Frame Relay 787

QoS in ATM 789

24.10 RECOMMENDED READING 790

Books 791

24.11 KEY TERMS 791

24.12 SUMMARY 791

24.13 PRACTICE SET 792

Review Questions 792

Exercises 793

PART 6 Application Layer 795

Chapter 25 Domain Name System 797

25.1 NAME SPACE 798

Flat Name Space 798

Hierarchical Name Space 798

25.2 DOMAIN NAME SPACE 799

Label 799

Domain Name 799

Domain 801



25.3 DISTRIBUTION OF NAME SPACE 801

Hierarchy of Name Servers 802

Zone 802

Root Server 803

Primary and Secondary Servers 803

25.4 DNS IN THE INTERNET 803

Generic Domains 804

Country Domains 805

Inverse Domain 805

25.5 RESOLUTION 806

Resolver 806

Mapping Names to Addresses 807

Mapping Address to Names 807

Recursive Resolution 808

Iterative Resolution 808

Caching 808

25.6 DNS MESSAGES 809

Header 809

25.7 TYPES OF RECORDS 811

Question Record 811

Resource Record 811

25.8 REGISTRARS 811

25.9 DYNAMIC DOMAIN NAME SYSTEM (DDNS) 812

25.10 ENCAPSULATION 812

25.11 RECOMMENDED READING 812

Books 813

Sites 813

RFCs 813

25.12 KEY TERMS 813

25.13 SUMMARY 813

25.14 PRACTICE SET 814

Review Questions 814

Exercises 815

Chapter 26 Remote Logging, Electronic Mail, and File Transfer 817

26.1 REMOTE LOGGING 817

TELNET 817

26.2 ELECTRONIC MAIL 824

Architecture 824

User Agent 828

Message Transfer Agent: SMTP 834

Message Access Agent: POP and IMAP 837

Web-Based Mail 839

26.3 FILE TRANSFER 840

File Transfer Protocol (FTP) 840

Anonymous FTP 844

26.4 RECOMMENDED READING 845

Books 845

Sites 845

RFCs 845

26.5 KEY TERMS 845

26.6 SUMMARY 846



26.7 PRACTICE SET 847

Review Questions 847

Exercises 848

Research Activities 848

Chapter 27 WWW and HTTP 851

27.1 ARCHITECTURE 851

Client (Browser) 852

Server 852

Uniform Resource Locator 853

Cookies 853

27.2 WEB DOCUMENTS 854

Static Documents 855

Dynamic Documents 857

Active Documents 860

27.3 HTTP 861

HTTP Transaction 861

Persistent Versus Nonpersistent Connection 868

Proxy Server 868

27.4 RECOMMENDED READING 869

Books 869

Sites 869

RFCs 869

27.5 KEY TERMS 869

27.6 SUMMARY 870

27.7 PRACTICE SET 871

Review Questions 871

Exercises 871

Chapter 28 Network Management: SNMP 873

28.1 NETWORK MANAGEMENT SYSTEM 873

Configuration Management 874

Fault Management 875

Performance Management 876

Security Management 876

Accounting Management 877

28.2 SIMPLE NETWORK MANAGEMENT PROTOCOL (SNMP) 877

Concept 877

Management Components 878

Structure of Management Information 881

Management Information Base (MIB) 886

Lexicographic Ordering 889

SNMP 891

Messages 893

UDP Ports 895

Security 897

28.3 RECOMMENDED READING 897

Books 897

Sites 897

RFCs 897

28.4 KEY TERMS 897

28.5 SUMMARY 898



28.6 PRACTICE SET 899

Review Questions 899

Exercises 899

Chapter 29 Multimedia 901

29.1 DIGITIZING AUDIO AND VIDEO 902

Digitizing Audio 902

Digitizing Video 902

29.2 AUDIO AND VIDEO COMPRESSION 903

Audio Compression 903

Video Compression 904

29.3 STREAMING STORED AUDIO/VIDEO 908

First Approach: Using a Web Server 909

Second Approach: Using a Web Server with Metafile 909

Third Approach: Using a Media Server 910

Fourth Approach: Using a Media Server and RTSP 911

29.4 STREAMING LIVE AUDIO/VIDEO 912

29.5 REAL-TIME INTERACTIVE AUDIO/VIDEO 912

Characteristics 912

29.6 RTP 916

RTP Packet Format 917

UDP Port 919

29.7 RTCP 919

Sender Report 919

Receiver Report 920

Source Description Message 920

Bye Message 920

Application-Specific Message 920

UDP Port 920

29.8 VOICE OVER IP 920

SIP 920

H.323 923

29.9 RECOMMENDED READING 925

Books 925

Sites 925

29.10 KEY TERMS 925

29.11 SUMMARY 926

29.12 PRACTICE SET 927

Review Questions 927

Exercises 927

Research Activities 928

PART 7 Security 929

Chapter 30 Cryptography 931

30.1 INTRODUCTION 931

Definitions 931

Two Categories 932

30.2 SYMMETRIC-KEY CRYPTOGRAPHY 935

Traditional Ciphers 935

Simple Modern Ciphers 938



Modern Round Ciphers 940

Mode of Operation 945

30.3 ASYMMETRIC-KEY CRYPTOGRAPHY 949

RSA 949

Diffie-Hellman 952

30.4 RECOMMENDED READING 956

Books 956

30.5 KEY TERMS 956

30.6 SUMMARY 957

30.7 PRACTICE SET 958

Review Questions 958

Exercises 959

Research Activities 960

Chapter 31 Network Security 961

31.1 SECURITY SERVICES 961

Message Confidentiality 962

Message Integrity 962

Message Authentication 962

Message Nonrepudiation 962

Entity Authentication 962

31.2 MESSAGE CONFIDENTIALITY 962

Confidentiality with Symmetric-Key Cryptography 963

Confidentiality with Asymmetric-Key Cryptography 963

31.3 MESSAGE INTEGRITY 964

Document and Fingerprint 965

Message and Message Digest 965

Difference 965

Creating and Checking the Digest 966

Hash Function Criteria 966

Hash Algorithms: SHA-1 967

31.4 MESSAGE AUTHENTICATION 969

MAC 969

31.5 DIGITAL SIGNATURE 971

Comparison 971

Need for Keys 972

Process 973

Services 974

Signature Schemes 976

31.6 ENTITY AUTHENTICATION 976

Passwords 976

Challenge-Response 978

31.7 KEY MANAGEMENT 981

Symmetric-Key Distribution 981

Public-Key Distribution 986

31.8 RECOMMENDED READING 990

Books 990

31.9 KEY TERMS 990

31.10 SUMMARY 991

31.11 PRACTICE SET 992

Review Questions 992

Exercises 993

Research Activities 994



Chapter 32 Security in the Internet: IPSec, SSL/TLS, PGP, VPN, and Firewalls 995

32.1 IP Security (IPSec) 996

Two Modes 996

Two Security Protocols 998

Security Association 1002

Internet Key Exchange (IKE) 1004

Virtual Private Network 1004

32.2 SSL/TLS 1008

SSL Services 1008

Security Parameters 1009

Sessions and Connections 1011

Four Protocols 1012

Transport Layer Security 1013

32.3 PGP 1014

Security Parameters 1015

Services 1015

A Scenario 1016

PGP Algorithms 1017

Key Rings 1018

PGP Certificates 1019

32.4 FIREWALLS 1021

Packet-Filter Firewall 1022

Proxy Firewall 1023

32.5 RECOMMENDED READING 1024

Books 1024

32.6 KEY TERMS 1024

32.7 SUMMARY 1025

32.8 PRACTICE SET 1026

Review Questions 1026

Exercises 1026


Appendix A Unicode 1029

A.1 UNICODE 1029

Planes 1030

Basic Multilingual Plane (BMP) 1030

Supplementary Multilingual Plane (SMP) 1032

Supplementary Ideographic Plane (SIP) 1032

Supplementary Special Plane (SSP) 1032

Private Use Planes (PUPs) 1032

A.2 ASCII 1032

Some Properties of ASCII 1036

Appendix B Numbering Systems 1037

B.1 BASE 10: DECIMAL 1037

Weights 1038

B.2 BASE 2: BINARY 1038

Weights 1038

Conversion 1038



B.3 BASE 16: HEXADECIMAL 1039

Weights 1039

Conversion 1039

A Comparison 1040

B.4 BASE 256: IP ADDRESSES 1040

Weights 1040

Conversion 1040

B.5 OTHER CONVERSIONS 1041

Binary and Hexadecimal 1041

Base 256 and Binary 1042

Appendix C Mathematical Review 1043

C.1 TRIGONOMETRIC FUNCTIONS 1043

Sine Wave 1043

Cosine Wave 1045

Other Trigonometric Functions 1046

Trigonometric Identities 1046

C.2 FOURIER ANALYSIS 1046

Fourier Series 1046

Fourier Transform 1048

C.3 EXPONENT AND LOGARITHM 1050

Exponential Function 1050

Logarithmic Function 1051

Appendix D 8B/6T Code 1055

Appendix E Telephone History 1059

Before 1984 1059

Between 1984 and 1996 1059

After 1996 1059

Appendix F Contact Addresses 1061

Appendix G RFCs 1063

Appendix H UDP and TCP Ports 1065

Acronyms 1067

Glossary 1071

References 1107

Index 1111


Preface

Data communications and networking may be the fastest growing technologies in our

culture today. One of the ramifications of that growth is a dramatic increase in the number

of professions where an understanding of these technologies is essential for success and

a proportionate increase in the number and types of students taking courses to learn

about them.

Features of the Book

Several features of this text are designed to make it particularly easy for students to

understand data communications and networking.

Structure

We have used the five-layer Internet model as the framework for the text not only because

a thorough understanding of the model is essential to understanding most current networking

theory but also because it is based on a structure of interdependencies: Each layer

builds upon the layer beneath it and supports the layer above it. In the same way, each concept

introduced in our text builds upon the concepts examined in the previous sections. The

Internet model was chosen because it is a protocol that is fully implemented.

This text is designed for students with little or no background in telecommunications

or data communications. For this reason, we use a bottom-up approach. With this

approach, students learn first about data communications (lower layers) before learning

about networking (upper layers).

Visual Approach

The book presents highly technical subject matter without complex formulas by using a

balance of text and figures. More than 700 figures accompanying the text provide a

visual and intuitive opportunity for understanding the material. Figures are particularly

important in explaining networking concepts, which are based on connections and

transmission. Both of these ideas are easy to grasp visually.

Highlighted Points

We emphasize important concepts in highlighted boxes for quick reference and immediate

attention.


Examples and Applications

When appropriate, we have selected examples to reflect true-to-life situations. For example,

in Chapter 6 we have shown several cases of telecommunications in current telephone

networks.

Recommended Reading

Each chapter includes a list of books and sites that can be used for further reading.

Key Terms

Each chapter includes a list of key terms for the student.

Summary

Each chapter ends with a summary of the material covered in that chapter. The summary

provides a brief overview of all the important points in the chapter.

Practice Set

Each chapter includes a practice set designed to reinforce and apply salient concepts. It

consists of three parts: review questions, exercises, and research activities (only for

appropriate chapters). Review questions are intended to test the student's first-level understanding

of the material presented in the chapter. Exercises require deeper understanding

of the material. Research activities are designed to create motivation for further study.

Appendixes

The appendixes are intended to provide quick reference material or a review of materials

needed to understand the concepts discussed in the book.

Glossary and Acronyms

The book contains an extensive glossary and a list of acronyms.

Changes in the Fourth Edition

The Fourth Edition has major changes from the Third Edition, both in the organization

and in the contents.

Organization

The following lists the changes in the organization of the book:

1. Chapter 6 now contains multiplexing as well as spreading.

2. Chapter 8 is now totally devoted to switching.

3. The contents of Chapter 12 are moved to Chapter 11.

4. Chapter 17 covers SONET technology.

5. Chapter 19 discusses IP addressing.

6. Chapter 20 is devoted to the Internet Protocol.

7. Chapter 21 discusses three protocols: ARP, ICMP, and IGMP.

8. Chapter 28 is new and devoted to network management in the Internet.

9. The previous Chapters 29 to 31 are now Chapters 30 to 32.



Contents

We have revised the contents of many chapters including the following:

1. The contents of Chapters 1 to 5 are revised and augmented. Examples are added to

clarify the contents.

2. The contents of Chapter 10 are revised and augmented to include methods of error

detection and correction.

3. Chapter 11 is revised to include a full discussion of several control link protocols.

4. Delivery, forwarding, and routing of datagrams are added to Chapter 22.

5. The new transport protocol, SCTP, is added to Chapter 23.

6. The contents of Chapters 30, 31, and 32 are revised and augmented to include

additional discussion about security issues and the Internet.

7. New examples are added to clarify the understanding of concepts.

End Materials

1. A section is added to the end of each chapter listing additional sources for study.

2. The review questions are changed and updated.

3. The multiple-choice questions are moved to the book site to allow students to self-test

their knowledge about the contents of the chapter and receive immediate feedback.

4. Exercises are revised and new ones are added to the appropriate chapters.

5. Some chapters contain research activities.

Instructional Materials

Instructional materials for both the student and the teacher are revised and augmented.

The solutions to exercises contain both the explanation and the answer, including full-color

figures or tables when needed. The PowerPoint presentations are more comprehensive

and include text and figures.

Contents

The book is divided into seven parts. The first part is an overview; the last part concerns

network security. The middle five parts are designed to represent the five layers of the

Internet model. The following summarizes the contents of each part.

Part One: Overview

The first part gives a general overview of data communications and networking. Chapter

1 covers introductory concepts needed for the rest of the book. Chapter 2 introduces

the Internet model.

Part Two: Physical Layer

The second part is a discussion of the physical layer of the Internet model. Chapters 3

to 6 discuss telecommunication aspects of the physical layer. Chapter 7 introduces the

transmission media, which, although not part of the physical layer, is controlled by it.

Chapter 8 is devoted to switching, which can be used in several layers. Chapter 9 shows

how two public networks, telephone and cable TV, can be used for data transfer.



Part Three: Data Link Layer

The third part is devoted to the discussion of the data link layer of the Internet model.

Chapter 10 covers error detection and correction. Chapters 11 and 12 discuss issues related

to data link control. Chapters 13 through 16 deal with LANs. Chapters 17 and 18 are

about WANs. LANs and WANs are examples of networks operating in the first two layers

of the Internet model.

Part Four: Network Layer

The fourth part is devoted to the discussion of the network layer of the Internet model.

Chapter 19 covers IP addresses. Chapters 20 and 21 are devoted to the network layer

protocols such as IP, ARP, ICMP, and IGMP. Chapter 22 discusses delivery, forwarding,

and routing of packets in the Internet.

Part Five: Transport Layer

The fifth part is devoted to the discussion of the transport layer of the Internet model.

Chapter 23 gives an overview of the transport layer and discusses the services and

duties of this layer. It also introduces three transport-layer protocols: UDP, TCP, and

SCTP. Chapter 24 discusses congestion control and quality of service, two issues

related to the transport layer and the previous two layers.

Part Six: Application Layer

The sixth part is devoted to the discussion of the application layer of the Internet model.

Chapter 25 is about DNS, the application program that is used by other application programs

to map application layer addresses to network layer addresses. Chapters 26 to 29

discuss some common application protocols in the Internet.

Part Seven: Security

The seventh part is a discussion of security. It serves as a prelude to further study in this

subject. Chapter 30 briefly discusses cryptography. Chapter 31 introduces security

aspects. Chapter 32 shows how different security aspects can be applied to three layers

of the Internet model.

Online Learning Center

The McGraw-Hill Online Learning Center contains much additional material, available

at www.mhhe.com/forouzan. As students read through Data Communications and

Networking, they can go online to take self-grading quizzes. They can also access lecture

materials such as PowerPoint slides, and get additional review from animated figures

from the book. Selected solutions are also available over the Web. The solutions to

odd-numbered problems are provided to students, and instructors can use a password to

access the complete set of solutions.

Additionally, McGraw-Hill makes it easy to create a website for your networking

course with an exclusive McGraw-Hill product called PageOut. It requires no prior

knowledge of HTML, no long hours, and no design skills on your part. Instead, PageOut

offers a series of templates. Simply fill them with your course information and



click on one of 16 designs. The process takes under an hour and leaves you with a professionally

designed website.

Although PageOut offers "instant" development, the finished website provides powerful

features. An interactive course syllabus allows you to post content to coincide with

your lectures, so when students visit your PageOut website, your syllabus will direct them

to components of Forouzan's Online Learning Center, or specific material of your own.

How to Use the Book

This book is written for both an academic and a professional audience. The book can be

used as a self-study guide for interested professionals. As a textbook, it can be used for

a one-semester or one-quarter course. The following are some guidelines.

o Parts one to three are strongly recommended.

o Parts four to six can be covered if there is no following course in TCP/IP protocol.

o Part seven is recommended if there is no following course in network security.

Acknowledgments

It is obvious that the development of a book of this scope needs the support of many people.

Peer Review

The most important contribution to the development of a book such as this comes from

peer reviews. We cannot express our gratitude in words to the many reviewers who

spent numerous hours reading the manuscript and providing us with helpful comments

and ideas. We would especially like to acknowledge the contributions of the following

reviewers for the third and fourth editions of this book.

Farid Ahmed, Catholic University

Kaveh Ashenayi, University of Tulsa

Yoris Au, University of Texas, San Antonio

Essie Bakhtiar, Clayton College & State University

Anthony Barnard, University of Alabama, Birmingham

A.T. Burrell, Oklahoma State University

Scott Campbell, Miami University

Teresa Carrigan, Blackburn College

Hwa Chang, Tufts University

Edward Chlebus, Illinois Institute of Technology

Peter Cooper, Sam Houston State University

Richard Coppins, Virginia Commonwealth University

Harpal Dhillon, Southwestern Oklahoma State University

Hans-Peter Dommel, Santa Clara University

M. Barry Dumas, Baruch College, CUNY

William Figg, Dakota State University

Dale Fox, Quinnipiac University

Terrence Fries, Coastal Carolina University

Errin Fulp, Wake Forest University



Sandeep Gupta, Arizona State University

George Hamer, South Dakota State University

James Henson, California State University, Fresno

Tom Hilton, Utah State University

Allen Holliday, California State University, Fullerton

Seyed Hossein Hosseini, University of Wisconsin, Milwaukee

Gerald Isaacs, Carroll College, Waukesha

Hrishikesh Joshi, DeVry University

E.S. Khosravi, Southern University

Bob Kinicki, Worcester Polytechnic University

Kevin Kwiat, Hamilton College

Ten-Hwang Lai, Ohio State University

Chung-Wei Lee, Auburn University

Ka-Cheong Leung, Texas Tech University

Gertrude Levine, Fairleigh Dickinson University

Alvin Sek See Lim, Auburn University

Charles Liu, California State University, Los Angeles

Wenhang Liu, California State University, Los Angeles

Mark Llewellyn, University of Central Florida

Sanchita Mal-Sarkar, Cleveland State University

Louis Marseille, Harford Community College

Kevin McNeill, University of Arizona

Arnold C. Meltzer, George Washington University

Rayman Meservy, Brigham Young University

Prasant Mohapatra, University of California, Davis

Hung Z Ngo, SUNY, Buffalo

Larry Owens, California State University, Fresno

Arnold Patton, Bradley University

Dolly Samson, Hawaii Pacific University

Joseph Sherif, California State University, Fullerton

Robert Simon, George Mason University

Ronald J. Srodawa, Oakland University

Daniel Tian, California State University, Monterey Bay

Richard Tibbs, Radford University

Christophe Veltsos, Minnesota State University, Mankato

Yang Wang, University of Maryland, College Park

Sherali Zeadally, Wayne State University

McGraw-Hill Staff

Special thanks go to the staff of McGraw-Hill. Alan Apt, our publisher, proved how a

proficient publisher can make the impossible possible. Rebecca Olson, the developmental

editor, gave us help whenever we needed it. Sheila Frank, our project manager,

guided us through the production process with enormous enthusiasm. We also thank

David Hash in design, Kara Kudronowicz in production, and Patti Scott, the copy editor.


Overview

Objectives

Part 1 provides a general idea of what we will see in the rest of the book. Four major

concepts are discussed: data communications, networking, protocols and standards,

and networking models.

Networks exist so that data may be sent from one place to another; this is the basic concept

of data communications. To fully grasp this subject, we must understand the data

communication components, how different types of data can be represented, and how

to create a data flow.

Data communications between remote parties can be achieved through a process

called networking, involving the connection of computers, media, and networking

devices. Networks are divided into two main categories: local area networks (LANs)

and wide area networks (WANs). These two types of networks have different characteristics

and different functionalities. The Internet, the main focus of the book, is a

collection of LANs and WANs held together by internetworking devices.

Protocols and standards are vital to the implementation of data communications

and networking. Protocols refer to the rules; a standard is a protocol that has been

adopted by vendors and manufacturers.

Network models serve to organize, unify, and control the hardware and software components

of data communications and networking. Although the term "network model"

suggests a relationship to networking, the model also encompasses data communications.

Chapters

This part consists of two chapters: Chapter 1 and Chapter 2.

Chapter 1

In Chapter 1, we introduce the concepts of data communications and networking. We discuss

data communications components, data representation, and data flow. We then move

to the structure of networks that carry data. We discuss network topologies, categories

of networks, and the general idea behind the Internet. The section on protocols and

standards gives a quick overview of the organizations that set standards in data communications

and networking.


Chapter 2

The two dominant networking models are the Open Systems Interconnection (OSI) and

the Internet model (TCP/IP). The first is a theoretical framework; the second is the

actual model used in today's data communications. In Chapter 2, we first discuss the

OSI model to give a general background. We then concentrate on the Internet model,

which is the foundation for the rest of the book.


CHAPTER 1

Introduction

Data communications and networking are changing the way we do business and the way

we live. Business decisions have to be made ever more quickly, and the decision makers

require immediate access to accurate information. Why wait a week for that report

from Germany to arrive by mail when it could appear almost instantaneously through

computer networks? Businesses today rely on computer networks and internetworks.

But before we ask how quickly we can get hooked up, we need to know how networks

operate, what types of technologies are available, and which design best fills which set

of needs.

The development of the personal computer brought about tremendous changes for

business, industry, science, and education. A similar revolution is occurring in data

communications and networking. Technological advances are making it possible for

communications links to carry more and faster signals. As a result, services are evolving

to allow use of this expanded capacity. For example, established telephone services

such as conference calling, call waiting, voice mail, and caller ID have been extended.

Research in data communications and networking has resulted in new technologies.

One goal is to be able to exchange data such as text, audio, and video from all

points in the world. We want to access the Internet to download and upload information

quickly and accurately and at any time.

This chapter addresses four issues: data communications, networks, the Internet,

and protocols and standards. First we give a broad definition of data communications.

Then we define networks as a highway on which data can travel. The Internet is discussed

as a good example of an internetwork (i.e., a network of networks). Finally, we

discuss different types of protocols, the difference between protocols and standards,

and the organizations that set those standards.

1.1 DATA COMMUNICATIONS

When we communicate, we are sharing information. This sharing can be local or

remote. Between individuals, local communication usually occurs face to face, while

remote communication takes place over distance. The term telecommunication, which


includes telephony, telegraphy, and television, means communication at a distance (tele

is Greek for "far").

The word data refers to information presented in whatever form is agreed upon by

the parties creating and using the data.

Data communications are the exchange of data between two devices via some

form of transmission medium such as a wire cable. For data communications to occur,

the communicating devices must be part of a communication system made up of a combination

of hardware (physical equipment) and software (programs). The effectiveness

of a data communications system depends on four fundamental characteristics: delivery,

accuracy, timeliness, and jitter.

1. Delivery. The system must deliver data to the correct destination. Data must be

received by the intended device or user and only by that device or user.

2. Accuracy. The system must deliver the data accurately. Data that have been

altered in transmission and left uncorrected are unusable.

3. Timeliness. The system must deliver data in a timely manner. Data delivered late are

useless. In the case of video and audio, timely delivery means delivering data as

they are produced, in the same order that they are produced, and without significant

delay. This kind of delivery is called real-time transmission.

4. Jitter. Jitter refers to the variation in the packet arrival time. It is the uneven delay in

the delivery of audio or video packets. For example, let us assume that video packets

are sent every 30 ms. If some of the packets arrive with 30-ms delay and others with

40-ms delay, an uneven quality in the video is the result.
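As a rough illustration (Python is used here only for illustration; it is not part of the text), the short sketch below computes the gaps between consecutive packet arrivals for the hypothetical timings in this example. Unequal gaps are exactly what jitter means.

# Packets are sent every 30 ms; delays alternate between 30 ms and 40 ms,
# mirroring the example above.
arrival_times = [30, 70, 90, 130]    # ms (send times were 0, 30, 60, 90)

# Jitter shows up as variation in the inter-arrival gaps.
gaps = [arrival_times[i + 1] - arrival_times[i] for i in range(len(arrival_times) - 1)]
print(gaps)   # [40, 20, 40] -> uneven gaps, so playback quality is uneven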

Components

A data communications system has five components (see Figure 1.1).

Figure 1.1 Five components of data communication

1. Message. The message is the information (data) to be communicated. Popular

forms of information include text, numbers, pictures, audio, and video.

2. Sender. The sender is the device that sends the data message. It can be a computer,

workstation, telephone handset, video camera, and so on.


3. Receiver. The receiver is the device that receives the message. It can be a computer,

workstation, telephone handset, television, and so on.

4. Transmission medium. The transmission medium is the physical path by which

a message travels from sender to receiver. Some examples of transmission media

include twisted-pair wire, coaxial cable, fiber-optic cable, and radio waves.



5. Protocol. A protocol is a set of rules that govern data communications. It represents

an agreement between the communicating devices. Without a protocol, two

devices may be connected but not communicating, just as a person speaking French

cannot be understood by a person who speaks only Japanese.

Data Representation

Information today comes in different forms such as text, numbers, images, audio, and

video.

Text

In data communications, text is represented as a bit pattern, a sequence of bits (0s or

1s). Different sets of bit patterns have been designed to represent text symbols. Each set

is called a code, and the process of representing symbols is called coding. Today, the

prevalent coding system is called Unicode, which uses 32 bits to represent a symbol or

character used in any language in the world. The American Standard Code for Information

Interchange (ASCII), developed some decades ago in the United States, now

constitutes the first 127 characters in Unicode and is also referred to as Basic Latin.

Appendix A includes part of the Unicode.
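As a small illustrative sketch (not part of the text), the Python snippet below shows how a text symbol maps to a Unicode code point and how that code point can be written out as a bit pattern of the 32-bit width described above.

# ord() returns the Unicode code point of a character; format() writes that
# code point as a 32-bit pattern, matching the description above.
for ch in ["A", "é", "あ"]:
    code_point = ord(ch)                # integer code point
    bits = format(code_point, "032b")   # shown as a 32-bit pattern
    print(ch, hex(code_point), bits)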

Numbers

Numbers are also represented by bit patterns. However, a code such as ASCII is not used

to represent numbers; the number is directly converted to a binary number to simplify

mathematical operations. Appendix B discusses several different numbering systems.

Images

Images are also represented by bit patterns. In its simplest form, an image is composed

of a matrix of pixels (picture elements), where each pixel is a small dot. The size of the

pixel depends on the resolution. For example, an image can be divided into 1000 pixels

or 10,000 pixels. In the second case, there is a better representation of the image (better

resolution), but more memory is needed to store the image.

After an image is divided into pixels, each pixel is assigned a bit pattern. The size

and the value of the pattern depend on the image. For an image made of only black-and-white

dots (e.g., a chessboard), a 1-bit pattern is enough to represent a pixel.

If an image is not made of pure white and pure black pixels, you can increase the

size of the bit pattern to include gray scale. For example, to show four levels of gray

scale, you can use 2-bit patterns. A black pixel can be represented by 00, a dark gray

pixel by 01, a light gray pixel by 10, and a white pixel by 11.

There are several methods to represent color images. One method is called RGB,

so called because each color is made of a combination of three primary colors: red,

green, and blue. The intensity of each color is measured, and a bit pattern is assigned to

it. Another method is called YCM, in which a color is made of a combination of three

other primary colors: yellow, cyan, and magenta.
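As a hedged illustration (the 2 x 2 image below is hypothetical), the following Python sketch encodes a tiny four-level gray-scale image with the 2-bit patterns just described.

# 2-bit gray-scale patterns from the text:
# 00 = black, 01 = dark gray, 10 = light gray, 11 = white.
GRAY_CODES = {"black": "00", "dark gray": "01", "light gray": "10", "white": "11"}

image = [["black", "white"],
         ["light gray", "dark gray"]]   # a hypothetical 2 x 2 image

bit_pattern = "".join(GRAY_CODES[pixel] for row in image for pixel in row)
print(bit_pattern)   # 00111001 -> four pixels, 2 bits each

The same idea extends to RGB: each pixel carries three measured intensities (red, green, and blue) instead of one gray level, so its bit pattern is three times as long for the same number of bits per color.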

Audio

Audio refers to the recording or broadcasting of sound or music. Audio is by nature

different from text, numbers, or images. It is continuous, not discrete. Even when we



use a microphone to change voice or music to an electric signal, we create a continuous

signal. In Chapters 4 and 5, we learn how to change sound or music to a digital or an

analog signal.

Video

Video refers to the recording or broadcasting of a picture or movie. Video can either be

produced as a continuous entity (e.g., by a TV camera), or it can be a combination of

images, each a discrete entity, arranged to convey the idea of motion. Again we can

change video to a digital or an analog signal, as we will see in Chapters 4 and 5.

Data Flow

Communication between two devices can be simplex, half-duplex, or full-duplex as

shown in Figure 1.2.

Figure 1.2 Data flow (simplex, half-duplex, and full-duplex)

Simplex

In simplex mode, the communication is unidirectional, as on a one-way street. Only one

of the two devices on a link can transmit; the other can only receive (see Figure 1.2a).

Keyboards and traditional monitors are examples of simplex devices. The keyboard

can only introduce input; the monitor can only accept output. The simplex mode

can use the entire capacity of the channel to send data in one direction.

Half-Duplex

In half-duplex mode, each station can both transmit and receive, but not at the same time.

When one device is sending, the other can only receive, and vice versa (see Figure 1.2b).



The half-duplex mode is like a one-lane road with traffic allowed in both directions.

When cars are traveling in one direction, cars going the other way must wait. In a

half-duplex transmission, the entire capacity of a channel is taken over by whichever of

the two devices is transmitting at the time. Walkie-talkies and CB (citizens band) radios

are both half-duplex systems.

The half-duplex mode is used in cases where there is no need for communication

in both directions at the same time; the entire capacity of the channel can be utilized for

each direction.

Full-Duplex

In full-duplex mode (also called duplex), both stations can transmit and receive simultaneously

(see Figure 1.2c).

The full-duplex mode is like a two-way street with traffic flowing in both directions

at the same time. In full-duplex mode, signals going in one direction share the

capacity of the link with signals going in the other direction. This sharing can occur in

two ways: Either the link must contain two physically separate transmission paths, one

for sending and the other for receiving; or the capacity of the channel is divided

between signals traveling in both directions.

One common example of full-duplex communication is the telephone network.

When two people are communicating by a telephone line, both can talk and listen at the

same time.

The full-duplex mode is used when communication in both directions is required

all the time. The capacity of the channel, however, must be divided between the two

directions.
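As a rough sketch (not from the text), the three modes can be summarized in Python by which direction or directions the link may carry at a given instant.

# A simplified model of the three data-flow modes; the half-duplex rule of
# alternating by instant is only an illustration of turn-taking.
def allowed_directions(mode, instant):
    if mode == "simplex":
        return {"A->B"}                                     # one direction only
    if mode == "half-duplex":
        return {"A->B"} if instant % 2 == 0 else {"B->A"}   # one direction at a time
    if mode == "full-duplex":
        return {"A->B", "B->A"}                             # both directions at once
    raise ValueError(mode)

for mode in ("simplex", "half-duplex", "full-duplex"):
    print(mode, [allowed_directions(mode, t) for t in (0, 1)])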

1.2 NETWORKS

A network is a set of devices (often referred to as nodes) connected by communication

links. A node can be a computer, printer, or any other device capable of sending and/or

receiving data generated by other nodes on the network.

Distributed Processing

Most networks use distributed processing, in which a task is divided among multiple

computers. Instead of one single large machine being responsible for all aspects of a

process, separate computers (usually a personal computer or workstation) handle a

subset.

Network Criteria

A network must be able to meet a certain number of criteria. The most important of

these are performance, reliability, and security.

Performance

Performance can be measured in many ways, including transit time and response time.

Transit time is the amount of time required for a message to travel from one device to



another. Response time is the elapsed time between an inquiry and a response. The performance

of a network depends on a number of factors, including the number of users,

the type of transmission medium, the capabilities of the connected hardware, and the

efficiency of the software.

Performance is often evaluated by two networking metrics: throughput and delay.

We often need more throughput and less delay. However, these two criteria are often

contradictory. If we try to send more data to the network, we may increase throughput

but we increase the delay because of traffic congestion in the network.

Reliability

In addition to accuracy of delivery, network reliability is measured by the frequency of

failure, the time it takes a link to recover from a failure, and the network's robustness in

a catastrophe.

Security

Network security issues include protecting data from unauthorized access, protecting

data from damage and corruption, and implementing policies and procedures for

recovery from breaches and data losses.

Physical Structures

Before discussing networks, we need to define some network attributes.

Type of Connection

A network is two or more devices connected through links. A link is a communications

pathway that transfers data from one device to another. For visualization purposes, it is

simplest to imagine any link as a line drawn between two points. For communication to

occur, two devices must be connected in some way to the same link at the same time.

There are two possible types of connections: point-to-point and multipoint.

Point-to-Point A point-to-point connection provides a dedicated link between two

devices. The entire capacity of the link is reserved for transmission between those two

devices. Most point-to-point connections use an actual length of wire or cable to connect

the two ends, but other options, such as microwave or satellite links, are also possible

(see Figure 1.3a). When you change television channels by infrared remote control,

you are establishing a point-to-point connection between the remote control and the

television's control system.

Multipoint A multipoint (also called multidrop) connection is one in which more

than two specific devices share a single link (see Figure 1.3b).

In a multipoint environment, the capacity of the channel is shared, either spatially

or temporally. If several devices can use the link simultaneously, it is a spatially shared

connection. If users must take turns, it is a timeshared connection.

Physical Topology

The term physical topology refers to the way in which a network is laid out physically. Two or more devices connect to a link; two or more links form a topology. The topology



Figure 1.3
Types of connections: (a) point-to-point and (b) multipoint

of a network is the geometric representation of the relationship of all the links and

linking devices (usually called nodes) to one another. There are four basic topologies

possible: mesh, star, bus, and ring (see Figure 1.4).

Figure 1.4

Categories of topology

Mesh In a mesh topology, every device has a dedicated point-to-point link to every

other device. The term dedicated means that the link carries traffic only between the

two devices it connects. To find the number of physical links in a fully connected mesh

network with n nodes, we first consider that each node must be connected to every

other node. Node 1 must be connected to n - 1 nodes, node 2 must be connected to n - 1

nodes, and finally node n must be connected to n - 1 nodes. We need n(n - 1) physical

links. However, if each physical link allows communication in both directions (duplex

mode), we can divide the number of links by 2. In other words, we can say that in a

mesh topology, we need

n(n - 1) / 2

duplex-mode links.

To accommodate that many links, every device on the network must have n - 1

input/output (I/O) ports (see Figure 1.5) to be connected to the other n - 1 stations.
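
The following short sketch (illustrative only; the function name is ours) computes the two quantities just derived, the number of duplex-mode links and the number of I/O ports per device, for a fully connected mesh.

# Sketch of the mesh-topology counts from the text: a fully connected mesh
# with n nodes needs n(n - 1)/2 duplex links, and each device needs n - 1 I/O ports.

def mesh_counts(n: int) -> tuple[int, int]:
    """Return (number of duplex links, I/O ports per device) for n nodes."""
    links = n * (n - 1) // 2
    ports_per_device = n - 1
    return links, ports_per_device

# For the five-device mesh of Figure 1.5: 10 links, 4 ports per device.
print(mesh_counts(5))   # (10, 4)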



Figure 1.5

A fully connected mesh topology (five devices)

A mesh offers several advantages over other network topologies. First, the use of

dedicated links guarantees that each connection can carry its own data load, thus eliminating

the traffic problems that can occur when links must be shared by multiple devices.

Second, a mesh topology is robust. If one link becomes unusable, it does not incapacitate

the entire system. Third, there is the advantage of privacy or security. When every

message travels along a dedicated line, only the intended recipient sees it. Physical

boundaries prevent other users from gaining access to messages. Finally, point-to-point

links make fault identification and fault isolation easy. Traffic can be routed to avoid

links with suspected problems. This facility enables the network manager to discover the

precise location of the fault and aids in finding its cause and solution.

The main disadvantages of a mesh are related to the amount of cabling and the

number of I/O ports required. First, because every device must be connected to every

other device, installation and reconnection are difficult. Second, the sheer bulk of the

wiring can be greater than the available space (in walls, ceilings, or floors) can accommodate.

Finally, the hardware required to connect each link (I/O ports and cable) can be

prohibitively expensive. For these reasons a mesh topology is usually implemented in a

limited fashion, for example, as a backbone connecting the main computers of a hybrid

network that can include several other topologies.

One practical example of a mesh topology is the connection of telephone regional

offices in which each regional office needs to be connected to every other regional office.

Star Topology In a star topology, each device has a dedicated point-to-point link

only to a central controller, usually called a hub. The devices are not directly linked to

one another. Unlike a mesh topology, a star topology does not allow direct traffic

between devices. The controller acts as an exchange: If one device wants to send data to

another, it sends the data to the controller, which then relays the data to the other connected

device (see Figure 1.6).

A star topology is less expensive than a mesh topology. In a star, each device needs

only one link and one I/O port to connect it to any number of others. This factor also

makes it easy to install and reconfigure. Far less cabling needs to be housed, and additions,

moves, and deletions involve only one connection: between that device and the hub.

Other advantages include robustness. If one link fails, only that link is affected. All

other links remain active. This factor also lends itself to easy fault identification and



Figure 1.6
A star topology connecting four stations to a central hub

fault isolation. As long as the hub is working, it can be used to monitor link problems

and bypass defective links.

One big disadvantage of a star topology is the dependency of the whole topology

on one single point, the hub. If the hub goes down, the whole system is dead.

Although a star requires far less cable than a mesh, each node must be linked to a

central hub. For this reason, often more cabling is required in a star than in some other

topologies (such as ring or bus).

The star topology is used in local-area networks (LANs), as we will see in Chapter 13.

High-speed LANs often use a star topology with a central hub.

Bus Topology The preceding examples all describe point-to-point connections. A bus

topology, on the other hand, is multipoint. One long cable acts as a backbone to link all

the devices in a network (see Figure 1.7).

Figure 1.7
A bus topology connecting three stations (each station attaches to the backbone cable through a drop line and a tap; the cable is terminated at both cable ends)

Nodes are connected to the bus cable by drop lines and taps. A drop line is a connection

running between the device and the main cable. A tap is a connector that either

splices into the main cable or punctures the sheathing of a cable to create a contact with

the metallic core. As a signal travels along the backbone, some of its energy is transformed

into heat. Therefore, it becomes weaker and weaker as it travels farther and farther. For

this reason there is a limit on the number of taps a bus can support and on the distance

between those taps.

Advantages of a bus topology include ease of installation. Backbone cable can be

laid along the most efficient path, then connected to the nodes by drop lines of various

lengths. In this way, a bus uses less cabling than mesh or star topologies. In a star, for

example, four network devices in the same room require four lengths of cable reaching



all the way to the hub. In a bus, this redundancy is eliminated. Only the backbone cable

stretches through the entire facility. Each drop line has to reach only as far as the nearest

point on the backbone.

Disadvantages include difficult reconnection and fault isolation. A bus is usually

designed to be optimally efficient at installation. It can therefore be difficult to add new

devices. Signal reflection at the taps can cause degradation in quality. This degradation

can be controlled by limiting the number and spacing of devices connected to a given

length of cable. Adding new devices may therefore require modification or replacement

of the backbone.

In addition, a fault or break in the bus cable stops all transmission, even between

devices on the same side of the problem. The damaged area reflects signals back in the

direction of origin, creating noise in both directions.

Bus topology was one of the first topologies used in the design of early local-area

networks. Ethernet LANs can use a bus topology, but they are less popular now for

reasons we will discuss in Chapter 13.

Ring Topology In a ring topology, each device has a dedicated point-to-point connection

with only the two devices on either side of it. A signal is passed along the ring

in one direction, from device to device, until it reaches its destination. Each device in

the ring incorporates a repeater. When a device receives a signal intended for another

device, its repeater regenerates the bits and passes them along (see Figure 1.8).

Figure 1.8
A ring topology connecting six stations (each station attaches to the ring through a repeater)

A ring is relatively easy to install and reconfigure. Each device is linked to only its

immediate neighbors (either physically or logically). To add or delete a device requires

changing only two connections. The only constraints are media and traffic considerations

(maximum ring length and number of devices). In addition, fault isolation is simplified.

Generally in a ring, a signal is circulating at all times. If one device does not

receive a signal within a specified period, it can issue an alarm. The alarm alerts the

network operator to the problem and its location.

However, unidirectional traffic can be a disadvantage. In a simple ring, a break in

the ring (such as a disabled station) can disable the entire network. This weakness can

be solved by using a dual ring or a switch capable of closing off the break.



Ring topology was prevalent when IBM introduced its local-area network Token

Ring. Today, the need for higher-speed LANs has made this topology less popular.

Hybrid Topology A network can be hybrid. For example, we can have a main star topology

with each branch connecting several stations in a bus topology as shown in Figure 1.9.

Figure 1.9
A hybrid topology: a star backbone (with a central hub) connecting three bus networks

Network Models

Computer networks are created by different entities. Standards are needed so that these

heterogeneous networks can communicate with one another. The two best-known standards

are the OSI model and the Internet model. In Chapter 2 we discuss these two

models. The OSI (Open Systems Interconnection) model defines a seven-layer network;

the Internet model defines a five-layer network. This book is based on the Internet

model with occasional references to the OSI model.

Categories of Networks

Today when we speak of networks, we are generally referring to two primary categories:

local-area networks and wide-area networks. The category into which a network

falls is determined by its size. A LAN normally covers an area less than 2 mi; a WAN can

be worldwide. Networks of a size in between are normally referred to as metropolitan-area

networks and span tens of miles.

Local Area Network

A local area network (LAN) is usually privately owned and links the devices in a single

office, building, or campus (see Figure 1.10). Depending on the needs of an organization

and the type of technology used, a LAN can be as simple as two PCs and a printer in

someone's home office; or it can extend throughout a company and include audio and

video peripherals. Currently, LAN size is limited to a few kilometers.



Figure 1.10
An isolated LAN connecting 12 computers to a hub in a closet

LANs are designed to allow resources to be shared between personal computers or

workstations. The resources to be shared can include hardware (e.g., a printer), software

(e.g., an application program), or data. A common example of a LAN, found in many

business environments, links a workgroup of task-related computers, for example, engineering

workstations or accounting PCs. One of the computers may be given a large-capacity

disk drive and may become a server to clients. Software can be stored on this

central server and used as needed by the whole group. In this example, the size of the

LAN may be determined by licensing restrictions on the number of users per copy of software,

or by restrictions on the number of users licensed to access the operating system.

In addition to size, LANs are distinguished from other types of networks by their

transmission media and topology. In general, a given LAN will use only one type of

transmission medium. The most common LAN topologies are bus, ring, and star.

Early LANs had data rates in the 4 to 16 megabits per second (Mbps) range. Today,

however, speeds are normally 100 or 1000 Mbps. LANs are discussed at length in

Chapters 13, 14, and 15.

Wireless LANs are the newest evolution in LAN technology. We discuss wireless

LANs in detail in Chapter 14.

Wide Area Network

A wide area network (WAN) provides long-distance transmission of data, image, audio,

and video information over large geographic areas that may comprise a country, a continent,

or even the whole world. In Chapters 17 and 18 we discuss wide-area networks in

greater detail. A WAN can be as complex as the backbones that connect the Internet or as

simple as a dial-up line that connects a home computer to the Internet. We normally refer

to the first as a switched WAN and to the second as a point-to-point WAN (Figure 1.11).

The switched WAN connects the end systems, which usually comprise a router (internetworking

connecting device) that connects to another LAN or WAN. The point-to-point

WAN is normally a line leased from a telephone or cable TV provider that connects a

home computer or a small LAN to an Internet service provider (ISP). This type of WAN

is often used to provide Internet access.



Figure 1.11
WANs: (a) a switched WAN and (b) a point-to-point WAN connecting a home computer through modems to an ISP

An early example of a switched WAN is X.25, a network designed to provide connectivity

between end users. As we will see in Chapter 18, X.25 is being gradually

replaced by a high-speed, more efficient network called Frame Relay. A good example

of a switched WAN is the asynchronous transfer mode (ATM) network, which is a network

with fixed-size data unit packets called cells. We will discuss ATM in Chapter 18.

Another example of a WAN is the wireless WAN, which is becoming more and more popular.

We discuss wireless WANs and their evolution in Chapter 16.

Metropolitan Area Networks

A metropolitan area network (MAN) is a network with a size between a LAN and a

WAN. It normally covers the area inside a town or a city. It is designed for customers

who need high-speed connectivity, normally to the Internet, and have endpoints spread over a city or part of a city. A good example of a MAN is the part of the telephone

company network that can provide a high-speed DSL line to the customer. Another

example is the cable TV network that originally was designed for cable TV, but today

can also be used for high-speed data connection to the Internet. We discuss DSL lines

and cable TV networks in Chapter 9.

Interconnection of Networks: Internetwork

Today, it is very rare to see a LAN, a MAN, or a WAN in isolation; they are connected

to one another. When two or more networks are connected, they become an

internetwork, or internet.

As an example, assume that an organization has two offices, one on the east coast

and the other on the west coast. The established office on the west coast has a bus topology

LAN; the newly opened office on the east coast has a star topology LAN. The president of

the company lives somewhere in the middle and needs to have control over the company



from her home. To create a backbone WAN for connecting these three entities (two

LANs and the president's computer), a switched WAN (operated by a service provider

such as a telecom company) has been leased. To connect the LANs to this switched

WAN, however, three point-to-point WANs are required. These point-to-point WANs

can be a high-speed DSL line offered by a telephone company or a cable modem line

offered by a cable TV provider as shown in Figure 1.12.

Figure 1.12
A heterogeneous network made of four WANs and two LANs (the president's computer and the two office LANs each connect through a modem and a point-to-point WAN to a central switched WAN)

1.3 THE INTERNET

The Internet has revolutionized many aspects of our daily lives. It has affected the way

we do business as well as the way we spend our leisure time. Count the ways you've

used the Internet recently. Perhaps you've sent electronic mail (e-mail) to a business

associate, paid a utility bill, read a newspaper from a distant city, or looked up a local

movie schedule-all by using the Internet. Or maybe you researched a medical topic,

booked a hotel reservation, chatted with a fellow Trekkie, or comparison-shopped for a

car. The Internet is a communication system that has brought a wealth of information to

our fingertips and organized it for our use.

The Internet is a structured, organized system. We begin with a brief history of the

Internet. We follow with a description of the Internet today.



A Brief History

A network is a group of connected communicating devices such as computers and

printers. An internet (note the lowercase letter i) is two or more networks that can communicate

with each other. The most notable internet is called the Internet (uppercase

letter I), a collaboration of hundreds of thousands of interconnected networks.

Private individuals as well as various organizations such as government agencies,

schools, research facilities, corporations, and libraries in more than 100 countries

use the Internet. Millions of people are users. Yet this extraordinary communication system

only came into being in 1969.

In the mid-1960s, mainframe computers in research organizations were standalone

devices. Computers from different manufacturers were unable to communicate

with one another. The Advanced Research Projects Agency (ARPA) in the Department

of Defense (DoD) was interested in finding a way to connect computers so that

the researchers they funded could share their findings, thereby reducing costs and eliminating

duplication of effort.

In 1967, at an Association for Computing Machinery (ACM) meeting, ARPA presented

its ideas for ARPANET, a small network of connected computers. The idea was

that each host computer (not necessarily from the same manufacturer) would be

attached to a specialized computer, called an interface message processor (IMP). The IMPs, in turn, would be connected to one another. Each IMP had to be able to communicate

with other IMPs as well as with its own attached host.

By 1969, ARPANET was a reality. Four nodes, at the University of California at

Los Angeles (UCLA), the University of California at Santa Barbara (UCSB), Stanford

Research Institute (SRI), and the University of Utah, were connected via the IMPs to

form a network. Software called the Network Control Protocol (NCP) provided communication

between the hosts.

In 1972, Vint Cerf and Bob Kahn, both of whom were part of the core ARPANET

group, collaborated on what they called the Internetting Project. Cerf and Kahn's landmark

1973 paper outlined the protocols to achieve end-to-end delivery of packets. This

paper on Transmission Control Protocol (TCP) included concepts such as encapsulation,

the datagram, and the functions of a gateway.

Shortly thereafter, authorities made a decision to split TCP into two protocols:

Transmission Control Protocol (TCP) and Internetworking Protocol (IP). IP would

handle datagram routing while TCP would be responsible for higher-level functions

such as segmentation, reassembly, and error detection. The internetworking protocol

became known as TCP/IP.

The Internet Today

The Internet has come a long way since the 1960s. The Internet today is not a simple

hierarchical structure. It is made up of many wide- and local-area networks joined by

connecting devices and switching stations. It is difficult to give an accurate representation

of the Internet because it is continually changing-new networks are being

added, existing networks are adding addresses, and networks of defunct companies are

being removed. Today most end users who want Internet connection use the services of

Internet service providers (ISPs). There are international service providers, national



service providers, regional service providers, and local service providers. The Internet

today is run by private companies, not the government. Figure 1.13 shows a conceptual

(not geographic) view of the Internet.

Figure 1.13
Hierarchical organization of the Internet: (a) structure of a national ISP and (b) interconnection of national ISPs

International Internet Service Providers

At the top of the hierarchy are the international service providers that connect nations

together.

National Internet Service Providers

The national Internet service providers are backbone networks created and maintained

by specialized companies. There are many national ISPs operating in North

America; some of the most well known are SprintLink, PSINet, UUNet Technology,

AGIS, and internet MCI. To provide connectivity between the end users, these backbone

networks are connected by complex switching stations (normally run by a third

party) called network access points (NAPs). Some national ISP networks are also

connected to one another by private switching stations called peering points. These

normally operate at a high data rate (up to 600 Mbps).



Regional Internet Service Providers

Regional internet service providers or regional ISPs are smaller ISPs that are connected

to one or more national ISPs. They are at the third level of the hierarchy with a smaller

data rate.

Local Internet Service Providers

Local Internet service providers provide direct service to the end users. The local

ISPs can be connected to regional ISPs or directly to national ISPs. Most end users are

connected to the local ISPs. Note that in this sense, a local ISP can be a company that

just provides Internet services, a corporation with a network that supplies services to its

own employees, or a nonprofit organization, such as a college or a university, that runs

its own network. Each of these local ISPs can be connected to a regional or national

service provider.

1.4 PROTOCOLS AND STANDARDS

In this section, we define two widely used terms: protocols and standards. First, we

define protocol, which is synonymous with rule. Then we discuss standards, which are

agreed-upon rules.

Protocols

In computer networks, communication occurs between entities in different systems. An

entity is anything capable of sending or receiving information. However, two entities cannot

simply send bit streams to each other and expect to be understood. For communication

to occur, the entities must agree on a protocol. A protocol is a set of rules that govern data

communications. A protocol defines what is communicated, how it is communicated, and

when it is communicated. The key elements of a protocol are syntax, semantics, and timing.

o Syntax. The term syntax refers to the structure or format of the data, meaning the

order in which they are presented. For example, a simple protocol might expect the

first 8 bits of data to be the address of the sender, the second 8 bits to be the address

of the receiver, and the rest of the stream to be the message itself (a minimal parsing sketch of this format follows this list).

o Semantics. The word semantics refers to the meaning of each section of bits.

How is a particular pattern to be interpreted, and what action is to be taken based

on that interpretation? For example, does an address identify the route to be taken

or the final destination of the message?

o Timing. The term timing refers to two characteristics: when data should be sent

and how fast they can be sent. For example, if a sender produces data at 100 Mbps

but the receiver can process data at only 1 Mbps, the transmission will overload the

receiver and some data will be lost.
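
As a concrete illustration of syntax and semantics, here is a minimal sketch of the hypothetical frame format described in the syntax item above; the function names and error checks are our own additions, and timing is not modeled.

# A minimal sketch of the hypothetical protocol described under "Syntax":
# the first 8 bits carry the sender address, the next 8 bits the receiver
# address, and the remaining bytes carry the message. Names are illustrative.

def build_frame(sender: int, receiver: int, message: bytes) -> bytes:
    """Encode a frame according to the simple syntax above."""
    if not (0 <= sender <= 255 and 0 <= receiver <= 255):
        raise ValueError("addresses must fit in 8 bits")
    return bytes([sender, receiver]) + message

def parse_frame(frame: bytes) -> tuple[int, int, bytes]:
    """Decode a frame: the semantics say byte 0 is the sender, byte 1 the receiver."""
    if len(frame) < 2:
        raise ValueError("frame too short")
    return frame[0], frame[1], frame[2:]

frame = build_frame(sender=10, receiver=20, message=b"hello")
print(parse_frame(frame))   # (10, 20, b'hello')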

Standards

Standards are essential in creating and maintaining an open and competitive market for

equipment manufacturers and in guaranteeing national and international interoperability

of data and telecommunications technology and processes. Standards provide guidelines



to manufacturers, vendors, government agencies, and other service providers to ensure

the kind of interconnectivity necessary in today's marketplace and in international communications.

Data communication standards fall into two categories: de facto (meaning

"by fact" or "by convention") and de jure (meaning "by law" or "by regulation").

o De facto. Standards that have not been approved by an organized body but have

been adopted as standards through widespread use are de facto standards. De facto

standards are often established originally by manufacturers who seek to define the

functionality of a new product or technology.

o De jure. Those standards that have been legislated by an officially recognized body

are de jure standards.

Standards Organizations

Standards are developed through the cooperation of standards creation committees,

forums, and government regulatory agencies.

Standards Creation Committees

While many organizations are dedicated to the establishment of standards, data telecommunications

in North America rely primarily on those published by the following:

o International Organization for Standardization (ISO). The ISO is a multinational

body whose membership is drawn mainly from the standards creation committees

of various governments throughout the world. The ISO is active in developing

cooperation in the realms of scientific, technological, and economic activity.

o International Telecommunication Union-Telecommunication Standards

Sector (ITU-T). By the early 1970s, a number of countries were defining national

standards for telecommunications, but there was still little international compatibility.

The United Nations responded by forming, as part of its International

Telecommunication Union (ITU), a committee, the Consultative Committee

for International Telegraphy and Telephony (CCITT). This committee was

devoted to the research and establishment of standards for telecommunications in

general and for phone and data systems in particular. On March 1, 1993, the name

of this committee was changed to the International Telecommunication Union-Telecommunication Standards Sector (ITU-T).

o American National Standards Institute (ANSI). Despite its name, the American

National Standards Institute is a completely private, nonprofit corporation not affiliated

with the U.S. federal government. However, all ANSI activities are undertaken

with the welfare of the United States and its citizens occupying primary importance.

o Institute of Electrical and Electronics Engineers (IEEE). The Institute of

Electrical and Electronics Engineers is the largest professional engineering society in

the world. International in scope, it aims to advance theory, creativity, and product

quality in the fields of electrical engineering, electronics, and radio as well as in all

related branches of engineering. As one of its goals, the IEEE oversees the development

and adoption of international standards for computing and communications.

o Electronic Industries Association (EIA). Aligned with ANSI, the Electronic

Industries Association is a nonprofit organization devoted to the promotion of



electronics manufacturing concerns. Its activities include public awareness education and lobbying efforts in addition to standards development. In the field of information technology, the EIA has made significant contributions by defining physical connection interfaces and electronic signaling specifications for data communication.

Forums

Telecommunications technology development is moving faster than the ability of standards

committees to ratify standards. Standards committees are procedural bodies and

by nature slow-moving. To accommodate the need for working models and agreements

and to facilitate the standardization process, many special-interest groups have developed

forums made up of representatives from interested corporations. The forums

work with universities and users to test, evaluate, and standardize new technologies. By

concentrating their efforts on a particular technology, the forums are able to speed

acceptance and use of those technologies in the telecommunications community. The

forums present their conclusions to the standards bodies.

Regulatory Agencies

All communications technology is subject to regulation by government agencies such

as the Federal Communications Commission (FCC) in the United States. The purpose

of these agencies is to protect the public interest by regulating radio, television,

and wire/cable communications. The FCC has authority over interstate and international

commerce as it relates to communications.

Internet Standards

An Internet standard is a thoroughly tested specification that is useful to and adhered

to by those who work with the Internet. It is a formalized regulation that must be followed.

There is a strict procedure by which a specification attains Internet standard

status. A specification begins as an Internet draft. An Internet draft is a working document

(a work in progress) with no official status and a 6-month lifetime. Upon recommendation

from the Internet authorities, a draft may be published as a Request for

Comment (RFC). Each RFC is edited, assigned a number, and made available to all

interested parties. RFCs go through maturity levels and are categorized according to

their requirement level.

1.5 RECOMMENDED READING

For more details about subjects discussed in this chapter, we recommend the following

books and sites. The items enclosed in brackets [...] refer to the reference list at the end

of the book.

Books

The introductory materials covered in this chapter can be found in [Sta04] and [PD03].

[Tan03] discusses standardization in Section 1.6.



Sites

The following sites are related to topics discussed in this chapter.

o www.acm.org/sigcomm/sos.html This site gives the status of various networking

standards.

o www.ietf.org/ The Internet Engineering Task Force (IETF) home page.

RFCs

The following site lists all RFCs, including those related to IP and TCP. In future chapters

we cite the RFCs pertinent to the chapter material.

o www.ietf.org/rfc.html

1.6 KEY TERMS

Advanced Research Projects Agency (ARPA)

American National Standards Institute (ANSI)

American Standard Code for Information Interchange (ASCII)

ARPANET

audio

backbone

Basic Latin

bus topology

code

Consultative Committee for International Telegraphy and Telephony (CCITT)

data

data communications

de facto standards

de jure standards

delay

distributed processing

Electronic Industries Association (EIA)

entity

Federal Communications Commission (FCC)

forum

full-duplex mode, or duplex

half-duplex mode

hub

image

Institute of Electrical and Electronics Engineers (IEEE)

International Organization for Standardization (ISO)

International Telecommunication Union-Telecommunication Standards Sector (ITU-T)

Internet

Internet draft

Internet service provider (ISP)

Internet standard

internetwork or internet

local area network (LAN)

local Internet service providers

mesh topology

message

metropolitan area network (MAN)

multipoint or multidrop connection

national Internet service provider

network

network access points (NAPs)

node

performance

physical topology

point-to-point connection

protocol

receiver

regional ISP

reliability

Request for Comment (RFC)

RGB

ring topology

security

semantics

sender

simplex mode

star topology

syntax

telecommunication

throughput

timing

Transmission Control Protocol/Internetworking Protocol (TCP/IP)

transmission medium

Unicode

video

wide area network (WAN)

YCM

1.7 SUMMARY

o Data communications are the transfer of data from one device to another via some

form of transmission medium.

o A data communications system must transmit data to the correct destination in an

accurate and timely manner.

o The five components that make up a data communications system are the message,

sender, receiver, medium, and protocol.

o Text, numbers, images, audio, and video are different forms of information.

o Data flow between two devices can occur in one of three ways: simplex, half-duplex,

or full-duplex.

o A network is a set of communication devices connected by media links.

o In a point-to-point connection, two and only two devices are connected by a

dedicated link. In a multipoint connection, three or more devices share a link.

o Topology refers to the physical or logical arrangement of a network. Devices may

be arranged in a mesh, star, bus, or ring topology.

o A network can be categorized as a local area network or a wide area network.

o A LAN is a data communication system within a building, plant, or campus, or

between nearby buildings.

o A WAN is a data communication system spanning states, countries, or the whole

world.

o An internet is a network of networks.

o The Internet is a collection of many separate networks.

o There are local, regional, national, and international Internet service providers.

o A protocol is a set of rules that govern data communication; the key elements of

a protocol are syntax, semantics, and timing.



o Standards are necessary to ensure that products from different manufacturers can

work together as expected.

o The ISO, ITU-T, ANSI, IEEE, and EIA are some of the organizations involved

in standards creation.

o Forums are special-interest groups that quickly evaluate and standardize new

technologies.

o A Request for Comment is an idea or concept that is a precursor to an Internet

standard.

1.8 PRACTICE SET

Review Questions

1. Identify the five components of a data communications system.

2. What are the advantages of distributed processing?

3. What are the three criteria necessary for an effective and efficient network?

4. What are the advantages of a multipoint connection over a point-to-point

connection?

5. What are the two types of line configuration?

6. Categorize the four basic topologies in terms of line configuration.

7. What is the difference between half-duplex and full-duplex transmission modes?

8. Name the four basic network topologies, and cite an advantage of each type.

9. For n devices in a network, what is the number of cable links required for a mesh,

ring, bus, and star topology?

10. What are some of the factors that determine whether a communication system is a

LAN or WAN?

11. What is an internet? What is the Internet?

12. Why are protocols needed?

13. Why are standards needed?

Exercises

14. What is the maximum number of characters or symbols that can be represented by

Unicode?

15. A color image uses 16 bits to represent a pixel. What is the maximum number of

different colors that can be represented?

16. Assume six devices are arranged in a mesh topology. How many cables are needed?

How many ports are needed for each device?

17. For each of the following four networks, discuss the consequences if a connection fails.

a. Five devices arranged in a mesh topology

b. Five devices arranged in a star topology (not counting the hub)

c. Five devices arranged in a bus topology

d. Five devices arranged in a ring topology



18. You have two computers connected by an Ethernet hub at home. Is this a LAN, a

MAN, or a WAN? Explain your reason.

19. In the ring topology in Figure 1.8, what happens if one of the stations is unplugged?

20. In the bus topology in Figure 1.7, what happens if one of the stations is unplugged?

21. Draw a hybrid topology with a star backbone and three ring networks.

22. Draw a hybrid topology with a ring backbone and two bus networks.

23. Performance is inversely related to delay. When you use the Internet, which of the

following applications are more sensitive to delay?

a. Sending an e-mail

b. Copying a file

c. Surfing the Internet

24. When a party makes a local telephone call to another party, is this a point-to-point

or multipoint connection? Explain your answer.

25. Compare the telephone network and the Internet. What are the similarities? What

are the differences?

Research Activities

26. Using the site www.cne.gmu.edu/modules/network/osi.html, discuss the OSI model.

27. Using the site www.ansi.org, discuss ANSI's activities.

28. Using the site www.ieee.org, discuss IEEE's activities.

29. Using the site www.ietf.org/, discuss the different types of RFCs.



CHAPTER 2

Network Models

A network is a combination of hardware and software that sends data from one location

to another. The hardware consists of the physical equipment that carries signals from

one point of the network to another. The software consists of instruction sets that make

possible the services that we expect from a network.

We can compare the task of networking to the task of solving a mathematics problem

with a computer. The fundamental job of solving the problem with a computer is done

by computer hardware. However, this is a very tedious task if only hardware is involved.

We would need switches for every memory location to store and manipulate data. The

task is much easier if software is available. At the highest level, a program can direct

the problem-solving process; the details of how this is done by the actual hardware can

be left to the layers of software that are called by the higher levels.

Compare this to a service provided by a computer network. For example, the task

of sending an e-mail from one point in the world to another can be broken into several

tasks, each performed by a separate software package. Each software package uses the

services of another software package. At the lowest layer, a signal, or a set of signals, is

sent from the source computer to the destination computer.

In this chapter, we give a general idea of the layers of a network and discuss the

functions of each. Detailed descriptions of these layers follow in later chapters.

2.1 LAYERED TASKS

We use the concept of layers in our daily life. As an example, let us consider two

friends who communicate through postal mail. The process of sending a letter to a

friend would be complex if there were no services available from the post office. Figure

2.1 shows the steps in this task.




Figure 2.1
Tasks involved in sending a letter (the sender and the receiver each perform higher-, middle-, and lower-layer tasks; the carrier transports the letter from the source to the destination)

Sender, Receiver, and Carrier

In Figure 2.1 we have a sender, a receiver, and a carrier that transports the letter. There

is a hierarchy of tasks.

At the Sender Site

Let us first describe, in order, the activities that take place at the sender site.

o Higher layer. The sender writes the letter, inserts the letter in an envelope, writes

the sender and receiver addresses, and drops the letter in a mailbox.

o Middle layer. The letter is picked up by a letter carrier and delivered to the post

office.

o Lower layer. The letter is sorted at the post office; a carrier transports the letter.

On the Way

The letter is then on its way to the recipient. On the way to the recipient's local post

office, the letter may actually go through a central office. In addition, it may be transported

by truck, train, airplane, boat, or a combination of these.

At the Receiver Site

o Lower layer. The carrier transports the letter to the post office.

o Middle layer. The letter is sorted and delivered to the recipient's mailbox.

o Higher layer. The receiver picks up the letter, opens the envelope, and reads it.



Hierarchy

According to our analysis, there are three different activities at the sender site and

another three activities at the receiver site. The task of transporting the letter between

the sender and the receiver is done by the carrier. Something that is not obvious

immediately is that the tasks must be done in the order given in the hierarchy. At the

sender site, the letter must be written and dropped in the mailbox before being picked

up by the letter carrier and delivered to the post office. At the receiver site, the letter

must be dropped in the recipient mailbox before being picked up and read by the

recipient.

Services

Each layer at the sending site uses the services of the layer immediately below it. The

sender at the higher layer uses the services of the middle layer. The middle layer uses

the services of the lower layer. The lower layer uses the services of the carrier.

The layered model that dominated data communications and networking literature

before 1990 was the Open Systems Interconnection (OSI) model. Everyone believed

that the OSI model would become the ultimate standard for data communications, but

this did not happen. The TCP/IP protocol suite became the dominant commercial architecture

because it was used and tested extensively in the Internet; the OSI model was

never fully implemented.

In this chapter, first we briefly discuss the OSI model, and then we concentrate on

TCP/IP as a protocol suite.

2.2 THE OSI MODEL

Established in 1947, the International Organization for Standardization (ISO) is a multinational

body dedicated to worldwide agreement on international standards. An ISO standard

that covers all aspects of network communications is the Open Systems Interconnection

model. It was first introduced in the late 1970s. An open system is a set of protocols that

allows any two different systems to communicate regardless of their underlying architecture.

The purpose of the OSI model is to show how to facilitate communication

between different systems without requiring changes to the logic of the underlying hardware

and software. The OSI model is not a protocol; it is a model for understanding and

designing a network architecture that is flexible, robust, and interoperable.

ISO is the organization. OSI is the model.

The OSI model is a layered framework for the design of network systems that

allows communication between all types of computer systems. It consists of seven separate

but related layers, each of which defines a part of the process of moving information

across a network (see Figure 2.2). An understanding of the fundamentals of the OSI

model provides a solid basis for exploring data communications.



Figure 2.2
Seven layers of the OSI model: 7 Application, 6 Presentation, 5 Session, 4 Transport, 3 Network, 2 Data link, 1 Physical

Layered Architecture

The OSI model is composed of seven ordered layers: physical (layer 1), data link (layer 2),

network (layer 3), transport (layer 4), session (layer 5), presentation (layer 6), and

application (layer 7). Figure 2.3 shows the layers involved when a message is sent from

device A to device B. As the message travels from A to B, it may pass through many

intermediate nodes. These intermediate nodes usually involve only the first three layers

of the OSI model.

In developing the model, the designers distilled the process of transmitting data to

its most fundamental elements. They identified which networking functions had related

uses and collected those functions into discrete groups that became the layers. Each

layer defines a family of functions distinct from those of the other layers. By defining

and localizing functionality in this fashion, the designers created an architecture that is

both comprehensive and flexible. Most importantly, the OSI model allows complete

interoperability between otherwise incompatible systems.

Within a single machine, each layer calls upon the services of the layer just below

it. Layer 3, for example, uses the services provided by layer 2 and provides services for

layer 4. Between machines, layer x on one machine communicates with layer x on

another machine. This communication is governed by an agreed-upon series of rules

and conventions called protocols. The processes on each machine that communicate at

a given layer are called peer-to-peer processes. Communication between machines is

therefore a peer-to-peer process using the protocols appropriate to a given layer.

Peer-to-Peer Processes

At the physical layer, communication is direct: In Figure 2.3, device A sends a stream

of bits to device B (through intermediate nodes). At the higher layers, however, communication

must move down through the layers on device A, over to device B, and then



Figure 2.3
The interaction between layers in the OSI model (devices A and B communicate through intermediate nodes; at each of the seven layers a peer-to-peer protocol operates between the two devices, with an interface between each pair of adjacent layers, and physical communication takes place at layer 1)

back up through the layers. Each layer in the sending device adds its own information

to the message it receives from the layer just above it and passes the whole package to

the layer just below it.

At layer 1 the entire package is converted to a form that can be transmitted to the

receiving device. At the receiving machine, the message is unwrapped layer by layer,

with each process receiving and removing the data meant for it. For example, layer 2

removes the data meant for it, then passes the rest to layer 3. Layer 3 then removes the

data meant for it and passes the rest to layer 4, and so on.

Interfaces Between Layers

The passing of the data and network information down through the layers of the sending

device and back up through the layers of the receiving device is made possible by

an interface between each pair of adjacent layers. Each interface defines the information

and services a layer must provide for the layer above it. Well-defined interfaces and

layer functions provide modularity to a network. As long as a layer provides the

expected services to the layer above it, the specific implementation of its functions can

be modified or replaced without requiring changes to the surrounding layers.

Organization ofthe Layers

The seven layers can be thought of as belonging to three subgroups. Layers 1, 2, and

3-physical, data link, and network-are the network support layers; they deal with



the physical aspects of moving data from one device to another (such as electrical

specifications, physical connections, physical addressing, and transport timing and

reliability). Layers 5, 6, and 7-session, presentation, and application-can be

thought of as the user support layers; they allow interoperability among unrelated

software systems. Layer 4, the transport layer, links the two subgroups and ensures

that what the lower layers have transmitted is in a form that the upper layers can use.

The upper OSI layers are almost always implemented in software; lower layers are a

combination of hardware and software, except for the physical layer, which is mostly

hardware.

In Figure 2.4, which gives an overall view of the OSI layers, D7 means the data

unit at layer 7, D6 means the data unit at layer 6, and so on. The process starts at layer

7 (the application layer), then moves from layer to layer in descending, sequential

order. At each layer, a header, or possibly a trailer, can be added to the data unit.

Commonly, the trailer is added only at layer 2. When the formatted data unit passes

through the physical layer (layer 1), it is changed into an electromagnetic signal and

transported along a physical link.

Figure 2.4
An exchange using the OSI model (at the sender, headers H7 through H2 are added to data units D7 through D2 as the message moves down the layers; the result is transmitted as a bit stream over the transmission medium and unwrapped layer by layer at the receiver)

Upon reaching its destination, the signal passes into layer 1 and is transformed

back into digital form. The data units then move back up through the OSI layers. As

each block of data reaches the next higher layer, the headers and trailers attached to it at

the corresponding sending layer are removed, and actions appropriate to that layer are

taken. By the time it reaches layer 7, the message is again in a form appropriate to the

application and is made available to the recipient.



Encapsulation

Figure 2.3 reveals another aspect of data communications in the OSI model: encapsulation.

A packet (header and data) at level 7 is encapsulated in a packet at level 6. The

whole packet at level 6 is encapsulated in a packet at level 5, and so on.

In other words, the data portion of a packet at level N - 1 carries the whole packet

(data and header and maybe trailer) from level N. The concept is called encapsulation;

level N - 1 is not aware of which part of the encapsulated packet is data and which part

is the header or trailer. For level N - 1, the whole packet coming from level N is treated

as one integral unit.
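
A minimal sketch of this idea is shown below; the placeholder header strings and function name are ours rather than real protocol headers, and the layer-2 trailer shown in Figure 2.4 is omitted.

# Sketch of OSI-style encapsulation as described above: each lower layer treats
# the entire packet handed down by the layer above as opaque data and simply
# prepends its own header. Header contents here are placeholders.

def encapsulate(upper_packet: bytes, layer: int) -> bytes:
    """Wrap the whole upper-layer packet behind this layer's header."""
    header = f"H{layer}|".encode()     # stand-in for a real layer-N header
    return header + upper_packet       # the payload is not inspected

packet = b"user data"                  # data handed to the application layer
for layer in range(7, 1, -1):          # layers 7 down to 2 each add a header
    packet = encapsulate(packet, layer)

print(packet)   # b'H2|H3|H4|H5|H6|H7|user data'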

2.3 LAYERS IN THE OSI MODEL

In this section we briefly describe the functions of each layer in the OSI model.

Physical Layer

The physical layer coordinates the functions required to carry a bit stream over a physical

medium. It deals with the mechanical and electrical specifications of the interface and

transmission medium. It also defines the procedures and functions that physical devices

and interfaces have to perform for transmission to occur. Figure 2.5 shows the position of

the physical layer with respect to the transmission medium and the data link layer.

Figure 2.5
Physical layer (on each device, the physical layer sits between the data link layer and the transmission medium)

The physical layer is responsible for movements of

individual bits from one hop (node) to the next.

The physical layer is also concerned with the following:

o Physical characteristics of interfaces and medium. The physical layer defines

the characteristics of the interface between the devices and the transmission

medium. It also defines the type of transmission medium.

o Representation of bits. The physical layer data consists of a stream of bits

(sequence of 0s or 1s) with no interpretation. To be transmitted, bits must be



encoded into signals, electrical or optical. The physical layer defines the type of encoding (how 0s and 1s are changed to signals).

o Data rate. The transmission rate, that is, the number of bits sent each second, is also defined by the physical layer. In other words, the physical layer defines the duration of a bit, which is how long it lasts (a short worked example follows this list).

o Synchronization of bits. The sender and receiver not only must use the same bit

rate but also must be synchronized at the bit level. In other words, the sender and

the receiver clocks must be synchronized.

o Line configuration. The physical layer is concerned with the connection of

devices to the media. In a point-to-point configuration, two devices are connected

through a dedicated link. In a multipoint configuration, a link is shared among

several devices.

o Physical topology. The physical topology defines how devices are connected to

make a network. Devices can be connected by using a mesh topology (every device

is connected to every other device), a star topology (devices are connected through

a central device), a ring topology (each device is connected to the next, forming a

ring), a bus topology (every device is on a common link), or a hybrid topology (this

is a combination of two or more topologies).

o Transmission mode. The physical layer also defines the direction of transmission

between two devices: simplex, half-duplex, or full-duplex. In simplex mode, only

one device can send; the other can only receive. The simplex mode is a one-way

communication. In the half-duplex mode, two devices can send and receive, but

not at the same time. In a full-duplex (or simply duplex) mode, two devices can

send and receive at the same time.
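
As a short worked example for the data-rate item above, the duration of a bit is simply the reciprocal of the data rate; the rates used below are arbitrary examples, and the helper function is ours.

# Worked example for the "Data rate" item above: bit duration = 1 / data rate.

def bit_duration_seconds(data_rate_bps: float) -> float:
    """Duration of one bit, in seconds, at the given data rate."""
    return 1.0 / data_rate_bps

for rate in (1_000_000, 100_000_000, 1_000_000_000):   # 1 Mbps, 100 Mbps, 1 Gbps
    print(f"{rate:>13,} bps -> {bit_duration_seconds(rate) * 1e9:.0f} ns per bit")
# 1 Mbps -> 1000 ns per bit, 100 Mbps -> 10 ns per bit, 1 Gbps -> 1 ns per bit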

Data Link Layer

The data link layer transforms the physical layer, a raw transmission facility, to a reliable

link. It makes the physical layer appear error-free to the upper layer (network

layer). Figure 2.6 shows the relationship of the data link layer to the network and physical layers.

Figure 2.6
Data link layer (on each device, the data link layer sits between the network layer and the physical layer and adds a header H2 to the data it receives from the network layer)



The data link layer is responsible for moving frames from one hop (node) to the next.

Other responsibilities of the data link layer include the following:

o Framing. The data link layer divides the stream of bits received from the network

layer into manageable data units called frames.

o Physical addressing. If frames are to be distributed to different systems on the

network, the data link layer adds a header to the frame to define the sender and/or

receiver of the frame. If the frame is intended for a system outside the sender's

network, the receiver address is the address of the device that connects the network

to the next one.

D Flow control. If the rate at which the data are absorbed by the receiver is less than

the rate at which data are produced in the sender, the data link layer imposes a flow

control mechanism to avoid overwhelming the receiver.

o Error control. The data link layer adds reliability to the physical layer by adding

mechanisms to detect and retransmit damaged or lost frames. It also uses a mechanism

to recognize duplicate frames. Error control is normally achieved through a

trailer added to the end of the frame.

D Access control. When two or more devices are connected to the same link, data

link layer protocols are necessary to determine which device has control over the

link at any given time.

Figure 2.7 illustrates hop-to-hop (node-to-node) delivery by the data link layer.

Figure 2.7   Hop-to-hop delivery

[Figure: end systems A and F communicate through intermediate nodes B and E. Each link (A-B, B-E, E-F) is a separate hop-to-hop delivery involving only the data link and physical layers of the two adjacent nodes.]

As the figure shows, communication at the data link layer occurs between two

adjacent nodes. To send data from A to F, three partial deliveries are made. First, the

data link layer at A sends a frame to the data link layer at B (a router). Second, the data


link layer at B sends a new frame to the data link layer at E. Finally, the data link layer

at E sends a new frame to the data link layer at F. Note that the frames that are

exchanged between the three nodes have different values in the headers. The frame from

A to B has B as the destination address and A as the source address. The frame from B to

E has E as the destination address and B as the source address. The frame from E to F

has F as the destination address and E as the source address. The values of the trailers

can also be different if error checking includes the header of the frame.
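
To make this hop-by-hop rewriting of addresses concrete, here is a small, purely illustrative Python sketch; the dictionary-based "frame" is not a real frame format, but the node names A, B, E, and F and the idea of a new header on every link follow the discussion above.

    # Hypothetical hop-to-hop framing: each link gets a new header (and trailer),
    # but the encapsulated data are carried unchanged from A to F.
    def make_frame(src, dst, data):
        return {"header": {"dst": dst, "src": src}, "data": data, "trailer": "error-check bits"}

    path = ["A", "B", "E", "F"]            # A -> B -> E -> F, the nodes of Figure 2.7
    data = "message from A to F"

    for src, dst in zip(path, path[1:]):
        frame = make_frame(src, dst, data)
        print(f"link {src}-{dst}: header={frame['header']}, data={frame['data']!r}")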

Network Layer

The network layer is responsible for the source-to-destination delivery of a packet,

possibly across multiple networks (links). Whereas the data link layer oversees the

delivery of the packet between two systems on the same network (links), the network

layer ensures that each packet gets from its point of origin to its final destination.

If two systems are connected to the same link, there is usually no need for a network

layer. However, if the two systems are attached to different networks (links) with

connecting devices between the networks (links), there is often a need for the network

layer to accomplish source-to-destination delivery. Figure 2.8 shows the relationship of

the network layer to the data link and transport layers.

Figure 2.8   Network layer

[Figure: the network layer receives data from the transport layer, adds a header (H3) to form a packet, and passes the packet to the data link layer; the receiving side reverses the process and delivers the data to the transport layer.]

The network layer is responsible for the delivery of individual

packets from the source host to the destination host.

Other responsibilities of the network layer include the following:

o Logical addressing. The physical addressing implemented by the data link layer

handles the addressing problem locally. If a packet passes the network boundary,

we need another addressing system to help distinguish the source and destination

systems. The network layer adds a header to the packet coming from the upper

layer that, among other things, includes the logical addresses of the sender and

receiver. We discuss logical addresses later in this chapter.

o Routing. When independent networks or links are connected to create internetworks

(network of networks) or a large network, the connecting devices (called routers


or switches) route or switch the packets to their final destination. One of the functions

of the network layer is to provide this mechanism.

Figure 2.9 illustrates end-to-end delivery by the network layer.

Figure 2.9   Source-to-destination delivery

[Figure: end systems A and F and intermediate systems B and E. Each link provides hop-to-hop delivery, but the network layers of A, B, E, and F together provide source-to-destination delivery of the packet.]

As the figure shows, now we need a source-to-destination delivery. The network layer

at A sends the packet to the network layer at B. When the packet arrives at router B, the

router makes a decision based on the final destination (F) of the packet. As we will see

in later chapters, router B uses its routing table to find that the next hop is router E. The

network layer at B, therefore, sends the packet to the network layer at E. The network

layer at E, in turn, sends the packet to the network layer at F.
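
The forwarding decision made by router B can be sketched as a simple table lookup. The routing-table entries below are invented for illustration; how real routing tables are built and consulted is covered in later chapters.

    # A toy routing table for router B: destination network -> next hop.
    routing_table_b = {
        "network of F": "E",   # packets whose final destination is on F's network go to router E
        "network of A": "A",   # packets heading back toward A are delivered directly
    }

    def forward(destination_network):
        next_hop = routing_table_b.get(destination_network)
        return f"send to next hop {next_hop}" if next_hop else "drop: no route"

    print(forward("network of F"))   # send to next hop E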

Transport Layer

The transport layer is responsible for process-to-process delivery of the entire message.

A process is an application program running on a host. Whereas the network layer

oversees source-to-destination delivery of individual packets, it does not recognize

any relationship between those packets. It treats each one independently, as though

each piece belonged to a separate message, whether or not it does. The transport layer,

on the other hand, ensures that the whole message arrives intact and in order, overseeing

both error control and flow control at the source-to-destination level. Figure 2.10 shows

the relationship of the transport layer to the network and session layers.


Figure 2.10   Transport layer

[Figure: the transport layer receives data from the session layer, divides it into segments, each carrying a header (H4), and passes the segments to the network layer; the receiving side reassembles the segments and delivers the data to the session layer.]

The transport layer is responsible for the delivery of a message from one process to another.

Other responsibilities of the transport layer include the following:

o Service-point addressing. Computers often run several programs at the same

time. For this reason, source-to-destination delivery means delivery not only from

one computer to the next but also from a specific process (running program) on

one computer to a specific process (running program) on the other. The transport

layer header must therefore include a type of address called a service-point

address (or port address). The network layer gets each packet to the correct

computer; the transport layer gets the entire message to the correct process on

that computer.

o Segmentation and reassembly. A message is divided into transmittable segments, with each segment containing a sequence number. These numbers enable the transport layer to reassemble the message correctly upon arrival at the destination and to identify and replace packets that were lost in transmission (a short sketch of this idea follows this list).

o Connection control. The transport layer can be either connectionless or connection-oriented. A connectionless transport layer treats each segment as an independent packet and delivers it to the transport layer at the destination machine. A connection-oriented transport layer makes a connection with the transport layer at the destination machine first before delivering the packets. After all the data are transferred, the connection is terminated.

o Flow control. Like the data link layer, the transport layer is responsible for flow

control. However, flow control at this layer is performed end to end rather than

across a single link.

o Error control. Like the data link layer, the transport layer is responsible for error control. However, error control at this layer is performed process-to-process rather than across a single link. The sending transport layer makes sure that the entire message arrives at the receiving transport layer without error (damage, loss, or duplication). Error correction is usually achieved through retransmission.
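
A minimal Python sketch of the segmentation-and-reassembly idea mentioned in the list above; the segment size and message are arbitrary, and real transport protocols of course do much more.

    import random

    # Segmentation and reassembly with sequence numbers: a minimal sketch.
    def segment(message, size):
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    def reassemble(segments):
        # Segments may arrive out of order; the sequence numbers restore the order.
        return "".join(data for _, data in sorted(segments))

    segs = segment("HELLO, TRANSPORT LAYER", 5)
    random.shuffle(segs)                 # simulate out-of-order arrival
    print(reassemble(segs))              # HELLO, TRANSPORT LAYER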


Figure 2.11 illustrates process-to-process delivery by the transport layer.

Figure 2.11   Reliable process-to-process delivery of a message

[Figure: two hosts connected through an internet. The network layer provides host-to-host delivery, while the transport layer provides process-to-process delivery between the processes running on the two hosts.]

Session Layer

The services provided by the first three layers (physical, data link, and network) are

not sufficient for some processes. The session layer is the network dialog controller.

It establishes, maintains, and synchronizes the interaction among communicating

systems.

The session layer is responsible for dialog control and synchronization.

Specific responsibilities of the session layer include the following:

o Dialog control. The session layer allows two systems to enter into a dialog. It allows the communication between two processes to take place in either half-duplex (one way at a time) or full-duplex (two ways at a time) mode.

o Synchronization. The session layer allows a process to add checkpoints, or synchronization points, to a stream of data. For example, if a system is sending a file of 2000 pages, it is advisable to insert checkpoints after every 100 pages to ensure that each 100-page unit is received and acknowledged independently. In this case, if a crash happens during the transmission of page 523, the only pages that need to be resent after system recovery are pages 501 to 523. Pages previous to 501 need not be resent. Figure 2.12 illustrates the relationship of the session layer to the transport and presentation layers.

Presentation Layer

The presentation layer is concerned with the syntax and semantics of the information

exchanged between two systems. Figure 2.13 shows the relationship between the presentation

layer and the application and session layers.


Figure 2.12   Session layer

[Figure: the session layer receives data from the presentation layer, inserts synchronization (syn) points into the data stream, and passes the result to the transport layer; the receiving side reverses the process.]

Figure 2.13   Presentation layer

[Figure: the presentation layer sits between the application layer (above) and the session layer (below), transforming the data passed in each direction.]

The presentation layer is responsible for translation, compression, and encryption.

Specific responsibilities of the presentation layer include the following:

o Translation. The processes (running programs) in two systems are usually exchanging

information in the form of character strings, numbers, and so on. The information

must be changed to bit streams before being transmitted. Because different

computers use different encoding systems, the presentation layer is responsible for

interoperability between these different encoding methods. The presentation layer

at the sender changes the information from its sender-dependent format into a

common format. The presentation layer at the receiving machine changes the

common format into its receiver-dependent format.

o Encryption. To carry sensitive information, a system must be able to ensure

privacy. Encryption means that the sender transforms the original information to


another form and sends the resulting message out over the network. Decryption

reverses the original process to transform the message back to its original form.

o Compression. Data compression reduces the number of bits contained in the

information. Data compression becomes particularly important in the transmission

of multimedia such as text, audio, and video.

Application Layer

The application layer enables the user, whether human or software, to access the network.

It provides user interfaces and support for services such as electronic mail,

remote file access and transfer, shared database management, and other types of distributed

information services.

Figure 2.14 shows the relationship of the application layer to the user and the presentation

layer. Of the many application services available, the figure shows only three:

X.400 (message-handling services), X.500 (directory services), and file transfer,

access, and management (FTAM). The user in this example employs X.400 to send an

e-mail message.

Figure 2.14   Application layer

[Figure: a user (human or program) exchanges data with the application layer, which in turn exchanges it with the presentation layer below; the same arrangement is mirrored at the other end.]

The application layer is responsible for providing services to the user.

Specific services provided by the application layer include the following:

o Network virtual terminal. A network virtual terminal is a software version of

a physical terminal, and it allows a user to log on to a remote host. To do so, the

application creates a software emulation of a terminal at the remote host. The

user's computer talks to the software terminal which, in turn, talks to the host,

and vice versa. The remote host believes it is communicating with one of its own

terminals and allows the user to log on.


o File transfer, access, and management. This application allows a user to access

files in a remote host (to make changes or read data), to retrieve files from a remote

computer for use in the local computer, and to manage or control files in a remote

computer locally.

o Mail services. This application provides the basis for e-mail forwarding and

storage.

o Directory services. This application provides distributed database sources and

access for global information about various objects and services.

Summary of Layers

Figure 2.15 shows a summary of duties for each layer.

Figure 2.15   Summary of layers

Application:   to allow access to network resources
Presentation:  to translate, encrypt, and compress data
Session:       to establish, manage, and terminate sessions
Transport:     to provide reliable process-to-process message delivery and error recovery
Network:       to move packets from source to destination; to provide internetworking
Data link:     to organize bits into frames; to provide hop-to-hop delivery
Physical:      to transmit bits over a medium; to provide mechanical and electrical specifications

2.4 TCP/IP PROTOCOL SUITE

The TCP/IP protocol suite was developed prior to the OSI model. Therefore, the layers

in the TCP/IP protocol suite do not exactly match those in the OSI model. The

original TCP/IP protocol suite was defined as having four layers: host-to-network,

internet, transport, and application. However, when TCP/IP is compared to OSI, we can

say that the host-to-network layer is equivalent to the combination of the physical and

data link layers. The internet layer is equivalent to the network layer, and the application

layer is roughly doing the job of the session, presentation, and application layers

with the transport layer in TCP/IP taking care of part of the duties of the session layer.

So in this book, we assume that the TCP/IP protocol suite is made of five layers: physical,

data link, network, transport, and application. The first four layers provide physical

standards, network interfaces, internetworking, and transport functions that correspond

to the first four layers of the OSI model. The three topmost layers in the OSI model,

however, are represented in TCP/IP by a single layer called the application layer (see

Figure 2.16).


Figure 2.16   TCP/IP and OSI model

[Figure: the seven OSI layers shown beside the TCP/IP layers. The OSI application, presentation, and session layers correspond to a single TCP/IP application layer containing the various applications; the transport layer contains SCTP, TCP, and UDP; the network (internet) layer contains IP together with ICMP, IGMP, ARP, and RARP; the data link and physical layers consist of protocols defined by the underlying networks (host-to-network).]

TCP/IP is a hierarchical protocol made up of interactive modules, each of which

provides a specific functionality; however, the modules are not necessarily interdependent.

Whereas the OSI model specifies which functions belong to each of its layers,

the layers of the TCP/IP protocol suite contain relatively independent protocols that

can be mixed and matched depending on the needs of the system. The term hierarchical

means that each upper-level protocol is supported by one or more lower-level

protocols.

At the transport layer, TCP/IP defines three protocols: Transmission Control

Protocol (TCP), User Datagram Protocol (UDP), and Stream Control Transmission

Protocol (SCTP). At the network layer, the main protocol defined by TCP/IP is the

Internetworking Protocol (IP); there are also some other protocols that support data

movement in this layer.

Physical and Data Link Layers

At the physical and data link layers, TCP/IP does not define any specific protocol. It supports all the standard and proprietary protocols. A network in a TCP/IP internetwork can be a local-area network or a wide-area network.

Network Layer

At the network layer (or, more accurately, the internetwork layer), TCP/IP supports

the Internetworking Protocol. IP, in turn, uses four supporting protocols: ARP,

RARP, ICMP, and IGMP. Each of these protocols is described in greater detail in later

chapters.


Internetworking Protocol (IP)

The Internetworking Protocol (IP) is the transmission mechanism used by the TCP/IP

protocols. It is an unreliable and connectionless protocol-a best-effort delivery service.

The term best effort means that IP provides no error checking or tracking. IP assumes

the unreliability of the underlying layers and does its best to get a transmission through

to its destination, but with no guarantees.

IP transports data in packets called datagrams, each of which is transported separately.

Datagrams can travel along different routes and can arrive out of sequence or be

duplicated. IP does not keep track of the routes and has no facility for reordering datagrams

once they arrive at their destination.

The limited functionality of IP should not be considered a weakness, however. IP

provides bare-bones transmission functions that free the user to add only those facilities

necessary for a given application and thereby allows for maximum efficiency. IP is discussed

in Chapter 20.

Address Resolution Protocol

The Address Resolution Protocol (ARP) is used to associate a logical address with a

physical address. On a typical physical network, such as a LAN, each device on a link

is identified by a physical or station address, usually imprinted on the network interface

card (NIC). ARP is used to find the physical address of the node when its Internet

address is known. ARP is discussed in Chapter 21.

Reverse Address Resolution Protocol

The Reverse Address Resolution Protocol (RARP) allows a host to discover its Internet

address when it knows only its physical address. It is used when a computer is connected

to a network for the first time or when a diskless computer is booted. We discuss

RARP in Chapter 21.

Internet Control Message Protocol

The Internet Control Message Protocol (ICMP) is a mechanism used by hosts and

gateways to send notification of datagram problems back to the sender. ICMP sends

query and error reporting messages. We discuss ICMP in Chapter 21.

Internet Group Message Protocol

The Internet Group Message Protocol (IGMP) is used to facilitate the simultaneous

transmission of a message to a group of recipients. We discuss IGMP in Chapter 22.

Transport Layer

Traditionally the transport layer was represented in TCP/IP by two protocols: TCP and

UDP. IP is a host-to-host protocol, meaning that it can deliver a packet from one

physical device to another. UDP and TCP are transport level protocols responsible

for delivery of a message from a process (running program) to another process. A new

transport layer protocol, SCTP, has been devised to meet the needs of some newer

applications.


User Datagram Protocol

The User Datagram Protocol (UDP) is the simpler of the two standard TCP/IP transport

protocols. It is a process-to-process protocol that adds only port addresses, checksum

error control, and length information to the data from the upper layer. UDP is discussed

in Chapter 23.

Transmission Control Protocol

The Transmission Control Protocol (TCP) provides full transport-layer services to

applications. TCP is a reliable stream transport protocol. The term stream, in this context,

means connection-oriented: A connection must be established between both ends

of a transmission before either can transmit data.

At the sending end of each transmission, TCP divides a stream of data into smaller

units called segments. Each segment includes a sequence number for reordering after

receipt, together with an acknowledgment number for the segments received. Segments

are carried across the internet inside of IP datagrams. At the receiving end, TCP collects

each datagram as it comes in and reorders the transmission based on sequence

numbers. TCP is discussed in Chapter 23.

Stream Control Transmission Protocol

The Stream Control Transmission Protocol (SCTP) provides support for newer

applications such as voice over the Internet. It is a transport layer protocol that combines

the best features of UDP and TCP. We discuss SCTP in Chapter 23.

Application Layer

The application layer in TCP/IP is equivalent to the combined session, presentation, and application layers in the OSI model. Many protocols are defined at this layer. We cover many of the standard protocols in later chapters.

2.5 ADDRESSING

Four levels of addresses are used in an internet employing the TCP/IP protocols:

physical (link) addresses, logical (IP) addresses, port addresses, and specific

addresses (see Figure 2.17).

Figure 2.17   Addresses in TCP/IP

[Figure: addresses in TCP/IP fall into four groups: physical addresses, logical addresses, port addresses, and specific addresses.]


Each address is related to a specific layer in the TCP/IP architecture, as shown in

Figure 2.18.

Figure 2.18   Relationship of layers and addresses in TCP/IP

[Figure: specific addresses are used at the application layer, port addresses at the transport layer, logical addresses at the network layer (IP and other protocols), and physical addresses by the underlying physical networks at the data link and physical layers.]

Physical Addresses

The physical address, also known as the link address, is the address of a node as defined

by its LAN or WAN. It is included in the frame used by the data link layer. It is the

lowest-level address.

The physical addresses have authority over the network (LAN or WAN). The size

and format of these addresses vary depending on the network. For example, Ethernet

uses a 6-byte (48-bit) physical address that is imprinted on the network interface card

(NIC). LocalTalk (Apple), however, has a 1-byte dynamic address that changes each

time the station comes up.

Example 2.1

In Figure 2.19 a node with physical address 10 sends a frame to a node with physical address 87.

The two nodes are connected by a link (bus topology LAN). At the data link layer, this frame

contains physical (link) addresses in the header. These are the only addresses needed. The rest of

the header contains other information needed at this level. The trailer usually contains extra bits

needed for error detection. As the figure shows, the computer with physical address 10 is the

sender, and the computer with physical address 87 is the receiver. The data link layer at the

sender receives data from an upper layer. It encapsulates the data in a frame, adding a header and

a trailer. The header, among other pieces of information, carries the receiver and the sender physical

(link) addresses. Note that in most data link protocols, the destination address, 87 in this

case, comes before the source address (10 in this case).

We have shown a bus topology for an isolated LAN. In a bus topology, the frame is propagated

in both directions (left and right). The frame propagated to the left dies when it reaches the

end of the cable if the cable end is terminated appropriately. The frame propagated to the right is


Figure 2.19   Physical addresses

[Figure: a bus-topology LAN. The sender (physical address 10) transmits a frame whose destination address is 87; stations with other physical addresses, such as 28 and 53, find that the destination address does not match and drop the frame, while the receiver accepts it.]

sent to every station on the network. Each station with a physical address other than 87 drops

the frame because the destination address in the frame does not match its own physical address.

The intended destination computer, however, finds a match between the destination address in the

frame and its own physical address. The frame is checked, the header and trailer are dropped, and

the data part is decapsulated and delivered to the upper layer.

Example 2.2

As we will see in Chapter 13, most local-area networks use a 48-bit (6-byte) physical address

written as 12 hexadecimal digits; every byte (2 hexadecimal digits) is separated by a colon, as

shown below:

07:01:02:01:2C:4B

A 6-byte (12 hexadecimal digits) physical address
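
A short Python sketch of this notation, formatting the six bytes of the example as colon-separated hexadecimal digits (the helper function name is ours, not part of any standard API):

    # Format a 6-byte physical address as 12 hexadecimal digits separated by colons,
    # as in Example 2.2.
    def format_physical_address(addr_bytes):
        return ":".join(f"{b:02X}" for b in addr_bytes)

    print(format_physical_address(bytes([0x07, 0x01, 0x02, 0x01, 0x2C, 0x4B])))
    # prints 07:01:02:01:2C:4B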

Logical Addresses

Logical addresses are necessary for universal communications that are independent of

underlying physical networks. Physical addresses are not adequate in an internetwork

environment where different networks can have different address formats. A universal

addressing system is needed in which each host can be identified uniquely, regardless

of the underlying physical network.

The logical addresses are designed for this purpose. A logical address in the Internet

is currently a 32-bit address that can uniquely define a host connected to the Internet. No

two publicly addressed and visible hosts on the Internet can have the same IP address.

Example 2.3

Figure 2.20 shows a part of an internet with two routers connecting three LANs. Each device

(computer or router) has a pair of addresses (logical and physical) for each connection. In this

case, each computer is connected to only one link and therefore has only one pair of addresses.

Each router, however, is connected to three networks (only two are shown in the figure). So each

router has three pairs of addresses, one for each connection. Although it may be obvious that each

router must have a separate physical address for each connection, it may not be obvious why it

needs a logical address for each connection. We discuss these issues in Chapter 22 when we discuss

routing.


Figure 2.20   IP addresses

[Figure: an internet of three LANs connected by two routers. Each device has a logical/physical address pair on each connection; for example, the source host is A/10 and the destination host is P/95, and the routers have additional pairs (such as X/44 and Y/55) on links to other networks. The physical addresses change as the packet moves from hop to hop.]

The computer with logical address A and physical address 10 needs to send a

packet to the computer with logical address P and physical address 95. We use letters to

show the logical addresses and numbers for physical addresses, but note that both are

actually numbers, as we will see later in the chapter.

The sender encapsulates its data in a packet at the network layer and adds two logical

addresses (A and P). Note that in most protocols, the logical source address comes before

the logical destination address (contrary to the order of physical addresses). The network

layer, however, needs to find the physical address of the next hop before the packet can be

delivered. The network layer consults its routing table (see Chapter 22) and finds the

logical address of the next hop (router 1) to be F. The ARP discussed previously finds the physical address of router 1 that corresponds to this logical address; it turns out to be 20. Now the

network layer passes this address to the data link layer, which, in turn, encapsulates the

packet with physical destination address 20 and physical source address 10.

The frame is received by every device on LAN 1, but is discarded by all except

router 1, which finds that the destination physical address in the frame matches with its

own physical address. The router decapsulates the packet from the frame to read the logical

destination address P. Since the logical destination address does not match the

router's logical address, the router knows that the packet needs to be forwarded. The


router consults its routing table and ARP to find the physical destination address of the

next hop (router 2), creates a new frame, encapsulates the packet, and sends it to router 2.

Note the physical addresses in the frame. The source physical address changes

from 10 to 99. The destination physical address changes from 20 (router 1 physical

address) to 33 (router 2 physical address). The logical source and destination addresses

must remain the same; otherwise the packet will be lost.

At router 2 we have a similar scenario. The physical addresses are changed, and a

new frame is sent to the destination computer. When the frame reaches the destination,

the packet is decapsulated. The destination logical address P matches the logical address

of the computer. The data are decapsulated from the packet and delivered to the upper

layer. Note that although physical addresses will change from hop to hop, logical

addresses remain the same from the source to destination. There are some exceptions to

this rule that we discover later in the book.

The physical addresses will change from hop to hop,

but the logical addresses usually remain the same.

Port Addresses

The IP address and the physical address are necessary for a quantity of data to travel

from a source to the destination host. However, arrival at the destination host is not the

final objective of data communications on the Internet. A system that sends nothing but

data from one computer to another is not complete. Today, computers are devices that

can run multiple processes at the same time. The end objective of Internet communication

is a process communicating with another process. For example, computer A can

communicate with computer C by using TELNET. At the same time, computer A communicates

with computer B by using the File Transfer Protocol (FTP). For these processes

to receive data simultaneously, we need a method to label the different processes.

In other words, they need addresses. In the TCP/IP architecture, the label assigned to a process is called a port address. A port address in TCP/IP is 16 bits in length.

Example 2.4

Figure 2.21 shows two computers communicating via the Internet. The sending computer is running

three processes at this time with port addresses a, b, and c. The receiving computer is running

two processes at this time with port addresses j and k. Process a in the sending computer needs to

communicate with process j in the receiving computer. Note that although both computers are

using the same application, FTP, for example, the port addresses are different because one is a client

program and the other is a server program, as we will see in Chapter 23. To show that data from

process a need to be delivered to process j, and not k, the transport layer encapsulates data from

the application layer in a packet and adds two port addresses (a and j), source and destination. The

packet from the transport layer is then encapsulated in another packet at the network layer with

logical source and destination addresses (A and P). Finally, this packet is encapsulated in a frame

with the physical source and destination addresses of the next hop. We have not shown the physical

addresses because they change from hop to hop inside the cloud designated as the Internet.

Note that although physical addresses change from hop to hop, logical and port addresses remain

the same from the source to destination. There are some exceptions to this rule that we discuss

later in the book.
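
The nesting of addresses in this example can be sketched as successive encapsulation. The dictionaries below only illustrate which address pair each layer adds; they are not real headers, and the physical addresses 10 and 20 are borrowed from the first hop of Example 2.3 because Example 2.4 leaves the physical addresses unspecified.

    # Illustrative encapsulation: each layer wraps the unit from the layer above
    # and adds its own pair of addresses (as in Example 2.4).
    app_data = "data from process a to process j"

    segment = {"src_port": "a", "dst_port": "j", "payload": app_data}    # transport layer
    packet  = {"src_ip": "A", "dst_ip": "P", "payload": segment}         # network layer
    frame   = {"src_phys": 10, "dst_phys": 20, "payload": packet}        # data link layer (first hop only)

    # The port and logical addresses stay the same from source to destination;
    # only the physical addresses in the frame change at every hop.
    print(frame)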


Figure 2.21   Port addresses

[Figure: the sending computer runs three processes with port addresses a, b, and c; the receiving computer runs two processes with port addresses j and k; the two computers are connected through the Internet, with the data link layer shown at the bottom of each protocol stack.]

The physical addresses change from hop to hop,

but the logical and port addresses usually remain the same.

Example 2.5

As we will see in Chapter 23, a port address is a 16-bit address represented by one decimal number

as shown.

753

A 16-bit port address represented as one single number

Specific Addresses

Some applications have user-friendly addresses that are designed for that specific application.

Examples include the e-mail address (for example, forouzan@fhda.edu) and the Universal

Resource Locator (URL) (for example, www.mhhe.com). The first defines the recipient of

an e-mail (see Chapter 26); the second is used to find a document on the World Wide Web

(see Chapter 27). These addresses, however, get changed to the corresponding port and

logical addresses by the sending computer, as we will see in Chapter 25.

2.6 RECOMMENDED READING

For more details about subjects discussed in this chapter, we recommend the following

books and sites. The items enclosed in brackets, [...] refer to the reference list at the

end of the text.


Books

Network models are discussed in Section 1.3 of [Tan03], Chapter 2 of [For06], Chapter 2

of [Sta04], Sections 2.2 and 2.3 of [GW04], Section 1.3 of [PD03], and Section 1.7 of

[KR05]. A good discussion about addresses can be found in Section 1.7 of [Ste94].

Sites

The following site is related to topics discussed in this chapter.

o www.osi.org   Information about OSI.

RFCs

The following site lists all RFCs, including those related to IP and port addresses.

o www.ietf.org/rfc.html

2.7 KEY TERMS

access control

Address Resolution Protocol (ARP)

application layer

best-effort delivery

bits

connection control

data link layer

encoding

error

error control

flow control

frame

header

hop-to-hop delivery

host-to-host protocol

interface

Internet Control Message Protocol

(ICMP)

Internet Group Message Protocol (IGMP)

logical addressing

mail service

network layer

node-to-node delivery

open system

Open Systems Interconnection (OSI)

model

peer-to-peer process

physical addressing

physical layer

port address

presentation layer

process-to-process delivery

Reverse Address Resolution Protocol

(RARP)

routing

segmentation

session layer

source-to-destination delivery

Stream Control Transmission Protocol

(SCTP)

synchronization point

TCP/IP protocol suite

trailer

Transmission Control Protocol (TCP)

transmission rate

transport layer

transport level protocols

User Datagram Protocol (UDP)


2.8 SUMMARY

o The International Standards Organization created a model called the Open Systems Interconnection, which allows diverse systems to communicate.

o The seven-layer OSI model provides guidelines for the development of universally compatible networking protocols.

o The physical, data link, and network layers are the network support layers.

o The session, presentation, and application layers are the user support layers.

o The transport layer links the network support layers and the user support layers.

o The physical layer coordinates the functions required to transmit a bit stream over a physical medium.

o The data link layer is responsible for delivering data units from one station to the next without errors.

o The network layer is responsible for the source-to-destination delivery of a packet across multiple network links.

o The transport layer is responsible for the process-to-process delivery of the entire message.

o The session layer establishes, maintains, and synchronizes the interactions between communicating devices.

o The presentation layer ensures interoperability between communicating devices through transformation of data into a mutually agreed upon format.

o The application layer enables the users to access the network.

o TCP/IP is a five-layer hierarchical protocol suite developed before the OSI model.

o The TCP/IP application layer is equivalent to the combined session, presentation, and application layers of the OSI model.

o Four levels of addresses are used in an internet following the TCP/IP protocols: physical (link) addresses, logical (IP) addresses, port addresses, and specific addresses.

o The physical address, also known as the link address, is the address of a node as defined by its LAN or WAN.

o The IP address uniquely defines a host on the Internet.

o The port address identifies a process on a host.

o A specific address is a user-friendly address.

2.9 PRACTICE SET

Review Questions

1. List the layers of the Internet model.

2. Which layers in the Internet model are the network support layers?

3. Which layer in the Internet model is the user support layer?

4. What is the difference between network layer delivery and transport layer delivery?


5. What is a peer-to-peer process?

6. How does information get passed from one layer to the next in the Internet

model?

7. What are headers and trailers, and how do they get added and removed?

8. What are the concerns of the physical layer in the Internet model?

9. What are the responsibilities of the data link layer in the Internet model?

10. What are the responsibilities of the network layer in the Internet model?

11. What are the responsibilities of the transport layer in the Internet model?

12. What is the difference between a port address, a logical address, and a physical

address?

13. Name some services provided by the application layer in the Internet model.

14. How do the layers of the Internet model correlate to the layers of the OSI model?

Exercises

15. How are OSI and ISO related to each other?

16. Match the following to one or more layers of the OSI model:

a. Route determination

b. Flow control

c. Interface to transmission media

d. Provides access for the end user

17. Match the following to one or more layers of the OSI model:

a. Reliable process-to-process message delivery

b. Route selection

c. Defines frames

d. Provides user services such as e-mail and file transfer

e. Transmission of bit stream across physical medium

18. Match the following to one or more layers of the OSI model:

a. Communicates directly with user's application program

b. Error correction and retransmission

c. Mechanical, electrical, and functional interface

d. Responsibility for carrying frames between adjacent nodes

19. Match the following to one or more layers of the OSI model:

a. Format and code conversion services

b. Establishes, manages, and terminates sessions

c. Ensures reliable transmission of data

d. Log-in and log-out procedures

e. Provides independence from differences in data representation

20. In Figure 2.22, computer A sends a message to computer D via LANl, router Rl,

and LAN2. Show the contents of the packets and frames at the network and data

link layer for each hop interface.


Figure 2.22   Exercise 20

[Figure: sender A on LAN 1 and receiver D on LAN 2, connected through router R1; interfaces are labeled with logical/physical address pairs (one of the pairs shown is 8/42).]

21. In Figure 2.22, assume that the communication is between a process running at

computer A with port address i and a process running at computer D with port

address j. Show the contents of packets and frames at the network, data link, and

transport layer for each hop.

22. Suppose a computer sends a frame to another computer on a bus topology LAN.

The physical destination address of the frame is corrupted during the transmission.

What happens to the frame? How can the sender be informed about the situation?

23. Suppose a computer sends a packet at the network layer to another computer

somewhere in the Internet. The logical destination address of the packet is corrupted.

What happens to the packet? How can the source computer be informed of

the situation?

24. Suppose a computer sends a packet at the transport layer to another computer

somewhere in the Internet. There is no process with the destination port address

running at the destination computer. What will happen?

25. If the data link layer can detect errors between hops, why do you think we need

another checking mechanism at the transport layer?

Research Activities

26. Give some advantages and disadvantages of combining the session, presentation,

and application layers in the OSI model into one single application layer in the

Internet model.

27. Dialog control and synchronization are two responsibilities of the session layer in

the OSI model. Which layer do you think is responsible for these duties in the

Internet model? Explain your answer.

28. Translation, encryption, and compression are some of the duties of the presentation

layer in the OSI model. Which layer do you think is responsible for these duties in

the Internet model? Explain your answer.

29. There are several transport layer models proposed in the OSI model. Find all of

them. Explain the differences between them.

30. There are several network layer models proposed in the OSI model. Find all of

them. Explain the differences between them.


Physical Layer

and Media

Objectives

We start the discussion of the Internet model with the bottom-most layer, the physical

layer. It is the layer that actually interacts with the transmission media, the physical part

of the network that connects network components together. This layer is involved in

physically carrying information from one node in the network to the next.

The physical layer has complex tasks to perform. One major task is to provide

services for the data link layer. The data in the data link layer consists of 0s and 1s organized

into frames that are ready to be sent across the transmission medium. This stream

of 0s and 1s must first be converted into another entity: signals. One of the services provided

by the physical layer is to create a signal that represents this stream of bits.

The physical layer must also take care of the physical network, the transmission

medium. The transmission medium is a passive entity; it has no internal program or

logic for control like other layers. The transmission medium must be controlled by the

physical layer. The physical layer decides on the directions of data flow. The physical

layer decides on the number of logical channels for transporting data coming from

different sources.

In Part 2 of the book, we discuss issues related to the physical layer and the transmission

medium that is controlled by the physical layer. In the last chapter of Part 2, we

discuss the structure and the physical layers of the telephone network and the cable

network.

Part 2 of the book is devoted to the physical layer

and the transmission media.


Chapters

This part consists of seven chapters: Chapters 3 to 9.

Chapter 3

Chapter 3 discusses the relationship between data, which are created by a device, and

electromagnetic signals, which are transmitted over a medium.

Chapter 4

Chapter 4 deals with digital transmission. We discuss how we can convert digital or

analog data to digital signals.

Chapter 5

Chapter 5 deals with analog transmission. We discuss how we can convert digital or

analog data to analog signals.

Chapter 6

Chapter 6 shows how we can use the available bandwidth efficiently. We discuss two

separate but related topics: multiplexing and spreading.

Chapter 7

After explaining some ideas about data and signals and how we can use them efficiently,

we discuss the characteristics of transmission media, both guided and

unguided, in this chapter. Although transmission media operate under the physical

layer, they are controlled by the physical layer.

Chapter 8

Although the previous chapters in this part cover issues related to the physical layer or

transmission media, Chapter 8 discusses switching, a topic that can be related to several

layers. We have included this topic in this part of the book to avoid repeating the discussion

for each layer.

Chapter 9

Chapter 9 shows how the issues discussed in the previous chapters can be used in actual

networks. In this chapter, we first discuss the telephone network as designed to carry

voice. We then show how it can be used to carry data. Second, we discuss the cable network

as a television network. We then show how it can also be used to carry data.


CHAPTER 3

Data and Signals

One of the major functions of the physical layer is to move data in the form of electromagnetic

signals across a transmission medium. Whether you are collecting numerical

statistics from another computer, sending animated pictures from a design workstation,

or causing a bell to ring at a distant control center, you are working with the transmission

of data across network connections.

Generally, the data usable to a person or application are not in a form that can be

transmitted over a network. For example, a photograph must first be changed to a form

that transmission media can accept. Transmission media work by conducting energy

along a physical path.

To be transmitted, data must be transformed to electromagnetic signals.

3.1 ANALOG AND DIGITAL

Both data and the signals that represent them can be either analog or digital in form.

Analog and Digital Data

Data can be analog or digital. The term analog data refers to information that is continuous;

digital data refers to information that has discrete states. For example, an analog

clock that has hour, minute, and second hands gives information in a continuous form;

the movements of the hands are continuous. On the other hand, a digital clock that

reports the hours and the minutes will change suddenly from 8:05 to 8:06.

Analog data, such as the sounds made by a human voice, take on continuous values.

When someone speaks, an analog wave is created in the air. This can be captured by a

microphone and converted to an analog signal or sampled and converted to a digital

signal.

Digital data take on discrete values. For example, data are stored in computer

memory in the form of 0s and 1s. They can be converted to a digital signal or modulated

into an analog signal for transmission across a medium.

Data can be analog or digital. Analog data are continuous and take continuous values.

Digital data have discrete states and take discrete values.

Analog and Digital Signals

Like the data they represent, signals can be either analog or digital. An analog signal

has infinitely many levels of intensity over a period of time. As the wave moves from

value A to value B, it passes through and includes an infinite number of values along its

path. A digital signal, on the other hand, can have only a limited number of defined

values. Although each value can be any number, it is often as simple as 1 and 0.

The simplest way to show signals is by plotting them on a pair of perpendicular

axes. The vertical axis represents the value or strength of a signal. The horizontal axis

represents time. Figure 3.1 illustrates an analog signal and a digital signal. The curve

representing the analog signal passes through an infinite number of points. The vertical

lines of the digital signal, however, demonstrate the sudden jump that the signal makes

from value to value.

Signals can be analog or digital. Analog signals can have an infinite number of

values in a range; digital signals can have only a limited number of values.

Figure 3.1   Comparison of analog and digital signals

[Figure: two value-versus-time plots: a. an analog signal, a smooth continuous curve; b. a digital signal, which jumps abruptly between discrete levels.]

Periodic and Nonperiodic Signals

Both analog and digital signals can take one of two forms: periodic or nonperiodic

(sometimes referred to as aperiodic, because the prefix a in Greek means "non").

A periodic signal completes a pattern within a measurable time frame, called a

period, and repeats that pattern over subsequent identical periods. The completion of

one full pattern is called a cycle. A nonperiodic signal changes without exhibiting a pattern

or cycle that repeats over time.

Both analog and digital signals can be periodic or nonperiodic. In data communications,

we commonly use periodic analog signals (because they need less bandwidth,


as we will see in Chapter 5) and nonperiodic digital signals (because they can represent

variation in data, as we will see in Chapter 6).

In data communications, we commonly use periodic

analog signals and nonperiodic digital signals.

3.2 PERIODIC ANALOG SIGNALS

Periodic analog signals can be classified as simple or composite. A simple periodic

analog signal, a sine wave, cannot be decomposed into simpler signals. A composite

periodic analog signal is composed of multiple sine waves.

Sine Wave

The sine wave is the most fundamental form of a periodic analog signal. When we

visualize it as a simple oscillating curve, its change over the course of a cycle is smooth

and consistent, a continuous, rolling flow. Figure 3.2 shows a sine wave. Each cycle

consists of a single arc above the time axis followed by a single arc below it.

Figure 3.2   A sine wave

[Figure: a sine wave plotted as value versus time; each cycle consists of one arc above the time axis followed by one arc below it.]

We discuss a mathematical approach to sine waves in Appendix C.

A sine wave can be represented by three parameters: the peak amplitude, the frequency,

and the phase. These three parameters fully describe a sine wave.
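
Because these three parameters fully determine a sine wave, one can be generated directly from them. The following Python sketch samples s(t) = A sin(2πft + phase) at a few instants; the particular parameter values and sampling times are arbitrary examples, not values from the text.

    import math

    # s(t) = A * sin(2*pi*f*t + phase): a sine wave is fully determined by its
    # peak amplitude A, frequency f, and phase; here we sample one at a few instants.
    def sine_sample(peak_amplitude, frequency_hz, phase_rad, t):
        return peak_amplitude * math.sin(2 * math.pi * frequency_hz * t + phase_rad)

    for t in (0.0, 0.05, 0.10, 0.15):
        print(f"t = {t:.2f} s  ->  {sine_sample(5, 6, 0, t):7.3f}")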

Peak Amplitude

The peak amplitude of a signal is the absolute value of its highest intensity, proportional

to the energy it carries. For electric signals, peak amplitude is normally measured

in volts. Figure 3.3 shows two signals and their peak amplitudes.

Example 3.1

The power in your house can be represented by a sine wave with a peak amplitude of 155 to 170 V.

However, it is common knowledge that the voltage of the power in U.S. homes is 110 to 120 V.


Figure 3.3   Two signals with the same phase and frequency, but different amplitudes

[Figure: two amplitude-versus-time plots: a. a signal with high peak amplitude; b. a signal with low peak amplitude.]

This discrepancy is due to the fact that these are root mean square (rms) values. The signal is squared and then the average amplitude is calculated. The peak value is equal to √2 times the rms value.
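
A quick check of this relationship in Python (a sketch; the 110 V and 120 V figures are the rms values mentioned in the example):

    import math

    # Peak value = sqrt(2) x rms value (Example 3.1).
    for rms in (110, 120):
        print(f"rms = {rms} V  ->  peak = {math.sqrt(2) * rms:.0f} V")
    # rms = 110 V -> peak = 156 V;  rms = 120 V -> peak = 170 V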

Example 3.2

The voltage of a battery is a constant; this constant value can be considered a sine wave, as we will

see later. For example, the peak value of an AA battery is normally 1.5 V.

Period and Frequency

Period refers to the amount of time, in seconds, a signal needs to complete 1 cycle.

Frequency refers to the number of periods in 1 s. Note that period and frequency are just

one characteristic defined in two ways. Period is the inverse of frequency, and frequency

is the inverse of period, as the following formulas show.

f = 1/T     and     T = 1/f

Frequency and period are the inverse of each other.

Figure 3.4 shows two signals and their frequencies.


Figure 3.4   Two signals with the same amplitude and phase, but different frequencies

[Figure: two amplitude-versus-time plots over an interval of 1 s: a. a signal with 12 periods in 1 s (frequency 12 Hz, period 1/12 s); b. a signal with 6 periods in 1 s (frequency 6 Hz, period 1/6 s).]

Period is formally expressed in seconds. Frequency is formally expressed in

Hertz (Hz), which is cycles per second. Units of period and frequency are shown in

Table 3.1.

Table 3.1   Units of period and frequency

Unit                  Equivalent      Unit                Equivalent
Seconds (s)           1 s             Hertz (Hz)          1 Hz
Milliseconds (ms)     10⁻³ s          Kilohertz (kHz)     10³ Hz
Microseconds (µs)     10⁻⁶ s          Megahertz (MHz)     10⁶ Hz
Nanoseconds (ns)      10⁻⁹ s          Gigahertz (GHz)     10⁹ Hz
Picoseconds (ps)      10⁻¹² s         Terahertz (THz)     10¹² Hz

Example 3.3

The power we use at home has a frequency of 60 Hz (50 Hz in Europe). The period of this sine

wave can be determined as follows:

T = 1/f = 1/60 = 0.0166 s = 0.0166 × 10³ ms = 16.6 ms

This means that the period of the power for our lights at home is 0.0166 s, or 16.6 ms. Our eyes are not sensitive enough to distinguish these rapid changes in amplitude.


Example 3.4

Express a period of 100 ms in microseconds.

Solution

From Table 3.1 we find the equivalents of 1 ms (1 ms is 10⁻³ s) and 1 s (1 s is 10⁶ µs). We make the following substitutions:

100 ms = 100 × 10⁻³ s = 100 × 10⁻³ × 10⁶ µs = 10² × 10⁻³ × 10⁶ µs = 10⁵ µs

Example 3.5

The period of a signal is 100 ms. What is its frequency in kilohertz?

Solution

First we change 100 ms to seconds, and then we calculate the frequency from the period (1 Hz = 10⁻³ kHz):

100 ms = 100 × 10⁻³ s = 10⁻¹ s

f = 1/T = 1/10⁻¹ Hz = 10 Hz = 10 × 10⁻³ kHz = 10⁻² kHz
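
The conversions in Examples 3.3 through 3.5 can be verified with a few lines of Python (a sketch, not part of the text):

    # Frequency and period are reciprocals: f = 1/T and T = 1/f.
    def period_s(frequency_hz):
        return 1.0 / frequency_hz

    def frequency_hz(period_in_s):
        return 1.0 / period_in_s

    print(f"{period_s(60) * 1e3:.1f} ms")        # Example 3.3: period of 60 Hz power, about 16.7 ms
    print(f"{100e-3 * 1e6:.0f} microseconds")    # Example 3.4: 100 ms expressed in microseconds
    print(f"{frequency_hz(100e-3) * 1e-3} kHz")  # Example 3.5: frequency of a 100 ms period, 0.01 kHz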

More About Frequency

We already know that frequency is the relationship of a signal to time and that the frequency

of a wave is the number of cycles it completes in 1 s. But another way to look at frequency

is as a measurement of the rate of change. Electromagnetic signals are oscillating waveforms;

that is, they fluctuate continuously and predictably above and below a mean energy

level. A 40-Hz signal has one-half the frequency of an 80-Hz signal; it completes 1 cycle in

twice the time of the 80-Hz signal, so each cycle also takes twice as long to change from its

lowest to its highest voltage levels. Frequency, therefore, though described in cycles per second (hertz), is a general measurement of the rate of change of a signal with respect to time.

Frequency is the rate of change with respect to time. Change in a short span of time

means high frequency. Change over a long span of time means low frequency.

If the value of a signal changes over a very short span of time, its frequency is

high. If it changes over a long span of time, its frequency is low.

Two Extremes

What if a signal does not change at all? What if it maintains a constant voltage level for the entire time it is active? In such a case, its frequency is zero. Conceptually, this idea is a simple one. If a signal does not change at all, it never completes a cycle, so its frequency is 0 Hz.

But what if a signal changes instantaneously? What if it jumps from one level to

another in no time? Then its frequency is infinite. In other words, when a signal changes

instantaneously, its period is zero; since frequency is the inverse of period, in this case,

the frequency is 1/0, or infinite (unbounded).


If a signal does not change at all, its frequency is zero.
If a signal changes instantaneously, its frequency is infinite.

Phase

The term phase describes the position of the waveform relative to time 0. If we think of

the wave as something that can be shifted backward or forward along the time axis,

phase describes the amount of that shift. It indicates the status of the first cycle.

Phase describes the position of the waveform relative to time 0.

Phase is measured in degrees or radians [360° is 2π rad; 1° is 2π/360 rad, and 1 rad is 360/(2π) degrees]. A phase shift of 360° corresponds to a shift of a complete period; a phase

shift of 180° corresponds to a shift of one-half of a period; and a phase shift of 90° corresponds

to a shift of one-quarter of a period (see Figure 3.5).

Figure 3.5   Three sine waves with the same amplitude and frequency, but different phases

[Figure: three amplitude-versus-time plots of the same sine wave with phases of a. 0 degrees, b. 90 degrees, and c. 180 degrees.]

Looking at Figure 3.5, we can say that

1. A sine wave with a phase of 0° starts at time 0 with a zero amplitude. The

amplitude is increasing.

2. A sine wave with a phase of 90° starts at time 0 with a peak amplitude. The

amplitude is decreasing.


3. A sine wave with a phase of 180° starts at time 0 with a zero amplitude. The

amplitude is decreasing.

Another way to look at the phase is in terms of shift or offset. We can say that

1. A sine wave with a phase of 0° is not shifted.

2. A sine wave with a phase of 90° is shifted to the left by 1/4 cycle. However, note that the signal does not really exist before time 0.

3. A sine wave with a phase of 180° is shifted to the left by 1/2 cycle. However, note that the signal does not really exist before time 0.

Example 3.6

A sine wave is offset 1/6 cycle with respect to time 0. What is its phase in degrees and radians?

Solution

We know that 1 complete cycle is 360°. Therefore, 1/6 cycle is

(1/6) × 360 = 60° = 60 × (2π/360) rad = π/3 rad ≈ 1.047 rad
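
The same conversion written as a small Python sketch (the 1/6-cycle offset is the value from Example 3.6):

    import math

    # Convert a phase offset expressed as a fraction of a cycle into degrees and radians.
    def phase_of(cycle_fraction):
        degrees = cycle_fraction * 360
        radians = degrees * 2 * math.pi / 360
        return degrees, radians

    deg, rad = phase_of(1 / 6)
    print(f"{deg:.0f} degrees = {rad:.3f} rad")   # 60 degrees = 1.047 rad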

Wavelength

Wavelength is another characteristic of a signal traveling through a transmission

medium. Wavelength binds the period or the frequency of a simple sine wave to the

propagation speed of the medium (see Figure 3.6).

Figure 3.6   Wavelength and period

[Figure: a wave traveling along a transmission medium in the direction of propagation; the wavelength is marked as the distance covered by one complete cycle.]

While the frequency of a signal is independent of the medium, the wavelength

depends on both the frequency and the medium. Wavelength is a property of any type

of signal. In data communications, we often use wavelength to describe the transmission

of light in an optical fiber. The wavelength is the distance a simple signal can travel

in one period.

Wavelength can be calculated if one is given the propagation speed (the speed of

light) and the period of the signal. However, since period and frequency are related to

each other, if we represent wavelength by λ, propagation speed by c (speed of light), and frequency by f, we get

Wavelength = propagation speed × period = propagation speed / frequency



The propagation speed of electromagnetic signals depends on the medium and on

the frequency of the signal. For example, in a vacuum, light is propagated with a speed

of 3 × 10^8 m/s. That speed is lower in air and even lower in cable.

The wavelength is normally measured in micrometers (microns) instead of meters.

For example, the wavelength of red light (frequency = 4 × 10^14 Hz) in air is

λ = c/f = (3 × 10^8)/(4 × 10^14) = 0.75 × 10^-6 m = 0.75 μm

In a coaxial or fiber-optic cable, however, the wavelength is shorter (0.5 μm) because the

propagation speed in the cable is decreased.
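As a quick check of the red-light calculation above, the wavelength formula λ = c/f can be evaluated in a short Python sketch (an added illustration, not part of the original text):

# Wavelength = propagation speed / frequency (lambda = c/f)
c = 3e8       # propagation speed of light in a vacuum, in m/s
f = 4e14      # frequency of red light, in Hz

wavelength_m = c / f
print(wavelength_m)           # 7.5e-07 m
print(wavelength_m * 1e6)     # 0.75 micrometers, as in the text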

Time and Frequency Domains

A sine wave is comprehensively defined by its amplitude, frequency, and phase. We

have been showing a sine wave by using what is called a time-domain plot. The

time-domain plot shows changes in signal amplitude with respect to time (it is an

amplitude-versus-time plot). Phase is not explicitly shown on a time-domain plot.

To show the relationship between amplitude and frequency, we can use what is

called a frequency-domain plot. A frequency-domain plot is concerned with only the

peak value and the frequency. Changes of amplitude during one period are not shown.

Figure 3.7 shows a signal in both the time and frequency domains.

Figure 3.7  The time-domain and frequency-domain plots of a sine wave

a. A sine wave in the time domain (peak value: 5 V, frequency: 6 Hz)
b. The same sine wave in the frequency domain (peak value: 5 V, frequency: 6 Hz)



It is obvious that the frequency domain is easy to plot and conveys the information

that one can find in a time domain plot. The advantage of the frequency domain is that

we can immediately see the values of the frequency and peak amplitude. A complete

sine wave is represented by one spike. The position of the spike shows the frequency;

its height shows the peak amplitude.

A complete sine wave in the time domain can be represented

by one single spike in the frequency domain.

Example 3.7

The frequency domain is more compact and useful when we are dealing with more than one sine

wave. For example, Figure 3.8 shows three sine waves, each with different amplitude and frequency.

All can be represented by three spikes in the frequency domain.

Figure 3.8  The time domain and frequency domain of three sine waves

a. Time-domain representation of three sine waves with frequencies 0, 8, and 16
b. Frequency-domain representation of the same three signals

Composite Signals

So far, we have focused on simple sine waves. Simple sine waves have many applications

in daily life. We can send a single sine wave to carry electric energy from one

place to another. For example, the power company sends a single sine wave with a frequency

of 60 Hz to distribute electric energy to houses and businesses. As another

example, we can use a single sine wave to send an alarm to a security center when a

burglar opens a door or window in the house. In the first case, the sine wave is carrying

energy; in the second, the sine wave is a signal of danger.

If we had only one single sine wave to convey a conversation over the phone, it

would make no sense and carry no information. We would just hear a buzz. As we will

see in Chapters 4 and 5, we need to send a composite signal to communicate data. A

composite signal is made of many simple sine waves.

A single-frequency sine wave is not useful in data communications;

we need to send a composite signal, a signal made of many simple sine waves.



In the early 1800s, the French mathematician Jean-Baptiste Fourier showed that

any composite signal is actually a combination of simple sine waves with different frequencies,

amplitudes, and phases. Fourier analysis is discussed in Appendix C; for our

purposes, we just present the concept.

According to Fourier analysis, any composite signal is a combination of

simple sine waves with different frequencies, amplitudes, and phases.

Fourier analysis is discussed in Appendix C.

A composite signal can be periodic or nonperiodic. A periodic composite signal

can be decomposed into a series of simple sine waves with discrete frequencies, frequencies

that have integer values (1, 2, 3, and so on). A nonperiodic composite signal

can be decomposed into a combination of an infinite number of simple sine waves

with continuous frequencies, frequencies that have real values.

If the composite signal is periodic, the decomposition gives a series of signals with discrete frequencies; if the composite signal is nonperiodic, the decomposition gives a combination of sine waves with continuous frequencies.

Example 3.8

Figure 3.9 shows a periodic composite signal with frequency f. This type of signal is not typical of those found in data communications. We can consider it to be three alarm systems, each with a

different frequency. The analysis of this signal can give us a good understanding of how to

decompose signals.

Figure 3.9  A composite periodic signal

[Figure: time-domain plot of the composite signal]

It is very difficult to manually decompose this signal into a series of simple sine waves.

However, there are tools, both hardware and software, that can help us do the job. We are not concerned

about how it is done; we are only interested in the result. Figure 3.10 shows the result of

decomposing the above signal in both the time and frequency domains.

The amplitude of the sine wave with frequency f is almost the same as the peak amplitude of the composite signal. The amplitude of the sine wave with frequency 3f is one-third of that of the first, and the amplitude of the sine wave with frequency 9f is one-ninth of the first. The frequency



Figure 3.10  Decomposition of a composite periodic signal in the time and frequency domains

a. Time-domain decomposition of a composite signal (components with frequencies f, 3f, and 9f)
b. Frequency-domain decomposition of the composite signal

of the sine wave with frequency f is the same as the frequency of the composite signal; it is called the fundamental frequency, or first harmonic. The sine wave with frequency 3f has a frequency of 3 times the fundamental frequency; it is called the third harmonic. The third sine wave with frequency 9f has a frequency of 9 times the fundamental frequency; it is called the ninth harmonic.

Note that the frequency decomposition of the signal is discrete; it has frequencies f, 3f, and 9f. Because f is an integral number, 3f and 9f are also integral numbers. There are no frequencies such as 1.2f or 2.6f. The frequency domain of a periodic composite signal is always made of discrete spikes.

Example 3.9

Figure 3.11 shows a nonperiodic composite signal. It can be the signal created by a microphone or a

telephone set when a word or two is pronounced. In this case, the composite signal cannot be periodic,

because that implies that we are repeating the same word or words with exactly the same tone.

Figure 3.11  The time and frequency domains of a nonperiodic signal

a. Time domain
b. Frequency domain (a continuous envelope of amplitudes for frequencies between 0 and 4 kHz)



In a time-domain representation of this composite signal, there are an infinite number

of simple sine frequencies. Although the number of frequencies in a human voice is

infinite, the range is limited. A normal human being can create a continuous range of

frequencies between 0 and 4 kHz.

Note that the frequency decomposition of the signal yields a continuous curve.

There are an infinite number of frequencies between 0.0 and 4000.0 (real values). To find

the amplitude related to frequency f, we draw a vertical line at f to intersect the envelope

curve. The height of the vertical line is the amplitude of the corresponding frequency.

Bandwidth

The range of frequencies contained in a composite signal is its bandwidth. The bandwidth

is normally a difference between two numbers. For example, if a composite signal

contains frequencies between 1000 and 5000, its bandwidth is 5000 - 1000, or 4000.

The bandwidth of a composite signal is the difference between the

highest and the lowest frequencies contained in that signal.

Figure 3.12 shows the concept of bandwidth. The figure depicts two composite

signals, one periodic and the other nonperiodic. The bandwidth of the periodic signal

contains all integer frequencies between 1000 and 5000 (1000, 1001, 1002, ...). The bandwidth of the nonperiodic signal has the same range, but the frequencies are continuous.

Figure 3.12  The bandwidth of periodic and nonperiodic composite signals

a. Bandwidth of a periodic signal (discrete spikes between 1000 and 5000 Hz; bandwidth = 5000 - 1000 = 4000 Hz)
b. Bandwidth of a nonperiodic signal (a continuous range between 1000 and 5000 Hz; bandwidth = 5000 - 1000 = 4000 Hz)



Example 3.10

If a periodic signal is decomposed into five sine waves with frequencies of 100, 300, 500, 700,

and 900 Hz, what is its bandwidth? Draw the spectrum, assuming all components have a maximum

amplitude of 10 V.

Solution

Let fh be the highest frequency, fl the lowest frequency, and B the bandwidth. Then

B = fh - fl = 900 - 100 = 800 Hz

The spectrum has only five spikes, at 100, 300, 500, 700, and 900 Hz (see Figure 3.13).

Figure 3.13  The bandwidth for Example 3.10

[Figure: five spikes of amplitude 10 V at 100, 300, 500, 700, and 900 Hz; bandwidth = 900 - 100 = 800 Hz]
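Because the bandwidth of a composite signal is just the difference between its highest and lowest component frequencies, the calculation of Example 3.10 can be sketched in Python (our helper, added for illustration only):

def bandwidth(component_frequencies_hz):
    # Bandwidth of a composite signal: highest minus lowest component frequency.
    return max(component_frequencies_hz) - min(component_frequencies_hz)

print(bandwidth([100, 300, 500, 700, 900]))   # 800 (Hz), as in Example 3.10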

Example 3.11

A periodic signal has a bandwidth of 20 Hz. The highest frequency is 60 Hz. What is the lowest

frequency? Draw the spectrum if the signal contains all frequencies of the same amplitude.

Solution

Let fh be the highest frequency, fl the lowest frequency, and B the bandwidth. Then

B = fh - fl  ⇒  20 = 60 - fl  ⇒  fl = 60 - 20 = 40 Hz

The spectrum contains all integer frequencies. We show this by a series of spikes (see Figure 3.14).

Figure 3.14  The bandwidth for Example 3.11

[Figure: spikes at the integer frequencies 40, 41, 42, ..., 58, 59, 60 Hz; bandwidth = 60 - 40 = 20 Hz]

Example 3.12

A nonperiodic composite signal has a bandwidth of 200 kHz, with a middle frequency of

140 kHz and peak amplitude of 20 V. The two extreme frequencies have an amplitude of 0. Draw

the frequency domain of the signal.



Solution

The lowest frequency must be at 40 kHz and the highest at 240 kHz. Figure 3.15 shows the frequency

domain and the bandwidth.

Figure 3.15  The bandwidth for Example 3.12

[Figure: frequency-domain plot rising from amplitude 0 at 40 kHz to the 20-V peak at 140 kHz and falling back to 0 at 240 kHz]

Example 3.13

An example of a nonperiodic composite signal is the signal propagated by an AM radio station. In the United States, each AM radio station is assigned a 10-kHz bandwidth. The total bandwidth dedicated to AM radio ranges from 530 to 1700 kHz. We will show the rationale behind this 10-kHz

bandwidth in Chapter 5.

Example 3.14

Another example of a nonperiodic composite signal is the signal propagated by an FM radio station.

In the United States, each FM radio station is assigned a 200-kHz bandwidth. The total

bandwidth dedicated to FM radio ranges from 88 to 108 MHz. We will show the rationale behind

this 200-kHz bandwidth in Chapter 5.

Example 3.15

Another example of a nonperiodic composite signal is the signal received by an old-fashioned

analog black-and-white TV. A TV screen is made up of pixels (picture elements) with each pixel

being either white or black. The screen is scanned 30 times per second. (Scanning is actually

60 times per second, but odd lines are scanned in one round and even lines in the next and then

interleaved.) If we assume a resolution of 525 x 700 (525 vertical lines and 700 horizontal lines),

which is a ratio of 3:4, we have 367,500 pixels per screen. If we scan the screen 30 times per second,

this is 367,500 x 30 = 11,025,000 pixels per second. The worst-case scenario is alternating

black and white pixels. In this case, we need to represent one color by the minimum amplitude

and the other color by the maximum amplitude. We can send 2 pixels per cycle. Therefore, we

need 11,025,000/2 = 5,512,500 cycles per second, or Hz. The bandwidth needed is 5.5125 MHz.

This worst-case scenario has such a low probability of occurrence that the assumption is that we

need only 70 percent of this bandwidth, which is 3.85 MHz. Since audio and synchronization signals

are also needed, a 4-MHz bandwidth has been set aside for each black and white TV channel.

An analog color TV channel has a 6-MHz bandwidth.

3.3 DIGITAL SIGNALS

In addition to being represented by an analog signal, information can also be represented

by a digital signal. For example, a 1 can be encoded as a positive voltage and a 0

as zero voltage. A digital signal can have more than two levels. In this case, we can



send more than 1 bit for each level. Figure 3.16 shows two signals, one with two levels

and the other with four.

Figure 3.16  Two digital signals: one with two signal levels and the other with four signal levels

a. A digital signal with two levels (8 bits sent in 1 s, bit rate = 8 bps)
b. A digital signal with four levels (16 bits sent in 1 s, bit rate = 16 bps)

We send 1 bit per level in part a of the figure and 2 bits per level in part b of the

figure. In general, if a signal has L levels, each level needs log2 L bits.

Appendix C reviews information about exponential and logarithmic functions.

Example 3.16

A digital signal has eight levels. How many bits are needed per level? We calculate the number of bits from the formula

Number of bits per level = log2 8 = 3

Each signal level is represented by 3 bits.

Example 3.17

A digital signal has nine levels. How many bits are needed per level? We calculate the number of

bits by using the formula. Each signal level is represented by 3.17 bits. However, this answer is

not realistic. The number of bits sent per level needs to be an integer as well as a power of 2. For

this example, 4 bits can represent one level.
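The reasoning in Examples 3.16 and 3.17 can be expressed as a short Python sketch; rounding up mirrors the step from 3.17 bits to 4 bits (the helper name bits_per_level is ours, added for illustration):

import math

def bits_per_level(levels):
    # Exact number of bits (log2 of the level count) and the integer
    # number of bits actually needed, rounded up.
    exact = math.log2(levels)
    return exact, math.ceil(exact)

print(bits_per_level(8))   # (3.0, 3): Example 3.16
print(bits_per_level(9))   # (about 3.17, 4): Example 3.17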



Bit Rate

Most digital signals are nonperiodic, and thus period and frequency are not appropriate

characteristics. Another term, bit rate (instead of frequency), is used to describe digital signals. The bit rate is the number of bits sent in 1 s, expressed in bits per

second (bps). Figure 3.16 shows the bit rate for two signals.

Example 3.18

Assume we need to download text documents at the rate of 100 pages per minute. What is the

required bit rate of the channel?

Solution

A page is an average of 24 lines with 80 characters in each line. If we assume that one character requires 8 bits, the bit rate is

100 × 24 × 80 × 8 = 1,536,000 bps = 1.536 Mbps

Example 3.19

A digitized voice channel, as we will see in Chapter 4, is made by digitizing a 4-kHz bandwidth

analog voice signal. We need to sample the signal at twice the highest frequency (two samples

per hertz). We assume that each sample requires 8 bits. What is the required bit rate?

Solution

The bit rate can be calculated as

2 × 4000 × 8 = 64,000 bps = 64 kbps

Example 3.20

What is the bit rate for high-definition TV (HDTV)?

Solution

HDTV uses digital signals to broadcast high-quality video signals. The HDTV screen is normally

a ratio of 16 : 9 (in contrast to 4 : 3 for regular TV), which means the screen is wider. There are

1920 by 1080 pixels per screen, and the screen is renewed 30 times per second. Twenty-four bits

represent one color pixel. We can calculate the bit rate as

1920 x 1080 x 30 x 24 = 1,492,992,000 or 1.5 Gbps

The TV stations reduce this rate to 20 to 40 Mbps through compression.
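The bit-rate arithmetic of Examples 3.18 to 3.20 is simple multiplication; the Python sketch below (added for illustration, with values taken from the examples) reproduces the three results:

# Example 3.18: 100 pages/min, 24 lines x 80 characters per page, 8 bits per character
text_bps = 100 * 24 * 80 * 8          # 1,536,000 bps

# Example 3.19: 4-kHz voice sampled at twice the highest frequency, 8 bits per sample
voice_bps = 2 * 4000 * 8              # 64,000 bps

# Example 3.20: HDTV, 1920 x 1080 pixels, 30 screens per second, 24 bits per pixel
hdtv_bps = 1920 * 1080 * 30 * 24      # 1,492,992,000 bps, roughly 1.5 Gbps

print(text_bps, voice_bps, hdtv_bps)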

Bit Length

We discussed the concept of the wavelength for an analog signal: the distance one cycle

occupies on the transmission medium. We can define something similar for a digital

signal: the bit length. The bit length is the distance one bit occupies on the transmission

medium.

Bit length = propagation speed × bit duration



Digital Signal as a Composite Analog Signal

Based on Fourier analysis, a digital signal is a composite analog signal. The bandwidth

is infinite, as you may have guessed. We can intuitively come up with this concept when

we consider a digital signal. A digital signal, in the time domain, comprises connected

vertical and horizontal line segments. A vertical line in the time domain means a frequency

of infinity (sudden change in time); a horizontal line in the time domain means a

frequency of zero (no change in time). Going from a frequency of zero to a frequency of

infinity (and vice versa) implies all frequencies in between are part of the domain.

Fourier analysis can be used to decompose a digital signal. If the digital signal is

periodic, which is rare in data communications, the decomposed signal has a frequency-domain

representation with an infinite bandwidth and discrete frequencies. If the digital

signal is nonperiodic, the decomposed signal still has an infinite bandwidth, but the frequencies

are continuous. Figure 3.17 shows a periodic and a nonperiodic digital signal

and their bandwidths.

Figure 3.17  The time and frequency domains of periodic and nonperiodic digital signals

a. Time and frequency domains of a periodic digital signal (discrete frequencies at f, 3f, 5f, 7f, 9f, 11f, 13f, ...)
b. Time and frequency domains of a nonperiodic digital signal (continuous frequencies)

Note that both bandwidths are infinite, but the periodic signal has discrete frequencies

while the nonperiodic signal has continuous frequencies.

Transmission of Digital Signals

The previous discussion asserts that a digital signal, periodic or nonperiodic, is a composite

analog signal with frequencies between zero and infinity. For the remainder of

the discussion, let us consider the case of a nonperiodic digital signal, similar to the

ones we encounter in data communications. The fundamental question is, How can we

send a digital signal from point A to point B? We can transmit a digital signal by using

one of two different approaches: baseband transmission or broadband transmission

(using modulation).



Baseband Transmission

Baseband transmission means sending a digital signal over a channel without changing

the digital signal to an analog signal. Figure 3.18 shows baseband transmission.

Figure 3.18  Baseband transmission

[Figure: a digital signal sent directly over a channel]

A digital signal is a composite analog signal with an infinite bandwidth.

Baseband transmission requires that we have a low-pass channel, a channel with a

bandwidth that starts from zero. This is the case if we have a dedicated medium with a

bandwidth constituting only one channel. For example, the entire bandwidth of a cable

connecting two computers is one single channel. As another example, we may connect

several computers to a bus, but not allow more than two stations to communicate at a

time. Again we have a low-pass channel, and we can use it for baseband communication.

Figure 3.19 shows two low-pass channels: one with a narrow bandwidth and the other

with a wide bandwidth. We need to remember that a low-pass channel with infinite bandwidth

is ideal, but we cannot have such a channel in real life. However, we can get close.

Figure 3.19  Bandwidths of two low-pass channels

a. Low-pass channel, wide bandwidth
b. Low-pass channel, narrow bandwidth

Let us study two cases of a baseband communication: a low-pass channel with a

wide bandwidth and one with a limited bandwidth.



Case 1: Low-Pass Channel with Wide Bandwidth

If we want to preserve the exact form of a nonperiodic digital signal with vertical segments

vertical and horizontal segments horizontal, we need to send the entire spectrum,

the continuous range of frequencies between zero and infinity. This is possible if we

have a dedicated medium with an infinite bandwidth between the sender and receiver

that preserves the exact amplitude of each component of the composite signal.

Although this may be possible inside a computer (e.g., between CPU and memory), it

is not possible between two devices. Fortunately, the amplitudes of the frequencies at

the border of the bandwidth are so small that they can be ignored. This means that if we

have a medium, such as a coaxial cable or fiber optic, with a very wide bandwidth, two

stations can communicate by using digital signals with very good accuracy, as shown in

Figure 3.20. Note that f1 is close to zero, and f2 is very high.

Figure 3.20  Baseband transmission using a dedicated medium

[Figure: the input signal bandwidth (from f1, close to zero, to f2), the bandwidth supported by the medium, and the output signal bandwidth; the input signal passes through a wide-bandwidth channel to produce the output signal]

Although the output signal is not an exact replica of the original signal, the data

can still be deduced from the received signal. Note that although some of the frequencies

are blocked by the medium, they are not critical.

Baseband transmission of a digital signal that preserves the shape of the digital signal is possible only if we have a low-pass channel with an infinite or very wide bandwidth.

Example 3.21

An example of a dedicated channel where the entire bandwidth of the medium is used as one single

channel is a LAN. Almost every wired LAN today uses a dedicated channel for two stations communicating

with each other. In a bus topology LAN with multipoint connections, only two stations

can communicate with each other at each moment in time (timesharing); the other stations need to

refrain from sending data. In a star topology LAN, the entire channel between each station and the

hub is used for communication between these two entities. We study LANs in Chapter 14.

Case 2: Low-Pass Channel with Limited Bandwidth

In a low-pass channel with limited bandwidth, we approximate the digital signal with

an analog signal. The level of approximation depends on the bandwidth available.

Rough Approximation Let us assume that we have a digital signal of bit rate N. If we

want to send analog signals to roughly simulate this signal, we need to consider the worst

case, a maximum number of changes in the digital signal. This happens when the signal



carries the sequence 01010101 ... or the sequence 10101010 .... To simulate these two cases, we need an analog signal of frequency f = N/2. Let 1 be the positive peak value and 0 be the negative peak value. We send 2 bits in each cycle; the frequency of the analog signal is one-half of the bit rate, or N/2. However, just this one frequency cannot make all patterns; we need more components. The maximum frequency is N/2. As an example of this concept, let us see how a digital signal with a 3-bit pattern can be simulated by using analog signals. Figure 3.21 shows the idea. The two similar cases (000 and 111) are simulated with a signal with frequency f = 0 and a phase of 180° for 000 and a phase of 0° for 111. The two worst cases (010 and 101) are simulated with an analog signal with frequency f = N/2 and phases of 180° and 0°. The other four cases can only be simulated with an analog signal with f = N/4 and phases of 180°, 270°, 90°, and 0°. In other words, we need a channel that can handle frequencies 0, N/4, and N/2. This rough approximation is referred to as using the first harmonic (N/2) frequency. The required bandwidth is

Bandwidth = N/2 - 0 = N/2

Figure 3.21  Rough approximation of a digital signal using the first harmonic for worst case

[Figure: the required bandwidth (frequencies 0, N/4, and N/2) and the eight 3-bit patterns, each shown with the analog signal that simulates it: f = 0 with phase 180° (000) and phase 0° (111); f = N/2 with phase 180° (010) and phase 0° (101); f = N/4 with phases 180°, 270°, 90°, and 0° for the remaining patterns]

Better Approximation To make the shape of the analog signal look more like that

of a digital signal, we need to add more harmonics of the frequencies. We need to

increase the bandwidth. We can increase the bandwidth to 3N/2, 5N/2, 7N/2, and so on.

Figure 3.22 shows the effect of this increase for one of the worst cases, the pattern 010.



Figure 3.22  Simulating a digital signal with first three harmonics

[Figure: the required bandwidth (0 to 5N/2) and the worst-case pattern 010 simulated with an analog signal of frequency N/2; with frequencies N/2 and 3N/2; and with frequencies N/2, 3N/2, and 5N/2]

Note that we have shown only the highest frequency for each harmonic. We use the first, third, and fifth harmonics. The required bandwidth is now 5N/2, the difference between the lowest frequency 0 and the highest frequency 5N/2. As we emphasized before, we need

to remember that the required bandwidth is proportional to the bit rate.

In baseband transmission, the required bandwidth is proportional to the bit rate;

if we need to send bits faster, we need more bandwidth.

By using this method, Table 3.2 shows how much bandwidth we need to send data

at different rates.

Table 3.2  Bandwidth requirements

Bit Rate         Harmonic 1      Harmonics 1, 3     Harmonics 1, 3, 5
n = 1 kbps       B = 500 Hz      B = 1.5 kHz        B = 2.5 kHz
n = 10 kbps      B = 5 kHz       B = 15 kHz         B = 25 kHz
n = 100 kbps     B = 50 kHz      B = 150 kHz        B = 250 kHz

Example 3.22

What is the required bandwidth of a low-pass channel if we need to send 1 Mbps by using baseband

transmission?



Solution

The answer depends on the accuracy desired.

a. The minimum bandwidth, a rough approximation, is B = bit rate / 2, or 500 kHz. We need a low-pass channel with frequencies between 0 and 500 kHz.

b. A better result can be achieved by using the first and the third harmonics with the required bandwidth B = 3 × 500 kHz = 1.5 MHz.

c. Still a better result can be achieved by using the first, third, and fifth harmonics with B = 5 × 500 kHz = 2.5 MHz.

Example 3.23

We have a low-pass channel with bandwidth 100 kHz. What is the maximum bit rate of this

channel?

Solution

The maximum bit rate can be achieved if we use the first harmonic. The bit rate is 2 times the

available bandwidth, or 200 kbps.
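The relationship used in Table 3.2 and in Examples 3.22 and 3.23 can be captured in a short Python sketch (our illustration of the harmonic rule stated above; the helper names are not from the text):

def baseband_bandwidth(bit_rate_bps, highest_harmonic=1):
    # Required bandwidth when harmonics 1, 3, 5, ... up to highest_harmonic are sent:
    # B = highest_harmonic * (bit rate) / 2.
    return highest_harmonic * bit_rate_bps / 2

def max_bit_rate(bandwidth_hz):
    # Rough approximation (first harmonic only): bit rate = 2 * bandwidth.
    return 2 * bandwidth_hz

print(baseband_bandwidth(1_000_000, 1))   # 500,000 Hz   (Example 3.22a)
print(baseband_bandwidth(1_000_000, 3))   # 1,500,000 Hz (Example 3.22b)
print(baseband_bandwidth(1_000_000, 5))   # 2,500,000 Hz (Example 3.22c)
print(max_bit_rate(100_000))              # 200,000 bps  (Example 3.23)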

Broadband Transmission (Using Modulation)

Broadband transmission or modulation means changing the digital signal to an analog

signal for transmission. Modulation allows us to use a bandpass channel-a channel with

a bandwidth that does not start from zero. This type of channel is more available than a

low-pass channel. Figure 3.23 shows a bandpass channel.

Figure 3.23  Bandwidth of a bandpass channel

[Figure: an amplitude-versus-frequency plot of a bandpass channel whose bandwidth does not start from zero]

Note that a low-pass channel can be considered a bandpass channel with the lower

frequency starting at zero.

Figure 3.24 shows the modulation of a digital signal. In the figure, a digital signal is

converted to a composite analog signal. We have used a single-frequency analog signal

(called a carrier); the amplitude of the carrier has been changed to look like the digital

signal. The result, however, is not a single-frequency signal; it is a composite signal, as

we will see in Chapter 5. At the receiver, the received analog signal is converted to digital,

and the result is a replica of what has been sent.

If the available channel is a bandpass channel, we cannot send the digital signal directly to

the channel; we need to convert the digital signal to an analog signal before transmission.



Figure 3.24  Modulation of a digital signal for transmission on a bandpass channel

[Figure: the input digital signal is fed to a digital/analog converter; the resulting input analog signal, whose bandwidth fits the available bandwidth, travels over the bandpass channel; at the other end an analog/digital converter produces the output digital signal]

Example 3.24

An example of broadband transmission using modulation is the sending of computer data through

a telephone subscriber line, the line connecting a resident to the central telephone office. These

lines, installed many years ago, are designed to carry voice (analog signal) with a limited bandwidth

(frequencies between 0 and 4 kHz). Although this channel can be used as a low-pass channel,

it is normally considered a bandpass channel. One reason is that the bandwidth is so narrow

(4 kHz) that if we treat the channel as low-pass and use it for baseband transmission, the maximum

bit rate can be only 8 kbps. The solution is to consider the channel a bandpass channel, convert the

digital signal from the computer to an analog signal, and send the analog signal. We can install two

converters to change the digital signal to analog and vice versa at the receiving end. The converter,

in this case, is called a modem (modulator/demodulator), which we discuss in detail in Chapter 5.

Example 3.25

A second example is the digital cellular telephone. For better reception, digital cellular phones

convert the analog voice signal to a digital signal (see Chapter 16). Although the bandwidth allocated

to a company providing digital cellular phone service is very wide, we still cannot send the

digital signal without conversion. The reason is that we only have a bandpass channel available

between caller and callee. For example, if the available bandwidth is W and we allow 1000 couples to talk simultaneously, this means the available channel is W/1000, just part of the entire

bandwidth. We need to convert the digitized voice to a composite analog signal before sending.

The digital cellular phones convert the analog audio signal to digital and then convert it again to

analog for transmission over a bandpass channel.

3.4 TRANSMISSION IMPAIRMENT

Signals travel through transmission media, which are not perfect. The imperfection causes

signal impairment. This means that the signal at the beginning of the medium is not the

same as the signal at the end of the medium. What is sent is not what is received. Three

causes of impairment are attenuation, distortion, and noise (see Figure 3.25).



Figure 3.25  Causes of impairment

[Figure: impairment divided into attenuation, distortion, and noise]

Attenuation

Attenuation means a loss of energy. When a signal, simple or composite, travels

through a medium, it loses some of its energy in overcoming the resistance of the

medium. That is why a wire carrying electric signals gets warm, if not hot, after a

while. Some of the electrical energy in the signal is converted to heat. To compensate

for this loss, amplifiers are used to amplify the signal. Figure 3.26 shows the effect of

attenuation and amplification.

Figure 3.26  Attenuation

[Figure: the original signal at point 1 is attenuated as it crosses the transmission medium to point 2 and is then amplified at point 3]

Decibel

To show that a signal has lost or gained strength, engineers use the unit of the decibel.

The decibel (dB) measures the relative strengths of two signals or one signal at two different

points. Note that the decibel is negative if a signal is attenuated and positive if a

signal is amplified.

dB = 10 log10 (P2/P1)

Variables P1 and P2 are the powers of a signal at points 1 and 2, respectively. Note that some engineering books define the decibel in terms of voltage instead of power. In this case, because power is proportional to the square of the voltage, the formula is dB = 20 log10 (V2/V1). In this text, we express dB in terms of power.



Example 3.26

Suppose a signal travels through a transmission medium and its power is reduced to one-half. This means that P2 = (1/2) P1. In this case, the attenuation (loss of power) can be calculated as

10 log10 (P2/P1) = 10 log10 (0.5 P1/P1) = 10 log10 0.5 = 10(-0.3) = -3 dB

A loss of 3 dB (-3 dB) is equivalent to losing one-half the power.

Example 3.27

A signal travels through an amplifier, and its power is increased 10 times. This means that P2 = 10 P1. In this case, the amplification (gain of power) can be calculated as

10 log10 (P2/P1) = 10 log10 (10 P1/P1) = 10 log10 10 = 10(1) = 10 dB

Example 3.28

One reason that engineers use the decibel to measure the changes in the strength of a signal is that

decibel numbers can be added (or subtracted) when we are measuring several points (cascading)

instead of just two. In Figure 3.27 a signal travels from point 1 to point 4. The signal is attenuated

by the time it reaches point 2. Between points 2 and 3, the signal is amplified. Again, between

points 3 and 4, the signal is attenuated. We can find the resultant decibel value for the signal just

by adding the decibel measurements between each set of points.

Figure 3.27  Decibels for Example 3.28

[Figure: a signal travels from point 1 to point 4; it loses 3 dB over the first transmission medium (point 1 to point 2), gains 7 dB through the amplifier (point 2 to point 3), and loses 3 dB over the second transmission medium (point 3 to point 4), for a net gain of 1 dB]

In this case, the decibel value can be calculated as

dB = -3 + 7 - 3 = +1

The signal has gained in power.

Example 3.29

Sometimes the decibel is used to measure signal power in milliwatts. In this case, it is referred to as dBm and is calculated as dBm = 10 log10 Pm, where Pm is the power in milliwatts. Calculate the power of a signal if its dBm = -30.



Solution

We can calculate the power in the signal as

dBm = 10 log10 Pm = -30  ⇒  log10 Pm = -3  ⇒  Pm = 10^-3 mW

Example 3.30

The loss in a cable is usually defined in decibels per kilometer (dB/km). If the signal at the

beginning of a cable with -0.3 dB/km has a power of 2 mW, what is the power of the signal

at 5 km?

Solution

The loss in the cable in decibels is 5 × (-0.3) = -1.5 dB. We can calculate the power as

dB = 10 log10 (P2/P1) = -1.5  ⇒  log10 (P2/P1) = -0.15  ⇒  P2/P1 = 10^-0.15 = 0.71

P2 = 0.71 × 2 mW = 1.4 mW
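The decibel arithmetic of Examples 3.26 through 3.30 can be checked with a few lines of Python (an added sketch; the helper names db and power_from_dbm are ours):

import math

def db(p2, p1):
    # Decibel change between two power levels (negative = attenuation, positive = gain).
    return 10 * math.log10(p2 / p1)

def power_from_dbm(dbm):
    # Power in milliwatts corresponding to a dBm value.
    return 10 ** (dbm / 10)

print(db(0.5, 1.0))             # about -3 dB  (Example 3.26)
print(db(10.0, 1.0))            # +10 dB       (Example 3.27)
print(-3 + 7 - 3)               # +1 dB        (Example 3.28, cascaded sections)
print(power_from_dbm(-30))      # 0.001 mW     (Example 3.29)
print(2 * 10 ** (-1.5 / 10))    # about 1.4 mW (Example 3.30)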

Distortion

Distortion means that the signal changes its form or shape. Distortion can occur in a

composite signal made of different frequencies. Each signal component has its own

propagation speed (see the next section) through a medium and, therefore, its own

delay in arriving at the final destination. Differences in delay may create a difference in

phase if the delay is not exactly the same as the period duration. In other words, signal

components at the receiver have phases different from what they had at the sender. The

shape of the composite signal is therefore not the same. Figure 3.28 shows the effect of

distortion on a composite signal.

Figure 3.28  Distortion

[Figure: at the sender the components are in phase and add up to the composite signal sent; at the receiver the components are out of phase, so the composite signal received has a different shape]



Noise

Noise is another cause of impairment. Several types of noise, such as thermal noise,

induced noise, crosstalk, and impulse noise, may corrupt the signal. Thermal noise is

the random motion of electrons in a wire which creates an extra signal not originally

sent by the transmitter. Induced noise comes from sources such as motors and appliances.

These devices act as a sending antenna, and the transmission medium acts as the

receiving antenna. Crosstalk is the effect of one wire on the other. One wire acts as a

sending antenna and the other as the receiving antenna. Impulse noise is a spike (a signal

with high energy in a very short time) that comes from power lines, lightning, and so

on. Figure 3.29 shows the effect of noise on a signal. We discuss error in Chapter 10.

Figure 3.29  Noise

[Figure: the transmitted signal at point 1 is combined with noise as it crosses the transmission medium, so the signal at point 2 is corrupted]

Signal-to-Noise Ratio (SNR)

As we will see later, to find the theoretical bit rate limit, we need to know the ratio of

the signal power to the noise power. The signal-to-noise ratio is defined as

SNR = (average signal power) / (average noise power)

We need to consider the average signal power and the average noise power because

these may change with time. Figure 3.30 shows the idea of SNR.

SNR is actually the ratio of what is wanted (signal) to what is not wanted (noise).

A high SNR means the signal is less corrupted by noise; a low SNR means the signal is

more corrupted by noise.

Because SNR is the ratio of two powers, it is often described in decibel units,

SNRdB, defined as

SNRdB = 10 log10 SNR

Example 3.31

The power of a signal is 10 mW and the power of the noise is 1 μW; what are the values of SNR and SNRdB?



Figure 3.30  Two cases of SNR: a high SNR and a low SNR

a. Large SNR (the signal dominates the noise in the signal + noise combination)
b. Small SNR (the noise is comparable to the signal in the signal + noise combination)

Solution

The values of SNR and SNRdB can be calculated as follows:

SNR = (10,000 μW)/(1 μW) = 10,000

SNRdB = 10 log10 10,000 = 10 log10 10^4 = 40

Example 3.32

The values of SNR and SNRdB for a noiseless channel are

SNR = (signal power)/0 = ∞

SNRdB = 10 log10 ∞ = ∞

We can never achieve this ratio in real life; it is an ideal.
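The SNR calculation of Example 3.31 can be reproduced in a minimal Python sketch (added for illustration only; the variable names are ours):

import math

signal_power_uw = 10_000    # 10 mW expressed in microwatts
noise_power_uw = 1          # 1 microwatt

snr = signal_power_uw / noise_power_uw    # 10,000
snr_db = 10 * math.log10(snr)             # about 40 dB, as in Example 3.31

print(snr, snr_db)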

3.5 DATA RATE LIMITS

A very important consideration in data communications is how fast we can send data, in

bits per second, over a channel. Data rate depends on three factors:

1. The bandwidth available

2. The level of the signals we use

3. The quality of the channel (the level of noise)

Two theoretical formulas were developed to calculate the data rate: one by Nyquist for

a noiseless channel, another by Shannon for a noisy channel.



Noiseless Channel: Nyquist Bit Rate

For a noiseless channel, the Nyquist bit rate formula defines the theoretical maximum

bit rate

BitRate = 2 × bandwidth × log2 L

In this formula, bandwidth is the bandwidth of the channel, L is the number of signal

levels used to represent data, and BitRate is the bit rate in bits per second.

According to the formula, we might think that, given a specific bandwidth, we can

have any bit rate we want by increasing the number of signal levels. Although the idea is theoretically correct, practically there is a limit. When we increase the number of signal levels, we impose a burden on the receiver. If the number of levels in a signal is just 2,

the receiver can easily distinguish between a 0 and a 1. If the level of a signal is 64, the

receiver must be very sophisticated to distinguish between 64 different levels. In other

words, increasing the levels of a signal reduces the reliability of the system.

Increasing the levels of a signal may reduce the reliability of the system.

Example 3.33

Does the Nyquist theorem bit rate agree with the intuitive bit rate described in baseband

transmission?

Solution

They match when we have only two levels. We said, in baseband transmission, the bit rate is 2

times the bandwidth if we use only the first harmonic in the worst case. However, the Nyquist

formula is more general than what we derived intuitively; it can be applied to baseband transmission

and modulation. Also, it can be applied when we have two or more levels of signals.

Example 3.34

Consider a noiseless channel with a bandwidth of 3000 Hz transmitting a signal with two signal

levels. The maximum bit rate can be calculated as

BitRate = 2 × 3000 × log2 2 = 6000 bps

Example 3.35

Consider the same noiseless channel transmitting a signal with four signal levels (for each level,

we send 2 bits). The maximum bit rate can be calculated as

BitRate = 2 × 3000 × log2 4 = 12,000 bps

Example 3.36

We need to send 265 kbps over a noiseless channel with a bandwidth of 20 kHz. How many signal levels

do we need?



Solution

We can use the Nyquist formula as shown:

265,000 = 2 × 20,000 × log2 L  ⇒  log2 L = 6.625  ⇒  L = 2^6.625 ≈ 98.7 levels

Since this result is not a power of 2, we need to either increase the number of levels or reduce

the bit rate. If we have 128 levels, the bit rate is 280 kbps. If we have 64 levels, the bit rate is

240 kbps.
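The Nyquist formula and the level calculation of Example 3.36 can be sketched in Python (our helpers, added only as an illustration):

import math

def nyquist_bit_rate(bandwidth_hz, levels):
    # Theoretical maximum bit rate of a noiseless channel.
    return 2 * bandwidth_hz * math.log2(levels)

def levels_needed(bit_rate_bps, bandwidth_hz):
    # Solve BitRate = 2 * B * log2(L) for L.
    return 2 ** (bit_rate_bps / (2 * bandwidth_hz))

print(nyquist_bit_rate(3000, 2))        # 6000 bps   (Example 3.34)
print(nyquist_bit_rate(3000, 4))        # 12,000 bps (Example 3.35)
print(levels_needed(265_000, 20_000))   # about 98.7 levels (Example 3.36)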

Noisy Channel: Shannon Capacity

In reality, we cannot have a noiseless channel; the channel is always noisy. In 1944,

Claude Shannon introduced a formula, called the Shannon capacity, to determine the

theoretical highest data rate for a noisy channel:

Capacity =bandwidth X log2 (1 + SNR)

In this formula, bandwidth is the bandwidth of the channel, SNR is the signal-tonoise

ratio, and capacity is the capacity of the channel in bits per second. Note that in the

Shannon formula there is no indication of the signal level, which means that no matter

how many levels we have, we cannot achieve a data rate higher than the capacity of the

channel. In other words, the formula defines a characteristic of the channel, not the method

of transmission.

Example 3.37

Consider an extremely noisy channel in which the value of the signal-to-noise ratio is almost

zero. In other words, the noise is so strong that the signal is faint. For this channel the capacity C

is calculated as

C = B log2 (1 + SNR) = B log2 (1 + 0) = B log2 1 = B × 0 = 0

This means that the capacity of this channel is zero regardless of the bandwidth. In other

words, we cannot receive any data through this channel.

Example 3.38

We can calculate the theoretical highest bit rate of a regular telephone line. A telephone line normally

has a bandwidth of 3000 Hz (300 to 3300 Hz) assigned for data communications. The signal-to-noise

ratio is usually 3162. For this channel the capacity is calculated as

C = B log2 (1 + SNR) = 3000 log2 (1 + 3162) = 3000 log2 3163 ≈ 3000 × 11.62 = 34,860 bps

This means that the highest bit rate for a telephone line is 34.860 kbps. If we want to send

data faster than this, we can either increase the bandwidth of the line or improve the signal-tonoise

ratio.



Example 3.39

The signal-to-noise ratio is often given in decibels. Assume that SNRdB = 36 and the channel bandwidth is 2 MHz. The theoretical channel capacity can be calculated as

SNRdB = 10 log10 SNR  ⇒  SNR = 10^(SNRdB/10)  ⇒  SNR = 10^3.6 = 3981

C = B log2 (1 + SNR) = 2 × 10^6 × log2 3982 ≈ 24 Mbps

Example 3.40

For practical purposes, when the SNR is very high, we can assume that SNR + 1 is almost the same as SNR. In these cases, the theoretical channel capacity can be simplified to

C = B × (SNRdB / 3)

For example, we can calculate the theoretical capacity of the previous example as

C = 2 MHz × (36/3) = 24 Mbps

Using Both Limits

In practice, we need to use both methods to find the limits and signal levels. Let us show

this with an example.

Example 3.41

We have a channel with a 1-MHz bandwidth. The SNR for this channel is 63. What are the appropriate

bit rate and signal level?

Solution

First, we use the Shannon formula to find the upper limit.

C = B log2 (1 + SNR) = 10^6 log2 (1 + 63) = 10^6 log2 64 = 6 Mbps

The Shannon formula gives us 6 Mbps, the upper limit. For better performance we choose

something lower, 4 Mbps, for example. Then we use the Nyquist formula to find the number of

signal levels.

4 Mbps = 2 × 1 MHz × log2 L  ⇒  L = 4

The Shannon capacity gives us the upper limit;

the Nyquist formula tells us how many signal levels we need.
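The two-step procedure of Example 3.41, Shannon for the upper limit and then Nyquist for the signal levels, can be sketched in Python (an added illustration; the helper names are ours):

import math

def shannon_capacity(bandwidth_hz, snr):
    # Theoretical highest data rate of a noisy channel.
    return bandwidth_hz * math.log2(1 + snr)

def nyquist_levels(bit_rate_bps, bandwidth_hz):
    # Signal levels needed to reach bit_rate_bps over bandwidth_hz (Nyquist).
    return 2 ** (bit_rate_bps / (2 * bandwidth_hz))

upper_limit = shannon_capacity(1e6, 63)   # 6,000,000 bps: the Shannon upper limit
chosen_rate = 4e6                         # choose something lower for better performance
print(upper_limit)
print(nyquist_levels(chosen_rate, 1e6))   # 4 signal levels, as in Example 3.41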



3.6 PERFORMANCE

Up to now, we have discussed the tools of transmitting data (signals) over a network

and how the data behave. One important issue in networking is the performance of the

network: how good is it? We discuss quality of service, an overall measurement of

network performance, in greater detail in Chapter 24. In this section, we introduce

terms that we need for future chapters.

Bandwidth

One characteristic that measures network performance is bandwidth. However, the term

can be used in two different contexts with two different measuring values: bandwidth in

hertz and bandwidth in bits per second.

Bandwidth in Hertz

We have discussed this concept. Bandwidth in hertz is the range of frequencies contained

in a composite signal or the range of frequencies a channel can pass. For example,

we can say the bandwidth of a subscriber telephone line is 4 kHz.

Bandwidth in Bits per Second

The term bandwidth can also refer to the number of bits per second that a channel, a

link, or even a network can transmit. For example, one can say the bandwidth of a Fast

Ethernet network (or the links in this network) is a maximum of 100 Mbps. This means

that this network can send 100 Mbps.

Relationship

There is an explicit relationship between the bandwidth in hertz and bandwidth in bits

per second. Basically, an increase in bandwidth in hertz means an increase in bandwidth

in bits per second. The relationship depends on whether we have baseband transmission

or transmission with modulation. We discuss this relationship in Chapters 4 and 5.

In networking, we use the term bandwidth in two contexts.

o The first, bandwidth in hertz, refers to the range of frequencies in a composite

signal or the range of frequencies that a channel can pass.

o The second, bandwidth in bits per second, refers to the speed of bit transmission

in a channel or link.

Example 3.42

The bandwidth of a subscriber line is 4 kHz for voice or data. The bandwidth of this line for data transmission

can be up to 56,000 bps using a sophisticated modem to change the digital signal to analog.

Example 3.43

If the telephone company improves the quality of the line and increases the bandwidth to 8 kHz,

we can send 112,000 bps by using the same technology as mentioned in Example 3.42.



Throughput

The throughput is a measure of how fast we can actually send data through a network.

Although, at first glance, bandwidth in bits per second and throughput seem the same,

they are different. A link may have a bandwidth of B bps, but we can only send T bps

through this link with T always less than B. In other words, the bandwidth is a potential

measurement of a link; the throughput is an actual measurement of how fast we can

send data. For example, we may have a link with a bandwidth of 1 Mbps, but the

devices connected to the end of the link may handle only 200 kbps. This means that we

cannot send more than 200 kbps through this link.

Imagine a highway designed to transmit 1000 cars per minute from one point

to another. However, if there is congestion on the road, this figure may be reduced to

100 cars per minute. The bandwidth is 1000 cars per minute; the throughput is 100 cars

per minute.

Example 3.44

A network with bandwidth of 10 Mbps can pass only an average of 12,000 frames per minute

with each frame carrying an average of 10,000 bits. What is the throughput of this network?

Solution

We can calculate the throughput as

Throughput = (12,000 × 10,000) / 60 = 2 Mbps

The throughput is almost one-fifth of the bandwidth in this case.
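The throughput calculation of Example 3.44 in a minimal Python sketch (added for illustration, using the example's numbers):

frames_per_minute = 12_000
bits_per_frame = 10_000

throughput_bps = frames_per_minute * bits_per_frame / 60
print(throughput_bps)    # 2,000,000 bps = 2 Mbps, about one-fifth of the 10 Mbps bandwidth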

Latency (Delay)

The latency or delay defines how long it takes for an entire message to completely

arrive at the destination from the time the first bit is sent out from the source. We can

say that latency is made of four components: propagation time, transmission time,

queuing time and processing delay.

Latency =propagation time + transmission time + queuing time + processing delay

Propagation Time

Propagation time measures the time required for a bit to travel from the source to the

destination. The propagation time is calculated by dividing the distance by the propagation

speed.

Propagation time = Distance / Propagation speed

The propagation speed of electromagnetic signals depends on the medium and on

the frequency of the signal. For example, in a vacuum, light is propagated with a speed of 3 × 10^8 m/s. It is lower in air; it is much lower in cable.



Example 3.45

What is the propagation time if the distance between the two points is 12,000 km? Assume the

propagation speed to be 2.4 × 10^8 m/s in cable.

Solution

We can calculate the propagation time as

Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms

The example shows that a bit can go over the Atlantic Ocean in only 50 ms if there is a direct

cable between the source and the destination.

Transmission Time

In data communications we don't send just 1 bit, we send a message. The first bit may

take a time equal to the propagation time to reach its destination; the last bit also may

take the same amount of time. However, there is a time between the first bit leaving the

sender and the last bit arriving at the receiver. The first bit leaves earlier and arrives earlier;

the last bit leaves later and arrives later. The time required for transmission of a

message depends on the size of the message and the bandwidth of the channel.

Transmission time = Message size / Bandwidth

Example 3.46

What are the propagation time and the transmission time for a 2.5-kbyte message (an e-mail) if

the bandwidth of the network is 1 Gbps? Assume that the distance between the sender and the

receiver is 12,000 km and that light travels at 2.4 × 10^8 m/s.

Solution

We can calculate the propagation and transmission time as

Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms

Transmission time = (2500 × 8) / 10^9 = 0.020 ms

Note that in this case, because the message is short and the bandwidth is high, the

dominant factor is the propagation time, not the transmission time. The transmission

time can be ignored.

Example 3.47

What are the propagation time and the transmission time for a 5-Mbyte message (an image) if the

bandwidth of the network is 1 Mbps? Assume that the distance between the sender and the

receiver is 12,000 km and that light travels at 2.4 × 10^8 m/s.



Solution

We can calculate the propagation and transmission times as

Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms

Transmission time = (5,000,000 × 8) / 10^6 = 40 s

Note that in this case, because the message is very long and the bandwidth is not very

high, the dominant factor is the transmission time, not the propagation time. The propagation

time can be ignored.
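The latency components of Examples 3.45 to 3.47 follow directly from the two formulas above; a Python sketch (added for illustration, with helper names of our own choosing):

def propagation_time(distance_m, propagation_speed_mps):
    return distance_m / propagation_speed_mps

def transmission_time(message_bits, bandwidth_bps):
    return message_bits / bandwidth_bps

# Example 3.46: 2.5-kbyte message, 1-Gbps network, 12,000 km at 2.4 x 10^8 m/s
print(propagation_time(12_000 * 1000, 2.4e8))    # 0.05 s  = 50 ms
print(transmission_time(2500 * 8, 1e9))          # 2e-05 s = 0.020 ms

# Example 3.47: 5-Mbyte message over a 1-Mbps network
print(transmission_time(5_000_000 * 8, 1e6))     # 40 s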

Queuing Time

The third component in latency is the queuing time, the time needed for each intermediate

or end device to hold the message before it can be processed. The queuing time is

not a fixed factor; it changes with the load imposed on the network. When there is

heavy traffic on the network, the queuing time increases. An intermediate device, such

as a router, queues the arrived messages and processes them one by one. If there are

many messages, each message will have to wait.

Bandwidth-Delay Product

Bandwidth and delay are two performance metrics of a link. However, as we will see in

this chapter and future chapters, what is very important in data communications is the

product of the two, the bandwidth-delay product. Let us elaborate on this issue, using

two hypothetical cases as examples.

o Case 1. Figure 3.31 shows case 1.

Figure 3.31  Filling the link with bits for case 1

[Figure: a sender and a receiver connected by a link with bandwidth 1 bps and delay 5 s; bandwidth × delay = 5 bits, so at most 5 bits can be on the link at any time]

Let us assume that we have a link with a bandwidth of 1 bps (unrealistic, but good

for demonstration purposes). We also assume that the delay of the link is 5 s (also

unrealistic). We want to see what the bandwidth-delay product means in this case.



Looking at the figure, we can say that this product 1 × 5 is the maximum number of

bits that can fill the link. There can be no more than 5 bits at any time on the link.

o Case 2. Now assume we have a bandwidth of 4 bps. Figure 3.32 shows that there

can be maximum 4 x 5 = 20 bits on the line. The reason is that, at each second,

there are 4 bits on the line; the duration of each bit is 0.25 s.

Figure 3.32  Filling the link with bits in case 2

[Figure: a link with bandwidth 4 bps and delay 5 s; bandwidth × delay = 20 bits, shown filling the link after 1 s, 2 s, and so on]

The above two cases show that the product of bandwidth and delay is the number of

bits that can fill the link. This measurement is important if we need to send data in bursts

and wait for the acknowledgment of each burst before sending the next one. To use the

maximum capability of the link, we need to make the size of our burst 2 times the product

of bandwidth and delay; we need to fill up the full-duplex channel (two directions). The

sender should send a burst of data of (2 x bandwidth x delay) bits. The sender then waits

for receiver acknowledgment for part of the burst before sending another burst. The

amount 2 × bandwidth × delay is the number of bits that can be in transition at any time.

The bandwidth-delay product defines the number of bits that can fill the link.
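The two cases above, and the matching burst size, can be sketched in Python (an added illustration; the function name bits_in_flight is ours):

def bits_in_flight(bandwidth_bps, delay_s):
    # The bandwidth-delay product: the number of bits that can fill the link.
    return bandwidth_bps * delay_s

print(bits_in_flight(1, 5))        # 5 bits  (case 1)
print(bits_in_flight(4, 5))        # 20 bits (case 2)

# To fill the full-duplex channel, the burst should be twice this product.
print(2 * bits_in_flight(4, 5))    # 40 bits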

Example 3.48

We can think about the link between two points as a pipe. The cross section of the pipe represents

the bandwidth, and the length of the pipe represents the delay. We can say the volume of the pipe

defines the bandwidth-delay product, as shown in Figure 3.33.

Figure 3.33  Concept of bandwidth-delay product

[Figure: the link as a pipe whose length represents the delay and whose cross section represents the bandwidth; the volume of the pipe represents the bandwidth-delay product]



Jitter

Another performance issue that is related to delay is jitter. We can roughly say that jitter

is a problem if different packets of data encounter different delays and the application

using the data at the receiver site is time-sensitive (audio and video data, for example).

If the delay for the first packet is 20 ms, for the second is 45 ms, and for the third is

40 ms, then the real-time application that uses the packets endures jitter. We discuss jitter

in greater detail in Chapter 29.

3.7 RECOMMENDED READING

For more details about subjects discussed in this chapter, we recommend the following

books. The items in brackets [...] refer to the reference list at the end of the text.

Books

Data and signals are elegantly discussed in Chapters 1 to 6 of [Pea92]. [Cou01] gives

an excellent coverage about signals in Chapter 2. More advanced materials can be

found in [Ber96]. [Hsu03] gives a good mathematical approach to signaling. Complete

coverage of Fourier Analysis can be found in [Spi74]. Data and signals are discussed in

Chapter 3 of [Sta04] and Section 2.1 of [Tan03].

3.8 KEY TERMS

analog

analog data

analog signal

attenuation

bandpass channel

bandwidth

baseband transmission

bit rate

bits per second (bps)

broadband transmission

composite signal

cycle

decibel (dB)

digital

digital data

digital signal

distortion

Fourier analysis

frequency

frequency-domain

fundamental frequency

harmonic

Hertz (Hz)

jitter

low-pass channel

noise

nonperiodic signal

Nyquist bit rate

peak amplitude

period

periodic signal

phase

processing delay

propagation speed



propagation time

queuing time

Shannon capacity

signal

signal-to-noise ratio (SNR)

sine wave

throughput

time-domain

transmission time

wavelength

3.9 SUMMARY

o Data must be transformed to electromagnetic signals to be transmitted.

o Data can be analog or digital. Analog data are continuous and take continuous

values. Digital data have discrete states and take discrete values.

o Signals can be analog or digital. Analog signals can have an infinite number of values in a range; digital signals can have only a limited number of values.

o In data communications, we commonly use periodic analog signals and nonperiodic

digital signals.

o Frequency and period are the inverse of each other.

o Frequency is the rate of change with respect to time.

o Phase describes the position of the waveform relative to time 0.

o A complete sine wave in the time domain can be represented by one single spike in

the frequency domain.

o A single-frequency sine wave is not useful in data communications; we need to

send a composite signal, a signal made of many simple sine waves.

o According to Fourier analysis, any composite signal is a combination of simple

sine waves with different frequencies, amplitudes, and phases.

o The bandwidth of a composite signal is the difference between the highest and the

lowest frequencies contained in that signal.

o A digital signal is a composite analog signal with an infinite bandwidth.

o Baseband transmission of a digital signal that preserves the shape of the digital

signal is possible only if we have a low-pass channel with an infinite or very wide

bandwidth.

o If the available channel is a bandpass channel, we cannot send a digital signal

directly to the channel; we need to convert the digital signal to an analog signal

before transmission.

o For a noiseless channel, the Nyquist bit rate formula defines the theoretical maximum

bit rate. For a noisy channel, we need to use the Shannon capacity to find the

maximum bit rate.

o Attenuation, distortion, and noise can impair a signal.

o Attenuation is the loss of a signal's energy due to the resistance of the medium.

o Distortion is the alteration of a signal due to the differing propagation speeds of

each of the frequencies that make up a signal.

o Noise is the external energy that corrupts a signal.

o The bandwidth-delay product defines the number of bits that can fill the link.



3.10 PRACTICE SET

Review Questions

1. What is the relationship between period and frequency?

2. What does the amplitude of a signal measure? What does the frequency of a signal

measure? What does the phase of a signal measure?

3. How can a composite signal be decomposed into its individual frequencies?

4. Name three types of transmission impairment.

5. Distinguish between baseband transmission and broadband transmission.

6. Distinguish between a low-pass channel and a band-pass channel.

7. What does the Nyquist theorem have to do with communications?

8. What does the Shannon capacity have to do with communications?

9. Why do optical signals used in fiber optic cables have a very short wavelength?

10. Can we say if a signal is periodic or nonperiodic by just looking at its frequency

domain plot? How?

11. Is the frequency domain plot of a voice signal discrete or continuous?

12. Is the frequency domain plot of an alarm system discrete or continuous?

13. We send a voice signal from a microphone to a recorder. Is this baseband or broadband

transmission?

14. We send a digital signal from one station on a LAN to another station. Is this baseband

or broadband transmission?

15. We modulate several voice signals and send them through the air. Is this baseband

or broadband transmission?

Exercises

16. Given the frequencies listed below, calculate the corresponding periods.

a. 24Hz

b. 8 MHz

c. 140 KHz

17. Given the following periods, calculate the corresponding frequencies.

a. 5 s

b. 12 µs

c. 220 ns

18. What is the phase shift for the following?

a. A sine wave with the maximum amplitude at time zero

b. A sine wave with maximum amplitude after 1/4 cycle

c. A sine wave with zero amplitude after 3/4 cycle and increasing

19. What is the bandwidth of a signal that can be decomposed into five sine waves

with frequencies at 0, 20, 50, 100, and 200 Hz? All peak amplitudes are the same.

Draw the bandwidth.



20. A periodic composite signal with a bandwidth of 2000 Hz is composed of two sine

waves. The first one has a frequency of 100 Hz with a maximum amplitude of 20 V;

the second one has a maximum amplitude of 5 V. Draw the bandwidth.

21. Which signal has a wider bandwidth, a sine wave with a frequency of 100 Hz or a

sine wave with a frequency of 200 Hz?

22. What is the bit rate for each of the following signals?

a. A signal in which 1 bit lasts 0.001 s

b. A signal in which 1 bit lasts 2 ms

c. A signal in which 10 bits last 20 µs

23. A device is sending out data at the rate of 1000 bps.

a. How long does it take to send out 10 bits?

b. How long does it take to send out a single character (8 bits)?

c. How long does it take to send a file of 100,000 characters?

24. What is the bit rate for the signal in Figure 3.34?

Figure 3.34 Exercise 24 [a digital signal; a 16 ns interval is marked on the time axis]

25. What is the frequency of the signal in Figure 3.35?

Figure 3.35 Exercise 25 [a periodic signal; a 4 ms interval is marked on the time axis]

26. What is the bandwidth of the composite signal shown in Figure 3.36?

Figure 3.36 Exercise 26 [a frequency-domain plot of the composite signal; horizontal axis: frequency]

27. A periodic composite signal contains frequencies from 10 to 30 KHz, each with an

amplitude of 10 V. Draw the frequency spectrum.

28. A non-periodic composite signal contains frequencies from 10 to 30 KHz. The

peak amplitude is 10 V for the lowest and the highest signals and is 30 V for the

20-KHz signal. Assuming that the amplitudes change gradually from the minimum

to the maximum, draw the frequency spectrum.

29. A TV channel has a bandwidth of 6 MHz. If we send a digital signal using one

channel, what are the data rates if we use one harmonic, three harmonics, and five

harmonics?

30. A signal travels from point A to point B. At point A, the signal power is 100 W. At

point B, the power is 90 W. What is the attenuation in decibels?

31. The attenuation of a signal is -10 dB. What is the final signal power if it was originally

5 W?

32. A signal has passed through three cascaded amplifiers, each with a 4 dB gain.

What is the total gain? How much is the signal amplified?

33. If the bandwidth of the channel is 5 Kbps, how long does it take to send a frame of

100,000 bits out of this device?

34. The light of the sun takes approximately eight minutes to reach the earth. What is

the distance between the sun and the earth?

35. A signal has a wavelength of 1 µm in air. How far can the front of the wave travel

during 1000 periods?

36. A line has a signal-to-noise ratio of 1000 and a bandwidth of 4000 KHz. What is

the maximum data rate supported by this line?

37. We measure the performance of a telephone line (4 KHz of bandwidth). When the

signal is 10 V, the noise is 5 mV. What is the maximum data rate supported by this

telephone line?

38. A file contains 2 million bytes. How long does it take to download this file using a

56-Kbps channel? 1-Mbps channel?

39. A computer monitor has a resolution of 1200 by 1000 pixels. If each pixel uses

1024 colors, how many bits are needed to send the complete contents of a screen?

40. A signal with 200 milliwatts power passes through 10 devices, each with an average

noise of 2 microwatts. What is the SNR? What is the SNRdB?

41. If the peak voltage value of a signal is 20 times the peak voltage value of the noise, what is the SNR? What is the SNRdB?

42. What is the theoretical capacity of a channel in each of the following cases:

a. Bandwidth: 20 KHz, SNRdB = 40
b. Bandwidth: 200 KHz, SNRdB = 4
c. Bandwidth: 1 MHz, SNRdB = 20

43. We need to upgrade a channel to a higher bandwidth. Answer the following

questions:

a. How is the rate improved if we double the bandwidth?

b. How is the rate improved if we double the SNR?



44. We have a channel with 4 KHz bandwidth. If we want to send data at 100 Kbps, what is the minimum SNRdB? What is the SNR?

45. What is the transmission time of a packet sent by a station if the length of the

packet is 1 million bytes and the bandwidth of the channel is 200 Kbps?

46. What is the length of a bit in a channel with a propagation speed of 2 x 10^8 m/s if

the channel bandwidth is

a. 1 Mbps?

b. 10 Mbps?

c. 100 Mbps?

47. How many bits can fit on a link with a 2 ms delay if the bandwidth of the link is
a. 1 Mbps?
b. 10 Mbps?
c. 100 Mbps?
48. What is the total delay (latency) for a frame of size 5 million bits that is being sent on a link with 10 routers, each having a queuing time of 2 µs and a processing time of 1 µs? The length of the link is 2000 km. The speed of light inside the link is 2 x 10^8 m/s. The link has a bandwidth of 5 Mbps. Which component of the total delay is dominant? Which one is negligible?



CHAPTER 4

Digital Transmission

A computer network is designed to send information from one point to another. This

information needs to be converted to either a digital signal or an analog signal for transmission.

In this chapter, we discuss the first choice, conversion to digital signals; in

Chapter 5, we discuss the second choice, conversion to analog signals.

We discussed the advantages and disadvantages of digital transmission over analog

transmission in Chapter 3. In this chapter, we show the schemes and techniques that

we use to transmit data digitally. First, we discuss digital-to-digital conversion techniques,

methods which convert digital data to digital signals. Second, we discuss analogto-digital

conversion techniques, methods which change an analog signal to a digital

signal. Finally, we discuss transmission modes.

4.1 DIGITAL-TO-DIGITAL CONVERSION

In Chapter 3, we discussed data and signals. We said that data can be either digital or

analog. We also said that signals that represent data can also be digital or analog. In this

section, we see how we can represent digital data by using digital signals. The conversion

involves three techniques: line coding, block coding, and scrambling. Line coding

is always needed; block coding and scrambling may or may not be needed.

Line Coding

Line coding is the process of converting digital data to digital signals. We assume that

data, in the form of text, numbers, graphical images, audio, or video, are stored in computer

memory as sequences of bits (see Chapter 1). Line coding converts a sequence of

bits to a digital signal. At the sender, digital data are encoded into a digital signal; at the

receiver, the digital data are recreated by decoding the digital signal. Figure 4.1 shows

the process.

Characteristics

Before discussing different line coding schemes, we address their common characteristics.

Figure 4.1 Line coding and decoding [the sender encodes digital data (0101...101) into a digital signal; the receiver decodes the digital signal back into the same digital data]

Signal Element Versus Data Element Let us distinguish between a data element

and a signal element. In data communications, our goal is to send data elements. A

data element is the smallest entity that can represent a piece of information: this is the

bit. In digital data communications, a signal element carries data elements. A signal

element is the shortest unit (timewise) of a digital signal. In other words, data elements

are what we need to send; signal elements are what we can send. Data elements are

being carried; signal elements are the carriers.

We define a ratio r which is the number of data elements carried by each signal element.

Figure 4.2 shows several situations with different values of r.

Figure 4.2 Signal element versus data element [a. one data element per one signal element (r = 1); b. one data element per two signal elements (r = 1/2); c. two data elements per one signal element (r = 2); d. four data elements per three signal elements (r = 4/3)]

In part a of the figure, one data element is carried by one signal element (r = 1). In part b of the figure, we need two signal elements (two transitions) to carry each data element (r = 1/2). We will see later that the extra signal element is needed to guarantee synchronization. In part c of the figure, a signal element carries two data elements (r = 2). Finally, in part d, a group of 4 bits is being carried by a group of three signal elements (r = 4/3). For every line coding scheme we discuss, we will give the value of r.

An analogy may help here. Suppose each data element is a person who needs to be carried from one place to another. We can think of a signal element as a vehicle that can carry people. When r = 1, it means each person is driving a vehicle. When r > 1, it means more than one person is travelling in a vehicle (a carpool, for example). We can also have the case where one person is driving a car and a trailer (r = 1/2).

Data Rate Versus Signal Rate The data rate defines the number of data elements (bits) sent in 1 s. The unit is bits per second (bps). The signal rate is the number of signal elements sent in 1 s. The unit is the baud. There are several common terminologies

used in the literature. The data rate is sometimes called the bit rate; the signal rate is

sometimes called the pulse rate, the modulation rate, or the baud rate.

One goal in data communications is to increase the data rate while decreasing the

signal rate. Increasing the data rate increases the speed of transmission; decreasing the

signal rate decreases the bandwidth requirement. In our vehicle-people analogy, we

need to carry more people in fewer vehicles to prevent traffic jams. We have a limited

bandwidth in our transportation system.

We now need to consider the relationship between data rate and signal rate (bit rate

and baud rate). This relationship, of course, depends on the value of r. It also depends

on the data pattern. If we have a data pattern of all 1s or all 0s, the signal rate may be different from a data pattern of alternating 0s and 1s. To derive a formula for the relationship,

we need to define three cases: the worst, best, and average. The worst case is

when we need the maximum signal rate; the best case is when we need the minimum.

In data communications, we are usually interested in the average case. We can formulate

the relationship between data rate and signal rate as

S = c x N x (1/r)   baud

where N is the data rate (bps); c is the case factor, which varies for each case; S is the signal rate (the number of signal elements per second); and r is the previously defined factor.

Example 4.1

A signal is carrying data in which one data element is encoded as one signal element (r = 1). If the bit rate is 100 kbps, what is the average value of the baud rate if c is between 0 and 1?

Solution

We assume that the average value of c is 1/2. The baud rate is then

S = c x N x (1/r) = (1/2) x 100,000 x (1/1) = 50,000 = 50 kbaud
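A one-line Python sketch of this relationship (hypothetical function name, not from the text), checked against Example 4.1:

def signal_rate(N_bps, c=0.5, r=1.0):
    # S = c * N * (1/r): N is the data rate, c the case factor, r data elements per signal element.
    return c * N_bps / r

print(signal_rate(100_000, c=0.5, r=1))   # 50000.0 baud = 50 kbaud, as in Example 4.1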

Bandwidth We discussed in Chapter 3 that a digital signal that carries information is

nonperiodic. We also showed that the bandwidth of a nonperiodic signal is continuous

with an infinite range. However, most digital signals we encounter in real life have a



bandwidth with finite values. In other words, the bandwidth is theoretically infinite, but

many of the components have such a small amplitude that they can be ignored. The

effective bandwidth is finite. From now on, when we talk about the bandwidth of a digital

signal, we need to remember that we are talking about this effective bandwidth.

Although the actual bandwidth of a digital signal is infinite, the effective bandwidth is finite.

We can say that the baud rate, not the bit rate, determines the required bandwidth

for a digital signal. If we use the transportation analogy, the number of vehicles affects

the traffic, not the number of people being carried. More changes in the signal mean

injecting more frequencies into the signal. (Recall that frequency means change and

change means frequency.) The bandwidth reflects the range of frequencies we need.

There is a relationship between the baud rate (signal rate) and the bandwidth. Bandwidth

is a complex idea. When we talk about the bandwidth, we normally define a

range of frequencies. We need to know where this range is located as well as the values

of the lowest and the highest frequencies. In addition, the amplitude (if not the phase) of each component is an important issue. In other words, we need more information

about the bandwidth than just its value; we need a diagram of the bandwidth. We will

show the bandwidth for most schemes we discuss in the chapter. For the moment, we

can say that the bandwidth (range of frequencies) is proportional to the signal rate

(baud rate). The minimum bandwidth can be given as

Bmin = c x N x (1/r)

We can solve for the maximum data rate if the bandwidth of the channel is given.

Nmax = (1/c) x B x r

Example 4.2

The maximum data rate of a channel (see Chapter 3) is Nmax = 2 x B x log2 L (defined by the Nyquist formula). Does this agree with the previous formula for Nmax?

Solution

A signal with L levels actually can carry log2 L bits per level. If each level corresponds to one signal element and we assume the average case (c = 1/2), then we have

Nmax = (1/c) x B x r = 2 x B x log2 L

which is the same as the Nyquist formula.
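The agreement can also be checked numerically; the sketch below (hypothetical channel values: a 3000-Hz bandwidth and L = 4 levels) evaluates both expressions:

import math

def n_max(B_hz, L, c=0.5):
    # N_max = (1/c) * B * r, with r = log2(L) bits carried per signal element.
    return (1 / c) * B_hz * math.log2(L)

def nyquist_bit_rate(B_hz, L):
    return 2 * B_hz * math.log2(L)

print(n_max(3000, 4), nyquist_bit_rate(3000, 4))   # both print 12000.0 bps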

Baseline Wandering In decoding a digital signal, the receiver calculates a running

average of the received signal power. This average is called the baseline. The incoming

signal power is evaluated against this baseline to determine the value of the data element.

A long string of 0s or 1s can cause a drift in the baseline (baseline wandering)

and make it difficult for the receiver to decode correctly. A good line coding scheme

needs to prevent baseline wandering.



DC Components When the voltage level in a digital signal is constant for a while,

the spectrum creates very low frequencies (results of Fourier analysis). These frequencies

around zero, called DC (direct-current) components, present problems for a

system that cannot pass low frequencies or a system that uses electrical coupling

(via a transformer). For example, a telephone line cannot pass frequencies below

200 Hz. Also a long-distance link may use one or more transformers to isolate

different parts of the line electrically. For these systems, we need a scheme with no

DC component.

Self-synchronization To correctly interpret the signals received from the sender,

the receiver's bit intervals must correspond exactly to the sender's bit intervals. If the

receiver clock is faster or slower, the bit intervals are not matched and the receiver might

misinterpret the signals. Figure 4.3 shows a situation in which the receiver has a shorter

bit duration. The sender sends 10110001, while the receiver receives 110111000011.

Figure 4.3 Effect of lack of synchronization [a. sent: 10110001; b. received: 110111000011; horizontal axis: time]

A self-synchronizing digital signal includes timing information in the data being

transmitted. This can be achieved if there are transitions in the signal that alert the

receiver to the beginning, middle, or end of the pulse. If the receiver's clock is out of

synchronization, these points can reset the clock.

Example 4.3

In a digital transmission, the receiver clock is 0.1 percent faster than the sender clock. How many

extra bits per second does the receiver receive if the data rate is 1 kbps? How many if the data

rate is 1 Mbps?

Solution

At 1 kbps, the receiver receives 1001 bps instead of 1000 bps.

1000 bits sent → 1001 bits received → 1 extra bps

At 1 Mbps, the receiver receives 1,001,000 bps instead of 1,000,000 bps.

1,000,000 bits sent → 1,001,000 bits received → 1000 extra bps
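The same arithmetic in a tiny sketch (hypothetical function name; the 0.1 percent figure is the one used in the example):

def extra_bps(data_rate_bps, clock_skew=0.001):
    # A receiver clock that runs 0.1 percent fast samples 0.1 percent more bits each second.
    return data_rate_bps * clock_skew

print(extra_bps(1_000))        # 1.0 extra bit per second at 1 kbps
print(extra_bps(1_000_000))    # 1000.0 extra bits per second at 1 Mbps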

Built-in Error Detection It is desirable to have a built-in error-detecting capability

in the generated code to detect some of or all the errors that occurred during transmission.

Some encoding schemes that we will discuss have this capability to some extent.

Immunity to Noise and Interference Another desirable code characteristic is a code

that is immune to noise and other interferences. Some encoding schemes that we will

discuss have this capability.

Complexity A complex scheme is more costly to implement than a simple one. For

example, a scheme that uses four signal levels is more difficult to interpret than one that

uses only two levels.

Line Coding Schemes

We can roughly divide line coding schemes into five broad categories, as shown in

Figure 4.4.

Figure 4.4 Line coding schemes

Unipolar: NRZ
Polar: NRZ, RZ, and biphase (Manchester and differential Manchester)
Bipolar: AMI and pseudoternary
Multilevel: 2B1Q, 8B6T, and 4D-PAM5
Multitransition: MLT-3

There are several schemes in each category. We need to be familiar with all

schemes discussed in this section to understand the rest of the book. This section can be

used as a reference for schemes encountered later.

Unipolar Scheme

In a unipolar scheme, all the signal levels are on one side of the time axis, either above

or below.

NRZ (Non-Return-to-Zero) Traditionally, a unipolar scheme was designed as a

non-return-to-zero (NRZ) scheme in which the positive voltage defines bit 1 and the zero voltage defines bit 0. It is called NRZ because the signal does not return to zero at the middle of the bit. Figure 4.5 shows a unipolar NRZ scheme.


Figure 4.5 Unipolar NRZ scheme [signal amplitude (0 or V) versus time for a sample bit sequence; the normalized power is also indicated]

Compared with its polar counterpart (see the next section), this scheme is very

costly. As we will see shortly, the normalized power (power needed to send 1 bit per

unit line resistance) is double that for polar NRZ. For this reason, this scheme is normally

not used in data communications today.

Polar Schemes

In polar schemes, the voltages are on both sides of the time axis. For example, the voltage level for 0 can be positive and the voltage level for 1 can be negative.

Non-Return-to-Zero (NRZ) In polar NRZ encoding, we use two levels of voltage

amplitude. We can have two versions of polar NRZ: NRZ-L and NRZ-I, as shown in

Figure 4.6. The figure also shows the value of r, the average baud rate, and the bandwidth.

In the first variation, NRZ-L (NRZ-Level), the level of the voltage determines

the value of the bit. In the second variation, NRZ-I (NRZ-Invert), the change or lack of

change in the level of the voltage determines the value of the bit. If there is no change,

the bit is 0; if there is a change, the bit is 1.

Figure 4.6 Polar NRZ-L and NRZ-I schemes [both variations shown for the same bit sequence; r = 1 and the average signal rate is Save = N/2; the bandwidth plot shows the power density concentrated near zero frequency. For NRZ-I: no inversion means the next bit is 0, inversion means the next bit is 1]

In NRZ-L the level of the voltage determines the value of the bit. In NRZ-I

the inversion or the lack of inversion determines the value of the bit.
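The two rules can be sketched as simple encoders in Python (an editorial illustration; the mapping of 1 to +V and 0 to -V for NRZ-L, and the initial level for NRZ-I, are assumptions, since only the inversion rule is fixed by the text):

# Polar NRZ encoders: each bit becomes one signal level (+V or -V, shown as +1/-1).
def nrz_l(bits, high=+1, low=-1):
    # NRZ-L: the level itself carries the bit value (assumed here: 1 -> +V, 0 -> -V).
    return [high if b else low for b in bits]

def nrz_i(bits, start=+1):
    # NRZ-I: an inversion of the level encodes 1; no inversion encodes 0.
    level, out = start, []
    for b in bits:
        if b:
            level = -level
        out.append(level)
    return out

bits = [0, 1, 0, 0, 1, 1, 1, 0]
print(nrz_l(bits))   # [-1, 1, -1, -1, 1, 1, 1, -1]
print(nrz_i(bits))   # [1, -1, -1, -1, 1, -1, 1, 1]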

Let us compare these two schemes based on the criteria we previously defined.

Although baseline wandering is a problem for both variations, it is twice as severe in

NRZ-L. If there is a long sequence of 0s or 1s in NRZ-L, the average signal power becomes skewed. The receiver might have difficulty discerning the bit value. In NRZ-I this problem occurs only for a long sequence of 0s. If somehow we can eliminate the long sequence of 0s, we can avoid baseline wandering. We will see shortly how this can

be done.

The synchronization problem (sender and receiver clocks are not synchronized)

also exists in both schemes. Again, this problem is more serious in NRZ-L than in

NRZ-I. While a long sequence of 0s can cause a problem in both schemes, a long

sequence of 1s affects only NRZ-L.

Another problem with NRZ-L occurs when there is a sudden change of polarity in

the system. For example, if twisted-pair cable is the medium, a change in the polarity of

the wire results in all 0s interpreted as 1s and all 1s interpreted as 0s. NRZ-I does not have this problem. Both schemes have an average signal rate of N/2 Bd.

NRZ-L and NRZ-I both have an average signal rate of N/2 Bd.

Let us discuss the bandwidth. Figure 4.6 also shows the normalized bandwidth for

both variations. The vertical axis shows the power density (the power for each 1 Hz of bandwidth); the horizontal axis shows the frequency. The bandwidth reveals a very serious problem for this type of encoding. The value of the power density is very high around frequencies close to zero. This means that there are DC components that carry a high level of energy. As a matter of fact, most of the energy is concentrated in frequencies between 0 and N/2. This means that although the average of the signal rate is N/2,

the energy is not distributed evenly between the two halves.

NRZ-L and NRZ-I both have a DC component problem.

Example 4.4

A system is using NRZ-I to transfer 10-Mbps data. What are the average signal rate and minimum

bandwidth?

Solution

The average signal rate is S = N/2 = 5 Mbaud. The minimum bandwidth for this average baud rate is Bmin = S = 5 MHz.

Return to Zero (RZ) The main problem with NRZ encoding occurs when the sender

and receiver clocks are not synchronized. The receiver does not know when one bit has

ended and the next bit is starting. One solution is the return-to-zero (RZ) scheme,

which uses three values: positive, negative, and zero. In RZ, the signal changes not

between bits but during the bit. In Figure 4.7 we see that the signal goes to 0 in the middle

of each bit. It remains there until the beginning of the next bit. The main disadvantage

of RZ encoding is that it requires two signal changes to encode a bit and therefore

occupies greater bandwidth. The same problem we mentioned, a sudden change of

polarity resulting in all 0s interpreted as 1s and all 1s interpreted as 0s, still exists here,

but there is no DC component problem. Another problem is the complexity: RZ uses

three levels of voltage, which is more complex to create and discern. As a result of all

these deficiencies, the scheme is not used today. Instead, it has been replaced by the

better-performing Manchester and differential Manchester schemes (discussed next).


Figure 4.7 Polar RZ scheme [the signal returns to zero in the middle of each bit; a normalized bandwidth plot (power density versus f/N) is also shown]

Biphase: Manchester and Differential Manchester The idea of RZ (transition at

the middle of the bit) and the idea of NRZ-L are combined into the Manchester scheme.

In Manchester encoding, the duration of the bit is divided into two halves. The voltage

remains at one level during the first half and moves to the other level in the second half.

The transition at the middle of the bit provides synchronization. Differential Manchester, on the other hand, combines the ideas of RZ and NRZ-I. There is always a transition at the middle of the bit, but the bit values are determined at the beginning of the bit. If the next bit is 0, there is a transition; if the next bit is 1, there is none. Figure 4.8 shows

both Manchester and differential Manchester encoding.

Figure 4.8 Polar biphase: Manchester and differential Manchester schemes [both schemes shown for the same bit sequence; a normalized bandwidth plot (power density versus f/N) is also shown. For differential Manchester: no inversion at the start of the bit means the next bit is 1, inversion means the next bit is 0]

In Manchester and differential Manchester encoding, the transition

at the middle of the bit is used for synchronization.

The Manchester scheme overcomes several problems associated with NRZ-L, and

differential Manchester overcomes several problems associated with NRZ-I. First, there

is no baseline wandering. There is no DC component because each bit has a positive and



negative voltage contribution. The only drawback is the signal rate. The signal rate for

Manchester and differential Manchester is double that for NRZ. The reason is that there is

always one transition at the middle of the bit and maybe one transition at the end of each

bit. Figure 4.8 shows both Manchester and differential Manchester encoding schemes.

Note that Manchester and differential Manchester schemes are also called biphase

schemes.

The minimum bandwidth of Manchester and differential Manchester is 2 times that of NRZ.
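A minimal sketch of the two biphase encoders (an editorial illustration; the particular polarity convention for Manchester 0s and 1s and the initial level for differential Manchester are assumptions):

# Biphase encoders: every bit occupies two half-bit intervals, so the signal rate doubles.
def manchester(bits, one=(-1, +1), zero=(+1, -1)):
    # Assumed convention: 1 -> low-to-high mid-bit transition, 0 -> high-to-low.
    out = []
    for b in bits:
        out.extend(one if b else zero)
    return out

def diff_manchester(bits, start=+1):
    # The bit is signalled at the start of the bit: 0 -> extra transition, 1 -> none.
    # A transition always occurs in the middle of the bit for synchronization.
    level, out = start, []
    for b in bits:
        if b == 0:
            level = -level    # transition at the beginning of the bit
        out.append(level)     # first half of the bit
        level = -level        # mandatory mid-bit transition
        out.append(level)     # second half of the bit
    return out

print(manchester([0, 1, 1, 0]))       # [1, -1, -1, 1, -1, 1, 1, -1]
print(diff_manchester([0, 1, 1, 0]))  # [-1, 1, 1, -1, -1, 1, -1, 1]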

Bipolar Schemes

In bipolar encoding (sometimes called multilevel binary), there are three voltage levels:

positive, negative, and zero. The voltage level for one data element is at zero, while

the voltage level for the other element alternates between positive and negative.

In bipolar encoding, we use three levels: positive, zero, and negative.

AMI and Pseudoternary Figure 4.9 shows two variations of bipolar encoding: AMI

and pseudoternary. A common bipolar encoding scheme is called bipolar alternate

mark inversion (AMI). In the term alternate mark inversion, the word mark comes

from telegraphy and means 1. So AMI means alternate 1 inversion. A neutral zero voltage represents binary 0. Binary 1s are represented by alternating positive and negative

voltages. A variation of AMI encoding is called pseudoternary in which the 1 bit is

encoded as a zero voltage and the 0 bit is encoded as alternating positive and negative

voltages.

Figure 4.9 Bipolar schemes: AMI and pseudoternary [both schemes shown for the same bit sequence; r = 1, and the bandwidth plot shows the energy concentrated around f/N = 1/2]

The bipolar scheme was developed as an alternative to NRZ. The bipolar scheme

has the same signal rate as NRZ, but there is no DC component. The NRZ scheme has

most of its energy concentrated near zero frequency, which makes it unsuitable for

transmission over channels with poor performance around this frequency. The concentration

of the energy in bipolar encoding is around frequency N12. Figure 4.9 shows the

typical energy concentration for a bipolar scheme.



One may ask why we do not have a DC component in bipolar encoding. We can

answer this question by using the Fourier transform, but we can also think about it intuitively.

If we have a long sequence of 1s, the voltage level alternates between positive

and negative; it is not constant. Therefore, there is no DC component. For a long

sequence of 0s, the voltage remains constant, but its amplitude is zero, which is the

same as having no DC component. In other words, a sequence that creates a constant

zero voltage does not have a DC component.

AMI is commonly used for long-distance communication, but it has a synchronization

problem when a long sequence of 0s is present in the data. Later in the chapter, we

will see how a scrambling technique can solve this problem.
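The AMI and pseudoternary rules are short enough to sketch directly (an editorial illustration; the polarity of the first mark is an assumption):

# Bipolar AMI: 0 -> zero level; successive 1s alternate between +V and -V.
def ami(bits, first_one=+1):
    level, out = -first_one, []
    for b in bits:
        if b:
            level = -level    # alternate mark inversion
            out.append(level)
        else:
            out.append(0)
    return out

def pseudoternary(bits, first_zero=+1):
    # The same idea with the roles of 0 and 1 swapped.
    return ami([1 - b for b in bits], first_one=first_zero)

data = [0, 1, 0, 0, 1, 1, 0, 1]
print(ami(data))            # [0, 1, 0, 0, -1, 1, 0, -1]
print(pseudoternary(data))  # [1, 0, -1, 1, 0, 0, -1, 0]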

Multilevel Schemes

The desire to increase the data speed or decrease the required bandwidth has resulted in

the creation of many schemes. The goal is to increase the number of bits per baud by

encoding a pattern of m data elements into a pattern of n signal elements. We only have

two types of data elements (0s and 1s), which means that a group of m data elements can produce a combination of 2^m data patterns. We can have different types of signal elements by allowing different signal levels. If we have L different levels, then we can produce L^n combinations of signal patterns. If 2^m = L^n, then each data pattern is encoded into one signal pattern. If 2^m < L^n, data patterns occupy only a subset of signal patterns. The subset can be carefully designed to prevent baseline wandering, to provide synchronization, and to detect errors that occurred during data transmission. Data encoding is not possible if 2^m > L^n because some of the data patterns cannot be encoded.

The code designers have classified these types of coding as mBnL, where m is the

length of the binary pattern, B means binary data, n is the length of the signal pattern,

and L is the number of levels in the signaling. A letter is often used in place of L: B

(binary) for L =2, T (ternary) for L =3, and Q (quaternary) for L =4. Note that the first

two letters define the data pattern, and the second two define the signal pattern.

In mBnL schemes, a pattern of m data elements is encoded as a pattern of n signal elements in which 2^m ≤ L^n.

2B1Q The first mBnL scheme we discuss, two binary, one quaternary (2B1Q), uses data patterns of size 2 and encodes the 2-bit patterns as one signal element belonging to a four-level signal. In this type of encoding m = 2, n = 1, and L = 4 (quaternary). Figure 4.10 shows an example of a 2B1Q signal.
The average signal rate of 2B1Q is S = N/4. This means that using 2B1Q, we can send data 2 times faster than by using NRZ-L. However, 2B1Q uses four different signal levels, which means the receiver has to discern four different thresholds. The reduced bandwidth comes with a price. There are no redundant signal patterns in this scheme because 2^2 = 4^1.
As we will see in Chapter 9, 2B1Q is used in DSL (Digital Subscriber Line) technology

to provide a high-speed connection to the Internet by using subscriber telephone

lines.


Figure 4.10 Multilevel: 2B1Q scheme [an example signal, assuming a positive original level; average signal rate Save = N/4]

Transition table (next level for each 2-bit pair):
Next bits    Previous level positive    Previous level negative
00           +1                         -1
01           +3                         -3
10           -1                         +1
11           -3                         +3
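The transition table above translates directly into a small encoder sketch (an editorial illustration that assumes a positive original level, as the figure does):

# 2B1Q: each pair of bits becomes one of four levels, chosen from the transition table.
TABLE = {  # bits: (next level if previous level > 0, next level if previous level < 0)
    (0, 0): (+1, -1),
    (0, 1): (+3, -3),
    (1, 0): (-1, +1),
    (1, 1): (-3, +3),
}

def two_b_one_q(bits, previous=+1):
    out = []
    for i in range(0, len(bits), 2):
        pos, neg = TABLE[(bits[i], bits[i + 1])]
        previous = pos if previous > 0 else neg
        out.append(previous)
    return out

print(two_b_one_q([0, 0, 1, 1, 0, 1, 1, 0, 0, 1]))   # [1, -3, -3, 1, 3]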

8B6T A very interesting scheme is eight binary, six ternary (8B6T). This code is used with 100BASE-T4 cable, as we will see in Chapter 13. The idea is to encode a pattern of 8 bits as a pattern of 6 signal elements, where the signal has three levels (ternary). In this type of scheme, we can have 2^8 = 256 different data patterns and 3^6 = 729 different signal patterns. The mapping table is shown in Appendix D. There are 729 - 256 = 473 redundant signal patterns that provide synchronization and error detection. Part of the redundancy is also used to provide DC balance. Each signal pattern has a weight of 0 or +1 DC values. This means that there is no pattern with the weight -1. To make the whole stream DC-balanced,

the sender keeps track of the weight. If two groups of weight 1 are encountered one after

another, the first one is sent as is, while the next one is totally inverted to give a weight of -1.

Figure 4.11 shows an example of three data patterns encoded as three signal patterns.

The three possible signal levels are represented as -, 0, and +. The first 8-bit pattern

00010001 is encoded as the signal pattern -0-0++ with weight 0; the second 8-bit

pattern 01010011 is encoded as - + - + + 0 with weight +1. The third bit pattern should

be encoded as + - - + 0 + with weight +1. To create DC balance, the sender inverts the

actual signal. The receiver can easily recognize that this is an inverted pattern because

the weight is -1. The pattern is inverted before decoding.

Figure 4.11 Multilevel: 8B6T scheme [three 8-bit data patterns and their six-element signal patterns: -0-0++ (weight 0), -+-++0 (weight +1), and a third pattern sent as the inverted form of +--+0+ to restore DC balance]

The average signal rate of the scheme is theoretically Save = (1/2) x N x (6/8); in practice, the minimum bandwidth is very close to 6N/8.
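The weight-tracking rule can be sketched as follows (an editorial illustration that omits the 8-bit-to-6-element mapping table and applies only the inversion rule described above):

# DC balancing for 8B6T signal groups written with the symbols -, 0, +.
def weight(pattern):
    return pattern.count('+') - pattern.count('-')

def invert(pattern):
    flip = {'+': '-', '-': '+', '0': '0'}
    return ''.join(flip[s] for s in pattern)

def dc_balance(groups):
    out, pending = [], 0    # pending weight not yet cancelled
    for g in groups:
        if weight(g) == 1 and pending == 1:
            g = invert(g)   # second weight-1 group in a row is sent inverted
            pending = 0
        else:
            pending += weight(g)
        out.append(g)
    return out

# The three patterns of Figure 4.11: weights 0, +1, +1 -> the last one is inverted.
print(dc_balance(['-0-0++', '-+-++0', '+--+0+']))
# ['-0-0++', '-+-++0', '-++-0-']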

4D-PAMS The last signaling scheme we discuss in this category is called fourdimensional

five-level pulse amplitude modulation (4D-PAM5). The 4D means that data

is sent over four wires at the same time. It uses five voltage levels, such as -2, -1, 0, 1, and 2.

However, one level, level 0, is used only for forward error detection (discussed in Chapter

10). If we assume that the code is just one-dimensional, the four levels create something similar to 8B4Q. In other words, an 8-bit word is translated to a signal element of four different levels. The worst signal rate for this imaginary one-dimensional version is N x 4/8, or N/2.

The technique is designed to send data over four channels (four wires). This means

the signal rate can be reduced to N/8, a significant achievement. All 8 bits can be fed into a

wire simultaneously and sent by using one signal element. The point here is that the four

signal elements comprising one signal group are sent simultaneously in a four-dimensional

setting. Figure 4.12 shows the imaginary one-dimensional and the actual four-dimensional

implementation. Gigabit LANs (see Chapter 13) use this technique to send 1-Gbps data

over four copper cables that can handle 125 Mbaud. This scheme has a lot of redundancy

in the signal pattern because 2^8 data patterns are matched to 4^4 = 256 signal patterns. The

extra signal patterns can be used for other purposes such as error detection.

Figure 4.12 Multilevel: 4D-PAM5 scheme [a 1-Gbps stream (for example, the 8-bit word 00011110) is split into four 250-Mbps streams, each sent on one of wires 1 to 4 at 125 MBd using the levels +2, +1, -1, and -2]

Multiline Transmission: MLT-3

NRZ-I and differential Manchester are classified as differential encoding but use two transition

rules to encode binary data (no inversion, inversion). If we have a signal with more than

two levels, we can design a differential encoding scheme with more than two transition

rules. MLT-3 is one of them. The multiline transmission, three level (MLT-3) scheme

uses three levels (+V, 0, and -V) and three transition rules to move between the levels.

1. If the next bit is 0, there is no transition.

2. If the next bit is 1 and the current level is not 0, the next level is 0.

3. If the next bit is 1 and the current level is 0, the next level is the opposite of the last

nonzero level.



The behavior of MLT-3 can best be described by the state diagram shown in Figure 4.13.

The three voltage levels (-V, 0, and +V) are shown by three states (ovals). The transition

from one state (level) to another is shown by the connecting lines. Figure 4.13 also

shows two examples of an MLT-3 signal.

Figure 4.13 Multitransition: MLT-3 scheme [a. a typical case; b. the worst case (a long run of 1s); c. the transition states among the three levels +V, 0V, and -V]
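The three transition rules translate into a short state-machine sketch (an editorial illustration; the initial level and the initial "last nonzero level" are assumptions):

# MLT-3 encoder following the three transition rules listed above.
def mlt3(bits, start=0, last_nonzero=-1):
    level, out = start, []
    for b in bits:
        if b == 1:                    # rule 1: a 0 bit causes no transition
            if level != 0:
                level = 0             # rule 2: from a nonzero level, go to zero
            else:
                last_nonzero = -last_nonzero
                level = last_nonzero  # rule 3: from zero, go opposite to the last nonzero level
        out.append(level)
    return out

print(mlt3([1, 1, 1, 1, 1, 1, 1, 1]))
# [1, 0, -1, 0, 1, 0, -1, 0]  -> the +V 0 -V 0 pattern repeats every 4 bits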

One might wonder why we need to use MLT-3, a scheme that maps one bit to one

signal element. The signal rate is the same as that for NRZ-I, but with greater complexity

(three levels and complex transition rules). It turns out that the shape of the signal in this

scheme helps to reduce the required bandwidth. Let us look at the worst-case scenario, a

sequence of 1s. In this case, the signal element pattern +V 0 -V 0 is repeated every 4 bits.

A nonperiodic signal has changed to a periodic signal with the period equal to 4 times the

bit duration. This worst-case situation can be simulated as an analog signal with a frequency

one-fourth of the bit rate. In other words, the signal rate for MLT-3 is one-fourth

the bit rate. This makes MLT-3 a suitable choice when we need to send 100 Mbps on a

copper wire that cannot support more than 32 MHz (frequencies above this level create

electromagnetic emissions). MLT-3 and LANs are discussed in Chapter 13.

Summary ofLine Coding Schemes

We summarize in Table 4.1 the characteristics of the different schemes discussed.

Table 4.1 Summary of line coding schemes

Category      Scheme    Bandwidth (average)   Characteristics
Unipolar      NRZ       B = N/2               Costly, no self-synchronization for long 0s or 1s, DC
Polar         NRZ-L     B = N/2               No self-synchronization for long 0s or 1s, DC
Polar         NRZ-I     B = N/2               No self-synchronization for long 0s, DC
Polar         Biphase   B = N                 Self-synchronization, no DC, high bandwidth
Bipolar       AMI       B = N/2               No self-synchronization for long 0s, no DC
Multilevel    2B1Q      B = N/4               No self-synchronization for long runs of the same dibit
Multilevel    8B6T      B = 3N/4              Self-synchronization, no DC
Multilevel    4D-PAM5   B = N/8               Self-synchronization, no DC
Multiline     MLT-3     B = N/3               No self-synchronization for long 0s

Block Coding

We need redundancy to ensure synchronization and to provide some kind of inherent

error detecting. Block coding can give us this redundancy and improve the performance

of line coding. In general, block coding changes a block of m bits into a block

of n bits, where n is larger than m. Block coding is referred to as an mB/nB encoding

technique.

Block coding is normally referred to as mB/nB coding; it replaces each m-bit group with an n-bit group.

The slash in block encoding (for example, 4B/5B) distinguishes block encoding

from multilevel encoding (for example, 8B6T), which is written without a slash. Block

coding normally involves three steps: division, substitution, and combination. In the

division step, a sequence of bits is divided into groups of m bits. For example, in 4B/5B

encoding, the original bit sequence is divided into 4-bit groups. The heart of block coding

is the substitution step. In this step, we substitute an n-bit group for each m-bit group. For example, in 4B/5B encoding we substitute a 5-bit code for each 4-bit group. Finally,

the n-bit groups are combined together to form a stream. The new stream has more bits

than the original bits. Figure 4.14 shows the procedure.

Figure 4.14 Block coding concept [a bit stream is divided into m-bit groups, each group is substituted by an n-bit group (mB-to-nB substitution), and the n-bit groups are combined into the new stream]

4B/5B

The four binary/five binary (4B/5B) coding scheme was designed to be used in combination

with NRZ-I. Recall that NRZ-I has a good signal rate, one-half that of the

biphase, but it has a synchronization problem. A long sequence of 0s can make the receiver clock lose synchronization. One solution is to change the bit stream, prior to encoding with NRZ-I, so that it does not have a long stream of 0s. The 4B/5B scheme achieves this goal. The block-coded stream does not have more than three consecutive 0s, as we will see later. At the receiver, the NRZ-I encoded digital signal is first

decoded into a stream of bits and then decoded to remove the redundancy. Figure 4.15

shows the idea.

Figure 4.15 Using block coding 4B/5B with NRZ-I line coding scheme [the sender applies the 4B/5B encoder and then the NRZ-I line encoder before the digital signal is placed on the link; the receiver reverses both steps]

In 4B/5B, the 5-bit output that replaces the 4-bit input has no more than one leading

zero (left bit) and no more than two trailing zeros (right bits). So when different groups

are combined to make a new sequence, there are never more than three consecutive 0s.

(Note that NRZ-I has no problem with sequences of Is.) Table 4.2 shows the corresponding

pairs used in 4B/5B encoding. Note that the first two columns pair a 4-bit

group with a 5-bit group. A group of 4 bits can have only 16 different combinations

while a group of 5 bits can have 32 different combinations. This means that there are 16

groups that are not used for 4B/5B encoding. Some of these unused groups are used for

control purposes; the others are not used at all. The latter provide a kind of error detection.

If a 5-bit group arrives that belongs to the unused portion of the table, the receiver

knows that there is an error in the transmission.

Table 4.2 4B/5B mapping codes

Data Sequence   Encoded Sequence      Control Sequence        Encoded Sequence
0000            11110                 Q (Quiet)               00000
0001            01001                 I (Idle)                11111
0010            10100                 H (Halt)                00100
0011            10101                 J (Start delimiter)     11000
0100            01010                 K (Start delimiter)     10001
0101            01011                 T (End delimiter)       01101
0110            01110                 S (Set)                 11001
0111            01111                 R (Reset)               00111
1000            10010
1001            10011
1010            10110
1011            10111
1100            11010
1101            11011
1110            11100
1111            11101
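The data half of the table maps directly to a small substitution sketch (an editorial illustration; control sequences and the subsequent NRZ-I stage are omitted):

# 4B/5B substitution using the data mappings of Table 4.2.
FOUR_TO_FIVE = {
    '0000': '11110', '0001': '01001', '0010': '10100', '0011': '10101',
    '0100': '01010', '0101': '01011', '0110': '01110', '0111': '01111',
    '1000': '10010', '1001': '10011', '1010': '10110', '1011': '10111',
    '1100': '11010', '1101': '11011', '1110': '11100', '1111': '11101',
}

def encode_4b5b(bits):
    # 'bits' is a string whose length is a multiple of 4, e.g. '11110000'.
    return ''.join(FOUR_TO_FIVE[bits[i:i + 4]] for i in range(0, len(bits), 4))

print(encode_4b5b('11110000'))   # '1110111110' -- never more than three consecutive 0s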

Figure 4.16 shows an example of substitution in 4B/5B coding. 4B/5B encoding solves the problem of synchronization and overcomes one of the deficiencies of NRZ-I. However, we need to remember that it increases the signal rate of NRZ-I. The redundant bits add 20 percent more baud. Still, the result is less than the biphase scheme, which has a signal rate of 2 times that of NRZ-I. However, 4B/5B block encoding does not solve the DC component problem of NRZ-I. If a DC component is unacceptable, we

need to use biphase or bipolar encoding.

Figure 4.16 Substitution in 4B/5B block coding [4-bit blocks such as 1111 and 0000 are replaced by the 5-bit blocks 11101 and 11110]

Example 4.5

We need to send data at a 1-Mbps rate. What is the minimum required bandwidth, using a combination

of 4B/5B and NRZ-I or Manchester coding?

Solution

First, 4B/5B block coding increases the bit rate to 1.25 Mbps. The minimum bandwidth using NRZ-I is N/2 or 625 kHz. The Manchester scheme needs a minimum bandwidth of 1 MHz. The

first choice needs a lower bandwidth, but has a DC component problem; the second choice needs

a higher bandwidth, but does not have a DC component problem.


8B/10B
The eight binary/ten binary (8B/10B) encoding is similar to 4B/5B encoding except that a group of 8 bits of data is now substituted by a 10-bit code. It provides greater error detection capability than 4B/5B. The 8B/10B block coding is actually a combination of 5B/6B and 3B/4B encoding, as shown in Figure 4.17.

Figure 4.17 8B/10B block encoding [an 8-bit block enters the 8B/10B encoder, which combines a 5B/6B encoder and a 3B/4B encoder to produce a 10-bit block]

The five most significant bits of the 8-bit block are fed into the 5B/6B encoder; the three least significant bits are fed into the 3B/4B encoder. The split is done to simplify the mapping table. To prevent a long run of consecutive 0s or 1s, the code uses a disparity controller which keeps track of excess 0s over 1s (or 1s over 0s). If the bits in the current

block create a disparity that contributes to the previous disparity (either direction),

then each bit in the code is complemented (a 0 is changed to a 1 and a 1 is changed to a 0).

The coding has 2^10 - 2^8 = 768 redundant groups that can be used for disparity checking

and error detection. In general, the technique is superior to 4B/5B because of better

built-in error-checking capability and better synchronization.

Scrambling

Biphase schemes that are suitable for dedicated links between stations in a LAN are not

suitable for long-distance communication because of their wide bandwidth requirement.

The combination of block coding and NRZ line coding is not suitable for long-distance

encoding either, because of the DC component. Bipolar AMI encoding, on the other

hand, has a narrow bandwidth and does not create a DC component. However, a long

sequence of 0s upsets the synchronization. If we can find a way to avoid a long sequence of 0s in the original stream, we can use bipolar AMI for long distances. We are looking

for a technique that does not increase the number of bits and does provide synchronization.

We are looking for a solution that substitutes long zero-level pulses with a combination

of other levels to provide synchronization. One solution is called scrambling. We

modify part of the AMI rule to include scrambling, as shown in Figure 4.18. Note that

scrambling, as opposed to block coding, is done at the same time as encoding. The

system needs to insert the required pulses based on the defined scrambling rules. Two

common scrambling techniques are B8ZS and HDB3.

B8ZS
Bipolar with 8-zero substitution (B8ZS) is commonly used in North America. In this technique, eight consecutive zero-level voltages are replaced by the sequence 000VB0VB.

Figure 4.18 AMI used with scrambling [sender and receiver connected by a link carrying the violated digital signal]

The V in the sequence denotes violation; this is a nonzero voltage that breaks an AMI rule of encoding (it has the same polarity as the previous nonzero pulse instead of the opposite). The B in the sequence denotes bipolar, which means a nonzero voltage that follows the AMI rule. There are two cases, as shown in Figure 4.19.

Figure 4.19 Two cases of B8ZS scrambling technique [a. previous level is positive: the eight 0s become 0 0 0 + - 0 - +; b. previous level is negative: they become 0 0 0 - + 0 + -]

Note that the scrambling in this case does not change the bit rate. Also, the technique

balances the positive and negative voltage levels (two positives and two negatives),

which means that the DC balance is maintained. Note that the substitution may

change the polarity of a 1 because, after the substitution, AMI needs to follow its rules.

B8ZS substitutes eight consecutive zeros with 000VB0VB.

One more point is worth mentioning. The letter V (violation) or B (bipolar) here is

relative. The V means the same polarity as the polarity of the previous nonzero pulse; B

means the polarity opposite to the polarity of the previous nonzero pulse.
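Putting the AMI rule and the substitution together gives the following sketch (an editorial illustration; the polarity of the pulse preceding the stream is an assumption):

# AMI with B8ZS scrambling: a run of eight 0s becomes 000VB0VB, where V repeats the
# polarity of the previous nonzero pulse and B takes the opposite polarity.
def ami_b8zs(bits, last=-1):
    out, zeros = [], 0
    for b in bits:
        if b == 0:
            zeros += 1
            if zeros == 8:
                p = last
                out.extend([0, 0, 0, p, -p, 0, -p, p])   # 000VB0VB
                last, zeros = p, 0
        else:
            out.extend([0] * zeros)
            zeros = 0
            last = -last                                 # normal AMI alternation
            out.append(last)
    out.extend([0] * zeros)
    return out

print(ami_b8zs([1, 0, 0, 0, 0, 0, 0, 0, 0, 1]))
# [1, 0, 0, 0, 1, -1, 0, -1, 1, -1]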

HDB3

High-density bipolar 3-zero (HDB3) is commonly used outside of North America. In this technique, which is more conservative than B8ZS, four consecutive zero-level voltages are replaced with a sequence of 000V or B00V. The reason for two different substitutions is to



maintain the even number of nonzero pulses after each substitution. The two rules can be

stated as follows:

1. If the number of nonzero pulses after the last substitution is odd, the substitution pattern will be 000V, which makes the total number of nonzero pulses even.
2. If the number of nonzero pulses after the last substitution is even, the substitution pattern will be B00V, which makes the total number of nonzero pulses even.

Figure 4.20 shows an example.

Figure 4.20 Different situations in HDB3 scrambling technique [three substitutions in one stream: the first is B00V (an even number of nonzero pulses precedes it), the second is 000V (odd), and the third is B00V (even)]

There are several points we need to mention here. First, before the first substitution,

the number of nonzero pulses is even, so the first substitution is B00V. After this

substitution, the polarity of the 1 bit is changed because the AMI scheme, after each

substitution, must follow its own rule. After this bit, we need another substitution,

which is 000V because we have only one nonzero pulse (odd) after the last substitution.

The third substitution is B00V because there are no nonzero pulses after the second

substitution (even).

HDB3 substitutes four consecutive zeros with 000V or B00V depending on the number of nonzero pulses after the last substitution.
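The two substitution rules can be sketched the same way (an editorial illustration; the polarity of the pulse preceding the stream is an assumption):

# AMI with HDB3 scrambling: every run of four 0s becomes 000V or B00V, depending on
# whether the number of nonzero pulses since the last substitution is odd or even.
def ami_hdb3(bits, last=-1):
    out, zeros, count = [], 0, 0     # count = nonzero pulses since the last substitution
    for b in bits:
        if b == 0:
            zeros += 1
            if zeros == 4:
                if count % 2 == 1:
                    out.extend([0, 0, 0, last])      # odd  -> 000V
                else:
                    last = -last
                    out.extend([last, 0, 0, last])   # even -> B00V
                zeros, count = 0, 0
        else:
            out.extend([0] * zeros)
            zeros = 0
            last = -last                             # normal AMI alternation
            out.append(last)
            count += 1
    out.extend([0] * zeros)
    return out

print(ami_hdb3([1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0]))
# [1, -1, 1, 0, 0, 1, -1, 0, 0, 0, -1]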

4.2 ANALOG-TO-DIGITAL CONVERSION

The techniques described in Section 4.1 convert digital data to digital signals. Sometimes,

however, we have an analog signal such as one created by a microphone or camera.

We have seen in Chapter 3 that a digital signal is superior to an analog signal. The

tendency today is to change an analog signal to digital data. In this section we describe

two techniques, pulse code modulation and delta modulation. After the digital data are

created (digitization), we can use one of the techniques described in Section 4.1 to convert

the digital data to a digital signal.



Pulse Code Modulation (PCM)

The most common technique to change an analog signal to digital data (digitization)

is called pulse code modulation (PCM). A PCM encoder has three processes, as shown

in Figure 4.21.

Figure 4.21 Components of PCM encoder [an analog signal passes through sampling (producing a PAM signal), quantizing (producing a quantized signal), and encoding (producing digital data)]

1. The analog signal is sampled.

2. The sampled signal is quantized.

3. The quantized values are encoded as streams of bits.

Sampling

The first step in PCM is sampling. The analog signal is sampled every Ts seconds, where Ts is the sample interval or period. The inverse of the sampling interval is called the sampling rate or sampling frequency and is denoted by fs, where fs = 1/Ts. There are three sampling methods (ideal, natural, and flat-top), as shown in Figure 4.22.

In ideal sampling, pulses from the analog signal are sampled. This is an ideal sampling

method and cannot be easily implemented. In natural sampling, a high-speed

switch is turned on for only the small period of time when the sampling occurs. The

result is a sequence of samples that retains the shape of the analog signal. The most

common sampling method, called sample and hold, however, creates flat-top samples

by using a circuit.

The sampling process is sometimes referred to as pulse amplitude modulation

(PAM). We need to remember, however, that the result is still an analog signal with

nonintegral values.

Sampling Rate One important consideration is the sampling rate or frequency. What

are the restrictions on T s ? This question was elegantly answered by Nyquist. According

to the Nyquist theorem, to reproduce the original analog signal, one necessary condition

is that the sampling rate be at least twice the highest frequency in the original signal.


Figure 4.22 Three different sampling methods for PCM [a. ideal sampling; b. natural sampling; c. flat-top sampling; each panel plots amplitude versus time for the same analog signal]

According to the Nyquist theorem, the sampling rate must be

at least 2 times the highest frequency contained in the signal.

We need to elaborate on the theorem at this point. First, we can sample a signal

only if the signal is band-limited. In other words, a signal with an infinite bandwidth

cannot be sampled. Second, the sampling rate must be at least 2 times the highest frequency,

not the bandwidth. If the analog signal is low-pass, the bandwidth and the

highest frequency are the same value. If the analog signal is bandpass, the bandwidth

value is lower than the value of the maximum frequency. Figure 4.23 shows the value

of the sampling rate for two types of signals.

Figure 4.23 Nyquist sampling rate for low-pass and bandpass signals [the Nyquist rate is 2 x fmax for both a low-pass signal (spectrum from 0 to fmax) and a bandpass signal (spectrum from fmin to fmax)]

Example 4.6

For an intuitive example of the Nyquist theorem, let us sample a simple sine wave at three sampling

rates: fs = 4f (2 times the Nyquist rate), fs = 2f (the Nyquist rate), and fs = f (one-half the Nyquist rate). Figure 4.24 shows the sampling and the subsequent recovery of the signal.

Figure 4.24 Recovery of a sampled sine wave for different sampling rates [a. Nyquist rate sampling: fs = 2f; b. oversampling: fs = 4f; c. undersampling: fs = f]

It can be seen that sampling at the Nyquist rate can create a good approximation of the original

sine wave (part a). Oversampling in part b can also create the same approximation, but it is

redundant and unnecessary. Sampling below the Nyquist rate (part c) does not produce a signal

that looks like the original sine wave.
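As a quick numeric check of this example, the following minimal Python sketch samples a 1 Hz sine wave at fs = 4f, 2f, and f and prints the sample values; a 45° phase offset is assumed so that the Nyquist-rate samples are not all zero. At fs = f, every sample is identical, which is the aliasing that makes recovery impossible.

import math

f = 1.0                      # sine-wave frequency in Hz (an assumed example value)
phase = math.pi / 4          # assumed 45-degree offset so Nyquist-rate samples are nonzero

def samples(fs, n=8):
    """Return n samples of sin(2*pi*f*t + phase) taken every Ts = 1/fs seconds."""
    Ts = 1.0 / fs
    return [round(math.sin(2 * math.pi * f * k * Ts + phase), 3) for k in range(n)]

for label, fs in [("oversampling  fs = 4f", 4 * f),
                  ("Nyquist rate  fs = 2f", 2 * f),
                  ("undersampling fs = f ", 1 * f)]:
    print(label, samples(fs))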

Example 4.7

As an interesting example, let us see what happens if we sample a periodic event such as the revolution of the hand of a clock. The second hand of a clock has a period of 60 s. According to the Nyquist theorem, we need to sample the hand (take and send a picture) every 30 s (Ts = 1/2 T or fs = 2f). In Figure 4.25a, the sample points, in order, are 12, 6, 12, 6, and 12. The receiver of the samples cannot tell if the clock is moving forward or backward. In part b, we sample at double the Nyquist rate (every 15 s). The sample points, in order, are 12, 3, 6, 9, and 12. The clock is moving forward. In part c, we sample below the Nyquist rate (Ts = 3/4 T or fs = 4/3 f). The sample points, in order, are 12, 9, 6, 3, and 12. Although the clock is moving forward, the receiver thinks that the clock is moving backward.



Figure 4.25  Sampling of a clock with only one hand: (a) sampling at the Nyquist rate, Ts = 1/2 T, gives samples 12-6-12-6-12, which can mean the clock is moving either forward or backward; (b) oversampling (above the Nyquist rate), Ts = 1/4 T, gives samples 12-3-6-9-12, which show the clock moving forward; (c) undersampling (below the Nyquist rate), Ts = 3/4 T, gives samples 12-9-6-3-12, which show the clock moving backward

Example 4.8

An example related to Example 4.7 is the seemingly backward rotation of the wheels of a forward-moving

car in a movie. This can be explained by undersampling. A movie is filmed at 24 frames

per second. If a wheel is rotating more than 12 times per second, the undersampling creates the

impression of a backward rotation.

Example 4.9

Telephone companies digitize voice by assuming a maximum frequency of 4000 Hz. The sampling

rate therefore is 8000 samples per second.

Example 4.10

A complex low-pass signal has a bandwidth of 200 kHz. What is the minimum sampling rate for

this signal?

Solution

The bandwidth of a low-pass signal is between 0 and f, where f is the maximum frequency in the

signal. Therefore, we can sample this signal at 2 times the highest frequency (200 kHz). The sampling

rate is therefore 400,000 samples per second.

Example 4.11

A complex bandpass signal has a bandwidth of 200 kHz. What is the minimum sampling rate for

this signal?



Solution

We cannot find the minimum sampling rate in this case because we do not know where the bandwidth

starts or ends. We do not know the maximum frequency in the signal.

Quantization

The result of sampling is a series of pulses with amplitude values between the maximum

and minimum amplitudes of the signal. The set of amplitudes can be infinite with

nonintegral values between the two limits. These values cannot be used in the encoding

process. The following are the steps in quantization:

1. We assume that the original analog signal has instantaneous amplitudes between Vmin and Vmax.

2. We divide the range into L zones, each of height Δ (delta).

Δ = (Vmax − Vmin) / L

3. We assign quantized values of 0 to L − 1 to the midpoint of each zone.

4. We approximate the value of the sample amplitude to the quantized values.

As a simple example, assume that we have a sampled signal and the sample amplitudes

are between -20 and +20 V. We decide to have eight levels (L = 8). This means

that Δ = 5 V. Figure 4.26 shows this example.

Figure 4.26  Quantization and encoding of a sampled signal

Normalized PAM values:        -1.22  1.50  3.24  3.94  2.20  -1.10  -2.26  -1.88  -1.20
Normalized quantized values:  -1.50  1.50  3.50  3.50  2.50  -1.50  -2.50  -1.50  -1.50
Normalized error:             -0.28  0.00  +0.26 -0.44 +0.30 -0.40  -0.24  +0.38  -0.30
Quantization code:             2     5     7     7     6     2      1      2      2
Encoded words:                 010   101   111   111   110   010    001    010    010



We have shown only nine samples using ideal sampling (for simplicity). The

value at the top of each sample in the graph shows the actual amplitude. In the chart,

the first row is the normalized value for each sample (actual amplitude/Δ). The quantization

process selects the quantization value from the middle of each zone. This

means that the normalized quantized values (second row) are different from the normalized

amplitudes. The difference is called the normalized error (third row). The

fourth row is the quantization code for each sample based on the quantization levels

at the left of the graph. The encoded words (fifth row) are the final products of the

conversion.
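The quantization and encoding steps can also be traced in software. The minimal Python sketch below assumes the same parameters as the example (Vmin = -20 V, Vmax = +20 V, L = 8) and uses the nine sample amplitudes implied by the normalized PAM values in Figure 4.26.

def pcm_quantize(samples, vmin=-20.0, vmax=20.0, L=8):
    """Uniform quantization and encoding; L is assumed to be a power of 2."""
    delta = (vmax - vmin) / L                   # zone height (5 V here)
    nb = L.bit_length() - 1                     # bits per sample: log2(L) = 3
    out = []
    for v in samples:
        code = int((v - vmin) // delta)         # index of the zone the sample falls in
        code = min(max(code, 0), L - 1)         # clamp samples at the limits
        midpoint = vmin + (code + 0.5) * delta  # quantized value (middle of the zone)
        out.append((v / delta,                  # normalized PAM value
                    midpoint / delta,           # normalized quantized value
                    format(code, "0" + str(nb) + "b")))  # encoded word
    return out

# Amplitudes implied by the normalized values of Figure 4.26 (normalized value x 5 V).
amps = [-6.1, 7.5, 16.2, 19.7, 11.0, -5.5, -11.3, -9.4, -6.0]
for pam, q, word in pcm_quantize(amps):
    print("PAM %+.2f  quantized %+.2f  word %s" % (pam, q, word))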

Quantization Levels In the previous example, we showed eight quantization levels.

The choice of L, the number of levels, depends on the range of the amplitudes of the

analog signal and how accurately we need to recover the signal. If the amplitude of a

signal fluctuates between two values only, we need only two levels; if the signal, like

voice, has many amplitude values, we need more quantization levels. In audio digitizing,

L is normally chosen to be 256; in video it is normally thousands. Choosing

lower values of L increases the quantization error if there is a lot of fluctuation in the

signal.

Quantization Error One important issue is the error created in the quantization process.

(Later, we will see how this affects high-speed modems.) Quantization is an

approximation process. The input values to the quantizer are the real values; the output

values are the approximated values. The output values are chosen to be the middle

value in the zone. If the input value is also at the middle of the zone, there is no quantization

error; otherwise, there is an error. In the previous example, the normalized

amplitude of the third sample is 3.24, but the normalized quantized value is 3.50. This

means that there is an error of +0.26. The value of the error for any sample is less than

Δ/2. In other words, we have −Δ/2 ≤ error ≤ Δ/2.

The quantization error changes the signal-to-noise ratio of the signal, which in turn

reduces the upper limit capacity according to Shannon.

It can be proven that the contribution of the quantization error to the SNR dB of

the signal depends on the number of quantization levels L, or the bits per sample nb, as

shown in the following formula:

SNRdB = 6.02 nb + 1.76 dB

Example 4.12

What is the SNR dB in the example of Figure 4.26?

Solution

We can use the formula to find the quantization SNR. We have eight levels and 3 bits per sample, so

SNRdB = 6.02(3) + 1.76 = 19.82 dB. Increasing the number of levels increases the SNR.

Example 4.13

A telephone subscriber line must have an SNRdB above 40. What is the minimum number of bits

per sample?



Solution

We can calculate the number of bits as

SNRdB = 6.02 nb + 1.76 = 40  →  nb = 6.35

Telephone companies usually assign 7 or 8 bits per sample.
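As a minimal illustration of this formula, the following Python helpers compute the quantization SNRdB for a given number of bits and the smallest whole number of bits that meets a required SNRdB, reproducing the arithmetic of Examples 4.12 and 4.13.

import math

def snr_db(nb):
    """Quantization SNR in dB for nb bits per sample."""
    return 6.02 * nb + 1.76

def min_bits(required_snr_db):
    """Smallest whole number of bits whose quantization SNR meets the requirement."""
    return math.ceil((required_snr_db - 1.76) / 6.02)

print(snr_db(3))      # Example 4.12: 8 levels, 3 bits per sample -> 19.82 dB
print(min_bits(40))   # Example 4.13: 6.35 rounds up to 7 bits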

Uniform Versus Nonuniform Quantization For many applications, the distribution

of the instantaneous amplitudes in the analog signal is not uniform. Changes in

amplitude often occur more frequently in the lower amplitudes than in the higher

ones. For these types of applications it is better to use nonuniform zones. In other

words, the height of ~ is not fixed; it is greater near the lower amplitudes and less

near the higher amplitudes. Nonuniform quantization can also be achieved by using a

process called companding and expanding. The signal is companded at the sender

before conversion; it is expanded at the receiver after conversion. Companding means

reducing the instantaneous voltage amplitude for large values; expanding is the opposite

process. Companding gives greater weight to strong signals and less weight to

weak ones. It has been proved that nonuniform quantization effectively reduces the

SNR dB of quantization.

Encoding

The last step in PCM is encoding. After each sample is quantized and the number of

bits per sample is decided, each sample can be changed to an nb-bit code word. In Figure

4.26 the encoded words are shown in the last row. A quantization code of 2 is

encoded as 010; 5 is encoded as 101; and so on. Note that the number of bits for each

sample is determined from the number of quantization levels. If the number of quantization

levels is L, the number of bits is nb = log2 L. In our example L is 8 and nb is

therefore 3. The bit rate can be found from the formula

Bit rate = sampling rate × number of bits per sample = fs × nb

Example 4.14

We want to digitize the human voice. What is the bit rate, assuming 8 bits per sample?

Solution

The human voice normally contains frequencies from 0 to 4000 Hz. So the sampling rate and bit

rate are calculated as follows:

Sampling rate = 4000 × 2 = 8000 samples/s
Bit rate = 8000 × 8 = 64,000 bps = 64 kbps
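As a one-line sketch, assuming only the Nyquist relation fs = 2 × fmax and the bit-rate formula above:

def pcm_bit_rate(f_max_hz, bits_per_sample):
    """Bit rate = sampling rate x bits per sample, with sampling rate = 2 x fmax."""
    return 2 * f_max_hz * bits_per_sample

print(pcm_bit_rate(4000, 8))   # Example 4.14: 64,000 bps = 64 kbps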

Original Signal Recovery

The recovery of the original signal requires the PCM decoder. The decoder first uses

circuitry to convert the code words into a pulse that holds the amplitude until the next

pulse. After the staircase signal is completed, it is passed through a low-pass filter to



smooth the staircase signal into an analog signal. The filter has the same cutoff frequency

as the original signal at the sender. If the signal has been sampled at (or

greater than) the Nyquist sampling rate and if there are enough quantization levels,

the original signal will be recreated. Note that the maximum and minimum values of

the original signal can be achieved by using amplification. Figure 4.27 shows the

simplified process.

Figure 4.27  Components of a PCM decoder: the digital data drive a make-and-connect circuit that holds each sample, and a low-pass filter smooths the resulting staircase into the analog signal

PCM Bandwidth

Suppose we are given the bandwidth of a low-pass analog signal. If we then digitize the

signal, what is the new minimum bandwidth of the channel that can pass this digitized

signal? We have said that the minimum bandwidth of a line-encoded signal is Bmin = c × N × (1/r). We substitute the value of N in this formula:

Bmin = c × N × (1/r) = c × nb × fs × (1/r) = c × nb × 2 × Banalog × (1/r)

When 1/r = 1 (for an NRZ or bipolar signal) and c = 1/2 (the average situation), the minimum bandwidth is

Bmin = nb × Banalog

This means the minimum bandwidth of the digital signal is nb times greater than the bandwidth of the analog signal. This is the price we pay for digitization.

Example 4.15

We have a low-pass analog signal of 4 kHz. If we send the analog signal, we need a channel with

a minimum bandwidth of 4 kHz. If we digitize the signal and send 8 bits per sample, we need a

channel with a minimum bandwidth of 8 × 4 kHz = 32 kHz.



Maximum Data Rate ofa Channel

In Chapter 3, we discussed the Nyquist theorem which gives the data rate of a channel

as Nmax = 2 × B × log2 L. We can deduce this rate from the Nyquist sampling theorem

by using the following arguments.

1. We assume that the available channel is low-pass with bandwidth B.

2. We assume that the digital signal we want to send has L levels, where each level is

a signal element. This means r = log2 L.

3. We first pass the digital signal through a low-pass filter to cut off the frequencies

above B Hz.

4. We treat the resulting signal as an analog signal and sample it at 2 x B samples per

second and quantize it using L levels. Additional quantization levels are useless

because the signal originally had L levels.

5. The resulting bit rate is N = fs × nb = 2 × B × log2 L. This is the maximum data rate; if the case factor c increases, the data rate is reduced.

Nmax = 2 × B × log2 L bps

Minimum Required Bandwidth

The previous argument can give us the minimum bandwidth if the data rate and the

number of signal levels are fixed. We can say

Bmin = N / (2 × log2 L) Hz
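Both boxed results can be checked with a minimal sketch; the 200-kHz, four-level values used below are arbitrary example numbers.

import math

def max_data_rate(bandwidth_hz, levels):
    """Nmax = 2 x B x log2(L) bps for a low-pass channel."""
    return 2 * bandwidth_hz * math.log2(levels)

def min_bandwidth(data_rate_bps, levels):
    """Bmin = N / (2 x log2(L)) Hz for a fixed data rate and number of levels."""
    return data_rate_bps / (2 * math.log2(levels))

print(max_data_rate(200_000, 4))   # 800,000 bps
print(min_bandwidth(800_000, 4))   # 200,000 Hz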

Delta Modulation (DM)

PCM is a very complex technique. Other techniques have been developed to reduce

the complexity of PCM. The simplest is delta modulation. PCM finds the value of the

signal amplitude for each sample; DM finds the change from the previous sample. Figure

4.28 shows the process. Note that there are no code words here; bits are sent one

after another.

Figure 4.28  The process of delta modulation: a staircase approximation follows the analog signal, and the generated binary data record a 1 for each step up and a 0 for each step down



Modulator

The modulator is used at the sender site to create a stream of bits from an analog signal.

The process records the small positive or negative changes, called delta (δ). If the delta is positive, the process records a 1; if it is negative, the process records a 0. However, the

process needs a base against which the analog signal is compared. The modulator

builds a second signal that resembles a staircase. Finding the change is then reduced to

comparing the input signal with the gradually made staircase signal. Figure 4.29 shows

a diagram of the process.

Figure 4.29  Delta modulation components: a comparator compares the analog signal with a staircase built by a staircase maker and a delay unit; the comparator output is the digital data

The modulator, at each sampling interval, compares the value of the analog signal

with the last value of the staircase signal. If the amplitude of the analog signal is larger, the next bit in the digital data is 1; otherwise, it is 0. The output of the comparator, however, also makes the staircase itself. If the next bit is 1, the staircase maker moves the last point of the staircase signal δ up; if the next bit is 0, it moves it δ down. Note that we

need a delay unit to hold the staircase function for a period between two comparisons.

Demodulator

The demodulator takes the digital data and, using the staircase maker and the delay

unit, creates the analog signal. The created analog signal, however, needs to pass

through a low-pass filter for smoothing. Figure 4.30 shows the schematic diagram.

Figure 4.30  Delta demodulation components: a staircase maker and a delay unit rebuild the staircase from the digital data, and a low-pass filter smooths it into the analog signal
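As a minimal software sketch of the modulator and demodulator (assuming a fixed step delta, a made-up list of sample values, and omitting the final low-pass filter):

def dm_modulate(samples, delta=1.0, start=0.0):
    """Emit one bit per sample: 1 when the staircase steps up, 0 when it steps down."""
    bits, staircase = [], start
    for s in samples:
        if s >= staircase:        # analog sample at or above the staircase: step up
            bits.append(1)
            staircase += delta
        else:                     # otherwise: step down
            bits.append(0)
            staircase -= delta
    return bits

def dm_demodulate(bits, delta=1.0, start=0.0):
    """Rebuild the staircase from the bit stream (smoothing filter omitted)."""
    staircase, out = start, []
    for b in bits:
        staircase += delta if b == 1 else -delta
        out.append(staircase)
    return out

analog = [0.4, 1.1, 2.0, 2.6, 2.2, 1.3, 0.2, -0.9]   # made-up sample values
bits = dm_modulate(analog)
print(bits)                   # [1, 1, 1, 0, 1, 0, 0, 0]
print(dm_demodulate(bits))    # staircase approximation of the original samples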



Adaptive DM

A better performance can be achieved if the value of δ is not fixed. In adaptive delta modulation, the value of δ changes according to the amplitude of the analog signal.

Quantization Error

It is obvious that DM is not perfect. Quantization error is always introduced in the process.

The quantization error of DM, however, is much less than that for PCM.

4.3 TRANSMISSION MODES

Of primary concern when we are considering the transmission of data from one device

to another is the wiring, and of primary concern when we are considering the wiring is

the data stream. Do we send 1 bit at a time; or do we group bits into larger groups and,

if so, how? The transmission of binary data across a link can be accomplished in either

parallel or serial mode. In parallel mode, multiple bits are sent with each clock tick.

In serial mode, 1 bit is sent with each clock tick. While there is only one way to send

parallel data, there are three subclasses of serial transmission: asynchronous, synchronous,

and isochronous (see Figure 4.31).

Figure 4.31  Data transmission and modes: parallel transmission, and serial transmission (asynchronous, synchronous, and isochronous)

Parallel Transmission

Binary data, consisting of 1s and 0s, may be organized into groups of n bits each.

Computers produce and consume data in groups of bits much as we conceive of and use

spoken language in the form of words rather than letters. By grouping, we can send

data n bits at a time instead of 1. This is called parallel transmission.

The mechanism for parallel transmission is a conceptually simple one: Use n wires

to send n bits at one time. That way each bit has its own wire, and all n bits of one

group can be transmitted with each clock tick from one device to another. Figure 4.32

shows how parallel transmission works for n = 8. Typically, the eight wires are bundled

in a cable with a connector at each end.

The advantage of parallel transmission is speed. All else being equal, parallel

transmission can increase the transfer speed by a factor of n over serial transmission.



Figure 4.32  Parallel transmission: the 8 bits are sent together from sender to receiver; we need eight lines

But there is a significant disadvantage: cost. Parallel transmission requires n communication

lines (wires in the example) just to transmit the data stream. Because this is

expensive, parallel transmission is usually limited to short distances.

Serial Transmission

In serial transmission one bit follows another, so we need only one communication

channel rather than n to transmit data between two communicating devices (see

Figure 4.33).

Figure 4.33  Serial transmission: a parallel/serial converter at the sender and a serial/parallel converter at the receiver; the 8 bits are sent one after another and we need only one line (wire)

The advantage of serial over parallel transmission is that with only one communication

channel, serial transmission reduces the cost of transmission over parallel by

roughly a factor of n.

Since communication within devices is parallel, conversion devices are required at

the interface between the sender and the line (parallel-to-serial) and between the line

and the receiver (serial-to-parallel).

Serial transmission occurs in one of three ways: asynchronous, synchronous, and

isochronous.



Asynchronous Transmission

Asynchronous transmission is so named because the timing of a signal is unimportant.

Instead, information is received and translated by agreed upon patterns. As long as those

patterns are followed, the receiving device can retrieve the information without regard

to the rhythm in which it is sent. Patterns are based on grouping the bit stream into

bytes. Each group, usually 8 bits, is sent along the link as a unit. The sending system

handles each group independently, relaying it to the link whenever ready, without regard

to a timer.

Without synchronization, the receiver cannot use timing to predict when the next

group will arrive. To alert the receiver to the arrival of a new group, therefore, an extra

bit is added to the beginning of each byte. This bit, usually a 0, is called the start bit.

To let the receiver know that the byte is finished, 1 or more additional bits are appended

to the end of the byte. These bits, usually Is, are called stop bits. By this method, each

byte is increased in size to at least 10 bits, of which 8 bits is information and 2 bits or

more are signals to the receiver. In addition, the transmission of each byte may then be

followed by a gap of varying duration. This gap can be represented either by an idle

channel or by a stream of additional stop bits.

In asynchronous transmission, we send 1 start bit (0) at the beginning and 1 or more

stop bits (1s) at the end of each byte. There may be a gap between each byte.

The start and stop bits and the gap alert the receiver to the beginning and end of

each byte and allow it to synchronize with the data stream. This mechanism is called

asynchronous because, at the byte level, the sender and receiver do not have to be synchronized.

But within each byte, the receiver must still be synchronized with the

incoming bit stream. That is, some synchronization is required, but only for the duration

of a single byte. The receiving device resynchronizes at the onset of each new byte.

When the receiver detects a start bit, it sets a timer and begins counting bits as they

come in. After n bits, the receiver looks for a stop bit. As soon as it detects the stop bit,

it waits until it detects the next start bit.

Asynchronous here means "asynchronous at the byte level,"

but the bits are still synchronized; their durations are the same.

Figure 4.34 is a schematic illustration of asynchronous transmission. In this example,

the start bits are 0s, the stop bits are 1s, and the gap is represented by an idle line

rather than by additional stop bits.

The addition of stop and start bits and the insertion of gaps into the bit stream

make asynchronous transmission slower than forms of transmission that can operate

without the addition of control information. But it is cheap and effective, two advantages

that make it an attractive choice for situations such as low-speed communication.

For example, the connection of a keyboard to a computer is a natural application for

asynchronous transmission. A user types only one character at a time, types extremely

slowly in data processing terms, and leaves unpredictable gaps of time between each

character.
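As a minimal sketch of this framing rule (assuming one stop bit, least-significant-bit-first transmission, and no idle-line gaps):

def frame_byte(byte):
    """Frame one byte: start bit (0), 8 data bits LSB first, stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1]

def deframe(bits):
    """Strip the start and stop bits and rebuild the byte."""
    assert bits[0] == 0 and bits[-1] == 1, "bad start/stop bit"
    return sum(b << i for i, b in enumerate(bits[1:9]))

for ch in b"Hi":
    framed = frame_byte(ch)
    print(framed, "->", chr(deframe(framed)))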



Figure 4.34  Asynchronous transmission: each byte flows from sender to receiver framed by a start bit and a stop bit, with gaps between data units

Synchronous Transmission

In synchronous transmission, the bit stream is combined into longer "frames," which

may contain multiple bytes. Each byte, however, is introduced onto the transmission

link without a gap between it and the next one. It is left to the receiver to separate the

bit stream into bytes for decoding purposes. In other words, data are transmitted as an

unbroken string of 1s and 0s, and the receiver separates that string into the bytes, or

characters, it needs to reconstruct the information.

In synchronous transmission, we send bits one after another without start or stop

bits or gaps. It is the responsibility of the receiver to group the bits.

Figure 4.35 gives a schematic illustration of synchronous transmission. We have

drawn in the divisions between bytes. In reality, those divisions do not exist; the sender

puts its data onto the line as one long string. If the sender wishes to send data in separate

bursts, the gaps between bursts must be filled with a special sequence of Os and Is that

means idle. The receiver counts the bits as they arrive and groups them in 8-bit units.

Figure 4.35  Synchronous transmission: frames of bits flow from sender to receiver with no gaps and no start or stop bits

Without gaps and start and stop bits, there is no built-in mechanism to help the

receiving device adjust its bit synchronization midstream. Timing becomes very important,

therefore, because the accuracy of the received information is completely dependent

on the ability of the receiving device to keep an accurate count of the bits as they come in.



The advantage of synchronous transmission is speed. With no extra bits or gaps to

introduce at the sending end and remove at the receiving end, and, by extension, with

fewer bits to move across the link, synchronous transmission is faster than asynchronous

transmission. For this reason, it is more useful for high-speed applications such as

the transmission of data from one computer to another. Byte synchronization is accomplished

in the data link layer.

We need to emphasize one point here. Although there is no gap between characters

in synchronous serial transmission, there may be uneven gaps between frames.

Isochronous

In real-time audio and video, in which uneven delays between frames are not acceptable,

synchronous transmission fails. For example, TV images are broadcast at the rate

of 30 images per second; they must be viewed at the same rate. If each image is sent

by using one or more frames, there should be no delays between frames. For this type

of application, synchronization between characters is not enough; the entire stream of

bits must be synchronized. The isochronous transmission guarantees that the data

arrive at a fixed rate.

4.4 RECOMMENDED READING

For more details about subjects discussed in this chapter, we recommend the following

books. The items in brackets [...] refer to the reference list at the end of the text.

Books

Digital-to-digital conversion is discussed in Chapter 7 of [Pea92], Chapter 3 of [Cou01], and Section 5.1 of [Sta04]. Sampling is discussed in Chapters 15, 16, 17, and 18 of [Pea92], Chapter 3 of [Cou01], and Section 5.3 of [Sta04]. [Hsu03] gives a good

mathematical approach to modulation and sampling. More advanced materials can be

found in [Ber96].

4.5 KEY TERMS

adaptive delta modulation

alternate mark inversion (AMI)

analog-to-digital conversion

asynchronous transmission

baseline

baseline wandering

baud rate

biphase

bipolar

bipolar with 8-zero substitution (B8ZS)

bit rate

block coding

companding and expanding

data element

data rate

DC component

delta modulation (DM)

differential Manchester

digital-to-digital conversion

digitization



eight binary/ten binary (8B/10B)

eight-binary, six-ternary (8B6T)

four binary/five binary (4B/5B)

four dimensional, five-level pulse

amplitude modulation (4D-PAM5)

high-density bipolar 3-zero (HDB3)

isochronous transmission

line coding

Manchester

modulation rate

multilevel binary

multiline transmission, 3 level (MLT-3)

nonreturn to zero (NRZ)

nonreturn to zero, invert (NRZ-I)

nonreturn to zero, level (NRZ-L)

Nyquist theorem

parallel transmission

polar

pseudoternary

pulse amplitude modulation (PAM)

pulse code modulation (PCM)

pulse rate

quantization

quantization error

return to zero (RZ)

sampling

sampling rate

scrambling

self-synchronizing

serial transmission

signal element

signal rate

start bit

stop bit

synchronous transmission

transmission mode

two-binary, one quaternary (2B1Q)

unipolar

4.6 SUMMARY

o Digital-to-digital conversion involves three techniques: line coding, block coding,

and scrambling.

o Line coding is the process of converting digital data to a digital signal.

o We can roughly divide line coding schemes into five broad categories: unipolar,

polar, bipolar, multilevel, and multitransition.

o Block coding provides redundancy to ensure synchronization and inherent error

detection. Block coding is normally referred to as mB/nB coding; it replaces each

m-bit group with an n-bit group.

o Scrambling provides synchronization without increasing the number of bits. Two

common scrambling techniques are B8ZS and HDB3.

o The most common technique to change an analog signal to digital data (digitization)

is called pulse code modulation (PCM).

o The first step in PCM is sampling. The analog signal is sampled every Ts s, where Ts

is the sample interval or period. The inverse of the sampling interval is called the

sampling rate or sampling frequency and denoted by fs, where fs = 1/Ts. There are

three sampling methods-ideal, natural, and flat-top.

o According to the Nyquist theorem, to reproduce the original analog signal, one

necessary condition is that the sampling rate be at least twice the highest frequency

in the original signal.



o Other sampling techniques have been developed to reduce the complexity of PCM.

The simplest is delta modulation. PCM finds the value of the signal amplitude for

each sample; DM finds the change from the previous sample.

o While there is only one way to send parallel data, there are three subclasses of

serial transmission: asynchronous, synchronous, and isochronous.

o In asynchronous transmission, we send 1 start bit (0) at the beginning and 1 or

more stop bits (1 s) at the end of each byte.

o In synchronous transmission, we send bits one after another without start or stop

bits or gaps. It is the responsibility of the receiver to group the bits.

o The isochronous mode provides synchronization for the entire stream of bits. In

other words, it guarantees that the data arrive at a fixed rate.

4.7 PRACTICE SET

Review Questions

1. List three techniques of digital-to-digital conversion.

2. Distinguish between a signal element and a data element.

3. Distinguish between data rate and signal rate.

4. Define baseline wandering and its effect on digital transmission.

5. Define a DC component and its effect on digital transmission.

6. Define the characteristics of a self-synchronizing signal.

7. List five line coding schemes discussed in this book.

8. Define block coding and give its purpose.

9. Define scrambling and give its purpose.

10. Compare and contrast PCM and DM.

11. What are the differences between parallel and serial transmission?

12. List three different techniques in serial transmission and explain the differences.

Exercises

13. Calculate the value of the signal rate for each case in Figure 4.2 if the data rate is

1 Mbps and c = 1/2.

14. In a digital transmission, the sender clock is 0.2 percent faster than the receiver clock.

How many extra bits per second does the sender send if the data rate is 1 Mbps?

15. Draw the graph of the NRZ-L scheme using each of the following data streams,

assuming that the last signal level has been positive. From the graphs, guess the

bandwidth for this scheme using the average number of changes in the signal level.

Compare your guess with the corresponding entry in Table 4.1.

a. 00000000

b. 11111111

c. 01010101

d. 00110011



16. Repeat Exercise 15 for the NRZ-I scheme.

17. Repeat Exercise 15 for the Manchester scheme.

18. Repeat Exercise 15 for the differential Manchester scheme.

19. Repeat Exercise 15 for the 2B1Q scheme, but use the following data streams.

a. 0000000000000000

b. 1111111111111111

c. 0101010101010101

d. 0011001100110011

20. Repeat Exercise 15 for the MLT-3 scheme, but use the following data streams.

a. 00000000

b. 11111111

c. 01010101

d. 00011000

21. Find the 8-bit data stream for each case depicted in Figure 4.36.

Figure 4.36  Exercise 21: three waveforms to decode, shown as (a) NRZ-I, (b) differential Manchester, and (c) AMI

22. An NRZ-I signal has a data rate of 100 Kbps. Using Figure 4.6, calculate the value

of the normalized energy (P) for frequencies at 0 Hz, 50 KHz, and 100 KHz.

23. A Manchester signal has a data rate of 100 Kbps. Using Figure 4.8, calculate the

value of the normalized energy (P) for frequencies at 0 Hz, 50 KHz, 100 KHz.



24. The input stream to a 4B/5B block encoder is 0100 0000 0000 0000 0000 0001.

Answer the following questions:

a. What is the output stream?

b. What is the length of the longest consecutive sequence of Os in the input?

c. What is the length of the longest consecutive sequence of Os in the output?

25. How many invalid (unused) code sequences can we have in 5B/6B encoding? How

many in 3B/4B encoding?

26. What is the result of scrambling the sequence 11100000000000 using one of the

following scrambling techniques? Assume that the last non-zero signal level has

been positive.

a. B8ZS

b. HDB3 (The number of nonzero pulses is odd after the last substitution)

27. What is the Nyquist sampling rate for each of the following signals?

a. A low-pass signal with bandwidth of 200 KHz?

b. A band-pass signal with bandwidth of 200 KHz if the lowest frequency is

100 KHz?

28. We have sampled a low-pass signal with a bandwidth of 200 KHz using 1024 levels

of quantization.

a. Calculate the bit rate of the digitized signal.

b. Calculate the SNR dB for this signal.

c. Calculate the PCM bandwidth of this signal.

29. What is the maximum data rate of a channel with a bandwidth of 200 KHz if we

use four levels of digital signaling.

30. An analog signal has a bandwidth of 20 KHz. If we sample this signal and send it

through a 30 Kbps channel, what is the SNRdB?

31. We have a baseband channel with a 1-MHz bandwidth. What is the data rate for

this channel if we use one of the following line coding schemes?

a. NRZ-L

b. Manchester

c. MLT-3

d. 2B1Q

32. We want to transmit 1000 characters with each character encoded as 8 bits.

a. Find the number of transmitted bits for synchronous transmission.

b. Find the number of transmitted bits for asynchronous transmission.

c. Find the redundancy percent in each case.



CHAPTER 5

Analog Transmission

In Chapter 3, we discussed the advantages and disadvantages of digital and analog transmission.

We saw that while digital transmission is very desirable, a low-pass channel is

needed. We also saw that analog transmission is the only choice if we have a bandpass

channel. Digital transmission was discussed in Chapter 4; we discuss analog transmission

in this chapter.

Converting digital data to a bandpass analog signal is traditionally called digital-to-analog

conversion. Converting a low-pass analog signal to a bandpass analog signal

is traditionally called analog-to-analog conversion. In this chapter, we discuss these two

types of conversions.

5.1 DIGITAL-TO-ANALOG CONVERSION

Digital-to-analog conversion is the process of changing one of the characteristics of

an analog signal based on the information in digital data. Figure 5.1 shows the relationship

between the digital information, the digital-to-analog modulating process,

and the resultant analog signal.

Figure 5.1  Digital-to-analog conversion: digital data at the sender are converted to an analog signal that travels over the link and is converted back to digital data at the receiver



As discussed in Chapter 3, a sine wave is defined by three characteristics: amplitude,

frequency, and phase. When we vary any one of these characteristics, we create a different version of that wave. So, by changing one characteristic of a simple electric signal, we

can use it to represent digital data. Any of the three characteristics can be altered in this

way, giving us at least three mechanisms for modulating digital data into an analog signal:

amplitude shift keying (ASK), frequency shift keying (FSK), and phase shift keying

(PSK). In addition, there is a fourth (and better) mechanism that combines changing both

the amplitude and phase, called quadrature amplitude modulation (QAM). QAM

is the most efficient of these options and is the mechanism commonly used today (see

Figure 5.2).

Figure 5.2  Types of digital-to-analog conversion: amplitude shift keying (ASK), frequency shift keying (FSK), phase shift keying (PSK), and quadrature amplitude modulation (QAM)

Aspects of Digital-to-Analog Conversion

Before we discuss specific methods of digital-to-analog modulation, two basic issues

must be reviewed: bit and baud rates and the carrier signal.

Data Element Versus Signal Element

In Chapter 4, we discussed the concept of the data element versus the signal element.

We defined a data element as the smallest piece of information to be exchanged, the

bit. We also defined a signal element as the smallest unit of a signal that is constant.

Although we continue to use the same terms in this chapter, we will see that the nature

of the signal element is a little bit different in analog transmission.

Data Rate Versus Signal Rate

We can define the data rate (bit rate) and the signal rate (baud rate) as we did for digital

transmission. The relationship between them is

S = N × (1/r)  baud

where N is the data rate (bps) and r is the number of data elements carried in one signal

element. The value of r in analog transmission is r = log2 L, where L is the type of signal

element, not the level. The same nomenclature is used to simplify the comparisons.



Bit rate is the number of bits per second. Baud rate is the number of signal

elements per second. In the analog transmission of digital data,

the baud rate is less than or equal to the bit rate.

The same analogy we used in Chapter 4 for bit rate and baud rate applies here. In

transportation, a baud is analogous to a vehicle, and a bit is analogous to a passenger.

We need to maximize the number of people per car to reduce the traffic.

Example 5.1

An analog signal carries 4 bits per signal element. If 1000 signal elements are sent per second,

find the bit rate.

Solution

In this case, r = 4, S = 1000, and N is unknown. We can find the value of N from

S = N × (1/r)  or  N = S × r = 1000 × 4 = 4000 bps

Example 5.2

An analog signal has a bit rate of 8000 bps and a baud rate of 1000 baud. How many data elements

are carried by each signal element? How many signal elements do we need?

Solution

In this example, S = 1000, N = 8000, and r and L are unknown. We first find the value of r and then the value of L.

S = N × (1/r)  →  r = N/S = 8000/1000 = 8 bits/baud
r = log2 L  →  L = 2^r = 2^8 = 256
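As a minimal sketch of the relations S = N × (1/r) and r = log2 L, reproducing the arithmetic of Examples 5.1 and 5.2:

def baud_rate(bit_rate, bits_per_element):
    """S = N / r."""
    return bit_rate / bits_per_element

def elements_needed(bit_rate, baud):
    """Return r (bits per signal element) and L = 2^r (signal element types)."""
    r = bit_rate / baud
    return r, 2 ** int(r)

print(baud_rate(4000, 4))            # Example 5.1 in reverse: 1000 baud
print(elements_needed(8000, 1000))   # Example 5.2: r = 8, L = 256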

Bandwidth

The required bandwidth for analog transmission of digital data is proportional to the

signal rate except for FSK, in which the difference between the carrier signals needs to

be added. We discuss the bandwidth for each technique.

Carrier Signal

In analog transmission, the sending device produces a high-frequency signal that acts

as a base for the information signal. This base signal is called the carrier signal or carrier

frequency. The receiving device is tuned to the frequency of the carrier signal that it

expects from the sender. Digital information then changes the carrier signal by modifying

one or more of its characteristics (amplitude, frequency, or phase). This kind of

modification is called modulation (shift keying).

Amplitude Shift Keying

In amplitude shift keying, the amplitude of the carrier signal is varied to create signal

elements. Both frequency and phase remain constant while the amplitude changes.



Binary ASK (BASK)

Although we can have several levels (kinds) of signal elements, each with a different

amplitude, ASK is normally implemented using only two levels. This is referred to as

binary amplitude shift keying or on-offkeying (OOK). The peak amplitude of one signallevel

is 0; the other is the same as the amplitude of the carrier frequency. Figure 5.3

gives a conceptual view of binary ASK.

Figure 5.3  Binary amplitude shift keying: r = 1, S = N, B = (1 + d)S; in the example shown, five bits are sent in 1 s, so the bit rate and the baud rate are both 5, and each bit occupies one signal element

Bandwidth for ASK Figure 5.3 also shows the bandwidth for ASK. Although the

carrier signal is only one simple sine wave, the process of modulation produces a

nonperiodic composite signal. This signal, as was discussed in Chapter 3, has a continuous

set of frequencies. As we expect, the bandwidth is proportional to the signal rate

(baud rate). However, there is normally another factor involved, called d, which

depends on the modulation and filtering process. The value of d is between 0 and 1. This

means that the bandwidth can be expressed as shown, where S is the signal rate and B is the bandwidth.

B = (1 + d) × S

The formula shows that the required bandwidth has a minimum value of S and a maximum value of 2S. The most important point here is the location of the bandwidth. The middle of the bandwidth is where fc, the carrier frequency, is located. This means if we have a bandpass channel available, we can choose our fc so that the modulated signal occupies that bandwidth. This is in fact the most important advantage of digital-to-analog conversion. We can shift the resulting bandwidth to match what is available.

Implementation The complete discussion of ASK implementation is beyond the

scope of this book. However, the simple ideas behind the implementation may help us

to better understand the concept itself. Figure 5.4 shows how we can simply implement

binary ASK.

If digital data are presented as a unipolar NRZ (see Chapter 4) digital signal with a high voltage of 1 V and a low voltage of 0 V, the implementation can be achieved by multiplying the NRZ digital signal by the carrier signal coming from an oscillator. When the amplitude of the NRZ signal is 1, the amplitude of the carrier frequency is held; when the amplitude of the NRZ signal is 0, the amplitude of the carrier frequency is zero.


Figure 5.4  Implementation of binary ASK: a multiplier combines the unipolar NRZ signal with the carrier signal from an oscillator

Example 5.3

We have an available bandwidth of 100 kHz which spans from 200 to 300 kHz. What are the carrier

frequency and the bit rate if we modulated our data by using ASK with d = 1?

Solution

The middle of the bandwidth is located at 250 kHz. This means that our carrier frequency can be

at fc = 250 kHz. We can use the formula for bandwidth to find the bit rate (with d = 1 and r = 1):

B = (1 + d) × S = 2 × S = 2 × N × (1/r) = 2 × N = 100 kHz  →  N = 50 kbps

Example 5.4

In data communications, we normally use full-duplex links with communication in both directions.

We need to divide the bandwidth into two with two carrier frequencies, as shown in Figure

5.5. The figure shows the positions of two carrier frequencies and the bandwidths. The

available bandwidth for each direction is now 50 kHz, which leaves us with a data rate of 25 kbps

in each direction.

Figure 5.5  Bandwidth of full-duplex ASK used in Example 5.4: the 200-300 kHz band is split into two 50-kHz bands with carrier frequencies at 225 kHz and 275 kHz

Multilevel ASK

The above discussion uses only two amplitude levels. We can have multilevel ASK in

which there are more than two levels. We can use 4,8, 16, or more different amplitudes

for the signal and modulate the data using 2, 3, 4, or more bits at a time. In these cases,



r = 2, r = 3, r =4, and so on. Although this is not implemented with pure ASK, it is

implemented with QAM (as we will see later).

Frequency Shift Keying

In frequency shift keying, the frequency of the carrier signal is varied to represent data.

The frequency of the modulated signal is constant for the duration of one signal element,

but changes for the next signal element if the data element changes. Both peak

amplitude and phase remain constant for all signal elements.

Binary FSK (BFSK)

One way to think about binary FSK (or BFSK) is to consider two carrier frequencies. In

Figure 5.6, we have selected two carrier frequencies, f1 and f2. We use the first carrier if

the data element is 0; we use the second if the data element is 1. However, note that this

is an unrealistic example used only for demonstration purposes. Normally the carrier

frequencies are very high, and the difference between them is very small.

Figure 5.6  Binary frequency shift keying: r = 1, S = N, B = (1 + d)S + 2Δf; two carrier frequencies f1 and f2 are used, separated by 2Δf, and the bit rate and baud rate are both 5 in the example shown

As Figure 5.6 shows, the middle of one bandwidth is f1 and the middle of the other is f2. Both f1 and f2 are Δf apart from the midpoint between the two bands. The difference between the two frequencies is 2Δf.

Bandwidth for BFSK

Figure 5.6 also shows the bandwidth of FSK. Again the carrier

signals are only simple sine waves, but the modulation creates a nonperiodic composite

signal with continuous frequencies. We can think of FSK as two ASK signals,

each with its own carrier frequency (f1 or f2). If the difference between the two frequencies is 2Δf, then the required bandwidth is

B = (1 + d) × S + 2Δf

What should be the minimum value of 2Δf? In Figure 5.6, we have chosen a value greater than (1 + d)S. It can be shown that the minimum value should be at least S for

the proper operation of modulation and demodulation.



Example 5.5

We have an available bandwidth of 100 kHz which spans from 200 to 300 kHz. What should be

the carrier frequency and the bit rate if we modulated our data by using FSK with d = 1?

Solution

This problem is similar to Example 5.3, but we are modulating by using FSK. The midpoint of

the band is at 250 kHz. We choose 2Δf to be 50 kHz; this means

B = (1 + d) × S + 2Δf = 100 kHz  →  2S = 50 kHz  →  S = 25 kbaud  →  N = 25 kbps

Compared to Example 5.3, we can see the bit rate for ASK is 50 kbps while the bit rate for FSK is

25 kbps.
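The bandwidth bookkeeping of Examples 5.3 and 5.5 can be captured in two small functions; the sketch below assumes r = 1, so that N = S.

def ask_bit_rate(bandwidth_hz, d=1):
    """B = (1 + d) x S, so S = B / (1 + d); with r = 1, N = S."""
    return bandwidth_hz / (1 + d)

def fsk_bit_rate(bandwidth_hz, two_delta_f, d=1):
    """B = (1 + d) x S + 2*delta_f, so S = (B - 2*delta_f) / (1 + d); with r = 1, N = S."""
    return (bandwidth_hz - two_delta_f) / (1 + d)

print(ask_bit_rate(100_000))             # Example 5.3: 50,000 bps
print(fsk_bit_rate(100_000, 50_000))     # Example 5.5: 25,000 bps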

Implementation There are two implementations of BFSK: noncoherent and coherent.

In noncoherent BFSK, there may be discontinuity in the phase when one signal

element ends and the next begins. In coherent BFSK, the phase continues through the

boundary of two signal elements. Noncoherent BFSK can be implemented by treating

BFSK as two ASK modulations and using two carrier frequencies. Coherent BFSK can

be implemented by using one voltage-controlled oscillator (VeO) that changes its frequency

according to the input voltage. Figure 5.7 shows the simplified idea behind the

second implementation. The input to the oscillator is the unipolar NRZ signal. When

the amplitude of NRZ is zero, the oscillator keeps its regular frequency; when the

amplitude is positive, the frequency is increased.

Figure 5.7  Implementation of BFSK: the unipolar NRZ signal drives a voltage-controlled oscillator (VCO)

Multilevel FSK

Multilevel modulation (MFSK) is not uncommon with the FSK method. We can use

more than two frequencies. For example, we can use four different frequencies f1, f2, f3, and f4 to send 2 bits at a time. To send 3 bits at a time, we can use eight frequencies. And so on. However, we need to remember that the frequencies need to be 2Δf apart. For the proper operation of the modulator and demodulator, it can be shown that the minimum value of 2Δf needs to be S. We can show that the bandwidth with d = 0 is

B = (1 + d) × S + (L − 1) × 2Δf  →  B = L × S



Example 5.6

We need to send data 3 bits at a time at a bit rate of 3 Mbps. The carrier frequency is

10 MHz. Calculate the number of levels (different frequencies), the baud rate, and the

bandwidth.

Solution

We can have L = 2^3 = 8. The baud rate is S = 3 Mbps/3 = 1 Mbaud. This means that the carrier frequencies must be 1 MHz apart (2Δf = 1 MHz). The bandwidth is B = L × S = 8 × 1 MHz = 8 MHz. Figure 5.8

shows the allocation of frequencies and bandwidth.

Figure 5.8  Bandwidth of MFSK used in Example 5.6: eight carrier frequencies f1 to f8 at 6.5, 7.5, 8.5, 9.5, 10.5, 11.5, 12.5, and 13.5 MHz, centered on 10 MHz; total bandwidth 8 MHz
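As a minimal sketch of the same arithmetic, assuming d = 0 so that B = L × S and the carriers are spaced S apart (the carrier placement matches Figure 5.8):

def mfsk(bit_rate, bits_per_element, carrier_hz):
    """Return the number of frequencies, baud rate, bandwidth, and carrier list."""
    L = 2 ** bits_per_element            # number of different frequencies
    S = bit_rate / bits_per_element      # baud rate
    B = L * S                            # bandwidth with d = 0
    first = carrier_hz - B / 2 + S / 2   # lowest carrier frequency
    return L, S, B, [first + i * S for i in range(L)]

L, S, B, freqs = mfsk(3_000_000, 3, 10_000_000)
print(L, S, B)                    # 8 frequencies, 1 Mbaud, 8 MHz
print([f / 1e6 for f in freqs])   # 6.5, 7.5, ..., 13.5 MHz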

Phase Shift Keying

In phase shift keying, the phase of the carrier is varied to represent two or more different

signal elements. Both peak amplitude and frequency remain constant as the phase

changes. Today, PSK is more common than ASK or FSK. However, we will see

shortly that QAM, which combines ASK and PSK, is the dominant method of digital-to-analog

modulation.

Binary PSK (BPSK)

The simplest PSK is binary PSK, in which we have only two signal elements, one with

a phase of 0°, and the other with a phase of 180°. Figure 5.9 gives a conceptual view

of PSK. Binary PSK is as simple as binary ASK with one big advantage-it is less

Figure 5.9  Binary phase shift keying: r = 1, S = N, B = (1 + d)S; the two signal elements have phases of 0° and 180°, and the bit rate and baud rate are both 5 in the example shown



susceptible to noise. In ASK, the criterion for bit detection is the amplitude of the signal;

in PSK, it is the phase. Noise can change the amplitude more easily than it can change

the phase. In other words, PSK is less susceptible to noise than ASK. PSK is superior to

FSK because we do not need two carrier signals.

Bandwidth Figure 5.9 also shows the bandwidth for BPSK. The bandwidth is the

same as that for binary ASK, but less than that for BFSK. No bandwidth is wasted for

separating two carrier signals.

Implementation The implementation of BPSK is as simple as that for ASK. The reason

is that the signal element with phase 180° can be seen as the complement of the signal

element with phase 0°. This gives us a clue on how to implement BPSK. We use

the same idea we used for ASK but with a polar NRZ signal instead of a unipolar NRZ

signal, as shown in Figure 5.10. The polar NRZ signal is multiplied by the carrier frequency;

the 1 bit (positive voltage) is represented by a phase starting at 0°; the 0 bit

(negative voltage) is represented by a phase starting at 180°.

Figure 5.10  Implementation of BPSK: a multiplier combines the polar NRZ signal with the carrier signal fc

Quadrature PSK (QPSK)

The simplicity of BPSK enticed designers to use 2 bits at a time in each signal element,

thereby decreasing the baud rate and eventually the required bandwidth. The scheme is

called quadrature PSK or QPSK because it uses two separate BPSK modulations; one

is in-phase, the other quadrature (out-of-phase). The incoming bits are first passed

through a serial-to-parallel conversion that sends one bit to one modulator and the next

bit to the other modulator. If the duration of each bit in the incoming signal is T, the

duration of each bit sent to the corresponding BPSK signal is 2T. This means that the

bit to each BPSK signal has one-half the frequency of the original signal. Figure 5.11

shows the idea.

The two composite signals created by each multiplier are sine waves with the

same frequency, but different phases. When they are added, the result is another sine

wave, with one of four possible phases: 45°, -45°, 135°, and -135°. There are four

kinds of signal elements in the output signal (L = 4), so we can send 2 bits per signal

element (r = 2).



Figure 5.11  QPSK and its implementation: a serial-to-parallel converter feeds two BPSK modulators, one in-phase and one quadrature; the adder output is a signal whose phase is one of 45°, -45°, 135°, or -135°

Example 5.7

Find the bandwidth for a signal transmitting at 12 Mbps for QPSK. The value of d = 0.

Solution

For QPSK, 2 bits are carried by one signal element. This means that r = 2. So the signal rate (baud rate) is S = N × (1/r) = 6 Mbaud. With a value of d = 0, we have B = S = 6 MHz.

Constellation Diagram

A constellation diagram can help us define the amplitude and phase of a signal element,

particularly when we are using two carriers (one in-phase and one quadrature). The

diagram is useful when we are dealing with multilevel ASK, PSK, or QAM (see next

section). In a constellation diagram, a signal element type is represented as a dot. The

bit or combination of bits it can carry is often written next to it.

The diagram has two axes. The horizontal X axis is related to the in-phase carrier;

the vertical Y axis is related to the quadrature carrier. For each point on the diagram,

four pieces of information can be deduced. The projection of the point on the X axis

defines the peak amplitude of the in-phase component; the projection of the point on

the Y axis defines the peak amplitude of the quadrature component. The length of the

line (vector) that connects the point to the origin is the peak amplitude of the signal

element (combination of the X and Y components); the angle the line makes with the

X axis is the phase of the signal element. All the information we need can easily be found

on a constellation diagram. Figure 5.12 shows a constellation diagram.



Figure 5.12  Concept of a constellation diagram: the horizontal axis is the in-phase carrier and the vertical axis is the quadrature carrier; the length of the vector to a point gives the peak amplitude of the signal element, and the angle it makes with the X axis gives its phase

Example 5.8

Show the constellation diagrams for an ASK (OOK), BPSK, and QPSK signals.

Solution

Figure 5.13 shows the three constellation diagrams.

Figure 5.13  Three constellation diagrams: (a) ASK (OOK) with two points on the X axis at amplitudes 0 and 1; (b) BPSK with two points on the X axis at amplitudes -1 and 1; (c) QPSK with four points (00, 01, 10, and 11), one in each quadrant

Let us analyze each case separately:

a. For ASK, we are using only an in-phase carrier. Therefore, the two points should be on the

X axis. Binary 0 has an amplitude of 0 V; binary 1 has an amplitude of 1 V (for example).

The points are located at the origin and at 1 unit.

b. BPSK also uses only an in-phase carrier. However, we use a polar NRZ signal for modulation. It creates two types of signal elements, one with amplitude 1 and the other with amplitude -1. This can be stated in other words: BPSK creates two different signal elements, one with amplitude 1 V and in phase and the other with amplitude 1 V and 180° out of phase.

c. QPSK uses two carriers, one in-phase and the other quadrature. The point representing 11 is made of two combined signal elements, both with an amplitude of 1 V. One element is represented by an in-phase carrier, the other element by a quadrature carrier. The amplitude of the final signal element sent for this 2-bit data element is 2^(1/2), and the phase is 45°. The argument is similar for the other three points. All signal elements have an amplitude of 2^(1/2), but their phases are different (45°, 135°, -135°, and -45°). Of course, we could have chosen the amplitude of the carrier to be 1/2^(1/2) to make the final amplitudes 1 V.
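As a minimal check of these numbers, the following sketch computes the vector length and angle for the four QPSK points, each with in-phase and quadrature components of +1 V or -1 V.

import math

def amplitude_and_phase(i, q):
    """Amplitude is the vector length to the point; phase is its angle with the X axis."""
    return math.hypot(i, q), math.degrees(math.atan2(q, i))

for point in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:
    print(point, amplitude_and_phase(*point))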



Quadrature Amplitude Modulation

PSK is limited by the ability of the equipment to distinguish small differences in phase.

This factor limits its potential bit rate. So far, we have been altering only one of the

three characteristics of a sine wave at a time; but what if we alter two? Why not combine

ASK and PSK? The idea of using two carriers, one in-phase and the other quadrature,

with different amplitude levels for each carrier is the concept behind quadrature

amplitude modulation (QAM).

Quadrature amplitude modulation is a combination ofASK and PSK.

The possible variations of QAM are numerous. Figure 5.14 shows some of these

schemes. Figure 5.14a shows the simplest 4-QAM scheme (four different signal element

types) using a unipolar NRZ signal to modulate each carrier. This is the same

mechanism we used for ASK (OOK). Part b shows another 4-QAM using polar NRZ,

but this is exactly the same as QPSK. Part c shows another QAM-4 in which we used a

signal with two positive levels to modulate each of the two carriers. Finally, Figure 5.14d

shows a 16-QAM constellation of a signal with eight levels, four positive and four

negative.

Figure 5.14  Constellation diagrams for some QAMs: (a) 4-QAM using unipolar NRZ on each carrier, (b) 4-QAM using polar NRZ (the same as QPSK), (c) 4-QAM using a signal with two positive levels, and (d) 16-QAM using eight levels, four positive and four negative

Bandwidth for QAM

The minimum bandwidth required for QAM transmission is the same as that required

for ASK and PSK transmission. QAM has the same advantages as PSK over ASK.

5.2 ANALOG-TO-ANALOG CONVERSION

Analog-to-analog conversion, or analog modulation, is the representation of analog

information by an analog signal. One may ask why we need to modulate an analog signal;

it is already analog. Modulation is needed if the medium is bandpass in nature or if

only a bandpass channel is available to us. An example is radio. The government assigns

a narrow bandwidth to each radio station. The analog signal produced by each station is

a low-pass signal, all in the same range. To be able to listen to different stations, the

low-pass signals need to be shifted, each to a different range.



Analog-to-analog conversion can be accomplished in three ways: amplitude

modulation (AM), frequency modulation (FM), and phase modulation (PM). FM

and PM are usually categorized together. See Figure 5.15.

Figure 5.15  Types of analog-to-analog modulation: amplitude modulation (AM), frequency modulation (FM), and phase modulation (PM)

Amplitude Modulation

In AM transmission, the carrier signal is modulated so that its amplitude varies with the

changing amplitudes of the modulating signal. The frequency and phase of the carrier

remain the same; only the amplitude changes to follow variations in the information. Figure

5.16 shows how this concept works. The modulating signal is the envelope of the carrier.

Figure 5.16  Amplitude modulation: a multiplier combines the modulating signal with the carrier; the AM bandwidth is BAM = 2B, centered on the carrier frequency fc

As Figure 5.16 shows, AM is normally implemented by using a simple multiplier

because the amplitude of the carrier signal needs to be changed according to the amplitude

of the modulating signal.

AM Bandwidth

Figure 5.16 also shows the bandwidth of an AM signal. The modulation creates a bandwidth

that is twice the bandwidth of the modulating signal and covers a range centered

on the carrier frequency. However, the signal components above and below the carrier



frequency carry exactly the same information. For this reason, some implementations

discard one-half of the signals and cut the bandwidth in half.

The total bandwidth required for AM can be determined
from the bandwidth of the audio signal: B_AM = 2B.


Standard Bandwidth Allocation for AM Radio

The bandwidth of an audio signal (speech and music) is usually 5 kHz. Therefore, an

AM radio station needs a bandwidth of 10 kHz. In fact, the Federal Communications

Commission (FCC) allows 10 kHz for each AM station.

AM stations are allowed carrier frequencies anywhere between 530 and 1700 kHz

(1.7 MHz). However, each station's carrier frequency must be separated from those on

either side of it by at least 10 kHz (one AM bandwidth) to avoid interference. If one

station uses a carrier frequency of 1100 kHz, the next station's carrier frequency cannot

be lower than 1110 kHz (see Figure 5.17).

Figure 5.17

AM band allocation

[AM stations occupy the band from 530 to 1700 kHz in 10-kHz channels]
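The arithmetic of this allocation can be checked with a one-line calculation; the fragment below is illustrative only and makes no claim about how the FCC actually numbers the channels.

# Illustration only: how many 10-kHz AM channels fit between 530 and 1700 kHz.
band_start_khz = 530
band_end_khz = 1700
channel_bw_khz = 10                 # B_AM = 2B, with B = 5 kHz of audio

channels = (band_end_khz - band_start_khz) // channel_bw_khz
print(channels, "channels of", channel_bw_khz, "kHz")    # 117 channels of 10 kHz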

Frequency Modulation

In FM transmission, the frequency of the carrier signal is modulated to follow the changing

voltage level (amplitude) of the modulating signal. The peak amplitude and phase of

the carrier signal remain constant, but as the amplitude of the information signal

changes, the frequency of the carrier changes correspondingly. Figure 5.18 shows the

relationships of the modulating signal, the carrier signal, and the resultant FM signal.

As Figure 5.18 shows, FM is normally implemented by using a voltage-controlled

oscillator as with FSK. The frequency of the oscillator changes according to the input

voltage which is the amplitude of the modulating signal.

FM Bandwidth

Figure 5.18 also shows the bandwidth of an FM signal. The actual bandwidth is difficult

to determine exactly, but it can be shown empirically that it is several times that
of the analog signal, or 2(1 + β)B, where β is a factor that depends on the modulation
technique, with a common value of 4.

The total bandwidth required for FM can be determined from

the bandwidth of the audio signal: B_FM = 2(1 + β)B.



Figure 5.18

Frequency modulation

[Waveforms of the modulating signal (audio), the carrier, and the resulting FM signal versus time; a voltage-controlled oscillator (VCO) generates the FM signal; the FM spectrum is centered on fc with bandwidth B_FM = 2(1 + β)B]

Standard Bandwidth Allocation for FM Radio

The bandwidth of an audio signal (speech and music) broadcast in stereo is almost

15 kHz. The FCC allows 200 kHz (0.2 MHz) for each station. This means β = 4 with

some extra guard band. FM stations are allowed carrier frequencies anywhere between

88 and 108 MHz. Stations must be separated by at least 200 kHz to keep their bandwidths

from overlapping. To create even more privacy, the FCC requires that in a given

area, only alternate bandwidth allocations may be used. The others remain unused to prevent

any possibility of two stations interfering with each other. Given 88 to 108 MHz as

a range, there are 100 potential FM bandwidths in an area, of which 50 can operate at
any one time. Figure 5.19 illustrates this concept.

Figure 5.19

FM band allocation

[FM stations occupy the band from 88 to 108 MHz; carriers fc are separated by 200 kHz, and alternate allocations are left unused (no station)]

Phase Modulation

In PM transmission, the phase of the carrier signal is modulated to follow the changing

voltage level (amplitude) of the modulating signal. The peak amplitude and frequency

of the carrier signal remain constant, but as the amplitude of the information signal

changes, the phase of the carrier changes correspondingly. It can be proved mathematically

(see Appendix C) that PM is the same as FM with one difference. In FM, the

instantaneous change in the carrier frequency is proportional to the amplitude of the



modulating signal; in PM the instantaneous change in the carrier frequency is proportional

to the derivative of the amplitude of the modulating signal. Figure 5.20 shows the

relationships of the modulating signal, the carrier signal, and the resultant PM signal.

Figure 5.20

Phase modulation

[Waveforms of the modulating signal (audio), the carrier, and the resulting PM signal versus time; the modulating signal passes through a differentiator (d/dt) before driving the voltage-controlled oscillator (VCO)]

As Figure 5.20 shows, PM is normally implemented by using a voltage-controlled

oscillator along with a differentiator. The frequency of the oscillator changes according to
the derivative of the input voltage, which is the amplitude of the modulating signal.

PM Bandwidth

Figure 5.20 also shows the bandwidth of a PM signal. The actual bandwidth is difficult

to determine exactly, but it can be shown empirically that it is several times that of the

analog signal. Although the formula shows the same bandwidth for FM and PM, the
value of β is lower in the case of PM (around 1 for narrowband and 3 for wideband).

The total bandwidth required for PM can be determined from the bandwidth and
maximum amplitude of the modulating signal: B_PM = 2(1 + β)B.
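The three bandwidth formulas can be applied together; the small sketch below (ours, for illustration) evaluates them for a 5-kHz signal, using the common β values quoted above.

# Applying B_AM = 2B and B_FM = B_PM = 2(1 + beta)B to a 5-kHz signal.
def am_bandwidth(b_khz):
    return 2 * b_khz

def fm_pm_bandwidth(b_khz, beta):
    return 2 * (1 + beta) * b_khz

B = 5  # kHz
print("AM:", am_bandwidth(B), "kHz")                    # 10 kHz
print("FM (beta = 4):", fm_pm_bandwidth(B, 4), "kHz")   # 50 kHz
print("PM (beta = 1):", fm_pm_bandwidth(B, 1), "kHz")   # 20 kHz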

5.3 RECOMMENDED READING

For more details about subjects discussed in this chapter, we recommend the following

books. The items in brackets [...] refer to the reference list at the end of the text.

Books

Digital-to-analog conversion is discussed in Chapter 14 of [Pea92], Chapter 5 of
[Cou01], and Section 5.2 of [Sta04]. Analog-to-analog conversion is discussed in
Chapters 8 to 13 of [Pea92], Chapter 5 of [Cou01], and Section 5.4 of [Sta04]. [Hsu03]



gives a good mathematical approach to all materials discussed in this chapter. More

advanced materials can be found in [Ber96].

5.4 KEY TERMS

amplitude modulation (AM)

amplitude shift keying (ASK)

analog-to-analog conversion

carrier signal

constellation diagram

digital-to-analog conversion

frequency modulation (FM)

frequency shift keying (FSK)

phase modulation (PM)

phase shift keying (PSK)

quadrature amplitude modulation

(QAM)

5.5 SUMMARY

o Digital-to-analog conversion is the process of changing one of the characteristics

of an analog signal based on the information in the digital data.

o Digital-to-analog conversion can be accomplished in several ways: amplitude shift

keying (ASK), frequency shift keying (FSK), and phase shift keying (PSK).

Quadrature amplitude modulation (QAM) combines ASK and PSK.

o In amplitude shift keying, the amplitude of the carrier signal is varied to create signal

elements. Both frequency and phase remain constant while the amplitude changes.

o In frequency shift keying, the frequency of the carrier signal is varied to represent

data. The frequency of the modulated signal is constant for the duration of one signal

element, but changes for the next signal element if the data element changes.

Both peak amplitude and phase remain constant for all signal elements.

o In phase shift keying, the phase of the carrier is varied to represent two or more different

signal elements. Both peak amplitude and frequency remain constant as the

phase changes.

o A constellation diagram shows us the amplitude and phase of a signal element,

particularly when we are using two carriers (one in-phase and one quadrature).

o Quadrature amplitude modulation (QAM) is a combination of ASK and PSK.

QAM uses two carriers, one in-phase and the other quadrature, with different

amplitude levels for each carrier.

o Analog-to-analog conversion is the representation of analog information by an

analog signal. Conversion is needed if the medium is bandpass in nature or if only

a bandpass bandwidth is available to us.

o Analog-to-analog conversion can be accomplished in three ways: amplitude modulation

(AM), frequency modulation (FM), and phase modulation (PM).

o In AM transmission, the carrier signal is modulated so that its amplitude varies with the

changing amplitudes of the modulating signal. The frequency and phase of the carrier

remain the same; only the amplitude changes to follow variations in the information.

o In FM transmission, the frequency of the carrier signal is modulated to follow the

changing voltage level (amplitude) of the modulating signal. The peak amplitude



and phase of the carrier signal remain constant, but as the amplitude of the information

signal changes, the frequency of the carrier changes correspondingly.

o In PM transmission, the phase of the carrier signal is modulated to follow the

changing voltage level (amplitude) of the modulating signal. The peak amplitude

and frequency of the carrier signal remain constant, but as the amplitude of the

information signal changes, the phase of the carrier changes correspondingly.

5.6 PRACTICE SET

Review Questions

1. Define analog transmission.

2. Define carrier signal and its role in analog transmission.

3. Define digital-to-analog conversion.

4. Which characteristics of an analog signal are changed to represent the digital signal

in each of the following digital-to-analog conversions?

a. ASK

b. FSK

c. PSK

d. QAM

5. Which of the four digital-to-analog conversion techniques (ASK, FSK, PSK or

QAM) is the most susceptible to noise? Defend your answer.

6. Define constellation diagram and its role in analog transmission.

7. What are the two components of a signal when the signal is represented on a constellation
diagram? Which component is shown on the horizontal axis? Which is

shown on the vertical axis?

8. Define analog-to-analog conversion.

9. Which characteristics of an analog signal are changed to represent the lowpass analog

signal in each of the following analog-to-analog conversions?

a. AM

b. FM

c. PM

10. Which of the three analog-to-analog conversion techniques (AM, FM, or PM) is

the most susceptible to noise? Defend your answer.

Exercises

11. Calculate the baud rate for the given bit rate and type of modulation.

a. 2000 bps, FSK

b. 4000 bps, ASK

c. 6000 bps, QPSK

d. 36,000 bps, 64-QAM



12. Calculate the bit rate for the given baud rate and type of modulation.

a. 1000 baud, FSK

b. 1000 baud, ASK

c. 1000 baud, BPSK

d. 1000 baud, 16-QAM

13. What is the number of bits per baud for the following techniques?

a. ASK with four different amplitudes

b. FSK with 8 different frequencies

c. PSK with four different phases

d. QAM with a constellation of 128 points.

14. Draw the constellation diagram for the following:

a. ASK, with peak amplitude values of 1 and 3

b. BPSK, with a peak amplitude value of 2

c. QPSK, with a peak amplitude value of 3

d. 8-QAM with two different peak amplitude values, 1 and 3, and four different

phases.

15. Draw the constellation diagram for the following cases. Find the peak amplitude

value for each case and define the type of modulation (ASK, FSK, PSK, or QAM).

The numbers in parentheses define the values of I and Q respectively.

a. Two points at (2, 0) and (3, 0).

b. Two points at (3, 0) and (-3, 0).

c. Four points at (2, 2), (-2, 2), (-2, -2), and (2, -2).

d. Two points at (0, 2) and (0, -2).

16. How many bits per baud can we send in each of the following cases if the signal

constellation has one of the following number of points?

a. 2

b. 4

c. 16

d. 1024

17. What is the required bandwidth for the following cases if we need to send 4000 bps?

Let d = 1.

a. ASK

b. FSK with 2Δf = 4 kHz

c. QPSK

d. 16-QAM

18. The telephone line has a 4-kHz bandwidth. What is the maximum number of bits we

can send using each of the following techniques? Let d = O.

a. ASK

b. QPSK

c. 16-QAM

d. 64-QAM



19. A corporation has a medium with a 1-MHz bandwidth (lowpass). The corporation

needs to create 10 separate independent channels each capable of sending at least

10 Mbps. The company has decided to use QAM technology. What is the minimum

number of bits per baud for each channel? What is the number of points in

the constellation diagram for each channel? Let d = O.

20. A cable company uses one of the cable TV channels (with a bandwidth of 6 MHz)

to provide digital communication for each resident. What is the available data rate

for each resident if the company uses a 64-QAM technique?

21. Find the bandwidth for the following situations if we need to modulate a 5-kHz

voice.

a. AM

b. FM (set β = 5)

c. PM (set β = 1)

22. Find the total number of channels in the corresponding band allocated by the FCC.

a. AM

b. FM


CHAPTER 6

Bandwidth Utilization:

Multiplexing and Spreading

In real life, we have links with limited bandwidths. The wise use of these bandwidths

has been, and will be, one of the main challenges of electronic communications. However,

the meaning of wise may depend on the application. Sometimes we need to combine

several low-bandwidth channels to make use of one channel with a larger bandwidth.

Sometimes we need to expand the bandwidth of a channel to achieve goals such as

privacy and antijamming. In this chapter, we explore these two broad categories of

bandwidth utilization: multiplexing and spreading. In multiplexing, our goal is efficiency;

we combine several channels into one. In spreading, our goals are privacy and

antijamming; we expand the bandwidth of a channel to insert redundancy, which is

necessary to achieve these goals.

Bandwidth utilization is the wise use of available bandwidth to achieve specific goals.

Efficiency can be achieved by multiplexing;

privacy and antijamming can be achieved by spreading.

6.1 MULTIPLEXING

Whenever the bandwidth of a medium linking two devices is greater than the bandwidth

needs of the devices, the link can be shared. Multiplexing is the set of techniques

that allows the simultaneous transmission of multiple signals across a single data link.

As data and telecommunications use increases, so does traffic. We can accommodate

this increase by continuing to add individual links each time a new channel is needed;

or we can install higher-bandwidth links and use each to carry multiple signals. As

described in Chapter 7, today's technology includes high-bandwidth media such as

optical fiber and terrestrial and satellite microwaves. Each has a bandwidth far in excess

of that needed for the average transmission signal. If the bandwidth of a link is greater

than the bandwidth needs of the devices connected to it, the bandwidth is wasted. An

efficient system maximizes the utilization of all resources; bandwidth is one of the

most precious resources we have in data communications.


In a multiplexed system, n lines share the bandwidth of one link. Figure 6.1 shows

the basic format of a multiplexed system. The lines on the left direct their transmission

streams to a multiplexer (MUX), which combines them into a single stream (many-to-one).

At the receiving end, that stream is fed into a demultiplexer (DEMUX), which

separates the stream back into its component transmissions (one-to-many) and

directs them to their corresponding lines. In the figure, the word link refers to the

physical path. The word channel refers to the portion of a link that carries a transmission

between a given pair of lines. One link can have many (n) channels.

Figure 6.1

Dividing a link into channels

[n input lines feed a multiplexer (MUX); 1 link with n channels connects it to a demultiplexer (DEMUX), which delivers n output lines]

There are three basic multiplexing techniques: frequency-division multiplexing,

wavelength-division multiplexing, and time-division multiplexing. The first two are

techniques designed for analog signals; the third, for digital signals (see Figure 6.2).

Figure 6.2

Categories of multiplexing

[Multiplexing divides into frequency-division multiplexing (analog), wavelength-division multiplexing (analog), and time-division multiplexing (digital)]

Although some textbooks consider code-division multiple access (CDMA) as a
fourth multiplexing category, we discuss CDMA as an access method (see Chapter 12).

Frequency-Division Multiplexing

Frequency-division multiplexing (FDM) is an analog technique that can be applied

when the bandwidth of a link (in hertz) is greater than the combined bandwidths of

the signals to be transmitted. In FDM, signals generated by each sending device modulate

different carrier frequencies. These modulated signals are then combined into a single

composite signal that can be transported by the link. Carrier frequencies are separated by

sufficient bandwidth to accommodate the modulated signal. These bandwidth ranges are

the channels through which the various signals travel. Channels can be separated by



strips of unused bandwidth, called guard bands, to prevent signals from overlapping. In

addition, carrier frequencies must not interfere with the original data frequencies.

Figure 6.3 gives a conceptual view of FDM. In this illustration, the transmission path

is divided into three parts, each representing a channel that carries one transmission.

Figure 6.3

Frequency-division multiplexing

[Input lines enter the MUX; the link is divided into channel 1, channel 2, and channel 3; the DEMUX separates them onto the output lines]

We consider FDM to be an analog multiplexing technique; however, this does not

mean that FDM cannot be used to combine sources sending digital signals. A digital

signal can be converted to an analog signal (with the techniques discussed in Chapter 5)

before FDM is used to multiplex them.

FDM is an analog multiplexing technique that combines analog signals.

Multiplexing Process

Figure 6.4 is a conceptual illustration of the multiplexing process. Each source generates

a signal of a similar frequency range. Inside the multiplexer, these similar signals

modulate different carrier frequencies (f1, f2, and f3). The resulting modulated signals

are then combined into a single composite signal that is sent out over a media link that

has enough bandwidth to accommodate it.

Figure 6.4

FDM process

[Three baseband analog signals are modulated onto carriers f1, f2, and f3 and added to form the composite FDM signal]



Demultiplexing Process

The demultiplexer uses a series of filters to decompose the multiplexed signal into its

constituent component signals. The individual signals are then passed to a demodulator

that separates them from their carriers and passes them to the output lines. Figure 6.5 is

a conceptual illustration of the demultiplexing process.

Figure 6.5

FDM demultiplexing example

[The composite signal passes through filters; demodulators using carriers f1, f2, and f3 recover the three baseband analog signals]

Example 6.1

Assume that a voice channel occupies a bandwidth of 4 kHz. We need to combine three voice

channels into a link with a bandwidth of 12 kHz, from 20 to 32 kHz. Show the configuration,

using the frequency domain. Assume there are no guard bands.

Solution

We shift (modulate) each of the three voice channels to a different bandwidth, as shown in Figure

6.6. We use the 20- to 24-kHz bandwidth for the first channel, the 24- to 28-kHz bandwidth

for the second channel, and the 28- to 32-kHz bandwidth for the third one. Then we combine

them as shown in Figure 6.6. At the receiver, each channel receives the entire signal, using a

filter to separate out its own signal. The first channel uses a filter that passes frequencies

between 20 and 24 kHz and filters out (discards) any other frequencies. The second channel

uses a filter that passes frequencies between 24 and 28 kHz, and the third channel uses a filter

that passes frequencies between 28 and 32 kHz. Each channel then shifts the frequency to start

from zero.

Example 6.2

Five channels, each with a 100-kHz bandwidth, are to be multiplexed together. What is the minimum
bandwidth of the link if there is a need for a guard band of 10 kHz between the channels to

prevent interference?

Solution

For five channels, we need at least four guard bands. This means that the required bandwidth is at

least 5 x 100 + 4 x 10 = 540 kHz, as shown in Figure 6.7.
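The guard-band arithmetic generalizes to any number of channels; the helper below is a small illustrative sketch, not code from the text.

# n FDM channels need (n - 1) guard bands between them.
def fdm_link_bandwidth_khz(n_channels, channel_bw_khz, guard_khz):
    return n_channels * channel_bw_khz + (n_channels - 1) * guard_khz

print(fdm_link_bandwidth_khz(5, 100, 10), "kHz")   # 540 kHz, as in Example 6.2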



Figure 6.6 Example 6.1

[Each voice channel is shifted by a modulator to 20-24 kHz, 24-28 kHz, or 28-32 kHz, combined onto the higher-bandwidth link, and recovered at the receiver by a bandpass filter]

Figure 6.7 Example 6.2

[Five 100-kHz channels separated by four 10-kHz guard bands occupy 540 kHz]

Example 6.3

Four data channels (digital), each transmitting at 1 Mbps, use a satellite channel of 1 MHz.

Design an appropriate configuration, using FDM.

Solution

The satellite channel is analog. We divide it into four channels, each channel having a 250-kHz
bandwidth. Each digital channel of 1 Mbps is modulated such that each 4 bits is modulated to

1 Hz. One solution is 16-QAM modulation. Figure 6.8 shows one possible configuration.

The Analog Carrier System

To maximize the efficiency of their infrastructure, telephone companies have traditionally

multiplexed signals from lower-bandwidth lines onto higher-bandwidth lines. In this

way, many switched or leased lines can be combined into fewer but bigger channels. For

analog lines, FDM is used.



Figure 6.8 Example 6.3

[Four 1-Mbps digital channels are each modulated onto a 250-kHz analog channel; the four analog channels fill the 1-MHz satellite channel]

One of these hierarchical systems used by AT&T is made up of groups, supergroups,

master groups, and jumbo groups (see Figure 6.9).

Figure 6.9

Analog hierarchy

[12 voice channels are FDM-multiplexed into a 48-kHz group; groups into supergroups; supergroups into master groups; and master groups into a 16.984-MHz jumbo group of 3600 voice channels]

In this analog hierarchy, 12 voice channels are multiplexed onto a higher-bandwidth

line to create a group. A group has 48 kHz of bandwidth and supports 12 voice channels.

At the next level, up to five groups can be multiplexed to create a composite signal

called a supergroup. A supergroup has a bandwidth of 240 kHz and supports up to

60 voice channels. Supergroups can be made up of either five groups or 60 independent

voice channels.

At the next level, 10 supergroups are multiplexed to create a master group. A

master group must have 2.40 MHz of bandwidth, but the need for guard bands between

the supergroups increases the necessary bandwidth to 2.52 MHz. Master groups support

up to 600 voice channels.

Finally, six master groups can be combined into a jumbo group. A jumbo group

must have 15.12 MHz (6 x 2.52 MHz) but is augmented to 16.984 MHz to allow for

guard bands between the master groups.
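The hierarchy arithmetic can be verified directly; the sketch below is illustrative, with the guard-band totals (2.52 and 16.984 MHz) taken as given from the paragraph above.

# Checking the analog-hierarchy figures quoted above (illustration only).
voice_khz = 4
group_khz = 12 * voice_khz                  # 48 kHz, 12 voice channels
supergroup_khz = 5 * group_khz              # 240 kHz, 60 voice channels
master_khz = 10 * supergroup_khz            # 2400 kHz before guard bands
master_alloc_khz = 2520                     # 2.52 MHz actually allocated
jumbo_khz = 6 * master_alloc_khz            # 15,120 kHz before guard bands
jumbo_alloc_khz = 16984                     # 16.984 MHz actually allocated

print(group_khz, supergroup_khz, master_khz, jumbo_khz)   # 48 240 2400 15120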



Other Applications of FDM

A very common application of FDM is AM and FM radio broadcasting. Radio uses the

air as the transmission medium. A special band from 530 to 1700 kHz is assigned to AM

radio. All radio stations need to share this band. As discussed in Chapter 5, each AM station

needs 10 kHz of bandwidth. Each station uses a different carrier frequency, which

means it is shifting its signal and multiplexing. The signal that goes to the air is a combination

of signals. A receiver receives all these signals, but filters (by tuning) only the one

which is desired. Without multiplexing, only one AM station could broadcast to the common

link, the air. However, we need to know that there is no physical multiplexer or demultiplexer
here. As we will see in Chapter 12, multiplexing is done at the data link layer.

The situation is similar in FM broadcasting. However, FM has a wider band of 88

to 108 MHz because each station needs a bandwidth of 200 kHz.

Another common use of FDM is in television broadcasting. Each TV channel has

its own bandwidth of 6 MHz.

The first generation of cellular telephones (still in operation) also uses FDM. Each

user is assigned two 30-kHz channels, one for sending voice and the other for receiving.

The voice signal, which has a bandwidth of 3 kHz (from 300 to 3300 Hz), is modulated by

using FM. Remember that an FM signal has a bandwidth 10 times that of the modulating

signal, which means each channel has 30 kHz (10 x 3) of bandwidth. Therefore, each user

is given, by the base station, a 60-kHz bandwidth in a range available at the time of the call.

Example 6.4

The Advanced Mobile Phone System (AMPS) uses two bands. The first band of 824 to 849 MHz

is used for sending, and 869 to 894 MHz is used for receiving. Each user has a bandwidth of

30 kHz in each direction. The 3-kHz voice is modulated using FM, creating 30 kHz of modulated

signal. How many people can use their cellular phones simultaneously?

Solution

Each band is 25 MHz. If we divide 25 MHz by 30 kHz, we get 833.33. In reality, the band is divided

into 832 channels. Of these, 42 channels are used for control, which means only 790 channels are

available for cellular phone users. We discuss AMPS in greater detail in Chapter 16.

Implementation

FDM can be implemented very easily. In many cases, such as radio and television

broadcasting, there is no need for a physical multiplexer or demultiplexer. As long as

the stations agree to send their broadcasts to the air using different carrier frequencies,

multiplexing is achieved. In other cases, such as the cellular telephone system, a base

station needs to assign a carrier frequency to the telephone user. There is not enough

bandwidth in a cell to permanently assign a bandwidth range to every telephone user.

When a user hangs up, her or his bandwidth is assigned to another caller.

Wavelength-Division Multiplexing

Wavelength-division multiplexing (WDM) is designed to use the high-data-rate

capability of fiber-optic cable. The optical fiber data rate is higher than the data rate of

metallic transmission cable. Using a fiber-optic cable for one single line wastes the

available bandwidth. Multiplexing allows us to combine several lines into one.



WDM is conceptually the same as FDM, except that the multiplexing and demultiplexing

involve optical signals transmitted through fiber-optic channels. The idea is the

same: We are combining different signals of different frequencies. The difference is

that the frequencies are very high.

Figure 6.10 gives a conceptual view of a WDM multiplexer and demultiplexer.

Very narrow bands of light from different sources are combined to make a wider band

of light. At the receiver, the signals are separated by the demultiplexer.

Figure 6.10

Wavelength-division multiplexing

[Light at wavelengths λ1, λ2, and λ3 is combined into a single beam (λ1 + λ2 + λ3) by the multiplexer and separated again by the demultiplexer]

WDM is an analog multiplexing technique to combine optical signals.

Although WDM technology is very complex, the basic idea is very simple. We

want to combine multiple light sources into one single light at the multiplexer and do

the reverse at the demultiplexer. The combining and splitting of light sources are easily

handled by a prism. Recall from basic physics that a prism bends a beam of light based

on the angle of incidence and the frequency. Using this technique, a multiplexer can be

made to combine several input beams of light, each containing a narrow band of frequencies,

into one output beam of a wider band of frequencies. A demultiplexer can

also be made to reverse the process. Figure 6.11 shows the concept.

Figure 6.11

Prisms in wavelength-division multiplexing and demultiplexing

[A prism at the multiplexer combines the beams onto the fiber-optic cable; a prism at the demultiplexer separates them]

One application of WDM is the SONET network in which multiple optical

fiber lines are multiplexed and demultiplexed. We discuss SONET in Chapter 17.

A new method, called dense WDM (DWDM), can multiplex a very large number

of channels by spacing channels very close to one another. It achieves even greater

efficiency.



Synchronous Time-Division Multiplexing

Time-division multiplexing (TDM) is a digital process that allows several connections

to share the high bandwidth of a link. Instead of sharing a portion of the bandwidth as in

FDM, time is shared. Each connection occupies a portion of time in the link. Figure 6.12

gives a conceptual view of TDM. Note that the same link is used as in FDM; here, however,

the link is shown sectioned by time rather than by frequency. In the figure, portions

of signals 1, 2, 3, and 4 occupy the link sequentially.

Figure 6.12

TDM

[Four input lines feed the MUX; units from signals 1, 2, 3, and 4 occupy the link one after another in the data flow and are delivered by the DEMUX to the corresponding output lines]

Note that in Figure 6.12 we are concerned with only multiplexing, not switching.

This means that all the data in a message from source 1 always go to one specific destination,

be it 1, 2, 3, or 4. The delivery is fixed and unvarying, unlike switching.

We also need to remember that TDM is, in principle, a digital multiplexing technique.

Digital data from different sources are combined into one timeshared link. However, this

does not mean that the sources cannot produce analog data; analog data can be sampled,

changed to digital data, and then multiplexed by using TDM.

TDM is a digital multiplexing technique for combining

several low-rate channels into one high-rate one.

We can divide TDM into two different schemes: synchronous and statistical. We first

discuss synchronous TDM and then show how statistical TDM differs. In synchronous

TDM, each input connection has an allotment in the output even if it is not sending data.

Time Slots and Frames

In synchronous TDM, the data flow of each input connection is divided into units, where

each input occupies one input time slot. A unit can be 1 bit, one character, or one block of

data. Each input unit becomes one output unit and occupies one output time slot. However,

the duration of an output time slot is n times shorter than the duration of an input

time slot. If an input time slot is T s, the output time slot is T/n s, where n is the number

of connections. In other words, a unit in the output connection has a shorter duration; it

travels faster. Figure 6.13 shows an example of synchronous TDM where n is 3.



Figure 6.13

Synchronous time-division multiplexing

[Three input lines supply units A1-A3, B1-B3, and C1-C3, one unit every T s; each output frame holds 3 time slots, each of duration T/3 s]

In synchronous TDM, a round of data units from each input connection is collected

into a frame (we will see the reason for this shortly). If we have n connections, a frame

is divided into n time slots and one slot is allocated for each unit, one for each input

line. If the duration of the input unit is T, the duration of each slot is T/n, and the duration

of each frame is T (unless a frame carries some other information, as we will see

shortly).

The data rate of the output link must be n times the data rate of a connection to

guarantee the flow of data. In Figure 6.13, the data rate of the link is 3 times the data

rate of a connection; likewise, the duration of a unit on a connection is 3 times that of

the time slot (duration of a unit on the link). In the figure we represent the data prior to

multiplexing as 3 times the size of the data after multiplexing. This is just to convey the

idea that each unit is 3 times longer in duration before multiplexing than after.

In synchronous TDM, the data rate of the link is n times faster,

and the unit duration is n times shorter.

Time slots are grouped into frames. A frame consists of one complete cycle of

time slots, with one slot dedicated to each sending device. In a system with n input

lines, each frame has n slots, with each slot allocated to carrying data from a specific

input line.

Example 6.5

In Figure 6.13, the data rate of the link is 3 kbps. If 1 bit at a time is multiplexed (a

unit is 1 bit), what is the duration of (a) each input slot, (b) each output slot, and (c) each frame?

Solution

We can answer the questions as follows:

a. The data rate of each input connection is 1 kbps. This means that the bit duration is 1/1000 s

or 1 ms. The duration of the input time slot is 1 ms (same as bit duration).

b. The duration of each output time slot is one-third of the input time slot. This means that the

duration of the output time slot is 1/3 ms.

c. Each frame carries three output time slots. So the duration of a frame is 3 x 1/3 ms, or 1 ms.

The duration of a frame is the same as the duration of an input unit.



Example 6.6

Figure 6.14 shows synchronous TDM with a data stream for each input and one data stream for

the output. The unit of data is 1 bit. Find (a) the input bit duration, (b) the output bit duration,

(c) the output bit rate, and (d) the output frame rate.

Figure 6.14 Example 6.6

[Four 1-Mbps input bit streams are interleaved 1 bit at a time into 4-bit output frames]

Solution

We can answer the questions as follows:

a. The input bit duration is the inverse of the bit rate: 1/1 Mbps = 1 µs.

b. The output bit duration is one-fourth of the input bit duration, or 1/4 µs.

c. The output bit rate is the inverse of the output bit duration (1/4 µs), or 4 Mbps. This can also
be deduced from the fact that the output rate is 4 times as fast as any input rate; so the output
rate = 4 x 1 Mbps = 4 Mbps.

d. The frame rate is always the same as any input rate. So the frame rate is 1,000,000 frames per

second. Because we are sending 4 bits in each frame, we can verify the result of the previous

question by multiplying the frame rate by the number of bits per frame.

Example 6.7

Four 1-kbps connections are multiplexed together. A unit is 1 bit. Find (a) the duration of 1 bit

before multiplexing, (b) the transmission rate of the link, (c) the duration of a time slot, and

(d) the duration of a frame.

Solution

We can answer the questions as follows:

a. The duration of 1 bit before multiplexing is 1/1 kbps, or 0.001 s (1 ms).

b. The rate of the link is 4 times the rate of a connection, or 4 kbps.

c. The duration of each time slot is one-fourth of the duration of each bit before multiplexing,
or 1/4 ms or 250 µs. Note that we can also calculate this from the data rate of the link, 4 kbps.
The bit duration is the inverse of the data rate, or 1/(4 kbps), or 250 µs.

d. The duration of a frame is always the same as the duration of a unit before multiplexing, or
1 ms. We can also calculate this in another way. Each frame in this case has four time slots.
So the duration of a frame is 4 times 250 µs, or 1 ms.
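The relationships used in Examples 6.5 through 6.7 can be collected into one small calculator; the function below is a sketch of ours, not code from the text, and assumes frames carry no extra framing bits.

# Synchronous TDM without framing bits: link rate, slot duration, frame duration.
def sync_tdm(n_lines, input_rate_bps, unit_bits=1):
    link_rate = n_lines * input_rate_bps           # output link is n times faster
    unit_duration = unit_bits / input_rate_bps     # duration of one unit before muxing
    slot_duration = unit_duration / n_lines        # each output slot is n times shorter
    frame_duration = unit_duration                 # one unit per line per frame
    return link_rate, slot_duration, frame_duration

link, slot, frame = sync_tdm(n_lines=4, input_rate_bps=1000)   # Example 6.7
print(link, "bps link;", slot * 1e6, "us slot;", frame * 1e3, "ms frame")
# 4000 bps link; 250.0 us slot; 1.0 ms frame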

Interleaving

TDM can be visualized as two fast-rotating switches, one on the multiplexing side and

the other on the demultiplexing side. The switches are synchronized and rotate at the

same speed, but in opposite directions. On the multiplexing side, as the switch opens



in front of a connection, that connection has the opportunity to send a unit onto the

path. This process is called interleaving. On the demultiplexing side, as the switch

opens in front of a connection, that connection has the opportunity to receive a unit

from the path.

Figure 6.15 shows the interleaving process for the connection shown in Figure 6.13.

In this figure, we assume that no switching is involved and that the data from the first

connection at the multiplexer site go to the first connection at the demultiplexer. We

discuss switching in Chapter 8.

Figure 6.15

Interleaving

[Units A3 A2 A1, B3 B2 B1, and C3 C2 C1 wait on the input lines; synchronized rotating switches at the multiplexer and demultiplexer interleave them onto the path and deliver each to its matching output line]

Example 6.8

Four channels are multiplexed using TDM. If each channel sends 100 bytes/s and we multiplex

1 byte per channel, show the frame traveling on the link, the size of the frame, the duration of a

frame, the frame rate, and the bit rate for the link.

Solution

The multiplexer is shown in Figure 6.16. Each frame carries 1 byte from each channel; the size of

each frame, therefore, is 4 bytes, or 32 bits. Because each channel is sending 100 bytes/s and a

frame carries 1 byte from each channel, the frame rate must be 100 frames per second. The duration

of a frame is therefore 1/100 s. The link is carrying 100 frames per second, and since each

frame contains 32 bits, the bit rate is 100 x 32, or 3200 bps. This is actually 4 times the bit rate of

each channel, which is 100 x 8 = 800 bps.

Figure 6.16 Example 6.8

[Each 4-byte (32-bit) frame carries 1 byte from each of the four 100-bytes/s channels; the link carries 100 frames/s at 3200 bps, and the frame duration is 1/100 s]



Example 6.9

A multiplexer combines four 100-kbps channels using a time slot of 2 bits. Show the output with

four arbitrary inputs. What is the frame rate? What is the frame duration? What is the bit rate?

What is the bit duration?

Solution

Figure 6.17 shows the output for four arbitrary inputs. The link carries 50,000 frames per second

since each frame contains 2 bits per channel. The frame duration is therefore 1/50,000 s, or 20 µs.
The frame rate is 50,000 frames per second, and each frame carries 8 bits; the bit rate is 50,000 x
8 = 400,000 bps, or 400 kbps. The bit duration is 1/400,000 s, or 2.5 µs. Note that the frame duration
is 8 times the bit duration because each frame is carrying 8 bits.

Figure 6.17 Example 6.9

[Four 100-kbps inputs supply arbitrary bit streams; each 8-bit frame carries 2 bits per channel; the link carries 50,000 frames/s at 400 kbps, and the frame duration is 20 µs]

Empty Slots

Synchronous TDM is not as efficient as it could be. If a source does not have data to

send, the corresponding slot in the output frame is empty. Figure 6.18 shows a case in

which one of the input lines has no data to send and one slot in another input line has

discontinuous data.

Figure 6.18

Empty slots

[Some output frames contain empty slots because the corresponding input line has no data to send]

The first output frame has three slots filled, the second frame has two slots filled,

and the third frame has three slots filled. No frame is full. We learn in the next section

that statistical TDM can improve the efficiency by removing the empty slots from the

frame.

Data Rate Management

One problem with TDM is how to handle a disparity in the input data rates. In all our

discussion so far, we assumed that the data rates of all input lines were the same. However,



if data rates are not the same, three strategies, or a combination of them, can be used.

We call these three strategies multilevel multiplexing, multiple-slot allocation, and

pulse stuffing.

Multilevel Multiplexing Multilevel multiplexing is a technique used when the data

rate of an input line is a multiple of others. For example, in Figure 6.19, we have two

inputs of 20 kbps and three inputs of 40 kbps. The first two input lines can be multiplexed

together to provide a data rate equal to the last three. A second level of multiplexing

can create an output of 160 kbps.

Figure 6.19

Multilevel multiplexing

[Two 20-kbps inputs are first multiplexed into a 40-kbps stream; that stream and three 40-kbps inputs are then multiplexed into a 160-kbps output]

Multiple-Slot Allocation Sometimes it is more efficient to allot more than one slot in

a frame to a single input line. For example, we might have an input line that has a data

rate that is a multiple of another input. In Figure 6.20, the input line with a 50-kbps data

rate can be given two slots in the output. We insert a serial-to-parallel converter in the

line to make two inputs out of one.

Figure 6.20

Multiple-slot multiplexing

[A 50-kbps input is split by a serial-to-parallel converter and receives two slots in each output frame; three 25-kbps inputs receive one slot each]

Pulse Stuffing Sometimes the bit rates of sources are not multiple integers of each

other. Therefore, neither of the above two techniques can be applied. One solution is to

make the highest input data rate the dominant data rate and then add dummy bits to the

input lines with lower rates. This will increase their rates. This technique is called pulse

stuffing, bit padding, or bit stuffing. The idea is shown in Figure 6.21. The input with a

data rate of 46 kbps is pulse-stuffed to increase the rate to 50 kbps. Now multiplexing can

take place.



Figure 6.21

Pulse stuffing

[Two 50-kbps inputs and one 46-kbps input, pulse-stuffed up to 50 kbps, are multiplexed onto a 150-kbps link]

Frame Synchronizing

The implementation of TDM is not as simple as that of FDM. Synchronization between

the multiplexer and demultiplexer is a major issue. If the multiplexer and the demultiplexer

are not synchronized, a bit belonging to one channel may be received by the

wrong channel. For this reason, one or more synchronization bits are usually added to

the beginning of each frame. These bits, called framing bits, follow a pattern, frame to

frame, that allows the demultiplexer to synchronize with the incoming stream so that it

can separate the time slots accurately. In most cases, this synchronization information

consists of 1 bit per frame, alternating between 0 and 1, as shown in Figure 6.22.

Figure 6.22

Framing bits

[Frames 3, 2, and 1 each carry slots C, B, and A preceded by one framing bit; the synchronization pattern alternates 1, 0, 1 from frame to frame]

Example 6.10

We have four sources, each creating 250 characters per second. If the interleaved unit is a character

and 1 synchronizing bit is added to each frame, find (a) the data rate of each source, (b) the duration

of each character in each source, (c) the frame rate, (d) the duration of each frame, (e) the number of

bits in each frame, and (f) the data rate of the link.

Solution

We can answer the questions as follows:

a. The data rate of each source is 250 x 8 = 2000 bps = 2 kbps.

b. Each source sends 250 characters per second; therefore, the duration of a character is 1/250 s,
or 4 ms.

c. Each frame has one character from each source, which means the link needs to send
250 frames per second to keep the transmission rate of each source.

d. The duration of each frame is 1/250 s, or 4 ms. Note that the duration of each frame is the
same as the duration of each character coming from each source.

e. Each frame carries 4 characters and 1 extra synchronizing bit. This means that each frame is
4 x 8 + 1 = 33 bits.



f. The link sends 250 frames per second, and each frame contains 33 bits. This means that the

data rate of the link is 250 x 33, or 8250 bps. Note that the bit rate of the link is greater than

the combined bit rates of the four channels. If we add the bit rates of four channels, we get

8000 bps. Because 250 frames are traveling per second and each contains 1 extra bit for

synchronizing, we need to add 250 to the sum to get 8250 bps.
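The framing-bit overhead in this example is easy to reproduce; the few lines below are an illustrative restatement of the arithmetic, not code from the text.

# Example 6.10 arithmetic: 4 sources, 250 characters/s, 8 bits/character, 1 framing bit.
sources = 4
chars_per_second = 250
bits_per_char = 8
sync_bits_per_frame = 1

frame_rate = chars_per_second                                     # one character per source per frame
bits_per_frame = sources * bits_per_char + sync_bits_per_frame    # 33 bits
link_rate = frame_rate * bits_per_frame                           # 8250 bps

print(bits_per_frame, "bits/frame,", link_rate, "bps")   # 33 bits/frame, 8250 bps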

Example 6.11

Two channels, one with a bit rate of 100 kbps and another with a bit rate of 200 kbps, are to be

multiplexed. How can this be achieved? What is the frame rate? What is the frame duration?

What is the bit rate of the link?

Solution

We can allocate one slot to the first channel and two slots to the second channel. Each frame carries

3 bits. The frame rate is 100,000 frames per second because it carries 1 bit from the first

channel. The frame duration is 1/100,000 s, or 10 µs. The bit rate is 100,000 frames/s x 3 bits per

frame, or 300 kbps. Note that because each frame carries 1 bit from the first channel, the bit rate

for the first channel is preserved. The bit rate for the second channel is also preserved because

each frame carries 2 bits from the second channel.

Digital Signal Service

Telephone companies implement TDM through a hierarchy of digital signals, called

digital signal (DS) service or digital hierarchy. Figure 6.23 shows the data rates supported

by each level.

Figure 6.23

Digital hierarchy

[24 DS-0 channels are TDM-multiplexed into a 1.544-Mbps DS-1; 4 DS-1s into a 6.312-Mbps DS-2; DS-2s into a DS-3; and 6 DS-3s into a 274.176-Mbps DS-4]

o A DS-0 service is a single digital channel of 64 kbps.

o DS-1 is a 1.544-Mbps service; 1.544 Mbps is 24 times 64 kbps plus 8 kbps of
overhead. It can be used as a single service for 1.544-Mbps transmissions, or it can
be used to multiplex 24 DS-0 channels or to carry any other combination desired
by the user that can fit within its 1.544-Mbps capacity.

o DS-2 is a 6.312-Mbps service; 6.312 Mbps is 96 times 64 kbps plus 168 kbps of
overhead. It can be used as a single service for 6.312-Mbps transmissions; or it can
be used to multiplex 4 DS-1 channels, 96 DS-0 channels, or a combination of these
service types.

o DS-3 is a 44.736-Mbps service; 44.736 Mbps is 672 times 64 kbps plus 1.728 Mbps
of overhead. It can be used as a single service for 44.736-Mbps transmissions; or it
can be used to multiplex 7 DS-2 channels, 28 DS-1 channels, 672 DS-0 channels,
or a combination of these service types.

o DS-4 is a 274.176-Mbps service; 274.176 Mbps is 4032 times 64 kbps plus 16.128 Mbps of
overhead. It can be used to multiplex 6 DS-3 channels, 42 DS-2 channels, 168 DS-1
channels, 4032 DS-0 channels, or a combination of these service types.

T Lines

DS-0, DS-1, and so on are the names of services. To implement those services, the telephone
companies use T lines (T-1 to T-4). These are lines with capacities precisely
matched to the data rates of the DS-1 to DS-4 services (see Table 6.1). So far only T-1
and T-3 lines are commercially available.

Table 6.1

DS and T line rates

Service    Line    Rate (Mbps)    Voice Channels
DS-1       T-1     1.544          24
DS-2       T-2     6.312          96
DS-3       T-3     44.736         672
DS-4       T-4     274.176        4032

The T-1 line is used to implement DS-1; T-2 is used to implement DS-2; and so on.
As you can see from Table 6.1, DS-0 is not actually offered as a service, but it has been
defined as a basis for reference purposes.

T Lines for Analog Transmission

T lines are digital lines designed for the transmission of digital data, audio, or video.

However, they also can be used for analog transmission (regular telephone connections),

provided the analog signals are first sampled, then time-division multiplexed.

The possibility of using T lines as analog carriers opened up a new generation of

services for the telephone companies. Earlier, when an organization wanted 24 separate

telephone lines, it needed to run 24 twisted-pair cables from the company to the central

exchange. (Remember those old movies showing a busy executive with 10 telephones

lined up on his desk? Or the old office telephones with a big fat cable running from

them? Those cables contained a bundle of separate lines.) Today, that same organization

can combine the 24 lines into one T-1 line and run only the T-1 line to the exchange.
Figure 6.24 shows how 24 voice channels can be multiplexed onto one T-1 line. (Refer

to Chapter 5 for PCM encoding.)

The T-1 Frame As noted above, DS-1 requires 8 kbps of overhead. To understand how

this overhead is calculated, we must examine the format of a 24-voice-channel frame.

The frame used on a T-1 line is usually 193 bits divided into 24 slots of 8 bits each

plus 1 extra bit for synchronization (24 x 8 + 1 = 193); see Figure 6.25. In other words,



Figure 6.24

T-1 line for multiplexing telephone lines

[24 telephone lines are each sampled at 8000 samples/s with 8 bits per sample and multiplexed onto a 1.544-Mbps T-1 line: 24 x 64 kbps + 8 kbps of overhead]

Figure 6.25

T-1 frame structure

[Each 193-bit frame holds 1 framing bit followed by 24 channel slots of 8 bits each; T-1: 8000 frames/s x 193 bits = 1.544 Mbps]

each slot contains one signal segment from each channel; 24 segments are interleaved
in one frame. If a T-1 line carries 8000 frames per second, the data rate is 1.544 Mbps (193 x 8000 bps =
1.544 Mbps), which is the capacity of the line.
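The T-1 numbers follow directly from the frame format; the short sketch below reproduces that arithmetic (for illustration only).

# T-1 rate from the frame format: 24 slots of 8 bits plus 1 framing bit, 8000 frames/s.
voice_channels = 24
bits_per_slot = 8
framing_bits = 1
frames_per_second = 8000                       # one frame per PCM sampling interval

frame_bits = voice_channels * bits_per_slot + framing_bits   # 193 bits
line_rate = frame_bits * frames_per_second                   # 1,544,000 bps
overhead = line_rate - voice_channels * 64_000               # 8000 bps of overhead

print(frame_bits, line_rate, overhead)   # 193 1544000 8000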

E Lines

Europeans use a version of T lines called E lines. The two systems are conceptually identical,

but their capacities differ. Table 6.2 shows the E lines and their capacities.



Table 6.2

E line rates

Line    Rate (Mbps)    Voice Channels
E-1     2.048          30
E-2     8.448          120
E-3     34.368         480
E-4     139.264        1920

More Synchronous TDM Applications

Some second-generation cellular telephone companies use synchronous TDM. For

example, the digital version of cellular telephony divides the available bandwidth into

30-kHz bands. For each band, TDM is applied so that six users can share the band. This
means that each 30-kHz band is now made of six time slots, and the digitized voice signals

of the users are inserted in the slots. Using TDM, the number of telephone users in

each area is now 6 times greater. We discuss second-generation cellular telephony in

Chapter 16.

Statistical Time-Division Multiplexing

As we saw in the previous section, in synchronous TDM, each input has a reserved slot

in the output frame. This can be inefficient if some input lines have no data to send. In

statistical time-division multiplexing, slots are dynamically allocated to improve bandwidth

efficiency. Only when an input line has a slot's worth of data to send is it given a

slot in the output frame. In statistical multiplexing, the number of slots in each frame is

less than the number of input lines. The multiplexer checks each input line in round-robin

fashion; it allocates a slot for an input line if the line has data to send; otherwise,

it skips the line and checks the next line.

Figure 6.26 shows a synchronous and a statistical TDM example. In the former,

some slots are empty because the corresponding line does not have data to send. In

the latter, however, no slot is left empty as long as there are data to be sent by any

input line.

Addressing

Figure 6.26 also shows a major difference between slots in synchronous TDM and

statistical TDM. An output slot in synchronous TDM is totally occupied by data; in

statistical TDM, a slot needs to carry data as well as the address of the destination.

In synchronous TDM, there is no need for addressing; synchronization and preassigned

relationships between the inputs and outputs serve as an address. We know, for example,

that input 1 always goes to output 2. If the multiplexer and the demultiplexer are

synchronized, this is guaranteed. In statistical multiplexing, there is no fixed relationship

between the inputs and outputs because there are no preassigned or reserved

slots. We need to include the address of the receiver inside each slot to show where it

is to be delivered. The addressing in its simplest form can be n bits to define N different

output lines with n = log2 N. For example, for eight different output lines, we need a

3-bit address.
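The address size grows only logarithmically with the number of output lines; the tiny helper below (illustrative only) rounds up when the number of lines is not a power of 2.

# n = log2(N) address bits for N output lines, rounded up for non-powers of 2.
from math import ceil, log2

def address_bits(n_output_lines):
    return ceil(log2(n_output_lines))

print(address_bits(8))     # 3 bits, as in the text
print(address_bits(100))   # 7 bits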



Figure 6.26

TDM slot comparison

[Lines A through E feed the multiplexer. a. Synchronous TDM: every line has a slot in each frame, so some slots are empty. b. Statistical TDM: only lines with data get a slot, and each slot carries an address along with the data]

Slot Size

Since a slot carries both data and an address in statistical TDM, the ratio of the data size

to address size must be reasonable to make transmission efficient. For example, it

would be inefficient to send 1 bit per slot as data when the address is 3 bits. This would

mean an overhead of 300 percent. In statistical TDM, a block of data is usually many

bytes while the address is just a few bytes.

No Synchronization Bit

There is another difference between synchronous and statistical TDM, but this time it is

at the frame level. The frames in statistical TDM need not be synchronized, so we do not

need synchronization bits.

Bandwidth

In statistical TDM, the capacity of the link is normally less than the sum of the capacities

of each channel. The designers of statistical TDM define the capacity of the link

based on the statistics of the load for each channel. If on average only x percent of the

input slots are filled, the capacity of the link reflects this. Of course, during peak times,

some slots need to wait.

6.2 SPREAD SPECTRUM

Multiplexing combines signals from several sources to achieve bandwidth efficiency; the

available bandwidth of a link is divided between the sources. In spread spectrum (SS), we

also combine signals from different sources to fit into a larger bandwidth, but our goals



are somewhat different. Spread spectrum is designed to be used in wireless applications

(LANs and WANs). In these types of applications, we have some concerns that outweigh

bandwidth efficiency. In wireless applications, all stations use air (or a vacuum) as the

medium for communication. Stations must be able to share this medium without interception

by an eavesdropper and without being subject to jamming from a malicious intruder

(in military operations, for example).

To achieve these goals, spread spectrum techniques add redundancy; they spread

the original spectrum needed for each station. If the required bandwidth for each station
is B, spread spectrum expands it to B_SS, such that B_SS >> B. The expanded bandwidth

allows the source to wrap its message in a protective envelope for a more secure transmission.

An analogy is the sending of a delicate, expensive gift. We can insert the gift in

a special box to prevent it from being damaged during transportation, and we can use a

superior delivery service to guarantee the safety of the package.

Figure 6.27 shows the idea of spread spectrum. Spread spectrum achieves its goals

through two principles:

1. The bandwidth allocated to each station needs to be, by far, larger than what is

needed. This allows redundancy.

2. The expanding of the original bandwidth B to the bandwidth B_SS must be done by a

process that is independent of the original signal. In other words, the spreading

process occurs after the signal is created by the source.

Figure 6.27

Spread spectrum

[A signal of bandwidth B enters the spreading process, which is driven by a spreading code and produces a signal of bandwidth B_SS]

After the signal is created by the source, the spreading process uses a spreading

code and spreads the bandwidth. The figure shows the original bandwidth B and the

spread bandwidth B_SS. The spreading code is a series of numbers that look random,

but are actually a pattern.

There are two techniques to spread the bandwidth: frequency hopping spread spectrum

(FHSS) and direct sequence spread spectrum (DSSS).

Frequency Hopping Spread Spectrum (FHSS)

The frequency hopping spread spectrum (FHSS) technique uses M different carrier

frequencies that are modulated by the source signal. At one moment, the signal modulates

one carrier frequency; at the next moment, the signal modulates another carrier



frequency. Although the modulation is done using one carrier frequency at a time,

M frequencies are used in the long run. The bandwidth occupied by a source after

spreading is B_FHSS >> B.

Figure 6.28 shows the general layout for FHSS. A pseudorandom code generator,

called pseudorandom noise (PN), creates a k-bit pattern for every hopping period T_h.

The frequency table uses the pattern to find the frequency to be used for this hopping

period and passes it to the frequency synthesizer. The frequency synthesizer creates a

carrier signal of that frequency, and the source signal modulates the carrier signal.

Figure 6.28

Frequency hopping spread spectrum (FHSS)

[A pseudorandom noise (PN) code indexes a frequency table; the selected frequency drives a frequency synthesizer whose carrier is modulated by the original signal to produce the spread signal]

Suppose we have decided to have eight hopping frequencies. This is extremely low

for real applications and is just for illustration. In this case, M is 8 and k is 3. The pseudorandom

code generator will create eight different 3-bit patterns. These are mapped to

eight different frequencies in the frequency table (see Figure 6.29).

Figure 6.29

Frequency selection in FHSS

[The k-bit patterns 101, 111, 001, 000, 010, 110, 011, 100 are generated in turn; the frequency table maps 000 through 111 to 200, 300, 400, 500, 600, 700, 800, and 900 kHz, so the first selection, 101, picks the 700-kHz carrier]


SECTION 6.2 SPREAD SPECTRUM 183

The pattern for this station is 101, 111, 001, 000, 010, 110, 011, 100. Note that the pattern
is pseudorandom; it is repeated after eight hoppings. This means that at hopping
period 1, the pattern is 101. The frequency selected is 700 kHz; the source signal modulates
this carrier frequency. The second k-bit pattern selected is 111, which selects the
900-kHz carrier; the eighth pattern is 100, so the frequency is 600 kHz. After eight hoppings,

the pattern repeats, starting from 101 again. Figure 6.30 shows how the signal hops

around from carrier to carrier. We assume the required bandwidth of the original signal

is 100 kHz.
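The hop-by-hop selection just described can be sketched in a few lines of code. The mapping below reproduces the 3-bit frequency table and the eight-pattern sequence of this example, so the numbers come from the example itself; only the code structure is an illustrative sketch.

    # Sketch of FHSS carrier selection for the example of Figure 6.29:
    # k = 3, M = 8 hopping frequencies, one 3-bit PN pattern per hop period.

    FREQ_TABLE = {                      # 3-bit pattern -> carrier (kHz)
        "000": 200, "001": 300, "010": 400, "011": 500,
        "100": 600, "101": 700, "110": 800, "111": 900,
    }

    # The pseudorandom pattern for this station (repeats after 8 hops).
    PN_PATTERNS = ["101", "111", "001", "000", "010", "110", "011", "100"]

    def carrier_for_hop(hop_period: int) -> int:
        """Return the carrier frequency (kHz) used in hop period 1, 2, ..."""
        pattern = PN_PATTERNS[(hop_period - 1) % len(PN_PATTERNS)]
        return FREQ_TABLE[pattern]

    if __name__ == "__main__":
        for hop in range(1, 17):        # two full cycles, as in Figure 6.30
            print(f"hop {hop:2d}: {carrier_for_hop(hop)} kHz")

Running the sketch prints 700 kHz for hop 1, 900 kHz for hop 2, and 600 kHz for hop 8, matching the text, and the same sequence repeats for hops 9 through 16.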

Figure 6.30 FHSS cycles: the carrier frequency (200 to 900 kHz) hops to a new value in each of 16 hop periods, repeating the same pattern in cycle 1 and cycle 2

It can be shown that this scheme can accomplish the previously mentioned goals.

If there are many k-bit patterns and the hopping period is short, a sender and receiver

can have privacy. If an intruder tries to intercept the transmitted signal, she can only

access a small piece of data because she does not know the spreading sequence and therefore cannot quickly adapt to the next hop. The scheme also has an antijamming effect. A malicious sender may be able to send noise to jam the signal for one hopping period (chosen randomly), but not for the whole period.

Bandwidth Sharing

If the number of hopping frequencies is M, we can multiplex M channels into one by using the same Bss bandwidth. This is possible because a station uses just one frequency in each hopping period; the M - 1 other frequencies can be used by M - 1 other stations. In other words, M different stations can use the same Bss if an appropriate modulation technique such as multiple FSK (MFSK) is used. FHSS is similar to FDM, as shown in Figure 6.31.

Figure 6.31 shows an example of four channels using FDM and four channels using FHSS. In FDM, each station uses 1/M of the bandwidth, but the allocation is fixed; in FHSS, each station uses 1/M of the bandwidth, but the allocation changes from hop to hop.
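One way to picture this sharing is to give each of the M stations the same hop sequence but a different starting offset, so that no two stations ever occupy the same carrier in the same hop period. The sketch below is only an illustration of that idea under this assumed offset scheme (real systems assign hop sequences through their own protocols); it reuses the eight carrier frequencies of the earlier example.

    # Illustrative sketch: M stations share the same Bss by hopping through
    # the same M carriers with different starting offsets, so no two stations
    # use the same carrier in the same hop period.  The carriers reuse the
    # earlier eight-frequency example; the offset scheme is an assumption.

    CARRIERS_KHZ = [200, 300, 400, 500, 600, 700, 800, 900]
    M = len(CARRIERS_KHZ)

    def carrier(station: int, hop_period: int) -> int:
        """Carrier (kHz) used by `station` (0..M-1) during `hop_period` (0, 1, ...)."""
        return CARRIERS_KHZ[(hop_period + station) % M]

    if __name__ == "__main__":
        for hop in range(4):
            used = [carrier(s, hop) for s in range(M)]
            assert len(set(used)) == M   # all M carriers are distinct: no collision
            print(f"hop {hop}: " + ", ".join(f"S{s}->{f} kHz"
                                             for s, f in enumerate(used)))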



Figure 6.31 Bandwidth sharing: a. FDM, where each of the four stations keeps a fixed frequency band over time; b. FHSS, where the band assigned to each station changes from one hop period to the next

Direct Sequence Spread Spectrum

The direct sequence spread spectrum (DSSS) technique also expands the bandwidth of the original signal, but the process is different. In DSSS, we replace each data bit with n bits using a spreading code. In other words, each bit is assigned a code of n bits, called chips, where the chip rate is n times that of the data bit. Figure 6.32 shows the concept of DSSS.

Figure 6.32 DSSS: the original signal and the output of the chip generator are combined in a modulator to produce the spread signal

As an example, let us consider the sequence used in a wireless LAN, the famous

Barker sequence, where n is 11. We assume that the original signal and the chips in the

chip generator use polar NRZ encoding. Figure 6.33 shows the chips and the result of

multiplying the original data by the chips to get the spread signal.

In Figure 6.33, the spreading code is 11 chips having the pattern 10110111000 (in

this case). If the original signal rate is N, the rate of the spread signal is 11N. This

means that the required bandwidth for the spread signal is 11 times larger than the

bandwidth of the original signal. The spread signal can provide privacy if the intruder

does not know the code. It can also provide immunity against interference if each station

uses a different code.
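A small sketch can show the multiplication illustrated in Figure 6.33. It uses the 11-chip pattern 10110111000 given above, maps bits to polar NRZ levels (+1 and -1), and spreads each data bit by multiplying it with the whole chip sequence; despreading with the same code recovers the original bits. The data bits in the example are made up purely for the demonstration.

    # Sketch of DSSS spreading/despreading with the 11-chip code 10110111000
    # used in the text.  Bits are mapped to polar NRZ (+1 / -1).  The data
    # bits below are arbitrary, chosen only for the demonstration.

    BARKER_11 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]

    def to_nrz(bits):
        return [1 if b == 1 else -1 for b in bits]

    def spread(data_bits):
        """Each data bit is replaced by 11 chips: data level * chip level."""
        chips = to_nrz(BARKER_11)
        signal = []
        for level in to_nrz(data_bits):
            signal.extend(level * c for c in chips)
        return signal                      # chip rate is 11x the data rate

    def despread(signal):
        """Correlate each 11-chip block with the code to recover one data bit."""
        chips = to_nrz(BARKER_11)
        n = len(chips)
        bits = []
        for i in range(0, len(signal), n):
            corr = sum(s * c for s, c in zip(signal[i:i + n], chips))
            bits.append(1 if corr > 0 else 0)
        return bits

    if __name__ == "__main__":
        data = [1, 0, 1]                   # example data bits (assumed)
        spread_signal = spread(data)
        print("spread length:", len(spread_signal))        # 3 bits * 11 chips = 33
        print("recovered bits:", despread(spread_signal))  # -> [1, 0, 1]

The correlation in despread() is +11 or -11 for each block, so the sign of the result identifies the original bit; an intruder who does not know the chip pattern cannot perform this step.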



Figure 6.33 DSSS example: the original signal, the spreading code (chips), and the resulting spread signal

Bandwidth Sharing

Can we share a bandwidth in DSSS as we did in FHSS? The answer is no and yes. If we

use a spreading code that spreads signals (from different stations) that cannot be combined

and separated, we cannot share a bandwidth. For example, as we will see in Chapter 14,

some wireless LANs use DSSS and the spread bandwidth cannot be shared. However, if

we use a special type of sequence code that allows the combining and separating of spread

signals, we can share the bandwidth. As we will see in Chapter 16, a special spreading code

allows us to use DSSS in cellular telephony and share a bandwidth between several users.
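The "yes" case can be illustrated with orthogonal spreading codes of the kind used in CDMA cellular systems. The sketch below is only an illustration under assumed values: the two 4-chip Walsh-style codes, the chip length, and the station data are all invented for the demonstration. The point is that the summed (combined) signal on the shared channel can be separated again by correlating it with each station's own code.

    # Illustrative sketch of sharing a DSSS bandwidth with orthogonal codes:
    # two stations spread their bits with orthogonal 4-chip codes, the spread
    # signals are added on the shared channel, and each bit is recovered by
    # correlating the combined signal with the owner's code.  Codes and data
    # are assumptions for the demonstration.

    CODE_A = [+1, +1, +1, +1]      # Walsh-style orthogonal codes:
    CODE_B = [+1, -1, +1, -1]      # dot(CODE_A, CODE_B) == 0

    def spread(bits, code):
        levels = [+1 if b else -1 for b in bits]
        return [lvl * c for lvl in levels for c in code]

    def despread(combined, code):
        n = len(code)
        out = []
        for i in range(0, len(combined), n):
            corr = sum(s * c for s, c in zip(combined[i:i + n], code))
            out.append(1 if corr > 0 else 0)
        return out

    if __name__ == "__main__":
        bits_a, bits_b = [1, 0, 1, 1], [0, 0, 1, 0]          # assumed data
        combined = [a + b for a, b in zip(spread(bits_a, CODE_A),
                                          spread(bits_b, CODE_B))]
        print("station A recovers:", despread(combined, CODE_A))  # [1, 0, 1, 1]
        print("station B recovers:", despread(combined, CODE_B))  # [0, 0, 1, 0]

Because the two codes are orthogonal, the other station's signal contributes nothing to the correlation, which is what allows several users to share one spread bandwidth.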

6.3 RECOMMENDED READING

For more details about subjects discussed in this chapter, we recommend the following

books. The items in brackets [...] refer to the reference list at the end of the text.

Books

Multiplexing is elegantly discussed in Chapter 19 of [Pea92]. [Cou01] gives excellent coverage of TDM and FDM in Sections 3.9 to 3.11. More advanced materials can be found in [Ber96]. Multiplexing is also discussed in Chapter 8 of [Sta04]. Good coverage of spread spectrum can be found in Section 5.13 of [Cou01] and Chapter 9 of [Sta04].

6.4 KEY TERMS

analog hierarchy

Barker sequence

channel

chip

demultiplexer (DEMUX)

dense WDM (DWDM)

digital signal (DS) service

direct sequence spread spectrum (DSSS)

E line

framing bit

frequency hopping spread spectrum

(FHSS)



frequency-division multiplexing (FDM)

group

guard band

hopping period

interleaving

jumbo group

link

master group

multilevel multiplexing

multiple-slot multiplexing

multiplexer (MUX)

multiplexing

pseudorandom code generator

pseudorandom noise (PN)

pulse stuffing

spread spectrum (SS)

statistical TDM

supergroup

synchronous TDM

T line

time-division multiplexing (TDM)

wavelength-division multiplexing (WDM)

6.5 SUMMARY

o Bandwidth utilization is the use of available bandwidth to achieve specific goals.

Efficiency can be achieved by using multiplexing; privacy and antijamming can be

achieved by using spreading.

o Multiplexing is the set of techniques that allows the simultaneous transmission of

multiple signals across a single data link. In a multiplexed system, n lines share the

bandwidth of one link. The word link refers to the physical path. The word channel

refers to the portion of a link that carries a transmission.

o There are three basic multiplexing techniques: frequency-division multiplexing,

wavelength-division multiplexing, and time-division multiplexing. The first two are

techniques designed for analog signals; the third is designed for digital signals.

o Frequency-division multiplexing (FDM) is an analog technique that can be applied

when the bandwidth of a link (in hertz) is greater than the combined bandwidths of

the signals to be transmitted.

o Wavelength-division multiplexing (WDM) is designed to use the high bandwidth

capability of fiber-optic cable. WDM is an analog multiplexing technique to combine

optical signals.

o Time-division multiplexing (TDM) is a digital process that allows several connections

to share the high bandwidth of a link. TDM is a digital multiplexing technique

for combining several low-rate channels into one high-rate one.

o We can divide TDM into two different schemes: synchronous or statistical. In synchronous

TDM, each input connection has an allotment in the output even if it is

not sending data. In statistical TDM, slots are dynamically allocated to improve

bandwidth efficiency.

o In spread spectrum (SS), we combine signals from different sources to fit into a

larger bandwidth. Spread spectrum is designed to be used in wireless applications

in which stations must be able to share the medium without interception by an

eavesdropper and without being subject to jamming from a malicious intruder.

o The frequency hopping spread spectrum (FHSS) technique uses M different carrier

frequencies that are modulated by the source signal. At one moment, the signal



modulates one carrier frequency; at the next moment, the signal modulates another

carrier frequency.

o The direct sequence spread spectrum (DSSS) technique expands the bandwidth of

a signal by replacing each data bit with n bits using a spreading code. In other words,

each bit is assigned a code of n bits, called chips.

6.6 PRACTICE SET

Review Questions

1. Describe the goals of multiplexing.

2. List three main multiplexing techniques mentioned in this chapter.

3. Distinguish between a link and a channel in multiplexing.

4. Which of the three multiplexing techniques is (are) used to combine analog signals?

Which of the three multiplexing techniques is (are) used to combine digital signals?

5. Define the analog hierarchy used by telephone companies and list different levels

of the hierarchy.

6. Define the digital hierarchy used by telephone companies and list different levels

of the hierarchy.

7. Which of the three multiplexing techniques is common for fiber optic links?

Explain the reason.

8. Distinguish between multilevel TDM, multiple slot TDM, and pulse-stuffed TDM.

9. Distinguish between synchronous and statistical TDM.

10. Define spread spectrum and its goal. List the two spread spectrum techniques discussed

in this chapter.

11. Define FHSS and explain how it achieves bandwidth spreading.

12. Define DSSS and explain how it achieves bandwidth spreading.

Exercises

13. Assume that a voice channel occupies a bandwidth of 4 kHz. We need to multiplex

10 voice channels with guard bands of 500 Hz using FDM. Calculate the required

bandwidth.

14. We need to transmit 100 digitized voice channels using a pass-band channel of

20 KHz. What should be the ratio of bits/Hz if we use no guard band?

15. In the analog hierarchy of Figure 6.9, find the overhead (extra bandwidth for guard

band or control) in each hierarchy level (group, supergroup, master group, and

jumbo group).

16. We need to use synchronous TDM and combine 20 digital sources, each of 100 Kbps.

Each output slot carries 1 bit from each digital source, but one extra bit is added to

each frame for synchronization. Answer the following questions:

a. What is the size of an output frame in bits?

b. What is the output frame rate?



c. What is the duration of an output frame?

d. What is the output data rate?

e. What is the efficiency of the system (ratio of useful bits to total bits)?

17. Repeat Exercise 16 if each output slot carries 2 bits from each source.

18. We have 14 sources, each creating 500 8-bit characters per second. Since only some

of these sources are active at any moment, we use statistical TDM to combine these

sources using character interleaving. Each frame carries 6 slots at a time, but we need

to add four-bit addresses to each slot. Answer the following questions:

a. What is the size of an output frame in bits?

b. What is the output frame rate?

c. What is the duration of an output frame?

d. What is the output data rate?

19. Ten sources, six with a bit rate of 200 kbps and four with a bit rate of 400 kbps are

to be combined using multilevel TDM with no synchronizing bits. Answer the following

questions about the final stage of the multiplexing:

a. What is the size of a frame in bits?

b. What is the frame rate?

c. What is the duration of a frame?

d. What is the data rate?

20. Four channels, two with a bit rate of 200 kbps and two with a bit rate of 150 kbps, are

to be multiplexed using multiple slot TDM with no synchronization bits. Answer

the following questions:

a. What is the size of a frame in bits?

b. What is the frame rate?

c. What is the duration of a frame?

d. What is the data rate?

21. Two channels, one with a bit rate of 190 kbps and another with a bit rate of 180 kbps,

are to be multiplexed using pulse stuffing TDM with no synchronization bits. Answer

the following questions:

a. What is the size of a frame in bits?

b. What is the frame rate?

c. What is the duration of a frame?

d. What is the data rate?

22. Answer the following questions about a T-1 line:

a. What is the duration of a frame?

b. What is the overhead (number of extra bits per second)?

23. Show the contents of the five output frames for a synchronous TDM multiplexer

that combines four sources sending the following characters. Note that the characters

are sent in the same order that they are typed. The third source is silent.

a. Source 1 message: HELLO

b. Source 2 message: HI



c. Source 3 message:

d. Source 4 message: BYE

24. Figure 6.34 shows a multiplexer in a synchronous TDM system. Each output slot is

only 10 bits long (3 bits taken from each input plus 1 framing bit). What is the output

stream? The bits arrive at the multiplexer as shown by the arrows.

Figure 6.34 Exercise 24: three input streams arriving at the multiplexer (each output frame is 10 bits):
101110111101
11111110000
1010000001111

25. Figure 6.35 shows a demultiplexer in a synchronous TDM. If the input slot is 16 bits

long (no framing bits), what is the bit stream in each output? The bits arrive at the

demultiplexer as shown by the arrows.

Figure 6.35 Exercise 25

101000001110101010101000011101110000011110001

26. Answer the following questions about the digital hierarchy in Figure 6.23:

a. What is the overhead (number of extra bits) in the DS-1 service?

b. What is the overhead (number of extra bits) in the DS-2 service?

c. What is the overhead (number of extra bits) in the DS-3 service?

d. What is the overhead (number of extra bits) in the DS-4 service?

27. What is the minimum number of bits in a PN sequence if we use FHSS with a

channel bandwidth of B = 4 kHz and Bss = 100 kHz?

28. An FHSS system uses a 4-bit PN sequence. If the bit rate of the PN is 64 bits per

second, answer the following questions:

a. What is the total number of possible hops?

b. What is the time needed to finish a complete cycle of PN?



29. A pseudorandom number generator uses the following formula to create a random

series:

Ni+1 = (5 + 7Ni) mod 17 - 1

in which Ni defines the current random number and Ni+1 defines the next random number. The term mod means the value of the remainder when dividing (5 + 7Ni) by 17.
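For readers who want to check their work on this exercise, the short sketch below simply iterates the given formula. The starting value used here is an assumption for illustration (the exercise as printed above does not fix a seed); any other starting value works the same way.

    # Quick sketch for checking Exercise 29: iterate the given formula
    #     N(i+1) = (5 + 7 * N(i)) mod 17 - 1
    # The starting value is an assumption chosen only for illustration.

    def pn_series(seed: int, count: int):
        values = [seed]
        for _ in range(count - 1):
            values.append((5 + 7 * values[-1]) % 17 - 1)
        return values

    if __name__ == "__main__":
        print(pn_series(seed=11, count=20))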

30. We have a digital medium with a data rate of 10 Mbps. How many 64-kbps voice

channels can be carried by this medium if we use DSSS with the Barker sequence?


CHAPTER 7

Transmission Media

We discussed many issues related to the physical layer in Chapters 3 through 6. In this

chapter, we discuss transmission media. Transmission media are actually located below

the physical layer and are directly controlled by the physical layer. You could say that

transmission media belong to layer zero. Figure 7.1 shows the position of transmission

media in relation to the physical layer.

Figure 7.1 Transmission medium and physical layer: the transmission medium (cable or air) lies below the physical layers of the sender and the receiver

A transmission medium can be broadly defined as anything that can carry information

from a source to a destination. For example, the transmission medium for two

people having a dinner conversation is the air. The air can also be used to convey the

message in a smoke signal or semaphore. For a written message, the transmission

medium might be a mail carrier, a truck, or an airplane.

In data communications, the definition of the information and the transmission

medium is more specific. The transmission medium is usually free space, metallic cable,

or fiber-optic cable. The information is usually a signal that is the result of a conversion

of data from another form.

Long-distance communication using electric signals started with the

invention of the telegraph by Morse in the 19th century. Communication by telegraph

was slow and dependent on a metallic medium.

Extending the range of the human voice became possible when the telephone was

invented in 1876. Telephone communication at that time also needed a metallic medium

to carry the electric signals that were the result of a conversion from the human voice.




The communication was, however, unreliable due to the poor quality of the wires. The

lines were often noisy and the technology was unsophisticated.

Wireless communication started in the late 19th century, when Hertz showed that high-frequency electromagnetic signals could be sent through the air. Later, Marconi devised a method to send telegraph-type messages

over the Atlantic Ocean.

We have come a long way. Better metallic media have been invented (twisted-pair

and coaxial cables, for example). The use of optical fibers has increased the data

rate incredibly. Free space (air, vacuum, and water) is used more efficiently, in part

due to the technologies (such as modulation and multiplexing) discussed in the previous