DC<br />
COURSE<br />
FILE
Contents<br />
1. Cover Page<br />
2. Syllabus copy<br />
3. Vision of the Department<br />
4. Mission of the Department<br />
5. PEOs and POs<br />
6. Course objectives and outcomes<br />
7. Brief notes on the importance of the course and how it fits into the curriculum<br />
8. Prerequisites<br />
9. Instructional Learning Outcomes<br />
10. Course mapping with PEOs and POs<br />
11. Class Time Table<br />
12. Individual Time Table<br />
13. Micro Plan with dates and closure report<br />
14. Detailed notes<br />
15. Additional topics<br />
16. University Question papers of previous years<br />
17. Question Bank<br />
18. Assignment topics<br />
19. Unit wise Quiz Questions<br />
20. Tutorial problems<br />
21. Known gaps, if any<br />
22. Discussion topics<br />
23. References, Journals, websites and E-links<br />
24. Quality Control Sheets<br />
25. Student List<br />
26. Group-Wise students list for discussion topics
GEETHANJALI COLLEGE OF ENGINEERING AND TECHNOLOGY<br />
Department Of Electronics and Communication Engineering<br />
(Name of the Subject / Lab Course) : Digital Communications<br />
(JNTU CODE –A60420)<br />
Programme : UG<br />
Branch: ECE Version No : 03<br />
Year: III Year ECE<br />
Document Number: GCET/ECE/03<br />
Semester: II No. of pages :<br />
Classification status (Unrestricted / Restricted ) :unrestricted<br />
Distribution List :<br />
Prepared by :<br />
1) Name : Ms. M. Hemalatha 1) Name : Mrs. S. Krishna Priya<br />
2) Sign : 2) Sign :<br />
3) Design : Asst. Prof. 3) Design : Assoc.Prof<br />
4) Date :28/11/2015 4) Date: 28/11/2015<br />
Verified by : 1) Name: Mr.D.Venkata Rami Reddy<br />
2) Sign :<br />
3) Design :<br />
4) Date : 30/11/2015<br />
* For Q.C Only.<br />
1) Name :<br />
2) Sign :<br />
3) Design :<br />
4) Date :<br />
Approved by : (HOD ) 1) Name: Dr. P. Srihari<br />
2) Sign :<br />
3) Date:
2. Syllabus Copy<br />
Jawaharlal Nehru Technological University Hyderabad, Hyderabad<br />
DIGITAL COMMUNICATIONS<br />
Programme: B.Tech (ECE)<br />
Year & Sem: III B.Tech II Sem<br />
UNIT I : Elements Of Digital Communication Systems<br />
Advantages of digital communication systems, Bandwidth- S/N trade off, Hartley Shannon<br />
law, Sampling theorem<br />
Pulse Coded Modulation<br />
PCM generation and reconstruction , Quantization noise, Non-uniform Quantization and<br />
Companding, Differential PCM systems (DPCM), Adaptive DPCM, Delta modulation,<br />
adaptive delta modulation, Noise in PCM and DM systems<br />
UNIT II : Digital Modulation Techniques<br />
Introduction, ASK, ASK Modulator, Coherent ASK detector, Non-coherent ASK detector,<br />
FSK, Bandwidth and frequency spectrum of FSK, Non-coherent FSK detector, Coherent FSK<br />
detector, FSK detection using PLL, BPSK, Coherent PSK detection, QPSK, Differential<br />
PSK<br />
UNIT III: Base Band Transmission And Optimal Reception of Digital Signal<br />
Pulse shaping for optimum transmission, A base band signal receiver, Probability of error,<br />
Optimum receiver, Optimum coherent reception, Signal space representation and<br />
probability of error, Eye diagrams for ASK, FSK and PSK, Cross talk<br />
Information Theory<br />
Information and entropy, Conditional entropy and redundancy, Shannon-Fano coding, Mutual<br />
information, Information loss due to noise, Source coding – Huffman code, variable length<br />
coding, Source coding to increase average information per bit, Lossy source coding<br />
UNIT IV : Error Control Codes<br />
Matrix description of linear block codes, Error detection and error correction capabilities of<br />
linear block codes,<br />
Cyclic codes: Algebraic structure, encoding, Syndrome calculation, decoding
Convolution Codes:<br />
Encoding, decoding using state, Tree and trellis diagrams, Decoding using Viterbi algorithm,<br />
Comparison of error rates in coded and uncoded transmission.<br />
UNIT V : Spread Spectrum Modulation<br />
Use of spread spectrum, direct sequence spread spectrum(DSSS), Code division multiple<br />
access, Ranging using DSSS, Frequency Hopping spread spectrum, PN Sequences:<br />
generation and characteristics, Synchronization in spread spectrum system<br />
TEXT BOOKS:<br />
1. Principles of Communication Systems – Herbert Taub, Donald L. Schilling, Goutam<br />
Saha, 3rd Edition, McGraw-Hill, 2008<br />
2. Digital and Analog Communication Systems – K. Sam Shanmugam, John Wiley, 2005<br />
3. Digital Communications – John G. Proakis, Masoud Salehi, 5th Edition, McGraw-Hill,<br />
2008<br />
REFERENCES:<br />
1. Digital Communications – Simon Haykin, John Wiley, 2005<br />
2. Digital Communications – I. A. Glover, P. M. Grant, 2nd Edition, Pearson Education,<br />
2008<br />
3. Communication Systems – B. P. Lathi, BS Publications, 2006<br />
4. A First Course in Digital Communications – Nguyen, Shwedyk, Cambridge<br />
5. Digital Communication – Theory, Techniques, and Applications – R. N. Mutagi, 2nd<br />
Edition, 2013<br />
3.Vision of the Department<br />
To impart quality technical education in Electronics and Communication Engineering<br />
emphasizing analysis, design/synthesis and evaluation of hardware/embedded software using<br />
various Electronic Design Automation (EDA) tools with accent on creativity, innovation and<br />
research thereby producing competent engineers who can meet global challenges with<br />
societal commitment.<br />
4. Mission of the Department<br />
i. To impart quality education in fundamentals of basic sciences, mathematics, electronics<br />
and communication engineering through innovative teaching-learning processes.<br />
ii. To facilitate graduates to define, design, and solve engineering problems in the field of<br />
Electronics and Communication Engineering using various Electronic Design Automation<br />
(EDA) tools.<br />
iii. To encourage research culture among faculty and students thereby facilitating them to be<br />
creative and innovative through constant interaction with R & D organizations and<br />
Industry.<br />
iv. To inculcate teamwork, imbibe leadership qualities, professional ethics and social<br />
responsibilities in students and faculty.<br />
5. Program Educational Objectives and Program outcomes of B. Tech (ECE)<br />
Program<br />
Program Educational Objectives of B. Tech (ECE) Program :<br />
I. To prepare students with excellent comprehension of basic sciences, mathematics and<br />
engineering subjects facilitating them to gain employment or pursue postgraduate<br />
studies with an appreciation for lifelong learning.<br />
II. To train students with problem solving capabilities such as analysis and design with<br />
adequate practical skills wherein they demonstrate creativity and innovation that<br />
would enable them to develop state of the art equipment and technologies of<br />
multidisciplinary nature for societal development.
III. To inculcate positive attitude, professional ethics, effective communication and<br />
interpersonal skills which would facilitate them to succeed in the chosen profession,<br />
exhibiting creativity and innovation through research and development, both as a team<br />
member as well as a leader.<br />
Program Outcomes of B.Tech ECE Program:<br />
1. An ability to apply knowledge of Mathematics, Science, and Engineering to solve<br />
complex engineering problems of Electronics and Communication Engineering<br />
systems.<br />
2. An ability to model, simulate and design Electronics and Communication Engineering<br />
systems, conduct experiments, as well as analyze and interpret data and prepare a<br />
report with conclusions.<br />
3. An ability to design an Electronics and Communication Engineering system,<br />
component, or process to meet desired needs within the realistic constraints such as<br />
economic, environmental, social, political, ethical, health and safety,<br />
manufacturability and sustainability.<br />
4. An ability to function on multidisciplinary teams involving interpersonal skills.<br />
5. An ability to identify, formulate and solve engineering problems of multidisciplinary<br />
nature.<br />
6. An understanding of professional and ethical responsibilities involved in the practice<br />
of Electronics and Communication Engineering profession.<br />
7. An ability to communicate effectively with a range of audience on complex<br />
engineering problems of multidisciplinary nature both in oral and written form.<br />
8. The broad education necessary to understand the impact of engineering solutions in a<br />
global, economic, environmental and societal context.<br />
9. A recognition of the need for, and an ability to engage in life-long learning and<br />
acquire the capability for the same.<br />
10. A knowledge of contemporary issues involved in the practice of Electronics and<br />
Communication Engineering profession<br />
11. An ability to use the techniques, skills and modern engineering tools necessary for<br />
engineering practice.
12. An ability to use modern Electronic Design Automation (EDA) tools, software and<br />
electronic equipment to analyze, synthesize and evaluate Electronics and<br />
Communication Engineering systems for multidisciplinary tasks.<br />
13. Apply engineering and project management principles to one's own work and also to<br />
manage projects of multidisciplinary nature<br />
6. COURSE OBJECTIVES AND OUTCOMES<br />
Course Objectives<br />
Design digital communication systems, given constraints on data rate, bandwidth,<br />
power, fidelity, and complexity.<br />
Analyze the performance of a digital communication link when additive noise is<br />
present in terms of the signal-to-noise ratio and bit error rate.<br />
Compute the power and bandwidth requirements of modern communication systems,<br />
including those employing ASK, PSK, FSK, and QAM modulation formats.<br />
Design a scalar quantizer for a given source with a required fidelity and determine the<br />
resulting data rate.<br />
Determine the auto-correlation function of a line code and determine its power<br />
spectral density.<br />
Determine the power spectral density of band pass digital modulation formats.<br />
Course outcomes:<br />
Upon successful completion of this course, students will have the ability to:<br />
1. Analyze digital and analog signals with respect to various parameters like bandwidth,<br />
noise, etc.<br />
2. Demonstrate generation and reconstruction of different pulse code modulation<br />
schemes like PCM, DPCM, etc.<br />
3. Acquire knowledge of different pass band digital modulation techniques like<br />
ASK, PSK, etc.<br />
4. Calculate different parameters like power spectral density, probability of error, etc. of<br />
base band signals for optimum transmission.<br />
5. Analyze the concepts of information theory, Huffman coding, etc. to increase average<br />
information per bit.<br />
6. Generate and retrieve data using block codes and analyze their error detection and<br />
correction capabilities.<br />
7. Generate and decode data using convolution codes and compare error rates for coded<br />
and uncoded transmission.<br />
8. Be familiar with the different criteria in spread spectrum modulation schemes and<br />
their applications.<br />
7. Importance of the course and how it fits into the curriculum:<br />
7.1 Introduction to the subject<br />
7.2. Objectives of the subject<br />
1. Design digital communication systems, given constraints on data rate, bandwidth,<br />
power, fidelity, and complexity.<br />
2. Analyze the performance of a digital communication link when additive noise is<br />
present in terms of the signal-to-noise ratio and bit error rate.<br />
3. Compute the power and bandwidth requirements of modern communication systems,<br />
including those employing ASK, PSK, FSK, and QAM modulation formats.<br />
4. To provide the students a basic understanding of telecommunications.<br />
5. To develop technical expertise in various modulation techniques.<br />
6. Provide basic understanding of information theory and error correction codes.<br />
7.3. Outcomes of the subject<br />
• Ability to understand the functions of the various parts and analyze theoretically the<br />
performance of a modern communication system.<br />
• Ability to compare analog and digital communications in terms of noise, attenuation,<br />
and distortion.<br />
• Ability to recognize the concepts of digital baseband transmission, optimum reception<br />
analysis and band limited transmission.<br />
• Ability to characterize and analyze various pass band modulation techniques.<br />
• Ability to explain the basic concepts of error detection/correction coding and<br />
perform error analysis.<br />
8. PREREQUISITES:<br />
Engineering Mathematics<br />
Basic Electronics<br />
Signals and systems<br />
Analog Communications<br />
9. Instructional learning outcomes:<br />
Subject: Digital Communications<br />
UNIT 1: Elements of Digital Communication Systems<br />
DC 1: Analyze the elements of a digital communication system, the importance and<br />
applications of digital communication.<br />
DC 2: Differentiate analog and digital systems and the advantages of digital communication<br />
systems over analog systems; the importance and the need of the sampling theorem in<br />
digital communication systems.<br />
DC 3: Convert an analog signal to a digital signal and identify the issues that occur in digital<br />
transmission, such as the Bandwidth–S/N trade off.<br />
DC 4: Compute the power and bandwidth requirements of modern communication systems.<br />
DC 5: Analyse the importance of Hartley Shannon law in calculating the BER and the<br />
channel capacity.<br />
Pulse Code Modulation<br />
DC 6: Explain the generation and reconstruction of PCM.<br />
DC 7: Analyze the effect of Quantization noise in Digital Communication.<br />
DC 8: Analyse the different digital communication schemes like Differential PCM<br />
systems (DPCM), Delta modulation, and adaptive delta modulation.<br />
DC 9: Compare the digital communication schemes like Differential PCM systems<br />
(DPCM), Delta modulation, and adaptive delta modulation.<br />
DC 10: Illustrate the effect of Noise in PCM and DM systems.<br />
UNIT 2: Digital Modulation Techniques<br />
DC11: Describe and differentiate the different shift keying formats used in digital<br />
communication.
DC 12: Compute the power and bandwidth requirements of modern communication<br />
systems modulation formats like those employing ASK, PSK, FSK, and QAM.<br />
DC 13: Explain the different modulators like ASK Modulator, Coherent ASK detector,<br />
non-Coherent ASK detector, Band width frequency spectrum of FSK, Non-Coherent FSK<br />
detector, Coherent FSK detector.<br />
DC 14: Analyze the need and use of PLL in FSK Detection.<br />
DC 15: Differentiate the different keying schemes -BPSK, Coherent PSK detection,<br />
QPSK & Differential PSK.<br />
UNIT 3: Base Band Transmission and Optimal reception of Digital Signal<br />
DC 16: Identify the need of pulse shaping for optimum transmission and gain knowledge of<br />
the base band signal receiver model.<br />
DC 17: Analyze different pulses and their power spectrum densities.<br />
DC 18: Calculate the probability of error for the optimum receiver and optimum coherent<br />
reception, and use signal space representation to calculate the probability of error.<br />
DC 19: Explain the Eye diagram and its importance in calculating error.<br />
DC 20: Describe cross talk and its effect in the degradation of signal quality in digital<br />
communication.<br />
Information Theory<br />
DC 21: Identify the basic terminology used in coding of Digital signals like Information<br />
and entropy and calculate the Conditional entropy and redundancy.<br />
DC 22: Solve problems based on Shannon Fano coding.<br />
DC 23: Solve problems based on mutual information and Information loss due to noise.<br />
DC 24: Compute problems on Source coding methods like - Huffman code, variable<br />
length codes used in digital communication.<br />
DC 25: Explain source coding, the drawbacks of lossy source coding, and how to increase<br />
the average information per bit.<br />
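The entropy-related outcomes above (DC 21–DC 25) rest on the formula H = -sum(p * log2(p)). A minimal sketch (Python; the symbol probabilities below are illustrative, not taken from the syllabus):<br />

```python
import math

def entropy(probabilities):
    """Average information per symbol, H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four symbols with unequal probabilities carry less information per symbol
# than four equally likely symbols (the maximum, log2(4) = 2 bits).
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits/symbol
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits/symbol
```

A Huffman code for the first source assigns word lengths 1, 2, 3, 3, so its average length of 1.75 bits/symbol meets the entropy exactly.<br />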
UNIT 4: Error control codes<br />
Linear Block Codes<br />
DC 26: Illustrate the different types of codes used in digital communication and the<br />
Matrix description of linear block codes.<br />
DC 27: Analyze and find errors, solve the numerical in Error detection and error<br />
correction of linear block codes.<br />
DC 28: Explain cyclic codes and the difference between linear block codes and cyclic<br />
codes.<br />
DC 29: Compute problems based on the representation of cyclic codes and encoding and<br />
decoding of cyclic codes.<br />
DC 30: Solve problems to find the location of error in the codes i.e., syndrome<br />
calculation.<br />
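The syndrome calculation in DC 30 can be sketched with the standard (7,4) Hamming code, whose parity-check matrix has the binary numbers 1..7 as columns, so a nonzero syndrome directly names the errored bit position (Python; the all-ones codeword and flipped position are illustrative):<br />

```python
# Parity-check matrix of the (7,4) Hamming code: column j is j written in binary.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(received):
    """s = H * r^T (mod 2); the all-zero syndrome means no detectable error."""
    return [sum(h * r for h, r in zip(row, received)) % 2 for row in H]

codeword = [1, 1, 1, 1, 1, 1, 1]   # all-ones is a valid codeword (H * c = 0)
received = codeword[:]
received[4] ^= 1                   # channel flips bit position 5 (index 4)

s = syndrome(received)
error_position = s[0] * 4 + s[1] * 2 + s[2]  # read the syndrome as a binary number
print(error_position)  # 5: the syndrome points at the corrupted bit
```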
Convolution Codes<br />
DC 31: Identify the differences between the different codes used in digital communication.<br />
DC 32: Describe Encoding & decoding of Convolutional Codes.<br />
DC 33: Solve problems on error detection & correction using state, tree, and trellis<br />
diagrams.<br />
DC 34: Solve problems based on Viterbi algorithm.<br />
DC 35: Compute numerical on error calculations and compare the error rates in coded<br />
and uncoded transmission.<br />
UNIT 5: Spread Spectrum Modulation<br />
DC 36: Analyze the need and use of spread spectrum in digital communication and gain<br />
knowledge of spread spectrum techniques like direct sequence spread spectrum (DSSS).<br />
DC 37: Describe Code division multiple access, ranging using DSSS, and Frequency<br />
Hopping spread spectrum.<br />
DC 38: Generate PN sequences and solve problems based on sequence generation.<br />
DC 39: Explain the need of synchronization in spread spectrum system.<br />
DC 40: Identify the Advancements in the digital communication.
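The PN-sequence generation in DC 38 can be sketched with a linear feedback shift register: a 3-stage register with suitably chosen taps produces a maximal-length sequence of period 2^3 - 1 = 7 (Python; the taps and seed below are illustrative):<br />

```python
def lfsr_pn(taps, state, length):
    """Fibonacci LFSR: output the last stage, feed the XOR of tapped stages back in."""
    out = []
    for _ in range(length):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]   # shift right, insert feedback at the front
    return out

# A 3-stage register with these taps is maximal-length: period 2**3 - 1 = 7.
seq = lfsr_pn(taps=[1, 2], state=[1, 0, 0], length=14)
print(seq[:7] == seq[7:])  # True: the PN sequence repeats every 7 chips
```

Each period contains 2^(n-1) = 4 ones and 3 zeros, one of the balance properties of maximal-length sequences.<br />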
10. Course mapping with PEO’s and PO’s:<br />
Mapping of Course with Programme Educational Objectives:<br />
S.No | Course component | Code | Course | Semester | PEO 1 | PEO 2 | PEO 3<br />
1 | Communication | 56026 | Digital Communications | II | √ | √ |<br />
Mapping of Course outcomes with Programme outcomes:<br />
*When the course outcome weightage is < 40%, it will be given as moderately correlated (1).<br />
*When the course outcome weightage is >40%, it will be given as strongly correlated (2).<br />
Pos 1 2 3 4 5 6 7 8 9 10 11 12 13<br />
Digital Communications 2 2 1 1 1 1 2 2 2 2 2<br />
CO 1: To State the function of Analog to<br />
Digital Converters (ADCs) and vice versa<br />
and to recognize the concepts of digital<br />
baseband transmission, optimum reception<br />
analysis and band limited transmission.<br />
CO 2:Demonstrate generation and<br />
reconstruction of different Pulse Code<br />
Modulation schemes like PCM, DPCM etc.<br />
CO 3: Compare different pass band<br />
digital modulation techniques like ASK,<br />
PSK etc and compute the Probability of<br />
error in each scheme.<br />
2 2 1 1 1 1 1 2<br />
2 2 2 1 1 1 2 2 2<br />
2 2 2 1 1 2 2 2 2<br />
CO 4: Calculate different parameters like<br />
power spectrum density, probability of<br />
error etc of Base Band signal for optimum<br />
transmission.<br />
CO 5: Analyze the concepts of<br />
Information theory, Huffman coding etc to<br />
2 2 2 1 1 1 2 2 2 2<br />
1 1 1 1 1 1 2 2
increase average information per bit.<br />
CO 6: Generate and retrieve data using<br />
block codes and solve numerical problems<br />
on error detection and correction<br />
capabilities.<br />
CO 7: Solve problems on generation and<br />
decoding of data using convolution codes<br />
and compare error rates for coded and<br />
uncoded transmission.<br />
CO 8: Describe the different criteria in<br />
spread spectrum modulation scheme and<br />
its applications.<br />
2 1 1 2 1 1 2 1 1 2<br />
2 1 1 2 1 1 2 1 1 2<br />
2 2 2 1 1 1 1 2 2 2 2<br />
11. Class Time Tables:<br />
12. Individual Time Table:<br />
13. Micro Plan:<br />
Sl.no | Unit No. | Total no. of periods | Topics to be covered | Total no. of hours | Date | Regular/Additional | Teaching aids used (LCD/OHP/BB) | Remarks<br />
1<br />
Elements Of Digital Communication 1 Regular OHP,BB<br />
Systems: Model of digital communication<br />
system<br />
2 Model of digital communication system 1 Regular OHP,BB<br />
Digital representation of analog signal<br />
3 Certain issues of digital transmission 1 Regular OHP,BB<br />
5 advantages of digital communication<br />
1 Regular OHP,BB<br />
systems, Bandwidth- S/N, Hartley Shannon<br />
law trade off,<br />
6 1 Additional BB<br />
7 Sampling theorem 1 Regular OHP,BB<br />
8<br />
07<br />
Tutorial class-1 1 BB<br />
UNIT - I<br />
9<br />
Pulse Coded Modulation: PCM generation 1 Regular BB<br />
and reconstruction , Quantization noise<br />
10 Differential PCM systems (DPCM), Delta 1 Regular OHP,BB<br />
modulation,<br />
11 adaptive delta modulation, Noise in PCM 1 Regular OHP,BB<br />
and DM systems<br />
12 Voice Coders 1 Additional BB<br />
13 Tutorial Class-2 1 Regular BB<br />
14 07 Solving University papers 1 OHP,BB<br />
15 Assignment test-1 1<br />
16<br />
Digital Modulation Techniques:<br />
1 Regular BB<br />
introduction , ASK, ASK Modulator<br />
17 Coherent ASK detector, non-Coherent ASK 1 Regular<br />
detector<br />
18 Band width frequency spectrum of FSK, 1 Regular OHP,BB<br />
Non-Coherent FSK detector<br />
19 Coherent FSK detector, FSK Detection 1 Regular OHP,BB<br />
08 using PLL<br />
20 BPSK, Coherent PSK detection, 1 Regular BB<br />
21 QPSK, Differential PSK 1 Regular BB<br />
22 Regenerative Repeater 1 Additional OHP,BB<br />
23 Tutorial class-3 1 Regular BB<br />
24<br />
Base Band Transmission And Optimal 1 Regular OHP,BB<br />
reception of Digital Signal: pulse shaping<br />
for optimum transmission<br />
25 A Base band signal receiver, Different<br />
1 Regular OHP,BB<br />
08 pulses and power spectrum densities<br />
26 Probability of error, optimum receiver 1 Regular BB<br />
27 Optimum of coherent reception, 1 Regular OHP,BB<br />
28 Signal space representation and probability 1 Regular OHP,BB<br />
of error, Eye diagram, cross talk<br />
29 Tutorial Class-4 1 Regular BB<br />
30 Solving University papers 1 Regular OHP,BB<br />
UNIT-II<br />
UNIT-III<br />
31 Assignment test-2 1<br />
32<br />
Information Theory: Information and 1 Regular BB<br />
entropy<br />
33 Conditional entropy and redundancy 1 Regular OHP,BB<br />
34 Shannon Fano coding, mutual information 1 Regular OHP,BB<br />
35 Information loss due to noise, 1 Regular BB<br />
36 Source codings,- Huffman code, variable 1 BB<br />
08 length coding<br />
37 Lossy source Coding , Source coding to 1 Regular BB<br />
increase average information per bit<br />
38 Feedback communications 1 Additional BB<br />
39 Tutorial Class-5 1 Regular OHP,BB<br />
40<br />
Linear Block Codes: Matrix description of 1 Regular BB<br />
linear block codes<br />
41 Matrix description of linear block codes 1 Regular BB<br />
42 Error detection and error correction<br />
1 Regular BB<br />
capabilities of linear block codes<br />
43 Error detection and error correction<br />
1 Regular BB<br />
capabilities of linear block codes<br />
44 Cyclic codes: algebraic structure, encoding, 1 Regular OHP,BB<br />
45 08 syndrome calculation decoding 1 Regular OHPBB<br />
46 Turbo codes 1 Additional OHP,BB<br />
47 Tutorial Class-6 1 Regular OHP,BB<br />
48 Solving University papers 1 OHP,BB<br />
49 Assignment test-3 1<br />
50<br />
Convolution Codes: Encoding, decoding 1 Regular BB<br />
using state<br />
51 Tree and trellis diagrams 1 Regular BB<br />
52 Decoding using Viterbi algorithm 1 Regular BB<br />
53 08 Comparison of error rates in coded and 1 Regular OHP,BB<br />
uncoded transmission<br />
54 Tutorial Class-7 1 Regular OHP,BB<br />
55<br />
Spread Spectrum Modulation: Use of 1 Regular OHP,BB<br />
spread spectrum, direct sequence spread<br />
spectrum(DSSS)<br />
56 Code division multiple access 1 Regular OHP,BB<br />
57 Ranging using DSSS Frequency Hopping 1 Regular<br />
spread spectrum<br />
58 PN sequences: generation and<br />
1 Regular BB<br />
characteristics<br />
59 Synchronization in spread spectrum system 1 Regular BB<br />
60 Advancements in the digital communication 1 Missing BB<br />
61 08 Tutorial Class-8 1 Regular BB<br />
62 Solving University papers 1 Regular OHP,BB<br />
61 Assignment test-4 1<br />
62 Total No. of classes 62<br />
UNIT-IV<br />
UNIT- V
14. Detailed Notes<br />
UNIT 1:<br />
Elements of Digital Communication Systems<br />
Model of digital communication system,<br />
Digital representation of analog signal,<br />
Certain issues of digital transmission,<br />
Advantages of digital communication systems,<br />
Bandwidth–S/N trade off,<br />
Hartley-Shannon law,<br />
Sampling theorem<br />
What Does Communication (or Telecommunication) Mean?<br />
The term communication (or telecommunication) means the transfer of some form of<br />
information from one place (known as the source of information) to another place<br />
(known as the destination of information) using some system to do this function<br />
(known as a communication system).<br />
So What Will We Study in This Course?<br />
In this course, we will study the basic methods that are used for communication in<br />
today’s world and the different systems that implement these communication methods.<br />
Upon the successful completion of this course, you should be able to identify the<br />
different communication techniques, know the advantages and disadvantages of each<br />
technique, and show the basic construction of the systems that implement these<br />
communication techniques.<br />
Old Methods of Communication<br />
• Pigeons<br />
• Horseback<br />
• Smoke<br />
• Fire<br />
• Post Office<br />
• Drums<br />
Problems with Old Communication Methods<br />
• Slow<br />
• Difficult and relatively expensive<br />
• Limited amount of information can be sent
• Some methods can be used at specific times of the day<br />
• Information is not secure.<br />
Examples of Today’s Communication Methods<br />
All of the following are electric (or electromagnetic) communication systems<br />
• Satellite (Telephone, TV, Radio, Internet, … )<br />
• Microwave (Telephone, TV, Data, …)<br />
• Optical Fibers (TV, Internet, Telephone, … )<br />
• Copper Cables (telephone lines, coaxial cables, twisted pairs, … etc)<br />
Advantages of Today’s Communication Systems<br />
• Fast<br />
• Easy to use and very cheap<br />
• Huge amounts of information can be transmitted<br />
• Secure transmission of information can easily be achieved<br />
• Can be used 24 hours a day.<br />
Basic Construction of Electrical Communication System<br />
Input (sound, picture, …) → Input Transducer → Transmitter → Channel (noise is added and<br />
the transmitted signal is distorted) → Receiver → Output Transducer → Output (sound,<br />
picture, …)<br />
• Input Transducer: converts the input signal from its original form (sound, picture, … etc)<br />
to an electric signal (like the audio and video outputs of a video camera).<br />
• Transmitter: adapts the electric signal to the channel (changes the signal to a form that is<br />
suitable for transmission).<br />
• Channel: the medium through which the information is transmitted; it distorts the<br />
transmitted signal and adds noise to it.<br />
• Receiver: extracts the original electric signal from the received signal (like the outputs of<br />
a satellite receiver).<br />
• Output Transducer: converts the electric signal back to its original form (sound, picture,<br />
… etc).<br />
A communication system may transmit information in one direction such as TV and radio<br />
(simplex), two directions but at different times such as the CB (half-duplex), or two<br />
directions simultaneously such as the telephone (full-duplex).<br />
Basic Terminology Used in this Communications Course<br />
A Signal: is a function that specifies how a specific variable changes versus an<br />
independent variable such as time, location, or height (examples: the age of people versus<br />
their coordinates on Earth, the amount of money in your bank account versus time).<br />
A System: operates on an input signal in a predefined way to generate an output signal.<br />
Analog Signals: are signals with amplitudes that may take any real value out of an infinite<br />
number of values in a specific range (examples: the height of mercury in a 10 cm-long<br />
thermometer over a period of time is a function of time that may take any value between 0<br />
and 10 cm; the weight of people sitting in a classroom is a function of space (x and y<br />
coordinates) that may take any real value between 30 kg and 200 kg (typically)).<br />
Digital Signals: are signals with amplitudes that may take only a specific number of values<br />
(the number of possible values is less than infinite) (examples: the number of days in a year<br />
versus the year is a function that takes one of two values, 365 or 366 days; the number of<br />
people sitting on a one-person chair at any instant of time is either 0 or 1; the number of<br />
students registered in different classes at KFUPM is an integer between 1 and 100).<br />
Noise: is an undesired signal that gets added to (or sometimes multiplied with) a desired<br />
transmitted signal at the receiver. The source of noise may be external to the<br />
communication system (noise resulting from electric machines, other communication<br />
systems, and noise from outer space) or internal to the communication system (noise<br />
resulting from the collision of electrons with atoms in wires and ICs).<br />
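The analog/digital distinction above can be made concrete in a few lines: an analog amplitude may take any value in a range, while a digital version is restricted to a finite set of levels (Python; the 5-level quantizer and sine input are illustrative):<br />

```python
import math

def quantize(x, levels, lo=-1.0, hi=1.0):
    """Map an 'analog' amplitude x to the nearest of `levels` discrete values."""
    step = (hi - lo) / (levels - 1)     # spacing between allowed amplitudes
    index = round((x - lo) / step)      # nearest discrete level index
    return lo + index * step

# An "analog" signal: one cycle of a sine sampled at 8 points.
analog = [math.sin(2 * math.pi * n / 8) for n in range(8)]

# A "digital" version: each sample forced onto one of 5 allowed amplitudes.
digital = [quantize(x, levels=5) for x in analog]

print(digital)  # every value is one of {-1.0, -0.5, 0.0, 0.5, 1.0}
```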
Signal to Noise Ratio (SNR): is the ratio of the power of the desired signal to the power of<br />
the noise signal.<br />
Bandwidth (BW): is the width of the frequency range that the signal occupies. For example,<br />
the bandwidth of a radio channel in the AM band is around 10 kHz and the<br />
bandwidth of a radio channel in the FM band is 150 kHz.<br />
Rate of Communication: is the speed at which DIGITAL information is transmitted. The<br />
maximum rate at which most of today’s modems receive digital
information is around 56 k bits/second and transmit digital information is<br />
around 33 k bits/second. A Local Area Network (LAN) can theoretically<br />
receive/transmit information at a rate of 100 M bits/s. Gigabit networks<br />
would be able to receive/transmit information at least 10 times that rate.<br />
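The SNR, bandwidth, and rate definitions above are tied together by the Hartley-Shannon law from Unit I, C = B·log2(1 + S/N). A quick sketch (Python; the telephone-channel numbers are illustrative):<br />

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio expressed in decibels."""
    return 10 * math.log10(signal_power / noise_power)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Hartley-Shannon law: maximum error-free rate C = B * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative telephone-line numbers: 3 kHz bandwidth, SNR of 1000.
print(snr_db(1000, 1))               # ~30 dB
print(shannon_capacity(3000, 1000))  # ~29.9 kbit/s
```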
Modulation: is changing one or more of the characteristics of a signal (known as the<br />
carrier signal) based on the value of another signal (known as the information<br />
or modulating signal) to produce a modulated signal.<br />
Analog and Digital Communications<br />
Since the introduction of digital communication a few decades ago, it has been gaining a<br />
steady increase in use. Today, you can find a digital form of almost all types of analog<br />
communication systems. For example, TV channels are now broadcast in digital form<br />
(most if not all Ku-band satellite TV transmission is digital). Also, radio is now being<br />
broadcast in digital form (see sirius.com and xm.com). Home phone systems are starting to<br />
go digital (a digital phone system is available at KFUPM). Almost all cellular phones are now<br />
digital, and so on. So, what makes digital communication more attractive compared to analog<br />
communication?<br />
Advantages of Digital Communication over Analog Communication<br />
• Immunity to noise (the possibility of regenerating the original digital signal, if the<br />
signal power to noise power ratio (SNR) is relatively high, by using devices called<br />
repeaters along the path of transmission).<br />
• Efficient use of communication bandwidth (through use of techniques like<br />
compression).<br />
• Digital communication provides higher security (data encryption).<br />
• The ability to detect errors and correct them if necessary.<br />
• Design and manufacturing of electronics for digital communication systems is<br />
much easier and much cheaper than the design and manufacturing of electronics<br />
for analog communication systems.<br />
Modulation<br />
Famous Types<br />
• Amplitude Modulation (AM): varying the amplitude of the carrier based on the<br />
information signal as done for radio channels that<br />
are transmitted in the AM radio band.<br />
• Phase Modulation (PM):<br />
varying the phase of the carrier based on the<br />
information signal.<br />
• Frequency Modulation (FM): varying the frequency of the carrier based on the<br />
information signal as done for channels transmitted<br />
in the FM radio band.<br />
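The three schemes above can be sketched numerically; this is a minimal illustration, with the carrier frequency, message frequency, modulation index, phase deviation, and frequency deviation all chosen arbitrarily for the example:

```python
import numpy as np

fs = 100_000                       # simulation sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of signal
fc, fmsg = 10_000, 500             # assumed carrier and message frequencies (Hz)
m = np.cos(2 * np.pi * fmsg * t)   # information (modulating) signal

# AM: carrier amplitude follows the message (modulation index 0.5)
am = (1 + 0.5 * m) * np.cos(2 * np.pi * fc * t)

# PM: carrier phase follows the message (phase deviation 0.5 rad)
pm = np.cos(2 * np.pi * fc * t + 0.5 * m)

# FM: carrier frequency follows the message; the phase is the running
# integral of the instantaneous frequency, approximated by a cumulative sum
kf = 2_000                         # assumed frequency deviation constant (Hz)
fm_sig = np.cos(2 * np.pi * fc * t + 2 * np.pi * kf * np.cumsum(m) / fs)
```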
Purpose of Modulation<br />
• For a signal (like the electric signals coming out of a microphone) to be<br />
transmitted by an antenna, the antenna length has to be comparable to the signal<br />
wavelength (an antenna length of about 0.1 of the signal wavelength or more). If
the wavelength is extremely long, modulation must be used to reduce the<br />
wavelength of the signal so that the length of the required antenna becomes practical.<br />
• To receive transmitted signals from multiple sources without interference between<br />
them, they must be transmitted at different frequencies (frequency multiplexing)<br />
by modulating carriers that have different frequencies with the different<br />
information signals.<br />
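A quick back-of-the-envelope check of the antenna-length argument, taking (as an assumption) one tenth of the wavelength as the shortest practical antenna:

```python
c = 3e8  # speed of light in m/s

def min_antenna_length(f_hz):
    """Shortest practical antenna, assumed here to be one tenth of the wavelength."""
    return (c / f_hz) / 10

# A 3 kHz voice signal would need an impractical ~10 km antenna,
# while a 100 MHz FM-band carrier needs only about 0.3 m:
print(min_antenna_length(3e3))    # 10000.0 (metres)
print(min_antenna_length(100e6))  # 0.3 (metres)
```

This is why baseband audio is shifted up to a carrier frequency before transmission.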
Exercise 1–1: Specify if the following communication systems are (A)nalog or (D)igital:<br />
a) TV in the 1970s:<br />
b) TV in the 2030s:<br />
c) Fax machines:<br />
d) Local area networks (LANs):<br />
e) First–generation cellular phones:<br />
f) Second–generation cellular phones:<br />
g) Third–generation cellular phones:
These are the basic elements of any digital communication system, and they give a basic<br />
understanding of how communication systems work.
Basic elements of a digital communication system<br />
1. Information Source and Input Transducer:<br />
The source of information can be analog or digital, e.g. analog: an audio or video signal;<br />
digital: a teletype signal. In digital communication the signal produced by this<br />
source is converted into a digital signal consisting of 1′s and 0′s. For this we need a source<br />
encoder.<br />
2. Source Encoder<br />
In digital communication we convert the signal from the source into a digital signal as<br />
mentioned above. The point to remember is that we would like to use as few binary digits as<br />
possible to represent the signal, so that this efficient representation of the source<br />
output results in little or no redundancy. This sequence of binary digits is<br />
called the information sequence.<br />
Source Encoding or Data Compression: the process of efficiently converting the output<br />
of either an analog or a digital source into a sequence of binary digits is known as source<br />
encoding.<br />
3. Channel Encoder:<br />
The information sequence is passed through the channel encoder. The purpose of the<br />
channel encoder is to introduce, in a controlled manner, some redundancy in the binary<br />
information sequence that can be used at the receiver to overcome the effects of noise and<br />
interference encountered in the transmission of the signal through the channel.
e.g. take k bits of the information sequence and map those k bits to a unique n-bit sequence<br />
called a code word. The amount of redundancy introduced is measured by the ratio n/k, and<br />
the reciprocal of this ratio (k/n) is known as the code rate.<br />
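As an illustration of controlled redundancy, here is a sketch of the simplest channel code: a rate-1/3 repetition code (k = 1, n = 3) with majority-vote decoding. The 5% bit-flip probability is an arbitrary choice for the experiment:

```python
import numpy as np

def encode_rep3(bits):
    # k = 1 information bit -> n = 3 coded bits, code rate k/n = 1/3
    return np.repeat(bits, 3)

def decode_rep3(bits):
    # majority vote over each group of 3 received bits
    return (bits.reshape(-1, 3).sum(axis=1) >= 2).astype(int)

rng = np.random.default_rng(0)
data = rng.integers(0, 2, 1000)
tx = encode_rep3(data)
flips = (rng.random(tx.size) < 0.05).astype(int)  # channel flips 5% of coded bits
rx = tx ^ flips
ber = np.mean(decode_rep3(rx) != data)
assert ber < 0.05  # decoded error rate is well below the raw channel error rate
```

The redundancy costs three times the bandwidth but lets the receiver correct any single flipped bit per code word.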
4. Digital Modulator:<br />
The binary sequence is passed to the digital modulator, which in turn converts the sequence<br />
into electric signals so that we can transmit them on the channel (we will see the channel later).<br />
The digital modulator maps the binary sequence into signal waveforms; for example, if<br />
we represent 1 by sin x and 0 by cos x, then we will transmit sin x for 1 and cos x for 0 (a<br />
case similar to BPSK)<br />
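The sin/cos mapping in the example above can be sketched directly; the sample rate, carrier frequency, and samples-per-bit values below are arbitrary choices for the illustration:

```python
import numpy as np

fs, fc, spb = 8000, 1000, 8        # assumed sample rate, carrier, samples per bit
t = np.arange(spb) / fs

def modulate(bits):
    """Map 1 -> a sine carrier segment, 0 -> a cosine carrier segment."""
    return np.concatenate([np.sin(2 * np.pi * fc * t) if b
                           else np.cos(2 * np.pi * fc * t)
                           for b in bits])

wave = modulate([1, 0, 1])         # 3 bits -> 24 transmitted samples
```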
5. Channel:<br />
The communication channel is the physical medium that is used for transmitting signals<br />
from transmitter to receiver. In a wireless system this channel is the atmosphere; for<br />
traditional telephony the channel is wired; there are also optical channels, underwater<br />
acoustic channels, etc.<br />
We further discriminate among these channels on the basis of their properties and characteristics,<br />
e.g. the AWGN channel.<br />
6. Digital Demodulator:<br />
The digital demodulator processes the channel corrupted transmitted waveform and<br />
reduces the waveform to the sequence of numbers that represents estimates of the<br />
transmitted data symbols.<br />
7. Channel Decoder:<br />
This sequence of numbers is then passed through the channel decoder, which attempts to<br />
reconstruct the original information sequence from knowledge of the code used by the<br />
channel encoder and the redundancy contained in the received data.<br />
The average probability of a bit error at the output of the decoder is a measure of the<br />
performance of the demodulator – decoder combination. THIS IS THE MOST<br />
IMPORTANT POINT, We will discuss a lot about this BER (Bit Error Rate) stuff in<br />
coming posts.<br />
8. Source Decoder<br />
At the end, if an analog signal is desired, the source decoder tries to decode the sequence<br />
from knowledge of the encoding algorithm, which results in an approximate<br />
replica of the input at the transmitter end.<br />
9. Output Transducer:<br />
Finally we get the desired signal in desired format analog or digital.<br />
The points worth noting are:<br />
1. the source coding algorithm plays an important role in achieving a higher code rate<br />
2. the channel encoder introduces redundancy in the data<br />
3. the modulation scheme plays an important role in deciding the data rate and the immunity of<br />
the signal to the errors introduced by the channel
4. the channel introduces many types of errors, e.g. multipath errors and errors due to thermal<br />
noise<br />
5. the demodulator and decoder should provide a low BER.<br />
What are the advantages and disadvantages of Digital Communication?<br />
Advantages of digital communication:<br />
1. It is faster and easier.<br />
2. No paper is wasted.<br />
3. The messages can be stored in the device for longer times without being damaged, unlike<br />
paper files that easily get damaged or attacked by insects.<br />
4. Digital communication can be done over large distances through internet and other things.<br />
5. It is comparatively cheaper and the work which requires a lot of people can be done simply<br />
by one person as folders and other such facilities can be maintained.<br />
6. It removes semantic barriers because the written data can be easily translated into different<br />
languages using software.<br />
7. It provides facilities like video conferencing which save a lot of time, money and effort.<br />
Disadvantages:<br />
1. It is unreliable, as the messages cannot be recognised by signatures. Though software can<br />
be developed for this, such software can be easily hacked.<br />
2. Sometimes, the quickness of digital communication is harmful as messages can be sent<br />
with the click of a mouse. The person does not think and sends the message at an impulse.<br />
3. Digital Communication has completely ignored the human touch. A personal touch cannot<br />
be established because all the computers will have the same font!<br />
4. The establishment of digital communication causes degradation of the environment in<br />
some cases; "electronic waste" is an example. The signals given out by telephone and cell<br />
phone towers are claimed to be strong enough to harm small birds; the decline of the common<br />
sparrow has been attributed to the many towers coming up.<br />
5. Digital communication has made the whole world into an "office." People carry their<br />
work to places where they are supposed to relax. Even in the office, digital communication<br />
causes problems because personal messages can come in on your cell phone, the internet,<br />
etc.<br />
6. Many people misuse the efficiency of Digital Communication. The sending of hoax<br />
messages, the usage by people to harm the society, etc cause harm to the society on the<br />
whole.<br />
Definition of Digital – A method of storing, processing and transmitting information through<br />
the use of distinct electronic or optical pulses that represent the binary digits 0 and 1.<br />
Advantages of Digital -<br />
Less expensive<br />
More reliable
Easy to manipulate<br />
Flexible<br />
Compatibility with other digital systems<br />
Only digitized information can be transported through a noisy channel without degradation<br />
Integrated networks<br />
Disadvantages of Digital -<br />
Sampling Error<br />
Digital communications require greater bandwidth than analogue to transmit the same<br />
information.<br />
The detection of digital signals requires the communications system to be synchronized,<br />
whereas generally speaking this is not the case with analogue systems.<br />
Some more explanation of advantages and disadvantages of analog vs digital<br />
communication.<br />
1. The first advantage of digital communication over analog is its noise immunity. In any<br />
transmission path some unwanted voltage or noise is always present and cannot be<br />
eliminated fully. When a signal is transmitted, this noise gets added to the original signal,<br />
causing distortion. However, in digital communication this additive noise can be<br />
eliminated to a great extent at the receiving end, resulting in better recovery of the<br />
actual signal. In analog communication it is difficult to remove the noise once it is added<br />
to the signal.<br />
2. Security is another priority of messaging services in modern days. Digital communication<br />
provides better security for messages than analog communication; this can be achieved<br />
through the various coding techniques available in digital communication.<br />
3. In digital communication the signal is digitized into a stream of 0s and 1s, so at the<br />
receiver side a simple decision has to be made as to whether the received signal is a 0 or a<br />
1. Accordingly, the receiver circuit becomes simpler compared to an analog receiver<br />
circuit.<br />
4. A signal travelling through its transmission path fades gradually, so along the path<br />
it needs to be reconstructed to its actual form and re-transmitted many times. For that reason<br />
AMPLIFIERS are used in analog communication and REPEATERS are used in digital<br />
communication. Amplifiers are needed every 2 to 3 km, whereas repeaters are needed only<br />
every 5 to 6 km, so digital communication is definitely cheaper. Amplifiers also often<br />
add non-linearity that distorts the actual signal.
5. Bandwidth is another scarce resource. Various digital communication<br />
techniques are available that use the available bandwidth much more efficiently than analog<br />
communication techniques.<br />
6. When audio and video signals are transmitted digitally, an AD (Analog to Digital)<br />
converter is needed at the transmitting side and a DA (Digital to Analog) converter is<br />
needed at the receiver side. In analog communication these devices are not<br />
needed.<br />
7. Digital signals are often an approximation of the analog data (like voice<br />
or video) obtained through a process called quantization. The digital representation is<br />
never the exact signal but its most closely approximated digital form, so its accuracy<br />
depends on the degree of approximation taken in the quantization process.<br />
Sampling Theorem:<br />
There are 3 cases of sampling: ideal impulse sampling, natural sampling, and flat-top sampling.<br />
Ideal impulse sampling<br />
Consider an arbitrary lowpass signal x(t) shown in Fig. 6.2(a).
Pulse Code Modulation<br />
‣ PCM generation and reconstruction ,<br />
‣ Quantization noise,<br />
‣ Differential PCM systems (DPCM),<br />
‣ Delta modulation, adaptive delta modulation,<br />
‣ Noise in PCM and DM systems<br />
Digital Transmission of Analog Signals:<br />
PCM, DPCM and DM<br />
6.1 Introduction<br />
Quite a few of the information bearing signals, such as speech, music, video, etc., are analog<br />
in nature; that is, they are functions of the continuous variable t and for any t = t1, their value<br />
can lie anywhere in the interval, say − A to A. Also, these signals are of the baseband variety.<br />
If there is a channel that can support baseband transmission, we can easily set up a baseband<br />
communication system. In such a system, the transmitter could be as simple as just a power<br />
amplifier so that the signal that is transmitted could be received at the destination with some<br />
minimum power level, even after being subject to attenuation during propagation on the<br />
channel. In such a situation, even the receiver could have a very simple structure; an<br />
appropriate filter (to eliminate the out of band spectral components) followed by an amplifier.<br />
If a baseband channel is not available but we have access to a passband channel (such as an<br />
ionospheric channel, satellite channel, etc.), an appropriate CW modulation scheme discussed<br />
earlier could be used to shift the baseband spectrum to the passband of the given channel.<br />
Interestingly enough, it is possible to transmit the analog information in a digital format.<br />
Though there are many ways of doing it, in this chapter, we shall explore three such<br />
techniques, which have found widespread acceptance. These are: Pulse Code Modulation<br />
(PCM), Differential Pulse Code Modulation (DPCM)<br />
and Delta Modulation (DM). Before we get into the details of these techniques, let us<br />
summarize the benefits of digital transmission. For simplicity, we shall assume that<br />
information is being transmitted by a sequence of binary pulses. i) During the course of<br />
propagation on the channel, a transmitted pulse becomes gradually distorted due to the<br />
non-ideal transmission characteristic of the channel. Also, various unwanted signals (usually<br />
termed interference and noise) will cause further deterioration of the information bearing<br />
pulse. However, as there are only two types of signals being transmitted, it is possible<br />
for us to identify (with a very high probability) a given transmitted pulse at some appropriate<br />
intermediate point on the channel and regenerate a clean pulse, thereby completely<br />
eliminating the effect of distortion and noise up to the point of regeneration. (In long-haul PCM<br />
telephony, regeneration is done every few kilometers, with the help of regenerative<br />
repeaters.) Clearly, such an operation is not possible if the transmitted signal is analog,<br />
because there is no reference waveform that can be regenerated.<br />
ii) Storing the messages in digital form and forwarding or redirecting them at a later point in<br />
time is quite simple.<br />
iii) Coding the message sequence to take care of the channel noise, encrypting for secure<br />
communication can easily be accomplished in the digital domain.<br />
iv) Mixing signals is easy. All signals look alike after conversion to digital form,<br />
independent of the source (or language!); hence they can easily be multiplexed (and<br />
demultiplexed).
6.2 The PCM system<br />
The two basic operations in the conversion of an analog signal into digital form are time<br />
discretization and amplitude discretization. In the context of PCM, the former is accomplished<br />
by the sampling operation and the latter by means of quantization. In addition, PCM involves<br />
another step, namely, the conversion of quantized amplitudes into a sequence of simpler pulse<br />
patterns (usually binary), generally called code words. (The word code in pulse code<br />
modulation refers<br />
to the fact that every quantized sample is converted to an R-bit code word.)<br />
Fig. 6.1 illustrates a PCM system. Here, m(t ) is the information bearing<br />
message signal that is to be transmitted digitally. m(t ) is first sampled and then<br />
quantized. The output of the sampler is m(nTs), where<br />
Ts is the sampling period and n is the appropriate integer;<br />
fs = 1/Ts is called the sampling rate or sampling frequency.<br />
The quantizer converts each sample to one of the values that is closest to it from among a<br />
pre-selected set of discrete amplitudes. The encoder represents each one of these quantized<br />
samples by an R-bit code word. This bit stream travels on the channel and reaches the<br />
receiving end. With fs as the sampling rate and R bits per code word, the bit rate of the PCM<br />
system is fs R bits per second.<br />
The decoder converts the R-bit code words into the corresponding (discrete) amplitudes.<br />
Finally, the reconstruction filter, acting on these discrete amplitudes, produces the analog<br />
signal, denoted by m′(t). If there are no channel errors, then m′(t) ≈ m(t).
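A small sanity check of the bit-rate relation fs·R, using the standard telephone-speech and single-channel CD-audio parameter values:

```python
def pcm_bit_rate(fs, R):
    """Bit rate of a PCM system: R bits per code word at fs samples per second."""
    return fs * R

# Telephone-grade speech: 8 kHz sampling, 8-bit code words -> 64 kbit/s
assert pcm_bit_rate(8000, 8) == 64_000
# CD audio, one channel: 44.1 kHz sampling, 16-bit samples -> 705.6 kbit/s
assert pcm_bit_rate(44_100, 16) == 705_600
```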
Pulse-Amplitude Modulation:<br />
Pulse Amplitude Modulation – Natural and Flat-Top Sampling:<br />
Let s(t) denote the sequence of flat-top pulses:<br />
s(t) = Σn m(nTs) h(t − nTs)        (3.10)<br />
where the pulse shape h(t) is<br />
h(t) = 1 for 0 < t < T;  1/2 for t = 0 and t = T;  0 otherwise        (3.11)<br />
The instantaneously sampled version of m(t) is<br />
mδ(t) = Σn m(nTs) δ(t − nTs)        (3.12)<br />
Convolving mδ(t) with the pulse h(t),<br />
mδ(t) ⋆ h(t) = ∫ mδ(τ) h(t − τ) dτ = Σn m(nTs) ∫ δ(τ − nTs) h(t − τ) dτ        (3.13)<br />
Using the sifting property of the delta function, we have<br />
mδ(t) ⋆ h(t) = Σn m(nTs) h(t − nTs)        (3.14)<br />
Therefore the PAM signal is<br />
s(t) = mδ(t) ⋆ h(t)        (3.15)<br />
Taking Fourier transforms,<br />
S(f) = Mδ(f) H(f)        (3.16)<br />
Recall (3.2): Mδ(f) = fs Σk M(f − k fs)        (3.17)<br />
so that<br />
S(f) = fs Σk M(f − k fs) H(f)        (3.18)
• The most common technique for sampling voice in PCM systems is to use a sample-and-hold<br />
circuit.<br />
• The instantaneous amplitude of the analog (voice) signal is held as a constant charge<br />
on a capacitor for the duration of the sampling period Ts.<br />
• This technique is useful for holding the sample constant while other processing is<br />
taking place, but it alters the frequency spectrum and introduces an error, called<br />
aperture error, resulting in an inability to recover exactly the original analog signal.<br />
• The amount of error depends on how much the analog signal changes during the holding<br />
time, called the aperture time.
• To estimate the maximum voltage error possible, determine the maximum slope of the<br />
analog signal and multiply it by the aperture time ΔT.<br />
Recovering the original message signal m(t) from the PAM signal:<br />
The PAM signal s(t) is passed through a reconstruction filter of bandwidth W; the filter output<br />
has the spectrum fs M(f) H(f).<br />
The Fourier transform of the pulse h(t) is<br />
H(f) = T sinc(fT) exp(−jπfT)        (3.19)<br />
Note that the factor sinc(fT) introduces amplitude distortion (the aperture effect) and<br />
exp(−jπfT) introduces a delay of T/2.<br />
Let the equalizer response be<br />
1/|H(f)| = 1/(T sinc(fT)) = πf / sin(πfT)        (3.20)<br />
With this equalization, ideally the original signal m(t) can be recovered completely.<br />
Other Forms of Pulse Modulation:<br />
• In pulse width modulation (PWM), the width of each pulse is made directly proportional<br />
to the amplitude of the information signal.<br />
• In pulse position modulation (PPM), constant-width pulses are used, and the position or time of<br />
occurrence of each pulse from some reference time is made directly proportional to the<br />
amplitude of the information signal.
Pulse Code Modulation (PCM) :<br />
• Pulse code modulation (PCM) is produced by an analog-to-digital conversion process.<br />
• As in the case of other pulse modulation techniques, the rate at which samples are<br />
taken and encoded must conform to the Nyquist sampling rate.<br />
• The sampling rate must be greater than twice the highest frequency in the analog<br />
signal:<br />
fs > 2fA(max)<br />
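A short numerical sketch of what goes wrong when fs > 2fA(max) is violated: a 7 kHz tone sampled at 10 kHz produces exactly the same samples as a 3 kHz tone (its alias at fs − f):

```python
import numpy as np

fs = 10_000                       # sampling rate (Hz), below 2*f for the 7 kHz tone
n = np.arange(64)                 # sample indices
f_true = 7_000                    # violates fs > 2*f
f_alias = fs - f_true             # 3 kHz alias predicted by sampling theory
x = np.cos(2 * np.pi * f_true * n / fs)
y = np.cos(2 * np.pi * f_alias * n / fs)
assert np.allclose(x, y)          # the two tones are indistinguishable after sampling
```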
Quantization Process:<br />
Define the partition cell<br />
Jk : { mk < m ≤ mk+1 },  k = 1, 2, …, L        (3.21)<br />
where mk is the decision level or decision threshold and L is the number of representation levels.<br />
Amplitude quantization: the process of transforming the sample amplitude m(nTs) into a<br />
discrete amplitude v(nTs) taken from a finite set of possible amplitudes.<br />
Figure 3.10 Two types of quantization: (a) midtread and (b) midrise.<br />
Quantization Noise:
Figure 3.11 Illustration of the quantization process<br />
Let the quantization error be denoted by the random variable Q of sample value q:<br />
q = m − v,    Q = M − V        (3.23)<br />
with E[M] = 0 for a zero-mean input.        (3.24)<br />
Assuming a uniform quantizer of the midrise type, the step size is<br />
Δ = 2 mmax / L        (3.25)<br />
where −mmax ≤ m ≤ mmax and L is the total number of levels.<br />
The quantization error is then uniformly distributed:<br />
fQ(q) = 1/Δ for −Δ/2 < q ≤ Δ/2;  0 otherwise        (3.26)<br />
Its variance is<br />
σQ² = E[Q²] = ∫ from −Δ/2 to Δ/2 of q² (1/Δ) dq = Δ²/12        (3.28)<br />
When the quantized sample is expressed in binary form,<br />
L = 2^R        (3.29)<br />
R = log2 L        (3.30)<br />
where R is the number of bits per sample. Then<br />
Δ = 2 mmax / 2^R        (3.31)<br />
σQ² = (1/3) mmax² 2^(−2R)        (3.32)<br />
Let P denote the average power of m(t). The output signal-to-noise ratio is then<br />
(SNR)o = P / σQ² = (3P / mmax²) 2^(2R)        (3.33)<br />
i.e. (SNR)o increases exponentially with increasing R (bandwidth).
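The Δ²/12 result and the exponential growth of (SNR)o with R can be checked numerically; the uniform full-load test input below is an assumption made for the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
m_max, R = 1.0, 8
L = 2 ** R
delta = 2 * m_max / L                         # step size, Eq. (3.31)

m = rng.uniform(-m_max, m_max, 100_000)       # full-load test signal
v = delta * np.floor(m / delta) + delta / 2   # uniform midrise quantizer
q = m - v                                     # quantization error

assert abs(q.var() - delta**2 / 12) < 1e-6    # matches Eq. (3.28)
snr_db = 10 * np.log10(m.var() / q.var())     # about 6.02 dB per bit
```

Each additional bit roughly adds 6 dB to the output SNR, in agreement with (3.33).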
Pulse Code Modulation (PCM):<br />
Figure 3.13 The basic elements of a PCM system
Quantization (nonuniform quantizer):<br />
Compression laws: (a) μ-law, (b) A-law.<br />
μ-law:<br />
|v| = log(1 + μ|m|) / log(1 + μ)        (3.48)<br />
d|m|/d|v| = [log(1 + μ)/μ] (1 + μ|m|)        (3.49)<br />
A-law:<br />
|v| = A|m| / (1 + log A)  for 0 ≤ |m| ≤ 1/A<br />
|v| = (1 + log(A|m|)) / (1 + log A)  for 1/A ≤ |m| ≤ 1        (3.50)<br />
d|m|/d|v| = (1 + log A)/A  for 0 ≤ |m| ≤ 1/A<br />
d|m|/d|v| = (1 + log A)|m|  for 1/A ≤ |m| ≤ 1        (3.51)
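A minimal sketch of the μ-law pair; μ = 255 is the value used in North American telephony, and the expander shown is simply the algebraic inverse of (3.48):

```python
import numpy as np

def mu_law_compress(m, mu=255.0):
    """mu-law compressor, Eq. (3.48); m is normalised so that |m| <= 1."""
    return np.sign(m) * np.log1p(mu * np.abs(m)) / np.log1p(mu)

def mu_law_expand(v, mu=255.0):
    """Inverse of the compressor."""
    return np.sign(v) * np.expm1(np.abs(v) * np.log1p(mu)) / mu

x = np.linspace(-1, 1, 101)
assert np.allclose(mu_law_expand(mu_law_compress(x)), x)  # exact round trip
# Small amplitudes are boosted before uniform quantization:
assert mu_law_compress(0.01) > 0.1
```

Compressing before a uniform quantizer and expanding afterwards gives small-amplitude (quiet) samples finer effective step sizes.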
Figure 3.15 Line codes for the electrical representations of binary data.<br />
(a) Unipolar NRZ signaling. (b) Polar NRZ signaling.<br />
(c) Unipolar RZ signaling. (d) Bipolar RZ signaling.<br />
(e) Split-phase or Manchester code.<br />
Noise consideration in PCM systems:<br />
(Channel noise, quantization noise)
Time-Division Multiplexing(TDM):
Digital Multiplexers :<br />
Virtues, Limitations and Modifications of PCM:<br />
Advantages of PCM<br />
1. Robustness to noise and interference<br />
2. Efficient regeneration<br />
3. Efficient SNR and bandwidth trade-off<br />
4. Uniform format<br />
5. Ease of adding and dropping message sources<br />
6. Secure<br />
Delta Modulation (DM) :
Let m[n] = m(nTs), n = 0, ±1, ±2, …, where Ts is the sampling period and m(nTs) is a<br />
sample of m(t).<br />
The error signal is<br />
e[n] = m[n] − mq[n − 1]        (3.52)<br />
eq[n] = Δ sgn(e[n])        (3.53)<br />
mq[n] = mq[n − 1] + eq[n]        (3.54)<br />
where eq[n] is the quantized version of e[n], mq[n] is the quantizer output, and Δ is the step size.<br />
The modulator consists of a comparator, a quantizer, and an accumulator.<br />
The output of the accumulator is<br />
mq[n] = Δ Σ from i = 1 to n of sgn(e[i]) = Σ from i = 1 to n of eq[i]        (3.55)
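Equations (3.52) to (3.55) can be sketched as a short encoder loop; the sine test input and Δ = 0.1 are arbitrary choices that happen to satisfy the slope-overload condition, so the staircase tracks the signal:

```python
import numpy as np

def delta_modulate(m, delta):
    """One-bit DM encoder implementing the error/quantize/accumulate loop."""
    mq, bits = 0.0, []
    for sample in m:
        e = sample - mq                    # error signal, Eq. (3.52)
        eq = delta if e >= 0 else -delta   # one-bit quantizer, Eq. (3.53)
        mq += eq                           # accumulator, Eq. (3.54)
        bits.append(1 if eq > 0 else 0)
    return np.array(bits)

t = np.arange(200) / 200
signal = np.sin(2 * np.pi * t)
bits = delta_modulate(signal, delta=0.1)

# Receiver: accumulate the same +/-delta steps to rebuild the staircase
staircase = np.cumsum(np.where(bits == 1, 0.1, -0.1))
assert np.max(np.abs(staircase - signal)) < 0.2   # no slope overload here
```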
Two types of quantization errors:<br />
Slope Overload Distortion and Granular Noise:<br />
Denote the quantization error by q[n]:<br />
mq[n] = m[n] + q[n]        (3.56)<br />
Recalling (3.52), we then have<br />
e[n] = m[n] − m[n − 1] − q[n − 1]        (3.57)<br />
Except for q[n − 1], the quantizer input is a first backward difference of the input signal.<br />
To avoid slope-overload distortion, we require<br />
Δ/Ts ≥ max |dm(t)/dt|        (3.58)<br />
On the other hand, granular noise occurs when the step size Δ is too large relative to the local<br />
slope of m(t).<br />
Delta-Sigma modulation (sigma-delta modulation):<br />
Delta-Sigma modulation, which has an integrator, can relieve the drawback of delta<br />
modulation (the differentiator).<br />
Beneficial effects of using the integrator:<br />
1. It pre-emphasizes the low-frequency content.<br />
2. It increases the correlation between adjacent samples<br />
(reducing the variance of the error signal at the quantizer input).<br />
3. It simplifies receiver design.<br />
Because the transmitter has an integrator, the receiver consists simply of a low-pass filter.<br />
(The differentiator in the conventional DM receiver is cancelled by the integrator.)
Linear Prediction (to reduce the sampling rate):<br />
Consider a finite-duration impulse response (FIR)<br />
discrete-time filter, which consists of three blocks:<br />
1. A set of p (p: prediction order) unit-delay elements (z−1)<br />
2. A set of multipliers with coefficients w1, w2, …, wp<br />
3. A set of adders<br />
The filter output (the linear prediction of the input) is<br />
x̂[n] = Σ from k = 1 to p of wk x[n − k]        (3.59)<br />
The prediction error is<br />
e[n] = x[n] − x̂[n]        (3.60)<br />
Let the index of performance be the mean-square error<br />
J = E[e²[n]]        (3.61)<br />
Find w1, w2, …, wp to minimize J. From (3.59), (3.60) and (3.61) we have<br />
J = E[x²[n]] − 2 Σk wk E[x[n] x[n − k]] + Σj Σk wj wk E[x[n − j] x[n − k]]<br />
Assume X(t) is a stationary process with zero mean (E[x[n]] = 0); then<br />
J = σX² − 2 Σk wk RX[k] + Σj Σk wj wk RX[k − j]
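Setting the derivatives of J to zero gives the Wiener-Hopf (normal) equations RX w0 = rX. Here is a sketch that estimates RX[k] from data and solves for w0; the AR(1) test signal with coefficient 0.9 is a hypothetical choice whose optimal order-2 predictor is known to be [0.9, 0]:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical AR(1) test signal: x[n] = 0.9 x[n-1] + w[n]
w = rng.standard_normal(50_000)
x = np.zeros_like(w)
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + w[n]

p = 2
# Sample autocorrelation estimates R_X[k], k = 0..p
R = np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(p + 1)])
# Normal equations: R_X w0 = r_X, with R_X[i, j] = R_X[|i - j|]
RX = np.array([[R[abs(i - j)] for j in range(p)] for i in range(p)])
w0 = np.linalg.solve(RX, R[1:])

assert abs(w0[0] - 0.9) < 0.05   # recovers the AR coefficient
assert abs(w0[1]) < 0.05         # second tap is essentially zero
```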
Linear adaptive prediction :<br />
The predictor is adaptive in the following sense:<br />
1. Compute<br />
2. Do iteration using the method of steepest descent<br />
Define the gradient vector components<br />
gk = ∂J/∂wk = −2 RX[k] + 2 Σj wj RX[k − j],  k = 1, 2, …, p        (3.68)<br />
Setting gk = 0 gives the optimum (Wiener) solution. In matrix form, with<br />
rX = [RX[1], RX[2], …, RX[p]]T<br />
and RX the p × p autocorrelation matrix<br />
RX = | RX[0]      RX[1]      …  RX[p − 1] |<br />
     | RX[1]      RX[0]      …  RX[p − 2] |<br />
     | …          …          …  …         |<br />
     | RX[p − 1]  RX[p − 2]  …  RX[0]     |<br />
the optimum weight vector is<br />
w0 = RX⁻¹ rX  (if RX⁻¹ exists)        (3.66)<br />
Substituting (3.66) into the expression for J yields the minimum mean-square error<br />
Jmin = σX² − rXT RX⁻¹ rX = σX² − rXT w0        (3.67)<br />
which is always less than σX².<br />
In the method of steepest descent, wk[n] denotes the value at iteration n, updated as<br />
wk[n + 1] = wk[n] − (1/2) μ gk[n],  k = 1, 2, …, p        (3.69)<br />
where μ is a step-size parameter and the factor 1/2 is for convenience of presentation.<br />
To simplify the computation we use x[n] x[n − k] in place of E[x[n] x[n − k]] (i.e. ignore the<br />
expectation):<br />
ĝk[n] = −2 x[n] x[n − k] + 2 Σj ŵj[n] x[n − j] x[n − k]        (3.70)<br />
      = −2 x[n − k] ( x[n] − Σj ŵj[n] x[n − j] )<br />
      = −2 x[n − k] e[n]        (3.71)<br />
where, by (3.59) and (3.60),<br />
e[n] = x[n] − Σj ŵj[n] x[n − j]        (3.72)<br />
so the weight update becomes<br />
ŵk[n + 1] = ŵk[n] + μ x[n − k] e[n],  k = 1, 2, …, p        (3.73)<br />
The above equations are called the least-mean-square (LMS) algorithm.
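The LMS recursion (3.73) can be sketched in a few lines; the AR(1) test signal, predictor order p = 2, and step size μ = 0.05 are arbitrary choices made for the illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.standard_normal(20_000)
x = np.zeros_like(w)
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + 0.1 * w[n]   # hypothetical correlated test signal

p, mu = 2, 0.05                          # predictor order and step size
wk = np.zeros(p)                         # start from arbitrary (zero) weights
for n in range(p, len(x)):
    past = x[n - p:n][::-1]              # x[n-1], x[n-2]
    e = x[n] - wk @ past                 # prediction error, Eq. (3.72)
    wk += mu * e * past                  # LMS update, Eq. (3.73)

assert abs(wk[0] - 0.9) < 0.1            # converges near the true AR coefficient
```

Unlike the Wiener solution, no autocorrelations are computed in advance: the weights adapt sample by sample.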
Figure 3.27<br />
Block diagram illustrating the linear adaptive prediction process<br />
Differential Pulse-Code Modulation (DPCM):<br />
Usually PCM uses a sampling rate higher than the Nyquist rate, so the encoded signal contains
redundant information. DPCM can efficiently remove this redundancy.
Figure 3.28 DPCM system. (a) Transmitter. (b) Receiver.<br />
Input signal to the quantizer is defined by:
e[n] = m[n] − m̂[n] (3.74)

where m̂[n] is a prediction of the input sample m[n].

The quantizer output is

e_q[n] = e[n] + q[n] (3.75)

where q[n] is the quantization error.

The prediction filter input is

m_q[n] = m̂[n] + e_q[n] = m̂[n] + e[n] + q[n] (3.77)

m_q[n] = m[n] + q[n] (3.78)

Processing Gain:

The output signal-to-noise ratio of the DPCM system is

(SNR)_O = σ_M² / σ_Q² (3.79)

where σ_M² and σ_Q² are the variances of m[n] (with E[m[n]] = 0) and of the quantization error q[n], respectively. Using the prediction error of (3.74), (SNR)_O may be rewritten as

(SNR)_O = (σ_M² / σ_E²)(σ_E² / σ_Q²) = G_p (SNR)_Q (3.80)

where σ_E² is the variance of the prediction error, and the signal-to-quantization noise ratio is

(SNR)_Q = σ_E² / σ_Q² (3.81)

The processing gain is

G_p = σ_M² / σ_E² (3.82)

Design the prediction filter to maximize G_p, i.e., to minimize σ_E².
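The processing gain G_p = σ_M²/σ_E² of (3.82) can be illustrated numerically. The AR(1) source model, its coefficient a = 0.9, and the single-tap predictor are assumptions for the sketch; for such a source G_p approaches 1/(1 − a²).

```python
import numpy as np

# Illustrative sketch of the DPCM processing gain G_p = var(m)/var(e):
# for a correlated source, predicting each sample from the previous one
# leaves a prediction error of much smaller variance.
rng = np.random.default_rng(1)
a = 0.9                                   # AR(1) correlation coefficient (assumed)
m = np.zeros(10000)
for n in range(1, len(m)):
    m[n] = a * m[n - 1] + rng.normal(scale=1.0)

h1 = a                                    # optimal first-order predictor tap
e = m[1:] - h1 * m[:-1]                   # prediction error e[n] = m[n] - m_hat[n]

Gp = np.var(m) / np.var(e)                # processing gain (3.82)
print(round(Gp, 1))
```

With a = 0.9 the measured gain should be close to 1/(1 − 0.81) ≈ 5.3, i.e., roughly 7 dB of SNR improvement over direct PCM of the same source.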
Adaptive Differential Pulse-Code Modulation (ADPCM):<br />
When coding speech at low bit rates, we have two aims in mind:
1. Remove redundancies from the speech signal as far as possible.<br />
2. Assign the available bits in a perceptually efficient manner.<br />
Figure 3.29 Adaptive quantization with backward estimation (AQB).<br />
Figure 3.30 Adaptive prediction with backward estimation (APB).
UNIT 2<br />
Digital Modulation Techniques<br />
‣ Introduction, ASK, ASK Modulator, Coherent ASK detector, non-Coherent ASK<br />
detector,<br />
‣ Bandwidth and frequency spectrum of FSK,
‣ Non-Coherent FSK detector,<br />
‣ Coherent FSK detector,<br />
‣ FSK Detection using PLL,<br />
‣ BPSK, Coherent PSK detection, QPSK, Differential PSK
ASK, OOK, MASK:<br />
• The amplitude (or height) of the sine wave varies to transmit the ones and zeros
• One amplitude encodes a 0 while another amplitude encodes a 1 (a form of amplitude<br />
modulation)<br />
Binary amplitude shift keying, Bandwidth:<br />
• d ≥ 0 is a factor related to the condition of the line
B = (1+d) x S = (1+d) x N x 1/r
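The bandwidth relation above can be evaluated directly. The helper name and the sample rates are illustrative only.

```python
# Bandwidth of binary ASK, B = (1 + d) * S, where S is the signal rate
# (baud) and d (d >= 0) depends on the condition of the line.
def ask_bandwidth(S, d=0.0):
    return (1 + d) * S

# A 2000-baud binary ASK signal on an ideal line (d = 0):
print(ask_bandwidth(2000, d=0.0))   # 2000.0 Hz? no modulation overhead
# The same signal rate on a poor line (d = 1) doubles the bandwidth:
print(ask_bandwidth(2000, d=1.0))
```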
Implementation of binary ASK:
Frequency Shift Keying:<br />
• One frequency encodes a 0 while another frequency encodes a 1 (a form of frequency<br />
modulation)<br />
s(t) = A cos(2πf₁t),  binary 1
       A cos(2πf₂t),  binary 0

FSK Bandwidth:
• Limiting factor: Physical capabilities of the carrier<br />
• Not susceptible to noise as much as ASK
• Applications<br />
– On voice-grade lines, used up to 1200bps<br />
– Used for high-frequency (3 to 30 MHz) radio transmission<br />
– used at higher frequencies on LANs that use coaxial cable<br />
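A binary FSK waveform of the form above can be generated by switching the carrier frequency per bit. The frequencies, bit rate, and sampling rate below are illustrative choices, not taken from the notes.

```python
import numpy as np

# Sketch: generate a binary FSK waveform, s(t) = A*cos(2*pi*f1*t) for a 1
# and s(t) = A*cos(2*pi*f2*t) for a 0 (frequencies and rates assumed).
def fsk_waveform(bits, f1=2000.0, f2=1000.0, A=1.0, bit_rate=100.0, fs=48000.0):
    samples_per_bit = int(fs / bit_rate)
    t = np.arange(samples_per_bit) / fs
    chunks = [A * np.cos(2 * np.pi * (f1 if b else f2) * t) for b in bits]
    return np.concatenate(chunks)

s = fsk_waveform([1, 0, 1])
print(len(s))   # 3 bits * 480 samples per bit = 1440
```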
DBPSK:<br />
• Differential BPSK<br />
– 0 = same phase as last signal element<br />
– 1 = 180º shift from last signal element
s(t) = A cos(2πf_c t + π/4),   for dibit 11
       A cos(2πf_c t + 3π/4),  for dibit 01
       A cos(2πf_c t − 3π/4),  for dibit 00
       A cos(2πf_c t − π/4),   for dibit 10
Concept of a constellation :
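The constellation implied by a QPSK phase table of this form (dibits 11, 01, 00, 10 mapped to phases +π/4, +3π/4, −3π/4, −π/4) can be computed as complex points; the amplitude A = 1 is an assumption for the sketch.

```python
import numpy as np

# Each constellation point is A*exp(j*theta) = (A*cos(theta), A*sin(theta)).
phase = {'11': np.pi / 4, '01': 3 * np.pi / 4,
         '00': -3 * np.pi / 4, '10': -np.pi / 4}

def qpsk_points(A=1.0):
    return {dibit: A * np.exp(1j * th) for dibit, th in phase.items()}

pts = qpsk_points()
for dibit, p in sorted(pts.items()):
    print(dibit, round(p.real, 3), round(p.imag, 3))
```

All four points lie on a circle of radius A (constant envelope), one in each quadrant; only the phase carries the information.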
M-ary PSK:<br />
Using multiple phase angles, with each angle having more than one amplitude, multiple
signal elements can be achieved
D = R / L = R / log₂M
– D = modulation rate, baud<br />
– R = data rate, bps<br />
– M = number of different signal elements = 2^L
– L = number of bits per signal element<br />
QAM:<br />
– As an example of QAM, 12 different phases are combined with two different<br />
amplitudes<br />
– Since only 4 phase angles have 2 different amplitudes, there are a total of 16<br />
combinations<br />
– With 16 signal combinations, each baud equals 4 bits of information (2 ^ 4 =<br />
16)<br />
– Combine ASK and PSK such that each signal corresponds to multiple bits<br />
– More phases than amplitudes<br />
– Minimum bandwidth requirement same as ASK or PSK
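The 16-combination idea can be sketched with a rectangular 16-QAM constellation (a common alternative layout to the 12-phase/2-amplitude scheme described above); the in-phase/quadrature levels chosen here are an assumption for the sketch.

```python
import numpy as np

# A rectangular 16-QAM constellation: 4 in-phase levels x 4 quadrature
# levels = 16 points, so each signal element carries log2(16) = 4 bits.
levels = [-3, -1, 1, 3]
constellation = np.array([complex(i, q) for i in levels for q in levels])
bits_per_symbol = int(np.log2(len(constellation)))
print(len(constellation), bits_per_symbol)   # 16 points, 4 bits per baud
```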
QAM and QPR:<br />
• QAM is a combination of ASK and PSK<br />
– Two different signals sent simultaneously on the same carrier frequency<br />
– M=4, 16, 32, 64, 128, 256<br />
• Quadrature Partial Response (QPR)<br />
– 3 levels (+1, 0, -1), so 9QPR, 49QPR
Offset quadrature phase-shift keying (OQPSK):<br />
• QPSK can have 180 degree jump, amplitude fluctuation<br />
• By offsetting the timing of the odd and even bits by one bit-period, or half a symbol-period, the in-phase and quadrature components will never change at the same time.
Generation and Detection of Coherent BPSK:<br />
Figure 6.26 Block diagrams for (a) binary FSK transmitter and (b) coherent binary FSK<br />
receiver.
Fig. 6.28<br />
Figure 6.30 (a) Input binary sequence. (b) Waveform of scaled time
function s1φ1(t). (c) Waveform of scaled time function s2φ2(t). (d)
Waveform of the MSK signal s(t) obtained by adding s1φ1(t) and
s2φ2(t) on a bit-by-bit basis.
Figure 6.29 Signal-space diagram for MSK system.<br />
Generation and Detection of MSK Signals:
Figure 6.31 Block diagrams for (a) MSK transmitter and (b) coherent MSK receiver.
UNIT 3<br />
Base Band Transmission And Optimal reception of<br />
Digital Signal<br />
‣ Pulse shaping for optimum transmission,<br />
‣ A Base band signal receiver,<br />
‣ Different pulses and power spectrum densities,<br />
‣ Probability of error, optimum receiver,<br />
‣ Optimum coherent reception,
‣ Signal space representation and probability of error,<br />
‣ Eye diagram,<br />
‣ cross talk.<br />
BASEBAND FORMATTING TECHNIQUES<br />
CORRELATIVE LEVEL CODING:
• Correlative-level coding (partial response signaling)<br />
– adding ISI to the transmitted signal in a controlled manner<br />
• Since ISI introduced into the transmitted signal is known, its effect can be interpreted at<br />
the receiver<br />
• A practical method of achieving the theoretical maximum signaling rate of 2W symbol<br />
per second in a bandwidth of W Hertz<br />
• Using realizable and perturbation-tolerant filters<br />
Duo-binary Signaling :<br />
Duo : doubling of the transmission capacity of a straight binary system<br />
• Binary input sequence {bk} : uncorrelated binary symbol 1, 0<br />
a_k = +1 if symbol b_k is 1
      −1 if symbol b_k is 0

The duobinary coder output is

c_k = a_k + a_{k−1}

The overall transfer function is

H_I(f) = H_Nyquist(f)[1 + exp(−j2πfT_b)]
       = H_Nyquist(f)[exp(jπfT_b) + exp(−jπfT_b)] exp(−jπfT_b)
       = 2 H_Nyquist(f) cos(πfT_b) exp(−jπfT_b)

where

H_Nyquist(f) = 1, |f| ≤ 1/(2T_b)
               0, otherwise

so that

H_I(f) = 2 cos(πfT_b) exp(−jπfT_b), |f| ≤ 1/(2T_b)
         0, otherwise

The corresponding impulse response is

h_I(t) = sin(πt/T_b)/(πt/T_b) + sin(π(t − T_b)/T_b)/(π(t − T_b)/T_b)
       = T_b² sin(πt/T_b) / (πt(T_b − t))
• The tails of h_I(t) decay as 1/|t|², a faster rate of decay than the 1/|t| encountered
in the ideal Nyquist channel.
• Let â_k represent the estimate of the original pulse a_k as conceived by the receiver at
time t = kT_b
• Decision feedback : technique of using a stored estimate of the previous symbol
• Propagation : drawback; once errors are made, they tend to propagate through the output
• Precoding : practical means of avoiding the error-propagation phenomenon before the
duobinary coding
d_k = b_k ⊕ d_{k−1}
    = symbol 1 if either symbol b_k or d_{k−1} (but not both) is 1
      symbol 0 otherwise
• {dk} is applied to a pulse-amplitude modulator, producing a corresponding two-level<br />
sequence of short pulse {ak}, where +1 or –1 as before<br />
c_k = a_k + a_{k−1} = 0,  if data symbol b_k is 1
                      ±2, if data symbol b_k is 0

The decision rule at the receiver is therefore:
If |c_k| < 1, say symbol b_k is 1
If |c_k| > 1, say symbol b_k is 0
• |c_k| = 1 : random guess in favor of symbol 1 or 0
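A precoded duobinary chain (d_k = b_k XOR d_{k-1}, level shift to a_k, c_k = a_k + a_{k-1}, threshold detection) can be sketched end to end; the initial precoder state d_0 = 0 is an assumption.

```python
# Sketch of precoded duobinary signaling: the detector rule
# |c_k| < 1 -> symbol 1, |c_k| > 1 -> symbol 0 recovers b_k directly,
# with no error propagation, because of the precoder.
def duobinary(bits, d0=0):
    d = d0
    prev_a = 1 if d0 else -1
    out = []
    for b in bits:
        d = b ^ d                    # precoder: d_k = b_k XOR d_{k-1}
        a = 1 if d else -1           # level shift to +/-1
        out.append(a + prev_a)       # duobinary coder: c_k = a_k + a_{k-1}
        prev_a = a
    return out

def detect(c):
    return [1 if abs(ck) < 1 else 0 for ck in c]

bits = [0, 0, 1, 0, 1, 1, 0]
print(detect(duobinary(bits)) == bits)   # True: input recovered
```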
Modified Duo-binary Signaling :<br />
• Nonzero at the origin : undesirable<br />
• Subtracting amplitude-modulated pulses spaced 2Tb second<br />
c_k = a_k − a_{k−2}

H_IV(f) = H_Nyquist(f)[1 − exp(−j4πfT_b)]
        = 2j H_Nyquist(f) sin(2πfT_b) exp(−j2πfT_b)

H_IV(f) = 2j sin(2πfT_b) exp(−j2πfT_b), |f| ≤ 1/(2T_b)
          0, elsewhere

h_IV(t) = sin(πt/T_b)/(πt/T_b) − sin(π(t − 2T_b)/T_b)/(π(t − 2T_b)/T_b)
• precoding<br />
d_k = b_k ⊕ d_{k−2}
    = symbol 1 if either symbol b_k or d_{k−2} (but not both) is 1
      symbol 0 otherwise
If |c_k| > 1, say symbol b_k is 1
If |c_k| < 1, say symbol b_k is 0
• |c_k| = 1 : random guess in favor of symbol 1 or 0
Generalized form of correlative-level coding:
h(t) = Σ_{n=0}^{N−1} w_n sinc(t/T_b − n)
Baseband M-ary PAM Transmission:
• Produce one of M possible amplitude level<br />
• T : symbol duration<br />
• 1/T : signaling rate, in symbols per second (bauds)
– Corresponding bit rate is (log₂M)/T bits per second
• Tb : bit duration of the equivalent binary PAM, with T = Tb log₂M
• To realize the same average probability of symbol error, transmitted power must be<br />
increased by a factor of M2/log2M compared to binary PAM<br />
Tapped-delay-line equalization :<br />
• Approach to high speed transmission<br />
– Combination of two basic signal-processing operation<br />
– Discrete PAM<br />
– Linear modulation scheme<br />
• The number of detectable amplitude levels is often limited by ISI<br />
• Residual distortion for ISI : limiting factor on data rate of the system
• Equalization : to compensate for the residual distortion<br />
• Equalizer : filter<br />
– A device well-suited for the design of a linear equalizer is the tapped-delay-line
filter
– Total number of taps is chosen to be (2N+1)<br />
h(t) = Σ_{k=−N}^{N} w_k δ(t − kT)

• p(t) is equal to the convolution of c(t) and h(t)

p(t) = c(t) ⊛ h(t) = c(t) ⊛ Σ_{k=−N}^{N} w_k δ(t − kT) = Σ_{k=−N}^{N} w_k c(t − kT)

• Sampling at t = nT gives the discrete convolution sum

p(nT) = Σ_{k=−N}^{N} w_k c((n − k)T)
• Nyquist criterion for distortionless transmission, with T used in place of Tb,<br />
normalized condition p(0)=1<br />
p(nT) = 1, n = 0
        0, n = ±1, ±2, ..., ±N
• Zero-forcing equalizer<br />
– Optimum in the sense that it minimizes the peak distortion(ISI) – worst case<br />
– Simple implementation<br />
– The longer the equalizer, the more closely it approximates the ideal condition for
distortionless transmission
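The zero-forcing tap computation can be sketched as a small linear system: force p(0) = 1 and p(nT) = 0 for n = ±1, ..., ±N using the sampled channel pulse. The channel values used below are hypothetical.

```python
import numpy as np

# Zero-forcing equalizer sketch: solve for the (2N+1) taps w_k so that
# p(nT) = sum_k w_k c((n-k)T) equals 1 at n = 0 and 0 at n = +/-1..+/-N.
def zero_forcing_taps(c, N):
    # c: sampled channel pulse, dict n -> c(nT) (zero outside its support)
    A = np.array([[c.get(n - k, 0.0) for k in range(-N, N + 1)]
                  for n in range(-N, N + 1)])
    b = np.zeros(2 * N + 1)
    b[N] = 1.0                       # force p(0) = 1
    return np.linalg.solve(A, b)

# Hypothetical channel with a little ISI on either side of the main sample:
c = {-1: 0.1, 0: 1.0, 1: 0.2}
w = zero_forcing_taps(c, N=1)
p = [sum(w[k + 1] * c.get(n - k, 0.0) for k in range(-1, 2)) for n in (-1, 0, 1)]
print([round(v, 6) for v in p])
```

The ISI is nulled only at the 2N + 1 sampling instants covered by the taps; residual ISI remains outside that window, which is why longer equalizers perform better.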
Adaptive Equalizer :<br />
• The channel is usually time varying<br />
– Difference in the transmission characteristics of the individual links that may<br />
be switched together<br />
– Differences in the number of links in a connection<br />
• Adaptive equalization<br />
– Adjusts itself by operating on the input signal
• Training sequence<br />
– Precall equalization<br />
– Channel changes little during an average data call<br />
• Prechannel equalization<br />
– Require the feedback channel<br />
• Postchannel equalization<br />
• synchronous<br />
– Tap spacing is the same as the symbol duration of transmitted signal<br />
Least-Mean-Square Algorithm:<br />
• Adaptation may be achieved<br />
– By observing the error b/w desired pulse shape and actual pulse shape<br />
– Using this error to estimate the direction in which the tap-weight should be<br />
changed<br />
• Mean-square error criterion<br />
– More general in application<br />
– Less sensitive to timing perturbations<br />
• a_n : desired response, e_n : error signal, y_n : actual response
• Mean-square error is defined by the cost function

J = E[e_n²]
• Ensemble-averaged cross-correlation
e_n = a_n − y_n

∂J/∂w_k = 2E[e_n ∂e_n/∂w_k] = −2E[e_n x_{n−k}] = −2R_ex(k)

where the cross-correlation is

R_ex(k) = E[e_n x_{n−k}]

• Optimality condition for minimum mean-square error:

∂J/∂w_k = 0 for k = 0, ±1, ..., ±N

• Mean-square error is a second-order (parabolic) function of the tap weights: a multidimensional bowl-shaped surface
• The adaptive process makes successive adjustments of the tap weights, seeking the bottom of the bowl (minimum value)
• Steepest descent algorithm
– The successive adjustments to the tap weights are made in the direction opposite to the gradient vector
– Recursive formula (μ : step-size parameter):

w_k(n+1) = w_k(n) − (μ/2) ∂J/∂w_k = w_k(n) + μ R_ex(k), k = 0, ±1, ..., ±N

• Least-Mean-Square Algorithm
– The steepest-descent algorithm is not usable in an unknown environment
– Approximation to the steepest descent algorithm using the instantaneous estimate

R̂_ex(k) = e_n x_{n−k}
ŵ_k(n+1) = ŵ_k(n) + μ e_n x_{n−k}
• LMS is a feedback system<br />
• In the case of small , roughly similar to steepest descent algorithm
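The LMS tap update w_k(n+1) = w_k(n) + mu*e_n*x_{n-k} can be sketched on a simple training scenario. The channel model (one ISI tap of 0.3), the equalizer length N = 2, and mu = 0.01 are assumptions for the sketch.

```python
import numpy as np

# LMS adaptive equalizer sketch: known training symbols a_n, received
# samples x_n from a mildly dispersive channel, (2N+1)-tap equalizer.
rng = np.random.default_rng(2)
a = rng.choice([-1.0, 1.0], size=5000)        # training sequence
x = a + 0.3 * np.r_[0.0, a[:-1]]              # channel adds ISI (assumed model)

N = 2
w = np.zeros(2 * N + 1)                        # taps w_{-N} .. w_{+N}
mu = 0.01
err = np.zeros(len(a))
for n in range(N, len(a) - N):
    xvec = x[n - N:n + N + 1][::-1]            # [x_{n+N}, ..., x_{n-N}]
    y = w @ xvec                               # equalizer output y_n
    e = a[n] - y                               # error e_n = a_n - y_n
    err[n] = e
    w += mu * e * xvec                         # LMS tap update

print(np.mean(err[:500] ** 2) > np.mean(err[-500:] ** 2))  # error decreases
```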
Operation of the equalizer:<br />
• Training mode
– A known sequence is transmitted and a synchronized version of it is generated in the
receiver
– Use the training sequence, so called pseudo-noise(PN) sequence<br />
• Decision-directed mode<br />
– After training sequence is completed<br />
– Track relatively slow variation in channel characteristic<br />
• Large μ : fast tracking, but larger excess mean-square error
Implementation Approaches:<br />
• Analog<br />
– CCD, Tap-weight is stored in digital memory, analog sample and<br />
multiplication<br />
– Symbol rate is too high<br />
• Digital<br />
– Sample is quantized and stored in shift register<br />
– Tap weight is stored in shift register, digital multiplication<br />
• Programmable digital<br />
– Microprocessor
– Flexibility<br />
– Same H/W may be time shared<br />
Decision-Feed back equalization:<br />
• Baseband channel impulse response : {hn}, input : {xn}<br />
y_n = Σ_k h_k x_{n−k} = h_0 x_n + Σ_{k<0} h_k x_{n−k} + Σ_{k>0} h_k x_{n−k}

(the first sum contains the precursors, the second the postcursors)
• Using data decisions made on the basis of precursor to take care of the postcursors<br />
– The decision would obviously have to be correct<br />
• Feedforward section : tapped-delay-line equalizer<br />
• Feedback section : the decision is made on previously detected symbols of the input<br />
sequence<br />
– Nonlinear feedback loop by decision device<br />
c_n = [ w_n^(1) ]      v_n = [ x_n ]
      [ w_n^(2) ]            [ ã_n ]

e_n = a_n − c_n^T v_n

w_{n+1}^(1) = w_n^(1) + μ₁ e_n x_n
w_{n+1}^(2) = w_n^(2) + μ₂ e_n ã_n

where ã_n denotes the previously detected symbols.
Eye Pattern:<br />
• Experimental tool for such an evaluation in an insightful manner<br />
– Synchronized superposition of all the signal of interest viewed within a<br />
particular signaling interval<br />
• Eye opening : interior region of the eye pattern<br />
• In the case of an M-ary system, the eye pattern contains (M−1) eye openings, where M
is the number of discrete amplitude levels
Interpretation of Eye Diagram:
Information Theory<br />
‣ Information and entropy,<br />
‣ Conditional entropy and redundancy,<br />
‣ Shannon Fano coding,<br />
‣ Mutual information,
‣ Information loss due to noise,<br />
‣ Source coding - Huffman code, variable-length coding
‣ Source coding to increase average information per bit,<br />
‣ Lossy source Coding.<br />
INFORMATION THEORY AND CODING TECHNIQUES<br />
Information sources<br />
Definition:
The set of source symbols is called the source alphabet, and the elements of the set are<br />
called the symbols or letters.<br />
The number of possible answers 'r' should be linked to "information."
"Information" should be additive in some sense.
We define the following measure of information:
I(U) ≜ log_b r
where 'r' is the number of all possible outcomes of the random message U.
Using this definition we can confirm that it has the wanted property of additivity: for two independent messages with r₁ and r₂ possible outcomes there are r₁r₂ combined outcomes, and log_b(r₁r₂) = log_b r₁ + log_b r₂.
The base 'b' of the logarithm is only a change of units; it does not change the amount of
information described.
Classification of information sources<br />
1. Discrete memoryless sources.
2. Sources with memory.
A discrete memoryless source (DMS) can be characterized by "the list of the symbols, the
probability assignment to these symbols, and the specification of the rate of generating these
symbols by the source".
1. Information should be proportional to the uncertainty of an outcome.
2. Information contained in independent outcomes should add.
Information content of a symbol:<br />
Let us consider a discrete memoryless source (DMS) denoted by U and having the alphabet
{u₁, u₂, u₃, ..., u_m}. The information content of the symbol u_i, denoted by I(u_i), is defined
as

I(u_i) = log_b (1 / P(u_i)) = −log_b P(u_i)

where P(u_i) is the probability of occurrence of symbol u_i.
Units of I(u_i):
For two important and one unimportant special cases of b it has been agreed to use the<br />
following names for these units:<br />
b =2(log2): bit,<br />
b = e (ln): nat (natural logarithm),<br />
b =10(log10): Hartley.<br />
The conversion between these units is given by
log₂ a = (ln a)/(ln 2) = (log₁₀ a)/(log₁₀ 2)
Definition:<br />
Uncertainty or Entropy (i.e., average information)
The information content of individual symbols fluctuates widely because of the randomness
involved in the selection of symbols, so we average it over the source.
The uncertainty or entropy of a discrete random variable (RV) U is defined as

H(U) = E[I(U)] = −Σ_{u ∈ supp(P_U)} P_U(u) log_b P_U(u)

where P_U(·) denotes the probability mass function (PMF) of the RV U, and where the
support of P_U is defined as the set of u with P_U(u) > 0.
We will usually neglect to mention "support" when we sum over
P_U(u) · log_b P_U(u), i.e., we implicitly assume that we exclude all u
with zero probability P_U(u) = 0.
Entropy of a binary source
It may be noted that for a binary source U which generates independent symbols 0 and 1 with
equal probability, the source entropy H(U) is
H(U) = −(1/2) log₂(1/2) − (1/2) log₂(1/2) = 1 bit/symbol
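The entropy definition can be checked numerically; the function name is illustrative.

```python
import math

# H(U) = -sum p*log_b(p) over the support of P_U; symbols with zero
# probability are skipped, matching the support convention above.
def entropy(probs, b=2.0):
    return sum(-p * math.log(p, b) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # binary equiprobable source: 1.0 bit/symbol
print(entropy([1.0]))         # deterministic source: 0.0
```

The uniform distribution over r symbols attains the upper bound log₂ r, e.g., entropy([0.25]*4) gives 2 bits/symbol.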
Bounds on H(U)
If U has r possible values, then 0 ≤ H(U) ≤ log r,
where
H(U) = 0 if, and only if, P_U(u) = 1 for some u,
H(U) = log r if, and only if, P_U(u) = 1/r ∀ u.
Hence, H(U) ≥ 0. Equality can only be achieved if −P_U(u) log₂ P_U(u) = 0
for all u ∈ supp(P_U), i.e., P_U(u) = 1 for all u ∈ supp(P_U).
To derive the upper bound we use a trick that is quite common in
information theory: we take the difference and try to show that it must be non-positive.
Equality can only be achieved if<br />
1. In the IT inequality ξ = 1, i.e., if 1/(r · P_U(u)) = 1, which implies P_U(u) = 1/r, for all u;
2. |supp(PU)| = r.
Note that if Condition1 is satisfied, Condition 2 is also satisfied.<br />
Conditional Entropy<br />
Similar to probability of random vectors, there is nothing really new about conditional<br />
probabilities given that a particular event Y = y has occurred.<br />
The conditional entropy or conditional uncertainty of the RV X given the event Y = y is<br />
defined as<br />
Note that the definition is identical to before, except that everything is conditioned on
the event Y = y.
Note that the conditional entropy given the event Y = y is a function of y. Since Y is also
a RV, we can now average over all possible events Y = y according to the probabilities of
each event. This will lead to the averaged conditional entropy H(X|Y).
• Forward Error Correction (FEC)<br />
– Coding designed so that errors can be corrected at the receiver<br />
– Appropriate for delay-sensitive and one-way transmission (e.g., broadcast TV)
of data<br />
– Two main types, namely block codes and convolutional codes. We will only<br />
look at block codes
UNIT 4<br />
Linear Block Codes<br />
‣ Matrix description of linear block codes,<br />
‣ Error detection and error correction capabilities of linear block codes<br />
‣ Cyclic codes: algebraic structure, encoding, syndrome calculation, decoding<br />
Block Codes:<br />
• We will consider only binary data<br />
• Data is grouped into blocks of length k bits (dataword)<br />
• Each dataword is coded into blocks of length n bits (codeword), where in general n>k<br />
• This is known as an (n,k) block code<br />
• A vector notation is used for the datawords and codewords,<br />
– Dataword d = (d1 d2….dk)<br />
– Codeword c = (c1 c2……..cn)<br />
• The redundancy introduced by the code is quantified by the code rate,<br />
– Code rate = k/n<br />
– i.e., the higher the redundancy, the lower the code rate<br />
Hamming Distance:<br />
• Error control capability is determined by the Hamming distance<br />
• The Hamming distance between two codewords is equal to the number of differences<br />
between them, e.g.,<br />
10011011<br />
11010010 have a Hamming distance = 3<br />
• Alternatively, can compute by adding codewords (mod 2)<br />
=01001001 (now count up the ones)<br />
• The maximum number of detectable errors is

d_min − 1

• The maximum number of correctable errors is given by

t = ⌊(d_min − 1)/2⌋

where d_min is the minimum Hamming distance between 2 codewords and ⌊x⌋
means the largest integer less than or equal to x
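The distance computation and the two limits can be checked on the codeword pair used above; the function name is illustrative.

```python
# Hamming distance by counting differing positions, plus the derived
# detection limit d_min - 1 and correction limit floor((d_min - 1)/2).
def hamming_distance(c1, c2):
    return sum(b1 != b2 for b1, b2 in zip(c1, c2))

d = hamming_distance('10011011', '11010010')
print(d)                       # 3, as in the example above
print(d - 1, (d - 1) // 2)     # detectable = 2, correctable = 1
```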
Linear Block Codes:<br />
• As seen from the second Parity Code example, it is possible to use a table to hold all<br />
the codewords for a code and to look-up the appropriate codeword based on the<br />
supplied dataword<br />
• Alternatively, it is possible to create codewords by addition of other codewords. This
has the advantage that there is no longer the need to hold every possible
codeword in the table.
• If there are k data bits, all that is required is to hold k linearly independent codewords,<br />
i.e., a set of k codewords none of which can be produced by linear combinations of 2<br />
or more codewords in the set.<br />
• The easiest way to find k linearly independent codewords is to choose those which<br />
have ‘1’ in just one of the first k positions and ‘0’ in the other k-1 of the first k<br />
positions.<br />
• For example for a (7,4) code, only four codewords are required, e.g.,<br />
1000011
0100101
0010110
0001111
• So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in<br />
the list are added together, giving 1011010<br />
• This process will now be described in more detail<br />
• An (n,k) block code has code vectors<br />
d=(d1 d2….dk) and<br />
c=(c1 c2……..cn)<br />
• The block coding process can be written as c=dG<br />
where G is the Generator Matrix<br />
    [ a11 a12 ... a1n ]   [ a1 ]
G = [ a21 a22 ... a2n ] = [ a2 ]
    [  .   .  ...  .  ]   [ .  ]
    [ ak1 ak2 ... akn ]   [ ak ]
• Thus,

c = Σ_{i=1}^{k} d_i a_i
• ai must be linearly independent, i.e.,<br />
Since codewords are given by summations of the ai vectors, then to avoid 2 datawords<br />
having the same codeword the ai vectors must be linearly independent.<br />
• Sum (mod 2) of any 2 codewords is also a codeword, i.e.,<br />
Since for datawords d1 and d2 we have;<br />
d3 = d1 + d2

So,

c3 = Σ_{i=1}^{k} d_{3i} a_i = Σ_{i=1}^{k} (d_{1i} + d_{2i}) a_i = Σ_{i=1}^{k} d_{1i} a_i + Σ_{i=1}^{k} d_{2i} a_i = c1 + c2
Error Correcting Power of LBC:<br />
• The Hamming distance of a linear block code (LBC) is simply the minimum<br />
Hamming weight (number of 1’s or equivalently the distance from the all 0<br />
codeword) of the non-zero codewords<br />
• Note d(c1,c2) = w(c1+ c2) as shown previously<br />
• For an LBC, c1+ c2=c3<br />
• So min (d(c1,c2)) = min (w(c1+ c2)) = min (w(c3))<br />
• Therefore to find min Hamming distance just need to search among the 2k codewords<br />
to find the min Hamming weight – far simpler than doing a pair wise check for all<br />
possible codewords.<br />
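The minimum-weight search over the 2^k codewords can be sketched directly. The four (7,4) basis codewords below are an assumption, chosen so that dataword 1011 maps to 1011010 as in the example in this section.

```python
from itertools import product

# Enumerate all 2^k codewords as mod-2 sums of the k basis codewords and
# take d_min as the minimum nonzero Hamming weight.
basis = ['1000011', '0100101', '0010110', '0001111']   # assumed (7,4) basis

def codeword(data_bits):
    c = [0] * 7
    for d, row in zip(data_bits, basis):
        if d:
            c = [x ^ int(y) for x, y in zip(c, row)]
    return c

weights = [sum(codeword(d)) for d in product([0, 1], repeat=4) if any(d)]
print(min(weights))   # d_min of this code
```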
Linear Block Codes – example 1:<br />
• For example a (4,2) code, suppose;<br />
G = [ 1 0 1 1 ]
    [ 0 1 0 1 ]
a1 = [1011]<br />
a2 = [0101]<br />
• For d = [1 1], then;<br />
c = [1⊕0  0⊕1  1⊕0  1⊕1] = [1 1 1 0]
Linear Block Codes – example 2:<br />
• A (6,5) code with

    [ 1 0 0 0 0 1 ]
    [ 0 1 0 0 0 1 ]
G = [ 0 0 1 0 0 1 ]
    [ 0 0 0 1 0 1 ]
    [ 0 0 0 0 1 1 ]

is an even single-parity-check code
Systematic Codes:<br />
• For a systematic block code the dataword appears unaltered in the codeword – usually<br />
at the start<br />
• The generator matrix has the structure,<br />
    [ 1 0 .. 0  p11 p12 .. p1R ]
G = [ 0 1 .. 0  p21 p22 .. p2R ] = [ I | P ]
    [ .. .. ..   ..  ..  ..  .. ]
    [ 0 0 .. 1  pk1 pk2 .. pkR ]

R = n − k
• P is often referred to as the parity bits
I is the k*k identity matrix; it ensures the dataword appears at the beginning of the codeword. P is a k*R matrix.
Decoding Linear Codes:<br />
• One possibility is a ROM look-up table<br />
• In this case received codeword is used as an address<br />
• Example – Even single parity check code;<br />
Address Data<br />
000000 0<br />
000001 1<br />
000010 1<br />
000011 0<br />
……… .<br />
• Data output is the error flag, i.e., 0 – codeword OK, 1 – codeword contains an error
• If no error, data word is first k bits of codeword<br />
• For an error correcting code the ROM can also store data words<br />
• Another possibility is algebraic decoding, i.e., the error flag is computed from the<br />
received codeword (as in the case of simple parity codes)<br />
• How can this method be extended to more complex error detection and correction<br />
codes?<br />
Parity Check Matrix:<br />
• A linear block code is a linear subspace S sub of all length n vectors (Space S)<br />
• Consider the subset S null of all length n vectors in space S that are orthogonal to all<br />
length n vectors in S sub<br />
• It can be shown that the dimensionality of S null is n-k, where n is the dimensionality<br />
of S and k is the dimensionality of<br />
S sub<br />
• It can also be shown that S null is a valid subspace of S and consequently S sub is also<br />
the null space of S null
• S null can be represented by its basis vectors. In this case the generator basis vectors<br />
(or ‘generator matrix’ H) denote the generator matrix for S null - of dimension n-k = R<br />
• This matrix is called the parity check matrix of the code defined by G, where G is<br />
obviously the generator matrix for S sub - of dimension k<br />
• Note that the number of vectors in the basis defines the dimension of the subspace<br />
• So the dimension of H is n-k (= R) and all vectors in the null space are orthogonal to<br />
all the vectors of the code<br />
• Since the rows of H, namely the vectors bi are members of the null space they are<br />
orthogonal to any code vector<br />
• So a vector y is a codeword only if yHT=0<br />
• Note that a linear block code can be specified by either G or H<br />
Parity Check Matrix:

    [ b11 b12 ... b1n ]   [ b1 ]
H = [ b21 b22 ... b2n ] = [ b2 ]      (R = n − k rows)
    [  .   .  ...  .  ]   [ .  ]
    [ bR1 bR2 ... bRn ]   [ bR ]
• So H is used to check if a codeword is valid,<br />
• The rows of H, namely, bi, are chosen to be orthogonal to rows of G, namely ai<br />
• Consequently the dot product of any valid codeword with any bi is zero<br />
This is so since,

c = Σ_{i=1}^{k} d_i a_i

and so,

b_j · c = b_j · Σ_{i=1}^{k} d_i a_i = Σ_{i=1}^{k} d_i (a_i · b_j) = 0
• This means that a codeword is valid (but not necessarily correct) only if cHT = 0. To<br />
ensure this it is required that the rows of H are independent and are orthogonal to the<br />
rows of G<br />
• That is the bi span the remaining R (= n - k) dimensions of the codespace<br />
• For example consider a (3,2) code. In this case G has 2 rows, a1 and a2<br />
• Consequently all valid codewords sit in the subspace (in this case a plane) spanned by<br />
a1 and a2<br />
• In this example the H matrix has only one row, namely b1. This vector is orthogonal<br />
to the plane containing the rows of the G matrix, i.e., a1 and a2<br />
• Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid<br />
codeword) will thus have a component in the direction of b1 yielding a non- zero dot<br />
product between itself and b1.<br />
Error Syndrome:<br />
• For error correcting codes we need a method to compute the required correction<br />
• To do this we use the Error Syndrome, s of a received codeword, cr<br />
s = crHT<br />
• If cr is corrupted by the addition of an error vector, e, then<br />
cr = c + e<br />
and<br />
s = (c + e) HT = cHT + eHT<br />
s = 0 + eHT<br />
Syndrome depends only on the error<br />
• That is, we can add the same error pattern to different code words and get the same<br />
syndrome.<br />
– There are 2^(n – k) syndromes but 2^n error patterns
– For example for a (3,2) code there are 2 syndromes and 8 error patterns<br />
– Clearly no error correction possible in this case<br />
– Another example. A (7,4) code has 8 syndromes and 128 error patterns.<br />
– With 8 syndromes we can provide a different value to indicate single errors in<br />
any of the 7 bit positions as well as the zero value to indicate no errors<br />
• Now need to determine which error pattern caused the syndrome<br />
• For systematic linear block codes, H is constructed as follows,<br />
G = [ I | P ] and so H = [ -PT | I ]
where I is the k×k identity for G and the R×R identity for H
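A small sketch (plain Python, with the P of the (7,4) example below) illustrating that the syndrome depends only on the error pattern: the same e added to two different codewords gives the same s.

```python
P = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
H = [[P[j][i] for j in range(4)] + [int(i == j) for j in range(3)] for i in range(3)]

def syndrome(r):
    """s = r H^T over GF(2)."""
    return [sum(a & b for a, b in zip(r, h)) % 2 for h in H]

c1 = [1, 0, 1, 1, 0, 1, 0]        # two different valid codewords
c2 = [0, 1, 1, 0, 0, 1, 1]
e  = [0, 0, 0, 0, 1, 0, 0]        # one error pattern applied to both
s1 = syndrome([a ^ b for a, b in zip(c1, e)])
s2 = syndrome([a ^ b for a, b in zip(c2, e)])
print(s1, s2)   # [1, 0, 0] [1, 0, 0]: the syndrome sees only e
```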
• Example, (7,4) code, dmin= 3
G = [ I | P ] =
[ 1 0 0 0 | 0 1 1 ]
[ 0 1 0 0 | 1 0 1 ]
[ 0 0 1 0 | 1 1 0 ]
[ 0 0 0 1 | 1 1 1 ]

H = [ -PT | I ] =
[ 0 1 1 1 | 1 0 0 ]
[ 1 0 1 1 | 0 1 0 ]
[ 1 1 0 1 | 0 0 1 ]
Error Syndrome – Example:<br />
• For a correct received codeword cr = [1101001]<br />
In this case,

s = cr HT = [0 0 0]

Standard Array:
• The Standard Array is constructed as follows,
c1 (all zero)   c2        ……   cM        s0
e1              c2 + e1   ……   cM + e1   s1
e2              c2 + e2   ……   cM + e2   s2
e3              c2 + e3   ……   cM + e3   s3
…               ……        ……   ……        …
eN              c2 + eN   ……   cM + eN   sN
• The array has 2^k columns (i.e., equal to the number of valid codewords) and 2^R rows (i.e., the number of syndromes)
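The first column of the standard array (the coset leaders) is exactly what a syndrome decoder stores. A sketch in plain Python for the (7,4) example code, scanning the 2^n error patterns in order of increasing weight so that each of the 2^R syndromes keeps its lightest pattern:

```python
from itertools import product

P = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]   # (7,4) example code
H = [[P[j][i] for j in range(4)] + [int(i == j) for j in range(3)] for i in range(3)]

def syndrome(r):
    return tuple(sum(a & b for a, b in zip(r, h)) % 2 for h in H)

# Standard-array decoding table: syndrome -> minimum-weight coset leader.
leader = {}
for e in sorted(product([0, 1], repeat=7), key=sum):
    leader.setdefault(syndrome(e), e)       # lightest pattern seen first wins

def correct(r):
    e = leader[syndrome(r)]
    return [a ^ b for a, b in zip(r, e)]

r = [1, 0, 1, 1, 0, 1, 1]     # codeword 1011010 with its last bit flipped
print(correct(r))             # [1, 0, 1, 1, 0, 1, 0]
print(len(leader))            # 8 rows, one per syndrome
```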
Hamming Codes:<br />
• We will consider a special class of SEC codes (i.e., Hamming distance = 3) where,
– Number of parity bits R = n – k and n = 2^R – 1
– Syndrome has R bits
– 0 value implies zero errors
– 2^R – 1 other syndrome values, i.e., one for each bit that might need to be corrected
– This is achieved if each column of H is a different binary word – remember s = eHT
• Systematic form of (7,4) Hamming code is,<br />
G = [ I | P ] =
[ 1 0 0 0 | 0 1 1 ]
[ 0 1 0 0 | 1 0 1 ]
[ 0 0 1 0 | 1 1 0 ]
[ 0 0 0 1 | 1 1 1 ]

H = [ -PT | I ] =
[ 0 1 1 1 | 1 0 0 ]
[ 1 0 1 1 | 0 1 0 ]
[ 1 1 0 1 | 0 0 1 ]
• The original form is non-systematic,<br />
G =
[ 1 1 1 0 0 0 0 ]
[ 1 0 0 1 1 0 0 ]
[ 0 1 0 1 0 1 0 ]
[ 1 1 0 1 0 0 1 ]

H =
[ 0 0 0 1 1 1 1 ]
[ 0 1 1 0 0 1 1 ]
[ 1 0 1 0 1 0 1 ]
• Compared with the systematic code, the column orders of both G and H are swapped<br />
so that the columns of H are a binary count<br />
• The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the non-systematic H is col. 7<br />
in the systematic H.<br />
Transmission and Storage
Introduction
◊ A major concern in designing digital data transmission and storage systems is the control of errors, so that reliable reproduction of data can be obtained.
◊ In 1948, Shannon demonstrated that, by proper encoding of the information, errors induced<br />
by a noisy channel or storage medium can be reduced to any desired level without sacrificing<br />
the rate of information transmission or storage, as long as the information rate is less than the<br />
capacity of the channel.<br />
◊ A great deal of effort has been expended on the problem of devising efficient encoding and<br />
decoding methods for error control in a noisy environment<br />
Typical Digital Communications Systems<br />
◊ Block diagram of a typical data transmission or storage system<br />
Types of Codes<br />
◊ There are four types of codes in common use today:<br />
◊ Block codes<br />
◊ Convolutionalcodes<br />
◊ Turbo codes<br />
◊ Low-Density Parity-Check (LDPC) Codes<br />
◊ Block codes<br />
◊ The encoder for a block code divides the information sequence
into message blocks of k information bits each.<br />
◊ A message block is represented by the binary k-tuple u = (u1, u2, …, uk) called a message.
◊ There are a total of 2^k different possible messages.
Block Codes<br />
◊ Block codes (cont.)
◊ The encoder transforms each message u into an n-tuple v = (v1, v2, …, vn) of discrete symbols called a code word.
◊ Corresponding to the 2^k different possible messages, there are 2^k different possible code words at the encoder output.
◊ This set of 2^k code words of length n is called an (n,k) block code.
◊ The ratio R=k/n is called the code rate.<br />
◊ n-k redundant bits can be added to each message to form a code word<br />
◊ Since the n-symbol output code word depends only on the corresponding k-bit input<br />
message, the encoder is memoryless, and can be implemented with a combinational logic<br />
circuit.<br />
Block Codes<br />
◊ Binary block code with k=4 and n=7<br />
Finite Field (Galois Field)
◊ Much of the theory of linear block codes is highly mathematical in nature and requires an extensive background in modern algebra.
◊ Finite fields were invented by the early 19th-century mathematician Évariste Galois, a young French mathematical prodigy who developed the theory of finite fields, now known as Galois fields, before being killed in a duel at the age of 21.
◊ For well over 100 years, mathematicians looked upon Galois fields as elegant mathematics but of no practical value.
Convolutional Codes<br />
◊ The encoder for a convolutional code also accepts k-bit blocks of the information sequence<br />
u and produces an encoded sequence (code word) v of n-symbol blocks.<br />
◊ Each encoded block depends not only on the corresponding k-bit message block at the same time unit, but also on m previous message blocks. Hence, the encoder has a memory order of m.
◊ The set of encoded sequences produced by a k-input, n-output encoder of memory order m is called an (n, k, m) convolutional code.
◊ The ratio R=k/n is called the code rate.<br />
◊ Since the encoder contains memory, it must be implemented with a sequential logic circuit.<br />
◊ Binary convolutional encoder with k=1, n=2, and m=2<br />
◊ Memoryless channels are called random-error channels.
Transition probability diagram for the binary symmetric channel (BSC).
Types of Errors
◊ On channels with memory, the noise is not independent from transmission to transmission.
◊ Channels with memory are called burst-error channels.
Simplified model of a channel with memory.
Error Control Strategies
◊ Error control for a one-way system must be accomplished using forward error correction (FEC), that is, by employing error-correcting codes that automatically correct errors detected at the receiver.
◊ Error control for a two-way system can be accomplished using error detection and<br />
retransmission, called automatic repeat request (ARQ).<br />
This is also known as backward error correction (BEC).
◊ In an ARQ system, when errors are detected at the receiver, a request is sent for the transmitter to repeat the message, and this continues until the message is received correctly.
◊ The major advantage of ARQ over FEC is that error detection requires much simpler<br />
decoding equipment than does error correction.<br />
◊ ARQ is adaptive in the sense that information is retransmitted only when errors occur.
◊ When the channel error rate is high, retransmissions must be sent too frequently, and the<br />
system throughput, the rate at which newly generated messages are correctly received, is<br />
lowered by ARQ.<br />
◊ In general, wire-line communications (more reliable) adopts BEC scheme, while wireless<br />
communications (relatively unreliable) adopts FEC scheme.<br />
Error Detecting Codes
◊ Cyclic Redundancy Code (CRC code), also known as the polynomial code.
◊ Polynomial codes are based upon treating bit strings as representations of polynomials with<br />
coefficients of 0 and 1 only.<br />
◊ For example, the 6-bit string 110001 represents the polynomial x^5 + x^4 + 1.
◊ When the polynomial code method is employed, the sender and receiver must agree upon a<br />
generator polynomial, G(x), in advance.<br />
◊ To compute the checksum for some frame with m bits, corresponding to the polynomial<br />
M(x), the frame must be longer than the generator polynomial.<br />
Error Detecting Codes<br />
◊ The idea is to append a checksum to the end of the frame in such a way that the polynomial represented by the checksummed frame is divisible by G(x).
◊ When the receiver gets the checksummed frame, it tries dividing it by G(x). If there is a<br />
remainder, there has been a transmission error.<br />
◊ The algorithm for computing the checksum is as follows:<br />
Calculation of the polynomial code checksum
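The division step described above can be sketched in plain Python; the generator G(x) = x^3 + x + 1 (bits 1011) is an assumed small example, not a standard CRC polynomial.

```python
def crc_remainder(frame, gen):
    """Divide x^r * M(x) by G(x) over GF(2); the remainder is the checksum."""
    r = len(gen) - 1
    work = list(frame) + [0] * r            # append r zero bits to the frame
    for i in range(len(frame)):
        if work[i]:                         # leading bit set: subtract (XOR) G(x)
            for j, g in enumerate(gen):
                work[i + j] ^= g
    return work[-r:]

gen = [1, 0, 1, 1]                          # assumed example G(x) = x^3 + x + 1
frame = [1, 0, 1, 1, 0, 1]
check = crc_remainder(frame, gen)
print(check)                                # [0, 1, 1]
print(crc_remainder(frame + check, gen))    # [0, 0, 0]: divisible, no error seen
```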
Convolution Codes<br />
‣ Encoding,<br />
‣ Decoding using state Tree and trellis diagrams,<br />
‣ Decoding using Viterbi algorithm,<br />
‣ Comparison of error rates in coded and uncoded transmission.<br />
Introduction:<br />
• Convolution codes map information to code bits sequentially by convolving a<br />
sequence of information bits with “generator” sequences<br />
• A convolution encoder encodes K information bits to N>K code bits at one time step<br />
• Convolutional codes can be regarded as block codes for which the encoder has a<br />
certain structure such that we can express the encoding operation as convolution<br />
• Convolutional codes are applied in applications that require good performance with<br />
low implementation cost. They operate on code streams (not in blocks)<br />
• Convolution codes have memory that utilizes previous bits to encode or decode<br />
following bits (block codes are memoryless)<br />
• Convolutional codes achieve good performance by expanding their memory depth<br />
• Convolutional codes are denoted by (n,k,L), where L is code (or encoder) Memory<br />
depth (number of register stages)<br />
• Constraint length C = n(L+1) is defined as the number of encoded bits a message bit can influence
• Convolutional encoder, k = 1, n = 2, L=2<br />
– Convolutional encoder is a finite state machine (FSM) processing<br />
information bits in a serial manner<br />
– Thus the generated code is a function of input and the state of the FSM<br />
– In this (n,k,L) = (2,1,2) encoder each message bit influences a span of C=<br />
n(L+1)=6 successive output bits = constraint length C<br />
– Thus, for generation of n-bit output, we require n shift registers in k = 1<br />
convolutional encoders
x'j = mj ⊕ mj–2 ⊕ mj–3

x''j = mj ⊕ mj–1 ⊕ mj–3

x'''j = mj ⊕ mj–2

Here each message bit influences a span of C = n(L+1) = 3(1+1) = 6 successive output bits
Convolution point of view in encoding and generator matrix:
Example: Using generator matrix

g(1) = [1 0 1 1]
g(2) = [1 1 1 1]
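The convolution view can be sketched directly: each output stream x(l) is the mod-2 convolution of the message with g(l), and the streams are interleaved per time step. The sketch assumes the register is flushed, i.e. outputs continue until the generators run off the end of the message.

```python
def conv_encode(m, gens):
    """x(l)_j = sum_i g(l)_i * m_(j-i) mod 2, outputs interleaved per time step."""
    L = len(gens[0]) - 1
    out = []
    for j in range(len(m) + L):             # run until the registers are flushed
        for g in gens:
            bit = 0
            for i, gi in enumerate(g):
                if 0 <= j - i < len(m):
                    bit ^= gi & m[j - i]
            out.append(bit)
    return out

# g(1) = [1 0 1 1], g(2) = [1 1 1 1] as above
print(conv_encode([1, 0, 1], [[1, 0, 1, 1], [1, 1, 1, 1]]))
# -> [1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1]
```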
Representing convolutional codes: Code tree:
(n,k,L) = (2,1,2) encoder

x'j = mj ⊕ mj–1 ⊕ mj–2
x''j = mj ⊕ mj–2

xout = x'1 x''1 x'2 x''2 x'3 x''3 …
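The same (n,k,L) = (2,1,2) encoder written as a finite state machine, where the state is the register pair (m_(j-1), m_(j-2)); this is a sketch of the equations above (the worked example later in these notes also appends flushing bits, which this sketch omits).

```python
def encode_212(msg):
    """x'_j = m_j ^ m_(j-1) ^ m_(j-2); x''_j = m_j ^ m_(j-2)."""
    m1 = m2 = 0                    # register contents, i.e. the FSM state
    out = []
    for m in msg:
        out += [m ^ m1 ^ m2, m ^ m2]
        m1, m2 = m, m1             # shift the register
    return out

print(encode_212([1, 1, 1, 0, 1]))   # -> [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
```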
Representing convolutional codes compactly: code trellis and state diagram:<br />
State diagram<br />
Inspecting state diagram: Structural properties of convolutional codes:<br />
• Each new block of k input bits causes a transition into new state<br />
• Hence there are 2^k branches leaving each state
• Assuming encoder zero initial state, encoded word for any input of k bits can thus be<br />
obtained. For instance, below for u=(1 1 1 0 1), encoded word v=(1 1, 1 0, 0 1, 0 1, 1<br />
1, 1 0, 1 1, 1 1) is produced:<br />
- encoder state diagram for (n,k,L)=(2,1,2) code<br />
- note that the number of states is 2^(L+1) = 8
Distance for some convolutional codes:<br />
THE VITERBI ALGORITHEM:
• The problem of optimum decoding is to find the minimum distance path from the initial state back to the initial state (below, from S0 to S0). The minimum distance is the sum of all path metrics

ln p(y, xm) = Σj ln p(yj | xmj)

that is maximized by the correct path
• The exhaustive maximum likelihood method must search all the paths in the phase trellis (2^k paths emerging from / entering each of the 2^(L+1) states for an (n,k,L) code)
• The Viterbi algorithm gets its efficiency by concentrating on the survivor paths of the trellis
THE SURVIVOR PATH:<br />
• Assume for simplicity a convolutional code with k=1, and up to 2k = 2 branches can<br />
enter each state in trellis diagram<br />
• Assume optimal path passes S. Metric comparison is done by adding the metric of S<br />
into S1 and S2. At the survivor path the accumulated metric is naturally smaller<br />
(otherwise it could not be the optimum path)
• For this reason the non-surviving path can be discarded -> all path alternatives need not be considered
• Note that in principle whole transmitted<br />
sequence must be received before decision.<br />
However, in practice storing of states for<br />
input length of 5L is quite adequate
The maximum likelihood path:<br />
The decoded ML code sequence is 11 10 10 11 00 00 00 whose Hamming<br />
distance to the received sequence is 4 and the respective decoded
sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum distance path.<br />
(Black circles denote the deleted branches, dashed lines: '1' was applied)<br />
How to end-up decoding?<br />
• In the previous example it was assumed that the register was finally filled with zeros<br />
thus finding the minimum distance path<br />
• In practice with long code words zeroing requires feeding of long sequence of zeros to<br />
the end of the message bits: this wastes channel capacity & introduces delay<br />
• To avoid this path memory truncation is applied:<br />
– Trace all the surviving paths to the<br />
depth where they merge<br />
– Figure right shows a common point<br />
at a memory depth J<br />
– J is a random variable whose applicable<br />
magnitude shown in the figure (5L)<br />
has been experimentally tested for<br />
negligible error rate increase<br />
– Note that this also introduces the<br />
delay of 5L!<br />
J ≈ 5L stages of the trellis
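A minimal hard-decision Viterbi sketch for the (2,1,2) encoder above (generators 111 and 101), keeping one survivor per state with the accumulated Hamming metric; path memory truncation is omitted for clarity.

```python
def viterbi_212(received, n_msg):
    """Hard-decision Viterbi for the (2,1,2) encoder with generators 111, 101."""
    def branch(state, m):                   # output pair for input m from state
        m1, m2 = state
        return (m ^ m1 ^ m2, m ^ m2)

    paths = {(0, 0): (0, [])}               # state -> (metric, decoded bits)
    for i in range(n_msg):
        r = received[2 * i], received[2 * i + 1]
        nxt = {}
        for state, (metric, bits) in paths.items():
            for m in (0, 1):                # 2^k = 2 branches leave each state
                o = branch(state, m)
                d = (o[0] != r[0]) + (o[1] != r[1])
                cand = (metric + d, bits + [m])
                ns = (m, state[0])
                if ns not in nxt or cand[0] < nxt[ns][0]:
                    nxt[ns] = cand          # keep only the survivor per state
        paths = nxt
    return min(paths.values())[1]           # best metric over the final states

rx = [1, 1, 0, 1, 1, 1, 0, 1, 0, 0]        # codeword for 11101 with bit 5 flipped
print(viterbi_212(rx, 5))                  # -> [1, 1, 1, 0, 1]: error corrected
```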
Hamming Code Example:<br />
• H(7,4)<br />
• Generator matrix G: first 4-by-4 identical matrix<br />
• Message information vector p<br />
• Transmission vector x<br />
• Received vector r<br />
and error vector e<br />
• Parity check matrix H
Error Correction:<br />
• If there is no error, syndrome vector z=zeros<br />
• If there is one error at location 2<br />
• New syndrome vector z is
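Since the vectors and matrices themselves did not survive extraction, the steps in these bullets can be reproduced with the systematic (7,4) matrices used earlier in these notes (a sketch; the original slides may have used a different arrangement):

```python
P = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
G = [[int(i == j) for j in range(4)] + P[i] for i in range(4)]
H = [[P[j][i] for j in range(4)] + [int(i == j) for j in range(3)] for i in range(3)]

p = [1, 0, 1, 1]                                        # message vector p
x = [sum(p[i] & G[i][j] for i in range(4)) % 2 for j in range(7)]  # x = pG
r = list(x)
r[1] ^= 1                                               # one error at location 2
z = [sum(a & b for a, b in zip(r, h)) % 2 for h in H]   # syndrome z = rH^T

# z equals column 2 of H, so the syndrome locates the error directly:
cols = [[H[i][j] for i in range(3)] for j in range(7)]
r[cols.index(z)] ^= 1                                   # flip the located bit
print(r == x)   # True: the single error has been corrected
```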
Example of CRC:<br />
Example: Using generator matrix:

g(1) = [1 0 1 1]
g(2) = [1 1 1 1]

Received sequence: 11 00 01 11 01 11 10 01

Soft-decision path metrics:
correct: 1+1+2+2+2 = 8; 8 × (-0.11) = -0.88
false: 1+1+0+0+0 = 2; 2 × (-2.30) = -4.6
total path metric: -5.48
Turbo Codes:<br />
• Background
– Turbo codes were proposed by Berrou and Glavieux at the 1993 International Conference on Communications.
– Performance within 0.5 dB of the channel capacity limit for BPSK was<br />
demonstrated.<br />
• Features of turbo codes<br />
– Parallel concatenated coding<br />
– Recursive convolutional encoders<br />
– Pseudo-random interleaving<br />
– Iterative decoding
Motivation: Performance of Turbo Codes<br />
• Comparison:<br />
– Rate 1/2 Codes.<br />
– K=5 turbo code.<br />
– K=14 convolutional code.<br />
• Plot is from:<br />
– L. Perez, “Turbo Codes”, chapter 8 of Trellis Coding by C. Schlegel. IEEE<br />
Press, 1997<br />
Pseudo-random Interleaving:<br />
• The coding dilemma:<br />
– Shannon showed that large block-length random codes achieve channel<br />
capacity.<br />
– However, codes must have structure that permits decoding with reasonable<br />
complexity.<br />
– Codes with structure don’t perform as well as random codes.<br />
– “Almost all codes are good, except those that we can think of.”<br />
• Solution:<br />
– Make the code appear random, while maintaining enough structure to permit<br />
decoding.<br />
– This is the purpose of the pseudo-random interleaver.<br />
– Turbo codes possess random-like properties.<br />
– However, since the interleaving pattern is known, decoding is possible.
Why Interleaving and Recursive Encoding?<br />
• In a coded systems:<br />
– Performance is dominated by low weight code words.<br />
• A “good” code:<br />
– will produce low weight outputs with very low probability.<br />
• An RSC code:<br />
– Produces low weight outputs with fairly low probability.<br />
– However, some inputs still cause low weight outputs.<br />
• Because of the interleaver:<br />
– The probability that both encoders have inputs that cause low<br />
weight outputs is very low.<br />
– Therefore the parallel concatenation of both encoders will produce<br />
a “good” code.<br />
Iterative Decoding:<br />
• There is one decoder for each elementary encoder.<br />
• Each decoder estimates the a posteriori probability (APP) of each data<br />
bit.<br />
• The APP’s are used as a priori information by the other decoder.<br />
• Decoding continues for a set number of iterations.<br />
– Performance generally improves from iteration to iteration, but<br />
follows a law of diminishing returns<br />
The Turbo-Principle:<br />
Turbo codes get their name because the decoder uses feedback, like a turbo engine
Performance as a Function of Number of Iterations:<br />
(Figure: BER versus Eb/N0 in dB for 1, 2, 3, 6, 10 and 18 decoder iterations; the BER axis runs from 10^0 down to 10^-7 and the Eb/N0 axis from 0.5 to 2 dB. The BER falls as the number of iterations increases.)
Turbo Code Summary:<br />
• Turbo code advantages:<br />
– Remarkable power efficiency in AWGN and flat-fading channels<br />
for moderately low BER.<br />
– Design tradeoffs suitable for delivery of multimedia services.
• Turbo code disadvantages:<br />
– Long latency.
– Poor performance at very low BER.<br />
– Because turbo codes operate at very low SNR, channel estimation<br />
and tracking is a critical issue.<br />
• The principle of iterative or “turbo” processing can be applied to other<br />
problems.<br />
– Turbo-multiuser detection can improve performance of coded<br />
multiple-access systems.<br />
UNIT 5 :<br />
Spread Spectrum Modulation<br />
‣ Use of spread spectrum,<br />
‣ direct sequence spread spectrum(DSSS),<br />
‣ Code division multiple access,<br />
‣ Ranging using DSSS Frequency Hopping spread spectrum,<br />
‣ PN sequences: generation and characteristics,<br />
‣ Synchronization in spread spectrum system,<br />
‣ Advancements in the digital communication<br />
SPREAD SPECTRUM MODULATION<br />
• Spread data over wide bandwidth<br />
• Makes jamming and interception harder<br />
• Frequency hopping
– Signal broadcast over seemingly random series of frequencies
• Direct Sequence<br />
– Each bit is represented by multiple bits in transmitted signal<br />
– Chipping code<br />
Spread Spectrum Concept:<br />
• Input fed into channel encoder<br />
– Produces narrow bandwidth analog signal around central frequency<br />
• Signal modulated using sequence of digits<br />
– Spreading code/sequence<br />
– Typically generated by pseudonoise/pseudorandom number generator<br />
• Increases bandwidth significantly<br />
– Spreads spectrum<br />
• Receiver uses same sequence to demodulate signal<br />
• Demodulated signal fed into channel decoder
General Model of Spread Spectrum System:<br />
Gains:<br />
• Immunity from various noise and multipath distortion<br />
– Including jamming<br />
• Can hide/encrypt signals<br />
– Only receiver who knows spreading code can retrieve signal<br />
• Several users can share same higher bandwidth with little interference<br />
– Cellular telephones<br />
– Code division multiplexing (CDM)<br />
– Code division multiple access (CDMA)<br />
Pseudorandom Numbers:<br />
• Generated by algorithm using initial seed<br />
• Deterministic algorithm<br />
– Not actually random<br />
– If algorithm good, results pass reasonable tests of randomness<br />
• Need to know algorithm and seed to predict sequence<br />
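A common concrete generator is a linear feedback shift register (LFSR); maximal-length LFSRs are the usual source of PN spreading sequences. The 4-stage register and tap positions below are an assumed example.

```python
def lfsr(taps, seed, n):
    """Fibonacci LFSR: emit n output bits; feedback = XOR of the tapped stages."""
    state = list(seed)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]              # taps are 1-indexed stage positions
        state = [fb] + state[:-1]           # shift, feeding fb into stage 1
    return out

# Assumed example: 4 stages, taps (3, 4); maximal length, period 2^4 - 1 = 15
seq = lfsr([3, 4], [1, 0, 0, 0], 30)
print(seq[:15] == seq[15:30])   # True: the sequence repeats with period 15
print(sum(seq[:15]))            # 8: one more 1 than 0 per period (balance)
```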
Frequency Hopping Spread Spectrum (FHSS):<br />
• Signal broadcast over seemingly random series of frequencies
• Receiver hops between frequencies in sync with transmitter<br />
• Eavesdroppers hear unintelligible blips<br />
• Jamming on one frequency affects only a few bits<br />
Basic Operation:<br />
• Typically 2^k carrier frequencies forming 2^k channels
• Channel spacing corresponds with bandwidth of input<br />
• Each channel used for fixed interval
– 300 ms in IEEE 802.11<br />
– Some number of bits transmitted using some encoding scheme<br />
• May be fractions of bit (see later)<br />
– Sequence dictated by spreading code<br />
Frequency Hopping Example:<br />
Frequency Hopping Spread Spectrum System (Transmitter):<br />
Frequency Hopping Spread Spectrum System (Receiver):
Slow and Fast FHSS:<br />
• Frequency shifted every Tc seconds<br />
• Duration of signal element is Ts seconds<br />
• Slow FHSS has Tc ≥ Ts
• Fast FHSS has Tc < Ts<br />
• Generally fast FHSS gives improved performance in noise (or jamming)<br />
Slow Frequency Hop Spread Spectrum Using MFSK (M=4, k=2)
Fast Frequency Hop Spread Spectrum Using MFSK (M=4, k=2)<br />
FHSS Performance Considerations:<br />
• Typically large number of frequencies used<br />
– Improved resistance to jamming<br />
Direct Sequence Spread Spectrum (DSSS):<br />
• Each bit represented by multiple bits using spreading code<br />
• Spreading code spreads signal across wider frequency band<br />
– In proportion to number of bits used
– 10 bit spreading code spreads signal across 10 times bandwidth of 1 bit code<br />
• One method:<br />
– Combine input with spreading code using XOR<br />
– Input bit 1 inverts spreading code bit<br />
– Input zero bit doesn’t alter spreading code bit<br />
– Data rate equal to original spreading code<br />
• Performance similar to FHSS<br />
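The XOR method in the bullets above, sketched in plain Python with an assumed 10-chip spreading code; the despreader here uses a majority vote per bit, a simplification that also tolerates a few chip errors.

```python
def spread(bits, code):
    """Input bit 1 inverts the spreading code; input bit 0 leaves it unchanged."""
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    n, out = len(code), []
    for i in range(0, len(chips), n):
        block = [chips[i + j] ^ code[j] for j in range(n)]
        out.append(1 if sum(block) > n // 2 else 0)   # majority vote per bit
    return out

code = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]    # assumed 10-chip code: 10x the bandwidth
tx = spread([1, 0, 1], code)
print(len(tx))                           # 30 chips for 3 data bits
print(despread(tx, code))                # [1, 0, 1]
```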
Direct Sequence Spread Spectrum Example:
Direct Sequence Spread Spectrum Transmitter:<br />
Direct Sequence Spread Spectrum Receiver:
Direct Sequence Spread Spectrum Using BPSK Example:<br />
Code Division Multiple Access (CDMA):<br />
• Multiplexing Technique used with spread spectrum<br />
• Start with data signal rate D<br />
– Called bit data rate<br />
• Break each bit into k chips according to fixed pattern specific to each user<br />
– User’s code<br />
• New channel has chip data rate kD chips per second<br />
• E.g. k=6, three users (A,B,C) communicating with base receiver R<br />
• Code for A = <br />
• Code for B = <br />
• Code for C = <br />
CDMA Example:
• Consider A communicating with base<br />
• Base knows A’s code<br />
• Assume communication already synchronized<br />
• A wants to send a 1<br />
– Send chip pattern <br />
• A’s code<br />
• A wants to send 0<br />
– Send chip pattern
• Complement of A’s code<br />
• Decoder ignores other sources when using A’s code to decode<br />
– Orthogonal codes<br />
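The decode step relies on orthogonality. The chip sequences for A, B, and C did not survive extraction, so the sketch below uses two assumed orthogonal chip sequences in +/-1 form to show that B's transmission cancels under A's code:

```python
code_a = [1, -1, 1, -1, 1, -1]     # assumed orthogonal chip sequences (+/-1 form)
code_b = [1, 1, -1, -1, 1, 1]      # dot(code_a, code_b) = 0

def tx_bit(bit, code):
    """Send the code for a 1, its complement for a 0."""
    return [c if bit else -c for c in code]

def rx_decode(signal, code):
    corr = sum(s * c for s, c in zip(signal, code))
    return 1 if corr > 0 else 0    # sign of the correlation recovers the bit

# A sends 1 while B sends 0; the channel simply adds the chip streams
channel = [a + b for a, b in zip(tx_bit(1, code_a), tx_bit(0, code_b))]
print(rx_decode(channel, code_a))  # 1: B's chips correlate to zero with A's code
print(rx_decode(channel, code_b))  # 0
```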
CDMA for DSSS:<br />
• n users each using different orthogonal PN sequence<br />
• Modulate each users data stream<br />
– Using BPSK<br />
• Multiply by spreading code of user<br />
CDMA in a DSSS Environment:
15.Additional Topics:<br />
Voice coders<br />
Regenerative repeater<br />
Feed back communications<br />
Advancements in the digital communication<br />
Signal space representation<br />
Turbo codes<br />
Voice coders<br />
A vocoder (short for voice encoder) is an analysis/synthesis system used to reproduce
human speech. In the encoder, the input is passed through a multiband filter, each band is<br />
passed through an envelope follower, and the control signals from the envelope followers are<br />
communicated to the decoder. The decoder applies these (amplitude) control signals to<br />
corresponding filters in the (re)synthesizer.<br />
It was originally developed as a speech coder for telecommunications applications in the<br />
1930s, the idea being to code speech for transmission. Its primary use in this fashion is for<br />
secure radio communication, where voice has to be encrypted and then transmitted. The<br />
advantage of this method of "encryption" is that no 'signal' is sent, but rather envelopes of the<br />
bandpass filters. The receiving unit needs to be set up in the same channel configuration to resynthesize a version of the original signal spectrum. The vocoder as
both hardware and software has also been used extensively as an electronic musical<br />
instrument.<br />
Whereas the vocoder analyzes speech, transforms it into electronically transmitted<br />
information, and recreates it, The Voder (from Voice Operating Demonstrator) generates<br />
synthesized speech by means of a console with fifteen touch-sensitive keys and a pedal,<br />
basically consisting of the "second half" of the vocoder, but with manual filter controls,<br />
needing a highly trained operator.<br />
The human voice consists of sounds generated by the opening and closing of the glottis by<br />
the vocal cords, which produces a periodic waveform with many harmonics. This basic sound<br />
is then filtered by the nose and throat (a complicated resonant piping system) to produce<br />
differences in harmonic content (formants) in a controlled way, creating the wide variety of<br />
sounds used in speech. There is another set of sounds, known as<br />
the unvoiced and plosive sounds, which are created or modified by the mouth in different<br />
fashions.<br />
The vocoder examines speech by measuring how its spectral characteristics change over time.<br />
This results in a series of numbers representing these modified frequencies at any particular<br />
time as the user speaks. In simple terms, the signal is split into a number of frequency bands<br />
(the larger this number, the more accurate the analysis) and the level of signal present at each<br />
frequency band gives the instantaneous representation of the spectral energy content. Thus,<br />
the vocoder dramatically reduces the amount of information needed to store speech, from a<br />
complete recording to a series of numbers. To recreate speech, the vocoder simply reverses<br />
the process, processing a broadband noise source by passing it through a stage that filters the<br />
frequency content based on the originally recorded series of numbers. Information about the<br />
instantaneous frequency (as distinct from spectral characteristic) of the original voice signal<br />
is discarded; it wasn't important to preserve this for the purposes of the vocoder's original use<br />
as an encryption aid, and it is this "dehumanizing" quality of the vocoding process that has<br />
made it useful in creating special voice effects in popular music and audio entertainment.<br />
Since the vocoder process sends only the parameters of the vocal model over the<br />
communication link, instead of a point by point recreation of the waveform, it allows a<br />
significant reduction in the bandwidth required to transmit speech.<br />
Modern vocoder implementations<br />
Even with the need to record several frequencies, and the additional unvoiced sounds, the<br />
compression of the vocoder system is impressive. Standard speech-recording systems capture<br />
frequencies from about 500 Hz to 3400 Hz, where most of the frequencies used in speech lie,<br />
typically using a sampling rate of 8 kHz (slightly greater than the Nyquist rate). The sampling resolution is typically 12 or more bits per sample (16 is standard), for a final data rate in the range of 96-128 kbit/s. However, a good vocoder can provide a reasonably good simulation of voice with as little as 2.4 kbit/s of data.
'Toll Quality' voice coders, such as ITU G.729, are used in many telephone networks. G.729<br />
in particular has a final data rate of 8 kbit/s with superb voice quality. G.723 achieves slightly<br />
worse quality at data rates of 5.3 kbit/s and 6.4 kbit/s. Many voice systems use even lower<br />
data rates, but below 5 kbit/s voice quality begins to drop rapidly.<br />
Several vocoder systems are used in NSA encryption systems:<br />
• LPC-10, FIPS Pub 137, 2400 bit/s, which uses linear predictive coding<br />
• Code-excited linear prediction (CELP), 2400 and 4800 bit/s, Federal Standard 1016,<br />
used in STU-III<br />
• Continuously variable slope delta modulation (CVSD), 16 kbit/s, used in wide band<br />
encryptors such as the KY-57.<br />
• Mixed-excitation linear prediction (MELP), MIL STD 3005, 2400 bit/s, used in the<br />
Future Narrowband Digital Terminal FNBDT, NSA's 21st century secure telephone.<br />
• Adaptive Differential Pulse Code Modulation (ADPCM), former ITU-T G.721, 32<br />
kbit/s used in STE secure telephone<br />
(ADPCM is not a proper vocoder but rather a waveform codec. ITU has gathered G.721<br />
along with some other ADPCM codecs into G.726.)<br />
Vocoders are also currently used in developing psychophysics, linguistics, computational<br />
neuroscience and cochlear implant research.<br />
Modern vocoders that are used in communication equipment and in voice storage devices<br />
today are based on the following algorithms:<br />
• Algebraic code-excited linear prediction (ACELP 4.7 kbit/s – 24 kbit/s) [5]<br />
• Mixed-excitation linear prediction (MELPe 2400, 1200 and 600 bit/s) [6]<br />
• Multi-band excitation (AMBE 2000 bit/s – 9600 bit/s) [7]<br />
• Sinusoidal-Pulsed Representation (SPR 300 bit/s – 4800 bit/s) [8]<br />
• Tri-wave excited linear prediction (TWELP 2400 – 3600 bit/s) [9]<br />
Linear prediction-based vocoders<br />
Main article: Linear predictive coding<br />
Since the late 1970s, most non-musical vocoders have been implemented using linear<br />
prediction, whereby the target signal's spectral envelope (formant) is estimated by an all-pole<br />
IIR filter. In linear prediction coding, the all-pole filter replaces the bandpass filter bank<br />
of its predecessor and is used at the encoder to whiten the signal (i.e., flatten the spectrum)<br />
and again at the decoder to re-apply the spectral shape of the target speech signal.<br />
One advantage of this type of filtering is that the location of the linear predictor's spectral<br />
peaks is entirely determined by the target signal, and can be as precise as allowed by the time<br />
period to be filtered. This is in contrast with vocoders realized using fixed-width filter banks,<br />
where spectral peaks can generally only be determined to be within the scope of a given<br />
frequency band. LP filtering also has disadvantages in that signals with a large number of<br />
constituent frequencies may exceed the number of frequencies that can be represented by the<br />
linear prediction filter. This restriction is the primary reason that LP coding is almost always<br />
used in tandem with other methods in high-compression voice coders.
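As an illustration of the encoder-side whitening described above, here is a minimal linear-prediction sketch (assuming NumPy; the autocorrelation method with the Levinson-Durbin recursion, not any particular standard's LPC):

```python
import numpy as np

def lpc(x, order):
    """Estimate all-pole LPC coefficients a (with a[0] = 1) from the
    autocorrelation sequence via the Levinson-Durbin recursion."""
    x = np.asarray(x, dtype=float)
    r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                       # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]  # update earlier coefficients
        a[i] = k
        err *= 1.0 - k * k                   # residual (whitened) energy
    return a, err

# Synthetic AR(1) "speech" signal x[n] = 0.5 x[n-1]; an order-1 predictor
# should recover a ~ [1, -0.5] and whiten the signal almost exactly.
x = 0.5 ** np.arange(50)
a, err = lpc(x, 1)
residual = np.convolve(x, a)[:len(x)]  # encoder-side whitening filter
print(np.round(a, 3))
```

The decoder would run the inverse (all-pole) filter 1/A(z) to re-impose the spectral envelope, as the text describes.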
RALCWI vocoder<br />
Robust Advanced Low Complexity Waveform Interpolation (RALCWI) technology uses<br />
proprietary signal decomposition and parameter encoding methods to provide high voice<br />
quality at high compression ratios. The voice quality of RALCWI-class vocoders, as<br />
estimated by independent listeners, is similar to that provided by standard vocoders running<br />
at bit rates above 4000 bit/s. The Mean Opinion Score (MOS) of voice quality for this<br />
vocoder is about 3.5-3.6. This value was determined by a paired-comparison method,<br />
performing listening tests of the developed and standard vocoders.<br />
The RALCWI vocoder operates on a “frame-by-frame” basis. The 20ms source voice frame<br />
consists of 160 samples of linear 16-bit PCM sampled at 8 kHz. The Voice Encoder performs<br />
voice analysis at the high time resolution (8 times per frame) and forms a set of estimated<br />
parameters for each voice segment. All of the estimated parameters are quantized to produce<br />
41-, 48- or 55-bit frames, using vector quantization (VQ) of different types. All of the vector<br />
quantizers were trained on a mixed multi-language voice base, which contains voice samples<br />
in both Eastern and Western languages.<br />
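The quoted rates follow directly from the frame sizes (a quick check: 50 frames per second at 41, 48 or 55 bits each, against the 128 kbit/s PCM input):

```python
# Bit-rate arithmetic behind the RALCWI figures quoted above.
frame_ms = 20
frames_per_sec = 1000 // frame_ms          # 50 frames per second
source_rate = 160 * 16 * frames_per_sec    # 16-bit PCM input: 128000 bit/s
coded_rates = {bits: bits * frames_per_sec for bits in (41, 48, 55)}
print(source_rate, coded_rates)  # 128000 {41: 2050, 48: 2400, 55: 2750}
```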
Waveform-Interpolative (WI) vocoder was developed in AT&T Bell Laboratories around<br />
1995 by W.B. Kleijn, and subsequently a low- complexity version was developed by AT&T<br />
for the DoD secure vocoder competition. Notable enhancements to the WI coder were made<br />
at the University of California, Santa Barbara. AT&T holds the core patents related to WI,<br />
and other institutes hold additional patents. Using these patents as a part of WI coder<br />
implementation requires licensing from all IPR holders.<br />
The product is the result of a co-operation between CML Microcircuits and SPIRIT DSP. The<br />
co-operation combines CML’s 39-year history of developing mixed-signal semiconductors<br />
for professional and leisure communication applications, with SPIRIT’s experience<br />
in embedded voice products.<br />
Regenerative repeater<br />
Introduction of on-board regeneration alleviates the conflict between enhanced traffic<br />
capacity and moderate system cost by reducing the requirements of the radio front-ends, by<br />
simplifying the ground station digital equipment and the satellite communication payload in<br />
TDMA and Satellite-Switched-TDMA systems. Regenerative satellite repeaters can be<br />
introduced in an existing system with only minor changes at the ground stations. In cases<br />
where one repeater can be dedicated to each station a more favorable arrangement of the<br />
information data than in SS-TDMA can be conceived, which eliminates burst transmission<br />
while retaining full interconnectivity among spot-beam areas.<br />
ADVANCEMENTS IN DIGITAL COMMUNICATIONS<br />
Novel Robust, Narrow-band PSK Modes for HF Digital Communications<br />
The well-known Shannon-Hartley law tells us that there is an absolute limit on the error-free<br />
bit rate that can be transmitted within a certain bandwidth at a given signal to noise ratio<br />
(SNR). Although it is not obvious, this law can be restated (given here without proof) by<br />
saying that for a given bit rate, one can trade off bandwidth and power. On this basis then, a<br />
certain digital communications system could be either bandwidth limited or power limited,<br />
depending on its design criteria.<br />
Practice also tells us that digital communication systems designed for HF are necessarily<br />
designed with one of two objectives in mind: slow and robust, to allow communications with<br />
weak signals embedded in noise and adjacent-channel interference; or fast and somewhat<br />
subject to failing under adverse conditions, but able to best utilize the HF medium under<br />
good prevailing conditions.<br />
Taken that the average amateur radio transceiver has limited power output, typically 20 - 100<br />
Watts continuous duty, poor or restricted antenna systems, fierce competition for a free spot<br />
on the digital portion of the bands, adjacent channel QRM, QRN, and the marginal condition<br />
of the HF bands, it is evident that for amateur radio, there is a greater need for a weak signal,<br />
spectrally-efficient, robust digital communications mode, rather than another high speed,<br />
wide band communications method.<br />
Recent Developments using PSK on HF<br />
It is difficult to see how true coherent demodulation of PSK could ever be achieved in<br />
any non-cabled system, since random phase changes would introduce uncontrolled phase<br />
ambiguities. Presently, we have the technology to match and track carrier frequencies<br />
exactly; tracking carrier phase, however, is another matter. As a practical matter, then, we<br />
must revert to differentially coherent phase demodulation (DPSK).<br />
Another practical matter concerns the symbol, or baud, rate; conventional RTTY runs at<br />
45.45 baud (a symbol time of about 22 ms). This relatively long symbol time has been<br />
favored for its resistance to HF multipath effects, to which RTTY's robustness is attributed.<br />
Symbol rate also plays an important part in determining spectral occupancy. In the case of a<br />
45.45 baud RTTY waveform, the expected spectral occupancy is some 91 Hz for the major<br />
lobe, or +/- 45.45 Hz on each side of each of the two data tones. For a two-tone FSK<br />
system using continuous-phase frequency-shift keying (CPFSK) with the tones spaced<br />
170 Hz apart, the signal would occupy approximately 261 Hz.<br />
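These occupancy figures can be reproduced with the rule of thumb used above (the first spectral nulls of a keyed tone sit one baud either side of it):

```python
# Main-lobe spectral occupancy for keyed tones, as used in the text.
baud = 45.45   # symbols per second (conventional RTTY)
shift = 170.0  # tone spacing in Hz

one_tone_hz = 2 * baud        # single keyed tone: +/- one baud
fsk_hz = shift + 2 * baud     # two tones 170 Hz apart
print(round(one_tone_hz), round(fsk_hz))  # 91 261
```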
Signal space representation<br />
• Band pass signal<br />
• Real-valued signal: S(f) ↔ S*(-f)<br />
• Finite bandwidth B ↔ infinite time span<br />
• fc denotes the center frequency<br />
• Negative frequencies contain no additional information<br />
Characteristics:
• Complex-valued signal<br />
• No information loss; truly equivalent<br />
Let us consider DN = {(xi, yi) : i = 1, ..., N}, iid realizations of the joint observation-class<br />
phenomenon (X(u), Y(u)) with true probability measure PX,Y defined on (X × Y, σ(FX × FY)).<br />
In addition, let us consider a family of measurable representation functions D, where any<br />
f(·) ∈ D is defined on X and takes values in Xf. Let us assume that any representation<br />
function f(·) induces an empirical distribution P̂Xf,Y on (Xf × Y, σ(Ff × FY)), based on the<br />
training data and an implicit learning approach, where the empirical Bayes classification rule<br />
is given by: ĝf(x) = arg max y∈Y P̂Xf,Y(x, y).<br />
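A toy sketch of the empirical rule ĝf above (hypothetical data; P̂ is taken as the joint relative frequency over the training pairs):

```python
from collections import Counter

def empirical_bayes_rule(pairs):
    """Build g_hat_f(x) = argmax_y P_hat(x, y) from training pairs
    (f(x_i), y_i). Toy stand-in for the rule defined above: P_hat is
    simply the joint relative frequency of (x, y) in the training data."""
    counts = Counter(pairs)  # (x, y) -> count
    n = len(pairs)
    def g_hat(x):
        ys = {y for (xx, y) in counts if xx == x}
        return max(ys, key=lambda y: counts[(x, y)] / n)
    return g_hat

g = empirical_bayes_rule([("a", 0), ("a", 0), ("a", 1), ("b", 1)])
print(g("a"), g("b"))  # 0 1
```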
Turbo codes<br />
In information theory, turbo codes (originally in French Turbocodes) are a class of<br />
high-performance forward error correction (FEC) codes developed in 1993, which were the first<br />
practical codes to closely approach the channel capacity, a theoretical maximum for the code<br />
rate at which reliable communication is still possible given a specific noise level. Turbo<br />
codes are finding use in 3G mobile communications and (deep<br />
space) satellite communications as well as other applications where designers seek to achieve<br />
reliable information transfer over bandwidth- or latency-constrained communication links in<br />
the presence of data-corrupting noise. Turbo codes are nowadays competing with LDPC<br />
codes, which provide similar performance.<br />
Prior to turbo codes, the best constructions were serial concatenated codes based on an<br />
outer Reed-Solomon error correction code combined with an inner Viterbi-decoded short<br />
constraint length convolutional code, also known as RSV codes.<br />
In 1993, turbo codes were introduced by Berrou, Glavieux, and Thitimajshima (from<br />
Télécom Bretagne, former ENST Bretagne, France) in their paper: "Near Shannon Limit<br />
Error-correcting Coding and Decoding: Turbo-codes" published in the Proceedings of IEEE<br />
International Communications Conference. [1] In a later paper, Berrou gave credit to the<br />
"intuition" of "G. Battail, J. Hagenauer and P. Hoeher, who, in the late 80s, highlighted the<br />
interest of probabilistic processing.". He adds "R. Gallager and M. Tanner had already<br />
imagined coding and decoding techniques whose general principles are closely related,"<br />
although the necessary calculations were impractical at that time. [2]<br />
The first class of turbo code was the parallel concatenated convolutional code (PCCC). Since<br />
the introduction of the original parallel turbo codes in 1993, many other classes of turbo code<br />
have been discovered, including serial versions and repeat-accumulate codes. Iterative Turbo<br />
decoding methods have also been applied to more conventional FEC systems, including<br />
Reed-Solomon corrected convolutional codes.<br />
There are many different instantiations of turbo codes, using different component encoders,<br />
input/output ratios, interleavers, and puncturing patterns. This example encoder<br />
implementation describes a 'classic' turbo encoder, and demonstrates the general design of<br />
parallel turbo codes.<br />
This encoder implementation sends three sub-blocks of bits. The first sub-block is the m-bit<br />
block of payload data. The second sub-block is n/2 parity bits for the payload data, computed<br />
using a recursive systematic convolutional code (RSC code). The third sub-block is n/2 parity
bits for a known permutation of the payload data, again computed using an RSC<br />
convolutional code. Thus, two redundant but different sub-blocks of parity bits are sent with<br />
the payload. The complete block has m+n bits of data with a code rate of m/(m+n).<br />
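A minimal sketch of such a parallel concatenation (an assumed memory-2 RSC component code with generators (1, (1+D²)/(1+D+D²)) and a toy fixed interleaver, not any specific standard):

```python
def rsc_parity(bits):
    """Parity stream of a memory-2 recursive systematic convolutional
    code with generators (1, (1+D^2)/(1+D+D^2)); an assumed 'classic'
    component code for illustration only."""
    s1 = s2 = 0
    out = []
    for u in bits:
        a = u ^ s1 ^ s2     # feedback taps: 1 + D + D^2
        out.append(a ^ s2)  # feedforward taps: 1 + D^2
        s2, s1 = s1, a      # shift the register
    return out

def turbo_encode(bits, interleave):
    """Rate-1/3 parallel concatenation: systematic bits, parity from C1,
    and parity from C2 computed on the interleaved bits."""
    permuted = [bits[i] for i in interleave]
    return bits, rsc_parity(bits), rsc_parity(permuted)

msg = [1, 0, 1, 1, 0, 0, 1, 0]
pi = [3, 7, 0, 5, 2, 6, 1, 4]  # a toy fixed interleaver
x, y1, y2 = turbo_encode(msg, pi)
print(len(x) + len(y1) + len(y2))  # 24 coded bits for 8 message bits
```

With m = 8 message bits and n = 16 parity bits this gives the rate m/(m+n) = 1/3 described above.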
The permutation of the payload data is carried out by a device called an interleaver.<br />
Hardware-wise, this turbo-code encoder consists of two identical RSC coders, С1 and C2, as<br />
depicted in the figure, which are connected to each other using a concatenation scheme,<br />
called parallel concatenation:<br />
In the figure, M is a memory register. The delay line and interleaver force input bits dk to<br />
appear in different sequences. At first iteration, the input sequence dk appears at both outputs<br />
of the encoder,xk and y1k or y2k due to the encoder's systematic nature. If the<br />
encoders C1 and C2 are used respectively in n1 and n2 iterations, their rates are respectively<br />
equal to<br />
R1 = n1 / (2·n1 + n2),<br />
R2 = n2 / (n1 + 2·n2).<br />
The decoder<br />
The decoder is built in a similar way to the above encoder: two elementary decoders<br />
(DEC1 and DEC2) are interconnected to each other, but in a serial way, not in parallel. The<br />
DEC1 decoder operates at the lower rate (i.e. R1), thus it is intended for the C1 encoder,<br />
and DEC2 is for C2 correspondingly. DEC1 yields a soft decision, which causes an L1 delay.<br />
The same delay is caused by the delay line in the encoder. DEC2's operation causes an L2 delay.
An interleaver installed between the two decoders is used here to scatter error bursts<br />
coming from the DEC1 output. The DI block is a demultiplexing and insertion module. It<br />
works as a switch, redirecting input bits to DEC1 at one moment and to DEC2 at<br />
another. In the OFF state, it feeds both y1k and y2k inputs with padding bits (zeros).<br />
Consider a memoryless AWGN channel, and assume that at the k-th iteration, the<br />
decoder receives a pair of random variables<br />
xk = 2dk - 1 + ak, yk = 2Yk - 1 + bk,<br />
where ak and bk are independent noise components having the same<br />
variance σ². Yk is the k-th bit from the encoder output (y1k or y2k).<br />
Redundant information is demultiplexed and sent<br />
through DI to DEC1 (when yk = y1k) and to DEC2 (when yk = y2k).<br />
DEC1 yields a soft decision, i.e.:<br />
Λ(dk) = log [ p(dk = 1) / p(dk = 0) ]<br />
and delivers it to DEC2. Λ(dk) is called the logarithm of the likelihood<br />
ratio (LLR). p(dk = i), i ∈ {0, 1}, is the a posteriori probability (APP) of the data bit dk,<br />
which shows the probability of interpreting a received bit dk as i. Taking the LLR into<br />
account, DEC2 yields a hard decision, i.e. a decoded bit.<br />
It is known that the Viterbi algorithm is unable to calculate the APP, thus it cannot be<br />
used in DEC1. Instead, a modified BCJR algorithm is used. For DEC2, the Viterbi<br />
algorithm is an appropriate one.<br />
However, the depicted structure is not an optimal one, because DEC1 uses<br />
only a proper fraction of the available redundant information. In order to improve the<br />
structure, a feedback loop is used (see the dotted line on the figure).
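The soft decision (LLR) mentioned above can be sketched for the simple case of an antipodal bit observed in AWGN with equal priors (an illustration only, not the full BCJR computation):

```python
import math

def llr_awgn(x, sigma2):
    """Soft decision for an antipodal (+1/-1) bit observed in AWGN with
    variance sigma2: log p(d=1|x)/p(d=0|x) = 2*x/sigma2 under equal
    priors, since the Gaussian exponents differ by 4x/(2*sigma2)."""
    return 2.0 * x / sigma2

def hard(llr):
    """Hard decision: decode 1 for positive LLR, else 0."""
    return 1 if llr > 0 else 0

print(hard(llr_awgn(+0.7, 1.0)), hard(llr_awgn(-0.2, 1.0)))  # 1 0
```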
16. Question papers:<br />
B. Tech III Year II Semester Examinations, April/May - 2012<br />
DIGITAL COMMUNICATIONS<br />
(ELECTRONICS AND COMMUNICATION ENGINEERING)<br />
Time: 3 hours Max. Marks: 75<br />
Answer any five questions<br />
All questions carry equal marks<br />
---<br />
1. a) Discuss the advantages and disadvantages of digital communication system.<br />
b) State and prove sampling theorem in time domain. [15]<br />
2. a) With a relevant diagram, describe the operation of DPCM system.<br />
b) A TV signal with a bandwidth of 4.2 MHz is transmitted using binary PCM. The number<br />
of representation levels is 512. Calculate:<br />
i) Code word length ii) Final bit rate iii) Transmission bandwidth. [15]<br />
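A quick numeric check of part b) above (a sketch assuming Nyquist-rate sampling and the minimum binary transmission bandwidth of half the bit rate):

```python
import math

levels = 512
video_bw = 4.2e6                  # Hz
n = math.ceil(math.log2(levels))  # i) code word length: 9 bits
fs = 2 * video_bw                 # assumed Nyquist-rate sampling
bit_rate = n * fs                 # ii) 75.6 Mbit/s
tx_bw = bit_rate / 2              # iii) minimum binary transmission BW
print(n, bit_rate / 1e6, tx_bw / 1e6)  # 9 75.6 37.8
```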
3. a) What are different digital modulation techniques available? Compare them with regard<br />
to the probability error.<br />
b) Draw the block diagram of DPSK modulator and explain how synchronization problem is<br />
avoided for its detection. [15]<br />
4. a) Draw the block diagram of a baseband signal receiver and explain.<br />
b) What is an Eye pattern? Explain.<br />
c) What is matched filter? Derive the expression for its output SNR. [15]<br />
5. a) State and prove the condition for entropy to be maximum.<br />
b) Prove that H(Y/X) ≤ H(Y) with equality if and only if X and Y are independent.<br />
[15]<br />
6. a) Explain the advantages and disadvantages of cyclic codes.<br />
b) Construct the (7, 4) linear code word for the generator polynomial G(D) = 1 + D² + D³ for<br />
the message bits 1001 and find the checksum for the same.<br />
[15]<br />
7. a) Briefly describe the Viterbi algorithm for maximum-likelihood decoding of<br />
convolutional codes.<br />
b) For the convolutional encoder shown in figure7, draw the state diagram and the trellis<br />
diagram.<br />
8. a) Explain how PN sequences are generated. What are maximal-length sequences? What<br />
are their properties and why are they preferred?<br />
b) With the help of a neat block diagram, explain the working of a DS spread spectrum<br />
based CDMA system. [15]
B. Tech III Year II Semester Examinations, April/May - 2012<br />
DIGITAL COMMUNICATIONS<br />
(ELECTRONICS AND COMMUNICATION ENGINEERING)<br />
Time: 3 hours Max. Marks: 75<br />
Answer any five questions<br />
All questions carry equal marks<br />
---<br />
1. a) What is natural sampling? Explain it with sketches.<br />
b) Specify the Nyquist rate and Nyquist intervals for each of the following signals<br />
i) x(t) = sinc(200t) ii) x(t) = sinc²(200t) iii) x(t) = sinc(200t) + sinc²(200t).<br />
[15]<br />
2. a) Derive an expression for signal to quantization noise ratio of a PCM encoder using<br />
uniform quantizer when the input signal is uniformly distributed.<br />
b) A PCM system uses a uniform quantizer followed by a 7-bit binary encoder. The bit rate of<br />
the system is equal to 50 × 10⁶ bits/sec.<br />
i) What is the maximum message bandwidth?<br />
ii) Determine the signal to quantization noise ratio when fm = 1 MHz is applied.<br />
[15]<br />
3. a) Draw the correlation receiver structure for QPSK signal and explain its working<br />
principle.<br />
b) Write the power spectral density of BPSK and QPSK and draw the power spectrum of<br />
each. [15]<br />
4. a) Draw the block diagram of baseband communication receiver and explain the<br />
importance of each block.<br />
b) What is matched filter?<br />
c) Represent BFSK system using signal space diagram. What are the conclusions one can<br />
make with that of BPSK system? [15]<br />
5. a) Define and explain the following.<br />
i) Information<br />
ii) Efficiency of coding<br />
iii) Redundancy of coding.<br />
b) Prove that H(X,Y) = H(X) + H(Y/X) = H(Y) + H(X/Y). [15]<br />
6. a) Explain the principle and operation of encoder for Hamming code.<br />
b) An error control code has the following parity check matrix:<br />
H = [1 0 1 1 0 0]<br />
    [1 1 0 0 1 0]<br />
    [0 1 1 0 0 1]<br />
i) Determine the generator matrix ‘G’<br />
ii) Decode the received code word 110110 and comment on the error detection capability of<br />
this code. [15]<br />
7. a) Explain how you would draw the trellis diagram of a convolutional encoder given its<br />
state diagrams.<br />
b) For the convolutional encoder shown in figure7, draw the state diagram and the trellis<br />
diagram. [15]<br />
Figure: 7<br />
8. a) What are the advantages of the spread spectrum technique?<br />
b) Compare direct sequence spread spectrum and frequency hopped spread spectrum<br />
techniques and draw the important features of each. [15]<br />
B. Tech III Year II Semester Examinations, April/May - 2012<br />
DIGITAL COMMUNICATIONS<br />
(ELECTRONICS AND COMMUNICATION ENGINEERING)<br />
Time: 3 hours Max. Marks: 75<br />
Answer any five questions<br />
All questions carry equal marks<br />
---<br />
1. a) State and discuss the Hartley-Shannon law.<br />
b) The terminal of a computer used to enter alphanumeric data is connected to the computer<br />
through a voice-grade telephone line having a usable bandwidth of 3 kHz and an output SNR<br />
of 10 dB. Determine:<br />
i) The capacity of the channel<br />
ii) The maximum rate at which data can be transmitted from the terminal to the computer<br />
without error.<br />
Assume that the terminal has 128 characters and that the data sent from the terminal consists<br />
of independent sequences of characters with equal probability.<br />
[15]<br />
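Part i) of the question above can be checked numerically from the Shannon-Hartley law (10 dB corresponds to a linear SNR of 10; part ii) assumes 7 bits per equiprobable character, since log₂ 128 = 7):

```python
import math

bw = 3000.0                         # usable bandwidth, Hz
snr = 10 ** (10 / 10)               # 10 dB -> linear SNR of 10
capacity = bw * math.log2(1 + snr)  # i) channel capacity, bit/s
chars = capacity / math.log2(128)   # ii) characters per second
print(round(capacity), round(chars))  # 10378 1483
```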
2. a) What is hunting in delta modulation? Explain.<br />
b) Differentiate between granular and slope overload noise.<br />
c) A signal band limited within 3.6 KHz is to be transmitted via binary PCM on a channel<br />
whose maximum pulse rate is 40,000 pulses/sec. Design a PCM system and draw a block<br />
diagram showing all parameters. [15]<br />
3. a) Derive an expression for the spectrum of BFSK and sketch the same.<br />
b) Explain operation of differentially encoded PSK system. [15]<br />
4. a) What is an inter symbol interference in base band binary PAM system? Explain.<br />
b) Give the basic components of a filter in baseband data transmission and explain.<br />
[15]<br />
5. a) State the significance of H(Y/X) and H(X/Y).<br />
b) Given six messages x1, x2, x3, x4, x5, x6 with probabilities P(x1) = 1/3, P(x2) = 1/4,<br />
P(x3) = 1/8, P(x4) = 1/8, P(x5) = 1/12, P(x6) = 1/12, find the Shannon-Fano code.<br />
Evaluate the coding efficiency. [15]<br />
6. a) State and explain the properties of cyclic codes.<br />
b) The generator polynomial of a (7, 4) cyclic code is x³ + x + 1. Construct the generator<br />
matrix for a systematic cyclic code and find the code word for the message (1101)<br />
using the generator matrix. [15]<br />
7. a) What is a convolutional code? How is it different from a block code?<br />
b) Find the generator matrix G(D) for the (2, 1, 2) convolutional encoder of figure shown 7.<br />
[15]<br />
Figure: 7<br />
8. a) What are PN sequences? Discuss their characteristics.<br />
b) What are the two basic types of spread-spectrum systems? Explain the basic principle of<br />
each of them. [15]<br />
B. Tech III Year II Semester Examinations, April/May - 2012<br />
DIGITAL COMMUNICATIONS<br />
(ELECTRONICS AND COMMUNICATION ENGINEERING)<br />
Time: 3 hours Max. Marks: 75<br />
Answer any five questions<br />
All questions carry equal marks<br />
---<br />
1. a) What is Nyquist rate and Nyquist interval?<br />
b) What is aliasing and how it is reduced?<br />
c) A band limited signal x(t) is sampled by a train of rectangular pulses of width τ and period<br />
T.<br />
i) Find an expression for the sampled signal.<br />
ii) Determine the spectrum of the sampled signal and sketch it. [15]<br />
2. a) What is Companding? Explain how Companding improves the SNR of a PCM system?<br />
b) The input to a PCM system and a Delta Modulator (DM) is a sine wave of 4 kHz. The PCM<br />
and DM are both designed to yield an output SNR of 30 dB, assuming PCM sampling at 5<br />
times the Nyquist rate. Compare the bandwidth required for each system. [15]<br />
3. a) Draw the block diagram of QPSK system and explain its working.<br />
b) Derive an expression for the probability of error for PSK. [15]<br />
4. a) What is an optimum receiver? Explain it with suitable derivation.<br />
b) Describe briefly baseband M-ary PAM system. [15]<br />
5. a) Explain the importance of source coding.<br />
b) Apply Huffman’s encoding procedure to the following message ensemble and determine<br />
the average length of the encoded message:<br />
X = {x1, x2, x3, x4, x5, x6, x7, x8, x9, x10},<br />
P(X) = {0.18, 0.17, 0.16, 0.15, 0.10, 0.08, 0.05, 0.05, 0.04, 0.02}.<br />
The encoding alphabet is {D} = {0, 1, 2, 3}. [15]<br />
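A sketch of r-ary Huffman coding for this problem (r = 4 for the encoding alphabet {0,1,2,3}); the exact code words depend on tie-breaking, but the average length of an optimal code does not:

```python
import heapq
from itertools import count

def huffman(probs, r=4):
    """r-ary Huffman coding; valid here without dummy symbols because
    (10 - 1) % (4 - 1) == 0, so every merge combines exactly r nodes."""
    tick = count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tick), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        merged, total = {}, 0.0
        for digit in range(r):          # combine the r least probable
            p, _, codes = heapq.heappop(heap)
            total += p
            for sym, c in codes.items():
                merged[sym] = str(digit) + c  # prepend the branch digit
        heapq.heappush(heap, (total, next(tick), merged))
    return heap[0][2]

P = dict(zip(
    ["x1", "x2", "x3", "x4", "x5", "x6", "x7", "x8", "x9", "x10"],
    [0.18, 0.17, 0.16, 0.15, 0.10, 0.08, 0.05, 0.05, 0.04, 0.02]))
codes = huffman(P)
avg_len = sum(P[s] * len(codes[s]) for s in P)
print(round(avg_len, 2))  # average length in quaternary digits: 1.65
```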
6. a) What is a systematic block code?<br />
b) What is a syndrome vector? How is it useful?<br />
c) A linear (n, k) block code has a generator matrix<br />
G = [1 0 1 1]<br />
    [0 1 1 0]<br />
i) Find all its code words<br />
ii) Find its H matrix<br />
iii) What is the minimum distance of the code and its error correcting capacity?<br />
[15]<br />
7. a) What is a convolutional code?<br />
b) What is meant by the constraint length of a convolutional encoder?<br />
c) A convolutional encoder has a single shift register with two stages (i.e., constraint length<br />
K = 3), three mod-2 adders and an output multiplexer. The generator sequences of the<br />
encoder are as follows:<br />
g(1) = (1, 0, 1), g(2) = (1, 1, 0), g(3) = (1, 1, 1).<br />
Draw the block diagram of the encoder. [15]<br />
8. a) What are the advantages of spread – spectrum communication.<br />
b) What are PN sequences? Discuss their characteristics.<br />
c) Explain the principle of direct sequence spread spectrum. [15]
17. Question Bank<br />
1. (a) State and prove the sampling theorem for band pass signals.<br />
(b) A signal m(t) = cos(200πt) + 2cos(320πt) is ideally sampled at fs = 300 Hz. If the<br />
sampled signal is passed through a low pass filter with a cutoff frequency of 250 Hz, what<br />
frequency components will appear in the output? [6+10]<br />
2. (a) Explain with a neat block diagram the operation of a continuously variable slope delta<br />
modulator (CVSD).<br />
(b) Compare Delta modulation with Pulse code modulation technique. [8+8]<br />
3. (a) Assume that 4800bits/sec. random data are sent over a band pass channel by BFSK<br />
signaling scheme. Find the transmission bandwidth BT such that the spectral envelope is<br />
down at least 35dB outside this band.<br />
(b) Write the comparisons among ASK, PSK, FSK and DPSK. [8+8]<br />
4. (a) What is meant by ISI? Explain how it differs from cross talk in the PAM.<br />
(b) What is the ideal solution to obtain zero ISI and what is the disadvantage of this solution.<br />
[6+10]<br />
5. A code is composed of dots and dashes. Assume that the dash is 3 times as long as the<br />
dot and has one-third the probability of occurrence.<br />
Calculate<br />
(a) the information in a dot and that in a dash.<br />
(b) the average information in the dot-dash code.<br />
(c) Assume that a dot lasts for 10 ms and that this same time interval is allowed between<br />
symbols. Calculate the average rate of information. [16]<br />
6. Explain Shannon-Fano algorithm with an example. [16]<br />
7. Explain about block codes, in which each block of k message bits is encoded into a block of<br />
n > k bits, with an example. [16]<br />
8. Sketch the Tree diagram of convolutional encoder shown in figure 8 with Rate= 1/2,<br />
constraint length L = 2. [16]<br />
9. (a) State and prove the sampling theorem for band pass signals.<br />
(b) A signal m(t) = cos(200πt) + 2cos(320πt) is ideally sampled at fs = 300 Hz. If the<br />
sampled signal is passed through a low pass filter with a cutoff frequency of 250 Hz, what<br />
frequency components will appear in the output? [6+10]<br />
10. (a) Derive an expression for channel noise and quantization noise in DM system.<br />
(b) Compare DM and PCM systems. [10+6]<br />
11. (a) Draw the signal space representation of MSK.<br />
(b) Show that in a MSK signaling scheme, the carrier frequency is an integral multiple of ‘fb/4’<br />
where ‘fb’ is the bit rate.<br />
(c) Bring out the comparisons between MSK and QPSK.
12. (a) Derive an expression for error probability of non-coherent ASK scheme.<br />
(b) Binary data is transmitted over an RF band pass channel with a usable bandwidth of<br />
10 MHz at a rate of 4.8 × 10⁶ bits/sec using an ASK signaling method. The carrier amplitude<br />
at the receiver antenna is 1 mV and the noise power spectral density at the receiver input is<br />
10⁻¹⁵ W/Hz.<br />
i. Find the error probability of a coherent receiver.<br />
ii. Find the error probability of a non-coherent receiver. [8+8]<br />
13. Figure 5 illustrates a binary erasure channel with the transmission probabilities<br />
P(0|0) = P(1|1) = 1 - p and P(e|0) = P(e|1) = p. The probabilities for the input<br />
symbols are P(X=0) =a and P(X=1) =1- a.<br />
Determine the average mutual information I(X; Y) in bits. [16]<br />
14. Show that H(X, Y) = H(X) + H(Y |X) = H(Y ) + H(X|Y ). [16]<br />
15. Explain about block codes, in which each block of k message bits is encoded into a block<br />
of n > k bits, with an example. [16]<br />
16. Explain various methods for describing Convolutional Codes. [16]<br />
17. The probability density function of the sampled values of an analog signal is shown in<br />
figure 1.<br />
(a) Design a 4 - level uniform quantizer.<br />
(b) Calculate the signal power to quantization noise power ratio.<br />
(c) Design a 4 - level minimum mean squared error non - uniform quantizer.<br />
[6+4+6]<br />
18. A DM system is tested with a 10 kHz sinusoidal signal, 1 V peak to peak, at the input. The<br />
signal is sampled at 10 times the Nyquist rate.<br />
(a) What is the step size required to prevent slope overload and to minimize the granular<br />
noise.<br />
(b) What is the power spectral density of the granular noise?<br />
(c) If the receiver input is band limited to 200kHz, what is the average (S/NQ).<br />
[6+5+5]<br />
19. (a) Write down the modulation waveform for transmitting binary information over base<br />
band channels, for the following modulation schemes: ASK, PSK, FSK and DPSK.<br />
(b) What are the advantages and disadvantages of digital modulation schemes?<br />
(c) Discuss base band transmission of M-ary data. [4+6+6]<br />
20. (a) Draw the block diagram of band pass binary data transmission system and explain<br />
each block.<br />
(b) A band pass data transmitter uses a PSK signaling scheme with<br />
s1(t) = -A cos(wct); 0 ≤ t ≤ Tb<br />
s2(t) = +A cos(wct); 0 ≤ t ≤ Tb<br />
where Tb = 0.2 msec and wc = 10π/Tb.<br />
The carrier amplitude at the receiver input is 1 mV and the power spectral density of the<br />
additive white gaussian noise at the input is 10⁻¹¹ W/Hz. Assume that an ideal correlation<br />
receiver is used. Calculate the average bit error rate of the receiver. [8+8]<br />
21. A Discrete Memoryless Source (DMS) has an alphabet of five letters, xi, i = 1, 2, 3, 4, 5,<br />
each occurring with probabilities 0.15, 0.30, 0.25, 0.15, 0.10, 0.08, 0.05, 0.05.<br />
(a) Determine the entropy of the source and compare it with N.<br />
(b) Determine the average number N of binary digits per source letter. [16]<br />
22. (a) Calculate the bandwidth limits of Shannon-Hartley theorem.<br />
(b) What is an Ideal system? What kind of method is proposed by Shannon for an Ideal<br />
system? [16]<br />
23. Explain about block codes, in which each block of k message bits is encoded into a block<br />
of n > k bits, with an example. [16]<br />
24. Sketch the Tree diagram of convolutional encoder shown in figure 8 with Rate= 1/2,<br />
constraint length L = 2. [16]<br />
25. (a) State sampling theorem for low pass signals and band pass signals.<br />
(b) What is aliasing effect? How it can be eliminated? Explain with neat diagram.<br />
[4+4+8]<br />
26. (a) Derive an expression for channel noise and quantization noise in DM system.<br />
(b) Compare DM and PCM systems. [10+6]<br />
27. Explain the design and analysis of M-ary signaling schemes. List the waveforms in<br />
quaternary schemes. [16]<br />
28. (a) Derive an expression for error probability of coherent PSK scheme.<br />
(b) In a binary PSK scheme using a correlator receiver, the local carrier waveform is<br />
A cos(wct + θ) instead of A cos(wct) due to poor carrier synchronization. Derive an expression<br />
for the error probability and compute the increase in error probability when θ = 15° and<br />
[A²Tb/η] = 10. [8+8]<br />
29. Consider transmitting messages Q1, Q2, Q3, and Q4 by the symbols 0, 10, 110, 111.<br />
(a) Is the code uniquely decipherable? That is, for every possible sequence, is there only one<br />
way of interpreting the message?<br />
(b) Calculate the average number of code bits per message. How does it compare with H =<br />
1.8 bits per message? [16]<br />
30. Show that, when X and Y are independent, H(X, Y) = H(X) + H(Y) and H(X/Y) = H(X). [16]<br />
31. Explain about block codes, in which each block of k message bits is encoded into a block<br />
of n > k bits, with an example. [16]<br />
32. Explain various methods for describing Convolutional Codes. [16]<br />
18. Assignment topics<br />
Unit 1:<br />
1. Certain issues of digital transmission,<br />
2. advantages of digital communication systems,<br />
3. Bandwidth- S/N trade off, and Sampling theorem<br />
4. PCM generation and reconstruction<br />
5. Quantization noise, Differential PCM systems (DPCM)<br />
6. Delta modulation,<br />
Unit 2:<br />
1. Coherent ASK detector and non-Coherent ASK detector<br />
2. Coherent FSK detector BPSK<br />
3. Coherent PSK detection<br />
1. A Base band signal receiver,<br />
2. Different pulses and power spectrum densities<br />
3. Probability of error<br />
Unit 3:<br />
1. Conditional entropy and redundancy,<br />
2. Shannon Fano coding<br />
3. Mutual information.<br />
4. Matrix description of linear block codes<br />
5. Error detection and error correction capabilities of linear block codes<br />
Unit 4:<br />
1. Encoding,<br />
2. decoding using state Tree and trellis diagrams<br />
3. Decoding using Viterbi algorithm<br />
Unit 5:<br />
1. Use of spread spectrum<br />
2. Direct sequence spread spectrum (DSSS)<br />
3. Code division multiple access<br />
4. Ranging using DSSS<br />
5. Frequency hopping spread spectrum
19. Unit wise Quiz Questions<br />
Unit 1<br />
CHOOSE THE CORRECT ANSWER<br />
1. A source is transmitting six messages with<br />
probabilities 1/2, 1/4, 1/8, 1/16, 1/32, and 1/32. Then<br />
(a) Source coding improves the error performance of the communication system.<br />
(b) Channel coding will reduce the average source code word length.<br />
(c) Two different source codeword sets can be obtained using Huffman coding.<br />
(d) Two different source codeword sets can be obtained using Shannon-Fano coding<br />
2. A memoryless source emits 2000 binary symbols/sec and each symbol has a probability of<br />
0.25 to be equal to 1 and 0.75 to be equal to 0. The minimum number of bits/sec required for<br />
error free transmission of this source is<br />
(a) 1500<br />
(b) 1734<br />
(c) 1885<br />
(d) 1622<br />
3. A system has a bandwidth of 3 KHz and an S/N ratio of 29 dB at the input of the receiver. If<br />
the bandwidth of the channel gets doubled, then<br />
(a) its capacity gets doubled<br />
(b) its capacity gets halved<br />
(c) the corresponding S/N ratio gets doubled<br />
(d) the corresponding S/N ratio gets halved<br />
5. The capacity of a channel with infinite bandwidth is<br />
(a) finite because of increase in noise power<br />
(b) finite because of finite message word length<br />
(c) infinite because of infinite noise power<br />
(d) infinite because of infinite bandwidth
6. Which of the following is correct?<br />
(a) Channel coding is an efficient way of representing the output of a source<br />
(b) ARQ scheme of error control is applied after the receiver makes a decision about the received bit<br />
(c) ARQ scheme of error control is applied when the receiver is unable to make a decision<br />
about the received bit.<br />
(d) Source coding introduces redundancy<br />
7. Which of the following is correct?<br />
(a) Source encoding reduces the probability of transmission errors<br />
(b) In an (n,k) systematic cyclic code, the sum of two code words is another codeword of the<br />
code.<br />
(c) In a convolutional encoder, the constraint length of the encoder is equal to the tail of the<br />
message sequence+ 1.<br />
(d) In an (n,k) block code, each code word is the cyclic shift of another code word of the code.<br />
8. Automatic Repeat Request is a<br />
(a) error correction scheme<br />
(b) Source coding scheme<br />
(c) error control scheme<br />
(d) data conversion scheme<br />
9. The fundamental limit on the average number of bits/source symbol is<br />
(a) Channel capacity<br />
(b) Entropy of the source<br />
(c) Mutual Information<br />
(d) Information content of the message<br />
10. The Memory length of a convolutional encoder is 5. If a 6 bit message sequence is<br />
applied as the input for the encoder ,then for the last message bit to come out of the encoder,<br />
the number of extra zeros to be applied to the encoder is<br />
(a) 6<br />
(b) 4<br />
(c) 3
(d) 5<br />
Answers<br />
1.C<br />
2.D<br />
3.B<br />
4.A<br />
5.C<br />
6.C<br />
7.C<br />
8.B<br />
9.A<br />
10.D<br />
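The answer to Q.2 above follows from the source-coding theorem: the minimum bit rate for error-free transmission equals the symbol rate times the source entropy. A quick Python check (values taken from the question):

```python
from math import log2

# Q.2: memoryless binary source, 2000 symbols/sec, P(1) = 0.25, P(0) = 0.75
p1, p0, rate = 0.25, 0.75, 2000

# source entropy in bits/symbol
H = -(p1 * log2(p1) + p0 * log2(p0))   # about 0.811 bits/symbol

# minimum bits/sec for error-free transmission = symbol rate x entropy
print(int(rate * H))                   # 1622 -> option (d)
```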
Unit 2<br />
CHOOSE THE CORRECT ANSWER<br />
1. The cascade of two Binary Symmetric Channels is a<br />
(a) symmetric Binary channel<br />
(b) asymmetric Binary channel<br />
(c) asymmetric quaternary channel<br />
(d) symmetric quaternary channel<br />
2. Which of the following is correct?<br />
(a) Source coding introduces redundancy<br />
(b) ARQ scheme of error control is applied after the receiver makes a decision about the<br />
received bit<br />
(c) Channel coding is an efficient way of representing the output of a source<br />
(d) ARQ scheme of error control is applied when the receiver is unable to make a decision<br />
about the received bit.<br />
3. A linear block code with Hamming distance 5 is<br />
(a) Triple error correcting code
(b) Single error correcting and double error detecting code<br />
(c) double error detecting code<br />
(d) Double error correcting code<br />
4. In a Linear Block code<br />
(a) the encoder satisfies superposition principle<br />
(b) the communication channel is a linear system<br />
(c) parity bits of the code word are the linear combination of the message bits<br />
(d) the received power varies linearly with that of the transmitted power<br />
5. The fundamental limit on the average number of bits/source symbol is<br />
(a) Channel capacity<br />
(b) Information content of the message<br />
(c) Mutual Information<br />
(d) Entropy of the source<br />
6. Which of the following involves the effect of the communication channel?<br />
(a) Entropy of the source<br />
(b) Information content of a message<br />
(c) Mutual information<br />
(d) information rate of the source<br />
7. Which of the following provides the facility to recognize the error at the receiver?<br />
(a) Shannon-Fano encoding<br />
(b) differential encoding<br />
(c) Parity Check codes<br />
(d) Huffman encoding<br />
8. A system has a bandwidth of 3 KHz and an S/N ratio of 29dB at the input of the receiver.<br />
If the bandwidth of<br />
The channel gets doubled, then<br />
(a) the corresponding S/N ratio gets doubled
(b) its capacity gets doubled<br />
(c) its capacity gets halved<br />
(d) the corresponding S/N ratio gets halved<br />
9. Information rate of a source can be used to<br />
(a) design the matched filter for the receiver<br />
(b) differentiate between two sources<br />
(c) correct the errors at the receiving side<br />
(d) to find the entropy in bits/message of a source<br />
10. In a communication system, the average amounts of uncertainty associated with the<br />
source, the sink, and the source and sink jointly are 1.0613, 1.5 and 2.432 bits/message respectively.<br />
Then the information transferred by the channel connecting the source and sink, in bits, is<br />
(a) 1.945<br />
(b) 4.9933<br />
(c) 2.8707<br />
(d) 0.1293<br />
11. A BSC has a transition probability of P. The cascade of two such channels is<br />
(a) a symmetric channel with transition probability 2P(1-P)<br />
(b) an asymmetric channel with transition probability 2P<br />
(c) an asymmetric channel with transition probability (1-P)<br />
(d) a symmetric channel with transition probability P²<br />
Answers<br />
1.A<br />
2.D<br />
3.D<br />
4.C<br />
5.D<br />
6. C<br />
7. A
8.C<br />
9.D<br />
10.D<br />
11.A<br />
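Two of the answers above can be verified numerically: Q.10 uses I(X;Y) = H(X) + H(Y) - H(X,Y), and Q.11 follows by adding the two disjoint ways a bit can arrive flipped through the cascade. A short sketch (P = 0.1 is an arbitrary illustrative value):

```python
# Q.10: information transferred = H(X) + H(Y) - H(X,Y)
HX, HY, HXY = 1.0613, 1.5, 2.432
I = HX + HY - HXY
print(round(I, 4))        # 0.1293 -> option (d)

# Q.11: cascading two BSCs with crossover probability P flips a bit overall
# when exactly one stage flips it: P(1-P) + (1-P)P = 2P(1-P), still symmetric
P = 0.1
cascade = P * (1 - P) + (1 - P) * P
print(cascade == 2 * P * (1 - P))   # True
```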
Unit 2<br />
CHOOSE THE CORRECT ANSWER<br />
1. Information rate of a source is<br />
(a) maximum when the source is continuous<br />
(b) the entropy of the source measured in bits/message<br />
(c) a measure of the uncertainty of the communication system<br />
(d) the entropy of the source measured in bits/sec.<br />
2. A Field is<br />
(a) a group with 0 as the multiplicative identity for its members<br />
(b) a group with 0 as the additive inverse for its members<br />
(c) a group with 1 as the additive identity for its members<br />
(d) an Abelian group under addition<br />
3. Under error free reception, the syndrome vector computed for the received cyclic code<br />
word consists of<br />
(a) all ones<br />
(b) alternate 0's and 1's starting with a 0<br />
(c) all zeros<br />
(d) alternate 1's and 0's starting with a 1<br />
4. The memory length of a convolutional encoder is 3. If a 5 bit message sequence is applied<br />
as the input for the encoder, then for the last message bit to come out of the encoder,<br />
the number of extra zeros to be applied to the encoder is<br />
(a) 5<br />
(b) 4<br />
(c) 3
(d) 6<br />
5. The cascade of two binary symmetric channels is a<br />
(a) symmetric binary channel<br />
(b) symmetric quaternary channel<br />
(c) asymmetric quaternary channel<br />
(d) asymmetric binary channel<br />
6. There are four binary words given as 0000, 0001, 0011, 0111. Which of these cannot be a<br />
member of the parity check matrix of a (15,11) linear block code?<br />
(a) 0011<br />
(b) 0000,0001<br />
(c) 0000<br />
(d) 0111<br />
7. The encoder of a (7,4) systematic cyclic encoder with generating polynomial<br />
g(x) = 1 + x² + x³ is basically a<br />
(a) 3 stage shift register<br />
(b) 22 stage shift register<br />
(c) 11 stage shift register<br />
(d) 4 stage shift register<br />
8. A source X with entropy 2 bits/message is connected to the receiver Y through a<br />
noise-free channel. The conditional entropy of the source, given the receiver, is H(X/Y) and the<br />
joint entropy of the source and the receiver is H(X,Y). Then<br />
(a) H(X/Y)=2 bits/message<br />
(b) H(X,Y)=2bits/message<br />
(c) H(X/Y)=1bit/message<br />
(d) H(X,Y)=0bits/message<br />
9. A system has a bandwidth of 4 KHz and an S/N ratio of 28 at the input to the receiver. If<br />
the bandwidth of the channel is doubled, then<br />
(a) S/N ratio at the input of the receiver gets halved<br />
(b) Capacity of the channel gets doubled<br />
(c) Capacity of the channel gets squared<br />
(d) S/N ratio at the input of the receiver gets doubled<br />
10. The parity check matrix of a (6,3) systematic linear block code is<br />
1 0 1 1 0 0<br />
0 1 1 0 1 0<br />
1 1 0 0 0 1<br />
If the syndrome vector computed for the received code word is [0 1 0], then for error<br />
correction, which bit of the received code word is to be complemented?<br />
(a) 3<br />
(b) 4<br />
(c) 5<br />
(d) 2<br />
Answers<br />
1.D<br />
2.D<br />
3.C<br />
4.B<br />
5.A<br />
6.C<br />
7.A<br />
8.B<br />
9.A<br />
10.C<br />
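Q.10 above can be verified mechanically: for a single-bit error, the syndrome equals the column of the parity check matrix at the error position. A small Python sketch, with the matrix as given in the question:

```python
# parity check matrix H of the (6,3) code, one string per row (as in Q.10)
H_rows = ["101100", "011010", "110001"]
syndrome = (0, 1, 0)

# for a single-bit error in position i (1-indexed), the syndrome equals
# column i of H, so locate the matching column
columns = list(zip(*[[int(b) for b in row] for row in H_rows]))
error_position = columns.index(syndrome) + 1
print(error_position)   # 5 -> complement bit 5, option (c)
```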
Unit 3
CHOOSE THE CORRECT ANSWER<br />
1. The minimum number of bits per message required to encode the output of a source<br />
transmitting four different messages with probabilities 0.5, 0.25, 0.125 and 0.125 is<br />
(a) 1.5<br />
(b) 1.75<br />
(c) 2<br />
(d) 1<br />
2. A Binary Erasure channel has P(0/0)=P(1/1)=p; P(k/0)=P(k/1)=q. Its Capacity in<br />
bits/symbol is<br />
(a) p/q<br />
(b) pq<br />
(c) p<br />
(d) q<br />
3. The syndrome S(x) of a cyclic code is given by the remainder of the division<br />
[{V(x)+E(x)}/g(x)], where V(x) is the transmitted code polynomial, E(x) is the error polynomial and g(x)<br />
is the generator polynomial. Then S(x) is also equal to<br />
(a) Remainder of [V(x).E(x)]/g(x)<br />
(b) Remainder of V(x)/g(x)<br />
(c) Remainder of E(x)/g(x)<br />
(d) Remainder of g(x)/V(x)<br />
5. The output of a continuous source is a uniform random variable in the range 0 ≤ x ≤ 4.<br />
The entropy of the source in bits/sample is<br />
(a) 2<br />
(b) 8<br />
(c) 4<br />
(d) 1<br />
6. In a (6,3) systematic linear block code, the number of 6-bit code words that are not<br />
useful is<br />
(a) 45<br />
(b) 64<br />
(c) 8
(d) 56<br />
7. The output of a source is band limited to 6 KHz. It is sampled at a rate 2 KHz above the<br />
Nyquist rate. If the<br />
entropy of the source is 2 bits/sample, then the entropy of the source in bits/sec is<br />
(a) 12Kbps<br />
(b) 32Kbps<br />
(c) 28Kbps<br />
(d) 24Kbps<br />
8. The channel capacity of a BSC with transition probability ½ is<br />
(a) 2bits<br />
(b) 0bits<br />
(c) 1bit<br />
(d) infinity<br />
9. A communication channel is fed with an input signal x(t) and the noise in the channel is<br />
additive. The power received at the receiver input is<br />
(a) Signal power-Noise power<br />
(b) Signal power +Noise Power<br />
(c) Signal power x Noise Power<br />
(d) Signal power /Noise power<br />
10. White noise of PSD η/2 is applied to an ideal LPF with one-sided bandwidth of B Hz. The<br />
filter provides a gain of 2.<br />
If the output power of the filter is 8η, then the value of B in Hz is<br />
(a) 8<br />
(b) 2<br />
(c) 6<br />
(d) 4<br />
Answers<br />
1.B
2.C<br />
3.C<br />
4.A<br />
5.D<br />
6.C<br />
7.B<br />
8.B<br />
9.B<br />
10.A<br />
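Q.1 above is the source entropy of the given distribution, which sets the minimum average number of bits/message. A one-line check:

```python
from math import log2

# Q.1: minimum bits/message = entropy of the source
probs = [0.5, 0.25, 0.125, 0.125]
H = -sum(p * log2(p) for p in probs)
print(H)   # 1.75 -> option (b)
```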
Unit 4<br />
CHOOSE THE CORRECT ANSWER<br />
1. Which of the following is correct?<br />
(a) The syndrome of a received block coded word depends on the received code word<br />
(b) The syndrome for a received block coded word under error free reception consists of all 1's.<br />
(c) The syndrome of a received block coded word depends on the transmitted code word.<br />
(d) The syndrome of a received block coded word depends on the error pattern<br />
2. A Field is<br />
(a) a group with 0 as the multiplicative identity for its members<br />
(b) a group with 0 as the additive inverse for its members<br />
(c) a group with 1 as the additive identity for its members<br />
(d) an Abelian group under addition<br />
3. Variable length source coding provides better coding efficiency, if all the messages of the<br />
source are<br />
(a) Equiprobable<br />
(b) continuously transmitted
(c) discretely transmitted<br />
(d) with different transmission probability<br />
4. Which of the following is correct?<br />
(a) FEC and ARQ are not used for error correction<br />
(b) ARQ is used for error control after the receiver makes a decision about the received bit<br />
(c) FEC is used for error control when the receiver is unable to make a decision about the received bit<br />
(d) FEC is used for error control after the receiver makes a decision about the received bit<br />
5. The source coding efficiency can be increased by<br />
(a) using source extension<br />
(b) decreasing the entropy of the source<br />
(c) using binary coding<br />
(d) increasing the entropy of the source<br />
6. A discrete source X is transmitting m messages and is connected to the receiver Y<br />
through a symmetric channel. The capacity of the channel is given as<br />
(a) log m bits/symbol<br />
(b) H(X)+H(Y)-H(X,Y) bits/symbol<br />
(c) log m-H(X/Y) bits/symbol<br />
(d) log m-H(Y/X) bits/symbol<br />
7. The time domain behavior of a convolutional encoder of code rate 1/3 is defined in terms<br />
of a set of<br />
(a) 3 ramp responses<br />
(b) 3 step responses<br />
(c) 3 sinusoidal responses<br />
(d) 3 impulse responses<br />
8. A source X with entropy 2 bits/message is connected to the receiver Y through a<br />
noise-free channel. The conditional entropy of the source, given the receiver, is H(X/Y) and the<br />
joint entropy of the source and the receiver is H(X,Y). Then<br />
(a) H(X,Y)=2bits/message
(b) H(X/Y)=1bit/message<br />
(c) H(X,Y)=0bits/message<br />
(d) H(X/Y)=2bits/message<br />
9. The fundamental limit on the average number of bits/source symbol is<br />
(a) Mutual Information<br />
(b) Entropy of the source<br />
(c) Information content of the message<br />
(d) Channel capacity<br />
10. The Memory length of a convolutional encoder is 5. If a 6 bit message sequence is<br />
applied as the input for the encoder, then for the last message bit to come out of the encoder,<br />
the number of extra zeros to be applied to the encoder is<br />
(a) 4<br />
(b) 6<br />
(c) 3<br />
(d) 5<br />
Answers<br />
1.D<br />
2.D<br />
3.D<br />
4.D<br />
5.A<br />
6.D<br />
7.D<br />
8.A<br />
9.B<br />
10.B
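Q.7 above states that the time-domain behavior of a rate-1/3 convolutional encoder is defined by a set of 3 impulse responses (generator sequences). A minimal sketch of encoding by mod-2 convolution; the generators (1,1,1), (1,0,1) and (0,1,1) are assumed purely for illustration:

```python
def mod2_conv(msg, gen):
    # mod-2 (XOR) convolution of a message with one generator sequence
    out = [0] * (len(msg) + len(gen) - 1)
    for i, m in enumerate(msg):
        for j, g in enumerate(gen):
            out[i + j] ^= m & g
    return out

def encode_rate_one_third(msg, gens):
    # each output frame interleaves one bit from each generator's output,
    # so every message bit produces three coded bits
    streams = [mod2_conv(msg, g) for g in gens]
    return [bit for frame in zip(*streams) for bit in frame]

gens = [(1, 1, 1), (1, 0, 1), (0, 1, 1)]   # assumed generators, K = 3
print(encode_rate_one_third([1, 0, 1], gens))
```

Each generator is literally the encoder's response to a single 1 at the input, which is why the set of impulse responses fully describes the encoder.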
Unit 5<br />
CHOOSE THE CORRECT ANSWER<br />
1. If 'a' is an element of a Field 'F', then its additive inverse is<br />
(a) -a<br />
(b) 0<br />
(c) a<br />
(d) 1<br />
2. Relative to Hard decision decoding, soft decision decoding results in<br />
(a) better coding gain<br />
(b) lesser coding gain<br />
(c) less circuit complexity<br />
(d) better bit error probability<br />
3. Under error free reception, the syndrome vector computed for the received cyclic<br />
code word consists of<br />
(a) alternate 0's and 1's starting with a 0<br />
(b) all ones<br />
(c) all zeros<br />
(d) alternate 1's and 0's starting with a 1<br />
4. Error free communication may be possible by<br />
(a) increasing transmission power to the required level<br />
(b) providing redundancy during transmission<br />
(c) increasing the channel bandwidth<br />
(d) reducing redundancy during transmission<br />
5. A discrete source X is transmitting m messages and is connected to the receiver Y through<br />
a symmetric channel. The capacity of the channel is given as<br />
(a) H(X)+H(Y)-H(X,Y)bits/symbol<br />
(b) log m-H(X/Y)bits/symbol<br />
(c) log m-H(Y/X)bits/symbol
(d) log m bits/symbol<br />
6. The encoder of a (7,4) systematic cyclic encoder with generating polynomial g(x) = 1 + x² + x³ is<br />
basically a<br />
(a) 11 stage shift register<br />
(b) 4 stage shift register<br />
(c) 3 stage shift register<br />
(d) 22 stage shift register<br />
7. A channel with independent input and output acts as<br />
(a) Gaussian channel<br />
(b) channel with maximum capacity<br />
(c) lossless network<br />
(d) resistive network<br />
8. A system has a bandwidth of 4 KHz and an S/N ratio of 28 at the input to the receiver. If<br />
the bandwidth of the channel is doubled, then<br />
(a) S/N ratio at the input of the receiver gets halved<br />
(b) Capacity of the channel gets doubled<br />
(c) S/N ratio at the input of the receiver gets doubled<br />
(d) Capacity of the channel gets squared<br />
9. A source is transmitting four messages with equal probability. Then, for optimum Source<br />
coding efficiency.<br />
(a) necessarily, variable length coding schemes should be used<br />
(b) Variable length coding schemes need not necessarily be used<br />
(c) Convolutional codes should be used<br />
(d) Fixed length coding schemes should not be used<br />
10. The maximum average amount of information content measured in bits/sec associated<br />
with the output of a discrete Information source transmitting 8 messages and 2000<br />
messages/sec is<br />
(a) 16Kbps<br />
(b) 4Kbps
(c) 3Kbps<br />
(d) 6Kbps<br />
Answers<br />
1.A<br />
2.A<br />
3.B<br />
4.B<br />
5.C<br />
6.C<br />
7.D<br />
8.A<br />
9.B<br />
10. D<br />
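Q.10 above is maximized when the 8 messages are equiprobable, so the maximum information rate is the message rate times log2(8). A quick check:

```python
from math import log2

# Q.10: 8 equiprobable messages -> log2(8) = 3 bits/message maximum entropy;
# at 2000 messages/sec the maximum information rate is 6000 bits/sec
messages, rate = 8, 2000
max_rate = rate * log2(messages)
print(max_rate)   # 6000.0 -> 6 Kbps, option (d)
```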
Unit 5<br />
CHOOSE THE CORRECT ANSWER<br />
1. Two binary random variables X and Y are distributed according to the joint distribution<br />
given as P(X=0, Y=0) = P(X=0, Y=1) = P(X=1, Y=1) = 1/3. Then,<br />
(a) H(X)+H(Y)=1.<br />
(b) H(Y)=2.H(X)<br />
(c) H(X)=H(Y)<br />
(d) H(X)=2.H(Y)<br />
2. Relative to Hard decision decoding, soft decision decoding results in<br />
(a) less circuit complexity<br />
(b) better bit error probability<br />
(c) better coding gain<br />
(d) lesser coding gain
3. Under error free reception, the syndrome vector computed for the received cyclic code<br />
word consists of<br />
(a) alternate 1‘s and 0‘s starting with a 1<br />
(b) all ones<br />
(c) all zeros<br />
(d) alternate 0‘s and 1‘s starting with a 0<br />
4. Source 1 is transmitting two messages with probabilities 0.2 and 0.8 and Source 2 is<br />
transmitting two messages with probabilities 0.5 and 0.5. Then<br />
(a) Maximum uncertainty is associated with Source 1<br />
(b) Both the sources 1 and 2 are having maximum amount of uncertainty associated<br />
(c) There is no uncertainty associated with either of the two sources.<br />
(d) Maximum uncertainty is associated with Source 2<br />
5. The Hamming weight of the (6,3) linear block coded word 101011 is<br />
(a) 5<br />
(b) 4<br />
(c) 2<br />
(d) 3<br />
6. If X is the transmitter and Y is the receiver and if the channel is the noise free, then the<br />
mutual information I(X,Y) is equal to<br />
(a) Conditional Entropy of the receiver, given the source<br />
(b) Conditional Entropy of the source, given the receiver<br />
(c) Entropy of the source<br />
(d) Joint entropy of the source and receiver<br />
7. In a Linear Block code<br />
(a) the received power varies linearly with that of the transmitted power<br />
(b) parity bits of the code word are the linear combination of the message bits<br />
(c) the communication channel is a linear system<br />
(d) the encoder satisfies superposition principle
8 The fundamental limit on the average number of bits/source symbol is<br />
(a) Mutual Information<br />
(b) Channel capacity<br />
(c) Information content of the message<br />
(d) Entropy of the source<br />
9. A source is transmitting four messages with equal probability. Then for optimum Source<br />
coding efficiency,<br />
(a) necessarily, variable length coding schemes should be used<br />
(b) Variable length coding schemes need not necessarily be used<br />
(c) Convolutional codes should be used<br />
(d) Fixed length coding schemes should not be used<br />
10. If a memoryless source of information rate R is connected to a channel with a channel<br />
capacity C, then on which of the following statements, the channel coding for the output of<br />
the source is based?<br />
(a) Minimum number of bits required to encode the output of the source is its entropy<br />
(b) R must be less than or equal to C<br />
(c) R must be greater than or equal to C<br />
(d) R must be exactly equal to C<br />
Answers<br />
1.C<br />
2.C<br />
3.C<br />
4.D<br />
5.B<br />
6.C<br />
7.B<br />
8.D<br />
9.B
10.B<br />
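Two quick checks for the set above. Q.5 is just a bit count; for Q.1, whichever three (X,Y) pairs carry probability 1/3 each, the marginals of X and Y come out as {2/3, 1/3} in some order, and the binary entropy function is symmetric, so H(X) = H(Y):

```python
from math import log2

# Q.5: Hamming weight = number of nonzero bits in the code word
codeword = "101011"
weight = codeword.count("1")
print(weight)   # 4 -> option (b)

# Q.1: binary entropy is symmetric, H(p) = H(1-p), so marginals {2/3, 1/3}
# and {1/3, 2/3} give equal entropies -> option (c)
def Hb(p):
    return -(p * log2(p) + (1 - p) * log2(1 - p))

print(abs(Hb(2 / 3) - Hb(1 / 3)) < 1e-9)   # True
```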
Code No: 56026 Set No. 1<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. II Sem., I Mid-Term Examinations, February – 2012<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions.<br />
All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.<br />
I Choose the correct alternative:<br />
1. The minimum band width required to multiplex 12 different message signals each of band<br />
width 10KHz is [ ]<br />
A) 60KHz B) 120KHz C) 180KHz D) 160KHz<br />
2. In an 8-PSK system, adjacent phasors differ by an angle (in radians) given by [ ]<br />
A) π/4 B) π/8 C) π/6 D) π/2<br />
3. Band Width efficiency of a Digital Modulation Method is [ ]<br />
A) (Minimum Band width)/ (Transmission Bit Rate)<br />
B) (Power required)/( Minimum Band width)<br />
C) (Transmission Bit rate)/ (Minimum Band width)<br />
D) (Power Saved during transmission)/(Minimum Band width)<br />
4. The Auto-correlation function of White Noise is [ ]<br />
A) Impulse function B) Constant C) Sampling function D) Step function<br />
5. The minimum band width required for a BPSK signal is equal to [ ]<br />
A) one fourth of bit rate B) twice the bit rate C) half of the bit rate D) bit rate<br />
6. Companding results in [ ]<br />
A)More S/N ratio at higher amplitudes of the base band signal<br />
B) More S/N ratio at lower amplitudes of the base band signal<br />
C) Uniform S/N ratio throughout the base band signal<br />
D) Better S/N ratio at lower frequencies<br />
7. A uniform quantizer has a step size of 0.05 volts. This quantizer suffers from a<br />
maximum quantization error of [ ]<br />
A) 0.1V B) 0.025 V C) 0.8 V D) 0.05 V<br />
8. In Non-Coherent demodulation, the receiver [ ]<br />
A) relies on carrier phase B) relies on the carrier amplitude<br />
C) makes an error with less probability D) uses a carrier recovery circuit<br />
9. The advantage of Manchester encoding is [ ]<br />
A) less band width requirement B) less bit energy required for transmission<br />
C) less probability of error D) less bit duration<br />
10. Granular Noise in Delta Modulation system can be reduced by<br />
A) using a square law device B) increasing the step size<br />
C) decreasing the step size D) adjusting the rate of rise of the base band signal<br />
Code No: 56026 :2: Set No. 1<br />
II Fill in the blanks<br />
11. Non-coherent detection of FSK signal results in ____________________<br />
12. _____________ is used as a Predictor in a DPCM transmitter.<br />
13. The Nyquist rate of sampling of an analog signal S(t) for alias-free reconstruction is<br />
5000 samples/sec. For the signal x(t) = [S(t)]², the corresponding sampling rate in<br />
samples/sec is __________________<br />
14. A Matched filter is used to __________________________<br />
15. A signal extending over -4v to +4v is quantized into 8 levels. The maximum possible<br />
quantization error obtainable is _____________V.<br />
16. The advantage of DPCM over Delta Modulation is _________________________<br />
17. The phases in a QPSK system can be expressed as ______________________<br />
18. The Synchronization is defined as _______________________<br />
19. The sampling rate in Delta Modulation is _______________than PCM.<br />
20. The bit error Probability of BPSK system is __________________that of QPSK.
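Objective Q.7 and blank 15 of the paper above both reduce to the same fact: a uniform quantizer's maximum quantization error is half its step size. A quick numeric check:

```python
# Objective Q.7: uniform quantizer with step size 0.05 V
step_q7 = 0.05
max_err_q7 = step_q7 / 2
print(max_err_q7)        # 0.025 V -> option (B)

# Blank 15: signal spanning -4 V to +4 V quantized into 8 levels
step_b15 = (4 - (-4)) / 8    # 1 V per level
max_err_b15 = step_b15 / 2
print(max_err_b15)       # 0.5 V
```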
code No: 56026 Set No. 2<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. II Sem., I Mid-Term Examinations, February – 2012<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.<br />
I Choose the correct alternative:<br />
1. The Auto-correlation function of White Noise is [ ]<br />
A) Impulse function B) Constant C) Sampling function D) Step function<br />
2. The minimum band width required for a BPSK signal is equal to [ ]<br />
A) one fourth of bit rate B) twice the bit rate C) half of the bit rate D) bit rate<br />
3. Companding results in [ ]<br />
A)More S/N ratio at higher amplitudes of the base band signal<br />
B) More S/N ratio at lower amplitudes of the base band signal<br />
C) Uniform S/N ratio throughout the base band signal<br />
D) Better S/N ratio at lower frequencies<br />
4. A uniform quantizer has a step size of 0.05 volts. This quantizer suffers from a<br />
maximum quantization error of [ ]<br />
A) 0.1V B) 0.025 V C) 0.8 V D) 0.05 V<br />
5. In Non-Coherent demodulation, the receiver [ ]<br />
A) relies on carrier phase B) relies on the carrier amplitude<br />
C) makes an error with less probability D) uses a carrier recovery circuit<br />
6. The advantage of Manchester encoding is [ ]<br />
A) less band width requirement B) less bit energy required for transmission<br />
C) less probability of error D) less bit duration<br />
7. Granular Noise in Delta Modulation system can be reduced by<br />
A) using a square law device B) increasing the step size<br />
C) decreasing the step size D) adjusting the rate of rise of the base band signal<br />
8. The minimum band width required to multiplex 12 different message signals each of band<br />
width 10KHz is [ ]<br />
A) 60KHz B) 120KHz C) 180KHz D) 160KHz<br />
9. In an 8-PSK system, adjacent phasors differ by an angle (in radians) given by [ ]<br />
A) π/4 B) π/8 C) π/6 D) π/2<br />
10. Band Width efficiency of a Digital Modulation Method is [ ]<br />
A) (Minimum Band width)/ (Transmission Bit Rate)<br />
B) (Power required)/( Minimum Band width)<br />
C) (Transmission Bit rate)/ (Minimum Band width)<br />
D) (Power Saved during transmission)/(Minimum Band width)<br />
code No: 56026 :2: Set No. 2<br />
II Fill in the blanks<br />
11. A Matched filter is used to __________________________<br />
12. A signal extending over -4v to +4v is quantized into 8 levels. The maximum possible<br />
quantization error obtainable is _____________V.<br />
13. The advantage of DPCM over Delta Modulation is _________________________<br />
14. The phases in a QPSK system can be expressed as ______________________<br />
15. The Synchronization is defined as _______________________<br />
16. The sampling rate in Delta Modulation is _______________than PCM.<br />
17. The bit error Probability of BPSK system is __________________that of QPSK.<br />
18. Non-coherent detection of FSK signal results in ____________________<br />
19. _____________ is used as a Predictor in a DPCM transmitter.<br />
20. The Nyquist rate of sampling of an analog signal S(t) for alias-free reconstruction is<br />
5000 samples/sec. For the signal x(t) = [S(t)]², the corresponding sampling rate in<br />
samples/sec is __________________<br />
-oOo-<br />
Code No: 56026 Set No. 3<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. II Sem., I Mid-Term Examinations, February – 2012<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.<br />
I Choose the correct alternative:<br />
1. Companding results in [ ]<br />
A)More S/N ratio at higher amplitudes of the base band signal<br />
B) More S/N ratio at lower amplitudes of the base band signal<br />
C) Uniform S/N ratio throughout the base band signal<br />
D) Better S/N ratio at lower frequencies<br />
2. A uniform quantizer has a step size of 0.05 volts. This quantizer suffers from a<br />
maximum quantization error of [ ]<br />
A) 0.1V B) 0.025 V C) 0.8 V D) 0.05 V<br />
3. In Non-Coherent demodulation, the receiver [ ]<br />
A) relies on carrier phase B) relies on the carrier amplitude<br />
C) makes an error with less probability D) uses a carrier recovery circuit<br />
4. The advantage of Manchester encoding is [ ]<br />
A) less band width requirement B) less bit energy required for transmission<br />
C) less probability of error D) less bit duration<br />
5. Granular Noise in Delta Modulation system can be reduced by<br />
A) using a square law device B) increasing the step size<br />
C) decreasing the step size D) adjusting the rate of rise of the base band signal<br />
6. The minimum band width required to multiplex 12 different message signals each of band<br />
width 10KHz is [ ]<br />
A) 60KHz B) 120KHz C) 180KHz D) 160KHz<br />
7. In an 8-PSK system, adjacent phasors differ by an angle (in radians) given by [ ]<br />
A) π/4 B) π/8 C) π/6 D) π/2<br />
8. Band Width efficiency of a Digital Modulation Method is [ ]<br />
A) (Minimum Band width)/ (Transmission Bit Rate)
B) (Power required)/( Minimum Band width)<br />
C) (Transmission Bit rate)/ (Minimum Band width)<br />
D) (Power Saved during transmission)/(Minimum Band width)<br />
9. The Auto-correlation function of White Noise is [ ]<br />
A) Impulse function B) Constant C) Sampling function D) Step function<br />
10. The minimum band width required for a BPSK signal is equal to [ ]<br />
A) one fourth of bit rate B) twice the bit rate C) half of the bit rate D) bit rate<br />
Code No: 56026 :2: Set No. 3<br />
II Fill in the blanks<br />
11. The advantage of DPCM over Delta Modulation is _________________________<br />
12. The phases in a QPSK system can be expressed as ______________________<br />
13. The Synchronization is defined as _______________________<br />
14. The sampling rate in Delta Modulation is _______________than PCM.<br />
15. The bit error Probability of BPSK system is __________________that of QPSK.<br />
16. Non-coherent detection of FSK signal results in ____________________<br />
17. _____________ is used as a Predictor in a DPCM transmitter.<br />
18. The Nyquist rate of sampling of an analog signal S(t) for alias-free reconstruction is<br />
5000 samples/sec. For the signal x(t) = [S(t)]², the corresponding sampling rate in<br />
samples/sec is __________________<br />
19. A Matched filter is used to __________________________<br />
20. A signal extending over -4v to +4v is quantized into 8 levels. The maximum possible<br />
quantization error obtainable is _____________V.<br />
-oOo-<br />
Code No: 56026 Set No. 4<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. II Sem., I Mid-Term Examinations, February – 2012<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.<br />
I Choose the correct alternative:<br />
1. In Non-Coherent demodulation, the receiver [ ]<br />
A) relies on carrier phase B) relies on the carrier amplitude<br />
C) makes an error with less probability D) uses a carrier recovery circuit<br />
2. The advantage of Manchester encoding is [ ]<br />
A) less band width requirement B) less bit energy required for transmission<br />
C) less probability of error D) less bit duration<br />
3. Granular Noise in Delta Modulation system can be reduced by<br />
A) using a square law device B) increasing the step size<br />
C) decreasing the step size D) adjusting the rate of rise of the base band signal<br />
4. The minimum band width required to multiplex 12 different message signals each of band<br />
width 10KHz is [ ]<br />
A) 60KHz B) 120KHz C) 180KHz D) 160KHz<br />
5. In an 8-PSK system, adjacent phasors differ by an angle (in radians) given by [ ]<br />
A) π/4 B) π/8 C) π/6 D) π/2<br />
6. Band Width efficiency of a Digital Modulation Method is [ ]
A) (Minimum Band width)/ (Transmission Bit Rate)<br />
B) (Power required)/( Minimum Band width)<br />
C) (Transmission Bit rate)/ (Minimum Band width)<br />
D) (Power Saved during transmission)/(Minimum Band width)<br />
7. The Auto-correlation function of White Noise is [ ]<br />
A) Impulse function B) Constant C) Sampling function D) Step function<br />
8. The minimum band width required for a BPSK signal is equal to [ ]<br />
A) one fourth of bit rate B) twice the bit rate C) half of the bit rate D) bit rate<br />
9. Companding results in [ ]<br />
A)More S/N ratio at higher amplitudes of the base band signal<br />
B) More S/N ratio at lower amplitudes of the base band signal<br />
C) Uniform S/N ratio throughout the base band signal<br />
D) Better S/N ratio at lower frequencies<br />
10. A uniform quantizer has a step size of 0.05 volts. This quantizer suffers from a<br />
maximum quantization error of [ ]<br />
A) 0.1V B) 0.025 V C) 0.8 V D) 0.05 V<br />
Code No: 56026 :2: Set No. 4<br />
II Fill in the blanks<br />
11. The Synchronization is defined as _______________________<br />
12. The sampling rate in Delta Modulation is _______________than PCM.<br />
13. The bit error Probability of BPSK system is __________________that of QPSK.<br />
14. Non-coherent detection of FSK signal results in ____________________<br />
15. _____________ is used as a Predictor in a DPCM transmitter.<br />
16. The Nyquist rate of sampling of an analog signal S(t) for alias-free reconstruction is<br />
5000 samples/sec. For a signal x(t) = [S(t)]², the corresponding sampling rate in<br />
samples/sec is __________________<br />
17. A Matched filter is used to __________________________<br />
18. A signal extending over -4 V to +4 V is quantized into 8 levels. The maximum possible<br />
quantization error obtainable is _____________ V.<br />
19. The advantage of DPCM over Delta Modulation is _________________________<br />
20. The phases in a QPSK system can be expressed as ______________________<br />
-oOo-<br />
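The fill-in on the Nyquist rate of x(t) = [S(t)]² rests on the fact that squaring a band-limited signal convolves its spectrum with itself, doubling the bandwidth and hence the Nyquist rate. A sketch of the arithmetic (function name is my own):<br />

```python
def nyquist_rate_of_power(base_nyquist_rate, n):
    """Raising a band-limited signal to the n-th power multiplies its
    bandwidth by n (repeated spectral convolution), so the Nyquist
    rate is multiplied by n as well."""
    return n * base_nyquist_rate

# S(t) sampled at 5000 samples/sec -> [S(t)]^2 needs 10000 samples/sec
print(nyquist_rate_of_power(5000, 2))  # -> 10000
```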
Code No: 56026 Set No. 1<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. II Sem., II Mid-Term Examinations, April – 2012<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.<br />
I Choose the correct alternative:<br />
1. Information rate of a source is [ ]<br />
A) maximum when the source is continuous B) the entropy of the source measured in<br />
bits/message<br />
C) a measure of the uncertainty of the communication system<br />
D) the entropy of the source measured in bits/sec.<br />
2. The Hamming Weight of the (6,3) Linear Block coded word 101011 [ ]
A) 5 B) 4 C) 2 D) 3<br />
3. Which of the following can be the generating polynomial for a (7,4) systematic Cyclic<br />
code? [ ]<br />
A) x³+x+1 B) x⁵+x²+1 C) x⁴+x³+1 D) x⁷+x⁴+x³+1<br />
4. In a Linear Block code [ ]<br />
A) the received power varies linearly with that of the transmitted power<br />
B) parity bits of the code word are the linear combination of the message bits<br />
C) the communication channel is a linear system<br />
D) the encoder satisfies super position principle<br />
5. The fundamental limit on the average number of bits/source symbol is [ ]<br />
A) Mutual Information B) Channel capacity<br />
C) Information content of the message D) Entropy of the source<br />
6. A system has a band width of 3KHz and an S/N ratio of 29dB at the input of the receiver.<br />
If the band width of the channel gets doubled, then [ ]<br />
A) its capacity gets halved B) the corresponding S/N ratio gets doubled<br />
C) the corresponding S/N ratio gets halved D) its capacity gets doubled<br />
7. The Channel Matrix of a Noiseless channel [ ]<br />
A) consists of a single nonzero number in each column<br />
B) consists of a single nonzero number in each row<br />
C) is a square Matrix<br />
D) is an Identity Matrix<br />
8. A source emits messages A and B with probability 0.8 and 0.2 respectively. The<br />
redundancy provided by the optimum source-coding scheme for the above Source is [ ]<br />
A) 27% B) 72% C) 55% D) 45%<br />
9. A source X and the receiver Y are connected by a noise free channel. Its capacity is [ ]<br />
A) Max H(Y/X) B) Max H(X) C) Max H(X/Y) D) Max H(X,Y)<br />
10. Exchange between Band width and Signal noise ratio can be justified based on [ ]<br />
A) Hartley-Shannon's Law B) Shannon's source coding Theorem<br />
C) Shannon's limit D) Shannon's channel coding Theorem<br />
Code No: 56026 Set No. 1<br />
DIGITAL COMMUNICATIONS<br />
KEYS<br />
I Choose the correct alternative:<br />
1. D<br />
2. B<br />
3. A<br />
4. B<br />
5. D<br />
6. C<br />
7. D<br />
8. A<br />
9. B<br />
10. A<br />
II Fill in the blanks<br />
11. 3
12. to transmit the information signal using orthogonal codes<br />
13. symmetric Binary channel<br />
14. Source extension<br />
15. It has soft capacity limit<br />
16. Variable length coding scheme<br />
17. 18<br />
18. Better bit error probability<br />
19. ZERO<br />
20. It has soft capacity limit<br />
-oOo-<br />
Code No: 56026 :2: Set No. 1<br />
II Fill in the blanks<br />
11. The Parity check matrix of a linear block code is<br />
1 0 1 1 0 0<br />
0 1 1 0 1 0<br />
1 1 0 0 0 1<br />
Its Hamming distance is ___________________<br />
12. The significance of PN sequence in CDMA is ________________<br />
13. The cascade of two Binary Symmetric Channels is a __________________________<br />
14. The source coding efficiency can be increased by using _______________________<br />
15. The advantage of Spread Spectrum Modulation schemes over other modulations is<br />
_________________<br />
16. Entropy coding is a _____________________<br />
17. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word<br />
length of 6.The code word length obtained from the encoder ( in bits) is<br />
_____________<br />
18. Relative to Hard decision decoding, soft decision decoding results in _____________<br />
19. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the<br />
code is defined by the set of all code vectors for which Hᵀ · T = ______________<br />
20. The advantage of CDMA over Frequency hopping is ____________<br />
-oOo-
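Fill-in 11 above asks for the Hamming distance of the code whose parity-check matrix is given. For a linear code this equals the minimum weight of a nonzero codeword, which for a small (6,3) code can simply be brute-forced; an illustrative sketch (not part of the paper):<br />

```python
from itertools import product

# Parity-check matrix from fill-in 11 (a (6,3) linear block code)
H = [[1, 0, 1, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]

def min_distance(H, n=6):
    """Minimum distance of a linear code = smallest Hamming weight of a
    nonzero vector c satisfying the syndrome check H.c^T = 0 (mod 2)."""
    best = n
    for c in product([0, 1], repeat=n):
        if any(c) and all(sum(h * b for h, b in zip(row, c)) % 2 == 0
                          for row in H):
            best = min(best, sum(c))
    return best

print(min_distance(H))  # -> 3, agreeing with the printed key
```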
Code No: 56026 Set No. 2<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. II Sem., II Mid-Term Examinations, April – 2012<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.<br />
I Choose the correct alternative:<br />
1. In a Linear Block code [ ]<br />
A) the received power varies linearly with that of the transmitted power<br />
B) parity bits of the code word are the linear combination of the message bits<br />
C) the communication channel is a linear system<br />
D) the encoder satisfies super position principle<br />
2. The fundamental limit on the average number of bits/source symbol is [ ]<br />
A) Mutual Information B) Channel capacity<br />
C) Information content of the message D) Entropy of the source<br />
3. A system has a band width of 3KHz and an S/N ratio of 29dB at the input of the receiver.<br />
If the band width of the channel gets doubled, then [ ]<br />
A) its capacity gets halved B) the corresponding S/N ratio gets doubled<br />
C) the corresponding S/N ratio gets halved D) its capacity gets doubled<br />
4. The Channel Matrix of a Noiseless channel [ ]<br />
A) consists of a single nonzero number in each column<br />
B) consists of a single nonzero number in each row<br />
C) is a square Matrix<br />
D) is an Identity Matrix<br />
5. A source emits messages A and B with probability 0.8 and 0.2 respectively. The<br />
redundancy provided by the optimum source-coding scheme for the above Source is [ ]<br />
A) 27% B) 72% C) 55% D) 45%<br />
6. A source X and the receiver Y are connected by a noise free channel. Its capacity is [ ]<br />
A) Max H(Y/X) B) Max H(X) C) Max H(X/Y) D) Max H(X,Y)<br />
7. Exchange between Band width and Signal noise ratio can be justified based on [ ]<br />
A) Hartley-Shannon's Law B) Shannon's source coding Theorem<br />
C) Shannon's limit D) Shannon's channel coding Theorem<br />
8. Information rate of a source is [ ]<br />
A) maximum when the source is continuous B) the entropy of the source measured in<br />
bits/message<br />
C) a measure of the uncertainty of the communication system<br />
D) the entropy of the source measured in bits/sec.<br />
9. The Hamming Weight of the (6,3) Linear Block coded word 101011 [ ]<br />
A) 5 B) 4 C) 2 D) 3<br />
10. Which of the following can be the generating polynomial for a (7,4) systematic Cyclic<br />
code? [ ]<br />
A) x³+x+1 B) x⁵+x²+1 C) x⁴+x³+1 D) x⁷+x⁴+x³+1<br />
Code No: 56026 :2: Set No. 2<br />
II Fill in the blanks<br />
11. The source coding efficiency can be increased by using _______________________<br />
12. The advantage of Spread Spectrum Modulation schemes over other modulations is<br />
_________________<br />
13. Entropy coding is a _____________________<br />
14. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word<br />
length of 6.The code word length obtained from the encoder ( in bits) is<br />
_____________<br />
15. Relative to Hard decision decoding, soft decision decoding results in _____________<br />
16. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the<br />
code is defined by the set of all code vectors for which Hᵀ · T = ______________<br />
17. The advantage of CDMA over Frequency hopping is ____________<br />
18. The Parity check matrix of a linear block code is<br />
1 0 1 1 0 0<br />
0 1 1 0 1 0<br />
1 1 0 0 0 1<br />
Its Hamming distance is ___________________<br />
19. The significance of PN sequence in CDMA is ________________<br />
20. The cascade of two Binary Symmetric Channels is a __________________________<br />
-oOo-<br />
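The redundancy question in the sets above (P(A) = 0.8, P(B) = 0.2) follows from redundancy = 1 − H(S)/L, where the optimum binary code for a two-symbol source uses L = 1 bit/symbol. A sketch of the computation (names are illustrative):<br />

```python
from math import log2

def redundancy(probs):
    """Redundancy = 1 - H(S)/L; for a two-symbol source the optimum
    binary code has L = 1 bit/symbol, so redundancy = 1 - H(S)."""
    entropy = -sum(p * log2(p) for p in probs)
    return 1 - entropy

# P(A)=0.8, P(B)=0.2 -> H ~ 0.722 bits, redundancy ~ 0.278,
# i.e. roughly the "27%" option in the paper.
print(round(redundancy([0.8, 0.2]), 3))
```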
Code No: 56026 Set No. 3<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. II Sem., II Mid-Term Examinations, April – 2012<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.<br />
I Choose the correct alternative:<br />
1. A system has a band width of 3KHz and an S/N ratio of 29dB at the input of the receiver.<br />
If the band width of the channel gets doubled, then [ ]<br />
A) its capacity gets halved B) the corresponding S/N ratio gets doubled<br />
C) the corresponding S/N ratio gets halved D) its capacity gets doubled<br />
2. The Channel Matrix of a Noiseless channel [ ]<br />
A) consists of a single nonzero number in each column<br />
B) consists of a single nonzero number in each row<br />
C) is a square Matrix<br />
D) is an Identity Matrix<br />
3. A source emits messages A and B with probability 0.8 and 0.2 respectively. The<br />
redundancy provided by the optimum source-coding scheme for the above Source is [ ]<br />
A) 27% B) 72% C) 55% D) 45%<br />
4. A source X and the receiver Y are connected by a noise free channel. Its capacity is [ ]<br />
A) Max H(Y/X) B) Max H(X) C) Max H(X/Y) D) Max H(X,Y)<br />
5. Exchange between Band width and Signal noise ratio can be justified based on [ ]<br />
A) Hartley-Shannon's Law B) Shannon's source coding Theorem<br />
C) Shannon's limit D) Shannon's channel coding Theorem<br />
6. Information rate of a source is [ ]
A) maximum when the source is continuous B) the entropy of the source measured in<br />
bits/message<br />
C) a measure of the uncertainty of the communication system<br />
D) the entropy of the source measured in bits/sec.<br />
7. The Hamming Weight of the (6,3) Linear Block coded word 101011 [ ]<br />
A) 5 B) 4 C) 2 D) 3<br />
8. Which of the following can be the generating polynomial for a (7,4) systematic Cyclic<br />
code? [ ]<br />
A) x³+x+1 B) x⁵+x²+1 C) x⁴+x³+1 D) x⁷+x⁴+x³+1<br />
9. In a Linear Block code [ ]<br />
A) the received power varies linearly with that of the transmitted power<br />
B) parity bits of the code word are the linear combination of the message bits<br />
C) the communication channel is a linear system<br />
D) the encoder satisfies super position principle<br />
10. The fundamental limit on the average number of bits/source symbol is [ ]<br />
A) Mutual Information B) Channel capacity<br />
C) Information content of the message D) Entropy of the source<br />
Code No: 56026 :2: Set No. 3<br />
II Fill in the blanks<br />
11. Entropy coding is a _____________________<br />
12. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word<br />
length of 6.The code word length obtained from the encoder ( in bits) is<br />
_____________<br />
13. Relative to Hard decision decoding, soft decision decoding results in _____________<br />
14. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the<br />
code is defined by the set of all code vectors for which Hᵀ · T = ______________<br />
15. The advantage of CDMA over Frequency hopping is ____________<br />
16. The Parity check matrix of a linear block code is<br />
1 0 1 1 0 0<br />
0 1 1 0 1 0<br />
1 1 0 0 0 1<br />
Its Hamming distance is ___________________<br />
17. The significance of PN sequence in CDMA is ________________<br />
18. The cascade of two Binary Symmetric Channels is a __________________________<br />
19. The source coding efficiency can be increased by using _______________________<br />
20. The advantage of Spread Spectrum Modulation schemes over other modulations is<br />
_________________
Code No: 56026 Set No. 4<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. II Sem., II Mid-Term Examinations, April – 2012<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.<br />
I Choose the correct alternative:<br />
1. A source emits messages A and B with probability 0.8 and 0.2 respectively. The<br />
redundancy provided by the optimum source-coding scheme for the above Source is [ ]<br />
A) 27% B) 72% C) 55% D) 45%<br />
2. A source X and the receiver Y are connected by a noise free channel. Its capacity is [ ]<br />
A) Max H(Y/X) B) Max H(X) C) Max H(X/Y) D) Max H(X,Y)<br />
3. Exchange between Band width and Signal noise ratio can be justified based on [ ]<br />
A) Hartley-Shannon's Law B) Shannon's source coding Theorem<br />
C) Shannon's limit D) Shannon's channel coding Theorem<br />
4. Information rate of a source is [ ]<br />
A) maximum when the source is continuous B) the entropy of the source measured in<br />
bits/message<br />
C) a measure of the uncertainty of the communication system<br />
D) the entropy of the source measured in bits/sec.<br />
5. The Hamming Weight of the (6,3) Linear Block coded word 101011 [ ]<br />
A) 5 B) 4 C) 2 D) 3<br />
6. Which of the following can be the generating polynomial for a (7,4) systematic Cyclic<br />
code? [ ]<br />
A) x³+x+1 B) x⁵+x²+1 C) x⁴+x³+1 D) x⁷+x⁴+x³+1<br />
7. In a Linear Block code [ ]<br />
A) the received power varies linearly with that of the transmitted power<br />
B) parity bits of the code word are the linear combination of the message bits<br />
C) the communication channel is a linear system<br />
D) the encoder satisfies super position principle<br />
8. The fundamental limit on the average number of bits/source symbol is [ ]<br />
A) Mutual Information B) Channel capacity<br />
C) Information content of the message D) Entropy of the source<br />
9. A system has a band width of 3KHz and an S/N ratio of 29dB at the input of the receiver.<br />
If the band width of the channel gets doubled, then [ ]<br />
A) its capacity gets halved B) the corresponding S/N ratio gets doubled<br />
C) the corresponding S/N ratio gets halved D) its capacity gets doubled<br />
10. The Channel Matrix of a Noiseless channel [ ]<br />
A) consists of a single nonzero number in each column<br />
B) consists of a single nonzero number in each row<br />
C) is a square Matrix<br />
D) is an Identity Matrix<br />
Code No: 56026 :2: Set No. 4<br />
II Fill in the blanks<br />
11. Relative to Hard decision decoding, soft decision decoding results in _____________<br />
12. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the<br />
code is defined by the set of all code vectors for which Hᵀ · T = ______________<br />
13. The advantage of CDMA over Frequency hopping is ____________<br />
14. The Parity check matrix of a linear block code is<br />
1 0 1 1 0 0<br />
0 1 1 0 1 0<br />
1 1 0 0 0 1<br />
Its Hamming distance is ___________________<br />
15. The significance of PN sequence in CDMA is ________________<br />
16. The cascade of two Binary Symmetric Channels is a __________________________<br />
17. The source coding efficiency can be increased by using _______________________<br />
18. The advantage of Spread Spectrum Modulation schemes over other modulations is<br />
_________________<br />
19. Entropy coding is a _____________________<br />
20. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word<br />
length of 6.The code word length obtained from the encoder ( in bits) is<br />
_____________<br />
-oOo-<br />
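The convolutional-encoder fill-in above (message length 6, 3-stage register, rate 1/2) matches a codeword length of 18 in the printed key if one assumes the register is flushed with zeros after the message, so that (message bits + flushing bits) periods are each encoded into 2 output bits. A sketch under that assumption (function name is my own):<br />

```python
def conv_codeword_length(msg_len, stages, rate_inv=2):
    """Codeword length of a rate-1/2 convolutional encoder when the
    shift register is flushed: (message bits + flushing bits) * 2.
    Assumes 'stages' zero bits are appended to clear the register."""
    return rate_inv * (msg_len + stages)

# message length 6, 3-stage register, rate 1/2 -> 2 * (6 + 3) = 18 bits
print(conv_codeword_length(6, 3))  # -> 18
```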
Code No: 07A5EC09 Set No. 1<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. I Sem., I Mid-Term Examinations, September – 2010<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.<br />
I Choose the correct alternative:<br />
1) Word length in Delta modulation is [ ]<br />
A) 3 bits B) 2 bits C) 1 bit D) 2ⁿ bits<br />
2) Which of the following gives minimum probability of error [ ]<br />
A)FSK B)ASK C)DPSK D)PSK<br />
3) QPSK is an example of M-ary data transmission with M= [ ]<br />
A)2 B)8 C)6 D)4<br />
4) The quantization error in PCM when Δ is the step size [ ]<br />
A) Δ²/12 B) Δ²/2 C) Δ²/4 D) Δ²/3<br />
5) Quantization noise occurs in [ ]<br />
A) TDM B)FDM C) PCM D)PWM<br />
6) Non Uniform quantization is used to make [ ]<br />
A) (S/N)q uniform B) (S/N)q non-uniform<br />
C) (S/N)q high D) (S/N)q low<br />
7) Slope Overload distortion in DM can be reduced by [ ]<br />
A) Increasing step size B)Decreasing step size<br />
C) Uniform step size D)Zero step size<br />
8) Which of the following requires more band width [ ]
A) ASK B)PSK C)FSK D)DPSK<br />
9) Companding results in [ ]<br />
A) More S/N ratio at higher amplitudes B) More S/N ratio at lower amplitudes<br />
C) Uniform S/N ratio throughout the signal D) Better S/N ratio at lower frequencies<br />
10) Mean square quantization noise in the PCM system with step size of 2V is [ ]<br />
A)1/3 B)1/12 C)3/2 D)2<br />
Code No: 07A5EC09 :2: Set No.1<br />
II Fill in the blanks:<br />
11) The minimum symbol rate of a PCM system transmitting an analog signal band limited to<br />
2 KHz with 64 quantization levels is ------------------<br />
12) In DM, granular noise occurs when the step size is -------------<br />
13) The combination of compressor and expander is called---------------------<br />
14) Data word length in DM is ---------------<br />
15) Band width of PCM signals is -----------<br />
16) A signal extending over -4 V to +4 V is quantized into 8 levels. The maximum possible<br />
quantization error obtainable is -------------<br />
17) Probability of error of PSK scheme is-----------------------<br />
18) PSK and FSK have a constant--------------<br />
19) Granular noise occurs when step size is--------------<br />
20) Converting a discrete-time continuous-amplitude signal into a discrete-time<br />
discrete-amplitude signal is called -----------------.<br />
-oOo-<br />
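Fill-in 11 above (minimum symbol rate of PCM for a 2 kHz signal with 64 levels) combines Nyquist sampling with log₂(levels) bits per sample; an illustrative sketch (function name is my own):<br />

```python
from math import log2

def pcm_bit_rate(bandwidth_hz, levels):
    """Minimum PCM bit rate: sample at the Nyquist rate 2B samples/sec
    and code each sample with log2(levels) bits."""
    return int(2 * bandwidth_hz * log2(levels))

# 2 kHz signal, 64 levels -> 4000 samples/sec * 6 bits = 24000 bits/sec
print(pcm_bit_rate(2000, 64))  # -> 24000
```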
Code No: 07A5EC09 Set No. 2<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. I Sem., I Mid-Term Examinations, September – 2010<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.<br />
I Choose the correct alternative:<br />
1) The quantization error in PCM when Δ is the step size [ ]<br />
A) Δ²/12 B) Δ²/2 C) Δ²/4 D) Δ²/3<br />
2) Quantization noise occurs in [ ]<br />
A) TDM B)FDM C) PCM D)PWM<br />
3) Non Uniform quantization is used to make [ ]<br />
A) (S/N)q uniform B) (S/N)q non-uniform<br />
C) (S/N)q high D) (S/N)q low<br />
4) Slope Overload distortion in DM can be reduced by [ ]<br />
A) Increasing step size B)Decreasing step size<br />
C) Uniform step size D)Zero step size<br />
5) Which of the following requires more band width [ ]<br />
A) ASK B)PSK C)FSK D)DPSK<br />
6) Companding results in [ ]<br />
A) More S/N ratio at higher amplitudes B) More S/N ratio at lower amplitudes<br />
C) Uniform S/N ratio throughout the signal D) Better S/N ratio at lower frequencies<br />
7) Mean square quantization noise in the PCM system with step size of 2V is [ ]<br />
A)1/3 B)1/12 C)3/2 D)2<br />
8) Word length in Delta modulation is [ ]<br />
A) 3 bits B) 2 bits C) 1 bit D) 2ⁿ bits<br />
9) Which of the following gives minimum probability of error [ ]<br />
A)FSK B)ASK C)DPSK D)PSK<br />
10) QPSK is an example of M-ary data transmission with M= [ ]<br />
A)2 B)8 C)6 D)4<br />
Code No: 07A5EC09 :2: Set No.2<br />
II Fill in the blanks:<br />
11) Data word length in DM is ---------------<br />
12) Band width of PCM signals is -----------<br />
13) A signal extending over -4 V to +4 V is quantized into 8 levels. The maximum possible<br />
quantization error obtainable is -------------<br />
14) Probability of error of PSK scheme is-----------------------<br />
15) PSK and FSK have a constant--------------<br />
16) Granular noise occurs when step size is--------------<br />
17) Converting a discrete-time continuous-amplitude signal into a discrete-time<br />
discrete-amplitude signal is called -----------------.<br />
18) The minimum symbol rate of a PCM system transmitting an analog signal band limited to<br />
2 KHz with 64 quantization levels is ------------------<br />
19) In DM, granular noise occurs when the step size is -------------<br />
20) The combination of compressor and expander is called---------------------<br />
-oOo-<br />
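The mean-square quantization noise questions in the sets above use the uniform-quantizer result Δ²/12 (the variance of an error uniformly distributed over ±Δ/2); a short illustrative check:<br />

```python
def quantization_noise_power(step):
    """Mean-square quantization noise of a uniform quantizer: step^2/12,
    the variance of an error uniformly distributed over +/- step/2."""
    return step ** 2 / 12

# step size of 2 V -> 4/12 = 1/3, matching option A in the papers
print(quantization_noise_power(2))
```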
Code No: 07A5EC09 Set No. 3<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. I Sem., I Mid-Term Examinations, September – 2010<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.<br />
I Choose the correct alternative:<br />
1) Non Uniform quantization is used to make [ ]<br />
A) (S/N)q uniform B) (S/N)q non-uniform<br />
C) (S/N)q high D) (S/N)q low<br />
2) Slope Overload distortion in DM can be reduced by [ ]<br />
A) Increasing step size B)Decreasing step size<br />
C) Uniform step size D)Zero step size<br />
3) Which of the following requires more band width [ ]<br />
A) ASK B)PSK C)FSK D)DPSK<br />
4) Companding results in [ ]<br />
A) More S/N ratio at higher amplitudes B) More S/N ratio at lower amplitudes<br />
C) Uniform S/N ratio throughout the signal D) Better S/N ratio at lower frequencies<br />
5) Mean square quantization noise in the PCM system with step size of 2V is [ ]<br />
A)1/3 B)1/12 C)3/2 D)2<br />
6) Word length in Delta modulation is [ ]<br />
A) 3 bits B) 2 bits C) 1 bit D) 2ⁿ bits<br />
7) Which of the following gives minimum probability of error [ ]<br />
A)FSK B)ASK C)DPSK D)PSK<br />
8) QPSK is an example of M-ary data transmission with M= [ ]<br />
A)2 B)8 C)6 D)4<br />
9) The quantization error in PCM when Δ is the step size [ ]<br />
A) Δ²/12 B) Δ²/2 C) Δ²/4 D) Δ²/3<br />
10) Quantization noise occurs in [ ]<br />
A) TDM B)FDM C) PCM D)PWM
Code No: 07A5EC09 :2: Set No.3<br />
II Fill in the blanks:<br />
11) A signal extending over -4 V to +4 V is quantized into 8 levels. The maximum possible<br />
quantization error obtainable is -------------<br />
12) Probability of error of PSK scheme is-----------------------<br />
13) PSK and FSK have a constant--------------<br />
14) Granular noise occurs when step size is--------------<br />
15) Converting a discrete-time continuous-amplitude signal into a discrete-time<br />
discrete-amplitude signal is called -----------------.<br />
16) The minimum symbol rate of a PCM system transmitting an analog signal band limited to<br />
2 KHz with 64 quantization levels is ------------------<br />
17) In DM, granular noise occurs when the step size is -------------<br />
18) The combination of compressor and expander is called---------------------<br />
19) Data word length in DM is ---------------<br />
20) Band width of PCM signals is -----------<br />
-oOo-<br />
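Several fill-ins above concern companding (the compressor-expander pair). As an illustrative aside, the standard μ-law compressor characteristic can be sketched in a few lines (this sketch is mine, not part of the paper):<br />

```python
from math import copysign, log1p

def mu_law_compress(x, mu=255.0):
    """mu-law compressor applied before uniform quantization:
    y = sign(x) * ln(1 + mu*|x|) / ln(1 + mu), for |x| <= 1."""
    return copysign(log1p(mu * abs(x)) / log1p(mu), x)

# Small amplitudes are boosted relative to large ones, which is why
# companding improves S/N at low signal levels (see the MCQs above).
print(round(mu_law_compress(0.01), 3))
```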
Code No: 07A5EC09 Set No. 4<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. I Sem., I Mid-Term Examinations, September – 2010<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.<br />
I Choose the correct alternative:<br />
1) Which of the following requires more band width [ ]<br />
A) ASK B)PSK C)FSK D)DPSK<br />
2) Companding results in [ ]<br />
A) More S/N ratio at higher amplitudes B) More S/N ratio at lower amplitudes<br />
C) Uniform S/N ratio throughout the signal D) Better S/N ratio at lower frequencies<br />
3) Mean square quantization noise in the PCM system with step size of 2V is [ ]<br />
A)1/3 B)1/12 C)3/2 D)2<br />
4) Word length in Delta modulation is [ ]<br />
A) 3 bits B) 2 bits C) 1 bit D) 2ⁿ bits<br />
5) Which of the following gives minimum probability of error [ ]<br />
A)FSK B)ASK C)DPSK D)PSK<br />
6) QPSK is an example of M-ary data transmission with M= [ ]<br />
A)2 B)8 C)6 D)4<br />
7) The quantization error in PCM when Δ is the step size [ ]<br />
A) Δ²/12 B) Δ²/2 C) Δ²/4 D) Δ²/3<br />
8) Quantization noise occurs in [ ]<br />
A) TDM B)FDM C) PCM D)PWM<br />
9) Non Uniform quantization is used to make [ ]<br />
A) (S/N)q uniform B) (S/N)q non-uniform<br />
C) (S/N)q high D) (S/N)q low<br />
10) Slope Overload distortion in DM can be reduced by [ ]<br />
A) Increasing step size B)Decreasing step size<br />
C) Uniform step size D)Zero step size
Code No: 07A5EC09 :2: Set No.4<br />
II Fill in the blanks:<br />
11) PSK and FSK have a constant--------------<br />
12) Granular noise occurs when step size is--------------<br />
13) Converting a discrete-time continuous-amplitude signal into a discrete-time<br />
discrete-amplitude signal is called -----------------.<br />
14) The minimum symbol rate of a PCM system transmitting an analog signal band limited to<br />
2 KHz with 64 quantization levels is ------------------<br />
15) In DM, granular noise occurs when the step size is -------------<br />
16) The combination of compressor and expander is called---------------------<br />
17) Data word length in DM is ---------------<br />
18) Band width of PCM signals is -----------<br />
19) A signal extending over -4 V to +4 V is quantized into 8 levels. The maximum possible<br />
quantization error obtainable is -------------<br />
20) Probability of error of PSK scheme is-----------------------<br />
-oOo-<br />
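Fill-in 20 just above asks for the probability of error of the PSK scheme. For coherent BPSK over AWGN it is Q(√(2Eb/N0)), which can be evaluated with the standard-library erfc since Q(x) = ½·erfc(x/√2); an illustrative sketch:<br />

```python
from math import erfc, sqrt

def psk_bit_error_probability(eb_n0):
    """Coherent BPSK over AWGN: Pe = Q(sqrt(2*Eb/N0)).
    Using Q(x) = 0.5*erfc(x/sqrt(2)) this reduces to 0.5*erfc(sqrt(Eb/N0))."""
    return 0.5 * erfc(sqrt(eb_n0))

# At Eb/N0 = 0 the receiver is guessing: Pe = 0.5; Pe falls fast as Eb/N0 grows
print(psk_bit_error_probability(0.0))  # -> 0.5
```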
Code No: 07A5EC09 Set No. 1<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. I Sem., II Mid-Term Examinations, November – 2010<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.<br />
I Choose the correct alternative:<br />
1) The channel matrix of a noiseless channel [ ]<br />
a)consists of a single nonzero number in each column.<br />
b) consists of a single nonzero number in each row.<br />
c) is an Identity Matrix. d) is a square matrix.<br />
2) Information content of a message [ ]<br />
a) increase with its certainty of occurrence. b) independent of the certainty of<br />
occurrence.<br />
c) increases with its uncertainty of occurrence. d) is the logarithm of its uncertainty of<br />
occurrence.<br />
3) The channel capacity of a BSC with the transition probability ½ is [ ]<br />
a) 0 bits b) 1 bit c) 2 bits d) infinity<br />
4) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator<br />
polynomial 1+x²+x³, the code polynomial is [ ]<br />
a) 1+x+x³+x⁵ b) 1+x²+x³+x⁵ c) 1+x²+x³+x⁴ d) 1+x+x⁵<br />
5) A source transmitting 'm' messages is connected to a noise free channel. The<br />
capacity of the channel is [ ]<br />
a) m bits/symbol b) m² bits/symbol c) log m bits/symbol d) 2m bits/symbol<br />
6) Which of the following is a p(Y/X) matrix for a binary symmetric channel [ ]<br />
a) b) c) d) None<br />
7) Exchange between channel bandwidth and (S/N) ratio can be adjusted based on [ ]<br />
a) Shannon's limit b) Shannon's source coding theorem<br />
c) Shannon's channel coding theorem d) Shannon-Hartley theorem<br />
8) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator<br />
polynomial 1+x+x³, the code polynomial is [ ]<br />
a) 1+x+x³+x⁵ b) 1+x²+x³+x⁵ c) 1+x²+x³+x⁴ d) 1+x⁴+x⁵<br />
9) A source X with entropy 2 bits/message is connected to the receiver Y through a noise free<br />
channel. The conditional probability of the source is H(X/Y) and the joint entropy of<br />
the source and the receiver H(X, Y). Then [ ]<br />
a) H(X,Y)= 2 bits/message b) H(X/Y)= 2 bits/message<br />
c) H(X, Y)= 0 bits/message d) H(X/Y)= 1 bit/message<br />
Code No: 07A5EC09 :2: Set No.1<br />
10) Which of the following is a p(Y/X) matrix for a binary Erasure channel [ ]<br />
a) [1−p p; q 1−q] b) c) d) None<br />
II Fill in the blanks:<br />
11) The information rate of a source is also referred to as entropy measured in<br />
______________<br />
12) H(X,Y)=______________ or __________________<br />
13) Capacity of a noise free channel is _________________<br />
14) The Shannon’s limit is ______________<br />
15) The channel capacity with infinite bandwidth is not infinite because ____________<br />
16) Assuming all 26 characters are equally likely, the average information content of the<br />
English language in bits/character is ________________<br />
17) The distance between two vectors c1 and c2, defined as the number of components in<br />
which they differ, is called ____________________<br />
18) The minimum distance of a linear block code is equal to____________________of any<br />
non-zero code word in the code.<br />
19) A linear block code with a minimum distance d_min can detect up to<br />
___________________<br />
20) For a Linear Block code, Code rate = _________________<br />
-oOo-<br />
Code No: 07A5EC09 Set No. 2<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. I Sem., II Mid-Term Examinations, November – 2010<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.<br />
I Choose the correct alternative:<br />
1) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator<br />
polynomial 1+x²+x³, the code polynomial is [ ]<br />
a) 1+x+x³+x⁵ b) 1+x²+x³+x⁵ c) 1+x²+x³+x⁴ d) 1+x+x⁵<br />
2) A source transmitting 'm' number of messages is connected to a noise free channel. The<br />
capacity of the channel is [ ]<br />
a) m bits/symbol b) m² bits/symbol c) log m bits/symbol d) 2m bits/symbol<br />
3) Which of the following is a p(Y/X) matrix for a binary symmetric channel [ ]<br />
a) b) c) d) None<br />
4) Exchange between channel bandwidth and (S/N) ratio can be adjusted based on [ ]<br />
a) Shannon's limit b) Shannon's source coding<br />
c) Shannon's channel coding d) Shannon-Hartley theorem<br />
5) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator<br />
polynomial 1+x+x³, the code polynomial is [ ]<br />
a) 1+x+x³+x⁵ b) 1+x²+x³+x⁵ c) 1+x²+x³+x⁴ d) 1+x⁴+x⁵<br />
6) A source X with entropy 2 bits/message is connected to the receiver Y through a noise-free<br />
channel. The conditional entropy of the source is H(X/Y) and the joint entropy of<br />
the source and the receiver is H(X, Y). Then [ ]<br />
a) H(X,Y)= 2 bits/message b) H(X/Y)= 2 bits/message<br />
c) H(X, Y)= 0 bits/message d) H(X/Y)= 1 bit/message<br />
7) Which of the following is a p(Y/X) matrix for a binary Erasure channel [ ]<br />
a) [1−p, p; q, 1−q] b) c) d) None<br />
8) The channel matrix of a noiseless channel [ ]<br />
a) consists of a single nonzero number in each column.<br />
b) consists of a single nonzero number in each row.<br />
c) is an identity matrix. d) is a square matrix.<br />
9) Information content of a message [ ]<br />
a) increases with its certainty of occurrence. b) is independent of the certainty of<br />
occurrence.<br />
c) increases with its uncertainty of occurrence. d) is the logarithm of its uncertainty of<br />
occurrence.<br />
10) The channel capacity of a BSC with the transition probability ½ is [ ]<br />
a) 0 bits b) 1 bit c) 2 bits d) infinity<br />
II Fill in the blanks:<br />
11) The Shannon limit is ______________<br />
12) The channel capacity with infinite bandwidth is not infinite because ____________<br />
13) Assuming all 26 characters are equally likely, the average information content of<br />
English in bits/character is ________________<br />
14) The distance between two vectors c1 and c2, defined as the number of components in<br />
which they differ, is called ____________________<br />
15) The minimum distance of a linear block code is equal to ____________________ of any<br />
non-zero code word in the code.<br />
16) A linear block code with a minimum distance d_min can detect up to<br />
___________________<br />
17) For a linear block code, code rate = _________________<br />
18) The information rate of a source is also referred to as entropy, measured in<br />
______________<br />
19) H(X,Y) = ______________ or __________________<br />
20) Capacity of a noise-free channel is _________________<br />
-oOo-
Code No: 07A5EC09 Set No. 3<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. I Sem., II Mid-Term Examinations, November – 2010<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.<br />
I Choose the correct alternative:<br />
1) Which of the following is a p(Y/X) matrix for a binary symmetric channel [ ]<br />
a) b) c) d) None<br />
2) Exchange between channel bandwidth and (S/N) ratio can be adjusted based on [ ]<br />
a) Shannon's limit b) Shannon's source coding<br />
c) Shannon's channel coding d) Shannon-Hartley theorem<br />
3) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator<br />
polynomial 1+x+x³, the code polynomial is [ ]<br />
a) 1+x+x³+x⁵ b) 1+x²+x³+x⁵ c) 1+x²+x³+x⁴ d) 1+x⁴+x⁵<br />
4) A source X with entropy 2 bits/message is connected to the receiver Y through a noise-free<br />
channel. The conditional entropy of the source is H(X/Y) and the joint entropy of<br />
the source and the receiver is H(X, Y). Then [ ]<br />
a) H(X,Y)= 2 bits/message b) H(X/Y)= 2 bits/message<br />
c) H(X, Y)= 0 bits/message d) H(X/Y)= 1 bit/message<br />
5) Which of the following is a p(Y/X) matrix for a binary Erasure channel [ ]<br />
a) [1−p, p; q, 1−q] b) c) d) None<br />
6) The channel matrix of a noiseless channel [ ]<br />
a) consists of a single nonzero number in each column.<br />
b) consists of a single nonzero number in each row.<br />
c) is an identity matrix. d) is a square matrix.<br />
7) Information content of a message [ ]<br />
a) increases with its certainty of occurrence. b) is independent of the certainty of<br />
occurrence.<br />
c) increases with its uncertainty of occurrence. d) is the logarithm of its uncertainty of<br />
occurrence.<br />
8) The channel capacity of a BSC with the transition probability ½ is [ ]<br />
a) 0 bits b) 1 bit c) 2 bits d) infinity<br />
9) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator<br />
polynomial 1+x²+x³, the code polynomial is [ ]<br />
a) 1+x+x³+x⁵ b) 1+x²+x³+x⁵ c) 1+x²+x³+x⁴ d) 1+x+x⁵
10) A source transmitting ‘m’ number of messages is connected to a noise free channel. The<br />
capacity of the channel is [ ]<br />
a) m bits/symbol b) m² bits/symbol c) log m bits/symbol d) 2m bits/symbol<br />
II Fill in the blanks:<br />
11) Assuming all 26 characters are equally likely, the average information content of<br />
English in bits/character is ________________<br />
12) The distance between two vectors c1 and c2, defined as the number of components in<br />
which they differ, is called ____________________<br />
13) The minimum distance of a linear block code is equal to ____________________ of any<br />
non-zero code word in the code.<br />
14) A linear block code with a minimum distance d_min can detect up to<br />
___________________<br />
15) For a linear block code, code rate = _________________<br />
16) The information rate of a source is also referred to as entropy, measured in<br />
______________<br />
17) H(X,Y) = ______________ or __________________<br />
18) Capacity of a noise-free channel is _________________<br />
19) The Shannon limit is ______________<br />
20) The channel capacity with infinite bandwidth is not infinite because ____________<br />
-oOo-<br />
Code No: 07A5EC09 Set No. 4<br />
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD<br />
III B.Tech. I Sem., II Mid-Term Examinations, November – 2010<br />
DIGITAL COMMUNICATIONS<br />
Objective Exam<br />
Name: ______________________________ Hall Ticket No.<br />
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.<br />
I Choose the correct alternative:<br />
1) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator<br />
polynomial 1+x+x³, the code polynomial is [ ]<br />
a) 1+x+x³+x⁵ b) 1+x²+x³+x⁵ c) 1+x²+x³+x⁴ d) 1+x⁴+x⁵<br />
2) A source X with entropy 2 bits/message is connected to the receiver Y through a noise-free<br />
channel. The conditional entropy of the source is H(X/Y) and the joint entropy of<br />
the source and the receiver is H(X, Y). Then [ ]<br />
a) H(X,Y)= 2 bits/message b) H(X/Y)= 2 bits/message<br />
c) H(X, Y)= 0 bits/message d) H(X/Y)= 1 bit/message<br />
3) Which of the following is a p(Y/X) matrix for a binary Erasure channel [ ]<br />
a) [1−p, p; q, 1−q] b) c) d) None<br />
4) The channel matrix of a noiseless channel [ ]<br />
a) consists of a single nonzero number in each column.<br />
b) consists of a single nonzero number in each row.<br />
c) is an identity matrix. d) is a square matrix.<br />
5) Information content of a message [ ]<br />
a) increases with its certainty of occurrence. b) is independent of the certainty of<br />
occurrence.<br />
c) increases with its uncertainty of occurrence. d) is the logarithm of its uncertainty of<br />
occurrence.<br />
6) The channel capacity of a BSC with the transition probability ½ is [ ]
a) 0 bits b) 1 bit c) 2 bits d) infinity<br />
7) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator<br />
polynomial 1+x²+x³, the code polynomial is [ ]<br />
a) 1+x+x³+x⁵ b) 1+x²+x³+x⁵ c) 1+x²+x³+x⁴ d) 1+x+x⁵<br />
8) A source transmitting 'm' number of messages is connected to a noise free channel. The<br />
capacity of the channel is [ ]<br />
a) m bits/symbol b) m² bits/symbol c) log m bits/symbol d) 2m bits/symbol<br />
9) Which of the following is a p(Y/X) matrix for a binary symmetric channel [ ]<br />
a) b) c) d) None<br />
10) Exchange between channel bandwidth and (S/N) ratio can be adjusted based on [ ]<br />
a) Shannon's limit b) Shannon's source coding<br />
c) Shannon's channel coding d) Shannon-Hartley theorem<br />
II Fill in the blanks:<br />
11) The minimum distance of a linear block code is equal to ____________________ of any<br />
non-zero code word in the code.<br />
12) A linear block code with a minimum distance d_min can detect up to<br />
___________________<br />
13) For a linear block code, code rate = _________________<br />
14) The information rate of a source is also referred to as entropy, measured in<br />
______________<br />
15) H(X,Y) = ______________ or __________________<br />
16) Capacity of a noise-free channel is _________________<br />
17) The Shannon limit is ______________<br />
18) The channel capacity with infinite bandwidth is not infinite because ____________<br />
19) Assuming all 26 characters are equally likely, the average information content of<br />
English in bits/character is ________________<br />
20) The distance between two vectors c1 and c2, defined as the number of components in<br />
which they differ, is called ____________________<br />
-oOo-<br />
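The BSC-capacity and noise-free-channel questions that recur across the four sets above can be checked numerically. A minimal Python sketch (not part of the original papers; the function names are illustrative):

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.5))  # 0.0 -> a BSC with p = 1/2 carries no information
print(bsc_capacity(0.0))  # 1.0 -> a noiseless binary channel
```

With crossover probability 1/2 the capacity is 0 bits, which matches option (a) of the corresponding objective question.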
20. Tutorial Questions<br />
1. (a) Explain the basic principles of sampling, and distinguish between ideal sampling and practical<br />
sampling.<br />
(b) A band-pass signal has a centre frequency f0 and extends from f0 − 5 kHz to f0 + 5 kHz. It is<br />
sampled at a rate of fs = 25 kHz. As f0 varies from 5 kHz to 50 kHz, find the ranges of f0 for which<br />
the sampling rate is adequate.<br />
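Question 1(b) can be cross-checked numerically with the standard band-pass sampling condition, 2fU/m ≤ fs ≤ 2fL/(m − 1) for some integer m ≥ 1. A sketch (the function name and scan grid are illustrative, not part of the question):

```python
def bandpass_rate_ok(f0, fs=25e3, half_bw=5e3):
    """True if sampling at fs aliases no part of the band [f0-half_bw, f0+half_bw].

    Band-pass sampling condition: there exists an integer m >= 1 with
    2*fU <= m*fs and (m-1)*fs <= 2*fL (for m = 1 this is just fs >= 2*fU).
    """
    fL, fU = f0 - half_bw, f0 + half_bw
    for m in range(1, 32):
        if 2 * fU <= m * fs and (m == 1 or (m - 1) * fs <= 2 * fL):
            return True
    return False

# Scan f0 from 5 kHz to 50 kHz and report the overall span of adequate values
valid = [f0 for f0 in range(5000, 50001, 100) if bandpass_rate_ok(f0)]
print(min(valid), max(valid))
```

The scan reproduces the disjoint ranges of f0 for which fs = 25 kHz is adequate; note the valid values are not one contiguous interval.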
2. a) Describe the synchronization procedure for PAM, PWM and PPM signals. Also discuss<br />
the spectra of PWM and PDM signals.<br />
b) State and prove sampling theorem.<br />
3. (a) Explain the method of generation and detection of PPM signals with neat sketches.<br />
(b) Compare the characteristics of PWM and PPM signals.<br />
(c) Which analog pulse modulation can be termed as analogous to linear CW modulation and<br />
why?<br />
4. List out the applications, merits and demerits of PAM, PPM and PWM signals.<br />
5. What are the advantages and disadvantages of a digital communication system?<br />
6. Draw and explain the elements of a digital communication system.
7. Explain the bandwidth and signal-to-noise ratio trade-off.<br />
8. Explain the Shannon-Hartley law.<br />
9. Explain the issues generally encountered in digital transmission.<br />
10. (a) Sketch and explain the typical waveforms of PWM signals, for leading edge, trailing edge and<br />
symmetrical cases.<br />
(b) Compare the analog pulse modulation schemes with CW modulation systems.<br />
11. (a) Explain how the PPM signals can be generated and reconstructed through<br />
PWM signals.<br />
(b) Compare the merits and demerits of PAM, PDM and PPM signals. List out their applications.<br />
12. (a) Define the Sampling theorem and establish the same for band pass signals, using neat<br />
schematics.<br />
(b) For the modulating signal m(t) = 2 cos(100t) + 18 cos(2000πt), determine the allowable<br />
sampling rates and sampling intervals.<br />
13. Draw the block diagram of PCM generator and explain each block.<br />
14. Determine the transmission bandwidth in PCM.<br />
15. What is the function of predictor in DPCM system?<br />
16. What are the applications of PCM? Give in detail any two applications.<br />
17. Explain the need for Non-uniform quantization in PCM system.<br />
18. Derive the expression for the output signal-to-noise ratio of a PCM system.<br />
19. Explain µ-law companding for speech signals.<br />
20. Explain the working of DPCM system with neat block diagram.<br />
21. Prove that the mean square value of the quantization error is inversely proportional to the<br />
square of the number of quantization levels.<br />
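Question 21's claim can be demonstrated numerically: a mid-rise uniform quantizer has noise power Δ²/12, and Δ ∝ 1/L. A sketch (the quantizer style and input grid density are my assumptions):

```python
import math

def quantize(x, levels, xmax=1.0):
    """Mid-rise uniform quantizer on [-xmax, xmax]."""
    delta = 2.0 * xmax / levels
    k = math.floor(x / delta)
    k = min(max(k, -levels // 2), levels // 2 - 1)  # keep index within range
    return (k + 0.5) * delta

def mean_sq_error(levels, n=100_000, xmax=1.0):
    # deterministic dense grid of inputs across the full range
    xs = [-xmax + 2.0 * xmax * (i + 0.5) / n for i in range(n)]
    return sum((x - quantize(x, levels)) ** 2 for x in xs) / n

ratio = mean_sq_error(8) / mean_sq_error(16)
print(round(ratio, 2))  # ~4: doubling the level count quarters the noise power
```

The ratio of roughly 4 between L = 8 and L = 16 is exactly the 1/L² dependence the question asks to prove.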
22. Explain why quantization noise affects small-amplitude signals in a PCM system more<br />
than large signals. With the aid of sketches, show how tapered quantizing levels could be<br />
used to counteract this effect.<br />
23. Explain the working of Delta modulation system with a neat block diagram.<br />
24. Clearly bring out the difference between granular noise and slope overload error.<br />
25. Consider a speech signal with a maximum frequency of 3.4 kHz and a maximum<br />
amplitude of 1 V. This speech signal is applied to a DM whose bit rate is set at<br />
20 kbps. Discuss the choice of an appropriate step size for the modulator.<br />
26. Derive the expression for Signal to noise ratio of DM system<br />
27. Explain with neat block diagram, Adaptive Delta Modulator transmitter and receiver.<br />
28. Why is it necessary to use a greater sampling rate for DM than for PCM?<br />
29. Explain the advantages of ADM over DM and how they are achieved.<br />
30. A delta modulator system is designed to operate at five times the Nyquist rate for a<br />
signal with 3 kHz bandwidth. Determine the maximum amplitude of a 2 kHz input<br />
sinusoid for which the delta modulator does not exhibit slope overload. The quantization<br />
step size is 250 mV. Derive the formula used.<br />
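For Question 30, the slope-overload bound Δ·fs ≥ max|dm/dt| = A·2πf gives the answer directly. A sketch using the question's numbers:

```python
import math

bandwidth = 3e3          # signal bandwidth (Hz)
fs = 5 * 2 * bandwidth   # five times the Nyquist rate (samples/s)
delta = 250e-3           # quantization step size (V)
f_in = 2e3               # input sinusoid frequency (Hz)

# No slope overload requires delta*fs >= A*2*pi*f_in, so the maximum amplitude is
a_max = delta * fs / (2 * math.pi * f_in)
print(round(a_max, 3))   # about 0.597 V
```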
31. Compare Delta modulation and PCM techniques in terms of bandwidth and signal to noise<br />
ratio.<br />
32. A signal m(t) is to be encoded using either the Delta modulation or the PCM technique.<br />
The signal-to-quantization-noise ratio (So/No) must be ≥ 30 dB. Find the ratio of the<br />
bandwidth required for PCM to that for Delta modulation.<br />
33. What are the advantages and disadvantages of Digital modulation schemes?<br />
34. Discuss base band transmission of M-ary Data.<br />
35. Explain how the residual effects of the channel are responsible for ISI?<br />
36. What is the practical solution to obtain zero ISI? Explain.<br />
37. What is the ideal solution to obtain zero ISI, and what is the disadvantage of<br />
this solution?<br />
38. Explain the signal space representation of QPSK. Compare QPSK with all other digital<br />
signaling schemes.<br />
39. Write down the modulation waveform for transmitting binary information over<br />
baseband channels for the following modulation schemes: ASK, PSK, FSK and DPSK.<br />
40. Explain in detail the power spectra and bandwidth efficiency of M-ary signals.<br />
41. Explain coherent and non-coherent detection of binary FSK waves.<br />
42. Compare and discuss a binary scheme with M-ary signaling scheme.<br />
43. Derive an expression for error probability of coherent ASK scheme.
44. Derive an expression for error probability of non-coherent ASK scheme.<br />
45. Find the transfer function of the optimum receiver and calculate the error probability.<br />
46. Derive an expression for probability of bit error of a binary coherent FSK receiver.<br />
47. Derive an expression for probability of bit error in a PSK system.<br />
48. Show that the impulse response of a matched filter is a time-reversed and delayed<br />
version of the input signal, and briefly explain the properties of the matched filter.<br />
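Question 48's property can be illustrated in discrete time: for a signal s[n] of length N, the matched filter h[n] = s[N−1−n] produces an output that peaks at n = N−1 with value equal to the signal energy. A sketch (the example signal is an arbitrary assumption, not from the question):

```python
s = [1.0, -2.0, 3.0, 0.5]   # arbitrary example signal
N = len(s)
h = s[::-1]                 # matched filter: time-reversed copy of s

# Linear convolution y[n] = sum_k s[k] * h[n-k]
y = [sum(s[k] * h[n - k] for k in range(N) if 0 <= n - k < N)
     for n in range(2 * N - 1)]

energy = sum(v * v for v in s)
print(y.index(max(y)), max(y))  # peak at n = N-1 = 3, value = signal energy
```

The peak equals the energy because y[N−1] = Σ s[k]·s[k]; by the Cauchy-Schwarz inequality no other lag can exceed it.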
49. Binary data has to be transmitted over a telephone link that has a usable bandwidth of 3000<br />
Hz and a maximum achievable SNR of 6 dB at its output.<br />
i) Determine the maximum signaling rate and error probability if a coherent ASK<br />
scheme is used for transmitting binary data through this channel.<br />
ii) If the data rate is maintained at 300 bits/sec, calculate the error probability.<br />
50. Binary data is transmitted over an RF band-pass channel with a usable bandwidth of 10 MHz<br />
at a rate of 4.8×10⁶ bits/sec using an ASK signaling method. The carrier amplitude at the<br />
receiver antenna is 1 mV and the noise power spectral density at the receiver input is<br />
10⁻¹⁵ W/Hz.<br />
i) Find the error probability of a coherent receiver.<br />
ii) Find the error probability of a non-coherent receiver.<br />
51. One of four possible messages Q1, Q2, Q3, Q4, having probabilities 1/8, 3/8, 3/8, and 1/8<br />
respectively, is transmitted. Calculate the average information per message.<br />
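Question 51 is a direct entropy computation, H = Σ p·log2(1/p). A one-line sketch:

```python
import math

probs = [1/8, 3/8, 3/8, 1/8]  # message probabilities from the question
H = sum(p * math.log2(1 / p) for p in probs)
print(round(H, 3))  # about 1.811 bits/message
```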
52. An ideal low-pass channel of bandwidth B Hz with additive white Gaussian noise is<br />
used for transmitting digital information.<br />
a. Plot C/B versus S/N in dB for an ideal system using this channel.<br />
b. A practical signaling scheme on this channel uses one of two waveforms of duration<br />
Tb sec to transmit binary information. The signaling scheme transmits data at the<br />
rate of 2B bits/sec; the probability of error is given by P(error | 1 sent) = Pe.<br />
c. Plot graphs of<br />
i. C/B<br />
ii. Dt/B, where Dt is the rate of information transmission over the channel.<br />
53. Define and explain the following in terms of the joint pdf p(x, y) and the marginal pdfs p(x) and p(y).<br />
a. Mutual Information<br />
b. Average Mutual Information<br />
c. Entropy<br />
54. Let X be a discrete random variable with equally probable outcomes X1 = A and X2 = −A, and let the<br />
conditional pdfs p(y/xi), i = 1, 2 be Gaussian with mean xi and variance σ². Calculate the<br />
average mutual information I(X, Y).<br />
55. Write short notes on the following<br />
a. Mutual Information<br />
b. Self Information<br />
c. Logarithmic measure of information<br />
56. Write short notes on the following<br />
a. Entropy<br />
b. Conditional entropy<br />
c. Mutual Information<br />
d. Information<br />
57. A DMS has an alphabet of eight letters, Xi, i = 1, 2, …, 8, with probabilities<br />
0.36, 0.14, 0.13, 0.12, 0.1, 0.09, 0.04, 0.02.<br />
i. Use the Huffman encoding procedure to determine a binary code for the<br />
source output.<br />
ii. Determine the entropy of the source and find the efficiency of the code<br />
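A self-check for Q57 can be scripted: the heap-based Huffman construction below is a standard textbook implementation, not taken from the course material, and code efficiency is the source entropy divided by the average codeword length.<br />

```python
import heapq
import math
from itertools import count

def huffman(probs):
    """Build a binary Huffman code; returns dict symbol -> codeword string."""
    tick = count()  # tie-breaker so heapq never has to compare dicts
    heap = [(p, next(tick), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least-probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tick), merged))
    return heap[0][2]

probs = {i: p for i, p in enumerate(
    [0.36, 0.14, 0.13, 0.12, 0.10, 0.09, 0.04, 0.02], 1)}
code = huffman(probs)
H = -sum(p * math.log2(p) for p in probs.values())       # entropy, bits/symbol
L = sum(probs[s] * len(w) for s, w in code.items())      # average code length
print(f"H = {H:.4f} bits, L = {L:.4f}, efficiency = {H / L:.4f}")
```

For these probabilities the average length works out to 2.70 bits/symbol, giving an efficiency of about 97%.<br />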
58. A DMS has an alphabet of seven letters, Xi, i = 1, 2, …, 7, with probabilities<br />
{0.05, 0.1, 0.1, 0.15, 0.05, 0.25, 0.3}.<br />
i. Use the Shannon-Fano coding procedure to determine a binary code for the<br />
source output.<br />
ii. Determine the entropy of the source and find the efficiency of the code.<br />
59. An analog signal band-limited to 10 kHz is quantized into 8 levels of a PCM system with<br />
probabilities 1/4, 1/5, 1/5, 1/10, 1/10, 1/20, 1/20 and 1/20 respectively. Find the entropy and rate<br />
of information.<br />
60. Explain various methods for describing convolutional codes.<br />
61. Explain about block codes, in which each block of k message bits is encoded into a<br />
block of n &gt; k bits, with an example.<br />
62. Consider a (6,3) generator matrix<br />
1 0 0 0 1 1<br />
G = 0 1 0 1 0 1<br />
0 0 1 1 1 0<br />
Find<br />
a) All the code vectors of this code.<br />
b) The parity check matrix for this code.<br />
c) The minimum weight of the code.<br />
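Parts a) and c) of Q62 can be verified mechanically: multiply every 3-bit message by G over GF(2) and take the minimum nonzero weight. A plain-Python sketch (helper names are illustrative); for part b), since G = [I3 | P] is systematic, the parity-check matrix is H = [Pᵀ | I3].<br />

```python
# Q62's (6,3) systematic generator matrix G = [I3 | P], rows as bit-lists
G = [[1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]

def encode(msg, G):
    """Codeword = msg * G over GF(2)."""
    n = len(G[0])
    return [sum(m * row[j] for m, row in zip(msg, G)) % 2 for j in range(n)]

# all 2^3 = 8 messages and their code vectors
messages = [[(m >> 2) & 1, (m >> 1) & 1, m & 1] for m in range(8)]
codewords = [encode(m, G) for m in messages]

# minimum weight of the nonzero code vectors = minimum distance d_min
d_min = min(sum(c) for c in codewords if any(c))
print(d_min)  # 3, so the code can correct any single error
```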
63. What are the hardware components required to implement a cyclic code encoder?<br />
64. Explain about the syndrome calculation, error correction and error detection in<br />
(n, k) cyclic codes.<br />
65. Briefly discuss about the linear block code error control technique.
66. Show that if g(x) is a polynomial of degree (n−k) and is a factor of xⁿ + 1, then<br />
g(x) generates an (n, k) cyclic code in which the code polynomial for a data vector D<br />
is generated by V(x) = D(x)g(x).<br />
67. Briefly discuss about the parity check bit error control technique.<br />
68. Discuss about interlaced code with suitable example.<br />
69. Draw and explain a decoder diagram for a (7,4) majority logic code whose generator<br />
polynomial g(x) = 1 + x + x³.<br />
70. Discuss about Hamming code with suitable examples.<br />
71. The generator polynomial of a (7,4) cyclic code is g(x) = 1 + x + x³. Find the 16<br />
code words of this code in the following ways.<br />
a) By forming the code polynomials using V(x)=D(x)g(x), where D(x) is the<br />
message polynomial.<br />
b) By using systematic form.<br />
72. Design an encoder for the (7,4) binary cyclic code generated by g(x) = 1 + x + x³<br />
and verify its operation using the message vector (0101).<br />
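The encoder of Q72 can be checked in software. This sketch assumes, as one convention, that the leftmost bit of the message vector (0101) is the lowest-degree coefficient of D(x); systematic encoding appends the remainder of x^(n−k)·D(x) divided by g(x).<br />

```python
def cyclic_encode(msg, gen, n):
    """Systematic (n, k) cyclic encoding over GF(2).
    Bit lists are low-degree first; gen is g(x) = 1 + x + x^3 -> [1, 1, 0, 1].
    Returns [parity bits | message bits]."""
    k = len(msg)
    reg = [0] * (n - k) + list(msg)          # x^(n-k) * d(x)
    deg = len(gen) - 1
    for i in range(n - 1, n - k - 1, -1):    # long division, high degree down
        if reg[i]:
            for j, gbit in enumerate(gen):
                reg[i - deg + j] ^= gbit
    return reg[: n - k] + list(msg)

g = [1, 1, 0, 1]                             # g(x) = 1 + x + x^3
c = cyclic_encode([0, 1, 0, 1], g, 7)
print(c)  # [1, 1, 0, 0, 1, 0, 1]: parity 110 followed by the message
```

Every codeword produced this way is divisible by g(x), which is easy to confirm by repeating the division on the result.<br />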
73. A (7, 4) linear block code is generated according to the H matrix<br />
1 1 1 0 1 0 0<br />
H = 1 1 0 1 0 1 0<br />
1 0 1 1 0 0 1<br />
The code word received is 1000011 for a transmitted codeword C. Find the<br />
corresponding data word transmitted.<br />
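Q73 is a single-error syndrome-decoding exercise; the steps can be verified with a short script (pure Python, layout of the helpers is illustrative).<br />

```python
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

r = [1, 0, 0, 0, 0, 1, 1]   # received word from Q73

# syndrome s = H * r^T (mod 2)
s = [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

if any(s):
    # single-error assumption: the syndrome equals the column of H
    # at the error position, so flip that bit
    err = next(j for j in range(7) if [H[i][j] for i in range(3)] == s)
    r[err] ^= 1

# H = [A | I3] is in systematic form, so the first 4 bits are the data word
print(r[:4])  # [1, 0, 0, 0]
```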
74. Consider a (6,3) generator matrix<br />
1 0 0 0 1 1<br />
G = 0 1 0 1 0 1<br />
0 0 1 1 1 0<br />
Find<br />
a) All the code vectors of this code.<br />
b) The parity check matrix for this code.<br />
c) The error syndrome of the code.
75. What are the advantages and disadvantages of convolutional codes?<br />
76. Explain about the Viterbi decoding method with an example.<br />
77. (a) What is meant by random errors and burst errors? Explain about a coding<br />
technique which can be used to correct both the burst and random errors simultaneously.<br />
(b) Discuss about the various decoders for convolutional codes.<br />
78. Draw the state diagram and tree diagram for the K = 3, rate 1/3 code generated by<br />
79. (a) Design an encoder for the (7,4) binary cyclic code generated by g(x) = 1 + x + x³ and verify its<br />
operation using the message vector (0101).<br />
(b) What are the differences between block codes and the convolutional codes?<br />
80. Explain various methods for describing Conventional Codes.<br />
81. A convolutional encoder has two shift registers, two modulo-2 adders and an output multiplexer.<br />
The generator sequences of the encoder are as follows: g(1) = (1, 0, 1); g(2) = (1, 1, 1). Assuming a<br />
5-bit message sequence is transmitted, use the state diagram to find the message sequence when<br />
the received sequence is<br />
(11, 01, 00, 10, 01, 10, 11, 00, 00, ......)<br />
82. (a) What is meant by random errors and burst errors? Explain about a coding technique which<br />
can be used to correct both the burst and random errors simultaneously.<br />
(b) Discuss about the various decoders for convolutional codes.<br />
83. Find the output codeword for the following convolutional encoder for the message sequence<br />
10011. (as shown in the figure).<br />
84. Construct the state diagram for the following encoder. Starting with the all-zero state, trace the path<br />
that corresponds to the message sequence 1011101. The given convolutional encoder has a single shift<br />
register with two stages (K = 3), three modulo-2 adders and an output multiplexer. The generator<br />
sequences of the encoder are as follows: g(1) = (1, 0, 1); g(2) = (1, 1, 0); g(3) = (1, 1, 1).<br />
85. Draw and explain the tree diagram of the convolutional encoder shown below with rate = 1/3, L = 3<br />
86. For the convolutional encoder shown below, draw the trellis diagram for the message sequence<br />
110. Let the first six received bits be 11 01 11; then, using Viterbi decoding, find the decoded<br />
sequence.<br />
87. Explain the Direct sequence spread spectrum technique with neat diagram<br />
88. Explain the Frequency hopping spread spectrum in detail.<br />
89. Explain the properties of PN Sequences.<br />
90. How is a pseudo-noise (PN) sequence generated? Explain with an example.<br />
91. How does DS-SS work? Explain with a block diagram.<br />
92. Explain the operation of slow and fast frequency hopping techniques.<br />
93. Explain about source coding of Speech for wireless communication<br />
94. Explain the types of Multiple Access techniques.<br />
95. Explain TDMA system with frame structure, frame efficiency and features.<br />
96. Explain CDMA system with its features and list out various problems in CDMA systems.<br />
21. Known gaps<br />
Subject: DIGITAL COMMUNICATION<br />
Known gaps:<br />
1. The Digital Communications course as per the curriculum does not match real-time applications.<br />
2. The subject does not cover the coding techniques presently in use.<br />
Action to be taken: the following additional topics are taken up to fill the known gaps:<br />
1. Real-time applications<br />
2. Drawbacks of each coding technique<br />
22. Discussion Topics:<br />
Data transmission, digital transmission, or digital communications is the physical transfer<br />
of data (a digital bit stream or a digitized analog signal [1] ) over a point-to-point or point-to-multipoint<br />
communication channel. Examples of such channels are copper wires, optical<br />
fibres, wireless communication channels, storage media and computer buses. The data are<br />
represented as an electromagnetic signal, such as an electrical voltage, radio wave,<br />
microwave, or infrared signal.<br />
While analog transmission is the transfer of a continuously varying analog signal over an<br />
analog channel, digital communications is the transfer of discrete messages over a digital or<br />
an analog channel. The messages are either represented by a sequence of pulses by means of<br />
a line code (baseband transmission), or by a limited set of continuously varying wave forms<br />
(passband transmission), using a digital modulation method. The passband modulation and<br />
corresponding demodulation (also known as detection) is carried out by modem equipment.<br />
According to the most common definition of digital signal, both baseband and passband<br />
signals representing bit-streams are considered as digital transmission, while an alternative<br />
definition only considers the baseband signal as digital, and passband transmission of digital<br />
data as a form of digital-to-analog conversion.<br />
Data transmitted may be digital messages originating from a data source, for example a<br />
computer or a keyboard. It may also be an analog signal such as a phone call or a video<br />
signal, digitized into a bit-stream for example using pulse-code modulation (PCM) or more<br />
advanced source coding (analog-to-digital conversion and data compression) schemes. This<br />
source coding and decoding is carried out by codec equipment.<br />
Digital transmission or data transmission traditionally belongs to telecommunications<br />
and electrical engineering. Basic principles of data transmission may also be covered within<br />
the computer science/computer engineering topic of data communications, which also<br />
includes computer networking or computer communication applications and networking<br />
protocols, for example routing, switching and inter-process communication. Although the<br />
Transmission control protocol (TCP) involves the term "transmission", TCP and other<br />
transport layer protocols are typically not discussed in a textbook or course about data<br />
transmission, but in computer networking.<br />
The term tele transmission involves the analog as well as digital communication. In most<br />
textbooks, the term analog transmission only refers to the transmission of an analog message<br />
signal (without digitization) by means of an analog signal, either as a non-modulated<br />
baseband signal, or as a passband signal using an analog modulation method such as AM or<br />
FM. It may also include analog-over-analog pulse modulated baseband signals such as<br />
pulse-width modulation. In a few books within the computer networking tradition, "analog<br />
transmission" also refers to passband transmission of bit-streams using digital modulation<br />
methods such as FSK, PSK and ASK. Note that these methods are covered in textbooks<br />
named digital transmission or data transmission, for example. [1]<br />
The theoretical aspects of data transmission are covered by information theory and coding<br />
theory.
Protocol layers and sub-topics<br />
Courses and textbooks in the field of data transmission typically deal with the following OSI<br />
model protocol layers and topics:<br />
Layer 1, the physical layer:<br />
o Channel coding including<br />
• Digital modulation schemes<br />
• Line coding schemes<br />
• Forward error correction (FEC) codes<br />
o Bit synchronization<br />
o Multiplexing<br />
o Equalization<br />
o Channel models<br />
Layer 2, the data link layer:<br />
o Channel access schemes, media access control (MAC)<br />
o Packet mode communication and Frame synchronization<br />
o Error detection and automatic repeat request (ARQ)<br />
o Flow control<br />
Layer 6, the presentation layer:<br />
o Source coding (digitization and data compression), and information theory.<br />
o Cryptography (may occur at any layer)
Applications and history<br />
Data (mainly but not exclusively informational) has been sent via non-electronic (e.g. optical,<br />
acoustic, mechanical) means since the advent of communication. Analog signal data has been<br />
sent electronically since the advent of the telephone. However, the first data electromagnetic<br />
transmission applications in modern time were telegraphy (1809) and teletypewriters (1906),<br />
which are both digital signals. The fundamental theoretical work in data transmission and<br />
information theory by Harry Nyquist, Ralph Hartley, Claude Shannon and others during the<br />
early 20th century, was done with these applications in mind.<br />
Data transmission is utilized in computers in computer buses and for communication with<br />
peripheral equipment via parallel ports and serial ports such as RS-232 (1969), Firewire<br />
(1995) and USB (1996). The principles of data transmission are also utilized in storage media<br />
for Error detection and correction since 1951.<br />
Data transmission is utilized in computer networking equipment such as modems (1940),<br />
local area networks (LAN) adapters (1964), repeaters, hubs, microwave links, wireless<br />
network access points (1997), etc.<br />
In telephone networks, digital communication is utilized for transferring many phone calls<br />
over the same copper cable or fiber cable by means of Pulse code modulation (PCM), i.e.<br />
sampling and digitization, in combination with Time division multiplexing (TDM) (1962).<br />
Telephone exchanges have become digital and software controlled, facilitating many value<br />
added services. For example the first AXE telephone exchange was presented in 1976. Since<br />
the late 1980s, digital communication to the end user has been possible using Integrated<br />
Services Digital Network (ISDN) services. Since the end of the 1990s, broadband access<br />
techniques such as ADSL, Cable modems, fiber-to-the-building (FTTB) and fiber-to-the-home<br />
(FTTH) have become widespread to small offices and homes. The current tendency is<br />
to replace traditional telecommunication services by packet mode communication such as IP<br />
telephony and IPTV.<br />
Transmitting analog signals digitally allows for greater signal processing capability. The<br />
ability to process a communications signal means that errors caused by random processes can<br />
be detected and corrected. Digital signals can also be sampled instead of continuously<br />
monitored. The multiplexing of multiple digital signals is much simpler than the multiplexing of<br />
analog signals.<br />
Because of all these advantages, and because recent advances in wideband communication<br />
channels and solid-state electronics have allowed scientists to fully realize these advantages,<br />
digital communications has grown quickly. Digital communications is quickly edging out<br />
analog communication because of the vast demand to transmit computer data and the ability<br />
of digital communications to do so.<br />
The digital revolution has also resulted in many digital telecommunication applications where<br />
the principles of data transmission are applied. Examples are second-generation (1991) and<br />
later cellular telephony, video conferencing, digital TV (1998), digital radio (1999),<br />
telemetry, etc.
Baseband or passband transmission<br />
The physically transmitted signal may be one of the following:<br />
1. A baseband signal ("digital-over-digital" transmission): A sequence of electrical pulses or<br />
light pulses produced by means of a line coding scheme such as Manchester coding. This is<br />
typically used in serial cables, wired local area networks such as Ethernet, and in optical fiber<br />
communication. It results in a pulse amplitude modulated(PAM) signal, also known as a<br />
pulse train.<br />
2. A passband signal ("digital-over-analog" transmission): A modulated sine wave signal<br />
representing a digital bit-stream. Note that this is in some textbooks considered as analog<br />
transmission, but in most books as digital transmission. The signal is produced by means of a<br />
digital modulation method such as PSK, QAM or FSK. The modulation and demodulation is<br />
carried out by modem equipment. This is used in wireless communication, and over<br />
telephone network local-loop and cable-TV networks.<br />
Serial and parallel transmission<br />
In telecommunications, serial transmission is the sequential transmission of signal elements<br />
of a group representing a character or other entity of data. Digital serial transmissions are bits<br />
sent over a single wire, frequency or optical path sequentially. Because it requires less signal<br />
processing and has fewer chances for error than parallel transmission, the transfer rate of each<br />
individual path may be faster. This can be used over longer distances as a check digit or<br />
parity bit can be sent along it easily.<br />
In telecommunications, parallel transmission is the simultaneous transmission of the signal<br />
elements of a character or other entity of data. In digital communications, parallel<br />
transmission is the simultaneous transmission of related signal elements over two or more<br />
separate paths. Multiple electrical wires are used which can transmit multiple bits<br />
simultaneously, which allows for higher data transfer rates than can be achieved with serial<br />
transmission. This method is used internally within the computer, for example the internal<br />
buses, and sometimes externally for such things as printers. The major issue with this is<br />
"skewing" because the wires in parallel data transmission have slightly different properties<br />
(not intentionally) so some bits may arrive before others, which may corrupt the message. A<br />
parity bit can help to reduce this. However, electrical wire parallel data transmission is<br />
therefore less reliable for long distances because corrupt transmissions are far more likely.<br />
Types of communication channels<br />
Main article: communication channel<br />
Data transmission circuit<br />
Simplex<br />
Half-duplex<br />
Full-duplex<br />
Point-to-point<br />
Multi-drop:<br />
o Bus network<br />
o Ring network<br />
o Star network<br />
o Mesh network<br />
o Wireless network<br />
Asynchronous and synchronous data transmission<br />
Main article: comparison of synchronous and asynchronous signalling<br />
Asynchronous transmission uses start and stop bits to signify the beginning and end of a transmission, so an 8-bit<br />
ASCII character would actually be transmitted using 10 bits. For example, "0100 0001"<br />
would become "1 0100 0001 0". The extra one (or zero, depending on parity bit) at the start<br />
and end of the transmission tells the receiver first that a character is coming and secondly that<br />
the character has ended. This method of transmission is used when data are sent<br />
intermittently as opposed to in a solid stream. In the previous example the start and stop bits<br />
are in bold. The start and stop bits must be of opposite polarity. This allows the<br />
receiver to recognize when the second packet of information is being sent.<br />
Synchronous transmission uses no start and stop bits, but instead synchronizes transmission<br />
speeds at both the receiving and sending end of the transmission using clock signal(s) built<br />
into each component. A continual stream of data is then sent between the two nodes.<br />
Due to there being no start and stop bits the data transfer rate is quicker although more errors<br />
will occur, as the clocks will eventually get out of sync, and the receiving device would have<br />
the wrong time that had been agreed in the protocol for sending/receiving data, so some bytes<br />
could become corrupted (by losing bits). Ways to get around this problem include<br />
re-synchronization of the clocks and use of check digits to ensure the byte is correctly<br />
interpreted and received.<br />
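The 10-bits-per-character framing described above can be sketched directly. Note this sketch uses the common UART polarity (start bit 0, stop bit 1), which is the mirror of the polarity in the example above; the function name is illustrative.<br />

```python
def frame_async(byte):
    """Wrap 8 data bits in a start bit and a stop bit (UART-style,
    LSB transmitted first): 10 bits on the wire per 8-bit character."""
    data = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data + [1]                      # start + data + stop

frame = frame_async(0b01000001)   # ASCII 'A'
print(len(frame))  # 10
```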
23. References, Journals, websites and E-links:<br />
TEXT BOOKS<br />
1. Principles of Communication Systems - Herbert Taub, Donald L. Schilling, Goutam Saha, 3rd edition, McGraw-Hill, 2008.<br />
2. Digital and Analog Communication Systems - Sam Shanmugam, John Wiley, 2005.<br />
3. Digital Communications - John G. Proakis, Masoud Salehi, 5th edition, McGraw-Hill, 2008.<br />
4. Digital Communications - Simon Haykin, John Wiley, 2005.<br />
Websites:-<br />
1. http://en.wikipedia.org/wiki/digital_communications<br />
2. http://www.tmworld.com/archive/2011/20110801.php<br />
3. www.pemuk.com<br />
4. www.site.uottawa.com<br />
5. www.tews.elektronik.com<br />
Journals:-<br />
1. Communications Journal<br />
2. Omega online technical reference<br />
3. Review of Scientific techniques<br />
REFERENCES:<br />
1. Digital Communications - John G. Proakis, Masoud Salehi, 5th edition, McGraw-Hill, 2008.<br />
2. Digital Communication - Simon Haykin, John Wiley, 2005.<br />
3. Digital Communications - Ian A. Glover, Peter M. Grant, 2nd edition, Pearson Education, 2008.<br />
4. Communication Systems - B. P. Lathi, BS Publications, 2006.<br />
24. Quality Control Sheets<br />
25. STUDENTS LIST<br />
B.TECH III YEAR II SEMESTER:<br />
SECTION-D:<br />
S.No Roll number Student Name<br />
1 13R11A04F5 A RAMA THEJA<br />
2 13R11A04F7 ANUGU PRASHANTH<br />
3 13R11A04F8 ARACHANA DASH<br />
4 13R11A04F9 CHAVALI NAGARJUNA<br />
5 13R11A04G0 CHIGURUPATI MEENAKSHI<br />
6 13R11A04G1 D SRI RAMYA<br />
7 13R11A04G2 DEEKONDA RAJSHREE<br />
8 13R11A04G3 G MANIDEEP<br />
9 13R11A04G4 GATADI VADDE PREM SAGAR<br />
10 13R11A04G5 GOGU JEEVITHA SPANDANA REDDY<br />
11 13R11A04G6 GOLLURI SINDHUJA<br />
12 13R11A04G7 GOPANABOENA SAI KRANTHI KUMAR<br />
13 13R11A04G8 GUNTIMADUGU SAI RESHMA<br />
14 13R11A04G9 K DARSHAN<br />
15 13R11A04H0 K. ANIRUDH<br />
16 13R11A04H1 KOMIRISHETTY AKHILA<br />
17 13R11A04H2 KOPPU MOUNIKA<br />
18 13R11A04H3 KANNE RAVI KUMAR<br />
19 13R11A04H4 KARRA VINEELA<br />
20 13R11A04H5 KANUKALA SIDDHARTH<br />
21 13R11A04H6 KATHI SHIVARAM REDDY
22 13R11A04H7 KOMANDLA SRIKANTH REDDY<br />
23 13R11A04H8 KONDAM PADMA<br />
24 13R11A04H9 KRISHNA ASHOK MORE<br />
25 13R11A04J0 LINGAMPALLY RAJASRI<br />
26 13R11A04J1 M ROHITH SAI SHASHANK<br />
27 13R11A04J2 M TANVIKA<br />
28 13R11A04J3 MALIHA AZAM<br />
29 13R11A04J4 MANSHA NEYAZ<br />
30 13R11A04J5 MATTA SRI SATYA SAI GAYATHRI<br />
31 13R11A04J6 MD RAHMAN SHAREEF<br />
32 13R11A04J7 MEESALA SAI SHRUTHI<br />
33 13R11A04J8 P G CHANDANA<br />
34 13R11A04J9 PALLE AKILA<br />
35 13R11A04K0 PERNAPATI YAMINI<br />
36 13R11A04K1 POLISETTY VEDA SRI<br />
37 13R11A04K2 REGU PRAVALIKA<br />
38 13R11A04K3 R RITHWIK REDDY<br />
39 13R11A04K4 RAMYA S<br />
40 13R11A04K5 SURI BHASKER SRI HARSHA<br />
41 13R11A04K6 TANGUTOORI SIRI CHANDANA<br />
42 13R11A04K7 THIPPARAPU AKHIL<br />
43 13R11A04K8 UDDALA DEVAMMA<br />
44 13R11A04K9 VALASA SHIVANI<br />
45 13R11A04L0 VEPURI NAGA TARUN SAI<br />
46 13R11A04L1 VISWAJITH GOVINDA RAJAN<br />
47 13R11A04L2 YENDURI YUGANDHAR
48 13R11A04L3 M SAI KUMAR<br />
49 14R18A0401 MODUMUDI HARSHITHA<br />
26. Group-Wise students list for discussion topics<br />
Section -D<br />
Group 1<br />
Group 2<br />
Group 3:<br />
1 13R11A04F5 A RAMA THEJA<br />
2 13R11A04F7 ANUGU PRASHANTH<br />
3 13R11A04F8 ARACHANA DASH<br />
4 13R11A04F9 CHAVALI NAGARJUNA<br />
5 13R11A04G0 CHIGURUPATI MEENAKSHI<br />
6 13R11A04G1 D SRI RAMYA<br />
7 13R11A04G2 DEEKONDA RAJSHREE<br />
8 13R11A04G3 G MANIDEEP<br />
9 13R11A04G4 GATADI VADDE PREM SAGAR<br />
10 13R11A04G5 GOGU JEEVITHA SPANDANA REDDY<br />
11 13R11A04G6 GOLLURI SINDHUJA<br />
12 13R11A04G7 GOPANABOENA SAI KRANTHI KUMAR<br />
13 13R11A04G8 GUNTIMADUGU SAI RESHMA<br />
14 13R11A04G9 K DARSHAN<br />
15 13R11A04H0 K. ANIRUDH<br />
Group 4:
16 13R11A04H1 KOMIRISHETTY AKHILA<br />
17 13R11A04H2 KOPPU MOUNIKA<br />
18 13R11A04H3 KANNE RAVI KUMAR<br />
19 13R11A04H4 KARRA VINEELA<br />
20 13R11A04H5 KANUKALA SIDDHARTH<br />
Group 5:<br />
21 13R11A04H6 KATHI SHIVARAM REDDY<br />
22 13R11A04H7 KOMANDLA SRIKANTH REDDY<br />
23 13R11A04H8 KONDAM PADMA<br />
24 13R11A04H9 KRISHNA ASHOK MORE<br />
25 13R11A04J0 LINGAMPALLY RAJASRI<br />
Group 6:<br />
26 13R11A04J1 M ROHITH SAI SHASHANK<br />
27 13R11A04J2 M TANVIKA<br />
28 13R11A04J3 MALIHA AZAM<br />
29 13R11A04J4 MANSHA NEYAZ<br />
30 13R11A04J5 MATTA SRI SATYA SAI GAYATHRI<br />
Group 7:<br />
31 13R11A04J6 MD RAHMAN SHAREEF<br />
32 13R11A04J7 MEESALA SAI SHRUTHI<br />
33 13R11A04J8 P G CHANDANA<br />
34 13R11A04J9 PALLE AKILA<br />
35 13R11A04K0 PERNAPATI YAMINI
Group 8:<br />
36 13R11A04K1 POLISETTY VEDA SRI<br />
37 13R11A04K2 REGU PRAVALIKA<br />
38 13R11A04K3 R RITHWIK REDDY<br />
39 13R11A04K4 RAMYA S<br />
40 13R11A04K5 SURI BHASKER SRI HARSHA<br />
Group 9:<br />
41 13R11A04K6 TANGUTOORI SIRI CHANDANA<br />
42 13R11A04K7 THIPPARAPU AKHIL<br />
43 13R11A04K8 UDDALA DEVAMMA<br />
44 13R11A04K9 VALASA SHIVANI<br />
45 13R11A04L0 VEPURI NAGA TARUN SAI<br />
Group 10:<br />
46 13R11A04L1 VISWAJITH GOVINDA RAJAN<br />
47 13R11A04L2 YENDURI YUGANDHAR<br />
48 13R11A04L3 M SAI KUMAR<br />
49 14R18A0401 MODUMUDI HARSHITHA<br />
20. Tutorial class sheets<br />
UNIT-1<br />
SAMPLING:<br />
Sampling Theorem for strictly band-limited signals:<br />
1. A signal g(t) that is band-limited to W Hz, i.e. G(f) = 0 for |f| ≥ W, is completely described by its samples g(n/2W), n = 0, ±1, ±2, …<br />
2. The signal can be completely recovered from its samples g(n/2W).<br />
Nyquist rate = 2W samples per second; Nyquist interval = 1/2W seconds.<br />
When the signal is not band-limited (under-sampling), aliasing occurs. To avoid aliasing, we may limit the signal bandwidth or use a higher sampling rate.<br />
The instantaneously sampled signal is<br />
gδ(t) = Σn g(nTs) δ(t − nTs)   (3.1)<br />
From the Fourier-transform pair in Table A6.3 we have<br />
Gδ(f) = fs Σm G(f − mfs)   (3.2)<br />
or Gδ(f) = fs G(f) + fs Σm≠0 G(f − mfs)   (3.3)<br />
or we may apply the Fourier transform to (3.1) to obtain<br />
Gδ(f) = Σn g(nTs) exp(−j2πnfTs)   (3.5)<br />
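The aliasing statement above can be demonstrated numerically: when the sampling rate is below the Nyquist rate, a high-frequency tone becomes indistinguishable from its fold-down alias (the frequencies here are illustrative).<br />

```python
import math

# A 7 kHz cosine sampled at fs = 10 kHz (below its Nyquist rate of
# 14 kHz) yields exactly the same samples as a 3 kHz cosine, since
# 7 kHz folds down to fs - 7 kHz = 3 kHz.
fs = 10_000.0
for n in range(16):
    t = n / fs
    high = math.cos(2 * math.pi * 7_000 * t)
    folded = math.cos(2 * math.pi * 3_000 * t)
    assert abs(high - folded) < 1e-9
print("7 kHz and 3 kHz are indistinguishable at fs = 10 kHz")
```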
Unit 2<br />
Pulse-Amplitude Modulation :<br />
Pulse Amplitude Modulation – Natural and Flat-Top Sampling:<br />
Let s(t) denote the sequence of flat-top pulses:<br />
s(t) = Σn m(nTs) h(t − nTs)   (3.10)<br />
h(t) = 1 for 0 &lt; t &lt; T; 1/2 for t = 0 and t = T; 0 otherwise   (3.11)<br />
The instantaneously sampled version of m(t) is<br />
mδ(t) = Σn m(nTs) δ(t − nTs)   (3.12)<br />
mδ(t) ∗ h(t) = ∫ mδ(τ) h(t − τ) dτ = Σn m(nTs) ∫ δ(τ − nTs) h(t − τ) dτ   (3.13)<br />
Using the sifting property, we have<br />
mδ(t) ∗ h(t) = Σn m(nTs) h(t − nTs)   (3.14)<br />
The PAM signal s(t) is<br />
s(t) = mδ(t) ∗ h(t)   (3.15)<br />
so that S(f) = Mδ(f) H(f)   (3.16)<br />
Recall (3.2), Gδ(f) = fs Σm G(f − mfs), so that<br />
Mδ(f) = fs Σk M(f − kfs)   (3.17)<br />
S(f) = fs Σk M(f − kfs) H(f)   (3.18)<br />
• The most common technique for sampling voice in PCM systems is to use a sample-and-hold circuit.<br />
• The instantaneous amplitude of the analog (voice) signal is held as a constant charge<br />
on a capacitor for the duration of the sampling period Ts.<br />
• This technique is useful for holding the sample constant while other processing is<br />
taking place, but it alters the frequency spectrum and introduces an error, called<br />
aperture error, resulting in an inability to recover exactly the original analog signal.<br />
• The amount of error depends on how much the analog signal changes during the holding<br />
time, called aperture time.<br />
• To estimate the maximum voltage error possible, determine the maximum slope of the<br />
analog signal and multiply it by the aperture time ΔT.<br />
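The last bullet can be turned into a small calculation. For a sinusoid A·sin(2πft) the maximum slope is 2πfA, so the worst-case aperture error is 2πfA·ΔT (the example numbers are illustrative).<br />

```python
import math

def max_aperture_error(amplitude, f_max, aperture_time):
    """Worst-case sample-and-hold error: the signal's maximum slew rate
    (2*pi*f*A for a sinusoid) times the aperture time."""
    return 2 * math.pi * f_max * amplitude * aperture_time

# e.g. a 1 V, 3.4 kHz voice-band sinusoid held for 1 microsecond
err = max_aperture_error(1.0, 3400.0, 1e-6)
print(f"{err * 1000:.2f} mV")  # about 21.36 mV
```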
Unit 3<br />
Differential Pulse-Code Modulation<br />
(DPCM):<br />
Usually PCM has a sampling rate higher than the Nyquist rate, so the encoded signal contains<br />
redundant information. DPCM can efficiently remove this redundancy.<br />
The prediction error is<br />
en = mn − m̂n   (3.74)<br />
where m̂n is a prediction of the input sample mn.<br />
The quantizer output is<br />
eq(n) = en + qn   (3.75)<br />
where qn is the quantization error.<br />
The prediction-filter input is<br />
mq(n) = m̂n + eq(n) = m̂n + en + qn   (3.77)<br />
so that<br />
mq(n) = mn + qn   (3.78)<br />
Processing Gain:<br />
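The DPCM relations (3.74)-(3.78) can be exercised with a toy first-order predictor and a uniform quantizer (both choices are illustrative, not from the text); relation (3.78) guarantees the reconstruction error equals the quantization error alone.<br />

```python
def dpcm(samples, step=4):
    """First-order DPCM sketch: predictor = previous reconstructed sample,
    quantizer = rounding the prediction error to multiples of `step`."""
    pred = 0
    encoded, recon = [], []
    for m in samples:
        e = m - pred                    # prediction error  e_n = m_n - m^_n
        eq = step * round(e / step)     # quantized error   e_q = e_n + q_n
        mq = pred + eq                  # reconstruction    m_q = m^_n + e_q
        encoded.append(eq)
        recon.append(mq)
        pred = mq                       # prediction for the next sample
    return encoded, recon

enc, rec = dpcm([10, 13, 17, 20, 18, 15])
# per (3.78), |rec - m| never exceeds the quantizer error of step/2 = 2
print(rec)
```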
Unit 4<br />
CORRELATIVE LEVEL CODING:<br />
• Correlative-level coding (partial response signaling)<br />
– adding ISI to the transmitted signal in a controlled manner<br />
• Since ISI introduced into the transmitted signal is known, its effect can be interpreted at<br />
the receiver<br />
• A practical method of achieving the theoretical maximum signaling rate of 2W symbols<br />
per second in a bandwidth of W hertz<br />
• Using realizable and perturbation-tolerant filters<br />
Duo-binary Signaling :<br />
Duo : doubling of the transmission capacity of a straight binary system<br />
TEXT BOOKS<br />
Reference Text Books and websites<br />
5. Principles of Communication Systems - Herbert Taub, Donald L. Schilling, Goutam Saha, 3rd edition, McGraw-Hill, 2008.<br />
6. Digital and Analog Communication Systems - Sam Shanmugam, John Wiley, 2005.<br />
REFERENCES:<br />
5. Digital Communications - John G. Proakis, Masoud Salehi, 5th edition, McGraw-Hill, 2008.<br />
6. Digital Communication - Simon Haykin, John Wiley, 2005.<br />
MISSING TOPICS<br />
UNIT 1<br />
Hartley's law<br />
During that same year, Hartley formulated a way to quantify information and its line<br />
rate (also known as data signalling rate or gross bitrate inclusive of error-correcting code 'R'<br />
across a communications channel). [1] This method, later known as Hartley's law, became an<br />
important precursor for Shannon's more sophisticated notion of channel capacity.<br />
Hartley argued that the maximum number of distinct pulses that can be transmitted and<br />
received reliably over a communications channel is limited by the dynamic range of the<br />
signal amplitude and the precision with which the receiver can distinguish amplitude levels.<br />
Specifically, if the amplitude of the transmitted signal is restricted to the range of [−A ... +A]<br />
volts, and the precision of the receiver is ±ΔV volts, then the maximum number of distinct<br />
pulses M is given by<br />
M = 1 + A/ΔV<br />
By taking information per pulse in bit/pulse to be the base-2 logarithm of the number of<br />
distinct messages M that could be sent, Hartley [2] constructed a measure of the line<br />
rate R as:<br />
R = f_p log2(M)<br />
where fp is the pulse rate, also known as the symbol rate, in symbols/second or baud.<br />
Hartley then combined the above quantification with Nyquist's observation that the number<br />
of independent pulses that could be put through a channel of bandwidth B hertz was<br />
2B pulses per second, to arrive at his quantitative measure for achievable line rate.<br />
Hartley's law is sometimes quoted as just a proportionality between the analog bandwidth, B,<br />
in hertz and what today is called the digital bandwidth, R, in bit/s. [3] Other times it is quoted<br />
in this more quantitative form, as an achievable line rate of R bits per second: [4]<br />
R = 2B log2(M)<br />
Hartley did not work out exactly how the number M should depend on the noise statistics of<br />
the channel, or how the communication could be made reliable even when individual symbol<br />
pulses could not be reliably distinguished to M levels; with Gaussian noise statistics, system<br />
designers had to choose a very conservative value of M to achieve a low error rate.<br />
The concept of an error-free capacity awaited Claude Shannon, who built on Hartley's<br />
observations about a logarithmic measure of information and Nyquist's observations about<br />
the effect of bandwidth limitations.<br />
Hartley's rate result can be viewed as the capacity of an errorless M-ary channel of<br />
2B symbols per second. Some authors refer to it as a capacity. But such an errorless channel<br />
is an idealization, and the result is necessarily less than the Shannon capacity of the noisy<br />
channel of bandwidth B, which is the Hartley–Shannon result that followed later.
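Hartley's level count and line rate can be exercised numerically. The formulas in the sketch, M = 1 + A/ΔV and R = 2B·log2(M), are the standard statements of Hartley's bound; the amplitude, precision, and bandwidth values are illustrative:<br />
<br />

```python
import math

def hartley_levels(A, delta_v):
    """M = 1 + A/delta_v: distinct levels for amplitudes in [-A, +A]
    resolved to +/- delta_v (a common statement of Hartley's bound)."""
    return 1 + A / delta_v

def hartley_rate(B, M):
    """Achievable line rate R = 2*B*log2(M) bit/s, using Nyquist's
    2B pulses per second through a bandwidth of B hertz."""
    return 2 * B * math.log2(M)

M = hartley_levels(A=7.0, delta_v=1.0)  # 8 distinguishable levels
R = hartley_rate(3000, M)               # 3 kHz channel (illustrative)
print(M, R)  # 8.0 18000.0
```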
Noisy channel coding theorem and capacity<br />
Main article: noisy-channel coding theorem<br />
Claude Shannon's development of information theory during World War II provided the next<br />
big step in understanding how much information could be reliably communicated through<br />
noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding<br />
theorem (1948) describes the maximum possible efficiency of error-correcting<br />
methods versus levels of noise interference and data corruption. [5][6] The proof of the theorem<br />
shows that a randomly constructed error correcting code is essentially as good as the best<br />
possible code; the theorem is proved through the statistics of such random codes.<br />
Shannon's theorem shows how to compute a channel capacity from a statistical description of<br />
a channel, and establishes that given a noisy channel with capacity C and information<br />
transmitted at a line rate R, then if<br />
R &lt; C<br />
there exists a coding technique which allows the probability of error at the receiver to be<br />
made arbitrarily small. This means that theoretically, it is possible to transmit information<br />
nearly without error up to nearly a limit of C bits per second.<br />
The converse is also important. If<br />
R &gt; C<br />
the probability of error at the receiver increases without bound as the rate is increased. So no<br />
useful information can be transmitted beyond the channel capacity. The theorem does not<br />
address the rare situation in which rate and capacity are equal.<br />
Shannon–Hartley theorem<br />
The Shannon–Hartley theorem establishes what that channel capacity is for a finite-bandwidth<br />
continuous-time channel subject to Gaussian noise:<br />
C = B log2(1 + S/N)<br />
where C is the capacity in bit/s, B the bandwidth in Hz, and S/N the signal-to-noise power<br />
ratio. It connects Hartley's result with Shannon's channel capacity theorem in a form that is<br />
equivalent to specifying the M in Hartley's line rate formula in terms of a signal-to-noise<br />
ratio, but achieving reliability through error-correction coding rather than through reliably<br />
distinguishable pulse levels.<br />
If there were such a thing as an infinite-bandwidth, noise-free analog channel, one could<br />
transmit unlimited amounts of error-free data over it per unit of time. Real channels,<br />
however, are subject to limitations imposed by both finite bandwidth and nonzero noise.<br />
So how do bandwidth and noise affect the rate at which information can be transmitted over<br />
an analog channel?<br />
Surprisingly, bandwidth limitations alone do not impose a cap on maximum information rate.<br />
This is because it is still possible for the signal to take on an indefinitely large number of<br />
different voltage levels on each symbol pulse, with each slightly different level being<br />
assigned a different meaning or bit sequence. If we combine both noise and bandwidth<br />
limitations, however, we do find there is a limit to the amount of information that can be<br />
transferred by a signal of a bounded power, even when clever multi-level encoding<br />
techniques are used.<br />
In the channel considered by the Shannon-Hartley theorem, noise and signal are combined by<br />
addition. That is, the receiver measures a signal that is equal to the sum of the signal
encoding the desired information and a continuous random variable that represents the noise.<br />
This addition creates uncertainty as to the original signal's value. If the receiver has some<br />
information about the random process that generates the noise, one can in principle recover<br />
the information in the original signal by considering all possible states of the noise process. In<br />
the case of the Shannon-Hartley theorem, the noise is assumed to be generated by a Gaussian<br />
process with a known variance. Since the variance of a Gaussian process is equivalent to its<br />
power, it is conventional to call this variance the noise power.<br />
Such a channel is called the Additive White Gaussian Noise channel, because Gaussian noise<br />
is added to the signal; "white" means equal amounts of noise at all frequencies within the<br />
channel bandwidth. Such noise can arise both from random sources of energy and also from<br />
coding and measurement error at the sender and receiver respectively. Since sums of<br />
independent Gaussian random variables are themselves Gaussian random variables, this<br />
conveniently simplifies analysis, if one assumes that such error sources are also Gaussian and<br />
independent.<br />
Comparison of Shannon's capacity to Hartley's law<br />
Comparing the channel capacity to the information rate from Hartley's law, we can find the<br />
effective number of distinguishable levels M: [7]<br />
M = √(1 + S/N)<br />
The square root effectively converts the power ratio back to a voltage ratio, so the number of<br />
levels is approximately proportional to the ratio of rms signal amplitude to noise standard<br />
deviation.<br />
This similarity in form between Shannon's capacity and Hartley's law should not be<br />
interpreted to mean that M pulse levels can be literally sent without any confusion; more<br />
levels are needed, to allow for redundant coding and error correction, but the net data rate that<br />
can be approached with coding is equivalent to using that M in Hartley's law.<br />
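Plugging in telephone-channel style numbers (B = 3 kHz, S/N = 1000, i.e. about 30 dB; both illustrative) shows the capacity and the effective level count M = √(1 + S/N) side by side:<br />
<br />

```python
import math

def shannon_capacity(B, snr):
    """Shannon-Hartley capacity C = B*log2(1 + S/N) in bit/s."""
    return B * math.log2(1 + snr)

def effective_levels(snr):
    """Effective distinguishable levels M = sqrt(1 + S/N), found by
    equating C with Hartley's R = 2B*log2(M)."""
    return math.sqrt(1 + snr)

B, snr = 3000.0, 1000.0   # 3 kHz channel at ~30 dB SNR (illustrative)
C = shannon_capacity(B, snr)
M = effective_levels(snr)
print(round(C), round(M, 1))  # 29902 bit/s, 31.6 levels
```

About 31.6 "levels" cannot literally be signaled without confusion; as the text above notes, coding spreads the distinction over many symbols to reach the same net rate.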
Frequency-dependent (colored noise) case<br />
In the simple version above, the signal and noise are fully uncorrelated, in which<br />
case S + N is the total power of the received signal and noise together. A generalization of the<br />
above equation for the case where the additive noise is not white (or that the S/N is not<br />
constant with frequency over the bandwidth) is obtained by treating the channel as many<br />
narrow, independent Gaussian channels in parallel:<br />
C = ∫₀^B log2(1 + S(f)/N(f)) df<br />
where<br />
C is the channel capacity in bits per second;<br />
B is the bandwidth of the channel in Hz;<br />
S(f) is the signal power spectrum;<br />
N(f) is the noise power spectrum;<br />
f is frequency in Hz.<br />
Note: the theorem only applies to Gaussian stationary process noise. This formula's way of<br />
introducing frequency-dependent noise cannot describe all continuous-time noise processes.<br />
For example, consider a noise process consisting of adding a random wave whose amplitude<br />
is 1 or -1 at any point in time, and a channel that adds such a wave to the source signal. Such<br />
a wave's frequency components are highly dependent. Though such a noise may have high<br />
power, it is fairly easy to transmit a continuous signal with much less power than one would<br />
need if the underlying noise were a sum of independent noises in each frequency band.<br />
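The parallel narrow-subchannel view lends itself to numerical integration of C = ∫₀^B log2(1 + S(f)/N(f)) df. This midpoint-rule sketch (parameters illustrative) reduces to the flat-SNR capacity B·log2(1 + S/N) as a sanity check:<br />
<br />

```python
import math

def capacity_colored(snr_of_f, B, n=10000):
    """Capacity with frequency-dependent SNR, approximated by the midpoint
    rule on C = integral_0^B log2(1 + S(f)/N(f)) df, i.e. treating the
    channel as n narrow, independent Gaussian sub-channels."""
    df = B / n
    return sum(math.log2(1 + snr_of_f((i + 0.5) * df)) * df for i in range(n))

# Sanity check: a flat SNR must reduce to B*log2(1 + S/N)
B = 3000.0
flat = capacity_colored(lambda f: 1000.0, B)
print(abs(flat - B * math.log2(1001)) < 1e-6)  # True
```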
Approximations<br />
For large or small and constant signal-to-noise ratios, the capacity formula can be<br />
approximated:<br />
• If S/N &gt;&gt; 1, then C ≈ B log2(S/N), so capacity grows only logarithmically with signal<br />
power but linearly with bandwidth (bandwidth-limited regime).<br />
• Similarly, if S/N &lt;&lt; 1, then C ≈ (S/N) B log2(e) ≈ 1.44 · B · (S/N), so capacity is nearly<br />
linear in signal power and almost independent of bandwidth (power-limited regime).<br />
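Both regimes can be checked numerically against the exact formula C = B·log2(1 + S/N); the approximations used here, C ≈ B·log2(S/N) for S/N >> 1 and C ≈ (S/N)·B·log2(e) for S/N << 1, are the standard ones, and the bandwidth and SNR values are illustrative:<br />
<br />

```python
import math

def capacity(B, snr):
    """Exact Shannon-Hartley capacity C = B*log2(1 + S/N)."""
    return B * math.log2(1 + snr)

B = 1.0e6  # 1 MHz bandwidth (illustrative)

# High SNR: C is close to B*log2(S/N)
snr = 1000.0
rel_err_high = abs(capacity(B, snr) - B * math.log2(snr)) / capacity(B, snr)

# Low SNR: C is close to (S/N)*B*log2(e), nearly linear in signal power
snr = 0.001
rel_err_low = abs(capacity(B, snr) - snr * B * math.log2(math.e)) / capacity(B, snr)

print(rel_err_high < 0.001, rel_err_low < 0.001)  # True True
```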
Complex baseband (equivalent lowpass) representation:<br />
• f_c denotes the center frequency<br />
• Negative frequencies contain no additional information<br />
Characteristics:<br />
• Complex-valued signal<br />
• No information loss; truly equivalent<br />
Let us consider D_N = {(x_i, y_i) : i = 1, ..., N}, i.i.d. realizations of the joint observation–class<br />
phenomenon (X(u), Y(u)) with true probability measure P_{X,Y} defined on (X × Y, σ(F_X ×<br />
F_Y)). In addition, let us consider a family of measurable representation functions D, where<br />
any f(·) ∈ D is defined on X and takes values in X_f. Let us assume that any representation<br />
function f(·) induces an empirical distribution P̂_{X_f,Y} on (X_f × Y, σ(F_f × F_Y)), based on the<br />
training data and an implicit learning approach, where the empirical Bayes classification rule<br />
is given by: ĝ_f(x) = arg max_{y∈Y} P̂_{X_f,Y}(x, y).<br />
Turbo codes<br />
UNIT 6<br />
In information theory, turbo codes (originally in French Turbocodes) are a class of high-performance<br />
forward error correction (FEC) codes developed in 1993, which were the first<br />
practical codes to closely approach the channel capacity, a theoretical maximum for the code<br />
rate at which reliable communication is still possible given a specific noise level. Turbo<br />
codes are finding use in 3G mobile communications and (deep<br />
space) satellite communications as well as other applications where designers seek to achieve<br />
reliable information transfer over bandwidth- or latency-constrained communication links in<br />
the presence of data-corrupting noise. Turbo codes are nowadays competing with LDPC<br />
codes, which provide similar performance.<br />
Prior to turbo codes, the best constructions were serial concatenated codes based on an<br />
outer Reed-Solomon error correction code combined with an inner Viterbi-decoded short<br />
constraint length convolutional code, also known as RSV codes.<br />
In 1993, turbo codes were introduced by Berrou, Glavieux, and Thitimajshima (from<br />
Télécom Bretagne, former ENST Bretagne, France) in their paper: "Near Shannon Limit<br />
Error-correcting Coding and Decoding: Turbo-codes" published in the Proceedings of IEEE<br />
International Communications Conference. In a later paper, Berrou gave credit to the<br />
"intuition" of "G. Battail, J. Hagenauer and P. Hoeher, who, in the late 80s, highlighted the<br />
interest of probabilistic processing.". He adds "R. Gallager and M. Tanner had already<br />
imagined coding and decoding techniques whose general principles are closely related,"<br />
although the necessary calculations were impractical at that time.<br />
The first class of turbo code was the parallel concatenated convolutional code (PCCC). Since<br />
the introduction of the original parallel turbo codes in 1993, many other classes of turbo code<br />
have been discovered, including serial versions and repeat-accumulate codes. Iterative Turbo<br />
decoding methods have also been applied to more conventional FEC systems, including<br />
Reed-Solomon corrected convolutional codes.
There are many different instantiations of turbo codes, using different component encoders,<br />
input/output ratios, interleavers, and puncturing patterns. This example encoder<br />
implementation describes a 'classic' turbo encoder, and demonstrates the general design of<br />
parallel turbo codes.<br />
This encoder implementation sends three sub-blocks of bits. The first sub-block is the m-bit<br />
block of payload data. The second sub-block is n/2 parity bits for the payload data, computed<br />
using a recursive systematic convolutional code (RSC code). The third sub-block is n/2 parity<br />
bits for a known permutation of the payload data, again computed using an RSC<br />
convolutional code. Thus, two redundant but different sub-blocks of parity bits are sent with<br />
the payload. The complete block has m+n bits of data with a code rate of m/(m+n).<br />
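The three-sub-block structure can be sketched with a small RSC component code. The specific polynomials (the classic (7,5) memory-2 RSC), the block length, and the toy interleaver below are illustrative assumptions, not the encoder from the figure:<br />
<br />

```python
def rsc_parity(bits):
    """Parity stream of a memory-2 recursive systematic convolutional
    encoder: feedback 1 + D + D^2, feedforward 1 + D^2 (the classic
    (7,5) RSC; an illustrative choice)."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2        # recursive (feedback) bit
        parity.append(a ^ s2)  # feedforward output; u itself is sent as-is
        s1, s2 = a, s1
    return parity

def turbo_encode(payload, interleaver):
    """Parallel concatenation: systematic bits, parity of the payload, and
    parity of a permuted copy of the payload (three sub-blocks)."""
    permuted = [payload[i] for i in interleaver]
    return payload, rsc_parity(payload), rsc_parity(permuted)

payload = [1, 0, 1, 1, 0, 0, 1, 0]      # m = 8 bits (illustrative)
interleaver = [7, 6, 5, 4, 3, 2, 1, 0]  # toy permutation
x, p1, p2 = turbo_encode(payload, interleaver)
rate = len(x) / (len(x) + len(p1) + len(p2))
print(rate)  # m/(m+n) = 8/24: a rate-1/3 code before any puncturing
```

Real interleavers are carefully designed pseudo-random permutations; the bit-reversal here only demonstrates that the two parity streams see the data in different orders.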
The permutation of the payload data is carried out by a device called an interleaver.<br />
Hardware-wise, this turbo-code encoder consists of two identical RSC coders, C1 and C2, as<br />
depicted in the figure, which are connected to each other using a concatenation scheme,<br />
called parallel concatenation:<br />
In the figure, M is a memory register. The delay line and interleaver force input bits dk to<br />
appear in different sequences. At first iteration, the input sequence dk appears at both outputs<br />
of the encoder, xk and y1k or y2k due to the encoder's systematic nature. If the<br />
encoders C1 and C2 are used respectively in n1 and n2 iterations, their rates are respectively<br />
equal to<br />
R1 = (n1 + n2)/(2n1 + n2), R2 = (n1 + n2)/(n1 + 2n2).<br />
The decoder<br />
The decoder is built in a similar way to the above encoder: two elementary decoders,<br />
DEC1 and DEC2, are interconnected to each other, but in a serial way, not in parallel. The<br />
DEC1 decoder operates on the lower-speed stream (i.e. R1); thus, it is intended for the C1<br />
encoder, and DEC2 is for C2 correspondingly. DEC1 yields a soft decision, which causes an<br />
L1 delay. The same delay is caused by the delay line in the encoder. DEC2's operation<br />
causes an L2 delay.<br />
An interleaver installed between the two decoders is used here to scatter error bursts<br />
coming from the DEC1 output. The DI block is a demultiplexing and insertion module. It works as<br />
a switch, redirecting input bits to DEC1 at one moment and to DEC2 at another. In the OFF<br />
state, it feeds both y1k and y2k inputs with padding bits (zeros).<br />
Consider a memoryless AWGN channel, and assume that at the k-th iteration, the decoder<br />
receives a pair of random variables:<br />
x_k = (2d_k − 1) + a_k, y_k = 2(Y_k − 1) + b_k<br />
where a_k and b_k are independent noise components having the same<br />
variance σ². Y_k is the k-th bit from the y_k encoder output.<br />
Redundant information is demultiplexed and sent<br />
through DI to DEC1 (when y_k = y1k) and to DEC2 (when y_k = y2k).<br />
DEC1 yields a soft decision, i.e.:<br />
Λ(d_k) = log [ p(d_k = 1) / p(d_k = 0) ]<br />
and delivers it to DEC2. Λ(d_k) is called the logarithm of the likelihood<br />
ratio (LLR). p(d_k = i), i ∈ {0, 1}, is the a posteriori probability (APP) of the d_k data bit,<br />
which shows the probability of interpreting a received d_k bit as i. Taking the LLR into<br />
account, DEC2 yields a hard decision, i.e. a decoded bit.<br />
It is known that the Viterbi algorithm is unable to calculate the APP, thus it cannot be<br />
used in DEC1. Instead, a modified BCJR algorithm is used. For DEC2, the Viterbi<br />
algorithm is an appropriate one.<br />
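For a BPSK-style mapping in AWGN with equiprobable bits, the LLR has the well-known closed form Λ = 2y/σ². This sketch (noise variance and received values are illustrative) shows the soft value and the hard decision derived from it:<br />
<br />

```python
def llr_awgn(y, sigma2):
    """LLR of a BPSK bit (+1 for 1, -1 for 0) received in AWGN with noise
    variance sigma2 and equiprobable bits:
    Lambda = log[p(d=1|y)/p(d=0|y)] = 2*y/sigma2."""
    return 2.0 * y / sigma2

def hard_decision(llr):
    """Sign of the LLR gives the decoded bit."""
    return 1 if llr >= 0 else 0

print(hard_decision(llr_awgn(0.8, 0.5)))   # received near +1 -> bit 1
print(hard_decision(llr_awgn(-0.3, 0.5)))  # received below 0 -> bit 0
```

The magnitude of the LLR is the reliability information that one component decoder passes to the other; only the final stage collapses it to a hard bit.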
However, the depicted structure is not an optimal one, because DEC1 uses only a<br />
proper fraction of the available redundant information. In order to improve the structure, a<br />
feedback loop is used (see the dotted line on the figure).
ASSIGNMENT TOPICS<br />
Unit I:<br />
1. Certain issues of digital transmission<br />
2. Advantages of digital communication systems<br />
3. Bandwidth–S/N trade-off and the sampling theorem<br />
4. PCM generation and reconstruction<br />
5. Quantization noise, differential PCM systems (DPCM)<br />
6. Delta modulation<br />
Unit II:<br />
1. Coherent ASK detector and non-coherent ASK detector<br />
2. Coherent FSK detector, BPSK<br />
3. Coherent PSK detection<br />
Unit III:<br />
1. A baseband signal receiver<br />
2. Different pulses and power spectral densities<br />
3. Probability of error<br />
4. Conditional entropy and redundancy<br />
5. Shannon–Fano coding<br />
6. Mutual information<br />
Unit IV:<br />
1. Matrix description of linear block codes<br />
2. Error detection and error correction capabilities of linear block codes<br />
3. Encoding<br />
4. Decoding using state, tree, and trellis diagrams<br />
5. Decoding using the Viterbi algorithm<br />
Unit V:<br />
1. Use of spread spectrum<br />
2. Direct sequence spread spectrum (DSSS)<br />
3. Code division multiple access<br />
4. Ranging using DSSS<br />
5. Frequency hopping spread spectrum
Subject Contents<br />
1.7.1. Synopsis page for each period (62 pages)<br />
1.7.2. Detailed Lecture notes containing:<br />
1. PPTs<br />
2. OHP slides<br />
3. Subjective type questions (approximately 5 to 8 in number)<br />
4. Objective type questions (approximately 20 to 30 in number)<br />
5. Any simulations<br />
1.8. Course Review (By the concerned Faculty):<br />
(I)Aims<br />
(II) Sample check<br />
(III) End of the course report by the concerned faculty<br />
GUIDELINES:<br />
Distribution of periods:<br />
No. of classes required to cover JNTU syllabus : 40<br />
No. of classes required to cover Additional topics : 4<br />
No. of classes required to cover Assignment tests (one test for every 2 units) : 4<br />
No. of classes required to cover tutorials : 8<br />
No. of classes required to cover Mid tests : 2<br />
No. of classes required to solve University Question papers : 4<br />
----------------<br />
Total periods : 62
CLOSURE REPORT<br />
The closure report for Digital Communications is enclosed here:<br />
1. No. of hours planned to complete the course – 62 hrs<br />
No. of hours taken –<br />
2. Internal marks evaluation sheet is attached.<br />
3. How many students appeared for the external examination –