
COMMUNICATION SCIENCES INSTITUTE

“Space-Time Block Codes for Wireless Systems:
Construction, Performance Analysis, and Trellis Coded Modulation”

by

Jifeng Geng

CSI-04-05-01

USC VITERBI SCHOOL OF ENGINEERING
UNIVERSITY OF SOUTHERN CALIFORNIA
ELECTRICAL ENGINEERING — SYSTEMS
LOS ANGELES, CA 90089-2565


SPACE-TIME BLOCK CODES FOR WIRELESS SYSTEMS:
CONSTRUCTION, PERFORMANCE ANALYSIS, AND TRELLIS CODED MODULATION

by

Jifeng Geng

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ELECTRICAL ENGINEERING)

May 2004

Copyright 2004
Jifeng Geng


Dedication

This dissertation is dedicated to my family for their everlasting support and love.


Acknowledgements

I would like to thank my advisor Urbashi Mitra, who overlooks many of my shortcomings and has provided valuable support and guidance throughout this work.


Contents

Dedication
Acknowledgements
List of Tables
List of Figures
Abstract

1 Introduction

2 Space-Time Block Codes in Multipath CDMA Systems
  2.1 Introduction
  2.2 The Uplink
    2.2.1 The Uplink System Model
    2.2.2 The Uplink Chernoff Bound and Code Design Criteria
    2.2.3 The Correlated Codeword Sequence Difference Matrix Φ
    2.2.4 Code Design Principles
    2.2.5 The Effect of Multipath
  2.3 The Downlink
    2.3.1 The Downlink Model
    2.3.2 Joint Maximum Likelihood Decoder
    2.3.3 Single-User ML-Based Decoder
  2.4 Non-Spread Systems
  2.5 Optimal Minimum Metric Codes
    2.5.1 OMM Codes for the Uplink
    2.5.2 OMM Codes for the Downlink
  2.6 Practical Considerations
    2.6.1 Isometries
    2.6.2 Sensitivity to Correlation
    2.6.3 Suboptimal Codes
  2.7 Conclusions


3 On Suboptimal Linear Decoders for Space-Time Block Codes
  3.1 Introduction
  3.2 Uplink Signal Model for Spread Systems
  3.3 Linear Equalizer
  3.4 The Mapper
  3.5 The Effect of a Large Number of Receive Antennae
  3.6 Fisher's Information Matrix
  3.7 Simulation
    3.7.1 The BLUE and LS Decoders
    3.7.2 The LM Decoders
    3.7.3 The LMMSE Decoders
    3.7.4 The Effect of the Number of Receive Antennae L_r
    3.7.5 The Effect of the Number of Active Users K
    3.7.6 Complexity

4 Nonlinear Hierarchical Codes
  4.1 Introduction
  4.2 Signal Model and Isometries
  4.3 Building Nonlinear Hierarchical Codes
  4.4 The Hierarchical Structure of Optimal 2 × 2 QPSK Codes
  4.5 Code Lists

5 Regular Multiple Trellis Coded Modulation
  5.1 Introduction
  5.2 Signal Model
  5.3 RMTCM Design Parameters and Procedures
  5.4 Performance Analysis
  5.5 RMTCM Design Examples
    5.5.1 2 × 2 BPSK
    5.5.2 2 × 2 QPSK
    5.5.3 2 × 2 8PSK
    5.5.4 3 × 3 BPSK
    5.5.5 3 × 3 QPSK
    5.5.6 3 × 3 8PSK

6 Conclusions

Reference List

Appendix A: Code Lists

Appendix B: RMTCM Design Examples


List of Tables

2.1 The distance spectra for the rate 1 BPSK 2 × 2 Class 1 OMM code.
2.2 OMM codes.
2.3 Detailed structure of rate 1 OMM 2 × 2 8PSK and 16PSK codes (u_n = e^{i2π/n}).
2.4 Rate 1 OMM 3 × 3 BPSK codes.
2.5 Three space-time code structures and their performance comparison.
3.1 CPU time in seconds for 200 runs.
4.1 Code list of the 2 × 2 QPSK NHC.
4.2 List of hierarchical codes.
5.1 The length M of a single error event for given b and g.
5.2 Rate 1/2 MTCM designs for 2 × 2 BPSK.
5.3 Regular MTCM design for 2 × 2 BPSK, |S_3| = 8 codewords.
5.4 MTCM design for 2 × 2 QPSK, |S_4| = 16 codewords.
5.5 Hierarchical structure in Hamming distance.
A.1 2 × 2 BPSK codes.
A.2 2 × 2 QPSK.
A.3 2 × 2 8PSK code list.
A.4 2 × 2 8PSK.
A.5 3 × 3 BPSK.
A.6 3 × 3 QPSK.
A.7 3 × 3 QPSK code list.
A.8 3 × 3 8PSK.
A.9 3 × 3 8PSK code list.
A.10 4 × 3 BPSK.
A.11 4 × 3 QPSK.
A.12 4 × 3 QPSK code list.


List of Figures

2.1 Transmitter encoder block diagram for user k.
2.2 Receiver decoder block diagram.
2.3 Channel model.
2.4 Union bound as a function of path delay τ.
2.5 Distinction between spread and non-spread transmission.
2.6 Union bound versus simulation for the rate 1 OMM 2 × 2 BPSK Class 1 code.
2.7 Distance spectra for rate 1 OMM 2 × 2 BPSK codes.
2.8 Rate 1 OMM 2 × 2 BPSK block codes at SNR = 6 dB.
2.9 Rate 1 OMM 2 × 2 BPSK and 16PSK codes for ρ_c = 0.3.
2.10 Rate 1 OMM 3 × 3 BPSK codes at SNR = 6 dB.
2.11 Rate 1 OMM 3 × 3 BPSK codes.
2.12 Rate 2 OMM 2 × 2 QPSK codes at ρ_c = 1.0.
2.13 Rate 2 OMM 2 × 2 QPSK codes at ρ_c = 0.3.
2.14 Rate 2 OMM 2 × 2 QPSK codes, Gold sequence, single user, flat fading.
2.15 Rate 2 OMM 2 × 2 QPSK codes, Gold sequence, multipath fading Γ = [0, 5; 0, 5].
2.16 Sequence decoder M = 4, rate 2 OMM 2 × 2 QPSK codes, Gold sequence, multipath fading Γ = [0, 5, 7, 10; 0, 5, 7, 10].
2.17 Rate 1 OMM 4 × 2 BPSK codes for ρ_c = 0.3.
2.18 Multiuser rate 1 OMM 2 × 2 BPSK codes.
2.19 Decoder performance comparison with no near-far effect.
2.20 Decoder performance comparison with near-far effect.
2.21 Equivalent ρ_e as a function of SNR and ρ_c.
3.1 Decoder classification.
3.2 Decoder structure.
3.3 The BLUE and LS estimators lose diversity.


3.4 The BLUE and LS estimators have large variance.
3.5 Ignoring R incurs a 0.3 dB loss.
3.6 The LMMSE2 decoder has a slight advantage over the LMMSE1 decoder at high SNR.
3.7 The LMMSE2 decoder has less variance than the LMMSE1 decoder.
3.8 Two receive antennae.
3.9 Two active users.
3.10 Four active users.
4.1 The relationship between {a, b} and {a′, b′}.
4.2 Hierarchical structure of the 2 × 2 QPSK NHC.
4.3 3 × 3 BPSK, 8PSK NHCs (8 codewords) vs. best 8PSK unitary group codes (8 codewords).
4.4 3 × 3 8PSK NHCs (64 codewords) vs. best 21PSK unitary group codes (63 codewords).
4.5 4 × 3 BPSK (8 codewords) and QPSK (64 codewords) NHCs vs. orthogonal codes.
5.1 The shift-register configuration to generate the state trellis.
5.2 The shift-register configuration for 2 inputs and 3 registers and the corresponding trellis.
5.3 Rate 1/2 MTCM designs for 2 × 2 BPSK.
5.4 2 × 2 BPSK, rate 2/2.
5.5 2 × 2 QPSK, |S_4| = 16.
5.6 2 × 2 QPSK, |S_4| = 16, Ricean fading channels (h = 1).
5.7 2 × 2 QPSK, |S_5| = 32, quasi-static vs. fast block fading channels.
5.8 Bit map of 2 × 2 QPSK, |S_4| = 16, for serial concatenation.
5.9 2 × 2 QPSK, |S_4| = 16, serial concatenation vs. RMTCM.
5.10 Rate 6/2, 2 × 2 8PSK, |S_7| = 128 vs. |S_8| = 256.
5.11 3 × 3 BPSK, |S_3| = 8.
5.12 3 × 3 BPSK, d*_free of 2 × 4 × 1.
5.13 Rate 3/3, 3 × 3 BPSK, S_4, S_5, and S_6.
5.14 3 × 3 QPSK, |S_5| = 32.
5.15 3 × 3 QPSK, |S_7| = 32.
5.16 3 × 3 8PSK, rate 6/3, |S_7| = 128 vs. |S_8| = 256.
B.1 2 × 2 BPSK, rate 1/2, I2S2O2p1.


B.2 2 × 2 BPSK, rate 1/2, I2S2O4p1.
B.3 2 × 2 BPSK, rate 1/2, I2S2O4P1.
B.4 2 × 2 BPSK, rate 2/2, 2 × 2 × 2.
B.5 2 × 2 BPSK, rate 2/2, 2 × 4 × 1.
B.6 2 × 2 QPSK, rate 1/2, 8 × 2 × 1.
B.7 2 × 2 QPSK, rate 2/2, 4 × 2 × 2.
B.8 2 × 2 QPSK, rate 2/2, 4 × 4 × 1.
B.9 2 × 2 QPSK, rate 3/2, 2 × 2 × 4.
B.10 2 × 2 QPSK, rate 3/2, 2 × 4 × 2.
B.11 2 × 2 QPSK, rate 3/2, 2 × 8 × 1.
B.12 2 × 2 QPSK, rate 4/2, |S_5| = 32, 2 × 2 × 8.
B.13 2 × 2 QPSK, rate 4/2, |S_5| = 32, 2 × 4 × 4.
B.14 2 × 2 QPSK, rate 4/2, |S_5| = 32, 2 × 8 × 2.
B.15 2 × 2 QPSK, rate 4/2, |S_5| = 32, 2 × 16 × 1.
B.16 2 × 2 QPSK, rate 4/2, |S_6| = 64, 4 × 2 × 8.
B.17 2 × 2 QPSK, rate 4/2, |S_6| = 64, 4 × 4 × 4.
B.18 2 × 2 QPSK, rate 4/2, |S_6| = 64, 4 × 8 × 2.
B.19 2 × 2 QPSK, rate 4/2, |S_6| = 64, 2 × 16 × 1.


Abstract

Space-time codes are an effective means of combating fading by exploiting diversity in both the spatial and temporal dimensions. In this work, code performance and design are considered in the context of Direct-Sequence Code-Division Multiple-Access (DS-CDMA) systems. The performance criteria, diversity gain and coding gain, are redefined as functions of the spreading code correlation. Uplink and downlink scenarios are both analyzed in multipath fading channels. Optimal codes are found by search for both spread and non-spread systems. To reduce decoding complexity, suboptimal linear decoders are classified and analyzed, and the asymptotically achievable diversity level is derived when the number of receive antennae is large. Motivated by structures found in computer-optimized codes, Space-Time Block Codes (STBCs) are constructed bottom-up via distance-preserving isometries, optimizing the coding gain layer by layer and reusing the optimized structure. STBCs constructed in this way are dubbed Nonlinear Hierarchical Codes (NHCs). NHCs are essentially a generalization of Slepian's group codes for the Gaussian channel to the multiple-input multiple-output quasi-static fading channel. NHCs consistently offer comparable or significantly better performance than constrained designs (orthogonal or unitary). Systematic Regular MTCM (RMTCM) design procedures are proposed to exploit the layered structure of NHCs, leading to optimized designs for various rates, block sizes, and constellation sizes. Matching the distance spectrum of the NHCs to the regular trellis structure optimizes coding gain directly, and full diversity is naturally maintained given the full rank of the constituent NHCs. Several factors that affect performance in both quasi-static and fast block fading channels are analyzed and exploited to improve overall performance. Set expansion is used to improve high-rate RMTCM designs. RMTCM offers the best tradeoff between performance and complexity among space-time trellis codes (STTCs) and (interleaved) serially concatenated systems.


Chapter 1

Introduction

Methods for introducing diversity have long been used to combat fading in wireless channels; e.g., the rake receiver and Maximal Ratio Combining are classical receiver diversity schemes [Pro95]. Recently, transmit diversity techniques have received significant attention due to their efficiency in bandwidth and power [GFBK99, Ala98, TSC98, Hug00]. These works are, in part, motivated by information-theoretic analyses [Tel95, FG98], which show that capacity increases linearly in the minimum of the numbers of transmit and receive antennae. These capacity results hold under the assumption of statistically independent flat Rayleigh channels between all transmit/receive antenna pairs.

Significant work has been done for non-spread space-time systems. Space-time code design criteria are provided in [GFBK99, TSC98]. Alamouti [Ala98] proposed a simple orthogonal space-time block code for 2 transmit antennae; it was later generalized in [TJC99] to orthogonal space-time block codes for more transmit antennae. Motivated by information-theoretic results, unitary space-time codes are studied in [HM00]. A systematic way to construct full-diversity unitary group codes based on the representation theory of fixed-point-free groups is presented in [SHHS01]. Differential space-time modulation based on unitary group codes is studied in [Hug00, HS00]. A layered space-time architecture is suggested in [Fos96, GARH01]. A number of these non-spread block codes are used as benchmarks for the performance evaluation of our optimized codes found by computer search.
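As a concrete reference point for the orthogonal benchmark codes cited above, the following is a minimal sketch of Alamouti's 2 × 2 scheme [Ala98]; the symbol and channel values are illustrative, and the decoder shown is the standard linear combining that the orthogonal structure enables.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Map two symbols onto 2 time slots x 2 transmit antennae.
    Rows are time slots; columns are antennae."""
    return np.array([[s1,           s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r, h1, h2):
    """Linear combining for one receive antenna with flat-fading
    gains h1, h2 (assumed known at the receiver)."""
    r1, r2 = r
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# BPSK symbols over a noiseless flat-fading channel (illustrative values).
s1, s2 = 1.0, -1.0
h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j
C = alamouti_encode(s1, s2)
r = C @ np.array([h1, h2])  # received samples at slots 1 and 2

# Up to the common gain |h1|^2 + |h2|^2, both symbols are recovered,
# each with the diversity of both channel paths.
s1_hat, s2_hat = alamouti_decode(r, h1, h2)
g = abs(h1) ** 2 + abs(h2) ** 2
assert np.isclose(s1_hat / g, s1) and np.isclose(s2_hat / g, s2)
```

The orthogonality of the codeword columns is what makes this symbol-by-symbol linear decoding possible; the optimized codes studied in this work deliberately drop that constraint in exchange for coding gain.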

There has also been considerable work on generalizing space-time coding to Direct-Sequence Code-Division Multiple-Access (DS-CDMA) systems, but very little of it has offered the view taken here of performance criteria and code design as functions of the spreading code in both the uplink and downlink of spread systems; as a consequence, it could not take full advantage of spreading. The focus of [HMP01] is to provide full diversity gain by employing existing schemes. In the sequel, it is shown that full diversity is trivially satisfied; thus a focus on optimizing coding gain is more appropriate. Space-time spreading (STS) [HMP01] relies heavily on orthogonal spreading codes, whose orthogonality cannot be maintained in a practical asynchronous frequency-selective channel. In contrast, our optimized codes maintain their good performance over a wide range of spreading code correlation values and multipath profiles, and hence have greater utility in real channels. In [DSAM03], spreading is combined with so-called algebraic constellations, and the performance is evaluated via simulation; although multipath is considered in the modelling, the effects of spreading and multipath on performance analysis and code design are not analyzed. Chip-interleaving and block-spreading are used in [ZGM02] to suppress the effect of multipath fading in spread systems. Low-dimensional spread modulation was considered in [BV03]; the signal model was similar to the uplink in this work, except that very short spreading codes were used. The main conclusion of [BV03] is similar to Proposition 2 for the uplink in this work: single-user code design is sufficient to achieve full diversity in the uplink of a multiuser environment. We will show that the decoupling of code designs among users in the uplink is not a characteristic of low-dimensional spreading, but a consequence of the independent fading experienced by different users in the uplink. In [WP99], the optimal decoder and several suboptimal decoders, such as linear and blind decoders, are presented and analyzed.

In Chapter 2 of this work, we present a general model for space-time DS-CDMA signals in a multipath fading channel, and focus on performance analysis and code design as functions of spreading code correlation and multipath fading. Both uplink and downlink scenarios are considered. One consequence of multipath fading is that the optimal decoder is a block sequence decoder; however, with typical channel assumptions, we show that code design based on the worst-case pairwise error probability analysis remains a block optimization problem. The presence of multiple access and spreading makes code design for spread systems different from that for non-spread systems. The differences in both the design criteria and the resultant optimal codes for spread and non-spread systems are highlighted.

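For orientation, it may help to recall the classical non-spread design criteria of [TSC98], which Chapter 2 generalizes to the spread setting; the notation below is the standard one from that literature (the exact symbols used later in this work may differ). For codewords $C$ and $E$ transmitted over a quasi-static flat Rayleigh channel with $L_r$ receive antennae, the pairwise error probability is bounded as

```latex
P(C \to E) \;\le\; \Bigl(\prod_{i=1}^{r}\lambda_i\Bigr)^{-L_r}
\left(\frac{E_s}{4N_0}\right)^{-rL_r},
```

where $r$ and $\lambda_1,\dots,\lambda_r$ are the rank and nonzero eigenvalues of $(C-E)(C-E)^{H}$. The exponent $rL_r$ is the diversity gain and $(\prod_{i}\lambda_i)^{1/r}$ the coding gain; redefining these quantities as functions of the spreading code correlation is the subject of Chapter 2.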


Based on the design criteria, which are related to the spreading code correlation, this work further attempts to optimize codes based on coding gain. The codes found, dubbed Optimal Minimum Metric (OMM) codes, provide solid gains over known codes with more structure, such as orthogonal codes [TJC99, Ala98] and unitary group codes [SHHS01], in flat/multipath fading channels and single-user/multiuser systems. The structure of the optimized codes leads to the construction of Nonlinear Hierarchical Codes (NHCs) [GM03b], which is the focus of Chapter 4. The effect of the multipath fading channel on code performance is evaluated, and stronger statements than those in [TNSC99] are made regarding diversity gain and coding gain.

Chapter 2 only considers the optimal Maximum Likelihood (ML) decoder. When the number of active users and the size of the space-time block code (STBC) set are large, the ML decoder is impractical due to its decoding complexity. In Chapter 3, we consider several suboptimal linear decoders for STBCs. The suboptimal decoders considered are structurally similar to those of [NSTC98], which employ a two-stage decoding algorithm: in the first stage, a linear equalizer suppresses MAI and spatial inter-symbol interference; subsequently, a mapper maps the linear filter's soft outputs to a valid STBC (see Figure 3.2). The asymptotically achievable diversity level is evaluated for each linear decoder as the number of receive antennae grows. The Fisher Information Matrix for a random code vector is also considered, to reveal the code structure favorable for the Linear Minimum Mean Squared Error (LMMSE) estimator.
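The two-stage structure just described (linear front end, then a mapper) can be sketched compactly as below; the linear model, the dimensions, the toy codebook, and the least-squares equalizer choice are all illustrative stand-ins, not the specific decoders analyzed in Chapter 3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear model: y = H x + n, where x stacks the code
# symbols and H collects the spreading/channel effects.
n_sym, n_obs = 4, 8
H = rng.normal(size=(n_obs, n_sym)) + 1j * rng.normal(size=(n_obs, n_sym))

# Toy codebook of valid symbol vectors (all BPSK vectors of length 4).
codebook = [2 * np.array(c, dtype=complex) - 1
            for c in np.ndindex(*(2,) * n_sym)]

x_true = codebook[5]
noise = 0.01 * (rng.normal(size=n_obs) + 1j * rng.normal(size=n_obs))
y = H @ x_true + noise

# Stage 1: a linear equalizer (least squares here) produces soft
# symbol estimates that ignore the codebook constraint.
x_soft, *_ = np.linalg.lstsq(H, y, rcond=None)

# Stage 2: the mapper projects the soft output onto the nearest
# valid codeword in Euclidean distance.
x_hat = min(codebook, key=lambda c: np.linalg.norm(x_soft - c))

assert np.allclose(x_hat, x_true)
```

The decoders of Chapter 3 differ in the Stage 1 filter (BLUE, LS, LM, LMMSE) and in what statistics the mapper exploits, but all share this equalize-then-map structure.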

The performance criteria for a space-time system, diversity gain and coding gain, demand a significantly different code design strategy than that for Gaussian channels, because the code space defined by the coding gain is a non-linear, non-metric space. The space (transmit antenna) dimension also introduces more degrees of freedom into the code space than the time dimension alone. In Chapter 2, exhaustive computer searches are used to seek optimal code sets for certain STBCs of small size and low rate, but the search is NP-hard and quickly becomes infeasible as the block code size grows and the code rate increases. The computer-optimized codes exhibit a very clean layered structure, which motivates the construction of NHCs in Chapter 4. The construction of NHCs differs fundamentally from that of constrained designs (unitary, orthogonal) in that it manipulates the relationships between codewords rather than properties of the individual codewords. Two questions are addressed in Chapters 4 and 5, respectively. First, for an arbitrary-size STBC drawn from an nPSK constellation, how do we find optimal (or near-optimal) code sets of various rates whose distance spectra have good structure? Second, for such codes, how do we introduce memory among blocks to systematically exploit the structure of the distance spectrum? Here, the distance spectrum is the enumeration of the distance measure of interest over all possible codeword pairs.
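Under the rank/determinant criterion, the distance measure between codewords C and E is commonly taken as det((C − E)(C − E)^H); enumerating it over all pairs gives the distance spectrum. A minimal sketch for a toy 2 × 2 BPSK code set follows (the code set is arbitrary, chosen only to illustrate the computation, and deliberately includes rank-deficient pairs).

```python
import itertools
import numpy as np

def pairwise_metric(C, E):
    """Determinant of the codeword difference product: the coding-gain
    metric under the classical rank/determinant criterion."""
    D = C - E
    return np.linalg.det(D @ D.conj().T).real

# Toy 2 x 2 BPSK code set (illustrative, not an optimized code).
codes = [np.array([[1, 1], [1, -1]]),
         np.array([[-1, -1], [-1, 1]]),
         np.array([[1, -1], [1, 1]]),
         np.array([[-1, 1], [-1, -1]])]

# Distance spectrum: the multiset of metrics over all codeword pairs.
spectrum = sorted(round(pairwise_metric(C, E), 6)
                  for C, E in itertools.combinations(codes, 2))

# The minimum entry governs the worst-case pairwise error probability;
# zero entries flag rank-deficient (diversity-losing) codeword pairs.
min_metric = spectrum[0]
```

For this toy set the spectrum contains zeros, i.e. some difference matrices are rank deficient; the optimized codes of Chapters 2 and 4 are exactly those whose spectra keep every entry full rank while maximizing the minimum metric.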

The methodology of NHC construction via isometries for MIMO quasi-static fading channels is an extension of Slepian's group codes [Sle68], in that group structure (and hence distance uniformity) is conserved in most cases. A sufficient condition for NHCs to achieve geometric uniformity [GDF91] is established. The NHC construction starts from initial codeword matrices with no constraints on size or structure and, by means of isometries, systematically builds higher-rate codes from lower-rate codes while preserving the good structure of the lower-rate code. It is the relationship between codewords, rather than the structure of each individual codeword, that is exploited to optimize coding gain. The NHCs found match those found via exhaustive computer search, and exhibit not only optimized coding gain at each layer but also great symmetry and structure, which are exploited in Chapter 5 of this work.

STBCs alone are usually insufficient to provide the needed protection from the effects of a fading channel; therefore, memory is needed. Space-time trellis codes (STTCs) [TSC98, BBH02, YB02, Gri99] usually rely on a convolutional encoder to introduce memory among symbols, but there are no good design rules to optimize their coding gain other than exhaustive computer search. Multiple Trellis Coded Modulation (MTCM) [BDMS91, DS87, SF02, JS02] serves as an ideal bridge combining the advantages of both block codes and trellis codes, and offers simple and intuitive design rules to optimize coding gain systematically. The MTCM designs in [SF02, ATP98] use STBCs as an inner code of a convolutional encoder. However, such serial concatenation usually cannot take full advantage of the structure in the STBCs, and therefore introduces memory among blocks “blindly”. Chapter 5 of this work proposes systematic MTCM designs in which memory is introduced between block codes directly, instead of through an outer convolutional encoder. Because of the symmetric structure of the trellis and the constituent NHCs, such designs are dubbed RMTCM designs.


In our RMTCM design, a regular trellis constructed specifically for fading channels is generated by shift-register chains. NHCs, which admit a natural set partitioning as in [Ung82], form the constituent codes. The parameters of the regular trellis and of the set partitioning are matched to exploit the layered structure of the constituent NHCs optimally. Several factors that affect performance in both quasi-static and fast block fading channels are analyzed and exploited to improve the overall performance. Set expansion is used to improve high-rate MTCM designs. In each design, the performance/rate/complexity tradeoffs are gracefully balanced and optimized. The general design procedure serves two purposes: for a given set of NHCs, it provides MTCM designs with various rates; and, for a given rate, it provides MTCM designs with various complexities and performance.

<strong>The</strong> MTCM design in [JS02] is a special case of the RMTCM design in this work,<br />

where the constituent codes are restricted to orthogonal codes and signal expansion is<br />

limited to a special isometry; the set partitioning and trellis structure of [JS02] are not<br />

systematic. RMTCM design can achieve a half dB gain over the rate 3 MTCM design in<br />

[JS02] by doubling the number of states via adjusting trellis parameters. With even more<br />

states, as much as 2dB gain can be achieved. <strong>The</strong> observation, that parallel transitions<br />

should be avoided <strong>for</strong> better per<strong>for</strong>mance in quasi-static channels, is made in [JS02] <strong>for</strong><br />

orthogonal constituent codes only, we will show that it holds <strong>for</strong> more general cases.<br />

<strong>The</strong> design principles of RMTCM do not rely on quasi-static fading channels. Thus<br />

they can be employed to design RMTCM even <strong>for</strong> fast block fading channels, where fast<br />

block fading is achieved due to perfect block interleaving. Both analysis and simulation<br />

show that parallel transitions are the major detrimental factor which should be avoided<br />

<strong>for</strong> fast block fading channels. A similar conclusion is drawn <strong>for</strong> MTCM design over<br />

Single-Input-Single-Output fast-fading (due to perfect interleaving) channels [BDMS91].<br />

This thesis is organized as follows: Chapter 2 discusses space-time block codes in<br />

multipath CDMA systems. Chapter 3 discusses suboptimal linear decoders. Chapter 4<br />

discusses the construction and properties of NHC. Chapter 5 discusses the design procedure<br />

and per<strong>for</strong>mance analysis of RMTCM. Chapter 6 summarizes this work and points<br />

out future work.<br />



Chapter 2

Space-Time Block Codes in Multipath CDMA Systems

2.1 Introduction

There is a substantial body of work generalizing space-time coding to Direct-Sequence Code-Division Multiple-Access (DS-CDMA) systems, but very little of it has examined performance criteria and code design as a function of the spreading codes in both the uplink and downlink of spread systems; as a consequence, it could not take full advantage of spreading. The focus of [HMP01] is to provide full diversity gain by employing existing schemes. In the sequel, it is shown that full diversity is trivially satisfied; thus a focus on optimizing coding gain is more appropriate. Space-time spreading (STS) [HMP01] relies heavily on orthogonal spreading codes, whose orthogonality cannot be maintained in a practical asynchronous frequency-selective channel. In contrast, our optimized codes maintain their good performance over a wide range of spreading code correlation values and multipath profiles, and hence have greater utility in real channels. In [DSAM03], spreading is combined with so-called algebraic constellations, and the performance is evaluated via simulation; although multipath is considered in the modelling, the effects of spreading and multipath on performance analysis and code design are not isolated. Chip interleaving and block spreading are used in [ZGM02] to suppress the effect of multipath fading in spread systems. Low-dimensional spread modulation was considered in [BV03]; the signal model is similar to the uplink model in this work, except that very short spreading codes were used. The main conclusion of [BV03] is similar to Proposition 2 for the uplink in this work: single-user code design is sufficient to achieve full diversity in the uplink of a multiuser environment. In [WP99], the optimal decoder and several suboptimal decoders, such as linear and blind decoders, are presented and analyzed.



In this work, we present a general model for space-time DS-CDMA signals in a multipath fading channel, and focus on performance analysis and code design as a function of spreading code correlation and multipath fading. Both uplink and downlink scenarios are considered. Furthermore, the framework is general enough to encompass non-spread systems with slight modification (Section 2.4). Note that multipath fading is a consequence of the signal bandwidth exceeding the channel coherence bandwidth. Rich multipath fading is usually observed in spread systems, where the signal occupies a large bandwidth that exceeds the channel coherence bandwidth. Non-spread systems may also experience multipath fading [TNSC99], although less severely than spread systems. One consequence of multipath fading is that the optimal decoder is a block sequence decoder; however, with typical channel assumptions, we show that code design based on worst-case pairwise error probability analysis remains a block optimization problem. The presence of multiple access and spreading makes code design for spread systems different from that for non-spread systems. The differences in both the design criteria and the resultant optimal codes for spread and non-spread systems are highlighted. Due to the assumption of independent channels, in the uplink the multiuser coding problem decouples into multiple single-user coding problems; in contrast, the downlink space-time coding problem is truly a multiuser problem. In the uplink, the transmissions of different users are asynchronous, but this does not affect the performance analysis and code design, due to the decoupling among users.

Based on the design criteria, which are related to the spreading code correlation, this work further attempts to optimize codes for coding gain. The resulting codes, deemed Optimal Minimum Metric (OMM) codes, provide solid gains over known codes with more structure, such as orthogonal codes [TJC99, Ala98] and unitary group codes [SHHS01], in flat/multipath fading channels and single-user/multiuser systems. The search for optimized codes is simplified through "isometries", transformations under which the design metrics are invariant. For a more detailed discussion of isometries, please refer to Section 4.2. We also analyze some methods to construct space-time codes and show their performance with respect to spreading code correlation. The structure of the optimized codes leads to the construction of nonlinear hierarchical codes [GM03b].

The effect of the multipath fading channel on code performance is evaluated. As expected, with perfect CSI, multipath provides a higher diversity level. Spread systems can


easily achieve maximum diversity (the number of transmit antennae times the number of multipaths for one receive antenna), while non-spread systems must satisfy more stringent conditions to achieve full diversity. A similar but weaker result was presented in [TNSC99], which showed that a space-time code achieves at least the same diversity in a multipath fading channel as it does in a flat fading channel; herein, the exact diversity level is easily determined. Based on this observation on diversity level, [TNSC99] concludes that a space-time code designed for a slow flat fading channel will continue to perform well in a multipath fading channel; we will further show that this conclusion also holds in terms of coding gain when the multipath delay spread is small relative to the symbol interval duration.

This chapter is organized as follows. Section 2.2 derives the performance criteria and code design principles in the uplink. Section 2.3 discusses the downlink. Section 2.4 briefly describes the non-spread system. Section 2.5 tabulates some of the optimized codes found via computer search, compares their performance with some known codes, and analyzes properties of these codes. Section 2.6 considers some application issues for the quasi-static channel: invariant operations, the sensitivity of code performance to spreading code correlation, and some suboptimal methods for constructing space-time codes. Finally, conclusions and some suggestions for future work are discussed in Section 2.7.

2.2 The Uplink

In this section, we provide a general space-time model for the uplink of DS-CDMA systems. The generality of the formulation incurs some added complexity, but offers the advantage of enabling analysis of coding gain as a function of the spreading code correlation. If the channel coefficients of different receive antennae are uncorrelated and the Maximum-Likelihood decoder is employed, multiple receive antennae will not affect code design based on worst-case PEP analysis (though they do affect performance). Recent work shows that as the number of receive antennae increases, the multi-input multi-output fading channel model approaches a set of multiple parallel AWGN channels; as such, different design metrics should be considered in the case of a large number of receive antennae [GF01, CYV99]. Herein, we consider a small to moderate number of receive antennae,



hence the worst-case PEP is still a valid design criterion. Furthermore, since we consider the maximum likelihood decoder, we focus on a single receive antenna. Note that for other decoding rules, like Minimum Mean Squared Error (MMSE), the number of receive antennae does matter in code design [GM01].

Due to the large dimensionality of our problem, a variable $X$ might carry up to four indices. To assist the reader in deciphering the notation $X_i^{l(k)}(t)$, we generally conform to the following rules: the subscript $i$ is the transmit antenna index, the first superscript $l$ is the multipath index, the second superscript $(k)$ is the user index, and finally $t$ is the time index. Sometimes a matrix or vector is independent of the time index $t$ or the transmit antenna index $i$, but we still keep that index to help track the dimensions of the corresponding matrix or vector; in such cases, we will point this out explicitly. Two kinds of codeword matrices are used constantly throughout the chapter: the calligraphic $\mathcal{D}$, which represents the block code in its original block form, and the regular $D$, which represents a reshaped version of $\mathcal{D}$. A prefix $\Delta$ in front of $\mathcal{D}$ or $D$ denotes the difference between two codewords.

We summarize the constants used in this chapter for quick reference:

$K$: the number of active users
$L_t$: the number of transmit antennae
$L_r$: the number of receive antennae
$L_c$: the maximum number of resolvable multipaths for a single transmit antenna
$N_c$: the time dimension of the block code in units of the symbol period
$L_u$: the number of samples per symbol period (the sampling rate) for non-spread systems, or the spreading gain for chip-sampled DS-CDMA systems
$\tau_{\max}$: the largest path delay
$L_s$: the number of samples between two consecutive transmissions

Note that $L_s$ is used for non-spread systems only; for spread systems, $L_s = L_u$.

In this chapter, a unique integer, the code index, is used to represent a codeword. If $n$PSK is employed, $u$ is the $n$th root of unity, $u = e^{j2\pi/n}$, and $d_i(t) = u^{c_i(t)}$; a block code matrix of size $N_c \times L_t$ can then be represented by the index obtained from a base-$n$ expansion:
$$c_{L_t}(N_c) + c_{L_t-1}(N_c)\,n + \cdots + c_2(1)\,n^{N_cL_t-2} + c_1(1)\,n^{N_cL_t-1}. \tag{2.1}$$



For example, the BPSK $2 \times 2$ block code
$$\begin{bmatrix} 1 & -1 \\ -1 & -1 \end{bmatrix}$$
can be represented by the code index
$$c_2(2) + c_1(2)\,n + c_2(1)\,n^2 + c_1(1)\,n^3 = 1 + 1 \times 2 + 1 \times 4 + 0 \times 8 = 7.$$
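The mapping above is easy to sketch in code; this is a minimal illustration (the digit ordering follows the worked example, with $c_1(1)$ as the most significant base-$n$ digit):

```python
import numpy as np

def code_index(C, n):
    """Map an Nc x Lt matrix of PSK exponents c_i(t) to its code index.

    Digit order follows the worked example above: c_1(1) is the most
    significant base-n digit and c_Lt(Nc) the least significant.
    """
    Nc, Lt = C.shape
    index = 0
    for t in range(Nc):          # symbol times t = 1, ..., Nc
        for i in range(Lt):      # antennae i = 1, ..., Lt
            index = index * n + int(C[t, i])
    return index

# BPSK (n = 2): d = (-1)^c, so the codeword [[1, -1], [-1, -1]]
# has exponent matrix c = [[0, 1], [1, 1]].
C = np.array([[0, 1],
              [1, 1]])
print(code_index(C, 2))  # -> 7
```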

2.2.1 The Uplink System Model

In the uplink between K active users and one base station, we will show how optimizing the performance of a sequence block decoder leads to the design of each block code individually, and also how this problem simplifies into multiple single-user coding problems. We consider an asynchronous multipath channel model. The transmitter is equipped with $L_t$ transmit antennae and the receiver is equipped with one receive antenna. Figure 2.1 depicts the encoder block diagram for a single block code. For each block, the transmitter maps $k_c$ information bits of user $k$, $[I^{(k)}(1), I^{(k)}(2), \ldots, I^{(k)}(k_c)]$, to one of $K_c = 2^{k_c}$ space-time block codewords,
$$\mathcal{D}^{(k)} = [d_i^{(k)}(t)] \in \mathbb{C}^{N_c \times L_t}, \quad 1 \le i \le L_t,\ 1 \le t \le N_c, \tag{2.2}$$
where $N_c$ is the block length in terms of symbol duration. Each $d_i^{(k)}(t)$ is spread by a corresponding spreading code $s_i^{(k)}(t)$ and transmitted via the corresponding transmit antenna $TX_i$. Note that each row of the codeword matrix is transmitted simultaneously. A total of $M$ such blocks are transmitted continuously; no memory exists among these blocks. The code rate is
$$R_c = \frac{k_c}{N_c}. \tag{2.3}$$

In our formulation, each $d_i^{(k)}(t)$ can be taken from any point in the complex plane, but in the following discussion of code design, $d_i^{(k)}(t)$ is constrained to PSK constellations for practical purposes. The spreading code $s_i^{(k)}(t)$ is a column vector of length $L_u$ with support over one symbol period. The typical entries of $s_i^{(k)}(t)$ are $\{\pm 1/\sqrt{L_u}\}$, but this is not required in the following derivation and analysis. If one uses different spreading codes at different symbol times for the same antenna, then $s_i^{(k)}(t_1) \neq s_i^{(k)}(t_2)$; this model applies



[Figure: user $k$'s information bits → ST block coding → ST spreading across the $L_t$ transmit antennae.]

Figure 2.1: Transmitter encoder block diagram for user k.

to long spreading codes which span several symbol times. For a non-spread system, which will be discussed briefly in Section 2.4, this vector is composed of higher-rate samples of the modulation waveform and is the same for all transmit antennae; for a spread system, $s_i^{(k)}(t)$ is a well-designed spreading code, with a possibly different code for each antenna. We note that near orthogonality between spreading codes is not a requirement in our scheme, thus assigning unique spreading codes to the different antennae of each user is not a challenge.

The receiver block diagram is shown in Figure 2.2. In multipath fading channels, the optimal decoder should consider the entire received sequence of M blocks due to inter-block interference from the previous and subsequent blocks. If the M blocks are part of a continuous transmission of blocks, we assume that the sequence length M is long enough that the edge effects can be safely neglected.

[Figure: received signal → matched filter bank → sequence decoder.]

Figure 2.2: Receiver decoder block diagram.

[Figure: chip-spaced tap delay line model of the channel between one transmit/receive antenna pair.]

Figure 2.3: Channel model.

A chip-spaced tap delay line model [Pro95] is used to model the channel between each transmit/receive antenna pair, as depicted in Figure 2.3. The number of resolvable multipaths corresponding to each transmit and receive antenna pair is denoted by $L_c$. For simplicity, we will assume that the channels between different transmit/receive antenna pairs have the same number of multipaths; our model handles different numbers of multipaths equally well, but that adds unnecessary complexity to the formulae. Note that $L_c$ may be less than or equal to the largest path delay $\tau_{\max}$. For user $k$, the delay corresponding to multipath $l$ associated with transmit antenna $i$ is $\tau_i^{l(k)}$. The channel tap amplitude and phase associated with $\tau_i^{l(k)}$ at symbol time $t$ are represented by a complex number $h_i^{l(k)}(t)$, which is called the channel coefficient and is modelled as a circularly symmetric complex Gaussian random process. We consider quasi-static fading only; therefore $h_i^{l(k)}(t)$ is constant within a block, but independent from block to block. All unique channel coefficients are assumed to be i.i.d. The chip-spaced multipath model is an idealized model for real channels and is invoked here to facilitate an analytical understanding. The delay $\tau_i^{l(k)}$ is assumed constant within the $M$ blocks because it is determined by the propagation delay of the corresponding multipath, which is fairly constant in a moderate-mobility environment. Both transmit and receive antennae are assumed to be sufficiently separated such that the channel coefficients for different transmit/receive antennae are statistically independent. We use the delay profile matrix

$$\Gamma^{(k)} = \begin{bmatrix} \tau_1^{1(k)} & \cdots & \tau_1^{L_c(k)} \\ \vdots & \ddots & \vdots \\ \tau_{L_t}^{1(k)} & \cdots & \tau_{L_t}^{L_c(k)} \end{bmatrix} \tag{2.4}$$
to describe the spread of the path delays for user $k$, and
$$\Gamma = [\Gamma^{(1)}, \ldots, \Gamma^{(K)}] \tag{2.5}$$
to describe all users' delay profiles. Note that
$$\tau_{\max} = \max_{i,\,l,\,k} \tau_i^{l(k)}. \tag{2.6}$$
Denote by $g_i^{l(k)}(t) \in \mathbb{R}^{(L_u+\tau_{\max}) \times 1}$ the delayed duplicate of $s_i^{(k)}(t)$ corresponding to multipath $l$; it has $\tau_i^{l(k)}$ zeros preceding $s_i^{(k)}(t)$ and possibly some trailing zeros to make its



length $(L_u + \tau_{\max})$. For example, when using one sample per chip, the spreading code $s_i^{(k)}(t)$ and its delayed version $g_i^{l(k)}(t)$ with $\tau_i^{l(k)} = 2$ may look like
$$s_i^{(k)}(t) = \frac{1}{\sqrt{L_u}}\,[\,1, -1, -1, 1, \ldots, 1, -1\,]^T$$
$$g_i^{l(k)}(t) = \frac{1}{\sqrt{L_u}}\,[\,0, 0, 1, -1, -1, 1, \ldots, 1, -1, 0, \ldots, 0\,]^T \tag{2.7}$$
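The zero-padding in (2.7) is straightforward to express in code; a minimal sketch, where the specific code values and dimensions are illustrative assumptions:

```python
import numpy as np

def delayed_spreading_code(s, tau, tau_max):
    """Return g: the spreading code s delayed by tau chips and
    zero-padded to total length len(s) + tau_max, as in Eq. (2.7)."""
    g = np.zeros(len(s) + tau_max)
    g[tau:tau + len(s)] = s
    return g

L_u, tau, tau_max = 8, 2, 3
s = np.array([1, -1, -1, 1, 1, -1, 1, -1]) / np.sqrt(L_u)
g = delayed_spreading_code(s, tau, tau_max)
print(len(g))  # L_u + tau_max = 11
```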

The received signal samples due to $d_i^{(k)}(t)$ are
$$r(t) = \sqrt{\sigma_t} \sum_{k=1}^{K} \sum_{i=1}^{L_t} \sum_{l=1}^{L_c} g_i^{l(k)}(t)\, d_i^{(k)}(t)\, h_i^{l(k)}(t) + n(t) \in \mathbb{C}^{(L_u+\tau_{\max}) \times 1} \tag{2.8}$$
where $n(t)$ is complex white Gaussian noise $\sim \mathcal{CN}(0, I)$. Note that the length of $r(t)$ extends beyond a symbol period $L_u$ due to multipath fading. We use $\mathcal{CN}(m, R)$ to denote a complex Gaussian random vector with mean vector $m$ and covariance matrix $R$.
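As a sanity check, the superposition in (2.8) can be simulated directly; the dimensions, delays, and constellation below are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: one user, 2 Tx antennae, 2 multipaths,
# spreading gain 8, maximum delay 3 chips.
K, L_t, L_c, L_u, tau_max = 1, 2, 2, 8, 3
sigma_t = 10.0
delays = np.array([[0, 2], [1, 3]])            # tau_i^l, row i = Tx antenna

# Spreading codes (one per Tx antenna), entries +-1/sqrt(L_u)
S = rng.choice([-1.0, 1.0], size=(L_t, L_u)) / np.sqrt(L_u)

# BPSK data symbols d_i(t) for one symbol time, and channel taps h_i^l
d = rng.choice([-1.0, 1.0], size=L_t)
h = (rng.standard_normal((L_t, L_c)) + 1j * rng.standard_normal((L_t, L_c))) / np.sqrt(2)

# Eq. (2.8): superpose every delayed, data- and channel-weighted code
r = np.zeros(L_u + tau_max, dtype=complex)
for i in range(L_t):
    for l in range(L_c):
        g = np.zeros(L_u + tau_max)
        g[delays[i, l]:delays[i, l] + L_u] = S[i]   # delayed duplicate g_i^l
        r += np.sqrt(sigma_t) * g * d[i] * h[i, l]
noise = (rng.standard_normal(r.shape) + 1j * rng.standard_normal(r.shape)) / np.sqrt(2)
r += noise
print(r.shape)   # (L_u + tau_max,) = (11,)
```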

We will group $r(t)$ in the order of multipath, user, and block to form a final observation vector $r$ for the whole sequence of received blocks. We define
$$G^{(k)}(t) = [g_1^{1(k)}(t), \ldots, g_{L_t}^{1(k)}(t), \ldots\ldots, g_1^{L_c(k)}(t), \ldots, g_{L_t}^{L_c(k)}(t)] \in \mathbb{R}^{(L_u+\tau_{\max}) \times L_cL_t} \tag{2.9}$$
$$D^{(k)}(t) = \mathrm{diag}[\overbrace{d_1^{(k)}(t), \ldots, d_{L_t}^{(k)}(t)}^{\text{group } 1}, \ldots\ldots, \overbrace{d_1^{(k)}(t), \ldots, d_{L_t}^{(k)}(t)}^{\text{group } L_c}] \in \mathbb{C}^{L_cL_t \times L_cL_t} \tag{2.10}$$
$$h^{(k)}(t) = [h_1^{1(k)}(t), \ldots, h_{L_t}^{1(k)}(t), \ldots\ldots, h_1^{L_c(k)}(t), \ldots, h_{L_t}^{L_c(k)}(t)]^T \in \mathbb{C}^{L_cL_t \times 1} \tag{2.11}$$
For symbol time $t$, the matrix $G^{(k)}(t)$ collects the spreading codes corresponding to all multipaths for user $k$, $D^{(k)}(t)$ contains $L_c$ duplicates of user $k$'s data, and $h^{(k)}(t)$ collects the channel coefficients of all the multipaths for user $k$.

Equation (2.8) can be written in matrix form as
$$r(t) = \sqrt{\sigma_t}\, G(t) D(t) h(t) + n(t) \in \mathbb{C}^{(L_u+\tau_{\max}) \times 1} \tag{2.12}$$
where
$$G(t) = [G^{(1)}(t), \ldots\ldots, G^{(K)}(t)] \in \mathbb{R}^{(L_u+\tau_{\max}) \times KL_cL_t} \tag{2.13}$$
$$D(t) = \mathrm{diag}[D^{(1)}(t), \ldots\ldots, D^{(K)}(t)] \in \mathbb{C}^{KL_cL_t \times KL_cL_t} \tag{2.14}$$
$$h(t) = [h^{(1)}(t)^T, \ldots\ldots, h^{(K)}(t)^T]^T \in \mathbb{C}^{KL_cL_t \times 1} \tag{2.15}$$
Note that inter-symbol interference (ISI) occurs on the last $\tau_{\max}$ samples of $r(t-1)$ and the first $\tau_{\max}$ samples of $r(t)$. The received vector due to block $m$ is obtained by "concatenating" the $r(t)$, $t = 1, \ldots, MN_c$, into a larger vector with $\tau_{\max}$ overlapping samples between adjacent $r(t-1)$ and $r(t)$,

$$r_m = \sqrt{\sigma_t} \begin{bmatrix} G((m-1)N_c+1) & \varnothing & \varnothing \\ \varnothing & \ddots & \varnothing \\ \varnothing & \varnothing & G(mN_c) \end{bmatrix} \begin{bmatrix} D((m-1)N_c+1) \\ \vdots \\ D(mN_c) \end{bmatrix} \Big[\, h((m-1)N_c+1) \,\Big] \tag{2.16}$$
$$+ \begin{bmatrix} n((m-1)N_c+1) \\ \vdots \\ n(mN_c) \end{bmatrix} \triangleq \sqrt{\sigma_t}\, G_m D_m h_m + n_m \in \mathbb{C}^{(N_cL_u+\tau_{\max}) \times 1}. \tag{2.17}$$
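The "concatenation with $\tau_{\max}$ overlapping samples" described above is essentially overlap-add; a minimal sketch with made-up dimensions:

```python
import numpy as np

def concatenate_block(r_list, L_u, tau_max):
    """Overlap-add the per-symbol vectors r(t) (each of length
    L_u + tau_max) into a block vector of length N_c*L_u + tau_max:
    adjacent r(t-1) and r(t) share tau_max samples, which is where
    the inter-symbol interference lives."""
    N_c = len(r_list)
    r_m = np.zeros(N_c * L_u + tau_max, dtype=complex)
    for t, r_t in enumerate(r_list):
        r_m[t * L_u : t * L_u + L_u + tau_max] += r_t
    return r_m

L_u, tau_max, N_c = 8, 3, 4
r_list = [np.ones(L_u + tau_max, dtype=complex) for _ in range(N_c)]
r_m = concatenate_block(r_list, L_u, tau_max)
print(len(r_m))  # N_c * L_u + tau_max = 35
```

With all-ones inputs, the samples in each overlap region sum to 2, which makes the shared-sample structure easy to see.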

Note that
$$G_m = \begin{bmatrix} G((m-1)N_c+1) & \varnothing & \varnothing \\ \varnothing & \ddots & \varnothing \\ \varnothing & \varnothing & G(mN_c) \end{bmatrix} \tag{2.18}$$
is not block diagonal: there are $\tau_{\max}$ overlapping rows between the adjacent blocks of $G(t)$. The matrix $G_m$ is described in this fashion for notational convenience. If a matrix is block diagonal, the notation $0$ is used (versus $\varnothing$). The following block matrix shows how the last $\tau_{\max}$ rows of $G(1)$ overlap with the first $\tau_{\max}$ rows of $G(2)$; the overlapping sections are designated by a prime.

$$\begin{bmatrix} \big[\, G(1) \,\big]_{L_u \times KL_tL_c} & \big[\, 0 \,\big] \\[4pt] \big[\, G'(1) \,\big]_{\tau_{\max} \times KL_tL_c} & \big[\, G'(2) \,\big]_{\tau_{\max} \times KL_tL_c} \\[4pt] \big[\, 0 \,\big] & \big[\, G(2) \,\big]_{L_u \times KL_tL_c} \end{bmatrix}$$



Due to quasi-static fading, $h((m-1)N_c+1) = \cdots = h(mN_c)$; hence $h_m$ consists of only one $h((m-1)N_c+1)$. The noise vector $n_m$ is complex Gaussian, $n_m \sim \mathcal{CN}(0, I)$.

Finally, the observation vector of the whole sequence is
$$r = \sqrt{\sigma_t} \begin{bmatrix} G_1 & \varnothing & \varnothing \\ \varnothing & \ddots & \varnothing \\ \varnothing & \varnothing & G_M \end{bmatrix} \begin{bmatrix} D_1 & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & D_M \end{bmatrix} \begin{bmatrix} h_1 \\ \vdots \\ h_M \end{bmatrix} + \begin{bmatrix} n_1 \\ \vdots \\ n_M \end{bmatrix} \tag{2.19}$$
$$\triangleq \sqrt{\sigma_t}\, \bar{G} \bar{D} h + n \in \mathbb{C}^{(MN_cL_u+\tau_{\max}) \times 1}. \tag{2.20}$$
Note that $\bar{D}$ represents a sequence of $M$ block codes.

A matched filter bank, matched to each multipath of each user at every symbol time $t$, is constructed. The matched filter outputs, which form a sufficient statistic (ignoring any edge effects from the first and last blocks) for the decoder, can be written as
$$y = \bar{G}^T r = \sqrt{\sigma_t}\, \bar{R} \bar{D} h + m \in \mathbb{C}^{MN_cL_cKL_t \times 1}, \tag{2.21}$$
where
$$\bar{R} \triangleq \bar{G}^T \bar{G} = [R_{ij}] \in \mathbb{R}^{MN_cL_cKL_t \times MN_cL_cKL_t} \tag{2.22}$$
is the spreading code correlation matrix. Note that the blocks
$$R_{ij} \triangleq G_i^H G_j, \quad 1 \le i, j \le M \tag{2.23}$$
are sparse or zero matrices when $i \neq j$, and $m \triangleq \bar{G}^T n$ is distributed as $\mathcal{CN}(0, \bar{R})$.
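A small numerical sketch of the matched filter bank: build the columns $g_i^l$ for one block, then form the correlation matrix $R = G^T G$ and the matched filter output $y = G^T r$. All dimensions and delays here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy single-block, single-user example. Each column of G is a
# delayed spreading code g_i^l.
L_t, L_c, L_u, tau_max = 2, 2, 8, 3
delays = np.array([[0, 2], [1, 3]])      # tau_i^l, row i = Tx antenna

cols = []
for i in range(L_t):
    s = rng.choice([-1.0, 1.0], size=L_u) / np.sqrt(L_u)
    for l in range(L_c):
        g = np.zeros(L_u + tau_max)
        g[delays[i, l]:delays[i, l] + L_u] = s
        cols.append(g)
G = np.column_stack(cols)                # (L_u + tau_max) x (L_c * L_t)

# Correlation matrix R = G^T G (cf. Eq. 2.22) and matched filter
# output y = G^T r (cf. Eq. 2.21) for an arbitrary received vector r
R = G.T @ G
r = rng.standard_normal(L_u + tau_max)
y = G.T @ r

print(np.allclose(R, R.T))               # True: R is symmetric
print(np.allclose(np.diag(R), 1.0))      # True: unit-energy codes
```

The unit diagonal of $R$ reflects the unit energy of the $\pm 1/\sqrt{L_u}$ codes; the off-diagonal entries are exactly the on-time and off-time code correlations the design criteria depend on.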

2.2.2 The Uplink Chernoff Bound and Code Design Criteria

We derive the Chernoff bound on the pairwise error probability (PEP) when the Maximum Likelihood (ML) decoder is employed at the receiver, as in [GFBK99, TSC98], under the assumption of perfect channel state information at the receiver. The matched filter output vector is $y \sim \mathcal{CN}(\sqrt{\sigma_t}\bar{R}\bar{D}h, \bar{R})$. The averaged probability of decoding codeword sequence $\bar{D}_\beta$ while codeword sequence $\bar{D}_\alpha$ is transmitted is upper bounded by
$$E_h[P(\bar{D}_\alpha \to \bar{D}_\beta \mid h)] \le \left| I + \frac{\sigma_t}{4} \Sigma_h (\bar{D}_\alpha - \bar{D}_\beta)^H \bar{R} (\bar{D}_\alpha - \bar{D}_\beta) \right|^{-1}, \tag{2.24}$$



where $\bar{D}_\alpha$ and $\bar{D}_\beta$ are two codeword sequences as defined in Equation (2.20) and $\Sigma_h = E[hh^H] \in \mathbb{R}^{MKL_tL_c \times MKL_tL_c}$ is the correlation matrix of the channel coefficients. Due to independent fading, $\Sigma_h$ is a diagonal matrix, where each diagonal entry corresponds to the power of a particular multipath for a particular user at a certain block $m$.

We denote $\Delta\bar{D} = \bar{D}_\alpha - \bar{D}_\beta$. The correlated codeword sequence difference matrix, $\bar{\Phi}$, is defined as
$$\bar{\Phi} = \Delta\bar{D}^H \bar{R} \Delta\bar{D} \in \mathbb{C}^{MKL_cL_t \times MKL_cL_t} \tag{2.25}$$
$$\Phi_{ij} = \Delta D_i^H R_{ij} \Delta D_j \in \mathbb{C}^{KL_cL_t \times KL_cL_t}, \quad 1 \le i, j \le M \tag{2.26}$$
Notice that the $\Phi_{mm}$, $m = 1, \ldots, M$, are the dominant blocks in $\bar{\Phi}$.

For high SNR, the Chernoff bound can be approximated [GFBK99, TSC98] by
$$E_h[P(\bar{D}_\alpha \to \bar{D}_\beta \mid h)] \le \left(\frac{\sigma_t}{4}\right)^{-r} \left(\prod_{i=1}^{r} \lambda_i\right)^{-1}, \tag{2.27}$$
where the $\lambda_i$ are the nonzero eigenvalues of $\Sigma_h \bar{\Phi}$ and $r$ is the number of the $\lambda_i$. Taken over all codeword pairs, the minimum possible number of non-zero eigenvalues of $\bar{\Phi}$ determines the diversity gain,
$$\Delta_H = \min_{\Delta\bar{D}} \operatorname{rank}(\bar{\Phi}). \tag{2.28}$$
For those pairs of codewords which achieve $\Delta_H$, the smallest product of the non-zero eigenvalues of $\bar{\Phi}$ determines the coding gain,
$$\Delta_p = \min_{\Delta\bar{D}} \prod_{i=1}^{\Delta_H} \lambda_i. \tag{2.29}$$
In the case of full diversity,
$$\Delta_p = \min_{\Delta\bar{D}} \det(\bar{\Phi}). \tag{2.30}$$
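Given the pairwise difference matrices, the two design metrics above reduce to an eigenvalue computation; a minimal sketch, with made-up PSD matrices standing in for the codeword-pair products $\Delta\bar{D}^H\bar{R}\Delta\bar{D}$:

```python
import numpy as np

def diversity_and_coding_gain(Phi_list, tol=1e-10):
    """Given Phi (Hermitian, PSD) for every codeword pair, return the
    diversity gain (cf. Eq. 2.28) and coding gain (cf. Eq. 2.29)."""
    ranks, products = [], []
    for Phi in Phi_list:
        lam = np.linalg.eigvalsh(Phi)
        nz = lam[lam > tol]                   # non-zero eigenvalues
        ranks.append(len(nz))
        products.append(float(np.prod(nz)))
    d_H = min(ranks)                          # minimum rank over all pairs
    # coding gain: smallest eigenvalue product among rank-d_H pairs
    d_p = min(p for r, p in zip(ranks, products) if r == d_H)
    return d_H, d_p

# Tiny made-up pair set: two 2x2 PSD matrices
A = np.array([[2.0, 0.0], [0.0, 1.0]])   # eigenvalues 1, 2
B = np.array([[3.0, 1.0], [1.0, 3.0]])   # eigenvalues 2, 4
print(diversity_and_coding_gain([A, B])) # smallest rank, eigenvalue product
```

Both pairs here have full rank 2, so the coding gain is the smaller determinant (that of $A$), matching the full-diversity case of (2.30).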

While diversity gain and coding gain are the classical worst-case space-time code design criteria, the distance spectrum of the code set is a more accurate performance indicator for a small number of receive antennae [AF01]. The union bound is used in [GVM02] to optimize the whole distance spectrum; the resultant codes can achieve better performance than the corresponding OMM codes.



If we assume that the channel coefficients are independent for different transmit antennae, a nontrivial $\Sigma_h$ will be a diagonal matrix with full rank, and the code design will be independent of the channel statistics. To simplify the analysis, in the sequel we will assume this and focus on $\bar{\Phi}$ only.

2.2.3 The Correlated Codeword Sequence Difference Matrix $\bar{\Phi}$

We assume each transmit antenna uses the same fixed spreading code during the $M$ blocks,
$$s_i(1) = s_i(2) = \cdots = s_i(MN_c), \tag{2.31}$$
so that $G_1 = \cdots = G_M$ and $R_{mm}$ exhibits a banded block Toeplitz structure. For quasi-static fading channels, $\Phi_{mm}$ can be put in a more compact form.

If $nL_u \le \tau_{\max} < (n+1)L_u$, then $R_{mm} = G_m^T G_m \in \mathbb{R}^{N_cKL_cL_t \times N_cKL_cL_t}$ has the form
$$R_{mm} = \begin{bmatrix} R_{mm}(0) & R_{mm}(1) & \cdots & R_{mm}(n+1) & 0 \\ R_{mm}(1)^T & R_{mm}(0) & \ddots & \ddots & R_{mm}(n+1) \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ R_{mm}(n+1)^T & \ddots & \ddots & \ddots & R_{mm}(1) \\ 0 & R_{mm}(n+1)^T & \cdots & R_{mm}(1)^T & R_{mm}(0) \end{bmatrix}, \tag{2.32}$$
where $R_{mm}(0)$ captures the "on-time" correlation of the spreading codes,
$$R_{mm}(0) = G(t)^T G(t) \in \mathbb{R}^{KL_cL_t \times KL_cL_t}, \tag{2.33}$$
and $R_{mm}(t)$, $1 \le t \le n$, captures the "off-time" correlation,
$$R_{mm}(t) = \begin{bmatrix} G(t) \\ 0 \end{bmatrix}^T \begin{bmatrix} 0 \\ G(t) \end{bmatrix} \in \mathbb{R}^{L_cKL_t \times L_cKL_t} \tag{2.34}$$
where $G(t) \in \mathbb{R}^{(L_u+\tau_{\max}) \times KL_cL_t}$ is defined in Equation (2.13), and $0 \in \mathbb{R}^{nL_u \times KL_cL_t}$ is an all-zero matrix whose size depends on $n$.



For the quasi-static fading channel, $\Phi_{mm}$ can be simplified as¹
$$\Phi_{mm} = \Delta D_{L_c}^H \Delta D_{L_c} \odot R_{mm}(0) + \sum_{t=1}^{n+1} \Delta D_{L_c}^H Q^t \Delta D_{L_c} \odot \left(R_{mm}(t) + R_{mm}(t)^T\right) \tag{2.35}$$
where
$$\Delta D_{L_c} = [\overbrace{\Delta D, \ldots, \Delta D}^{L_c \text{ blocks}}] \tag{2.36}$$
and
$$Q = \begin{bmatrix} 0 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & 1 \\ 0 & \cdots & \cdots & 0 \end{bmatrix}. \tag{2.37}$$
Regarding the rank of $\bar{\Phi}$ and $\Phi_{mm}$, we have the following proposition, which is independent of the space-time code selected.

Proposition 1 If $\Delta D_m \neq 0$ and at least one $\Delta D_n \neq 0$, $n \neq m$, then
$$\min_{\Delta\bar{D}} \operatorname{rank} \bar{\Phi} > \operatorname{rank} \Phi_{mm}, \quad \forall m = 1, \ldots, M, \tag{2.38}$$
provided one of the following conditions is satisfied:

1. The difference between no pair of distinct path delays is an integer multiple of the spreading code length $L_u$:
$$\left|\tau_{i_p}^{l_p(k_p)} - \tau_{i_q}^{l_q(k_q)}\right| \neq nL_u, \quad n = 1, 2, \ldots, \quad 1 \le i_p \neq i_q \le L_t \ \text{or}\ 1 \le k_p \neq k_q \le K. \tag{2.39}$$

2. If the spreading codes are not time-varying, i.e. $s_i^{(k)}(1) = \cdots = s_i^{(k)}(MN_c)$, $\forall\, i, k$, then
$$\text{the set of spreading codes } \{s_i^{(k)},\ 1 \le i \le L_t,\ 1 \le k \le K\} \text{ is linearly independent.} \tag{2.40}$$

¹ The operator $\odot$ is the Schur product, and $\otimes$ is the Kronecker product.



3. Without loss of generality, assume $\tau_{i_p}^{l_p(k_p)} = \tau_{\min}$ and $\tau_{i_q}^{l_q(k_q)} = \tau_{\max}$; at least one of
$$\Delta d_{i_p}^{(k_p)}(t_m + (m-1)N_c) \neq 0, \quad 1 \le t_m \le N_c, \ \text{and} \tag{2.41}$$
$$\Delta d_{i_q}^{(k_q)}(t_m + (m-1)N_c) \neq 0, \quad 1 \le t_m \le N_c. \tag{2.42}$$

Proof:

Recall from Equations (2.25) and (2.22) that $\bar{\Phi} = \Delta\bar{D}^H \bar{R} \Delta\bar{D}$ and $\bar{R} = \bar{G}^T \bar{G}$. Therefore

$$\bar{\Phi} = (\bar{G}\Delta\bar{D})^H \bar{G}\Delta\bar{D}, \quad (2.43)$$

$$\operatorname{rank} \bar{\Phi} = \operatorname{rank} \bar{G}\Delta\bar{D}, \quad (2.44)$$

where

$$\bar{G}\Delta\bar{D} = \begin{bmatrix} G_1 \Delta D_1 & \emptyset & \emptyset \\ \emptyset & \ddots & \emptyset \\ \emptyset & \emptyset & G_M \Delta D_M \end{bmatrix}.$$

Similarly,

$$\operatorname{rank} \Phi_{mm} = \operatorname{rank} G_m \Delta D_m. \quad (2.45)$$

Let us consider two blocks from $\bar{G}\Delta\bar{D}$:

$$G_m \Delta D_m = \left[\, g_{i_p}^{l_p(k_p)}(t_m + (m-1)N_c)\, \Delta d_{i_p}^{(k_p)}(t_m + (m-1)N_c) \,\right], \quad 1 \le t_m \le N_c, \quad (2.46)$$

$$G_n \Delta D_n = \left[\, g_{i_q}^{l_q(k_q)}(t_n + (n-1)N_c)\, \Delta d_{i_q}^{(k_q)}(t_n + (n-1)N_c) \,\right], \quad 1 \le t_n \le N_c. \quad (2.47)$$

We want to prove that $G_m \Delta D_m$ and $G_n \Delta D_n$ span different column spaces of $\bar{G}\Delta\bar{D}$. It is sufficient to show that there exists at least one vector $g_{i_p}^{l_p(k_p)}(t_m + (m-1)N_c)\, \Delta d_{i_p}^{(k_p)}(t_m + (m-1)N_c)$ from $G_m \Delta D_m$ that is not a linear combination of any components of $\{g_{i_q}^{l_q(k_q)}(t_n + (n-1)N_c)\, \Delta d_{i_q}^{(k_q)}(t_n + (n-1)N_c)\}$, i.e., the columns of $G_n \Delta D_n$ in $\bar{G}\Delta\bar{D}$.

1. For any non-zero vector $g_{i_p}^{l_p(k_p)}(t_m + (m-1)N_c)\, \Delta d_{i_p}^{(k_p)}(t_m + (m-1)N_c)$ in $\bar{G}\Delta\bar{D}$, the non-zero segment starts at sample time

$$t_m L_u + (m-1)N_c L_u + \tau_{i_p}^{l_p(k_p)},$$

and the non-zero segment of any linear combination of $\{g_{i_q}^{l_q(k_q)}(t_n + (n-1)N_c)\, \Delta d_{i_q}^{(k_q)}(t_n + (n-1)N_c)\}$ can only start at sample time

$$t_n L_u + (n-1)N_c L_u + \tau_{i_q}^{l_q(k_q)}.$$

If these times do not coincide, i.e.,

$$\left| \left[(t_m - t_n) + (m-n)N_c\right] L_u + \tau_{i_p}^{l_p(k_p)} - \tau_{i_q}^{l_q(k_q)} \right| \neq 0 \quad (2.48)$$

$$\Longleftrightarrow \left| \tau_{i_p}^{l_p(k_p)} - \tau_{i_q}^{l_q(k_q)} \right| \neq \left[(t_m - t_n) + (m-n)N_c\right] L_u, \quad (2.49)$$

then the vector $g_{i_p}^{l_p(k_p)}(t_m + (m-1)N_c)\, \Delta d_{i_p}^{(k_p)}(t_m + (m-1)N_c)$ can never be a linear combination of $\{g_{i_q}^{l_q(k_q)}(t_n + (n-1)N_c)\, \Delta d_{i_q}^{(k_q)}(t_n + (n-1)N_c)\}$. Condition (2.39) is even stronger than Equation (2.49).

2. If Condition (2.39) is violated, the non-zero segment of $g_{i_p}^{l_p(k_p)}(t_m + (m-1)N_c)\, \Delta d_{i_p}^{(k_p)}(t_m + (m-1)N_c)$ is aligned with the non-zero segments of possibly several vectors from $\{g_{i_q}^{l_q(k_q)}(t_n + (n-1)N_c)\, \Delta d_{i_q}^{(k_q)}(t_n + (n-1)N_c)\}$. Condition (2.40) then guarantees that $g_{i_p}^{l_p(k_p)}(t_m + (m-1)N_c)\, \Delta d_{i_p}^{(k_p)}(t_m + (m-1)N_c)$ still cannot be a linear combination of $\{g_{i_q}^{l_q(k_q)}(t_n + (n-1)N_c)\, \Delta d_{i_q}^{(k_q)}(t_n + (n-1)N_c)\}$, due to the linear independence of the set of spreading codes.

3. Condition (2.41) guarantees that the non-zero segment of $g_{i_p}^{l_p(k_p)}(t_m + (m-1)N_c)\, \Delta d_{i_p}^{(k_p)}(t_m + (m-1)N_c)$ starts earlier than the non-zero segment of any of the vectors from the subsequent blocks $\{g_{i_q}^{l_q(k_q)}(t_n + (n-1)N_c)\, \Delta d_{i_q}^{(k_q)}(t_n + (n-1)N_c),\ n > m\}$ in $\bar{G}\Delta\bar{D}$; therefore no linear combination of $\{g_{i_q}^{l_q(k_q)}(t_n + (n-1)N_c)\, \Delta d_{i_q}^{(k_q)}(t_n + (n-1)N_c),\ n > m\}$ can yield $g_{i_p}^{l_p(k_p)}(t_m + (m-1)N_c)\, \Delta d_{i_p}^{(k_p)}(t_m + (m-1)N_c)$. The effect of Condition (2.42) can be shown in a similar fashion when $n < m$. ✷


The conditions of Proposition 1 are generally met in real applications and thus are not overly stringent. An important consequence of Proposition 1 is that the minimum rank of $\bar{\Phi}$ occurs when all blocks but one of the difference sequence are zero, and is determined by the $\Phi_{mm}$ of that single block. As such, the sequence coding problem decouples into a block coding problem, and we will focus on the properties of $\Phi_{mm}$ from now on. The strict inequality in Proposition 1 is critical: otherwise, the minimum rank of $\bar{\Phi}$ might occur when two blocks of the sequence differ, in which case the joint design of those two blocks could improve upon the coding gain of a single-block design, and there would be no decoupling of the blocks with regard to design.

2.2.4 Code Design Principles

This section discusses code design principles for quasi-static fading channels on the uplink (or in approximated downlink scenarios).

Proposition 2 In the uplink, if every distinct pair of users has linearly independent spreading codes, then under the assumption of independent fading for different users, the multiuser coding problem decouples into multiple single-user coding problems.

Proof:

Recall that $\Phi_{mm} = \Delta D_m^H R_{mm} \Delta D_m \in \mathbb{C}^{K L_c L_t \times K L_c L_t}$. Whenever the codeword difference of a certain user in $\Delta D_m$ is set to zero, $L_c L_t$ rows and columns of $\Phi_{mm}$ become zero. We consider the smaller matrix with the zero rows and columns removed; its rank is never greater than that of the original. Continue removing users' rows and columns until only two users remain; since every two users use linearly independent spreading codes, removing one user always reduces $\operatorname{rank} \Phi_{mm}$. So the minimum rank of $\Phi_{mm}$ (the worst case) can only occur when a single user contributes to $\Delta D_m$. Hence we need only consider one user in the code design for the uplink. ✷

Note that Proposition 2 does not require linearly independent spreading codes to be employed for different transmit antennae of the same user. From now on, only one user will be considered in the code design task for the uplink, and the superscript of the user index will be dropped when no confusion arises. Note that this decoupling does not exist in the downlink (see Section 2.3).


Let us consider the issue of achievable diversity in multipath channels. We assume that the sampling rate $L_u$ is high enough to resolve the available diversity (the column rank of $G_m$):

$$N_c L_u + \tau_{\max} > N_c L_c L_t. \quad (2.50)$$

Whether the correlation matrix $R_{mm}$ achieves full rank plays an important role in the following discussion; thus we have the following proposition.

Proposition 3 If

(a) $N_c L_u + \tau_{\max} > N_c L_c L_t$, (2.51)

(b) $\{s_i(t),\ 1 \le i \le L_t\}$ are linearly independent $\forall t \in [(m-1)N_c + 1,\ mN_c]$, (2.52)

then $R_{mm}$ achieves full rank $N_c L_c L_t$.

Note that condition (a) is simply the aforementioned sampling rate constraint, and condition (b) implies that different transmit antennae of the same user use linearly independent spreading codes at each symbol time within the block.

Proof:

Consider any two spreading codes in $G_m$. If their nonzero segments are not aligned, they are linearly independent; if their nonzero segments are aligned, they are linearly independent by condition (2.52). ✷

To achieve full diversity gain, we have the following proposition.

Proposition 4 In quasi-static fading channels, if $R_{mm}$ is full rank and

$$\text{for all } i, \quad \Delta d_i(t) \neq 0 \text{ for at least one } (m-1)N_c + 1 \le t \le mN_c, \quad (2.53)$$

then block $m$ achieves full diversity gain $L_c L_t$.

Proof:

Recall from Equations (2.16) and (2.10) that for the quasi-static channel,

$$\Delta D_m = \begin{bmatrix} \Delta D((m-1)N_c + 1) \\ \vdots \\ \Delta D(mN_c) \end{bmatrix}$$

where

$$\Delta D(t) = \operatorname{diag}[\underbrace{\Delta d_1(t), \dots, \Delta d_{L_t}(t)}_{\text{group } 1}, \dots, \underbrace{\Delta d_1(t), \dots, \Delta d_{L_t}(t)}_{\text{group } L_c}] \in \mathbb{C}^{L_c L_t \times L_c L_t}.$$

Condition (2.53) guarantees that the null space of $\Delta D_m$ is trivial, i.e.,

$$\forall v \neq 0, \quad \Delta D_m v = u \neq 0.$$

When $R_{mm}$ is full rank it is positive definite, i.e.,

$$\forall u \neq 0, \quad u^H R_{mm} u > 0,$$

hence

$$\forall v \neq 0, \quad v^H \Delta D_m^H R_{mm} \Delta D_m v = v^H \Phi_{mm} v > 0.$$

This means $\Phi_{mm}$ is positive definite and thus full rank $L_t L_c$. ✷
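The positive-definiteness argument above can be checked numerically. The following sketch uses illustrative dimensions and numbers (not taken from the text): $L_t = 2$ antennas, $L_c = 1$ path, $N_c = 2$ symbol times, an equicorrelated full-rank $R_{mm}$, and a codeword difference satisfying condition (2.53) because each antenna's difference is nonzero at some symbol time.

```python
import numpy as np

# Hypothetical setup: L_t = 2, L_c = 1, N_c = 2; condition (2.53) holds since
# antenna 1 differs at t = 1 and antenna 2 differs at t = 2.
Lt, Nc = 2, 2
dD = np.array([[2.0, 0.0],     # Delta d_1(1), Delta d_2(1)
               [0.0, -2.0]])   # Delta d_1(2), Delta d_2(2)
DeltaDm = np.vstack([np.diag(dD[t]) for t in range(Nc)])   # (Nc*Lt) x Lt

# Full-rank R_mm: equicorrelated codes within each symbol time, rho_c = 0.3.
rho_c = 0.3
blk = np.array([[1.0, rho_c], [rho_c, 1.0]])
R = np.kron(np.eye(Nc), blk)

Phi = DeltaDm.T @ R @ DeltaDm      # Phi_mm = Delta D_m^H R_mm Delta D_m
rank = np.linalg.matrix_rank(Phi)
print(rank)   # full diversity L_c * L_t = 2
```

As Proposition 4 predicts, $\Phi_{mm}$ comes out positive definite with full rank $L_c L_t$.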

Under the assumption of equicorrelated spreading codes (a model commonly employed to gain intuition about spread systems), we can also make a statement about coding gain for flat fading channels.

Proposition 5 (Coding Gain) In the quasi-static flat fading channel ($L_c = 1$), if $R_{mm}(0) \in \mathbb{R}^{K L_t \times K L_t}$ has equal cross-correlation entries $-\frac{1}{KL_t - 1} < \rho_c < 1$, and condition (2.53) is satisfied for all users, then full diversity gain is achieved. The coding gain is a monotonically decreasing function of positive $\rho_c$ and a monotonically increasing function of negative $\rho_c$.

Proof:

Denote $R_{mm}(0)$ explicitly as a function of $\rho_c$ by $R_{mm}(\rho_c)$. For this case, $\Phi_{mm}$ can be rewritten as $\Phi_{mm} = \Delta D^H \Delta D \odot R_{mm}$. For $-\frac{1}{KL_t - 1} < \rho_c < 1$, $R_{mm}(\rho_c)$ is positive definite. Since condition (2.53) holds for all users, $\Delta D^H \Delta D$ has no zero diagonal entries; by [HJ91], $\Delta D^H \Delta D \odot R_{mm}(\rho_c)$ is positive definite and hence full rank.

For $|\rho_1| < |\rho_2|$, $R_{mm}(|\rho_1|/|\rho_2|)$ is easily seen to be positive definite. We next plug $\Delta D^H \Delta D \odot R_{mm}(\rho_2)$ and $R_{mm}(|\rho_1|/|\rho_2|)$ into Oppenheim's inequality (page 480 of [HJ91]) as $A$ and $B$ respectively, and observe that all diagonal entries of $R_{mm}(|\rho_1|/|\rho_2|)$ equal unity. Then we have

$$\det(\Delta D^H \Delta D \odot R_{mm}(\rho_2)) \le \det\!\left(\Delta D^H \Delta D \odot R_{mm}(\rho_2) \odot R_{mm}\!\left(\frac{|\rho_1|}{|\rho_2|}\right)\right) = \det(\Delta D^H \Delta D \odot R_{mm}(\rho_1)),$$

where the last equality holds if $\rho_1, \rho_2$ have the same sign. Due to the positive definiteness of $\Delta D^H \Delta D \odot R_{mm}(\rho_1)$, we have a strict inequality when $\rho_1, \rho_2$ have the same sign:

$$\det(\Delta D^H \Delta D \odot R_{mm}(\rho_2)) < \det(\Delta D^H \Delta D \odot R_{mm}(\rho_1)). \quad ✷$$

We comment that Proposition 5 concerns performance evaluation, where all the active users must be considered (condition (2.53) is required to hold for all users); in code design, the worst-case analysis tells us that if each user optimizes his own code, the overall performance improves, so code design is a single-user problem. Increasing the spreading typically reduces $|\rho_c|$ and yields a larger coding gain by Proposition 5; we can thus trade spectrum for performance.

2.2.5 The Effect of Multipath

For quasi-static channels, as expected, multipath contributes additional diversity gain under perfect CSI. For spread systems, within each block, each path typically contributes $L_t$ diversity levels. We now look into the detailed structure of $R_{mm}(0)$ and discuss why, for small delays and quasi-static channels, codes with good coding gain for flat channels still have good coding gain for multipath channels.

Recall from Equation (2.35) that

$$\Phi_{mm} = \Delta D_{L_c}^H \Delta D_{L_c} \odot R_{mm}(0) + \sum_{t=1}^{n+1} \Delta D_{L_c}^H Q^{n+1} \Delta D_{L_c} \odot \left(R_{mm}(t) + R_{mm}^T(t)\right).$$

For small multipath delays, a typical spread system has $R_{mm}(t) \simeq 0$, $t \neq 0$; therefore $\Phi_{mm} \simeq \Delta D_{L_c}^H \Delta D_{L_c} \odot R_{mm}(0)$. We partition $R_{mm}(0)$ as follows:

$$R_{mm}(0) = \begin{bmatrix} R^{(1,1)} & \cdots & R^{(1,L_c)} \\ \vdots & \ddots & \vdots \\ R^{(L_c,1)} & \cdots & R^{(L_c,L_c)} \end{bmatrix}, \quad \text{where } R^{(i,j)} = [s_1^i, \dots, s_{L_t}^i]^T [s_1^j, \dots, s_{L_t}^j]. \quad (2.54)$$

Here $R^{(i,j)} \simeq 0$ for $i \neq j$; that is, well-designed spreading codes have small cross-correlations over multiple lags. This implies $\det(\Phi_{mm}) \approx \prod_{j=1}^{L_c} \det(\Delta D^H \Delta D \odot R^{(j,j)})$. Furthermore, if $R^{(i,i)} \simeq R^{(j,j)}$ for $i \neq j$, then we can further approximate $\det(\Phi_{mm}) \approx [\det(\Delta D^H \Delta D \odot R^{(1,1)})]^{L_c}$. Note that $\det(\Delta D^H \Delta D \odot R^{(1,1)})$ corresponds to the coding gain for a single-path system; hence good flat fading channel codes tend to give large coding gain even for multipath channels.
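The factorization $\det(\Phi_{mm}) \approx \prod_j \det(\Delta D^H \Delta D \odot R^{(j,j)})$ can be sanity-checked numerically. In the sketch below, `A` stands in for $\Delta D^H \Delta D$ and the cross-path blocks $R^{(i,j)}$, $i \neq j$, are given a small hypothetical correlation `eps`; all values are illustrative.

```python
import numpy as np

# Check det(Phi) ≈ [det(DeltaD^H DeltaD ⊙ R^(1,1))]^Lc for L_c = 2 paths
# when the cross-path correlation blocks are small.
Lt, Lc = 2, 2
A = np.array([[4.0, -2.0], [-2.0, 4.0]])   # stands in for DeltaD^H DeltaD
Rd = np.array([[1.0, 0.3], [0.3, 1.0]])    # per-path code correlation R^(j,j)
eps = 0.02                                  # small cross-path correlation
Roff = eps * np.ones((Lt, Lt))              # R^(i,j), i != j

R0 = np.block([[Rd, Roff], [Roff, Rd]])     # partitioned R_mm(0), Eq. (2.54)
Phi = np.kron(np.ones((Lc, Lc)), A) * R0    # DeltaD_Lc^H DeltaD_Lc ⊙ R_mm(0)

exact = np.linalg.det(Phi)
approx = np.linalg.det(A * Rd) ** Lc        # [det(DeltaD^H DeltaD ⊙ R^(1,1))]^Lc
print(exact, approx)
```

With `eps` small, the exact determinant and the per-path product agree to within a fraction of a percent, consistent with the approximation above.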

For a spread system, based on the consideration of random sequences, we can use the following spreading code correlation model [AM00]:

$$(s_{i_1}^{l_1})^T (s_{i_2}^{l_2}) = \begin{cases} 1 & \text{if } \tau_{i_1}^{l_1} = \tau_{i_2}^{l_2} \text{ and } i_1 = i_2, \\ \rho_c & \text{if } \tau_{i_1}^{l_1} = \tau_{i_2}^{l_2} \text{ and } i_1 \neq i_2, \\ \rho_c \times \dfrac{L_u - |\tau_{i_1}^{l_1} - \tau_{i_2}^{l_2}|}{L_u} & \text{if } 0 < |\tau_{i_1}^{l_1} - \tau_{i_2}^{l_2}| < L_u, \\ 0 & \text{else,} \end{cases} \quad (2.55)$$

where $\rho_c$ is a common cross-correlation value.
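Equation (2.55) translates directly into code. The sketch below assembles $R_{mm}(0)$ under this model for a hypothetical two-antenna, two-path example with delay profile [0 1; 0 1] (the parameters are chosen for illustration only).

```python
import numpy as np

# Direct transcription of the spreading-code correlation model, Eq. (2.55).
# tau1, tau2: path delays in samples; i1, i2: antenna indices;
# Lu: spreading code length; rho_c: common cross-correlation.
def code_correlation(tau1, i1, tau2, i2, Lu, rho_c):
    if tau1 == tau2:
        return 1.0 if i1 == i2 else rho_c
    if abs(tau1 - tau2) < Lu:
        return rho_c * (Lu - abs(tau1 - tau2)) / Lu
    return 0.0

# Assemble R_mm(0) for L_t = 2 antennas, L_c = 2 paths, delay profile [0 1; 0 1].
Lu, rho_c = 32, 0.3
delays = {(1, 1): 0, (1, 2): 1, (2, 1): 0, (2, 2): 1}   # (antenna, path) -> tau
keys = sorted(delays)                                    # column order
R0 = np.array([[code_correlation(delays[a], a[0], delays[b], b[0], Lu, rho_c)
                for b in keys] for a in keys])
print(R0)
```

The resulting matrix has unit diagonal, cross-correlation $\rho_c$ between equal-delay columns of different antennas, and the triangular roll-off $\rho_c (L_u - |\Delta\tau|)/L_u$ for offset delays.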

To examine the effects of multipath on well-designed single-path codes, we consider a specific code: the 2 × 2 rate 1 BPSK Class 1 code, which is discussed in depth in Section 2.5. For $\rho_c = 0.3$ and $L_u = 32$, we compare in Table 2.1 the distance spectra (the enumeration of the coding gains of all codeword pairs) for $L_c = 1$ with delay profile [0; 0] and for $L_c = 2$ with delay profile [0 1; 0 1]. As we can see, the coding gain for $L_c = 2$ is roughly the square of the coding gain for $L_c = 1$, which agrees with our approximate analysis above. For the $L_c = 2$ case, we have searched for optimal codes with delay profile $[0, \tau_1^2; \tau_2^1, \tau_2^2]$, where $0 \le \tau_i^l \le 30$; the optimal codes all turn out to be the same as that for the flat fading channel. An exhaustive search for $L_c = 3$ and $\tau_{\max} \le 5$ shows that the flat fading channel code is still optimal. Figure 2.4 shows the union bounds for 2 × 2 rate


L_c = 1:

         0       7       9      14
  0    0.00   30.56   16.00   30.56
  7   30.56    0.00   30.56   16.00
  9   16.00   30.56    0.00   30.56
 14   30.56   16.00   30.56    0.00

L_c = 2:

         0       7       9      14
  0    0.00  765.06  214.58  765.06
  7  765.06    0.00  771.70  214.58
  9  214.58  771.70    0.00  771.70
 14  765.06  214.58  771.70    0.00

Table 2.1: The distance spectra for the rate 1 BPSK 2 × 2 Class 1 OMM code.

Figure 2.4: Union bound as a function of path delay τ, delay profile [0 τ; 0 τ], for the Class 1 [0 7 9 14] and Class 2 [1 7 8 14] codes.

1 BPSK Classes 1 and 2 as a function of path delay τ with delay profile [0, τ; 0, τ]. The performance of the two codes is quite robust to the value of the path delay τ.

2.3 The Downlink

In the downlink, without loss of generality, we consider the situation where the base station communicates with user 1. From the sequence decoder analysis for the uplink model in Section 2.2, we observed that the performance of the sequence decoder, which is a function of $\Phi$, is dominated by the performance of the single block code, which is a function of $\Phi_{mm}$; furthermore, sequence code design decouples into single block code design. In the downlink, similar observations can be made, so we jump directly to single block decoder analysis. We will consider two decoder structures: the jointly optimal ML decoder for all users, and a single-user ML-based decoder. The key difference between the uplink and the downlink is that in the downlink all users experience the same channel, resulting in a multiuser coding problem. Note that the following downlink model is general enough to encompass both the case where different transmit antennae of the same user employ distinct spreading codes, $s_i^{(k)}(t) \neq s_j^{(k)}(t)$ for $i \neq j$, and the case where different transmit antennae of the same user employ identical spreading codes, $s_i^{(k)}(t) = s_j^{(k)}(t)$.

2.3.1 The Downlink Model

At the base station, each user's information is encoded in the same way as shown in Figure 2.1, but the transmitted signals for the users are combined on each transmit antenna, and the sum signals are transmitted. We will initially assume that user 1 knows all users' spreading codes and the common channel state information. While this assumption is impractical, the resulting decoder provides a benchmark for the suboptimal decoder discussed in the sequel. The matched filter outputs are constructed using the spreading code information.

At symbol time $t$, the received signal due to $d_i^{(k)}(t)$ is

$$r(t) = \sqrt{\sigma_t} \sum_{k=1}^{K} \sum_{i=1}^{L_t} \sum_{l=1}^{L_c} g_i^{l(k)}(t)\, d_i^{(k)}(t)\, h_i^l(t) + n(t) \in \mathbb{C}^{(L_u+\tau_{\max}) \times 1}. \quad (2.56)$$

A single set of channel coefficients $h_i^l(t)$ appears in (2.56); this is because each user experiences the same channel in the downlink. Written in matrix form,

$$r(t) = \sqrt{\sigma_t}\, G(t) D(t) h(t) + n(t) \in \mathbb{C}^{(L_u+\tau_{\max}) \times 1}, \quad (2.57)$$

where

$$G(t) = [G^{(1)}(t), \dots, G^{(K)}(t)] \in \mathbb{R}^{(L_u+\tau_{\max}) \times K L_c L_t}, \quad (2.58)$$

and $G^{(k)}(t)$ is defined in Equation (2.9). The codeword matrix $D(t)$ is

$$D(t) = \begin{bmatrix} D^{(1)}(t) \\ \vdots \\ D^{(K)}(t) \end{bmatrix} \in \mathbb{C}^{K L_c L_t \times L_c L_t}, \quad (2.59)$$

where $D^{(k)}(t)$ is defined in Equation (2.10); finally, $h(t) \triangleq h^{(1)}(t)$ is defined in Equation (2.11).

In comparison with the uplink case in Equation (2.12), we see that the $D(t)$ have the same nonzero entries but are arranged in a different form; $h(t)$ in the downlink is similar to $h(t)$ in the uplink except that other users' channel coefficients are not present, since they are identical to those of user 1.

Similarly, "concatenating" the $r(t)$, $t = 1, \dots, N_c$ into a larger vector $r$, we have the received signal for user 1,

$$r = \sqrt{\sigma_t} \begin{bmatrix} G(1) & \emptyset & \emptyset \\ \emptyset & \ddots & \emptyset \\ \emptyset & \emptyset & G(N_c) \end{bmatrix} \begin{bmatrix} D(1) \\ \vdots \\ D(N_c) \end{bmatrix} h(t) + \begin{bmatrix} n(1) \\ \vdots \\ n(N_c) \end{bmatrix} \quad (2.60)$$

$$= \sqrt{\sigma_t}\, G D h + n \in \mathbb{C}^{(N_c L_u + \tau_{\max}) \times 1}. \quad (2.61)$$

Note that $h \triangleq h(t)$ is constant within one block for the quasi-static channel, and adjacent blocks of $G(t)$ are offset by $L_u$ rows.

The matched filter output is

$$y = G^T r = \sqrt{\sigma_t}\, R D h + m \in \mathbb{C}^{N_c K L_c L_t \times 1}, \quad (2.62)$$

where $R = G^T G$ is the correlation matrix of the spreading codes and $m = G^T n$ has distribution $\mathcal{N}(0, R)$.

2.3.2 Joint Maximum Likelihood Decoder

The jointly optimal ML decoder for one block is employed to decode all users' information jointly. The matched filter output vector is $y \sim \mathcal{N}(\sqrt{\sigma_t} R D h, R)$. Similar to the uplink case, the averaged pairwise error probability is upper bounded by

$$E_h[P(D_\alpha \to D_\beta \mid h)] \le \frac{1}{\left| I + \frac{\sigma_t}{4} \Sigma_h (D_\alpha - D_\beta)^H R (D_\alpha - D_\beta) \right|}, \quad (2.63)$$

where

$$\Sigma_h = E[h h^H]. \quad (2.64)$$
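The bound of Equation (2.63) is straightforward to evaluate numerically. The following sketch uses a hypothetical single-user ($K = 1$) flat-fading block with $N_c = 2$, an i.i.d. unit-variance channel ($\Sigma_h = I$), and the equicorrelated spreading-code model; all numbers are illustrative.

```python
import numpy as np

# Pairwise error bound, Eq. (2.63): E_h[P(Da -> Db | h)] <= 1/det(I + (sigma_t/4) Sigma_h Phi)
sigma_t = 10.0
Sigma_h = np.eye(2)                          # Sigma_h = E[h h^H], Eq. (2.64)

# Codeword difference stacked over the block: Delta D(t) = diag(Delta d_1(t), Delta d_2(t)).
dD = np.array([[2.0, -2.0], [2.0, 2.0]])     # rows: symbol times t = 1, 2
DeltaD = np.vstack([np.diag(row) for row in dD])   # 4 x 2

rho_c = 0.3                                  # spreading-code cross-correlation
blk = np.array([[1.0, rho_c], [rho_c, 1.0]])
R = np.kron(np.eye(2), blk)                  # block-diagonal correlation, 4 x 4

Phi = DeltaD.T @ R @ DeltaD                  # Eq. (2.65) with K = 1
bound = 1.0 / np.linalg.det(np.eye(2) + (sigma_t / 4) * Sigma_h @ Phi)
print(bound)
```

For this particular difference pattern the cross terms cancel, $\Phi = 8I$, and the bound evaluates to $1/441 \approx 2.3 \times 10^{-3}$.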


Similarly, the key matrix $\Phi$ is defined as

$$\Phi = (D_\alpha - D_\beta)^H R (D_\alpha - D_\beta) = \Delta D^H R \Delta D \in \mathbb{C}^{L_c L_t \times L_c L_t}. \quad (2.65)$$

Again, if $nL_u \le \tau_{\max} < (n+1)L_u$ and each transmit antenna uses the same fixed spreading code at each symbol time, the correlation matrix $R$ has the same structure as the $R$ defined in (2.32).

For the quasi-static fading channel,

$$\Phi = \begin{bmatrix} \Delta D(1) \\ \Delta D(2) \\ \vdots \\ \Delta D(N_c) \end{bmatrix}^H \begin{bmatrix} R(0) & R(1) & \cdots & R(n+1) & & 0 \\ R(1)^T & R(0) & \ddots & & \ddots & \\ \vdots & \ddots & \ddots & \ddots & & R(n+1) \\ R(n+1)^T & & \ddots & \ddots & \ddots & \vdots \\ & \ddots & & \ddots & \ddots & R(1) \\ 0 & & R(n+1)^T & \cdots & R(1)^T & R(0) \end{bmatrix} \begin{bmatrix} \Delta D(1) \\ \Delta D(2) \\ \vdots \\ \Delta D(N_c) \end{bmatrix}. \quad (2.66)$$

Note that the difference between the uplink $\Phi$ and the downlink $\Phi$ resides only in the form of $\Delta D(t)$: Equation (2.14) for the uplink versus Equation (2.59) for the downlink. Note that $R(0)$ and $R(t)$ can be partitioned into $K \times K$ block matrices,

$$R(0) = \begin{bmatrix} R^{(1,1)}(0) & \cdots & R^{(1,K)}(0) \\ \vdots & \ddots & \vdots \\ R^{(K,1)}(0) & \cdots & R^{(K,K)}(0) \end{bmatrix}, \quad R(t) = \begin{bmatrix} R_n^{(1,1)}(t) & \cdots & R_n^{(1,K)}(t) \\ \vdots & \ddots & \vdots \\ R_n^{(K,1)}(t) & \cdots & R_n^{(K,K)}(t) \end{bmatrix}; \quad (2.67)$$

thus $\Phi$ can be simplified as

$$\Phi = \sum_{i=1}^{K} \sum_{j=1}^{K} \left[ (\Delta D_{L_c}^{(i)})^H \Delta D_{L_c}^{(j)} \odot R^{(i,j)}(0) + \sum_{t=1}^{n} (\Delta D_{L_c}^{(i)})^H Q^{n+1} \Delta D_{L_c}^{(j)} \odot \left( R_n^{(i,j)}(t) + (R_n^{(i,j)}(t))^T \right) \right] \in \mathbb{C}^{L_c L_t \times L_c L_t}, \quad (2.68)$$

where

$$\Delta D_{L_c}^{(k)} = [\underbrace{\Delta D^{(k)}, \dots, \Delta D^{(k)}}_{L_c}] \in \mathbb{C}^{N_c \times L_c L_t}, \quad (2.69)$$

with $D^{(k)}$ defined in Equation (2.2) and $Q$ defined in Equation (2.37).


For this special case of fixed spreading codes, we see clearly the summation of contributions from the different users' codeword difference matrices and correlation matrices in the final $\Phi$. The matrix $\Phi$ cannot be decomposed in a way that decouples the contributions of the active users; again, we emphasize that this is a multiuser coding problem. Trying to design a code that is globally optimal in such a high-dimensional space is computationally expensive. We want to point out the connection between the downlink $\Phi$ and the uplink $\Phi$: the downlink $\Phi$ contains $K$ terms with $i = j$ that are equivalent to uplink $\Phi$'s, and these $K$ terms have relatively large $R^{(i,i)}(0)$. This suggests that a good uplink code will perform reasonably well in the downlink.

2.3.3 Single User ML-based Decoder

If user 1 is only interested in his own data and treats the other users' data as colored Gaussian noise, we can form a single-user approximate ML decoder.

We separate the contributions of user 1's data and the interferers' data in Equation (2.57):

$$r_j(t) = \sqrt{\sigma_t}\, G_1(t) D_1(t) h(t) + \sqrt{\sigma_t}\, G_2(t) D_2(t) h(t) + n_j(t) \in \mathbb{C}^{(L_u+\tau_{\max}) \times 1}, \quad (2.70)$$

where

$$G_1(t) = G^{(1)}(t) \in \mathbb{R}^{(L_u+\tau_{\max}) \times L_c L_t}, \quad G_2(t) = [G^{(2)}(t), \dots, G^{(K)}(t)] \in \mathbb{R}^{(L_u+\tau_{\max}) \times (K-1)L_c L_t}, \quad (2.71)$$

and

$$D_1(t) \triangleq D^{(1)}(t) \in \mathbb{C}^{L_c L_t \times L_c L_t}, \quad (2.72)$$

$$D_2(t) \triangleq \begin{bmatrix} D^{(2)}(t) \\ \vdots \\ D^{(K)}(t) \end{bmatrix} \in \mathbb{C}^{(K-1)L_c L_t \times L_c L_t}. \quad (2.73)$$

"Concatenating" the $r_j(t)$, $t = 1, \dots, N_c$ into a large vector,

$$r = \sqrt{\sigma_t} \begin{bmatrix} G_1(1) & \emptyset & \emptyset \\ \emptyset & \ddots & \emptyset \\ \emptyset & \emptyset & G_1(N_c) \end{bmatrix} \begin{bmatrix} D_1(1) \\ \vdots \\ D_1(N_c) \end{bmatrix} h + \sqrt{\sigma_t} \begin{bmatrix} G_2(1) & \emptyset & \emptyset \\ \emptyset & \ddots & \emptyset \\ \emptyset & \emptyset & G_2(N_c) \end{bmatrix} \begin{bmatrix} D_2(1) \\ \vdots \\ D_2(N_c) \end{bmatrix} h + \begin{bmatrix} n_j(1) \\ \vdots \\ n_j(N_c) \end{bmatrix} \quad (2.74)$$

$$= \sqrt{\sigma_t}\, G_1 D_1 h + \sqrt{\sigma_t}\, G_2 D_2 h + n \in \mathbb{C}^{(N_c L_u + \tau_{\max}) \times 1}. \quad (2.75)$$

The matched filter, which consists of user 1's spreading codes only, outputs

$$y = \sqrt{\sigma_t}\, G_1^T G_1 D_1 h + \sqrt{\sigma_t}\, G_1^T G_2 D_2 h + m \in \mathbb{C}^{L_c L_t \times 1}, \quad (2.76)$$

where

$$m = G_1^T n. \quad (2.77)$$

Treating $\sqrt{\sigma_t}\, G_1^T G_2 D_2 h + m$ as Gaussian noise $\tilde{m}$, we have

$$E[\tilde{m}] = 0, \quad E[\tilde{m}\tilde{m}^H] = \sigma_t G_1^T G_2 G_2^T G_1 + G_1^T G_1. \quad (2.78)$$

Following our prior approach, we end up with the new correlated codeword difference matrix

$$\tilde{\Phi} = \Delta D_1^H (\check{R}_1^T \check{R}_2^{-1} \check{R}_1) \Delta D_1 \in \mathbb{C}^{L_c L_t \times L_c L_t}, \quad (2.79)$$

where

$$\check{R}_1 = G_1^T(t) G_1(t) \in \mathbb{C}^{L_c L_t \times L_c L_t}, \quad \check{R}_2 = \sigma_t G_1^T(t) G_2(t) G_2^T(t) G_1(t) + G_1^T(t) G_1(t) \in \mathbb{C}^{L_c L_t \times L_c L_t}. \quad (2.80)$$

The matrix $\tilde{\Phi}$ can be simplified and approximated as

$$\tilde{\Phi} \simeq (\Delta D^{(1)})^H \Delta D^{(1)} \odot \tilde{R} \in \mathbb{C}^{L_c L_t \times L_c L_t}, \quad (2.81)$$

where

$$\tilde{R} = \check{R}_1^T \check{R}_2^{-1} \check{R}_1. \quad (2.82)$$

This formulation converts the joint coding problem into an equivalent single-user coding problem with a redefined correlation matrix $\tilde{R}$. We will show in Section 2.5.2 that $\tilde{R}$ acts like a modified $R$ in the uplink coding problem; thus we can apply the same code design principles and search for codes that are equally applicable to both the uplink and downlink scenarios. We note that $\check{R}_1$ and $\check{R}_2$ can be estimated at the downlink in order to construct the decoder.
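The effective correlation matrix $\tilde{R}$ of Equation (2.82) is easy to compute once $\check{R}_1$ and $\check{R}_2$ are available. A minimal sketch follows, with user 1 assigned two orthogonal ±1 codes and the interferers random ±1 codes; all parameters are hypothetical, and flat fading ($L_c = 1$) is assumed.

```python
import numpy as np

# Sketch of the single-user decoder quantities in Eqs. (2.79)-(2.82).
Lu, Lt, K, sigma_t = 16, 2, 3, 1.0
n = np.arange(Lu)
# User 1: two orthogonal +/-1 codes (all-ones and alternating), unit-normalized.
G1 = np.column_stack([np.ones(Lu), (-1.0) ** n]) / np.sqrt(Lu)
# Interfering users 2..K: random +/-1 codes (illustrative).
rng = np.random.default_rng(7)
G2 = rng.choice([-1.0, 1.0], size=(Lu, (K - 1) * Lt)) / np.sqrt(Lu)

R1 = G1.T @ G1                                      # R_check_1 (here the identity)
R2 = sigma_t * G1.T @ G2 @ G2.T @ G1 + G1.T @ G1    # R_check_2: interference + noise
R_tilde = R1.T @ np.linalg.inv(R2) @ R1             # Eq. (2.82)
print(R_tilde)
```

Since $\check{R}_2 \succeq \check{R}_1 = I$ here, $\tilde{R}$ is a positive definite matrix with eigenvalues no larger than one: the multiple-access interference uniformly shrinks the effective correlation seen by the code design.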

2.4 Non-spread Systems

The system model in Section 2.2.1 can be applied to non-spread systems with slight modification. The system now consists of a single user, and the spreading code is interpreted as samples of the modulation waveform with length $L_u$. All transmit antennae employ the same modulation waveform, the entries of which can take on arbitrary real values. This perspective offers a great advantage in analyzing non-spread systems in multipath fading channels. An important difference between spread and non-spread systems is that in non-spread systems, to enhance spectral efficiency, pulse shapes have duration longer than a symbol interval; see Figure 2.5 for the distinction between spread and non-spread transmissions. Thus there is significant overlap between the contributions of consecutive symbols, which causes Inter-Symbol Interference (ISI). To characterize this situation, a new parameter $L_s$, the number of samples between consecutive transmissions, is introduced. We highlight the differences due to the new parameter in the system model.

In Equation (2.16), while concatenating $r(t)$, $t = 1, \dots, N_c$ into $r$, due to the ISI the adjacent blocks of $G(t)$ are offset by $L_s$ instead of $L_u$. The final received vector is

$$r = \sqrt{\sigma_t}\, G D h + n \in \mathbb{C}^{((N_c-1)L_s + L_u + \tau_{\max}) \times 1}. \quad (2.83)$$

The dimension of $G_m \Delta D_m$ changes accordingly:

$$G_m \Delta D_m \in \mathbb{C}^{((N_c-1)L_s + L_u + \tau_{\max}) \times N_c L_t L_c}. \quad (2.84)$$

Figure 2.5: Distinction between spread and non-spread transmission.

In Equation (2.50), to resolve the available diversity, the sampling rate is required to satisfy

$$(N_c - 1)L_s + L_u + \tau_{\max} > N_c L_t L_c. \quad (2.85)$$

The diversity gain is still the rank of $F$ for quasi-static fading channels. For non-spread systems, $F$ has dimension $((N_c-1)L_s + L_u + \tau_{\max}) \times L_t L_c$. Because all antennae share the same modulation waveform, full diversity $L_t L_c$ is not as easily achieved; we have the following proposition regarding the diversity level.

Proposition 6 For non-spread systems in quasi-static multipath fading channels, the diversity level satisfies

$$\operatorname{rank} \Phi_{mm} \ge \max\left(\text{the number of distinct path delays } \tau_i^{(l)},\ \text{flat fading diversity}\right). \quad (2.86)$$

Proof:

Recall $\Phi_{mm} = F_m^H F_m$ where $F_m = G_m \Delta D_m$. Columns in $F_m$ with distinct delays have different numbers of leading zeros and hence are linearly independent; therefore $\operatorname{rank} \Phi_{mm} \ge$ the number of distinct path delays $\tau_i^{(l)}$. Aligning the nonzero segments of columns in $F_m$ reduces $\operatorname{rank} F_m$; when all columns are aligned, $\operatorname{rank} F_m$ equals the flat fading diversity, and therefore $\operatorname{rank} \Phi_{mm} \ge$ flat fading diversity. ✷

A weaker statement is made in [TNSC99] with much more complicated analysis.


2.5 Optimal Minimum Metric Codes

In this section, we analyze the properties of optimized codes found via computer search. For simplicity, the search setup is based on flat fading channels ($L_c = 1$), but as predicted in Section 2.2.5, we will show the consistency of the performance of the found codes in both multipath and flat fading channels. Recall from Equation (2.35) that in the uplink, for flat quasi-static channels, $\Phi_{mm}$ (we drop the subscript $mm$ for convenience) takes the following simple form:

$$\Phi(D_\alpha, D_\beta) = (D_\alpha - D_\beta)^H R_{mm}(0) (D_\alpha - D_\beta) = \Delta D^H \Delta D \odot R_{mm}(0) \in \mathbb{C}^{L_t \times L_t}, \quad (2.87)$$

and from Equation (2.68), in the downlink $\Phi$ takes the form

$$\Phi(D_\alpha, D_\beta) = \sum_{i=1}^{K} \sum_{j=1}^{K} (\Delta D^{(i)})^H \Delta D^{(j)} \odot R^{(i,j)}(0) \in \mathbb{C}^{L_t \times L_t}. \quad (2.88)$$

In the sequel, all codes are required to achieve full diversity. Optimum minimum metric (OMM) codes are optimized by maximizing the minimum coding gain. Usually more than one set of codes achieves the above criteria. In our study, the distance spectrum and the union bound are used to further evaluate the performance of these codes. The distance spectrum is the enumeration of the coding gains of all codeword pairs. At high SNR, for a finite number of block codes, the union bound is asymptotically tight and hence a good indicator of performance. Figure 2.6 compares the union bound with simulated performance for the OMM 2 × 2 BPSK Class 1 code (see Figure 2.7 for the Class 1 code). In spread systems, codes are optimized for $\rho_c = 0.3$.
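The coding gains of Equation (2.87) are easy to compute, and doing so reproduces the $L_c = 1$ distance spectrum of Table 2.1 for the Class 1 code {0, 7, 9, 14}. The bit-to-antenna/time mapping used below is an assumption (the text does not spell it out), but with it the familiar values 16.00 and 30.56 appear.

```python
import numpy as np
from itertools import combinations

# Distance spectrum of the rate 1 BPSK 2x2 Class 1 code {0, 7, 9, 14} under
# Eq. (2.87) with the equicorrelated model R_mm(0) = [[1, rho_c], [rho_c, 1]].
# Assumed bit convention: index bits, MSB first, map to
# [d_1(1), d_2(1), d_1(2), d_2(2)], with bit b -> BPSK symbol (-1)^b.
def codeword(idx):
    bits = [(idx >> k) & 1 for k in (3, 2, 1, 0)]
    d = np.array([1.0 - 2.0 * b for b in bits])
    return d.reshape(2, 2)        # rows: symbol time t; columns: antenna i

rho_c = 0.3
R = np.array([[1.0, rho_c], [rho_c, 1.0]])
code = [0, 7, 9, 14]

gains = {}
for a, b in combinations(code, 2):
    dD = codeword(a) - codeword(b)
    Phi = (dD.T @ dD) * R         # Eq. (2.87): Schur product with R_mm(0)
    gains[(a, b)] = np.linalg.det(Phi)
print(gains)
```

The minimum coding gain over all pairs is 16.00, with the remaining pairs at 30.56, matching the $L_c = 1$ half of Table 2.1.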

2.5.1 OMM <strong>Codes</strong> <strong>for</strong> the Uplink<br />

All the found codes are summarized in Table 2.2. Note that several classes of codes may<br />

satisfy the same OMM criteria, only the best in terms of the union bound at specified ρ c<br />

are listed in the table. <strong>The</strong> rate 1 2 × 2 BPSK codes consist of three equivalence classes,<br />

whose distance spectra are listed in Figure 2.7. <strong>The</strong>ir union bound versus ρ c is plotted<br />

in Figure 2.8.<br />

We observe:<br />

34


Figure 2.6: Union bound versus simulation for the rate 1 OMM 2 × 2 BPSK Class 1 code (SER versus SNR, ρ_c = 0.3).


index  ρ_c  rate  size   constl  OMM code
  1    1    1     2 × 2  BPSK    [1, 7, 8, 14]
  2    1    1     2 × 2  QPSK    [1, 41, 156, 247]
  3    1    1     2 × 2  8PSK    [3, 1314, 2167, 3414]
  4    1    1     2 × 2  16PSK   [8, 14469, 33744, 47965]
  5    1    1     3 × 3  QPSK    [1 84 166 248; 282 335 429 499]
  6    1    1.5   2 × 2  QPSK    [2 42 93 117; 128 168 223 247]
  7    1    2     2 × 2  QPSK    [0 20 42 62; 70 82 107 127; 133 145 173 185; 195 215 236 248]
  8    0.3  1     2 × 2  BPSK    [0, 6, 11, 13]
  9    0.3  1     2 × 2  QPSK    [1, 87, 174, 248]
 10    0.3  1     2 × 2  8PSK    [3, 1314, 2447, 3246]
 11    1    1     2 × 2  16PSK   [8, 14469, 33744, 47965]
 12    0.3  1     3 × 3  QPSK    [0 31 99 124; 421 442 454 473]
 13    0.3  1.5   2 × 2  QPSK    [2 42 93 117; 128 168 223 247]
 14    0.3  2     2 × 2  QPSK    [0 10 37 47; 82 88 119 125; 133 143 160 170; 215 221 242 248]
 15    0.3  1     4 × 2  BPSK    [0 15 51 60; 86 89 101 106; 149 154 166 169; 195 204 240 255]

Table 2.2: OMM codes.


1. As observed by others, the space defined by coding gain is not a metric space, since the triangle inequality does not hold. Figure 2.7 shows that the Class 2 code has four “edges” with coding gain 16 and two “diagonals” with coding gain 64.

2. These codes are metric uniform, i.e., the distance profiles are the same for all codewords. We avoid the term geometrically uniform, as Forney’s definition of a geometrically uniform signal set [GDF91] is predicated upon the presence of a metric space.

3. The coding gain corresponds to the “length” of the shortest branch in the distance spectrum. Two codes with the same coding gain do not necessarily have the same performance, as they might have different distance spectra. This fact is readily observed from the union bound plots in Figure 2.8.

4. Performance is generally a function of ρ_c. From the distance spectra, it is easy to see that Class 1 and Class 3 have performance dependent on ρ_c, while Class 2 does not. This is because Class 2 is an orthogonal code set: it is Alamouti’s code, which is optimal by either the OMM or the OUB standard. In fact, all orthogonal codes [Ala98, TJC99] have coding gain independent of the spreading code correlation, because the codeword difference matrices of these orthogonal codes are still orthogonal.

5. The sensitivity of code performance to ρ_c is governed by the coefficient of ρ_c^2: for Class 1 the coefficient is 16, while for Class 3 it is 64. Thus we expect Class 3 to be more sensitive to ρ_c than Class 1. This conjecture is verified by the union bound plot in Figure 2.8.

6. Class 3 is not an OMM code for all ρ_c, because once ρ_c exceeds √3/2, 64 − 64ρ_c^2 falls below 16.

7. Class 1 outperforms Class 2 for ρ_c < 0.6.

The same sets of rate 1 8PSK and 16PSK codes are optimal for both the spread and non-spread systems. The 16PSK OMM code has several distinguishing properties:

1. It is 0.3dB better than the BPSK OMM code (orthogonal) (see Figure 2.9).


[Figure: each class drawn as a square graph on its four codeword indices; all four “edges” have coding gain 16, while the two “diagonals” have coding gain 32 − 16ρ_c^2 (Class 1), 64 (Class 2) and 64 − 64ρ_c^2 (Class 3).]

Figure 2.7: Distance spectra for rate 1 OMM 2 × 2 BPSK codes.

[Figure: union bound versus ρ_c (0 to 1) at SNR = 6dB for BPSK Classes 1, 2 and 3.]

Figure 2.8: Rate 1 OMM 2 × 2 BPSK block codes at SNR = 6dB.


[Table: for each code, the four codeword indices are listed together with each codeword written explicitly as a 2 × 2 matrix whose entries are powers u_N^n of the constellation root.
8PSK: indices 3, 1314, 2447, 3246.
16PSK: indices 8, 14469, 33744, 47965.]

Table 2.3: Detailed structure of the rate 1 OMM 2 × 2 8PSK and 16PSK codes (u_N^n = e^{i2πn/N}).

[Figure: union bound versus SNR (6–16dB) at ρ_c = 0.3 for the 2 × 2 BPSK and 2 × 2 16PSK codes.]

Figure 2.9: Rate 1 OMM 2 × 2 BPSK and 16PSK codes for ρ_c = 0.3.

2. It is unitary, but it does not form a group under matrix multiplication, nor is it a subset of a group unitary code [SHHS01]. The detailed structure of the OMM 8PSK and 16PSK codes (Table 2.3) suggests that as more degrees of freedom are introduced, the optimal code tends toward a unitary structure. This agrees with the general structure suggested in [MH99].

3. The 16PSK optimal code is actually “correlation free”: it is OMM for all ρ_c!

The rate 1 OMM 3 × 3 BPSK codes are listed in Table 2.4. We make the following observations:

1. There are three equivalence classes which are OMM codes for ρ_c = 0.3, but their multiplicities of the minimum metric in the distance spectra, N_p, differ. The


Class  ρ     ∆_p (ζ_v)        N_p   code index
C1     0.3   122.24 (0.6431)  8     [0, 31, 99, 124, 421, 442, 454, 473]
C2     0.3   122.24 (0.6431)  12    [0, 31, 99, 124, 394, 433, 470, 493]
C3     0.3   122.24 (0.6431)  16    [0, 31, 99, 124, 394, 405, 489, 502]
C4     1.0   64 (0.5774)      22    [1, 84, 166, 248, 282, 335, 429, 499]

Table 2.4: Rate 1 OMM 3 × 3 BPSK codes.

multiplicity of a certain coding gain is the number of codeword pairs which achieve that coding gain. A larger multiplicity at small coding gains results in a higher probability of decoding error, and thus worse performance. At the minimum metric 122.24, Class 1 has multiplicity 8, Class 2 has multiplicity 12, and Class 3 has multiplicity 16. The union bound plot confirms that Class 1 is slightly better than Class 2, and Class 2 slightly better than Class 3.

2. Class 4 is the OMM code for ρ_c = 1.

3. The performance of every class is correlation dependent, as shown in Figure 2.10. Classes 1, 2 and 3 are more sensitive to correlation than Class 4.

4. The union bound versus SNR for Classes 1 and 4 is plotted in Figure 2.11. Class 1 is more sensitive to correlation than Class 4: at ρ_c = 1 it is 1dB worse than Class 4, but at ρ_c = 0.3 it is 1dB better.

5. In Figure 2.11, the best rate 1, 3 × 3 unitary group code listed in [SHHS01], a cyclic group code over the 8PSK constellation achieving diversity product ζ_v = 0.5134, is compared with Class 4, which is not unitary but is a constant envelope modulation, has diversity product ζ_v = 0.5774 and uses only the BPSK constellation.

We now compare the rate 2, 2 × 2 QPSK OMM codes with the orthogonal code.

1. In a non-spread system (ρ_c = 1), the performances of the OMM spread, OMM non-spread and orthogonal codes are compared in Figure 2.12. The orthogonal code is about 0.3dB better than the OMM non-spread code, and both are better than the OMM spread code, which is optimized for spread systems and loses full diversity in a non-spread system. The orthogonal code turns out to have a better distance spectrum than the OMM non-spread code [GVM02], and therefore better performance.



[Figure: union bound versus ρ_c (−0.5 to 1) at SNR = 6dB for the 3 × 3 BPSK Classes 1–4.]

Figure 2.10: Rate 1 OMM 3 × 3 BPSK codes at SNR = 6dB.

[Figure: union bound versus SNR (0–14dB) for the 3 × 3 unitary group code (ρ_c = 1), Class 1 (ρ_c = 1 and ρ_c = 0.3) and Class 4 (ρ_c = 1).]

Figure 2.11: Rate 1 OMM 3 × 3 BPSK codes.


2. In a spread system (ρ_c = 0.3), the performances of the OMM spread, OMM non-spread and orthogonal codes are compared in Figure 2.13. The OMM spread code is about 0.3dB better than the OMM non-spread code, and 0.6dB better than the orthogonal code.

3. The performance of the OMM spread and orthogonal codes is compared in the following setups with Gold sequences of length L_u = 31:

(a) (Figure 2.14) Single user K = 1, flat fading L_c = 1. Because the Gold sequences have cross-correlation even smaller than 0.3, the OMM spread code is 0.7dB better than the orthogonal code.

(b) (Figure 2.15) Single user K = 1, multipath fading channel L_c = 2 with delay profile (defined in Equations (2.4) and (2.5))

    Γ = [0 5; 0 5].

The OMM spread code is about 1.2dB better than the orthogonal code; the gain is nearly double that in the flat fading channel.

(c) (Figure 2.15) Two users K = 2, multipath fading channel L_c = 2 with delay profile

    Γ = [0 5 7 10; 0 5 7 10].

The OMM spread code is still about 1.2dB better than the orthogonal code. The two user system is about 1.2dB worse than the single user system.

(d) (Figure 2.16) Sequence decoder M = 4, two users K = 2, multipath fading channel L_c = 2 with delay profile

    Γ = [0 5 7 10; 0 5 7 10].

The OMM spread code is about 0.7dB better than the orthogonal code.



[Figure: union bound versus SNR (6–16dB) for the OMM spread, OMM non-spread and orthogonal codes.]

Figure 2.12: Rate 2 OMM 2 × 2 QPSK codes at ρ_c = 1.0.

[Figure: union bound versus SNR (6–16dB) for the OMM spread, OMM non-spread and orthogonal codes.]

Figure 2.13: Rate 2 OMM 2 × 2 QPSK codes at ρ_c = 0.3.


[Figure: union bound versus SNR (6–16dB) for the OMM spread and orthogonal codes.]

Figure 2.14: Rate 2 OMM 2 × 2 QPSK codes, Gold sequence, single user, flat fading.

[Figure: union bound versus SNR (6–16dB) for the OMM spread and orthogonal codes with K = 1 and K = 2.]

Figure 2.15: Rate 2 OMM 2 × 2 QPSK codes, Gold sequence, multipath fading Γ = [0, 5; 0, 5].


[Figure: SER versus SNR (0–12dB) for the OMM spread and orthogonal codes.]

Figure 2.16: Sequence decoder M = 4, rate 2 OMM 2 × 2 QPSK codes, Gold sequence, multipath fading Γ = [0, 5, 7, 10; 0, 5, 7, 10].

For ρ_c = 0.3, the rate 1 OMM 4 × 2 BPSK code is also found. In Figure 2.17, its union bound shows about a 0.4dB gain over that of two consecutive rate 1 OMM 2 × 2 BPSK Class 1 codes.

2.5.2 OMM Codes for the Downlink

As pointed out in Section 2.3.1, the optimal approach for the downlink is a joint multiuser code design. Recall that we considered two cases in Sections 2.3.2 and 2.3.3. First, if the MAI in the correlated codeword difference matrix in (2.68) is ignored, the resulting expression is identical to the uplink single user case, and the code design problem is thus equivalent to the single user problem. Second, approximating the MAI as Gaussian noise also yields an equivalent single user code design problem with the modified


[Figure: union bound versus SNR (0–14dB) at ρ_c = 0.3 for the rate 1 4 × 2 code and two rate 1 2 × 2 codes.]

Figure 2.17: Rate 1 OMM 4 × 2 BPSK codes for ρ_c = 0.3.

correlation matrix in (2.82). To illustrate this equivalence, we present an example. If equicorrelated spreading codes are used for K users, the equivalent Ř is

    Ř = [a, b; b, a] / (2σ_t(K − 1)ρ^2(1 − ρ) + 1 − ρ^2),   where   (2.89)

    a = σ_t(K − 1)ρ^4 − 2σ_t(K − 1)ρ^3 + σ_t(K − 1)ρ^2 − ρ^2 + 1,   (2.90)

    b = −σ_t(K − 1)ρ^4 + 2σ_t(K − 1)ρ^3 − ρ^3 − σ_t(K − 1)ρ^2 + ρ.   (2.91)

This is equivalent to the single user case with correlation ρ_e = b/a. In Figure 2.21, we plot ρ_e as a function of σ_t and ρ. As seen from the plot, for a large range of σ_t ∈ (0dB, 15dB) and ρ ∈ (−0.2, 1), ρ_e roughly lies within (−0.6, 0.6), in which case the Class 1 codes in Figure 2.7 are the best OMM rate 1 BPSK codes. Observe that ρ_e is not a monotonic function of either ρ or the SNR. As either K or σ_t goes to ∞, Ř approaches

    lim_{σ_t→∞} Ř = lim_{K→∞} Ř = ((1 − ρ)/2) [1, −1; −1, 1],   (2.92)



where Class 2 in Figure 2.7 becomes the best OMM code. We note that the MAI approximation manifests in two ways: Ř has a modified correlation ρ_c, and there is a reduction in effective SNR, captured by the constant γ in Ř = γR̃, where R̃ is normalized such that R̃_ii = 1.
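The equivalent correlation ρ_e = b/a is simple to evaluate numerically. The sketch below is our own illustration; σ_t is assumed to be the linear-scale (not dB) SNR.

```python
def rho_e(rho, K, sigma_t):
    # a and b follow (2.90)-(2.91); sigma_t is the linear-scale SNR.
    c = sigma_t * (K - 1)
    a = c * rho**4 - 2 * c * rho**3 + c * rho**2 - rho**2 + 1
    b = -c * rho**4 + 2 * c * rho**3 - rho**3 - c * rho**2 + rho
    return b / a

# As sigma_t (or K) grows, rho_e tends to -1, matching the limit in (2.92).
print(rho_e(0.5, K=2, sigma_t=1e9))
```

For K = 1, c = 0 gives a = 1 − ρ^2 and b = ρ(1 − ρ^2), so ρ_e = ρ, as expected when there is no interfering user.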

For ρ_c = 0.3, if distinct spreading codes are employed for the different transmit antennae of the same user, the multiuser rate 1 OMM 2 × 2 BPSK code for two users is

    C_1 = [3, 20, 40, 63, 78, 89, 101, 114, 141, 154, 166, 177, 192, 215, 235, 252];

its distance spectrum shows that it is metric uniform. If identical spreading codes are employed for the different transmit antennae of the same user, the OMM code for two users is

    C_2 = [5, 18, 46, 57, 75, 92, 96, 119, 136, 159, 163, 180, 198, 209, 237, 250].

Figure 2.18 compares the performance of four sets of codes in the downlink with optimal joint ML decoding: the non-spread (single user) OMM code (Alamouti’s code, Class 2 in Figure 2.7), the spread single user OMM code (Class 1 in Figure 2.7), the spread downlink two user OMM code with the same spreading (C_2), and the spread downlink two user OMM code with distinct spreading (C_1). Performance improves by about 0.2dB with each successive code set. We also consider the use of the single user optimal code (Class 1 in Figure 2.7) with various ML-based decoders in Figures 2.19 and 2.20. Three decoders are investigated: the joint optimal decoder, the single user decoder ignoring MAI, and the single user decoder approximating MAI as Gaussian noise. In Figure 2.19, we assume the two users have equal power, i.e., the Near-Far Ratio (NFR) equals 0dB. The joint ML decoder gives the best performance (about a 1dB gain over the single user decoder ignoring MAI), and the single user decoder approximating MAI performs slightly better than the single user decoder ignoring MAI. If we increase user 2’s signal power to be 4dB larger than user 1’s, i.e., NFR = 4dB, we obtain the performance shown in Figure 2.20. The joint ML decoder and the single user decoder approximating MAI are about 4dB and 2dB better, respectively, than the single user decoder ignoring MAI. Thus we have the classic complexity and performance tradeoff: the joint designs offer superior performance at the expense of a more complex decoder.



[Figure: SER versus SNR (0–10dB), joint ML decoder, ρ = 0.3; simulated SER and union bounds for the joint OMM (distinct spreading), single OMM (distinct spreading), Alamouti (distinct spreading) and joint OMM (same spreading) codes.]

Figure 2.18: Multiuser rate 1 OMM 2 × 2 BPSK codes.

[Figure: SER versus SNR (0–10dB), ρ = 0.3, for the joint ML decoder and the single user ML decoders approximating and ignoring MAI.]

Figure 2.19: Decoder performance comparison with no near-far effect.



[Figure: SER versus SNR (0–10dB), single user OMM code, ρ_c = 0.3, NFR = 4dB, for the joint ML decoder and the single user ML decoders approximating and ignoring MAI.]

Figure 2.20: Decoder performance comparison with near-far effect.

[Figure: surface plot of ρ_e over SNR ∈ (0dB, 15dB) and ρ_c ∈ (−0.5, 1).]

Figure 2.21: Equivalent ρ_e as a function of SNR and ρ_c.


2.6 Practical Considerations

In this section, we consider practical issues that can affect code design. We present some invariant operations which yield equivalent codes, along with their application to faster code searches. We also observe that some codes have performance independent of the spreading code correlation; this phenomenon is analyzed in some detail. Finally, some suboptimal ways to construct space-time block codes are discussed.

2.6.1 Isometries

Isometries are defined as “distance preserving” transformations, where the “distance” can be the diversity gain ∆_H in (2.28) or the coding gain in (2.29). For any function γ(·) on codewords D, if

    ∆_p(γ(D_i) − γ(D_j)) = ∆_p(D_i − D_j),   (2.93)

then γ(·) is an isometry of the coding gain ∆_p. Isometries of the diversity gain ∆_H are defined likewise.

If we restrict attention to the flat fading quasi-static channel, where Φ_mm = ∆D^H ∆D ⊙ R_mm(0), the following isometries of ∆_H and ∆_p for spread systems can be identified (for a discussion of isometries for non-spread systems, please refer to Section 4.2):

1. φ(D) = UD, where U is an arbitrary unitary matrix, U^H U = I.

2. φ(D) = DP, where
   • P is a unitary permutation matrix: P^H P = I, and each row and column of P has only one non-zero element;
   • the spreading codes are equi-correlated.

3. φ(D) = [d*_ti], the componentwise complex conjugate, when
   • the spreading codes are real.

The right isometry P generally does not apply to spread systems [Gen00], where the spreading codes have unequal correlations, but it does hold for the academic equi-correlated spreading code case. Detailed proofs of the above isometries for spread systems are omitted. We call two codes equivalent if they can be related by the above isometries.



The set of equivalent codes forms an equivalence class. Since all codes in an equivalence class achieve the same performance in terms of coding gain and diversity gain, we need only investigate one member of each equivalence class.

In the search process, once one member of an equivalence class is found, all other members can be ignored. This fact greatly reduces the number of candidate codes. A fundamental set is a smallest set of codes such that every possible code is equivalent to one code of this set. It is usually very hard to identify the fundamental set, but relatively easy to identify a larger set which contains it. In our case, we can always use these isometries to force one codeword to have entries of value 1 on the top row and left column, i.e.,

    [ 1 1 ... 1 ]
    [ 1 * ... * ]
    [ :    .  : ]
    [ 1 * ... * ]

For example, to search for 2 × 2 rate 1 BPSK codes, we need to search over four 2 × 2 codewords, each entry of which has two possible values, so the total number of possible combinations is 2^16. By forcing one codeword to have the above structure, the number of cases to be searched is reduced to 2^13. For constellation size N_cstl and block size N_c × L_t, this reduces the candidate set by a factor of N_cstl^{N_c + L_t − 1}.
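The reduction can be exercised directly in a brute-force search. The sketch below is our own illustration, not the search code used for this work: it fixes the first codeword's top row and left column to 1 (leaving the bottom-right entry free) and scores the remaining 2^13 candidate rate 1, 2 × 2 BPSK codes by their minimum coding gain.

```python
import itertools

import numpy as np

def min_coding_gain(code, rho_c):
    # Smallest det((dD^T dD) ⊙ R) over all codeword pairs.
    R = np.array([[1.0, rho_c], [rho_c, 1.0]])
    return min(np.linalg.det(((A - B).T @ (A - B)) * R).real
               for A, B in itertools.combinations(code, 2))

def search_omm_2x2_bpsk(rho_c):
    # Canonical first codeword: 1s on the top row and left column; one free
    # entry plus three free codewords gives 2**13 candidates, not 2**16.
    best_gain, best_code = -np.inf, None
    for bits in itertools.product((1.0, -1.0), repeat=13):
        fixed = np.array([[1.0, 1.0], [1.0, bits[0]]])
        rest = [np.array(bits[1 + 4 * k: 5 + 4 * k]).reshape(2, 2)
                for k in range(3)]
        gain = min_coding_gain([fixed] + rest, rho_c)
        if gain > best_gain:
            best_gain, best_code = gain, [fixed] + rest
    return best_gain, best_code
```

At ρ_c = 0.3 this search returns a minimum coding gain of 16, consistent with the Class 1 to Class 3 distance spectra of Figure 2.7.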

Another use of these isometries is to find all the equivalent codes. Due to the symmetry of the codes, a subset of these invariant operations is sufficient to generate all the equivalent codes. For example, the equivalence class of the optimal minimum metric rate 1 BPSK 2 × 2 Class 1 code contains 8 equivalent codes, which can be generated from any one of them by a left multiplier drawn from the dihedral group D_8:

    [1 0; 0 1], [−1 0; 0 1], [1 0; 0 −1], [−1 0; 0 −1],
    [0 1; 1 0], [0 −1; 1 0], [0 1; −1 0], [0 −1; −1 0].
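The eight multipliers above are the 2 × 2 signed permutation matrices, each unitary, so left multiplication is an isometry. A short sketch (our own naming) generates the equivalence class and confirms that the minimum coding gain is preserved.

```python
import itertools

import numpy as np

# Dihedral group D8 realized as the eight 2x2 signed permutation matrices.
D8 = [np.array([[s1, 0.0], [0.0, s2]]) for s1 in (1, -1) for s2 in (1, -1)] \
   + [np.array([[0.0, s1], [s2, 0.0]]) for s1 in (1, -1) for s2 in (1, -1)]

def equivalent_codes(code):
    # phi(D) = U D is an isometry for any unitary U, so each left
    # multiplier yields an equivalent code.
    return [[U @ D for D in code] for U in D8]

def min_coding_gain(code, rho_c):
    R = np.array([[1.0, rho_c], [rho_c, 1.0]])
    return min(np.linalg.det(((A - B).T @ (A - B)) * R).real
               for A, B in itertools.combinations(code, 2))
```

Since (UΔD)^H(UΔD) = (ΔD)^H(ΔD), the matrix Φ, and hence both the diversity gain and the coding gain, are untouched by every element of the list.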



2.6.2 Sensitivity to Correlation

Usually the performance of the resulting codes is a function of the spreading code correlation, but this is not true for all code sets. It can be desirable for a code to be insensitive to correlation, i.e., to maintain its optimality or relatively good performance for all correlation values, especially in multipath fading channels, where low-valued correlation functions for the spreading codes do not always exist. We observe that among our resulting optimal codes, the 2 × 2 BPSK Class 2 code (Alamouti’s code [Ala98]), which is an orthogonal code, and the 2 × 2 16PSK code, which is a unitary code, have performance independent of the spreading code correlation; we shall call such codes “correlation free”. Unitary codes [HM00, SHHS01, Hug00, HS00] U satisfy UU^H = I; orthogonal codes [Ala98, TJC99] are unitary codes with additional structure. The 2 × 2 orthogonal codes have the following structure:

    [x y; y* −x*],

where x and y are constellation points. Indeed, all orthogonal codes are correlation free, because the codeword difference matrix is still orthogonal. A natural conjecture is that unitary codes are correlation free too, but this is not true! For the two-transmit-antenna case, we investigate sufficient conditions for being correlation free.

Rewriting two codewords D_α and D_β in terms of their column vectors,

    D_α = [α_1 α_2],   D_β = [β_1 β_2],   (2.94)

the correlation free condition is met when the off-diagonal entries of (D_α − D_β)^H (D_α − D_β) are zero, which leads to

    α_1^H α_2 + β_1^H β_2 − α_1^H β_2 − β_1^H α_2 = 0.   (2.95)

Unitary codes force the first two terms in Equation (2.95) to be zero by definition, but do not necessarily cancel the last two terms. An example is

    D_α = [1 1; 1 −1],   D_β = [i 1; −1 −i],

which does not satisfy Equation (2.95) and has

    Φ = [6, (−2 + 2i)ρ_c; (−2 − 2i)ρ_c, 2],   det(Φ) = 12 − 8ρ_c^2.
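Checking condition (2.95) is mechanical: a code is correlation free exactly when every codeword-difference Gram matrix is diagonal, so that the Hadamard product with R cannot introduce a ρ_c dependence. A small sketch (our own naming):

```python
import numpy as np

def is_correlation_free(code, tol=1e-9):
    # Correlation free iff (Da - Db)^H (Da - Db) is diagonal for every
    # codeword pair, i.e. the four terms in (2.95) cancel.
    for i in range(len(code)):
        for j in range(i + 1, len(code)):
            dD = code[i] - code[j]
            G = dD.conj().T @ dD
            if np.abs(G - np.diag(np.diag(G))).max() > tol:
                return False
    return True

# The unitary (but not orthogonal) pair from the text fails the test:
Da = np.array([[1, 1], [1, -1]], dtype=complex)
Db = np.array([[1j, 1], [-1, -1j]], dtype=complex)
print(is_correlation_free([Da, Db]))  # -> False
```

Running the same check on any Alamouti-structured pair returns True, since the difference of two orthogonal codewords has orthogonal columns.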

Furthermore, unitary group codes are not always correlation free. An example whose performance depends on the correlation is G_{21,4}, given in [SHHS01]. Two elements of this fixed-point-free group are

    A = [η 0 0; 0 η^4 0; 0 0 η^16],   B = [0 1 0; 0 0 1; η^7 0 0],   (2.96)

where η = e^{2πj/21}. These two codewords have

    Φ = [2, −η^{−1}ρ_c, −η^9 ρ_c; −ηρ_c, 2, −η^{−4}ρ_c; −η^{−9}ρ_c, −η^4 ρ_c, 2],   det(Φ) = 8 − 6ρ_c^2 + ρ_c^3.

Interestingly, we have checked all of the smallest examples of 2 × 2 irreducible fixed-point-free unitary group codes listed in [SHHS01], i.e., G_{6,−1}, D_{4,1,−1}, E_{3,1}, F_{3,1,−1} and J_{1,1}, and found that all of them are correlation free. It appears that the group structure implicitly forces these unitary codes to satisfy the correlation free condition in the 2 × 2 case, but not in the 3 × 3 case.

2.6.3 Suboptimal Codes

Most space-time block codes used in DS-CDMA systems are adopted from non-spread codes. By forcing structure onto space-time code sets (as in BLAST [Fos96] and layered space-time codes [GARH01]), the code design or the received signal processing can



                  Structure 1              Structure 2              Structure 3
space-time code   [c̃_1 c̃_2; c̃_2 c̃_1]   [c̃_1 c̃_1; c̃_2 c̃_2]   [c̃_1 c̃_2; c̃_1 c̃_2]
Φ                 δ_min^2 [1 0; 0 1]       δ_min^2 [1 ρ_c; ρ_c 1]   δ_min^2 [2 0; 0 0]
diversity gain    2                        2                        1
coding gain       δ_min^4                  δ_min^4 (1 − ρ_c^2)      2δ_min^2

Table 2.5: Three space-time code structures and their performance comparison.

be facilitated at the expense of a performance loss. Based on our previous analysis, the performance of these codes is easily analyzed as a function of the spreading code correlation.

We consider the simplest 2 × 2 case. Two symbols from the nPSK constellation, c_1 and c_2, are mapped onto three space-time structures; their key matrices Φ, diversity gains and coding gains are delineated in Table 2.5. The smallest Euclidean distance of the nPSK constellation is denoted by δ_min. In the mapping, c̃_i, i = 1, 2 can be any element of the set {±c_i, ±c_i^*}, and the choice of c̃_i does not affect the following discussion.

In Table 2.5, Structure 1 is the diagonal mapping, Structure 2 the horizontal mapping and Structure 3 the vertical mapping. Structures 1 and 2 achieve the full diversity of 2, while Structure 3 achieves only one level of diversity, because the same symbol is transmitted on the same antenna and thus experiences the same fading. Structure 1 is better than Structure 2 because its coding gain is independent of the spreading code correlation. Alamouti’s code belongs to Structure 1. This discussion generalizes easily to space-time block codes of arbitrary size; the conclusion is that the same symbol should be repeated at different transmit antennae and different time slots, because repeating on the same antenna incurs a diversity loss and repeating in the same time slot suffers a correlation loss. This explains why diagonal BLAST outperforms vertical BLAST [Fos96] and horizontal BLAST.
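The Table 2.5 entries can be verified numerically. The sketch below is our own illustration and evaluates only the worst-case codeword difference, where one symbol pair is at distance δ_min and the other pair is identical; diversity is computed as the rank of Φ and the coding gain as the product of its nonzero eigenvalues.

```python
import numpy as np

def diversity_and_gain(dD, rho_c, tol=1e-9):
    # Phi = (dD^H dD) ⊙ R; diversity = rank(Phi), coding gain = product
    # of the nonzero eigenvalues of Phi (assumed model).
    R = np.array([[1.0, rho_c], [rho_c, 1.0]])
    Phi = (np.conj(dD.T) @ dD) * R
    ev = np.linalg.eigvalsh(Phi)
    nz = ev[np.abs(ev) > tol]
    return len(nz), float(np.prod(nz))

d = np.sqrt(2.0)                          # delta_min for QPSK
diag = np.array([[d, 0.0], [0.0, d]])     # Structure 1 worst-case dD
horiz = np.array([[d, d], [0.0, 0.0]])    # Structure 2 worst-case dD
vert = np.array([[d, 0.0], [d, 0.0]])     # Structure 3 worst-case dD

for dD in (diag, horiz, vert):
    print(diversity_and_gain(dD, rho_c=0.3))
```

The three outputs reproduce the table: diversity 2 with gain δ_min^4, diversity 2 with gain δ_min^4(1 − ρ_c^2), and diversity 1 with gain 2δ_min^2.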

2.7 Conclusions

In this chapter, we develop a general framework for space-time block code design in multipath fading channels for both non-spread and spread systems. This framework is facilitated by a general interpretation of the notion of a “spreading code”. From the Chernoff bound on the pairwise error probability for a maximum likelihood sequence



decoder, the classical space-time code design criteria, diversity gain and coding gain, are determined for quasi-static fading channels. Optimizing the sequence decoder reduces to the code design of a single block, which is complicated for spread systems by the interaction of the space-time code and the spreading code. We investigated the effects of multipath and spreading on the diversity level and the code designs. We found the exact diversity level in multipath fading channels for both spread and non-spread systems, and also demonstrated that under small delay assumptions, good space-time codes for flat fading channels tend to perform well even in a multipath fading environment. In the uplink, the multiuser code design decouples into multiple single user code design problems; in the downlink a true multiuser coding problem exists, and we provided various approaches to approximate the downlink case as a single user problem. Due to the high dimensionality of the computer search, only some small dimensional codes are presented here, but they exhibit solid to substantial performance gains over known space-time codes. The OMM codes found for two transmit antennae yield a 0.5dB gain over known comparable unitary group codes when 16PSK signals are used; the non-spread optimal codes for three transmit antennae yield a 4dB gain over known unitary group codes, and the spread optimal codes exhibit another 1dB gain over the optimal non-spread code. In the downlink, the multiuser OMM code for two users shows a 0.5dB gain over the single user OMM code. Our results suggest that significant performance gains over some existing systematic designs are possible, prompting the search for alternative systematic designs [GM03a].



Chapter 3

On Suboptimal Linear Decoders for Space-Time Block Codes

3.1 Introduction

Throughout Chapter 2, only the optimal Maximum Likelihood (ML) decoder is considered. When the numbers of active users and space-time block codes (STBC) are large, the ML decoder is impractical due to its decoding complexity. In this chapter we consider several suboptimal linear decoders for STBC.

The suboptimal decoders considered in this chapter are structurally similar to those of [NSTC98]: equalization and decoding are separated (see Figure 3.1), and a two-stage decoding algorithm is proposed. In the first stage, a linear equalizer suppresses MAI and spatial inter-symbol interference. Subsequently, a mapper maps the linear filter soft outputs to a valid STBC (see Figure 3.2).

[Figure: decoder classification — all decoders split into combined equalization and decoding (optimal mapper) versus separate equalization and decoding (linear equalization, suboptimal mapper)]

Figure 3.1: Decoder Classification.


[Figure: receiver chain — matched filter bank → linear filter → mapper to STBC]

Figure 3.2: Decoder Structure.

3.2 Uplink Signal Model for Spread Systems

To make the equalizer nontrivial, the number of observations must be greater than or equal to the number of estimated signals. We set up our system under this constraint. For a spread system, due to our assumption that distinct spreading codes are employed at different transmit antennae, one receive antenna is sufficient; for a non-spread system, the number of receive antennae must be no less than the number of transmit antennae.

We consider a general framework for space-time block codes in the uplink of spread systems. The transmitter has the same structure as Figure 2.1 in Section 2.2.1. The receiver block diagram is shown in Figure 3.2. Synchronous transmission is assumed, and perfect channel state information is assumed available at the receiver but not at the transmitter. A matched filter bank, which is a set of filters matched to the spreading waveform at each transmit antenna, is constructed. The output of the matched filter bank is a set of sufficient statistics for the decoding problem. A two stage decoding scheme is employed. The first stage corresponds to a linear equalizer which forms a soft estimate d̃ of the transmitted codeword d. The second stage maps the soft estimate to a valid codeword, which is then employed to determine an estimate of the information sequence, [Î_k(1), Î_k(2), . . . , Î_k(k_c)].

The signal received by receive antenna r at time n can be written as

y_r(n) = √σ_t R_r(n) H_r(n) d(n) + m_r(n) ∈ C^{L_t K × 1},    (3.1)

where σ_t is the signal-to-noise ratio (SNR) normalized by L_t to keep the total transmit power constant, and

S_r(n) = [s^1_1(n), . . . , s^1_{L_t}(n), . . . . . . , s^K_1(n), . . . , s^K_{L_t}(n)] ∈ R^{L_u × L_t K}    (3.2)
R_r(n) = S_r(n)^T S_r(n) ∈ R^{L_t K × L_t K}    (3.3)
H_r(n) = diag(h^1_{1r}(n), . . . , h^1_{L_t r}(n), . . . . . . , h^K_{1r}(n), . . . , h^K_{L_t r}(n)) ∈ C^{L_t K × L_t K}    (3.4)
d(n) = [d^1_1(n), . . . , d^1_{L_t}(n), . . . , d^K_1(n), . . . , d^K_{L_t}(n)]^T ∈ C^{L_t K × 1}    (3.5)
m_r(n) = [m^1_{1r}(n), . . . , m^1_{L_t r}(n), . . . , m^K_{1r}(n), . . . , m^K_{L_t r}(n)]^T ∈ C^{L_t K × 1},    (3.6)

where s^k_t(n) is the spreading code used at transmit antenna t at time n for user k; R_r(n) is the spreading code correlation matrix for receive antenna r at time n¹; h^k_{tr}(n) ∼ CN(0, 1)² is the channel coefficient between transmit antenna t and receive antenna r for user k at time n; d^k_t(n) is the symbol transmitted from antenna t for user k at time n; and m^k_{tr}(n) is complex Gaussian noise for receive antenna r at time n, with m_r(n) ∼ CN(0, R_r(n)).

Concatenating y_r(n), n = 1, . . . , N_c into a larger vector y_r, we get

y_r = [y_r(1)^T, . . . , y_r(N_c)^T]^T    (3.7)
    = √σ_t diag(R_r(1), . . . , R_r(N_c)) diag(H_r(1), . . . , H_r(N_c)) [d(1)^T, . . . , d(N_c)^T]^T + [m_r(1)^T, . . . , m_r(N_c)^T]^T    (3.8)
    = √σ_t R_r H_r d + m_r ∈ C^{L_t K N_c × 1}.    (3.9)

¹Note that in a synchronous flat fading channel, R_1(n) = · · · = R_{L_r}(n); we keep the subscript r simply to help keep track of the matrix size. Asynchronism and multipath could result in different effective code correlation matrices between each transmit and receive antenna pair.

²We use CN(m, K) to denote a circularly symmetric complex Gaussian random vector with mean m and covariance matrix K.


Further concatenating y_r, r = 1, . . . , L_r over the receive antennae into a super vector y, we have the overall received signal,

y = [y_1^T, . . . , y_{L_r}^T]^T    (3.10)
  = √σ_t diag(R_1, . . . , R_{L_r}) [H_1^T, . . . , H_{L_r}^T]^T d + [m_1^T, . . . , m_{L_r}^T]^T    (3.11)
  = √σ_t R H d + m ∈ C^{L_t K N_c L_r × 1},    (3.12)

where R ∈ R^{L_t K N_c L_r × L_t K N_c L_r} is the spreading code correlation matrix, H ∈ C^{L_t K N_c L_r × L_t K N_c} is the channel coefficient matrix, and d ∈ C^{L_t K N_c × 1} is the codeword vector.
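The stacked model in Equation (3.12) can be assembled mechanically from the per-antenna, per-symbol quantities. The following sketch (NumPy; the system sizes and random ±1 spreading codes are hypothetical choices, not taken from the dissertation) builds R, H, d, and a received vector y for a small spread uplink:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small illustrative spread uplink (hypothetical sizes): K users, Lt transmit /
# Lr receive antennae, Nc symbol times, spreading gain Lu, SNR sigma_t.
K, Lt, Lr, Nc, Lu = 2, 2, 2, 2, 8
sigma_t = 4.0

def block_diag(blocks):
    """Place the given blocks on the diagonal of a zero matrix."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols), dtype=complex)
    i = j = 0
    for b in blocks:
        out[i:i + b.shape[0], j:j + b.shape[1]] = b
        i, j = i + b.shape[0], j + b.shape[1]
    return out

d = rng.choice([-1.0, 1.0], size=Lt * K * Nc).astype(complex)  # BPSK codeword vector

R_r_list, H_r_list, m_r_list = [], [], []
for r in range(Lr):
    R_n, H_n, m_n = [], [], []
    for n in range(Nc):
        S = rng.choice([-1.0, 1.0], size=(Lu, Lt * K)) / np.sqrt(Lu)  # unit-energy codes
        h = (rng.standard_normal(Lt * K) + 1j * rng.standard_normal(Lt * K)) / np.sqrt(2)
        w = (rng.standard_normal(Lu) + 1j * rng.standard_normal(Lu)) / np.sqrt(2)
        R_n.append(S.T @ S)          # R_r(n), Eq. (3.3)
        H_n.append(np.diag(h))       # H_r(n), Eq. (3.4)
        m_n.append(S.T @ w)          # m_r(n) ~ CN(0, R_r(n))
    R_r_list.append(block_diag(R_n))
    H_r_list.append(block_diag(H_n))
    m_r_list.append(np.concatenate(m_n))

R = block_diag(R_r_list)                  # block-diagonal over antennae, Eq. (3.11)
H = np.vstack(H_r_list)                   # stacked channel matrix
m = np.concatenate(m_r_list)
y = np.sqrt(sigma_t) * (R @ H @ d) + m    # Eq. (3.12)
```

Drawing the noise as m_r(n) = S^T w with w ∼ CN(0, I) gives it the required covariance S^T S = R_r(n) directly.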

3.3 Linear Equalizer

By Equation (3.12), d is the codeword vector to be estimated and y is the observed signal vector. The linear equalizer applies a matrix M to approximate the true data vector d by

d̃ = My = Ad + m̃,    (3.13)

with M chosen according to various criteria. The equalizer suppresses MAI and spatial intersymbol interference. We focus on linear equalizers and evaluate their performance (in terms of achievable diversity levels).

The Best Linear Unbiased Estimator (BLUE), the Minimum Variance Unbiased (MVUB) estimator, and the Maximum Likelihood (ML) estimator have the same form (due to the Gaussian noise vector m),

d̃_BLUE = (1/√σ_t) [H^H R H]^{-1} H^H y.    (3.14)

Note that for one receive antenna (L_r = 1) the random channel matrix H is diagonal; if we assume H is invertible for most channel realizations, the BLUE estimator is essentially a decorrelator. Denoting ‖x‖²_R ≜ x^H R^{-1} x and ‖x‖² ≜ x^H x, the Weighted Least Squares (WLS) estimator finds the linear equalizer M that minimizes the weighted squared error

J_WLS = ‖My − d‖²_R.    (3.15)

The WLS estimator turns out to be the same as the BLUE estimator. It is easy to show that E[d̃_BLUE] = d, hence the BLUE estimator is unbiased. The residual error m̃ has covariance matrix

R̃_BLUE = [σ_t H^H R H]^{-1}.    (3.16)

The Least Squares (LS) estimator finds the linear equalizer M that minimizes the squared error

J_LS = ‖My − d‖²    (3.17)

and turns out to be

d̃_LS = (1/√σ_t) [H^H R² H]^{-1} H^H R y.    (3.18)

Again it is easy to show that the LS estimator is also unbiased and that its residual error has covariance matrix

R̃_LS = (1/σ_t) [H^H R² H]^{-1} [H^H R³ H] [H^H R² H]^{-1}.    (3.19)
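As a quick sanity check on the two estimators, the sketch below (synthetic sizes and a synthetic SPD correlation matrix; illustrative assumptions, not the dissertation's simulation setup) verifies that both equalizers satisfy M(√σ_t R H) = I, i.e., A = I in Equation (3.13) so both are unbiased, and that the LS residual covariance (3.19) is never smaller in trace than the BLUE one (3.16):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny single-receive-antenna example: H is diagonal, R is a synthetic SPD
# stand-in for the spreading-code correlation matrix (hypothetical values).
n, sigma_t = 4, 10.0
A0 = rng.standard_normal((n, n))
R = A0 @ A0.T + n * np.eye(n)
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
H = np.diag(h)
HH = H.conj().T

M_blue = np.linalg.solve(HH @ R @ H, HH) / np.sqrt(sigma_t)            # (3.14)
M_ls = np.linalg.solve(HH @ R @ R @ H, HH @ R) / np.sqrt(sigma_t)      # (3.18)

G = np.sqrt(sigma_t) * R @ H     # the "signal part" of y in (3.12)
print(np.allclose(M_blue @ G, np.eye(n)), np.allclose(M_ls @ G, np.eye(n)))

# Residual covariances (3.16) and (3.19); BLUE is minimum-variance among
# unbiased linear estimators, so its trace cannot exceed that of LS.
Rt_blue = np.linalg.inv(sigma_t * HH @ R @ H)
T = np.linalg.inv(HH @ R @ R @ H)
Rt_ls = (T @ (HH @ R @ R @ R @ H) @ T) / sigma_t
print(np.trace(Rt_ls).real >= np.trace(Rt_blue).real)
```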

Assuming the codeword vector d has zero mean and correlation E[dd^H] = R_d, and further that the transmitted code vector d and the noise m are independent, we can determine the following statistics:

E[d y^H] = √σ_t R_d H^H R    (3.20)
E[y y^H] = √σ_t R H R_d (√σ_t R H)^H + R.    (3.21)

The Linear Minimum Mean Squared Error (LMMSE) estimator finds a linear equalizer M that minimizes the Mean Squared Error (MSE) between My and the transmitted codeword vector d,

J_LMMSE = E[‖My − d‖²].    (3.22)

The resultant LMMSE estimator has the following form,

d̃_LMMSE = E[d y^H] E[y y^H]^{-1} y    (3.23)
        = √σ_t R_d H^H R [σ_t R H R_d H^H R + R]^{-1} y    (3.24)
        = √σ_t R_d [I − σ_t H^H R H R_d (I + σ_t H^H R H R_d)^{-1}] H^H y.    (3.25)

The matrix inversion lemma³ is invoked in Equation (3.25) to reduce the dimension of the matrix inversion from K L_r L_t N_c in Equation (3.24) to K L_t N_c in Equation (3.25). For more than one receive antenna, Equation (3.25) has lower complexity.

The LMMSE estimator is biased. If R_d is nonsingular, another form of the LMMSE estimator is

d̃_LMMSE = √σ_t [σ_t H^H R H + R_d^{-1}]^{-1} H^H y, for nonsingular R_d.    (3.26)

In this case, it is easy to see that the LMMSE estimator converges to the BLUE estimator and is efficient,

lim_{σ_t→∞} d̃_LMMSE = d̃_BLUE.    (3.27)

This is not true for singular R_d. The residual error has covariance matrix

R̃_LMMSE = σ_t R_d H^H [I + σ_t R H R_d H^H]^{-1} R [I + σ_t H R_d H^H R]^{-1} H R_d;    (3.28)

when R_d is nonsingular,

R̃_LMMSE = [R_d^{-1} + σ_t H^H R H]^{-1} σ_t H^H R H [R_d^{-1} + σ_t H^H R H]^{-1}, for nonsingular R_d,    (3.29)

thus it is easy to see that

lim_{σ_t→∞} R̃_LMMSE = R̃_BLUE.    (3.30)

³(A − C B^{-1} D)^{-1} = A^{-1} + A^{-1} C (B − D A^{-1} C)^{-1} D A^{-1}.
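The three LMMSE forms are algebraically equivalent when R_d is nonsingular; the sketch below (synthetic, well-conditioned matrices of hypothetical size) checks Equations (3.24), (3.25), and (3.26) against each other numerically:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic n x n stand-ins for R (SPD code correlation), Rd (nonsingular
# codeword correlation) and a diagonal channel H (hypothetical values).
n, sigma_t = 4, 5.0
A0 = rng.standard_normal((n, n))
R = A0 @ A0.T + n * np.eye(n)
B0 = rng.standard_normal((n, n))
Rd = B0 @ B0.T + np.eye(n)
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
H = np.diag(h)
HH = H.conj().T
I = np.eye(n)

# Direct form (3.24), matrix-inversion-lemma form (3.25), compact form (3.26).
M24 = np.sqrt(sigma_t) * Rd @ HH @ R @ np.linalg.inv(sigma_t * R @ H @ Rd @ HH @ R + R)
X = sigma_t * HH @ R @ H @ Rd
M25 = np.sqrt(sigma_t) * Rd @ (I - X @ np.linalg.inv(I + X)) @ HH
M26 = np.sqrt(sigma_t) * np.linalg.inv(sigma_t * HH @ R @ H + np.linalg.inv(Rd)) @ HH

print(np.allclose(M24, M25), np.allclose(M25, M26))  # all three forms agree
```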


3.4 The Mapper

Recall that the output of the linear equalizer, d̃ = Ad + m̃, is the soft estimate of d. We have

A_BLUE = I_{K L_t N_c}    (3.31)
A_LS = I_{K L_t N_c}    (3.32)
A_LMMSE = R_d [I_{K L_t N_c} − σ_t H^H R H R_d (I + σ_t H^H R H R_d)^{-1}] σ_t H^H R H    (3.33)
        = [σ_t H^H R H + R_d^{-1}]^{-1} σ_t H^H R H, for nonsingular R_d.    (3.34)

The optimal mapper

d̂ = arg min_{d_k} ‖d̃ − A d_k‖²_{R̃}    (3.35)
  = arg min_{d_k} ‖y − √σ_t R H d_k‖²_R    (3.36)

is equivalent to the ML decoder.

Now we consider suboptimal mappers. The simplest mapper (MAP1) simply applies a hard decision rule to each component of d̃; that is, we decode d̂ component by component,

d̂_t(n) = arg min_{c_k} |d̃_t(n) − c_k|²,    (3.37)

where c_k is a valid constellation point. The decoded d̂ might not be a valid codeword; in that case, the closest codeword is the one with the largest number of matches with d̂. However, we find that the performance of the decoder in Equation (3.37) is significantly worse than that of the other mappers, and thus this mapper is not considered for further investigation.

The second mapper (MAP2) is the minimum Euclidean distance mapper,

d̂ = arg min_{d_k} ‖d̃ − d_k‖²,    (3.38)

where d_k is a valid codeword vector. If the noise m̃ in Equation (3.13) is a white Gaussian noise vector, this mapper corresponds to the maximum likelihood decoder (this is an M-ary hypothesis testing problem).


The third mapper (MAP3) is

d̂ = arg min_{d_k} ‖d̃ − A d_k‖²;    (3.39)

this mapper is also considered in [BTT02]. MAP3 is equivalent to MAP2 for the BLUE and LS estimators, but different for the LMMSE estimator.

For the ML decoder, two scenarios are considered: the optimal ML decoder (ML1)

d̂ = arg min_{d_k} ‖y − √σ_t R H d_k‖²_R,    (3.40)

and the ML decoder ignoring the noise color (ML2)

d̂ = arg min_{d_k} ‖y − √σ_t R H d_k‖².    (3.41)

The BLUE decoder is

d̂ = arg min_{d_k} ‖d̃_BLUE − d_k‖².    (3.42)

The LS decoder is

d̂ = arg min_{d_k} ‖d̃_LS − d_k‖².    (3.43)

For the LMMSE decoder, three possible scenarios are considered: the LMMSE estimator plus MAP2 (LMMSE1)

d̂ = arg min_{d_k} ‖d̃_LMMSE − d_k‖²,    (3.44)

the LMMSE estimator plus MAP3 (LMMSE2)

d̂ = arg min_{d_k} ‖d̃_LMMSE − A d_k‖²,    (3.45)

and the LMMSE estimator assuming R_d = I plus MAP2 (LMMSE3)

d̂ = arg min_{d_k} ‖d̃_LMMSE(R_d = I) − d_k‖².    (3.46)
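To make the two-stage structure concrete, the following sketch pairs the BLUE equalizer (3.14) with the MAP2 rule (3.38) and compares it against the one-shot ML1 rule (3.40) on a toy model; the 4-codeword BPSK codebook and system sizes are illustrative assumptions. In the noiseless case shown, both decoders must recover the transmitted codeword:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 4-codeword BPSK codebook over a length-4 codeword vector.
n, sigma_t = 4, 8.0
codebook = np.array([[1, 1, 1, 1],
                     [1, -1, -1, -1],
                     [-1, 1, 1, -1],
                     [-1, -1, -1, 1]], dtype=complex)

A0 = rng.standard_normal((n, n))
R = A0 @ A0.T + n * np.eye(n)          # synthetic SPD code correlation
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
H = np.diag(h)
HH = H.conj().T
Rinv = np.linalg.inv(R)

d = codebook[2]
y = np.sqrt(sigma_t) * R @ H @ d       # noiseless received vector

# Stage 1: BLUE equalizer (3.14); Stage 2: MAP2 nearest-codeword rule (3.38).
d_soft = np.linalg.solve(HH @ R @ H, HH @ y) / np.sqrt(sigma_t)
map2 = int(np.argmin([np.linalg.norm(d_soft - c) for c in codebook]))

# One-shot ML1 rule (3.40): weighted distance ||y - sqrt(sigma_t) R H d_k||_R^2.
def ml1_metric(c):
    e = y - np.sqrt(sigma_t) * R @ H @ c
    return (e.conj() @ Rinv @ e).real

ml1 = int(np.argmin([ml1_metric(c) for c in codebook]))
print(map2, ml1)                        # both 2 in the noiseless case
```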


Now we evaluate the overall decoder performance via the Chernoff bound on the pairwise codeword error probability conditioned on the channel realization.

For the ML1 decoder, when d_1 is transmitted, the probability of erroneously decoding d_2 is upper bounded by

P_ML(d_1 → d_2 | H) ≤ exp(−(σ_t/4) (d_1 − d_2)^H H^H R H (d_1 − d_2)) ≜ CB_ML1.    (3.47)

For the ML2 decoder,

P_ML(d_1 → d_2 | H) ≤ exp(−(σ_t/4) [(d_1 − d_2)^H H^H R² H (d_1 − d_2)]² / [(d_1 − d_2)^H H^H R³ H (d_1 − d_2)]) ≜ CB_ML2.    (3.48)

For the BLUE decoder,

P_BLUE(d_1 → d_2 | H) ≤ exp(−(σ_t/4) [(d_1 − d_2)^H (d_1 − d_2)]² / [(d_1 − d_2)^H (H^H R H)^{-1} (d_1 − d_2)]) ≜ CB_BLUE.    (3.49)

For the LS decoder,

P_LS(d_1 → d_2 | H) ≤ exp(−(σ_t/4) [(d_1 − d_2)^H (d_1 − d_2)]² / [(d_1 − d_2)^H (H^H R² H)^{-1} (H^H R³ H) (H^H R² H)^{-1} (d_1 − d_2)]) ≜ CB_LS.    (3.50)

For the LMMSE1 decoder,

P_LMMSE1(d_1 → d_2 | H) ≤ exp(−(1/4) [(A_LMMSE d_1 − d_2)^H (A_LMMSE d_1 − d_2)]² / [(A_LMMSE d_1 − d_2)^H R̃_LMMSE (A_LMMSE d_1 − d_2)]) ≜ CB_LMMSE1.    (3.51)

For the LMMSE2 decoder,

P_LMMSE2(d_1 → d_2 | H) ≤ exp(−(1/4) [(d_1 − d_2)^H A_LMMSE^H A_LMMSE (d_1 − d_2)]² / [(d_1 − d_2)^H A_LMMSE^H R̃_LMMSE A_LMMSE (d_1 − d_2)]) ≜ CB_LMMSE2.    (3.52)

For the LMMSE3 decoder, the Chernoff bound CB_LMMSE3 is a special case of CB_LMMSE1 when R_d = I.

3.5 The Effect of a Large Number of Receive Antennae

Define

X ≜ H^H R H = Σ_{r=1}^{L_r} H_r^H R_r H_r,    (3.53)

where H_r and R_r are defined in Equation (3.9). By noting that H_r is a diagonal matrix, we have

X_ii = Σ_{r=1}^{L_r} |(H_r)_ii|²    (diagonal elements),    (3.54)
X_ij = Σ_{r=1}^{L_r} (R_r)_ij (H_r)*_ii (H_r)_jj    (off-diagonal elements),    (3.55)

and as L_r → ∞, by the assumption of i.i.d. h^k_tr and the strong Law of Large Numbers,

lim_{L_r→∞} X = L_r I_{L_t K N_c}    almost surely.    (3.56)

Clearly the effect of more receive antennae is to strengthen the diagonal elements and weaken the off-diagonal elements of X. Similarly we can show that

H^H R² H = Σ_{r=1}^{L_r} H_r^H R_r² H_r,    H^H R³ H = Σ_{r=1}^{L_r} H_r^H R_r³ H_r.    (3.57)

If we assume the spreading codes are equi-correlated,

lim_{L_r→∞} H^H R² H = L_r [1 + (K L_t − 1)ρ²] I_{L_t K N_c}    (3.58)

and

lim_{L_r→∞} H^H R³ H = L_r [1 + 6(K L_t − 1)ρ² + 4(K L_t − 1)(K L_t − 2)ρ³ + (K L_t − 1)((K L_t − 2)² + 1)ρ⁴] I_{L_t K N_c}.    (3.59)
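Equation (3.56) can be illustrated numerically: averaging X = Σ_r H_r^H R_r H_r over many receive antennae drives the off-diagonal terms toward zero. The sketch below (hypothetical sizes; a synthetic unit-diagonal correlation matrix stands in for R_r) checks that X/L_r approaches the identity as L_r grows:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 4                                    # stands in for Lt*K*Nc in (3.53)
A0 = rng.standard_normal((n, n))
R0 = A0 @ A0.T + n * np.eye(n)
Dh = np.diag(1.0 / np.sqrt(np.diag(R0)))
R0 = Dh @ R0 @ Dh                        # unit diagonal (unit-energy codes)

def x_over_lr(Lr):
    """Average of H_r^H R_r H_r over Lr i.i.d. diagonal channel draws."""
    X = np.zeros((n, n), dtype=complex)
    for _ in range(Lr):
        h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        Hr = np.diag(h)
        X += Hr.conj().T @ R0 @ Hr
    return X / Lr

err_small = np.linalg.norm(x_over_lr(10) - np.eye(n))
err_large = np.linalg.norm(x_over_lr(5000) - np.eye(n))
print(err_small, err_large)              # the deviation shrinks as Lr grows
```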

Now let us explicitly consider the effect of a large number of receive antennae on the equalizer. Since the BLUE and LS estimators are unbiased, we only need to evaluate the residual noise covariance. For the BLUE estimator,

lim_{L_r→∞} R̃_BLUE = (1/(σ_t L_r)) I_{L_t K N_c};    (3.60)

for the LS estimator,

lim_{L_r→∞} R̃_LS ≜ (α/(σ_t L_r)) I_{L_t K N_c},    (3.61)

α = [1 + 6(K L_t − 1)ρ² + 4(K L_t − 1)(K L_t − 2)ρ³ + (K L_t − 1)((K L_t − 2)² + 1)ρ⁴] / [1 + 2(K L_t − 1)ρ² + (K L_t − 1)²ρ⁴].    (3.62)

Because the LS estimator has larger variance than the BLUE estimator in the limit (α > 1), we expect the LS estimator to have worse performance than the BLUE estimator. The LMMSE estimator, when R_d is nonsingular,

lim_{L_r→∞} A_LMMSE = lim_{L_r→∞} [I_{L_t K N_c} + R_d^{-1}/(σ_t L_r)]^{-1} = I_{L_t K N_c},    (3.63)

is asymptotically unbiased, and its residual noise has covariance matrix

lim_{L_r→∞} R̃_LMMSE = lim_{L_r→∞} (1/(σ_t L_r)) [I_{L_t K N_c} + R_d^{-1}/(σ_t L_r)]^{-2} = (1/(σ_t L_r)) I_{L_t K N_c}.    (3.64)
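The penalty factor α in Equation (3.62) is easy to evaluate; the snippet below (sample parameter values chosen purely for illustration) confirms α > 1, i.e., the asymptotic LS residual variance exceeds the BLUE one, and that α = 1 when the codes are orthogonal (ρ = 0):

```python
# Evaluate the LS penalty factor alpha from Eq. (3.62).
def alpha(K, Lt, rho):
    m = K * Lt
    num = 1 + 6*(m - 1)*rho**2 + 4*(m - 1)*(m - 2)*rho**3 + (m - 1)*((m - 2)**2 + 1)*rho**4
    den = 1 + 2*(m - 1)*rho**2 + (m - 1)**2 * rho**4
    return num / den

print(alpha(K=2, Lt=2, rho=0.3))   # > 1: the SNR factor the LS decoder pays
print(alpha(K=2, Lt=2, rho=0.0))   # orthogonal codes: alpha = 1, no penalty
```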


Now we consider the effect of a large number of receive antennae on the overall decoder performance. Plugging the above asymptotics into Equations (3.47), (3.48), (3.49), (3.50), (3.51) and (3.52),

lim_{L_r→∞} CB_ML1 = lim_{L_r→∞} CB_BLUE = lim_{L_r→∞} CB_LMMSE1 = lim_{L_r→∞} CB_LMMSE2 = lim_{L_r→∞} CB_LMMSE3 = exp(−(σ_t L_r/4) ‖d_1 − d_2‖²)    (3.65)

lim_{L_r→∞} CB_LS = lim_{L_r→∞} CB_ML2 = exp(−(σ_t L_r/(4α)) ‖d_1 − d_2‖²).    (3.66)

Hence for a large number of receive antennae, the BLUE and LMMSE decoders all achieve the same performance as the optimal ML1 decoder. The ML2 and LS decoders are worse by a factor α.

3.6 Fisher's Information Matrix

To seek a favorable codeword structure for the LMMSE estimator, we evaluate the Fisher Information Matrix (FIM) conditioned on the channel realization H for a random codeword vector d. Recall that

y = √σ_t R H d + m,

where m ∼ CN(0, R). We have

P(y | d, H) = (1/(π^{L_t K N_c} det(R))) exp(−(y − √σ_t R H d)^H R^{-1} (y − √σ_t R H d)).    (3.67)

If we assume d ∼ CN(0, R_d),

P(d) = (1/(π^{L_t K N_c} det(R_d))) exp(−d^H R_d^{-1} d);    (3.68)

the joint pdf of y and d is

P(y, d | H) = P(y | d, H) P(d),    (3.69)

and the FIM for random d is

J = E[(∂ ln P(y, d | H)/∂d*) (∂ ln P(y, d | H)/∂d^T)]    (3.70)
  = E[(∂ ln P(y | d, H)/∂d*) (∂ ln P(y | d, H)/∂d^T) + (∂ ln P(d)/∂d*) (∂ ln P(d)/∂d^T)]    (3.71)
  = σ_t H^H R H + R_d^{-1}.    (3.72)

Let I denote the inverse of J; the Cramér-Rao lower bound (CRLB) states that

var(d̂_i) ≥ I_ii.    (3.73)

For a large number of receive antennae, plugging Equation (3.56) into J,

lim_{L_r→∞} J = σ_t L_r I_{L_t K N_c} + R_d^{-1}.    (3.74)

To seek the optimal code structure for the estimator, we try to minimize the overall lower bound on the estimation variance subject to constant block energy:

min_d trace[(σ_t H^H R H + R_d^{-1})^{-1}] subject to E[d^H d] = c.    (3.75)

Recall R_d = E[d d^H]; if the eigenvalues of R_d are {λ_i}_{i=1}^n, then using Equation (3.74) the optimization problem is equivalent to

min_d Σ_{i=1}^n 1/(σ_t L_r + 1/λ_i) subject to Σ_{i=1}^n λ_i = c.    (3.76)

By the Kuhn-Tucker conditions, the minimum is achieved when

λ_1 = c, λ_2 = · · · = λ_n = 0.    (3.77)

This suggests that strong correlation among the elements of d is preferred by the estimator, which agrees with common sense: the estimation of one element helps the estimation of the others. Since usually

σ_t L_r ≫ 1/λ_i,    (3.78)

we have

1/(σ_t L_r + 1/λ_i) ≃ 1/(σ_t L_r) if λ_i ≠ 0,    (3.79)
1/(σ_t L_r + 1/λ_i) = 0 if λ_i = 0.    (3.80)

This suggests that the rank of R_d, rather than the relative distribution of the non-zero eigenvalues, decides the minimum of Equation (3.76).
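The Kuhn-Tucker solution (3.77) can be checked directly: with the block energy fixed, concentrating it in a single eigenvalue gives a smaller CRLB trace than spreading it uniformly. A small numeric check (the parameter values are hypothetical):

```python
# Compare the asymptotic CRLB trace sum(1 / (sigma_t * Lr + 1/lambda_i))
# of Eq. (3.76) for two eigenvalue profiles with the same total energy c.
sigma_t, Lr, c, n = 2.0, 8.0, 4.0, 4

def crlb_trace(lams):
    # Terms with lambda_i = 0 contribute 0 (Eq. (3.80)).
    return sum(1.0 / (sigma_t * Lr + 1.0 / l) for l in lams if l > 0)

concentrated = [c, 0.0, 0.0, 0.0]   # all energy in one eigenvalue, Eq. (3.77)
uniform = [c / n] * n               # energy spread evenly
print(crlb_trace(concentrated), crlb_trace(uniform))  # concentrated < uniform
```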

In searching for good codes for the LMMSE decoder, the difficulty resides in the absence of a closed form expression for the PEP. The above Fisher Information Matrix analysis suggests that a small number of non-zero eigenvalues of R_d reduces the variance of the linear estimator, but another measure which quantifies the effect of the mapper is still missing. Numerical results show that the mappers are very insensitive to the variance of the linear estimator. Two unsatisfactory approaches have been tested to search for good codes suitable for the LMMSE decoder. The first approach is based on the fact that OMM codes are not unique; within a set of OMM codes, we try to find the code with the minimum rank of R_d. But for all known OMM codes within the same set, the R_d's turn out to have the same rank. The second approach tries to find the code sets with minimum rank of R_d; among codes with the same rank of R_d, the worst case coding gain is optimized. This yields a large number of code sets, and preliminary analysis shows no noticeable advantage for the codes found.

3.7 Simulation

In the simulations, a rate 1, 2 × 2 BPSK code set optimized for spread systems [GMF03] is used; it consists of four codewords:

⎡ 1  1⎤  ⎡ 1 −1⎤  ⎡−1  1⎤  ⎡−1 −1⎤
⎣ 1  1⎦, ⎣−1 −1⎦, ⎣ 1 −1⎦, ⎣−1  1⎦.    (3.81)

The codeword correlation matrix is

      ⎡1 0 0 0⎤
      ⎢0 1 1 0⎥
R_d = ⎢0 1 1 0⎥.    (3.82)
      ⎣0 0 0 1⎦

Apparently R_d is singular, which is typical for a small number of codewords and a small constellation size.
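The correlation matrix (3.82) follows directly from the four codewords in (3.81) by vectorizing each 2 × 2 codeword row by row (time-major, matching the stacking of d in (3.5)) and averaging the outer products; the sketch below reproduces it and confirms the singularity:

```python
import numpy as np

# The four 2x2 BPSK codewords of (3.81).
codewords = [np.array([[1, 1], [1, 1]]),
             np.array([[1, -1], [-1, -1]]),
             np.array([[-1, 1], [1, -1]]),
             np.array([[-1, -1], [-1, 1]])]

vecs = [D.reshape(-1).astype(float) for D in codewords]   # row-major vectorization
Rd = sum(np.outer(v, v) for v in vecs) / len(vecs)        # E[d d^H] over the codebook

expected = np.array([[1, 0, 0, 0],
                     [0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 0, 0, 1]], dtype=float)
print(np.array_equal(Rd, expected), np.linalg.matrix_rank(Rd))  # True, rank 3
```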

3.7.1 The BLUE and LS Decoders

Although the BLUE and LS decoders are unbiased, they cannot achieve the same diversity level as the ML decoder (Figure 3.3). An inspection of the average variance (Figure 3.4),

var = ‖d̃ − d‖²,    (3.83)

shows that the loss of diversity is due to the large variance of the estimated codeword vector. On the contrary, the LMMSE decoder is biased but has very small variance, and hence achieves the same diversity level as the ML decoder. The LS decoder is slightly worse than the BLUE decoder, and has slightly larger estimator variance.

[Figure: SER vs. SNR, L_t = 2, L_r = 1, K = 1, ρ = 0.3; curves for LS, BLUE, LMMSE1, ML1]

Figure 3.3: The BLUE and LS estimators lose diversity.


[Figure: estimator variance vs. SNR, L_t = 2, L_r = 1, K = 1, ρ = 0.3; curves for LS, BLUE, LMMSE1]

Figure 3.4: The BLUE and LS estimators have large variance.

3.7.2 The ML Decoders

ML2 has about a 0.3dB loss relative to ML1 from ignoring the color of the noise (Figure 3.5). By considering the noise color, ML1 calculates a weighted Euclidean distance; the performance improvement is therefore obtained at the cost of more decoding complexity.

3.7.3 The LMMSE Decoders

The LMMSE2 decoder has a slight advantage over LMMSE1 at high SNR (Figure 3.6), but its complexity is comparable to that of the ML decoder, so it is not a good candidate for multiuser detection, where the number of hypotheses grows exponentially with the number of active users K. The LMMSE2 decoder has less variance than the LMMSE1 decoder (Figure 3.7). Ignoring the codeword correlation incurs about a 0.2dB loss in performance and larger variance.

3.7.4 The Effect of the Number of Receive Antennae L_r

Two receive antennae improve the performance of all decoders, but the BLUE and LS decoders still cannot achieve the same level of diversity as the ML and LMMSE1 decoders (Figure 3.8).


[Figure: SER vs. SNR, L_t = 2, L_r = 1, K = 1, ρ = 0.3; curves for ML2, ML1]

Figure 3.5: Ignoring R incurs a 0.3dB loss.

# of active users    ML1        LMMSE3
K = 2                0.7500     0.1100
K = 4                13.7190    0.2810

Table 3.1: CPU time in seconds for 200 runs.

3.7.5 The Effect of the Number of Active Users K

For 2 (Figure 3.9) and 4 (Figure 3.10) active users, the single user LMMSE3 decoder shows performance close to the joint LMMSE3 decoder and has much less complexity. Note that the equalizer is still the joint equalizer.

3.7.6 Complexity

Simulation results show that the complexity of LMMSE3 grows linearly with the number of active users, while the complexity of ML1 grows exponentially with the number of active users (see Table 3.1).


[Figure: SER vs. SNR, L_t = 2, L_r = 1, K = 1, ρ = 0.3; curves for LMMSE1, LMMSE2, LMMSE3]

Figure 3.6: LMMSE2 decoder has slight advantage over LMMSE1 decoder at high SNR.

[Figure: estimator variance vs. SNR, L_t = 2, L_r = 1, K = 1, ρ = 0.3; curves for LMMSE1, LMMSE2, LMMSE3]

Figure 3.7: LMMSE2 decoder has less variance than LMMSE1 decoder.


[Figure: SER vs. SNR, L_t = 2, L_r = 2, K = 1, ρ = 0.3; curves for LS, BLUE, LMMSE1, ML1]

Figure 3.8: Two receive antennae.

[Figure: SER vs. SNR, K = 2, spreading codes [0 7 9 14]; curves for single LMMSE3, joint LMMSE3, ML]

Figure 3.9: Two active users.


[Figure: SER vs. SNR, K = 4, spreading codes [0 7 9 14]; curves for single LMMSE3, joint LMMSE3, ML]

Figure 3.10: Four active users.


Chapter 4

Nonlinear Hierarchical Codes

4.1 Introduction

The computer optimized codes in Chapter 2 exhibit a layered structure which can be characterized by isometry (see Section 4.2 for a detailed discussion of isometries). This motivates the construction of Nonlinear Hierarchical Codes (NHC) based on isometries. The construction of NHCs is a generalization of group codes [Sle68] to the space-time scenario.

STBCs have been designed using constraints on the structure of the codewords. Such constraints often yield systematic designs, making code set generation, and often analysis, quite straightforward. In particular, orthogonal codes [TJC99] and unitary group codes [HM00, SHHS01] enforce orthogonal or unitary structure on the code blocks and are easily generated. However, computer optimized codes, optimal minimum metric (OMM) codes [Gen00] and optimal union bound (OUB) codes [GVM02], can offer up to several dB gains over the aforementioned code sets, but are computationally expensive to determine. For STBCs used in a serially concatenated system, this several dB gain [Gen00] is seen directly in overall performance. For both practical and theoretical reasons, there is strong interest in developing high performance code sets that have systematic constructions.

As noted above, the coding gain measure does not define a metric space, which complicates systematic constructions of code sets. Operations that preserve "distance" measures like the coding gain are deemed isometries. Isometric transformations have been previously investigated in the context of space-time systems: to shorten search times for code sets [Gen00, GVM02] and to perform code set expansion for use in space-time trellis-coded modulation design [SF02]. The computer optimized codes [Gen00, GVM02] exhibit strong structure and symmetry which can be explained via isometries. This, in turn, suggests the systematic construction of nonlinear hierarchical codes (NHC) for large block and/or constellation sizes, which we present herein.

Slepian [Sle68] studied group codes for the Gaussian channel in 1967.¹ The group codes defined by Slepian possess geometric uniformity and can be decomposed into a direct sum of certain basic group codes. Starting from an initial vector, the basic group codes can be generated by irreducible representations of a finite group. However, finding group codes with the best worst case performance is challenging, as noted by Slepian. Nevertheless, we are able to generalize and extend Slepian's design strategy for the construction of NHCs. The methodology of NHC construction via isometries for MIMO quasi-static fading channels is an extension of Slepian's group codes, where group structure (and hence distance uniformity) is conserved in most cases. A sufficient condition for NHCs to achieve geometric uniformity [GDF91] is established. The NHC construction starts from some initial codeword matrices with no constraint on their size and structure and, by means of isometries, systematically builds higher rate codes from lower rate codes while preserving the good structure of the lower rate code. It is the relationship between codewords, rather than the structure of each individual codeword, that is exploited to optimize coding gain. The NHCs found match those found via exhaustive computer search, and exhibit not only optimized coding gain at each layer, but also great symmetry and structure, which is exploited in Chapter 5 of this work.

This chapter is organized as follows: Section 4.2 presents the signal model; Section 4.3 discusses the construction and properties of NHCs. As an example, Section 4.4 describes in detail the generation of the 2 × 2 QPSK NHC. Section 4.5 discusses the constructed NHCs and compares their performance with the corresponding orthogonal or unitary group codes.

4.2 Signal Model and Isometries<br />

We consider a single-user system where the transmitter and receiver are equipped with L_t and L_r antennae, respectively. We assume perfect synchronization. The received signal corresponding to N_c symbol intervals after appropriate front-end processing can be formulated as

¹ A distinction should be made from the unitary group codes in [HM00], where unitary group codes are indeed matrix representations of Slepian's group codes.
    Y = √σ_t D H + N ∈ C^{N_c × L_r}    (4.1)

where

    σ_t = signal-to-noise ratio (SNR) normalized by L_t    (4.2)
    D = [d_{ti}] ∈ C^{N_c × L_t}, the codeword matrix    (4.3)
    H = [h_{ij}] ∈ C^{L_t × L_r}, the fading coefficient matrix    (4.4)
    N = [n_{tj}] ∈ C^{N_c × L_r}, the additive white Gaussian noise matrix    (4.5)
    Y = [y_{tj}] ∈ C^{N_c × L_r}, the received signal matrix    (4.6)

For the current work, d_{ti} is constrained to a finite nPSK set, that is, d_{ti} = u^{c_{ti}} where u = e^{j2π/n}, c_{ti} ∈ {0, ..., n−1}. For ease of representation, each codeword matrix is assigned a unique index by concatenating the c_{ti}'s into a single base-n number:

    (c_{1,1} ... c_{1,L_t} ... ... c_{N_c,1} ... c_{N_c,L_t})_n.    (4.7)
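As an illustration of this indexing convention, here is a small sketch (helper names are ours; digits are ordered row by row, most significant first, as in Equation (4.7)):

```python
import numpy as np

def index_to_codeword(index, n, Nc, Lt):
    """Decode a base-n codeword index into an Nc x Lt nPSK matrix."""
    digits = []
    for _ in range(Nc * Lt):
        digits.append(index % n)
        index //= n
    digits.reverse()                   # most significant digit first
    u = np.exp(2j * np.pi / n)         # nPSK generator u = e^{j 2 pi / n}
    return (u ** np.array(digits, dtype=float)).reshape(Nc, Lt)

def codeword_to_index(D, n):
    """Inverse mapping: recover the base-n index from an nPSK matrix."""
    index = 0
    for d in D.flatten():
        c = int(round(np.angle(d) / (2 * np.pi / n))) % n
        index = index * n + c
    return index

# The 2x2 QPSK codeword with index 2 is [[1, 1], [1, -1]] (cf. Table 4.1).
D2 = index_to_codeword(2, n=4, Nc=2, Lt=2)
```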

The fading coefficients are complex Gaussian, and modeled as independent for different transmit antennae i and receive antennae j. A quasi-static fading channel is considered, i.e., H remains constant within each block of N_c symbols, but is independent from block to block. Perfect channel state information is assumed known at the receiver. The noise, n_{tj} ~ CN(0, 1), is additive complex Gaussian noise, independent for different symbol times t and different receive antennae j.
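A minimal simulation of the received-signal model of Equation (4.1) under the quasi-static Rayleigh assumptions above (dimensions and SNR chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
Lt, Lr, Nc = 2, 2, 2           # transmit antennae, receive antennae, block length
snr = 10.0                     # linear SNR (illustrative value)
sigma_t = snr / Lt             # SNR normalized by Lt, as in Eq. (4.2)

# A QPSK codeword (index 2 from Table 4.1) and one quasi-static channel draw
D = np.array([[1, 1], [1, -1]], dtype=complex)                          # Nc x Lt
H = (rng.standard_normal((Lt, Lr)) + 1j * rng.standard_normal((Lt, Lr))) / np.sqrt(2)
N = (rng.standard_normal((Nc, Lr)) + 1j * rng.standard_normal((Nc, Lr))) / np.sqrt(2)

Y = np.sqrt(sigma_t) * D @ H + N                                        # Eq. (4.1)
```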

We define the codeword difference correlation matrix

    Φ(D_i − D_j) ≜ (D_i − D_j)^H (D_i − D_j),    (4.8)

and denote the non-zero eigenvalues of Φ(D_i − D_j) by λ_k. The channel vector h, which is a concatenation of the columns of H, has mean and covariance

    h̄ ≜ E[h]    (4.9)
    Σ_h ≜ E[(h − h̄)(h − h̄)^H].    (4.10)

We will assume Σ_h always has full rank. The averaged (over channel) pairwise error probability of decoding D_j when D_i is transmitted can be upper bounded by

    E_h[P(D_i → D_j | h)] ≤ exp(−h^H Σ_h^{−1} [I − (I + (σ_t/4) Σ_h Φ)^{−1}] h) / |I + (σ_t/4) Σ_h Φ|.    (4.11)

At high SNR, assuming Φ is full rank, which is the focus of this work, the bound can be simplified to

    E_h[P(D_i → D_j | h)] ≤ (exp(−h^H Σ_h^{−1} h) / |(σ_t/4) Σ_h|) · (1/|Φ|).    (4.12)

Because the factor in front of 1/|Φ| is a function of the channel mean and covariance only, the worst-case code design criteria [TSC98, GFBK99] determined by Φ are independent of channel conditions: Rayleigh (h̄ = 0) or Ricean (h̄ ≠ 0), uncorrelated (Σ_h = I) or correlated (Σ_h ≠ I). The special case when Σ_h is rank deficient is not discussed in this work. The design criteria are diversity gain

    Δ_H(D_i − D_j) = rank Φ(D_i − D_j),    (4.13)

and coding gain²

    Δ_p(D_i − D_j) = ∏_{k=1}^{Δ_H} λ_k    (4.14)
                   = det Φ(D_i − D_j), assuming full diversity gain.    (4.15)

Another interesting distance measure is

    Δ_s(D_i − D_j) = ∑_{k=1}^{Δ_H} λ_k = trace(Φ(D_i − D_j)),    (4.16)

which characterizes the performance of a space-time system in a Gaussian channel or when the number of receive antennae is large [SF02], and will be used in the sequel for signal set expansion (see Equation (5.8)).

² There are other equivalent definitions of coding gain; we adopt this form for ease of presentation.

For a set of codewords, the worst-case design goals are


1. full diversity:

    Δ_H(D_i − D_j) = L_t, ∀ i ≠ j    (4.17)

2. maximize the minimum coding gain:

    max min_{i ≠ j} Δ_p(D_i − D_j).    (4.18)
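The criteria (4.13)-(4.18) are straightforward to evaluate numerically; a sketch (the helper names are ours):

```python
import numpy as np
from itertools import combinations

def phi(Di, Dj):
    """Codeword difference correlation matrix, Equation (4.8)."""
    E = Di - Dj
    return E.conj().T @ E

def diversity_gain(Di, Dj):            # Equation (4.13)
    return np.linalg.matrix_rank(phi(Di, Dj))

def coding_gain(Di, Dj):               # Equation (4.15), full-diversity case
    return np.linalg.det(phi(Di, Dj)).real

def trace_gain(Di, Dj):                # Equation (4.16)
    return np.trace(phi(Di, Dj)).real

def worst_case(code):
    """Minimum diversity and coding gain over all codeword pairs, Eqs. (4.17)-(4.18)."""
    pairs = list(combinations(code, 2))
    return (min(diversity_gain(a, b) for a, b in pairs),
            min(coding_gain(a, b) for a, b in pairs))

# For the pair {D, -D}: Phi(2D) = 4 D^H D, so the coding gain is det(4 D^H D) = 64
D = np.array([[1, 1], [1, -1]], dtype=complex)
div, gain = worst_case([D, -D])
```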

To define our code construction method, we require the notion of an isometry. Isometries are defined as "distance preserving" transformations, where the "distance" Δ_x can be Δ_H, Δ_p or Δ_s.

Definition 1  A function γ of codeword D is an isometry for Δ_x if

    Δ_x(γ(D_i) − γ(D_j)) = Δ_x(D_i − D_j).    (4.19)

We assume that all isometries map the all-zero matrix to the all-zero matrix, i.e., γ(0) = 0.

Lemma 1  For Δ_x = Δ_H, Δ_p, or Δ_s, the following are isometries:³

1. γ(D) = UD, where U is an arbitrary unitary matrix: U^H U = I.
2. γ(D) = DP, where P is an arbitrary unitary matrix: P^H P = I.
3. γ(D) = [d_{ti}^*], componentwise complex conjugation.

Proof:  If we can show that the three isometries do not change the eigenvalues of Φ(D_i − D_j), then the "distance" Δ_x (x = H, p or s) is preserved. Let λ and v be an eigenvalue and corresponding eigenvector of Φ,

    Φv = λv.    (4.20)

³ This work focuses on non-spread systems; it applies equally well to the uplink of a multiuser Direct-Sequence Code-Division Multiple-Access (DS-CDMA) system when the different transmit antennae of one user employ the same spreading code [Gen00]. A system in which different spreading codes are employed for different transmit antennae is called a spread system. Not all isometries for non-spread systems apply to spread systems; some limitations must be imposed. Isometry 1 holds as before. Isometry 2 requires that P be a unitary permutation matrix (P^H P = I, with only one non-zero element in each row and column of P) and equi-correlated spreading codes. Isometry 3 requires real-valued spreading codes. Hence for spread systems, pragmatic isometries are essentially limited to U only.


1. Isometry U does not change Φ at all: Φ(UD_i − UD_j) = (D_i − D_j)^H U^H U (D_i − D_j) = Φ(D_i − D_j).

2. Let v = Pv'; then ||v'|| = ||v|| = 1 and

    ΦPv' = λPv'  ⟹  P^H Φ P v' = λv',

therefore λ and v' are an eigenvalue and corresponding eigenvector of Φ(D_i P − D_j P) = P^H Φ P. Thus the eigenvalues are preserved.

3. The eigenvalues of a Hermitian matrix, Φ, are real. Taking the complex conjugate of both sides of Equation (4.20), Φ* v* = λv*; therefore λ and v* are an eigenvalue and corresponding eigenvector of Φ*.

In summary, the eigenvalues of Φ are not changed by these isometries, and thus the "distance" Δ_x is preserved. ✷
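Lemma 1 can also be sanity-checked numerically by comparing the eigenvalues of Φ before and after each transformation (a sketch of our own, using random unitary matrices obtained via QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two arbitrary 2x2 QPSK codewords (indices 2 and 30 from Table 4.1)
Di = np.array([[1, 1], [1, -1]], dtype=complex)
Dj = np.array([[1, 1j], [-1j, -1]], dtype=complex)

def eigs_phi(A, B):
    """Sorted eigenvalues of Phi(A - B), Equation (4.8)."""
    E = A - B
    return np.sort(np.linalg.eigvalsh(E.conj().T @ E))

# Random unitary matrices via QR decomposition
U, _ = np.linalg.qr(rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
P, _ = np.linalg.qr(rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))

base  = eigs_phi(Di, Dj)
left  = eigs_phi(U @ Di, U @ Dj)          # Isometry 1
right = eigs_phi(Di @ P, Dj @ P)          # Isometry 2
conj  = eigs_phi(Di.conj(), Dj.conj())    # Isometry 3
```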

An interesting observation is that for 2 × 2 block codes, the effect of complex conjugation can be "emulated" by combinations of U and P. Here is an example: with U = [1 0; 0 e^{−jπ/4}] and P = [0 1; 1 0], the 8PSK codeword 1 = [1 1; 1 e^{jπ/4}] is mapped to its conjugate, codeword 7:

    U × [1 1; 1 e^{jπ/4}] × P = [1 1; 1 e^{−jπ/4}].    (4.21)

For larger block sizes, it may be challenging to find these equivalent combinations of U and P to emulate the effect of complex conjugation.

To ensure closure for the isometries, we further require that U and P have only one non-zero element in each row and column, drawn from the same nPSK constellation as the original codeword D. With this constraint, there are n^{L_t} L_t! possible matrices of size L_t × L_t for the left isometries, and n^{N_c} N_c! possible matrices of size N_c × N_c for the right isometries. Either the left or the right isometries form a non-abelian group; the fixed-point-free groups of these isometries are well studied [SHHS01]. Isometries consisting of all the combinations of U and P also form a non-abelian group with cardinality n^{L_t} L_t! n^{N_c} N_c!, which includes the left-only (or right-only) isometries as a subgroup.

Codewords which can be transformed from one to another via isometries form an equivalence class, which shall be called a "double coset" if both U and P are used, and a "coset" if only U is used. When no confusion is caused, we will simply use "coset" to represent both for convenience. All codewords within a single equivalence class have the same Δ_p(D), which will be called the coding gain of the coset; but the converse is generally not true. A distinction between our defined cosets and the classical cosets defined in group theory [Arm87] is that in group theory all cosets have a common cardinality, while here different cosets may have different cardinalities. The codeword space is naturally decomposed into cosets.

The coset leader, l, has the smallest codeword index within the coset. A double coset with cardinality m will be designated by its coset leader as DC_l(m); similarly, a coset will be designated as C_l(m). A double coset usually splits into several cosets with equal cardinality. An example for 2 × 2 QPSK codes is given below, where the optimal rate-2 code (16 codewords) is found within DC_2(64):

    DC_0(64) = C_0(16) ∪ C_17(16) ∪ C_34(16) ∪ C_51(16)
    DC_1(128) = C_1(32) ∪ C_3(32) ∪ C_18(32) ∪ C_35(32)
    DC_2(64) = C_2(32) ∪ C_19(32)

A coset is actually a trivial set of group codes with the largest possible cardinality.

4.3 Building Nonlinear Hierarchical Codes

Given the nonlinear, non-metric codeword space, finding a good set of codewords is not a simple task. With the aid of isometries, we propose the following greedy algorithm to build nonlinear hierarchical codes (NHCs):

1. Pick a good initial codeword set S_k with 2^k codewords.

2. Generate S'_k from S_k by the following criterion:

    S'_k = {U_m S_k P_n | max_{m,n} min_{i,j} Δ_p(D'_i − D_j), Δ_H = L_t, D'_i ∈ S'_k, D_j ∈ S_k}    (4.22)

3. Let S_{k+1} = S_k ∪ S'_k, k = k + 1, and go to step 2.
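The greedy expansion above can be sketched in code. This is our own illustrative reading of steps 1-3 (function names are ours, and the isometry candidates are restricted to the one-non-zero-entry-per-row-and-column form of Section 4.2); it is not the authors' implementation:

```python
import numpy as np
from itertools import permutations, product

def monomial_matrices(size, n):
    """All matrices with one nPSK entry per row and column (n**size * size! of them)."""
    u = np.exp(2j * np.pi / n)
    for perm in permutations(range(size)):
        for phases in product(range(n), repeat=size):
            M = np.zeros((size, size), dtype=complex)
            for row, col, c in zip(range(size), perm, phases):
                M[row, col] = u ** c
            yield M

def coding_gain(Di, Dj):
    E = Di - Dj
    return abs(np.linalg.det(E.conj().T @ E))

def expand(S, n):
    """Step 2: pick U, P maximizing the minimum coding gain between S' = U S P and S."""
    Nc, Lt = S[0].shape
    best_gain, best_Sp = -1.0, None
    for U in monomial_matrices(Nc, n):
        for P in monomial_matrices(Lt, n):
            Sp = [U @ D @ P for D in S]
            g = min(coding_gain(Dp, D) for Dp in Sp for D in S)
            if g > best_gain:
                best_gain, best_Sp = g, Sp
    return S + best_Sp, best_gain

# Start from the single 2x2 QPSK codeword with index 2 (Table 4.1) and expand once:
S1, gain = expand([np.array([[1, 1], [1, -1]], dtype=complex)], n=4)
```

For a single initial codeword the best partner is its negation, which attains the layer-1 coding gain of 64.00 reported in Table 4.2.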

Step 2 generates a new set S' which is isomorphic to the base set S and is as "far away" from the base set as possible. The procedure stops when the desired rate is reached or S' is empty. The following properties can be shown:

1. Cardinality: |S_k| = 2^k.

2. S'_k and S_k have identical distance spectra with respect to Δ_H, Δ_p and Δ_s.

3. Coding gain: Δ_p(S_{k+1}) ≤ Δ_p(S_k).

Steps 2 and 3 extend the size of any code set while maintaining full diversity and good coding gain. The initial codeword set may be a single codeword matrix; in this case, all the codewords generated by the above procedure belong to the same double coset. Since each double coset is of the largest possible cardinality for group codes, the above procedure is a systematic way of carving out a good subset (or subgroup) from this double coset. We could start from a codeword set with members from different double cosets, but such a set is generally difficult to expand in Step 2, because members from the same double coset lead to desirable symmetry properties.⁴ To simplify code construction and analysis, we focus on codewords from the same double coset. Step 2 can be relaxed by accepting S' whose "distance" from S is larger than a certain threshold τ:

    S'_k = {U_m S_k P_n | min_{i,j} Δ_p(D'_i − D_j) > τ, Δ_H = L_t, D'_i ∈ S'_k, D_j ∈ S_k}.    (4.23)

Of course, multiple candidate sets would be found, but this makes it easier to find code sets of larger cardinality. The above procedure enables us to start from a single codeword and build a good set of codes for different rates with limited complexity by reusing the good sub-structure again and again. At each layer of the construction/expansion, the associated distance spectra have good properties and the coding gains are nearly optimized; we conjecture they could in fact be optimal in several cases. The constructed NHCs are summarized in Table 4.2 of Section 4.5. The beauty of the NHC is the graceful tradeoff between performance and rate while maintaining good structure. Properties 2 and 3 induce a natural set partitioning [Ung82] on NHCs which lays the foundation for space-time trellis-coded modulation design. This is the topic of Chapter 5.

To obtain a good hierarchical code, two important issues need to be addressed:

1. choice of the initial codeword or lower rate code;
2. choice of isometries.

⁴ There are exceptions where codewords from different double cosets give good performance.


We may construct the whole set from a single codeword; in this case, all codewords belong to the same double coset, and the S_k of the current step always serves as the lower rate code for the next step. We may also start from a well-constructed set of codewords, which may be an NHC of a lower constellation, an orthogonal code or a unitary group code; in this case the constructed NHC may not belong to a single double coset.

The interaction between an initial codeword and isometries is complicated when double cosets instead of cosets are considered. If we require that all codewords are drawn from the same double coset, choosing a good initial codeword is equivalent to selecting a good double coset. This significantly simplifies the task of choosing a good initial codeword, but still does not solve the problem completely. Our strategy is to seek the double cosets with the maximum coding gain, as motivated by the following two arguments. Consider a pair of codewords, D and D', from the same double coset such that they maximize the coding gain Δ_p(D − D'); the maximizing pair is simply D and −D with Δ_p(2D) = det(4 D^H D). Maximizing Δ_p(2D) leads to the coset with the largest coding gain,

    D* = arg max_D det(Φ(D − (−D))) = arg max_D det(D^H D).    (4.24)

Our second argument behind using the maximal determinant coset is as follows. Consider cosets formed by the left isometry U; then our goal for the initial codeword D is:

    D* = arg max_{D,U} det(Φ(UD − D))    (4.25)
       = arg max_D det(D^H D) max_U det((U − I)^H (U − I)).    (4.26)

The optimization can be decoupled into choosing the correct coset, max_D det(D^H D), and the correct isometry, max_U det((U − I)^H (U − I)). Note that the condition max_U det((U − I)^H (U − I)) implies det(U − I) ≠ 0, which is the condition of U being fixed point free [HM00]. Our simple argument suggests that fixed-point-free matrices/codes may not be the best codewords, but they are desirable isometries. Thus, fixed-point-free isometries should be coupled with maximal determinant codewords to transform them into constant envelope modulation codewords. A similar argument cannot be extended to double cosets. Empirically, we have found that our strategy of starting with the coset with the largest associated determinant does indeed yield the best codes.


We have observed that good codes from smaller constellations are embedded in codes from larger constellations, and usually serve as better lower rate codes to form the basis of expansion. Relaxing the maximization in Equation (4.22) to the threshold in Equation (4.23) also makes further expansion easier.

Left or right isometries alone are usually insufficient to generate good higher rate codes. The theory of fixed-point-free group codes enables one to eliminate inappropriate isometries and reduce the search space, thus enabling the practical search/design of block codes with very large cardinality. Recall that complex conjugation can be emulated in 2 × 2 block codes by a suitable combination of left and right isometries, but this does not hold for general block sizes. We endeavor to avoid complex conjugation, or cross-coset codewords for that matter, because the fine structure maintained by focusing on left and right isometries is broken, which makes generating a good S'_k from S_k challenging. In general, we propose cross-coset codewords and complex conjugation as methods for expansion only in the last step of the design process.

Given the nature of the problem (nonlinear, non-metric space, non-unique transformation), carrying out the procedure requires some trial and error; however, very good codes can be found in a few steps and with fairly small complexity. A brute-force search for 2^K codewords of size L_t × N_c from nPSK constellations requires testing O(n^{L_t N_c 2^K}) cases. In contrast, the hierarchical design requires testing only O(K n^{L_t + N_c} L_t! N_c!) cases. There is inherent redundancy in the isometries. That is, the same code set S' can be generated by different isometries U_m and P_n from the same base set S; similarly, a single initial code set S can yield multiple code sets S', each with the maximal coding gain. Because we are interested in finding only one optimized code set, any other code sets with equivalent performance (distance spectrum) are deemed redundant. By exploiting the properties of isometries, many redundancies can be removed from the search as follows:

1. About half of the left isometries U are redundant. Because U^H U = I and U^r = I for some r, the following pairs of sets share identical distance spectra:

    S ∪ US ↔ S ∪ U^H S    (4.27)
    S ∪ U^t S ↔ S ∪ U^{r−t} S    (4.28)

If U is included in the search, U^H becomes redundant.


2. Most right isometries P are redundant. For example, the 2 × 2 QPSK NHC is found in the double coset DC_2(64), which can be decomposed into two cosets:

    DC_2(64) = C_2(32) ∪ C_19(32)    (4.29)
             = C_2(32) [1 0; 0 1] ∪ C_2(32) [1 0; 0 i].    (4.30)

In other words, applying all other right isometries to C_2(32) yields no new cosets; therefore, only two right isometries need to be included in the search.

For 2 × 2 QPSK, the number of left/right isometry pairs is reduced from 32 × 32 to 18 × 2; for 3 × 3 8PSK, the reduction is significant: from 3072 × 3072 to 1564 × 64.
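The raw candidate counts quoted above follow directly from the n^{L_t} L_t! and n^{N_c} N_c! formulas of Section 4.2; a quick check:

```python
from math import factorial

def monomial_count(size, n):
    """Number of candidate isometries of a given size: n**size * size!."""
    return n ** size * factorial(size)

left_qpsk_2x2 = monomial_count(2, 4)   # 32 candidate left isometries for 2x2 QPSK
left_8psk_3x3 = monomial_count(3, 8)   # 3072 candidate left isometries for 3x3 8PSK
```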

As noted earlier, the NHCs are strongly analogous to Slepian's group codes for Gaussian channels [Sle68], which are geometrically uniform signal sets [GDF91]. Our extension of Slepian's group codes to MIMO quasi-static fading channels considers designs which endeavor to maintain geometric uniformity via isometries. The strong analogy between NHCs and Slepian's group codes is evident:

    NHC                        Slepian's group code
    Coding gain Δ_p            Euclidean distance
    Initial block code         Initial vector
    Isometries U and P         Real orthogonal matrices O
    Double coset               Fundamental region

NHCs attempt to generalize Slepian's group codes to the MIMO fading channel in a practical way. The group structure of the group of real orthogonal matrices {O_i} is replaced by a hierarchical structure for ease of construction. As remarked by Slepian, "the initial vector problem and the fundamental region" are complicated; general solutions to these problems are not yet available.

We refashion Forney's definition [GDF91] of geometrically uniform (GU) group codes for MIMO quasi-static fading channels:

Definition 2  The codeword set S is geometrically uniform if, given any two codeword matrices D and D' in S, there exists an isometry γ that transforms D to D' while leaving S invariant; i.e., γ(D) = D' and γ(S) = S.

The isometry γ could be a combination of the afore-mentioned three isometries. Specifically, for an NHC S_k to be GU, we need ∀ a, b ∈ S_k, ∃ U_j, P_j such that

    b = U_j a P_j    (4.31)
    S_k = U_j S_k P_j.    (4.32)

GU is a strong kind of symmetry; a less strict symmetry is distance uniformity (DU).

Definition 3  The codeword set S is distance uniform if the distance measures (e.g., the coding gain) from any codeword to the others do not depend on the choice of the codeword; in other words, the distance profiles from each codeword to all the others are identical.

Geometric uniformity implies distance uniformity, but the reverse may not be true. A counterexample is the 3 × 3 BPSK NHC (see Table A.5): the subset

    {1, 172, 304, 413}    (4.33)

is DU but not GU, because {1, 172} and {304, 413} belong to double cosets 1 and 10, respectively, and the isometries to transform codewords from double coset 1 to double coset 10 do not exist. We observe that most, but not all, of our found NHCs are in fact DU. We ask: what are the conditions for an NHC to be GU? Lemma 2 and Theorem 1 answer this question. In the sequel, we use the notation

    U{code set S}P = {code set S'}    (4.34)

to represent the 1-1 isometry transformation that maps each codeword a from S to a codeword a' in S' such that

    UaP = a', ∀ a ∈ S, a' ∈ S'.    (4.35)

Without loss of generality, we assume a and b are two arbitrary elements of S_k and the following transformation holds (see Figure 4.1):

    {a', b'} = U_k {a, b} P_k.    (4.36)

[Figure 4.1: The relationship between {a, b} and {a', b'}. Within S_k, the isometries U_j, P_j map a to b; the isometries U_k, P_k map a and b in S_k down to a' and b' in S'_k.]

Lemma 2  If S_k is GU, and I_U and I_P are the isometry sets which satisfy Definition 2 for S_k, then for arbitrary left and right isometries U_k and P_k, S'_k = U_k S_k P_k is also GU. A set of isometries which satisfies Definition 2 for S'_k is {U_k U_j U_k^H, U_j ∈ I_U} and {P_k^H P_j P_k, P_j ∈ I_P}.

Proof:  First, let us find the isometries to transform a' to b':

    b' = U_k b P_k
       = U_k U_j a P_j P_k
       = U_k U_j U_k^H a' P_k^H P_j P_k.

If we can show that S'_k is invariant under the isometries U_k U_j U_k^H and P_k^H P_j P_k, then S'_k is GU. This holds because

    S'_k = U_k S_k P_k
         = U_k U_j S_k P_j P_k
         = U_k U_j U_k^H S'_k P_k^H P_j P_k.    ✷


Theorem 1  If S_k is GU, and I_U and I_P are the isometry sets which satisfy Definition 2, then a set of sufficient conditions for S_{k+1} = S_k ∪ S'_k, where S'_k = U_k S_k P_k, to be GU is

    S_k = (U_k U_j U_k) S_k (P_k P_j P_k) and    (4.37)
    S_k = (U_k^H U_j U_k) S_k (P_k P_j P_k^H) and    (4.38)
    S_k = (U_k U_j U_k^H) S_k (P_k^H P_j P_k), ∀ U_j ∈ I_U, ∀ P_j ∈ I_P.    (4.39)

Proof:  Given S'_k = U_k S_k P_k and b = U_j a P_j, we first identify the isometries that transform one codeword to another and the effects of these isometries on the corresponding code set:

    b = U_j a P_j,                              S_k = U_j S_k P_j    (4.40)
    b' = U_k U_j U_k^H a' P_k^H P_j P_k,        S'_k = U_k U_j U_k^H S'_k P_k^H P_j P_k    (4.41)
    b' = U_k U_j a P_j P_k,                     S'_k = U_k U_j S_k P_j P_k    (4.42)
    a = U_j^H U_k^H b' P_k^H P_j^H,             S_k = U_j^H U_k^H S'_k P_k^H P_j^H.    (4.43)

For S_{k+1} = S_k ∪ S'_k to be GU, it is sufficient to establish the following three points:

1. Given that the isometries U_j and P_j preserve S_k (Equation (4.40)), they should also preserve S'_k. This requires

    S'_k = U_j S'_k P_j  ⇔  U_k S_k P_k = U_j U_k S_k P_k P_j  ⇔  S_k = U_k^H U_j U_k S_k P_k P_j P_k^H,    (4.44)

which is Equation (4.38).

2. Given that the isometries U_k U_j U_k^H and P_k^H P_j P_k preserve S'_k (Equation (4.41)), they should also preserve S_k. This requires

    S_k = U_k U_j U_k^H S_k P_k^H P_j P_k,    (4.45)

which is Equation (4.39).


3. Given that the isometries U_k U_j and P_j P_k transform S_k to S'_k (Equation (4.42)), they should transform S'_k to S_k at the same time. This requires

    S_k = U_k U_j S'_k P_j P_k = U_k U_j U_k S_k P_k P_j P_k,    (4.46)

which is Equation (4.37). Similarly, given that the isometries U_j^H U_k^H and P_k^H P_j^H transform S'_k to S_k, they should transform S_k to S'_k at the same time. This yields the same condition as Equation (4.37). ✷

We observe that the definition of GU is not constructive. Given a set of isometries which satisfies the definition of GU for S_k, Lemma 2 and Theorem 1 construct a set of isometries which satisfies the definition of GU for S'_k or S_{k+1} = S_k ∪ S'_k. The conditions specified in Theorem 1 may be stronger than necessary; namely, if the set of isometries specified in Theorem 1 fails the test of GU, another set of isometries may still pass the test. Nevertheless, Theorem 1 provides a sufficient condition which is easy to test. For example, to test whether the 2 × 2 BPSK NHC (Table A.1) is GU, we can start from S_1:

1. S_1 = {1, 14} is obviously GU, where I_U = {−I}, I_P = {I}.

2. Is S_2 = {1, 14, 7, 8} GU? From the structure of S_1,

    U_j = [−1 0; 0 −1], P_j = [1 0; 0 1],

and from Table A.1,

    U_k = [0 1; −1 0], P_k = [1 0; 0 1].

Since U_k U_j U_k = P_k P_j P_k = P_k^H P_j P_k = P_k P_j P_k^H = I and U_k^H U_j U_k = U_k U_j U_k^H = −I, it is easy to verify that S_1 is invariant in Theorem 1; therefore S_2 is GU. Now we have

    I_U = { [−1 0; 0 −1], [0 1; −1 0], [0 −1; 1 0] },
    I_P = { [1 0; 0 1] }.


[Figure 4.2: Hierarchical structure of the 2 × 2 QPSK NHC, showing the 16 codeword indices {2, 168, 42, 128, 93, 247, 117, 223, 30, 180, 54, 156, 105, 195, 65, 235} arranged layer by layer.]

3. Is S_3 = {1, 14, 7, 8, 2, 13, 4, 11} GU? For each U_j ∈ I_U, we have:

    U_j = [−1 0; 0 −1]:  U_k U_j U_k = U_k^H U_j U_k = U_k U_j U_k^H = [−1 0; 0 −1];
    U_j = [0 1; −1 0]:   U_k U_j U_k = U_k^H U_j U_k = U_k U_j U_k^H = [0 −1; 1 0];
    U_j = [0 −1; 1 0]:   U_k U_j U_k = U_k^H U_j U_k = U_k U_j U_k^H = [0 1; −1 0].

Since P_k = P_j = I, P_k P_j P_k = P_k^H P_j P_k = P_k P_j P_k^H = I. It is easy to verify that S_2 is invariant in Theorem 1; therefore S_3 is GU.
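These invariance checks mechanize easily. The sketch below (our own illustration; helper names are ours) decodes the 2 × 2 BPSK indices from Table A.1 and verifies that S_1 is invariant under the combined isometries used in step 2:

```python
import numpy as np

def bpsk_codeword(index):
    """Decode a 2x2 BPSK codeword: 4 base-2 digits, row-major, d = (-1)**c."""
    bits = [(index >> k) & 1 for k in (3, 2, 1, 0)]
    return np.array([(-1.0) ** b for b in bits]).reshape(2, 2)

def same_set(A, B):
    """Compare two lists of matrices as sets (up to numerical tolerance)."""
    return len(A) == len(B) and all(any(np.allclose(a, b) for b in B) for a in A)

S1 = [bpsk_codeword(i) for i in (1, 14)]
Uj = -np.eye(2)                              # I_U for S1
Uk = np.array([[0.0, 1.0], [-1.0, 0.0]])     # isometry taking S1 to {7, 8}

# S1' = Uk S1 should be the set {7, 8}
S1p = [Uk @ D for D in S1]
targets = [bpsk_codeword(i) for i in (7, 8)]

# Theorem 1 invariance checks (Pk = Pj = I, so only the U-combinations matter)
checks = [Uk @ Uj @ Uk, Uk.T @ Uj @ Uk, Uk @ Uj @ Uk.T]
invariant = all(same_set([U @ D for D in S1], S1) for U in checks)
```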

4.4 The Hierarchical Structure of Optimal 2 × 2 QPSK Codes

As a case study, we consider the construction of the 2 × 2 QPSK NHC. We shall see that the NHC is in fact the optimal union bound code⁵ found by brute-force search [GVM02]. The hierarchical structure of the 2 × 2 QPSK NHC is depicted in Figure 4.2; the codewords are delineated in Table 4.1.

The steps to generate the 2 × 2 QPSK 16-codeword set are described below:

⁵ The union bound on its block error rate is the lowest among all possible STBCs with the same block size and constellation size.


      2: [ 1  1;  1 −1]    168: [−1 −1; −1  1]     42: [ 1 −1; −1 −1]    128: [−1  1;  1  1]
     93: [ i  i; −i  i]    247: [−i −i;  i −i]    117: [ i −i;  i  i]    223: [−i  i; −i −i]
     30: [ 1  i; −i −1]    180: [−1 −i;  i  1]     54: [ 1 −i;  i −1]    156: [−1  i; −i  1]
    105: [ i −1; −1  i]    195: [−i  1;  1 −i]     65: [ i  1;  1  i]    235: [−i −1; −1 −i]

Table 4.1: Code list of the 2 × 2 QPSK NHC (each codeword is listed as [row 1; row 2]).

1. Pick an initial codeword, say 2 from DC_2.

2. Generate another codeword (rate r = 0.5, coding gain Δ_p = 64.00):

    [−1 0; 0 −1] {2} [1 0; 0 1] = {168}

3. Generate 4 codewords (r = 1.0, Δ_p = 16.00):

    [0 −1; 1 0] {2, 168} [1 0; 0 1] = {42, 128}

4. Generate 8 codewords (r = 1.5, Δ_p = 16.00):

    [0 −1; 1 0] {2, 168, 42, 128} [0 i; i 0] = {93, 247, 117, 223}

5. Generate 16 codewords (r = 2.0, Δ_p = 4.00):

    [1 0; 0 −i] {2, 168, 42, 128, 93, 247, 117, 223} [1 0; 0 i] = {30, 180, 54, 156, 105, 195, 65, 235}
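As our own verification sketch, the sixteen Table 4.1 indices can be decoded and checked against the claimed properties of the final set: full diversity with minimum coding gain Δ_p = 4.00:

```python
import numpy as np
from itertools import combinations

def qpsk_codeword(index):
    """Decode a 2x2 QPSK codeword index (4 base-4 digits, row-major, d = i**c)."""
    digits = [(index // 4 ** k) % 4 for k in (3, 2, 1, 0)]
    return np.array([1j ** c for c in digits]).reshape(2, 2)

nhc = [qpsk_codeword(i) for i in
       (2, 168, 42, 128, 93, 247, 117, 223, 30, 180, 54, 156, 105, 195, 65, 235)]

gains = []
for Di, Dj in combinations(nhc, 2):
    E = Di - Dj
    gains.append(abs(np.linalg.det(E.conj().T @ E)))

full_diversity = min(gains) > 0      # det Phi != 0 for every pair implies rank 2
min_coding_gain = min(gains)         # 4.00 for this code, per Table 4.2
```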


    size   const   coset    S_1      S_2     S_3     S_4     S_5     S_6    S_7    Table
    2×2    BPSK    1        64.00    16.00†                                        A.1
    2×2    QPSK    2        64.00    16.00   16.00   4.00†                         A.2
    2×2    8PSK    4        64.00    16.00   16.00   4.00    1.37    0.34†         A.4, A.3
    2×2    16PSK   8        64.00    16.00   16.00   4.00    1.37    0.34   0.09
    3×3    BPSK    1, 10    256.00   64.00   64.00‡†                               A.5
    3×3    QPSK    397      1280.00  160.00  64.00   32.00‡  8.00‡                 A.6, A.7
    3×3    8PSK    10794    1620.08  202.51  104.97  32.00‡  7.03‡   2.34‡  0.69‡  A.8, A.9
    4×3    BPSK    83       4096.00  512.00  512.00  64.00‡†                       A.10
    4×3    QPSK    8714     4096.00  512.00  512.00  384.00  128.00  32.00‡        A.11, A.12

    ‡ Code set which is not distance uniform.  † Code set which achieves the Singleton bound.

Table 4.2: List of Hierarchical Codes

The initial codeword 2 is not picked by chance: it belongs to the double coset DC 2 ,<br />
<br />
which contains the codewords with the largest determinant. Surprisingly, all of the optimal union<br />
<br />
bound codes found by exhaustive search [GVM02] exhibit this hierarchical structure. We<br />
<br />
note that our construction does not enforce group structure; however, the 16 generated<br />
<br />
codewords fulfill the requirement of geometric uniformity, as is clearly seen in Figure 4.2.<br />

4.5 Code Lists<br />

This section lists the hierarchical codes <strong>for</strong> various block sizes, constellation sizes and the<br />

corresponding MTCM designs. Table 4.2 summarizes the largest full diversity NHC sets<br />

<strong>for</strong> various block and constellation sizes. <strong>The</strong> double coset and minimum coding gain of<br />

each layer is also listed. If the target is not the largest full diversity set, better NHCs<br />

could be constructed <strong>for</strong> smaller sets. <strong>The</strong> detailed structure of each code set is tabulated<br />

in the tables in the Appendix.<br />

For 2 × 2 NHCs, the lower constellation code sets are “embedded” in the higher<br />
<br />
constellation code sets. For example, the 16PSK NHC is identical to the 8PSK NHC up to<br />
<br />
S 6 , and the 8PSK NHC is identical to the QPSK NHC up to S 4 . However, this embedded structure<br />
<br />
does not always exist in NHCs of larger size. Note that for the 3 × 3 BPSK NHC, S 3 is a<br />

cross-coset design: the union of the two S 2 ’s from coset 1 and 10, respectively.<br />

Several of the NHCs also achieve the Singleton bound<br />
<br />
|S k | ≤ n^(N c (L t − ∆ H + 1)) (4.47)<br />



which are very likely to be optimal code sets. The Singleton bound is an upper bound<br />
<br />
on the cardinality of an N c × L t STBC set with diversity gain L t constructed from<br />
<br />
a constellation of size n. We compare the performance of our nonlinear hierarchical<br />
<br />
space-time block codes with the best known unitary or orthogonal codes via union bounds.<br />
<br />
For 3 × 3 STBCs, the best unitary group code (UGC) [HM00] with cardinality 8 has the<br />
<br />
following structure: let u = e^(jπ/4) ; the UGC is the cyclic group generated by<br />

diag(u, u, u^3 ). (4.48)<br />

In Figure 4.3, our BPSK NHC achieves the same performance as the 8PSK UGC; our<br />
<br />
8PSK NHC yields about a 0.9dB gain over the UGC. The best 3 × 3 UGC with 63<br />
<br />
codewords is G 21,4 [SHHS01], which is constructed from 21PSK in the form A^l B^k , l =<br />
<br />
0, . . . , 20, k = 0, 1, 2, where η = e^(2πi/21) and<br />

A = diag(η, η^4 , η^16 ), B = [0 1 0; 0 0 1; η^7 0 0]. (4.49)<br />

Figure 4.4 shows that the 64-codeword NHC set S 6 , constructed from 8PSK, is less than half a<br />
<br />
dB away from the 63-codeword set G 21,4 in performance. Considering that S 6 uses only<br />
<br />
8PSK while G 21,4 uses 21PSK, S 6 offers very good performance. In Figure 4.5, the 4 × 3<br />
<br />
BPSK/QPSK NHCs (8/64 codewords) achieve about a 2dB gain over the corresponding<br />
<br />
orthogonal designs [TH01], which have the following structure<br />

[s 1 s 2 s 3 ; −s 2 * s 1 * 0; −s 3 * 0 s 1 * ; 0 s 3 * −s 2 * ]. (4.50)<br />



[Figure 4.3 plot: union bound vs SNR for NHC(BPSK), NHC(8PSK) and UGC(8PSK); curve data not recoverable from extraction.]<br />
<br />
Figure 4.3: 3 × 3 BPSK, 8PSK NHCs (8 codewords) vs best 8PSK unitary group codes (8 codewords).<br />

[Figure 4.4 plot: union bound vs SNR for UGC |G 21,4 | = 63 (21PSK) and NHC |S 6 | = 64 (8PSK); curve data not recoverable from extraction.]<br />
<br />
Figure 4.4: 3 × 3 8PSK NHCs (64 codewords) vs best 21PSK unitary group codes (63 codewords).<br />



[Figure 4.5 plot: union bound vs SNR for orthogonal codes and hierarchical codes, BPSK and QPSK; curve data not recoverable from extraction.]<br />
<br />
Figure 4.5: 4 × 3 BPSK (8 codewords) and QPSK (64 codewords) NHCs vs orthogonal codes.<br />

All the 2 × 2 NHCs happen to be orthogonal, or rather super-orthogonal<br />
<br />
[JS02]. The super-orthogonal STBC can be viewed as a simplified NHC. For example, the 2 × 2 and<br />
<br />
4 × 4 super-orthogonal STBCs have the following structures, respectively:<br />

C(x 1 , x 2 , θ) = [x 1 x 2 ; −x 2 * x 1 *] · diag(e^(jθ) , 1) (4.51)<br />
<br />
C(x 1 , x 2 , x 3 , θ 1 , θ 2 , θ 3 ) = [x 1 x 2 x 3 0; −x 2 * x 1 * 0 −x 3 ; x 3 * 0 −x 1 * −x 2 ; 0 −x 3 * x 2 * −x 1 ] · diag(e^(jθ 1 ) , e^(jθ 2 ) , e^(jθ 3 ) , 1) (4.52)<br />

In comparison with NHCs, the super-orthogonal STBCs have the following limitations:<br />
<br />
1. Being based on orthogonal codes, super-orthogonal STBCs do not exist for all possible block sizes. Moreover, as the NHCs show, orthogonal codes are not always the best starting base set.<br />
<br />
2. They use only a limited form of right isometry P , and the choice of the parameter θ in P is not systematic.<br />
<br />
3. Lacking a hierarchical structure identifiable by isometry, the set partitioning of super-orthogonal STBCs is not systematic.<br />



Chapter 5<br />

Regular Multiple Trellis Coded Modulation<br />

5.1 Introduction<br />

In the design of STBCs, memory or redundancy is inherent within each block. In a fading<br />
<br />
channel, however, this is usually not strong enough; therefore, more memory needs to be<br />
<br />
introduced between STBCs.<br />

Space-time trellis codes (STTC) [TSC98, BBH02, YB02, Gri99] usually rely on a<br />
<br />
convolutional encoder to introduce memory among symbols, but there are no good design<br />
<br />
rules to optimize coding gain other than exhaustive computer search. Multiple Trellis<br />

Coded Modulation (MTCM) [BDMS91, DS87, SF02, JS02] serves as an ideal bridge to<br />

combine the advantages of both block codes and trellis codes, and offers simple and<br />

intuitive design rules to optimize coding gain systematically. Both STBC and STTC<br />

can be viewed as certain kind of degenerated MTCM: STBC is equivalent to a single<br />

state MTCM, and STTC is MTCM with a vector signal instead of a matrix block on<br />

each transition. <strong>The</strong> MTCM designs in [SF02, ATP98] use STBCs as an inner code of a<br />

convolutional encoder. Two important features of MTCM design over quasi-static MIMO<br />

channels are pointed out in [SF02]: set expansion via isometries to generate large enough<br />

“piece-wise full-rank” signal sets and simple labeling rules to guarantee full diversity.<br />

However, such serial concatenation usually cannot take full advantage of the structure in<br />
<br />
STBCs, and therefore introduces memory among blocks “blindly”. Chapter 5 of this work<br />
<br />
proposes systematic MTCM designs where memory is introduced between block codes<br />
<br />
directly, instead of through an outer convolutional encoder. Because of the symmetric<br />
<br />
structure in the trellis and constituent NHCs, such designs are dubbed Regular MTCM (RMTCM) designs.<br />



In our RMTCM design, a regular trellis constructed specifically <strong>for</strong> fading channels<br />

is generated by shift-register chains. NHCs, which admit a natural set partitioning as in<br />

[Ung82], <strong>for</strong>m the constituent codes. <strong>The</strong> parameters of the regular trellis and set partitioning<br />

are matched to exploit the layered structure of the constituent NHCs optimally.<br />

Several factors which affect per<strong>for</strong>mance in both quasi-static and fast block fading channels<br />

are analyzed and exploited to improve the overall per<strong>for</strong>mance. Set expansion is used<br />

to improve high rate MTCM designs. In each design, the per<strong>for</strong>mance/rate/complexity<br />

tradeoffs are gracefully balanced and optimized. <strong>The</strong> general design procedure can be<br />

used to serve two purposes: <strong>for</strong> a given set of NHCs, provide MTCM designs with various<br />

rates, and, <strong>for</strong> a given rate, provide MTCM designs with various complexity and<br />

per<strong>for</strong>mance.<br />

The MTCM design in [JS02] is a special case of the RMTCM design in this work,<br />
<br />
where the constituent codes are restricted to orthogonal codes and signal expansion is<br />
<br />
limited to a special isometry; the set partitioning and trellis structure of [JS02] are not<br />
<br />
systematic. RMTCM design can achieve a half-dB gain over the rate-3 MTCM design in<br />
<br />
[JS02] by doubling the number of states via adjusting trellis parameters. With even more<br />
<br />
states, as much as a 2dB gain can be achieved. The observation that parallel transitions<br />
<br />
should be avoided for better performance in quasi-static channels is made in [JS02] for<br />
<br />
orthogonal constituent codes only; we will show that it holds in more general cases.<br />

<strong>The</strong> design principles of RMTCM do not rely on quasi-static fading channels. Thus<br />

they can be employed to design RMTCM even <strong>for</strong> fast block fading channels, where fast<br />

block fading is achieved due to perfect block interleaving. Both analysis and simulation<br />

show that parallel transitions are the major detrimental factor which should be avoided<br />

<strong>for</strong> fast block fading channels. A similar conclusion is drawn <strong>for</strong> MTCM design over<br />

Single-Input-Single-Output fast-fading (due to perfect interleaving) channels [BDMS91].<br />

In regular MTCM design, a sequence of STBCs is organized into frames, and memory<br />
<br />
is added between STBCs in a way that exploits the distance spectrum of the constituent<br />
<br />
code. This chapter is organized as follows. Section 5.3 provides the general regular<br />
<br />
MTCM design procedure for a given hierarchical block code and rate. The performance<br />
<br />
of such a design is analyzed in quasi-static and fast block fading channels (Section 5.4);<br />
<br />
the connections between performance and design parameters are revealed, and general<br />
<br />
guidelines to improve regular MTCM designs in the two fading channels are summarized.<br />



Because high-rate performance is poor, Section 5.3 also proposes signal set expansion to<br />
<br />
improve it.<br />

5.2 Signal Model<br />

In RMTCM design, a sequence of STBCs is organized into frames. We consider two<br />
<br />
extreme channel conditions: if the channel is constant over all blocks, it is deemed quasi-static<br />
<br />
fading; if the channel is constant within a block but independent from block to block<br />
<br />
(due to perfect block interleaving), it is deemed fast block fading. In both fading channels,<br />
<br />
the received signal for one block is still characterized by Equation (4.1). A single error<br />
<br />
event (SEE) consists of two paths starting from the same state and merging only once<br />
<br />
after M blocks.<br />

Quasi-static Fading Channels In quasi-static fading channels, the received signal model for the M-stage single error event is<br />
<br />
Ȳ = √σ t ¯D H + ¯N ∈ C^(MN c ×L r ) (5.1)<br />
<br />
where Ȳ = [Y 1 T , Y 2 T , . . . , Y M T ] T , ¯D = [D 1 T , D 2 T , . . . , D M T ] T , ¯N = [N 1 T , N 2 T , . . . , N M T ] T , and Y m , D m , N m and H are defined in Equations (4.1)-(4.5).<br />

Fast Fading Channels The received signal model for the M-stage single error event is similar to that in the quasi-static fading channel,<br />
<br />
Ȳ = √σ t ¯D ¯H + ¯N ∈ C^(MN c ×L r ) (5.2)<br />
<br />
except that ¯D = diag(D 1 , D 2 , . . . , D M ) and ¯H = [H 1 T , H 2 T , . . . , H M T ] T . Note that ¯D is a block diagonal matrix in fast fading channels.<br />

5.3 RMTCM Design Parameters and Procedures<br />

NHCs admit a natural set partitioning as in [Ung82], and hence are a perfect candidate<br />
<br />
for MTCM design. The set partitioning is simply the reverse of the construction procedure<br />
<br />
and will not be repeated here. Another design factor is the state trellis. Because the<br />
<br />
only purpose of the state trellis is to introduce memory among the block codes, we will<br />



[Figures 5.1 and 5.2: shift-register diagrams; drawings not recoverable from extraction.]<br />
<br />
Figure 5.1: The shift-register configuration to generate the state trellis.<br />
<br />
Figure 5.2: The shift-register configuration for 2 inputs and 3 registers and the corresponding trellis.<br />

keep the state trellis as simple as possible. The first a = log 2 b out of the log 2 I input bits<br />
<br />
are used to generate the state trellis; the rest are used for parallel transitions. For N = log 2 S<br />
<br />
registers, the simplest configuration of shift-registers to generate a state trellis is (see<br />
<br />
Figure 5.1):<br />
<br />
1. There are a = log 2 b feed-forward shift-register chains.<br />
<br />
2. Each of the first log 2 b bits is fed into the beginning of exactly one shift-register chain.<br />
<br />
3. No cross-links exist between two shift-register chains.<br />
<br />
4. Registers s k , k = 1, . . . , N, are assigned to the chains in a round-robin fashion; states are defined as (s N , . . . , s 1 ).<br />
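The four construction rules above can be sketched in a few lines of Python. This is a minimal model under the stated assumptions (states are bit-tuples, chains are pure feed-forward shifts with no cross-links); the function names are ours, not the dissertation's:

```python
# Sketch of the regular-trellis generator described above. Assumptions:
# states are bit-tuples indexed from 0, and each chain is a pure
# feed-forward shift with no cross-links.
def make_chains(a, N):
    """Assign registers 0..N-1 to the a = log2(b) chains, round-robin."""
    chains = [[] for _ in range(a)]
    for k in range(N):
        chains[k % a].append(k)
    return chains

def next_state(state, inputs, chains):
    """Shift each chain one stage; the new input bit enters the chain head."""
    new_state = list(state)
    for bit, chain in zip(inputs, chains):
        prev = bit
        for reg in chain:
            # each register takes the value of its predecessor in the chain
            new_state[reg], prev = prev, state[reg]
    return tuple(new_state)

# Example matching Figure 5.2: 2 input chains, 3 registers.
# Chain 0 holds registers {0, 2}; chain 1 holds register {1}.
chains = make_chains(2, 3)
s = next_state((0, 0, 0), (1, 1), chains)  # a "1" enters each chain head
```

Iterating `next_state` over an input sequence traces one path through the trellis; two input sequences that merge to the same state form an error event.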

Figure 5.2 shows the trellis generated by a shift-register configuration of two inputs<br />
<br />
and three registers. For a regular trellis, there are b branches emerging from any given<br />
<br />
state, and each branch contains p parallel transitions. Every b starting states are fully<br />
<br />
connected to b ending states; all the branches connecting these states form a trellis group.<br />
<br />
The number of trellis groups is denoted by g. To prevent catastrophic RMTCM designs, g<br />
<br />
must be greater than 1, which means that the first shift-register chain must contain at least<br />
<br />
2 registers (Figure 5.1). The set of parameters {g, b, p} describes the internal structure<br />
<br />
of a regular trellis. Another set of parameters {I, S, O}, which characterizes the external<br />



behavior of a regular trellis, consists of the number of inputs I, the number of outputs O and<br />
<br />
the number of states S. These two sets of parameters are related by:<br />

O = gbp (5.3)<br />

I = bp (5.4)<br />

S = gb (5.5)<br />

The overall code rate is<br />
<br />
r = (1/N c ) log 2 I. (5.6)<br />

The E b /N 0 is related to the SNR σ t by<br />
<br />
E b /N 0 = L t σ t / r. (5.7)<br />

When the frame error rates (FER) of designs with various rates are compared, E b /N 0 should<br />
<br />
be used if code efficiency is of more importance; if the signal power constraint L t σ t<br />
<br />
and the achievable diversity level are of more importance, the SNR L t σ t should be used. In this<br />
<br />
work, we find plotting the FER curves with respect to the SNR L t σ t more instructive.<br />
<br />
Using Equation (5.7), readers can easily obtain the corresponding curves with respect<br />
<br />
to E b /N 0 .<br />

If we want to assign the NHC set S K to this trellis, the number of trellis outputs<br />
<br />
should match the cardinality of S K . Given O = |S K |, the procedure for RMTCM design<br />
<br />
is:<br />
<br />
1. Factor O = gbp.<br />
<br />
2. Generate the regular trellis from the parameters {g, b}.<br />
<br />
3. Partition S K into g subsets, and assign each subset S k to a trellis group.<br />
<br />
4. Perform a catastrophe test.<br />
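The factorization step, together with Equations (5.3)-(5.6), can be sketched as a small enumeration of candidate designs; the choice O = 16 and N c = 2 below is purely illustrative:

```python
from math import log2

def designs(O, Nc):
    """Enumerate power-of-two factorizations O = g*b*p (g > 1 to avoid
    catastrophic designs) and the resulting external parameters from
    Equations (5.3)-(5.6): I = b*p, S = g*b, r = log2(I)/Nc."""
    out = []
    g = 2
    while g <= O:
        b = 2
        while g * b <= O:
            if O % (g * b) == 0:
                p = O // (g * b)
                out.append({'g': g, 'b': b, 'p': p,
                            'I': b * p, 'S': g * b, 'r': log2(b * p) / Nc})
            b *= 2
        g *= 2
    return out

# Hypothetical example: a 16-codeword output set, Nc = 2 channel uses.
cands = designs(16, 2)
```

Each candidate trades rate against the number of states S and the number of parallel transitions p, mirroring the performance/rate/complexity tradeoff discussed above.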

The particular factorization of O determines the code rate r by Equations (5.3) and<br />
<br />
(5.6). Once the code rate r is determined, the only free design parameter is b, which in<br />
<br />
turn determines p by Equation (5.4) and S by Equation (5.5). The labeling rule of<br />
<br />
Step 3 guarantees full diversity [SF02]. The coding gain is optimized through the NHC<br />



construction, the set partitioning principle [Ung82], and the matching between the layered<br />
<br />
distance spectrum of the NHC and the structure of the regular trellis. Such an RMTCM<br />
<br />
design with g = xx, b = yy and p = zz will be labeled xx × yy × zz. There is<br />
<br />
still some freedom in assigning STBCs from S k to the transitions in a trellis group; we<br />
<br />
find the “cyclic-shift rule” works very well: in every trellis group there are b starting<br />
<br />
states, and if the transitions from one starting state are labeled by block codes in the order<br />
<br />
D k1 , . . . , D kI , the transitions from the next starting state are labeled by a p-cyclic shift<br />
<br />
of D k1 , . . . , D kI , namely D kp+1 , . . . , D kI , D k1 , . . . , D kp . Special caution should be taken<br />
<br />
to avoid catastrophic codes caused by bad labeling.<br />
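The cyclic-shift rule is easy to state in code. A sketch, using as labels the hypothetical codeword indices from the construction example of Section 4.4, for one trellis group with b = 2 starting states and p = 2 parallel transitions:

```python
def cyclic_shift_labels(codewords, b, p):
    """Label the b starting states of one trellis group by the cyclic-shift
    rule: each successive starting state gets the previous state's label
    order shifted left by p positions."""
    I = len(codewords)
    assert I == b * p, "one trellis group carries I = b*p labels"
    return [codewords[(j * p) % I:] + codewords[:(j * p) % I] for j in range(b)]

# Hypothetical trellis group of 4 codeword indices, b = 2, p = 2.
labels = cyclic_shift_labels([2, 168, 42, 128], 2, 2)
```

Row j of the returned table is the label order for the transitions leaving starting state j of the group.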

Due to the labeling rule, transitions diverging from and converging to the same<br />
<br />
state are uniquely labeled by codewords from the same set S k ; hence full diversity over S k , instead<br />
<br />
of over S K , is sufficient to guarantee full diversity of the MTCM design. This motivates<br />
<br />
expanding the signal set further without the requirement of full diversity. A natural<br />
<br />
criterion for the expansion is ∆ s , and the procedures to expand the signal set are<br />
<br />
similar to those based on ∆ p :<br />

1. Expand S k to S k ′ by the criterion<br />
<br />
S k ′ = {U m S k P n | max m,n min i,j ∆ s (D i ′ − D j ), ∆ H < L t , D i ′ ∈ S k ′ , D j ∈ S k } (5.8)<br />
<br />
2. Let S k+1 = S k ∪ S k ′ , set k = k + 1, and go to Step 1.<br />

The procedure stops when the desired rate is reached or S ′ is empty. The expansion<br />
<br />
can also be based on ∆ p . For two transmit antennae, the two approaches are essentially<br />
<br />
identical. For more than two transmit antennae, the expanded NHCs based on the two criteria<br />
<br />
have similar performance.<br />

Notice that full diversity ∆ H = L t is no longer required in signal set expansion. If<br />
<br />
the largest full diversity subset is S m , the highest achievable rate of an RMTCM design with<br />
<br />
full diversity is r m = m/N c . Similar properties hold for the expanded sets, m ≤ k ≤ K:<br />
<br />
1. Cardinality: |S k | = 2 k .<br />
<br />
2. S k ′ and S k have identical distance spectra with respect to ∆ H , ∆ p and ∆ s .<br />
<br />
3. ∆ s : ∆ s (S k+1 ) ≤ ∆ s (S k ).<br />



All the NHCs found in this work satisfy the following property (see the code lists in<br />
<br />
Table 4.2):<br />
<br />
∆ x (S k+1 ) ≤ ∆ x (S k ), x = H, p or s. (5.9)<br />
<br />
Because ∆ H and ∆ p characterize the performance of NHCs in Ricean channels 1 and<br />
<br />
∆ s characterizes the performance in Gaussian channels, this property suggests that the<br />
<br />
set partitioning of NHCs is optimal in both Ricean and Gaussian channels. Although<br />
<br />
designed for Ricean channels, RMTCM designs should yield reasonably good performance<br />
<br />
even in a Gaussian channel.<br />

We briefly discuss two questions regarding trellis design before concluding this section.<br />
<br />
First, can feedback be added to the shift-register chains? The answer is yes, but with<br />
<br />
caution. Section 5.4 presents an important design parameter: the minimum length M m<br />
<br />
of a single error event. If feedback is added in the middle of a register chain<br />
<br />
rather than at the beginning, M m is reduced and performance is degraded. Feedback to<br />
<br />
the beginning does not affect the frame error rate performance, but it shuffles the mapping<br />
<br />
between input bit sequences and output block codes and makes the overall structure<br />
<br />
recursive, which could therefore benefit a serially concatenated system [BDMP98].<br />

The second question is: why not serially concatenate an optimal binary convolutional<br />
<br />
encoder [LC83] with a “good” mapper, which maps the convolutional encoder’s output<br />
<br />
bit sequence to a block code? The short answer is that serial concatenation usually<br />
<br />
cannot exploit the distance spectrum and the underlying finite state machine<br />
<br />
directly, and hence yields inferior performance with respect to RMTCM (see Figure 5.9 in<br />
<br />
Section 5.5.2 for a direct comparison). Consider the following: let u denote the uncoded<br />
<br />
input bits to the convolutional encoder, S the states, v the output bit sequence, and f(·)<br />
<br />
the mapper from the output bit sequence to STBCs. For serial concatenation, the overall<br />
<br />
function mapping input bits to an STBC is<br />

f(v(u, S)), (5.10)<br />

while <strong>for</strong> RMTCM, the overall function is<br />

f ′ (u, S). (5.11)<br />

1 Note that we consider Rayleigh channels to be a special case of Ricean channels.<br />



Since f ′ is a direct function of the inputs u and states S, rather than acting through an intermediate<br />
<br />
(linear) function v, RMTCM is better able to match the underlying states to the<br />
<br />
output code space, namely the set of NHCs. Secondly, the code space of NHCs is rather<br />
<br />
large due to the high dimensionality of the block code. For example, the full diversity 3 × 3<br />
<br />
8PSK NHC has 128 block codes, which is much larger than the number of constellation<br />
<br />
points for PSK. RMTCM can naturally exploit this large code space, while a mapper f(·)<br />
<br />
in serial concatenation appears inefficient and sometimes awkward. It may be possible<br />
<br />
to take advantage of both well-designed binary convolutional codes and the large code<br />
<br />
space of NHCs via a smart mapper, as in pragmatic TCM [VWZP89], but this topic is<br />
<br />
beyond the scope of this work.<br />

5.4 Per<strong>for</strong>mance Analysis<br />

We next analyze the per<strong>for</strong>mance of our space-time MTCM systems. We consider a single<br />

error event (SEE) with M blocks. Recall that ¯D is the sequence of M transmitted STBCs<br />

where M is the length of a SEE.<br />

Quasi-static Fading Channels In quasi-static fading channels, the pairwise error probability of decoding ¯D ′ when ¯D (see Equation (5.1)) is transmitted, after averaging over the channel, can be upper bounded by<br />
<br />
P( ¯D → ¯D ′ ) ≤ (σ t /4)^(−L t ) · |( ¯D − ¯D ′ ) H ( ¯D − ¯D ′ )|^(−1) . (5.12)<br />

Fast Fading Channels In fast fading channels, the channel-averaged pairwise error probability of decoding ¯D ′ when ¯D (see Equation (5.2)) is transmitted can be upper bounded by<br />
<br />
P( ¯D → ¯D ′ ) ≤ (σ t /4)^(−ML t ) · |( ¯D − ¯D ′ ) H ( ¯D − ¯D ′ )|^(−1) . (5.13)<br />

Comparing Equations (5.12) and (5.13), the diversity gain is<br />
<br />
∆ H = L t (quasi-static fading), ∆ H = ML t (fast block fading). (5.14)<br />



Whenever there exist parallel transitions, M = 1, and the achievable diversity level in<br />

fast block fading channels is limited to that of quasi-static fading channels. <strong>The</strong>re<strong>for</strong>e,<br />

in fast block fading channels parallel transitions should be avoided (this is verified by<br />

simulation in Figure 5.7). <strong>The</strong> coding gain is<br />

∆ p = det[( ¯D − ¯D ′ ) H ( ¯D − ¯D ′ )] = det[ ∑ i=1..M (D i − D i ′ ) H (D i − D i ′ )] (quasi-static fading)<br />
<br />
∆ p = ∏ i=1..M det[(D i − D i ′ ) H (D i − D i ′ )] (fast block fading) (5.15)<br />
<br />
In both cases, increasing the length, M, of a SEE is preferred.<br />
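The two branches of Equation (5.15) are straightforward to evaluate numerically. A numpy sketch for a hypothetical M = 2 single error event (the codeword pairs are illustrative, not drawn from any code list in this work):

```python
import numpy as np

# Hypothetical M = 2 single error event: transmitted and decoded blocks.
D = [np.eye(2, dtype=complex), np.array([[1, 1], [-1, 1]], dtype=complex)]
Dp = [-np.eye(2, dtype=complex), np.array([[1, -1], [1, 1]], dtype=complex)]

# Per-block Gram matrices (D_i - D'_i)^H (D_i - D'_i)
gram = [(d - dp).conj().T @ (d - dp) for d, dp in zip(D, Dp)]

# Quasi-static: determinant of the summed Grams (Eq. (5.15), first case)
gain_qs = np.linalg.det(sum(gram)).real
# Fast block fading: product of per-block determinants (second case)
gain_ff = np.prod([np.linalg.det(G).real for G in gram])
```

For this toy example the quasi-static gain (64) exceeds the sum of the per-block determinants (16 + 16), consistent with the Minkowski inequality invoked below.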

<strong>The</strong> following two theorems justify our approach to MTCM design with STBCs in<br />

quasi-static fading channels: cascading well-designed block codes. First, by Minkowski’s<br />

Inequality [HJ91], <strong>for</strong> positive definite matrices (D i − D ′ i )H (D i − D ′ i ),<br />

det[ ∑ i=1..M (D i − D i ′ ) H (D i − D i ′ )] ≥ ∑ i=1..M det[(D i − D i ′ ) H (D i − D i ′ )] (5.16)<br />

This suggests that the coding gain of an error path grows at least as the sum of the coding<br />
<br />
gains of the individual stages. Secondly, the monotonicity theorem [HJ91] yields<br />
<br />
µ k ((D i − D i ′ ) H (D i − D i ′ ) + (D j − D j ′ ) H (D j − D j ′ )) ≥ µ k ((D i − D i ′ ) H (D i − D i ′ )) (5.17)<br />
<br />
for positive semi-definite (D j − D j ′ ) H (D j − D j ′ ), where the µ k ’s are eigenvalues arranged in<br />
<br />
increasing order. Similar observations, specifically for orthogonal STBCs, are made in<br />
<br />
[JS02]; here we have shown the property holds for more general STBCs. For an example<br />
<br />
of Minkowski’s inequality, see Figure 5.12 and Equation (5.33).<br />
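Both matrix inequalities are easy to confirm numerically for random positive semi-definite Gram matrices; a quick numpy sanity check (an illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_gram(n):
    """Random positive semi-definite matrix E^H E, standing in for
    (D_i - D'_i)^H (D_i - D'_i)."""
    E = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return E.conj().T @ E

A, B = random_gram(3), random_gram(3)

# Minkowski-type bound (Eq. (5.16)): det(A + B) >= det(A) + det(B)
assert np.linalg.det(A + B).real >= np.linalg.det(A).real + np.linalg.det(B).real

# Monotonicity (Eq. (5.17)): every ordered eigenvalue can only grow
mu_A = np.sort(np.linalg.eigvalsh(A))
mu_AB = np.sort(np.linalg.eigvalsh(A + B))
assert np.all(mu_AB >= mu_A - 1e-9)
```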

The minimum coding gain over all possible SEEs is defined as<br />
<br />
d free = min j det[( ¯D j − ¯D j ′ ) H ( ¯D j − ¯D j ′ )] (5.18)<br />

where ¯D j and ¯D j ′ form a single error event. The coding gain associated with the shortest<br />
<br />
path diverging from and converging to the zero state will be called d free ∗ . We<br />
<br />
note that the code is nonlinear, hence d free ∗ is not necessarily equal to d free . However,<br />
<br />
d free ∗ agrees with the simulated performance fairly well, and inspection of d free ∗<br />
<br />
reveals very<br />



helpful insight for code design (see Section 5.5.4). A SEE may be caused by parallel transitions<br />
<br />
(M = 1); in this case, Equation (5.18) takes the simpler form d free = ∆ p (S k ), where<br />
<br />
S k is the subset used to label a trellis group. Otherwise (M > 1), the minimum length M m of<br />
<br />
a SEE not caused by parallel transitions is related to the design parameters by<br />
<br />
the following theorem:<br />

<strong>The</strong>orem 2 <strong>The</strong> minimum length M m of a single error event not caused by parallel<br />

transitions <strong>for</strong> a regular trellis can be determined by<br />

M m = ⌊log b g⌋ + 2. (5.19)<br />

Proof:<br />
<br />
In the finite state machine, the number of shift-register chains is log 2 b and the number of<br />
<br />
shift-registers is log 2 S. By the construction rules of a regular trellis, the minimum number<br />
<br />
of registers per chain is ⌊log 2 S/ log 2 b⌋ = 1 + ⌊log b g⌋. Starting from the same state, the<br />
<br />
minimum number of stages for the minimum-length shift-register chain to converge to<br />
<br />
another identical state due to different inputs is then 2 + ⌊log b g⌋. An easier way to<br />
<br />
envision this is to consider the smallest number of stages for a “1” to propagate through the<br />
<br />
all-zero shift-register chains. This happens when the “1” is fed into the shortest chain,<br />
<br />
of length 1 + ⌊log b g⌋. It takes 1 + ⌊log b g⌋ stages for the “1” to be shifted to the end<br />
<br />
of the chain, and one more stage to be shifted out of the chain. ✷<br />

Intuitively, increasing the number of trellis groups, g, forces two diverging paths to<br />
<br />
traverse more branches before converging, and hence enlarges M m . A larger M m usually<br />
<br />
results in better performance. The effect of g on M m is listed in Table 5.1. The points<br />
<br />
where M m jumps are where we expect performance improvement. The sweet spot for b =<br />
<br />
2, 4, 8 seems to be g = 8, 4, 8, respectively, where the jumps still occur within a reasonable<br />
<br />
number of states. Quadrupling the block set, coupled with RMTCM modulation, can<br />
<br />
typically achieve a 2dB gain over the original block code without memory, which seems to<br />
<br />
be a good tradeoff between performance and complexity. Therefore, this work considers<br />
<br />
mostly g = 4. For high rate codes, signal set expansion (Equation (5.8)) is used to<br />
<br />
increase g.<br />
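Equation (5.19) reproduces Table 5.1 directly. A small helper using integer arithmetic, which avoids floating-point log pitfalls:

```python
def min_see_length(b, g):
    """Minimum length of a single error event not caused by parallel
    transitions (Theorem 2): M_m = floor(log_b g) + 2, computed with
    integer arithmetic."""
    k = 0
    while b ** (k + 1) <= g:
        k += 1
    return k + 2

# Reproduce the b = 2 row of Table 5.1
row_b2 = [min_see_length(2, g) for g in (1, 2, 4, 8)]  # -> [2, 3, 4, 5]
```

The "next jump" column of Table 5.1 is the smallest g at which `min_see_length` increases again for that b.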

<strong>The</strong>orem 3 If we assume that g and b are powers of 2 (as in the case of RMTCM) and<br />

there are no parallel transitions (p = 1), then the number of distinct paths T (n) starting<br />



M m g=1 g=2 g=4 g=8 next jump<br />
<br />
b=2 2 3 4 5 g=16<br />
<br />
b=4 2 2 3 3 g=16<br />
<br />
b=8 2 2 2 3 g=64<br />
<br />
b=16 2 2 2 2 g=16<br />
<br />
Table 5.1: The minimum length M m of a single error event for given b and g.<br />

from the zero state and ending at a reachable state i after n stages is only a function of n<br />
<br />
(independent of i):<br />
<br />
(b ≥ g): T(n) = 1 for n = 1; T(n) = b^(n−1) /g for n > 1. (5.20)<br />
<br />
(b < g): T(n) = 1 for n ≤ log b g + 1; T(n) = b^(n−1) /g for n > log b g + 1. (5.21)<br />

Proof:<br />
<br />
The shift-register chains employ only a linear operation: addition modulo 2, ⊕. Let the<br />
<br />
set of possible state sequences starting from the zero state be W = { ¯w | ¯w = (w 0 , . . . , w n ), w 0 = 0},<br />
<br />
and define the following binary operation on two state sequences as element-wise addition modulo 2:<br />
<br />
ā ⊕ ¯b = (a 1 ⊕ b 1 , . . . , a n ⊕ b n ). (5.22)<br />

It is easy to verify the following properties<br />

1. Closure: if ¯w 1 , ¯w 2 ∈ W, ¯w 1 ⊕ ¯w 2 ∈ W, because of the linear shift register chain<br />

structure. <strong>The</strong> state a k is a linear function of previous state a k−1 and input u k−1 ,<br />

similarly, b k is a linear function of b k−1 and input v k−1<br />

a k = f(a k−1 , u k−1 ) by trellis structure (5.23)<br />

b k = f(b k−1 , v k−1 ) by trellis structure (5.24)<br />

a k ⊕ b k = f(a k−1 ⊕ b k−1 , u k−1 ⊕ v k−1 ) by linearity (5.25)<br />



this suggests that for any previous state a k−1 ⊕ b k−1 , there exists a valid input<br />
<br />
u k−1 ⊕ v k−1 bringing the system to the next state a k ⊕ b k ; therefore (a k−1 ⊕ b k−1 , a k ⊕ b k )<br />
<br />
is a valid state transition. By induction, ¯w 1 ⊕ ¯w 2 is a valid sequence.<br />

2. Associativity: ( ¯w 1 ⊕ ¯w 2 ) ⊕ ¯w 3 = ¯w 1 ⊕ ( ¯w 2 ⊕ ¯w 3 ).<br />
<br />
3. Identity: the all-zero state sequence ¯0 satisfies ¯w ⊕ ¯0 = ¯w.<br />
<br />
4. Inverse: every state sequence ¯w is its own inverse: ¯w ⊕ ¯w = ¯0, so ¯w −1 = ¯w.<br />
<br />
5. Commutativity: ¯w 1 ⊕ ¯w 2 = ¯w 2 ⊕ ¯w 1 .<br />

Therefore, W forms an abelian group under the binary operation ⊕ defined in Equation<br />
<br />
(5.22). Denote the set of all state sequences starting from the zero state and ending<br />
<br />
in state i by W i = { ¯w | w 0 = 0, w n = i}; we want to show that all the W i ’s have equal<br />
<br />
cardinality T(n). The subset W 0 actually forms a subgroup. This can be verified by the<br />
<br />
subgroup theorem [Arm87]: if ā, ¯b ∈ W 0 , then<br />

ā ⊕ (¯b) −1 =ā ⊕ ¯b (by inverse property 4, (¯b) −1 = ¯b) (5.26)<br />

∈W (by closure property 1) (5.27)<br />

∈W 0 (by a n ⊕ b n = 0). (5.28)<br />

By the subgroup theorem, W 0 is a subgroup, and W i ’s, i ≠ 0, are cosets. All cosets have<br />

equal cardinality, there<strong>for</strong>e T (n) is independent of ending state i.<br />

If b ≥ g, at stage n = 1 all reachable states {0, . . . , b − 1} have T(1) = 1. At stage n = 2,<br />
all states become reachable, and T(2) = b/g. For any stage n > 2, T(n) = bT(n − 1).<br />
This proves Equation (5.20).<br />
If b < g, at stages n = 1, . . . , log_b g + 1, all reachable states have T(n) = 1.<br />
At stage n = log_b g + 1, all states become reachable. For any stage n > log_b g + 1,<br />
T(n) = bT(n − 1). This proves Equation (5.21). ✷<br />

Corollary 1 For any state with T(n) ≥ 2, the number of error events not caused by<br />
parallel transitions is<br />
(T(n) choose 2) = T(n)(T(n) − 1)/2 = (b^{2n−2} − g b^{n−1})/(2g^2).<br />

The proof is straightforward, because any two distinct paths ending at the same state<br />
form an error event. Because an error event may contain several SEEs, Corollary 1<br />
provides a lower bound on the number of SEEs. In Theorem 3, the existence of error<br />
events is equivalent to T(n) ≥ 2, which in turn is equivalent to n ≥ log_b g + 2. This<br />
provides another proof of Theorem 2 when b and g are powers of 2. Theorem 3 and<br />
Corollary 1 characterize the number of distinct paths, T(n), and the number of error<br />
events as a function of b, g and n. A larger g always reduces the number of SEEs,<br />
and therefore improves system performance. Because short SEEs tend to<br />
dominate performance, the most direct performance benefit of increasing g comes<br />
via reducing T(n) for small n.<br />
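The coset argument above can be checked numerically. The sketch below uses a hypothetical 8-state linear shift-register trellis over GF(2) with b = 4 branches per state (hence g = S/b = 2), chosen only for illustration; it counts all length-n state sequences from the zero state and confirms that every ending state carries the same number of paths, T(n) = b^{n−1}/g:<br />

```python
from collections import Counter

def count_paths(n_stages):
    """Count state sequences from the zero state of a toy linear trellis,
    grouped by ending state.  8 states, 4 branches per state, so g = 2."""
    def step(s, u):
        # next state = (u0, u1, s0 XOR s1): a linear map over GF(2), so the
        # set of valid state sequences is closed under element-wise XOR
        return (u & 1) | (u & 2) | (((s ^ (s >> 1)) & 1) << 2)

    counts = Counter({0: 1})        # number of paths ending in each state
    for _ in range(n_stages):
        nxt = Counter()
        for s, c in counts.items():
            for u in range(4):      # b = 4 branches out of every state
                nxt[step(s, u)] += c
        counts = nxt
    return counts

counts = count_paths(4)                  # n = 4 stages
assert len(set(counts.values())) == 1    # all cosets have equal cardinality
assert next(iter(counts.values())) == 4 ** 3 // 2   # T(n) = b^(n-1)/g = 32
```

Running the same check for other n ≥ 2 (e.g. n = 2 gives T = b/g = 2) reproduces the recursion T(n) = bT(n − 1).<br />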

We want to point out that an RMTCM design is a nonlinear code once the effects<br />
of labeling are taken into account. Theorems 2 and 3 and Corollary 1 characterize the general<br />
structure of a regular trellis without considering the effects of labeling; therefore they<br />
apply to any code with a regular trellis structure.<br />

For an RMTCM design, the ultimate design goal is to increase d_free. There are two<br />
means by which to increase d_free: if d_free is caused by parallel transitions (M = 1),<br />
reducing the number of parallel transitions p can increase d_free by improving the<br />
performance of the constituent NHC, ∆_p(S_k) ≥ ∆_p(S_{k+1}); otherwise, increasing the minimum<br />
length M_m of SEEs not caused by parallel transitions can increase d_free, by<br />
the observations on coding gain in Equation (5.15). From a design point of view, for a<br />
given rate, increasing g may increase M_m, while increasing b decreases p. A drawback of<br />
increasing b, by Equation (5.19), is that M_m might decrease (see Table 5.1). Empirical<br />
results suggest that the advantage of reducing p usually outweighs the disadvantage<br />
of a reduced M_m. For RMTCM in fast block fading channels, p should be reduced until<br />
p = 1; in quasi-static fading channels, p should be reduced to a sweet spot where<br />
large performance gains are achieved by a relatively small decrease of p. The sweet spot<br />
is closely related to the distance spectrum of the hierarchical codes.<br />

The performance of RMTCM can be improved by increasing either b or g; as a consequence,<br />
the number of states, S = gb, grows linearly with g and b. The decoder employs<br />
the Viterbi algorithm, whose decoding complexity is linear in the number of states S.<br />
To calculate the branch metrics for the Viterbi decoder, the received signal is correlated with<br />
O = gbp block codes. Therefore both the branch-metric calculation and the decoding<br />
are linear in g and b; furthermore, the branch-metric calculation is also linear in p.<br />
110


5.5 RMTCM Design Examples<br />

This section summarizes various RMTCM designs with 2 × 2 and 3 × 3 BPSK, QPSK and<br />
8PSK NHCs. For a detailed list of NHCs, please refer to Table 4.2 and the references<br />
therein.<br />

MTCM designs following the rules outlined in Section 5.3 are denoted regular<br />
MTCM designs; all others are irregular MTCM designs. The performance of the various MTCM<br />
designs and comparable space-time trellis codes is compared. We use two sets of parameters<br />
to distinguish different designs:<br />
1. Regular MTCM designs with the three parameters g = x, b = y and p = z are<br />
denoted by x × y × z.<br />
2. Irregular MTCM designs with the four parameters I = w, S = x, O = y and p = z<br />
are denoted by IwSxOyPz.<br />

The set partitioning and labelling for the 2 × 2 QPSK RMTCM designs are listed in Appendix<br />
B. A block code can be viewed as a trivial trellis code with a single state and<br />
every block on parallel transitions. For an RMTCM design x × y × z, the same-rate trivial<br />
trellis code (block code) 1 × 1 × yz, which uses only one of the g groups of STBCs, is<br />
often included for performance comparison.<br />

If the largest full diversity subset is S_m, the highest achievable rate of an RMTCM design<br />
with full diversity is<br />
r_m = m / N_c. (5.29)<br />
Via simulation we observe that expanding S_m 4 times is sufficient to achieve very good<br />
performance with a relatively small increase in complexity.<br />
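The interplay between g, b, p and the rate can be made concrete with a small enumeration. The constraints below (g = b = 1 for the trivial block code, otherwise g, b ≥ 2) and the rate formula log2(bp)/N_c are our reading of the Section 5.3 design rules, not a quoted algorithm; for |S_4| = 16 and N_c = 2 the sketch reproduces the seven 2 × 2 QPSK designs of Table 5.4:<br />

```python
from math import log2

def rmtcm_designs(n_codewords=16, n_c=2):
    """Enumerate candidate g x b x p designs with g*b*p = n_codewords.

    Assumed constraints (our reading of the Section 5.3 rules, not a quoted
    algorithm): every factor is a power of two, and either g = b = 1 (the
    trivial block code) or g, b >= 2 (a genuine trellis needs at least two
    groups and two branches per state)."""
    pows = [2 ** i for i in range(int(log2(n_codewords)) + 1)]
    designs = []
    for g in pows:
        for b in pows:
            if g * b > n_codewords:
                continue
            if g * b > 1 and (g < 2 or b < 2):
                continue
            p = n_codewords // (g * b)
            rate = log2(b * p) / n_c      # information bits per channel use
            designs.append((g, b, p, rate))
    return sorted(designs)

for g, b, p, rate in rmtcm_designs():
    print(f"{g} x {b} x {p}: rate {rate:.1f}, S = {g * b} states")
```

The enumeration makes the tradeoff visible at a glance: for a fixed code-set size, moving a factor of two from p into b or g trades parallel transitions for states.<br />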

The super-orthogonal space-time trellis codes [JS02] coincide with a subset of nonlinear<br />
hierarchical codes and their MTCM designs. The RMTCM designs in this work<br />
are more systematic and complete. We show several ways to improve the performance<br />
of the super-orthogonal space-time trellis codes: with added complexity, more than 2dB<br />
111


Label           S_k   S   Comment                       Figure<br />
1/2 I2S2O2p1    S_1   2   regular, catastrophic         B.1<br />
1/2 I2S2O4p1    S_2   2   irregular, non-catastrophic   B.2<br />
1/2 2 × 2 × 1   S_2   4   regular, non-catastrophic     B.3<br />
Table 5.2: Rate 1/2 MTCM designs for 2 × 2 BPSK.<br />

[Figure: FER vs SNR (dB) for rate-1/2 2 × 2 BPSK designs: 1 × 1 × 2 (block), I2S2O2p1 (catastrophic), I2S2O4p1, and 2 × 2 × 1.]<br />
Figure 5.3: Rate 1/2 MTCM designs for 2 × 2 BPSK.<br />

improvement can be made. The RMTCM designs based on 3 × 3 NHCs have no super-orthogonal<br />
counterparts.<br />

5.5.1 2 × 2 BPSK<br />
Table 5.2 summarizes three possible rate-1/2 designs based on S_1 and S_2; the detailed<br />
labelling is depicted in the corresponding figures in Appendix B. Figure 5.3 compares<br />
their performance. Even the 2-state catastrophic design I2S2O2p1 is more than half a dB<br />
better than the block code 1 × 1 × 2; the 2-state non-catastrophic design I2S2O4p1 is<br />
about 1dB better than the catastrophic design I2S2O2p1 by using S_2 instead of S_1; the<br />
4-state regular MTCM design 2 × 2 × 1 is 2dB better than the 2-state non-catastrophic<br />
design I2S2O4p1. We note that the 4-state regular MTCM design 2 × 2 × 1 achieves<br />
more than 3.5dB gain over the block code 1 × 1 × 2 by simply introducing 2-bit memory.<br />

112


Label           S_k   O(= gbp)   S(= gb)   Figure<br />
2/2 1 × 1 × 4   S_2   4          1<br />
2/2 2 × 2 × 2*  S_3   8          4         B.4<br />
2/2 2 × 4 × 1   S_3   8          8         B.5<br />
Table 5.3: Regular MTCM designs for 2 × 2 BPSK, |S_3| = 8 codewords.<br />

[Figure: FER vs SNR (dB) for rate-2/2 2 × 2 BPSK designs: 1 × 1 × 4 (block), 2 × 2 × 2, and 2 × 4 × 1.]<br />
Figure 5.4: 2 × 2 BPSK, Rate 2/2.<br />

The highest-rate RMTCM design based on the 2 × 2 BPSK NHC is determined by S_2:<br />
r_m = 2/2 = 1. For rate 2/2, the two possible expanded MTCM designs with S_3 are<br />
listed in Table 5.3; their performance is compared in Figure 5.4. Notice that coset C_1(8)<br />
contains only 8 codewords, which are partitioned as S_2 and S_2′ and assigned to the two<br />
trellis groups in designs 2 × 2 × 2 and 2 × 4 × 1. The RMTCM designs show 2dB gain over<br />
the block code 1 × 1 × 4, and the 8-state design 2 × 4 × 1 is slightly better than the 4-state<br />
design 2 × 2 × 2, which is reported in [JS02].<br />

5.5.2 2 × 2 QPSK<br />
The full diversity code sets are S_1–S_4; S_5 and S_6 are rank-deficient expanded code sets.<br />
The 2 × 2 QPSK code is a very good example of the general design procedures<br />
for RMTCM outlined in Section 5.3. The detailed labeling of each RMTCM design is<br />

113


Label            S_k   O(= gbp)   S(= gb)   Figure<br />
1/2 8 × 2 × 1    S_4   16         16        B.6<br />
2/2 4 × 2 × 2*   S_4   16         8         B.7<br />
2/2 4 × 4 × 1    S_4   16         16        B.8<br />
3/2 2 × 2 × 4    S_4   16         4         B.9<br />
3/2 2 × 4 × 2*   S_4   16         8         B.10<br />
3/2 2 × 8 × 1    S_4   16         16        B.11<br />
4/2 1 × 1 × 16   S_4   16         1<br />
Table 5.4: MTCM designs for 2 × 2 QPSK, |S_4| = 16 codewords.<br />

omitted; only their performance is compared. Readers can apply the general design<br />
procedure and the “cyclic shift rule” to reproduce all the RMTCM designs in the sequel.<br />

There are seven possible ways to factor |S_4| = 16; this yields seven possible RMTCM<br />
designs with S_4 (Table 5.4). Figure 5.5 compares their performance. For a given rate<br />
(I = bp), there may be several designs with different numbers of parallel transitions<br />
and states. As can be seen, the rate/performance/complexity tradeoffs are gracefully<br />
balanced. For rate r = 1.0, the two designs with 8 and 16 states have roughly the same<br />
performance; therefore the 8-state design 4 × 2 × 2 is preferred. The reason is that the<br />
two parallel transitions have coding gain ∆_p(S_1) = 64, which is large enough not to affect<br />
the d_free of the overall RMTCM design. Similarly, for rate r = 1.5, the 8-state and<br />
16-state designs have about the same performance, which is about 2dB better than that<br />
of the 4-state design; therefore the 8-state design 2 × 4 × 2 is preferred. The performance of<br />
these seven designs in Ricean channels (h = 1) is shown in Figure 5.6. Comparing Figure<br />
5.6 with Figure 5.5, the performance of all codes is about 4dB better in the Ricean<br />
channel than in the Rayleigh channel. If we plug h = [1, 1]^T and Σ_h = I<br />

into Equation (4.12),<br />
E_h[P(D_i → D_j | h)] ≤ e^{σ_t} / ( |(1/4)Σ_h| |Φ| ), (5.30)<br />
the SNR gain is 10 log e = 10 log 2.718 = 4.3dB, which agrees with the simulations. Note<br />
that the purpose of this comparison is to demonstrate that the code design is independent<br />
of the Rayleigh or Ricean channel; therefore we do not normalize the power of the fading to<br />

114


[Figure: FER vs SNR (dB) for the seven 2 × 2 QPSK designs with |S_4| = 16: 1 × 1 × 16 (r = 2.0); 2 × 2 × 4, 2 × 4 × 2, 2 × 8 × 1 (r = 1.5); 4 × 2 × 2, 4 × 4 × 1 (r = 1.0); 8 × 2 × 1 (r = 0.5).]<br />
Figure 5.5: 2 × 2 QPSK, |S_4| = 16.<br />

[Figure: FER vs SNR (dB) for the same seven |S_4| = 16 designs in Ricean fading (h = 1).]<br />
Figure 5.6: 2 × 2 QPSK, |S_4| = 16, Ricean fading channels (h = 1).<br />

115


[Figure: FER vs SNR (dB) for rate-4/2 2 × 2 QPSK designs with |S_5| = 32 (1 × 1 × 16, Tarokh 4-state, 2 × 2 × 8, 2 × 4 × 4, 2 × 8 × 2, 2 × 16 × 1) in quasi-static and fast block fading.]<br />
Figure 5.7: 2 × 2 QPSK, |S_5| = 32, quasi-static vs fast block fading channels.<br />

unity. If the power is normalized, then plugging h′ = [0.5, 0.5]^T and Σ′_h = 0.5Σ_h into<br />
Equation (4.12) yields<br />
E_h[P(D_i → D_j | h)] ≤ (√e/2)^{σ_t} / ( |(1/4)Σ_h| |Φ| ), (5.31)<br />
and the SNR loss would be 10 log(2/√e) = 0.8dB.<br />
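The two decibel figures quoted here are plain base-10 conversions, and can be checked with nothing beyond the standard math module:<br />

```python
from math import e, log10, sqrt

# Unnormalized Ricean bound vs Rayleigh: a factor e per diversity
# dimension, i.e. an SNR shift of 10*log10(e) dB.
gain_db = 10 * log10(e)

# With the fading power normalized, the factor becomes sqrt(e)/2,
# i.e. an SNR shift of 10*log10(2/sqrt(e)) dB.
loss_db = 10 * log10(2 / sqrt(e))

print(round(gain_db, 2), round(loss_db, 2))   # 4.34 0.84
```

Both values round to the 4.3dB gain and 0.8dB loss quoted in the text.<br />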

The rate r = 2 single-state MTCM design with S_4 is just the block code without memory;<br />
it has very poor performance. Expanded RMTCM designs based on S_5 and S_6 can<br />
improve the performance at this rate. Figure 5.7 compares the performance of designs<br />
with S_5 in both quasi-static and fast fading channels.<br />

• In quasi-static fading channels, the 4-state design is reported in [JS02]; it yields<br />
about 1.7dB gain over the corresponding 4-state QPSK design by Tarokh [TSC98].<br />
As p decreases from 8 to 1 (and the number of states increases from 8 to 64),<br />
performance improves consistently by an extra 0.8dB. Similar observations can<br />
be made for designs with S_6. Designs based on S_6 have about 0.4dB gain over<br />
the corresponding (same b) designs based on S_5 by doubling g. The 4-state design<br />
2 × 2 × 8 is reported in [JS02]; the 32-state design 4 × 8 × 2 and the 8-state design<br />
4 × 2 × 8 are 1dB and 0.5dB better than 2 × 2 × 8, respectively.<br />

• In fast fading channels, all designs with parallel transitions eventually achieve the same level of<br />
diversity (∆_H = 2) as in quasi-static fading channels; the design 2 × 8 × 2<br />
shows this tendency only at SNR > 16dB. The design with no parallel transitions,<br />
2 × 16 × 1, is capable of exploiting more of the diversity (∆_H = 12) in the channel, as<br />
predicted by Equation (5.14).<br />

Figure 5.9 compares the serially concatenated system with RMTCM. The concatenated<br />
system uses optimal rate-1/2 convolutional encoders from [LC83]. The three generators<br />
are {64, 74}, {46, 72} and {554, 744}, which generate trellises with 8, 16<br />
and 64 states, respectively. The bit mapper maps every four output bits to one STBC<br />
from S_4. Overall, two information bits are transmitted over every STBC, which lasts<br />
two symbols; therefore the rate is r = 1. The mapper is designed to imitate the hierarchical<br />
structure of the coding gain ∆_p in S_4 in terms of the Hamming distance d_H, in the<br />
hope that a larger Hamming distance, which is the optimization goal of the convolutional encoder,<br />
will result in a larger coding gain, which is the performance measure of interest in a<br />
MIMO fading channel. Figure 5.8 shows the correspondence between bit sequences and<br />
block codes in S_4. Let c_1 and c_2 be two binary codewords and d be any binary vector; then<br />
d_H(c_1 − c_2) = d_H((c_1 + d) − (c_2 + d)). Hence the isometry for Hamming distance is<br />
simply addition modulo-2 of any binary vector, via which the hierarchical structure of<br />
the Hamming distance of the bit sequences can be described in Table 5.5, in a manner similar<br />
to the hierarchical structure of the coding gain in Table 4 of [GM04]. The mismatch<br />
between the optimization goal of convolutional encoding and the performance measure for<br />
MIMO fading channels yields the relative performance loss of such a concatenated system:<br />
the 8-, 16- and 64-state concatenated systems are, respectively, 2.5dB, 1dB and 0.5dB<br />
worse than the corresponding RMTCM designs.<br />
worse than the corresponding RMTCM designs.<br />

5.5.3 2 × 2 8PSK<br />
For the 2 × 2 8PSK NHCs, the sets up to S_6 are full diversity code sets; S_7 and S_8 are expanded code<br />
sets. The highest achievable rate with full diversity is r_m = 6/2 = 3. Various designs<br />
with rates from 1/2 to 6/2 = 3 can be realized. We note that for a given rate, p = 2 and<br />

117


S_k   d            S_k                                    S_k′                                   d_H<br />
S_1   (1,1,1,1)    {(0,0,0,0)}                            {(1,1,1,1)}                            4<br />
S_2   (1,1,0,0)    {(0,0,0,0), (1,1,1,1)}                 {(1,1,0,0), (0,0,1,1)}                 2<br />
S_3   (1,0,1,0)    {(0,0,0,0), (1,1,1,1),                 {(1,0,1,0), (0,1,0,1),                 2<br />
                    (1,1,0,0), (0,0,1,1)}                  (0,1,1,0), (1,0,0,1)}<br />
S_4   (0,0,0,1)    {(0,0,0,0), (1,1,1,1),                 {(0,0,0,1), (1,1,1,0),                 1<br />
                    (1,1,0,0), (0,0,1,1),                  (1,1,0,1), (0,0,1,0),<br />
                    (1,0,1,0), (0,1,0,1),                  (1,0,1,1), (0,1,0,0),<br />
                    (0,1,1,0), (1,0,0,1)}                  (0,1,1,1), (1,0,0,0)}<br />
Table 5.5: Hierarchical structure in Hamming distance.<br />

[Figure: bit map pairing each 4-bit label with a block code in S_4:<br />
(0000) 2, (1111) 168; (0001) 30, (1110) 180; (1100) 42, (0011) 128; (1101) 54, (0010) 156;<br />
(1010) 93, (0101) 247; (1011) 105, (0100) 195; (0110) 117, (1001) 223; (0111) 65, (1000) 235.]<br />
Figure 5.8: Bit map of 2 × 2 QPSK, |S_4| = 16, for serial concatenation.<br />

118


[Figure: FER vs SNR (dB) for rate-1.0 2 × 2 QPSK: concatenated systems with S = 8, 16 and 64 states vs the MTCM designs 4 × 2 × 4, 4 × 4 × 2 and 16 × 4 × 1.]<br />
Figure 5.9: 2 × 2 QPSK, |S_4| = 16, serial concatenation vs RMTCM.<br />

p = 1 have about the same performance, because ∆_p(S_1) = 64, which is large enough<br />
not to affect d_free. Figure 5.10 compares the rate-6/2 designs with the expanded sets S_7 and S_8. By<br />
doubling the number of states, the 8-state designs 2 × 4 × 16 and 4 × 2 × 32 are both about<br />
0.5dB better than the 4-state design 2 × 2 × 32, which is reported in [JS02]. For b = 16<br />
and 32, the designs based on S_8 are about 0.3dB and 0.5dB better than those based on S_7.<br />
The best design, 4 × 32 × 2, is more than 2dB better than the design 2 × 2 × 32 reported<br />
in [JS02].<br />

5.5.4 3 × 3 BPSK<br />
To the best of our knowledge, these 3 × 3 codes are new to the literature. Figure 5.11<br />
compares the performance of designs with S_3. Next, we analyze the d*_free of two rate<br />
r = 2/3 designs: the 8-state design 2 × 4 × 1 and the 4-state design 2 × 2 × 2. Figure 5.12<br />
shows the shortest diverging path for the design 2 × 4 × 1. We denote the coding gain between<br />

119


[Figure: FER vs SNR (dB) for rate-6/2 2 × 2 8PSK designs, |S_7| = 128 vs |S_8| = 256: 1 × 1 × 64, 2 × 2 × 32, 2 × 4 × 16, 2 × 16 × 4, 2 × 32 × 2, 4 × 2 × 32, 4 × 16 × 4, 4 × 32 × 2.]<br />
Figure 5.10: Rate 6/2, 2 × 2 8PSK, |S_7| = 128 vs |S_8| = 256.<br />

two paths with labels (C_1, C_2, . . . , C_M) and (C′_1, C′_2, . . . , C′_M) by ∆_p((C_1, C_2, . . . , C_M) −<br />
(C′_1, C′_2, . . . , C′_M)). According to Equation (5.16),<br />
∆_p((C_1, C_2, . . . , C_M) − (C′_1, C′_2, . . . , C′_M)) > Σ_{i=1}^{M} ∆_p((C_i) − (C′_i)). (5.32)<br />

The 8-state design has no parallel transitions; the shortest path starting from the zero<br />
state and merging back to the zero state takes 3 stages:<br />
d*_free = ∆_p((330, 413, 251) − (1, 1, 1)) = 2048 (5.33)<br />
≫ ∆_p((330) − (1)) + ∆_p((413) − (1)) + ∆_p((251) − (1)) (5.34)<br />
= 64 + 64 + 64 = 192. (5.35)<br />
The gain achieved by increasing the number of stages in the shortest path is usually much<br />
larger than that predicted by Minkowski’s Inequality. The 4-state design has parallel<br />

120


[Figure: FER vs SNR (dB) for 3 × 3 BPSK designs with |S_3| = 8: 1 × 1 × 8 (r = 3/3), 2 × 2 × 2 and 2 × 4 × 1 (r = 2/3), 4 × 2 × 1 (r = 1/3).]<br />
Figure 5.11: 3 × 3 BPSK, |S_3| = 8.<br />

transitions; the shortest path takes just one stage, hence the free distance is just the<br />
coding gain of the block code set S_1:<br />
d*_free = ∆_p(S_1) = 256.<br />
Due to the large difference in d*_free, the 8-state design yields 1.5dB gain over the 4-state<br />
design.<br />

For rate r = 3/3, Figure 5.13 compares the performance of 2 × 8 designs with S_4, 4 × 8<br />
designs with S_5, and 8 × 8 designs with S_6. For the same g, the performance is always<br />
improved by reducing the number of parallel transitions. For b = 2, increasing g does not<br />
improve performance, because the limiting factor is the parallel transitions, p = 8. For b = 4,<br />
increasing g from 2 to 4 yields about 0.5dB gain, but increasing g from 4 to 8 yields no<br />
noticeable gain. For b = 8, more than 0.5dB gain is observed when g increases from<br />
2 to 4, but only about 0.3dB gain is observed when g increases from 4 to 8. We conjecture<br />
that for most RMTCM designs with b = 2 and b = 4, g = 4 is sufficient.<br />

121


[Figure: trellis diagram showing the shortest diverging path; branch labels are the codeword indices 1, 172, 251, 330, 304, 413, 486 and 87.]<br />
Figure 5.12: 3 × 3 BPSK, d*_free of 2 × 4 × 1.<br />

[Figure: FER vs SNR (dB) for rate-3/3 3 × 3 BPSK designs with S_4, S_5 and S_6: g × b × p for g ∈ {2, 4, 8} and (b, p) ∈ {(2, 4), (4, 2), (8, 1)}.]<br />
Figure 5.13: Rate 3/3, 3 × 3 BPSK, S_4, S_5 and S_6.<br />

122


[Figure: FER vs SNR (dB) for 3 × 3 QPSK designs with |S_5| = 32 at rates 5/3 down to 1/3: 1 × 1 × 32; 2 × 2 × 8, 2 × 4 × 4, 2 × 8 × 2, 2 × 16 × 1; 4 × 2 × 4, 4 × 4 × 2, 4 × 8 × 1; 8 × 2 × 2, 8 × 4 × 1; 16 × 2 × 1.]<br />
Figure 5.14: 3 × 3 QPSK, |S_5| = 32.<br />

5.5.5 3 × 3 QPSK<br />
S_1 to S_5 are full diversity code sets; S_6, S_7 and S_8 are expanded code sets. The highest<br />
achievable rate of an RMTCM design is r_m = 5/3. Figure 5.14 shows RMTCM designs<br />
with S_5 for various rates. Figure 5.15 shows rate-5/3 RMTCM designs with S_7. By<br />
increasing memory and decreasing the number of parallel transitions, up to 5dB gain can<br />
be obtained over the 1-state block code.<br />

5.5.6 3 × 3 8PSK<br />
S_1 to S_7 are full diversity code sets; S_8 is an expanded code set. The highest achievable rate<br />
of an RMTCM design is r_m = 7/3. Based on S_7, there are four rate-4/3, five rate-5/3 and six<br />
rate-6/3 RMTCM designs. For any rate, consistent performance improvement is observed<br />
with increased b, and the best design offers more than 5dB gain over the corresponding block<br />
code. Figure 5.16 compares the expanded high-rate designs based on S_8 with designs<br />
based on S_7; the performance improvement is large for b ≥ 8.<br />

123


[Figure: FER vs SNR (dB) for rate-5/3 3 × 3 QPSK designs with |S_7| = 128: 1 × 1 × 32, 4 × 2 × 16, 4 × 4 × 8, 4 × 8 × 4, 4 × 16 × 2.]<br />
Figure 5.15: Rate 5/3, 3 × 3 QPSK, |S_7| = 128.<br />

[Figure: FER vs SNR (dB) for rate-6/3 3 × 3 8PSK designs, |S_7| = 128 vs |S_8| = 256: 1 × 1 × 64; 2 × 2 × 32, 2 × 4 × 16, 2 × 8 × 8, 2 × 64 × 1; 4 × 2 × 32, 4 × 4 × 16, 4 × 8 × 8, 4 × 64 × 1.]<br />
Figure 5.16: 3 × 3 8PSK, rate = 6/3, |S_7| = 128 vs |S_8| = 256.<br />

124


Chapter 6<br />
Conclusions<br />
This dissertation covers four topics in space-time coding pursued during my Ph.D.<br />
research: Chapter 2 discusses code performance and design in spread systems, Chapter<br />
3 considers suboptimal linear decoders, Chapter 4 discusses the construction and<br />
properties of NHCs, and finally Chapter 5 discusses the design procedure and performance<br />
analysis of RMTCM. The framework of Chapter 2 was developed during my master’s research<br />
at Ohio State University, but most of the analyses and proofs were tightened as part of<br />
my Ph.D. work; therefore it is included in this dissertation.<br />

The original intention in studying suboptimal decoders in Chapter 3 was to find code<br />
structures more suitable for these suboptimal linear decoders, but this proved<br />
a very challenging task, because no closed-form upper bounds could be obtained for these<br />
decoders.<br />

Motivated by the hierarchical structure of the optimal codes found by computer search in<br />
Chapter 2, a methodology to build optimized codes hierarchically via distance-preserving<br />
isometries is proposed in Chapter 4. The procedure can yield arbitrary STBCs with<br />
no constraints on the block size or code structure (unitary or orthogonal). The resulting<br />
hierarchical codes yield not only optimized coding gain at each layer of the hierarchy,<br />
but also symmetry and structure that can be exploited in system implementation and<br />
integration. The interaction between the initial codeword and the isometries is still not completely<br />
clear. Further investigation could eliminate the computer-search aspect of our<br />
construction method altogether and result in a fully systematic code design.<br />

General RMTCM design techniques with NHCs and their expansions are presented<br />
in Chapter 5. For a given set of nonlinear hierarchical codes, various MTCM designs are<br />
provided to show how to balance rate/performance/complexity tradeoffs. Two design<br />
parameters, the number of trellis groups g and the number of branches emerging from<br />
any state b, are used to improve the performance of an MTCM design; this work endeavors<br />
to find the sweet spot where g and b are most effectively used to boost performance.<br />
As much as 2dB gain is observed over other MTCM designs via our systematic approach.<br />
There is still some freedom in the trellis labeling; our approach, the cyclic-shift rule, is<br />
perhaps not the best. Proper labeling may yield a recursive structure that facilitates serial<br />
concatenation of interleaved codes [BDMP98].<br />

126


Reference List<br />

[AF01]<br />

[Ala98]<br />

[AM00]<br />

Defne Aktas and Michael P. Fitz. Distance spectrum analysis of space-time<br />

trellis coded modulations in quasi-static rayleigh fading channels. In submitted<br />

IEEE Transactions on In<strong>for</strong>mation <strong>The</strong>ory, January 2001.<br />

S. M. Alamouti. A simple transmit diversity technique <strong>for</strong> wireless communications.<br />

IEEE Trans. on Select Areas in Comm., 16(8):1451–58, October<br />

1998.<br />

Emre. Aktas and Urbashi Mitra. Complexity reduction in subspace based<br />

blind channel identification <strong>for</strong> DS/CDMA systems. IEEE Transactions on<br />

Communications, 48(8):1392–1404, Aug 2000.<br />

[Arm87] M. A. Armstrong. Groups and Symmetry. Springer-Verlog, New York, 1987.<br />

[ATP98]<br />

Siavash M. Alamouti, Vahid Tarokh, and Patrick Poon. Trellis-coded modulation<br />

and transmit diversity: Design criteria and per<strong>for</strong>mance evaluation. In<br />

ICUPC, pages 703–707. Florence, Italy, October 5–9 1998.<br />

[BBH02] Stephan Bäro, Gerhard Bauch, and Axel Hansmann. Improved codes <strong>for</strong><br />

space-time trellis-coded modulation. IEEE Communications Letters, 4:20–<br />

22, January 2002.<br />

[BDMP98] Sergio Benedetto, Dariush Divsalar, Guido Montorsi, and Fabrizio Pollara.<br />

Serial concatenation of interleaved codes: Per<strong>for</strong>mance analysis, design and<br />

iterative decoding. IEEE Transactions on In<strong>for</strong>mation <strong>The</strong>ory, 44(3):909–<br />

926, May 1998.<br />

[BDMS91] E. Biglieri, D. Divsalar, P. J. McLane, and M. K. Simon. Introduction to<br />

Trellis-<strong>Codes</strong> Modulation with Applications. Macmillan, New York, 1991.<br />

[BTT02] Ezio Biglieri, Giorgio Taricco, and Antonia Tulino. Per<strong>for</strong>mance of spacetime<br />

codes with linear receiver interfaces. In 2002 Conference on In<strong>for</strong>mation<br />

Sciences and <strong>Systems</strong>, pages 20–22. Princeton University, NJ, March 2002.<br />

[BV03] Matthias Brehler and Mahesh K. Varanasi. Optimum receivers and lowdimensional<br />

spreaded modulation <strong>for</strong> multiuser space-time communications.<br />

IEEE Transactions on In<strong>for</strong>mation <strong>The</strong>ory, 49:901–918, April 2003.<br />

[CYV99] Zhuo Chen, Jinhong Yuan, and Branka Vucetic. An improved space-time<br />

trellis coded modulation scheme on slow rayleigh fading channels. In IEEE<br />

WCNC’99. New Orleans, LA, September 1999.<br />

127


[DS87]<br />

D. Divsalar and M. K. Simon. Trellis-coded modulation <strong>for</strong> 4800 to 9600 bps<br />

transmission over a fading satellite channel. IEEE Journal on Selected Areas<br />

of Communications, 5(2):162–175, Feburary 1987.<br />

[DSAM03] Mohamed Oussama Damen, Anahid Safavi, and Karim Abed-Meraim. On CDMA with space-time codes over multipath fading channels. IEEE Transactions on Wireless Communications, 2:11–19, January 2003.

[FG98] Gerard J. Foschini and M. J. Gans. On limits of wireless communications in a fading environment when using multiple antennas. Wireless Personal Communications, 6:311–335, 1998.

[Fos96] Gerard J. Foschini. Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas. Bell Labs Technical Journal, pages 41–59, Autumn 1996.

[GARH01] Hesham El Gamal and A. Roger Hammons, Jr. A new approach to layered space-time coding and signal processing. IEEE Transactions on Information Theory, 47:2321–2334, September 2001.

[GDF91] G. David Forney, Jr. Geometrically uniform codes. IEEE Transactions on Information Theory, 37(5):1241–1260, September 1991.

[Gen00] Jifeng Geng. Optimal space-time block codes for CDMA systems. Master's thesis, Ohio State University, December 2000.

[GF01] Hesham El Gamal and Michael P. Fitz. Generalized maximum ratio combining for space-time decoding. Manuscript, March 2001.

[GFBK99] J. C. Guey, M. P. Fitz, M. R. Bell, and W. Y. Kuo. Signal design for transmitter diversity wireless communication systems over Rayleigh fading channels. IEEE Transactions on Communications, 46:527–537, April 1999.

[GM01] Jifeng Geng and Urbashi Mitra. Optimal space-time block codes for reduced complexity DS-CDMA decoders. In Proc. 35th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, November 2001.

[GM03a] Jifeng Geng and Urbashi Mitra. MTCM design with nonlinear hierarchical space-time block codes. To be submitted to IEEE Transactions on Information Theory, 2003.

[GM03b] Jifeng Geng and Urbashi Mitra. Nonlinear hierarchical space-time block codes. In Proc. 37th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, November 2003.

[GM04] Jifeng Geng and Urbashi Mitra. Nonlinear hierarchical space-time block codes, Part I: Construction. Submitted to IEEE Transactions on Communications, January 2004.



[GMF03] Jifeng Geng, Urbashi Mitra, and Michael P. Fitz. Space-time block codes in multipath CDMA systems. To be submitted to IEEE Transactions on Information Theory, October 2003.

[Gri99] J. Grimm. Transmitter diversity code design for achieving full diversity on Rayleigh fading channels. PhD thesis, Purdue University, 1999.

[GVM02] Jifeng Geng, Madhavan Vajapeyam, and Urbashi Mitra. Distance spectrum of space-time block codes: A union bound point of view. In Proc. 36th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, November 2002.

[HJ91] Roger A. Horn and Charles R. Johnson. Topics in Matrix Analysis. Cambridge University Press, New York, 1991.

[HM00] Bertrand M. Hochwald and Thomas L. Marzetta. Unitary space-time modulation for multiple-antenna communications in Rayleigh flat fading. IEEE Transactions on Information Theory, 46(2):543–564, March 2000.

[HMP01] Bertrand Hochwald, Thomas L. Marzetta, and Constantinos B. Papadias. A transmitter diversity scheme for wideband CDMA systems based on space-time spreading. IEEE Journal on Selected Areas in Communications, 19(1):48–60, January 2001.

[HS00] Bertrand M. Hochwald and Wim Sweldens. Differential unitary space-time modulation. IEEE Transactions on Communications, 48(12):2041–2052, December 2000.

[Hug00] Brian L. Hughes. Differential space-time modulation. IEEE Transactions on Information Theory, 46(7):2567–2578, November 2000.

[JS02] Hamid Jafarkhani and Nambi Seshadri. Super-orthogonal space-time trellis codes. IEEE Transactions on Information Theory, 49:937–950, April 2002.

[LC83] Shu Lin and Daniel J. Costello. Error Control Coding: Fundamentals and Applications. Prentice-Hall, Englewood Cliffs, New Jersey, 1983.

[MH99] Thomas L. Marzetta and Bertrand M. Hochwald. Capacity of a mobile multiple-antenna communication link in Rayleigh flat fading. IEEE Transactions on Information Theory, 45:139–157, January 1999.

[NSTC98] Ayman F. Naguib, Nambi Seshadri, Vahid Tarokh, and A. R. Calderbank. Combined interference cancellation and ML decoding of space-time block codes. In Proc. IEEE DSP Workshop, Bryce Canyon, UT, 1998.

[Pro95] J. G. Proakis. Digital Communications. McGraw-Hill, New York, NY, 1995.

[SF02] Siwaruk Siwamogsatham and Michael P. Fitz. Improved high-rate space-time codes from expanded STB-MTCM construction. Submitted to IEEE Transactions on Information Theory, February 2002.



[SHHS01] A. Shokrollahi, B. Hassibi, B. M. Hochwald, and W. Sweldens. Representation theory for high-rate multiple-antenna code design. IEEE Transactions on Information Theory, 47(6):2335–2367, September 2001.

[Sle68] David Slepian. Group codes for the Gaussian channel. The Bell System Technical Journal, 47:575–602, April 1968.

[Tel95] I. Emre Telatar. Capacity of multi-antenna Gaussian channels. Technical Memorandum, Bell Laboratories, Lucent Technologies, October 1995.

[TH01] O. Tirkkonen and A. Hottinen. Classification of complex space-time block codes. Submitted to IEEE Transactions on Information Theory, 2001.

[TJC99] V. Tarokh, H. Jafarkhani, and A. R. Calderbank. Space-time block codes from orthogonal designs. IEEE Transactions on Information Theory, 45(5):1456–1467, June 1999.

[TNSC99] Vahid Tarokh, Ayman Naguib, Nambi Seshadri, and A. R. Calderbank. Space-time codes for high data rate wireless communication: Performance criteria in the presence of channel estimation errors, mobility, and multiple paths. IEEE Transactions on Communications, 47(2):199–206, February 1999.

[TSC98] V. Tarokh, N. Seshadri, and A. R. Calderbank. Space-time codes for high data rate wireless communication: Performance criterion and code construction. IEEE Transactions on Information Theory, 44(2):744–765, March 1998.

[Ung82] Gottfried Ungerboeck. Channel coding with multilevel/phase signals. IEEE Transactions on Information Theory, 28:55–67, January 1982.

[VWZP89] Andrew J. Viterbi, Jack K. Wolf, Ephraim Zehavi, and Roberto Padovani. A pragmatic approach to trellis-coded modulation. IEEE Communications Magazine, pages 11–17, July 1989.

[WP99] Xiaodong Wang and Vincent Poor. Space-time multiuser detection in multipath CDMA channels. IEEE Transactions on Signal Processing, 47(9):2356–2374, September 1999.

[YB02] Qing Yan and Rick S. Blum. Optimum space-time convolutional codes. In IEEE Wireless Communications and Networking Conference (WCNC), pages 1351–1355, Chicago, IL, September 23–28, 2002.

[ZGM02] Shengli Zhou, G. B. Giannakis, and C. Le Martret. Chip-interleaved block-spread code division multiple access. IEEE Transactions on Communications, 50:235–248, February 2002.



Appendix A

Code Lists

Tables of the nonlinear hierarchical codes are listed in this appendix for different block sizes and constellations. There are two kinds of tables. The first kind lists, for each layer of the code, the isometries used to generate that layer, together with the minimum diversity gain ∆_H and the coding gains ∆_p and ∆_s. The second kind lists the full code set; it is used only when a code list is too large to fit in a table of the first kind. Each table of the first kind is divided into two sections separated by double lines: the first section lists the full-diversity code, and the second lists the expanded code (see Equation (5.8) for an explanation of expanded codes).
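The isometries in the tables are given as matrix pairs (U, P). As a rough sanity check on what such a pair does, the sketch below verifies that a pair of unitary matrices preserves pairwise Frobenius distances between codeword matrices, which is the defining property of an isometry. Two assumptions are made purely for illustration: the pair is taken to act on a codeword matrix C as C ↦ U C P, and the example codewords are arbitrary 2 × 2 BPSK matrices, not the indexed codewords listed in the tables.

```python
import numpy as np

# A (U, P) pair in the style of the tables; the action C -> U @ C @ P
# is an assumption made here for illustration.
U = np.array([[-1, 0], [0, -1]])
P = np.array([[0, 1], [1, 0]])

# Two arbitrary 2x2 BPSK codeword matrices (entries +/-1).
C1 = np.array([[1, -1], [1, 1]])
C2 = np.array([[1, 1], [-1, 1]])

def apply_isometry(C, U, P):
    """Map a codeword matrix through the pair (U, P)."""
    return U @ C @ P

# Unitary U and P leave the pairwise Frobenius distance unchanged.
d_before = np.linalg.norm(C1 - C2)
d_after = np.linalg.norm(apply_isometry(C1, U, P) - apply_isometry(C2, U, P))
assert np.isclose(d_before, d_after)
```

Note that `np.linalg.norm` defaults to the Frobenius norm for matrix arguments, so no extra arguments are needed.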

(Matrices below are written row by row, with semicolons separating rows.)

S_1: U = [−1 0; 0 −1], S_1 = {1}; P = [1 0; 0 1], S_1' = {14}; ∆_H = 2, ∆_p = 64.00, ∆_s = 16.00

S_2: U = [0 1; −1 0], S_2 = {1, 14}; P = [1 0; 0 1], S_2' = {7, 8}; ∆_H = 2, ∆_p = 16.00, ∆_s = 8.00

S_3: U = [1 0; 0 −1], S_3 = {1, 14, 7, 8}; P = [1 0; 0 1], S_3' = {2, 13, 4, 11}; ∆_H = 1, ∆_p = 8.00, ∆_s = 8.00

Table A.1: 2 × 2 BPSK codes.
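The ∆_H and ∆_p columns follow the usual rank and determinant criteria for space-time codes [TSC98]: for a codeword pair C, C', the achieved diversity is governed by the rank of the difference D = C − C', and the coding gain by the product of the nonzero eigenvalues of D D^H. A minimal sketch of computing these two quantities (the example matrices are illustrative and are not entries of the tables):

```python
import numpy as np

def rank_and_product(C1, C2, tol=1e-9):
    """Rank of D = C1 - C2 and product of the nonzero eigenvalues of
    D D^H -- the quantities behind the rank and determinant criteria."""
    D = C1 - C2
    eigs = np.linalg.eigvalsh(D @ D.conj().T)
    nonzero = eigs[eigs > tol]
    return len(nonzero), float(np.prod(nonzero))

# Illustrative 2x2 BPSK codewords (not taken from the tables).
C1 = np.array([[1.0, 1.0], [1.0, -1.0]])
C2 = np.array([[-1.0, -1.0], [-1.0, 1.0]])

rank, prod = rank_and_product(C1, C2)
# Here D = 2*C1 and C1 @ C1.T = 2*I, so D @ D.T = 8*I:
# rank 2 (full diversity), eigenvalue product 64.
```

The tabulated ∆_H and ∆_p are the minima of these quantities over all codeword pairs in a layer.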




S_1: U = [−1 0; 0 −1], S_1 = {2}; P = [1 0; 0 1], S_1' = {168}; ∆_H = 2, ∆_p = 16.00, ∆_s = 16.00

S_2: U = [0 1; −1 0], S_2 = {2, 168}; P = [1 0; 0 1], S_2' = {42, 128}; ∆_H = 2, ∆_p = 16.00, ∆_s = 8.00

S_3: U = [i 0; 0 −i], S_3 = {2, 168, 42, 128}; P = [1 0; 0 1], S_3' = {93, 247, 117, 223}; ∆_H = 2, ∆_p = 16.00, ∆_s = 8.00

S_4: U = [1 0; 0 −i], S_4 = {2, 168, 42, 128, 93, 247, 117, 223}; P = [1 0; 0 i], S_4' = {30, 180, 54, 156, 105, 195, 65, 235}; ∆_H = 2, ∆_p = 4.00, ∆_s = 4.00

S_5: U = [1 0; 0 i], S_5 = {2, 168, 42, 128, 93, 247, 117, 223, 30, 180, 54, 156, 105, 195, 65, 235}; P = [1 0; 0 1], S_5' = {7, 173, 47, 133, 82, 248, 122, 208, 19, 185, 59, 145, 110, 196, 70, 236}; ∆_H = 1, ∆_p = 4.00, ∆_s = 4.00

S_6: U = [1 0; 0 1], S_6 = {2, 168, 42, 128, 93, 247, 117, 223, 30, 180, 54, 156, 105, 195, 65, 235, 7, 173, 47, 133, 82, 248, 122, 208, 19, 185, 59, 145, 110, 196, 70, 236}; P = [0 1; 1 0], S_6' = {8, 162, 138, 32, 87, 253, 213, 127, 75, 225, 201, 99, 150, 60, 20, 190, 13, 167, 143, 37, 88, 242, 218, 112, 76, 230, 206, 100, 155, 49, 25, 179}; ∆_H = 1, ∆_p = 4.00, ∆_s = 4.00

Table A.2: 2 × 2 QPSK.

[The entries of this code list are illegible in the source.]

Table A.3: 2 × 2 8PSK code list.



S_1: U = [−1 0; 0 −1], S_1 = {4}; P = [1 0; 0 1], S_1' = {2336}; ∆_H = 2, ∆_p = 64, ∆_s = 16

S_2: U = [0 1; −1 0], S_2 = {4, 2336}; P = [1 0; 0 1], S_2' = {292, 2048}; ∆_H = 2, ∆_p = 16, ∆_s = 8

S_3: U = [i 0; 0 −i], S_3 = {4, 2336, 292, 2048}; P = [1 0; 0 1], S_3' = {1202, 3478, 1426, 3254}; ∆_H = 2, ∆_p = 16, ∆_s = 8

S_4: U = [1 0; 0 −i], S_4 = {4, 2336, 292, 2048, 1202, 3478, 1426, 3254}; P = [1 0; 0 i], S_4' = {180, 2448, 404, 2224, 1314, 3078, 1026, 3366}; ∆_H = 2, ∆_p = 4, ∆_s = 4

S_5: U = [1 0; 0 1], S_5 = {4, 2336, 292, 2048, 1202, 3478, 1426, 3254, 180, 2448, 404, 2224, 1314, 3078, 1026, 3366}; P = [e^{jπ/4} 0; 0 e^{−jπ/4}], S_5' = {971, 2799, 1657, 3933, 747, 3023, 1881, 3709, 635, 2911, 1769, 4045, 859, 2687, 1993, 3821}; ∆_H = 2, ∆_p = 1.37, ∆_s = 2.34

S_6: U = [1 0; 0 e^{jπ/4}], S_6 = {4, 2336, 292, 2048, 1202, 3478, 1426, 3254, 180, 2448, 404, 2224, 1314, 3078, 1026, 3366, 971, 2799, 1657, 3933, 747, 3023, 1881, 3709, 635, 2911, 1769, 4045, 859, 2687, 1993, 3821}; P = [1 0; 0 e^{j5π/4}], S_6' = {330, 2158, 1528, 3292, 106, 2382, 1240, 3580, 506, 2270, 1128, 3404, 218, 2558, 1352, 3180, 785, 2613, 1927, 3747, 561, 2837, 1703, 3971, 897, 2725, 1591, 3859, 673, 2949, 1815, 3635}; ∆_H = 2, ∆_p = 0.34, ∆_s = 2.34

S_7: U = [1 0; 0 1], S_7 = S_6; P = [1 0; 0 −1], S_7' = S_6'; ∆_H = 1, ∆_p = 0.34, ∆_s = 2.34

S_8: U = [1 0; 0 1], S_8 = S_7; P = [1 0; 0 e^{jπ/4}], S_8' = S_7'; ∆_H = 1, ∆_p = 0.20, ∆_s = 1.17

Table A.4: 2 × 2 8PSK.

coset 1, S_1: U = [1 0 0; 0 −1 0; 0 0 −1], S_1 = {1}; P = [1 0 0; 0 −1 0; 0 0 1], S_1' = {172}; ∆_H = 3, ∆_p = 256, ∆_s = 20

coset 1, S_2: U = [0 1 0; 0 0 1; 1 0 0], S_2 = {1, 172}; P = [0 0 1; −1 0 0; 0 −1 0], S_2' = {251, 330}; ∆_H = 3, ∆_p = 64, ∆_s = 20

coset 10, S_1: U = [1 0 0; 0 −1 0; 0 0 −1], S_1 = {304}; P = [1 0 0; 0 −1 0; 0 0 1], S_1' = {413}; ∆_H = 3, ∆_p = 256, ∆_s = 20

coset 10, S_2: U = [0 1 0; 0 0 1; 1 0 0], S_2 = {304, 413}; P = [0 0 −1; 1 0 0; 0 1 0], S_2' = {486, 87}; ∆_H = 3, ∆_p = 64, ∆_s = 20

cosets 1,10, S_3: U = N/A, S_3 = {1, 172, 251, 330}; P = N/A, S_3' = {304, 413, 486, 87}; ∆_H = 3, ∆_p = 64, ∆_s = 16

cosets 1,10, S_4: U = [0 0 1; 0 1 0; −1 0 0], S_4 = {1, 172, 251, 330, 304, 413, 486, 87}; P = [1 0 0; 0 0 1; 0 −1 0], S_4' = {206, 383, 181, 24, 98, 467, 297, 388}; ∆_H = 2, ∆_p = 16, ∆_s = 12

cosets 1,10, S_5: U = [0 0 1; 0 1 0; −1 0 0], S_5 = {1, 172, 251, 330, 304, 413, 486, 87, 206, 383, 181, 24, 98, 467, 297, 388}; P = [−1 0 0; 0 0 1; 0 1 0], S_5' = {419, 18, 472, 373, 271, 190, 68, 233, 112, 221, 138, 315, 321, 492, 407, 38}; ∆_H = 1, ∆_p = 8, ∆_s = 8

cosets 1,10, S_6: U = [1 0 0; 0 1 0; 0 0 1], S_6 = {1, 172, 251, 330, 304, 413, 486, 87, 206, 383, 181, 24, 98, 467, 297, 388, 419, 18, 472, 373, 271, 190, 68, 233, 112, 221, 138, 315, 321, 492, 407, 38}; P = [1 0 0; 0 1 0; 0 0 −1], S_6' = {72, 229, 178, 259, 377, 468, 431, 30, 135, 310, 252, 81, 43, 410, 352, 461, 490, 91, 401, 316, 326, 247, 13, 160, 57, 148, 195, 370, 264, 421, 478, 111}; ∆_H = 1, ∆_p = 8, ∆_s = 8

Table A.5: 3 × 3 BPSK.




S_1: U = [−1 0 0; 0 −1 0; 0 0 −1], S_1 = {397}; P = [1 0 0; 0 1 0; 0 0 1], S_1' = {174887}; ∆_H = 3, ∆_p = 1280, ∆_s = 36

S_2: U = [1 0 0; 0 −1 0; 0 0 −1], S_2 = {397, 174887}; P = [0 i 0; i 0 0; 0 0 i], S_2' = {86892, 260550}; ∆_H = 3, ∆_p = 160, ∆_s = 18

S_3: U = [1 0 0; 0 1 0; 0 0 1], S_3 = S_2; P = [0 i 0; 1 0 0; 0 0 −i], S_3' = S_2'; ∆_H = 3, ∆_p = 64, ∆_s = 16

S_4: U = [1 0 0; 0 −1 0; 0 0 1], S_4 = S_3; P = [i 0 0; 0 1 0; 0 0 −1], S_4' = S_3'; ∆_H = 3, ∆_p = 32, ∆_s = 12

S_5: U = [1 0 0; 0 i 0; 0 0 −i], S_5 = S_4; P = [1 0 0; 0 0 1; 0 −1 0], S_5' = S_4'; ∆_H = 3, ∆_p = 8, ∆_s = 10

S_6: U = [1 0 0; 0 0 i; 0 −i 0], S_6 = S_5; P = [1 0 0; 0 −i 0; 0 0 1], S_6' = S_5'; ∆_H = 1, ∆_p = 8, ∆_s = 10

S_7: U = [1 0 0; 0 −1 0; 0 0 1], S_7 = S_6; P = [1 0 0; 0 1 0; 0 0 1], S_7' = S_6'; ∆_H = 1, ∆_p = 8, ∆_s = 10

S_8: U = [1 0 0; 0 1 0; 0 0 1], S_8 = S_7; P = [1 0 0; 0 0 −1; 0 1 0], S_8' = S_7'; ∆_H = 1, ∆_p = 8, ∆_s = 8

Table A.6: 3 × 3 QPSK.

[The entries of this code list are illegible in the source.]

Table A.7: 3 × 3 QPSK code list.



S_1: U = [1 0 0; 0 1 0; 0 0 1], S_1 = {10794}; P = [−1 0 0; 0 −1 0; 0 0 −1], S_1' = {76702478}; ∆_H = 3, ∆_p = 1620.08, ∆_s = 36.00

S_2: U = [1 0 0; 0 1 0; 0 0 1], S_2 = S_1; P = [i 0 0; 0 i 0; 0 0 −i], S_2' = S_1'; ∆_H = 3, ∆_p = 202.51, ∆_s = 18.00

S_3: U = [1 0 0; 0 1 0; 0 0 −1], S_3 = S_2; P = [e^{jπ/4} 0 0; 0 e^{jπ/4} 0; 0 0 e^{j3π/4}], S_3' = S_2'; ∆_H = 3, ∆_p = 104.97, ∆_s = 16.59

S_4: U = [1 0 0; 0 −1 0; 0 0 i], S_4 = S_3; P = [1 0 0; 0 e^{j3π/4} 0; 0 0 e^{j5π/4}], S_4' = S_3'; ∆_H = 3, ∆_p = 32.00, ∆_s = 15.41

S_5: U = [1 0 0; 0 e^{jπ/4} 0; 0 0 e^{jπ/4}], S_5 = S_4; P = [1 0 0; 0 −i 0; 0 0 −1], S_5' = S_4'; ∆_H = 3, ∆_p = 7.03, ∆_s = 13.76

S_6: U = [1 0 0; 0 1 0; 0 0 e^{j3π/4}], S_6 = S_5; P = [1 0 0; 0 −i 0; 0 0 −i], S_6' = S_5'; ∆_H = 3, ∆_p = 2.34, ∆_s = 5.51

S_7: U = [1 0 0; 0 i 0; 0 0 e^{jπ/4}], S_7 = S_6; P = [e^{jπ/4} 0 0; 0 −1 0; 0 0 e^{j3π/4}], S_7' = S_6'; ∆_H = 3, ∆_p = 0.69, ∆_s = 4.93

S_8: U = [0 1 0; 1 0 0; 0 0 1], S_8 = S_7; P = [1 0 0; 0 e^{j3π/4} 0; 0 0 e^{j7π/4}], S_8' = S_7'; ∆_H = 2, ∆_p = 0.12, ∆_s = 4.93

Table A.8: 3 × 3 8PSK.

[Table A.9: the entries of this code list are illegible in the source.]

S_1: U = [1 0 0 0; 0 −1 0 0; 0 0 1 0; 0 0 0 −1], S_1 = {83}; P = [1 0 0; 0 1 0; 0 0 −1], S_1' = {4012}; ∆_H = 3, ∆_p = 4096.00, ∆_s = 48.00

S_2: U = [1 0 0 0; 0 −1 0 0; 0 0 1 0; 0 0 0 −1], S_2 = {83, 4012}; P = [1 0 0; 0 1 0; 0 0 −1], S_2' = {989, 3106}; ∆_H = 3, ∆_p = 512.00, ∆_s = 24.00

S_3: U = [1 0 0 0; 0 1 0 0; 0 0 −1 0; 0 0 0 −1], S_3 = S_2; P = [1 0 0; 0 −1 0; 0 0 −1], S_3' = S_2'; ∆_H = 3, ∆_p = 512.00, ∆_s = 24.00

S_4: U = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 −1], S_4 = S_3; P = [0 1 0; 0 0 1; −1 0 0], S_4' = S_3'; ∆_H = 3, ∆_p = 64.00, ∆_s = 12.00

Table A.10: 4 × 3 BPSK.

S_1: U = [1 0 0 0; 0 −1 0 0; 0 0 1 0; 0 0 0 −1], S_1 = {8714}; P = [1 0 0; 0 1 0; 0 0 −1], S_1' = {11176096}; ∆_H = 3, ∆_p = 4096, ∆_s = 48

S_2: U = [1 0 0 0; 0 −1 0 0; 0 0 1 0; 0 0 0 −1], S_2 = {8714, 11176096}; P = [1 0 0; 0 1 0; 0 0 −1], S_2' = {696994, 10487816}; ∆_H = 3, ∆_p = 512, ∆_s = 24

S_3: U = [1 0 0 0; 0 1 0 0; 0 0 −1 0; 0 0 0 −1], S_3 = S_2; P = [1 0 0; 0 −1 0; 0 0 −1], S_3' = S_2'; ∆_H = 3, ∆_p = 512, ∆_s = 24

S_4: U = [1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 −1 0], S_4 = S_3; P = [i 0 0; 0 0 i; 0 i 0], S_4' = S_3'; ∆_H = 3, ∆_p = 384, ∆_s = 24

S_5: U = [1 0 0 0; 0 i 0 0; 0 0 1 0; 0 0 0 i], S_5 = S_4; P = [0 1 0; 1 0 0; 0 0 i], S_5' = S_4'; ∆_H = 3, ∆_p = 128, ∆_s = 20

S_6: U = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 i], S_6 = S_5; P = [0 i 0; i 0 0; 0 0 1], S_6' = S_5'; ∆_H = 3, ∆_p = 32, ∆_s = 14

S_7: U = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1], S_7 = S_6; P = [1 0 0; 0 1 0; 0 0 −1], S_7' = S_6'; ∆_H = 1, ∆_p = 16, ∆_s = 14

S_8: U = [1 0 0 0; 0 −1 0 0; 0 0 1 0; 0 0 0 −1], S_8 = S_7; P = [0 1 0; −1 0 0; 0 0 −i], S_8' = S_7'; ∆_H = 1, ∆_p = 16, ∆_s = 14

Table A.11: 4 × 3 QPSK.

[The entries of this code list are illegible in the source.]

Table A.12: 4 × 3 QPSK code list.



Appendix B

RMTCM Design Examples

Where a regular trellis is too complicated to draw, only the partition and labelling are shown; readers can follow the procedures in Section 5.3 to generate the corresponding regular trellis.

0 1 14<br />

1 14 1<br />

Figure B.1: 2 × 2 BPSK, Rate 1/2, I2S2O2p1.<br />



0 1 14<br />

1 7 8<br />

Figure B.2: 2 × 2 BPSK, Rate 1/2, I2S2O4p1.<br />

0 1 14<br />

1 7 8<br />

2 14 1<br />

3 8 7<br />

Figure B.3: 2 × 2 BPSK, Rate 1/2, I2S2O4P1.

0 1 14 7 8<br />

1 11 4 13 2<br />

2 7 8 1 14<br />

3 13 2 11 4<br />

Figure B.4: 2 × 2 BPSK, Rate 2/2, 2 × 2 × 2.<br />

0 1 14 7 8<br />

1 11 4 13 2<br />

2 14 7 8 1<br />

3 4 13 2 11<br />

4 7 8 1 14<br />

5 13 2 11 4<br />

6 8 1 14 7<br />

7 2 11 4 13<br />

Figure B.5: 2 × 2 BPSK, Rate 2/2, 2 × 4 × 1.<br />

0 2 168
1 42 128
2 93 247
3 117 223
4 30 180
5 54 156
6 105 195
7 65 235
8 168 2
9 128 42
10 247 93
11 223 117
12 180 30
13 156 54
14 195 105
15 235 65

Figure B.6: 2 × 2 QPSK, Rate 1/2, 8 × 2 × 1.



0 2 168 42 128
1 93 247 117 223
2 30 180 54 156
3 105 195 65 235
4 42 128 2 168
5 117 223 93 247
6 54 156 30 180
7 65 235 105 195

Figure B.7: 2 × 2 QPSK, Rate 2/2, 4 × 2 × 2.

0 2 168 42 128
1 93 247 117 223
2 30 180 54 156
3 105 195 65 235
4 168 42 128 2
5 247 117 223 93
6 180 54 156 30
7 65 235 105 195
8 42 128 2 168
9 117 223 93 247
10 54 156 30 180
11 65 235 105 195
12 128 2 168 42
13 223 93 247 117
14 156 30 180 54
15 235 105 195 65

Figure B.8: 2 × 2 QPSK, Rate 2/2, 4 × 4 × 1.

0 2 168 42 128 93 247 117 223
1 30 180 54 156 105 195 65 235
2 93 247 117 223 2 168 42 128
3 105 195 65 235 30 180 54 156

Figure B.9: 2 × 2 QPSK, Rate 3/2, 2 × 2 × 4.



0 2 168 42 128 93 247 117 223
1 30 180 54 156 105 195 65 235
2 42 128 93 247 117 223 2 168
3 54 156 105 195 65 235 30 180
4 93 247 117 223 2 168 42 128
5 105 195 65 235 30 180 54 156
6 117 223 2 168 42 128 93 247
7 65 235 30 180 54 156 105 195

Figure B.10: 2 × 2 QPSK, Rate 3/2, 2 × 4 × 2.

0 2 168 42 128 93 247 117 223
1 30 180 54 156 105 195 65 235
2 168 42 128 93 247 117 223 2
3 180 54 156 105 195 65 235 30
4 42 128 93 247 117 223 2 168
5 54 156 105 195 65 235 30 180
6 128 93 247 117 223 2 168 42
7 156 105 195 65 235 30 180 54
8 93 247 117 223 2 168 42 128
9 105 195 65 235 30 180 54 156
10 247 117 223 2 168 42 128 93
11 195 65 235 30 180 54 156 105
12 117 223 2 168 42 128 93 247
13 65 235 30 180 54 156 105 195
14 223 2 168 42 128 93 247 117
15 235 30 180 54 156 105 195 65

Figure B.11: 2 × 2 QPSK, Rate 3/2, 2 × 8 × 1.

0 2 168 42 128 93 247 117 223 30 180 54 156 105 195 65 235
1 7 173 47 133 82 248 122 208 19 185 59 145 110 196 70 236
2 30 180 54 156 105 195 65 235 2 168 42 128 93 247 117 223
3 19 185 59 145 110 196 70 236 7 173 47 133 82 248 122 208

Figure B.12: 2 × 2 QPSK, Rate 4/2, |S_5| = 32, 2 × 2 × 8.

0 2 168 42 128 93 247 117 223 30 180 54 156 105 195 65 235
1 7 173 47 133 82 248 122 208 19 185 59 145 110 196 70 236
2 93 247 117 223 30 180 54 156 105 195 65 235 2 168 42 128
3 82 248 122 208 19 185 59 145 110 196 70 236 7 173 47 133
4 30 180 54 156 105 195 65 235 2 168 42 128 93 247 117 223
5 19 185 59 145 110 196 70 236 7 173 47 133 82 248 122 208
6 105 195 65 235 2 168 42 128 93 247 117 223 30 180 54 156
7 110 196 70 236 7 173 47 133 82 248 122 208 19 185 59 145

Figure B.13: 2 × 2 QPSK, Rate 4/2, |S_5| = 32, 2 × 4 × 4.



0 2 168 42 128 93 247 117 223 30 180 54 156 105 195 65 235
1 7 173 47 133 82 248 122 208 19 185 59 145 110 196 70 236
2 42 128 93 247 117 223 30 180 54 156 105 195 65 235 2 168
3 47 133 82 248 122 208 19 185 59 145 110 196 70 236 7 173
4 93 247 117 223 30 180 54 156 105 195 65 235 2 168 42 128
5 82 248 122 208 19 185 59 145 110 196 70 236 7 173 47 133
6 117 223 30 180 54 156 105 195 65 235 2 168 42 128 93 247
7 122 208 19 185 59 145 110 196 70 236 7 173 47 133 82 248
8 30 180 54 156 105 195 65 235 2 168 42 128 93 247 117 223
9 19 185 59 145 110 196 70 236 7 173 47 133 82 248 122 208
10 54 156 105 195 65 235 2 168 42 128 93 247 117 223 30 180
11 59 145 110 196 70 236 7 173 47 133 82 248 122 208 19 185
12 105 195 65 235 2 168 42 128 93 247 117 223 30 180 54 156
13 110 196 70 236 7 173 47 133 82 248 122 208 19 185 59 145
14 65 235 2 168 42 128 93 247 117 223 30 180 54 156 105 195
15 70 236 7 173 47 133 82 248 122 208 19 185 59 145 110 196

Figure B.14: 2 × 2 QPSK, Rate 4/2, |S_5| = 32, 2 × 8 × 2.

0 2 168 42 128 93 247 117 223 30 180 54 156 105 195 65 235
1 7 173 47 133 82 248 122 208 19 185 59 145 110 196 70 236
2 168 42 128 93 247 117 223 30 180 54 156 105 195 65 235 2
3 173 47 133 82 248 122 208 19 185 59 145 110 196 70 236 7
4 42 128 93 247 117 223 30 180 54 156 105 195 65 235 2 168
5 47 133 82 248 122 208 19 185 59 145 110 196 70 236 7 173
6 128 93 247 117 223 30 180 54 156 105 195 65 235 2 168 42
7 133 82 248 122 208 19 185 59 145 110 196 70 236 7 173 47
8 93 247 117 223 30 180 54 156 105 195 65 235 2 168 42 128
9 82 248 122 208 19 185 59 145 110 196 70 236 7 173 47 133
10 247 117 223 30 180 54 156 105 195 65 235 2 168 42 128 93
11 248 122 208 19 185 59 145 110 196 70 236 7 173 47 133 82
12 117 223 30 180 54 156 105 195 65 235 2 168 42 128 93 247
13 122 208 19 185 59 145 110 196 70 236 7 173 47 133 82 248
14 223 30 180 54 156 105 195 65 235 2 168 42 128 93 247 117
15 208 19 185 59 145 110 196 70 236 7 173 47 133 82 248 122
16 30 180 54 156 105 195 65 235 2 168 42 128 93 247 117 223
17 19 185 59 145 110 196 70 236 7 173 47 133 82 248 122 208
18 180 54 156 105 195 65 235 2 168 42 128 93 247 117 223 30
19 185 59 145 110 196 70 236 7 173 47 133 82 248 122 208 19
20 54 156 105 195 65 235 2 168 42 128 93 247 117 223 30 180
21 59 145 110 196 70 236 7 173 47 133 82 248 122 208 19 185
22 156 105 195 65 235 2 168 42 128 93 247 117 223 30 180 54
23 145 110 196 70 236 7 173 47 133 82 248 122 208 19 185 59
24 105 195 65 235 2 168 42 128 93 247 117 223 30 180 54 156
25 110 196 70 236 7 173 47 133 82 248 122 208 19 185 59 145
26 195 65 235 2 168 42 128 93 247 117 223 30 180 54 156 105
27 196 70 236 7 173 47 133 82 248 122 208 19 185 59 145 110
28 65 235 2 168 42 128 93 247 117 223 30 180 54 156 105 195
29 70 236 7 173 47 133 82 248 122 208 19 185 59 145 110 196
30 235 2 168 42 128 93 247 117 223 30 180 54 156 105 195 65
31 236 7 173 47 133 82 248 122 208 19 185 59 145 110 196 70

Figure B.15: 2 × 2 QPSK, Rate 4/2, |S_5| = 32, 2 × 16 × 1.



0 2 168 42 128 93 247 117 223 30 180 54 156 105 195 65 235
1 7 173 47 133 82 248 122 208 19 185 59 145 110 196 70 236
2 8 162 138 32 87 253 213 127 75 225 201 99 150 60 20 190
3 13 167 143 37 88 242 218 112 76 230 206 100 155 49 25 179
4 30 180 54 156 105 195 65 235 2 168 42 128 93 247 117 223
5 19 185 59 145 110 196 70 236 7 173 47 133 82 248 122 208
6 75 225 201 99 150 60 20 190 8 162 138 32 87 253 213 127
7 76 230 206 100 155 49 25 179 13 167 143 37 88 242 218 112

Figure B.16: 2 × 2 QPSK, Rate 4/2, |S_6| = 64, 4 × 2 × 8.

0 2 168 42 128 93 247 117 223 30 180 54 156 105 195 65 235
1 7 173 47 133 82 248 122 208 19 185 59 145 110 196 70 236
2 8 162 138 32 87 253 213 127 75 225 201 99 150 60 20 190
3 13 167 143 37 88 242 218 112 76 230 206 100 155 49 25 179
4 93 247 117 223 30 180 54 156 105 195 65 235 2 168 42 128
5 82 248 122 208 19 185 59 145 110 196 70 236 7 173 47 133
6 87 253 213 127 75 225 201 99 150 60 20 190 8 162 138 32
7 88 242 218 112 76 230 206 100 155 49 25 179 13 167 143 37
8 30 180 54 156 105 195 65 235 2 168 42 128 93 247 117 223
9 19 185 59 145 110 196 70 236 7 173 47 133 82 248 122 208
10 75 225 201 99 150 60 20 190 8 162 138 32 87 253 213 127
11 76 230 206 100 155 49 25 179 13 167 143 37 88 242 218 112
12 105 195 65 235 2 168 42 128 93 247 117 223 30 180 54 156
13 110 196 70 236 7 173 47 133 82 248 122 208 19 185 59 145
14 150 60 20 190 8 162 138 32 87 253 213 127 75 225 201 99
15 155 49 25 179 13 167 143 37 88 242 218 112 76 230 206 100

Figure B.17: 2 × 2 QPSK, Rate 4/2, |S_6| = 64, 4 × 4 × 4.



0 2 168 42 128 93 247 117 223 30 180 54 156 105 195 65 235
1 7 173 47 133 82 248 122 208 19 185 59 145 110 196 70 236
2 8 162 138 32 87 253 213 127 75 225 201 99 150 60 20 190
3 13 167 143 37 88 242 218 112 76 230 206 100 155 49 25 179
4 42 128 93 247 117 223 30 180 54 156 105 195 65 235 2 168
5 47 133 82 248 122 208 19 185 59 145 110 196 70 236 7 173
6 138 32 87 253 213 127 75 225 201 99 150 60 20 190 8 162
7 143 37 88 242 218 112 76 230 206 100 155 49 25 179 13 167
8 93 247 117 223 30 180 54 156 105 195 65 235 2 168 42 128
9 82 248 122 208 19 185 59 145 110 196 70 236 7 173 47 133
10 87 253 213 127 75 225 201 99 150 60 20 190 8 162 138 32
11 88 242 218 112 76 230 206 100 155 49 25 179 13 167 143 37
12 117 223 30 180 54 156 105 195 65 235 2 168 42 128 93 247
13 122 208 19 185 59 145 110 196 70 236 7 173 47 133 82 248
14 213 127 75 225 201 99 150 60 20 190 8 162 138 32 87 253
15 218 112 76 230 206 100 155 49 25 179 13 167 143 37 88 242
16 30 180 54 156 105 195 65 235 2 168 42 128 93 247 117 223
17 19 185 59 145 110 196 70 236 7 173 47 133 82 248 122 208
18 75 225 201 99 150 60 20 190 8 162 138 32 87 253 213 127
19 76 230 206 100 155 49 25 179 13 167 143 37 88 242 218 112
20 54 156 105 195 65 235 2 168 42 128 93 247 117 223 30 180
21 59 145 110 196 70 236 7 173 47 133 82 248 122 208 19 185
22 201 99 150 60 20 190 8 162 138 32 87 253 213 127 75 225
23 206 100 155 49 25 179 13 167 143 37 88 242 218 112 76 230
24 105 195 65 235 2 168 42 128 93 247 117 223 30 180 54 156
25 110 196 70 236 7 173 47 133 82 248 122 208 19 185 59 145
26 150 60 20 190 8 162 138 32 87 253 213 127 75 225 201 99
27 155 49 25 179 13 167 143 37 88 242 218 112 76 230 206 100
28 65 235 2 168 42 128 93 247 117 223 30 180 54 156 105 195
29 70 236 7 173 47 133 82 248 122 208 19 185 59 145 110 196
30 20 190 8 162 138 32 87 253 213 127 75 225 201 99 150 60
31 25 179 13 167 143 37 88 242 218 112 76 230 206 100 155 49

Figure B.18: 2 × 2 QPSK, Rate 4/2, |S_6| = 64, 4 × 8 × 2.



0 2 168 42 128 93 247 117 223 30 180 54 156 105 195 65 235
1 7 173 47 133 82 248 122 208 19 185 59 145 110 196 70 236
2 8 162 138 32 87 253 213 127 75 225 201 99 150 60 20 190
3 13 167 143 37 88 242 218 112 76 230 206 100 155 49 25 179
4 168 42 128 93 247 117 223 30 180 54 156 105 195 65 235 2
5 173 47 133 82 248 122 208 19 185 59 145 110 196 70 236 7
6 162 138 32 87 253 213 127 75 225 201 99 150 60 20 190 8
7 167 143 37 88 242 218 112 76 230 206 100 155 49 25 179 13
8 42 128 93 247 117 223 30 180 54 156 105 195 65 235 2 168
9 47 133 82 248 122 208 19 185 59 145 110 196 70 236 7 173
10 138 32 87 253 213 127 75 225 201 99 150 60 20 190 8 162
11 143 37 88 242 218 112 76 230 206 100 155 49 25 179 13 167
12 128 93 247 117 223 30 180 54 156 105 195 65 235 2 168 42
13 133 82 248 122 208 19 185 59 145 110 196 70 236 7 173 47
14 32 87 253 213 127 75 225 201 99 150 60 20 190 8 162 138
15 37 88 242 218 112 76 230 206 100 155 49 25 179 13 167 143
16 93 247 117 223 30 180 54 156 105 195 65 235 2 168 42 128
17 82 248 122 208 19 185 59 145 110 196 70 236 7 173 47 133
18 87 253 213 127 75 225 201 99 150 60 20 190 8 162 138 32
19 88 242 218 112 76 230 206 100 155 49 25 179 13 167 143 37
20 247 117 223 30 180 54 156 105 195 65 235 2 168 42 128 93
21 248 122 208 19 185 59 145 110 196 70 236 7 173 47 133 82
22 253 213 127 75 225 201 99 150 60 20 190 8 162 138 32 87
23 242 218 112 76 230 206 100 155 49 25 179 13 167 143 37 88
24 117 223 30 180 54 156 105 195 65 235 2 168 42 128 93 247
25 122 208 19 185 59 145 110 196 70 236 7 173 47 133 82 248
26 213 127 75 225 201 99 150 60 20 190 8 162 138 32 87 253
27 218 112 76 230 206 100 155 49 25 179 13 167 143 37 88 242
28 223 30 180 54 156 105 195 65 235 2 168 42 128 93 247 117
29 208 19 185 59 145 110 196 70 236 7 173 47 133 82 248 122
30 127 75 225 201 99 150 60 20 190 8 162 138 32 87 253 213
31 112 76 230 206 100 155 49 25 179 13 167 143 37 88 242 218
32 30 180 54 156 105 195 65 235 2 168 42 128 93 247 117 223
33 19 185 59 145 110 196 70 236 7 173 47 133 82 248 122 208
34 75 225 201 99 150 60 20 190 8 162 138 32 87 253 213 127
35 76 230 206 100 155 49 25 179 13 167 143 37 88 242 218 112
36 180 54 156 105 195 65 235 2 168 42 128 93 247 117 223 30
37 185 59 145 110 196 70 236 7 173 47 133 82 248 122 208 19
38 225 201 99 150 60 20 190 8 162 138 32 87 253 213 127 75
39 230 206 100 155 49 25 179 13 167 143 37 88 242 218 112 76
40 54 156 105 195 65 235 2 168 42 128 93 247 117 223 30 180
41 59 145 110 196 70 236 7 173 47 133 82 248 122 208 19 185
42 201 99 150 60 20 190 8 162 138 32 87 253 213 127 75 225
43 206 100 155 49 25 179 13 167 143 37 88 242 218 112 76 230
44 156 105 195 65 235 2 168 42 128 93 247 117 223 30 180 54
45 145 110 196 70 236 7 173 47 133 82 248 122 208 19 185 59
46 99 150 60 20 190 8 162 138 32 87 253 213 127 75 225 201
47 100 155 49 25 179 13 167 143 37 88 242 218 112 76 230 206
48 105 195 65 235 2 168 42 128 93 247 117 223 30 180 54 156
49 110 196 70 236 7 173 47 133 82 248 122 208 19 185 59 145
50 150 60 20 190 8 162 138 32 87 253 213 127 75 225 201 99
51 155 49 25 179 13 167 143 37 88 242 218 112 76 230 206 100
52 195 65 235 2 168 42 128 93 247 117 223 30 180 54 156 105
53 196 70 236 7 173 47 133 82 248 122 208 19 185 59 145 110
54 60 20 190 8 162 138 32 87 253 213 127 75 225 201 99 150
55 49 25 179 13 167 143 37 88 242 218 112 76 230 206 100 155
56 65 235 2 168 42 128 93 247 117 223 30 180 54 156 105 195
57 70 236 7 173 47 133 82 248 122 208 19 185 59 145 110 196
58 20 190 8 162 138 32 87 253 213 127 75 225 201 99 150 60
59 25 179 13 167 143 37 88 242 218 112 76 230 206 100 155 49
60 235 2 168 42 128 93 247 117 223 30 180 54 156 105 195 65
61 236 7 173 47 133 82 248 122 208 19 185 59 145 110 196 70
62 190 8 162 138 32 87 253 213 127 75 225 201 99 150 60 20
63 179 13 167 143 37 88 242 218 112 76 230 206 100 155 49 25

Figure B.19: 2 × 2 QPSK, Rate 4/2, |S_6| = 64, 2 × 16 × 1.
