Iterative Data Detection: Modified Message-Passing Algorithms and Convergence (USC Thesis)
ITERATIVE DATA DETECTION: MODIFIED MESSAGE-PASSING ALGORITHMS AND CONVERGENCE

by

Jun Heo

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ELECTRICAL ENGINEERING)

May 2002

Copyright 2002 Jun Heo

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

UMI Number: 3093416. Copyright 2002 by Heo, Jun. All rights reserved. UMI Microform 3093416, copyright 2003 by ProQuest Information and Learning Company. All rights reserved. This microform edition is protected against unauthorized copying under Title 17, United States Code. ProQuest Information and Learning Company, 300 North Zeeb Road, P.O. Box 1346, Ann Arbor, MI 48106-1346.

UNIVERSITY OF SOUTHERN CALIFORNIA
The Graduate School
University Park
Los Angeles, California 90089-1695

This dissertation, written by Jun Heo under the direction of his Dissertation Committee, and approved by all its members, has been presented to and accepted by The Graduate School, in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY. Date: August 6, 2002.

Dedication

To my wife, Soonim, and two lovely sons, Kwon and Chan.

Acknowledgements

First of all, I would like to express my profound gratitude to the chairman of my dissertation committee, Professor Keith M. Chugg, for his guidance during my Ph.D. study.
My doctoral degree here at the University of Southern California owes a great deal to his invaluable encouragement and academic stimulation. I also want to thank Dr. Peter Baxendale from the Department of Mathematics, Dr. Peter Beerel from the Department of Computer Engineering, Dr. Vijay Kumar, and Dr. Antonio Ortega for gladly agreeing to serve on my committee. Besides the committee members, there are numerous people who deserve my special thanks. In particular, my colleagues in Dr. Chugg's group, Orhan Coskun, Ali, Robert, Phunsak, and Kyuhyuk, have been another source of energy on the long journey of doctoral study. My life in the United States, which otherwise could easily have withered from homesickness, was nourished by colleagues from my homeland, the Republic of Korea: Sangyoub Lee, Yongmin Choi, Jun-Sung Park, Jun-Yong Lee, and Won-Seok Baek. It has been a great comfort to have people around my work environment who share the same language and the same cultural background. Another group of people to whom I must be grateful are the CSI staff at EEB 500: Milly Montenegro, Mayumi Thrasher, and Gerrielyn Ramos. Their assistance and professionalism in administrative support were an essential part of my study. Above all, nothing could even come close to the support of my family. My parents, brother, and sister have always been there for me throughout every stage of my study and the hardships they had to endure. It would be impossible to ever repay what they have done for me, but I hope that my humble accomplishment can compensate for some small part of their efforts and sacrifices. Finally, I am deeply indebted to my beloved wife, Soonim Huh.
As a wife, mother of our two sons, Kwon and Chan, and also a full-time doctoral student herself at UCLA, she has done an unbelievable job that no one could ever match. Without her love and emotional support, my odyssey to the doctoral degree could have been wrecked somewhere along the way. She is the one who should take the honor of my accomplishment.

Contents

Dedication
Acknowledgements
List of Tables
List of Figures
Abstract

1 Introduction
   1.1 Iterative detection
   1.2 Adaptive iterative detection
   1.3 Message Passing (Belief Propagation) Algorithm
   1.4 Convergence of the Message Passing (Belief Propagation) Algorithm
   1.5 Organization

2 Iterative Detection and Adaptive Iterative Detection
   2.1 Iterative Detection
      2.1.1 Soft Input Soft Output (SISO) Algorithms
      2.1.2 Reduced State Soft Input Soft Output (RS-SISO) Algorithms
   2.2 Adaptive Iterative Detection (AID)
      2.2.1 Adaptive SISO
      2.2.2 Decoupled estimator AID
         2.2.2.1 Parameter Estimation prior to Iterative Detection
         2.2.2.2 Parameter Estimation within Iterative Detection
         2.2.2.3 Hard decision directed AID
         2.2.2.4 Soft decision directed AID

3 Modified Adaptive Iterative Detection
   3.1 Comparison of Two Fixed-Lag A-SISOs
      3.1.1 Introduction
      3.1.2 System Description
      3.1.3 Bi-Directional Fixed-Lag A-SISO
      3.1.4 Forward-Only Fixed-Lag A-SISO
      3.1.5 Numerical Results
      3.1.6 Conclusion
   3.2 Adaptive Iterative Detection for Turbo Codes on Flat Fading Channels
      3.2.1 Introduction
      3.2.2 System Description
      3.2.3 Decoupled Adaptive Iterative Detection
         3.2.3.1 Hard decision feedback (HDF)
         3.2.3.2 Re-encoded hard decision feedback (R-HDF)
         3.2.3.3 Soft decision feedback (SDF)
         3.2.3.4 (Normalized) Partial soft decision feedback ((N)P-SDF)
      3.2.4 Channel Estimator
         3.2.4.1 Wiener filter
         3.2.4.2 Average Wiener filter
         3.2.4.3 Kalman filter
      3.2.5 Numerical results
      3.2.6 Conclusion

4 A Simple Stopping Criterion for the MIN-SUM Iterative Decoding Algorithm
   4.1 Introduction
   4.2 Simulation results
   4.3 Conclusion

5 Analysis of Scaling Soft Information on SCCC and LDPCC Based on Density Evolution
   5.1 Introduction
   5.2 Sum-product analytic density evolution of SCCC
   5.3 Min-sum analytic density evolution of SCCC
   5.4 SNR evolution of SCCC
   5.5 Scaling soft information of SCCC
   5.6 Min-sum density evolution of LDPC code
   5.7 Scaling soft information of LDPC code
   5.8 Conclusion

6 Constrained Iterative Decoding: Performance and Convergence Analysis
   6.1 Introduction
   6.2 CID Algorithms
      6.2.1 Optimal CID Algorithms
      6.2.2 Suboptimal CID Algorithms
   6.3 Simulation and SNR evolution
   6.4 Conclusion

7 Conclusions and Future Work

Reference List

Appendix A: Equivalence of sign(L_{v1}(v_1)) sign(L_{v2}(v_2)) min[|L_{v1}(v_1)|, |L_{v2}(v_2)|] to the min-sum algorithm

List of Tables

3.1 The decision feedback and channel filter combinations
5.1 Thresholds from analytic computation for the sum-product and min-sum algorithms
5.2 Thresholds for SCCC2 from analytic computation with/without scaling
5.3 Thresholds for SCCC3 from simulation-based computation with/without scaling
5.4 Thresholds of the min-sum and sum-product algorithms for various pairs (dv, dc)
5.5 Thresholds from analytic computation with/without scaling for the LDPC code
6.1 The CID options of the SCCC decoder

List of Figures

1.1 Block diagram of a Parallel Concatenated Convolutional Code (PCCC) and iterative detection
1.2 (a) Block diagram of the standard turbo decoder; (b) Bayesian network of the standard turbo decoder
1.3 Message passing flow: steps (1)-(6)
2.1 Block diagram of (a) an FSM, (b) a SISO
2.2 Reduced states for forward and backward recursions
2.3 Block diagram of the RS-SISO
2.4 A tree structure and force-folded trellis
2.5 Channel adaptation in a PSP manner
2.6 Forward/backward recursions and binding in the A-SISO
2.7 Block diagram of decision directed AID: (a) hard decision directed, (b) soft decision directed
3.1 Block diagram of TCM in an interleaved frequency-selective fading channel
3.2 Performance of the fixed-interval and fixed-lag adaptive SISO algorithms, D = 6
3.3 Performance of fixed-lag adaptive SISO algorithms for the different lag sizes D = 2, 6
3.4 Architectures for the FL algorithms: (a) FL-BD, (c) FL-L2VS-CE, and (d) FL-L2VS-UE
3.5 Block diagram of receivers for turbo codes on fading channels: (a) a decoupled channel estimator run only before iterative detection, (b) decoupled adaptive iterative detection, (c) iterative joint data and parameter estimation
3.6 The format of a transmitted symbol block
3.7 Block diagram of decoupled AID for turbo codes on an interleaved flat fading channel
3.8 Performance of different decoupled AID methods with punctured QPSK and Vd = 0.01 at the 10th iteration
3.9 Performance of different decoupled AID methods with 8PSK and Vd = 0.01 at the 10th iteration
3.10 Performance for different intervals of average fading amplitude at 6 dB with 8PSK modulation
4.1 Block diagram of the SCCC encoder and decoder with stopping rules
4.2 Performance of different stopping criteria with fixed-number iterative decoding for an input block length N = 1024
4.3 Average number of iterations for different stopping criteria for an input block length N = 1024
5.1 2-state convolutional code (R = 1/2)
5.2 Tanner graph of a 2-state convolutional code (R = 1/2)
5.3 Block diagram of SCCC1 with 2-state constituent codes (R = 1/4)
5.4 Serially concatenated bipartite graph for SCCC1 (R = 1/4)
5.5 Comparison between simulation results and the convergence threshold for SCCC1 with 2-state constituent codes
5.6 Actual density evolution and Gaussian approximation for SCCC1 at Eb/No = 0.56 dB
5.7 Actual density evolution and Gaussian approximation for SCCC1 at Eb/No = 0.8 dB
5.8 A block diagram of SCCC2 with scaling
5.9 Serially concatenated bipartite graph with scaling factor a
5.10 Thresholds for the different scaling factors for SCCC1
5.11 Comparison between simulation results and convergence thresholds for SCCC1 with and without scaling
5.12 Comparison between the Minimal Tunnel Width (MTW) and BER for SCCC2 with different filtering combinations
5.13 A block diagram of SCCC3 with scaling
5.14 Actual SNR evolution with and without scaling for SCCC2 at Eb/No = 1.5 dB
5.15 Comparison between simulation results and convergence thresholds for SCCC3 with and without scaling
5.16 Thresholds for the different scaling factors a for the (3,6) LDPC code
5.17 Evolution of mean and variance for the different scaling factors a for the (3,6) LDPC code at Eb/No = 1.5 dB
5.18 Actual SNR evolution with/without scaling for the (3,6,10000) LDPC code
5.19 Simulation results for the regular (3,6) LDPC code for two block sizes, 1000 and 10000, with and without scaling
6.1 State-constrained decoding of a tail-biting code with a 4-state non-recursive convolutional code
6.2 Data and tail bit arrangement with graphs: (a) tail-biting code, (b) 4-cycle tail-biting code
6.3 Block diagrams of (a) the SCCC encoder, (b) the associated CID decoder
6.4 Performance of tail-biting and tail-bit convolutional codes based on the min-sum algorithm with block size N = 10
6.5 Performance of the 4-cycle tail-biting codes based on the min-sum algorithm with block size N = 40
6.6 Performance of SCCC based on the min-sum algorithm with interleaver size 256; M' = 4 nodes were constrained
6.7 CID performance of the regular (3,6) LDPC code based on the MSM algorithm with interleaver size 500: (a) CID1, constrained at 2 nodes; (b) CID2, constrained at 4 nodes. Both CID1 and CID2 were constrained at check nodes with reliability marginalization
6.8 SNR evolution curves of standard ID and CID on SCCC at 0.53 dB
6.9 SNR evolution curves of standard ID and CID on SCCC at 0.53 dB and 0.48 dB, respectively

Abstract

Since Turbo codes were introduced, numerous works have focused on iterative decoding (ID), a technique in which soft information (also known as extrinsic information) is exchanged among Soft Input Soft Output (SISO) modules. A SISO module can be viewed as the probabilistic inverse of a constituent FSM at the encoder. It has been shown that the application of such ID techniques to a system consisting of a network of FSMs achieves very good iteration gain.
The extrinsic information is refined at each iteration, becoming more reliable. The modifications needed to the ID algorithm change with the application. For example, in the unknown-channel case, ID has to be made adaptive; for implementation, the complexity has to be reduced. In this work, several modified ID algorithms are introduced and their performance and complexity are considered. For systems with time-varying channels, several ways to add adaptivity to the SISO module are presented. A simple stopping criterion is also presented to reduce the average complexity of iterative decoding. In the recent past, an analytical tool was developed to explain the superior performance of these ID algorithms. This development was based on message passing algorithms and the corresponding graphical representations (e.g., Bayesian networks, factor graphs, Tanner graphs, etc.) of the codes (e.g., Turbo codes, Low Density Parity Check (LDPC) codes, etc.). This tool, known as Density Evolution (DE), gives the asymptotic capacity of ID and explains many characteristics of ID, including convergence of performance and preferred structures for the constituent codes. In this work, several variations of the density evolution concept, which were independently developed in the literature, are used to analyze common code structures. It is shown that those ideas yield results that agree well with simulation results. In particular, the performance gain due to scaling of the extrinsic information is analyzed by these density evolution techniques. The expected scaling gain by density evolution matches well with the achievable scaling gain from simulation results. Finally, in an attempt to optimally decode codes with cycles, a conceptual cycle cutting problem is introduced.
This cycle cutting problem is a generalization of the optimal decoder of a single-cycle tail-biting convolutional code, and can be realized by imposing constraints on a single node or multiple nodes in the graphical representation of codes. This modified message passing algorithm, referred to as Constrained Iterative Decoding (CID), is applied to the Serially Concatenated Convolutional Code (SCCC) and the LDPC code.

Chapter 1

Introduction

1.1 Iterative detection

Since the Turbo code was introduced in 1993 [12], numerous works have focused on iterative detection. Recently, some research has sought effective algorithms in terms of performance and complexity. It is well known that systems consisting of a network of Finite State Machines (FSMs) achieve a great iteration gain through iterative detection techniques that exchange soft information. The soft information, so-called extrinsic information, is exchanged between Soft Input Soft Output (SISO) modules, which can be viewed as the probabilistic inverse of an FSM [11]. The exchanged soft information is refined during each iteration. As an example of iterative detection, a Parallel Concatenated Convolutional Code network is illustrated in Fig. 1.1.

[Figure 1.1: Block diagram of a Parallel Concatenated Convolutional Code (PCCC) and iterative detection]

The SISO, as a major block of the decoding network, accepts (refined) a-priori soft information on the input and output symbols of the corresponding FSM and outputs soft information on those symbols. Based on the exchanged information and the optimization criteria, the SISO can be classified as A-Posteriori Probability (APP) or Minimum Sequence Metric (MSM) [18].
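The distinction between APP and MSM combining can be sketched with a small numerical example (the path metrics below are hypothetical, not taken from the dissertation): APP-style soft output pools the probabilities of all paths consistent with a symbol value, while MSM-style soft output keeps only the best (minimum) sequence metric.

```python
import math

# Hypothetical negative-log-likelihood metrics for paths through a toy trellis,
# grouped by the value of the symbol u_k that each path implies.
path_metrics = {0: [2.1, 3.5, 4.0], 1: [2.6, 2.8, 5.2]}

def app_soft_output(metrics_by_symbol):
    """APP-style combining: sum path probabilities (log-sum-exp over -metric)."""
    return {u: -math.log(sum(math.exp(-m) for m in ms))
            for u, ms in metrics_by_symbol.items()}

def msm_soft_output(metrics_by_symbol):
    """MSM-style combining: keep only the best (minimum) sequence metric."""
    return {u: min(ms) for u, ms in metrics_by_symbol.items()}

app = app_soft_output(path_metrics)
msm = msm_soft_output(path_metrics)
# The soft decision is the metric difference between the two symbol hypotheses;
# because APP pools all paths, its metric never exceeds the MSM metric.
```

The difference matters most when several near-best paths disagree with the single best path, which is exactly where APP and MSM soft outputs diverge.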
An additional classification is based on the observation interval, resulting in Fixed Interval (FI) or Fixed Lag (FL) algorithms. The former uses the whole observation interval to generate soft information, while the latter uses observations from time 0 to time k + D to produce soft-output information on a quantity u_k, where D is the smoothing lag. The complexity of the SISO grows exponentially with the number of states of the corresponding FSM. By reducing the number of states, the complexity can be decreased dramatically. In [13], a Reduced-State (RS) SISO for a known channel was suggested. This suboptimal algorithm uses decision feedback techniques to truncate part of the state information and replace the truncated information with the forward/backward survivor paths.

1.2 Adaptive iterative detection

Perfect Channel State Information (CSI) is not available in many practical situations. The channel has one or more unknown, and possibly time-varying, parameters in its model. In adaptive detection, those parameters are tracked by some type of estimator combined with data detection. An adaptive iterative detection (AID) algorithm is a combination of adaptive detection and iterative detection. There are several approaches to AID in the literature:

1. iterative detection with an estimator decoupled from data detection, which is run only before the first iteration, as in [37];

2. iterative detection with a decoupled estimator which is run iteratively by exchanging hard or soft information with the data detection parts, for both parameters and data, as in [49, 50];

3. iterative detection using adaptive SISOs which jointly estimate both data and parameters, as in [7].
Though (1) gives the simplest detection algorithm [37], the estimator is not involved in the iterative detection, which results in the worst performance. Of significant interest are approaches (2) and (3). Initial versions of the second approach were based on exchanging hard information rather than soft (reliability) information for the data, as in [49, 50]. Recently, adaptive iterative detection that uses the soft information of the data to update the parameter estimates for phase uncertainty was demonstrated [7]. The soft information is transferred from one or more SISOs to the estimator. This approach is based on the idea that a soft estimator works better than a hard estimator, as in soft data detection. The concept of an Adaptive SISO (A-SISO) was introduced in [7] and has shown a great iteration gain for both SCCC and PCCC. In this third approach, there is no explicit exchange of soft/hard information about the data; rather, the estimators are inside the data detection trellis structure and implicitly regenerate the parameter estimates during the iterative detection. The A-SISO may have a single estimator with delayed decision, or multiple estimators in a Per-Survivor Processing (PSP) manner [41]. However, complexity may be the limitation of these algorithms, especially when more than one A-SISO is required in the decoding network. In [7] the full length of observations is used for the A-SISO, which results in the FI-A-SISO. The simulation results show great performance gains. In realistic situations, many cases have only a forward channel training sequence and require less delay. The Fixed-Lag (FL) A-SISO is appropriate for these applications.

1.3 Message Passing (Belief Propagation) Algorithm

The message passing (Belief Propagation) algorithm in graphical models (Bayesian networks, factor graphs, Tanner graphs, etc.)
has been used to explain the success of iterative detection, especially for Turbo codes [34], [38]. A generalized forward-backward algorithm yields a serially effective realization of Belief Propagation (BP). This forward-backward algorithm consists of three steps: (1) forward message passing, (2) backward message passing, and (3) message fusion at each vertex. The type of messages passed depends on the graphical model. In [28], the messages are derived for the factor graph, while they were derived for the Bayesian network in [34]. The messages for the Bayesian network are a bit more complicated because a vertex in a Bayesian network corresponds to two vertices (a variable vertex and a function vertex) in a factor graph. For both graphical models, however, the basic computation for an outgoing message consists of combining all incoming messages and marginalizing over the neighboring variables except the variable toward which the outgoing message is heading. It is well known that BP gives an exact probability in a cycle-free situation, but with cycles it gives only an estimate of the required probabilistic information.

[Figure 1.2: (a) Block diagram of the standard turbo decoder; (b) Bayesian network of the standard turbo decoder]

In Fig. 1.2, a block diagram and the corresponding Bayesian network for standard Turbo decoding are depicted. The shaded circles represent the observations, and the punctured structure is also taken into account for the two SISOs. In Fig. 1.3, the message passing is shown step by step for one iteration.
The passed messages for each step are:

• step 1: observing channel output symbols (SODEM)
• step 2: soft channel metrics from observations (SODEM)
• step 3: forward message passing along the trellis (SISO1, SISO2)
• step 4: backward message passing along the trellis (SISO1, SISO2)
• step 5: fusion (combining) at vertices (SISO1, SISO2)
• step 6: message interleaving (Interleaver)

where the corresponding block for each step is shown in parentheses.

[Figure 1.3: Message passing flow: steps (1)-(6)]

In [38], it is shown that the forward-backward message passing structure corresponds exactly to the BCJR algorithm [10].

1.4 Convergence of the Message Passing (Belief Propagation) Algorithm

The recently developed density evolution technique has determined the analytic capacity of the LDPC code. In [44], the density evolution technique was used recursively to track the density of the extrinsic messages between the variable nodes and check nodes of the LDPC code under various channel conditions. A simplified version of the density evolution technique was introduced with a Gaussian approximation in [21]. The Gaussian approximation was based on the fact that the extrinsic information can be well approximated as a Gaussian random variable as the number of iterations increases. Another approach to LDPC density evolution was introduced in [4], which proposed a min-sum density evolution using an approximation at a check node. Because the LDPC decoding algorithm can be represented by a simple bipartite graph, most of the analytical work has been focused on LDPC codes.
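As a rough illustration of this kind of recursion, the Monte-Carlo sketch below tracks only the mean of the variable-to-check message for a regular (dv, dc) = (3, 6) LDPC code on the binary-input AWGN channel, under the Gaussian approximation that each message is a "consistent" Gaussian with variance equal to twice its mean. This is a sketch under stated assumptions, not the implementation used in the dissertation or in [21]; the published threshold for the (3,6) ensemble is near 1.1 dB, so the mean should grow without stalling above that SNR.

```python
import numpy as np

def ldpc_de_gaussian(ebno_db, dv=3, dc=6, rate=0.5, iters=40, n=50_000, seed=0):
    """Track the mean of the variable-to-check LLR for a regular (dv, dc)
    LDPC code on the BIAWGN channel, assuming consistent Gaussian messages
    (variance = 2 * mean) and the all-zero codeword."""
    rng = np.random.default_rng(seed)
    sigma2 = 1.0 / (2 * rate * 10 ** (ebno_db / 10))  # noise variance at this Eb/No
    m_ch = 2.0 / sigma2   # mean of the channel LLRs
    m_v = m_ch            # initial variable-to-check message mean
    for _ in range(iters):
        # Check-node update: tanh rule over dc - 1 incoming messages.
        v = rng.normal(m_v, np.sqrt(2 * m_v), size=(n, dc - 1))
        t = np.prod(np.tanh(v / 2.0), axis=1)
        m_c = np.mean(2.0 * np.arctanh(np.clip(t, -0.999999, 0.999999)))
        # Variable-node update: channel LLR plus dv - 1 check-to-variable messages.
        m_v = m_ch + (dv - 1) * m_c
    return m_v

# Above the (3,6) threshold the message mean keeps growing toward the clipping
# cap; below it, the recursion stalls at a small fixed point.
```

Sweeping `ebno_db` and watching whether `m_v` escapes or stalls is exactly the threshold-finding procedure that density evolution formalizes.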
Meanwhile, similar attem pts have been made for Turbo codes based on simulation in order to obtain the threshold, which in turn, determines the asym ptotic capacity of Turbo codes when the interleaver size and the number of iterations go to infinity [48, 29]. The obtained thresholds agree well to the region where the bit error curves start to fall down. Because these analysis depend on simulation, the threshold has a limited precision compared to those of LDPC codes. In [24], the density evolution 8 Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. technique was used to explain many mysteries of turbo codes and SCCC. For exam­ ple, the role of systematic bits and the importance of recursive constituent codes etc. The input-output SNR curves were used to explain the evolution of exchanged soft information. The density evolution technique based on SNR curves were classified into two categories. The actual SNR curves were obtained during a standard itera­ tive decoding procedure by collecting the extrinsic information, while the Gaussian SNR curves were obtained by generating Gaussian random variable fed into each constituent decoder. Based on all these capacity related works, we should notify th at when the decoding algorithm is not optimal, capacity depends on the decoding algorithm to which the density evolution technique is applied. Therefore, it can be understood as a decoding capacity rather than encoding (or code) capacity. 1.5 O rganization In Chapter 2, the basic structure of iterative decoding and adaptive iterative decoding are introduced. The classification of SISOs based on the type of soft output are discussed. W hen the channel is unknown, the adaptive decoding is combined with the iterative decoding. Several adaptive iterative decoding techniques are discussed. The Adaptive SISO has the channel adaptation (estimation) module, which is combined with the trellis decoding structure. 
Meanwhile, the AID with decoupled channel estimator(s) is classified by the decision feedback option (e.g., hard decision, soft decision).

In Chapter 3, two modified A-SISO algorithms are compared in terms of complexity and performance. For a system using TCM on an interleaved frequency-selective fading channel, it is shown that the iteration gain observed for the fixed-interval algorithm is maintained by the fixed-lag algorithm. In addition, effective channel adaptation options are introduced for the fixed-lag algorithm. Using the idea of the Reduced-State (RS) SISO, the trade-off between complexity and performance is discussed as well. The decoupled AID algorithms are also investigated extensively in Chapter 3. For turbo decoding on a flat fading channel, three decoding options (i.e., hard-decision feedback, soft-decision feedback, and reencoded-hard-decision feedback) are compared. As a variation of soft-decision-feedback AID, the partial-soft-decision-feedback algorithm is proposed, which uses only the reliable soft information among all the available soft information.

In Chapter 4, as an attempt to reduce the average complexity of the iterative decoder, several stopping criteria are compared in terms of performance and complexity. Compared to other stopping criteria, the proposed Valid Code Word (VCW) algorithm achieves the same performance with less memory and complexity. The VCW test checks whether the decoded word at the current iteration is a valid codeword, while other stopping criteria use consecutive soft or hard outputs to determine a proper stopping point.

In Chapter 5, several recently developed density evolution ideas are used to analyze common code structures, and it is shown that those ideas yield consistent results.
To do so, expressions for the density evolution of an SCCC with a simple 2-state constituent code are derived. The analytic expressions are based on the sum-product and min-sum algorithms, and thresholds are evaluated for both message passing algorithms. For the min-sum algorithm in particular, density evolution is used to analyze the effect of scaling the soft information. Scaling the extrinsic information can be viewed either as a method of slowing down the convergence of the soft information or as a method of avoiding the overestimation effect of min-sum-based soft information. Scaling the extrinsic information yields better performance, and the gain is maximized for particular constituent codes. A similar approach is applied to LDPC codes, for which the scaling gain is also noticeable. This scaling gain is analyzed with both density evolution and simulated performance; the scaling gain predicted by density evolution matches well with the scaling gain achieved in simulation.

A modification of the standard iterative decoding (message passing) algorithm is introduced in Chapter 6, which yields improved performance at the cost of higher complexity. The modification is to run multiple iterative decoders, each with a different constraint on a system variable (e.g., an input value, a state value, etc.). This Constrained Iterative Decoding (CID) implements optimal MAP decoding for systems represented by single-cycle graphs (e.g., tail-biting convolutional codes). For more complex graphical models, the CID is suboptimal, but it outperforms the standard decoding algorithm because it negates the effects of some cycles in the model. It is shown that the CID outperforms the standard ID for a Serially Concatenated Convolutional Code (SCCC) system and a Low-Density Parity-Check (LDPC) code system, especially when the interleaver size is small.
Density evolution analysis is used to show how the CID improves convergence relative to that of standard ID, by showing that the threshold of the CID is lower than that of the standard ID. Some concluding remarks and future work are given in Chapter 7.

Chapter 2
Iterative Detection and Adaptive Iterative Detection

2.1 Iterative Detection

2.1.1 Soft Input Soft Output (SISO) Algorithms

Iterative decoding using soft output information has been developed in numerous papers since the turbo code was introduced, and large iteration gains have been observed by exchanging soft information in a variety of applications. In [11], the previously described soft output equalizer and decoder were defined generally by the Soft Input Soft Output (SISO) block, which can be understood as the probabilistic inverse of a Finite State Machine (FSM). Fig. 2.1 shows the correspondence between an FSM and a SISO.

[Figure 2.1: Block diagram of (a) FSM, (b) SISO.]

As a well-known application, Fig. 2.1 illustrates a standard turbo encoder and decoder, i.e., iterative decoding using two SISOs connected by an interleaver/deinterleaver. A SISO algorithm can be classified by the type of soft output as A-Posteriori Probability (APP) or Minimum Sequence Metric (MSM) [18]. Each soft output is defined by

APP_{k1}^{k2}(u_k) = sum_{t_k : u_k} P(z_{k1}^{k2} | t_k) P(t_k)    (2.1)
MSM_{k1}^{k2}(u_k) = -ln [ max_{t_k : u_k} P(z_{k1}^{k2} | t_k) P(t_k) ]    (2.2)

where k is the time index between k1 and k2, t_k = (s_k, a_k, s_{k+1}) is the transition, u_k is an arbitrary quantity related to the FSM (e.g., b_k, a_k, s_k, etc.), and t_k : u_k denotes all transitions consistent with u_k. It is noted that the MSM in (2.2) is the negative log of a generalized APP in (2.1).
The two types of soft output are interchangeable by replacing min-sum operations with sum-product operations; we therefore concentrate on the MSM-based algorithms in this development. The relationship between the APP and the MSM is described in detail in [18]. Based on the MSM soft output, the basic forward Add-Compare-Select (ACS) and backward ACS recursions are

MSM_{k1}^{k}(s_{k+1}) = min_{t_k : s_{k+1}} [ MSM_{k1}^{k-1}(s_k) + M_k(t_k) ]    (2.3)
MSM_{k}^{k2}(s_k) = min_{t_k : s_k} [ MSM_{k+1}^{k2}(s_{k+1}) + M_k(t_k) ]    (2.4)

where M_k(t_k) is the transition metric of t_k. The extrinsic information is computed by the following completion step:

SO_{k1}^{k2}(u_k) = min_{t_k : u_k} [ MSM_{k1}^{k-1}(s_k) + M_k(t_k) + MSM_{k+1}^{k2}(s_{k+1}) ] - SI(u_k)    (2.5)

where SI(u_k) is the input soft information on u_k and SO_{k1}^{k2}(u_k) is the output soft information on u_k based on the input soft information between times k1 and k2. The soft information is passed from one SISO to the other SISOs in a sequential or parallel manner, yielding an iteratively refined a-priori probability of u_k.

The soft output algorithms can also be classified, by observation interval, as fixed-interval or fixed-lag algorithms with smoothing lag D. Abend and Fritchman [1] introduced a Maximum A-posteriori Probability (MAP) symbol detection algorithm with a fixed lag D whose complexity grows exponentially with D. An equivalent algorithm, with complexity growing only linearly in D, was developed by Lee [35] and by Li, Vucetic and Sato [36]; we refer to it as the L2VS algorithm. Recently, it has been shown that there is apparently no advantage to the L2VS algorithm when compared to the bi-directional algorithm in a perfectly known channel [17].
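The forward ACS (2.3), backward ACS (2.4), and completion (2.5) can be sketched for a toy binary-input trellis as follows. This is a minimal min-sum illustration under stated simplifications: the helper names are assumptions, the trellis starts in state 0 with a free terminal state, and the SI(u_k) subtraction is omitted.

```python
import numpy as np

def msm_siso(metrics, next_state, n_states, INF=1e9):
    """Minimal min-sum (MSM) SISO sketch for a binary-input trellis.
    metrics[k][s][a] plays the role of the transition metric M_k(t_k) for
    state s and input a; next_state[s][a] is the successor state."""
    K = len(metrics)
    F = np.full((K + 1, n_states), INF)   # forward MSMs, start in state 0
    F[0, 0] = 0.0
    B = np.full((K + 1, n_states), INF)   # backward MSMs, free terminal state
    B[K, :] = 0.0
    for k in range(K):                    # forward ACS, cf. (2.3)
        for s in range(n_states):
            for a in (0, 1):
                ns = next_state[s][a]
                F[k + 1, ns] = min(F[k + 1, ns], F[k, s] + metrics[k][s][a])
    for k in range(K - 1, -1, -1):        # backward ACS, cf. (2.4)
        for s in range(n_states):
            B[k, s] = min(B[k + 1, next_state[s][a]] + metrics[k][s][a]
                          for a in (0, 1))
    SO = np.zeros((K, 2))                 # completion, cf. (2.5), on u_k = a_k
    for k in range(K):
        for a in (0, 1):
            SO[k, a] = min(F[k, s] + metrics[k][s][a] + B[k + 1, next_state[s][a]]
                           for s in range(n_states))
    return SO
```

For a 2-state accumulator trellis with metrics rewarding the true input bit, the per-step argmin of SO recovers the input sequence.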
In particular, the bi-directional algorithm is less complex and generates exactly the same soft output. Moreover, it can be parallelized, as can the L2VS algorithm.

2.1.2 Reduced State Soft Input Soft Output (RS-SISO) Algorithms

Although SISO algorithms show great performance in many applications, their complexity places some limitations on implementing them in practice. The complexity grows exponentially with the number of states required for trellis decoding, and the number of states is directly related to the constraint (memory) length of the FSM. The same complexity concerns arise for the hard-decision Viterbi Algorithm (VA). In this context, Reduced State Sequence Estimation (RSSE) [27] and Delayed Decision-Feedback Sequence Estimation (DDFSE) [25] were developed and are widely used for complexity reduction in the hard-decision VA. The basic principle of these decision feedback techniques is to truncate part of the state information (e.g., the input symbols in a simple FSM) and replace the truncated information with decision-feedback estimates.

In [13], these decision feedback techniques are applied to the soft-decision SISO algorithm, yielding the Reduced State SISO (RS-SISO) algorithm. The truncated states for the forward and backward recursions are, respectively,

s̃_k = ( a_{k-1}, ..., a_{k-L1},  â_{k-L1-1}, ..., â_{k-L} )    (2.6)
         [truncated state]        [forward survivor path]
s̃_k = ( â_{k+L-L1-1}, ..., â_k,  a_{k-1}, ..., a_{k-L1} )    (2.7)
         [backward survivor path]   [truncated state]

where L is the constraint length of the FSM and L1 is the length of the truncated state.
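A minimal sketch of how such a reduced state is assembled: the L1 most recent symbols are kept exactly, and the remaining L - L1 positions are filled with decision-feedback estimates from the survivor path. The helper name and argument layout are hypothetical, not the thesis' code.

```python
def reduced_state(recent, survivor_estimates, L, L1):
    """Forward reduced state in the spirit of (2.6): keep the L1 most recent
    input symbols exactly; fill the truncated L - L1 positions with the
    survivor path's decision-feedback estimates (oldest-needed first)."""
    assert len(recent) == L1 and len(survivor_estimates) >= L - L1
    return tuple(recent) + tuple(survivor_estimates[:L - L1])
```
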
It is noted that the truncated parts differ according to the recursion direction. Fig. 2.2 shows how the forward and backward survivor paths determine the respective reduced states.

[Figure 2.2: Reduced states for forward and backward recursions.]

The major contributions of the work in [13] are (1) the details of the truncation and decision feedback technique for reducing the number of states, and (2) the derivation of an appropriate branch metric for the forward and backward recursions of the bi-directional SISO algorithm. It is noted that the RS-SISO algorithm is suboptimal, even in a time-invariant situation, because of the truncated state. Fig. 2.3 illustrates the block diagram of an RS-SISO. Because the RS-SISO algorithm is suboptimal, the soft output of an RS-SISO can be fed back to the same RS-SISO, which is referred to as self-iteration. Simulation results show a large self-iteration gain for various applications.

[Figure 2.3: Block diagram of RS-SISO with self-iteration.]

2.2 Adaptive Iterative Detection (AID)

2.2.1 Adaptive SISO

When the channel is unknown and/or time varying, a tree search algorithm is optimal because of the correlated channel. Due to the exponential complexity of a tree search, joint channel and data estimation on a force-folded trellis (Fig. 2.4) may be a reasonable approach. In the hard-decision case, approaches vary from simple single-channel-estimator structures to more advanced multiple-estimator structures based on the Per-Survivor Processing (PSP) concept [41]. In [8] the L2VS algorithm was augmented with PSP-like estimation for the special case L = D, where L is the memory size of the channel and D is the lag of the L2VS algorithm.
[Figure 2.4: A tree structure and force-folded trellis.]

For the general lag size D, a fixed-lag, forward-only, adaptive soft output algorithm was introduced in [54], based on the Abend and Fritchman [1] approach. However, the exponential complexity of this algorithm in the lag D precludes its use in many applications of interest. In [7], a new bi-directional Fixed Interval (FI) A-SISO was introduced, having linear complexity in the observation record.

Since we consider a time-varying channel, the ACS recursion is combined with a channel estimation algorithm; we refer to this as the adaptive-ACS. Among the various joint detection and estimation algorithms, we focus on PSP. In PSP, the code sequence of each survivor is used as the decision feedback for the per-survivor estimate of the unknown parameters [41]. The channel estimate associated with state s_{k+1} is calculated as

f̂(s_{k+1}) = G( f̂(s_k), a(s_k), z_k )    (2.8)

where f̂(s_k) = { f̂_k(s_k, i) }_{i=0}^{L} is the estimate of the unknown vector (i.e., the channel coefficients in this context), a(s_k) is the coded symbol associated with the survivor of state s_k, and G denotes a generic estimation operator. In this development, G is the Least Mean Square (LMS) channel estimation algorithm. The PSP approach is illustrated in Fig. 2.5: based on a survivor path after the forward and backward recursions, the channel estimates f̂(s_k) and f̂(s_{k+1}) are updated using the LMS algorithm.

[Figure 2.5: Channel adaptation in the PSP manner.]

The forward and backward LMS recursions are, respectively,

f̂^f(s_k, i) = f̂^f(s_{k-1}, i) + β [ z_{k-1} - Σ_{j=0}^{L} â_{k-j-1}(s_k) f̂^f(s_{k-1}, j) ] â*_{k-i-1}(s_k)    (2.9)
f̂^b(s_k, i) = f̂^b(s_{k+1}, i) + β [ z_k - Σ_{j=0}^{L} â_{k-j}(s_k) f̂^b(s_{k+1}, j) ] â*_{k-i}(s_k)    (2.10)

where i is an index between 0 and L and β
is the step size of the LMS channel adaptation algorithm. The best choice of β depends on the SNR and the channel dynamics. The forward channel estimate f̂^f(s_0) is initialized by running the RLS algorithm on a training sequence at the left edge. The backward channel initial estimate f̂^b(s_{N+1}) is computed in the same manner using the backward channel training sequence. The forward and backward channel initial coefficients are used for the forward and backward adaptive-ACS recursions, respectively.

In [7], the optimal bi-directional FI-A-SISO algorithm was developed with the GM model for the unknown parameter. The optimal binding for the FI-A-SISO algorithm was described as

p(z_0^N | x_0^N) = p(z_0^k | x_0^k) p(z_{k+1}^N | x_{k+1}^N, s_{k+1}) b( ĝ_{k|k}, G_{k|k}, ĝ^b_{k|k+1}, G^b_{k|k+1} )    (2.11)

where ĝ_{k|k} and ĝ^b_{k|k+1} are the forward channel estimate and the one-step backward channel prediction, and G_{k|k} and G^b_{k|k+1} are the corresponding covariances. Fig. 2.6 illustrates the recursions and binding with the corresponding messages. This optimal algorithm was simplified and sub-optimal versions were suggested. These algorithms have forward and backward adaptive-ACS recursions and a modified completion operation with a binding term given by

b( f̂^f(s_k), f̂^b(s_k) ) = (1 / (1 - μ)) || f̂^f(s_k) - f̂^b(s_k) ||^2    (2.12)

where f̂^f(s_k) is the forward channel estimate, f̂^b(s_k) is the backward channel estimate, and μ is a constant weighting factor.

[Figure 2.6: Forward/backward recursions and binding for the A-SISO.]

The binding term is added to the metric of the corresponding path. The inconsistency || f̂^f(s_k) - f̂^b(s_k) || of the bi-directional channel estimates results in an increased metric.
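The per-survivor LMS correction of (2.9)-(2.10) and the binding penalty of (2.12) can be sketched as follows. These are illustrative helpers under stated assumptions (one generic update step per survivor, real or complex symbol vectors), not the thesis' implementation.

```python
import numpy as np

def lms_psp_step(f_hat, symbols, z, beta):
    """One per-survivor LMS update in the spirit of (2.9)/(2.10): the
    survivor's symbol history predicts the observation z, and each tap is
    corrected along the conjugate of the symbol it multiplies."""
    err = z - np.dot(symbols, f_hat)          # innovation for this survivor
    return f_hat + beta * err * np.conj(symbols)

def binding_term(f_fwd, f_bwd, mu):
    """Binding term in the spirit of (2.12): a metric penalty for
    inconsistency between forward and backward per-survivor estimates."""
    return np.sum(np.abs(f_fwd - f_bwd) ** 2) / (1.0 - mu)
```

With noiseless observations and random BPSK feedback symbols, repeated LMS steps drive the estimate to the true taps, and the binding term of two identical estimates is zero.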
In other words, the binding effect accounts for the inconsistency between the causal and anti-causal channel estimates.

2.2.2 Decoupled estimator AID

Variations of the Adaptive SISO (A-SISO) were reviewed in the previous section under per-path estimator models. In many applications of iterative detection, latency may be a major limitation. As one approach to decreasing latency, decoupled single-estimator approaches are reviewed here in place of the multiple per-path estimator approach.

2.2.2.1 Parameter Estimation prior to Iterative detection

Regarding unknown-parameter estimation using a single channel estimator, many options are available to reduce the overall detection complexity. The approach of performing parameter estimation prior to iterative detection has been developed in [30, 51]. Before the iterative detection, an external estimator adapts the unknown parameters, with or without pilot symbols; after the estimation, the same estimates are used for all iterations. In [30], an external estimation method was suggested for a Serially Concatenated Convolutional Code (SCCC), and a similar approach is followed in [51] for a Parallel Concatenated Convolutional Code (PCCC). These algorithms exclude the parameter estimators from the iterative detection.

2.2.2.2 Parameter Estimation within Iterative detection

Under the assumption that channel estimators based on the soft output of the SISOs give more reliable channel estimates, several approaches were developed to include the estimators inside the iterative detection algorithm. Based on the type of
soft information transferred to the estimators, the algorithms can be classified as hard-decision directed or soft-decision directed.

[Figure 2.7: Block diagram of decision-directed AID: (a) hard-decision directed, (b) soft-decision directed.]

2.2.2.3 Hard decision directed AID

In hard-decision directed AID, the soft information of the SISOs is thresholded and passed to a decoupled estimator. During the iterations, it is expected that the estimators regenerate the estimates based on more reliable information, and that the refined estimates in turn yield more reliable data estimates. As a result, the desired performance can be achieved with fewer iterations (and hence less latency). In [49], hard-decision directed AID was proposed for the turbo code. A general model of hard-decision directed AID is shown in Fig. 2.7 (a).

2.2.2.4 Soft decision directed AID

The basic principle of iterative detection is to take advantage of the soft information of the SISOs without thresholding. In view of this fact, a question arises: how can the parameter estimator use the soft information from the SISOs? Recently, [7] proposed a soft-decision directed AID with a phase tracking algorithm for both PCCC and SCCC decoding. It suggested an Adaptive Soft Demodulator (A-SODEM) which exchanges soft information with the SISOs. The soft information from the SISOs can be regarded as an a-priori distribution on the data symbol a_k. Using this soft information, the mean of a_k can be calculated and used by the parameter estimator in place of a hard decision on the data symbol. A general model of soft-decision directed AID is shown in Fig. 2.7 (b).
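The contrast between the two feedback options above can be sketched in a few lines, assuming the SISO supplies a probability vector over the constellation points (the helper names are illustrative, not from the thesis):

```python
import numpy as np

def hard_symbol(constellation, probs):
    """Hard-decision feedback: threshold the soft output to the most
    probable constellation point."""
    return constellation[np.argmax(probs)]

def soft_symbol(constellation, probs):
    """Soft-decision feedback: feed the estimator the a-posteriori mean
    E[a_k] = sum_a a * P(a_k = a) instead of a hard decision."""
    return np.dot(probs, constellation)
```

For BPSK with P(a_k = +1) = 0.9, the hard decision is +1, while the soft symbol is 0.9 - 0.1 = 0.8, so an unreliable symbol contributes proportionally less to the estimator.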
Chapter 3
Modified Adaptive Iterative Detection

3.1 Comparison of Two Fixed-Lag A-SISOs

3.1.1 Introduction

The mobile channel is characterized by time-varying multipath propagation, which can result in severe intersymbol interference (ISI). A combination of Trellis Coded Modulation (TCM) and interleaving is often used to combat time-varying ISI. Three receiver structures are: (i) hard-decision decoding of the code with sequence detection on the inner ISI channel, (ii) soft-output "equalization" of the ISI channel with soft-decision decoding of the code, and (iii) iterative equalization-decoding using a soft-in/soft-out (SISO) module for each of the decoder and the "equalizer". These approaches are listed in order of improving performance, with (ii) a special case of (iii) (i.e., no iteration). In this work, we are interested in performing method (iii) when perfect knowledge of the channel is not available at the receiver.

Several soft-output algorithms, or SISOs, have been suggested for approaches (ii) and (iii) when the channel is known. Of particular interest in this work are fixed-lag (FL) algorithms, which produce soft-output information on a quantity u_k based on the soft-in information from time 0 to time k + D, where D is the smoothing lag. There are several SISOs that compute the same (desired) soft information with different structures. The Abend and Fritchman [1] algorithm and the Lee [35] and Li, Vucetic and Sato [36] (L2VS) algorithms produce the same soft information using forward-only recursive processing; the complexity increases exponentially with D for the Abend-Fritchman algorithm and linearly in D for the L2VS version. Recently, it has been shown that a bi-directional FL SISO [14, 11] produces the same soft outputs with complexity less than that of the L2VS [14, 18] at the same latency.
Furthermore, these FL-SISOs have been shown to be generally applicable to iterative detection [11] in both sum-product and min-sum form, yielding the soft inverse of a finite state machine (FSM) based on the A-Posteriori Probability (APP) or Minimum Sequence Metric (MSM) [18], respectively. Thus, the bi-directional FL algorithm is the desired structure when all system parameters are known.

When the channel is unknown and/or time variant, joint channel and data estimation may be a reasonable approach. In the hard-decision case, approaches vary from simple single-channel-estimator structures to more advanced multiple-estimator structures based on the Per-Survivor Processing (PSP) concept [42]. In [8] the L2VS algorithm was augmented with PSP-like estimation for the special case L = D, where L is the memory size of the channel and D is the lag of the L2VS algorithm. For the general lag size D, a fixed-lag, forward-only, adaptive soft output algorithm was introduced in [54], based on the Abend and Fritchman [1] approach. However, the exponential complexity of this algorithm in the lag D precludes its use in many applications of interest. In [7], a new bi-directional fixed-interval A-SISO was introduced, having linear complexity in the observation record. Nevertheless, that algorithm assumes that both forward and backward channel training sequences are present. In many realistic situations, however, only a forward training sequence is available, and there may also be a need to process the received data without large delay. In these cases, a fixed-lag soft output algorithm is required. Thus, two interesting questions arise: (i) Can the fixed-interval A-SISO [7] be modified into a (bi-directional) fixed-lag A-SISO while maintaining its significant performance gains?
and (ii) If so, which fixed-lag soft output algorithm architecture (i.e., L2VS vs. bi-directional) is more effective in terms of complexity and performance when made adaptive?

In this work we propose new Fixed-Lag A-SISO (FL-A-SISO) architectures. In particular, the FL-A-SISO is realized both in the bi-directional form [14] and in the L2VS forward-only form, by synthesizing the PSP concept with the recent results from [18, 7]. The resulting FL-A-SISO algorithms have linear complexity in D and estimate the channel without a backward channel training sequence. The simulation results show that the suggested FL-A-SISO architectures maintain most of the performance gains demonstrated for the fixed-interval case [7] and have almost the same performance as the algorithm in [54] with significantly less complexity. Among the several reasonable options considered, the preferred FL-A-SISO algorithm is found to be the bi-directional one with forward-only channel estimation.

3.1.2 System Description

A typical TDMA cellular transmission system with trellis-coded modulation (TCM), interleaving, and a frequency-selective fading channel is considered. The source symbol b_k is trellis coded and modulated, and the modulated symbols are interleaved. The interleaved symbols are transmitted through a frequency-selective fading channel with L + 1 taps, and the signals are observed in additive white Gaussian noise (AWGN). A discrete-time model for the received signal is

z_k = Σ_{i=0}^{L} a_{k-i} f_k(i) + n_k    (3.1)

where a_k is the coded symbol, f_k(i) is a coefficient of the frequency-selective fading channel,¹ and n_k is white circular complex Gaussian noise. The structure of the overall system is shown in Fig. 3.1.
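A direct sketch of the observation model (3.1), assuming f is a (K, L+1) array holding the time-varying tap values and that symbols before time 0 are zero (the function name is illustrative):

```python
import numpy as np

def received_signal(a, f, sigma, rng):
    """Sketch of (3.1): z_k = sum_{i=0}^{L} a_{k-i} f_k(i) + n_k, with
    circular complex AWGN of variance sigma^2."""
    K, taps = f.shape                        # taps = L + 1
    a = np.asarray(a, dtype=complex)
    z = np.zeros(K, dtype=complex)
    for k in range(K):
        for i in range(taps):
            if k - i >= 0:                   # symbols before time 0 are zero
                z[k] += a[k - i] * f[k, i]
    noise = (sigma / np.sqrt(2)) * (rng.standard_normal(K)
                                    + 1j * rng.standard_normal(K))
    return z + noise
```

For a time-invariant channel, this reduces to an ordinary linear convolution of the symbols with the taps, truncated to K outputs.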
[Figure 3.1: Block diagram of TCM on an interleaved frequency-selective fading channel.]

The receiver consists of two SISOs [11], which are the counterparts of the two FSMs in the transmitter. The two SISOs are connected by an interleaver and a deinterleaver to exchange soft information iteratively. In [11], the architecture of standard iterative detection with SISOs was described. Iterative detection was applied to systems with unknown parameters in [7], where the inner SISO was adaptive and re-estimated the ISI channel during each activation (i.e., each iteration). The fixed-lag A-SISO architectures of this work are based on the development of the A-SISOs in [7] and the relation between the L2VS and bi-directional architectures established in [18]. The adaptive iterative detection algorithm is the same as that developed in [7], with the inner A-SISO changed from the FI version to the various fixed-lag A-SISO algorithms.

¹This symbol-spaced approximation is used to allow for reasonable simulation effort. This model and the corresponding fading simulation can be modified to account for more realistic front-end processing as in [19].

3.1.3 Bi-Directional Fixed-Lag A-SISO

The FL-BD-A-SISO algorithm is realized by the same procedure as the FI-BD-A-SISO algorithm [7], but the backward adaptive recursion is started at k2 = k + D to obtain SO_0^{k+D}(u_k), where k denotes the time index of the current soft output. The backward recursion and the completion are illustrated in Fig. 3.4(a). Unlike the FI algorithm, the FL algorithm does not require a backward training sequence, which may more accurately reflect the training methods used in current systems. The challenging problem is how to initialize the backward channel estimates.
Among the various possible solutions, the simplest method is to use the latest updated forward channel estimate f̂(s_{k+D+1}) as the backward channel initial coefficients f̂^b(s_{k+D+1}). We refer to this as FL-BD-BE (Bi-directional Estimation). Two completion methods in (2.5) with k2 = k + D are considered: with the binding term in (2.12) and without it. A variation on this backward initialization is also considered, in which the forward adaptive recursion is processed up to k + D + d instead of k + D, where d is an additional lag. When d is large enough, survivor merging will occur between times k + D and k + D + d with high probability. By storing all channel estimates for this forward recursion, one can trace back from time k + D + d to find the channel estimate associated with the best state at time k + D. This estimate, associated with the best survivor, can then be used to initialize the backward channel estimates for all states. Yet another variation is to run the backward ACS with a copy of the previous channel estimates from the forward ACS. This approach removes both the backward channel initialization problem and the binding term. We refer to this algorithm as FL-BD-FE (Forward-only Estimation).

3.1.4 Forward-Only Fixed-Lag A-SISO

The FL-L2VS algorithm is based on a constrained forward ACS, defined by

MSM_0^k(u_{k-d}, s_{k+1}) = min_{t_k : s_{k+1}, u_{k-d}} [ MSM_0^{k-1}(u_{k-d}, s_k) + M_k(t_k) ]    (3.2)

It is noted that in the constrained ACS, all transitions t_k that are inconsistent with the conditioning quantity u_{k-d} are terminated [18],[36]. In this context, we consider two variations of the FL-L2VS-A-SISO algorithm, both illustrated in Fig. 3.4. In Fig. 3.4(b), both the constrained and the unconstrained forward ACS recursions are adaptively implemented.
We refer to this as FL-L2VS-CE (i.e., constrained estimation). In Fig. 3.4(c), only the unconstrained forward ACS recursions are adaptively implemented, whereas the channel estimates for the constrained forward ACS are a copy of the previous channel estimates for the unconstrained forward ACS. We refer to this as FL-L2VS-UE (i.e., unconstrained estimation). It is found that the FL-L2VS-UE algorithm generates exactly the same soft output as the FL-BD-FE algorithm; this can be understood as an extension of the previous work for the known channel [14].

The FL-L2VS algorithm consists of forward-only adaptive-ACS operations. Therefore, neither the backward channel estimate initialization nor the binding term (2.12) is applicable. Note that this is the generalization of [8] to D > L, which follows directly from the interpretation of the FL-L2VS-SISO given in [18] and summarized by (3.2).

3.1.5 Numerical Results

The simulations were run based on the system described in Fig. 3.4. The convolutional code rate was 1/2, and the output of the convolutional code was modulated by QPSK via Gray mapping. The coded symbols are interleaved using a 57x30 block interleaver. Each column of block-interleaved symbols, together with additional training sequence information, is referred to as a burst; each burst is the processing unit of the inner adaptive SISO. The interleaved symbols were sent over a 3-tap Rayleigh fading channel with normalized Doppler spread fd = 0.005. All of the simulations were executed under the wide-sense stationary, uncorrelated scattering (WSSUS) model, with each tap of the channel generated as an independent Gaussian process with the Clarke spectrum [22]. For each Eb/N0 we applied the optimal step size β for LMS channel estimation (i.e., determined empirically).
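A common way to approximate a Rayleigh tap with the Clarke spectrum is a sum-of-sinusoids model. The following is a hedged sketch of one such tap generator (the function name, oscillator count, and normalization are assumptions; this is not the simulator used for these results):

```python
import numpy as np

def clarke_tap(K, fd_norm, n_osc=32, seed=0):
    """Sum-of-sinusoids sketch of one Rayleigh fading tap whose spectrum
    approximates the Clarke spectrum. fd_norm is the normalized Doppler
    spread (e.g., 0.005); returns K complex samples with unit mean power."""
    rng = np.random.default_rng(seed)
    k = np.arange(K)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_osc)   # angles of arrival
    phi = rng.uniform(0.0, 2.0 * np.pi, n_osc)     # random initial phases
    dopplers = fd_norm * np.cos(theta)             # per-path Doppler shifts
    phase = 2.0 * np.pi * np.outer(k, dopplers) + phi
    return np.exp(1j * phase).sum(axis=1) / np.sqrt(n_osc)
```

As the number of oscillators grows, the samples approach a complex Gaussian process, so the envelope is approximately Rayleigh distributed.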
In this section, each algorithm is identified by the following labels:

• the type of observation interval (FI or FL)
• the type of FL structure (BD or L2VS)
• the type of binding (SB: Suboptimal Binding, or NB: No Binding)
• the type of L2VS adaptation (CE: Constrained Estimation, or UE: Unconstrained Estimation only)
• the type of bi-directional channel adaptation (BE: Bi-directional Estimation, or FE: Forward-only Estimation)

In Fig. 3.2 the performance of the adaptive iterative detector is shown for the first and the fifth iterations. For comparison, the two FL methods (BD, L2VS) are shown together with the FI algorithm. Since the FL-BD-FE and FL-L2VS-UE algorithms generate the same soft output, they show the same performance in Fig. 3.2. The performance of the FL-BD-FE/FL-L2VS algorithm is about 5 dB in Eb/N0 worse than that of the FI algorithm at a BER of 10^-3 for the first iteration, but only 2 dB worse for the fifth iteration. Note that the FL algorithm shows a larger iteration gain than the FI algorithm. For the FL-BD-FE algorithm, the backward ACS recursion was simulated with a copy of the previous forward channel estimates. For the FL-BD-BE algorithm, we used the simplest backward initialization method, initializing the backward channel estimator with the latest estimated value of the forward channel estimator. As a result, the backward estimate has a strong dependency on the forward estimate; we therefore expect little inconsistency between the forward and backward channel estimates, and thus only a small binding gain for the FL-BD-A-SISO. This is in contrast to the FI algorithm, which has a large binding gain (i.e., 2 dB).
This is in contrast to the FI algorithm, which has a large binding gain (i.e., 2 dB). Although we do not present the results here, the variation of the FL-BD algorithm using channel estimates after merging was also simulated, with no significant improvement observed. In conclusion, attempts at backward channel estimation without a backward channel training sequence offer no advantage.

In Fig. 3.2, the two different channel adaptation algorithms for the FL-L2VS are also compared. The FL-L2VS-UE performs better than the FL-L2VS-CE. The channel coefficients are estimated based on the symbol estimates in the PSP manner, and the symbol estimates come from the ACS recursion. Therefore, the constrained ACS recursion places a constraint on the channel estimates. This constraint on the channel estimate apparently degrades the performance of the FL-L2VS-CE. Although the FL-L2VS-UE requires additional storage for the previous channel estimates, it yields a significant reduction in computational complexity relative to the FL-L2VS-CE algorithm. Note that the FL-BD-FE algorithm is much less complex than the FL-L2VS-UE algorithm, while the two show the same performance.

In Fig. 3.3 the performance of the proposed FL algorithms is compared with ZFG's algorithm [54] for different lag sizes.² The performance of the FL-BD-FE/FL-L2VS algorithms and ZFG's algorithm were found to be similar despite a significant difference in complexity (i.e., with D = 2, 6, ZFG's algorithm has 16 and 4096 states, respectively, as compared to 16 states for the FL-BD-FE/FL-L2VS). Moreover, for the FL-BD-BE algorithm, the performance did not improve noticeably when the lag size was larger than 3. However, for the FL-BD-FE/FL-L2VS-UE algorithms the performance gradually improves with the lag size.
3.1.6 Conclusion

In conclusion, the FL-A-SISO algorithms were proposed and their performance was compared for TCM on an interleaved frequency-selective fading channel. It was shown that both FL algorithms perform slightly worse than the FI version while maintaining the iteration gain of the FI algorithm. Without the need for backward channel estimation, the FL-BD-FE and FL-L2VS-UE algorithms showed the same performance and performed better than any proposed variation of the fixed-lag adaptive structure. Note that the complexity of the FL-BD-FE is much lower than that of the FL-L2VS-UE algorithm. Thus, the FL-BD algorithm with forward-only estimation (FL-BD-FE) has both a performance and a complexity advantage relative to the FL-L2VS algorithm.

²Note that for this non-recursive FSM, one can use "L-early completion"; i.e., the backward recursion for the BD algorithm or the constrained forward recursions of the L2VS approach can be shortened by L steps. This was not done in the simulations, but a practical D = L (i.e., D = 2 in this case) version of the BD, L2VS, and ZFG algorithms would all be the same as given in [8].

Figure 3.2: The performance of the fixed-interval and fixed-lag adaptive SISO algorithms, D = 6

Figure 3.3: The performance of the fixed-lag adaptive SISO algorithms for the different lag sizes D = 2, 6
Figure 3.4: Architectures for the FL algorithms: (a) FL-BD, (c) FL-L2VS-CE, and (d) FL-L2VS-UE

3.2 Adaptive Iterative Detection for Turbo Codes on Flat Fading Channels

3.2.1 Introduction

When a channel has one or more unknown, and possibly time-varying, parameters in its model, those parameters may be tracked by some type of estimator combined with data detection. The parameter estimation can be run only once before iterative detection, with the same estimates used for all successive iterations [37, 32, 51].
In contrast to this simple iterative detection algorithm, adaptive iterative detection (AID) algorithms use updated information on the data to re-estimate the channel parameters at every iteration. There are two approaches to AID in the literature: (i) iterative detection with a decoupled estimator that is run each iteration by exchanging hard or soft information with the data detection unit for both parameters and data [49, 50, 45], and (ii) iterative joint data and channel parameter estimation using adaptive Soft-In-Soft-Out (SISO) blocks [5] or quantized channel parameters merged into the data trellis [33].

In Fig. 3.5 two AID algorithms (b), (c) are shown with a non-adaptive algorithm (a). The AID with a decoupled estimator is depicted in Fig. 3.5(b). During iterations, it is expected that the decoupled estimator re-estimates the channel parameters using updated information on the data from the decoder, and the refined channel estimates result in more reliable data decoding.

Figure 3.5: The block diagram of receivers for turbo codes on fading channels: (a) a decoupled channel estimator run only before iterative detection, (b) decoupled adaptive iterative detection, (c) iterative joint data and parameter estimation

Based on the type of information transferred to the estimator, the decoupled-AID can be classified as Hard Decision Feedback (HDF) or Soft Decision Feedback (SDF). When a hard decoder (e.g., a Viterbi decoder) is used, HDF simply uses the output of the hard decoder. When a soft decoder (e.g., a turbo decoder) is used, HDF is based on thresholded reliability (soft) information from the soft output of the soft decoder. In contrast to HDF, SDF uses an average value of the data based on the whole soft output of the soft decoder.
The decoupled-AID can also be classified based on the type of decoupled channel estimator. With a probabilistic channel parameter model, a Wiener filter (WF) or Kalman filter (KF)³ can be considered with HDF or SDF. With a deterministic parameter model, a moving average (MA) filter or least squares (LS) filter can be considered with HDF or SDF. In each of these cases, the former represents an open-loop estimation filter while the latter represents an autoregressive filter.

³Smoother

Initial versions of HDF for turbo codes on a slow flat fading channel were proposed in [49], which suggested an MA filter with HDF (HDF-MA) that uses only hard decisions with reliability greater than a threshold. In [50], similar HDF-MA and HDF-WF algorithms were investigated on slow and fast flat fading channels, respectively. A bandwidth-efficient technique that replaces some parity symbols with pilot symbols was also suggested for slow and fast flat fading channels in [50]. In [46], Sandell et al. compared the performance of HDF-LS with that of SDF-LS for a GSM-like system on a frequency-selective fading channel. They showed that the performance of SDF-LS was slightly better than that of HDF-LS in the considered system. Recently, the SDF-KF for frequency-selective fading channels was demonstrated and its performance compared to that of HDF-LMS (Least Mean Squares) in [7]. The SDF-PLL (Phase-Locked Loop) technique was investigated for phase uncertainty, and the performance of SDF-PLL was compared to that of a joint data and parameter estimation algorithm in [6]. Thus, although decoupled-AID has been investigated in the literature, no direct comparison of the various methods applied to turbo codes on a flat fading channel exists. Moreover, it is interesting to see how the performance of HDF and SDF varies for different modulation methods.
In this work, decoupled-AID for turbo coding on a flat fading channel is presented. Based on a probabilistic parameter model, the Wiener filter and Kalman filter are considered in combination with HDF and SDF. The performance of several combinations of decision feedback and channel estimator (HDF-KF, SDF-KF, HDF-WF, etc.) is compared for two modulation methods (punctured QPSK and 8PSK). A new feedback method, referred to as Partial Soft Decision Feedback (P-SDF) or Normalized Partial Soft Decision Feedback (NP-SDF), is proposed, which improves the performance in the simulated system.

3.2.2 System Description

A standard turbo code on an interleaved flat fading channel is considered. After a block of source symbols b_i is encoded by the turbo encoder, it is mapped into a block of coded symbols according to the modulation method. The block of coded symbols x_i is then passed through a channel interleaver. After the channel interleaver, two training sequences (20 symbols each) are attached at the head and tail of the interleaved symbol block (2040 symbols), and pilot symbols are inserted within the interleaved symbol block. In Fig. 3.6 the format of a transmitted symbol block is depicted, which is transmitted through a flat fading channel. The signals are observed in additive white Gaussian noise (AWGN). A simple discrete-time model for the received signal is

z_k = x_k f_k + w_k \qquad (3.3)

where x_k is the transmitted symbol, f_k is the coefficient of the flat fading channel, and w_k is white circular complex Gaussian noise. The structure of the overall system is shown in Fig. 3.7.
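The observation model (3.3) is easy to simulate. The sketch below is illustrative only: it substitutes a first-order Gauss-Markov process for the Clarke-spectrum fading used in the actual simulations, and the parameter values (`alpha`, `noise_var`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_flat_fading(num_symbols=2080, alpha=0.999, noise_var=0.1):
    """Generate observations z_k = x_k * f_k + w_k as in (3.3).

    x_k : unit-energy QPSK symbols
    f_k : flat-fading coefficient; here a first-order Gauss-Markov
          process standing in for a Clarke-spectrum Rayleigh channel
    w_k : white circular complex Gaussian noise with variance noise_var
    """
    qpsk = np.exp(1j * np.pi * (2 * np.arange(4) + 1) / 4)  # unit modulus
    x = qpsk[rng.integers(0, 4, num_symbols)]

    # f_k = alpha * f_{k-1} + v_k, scaled for unit stationary power
    drive_std = np.sqrt((1 - alpha**2) / 2)
    f = np.empty(num_symbols, dtype=complex)
    f[0] = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    for k in range(1, num_symbols):
        v = drive_std * (rng.standard_normal() + 1j * rng.standard_normal())
        f[k] = alpha * f[k - 1] + v

    w = np.sqrt(noise_var / 2) * (rng.standard_normal(num_symbols)
                                  + 1j * rng.standard_normal(num_symbols))
    z = x * f + w
    return x, f, z

x, f, z = simulate_flat_fading()
```

A sequence generated this way can serve as the common test input for the channel estimators discussed below.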
Figure 3.6: The format of a transmitted symbol block

Figure 3.7: The block diagram of decoupled-AID for turbo codes on an interleaved flat fading channel

The receiver consists of a standard turbo decoder [12] and a decoupled channel estimator, which are connected by an interleaver and a deinterleaver to exchange hard or soft information iteratively. For the first iteration, the channel is estimated from the known training sequences and pilot symbols. After the first iteration, the hard or soft output of the turbo decoder is fed back to the channel estimator to refine the channel estimates. The refined channel estimates are passed back to the turbo decoder. In the following section the details of the various decoupled AID algorithms are described.

3.2.3 Decoupled Adaptive Iterative Detection

Decoupled AID algorithms can be classified by the type of information passed from the turbo decoder to the channel estimator.

3.2.3.1 Hard decision feedback (HDF)

The turbo decoder generates two kinds of soft output: one for the coded symbol x_i and one for the source symbol b_i. A hard decision \hat{x}_i on the coded symbol is obtained by selecting the most reliable symbol among the N symbols, where N is the alphabet size of the coded symbol. The selection is based on the intrinsic soft output SO_i(x_i), which is in the form of a negative-log APP (A-Posteriori Probability) [18]:

\hat{x}_i = \arg\min_{x_i} SO_i(x_i), \quad x_i \in \{0, 1, \ldots, N-1\}. \qquad (3.4)

This hard decision is made by the coded symbol estimator between the turbo decoder and the channel estimator in Fig. 3.7.
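The selection rule (3.4) amounts to an index argmin over the negative-log APP vector. A minimal sketch, with a hypothetical 8PSK soft-output vector that is not taken from the dissertation's simulations:

```python
import numpy as np

def hard_decision(neg_log_app):
    """Hard decision (3.4): pick the symbol index minimizing the
    negative-log APP soft output SO_i(x_i)."""
    return int(np.argmin(neg_log_app))

# Illustrative negative-log APP values for an 8PSK alphabet (N = 8):
so = np.array([3.2, 0.4, 2.9, 5.1, 4.0, 6.3, 1.8, 2.2])
x_hat = hard_decision(so)  # smallest entry wins, here index 1
```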
This hard decision \hat{x}_k is then fed back to the channel estimator through the channel interleaver and used as the conditional data value to obtain the refined channel estimate for the next iteration.

3.2.3.2 Re-encoded hard decision feedback (R-HDF)

After a hard decision \hat{b}_i on the source symbol based on the intrinsic soft output SO_i(b_i), the decision is re-encoded using the same scheme as the transmitter, producing the output

\hat{x} = f(\hat{b}) \qquad (3.5)

where f(\cdot) represents the turbo encoder and modulator. After the channel interleaver, it is passed to the channel estimator.

3.2.3.3 Soft decision feedback (SDF)

Based on the extrinsic soft output SO_e(x_i) of the turbo decoder, the average value of x_i is computed. For this average computation, the negative exponential of the soft output is used as the a-priori probability of the corresponding coded symbol x_i after normalization:

\bar{x}_i = \sum_{x_i=0}^{N-1} P(x_i)\, x_i \qquad (3.6)

P(x_i) = \frac{e^{-SO_e(x_i)}}{\sum_{x_i'=0}^{N-1} e^{-SO_e(x_i')}} \qquad (3.7)

where the denominator is a normalization factor. The average calculation is done by the coded symbol estimator between the turbo decoder and the channel estimator in Fig. 3.7. The computed average value \bar{x}_i is then passed to an (average) channel estimator which is modified to accept an average value instead of a hard decision.

3.2.3.4 (Normalized) partial soft decision feedback ((N)P-SDF)

In contrast to SDF, which uses all the soft output information to compute the average value, P-SDF takes the soft output of the n most reliable symbols among the N (\geq n) coded symbols, where N is the alphabet size of the transmitted signal set. After this selection, the sum of the a-priori probabilities of the chosen n symbols is normalized to 1. Hard decision feedback can be viewed as a special case (n = 1) of P-SDF.
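The SDF average of (3.6)-(3.7) is a probability-weighted mean over the symbol alphabet. A minimal sketch, with illustrative (not simulated) soft-output values; the min-shift before exponentiation is a standard numerical-stability step that does not change the normalized probabilities:

```python
import numpy as np

def soft_average(neg_log_so, alphabet):
    """SDF, (3.6)-(3.7): turn negative-log extrinsic soft outputs into
    normalized probabilities and return the mean symbol value."""
    p = np.exp(-(neg_log_so - neg_log_so.min()))  # shift for stability
    p /= p.sum()                                  # normalization (3.7)
    return np.sum(p * alphabet)                   # average (3.6)

qpsk = np.exp(1j * np.pi * (2 * np.arange(4) + 1) / 4)
so = np.array([0.2, 2.0, 3.5, 4.0])   # illustrative values only
x_bar = soft_average(so, qpsk)
```

Note that the average \bar{x}_i lies inside the signal constellation, so its amplitude is generally below 1; this is exactly what forces the modified Wiener filter design of Section 3.2.4.2.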
The proper value of n can be determined empirically to achieve the best performance for a given modulation method and Doppler frequency of the fading channel. Based on simulation results, n is set to 2 in this work. When a Wiener filter is used as the channel estimator, the amplitude |\bar{x}_i| of the P-SDF value is normalized to 1 to obtain a reduced-complexity algorithm, referred to as NP-SDF (normalized P-SDF). The details are described in the following section.

3.2.4 Channel Estimator

Two kinds of channel estimators are considered, based on probabilistic channel models.

3.2.4.1 Wiener filter

On a flat fading channel with M-PSK modulation, which has normalized power for all transmitted symbols (|x_i|^2 = 1), data sequence conditioning can be decoupled from the MMSE filter design [52, 16]. The conditional data sequence \hat{x} consists of the training sequence, pilots, and the estimates of the coded symbols. For the decoupled filter design, a new sequence y is obtained by

y_k = \frac{z_k}{\hat{x}_k} \qquad (3.8)

Based on this new observation sequence, the MMSE filter coefficients g are obtained from the Wiener-Hopf equation

g^{\dagger} = r_{fy} R_y^{-1} = r_f (R_f + N_0 I)^{-1} \qquad (3.9)

where r represents a cross-correlation vector and R represents a correlation matrix. The estimate of f_k is

\hat{f}_k = \sum_{i=-N}^{N} y_{k-i}\, g_i \qquad (3.10)

where 2N + 1 is the window size. In (3.9) the following correlation, based on Clarke's model [22], is used:

R_f(m) = J_0(2\pi \nu_d m) \qquad (3.11)

where J_0(\cdot) is the zeroth-order Bessel function of the first kind and \nu_d is the normalized Doppler frequency. For this stationary model the filter is time-invariant; hence the filter can be designed prior to actual filtering.
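The design of (3.9)-(3.11) can be sketched directly: build the Clarke correlation matrix over the window lags and solve the Wiener-Hopf system. This is a minimal illustration assuming unit-power PSK feedback (so the noise term is N_0 I); the series evaluation of J_0 is an implementation convenience adequate for the small arguments that occur here.

```python
import numpy as np

def bessel_j0(x):
    """Power series for J_0(x); accurate for the small arguments
    2*pi*nu_d*m arising from slow fading."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    term = np.ones_like(x)
    for m in range(25):
        if m > 0:
            term = term * (-(x / 2) ** 2) / m**2  # ratio of series terms
        out = out + term
    return out

def wiener_coeffs(nu_d, noise_var, half_window=15):
    """Solve the Wiener-Hopf equation (3.9) with the Clarke
    autocorrelation R_f(m) = J0(2*pi*nu_d*m) of (3.11). Returns g for
    the (2N+1)-tap smoothing window of (3.10)."""
    lags = np.arange(-half_window, half_window + 1)
    R_f = bessel_j0(2 * np.pi * nu_d * np.abs(lags[:, None] - lags[None, :]))
    r_f = bessel_j0(2 * np.pi * nu_d * np.abs(lags))
    # g = (R_f + N0 I)^{-1} r_f, assuming |x_k|^2 = 1 (unit-power PSK)
    return np.linalg.solve(R_f + noise_var * np.eye(len(lags)), r_f)

g = wiener_coeffs(nu_d=0.01, noise_var=0.1)
```

Because the model is stationary, this solve is done once; the resulting taps are symmetric about the center of the window.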
3.2.4.2 Average Wiener filter

In contrast to hard decision feedback, the amplitude of the average value of the coded symbol \bar{x}_i is not necessarily 1 (i.e., |\bar{x}_i|^2 \neq 1). Therefore, the data-decoupled WF design is not possible for the SDF method. Instead, the amplitudes of the estimated symbols must be computed and applied to the diagonal elements of the noise term in (3.9). The Wiener-Hopf equation is thus modified as

g^{\dagger} = r_{fy} R_y^{-1} = r_f (R_f + N_0 A)^{-1} \qquad (3.12)

where A is the diagonal matrix

A = \mathrm{diag}\left( \frac{1}{|\bar{x}_{k-N}|^2}, \ldots, \frac{1}{|\bar{x}_{k+N}|^2} \right). \qquad (3.13)

Because the filter coefficients g must be recalculated for every window, (3.12) requires heavy computation, especially when the window size is large. To avoid this heavy computation, NP-SDF, which has normalized amplitude, is proposed together with the WF as a reduced-complexity algorithm.

3.2.4.3 Kalman filter

As a Kalman filter, a fixed-interval Kalman smoother is considered, consisting of a forward filtering and a backward smoothing process [39]. Based on the first-order Gauss-Markov model f_k = \alpha f_{k-1} + v_k (where \alpha is the correlation of adjacent channel parameters), the overall process is represented by

\hat{f}_{k|N} = \hat{f}_{k|k} + A_k [\hat{f}_{k+1|N} - \hat{f}_{k+1|k}] \qquad (3.14)

where \hat{f}_{k|k} and \hat{f}_{k+1|k} are obtained by the forward filtering and prediction as

\hat{f}_{k|k} = \hat{f}_{k|k-1} + K_k (z_k - \hat{x}_k \hat{f}_{k|k-1}) \qquad (3.15)

\hat{f}_{k+1|k} = \alpha \hat{f}_{k|k} \qquad (3.16)

where K_k is the Kalman gain. In (3.14) the smoothing gain A_k is obtained in the backward pass as

A_k = \alpha P_{k|k} P_{k+1|k}^{-1} \qquad (3.17)

where P is the error covariance [39]. In (3.15) the symbol estimate \hat{x}_k can come from HDF or SDF; the coded symbol estimator between the turbo decoder and the channel estimator in Fig. 3.7 controls the type of decision feedback information. In [7, 6], the KF combined with the average processing of soft information (3.7) is referred to as an Average Kalman filter.
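The forward pass (3.15)-(3.16) followed by the backward pass (3.14), (3.17) can be sketched for the scalar fading state. This is a minimal illustration with assumed parameter names; the noiseless constant-channel usage at the end is only a sanity check, not one of the dissertation's scenarios.

```python
import numpy as np

def kalman_smooth(z, x_hat, alpha, q_var, noise_var):
    """Fixed-interval Kalman smoother for f_k = alpha*f_{k-1} + v_k.

    z      : observations z_k
    x_hat  : symbol decisions or averages (HDF or SDF feedback)
    q_var  : variance of the driving noise v_k
    """
    n = len(z)
    f_filt = np.zeros(n, dtype=complex)   # f_{k|k}
    f_pred = np.zeros(n, dtype=complex)   # f_{k|k-1}
    P_filt = np.zeros(n)
    P_pred = np.zeros(n)
    P_pred[0] = 1.0                       # prior variance
    for k in range(n):
        if k > 0:
            f_pred[k] = alpha * f_filt[k - 1]              # (3.16)
            P_pred[k] = alpha**2 * P_filt[k - 1] + q_var
        S = np.abs(x_hat[k])**2 * P_pred[k] + noise_var
        K = P_pred[k] * np.conj(x_hat[k]) / S              # Kalman gain
        f_filt[k] = f_pred[k] + K * (z[k] - x_hat[k] * f_pred[k])  # (3.15)
        P_filt[k] = (1 - K * x_hat[k]).real * P_pred[k]

    f_sm = f_filt.copy()                  # backward smoothing pass
    for k in range(n - 2, -1, -1):
        A = alpha * P_filt[k] / P_pred[k + 1]              # (3.17)
        f_sm[k] = f_filt[k] + A * (f_sm[k + 1] - f_pred[k + 1])  # (3.14)
    return f_sm

# Sanity check: noiseless observations of a constant channel f_k = 1
z = np.ones(100, dtype=complex)
x = np.ones(100, dtype=complex)
f_sm = kalman_smooth(z, x, alpha=1.0, q_var=0.0, noise_var=0.1)
```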
3.2.5 Numerical Results

The simulations were run based on the system described in Section 3.2.2 and the iterative channel estimation methods and estimators described in Sections 3.2.3 and 3.2.4. The turbo encoder has a 2038-bit interleaver with two recursive systematic convolutional codes. The turbo code rate is 1/3 and the output of the turbo encoder was modulated by punctured QPSK or 8PSK; hence the overall source/coded (modulated) symbol rate is 1 for both modulation schemes. The coded symbols were interleaved using a pseudo-random channel interleaver with 2040 symbols. With 20-symbol forward and backward training sequences at the head and tail of the 2040 data symbols, one pilot symbol was inserted every 30 data symbols. The transmitted data format is shown in Fig. 3.6. The pilot-inserted symbols were then sent over a flat fading channel with normalized Doppler spread \nu_d = 0.01. As channel estimators, the WF and AWF with a sliding window (2N + 1 = 31) and the KF with a fixed-interval (2040) smoothing process were considered. For the P-SDF, n was set to 2 for both punctured QPSK and 8PSK. In Table 3.1 the possible decision feedback and channel estimator combinations are shown with empty and filled circles; in this work the performance is shown only for the cases marked by empty circles.

In Figs. 3.8 and 3.9, the performance of decoupled AID at the 10th iteration is shown for punctured QPSK and 8PSK, respectively, with normalized Doppler spread \nu_d = 0.01. For comparison, the performance of hard or soft decision
feedback schemes and two channel estimators (WF, KF) is shown together with that of channel parameter estimation prior to iterative decoding and the known-channel case. The performance of the known-channel case was obtained without training sequences and pilots.

        HDF   R-HDF   SDF   P-SDF   NP-SDF
  WF     O      O      -      -       O
  AWF    -      -      *      *       -
  KF     O      O      O      O       *

Table 3.1: The decision feedback and channel filter combinations (O: performance shown in this work; *: possible combination not shown; -: not applicable)

For the WF, the performance of HDF-WF is almost the same as that of NP-SDF-WF (the reduced-complexity algorithm for SDF) for both modulation cases. Therefore the proposed NP-SDF-WF algorithm has no performance or complexity advantage over HDF-WF. The performance of R-HDF-WF is much worse than that of HDF-WF for both modulation cases. For the KF, the performance differences between the algorithms are more noticeable. The performance of HDF-KF was better (by up to 0.2 dB) than that of SDF-KF over the entire range of Eb/N0 for both modulation methods. The performance gain of HDF-KF over SDF-KF was larger for 8PSK modulation than for punctured QPSK modulation. The performance of R-HDF-KF was much worse than that of HDF-KF or SDF-KF for both modulation cases, as for the WF. The performance of P-SDF-KF shows about a 0.1 dB (0.15 dB) gain over that of HDF-KF and a 0.35 dB (0.2 dB) gain over that of SDF-KF at a BER of 10^-2 for punctured QPSK (8PSK) modulation. As a result, the HDF algorithm is a good choice in both complexity and performance for the KF. At a cost in complexity, a small additional performance gain can be obtained with the proposed P-SDF algorithm.

Figure 3.8: The performance of different decoupled AID methods with punctured QPSK and \nu_d = 0.01 at the 10th iteration

In Fig.
3.10 the performance of HDF-KF, SDF-KF, and P-SDF-KF is compared over several intervals of average fading amplitude at an Eb/N0 of 6 dB with 8PSK modulation. Under a known-channel assumption, the average value of the fading amplitudes over each block (2147 symbols) was computed, and the simulation results were binned by the interval to which this average belongs. Fig. 3.10 therefore shows how differently the decoupled AID algorithms perform in various fading situations. At low average amplitude (less than 0.7) the performance of HDF is better than that of SDF. At high average amplitude (greater than 0.7), however, the performance of SDF is better than that of HDF. Thus, HDF is more robust to deep fades, whereas SDF performs better in the other fading situations in the simulated system. The performance of P-SDF is better than that of HDF or SDF regardless of the average amplitude.

Figure 3.9: The performance of different decoupled AID methods with 8PSK and \nu_d = 0.01 at the 10th iteration

3.2.6 Conclusion

Several decoupled AID techniques were compared via computer simulation under the same conditions. The only method showing significantly worse performance was re-encoded HDF. In contrast to the results in the literature for frequency-selective fading, there is little gain to be had with SDF. HDF-WF and HDF-KF are reasonable choices with good performance and relatively low complexity.
Figure 3.10: The performance over different intervals of average fading amplitude at an Eb/N0 of 6 dB with 8PSK modulation

Chapter 4

A Simple Stopping Criterion for the MIN-SUM Iterative Decoding Algorithm

4.1 Introduction

Complexity reduction is one of the major goals when iterative decoding is implemented. To reduce the average number of iterations, several stopping criteria have been suggested in the literature. These stopping conditions are based on the fact that, in the relatively high SNR region, many iterations are executed with little performance improvement. To detect the proper instant of this convergence, a stopping rule based on the cross entropy (CE) was shown to be effective in [31]. However, the CE criterion is not practical due to its complexity. In [47], two simplified versions of the CE criterion were suggested for turbo decoding. The first checks the ratio of sign changes in the extrinsic information between consecutive iterations; this ratio is compared to an empirical threshold to determine the moment when iterative decoding is stopped. For example, the iteration stops when

T(l) < (0.005 \text{ to } 0.03)\, N \qquad (4.1)

where T(l) is the number of sign changes in the extrinsic information L_e(u_k) at the l-th iteration and N is the input block size. This algorithm was named the sign-change-ratio (SCR) criterion. The second compares hard decisions between consecutive iterations and is named the hard-decision-aided (HDA) criterion.
If the current hard decisions match the previous hard decisions exactly, the iterative decoding is stopped:

\hat{u}_k^{(l)} = \hat{u}_k^{(l-1)}, \quad \forall k,\ 1 \leq k \leq N \qquad (4.2)

The effectiveness of both algorithms was shown for the sum-product algorithm on a turbo decoder. The conclusion was that the HDA criterion had lower complexity (average number of iterations) in the low and mid SNR regions, while the SCR criterion showed lower complexity at high SNR.

In this work, a new simple stopping criterion is proposed with better complexity reduction (i.e., a smaller average number of iterations) and less required memory. Based on the fact that the min-sum algorithm performs sequence decoding rather than symbol decoding, the new stopping criterion checks whether the sequence decoded from the extrinsic information is valid on the encoder trellis structure. Depending on the encoded system, each state at time k can be mapped into only two possible states at time k + 1 in a binary-input system, and when a sequence completely follows this mapping structure we say it is valid (i.e., it is a valid transmitted codeword in C):

\hat{u}^{(l)} \in C \qquad (4.3)

Note that the intrinsic information of the min-sum algorithm always yields a valid decoded sequence. A simple look-up table describing one transition of the trellis can be used for this validity check. If the entire decoded sequence is a valid codeword, the iterative decoding is stopped. Unlike the CE and SCR criteria, there is no need to optimize a threshold empirically.

4.2 Simulation Results

A serially concatenated convolutional code (SCCC) is considered with input block size N = 1024. Each constituent code is a rate-1/2 systematic recursive convolutional code, and BPSK modulation is used. The stopping algorithms are applied based on the output of the outer Soft-In-Soft-Out (SISO) decoder at every iteration. In Fig.
4.2, the performance of the different stopping rules is shown together with that of fixed-iteration (10) decoding. The HDA and sequence-check (SEQ) criteria show the same performance as fixed-iteration decoding. In other words, when the stopping criterion decides to stop the iterative decoding, the actual number of bit errors at the corresponding iteration is zero. However, even when the actual number of bit errors is zero, the stopping criterion may decide to continue the iterative decoding; this increases the required number of iterations (i.e., the decoding complexity). For the SCR criterion, the performance and required number of iterations are design parameters; we chose the threshold T(l) = 0.03N, which gives the same performance as fixed-iteration decoding.

When a stopping criterion achieves the same performance as fixed-iteration decoding, its effectiveness can be measured by the average number of iterations. In Fig. 4.3 the average number of iterations is shown for the HDA, SCR, and SEQ algorithms. The proposed SEQ algorithm reduces the number of iterations compared to both the HDA and SCR algorithms in the intermediate and high SNR regions for both block sizes; that is, SEQ requires less complexity than HDA or SCR. In addition, since the HDA and SCR algorithms need to compare the hard decisions of the current iteration with those of the previous iteration, they must store the previous decision values, whereas SEQ uses only the current hard decisions and does not need to store any decision values. Moreover, while the entire block must always be checked before the next iteration in the HDA and SCR rules, the SEQ check runs only up to the point where an invalid transition is found. This invalid transition is usually found in the middle of a block, except at the final iteration, where SEQ stops the decoding.
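The one-step look-up-table check described above can be sketched as a trellis walk that exits at the first invalid transition. The 2-state trellis below is purely illustrative (it is not the dissertation's SCCC); the only structural assumption is that the output labels leaving each state are distinct, so a matching label determines the next state.

```python
def is_valid_sequence(decoded, trellis, start_state=0):
    """SEQ/VCW stopping check: walk the code trellis and verify that
    the decoded output sequence follows an existing path, as in (4.3).

    trellis[state] maps an output label to the next state; the walk
    stops at the first transition not present in the table.
    """
    state = start_state
    for sym in decoded:
        branches = trellis[state]
        if sym not in branches:
            return False          # invalid transition: keep iterating
        state = branches[sym]
    return True                   # valid codeword: stop decoding

# Hypothetical 2-state binary trellis; labels are (systematic, parity).
trellis = {
    0: {(0, 0): 0, (1, 1): 1},
    1: {(0, 1): 1, (1, 0): 0},
}

valid = is_valid_sequence([(0, 0), (1, 1), (0, 1)], trellis)
```

In a real decoder this check would replace the block-wide comparisons of HDA/SCR, which is the source of the memory and complexity savings discussed above.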
These advantages of the SEQ algorithm save both memory and computation.

Although the results are shown for the SCCC case, the extension to turbo codes is straightforward. However, because turbo codes have different soft-information evolution characteristics than SCCC, the effectiveness of the stopping rules may differ in turbo decoding.

Figure 4.1: Block diagram of the SCCC encoder and decoder with stopping rules

Figure 4.2: Performance of different stopping criteria and of fixed-number (10) iterative decoding for an input block length N = 1024

Figure 4.3: Average number of iterations for different stopping criteria for an input block length N = 1024

4.3 Conclusion

In this work, we proposed the VCW stopping rule (referred to as SEQ above) for min-sum iterative decoding algorithms. Its performance and computational complexity were compared with those of the simplified cross-entropy-based SCR criterion and the hard-decision HDA criterion. The proposed VCW algorithm requires less computational complexity and less memory, with no performance degradation compared to iterative decoding with a fixed number of iterations, and there is no need for empirical parameters.
In [44], the density evolution technique was used recursively to track the density of the extrinsic messages between the variable nodes and check nodes of an LDPC code under various channel conditions. A simplified version of the density evolution technique was introduced with a Gaussian approximation in [21]. The Gaussian approximation was based on the fact that the extrinsic information can be well approximated as a Gaussian random variable as the number of iterations increases. Another approach to LDPC density evolution was introduced in [4], which proposed a min-sum density evolution using an approximation at a check node. Because the LDPC decoding algorithm can be represented by a simple bipartite graph, most of the analytical work has focused on LDPC codes. Meanwhile, similar attempts have been made for Turbo codes based on simulation in order to obtain the threshold, which in turn determines the asymptotic capacity of Turbo codes when the interleaver size and the number of iterations go to infinity [48, 29]. The obtained thresholds agree well with the region where the bit error curves start to fall. Because these analyses depend on simulation, the thresholds have limited precision compared to those of LDPC codes. In [24], the density evolution technique was used to explain many mysteries of Turbo codes and SCCC, for example the role of systematic bits and the importance of recursive constituent codes. Input-output SNR curves were used to explain the evolution of the exchanged soft information. The density evolution techniques based on SNR curves were classified into two categories: the actual SNR curves were obtained during a standard iterative decoding procedure by collecting the extrinsic information, while the Gaussian SNR curves were obtained by generating Gaussian random variables fed into each constituent decoder.
An analytic technique for 2-state constituent codes was also introduced as an extension of the LDPC code analysis; however, no results based on the analytic method were presented. Regarding all these capacity-related works, we should note that when the decoding algorithm is not optimal, the capacity depends on the decoding algorithm to which the density evolution technique is applied. Therefore, it should be understood as a decoding capacity rather than an encoding (or code) capacity. In this work, analytic results for SCCC with 2-state constituent codes based on the sum-product and min-sum algorithms are presented. The computed thresholds are compared to other simulation-based evolution methods, including the actual SNR and Gaussian SNR evolution techniques. It is thus shown that several density evolution ideas independently developed in the literature yield consistent results. In particular, for the min-sum algorithm, density evolution is used to analyze the scaling effect on the SCCC. An SCCC with a differential encoder shows a significant performance gain when the iterative decoder scales the extrinsic information. This scaling gain is analyzed using the analytical and simulation-based density evolution methods. Similarly, we apply the same ideas to LDPC codes to show that the scaling gain is noticeable in LDPC codes as well. To do this, we derive the min-sum analytic expressions for LDPC codes based on a Gaussian approximation and evaluate the thresholds for different scaling values. To validate the gain (i.e., the difference of thresholds) of scaling, the actual SNR evolution and the Gaussian approximation SNR evolution, as well as simulation performance curves, are presented for both SCCC and LDPC codes.
5.2 Sum-product analytic density evolution of SCCC

A simple 2-state convolutional code is considered as the constituent code; it has generator matrix G(D) = [1, 1/(1+D)], i.e., a systematic code with accumulator parity. The block diagram of this constituent code is shown in Fig. 5.1. In this simple constituent code, the input u_k and the outputs v_k^1, v_k^2 satisfy the relationships:

v_k^1 = u_k
v_{k+1}^2 = v_k^2 + u_k

Figure 5.1: 2-state convolutional code (R = 1/2)

Using these relationships, a Tanner graph of this constituent code is shown in Fig. 5.2. Because the current state s_k is the same as the parity bit v_k^2, the check node in Fig. 5.2 corresponds to one step of the trellis decoding for this particular constituent code. In other words, the message passing between the variable and check nodes in this code is equivalent to the standard forward-backward decoding algorithm based on the trellis structure. This makes it possible to adopt the analytic methods which have been used for LDPC codes.

Figure 5.2: Tanner graph of a 2-state convolutional code (R = 1/2); legend: check node, channel info, parity bit, information bit

Generally, trellis decoding cannot be simply represented by check nodes for more complex constituent codes. With this 2-state constituent code, an SCCC1 can be built as in Fig. 5.3. For computational convenience, we consider the same inner and outer code.

Figure 5.3: Block diagram of SCCC1 with 2-state constituent codes (R = 1/4)

A serially concatenated bipartite graph of the considered SCCC1 can be represented as in Fig. 5.4. To emphasize the similarity to the analysis of LDPC codes, we rearranged the Tanner graph into a bipartite graph. The log-likelihood ratios (LLRs) are considered as the messages exchanged on the graph. The LLR algebra in [31] is an effective way to handle the LLRs and is used for the analysis of density evolution in this work.
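The two relations above describe a systematic code whose parity is a running XOR of the inputs (an accumulator), with the state equal to the parity output. A quick numerical check of this structure (a sketch; the bit ordering and the all-zero initial state are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.integers(0, 2, size=16)   # information bits u_k

# Encode: v1_k = u_k (systematic), parity output equals the state v2_k = s_k,
# and the 2-state recursion s_{k+1} = s_k XOR u_k.
s = 0
v1, v2 = [], []
for uk in u:
    v1.append(int(uk))
    v2.append(s)
    s ^= int(uk)

v1, v2 = np.array(v1), np.array(v2)

# Check the stated relations: v1_k = u_k and v2_{k+1} = v2_k XOR u_k.
assert np.array_equal(v1, u)
assert np.array_equal(v2[1:], v2[:-1] ^ u[:-1])
print("relations hold")
```

Because the state and the parity bit coincide, one trellis step is fully described by the single parity relation, which is exactly why a check node can represent it.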
Figure 5.4: Serially concatenated bipartite graph for SCCC1 (R = 1/4); legend: check node, channel info, parity bit, information bit

The messages from a check node to the information and parity variable nodes are denoted by w and u_1, u_2, respectively, and the messages from the information and parity variable nodes to a check node are denoted by z and v_1, v_2, respectively. Based on the sum-product message-passing algorithm, the messages for the inner code are

v_1 = λ_c + u_2        (5.1)
v_2 = λ_c + u_1        (5.2)
z = λ_c + λ_i        (5.3)
tanh(u_1/2) = tanh(v_2/2) tanh(z/2)        (5.4)
tanh(u_2/2) = tanh(v_1/2) tanh(z/2)        (5.5)
tanh(w/2) = tanh(v_1/2) tanh(v_2/2)        (5.6)

where λ_c represents the received message from the channel and λ_i represents the input extrinsic message from the outer constituent code. Because of the similarity to the LDPC code, the standard tanh() rule is used at the check node. Similarly, the messages for the outer code are

v_1* = λ_i* + u_2*        (5.7)
v_2* = λ_i* + u_1*        (5.8)
z* = λ_i*        (5.9)
tanh(u_1*/2) = tanh(v_2*/2) tanh(z*/2)        (5.10)
tanh(u_2*/2) = tanh(v_1*/2) tanh(z*/2)        (5.11)
tanh(w*/2) = tanh(v_1*/2) tanh(v_2*/2)        (5.12)

The messages exchanged between the inner and outer decoders are

λ_o = w + λ_c
λ_i* = λ_o (after deinterleaving)
λ_o* = w*
λ_{o,p}* = u_1* + u_2*

and λ_i is formed by serial-to-parallel multiplexing (and interleaving) the λ_o* and λ_{o,p}* streams, where λ_o, λ_o*, and λ_{o,p}* represent the extrinsic information exchanged between the inner and outer decoders. Because of the independent and identically distributed assumption [44, 20] for the messages, we omit any time index in the message representation.
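The tanh rule in (5.4)-(5.6) is the standard sum-product check-node combination of two LLRs. A small numerical check that 2·atanh(tanh(a/2)·tanh(b/2)) agrees with the exact log-domain form log((1+e^(a+b))/(e^a+e^b)) for the LLR of an XOR of two bits (a generic illustration, not code from the dissertation):

```python
import numpy as np

def chk_tanh(a, b):
    # Sum-product check-node update via the tanh rule, as in (5.4)-(5.6).
    return 2.0 * np.arctanh(np.tanh(a / 2.0) * np.tanh(b / 2.0))

def chk_exact(a, b):
    # Equivalent exact expression for the LLR of the XOR of two bits.
    return np.log1p(np.exp(a + b)) - np.logaddexp(a, b)

rng = np.random.default_rng(1)
a = rng.normal(0, 2, size=1000)
b = rng.normal(0, 2, size=1000)
assert np.allclose(chk_tanh(a, b), chk_exact(a, b), atol=1e-9)
print("tanh rule matches the exact XOR-LLR combination")
```

The same identity also previews the min-sum approximation used later in the chapter: the magnitude of the output is never larger than the smaller input magnitude.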
To handle the continuous density function, one approach is to quantize the density function and apply the density evolution equations approximately [44]. Another approach is to quantize the messages and track their probability mass functions [20]. To avoid these complex computations, a Gaussian approximation approach was introduced in [21], which reduces an infinite-dimensional problem to a one-dimensional problem. In this Gaussian approximation, it is only necessary to compute, recursively, the mean and variance of the exchanged extrinsic messages λ_o, λ_o*, and λ_{o,p}* between the two constituent codes. In addition, based on the symmetry condition f(x) = f(−x)eˣ developed in [43] for the density f(x) of an LLR message, the variance can be represented as two times the mean under the Gaussian approximation. As a result, we need only track the mean of a Gaussian density. By taking expectations of the messages from the variable nodes for both the inner and outer codes, (5.1), (5.2), (5.3), (5.7), (5.8), (5.9), we get

v̄^(l) = λ̄_c + ū^(l−1)
z̄^(l) = λ̄_c + λ̄_i^(l−1)
v̄*^(l) = λ̄_i*^(l) + ū*^(l−1)
z̄*^(l) = λ̄_i*^(l)

where x̄^(l) denotes the mean of the random variable x at the l-th iteration. Due to the independent and identically distributed assumption, v̄_1^(l) = v̄_2^(l) = v̄^(l) and ū_1^(l) = ū_2^(l) = ū^(l). Similarly, we take expectations of the messages from the check nodes. For example, by taking the expectation of each side of (5.10), we get the updated mean from

E[tanh(u_1*/2)] = E[tanh(v_2*/2)] E[tanh(z*/2)]        (5.13)

Now, we define ψ(ȳ) for a Gaussian y with mean ȳ and variance 2ȳ as

ψ(ȳ) = ∫ tanh(y/2) (1/√(4πȳ)) e^{−(y−ȳ)²/(4ȳ)} dy        (5.14)
     = E[tanh(y/2)]        (5.15)

Using this definition and taking expectations of each side of (5.4), (5.5), (5.6), (5.10), (5.11), (5.12), the updated means of the messages from a check node to the variable nodes satisfy, e.g.,

ψ(w̄^(l)) = ψ²(v̄^(l))
Finally, the means of the messages exchanged between the two constituent codes are

λ̄_o^(l) = ψ⁻¹(ψ²(v̄^(l))) + λ̄_c
λ̄_i*^(l) = λ̄_o^(l)
λ̄_o*^(l) = ψ⁻¹(ψ²(v̄*^(l)))
λ̄_{o,p}*^(l) = 2 ψ⁻¹(ψ(v̄*^(l)) ψ(z̄*^(l)))
λ̄_i^(l+1) = (λ̄_o*^(l) + λ̄_{o,p}*^(l)) / 2

By initially setting ū^(0), λ̄_i^(0), and ū*^(0) to zero, we can recursively update the mean of each message until it converges to a finite value or goes to infinity. When it goes to infinity, the density tends to a point mass at infinity or, equivalently, the probability of error tends to zero. The threshold is calculated as the maximum noise level (i.e., minimum SNR) of the channel such that the probability of error tends to zero [44, 21]. To calculate ψ(·) and ψ⁻¹(·) efficiently, we use the following approximation, which was used for the LDPC code in [21]:

ψ(x) ≈ 1 − √(π/x) e^{−x/4} (1 − 20/(7x)),   x > 10
ψ(x) ≈ 1 − e^{αx^γ + β},   x ≤ 10        (5.16)

where α = −0.4527, β = 0.0218, and γ = 0.86.

Figure 5.5: Comparison between simulation results (8K and 32K) and the convergence threshold (0.56 dB) for SCCC1 with 2-state constituent codes

The threshold calculated by the above method is 0.56 dB, and it is shown to agree with the simulation results of Fig. 5.5. Note that the message-passing schedule of the above method is fully parallel; in other words, every node is activated simultaneously in a decoding step. To see how the threshold agrees with the simulation results, the simulation is executed with two different decoding algorithms: one is a BCJR-type fixed-interval forward-backward decoding algorithm, and the other is a flooding algorithm which activates every node in the decoder simultaneously. The two algorithms are simulated for up to 20 iterations and 50 iterations, respectively, with a long interleaver (16384 bits).
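The small-argument branch of the approximation (5.16) can be checked directly against the defining integral (5.14)-(5.15). A minimal numerical sketch (the quadrature grid is an arbitrary choice, and only the x < 10 branch is exercised):

```python
import numpy as np

def psi_exact(m, n=20001):
    """E[tanh(y/2)] for y ~ N(m, 2m), by direct numerical integration."""
    s = np.sqrt(2.0 * m)                  # std dev under the symmetry condition
    y = np.linspace(m - 10 * s, m + 10 * s, n)
    w = np.exp(-(y - m) ** 2 / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
    return float(np.sum(np.tanh(y / 2.0) * w) * (y[1] - y[0]))

def psi_approx(m, alpha=-0.4527, beta=0.0218, gamma=0.86):
    """Approximation psi(x) ~ 1 - exp(alpha*x^gamma + beta) for x < 10."""
    return 1.0 - np.exp(alpha * m ** gamma + beta)

for m in (0.5, 1.0, 2.0, 4.0, 9.0):
    assert abs(psi_exact(m) - psi_approx(m)) < 0.01
print("approximation agrees with the integral to within 0.01 on (0.5, 9)")
```

Because ψ is monotone, ψ⁻¹ can be evaluated by bisection on either expression; the recursion above only ever composes ψ and ψ⁻¹, so this one-dimensional function is the entire computational kernel of the sum-product analysis.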
Note that, after enough iterations, the performance of the flooding algorithm is almost the same as that of the fixed-interval forward-backward algorithm, and the computed threshold matches the waterfall region of both simulation curves well. It turns out that the threshold computed based on a flooding activation schedule is the same as that based on a forward-backward activation schedule.

5.3 Min-sum analytic density evolution of SCCC

In this section, the density evolution of SCCC based on the min-sum algorithm is presented. This work builds on the recent paper [4], in which a density evolution under the min-sum algorithm was derived for LDPC codes without a Gaussian assumption on the messages. We modify this work to analyze SCCC and adopt the Gaussian approximation on the messages, which results in a simpler min-sum density evolution for SCCC. For the min-sum density evolution, the expression (5.4) for the output message from a check node under the sum-product algorithm is changed to:

u_1 = sign(z v_2) min[|z|, |v_2|].        (5.17)

This min-sum check-node expression has been shown in the literature [31, 40, 4]. A proof of (5.17) is given in the Appendix using the generalized distributive law [3]. Using this expression, the output message from a check node is the sign of the product of the incoming messages times the minimum absolute value among the incoming messages, where the message from the node receiving the output is excluded. The pdf of the output message from a check node under (5.17) was derived in [4] as:

f_u(x) = f_v(x)(1 − F_z(x)) + f_z(x)(1 − F_v(x)) + f_v(−x)F_z(−x) + f_z(−x)F_v(−x),   x > 0        (5.18)
f_u(x) = f_v(x)(1 − F_z(−x)) + f_z(x)(1 − F_v(−x)) + f_v(−x)F_z(x) + f_z(−x)F_v(x),   x < 0        (5.19)

where f_v(x) and F_v(x) are the probability density function (pdf) and the cumulative distribution function (cdf) of v, respectively.
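The densities (5.18)-(5.19) can be sanity-checked by Monte Carlo: draw independent Gaussian v and z, form u = sign(vz)·min(|v|, |z|), and compare the empirical statistics with the analytic pdf. A sketch (the Gaussian parameters and grid are arbitrary test values):

```python
import numpy as np
from math import erf, sqrt

nerf = np.vectorize(erf)

def gauss_pdf(x, m, var):
    return np.exp(-(x - m) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def gauss_cdf(x, m, var):
    return 0.5 * (1.0 + nerf((x - m) / sqrt(2 * var)))

def f_u(x, mv, vv, mz, vz):
    """Eqs. (5.18)-(5.19): pdf of u = sign(v z) * min(|v|, |z|) for
    independent Gaussian v ~ N(mv, vv) and z ~ N(mz, vz)."""
    fv = lambda t: gauss_pdf(t, mv, vv); Fv = lambda t: gauss_cdf(t, mv, vv)
    fz = lambda t: gauss_pdf(t, mz, vz); Fz = lambda t: gauss_cdf(t, mz, vz)
    pos = fv(x)*(1 - Fz(x)) + fz(x)*(1 - Fv(x)) + fv(-x)*Fz(-x) + fz(-x)*Fv(-x)
    neg = fv(x)*(1 - Fz(-x)) + fz(x)*(1 - Fv(-x)) + fv(-x)*Fz(x) + fz(-x)*Fv(x)
    return np.where(x > 0, pos, neg)

mv, vv, mz, vz = 1.0, 2.0, 1.5, 3.0
x = np.linspace(-25, 25, 40001)
dx = x[1] - x[0]
den = f_u(x, mv, vv, mz, vz)

rng = np.random.default_rng(2)
v = rng.normal(mv, np.sqrt(vv), 1_000_000)
z = rng.normal(mz, np.sqrt(vz), 1_000_000)
u = np.sign(v * z) * np.minimum(np.abs(v), np.abs(z))

assert abs(np.sum(den) * dx - 1.0) < 1e-3            # density is normalized
assert abs(np.sum(x * den) * dx - u.mean()) < 0.01   # mean agrees with MC
print("analytic pdf (5.18)-(5.19) matches Monte Carlo")
```

The four terms in each branch correspond to the four ways the minimum can be attained (by v or by z, on the positive or negative side) consistently with the required output sign.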
Although the pdf f_u(x) is not exactly Gaussian, we model the message from the check nodes as a Gaussian variable based on empirical evidence, similarly to [21]. In addition, since the symmetry property does not hold for the pdf f_u(x), it is necessary to track both the mean and the variance of the Gaussian density. Following the notation of the previous section, the min-sum message-passing algorithm for the inner code is represented by:

v_1 = λ_c + u_2        (5.20)
v_2 = λ_c + u_1        (5.21)
z = λ_c + λ_i        (5.22)
u_1 = sign(v_2 z) min[|v_2|, |z|]        (5.23)
u_2 = sign(v_1 z) min[|v_1|, |z|]        (5.24)
w = sign(v_1 v_2) min[|v_1|, |v_2|]        (5.25)

The min-sum message-passing algorithm for the outer code is expressed similarly. For the received message λ_c from the channel, the symmetry condition holds because a symmetric channel is considered. Therefore, the variance σ²_{λ_c} is 2λ̄_c, and the following initialization is used for the first iteration:

ū^(0) = 0,   σ²_u^(0) = 0,   σ²_{λ_c} = 2λ̄_c.

For the l-th iteration, the means and variances of the messages from the variable nodes to the check node are:

v̄^(l) = λ̄_c + ū^(l−1)        (5.26)
σ²_v^(l) = σ²_{λ_c} + σ²_u^(l−1)        (5.27)
z̄^(l) = λ̄_c + λ̄_i^(l−1)        (5.28)
σ²_z^(l) = σ²_{λ_c} + σ²_{λ_i}^(l−1)        (5.29)

Based on (5.18), (5.19), (5.26), (5.27), the pdf f_u(x) of the message u is computed. After that, the mean ū^(l) and variance σ²_u^(l) are computed numerically from f_u(x). Similarly, the mean w̄^(l) and variance σ²_w^(l) are computed as well. This recursive calculation is executed for a sufficient number of iterations (e.g., 1000) to see whether the messages converge at a given channel noise level. Table 5.1 shows the obtained thresholds as the standard deviation σ of the noise level and the corresponding Eb/No in dB for the sum-product and min-sum algorithms. Note that the sum-product algorithm has a lower threshold than the min-sum algorithm, which matches the simulation results in Fig.
5.5.

Table 5.1: Thresholds from the analytic computation for the sum-product and min-sum algorithms

              Eb/No (dB)    σ
sum-product   0.56          1.3259
min-sum       0.75          1.2972

5.4 SNR evolution of SCCC

As another method to present the evolution of soft information in iterative decoding, SNR evolution has been widely used in the literature. In particular, when it is difficult to apply the previous analytic methods, as in the case of Turbo codes and SCCC with complex constituent codes, SNR evolution is the only way to show the evolution of soft information and explain the characteristics of the iterative decoder [48, 29, 24]. The SNR evolution in the literature can be divided into two categories: actual SNR evolution and Gaussian SNR evolution. The actual SNR evolution is calculated from the LLRs collected at each iterative decoding step. From the collected LLRs, the mean μ of the soft information exchanged between the two constituent codes is computed. The SNR of a Gaussian random variable with mean μ is approximated as μ/2 if we assume the symmetry condition f(x) = f(−x)eˣ for the exchanged message [43]. While the actual SNR evolution follows the simulation steps by collecting the LLRs, the Gaussian SNR evolution feeds soft information (LLRs) generated by a Gaussian random number generator into each constituent decoder. This idea is based on the well-known Gaussian nature of the extrinsic soft information. As we can control the mean value of the generated LLRs in the Gaussian SNR evolution, its density evolution curves and threshold can be calculated more precisely than those of the actual SNR evolution.
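Under the symmetry condition a consistent Gaussian LLR has variance σ² = 2μ, so its SNR is μ²/σ² = μ/2. A quick check of how the Gaussian SNR evolution can generate and measure such messages (sample size and test means are illustrative values only):

```python
import numpy as np

def make_symmetric_llrs(mu, n, rng):
    # Consistent Gaussian LLRs: mean mu, variance 2*mu (symmetry condition).
    return rng.normal(mu, np.sqrt(2 * mu), n)

def measured_snr(llrs):
    # SNR estimate mean^2 / variance; should approach mu / 2.
    return llrs.mean() ** 2 / llrs.var()

rng = np.random.default_rng(3)
for mu in (1.0, 4.0, 9.0):
    snr = measured_snr(make_symmetric_llrs(mu, 500_000, rng))
    assert abs(snr - mu / 2) < 0.05 * (mu / 2) + 0.01
print("measured SNR tracks mu/2")
```

In the actual SNR evolution the same `measured_snr` estimator would be applied to LLRs collected from a real decoder run instead of generated ones.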
Figure 5.6: Actual density evolution and Gaussian approximation for SCCC1 at Eb/No = 0.56 dB

In Fig. 5.6, the actual SNR evolution curves are shown together with the Gaussian SNR evolution curves at Eb/No = 0.56 dB, which is the analytically derived threshold. The two evolution curves just barely touch each other for both the actual and Gaussian SNR evolution techniques. It is thus shown that the analytic density evolution matches the simulation-based SNR evolution well, although the two were developed independently in the literature. Fig. 5.7 shows that the upper and lower evolution curves separate and make a wider iteration tunnel, allowing the message SNR to converge toward infinity (i.e., correct symbol decisions) at lower noise levels (higher Eb/No).

Figure 5.7: Actual density evolution and Gaussian approximation for SCCC1 at Eb/No = 0.8 dB

5.5 Scaling soft information of SCCC

It is believed that degradation of soft information may slow down the convergence of iterative decoding and result in better performance. As a simple and effective method to degrade soft information, scaling or filtering of the extrinsic information was introduced in the iterative decoding literature [15]. Another approach is to view the scaling as a method to reduce the approximation error when the sum-product algorithm is replaced with the min-sum algorithm [23]. For both approaches, however, it is not clear when scaling and filtering result in a significant performance gain, and determining the optimal scaling (filtering) coefficient(s) is quite heuristic.
Recently, it was shown that filtering and scaling greatly enhance performance when a relatively strong convolutional code is serially concatenated with a differential encoder under the min-sum decoding algorithm [26]. In this section, the density evolution and SNR evolution ideas are used to explain the gain obtained by scaling the soft information in an SCCC system. This approach shows that scaling allows a higher noise level (i.e., a lower threshold) for convergence of the soft information. As in the previous sections, we assume the pdfs of all exchanged soft information are Gaussian and independent of each other. We consider an SCCC consisting of a simple rate-1/2 convolutional code (CC) and a differential encoder, as shown in Fig. 5.8.

Figure 5.8: Block diagram of SCCC2 with scaling

As in Section II, we can draw a serially concatenated bipartite graph with scaling coefficient α as in Fig. 5.9. The scaling coefficient α is applied only to the soft information from the outer to the inner decoder. This is based on the experimental rule that the scaling (filter) should be applied to the soft information transferred from the stronger code to the weaker code. The messages flowing on this bipartite graph are similar to those in Section III, except that the information variable nodes of the inner bipartite graph do not receive messages from the channel. Hence, (5.22), (5.28), and (5.29) are changed to:

z = λ_i
z̄^(l) = λ̄_i^(l−1)
σ²_z^(l) = σ²_{λ_i}^(l−1)

Figure 5.9: Serially concatenated bipartite graph with scaling; legend: check node, channel info, parity bit, information bit

Because of the scaling coefficient α, the input extrinsic information λ_i from the outer decoder to the inner decoder, its mean λ̄_i, and its variance σ²_{λ_i} are changed to αλ_i, αλ̄_i, and α²σ²_{λ_i}, respectively. Table 5.2 and Fig.
5.10 show the threshold values for various scaling coefficients α. Standard decoding without scaling corresponds to α = 1. Note that with the optimal scaling coefficient α = 0.7, the threshold is noticeably (0.25 dB) lower than that without scaling. This predicts that the performance with optimal scaling is better (by up to 0.25 dB) than that without scaling in this particular system. The performance curves are shown in Fig. 5.11, and the computed thresholds are plotted on it to show how the analytic thresholds match the simulation.

Table 5.2: Thresholds for SCCC2 from the analytic computation with/without scaling

α      σ_min-sum    Eb/No (dB)
1      1.1844       1.54
0.9    1.2051       1.39
0.8    1.2176       1.30
0.7    1.2190       1.29
0.6    1.2037       1.40
0.5    0.8720       4.20

Figure 5.10: Thresholds for the different scaling factors for SCCC2

Figure 5.11: Comparison between simulation results and convergence thresholds (1.54 dB for α = 1, 1.29 dB for α = 0.7) for SCCC2 with and without scaling

We extend the previous ideas to a system having a stronger 4-state outer convolutional code with a differential encoder. Because it is hard to obtain analytic expressions for this 4-state convolutional code, the SNR evolution method is used to explain the scaling gain. Fig. 5.13 shows the block diagram of the system under consideration, and Table 5.3 gives the thresholds based on the SNR curves.

Table 5.3: Thresholds for SCCC3 from the simulation-based computation with/without scaling

α      Eb/No (dB)
1      1.5
0.7    1.05

The actual SNR evolution curves are shown in Fig. 5.14 at Eb/No = 1.5 dB. It is clear that the iteration tunnel with scaling is wider than that without scaling.
In other words, the threshold with scaling is lower than that without scaling, which agrees with the results for the previous SCCC2 system.

Figure 5.12: Comparison between the Minimal Tunnel Width (MTW) and BER for SCCC2 with different filtering combinations

Figure 5.13: Block diagram of SCCC3 with scaling

The simulation results are shown in Fig. 5.15. With a stronger outer code (4-state CC), the performance gain through scaling is more significant than when a weaker outer code (2-state CC) is used (see Fig. 5.11). The gain at the computed thresholds (0.45 dB) matches the simulation gain (0.4 dB) well. As the interleaver size and the number of iterations go to infinity, the simulation gain will approach the computed threshold gain.

Figure 5.14: Actual SNR evolution with and without scaling for SCCC3 at Eb/No = 1.5 dB

5.6 Min-sum density evolution of LDPC codes

The density evolution of LDPC codes under the sum-product algorithm was shown in [44], and its simplification using a Gaussian assumption was presented in [21]. A similar analysis based on the min-sum algorithm was recently introduced, without the Gaussian assumption, in [4]. In this section, we derive a simplified min-sum density evolution technique for LDPC codes using a Gaussian assumption, based on these previous works. Most of the ideas in this section have already been introduced in Section III for the SCCC.
Figure 5.15: Comparison between simulation results and convergence thresholds for SCCC3 with and without scaling

For regular (d_v, d_c) LDPC codes, the min-sum message-passing algorithm is represented by

v = λ_c + Σ_{i=1}^{d_v − 1} u_i
u = sign(v_1 ⋯ v_{d_c − 1}) min[|v_1|, ⋯, |v_{d_c − 1}|]

For the received message λ_c from the channel, the symmetry condition holds; therefore, the variance σ²_{λ_c} is 2λ̄_c. For the first iteration, we set:

v̄^(0) = λ̄_c,   σ²_v^(0) = σ²_{λ_c} = 2λ̄_c.

For the l-th iteration, the mean and variance of the message from the variable nodes to the check node are

v̄^(l) = λ̄_c + (d_v − 1) ū^(l−1)        (5.30)
σ²_v^(l) = σ²_{λ_c} + (d_v − 1) σ²_u^(l−1)        (5.31)

Similar to the SCCC case, based on (5.18), (5.19), (5.30), (5.31), the pdf f_u(x) of the message u is computed. After that, we can numerically compute the mean ū^(l) and variance σ²_u^(l) from f_u(x). This recursive calculation is executed for a sufficiently large number of iterations (e.g., 1000) to determine whether the messages converge at a given channel noise level. Table 5.4 shows the computed thresholds σ_min-sum based on the described min-sum algorithm for various pairs (d_v, d_c). For comparison with the performance curves, the corresponding values in dB are also shown inside parentheses. Similarly, the thresholds σ_sum-product of the sum-product algorithm are shown in the same table.

5.7 Scaling soft information of LDPC codes

We applied the scaling technique to LDPC codes, and LDPC codes also show a large scaling gain in both the analytic threshold and the simulation performance. Consider a regular (3,6) LDPC code and the min-sum density evolution technique described in the previous section.
Table 5.4: Thresholds of the min-sum and sum-product algorithms for various pairs (d_v, d_c)

d_v   d_c   rate   σ_sum-product (Eb/No)   σ_min-sum (Eb/No)
3     6     0.5    0.8740 (1.17)           0.8100 (1.83)
4     8     0.5    0.8318 (1.60)           0.7405 (2.61)
5     10    0.5    0.7907 (2.04)           0.6942 (3.17)
3     5     0.4    0.9999 (0.97)           0.9036 (1.85)
4     6     1/3    1.0036 (1.73)           0.8631 (3.04)
3     4     0.25   1.2517 (1.06)           1.0827 (2.32)
4     10    0.6    0.7437 (1.78)           0.6767 (2.60)
3     9     2/3    0.7047 (1.79)           0.6800 (2.10)
3     12    0.75   0.6294 (2.26)           0.6165 (2.44)

Due to the scaling of the extrinsic information from the variable nodes to the check nodes, equations (5.30), (5.31) are changed to:

v̄^(l) = α(λ̄_c + (d_v − 1) ū^(l−1))        (5.32)
σ²_v^(l) = α²(σ²_{λ_c} + (d_v − 1) σ²_u^(l−1))        (5.33)

Table 5.5 and Fig. 5.16 show the thresholds corresponding to different scaling coefficients α. Standard decoding without scaling corresponds to α = 1. With the optimal α = 0.8, the threshold is 0.65 dB lower than that of standard decoding.

Table 5.5: Thresholds from the analytic computation with/without scaling for the (3,6) LDPC code

α      σ_min-sum    Eb/No (dB)
1      0.8100       1.83
0.9    0.8561       1.35
0.8    0.8730       1.18
0.7    0.8660       1.25
0.6    0.8356       1.56

Figure 5.16: Thresholds for the different scaling factors α for the (3,6) LDPC code

Fig. 5.17 shows how the mean and variance of the extrinsic information from variable nodes to check nodes are updated under the min-sum algorithm at Eb/No = 1.5 dB. For the cases in which the threshold is higher than 1.5 dB (α = 1, α = 0.6), the mean and variance do not increase beyond a certain value. However, for the case in which the threshold is lower than 1.5 dB (α = 0.8), the mean increases exponentially while the variance increases linearly. Therefore, the message pdf ends up as a point mass at infinity, which results in zero error probability.
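The min-sum recursion above can also be mimicked with a sample-based (Monte Carlo) density evolution, which avoids the numerical pdf bookkeeping. This is a sketch under the same i.i.d. message assumptions; the channel LLR statistics assume BPSK over AWGN (mean 2/σ², variance 4/σ²), and the sample size, iteration count, test noise levels, and pass/fail margins are choices made here, not values from the dissertation (sample-based densities are not forced to be Gaussian, so thresholds can differ slightly from the Gaussian-approximation values in Table 5.5):

```python
import numpy as np

def mc_min_sum_de(sigma, dv=3, dc=6, alpha=1.0, iters=40, n=200_000, seed=0):
    """Monte Carlo density evolution for a regular (dv, dc) LDPC code under
    min-sum decoding on a BPSK/AWGN channel with noise std sigma. Returns
    the mean of the variable-to-check message after `iters` rounds."""
    rng = np.random.default_rng(seed)
    # Channel LLR for BPSK over AWGN: mean 2/sigma^2, variance 4/sigma^2.
    lam_c = rng.normal(2 / sigma**2, 2 / sigma, n)
    u = np.zeros(n)                       # check-to-variable message samples
    v = lam_c.copy()
    for _ in range(iters):
        # Variable update with scaling: v = alpha*(lam_c + sum of dv-1 u's);
        # permutation emulates independent draws from the u-density.
        v = alpha * (lam_c + sum(rng.permutation(u) for _ in range(dv - 1)))
        # Check update: signed minimum over dc-1 independent v's.
        vs = np.stack([rng.permutation(v) for _ in range(dc - 1)])
        u = np.prod(np.sign(vs), axis=0) * np.min(np.abs(vs), axis=0)
    return float(v.mean())

good = mc_min_sum_de(sigma=0.70)   # well below the min-sum threshold 0.81
bad = mc_min_sum_de(sigma=0.95)    # above the threshold: decoding stalls
print(good > 100)   # mean diverges -> vanishing error probability
print(bad < 15)     # mean stays bounded -> decoding fails
```

Sweeping `alpha` in `mc_min_sum_de` at a fixed `sigma` is the sample-based analogue of the threshold sweep reported in Table 5.5.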
As in the SCCC case, SNR evolution is also applicable to LDPC codes. We regard the group of variable nodes and the group of check nodes as the outer and inner decoders, respectively, and computed the actual SNR values at Eb/No = 1.83 dB, which is the threshold without scaling. Fig. 5.18 shows the actual SNR evolution curves for the (3,6) LDPC code with block size 10000. We can see that the iteration tunnel with optimal scaling (α = 0.8) is wider than that without scaling.

Figure 5.17: Evolution of the mean and variance for the different scaling factors α for the (3,6) LDPC code at Eb/No = 1.5 dB

The performance curves are shown in Fig. 5.19 for a block size of 10000. The computed thresholds are plotted on them to show how the analytic thresholds match the simulation results.

5.8 Conclusion

In this paper, several density evolution techniques were applied to Serially Concatenated Convolutional Codes (SCCC) and LDPC codes, and it was shown that those ideas yield consistent results (thresholds). We computed the threshold analytically for a simple 2-state SCCC for both the sum-product and min-sum algorithms. The simulation-based SNR evolution was also applied to the same system. It was shown that the thresholds computed from the different density evolution techniques matched each other well.

Figure 5.18: Actual SNR evolution with/without scaling for the (3,6,10000) LDPC code
Similar approaches were applied to LDPC codes with the min-sum algorithm, and it was shown that the threshold from density evolution matches the simulation performance well. We also showed that both SCCC and LDPC codes enjoy a large scaling gain with an optimal scaling coefficient, and we showed how density evolution techniques can be used to determine the optimal scaling coefficient. The scaling gain was analyzed with both density evolution and simulation performance. The scaling gain expected from density evolution matched the scaling gain achievable in simulation well.

Figure 5.19: Simulation results for the regular (3,6) LDPC code for two different block sizes, 1000 and 10000, with and without scaling

Chapter 6

Constrained Iterative Decoding: performance and convergence analysis

6.1 Introduction

Since turbo codes were introduced, Iterative Decoding (ID) has been researched to develop effective algorithms that yield either better performance or lower complexity. These ID algorithms are understood as Belief Propagation (BP) algorithms, which are known to be optimal on a graph without cycles [34, 38, 3]. However, the ID algorithms are suboptimal, because they are developed ignoring the cycles of the codes (e.g., turbo codes, LDPC codes, SCCC, etc.). There have been several attempts to find an optimal decoding algorithm for a code with cycles. As a special case, the optimal decoding of a tail-biting code, which has a single cycle, was investigated in [53, 2].
When all cycles are removed from a code, the BP algorithm is optimal on the modified system. As a method of removing the cycles, we propose to put constraints on nodes (i.e., system variables) that lie on cycles. In other words, each constraint on those nodes is represented by a set of values, and the BP algorithm is applied repeatedly, once for each constraint. We call this modified ID algorithm Constrained Iterative Decoding (CID). The tail-biting code is a good example of a single-cycle code whose cycle is easily cut by putting a constraint on one node (the initial/final state variable); therefore, CID is optimal for the tail-biting code. As an example with multiple cycles that are also easily cut, we present a 4-cycle tail-biting code. This example shows that optimal decoding is still possible when a code has more than one cycle. The CID algorithm can also be applied to decode more complex codes with many cycles, for example turbo codes, LDPC codes, and SCCCs. Generally, optimal CID algorithms are not feasible for these codes because too many constraints are required to cut all the cycles. Although CID is not optimal for these codes, it outperforms the standard ID algorithm because it negates the effects of some cycles in the graphical representation of the code.

Along with the development of ID algorithms, analysis has been successful in unveiling the characteristics of ID. In particular, recently developed density evolution techniques have established the asymptotic capacity of turbo codes as the interleaver size and the number of iterations go to infinity [48, 29]. This asymptotic capacity was obtained empirically from the input-output SNR evolution curves of each iterative decoder. It was shown that the threshold (capacity) matches the region where the bit error curves start to fall [29].
Density evolution was used in [24] to explain many mysteries of turbo codes and SCCCs, for example the importance of systematic bits and of recursive constituent codes. The SNR evolution method is based on the fact that the exchanged soft information is well modeled by a Gaussian random variable.

In this chapter, we present CID algorithms with multiple constraint marginalization options. A tail-biting code and a 4-cycle tail-biting code are given as examples where CID is optimal. The performance of CID is compared to that of standard tail-biting ID [9]. For more complex codes, such as SCCCs and LDPC codes, it is shown that the suboptimal CID yields better performance at the cost of larger complexity. Because of the parallel structure of CID, it could potentially achieve the standard ID performance with less latency (a smaller interleaver size). In addition, the density evolution technique is used to show how CID improves convergence compared to standard ID; specifically, the SNR evolution curves of CID show a lower threshold than those of standard ID.

The rest of this chapter is organized as follows. In Section II, the CID algorithm is developed as either an optimal or a suboptimal decoding algorithm, depending on the code, and multiple constraint marginalization methods are suggested. The performance of CID for the optimal and suboptimal cases is shown in Section III; the simulation results show a tradeoff between complexity and performance in CID. Some concluding remarks are given in Section IV.

6.2 CID Algorithms

6.2.1 Optimal CID Algorithms

When only a small number of nodes are shared by all cycles of a code and each node has only a small number of configurations, CID can be optimal and feasible.
For example, if each node can take two values and there are M nodes common to all cycles, then the number of possible constraints is 2^M, with each constraint represented by an M x 1 vector. One simple example is a tail-biting code, which has only one node common to its single cycle (the start and end states: M = 1); the number of constraint values is the number of configurations of that state variable [53]. In Fig. 6.1, the concept of state-constrained decoding is illustrated for a 4-state non-recursive convolutional code. There are two ways to perform constrained marginalization:

Sum-marginalization [53]:

p(b_k, z) = sum over c_i in C of p(b_k, z | c_i) p(c_i) = sum over c_i in C of p(b_k, z, c_i)

Figure 6.1: State-constrained decoding of a tail-biting code with a 4-state non-recursive convolutional code

Max-marginalization:

p(b_k, z) = max over c_i in C of p(b_k, z | c_i) p(c_i) = max over c_i in C of p(b_k, z, c_i)

where b and z represent the information sequence and the noisy observation sequence, respectively, and C is the set of all possible constraints. Given a constraint c_i, p(b_k, z | c_i) can be obtained by either a sum-product or a min-sum algorithm:

p(b_k, z | c_i) = sum over {b : b_k} of p(z | b, c_i) p(b | c_i)    (sum-product)

p(b_k, z | c_i) = max over {b : b_k} of p(z | b, c_i) p(b | c_i)    (min-sum)

As another example where CID is optimal, we present a 4-cycle tail-biting code. The 4-cycle tail-biting code consists of 4 data segments, and the first segment is exactly the same as a standard tail-biting code. After the first segment, the tail-biting bits of the first segment are appended at the end of each data segment so that all segments have the same starting and ending state. This example shows that optimal decoding is possible when a code has more than one cycle and those cycles can be easily cut by constraints. Fig.
6.2 represents the data and tail-bit arrangement of both the tail-biting and the 4-cycle tail-biting codes. In Fig. 6.2, the shaded bits represent the tail-biting bits used repeatedly in multiple data segments, and the capital letters A to B and A' to E' represent the starting and ending states of each segment.

Figure 6.2: Data and tail-bit arrangement with graph: (a) tail-biting code (A = B), (b) 4-cycle tail-biting code (A' = B' = C' = D' = E')

6.2.2 Suboptimal CID Algorithms

When it is not possible to consider all constraint values of all common nodes, a small number of nodes can be selected and, for these selected nodes, CID may be applied as a suboptimal decoding algorithm. In other words, a few of the many cycles are removed from the graphical model of the code. For this suboptimal CID algorithm, the constrained marginalization is based on a selected set of constraints C' instead of the set of all possible constraints C (C' a subset of C). In addition to the marginalization options above (Sum, Max), we propose two ad hoc marginalization options for the min-sum algorithm:

Reliability-marginalization:

c_hat = arg max over c_i of | p(b_k = 0, z | c_i) - p(b_k = 1, z | c_i) |
b_hat_k = arg max over b_k of p(b_k, z | c_hat)

Sequence-marginalization:

c_hat = arg max over c_i of max over b of p(z | b, c_i) p(b | c_i)
b_hat_k = arg max over b_k of p(b_k, z | c_hat)

The Reliability-marginalization method takes the constraint that gives the highest reliability at time index k, while the Sequence-marginalization method takes the constraint that gives the highest reliability for the whole sequence. In the following section, the performance of these two ad hoc methods is compared to that of the previous two methods when CID is suboptimal.

We applied the CID algorithm to a Serially Concatenated Convolutional Code (SCCC) and a regular (3,6) LDPC code, both of which have many cycles and many nodes involved.
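The four marginalization rules above can be sketched as follows. This is an illustrative reconstruction with toy probability tables: the helper names and the representation of p(b_k, z | c_i) as a (p0, p1) pair per constraint are assumptions, and constraints are taken as equally likely, so uniform priors cancel out of the comparisons.

```python
def sum_marginalize(cond_probs):
    """Sum-marginalization: p(b_k, z) is proportional to the sum over
    constraints of p(b_k, z | c_i) (equally likely constraints assumed).
    `cond_probs` maps each constraint to (p(b_k=0, z | c), p(b_k=1, z | c)).
    Returns the decision for b_k."""
    p0 = sum(p[0] for p in cond_probs.values())
    p1 = sum(p[1] for p in cond_probs.values())
    return 0 if p0 >= p1 else 1


def max_marginalize(cond_probs):
    """Max-marginalization: p(b_k, z) is proportional to the max over
    constraints of p(b_k, z | c_i)."""
    p0 = max(p[0] for p in cond_probs.values())
    p1 = max(p[1] for p in cond_probs.values())
    return 0 if p0 >= p1 else 1


def reliability_marginalize(cond_probs):
    """Reliability-marginalization: pick the constraint with the largest
    |p(b_k=0, z | c) - p(b_k=1, z | c)| at index k, then decide b_k under it."""
    c = max(cond_probs, key=lambda c: abs(cond_probs[c][0] - cond_probs[c][1]))
    p0, p1 = cond_probs[c]
    return 0 if p0 >= p1 else 1


def sequence_marginalize(cond_probs, seq_metrics):
    """Sequence-marginalization: pick the constraint whose best
    whole-sequence metric max_b p(z | b, c) p(b | c) is largest (supplied
    here precomputed in `seq_metrics`), then decide b_k under it."""
    c = max(seq_metrics, key=seq_metrics.get)
    p0, p1 = cond_probs[c]
    return 0 if p0 >= p1 else 1
```

For example, with `cond_probs = {"c0": (0.6, 0.0), "c1": (0.1, 0.45), "c2": (0.1, 0.45)}`, max-marginalization decides 0 (one constraint strongly favors 0) while sum-marginalization decides 1 (the aggregate mass favors 1), showing that the rules can disagree on the same inputs.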
First, a small number of nodes M' out of all the common nodes M are selected. For the binary case, the number of possible constraints is 2^M'. Second, standard ID is executed for each of the 2^M' constraints. We should note that the constraints are explicitly applied only at the first iteration; however, the soft information at subsequent iterations still evolves based on the constraints. After a fixed number of iterations (e.g., 10), the soft information from each constrained decoding is combined and then marginalized to obtain the decoded sequence. Fig. 6.3 shows the SCCC system considered and the CID decoding blocks.

Figure 6.3: Block diagrams of (a) the SCCC encoder and (b) the associated CID decoder (2^4 = 16 constraints on the inner SISO)

A similar CID algorithm was applied to the regular (3,6) LDPC code, with the constraints imposed on the messages from variable nodes to check nodes.

6.3 Simulation and SNR evolution

In this section, the performance of CID is shown for both the optimal and the suboptimal case. For optimal tail-biting decoding, Fig. 6.4 shows the performance of CID with the constraint on the starting and ending state. The performance of iterative tail-biting decoding with two wraps (iterations) [9] is also shown. For comparison, the performance of tail-bit decoding using the Viterbi Algorithm (VA) is also presented. All decoding algorithms were based on the min-sum algorithm with block size N = 10.
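The constrained-run-then-combine procedure just described can be sketched as a small driver. Everything here is illustrative: `run_std_id` stands in for a full standard ID run under one constraint, and the combining step shown is the Reliability-marginalization rule expressed in LLR form (keep the most reliable value per bit), which is one of the options above, not the only one.

```python
from itertools import product


def constrained_iterative_decode(run_std_id, m_prime, n_iters=10):
    """Suboptimal CID driver (sketch).

    `run_std_id(constraint, n_iters)` is a caller-supplied function that
    runs standard ID with the M' selected nodes fixed to the 0/1 pattern
    `constraint` on the first iteration and returns a list of per-bit LLRs.
    The 2^M' runs are combined by keeping, for each bit position, the LLR
    of largest magnitude (Reliability-marginalization in LLR form).
    """
    runs = [run_std_id(c, n_iters) for c in product((0, 1), repeat=m_prime)]
    return [max(llrs, key=abs) for llrs in zip(*runs)]
```

A sign decision on the combined LLRs then yields the decoded sequence; the cost relative to standard ID is the factor of 2^M' parallel decoding runs discussed in the text.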
Because the starting and ending state of the tail-biting decoder can be determined correctly with high probability at high SNR, and the tail-biting code avoids the energy loss of the tail bits, both tail-biting decoders outperformed the tail-bit decoder at high SNR (i.e., the reported Eb/No includes the penalty for tail bits). At low SNR, however, the probability of a correct decision on the starting and ending state is very low; therefore, the tail-bit decoder shows better performance despite the tail-bit energy loss. The optimal CID performs slightly better than iterative tail-biting decoding at all SNRs.

Fig. 6.5 shows the performance of the 4-cycle tail-biting CID algorithm. The CID shows slightly better performance than the tail-bit code at all SNRs. The 4-cycle structure increases the probability of a correct decision on the starting and ending state because of the longer observation; at the same time, the tail-bit energy loss is decreased. This results in a smaller performance difference.

Figure 6.4: Performance of tail-biting and tail-bit convolutional codes based on the min-sum algorithm with block size N = 10

Figure 6.5: Performance of the 4-cycle tail-biting codes based on the min-sum algorithm with block size N = 40

Figure 6.6: Performance for the SCCC based on the min-sum algorithm with interleaver size 256; M' = 4 nodes were constrained

Fig. 6.6 shows the performance of CID on the SCCC system of Fig. 6.3 with a small interleaver size N = 256. Table 6.1 shows the four different CID options simulated.
The constraints are imposed at the inner Soft-In-Soft-Out (SISO) decoder. Because only some of the nodes were constrained, CID is suboptimal and iteration improves performance. Both standard ID and CID were simulated for up to 10 iterations. With approximately 2^4 times higher complexity, CID shows about 0.4 dB gain at a BER of 10^-3. Among the CID options, Reliability-marginalization showed the best performance. Although it is not shown in this figure, a tradeoff between complexity and performance of CID was observed: as the number of constrained nodes increases, the performance improves.

The complexity/performance tradeoff of CID for the regular (3,6) LDPC code is shown in Fig. 6.7. With Reliability-marginalization, CID showed about a 0.3 dB gain compared to standard ID.

Table 6.1: The CID options of the SCCC decoder

SISO with constraints | M' | Marginalization
Inner                 | 4  | Sum
Inner                 | 4  | Max
Inner                 | 4  | Sequence
Inner                 | 4  | Reliability

Figure 6.7: CID performance of the regular (3,6) LDPC code based on the MSM algorithm with interleaver size 500. (a) CID1: constrained at 2 nodes; (b) CID2: constrained at 4 nodes. Both CID1 and CID2 were constrained at check nodes with Reliability-marginalization.

For the convergence analysis of the CID algorithm, the mean mu was estimated from a sufficiently large block of soft information. By the symmetry property [43], the variance of the soft information is 2*mu. With this mean and variance, the SNR evolution curves were plotted. The SNR evolution curves have a wider iteration tunnel as the noise level decreases (i.e., as Eb/No increases). The threshold of a given decoding algorithm is obtained at the noise level where the SNR curves start to touch each other. Fig.
6.8 shows the SNR evolution curves of the standard ID and CID algorithms. At the threshold of standard ID, the CID SNR curves have an open iteration tunnel while the standard ID SNR curves are closed; in other words, the threshold of CID is lower than that of standard ID. Fig. 6.9 shows that standard ID and CID have their thresholds at different noise levels, Eb/No = 0.53 dB and Eb/No = 0.48 dB, respectively. The difference between the two thresholds agrees with the gain observed in the simulated performance (0.4 dB at BER 10^-3).

Figure 6.8: SNR evolution curves of standard ID and CID on the SCCC at Eb/No = 0.53 dB

Figure 6.9: SNR evolution curves of standard ID and CID on the SCCC at 0.53 dB and 0.48 dB, respectively

6.4 Conclusion

A modification of the standard iterative decoding algorithm, CID, was introduced in this chapter. CID generalizes the idea that optimal decoding of a tail-biting convolutional code can be achieved by breaking the cycle using a state-variable constraint. The optimal and suboptimal CID algorithms were presented and their performance was compared to that of standard ID. The suboptimal CID was applied to the SCCC and LDPC codes, which have many cycles, and it showed about 0.3-0.5 dB performance gain at higher complexity compared to standard ID. Convergence analysis using density evolution techniques showed how CID improves convergence compared to standard ID.
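The symmetry-based SNR estimate behind these evolution curves can be sketched as below. The helper name is hypothetical, and the SNR definition mu^2 / sigma^2 = mu / 2 follows the Gaussian-approximation convention of the density evolution literature cited above ([29]); under the symmetry property only the sample mean needs to be estimated.

```python
def message_snr(llrs):
    """Estimate the message SNR from a block of soft outputs, assuming the
    symmetry (consistency) property: the LLRs are Gaussian with mean mu and
    variance 2*mu, so SNR = mu^2 / (2*mu) = mu / 2."""
    mu = sum(llrs) / len(llrs)
    return mu / 2.0
```

Plotting this quantity for the inner decoder's output against the outer decoder's output, iteration by iteration, produces the tunnel diagrams of Figs. 6.8 and 6.9.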
Future work is to find good, large codes with a few cut points that can be decoded optimally by CID instead of suboptimally by standard ID.

Chapter 7

Conclusions and Future Work

In this dissertation, several modified iterative data detection techniques (message-passing algorithms) were developed. For unknown channels (e.g., frequency-selective channels, flat-fading channels), channel adaptation methods combined with iterative detection were discussed in Chapter 3. As a complexity reduction technique, stopping criteria for iterative decoding were discussed in Chapter 4, and a simple and effective stopping criterion was proposed, particularly for the min-sum algorithm. As an attempt at optimal decoding of codes with cycles, Constrained Iterative Decoding (CID) was introduced and its convergence properties were analyzed by density evolution (DE) techniques. DE was explained and categorized in Chapter 5; in addition, it was used to explain and optimize the scaling of soft information.

These powerful DE techniques (i.e., analytical DE) can be applied only to a limited class of code structures, namely networks of 2-state constituent codes. Overcoming this limitation would be another breakthrough in the design of good codes. Another unsolved issue is the optimal decoding of codes with cycles; further development of the CID concept may provide a clue to this problem, at least in the case of short interleavers.

Reference List

[1] K. Abend and B. D. Fritchman. Statistical detection for communication channels with intersymbol interference. Proc. IEEE, vol. 58, pp. 779-785, May 1970.

[2] S. M. Aji, G. B. Horn, and R. J. McEliece.
Iterative decoding on graphs with a single cycle. In Proc. IEEE Symposium on Information Theory, page 276, Cambridge, MA, USA, August 1998.

[3] S. M. Aji and R. J. McEliece. The generalized distributive law. IEEE Trans. Inform. Theory, vol. 46, no. 2, pp. 325-343, March 2000.

[4] A. Anastasopoulos. A comparison between the sum-product and the min-sum iterative detection algorithms based on density evolution. Submitted to Globecom, March 2001.

[5] A. Anastasopoulos and K. M. Chugg. Phase tracking for turbo codes: An approach based on adaptive SISOs. 28th Annual IEEE Communication Theory Workshop, May 1999.

[6] A. Anastasopoulos and K. M. Chugg. Adaptive iterative detection for turbo codes with carrier-phase uncertainty. Submitted to IEEE Trans. Commun., February 2000.

[7] A. Anastasopoulos and K. M. Chugg. Adaptive soft-input soft-output algorithms for iterative detection with parametric uncertainty. To appear in IEEE Trans. Commun., September 2000.

[8] A. Anastasopoulos and A. Polydoros. Adaptive soft-decision algorithm for mobile fading channels. ETT, vol. 9, no. 2, pp. 183-190, March/April 1998.

[9] J. B. Anderson and S. M. Hladik. Tailbiting MAP decoders. IEEE J. Select. Areas Commun., vol. 16, no. 2, pp. 297-302, February 1998.

[10] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv. Optimal decoding of linear codes for minimizing symbol error rate. IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, March 1974.

[11] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara. Soft-input soft-output modules for the construction and distributed iterative decoding of code networks. European Trans. Commun., no. 2, March/April 1998.

[12] C. Berrou, A. Glavieux, and P. Thitimajshima. Near Shannon limit error-correcting coding and decoding: turbo-codes. In International Conference on Communications, pages 1064-1070, Geneva, Switzerland, May 1993.
[13] X. Chen and K. M. Chugg. Reduced state soft-in/soft-out algorithms for complexity reduction in iterative and non-iterative data detection. Submitted to ICC 2000.

[14] X. Chen and K. M. Chugg. Near-optimal page detection for two-dimensional ISI/AWGN channels using concatenated modeling and iterative detection. In Proc. International Conf. Communications, Atlanta, GA, 1998.

[15] K. Chugg, A. Anastasopoulos, and X. Chen. Iterative Detection: Adaptivity, Complexity Reduction, and Applications. Kluwer Academic Press, 2001.

[16] K. Chugg and K. Lerdsuwanakij. Fading channel sequence detection based on rational approximations to the Clarke spectrum. In ICT, Porto Carras, Greece, 1998.

[17] K. M. Chugg and X. Chen. Efficient architectures for soft-output algorithms. In Proc. International Conf. Communications, Atlanta, GA, 1998.

[18] K. M. Chugg and X. Chen. On a-posteriori probability (APP) and minimum sequence metric (MSM) algorithms. Submitted to IEEE Trans. Commun., March 1999.

[19] K. M. Chugg and A. Polydoros. MLSE for an unknown channel - part II: Tracking performance. IEEE Trans. Commun., pages 949-958, August 1996.

[20] S. Y. Chung, G. D. Forney, T. J. Richardson, and R. L. Urbanke. On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit. IEEE Communication Letters, vol. 5, no. 2, pp. 58-60, February 2001.

[21] S. Y. Chung, T. J. Richardson, and R. L. Urbanke. Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation. IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 657-670, February 2001.

[22] R. Clarke. A statistical theory of mobile radio reception. Bell Sys. Tech. J., vol. 47, pp. 957-1000, 1968.

[23] G. Colavolpe, G. Ferrari, and R. Raheli. Extrinsic information in iterative decoding: a unified view. To appear, 2001.
Iterative turbo decoder analysis based on density evolution. TMO Progress Reps., Jet Propulsion Lab, February 2001. [25] A. Duel-Hallen and C. Heegard. Delayed decision feedback estimation. IEEE Trans. Commun., vol. 37, pp. 428-436, May 1989. [26] E. Ettelaie. Slowing the convergence using filtering. Class Project Report (EE666), University of Southern California, Spring 2001. [27] M. V. Eyuboglu and S. U. Qureshi. Reduced-state sequence estim ation with set partitioning and decision feedback. IEEE Trans. Commun., vol. COM-38, pp. 13-20, January 1988. [28] B. J. Frey. Graphical Models for Machine Learning and Digital Communication. The M IT Press, Cambridge, 1998. [29] H. El Gamal and Jr. A. R. Hammons. Analyzing the turbo decoder using the gaussian approximation. IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 671- 686, February 2001. [30] N. H. Ha and E. Shwedyk. Performance of turbo codes on frequency-selective rayleigh fading channel with joint data and channel estimation. In IEEE Press, 1999. [31] J. Hagenauer, E. Offer, and L. Papke. Iterative decoding of binary block and convolutional codes. IEEE Trans. Inform. Theory, vol. 42, no. 2, pp. 429-445, March 1996. [32] H. Hoang and E. Shwedyk. Performance of turbo codes on frequency-selective rayleigh fading channel with joint data and channel estimation. In Proc. Inter­ national Conf. Communications, pages 98-102, 1999. [33] Christos Komninakis and Richard D. Wesel. Pilot-aided joint d ata and channel estim ation in flat correlated fading. In Proc. Globecom Conf, pages 2534-2539, Rio de Janeiro, Brazil, 1999. [34] F.R. Kschischang and B.J. Frey. Iterative decoding of compond codes by prob­ ability propagation in graphical models. IEEE J. Select. Areas Commun., pages 219-231, February 1998. [35] L-N. Lee. Real-time minimal-bit-error probability decoding of convolutional codes. IEEE Trans. Commun., vol. 22, pp. 146-151, February 1974. [36] Y. Li, B. Vucetic, and Y. Sato. 
Optimum soft-output detection for channels with intersymbol interference. IEEE Trans. Inform. Theory, vol. 41, pp. 704-713, May 1995.

[37] L. Lu and S. W. Wilson. Synchronization of turbo coded modulation systems at low SNR. Proc. International Conf. Communications, 1998.

[38] R. J. McEliece, D. J. C. MacKay, and J. F. Cheng. Turbo decoding as an instance of Pearl's "belief propagation" algorithm. IEEE J. Select. Areas Commun., vol. 16, pp. 140-152, February 1998.

[39] J. M. Mendel. Lessons in Estimation Theory for Signal Processing, Communications, and Control. Prentice Hall, Englewood Cliffs, NJ, 1995.

[40] L. Ping, S. Chan, and K. L. Yeung. Iterative decoding of multi-dimensional concatenated single parity check codes. In Proc. International Conf. Communications, pages 131-135, June 1998.

[41] R. Raheli, A. Polydoros, and C.-K. Tzou. The principle of per-survivor processing: A general approach to approximate and adaptive MLSE. In Proc. Globecom Conf., pages 33.3.1-33.3.6, 1991.

[42] R. Raheli, A. Polydoros, and C.-K. Tzou. Per-survivor processing: A general approach to MLSE in uncertain environments. IEEE Trans. Commun., vol. 43, pp. 354-364, Feb.-Apr. 1995.

[43] T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke. Design of capacity-approaching irregular low-density parity-check codes. IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 619-637, February 2001.

[44] T. J. Richardson and R. L. Urbanke. The capacity of low-density parity-check codes under message-passing decoding. IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 599-618, February 2001.

[45] M. Sandell, C. Luschi, P. Strauch, and R. Yan. Iterative channel estimation using soft decision feedback. In Proc. Globecom Conf., pages 3728-3733, 1998.

[46] M. Sandell, C. Luschi, P. Strauch, and R. Yan. Turbo equalization for an 8-PSK modulation scheme in a mobile TDMA communication system.
In Proc. Vehicular Tech. Conf., pages 1605-1609, 1999.

[47] R. Shao, S. Lin, and M. Fossorier. Two simple stopping criteria for turbo decoding. IEEE Trans. Commun., vol. 47, pp. 1117-1120, August 1999.

[48] S. ten Brink. Convergence of iterative decoding. Electronics Letters, vol. 35, no. 13, pp. 806-808, May 1999.

[49] M. C. Valenti and B. D. Woerner. Refined channel estimation for coherent detection of turbo codes over flat-fading channels. Electronics Letters, vol. 34, no. 17, pp. 1648-1649, August 1998.

[50] M. C. Valenti and B. D. Woerner. A bandwidth efficient pilot symbol technique for coherent detection of turbo codes over fading channels. In Proc. IEEE Military Comm. Conf., pages 81-85, 1999.

[51] M. C. Valenti and B. D. Woerner. Performance of turbo codes in interleaved flat fading channels with estimated channel state information. In Proc. Vehicular Tech. Conf., pages 66-70, Phoenix, AZ, May 1999.

[52] G. M. Vitetta and D. P. Taylor. Maximum likelihood decoding of uncoded and coded PSK signal sequences transmitted over Rayleigh flat-fading channels. IEEE Trans. Commun., vol. 43, no. 11, pp. 2750-2758, November 1995.

[53] Y. Wang, R. Ramesh, A. Hassan, and H. Koorapaty. On MAP decoding for tail-biting convolutional codes. In Proc. IEEE Symposium on Information Theory, page 225, June 1997.

[54] Y. Zhang, M. P. Fitz, and S. B. Gelfand. Soft output demodulation on frequency-selective Rayleigh fading channels using AR channel models. In Proc. Globecom Conf., pages 327-331, Phoenix, Arizona, November 1997.
Appendix A

Equivalence of sign(L_V1(v1)) sign(L_V2(v2)) min[|L_V1(v1)|, |L_V2(v2)|] to the min-sum algorithm

The messages from variable nodes v1 and v2 to check node c, after taking the negative log, are

V1(v1) = -log p(v1 | y1)    (A.1)
V2(v2) = -log p(v2 | y2)    (A.2)

where y1 and y2 are the observations from the channel. The LLRs L_V1 and L_V2 can be rewritten as functions of V1(v1) and V2(v2):

L_V1(v1) = log [ p(v1 = 0 | y1) / p(v1 = 1 | y1) ]
         = log p(v1 = 0 | y1) - log p(v1 = 1 | y1)
         = -V1(v1 = 0) + V1(v1 = 1)

L_V2(v2) = log [ p(v2 = 0 | y2) / p(v2 = 1 | y2) ]
         = log p(v2 = 0 | y2) - log p(v2 = 1 | y2)
         = -V2(v2 = 0) + V2(v2 = 1)

Under the min-sum algorithm, the message U from check node c to variable node v3 is

U(v3) = min over v1, v2 of [ V1(v1) + V2(v2) + chi(v1, v2, v3) ]    (A.3)

where chi(v1, v2, v3) represents the even parity check equation:

chi(v1, v2, v3) = 0 if v1 + v2 + v3 = 0 (mod 2), and infinity if v1 + v2 + v3 = 1 (mod 2)    (A.4)

Writing U(v3) for each value that v3 can take:

U(v3 = 0) = min[ V1(v1 = 0) + V2(v2 = 0), V1(v1 = 1) + V2(v2 = 1) ]    (A.5)
U(v3 = 1) = min[ V1(v1 = 0) + V2(v2 = 1), V1(v1 = 1) + V2(v2 = 0) ]    (A.6)

Substituting (A.1) and (A.2) into (A.5), and using p(v = 0 | y) = e^L / (1 + e^L) and p(v = 1 | y) = 1 / (1 + e^L) with L the corresponding LLR:

U(v3 = 0) = min[ -log p(v1 = 0 | y1) - log p(v2 = 0 | y2),
                 -log p(v1 = 1 | y1) - log p(v2 = 1 | y2) ]
          = min[ -L_V1(v1) - L_V2(v2), 0 ] + log (1 + e^{L_V1(v1)})(1 + e^{L_V2(v2)})

Similarly,

U(v3 = 1) = min[ -log p(v1 = 0 | y1) - log p(v2 = 1 | y2),
                 -log p(v1 = 1 | y1) - log p(v2 = 0 | y2) ]
          = min[ -L_V1(v1), -L_V2(v2) ] + log (1 + e^{L_V1(v1)})(1 + e^{L_V2(v2)})

Let L_U(v3) be the LLR of the message U(v3). Then, since the common log term cancels,

L_U(v3) = -U(v3 = 0) + U(v3 = 1)
        = -min[ -L_V1(v1) - L_V2(v2), 0 ] + min[ -L_V1(v1), -L_V2(v2) ]    (A.7)

If L_V1(v1) > 0 and L_V2(v2) > 0, then

L_U(v3) = L_V1(v1) + L_V2(v2) + min[ -L_V1(v1), -L_V2(v2) ]
        = min[ L_V1(v1), L_V2(v2) ]
        = sign(L_V1(v1)) sign(L_V2(v2)) min[ |L_V1(v1)|, |L_V2(v2)| ]

If L_V1(v1) < 0 and L_V2(v2) < 0, then

L_U(v3) = min[ -L_V1(v1), -L_V2(v2) ]
        = sign(L_V1(v1)) sign(L_V2(v2)) min[ |L_V1(v1)|, |L_V2(v2)| ]

If L_V1(v1) > 0 and L_V2(v2) < 0, then

L_U(v3) = -min[ -L_V1(v1) - L_V2(v2), 0 ] - L_V1(v1)
        = -min[ L_V1(v1), -L_V2(v2) ]
        = sign(L_V1(v1)) sign(L_V2(v2)) min[ |L_V1(v1)|, |L_V2(v2)| ]

If L_V1(v1) < 0 and L_V2(v2) > 0, then

L_U(v3) = -min[ -L_V1(v1) - L_V2(v2), 0 ] - L_V2(v2)
        = -min[ -L_V1(v1), L_V2(v2) ]
        = sign(L_V1(v1)) sign(L_V2(v2)) min[ |L_V1(v1)|, |L_V2(v2)| ]

Therefore, (A.7), computed under the min-sum algorithm, is equivalent to sign(L_V1(v1)) sign(L_V2(v2)) min[ |L_V1(v1)|, |L_V2(v2)| ].
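The case analysis above can be spot-checked numerically. The sketch below (with hypothetical helper names) computes L_U(v3) both ways for the two-input check node: once via the sign-min rule, and once directly from the negative-log metrics V_i(v) = -log p(v | y_i) recovered from the input LLRs, as in (A.3)-(A.7).

```python
import math


def llr_sign_min(l1, l2):
    """Check-node output LLR via the sign-min rule of this appendix."""
    sign = (1.0 if l1 >= 0 else -1.0) * (1.0 if l2 >= 0 else -1.0)
    return sign * min(abs(l1), abs(l2))


def llr_min_sum(l1, l2):
    """Check-node output LLR computed directly from the min-sum messages
    V_i(v) = -log p(v | y_i), recovered here from the input LLRs using
    p(v=0|y) = e^L / (1 + e^L) and p(v=1|y) = 1 / (1 + e^L)."""
    v1 = {0: -l1 + math.log1p(math.exp(l1)), 1: math.log1p(math.exp(l1))}
    v2 = {0: -l2 + math.log1p(math.exp(l2)), 1: math.log1p(math.exp(l2))}
    u0 = min(v1[0] + v2[0], v1[1] + v2[1])  # even-parity configurations, v3 = 0
    u1 = min(v1[0] + v2[1], v1[1] + v2[0])  # even-parity configurations, v3 = 1
    return -u0 + u1                          # L_U(v3), as in (A.7)
```

Evaluating both functions on mixed-sign LLR pairs reproduces the agreement proved case by case above.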
Asset Metadata
Creator: Heo, Jun (author)
Core Title: Iterative data detection: Modified message-passing algorithms and convergence
Contributor: Digitized by ProQuest (provenance)
Degree: Doctor of Philosophy
Degree Program: Electrical Engineering
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: engineering, electronics and electrical; OAI-PMH Harvest
Language: English
Advisor: Chugg, Keith (committee chair), Baxendale, Peter (committee member), Beerel, Peter (committee member)
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c16-239774
Unique Identifier: UC11334805
Identifier: 3093416.pdf (filename), usctheses-c16-239774 (legacy record id)
Legacy Identifier: 3093416.pdf
Dmrecord: 239774
Document Type: Dissertation
Rights: Heo, Jun
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the au...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus, Los Angeles, California 90089, USA