LTE 2009-2010


    Long Term Evolution



    Project Group Members

    Ahmed Abd-Allah Ahmed

    Ahmed Adel Ahmed

    Ahmed Ali Mahmmod

    Ahmed Gaber Hussien

    Amr Abd-ElSalam Kabary

    Amr Yossef Hassan

    Ayman ElSayed Yasin

    Ayman Ibrahim Mansour

    Hazem Mohammed Adel

    Hossam Mohamed Abd-ElAzez

    Mohamed Mosaad Mostafa

    Mostafa Fahmy Darwesh

    Osama Anter Salem

    Tawfik Mohammed Samy


    Acknowledgements

This project was accomplished during our fourth year at the Department of Communications, Faculty of Engineering, University of Alexandria, and describes our work and study for our graduation project.

Certainly, it could not have been written without the support and patience of many people. Therefore, we are obliged to thank and honor everyone who assisted us during that time.

    In particular, we want to express our gratitude to our supervisor

    Dr. Masoud Besher ELghonamy

for all the valuable advice, encouragement, and discussions. The opportunity to work with him was a precious experience; he exerted all the effort and time needed to help us learn, search, and do our best in this project.

We also want to thank our professors in the communications department, who did their best to teach us the soul of communication engineering, especially

Dr. Ahmed K. Sultan Salem

who accorded us all the help and support whenever we asked. Our deep thanks go to

Eng. Karim Ahmed Samy Banawan

who was our beacon throughout our project journey.

Most of all, we thank our beloved families for their immeasurable support, encouragement, and patience while we worked on this project. Without their love and understanding, this book and our project would not have come to fruition.

First and last, we would be remiss if we failed to express our profound gratitude to Allah, whose assistance we always seek and to whom we owe every success and all the progress we have made in our lives.


    Mobile broadband is becoming a reality, as the Internet

    generation grows accustomed to having broadband access

    wherever they go, and not just at home or in the office. Out of the

estimated 1.8 billion people who will have broadband by 2012, some two-thirds will be mobile broadband consumers and the

    majority of these will be served by HSPA (High Speed Packet

    Access) and LTE (Long Term Evolution) networks.

    People can already browse the Internet or send e-mails using

    HSPA-enabled notebooks, replace their fixed DSL modems with

    HSPA modems or USB dongles, and send and receive video or

music using 3G phones. With LTE, the user experience will be even better. It will further enhance more demanding applications

    like interactive TV, mobile video blogging, advanced games or

    professional services.

    LTE offers several important benefits for consumers and

    operators:

Performance and capacity: One of the requirements on LTE is to provide downlink peak rates of at least 100 Mbit/s. The technology allows for speeds over 200 Mbit/s. Furthermore, RAN (Radio Access Network) round-trip times shall be less than 10 ms.

    In effect, this means that LTE more than any other technology

    already meets key 4G requirements.

Simplicity: First, LTE supports flexible carrier bandwidths, from below 5 MHz up to 20 MHz. LTE also supports both FDD (Frequency Division Duplex) and TDD (Time Division Duplex). Ten paired and four unpaired spectrum bands have so far been identified by 3GPP for LTE, and there are more bands to come.

    This means that an operator may introduce LTE in new bands

where it is easiest to deploy 10 MHz or 20 MHz carriers, and

    eventually deploy LTE in all bands.

    Second, LTE radio network products will have a number of

    features that simplify the building and management of next-

    generation networks. For example, features like plug-and-play,


    self-configuration and self-optimization will simplify and reduce

    the cost of network roll-out and management.

    Third, LTE will be deployed in parallel with simplified, IP-based

    core and transport networks that are easier to build, maintain and

    introduce services on.

Wide range of terminals: In addition to mobile phones, many computer and consumer electronics devices, such as

    notebooks, ultra-portables, gaming devices and cameras, will

    incorporate LTE embedded modules. Since LTE supports hand-

    over and roaming to existing mobile networks, all these devices

    can have ubiquitous mobile broadband coverage from day one.

In summary, operators can introduce LTE flexibly to match their existing network, spectrum and business objectives for mobile

    broadband and multimedia services.

Our project's aim is to introduce a simple simulation of the physical layer of a downlink traffic channel in the LTE system and to investigate the output results at different system parameters.

This project has been divided into two stages. The first was the background stage, in which the fundamentals of wireless communications were investigated; digital communication principles, wireless channel problems, and channel coding concepts were grasped thoroughly to provide us with robust knowledge of all the essentials required to understand and deal with any advanced system.

The second stage consisted of MATLAB simulations of most of the system blocks. The LTE system performance was then tested through these simulations while the system parameters were varied on the MATLAB platform. Finally, the LTE downlink receiver was implemented in hardware and connected to the transmitter in MATLAB.


CONTENTS

1 Channel Coding
  1.1 Introduction
    1.1.1 Channel Coding in Communication Systems
    1.1.2 Coding Principle
    1.1.3 Trade-off between BER and Bandwidth
    1.1.4 Error Control Techniques
      1.1.4.1 Forward Error Correction (FEC)
      1.1.4.2 Automatic Repeat Request (ARQ)
      1.1.4.3 Hybrid ARQ
  1.2 Cyclic Coding
    1.2.1 Introduction
    1.2.2 Generator Polynomial
    1.2.3 Parity-Check Polynomial
    1.2.4 Generator and Parity-Check Matrices
    1.2.5 Encoder for Cyclic Codes
      1.2.5.1 Calculation of the Syndrome
      1.2.5.2 The Syndrome Polynomial Properties
    1.2.6 Cyclic Redundancy Check Codes
  1.3 Convolutional Codes
    1.3.1 Convolutional Encoder
    1.3.2 Viterbi Decoder
    1.3.3 MATLAB Codes and Results
  1.4 Turbo Codes
    1.4.1 Turbo Encoder
    1.4.2 Turbo Decoder
    1.4.3 MATLAB Codes and Results
  1.5 Rate Matching
    1.5.1 Rate Matching for Turbo Codes
    1.5.2 Rate Matching for Convolutional Codes
  1.6 Scrambler

2 Digital Modulation Techniques
  2.1 What Is Modulation?
    2.1.1 Why We Modulate Signals
    2.1.2 Analog versus Digital
    2.1.3 Factors That Influence the Choice of Digital Modulation
    2.1.4 The Performance of a Modulation Scheme
      2.1.4.1 Power Efficiency
      2.1.4.2 Bandwidth Efficiency (Spectral Efficiency)
      2.1.4.3 Bandwidth Efficiency / Power Efficiency Trade-off
      2.1.4.4 System Complexity
      2.1.4.5 Other Considerations
    2.1.5 Geometric Representation of the Modulated Signal (Constellation Diagram)
      2.1.5.1 Constellation Diagram Interpretation
      2.1.5.2 Probability of Error and the Constellation Diagram
  2.2 Phase Shift Keying Modulation Techniques
    2.2.1 Binary Phase Shift Keying (BPSK)
      2.2.1.1 Time Domain
      2.2.1.2 Power Efficiency and Bandwidth Efficiency of BPSK
      2.2.1.3 Probability of Error of BPSK
    2.2.2 Quadrature Phase Shift Keying (QPSK)
      2.2.2.1 Constellation Diagram and Probability of Error
      2.2.2.2 QPSK Transmitter
      2.2.2.3 QPSK Receiver
  2.3 Quadrature Amplitude Modulation (QAM)
    2.3.1 Types of QAM
      2.3.1.1 Circular QAM
      2.3.1.2 Rectangular QAM
    2.3.2 Probability of Symbol Error Calculations
    2.3.3 QAM Modulation
    2.3.4 QAM Demodulation
    2.3.5 BW Efficiency
  2.4 MATLAB Codes and Results

3 Orthogonal Frequency Division Multiplexing (OFDM)
  3.1 Introduction
    3.1.1 History of OFDM
  3.2 Why OFDM
    3.2.1 Time Domain Analysis
    3.2.2 Frequency Domain Analysis
  3.3 Orthogonality
    3.3.1 Inter-Symbol Interference (ISI)
    3.3.2 Inter-Carrier Interference (ICI)
    3.3.3 How to Avoid Interference
    3.3.4 Orthogonality of OFDM
    3.3.5 Comparing FDM to OFDM
  3.4 OFDM Modulation
  3.5 OFDM Demodulation
  3.6 Cyclic-Prefix Insertion
    3.6.1 Cyclic-Prefix Drawbacks
    3.6.2 Cyclic-Prefix Advantages
  3.7 Selection of Basic OFDM Parameters
    3.7.1 OFDM Subcarrier Spacing
    3.7.2 Number of Subcarriers
    3.7.3 Cyclic-Prefix Length
  3.8 OFDM as a User-Multiplexing and Multiple-Access Scheme
  3.9 OFDM Drawbacks
    3.9.1 The Peak-to-Average Ratio
  3.10 MATLAB Codes and Results

4 Single Carrier FDMA
  4.1 Introduction
  4.2 SC-FDMA Signal Processing
  4.3 Subcarrier Mapping
  4.4 SC-FDMA in 3GPP Long Term Evolution
    4.4.1 Uplink Time and Frequency Structure
      4.4.1.1 Frames and Slots
      4.4.1.2 Resource Blocks
    4.4.2 Basic Uplink Physical Channel Processing
  4.5 MATLAB Codes and Results

5 Diversity and MIMO Multi-Antenna Systems
  5.1 Diversity
  5.2 Diversity Types
    5.2.1 Time Diversity
    5.2.2 Frequency Diversity
    5.2.3 Spatial (Antenna) Diversity
    5.2.4 Polarization Diversity
    5.2.5 Angle Diversity
  5.3 Spatial Multiplexing
    5.3.1 Space Time Block Codes (STBC) Using the Alamouti Method

6 Channel Problems and Modeling
  6.1 Introduction
    6.1.1 Noise in the Wireless Channel
    6.1.2 Interference in the Wireless Channel
    6.1.3 Dispersion in the Wireless Channel
    6.1.4 Path Loss
    6.1.5 Shadowing
  6.2 Large Scale Fading
    6.2.1 Path Loss
      6.2.1.1 Free-Space Path Loss
      6.2.1.2 Ray Tracing
      6.2.1.3 Simplified Path Loss Model
    6.2.2 Shadow Fading
    6.2.3 Outage Probability under Path Loss and Shadowing
    6.2.4 Cell Coverage Area
  6.3 Small Scale Fading
    6.3.1 Introduction
    6.3.2 Small Scale Fading Concepts
      6.3.2.1 Definitions
      6.3.2.2 How Fading Happens
      6.3.2.3 Factors Influencing Small Scale Fading
      6.3.2.4 Doppler Shift
    6.3.3 Classifications of Small Scale Fading Channels
      6.3.3.1 Fading Effects Due to Multipath Time Delay Spread
        6.3.3.1.1 Flat Fading Channels
        6.3.3.1.2 Frequency Selective Fading Channels
      6.3.3.2 Fading Effects Due to Doppler Spread
        6.3.3.2.1 Fast Fading Channel
        6.3.3.2.2 Slow Fading Channel
  6.4 MATLAB Codes and Results

7 Comparative Study between LTE and WiMAX
  7.1 Introduction
  7.2 Key Technologies
    7.2.1 Common Key Technologies
      7.2.1.1 Multiple Antenna Support
      7.2.1.2 OFDMA Transmission Scheme
    7.2.2 Key Technologies for the LTE System
      7.2.2.1 Spectrum Flexibility
      7.2.2.2 Channel-Dependent Scheduling and Rate Adaptation
      7.2.2.3 SC-FDMA Principles
    7.2.3 Key Technologies of Mobile WiMAX
      7.2.3.1 Spectrum, Bandwidth Options and Duplexing Arrangement
      7.2.3.2 Quality-of-Service Handling
      7.2.3.3 Mobility
      7.2.3.4 Fractional Frequency Reuse


1 Channel Coding

Ahmed Ali Mahmmod .......... CRC Codes

Ayman Ibrahim Mansour .......... Convolutional Codes

Hazem Mohammed Adel .......... Turbo Codes

Hossam Mohamed Abd-ElAzez .......... Turbo Codes

Tawfik Mohammed Samy .......... Rate Matching

Mostafa Fahmy Darwesh .......... Scrambler


1.1 Introduction

The task facing the designer of a digital communication system is that of providing a cost-effective facility for transmitting information from one end of the system to the other at a rate and a level of reliability and quality that are acceptable to the user at the other end.

The two system parameters available to the designer are the transmitted signal power and the channel bandwidth. These two parameters, together with the power spectral density of the receiver noise, determine the signal energy per bit to noise power spectral density ratio (Eb/N0). This ratio uniquely determines the bit error rate for a particular modulation scheme, and practical considerations usually place limits on the value that we can assign to it. Accordingly, in practice, we often arrive at a modulation scheme and find that it is not possible to provide acceptable data quality (low BER) at a fixed Eb/N0; the only practical option for providing high quality is then to use error control coding.

1.1.1 Channel coding in communication systems

Fig (1.1) shows the position of channel coding and decoding in a communication system: channel coding is performed after source coding and before modulation.

Fig (1.1) channel coding in a communication system


1.1.2 Coding principle

Coding is achieved by adding properly designed, controlled redundant bits to each message, or by performing an operation on the message to encode it by some method. These redundant bits (digits) are used for detecting and/or correcting transmission errors, in other words for protecting the data against channel impairments (e.g., noise, fading, interference). Many codes are used in different applications: for example, parity check codes and Reed-Solomon codes are used in CDs, and linear block and convolutional codes are used in space communication, Internet communication, satellite communication, and DVDs.

1.1.3 Trade-off between BER and bandwidth

From the curve in Fig (1.2): if no coding is used and we want to decrease the bit error rate from 10^-2 to 10^-4 (from point A to point B in the figure), we must increase Eb/N0 from 8 dB to 9 dB. If instead we want to decrease the error rate at a constant Eb/N0 of 8 dB (from point A to point C), we must use coding, but the trade-off in this case is an increase in bandwidth.

    Fig (1.2)
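The cost of lowering the BER without coding can be made concrete with a small numerical sketch. Since the curve of Fig (1.2) is not reproduced here, we use as an assumed stand-in the textbook formula for uncoded BPSK in AWGN, Pb = (1/2) erfc(sqrt(Eb/N0)); the exact numbers differ from the figure, but the shape of the trade-off is the same: each reduction in BER costs additional Eb/N0.

```python
import math

def bpsk_ber(ebno_db):
    """Uncoded BPSK bit error rate in AWGN for Eb/N0 given in dB."""
    ebno = 10.0 ** (ebno_db / 10.0)        # dB -> linear ratio
    return 0.5 * math.erfc(math.sqrt(ebno))

# Each extra dB of Eb/N0 buys roughly an order of magnitude in BER here.
for db in (6, 8, 9, 10):
    print(f"Eb/N0 = {db:2d} dB -> BER = {bpsk_ber(db):.2e}")
```

Coding breaks this dependence by buying BER with bandwidth instead of power, which is exactly the A-to-C move in Fig (1.2).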


1.1.4 Error control techniques

1.1.4.1 Forward error correction (FEC)

Fig (1.3) Forward error correction diagram

In forward error correction (FEC):

No feedback is required (simplex connection).

Added redundancy is used to correct transmission errors at the receiver.

The receiver tries to correct the errors itself.

Varying reliability, constant bit throughput.

    1.1.4.2 Automatic repeat request (ARQ)

Fig (1.4) Automatic repeat request diagram


A feedback channel is required (full duplex connection).

The receiver sends feedback to the transmitter indicating whether or not an error was detected in the received packet (a negative acknowledgement (NACK) or an acknowledgement (ACK), respectively).

The transmitter retransmits the previously sent packet if it receives a NACK.

Constant reliability, but varying throughput.
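The ACK/NACK loop above can be sketched as a toy stop-and-wait ARQ simulation. Everything here is a hypothetical illustration rather than part of any standard: the even-parity error detector, the scripted error pattern, and the function names are all assumptions.

```python
def parity_ok(frame):
    """Even-parity check: the appended bit makes the count of 1s even."""
    return sum(frame) % 2 == 0

def send_stop_and_wait(data_bits, error_script):
    """Toy stop-and-wait ARQ: retransmit until the receiver ACKs.
    error_script[i] is True if transmission attempt i corrupts a bit."""
    frame = data_bits + [sum(data_bits) % 2]   # append even-parity bit
    attempts = 0
    while True:
        received = frame[:]
        if attempts < len(error_script) and error_script[attempts]:
            received[0] ^= 1                   # channel flips one bit
        attempts += 1
        if parity_ok(received):                # no error detected -> ACK
            return received[:-1], attempts
        # error detected -> NACK, transmitter resends the same frame

data, tries = send_stop_and_wait([1, 0, 1, 1], [True, True, False])
print(data, tries)   # original data recovered after 3 attempts
```

With the scripted channel corrupting the first two attempts, the receiver NACKs twice and ACKs the third copy: constant reliability at the price of variable throughput, as stated above.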

1.1.4.3 Hybrid ARQ (ARQ + FEC)

A full duplex connection. Hybrid ARQ combines the two techniques above to use the advantages of both schemes.

1.2 Cyclic coding

1.2.1 Introduction

Cyclic coding forms a subclass of linear block codes (LBC). An advantage of cyclic codes over other types is that they are easy to encode using a well-defined mathematical structure, which has led to the development of very efficient decoding schemes. A binary code is said to be a cyclic code if it exhibits two main properties:

1. Linearity property: the sum of any two code words in the code is also a code word.

2. Cyclic property: any cyclic shift of a code word in the code is also a code word.
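Both properties can be checked exhaustively for a small code. As an assumed example (not taken from the text) we use the (7, 4) cyclic Hamming code with generator polynomial g(X) = 1 + X + X^3, which is a factor of X^7 + 1; the helper name is likewise illustrative.

```python
from itertools import product

def gf2_mul(a, b, n):
    """Multiply polynomials a(X)*b(X) over GF(2); coefficient lists, low order first."""
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return tuple(out)

n, k = 7, 4
g = [1, 1, 0, 1]                      # g(X) = 1 + X + X^3, a factor of X^7 + 1
codewords = {gf2_mul(list(m), g, n) for m in product([0, 1], repeat=k)}

# Linearity property: the sum of any two code words is a code word.
for c1 in codewords:
    for c2 in codewords:
        assert tuple(x ^ y for x, y in zip(c1, c2)) in codewords

# Cyclic property: any cyclic shift of a code word is a code word.
for c in codewords:
    assert (c[-1],) + c[:-1] in codewords

print(len(codewords), "codewords satisfy both properties")
```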


1.2.2 Generator polynomial

The polynomial (X^n + 1) and its factors play a vital role in the generation of cyclic codes. Let g(X) be a polynomial of degree n-k that is a factor of (X^n + 1). g(X) may be expressed as

g(X) = 1 + g_1 X + g_2 X^2 + ... + g_(n-k-1) X^(n-k-1) + X^(n-k)

where each coefficient g_i is equal to 0 or 1. According to this expansion, the polynomial g(X) has two terms with coefficient 1 separated by n-k-1 terms. The polynomial g(X) is called the generator polynomial of a cyclic code. A cyclic code is uniquely determined by its generator polynomial g(X), in that each code polynomial in the code can be expressed as the polynomial product

C(X) = a(X) g(X)

where a(X) is a polynomial in X of degree k-1 or less.

1.2.3 Parity-check polynomial

An (n, k) cyclic code is uniquely specified by its generator polynomial g(X) of degree (n-k). Such a code is also uniquely specified by another polynomial of degree k, called the parity-check polynomial, defined by

h(X) = 1 + h_1 X + h_2 X^2 + ... + h_(k-1) X^(k-1) + X^k

where the coefficients h_i are 0 or 1.


The parity-check polynomial h(X) has a form similar to the generator polynomial in that it has two terms with coefficient 1, separated by k-1 terms.

The generator polynomial g(X) is equivalent to the generator matrix G as a description of the code. Correspondingly, the parity-check polynomial, denoted by h(X), is an equivalent representation of the parity-check matrix H. We thus find that the matrix relation H G^T = 0 for linear block codes corresponds to the relationship

g(X) h(X) mod (X^n + 1) = 0

This equation shows that the generator polynomial g(X) and the parity-check polynomial h(X) are factors of the polynomial (X^n + 1), which can be written as

g(X) h(X) = X^n + 1

This property provides the basis for selecting the generator or parity-check polynomial of a cyclic code. In particular, we may state that if g(X) is a polynomial of degree (n-k) and it is also a factor of (X^n + 1), then g(X) is the generator polynomial of an (n, k) cyclic code. Equivalently, we may state that if h(X) is a polynomial of degree k and it is also a factor of (X^n + 1), then h(X) is the parity-check polynomial of an (n, k) cyclic code.

A final comment is in order. Any factor of (X^n + 1) of degree (n-k), the number of parity bits, can be used as a generator polynomial. For large values of n, the polynomial (X^n + 1) may


have many factors of degree n-k. Some of these polynomial factors generate good cyclic codes, whereas others generate bad cyclic codes. The issue of how to select generator polynomials that produce good cyclic codes is very difficult to resolve; indeed, coding theorists have expended much effort in the search for good cyclic codes.

1.2.4 Generator and parity-check matrices

Given the generator polynomial g(X) of an (n, k) cyclic code, we may construct the generator matrix G by noting that the k polynomials g(X), X g(X), ..., X^(k-1) g(X) span the code. The corresponding n-tuples are used as rows to generate the k-by-n matrix G.

The construction of the parity-check matrix H of the cyclic code from the polynomial h(X) requires special attention, as described here. Multiplying the equation

g(X) h(X) = X^n + 1

by a(X) and using C(X) = a(X) g(X), we get

c(X) h(X) = a(X) + X^n a(X)

with c(X) and h(X) as defined before. The product on the left-hand side of this equation contains powers extending up to n+k-1. On the other hand, the polynomial a(X) has degree k-1 or less, so the powers X^k, X^(k+1), ..., X^(n-1) do not appear in the polynomial on the right-hand side. Thus we set the coefficients of these powers in the product to zero:


sum over i of h_i c_(k+j-i) = 0,  for 0 <= j <= n-k-1

Comparing this with the parity-check equation c H^T = m G H^T = 0 of linear block codes, we arrange the coefficients of h(X) in reversed order and form the reciprocal polynomial

X^k h(X^-1) = X^k (1 + h_1 X^-1 + ... + h_(k-1) X^-(k-1) + X^-k) = X^k + h_1 X^(k-1) + ... + h_(k-1) X + 1

which, as shown, is also a factor of X^n + 1. The (n-k) polynomials X^k h(X^-1), X^(k+1) h(X^-1), ..., X^(n-1) h(X^-1) may now be used as the rows of the (n-k)-by-n parity-check matrix H.

1.2.5 Encoder for cyclic codes

Earlier we showed that generating an (n, k) cyclic code in systematic form involves three steps:

1. Multiply the message polynomial m(X) by X^(n-k).

2. Divide X^(n-k) m(X) by the generator polynomial g(X), obtaining the remainder b(X).

3. Add b(X) to X^(n-k) m(X), obtaining the code polynomial C(X).

These three steps can be implemented using a linear feedback shift register with (n-k) stages, as shown in Fig (1.5).


Fig (1.5) the encoder for cyclic codes

The encoder consists of several elements. The boxes represent flip-flops; a flip-flop is a device that resides in one of two possible states, denoted by 0 or 1. We use an external clock to control the operation of the flip-flops (initially set to zero). Every time the clock ticks, the contents of the flip-flops are shifted out in the direction of the arrows. In addition to the flip-flops, the encoder includes a second set of logic elements, called adders, which compute the modulo-2 sums of their respective inputs. Finally, the multipliers multiply their respective inputs by the associated coefficients. In particular, if the coefficient g_i = 1 the multiplier is a direct "connection"; if, on the other hand, g_i = 0, the multiplier is "no connection". The operation of the encoder is as follows:

The gate is switched on, and the k message bits are shifted into the channel. As soon as the k message bits have


entered the shift register, the resulting (n-k) bits in the register form the parity bits.

The gate is then switched off, thereby breaking the feedback connections.

Finally, the contents of the shift register are read out into the channel.

    1.2.5.1 Calculation of the syndrome

    Suppose that the code word (c0, c1, ..., cn-1) is transmitted over
    a noisy channel, resulting in the received word (r0, r1, ..., rn-1).

    We know from the syndrome calculation of linear block codes (LBC)
    that if the syndrome of the received word is zero, there are no
    transmission errors in the received word. If, on the other hand, the
    syndrome is non-zero, the received word contains transmission
    errors that require correction.

    For cyclic codes the syndrome can be calculated easily. Let the
    received word be represented by a polynomial of degree n-1 or less,
    as shown by

    r(X) = r0 + r1 X + ... + rn-1 X^(n-1)

    Let q(X) be the quotient and s(X) the remainder that result from
    dividing r(X) by g(X). We can therefore express r(X) as follows:

    r(X) = q(X) g(X) + s(X)

    The remainder s(X) is a polynomial of degree n-k-1 or less. It is
    called the syndrome polynomial because its coefficients make
    up the (n-k)-by-1 syndrome s.
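    The syndrome is thus just the remainder of one GF(2) polynomial division. A brief Python sketch (illustrative only; the (7, 4) cyclic code with g(X) = X^3 + X + 1 is an assumed example, coefficients highest degree first):

    ```python
    def poly_mod(dividend, divisor):
        """GF(2) polynomial remainder; coefficients are listed from the
        highest degree down to the constant term."""
        rem = list(dividend)
        for i in range(len(rem) - len(divisor) + 1):
            if rem[i]:  # cancel the leading 1 by XOR-ing the divisor in
                for j, d in enumerate(divisor):
                    rem[i + j] ^= d
        return rem[-(len(divisor) - 1):]

    g = [1, 0, 1, 1]                     # g(X) = X^3 + X + 1 (assumed example)
    codeword = [1, 0, 0, 1, 1, 1, 0]     # a valid code polynomial
    received = [1, 1, 0, 1, 1, 1, 0]     # the same word with one bit flipped

    print(poly_mod(codeword, g))         # → [0, 0, 0]: zero syndrome, no error
    print(poly_mod(received, g))         # → [1, 1, 1]: non-zero syndrome, error
    ```
    
    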


    Fig (1.6) shows a syndrome calculator

    1.2.5.2 The syndrome polynomial properties

    The syndrome of the received word polynomial is also the
    syndrome of the corresponding error polynomial. This is
    explained by the following equations. The received word
    polynomial results from transmitting the cyclic code with
    polynomial c(X) over a noisy channel:

    r(X) = c(X) + e(X)

    where e(X) is the error polynomial. Using modulo-2 addition
    we may write:

    e(X) = c(X) + r(X)

    Combining this last equation with the equations

    c(X) = a(X) g(X)
    r(X) = q(X) g(X) + s(X)

    we get

    e(X) = u(X) g(X) + s(X)

    where u(X) = a(X) + q(X).


    Let s(X) be the syndrome of a received word polynomial r(X).
    Then the syndrome of X r(X), a cyclic shift of r(X), is X s(X).
    Multiplying both sides of the equation of the received word by X,
    we get:

    X r(X) = X q(X) g(X) + X s(X)

    From this we can see that X s(X) is the remainder of the
    division of X r(X) by g(X). Hence, the syndrome of X r(X) is X s(X),
    as stated. We can generalize this result by stating that if s(X)
    is the syndrome of r(X), then X^i s(X) is the syndrome of X^i r(X).

    The syndrome polynomial s(X) is identical to the error
    polynomial e(X), assuming that the errors are confined to the (n
    - k) parity-check bits of the received word polynomial r(X).

    1.2.6 Cyclic redundancy check codes

    Cyclic redundancy check (CRC) coding is an error-control
    coding technique for detecting errors that occur when a
    message is transmitted. Unlike block or convolutional
    codes, CRC codes do not have a built-in error-correction
    capability. Instead, when an error is detected in a received
    message word, the receiver requests the sender to
    retransmit the message word.

    In CRC coding, the transmitter applies a rule to each message
    word to create extra bits, called the CRC, or checksum, and
    then appends the checksum to the message word. After
    receiving a transmitted word, the receiver applies the same
    rule to the received word. If the resulting checksum is


    nonzero, an error has occurred, and the transmitter should

    resend the message word.

    CRCs and Data Integrity vs. Correctness

    CRCs are not, by themselves, suitable for protecting against
    intentional alteration of data (for example, in authentication
    applications for data security), because their convenient
    mathematical properties make it easy to compute the CRC
    adjustment required to match any given change to the data.

    It is often falsely assumed that when a message and its CRC
    are received from an open channel and the CRC matches the
    message's calculated CRC, then the message cannot have
    been altered in transit. This assumption is false because CRC
    is not encryption at all: it is meant for data
    integrity checks, but is occasionally mistaken for
    encryption. When a CRC is calculated, the message is left
    in clear text and the constant-size CRC is tacked onto the end
    (i.e., the message can be read just as easily).

    Although CRCs share a problem with message digests in that

    there cannot be a 1:1 relationship between all possible

    messages and all possible CRCs, the CRC function fares worse

    because it is not a trapdoor function. That is, it is easy to

    generate other messages that result in the same CRC,

    especially messages similar to the original. By design,

    however, a message that is too similar (differing only by a

    trivial noise pattern) will have a dramatically different CRC

    and thus be detected.

    Alternatively the message could just be intercepted and

    replaced by a phony message with a new, phony CRC


    (creating a packet that would be verified by any Data-Link

    entity). Therefore, CRCs can be relied upon to verify integrity

    but not correctness. In contrast, an effective way to protect

    messages against intentional tampering is by the use of a

    message authentication code such as HMAC.

    CRC Algorithm

    The CRC algorithm accepts a binary data vector, corresponding
    to a polynomial M, and appends a checksum of r bits,
    corresponding to a polynomial C. The concatenation of the input
    vector and the checksum then corresponds to the polynomial
    T = M*x^r + C, since multiplying by x^r corresponds to shifting
    the input vector r bits to the left. The algorithm chooses the
    checksum C so that T is divisible by a predefined polynomial P of
    degree r, called the generator polynomial. The algorithm divides
    T by P, and sets the checksum equal to the binary vector
    corresponding to the remainder. That is, if

    T = Q*P + R

    where R is a polynomial of degree less than r, the checksum is
    the binary vector corresponding to R. If necessary, the algorithm
    pre-pends zeros to the checksum so that it has length r.

    The CRC generation feature, which implements the transmission
    phase of the CRC algorithm, does the following:

    1) Left-shifts the input data vector M(x) by r bits and divides
    the corresponding polynomial by G(x).

    2) Sets the checksum equal to the binary vector of length r,
    corresponding to the remainder from step 1 (R(x)).


    R(x) = remainder of ( M(x)*x^r / G(x) )

    3) Appends the checksum to the input data vector. The result is
    the output vector:

    T(x) = M(x)*x^r + R(x)

    The CRC detection feature computes the checksum for its

    entire input vector, as described above.

    CRC detection

    After receiving a transmitted word, the receiver applies the
    same rule to the received word. If the resulting checksum is
    nonzero, an error has occurred, and the transmitter should
    resend the message word. The detection is done as follows:

    1) The receiver gets H(X) = T(X) + E(X), where E(X) is called the
    error polynomial; it has the same degree as T(X).

    2) The receiver divides H(X) by G(X) (the same generator
    polynomial used in the transmitter):

    H(X)/G(X) = T(X)/G(X) + E(X)/G(X) = Q(X) + E(X)/G(X)


    where the term Q(X) leaves no remainder, since T(X) = Q(X) G(X).
    Thus any non-zero residual results from a non-zero error E(X). The
    receiver then detects that an error has occurred and requests the
    sender to retransmit the message word.
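    The generation and detection rules can be sketched together in Python (our illustration, not the project's code); the 3-bit generator G(x) = x^3 + x + 1 is an assumed toy polynomial, not one of the standardized CRCs:

    ```python
    def crc_remainder(bits, gen):
        """GF(2) remainder of bits (highest degree first) divided by gen."""
        rem = list(bits)
        for i in range(len(rem) - len(gen) + 1):
            if rem[i]:
                for j, g in enumerate(gen):
                    rem[i + j] ^= g
        return rem[-(len(gen) - 1):]

    def crc_encode(msg, gen):
        """Transmitter: append the r-bit checksum, T(x) = M(x)*x^r + R(x)."""
        r = len(gen) - 1
        return msg + crc_remainder(msg + [0] * r, gen)

    def crc_check(word, gen):
        """Receiver: True when the remainder is zero (no error detected)."""
        return not any(crc_remainder(word, gen))

    gen = [1, 0, 1, 1]                  # assumed toy generator G(x) = x^3 + x + 1
    tx = crc_encode([1, 1, 0, 1, 0], gen)   # checksum R(x) = x is appended
    assert crc_check(tx, gen)           # clean word passes the check
    tx[2] ^= 1                          # a single-bit channel error
    assert not crc_check(tx, gen)       # non-zero remainder: error detected
    ```
    
    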


    Commonly used and standardized CRCs

    In LTE, four cyclic generator polynomials are standardized for CRC
    calculation:

    gCRC24A(D) = D^24 + D^23 + D^18 + D^17 + D^14 + D^11 + D^10 + D^7 + D^6 + D^5 + D^4 + D^3 + D + 1
    gCRC24B(D) = D^24 + D^23 + D^6 + D^5 + D + 1
    gCRC16(D) = D^16 + D^12 + D^5 + 1
    gCRC8(D) = D^8 + D^7 + D^4 + D^3 + D + 1



    CRC segmentation

    The turbo encoder accepts blocks with only certain numbers of
    bits; these allowed sizes are arranged in a table of values K. The
    target of CRC segmentation is to prepare blocks with a number of
    bits that the turbo encoder can deal with. If the number of input
    bits does not match one of the K values in the table, filler bits
    are added to the input bits to make the block suitable for the
    turbo encoder. If the number of input bits is greater than 6144,
    segmentation of the input bits is performed and an additional CRC
    sequence is attached to each code block. If there are filler bits,
    they are added to the beginning of the first block.

    Note: the number of bits in the K table is counted after the CRC
    attachment.

    1.3 Convolutional codes

    In block coding, the encoder accepts a k-bit message block and
    generates an n-bit code word. Thus, code words are produced


    on a block-by-block basis. Clearly, provision must be made in the

    encoder to buffer an entire message block before generating the

    associated code words. There are applications, however, where

    the message bits come in serially rather than in large blocks, in

    which case the use of a buffer may be undesirable. In such

    situations, the use of convolutional coding may be the preferred

    method. A convolutional coder generates redundant bits by

    using modulo-2 convolutions, hence the name.

    1.3.1 Convolutional Encoder

    The encoder of a binary convolutional code with rate 1/n,
    measured in bits per symbol, may be viewed as a finite-state
    machine that consists of an M-stage shift register with prescribed
    connections to n modulo-2 adders, and a multiplexer that
    serializes the outputs of the adders. An L-bit message sequence
    produces a coded output sequence of length n(L + M) bits. The
    code rate is therefore given by

    r = L / (n(L + M)) bits/symbol

    Typically, we have L >> M. Hence, the code rate simplifies to

    r ≈ 1/n bits/symbol

    The constraint length of a convolutional code, expressed in

    terms of message bits, is defined as the number of shifts over

    which a single message bit can influence the encoder output. In

    an encoder with an M-stage shift register, the memory of the

    encoder equals M message bits, and K = M + 1 shifts are

    required for a message bit to enter the shift register and finally

    come out. Hence, the constraint length of the encoder is K.


    Figure 1-7(a) shows a convolutional encoder with n = 2 and K =
    3. Hence, the code rate of this encoder is 1/2.

    We may generate a binary convolutional code with rate k/n by
    using k separate shift registers with prescribed connections to n
    modulo-2 adders, an input multiplexer, and an output
    multiplexer.

    Fig (1-7) constraint length-3, rate-1/2 convolutional encoder


    Fig 1-8 constraint length-2, rate-2/3 convolutional encoder

    An example of such an encoder is shown in Figure 1-8,
    where k = 2, n = 3, and the two shift registers have K = 2 each.
    The code rate is 2/3. In this second example, the encoder
    processes the incoming message sequence two bits at a time.

    The convolutional codes generated by the encoder of Figure
    1-7 are nonsystematic codes. Unlike block coding, the use of
    nonsystematic codes is ordinarily preferred over systematic
    codes in convolutional coding.

    Each path connecting the output to the input of a convolutional

    encoder may be characterized in terms of its impulse response,

    defined as the response of that path to a symbol 1 applied to its

    input, with each flip-flop in the encoder set initially in the zero

    state.

    Equivalently, we may characterize each path in terms of a

    generator polynomial, defined as the unit-delay transform of the

    impulse response.


    To be specific, let the generator sequence (g0^(i), g1^(i),
    g2^(i), ..., gM^(i)) denote the impulse response of the ith
    path, where the coefficients g0^(i), g1^(i), g2^(i), ..., gM^(i)
    equal 0 or 1. Correspondingly, the generator polynomial of the ith
    path is defined by

    g^(i)(D) = g0^(i) + g1^(i) D + g2^(i) D^2 + ... + gM^(i) D^M

    where D denotes the unit-delay variable.

    The complete convolutional encoder is described by the set of
    generator polynomials {g^(1)(D), g^(2)(D), ..., g^(n)(D)}.
    Traditionally, different variables are used for the description of
    convolutional and cyclic codes, with D commonly used for
    convolutional codes and X for cyclic codes.

    Encoding Example

    Consider the convolutional encoder of Figure 1-7(a), which has
    two paths numbered 1 and 2 for convenience of reference. The
    impulse response of path 1 (i.e., the upper path) is (1,1,1).
    Hence, the corresponding generator polynomial is given by

    g^(1)(D) = 1 + D + D^2

    The impulse response of path 2 (i.e., the lower path) is (1,0,1).
    Hence, the corresponding generator polynomial is given by

    g^(2)(D) = 1 + D^2

    For the message sequence (10011), say, we have the polynomial
    representation

    m(D) = 1 + D^3 + D^4


    As with the Fourier transform, convolution in the time domain
    is transformed into multiplication in the D-domain. Hence, the
    output polynomial of path 1 is given by

    C^(1)(D) = g^(1)(D) m(D)
             = (1 + D + D^2)(1 + D^3 + D^4)
             = 1 + D + D^2 + D^3 + D^6

    From this we immediately deduce that the output sequence of
    path 1 is (1111001). Similarly, the output polynomial of path 2 is
    given by

    C^(2)(D) = g^(2)(D) m(D)
             = (1 + D^2)(1 + D^3 + D^4)
             = 1 + D^2 + D^3 + D^4 + D^5 + D^6

    The output sequence of path 2 is therefore (1011111). Finally,
    multiplexing the two output sequences of paths 1 and 2, we get
    the encoded sequence

    C = (11, 10, 11, 11, 01, 01, 11)

    Note that the message sequence of length L = 5 bits produces an
    encoded sequence of length n(L + K - 1) = 14 bits. Note also that
    for the shift register to be restored to its zero initial state, a
    terminating sequence of K - 1 = 2 zeros is appended to the last
    input bit of the message sequence. The terminating sequence of
    K - 1 zeros is called the tail of the message.
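    The hand computation above can be checked mechanically. The following Python sketch (our own illustration, independent of the project's MATLAB functions) implements a generic zero-terminated rate-1/n convolutional encoder and reproduces the encoded sequence for the message (10011):

    ```python
    def conv_encode(msg, gens, K):
        """Rate-1/n convolutional encoder, zero-terminated with a tail of
        K-1 zeros; gens holds one tap vector per modulo-2 adder."""
        bits = list(msg) + [0] * (K - 1)      # append the tail
        state = [0] * (K - 1)                 # shift-register contents
        out = []
        for b in bits:
            window = [b] + state              # current input plus memory
            for g in gens:
                out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
            state = window[:-1]               # shift the register
        return out

    # The constraint length-3, rate-1/2 encoder of Fig 1-7:
    # g(1)(D) = 1 + D + D^2 and g(2)(D) = 1 + D^2
    g1, g2 = [1, 1, 1], [1, 0, 1]
    print(conv_encode([1, 0, 0, 1, 1], [g1, g2], 3))
    # → [1,1, 1,0, 1,1, 1,1, 0,1, 0,1, 1,1], i.e. C = (11,10,11,11,01,01,11)
    ```
    
    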


    CODE TREE, TRELLIS, AND STATE DIAGRAM

    Traditionally, the structural properties of a convolutional
    encoder are portrayed in graphical form by using any one of
    three equivalent diagrams: code tree, trellis, and state diagram.
    We will use the convolutional encoder of Figure 1-7 as a
    running example to illustrate the insights that each one of these
    three diagrams can provide.

    We begin the discussion with the code tree of Figure 1-9. Each
    branch of the tree represents an input symbol, with the
    corresponding pair of output binary symbols indicated on the
    branch. The convention used to distinguish the input binary
    symbols 0 and 1 is as follows. An input 0 specifies the upper
    branch of a bifurcation, whereas an input 1 specifies the lower
    branch. A specific path in the tree is traced from left to right in
    accordance with the input (message) sequence. The
    corresponding coded symbols on the branches of that path
    constitute the encoded sequence. Consider, for example,
    the message sequence (10011) applied to the input of the
    encoder of Figure 1-7. Following the procedure just described,
    we find the corresponding encoded sequence is (11, 10, 11, 11,
    01), which agrees with the first 5 pairs of bits in the encoded
    sequence {ci} derived in our above example.

    From the code tree diagram we observe that the tree becomes
    repetitive after the first three branches. Indeed, beyond the
    third branch, the two nodes labeled a are identical, and so are all
    the other node pairs that are identically labeled.


    We may establish this repetitive property of the tree by
    examining the associated encoder of Figure 1-7. The encoder
    has memory M = K - 1 = 2 message bits. Hence, when the third
    message bit enters the encoder, the first message bit is shifted
    out of the register.

    Consequently, after the third branch, the message sequences
    (100 m3 m4 ...) and (000 m3 m4 ...) generate the same code
    symbols, and the pair of nodes labeled a may be joined
    together. The same reasoning applies to other nodes.
    Accordingly, we may collapse the code tree of Figure 1-9 into
    the new form shown in the trellis figure, which is called
    a trellis. It is so called since a trellis is a treelike structure with
    remerging branches.

    The convention used in the trellis figure to distinguish between
    input symbols 0 and 1 is as follows. A code branch produced by
    an input 0 is drawn as a solid line, whereas a code branch
    produced by an input 1 is drawn as a dashed line.

    As before, each input (message) sequence corresponds to a
    specific path through the trellis. For example, we readily see
    from the trellis figure that the message sequence (10011)
    produces the encoded output sequence (11, 10, 11, 11, 01),
    which agrees with our previous results.

    A trellis is more instructive than a tree in that it brings out
    explicitly the fact that the associated convolutional encoder is a
    finite-state machine. We define the state of a convolutional
    encoder of rate 1/n as the (K - 1) message bits stored in the
    encoder's shift register. At time j, the portion of the message
    sequence containing the most recent K bits is written as (mj-K+1,
    ..., mj-1, mj), where mj is the current bit. The (K - 1)-bit state of
    the encoder at time j is therefore written simply as (mj-1, ...,
    mj-K+2, mj-K+1). In the case of the simple convolutional encoder
    of Figure 1-7 we have (K - 1) = 2. Hence, the state of this encoder
    can assume any one of four possible values, as described in
    Table 3-1. The trellis contains (L + K) levels, where L is the length
    of the incoming message sequence, and K is the constraint
    length of the code. The levels of the trellis are labeled as j = 0, 1,
    ..., L + K - 1 in the trellis figure for K = 3. Level j is also referred
    to as depth j; both terms are used interchangeably. The first (K - 1)
    levels correspond to the encoder's departure from the initial
    state a, and the last (K - 1) levels correspond to the encoder's
    return to state a. Clearly, not all the states can be reached in
    these two portions of the trellis.

    However, in the central portion of the trellis, for which the level
    j lies in the range K - 1 <= j <= L, all the states of the encoder are
    reachable. Note also that the central portion of the trellis
    exhibits a fixed periodic structure. Consider next the portion of
    the trellis corresponding to times j and j + 1. We assume that
    j >= 2 for the example at hand, so that it is possible for the
    current state of the encoder to be a, b, c, or d. For convenience
    of presentation, we have reproduced this portion of the trellis in
    Figure 1-10. The left nodes represent the four possible current
    states of the encoder, whereas the right nodes represent the
    next states. Clearly, we may coalesce the left and
    right nodes. By so doing, we obtain the state diagram of the
    encoder, shown in Figure 1-11. The nodes of the figure


    represent the four possible states of the encoder, with each
    node having two incoming branches and two outgoing branches.
    A transition from one state to another in response to input 0 is
    represented by a solid branch, whereas a transition in response to
    input 1 is represented by a dashed branch. The binary label on
    each branch represents the encoder's output as it moves from
    one state to another. Suppose, for example, the current state of
    the encoder is (01), which is represented by node c. The
    application of input 1 to the encoder of Figure 1-7 results in the
    state (10) and the encoded output (00). Accordingly, with the
    help of this state diagram, we may readily determine the output
    of the encoder of Figure 1-7 for any incoming message
    sequence. We simply start at state a, the all-zero initial state, and
    walk through the state diagram in accordance with the message
    sequence.

    We follow a solid branch if the input is a 0 and a dashed branch
    if it is a 1. As each branch is traversed, we output the
    corresponding binary label on the branch. Consider, for
    example, the message sequence (10011).

    For this input we follow the path abcabd, and therefore output
    the sequence (11, 10, 11, 11, 01), which agrees exactly with our
    previous result. Thus, the input-output relation of a
    convolutional encoder is also completely described by its state
    diagram.
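    The walk through the state diagram can be captured in a small transition table. The following Python sketch is an illustration of ours; the transitions are derived from the Fig 1-7 encoder and the state labels a-d used in the text, and it reproduces the path abcabd with the output (11, 10, 11, 11, 01):

    ```python
    # State diagram of the Fig 1-7 encoder (a=00, b=10, c=01, d=11):
    # (state, input bit) -> (next state, output pair)
    trellis = {
        ('a', 0): ('a', '00'), ('a', 1): ('b', '11'),
        ('b', 0): ('c', '10'), ('b', 1): ('d', '01'),
        ('c', 0): ('a', '11'), ('c', 1): ('b', '00'),
        ('d', 0): ('c', '01'), ('d', 1): ('d', '10'),
    }

    def walk(message, start='a'):
        """Walk the state diagram, collecting states visited and outputs."""
        state, out, path = start, [], [start]
        for bit in message:
            state, symbols = trellis[(state, bit)]
            out.append(symbols)
            path.append(state)
        return ''.join(path), out

    print(walk([1, 0, 0, 1, 1]))
    # → ('abcabd', ['11', '10', '11', '11', '01'])
    ```
    
    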


    Fig 1-9 code tree for convolutional encoder


    Fig 1-10 a portion of a central part of the trellis for encoder

    Fig 1-11 state diagram of the convolutional encoder


    1.3.2 Decoding of convolutional codes (Viterbi Algorithm)

    The equivalence between maximum likelihood decoding and

    minimum distance decoding for a binary symmetric channel

    implies that we may decode a convolutional code by choosing a

    path in the code tree whose coded sequence differs from the

    received sequence in the fewest number of places. Since a code

    tree is equivalent to a trellis, we may equally limit our choice to

    the possible paths in the trellis representation of the code.

    The reason for preferring the trellis over the tree is that the
    number of nodes at any level of the trellis does not continue to
    grow as the number of incoming message bits increases; rather,
    it remains constant at 2^(K-1), where K is the constraint length of
    the code.

    Consider, for example, the trellis diagram above for a
    convolutional code with rate 1/2 and constraint length K = 3. We
    observe that at level j = 3, there are two paths

    entering any of the four nodes in the trellis. Moreover, these

    two paths will be identical onward from that point. Clearly, a

    minimum distance decoder may make a decision at that point as

    to which of those two paths to retain, without any loss of

    performance. A similar decision may be made at level j = 4, and

    so on.

    This sequence of decisions is exactly what the Viterbi algorithm

    does as it walks through the trellis. The algorithm operates by

    computing a metric or discrepancy for every possible path in the


    trellis. The metric for a particular path is defined as the
    Hamming distance between the coded sequence represented by
    that path and the received sequence. Thus, for each node (state)
    in the trellis, the algorithm compares the
    two paths entering the node. The path with the lower metric is
    retained, and the other path is discarded. This computation is
    repeated for every level j of the trellis in the range M <= j <= L,
    where M = K - 1 is the encoder's memory and L is the length of
    the incoming message sequence. The paths that are retained by
    the algorithm are called survivor or active paths. For a
    convolutional code of constraint length K = 3, for example, no
    more than 2^(K-1) = 4 survivor paths and their metrics will ever be
    stored. The list of 2^(K-1) paths is always guaranteed to contain the
    maximum-likelihood choice.

    A difficulty that may arise in the application of the Viterbi
    algorithm is the possibility that when the paths entering a state
    are compared, their metrics are found to be identical. In such a
    situation, we make the choice by flipping a fair coin (i.e., we
    simply make a guess).

    In summary, the Viterbi algorithm is a maximum-likelihood

    decoder, which is optimum for an AWGN channel. It proceeds in

    step-by-step fashion as follows:

    Initialization

    Label the left-most state of the trellis (i.e., the all-zero state at
    level 0) as 0, since there is no discrepancy at this point in the
    computation.


    Computation step j + 1

    Let j = 0, 1, 2, ... and suppose that at the previous step j we have
    done two things:

    1) All survivor paths are identified.

    2) The survivor path and its metric for each state of the trellis
    are stored.

    Then, at level (clock time) j + 1, compute the metric for all paths
    entering each state of the trellis by adding the metric of the
    incoming branches to the metrics of the connecting survivor
    paths from level j. Hence, for each state, identify the path with
    the lowest metric as the survivor of step j + 1, thereby updating
    the computation.

    Final step

    Continue the computation until the algorithm completes its
    forward search through the trellis and therefore reaches the
    termination node (i.e., the all-zero state), at which time it makes a
    decision on the maximum-likelihood path. Then, like a block
    decoder, the sequence of symbols associated with that path is
    released to the destination as the decoded version of the
    received sequence.

    Example (Correct Decoding of Received All-Zero Sequence)

    Suppose that the encoder of Figure 1-7 generates an all-zero

    sequence that is sent over a binary symmetric channel, and that

    the received sequence is (0100010000 . . .). There are two errors

    in the received sequence due to noise in the channel: one in the


    second bit and the other in the sixth bit. We wish to show that

    this double-error pattern is correctable through the application

    of the Viterbi decoding algorithm.

    In Figure 1-12, we show the results of applying the algorithm for
    levels j = 1, 2, 3, 4, 5. We see that for j = 2 there are (for the first
    time) four paths, one for each of the four states of the encoder.
    The figure also includes the metric of each path for each level in
    the computation.

    In the left side of Figure 1-12, for j = 3 we show the paths

    entering each of the states, together with their individual

    metrics. In the right side of the figure, we show the four

    survivors that result from application of the algorithm for level j

    = 3,4,5. Examining the four survivors in Figure 1-12 for j = 5, we

    see that the all-zero path has the smallest metric and will remain

    the path of smallest metric from this point forward. This clearly

    shows that the all-zero sequence is the maximum likelihood

    choice of the Viterbi decoding algorithm, which agrees exactly

    with the transmitted sequence.
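    The survivor bookkeeping of this example can be reproduced with a compact dictionary-based search. The following Python sketch (a simplified hard-decision illustration of ours, not the project's MATLAB decoder) decodes the received sequence (01, 00, 01, 00, 00) for the Fig 1-7 code and recovers the all-zero message:

    ```python
    def viterbi_decode(rx, gens, K):
        """Hard-decision Viterbi decoding of a rate-1/n convolutional code.
        Keeps one survivor (metric, message) per state; ties are broken by
        the order in which candidates are examined."""
        n = len(gens)
        survivors = {(0,) * (K - 1): (0, [])}   # start in the all-zero state
        for t in range(0, len(rx), n):
            branch = rx[t:t + n]                # received n-bit branch word
            nxt = {}
            for state, (m, msg) in survivors.items():
                for b in (0, 1):
                    window = (b,) + state
                    out = [sum(g * w for g, w in zip(gen, window)) % 2
                           for gen in gens]
                    d = m + sum(o != r for o, r in zip(out, branch))  # Hamming metric
                    ns = window[:-1]            # shift-register update
                    if ns not in nxt or d < nxt[ns][0]:
                        nxt[ns] = (d, msg + [b])
            survivors = nxt
        return min(survivors.values())[1]       # lowest-metric survivor

    g1, g2 = (1, 1, 1), (1, 0, 1)               # the Fig 1-7 generators
    rx = [0, 1, 0, 0, 0, 1, 0, 0, 0, 0]         # all-zero word, errors in the 2nd and 6th bits
    print(viterbi_decode(rx, (g1, g2), 3))      # → [0, 0, 0, 0, 0]
    ```
    
    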


    Fig (1-12) illustrating step in Viterbi algorithm for the example


    1.3.2.1 FREE DISTANCE OF A CONVOLUTIONAL CODE

    The performance of a convolutional code depends not only on
    the decoding algorithm used but also on the distance properties
    of the code. In this context, the most important single measure
    of a convolutional code's ability to combat channel noise is the
    free distance, denoted by d-free. The free distance of a
    convolutional code is defined as the minimum Hamming
    distance between any two code words in the code. A
    convolutional code with free distance d-free can correct t errors
    if and only if d-free is greater than 2t. The free distance can be
    obtained from the state diagram of the convolutional encoder.
    Consider, for example, Figure 1-11, which shows the state
    diagram of the encoder of Figure 1-7. Any nonzero code
    sequence corresponds to a complete path beginning and ending
    at the 00 state (i.e., node a).
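    Because the code is linear, d-free equals the minimum Hamming weight over the nonzero code sequences, so it can also be found by brute force over short messages. The following Python sketch is our own illustration (the search bound max_len = 8 is an assumption, ample for this K = 3 code):

    ```python
    from itertools import product

    def conv_encode(msg, gens, K):
        """Zero-terminated rate-1/n convolutional encoder (as in Fig 1-7)."""
        bits = list(msg) + [0] * (K - 1)
        state = [0] * (K - 1)
        out = []
        for b in bits:
            window = [b] + state
            for g in gens:
                out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
            state = window[:-1]
        return out

    def free_distance(gens, K, max_len=8):
        """d-free = minimum Hamming weight over nonzero terminated code
        sequences (the code is linear, so weights equal distances);
        max_len = 8 is an assumed search bound."""
        best = None
        for L in range(1, max_len + 1):
            for msg in product((0, 1), repeat=L):
                if any(msg):
                    w = sum(conv_encode(msg, gens, K))
                    best = w if best is None else min(best, w)
        return best

    print(free_distance([(1, 1, 1), (1, 0, 1)], 3))   # → 5
    ```

    Since d-free = 5 for this code, it can correct t = 2 errors, consistent with the double-error example decoded above.
    
    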


    1.3.3 MATLAB Codes and Results

    First, the function of the Convolutional Encoder:

    function coded_bits = conv_encoder_lte(input_bits)
    % Rate-1/3, constraint length-7 convolutional encoder with the LTE
    % generator polynomials 133, 171, 165 (octal).
    conection_for_lte_standard_octal = ['133'; '171'; '165'];
    conection_for_lte_standard_decimal = base2dec(conection_for_lte_standard_octal, 8);
    conection_for_lte_standard_bin = double(dec2bin(conection_for_lte_standard_decimal)) - 48;
    state = zeros(64,1);
    output_one_prim = zeros(128,7);
    output_two_prim = zeros(128,7);
    output_three_prim = zeros(128,7);
    state_bin = zeros(64,6);
    state_bin_with_ip_zero = zeros(64,7);
    state_bin_with_ip_one = zeros(64,7);
    output_one = zeros(128,1);
    output_two = zeros(128,1);
    output_three = zeros(128,1);
    % Enumerate the 64 encoder states, each extended with input bit 0 and 1
    for state_index = 1:64
        state(state_index) = state_index - 1;
        state_bin(state_index,:) = dec2bin(state(state_index), 6) - 48;
        state_bin_with_ip_zero(state_index,:) = [0 state_bin(state_index,:)];
        state_bin_with_ip_one(state_index,:) = [1 state_bin(state_index,:)];
        state_bin_final([2*state_index-1 2*state_index],:) = ...
            [state_bin_with_ip_zero(state_index,:); state_bin_with_ip_one(state_index,:)];
    end
    % Compute the three output bits for each of the 128 (input, state) pairs
    for state_index = 1:128
        output_one_prim(state_index,:) = bitand(conection_for_lte_standard_bin(1,:), state_bin_final(state_index,:));
        output_two_prim(state_index,:) = bitand(conection_for_lte_standard_bin(2,:), state_bin_final(state_index,:));
        output_three_prim(state_index,:) = bitand(conection_for_lte_standard_bin(3,:), state_bin_final(state_index,:));
        for output_index = 1:7
            output_one(state_index) = xor(output_one(state_index), output_one_prim(state_index,output_index));
            output_two(state_index) = xor(output_two(state_index), output_two_prim(state_index,output_index));
            output_three(state_index) = xor(output_three(state_index), output_three_prim(state_index,output_index));
        end
    end
    output = [output_one output_two output_three];
    % Encode: slide a 7-bit window over the (zero-initialized) input stream
    bits_of_test = [zeros(1,6) input_bits];
    place_prim = zeros(1,128);
    output_bits_of_encoder = zeros(1, 3*length(input_bits));
    for bits_index = 1:length(input_bits)
        for state_index = 1:128
            place_prim(state_index) = length(find(bitxor(state_bin_final(state_index,:), bits_of_test(bits_index+6:-1:bits_index))));
        end
        place = find(place_prim == 0);   % the table row matching the current window
        output_bits_of_encoder(3*bits_index-2:3*bits_index) = output(place,:);
    end
    coded_bits = output_bits_of_encoder;


    Second, the function of the Viterbi decoder:

    function decoded_bits = viterbi_dec_lte(coded_bits)
    No_of_bits = length(coded_bits)/3;
    conection_for_lte_standard_octal = ['133'; '171'; '165'];
    conection_for_lte_standard_decimal = base2dec(conection_for_lte_standard_octal,8);
    conection_for_lte_standard_bin = double(dec2bin(conection_for_lte_standard_decimal)) - 48;
    state = zeros(64,1);
    output_one_prim = zeros(128,7);
    output_two_prim = zeros(128,7);
    output_three_prim = zeros(128,7);
    state_bin = zeros(64,6);
    state_bin_with_ip_zero = zeros(64,7);
    state_bin_with_ip_one = zeros(64,7);
    output_one = zeros(128,1);
    output_two = zeros(128,1);
    output_three = zeros(128,1);
    for state_index = 1:64
        state(state_index) = state_index-1;
        state_bin(state_index,:) = dec2bin(state(state_index),6) - 48;
        state_bin_with_ip_zero(state_index,:) = [0 state_bin(state_index,:)];
        state_bin_with_ip_one(state_index,:) = [1 state_bin(state_index,:)];
        state_bin_final([2*state_index-1 2*state_index],:) = [state_bin_with_ip_zero(state_index,:); state_bin_with_ip_one(state_index,:)];
    end
    for state_index = 1:128
        output_one_prim(state_index,:) = bitand(conection_for_lte_standard_bin(1,:),state_bin_final(state_index,:));
        output_two_prim(state_index,:) = bitand(conection_for_lte_standard_bin(2,:),state_bin_final(state_index,:));
        output_three_prim(state_index,:) = bitand(conection_for_lte_standard_bin(3,:),state_bin_final(state_index,:));
        for output_index = 1:7
            output_one(state_index) = xor(output_one(state_index),output_one_prim(state_index,output_index));
            output_two(state_index) = xor(output_two(state_index),output_two_prim(state_index,output_index));
            output_three(state_index) = xor(output_three(state_index),output_three_prim(state_index,output_index));
        end
    end
    output = [output_one output_two output_three];
    present_state_input_next_state = [bin2dec(num2str(state_bin_final(:,2:7))) state_bin_final(:,1) bin2dec(num2str(state_bin_final(:,1:6)))];
    %%
    metric = zeros(128,No_of_bits);
    state_metric = zeros(128,3*No_of_bits);
    metric_final = zeros(64,No_of_bits);
    state_metric_final = zeros(64,3*No_of_bits);
    tot_places_of_state = zeros(1,128);
    decoded_bits = zeros(1,No_of_bits);
    for bits_index = 1:No_of_bits
        if bits_index==1
            for state_index = 1:2
                metric(state_index,bits_index) = length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(state_index,:))));
                state_metric(state_index,3*bits_index-2:3*bits_index) = present_state_input_next_state(state_index,:);
            end
        elseif bits_index==2
            for state_index = 1:2
                place_of_state = find(present_state_input_next_state(:,1)==state_metric(state_index,3));
                tot_places_of_state(2*state_index-1:2*state_index) = place_of_state;
                metric(tot_places_of_state(2*state_index-1),bits_index) = metric(2*state_index-1,bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(1),:))));
                metric(tot_places_of_state(2*state_index),bits_index) = metric(2*state_index-1,bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(2),:))));
                state_metric(tot_places_of_state(2*state_index-1:2*state_index),3*bits_index-2:3*bits_index) = present_state_input_next_state(place_of_state,:);
            end
        elseif bits_index==3
            for state_index = 1:4
                place_of_state = find(present_state_input_next_state(:,1)==state_metric(tot_places_of_state(state_index),6));
                tot_places_of_state(2*state_index-1:2*state_index) = place_of_state;


                metric(tot_places_of_state(2*state_index-1),bits_index) = metric(tot_places_of_state(state_index),bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(1),:))));
                metric(tot_places_of_state(2*state_index),bits_index) = metric(tot_places_of_state(state_index),bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(2),:))));
                state_metric(tot_places_of_state(2*state_index-1:2*state_index),3*bits_index-2:3*bits_index) = present_state_input_next_state(place_of_state,:);
            end
        elseif bits_index==4
            for state_index = 1:8
                place_of_state = find(present_state_input_next_state(:,1)==state_metric(tot_places_of_state(state_index),9));
                tot_places_of_state(2*state_index-1:2*state_index) = place_of_state;
                metric(tot_places_of_state(2*state_index-1),bits_index) = metric(tot_places_of_state(state_index),bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(1),:))));
                metric(tot_places_of_state(2*state_index),bits_index) = metric(tot_places_of_state(state_index),bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(2),:))));
                state_metric(tot_places_of_state(2*state_index-1:2*state_index),3*bits_index-2:3*bits_index) = present_state_input_next_state(place_of_state,:);
            end
        elseif bits_index==5
            for state_index = 1:16
                place_of_state = find(present_state_input_next_state(:,1)==state_metric(tot_places_of_state(state_index),12));
                tot_places_of_state(2*state_index-1:2*state_index) = place_of_state;
                metric(tot_places_of_state(2*state_index-1),bits_index) = metric(tot_places_of_state(state_index),bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(1),:))));
                metric(tot_places_of_state(2*state_index),bits_index) = metric(tot_places_of_state(state_index),bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(2),:))));
                state_metric(tot_places_of_state(2*state_index-1:2*state_index),3*bits_index-2:3*bits_index) = present_state_input_next_state(place_of_state,:);
            end
        elseif bits_index==6
            for state_index = 1:32
                place_of_state = find(present_state_input_next_state(:,1)==state_metric(tot_places_of_state(state_index),15));
                tot_places_of_state(2*state_index-1:2*state_index) = place_of_state;
                metric(tot_places_of_state(2*state_index-1),bits_index) = metric(tot_places_of_state(state_index),bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(1),:))));
                metric(tot_places_of_state(2*state_index),bits_index) = metric(tot_places_of_state(state_index),bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(2),:))));
                state_metric(tot_places_of_state(2*state_index-1:2*state_index),3*bits_index-2:3*bits_index) = present_state_input_next_state(place_of_state,:);
            end
        elseif bits_index==7
            for state_index = 1:64
                place_of_state = find(present_state_input_next_state(:,1)==state_metric(tot_places_of_state(state_index),3*(bits_index-1)));
                tot_places_of_state(2*state_index-1:2*state_index) = place_of_state;
                metric(tot_places_of_state(2*state_index-1),bits_index) = metric(tot_places_of_state(state_index),bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(1),:))));
                metric(tot_places_of_state(2*state_index),bits_index) = metric(tot_places_of_state(state_index),bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(2),:))));
                state_metric(tot_places_of_state(2*state_index-1:2*state_index),3*bits_index-2:3*bits_index) = present_state_input_next_state(place_of_state,:);
            end
            for check_odd_index = 1:4:125
                if metric(check_odd_index,bits_index)>=metric(check_odd_index+2,bits_index)
                    metric(check_odd_index,bits_index) = 0;
                    state_metric(check_odd_index,3*bits_index) = 0;
                    metric_final((check_odd_index+1)/2,bits_index) = metric(check_odd_index+2,bits_index);
                    state_metric_final((check_odd_index+1)/2,3*bits_index-2:3*bits_index) = state_metric(check_odd_index+2,3*bits_index-2:3*bits_index);
                else
                    metric(check_odd_index+2,bits_index) = 0;
                    state_metric(check_odd_index+2,3*bits_index) = 0;


                    metric_final((check_odd_index+1)/2,bits_index) = metric(check_odd_index,bits_index);
                    state_metric_final((check_odd_index+1)/2,3*bits_index-2:3*bits_index) = state_metric(check_odd_index,3*bits_index-2:3*bits_index);
                end
                if metric(check_odd_index+1,bits_index)>=metric(check_odd_index+3,bits_index)
                    metric(check_odd_index+1,bits_index) = 0;
                    state_metric(check_odd_index+1,3*bits_index) = 0;
                    metric_final((check_odd_index+3)/2,bits_index) = metric(check_odd_index+3,bits_index);
                    state_metric_final((check_odd_index+3)/2,3*bits_index-2:3*bits_index) = state_metric(check_odd_index+3,3*bits_index-2:3*bits_index);
                else
                    metric(check_odd_index+3,bits_index) = 0;
                    state_metric(check_odd_index+3,3*bits_index) = 0;
                    metric_final((check_odd_index+3)/2,bits_index) = metric(check_odd_index+1,bits_index);
                    state_metric_final((check_odd_index+3)/2,3*bits_index-2:3*bits_index) = state_metric(check_odd_index+1,3*bits_index-2:3*bits_index);
                end
            end
        else
            for state_index = 1:64
                place_of_state = find(present_state_input_next_state(:,1)==state_metric_final(state_index,3*(bits_index-1)));
                tot_places_of_state(2*state_index-1:2*state_index) = place_of_state;
                metric(tot_places_of_state(2*state_index-1),bits_index) = metric_final(state_index,bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(1),:))));
                metric(tot_places_of_state(2*state_index),bits_index) = metric_final(state_index,bits_index-1)+length(find(bitxor(coded_bits(3*bits_index-2:3*bits_index),output(place_of_state(2),:))));
                state_metric(tot_places_of_state(2*state_index-1:2*state_index),3*bits_index-2:3*bits_index) = present_state_input_next_state(place_of_state,:);
            end
            for check_odd_index = 1:4:125
                if metric(check_odd_index,bits_index)>=metric(check_odd_index+2,bits_index)
                    metric(check_odd_index,bits_index) = 0;
                    state_metric(check_odd_index,3*bits_index) = 0;
                    metric_final((check_odd_index+1)/2,bits_index) = metric(check_odd_index+2,bits_index);
                    state_metric_final((check_odd_index+1)/2,3*bits_index-2:3*bits_index) = state_metric(check_odd_index+2,3*bits_index-2:3*bits_index);
                else
                    metric(check_odd_index+2,bits_index) = 0;
                    state_metric(check_odd_index+2,3*bits_index) = 0;
                    metric_final((check_odd_index+1)/2,bits_index) = metric(check_odd_index,bits_index);
                    state_metric_final((check_odd_index+1)/2,3*bits_index-2:3*bits_index) = state_metric(check_odd_index,3*bits_index-2:3*bits_index);
                end
                if metric(check_odd_index+1,bits_index)>=metric(check_odd_index+3,bits_index)
                    metric(check_odd_index+1,bits_index) = 0;
                    state_metric(check_odd_index+1,3*bits_index) = 0;
                    metric_final((check_odd_index+3)/2,bits_index) = metric(check_odd_index+3,bits_index);
                    state_metric_final((check_odd_index+3)/2,3*bits_index-2:3*bits_index) = state_metric(check_odd_index+3,3*bits_index-2:3*bits_index);
                else
                    metric(check_odd_index+3,bits_index) = 0;
                    state_metric(check_odd_index+3,3*bits_index) = 0;
                    metric_final((check_odd_index+3)/2,bits_index) = metric(check_odd_index+1,bits_index);
                    state_metric_final((check_odd_index+3)/2,3*bits_index-2:3*bits_index) = state_metric(check_odd_index+1,3*bits_index-2:3*bits_index);
                end
            end
        end
    end
    for dec_index = No_of_bits:-1:7
        min_of_metric = min(metric_final(:,dec_index));
        place_of_input = find(metric_final(:,dec_index)==min_of_metric);
        N_of_equal_to_min = length(place_of_input);
        decoded_bits(dec_index) = state_metric_final(place_of_input(N_of_equal_to_min),3*dec_index-1);
    end
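    The MATLAB decoder above expands the trellis explicitly, stage by stage, for the first seven bits and then iterates. The same hard-decision Viterbi idea can be written compactly in Python; this is an independent sketch (our own function names and conventions: minimum Hamming-distance path metric, one survivor path per state), not a translation of the project code:

```python
# Illustrative hard-decision Viterbi decoder for the same rate-1/3,
# constraint-length-7 code (generators 133, 171, 165 octal).

GENS = [int(g, 8) for g in ("133", "171", "165")]

def encode(bits):
    """Reference encoder (zero-initialised shift register)."""
    state, out = 0, []
    for b in bits:
        reg = (b << 6) | state
        out += [bin(reg & g).count("1") & 1 for g in GENS]
        state = reg >> 1
    return out

def viterbi_decode(coded):
    """Minimum Hamming-distance path through the 64-state trellis."""
    n = len(coded) // 3
    INF = float("inf")
    metric = [0.0] + [INF] * 63            # start in the all-zero state
    paths = [[] for _ in range(64)]        # survivor input sequence per state
    for t in range(n):
        rx = coded[3 * t:3 * t + 3]
        new_metric = [INF] * 64
        new_paths = [None] * 64
        for state in range(64):
            if metric[state] == INF:       # state not yet reachable
                continue
            for b in (0, 1):
                reg = (b << 6) | state
                outs = [bin(reg & g).count("1") & 1 for g in GENS]
                d = sum(o != r for o, r in zip(outs, rx))  # branch metric
                nxt = reg >> 1
                if metric[state] + d < new_metric[nxt]:
                    new_metric[nxt] = metric[state] + d
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(64), key=lambda s: metric[s])
    return paths[best]
```

    A no-noise round trip recovers the message exactly, and because every generator taps the current input bit, any competing path pays at least three bit errors at its first divergence, so a single flipped code bit is also corrected.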


    Finally, the MATLAB script that tests the LTE system with and without convolutional coding:

    close all
    clear
    clc
    SNR_dB = 0:1:20;
    SNR_dB_uncoded = SNR_dB - 10*log10(1/3);
    no_of_frames = 100;
    no_of_bits = 1024;
    BER_qpsk_epa_with = zeros(1,length(SNR_dB));
    BER_qpsk_epa_without = zeros(1,length(SNR_dB));
    for snr_index = 1:length(SNR_dB)
        fprintf('****\n');
        fprintf('SNR=%i dB...\n',SNR_dB(snr_index));
        fprintf('****\n');
        errors_qpsk_epa_without = 0;
        errors_qpsk_epa_with = 0;
        snr = 10^(SNR_dB(snr_index)/10);
        snruncoded = 10^(SNR_dB_uncoded(snr_index)/10);
        for frame_index = 1:no_of_frames
            fprintf('Processing iteration number %i out of %i iterations...\n',frame_index,no_of_frames);
            gen_bits = randint(1,no_of_bits);
            scrambled_bits = lte_scrambler(gen_bits);
            lte_constlength = 7;
            lte_traceback = 5*lte_constlength;
            lte_polynomial = [133 171 165];
            lte_trellis = poly2trellis(lte_constlength,lte_polynomial);
            coded_bits = convenc(scrambled_bits,lte_trellis);
            modulator_qpsk_lte = modem.pskmod('M',4,'SymbolOrder','gray','InputType','bit');
            mod_qpsk_bits_without = modulate(modulator_qpsk_lte,scrambled_bits');
            mod_qpsk_bits_with = modulate(modulator_qpsk_lte,coded_bits');
            ofdm_qpsk_bits_without = ifft(mod_qpsk_bits_without');
            ofdm_qpsk_bits_with = ifft(mod_qpsk_bits_with');
            cp_qpsk_bits_without = [ofdm_qpsk_bits_without((3*end/4)+1:end) ofdm_qpsk_bits_without];
            cp_qpsk_bits_with = [ofdm_qpsk_bits_with((3*end/4)+1:end) ofdm_qpsk_bits_with];
            [selctive_qpsk_bits_epa_without,channel_taps_qpsk_selctive_epa_without] = selective_fading_lte_epa(cp_qpsk_bits_without);
            [selctive_qpsk_bits_epa_with,channel_taps_qpsk_selctive_epa_with] = selective_fading_lte_epa(cp_qpsk_bits_with);
            % (the noise-addition step that produces noisy_bits_without and
            % noisy_bits_with from the faded signals is not shown in this listing)
            rx_ofdm_qpsk_bits_epa_without = noisy_bits_without(((end-6)/5)+1:end-6);
            rx_ofdm_qpsk_bits_epa_with = noisy_bits_with(((end-6)/5)+1:end-6);
            rx_qpsk_bits_epa_without = fft(rx_ofdm_qpsk_bits_epa_without);
            rx_qpsk_bits_epa_with = fft(rx_ofdm_qpsk_bits_epa_with);
            equ_qpsk_tap_epa_without = fft([channel_taps_qpsk_selctive_epa_without zeros(1,length(rx_qpsk_bits_epa_without)-length(channel_taps_qpsk_selctive_epa_without))]);
            de_selective_fading_qpsk_epa_without = rx_qpsk_bits_epa_without./equ_qpsk_tap_epa_without;
            equ_qpsk_tap_epa_with = fft([channel_taps_qpsk_selctive_epa_with zeros(1,length(rx_qpsk_bits_epa_with)-length(channel_taps_qpsk_selctive_epa_with))]);
            de_selective_fading_qpsk_epa_with = rx_qpsk_bits_epa_with./equ_qpsk_tap_epa_with;
            demodulator_qpsk_lte = modem.pskdemod('M',4,'SymbolOrder','gray','OutputType','bit');
            demod_qpsk_bits_epa_without = demodulate(demodulator_qpsk_lte,de_selective_fading_qpsk_epa_without');
            demod_qpsk_bits_epa_with = demodulate(demodulator_qpsk_lte,de_selective_fading_qpsk_epa_with');
            % (the Viterbi decoding step for the coded branch is not shown in this listing)
            descrambled_qpsk_bits_epa_without = lte_descrambler(demod_qpsk_bits_epa_without');
            descrambled_qpsk_bits_epa_with = lte_descrambler(demod_qpsk_bits_epa_with');
            errors_qpsk_epa_without = errors_qpsk_epa_without + length(find(gen_bits(1:end)-descrambled_qpsk_bits_epa_without(1:end)));
            errors_qpsk_epa_with = errors_qpsk_epa_with + length(find(gen_bits(1:end)-descrambled_qpsk_bits_epa_with(1:end)));
        end
        BER_qpsk_epa_without(snr_index) = errors_qpsk_epa_without/(no_of_frames*no_of_bits);
        BER_qpsk_epa_with(snr_index) = errors_qpsk_epa_with/(no_of_frames*no_of_bits);
    end
    semilogy(SNR_dB,BER_qpsk_epa_without,'b')
    hold on
    semilogy(SNR_dB,BER_qpsk_epa_with,'m')
    grid on
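    The scrambling, coding, and channel functions in the script are project-specific (lte_scrambler, selective_fading_lte_epa, and so on), but the OFDM core it relies on (IFFT, a cyclic prefix of one quarter of the symbol, and one-tap frequency-domain equalisation) can be illustrated in a few lines of NumPy. The 3-tap channel below is an arbitrary example, not the EPA model used in the project:

```python
import numpy as np

# Sketch of the OFDM round trip used in the script above: IFFT, append a
# 25% cyclic prefix, convolve with a short multipath channel, strip the
# prefix, FFT, and divide by the channel's frequency response
# (one-tap zero-forcing equalisation, as in the script).

N = 64                                                     # subcarriers
rng = np.random.default_rng(0)
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N)   # QPSK symbols

tx = np.fft.ifft(symbols)
tx_cp = np.concatenate([tx[-N//4:], tx])                   # CP = last quarter

h = np.array([0.8, 0.5, 0.3])                              # example channel taps
rx = np.convolve(tx_cp, h)[:len(tx_cp)]                    # multipath (no noise)

rx_no_cp = rx[N//4:N//4 + N]                               # remove the prefix
Rx = np.fft.fft(rx_no_cp)
H = np.fft.fft(h, N)                                       # channel response
equalised = Rx / H                                         # one-tap equaliser
# equalised matches the transmitted symbols up to floating-point error
```

    Because the cyclic prefix (16 samples) is longer than the channel memory (2 samples), the linear convolution looks circular over the retained block, so the per-subcarrier division undoes the channel exactly in this noiseless sketch.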


    The performance of LTE with the convolutional encoder and Viterbi decoder versus its performance without channel coding:


    1.4 Turbo codes

    The design of good codes has traditionally been tackled by constructing codes with a great deal of algebraic structure, for which feasible decoding schemes exist, such as linear block codes and convolutional codes. The difficulty with these codes is that, in an effort to approach the theoretical limit of Shannon's channel capacity, we need to increase the code-word length of a linear block code or the constraint length of a convolutional code, which increases the computational complexity of the decoder.

    Various approaches have been proposed for constructing powerful codes with a large "equivalent" block length, structured in such a way that the decoding can be split into a number of manageable steps. Of these approaches, the development of turbo codes has been by far the most successful.

    Turbo codes as concatenated codes

    Concatenated codes were first proposed by Forney as a method of obtaining large coding gains by combining two or more relatively simple block or component codes (sometimes called constituent codes); the resulting codes have the error-correction capability of much longer codes.

    A serial concatenation of codes is used for power-limited systems such as the transmitters on deep-space probes, which combine Reed-Solomon codes with convolutional codes. A turbo code can be thought of as a refinement of this concatenated structure, plus an iterative algorithm for decoding the associated code.


    First turbo codes

    Turbo codes were first proposed in 1993 by Berrou, Glavieux and Thitimajshima, whose scheme reached a BER of 10^-5 using a rate-1/2 code with BPSK modulation in AWGN at an Eb/No of 0.7 dB. The code is constructed using two or more component codes, with an interleaved version of the same information sequence introduced to each component. Whereas for conventional convolutional codes the final step at the decoder yields hard decisions, for a turbo decoder to work properly its component decoders must exchange soft information with one another.

    Fig (1.13) performance of a rate-1/2 turbo code and un-coded transmission in AWGN


    1.4.1 Turbo encoder

    Systems use a turbo encoder of two identical constituent components with an interleaver to introduce an interleaved version of the information sequence.

    The output of the turbo encoder is three streams: systematic (the same as the information sequence), parity 1 (the output of the first component), and parity 2 (the output of the second component). There are different choices for the number of constituent components, and for whether they are identical or not, depending on the application.

    Fig (1.14) parallel concatenated convolutional code (PCCC) turbo code.

    RSC encoder

    Each component is a recursive systematic convolutional (RSC) encoder whose shift register has three memory elements, i.e. a short constraint length.


    Fig (1.15) recursive systematic convolutional encoder (RSC).

    Ck: input bits.
    Xk: systematic bits.
    Zk: parity bits.

    Recursive means that one or more output taps are fed back to the input of the shift register. This makes the internal state of the shift register depend on the previous outputs, which increases the error-correcting capability.

    Generator matrix

    Each encoder is specified by a generator matrix in which the second entry is the transfer function of the shift register; this transfer function is simply the output polynomial divided by the feedback polynomial:

    G(D) = [1, g1(D)/g0(D)]

    where g0(D) is the feedback polynomial and g1(D) is the output polynomial:

    g0(D) = 1 + D^2 + D^3


    g1(D) = 1 + D + D^3
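    These two polynomials can be turned into a small simulation of one constituent encoder. The following Python sketch is an illustration with our own naming, not project code; it takes feedback taps from g0(D) = 1 + D^2 + D^3 and output taps from g1(D) = 1 + D + D^3:

```python
# Sketch of one recursive systematic constituent (RSC) encoder with
# three memory elements s1, s2, s3, feedback polynomial g0(D) and
# output polynomial g1(D) as quoted above.

def rsc_encode(bits):
    """Return (systematic, parity) streams for an all-zero initial state."""
    s1 = s2 = s3 = 0
    systematic, parity = [], []
    for c in bits:
        a = c ^ s2 ^ s3         # feedback: input XOR the D^2, D^3 taps of g0
        z = a ^ s1 ^ s3         # output taps of g1: 1, D, D^3
        systematic.append(c)    # systematic bit is the input itself
        parity.append(z)
        s1, s2, s3 = a, s1, s2  # shift the register
    return systematic, parity
```

    Because of the feedback, a single 1 at the input excites a parity stream that never dies out, unlike a non-recursive encoder; this infinite impulse response is part of why turbo codes work well.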

    Interleaver Design

    Interleaving is a process of rearranging the ordering of a data sequence in a one-to-one deterministic format. The inverse of this process is called de-interleaving, which restores the received sequence to its original order. For turbo codes, an interleaver is used between the two component encoders. The main function of the interleaver is to provide randomness to the input sequences; this serves to break up any recurrent error patterns between the two codes. Since both encoders receive the same input bits (but in different orders), the systematic nature of the encoders (one of the output bits is the same as the input bit) makes the systematic output of one encoder redundant to that of the other. Thus, the systematic output of the lower encoder is usually not transmitted, resulting in an overall code rate of 1/3. This code rate may be increased through a selective removal of bits from the overall code output stream (puncturing).

    The presence of the interleaver in the structure of the turbo encoder adds a considerable amount of complexity to the decoding process required to obtain the information bits at the receiver. The interleaver design is a key factor in determining the good performance of a turbo code. The interleaver ensures that two permutations of the same input data are encoded to produce two different parity sequences. Its effect is to tie errors that are easily made in one half of the turbo encoder to errors that are exceptionally unlikely to occur in the other half.


    Quadratic permutation polynomial (QPP) interleaver:

    In this interleaver the permutation is defined by:

    P(i) = (f1·i + f2·i²) mod K

    The parameters f1 and f2 depend on the block size K and are summarized in Table 1.

    Turbo code internal interleaver parameters
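    The permutation above is cheap to generate. As an illustration we use K = 40 with f1 = 3 and f2 = 10, which we believe is the smallest block size in the 3GPP interleaver parameter table; treat the exact (f1, f2) pair as an assumption to be checked against that table:

```python
# Sketch of the QPP interleaver P(i) = (f1*i + f2*i^2) mod K.
# K = 40, f1 = 3, f2 = 10 is assumed from the 3GPP parameter table.

def qpp_permutation(K, f1, f2):
    return [(f1 * i + f2 * i * i) % K for i in range(K)]

perm = qpp_permutation(40, 3, 10)
print(sorted(perm) == list(range(40)))   # → True: P is a valid permutation
```

    The quadratic term spreads neighbouring input positions far apart, which is exactly the randomising role the text describes for the interleaver.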

    Trellis termination

    We have to separate each block entering a component encoder from the next one. This is done by returning the shift register to the all-zero state; simply, we can introduce a number of zeros equal to the constraint length to achieve this.


    Code rates

    Different rates can be obtained by puncturing the three output streams. Rate 1/3 is obtained with no puncturing: we send all three streams. Rate 1/2 is obtained by puncturing the even bits of parity 1 and the odd bits of parity 2 while sending all the systematic bits. We rarely puncture the systematic bits, as this degrades the performance.
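    The rate-1/2 pattern just described can be sketched as follows. Indexing here is 0-based, so the kept "odd bits of parity 1" of the 1-based text become even indices; the function name and layout are illustrative, not the exact LTE rate-matching algorithm:

```python
# Sketch of puncturing rate 1/3 down to rate 1/2: keep every systematic
# bit and alternate between the two parity streams, so two coded bits
# are sent per information bit.

def puncture_to_half_rate(x, z1, z2):
    """x: systematic, z1/z2: parity streams (equal lengths)."""
    out = []
    for k in range(len(x)):
        out.append(x[k])                            # systematic always sent
        out.append(z1[k] if k % 2 == 0 else z2[k])  # alternate parity bits
    return out
```

    For N information bits the punctured stream has 2N bits, i.e. rate 1/2, while still drawing parity from both constituent encoders.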

    Summary

    Turbo codes have an impressive performance that comes close to Shannon's limit. This performance is due to three ingredients:

    RSC encoders.
    Interleaver.
    Iterative decoding algorithm.

    1.4.2 Turbo Decoder

    Introduction

    The general structure of an iterative turbo decoder is shown in figure (1.16).

    Fig (1.16) turbo decoder


    Two component decoders are linked by interleavers in a structure similar to that of the encoder. As seen in figure (1.16), each decoder takes three inputs: the systematically encoded channel output bits, the parity bits transmitted from the associated component encoder, and the information from the other component decoder about the likely values of the bits concerned. This information from the other decoder is referred to as a-priori information. The component decoders have to exploit both the inputs from the channel and this a-priori information. They must also provide what are known as soft outputs for the decoded bits. This means that as well as providing the decoded output bit sequence, the component decoders must also give the associated probability that each bit has been correctly decoded. The soft outputs are typically represented in terms of the so-called Log Likelihood Ratios (LLRs), the sign of which gives the hard decision for the bit, and the magnitude the probability of a correct decision.
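    The relationship between an LLR, the hard decision, and the confidence can be made concrete in a couple of lines of Python. We use the convention L = ln(P(bit = 1)/P(bit = 0)); note that the sign convention varies between texts, so treat it as an assumption:

```python
import math

# Sketch: a log-likelihood ratio L = ln(P(bit=1)/P(bit=0)) carries the
# hard decision in its sign and the reliability in its magnitude.

def hard_decision(llr):
    """Hard decision: positive LLR -> 1, otherwise 0."""
    return 1 if llr > 0 else 0

def prob_of_decision(llr):
    """Probability that the hard decision is the correct bit."""
    return 1.0 / (1.0 + math.exp(-abs(llr)))

print(hard_decision(-2.2), round(prob_of_decision(-2.2), 3))  # → 0 0.9
```

    An LLR near zero means the decoder is essentially guessing (probability near 0.5), which is exactly the kind of bit the a-priori information from the other decoder helps to resolve.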

    Two suitable decoders are the Soft Output Viterbi Algorithm (SOVA) and the Maximum A-Posteriori (MAP) algorithm. The decoder operates iteratively: in the first iteration the first component decoder takes channel output values only, and produces a soft output as its estimate of the data bits. The soft output from the first decoder is then used as additional information for the second decoder, which uses this information along with the channel outputs to calculate its estimate of the data bits. Now the second iteration can begin, and the first decoder decodes the channel outputs again, but now with additional information about the value of the input bits provided by the output of the second decoder in the first iteration. This additional information allows the first decoder to obtain a more accurate set of soft outputs, which are then used by the second decoder as a-priori information. This cycle is

    repeated, and with every iteration the Bit Error Rate (B