
VLSI Architectures for Modern Error-Correcting Codes

Book Description

Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, including wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probes. New-generation and emerging applications demand codes with better error-correcting capability. However, the design and implementation of these high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity.

VLSI Architectures for Modern Error-Correcting Codes serves as a bridge connecting advancements in coding theory to practical hardware implementations. Instead of focusing on circuit-level design techniques, the book highlights integrated algorithmic and architectural transformations that lead to great improvements in throughput, silicon area, and/or power consumption in hardware implementations.

The goal of this book is to provide a comprehensive and systematic review of available techniques and architectures, so that they can be easily followed by system and hardware designers to develop en/decoder implementations that meet error-correcting performance and cost requirements. This book can also be used as a reference for graduate-level courses on VLSI design and error-correcting coding. Particular emphasis is placed on hard- and soft-decision Reed-Solomon (RS) and Bose-Chaudhuri-Hocquenghem (BCH) codes, and on binary and non-binary low-density parity-check (LDPC) codes. These codes are among the best candidates for modern and emerging applications due to their good error-correcting performance and lower implementation complexity compared to other codes. To help explain the computations and en/decoder architectures, many examples and case studies are included.

More importantly, discussions are provided on the advantages and drawbacks of different implementation approaches and architectures.

Table of Contents

  1. Cover Page
  2. Half title
  3. Title Page
  4. Copyright page
  5. Dedication
  6. Contents
  7. Preface
  8. List of Figures
  9. List of Tables
  10. 1 Finite field arithmetic
    1. 1.1 Definitions, properties, and element representations
    2. 1.2 Finite field arithmetic
      1. 1.2.1 Multiplication using basis representations
        1. 1.2.1.1 Standard basis multiplier
        2. 1.2.1.2 Normal basis multiplier
        3. 1.2.1.3 Dual basis multiplier
        4. 1.2.1.4 Multiplication in composite field
      2. 1.2.2 Inversion using basis representations
        1. 1.2.2.1 Inversion in original field
        2. 1.2.2.2 Inversion in composite field
    3. 1.3 Mapping between finite field element representations
      1. 1.3.1 Mapping between standard basis and composite field representations
      2. 1.3.2 Mapping between power and standard basis representations
      3. 1.3.3 Mapping between standard and normal basis representations
  11. 2 VLSI architecture design fundamentals
    1. 2.1 Definitions and graph representation
    2. 2.2 Pipelining and retiming
    3. 2.3 Parallel processing and unfolding
    4. 2.4 Folding
  12. 3 Root computations for polynomials over finite fields
    1. 3.1 Root computation for general polynomials
    2. 3.2 Root computation for linearized and affine polynomials
    3. 3.3 Root computation for polynomials of degree two or three
  13. 4 Reed-Solomon encoder & hard-decision and erasure decoder architectures
    1. 4.1 Reed-Solomon codes
    2. 4.2 Reed-Solomon encoder architectures
    3. 4.3 Hard-decision Reed-Solomon decoders
      1. 4.3.1 Peterson-Gorenstein-Zierler algorithm
      2. 4.3.2 Berlekamp-Massey algorithm
      3. 4.3.3 Reformulated inversionless Berlekamp-Massey algorithm and architectures
      4. 4.3.4 Syndrome, error location and magnitude computation architectures
      5. 4.3.5 Pipelined decoder architecture
    4. 4.4 Error-and-erasure Reed-Solomon decoders
  14. 5 Algebraic soft-decision Reed-Solomon decoder architectures
    1. 5.1 Algebraic soft-decision decoding algorithms
    2. 5.2 Re-encoded algebraic soft-decision decoder
    3. 5.3 Re-encoding algorithms and architectures
      1. 5.3.1 Simplified re-encoding and coordinate transformation schemes
      2. 5.3.2 Simplified re-encoder architectures
    4. 5.4 Interpolation algorithms and architectures
      1. 5.4.1 Kötter’s interpolation algorithm
      2. 5.4.2 Architectures for Kötter’s interpolation
        1. 5.4.2.1 Discrepancy coefficient computation architecture
        2. 5.4.2.2 Polynomial updating architecture
        3. 5.4.2.3 Computation scheduling for Kötter’s interpolation
        4. 5.4.2.4 Discrepancy coefficient properties and interpolation complexity reduction
        5. 5.4.2.5 Slow-downed interpolation architecture
      3. 5.4.3 Lee-O’Sullivan interpolation algorithm and simplifications
      4. 5.4.4 Architectures for Lee-O’Sullivan interpolation
        1. 5.4.4.1 Initial basis construction architecture
        2. 5.4.4.2 Basis conversion architecture
      5. 5.4.5 Kötter’s and Lee-O’Sullivan interpolation comparisons
    5. 5.5 Factorization algorithm and architectures
      1. 5.5.1 Prediction-based factorization architecture
      2. 5.5.2 Partial-parallel factorization architecture
  15. 6 Interpolation-based Chase and generalized minimum distance decoders
    1. 6.1 Interpolation-based Chase decoder
      1. 6.1.1 Backward-forward interpolation algorithms and architectures
        1. 6.1.1.1 Backward-forward interpolation
        2. 6.1.1.2 Unified backward-forward interpolation
        3. 6.1.1.3 Generalized backward interpolation
      2. 6.1.2 Eliminated factorization
      3. 6.1.3 Polynomial selection schemes
        1. 6.1.3.1 Polynomial selection based on root search
        2. 6.1.3.2 Polynomial selection based on evaluation value testing
        3. 6.1.3.3 Reduced-complexity Chase interpolation using evaluation value testing-based polynomial selection
      4. 6.1.4 Chien-search-based codeword recovery
      5. 6.1.5 Systematic re-encoding
      6. 6.1.6 Simplified interpolation-based Chase decoder architecture
    2. 6.2 Generalized minimum distance decoder
      1. 6.2.1 Kötter’s one-pass GMD decoder
      2. 6.2.2 Interpolation-based one-pass GMD decoder
  16. 7 BCH encoder & decoder architectures
    1. 7.1 BCH codes
    2. 7.2 BCH encoder architectures
    3. 7.3 Hard-decision BCH decoders
      1. 7.3.1 Peterson’s algorithm
      2. 7.3.2 Berlekamp’s algorithm and implementation architectures
      3. 7.3.3 3-error-correcting BCH decoder architectures
    4. 7.4 Chase BCH decoder based on Berlekamp’s algorithm
    5. 7.5 Interpolation-based Chase BCH decoder architectures
  17. 8 Binary LDPC codes & decoder architectures
    1. 8.1 LDPC codes
    2. 8.2 LDPC decoding algorithms
      1. 8.2.1 Belief propagation algorithm
      2. 8.2.2 Min-sum algorithm
      3. 8.2.3 Majority-logic and bit-flipping algorithms
      4. 8.2.4 Finite alphabet iterative decoding algorithm
    3. 8.3 LDPC decoder architectures
      1. 8.3.1 Scheduling schemes
        1. 8.3.1.1 Flooding scheme
        2. 8.3.1.2 Sliced message-passing scheme
        3. 8.3.1.3 Row-layered decoding scheme
        4. 8.3.1.4 Shuffled decoding scheme
        5. 8.3.1.5 Scheduling scheme comparisons
      2. 8.3.2 VLSI architectures for CNUs and VNUs
        1. 8.3.2.1 CNU architectures
        2. 8.3.2.2 VNU architectures
        3. 8.3.2.3 Message compression
    4. 8.4 Low-power LDPC decoder design
  18. 9 Non-binary LDPC decoder architectures
    1. 9.1 Non-binary LDPC codes and decoding algorithms
      1. 9.1.1 Belief propagation decoding algorithms
        1. 9.1.1.1 Frequency-domain decoding algorithm
        2. 9.1.1.2 Log-domain decoding algorithm
        3. 9.1.1.3 Mixed-domain decoding algorithm
      2. 9.1.2 Extended Min-sum and Min-max algorithms
      3. 9.1.3 Iterative reliability-based majority-logic decoding
    2. 9.2 Min-max decoder architectures
      1. 9.2.1 Forward-backward Min-max check node processing
        1. 9.2.1.1 Elementary step architecture with all q messages and polynomial representation
        2. 9.2.1.2 Elementary step architecture with all q messages and power representation
        3. 9.2.1.3 Elementary step architecture with < q messages
        4. 9.2.1.4 Scheduling of elementary steps
      2. 9.2.2 Trellis-based path-construction Min-max check node processing
      3. 9.2.3 Simplified Min-max check node processing
      4. 9.2.4 Syndrome-based Min-max check node processing
      5. 9.2.5 Basis-construction Min-max check node processing
      6. 9.2.6 Variable node unit architectures
      7. 9.2.7 Overall NB-LDPC decoder architectures
    3. 9.3 Extended Min-sum decoder architectures
      1. 9.3.1 Extended Min-sum elementary step architecture keeping q′ < q messages
      2. 9.3.2 Trellis-based path-construction extended Min-sum check node processing
    4. 9.4 Iterative majority-logic decoder architectures
      1. 9.4.1 IHRB decoders for QCNB-LDPC codes
      2. 9.4.2 IHRB decoders for cyclic NB-LDPC codes
      3. 9.4.3 Enhanced IHRB decoding scheme and architectures
  19. Bibliography
  20. Index