Mathematical Foundations for Signal Processing, Communications, and Networking

Book description

Mathematical Foundations for Signal Processing, Communications, and Networking describes mathematical concepts and results important in the design, analysis, and optimization of signal processing algorithms, modern communication systems, and networks. Helping readers master key techniques and comprehend the current research literature, the book offers a comprehensive overview of methods and applications from linear algebra, numerical analysis, statistics, probability, stochastic processes, and optimization.

From basic transforms to Monte Carlo simulation to linear programming, the text covers a broad range of mathematical techniques essential to understanding the concepts and results in signal processing, telecommunications, and networking. Along with discussing the mathematical theory, each self-contained chapter presents examples that illustrate how the various mathematical concepts are used to solve problems in different applications. Each chapter also includes a set of homework exercises and suggested readings for further study.

This text helps readers understand fundamental and advanced results, as well as recent research trends, in the interrelated fields of signal processing, telecommunications, and networking. It provides all the mathematical background needed to prepare students for more advanced courses and to train specialists working in these areas.

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. List of Figures
  7. List of Tables
  8. Preface
  9. Editors
  10. List of Contributors
  11. List of Acronyms
  12. Notations and Symbols
  13. 1 Introduction
  14. 2 Signal Processing Transforms
    1. 2.1 Introduction
    2. 2.2 Basic Transformations
    3. 2.3 Fourier Series and Transform
      1. 2.3.1 Basic Definitions
      2. 2.3.2 Fourier Series
      3. 2.3.3 Fourier Transform
    4. 2.4 Sampling
      1. 2.4.1 Impulse-Train Sampling
      2. 2.4.2 Aliasing
    5. 2.5 Cosine and Sine Transforms
      1. 2.5.1 Cosine Transform
      2. 2.5.2 Sine Transform
    6. 2.6 Laplace Transform
      1. 2.6.1 Properties of Laplace Transform
    7. 2.7 Hartley Transform
      1. 2.7.1 Properties of Hartley Transform
    8. 2.8 Hilbert Transform
      1. 2.8.1 Properties of Hilbert Transform
    9. 2.9 Discrete-Time Fourier Transform
    10. 2.10 The Z-Transform
      1. 2.10.1 Properties of Z-Transform
    11. 2.11 Conclusion and Further Reading
    12. 2.12 Exercises
    13. References
  15. 3 Linear Algebra
    1. 3.1 Vector Spaces
      1. 3.1.1 Subspaces and Direct Sums
      2. 3.1.2 Spanning and Linear Independence
      3. 3.1.3 Bases
      4. 3.1.4 Dimension
      5. 3.1.5 Ordered Basis
      6. 3.1.6 Norms
      7. 3.1.7 Inner-Products
      8. 3.1.8 Induced Norms
      9. 3.1.9 Orthogonality
      10. 3.1.10 Gram-Schmidt Orthogonalization
    2. 3.2 Linear Transformations
      1. 3.2.1 Range and Nullspace of a Linear Transformation
      2. 3.2.2 Composition and Invertibility
      3. 3.2.3 Matrix Representation of Linear Transformations
      4. 3.2.4 Projection Operators
      5. 3.2.5 Linear Functionals and Dual Spaces
      6. 3.2.6 Adjoint of a Linear Transformation
      7. 3.2.7 Four Fundamental Subspaces
    3. 3.3 Operator Norms and Matrix Norms
    4. 3.4 Systems of Linear Equations
    5. 3.5 Determinant, Adjoint, and Inverse of a Matrix
    6. 3.6 Cramer’s Rule
    7. 3.7 Unitary and Orthogonal Operators and Matrices
    8. 3.8 LU Decomposition
    9. 3.9 LDL and Cholesky Decomposition
    10. 3.10 QR Decomposition
    11. 3.11 Householder and Givens Transformations
      1. 3.11.1 Orthogonal Reduction
    12. 3.12 Best Approximations and Orthogonal Projections
    13. 3.13 Least Squares Approximations
    14. 3.14 Angles Between Subspaces
      1. 3.14.1 Principal Angles Between Subspaces
    15. 3.15 Eigenvalues and Eigenvectors
      1. 3.15.1 Diagonalization
    16. 3.16 Schur Factorization and Spectral Theorem
    17. 3.17 Singular Value Decomposition (SVD)
    18. 3.18 Rayleigh Quotient
    19. 3.19 Application of SVD and Rayleigh Quotient: Principal Component Analysis
    20. 3.20 Special Matrices
      1. 3.20.1 Block Matrices
      2. 3.20.2 Circulant Matrices
      3. 3.20.3 Toeplitz Matrices
      4. 3.20.4 Hankel Matrices
      5. 3.20.5 Vandermonde Matrices
      6. 3.20.6 Normal Matrices
      7. 3.20.7 Stochastic Matrices
      8. 3.20.8 Positive and Negative Definite Matrices
      9. 3.20.9 Matrix Condition Number
      10. 3.20.10 Sherman-Morrison-Woodbury Identity
      11. 3.20.11 Schur Complement
      12. 3.20.12 Generalized Inverses
    21. 3.21 Matrix Operations
      1. 3.21.1 Kronecker Product
      2. 3.21.2 Hadamard Product
      3. 3.21.3 Dot Product
      4. 3.21.4 Direct Sum
      5. 3.21.5 Differentiation of Matrix and Vector Functions
    22. 3.22 References and Further Studies
    23. 3.23 Exercises
    24. References
  16. 4 Elements of Galois Fields
    1. 4.1 Groups, Rings, and Fields
      1. 4.1.1 Groups
      2. 4.1.2 Rings
      3. 4.1.3 Fields
    2. 4.2 Galois Fields
    3. 4.3 Polynomials with Coefficients in GF(2)
    4. 4.4 Construction of GF(2^m)
      1. 4.4.1 Conjugate Elements
      2. 4.4.2 Factorization of the Polynomial X^n + 1
    5. 4.5 Some Notes on Applications of Finite Fields
    6. 4.6 Exercises
    7. References
  17. 5 Numerical Analysis
    1. 5.1 Numerical Approximation
    2. 5.2 Sensitivity and Conditioning
    3. 5.3 Computer Arithmetic
    4. 5.4 Interpolation
      1. 5.4.1 Polynomial Interpolation
      2. 5.4.2 Piecewise Polynomial Interpolation
    5. 5.5 Nonlinear Equations
      1. 5.5.1 Interval Bisection
      2. 5.5.2 Newton’s Method
      3. 5.5.3 Secant Method
      4. 5.5.4 Muller’s Method
      5. 5.5.5 Linear Fractional Interpolation
      6. 5.5.6 Zeros of Polynomials
    6. 5.6 Eigenvalues and Singular Values
      1. 5.6.1 Power Iterations
      2. 5.6.2 QR Algorithm
      3. 5.6.3 Computing Singular Values
    7. 5.7 Further Reading
    8. 5.8 Exercises
    9. References
  18. 6 Combinatorics
    1. 6.1 Two Principles of Enumeration
    2. 6.2 Permutations and Combinations
    3. 6.3 The Principle of Inclusion and Exclusion
    4. 6.4 Generating Functions
    5. 6.5 Recurrence Relations
    6. 6.6 Graphs
    7. 6.7 Paths and Cycles in Graphs
    8. 6.8 Trees
    9. 6.9 Encoding and Decoding
    10. 6.10 Latin Squares
    11. 6.11 Balanced Incomplete Block Designs
    12. 6.12 Conclusion
    13. 6.13 Exercises
    14. References
  19. 7 Probability, Random Variables, and Stochastic Processes
    1. 7.1 Introduction to Probability
    2. 7.2 Random Variables
      1. 7.2.1 Discrete Random Variables
      2. 7.2.2 Continuous Random Variables
    3. 7.3 Joint Random Variables
      1. 7.3.1 Expected Values, Characteristic Functions
      2. 7.3.2 Inequalities
      3. 7.3.3 Functions of Multiple Random Variables
      4. 7.3.4 Convergence of Random Variables
      5. 7.3.5 Law of Large Numbers (LLN) and Central Limit Theorem (CLT)
    4. 7.4 Random Processes
      1. 7.4.1 Stationary Process
      2. 7.4.2 Random Process as the Input to a Linear System
    5. 7.5 Markov Process
      1. 7.5.1 Markov Chains
      2. 7.5.2 Continuous Time Markov Chains
      3. 7.5.3 Hidden Markov Model
    6. 7.6 Summary and Further Reading
    7. 7.7 Exercises
    8. References
  20. 8 Random Matrix Theory
    1. 8.1 Probability Notations
    2. 8.2 Spectral Distribution of Random Matrices
      1. 8.2.1 Wishart Matrices
      2. 8.2.2 Limiting Spectral Distribution
    3. 8.3 Spectral Analysis
      1. 8.3.1 Exact Eigenvalue Separation
      2. 8.3.2 Support of l.s.d.
    4. 8.4 Statistical Inference
    5. 8.5 Applications
      1. 8.5.1 Binary Hypothesis Testing
      2. 8.5.2 Parameter Estimation
    6. 8.6 Conclusion
    7. 8.7 Exercises
    8. References
  21. 9 Large Deviations
    1. 9.1 Introduction
    2. 9.2 Concentration Inequalities
    3. 9.3 Rate Function
    4. 9.4 Cramér’s Theorem
    5. 9.5 Method of Types
    6. 9.6 Sanov’s Theorem
    7. 9.7 Hypothesis Testing
    8. 9.8 Further Readings
    9. 9.9 Exercises
    10. References
  22. 10 Fundamentals of Estimation Theory
    1. 10.1 Introduction
    2. 10.2 Bound on Minimum Variance – Cramér-Rao Lower Bound
      1. 10.2.1 Computation of CRLB
      2. 10.2.2 Finding MVUE Attaining the CRLB
    3. 10.3 MVUE Using RBLS Theorem
      1. 10.3.1 Sufficient Statistics
      2. 10.3.2 Finding MVUE from Sufficient Statistics
    4. 10.4 Maximum Likelihood Estimation
      1. 10.4.1 ML Estimation Principle
      2. 10.4.2 Properties of the ML Estimator
    5. 10.5 Least Squares Estimation
      1. 10.5.1 Geometrical Interpretation
      2. 10.5.2 Recursive LS Estimation
      3. 10.5.3 Weighted LS and Iterative Reweighted LS
      4. 10.5.4 Constrained LS Estimation
    6. 10.6 Regularized LS Estimation
      1. 10.6.1 ℓ₂ Regularization
      2. 10.6.2 LS Estimation with Quadratic Constraint
      3. 10.6.3 ℓ₁ Regularization
    7. 10.7 Bayesian Estimation
      1. 10.7.1 Minimum Mean Square Error Estimation
      2. 10.7.2 General Bayesian Estimator
      3. 10.7.3 Handling Nuisance Parameters
    8. 10.8 References and Further Reading
    9. 10.9 Exercises
    10. References
  23. 11 Fundamentals of Detection Theory
    1. 11.1 Introduction
      1. 11.1.1 Statistical Decision Theory Framework
      2. 11.1.2 Probabilistic Structure for Observation Space
      3. 11.1.3 Conditional Density and Conditional Risk
      4. 11.1.4 Bayesian Approach
      5. 11.1.5 Minimax Approach
      6. 11.1.6 Randomized Decision Rules
      7. 11.1.7 General Method for Finding Bayes Rules
    2. 11.2 Bayesian Binary Detection
      1. 11.2.1 Likelihood Ratio Test
      2. 11.2.2 Uniform Costs
      3. 11.2.3 Examples
    3. 11.3 Binary Minimax Detection
      1. 11.3.1 Bayes Risk Line and Minimum Risk Curve
      2. 11.3.2 Equalizer Rule
      3. 11.3.3 Examples
    4. 11.4 Binary Neyman-Pearson Detection
      1. 11.4.1 Solution to the N-P Optimization Problem
      2. 11.4.2 N-P Rule and Receiver Operating Characteristic
      3. 11.4.3 Examples
    5. 11.5 Bayesian Composite Detection
    6. 11.6 Neyman-Pearson Composite Detection
      1. 11.6.1 UMP Detection with One Composite Hypothesis
      2. 11.6.2 UMP Detection with Both Composite Hypotheses
      3. 11.6.3 Generalized Likelihood Ratio (GLR) Detection
      4. 11.6.4 Locally Most Powerful (LMP) Detection
    7. 11.7 Binary Detection with Vector Observations
      1. 11.7.1 Conditionally Independent Observations
      2. 11.7.2 Deterministic Signals in Correlated Gaussian Noise
      3. 11.7.3 Gaussian Signals in Gaussian Noise
    8. 11.8 Summary and Further Reading
    9. 11.9 Exercises
    10. References
  24. 12 Monte Carlo Methods for Statistical Signal Processing
    1. 12.1 Introduction
      1. 12.1.1 Model-Based Signal Processing
      2. 12.1.2 Examples
    2. 12.2 Monte Carlo Methods
    3. 12.3 Markov Chain Monte Carlo (MCMC) Methods
      1. 12.3.1 General MCMC Algorithms
      2. 12.3.2 Applications of MCMC in Digital Communications
    4. 12.4 Sequential Monte Carlo (SMC) Methods
      1. 12.4.1 General SMC Algorithms
      2. 12.4.2 Resampling Procedures
      3. 12.4.3 Applications of SMC in Bioinformatics
    5. 12.5 Conclusions and Further Readings
    6. 12.6 Exercises
    7. References
  25. 13 Factor Graphs and Message Passing Algorithms
    1. 13.1 Introduction
      1. 13.1.1 Why Factor Graphs?
      2. 13.1.2 Literature Survey
      3. 13.1.3 Organization of the Chapter
    2. 13.2 Factor Graphs
    3. 13.3 Modeling Systems Using Factor Graphs
      1. 13.3.1 Behavioral Modeling
      2. 13.3.2 Probabilistic Modeling
    4. 13.4 Relationship with Other Probabilistic Graphical Models
      1. 13.4.1 Markov Random Fields
      2. 13.4.2 Bayesian Networks
    5. 13.5 Message Passing in Factor Graphs
      1. 13.5.1 Sum-Product Algorithm
      2. 13.5.2 Max-Product Algorithm
    6. 13.6 Factor Graphs with Cycles
      1. 13.6.1 Message Passing Schedules
      2. 13.6.2 Iterative Message Passing
    7. 13.7 Some General Remarks on Factor Graphs
      1. 13.7.1 Mappers
      2. 13.7.2 Hybrid Equality Constraint
      3. 13.7.3 Quantizers
      4. 13.7.4 Continuous Variables
    8. 13.8 Some Important Message Passing Algorithms
      1. 13.8.1 Forward/Backward Algorithm
      2. 13.8.2 The Viterbi Algorithm
      3. 13.8.3 Kalman Filter
      4. 13.8.4 Expectation Maximization (EM) Algorithm
    9. 13.9 Applications of Message Passing in Factor Graphs
      1. 13.9.1 Detection of OFDM Signals in the Presence of Carrier Frequency Offset and Phase Noise
    10. 13.10 Exercises
    11. References
  26. 14 Unconstrained and Constrained Optimization Problems
    1. 14.1 Basics of Convex Analysis
    2. 14.2 Unconstrained vs. Constrained Optimization
      1. 14.2.1 Optimality Conditions for Unconstrained Optimization
      2. 14.2.2 Optimality Conditions for Constrained Optimization
      3. 14.2.3 Lagrangian Duality
    3. 14.3 Application Examples
    4. 14.4 Exercises
    5. References
  27. 15 Linear Programming and Mixed Integer Programming
    1. 15.1 Linear Programming
      1. 15.1.1 General Presentation
      2. 15.1.2 Transformations of the Standard Problem
      3. 15.1.3 Optimality Characterization
      4. 15.1.4 Duality Aspects
      5. 15.1.5 The Simplex Method
      6. 15.1.6 Interior-Point Methods
    2. 15.2 Modeling Problems via Linear Programming
      1. 15.2.1 Optimization with 1-norm and ∞-norm
      2. 15.2.2 Chebyshev Center of a Polytope
      3. 15.2.3 Classification with Separating Hyperplanes
      4. 15.2.4 Linear Fractional Programming
      5. 15.2.5 Continuous Constraints and Discretization
    3. 15.3 Mixed Integer Programming
      1. 15.3.1 Problem Statement and LP Relaxation
      2. 15.3.2 Branch and Bound
      3. 15.3.3 Examples of Mixed Integer Problems
    4. 15.4 Historical Notes and Further Reading
    5. 15.5 Exercises
    6. References
  28. 16 Majorization Theory and Applications
    1. 16.1 Majorization Theory
      1. 16.1.1 Basic Concepts
      2. 16.1.2 Schur-Convex/Concave Functions
      3. 16.1.3 Relation to Matrix Theory
      4. 16.1.4 Multiplicative Majorization
      5. 16.1.5 Stochastic Majorization
    2. 16.2 Applications of Majorization Theory
      1. 16.2.1 CDMA Sequence Design
      2. 16.2.2 Linear MIMO Transceiver Design
      3. 16.2.3 Nonlinear MIMO Transceiver Design
      4. 16.2.4 Impact of Correlation
      5. 16.2.5 Robust Design
    3. 16.3 Conclusions and Further Readings
    4. 16.4 Exercises
    5. References
  29. 17 Queueing Theory
    1. 17.1 Introduction
    2. 17.2 Markov Chains
      1. 17.2.1 Discrete-Time Markov Chains
      2. 17.2.2 Continuous-Time Markov Chains
    3. 17.3 Queueing Models
    4. 17.4 M/M/1 Queue
      1. 17.4.1 Steady-State Probabilities for Number in System
      2. 17.4.2 Little’s Formula
      3. 17.4.3 Probability Distribution of Delay Through System
    5. 17.5 M/M/1/N Queue
      1. 17.5.1 Example: Queueing Model of a Packet Switch
    6. 17.6 M/M/N/N Queue
    7. 17.7 M/M/1 Queues in Tandem
      1. 17.7.1 Product Form
      2. 17.7.2 Jackson Networks
    8. 17.8 M/G/1 Queue
      1. 17.8.1 Embedded Markov Chain
      2. 17.8.2 Mean Number in System
      3. 17.8.3 Distribution of Number in System
      4. 17.8.4 Mean Delay Through System
      5. 17.8.5 Distribution of Delay Through System
      6. 17.8.6 Example: Mixed Packets
      7. 17.8.7 Example: Data Frame Retransmissions
    9. 17.9 Conclusions
    10. 17.10 Exercises
    11. References
  30. 18 Network Optimization Techniques
    1. 18.1 Introduction
    2. 18.2 Basic Multicommodity Flow Networks Optimization Models
      1. 18.2.1 Notions and Notations
      2. 18.2.2 Link-Path vs. Node-Link Formulation Applied to Allocation Problems
      3. 18.2.3 Dimensioning Problems
    3. 18.3 Optimization Methods for Multicommodity Flow Networks
      1. 18.3.1 Decomposition Methods
      2. 18.3.2 Solving MIP Problems
      3. 18.3.3 Heuristic Approaches
    4. 18.4 Optimization Models for Multistate Networks
      1. 18.4.1 Protection Models
      2. 18.4.2 Restoration Models
      3. 18.4.3 Multihour and Multiperiod Design
    5. 18.5 Concluding Remarks
    6. 18.6 Exercises
    7. References
  31. 19 Game Theory
    1. 19.1 Introduction
    2. 19.2 Utility Theory
    3. 19.3 Games on the Normal Form
      1. 19.3.1 Zero-Sum Games
      2. 19.3.2 Non-Zero-Sum Games
    4. 19.4 Noncooperative Games and the Nash Equilibrium
    5. 19.5 Cooperative Games
      1. 19.5.1 Bargaining without Transferable Utilities
      2. 19.5.2 Bargaining with Transferable Utilities
    6. 19.6 Games with Incomplete Information
    7. 19.7 Extensive Form Games
      1. 19.7.1 Nash Equilibrium in Extensive Form Games
      2. 19.7.2 Subgame-Perfect Equilibrium
      3. 19.7.3 Incomplete Information in Extensive Form Games
    8. 19.8 Repeated Games and Evolutionary Stability
      1. 19.8.1 Repeated Games
      2. 19.8.2 Evolutionary Game Theory
    9. 19.9 Coalitional Form/Characteristic Function Form
      1. 19.9.1 The Core
      2. 19.9.2 The Kernel
      3. 19.9.3 The Nucleolus
      4. 19.9.4 The Shapley Value
    10. 19.10 Mechanism Design and Implementation Theory
      1. 19.10.1 Social Choice
      2. 19.10.2 Auctions
      3. 19.10.3 Direct Revelation Principle
      4. 19.10.4 Clarke-Groves and AGV-Arrow Mechanism
    11. 19.11 Applications to Signal Processing and Communications
    12. 19.12 Acknowledgments
    13. 19.13 Exercises
    14. References
  32. 20 A Short Course on Frame Theory
    1. 20.1 Examples of Signal Expansions
    2. 20.2 Signal Expansions in Finite-Dimensional Spaces
      1. 20.2.1 Orthonormal Bases
      2. 20.2.2 General Bases
      3. 20.2.3 Redundant Signal Expansions
    3. 20.3 Frames for General Hilbert Spaces
      1. 20.3.1 The Frame Operator
      2. 20.3.2 The Canonical Dual Frame
      3. 20.3.3 Signal Expansions
      4. 20.3.4 Tight Frames
      5. 20.3.5 Exact Frames and Biorthonormality
    4. 20.4 The Sampling Theorem
      1. 20.4.1 Sampling Theorem as a Frame Expansion
      2. 20.4.2 Design Freedom in Oversampled A/D Conversion
      3. 20.4.3 Noise Reduction in Oversampled A/D Conversion
    5. 20.5 Important Classes of Frames
      1. 20.5.1 Weyl-Heisenberg Frames
      2. 20.5.2 Wavelets
    6. 20.6 Exercises
    7. References
  33. Index

Product information

  • Title: Mathematical Foundations for Signal Processing, Communications, and Networking
  • Author(s): Erchin Serpedin, Thomas Chen, Dinesh Rajan
  • Release date: December 2017
  • Publisher(s): CRC Press
  • ISBN: 9781466514089