Multi-Camera Networks

Book description

  • The first book on this rapidly developing field, written by leading experts, with applications to security, smart homes, multimedia, and environmental monitoring
  • Comprehensive coverage of fundamentals, algorithms, design methodologies, system implementation issues, architectures, and applications
  • Presents in detail the latest developments in multi-camera calibration, active and heterogeneous camera networks, multi-camera object and event detection, tracking, coding, and smart camera architecture and middleware

Table of contents

  1. Front Cover
  2. Multi-Camera Networks
  3. Copyright Page
  4. Table of Contents (1/3)
  5. Table of Contents (2/3)
  6. Table of Contents (3/3)
  7. Foreword
  8. Preface (1/2)
  9. Preface (2/2)
  10. Part 1: Multi-Camera Calibration and Topology
    1. Chapter 1. Multi-View Geometry for Camera Networks
      1. 1.1 Introduction
      2. 1.2 Image Formation
        1. 1.2.1 Perspective Projection
        2. 1.2.2 Camera Matrices
        3. 1.2.3 Estimating the Camera Matrix
      3. 1.3 Two-Camera Geometry (1/2)
      4. 1.3 Two-Camera Geometry (2/2)
        1. 1.3.1 Epipolar Geometry and Its Estimation
        2. 1.3.2 Relating the Fundamental Matrix to the Camera Matrices
        3. 1.3.3 Estimating the Fundamental Matrix
      5. 1.4 Projective Transformations
        1. 1.4.1 Estimating Projective Transformations
        2. 1.4.2 Rectifying Projective Transformations
      6. 1.5 Feature Detection and Matching
      7. 1.6 Multi-Camera Geometry
        1. 1.6.1 Affine Reconstruction
        2. 1.6.2 Projective Reconstruction
        3. 1.6.3 Metric Reconstruction
        4. 1.6.4 Bundle Adjustment
      8. 1.7 Conclusions
        1. 1.7.1 Resources
      9. References
    2. Chapter 2. Multi-View Calibration, Synchronization, and Dynamic Scene Reconstruction
      1. 2.1 Introduction
      2. 2.2 Camera Network Calibration and Synchronization (1/4)
      3. 2.2 Camera Network Calibration and Synchronization (2/4)
      4. 2.2 Camera Network Calibration and Synchronization (3/4)
      5. 2.2 Camera Network Calibration and Synchronization (4/4)
        1. 2.2.1 Epipolar Geometry from Dynamic Silhouettes
        2. 2.2.2 Related Work
        3. 2.2.3 Camera Network Calibration
        4. 2.2.4 Computing the Metric Reconstruction
        5. 2.2.5 Camera Network Synchronization
        6. 2.2.6 Results
      6. 2.3 Dynamic Scene Reconstruction from Silhouette Cues (1/5)
      7. 2.3 Dynamic Scene Reconstruction from Silhouette Cues (2/5)
      8. 2.3 Dynamic Scene Reconstruction from Silhouette Cues (3/5)
      9. 2.3 Dynamic Scene Reconstruction from Silhouette Cues (4/5)
      10. 2.3 Dynamic Scene Reconstruction from Silhouette Cues (5/5)
        1. 2.3.1 Related Work
        2. 2.3.2 Probabilistic Framework
        3. 2.3.3 Automatic Learning and Tracking
        4. 2.3.4 Results and Evaluation
      11. 2.4 Conclusions
      12. References
    3. Chapter 3. Actuation-Assisted Localization of Distributed Camera Sensor Networks
      1. 3.1 Introduction
      2. 3.2 Methodology
        1. 3.2.1 Base Triangle
        2. 3.2.2 Large-Scale Networks
        3. 3.2.3 Bundle Adjustment Refinement
      3. 3.3 Actuation Planning
        1. 3.3.1 Actuation Strategies
        2. 3.3.2 Actuation Termination Rules
      4. 3.4 System Description
        1. 3.4.1 Actuated Camera Platform
        2. 3.4.2 Optical Communication Beaconing
        3. 3.4.3 Network Architecture
      5. 3.5 Evaluation
        1. 3.5.1 Localization Accuracy
        2. 3.5.2 Node Density
        3. 3.5.3 Latency
      6. 3.6 Conclusions
      7. References
    4. Chapter 4. Building an Algebraic Topological Model of Wireless Camera Networks
      1. 4.1 Introduction
      2. 4.2 Mathematical Background
        1. 4.2.1 Simplicial Homology
        2. 4.2.2 Example
        3. 4.2.3 Čech Theorem
      3. 4.3 The Camera and the Environment Models
      4. 4.4 The CN-Complex
      5. 4.5 Recovering Topology: 2D Case
        1. 4.5.1 Algorithms
        2. 4.5.2 Simulation in 2D
      6. 4.6 Recovering Topology: 2.5D Case (1/2)
      7. 4.6 Recovering Topology: 2.5D Case (2/2)
        1. 4.6.1 Mapping from 2.5D to 2D
        2. 4.6.2 Building the CN-Complex
        3. 4.6.3 Experimentation
      8. 4.7 Conclusions
      9. References
    5. Chapter 5. Optimal Placement of Multiple Visual Sensors
      1. 5.1 Introduction
        1. 5.1.1 Related Work
        2. 5.1.2 Organization
      2. 5.2 Problem Formulation
        1. 5.2.1 Definitions
        2. 5.2.2 Problem Statements
        3. 5.2.3 Modeling a Camera’s Field of View
        4. 5.2.4 Modeling Space
      3. 5.3 Approaches (1/2)
      4. 5.3 Approaches (2/2)
        1. 5.3.1 Exact Algorithms
        2. 5.3.2 Heuristics
        3. 5.3.3 Random Selection and Placement
      5. 5.4 Experiments
        1. 5.4.1 Comparison of Approaches
        2. 5.4.2 Complex Space Examples
      6. 5.5 Possible Extensions
      7. 5.6 Conclusions
      8. References
    6. Chapter 6. Optimal Visual Sensor Network Configuration
      1. 6.1 Introduction
        1. 6.1.1 Organization
      2. 6.2 Related Work
      3. 6.3 General Visibility Model
      4. 6.4 Visibility Model for Visual Tagging
      5. 6.5 Optimal Camera Placement
        1. 6.5.1 Discretization of Camera and Tag Spaces
        2. 6.5.2 MIN_CAM: Minimizing the Number of Cameras for Target Visibility
        3. 6.5.3 FIX_CAM: Maximizing Visibility for a Given Number of Cameras
        4. 6.5.4 GREEDY: An Algorithm to Speed Up BIP
      6. 6.6 Experimental Results (1/2)
      7. 6.6 Experimental Results (2/2)
        1. 6.6.1 Optimal Camera Placement Simulation Experiments
        2. 6.6.2 Comparison with Other Camera Placement Strategies
      8. 6.7 Conclusions and Future Work
      9. References
  11. Part 2: Active and Heterogeneous Camera Networks
    1. Chapter 7. Collaborative Control of Active Cameras in Large-Scale Surveillance
      1. 7.1 Introduction
      2. 7.2 Related Work
      3. 7.3 System Overview
        1. 7.3.1 Planning
        2. 7.3.2 Tracking
      4. 7.4 Objective Function for PTZ Scheduling
      5. 7.5 Optimization
        1. 7.5.1 Asynchronous Optimization
        2. 7.5.2 Combinatorial Search
      6. 7.6 Quality Measures
        1. 7.6.1 View Angle
        2. 7.6.2 Target–Camera Distance
        3. 7.6.3 Target–Zone Boundary Distance
        4. 7.6.4 PTZ Limits
        5. 7.6.5 Combined Quality Measure
      7. 7.7 Idle Mode
      8. 7.8 Experiments
      9. 7.9 Conclusions
      10. References
    2. Chapter 8. Pan-Tilt-Zoom Camera Networks
      1. 8.1 Introduction
      2. 8.2 Related Work
      3. 8.3 Pan-Tilt-Zoom Camera Geometry
      4. 8.4 PTZ Camera Networks with Master–Slave Configuration
        1. 8.4.1 Minimal PTZ Camera Model Parameterization
      5. 8.5 Cooperative Target Tracking
        1. 8.5.1 Tracking Using SIFT Visual Landmarks
      6. 8.6 Extension to Wider Areas
      7. 8.7 The Vanishing Line for Zoomed Head Localization
      8. 8.8 Experimental Results
      9. 8.9 Conclusions
      10. References
    3. Chapter 9. Multi-Modal Data Fusion Techniques and Applications
      1. 9.1 Introduction
      2. 9.2 Architecture Design in Multi-Modal Systems (1/2)
      3. 9.2 Architecture Design in Multi-Modal Systems (2/2)
        1. 9.2.1 Logical Architecture Design
        2. 9.2.2 Physical Architecture Design
      4. 9.3 Fusion Techniques for Heterogeneous Sensor Networks (1/2)
      5. 9.3 Fusion Techniques for Heterogeneous Sensor Networks (2/2)
        1. 9.3.1 Data Alignment
        2. 9.3.2 Multi-Modal Techniques for State Estimation and Localization
        3. 9.3.3 Fusion of Multi-Modal Cues for Event Analysis
      6. 9.4 Applications
        1. 9.4.1 Surveillance Applications
        2. 9.4.2 Ambient Intelligence Applications
        3. 9.4.3 Video Conferencing
        4. 9.4.4 Automotive Applications
      7. 9.5 Conclusions
      8. References
    4. Chapter 10. Spherical Imaging in Omnidirectional Camera Networks
      1. 10.1 Introduction
      2. 10.2 Omnidirectional Imaging (1/2)
      3. 10.2 Omnidirectional Imaging (2/2)
        1. 10.2.1 Cameras
        2. 10.2.2 Projective Geometry for Catadioptric Systems
        3. 10.2.3 Spherical Camera Model
        4. 10.2.4 Image Processing on the Sphere
      4. 10.3 Calibration of Catadioptric Cameras
        1. 10.3.1 Intrinsic Parameters
        2. 10.3.2 Extrinsic Parameters
      5. 10.4 Multi-Camera Systems (1/2)
      6. 10.4 Multi-Camera Systems (2/2)
        1. 10.4.1 Epipolar Geometry for Paracatadioptric Cameras
        2. 10.4.2 Disparity Estimation
      7. 10.5 Sparse Approximations and Geometric Estimation
        1. 10.5.1 Correlation Estimation with Sparse Approximations
        2. 10.5.2 Distributed Coding of 3D Scenes
      8. 10.6 Conclusions
      9. References
  12. Part 3: Multi-View Coding
    1. Chapter 11. Video Compression for Camera Networks: A Distributed Approach
      1. 11.1 Introduction
      2. 11.2 Classic Approach to Video Coding
      3. 11.3 Distributed Source Coding (1/2)
      4. 11.3 Distributed Source Coding (2/2)
        1. 11.3.1 Slepian-Wolf Theorem
        2. 11.3.2 A Simple Example
        3. 11.3.3 Channel Codes for Binary Source DSC
        4. 11.3.4 Wyner-Ziv Theorem
      5. 11.4 From DSC to DVC (1/2)
      6. 11.4 From DSC to DVC (2/2)
        1. 11.4.1 Applying DSC to Video Coding
        2. 11.4.2 PRISM Codec
        3. 11.4.3 Stanford Approach
        4. 11.4.4 Remarks
      7. 11.5 Applying DVC to Multi-View Systems
        1. 11.5.1 Extending Mono-View Codecs
        2. 11.5.2 Remarks on Multi-View Problems
      8. 11.6 Conclusions
      9. References
    2. Chapter 12. Distributed Compression in Multi-Camera Systems
      1. 12.1 Introduction
      2. 12.2 Foundations of Distributed Source Coding
      3. 12.3 Structure and Properties of the Plenoptic Data
      4. 12.4 Distributed Compression of Multi-View Images
      5. 12.5 Multi-Terminal Distributed Video Coding
      6. 12.6 Conclusions
      7. References
  13. Part 4: Multi-Camera Human Detection, Tracking, Pose and Behavior Analysis
    1. Chapter 13. Online Learning of Person Detectors by Co-Training from Multiple Cameras
      1. 13.1 Introduction
      2. 13.2 Co-Training and Online Learning
        1. 13.2.1 Co-Training
        2. 13.2.2 Boosting for Feature Selection
      3. 13.3 Co-Training System
        1. 13.3.1 Scene Calibration
        2. 13.3.2 Online Co-Training
      4. 13.4 Experimental Results (1/2)
      5. 13.4 Experimental Results (2/2)
        1. 13.4.1 Test Data Description
        2. 13.4.2 Indoor Scenario
        3. 13.4.3 Outdoor Scenario
        4. 13.4.4 Resources
      6. 13.5 Conclusions and Future Work
      7. References
    2. Chapter 14. Real-Time 3D Body Pose Estimation
      1. 14.1 Introduction
      2. 14.2 Background
        1. 14.2.1 Tracking
        2. 14.2.2 Example-Based Methods
      3. 14.3 Segmentation
      4. 14.4 Reconstruction
      5. 14.5 Classifier
        1. 14.5.1 Classifier Overview
        2. 14.5.2 Linear Discriminant Analysis
        3. 14.5.3 Average Neighborhood Margin Maximization
      6. 14.6 Haarlets
        1. 14.6.1 3D Haarlets
        2. 14.6.2 Training
        3. 14.6.3 Classification
        4. 14.6.4 Experiments
      7. 14.7 Rotation Invariance
        1. 14.7.1 Overhead Tracker
        2. 14.7.2 Experiments
      8. 14.8 Results and Conclusions
      9. References
    3. Chapter 15. Multi-Person Bayesian Tracking with Multiple Cameras
      1. 15.1 Introduction
        1. 15.1.1 Key Factors and Related Work
        2. 15.1.2 Approach and Chapter Organization
      2. 15.2 Bayesian Tracking Problem Formulation
        1. 15.2.1 Single-Object 3D State and Model Representation
        2. 15.2.2 The Multi-Object State Space
      3. 15.3 Dynamic Model
        1. 15.3.1 Joint Dynamic Model
        2. 15.3.2 Single-Object Dynamic Model
      4. 15.4 Observation Model
        1. 15.4.1 Foreground Likelihood
        2. 15.4.2 Color Likelihood
      5. 15.5 Reversible-Jump MCMC
        1. 15.5.1 Human Detection
        2. 15.5.2 Move Proposals
        3. 15.5.3 Summary
      6. 15.6 Experiments
        1. 15.6.1 Calibration and Slant Removal
        2. 15.6.2 Results
      7. 15.7 Conclusions
      8. References
    4. Chapter 16. Statistical Pattern Recognition for Multi-Camera Detection, Tracking, and Trajectory Analysis
      1. 16.1 Introduction
      2. 16.2 Background Modeling
      3. 16.3 Single-Camera Person Tracking (1/2)
      4. 16.3 Single-Camera Person Tracking (2/2)
        1. 16.3.1 The Tracking Algorithm
        2. 16.3.2 Occlusion Detection and Classification
      5. 16.4 Bayesian-Competitive Consistent Labeling
      6. 16.5 Trajectory Shape Analysis for Abnormal Path Detection
        1. 16.5.1 Trajectory Shape Classification
      7. 16.6 Experimental Results
      8. References
    5. Chapter 17. Object Association Across Multiple Cameras
      1. 17.1 Introduction
      2. 17.2 Related Work
        1. 17.2.1 Multiple Stationary Cameras with Overlapping Fields of View
        2. 17.2.2 Multiple Stationary Cameras with Nonoverlapping Fields of View
        3. 17.2.3 Multiple Pan-Tilt-Zoom Cameras
      3. 17.3 Inference Framework
      4. 17.4 Evaluating an Association Using Appearance Information
        1. 17.4.1 Estimating the Subspace of BTFs Between Cameras
      5. 17.5 Evaluating an Association Using Motion Information (1/2)
      6. 17.5 Evaluating an Association Using Motion Information (2/2)
        1. 17.5.1 Data Model
        2. 17.5.2 Maximum Likelihood Estimation
        3. 17.5.3 Simulations
        4. 17.5.4 Real Sequences
      7. 17.6 Conclusions
      8. References
    6. Chapter 18. Video Surveillance Using a Multi-Camera Tracking and Fusion System
      1. 18.1 Introduction
      2. 18.2 Single-Camera Surveillance System Architecture
      3. 18.3 Multi-Camera Surveillance System Architecture (1/2)
      4. 18.3 Multi-Camera Surveillance System Architecture (2/2)
        1. 18.3.1 Data Sharing
        2. 18.3.2 System Design
        3. 18.3.3 Cross-Camera Calibration
        4. 18.3.4 Data Fusion
      5. 18.4 Examples
        1. 18.4.1 Critical Infrastructure Protection
        2. 18.4.2 Hazardous Lab Safety Verification
      6. 18.5 Testing and Results
      7. 18.6 Future Work
      8. 18.7 Conclusions
      9. References
    7. Chapter 19. Composite Event Detection in Multi-Camera and Multi-Sensor Surveillance Networks
      1. 19.1 Introduction
      2. 19.2 Related Work
      3. 19.3 Spatio-Temporal Composite Event Detection (1/2)
      4. 19.3 Spatio-Temporal Composite Event Detection (2/2)
        1. 19.3.1 System Infrastructure
        2. 19.3.2 Event Representation and Detection
        3. 19.3.3 Event Description Language
        4. 19.3.4 Primitive Events and User Interfaces
      5. 19.4 Composite Event Search
        1. 19.4.1 IBM Smart Surveillance Solution
        2. 19.4.2 Query-Based Search and Browsing
      6. 19.5 Case Studies
        1. 19.5.1 Application: Retail Loss Prevention
        2. 19.5.2 Application: Tailgating Detection
        3. 19.5.3 Application: False Positive Reduction
      7. 19.6 Conclusions and Future Work
      8. References
  14. Part 5: Smart Camera Networks: Architecture, Middleware, and Applications
    1. Chapter 20. Toward Pervasive Smart Camera Networks
      1. 20.1 Introduction
      2. 20.2 The Evolution of Smart Camera Systems
        1. 20.2.1 Single Smart Cameras
        2. 20.2.2 Distributed Smart Cameras
        3. 20.2.3 Smart Cameras in Sensor Networks
      3. 20.3 Future and Challenges
        1. 20.3.1 Distributed Algorithms
        2. 20.3.2 Dynamic and Heterogeneous Network Architectures
        3. 20.3.3 Privacy and Security
        4. 20.3.4 Service Orientation and User Interaction
      4. 20.4 Conclusions
      5. References
    2. Chapter 21. Smart Cameras for Wireless Camera Networks: Architecture Overview
      1. 21.1 Introduction
      2. 21.2 Processing in a Smart Camera Network
        1. 21.2.1 Centralized Processing
        2. 21.2.2 Distributed Processing
      3. 21.3 Smart Camera Architecture
        1. 21.3.1 Sensor Modules
        2. 21.3.2 Processing Module
        3. 21.3.3 Communication Modules
      4. 21.4 Example Wireless Smart Cameras
        1. 21.4.1 MeshEye
        2. 21.4.2 CMUcam3
        3. 21.4.3 WiCa
        4. 21.4.4 CITRIC
      5. 21.5 Conclusions
      6. References
    3. Chapter 22. Embedded Middleware for Smart Camera Networks and Sensor Fusion
      1. 22.1 Introduction
      2. 22.2 Smart Cameras
      3. 22.3 Distributed Smart Cameras
        1. 22.3.1 Challenges of Distributed Smart Cameras
        2. 22.3.2 Application Development for Distributed Smart Cameras
      4. 22.4 Embedded Middleware for Smart Camera Networks
        1. 22.4.1 Middleware Architecture
        2. 22.4.2 General-Purpose Middleware
        3. 22.4.3 Middleware for Embedded Systems
        4. 22.4.4 Specific Requirements of Distributed Smart Cameras
      5. 22.5 The Agent-Oriented Approach
        1. 22.5.1 From Objects to Agents
        2. 22.5.2 Mobile Agents
        3. 22.5.3 Code Mobility and Programming Languages
        4. 22.5.4 Mobile Agents for Embedded Smart Cameras
      6. 22.6 An Agent System for Distributed Smart Cameras (1/3)
      7. 22.6 An Agent System for Distributed Smart Cameras (2/3)
      8. 22.6 An Agent System for Distributed Smart Cameras (3/3)
        1. 22.6.1 DSCAgents
        2. 22.6.2 Decentralized Multi-Camera Tracking
        3. 22.6.3 Sensor Fusion
      9. 22.7 Conclusions
      10. References
    4. Chapter 23. Cluster-Based Object Tracking by Wireless Camera Networks
      1. 23.1 Introduction
      2. 23.2 Related Work
        1. 23.2.1 Event-Driven Clustering Protocols
        2. 23.2.2 Distributed Kalman Filtering
      3. 23.3 Camera Clustering Protocol (1/2)
      4. 23.3 Camera Clustering Protocol (2/2)
        1. 23.3.1 Object Tracking with Wireless Camera Networks
        2. 23.3.2 Clustering Protocol
      5. 23.4 Cluster-Based Kalman Filter Algorithm (1/2)
      6. 23.4 Cluster-Based Kalman Filter Algorithm (2/2)
        1. 23.4.1 Kalman Filter Equations
        2. 23.4.2 State Estimation
        3. 23.4.3 System Initialization
      7. 23.5 Experimental Results (1/2)
      8. 23.5 Experimental Results (2/2)
        1. 23.5.1 Simulator Environment
        2. 23.5.2 Testbed Implementation
      9. 23.6 Conclusions and Future Work
      10. References
  15. Outlook (1/2)
  16. Outlook (2/2)
  17. Index (1/4)
  18. Index (2/4)
  19. Index (3/4)
  20. Index (4/4)

Product information

  • Title: Multi-Camera Networks
  • Author(s): Hamid Aghajan, Andrea Cavallaro
  • Release date: May 2009
  • Publisher(s): Academic Press
  • ISBN: 9780080878003