20.1 Introduction

In Chapter 19, we presented a method for tracking the pose trajectory (position and orientation as a function of time) of a rigid falling body using only external video camera sensors. In this method, video images from multiple cameras are used to view feature points positioned on the surface of the rigid body. Thus, the measurement set consists only of feature point positions viewed by the cameras, with feature point occlusions creating data dropouts for certain rigid body orientations. In addition, no velocity or acceleration measurements are directly obtainable, so estimates of higher-order derivative terms related to the position and orientation of the rigid body have reduced accuracy.

In this chapter, we examine the utility of adding body-mounted inertial sensors that include multiaxis accelerometers and gyroscopes which, when packaged together, are referred to as an inertial measurement unit (IMU). An IMU provides measurements of linear accelerations along the three body axes and angular velocities about the three body axes. This creates a requirement to fuse the temporal stream of IMU measurement data with the temporal stream of video image data, producing a tracking filter that processes both as a single asynchronous measurement data stream.
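The fusion requirement above amounts to interleaving two independently timestamped measurement streams into one time-ordered sequence, with the filter applying the update model appropriate to each sensor type. A minimal sketch of that interleaving is shown below; the sample rates, timestamps, and placeholder payloads are illustrative assumptions, not values from the text.

```python
import heapq

# Hypothetical timestamped measurements: (time_s, sensor_type, payload).
# IMU data arrives at a higher rate than camera frames.
imu_stream = [(0.005, "imu", "accel/gyro sample 0"),
              (0.015, "imu", "accel/gyro sample 1"),
              (0.025, "imu", "accel/gyro sample 2")]
camera_stream = [(0.000, "cam", "feature points, frame 0"),
                 (0.033, "cam", "feature points, frame 1")]

def fused_stream(imu, cam):
    """Merge two time-sorted streams into one time-ordered sequence.

    The tracking filter would consume this sequence, performing a time
    update to each measurement's timestamp and then a measurement update
    with the model matching the sensor type.
    """
    yield from heapq.merge(imu, cam, key=lambda m: m[0])

processed = [(t, sensor) for t, sensor, _ in
             fused_stream(imu_stream, camera_stream)]
```

The merge makes no assumption about the relative sample rates of the two sensors, which is exactly the property an asynchronous tracking filter needs.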

Methods for tracking a rigid body using only successive temporal images of feature points from a single camera (monocular imaging) were first implemented in Refs [1] and [2]. In the paper by Iu and Wohn [1], they ...
