Data Fusion

The fusion of data sets (LRF data, panoramic images) starts with the transformation of coordinate systems (i.e., those of the LRF and camera attitudes) into a uniform reference coordinate system (i.e., a world coordinate system). For this step, the attitudes of both systems need to be known (see Chapter 5). The transformation of LRF data into the reference coordinate system applies an affine transform between 3D coordinate systems (see Chapter 3). The known 3D object points Pw (vertices of the triangulation) are given by the LRF system and are textured with color information provided by the recorded panoramic images. Therefore, panoramic image coordinates have to be calculated for given object points Pw, and this is the main subject of this chapter.
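The affine transform between 3D coordinate systems mentioned above can be sketched as follows. This is a minimal illustration, not the book's implementation: the rotation R and translation t below are made-up values, whereas in practice they come from the calibrated attitude of the LRF (Chapter 5).

```python
import numpy as np

def lrf_to_world(points_lrf, R, t):
    """Apply the rigid transform P_w = R @ P_lrf + t to each row of points_lrf."""
    return points_lrf @ R.T + t

# Illustrative attitude: a 90-degree rotation about the z-axis plus a translation.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 2.0])

points = np.array([[1.0, 0.0, 0.0]])   # one LRF point on the x-axis
print(lrf_to_world(points, R, t))      # -> [[1. 1. 2.]]
```

Once all LRF vertices are expressed in the world frame, the texturing step reduces to projecting each vertex into the panoramic image, as discussed in Section 10.1.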

10.1 Determination of Camera Image Coordinates

This section is organized into four subsections, each addressing one possible option for an “acquisition set-up”. As discussed before, we differentiate between single-projection-center panoramas, multi-projection-center panoramas with ω = 0, and multi-projection-center panoramas with ω ≠ 0 (typically used as a stereo setup). The sensor-line camera used should provide “sufficient” rotational accuracy (i.e., on a circular path), even under (minor) mechanical variations. In the fourth subsection, however, we also consider the general case, taking inaccuracies into account and thus moving further away from idealizing assumptions.
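For orientation, the simplest of these set-ups, the single-projection-center panorama, can be sketched as a mapping from a 3D point (already expressed in the camera coordinate system) to panoramic image coordinates: the column follows the azimuth of the rotating sensor line, and the row follows a pinhole projection along that line. The names f, cols, and rows below are assumptions for this sketch, not the book's notation.

```python
import numpy as np

def panoramic_coords(P, f, cols, rows):
    """Map a camera-frame point P = (x, y, z) to (column, row) in a
    single-projection-center cylindrical panorama (illustrative model)."""
    x, y, z = P
    phi = np.arctan2(y, x)                  # azimuth of the sensor line, in [-pi, pi)
    u = (phi + np.pi) / (2 * np.pi) * cols  # column: linear in rotation angle
    r = np.hypot(x, y)                      # distance to the rotation axis
    v = rows / 2 - f * z / r                # row: pinhole projection along the line
    return u, v

# A point straight ahead on the x-axis lands at the image center.
print(panoramic_coords((1.0, 0.0, 0.0), f=500.0, cols=3600, rows=1000))  # -> (1800.0, 500.0)
```

The multi-projection-center cases differ in that the projection center itself moves on a circle, so the azimuth and depth terms no longer decouple this simply.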

We start with some definitions using the calculated (calibrated) ...
