Appendix E
CAMERA CALIBRATION


As described in Section 11.2, a digital image is a discrete array of gray level values. The objective of camera calibration is to determine all of the parameters that are necessary to relate the pixel coordinates (r, c) to the world coordinates (x, y, z) of a point in the camera’s field of view. In other words, given the coordinates of a point P relative to the world coordinate frame, after we have calibrated the camera we will be able to compute (r, c), the image pixel coordinates for the projection of this point.
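To make this mapping concrete, the sketch below traces a world point through a standard pinhole model: a rigid transformation into the camera frame, perspective projection onto the image plane, and conversion to pixel coordinates. The pose parameters R and t, the focal length f, and the sign conventions in the final step are illustrative assumptions; sx, sy, or, and oc are the sensor parameters introduced in Section E.1 below.

    import numpy as np

    def project_point(p_world, R, t, f, sx, sy, o_r, o_c):
        # Rigid transformation: world coordinates -> camera-frame coordinates
        x, y, z = R @ np.asarray(p_world) + t
        # Perspective projection onto the image plane, assuming focal length f
        u = f * x / z
        v = f * y / z
        # Image-plane coordinates -> pixel coordinates, under one common
        # convention (row index increases downward, column index to the right)
        r = o_r - v / sy
        c = o_c + u / sx
        return r, c

For instance, with the camera frame coincident with the world frame (R = I, t = 0), the point (0.1, 0, 1) projects, under this convention, to a pixel in the same row as the principal point and to its right.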

Camera calibration is a ubiquitous problem in computer vision. Numerous solution methods have been developed, many of which have been implemented in open-source software libraries (e.g., OpenCV [17] and Matlab’s Computer Vision Toolbox [26]). Here, we present an approach that is conceptually straightforward and relatively easy to implement.
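For comparison with the approach developed here, a typical OpenCV calibration detects a planar checkerboard in several images and passes the detected corners to cv2.calibrateCamera, which returns the camera matrix, lens distortion coefficients, and the pose of the board in each view. The 9 x 6 pattern size, the 25 mm square size, and the calib_*.png file names below are placeholders.

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)      # interior corners of the checkerboard (placeholder)
    square = 0.025        # side length of a checkerboard square in meters (placeholder)

    # Corner coordinates in the board's own frame; the board lies in the z = 0 plane
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_points, img_points = [], []
    for fname in sorted(glob.glob("calib_*.png")):    # placeholder file names
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsic parameters (camera matrix K, distortion coefficients) and, for
    # each image, the extrinsic parameters (rotation and translation of the board)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)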

E.1 The Image Plane and the Sensor Array

In order to relate digital images to the 3D world, we must first determine the relationship between the image-plane coordinates (u, v) and the pixel coordinates (r, c). We typically define the origin of the pixel array to be located at a corner of the image rather than at the center of the image. Let the pixel array coordinates of the pixel that contains the principal point be given by (or, oc). In general, the sensing elements in the camera will not be of unit size, nor will they necessarily be square. Denote by sx and sy the horizontal and vertical dimensions, ...
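Assuming sx and sy denote the horizontal and vertical dimensions of a sensing element, one common convention relates the two coordinate systems by

    u = sx(c - oc),    v = -sy(r - or),

where the minus sign reflects the fact that the row index increases downward while v increases upward; the exact signs depend on how the pixel array is indexed relative to the image-plane axes.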
