Stereo Imaging
Now we are in a position to address stereo imaging.[196] We are all familiar with the stereo imaging capability that our eyes give us. To what degree can we emulate this capability in computational systems? Computers accomplish this task by finding correspondences between points seen by one imager and the same points seen by the other imager. Given such correspondences and a known baseline separation between the cameras, we can compute the 3D location of the points. Although the search for corresponding points can be computationally expensive, we can use our knowledge of the geometry of the system to narrow the search space as much as possible. In practice, stereo imaging involves four steps when using two cameras.
1. Mathematically remove radial and tangential lens distortion; this is called undistortion and is detailed in Chapter 11. The outputs of this step are undistorted images.
2. Adjust for the angles and distances between cameras, a process called rectification. The outputs of this step are images that are row-aligned[197] and rectified.
3. Find the same features in the left and right[198] camera views, a process known as correspondence. The output of this step is a disparity map, where the disparities are the differences in x-coordinates on the image planes of the same feature viewed in the left and right cameras: xl – xr.
4. If we know the geometric arrangement of the cameras, then we can turn the disparity map into distances by triangulation. This step is called reprojection, and the output is a depth map; the triangulation relation and code sketches for the full pipeline follow below.
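The reason a disparity map can be converted to depth is easiest to see in the idealized frontal-parallel arrangement that rectification works to achieve: two identical, row-aligned cameras whose principal rays are parallel. Under that assumption, with f the focal length and T the baseline between the two camera centers, similar triangles relate disparity and depth:

\[
  d \;=\; x^{l} - x^{r} \;=\; \frac{fT}{Z}
  \qquad\Longrightarrow\qquad
  Z \;=\; \frac{fT}{d}
\]

Depth is inversely proportional to disparity, so nearby points produce large disparities and distant points produce small ones; depth resolution therefore degrades quickly with distance.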
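The first two steps can be collapsed into a single remapping pass. The sketch below uses OpenCV's C++ interface (the book's own listings may use a different interface), and the parameter names are illustrative assumptions: M1, M2 and D1, D2 are the intrinsic matrices and distortion coefficients from the calibration of Chapter 11, and R, T describe the pose of the right camera relative to the left.

#include <opencv2/opencv.hpp>

// Sketch of steps 1 and 2: undistortion and rectification.
// Assumes the calibration results from Chapter 11 are available:
//   M1, M2 - 3x3 camera intrinsic matrices
//   D1, D2 - distortion coefficient vectors
//   R, T   - rotation and translation of the right camera relative to the left
void rectifyPair(const cv::Mat& M1, const cv::Mat& D1,
                 const cv::Mat& M2, const cv::Mat& D2,
                 const cv::Mat& R,  const cv::Mat& T,
                 const cv::Mat& leftRaw, const cv::Mat& rightRaw,
                 cv::Mat& leftRect, cv::Mat& rightRect, cv::Mat& Q)
{
    cv::Size imageSize = leftRaw.size();
    cv::Mat R1, R2, P1, P2;

    // Compute the rectification rotations (R1, R2), the new projection
    // matrices (P1, P2), and the disparity-to-depth mapping matrix Q.
    cv::stereoRectify(M1, D1, M2, D2, imageSize, R, T,
                      R1, R2, P1, P2, Q);

    // Build lookup maps that undistort and rectify in a single remap.
    cv::Mat map1x, map1y, map2x, map2y;
    cv::initUndistortRectifyMap(M1, D1, R1, P1, imageSize, CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(M2, D2, R2, P2, imageSize, CV_32FC1, map2x, map2y);

    // Apply the maps: the outputs are undistorted, row-aligned images.
    cv::remap(leftRaw,  leftRect,  map1x, map1y, cv::INTER_LINEAR);
    cv::remap(rightRaw, rightRect, map2x, map2y, cv::INTER_LINEAR);
}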
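Steps 3 and 4 can then be sketched as a block-matching correspondence search followed by reprojection through the Q matrix returned by cv::stereoRectify(). Again this is only a sketch under assumed parameter values (64 disparities, a 21 x 21 block) rather than the book's own listing; StereoBM expects 8-bit single-channel rectified images.

#include <opencv2/opencv.hpp>

// Sketch of steps 3 and 4: block-matching correspondence and reprojection.
// leftRect and rightRect are the rectified 8-bit grayscale images from above;
// Q is the 4x4 disparity-to-depth matrix returned by cv::stereoRectify().
cv::Mat computeDepth(const cv::Mat& leftRect, const cv::Mat& rightRect,
                     const cv::Mat& Q)
{
    // Block matcher: search 64 disparities with a 21x21 SAD window.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);

    // The disparity map comes back as 16-bit fixed point with 4 fractional
    // bits, so divide by 16 to obtain disparities in pixels.
    cv::Mat disp16, disparity;
    bm->compute(leftRect, rightRect, disp16);
    disp16.convertTo(disparity, CV_32F, 1.0 / 16.0);

    // Reproject each (x, y, disparity) triple to an (X, Y, Z) point using Q.
    cv::Mat xyz;
    cv::reprojectImageTo3D(disparity, xyz, Q, /*handleMissingValues=*/true);
    return xyz;  // 3-channel image of 3D coordinates; the third channel is depth Z
}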