Errata for Learning OpenCV 3


The errata list is a list of errors and their corrections that were found after the product was released.

The following errata were submitted by our customers and have not yet been approved or disproved by the author or editor. They solely represent the opinion of the customer.


Version | Location | Description | Submitted by | Date submitted
PDF Page p.198
The last paragraph

> To create a sequence entry, you first provide the string name for the entry, and then the entry itself.

A sequence entry does not have a string name; only map entries do. Should this read "map"?
"To create a sequence entry" -> "To create a map entry"

Kouichi Matsuda  Jun 07, 2017 
Printed Page 651

In the rotation matrices for Rx, Ry, and Rz, the -/+ sign in front of the sin() term is completely the opposite of the conventional definition. Which direction of rotation do you define as positive?
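
For comparison, the usual right-handed convention (a positive angle rotates counterclockwise when viewed from the positive end of the axis) gives, for example:

R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}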

shadow_wxh  Sep 09, 2017 
Printed footnote 18

"(i.e., j \in [0,k-1])" is really not a "that is" enhance of the preceding sentence. I'd just drop it, but maybe it could be changed to something useful.

Douglas Morgan  Nov 20, 2017 
ePub Page Learning Opencv Example About Mahalanobis Distance
C++ example code implementation

The cv::Mahalanobis function takes the inverse covariance matrix as a parameter, but in the example the inverse is NOT computed; the original covariance matrix is passed instead, which is WRONG.
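
A minimal sketch of the corrected call (the data and variable names are illustrative, assuming one sample per row):

cv::Mat samples(100, 3, CV_64F);           // N x d data, one sample per row
cv::randn(samples, cv::Scalar(0.0), cv::Scalar(1.0));
cv::Mat covar, mean;
cv::calcCovarMatrix(samples, covar, mean,
                    cv::COVAR_NORMAL | cv::COVAR_ROWS | cv::COVAR_SCALE);
cv::Mat icovar;
cv::invert(covar, icovar, cv::DECOMP_SVD); // the inverse that the book's example skips
double d = cv::Mahalanobis(samples.row(0), mean, icovar);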

Pavel  Mar 12, 2023 
Printed Page 1
last paragraph

In the last paragraph of the page:

"in our previous example, if it.nplanes were 4, then it.size would have been 16"

There is no example matching that description, and the example code preceding it doesn't fully explain the use of NAryMatIterator. Since it.nplanes is always 1 there, it doesn't make much sense to run a for loop that executes only once. The example should present at least one case where the array is non-continuous and therefore generates multiple planes, making the plane iterator useful.
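
A minimal sketch of such a case (an ROI view makes the data non-continuous, so the iterator produces one plane per row):

cv::Mat big = cv::Mat::ones(10, 10, CV_32F);
cv::Mat roi = big(cv::Rect(2, 2, 5, 5));   // a view: its rows are not contiguous in memory
const cv::Mat* arrays[] = { &roi, 0 };
cv::Mat planes[1];
cv::NAryMatIterator it(arrays, planes);
double sum = 0.0;                          // here it.nplanes == 5, one plane per row
for (size_t p = 0; p < it.nplanes; ++p, ++it)
    sum += cv::sum(it.planes[0])[0];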


Xinhua Wen  Mar 23, 2017 
PDF Page 30
2nd code snippet

createTrackbar("Position", "Example 2-4", &g_slider_position, frames, onTrackbarSlide);

should be

cv::createTrackbar("Position", "Example 2-4", &g_slider_position, frames, onTrackbarSlide);

(as in the full code of Example 2-4)

Anonymous  Feb 15, 2018 
Printed Page 33
top of page

"Such sampling introduces high frequencies into the resulting signal (image). To avoid this, we want to first run a high-pass filter over the signal to band-limit its frequencies so that they are all below the sampling frequency."

Did the authors mean low-pass filter? A high-pass filter allows high-frequency values to pass while blocking low frequencies below a certain threshold, which seems like the opposite of what the authors meant to say.

The example 2-6 code which immediately follows this paragraph calls pyrDown() which applies a Gaussian blur to the image - I believe Gaussian blurring is a low-pass filter.

Anonymous  Aug 21, 2018 
Printed Page 35
last line of Example 2_8

Should

img_cny.at<uchar>(x, y) = 128;

instead be

img_cny.at<uchar>(y, x) = 128;

?

Joel Merritt  Apr 03, 2017 
Printed Page 50
last line

Should the

cv::Mat33f::eye()

in the last line of p. 50 be

cv::Matx33f::eye()

?

Joel Merritt  Apr 03, 2017 
Printed Page 69
Exercises

On p. 69 in question 2.a.

Should cv::Mat33f be cv::Matx33f ?

In question 3.a.

Should cv::Mat<> be cv::Matx<> ?

In question 3.c.

Should cv::Mat<> be cv::Matx<> ?

Joel Merritt  Apr 03, 2017 
PDF Page 72
2nd paragraph

(i_0, i_i, ..., i_{N_d-1})

Shouldn't it be the following?

(i_0, i_1, ..., i_{N_d-1})

(2nd i's index: i -> 1)

Anonymous  Feb 19, 2018 
PDF Page 76
Table 4-2. title

Table 4-2, titled "cv::Mat constructors that copy data from other cv::Mats", lists 5 copy constructors which DO NOT copy the underlying data. According to https://docs.opencv.org/4.2.0: "No data is copied by these constructors". That title is misleading.

Paul Jurczak  Apr 04, 2020 
PDF Page 78
last code snippet

The first line of the code snippet,

cv::Mat m = cv::Mat::eye( 10, 10, 32FC1 );

should it be the following?

cv::Mat m = cv::Mat::eye( 10, 10, CV_32FC1 );

If so, the same applies to the first code snippet on page 79 (32FC2 -> CV_32FC2).

Anonymous  Feb 20, 2018 
ePub Page 80
Kindle App: code above section "The N-ary Array Iterator: NAryMatIterator

The code example for computing the L2 norm using MatConstIterator needs three corrections to compile:

1) Add "_" to MatConstIterator
2) Add the cv::Vec3f template argument to m.begin()
3) Add the same template argument to m.end()

// Original Code from book
//cv::MatConstIterator<cv::Vec3f> it = m.begin();
//while( it != m.end() )

// Modified code
cv::MatConstIterator_<cv::Vec3f> it = m.begin<cv::Vec3f>();
while( it != m.end<cv::Vec3f>() )
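
For reference, a minimal compilable sketch built around the corrected lines:

cv::Mat m(10, 10, CV_32FC3, cv::Scalar(1, 2, 3));
double sum2 = 0.0;
cv::MatConstIterator_<cv::Vec3f> it = m.begin<cv::Vec3f>();
while (it != m.end<cv::Vec3f>()) {
    sum2 += (*it).dot(*it);   // accumulate the squared magnitude of each element
    ++it;
}
double l2 = std::sqrt(sum2);  // requires <cmath>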

John Nafziger  Mar 22, 2018 
PDF Page 94
last code snippet

Does Mat_<T> have a non-template version of the 'at' member? The second line of code does not compile on MSVC 2015 using OpenCV 3.2.

Anonymous  Jan 26, 2017 
Printed Page 96
Second paragraph from bottom, lines 2-3

Should

const cv::SparseMat_<t>*

be instead

const cv::SparseMat_<T>*

?

Joel Merritt  Apr 04, 2017 
PDF Page 100
2nd row, 1st column

cv::arrToMat

should be

cv::cvarrToMat

Anonymous  Feb 25, 2018 
Printed Page 105
imshow line of Example 5-1

In the last paragraph, it states that "the results are put into src1 and displayed."

But in the imshow command in the Example 5-1 listing src2 is shown.

Joel Merritt  Apr 04, 2017 
Printed Page 108
First paragraph of text

It says

cv::bitwise_and() is a per-element "exclusive or" (XOR) operation.

should say

cv::bitwise_xor() is a per-element "exclusive or" (XOR) operation.

Joel Merritt  Apr 04, 2017 
Printed Page 151
1st paragraph

"if B exists, it is unique". No, it's not, -B works as well.

Douglas Morgan  Nov 03, 2017 
Printed Page 162
Footnote 2

Original text:

The algorithm used by cv::fillComvexPoly() is ...

Correct text:

The algorithm used by cv::fillConvexPoly() is ...

Benjamin Crawford  Jul 16, 2019 
Printed Page 176
second paragraph

I think the "lefthand side (x)" in the second paragraph should be "lefthand side (A)" or "lefthand side (U, W, Vt)". The linear system's structure has nothing to do with x.

Shihao Xu  Nov 01, 2017 
Printed Page 176
2nd paragraph

"same lefthand side (x)". Should be:
"same left hand side (UWV^T)".

Douglas Morgan  Nov 03, 2017 
Printed Page 176
footnote 176

There should be a statement that the other diagonal elements of diag(W)^{-1} are zero. As written, we don't know what to do when \lambda_i < \epsilon.

Douglas Morgan  Nov 03, 2017 
Printed Page 179
3rd paragraph

1) "covariation" should be "covariance"
2) "rotate back to your original basis..." This does not correctly describe how to weight a vector of IID zero mean variables to have a given covariance. You don't rotate and don't even think about bases. You multiply by a square root, S (in the sense of S*S^T=covariance) of the covariance. That typically has nothing to do with a rotation or change of basis.

Douglas Morgan  Nov 05, 2017 
Printed Page 179
3rd paragraph

Correction/addition to my previous correction:
2) 2) "rotate back to your original basis..." First off, if you change the basis of a vector of zero mean unit variance uncorrelated variables, the result is again zero mean, unit variance, and uncorrelated. If you first make the variables have variances equal to the eigenvalues of the covariance, then you can change the basis to have the new vector's covariance match the given covariance. However, in N-D, a change of basis is not generally a rotation. So rewrite however you want, but text as written is wrong in several ways.

Douglas Morgan  Nov 06, 2017 
Printed Page 189
3rd paragraph

"cv::Mat::empty()==true". Strange mix of static member function notation to describe a situation with a non-static member function. Better would be something like "empty() returns true" or just drop the parenthetical phrase -- it's hardly seems necessary.

The same thing happens on some other page and maybe pages. Search for them.

Douglas Morgan  Nov 05, 2017 
Printed Page 193
4th paragraph

"returns true only if grab was successful" should be something like "returns true if grab was successful and false otherwise". As stated, grab could be successful and return false.

Douglas Morgan  Nov 05, 2017 
Printed Page 199
near bottom of page

The << operator is used on a cv::Mat object. I don't think the operator was ever discussed before, though I can't search to confirm. Seems like a good thing to introduce if indeed it hasn't been.

Douglas Morgan  Nov 05, 2017 
Printed Page 202
1st paragraph

"The returned value is an enumerated type..." should be
"The returned value is a sum of enumerators..."

Douglas Morgan  Nov 20, 2017 
Printed Page 205
Exercise 2 a

Exercise 2 a. reads "For the program of Exercise 2 . . ." I believe it is actually referring to Exercise 1.

Anonymous  Jun 06, 2019 
PDF Page 224
End of section on Affine Transformation

Possibly add a reference to estimateRigidTransform in the section on getAffineTransform.

Donald Munro  May 20, 2016 
Printed Page 249
3rd paragraph

"filters (also called kernels)" I've heard of kernels loosely referred to as filters just as shorthand for the filter implemented when the kernel is used in a cross-correlation filter. However, if asked, no one would ever say that the kernel is the filter or that that filter is the kernel. It would be better to keep the terms separate. The problem may be elsewhere also, so search and modify would be good.

Doug R Morgan  Nov 06, 2017 
Printed Page 250
1st paragraph

"linear kernels". Do a google search of:
"linear kernel" -learn -learning
and you'll quickly see that nobody uses the term "linear kernels" in regard to filtering. It is only used in learning and classifiers. So, the discussion of them ought to be modified to remover that term.

Douglas Morgan  Nov 05, 2017 
Printed Page 250
2nd paragraph

"convolution, though the term is often used somewhat casually in the computer vision community to include the application of any filter (linear or otherwise) over an entire image"

Why not be correct and just say filter when you mean filter? I can't recall anyone calling a non-linear filter a convolution. This is all over the book. Just search for convolution and check each usage.

Douglas Morgan  Nov 05, 2017 
Printed Page 250
2nd paragraph

"is also known as convolution" No, the filter is a cross-correlation filter. A convolution is different.

Douglas Morgan  Nov 07, 2017 
Printed Page 259
declaration of adaptiveThreshold

"Example 10-3" Shouldn't that better be "Table 10-2"? The example doesn't cover all the options that the table does.

Douglas Morgan  Nov 07, 2017 
Printed Page 261
Last paragraph on the page

Grammar problems:

OpenCV offers five different smoothing operations, each with its own associated library function, which each accomplish slightly different kinds of smoothing.

Possibly this could be the correct version:

OpenCV offers five different smoothing operations, each with its own associated library function, with each accomplishing slightly different kinds of smoothing.

Anonymous  Jan 05, 2018 
Printed Page 266
middle of page

The definitions of sigma_x and sigma_y each have one too many "-1"s. n_x and n_y should just be ksize.width and ksize.height, not ksize.width-1 and ksize.height-1. At least, browsing the code, it looks that way.

Douglas Morgan  Nov 07, 2017 
Printed Page 276
2nd paragraph

"pixel values covered by the kernel"

You never defined what you mean by "covered". If you already know, it is obvious that it means "in the kernel's support" (though the definition of support was relegated to a dismissive footnote, so you might have to redo the zero/non-zero discussion). If you don't already know, I'm sure it is not obvious at all.

Douglas Morgan  Nov 07, 2017 
Printed Page 277
footnote 16 and caption to figure 10-20

"under a square kernel" and "under the kernel" Same problem as with "covered by the kernel" on page 276.

Douglas Morgan  Nov 07, 2017 
Printed Page 280
definition of erode and dilate

"(i,j) \in kernel" So it looks like you are equating kernel and the support of a kernel. Describing this morphology-specific notion of "kernel" seems to have slipped through the cracks.

Douglas Morgan  Nov 07, 2017 
Printed Page 283
figure 10-25

In the middle graph, the two outermost non-zero step values should be zero. An erode operation will zero out the boundary pixels. Interestingly, the final graph is OK.

The graph would be OK if the outermost non-zero pixels were on the image boundary and pixel replication was used for out-of-image pixels. However, nothing in the graph makes it look like there is any image boundary involved.

Douglas Morgan  Nov 07, 2017 
Printed Page 284
figure 10-27

Opposite problem as figure 10-25 on page 283. Here you need to tack on non-zero pixels on either side of the blob in the middle graph. The heights are taken from the outermost non-zero pixels of the graph on the left. Again, interestingly, the graph on the right is OK.

Douglas Morgan  Nov 07, 2017 
Printed Page 290
table 10-4

"Ellipse" should be "Elliptical disk" or "Filled ellipse".

Douglas Morgan  Nov 07, 2017 
Printed Page 291
footnote 20

"n^2log(n)" should be n^2log^2(n).

Douglas Morgan  Nov 07, 2017 
Printed Page 293
first heading

cv::getDerivKernel() should be cv::getDerivKernels().

Douglas Morgan  Nov 07, 2017 
Printed Page 294
only equation

The numerator should have "/2" instead of just "2".

Douglas Morgan  Nov 07, 2017 
PDF Page 295
top most

"3. Load an interesting image, and then blur it with cv::smooth() using a Gaussian filter."
As far as I know, cv::smooth() has been gone for quite a long time, so how is one to solve this exercise, and where are param1 and param2? The text should change cv::smooth() to cv::blur() or something similar.
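
A minimal sketch of a modern replacement for the exercise (ksize and sigma stand in for the old param1/param2; the filename is illustrative):

cv::Mat img = cv::imread("interesting.jpg");
cv::Mat blurred;
cv::GaussianBlur(img, blurred, cv::Size(5, 5), 2.0); // ksize = 5x5, sigma = 2.0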

jsxyhelu  Jul 01, 2017 
Printed Page 307
the equations

The A \equiv ... B \equiv ... string of equations is very difficult to understand at a glance (or even under close inspection). It would be simpler to read if an extra space, or a comma with a space, were inserted before the B, T, X, and X'.

Douglas Morgan  Nov 07, 2017 
Printed Page 307
last paragraph

"quadrangle". Though Stanford has a nice quadrangle, the right word here is quadrilateral.

Douglas Morgan  Nov 07, 2017 
Printed Page 307
footnote 9

"orthogonal" should be "nonsingular".

Douglas Morgan  Nov 07, 2017 
Printed Page 308
figure 10-3

"Trapezoids" should be "Quadrilaterals".

Douglas Morgan  Nov 07, 2017 
Printed Page 309
the equation and later

Three errors:

The expression for dst(x,y) is the one that is used when WARP_INVERSE_MAP is given to warpAffine.

This sentence in the following paragraph: "This option is a convenience that allows for inverse warping from dst to src instead of from src to dst." doesn't really make any sense. You always go from src to dst, you just change how you interpret the transform matrix M.

In the next to last paragraph an "of" is missing in "application cv::warpAffine()"

Douglas Morgan  Nov 08, 2017 
Printed Page 311
last line (a matrix)

"-(1-alpha)*center_y" should not have the leading "-".

Douglas Morgan  Nov 08, 2017 
Printed Page 312
next to last paragraph (and above)

"In this case, all the of the third-channel entries must be set to 1 (i.e., the points must be supplied in homogeneous coordinates)." You can just pass in an array of 2 channel and transform() will add the final 1's itself. Further, none of the text leading up to here properly tells you that src can have the same number of channels as mtx.cols or one less than that.

Douglas Morgan  Nov 08, 2017 
Printed Page 313
equation at bottom

The equation for dst(x,y) is for when cv::WARP_INVERSE_MAP is used.

Douglas Morgan  Nov 08, 2017 
Printed Page 314
3rd paragraph

"(generally) some rhombus" would not be so jarring as "a quadrilateral (often a rhombus)"

Douglas Morgan  Nov 08, 2017 
Printed Page 315
1st paragraph

The f's in "(x=f*X/Z,f*Y/Z)" don't belong here.

Douglas Morgan  Nov 08, 2017 
Printed Page 326
3rd paragraph

"or type cv::U8" should be "of depth cv::8U". The cv:U8 part of the errors has already be noted in the errata.

Douglas Morgan  Nov 08, 2017 
Printed Page 326
3rd paragraph

Correction to my earlier correction. I said:

"or type cv::U8" should be "of depth cv::8U". The cv::U8 part of the error has already been noted in the errata.

But cv::8U should be CV_8U (as has been properly corrected previously by others).

Douglas Morgan  Nov 12, 2017 
PDF Page 344
3rd paragraph

Using these integral images, you may calculate sums, means, and standard deviations over arbitrary upright or "tilted" rectangular regions of the image. As a simple example, to sum over a simple rectangular region described by the corner points (x^1, y^1) and (x^2, y^2), where x^2 > x^1 and y^2 > y^1, we'd compute:

In this paragraph the numbers 1 and 2 are superscripts, but the related equation right below the paragraph uses 1 and 2 as subscripts. They should use the same format. Which is the right expression?

Anonymous  Feb 08, 2018 
PDF Page 352

12-7 The Canny edge detector (param1=50, param2=150) is run first, with the results shown in gray, and the progressive probabilistic Hough transform (param1=50, param2=10) is run next, with the results overlaid in white; you can see that the strong lines are generally picked up by the Hough transform](../images/ch12/fig12-7.png)

may be

12-7 The Canny edge detector (threshold1=50, threshold2=150) is run first, with the results shown in gray, and the progressive probabilistic Hough transform (minLineLength =50, maxLineGap =10) is run next, with the results overlaid in white; you can see that the strong lines are generally picked up by the Hough transform](../images/ch12/fig12-7.png)

Kouichi Matsuda  Oct 04, 2017 
PDF Page 359
1st paragraph

The content of the Figure 12-9 is incorrect. It should be the image of Figure 6-21 in the previous version of this book.

Figure 12-9. First a Canny edge detector was run with param1=100 and param2=200;
then the distance transform was run with the output scaled by a factor of 5 to increase
visibility

Kouichi Matsuda  Apr 06, 2017 
PDF Page 376
1st paragraph

The following paragraph should be deleted(it might come from the previous version). OpenCV3 does not have a data type for representing histograms. It's just an array.

OpenCV has a data type for representing histograms. The histogram data structure is
capable of representing histograms in one or many dimensions, and it contains all the
data necessary to track bins of both uniform and nonuniform sizes. And, as you
might expect, it comes equipped with a variety of useful functions that allow us to
easily perform common operations on our histograms.

Kouichi Matsuda  May 07, 2017 
PDF Page 400
The last two lines

In the last two expressions, when calculating the number of pixels, it should read 'w·h' instead of 'w-h'.

Jianfeng Liu  Oct 11, 2018 
PDF Page 419
top most

1) When I use

int i, nccomps = cv::connectedComponentsWithStats (
    img_edge, labels,
    stats, cv::noArray()
);

OpenCV gives me an ERROR. I had to change my code to

cv::Mat centroids;
int nccomps = cv::connectedComponentsWithStats (
    img_edge, labels,
    stats, centroids
);

2) I think

for( i = 1; i <= nccomps; i++ ) {
    colors[i] = Vec3b(rand()%256, rand()%256, rand()%256);
    if( stats.at<int>(i-1, cv::CC_STAT_AREA) < 100 )
        colors[i] = Vec3b(0,0,0); // small regions are painted with black too.
}

may need to be changed to

for( i = 1; i < nccomps; i++ ) {
    colors[i] = Vec3b(rand()%256, rand()%256, rand()%256);
    if( stats.at<int>(i, cv::CC_STAT_AREA) < 100 )
        colors[i] = Vec3b(0,0,0); // small regions are painted with black too.
}

and the latter code gives me the right result.

Anonymous  Aug 26, 2017 
Printed Page 425, 762
whole subsections on fitline

There are two full descriptions of fitline. Both have nearly identical tables, etc.

Also the declaration description on pg 426 only says 2-dimensional points.

Also, the declaration description on pg 762 says "Nx2, 2xN". That is clearly nonsense -- what happens when N is 2? What is wrong is that the function handles Nx2 only, not both.

Douglas Morgan  Nov 08, 2017 
Printed Page 428
6th paragraph

The original text about the return value of function cv::pointPolygonTest() is
"the function returns the distance to the nearest contour edge; that distance is 0 if the point is inside the contour and positive if the point is outside."

It should be
"that distance is positive if the point is inside the contour and negative if the point is outside."

Wei Ren  Aug 28, 2017 
Printed Page 430
1st paragraph and footnote10

"the result is the length of the contour" and all of footnote 10 are wrong. The opencv documentation says:

The moments of a contour are defined in the same way but computed using the Green’s formula...

The only sensible way to interpret this is that for contours, moments() uses Green's theorem to compute the double integral of the various integrands (with I(x,y) always 1) over the area inside the contours. So no length's are involved, only area. Also, cv::drawContours() is not what you'd used to approximate m_{00}, you have to fill the contours and then measure the area.

So, footnote 10 is pretty much completely wrong. It can just be eliminated if the main text says something like "for a contour, cv::moments computes the momemts by integrating over the interior of the contour assuming the image function is 1 everywhere in the interior.

Douglas Morgan  Nov 12, 2017 
PDF Page 430
footnote 10

Should cv::Array be cv::Mat?

Anonymous  Apr 03, 2018 
Printed Page 435
Table 14-4

The algorithm given for

CONTOURS_MATCH_I3

is incorrect; it should be MAX, not SUM.

See here:

http://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#gaf2b97a230b51856d09a2d934b78c015f

and

https://github.com/opencv/opencv/blob/10e639cdb9408929891fe14daa7c032c0ad8a4bf/modules/imgproc/src/matchcontours.cpp#L155-L156
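
For reference, the documented definition (per the links above), where the m_i are the log-scaled Hu moments:

I_3(A,B) = \max_i \frac{\left| m_i^A - m_i^B \right|}{\left| m_i^A \right|}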

Dave Watts  Apr 05, 2017 
Printed Page 435
declaration of cv::MatchShapes

This uses cv:U8C1 instead of CV_8UC1. The single colon makes it a slightly different typo than in earlier errata -- possibly something different to search for.

Douglas Morgan  Nov 13, 2017 
Printed Page 436
bottom

class ShapeContextDistanceExtractor : public ShapeDistanceExtractor {

should be:

class ShapeDistanceExtractor : public Algorithm

Douglas Morgan  Nov 13, 2017 
Printed Page 438
last paragraph

"method" should be "class"

Douglas Morgan  Nov 13, 2017 
Printed Page 447
footnote2

"Here we are using mathematician's definition of compact,"
should be
"Here we are not using the mathematician's definition of compact,"

On the off chance that this is a technical mistake and not a typo, I'll just note that any finite set of pixels is compact and in the continuous plane, any set bounded by a simple contour is compact if and only if the contour is completely included in the set (making the set bounded and closed). Neither of these is anything like the usage of "compact" in the first paragraph.

Douglas Morgan  Nov 14, 2017 
Printed Page 485
declaration of apply

"apply()(" should be "apply(".

Also an earlier errata correctly says to eliminate footnote 18. This is just a reminder that you also need to add the "=0" 's in the member function declarations.

Douglas Morgan  Nov 15, 2017 
Printed Page 487
last paragraph

"Here, the second constructor ..." There is only one constructor (and that is only some sort-of-constructor-like function). The talk of two constructors continues to the end of the paragraph.

Douglas Morgan  Nov 15, 2017 
Printed Page 498
2nd paragraph and footnote 3

"These equations form a linear systems that can be solved by inversion of a single autocorrelation matrix. In practice, this matrix is not always invertible owing to small eigenvalues arising from pixels very close to p."

"Autocorrelation matrix" has a fairly accepted definition and it is not the A^T*A matrix of the normal equations for the linear least squares system: A*x = b. A^T*A is indeed the zero offset matrix coefficient of the multichannel autocorrelation function of A, where the rows of A are considered a time series -- but that would be an odd tangential factoid to bring up in this context.

Better to just say that the equations form a linear least squares system that can be solved for the image location. Then you don't imply that the code uses normal equations, QR, SVD, or something else to solve the system.

Then, footnote 3 should go away.

Finally, it makes no sense that "small eigenvalues..." have anything to do with why there is a rejection zone. After all, you could just extend A with zeros and still get the same answer for the non-zero parts. Further, adding more or less random junk to A will tend to make A^T*A further from singular, not closer. Indeed, you really get close to singular when a rank 1 matrix with an extremely *large* eigenvalue is added to A^T*A. I strongly suspect (without looking into the algorithm to be sure) that the rejection zone is meant to exclude points that introduce more noise into the solution than points further out.

Douglas Morgan  Nov 16, 2017 
Printed Page 501-503
whole thing

The section "How Lucas Kanade works" is off in multiple ways. It can be considerably simplied so that most of the problematic parts just disappear. The main problem is that there is no reason to ever introduce \partial f/ \partial t, its chain rule expansion, any discussion of \partial I / \partial t, or, in fact, any discussion of time. All you are trying to do is find the unknown fixed shift, h, that has been applied to a known function, I(x), to give the also known function I'(x) = I(x-h) (equivalently, I'(x + h) = I(x)). For a given x0, just solve the equation I'(x0 + h') = I(x0) for h' (hopefully getting h' = h). By definition of I', we know \partial I'(x) / \partial x evaluated at x0 + h is the same as \partial I(x) / \partial x evaluated at x0. Now we have all that is necessary to do the semi-Newton's method solution for h'. Much more simple and direct. Also, in the 1-D case all the partials are regular derivatives.

Here are some of the problems with the current section.

The notation f(x,t) = I(x(t), t) and beyond conflates two definitions of x and two definitions of t. The first x is really the value of x(t) (a function of t) at some fixed time. So x means two different things. Better to have something like x0 for the first one and x(t) for the second. Now, we aren't told when the fixed time of x0 is supposed to be, but it seems from one of the equations (with the partial of I wrt x being evaluated at t) and from 16-5 that t is the fixed time. So now t is both a free variable used to define a functional relation and a fixed time of interest.

Some of the partial derivatives expressions are nonstandard. One with f(x) drops just one of the two variables (x,t) goes to (x). Traditionally, neither or both variables are dropped. Two expressions with the "evaluated at" symbols have a t and an x(t), but both are really evaluated at (t, x(t)).

There is talk of velocity, but then it seems the time step is always 1. OK, but it's strange to introduce velocity without parameterizing by the time between images. Seems better to just stick with a pixel offset between images without mentioning time, and let the user worry about timing issues.

Finally, everything to do with "temporal derivative" is just wrong (perhaps it approaches "not even wrong"). As one example, look at figure 16-6 at the vertical offset labeled "Temporal derivative at 2nd iteration". This is a tiny value, and it just gets tinier with every iteration. How could each decreasing offset be a temporal derivative? They can't be. Each is simply the offset of the current I'(x0+h_n) from the desired I(x0). We don't know what time step is involved between iterations, so it can't be any sort of temporal derivative.

All these problems just go away with the suggested rewrite.

Douglas Morgan  Nov 17, 2017 
Printed Page 503
1st paragraph

"This is known as Newton's method." No, in Newton's method, the derivative is computed at the current guess. Here, the derivative is magically available (through assumptions) at the true solution. The two algorithms are very similar, but not the same.

Douglas Morgan  Nov 17, 2017 
Printed Page 503-505
distributed

The unnecessary introduction of time and time derivatives of the 1D case of pages 501-503 continues on in the 2D case.

I googled around about optical flow and found out that the use of time and time derivatives is quite common. Wondering how that could be, I checked the original 1981 LK DARPA paper. It doesn't mention time or time derivatives (hey, it even uses h for the offset). Neither does Jean-Yves Bouguet's 2001 "Pyramidal Implementation of the Lucas Kanade Feature Tracker Description of the algorithm." But sometime someone applying the LK technique to optic flow decided to inject superfluous equations from continuous vector flow analysis into an otherwise simple algorithm derivation involving two images shifted with respect to each other. Regardless of who did it or when, it's a good idea to quit doing it now.

Douglas Morgan  Nov 18, 2017 
Printed Page 508
cv::OPTFLOW_LK_GET_MIN_EIGENVALS description

"average per-pixel change" should be something like
"average per-pixel absolute value of change"

Douglas Morgan  Nov 18, 2017 
Printed Page 518
4th paragraph

Probably you ought to explicitly say here (or somewhere) that whenever a function uses an associated keypoint vector and descriptor matrix that keypoint i goes with matrix row i. A reader should certainly guess that this would be true, but good to say it anyway.

Douglas Morgan  Nov 20, 2017 
Printed Page 520-523
multiple places

In at least seven places, what could or should be "descriptor" is instead the incorrect "keypoint" or the OK-but-long "keypoint descriptor". Just check each usage of "keypoint" to be sure it is what you really want.

Douglas Morgan  Nov 20, 2017 
Printed Page 528
1st paragraph

"following autocorrelation function, for". No, it is not an autocorrelation. It is true that two of the four indexes x, y, delta x, and delta y have the same pattern as seen in a real autocorrelation and that one term in expanding the square is an autocorrelation. But neither of those make it an autocorrelation. You could just say something like "following function defined using".

Douglas Morgan  Nov 20, 2017 
Printed Page 529,531,532
multiple places

Now "autocorrelation" is applied to matrix that has absolutely nothing to do with autocorrelation. The matrix M is matrix of 2nd moments (assuming the w_{i,j} are greater than zero, sum to one, and are to be treated as a probability mass function). Maybe "M" is for "Moments"? I don't know. "autocorrelation" in this context shows up at least eight times.

Also, the last paragraph of pg 532 mentions "Hessian." Hessians deal with second derivatives, not second moments of first derivatives.

Douglas Morgan  Nov 20, 2017 
Printed Page 540
1st paragraph

"If a pixel has a higher value in the"
should be:
"If a pixel has a higher absolute value in the"

You can see this must be true since figure 16-17 shows both the front tire (dark) and the sideview mirror (bright) generating features.

Douglas Morgan  Nov 21, 2017 
Printed Page 544
2nd paragraph

The discussion of nOctaveLayers is a bit confusing. 1) It is the first mention of an image pyramid, but it is referred to as "the image pyramid" as though it had already been discussed somewhere. Probably good to bring it in earlier. 2) nOctaveLayers is defined as "how many layers..." and right after that the definition is contradicted, with nOctaveLayers now being "how many layers..." minus 3. It would be better to say something like: "nOctaveLayers is the number of DoG layers per octave in which extreme values are found. Two extra DoG layers sandwich these so that full 3x3x3 regions are always tested. To generate these nOctaveLayers+2 DoG layers, the algorithm needs to compute nOctaveLayers+3 smoothed images per octave."

Douglas Morgan  Nov 25, 2017 
Printed Page 545
2nd paragraph

"must be a reference to an STL-style vector"
The user cannot specify that an argument is a reference, so the above should be just
"must be an STL-style vector"

Douglas Morgan  Nov 25, 2017 
Printed Page 545
2nd paragraph

"should always be of type CV_8U"
should be
"should always be of depth CV_8U"

The next usage of "type" (for the mask) is probably OK but would be more clear with a CV_8UC1 instead of (the equivalent) CV_8U.

Douglas Morgan  Nov 25, 2017 
Printed Page 553
2nd and 3rd paragraphs

"autocorrelation" in three places. These refer back to a section where autocorrelation was used in two distinct ways (nether of which actually was an autocorrelation) so it is hard (though probably possible) to pin down what the word is now referring to. Some rewording would probably help.

Douglas Morgan  Nov 25, 2017 
Printed Page 562
2nd paragraph

"gradients (they are divided by their mean) that give the orientation of the gradient direction"
should be something like:
"moments (they are divided by the mean) that give the orientation"

Douglas Morgan  Nov 25, 2017 
Printed Page 581
1st paragraph

"radius" should be "diameter". Try it or read the source to see.

Douglas Morgan  Nov 06, 2017 
Printed Page 582
1st paragraph

"The second form is just a convenience, which is useful when you want to visualize the response to many different match computations at once."

It's also useful for visualizing the same single match computations done in knnMatch and radiusMatch.

Douglas Morgan  Nov 25, 2017 
Printed Page 595
declaration of createOptFlow_DualTVL1

The return type can't be a Ptr to DenseOpticalFlow if the user is expected to set member data in OpticalFlowDual_TVL1. The user would have to downcast, and a library writer would never require that. Indeed, the real return type is a Ptr to DualTVL1OpticalFlow.

Douglas Morgan  Nov 25, 2017 
Printed Page 595
last paragraph

"...arguments to the create function"
No create function is shown other than createOptFlow_DualTVL1() which has no arguments. But there really is a create( blah, blah) member function that has the arguments. It's just not shown.

Douglas Morgan  Nov 25, 2017 
Printed Page 615
1st paragraph and footnote 18

The use of "model" for the state is highly nonstandard. Almost always, model refers to the coefficients of the Kalman filter. Further, the first use of model only makes sense if it refers to the estimated mean. Later uses make more sense if they refer to the mean and covariance. So, "model" ought to be swapped out for mean and covariance (as appropriate) of the a posteriori state distribution.

The footnote's definition of "maximizes the a posteriori probability" is just wrong. The a posteriori probability is a probability density, and that isn't the most likely of anything by any measure of likeliness. It just is -- and it is computed by the Kalman filter. The mean does maximize a Gaussian probability density, and that is what "maximizes the a posteriori probability" refers to: the mean part of the Kalman filter.

Douglas Morgan  Nov 26, 2017 
Printed Page 616
2nd and 3rd paragraphs

In just these paragraphs, "state" is used incorrectly. "State" is never used to mean the combination of mean and covariance (though it is the state of the Kalman filter equations, that potential usage is never used). Better to stick with mean and covariance. Just to better see the problem, the last line of pg 616 says: "the state at time k multiplied by some matrix F". If the state were the mean and covariance, that would make no sense. If the state is just the mean state or the mean (like it really means here), it does make sense.

Douglas Morgan  Nov 26, 2017 
Printed Page 617
last paragraph

"It turn out this product is another Gaussian distribution"
should be
"It turn out this product is proportional to another Gaussian distribution"

Douglas Morgan  Nov 26, 2017 
Printed Page 617
2nd paragraph

Again, "the model that has the highest probability of being correct" is just wrong. The Kalman filter computes the correct distribution, period, no probability of being correct is involved. The mean maximizing that distribution is something different.

Douglas Morgan  Nov 26, 2017 
Printed Page 624
last equation

Two errors in the equation for \vec{x_k^-}:

1) The star superscript is missing from \vec{x_{k-1}}. The x without a star is the true state, not the estimated mean.
2) The " + \vec{w_k} " shouldn't be there. The estimate can't include unknown noises.

Douglas Morgan  Nov 27, 2017 
Printed Page 625
4th equation

The \vec{x_k} is missing the star superscript that distinguishes the true state from the estimated mean.

Douglas Morgan  Nov 27, 2017 
Printed Page 626
Set of equations at top

\vec{x_k} is missing the star superscript.

Many of the right hand side symbols do not match what came earlier. There weren't any hats in the earlier text.

Douglas Morgan  Nov 27, 2017 
Printed Page 633,634
4th paragraph to end of EKF section, including footnote 27

The discussion of the EKF completely misses the important idea that the regular KF applies to small offsets from a nominal trajectory. You need to introduce separate variable for the nominal trajectory with something like an \vec{X_k^\star} and \vec{X_k^-} that update like:

\vec{X_k^-} = \vec{f}(\vec{X_{k-1}^\star}, \vec{u_k})
\vec{X_k^\star} = \vec{X_k^-} + \vec{x_k^\star}

Here, \vec{x_k^-} is always zero, so its update equation can be eliminated.

Also, the partials for F and H are computed at (\vec{X_{k-1}^\star}, \vec{u_k}) for F and at \vec{X_k^-} for H (note the text uses the wrong sub/superscripts for x, but it is the wrong x anyway). Also, the text below the equations for F and H mentions the wrong scripting for H.

Footnote 27 should be reworked after adding the nominal trajectory. The "bulk" of the effect of u_k is absorbed into the update for \vec{X_k^-} using \vec{f}. The effect of changing u_k on changing the partial wrt \vec{X} is included in F_k, but that effect will likely be less than the effect on X.

Douglas Morgan  Nov 27, 2017 
Printed Page 643
last paragraph and equation

r with a overhead arrow is defined as a row vector in the text and then used as a column vector in the equation for R. Some transposing is needed somewhere.

Douglas Morgan  Nov 08, 2017 
PDF Page 646
2nd and 4th paragraphs

The result of the formula is not corrected coordinates but distorted coordinates.

Jose Angel  Jul 07, 2020 
Printed Page 650-652
whole subsection

I suggest you take a shot at completely rewriting the "Rotation Matrix and Translation Vector" section. Here is list of some of the problems with the section:

1) The main result should finally be that:
\vec{P_c} = R * \vec{P_o} + \vec{t}
which is not the equation on page 652. The equation I just gave is the correct way to use the rvec and tvec outputs of various pose finding and calibration functions (the one on pg 653 is not). There is hardly any reason to wait to give this result. Give it first and then explain it.

2) The text and figure 18-8 switch between \vec{t} and \vec{T}.

3) "Ultimately, a rotation is equivalent to introducing a new description of a point's location in a different coordinate system." That is way too strong a statement, making it pretty much untrue. A rotation is never a coordinate transform. The same matrix can represent both, but that is different. Same problem with the next sentence. Better to say it right, than say it wrong twice.

4) "around the x-, y-, and z-axes in sequence" then the footnote reverses the sequence just given. Why not say it the right way in the first place.

5) The three rotation matrices are all left-handed. If you want the R computed from their product to be the usual roll-pitch-yaw rotation matrix, they have to be right-handed.

6) I can't figure out how Figure 18-9 is supposed to illustrate what the caption says. And again, the caption is wrong by using "is the same as". They are quite different things that happen to use the same matrix.

7) "identity matrix consisting of 1s along the diagonal and 0s everywhere else" The reader is in way over their head if they need this definition.

8) The whole discussion of translation vector at the bottom of pg 651 is way off the mark. For instance, one of origin_{object} or origin_{camera} will be zero, depending on what coordinate system the vectors are in. But that vital information is never specified. Also, what is the "first" and "second" coordinate system. Finally, it jumps to the incorrect expression for \vec{P_c} on pg 652.

All in all, lots of text should be changed, math redone, and figures updated.

Douglas Morgan  Nov 08, 2017 
Printed Page 662
footnote 29 and s'es it refers to

I cannot see how the s referred to in the footnote is the product of other s'es. To me it looks like the equation for [x y 1]^T sets its s, call it s', to a specific value. Suppose the equation for H uses that same s'; then the s in the equation for \vec{q} would have to be 1.0. Suppose the equation for H used a different arbitrary s, call it s''; then the s in the equation for \vec{q} would have to be s'/s''. Or you could not require the 3rd component of \vec{q} to be 1, and then the final s could be anything.

In any case, I don't see any product of s'es involved.

Douglas Morgan  Nov 09, 2017 
Printed Page 665
2nd paragraph

"Scaling the homography could be applied to the ninth homography parameter, ..." I have no idea what this could mean. I would think you would always just scale the whole matrix. Is that right?

Douglas Morgan  Nov 09, 2017 
Printed Page 669,670
multiple places

Without checking, I think you may have used equations from a source paper that allowed some skew terms, without modifying them for your case without skew. Just a guess, though. Why would I think that?

B_{12} is supposed to be zero given your solution for B on pg 668. But, B_{12} is used in the expressions for f_y, c_y, and \lambda. That makes no sense, you "should" just use zero there.

Also, B_{33} is a function of the other elements of B. Seems kind of odd to use B_{33} in the expression for \lambda, though maybe that is OK.

In any case, it looks like you must have been working from some other definition of M that did not put a fixed zero at B_{12} and maybe had more leeway in B_{33}.

Douglas Morgan  Nov 09, 2017 
Printed Page 669
last paragraph (not the "and:")

"..if K>=2 then this equation can be solved..." That should be K>=3 since V is 2Kx6. You need to at least get to 6x6.

Douglas Morgan  Nov 09, 2017 
Printed Page 670
2nd paragraph

"\lambda -1/...." should be "\lambda = 1/..."

Douglas Morgan  Nov 09, 2017 
Printed Page 670
footnote 33

"This is mainly due to precision errors."
It is hard to believe that the overwhelming source of error would not be measurement noise. I doubt numerical precision plays any significant role in the R deviating from a rotation matrix. Perhaps the equations have some feature that enforces orthogonality (but for numerical precision), but I really doubt it.

Douglas Morgan  Nov 09, 2017 
Printed Page 671
1st paragraph

The fx, fy, cx, and cy should be removed from the 1st equation on printed page 671. That is, xp = XW / ZW, yp = YW / ZW.

The xp and xd should be swapped, and the yp and yd should be swapped in the 2nd equation on printed page 671.

Vincent Yang  Oct 18, 2018 
Printed Page 671
Second equation

The (xp, yp) on the left side of the equation needs to be replaced by (x_d, y_d). The (x_d, y_d) on the right side of the equation needs to be replaced by (X_w/Z_w, Y_w/Z_w), because the distortion is not a function of the actual image position but a function of the point calculated from the linear projection. Then the distorted point is multiplied by the focal length, and you add c_x and c_y to these values, just like the first equation on p. 671 does.

Takuma Nakamura  Jan 18, 2019 
Printed Page 677
function prototype & 3rd paragraph


The 10th argument of the cv::solvePnPRansac() function changed
from
int minInliersCount = 100
to
double confidence = 0.99
in OpenCV 3.x.

The function prototype and the related paragraph should be changed.

Seunghoon Jung  Jun 26, 2017 
Printed Page 677
end of "Computing extrinsics only with cv::solvePnPRansac()" section

Nobody (e.g., the OpenCV documentation) seems to mention that the rvec and tvec computed by solvePnPRansac come from just some selection of 4 points, and that this is a *bad* thing. If you want accurate rvec and tvec, after running solvePnPRansac you must run the inliers through solvePnP (or do something similar). Otherwise there is all sorts of unnecessary noise in rvec and tvec.

Something to consider mentioning.
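
A minimal sketch of the suggested refinement (objPts, imgPts, K, and dist are hypothetical inputs):

cv::Mat rvec, tvec, inliers;
cv::solvePnPRansac(objPts, imgPts, K, dist, rvec, tvec,
                   false, 100, 8.0, 0.99, inliers);
std::vector<cv::Point3f> objIn;
std::vector<cv::Point2f> imgIn;
for (int i = 0; i < inliers.rows; ++i) {   // keep only the RANSAC inliers
    int idx = inliers.at<int>(i);
    objIn.push_back(objPts[idx]);
    imgIn.push_back(imgPts[idx]);
}
cv::solvePnP(objIn, imgIn, K, dist, rvec, tvec, true); // refine, seeded with the RANSAC estimate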

Douglas Morgan  Nov 09, 2017 
Printed Page 677
footnote 39

"The default value of reprojectionError corresponds to an average distance of sqrt(2) between each of these oints and its corresponding reprojection." No, it is the max allowed reprojection error for a point to make it into the inliers.

Douglas Morgan  Nov 09, 2017 
Printed Page 678,679
whole "Undistortion Maps" section

The entire section is written under a misunderstanding of how distortion maps are represented. The text and figure 18-18 are all definite that a map's pixel represents a location in the destination image. This is not the case. Each map pixel tells where in the source image you should go to find the value to put in the destination image at the same pixel index as the map.

So much of the section needs rewriting and the figure needs to be redrawn. At the end, footnote 41 should go away as superfluous.
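
A minimal sketch of the pull semantics for CV_32FC1 maps (map_x, map_y, src, and dst are illustrative; real cv::remap also interpolates):

for (int y = 0; y < dst.rows; ++y) {
    for (int x = 0; x < dst.cols; ++x) {
        int sx = cvRound(map_x.at<float>(y, x)); // where in src to read from
        int sy = cvRound(map_y.at<float>(y, x));
        dst.at<uchar>(y, x) = src.at<uchar>(sy, sx);
    }
}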

Douglas Morgan  Nov 10, 2017 
Printed Page 680
1st paragraph

"in any of four formats ... into any of the others" No, it doesn't convert between the floating formats nor between the two formats with CV_16SC2 for map1.

Douglas Morgan  Nov 10, 2017 
Printed Page 689
Last line.

"overfitting" It might better be known as "ignoring prior uncertainties".

Douglas Morgan  Nov 10, 2017 
Printed Page 692,693
declaration of projectPoints and 2nd paragraph pg 693

The comments in the declaration say:
// 3XN/Nx3 .....
That cannot be right, since it is ambiguous when N is 3. The function only allows the Nx3 case.

Same problem in the 2nd paragraph of pg 693.

Douglas Morgan  Nov 10, 2017 
Printed Page 694
1st paragraph

"trapezoid" should be "quadrilateral".

Douglas Morgan  Nov 10, 2017 
Printed Page 700-702
various

A paragraph and a footnote are approximately repeated. The 2nd paragraph on pg 700 is like the 2nd one of pg 702. Footnotes 7 and 9 are similar.

Douglas Morgan  Nov 10, 2017 
Printed Page 710
4th bullet

"We'll now summarize some facts ....
Order is preserved"

That is only true for some surfaces (and who knows what else). Look at puffed up hair (or close in to lots of small branches) and you'll see lots of order not being preserved. So this is often a good assumption, but somewhat less than a "fact".

Douglas Morgan  Nov 10, 2017 
PDF Page 711
2nd paragraph

> Let's pick one set of coordinate systems, left or right, to work in and do our calculations there. Either one is just as good, but we'll choose the coordinates centered on \vec{O_l} of the left camera. In these coordinates, the location of the observed point is \vec{p_l} and the origin of the other camera is located at \vec{O}.

Shouldn't "the origin of the other camera is located at \vec{O}" be "\vec{T}"?

Kouichi Matsuda  Sep 30, 2017 
PDF Page 712
2nd paragraph

The equation states p_r = R * (P_l - T). To be correct, it should be a capital P_r, because the equation transforms the world point w.r.t. the coordinate frame of the left camera into the frame of the right camera.

In the last paragraph of page 712 it seems to be correct.

Matthias  Jul 15, 2019 
PDF Page 713
2nd equation

There is a typo in the first row of the S-matrix. The matrix is skew-symmetric, so -T_x must be -T_z.
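
For reference, the standard skew-symmetric matrix built from \vec{T} = (T_x, T_y, T_z) is:

S = \begin{bmatrix} 0 & -T_z & T_y \\ T_z & 0 & -T_x \\ -T_y & T_x & 0 \end{bmatrix}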

Matthias  Jul 15, 2019 
PDF Page 716

Example 19-2. Computing the fundamental matrix using RANSAC

should be

Example 19-2. Computing the fundamental matrix using the eight-point algorithm.
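
Presumably the point is which flag the example passes; for comparison, the two methods differ only in the flag (pts1 and pts2 are hypothetical point vectors):

cv::Mat F8 = cv::findFundamentalMat(pts1, pts2, cv::FM_8POINT);
cv::Mat Fr = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99);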

Kouichi Matsuda  Sep 30, 2017 
PDF Page 721
3rd paragraph

> void cv::computeCorrespondEpilines(
> cv::InputArray points, // Input points, Nx1 or 1xN (Nc=2)
> // or vector<Point2f>
> int whichImage, // Index of image which contains
> // points ('1' or '2')
> cv::InputArray F, // Fundamental matrix
> cv::OutputArray lines // Output vector of lines, encoded as
> // tuples (a,b,c)
>);

> Here the first argument, points, is the input array of two- or three-dimensional points

should be

"Here the first argument, points, is the input array of two-dimensional points."
(Or are three-dimensional points in fact not accepted?)
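
Typical usage, for reference (pts1 and F are hypothetical):

std::vector<cv::Vec3f> lines;
cv::computeCorrespondEpilines(pts1, 1, F, lines); // each line is (a, b, c): a*x + b*y + c = 0 in the other image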

Kouichi Matsuda  Sep 30, 2017 
PDF Page 733
3

Hi, I've been reading Learning OpenCV and I have met a technical question with which I really need help.

I have a cross-eyed stereo camera (the cameras are tilted inward toward each other) and need to produce a disparity map, to find height at the end.

From the book: "Unsetting flags means that we want the cameras verging toward each other (i.e., slightly "cross-eyed") so that zero disparity occurs at a finite distance (this might be necessary for greater depth resolution in the proximity of that particular distance)." This matches my case, as I need to measure a small item (3x3 cm, with units down to mm).

I am using USB cameras with 75 mm lenses (4 cm FOV) to capture the item. I have done calibration, stereo calibration, stereoRectify, initUndistortRectifyMap, remap, and lastly computed StereoSGBM.

My item sits on a flat surface, which means the map should show constant depth; however, due to the cross-eyed cameras, it shows a gradient increase in disparity.

I have read that stereoRectify() is more suitable for parallel frontal stereo cameras, but I can't find another way.

I have also tried stereoRectifyUncalibrated(), but the disparity map is worse.

I need some help: my final goal is a disparity map that looks flat, even though it comes from a cross-eyed stereo camera, so that I can compute depth.

Anonymous  Mar 25, 2021 
PDF Page 755/756
Bottom of 755, beginning of 756 in call to cv::stereoCalibrate

The call to stereoCalibrate has its last two parameters inverted (it shows TermCriteria then flags, whereas http://docs.opencv.org/trunk/d9/d0c/group__calib3d.html#ga246253dcc6de2e0376c599e7d692303a and the source code show the reverse, i.e., flags then TermCriteria).

mpapini  Jun 18, 2017 
PDF Page 793
Paragraph 2

In Example 20-1, we mentioned briefly the possibility that the data might be arranged in the space in a highly asymmetrical way.

No, we did not mention it even briefly.

Anonymous  Feb 20, 2017 
PDF Page 801
2nd paragraph

The methods of cv::StatModel provide mechanisms for reading and writing...

should read

The methods of cv::Algorithm provide mechanisms for reading and writing...

Anonymous  Feb 20, 2017 
PDF Page 803
Paragraphs 2 & 3

layout = cv::ml::ROW_SAMPLE
Means that the feature vectors are stored as rows (this is the most common layout)

should read

Means that data points are stored as rows (this is the most common layout)

And the similar error in the description of layout = cv::ml::COL_SAMPLE

Anonymous  Feb 20, 2017 
PDF Page 806
Para 3

... what exactly is going on inside of cv::ml::TrainData::create() if cv::ml::Train
Data is a virtual class type?

There's no such thing as a "virtual class" in C++ (other than a virtual base class, which has nothing to do with this case). The sentence should read

... what exactly is going on inside of cv::ml::TrainData::create() if cv::ml::Train
Data is an abstract class type?

Anonymous  Feb 21, 2017 
PDF Page 807
3rd para

Finally, the third “shuffle” method will randomly assign the train and test vectors (while keeping the number of each fixed).

Fixed to what? This function is not documented in the official docs (apart from its signature), and this sentence does not add much. What part of the data is randomly assigned to the training set?

Anonymous  Feb 21, 2017 
PDF Page 809
Footnote 8

The var_idx parameter you used with train()...

should read

The varIdx parameter you used with create()...

The train() function is not passed varIdx.

Anonymous  Feb 21, 2017 
PDF Page 811
2nd para

In our case, the node O can take values “face,” “car,” or “flower,”...

There's no such thing as "node O" in the figure. Presumably, the sentence should read:

In our case, the root node can take values “face,” “car,” or “flower,”...

Anonymous  Feb 21, 2017 
PDF Page 820
3rd para

The whole description of the structure TreeParams is wrong. There's no such structure in OpenCV. Parameters are set directly for DTree object.

Anonymous  Feb 21, 2017 
PDF Page 821
Footnote 16

one needs to try 2N – 2 subsets...

No, 2^N - 2 subsets (each of the N category values is either in or out of a subset, and the empty and full subsets are excluded).

Anonymous  Feb 21, 2017 
Printed Page 860
final sentence of the page

This refers to both an item being well inside class 1 and an item being well inside class 2 with the same equation, x . w + b >= +1. I believe that the strongly-inside-class-2 equation should be x . w + b <= -1.

dean  Apr 26, 2017 
PDF Page 889
1st para

For asymmetric data, the “trick” of flipping an image on its vertical axis was described previously in the subsection “Rejection cascades” on page 880.

It seems the authors meant "symmetric," as that is what is discussed on page 880. Also, flipping on the vertical axis is only meaningful when an image has this kind of symmetry.

Anonymous  Feb 27, 2017 
PDF Page 927
The last para

The function calcVoronoi() is a protected member function, so it cannot be called directly. Is it necessary to mention it at all when describing the public API?

Anonymous  Mar 01, 2017 
PDF Page 929
Para 3 and later

...*edge will contain one of edges of the facet.

The "edge" argument is of type int& rather than int*. Thus "*edge" is a syntactic error amd must be replaced with "edge". The same pertains to later paragraphs.

Anonymous  Mar 02, 2017