In our discussion of clustering, we primarily used the standard Euclidean distance between vectors in a vector space:

d(x, y) = \|x - y\|_2 = \sqrt{\sum_i (x_i - y_i)^2}
Euclidean distance is also known as the L2 norm (strictly, the L2 norm of the difference between the two vectors). There are several other metrics that are commonly used in applications:
One variation of Euclidean distance is the L1 norm, also known as Manhattan distance (because it measures the distance you would travel between two points on a city grid, walking along the “blocks” rather than cutting diagonally):

d(x, y) = \|x - y\|_1 = \sum_i |x_i - y_i|
Another is the L∞ norm, defined as the following:

d(x, y) = \|x - y\|_\infty = \max_i |x_i - y_i|
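As a concrete illustration, the three norms above can be sketched in a few lines of Python (the function names `l2`, `l1`, and `linf` are ours, chosen for this example):

```python
import math

def l2(x, y):
    # Euclidean (L2) distance: square root of the sum of squared differences
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def l1(x, y):
    # Manhattan (L1) distance: sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(x, y))

def linf(x, y):
    # L-infinity distance: the single largest coordinate difference
    return max(abs(a - b) for a, b in zip(x, y))

x, y = (0, 0), (3, 4)
# l2(x, y) -> 5.0, l1(x, y) -> 7, linf(x, y) -> 4
```

Note how the three metrics rank the same pair of points differently: L1 is always the largest of the three, and L∞ the smallest.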
For vectors of binary values or bits, you can use Hamming distance, which is the number of bit positions in which x and y differ. This can be computed as:

d(x, y) = H(x \oplus y)

where \oplus is bitwise XOR and H(v) is the Hamming weight of v; that is, the number of “1” bits in v. If the values you compare are of different bit lengths, the shorter one will need to be padded with leading zeros.
For sets (or lists treated as sets), you can use the Jaccard similarity:

J(x, y) = \frac{|x \cap y|}{|x \cup y|}

The Jaccard similarity computes the number of elements in common between x and y, normalized by the total number of elements in their union. One useful property of Jaccard similarity is that you can use it to compare sets of different sizes. Because it is a similarity rather than a distance, a corresponding distance is 1 − J(x, y).
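A short Python sketch of the definition above (the function name `jaccard` and the convention for two empty sets are ours):

```python
def jaccard(x, y):
    # Jaccard similarity: size of the intersection over size of the union.
    x, y = set(x), set(y)
    if not x and not y:
        return 1.0  # convention for this sketch: two empty sets are identical
    return len(x & y) / len(x | y)

# {1, 2, 3} and {2, 3, 4} share {2, 3} out of a union of {1, 2, 3, 4}
# jaccard([1, 2, 3], [2, 3, 4]) -> 0.5
```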
The L1 and L2 metrics in vector spaces suffer from what is known as the “curse of dimensionality.” This phrase refers to the principle that as the number of dimensions increases, all points seem to be roughly equally distant from one another. Thus, ...
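The concentration of distances can be demonstrated with a small experiment. The sketch below (function name `contrast` and the specific parameters are ours) samples random points in a unit hypercube and measures the relative spread between the nearest and farthest neighbor of one point; as the dimension grows, that spread collapses:

```python
import math
import random

def contrast(dim, n=200, seed=0):
    # Sample n random points in the dim-dimensional unit hypercube,
    # then compare the nearest and farthest Euclidean distances from
    # the first point to all the others.
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n)]

    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    d = [dist(pts[0], p) for p in pts[1:]]
    # Relative spread: how much farther the farthest point is than the nearest.
    return (max(d) - min(d)) / min(d)

low, high = contrast(2), contrast(1000)
# In 2 dimensions the spread is large; in 1,000 dimensions it is far
# smaller, so "nearest" and "farthest" become nearly meaningless.
```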