One of the biggest drawbacks of K-means and similar algorithms is that they require the number of clusters to be specified explicitly. Sometimes this value is imposed by external constraints (for example, in the breast cancer dataset there are only two possible diagnoses), but in many cases (in particular when an exploratory analysis is needed), the data scientist has to test different configurations and evaluate them. The simplest way to evaluate K-means performance and choose an appropriate number of clusters is to compare the final inertias obtained with different values of k.
Let's start with a simple example based on 12 very compact Gaussian blobs generated with the scikit-learn function make_blobs():
from sklearn.datasets import make_blobs
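As a minimal sketch of the procedure described above, the following code generates 12 compact blobs and compares the final inertia for a range of candidate cluster counts. The specific parameters (n_samples=2000, cluster_std=0.05, the k range 2-20, and the random seed) are illustrative assumptions, not values taken from the original:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Illustrative parameters: 2,000 two-dimensional samples
# drawn from 12 very compact Gaussian blobs
X, Y = make_blobs(n_samples=2000, n_features=2, centers=12,
                  cluster_std=0.05, random_state=1000)

# Fit K-means for several candidate values of k and collect the
# final inertia (sum of squared distances of the samples to their
# nearest centroid) for each configuration
inertias = []
for k in range(2, 21):
    km = KMeans(n_clusters=k, n_init=10, random_state=1000)
    km.fit(X)
    inertias.append(km.inertia_)
```

Plotting the collected inertias against k typically shows a sharp drop that flattens out near the true number of clusters (here, 12), which is the basis of the comparison the text refers to.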