Although the F1-score was close to 0.97, the normalized confusion matrix shows that the classes are strongly imbalanced and that the classifier has simply learned to classify the most frequent ones properly. To improve the results, we can re-sample each class, which will, in effect, balance the training dataset better.
First, let's count how many cases there are in the training dataset for each class:
In: train_composition = (train_df.groupBy("target") .count() .rdd .collectAsMap()) print(train_composition)Out: {'neptune': 107201, 'nmap': 231, 'portsweep': 1040, 'back': 2203, 'warezclient': 1020, 'normal': 97278, ... 'loadmodule': 9, 'phf': 4}
This is clear evidence of a strong imbalance. We can try to improve the ...
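As a starting point, here is a minimal sketch of one way these counts could drive a rebalancing pass. It assumes a hypothetical per-class cap of 25,000 rows and uses Spark's DataFrame.sampleBy to undersample only the classes above that cap; minority classes are kept in full, and the exact cap (and any oversampling of rare classes) is a tuning choice that may differ from the approach developed next.

In: # Sketch: cap each class at a hypothetical TARGET of 25,000 rows.
    # Classes below the cap keep a fraction of 1.0 (no sampling);
    # classes above it are undersampled proportionally.
    TARGET = 25000
    fractions = {cls: min(1.0, TARGET / float(cnt))
                 for cls, cnt in train_composition.items()}

    # sampleBy draws a stratified sample using the per-class fractions
    rebalanced_train_df = train_df.stat.sampleBy("target", fractions,
                                                 seed=101)
    rebalanced_train_df.groupBy("target").count().show()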