August 2019
In [1]:
# setup
from mlwpy import *
%matplotlib inline

iris = datasets.load_iris()  # standard iris dataset

tts = skms.train_test_split(iris.data, iris.target,
                            test_size=.33, random_state=21)
(iris_train_ftrs, iris_test_ftrs,
 iris_train_tgt,  iris_test_tgt) = tts

# one-class variation
useclass = 1
tts_1c = skms.train_test_split(iris.data, iris.target == useclass,
                               test_size=.33, random_state=21)
(iris_1c_train_ftrs, iris_1c_test_ftrs,
 iris_1c_train_tgt,  iris_1c_test_tgt) = tts_1c
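If you don't have the book's `mlwpy` helper module available, here is a rough equivalent using scikit-learn directly. It assumes that `mlwpy`'s wildcard import provides `datasets` and `skms` as aliases for `sklearn.datasets` and `sklearn.model_selection`, which may not match the helper module exactly:

```python
# Equivalent setup without mlwpy (assumption: mlwpy re-exports
# sklearn.datasets and sklearn.model_selection as `skms`).
from sklearn import datasets
from sklearn import model_selection as skms

iris = datasets.load_iris()  # standard iris dataset: 150 examples, 4 features

# full three-class split: 33% held out for testing
(iris_train_ftrs, iris_test_ftrs,
 iris_train_tgt,  iris_test_tgt) = skms.train_test_split(
    iris.data, iris.target, test_size=.33, random_state=21)

# one-class (binary) variation: "is it class 1 or not?"
useclass = 1
(iris_1c_train_ftrs, iris_1c_test_ftrs,
 iris_1c_train_tgt,  iris_1c_test_tgt) = skms.train_test_split(
    iris.data, iris.target == useclass, test_size=.33, random_state=21)

print(iris_train_ftrs.shape, iris_test_ftrs.shape)
```

With 150 examples and `test_size=.33`, this leaves 100 training and 50 testing examples; the binary targets are simply boolean True/False values from the comparison.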
So far, we’ve discussed two classifiers: Naive Bayes (NB) and k-Nearest Neighbors (k-NN). I want to add to our classification toolkit, but first I want to revisit what is happening ...