The mechanics might be clear, but we should say a little more about why, and define what we mean by "best." At each step of the cross-validation process, the model is scored on the held-out sample; by default, that score is the squared error.
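A minimal sketch of that default per-sample score (the arrays here are made-up illustration values, not from the regression above):

```python
import numpy as np

# Toy sketch of the default score RidgeCV uses when scoring=None:
# the squared error between each held-out target and its prediction.
y_true = np.array([3.0, -1.0, 2.0])
y_pred = np.array([2.5, -0.5, 2.0])

squared_errors = (y_true - y_pred) ** 2
print(squared_errors)          # per-sample squared errors: [0.25 0.25 0.  ]
print(squared_errors.mean())   # the mean error is what gets minimized over alphas
```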
We can force the RidgeCV object to store the cross-validation values; this will let us visualize what it's doing:
alphas_to_test = np.linspace(0.01, 1)
rcv3 = RidgeCV(alphas=alphas_to_test, store_cv_values=True)
rcv3.fit(reg_data, reg_target)
As you can see, we test 50 evenly spaced points between 0.01 and 1 (np.linspace defaults to 50 points). Since we passed store_cv_values=True, we can access these values:
rcv3.cv_values_.shape
(100L, 50L)
So, we had 100 samples in the initial regression and tested 50 alpha values: cv_values_ holds one column of per-sample errors for each alpha.
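To close the loop, the alpha that RidgeCV reports as best is the one whose column of cross-validation errors has the smallest mean. A self-contained sketch with synthetic data standing in for reg_data/reg_target (the names and shapes are assumptions; note that newer scikit-learn renames store_cv_values to store_cv_results and cv_values_ to cv_results_, which the try/except below accommodates):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV

# Synthetic stand-in for reg_data/reg_target: 100 samples, as in the text.
X, y = make_regression(n_samples=100, n_features=3, noise=2.0, random_state=0)

alphas = np.linspace(0.01, 1)  # 50 alphas, as in the text

# The flag was renamed store_cv_values -> store_cv_results in newer scikit-learn.
try:
    rcv = RidgeCV(alphas=alphas, store_cv_results=True).fit(X, y)
    errors = rcv.cv_results_
except TypeError:
    rcv = RidgeCV(alphas=alphas, store_cv_values=True).fit(X, y)
    errors = rcv.cv_values_

print(errors.shape)  # (100, 50): one row per sample, one column per alpha

# The chosen alpha minimizes the mean per-sample error of its column.
mean_errors = errors.mean(axis=0)
assert np.isclose(alphas[mean_errors.argmin()], rcv.alpha_)
```

Averaging over axis 0 collapses the per-sample errors into one score per alpha, which is exactly the quantity a plot of mean error versus alpha would visualize.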