The following table lists the key parameters available for this purpose in the sklearn decision tree implementation. After introducing the most important parameters, we will illustrate how to use cross-validation to optimize the hyperparameter settings with respect to the bias-variance tradeoff and reduce prediction error (see the sketch after the table):
| Parameter | Default | Options | Description |
| --- | --- | --- | --- |
| max_depth | None | int | Maximum number of levels: nodes are split until reaching max_depth, or until all leaves are pure or contain fewer than min_samples_split samples. |
| max_features | None | None: all features; int; float: fraction; auto, sqrt: sqrt(n_features); log2: log2(n_features) | Number of features to consider for a split. |
| max_leaf_nodes | ... | | |
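
As a minimal sketch of the cross-validation workflow referred to above, the following snippet runs a grid search over a few of the parameters from the table. The synthetic dataset, the candidate values in the grid, and the accuracy scoring are illustrative assumptions, not the book's own example:

```python
# Sketch: cross-validated tuning of decision tree hyperparameters (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data standing in for a real dataset (assumption).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Candidate settings for parameters described in the table above.
param_grid = {
    'max_depth': [3, 5, 10, None],
    'max_features': [None, 'sqrt', 'log2'],
    'min_samples_split': [2, 10, 50],
}

# 5-fold cross-validation selects the combination with the best mean
# out-of-sample accuracy, i.e. the lowest estimated prediction error.
grid = GridSearchCV(DecisionTreeClassifier(random_state=42),
                    param_grid, cv=5, scoring='accuracy')
grid.fit(X, y)

print(grid.best_params_)
print(f'best CV accuracy: {grid.best_score_:.3f}')
```

The selected values reflect the bias-variance tradeoff directly: deeper, less constrained trees reduce bias but increase variance, and cross-validation identifies the settings that balance the two on held-out folds.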