Overview
Introduction
Technical requirements
Introduction to linear regression
Simplifying the problem
From one to N-dimensions
The linear regression algorithm
Exercise 148 – using linear regression to predict the accuracy of the median values of our dataset
Linear regression function
Testing data with cross-validation
Exercise 149 – using the cross_val_score function to get accurate results on the dataset
Regularization – Ridge and Lasso
K-nearest neighbors, decision trees, and random forests
K-nearest neighbors
Exercise 150 – using k-nearest neighbors to find the median value of the dataset
Exercise 151 – K-nearest neighbors with GridSearchCV to find the optimal number of neighbors
Decision trees and random forests
Exercise 152 – building decision trees and random forests
Random forest hyperparameters
Exercise 153 – tuning a random forest using RandomizedSearchCV
Classification models
Exercise 154 – preparing the pulsar dataset and checking for null values
Logistic regression
Exercise 155 – using logistic regression to predict data accuracy
Other classifiers
Naive Bayes
Exercise 156 – using GaussianNB, KNeighborsClassifier, DecisionTreeClassifier, and RandomForestClassifier to predict the accuracy of our dataset
Confusion matrix
Exercise 157 – finding the pulsar percentage from the dataset
Exercise 158 – confusion matrix and classification report for the pulsar dataset
Boosting algorithms
AdaBoost
XGBoost
Exercise 159 – using AdaBoost and XGBoost to predict pulsars
Exercise 160 – using AdaBoost and XGBoost to predict median house values in Boston
Activity 25 – using ML to predict customer return rate accuracy
Summary