So far, we've built and displayed the output piece by piece. It's also possible to cascade all the operations and set them as stages of a single pipeline. In fact, we can chain together everything we've seen so far (the four label encoders, the vector builder, and the classifier) into a standalone pipeline, fit it to the training dataset, and finally use it on the test dataset to obtain the predictions.
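As a minimal sketch of this chaining idea, here is an analogous end-to-end pipeline in scikit-learn; the toy data, column names, and choice of classifier are illustrative assumptions, not the exact objects built earlier in the chapter:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Toy frames standing in for the real train/test splits.
train = pd.DataFrame({
    "cat_a": ["x", "y", "x", "y"],
    "cat_b": ["p", "p", "q", "q"],
    "label": [0, 1, 0, 1],
})
test = pd.DataFrame({"cat_a": ["x", "y"], "cat_b": ["q", "p"]})

# Stage 1 encodes the categorical columns into a numeric matrix;
# stage 2 feeds that matrix to the classifier. Fitting the pipeline
# fits every stage in order on the training data.
pipe = Pipeline([
    ("encode", ColumnTransformer(
        [("ord", OrdinalEncoder(), ["cat_a", "cat_b"])])),
    ("clf", DecisionTreeClassifier(random_state=0)),
])

pipe.fit(train[["cat_a", "cat_b"]], train["label"])
preds = pipe.predict(test[["cat_a", "cat_b"]])
print(preds)
```

Calling `fit` once trains all stages in sequence, and `predict` pushes the raw test rows through the same encoders before reaching the classifier, so no intermediate results need to be materialized by hand.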
This way of operating is more efficient, but you lose the exploratory power of the step-by-step analysis. Readers who are data scientists are advised to use end-to-end pipelines only when they are completely sure of what's going on inside, and only to build production models. To show that the pipeline is equivalent ...