The command line is an often overlooked but powerful ally for the data scientist. With the right shell commands, many data operations can be performed blazingly fast. To illustrate this, we will use shell commands to shuffle, split, and create training and validation subsets of the Ames Housing dataset:
- First, extract the header (the first line) into a separate file, ames_housing_header.csv:
$ head -n 1 ames_housing.csv > ames_housing_header.csv
- Next, we tail every line after the first into a new, headerless file:
$ tail -n +2 ames_housing.csv > ames_housing_nohead.csv
- Then randomly sort the rows into a temporary file (gshuf is the macOS port of the GNU shuf command, available through Homebrew's coreutils package):
$ gshuf ames_housing_nohead.csv > ames_housing_shuffled.csv
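
The steps above can be sketched end to end as a single script. Note the assumptions: the output filenames (ames_housing_shuffled.csv, ames_train.csv, ames_valid.csv) and the 80/20 split ratio are illustrative choices, not prescribed by the text, and a tiny stand-in CSV is synthesized first so the sketch runs on its own. On macOS, replace shuf with gshuf.

```shell
# Synthesize a small stand-in for ames_housing.csv so the sketch is self-contained.
printf 'Id,SalePrice\n' > ames_housing.csv
for i in $(seq 1 10); do
  printf '%d,%d\n' "$i" $((100000 + i)) >> ames_housing.csv
done

# 1. Extract the header, then write the remaining rows to a headerless file.
head -n 1 ames_housing.csv > ames_housing_header.csv
tail -n +2 ames_housing.csv > ames_housing_nohead.csv

# 2. Shuffle the data rows (gshuf on macOS).
shuf ames_housing_nohead.csv > ames_housing_shuffled.csv

# 3. Split roughly 80/20 into training and validation rows.
total=$(wc -l < ames_housing_shuffled.csv)
train=$(( total * 80 / 100 ))
head -n "$train" ames_housing_shuffled.csv > train_body.csv
tail -n +"$(( train + 1 ))" ames_housing_shuffled.csv > valid_body.csv

# 4. Re-attach the header to each subset.
cat ames_housing_header.csv train_body.csv > ames_train.csv
cat ames_housing_header.csv valid_body.csv > ames_valid.csv

# With 10 data rows: 8 training rows and 2 validation rows, plus one header each.
wc -l ames_train.csv ames_valid.csv
```

Shuffling before splitting matters: if the source file is sorted (by sale date or neighborhood, say), a plain head/tail split would put systematically different houses into the two subsets.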