Sahara facilitates the execution of jobs and the bursting of workloads in big data clusters running any supported EDP platform in OpenStack. Since we rapidly deployed a Spark cluster in the previous section, its associated jobs can be managed very easily through Sahara.
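As a rough illustration, the following Python sketch uses python-saharaclient to launch an EDP job execution against the Spark cluster deployed previously. The Keystone endpoint, credentials, and the cluster, job template, and data source IDs are placeholders for your own environment, and the exact client calls shown here are an assumption that may vary slightly between Sahara releases.

# A minimal sketch, assuming python-saharaclient and keystoneauth1 are installed.
# All IDs and credentials below are placeholders for your own deployment.
from keystoneauth1 import loading
from keystoneauth1 import session
from saharaclient import client as sahara_client

# Authenticate against Keystone (placeholder credentials).
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',
    username='admin',
    password='secret',
    project_name='admin',
    user_domain_name='Default',
    project_domain_name='Default',
)
sess = session.Session(auth=auth)

# Data-processing (Sahara) client.
sahara = sahara_client.Client('1.1', session=sess)

# Placeholder IDs obtained from your own cluster, job template, and
# registered data sources.
CLUSTER_ID = '<spark-cluster-id>'
JOB_TEMPLATE_ID = '<job-template-id>'
INPUT_DATA_SOURCE_ID = '<input-data-source-id>'
OUTPUT_DATA_SOURCE_ID = '<output-data-source-id>'

# Launch the job: a job execution binds a job template to a running cluster
# together with the input and output data sources registered beforehand.
job_ex = sahara.job_executions.create(
    job_id=JOB_TEMPLATE_ID,
    cluster_id=CLUSTER_ID,
    input_id=INPUT_DATA_SOURCE_ID,
    output_id=OUTPUT_DATA_SOURCE_ID,
    configs={},   # extra EDP configuration or arguments if needed
)
print('Job execution started:', job_ex.id)

The same workflow can be driven from the dashboard or the OpenStack command line; the key point is that the job execution simply ties a job template to the cluster and to the input and output data sources described next.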
Running jobs in Sahara requires specifying the location of the data source and the destination, from which the Sahara engine will fetch, analyze, and store the data respectively. Sahara mainly supports three types of input/output data storage: