Executing jobs

Sahara facilitates the execution of jobs and bursting workloads on big data clusters in OpenStack through its Elastic Data Processing (EDP) facility, which supports several workload platforms. Since we rapidly deployed a Spark cluster in the previous section, its associated jobs can be managed in Sahara very easily.

Running a job in Sahara requires specifying the locations of the data source and the data destination: the Sahara engine fetches the input data from the source, analyzes it, and stores the result in the destination. Sahara mainly supports three types of input/output data storage (a brief command-line sketch follows the list):

  • Swift: This designates the OpenStack Object Storage service as the location of the input data and the destination of the output results
  • HDFS: This uses the Hadoop Distributed File System (HDFS) storage hosted on running OpenStack instances
  • Manila: This uses the OpenStack Shared File Systems service ...
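
As a minimal sketch of the Swift option, the commands below register an input and an output data source and then launch a job on the Spark cluster deployed earlier. This assumes the python-saharaclient plugin for the OpenStack CLI is installed and that a Spark job template already exists; the container, credentials, cluster name (spark-cluster), and template name (spark-wordcount) are placeholders to adapt to your environment:

    # Register a Swift object as the input data source (all names are placeholders)
    $ openstack dataprocessing data source create input-logs \
        --type swift --url swift://demo-container/input.txt \
        --username demo --password secret

    # Register the output destination in the same container
    $ openstack dataprocessing data source create output-results \
        --type swift --url swift://demo-container/output \
        --username demo --password secret

    # Launch the job on the Spark cluster, wiring input to output
    $ openstack dataprocessing job execute \
        --job-template spark-wordcount --cluster spark-cluster \
        --input input-logs --output output-results

The same pattern applies to HDFS or Manila data sources; only the --type and --url values of the data source change.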
