Hadoop 2.x Administration Cookbook by Gurmukh Singh

Fair Scheduler configuration

Getting ready

To follow this recipe, we need a Hadoop cluster set up and running. By default, the Apache Hadoop 1.x distribution uses the FIFO scheduler, and Hadoop 2.x uses the Capacity Scheduler. In a cluster running multiple jobs, the FIFO scheduler is a poor choice: it starves jobs of resources, because only the first job in the queue executes while all the other jobs wait.

To address this issue, two schedulers are commonly used to allocate cluster resources fairly: the Fair Scheduler and the Capacity Scheduler. In this recipe, we will see how to configure the Fair Scheduler. Simply put, the Fair Scheduler shares resources fairly among running jobs, based on the queues and weights assigned. ...
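As a minimal sketch of what that configuration looks like (the queue names, weights, and file path here are illustrative assumptions, not taken from the recipe), the Fair Scheduler is enabled in yarn-site.xml and its queues are defined in a separate allocation file:

```xml
<!-- yarn-site.xml: switch the ResourceManager to the Fair Scheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<property>
  <!-- Path is an example; point this at your allocation file -->
  <name>yarn.scheduler.fair.allocation.file</name>
  <value>/etc/hadoop/conf/fair-scheduler.xml</value>
</property>

<!-- fair-scheduler.xml: two hypothetical queues; "prod" gets twice the
     fair share of "dev" via its weight -->
<allocations>
  <queue name="prod">
    <weight>2.0</weight>
    <minResources>2048 mb,2 vcores</minResources>
  </queue>
  <queue name="dev">
    <weight>1.0</weight>
  </queue>
</allocations>
```

Changing the scheduler class requires a ResourceManager restart, whereas the allocation file is reloaded periodically at runtime, so queue weights can be adjusted without downtime.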
