Chapter 8. Cluster Maintenance
Hadoop clusters require a moderate amount of day-to-day care and feeding to remain healthy and in optimal working condition. Maintenance tasks are usually performed in response to events: expanding the cluster, dealing with failures or errant jobs, managing logs, or upgrading software in a production environment. This chapter is written in “run book” form, with common tasks called out and simple processes for dealing with them. It is not meant to supplant a complete understanding of the system, and as always, the normal caveats apply when dealing with systems that store data or serve critical functions.
Managing Hadoop Processes
It’s not at all unusual to need to start, stop, or restart Hadoop daemons because of configuration changes or as part of a larger process. Depending on the deployment model and distribution, this can be as simple as using the standard service init scripts or invoking the specialized helper scripts that ship with Hadoop. Some administrators also use configuration management systems such as Puppet or Chef to manage processes.
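As a rough sketch, the two styles look like the following; the exact service and script names are assumptions that vary by distribution and version (the package-style names below mimic CDH, while the helper scripts ship with the Apache tarball):

    # Distribution packages usually install System V-style init scripts;
    # service names like these are illustrative and distribution-specific.
    $ sudo service hadoop-hdfs-namenode start
    $ sudo service hadoop-hdfs-datanode stop

    # Apache tarball installs ship helper scripts instead (bin/ in
    # Hadoop 1.x, sbin/ in later releases); each script manages a
    # single daemon on the local host.
    $ hadoop-daemon.sh start namenode
    $ hadoop-daemon.sh stop datanode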
Starting and Stopping Processes with Init Scripts
The most common reason administrators restart Hadoop processes is to enact configuration changes. Other common reasons are to upgrade Hadoop, add or remove worker nodes, or react to incidents. The effect of starting or stopping a process is entirely dependent upon the process in question. Starting a namenode will bring it into service after it loads the fsimage, applies any outstanding edits from the edit log, and exits safe mode once the datanodes have reported a sufficient number of blocks.
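For example, a minimal restart sequence for a datanode after a configuration change might look like the following; the service name and log path shown are assumptions that depend on how Hadoop was installed:

    # Restart the daemon so it rereads its configuration files.
    $ sudo service hadoop-hdfs-datanode restart

    # Verify that the JVM came back up (run as root so jps can see
    # processes owned by the hdfs user).
    $ sudo jps

    # Watch the daemon log for errors during startup; this path is
    # an assumption and varies by distribution.
    $ tail -f /var/log/hadoop-hdfs/hadoop-hdfs-datanode-$(hostname).log

Remember that a restart is disruptive: a datanode stops serving block data until it returns, so rolling through worker nodes one at a time is the usual approach.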