Time for action – adding an additional fsimage location

Let's now configure our NameNode to simultaneously write multiple copies of fsimage to give us our desired data resilience. To do this, we require an NFS-exported directory.
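If such a directory is not already available, the following is a minimal sketch of preparing one. The host name nfshost and the export path /export/backup are placeholders for whatever your NFS server actually provides; the mount point matches the /share/backup path used in the steps that follow, and the user running the NameNode must be able to write to it.

    # Create the mount point and attach the NFS export to it
    $ sudo mkdir -p /share/backup
    $ sudo mount -t nfs nfshost:/export/backup /share/backup
    # Confirm the mount is writable by the current user
    $ touch /share/backup/writetest && rm /share/backup/writetest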

  1. Ensure the cluster is stopped.
    $ stop-all.sh
    
  2. Add the following property to Hadoop/conf/core-site.xml. The dfs.name.dir property takes a comma-separated list of directories; modify the second path to point to an NFS-mounted location to which the additional copy of the NameNode data can be written.
    <property>
      <name>dfs.name.dir</name>
      <value>${hadoop.tmp.dir}/dfs/name,/share/backup/namenode</value>
    </property>
  3. Delete any existing contents of the newly added directory.
    $ rm -rf /share/backup/namenode/*
    
  4. Start the cluster.
    $ start-all.sh
    
  5. Verify that fsimage is being written to both of the specified locations; one way to do so is sketched after these steps.
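One way to perform this check, sketched below, is to confirm with jps that the NameNode process is running and then compare the metadata files in the two configured directories. The first path assumes hadoop.tmp.dir is left at its default of /tmp/hadoop-${user.name} for a user named hadoop; substitute the value from your own configuration.

    # Confirm the NameNode (and the other daemons) came back up
    $ jps
    # Compare the image files in the two dfs.name.dir locations;
    # matching checksums show the same metadata is written to both
    $ md5sum /tmp/hadoop-hadoop/dfs/name/current/fsimage \
             /share/backup/namenode/current/fsimage

If the checksums differ or the second directory remains empty, check the NFS mount and directory permissions before relying on the additional copy.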
