Data storage process in HDFS

The following points should give a good idea of the data storage process:

All data in HDFS is written in blocks, usually 128 MB in size. Thus, a single 512 MB file would be split into four blocks (4 * 128 MB). These blocks are then written to DataNodes. To maintain redundancy and high availability, each block is replicated to create duplicate copies. Most Hadoop installations use a replication factor of three, meaning that each block of data is stored in three copies.
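
The block size and replication factor of a file can be inspected programmatically through Hadoop's Java FileSystem API. The following is a minimal sketch, assuming a reachable HDFS cluster, the Hadoop client libraries on the classpath, and a hypothetical file at /data/example.csv; it prints the file's size, block size, block count, and replication factor.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockInfo {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath, which define
        // dfs.blocksize (default 128 MB) and dfs.replication (default 3).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical path; replace with a file that exists in your cluster.
        Path file = new Path("/data/example.csv");
        FileStatus status = fs.getFileStatus(file);

        long blockSize = status.getBlockSize();       // bytes per block, e.g. 134217728
        short replication = status.getReplication();  // copies of each block, e.g. 3
        long numBlocks = (status.getLen() + blockSize - 1) / blockSize;

        System.out.printf("File size:   %d bytes%n", status.getLen());
        System.out.printf("Block size:  %d bytes%n", blockSize);
        System.out.printf("Blocks:      %d%n", numBlocks);
        System.out.printf("Replication: %d%n", replication);

        fs.close();
    }
}

The same details, including where each replica is placed, can also be obtained from the command line with hdfs fsck /data/example.csv -files -blocks -locations.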

This redundancy guarantees that, in the event one of the servers fails or stops responding, a second and even a third copy remains available. To ensure that this process works seamlessly, the DataNode places ...
