creating a total of three copies. In this way, if a particular node fails, the data is not lost, because it is also stored on at least two other nodes by default.
Figure 2.1 shows a very large text file split into smaller, equal-sized pieces called blocks. HDFS deals only with the blocks of a file; every block except the last is of the same size. The block is also the unit of replication and fault tolerance.
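To make the block arithmetic concrete, the following Java snippet counts how many blocks a file occupies and how many replicas the cluster stores for it. This is a minimal sketch: the 500 MB file size is a hypothetical example, and binary (1,024-based) megabytes are assumed, which is how HDFS sizes blocks internally.

public class BlockSplitExample {
    public static void main(String[] args) {
        final long MB = 1024L * 1024;      // binary megabyte, as HDFS sizes blocks internally
        long blockSize = 128 * MB;         // the 128 MB block size discussed below
        long fileSize  = 500 * MB;         // hypothetical 500 MB input file

        long fullBlocks  = fileSize / blockSize;          // blocks of exactly 128 MB
        long lastBlockMB = (fileSize % blockSize) / MB;   // smaller final block, if any
        long totalBlocks = fullBlocks + (lastBlockMB > 0 ? 1 : 0);

        System.out.println("Blocks: " + totalBlocks
                + " (" + fullBlocks + " full, final block " + lastBlockMB + " MB)");
        System.out.println("Replicas with replication factor 3: " + totalBlocks * 3);
    }
}

For the hypothetical 500 MB file this prints 4 blocks (3 full blocks plus a 116 MB final block) and 12 replicas cluster-wide. Note that the smaller final block consumes only its actual size on disk; a block is a logical unit, not a preallocated one.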
Based on various considerations that will be covered in the subsequent chapter, it has been observed that a block size of 128 megabytes (1 megabyte is 10 raised to the power of 6 bytes) ...
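On a running cluster, a file's block size and replication factor can be read back through Hadoop's Java FileSystem API. The sketch below is illustrative, assuming an already configured HDFS client; the path /data/large.txt is a placeholder, not a file from the text.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class InspectBlocks {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        // placeholder path used for illustration
        FileStatus status = fs.getFileStatus(new Path("/data/large.txt"));
        System.out.println("Block size:  " + status.getBlockSize() + " bytes");
        System.out.println("Replication: " + status.getReplication());
    }
}

The same two values are also available from the command line with hdfs dfs -stat "%o %r" <path>.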