Chapter 5. Storage
Hadoop clusters are about working with data, usually lots and lots of data, often orders of magnitude larger than ever before. Cloud providers supply different ways to store that data on their vast infrastructure, to complement the compute capabilities that operate on the data and the networking facilities that move the data around. Each form of storage serves a different purpose in Hadoop architectures.
Block Storage
The most common type of storage offered by a cloud provider is the disk-like storage that comes along with each instance that you provision. This storage is usually called block storage, but it is almost always accessed through a filesystem mount. Each unit of block storage is called a volume or simply a disk. A volume does not necessarily map to a single physical device, or even to hardware directly connected to an instance’s actual host machine.
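As a concrete sketch of what “accessed through a filesystem mount” means in practice, the commands below format a freshly attached volume and mount it on a Linux instance. The device name is an assumption for illustration; the name a new volume actually receives varies by provider and instance type.

```shell
# Hypothetical device name; on many clouds a newly attached
# volume appears as something like /dev/xvdf or /dev/nvme1n1.
DEVICE=/dev/xvdf
MOUNT_POINT=/data

# Create a filesystem on the volume. Only do this on first
# use -- it erases any existing data on the volume.
sudo mkfs.ext4 "$DEVICE"

# Mount the volume so it is usable as an ordinary directory tree.
sudo mkdir -p "$MOUNT_POINT"
sudo mount "$DEVICE" "$MOUNT_POINT"
```

From this point on, applications read and write the volume through ordinary file paths under the mount point, with no awareness of the underlying block device.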
Persistent volumes survive beyond the lifetime of the initial instances that spawned them. A persistent volume can be detached from an instance and attached to another instance, in a way similar to moving physical hard drives from computer to computer. While you wouldn’t usually do that with physical drives, it is much easier to do so in a cloud provider, and it opens up new usage patterns. For example, you could maintain a volume loaded with important data or applications over a long period of time, but only attach it to an instance once in a while to do work on it.
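The detach-and-reattach pattern described above can be sketched with a provider's CLI; here, the AWS CLI serves as one example, with placeholder volume and instance IDs that are purely hypothetical.

```shell
# Hypothetical IDs for illustration only.
VOLUME=vol-0123456789abcdef0
NEW_INSTANCE=i-0ddd3333eeee4444f

# Detach the persistent volume from whatever instance holds it...
aws ec2 detach-volume --volume-id "$VOLUME"

# ...and attach it to a different instance, much like moving a
# physical hard drive between computers.
aws ec2 attach-volume --volume-id "$VOLUME" \
    --instance-id "$NEW_INSTANCE" --device /dev/sdf
```

Other providers expose equivalent operations under different names, but the lifecycle is the same: the volume and its data persist independently of any one instance.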
Volumes that are limited to ...