A standard Hadoop architecture

Let's understand a standard Hadoop architecture:

  • Hadoop Distributed File System (HDFS): A distributed filesystem instantiated across the local disks attached to the compute nodes in the Hadoop cluster
  • Map: The embarrassingly parallel phase, in which the same computation is applied independently to every chunk of data read from HDFS
  • Reduce: The phase that takes the map results and combines them to produce the final output (see the word-count sketch after this list)
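To make the two phases concrete, the following is a minimal sketch of the classic word-count job written against Hadoop's Java MapReduce API. The mapper emits a (word, 1) pair for every token in its input split, and the reducer sums the counts for each word; the input and output paths passed on the command line are assumed to be HDFS locations.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: runs in parallel, once per input split read from HDFS.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE); // emit (word, 1) for each token
      }
    }
  }

  // Reduce phase: receives all counts for a given word and sums them.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result); // final (word, total) written back to HDFS
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // pre-aggregate on the map side
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Submitted with `hadoop jar wordcount.jar WordCount /input /output`, the framework schedules map tasks against the HDFS blocks of the input and writes the reducer's output back into HDFS, which is exactly the flow described in the list above.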

The final results are typically stored back into HDFS. Serengeti, an open source project, provides ease of provisioning, multi-tenancy, and the flexibility to scale up or out. Big Data Extensions (BDE) allows Serengeti to be triggered from a vRealize blueprint, making it easy to self-provision a Hadoop cluster of a given size: ...
