Large Scale and Big Data

by Sherif Sakr, Mohamed Gaber
June 2014
Content level: Intermediate to advanced
636 pages
23h 13m
English
Auerbach Publications
Content preview from Large Scale and Big Data
iMapReduce
To implement persistent tasks, there must be enough available task slots. The
number of available map/reduce task slots is the number of map/reduce tasks that
the framework can execute simultaneously. In Hadoop MapReduce, the master splits
a job into many small map/reduce tasks, and the number of map/reduce tasks
executed simultaneously cannot exceed the number of available map/reduce task
slots (the default number in Hadoop is 2 per slave worker). Once a slave worker
completes an assigned task, it requests another one from the master. iMapReduce
must guarantee that there are sufficient available task slots for all the
persistent tasks to start at the beginning. This means that the ...
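The slot-sufficiency condition described above can be sketched as a simple check. This is a minimal illustration, not iMapReduce's actual API; the class and method names are hypothetical, and it assumes the Hadoop default of 2 slots per slave worker:

```java
// Hypothetical sketch: because persistent tasks hold their slots for the
// whole job, all of them must fit into the cluster's slot capacity at once.
public class SlotCheck {
    // Hadoop's default number of map (or reduce) slots per slave worker.
    static final int DEFAULT_SLOTS_PER_WORKER = 2;

    // Total slot capacity = number of slave workers * slots per worker.
    static int totalSlots(int numWorkers, int slotsPerWorker) {
        return numWorkers * slotsPerWorker;
    }

    // Persistent tasks never release their slots, so the job can only
    // start if every persistent task gets a slot up front.
    static boolean canStartPersistentTasks(int persistentTasks,
                                           int numWorkers,
                                           int slotsPerWorker) {
        return persistentTasks <= totalSlots(numWorkers, slotsPerWorker);
    }

    public static void main(String[] args) {
        // 10 slave workers with the default 2 slots each -> 20 slots.
        System.out.println(canStartPersistentTasks(20, 10, DEFAULT_SLOTS_PER_WORKER)); // true
        System.out.println(canStartPersistentTasks(21, 10, DEFAULT_SLOTS_PER_WORKER)); // false
    }
}
```

In ordinary Hadoop MapReduce this constraint is soft (excess tasks simply wait for a free slot), but with persistent tasks a shortfall would deadlock the iteration, which is why iMapReduce checks it up front.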


Publisher Resources

ISBN: 9781466581500