Data Algorithms by Mahmoud Parsian


Chapter 30. Huge Cache for MapReduce

This chapter shows how to use and read a huge cache (i.e., one composed of billions of key-value pairs that cannot fit in a commodity server’s memory) in MapReduce algorithms. The algorithms presented in this chapter are generic enough to be used in any MapReduce paradigm (such as MapReduce/Hadoop or Spark).

Some MapReduce algorithms require access to huge (i.e., containing billions of records) static reference relational tables. Typically, these reference tables do not change for a long period of time, but they are needed in either the map() or reduce() phase of MapReduce programs. One example of such a table is a “position feature” table, which is used for germline data type ingestion and variant classification. The position feature table might have the attributes shown in Table 30-1 (its composite key is (chromosome_id, position)).

Table 30-1. Attributes of a position feature table

Column name               Characteristics
chromosome_id             Key-1
position                  Key-2
feature_id                Basic attribute
mrna_feature_id           Basic attribute
sequence_data_type_id     Basic attribute
mapping                   Basic attribute
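
To make the composite key concrete, here is a minimal Java sketch of a key class for (chromosome_id, position). The class name and field types are assumptions for illustration, not code from this chapter; overriding equals() and hashCode() is what lets the composite key be used directly in hash-based maps and caches.

    import java.util.Objects;

    // Hypothetical composite key for Table 30-1: (chromosome_id, position).
    public class PositionKey {
        private final String chromosomeId; // Key-1
        private final long position;       // Key-2

        public PositionKey(String chromosomeId, long position) {
            this.chromosomeId = chromosomeId;
            this.position = position;
        }

        // equals()/hashCode() make the key safe to use in HashMap-style caches.
        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof PositionKey)) return false;
            PositionKey k = (PositionKey) o;
            return position == k.position && chromosomeId.equals(k.chromosomeId);
        }

        @Override
        public int hashCode() {
            return Objects.hash(chromosomeId, position);
        }
    }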

When expressing your solution in the MapReduce paradigm, in either map() or reduce(), given a key (chromosome_id, position), you want to return a List<String> in which each element carries the remaining attributes {feature_id, mrna_feature_id, sequence_data_type_id, mapping}. For the germline data type, a position ...
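Because billions of entries cannot fit in a commodity server’s memory, one common approach is to keep a bounded in-memory LRU cache in front of a disk-backed store and fall back to that store on a cache miss. The following Java sketch illustrates that idea under stated assumptions: CacheBackedLookup and loadFromDisk() are hypothetical names, the backing store (partitioned sorted files, an embedded key-value store, a remote cache, etc.) is left abstract, and this is not necessarily this chapter’s own implementation.

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Hedged sketch: a bounded LRU cache in front of a disk-backed lookup,
    // usable from either map() or reduce().
    public class CacheBackedLookup {
        private static final int MAX_ENTRIES = 1_000_000; // tune to available RAM

        // An access-ordered LinkedHashMap evicts the least recently used
        // entry once the cache grows past MAX_ENTRIES.
        private final Map<PositionKey, List<String>> lru =
            new LinkedHashMap<PositionKey, List<String>>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(
                        Map.Entry<PositionKey, List<String>> eldest) {
                    return size() > MAX_ENTRIES;
                }
            };

        public List<String> get(PositionKey key) {
            List<String> values = lru.get(key);
            if (values == null) {          // cache miss: consult the backing store
                values = loadFromDisk(key);
                lru.put(key, values);
            }
            return values;
        }

        // Placeholder for the disk/remote lookup; each returned element carries
        // {feature_id, mrna_feature_id, sequence_data_type_id, mapping}.
        private List<String> loadFromDisk(PositionKey key) {
            return new ArrayList<>(); // hypothetical: read from partitioned files
        }
    }

An access-ordered LinkedHashMap with an overridden removeEldestEntry() gives a simple LRU eviction policy with no external dependency; MAX_ENTRIES should be tuned to the heap available to each mapper or reducer.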
