Chapter 10. Securing Data Ingest

The preceding chapters have focused on securing Hadoop from a storage and data processing perspective. We’ve assumed that you have data in Hadoop and you want to secure access to it or to control how users share analytic resources, but we’ve neglected to explain how data gets into Hadoop in the first place.

There are many ways for data to be ingested into Hadoop. The simplest method is to copy files from a local filesystem (e.g., a local hard disk or an NFS mount) to HDFS using Hadoop’s put command, as shown in Example 10-1.

Example 10-1. Ingesting files from the command line
[alice@hadoop01 ~]$ hdfs dfs -put /mnt/data/sea*.json /data/raw/sea_fire_911/

While this method might work for some datasets, it's much more common to ingest data from existing relational systems or to set up continuous flows of event- or log-oriented data. For these use cases, the standard tools are Sqoop and Flume, respectively.

Sqoop is designed either to pull data from a relational database into Hadoop or to push data from Hadoop into a remote database. In both cases, Sqoop launches a MapReduce job that does the actual data transfer. By default, Sqoop uses JDBC drivers to transport data between the map tasks and the database. This is called generic mode, and it makes it easy to use Sqoop with new data stores, as the only requirement is the availability of JDBC drivers. For performance reasons, Sqoop also supports connectors that can use vendor-specific tools and interfaces to optimize ...
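Whichever mode is used, an import is launched from the command line. As a minimal sketch of a generic-mode (JDBC) import, the command might look like the following; the connection string, credentials, table name, and target directory shown here are placeholders, not values from a real deployment:

[alice@hadoop01 ~]$ sqoop import \
    --connect jdbc:mysql://db01.example.com/sales \
    --username alice \
    --password-file /user/alice/.db.password \
    --table transactions \
    --target-dir /data/raw/transactions \
    --num-mappers 4

Reading the password with --password-file, rather than typing it on the command line, keeps it out of shell history and process listings, which becomes important once we turn to securing the ingest path itself.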
