
Data Lake for Enterprises

by Vivek Mishra, Tomcy John, Pankaj Misra
May 2017
Beginner to intermediate
596 pages
15h 2m
English
Packt Publishing
Content preview from Data Lake for Enterprises

File Data Load

  1. Files from a Linux machine can easily be copied into an HDFS cluster using the hdfs dfs -put command. This command is part of the Hadoop client, which can be installed on any Linux machine. In our case, the Hadoop client is available as part of the Hadoop pseudo-distributed setup.

The general syntax of this command is as follows:

hdfs dfs -put /local/path/test.file hdfs://namenode:9000/user/stage
  2. For this example, let us create a raw area in HDFS (a folder in HDFS). This area will hold the data in its most natural form, as acquired from the source. Create it using the following command (a combined sketch of both steps is given after this list):
hdfs dfs -mkdir -p /<any-path>/raw/txt

Once the previous command is executed, it will create the folder structure (<any-path>/raw/txt) in HDFS, which can be viewed using ...
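Putting both steps together, the following is a minimal sketch of loading a file into the raw area. The local file /tmp/test.file, the HDFS path /data/raw/txt, and the NameNode address namenode:9000 are illustrative placeholders; replace them with values that match your own setup:

# Create the raw area in HDFS (illustrative path)
hdfs dfs -mkdir -p /data/raw/txt

# Copy a local file from the Linux machine into the raw area
hdfs dfs -put /tmp/test.file hdfs://namenode:9000/data/raw/txt

# List the contents of the raw area to confirm the copy
hdfs dfs -ls hdfs://namenode:9000/data/raw/txt

If HDFS is already configured as the default filesystem, the hdfs://namenode:9000 prefix can be omitted, and hadoop fs may be used in place of hdfs dfs.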


