Sqoop provides an excellent way to import data in parallel from an existing RDBMS into HDFS. The imported files preserve the structure of the source tables, and the import is fast because it runs as parallel map tasks. The resulting files can be text delimited by ',', '|', and so on. After the imported records are processed using MapReduce or Hive, the output result set can be exported back to the RDBMS. Imports can be run on demand or as a batch process (for example, scheduled with a cron job).
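As a rough sketch of the import/export round trip described above, the following commands show what a parallel import and a later export might look like. The JDBC URL, database, table names, user, and HDFS paths here are all assumptions for illustration, not values from this setup:

```shell
# Hypothetical example: import the "customers" table from a MySQL
# database into HDFS using four parallel map tasks, comma-delimited.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username sqoopuser -P \
  --table customers \
  --fields-terminated-by ',' \
  --num-mappers 4 \
  --target-dir /user/hadoop/customers

# After manipulating the records with MapReduce or Hive, export the
# result set back to an RDBMS table (table and paths are assumptions):
sqoop export \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username sqoopuser -P \
  --table customer_summary \
  --export-dir /user/hadoop/customer_summary \
  --input-fields-terminated-by ','
```

Both commands require a running Hadoop cluster and a reachable database, so they are shown here only as a command sketch.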
The HBase and Hadoop clusters must be up and running.
Download the Sqoop tarball using wget:
wget http://mirrors.gigenet.com/apache/sqoop/1.4.6/sqoop-1.4.6.tar.gz
Untar it:
tar -zxvf sqoop-1.4.6.tar.gz
It will create a /u/HbaseB/sqoop-1.4.6 folder.
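Once the folder exists, a typical next step is to point SQOOP_HOME at it and put the Sqoop binaries on the PATH. This is a hedged sketch: the Hadoop locations below are assumptions you should adjust to your own cluster layout:

```shell
# Point SQOOP_HOME at the folder created by the untar step above,
# and add Sqoop's bin directory to the PATH.
export SQOOP_HOME=/u/HbaseB/sqoop-1.4.6
export PATH=$PATH:$SQOOP_HOME/bin

# Sqoop also needs to locate Hadoop; these paths are assumptions,
# adjust them to wherever Hadoop is installed on your nodes.
export HADOOP_COMMON_HOME=/u/HbaseB/hadoop
export HADOOP_MAPRED_HOME=/u/HbaseB/hadoop
```

Adding these exports to the Sqoop user's shell profile makes them persist across sessions.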
A Sqoop user is created ...