
Mastering Apache Spark by Mike Frampton


Moving data

Some of the methods of moving data in and out of Databricks have already been explained in Chapter 8, Spark Databricks, and Chapter 9, Databricks Visualization. In this section, I will provide an overview of all of the methods available for moving data, examining the options for tables, workspaces, jobs, and Spark code.

The table data

The table import functionality in the Databricks cloud allows data to be imported from an AWS S3 bucket, from the Databricks file system (DBFS), via JDBC, and from a local file. This section gives an overview of each import type, starting with S3. Importing table data from AWS S3 requires the AWS key, the AWS secret key, and the S3 bucket name. The following screenshot ...
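For readers who prefer to script the import rather than use the table UI, the same S3 read can be expressed in Spark code. The following is a minimal sketch, assuming the Hadoop s3a connector is available on the cluster; the bucket name, file path, and environment-variable names are placeholders, not real values.

```scala
// A sketch of importing S3 data as a table in code; the bucket, path,
// and credential sources below are hypothetical.
import org.apache.spark.sql.SparkSession

object S3TableImport {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("s3-table-import")
      .getOrCreate()

    // Supply the AWS key and secret key that the table import form asks for.
    val hadoopConf = spark.sparkContext.hadoopConfiguration
    hadoopConf.set("fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
    hadoopConf.set("fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))

    // Read a CSV file from the bucket and register it as a temporary table.
    val df = spark.read
      .option("header", "true")
      .csv("s3a://my-example-bucket/data/sales.csv") // hypothetical path

    df.createOrReplaceTempView("sales")
    spark.sql("SELECT COUNT(*) FROM sales").show()

    spark.stop()
  }
}
```

On Databricks itself the credential configuration is handled for you when the key and secret key are entered in the import form; the explicit `hadoopConfiguration` calls above are only needed when running outside that environment.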
