Interactive data analysis with pyspark

The Apache Spark distribution ships with an interactive shell called pyspark. Because Python is an interpreted language, we can explore data interactively, typing commands and seeing their results immediately as we learn.

If you remember, we installed Spark with Apache Ambari, so we follow Apache Ambari's standard directory layout to reach the Spark binaries:

[hive@node-3 ~]$ cd /usr/hdp/current/spark2-client/
[hive@node-3 spark2-client]$ ./bin/pyspark
Python 2.7.5 (default, Aug  4 2017, 00:39:18)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
...
