Apache Spark for Data Science Cookbook by Padma Priya Chitturi

Concatenating and merging operations over DataFrames

This recipe shows how to concatenate, merge/join, and perform complex operations over Pandas DataFrames as well as Spark DataFrames.

Getting ready

To step through this recipe, you will need a running Spark cluster, either in pseudo-distributed mode or in one of the distributed modes (standalone, YARN, or Mesos). You will also need Python and IPython installed on a Linux machine, such as Ubuntu 14.04.

How to do it…

  1. Invoke ipython console --profile=pyspark:

     In [1]: from pyspark import SparkConf, SparkContext
     In [2]: from pyspark.sql import SQLContext
     In [3]: import pandas as pd
     In [4]: sqlcontext = SQLContext(sc)
     In [5]: pdf1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
        ...:                      'B': ['B0', 'B1', 'B2', 'B3'],
        ...:                      'C': ['C0', 'C1', 'C2', 'C3'],
        ...:                      'D': ['D0', 'D1', 'D2', 'D3']})
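
     The listing is truncated in this excerpt: the 'D' column above is completed by the obvious pattern, and what follows is a minimal sketch of how the recipe's concatenation and merge steps might continue. The second DataFrame pdf2 and the key-based frames left and right are illustrative names, not from the original text.

     In [6]: pdf2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
        ...:                      'B': ['B4', 'B5', 'B6', 'B7'],
        ...:                      'C': ['C4', 'C5', 'C6', 'C7'],
        ...:                      'D': ['D4', 'D5', 'D6', 'D7']})

     # Row-wise concatenation of the two pandas DataFrames
     In [7]: pd.concat([pdf1, pdf2], ignore_index=True)

     # Merge/join two pandas DataFrames on a shared key column
     In [8]: left = pd.DataFrame({'key': ['K0', 'K1'], 'A': ['A0', 'A1']})
     In [9]: right = pd.DataFrame({'key': ['K0', 'K1'], 'B': ['B0', 'B1']})
     In [10]: pd.merge(left, right, on='key')

     # The same operations on Spark DataFrames (Spark 1.x API):
     # createDataFrame converts a pandas DataFrame, unionAll concatenates
     # two DataFrames row-wise, and join performs a key-based inner join
     In [11]: sdf1 = sqlcontext.createDataFrame(pdf1)
     In [12]: sdf2 = sqlcontext.createDataFrame(pdf2)
     In [13]: sdf1.unionAll(sdf2).show()
     In [14]: sleft = sqlcontext.createDataFrame(left)
     In [15]: sright = sqlcontext.createDataFrame(right)
     In [16]: sleft.join(sright, 'key').show()

     In Spark 2.x, unionAll was renamed union and SQLContext is superseded by SparkSession, but the pattern is the same.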
