Working with Spark DataFrames

So far, we've described how to load DataFrames from CSV and Parquet files, but not how to create them from an existing RDD. To do so, create a Row object for each record in the RDD and pass the RDD to the createDataFrame method of the SQL context. Finally, register the DataFrame as a temporary table so you can take full advantage of SQL syntax:

In: from pyspark.sql import Row
    rdd_gender = \
        sc.parallelize([Row(short_gender="M", long_gender="Male"),
                        Row(short_gender="F", long_gender="Female")])
    (sqlContext.createDataFrame(rdd_gender)
        .registerTempTable("gender_maps"))
    sqlContext.table("gender_maps").show()

Out: +-----------+------------+
     |long_gender|short_gender|
     +-----------+------------+
     |       Male|           M|
     |     Female|           F|
     +-----------+------------+
