Working with Spark DataFrames

So far, we've described how to load DataFrames from CSV and Parquet files, but not how to create them from an existing RDD. To do so, you just need to create one Row object for each record in the RDD and pass the RDD to the createDataFrame method of the SQL context. Finally, you can register the DataFrame as a temp table in order to use the full power of SQL syntax on it:

In:
from pyspark.sql import Row
rdd_gender = \
    sc.parallelize([Row(short_gender="M", long_gender="Male"),
                    Row(short_gender="F", long_gender="Female")])
(sqlContext.createDataFrame(rdd_gender)
    .registerTempTable("gender_maps"))
sqlContext.table("gender_maps").show()

Out:
+-----------+------------+
|long_gender|short_gender|
+-----------+------------+
|       Male|           M|
|     Female|           F|
+-----------+------------+
