Spark

As a general-purpose data engine, Apache Spark integrates closely with Hive. Spark SQL supports a subset of HQL and can use the Hive metastore to write to and query Hive tables. This approach is also called Spark over Hive. To configure Spark to use the Hive metastore, you only need to copy hive-site.xml to the ${SPARK_HOME}/conf directory. After that, running the spark-sql command starts the Spark SQL interactive environment, where you can write SQL to query Hive tables.
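
As a rough sketch, the setup looks like the following on a typical installation. The paths assume standard ${HIVE_HOME} and ${SPARK_HOME} environment variables, and the employee table is a hypothetical example:

    # Point Spark at the Hive metastore by sharing Hive's configuration
    cp ${HIVE_HOME}/conf/hive-site.xml ${SPARK_HOME}/conf/

    # Start the interactive Spark SQL shell
    ${SPARK_HOME}/bin/spark-sql

    -- Inside the shell, Hive tables are queried with plain SQL
    spark-sql> SHOW TABLES;
    spark-sql> SELECT name, start_date FROM employee LIMIT 10;

Because Spark reads table definitions directly from the metastore, no data is copied; Spark simply plans and executes queries against the files Hive already manages.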

Hive over Spark, on the other hand, is the reverse approach: it lets Hive use Spark as an alternative execution engine. In this case, users stay in Hive and write HQL, which runs transparently on the Spark engine. Hive over Spark requires the Yarn FairScheduler ...
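
Switching engines is a configuration change rather than a query change. A minimal sketch, assuming Hive on Spark is already installed and using an illustrative employee table (the spark.* values shown are placeholders to tune for your cluster):

    -- In the Hive session, switch the execution engine to Spark
    -- (setting this in hive-site.xml makes it permanent)
    set hive.execution.engine=spark;
    set spark.master=yarn;
    set spark.executor.memory=2g;

    -- The HQL itself is unchanged; it now executes as Spark jobs
    SELECT name, COUNT(*) AS cnt FROM employee GROUP BY name;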
