There's more...

Do not be put off by the complex setup; it is a one-off operation. Instead, consider whether the computing model underlying Spark is one that you like.

Spark abstracts away many of the issues related to parallelization, to the point that you may wonder what is actually going on. If you monitor the Java processes doing the work, you will see that many of them use several cores to perform computations. If you are comfortable with that level of distance from the implementation, then Spark is for you. If not, check the Dask recipe.

If you read the Spark documentation, you will find lots of examples related to resilient distributed datasets (RDDs). This is an interesting avenue to explore, but here we are using one ...
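To give a flavor of what those documentation examples look like, here is a minimal, purely illustrative RDD sketch (it is not the approach used in this recipe): it assumes a local Spark installation and uses made-up (chromosome, position) data to count records per chromosome with a map/reduceByKey pair.

from pyspark.sql import SparkSession

# Illustrative sketch only: start a local Spark session and get its context
spark = (SparkSession.builder
         .master("local[*]")
         .appName("rdd_sketch")
         .getOrCreate())
sc = spark.sparkContext

# Hypothetical data: (chromosome, position) pairs
records = [("chr1", 100), ("chr2", 250), ("chr1", 300)]
rdd = sc.parallelize(records)

# Count records per chromosome with a classic map/reduceByKey pair
counts = rdd.map(lambda rec: (rec[0], 1)).reduceByKey(lambda a, b: a + b)
print(counts.collect())  # for example: [('chr1', 2), ('chr2', 1)]

spark.stop()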
