Implementing a crime detection application
In this recipe, we'll see how to run deep learning models on several datasets to detect crime in the city of Chicago.
Getting ready
To step through this recipe, you will need a running Spark cluster in any one of the following modes: local, standalone, YARN, or Mesos. Include the Spark MLlib package in the build.sbt
file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop. Also, install Sparkling Water as discussed in the preceding recipe.
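A minimal build.sbt sketch is shown below. The project name, Scala version, and the Spark and Sparkling Water version numbers are assumptions; align them with the versions you installed in the preceding recipes.

// build.sbt -- a minimal sketch; version numbers below are assumptions,
// match them to your installed Spark and Sparkling Water versions.
name := "crime-detection"

scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"            % "2.2.0",
  "org.apache.spark" %% "spark-sql"             % "2.2.0",
  "org.apache.spark" %% "spark-mllib"           % "2.2.0",
  "ai.h2o"           %% "sparkling-water-core"  % "2.2.0"
)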
How to do it…
- Download the datasets from the following locations (a sketch for loading them into Spark follows this list):
Weather data: https://github.com/ChitturiPadma/datasets/blob/master/chicagoAllWeather.csv
Census data: https://github.com/ChitturiPadma/datasets/blob/master/chicagoCensus.csv ...
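Once the files are downloaded, they can be read into Spark as DataFrames before being handed to the deep learning workflow. The sketch below is an assumption about how this first step might look, not the recipe's exact code; the local file paths, application name, and object name are hypothetical placeholders.

import org.apache.spark.sql.SparkSession

object CrimeDataLoad {
  def main(args: Array[String]): Unit = {
    // A minimal sketch, assuming the downloaded CSVs sit under a local data/ directory
    // (paths and app name are hypothetical).
    val spark = SparkSession.builder()
      .appName("ChicagoCrimeDetection")
      .getOrCreate()

    // Read the weather data with a header row and an inferred schema.
    val weatherDF = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data/chicagoAllWeather.csv")

    // Read the census data the same way.
    val censusDF = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data/chicagoCensus.csv")

    // Inspect the schemas to verify the columns were picked up correctly.
    weatherDF.printSchema()
    censusDF.printSchema()
  }
}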