June 2019 · Intermediate to advanced · 308 pages · 7h 21m · English
We can evaluate three different aspects of the models:
On a desktop (Intel Xeon CPU E5-1650 v3 @ 3.5 GHz, 32 GB RAM) with GPU support, training the LSTM on the CPU-utilization dataset and the autoencoder on the reduced, layer-wise KDD dataset took a few minutes. Training the DNN model took a little over an hour, which was expected, as it was trained on a larger dataset (KDD's overall 10% dataset).
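A minimal sketch of how such wall-clock training times can be measured in Python. The `train_model` stub is hypothetical and stands in for the actual fitting call (for example, a Keras `model.fit`); only the timing harness around it is the point here:

```python
import time

def train_model(epochs):
    """Hypothetical stand-in for a real training loop (e.g. fitting
    the LSTM); it just burns a little CPU so the harness runs."""
    total = 0.0
    for _ in range(epochs):
        total += sum(i * i for i in range(10_000))
    return total

# time.perf_counter gives a monotonic, high-resolution clock,
# which is more reliable for benchmarking than time.time.
start = time.perf_counter()
train_model(epochs=5)
elapsed = time.perf_counter() - start
print(f"Training took {elapsed:.4f} s")
```

In practice you would wrap the real training call in the same `perf_counter` pair and report the elapsed seconds alongside the hardware used.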
The storage requirement of a model is an essential consideration for resource-constrained IoT devices. The following screenshot presents the storage requirements of the three models we tested for the two use cases:
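A model's on-disk footprint can be checked directly once it is serialized. The sketch below is a minimal illustration using dummy bytes in place of real model weights; with a real framework you would save the model file (for example, an HDF5 or SavedModel artifact) and inspect it the same way with `os.path.getsize`:

```python
import os
import tempfile

# Dummy stand-in for serialized model parameters (50 KB of zeros);
# a real model file would be produced by the framework's save call.
fake_weights = bytes(1024 * 50)

with tempfile.NamedTemporaryFile(delete=False, suffix=".h5") as f:
    f.write(fake_weights)
    model_path = f.name

# Report the file's size on disk in kilobytes.
size_kb = os.path.getsize(model_path) / 1024
print(f"Model file size: {size_kb:.1f} KB")

os.remove(model_path)  # clean up the temporary file
```

Comparing these file sizes across models is a quick first check of whether a model fits the flash budget of a target IoT device.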
As shown ...