Deploying the trained NTM model and running the inference

In this section, we will deploy the NTM model, run inference against it, and interpret the results. Let's get started:

  1. First, we deploy the trained NTM model as an endpoint, as follows:
ntm_predctr = ntm_estmtr.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

In the preceding code, we call the deploy() method of the SageMaker Estimator object, ntm_estmtr, to create an endpoint. We pass the number and type of instances required to deploy the model. The NTM Docker image is used to create the endpoint. SageMaker takes a few minutes to deploy the model. The following screenshot shows the endpoint that was provisioned:

You can see the endpoint you've created by navigating to the ...
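Once the endpoint is live, it returns inference results as JSON: one `topic_weights` vector per input document, under a top-level `predictions` key. As a sketch of how those results can be interpreted, the following uses a hypothetical response payload (the weights and topic count are made up for illustration) to pick out the dominant topic for each document:

```python
import json

# Hypothetical JSON response from the NTM endpoint; the real payload has
# the same shape: one 'topic_weights' vector per input document.
sample_response = json.dumps({
    "predictions": [
        {"topic_weights": [0.02, 0.10, 0.65, 0.03, 0.20]},
        {"topic_weights": [0.70, 0.05, 0.10, 0.05, 0.10]},
    ]
})

def dominant_topics(response_body):
    """Return the index of the highest-weight topic for each document."""
    predictions = json.loads(response_body)["predictions"]
    return [
        max(range(len(p["topic_weights"])), key=p["topic_weights"].__getitem__)
        for p in predictions
    ]

print(dominant_topics(sample_response))  # → [2, 0]
```

In practice, the response body would come from calling `ntm_predctr.predict()` on your test vectors rather than from a hard-coded string.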
