Chapter 8. Deploying Tornado

Until now, we’ve been running only a single Tornado process in our examples for simplicity’s sake. This made testing an application and making quick changes extremely easy, but it is not an appropriate deployment strategy. Deploying an application to a production environment presents new challenges, both in maximizing performance and in managing the individual processes. This chapter presents strategies for hardening your Tornado application and increasing request throughput, as well as tools that make deploying Tornado servers easier.

Reasons for Running Multiple Tornado Instances

In most cases, assembling a web page is not a particularly computationally intensive process. The server needs to parse the request, fetch the appropriate data, and assemble the various components that make up the response. If your application makes blocking calls to query a database or access the filesystem, the server will not be able to respond to an incoming request while it is waiting for the call to complete. In these moments, the server hardware will have surplus CPU time while it waits for I/O operations to complete.
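To see the problem concretely, consider a handler like the sketch below, in which time.sleep stands in for a blocking database query or filesystem read (the handler name, URL, and port are arbitrary choices for illustration). While the sleep runs, this single process cannot respond to any other client.

import time

import tornado.ioloop
import tornado.web


class BlockingHandler(tornado.web.RequestHandler):
    def get(self):
        # time.sleep stands in for a slow, blocking call such as a
        # database query or a filesystem read. While it runs, this
        # single Tornado process cannot answer any other request.
        time.sleep(5)
        self.write("Finally finished!")


if __name__ == "__main__":
    app = tornado.web.Application([(r"/", BlockingHandler)])
    app.listen(8000)
    tornado.ioloop.IOLoop.instance().start()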

Given that most of the elapsed time in responding to an HTTP request is spent with the CPU idle, we’d like to take advantage of this downtime and maximize the number of requests we can handle at any given time. That is, we’d like the server to be able to accept as many new requests as possible while the processes handling open requests are waiting for data.
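One common way to do this is to run one Tornado process per CPU core. As a minimal sketch (the handler and port number here are arbitrary), Tornado's HTTPServer can pre-fork a child process per core before the IOLoop starts:

import tornado.httpserver
import tornado.ioloop
import tornado.web


class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello from one of several Tornado processes")


if __name__ == "__main__":
    app = tornado.web.Application([(r"/", MainHandler)])
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8000)
    # start(0) forks one server process per CPU core; each child runs
    # its own IOLoop and accepts connections on the shared socket.
    server.start(0)
    tornado.ioloop.IOLoop.instance().start()

Whether you pre-fork with start(0) or launch several independent instances on their own ports, the goal is the same: keep multiple processes available to accept new connections while others are waiting on I/O.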

As we saw in ...
