between any one client JVM and any one container JVM regardless of how many threads
the client is running; that connection has only so much capacity.
5. Run the computers at 60% processor usage, and measure the create, retrieve, update,
and delete transaction rate.
This measurement provides the throughput on two servers. This number doubles with four
servers, doubles again with eight servers, and so on. This scaling assumes that the network
capacity and client capacity are also able to scale. As a result, the WebSphere eXtreme Scale
response time should remain stable as the number of servers is scaled up, and the transaction
throughput should scale linearly as computers are added to the data grid.
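
As a point of reference for step 5, the following is a minimal, single-threaded sketch of the
create, retrieve, update, and delete loop using the WebSphere eXtreme Scale client API. The
catalog server endpoint (cataloghost:2809), grid name (Grid), and map name (Customer) are
placeholders for your own topology; in a real test, you run many threads and client JVMs
until the servers reach the 60% processor target.

import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

public class CrudThroughputTest {
    public static void main(String[] args) throws Exception {
        // Connect to the catalog server and obtain a client-side grid reference
        ObjectGridManager ogm = ObjectGridManagerFactory.getObjectGridManager();
        ClientClusterContext ccc = ogm.connect("cataloghost:2809", null, null);
        ObjectGrid grid = ogm.getObjectGrid(ccc, "Grid");

        Session session = grid.getSession();
        ObjectMap map = session.getMap("Customer");

        final int iterations = 100_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            String key = "customer-" + i;

            session.begin();                 // create
            map.insert(key, "value-" + i);
            session.commit();

            session.begin();                 // retrieve
            map.get(key);
            session.commit();

            session.begin();                 // update
            map.update(key, "updated-" + i);
            session.commit();

            session.begin();                 // delete
            map.remove(key);
            session.commit();
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        // Each iteration runs four separate transactions
        System.out.printf("%.0f transactions/sec%n", (4.0 * iterations) / seconds);
    }
}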
Now, let us take a look at parallel transactions. Parallel transactions touch a subset of the
servers; the subset can be all of the servers.
If the transaction touches all servers, the throughput is limited by the throughput of either
the client initiating the transaction or the slowest server being touched. Larger grids spread
the data out more and provide more CPU, memory, network bandwidth, and so on, but the client
must wait for the slowest server to respond and must then consume the results.
When a transaction touches M of N servers, the throughput is N/M times the throughput of the
slowest server. For example, with 20 servers and a transaction that touches five servers, the
throughput is 4x (20/5) the throughput of the slowest server in the grid.
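
The estimate is simple enough to restate in a few lines. In the following sketch, the
per-server throughput figure is an assumed example value, not a measurement:

public class ParallelThroughputEstimate {
    public static void main(String[] args) {
        int totalServers = 20;            // N: servers in the data grid
        int serversTouched = 5;           // M: servers touched by the transaction
        double slowestServerTps = 1000.0; // assumed transactions/sec of the slowest server

        // Throughput of an M-of-N parallel transaction is roughly
        // (N / M) times the throughput of the slowest server
        double speedup = (double) totalServers / serversTouched;
        System.out.printf("Estimated throughput: %.0f tps (%.1fx the slowest server)%n",
                speedup * slowestServerTps, speedup);
    }
}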
When a parallel transaction completes, the results are sent to the client thread that started
the transaction. That thread must then aggregate the results (if any) single-threaded, so the
aggregation time increases as the number of servers touched by the transaction grows. However,
this time depends on the application, because each server might return a smaller result as the
data grid grows.
Typically, parallel transactions touch all of the servers in the data grid, because partitions
are uniformly distributed over the grid. In this case, the throughput is limited as in the
first case: by the client initiating the transaction or by the slowest server being touched.
4.1.10 An example of a WebSphere eXtreme Scale sizing exercise
Assume that we need to store four types of objects. Each of the four objects has the following
average key sizes and value sizes:
- Customer: Avg. key = 10 bytes, avg. value = 2 KB, peak number = 1,000,000
- Product: Avg. key = 4 bytes, avg. value = 40 KB, peak number = 100,000
- Order: Avg. key = 8 bytes, avg. value = 20 KB, peak number = 500,000
- OrderItem: Avg. key = 8 bytes, avg. value = 8 bytes, peak number = 5,000,000
Our primaryObjectMemory value is the sum, over the four object types, of (average key size +
average value size) multiplied by the peak number of objects, treating 1 KB as 1,000 bytes:
(2,010 bytes * 1,000,000) +
(40,004 bytes * 100,000) +
(20,008 bytes * 500,000) +
(16 bytes * 5,000,000) =
16.1 GB
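
For reference, the following sketch reproduces this arithmetic; the figures are exactly those
listed above:

public class SizingExample {
    public static void main(String[] args) {
        long primaryObjectMemory =
                  (10 + 2_000L) * 1_000_000L   // Customer: (key + value) bytes * peak count
                + (4 + 40_000L) * 100_000L     // Product
                + (8 + 20_000L) * 500_000L     // Order
                + (8 + 8L) * 5_000_000L;       // OrderItem

        System.out.printf("primaryObjectMemory = %,d bytes (about %.1f GB)%n",
                primaryObjectMemory, primaryObjectMemory / 1_000_000_000.0);
        // Prints: primaryObjectMemory = 16,094,400,000 bytes (about 16.1 GB)
    }
}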
Follow these steps for the sizing exercise:
1. We want to have a single replica for high-availability purposes, so we need to store
   16.1 GB of objects * 2, or totalObjectMemory = 32.2 GB, in the grid including replication
   (see the sketch that follows).
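
A short continuation of the same arithmetic shows the replication step; the
primaryObjectMemory constant carries over the 16.1 GB result from the previous calculation:

public class TotalMemoryExample {
    public static void main(String[] args) {
        long primaryObjectMemory = 16_094_400_000L; // about 16.1 GB, from the calculation above
        int replicasPerPartition = 1;               // a single replica for high availability

        // Each replica holds a full copy of the primary data
        long totalObjectMemory = primaryObjectMemory * (1 + replicasPerPartition);
        System.out.printf("totalObjectMemory = about %.1f GB%n",
                totalObjectMemory / 1_000_000_000.0); // prints about 32.2 GB
    }
}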
