The numberOfPartitions value divided by the number of JVMs controls the number of shards
per JVM.
Ideally, choose the numberOfPartitions value so that the difference in the number of
shards stored on any two JVMs in the grid is less than approximately 10%, both in
normal use and when various failures occur. The process that we describe in this chapter
helps you achieve this result.
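For example, the following minimal Java sketch (with hypothetical partition, replica,
and JVM counts) checks a candidate numberOfPartitions value against this 10%
guideline:

public class ShardPlacementEstimate {
    public static void main(String[] args) {
        // Hypothetical topology; substitute your own values
        int numberOfPartitions = 101;    // number of primary shards
        int replicasPerPartition = 1;    // one replica shard per partition
        int jvmCount = 10;

        int totalShards = numberOfPartitions * (1 + replicasPerPartition);
        int minPerJvm = totalShards / jvmCount;                   // floor
        int maxPerJvm = (totalShards + jvmCount - 1) / jvmCount;  // ceiling

        // Worst-case spread between any two JVMs under even placement
        double imbalance = (double) (maxPerJvm - minPerJvm) / minPerJvm;
        System.out.printf("Shards per JVM: %d-%d (spread %.1f%%)%n",
                minPerJvm, maxPerJvm, imbalance * 100);
    }
}

With these numbers, each JVM holds 20 or 21 shards, a spread of 5%, which is within
the guideline.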
3. Determine the CPU size and server count.
When performing capacity planning, especially to work out the number of required
physical servers for the application, there are two primary factors to consider:
- The memory required to store the data on a single physical server
- The throughput required to process the requests on a single physical server
The processing power that is provided by the CPU has a great impact on the throughput,
assuming that there is enough physical memory and network bandwidth. Normally, the
faster the CPU is, the greater the throughput that can be achieved. Unlike memory use,
CPU use is often impossible to estimate to any useful accuracy; it is usually necessary to
prototype and measure, whether for WebSphere eXtreme Scale or any other software. In
WebSphere eXtreme Scale, the CPU costs include the following:
- Cost of servicing create, retrieve, update, and delete (CRUD) operations from
  clients. This cost tends to be fairly small.
- Cost of replicating from other JVMs. This cost also tends to be fairly small.
- Cost of invalidating (usually small).
- Cost of eviction policy. For default evictors, this cost is usually small.
- Garbage collection cost. This cost is tunable using standard techniques.
- Application logic cost (for custom evictors, agents, loaders, and so on). This cost
  is usually the biggest factor, but it might again be modest with proper custom code
  design.
The server count resulting from this CPU sizing step might differ from the number resulting
from the previous step, “Determine the memory requirements and partition count.” You will
need to take the larger of the two server counts as the final number.
For example, if you need 10 servers for memory requirements but that number provides
only 50% of the required throughput because of CPU saturation, you will need twice as
many servers (20).
In a similar way, if you determine that you need two servers based on the CPU sizing
exercise, but you need six servers based on the memory requirements, you will need six
servers instead of two.
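A minimal sketch of this reconciliation in Java (the method and parameter names are
illustrative, not part of any WebSphere eXtreme Scale API):

public class ServerCountSizing {
    /**
     * Returns the number of servers to provision: the larger of the
     * memory-driven count and the CPU-driven count.
     *
     * @param memoryServers      servers needed to hold the data in memory
     * @param throughputFraction fraction of the required throughput that
     *                           those servers deliver before CPU saturation
     */
    static int requiredServers(int memoryServers, double throughputFraction) {
        int cpuServers = (int) Math.ceil(memoryServers / throughputFraction);
        return Math.max(memoryServers, cpuServers);
    }

    public static void main(String[] args) {
        System.out.println(requiredServers(10, 0.5)); // 20: CPU-bound example
        System.out.println(requiredServers(6, 3.0));  // 6: memory-bound example
    }
}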
Now, we take a closer look at the sizing calculation process for the previous steps.
4.1.4 Calculating the required memory for the data in the grid
The first step in determining the maximum size of the data to be stored in WebSphere
eXtreme Scale (known as totalObjectMemory) is to estimate the maximum number of objects
(numberOfObjects) and the average size of each object (averageObjectSize). It is critical that
this estimate be based on real data. For example, if you want to size for a Product object
containing a “description”, do not estimate using descriptions that all read “this is a typical
description” but instead use real descriptions (which are probably much longer).
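The calculation itself is simple multiplication; the work is in measuring the inputs.
A minimal Java sketch, with hypothetical figures standing in for measurements taken
on real data:

public class GridMemoryEstimate {
    public static void main(String[] args) {
        // Hypothetical inputs; replace with figures measured from real data
        long numberOfObjects = 5_000_000L;  // expected maximum object count
        long averageObjectSize = 2_048L;    // average bytes per object

        long totalObjectMemory = numberOfObjects * averageObjectSize;
        System.out.printf("totalObjectMemory: %.1f GB%n",
                totalObjectMemory / (1024.0 * 1024 * 1024));
    }
}

This yields roughly 9.5 GB for the data alone, before any allowance for replicas or
JVM overhead.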
The value for numberOfObjects usually must be estimated based on the number of users,
number of requests, or another estimate that relates to the number of objects needed at
