Consider Caching Return Values from Distributed Calls
At one company for which I worked, my first task was described in the following way: “This call is too slow. See if you can make it faster.”
I looked at the call. It was slow. It involved 18 separate modules on the server and implicitly depended on about 40,000 lines of code, and all I could think was, “Oh yeah. I’ll optimize this in my first week on the job.”
Because the task was impossible, I decided to cheat. I thought, “Hmmm. If I can’t make the call faster, I can at least use a cache to make the call less often.” This turned out to be good enough for the application in question, and it is often good enough in distributed applications.
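To make the idea concrete, here is a minimal sketch of the approach in Java. The PriceService interface and its lookupPrice method are hypothetical stand-ins for whatever remote call your application actually makes (RMI, a web service, and so on); a real implementation would also need some cache-invalidation or expiration policy, which this sketch omits.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical remote interface -- a stand-in for whatever distributed
// call the application makes. The remote implementation is slow because
// of network and marshalling overhead.
interface PriceService {
    double lookupPrice(String productId);
}

// Client-side wrapper that caches return values so the remote call is
// made at most once per key for the lifetime of the cache.
class CachingPriceService implements PriceService {
    private final PriceService remote;
    private final Map<String, Double> cache = new HashMap<>();

    CachingPriceService(PriceService remote) {
        this.remote = remote;
    }

    @Override
    public double lookupPrice(String productId) {
        Double cached = cache.get(productId);
        if (cached != null) {
            return cached;                               // in-process lookup: fast and predictable
        }
        double price = remote.lookupPrice(productId);    // one remote round trip
        cache.put(productId, price);                     // remember the answer for next time
        return price;
    }
}
```

Client code wraps the remote proxy in a CachingPriceService and calls it exactly as before; repeated lookups for the same key never leave the process.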
The benefits of caching return values on the client side are:
- Faster performance for a given sequence of operations
Instead of making a call to the server, which involves network and marshalling overhead, you retrieve the value from an in-process cache. Unless you’re doing something truly stupid, this should be faster.
- More reliable and predictable performance for a given sequence of operations
In addition to the performance benefit, looking in a local data structure is more reliable than calling a remote server, and it has a predictable level of overhead (unlike a remote method call, which can take more time if the network is busy).
- Lower bandwidth
Bandwidth is the scarcest resource for distributed applications because it is shared by all distributed applications, and because upgrading ...