Appendix A. Microbenchmarking
In this appendix, we will consider the specifics of measuring low-level Java performance numbers directly. The dynamic nature of the JVM means that performance numbers are often harder to obtain and interpret than many developers expect. As a result, there are a lot of inaccurate or misleading performance numbers floating around on the internet.
A primary goal of this appendix is to ensure that you are aware of these possible pitfalls and only produce performance numbers that you and others can rely upon. In particular, the measurement of small pieces of Java code (microbenchmarking) is notoriously subtle and difficult to do correctly, and the subtleties of microbenchmarking and its proper use by performance engineers are a major theme throughout this appendix.
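To make the pitfalls concrete, consider the kind of hand-rolled timing loop that many developers write as a first attempt. The class below (a deliberately naive sketch, not taken from the book's own examples) looks reasonable but produces unreliable numbers: the early runs include JIT compilation time, and because the loop's result feeds into nothing the compiler observes, the JIT is in principle free to eliminate the work being "measured."

```java
// A deliberately naive microbenchmark, shown only to illustrate the
// pitfalls discussed above. The timings it prints are NOT reliable:
// early runs include interpreter and JIT warmup, and an optimizing
// compiler may remove work whose result is never used.
public class NaiveBenchmark {

    // The "workload" under test: sum the integers 0 .. n-1.
    private static long sumTo(long n) {
        long total = 0;
        for (long i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long sink = 0; // consume results so the work is harder to eliminate
        for (int run = 1; run <= 5; run++) {
            long start = System.nanoTime();
            sink += sumTo(10_000_000L);
            long elapsedMicros = (System.nanoTime() - start) / 1_000;
            System.out.println("Run " + run + ": " + elapsedMicros + " us");
        }
        // Print the sink so the JIT cannot treat the loops as dead code.
        System.out.println("sink=" + sink);
    }
}
```

Running this typically shows the first one or two runs taking far longer than later ones, as the JVM warms up, and the numbers can change again under a different garbage collector or heap size. None of the printed figures, on its own, is a trustworthy measurement.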
The Feynman quote we met way back in Chapter 2 is especially relevant when applied to microbenchmarks.
The second portion of this appendix describes how to use the gold standard of microbenchmarking tools: JMH. If, even after all the warnings and caveats, you really feel that your application and use cases warrant the use of microbenchmarks, then you will need to avoid numerous well-known pitfalls and “bear traps” by starting with the most reliable and advanced of the available tools.
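To see part of what JMH automates, the sketch below (an illustrative hand-rolled harness, not a substitute for JMH and not an example from the book) separates a warmup phase from a measurement phase. JMH does this and much more: it also forks fresh JVMs, consumes results via blackholes, and reports proper statistics rather than a single mean.

```java
// Hand-rolled warmup handling -- a sketch of just ONE of the
// corrections JMH applies automatically. Do not use this in
// place of JMH for real measurements.
public class WarmupSketch {

    // The workload under test: a loop of floating-point work.
    private static double work() {
        double acc = 0;
        for (int i = 0; i < 1_000_000; i++) {
            acc += Math.sqrt(i);
        }
        return acc;
    }

    public static void main(String[] args) {
        double sink = 0;

        // Warmup phase: give the JIT a chance to fully compile work()
        // before any timing begins. These iterations are discarded.
        for (int i = 0; i < 10; i++) {
            sink += work();
        }

        // Measurement phase: time only post-warmup iterations and
        // report a per-iteration mean.
        final int iterations = 20;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += work();
        }
        long perIterNanos = (System.nanoTime() - start) / iterations;

        System.out.println("Mean per iteration: " + perIterNanos + " ns");
        System.out.println("sink=" + sink); // keep the results live
    }
}
```

Even this improved version still measures a single JVM run with one compilation history, which is exactly why JMH forks multiple JVMs and aggregates the results.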
Introduction to Measuring Low-Level Java Performance
In “Java Performance Overview”, we described performance analysis as a synthesis between different aspects of the craft that has resulted in a discipline that is fundamentally an experimental ...