Web performance tools: Synthetic vs. RUM

Learn the difference between synthetic and real-user monitoring tools.

By Rick Viscomi, Andy Davies and Marcel Duran
April 7, 2016
[Figure: Wall of tools (source: Lachlan Donald via Flickr)]

Web performance tools tend to be divided based on which big question they answer: “How fast is it?” or “How can it be made faster?” The two classifications of tools are commonly referred to as synthetic and real-user monitoring (RUM). WebPageTest falls under the synthetic category.

There’s a saying that when you have a hammer, all of your problems start to look like nails. Similarly, no one type of web performance tool can answer all of your questions. It’s important to understand what each type does and how it can be used, so that you can pick the right tool for the job:

Synthetic                    | RUM
-----------------------------|------------------------------------
Laboratory-like testing      | Measures performance of real users
Low variability, controlled  | High variability, unrestricted
Ad hoc tests                 | Continuous data collection

Tools like WebPageTest are considered to be synthetic because of their artificial testing environments. Akin to a clean room in a laboratory, WebPageTest gives its testers granular control over many of the variables that contribute to performance changes, such as geographic location and type of network connection. By making these variables constant, the root causes of poor frontend performance can be more easily identified and measured.
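As a minimal sketch of what holding those variables constant looks like in practice, the snippet below builds a request against WebPageTest’s public `runtest.php` API, pinning the test location, browser, and connection profile. The helper name and the API key are hypothetical; the parameter names follow WebPageTest’s documented API.

```python
from urllib.parse import urlencode

def build_test_url(page_url, location="Dulles:Chrome.Cable", runs=3,
                   api_key="YOUR_API_KEY"):
    """Build a WebPageTest run URL with a fixed, controlled environment."""
    params = {
        "url": page_url,       # the page to test
        "location": location,  # agent:browser.connectivity profile
        "runs": runs,          # repeat runs to smooth out noise
        "f": "json",           # machine-readable response
        "k": api_key,          # hypothetical API key
    }
    return "https://www.webpagetest.org/runtest.php?" + urlencode(params)

print(build_test_url("https://example.com/"))
```

Because every run uses the same location and connection profile, differences between runs are far more likely to reflect changes in the page itself rather than changes in the environment.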

Unlike the control afforded by synthetic testing, RUM does what its name implies and measures the actual performance real users are experiencing in the wild. The wide variations in browsers and bandwidth are all captured in the reporting, so every user’s unique environment is represented. Because you have the raw data, you can draw definitive statistical conclusions; for instance, you can determine the page-load time at any given percentile. RUM is also considered monitoring because data tends to be continuously recorded and streamed to a dashboard. By monitoring performance, developers can be notified the moment page speed takes an unexpected turn; a sudden regression could even page an engineer immediately. This is especially useful for mission-critical applications in which performance is just as important as availability.
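To make the percentile point concrete, here is a small sketch of how a given percentile could be pulled from raw RUM samples using the nearest-rank method. The load times are hypothetical values in milliseconds, not real measurements.

```python
import math

def percentile(samples, pct):
    """Return the value at the given percentile (nearest-rank method)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical page-load times (ms) beaconed from real users
load_times_ms = [850, 920, 1100, 1300, 1450, 2100, 2600, 3400, 5200, 9800]

print(percentile(load_times_ms, 50))  # median experience -> 1450
print(percentile(load_times_ms, 95))  # slowest 5% of users -> 9800
```

Note how far apart the median and the 95th percentile can be; that spread is exactly the real-user variability that synthetic testing deliberately factors out.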

When your goal is to determine the overall speed of a page, RUM is clearly the appropriate solution because it accurately represents the performance of actual users. When starting out with WebPageTest, one pitfall is to assume that its synthetic results are like real-user metrics. In reality, synthetic tools are deliberately designed to measure a web page under strictly controlled conditions, conditions that vary widely from one real user to the next.

To help illustrate this pitfall, imagine that you run a synthetic test of your home page and find that the load time is 10 seconds. “That’s crazy,” you think, because it never feels that slow to you. Your real-world experience does not coincide with the test results. It’s not that the test is necessarily wrong; the test configuration is meant to represent one particular use case. If it isn’t set up to match your browser, in your city, over your connection speed, you’re unlikely to get comparable results. The test is only an artificial representation of what someone under similar conditions might experience. It’s up to you to configure the test in a way that mimics the conditions that you want to compare.
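One way to mimic your own conditions is to override the connection characteristics of a synthetic run. The sketch below uses WebPageTest’s documented `runtest.php` parameters for a custom connectivity profile; the bandwidth and latency values are illustrative, and the API key is hypothetical.

```python
from urllib.parse import urlencode

# Custom connectivity: bwDown/bwUp are in Kbps, latency is in ms.
# The ".custom" suffix on the location enables these overrides.
params = {
    "url": "https://example.com/",
    "location": "Dulles:Chrome.custom",
    "bwDown": 5000,   # 5 Mbps down, roughly a modest home connection
    "bwUp": 1000,     # 1 Mbps up
    "latency": 28,    # round-trip latency in ms
    "f": "json",
    "k": "YOUR_API_KEY",  # hypothetical API key
}
query = urlencode(params)
print("https://www.webpagetest.org/runtest.php?" + query)
```

Tuning these values toward your own network is what makes a synthetic result comparable to what you actually experience.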
