Capturing the performance of real users

Five Questions for Philip Tellis: Insights on the organizational benefits of RUM, and new techniques for measuring user performance and emotion.

By Brian Anderson and Philip Tellis
September 15, 2016
Industrial gauges (source: Fernando50 via Pixabay)

I recently sat down with Philip Tellis, Chief Architect and RUM Distiller at SOASTA, to discuss the nuts and bolts of measuring real user performance and reaction. Here are some highlights from our talk.

What is RUM (and how is it different from synthetic monitoring)?

RUM stands for real user measurement. It is the practice of measuring the experience of real users as they browse a site. Humans differ from robots in many ways, and those differences are what make RUM stand out from synthetic monitoring.


Unlike synthetic monitors, real users have emotions, are impatient, can be delighted or frustrated by an experience, make buying decisions that include emotional intuition, and place a high monetary value on their time. The performance a real user experiences while browsing can directly affect how they react to that experience. Real user measurement aims to measure not just the experience, but also the user’s reaction to it.

While RUM on its own goes beyond performance, at Velocity our focus is primarily on performance.

How do you measure real user performance?

Most modern browsers have APIs that help us measure the time it takes to carry out various actions in the browser. The NavigationTiming API tells us how long it took a page to load, the ResourceTiming API tells us how long individual resources took to load, and the UserTiming API lets us mark and measure arbitrary execution points in the page. These APIs are great at providing operational metrics about the page’s structure and the network infrastructure it was loaded over. Where they fall short is in telling us about user reaction, or in measuring things beyond a full page load, like Single Page Applications (SPAs), Accelerated Mobile Pages (AMP), offline apps, and post-load user interactions.
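As a minimal sketch of what these APIs expose, the function below derives a few common load metrics from a NavigationTiming-style entry (the fields a browser returns from `performance.getEntriesByType("navigation")`). Passing the entry in as a plain object is an assumption made here so the logic can be illustrated outside a browser; the field names match the NavigationTiming spec.

```javascript
// Derive load metrics from a NavigationTiming-style entry object.
function loadMetrics(t) {
  return {
    // Time to first byte: navigation start until the first response byte
    ttfb: t.responseStart - t.startTime,
    // DOM ready: until the DOMContentLoaded handler finishes
    domReady: t.domContentLoadedEventEnd - t.startTime,
    // Full page load: until the load event handler finishes
    pageLoad: t.loadEventEnd - t.startTime,
  };
}

// Example with synthetic timestamps (milliseconds since navigation start)
const metrics = loadMetrics({
  startTime: 0,
  responseStart: 120,
  domContentLoadedEventEnd: 850,
  loadEventEnd: 1400,
});
// metrics => { ttfb: 120, domReady: 850, pageLoad: 1400 }
```

In a real page, the same object comes from `performance.getEntriesByType("navigation")[0]`, and a RUM library would beacon these numbers back to a collection server.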

To measure the rest of these, we need to write some creative JavaScript that hooks into and proxies several other browser APIs like MutationObserver, PerformanceObserver, XMLHttpRequest and event handlers. Hooking these things up is simple, but doing it right is hard. We’ve seen many libraries hook into these APIs but get a small detail wrong, which ends up breaking the site in some edge cases.
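To illustrate what "doing it right" means, here is a sketch of the proxy-then-call-through pattern used when instrumenting APIs like `XMLHttpRequest.prototype.open`. The `instrument` helper and `onCall` callback are hypothetical names; the point is the small details that are easy to get wrong: preserve `this`, forward every argument, return the original return value, and never let a bug in the instrumentation break the host page.

```javascript
// Wrap obj[method] so a measurement hook runs before the original,
// without changing the original's observable behavior.
function instrument(obj, method, onCall) {
  const original = obj[method];
  obj[method] = function (...args) {
    try {
      onCall(args); // measurement hook (hypothetical callback)
    } catch (e) {
      // Instrumentation must never break the caller's code path.
    }
    return original.apply(this, args); // call through unchanged
  };
}

// Example against a plain object standing in for a browser API
const calls = [];
const fakeXhr = {
  open(verb, url) {
    return verb + " " + url;
  },
};
instrument(fakeXhr, "open", (args) => calls.push(args[1]));
const result = fakeXhr.open("GET", "/api/data");
// result === "GET /api/data", calls === ["/api/data"]
```

Skipping any one of those details (swallowing the return value, dropping an argument, letting the hook throw) is exactly the kind of small mistake that breaks a site in edge cases.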

Beyond performance, we need to measure how much time a user spends on a site, how many pages they visit, and the kinds of actions they take—for example, clicking a Like button, adding something to their shopping cart, clicking an ad, going through a checkout process, signing up for a newsletter.

We use the boomerang JavaScript library to encapsulate all these APIs and measure real user performance and reaction.

What does measuring real user performance allow an organization to do?

With RUM data, an organization can quickly determine if performance is affecting business metrics and revenue. They can put a dollar value on every second of load time that a user experiences, and segment users based on how they might react to different load times. They can tell whether adding a new feature that may slow down the site is worth it based on how much revenue that change in load time is worth, or they can budget for performance improvements based on what those improvements will bring in.

Organizations can also determine which pages are the most important to optimize and, with real-time data, can quickly tell if a newly launched campaign has the right results (and if not, why).

Real-time data, when combined with anomaly detection algorithms, can also tell an organization when things are beyond the norm, which allows for dynamic alerting, sparing an operations team from drowning in false positives without risking a missed incident. This can be scary at first, but after a few nights of uninterrupted sleep it starts to feel pretty good.

What are some exciting new techniques for measuring user performance?

The availability of NavigationTiming2 and ResourceTiming2 is definitely exciting, but beyond that, the ability to garner some idea of user emotion by measuring rage clicks, mouse movements, eyebrow tracking, and mind reading is where real groundbreaking research is happening.

You’re speaking at the Velocity Conference in New York this September. What presentations are you looking forward to attending while there?

It’s actually quite hard to choose. I looked at my personal schedule for Velocity, and there are at least seven slots when I want to attend three talks in parallel. The talks about AMP, low-powered devices, data science, and all the case studies are what I look forward to the most.
