What Is Real User Monitoring in an Observability World? It Is Not APM "Agents" - Part 1
March 26, 2024

Eric Futoran
Embrace


Agent-based approaches to real user monitoring (RUM) simply do not work. If you are pitched to install an "agent" in your mobile or web environments, you should run for the hills.

The world is now all about end-users. This focus on the end-user simply did not exist a few years ago, when backend metrics generally revolved around uptime, SLAs, latency, and the like. DevOps teams pitched and presented the metrics they believed correlated most closely with the end-user experience.

But let's be blunt: Unless there was an egregious fire, those correlations were loose at best and often entirely false.

Instead, your teams should prioritize alerts, monitoring, and work based on impact to the end-user, as it directly affects your business. And your developers and DevOps teams should collect data, monitor, prioritize, and resolve issues accordingly.

The agent-based RUM problem

"Agents" are a mechanism that does not work in the current end-user centric world. They were born out of shimmying the principles of the backend to mobile, web, and the myriad of other ways users interact with the world.

Let's compare user environments and backend environments:

User environments are open, unstructured, and uncontrollable, as they consist of unowned devices and browsers with an unpredictable user as the central figure.

Backend environments are closed, structured, and controlled, as they are composed of relatively homogeneous physical and cloud applications.

With closed systems that have fewer external variables, agents focus on a known set of errors to monitor and to trigger data collection for resolution. However, monitoring systems outside the backend is complex because the types of errors extend far beyond crashes, error logs, network traces, and API errors.
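To make the contrast concrete, here is a hypothetical, far-from-exhaustive enumeration of client-side issue types, sketched in TypeScript. The names are illustrative only, not a standard taxonomy:

```typescript
// Hypothetical, non-exhaustive list of issue types that surface in
// user environments but have no clean backend analogue.
type ClientIssue =
  | "crash"               // the classic hard failure agents do catch
  | "anr"                 // app-not-responding / main-thread stall
  | "frozen-frame"        // UI rendered but unresponsive
  | "out-of-memory-kill"  // OS terminated the app; no crash report fires
  | "slow-startup"        // cold start exceeded a tolerable threshold
  | "failed-asset-load"   // an image, font, or script never arrived
  | "rage-tap"            // user repeatedly tapping a dead element
  | "abandoned-flow";     // user gave up mid-checkout with no error thrown
```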

In an observability world, real user monitoring is about collecting "all" the data for every session — good or bad — and not just a sampled set based on predefined error types. Only by collecting the entirety of every session can the best vendors analyze it and provide the utmost value to your teams.

These vendors have evolved beyond agents to surface every type of user-impacting issue, help resolve those issues by comparing bad sessions against good ones, and prioritize overall impact across the complete set of issue types. For example, the same crash for two different users could have different root causes because of the environments, third-party SDKs, and API timeout parameters.
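To illustrate, here is a hypothetical sketch of a crash event carried with its full session context; the schema and field names are invented for this example, not any vendor's format:

```typescript
// Hypothetical shape of a crash event enriched with session context.
interface CrashWithContext {
  stackTrace: string;                     // can be identical across users
  device: { model: string; os: string; freeMemoryMb: number };
  thirdPartySdks: Record<string, string>; // SDK name -> version
  apiTimeoutsMs: Record<string, number>;  // endpoint -> configured timeout
  precedingEvents: string[];              // what the user did beforehand
}

// Same stack trace, two different root causes:
const userA: CrashWithContext = {
  stackTrace: "NetworkModule.fetch",
  device: { model: "Pixel 6", os: "Android 14", freeMemoryMb: 1800 },
  thirdPartySdks: { "ads-sdk": "3.1.0" },
  apiTimeoutsMs: { "/checkout": 2000 },   // aggressive timeout on a slow network
  precedingEvents: ["open-cart", "tap-checkout"],
};

const userB: CrashWithContext = {
  stackTrace: "NetworkModule.fetch",      // identical crash signature
  device: { model: "iPhone 12", os: "iOS 17", freeMemoryMb: 90 }, // memory pressure
  thirdPartySdks: { "ads-sdk": "2.4.0" }, // outdated SDK version
  apiTimeoutsMs: { "/checkout": 30000 },
  precedingEvents: ["scroll-feed", "open-cart", "tap-checkout"],
};
```

Only the surrounding session data distinguishes the two; the stack trace alone cannot.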

To drive the difference home, watch a developer outside of DevOps open a RUM dashboard from a vendor that uses the agent-based approach. The core dashboard will have the following:

■ A geographical map laying out the incidents

■ A generic list of error logs and crashes

■ Some sort of mapping of network errors

■ A single health score

The developer reviewing this dashboard will rarely, if ever, come back to it. And it's not hard to see why.

The dashboard does not tell them which users are affected, where to prioritize their efforts, or the types of bugs and optimizations they should care most about. It's not built for them, from the data collected to how that data is organized and displayed. There is a reason these developers always implement and use other vendors — even for simple concepts like error logging and crashes — alongside those application performance monitoring vendors.

Let's take a deep dive into the core differences between these approaches and explore what a true real user monitoring methodology looks like. That way, you will know it when you see it and can create the best experience for your end-users as well as your developers and DevOps team.

The spider web problem

To illustrate the core implication of an agent mentality, let's focus on the "spider webs." You know the ones I'm talking about. You've seen the cool demos with a diagram of connected nodes across your systems to demonstrate "visibility" into all the apps running on your servers and machines.

Everything is connected by an ever-expanding spider web of nodes and lines — every app, compute instance, API call, etc. Oh, it's very pretty to see all the apps and API calls going to and from each other. It's also a nice source of confidence that the agents are collecting the data required to monitor, identify, and resolve potential issues.

However, the very nature of this spider web mental model is that it assumes all the issues occur on the lines between the nodes or on the nodes themselves:

■ An increase in network latency means you should look at the connected database, server, or service calls.

■ An increase in downtime means you should look at the connected servers to see if they're under heavy load.

■ An increase in transaction failures means you should look at the connected service calls for a point of failure.

The paradigm of agents is one of looking for a closed set of known symptoms of broken apps, failing processes, and poorly designed code. To help resolve these symptoms, agents sample app and process information, so that when an API throws an error or a process goes down, the corresponding data is collected in reaction to that error.

And this approach works … on the backend, for a known set of errors, in a controlled environment, with little external pressure from the outside world.
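Here is a minimal sketch of that reactive pattern in TypeScript, with hypothetical helper names standing in for real instrumentation; the point is that collection fires only when a pre-identified error path is hit:

```typescript
// Hypothetical stand-ins for an agent's diagnostics probes and exporter.
type Snapshot = { cpuPct: number; memoryMb: number };

function captureDiagnostics(): Snapshot {
  return { cpuPct: 0, memoryMb: 0 }; // placeholder for real system probes
}

function reportToBackend(payload: { error: string; snapshot: Snapshot }): void {
  console.log("agent report:", payload);
}

// The agent wraps a known failure point and reacts to recognized errors.
async function agentWrappedCall<T>(call: () => Promise<T>): Promise<T> {
  try {
    return await call();
  } catch (err) {
    // Collection is triggered by the error itself, so anything that
    // fails outside this known path is never observed at all.
    reportToBackend({ error: String(err), snapshot: captureDiagnostics() });
    throw err;
  }
}
```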

But when applied to the client side of web and mobile, what happens when the complexity explodes? 

What happens when there are an infinite number of unknown pressures, from the users, the devices, the operating systems, the app versions, the network conditions, and the other apps running?

How do you truly understand your team's effectiveness when the biggest issues are not related to downtime or following individual service calls throughout a distributed system?

The problem with uncontrolled environments

Uncontrolled environments are any digital experiences external to data centers. Beyond just smartphones and web browsers, they include point-of-sale systems, VR and AR devices, tablets in the field, and smart cars. And the world is increasingly one of uncontrolled environments for business-critical touchpoints.

The most effective developer and DevOps teams monitor these client-side environments with early warning systems to determine when users are impacted so they can triage and resolve issues. They flip the traditional application monitoring paradigm.

Traditional application monitoring: Sample data by looking for a known set of errors, then gather context around them.

Modern application monitoring: Gather data without knowing its full value, correlate those data points to user impact from the end-user's vantage point, then determine the error, measure its impact in order to prioritize it, and route it accordingly.
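A minimal sketch of the flipped model, with illustrative names only: the client records the full session unconditionally and leaves issue detection and impact measurement to later analysis:

```typescript
// Hypothetical session recorder: capture everything, decide later.
interface SessionEvent {
  ts: number;
  type: string;       // tap, screen-load, network-call, log, etc.
  detail?: unknown;
}

class SessionRecorder {
  private events: SessionEvent[] = [];

  // Record every event, for good sessions and bad alike; no error
  // predicate gates collection.
  record(type: string, detail?: unknown): void {
    this.events.push({ ts: Date.now(), type, detail });
  }

  // At session end, ship the whole timeline. The analysis backend,
  // not the client, decides what counts as an issue, how many users
  // it affects, and how to prioritize it.
  flush(): SessionEvent[] {
    const out = this.events;
    this.events = [];
    return out;
  }
}
```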

In order to collect, identify, and resolve errors correctly, DevOps teams must understand the challenges that come along with running apps in these types of uncontrolled environments. After all, the assumptions about where failure points can happen are vastly different.

Continue with: What Is Real User Monitoring in an Observability World? It Is Not APM "Agents" - Part 2

Eric Futoran is CEO of Embrace