Can APM Really Handle Serverless? - Part 1
October 27, 2020

Chris Farrell
Instana


I remember the moment I heard about Serverless technology. On a bus back to the hotel at a conference, I overheard a CTO telling one of her developers about this "new" thing called Lambda. She said (and I'm paraphrasing): "so, the code is there, but it's not running anywhere — until you need it, then it appears, executes and disappears again."

I literally (YES, literally) got goosebumps. I had thought containers were cool, but this? O-M-G!!!

That night I had visions of millions of pieces of code just waiting in the wings for their time to be executed. Of course, the reality today is that serverless is a big part of modern application strategy, but it isn't executing every workload the way one might think.

There are three key reasons for that:

1. Architecting a serverless function into your operating applications isn't (or wasn't) the easiest thing in the world to do.

2. While the idea of serverless workload execution promises minimal cloud operating costs, the reality of serverless platform pricing is that it can sometimes cost more.

3. The monitoring and performance management tools relied upon by IT shops around the globe couldn't handle serverless.

Now, you might be thinking "but wait. Many application monitoring tools struggled for years with containers, but that technology took off like a rocket."

And you would be right. That's one of the reasons I asked myself this important question: Can APM Tools Manage Serverless Workloads?

And the answer is "No, not really."

Now, don't go searching the web for serverless monitoring expecting to find a lack of functional claims. Every monitoring solution in the world claims support for monitoring serverless platforms (or at least one of them).

What I mean by my answer is that the "APM" solutions we've come to love over the last two decades can't handle serverless functions or deliver the same performance and operational details that they deliver for other architectural constructs — including app servers, frameworks, cloud, even containers. And the reason is that their methodologies for collecting performance data simply won't operate with the same characteristics as they do with persistent code.

To fully understand the nuanced differences between running an agent and capturing data from an API as it relates to monitoring, let's look at some of the operational costs of running serverless code.

Let's first look at what I call the Unicorn of serverless application functionality — a seldom-called, stateless, functional piece of work. Calculating a loan payment would be a good example: the inputs are the loan amount, the number of payments and the annual interest rate; the outputs are the interest payment and the full payment. The function is called seldom, requires very few resources to run (meaning little setup) and operates statelessly.
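To make the Unicorn concrete, here is a minimal sketch of what such a function might look like as a Python Lambda handler. The event field names and the standard amortization formula are my own illustrative choices; the article itself doesn't prescribe an implementation.

```python
# Illustrative sketch of the "Unicorn" function: stateless, cheap to run,
# and callable as an AWS Lambda handler. Field names are assumptions.

def calculate_payment(loan_amount, num_payments, annual_rate):
    """Standard amortization: return (monthly_payment, total_interest)."""
    monthly_rate = annual_rate / 12.0
    if monthly_rate == 0:
        monthly_payment = loan_amount / num_payments
    else:
        monthly_payment = (loan_amount * monthly_rate /
                           (1 - (1 + monthly_rate) ** -num_payments))
    total_interest = monthly_payment * num_payments - loan_amount
    return monthly_payment, total_interest


def handler(event, context):
    # No state is read or written; everything needed arrives in the event.
    payment, interest = calculate_payment(
        float(event["loan_amount"]),
        int(event["num_payments"]),
        float(event["annual_rate"]),
    )
    return {"monthly_payment": round(payment, 2),
            "total_interest": round(interest, 2)}
```

Because the handler touches no external state, any invocation can run in a fresh execution environment with no setup beyond loading the code itself.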

The Unicorn function can be loaded onto a serverless platform such as Lambda with zero permanent persistence (saves money). And a cold start doesn't hurt performance, so it can literally open up and shut down when you need it (also saving money). Now that we've established the perfect way to operate a serverless workload from a financial efficiency perspective, let's consider the three prerequisites:

■ Seldom called — in the realm of efficient development, services that are rarely called are either deprecated or rolled into other functionality to make storage and operations as efficient as possible. Thus, a meaningful piece of code that is seldom called is not really a thing anymore.

■ Requires few resources — again, in the realm of meaningful functions, the need for resources (memory, storage, I/O, etc.) is usually directly related to how important a piece of code is. This maps back to the same decision point as "seldom called": a function that requires few resources is unlikely to operate on its own; instead it tends to be part of a shared service with active listeners, triggers, etc.

■ Is stateless — this is perhaps the least likely scenario in today's microservice applications. Even plain old informational websites keep state about their users — history, cache, setup, preferences, etc. A critical application service that has no personalized aspect to its workload is rare.

That's why the Unicorn serverless operation is a rarity, and why the cost isn't necessarily lower anymore. Since (almost) every function requires some level of resources and/or state — or access to state through a known memory location — two things become a concern.

First is performance — if you have to spin up resource libraries every time you want to run your piece of code, that can carry significant overhead, depending on how complex and resource-intensive your code is. I'm going to come back to this in a minute or two, so remember that just setting up your libraries can cause a relative performance impact of 50% to 500%.
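As a rough illustration of where that setup cost comes from, the hypothetical sketch below fakes an expensive initialization step and shows why a cold invocation pays for it while a warm one doesn't. The 500 ms figure is a placeholder, not a measurement.

```python
import time

# Hypothetical illustration of cold-start overhead: heavy dependencies
# initialized lazily mean every *cold* invocation pays the setup cost.
_db_client = None  # populated on first use, then reused while warm


def _init_dependencies():
    # Stand-in for importing large libraries, opening connections,
    # loading models, etc. (the "setting up your libraries" cost).
    time.sleep(0.5)  # pretend this takes 500 ms
    return object()


def handler(event, context):
    global _db_client
    started = time.perf_counter()
    if _db_client is None:  # cold-start path
        _db_client = _init_dependencies()
    setup_ms = (time.perf_counter() - started) * 1000
    # On a warm invocation setup_ms is near zero; on a cold one it can
    # dwarf the actual work, which is where the 50% to 500% figure comes from.
    return {"setup_ms": round(setup_ms, 1)}
```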

Given the performance conundrum, the solution is to use functionality in the serverless platforms, like Lambda, to keep a warm pulse of libraries running so that there's no performance impact. This is referred to as a warm start serverless function.
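One common way to keep that warm pulse (my illustration; the article doesn't name a specific mechanism) is to have a scheduled event ping the function every few minutes so its execution environment and libraries stay resident; AWS Provisioned Concurrency serves a similar purpose.

```python
def handler(event, context):
    # A scheduled "keep-warm" ping (e.g., a CloudWatch Events rule firing
    # every few minutes) keeps this execution environment resident, so real
    # requests rarely hit a cold start. The "warmup" marker is an assumed
    # convention in this sketch, not a built-in Lambda feature.
    if event.get("warmup"):
        return {"status": "warm"}

    # ... normal request handling continues here ...
    return {"status": "handled"}
```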

Now, while this may address the performance issue, naturally it begins to detract from our cost savings. It's one thing to only pay for CPU cycles when you need to run the function — quite another when you're still ALWAYS paying for something, just a little less than you normally would.

Go to: Can APM Really Handle Serverless? - Part 2

Chris Farrell is Observability and APM Strategist at Instana