Streamlining Anomaly Detection and Remediation with Edge Observability
June 07, 2022

Ozan Unlu
Edge Delta


Over the past several years, architectures have become increasingly distributed and datasets have grown at unprecedented rates. Despite these shifts, the tools available to detect issues within your most critical applications and services remain stuck in a centralized model, in which teams must collect, ingest, and index datasets before they can query them to derive any value.

This approach worked well for most use cases five years ago, and it still suffices for batch processing, common information models, correlation, threat feeds, and more. However, when it comes to real-time analytics at large scale, specifically anomaly detection and resolution, it has inherent limitations. As a result, it has become increasingly difficult for DevOps and SRE teams to minimize the impact of issues and ensure high-quality end-user experiences.

In this blog, I'm going to propose a new approach to support real-time use cases — edge observability — that enables you to detect issues as they occur and resolve them in minutes. But first, let's walk through the current centralized model and the limitations it imposes on DevOps and SRE teams.

Centralized Observability Limits Visibility, Proactive Alerting, and Performance

The challenges created by centralized observability are largely a byproduct of exponential data growth. Shipping, ingesting, and indexing terabytes or even petabytes of data each day is difficult and cost-prohibitive for many businesses. So, teams are forced to predict which datasets meet the criteria to be centralized. The rest is banished to a cold storage destination, where you cannot apply real-time analytics on top of the dataset. For DevOps and SRE teams, this means less visibility and creates the potential that an issue could be present in a non-indexed dataset — meaning the team is unable to detect it.

On top of that, engineers must manually define monitoring logic within their observability platforms to uncover issues in real time. This is not only time-consuming but also puts the onus on the engineer to know, upfront, every pattern they'd like to alert on. The approach is reactive by nature, since teams are typically looking for behaviors they're aware of or have seen before.
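
To make this concrete, here is a minimal sketch of the kind of hand-written monitoring logic described above. The log format, service names, and regex pattern are all hypothetical; the point is that the engineer must encode each known failure mode explicitly, and anything the pattern does not anticipate goes undetected.

```python
import re

# A hypothetical, hand-maintained alert rule: the engineer must know this
# failure pattern up front and encode it explicitly.
KNOWN_FAILURE = re.compile(r"ERROR .* (timeout|connection refused) .* payment-service")

def check_line(line: str) -> bool:
    """Return True only if the line matches a pattern we already know about."""
    return bool(KNOWN_FAILURE.search(line))

# Illustrative log lines
lines = [
    "2022-06-07T12:00:01 ERROR request timeout while calling payment-service",
    "2022-06-07T12:00:02 ERROR unexpected null token in checkout-service",  # novel issue
]

for line in lines:
    if check_line(line):
        print("ALERT:", line)
    # The second line represents an unknown unknown: no rule exists, so no alert fires.
```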

Root-causing an issue and writing an effective unit test for it is a well-worn practice, but what happens when you need to detect and resolve an issue that's never occurred before?

Lastly, the whole process is slow, which raises the question: "How fast is real-time?"

Engineers must collect, compress, encrypt, and transfer data to a centralized cloud or data center. Then, they must unpack, ingest, index, and query the data before they can build dashboards and alerts. These steps naturally create a delta between when an issue actually occurs and when it is alerted on, and that delta grows as volumes increase and query performance degrades.
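
As a rough illustration of that delta, the sketch below simply sums per-stage latencies for a centralized pipeline. The numbers are placeholders, not benchmarks; in practice the ingest, index, and query stages are the ones that stretch as data volumes grow.

```python
# Hypothetical, illustrative per-stage latencies (seconds) for a centralized
# pipeline; real values vary widely with volume, infrastructure, and tooling.
pipeline_stages = {
    "collect": 5,
    "compress": 10,
    "encrypt": 5,
    "transfer": 60,
    "unpack": 10,
    "ingest": 120,
    "index": 180,
    "query": 90,   # tends to degrade as indexed volume grows
}

delta = sum(pipeline_stages.values())
print(f"Issue-to-alert delta: ~{delta} seconds ({delta / 60:.1f} minutes)")
```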

What is Edge Observability?

To detect issues in real-time and repair them in minutes, teams need to complement traditional observability with distributed stream processing and machine learning. Edge observability uses these technologies to push intelligence upstream to the data source. In other words, it calls for starting the analysis on raw telemetry within an organization's computing environment before routing to downstream platforms.

By starting to analyze your telemetry data at the source, you no longer need to choose which datasets to centralize and which to neglect. Instead, you can process data as it's created, unlocking complete visibility into every dataset — and in turn, every issue.
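
To sketch what "analyzing at the source" might look like in practice (this is an illustrative pattern, not any particular vendor's implementation), imagine a lightweight agent that consumes log lines as they are written, keeps running aggregates locally, and forwards only compact summaries plus the raw lines that matter:

```python
from collections import Counter
from typing import Iterable, Iterator

def edge_agent(lines: Iterable[str], flush_every: int = 1000) -> Iterator[dict]:
    """Consume log lines as they are produced, aggregate locally, and emit
    compact summaries (plus raw error lines) for downstream routing."""
    counts: Counter = Counter()
    error_lines: list[str] = []
    for i, line in enumerate(lines, start=1):
        parts = line.split(" ", 2)            # assumes "timestamp LEVEL message"
        level = parts[1] if len(parts) > 1 else "UNKNOWN"
        counts[level] += 1
        if level == "ERROR":
            error_lines.append(line)          # keep raw context for anomalies
        if i % flush_every == 0:
            yield {"level_counts": dict(counts), "errors": error_lines}
            counts.clear()
            error_lines = []
```

The point of the pattern is that the full raw stream never has to leave the host; only the distilled output is routed downstream, so no dataset has to be left behind for cost reasons.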

Machine learning complements this approach by automatically:

■ baselining the datasets

■ detecting changes in behavior

■ determining the likelihood of an anomaly or issue

■ triggering an alert in real-time

Because these operations are all running at the source, alerts are triggered orders of magnitude faster than is possible with the old centralized approach.
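
Here is a minimal sketch of the baselining-and-alerting loop outlined in the list above, assuming a simple exponentially weighted baseline and a z-score threshold. Production systems use far richer models, but the shape is similar: learn what "normal" looks like for each stream, score new values against it, and alert the moment a value deviates sharply.

```python
import math

class StreamingBaseline:
    """Maintain an exponentially weighted mean/variance for a metric
    (e.g., errors per minute) and score new values against it."""

    def __init__(self, alpha: float = 0.1, threshold: float = 3.0, warmup: int = 5):
        self.alpha = alpha          # how quickly the baseline adapts
        self.threshold = threshold  # z-score above which we raise an alert
        self.warmup = warmup        # observations to collect before alerting
        self.mean = None
        self.var = 0.0
        self.n = 0

    def observe(self, value: float) -> bool:
        """Update the baseline and return True if the value looks anomalous."""
        self.n += 1
        if self.mean is None:
            self.mean = value
            return False
        diff = value - self.mean
        z = diff / math.sqrt(self.var) if self.var > 0 else 0.0
        # Update the running estimates after scoring the new value
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return self.n > self.warmup and abs(z) > self.threshold

baseline = StreamingBaseline()
for errors_per_minute in [2, 3, 2, 4, 3, 2, 45]:   # illustrative metric stream
    if baseline.observe(errors_per_minute):
        print(f"ALERT: {errors_per_minute} errors/min deviates sharply from the baseline")
```

Because a loop like this runs on the same host that produces the data, there is no ingest or indexing step between the anomalous value and the alert.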

It's critical to point out that machine learning eliminates the need for engineers to build and maintain complex monitoring logic within an observability platform. Instead, the machine learning picks up on negative patterns — even unknown unknowns — and surfaces the full context of the issue (including the raw data associated with it) to streamline root-cause analysis. Operationalizing machine learning for real-time insights on high-volume data has always been a challenge at scale, but distributing it across the edge gives teams full access to, and deep visibility into, every dataset.

Edge Observability Cuts MTTR from Hours to Minutes

Taking this approach, teams can detect anomalous changes in system behavior as soon as they occur and then pinpoint the affected systems/components in a few clicks — all without requiring an engineer to build regex, define parse statements, or run manual queries.

Organizations of all sizes and backgrounds are seeing the value of edge observability. Some are using it to dramatically reduce debugging times, while others are gaining visibility into issues they didn't know existed. In every case, it's clear that analyzing massive volumes of data in real-time calls for a new approach — and this will only become clearer as data continues to grow exponentially. That new approach starts at the edge.

Ozan Unlu is CEO of Edge Delta
