Organizations are becoming increasingly dependent on the performance of their critical business applications. These applications are continually evolving to meet the changing needs of the business: new applications are created, new users and features are added, and new ways of accessing the applications are introduced, such as BYOD.
However, no technology changes come without a price, and today’s complex applications put an increasing strain on the organization’s network and server infrastructure. Furthermore, user expectations of rapid response times mean that the network infrastructure is no longer just the "plumbing". It supports business-critical applications, provides the data on which decisions are made and facilitates communications with customers, partners, suppliers and co-workers, making it a strategic asset to the business. Any downtime or degradation in network or application performance will directly impact an organization’s bottom line.
Historically the network has been considered a separate, well-defined entity, making it relatively straightforward to write tools to understand and analyze its performance. These tools fall into two categories: Network Management Systems (NMS), and packet capture and analysis tools.
Most NMSs are infrastructure-focused, addressing device monitoring, capacity planning, configuration management, fault management, analysis of interface traffic and so on, while ignoring the applications and data traversing the network. They do not perform analytics on application response time, TCP errors and other issues that impact applications.
Application Performance Management (APM) systems typically support auto-discovery of all the applications in the network, providing transaction analysis, application usage analysis, end-user experience analysis, user-defined transaction profiling and the basic functions to monitor the health and performance of all configured application infrastructure assets. However, if an application is running slowly, they find it difficult to determine whether the problem is application-based or network-based.
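To make that distinction concrete, here is a minimal sketch (not a product feature from any vendor mentioned here) of how a probe might split an HTTP transaction's response time into a network component (TCP connection setup, roughly one round trip) and an end-to-end component that also includes server processing. The host, port and path parameters are illustrative.

```python
import socket
import time
import urllib.request

def timed_probe(host: str, port: int = 80, path: str = "/"):
    """Return (connect_ms, total_ms) for a simple HTTP GET probe."""
    # Network portion: time for the TCP three-way handshake alone.
    t0 = time.perf_counter()
    socket.create_connection((host, port), timeout=5).close()
    connect_ms = (time.perf_counter() - t0) * 1000

    # End-to-end portion: a full HTTP GET, which adds server
    # processing ("think") time and response transfer on top of
    # another connection setup.
    t0 = time.perf_counter()
    with urllib.request.urlopen(f"http://{host}:{port}{path}", timeout=5) as resp:
        resp.read()
    total_ms = (time.perf_counter() - t0) * 1000

    return connect_ms, total_ms
```

If the connect time dominates, the network path is suspect; if the gap between the two numbers is large, the server or application tier is the more likely culprit.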
Whereas separate systems were once sufficient to stay on top of problems, the increased interdependency of networks and applications, and the rising cost of downtime, mean it is no longer enough to use a discrete tool and say "it’s not the network" or "my servers are fine". These tools are not designed to manage the interplay between network and application environments, which must be understood and managed to optimize the user experience.
IT teams need to work together using correlated data to find the root cause and solve issues quickly before they impact the business.
Leveraging Application and Network Performance Methodologies
IT teams require complete visibility of the network across all layers, from the data center to the branch office. The solution is AANPM: Application Aware Network Performance Management. AANPM is a method of monitoring, analyzing and troubleshooting both networks and applications. It takes an application-centric view of everything happening across the network, providing end-to-end visibility of the network, the applications and their interdependencies, and enabling engineers to monitor and optimize the end-user experience. It does not look at applications from a coding perspective, but in terms of how they are deployed and how they are performing.
By leveraging data points from both application and network performance methodologies, AANPM helps all branches of IT work together to ensure optimal performance of applications and network.
AANPM offers specific, tangible business benefits:
• End-to-end infrastructure visibility
• Faster problem-solving
• Improved user experience
• Enhanced productivity
• Cost savings
• Improved infrastructure optimization
• Better business understanding of IT
Bruce Kosbab is CTO of Fluke Networks.