APMdigest's 2017 Application Performance Management Predictions is a forecast by the top minds in APM today. Industry experts — from analysts and consultants to users and the top vendors — offer thoughtful, insightful, and often controversial predictions on how APM and related technologies will evolve and impact business in 2017. Part 2 covers the expanding scope of Application Performance Management (APM) and Network Performance Management (NPM).
Start with 2017 Application Performance Management Predictions - Part 1
10. CONVERGENCE: APM/DPM + ITOA/ITOM
APM has already largely morphed into Digital Performance Management (DPM), reflecting the realization that today's applications are increasingly likely to depend upon end-to-end digital transactions. In 2017, we predict greater convergence between APM/DPM and IT Operations Analytics and Management (ITOA/ITOM), as infrastructure and application issues are increasingly intertwined.
Jason Bloomberg
President, Intellyx
11. CONVERGENCE: APM AND INFRASTRUCTURE MONITORING
The current disparate worlds of APM and infrastructure management will converge in 2017. IT executives, managers and administrators will realize that having different consoles for application code visibility vs. IT infrastructure management leaves visibility gaps and demands a higher degree of expertise from all involved.
Srinivas Ramanathan
CEO, eG Innovations
Read Srinivas Ramanathan's blog: Majority Looking to Improve Citrix Performance Management
12. APM COVERS ALL APPLICATION TYPES
2017 will be the year of “not just” in APM. As in “not just agent-based transaction tracking” or “not just for DevOps.” But most importantly, “not just for home-grown code.” In the coming year, APM will fully embrace the words behind the acronym to include tools and techniques that allow management of all application types — from those developed in-house, to customized off-the-shelf applications, to pure “shrink wrap” apps that enterprises purchase, install, and run as-is (yes, some of those really do still exist!).
Leon Adato
Head Geek, SolarWinds
Read Leon Adato's blog: More on SolarWinds Prediction for APM in 2017
13. APM EXTENDS END TO END
In 2017, we will start to see demand for situational and predictive visibility across the entire IT environment, not just the application stack. Currently, ITOps and DevOps teams are overrun with information from a variety of APM tooling, yet the health of the entire environment is still not represented. Because of this complexity, no single APM tool has enough situational awareness to help Ops practitioners effectively diagnose a disruption. In the coming year, solutions that offer data correlation and convergence, along with pattern and anomaly detection, will see heavy adoption as they allow organizations to see the full picture of their infrastructure and respond to incidents and performance issues more quickly.
Tim Armandpour
SVP Product Development, PagerDuty
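To make the idea of pattern and anomaly detection concrete, here is a minimal, illustrative Python sketch of one common approach — a rolling z-score check over a single metric series. It assumes a simple statistical baseline and hypothetical sample values; real ITOps tooling would correlate many such series across the whole environment rather than flagging one metric in isolation.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag points that deviate sharply from a rolling baseline.

    Each sample is compared against the mean and standard deviation of the
    preceding `window` samples (a simple z-score). This only shows the core
    idea for one metric, e.g. request latency in milliseconds.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Example: steady latency samples followed by one spike
latencies = [102, 98, 105, 101, 99] * 10 + [480]
print(detect_anomalies(latencies))  # [(50, 480)]
```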
APM is increasingly shifting away from application and server-based technologies towards true end-to-end solutions that can analyze and understand the entire scope of client-server-backend communication processes. APM must include the client system, WAN connectivity, datacenter performance, and application server availability in reported metrics and application health status. This change will require APM solutions to be embedded within other IT architecture components to show full value across security, application delivery, networking, and other uses.
Frank Yue
Director of Application Delivery Solutions, Radware
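As a rough illustration of the end-to-end health status described above, the following Python sketch rolls per-tier checks up into a single application status. The tier names and status values are assumptions made for the example, not any particular product's schema.

```python
def overall_health(tier_status):
    """Roll per-tier checks up into one application health status.

    Each tier reports "ok", "degraded", or "down"; the application is only
    as healthy as its weakest link across client, WAN, datacenter, and
    application server.
    """
    severity = {"ok": 0, "degraded": 1, "down": 2}
    return max(tier_status.values(), key=lambda s: severity[s])

status = {
    "client": "ok",
    "wan": "degraded",
    "datacenter": "ok",
    "app_server": "ok",
}
print(overall_health(status))  # "degraded"
```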
14. ITSM CONTINUES TRANSFORMATION
IT Service Management will continue its transformation. What started in 2010 with the rampant move to modernize legacy ITSM systems to agile, cloud-based systems will continue at a rapid pace. Consolidating multiple systems of record and expanding the scope of IT Service Management to the rest of the enterprise is a trend that will continue – penetrating non-IT departments such as HR, Legal, Customer Service, and Security. The drive to improve productivity for all employees will continue, enabled through modern self-service portals and workflow automation.
Kevin Murray
Senior Director, ServiceNow
15. SHIFT FROM DATA-CENTRIC TO EVENT-CENTRIC
Businesses will shift from being data-centric to event-centric. In the past, data stores were considered the source of truth for any organization – with huge data lakes where insight supposedly existed and unexplored opportunity was close. Looking to 2017 and beyond, truth and insight will instead be found in the log of data events, or the "state" of an event, and an organization's ability to react to said event – both in real time and over time. This event-driven model will require a technology change, moving away from request-driven integration and creating an application architecture optimized for agility, resiliency and elasticity.
Sean Bowen
CEO, Push Technology
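The contrast between request-driven integration and an event-driven model can be sketched in a few lines of Python. The in-memory event log, topic names, and handlers below are purely illustrative — production systems would use a durable event streaming platform — but they show the core idea: the append-only log of events is the source of truth, and consumers react to it both in real time and by replaying it later.

```python
import time
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Event:
    topic: str
    payload: Dict
    timestamp: float = field(default_factory=time.time)

class EventLog:
    """An append-only log: the sequence of events is the source of truth."""

    def __init__(self):
        self._events: List[Event] = []
        self._subscribers: List[Callable[[Event], None]] = []

    def subscribe(self, handler: Callable[[Event], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, topic: str, payload: Dict) -> None:
        event = Event(topic, payload)
        self._events.append(event)        # durable record of what happened
        for handler in self._subscribers:  # react in real time
            handler(event)

    def replay(self, handler: Callable[[Event], None]) -> None:
        for event in self._events:         # derive state or insight over time
            handler(event)

# Example: react to order events as they occur, then replay to recount them
log = EventLog()
log.subscribe(lambda e: print(f"reacting to {e.topic}: {e.payload}"))
log.publish("order.placed", {"id": 1, "total": 42.0})
log.publish("order.shipped", {"id": 1})

counted = []
log.replay(counted.append)
print(len(counted))  # 2
```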
16. APM TAKES ON BIG DATA WORKLOADS
In 2017, we can expect APM to be running in tandem with workloads on more Big Data systems. As with any enterprise-class deployment, you need to make sure that all applications are free of faults – that's where APM comes in, filling a void by providing insight into how the workloads are running.
Kunal Agarwal
CEO, Unravel Data
17. APM TAKES ON IOT
A trend of note is the rise of Internet of Things (IoT) implementations. Enterprises will bring more connected devices online and will have to deal with massive amounts of data that stream into their big data stores. IT departments should look to adopt the right APM solution to simplify the management of these complex environments and to contend with their dynamic resource requirements.
Arun Balachandran
Applications Manager Market Analyst, ManageEngine
Many IoT applications are designed to visualize or organize data collected from one or more sensors. The data tends to be old, and hence may not accurately reflect the current state at the moment the user is interacting with the application. This can lead to frustration on the part of the user, and lack of trust in the information presented. In 2017, a focus will emerge on providing applications that are based upon real-time data, which will necessitate not just reliable, ubiquitous and high-performance networks, but also the ability to process as much of the data as possible on the edge in order to minimize latency for the end user. This focus will expose some of the significant limitations inherent in many of the LPWAN technologies to achieve the performance required for real-time data applications, and will highlight the benefits of technologies such as mesh and cellular that are architected to support this type of user experience.
Don Reeves
CTO, Silver Spring Networks
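To illustrate the edge-processing idea, here is a minimal Python sketch in which a hypothetical gateway summarizes raw sensor samples locally and forwards only an aggregate plus threshold alerts, rather than streaming every reading over a constrained (e.g., LPWAN) uplink. The function, threshold, and sample values are assumptions made for the example.

```python
import statistics

def summarize_at_edge(readings, alert_threshold=75.0):
    """Reduce raw sensor samples to a compact summary on the gateway.

    The edge device keeps the raw data local and forwards only an aggregate
    plus any threshold alerts, so the application sees fresh, already-processed
    values instead of stale raw samples processed centrally.
    """
    summary = {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(statistics.mean(readings), 2),
    }
    alerts = [r for r in readings if r > alert_threshold]
    return summary, alerts

# Example: one minute of temperature samples from a single sensor
samples = [68.2, 69.1, 70.4, 77.8, 69.9, 68.7]
summary, alerts = summarize_at_edge(samples)
print(summary)  # {'count': 6, 'min': 68.2, 'max': 77.8, 'mean': 70.68}
print(alerts)   # [77.8]
```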
18. NETWORK TEAMS GET PROACTIVE
Network teams will become proactive in order to ensure that networks don't go down. In an era when requirements and expectations continue to rise, network engineers will need a global view of QoS for their stakeholders and users. There has been a shift from reactive problem solving (based on complaints and trouble tickets) to proactive monitoring, with network engineers using tools that show them where and how a network is having issues before users are even aware. This ability to anticipate problems in more complex environments will help ensure that the network of 2017 doesn't go down. The enterprise network is dynamic, and it represents the heartbeat of the organization.
Larry Zulch
President and CEO, Savvius
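One simple way to picture proactive monitoring is a trend check that warns before a link saturates, rather than after users complain. The Python sketch below extrapolates a linear trend from hourly utilization samples; the function name, window, and thresholds are illustrative, and real tools use far richer baselines across many interfaces.

```python
def hours_until_saturation(utilization_history, capacity=1.0):
    """Estimate when a link will saturate by extrapolating recent growth.

    Uses a very simple linear trend over hourly utilization samples (0.0-1.0).
    Returns None if utilization is flat or falling.
    """
    if len(utilization_history) < 2:
        return None
    deltas = [b - a for a, b in zip(utilization_history, utilization_history[1:])]
    rate = sum(deltas) / len(deltas)     # average hourly change
    current = utilization_history[-1]
    if current >= capacity:
        return 0.0
    if rate <= 0:
        return None
    return (capacity - current) / rate

# Example: utilization climbing roughly 5 points per hour on a WAN link
history = [0.55, 0.60, 0.66, 0.70, 0.76]
eta = hours_until_saturation(history)
if eta is not None and eta < 8:
    print(f"WARNING: link projected to saturate in {eta:.1f} hours")
```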
19. NETWORK SPEEDS PRESENT NEW CHALLENGES
The coming year will bring an increased need for analysis on higher speed links. Traditionally, 40Gbps and 100Gbps links have been used only for backhaul or data center interconnect, but faster speeds at the edge will force organizations beyond the 10Gbps plateau at which business-critical services and apps have run for several years. Without the ability to scale both architecturally and technologically, organizations that continue to use legacy monitoring systems will struggle with increased monitoring costs and lower end-user customer satisfaction.
Jim Berkman
Senior Director of Marketing, cPacket Networks
20. NPM GETS CLOUD-FRIENDLY
Network Performance Management (NPM) has been stuck in a time warp. APM has largely tracked the rise of the cloud and the distributed nature of modern, cloudified application development. By contrast, NPM as a practice, architecture and toolset has largely stayed in the mode of legacy datacenters, sparse WAN chokepoints and physical connectivity. In 2016, recognition started to really build for the need to change this picture, particularly since APM isn't truly complete without sound NPM. In 2017, the industry will start to get serious about practicing cloud-friendly NPM.
Avi Freedman
Co-Founder and CEO, Kentik
21. NETWORK-BASED VISIBILITY
Application performance in an IaaS environment will increasingly become constrained by communications between different application tiers and inbound/outbound traffic rather than by storage and compute. Network-based visibility will become a growing theme and requirement for managing application performance in an IaaS environment.
Shehzad Merchant
CTO, Gigamon
Read 2017 Application Performance Management Predictions - Part 3.