In IT we literally build the future, and people are naturally excited to explore the next new possibility. But there can't be many professions where the past is less valued. Take the term "legacy": in our world it's a synonym for "needs replacing." And yet in music, art, even architecture and engineering, a legacy is more often something that is deeply cherished.
If historic application code is as worthless as we often treat it, that's a sad reflection on the value of the developer's craft and everything we are building today. Looking purely at features and performance, a vintage car will never compare to something hot off a 2021 production line. And while plenty of cars reach end-of-life and end up as a 50x50cm crushed box, we still recognize and value the classics. Moreover, we value the tools, people and skills that can keep them running at peak performance.
The Reality for Nine Out of Ten of Us
Amongst the hype of exciting new cloud trends revealed in IDG's 2020 Cloud Survey (published last August), a quick reframing of the stats shows that 91% of organizations still rely on what are increasingly termed "enterprise applications," i.e., non-cloud-native applications running on traditional, physical infrastructure. Whether to break up, migrate or containerize these applications is a lengthy and extremely case-specific argument, and one for a different time and place.
Currently (and most likely well into the future), the overwhelming majority of organizations still need to monitor and maintain these enterprise applications. Moreover, where these are complex systems developed, debugged and refined over years, often decades, around a business's core processes, there can be very strong practical arguments for viewing them as classics. They can offer a valuable legacy, one best left where it is, doing what it does, how it always has done.
In this situation, a bespoke hybrid APM that can incorporate these enterprise applications becomes a vital tool. There is a need to monitor the compound applications and linked services that run through cloud-native front ends and APIs into, and back out of, classic enterprise applications.
If you need your 1930s Bugatti to purr, roar and spit fire like the day it was first tuned up, a modern torque driver will save a lot of time, but you won't get far with on-board diagnostics cables. Monitoring traditional enterprise applications solely with cloud-native tooling is equally misguided.
Tooling Approaches: Getting By or Thriving
Much of the problem with hybrid APM is that the modern cloud-native paradigm tends to dominate. If new development is happening in cloud/container-based DevOps pipelines, this naturally becomes the focus and the location for monitoring. Centralizing data in a modern DevOps dashboard isn't the issue; it's more a question of how this is done.
Just as with the software industry's attitude to legacy applications in general, there is vendor derision towards "legacy monitoring." Again, this is a question of how we view our legacy. Using outdated technology for critical APM is clearly unwise. However, using modern, dedicated tools for monitoring back-end services that run on physical infrastructure is simply logical. "One size fits all" cloud monitoring suites are potent tools for cloud deployments but, faced with the common reality of a modern hybrid infrastructure, they struggle to monitor enterprise applications and their underlying physical infrastructure.
Most DevOps engineers are capable of developing the tools and skills to monitor enterprise applications. With enough effort, it is possible to build or adapt an agent that draws out something approximating modern telemetry from an enterprise application and its underlying hardware platform. This can then be pushed into your monitoring solution of choice. It's just very questionable whether this is an effective use of a DevOps engineer's time.
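To make that effort concrete, here is a minimal sketch (in Python) of the kind of homegrown agent this involves. Everything in it is hypothetical: the log path, the metric format and the HTTP endpoint are placeholders standing in for whatever your application and monitoring stack actually expose.

#!/usr/bin/env python3
"""Sketch of a homegrown telemetry agent for an enterprise application.
Assumptions (illustrative only): the app writes a status line to a health
log, and the monitoring stack accepts metrics via an HTTP JSON endpoint."""
import json
import time
import urllib.request

MONITORING_ENDPOINT = "http://monitoring.example.com/api/v1/metrics"  # hypothetical

def read_app_health(logfile="/var/log/erp/health.log"):
    """Parse the app's latest health-log line into a metric dict."""
    with open(logfile) as f:
        last = f.readlines()[-1]  # e.g. "queue_depth=42 latency_ms=180"
    return dict(pair.split("=") for pair in last.split())

def push_metrics(metrics):
    """POST the metrics as JSON to the monitoring endpoint."""
    payload = json.dumps({"host": "erp-01", "ts": time.time(), **metrics}).encode()
    req = urllib.request.Request(
        MONITORING_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    while True:
        push_metrics(read_app_health())
        time.sleep(60)  # one-minute collection interval

Simple enough as a sketch, but multiply it by every enterprise application, log format and failure mode you need to cover, and the maintenance burden becomes clear.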
Alternatively, with the larger cloud monitoring and observability suites, you may even be able to buy additional, dedicated solutions for enterprise application monitoring. If you have already bought into the single-vendor suite, you are most likely numbed to the costs, so there may be a strong temptation to stick with the model. However, these solutions are typically built as an afterthought by a vendor with very different core expertise. They tend to offer cute graphics but, under pressure, these add-on solutions deliver precious little actionable data for maintaining enterprise software applications running on physical hardware.
Buy or build, the cost of drawing data directly from enterprise applications and transferring it straight into a cloud-native monitoring solution is high, and the results are typically awkward. Enterprise software requires a different understanding, different treatment and different monitoring from a cloud-native application. This is, after all, the essence of the DevOps/ITOps monitoring divide. You may end up with monitoring data in the same place, but you are more likely staring at the ingredients of an unnecessarily complex fruit salad than comparing apples with apples.
Integrating Tools and Teams, and Valuing Expertise
There is, however, a third, perhaps more natural way to deliver hybrid monitoring. Selecting best-of-breed tools and integrating them through APIs is the bedrock of the DevOps approach to tooling. The tools used to monitor traditional enterprise applications and physical infrastructure have been developed over decades, evolving around end users to solve their challenges and answer their needs. And, in a world of integrating tools, there is little point in rebuilding them from scratch.
Features like auto discovery, available in some free and open source monitoring tools, can offer a working solution in minutes. With popular open source check/agent libraries, 90% of enterprise applications can be monitored out of the box. And where these tools have evolved to offer well-documented APIs, these can be used to feed data into cloud-native or DevOps dashboard solutions, as the sketch below illustrates.
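As a hedged sketch of that integration path, the snippet below polls an enterprise monitoring tool's REST API for service states and forwards only the problems to a DevOps dashboard. The URLs, endpoints, token and response fields are placeholders, not any specific vendor's API; the real calls come from your tools' API documentation.

"""Sketch: pull service states from an enterprise monitoring tool's REST
API and forward problem states to a DevOps dashboard. All endpoints and
field names below are hypothetical placeholders."""
import requests  # pip install requests

MONITOR_API = "https://monitor.example.com/api"      # enterprise monitoring tool
DASHBOARD_API = "https://dashboard.example.com/api"  # DevOps dashboard

def fetch_service_states():
    """Query the monitoring tool for current host/service states."""
    resp = requests.get(
        f"{MONITOR_API}/v1/services",
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["services"]  # assumed response shape

def forward_problems(services):
    """Push only non-OK services to the dashboard, acting as a noise filter."""
    problems = [s for s in services if s.get("state") != "OK"]
    requests.post(f"{DASHBOARD_API}/v1/events", json=problems, timeout=10)

if __name__ == "__main__":
    forward_problems(fetch_service_states())

The design choice matters here: the enterprise monitoring tool does the discovery, checking and state evaluation it is good at, and the dashboard receives a pre-filtered, curated stream rather than raw telemetry.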
However, there are much larger strategic benefits at play. Regardless of your exact organizational structure, using a best-of-breed monitoring solution as an intelligent gateway or filter for enterprise application metrics is a far stronger approach.
For DevOps teams operating in isolation, tools that have evolved to monitor enterprise applications and physical infrastructure can deliver an opinionated view as a starting point. Typically these will serve up the key enterprise metrics based on historic end-user preference: decades of ITOps best practice implicitly laid out on the default problem dashboard. In addition, for systems such as databases or networks, there is generally an opinionated dashboard that surfaces the data needed to solve 90% of problems, with the other 10% within easy reach. DevOps engineers no longer have to grok the intricacies of an alien environment before they can monitor it.
Perhaps the more common situation is that effective hybrid application monitoring will necessitate collaboration between DevOps and an established ITOps team. In this scenario, freedom to use preferred tools can make or break this collaboration.
Forcing the ITOps team to work in an awkward cloud-native monitoring environment (built or bought) that is ill-suited to enterprise monitoring is unlikely to promote much collaborative spirit. In addition to the unfamiliarity of the tool, and often the terminology, cloud-native monitoring can lack the customization needed to work within less homogeneous hybrid environments. It makes more sense to let ITOps work in a best-of-breed enterprise solution that delivers APIs for DevOps practitioners to leverage while building their own platform-specific tools. The tooling becomes an enabler for advanced collaboration rather than a barrier.
Teams that practice a DevOps approach gain the ultimate opinionated enterprise monitoring solution, one built on the expertise of their ITOps team. ITOps can evolve the enterprise monitoring solution to serve up the data points that DevOps practitioners really need, rather than a best guess. The two teams can capitalize on each other's experience and expertise to build fine-tuned hybrid applications with chains of services running smoothly across the cloud-native and on-prem architectures that 91% of organizations still rely on.