In IT we literally build the future, and people are naturally excited to explore the next new possibility. But there can't be many professions where the past is less valued. Take the term "legacy": in our world it's a synonym for "needs replacing." And yet in music, art, even architecture and engineering, a legacy is more often something that is deeply cherished.
If historic application code is as worthless as we often treat it, that's a sad reflection on the value of the developer's craft and everything we are building today. Looking purely at features and performance, a vintage car will never compare to something hot off a 2021 production line. And while plenty of cars reach end-of-life and end up as a 50x50cm crushed box, we still recognize and value the classics. Moreover, we value the tools, people and skills that can keep them running at peak performance.
The Reality for Nine Out of Ten of Us
Amongst the hype of exciting new cloud trends revealed in IDG's 2020 Cloud Survey (published last August), a quick reframing of the stats shows that 91% of organizations still rely on what are increasingly termed "enterprise applications," i.e., non-cloud-native applications running on traditional, physical infrastructure. Whether to break up, migrate or containerize these applications is a lengthy and extremely case-specific argument, and one for a different time and place.
Currently (and most likely well into the future), the overwhelming majority of organizations still need to monitor and maintain these enterprise applications. Moreover, where these are complex systems developed, debugged and refined over years, often decades, around a business's core processes, there can also be very strong practical arguments for viewing them as classics. They can offer a valuable legacy, one best left where it is, doing what it does, how it always has.
In this situation, a bespoke hybrid APM solution that can incorporate these enterprise applications becomes a vital tool. There is a need to monitor the compound applications and linked services that run through cloud-native front ends and APIs into, and back out of, classic enterprise applications.
If you need your 1930s Bugatti to purr, roar and spit fire like the day it was first tuned up, a modern torque driver will save a lot of time, but you won't get far with on-board diagnostics cables. Monitoring traditional enterprise applications solely with cloud-native tooling is equally misguided.
Tooling Approaches: Getting By or Thriving
Much of the problem with hybrid APM is that the modern cloud-native paradigm tends to dominate. If new development is happening in cloud/container-based DevOps pipelines, these naturally become the focus and the location for monitoring. Centralizing data in a modern DevOps dashboard isn't the issue; it's more a question of how this is done.
Just as with the software industry's attitude to legacy applications in general, there is vendor derision towards "legacy monitoring." Again, this is a question of how we view our legacy. Using outdated technology for critical APM is clearly unwise. However, using modern, dedicated tools for monitoring back-end services that run on physical infrastructure seems logical. "One size fits all" cloud monitoring tools are potent for cloud deployments but, faced with the common reality of a modern hybrid infrastructure, they struggle to monitor enterprise applications and their underlying physical infrastructure.
Most DevOps engineers are capable of developing the tools and skills to monitor enterprise applications. With enough effort, it is possible to build or adapt an agent that draws out something approximating modern telemetry from an enterprise application and its underlying hardware platform. This can then be pushed into your monitoring solution of choice. It's just very questionable whether this is an effective use of a DevOps engineer's time.
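To illustrate the effort involved, here is a minimal sketch of such a hand-rolled agent in Python. The `erp-admin` status command, the metric name and the Pushgateway address are all hypothetical stand-ins; a real enterprise application rarely exposes its state this neatly, and this parsing and scheduling glue is exactly where the engineering hours go.

```python
# Minimal polling-agent sketch: scrape one metric from an enterprise
# application and push it to a Prometheus Pushgateway.
# The admin CLI, metric name and gateway address are hypothetical.
import re
import subprocess

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

REGISTRY = CollectorRegistry()
QUEUE_DEPTH = Gauge(
    "erp_batch_queue_depth",
    "Jobs waiting in the ERP batch queue (hypothetical metric)",
    registry=REGISTRY,
)

def read_queue_depth() -> int:
    # Hypothetical vendor CLI; substitute whatever your application
    # actually offers (log files, SNMP, a status command, ...).
    out = subprocess.run(
        ["erp-admin", "queue", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"waiting jobs:\s*(\d+)", out)
    if match is None:
        raise RuntimeError("unexpected erp-admin output")
    return int(match.group(1))

if __name__ == "__main__":
    QUEUE_DEPTH.set(read_queue_depth())
    # Run from cron or a systemd timer; gateway address is an assumption.
    push_to_gateway("pushgateway.internal:9091", job="erp_agent", registry=REGISTRY)
```

Multiply this by every metric, host and failure mode in the estate, and the maintenance burden becomes clear.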
Alternatively, with the larger cloud monitoring and observability suites, you may be able to buy additional, dedicated solutions for enterprise application monitoring. If you have already bought into the single-vendor suite, you are most likely numbed to the costs, so there may be a strong temptation to stick with the model. However, these solutions are typically built as an afterthought by a vendor with very different core expertise. They tend to offer cute graphics but, under pressure, these add-on solutions deliver little actionable data for maintaining enterprise applications running on physical hardware.
Buy or build, the cost of drawing data directly from enterprise applications and transferring it straight into a cloud-native monitoring solution is high, and the results are typically awkward. Enterprise software requires a different understanding, different treatment and different monitoring from a cloud-native application. This is, after all, the essence of the DevOps/ITOps monitoring divide. You may end up with monitoring data in the same place, but you are more likely staring at the ingredients of an unnecessarily complex fruit salad than comparing apples with apples.
Integrating Tools, Teams and Valuing Expertise
There is, however, a third, perhaps more natural way to deliver hybrid monitoring. Selecting best-of-breed tools and integrating them through APIs is the bedrock of the DevOps approach to tooling. The tools used to monitor traditional enterprise applications and physical infrastructure have been developed over decades, evolving around end users to solve their challenges and answer their needs. And in a world of integrated tools, there is little point in rebuilding them from scratch.
Features like auto-discovery, available in some free and open source monitoring tools, can offer a working solution in minutes. With popular open source check/agent libraries, 90% of enterprise applications can be monitored out of the box. And where these tools have evolved to offer well-documented APIs, those APIs can be used to feed data into cloud-native or DevOps dashboard solutions.
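In practice the integration can be a small script along the lines of the sketch below: pull current state from the enterprise monitoring tool's documented REST API and forward it to whatever dashboard the DevOps side uses. Both endpoints and the payload shape here are hypothetical stand-ins, not any specific vendor's API.

```python
# Sketch: pull service states from an enterprise monitoring tool's REST
# API and forward them to a DevOps dashboard. URLs, auth and the JSON
# schema are assumptions; substitute your tools' documented interfaces.
import requests

MONITORING_API = "https://monitoring.example.com/api/v1/services"  # assumption
DASHBOARD_API = "https://dashboard.example.com/api/ingest"         # assumption
TOKEN = "..."  # read from a secret store in real use

def sync_service_states() -> None:
    resp = requests.get(
        MONITORING_API,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    for svc in resp.json()["services"]:
        # Translating the ITOps tool's fields into the schema the DevOps
        # dashboard expects is the whole integration.
        event = {
            "source": "enterprise-monitoring",
            "host": svc["host"],
            "service": svc["name"],
            "state": svc["state"],  # e.g. OK / WARN / CRIT
        }
        requests.post(DASHBOARD_API, json=event, timeout=10).raise_for_status()

if __name__ == "__main__":
    sync_service_states()
```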
However, there are much larger strategic benefits at play. Regardless of your exact organizational structure, using a best-of-breed monitoring solution as an intelligent gateway or filter for enterprise application metrics offers a far stronger approach.
For DevOps teams operating in isolation, tools that have evolved to monitor enterprise applications and physical infrastructure can deliver an opinionated view as a starting point. Typically these will serve up the key enterprise metrics based on historic end-user preference: decades of ITOps best practice implicitly laid out on the default problem dashboard. In addition, for systems such as databases or networks, there is generally an opinionated dashboard that surfaces the data needed to solve 90% of problems, with the other 10% within easy reach. DevOps engineers no longer have to grok the intricacies of an alien environment before they can monitor it.
Perhaps the more common situation is that effective hybrid application monitoring will necessitate collaboration between DevOps and an established ITOps team. In this scenario, the freedom to use preferred tools can make or break that collaboration.
Forcing the ITOps team to work in an awkward cloud-native monitoring environment (built or bought) that is ill-suited to enterprise monitoring is unlikely to promote much collaborative spirit. In addition to the unfamiliarity of the tool, and often the terminology, cloud-native monitoring can lack the customization needed to work within less homogeneous hybrid environments. It makes more sense to let ITOps work in a best-of-breed enterprise solution that delivers APIs for DevOps practitioners to leverage while building their own platform-specific tools. The tooling becomes an enabler for advanced collaboration rather than a barrier.
Teams that practice a DevOps approach gain the ultimate opinionated enterprise monitoring solution, one built on the expertise of their ITOps team. ITOps gain an evolved enterprise monitoring solution that serves up the data points DevOps practitioners really need, rather than their best guess. The two teams can capitalize on each other's experience and expertise to build fine-tuned hybrid applications with chains of services running smoothly across cloud-native and on-prem architectures, the architectures that 91% of organizations still rely on.