Don't Get Caught Up In Cloud Monitoring Hype
April 01, 2015

Dirk Paessler
Paessler AG


The cloud monitoring market has been on fire in the early part of 2015, between acquisitions and a VC spending spree. The money is truly flying fast in Silicon Valley and beyond. But money isn’t everything, and while cloud monitoring has its place, it’s not a panacea.
 
It’s easy to get caught up in the hype cycle, but cloud monitoring startups face serious headwinds, including the fact that they are solving a problem many businesses simply don’t have. Many of these young companies have solved a relatively easy problem: monitoring cloud workloads. They have capitalized on a variety of trends in computing, notably the movement toward cloud applications and the Internet of Things. They have generated plenty of publicity, achieving “next big thing” status, but in many ways they’re missing the point. Hardware matters, the LAN matters, and both will continue to matter. No one is saying that moving to the cloud is a bad idea – on the contrary, it makes sense in many cases, and cloud monitoring has a role. But not everything can be displaced.

Networks can contain millions of switches, servers, firewalls and more – and much of that hardware is out of date. Knowing how to monitor everything on the network is critical – it takes more than connecting to the APIs of a few leading cloud providers and calling it a day. Businesses rely on hardware, and the simple fact is that most hardware on the planet is old. Cloud monitoring is optimized for the latest and greatest, but networking hardware is both business critical and, in many cases, quite dated.
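As an illustration (a minimal sketch, not any particular vendor’s implementation), here is what polling a piece of legacy gear can look like over SNMP – the protocol most aging switches and servers still speak. It assumes the classic pysnmp hlapi; the host address and community string are placeholders.

```python
# Sketch: poll a legacy switch over SNMP for its device description.
# Assumes the classic pysnmp hlapi; host and community are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

def poll_sysdescr(host: str, community: str = "public") -> str:
    """Fetch sysDescr.0 from an SNMP agent, tolerating slow links."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=0),  # SNMPv1 – most old gear speaks it
        UdpTransportTarget((host, 161), timeout=2, retries=3),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ))
    if error_indication:
        raise RuntimeError(f"SNMP poll failed: {error_indication}")
    if error_status:
        raise RuntimeError(f"SNMP error: {error_status.prettyPrint()}")
    return str(var_binds[0][1])

print(poll_sysdescr("192.0.2.10"))  # e.g. a decade-old branch-office switch
```

Nothing here touches a cloud API – and that is the point. Multiply it by the many protocols and device generations on a real network, and the scope of the problem becomes clear.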

One of the most talked-about topics in monitoring is the Internet of Things, and it is here that cloud monitoring shows its weakness. The most exciting aspect of IoT is its potential to transform the industrial economy. While many focus on how IoT will let consumers control their thermostats and refrigerators remotely, the connected factory is truly transformational. And the connected factory is a perfect illustration of why monitoring is not about the cloud, but about a willingness to do a lot of dirty work.

The connected factory will not run on 21st century technology alone. In all industrial businesses, be it manufacturing or energy production, operations depend on legacy hardware, including some systems that are homegrown. SCADA systems are a perfect example. These systems are the operational backbone of the business, and they are expensive to implement – it takes many years to amortize the costs. They will need to be connected, and doing so successfully takes deep institutional knowledge and years of hardware experience. Monitoring providers need to offer a way for end users to work with old hardware, be it through custom-designed sensors or an easy-to-use template.
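To make “custom-designed sensor” concrete, here is a hedged sketch of one: reading a single value from a Modbus/TCP PLC, a protocol family common in SCADA environments. This assumes the pymodbus 3.x client API; the register address, unit ID, and tenths-of-a-degree scaling are hypothetical and would come from the plant’s register map.

```python
# Sketch of a "custom sensor" reading a temperature from a Modbus/TCP PLC.
# Assumes pymodbus 3.x; register address, unit ID, and scaling are
# hypothetical placeholders taken from an imagined plant register map.
from pymodbus.client import ModbusTcpClient

def read_temperature(host: str, register: int = 100, unit: int = 1) -> float:
    client = ModbusTcpClient(host, port=502)
    if not client.connect():
        raise ConnectionError(f"cannot reach PLC at {host}")
    try:
        result = client.read_holding_registers(register, count=1, slave=unit)
        if result.isError():
            raise RuntimeError(f"Modbus read failed: {result}")
        # Many PLCs encode temperatures as tenths of a degree in one register.
        return result.registers[0] / 10.0
    finally:
        client.close()

print(f"{read_temperature('192.0.2.50'):.1f} °C")
```

The hard part is not the dozen lines of code – it is knowing which register means what on a 25-year-old controller, which is exactly the institutional knowledge described above.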

Additionally, some processes simply require a LAN connection. Factories will never move all workloads to the cloud; it is just not possible. Machines must be connected by secure LAN connections, over fiber, copper or Wi-Fi, with ultra-high bandwidth and reliability in the five-nines range. Cloud systems simply cannot offer that at present. No factory owner is going to accept lower availability or connectivity problems that are outside their control. Cloud outages happen, but no one is ever going to walk off the factory floor because Amazon is down.
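For perspective, “five nines” is a very tight budget. A quick back-of-the-envelope calculation shows how little downtime each level of availability actually allows per year:

```python
# Downtime budget per year at a given availability level.
for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_min = (1 - availability) * 365.25 * 24 * 60
    print(f"{nines} nines ({availability:.5f}): "
          f"{downtime_min:7.2f} minutes of downtime per year")

# 3 nines: ~525.96 min (~8.8 hours)
# 4 nines:  ~52.60 min
# 5 nines:   ~5.26 min
```

About five minutes of unplanned downtime per year is a target that a connection routed over the public internet to a third-party cloud struggles to guarantee.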

Network monitoring has required, and will continue to require, “boots on the ground.” Monitoring software needs to be able to communicate with everything, whether it’s AWS or a 25-year-old SCADA system, regardless of connection quality. IT departments need to be able to monitor everything from cloud applications to valves in an oil pipeline or a power station in a remote area. It takes many years of expertise to develop tools that can accomplish this – far more than it takes to link up with an API. Much of the internet still runs on very old servers and switches; understanding where monitoring has been is critical to its future.
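In practice, “regardless of connection quality” means defensive plumbing around every probe: timeouts, retries, and backoff, so one unreachable remote site does not stall the rest of a monitoring run. A generic sketch, where check() is a placeholder for any probe – an SNMP get, a Modbus read, or a cloud API call:

```python
# Sketch: a poll wrapper tolerant of flaky links. check() is a
# placeholder for any probe function, not a real API.
import time

def poll_with_backoff(check, max_attempts: int = 5, base_delay: float = 1.0):
    """Run check(), retrying with exponential backoff on network errors."""
    for attempt in range(max_attempts):
        try:
            return check()
        except (TimeoutError, ConnectionError):
            if attempt == max_attempts - 1:
                raise  # give up and report the sensor as down
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

It is unglamorous code, but resilience of this kind – multiplied across thousands of devices and dozens of protocols – is the dirty work that separates mature monitoring from an API integration.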

Dirk Paessler is CEO and Founder of Paessler AG.
