Root Cause Analysis: Causal Versus Derived Events
April 15, 2014

Tom Molfetto
ServiceNow

Today’s business landscape is saturated with data. Big Data has become one of the most hyped trends in the tech space, and all indicators point to the reality that this volume of data is only going to grow. IDC estimates that we’ll see a 60% growth in structured and unstructured data annually. Global 2000 organizations are investing billions of dollars into harnessing the power of Big Data to help make it meaningful and actionable. In other words, organizations are spending a ton of money in an effort to translate data into information.

Data – in and of itself – is fairly useless. When data is interpreted, processed and analyzed – when its true meaning is unearthed – it becomes useful and is called information. Thus the race between players like Splunk, QlikView and others to be the first or the best to harness the power of Big Data by translating it into actionable information.

For data center personnel and enterprise IT professionals, one of the most valuable ways to translate data into information is by isolating causal versus derived events. In most of my explorations, I have discovered that organizations are using a best-of-breed approach to monitoring, which has resulted in a sort of Balkanization of the data center. In a common scenario, the network team uses Cisco for monitoring, the database team uses Oracle, and the web server team uses Nagios. But nothing ties all of that information together in a unified view. There is no monitor of monitors, or manager of managers, so to speak — let alone a unified view that goes beyond the IT components and maps them to their associated business services.

So what happens when a LAN port fails, and the app server and database server that both communicate through that LAN port also fail as a result? In that scenario, the LAN port failure is the causal event and the app/database server failures are derived events. By being able to quickly distinguish between the two types of events, and isolate the root cause of the failure, the dependent business services can be restored while minimizing negative impact on overall operations.
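The distinction can be sketched with a small dependency graph. This is a hypothetical illustration, not any vendor's implementation: a failing component whose upstream dependencies are all healthy is treated as a causal event, while a failing component with a failing dependency is treated as derived.

```python
# Minimal sketch of separating causal from derived events in a
# dependency topology. All component names are invented for illustration.

# Each component maps to the upstream components it depends on.
DEPENDS_ON = {
    "lan_port": [],
    "app_server": ["lan_port"],
    "db_server": ["lan_port"],
}

def classify(failing):
    """Split a set of failing components into causal and derived events.

    A failure is *causal* if none of the component's upstream
    dependencies are also failing; otherwise it is *derived*.
    """
    causal, derived = set(), set()
    for node in failing:
        if any(dep in failing for dep in DEPENDS_ON.get(node, [])):
            derived.add(node)
        else:
            causal.add(node)
    return causal, derived

causal, derived = classify({"lan_port", "app_server", "db_server"})
print(causal)   # {'lan_port'}
print(derived)  # the app and database server failures
```

In the LAN port scenario above, this simple rule immediately narrows three red flags down to the one event worth investigating.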

Standard monitoring solutions will trigger a flood of red flags showing failures, but to make that data "come alive" it needs to be architected and displayed in a topological format. This is what allows easier assessment of root-cause versus derived events, and what dramatically reduces Mean-Time-To-Know (MTTK) when diagnosing the underlying issues impacting business services.

Best-of-breed monitoring tools should continue to be leveraged in their respective domains, but the most forward-thinking organizations are unifying these tools from a service-centric perspective. They create a monitor of monitors that maps IT components to associated business services and connects with the best-of-breed solutions, producing a complete and up-to-date topology that empowers IT to do their jobs more effectively.
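The service-centric mapping described above can be sketched in a few lines. This is a hypothetical example with invented service and component names, not a description of any particular product: each business service is associated with the set of IT components its topology includes, so a single component failure can be translated directly into business-service impact.

```python
# Hypothetical sketch: mapping IT components to business services so a
# component failure can be expressed as service impact. All names are
# invented for illustration.

SERVICE_MAP = {
    "online_checkout": {"lan_port", "app_server", "db_server"},
    "internal_wiki": {"wiki_server"},
}

def impacted_services(failed_component):
    """Return every business service whose topology includes the
    failed component."""
    return {service for service, components in SERVICE_MAP.items()
            if failed_component in components}

print(impacted_services("lan_port"))  # {'online_checkout'}
```

With a map like this, the causal event identified in the topology answers not just "what broke?" but "which business services are affected?"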

Providing IT with the tools required to interpret data meaningfully and isolate the root cause of problems helps to create an informed perspective from which decisions can be made and responses taken.

Tom Molfetto is Marketing Director for Neebula.
