Losing $$ Due to Ticket Times? Hack Response Time Using Data
June 03, 2016

Collin Firenze
Optanix

Without the proper expertise and tools in place to quickly isolate, diagnose, and resolve an incident, a routine error can quickly turn into hours of downtime, causing significant interruption to business operations that impacts both revenue and employee productivity. How can we stop these minor incidents from becoming major outages? Major companies and organizations, take heed:

1. Identify the correlation between issues to expedite time to notify and time to resolve

Not understanding the correlation between issues is detrimental to timely resolution. Even with a network monitoring solution in place, a lack of automated correlation generates excess "noise," forcing support teams to act on numerous individual alerts rather than a single ticket that contains all relevant events and information for the support end-user.

A correlated monitoring approach gives support teams a holistic view of the network failure. By using the correlated events to efficiently identify the root cause, support teams can promptly execute the corrective action needed to resolve the issue at hand.

Correlation also consolidates all relevant information into a single ticket, allowing support teams to streamline their staffing models: one support engineer can act on the incident instead of numerous resources engaging on individual alerts.
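To make the idea concrete, here is a minimal sketch of automated correlation, assuming a hypothetical stream of alert records with site and timestamp fields: alerts that occur at the same site within a short time window are grouped into one candidate ticket rather than raised individually. The field names and the five-minute window are illustrative assumptions, not part of any specific monitoring product.

```python
from datetime import datetime, timedelta

# Illustrative alert records; field names are assumptions, not a vendor schema.
alerts = [
    {"device": "core-sw-01", "site": "NYC", "event": "link down",
     "time": datetime(2016, 6, 3, 9, 0)},
    {"device": "edge-rtr-02", "site": "NYC", "event": "BGP neighbor lost",
     "time": datetime(2016, 6, 3, 9, 1)},
    {"device": "branch-ap-07", "site": "LON", "event": "AP unreachable",
     "time": datetime(2016, 6, 3, 9, 30)},
]

WINDOW = timedelta(minutes=5)  # alerts this close together at one site are treated as related


def correlate(alerts):
    """Group alerts by site and time proximity into candidate tickets."""
    tickets = []
    for alert in sorted(alerts, key=lambda a: a["time"]):
        for ticket in tickets:
            if (ticket["site"] == alert["site"]
                    and alert["time"] - ticket["last_seen"] <= WINDOW):
                ticket["events"].append(alert)
                ticket["last_seen"] = alert["time"]
                break
        else:
            tickets.append({"site": alert["site"],
                            "events": [alert],
                            "last_seen": alert["time"]})
    return tickets


for t in correlate(alerts):
    print(t["site"], [e["event"] for e in t["events"]])
# NYC ['link down', 'BGP neighbor lost']
# LON ['AP unreachable']
```

In this toy run, the two New York alerts land in one ticket for a single engineer to work, while the unrelated London alert becomes its own ticket.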

2. Constantly analyzing raw data for trends helps IT teams proactively spot and prevent recurring issues

Beyond the standard reactive response of a support team, there is substantial benefit in proactive analysis of raw data from your environment. Proactive analysis identifies trends and failures so that corrective and preventative actions can be taken before support teams spend time investigating repeat issues. This approach not only creates a more stable environment with fewer failures, but also allows support teams to reduce manual hours and cost by avoiding "wasted" investigation of known, recurring issues.

Within a support organization, a Problem Management Group (PMG) is often established to carry out this proactive analysis of raw data. The PMG creates scripts and calculations that turn the raw data into a meaningful representation of the data set, identifying areas of concern such as the following (a brief sketch of this kind of analysis appears after the list):

■ Common types of failures

■ Failures within a specific region or location

■ Issues with a specific end-device type or model

■ Recurring issues at a specific time/day

■ Any trends in software or firmware revisions
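As a rough illustration of the scripting a PMG might do, the sketch below tallies closed tickets by failure type, region, device model, and hour of day. The CSV file name and column names are hypothetical; a real script would be adapted to whatever export your ticketing system provides.

```python
import csv
from collections import Counter

# Counters for the trend dimensions discussed above.
failure_types = Counter()
regions = Counter()
models = Counter()
hours = Counter()

# "closed_tickets.csv" and its columns are assumptions for this example.
with open("closed_tickets.csv", newline="") as f:
    for row in csv.DictReader(f):
        failure_types[row["failure_type"]] += 1
        regions[row["region"]] += 1
        models[row["device_model"]] += 1
        hours[row["opened_at"][11:13]] += 1  # hour of day from an ISO timestamp

print("Most common failures:", failure_types.most_common(5))
print("Busiest regions:", regions.most_common(5))
print("Problem device models:", models.most_common(5))
print("Peak hours for incidents:", hours.most_common(5))
```

Even a simple tally like this turns thousands of individual tickets into a short list of candidate problem areas for the support team to review.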

Once the PMG has analyzed the raw data, the results can be relayed to the support team for review so a plan can be formalized to take the appropriate preventative action. The support team then presents the data and its proposed solution, and seeks approval to execute the corrective/preventative steps.

3. Present data in interactive dashboards and business intelligence reports to ensure proper understanding

Not every support team has the benefit of a PMG. In that case, it's important that the system monitoring tools fulfill the role of the PMG analysis and present the data in an easy-to-understand format for the end-user. From a tools perspective, the data analysis can be approached both through interactive dashboards and through business intelligence reports.

Interactive dashboards are a great way of presenting data in a format that caters to all audiences, from administrative and management staff to technical engineers. A combination of graphs (i.e. pie charts, line graphs, etc.) and summarized metrics (i.e. Today, This Week, Last 30 Days, etc.) displays the analyzed data, with filtering capabilities that allow the end-user to view only the desired information, without the interference of analyzed data that may not be applicable to their investigation.
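As a rough illustration of the summarized-metric idea, the sketch below counts tickets falling into "Today," "This Week," and "Last 30 Days" windows from a list of ticket open times. The data structure and window labels are assumptions for the example, not a dashboard product's API.

```python
from datetime import datetime, timedelta

now = datetime.now()

# Hypothetical ticket open times; a real dashboard would pull these from the ticketing system.
ticket_times = [now - timedelta(hours=3),
                now - timedelta(days=2),
                now - timedelta(days=20)]

windows = {
    "Today": timedelta(days=1),
    "This Week": timedelta(days=7),
    "Last 30 Days": timedelta(days=30),
}

summary = {label: sum(1 for t in ticket_times if now - t <= span)
           for label, span in windows.items()}
print(summary)  # e.g. {'Today': 1, 'This Week': 2, 'Last 30 Days': 3}
```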

A more "customizable" approach to raw data analysis is a Business Intelligence Reporting Solution (BIRS). Essentially, the BIRS collects the raw data for the end-user and provides drag-and-drop reporting, so that any desired data elements can be incorporated into a customized on-demand report. What is particularly helpful for the user is the ability to save "filtering criteria" that will be reused repeatedly (i.e. Monthly Business Review Reports).
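Conceptually, a saved filter is nothing more than a reusable set of criteria applied to the same raw data set on demand. The sketch below shows the idea with hypothetical criteria and ticket fields; it is not modeled on any particular BIRS product.

```python
# Hypothetical saved filtering criteria, reusable each month for a business review report.
saved_filters = {
    "Monthly Business Review": {"region": "NYC", "severity": "critical"},
}

tickets = [
    {"id": 101, "region": "NYC", "severity": "critical"},
    {"id": 102, "region": "LON", "severity": "minor"},
]


def run_report(name, tickets):
    """Apply a saved filter to the raw ticket data and return the matching rows."""
    criteria = saved_filters[name]
    return [t for t in tickets
            if all(t.get(field) == value for field, value in criteria.items())]


print(run_report("Monthly Business Review", tickets))  # [{'id': 101, ...}]
```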

With routine errors, the main goal is to stay ahead of them by using data to identify correlations. Through effective event correlation, and by empowering teams with raw data, you can ensure that issues are quickly mitigated and do not put company ROI and system availability at risk.

Collin Firenze is Associate Director at Optanix.
