Lack of Network History Costs Large Enterprises Millions of Dollars Each Year, Survey Says
November 14, 2012

Large organizations estimate that the cost of network downtime ranges from hundreds of thousands to millions of dollars per hour, according to Endace's 2012 Network Visibility Survey.

The survey findings highlight the operational challenges facing IT teams as they come to terms with the latest high-speed, network-centric technologies such as cloud, unified communications and virtual desktop infrastructure (VDI).

Highlights from the survey, which is based on more than 100 interviews with senior network IT professionals from organizations with 5,000 to 200,000 employees, include the following about the current state of operational effectiveness:

- 23 percent of organizations experience serious service-affecting problems on a daily basis

- An additional 25 percent admit to experiencing serious network issues each month

- Organizations’ hardest network problems can take 30 days or more to rectify, making maximum time-to-resolution (MAX-TTR) an expensive issue for large, resource-constrained organizations

- Organizations can have up to 250 performance-related trouble tickets open at any given time, with half of respondents reporting that at least 50 percent of their trouble tickets stay open for more than 24 hours

- Nearly 40 percent of respondents noted that they do not know which applications are in use on their network, while 53 percent admit that employees use applications that violate IT policies

- Despite an abundance of monitoring tools, nearly 30 percent of organizations do not have a clear understanding of bandwidth utilization, which makes troubleshooting end-user issues extremely challenging

Minimizing time-to-resolution on all types of service-affecting issues has understandably become a top priority for organizations, putting IT operations teams squarely in the spotlight.

Comments from survey respondents confirmed that processes for diagnosing and remediating difficult issues are often ad hoc.

“Most IT shops have invested heavily in detection technologies that alert on issues and correlation technologies that attempt to filter and triage the most important issues. But what we’ve learned from this study is that many shops still face long resolution times far too often,” said Spencer Greene, senior vice president of marketing and product management at Endace.
