Network (In)Visibility Leads to IT Blame Game
Time for IT Managers to Take Back Control
August 19, 2014

Mike Heumann


Significant changes in the structure and use of IT, including such seismic trends as Bring Your Own Device (BYOD), virtualization and cloud computing, have introduced new challenges for IT administrators and staff. These added layers of complexity demand new skill sets, knowledge bases and tools to run a modern enterprise network effectively. This raises the question of how IT teams are coping with the changes.

It appears that IT teams are struggling to gain visibility into the root causes of IT problems, and in many cases are not implementing monitoring tools that could help. In an Emulex survey of 547 US and European network and security operations (NetOps and SecOps) professionals conducted in the spring of 2014, 77% of respondents said they had inaccurately reported the root cause of a network or security event to their executive team on at least one occasion. Additionally, 73% of surveyed IT staff said they currently have unresolved network events.

With more than half of US respondents (52%) confirming that a network outage or performance degradation costs their organization more than half a million dollars in revenue per hour, you would expect identifying and resolving unresolved network events to be a critical priority for IT organizations. That is not the case: our survey revealed that 45% of organizations are still manually monitoring their networks.

With the flood of “unknown” devices resulting from BYOD (potentially hundreds or thousands of new devices daily), it seems nearly impossible for IT teams to determine the root cause of network or security events without automated network surveillance tools. Startlingly, more than a quarter (26%) of European respondents said they have no plans to monitor the network for performance issues related to BYOD.

As a result of this lack of visibility, 79% of organizations have experienced network events that were attributed to the wrong IT group. This creates an “IT blame game” in which departments have to spend cycles proving their innocence, rather than getting to the root cause of network events and fixing them. If this trend continues, in tandem with increased virtualization and device proliferation, it will almost certainly lead to more outages and lost revenue.

It is also interesting to note that 83% of respondents said there has been an increase in the number of security events they have investigated in the past year. What will it take to make IT teams realize that without 100% visibility across their networks, the business is in jeopardy? The time is now for IT managers to take back control.

Mike Heumann is VP of Product Marketing and Alliances at Emulex.
