IT Can't Afford to be Static - It's Time to Automate Visibility
November 06, 2015

Ananda Rajagopal
Gigamon


As any network administrator can tell you, network traffic doesn't stand still: it is constantly changing and growing in complexity. The demands placed on networks by new technology, customers, mobility, and other factors are forcing IT to build infrastructure that is more agile and dynamic than ever before. Amid this barrage of new challenges, three major trends in particular make it difficult to gain visibility into networks: the increased adoption of virtualized infrastructure, enterprise mobility and the rise in encrypted traffic.

Virtualization and the software-defined networking (SDN) approaches associated with it have created tremendous change in the data center, while mobility and encryption have created blind spots in infrastructure that traditional monitoring tools do not recognize. Compounding the problem, network administrators must also support their organization's cybersecurity initiatives, which require full visibility into the infrastructure. Simply put, network administrators need to see every packet to guarantee the performance and security of their networks, but the accelerated rate of change, and the complexity it has wrought, have made that nearly impossible.

Since networks and infrastructure are constantly changing, the methods used to gain visibility into them cannot afford to be static. Done well, visibility shines light on blind spots, enables detection of anomalous behavior and gives administrators the power to fix network and application issues before end users ever notice them. But giving administrators the power to be proactive is not enough in today's complex environment. It is no longer enough to point to a network bottleneck or send an alert about a spike in bandwidth demand; visibility must be automated so that the information is shared instantly. Manual intervention is a point of failure for network operations and security operations teams, and it can be eliminated if the tools we use for visibility are designed to take action.
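To make the "take action instead of alerting" idea concrete, here is a minimal sketch in Python. It is purely illustrative: the function names, the 800 Mbps threshold, and the actions are assumptions, not any vendor's API. The point is that when a measurement crosses a threshold, registered actions fire immediately, with no human in the loop.

```python
# Hypothetical sketch: automated action replacing a manual alert.
# All names and the 800 Mbps threshold are illustrative assumptions.

THRESHOLD_MBPS = 800.0

def on_bandwidth_sample(link, mbps, actions):
    """Invoke every registered action when utilization spikes."""
    if mbps > THRESHOLD_MBPS:
        for act in actions:
            act(link, mbps)

taken = []  # record of automated actions, for demonstration

def mirror_to_analyzer(link, mbps):
    # e.g. start mirroring this link's traffic to an analysis tool
    taken.append(("mirror", link))

def notify_secops(link, mbps):
    # e.g. open a ticket with the security operations team
    taken.append(("notify", link))

on_bandwidth_sample("uplink-1", 950.0, [mirror_to_analyzer, notify_secops])
on_bandwidth_sample("uplink-2", 120.0, [mirror_to_analyzer, notify_secops])
print(taken)  # only the spiking link triggered actions
```

The design choice worth noting is that responses are registered as callbacks rather than hard-coded, so operations and security teams can each attach their own actions to the same visibility event.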

To automate visibility, we must architect it as a critical layer of infrastructure. Designed this way, a visibility layer empowers the administrator to intelligently deliver any portion of network traffic to as many appliances and tools as need to monitor and analyze it, using policies to select the specific traffic delivered to each tool. This architectural approach has the additional benefit of abstracting the operational tools used to secure and manage a network from the specifics of the underlying network: once such a layer exists, all security and operational tools can access critical network traffic from anywhere in the infrastructure. Further, when the intelligence derived from visibility is united with the rest of the network and security infrastructure, policy management itself can be automated, so that tools programmatically control the information they receive from the Visibility Fabric. Such automation improves responsiveness and effectiveness, simplifies tasks and establishes a framework for continuous monitoring and analytics of the infrastructure.
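The policy-driven delivery described above can be sketched in a few lines of Python. This is a toy model under stated assumptions, not a real product interface: packets are plain dictionaries with `src` and `port` fields, and the policy fields (`tool`, `subnet`, `ports`) are hypothetical names. It shows the two essential properties: policies select specific traffic for each tool, and one packet can feed several tools at once.

```python
# Minimal sketch of policy-based traffic selection for a visibility layer.
# Packets are dicts with "src", "dst" and "port"; all names are hypothetical.

from dataclasses import dataclass, field
from ipaddress import ip_address, ip_network


@dataclass
class Policy:
    """Selects traffic for one monitoring or security tool."""
    tool: str                                 # tool this policy feeds
    subnet: str = "0.0.0.0/0"                 # match source addresses here
    ports: set = field(default_factory=set)   # empty set = match any port

    def matches(self, pkt: dict) -> bool:
        in_subnet = ip_address(pkt["src"]) in ip_network(self.subnet)
        port_ok = not self.ports or pkt["port"] in self.ports
        return in_subnet and port_ok


def dispatch(pkt: dict, policies: list) -> list:
    """Return the tools that should receive a copy of this packet."""
    return [p.tool for p in policies if p.matches(pkt)]


policies = [
    Policy(tool="ids", ports={80, 443}),       # web traffic to the IDS
    Policy(tool="apm", subnet="10.0.0.0/8"),   # internal traffic to the APM tool
]

print(dispatch({"src": "10.1.2.3", "dst": "8.8.8.8", "port": 443}, policies))
```

An internal HTTPS packet here matches both policies and is delivered to both tools, while traffic matching neither policy is delivered nowhere; automating policy management would mean letting the tools themselves add or adjust entries in `policies` programmatically.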

Technology will continue to be transformative – in the data center and beyond. No one can afford to sit still in this environment, least of all IT departments. Automating visibility is a critical step in getting control of the dramatic changes affecting infrastructure, and one that should be taken sooner rather than later – the next big challenge is likely right around the corner.

Ananda Rajagopal is VP of Product Management at Gigamon.
