Application and network downtime is expensive. Given the growing number and variety of high-availability and mission-critical applications, systems and networks, and our increasing reliance on them, ensuring consistent access to those applications is essential for retaining customers and keeping employees productive. Businesses must recognize that application availability depends on the network and implement a strategy for network-aware application performance monitoring.
As most enterprises go cloud-first and cloud-smart, a key component of full network-aware application and security monitoring is eliminating blind spots in the public cloud. A good network visibility solution must reliably monitor traffic across an organization's current and future hybrid network architecture, with physical, virtual and cloud-native elements deployed across data centers, branch offices and multi-cloud environments.
Unfortunately for IT teams, until mid-2019 every major public cloud platform was a black box in this respect. Companies could have rich insight into network and application performance across their private data center network, as well as into and out of the cloud, but what happened inside the cloud itself was a mystery. This made application performance monitoring and security assurance difficult, and porting on-premises investigation and resolution workflows to the cloud virtually impossible.
Companies worked around this lack of visibility with a variety of compromised methods, including deploying traffic-forwarding agents (or container-based sensors) and using log-based monitoring. Both have limitations. Feature-constrained forwarding agents and sensors must be deployed for every instance and every tool, a costly IT management headache, or the organization risks blind spots and inconsistent insight. Event logging must be well planned and instrumented in advance, and can only prepare for anticipated issues as snapshots in time. Neither provides the continuous, high-fidelity data, such as packet data, with the depth needed to troubleshoot complex application, security or user-experience issues.
To solve this problem, public clouds such as AWS and Google Cloud have introduced game-changing features over the last year, notably VPC traffic/packet mirroring, that significantly improve IT departments' ability to monitor cloud deployments.
Microsoft Azure had introduced a virtual TAP feature for the same purpose, but it is on hold for now. These capabilities are worth a closer look to assess what they mean for network and application management, and for security use cases.
In mid-2019 Amazon, followed by Google Cloud, introduced traffic mirroring (packet mirroring, in Google's case) as part of their respective Virtual Private Cloud (VPC) offerings. Simply stated, traffic mirroring duplicates network traffic to and from the customer's applications and forwards it to cloud-native performance and security monitoring tools for assessment. This eliminates the need to deploy ad-hoc forwarding agents or sensors in each VPC instance for every monitoring tool and reduces complexity. Compared to log data, it delivers the much richer and deeper situational awareness needed for network and application monitoring and security investigations. The result is simplicity, elasticity and cost savings.
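As a concrete illustration, the sketch below shows roughly what enabling AWS Traffic Mirroring looks like with the boto3 SDK. The region, ENI IDs and descriptions are placeholders, and the wide-open accept-all filter is for demonstration only; a real deployment would scope it to the subnets and ports of interest.

```python
# Minimal sketch: setting up AWS VPC Traffic Mirroring with boto3.
# ENI IDs below are placeholders for a monitored workload and a
# monitoring/broker appliance in your own VPC.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Register the monitoring appliance's network interface as a mirror target.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0123456789abcdef0",  # placeholder: broker/tool ENI
    Description="Packet broker ingest interface",
)["TrafficMirrorTarget"]

# 2. Create a filter; here it accepts all ingress and egress traffic.
flt = ec2.create_traffic_mirror_filter(
    Description="Accept-all filter (demo only)",
)["TrafficMirrorFilter"]

for direction in ("ingress", "egress"):
    ec2.create_traffic_mirror_filter_rule(
        TrafficMirrorFilterId=flt["TrafficMirrorFilterId"],
        TrafficDirection=direction,
        RuleNumber=100,
        RuleAction="accept",
        SourceCidrBlock="0.0.0.0/0",
        DestinationCidrBlock="0.0.0.0/0",
    )

# 3. Start mirroring the source workload's ENI to the target.
session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0fedcba9876543210",  # placeholder: monitored ENI
    TrafficMirrorTargetId=target["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=flt["TrafficMirrorFilterId"],
    SessionNumber=1,  # priority when multiple sessions share a source
    Description="Mirror app traffic to packet broker",
)["TrafficMirrorSession"]

print("Mirror session:", session["TrafficMirrorSessionId"])
```

Google Cloud's packet mirroring is configured along similar lines (via `gcloud compute packet-mirrorings create`), with an internal load balancer fronting the collector instances.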
Traffic or packet mirroring isn't enough on its own, however. Like the agent or sensor approach, it simply provides access to raw packet data (the equivalent of TAPs in the physical world), which is not ready to feed directly into monitoring and security tools. The complete solution pairs traffic mirroring with cloud-based virtual packet brokering, packet capture, flow generation and analytics middleware. This adds value in a variety of ways.
In Amazon or Google Cloud, a virtual/cloud packet broker can multiply the value of VPC-mirrored traffic through pre-processing operations such as header stripping, filtering, deduplication and load balancing of the traffic feeds to cloud-native tools, which saves costs by forwarding the right data to the right tools.
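The sketch below is a simplified model, not any specific product's API, of the essence of those broker operations on mirrored packets; the tool names, header lengths and hashing scheme are illustrative assumptions.

```python
# Illustrative packet-broker pipeline: strip outer encapsulation, drop
# duplicates, then distribute packets across downstream tool instances.
import hashlib
from typing import Optional

TOOLS = ["npm-tool", "ndr-tool", "apm-tool"]  # hypothetical downstream tools

# AWS-mirrored traffic arrives VXLAN-encapsulated: outer Ethernet (14) +
# IPv4 (20) + UDP (8) + VXLAN (8) headers precede the inner frame.
OUTER_HEADERS_LEN = 50

seen_digests: set[bytes] = set()  # naive cache; real brokers use a time window

def strip_encapsulation(packet: bytes) -> bytes:
    """Header stripping: remove the outer VXLAN stack, keep the inner frame."""
    return packet[OUTER_HEADERS_LEN:]

def deduplicate(packet: bytes) -> Optional[bytes]:
    """Drop packets already seen (e.g., mirrored at two points in the path)."""
    digest = hashlib.sha256(packet).digest()
    if digest in seen_digests:
        return None
    seen_digests.add(digest)
    return packet

def load_balance(packet: bytes) -> str:
    """Hash-based spraying across tools; a real broker would hash the
    5-tuple so every packet of a flow lands on the same tool instance."""
    h = int.from_bytes(hashlib.sha256(packet[:64]).digest()[:4], "big")
    return TOOLS[h % len(TOOLS)]

def broker(packet: bytes) -> Optional[tuple[str, bytes]]:
    inner = deduplicate(strip_encapsulation(packet))
    if inner is None:
        return None  # duplicate filtered out, saving tool capacity
    return load_balance(inner), inner
```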
In Azure, a virtual packet broker that supports an "inline mode" can be a viable alternative to VPC traffic mirroring or agent-based mirroring features. One or more feeds from the packet broker can go to a packet-to-flow gateway tier that generates flow data such as NetFlow/IPFIX for tools that prefer it. A virtual/cloud packet capture tier can also take a feed from the packet broker and record interesting data to cloud storage for later retrieval, playback and analysis. This is particularly useful for security-centric Network Detection and Response, forensics and incident response.
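To make the packet-to-flow idea concrete, here is a minimal sketch of how such a gateway might aggregate parsed packet headers into 5-tuple flow records, the data model behind NetFlow/IPFIX. Packet parsing and the actual export wire format are omitted, and the timeout value is an assumption.

```python
# Minimal packet-to-flow aggregation keyed on the classic 5-tuple.
import time
from dataclasses import dataclass, field

@dataclass
class FlowRecord:
    packets: int = 0
    bytes: int = 0
    first_seen: float = field(default_factory=time.time)
    last_seen: float = field(default_factory=time.time)

# key: (src_ip, dst_ip, src_port, dst_port, protocol)
flows: dict[tuple, FlowRecord] = {}
ACTIVE_TIMEOUT = 60.0  # export long-lived flows periodically, like a real exporter

def observe(src_ip, dst_ip, src_port, dst_port, proto, length):
    """Fold one parsed packet header into its flow record."""
    key = (src_ip, dst_ip, src_port, dst_port, proto)
    rec = flows.setdefault(key, FlowRecord())
    rec.packets += 1
    rec.bytes += length
    rec.last_seen = time.time()

def export_expired():
    """Yield and evict flows past the active timeout; a real gateway
    would encode these as NetFlow/IPFIX records and send them out."""
    now = time.time()
    expired = [k for k, r in flows.items() if now - r.first_seen > ACTIVE_TIMEOUT]
    for key in expired:
        yield key, flows.pop(key)
```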
While most of the value added on top of cloud traffic mirroring (inline or not) involves data or network intelligence delivery, more comes from correlating and analyzing the data to produce something meaningful, useful and actionable. This is where the rich network analytics tier comes in. These tools consume the fine-grained metadata extracted by the middleware above and turn it into visualizations and dashboards that enable IT NetOps, SecOps, AppOps and CloudOps teams to do their jobs effectively. The high-quality metadata can also be exported to other tools, such as threat detection, behavioral analytics and service monitoring solutions, to increase their effectiveness. Features such as baselining, application dependency mapping and automated alerting, coupled with artificial intelligence (AI) and machine learning (ML) capabilities, add the ultimate value for today's demanding ITOps teams as they head toward AIOps.
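As a rough sketch of what baselining with automated alerting means in practice, the following assumes a single metric (bytes per second for one application) sampled at fixed intervals. The window size and z-score threshold are illustrative; production analytics tiers typically use seasonal or ML-based models instead of a simple z-score.

```python
# Rolling baseline with threshold alerting on one traffic metric.
import statistics
from collections import deque

WINDOW = 288          # e.g., one day of 5-minute samples
Z_THRESHOLD = 3.0     # alert when a sample is > 3 std devs from baseline
MIN_SAMPLES = 30      # need enough history to trust the baseline

history: deque[float] = deque(maxlen=WINDOW)

def check(sample_bps: float) -> bool:
    """Return True (and alert) if the sample deviates from the baseline."""
    alert = False
    if len(history) >= MIN_SAMPLES:
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
        if abs(sample_bps - mean) / stdev > Z_THRESHOLD:
            alert = True
            print(f"ALERT: {sample_bps:.0f} B/s vs baseline {mean:.0f} B/s")
    history.append(sample_bps)
    return alert
```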
In summary, a cohesive hybrid visibility suite that integrates with the new VPC traffic mirroring capabilities offered by the leading cloud providers lets organizations use a consistent mix of tools, workflows, data and insight when managing hybrid environments (the proverbial "single pane of glass"). The ability to gather the same deep insights across both private and public infrastructure is a game changer for application and network performance monitoring and security. Black boxes shouldn't exist in corporate networks, which makes fully network-aware public cloud monitoring a welcome change. It simplifies network and application performance management and speeds up mean time to resolution, ultimately enhancing end-user experience and reducing customer churn by de-risking IT infrastructure and operations.