Best Practices for Modeling and Managing Today's Network - Part 1
August 22, 2016

Stefan Dietrich
Glue Networks


The challenge today for network operations (NetOps) is how to maintain and evolve the network while demand for network services continues to grow. Software-Defined Networking (SDN) promises to make the network more agile and adaptable. Various solutions exist, yet most are missing a layer to orchestrate new features and policies in a standardized, automated and replicable manner while providing sufficient customization to meet enterprise-level requirements.

NetOps teams often work with wide area networks ("WANs") that are geographically diverse, use a plethora of technologies from different service providers, and are feeling the strain of growing video and cloud application traffic. Hybrid WAN architectures with advanced application-level traffic routing are of particular interest: they combine the reliability of private lines for critical business applications with the cost-effectiveness of broadband/Internet connectivity for non-critical traffic.

Here's the issue: many of the network management tools available today are insufficient to deploy such architectures at scale over the existing network. Most still push blocks of configuration data to network devices to enable individual features, which together implement an overall network policy. To accommodate differences in hardware and OS/firmware levels, these configuration scripts use "wildcards" that are replaced with device-specific values. The scripts are heavily tested, carefully curated and subject to stringent change management procedures, because the tiniest mistake can bring a network down, resulting in potentially disastrous business losses.
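To make the wildcard-template approach concrete, here is a minimal sketch using Python's built-in string templating; the interface name and addressing values are hypothetical placeholders, and production templates would be far more extensive and heavily tested.

from string import Template

# A device-configuration template with "wildcards" ($-placeholders)
# standing in for device-specific values.
WAN_TEMPLATE = Template("""\
interface $wan_interface
 description Primary WAN uplink
 ip address $wan_ip $wan_mask
 no shutdown
""")

# Per-device values substituted into the wildcards at deployment time.
device_vars = {
    "wan_interface": "GigabitEthernet0/0",
    "wan_ip": "192.0.2.10",
    "wan_mask": "255.255.255.252",
}

# The rendered block is what would then be pushed to the device.
print(WAN_TEMPLATE.substitute(device_vars))

Even with careful curation, every device family or OS version that needs slightly different syntax forces another template variant, which is exactly the maintenance burden that follows.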

NetOps teams are seeing first-hand how inadequate this approach is as they deploy hybrid WAN architectures and application-specific routing. Even if the existing hardware already supports all the required functionality, existing network configurations that reflect past user requirements are rarely well understood. Because each business unit brings its own requirements to ensure that its applications run optimally, networks need to be continuously updated and optimized. Such tasks range from simple adjustments of configuration parameters to more complex changes of the underlying network architecture, such as removing or installing upgraded circuits, replacing hardware or even deploying new network architectures.

In these instances, senior network architects must be relied upon to assess the risk of unintended consequences for the existing network, but waiting for the next change maintenance window may no longer be an acceptable option. Businesses are not concerned with the details; they want the network to simply "work."

Moving Forward: the Ideal vs. the Real

What needs to happen in order for the network to simply work? Traditional network management tools are mature and well understood. Network architects and implementation teams are familiar with them, including all of their limitations and difficulties, and any potential change to these tools is immediately weighed against the additional learning curve it requires versus the potential benefits in managing the network.

An ideal situation would be one in which network policies are defined independently of implementation or operational concerns. It starts with mapping the required functionality into logical models, assembling these models into one overall network policy, verifying interdependencies and inconsistencies, and deploying and maintaining them consistently throughout the network life cycle.
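As an illustration of what defining policy as logical models could look like, the sketch below uses plain Python dataclasses; the class names, fields and values are assumptions for illustration, not an existing product's data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class WanLink:
    name: str
    transport: str              # e.g. "mpls" or "internet"

@dataclass
class AppRoutingPolicy:
    app_name: str
    preferred_transport: str    # transport the application should use
    fallback_transport: str

@dataclass
class NetworkPolicy:
    links: List[WanLink] = field(default_factory=list)
    app_policies: List[AppRoutingPolicy] = field(default_factory=list)

    def inconsistencies(self) -> List[str]:
        """Flag application policies that ask for a transport no link provides."""
        available = {link.transport for link in self.links}
        return [
            f"{p.app_name}: no '{p.preferred_transport}' link available"
            for p in self.app_policies
            if p.preferred_transport not in available
        ]

# Assemble individual models into one overall policy and verify it.
policy = NetworkPolicy(
    links=[WanLink("wan1", "mpls"), WanLink("wan2", "internet")],
    app_policies=[AppRoutingPolicy("voice", "mpls", "internet")],
)
print(policy.inconsistencies())   # an empty list means no inconsistencies detected

The point is that the policy is data, not device syntax: the same models can be verified before deployment and then rendered into whatever configuration each device family requires.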

The current situation is less than ideal, though. The industry has launched a variety of activities to improve network management, but those initiatives are still maturing. For example, YANG is a data modeling language for the NETCONF network configuration protocol. OpenStack Networking (Neutron) provides an extensible framework to manage networks and IP addresses within the larger realm of cloud computing, focusing on network services such as intrusion detection systems (IDS), load balancing, firewalls and virtual private networks (VPN) to enable multi-tenancy and massive scalability. But neither approach can proactively detect interdependencies or inconsistencies, and both require network engineers to dive into programming, for example, to manage data entry and storage.
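To show where these building blocks sit in practice, the sketch below pushes configuration expressed against the standard ietf-interfaces YANG model over NETCONF, assuming the open-source ncclient Python library and a NETCONF-enabled device; the host, credentials and interface name are placeholders. It also illustrates the point above: the engineer is still working at the level of sessions and XML payloads rather than network policy.

from ncclient import manager

# Configuration data modeled by the standard ietf-interfaces YANG module.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet0/0</name>
      <description>Primary WAN uplink</description>
    </interface>
  </interfaces>
</config>
"""

# Open a NETCONF session and apply the YANG-modeled data to the device.
with manager.connect(host="192.0.2.1", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as m:
    m.edit_config(target="running", config=CONFIG)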

It makes sense, then, that some vendors are offering fully integrated solutions, built on appliances managed through a proprietary network management tool. This model allows businesses to deploy solutions quickly, at the cost of additional training, limited capability for customization and new hardware purchases.

In order for transformation to occur, the focus of new network management capabilities needs to be on assembling complete network policies from individual device-specific features, detecting inconsistencies and dependencies, and allowing deployment and ongoing network management. Simply updating wildcards in custom configuration templates and deploying them onto devices is no longer sufficient.
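A hedged sketch of what such dependency and inconsistency detection could look like: each feature declares what it requires and what it conflicts with, and the tool checks the assembled policy before anything touches a device. The feature names and relationships below are hypothetical examples.

# Hypothetical feature catalog: what each feature requires and conflicts with.
FEATURES = {
    "app_routing":    {"requires": {"netflow", "bgp"}, "conflicts": set()},
    "netflow":        {"requires": set(),              "conflicts": set()},
    "bgp":            {"requires": set(),              "conflicts": {"static_routing"}},
    "static_routing": {"requires": set(),              "conflicts": {"bgp"}},
}

def check(selected):
    """Report missing dependencies and mutually exclusive features."""
    issues = []
    for feature in selected:
        missing = FEATURES[feature]["requires"] - selected
        if missing:
            issues.append(f"{feature}: missing dependencies {sorted(missing)}")
        clashes = FEATURES[feature]["conflicts"] & selected
        if clashes:
            issues.append(f"{feature}: conflicts with {sorted(clashes)}")
    return issues

print(check({"app_routing", "netflow", "bgp"}))         # [] -> consistent
print(check({"app_routing", "bgp", "static_routing"}))  # flags missing netflow and the bgp/static conflict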

As needs and technologies shift and evolve, network architectures or routing protocols may need to be changed on live production networks. Managing such changes at large scale is difficult or even infeasible, especially in large organizations where any change must first be validated by other teams, such as security, creating unacceptable delays for implementation.

To find out more about solving these network operations challenges, read Best Practices for Modeling and Managing Today's Network - Part 2

Dr. Stefan Dietrich is VP of Product Strategy at Glue Networks.
