The Sochi Winter Olympics are officially underway, and NBC will once again provide viewers with live streaming across a multitude of platforms.
On the NBC Olympics page, computer users can enter the user name and password from their cable or digital television provider and watch live video of the events. Mobile viewers can also download the free NBC Live Extra app for iPhone, iPad and Android, which offers live and recorded events as well as on-demand HD video. And for the first time, NBCUniversal will stream video on Facebook as part of a partnership with the social media giant.
The games run until February 23, spanning 10 business days of events. Given the time difference, many of the events will air during normal working hours across the US. As NBC makes it ever easier to bring the Olympics viewing experience to the office, are network operations staff prepared for the potential bandwidth onslaught?
Employees don't always realize the impact they can have on network performance, or understand how watching something as exciting as the cross-country skiing finals could affect their entire company.
Streaming video is an enormous bandwidth hog, consuming far more network resources than almost any other application. At a remote office location, even one person watching live video coverage of the Olympics can bring an entire LAN to a standstill. And it takes no more than a handful of viewers at a large site to slow the network to the point where customers have difficulty accessing the company's Web site, or the quality of Internet-based telecommunications tools (like Skype) degrades.
This problem has only been exacerbated by the influx of personal mobile devices into the enterprise, all of them drawing bandwidth from the corporate wireless network, which is generally more bandwidth-constrained than the fixed-line Ethernet network.
The only way to analyze this traffic so it can be rerouted, or capacity added, is to have full visibility into the network. Here are a few best practices that make success far more likely:
- Baseline your networks BEFORE you need to start "allocating" bandwidth. If you know your normal network needs, you are in a better position to set Quality of Service (QoS) policies that guarantee bandwidth for your mission-critical applications. Most importantly, don't be satisfied with simply knowing the "average" bandwidth required – look across a several-day baseline to see usage by hour, and pay close attention to whether and when you see microburst activity (the applications causing microbursts will most likely be the first ones impacted if your network becomes saturated).
- Since most "non-business web browsing" is likely to happen on Bring Your Own Device (BYOD) hardware, which is nearly universally wireless, think about isolating your wireless network from your mission-critical network, and consider capping the outside bandwidth served to that network.
- Monitor your network closely, and look for signs of issues proactively. High-resolution network visibility tools are critical to ensuring you will see problems before they impact your enterprise.
- Assume you will run into issues, and plan your options for when they occur. If your playbook already contains thought-out, documented responses, you are far more likely to mitigate issues quickly.
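The hourly baselining and microburst check described in the first bullet can be sketched in a few lines. This is an illustrative example, not the output of any particular monitoring product; the sample data, function names, and 5x threshold factor are all assumptions:

```python
from collections import defaultdict

def hourly_baseline(samples):
    """samples: (hour, Mbps) measurements; returns the average Mbps
    observed in each hour across the whole baseline period."""
    buckets = defaultdict(list)
    for hour, mbps in samples:
        buckets[hour].append(mbps)
    return {h: sum(v) / len(v) for h, v in buckets.items()}

def find_microbursts(samples, baseline, factor=5.0):
    """Flag samples whose rate exceeds that hour's average by `factor`."""
    return [(h, mbps) for h, mbps in samples if mbps > factor * baseline[h]]

# Hypothetical multi-day capture: steady ~20 Mbps, one spike at 14:00
samples = [(h, 20.0) for h in range(24) for _ in range(6)]
samples.append((14, 400.0))

baseline = hourly_baseline(samples)
bursts = find_microbursts(samples, baseline)
print(bursts)   # only the 14:00 spike stands out against its hourly average
```

Even a simple per-hour view like this reveals patterns (lunchtime streaming, backup windows) that a single daily average would hide.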
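The bandwidth cap on a guest/BYOD wireless segment suggested in the second bullet is commonly enforced with a token-bucket shaper on the router or firewall. A minimal sketch of the algorithm itself (all names and numbers here are illustrative, not a real device configuration):

```python
class TokenBucket:
    """Minimal token-bucket shaper: traffic passes only while tokens
    (bytes of credit) remain; tokens refill at `rate` bytes/sec up to
    a maximum of `burst`."""

    def __init__(self, rate, burst):
        self.rate = rate        # sustained bytes/sec allowed
        self.burst = burst      # maximum short-term credit, in bytes
        self.tokens = burst
        self.last = 0.0

    def allow(self, nbytes, now):
        """Return True if a packet of `nbytes` arriving at time `now`
        (seconds) fits under the cap."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# Cap a segment at 1,000 bytes/sec sustained with a 500-byte burst allowance
bucket = TokenBucket(rate=1000, burst=500)
print(bucket.allow(400, now=0.0))   # True  - within the burst allowance
print(bucket.allow(400, now=0.0))   # False - credit exhausted
print(bucket.allow(400, now=1.0))   # True  - credit refilled after a second
```

Production shapers (e.g., a token bucket filter on a router) work the same way, just in hardware or kernel space and with much larger rates.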
ABOUT Mike Heumann
Mike Heumann is Senior Director of Marketing, Endace portfolio, Emulex. In this role, he manages outbound marketing, including messaging, go-to-market coordination, and worldwide channel field marketing. Heumann brings more than 30 years of experience to Emulex, including 15 years of data center networking and storage networking expertise in marketing, sales, product management and engineering roles. Prior to Emulex, he served as SVP of sales and marketing at NextIO and Astute Networks, VP of marketing at Dot Hill Systems, director of product management and marketing at JNI/AMCC, and project leader at Sony. He has also held engineering positions at Stac, National Dispatch Center and Horizon Technology Group. Heumann holds a Bachelor of Science degree in electrical engineering from the University of Wisconsin, and a Master of Science degree in industrial/organizational psychology from Purdue University. He also completed the Stanford Executive Institute program.
Related Links:
Read Mike Heumann's first blog on the Vendor Forum: Reducing the Risks Associated with Deploying New Network-Centric Applications