One Way to Improve Hospital Application Performance
August 16, 2018

Keith Bromley
Ixia


IT organizations are constantly trying to optimize operations and troubleshooting activities, and for good reason. Once established, end users' perception of "slowness" can be hard to shake. According to research that Enterprise Management Associates (EMA) performed in late 2016, 41% of organizations surveyed spend more than 50% of their time responding to network and application performance problems.

This is obviously a large drain on time and energy. It can also be an unwanted high-profile activity. Let's look at one example from the medical industry. Networked applications, such as electronic medical records (EMR), are vital for hospitals to provide outstanding service to their patients and physicians. Keeping these applications healthy calls for 24x7 application transaction monitoring, packet storage, and network analysis, along with integrated software add-ons for dependency mapping, SNMP reporting, database monitoring, and pre-deployment application testing.

However, the networking team is often unaware of slow response times on the remotely hosted EMR application until a physician or someone else calls in to complain. Once the problem is reported, it takes time to get troubleshooting equipment into place to determine whether the root cause lies in the application or the network.

A simple solution to the problem is to add taps and network packet brokers (NPBs). Taps can be inserted anywhere in the network. They make a complete copy of all traffic at that point in the data flow and pass it to your monitoring tools (or NPB), improving both the quality of the monitoring data and the speed of data acquisition. Once installed, a tap is a permanent, passive device that gives you constant and consistent access to your critical monitoring data. This means that in most cases, you don't have to ask the Change Board for permission to touch the network again. You touch it once to install the tap, and then you are done.

Next, you will want to deploy an NPB between those taps and the security and monitoring tools to optimize the data sent to the tools. You can plug whatever tools you want into the NPB (and unplug them) with no impact to the network. With the NPB, you can perform data filtering, deduplication, packet slicing, header stripping, and many other functions to optimize the data before it is sent to your application performance management (APM) tools. It is not uncommon for 50% or more of the monitoring traffic to be duplicate packets, especially if you are pulling data from a SPAN port (instead of a tap) or have overlapping data taps in your architecture. Duplicate packets are bad because they decrease the processing efficiency of your APM tool. They also consume on-board storage capacity that could otherwise hold useful packet data (essential for "back in time" analysis).
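To make two of these NPB functions concrete, here is a minimal software sketch of hash-based packet deduplication and packet slicing. Real NPBs do this in hardware at line rate; the function names, the sliding-window size, and the 128-byte slice length below are illustrative assumptions, not part of any specific product.

```python
import hashlib

def deduplicate(packets, window=1024):
    """Drop duplicate packets whose raw bytes match a recently seen packet.

    Duplicates commonly arise from overlapping taps or SPAN ports capturing
    the same frame twice. Sketch only: hashes each packet and remembers the
    most recent `window` digests.
    """
    seen = []      # ordered list of recent packet digests
    unique = []
    for pkt in packets:
        digest = hashlib.sha256(pkt).hexdigest()
        if digest in seen:
            continue          # same bytes already forwarded once
        seen.append(digest)
        if len(seen) > window:
            seen.pop(0)       # expire the oldest digest
        unique.append(pkt)
    return unique

def slice_packet(pkt, keep=128):
    """Packet slicing: keep only the first `keep` bytes (the headers),
    cutting storage and tool load when payloads are not needed."""
    return pkt[:keep]
```

For example, feeding the same frame in twice yields a single copy downstream, and slicing a 1,500-byte frame to 128 bytes retains the Ethernet/IP/TCP headers that APM tools typically need while discarding the payload.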

Just by implementing taps and NPBs, it is possible to reduce your mean time to repair (MTTR) by up to 80%. A significant portion of that time reduction comes from the reduction (and probable elimination) of Change Board approvals.

In the end, some of the benefits of this type of monitoring architecture include:

■ Complete network traffic visibility for application performance analysis

■ Faster troubleshooting that reduces problem isolation from days to hours

■ Proactive observation and resolution of issues before they become problems

■ Improvements in the efficiency of your APM system by removing duplicate packets

■ The elimination of interference with other departments' network probes, thanks to SPAN and tap sharing

■ Easy access to data to perform application performance trending

■ A reduction in your MTTR and improvement in other key performance indicators, driven by focused root cause analysis

Keith Bromley is Senior Manager, Solutions Marketing at Ixia Solutions Group, a Keysight Technologies business.
