Even as the federal government appears to have untangled the worst of HealthCare.gov's problems, the finger-pointing and agonizing over what went wrong with the Affordable Care Act's centerpiece website are unlikely to die down any time soon.
The political dimensions aside, there's a persistent curiosity about how such a high-profile project could have failed so spectacularly. It was possibly the world's most important IT project of the moment, yet it performed as if it were rolled out the door without so much as a cursory kick of the tires.
That's because it probably was – and that's far from unusual.
A recent LinkedIn/Empirix survey found that pre-deployment testing is half-hearted at best and non-existent at worst: public agencies and private companies alike have abysmal records for testing customer-facing IT projects, such as customer service and e-commerce portals.
This is despite the importance that most organizations place on creating a consistently positive customer experience; almost 60 percent of the contact center executives interviewed for Dimension Data's 2012 Contact Center Benchmarking Report named customer satisfaction as their most important metric.
It's not that IT doesn't test anything before rolling out a project. It's that they don't test the system the way customers will interact with it. They test the individual components — web interfaces, fulfillment systems, interactive voice response (IVR) systems, call routing systems — but not the system as a whole under real-world loads. That all but guarantees customers will encounter problems that reflect on the company or agency.
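To make the distinction concrete, here is a minimal sketch of what end-to-end testing looks like, assuming a hypothetical enrollment portal (the URLs and journey steps are illustrative, not any real system's endpoints). Rather than exercising one component at a time, it pushes hundreds of simulated customers through the complete journey concurrently and reports how many journeys failed and how slow the worst one was:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical endpoints standing in for the stages of one customer journey;
# a real test would exercise the actual portal, fulfillment and IVR systems.
JOURNEY = [
    "https://portal.example.com/plans",        # browse available plans
    "https://portal.example.com/eligibility",  # run the eligibility check
    "https://portal.example.com/enroll",       # submit the enrollment
]

def run_journey(user_id: int) -> tuple[float, bool]:
    """Walk one simulated customer through every step, end to end."""
    start = time.monotonic()
    ok = True
    for url in JOURNEY:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                ok = ok and resp.status == 200
        except Exception:
            ok = False  # any failed step fails the whole journey
    return time.monotonic() - start, ok

if __name__ == "__main__":
    CONCURRENT_USERS = 500  # size this to projected peak, not off-peak traffic
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(run_journey, range(CONCURRENT_USERS)))
    failed = sum(1 for _, ok in results if not ok)
    slowest = max(t for t, _ in results)
    print(f"{failed}/{len(results)} journeys failed; slowest took {slowest:.1f}s")
```

The tooling matters less than the shape of the test: every step a real customer would take, all at projected peak concurrency, measured as a single transaction.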
Empirix and LinkedIn surveyed more than 1,000 executives and managers in a variety of industries. The survey asked how companies:
- tested new customer contact technology before it was implemented
- evaluated the voice quality of customer/service agent calls
- monitored overall contact center performance to maintain post-implementation quality
The results are a study in contradictions. At first glance, pre-deployment testing rates look high — 80 percent or better — but the numbers are far less impressive than they appear.
In truth, the overall picture isn't good. More than 80 percent of respondents said their companies do not test contact center technology under real-world conditions before go-live. They do some form of testing, but it is not comprehensive enough to reveal the issues that actually affect customer service.
Companies are somewhat better about testing upgrades to existing systems: 82 percent reported doing so. There's grade inflation in that number, however: 62 percent rely on comparatively inaccurate manual testing methods.
While better than not testing at all, manual testing does not accurately reflect real-world conditions. Manual tests usually run during off-peak hours, which reveals little about how systems will behave at full capacity. And because manual testing is difficult to repeat, it is usually done only once or twice, which makes it harder to pinpoint problems — and confirm they are resolved — even when they are detected before deployment.
Another 20 percent don't test new technology at all; they just "pray that it works" (14 percent) or react to customer complaints (3 percent). The remaining 3 percent test only major upgrades — a flawed rationale that earns them a place among the non-testers. A small change can erode performance or crash a system just as easily as a major upgrade can. In fact, small upgrades can create performance drags that are harder to pinpoint because, unlike large upgrades, they do not command the IT organization's full attention.
Only about 18 percent of respondents said their companies use automated testing for all contact center upgrades — the second-largest block after the manual testers, but a small share overall. These companies use testing software to evaluate new functionality, equipment, applications and system upgrades under realistic traffic conditions. This approach yields the most accurate results and the fastest understanding of where and why problems occur.
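The real advantage of automation is repeatability. The same scripted load can be ramped in steps toward projected peak and re-run identically after every upgrade, so each run can be compared against the last. Here is a minimal sketch, again with a hypothetical target URL and ramp levels that would in practice be sized from real traffic data:

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "https://portal.example.com/enroll"  # hypothetical system under test

def one_request(_) -> float | None:
    """Time a single request; None means it errored or timed out."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            resp.read()
        return time.monotonic() - start
    except Exception:
        return None

def run_step(concurrency: int, requests: int) -> tuple[int, float]:
    """Fire a fixed batch of requests at a given concurrency level."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        timings = list(pool.map(one_request, range(requests)))
    ok = [t for t in timings if t is not None]
    errors = len(timings) - len(ok)
    return errors, statistics.median(ok) if ok else float("inf")

if __name__ == "__main__":
    # Ramp toward projected peak; identical runs before and after an upgrade
    # yield directly comparable baselines, unlike one-off manual tests.
    for level in (50, 100, 200, 400):
        errors, median = run_step(level, level * 5)
        print(f"{level:>4} concurrent: median {median:.2f}s, {errors} errors")
```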
The Spoken Afterthought
HealthCare.gov's problems highlighted shortcomings in web portal testing, but voice applications suffer similar neglect. Indeed, when the President advised people to apply for coverage by phone instead, many of the call centers set up to field applications had just as much trouble handling the spike in caller traffic.
Voice quality can be a significant drag on short- and long-term call center ROI. Contact center agents who must ask customers to repeat themselves because of poor voice connections — or worse, ask customers to hang up and call in again — are less productive than those who can hear customers clearly. In the long term, repetition and multiple calls erode customer satisfaction levels.
A large majority of the professionals who responded to the LinkedIn/Empirix survey — 68 percent — reported that their companies never monitor contact center voice quality. Only 14 percent monitor voice quality continuously, while the remaining 17 percent check periodically, on a daily, weekly or monthly basis.
This failure carries heavy risks. Globally, 79 percent of consumers replying to a Customer Experience Foundation survey said they had experienced poor voice quality on contact center calls, and 68 percent said they would hang up on a call with poor voice quality. If they are calling about a new product or service, they will likely call a competing company instead.
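Continuous voice-quality monitoring is more tractable than those numbers suggest. One common approach (an assumption here, not a description of any particular product) is to sample network measurements such as one-way delay and packet loss from live calls and convert them into a Mean Opinion Score using a simplified form of the ITU-T G.107 E-model:

```python
def mos_estimate(one_way_delay_ms: float, packet_loss_pct: float) -> float:
    """Rough Mean Opinion Score (1-5) from delay and loss, using a
    simplified ITU-T G.107 E-model (G.711 codec, random packet loss)."""
    d = one_way_delay_ms
    # Delay impairment (Cole-Rosenbluth approximation of the E-model's Id).
    i_delay = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    # Packet-loss impairment for G.711 (Ie = 0, Bpl ~ 4.3 per ITU-T G.113).
    i_loss = 95.0 * packet_loss_pct / (packet_loss_pct + 4.3)
    # Transmission rating: 93.2 is the default R for a clean narrowband call.
    r = max(0.0, min(100.0, 93.2 - i_delay - i_loss))
    # Standard mapping from R factor to MOS (ITU-T G.107 Annex B).
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

# A clean call versus one a monitor should flag (3.5 is a common floor):
print(f"good link: MOS {mos_estimate(20, 0.0):.2f}")   # about 4.4
print(f"bad link:  MOS {mos_estimate(250, 2.0):.2f}")  # about 2.5
```

A monitor that recomputes this score every few seconds per call can flag a degrading trunk long before agents start asking customers to repeat themselves.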
Between misdirected efforts and testing rates like these, it's no wonder that a major initiative like online healthcare enrollment goes off the rails, or that customers calling a contact center get funneled down a blind alley in the IVR system. Customers who run into obstacles like those are on a fast track to becoming former customers.
Testing and performance monitoring can effectively stem those losses. Businesses that test and monitor their customer service systems (CSS) maximize ROI by identifying and remediating problems quickly. An end-to-end monitoring solution gives organizations deep visibility into complex customer service technology environments, reducing the time it takes to find the source of a problem — and fix it — before customers ever notice the glitch.
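In practice, that usually takes the form of synthetic monitoring: scripted transactions that run against production systems around the clock. The sketch below, again with hypothetical endpoints and thresholds, probes each customer-facing system once a minute and raises an alert whenever a check fails or exceeds its latency budget:

```python
import time
import urllib.request

# Hypothetical synthetic checks covering each customer-facing system.
CHECKS = {
    "web portal":  "https://portal.example.com/health",
    "enrollment":  "https://portal.example.com/enroll/health",
    "ivr gateway": "https://ivr.example.com/health",
}
LATENCY_BUDGET_S = 2.0  # slower than this counts as a customer-visible problem

def probe(url: str) -> tuple[bool, float]:
    """Run one synthetic transaction and time it."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200, time.monotonic() - start
    except Exception:
        return False, time.monotonic() - start

if __name__ == "__main__":
    while True:
        for name, url in CHECKS.items():
            ok, elapsed = probe(url)
            if not ok or elapsed > LATENCY_BUDGET_S:
                # A real deployment would page on-call staff or open a ticket.
                print(f"ALERT: {name} unhealthy (ok={ok}, {elapsed:.2f}s)")
        time.sleep(60)  # one synthetic pass per minute, around the clock
```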
ABOUT Matthew Ainsworth
Matthew Ainsworth is Senior Vice President, Americas and Japan, at Empirix. He has 15 years of experience with contact center and unified communications solutions.