Why You Can't Keep Throwing More People at IT Issues
November 03, 2017

Vincent Geffray
Everbridge


With the increasing complexity of IT environments, rising cyber threats and a growing number of IT alerts, IT organizations have come to the realization that throwing more people at IT issues doesn't solve the problem. According to a recent DEJ study, putting more people on a particular IT issue is not an effective approach, and organizations are finding themselves at a turning point they can no longer ignore.


Respondents to the survey said that they experienced, on average, an 88 percent increase in processed metrics, events and alerts over the last 12 months. The study also found that 42 percent of organizations are reporting that the technology solutions they purchased in the past are not as effective when working with this level of volume and velocity of data.

What Do the Findings Tell Us?

Today, IT organizations need to adapt quickly to new consumer behaviors, which are driving ever-greater business demand for IT services. And as the demand for digital services increases, so does the risk of service outages. Everyone in IT knows that major IT issues are unpredictable and unavoidable, and that 20th century tools and processes are no longer up to the task. Senior IT executives, along with business leaders, need to rethink their IT strategy if they want to fully embrace a future built on big data, AI and IoT.

Modern IT Stacks, Yet Operating with 1990s Processes

Engaging in digital transformation too late can severely hurt a business's competitiveness

Every day we talk to IT leaders, and the conversations turn to the importance of modernizing their digital footprint so they can offer more, and offer it faster. There is a consensus that customers' fast-changing expectations are the major driver behind digital transformation, and that engaging in digital transformation too late can severely hurt a business's competitiveness. Discussions move quickly into Agile development, Scrum team structures and DevOps, which is a good thing. It is now generally accepted that the old way of building IT services and applications (waterfall development) is no longer compatible with customers' high expectations for time to delivery and digital experience.

At the same time, there's a growing disconnect between the complexity of the new technology stacks and tools organizations acquire and the rudimentary processes they still use. This can quickly hurt both the effectiveness of the support functions and the organization's very ability to deliver new releases on schedule.

Even in a perfect digital world, bad things will happen: retail websites slow down or become unavailable (DDoS, cyberattack), the network goes down, applications fail, or you lose the connection to your ERP, EMR or supply chain system, which hurts productivity and increases user frustration. In other words, the very customers you are trying to please with faster delivery may now be very frustrated by poor quality of service when things break.

Faster Release Cycles Require Faster Response Cycles

IT leaders must review the three dimensions of their operations: their people, their processes and their technology.

Interestingly enough, the same DEJ study shows that IT Leaders have come to the conclusion that:

■ They cannot keep throwing more people at the increasing number of IT issues

■ The investment they made in their ITSM platform, while necessary, is no longer sufficient

■ Contextual information is essential when dealing with critical IT issues

■ Automation is no longer used only for tactical cost-cutting initiatives; it is a must-have component for ensuring consistent quality and delivery of IT services


What Now?

As organizations acquire new technology and adopt new digital service delivery methods, they must also examine their processes and staffing to ensure that those processes will:

■ Support their service delivery goals (frequency of release)

■ Enable cross-functional teams to collaborate and participate in the response

■ Meet their SLAs and protect the business user experience when issues occur

■ Provide senior IT executives with insight into response team performance for continuous improvement

■ Give a way to perform post-mortem reviews using the metrics and information collected (see the sketch after this list)

■ Store full audit trails, including conversation recordings, for compliance
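To illustrate the last few points, here is a minimal sketch, in Python, of how the metrics a response process collects can feed a post-mortem review. The data model and field names are hypothetical and not tied to any particular product; the point is simply that once acknowledgement and resolution timestamps are captured, standard measures such as mean time to acknowledge (MTTA) and mean time to repair (MTTR) fall out directly.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean


@dataclass
class Incident:
    opened: datetime        # when the alert was raised
    acknowledged: datetime  # when a responder took ownership
    resolved: datetime      # when service was restored


def response_metrics(incidents: list) -> dict:
    """Compute mean time to acknowledge (MTTA) and mean time to repair (MTTR)."""
    mtta = mean((i.acknowledged - i.opened).total_seconds() for i in incidents)
    mttr = mean((i.resolved - i.opened).total_seconds() for i in incidents)
    return {"MTTA": timedelta(seconds=mtta), "MTTR": timedelta(seconds=mttr)}


# Two incidents from a hypothetical audit trail
history = [
    Incident(datetime(2017, 10, 2, 9, 0),
             datetime(2017, 10, 2, 9, 12),
             datetime(2017, 10, 2, 10, 5)),
    Incident(datetime(2017, 10, 9, 14, 30),
             datetime(2017, 10, 9, 14, 34),
             datetime(2017, 10, 9, 15, 0)),
]
print(response_metrics(history))
# {'MTTA': datetime.timedelta(seconds=480), 'MTTR': datetime.timedelta(seconds=2850)}
```

Trended over time, figures like these give senior IT executives a concrete baseline for the continuous-improvement goal described above.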

Recommendations

IT leaders should turn to Closed-loop Response Management solutions, which help automate the traditionally manual and time-consuming steps of incident response (sketched in code after the list below), including the ability to:

■ Automatically gauge the severity and context of the event

■ Identify in real time the right teams and personnel based on who's on-call, location, skillset, etc.

■ Engage the right teams in real time; escalate, collaborate and orchestrate

■ Gain visibility into Incident Response across all areas of IT: Service Operations, Security Operations, DevOps and IT BC/DR
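To make the first two capabilities concrete, here is a minimal, hypothetical sketch of gauging severity from event context and selecting on-call responders by skill set. All names, fields and rules are invented for illustration and do not represent any specific product's logic.

```python
from dataclasses import dataclass


@dataclass
class Responder:
    name: str
    skills: set            # technologies this person can support
    on_call: bool          # whether they are currently on the rotation


def gauge_severity(event: dict) -> str:
    """Very rough severity rule: customer-facing outages outrank internal degradations."""
    if event.get("service_down") and event.get("customer_facing"):
        return "critical"
    if event.get("service_down"):
        return "major"
    return "minor"


def select_responders(event: dict, roster: list) -> list:
    """Pick on-call responders whose skill set matches the affected technology."""
    needed = event.get("technology")
    return [r for r in roster if r.on_call and needed in r.skills]


roster = [
    Responder("alice", {"network", "dns"}, on_call=True),
    Responder("bob",   {"database"},       on_call=True),
    Responder("carol", {"network"},        on_call=False),  # off shift
]
event = {"technology": "network", "service_down": True, "customer_facing": True}

print(gauge_severity(event))                               # critical
print([r.name for r in select_responders(event, roster)])  # ['alice']
```

In practice, this logic would be driven by the on-call schedules, location and skill data the platform already holds, and the engagement step would notify responders over multiple channels and escalate automatically when no acknowledgement arrives within the agreed time.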

Vincent Geffray is Senior Director, Product Marketing, at Everbridge
