Data Downtime Nearly Doubled Year Over Year
May 04, 2023

Data downtime — periods of time when an organization's data is missing, wrong or otherwise inaccurate — nearly doubled year over year (1.89x), according to the State of Data Quality report from Monte Carlo.


The Wakefield Research survey, which was commissioned by Monte Carlo and polled 200 data professionals in March 2023, found that three critical factors contributed to this increase in data downtime. These factors included:

■ A rise in monthly data incidents, from 59 in 2022 to 67 in 2023.

■ 68% of respondents reported an average time to detection for data incidents of four hours or more, up from 62% in 2022.

■ A 166% increase in average time to resolution, which rose to 15 hours per incident across respondents.
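The factors above compound: each incident contributes downtime for as long as it goes undetected and then unresolved. A minimal sketch of that arithmetic in Python, using the survey's 2023 averages; this simple incidents × (detection + resolution) formula is an illustrative assumption, not necessarily the exact methodology behind the report's 1.89x figure.

```python
def estimated_downtime_hours(incidents: int, hours_to_detect: float,
                             hours_to_resolve: float) -> float:
    """Estimate monthly data downtime: each incident counts as 'down' from
    the moment it occurs until it is detected and then resolved."""
    return incidents * (hours_to_detect + hours_to_resolve)

# Survey's 2023 averages: 67 incidents/month, ~4 h to detect, 15 h to resolve.
print(estimated_downtime_hours(67, 4, 15))  # 1273 hours of downtime per month
```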

More than half of respondents reported that 25% or more of revenue was subject to data quality issues. The average percentage of impacted revenue jumped to 31%, up from 26% in 2022. Additionally, an astounding 74% reported that business stakeholders identify issues first "all or most of the time," up from 47% in 2022.

These findings suggest data quality remains among the biggest problems facing data teams, with bad data having more severe repercussions on an organization's revenue and data trust than in years prior.

The survey also suggests data teams are making a tradeoff between data downtime and the amount of time spent on data quality as their datasets grow.

For instance, organizations with fewer tables reported spending less time on data quality than their peers with more tables, but their average time to detection and average time to resolution were comparatively higher. Conversely, organizations with more tables reported lower average time to detection and average time to resolution, but spent a greater percentage of their team's time to do so.

■ Respondents that spent more than 50% of their time on data quality had more tables (average 2,571) compared to respondents that spent less than 50% of their time on data quality (average 208).

■ Respondents that took less than 4 hours to detect an issue had more tables (average 1,269) than those who took longer than 4 hours to detect an issue (average 346).

■ Respondents that took less than 4 hours to resolve an issue had more tables (average 1,172) than those who took longer than 4 hours to resolve an issue (average 330).

"These results show teams having to make a lose-lose choice between spending too much time solving for data quality or suffering adverse consequences to their bottom line," said Barr Moses, CEO and co-founder of Monte Carlo. "In this economic climate, it's more urgent than ever for data leaders to turn this lose-lose into a win-win by leveraging data quality solutions that will lower BOTH the amount of time teams spend tackling data downtime and mitigating its consequences. As an industry, we need to prioritize data trust to optimize the potential of our data investments."

The survey revealed additional insights on the state of data quality management, including:

■ 50% of respondents reported data engineering is primarily responsible for data quality, compared to:
- 22% for data analysts
- 9% for software engineering
- 7% for data reliability engineering
- 6% for analytics engineering
- 5% for the data governance team
- 3% for non-technical business stakeholders

■ Respondents averaged 642 tables across their data lake, lakehouse, or warehouse environments.

■ Respondents reported having an average of 24 dbt models, and 41% reported having 25 or more dbt models.

■ Respondents averaged 290 manually-written tests across their data pipelines.
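The manually written tests counted here are typically simple per-table assertions on volume, nulls, uniqueness, and value ranges. A hypothetical example of one such hand-written check, sketched in Python with pandas; the table name, columns, and rules are illustrative, not taken from the report.

```python
import pandas as pd

def check_orders_table(df: pd.DataFrame) -> list[str]:
    """Hand-written data quality checks of the kind teams maintain per table."""
    failures = []
    if df.empty:
        failures.append("volume: table is empty")
    if df["order_id"].isnull().any():
        failures.append("nulls: order_id contains NULLs")
    if df["order_id"].duplicated().any():
        failures.append("uniqueness: duplicate order_id values")
    if (df["amount"] < 0).any():
        failures.append("range: negative order amounts")
    return failures

orders = pd.DataFrame({"order_id": [1, 2, 2], "amount": [9.99, -5.00, 12.50]})
print(check_orders_table(orders))
# ['uniqueness: duplicate order_id values', 'range: negative order amounts']
```

Multiplied across hundreds of tables, each needing its own rules and thresholds, it is easy to see how respondents accumulate 290 of these.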

■ The number one reason for launching a data quality initiative was that the data organization identified data quality as a need (28%), followed by a migration or modernization of the data platform or systems (23%).

"Data testing remains data engineers' number one defense against data quality issues — and that's clearly not cutting it," said Lior Gavish, Monte Carlo CTO and Co-Founder. "Incidents fall through the cracks, stakeholders are the first to identify problems, and teams fall further behind. Leaning into more robust incident management processes and automated, ML-driven approaches like data observability is the future of data engineering at scale."
