Datadog Releases Data Jobs Monitoring
June 24, 2024

Datadog announced the general availability of Data Jobs Monitoring, a new product that helps data platform teams and data engineers detect problematic Spark and Databricks jobs anywhere in their data pipelines, remediate failed and long-running jobs faster, and optimize overprovisioned compute resources to reduce costs.

Data Jobs Monitoring immediately surfaces specific jobs that need optimization and reliability improvements, while enabling teams to drill down into job execution traces and correlate job telemetry with their cloud infrastructure for fast debugging.

“When data pipelines fail, data quality is impacted, which can hurt stakeholder trust and slow down decision making. Long-running jobs can lead to spikes in cost, making it critical for teams to understand how to provision the optimal resources,” said Michael Whetten, VP of Product at Datadog. “Data Jobs Monitoring helps teams do just that by giving data platform engineers full visibility into their largest, most expensive jobs to help them improve data quality, optimize their pipelines and prioritize cost savings.”

Data Jobs Monitoring helps teams to:

- Detect job failures and latency spikes: Out-of-the-box alerts immediately notify teams when jobs have failed or are running beyond automatically detected baselines, so issues can be addressed before they negatively impact the end-user experience. Recommended filters surface the most important issues impacting job and cluster health so they can be prioritized.

- Pinpoint and resolve erroneous jobs faster: Detailed trace views show teams exactly where a job failed in its execution flow so they have the full context for faster troubleshooting. Multiple job runs can be compared to one another to expedite root cause analysis and identify trends and changes in run duration, Spark performance metrics, cluster utilization and configuration.

- Identify opportunities for cost savings: Resource utilization and Spark application metrics help teams identify ways to lower compute costs for overprovisioned clusters and optimize inefficient job runs.

Data Jobs Monitoring is now generally available.
