Avoiding Digital Disaster: How the US Census Can Deliver a Smooth Digital Experience
April 23, 2020

Tal Weiss
OverOps


Most of us have personally felt the effects of an application failing on us at some point. Sometimes the impact of a software outage is trivial: we are temporarily unable to book a flight for vacation, we are forced to call a customer service rep rather than use an online portal, or we can't watch an on-demand movie. While frustrating, these experiences are often inconvenient rather than catastrophic.

But what about when the application failure has much larger implications? For example, the site you use to refill important prescriptions malfunctions and delays access to medication, your banking app fails on the day rent is due, or, in the recent case of the Iowa caucus app, a software outage affects your ability to participate in our nation's democratic process.

As more organizations go digital, including critical government agencies, reliable software is paramount. The recent Iowa caucus voting app failure is a case study in software testing and delivery mistakes, with many key learnings for those tasked with building and managing mission-critical applications — like the US Census Bureau.


The 2020 US Census, which is collecting responses beginning April 1, has the potential to significantly influence fair political representation, the allocation of vital federal funds, and more. This year marks the first time in our nation’s history that participants have the option to fill out an online questionnaire rather than mailing in their responses. While this is an exciting digital milestone for the US Census Bureau, experience tells us that the course of digital transformation rarely does run smooth.

In order to ensure the accuracy of results that will impact our nation for the next decade, the census software needs to operate seamlessly. But considering what happened with the Iowa caucus, how confident are we that the census app isn’t going to fail?

Below are two key takeaways from the Iowa Caucus app disaster that should serve as a valuable lesson not only for the IT team supporting the US Census Bureau, but for any engineering team tasked with delivering a mission-critical application with minimal room for error.

Takeaway #1: Test Early. Test Often.

There is a rule in software known as the Rule of Ten: the cost of finding and fixing a defect increases roughly tenfold at each successive stage of the software delivery lifecycle. When pushing out an important new release that will be highly trafficked and highly visible, the more proactive you can be about preventing errors from reaching production, the better. In the case of the US Census app, responses are collected only within a short window, so any unexpected production issue could waste precious minutes or hours.
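As a rough back-of-the-envelope illustration of the Rule of Ten (the stage names and base cost here are invented for illustration, not industry figures), each stage a defect survives multiplies the cost of fixing it by roughly ten:

```python
# Hypothetical illustration of the Rule of Ten: a defect first caught at a
# later stage costs roughly 10x more to fix than at the previous stage.
STAGES = ["requirements", "design", "development", "testing", "production"]

def fix_cost(stage: str, base_cost: float = 100.0) -> float:
    """Estimated cost to fix a defect first caught at the given stage."""
    return base_cost * 10 ** STAGES.index(stage)

for stage in STAGES:
    print(f"{stage:>12}: ${fix_cost(stage):,.0f}")
```

A $100 fix at the requirements stage becomes a million-dollar fix by the time the defect surfaces in production — which is the whole argument for catching issues as early as possible.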

To address this, many organizations are starting to understand the merits of adopting a "Shift Left" approach to quality. By increasing quality measures taken in the development and testing phases of software delivery, you can significantly reduce the odds of production issues. Of course, you can’t fully anticipate all potential production failure scenarios, but the more testing you do up front, the more confidence you’ll have in your release, rather than relying solely on production monitoring.
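Concretely, shifting left can be as simple as catching a validation bug in a fast unit test on every commit instead of in production. This sketch uses invented field names for a census-style form (not the actual census questionnaire schema):

```python
# A minimal shift-left example: validate form input in a unit test that
# runs on every commit, long before any production traffic arrives.
def validate_response(response: dict) -> list[str]:
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    if not response.get("household_size") or response["household_size"] < 1:
        errors.append("household_size must be at least 1")
    if not response.get("zip_code", "").isdigit():
        errors.append("zip_code must be numeric")
    return errors

# These assertions run in CI on every commit -- a failure blocks the merge.
assert validate_response({"household_size": 3, "zip_code": "90210"}) == []
assert "household_size must be at least 1" in validate_response({"zip_code": "90210"})
```

Tests like these cost seconds to run, which is exactly why pushing quality checks earlier in the lifecycle pays off under the Rule of Ten.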

Takeaway #2: Automate, Automate, Automate

Writing code remains a very human-driven process (AI has yet to pass a coding "Turing test"), so companies will need to find ways to automate how they test, deliver and operate their software to ensure both speed and reliability.

A decade ago, when Test-Driven Development (TDD) was just starting to gain traction, it promised to improve both productivity and quality. Since then, release cycles have shortened, CI/CD is no longer a buzzword, and companies built around pipeline automation have matured to the point of going public.

Building on the points above, testing is more relevant than ever, but when moving fast is table stakes, relying on traditional tests alone in your shift-left strategy is no longer an option. Building automated quality gates and feedback loops into your CI/CD pipeline allows for testing that is both thorough and fast, without holding up release timelines, by leveraging methods such as static and dynamic code analysis.
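A quality gate can be as simple as a script in the pipeline that fails the build when an automated check falls below a threshold. This sketch shows the pattern; the metric names and thresholds are assumptions, not the output of any specific tool:

```python
# Sketch of an automated quality gate: compare measured metrics against
# thresholds and report which gates fail; CI would abort on any failure.
def run_quality_gates(metrics: dict, gates: dict) -> list[str]:
    """Return a description of each gate that is not met (empty = pass)."""
    failures = []
    for name, threshold in gates.items():
        value = metrics.get(name)
        if value is None or value < threshold:
            failures.append(f"{name}: {value} below required {threshold}")
    return failures

# In a real pipeline these numbers would come from coverage and
# static-analysis tools; here they are hard-coded for illustration.
gates = {"test_coverage": 0.80, "static_analysis_score": 0.90}
passing = run_quality_gates({"test_coverage": 0.85, "static_analysis_score": 0.95}, gates)
failing = run_quality_gates({"test_coverage": 0.70, "static_analysis_score": 0.95}, gates)
assert passing == []   # build proceeds
assert failing != []   # build is blocked
```

Because the gate runs on every build, feedback reaches developers in minutes rather than surfacing as a production incident.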

Further, even with a sophisticated testing pipeline, the occasional error will inevitably reach production, and your ability to detect, troubleshoot and recover quickly will make all the difference to your users. Developers are great at writing code but inherently limited in their ability to foresee where it will break down later. For this reason, and given the massive volume of noisy operational data that high-scale environments produce, detecting software issues in production and gathering information about them should be automated. The roughly 30% of time and resources traditionally allocated to manually identifying, routing and reproducing issues during the software delivery lifecycle will most likely become a thing of the past.
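One common way to cut through that noise automatically (a generic sketch of the technique, not any specific product's mechanism) is to fingerprint and deduplicate incoming errors, so a flood of repeated log events collapses into a handful of distinct issues:

```python
# Sketch: collapse a noisy stream of production errors into distinct
# issues by fingerprinting each event on its error type and location.
from collections import Counter

def fingerprint(error: dict) -> tuple:
    """Group errors by type and code location, ignoring variable details."""
    return (error["type"], error["function"])

def dedupe(errors: list[dict]) -> Counter:
    """Count occurrences of each distinct issue in an event stream."""
    return Counter(fingerprint(e) for e in errors)

events = [
    {"type": "NullPointerException", "function": "submitResponse"},
    {"type": "NullPointerException", "function": "submitResponse"},
    {"type": "TimeoutError", "function": "saveDraft"},
]
issues = dedupe(events)
assert len(issues) == 2  # three events, but only two distinct issues
```

At real production scale the same grouping turns millions of log lines into a ranked list of distinct issues, which is what makes automated detection and routing tractable.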

As the US Census Bureau takes this high-stakes step toward innovation, it’s my sincere hope that they’ve been able to put some of these methodologies and tools in place. The more proactive you can be about ensuring quality, and the more tasks you can automate throughout the process, the less you will have to fear when it’s time to put your software to the true test — your users.

Tal Weiss is Co-Founder and CTO of OverOps
