Why ACID transactions matter in an eventually consistent world

Systems with weak consistency guarantees can be expensive in unexpected ways.

By Emily Drevets
August 9, 2016
Iterations 7. (source: Nick Hughes on Flickr)

In 1983, Andreas Reuter and Theo Härder coined the term ACID to describe the properties of a reliable transactional system. It stands for atomic, consistent, isolated, and durable. It’s a great acronym.

Fast forward to the present day. Many developers have moved to distributed computing models with weaker consistency guarantees in exchange for that sweet, sweet speed. Not only is ACID generally considered too expensive for apps in terms of performance and availability, but some think the term is now little more than marketing shellac. That view is somewhat justified: very few systems that claim to support 100% ACID transactions actually do.


But is that a problem? Not every application benefits equally from strong ACID. It's crucial in some, especially in industries like financial services that handle high transaction volumes, while a mostly-ACID system could work in others. The trouble is that many database implementers don't, or can't, know what "mostly" means for their operations.

This is especially true when it comes to isolation, which is where many systems that claim ACID transactions fall short. According to Peter Bailis, even though weak models like Read Committed isolation or Snapshot Isolation represent "ACID in practice," it is difficult to know how they actually behave because they have not been thoroughly studied. That lack of information makes it hard to use weak isolation levels responsibly. To make matters worse, weak isolation can be an insidious problem, silently corrupting data until someone notices.
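To see how weak isolation corrupts data silently, consider the classic lost-update anomaly. The sketch below is a hypothetical illustration, not taken from any particular database: two concurrent "transactions" each read a shared balance, compute a new value, and write it back with no isolation between them, so one update simply vanishes.

```python
import threading

# Hypothetical in-memory "table": a single account balance with no
# isolation between concurrent read-modify-write transactions.
balance = {"value": 100}
barrier = threading.Barrier(2)

def withdraw(amount):
    # Each "transaction" reads the balance, then writes back its result.
    # With no isolation, both reads can observe the same starting value.
    snapshot = balance["value"]           # read
    barrier.wait()                        # force both reads before either write
    balance["value"] = snapshot - amount  # write: last writer wins

t1 = threading.Thread(target=withdraw, args=(30,))
t2 = threading.Thread(target=withdraw, args=(50,))
t1.start(); t2.start()
t1.join(); t2.join()

# Two withdrawals totaling 80 ran, but one update was silently lost:
# the balance ends at 70 or 50 depending on write order, never the
# correct 20. Nothing crashed, so nobody notices until the books don't add up.
print(balance["value"])
```

A serializable database would force these two transactions to run as if one happened after the other; under sufficiently weak isolation, the application itself must detect and repair this kind of anomaly.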

That’s not to say that models like eventual consistency, which make no ACID guarantees, don’t have their place. They were built to be fast and highly available during failures, and there they do excel. As a result, however, they burden application logic with handling temporary inconsistencies. The tradeoff might not be worth it, especially if you find out later on that weak consistency doesn’t allow you to provide the kinds of guarantees that you need for your business (as was found in the case of Twitter’s Manhattan).

In contrast, strongly transactional apps offer more straightforward code since atomicity and isolation allow the developer to skip building the code needed to handle partial failure and conflicting concurrent access. This can make a big difference. It can actually speed up apps by removing workarounds while at the same time shrinking the playing field for bugs and errors. It also reduces the need for distributed systems expertise at the user level, which was one of Google’s criteria for building MillWheel.
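As a concrete illustration of that simplification, here is a minimal sketch of an atomic money transfer using SQLite via Python's standard library. The table name, account names, and amounts are invented for the example; the point is that the application never writes compensation code for a half-finished transfer, because the database either commits both updates or neither.

```python
import sqlite3

# Illustrative schema: two accounts with integer balances.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(src, dst, amount):
    # Atomicity: the `with conn:` block commits on success and rolls
    # back on any exception, so a failed transfer leaves no partial state.
    try:
        with conn:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            cur = conn.execute("SELECT balance FROM accounts WHERE name = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except ValueError:
        pass  # transaction was rolled back; both balances are untouched

transfer("alice", "bob", 60)   # succeeds
transfer("alice", "bob", 100)  # fails and rolls back cleanly
print(list(conn.execute("SELECT name, balance FROM accounts ORDER BY name")))
```

Without atomicity, the second (failing) transfer would require the application to detect the debit that already happened and undo it by hand, which is exactly the workaround code the paragraph above says transactional systems let developers skip.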

Engineers can do almost anything given enough time and resources. They can fix bugs, restore data, and retrofit systems that have fallen behind a business's needs. However, not everyone has a warehouse full of engineers waiting for a big distributed computing project. Not to mention that it's difficult to build full ACID transactions into a system if they weren't part of the original design.

When building a system, it’s important to consider that what seems easier or faster or cheaper at the beginning could end up being none of those things. The question isn’t, “Does this work?” but “If my app succeeds, will this level of consistency be a liability?” And considering the option of ACID systems that do provide high levels of performance and availability, is “ACID-in-practice” or eventual consistency good enough for your app?

This post is a collaboration between VoltDB and O’Reilly. See our statement of editorial independence.
