diagram.mmd — flowchart
CAP Theorem Model flowchart diagram

The CAP theorem states that a distributed data system can provide at most two of three properties simultaneously: Consistency, Availability, and Partition Tolerance — and that in the presence of a network partition, you must choose between consistency and availability.

This diagram maps the three CAP properties and shows where real systems fall when forced to make the trade-off during a network partition. Consistency (C) means every read receives the most recent write or an error — there are no stale reads. Availability (A) means every request receives a non-error response, though the data may not be the most recent. Partition Tolerance (P) means the system continues operating even when network partitions prevent some nodes from communicating.

Because network partitions are a physical reality in any distributed system, partition tolerance is effectively mandatory. The real choice is between C and A during a partition. A CP system (like ZooKeeper, HBase, or a synchronously replicated PostgreSQL cluster) will refuse to serve reads or writes from a node that cannot confirm it has the latest data, sacrificing availability. A CA system is theoretical: it assumes partitions never occur, and in practice corresponds to a single-node relational database.
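The CP behavior described above can be sketched in a few lines: a node answers a read only if it can reach a majority (quorum) of replicas, and otherwise returns an error rather than risk serving stale data. This is an illustrative toy model, not the API of any real database; the class and exception names are invented for the example.

```python
class Unavailable(Exception):
    """Raised when a node cannot prove it has the latest data."""

class CPNode:
    """Toy CP replica: serves reads only with quorum (illustrative, not a real DB API)."""
    CLUSTER_SIZE = 3

    def __init__(self, name, peers_reachable, value):
        self.name = name
        self.peers_reachable = peers_reachable  # replicas this node can still contact
        self.value = value

    def read(self):
        # Count this node plus every peer it can still reach.
        reachable = 1 + len(self.peers_reachable)
        if reachable <= self.CLUSTER_SIZE // 2:
            # Minority side of a partition: refuse rather than risk a stale read.
            raise Unavailable(f"{self.name}: no quorum ({reachable}/{self.CLUSTER_SIZE})")
        return self.value

# Majority side of the partition keeps serving; the minority side errors out.
majority = CPNode("n1", peers_reachable=["n2"], value="v42")
minority = CPNode("n3", peers_reachable=[], value="v41")

print(majority.read())  # the quorum side returns its value
try:
    minority.read()
except Unavailable as err:
    print("refused:", err)  # the isolated node sacrifices availability
```

The key design point is that the minority node fails loudly instead of answering: that error is exactly the availability the CP trade-off gives up.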

An AP system (like DynamoDB, Cassandra, or CouchDB) will continue serving reads and writes during a partition, accepting that different nodes may return different values. These systems implement eventual consistency to reconcile divergent state after the partition heals.
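The divergence-then-convergence cycle can be sketched with two replicas that both accept writes during a partition and then merge using last-write-wins (LWW), one common (if lossy) conflict-resolution strategy. This is a hypothetical sketch with invented names, not the mechanism of any specific database.

```python
class APReplica:
    """Toy AP replica using last-write-wins merge (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.value = None
        self.timestamp = 0  # logical timestamp attached to each write

    def write(self, value, timestamp):
        # AP: always accept the write, even when partitioned from peers.
        self.value, self.timestamp = value, timestamp

    def merge(self, other):
        # LWW: the write with the higher timestamp wins on both replicas.
        if other.timestamp > self.timestamp:
            self.value, self.timestamp = other.value, other.timestamp

# During the partition, each side accepts a different write and they diverge.
a, b = APReplica("a"), APReplica("b")
a.write("cart: [book]", timestamp=1)
b.write("cart: [pen]", timestamp=2)

# Partition heals: an anti-entropy exchange converges both replicas.
a.merge(b)
b.merge(a)
print(a.value, b.value)  # both replicas now agree
```

Note that LWW silently discards the losing write, which is why systems with richer merge semantics (CRDTs, application-level resolution) exist.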

For developers, CAP determines the fundamental behavior contract of your database. If your application cannot tolerate stale reads, choose a CP database and design around potential unavailability during partition events. If uptime is paramount and you can handle stale data, an AP system with well-understood conflict resolution is appropriate.


Frequently asked questions

What is the CAP theorem?
The CAP theorem states that a distributed data system can guarantee at most two of three properties simultaneously: Consistency (every read returns the most recent write), Availability (every request receives a non-error response), and Partition Tolerance (the system operates despite network partitions). Because network partitions cannot be eliminated in practice, the real choice is between C and A during a partition.

What is the difference between CP and AP databases?
CP databases like ZooKeeper or a synchronously replicated PostgreSQL cluster refuse to serve requests from nodes that cannot confirm they hold the latest data, sacrificing availability during partitions. AP databases like Cassandra or DynamoDB continue serving reads and writes during a partition, accepting that nodes may return diverged values. Truly CA systems (no partition tolerance) only apply to single-node deployments.

When should I choose a CP database vs an AP database?
Choose CP when your application cannot tolerate stale reads: financial ledgers, inventory systems, or any domain where reading an outdated value leads to incorrect decisions. Choose AP when maximum uptime is essential and your application can handle eventual convergence: social feeds, user presence, analytics, or features where brief stale reads are acceptable.

What are common misconceptions about the CAP theorem?
A common misconception is treating CAP as a static classification: most modern databases (CockroachDB, YugabyteDB, Cosmos DB) let you tune consistency levels per operation, blending CP and AP behaviors. Another misconception is that CAP covers all distributed tradeoffs; the PACELC model extends CAP by also capturing the latency vs. consistency trade-off that exists even when there is no partition.
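The per-operation tuning mentioned above usually comes down to quorum arithmetic: with N replicas, a write quorum W and a read quorum R are guaranteed to overlap (so every read intersects the latest write) exactly when R + W > N. The helper below is an illustrative sketch of that rule, not any database's actual API.

```python
def read_sees_latest_write(n: int, w: int, r: int) -> bool:
    """Return True when any read quorum must intersect any write quorum.

    n: total replicas; w: replicas that must ack a write;
    r: replicas consulted on a read. Overlap requires r + w > n.
    """
    return r + w > n

# N=3 with quorum reads and quorum writes: reads always see the latest write.
print(read_sees_latest_write(3, 2, 2))  # True
# N=3 writing to one replica and reading from one: fast and highly
# available, but a read may miss the newest write entirely.
print(read_sees_latest_write(3, 1, 1))  # False
```

Tuning W and R per operation is how a single database can behave CP-like for one query and AP-like for the next.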
```mermaid
flowchart TD
    CAP[CAP Theorem] --> C[Consistency\nEvery read gets latest write]
    CAP --> A[Availability\nEvery request gets a response]
    CAP --> P[Partition Tolerance\nSystem works despite network split]
    P --> MustChoose{Network partition occurs\nChoose C or A}
    MustChoose -->|Choose Consistency| CP[CP Systems]
    MustChoose -->|Choose Availability| AP[AP Systems]
    CP --> CPEx1[Zookeeper]
    CP --> CPEx2[HBase]
    CP --> CPEx3[Etcd]
    CP --> CPBehavior[Refuses requests\nif node is out of sync]
    AP --> APEx1[DynamoDB]
    AP --> APEx2[Cassandra]
    AP --> APEx3[CouchDB]
    AP --> APBehavior[Serves stale data\nconverges after partition heals]
    CA[CA - Single node only\nNo partition tolerance] --> CAEx[Traditional\nRelational DB\nno distribution]
```