diagram.mmd — flowchart
Write-Back Cache flowchart diagram

Write-back caching (also called write-behind) is a strategy where writes go to the cache first and are acknowledged immediately, with the cache asynchronously flushing the updated data to the database in the background.

This diagram shows the write path. The application writes to the cache, the cache marks the entry as dirty (modified but not yet persisted), and immediately acknowledges the write. The application can continue without waiting for a database round-trip. A background flush process — triggered by a timer, dirty entry threshold, or cache eviction — later writes the dirty entries to the database.
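The write and read paths above can be sketched in a few lines. This is a minimal illustration, not a production cache: the class name, the dict-backed store, and the explicit dirty set are assumptions made for clarity.

```python
class WriteBackCache:
    """Minimal write-back cache sketch: writes are acknowledged
    immediately; dirty entries are persisted only on flush()."""

    def __init__(self, store):
        self.store = store      # backing "database" (a plain dict here)
        self.cache = {}         # key -> value
        self.dirty = set()      # keys modified but not yet persisted

    def write(self, key, value):
        self.cache[key] = value  # 1. update the cache entry
        self.dirty.add(key)      # 2. mark it dirty
        return "ACK"             # 3. acknowledge immediately, no DB round-trip

    def read(self, key):
        if key in self.cache:            # cache hit: latest value, even if dirty
            return self.cache[key]
        value = self.store.get(key)      # cache miss: read from the store
        if value is not None:
            self.cache[key] = value      # populate the cache
        return value

    def flush(self):
        """Background flush: persist all dirty entries, then mark them clean."""
        for key in list(self.dirty):
            self.store[key] = self.cache[key]
        self.dirty.clear()
```

In a real deployment `flush()` would run on a background thread or timer; here it is called explicitly so the durability window is visible: between `write()` and `flush()`, the store does not yet contain the new value.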

The primary advantage of write-back is write throughput: multiple writes to the same key can be coalesced into a single database write. If a record is updated 100 times in one second, the cache absorbs all 100 updates and flushes once. This dramatically reduces write amplification to the database, making write-back ideal for high-frequency update scenarios such as counters, analytics aggregations, or game state.
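The coalescing effect can be demonstrated directly by counting database writes. The `CountingStore` wrapper below is an illustrative assumption, standing in for an instrumented database client:

```python
class CountingStore(dict):
    """Dict-backed store that counts how many persistence writes it receives."""
    def __init__(self):
        super().__init__()
        self.db_writes = 0

    def __setitem__(self, key, value):
        self.db_writes += 1
        super().__setitem__(key, value)

cache = {}
dirty = set()
store = CountingStore()

# 100 rapid updates to one key: all absorbed by the cache.
for i in range(100):
    cache["counter"] = i + 1
    dirty.add("counter")

# One flush persists only the final value: a single database write.
for key in dirty:
    store[key] = cache[key]
dirty.clear()
```

After the loop, `store.db_writes` is 1, not 100: the cache absorbed 100 updates and the flush wrote only the last value, which is exactly the write-amplification reduction described above.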

The critical risk is data loss: if the cache crashes before a dirty entry is flushed to the database, that data is permanently lost. For this reason write-back caches are inappropriate for financial transactions or any data where durability is critical. Mitigation strategies include persisting the dirty log to disk (Redis AOF mode), replicating the cache itself, or using short flush intervals.
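As one sketch of the Redis AOF mitigation mentioned above, the relevant redis.conf directives are:

```
# Persist every write to an append-only file so dirty state
# survives a crash (at the cost of some write overhead).
appendonly yes

# fsync the AOF once per second: at most ~1s of writes can be lost.
# "always" is safer but slower; "no" defers to the OS.
appendfsync everysec
```

This narrows the durability window rather than eliminating it; `appendfsync everysec` still permits up to about a second of data loss on a crash.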

Write-back differs fundamentally from Write Through Cache on durability: write-through ensures every write is immediately in the database, while write-back optimizes for speed at the cost of a durability window. The Cache Aside Pattern avoids this trade-off entirely by keeping the cache and database logically separate, but requires the application to manage consistency manually.


Frequently asked questions

What is write-back caching?
Write-back caching (also called write-behind) is a strategy where writes go to the cache first and are acknowledged immediately, with the cache asynchronously flushing dirty entries to the database in the background. The application never waits for a database round-trip on writes, which dramatically reduces write latency.
How does write-back caching work?
The application writes to the cache. The cache marks the entry as dirty and immediately acknowledges success. A background flush process — triggered by a timer, a dirty-entry count threshold, or cache eviction pressure — batches dirty entries and writes them to the database. Multiple updates to the same key are coalesced into a single database write, reducing write amplification.
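The flush-trigger decision described here can be sketched as a single function. The `interval` and `threshold` defaults, the `state` dict, and the function name are illustrative assumptions:

```python
import time

def maybe_flush(cache, dirty, store, state, interval=5.0, threshold=100):
    """Flush dirty entries when the timer has elapsed OR the dirty-entry
    count crosses the threshold; otherwise do nothing.
    Returns the number of entries flushed."""
    now = time.monotonic()
    timer_due = now - state["last_flush"] >= interval
    threshold_due = len(dirty) >= threshold
    if not (timer_due or threshold_due):
        return 0                         # no trigger fired yet

    flushed = 0
    for key in list(dirty):
        store[key] = cache[key]          # one coalesced write per dirty key
        flushed += 1
    dirty.clear()                        # mark entries clean
    state["last_flush"] = now
    return flushed
```

A background loop would call this periodically; eviction would call it for the evicted key before dropping it from the cache.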
When should I use write-back caching?
Use write-back for workloads with high-frequency writes to the same keys where write throughput or latency is the bottleneck — counters, leaderboards, analytics aggregations, session state, or game scores. It is a poor fit for financial transactions, inventory management, or any data where losing a dirty entry would cause a business-logic error, because a cache crash before a flush results in permanent data loss.
What are the risks of write-back caching?
Data loss is the primary risk: dirty entries not yet flushed to the database are lost if the cache crashes. Mitigate with Redis AOF (append-only file) persistence, cache replication, or short flush intervals. Another risk is write ordering issues if multiple keys are flushed non-atomically to the database — ensure the flush order does not violate referential integrity constraints.
mermaid
flowchart TD
    App[Application Write] --> CacheUpdate[Update cache entry]
    CacheUpdate --> MarkDirty[Mark entry as dirty]
    MarkDirty --> ImmediateAck[Immediate write ACK to app]
    MarkDirty --> DirtyQueue[Dirty entry queue]
    DirtyQueue --> FlushTrigger{Flush trigger}
    FlushTrigger -->|Timer elapsed| FlushDB[(Flush dirty entries\nto database)]
    FlushTrigger -->|Dirty threshold reached| FlushDB
    FlushTrigger -->|Entry evicted| FlushDB
    FlushDB --> MarkClean[Mark entries clean]
    App --> ReadReq[Read Request]
    ReadReq --> CacheHit{Cache Hit?}
    CacheHit -->|Yes| ReturnCache[Return from cache\nlatest value]
    CacheHit -->|No| ReadDB[(Read from database)]
    ReadDB --> PopulateCache[Populate cache]
    PopulateCache --> ReturnCache
    CacheCrash([Cache failure]) --> DataLoss[Dirty entries lost\nbefore flush]