diagram.mmd — flowchart
Write Through Cache flowchart diagram

Write-through caching is a strategy where every write operation updates both the cache and the database synchronously before the write is acknowledged to the client, ensuring the cache never holds data that differs from the database.

This diagram shows the write path. When the application writes data, the cache layer intercepts the write, persists the new value to the cache, then immediately forwards the write to the database. Only after both the cache write and the database write complete successfully is the response returned to the application. If either write fails, the entire operation is treated as failed.
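The write path above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: plain dicts stand in for the cache and the database, and all class and method names are invented for this example.

```python
# Minimal write-through sketch. A write persists to the cache first, then
# the database; if the database write fails, the cache entry is rolled back
# so the two stores never diverge, and the error propagates to the caller.

class WriteThroughCache:
    def __init__(self, db):
        self.cache = {}   # in-memory cache store (dict stands in for e.g. Redis)
        self.db = db      # backing store (dict stands in for the database)

    def write(self, key, value):
        previous = self.cache.get(key)
        self.cache[key] = value          # 1. write to the cache
        try:
            self.db[key] = value         # 2. write through to the database
        except Exception:
            # Roll back the cache so it still matches the database.
            if previous is None:
                self.cache.pop(key, None)
            else:
                self.cache[key] = previous
            raise                        # the whole operation is treated as failed
        return True                      # 3. acknowledge only after both succeed

    def read(self, key):
        if key in self.cache:            # cache hit: always up to date
            return self.cache[key]
        value = self.db[key]             # cache miss: fall back to the database
        self.cache[key] = value          # populate the cache for future reads
        return value
```

A successful `write` leaves the same value in both stores, which is exactly the coherence property described below.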

The key benefit of write-through is cache coherence: the cache always reflects what is in the database. A read immediately after a write returns the newly written value from the cache (provided the entry has not been evicted), with no inconsistency window. This makes write-through well-suited for data that is read frequently immediately after being written.

The trade-off is write latency: every write incurs the cost of two storage operations instead of one. The write must complete on both the cache (fast) and the database (slower). For write-heavy workloads or high-volume event streams, this additional latency per write can become a bottleneck.
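The per-write cost is easy to put rough numbers on. The latencies below are made up for illustration, not benchmarks:

```python
# Illustrative latency arithmetic (invented numbers, not measurements).
cache_write_ms = 0.5    # a typical in-memory cache write is sub-millisecond
db_write_ms = 8.0       # a durable database write is usually much slower

write_through_ms = cache_write_ms + db_write_ms   # both must complete before the ack
cache_only_ms = cache_write_ms                    # e.g. where write-back acknowledges

overhead = write_through_ms / cache_only_ms
print(f"write-through: {write_through_ms} ms per write "
      f"({overhead:.0f}x the cache-only path)")
```

Because the acknowledgment waits on the slower store, write-through latency is dominated by the database, which is why write-heavy workloads feel the cost so directly.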

Write-through can also cause cache pollution: data is written to the cache regardless of whether it will ever be read. A record written once and never read again occupies cache memory until its TTL expires. This can crowd out frequently read data and reduce overall cache hit rates.
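The pollution effect can be demonstrated with a tiny bounded cache. The LRU policy, capacity, and key names here are all invented for the example:

```python
from collections import OrderedDict

# A tiny LRU cache: when full, the least recently used entry is evicted.
class TinyCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.data = OrderedDict()

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)           # mark as most recently used
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict the least recently used entry

cache = TinyCache(capacity=3)
cache.put("hot", "frequently read value")

# With write-through, a burst of write-once events also lands in the cache
# and pushes the hot entry out:
for i in range(3):
    cache.put(f"event-{i}", "never read again")

print("hot" in cache.data)   # False: pollution evicted the hot entry
```

The hot key is gone even though it would have been read again, while three never-read records now occupy the cache.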

Compare with Cache Aside Pattern, where the cache is populated lazily on reads, and Write Back Cache, where the database write is deferred, making writes faster at the cost of potential data loss if the cache fails before flushing.
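For contrast, the cache-aside pattern can be sketched as two small functions. Here the application, not the cache layer, manages the cache: reads populate it lazily on a miss, and writes go to the database and invalidate the cached entry. Function names are invented for this sketch.

```python
# Cache-aside sketch: plain dicts stand in for the cache and the database.

def cache_aside_read(key, cache, db):
    if key in cache:
        return cache[key]          # hit: serve from the cache
    value = db[key]                # miss: read from the database
    cache[key] = value             # populate the cache lazily
    return value

def cache_aside_write(key, value, cache, db):
    db[key] = value                # write only to the database
    cache.pop(key, None)           # invalidate; the next read repopulates
```

Note the difference from write-through: after a write the cache is empty for that key, so the first subsequent read pays a database round trip, but write latency only includes the one database operation.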


Frequently asked questions

What is write-through caching?
Write-through caching is a strategy where every write operation updates both the cache and the database synchronously before the write is acknowledged to the client. The cache layer intercepts the write, persists it to the cache, then immediately forwards it to the database. Only after both writes complete is the response returned.

Why does write-through keep the cache consistent with the database?
Because every write goes to both the cache and the database synchronously before acknowledgment, the cache always holds a value that matches the database. There is no inconsistency window: a read immediately after a write returns the newly written value from the cache (provided the entry has not been evicted), without any staleness risk.

When should write-through be used?
Use write-through for data that is read frequently immediately after being written, such as user profile updates, configuration values, or product details in a content management system. It is best suited for read-heavy workloads with moderate write rates where cache coherence is more important than minimising write latency.

What are the drawbacks of write-through?
Write latency is the primary cost: every write incurs two storage operations instead of one, and the write cannot return until the slower database write completes. Cache pollution is another issue: data written once and never subsequently read occupies cache memory until TTL expiry, crowding out frequently read data and reducing overall cache efficiency. For write-heavy workloads, these costs can outweigh the coherence benefit.
mermaid
flowchart LR
    App[Application] --> CacheWrite[Write to Cache]
    CacheWrite --> CacheFail{Cache write\nfailed?}
    CacheFail -->|Yes| WriteFail[Return error]
    CacheFail -->|No| DBWrite[(Write to Database)]
    DBWrite --> WriteAck[Write Acknowledged]
    WriteAck --> App
    App --> ReadReq[Read Request]
    ReadReq --> CacheHit{Cache Hit?}
    CacheHit -->|Yes| CacheReturn[Return from cache\nalways up-to-date]
    CacheHit -->|No| DBRead[(Read from Database)]
    DBRead --> PopulateCache[Populate cache]
    PopulateCache --> CacheReturn
    CacheReturn --> App