diagram.mmd — flowchart
Distributed Locking flowchart diagram

Distributed locking is a coordination mechanism that ensures only one node in a cluster can access a shared resource or execute a critical section at any given time, preventing race conditions that would corrupt shared state.

What the diagram shows

The diagram shows two competing Worker Nodes (Node A and Node B) both attempting to acquire a lock from a Lock Service (typically Redis with the Redlock algorithm, ZooKeeper, or etcd). Node A sends a SETNX (set if not exists) command with a lock key and a TTL. The Lock Service grants the lock to Node A and returns success; Node B's concurrent attempt receives a failure response because the key already exists.
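The acquisition step can be sketched in Python. This is a minimal in-process simulation of the SETNX-with-TTL semantics (a plain dict stands in for the Redis key store), not a client for a real lock service:

```python
import time

# Minimal in-process sketch of the SETNX-with-TTL acquisition step.
# Against real Redis this would be `SET lock-key <token> NX PX 30000`;
# the dict below just simulates that key store.

store = {}  # key -> (holder_token, expiry_timestamp)

def acquire(key, token, ttl_seconds, now=None):
    """Return True if the lock was granted, False if another node holds it."""
    now = time.monotonic() if now is None else now
    entry = store.get(key)
    if entry is not None and entry[1] > now:
        return False  # key exists and has not expired: lock denied
    store[key] = (token, now + ttl_seconds)  # set-if-not-exists succeeds
    return True

# Node A wins the race; Node B's concurrent attempt is denied.
print(acquire("lock-key", "node-a", 30))  # True
print(acquire("lock-key", "node-b", 30))  # False
```

Passing an expired timestamp shows the auto-release path from the diagram: once the TTL has elapsed, a competing node's acquire succeeds even though the key was never explicitly deleted.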

Node A proceeds to execute the Critical Section — for example, processing a job or updating a shared counter. Upon completion, Node A explicitly releases the lock by deleting the key (using a Lua script to ensure it only deletes its own lock token, not one acquired by a different node after TTL expiry). If Node A crashes mid-execution, the TTL ensures the lock expires automatically, preventing deadlock.
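The token-checked release can be sketched as follows. The Lua script in the comment is the widely used compare-and-delete pattern from the Redis distributed-lock recipe; the Python function below merely mirrors its logic against an in-process dict:

```python
# Sketch of the token-checked release. In Redis this runs atomically as a
# Lua script so a node never deletes a lock it no longer owns:
#
#   if redis.call("GET", KEYS[1]) == ARGV[1] then
#       return redis.call("DEL", KEYS[1])
#   end
#   return 0
#
# The dict below simulates the key store; the token values are illustrative.

store = {"lock-key": "node-a-token"}

def release(key, token):
    """Delete the lock only if we still hold it (our token matches)."""
    if store.get(key) == token:
        del store[key]
        return True
    return False  # lock expired and was re-acquired by another node

print(release("lock-key", "node-b-token"))  # False: not our lock
print(release("lock-key", "node-a-token"))  # True: safely released
```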

Why this matters

Distributed locking prevents duplicate work in job processing: without it, multiple workers may pick up the same job from a queue and process it more than once, causing duplicate side effects. The critical subtleties are lock TTL tuning (too short and a slow job loses its lock mid-execution; too long and a crashed node blocks progress) and fencing tokens (monotonically increasing IDs that let downstream systems reject requests from a node that has lost its lock). For leader election, a closely related primitive, see Leader Election.
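Fencing tokens can be illustrated with a small sketch: a hypothetical downstream store that remembers the highest token it has seen and rejects any write carrying an older one. The class name and values here are illustrative, not part of any real API:

```python
# Illustrative sketch of fencing tokens: the lock service hands out a
# monotonically increasing token with every grant, and downstream storage
# rejects writes carrying a token older than the highest one it has seen.

class FencedStore:
    def __init__(self):
        self.highest_token = 0
        self.value = None

    def write(self, fencing_token, value):
        if fencing_token < self.highest_token:
            return False  # stale holder: its lock was lost while it was paused
        self.highest_token = fencing_token
        self.value = value
        return True

store = FencedStore()
print(store.write(33, "from node A"))  # True: token 33 accepted
print(store.write(34, "from node B"))  # True: B acquired after A's TTL expired
print(store.write(33, "late write"))   # False: A's delayed write is rejected
```

This is what protects the resource when a paused node (a long GC pause, say) resumes and keeps writing after its TTL expired and another node took the lock.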


Frequently asked questions

What is distributed locking?
Distributed locking is a coordination mechanism that ensures only one node in a cluster can enter a critical section or access a shared resource at any given time, preventing race conditions that would corrupt shared state across multiple processes or machines.

How does distributed locking work?
A node sends an atomic set-if-not-exists (SETNX) command with a unique lock token and a TTL to a shared coordination service (Redis, etcd, or ZooKeeper). If the key does not exist, the lock is granted. Competing nodes receive a failure. The lock holder releases it explicitly using a Lua script to ensure only its own token is deleted; the TTL provides automatic release if the holder crashes.

When should you use distributed locking?
Use distributed locking when multiple workers compete over a shared resource that cannot tolerate concurrent access — for example, deduplicating scheduled jobs, coordinating database migrations, or enforcing rate limits across a cluster of stateless services.
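The wait-and-retry path from the diagram can be sketched as an acquisition loop with exponential backoff and jitter. Here `try_acquire` is a hypothetical stand-in for the SETNX call against the lock service:

```python
import random
import time

# Sketch of the wait-and-retry path: a worker that fails to acquire the
# lock backs off (with jitter) before trying again. `try_acquire` stands
# in for the SETNX call against the lock service.

def acquire_with_backoff(try_acquire, attempts=5, base_delay=0.05):
    for attempt in range(attempts):
        if try_acquire():
            return True
        # exponential backoff with jitter avoids synchronized retry storms
        time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return False

# Simulate a lock that frees up on the third attempt.
results = iter([False, False, True])
print(acquire_with_backoff(lambda: next(results), base_delay=0.001))  # True
```

The jitter matters: if every denied worker retried after the same fixed delay, they would all hit the lock service at the same instant again.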
```mermaid
flowchart LR
    NodeA[Worker Node A] -->|SETNX lock-key TTL=30s| LockSvc[(Lock Service\nRedis / etcd)]
    NodeB[Worker Node B] -->|SETNX lock-key TTL=30s| LockSvc
    LockSvc -->|Lock granted| NodeA
    LockSvc -->|Lock denied - key exists| NodeB
    NodeA --> Critical[Execute Critical Section\nProcess job / Update resource]
    NodeB --> RetryWait[Wait & Retry\nafter backoff]
    Critical --> Release[Release Lock\nDEL lock-key with token check]
    Release --> LockSvc
    RetryWait -->|Retry acquire| LockSvc
    Critical -.->|Node crashes| TTLExpiry[TTL expires\nLock auto-released]
    TTLExpiry --> LockSvc
```