diagram.mmd — flowchart
Consensus Algorithm flowchart diagram

A consensus algorithm enables a cluster of distributed nodes to agree on a single value or sequence of values despite network partitions and node failures — the fundamental building block for replicated state machines, distributed databases, and coordination services.

What the diagram shows

The diagram models the Raft consensus algorithm's core flow across a 5-node cluster. The flow begins with Leader Election: if no leader exists or a heartbeat timeout fires, a node increments its term and transitions to Candidate, broadcasting RequestVote RPCs to all peers. Each node grants at most one vote per term; a candidate that collects votes from a majority (3 of 5) becomes the new Leader.
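The election round above can be sketched in a few lines. This is a simplified model, not a full implementation: `send_request_vote(peer, term)` is a hypothetical transport callback assumed to return True when the peer grants its vote, and real Raft candidates also handle timeouts, concurrent elections, and log up-to-date checks.

```python
def run_election(node_id, peers, current_term, send_request_vote):
    """Start an election: bump the term, vote for self, ask every peer.

    `send_request_vote` is an assumed transport hook; it returns True
    if the peer grants its vote for the given term.
    """
    term = current_term + 1
    votes = 1  # the candidate votes for itself
    for peer in peers:
        if send_request_vote(peer, term):
            votes += 1
    cluster_size = len(peers) + 1          # peers plus this node
    majority = cluster_size // 2 + 1       # 3 of 5 in the diagram's cluster
    return ("leader", term) if votes >= majority else ("follower", term)
```

With 5 nodes, two peer grants plus the candidate's own vote reach the 3-vote majority; one grant does not, and the node reverts to Follower as the diagram's "lost election" branch shows.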

Once elected, the Leader processes client writes via the Log Replication phase: it appends the new entry to its own log, sends AppendEntries RPCs to all followers in parallel, and waits for a majority acknowledgment. When the majority confirms, the entry is committed and the Leader applies it to the state machine and responds to the client. Followers apply committed entries in order, maintaining identical state machine replicas.
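The replication phase follows the same majority rule. The sketch below models only the commit decision; `append_entries(follower, entry)` is a hypothetical callback standing in for the AppendEntries RPC (assumed to return True on acknowledgment), and retries for unreachable nodes are omitted.

```python
def replicate(entry, log, followers, append_entries):
    """Leader-side replication: append locally, then commit once a
    majority of the cluster (leader included) holds the entry.

    `append_entries` is an assumed RPC hook returning True when the
    follower acknowledges the entry.
    """
    log.append(entry)  # step 1: append to the leader's own log
    acks = 1           # the leader counts itself toward the majority
    for follower in followers:
        if append_entries(follower, entry):
            acks += 1
    cluster_size = len(followers) + 1
    return acks >= cluster_size // 2 + 1   # True means committed
```

Note that the entry commits as soon as a majority acknowledges it; slow or failed followers catch up later, which is why the diagram's "Retry unreachable nodes" branch does not block the client response.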

Why this matters

Consensus is hard because you must handle the intersection of concurrency, partial failure, and network unreliability. Raft was designed to be more understandable than Paxos by partitioning the problem into leader election, log replication, and safety — each with well-defined invariants. Any system that claims strong consistency (etcd, CockroachDB, TiKV, Consul) uses a consensus algorithm under the hood. For leader election as a standalone use case, see Leader Election. For the locking primitive built on top of consensus stores, see Distributed Locking.


Frequently asked questions

What is a consensus algorithm?
A consensus algorithm is a protocol that allows a cluster of distributed nodes to agree on a single value or ordered sequence of values, even when some nodes crash or network messages are delayed — forming the foundation for replicated state machines and distributed databases.
How does Raft work?
Raft partitions consensus into three sub-problems: leader election (nodes vote to elect a single leader per term), log replication (the leader appends entries and waits for majority acknowledgment before committing), and safety (a candidate can win an election only if its log is at least as up to date as each voter's, which guarantees an elected leader holds every committed entry).
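The safety sub-problem hinges on the vote-granting check: a voter compares the candidate's last log term and index against its own. A minimal sketch of that comparison, with hypothetical parameter names:

```python
def candidate_log_ok(cand_last_term, cand_last_index, my_last_term, my_last_index):
    """Raft's election restriction, as applied by a voter: grant a vote
    only if the candidate's log is at least as up to date as ours.
    Compare the last entries' terms first; break ties by log length.
    """
    if cand_last_term != my_last_term:
        return cand_last_term > my_last_term
    return cand_last_index >= my_last_index
```

Because a committed entry is on a majority of nodes, any candidate that wins a majority of votes must have passed this check against at least one node holding that entry, so no committed entry can be lost across leader changes.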
When should you use a consensus algorithm?
Use consensus when you need strong consistency across replicated nodes — for example, in distributed databases, coordination services (etcd, ZooKeeper), distributed job schedulers, or any system that must make exactly one node the authoritative decision-maker at any time.
```mermaid
flowchart TD
    Start([Node detects\nleader timeout]) --> BecomeCandidate[Increment term\nBecome Candidate]
    BecomeCandidate --> RequestVotes[Broadcast RequestVote\nRPCs to all peers]
    RequestVotes --> VoteCount{Received majority\nvotes?}
    VoteCount -->|No - lost election| Follower[Revert to Follower\nawait new election]
    VoteCount -->|Yes| BecomeLeader[Become Leader\nSend heartbeats]
    BecomeLeader --> ClientWrite[Client sends\nwrite request]
    ClientWrite --> AppendLocal[Leader appends entry\nto local log]
    AppendLocal --> AppendEntries[Broadcast AppendEntries\nRPCs to all followers]
    AppendEntries --> MajorityAck{Majority of nodes\nacknowledged?}
    MajorityAck -->|No - node failures| WaitRetry[Retry unreachable\nnodes]
    MajorityAck -->|Yes| CommitEntry[Commit log entry\nApply to state machine]
    CommitEntry --> RespondClient[Respond success\nto client]
    RespondClient --> NotifyFollowers[Notify followers\nto commit entry]
    NotifyFollowers --> FollowersApply[Followers apply entry\nto their state machines]
```