diagram.mmd — flowchart
Database Sharding flowchart diagram

Database sharding is a horizontal scaling technique that partitions a dataset across multiple independent database instances — called shards — so that each node holds only a subset of the total data.

This diagram shows how a shard router (sometimes called a proxy or query router) sits between the application and the database tier. When the application issues a query, the router inspects the shard key — a field chosen to distribute rows evenly, such as user ID or tenant ID. A hash function or range map translates the key value into a shard ID, which identifies which physical database node owns the data for that key.
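As a minimal sketch of the routing step described above, the translation from shard key to shard ID can look like the following. The shard count and key format here are hypothetical; a real router would also hold a connection map from shard ID to the physical database node.

```python
import hashlib

NUM_SHARDS = 4  # hypothetical shard count


def shard_for(shard_key: str) -> int:
    """Map a shard key (e.g. a user or tenant ID) to a shard ID.

    Uses a stable hash (MD5) rather than Python's built-in hash(),
    which is randomized per process and therefore unsuitable for routing.
    """
    digest = hashlib.md5(shard_key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS


# The router would then execute the query on the connection
# associated with shard_for("user:12345").
```

Note that modulo-based routing like this is simple but remaps most keys when NUM_SHARDS changes, which is why production routers often use consistent hashing or an explicit range map instead.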

Each shard is a fully independent database instance. With range-based sharding, Shard 1 might hold user IDs 1–1,000,000, Shard 2 user IDs 1,000,001–2,000,000, and so on. In consistent hash sharding, the key is hashed onto a ring and each node owns an arc of the ring. Both approaches aim to minimize hot spots — situations where a disproportionate share of traffic lands on one shard.
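The consistent-hash ring described above can be sketched as follows. The node names and virtual-node count are illustrative, not taken from any particular router implementation; virtual nodes are a common technique to smooth out arc sizes on the ring.

```python
import bisect
import hashlib


def _hash(value: str) -> int:
    # Stable hash onto the ring's key space.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)


class ConsistentHashRing:
    """Minimal consistent-hash ring: each node owns the arcs that
    end at its virtual-node points."""

    def __init__(self, nodes, vnodes=100):
        # Place `vnodes` points per node on the ring, sorted by position.
        self._ring = sorted(
            (_hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._points = [point for point, _ in self._ring]

    def node_for(self, key: str) -> str:
        # Walk clockwise to the first virtual node at or after the
        # key's hash; wrap around to the start of the ring if needed.
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]


ring = ConsistentHashRing(["shard-0", "shard-1", "shard-2", "shard-3"])
```

Because a key's position on the ring is independent of the node list, adding or removing a node only reassigns the keys on the affected arcs.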

Sharding solves write throughput limits that replication alone cannot address: no matter how many replicas you add, all writes still funnel through one primary. Sharding splits the write load across N primaries. The cost is cross-shard queries: joins, aggregations, or transactions that span shard boundaries require scatter-gather execution at the router layer, which is slow and complex.
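A scatter-gather aggregation at the router layer might look like this sketch, where an in-memory dict stands in for real per-shard SQL execution: the router fans the query out to every shard, each shard computes a partial result, and the router merges the partials.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for per-shard data; a real router would run
# e.g. "SELECT SUM(total) FROM orders WHERE ..." on each shard's connection.
SHARDS = {
    0: [120, 340],      # order totals held by shard 0
    1: [75],
    2: [900, 15, 60],
    3: [],
}


def query_shard(shard_id: int) -> int:
    # Each shard computes its partial aggregate locally.
    return sum(SHARDS[shard_id])


def scatter_gather_sum() -> int:
    # Scatter: query all shards in parallel. Gather: merge partials.
    with ThreadPoolExecutor() as pool:
        partials = pool.map(query_shard, SHARDS)
    return sum(partials)
```

The merge step is trivial for a SUM, but joins or ORDER BY ... LIMIT across shards force the router to pull and combine much larger intermediate results, which is where the cost of cross-shard queries comes from.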

Sharding is often paired with per-shard replication. Each shard primary has its own set of replicas for read scaling and failover. This combination is visible in the Database Replication diagram. For lighter-weight partitioning within a single instance, see Partitioned Table Architecture. At the application level, Connection Pooling manages the increased number of database connections that a sharded architecture requires.


Frequently asked questions

What is database sharding?
Database sharding is a horizontal scaling technique that partitions a dataset across multiple independent database instances (shards), each holding only a subset of the total rows. A shard router maps each request to the correct shard using a shard key, allowing write throughput to scale beyond what a single database can handle.
What is the difference between sharding and partitioning?
Sharding distributes data across multiple separate database instances — each shard is a fully independent server with its own storage, compute, and connections. Partitioning divides a large table into sub-tables within a single database instance. Sharding provides horizontal write scalability across machines; partitioning improves query performance and manageability within one server.
What is the difference between range sharding and hash sharding?
Range sharding assigns rows to shards based on value ranges of the shard key (e.g., user IDs 1–1M on Shard 1, 1M–2M on Shard 2). It is simple and supports efficient range scans but risks hot spots if inserts cluster at the high end of the range. Consistent hash sharding hashes the key onto a ring and distributes load more evenly, but range queries require scatter-gather across all shards.
What are the main challenges of sharding?
Cross-shard joins and transactions are the hardest problems: queries that span multiple shards require scatter-gather execution at the router layer and cannot use standard SQL joins efficiently. Re-sharding (redistributing data when adding new shards) is operationally complex. Choosing the wrong shard key — one that creates hot spots or frequently requires cross-shard queries — often makes sharding worse than the problem it was meant to solve.
How does sharding differ from read replicas?
Read replicas scale read throughput: every replica holds a full copy of the data and serves only SELECT queries. Sharding scales write throughput: each shard holds a subset of the data and accepts writes for its partition. The two are complementary — a common production architecture pairs per-shard replication with horizontal sharding to scale both reads and writes.
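The re-sharding cost mentioned above can be made concrete with a small sketch: under naive modulo hashing, changing the shard count remaps most keys, which is exactly the movement that consistent hashing (moving only roughly 1/N of keys) avoids. The key format below is hypothetical.

```python
import hashlib


def shard_modulo(key: str, num_shards: int) -> int:
    # Naive modulo routing over a stable hash.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % num_shards


keys = [f"user:{i}" for i in range(10_000)]

# Count keys whose shard assignment changes when growing 4 -> 5 shards.
# Mathematically, h % 4 == h % 5 only when h % 20 < 4, so about 4/5
# of keys must move under modulo hashing.
moved = sum(shard_modulo(k, 4) != shard_modulo(k, 5) for k in keys)
print(f"{moved / len(keys):.0%} of keys moved")
```

This is why routers that expect to add shards over time favor consistent hashing or directory-based lookup tables over bare modulo arithmetic.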
```mermaid
flowchart TD
    App[Application] --> Router{Shard Router}
    Router --> HashFn[Hash shard key]
    HashFn --> ShardID{Shard ID?}
    ShardID -->|0| Shard0[(Shard 0\nUsers 0-999k)]
    ShardID -->|1| Shard1[(Shard 1\nUsers 1M-1.9M)]
    ShardID -->|2| Shard2[(Shard 2\nUsers 2M-2.9M)]
    ShardID -->|3| Shard3[(Shard 3\nUsers 3M+)]
    Shard0 --> Result[Query Result]
    Shard1 --> Result
    Shard2 --> Result
    Shard3 --> Result
    Result --> App
```