diagram.mmd — flowchart
Edge Computing Architecture flowchart diagram

Edge computing is a distributed computing paradigm that moves data processing and application logic physically closer to the source of data — IoT devices, mobile users, or branch offices — reducing latency, bandwidth consumption, and dependence on centralized cloud data centers.

In a traditional cloud architecture, every request travels to a central cloud region potentially hundreds of milliseconds away. For latency-sensitive applications — real-time gaming, video processing, industrial IoT, augmented reality — this round-trip is unacceptable. Edge computing inserts a processing tier at the network edge: within CDN Points of Presence, carrier infrastructure, on-premise edge servers, or on the device itself.

CDN edge functions (Cloudflare Workers, AWS Lambda@Edge, Fastly Compute@Edge) run application code at CDN PoPs worldwide. They can personalize responses, perform A/B testing, handle authentication, and rewrite requests — all without a round-trip to the origin. For suitable workloads, response times can drop from roughly 200ms to under 10ms.
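One way to picture the kind of logic such an edge function runs is A/B bucketing: the variant is a deterministic hash of a stable user id, so every PoP assigns the same user to the same variant with no origin call. This is an illustrative sketch — the FNV-1a hash and the 50/50 split are arbitrary choices, not any vendor's API.

```typescript
// 32-bit FNV-1a hash: fast, dependency-free, stable across PoPs.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

// Assign a user to an experiment variant entirely at the edge.
// Mixing the experiment name in keeps buckets independent per test.
function abVariant(userId: string, experiment: string): "A" | "B" {
  return fnv1a(`${experiment}:${userId}`) % 2 === 0 ? "A" : "B";
}
```

Because the assignment is a pure function of its inputs, no coordination or session storage is needed between PoPs — a key property for logic that must run identically at hundreds of locations.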

Industrial edge nodes in factory or retail environments run containerized workloads locally (Kubernetes distributions like K3s or MicroK8s). They process sensor data, run ML inference for quality control or anomaly detection, and act autonomously when the WAN link to the cloud is unavailable — syncing results when connectivity is restored.
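A minimal sketch of the local-processing loop such a node might run, assuming a simple rolling-mean anomaly check and an in-memory buffer standing in for local edge storage. The window size and tolerance are illustrative parameters, not values from any particular deployment.

```typescript
type Reading = { sensorId: string; value: number; ts: number };
type Result = { reading: Reading; anomalous: boolean };

class EdgeProcessor {
  private window: number[] = [];
  private windowSize: number;
  private tolerance: number;
  // Stand-in for local edge storage: results accumulate here
  // while the WAN link is down.
  readonly buffer: Result[] = [];

  constructor(windowSize = 20, tolerance = 10) {
    this.windowSize = windowSize;
    this.tolerance = tolerance;
  }

  // Score one reading against the rolling mean and buffer the result.
  process(r: Reading): Result {
    const mean = this.window.length
      ? this.window.reduce((a, b) => a + b, 0) / this.window.length
      : r.value;
    const anomalous = Math.abs(r.value - mean) > this.tolerance;
    this.window.push(r.value);
    if (this.window.length > this.windowSize) this.window.shift();
    const result = { reading: r, anomalous };
    this.buffer.push(result);
    return result;
  }

  // Called when connectivity returns: hand buffered results to the
  // uplink and clear local storage.
  drain(): Result[] {
    return this.buffer.splice(0, this.buffer.length);
  }
}
```

The point of the sketch is the control flow, not the statistics: the node keeps scoring and buffering regardless of WAN state, and `drain()` is what the sync path calls once connectivity is restored.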

Edge-cloud sync replicates processed results, aggregated metrics, and model updates between edge and cloud. The cloud remains the system of record and the source of updated ML models and configuration.
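One way the sync step could look, assuming the edge ships per-sensor aggregates rather than raw readings and the cloud merges them into its record. The field names here are illustrative, not a specific protocol.

```typescript
type Aggregate = { sensorId: string; count: number; sum: number; max: number };

// Edge side: collapse raw values into one aggregate per sensor
// before anything crosses the WAN — this is where the bandwidth
// saving comes from.
function aggregate(sensorId: string, values: number[]): Aggregate {
  return {
    sensorId,
    count: values.length,
    sum: values.reduce((a, b) => a + b, 0),
    max: values.length ? Math.max(...values) : -Infinity,
  };
}

// Cloud side: fold an incoming edge batch into the stored aggregate.
// Counts and sums add, max takes the larger value — the merge is
// associative, so batches delayed by an outage can arrive in any order.
function merge(cloud: Aggregate, edge: Aggregate): Aggregate {
  return {
    sensorId: cloud.sensorId,
    count: cloud.count + edge.count,
    sum: cloud.sum + edge.sum,
    max: Math.max(cloud.max, edge.max),
  };
}
```

Keeping the merge on the cloud side preserves the cloud as the system of record: edge nodes only ever append batches, and the authoritative totals exist in one place.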

See CDN Edge Caching for how static content is served from edge nodes, and Cloud Monitoring Pipeline for aggregating telemetry from edge locations.


Frequently asked questions

What is edge computing?
Edge computing is a distributed computing paradigm that moves data processing and application logic physically closer to the source of data — IoT devices, mobile users, or branch offices — reducing latency, bandwidth consumption, and dependence on centralized cloud regions.

How does edge computing reduce latency?
By placing compute at CDN PoPs, carrier infrastructure, or on-premise edge servers, processing happens within milliseconds of the user or device. Rather than a round-trip to a cloud region potentially hundreds of milliseconds away, edge functions respond locally — often dropping response times from roughly 200ms to under 10ms for suitable workloads.

When should I use edge computing instead of centralized cloud?
Use edge computing when latency requirements are under 20ms, when regulatory data residency rules prohibit sending data to centralized regions, or when WAN connectivity is unreliable and local autonomous processing is needed. Use centralized cloud for workloads where latency is not critical, where data must be aggregated across many edge sites, or where heavy compute (ML training, batch processing) benefits from cloud-scale resources.
```mermaid
flowchart LR
    IoT([IoT Sensors\nand Devices]) --> EdgeNode[Edge Node\nK3s / MicroK8s]
    MobileUser([Mobile Users]) --> CDNEdge[CDN Edge PoP\nCloudflare / Lambda@Edge]
    BranchOffice([Branch Office\nClients]) --> EdgeNode
    EdgeNode --> LocalProcess[Local Processing\nML inference, filtering]
    CDNEdge --> EdgeFunction[Edge Function\nauth, personalisation, A/B]
    LocalProcess --> LocalStore[(Local Edge Storage\nbuffer when offline)]
    LocalStore --> Sync{WAN Available?}
    Sync -->|Yes| CloudSync[Sync to Cloud\naggregated results]
    Sync -->|No| OfflineMode[Offline Autonomous\noperation continues]
    EdgeFunction --> CacheHit{Cached\nResponse?}
    CacheHit -->|Hit| ReturnCached([Return from edge cache\nsub-10ms])
    CacheHit -->|Miss| OriginFetch[Fetch from Origin\nCloud Region]
    OriginFetch --> UpdateCache[Update Edge Cache]
    UpdateCache --> ReturnFetched([Return to User])
    CloudSync --> CloudRegion[Cloud Data Centre\nprocessing and storage]
    CloudRegion --> ModelUpdate[Updated ML Models\nor Config]
    ModelUpdate --> EdgeNode
```