Networks at the Edge: Designing Low-Latency, High-Throughput Systems
MTA
Architectures and protocols for edge computing, CDN design, and real-time services
2nd Edition
*Networks at the Edge* provides a comprehensive technical guide to architecting and operating globally distributed systems that prioritize low latency and high throughput. The book argues that as user expectations shift toward real-time interactivity, the "edge imperative" is pushing compute, storage, and networking out of centralized data centers and into locations physically closer to the end user. By dissecting the specific requirements of streaming media, multiplayer gaming, and IoT, the text establishes a foundation for designing systems that can overcome the physical constraints of distance and the inherent unreliability of the public internet.
The technical core of the book explores the anatomy of edge platforms, emphasizing the coordination between a high-performance data plane and a strategic control plane. It details critical mechanics such as request routing via GeoDNS and Anycast, advanced caching strategies that balance freshness with performance, and the transition to modern protocols like QUIC and HTTP/3. Specialized chapters on load balancing, queueing theory, and congestion control (covering both loss-based algorithms and model-based ones such as BBR) give practitioners the tools to manage high-concurrency workloads while avoiding tail-latency explosions and cascading failures.
Beyond architecture, the book focuses on the operational and governance challenges of distributed scale. It introduces Site Reliability Engineering (SRE) principles tailored for the edge, such as error budgets, automated remediation, and chaos engineering. Significant attention is given to the emerging role of edge compute runtimes, particularly WebAssembly, and the complexities of managing data consistency through Conflict-free Replicated Data Types (CRDTs). The final chapters address the non-technical constraints of global deployment, including data residency laws (GDPR), privacy-enhancing technologies, and cost modeling.
Ultimately, the book frames edge computing not as a single product, but as a discipline. Through a series of real-world "war stories" and case studies, it illustrates the subtle pitfalls of distributed design, such as hidden cross-service latency and thundering herd problems. It concludes by providing a strategic roadmap for organizations to transition from initial pilots to global deployments, ensuring that performance optimizations are balanced with security, reliability, and financial sustainability.
This book is designed for systems architects, site reliability engineers (SREs), and backend developers who are responsible for building and scaling performance-critical applications. It is particularly valuable for professionals working on live streaming, multiplayer gaming, or industrial IoT platforms where sub-millisecond latency is a competitive necessity. Readers looking to transition from centralized cloud architectures to a distributed edge-first strategy will find the technical trade-offs and "war stories" especially instructive.
MixCache.com
January 14, 2026
63,951 words
4 hours 29 minutes