This post looks at three levels of caching:
Request Level Cache - HashMap
Application Level Cache - Caffeine Cache
Cluster Level Cache - Redis
DoorDash standardized caching across its microservices to address fragmentation and performance issues. The new multi-layered system boosts scalability while simplifying adoption for engineering teams.
Problems Faced
Teams used varied tools like Caffeine, Redis Lettuce, and HashMaps, leading to repeated issues such as cache staleness, Redis overload, and inconsistent key schemas. This fragmented approach complicated observability and debugging, especially under high traffic in services like DashPass.
Core Solution
Engineers created a shared Kotlin-based library with two key interfaces: CacheManager for cache creation and fallbacks, and CacheKey for abstracting keys. This enables uniform API calls via dependency injection and polymorphism, hiding backend details from business logic.
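A minimal sketch of what those two interfaces might look like; DoorDash hasn't published the exact signatures, so the names and shapes below are assumptions, not the library's real API:

// Illustrative only; these names and signatures are assumptions.
interface CacheKey<V> {
    val cacheName: String   // logical cache this key belongs to
    val key: String         // unique identifier within that cache
}

interface CacheManager {
    // Fetch through the cache; on a miss, call [fallback] (the source
    // of truth) and populate the cache layers with the result.
    suspend fun <V> withCache(key: CacheKey<V>, fallback: suspend () -> V?): V?
}

Because callers only depend on these interfaces (injected via DI), the backing store can change without touching business logic.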
Cache Layers
Request Local Cache: a HashMap bound to a single request's lifecycle, for ultra-fast access.
Local Cache: Caffeine-powered, shared across workers in one JVM.
Redis Cache: Distributed via Lettuce, accessible across pods in a Redis cluster.
Reads try the fastest layer first and fall through to the slowest (Redis); a value found in a lower layer, or at the source of truth, backfills the faster layers, as in the sketch below.
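A rough sketch of that read-through fallback, using hypothetical CacheLayer types rather than the library's real internals:

// Hypothetical layer abstraction; not DoorDash's actual types.
interface CacheLayer {
    suspend fun get(key: String): String?
    suspend fun put(key: String, value: String)
}

class LayeredCache(
    private val requestLocal: CacheLayer,  // HashMap bound to the request
    private val localCache: CacheLayer,    // Caffeine, shared across the JVM
    private val redisCache: CacheLayer     // Lettuce-backed Redis cluster
) {
    // Try the fastest layer first; on each miss fall through,
    // then backfill every faster layer with whatever was found.
    suspend fun get(key: String, loadFromSource: suspend () -> String?): String? {
        requestLocal.get(key)?.let { return it }
        localCache.get(key)?.let { v ->
            requestLocal.put(key, v)
            return v
        }
        redisCache.get(key)?.let { v ->
            localCache.put(key, v)
            requestLocal.put(key, v)
            return v
        }
        val v = loadFromSource() ?: return null  // source of truth (DB/service)
        redisCache.put(key, v)
        localCache.put(key, v)
        requestLocal.put(key, v)
        return v
    }
}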
Key Features
Runtime controls let operators toggle layers, adjust TTLs, or enable shadow mode (sampling cached values against the source of truth for validation). Built-in metrics track hits/misses, latency, and staleness, with logging for observability.
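Shadow mode can be pictured as: on a small sample of cache reads, also fetch from the source of truth and compare the two. A sketch under that assumption, with hypothetical names (the post doesn't show DoorDash's actual mechanism):

import kotlin.random.Random

// Illustrative shadow-mode check; sampleRate and names are assumptions.
class ShadowValidator(private val sampleRate: Double = 0.01) {
    // On a sampled fraction of cache hits, also read the source of
    // truth and report whether the cached value matched (staleness).
    suspend fun <V> validate(
        cached: V,
        loadFromSource: suspend () -> V?,
        onResult: (matched: Boolean) -> Unit  // e.g. emit a staleness metric
    ): V {
        if (Random.nextDouble() < sampleRate) {
            val fresh = loadFromSource()
            onResult(fresh == cached)
        }
        return cached
    }
}

Keeping the sample rate low bounds the extra load on the source of truth while still surfacing staleness trends in metrics.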
Client Request
|
v
+--------------------+
| 1. Request Local | (HashMap, request-lifetime)
| Cache (Fastest) |
+--------------------+
| Miss?
v Yes
+--------------------+
| 2. Local Cache | (Caffeine, JVM-wide)
+--------------------+
| Miss?
v Yes
+--------------------+
| 3. Redis Cache | (Lettuce, Cluster-wide)
+--------------------+
| Miss?
v Yes
+--------------------+
| Source of Truth | (DB/Service)
+--------------------+
^
| Populate all layers on hit