
Snippets

Cache-First, Live-Fetch Orchestration Pattern

Orchestration combining a fast path from cache and a slow path from remote. Records cache hits/misses, fetch latency, and the final outcome as metrics, while delegating side effects outward.


When cache lookups and remote fetching are combined, the logic tends to become entangled. Encapsulating everything in a single orchestration function keeps the cache-hit and cache-miss paths clean while centralizing metrics instrumentation and side-effect delegation.

Type Definitions

interface CacheProvider<T> {
  get(key: string): Promise<T | undefined>;
  set(key: string, value: T): Promise<void>;
}

interface RemoteProvider<T> {
  fetch(key: string): Promise<T>;
}

interface MetricsReporter {
  recordCacheHit(key: string, latencyMs: number): void;
  recordCacheMiss(key: string): void;
  recordFetchLatency(key: string, latencyMs: number): void;
  recordOutcome(key: string, outcome: 'success' | 'error', latencyMs: number): void;
}
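
Because TypeScript types structurally, plain objects can stand in for these interfaces during local testing without an explicit implements clause. A minimal sketch — the factory names makeInMemoryCache and makeCollectingMetrics are illustrative, not part of the snippet:

```typescript
// Minimal in-memory stand-in for CacheProvider<T>, backed by a Map.
function makeInMemoryCache<T>() {
  const store = new Map<string, T>();
  return {
    async get(key: string): Promise<T | undefined> {
      return store.get(key);
    },
    async set(key: string, value: T): Promise<void> {
      store.set(key, value);
    },
  };
}

// MetricsReporter stand-in that collects events for later inspection.
function makeCollectingMetrics() {
  const events: string[] = [];
  return {
    events,
    recordCacheHit(key: string, _latencyMs: number) { events.push(`hit:${key}`); },
    recordCacheMiss(key: string) { events.push(`miss:${key}`); },
    recordFetchLatency(key: string, _latencyMs: number) { events.push(`fetch:${key}`); },
    recordOutcome(key: string, outcome: 'success' | 'error', _latencyMs: number) {
      events.push(`${outcome}:${key}`);
    },
  };
}
```

Both return values are structurally compatible with the interfaces above, so they can be passed wherever a CacheProvider or MetricsReporter is expected.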

Orchestration Function

async function orchestrate<T>(
  key: string,
  cache: CacheProvider<T>,
  remote: RemoteProvider<T>,
  metrics: MetricsReporter,
): Promise<T> {
  const start = performance.now();

  // Fast path: cache hit
  const cached = await cache.get(key);
  if (cached !== undefined) {
    metrics.recordCacheHit(key, performance.now() - start);
    return cached;
  }

  // Slow path: live fetch
  metrics.recordCacheMiss(key);
  const fetchStart = performance.now();
  try {
    const data = await remote.fetch(key);
    metrics.recordFetchLatency(key, performance.now() - fetchStart);
    // Side effect delegated out — fire and forget; swallow write errors so a
    // failed cache.set cannot surface as an unhandled rejection
    void cache.set(key, data).catch(() => {});
    metrics.recordOutcome(key, 'success', performance.now() - start);
    return data;
  } catch (error) {
    metrics.recordOutcome(key, 'error', performance.now() - start);
    throw error;
  }
}

Usage Example

const result = await orchestrate(
  'user:42',
  redisCache,
  userApiClient,
  datadogMetrics,
);

Explanation

  • Fast path: If the cache hits, return immediately — the remote is never called.
  • Slow path: Falls back to the remote only on a cache miss. After fetching, the cache is updated as a fire-and-forget side effect.
  • Metrics: Cache hits, misses, fetch latency, and final outcome are all instrumented. Delegating to MetricsReporter keeps the orchestrator decoupled from Datadog, Prometheus, or any logging backend.
  • Side-effect delegation: Cache writes and metric calls are pushed outward. This makes the orchestration function easy to test in isolation and simple to swap implementations.
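
The testability claim above can be checked end to end with in-memory stubs. A self-contained sketch — the interfaces and orchestrator are condensed copies of the snippet so it runs standalone, and the stub names are illustrative:

```typescript
interface CacheProvider<T> {
  get(key: string): Promise<T | undefined>;
  set(key: string, value: T): Promise<void>;
}
interface RemoteProvider<T> {
  fetch(key: string): Promise<T>;
}
interface MetricsReporter {
  recordCacheHit(key: string, latencyMs: number): void;
  recordCacheMiss(key: string): void;
  recordFetchLatency(key: string, latencyMs: number): void;
  recordOutcome(key: string, outcome: 'success' | 'error', latencyMs: number): void;
}

async function orchestrate<T>(
  key: string,
  cache: CacheProvider<T>,
  remote: RemoteProvider<T>,
  metrics: MetricsReporter,
): Promise<T> {
  const start = performance.now();
  const cached = await cache.get(key);
  if (cached !== undefined) {
    metrics.recordCacheHit(key, performance.now() - start);
    return cached;
  }
  metrics.recordCacheMiss(key);
  const fetchStart = performance.now();
  try {
    const data = await remote.fetch(key);
    metrics.recordFetchLatency(key, performance.now() - fetchStart);
    void cache.set(key, data).catch(() => {});
    metrics.recordOutcome(key, 'success', performance.now() - start);
    return data;
  } catch (error) {
    metrics.recordOutcome(key, 'error', performance.now() - start);
    throw error;
  }
}

// Exercise the orchestrator with a Map-backed cache, a counting remote,
// and event-list metrics: first call misses and fetches, second call hits.
async function demo(): Promise<{ fetchCount: number; events: string[] }> {
  const store = new Map<string, string>();
  const cache: CacheProvider<string> = {
    get: async (k) => store.get(k),
    set: async (k, v) => { store.set(k, v); },
  };
  let fetchCount = 0;
  const remote: RemoteProvider<string> = {
    fetch: async (k) => { fetchCount++; return `value-for-${k}`; },
  };
  const events: string[] = [];
  const metrics: MetricsReporter = {
    recordCacheHit: (k) => events.push(`hit:${k}`),
    recordCacheMiss: (k) => events.push(`miss:${k}`),
    recordFetchLatency: (k) => events.push(`fetch:${k}`),
    recordOutcome: (k, outcome) => events.push(`${outcome}:${k}`),
  };

  await orchestrate('user:42', cache, remote, metrics); // miss → fetch → success
  await orchestrate('user:42', cache, remote, metrics); // served from cache
  return { fetchCount, events };
}
```

Running demo() shows the remote is fetched exactly once; the second call is answered entirely from the cache, and the metrics stub captures the full event sequence without any real backend.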

Applications

This pattern fits well for:

  • Importers: Skip already-processed records with a fast cache check before hitting the source.
  • Data enrichment jobs: Avoid re-enriching entities that are already complete.
  • Synchronization handlers: Prevent duplicate fetches while tracking deltas with metrics.
  • Expensive UI actions: Execute the remote call only on first load; serve subsequent requests from cache.

Because CacheProvider, RemoteProvider, and MetricsReporter are defined as interfaces, any combination of implementations — Redis/in-memory, REST/gRPC, Datadog/StatsD — can be plugged in without changing the orchestration logic.