Stale caching has long been the secret behind the speed and reliability of deco.cx websites. With ecommerce APIs often experiencing latency of 1 to 7 seconds, crafting a rendering engine that responds on a sub-second timescale requires a meticulously designed caching layer. Today, we're excited to announce a pivotal shift in how stale caching operates on deco.cx, introducing a simpler API for developers and significantly improving observability.
In the past, using deco.cx's stale caching layer meant employing a custom fetch implementation provided by the deco.cx framework. This custom fetch modified the fetched URL, routing the request through deco's CDN and incorporating stale caching. For example, when fetching https://example.com, the framework would actually retrieve https://decocache.com?src="https://example.com". However, with the growing number of apps on our deco.store, some services already provided their own libraries for interacting with their APIs, making our custom fetch implementation impractical. Moreover, integrating tracing was challenging because the framework had no control over the HTTP connections. Consequently, we're transitioning away from the custom fetch implementation and relocating the caching layer to our loaders. This new API offers seamless support for both stale caching and tracing.
To start using the new loaders API, set the environment variable:
export ENABLE_LOADER_CACHE=true
By default, stale caching remains inactive in loaders. Activating this feature requires a simple modification in your loader's code:
// ... loader code
export const cache = 'stale-while-revalidate'
You have the option to set the cache variable to either stale-while-revalidate or no-store, with the default being no-store.
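Putting it together, a stale-cached loader file might look like the following sketch. The Props shape and API URL are placeholder assumptions, and we assume the loader function is the module's default export:
interface Props {
  collectionId: string
}

// Placeholder loader: fetches a hypothetical collection endpoint
const loader = async (props: Props, req: Request) => {
  const response = await fetch(`https://api.example.com/collections/${props.collectionId}`)
  return response.json()
}

// Opt this loader into stale caching
export const cache = 'stale-while-revalidate'

export default loader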
Now, your loader is stale cached! To confirm, open a page where your loader is used, add the ?__d query string to the URL, and observe the tracing information in the terminal, including cache states such as HIT, MISS, STALE, and BYPASS.
[200] 1312ms /camisa-masc-classic-linen-ml-verde-forest-46661-206/p
[====================] 1312ms 200 /camisa-masc-classic-linen-ml-verde-forest-46661-206/p
[ ] 0ms load-page
[ ] 23ms router
[ ===================] 1234ms load-data
[ ===================] 1233ms Product Page@sections
[ ] 0ms PDP Loader@data.slug
[ ===================] 1231ms STALE PDP Loader@extensions.0
[ ] 47ms render-to-string
HIT: Cache considered fresh (standard max-age: 60 seconds).
STALE: Cache no longer fresh, but stale content was served (background revalidation triggered).
MISS: Loader is cacheable, but the data was not in the cache.
BYPASS: Loader is not cacheable, possibly due to specific configurations.
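To make these states concrete, here is a rough, generic sketch of the stale-while-revalidate pattern itself. It is not deco.cx's implementation; the in-memory store and the 60-second window are illustrative assumptions:
// Generic stale-while-revalidate sketch, not deco.cx's implementation.
// The in-memory Map and 60-second freshness window are assumptions.
interface Entry {
  value: unknown
  storedAt: number
}

const store = new Map<string, Entry>()
const maxAgeMs = 60_000 // mirrors the 60-second freshness window above

async function staleWhileRevalidate(
  key: string,
  load: () => Promise<unknown>,
): Promise<unknown> {
  const entry = store.get(key)
  if (!entry) {
    // MISS: nothing cached yet, so wait for the loader
    const value = await load()
    store.set(key, { value, storedAt: Date.now() })
    return value
  }
  if (Date.now() - entry.storedAt <= maxAgeMs) {
    // HIT: entry is still fresh
    return entry.value
  }
  // STALE: serve the old value immediately and refresh in the background
  load()
    .then((value) => store.set(key, { value, storedAt: Date.now() }))
    .catch(() => {})
  return entry.value
}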
Note that the cache key varies with the props content passed to the loader. To vary the loader's response based on the Request object, read the next section.
Some loaders may need to vary based on properties available only in the Request object, such as query strings, cookies, etc. To achieve this, open your loader's file and export the cacheKey function:
export const cacheKey = (req: Request, ctx: AppContext): string => '';
This function allows the developer to precisely describe how the cache should vary.
To illustrate, let's consider a practical scenario. Suppose you're building a loader for fetching a paginated response, with the pagination information stored in a ?page= parameter. Here's how you can correctly implement this loader:
interface Props {
  // ... loader props
}

const loader = (props: Props, req: Request, ctx: AppContext) => {
  // Implementation details for fetching paginated response
}

export const cache = 'stale-while-revalidate'

// Vary the cache with the page parameter
export const cacheKey = (req: Request, ctx: AppContext) => {
  const url = new URL(req.url)
  // Fall back to an empty string when the page parameter is missing
  return url.searchParams.get('page') ?? ''
}
In this example, the loader function encapsulates the logic for fetching a paginated response, while the cache setting enables stale-while-revalidate caching. The cacheKey function varies the cache based on the page parameter, so each page of results is cached and revalidated independently.
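Since cacheKey receives the full Request, the same approach extends to other request properties such as cookies. As a hedged sketch, here is how a loader could vary its cache by a cookie instead; the currency cookie is a hypothetical example, not part of any deco.cx app:
// Hypothetical example: vary the cache by a 'currency' cookie
export const cacheKey = (req: Request, ctx: AppContext) => {
  const cookies = req.headers.get('cookie') ?? ''
  const match = cookies.match(/(?:^|;\s*)currency=([^;]+)/)
  // Requests without the cookie share one cache entry under 'default'
  return match?.[1] ?? 'default'
}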
These changes mark the beginning of forthcoming improvements in both performance and observability for websites built on deco.cx. We'll start testing this new feature and gradually roll it out for our customers by default.
For more info, check out the original PR on GitHub.