Today, we're thrilled to introduce a groundbreaking feature that will redefine your website's performance. Imagine unlocking the low latencies of the edge, eliminating the dependency on third-party APIs – welcome to the era of Async Rendering.
In the realm of React applications, the conventional approach to Server-Side Rendering (SSR) is the fetch-then-render model, where rendering is split into two steps: one for data fetching and the other for generating HTML. To illustrate, let's envision a typical scenario on an e-commerce website's home page. This page comprises two shelves, each displaying a collection of products. To populate these shelves, the application needs to make two requests to the e-commerce API.
Now, here's where the challenge arises: React, following the fetch-then-render paradigm, insists that both requests must be completed before any server-side rendering (SSR) can take place. Picture this – a user lands on the home page, eager to explore the latest products, and two API requests are initiated by the edge server. If, for any reason, one of these requests hangs or encounters a delay, the entire SSR process comes to a standstill, leaving the user in a frustrating waiting game.
This latency bottleneck becomes particularly evident in data-rich applications, hindering the user experience and potentially leading to abandonment. It's a scenario where the slowest API response time dictates the overall rendering speed, introducing unnecessary delays that impact user satisfaction.
For the nerds in the audience, we can derive a simple equation for the expected waiting time of the fetch-then-render model. Say that, to render a page, we perform `n` requests in parallel, and each request has a probability `p` of being considered "fast". Then, for the page to be "fast", all requests need to be "fast", thus the probability of the page being "fast" is:

p_fast = p^n
Usually, 99% of requests respond within 500ms, while slow requests take around 3 seconds. Let's call these latencies l_fast and l_slow respectively. Thus, the expected latency for a web page in the fetch-then-render model is:
l_page = l_fast * p_fast + l_slow * (1-p_fast)
Due to issues with commerce APIs, we have some customers that perform 97 requests to render a single page. Plugging these numbers into the equation, the expected page latency is around 2 seconds, even though 99% of the time each request responds in less than 500ms! This means that even if we improve the APIs, the mathematical model behind this approach prohibits us from being fast. Note that this model also applies to caching: even with a 99% cache hit rate (in practice, hit rates are closer to 30%), the expected latency is dominated by the slow APIs. Math is cruel, my friend, but there's hope!
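To make this concrete, here is a small calculation following the model above. The numbers (n = 97, p = 0.99, l_fast = 500ms, l_slow = 3s) come from the scenario just described; the function name is ours, for illustration only:

```typescript
// Expected page latency under the fetch-then-render model.
// n: number of parallel requests, p: probability a single request is fast,
// lFast / lSlow: latency (in ms) of a fast / slow request.
function expectedPageLatency(
  n: number,
  p: number,
  lFast: number,
  lSlow: number,
): number {
  const pFast = Math.pow(p, n); // all n requests must be fast
  return lFast * pFast + lSlow * (1 - pFast);
}

const latency = expectedPageLatency(97, 0.99, 500, 3000);
console.log(latency.toFixed(0)); // ~2057ms
```

Even though only 1% of individual requests are slow, with 97 requests the page is fast only about 38% of the time, so the slow path dominates the expectation.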
Enter Progressive Loading – a technique that is immune to slow requests. Rather than waiting for all content to be ready on the server before responding to the client, Progressive Loading renders the content from fast requests immediately, leaving skeletons and loading states in place of content that depends on slow requests, offering users an immediate visual experience.
This approach decreases user anxiety by displaying visual feedback that the system is working toward the desired state. Highly dynamic applications like YouTube and Instagram implement this kind of skeleton-based approach, so internet users are accustomed to this interaction. However, manually implementing Progressive Loading on your website can be challenging and cumbersome.
Here comes Async Rendering, our latest feature designed to simplify the Progressive Loading paradigm. The magic lies in tightly coupling Progressive Loading into our framework. Here's how it works:
Loaders are now tied to a time budget. Once this threshold is reached, loaders that have finished their work will have their content rendered into the final HTML as usual. Loaders consuming slow APIs will raise an exception, and a loading state will be rendered in the sections consuming those loaders. This loading state uses our Partials feature to lazily hydrate and replace the missing section.
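Conceptually, this mechanism can be sketched as racing each loader against its time budget. This is an illustrative sketch, not our actual implementation – the names `withTimeBudget` and `BudgetExceededError` are ours:

```typescript
// Illustrative sketch: race a loader's promise against a time budget.
// If the budget expires first, we throw, and the renderer can catch the
// error to emit the section's loading state (hydrated later via Partials).
class BudgetExceededError extends Error {}

async function withTimeBudget<T>(loader: Promise<T>, budgetMs: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new BudgetExceededError("loader exceeded time budget")),
      budgetMs,
    );
  });
  try {
    return await Promise.race([loader, timeout]);
  } finally {
    clearTimeout(timer); // avoid leaking the timer when the loader wins
  }
}
```

A fast loader resolves normally and its content is rendered server-side; a slow one rejects with `BudgetExceededError`, signaling the renderer to fall back to the loading state.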
This means there's no need to manually add the `Deferred` section anymore. Every section is now deferred automatically, depending on the APIs it consumes.
To make your website truly yours, we've simplified the customization of loading states. Just export a `LoadingFallback` component from your section and it will be used as your loading state. A standard blank loading state will be used if no `LoadingFallback` component is exported.
For instance, take this `Section`:

```tsx
export default function Section(props: Props) { ... }
```
To add a custom loading state:

```tsx
export default function Section(props: Props) { ... }

export function LoadingFallback() {
  return <div>loading...</div>;
}
```
Tip: try exporting a component named `ErrorFallback` to handle any error this section may encounter.
To learn more, check out our docs.
As you may know, partially rendering a page's content is not ideal for SEO. For this reason, whenever our system detects that Google or another search engine bot is requesting your page, it will fall back to the fetch-then-render approach, delivering the best SEO possible.
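As a rough sketch of how such detection can work: search engine bots identify themselves through the `User-Agent` header, which can be matched against known crawler names. The actual check in our system may differ; the function name and regex below are illustrative:

```typescript
// Illustrative bot check based on the User-Agent request header.
const BOT_PATTERN = /googlebot|bingbot|duckduckbot|baiduspider|yandex|slurp/i;

function isSearchEngineBot(userAgent: string): boolean {
  return BOT_PATTERN.test(userAgent);
}

// Bots get the classic fetch-then-render page with complete HTML;
// regular visitors get the async-rendered page with loading states.
```

Note that user-agent strings can be spoofed, so production systems sometimes combine this with reverse-DNS verification of the crawler's IP.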
Ready to experience the power of Async Rendering firsthand? Join us for a guided tour on activating this feature within Deco.cx's admin interface. Discover how easy it is to enhance your website's performance with a few simple steps.
1. Make sure your project is up-to-date to the latest deco and apps versions by running:
```shell
deno run -A https://deco.cx/update
```
2. Open your site's app in the deco.cx admin and look for the Async Rendering option. Set it to 0 to disable it, or to any other value to enable it.
That's it! Your sections are now rendered asynchronously!
Async rendering makes your website more responsive, reducing user anxiety and improving the overall experience. However, it may affect your Web Vitals indicators. Let's analyze how async rendering influences your final score.
FCP is significantly improved because the page doesn't need to wait for slow third-party APIs for responses.
LCP may either improve or worsen depending on the source of your LCP element. If your images come from third-party APIs, the score could drop. However, if images are uploaded via Deco's admin, it will notably improve. For example, on home pages where the banner is uploaded using Deco's CMS, LCP will be much better. On product pages, however, the LCP score might decrease, since we need to wait for a few round trips before downloading the product image.
CLS should remain unchanged. Well-designed `LoadingFallback` components shouldn't contribute to any layout shifts. If you observe changes in CLS, ask your developer to improve your `LoadingFallback` components.
FID may improve slightly: since the page isn't rendered all at once, hydration work is spread over time, making better use of CPU time.
Overall, you should notice an improvement in Web Vitals indicators.
Curious to see Async Rendering in action? Head over to our storefront template and witness the transformative impact on website performance.