How to Optimize Frontend Performance

This is not a beginner's tutorial; it mainly lists some key points for performance optimization.

For users, the first-screen experience is typically the login page or the homepage for their role. The first step in building a fast-loading website is to receive the HTML response from the server in a timely manner: when you enter a URL in the browser's address bar, the browser sends a GET request to the server to retrieve that HTML, and ensuring it arrives quickly with minimal delay is a key performance goal.

TTFB#

The initial request for HTML goes through multiple steps, each of which takes time; reducing the time spent on each step shortens the Time to First Byte (TTFB).

While TTFB is not the only metric to focus on when thinking about page loading speed, a high TTFB does make it difficult to reach the "good" thresholds for metrics such as Largest Contentful Paint (LCP) and First Contentful Paint (FCP). A simple conclusion follows: front-end optimization alone cannot achieve optimal performance; cooperation with the back-end is also necessary.

Most websites should strive to keep TTFB at 0.8 seconds or shorter.

The Server-Timing response header can be used to surface back-end steps (such as database queries or server-side rendering) that may contribute to high latency.

// Two metrics with descriptions and values
Server-Timing: db;desc="Database";dur=121.3, ssr;desc="Server-side Rendering";dur=212.2
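
On the server, the header can be emitted alongside the response. Here is a minimal sketch using Node.js with Express; the route, the measured step, and the framework choice are illustrative, not a requirement of Server-Timing:

const express = require('express');
const app = express();

app.get('/', async (req, res) => {
  const dbStart = Date.now();
  // ... run the database query needed for this page ...
  const dbDuration = Date.now() - dbStart;

  // Expose the measured duration to the browser via Server-Timing
  res.set('Server-Timing', `db;desc="Database";dur=${dbDuration}`);
  res.send('<!doctype html><html>...</html>');
});

app.listen(3000);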

Any page with a Server-Timing response header can retrieve it through the serverTiming property in the Navigation Timing API:

// Get the serverTiming entry for the first navigation request:
performance.getEntriesByType("navigation")[0].serverTiming.forEach(entry => {
    // Log the server timing data:
    console.log(entry.name, entry.description, entry.duration);
});

For methods on how to optimize TTFB, you can refer to this article:

Optimize Time to First Byte | Articles | web.dev

Static Resource Response Compression#

Responses based on static files (such as HTML, JavaScript, CSS, and SVG images) need to be compressed to reduce their transmission costs over the network for faster downloads. The most widely used compression algorithms currently are gzip and Brotli, with Brotli improving compression by about 15% to 20% over gzip.

Most CDN providers set up compression automatically, but if you can configure or adjust the compression settings yourself, use Brotli whenever possible. Brotli offers a significant improvement over gzip, and all major browsers support it.

Use Brotli whenever possible, but if your website has a large number of users on older browsers, fall back to gzip for compatibility, as any compression is better than no compression.
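
To get a rough sense of the difference on your own assets, Node's built-in zlib module can produce both encodings. A minimal sketch, assuming a hypothetical bundle path:

const fs = require('fs');
const zlib = require('zlib');

const source = fs.readFileSync('./dist/app.js'); // hypothetical bundle path

// Compress the same payload with both algorithms and compare output sizes
const gzipped = zlib.gzipSync(source, { level: 9 });
const brotlied = zlib.brotliCompressSync(source);

console.log(`original: ${source.length} bytes`);
console.log(`gzip:     ${gzipped.length} bytes`);
console.log(`brotli:   ${brotlied.length} bytes`);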

CDN#

Content Delivery Networks (CDNs) improve website performance by distributing resources to users through a distributed network of servers. Since CDNs can alleviate the load on servers, they can reduce server costs and are very suitable for handling traffic spikes.

CDNs are designed to reduce latency by serving resources from servers geographically closer to users. This is the core advantage of a CDN: it improves loading performance, and in particular the Time to First Byte (TTFB) of resources can improve significantly once a CDN is introduced, which is crucial for improving LCP.

For more information on using CDNs to improve website loading speed, you can refer to this article:

Content delivery networks (CDNs) | Articles | web.dev

blocking="render" - Experimental Feature#

As an experimental feature, you can now add blocking="render" to <script>, <style>, or stylesheet <link> tags to explicitly mark them as render-blocking. The main purpose is to prevent a flash of unstyled content or to stop users from interacting with a partially loaded page, situations usually caused by script-inserted scripts or stylesheets, such as client-side A/B testing snippets.
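
For example, an asynchronously loaded script can still be marked as render-blocking; the script URL here is only a placeholder:

<!-- Without blocking="render", an async script would not block rendering -->
<script blocking="render" async src="https://example.com/ab-testing.js"></script>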

Browser compatibility:

"blocking" | Can I use... Support tables for HTML5, CSS3, etc

Currently, all browsers have a built-in render-blocking mechanism: after page navigation, the browser will not render any pixels to the screen until all stylesheets and synchronous scripts in the <head> are loaded and processed. This prevents the flash of unstyled content (FOUC) and ensures that critical scripts, such as framework code, are executed, allowing the page to function normally after the first rendering cycle.

https://github.com/whatwg/html/pull/7474

CSS#

At the most basic level, CSS minification is an optimization that can effectively improve website performance, helping First Contentful Paint (FCP) and, in some cases, even Largest Contentful Paint (LCP). Bundling tools (like Webpack, Vite, etc.) can perform these optimizations automatically in your production builds.

Before rendering page content, the browser must download and parse all CSS stylesheets, including styles that are never used on the current page. If your bundler merges all CSS into a single file, users may download more CSS than is needed to render the current page.

The coverage tool in Chrome DevTools can be used to detect unused CSS (or JavaScript) on the current page.

JavaScript#

JavaScript is responsible for most of the interactivity on web pages, but this comes at a cost.

Loading too much JavaScript code can make the webpage respond slowly during loading and may even cause interaction delays.

The async and defer attributes allow external scripts to load without blocking the HTML parser, while scripts of type module (including inline scripts) are automatically deferred. However, it is essential to understand some key differences between async and defer.

Figure: script loading behavior with async and defer, sourced from https://html.spec.whatwg.org/multipage/scripting.html

Scripts with the async attribute are parsed and executed as soon as they finish downloading, while scripts with the defer attribute execute only after the HTML document has been fully parsed, just before the browser fires the DOMContentLoaded event. Additionally, async scripts may execute out of order, while defer scripts execute in the order they appear in the page.
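
To illustrate (the file names are placeholders):

<!-- May execute as soon as it finishes downloading, possibly out of order -->
<script async src="/analytics.js"></script>

<!-- Executes after HTML parsing completes, in document order -->
<script defer src="/ui-widgets.js"></script>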

Moreover, JavaScript minification goes further than minification of other resources (like CSS): besides removing non-code content such as spaces, tabs, and comments, variable and function names in the source code are also shortened. This process is sometimes referred to as "uglification."

Preconnect#

A preconnect hint tells the browser that the page will soon need to connect to a specific cross-origin server and that it should open that connection immediately, ideally before the HTML parser or the preload scanner would get to it.

preconnect is commonly used for Google Fonts services. Google Fonts recommends preconnecting to the domain https://fonts.googleapis.com for providing @font-face declarations and also suggests connecting to https://fonts.gstatic.com for serving font files.

<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

dns-prefetch#

While opening connections to cross-origin servers in advance can significantly speed up the initial loading time of a page, establishing multiple cross-origin connections simultaneously may be unreasonable and impractical.

If you're concerned about overusing preconnect, a more resource-efficient approach is to use dns-prefetch. As its name suggests, dns-prefetch does not establish server connections but only performs DNS resolution for the domain.

Resolving a domain to its corresponding IP address still takes some time, even though DNS caching at the device and network level can speed the process up.
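
For example, the font-file origin mentioned above can have its DNS resolved ahead of time:

<link rel="dns-prefetch" href="https://fonts.gstatic.com">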

preload#

The preload directive is used to request resources necessary for rendering the page in advance:

<link rel="preload" href="/font.woff2" as="font" crossorigin>

If the <link> element does not set the as attribute in the preload directive, the resource will be downloaded twice. For various values of the as attribute, you can refer to the relevant explanations in the MDN documentation.

prefetch#

The prefetch directive is used to initiate low-priority requests for resources that may be used in future navigation:

<link rel="prefetch" href="/next-page.css" as="style">

In some cases, prefetch can be very helpful: if you identify a specific user flow that most users on your site follow, prefetching critical resources for those future pages can help shorten their loading times.

Note: Since prefetch is essentially a speculative operation, one potential downside is that if users do not visit the final page that requires the prefetched resource, the data consumed to fetch the resource may be wasted. You need to decide whether to apply prefetch based on your site's analytics data or other usage pattern information. Additionally, for users who have set preferences to reduce data usage, you can also use the Save-Data hint to avoid performing prefetch operations.
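
A minimal sketch of such a Save-Data check, assuming the Network Information API (navigator.connection, available in Chromium-based browsers) and reusing the hypothetical stylesheet from the example above:

// Only inject the prefetch hint when the user has not opted into reduced data usage
if (!navigator.connection || !navigator.connection.saveData) {
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = '/next-page.css';
  link.as = 'style';
  document.head.appendChild(link);
}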

It is generally recommended to avoid using <link rel="prefetch"> to prefetch cross-origin documents, as there is a public issue regarding prefetching cross-origin documents that can lead to duplicate requests. Similarly, you should avoid prefetching personalized same-origin documents—such as HTML responses dynamically generated for authenticated sessions—because these resources are generally not cacheable and are likely to remain unused, ultimately wasting bandwidth.

In Chromium-based browsers, you can use the Speculation Rules API to prefetch documents. Speculation Rules are defined as a JSON object that can be embedded in the page's HTML or dynamically injected via JavaScript:

<script type="speculationrules">
{
  "prefetch": [{
    "source": "list",
    "urls": ["/page-a", "/page-b"]
  }]
}
</script>

Libraries like Quicklink optimize page navigation by dynamically prefetching or prerendering links within the user's viewport; compared with prefetching every link on a page, those links are much more likely to be ones the user actually visits.

Prerender#

In addition to prefetching resources, you can also use the browser to pre-render pages that users are about to visit. This approach allows for near-instantaneous page loading, as the page and its resources have already been loaded and processed in the background. When users visit that page, it displays immediately.

The prerendering feature can be implemented through the Speculation Rules API:

<script type="speculationrules">
{
  "prerender": [
    {
      "source": "list",
      "urls": ["/page-a", "page-b"]
    }
  ]
}
</script>

Chrome also supports the <link rel="prerender" href="/page"> resource hint. However, since Chrome 63 this hint triggers NoState Prefetch instead, which only loads the resources the page needs without rendering the page or executing JavaScript.

Full prerendering will also run JavaScript in the prerendered page. Given that JavaScript is a large and computationally intensive resource, it is advisable to use prerender cautiously and only when you are confident that users are about to visit that prerendered page.

Service Worker Pre-Caching#

Service worker pre-caching uses the Cache API to fetch and store resources so that the browser can answer requests from the Cache API alone, without touching the network. It relies on a very efficient "cache-only" strategy: once resources are stored in the service worker cache, they can be retrieved almost instantly whenever they are requested.

To utilize service workers for resource pre-caching, you can use the Workbox tool. Of course, if you prefer manual control, you can also write your own code to cache specific sets of files.

Regardless of how you implement resource pre-caching, you must understand that this process occurs during the installation of the service worker. Once installed, all pre-cached resources can be called and used by any page managed by the service worker on your site.
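
If you do write it yourself, a minimal hand-rolled sketch might look like this; the cache name and asset list are hypothetical, and the fetch handler falls back to the network for anything not pre-cached:

// sw.js
const PRECACHE = 'precache-v1';
const ASSETS = ['/styles/main.css', '/scripts/app.js', '/offline.html'];

self.addEventListener('install', (event) => {
  // Pre-cache the listed resources while the service worker installs
  event.waitUntil(caches.open(PRECACHE).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', (event) => {
  // Serve from the cache when possible, otherwise go to the network
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});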

Workbox | Chrome for Developers

Like prefetching or prerendering resources using resource hints or speculation rules, service worker pre-caching also consumes network bandwidth, storage space, and CPU processing power. Therefore, it is advisable to pre-cache only resources that are likely to be used, avoiding including too many resources in the pre-cache list. When uncertain about which resources to pre-cache, it is better to pre-cache fewer resources and let runtime caching handle filling the service worker cache, using various strategies to balance loading speed and resource freshness. For more practical tips and pitfalls regarding pre-caching resources, read the Dos and Don'ts of Pre-Caching.

Fetch Priority API#

By using the fetchpriority attribute, you can utilize the Fetch Priority API to enhance the loading priority of resources. This attribute applies to <link>, <img>, and <script> elements.

<div class="gallery">
  <div class="poster">
    <img src="img/poster-1.jpg" fetchpriority="high">
  </div>
  <div class="thumbnails">
    <img src="img/thumbnail-2.jpg" fetchpriority="low">
  </div>
</div>

Images#

Images are often the largest and most common resources on web pages. Therefore, optimizing images can significantly improve webpage performance. In most cases, optimizing images means reducing the amount of data transmitted to shorten network transfer times, but it can also involve providing images that are suitable for the user's device size.

Modern browsers support various image file formats. Compared to PNG or JPEG, modern formats like WebP and AVIF offer better compression, resulting in smaller files and shorter download times. Serving images in modern formats can reduce resource loading times and may shorten Largest Contentful Paint (LCP) times.
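
One common pattern is to let the browser pick the best format it supports via the <picture> element; the file paths are placeholders:

<picture>
  <!-- The browser uses the first source whose type it supports -->
  <source srcset="/img/hero.avif" type="image/avif">
  <source srcset="/img/hero.webp" type="image/webp">
  <!-- JPEG fallback for browsers without AVIF/WebP support -->
  <img src="/img/hero.jpg" alt="" width="1200" height="600">
</picture>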

JavaScript Code Splitting#

Loading large JavaScript resources can severely impact page loading speed. If JavaScript is split into smaller chunks and only the code necessary for page functionality is downloaded at startup, it can significantly improve the page's loading responsiveness.

When a page downloads, parses, and compiles large JavaScript files, it may become temporarily unresponsive, even though its elements are already visible because they are part of the initial HTML and have CSS applied. The JavaScript that drives those interactive elements, along with other scripts loaded by the page, may still be loading or executing, so the elements do not work yet, and users experience noticeable interaction delays or even complete unavailability.

Lighthouse will display a warning when JavaScript execution time exceeds 2 seconds and will consider it a failure if it exceeds 3.5 seconds. Excessive JavaScript parsing and execution can be problematic at any stage of the page lifecycle, as it may increase input latency when users interact with the page, especially when running synchronously with the main thread handling JavaScript processing and execution tasks.

Moreover, excessive JavaScript execution and parsing are particularly problematic during the initial page load, as users are very likely to interact with the page at this time. In fact, Total Blocking Time (TBT)—a metric that measures loading responsiveness—is highly correlated with Interaction to Next Paint (INP), meaning users are likely to attempt interactions when the page initially loads.

Code splitting can be achieved with the dynamic import() function. Unlike a <script> element, which requests a specific JavaScript resource at startup, dynamic import() can request JavaScript resources later in the page lifecycle.

document.querySelectorAll('#myForm input').forEach((input) => {
  // Attach a one-time blur listener to each input in the form
  input.addEventListener('blur', async () => {
    // Get the form validation named export from the module through destructuring:
    const { validateForm } = await import('/validate-form.mjs');

    // Validate the form:
    validateForm();
  }, { once: true });
});