Development

5 Strategies for Improving Page Speed: Serial & Async Loading

There is a tendency in web development to tightly couple pages with their data sources. An example of this would be building a page that uses its backend process to retrieve and render data in a table. This is fine for small pages or quick-and-dirty prototypes, but as pages grow in complexity, this approach begins pushing the page response time past the magic threshold of 270ms.
Much of the web industry has decided it is willing to sacrifice the user's experience for added complexity, and as a result page load times have gotten slower and slower. According to MachMetrics.com, “The average time it takes to fully load the average mobile landing page is 22 seconds. However, research also indicates 53% of people will leave a mobile page if it takes longer than 3 seconds to load.”
Luckily, technology has grown in just the right direction for page response times to stay small while still supporting complex dynamic data. By using asynchronous calls to load dynamic data into the DOM of the current page, you can have a snappy page as well as complicated, responsive data. Libraries like Barba.js take this to the extreme, dynamically replacing anchor links with intelligent, predictive pre-loading async calls to make an already fast page respond seemingly faster than the user can click.
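As a minimal sketch of this idea, the snippet below serves a static page first and pulls table data in asynchronously afterward. The `/api/rows` endpoint, the `#data-table` id, and the record fields are hypothetical placeholders, not part of any particular library:

```javascript
// Build table rows from an array of records (pure, so it is easy to test).
function renderRows(records) {
  return records
    .map(r => `<tr><td>${r.name}</td><td>${r.value}</td></tr>`)
    .join('');
}

// Fetch data only after the static page has rendered, then fill the table.
// '/api/rows' is a hypothetical endpoint used for illustration.
async function loadTable(tbody) {
  const res = await fetch('/api/rows');
  const records = await res.json();
  tbody.innerHTML = renderRows(records);
}

// Wire up the DOM only when running in a browser.
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', () => {
    loadTable(document.querySelector('#data-table tbody'));
  });
}
```

The static HTML (with an empty table body) paints immediately; the data arrives a moment later without blocking the initial render.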

5 Best Practices for Improving Your Page Speed

When building web pages, there are a few basics to keep in mind to optimize both the page speed and the user experience.

1. Do Less

The ultimate speed improvement is simply to do less. If you have the option to display a static HTML page with minimal content, do so; that is what web technology is designed to optimize. Every additional JavaScript library you include, every <script> tag that must be processed, slowly eats away at your performance.

2. First Things First

Load the parts of the page that don’t require complex logic or rendering first. The basic HTML structure, the CSS styles, and any images should all be placed near the top of the page. JavaScript libraries should be added at the bottom of the <body>, and any listeners or special setup should run inside a ready event listener like this:
document.addEventListener('DOMContentLoaded', function () { /* listeners here */ });

3. Wait For the Last Second

When you are loading dynamic content, the tendency is to preload or pre-cache on page load so that the page is ready for use right away. This adds long load times up front and makes the user experience worse! Instead, wait to load dynamic content until its container is being displayed, or is just about to be. You can see this on sites that use endless scrolling: instead of loading hundreds of images and blocks of text up front, the page loads just the next few pieces.
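A common way to implement this waiting is the browser's IntersectionObserver API: observe a sentinel element at the bottom of the list and fetch the next batch only when it scrolls into view. This is a sketch, not a full implementation; the `/api/items` endpoint, the element ids, and the item shape are all hypothetical:

```javascript
// Decide which slice of items to request next (pure, testable logic).
function nextBatch(loadedCount, batchSize) {
  return { offset: loadedCount, limit: batchSize };
}

// Load the next few items only when the sentinel element becomes visible.
// '/api/items' and the item fields are hypothetical placeholders.
function setupEndlessScroll(list, sentinel, batchSize) {
  let loaded = 0;
  const observer = new IntersectionObserver(async entries => {
    if (!entries[0].isIntersecting) return;
    const { offset, limit } = nextBatch(loaded, batchSize);
    const res = await fetch(`/api/items?offset=${offset}&limit=${limit}`);
    const items = await res.json();
    loaded += items.length;
    for (const item of items) {
      const li = document.createElement('li');
      li.textContent = item.title;
      list.appendChild(li);
    }
  });
  observer.observe(sentinel);
}

// Browser-only wiring.
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', () => {
    setupEndlessScroll(
      document.querySelector('#feed'),
      document.querySelector('#feed-sentinel'),
      10
    );
  });
}
```

Nothing is fetched until the user actually scrolls close to the end of what is already loaded.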

4. Chunky vs Chatty

When retrieving dynamic data, there is a debate over whether it is better to grab one huge block of data at once, or many tiny blocks spread out over time.
When content is chunky, you reduce the network overhead. This is why things like sprite sheets in CSS are so effective; one request can retrieve a huge amount of information. However, when you make your data too chunky it becomes difficult to make changes. A small change can require the entire call to be made again, which is slower than a smaller, chatty call.
Chatty has its own ups and downs. Each request will be small and on-demand, meaning individual page components can change rapidly without having to reload the whole data block. With modern bandwidth and low ping times, it is rarely an issue to have tens or hundreds of calls start and complete in a few seconds. However, on limited-bandwidth platforms like mobile phones or tablets, these calls carry a much higher cost.
It’s important to consider the target audience for your site, and make sure you appropriately balance your chunky and chatty async calls.
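One way to get the best of both is to keep the calling code chatty while making the network traffic chunky: collect individual requests for a short window, then issue them as a single bulk call. The sketch below assumes a hypothetical bulk fetcher `fetchMany(ids)` that returns results in the same order as its input:

```javascript
// Collect individual ids for a short window, then issue one chunky request
// instead of many chatty ones. `fetchMany` is a hypothetical bulk fetcher
// that resolves to results in the same order as the ids it receives.
function createBatcher(fetchMany, delayMs) {
  let pending = [];
  let timer = null;
  return function request(id) {
    return new Promise(resolve => {
      pending.push({ id, resolve });
      if (timer) return; // a batch is already scheduled
      timer = setTimeout(async () => {
        const batch = pending;
        pending = [];
        timer = null;
        const results = await fetchMany(batch.map(p => p.id));
        batch.forEach((p, i) => p.resolve(results[i]));
      }, delayMs);
    });
  };
}
```

Callers write `request(id)` as if each item were fetched on its own, but three calls made within the same window cost only one round trip.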

5. Predict

Prediction is difficult to do effectively, and overzealous prediction can waste bandwidth or even bog down desired interactions. Libraries like Barba.js use link-hover prediction to preemptively gather content and then replace the content in the target container when the link is “followed.” This can result in a very streamlined, faster-than-thought type of website, but it requires a lot of architectural care to remain manageable.
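The core trick can be sketched in a few lines: cache the in-flight prefetch so a hover and a later click share a single request. This is a hand-rolled illustration of the pattern, not Barba.js's actual API; the `data-prefetch` attribute and `#content` container are assumptions:

```javascript
// Cache in-flight prefetches so a hover and a later click share one request.
// `fetchPage` is a hypothetical function returning a promise of page HTML.
function createPrefetcher(fetchPage) {
  const cache = new Map();
  return {
    prefetch(url) {
      if (!cache.has(url)) cache.set(url, fetchPage(url));
      return cache.get(url);
    },
    navigate(url, container) {
      return this.prefetch(url).then(html => {
        if (container) container.innerHTML = html;
        return html;
      });
    }
  };
}

// In a browser, start the fetch on hover and reuse it on click.
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', () => {
    const prefetcher = createPrefetcher(url => fetch(url).then(r => r.text()));
    document.querySelectorAll('a[data-prefetch]').forEach(a => {
      a.addEventListener('mouseenter', () => prefetcher.prefetch(a.href));
      a.addEventListener('click', e => {
        e.preventDefault();
        prefetcher.navigate(a.href, document.querySelector('#content'));
      });
    });
  });
}
```

By the time the user finishes clicking, the content fetched during the hover is often already in hand, which is what makes the swap feel instant.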

Overview

Keeping the core page simple, and adding asynchronous dynamic content only at the end of the page load or on request, will greatly reduce your page load times without sacrificing large datasets or complicated interactions.

References

https://www.blurbusters.com/human-reflex-input-lag-and-the-limits-of-human-reaction-time/2/
https://www.machmetrics.com/speed-blog/average-page-load-times-websites-2018/
http://barbajs.org/how-it-works.html