Understanding how HTTP has changed over the last few years can help you adjust your asset delivery strategy to take advantage of the major advances in the protocol, leading to faster sites and better performance-optimization scores.
Below we’ll look at the evolution of the protocol and discuss how to evolve our asset delivery strategy to serve CSS and scripts as quickly as possible under the latest version.
1997 – HTTP/1.1 – History and Asset Loading Strategies
Standardized in 1997, HTTP/1.1’s main performance pain points were head-of-line blocking and the redundant request headers transmitted with every call.
Connecting to the server, transmitting headers, and requesting data was so time-consuming per file that we tried to pack as much as we could into each file we requested.
To do this, we preprocessed our scripts and styles into single-file bundles, inlined code, vulcanized, and made image sprites. It was paramount to limit the number of calls to the server. Much of the internet is still built this way.
2015 – HTTP/2 – Let there be multiplexing.
What is HTTP multiplexing?
From Stack Overflow:
Put simply, multiplexing allows your Browser to fire off multiple requests at once on the same connection and receive the requests back in any order.
As browser support became real in 2015, HTTP/2 brought compressed headers and multiplexing over a single TCP connection. This allows multiple assets to load simultaneously, rendering the page faster.
While this is a vast improvement, there is still a catch.
If any asset requested within that multiplexed TCP connection experiences packet loss, every multiplexed stream past that point is paused until the lost packet is retransmitted.
Because TCP has no context on the multiplexed streams it carries, it can’t tell which can move on and which can’t, so under packet loss HTTP/2 is still dealing with head-of-line blocking, just at the transport layer.
HTTP/3 – Now and the Future
HTTP/3 seeks to solve the problems of HTTP/2 by not using TCP at all and instead using QUIC, which runs over UDP. Because QUIC provides native multiplexing, lost packets only impact the streams where data has actually been lost.
But can I use it? Kinda.
Most browsers support it by default. However, in Safari (as of this writing) you need to enable it as an option on your device.
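On the server side, HTTP/3 typically has to be enabled explicitly as well. A minimal sketch for nginx (assuming nginx 1.25+ built with QUIC support; the certificate paths are hypothetical):

```nginx
server {
    # HTTP/3 runs over QUIC (UDP); keep the TCP listener for HTTP/2 fallback.
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;

    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;

    # Advertise HTTP/3 so browsers can upgrade on subsequent requests.
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```

Because browsers discover HTTP/3 via the Alt-Svc header, the first visit still negotiates over TCP; later requests switch to QUIC.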
To take advantage of multiplexing, we have to break out of the concatenation mindset, where the focus was on limiting calls, and adopt a more nuanced asset delivery strategy that prioritizes the assets crucial to rendering.
Most browsers and servers default the number of concurrent streams per connection to about 100. Smaller files delivered largely in parallel can drastically improve render time and page speed performance.
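One way to prioritize render-crucial assets is to mark them up explicitly, hinting critical files early and deferring the rest. A sketch (file names here are hypothetical):

```html
<!-- Critical CSS: hint it early; the stylesheet link still blocks render by design. -->
<link rel="preload" href="/css/critical.css" as="style">
<link rel="stylesheet" href="/css/critical.css">

<!-- Non-critical CSS and JS: load without blocking the first render. -->
<link rel="stylesheet" href="/css/below-the-fold.css" media="print" onload="this.media='all'">
<script src="/js/enhancements.js" defer></script>
```

The `media="print"` trick loads the stylesheet at low priority and applies it once downloaded; `defer` keeps the script off the critical rendering path.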
Google Page Speed Performance Metrics
Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift are Google’s strategy to help improve web performance; collectively they are named “Core Web Vitals.” The biggest enemy of scoring well in these metrics is render-blocking assets.
Luckily, HTTP/2 and HTTP/3 give us ways to address render-blocking issues much better than before.
Google’s algorithms will down-rank pages whose pipelines contain render-blocking resources. However, unused JS and CSS resources are weighted almost as strongly.
Only Google knows for sure (these algorithms aren’t public), but it is likely because unused resources bloat a site’s payload and are often render blocking, leaving the user waiting for resources that aren’t even used on the page.
Eliminating unused CSS and JS can also cut the file sizes of your minified assets. Seek out the parts of your code base that aren’t used on every page, post, or condition, and find ways to enqueue the related assets dynamically.
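In WordPress, for instance, that dynamic enqueueing can be as simple as wrapping the enqueue call in the relevant conditional. A minimal sketch (the handle, path, and condition are hypothetical):

```php
// Sketch: only load the comments stylesheet where comments can
// actually appear, instead of shipping it in a global bundle.
add_action( 'wp_enqueue_scripts', function () {
	if ( is_singular() && comments_open() ) {
		wp_enqueue_style(
			'theme-comments',                         // hypothetical handle
			get_theme_file_uri( 'css/comments.css' ), // hypothetical path
			array(),
			'1.0.0'
		);
	}
} );
```

Pages without comments never download the file, and the global bundle shrinks accordingly.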
For example, ACF Gutenberg blocks:
It is common practice to concatenate and enqueue one large CSS and/or JS file for all of your custom blocks. ACF block registration instead lets you enqueue a block’s assets only if the block is used on the page, eliminating unused CSS without duplicating it inline. This is a reasonably easy enhancement that provides a good result for minimal effort.
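As a sketch, ACF’s `acf_register_block_type()` accepts `enqueue_style` and `enqueue_script` arguments that load the files only when the block is rendered (the block name, template, and asset paths below are hypothetical):

```php
// Register an ACF block whose CSS/JS load only on pages that use it.
add_action( 'acf/init', function () {
	acf_register_block_type( array(
		'name'            => 'testimonial',   // hypothetical block
		'title'           => 'Testimonial',
		'render_template' => 'blocks/testimonial/testimonial.php',
		// Enqueued by ACF only when the block appears on the page:
		'enqueue_style'   => get_theme_file_uri( 'blocks/testimonial/style.css' ),
		'enqueue_script'  => get_theme_file_uri( 'blocks/testimonial/script.js' ),
	) );
} );
```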
Another point of attack is any global option set via the admin that toggles a class or attribute you then target with CSS or JS (in other words, any global option affecting render). Those styles could live in their own sheet, and the same condition that sets the class or attribute could be used to enqueue the asset instead.
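A sketch of that pattern, assuming a hypothetical `mytheme_dark_mode` option that both adds the body class and gates the stylesheet:

```php
// The same option that adds the body class also decides whether the
// related stylesheet is enqueued at all.
add_action( 'wp_enqueue_scripts', function () {
	if ( get_option( 'mytheme_dark_mode' ) ) {            // hypothetical option
		wp_enqueue_style(
			'mytheme-dark-mode',
			get_theme_file_uri( 'css/dark-mode.css' )     // hypothetical path
		);
	}
} );
```

When the option is off, the sheet never ships, so there is no unused CSS to penalize.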
These kinds of strategies may require more complicated preprocessing of files and JS route building, but the results will be substantial. You will serve less unused code, and pulling the conditional code out also makes your concatenated files smaller and therefore less render blocking, leading to faster page loads and better Google rankings.
Knowing how much there is to be gained, it’s easy to oversimplify your asset loading strategy by going to the opposite extreme and loading every little file independently. There are limits, however.
Servers and browsers set their own restrictions on concurrent streams, so we should focus on loading render-driving assets first, but related parts of your code base should still be concatenated and minified. Each project is different and will require a slightly different approach, but HTTP/2 and HTTP/3 give us the options to improve the end-user experience and make the internet generally more efficient.