CSS optimisation


Written by Ramón Saquete

In this CSS (Cascading Style Sheets) optimisation guide we are going to examine what it takes for the download and painting of a page to happen in an optimal way.

In our JavaScript optimisation guide we already talked about how the importance of client-side WPO has grown, as it allows us to gain more loyal users and better search engine rankings on mobile devices.


Let’s recap Google’s RAIL performance model from our JavaScript guide and the optimal values we should strive for:

  • Response: interface response in less than 100 ms.
  • Animation: a frame every 16 ms, which is 60 FPS or 60 images per second.
  • Idle time: when the user isn’t interacting with the page, work performed in the background should take 50 ms or less.
  • Load: the full page must be loaded within 1,000 ms.

In terms of RAIL, CSS affects animations and page load. Let’s also remember that, to load the page, the browser follows the critical rendering path: HTML download >> DOM creation >> CSS and JS download >> CSSOM creation >> render tree creation >> layout or reflow >> painting >> composition.

The optimisation techniques we are going to see in this post affect all these stages, so if we apply them correctly, the total load time can improve substantially.

Download and interpretation optimisation

At this point it’s important that we understand the network waterfall in DevTools. When a page is downloaded, some files call other files, and we get a dependency tree structure with levels, which are as follows:

  1. On the first level we have the HTML, which is the first to get downloaded and is considered to be the trunk of the tree.
  2. On the second level (branches) we would have the image, CSS, and JavaScript resources linked from the HTML.
  3. On the third level (leaves) we have additional images, fonts, and CSS, linked from the CSS downloaded in the previous step. The JavaScript, once it’s running, can also request additional files of any type.
  4. We could have a fourth level (the leaf veins, so to speak), as it’s possible for a CSS file to download another CSS file, which in turn downloads another one, and so on, resulting in a large number of levels and making the download entirely inefficient.

Now that we know how files depend on others, let’s see how a network waterfall is created:

The browser uses an algorithm called the preload scanner to determine which of the files requested in the HTML block the critical rendering path, so that it can request these critical resources as soon as possible. It also has a preload scanner for the CSS. These algorithms barely understand HTML or CSS: they only look for the files linked within them, to request them as soon as possible. Critical resources on the same level are downloaded in parallel, which means that these files are requested and downloaded at the same time. However, the move from one level to the next is sequential (the files of one level are requested after those of the previous one), because the preload scanner needs to analyse the files of one level to know which files of the next level have to be downloaded.

Given that sequential file requests are much slower than parallel requests (it is faster to request the information as soon as possible), most of the techniques I explore below focus on avoiding new dependency tree levels (to avoid sequential requests), or on reducing as much as possible the number of critical files in the deeper levels, as they will be the last to be downloaded.


As for the download itself, it’s worth explaining that with HTTP/1.1 parallel download is possible up to a set number of files, which depends on the browser. With HTTP/2, however, the number of files downloaded in parallel is unlimited, because in reality it is a multiplexed sequential download over the same data flow. In other words, with HTTP/2 it is as if we were downloading one large file, which takes up the same network resources as several files in parallel, but with less overhead and without the added cost of sending the requests separately and sequentially, in blocks.

Below there’s a graphic example of how a download waterfall happens:

Download waterfall in Chrome DevTools

To summarise, for the download to be faster, we need our dependency trees to have as few critical resources as possible in the branches and leaves, leaving there only the resources that aren’t necessary during the initial painting of the page.


Unify, minify and compress from a preprocessor

It’s always best not to write CSS directly, but to use a language on top of it, namely Sass, Less or Stylus. These languages help you keep your code cleaner and more maintainable: their syntax lets you write less, you can define variables, reuse code, split the code into files and, in the case of Sass, even use programming control structures. A file written in any of these languages has to go through a preprocessor that generates unified and minified CSS together with its source map. These are the .map files, which allow us to analyse the CSS in Chrome as if we were working with the original files. This task can be done before uploading the files to the website, using a tool like webpack, gulp or grunt, or directly with the native tool of each of these languages. There are also libraries that allow you to do this from the website’s code; this option, however, is not recommended, because it’s an expensive process, and if it is done this way, we must always cache the result to avoid regenerating the CSS on each visit.
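
As a minimal sketch (the variable and selector names are made up for illustration), a Sass source file like this one uses a variable and nesting:

$brand-color: #ba1c1e;

.product {
    a {
        color: $brand-color;   // nesting avoids repeating the parent selector
    }
    &.featured a {
        font-weight: bold;
    }
}

The preprocessor compiles it to the unified, minified CSS .product a{color:#ba1c1e}.product.featured a{font-weight:bold}, together with its .map source map.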

Just as with JavaScript, I recommend loading two CSS files on each page: a global one for the entire website, and another one specific to each page. This way, we won’t make the user download styles that aren’t necessary for the page they’re looking at.

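For example (the file names are purely illustrative), each page would end up requesting something like this:

<link rel="stylesheet" href="/css/global.css">
<link rel="stylesheet" href="/css/product-page.css">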

Cache in the browser using HTTP cache headers

We will use the Cache-Control header with the public and max-age values, and a long cache time of at least one year. Whenever we want to invalidate the cache, we will have to change the file name. This file name change can be done automatically by the framework used to build the website and by the tool that transpiles or preprocesses the CSS.
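
As a sketch (the hash in the file name and the values are just an example), the HTML references a versioned file and the server answers for it with a long-lived cache header:

<link rel="stylesheet" href="/css/main.d41d8cd9.css">

Cache-Control: public, max-age=31536000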

Compress with brotli q11 and gzip

For CSS files we can follow the same recommendations given in the post on JavaScript optimisation, as well as the post on Gzip compression.

Remove unused selectors

CSS works with selectors, which are patterns composed mainly of elements, classes, identifiers, pseudo-classes and pseudo-elements. They are used to select the parts of the page’s DOM that we want to style through properties like “color”, which can be assigned different values. For example, if we have the following simple rule:

#page .product a{  /* <- selector{ */ 
     color:blue;      /* <- declaration formed by: property:value; */ 
}

We are saying that in a page with a structure like the following, the links will be blue:

<main id="page">
   <article class="product">
       <a href="/url.html">name of the product</a>
   </article>
</main>

If the page doesn’t have a structure matching this rule, this piece of code isn’t necessary. With the UnCSS tool, we can include this technique in our CSS generation process from Sass, if we use gulp, grunt or broccoli.

Taking advantage of inheritance and cascading styles to create fewer rules

Certain properties, like color, line-height, text-indent, text-transform, font and its variants, are inherited: when they are applied to an HTML element, all the elements contained inside it inherit the property’s value. So, for example, we can create the following rule:

body{
    font-family:sans-serif;
}

This way, we are setting the font type for the entire document, without having to specify the type of font for each individual element.

CSS is an acronym for Cascading Style Sheets, and “cascading” means that styles are applied in order, with higher-priority rules overriding the others. How can we use this to write less? For example, in a blog like this one, we can create rules to style all the entries in a list, and then override some of their properties to style featured posts, without partially duplicating what we have already created. To do this, the developer must be familiar with the specificity rules of CSS selectors, because besides the location of the styles and the order of the rules, specificity is what determines which rules take priority over others. For example, let’s imagine we have a CSS with the following rules, in this order:

/* Rule 1 */ 
.class2 div.class1 div {
  color:green;
}

/* Rule 2 */ 
div {
  color:blue!important;
}

/* Rule 3 */ 
div.class1 div:first-child {
  color:yellow;
}

/* Rule 4 */ 
#id1 .class1 div {
  color:red;
}

And this would be the HTML fragment to which we want to apply style:

<div id="id1" class="class2">
    <div class="class1">
        <div>Hello</div>
    </div>
</div>

All the previous rules affect the colour of the word “Hello”. A developer who knows the specificity rules of selectors should see, without giving it a second thought, that the rules apply in the following order, from highest to lowest priority: Rule 2, Rule 4, Rule 3, Rule 1 (Rules 1 and 3 have the same specificity, so the one declared later wins). Therefore, in this case the word “Hello” would be blue as per Rule 2, even though it relies on the “!important” directive, something a good developer wouldn’t normally do, except in some very specific cases.
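
For reference, this is how the specificity of each selector breaks down, counted as (identifiers, classes and pseudo-classes, elements):

/* Rule 1: .class2 div.class1 div      -> (0,2,2) */
/* Rule 2: div                         -> (0,0,1), but declared with !important */
/* Rule 3: div.class1 div:first-child  -> (0,2,2), declared after Rule 1, so it wins that tie */
/* Rule 4: #id1 .class1 div            -> (1,1,1) */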

Grouping rules

If you have to apply the same style with several different selectors, group the selectors separated by commas. For example, instead of writing:

.class1{
    color:red;
}

.class2{
    color:red;
}

.class3{
   color:red;
}

You can shorten it to:

.class1,.class2,.class3{
    color:red;
}

Use shorthand declarations

We can shorten various declarations, like in the example below:

padding-top: 5px;
padding-right: 5px;
padding-bottom: 5px;
padding-left: 5px;

To do this, we use the corresponding shorthand declaration:

padding: 5px;

This way, we will write less, and the CSS will have less code to download.

Smart use of CSS sprites

The term “sprite” originally comes from the world of video games, where it referred to the small rectangular images containing characters and objects. In CSS, the term “sprite” refers to image rectangles to be displayed on the website, which are grouped inside a larger image. When we group several images into one, we avoid having to download several files, so we don’t exhaust the browser’s parallel download limit if we use HTTP/1.1. With HTTP/2 this technique doesn’t improve anything.

Try to group together images whose characteristics best suit the same type of image compression format. This way, we can have some JPG sprites, PNG-8 sprites, PNG-24 sprites, and so on.

Using this technique correctly doesn’t mean loading 300 icons in an image and then using only one of them. This is a common practice, and it’s not correct, because we end up loading and processing a large image only to show a small part of it.

Google home page CSS sprites (the image is rotated and reduced in size, so that you can see it without scrolling).
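
The technique itself is simple: the sprite is set as the background image, and each element shows only its own rectangle by shifting background-position. A minimal sketch (class names, sizes and coordinates are made up for illustration):

.icon {
    background-image: url('/img/sprite.png'); /* one download for all the icons */
    background-repeat: no-repeat;
    display: inline-block;
    width: 16px;
    height: 16px;
}

.icon-search {
    background-position: 0 0;       /* first 16x16 rectangle of the sprite */
}

.icon-cart {
    background-position: -16px 0;   /* the rectangle next to it */
}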

Embed images with DATA-URIs

The DATA-URI scheme allows us to embed images as encoded text strings inside the CSS. This makes images weigh more, because a binary file weighs less than its text encoding. Here’s a quick example: if a file contains the binary value 1111 1111, which is the number 255 in decimal, the text will contain the characters 2 – 5 – 5, which are encoded in ASCII as 0011 0010 – 0011 0101 – 0011 0101. There are more zeros and ones than we originally had, so the file takes up more space. This increase in weight practically disappears if we compress the CSS with Brotli or gzip, so it’s not an issue. It is still a very good optimisation technique: we improve loading time because we avoid descending one additional level in the tree to download the images.
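
This is roughly what an embedded image looks like in the CSS (the base64 payload is truncated here for brevity):

.logo {
    /* the image travels inside the CSS itself, so no additional request is needed */
    background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...');
}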

If we use this technique, we don’t need to use CSS sprites, but if we like organising images in sprites, we can combine both techniques.

We have to keep in mind that if the same image is used in several parts of the CSS, we will have to reorganise the layout so that it appears only once, assigning the same class to the HTML elements that are going to show the image. If we don’t do it this way, we’ll be embedding, and downloading, the same image several times.

We should also pay attention to unused images, or images used in only one part of the website, because unlike images linked normally, embedded images are always downloaded, whether they are used or not. Therefore, this technique should only be applied to images that are considered critical resources. If you haven’t followed the recommendation of having two CSS files on each page (one with global styles and another one for each individual page), be careful with this.

Embedded images are better kept at the end of the CSS, and they shouldn’t be too big, because the CSS is a critical resource that blocks page rendering, and it should not take too long to download.

Finally, we can use this technique to inline images directly in the HTML, inside the “src” attribute of the “img” element, but in this case I also do not recommend using it with large images. If we have too many bytes in the HTML that do not belong to the content, we could have indexing issues. We should also keep in mind that whenever an image appears in several parts of the page, or on several pages, it is best not to inline it in the HTML, so that it can be cached, preventing it from being downloaded several times.

Pre-load images and fonts from the HTML

This technique is better than using DATA-URIs, because images can be cached in the browser separately from the CSS, and without increasing its size. We can implement it by adding tags like this one to the header:

<link rel="preload" href="bg-image-narrow.png" as="image" media="(max-width: 600px)" />

By prioritising the request for the images linked from the CSS, we avoid waiting for the CSS to be downloaded and analysed by the preload scanner. This should only be applied to critical resources, as we do not want to pre-load unnecessary resources.

With this tag we can indicate which file we are going to pre-load, and what type it is. We can even specify a media query so that it only downloads in specific screen sizes.

Don’t confuse it with rel=”prefetch”, which is used to pre-load resources for the next page that is going to be viewed.
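
As a sketch (the file name is illustrative), a hint for a resource of the next likely navigation would look like this:

<link rel="prefetch" href="/css/checkout.css">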

Do not use @import

The CSS import rule (not to be confused with the Sass or Less import) allows us to import CSS files from within other CSS files. If you’ve been paying attention, you’ll already know that this is very bad for performance, because it adds more levels to the dependency tree: one CSS downloads another one, which in turn can download another one, or download images.
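
This is the pattern to avoid; each nested @import adds another sequential level to the download waterfall (file names are illustrative):

/* main.css */
@import url('theme.css');   /* discovered only after main.css has been downloaded and parsed */

/* theme.css */
@import url('buttons.css'); /* and this one, only after theme.css */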

Use media=”print” for print styles

We can define in a style sheet how a website is going to be printed. This can be done within the global CSS with a media query, or extracted to a standalone style sheet as follows:

<head>
   ...
   <link rel="stylesheet" type="text/css" href="/css/main.css">
   <link rel="stylesheet" type="text/css" href="/css/print.css" media="print">
   ...
</head>

In the above code the main.css file blocks the rendering of the page, so the preload scanner sends it to be downloaded immediately. However, the print.css file doesn’t block rendering and is downloaded with low priority. This way, we won’t be blocking the painting with the download of the print styles.

We can also use this technique with the media queries used to make the website responsive (see the sketch below), but if it’s going to negatively affect the way we organise our code, I do not recommend it.
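
As a sketch of the idea (the breakpoint and file names are just an example):

<link rel="stylesheet" href="/css/base.css">
<link rel="stylesheet" href="/css/desktop.css" media="(min-width: 601px)">
<link rel="stylesheet" href="/css/mobile.css" media="(max-width: 600px)">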

Request styles from the header

Style sheets must be requested with link elements from the header because, besides being their correct location, they are critical resources, and as such they should be downloaded as soon as possible to paint the page.

Inline CSS in the HTML only if it’s a very small amount

Pages made with AMP technology force you to inline the CSS code in the HTML, limiting it to a maximum of 50,000 bytes. This technique can also be applied to pages that aren’t AMP, and here’s how it’s done:

<style> /* <style amp-custom> if it's AMP */ 
/* CSS code */ 
body{
    background:#0AF;
}
</style>

I’ve placed some comments in the example, but the code must be included completely minified and uncommented.

This technique saves us from having to request the CSS file (and from going down to the second level of the dependency tree), but if the CSS is very big, Google might not get to index all the page’s content bytes, which is why we should only do this with small files. Ideally, we would implement it in a way that doesn’t get in the way of the regular workflow and doesn’t duplicate the same code on several pages. For example, we can use this technique to inline the CSS that is specific to the page being visited, and load the global CSS with the link tag.

Do not use inline styles

If we use the style attribute on each element of the page to style it, we’ll be mixing style with content and duplicating styles used across various pages and elements. For that reason, this attribute is considered a performance and maintainability anti-pattern.


If we know for certain that a style is not going to be repeated, we can make an exception to this rule. For example, when we have to set a purely decorative background image that is only used in one place, we can do it the following way:

<div style="background:url('/img/imagen.jpg');"></div>

This makes the image download start before the CSS has even been requested, and we won’t be duplicating code. Although, if we don’t want to mix style and content, we can do it in a cleaner way inside a <style> tag, giving the layer an identifier so we can reference it.
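
A minimal sketch of that cleaner alternative (the identifier and the path are illustrative):

<style>
#promo-banner{ background:url('/img/imagen.jpg'); }
</style>

<div id="promo-banner"></div>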

Organise the CSS well to avoid making more rules than necessary

There are many ways to organise CSS code in a clean and elegant manner: ITCSS, BEM, SMACSS, OOCSS, SUITCSS, Atomic… Regardless of the one you use, the important thing is for the code to be structured correctly, as it will be repeated less, which besides maintainability also affects performance.

If at some point the CSS doesn’t fit any of the recommendations of the chosen method, I recommend agreeing on new conventions of our own so as not to lose the organisation.

Avoid complex HTML

We shouldn’t assign styles by creating new elements in the HTML instead of using classes and identifiers, and we shouldn’t create HTML elements that aren’t needed, either to style the page or to contribute semantic information.

This is because with fewer DOM nodes the page loads faster and takes up less memory, and consequently the CSSOM will be less complex. Moreover, by using identifiers and classes, we will have more efficient CSS selectors.
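
For example (a hypothetical snippet), instead of adding a wrapper element just to hang styles from it:

<!-- extra element only for styling: one more DOM node and a longer selector -->
<div class="highlight"><p>Offer of the day</p></div>

<!-- better: style the existing element directly with a class -->
<p class="highlight">Offer of the day</p>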

Font optimisation

Fonts are a critical resource that blocks the painting of text on the page. While the font is being downloaded, some browsers display the page with a system font, and others display it without text. When the download finishes, the user sees the font change or appear, which doesn’t make a good impression.

This subject deserves a whole post of its own, but as a general rule, we shouldn’t load more fonts than necessary, be they variants of the same font (bold, italics, bold italics, …) or sets of characters we aren’t going to use with the language of the current page. Use gzip or Brotli compression only on the older formats like EOT and TTF (not on WOFF or WOFF2, because they are already compressed), and if you want to control how the font is loaded, use JavaScript’s Font Loading API.
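
A sketch of the character-set idea (the font name, paths and range are illustrative): declaring only the formats we need and limiting the character set with unicode-range, so the browser downloads the font only if the page actually uses characters in that range:

@font-face {
    font-family: 'Cicle';
    src: url('fonts/cicle_fina-webfont.woff2') format('woff2'),
         url('fonts/cicle_fina-webfont.woff') format('woff');
    unicode-range: U+0000-00FF; /* basic Latin plus the Latin-1 supplement */
}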

You can also preload them from the HTML, like this:

<link rel="preload" href="fonts/cicle_fina-webfont.woff" as="font" type="font/woff" crossorigin="anonymous">

This post provides a more in-depth explanation on how to optimise the fonts of a website.

Painting optimisation

Avoid CSS expressions

CSS expressions allow us to calculate property values using JavaScript. Besides violating the separation between behaviour and presentation, they are extremely inefficient for the browser, which may re-evaluate them constantly. So it is best to replace these expressions with JavaScript.
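
This is the kind of legacy (Internet Explorer only) code to avoid:

div {
    /* old IE-only syntax: runs JavaScript every time the browser re-evaluates styles */
    width: expression(document.body.clientWidth > 800 ? "800px" : "auto");
}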

Create animations with CSS instead of JavaScript

With CSS animations you can create animations by interpolating the values of CSS properties across a series of keyframes, just as used to be done in Flash. In case the term is unfamiliar, interpolation is a mathematical operation used to calculate intermediate values between given values. In this case, it allows the browser to calculate the values of the CSS properties between the key points we define, so that it can render the animation. Ideally, we should only animate the “transform” property, which allows us to rotate, move, enlarge or skew any layer, and the “opacity” property, for transparencies.

Besides using only the transform and opacity properties, I recommend using layers with a fixed or absolute position (to prevent the animation from moving the remaining layers), because this way we prevent the browser from triggering reflow and repainting, and it will only need to run the layer composition.

Example:

<style>
#animated_div_wrap{
    position: relative;
    height: 70px;
}

#animated_div {
    position: absolute;
    width: 70px;
    height: 47px;
    box-sizing: border-box;
    background: #ba1c1e;
    color: #ffffff;
    font-weight: bold;
    font-size: 20px;
    padding: 10px;
    animation: animated_div 5s infinite;
    border-radius: 5px;
}

@keyframes animated_div
{
    0% {
        transform: rotate(0deg);
    }
    25% {
        transform: rotate(-5deg);
    }
    50% {
        transform: translateX(320px) rotate(-20deg);
    }
    100% {
        transform: translateX(0px) rotate(-360deg);
    }
}
</style>

<div id="animated_div_wrap">
       <div id="animated_div">WPO</div>
</div>

Result: the “WPO” layer moves and rotates as defined by the keyframes above.

Making the same animation with JavaScript would be much less efficient, because it would run in the single JavaScript thread together with many other things, whereas the browser is optimised to run CSS animations fluidly, prioritising speed over smoothness. This means that if an animation consists in moving a layer 1,000 pixels to the right in one second, during which it should display 60 frames, the layer will move about 16 pixels in each frame. But if the browser only has time to paint half of those frames, the layer will move 32 pixels in each repaint.

Prevent reflow and repainting by specifying the image dimensions with a layer on top

When a page is painted, some images may load at the end, making everything underneath them shift once they have loaded. To prevent these jumps, each image must be contained within a layer that occupies the same height the image will need once it has finished loading. This has to be implemented keeping in mind that the image will have a different size depending on the device, and it’s done as explained below:

First, we calculate the image height-width ratio:

Height ratio = height/width * 100

This gives us a percentage, which we assign to the “padding-top” property of a layer placed before the image; this way, the layer takes up the image’s height regardless of the width the container ends up with (for example, an 800×450 image gives 450/800 × 100 = 56.25%). The image is given an absolute position with regard to the container layer, so that it sits on top of the layer that sets the height. Let’s see an example:

<style>
#wrap-img-ej{
   position:relative;
   max-width:800px;
}

#wrap-img-ej div{
   padding-top:56.25%; /* 450/800 * 100 */
}

#wrap-img-ej img{
   position:absolute;
   top:0;
   width:100%; /* the image scales with the container, filling the reserved height */
}
</style>
<div id="wrap-img-ej">
   <div></div>
   <img src="/img/image.jpg" alt="SEO text" />
</div>

This technique should also be applied to sliders and carousels, because it’s common for a jump to happen during the initial load, when their JavaScript is executed.

For a more agile use of this technique, we can turn the previous sample code into a template with a web component, or use the amp-img web component directly, from the AMP component library. The AMP component code equivalent to the previous example would be as follows:

<amp-img alt="SEO text" src="/img/image.jpg" width="800" height="450" layout="responsive"></amp-img>

If there’s an identifier in the selector, we don’t need to overcomplicate this

Let’s imagine we have a rule with a selector like this one below:

div .class1 #my-id.class2{ 
    background:red; 
}

With this HTML:

<div>
    <div class="class1">
        <div id="my-id" class="class2">
            <div class="class3">WPO</div>
            <div>CSS</div>
        </div>
    </div>
</div>

This selector doesn’t make much sense, because there can only be one “my-id” identifier on the page, so the following selector will match exactly the same element with less code, which means it takes less time to load and to evaluate:

#my-id{ 
    background:red; 
}

I’m not saying that whenever there’s an identifier in the rule we should systematically remove everything else. The following selector does make sense:

#my-id .class3{ 
    background:blue; 
}

Use translateZ so that the browser uses the GPU graphic acceleration

Use will-change:transform; or transform:translateZ(0); on layers with animations or transparency (the opacity property) to promote the layer to the GPU hardware, where it will be painted faster. But avoid overusing these GPU promotion rules, as each promoted layer requires memory and management from the browser.

will-change:transform;

transform:translateZ(0);

Both declarations have the same effect, but “translateZ(0);”, which tells the browser to move the layer along the Z axis to its initial position, works in all browsers, while will-change is the correct way to implement it, because it doesn’t modify the layer’s properties.

Group several transformations into a rotation matrix

Transformations of a vector object are implemented by multiplying the vector of each vertex of the object by a matrix that gives us the final position of each vertex. This matrix defines the type of transformation we are going to apply (translation, scale, rotation, …). This is what happens internally in the browser when a two-dimensional or three-dimensional transformation is applied to a CSS layer with the transform property.

If we multiply all the matrices of the transformations we want to apply, be they translation, scale, rotation or skew, we can apply them all at once, in a single step, with the resulting matrix, writing much less code and in a computationally optimal way, because we use the matrix function as follows: matrix(a11, a21, a12, a22, a13, a23), instead of something like transform: translate(x,y) skew(ax,ay) rotate(angle) scale(sx,sy). If it’s a 3D transformation, we use matrix3d instead of matrix. Moreover, this function allows us to express transformations such as the mirror effect directly:

Mirror effect applied to the text “WPO CSS”.

To calculate the transformation matrix quickly (it’s not easy to do mentally), we’ll have to use a library, a program or a Sass function, always keeping in mind that transformations, like matrix multiplications, are not commutative (moving and then rotating is not the same as rotating and then moving).

2D transformation matrices
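
A couple of sketches of how this looks in practice (the values follow the CSS convention matrix(a, b, c, d, tx, ty), i.e. matrix(a11, a21, a12, a22, a13, a23)):

/* horizontal mirror: equivalent to scaleX(-1) */
.mirrored {
    transform: matrix(-1, 0, 0, 1, 0, 0);
}

/* move 100px to the right and rotate 90 degrees, combined into a single matrix */
.combined {
    /* same result as: transform: translate(100px, 0) rotate(90deg); */
    transform: matrix(0, 1, -1, 0, 100, 0);
}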

Simplify the complexity of effects that are difficult to represent

The filter property is computationally costly to apply when we use it for blurring, changes of contrast, brightness, and so on. Nevertheless, it can be useful to avoid loading extra images if we want to apply one of these effects when the mouse cursor hovers over one of them.

Other effects whose painting is costly are shadows and gradients. If you use these effects and you detect that the final painting stage of the page takes too long, try removing them.

Use CSS before images

Generally speaking, painting something with CSS will almost always be faster than using its image equivalent, because images take time to be downloaded, decoded and painted, usually more than it takes to download and paint the equivalent CSS. For that reason, if you cannot simplify the design, as I suggest in the previous point, you shouldn’t replace, for example, a text shadow done with CSS by its image equivalent.

Do not design with tables

This may seem like a recommendation from the ancient history of the web, but it continues to be true to this day, especially if we want to get a good performance.

Tables are inefficient, because a change in the size of one cell affects the position of all the elements below it, and a change in width also affects the cells above it.

Design using grid layout and flexbox

Using Grid Layout and/or flexbox is more optimal and flexible than designing with floating layers and, of course, than tables. Here we can use a framework like Bootstrap, which already uses flexbox in version 4.

With this new way of designing, you have the advantage of being able to visually change the location of any element, regardless of the order in which it appears in the HTML, giving you much more freedom than floating layers. This allows us to have an entirely different layout on desktop and on mobile devices, leaving at the top of the HTML the first thing we want to get indexed for SEO purposes, as shown in the sketch below.
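
A minimal sketch of this reordering (class names and the breakpoint are made up for illustration): the sidebar comes after the content in the HTML, but on desktop it is displayed first:

<style>
.layout{
    display: flex;
    flex-direction: column;  /* mobile: content first, sidebar below */
}

@media (min-width: 601px){
    .layout{
        flex-direction: row;
    }
    .content{
        flex: 1;
    }
    .sidebar{
        order: -1;           /* desktop: the sidebar is displayed before the content */
        width: 300px;
    }
}
</style>

<div class="layout">
    <main class="content">Main content, first in the HTML for SEO</main>
    <aside class="sidebar">Sidebar</aside>
</div>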

Avoid extremely complex selectors

When using automatic analysis tools, it’s commonly recommended to avoid complex selectors. I, myself, don’t recommend paying too much attention to this rule, because code maintainability is always more important than performance. So my recommendation is to avoid only extremely complex selectors, although it would be truly rare for you to need something as inefficient as this, for example:

div p a *{}

To know when a selector is complex or inefficient, we should take into account that the more general parts of the selector (elements) are less efficient than the more specific ones (identifiers). We should also understand that selectors are interpreted from right to left, so keeping the more general parts on the right makes the rule more inefficient, because it forces the browser to re-examine more parts of the DOM to decide whether the rule applies. In the above example, the * operator selects every element of the page; for each one, the browser then checks whether any of its ancestors is a link, then whether that link has a p ancestor, and so on, unless at some point no matching element is found, in which case the evaluation of the selector stops.
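
As a sketch (the class name is hypothetical), the usual fix is simply to put a class on the elements we actually want to target, instead of making the browser walk up the tree:

/* instead of: div p a *{ ... } */
.link-decoration{
    /* the same declarations, matched directly by class */
}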

CSS3 pseudo-elements and pseudo-classes are also slow (:first-child, :nth-child(), ::first-line, ::first-letter).

Tools

Use the Firefox and Chrome developer tools to analyse loading and painting times, adjusting the tests to see what happens on devices with more limited processing power and a slower network. In Chrome DevTools, these latter options appear by clicking on the red cog wheel, on the right-hand side of the “Performance” tab. There we also have the option “Enable advanced paint instrumentation”, which makes the “Paint Profiler” tab appear when we click on a painting event. This way, we can see everything the browser needs to do to paint our page.

With Chrome Developer Tools and the Performance tab we can examine what happens during painting with the “Paint Profiler”.

To study the speed of the CSS, and how it affects JavaScript, the options in the Rendering tab are very useful: they show us in real time, amongst other things, which rectangles of the page are being repainted and the speed, in FPS, at which they are painted. This tab can be accessed from the same screen shown in the previous screenshot, by clicking on the three dots in the top-right corner, then “Show console drawer”, and then the “Rendering” tab:

"Rendering" tab options
“Rendering” tab options in Google Chrome DevTools, to see in real time how scroll movements or animations affect performance.

Final recommendations

It’s important to always prioritise maintainability over performance, which is why I recommend writing the code in the cleanest way possible, using some organisation method and giving meaningful names to classes and identifiers. And if you make the mistake of not using a preprocessor, at least keep all the tabs, line breaks and spaces so that the code stays readable, since you will then be able to minify it with a tool.

Always keep in mind Amdahl’s law, which states:

The improvement achieved in the performance of a system as a result of the alteration of one of its components is limited by the fraction of time that component is used.

This means we should always give more priority to optimising the parts that consume the most time, because that is where we will get the biggest impact on overall performance.

Author: Ramón Saquete
Web developer at Human Level Communications online marketing agency. He's an expert in WPO, PHP development and MySQL databases.
