How to comply with Google’s Quality Guidelines in 2020

Written by María Navarro

Google provides a series of guidelines describing what your website’s content should be like in order to appear in its search results.

Google’s guidelines fall into several categories:

  • Webmaster guidelines.
  • General guidelines.
  • Content-specific guidelines.
  • Quality guidelines.

In this article we are going to focus on the last of these, but first we need to know what they are about.

The quality guidelines describe forbidden techniques which, if used, could prevent your page or your website from being displayed in Google’s search results.

Moreover, putting these forbidden techniques into practice can lead to our website being penalised.

Quality guidelines

  • Automatically-generated content.
  • Misleading redirects.
  • Link schemes.
  • Cloaking.
  • Hidden text and links.
  • Doorway pages.
  • Copied content.
  • Excessive use of keywords.
  • Creation of pages with malicious content.
  • User-generated spam.

Everything you are going to find in this article is published in the Quality Guidelines contained inside Google’s support section. However, we wanted to recap these specifications to help you quickly identify the techniques that can negatively affect the rankings of your website.


Automatically-generated content

Google’s primary goal is to offer the user unique and quality content. Creating our own content implies a high cost in resources and time, so one of the easiest and most common practices is to copy it or generate it automatically.

If Google detects content generated automatically, it considers it a ranking manipulation attempt, and may apply a penalty.

Texts which are considered to be automatically generated include:

  • Texts that lack sense but are stuffed with keywords.
  • Texts translated by automatic tools, without human review and editing.
  • Texts generated via automated processes, such as Markov chains (a Markov chain is a sequence of random variables in which each element depends only on the previous one); see the sketch after this list.
  • Text obtained through obfuscation techniques or generated with automatic synonym replacement.
  • Text generated by merging the content of several pages without any added value.
  • Text generated from Atom/RSS feeds or search results.
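
To see why Google flags this kind of text, consider the following minimal sketch of Markov-chain text generation (a hypothetical illustration of ours, not taken from Google’s documentation): each word is chosen by looking only at the word that came before it, so the output is locally plausible but globally meaningless.

    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the words observed immediately after it."""
        words = text.split()
        chain = defaultdict(list)
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=20):
        """Walk the chain, picking each next word at random."""
        word, output = start, [start]
        for _ in range(length - 1):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    source = "quality content attracts links and links attract quality traffic"
    print(generate(build_chain(source), start="quality"))

A possible output such as “quality content attracts links attract quality traffic” reads as vaguely on-topic word salad, which is exactly the pattern the guidelines describe.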

Certain common techniques for generating content automatically include the translation of content into other languages, or scrape & spin (copying, fragmenting and recombining text strings in a different order).

  • The technique of translating texts is based on scraping a piece of content in a different language, translating it and then publishing it on your website.
  • The technique of “spinning” texts has the goal of extracting texts existing on other websites and introducing syntactic variations that will make them seem “new and original”. This process involves some manual work, because syntactic changes need to be made, although there are also tools to automate it.

Our recommendation is not to use automatic content generation methods on serious websites on which our brand or business image depends. Although these actions can work for a time, we risk getting a penalty. The quality guidelines make it clear that the search engine doesn’t like content automation at all.

Misleading redirects

A redirect is an automatic forwarding carried out by the server from one URL to another. There are many situations where a redirect is the best way to inform Google that a URL has changed, and in such cases it is perfectly legitimate. For example, when we find duplicate content on several pages and want to consolidate it under one unique URL, or when a URL has changed and we use a redirect to indicate which one is current.
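
For illustration, a legitimate permanent redirect might look like this minimal Python sketch using Flask (Flask and the example paths are our own assumptions; any server-side 301 achieves the same):

    from flask import Flask, redirect

    app = Flask(__name__)

    # Hypothetical scenario: the article moved from /old-url to /new-url.
    # The 301 status code tells search engines the change is permanent,
    # so ranking signals can be consolidated on the new URL.
    @app.route("/old-url")
    def old_url():
        return redirect("/new-url", code=301)

    @app.route("/new-url")
    def new_url():
        return "Current version of the page"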

However, there are cases where redirects are applied with the purpose of deceiving search engines, displaying content that is different from what we show to our users. This type of misleading redirect infringes Google’s quality guidelines, and we could be penalised if Google considers it detrimental to the user experience.

Keep in mind that some developers do these redirects deliberately and with a specific purpose, but there can also be cases of misleading redirects on mobile devices which happen without the owners even realising it, for example, after a website has suffered an attack.

A common example of a misleading redirect: let’s imagine we search for something and the provided result has the same URL for both mobile and desktop versions. The user clicks on the result on a desktop device, and the URL opens as it should, all correct. The problem arises when the user clicks on the same result on a mobile device and, instead of landing on the expected URL, is redirected to a completely unrelated one. You can understand it much better with the following infographic:

[Infographic: penalty for misleading redirects]

Link schemes

Another very common infraction is getting links with the purpose of manipulating PageRank. These links can negatively affect the website.

Some of the most common examples of this manipulation attempt are as follows:

  • Sale and purchase of links aiming to manipulate PageRank.
  • Sending free products to get someone to write about us or exchanging services for links.
  • Link exchanges between websites.
  • Automated links.
  • Large-scale article or guest-post marketing with keyword-rich anchor text links.
  • Forcing the customer to include a followed link in exchange for providing a service. The most common case is when a web developer includes a link in the footer or elsewhere on the website, with text such as “Developed by [name of the web development company]”.

The best way to get inbound links without being penalised by Google is to get other websites to want to include links pointing to our website, just because it contains unique, relevant, useful and quality information. Such content will quickly acquire popularity thanks to users themselves.

Cloaking

The content should be the same regardless of whether it is seen by a user or by a search engine. If that’s not the case, we could be engaging in cloaking, a penalty-inducing practice, as it directly breaches Google’s quality guidelines.

Cloaking examples:

  • Configuring the server to display different content depending on who requests the page, e.g. manipulating the content and inserting additional text or keywords when requests from a search engine are detected.
  • Displaying a page with JavaScript, images or Flash to human users, while search engines are shown an HTML page.

In general, these cloaking techniques are becoming less and less common: search engines have evolved so much that they can quickly detect and penalise them.
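
A naive way to see how such detection can work is to request the same URL with a browser User-Agent and with a crawler User-Agent and compare the responses. The following is only an illustrative sketch (the URL and User-Agent strings are assumptions of ours, and Google’s real crawling is far more sophisticated):

    import requests

    URL = "https://example.com/"  # hypothetical page to check

    BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
    CRAWLER_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                  "+http://www.google.com/bot.html)")

    def fetch(user_agent):
        """Fetch the page pretending to be the given client."""
        response = requests.get(URL, headers={"User-Agent": user_agent}, timeout=10)
        return response.text

    # If the server returns substantially different content to the two
    # clients, the page may be cloaking. A real check would first strip
    # legitimately dynamic parts (timestamps, session tokens, etc.).
    if fetch(BROWSER_UA) != fetch(CRAWLER_UA):
        print("Content differs between browser and crawler: possible cloaking")
    else:
        print("Same content served to both clients")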

Hidden text and links

Once again, we are looking at a rather common case: hiding content and links on websites. Very often it isn’t done consciously, and we don’t even realise it’s a manipulation technique. But it is one indeed, and it can bring negative consequences, which is why we recommend reviewing whether there’s any sort of hidden content, and fixing it.

Here are some common content hiding techniques:

  • Using CSS to hide text with display:none, or to position text off-screen so that users can’t see it.
  • Including white text on white background.
  • Including text behind images.
  • Setting the font size to 0px so that the text can’t be seen.
  • Hiding links behind a single-character anchor text so that they go unnoticed, or hiding them via other CSS methods.
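
As a quick self-audit, you can scan a page’s inline styles for the most obvious of these patterns. A rough Python sketch using BeautifulSoup (it only catches inline cases; hidden text can also live in external stylesheets):

    from bs4 import BeautifulSoup

    # Hypothetical HTML to audit; in practice, parse your own pages.
    html = """
    <p style="display:none">hidden keywords here</p>
    <p style="font-size:0px">more hidden keywords</p>
    <p>visible text</p>
    """

    SUSPICIOUS = ("display:none", "visibility:hidden", "font-size:0")

    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(style=True):
        style = tag["style"].replace(" ", "").lower()
        if any(pattern in style for pattern in SUSPICIOUS):
            print(f"Possibly hidden content in <{tag.name}>: {tag.get_text(strip=True)!r}")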

We must keep in mind that hidden content is not always penalty-inducing. There are exceptions, which are commonly related to accessibility improvements.

If our website uses technologies that make crawling more difficult for search engines, such as images, JavaScript or Flash, we recommend adding a descriptive text to make it easier for them.

Users can also benefit from these descriptions if, for whatever reason, they cannot view this type of content.

Accessibility improvement examples:

  • Images: add an “alt” attribute with a descriptive text.
  • JavaScript: include the same JavaScript content inside a <noscript> tag.
  • Videos: include text describing the video in HTML.
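
Staying with the self-audit idea, this short sketch lists images that are missing the “alt” attribute (the markup is hypothetical and BeautifulSoup is assumed):

    from bs4 import BeautifulSoup

    # Hypothetical markup; in practice, parse your own rendered pages.
    html = """
    <img src="/logo.png" alt="Company logo">
    <img src="/product-photo.jpg">
    """

    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):
        if not img.get("alt"):
            print(f"Image without alt text: {img.get('src')}")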

Doorway pages

The so-called doorway pages are websites or web pages created with the sole purpose of ranking for very specific searches, targeting one or two keywords. Generally, this type of landing page is at risk of getting a Google penalty, because it makes for a poor user experience: the results the user is going to see are all going to be very similar.

These pages are usually created to funnel user traffic to a main website or page, and they are focussed on ranking in search engines rather than on providing users with a quality result.

They are commonly low-quality pages that offer no added value to the user, often built from automated content with slight variations.

Doorway pages are most commonly used to rank services by city names. For example: “Pest control in Murcia”, “Pest control in Alicante”…


Some more examples of doorway pages:

  • Pages to channel visitors to the main page, or the most relevant or useful page of the website.
  • Pages with similar content that look more like search results than a clearly defined, browsable hierarchy.
  • Several pages or domain names focussed on specific cities or regions, to channel the user to a particular page.

To be able to provide a recommendation it’s important to analyse each case in detail. Nevertheless, if we’re penalised due to this issue, it must be corrected as soon as possible.

Copied content

As we mentioned in the previous section on automatically-generated content, we are aware that content creation is hard work and requires a lot of resources to produce quality material. Just as automated content is penalty-inducing, so is copying content from other websites.

Moreover, in this case we would be infringing copyright policies, and we could be reported for it to the competent authorities.

At Human Level we will always remind you of the importance of generating your own original content that is relevant and of good quality.

Plagiarised content examples:

  • Websites that copy and paste content from other websites without providing additional value.
  • Websites that copy and paste content from other websites, with a slight modification, for example, using synonyms or automatic techniques.
  • Websites that reproduce content feeds without providing value to the user.

Excessive keyword use

An excessive use of keywords, also known as keyword stuffing, is one of the oldest SEO techniques. While it did work many years ago, it no longer does. Nowadays, including an excessive amount of keywords in content, links, metadata, etc. breaches the quality guidelines and qualifies for a Google penalty. We recommend not using this method, and instead focusing on generating content that includes keywords and their synonyms as needed, in an appropriate and natural manner and context.

Creation of pages with malicious behaviour

Creating pages which behave differently from what users expect, potentially worsening their browsing experience, and doing so with a malicious purpose, is another clear way of breaching Google’s quality guidelines.

This is much easier to understand if we look at some examples of this malicious behaviour, which you’ve probably suffered yourself on many occasions:

  • Installation of malware on your computer system, such as Trojans, viruses, spyware… I would venture to say that this has happened to almost all of us at least once.
  • Inclusion of unwanted files in the download requested by the user.
  • Tricking the user into clicking on a button or link that doesn’t actually do what the user thinks it does.
  • Changing search preferences or the browser’s default homepage without previously informing the user or getting their consent.

Guidelines for user-generated spam

All the aforementioned points were about intentional manipulation techniques employed by the website owner. Occasionally, users can also have foul intentions and be the ones to generate spam on a quality website.

Usually, this issue affects websites which allow users to add content of some kind, or where end users are allowed to create pages.

The primary cases of user-generated spam are:

  • Spam in blog comments.
  • Fraudulent posts in forum threads.
  • Fraudulent accounts in free hosting services.

Spam-filled pages make a bad impression on users. It is recommended to disable these features if they aren’t useful for users, or if we don’t have time to regularly moderate what gets posted.

To prevent this type of spam we recommend:

  • Enabling moderation for comments and profile creation.
  • Using tools to prevent spam (honeypots, reCAPTCHA).
  • Using rel=”nofollow” or rel=”sponsored” attributes in links.
  • If your website allows the creation of pages, such as user profiles, forum threads or sites, you can use the “noindex” meta tag to block indexing of pages created by new users or by those you do not trust (see the sketch below). You can also use the robots.txt standard to block the page temporarily: Disallow: /guestbook/newpost.php
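
As an illustration of the “noindex” recommendation, here is a minimal Flask sketch that sends the equivalent X-Robots-Tag HTTP header for pages created by untrusted users (Flask, the route and the trust check are hypothetical assumptions of ours):

    from flask import Flask, make_response

    app = Flask(__name__)

    def is_trusted(username):
        """Hypothetical trust check; replace with your own logic."""
        return username in {"maria", "longtime_member"}

    @app.route("/profiles/<username>")
    def profile(username):
        response = make_response(f"Public profile of {username}")
        if not is_trusted(username):
            # Equivalent to the "noindex" robots meta tag, sent as an HTTP
            # header so search engines do not index unvetted user pages.
            response.headers["X-Robots-Tag"] = "noindex"
        return response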

Our recommendation

Although getting a website to rank may seem like an arduous, costly, and long-term task, falling into the temptation of shortcuts by applying black-hat techniques, especially without any experience, can result in search engine penalties. We recommend frequently reading Google’s quality guidelines, ensuring that our website respects them, and, above all, staying alert to new recommendations made by the search engine.

Author: María Navarro
Search Marketing consultant and web developer at Human Level.
