Written by Guillermo Fernández
Can you imagine a Google that ‘sees’ like you, ‘listens’ like you, and browses the web exactly as you would? This isn’t science fiction: artificial intelligence is radically transforming how search engines assess website quality. The SEO we know today might soon become unrecognizable. At Human Level, we’re working and researching to stay ahead of the next revolution in organic search, and what we’re seeing is surprising: the metrics you now consider critical could become obsolete once AI truly begins to evaluate user experience.
Are you ready for SEO in the age of Artificial Intelligence? Discover in this article what changes are coming and, more importantly, how you can get ahead of them to turn this transformation into a competitive advantage for your business.
The evolution of ranking criteria
The way we configure the different aspects and parameters of a site to achieve better organic results has evolved, just as the evaluation criteria applied by Google (and other search engines) have become increasingly demanding.
This evolution is healthy and necessary, mainly for two reasons:
- A better browsing experience: In general, Google’s requirements are aimed at improving the user experience. It’s natural for these requirements to change over time, as they must adapt to both our browsing habits and the available technology.
- A better search experience: Google, as a search engine, needs to differentiate itself from its competitors. One effective way to do this is by de-indexing or deprioritizing results that do not ensure a good browsing experience.
In other words, SEO is not a set of fixed configurations; rather, it is constantly changing and evolving.
A good example of this was the introduction of Core Web Vitals in 2020. Their arrival improved the way the browsing experience on a site is evaluated, and we should be grateful for them from two different perspectives:
- As website owners, optimizing our site’s performance gives us better rankings and an advantage over our competitors.
- As Google users, we are served sites that will respond to our query even when our network or hardware conditions are limited.
Despite this remarkable progress, some metrics could still be considered inaccurate with respect to their purpose, or could simply be improved.
This is where Artificial Intelligence could mark a turning point in how user experience is evaluated on a site. Thanks to it, Google could improve the way it evaluates some aspects or even take into account new ones that were not possible to evaluate before.
Could AI impact the way Google rates a site?
If there’s one thing Google should be credited with as a search engine, it’s having stayed at the technological forefront in terms of its efficiency as a content classifier.
A sign of this is that certain sites de-indexed by Google are still shown – and even preferentially – in other search engines.
Therefore, we deduce that Google will make use of any possible technology or technique if it means an improvement in its product.
Although somewhat ambiguously, John Mueller already suggested during the Search Central Live event on March 20 that Artificial Intelligence will bring many changes to the ecosystem:
“It definitely affects how users interact. So I guess where I’m kind of headed here is that there are lots of things happening around the whole ecosystem, which means that we also kind of have to adjust and react to some of that.”
As a first step, AI Overviews began rolling out in Spain in March and, although they may raise many objections, I personally believe they are the most immediate and visible way of incorporating Artificial Intelligence into the search engine.
It is to be expected that Google’s application of Artificial Intelligence will not end with AI Overviews, and that it will extend this technology by integrating it, in one way or another, into its ranking algorithms.
Project Mariner
Google is working on Project Mariner, an experimental technology that aims to allow artificial intelligence agents to navigate and interact with websites directly from the browser, simulating human behavior.
To be clear, this does not mean that its purpose is directly related to the search engine, but it could well end up feeding into it.
Project Mariner is claimed to be able to understand information presented in the browser, such as text, code, or images. It is not unreasonable to think that this technology, or a similar one, could be applied in the future for analysis purposes within Google’s search ecosystem.
Some examples of aspects whose evaluation could be improved are the following:
Better interpretation of multimedia content
As far as images are concerned, Google currently seems to rely on rather elementary signals that can result in an incomplete analysis:
- How large is the image?
- Is it offered in an optimal format and dimensions?
- Does it contain a descriptive ALT attribute?
However, Google’s ability to relate the alt attribute to what an image actually shows is uncertain.
According to the W3C web standard, the alt attribute of an image should describe what the image shows, or its purpose if it is an interactive element. The reality is that this use of the alt attribute is rare: on most sites it is treated as one more place where a keyword has to be inserted. And this, depending on the specific case, is not necessarily an incorrect SEO practice, at least with Google’s current analysis capabilities.
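As an illustration only, the sketch below (TypeScript, using standard DOM APIs) lists the images on a page whose alt text is missing or suspiciously short; the length threshold is an arbitrary assumption for demonstration, not a rule Google is known to apply.

```typescript
// Minimal sketch: flag images whose alt text is missing or suspiciously short.
// The 10-character threshold is an arbitrary illustration, not a known Google rule.
const MIN_ALT_LENGTH = 10;

document.querySelectorAll<HTMLImageElement>('img').forEach((img) => {
  const alt = (img.getAttribute('alt') ?? '').trim();
  if (alt.length === 0) {
    console.warn('Missing or empty alt attribute:', img.currentSrc || img.src);
  } else if (alt.length < MIN_ALT_LENGTH) {
    console.warn(`Possibly non-descriptive alt ("${alt}"):`, img.currentSrc || img.src);
  }
});
```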
But what would happen if Google used Artificial Intelligence to better understand an image and relate it to its alt attribute? What if it could play and understand content in audio or video format? Would multimedia content then become a more important ranking factor?
Better interpretation of the user experience
Google already has several metrics to gauge how satisfying the browsing experience is, although new ones could emerge or the existing ones could be improved.
Some examples could be the following:
Largest Contentful Paint
It measures the rendering time of the largest visible element in the viewport, on the understanding that this is the main element of interest to the user: usually text or a featured image. However, is that LCP element really important for the user experience?
We often encounter cases where the cookie consent banner is interpreted as the LCP element. This is “dirty” data and does not necessarily measure the level of user satisfaction.
This metric can also become inaccurate, as the most useful element for the user does not necessarily have to be the largest. This is where AI could help pinpoint what that element is on each page, rather than relying on something as basic as the dimensions or weight of the element.
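For reference, the element the browser actually selects as LCP can be inspected with the standard PerformanceObserver API; the sketch below simply logs that element so you can verify whether it is really the main content or something like a cookie banner.

```typescript
// Minimal sketch: log which element the browser reports as the LCP candidate.
// Run in the page context (e.g., the DevTools console); uses standard web platform APIs.
const lcpObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Entries are LargestContentfulPaint records; `element` is the DOM node chosen as LCP.
    const lcp = entry as any; // older TypeScript DOM typings may lack LargestContentfulPaint
    console.log('LCP candidate:', lcp.element, '| render time (ms):', lcp.renderTime || lcp.loadTime);
  }
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```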
Visibility of useful page content
This is one of Google’s pieces of unfinished business, although we must recognize that it poses serious difficulties without the involvement of Artificial Intelligence.
Is the unique and useful content of the page easily visible? Even if it renders in a timely manner, has the user had to scroll down the page, close an unexpected pop-up, expand collapsed content, or even click on an internal link to reach it?
These signals are easy to associate with content that was not readily accessible. However, some of them are currently interpreted as positive signals about the browsing experience. An algorithm may assume that if I visit a website and click on an internal link, it is because I like the content I found and want to keep browsing. In reality, I may have been forced to perform certain interactions just to reach the content I expected, which should be interpreted as a negative signal.
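As a very rough proxy only, something like the sketch below could check whether the unique content is visible without any interaction; it assumes the page marks that content with a <main> element, which is just a convention, not something every site follows.

```typescript
// Minimal sketch: check whether the page's main content is visible in the initial
// viewport, without scrolling. Assumes the unique content lives inside a <main> element.
const main = document.querySelector('main');

if (main) {
  const rect = main.getBoundingClientRect();
  const visibleWithoutScrolling = rect.top < window.innerHeight && rect.bottom > 0;
  console.log('Main content visible without scrolling:', visibleWithoutScrolling);
} else {
  console.warn('No <main> element found; cannot locate the unique content.');
}
```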
Better interpretation of correct accessibility
Facilitating communication between a website and people with disabilities of different types and degrees is a complex task. Unfortunately, in very few cases is it done properly.
Sites that are genuinely usable by people with total or partial blindness, color blindness, or other similar limitations are still a minority. One reason could be that, at present, these aspects carry little weight in search engine ranking criteria and are therefore treated as low-priority improvements.
This is not fair and, although there are organizations that promote the accessibility of web content, it would be much more effective if search engines could evaluate and score this aspect.
By setting two simple attributes that modern screen readers interpret natively (tabindex and aria-label), the browsing experience of a person with blindness can go from being a real challenge to a very pleasant read, as the sketch below illustrates. But how many sites actually use them?
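As a quick illustration, this is roughly what those two attributes look like when applied to a custom control built from a plain <div>; the .icon-button selector and the label text are hypothetical, but the attributes and events are standard.

```typescript
// Minimal sketch: make a div-based custom control usable with a keyboard and a screen reader.
// The ".icon-button" selector and the label text are hypothetical examples.
const control = document.querySelector<HTMLElement>('.icon-button');

if (control) {
  control.setAttribute('role', 'button');                       // announce it as a button
  control.tabIndex = 0;                                         // reachable via keyboard (tabindex="0")
  control.setAttribute('aria-label', 'Open the shopping cart'); // accessible name for screen readers
  // Allow activation with Enter or Space, as a native <button> would
  control.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault();
      control.click();
    }
  });
}
```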
The problem is that these aspects are difficult to evaluate with a traditional algorithmic approach. Artificial Intelligence could also help analyze accessibility, and thus influence rankings. If it became an important ranking criterion, companies would be more likely to invest in making their sites easier to navigate for people with disabilities. We also cannot forget the European Accessibility Act, approved in 2019, whose requirements become enforceable across EU member states in June 2025. This law sets minimum accessibility requirements for different products and services, including websites.
The importance of anticipating change
It is not uncommon for doubts to arise as to whether a certain configuration is or will be part of the calculation of search engine rankings. Generally, it is easy to reach a conclusion:
Does a given configuration provide a benefit to the user? If the answer is yes, then it will almost certainly become part of the ranking calculation, if it is not already.
For example, in the case of accessibility, although it is not currently a very important organic positioning factor, it can be deduced that in the future it will be.
One way to understand it may be the following:
Imagine an individual who leaves his house at 5 o’clock in the morning with a shovel and a big bag.
In this scenario, anyone can imagine what he is up to based on many factors. However, if you know in advance that he is a farmer, you will deduce that he is probably getting up early to make the most of the day, that he is carrying the shovel for digging or sowing, and that he is carrying other tools in his bag. And if you also have some agricultural knowledge, you will even be able to anticipate some of the things he will do over the next few hours.
Exactly the same applies to SEO. If you know in advance that Google wants the best for the user, you can deduce for yourself which things matter, because what is not important today will be important tomorrow: if you understand what Google is, you will know what it will do.
Therefore, it is very important to delegate the SEO strategy to professionals who have sufficient technological context and who understand not only where we are, but also where we are heading.
Many of these issues are time-consuming and resource-intensive, so it is always best to “have your homework done before the teacher calls the roll”.
When are these changes coming to Google?
It is impossible to say which algorithmic changes Google will roll out, or when. We may be talking about one year, five years, twenty years, or some of them may never arrive at all.
It is also unclear whether it will be Google’s own spiders that analyze metrics like the ones above, or whether Google will delegate these analyses to users, as it already does with the Chrome User Experience Report.
The important thing is not to know if or when they will arrive, but to be prepared for when they do, if they do. No improvement will ever be detrimental.
My personal opinion
The advent of Artificial Intelligence opens up a wide range of possibilities in terms of SEO and user experience.
In 2025, Google is still struggling to get its spiders to correctly render some JavaScript-heavy sites. Analyzing a site with AI would likely require far more time and resources, so I cannot imagine a scenario where Google would allocate in-house resources to do so.
However, such an analysis becomes more feasible, and even more reliable, if user resources are used, as is already the case with the Chrome User Experience Report. It could be based on the Project Mariner prototype, which might one day be integrated natively into Chrome.
My personal bet:
- There will be no significant near-term changes in the way Google’s spiders crawl and process content.
- Although it does not seem to be their original purpose, prototypes such as Project Mariner are the first step toward assessing new metrics, or improving the evaluation of existing ones, directly from the user’s browser, as already happens with Core Web Vitals.
- Aspects such as accessibility or multimedia content will become part of the ranking calculation, just as the Core Web Vitals are, as long as the user’s browser has the tools to measure them reliably.
All of this will largely depend on the future of Chrome, which is currently uncertain due to the antitrust trial in which Google is immersed, whose ruling could mean incalculable damage for the search engine.
What is unquestionable is the importance of surrounding yourself with professionals who know the current context and understand where it is heading. Many first-mover opportunities are still to come, and each of them can turn us into a reliable source of information for Google, with all the benefits that entails.
An optimal SEO strategy is not to react quickly to algorithmic changes, but to have acted before they occur.
If tomorrow Google were to deploy a scenario similar to the one proposed in this article, would your site be prepared?