For those who are not very familiar with this language, here is a brief explanation:
Client-side and server-side execution
Traditionally, as with static HTML pages, the code is executed on the server (server-side execution). When you visit such a website, your browser, like Googlebot, receives the content already assembled and ready to display; it only needs to download the CSS and render the page. With client-side execution, in contrast, the server sends little more than a skeleton HTML page and some JavaScript, and the visible content is built only once that JavaScript runs, so Googlebot has to execute it before it can see anything at all.
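To make the difference concrete, here is a minimal sketch of client-side rendering in TypeScript; the element id, markup and link are invented for illustration:

```typescript
// The HTML sent by the server contains only an empty <div id="app">.
// The visible content exists only after this script runs in the browser,
// so a crawler has to execute the JavaScript to see anything at all.
const app = document.getElementById("app");
if (app) {
  app.innerHTML = '<h1>Our products</h1><a href="/products">See all products</a>';
}
```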
Getting Googlebot to interpret your website correctly comes down to two things: the content and the links. If Googlebot cannot render your links, it will be practically impossible for it to find your pages. And if it cannot render the content of the website, it simply will not see it.
Here are the steps you must follow …
1. The “site:” command
First of all, the “site:” command: it shows you how many pages of your website Google has indexed. If many of them are missing from the index, it can mean that there are problems with the execution of your internal links.
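For example, typed directly into Google’s search box (example.com is just a placeholder domain):

```
site:example.com        → roughly how many pages of the whole domain are indexed
site:example.com/blog   → indexed pages of one section only
```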
It is important to clarify that this will not work with the “cache:” command, since the versions of your site stored in Google’s cache are the original static HTML, not the fully rendered code.
2. Chrome 41
In August of 2017 Google updated its search documentation and announced that it was using Chrome 41 to do the rendering. This was a radical change for SEO, because from that moment on you could verify how Google actually downloads and views your site instead of guessing and hoping for the best.
Now you only need to download Google Chrome 41 and check how a website or an individual page is executed and seen by Googlebot.
3. Chrome DevTools
- Open your site in Chrome
- Open the “Elements” tab in DevTools
- Check how your site is rendered by inspecting the DOM built in the browser, and make sure all the crucial navigation elements and content are present there (see the console snippet below).
We recommend doing this in Chrome 41, so you can be sure you are seeing the page the way Googlebot does.
You can also repeat the check in your current version of Chrome and compare what is displayed.
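A quick way to run that check is to list, from the console, the href attributes that actually made it into the rendered DOM. The snippet below is a small sketch, written in ES5-style syntax so it also runs in the older Chrome 41 console:

```typescript
// Paste into the DevTools console: collects every link found in the rendered DOM.
var hrefs = [];
var anchors = document.querySelectorAll("a[href]");
for (var i = 0; i < anchors.length; i++) {
  hrefs.push(anchors[i].getAttribute("href"));
}
console.log(hrefs); // your crucial navigation links should all appear here
```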
4. Google Search Console
Another tool that gives us an idea of how Google downloads and reads our website is the “Fetch and Render” function in Google Search Console.
First, copy and paste the URL of your page. Then choose the “Fetch and Render” option and wait a moment. This lets you verify whether Googlebot can download and render your website, see its content and follow its links.
5. Analysis of your server’s log
Another way to verify how Googlebot crawls your site is to analyze your server logs. By taking a close look at them you can check which URLs Googlebot has visited and which sections it has not.
You can also check whether Googlebot reaches all the pages of your site; if it does not, that too can mean there is a problem with how the content is rendered.
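As a sketch of what that log check can look like, here is a small Node/TypeScript script. The file name “access.log” and the assumption that your server writes the common “combined” log format are illustrative choices, not something the article prescribes:

```typescript
// Scan an access log for Googlebot requests and list the unique URLs it asked for.
import { readFileSync } from "fs";

const lines = readFileSync("access.log", "utf8").split("\n");
const urlsSeenByGooglebot = new Set<string>();

for (const line of lines) {
  if (!line.includes("Googlebot")) continue;               // keep only Googlebot hits
  const match = line.match(/"(?:GET|POST) ([^ ]+) HTTP/);  // request path inside the quotes
  if (match) urlsSeenByGooglebot.add(match[1]);
}

console.log(Array.from(urlsSeenByGooglebot).sort().join("\n"));
```

Any URL that never shows up here, even though it should be crawled, deserves a closer look.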
Possible problems with the execution of your website
Even if your site renders correctly in Search Console’s “Fetch and Render”, that does not mean you can rest easy. There are still other problems you need to pay attention to.
Let’s start with one of the biggest problems you’ll have to solve: timeouts.
Although Google does not publish its exact waiting times, it is said that it will not wait more than 5 seconds for a script. It is important to remember that “Fetch and Render” is much more lenient than ordinary Googlebot, so you will have to go a step further and make sure your scripts can be downloaded and executed in less than 5 seconds.
Therefore, the best solution is to download the Chrome 41 browser (the exact version Google uses to render sites) and familiarize yourself with it. Check the console log to see where errors occur and ask your developers to correct them.
Content that requires user interaction to run
I know this was mentioned before, but it’s worth repeating: Googlebot does not act like a user. Googlebot does not click buttons, does not expand “read more” sections, does not fill out forms… it just reads and continues on its way.
This means that all the content you want Google to read must be loaded into the DOM immediately, not only after a user action. This is particularly important for content behind “read more” buttons and inside menus.
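A small sketch of the difference, with invented element ids and endpoint:

```typescript
// Pattern Googlebot will NOT index: the full text only enters the DOM after a click.
document.getElementById("read-more")?.addEventListener("click", async () => {
  const res = await fetch("/api/article/full-text");      // fetched only on user action
  document.getElementById("article-body")!.innerHTML = await res.text();
});

// Pattern Googlebot CAN index: the full text is in the DOM from the start
// and the button only toggles its visibility.
document.getElementById("toggle")?.addEventListener("click", () => {
  document.getElementById("article-body")!.classList.toggle("collapsed");
});
```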
What can I do to help Googlebot run websites better?
The way Googlebot executes a website is only one side of the story. There are many things developers can do to make the process easier, helping the things that you want Googlebot to read be a little more obvious.
Search engines do not treat onclick="window.location=…" as ordinary links, which means that in most cases they will not follow this type of navigation. And they almost certainly will not count it as an internal link signal.
It is crucial that the links are in the DOM before any click. You can check this by opening Developer Tools in Chrome 41 and confirming that the important links are already present without any action from the user.
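To illustrate (the “/products” path is made up), both elements below look the same to a visitor, but only the second is a link Googlebot can discover and follow:

```typescript
// Not a link for crawlers: navigation happens only when JavaScript handles the click.
const fakeLink = document.createElement("span");
fakeLink.textContent = "Products";
fakeLink.onclick = () => { window.location.href = "/products"; };

// A real link: an <a> element with an href, present in the DOM before any click.
const realLink = document.createElement("a");
realLink.href = "/products";
realLink.textContent = "Products";

document.body.append(fakeLink, realLink);
```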
Unique URLs for unique pieces of content
Each piece of your content must live at its own URL so the search engine can index it. That is why it is important to remember that if you change your content dynamically without changing the URL, you are preventing search engines from accessing it.
Avoid # in URLs
The fragment identifier (#) is not supported by Googlebot and is ignored. So, instead of using the URL structure example.com/#url, try to stick to the clean URL format, example.com/url.
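As a sketch of how these last two recommendations can be applied in a JavaScript-driven site, here is a hypothetical History API example; showProduct() stands in for whatever actually renders the view in your app:

```typescript
declare function showProduct(id: string): void;      // assumed render function

function openProduct(id: string): void {
  showProduct(id);                                    // render the new content
  history.pushState({ id }, "", "/products/" + id);   // e.g. /products/42, not /#/products/42
}
```

Make sure the server can also answer /products/42 directly; otherwise the clean URL only exists for users who clicked their way there.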
As Google itself recommends: “We recommend making sure that Googlebot can access all the resources used that contribute significantly to the visible content of your site or its design…”
So, do not do things like this:
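A typical example of what not to do here is blocking, in robots.txt, the scripts and stylesheets that build your pages (the paths below are invented):

```
User-agent: Googlebot
Disallow: /js/
Disallow: /css/
```

Blocked this way, Googlebot cannot fetch the files it needs to render the content, even if the pages themselves are crawlable.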
Using this solution is quite simple: you just need to incorporate a middleware or a small snippet into your server.