Site audit - 20 checks to get your site to the top

As I have said more than once (and you know it yourself), the main thing on a site is its content.

Everything else is secondary and unimportant... Stop. Secondary, yes, but still very important.

How that content is designed, what it is wrapped in, how it is served, and how it all looks from the point of view of search engines matters a great deal.

This will not give you some unthinkable advantage over competitors (provided they audit their sites regularly), but it will let you stand on the same level with them.

SEO today is made up of a number of factors of varying importance (we will talk about this separately).

For example, for Yandex texts come first, while for Google links are still the priority (without them, traffic growth is unlikely).

But if the site has technical or structural errors, its usability is lame, and behavioral factors are dragging it down, then do not expect anything good.


Sites promoted for commercial queries should also add to the audit an inventory of all the commercial factors (and there are many of them), without which there is nothing to be done in this niche nowadays.

In general, there is a lot to keep an eye on, and exactly what will be discussed in detail below. In fact, everything is very, very simple.

How to start a site audit?

First, a few words about who may need this and why. In short, everyone who has a website and wants nothing to interfere with the growth of its popularity (traffic). It can be a newly created site or a project that has been around for a long time.

An audit is useful both for one-page sites and for monsters with millions of pages (the latter, by the way, will feel the return on the audit to a greater extent).

Everyone needs an audit. The only question is whether it is carried out by pros with decent price tags or by you yourself, ending up with a list of tasks for your programmer (and if you do not have one, the freelance exchanges are looking forward to your offers).

In principle, there is nothing complicated about it. Sit at the computer for an hour or two, draw up the revision tasks according to my templates, find performers on the exchanges (although that is not so easy, because there are a lot of intermediaries), and that is all.

If you wish, you can fix everything yourself, but then you will not get by with a couple of hours, although I will still leave links for especially inquisitive minds.

In principle, where you start the audit is not so important, because all the checklist items must be completed, and the order does not matter.

The absence of errors that hinder promotion is a cumulative result, and it is achieved only by a thorough, all-round check. Let's go.

Stop, a few more words of digression are needed. The fact is that some sites are promoted only for informational queries (my blog is a vivid example).

Life is easier for them, because the competition in the search results is not as high, and search engines forgive some flaws and ignore others entirely.

But there are also sites promoted (partly or mostly) for commercial queries, and there the fight in the search results is very real.

For such sites, promotion is a walk through a minefield. Everything matters here: the volume of text, the occurrence of keys, the placement of site elements, the structure of sections, and the so-called commercial factors (a phone number in the required format, contacts like everyone else's, etc.).

Why am I saying this? Because this audit will be universal, suitable for both commercial and informational sites. If the queries promoted on your site are informational, simply skip the items that are needed only for commercial queries; they would be overkill for you. Selling sites, however, will need to go through every point of this checklist strictly.

By the way, a quick check of your site can be done by entering its URL into this form:

Well, now let's go...

SEO site audit. Top 8 Best SEO Website Audit Services of 2022

The World Wide Web is developing rapidly and has long since swallowed up every area of business.

It is no secret that in order to be at the top of the search engines, you need to constantly analyze your activities and improve them so that potential customers become part of your community. That is exactly why, below, we will look at the 8 best SEO site audit services that can take your business to the next level.

RostSite


The best site audit: it lets you find out about your resource's traffic. All statistics are presented as a clear and simple graph.

In addition, RostSite checks all versions of your Internet portal (smartphone, computer, and tablet), so you can find out the true popularity indicators of your platform in all search engines.

The speed of the analysis also deserves a mention: the finished audit appears on your screen in about 30 seconds.

Sound of the Wind


This audit is a scanner of a resource's popularity on the Internet. That is, by ordering a site audit, you can learn about users' opinions and their reviews on various social networks.

The analysis takes about a minute (depending on the requests). It is also worth noting that the finished Sound of the Wind audit can be sent for revision, or your platform can be scanned again to get the true indicators.

SEO specialists note that Sound of the Wind is one of the best audits on the Internet, because on this resource you can not only analyze popularity but also order search engine promotion of the site and carry out branding, making the brand feel fresher.

MegaIndex


This audit is shareware, but the free version offers more than 20 analysis tools.

It is worth noting that you can use the resource only after registration, whereas the examples above let you order an audit without going through that procedure.

You cannot overlook the main advantage of MegaIndex: the search for common links through search engines and other Internet resources.

It is a great opportunity to run through all the information on your site and find out which of your competitors offer the same thing. This advantage will allow you to adjust your strategy and make the site more original.

Be1.ru


This is also a conditionally free audit; even in the limited version there are many functions that let you analyze not only the popularity of your own resource but also the rating of competitors. A significant plus is the HTML editor, thanks to which you can fix all the technical gaps in the page's source code.

In addition, the creators of the resource are constantly updating the analysis tools, allowing users to look at their site from different angles and notice every aspect that needs improvement.

PR-CY


With the help of this service, you can conduct a comprehensive audit of the site, including a technical one, and correct all the errors right after receiving the report.

It is also worth noting that all the information is displayed on one page, so you do not have to open several reports in parallel and compare the figures.

Although the resource has a subscription that unlocks the entire range of tools, the limited version will also do.

Of course, it offers slightly fewer opportunities, but everyone will still be able to analyze and improve their activities.

Content Watch


The most important thing for the user is the uniqueness of the information, and Content Watch will help you with this. The service is one of the best for checking the uniqueness of text.

Before publishing information on your resource, run it through Content Watch. This way you will stay ahead of your competitors with the uniqueness and freshness of the text you show your users.

Sitechecker


A dream for all webmasters. Using this service, you can audit both your own site and your competitors' resources.

On average, the whole process takes a little over 2 minutes. When you order an audit, the site is crawled against 50 different parameters, so you can improve not only the content but also the technical performance by getting rid of bugs.

In addition, Sitechecker offers solutions to the problems it finds after the analysis: the service helps correct errors rather than just pointing them out.

Yandex Webmaster


This is the best semantic markup validator. Using it, you can check pages for technical bugs.

Like the previous service, it offers solutions to the problems it finds, but unlike many other resources (for example, validator.w3.org) it does not treat microdata as an error. Thus, you can fix all the technical problems on the first try and make the site more user-friendly.

The one who adapts first will be able to conquer the top of the business. Using all of the audits above, even a beginner will be able to take top places in the search engines. Each of the services will give a true report on the state of your resource.

Thus, you will always be one step ahead of your competitors. But do not forget that even the best tool will not give you the whole picture, and for effective promotion of your site you should contact a team of specialists.

Check for blocking by Internet regulators (Roskomnadzor)

In today's realities, this is very important. Both your own site (entirely or just individual pages) and a site sharing the same IP with you (shared hosting or a CDN) can get blocked.

Checking whether the site is present in Roskomnadzor's databases is quite simple:

  1. eais.rkn.gov.ru/

If it turns out that your resource is on that list, you will need to immediately start corresponding with Roskomnadzor to find out the reasons and fulfill their conditions as soon as possible so that your site is removed from the list.

Even if only one page of your site fell under RKN sanctions, there is no guarantee that only that page will be blocked. Some Internet providers simply do not bother and block the entire site.

If you did not find your site in these lists, that does not mean it cannot be blocked at all. If you are on shared hosting without a dedicated IP, or you use a CDN (especially a free one in the manner of CloudFlare), there is a high probability of being blocked for someone else's sins.

Therefore, to avoid such a fate, it is better to stop using free CDNs (and paid ones too) and spend the money on getting a dedicated IP for your site.


Checking the site's cross-browser compatibility

As you understand, users can come to your site from completely different browsers (and there are quite a few major ones alone).

In addition, a very large share of traffic now comes from mobile devices (already more than half of the total flow of visitors). Naturally, you need to know for sure that your site displays correctly in all of these browsers.

How to check it? Well, you can install Chrome, Mozilla Firefox, Opera, Yandex Browser, Safari, IE, and others.

You need to check not only the main page but also the other pages that are representative of your site (articles, product cards, product listings, contacts). The same goes for the browsers on popular mobile devices (Android and iOS).

There is also a more universal option: the BrowserShots online service. It is free but quite functional. It supports a large number of browsers and outputs screenshots of your site taken in the browsers you choose. It is not fast, but quite tolerable.

If something goes wrong somewhere (the design breaks, styles fail to load, something gets skewed), write the first item into the task for your programmer: achieve correct display of the site in such and such browsers.

Html and CSS Validation

This stage of the audit is somewhat similar to the previous one. Validation (what is that?) of HTML code and CSS styles is needed not for its own sake (for show), but precisely to avoid possible display problems that are often quite difficult to spot visually.

Moreover, there is no need to reinvent anything here. There are official validators from the W3C consortium (the body responsible for modern markup language standards), where you just need to enter the address of a page of your site:

  1. HTML Code Validator
  2. CSS Code Validator

Again, you must understand that not only the main page but also the other "representative" pages need to be run through the validators: for example, pages with articles, pages with products, product (or article) listings, etc. Each of them may have its own errors and warnings.

Code validation during site audit

What to do with the errors you find? Well, don't panic, that's for sure. If you understand yourself where things need fixing, go ahead.

But often, to fix errors in the markup, you will have to dig deep into the code of the engine or plugins (extensions). In that case, give the task to the programmer: remove, as far as possible, all the errors reported by the validators on the "representative" pages of your site.

Website responsiveness check

As I said, more than half of RuNet traffic already comes from mobile devices. Therefore, whether we like it or not, our sites must be able to "shrink beautifully" (adapt) to low-resolution gadget screens.

Otherwise, you will simply kill the behavioral factors of your wonderful site if it requires horizontal scrolling to be viewed on a mobile phone screen.

How can you identify the problem at this stage of the audit, or make sure it does not exist? Basically, you can just open your site on your mobile phone and see everything with your own eyes.

There is an excellent service, ILoveAdaptive (I recommend it). Or, even easier, right on the computer reduce the size of the browser window by grabbing its corner with the mouse. Does horizontal scrolling appear? Do the pictures fit into the new dimensions?

If you have narrowed the window several times over and the text, pictures, buttons, and other elements successfully adapt (rebuild) to the new size, that is already wonderful. But even if everything looks good to you, it does not mean that everything will look fine to the search engines. It is better to play it safe and view the site through their eyes.

Google has a special test that checks how well any site is optimized for mobile devices:

Audit for site adaptation for mobile devices

Here, too, it is better to check all the "representative" pages to make sure there are no responsiveness problems anywhere on your site.

Yandex also has a similar tool, but it is available only from the webmaster panel on the "Tools" tab - "Check mobile pages":

Site adaptability audit in Yandex Webmaster

What to do if your site did not pass the adaptability check during the audit? Don't panic; pull yourself together and look for a solution.

Personally, I made my site adaptive on my own and even described this process in detail.

But it does not have to be done by you personally. Adapting a site for mobile devices is not such a difficult task for a programmer (layout coder), so just add one more item to the terms of reference you are compiling based on the results of this audit.

I would recommend doing exactly the adaptation, and not a separate mobile version of the site.
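For reference, here is a minimal sketch of where adaptive layout usually starts: a viewport meta tag plus a CSS media query. The class names and the 768px breakpoint are invented for the illustration, not taken from any particular theme.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <!-- Without this tag, mobile browsers render the page at desktop width -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <style>
    /* Two columns on wide screens */
    .content { display: flex; gap: 20px; }
    /* Pictures shrink together with the screen */
    img { max-width: 100%; height: auto; }
    /* Below 768px the sidebar drops under the main column */
    @media (max-width: 768px) {
      .content { flex-direction: column; }
    }
  </style>
</head>
<body>
  <div class="content">
    <main>Article text...</main>
    <aside>Sidebar...</aside>
  </div>
</body>
</html>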

Microdata audit

Micro-markup is a necessary attribute of site optimization when promoting commercial queries: marked-up product cards, contacts, phone numbers, and so on. It is hidden from the eyes of ordinary users and is added programmatically to the HTML code in the form of extra attributes.

Why is it needed? So that search engines clearly and unmistakably understand what is what.

With micro-markup, you literally point the search engine's finger: here is the price, here is the photo of the product, here are the seller's contacts, and here is the phone number.

Why do search engines need it? Well, based on these markers they evaluate the commercial factors of the site (that will be a separate article) and decide whether to admit it into the "club" of trusted sellers or whether it is rubbish (a doorway, for example) masquerading as a commercial site.

Data from the micro-markup can be included in the snippet (the description of your site in the search results) and serve as an additional attractive element. In general, micro-markup is mandatory for commercial sites, but it is far from certain that you have done it correctly.

How to check the micro-markup? Again, it is quite simple, because both Google and Yandex provide micro-markup validators for this purpose:

  1. Checking microdata for errors in Yandex
  2. Google structured data validation

The audit of your site under this checklist item consists simply of feeding in the URLs of the pages whose micro-markup needs to be checked for errors.


The main thing here, it seems to me, is the contact page. The company name, postal address, zip code, contact phone number, fax, and e-mail must all be marked up. If something is missing there, or there are errors in the micro-markup, then the commercial factors (CF) of your site may be underrated by Yandex, with all the ensuing consequences.
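As an illustration only, here is a rough sketch of a contact block marked up with schema.org Organization in microdata syntax; the company name, address, and phone number are invented for the example:

<div itemscope itemtype="https://schema.org/Organization">
  <span itemprop="name">Example Company LLC</span>
  <div itemprop="address" itemscope itemtype="https://schema.org/PostalAddress">
    <span itemprop="postalCode">101000</span>,
    <span itemprop="addressLocality">Moscow</span>,
    <span itemprop="streetAddress">1 Example Street</span>
  </div>
  Phone: <span itemprop="telephone">+7 (495) 000-00-00</span>,
  e-mail: <span itemprop="email">info@example.com</span>
</div>

Run a block like this through the validators listed above to make sure the search engines parse it the way you intended.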

Encoding check

Browsers are now very smart and can automatically distinguish between Russian-language encodings, of which there are quite a few.

But the most correct option is to specify the encoding used at the very beginning of the HTML code on every page of the site. That way you are 100% protected from "krakozyabry" (unreadable characters displayed instead of Russian letters).

Auditing this checklist item is fairly straightforward. Open any page of the site in any browser and right-click on an empty spot on the page.

Select the "View page source" item from the context menu (or press Ctrl + U), and in the window that opens look at the very top, right after the DOCTYPE directive, for the construction (a meta tag) that specifies the encoding:

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />

In my code it looks like this:

Checking the Site Encoding Job

Instead of UTF-8 you may have windows-1251 - it does not matter much, although Unicode (UTF-8) is now the recognized standard (it is best to use it for a new site).

If you did not find such a line in the page source (use the on-page search, Ctrl + F), then you will need to add it.

In most popular engines this is not difficult at all. In WordPress, for example, the code can be added to the header.php file in the folder of the theme you are using. For a programmer this is a matter of minutes (just add it to the TOR you are compiling based on the audit results).

Website load speed audit

Page loading speed has long been a factor influencing website promotion. This is especially true now, when most traffic comes from mobile phones and the mobile Internet does not always "catch" well. So it is mobile users who suffer most when flaws in the site's setup result in very long page loads.

In reality, no one will wait more than about 4 seconds. The search engines record this and take action. But first, the problem must be diagnosed.

Of course, you can measure a site's loading speed in a dozen different ways, right down to "by eye": opening pages of the site in a browser where you have not opened them before, or after clearing the browser cache.

But that is not quite enough, because you need to look at the problem from the search engines' side.

In this regard, Google's service for evaluating site loading speed, PageSpeed Insights, can serve as an excellent audit tool:

Website page load speed audit

As before, I never tire of repeating: you need to check not only the main page but also all the other "representative" pages (those built from different templates).

Test results should be at least "good", and ideally "excellent". It took me about five years to get there; so long, in fact, that there was no joy left in finally achieving the excellent score.

But you, of course, do not have to suffer through speeding up the site yourself, because it is a genuinely difficult task that requires knowledge not only of programming but also of fine-tuning the web server.

Just add a clause to the TOR for your programmer stating that such and such a list of pages must score excellent, or at least good, when audited in PageSpeed Insights.

Favicon check

The favicon has been around for many years, and the format in which it was originally defined (ICO) has long been obsolete (PNG has effectively replaced it). Nevertheless, favicon.ico remains more alive than all the living.

On most sites, the icon is a graphic file saved in the outdated ICO format. Its size is usually 16 by 16 pixels (that is how it is on my site), but you can make it bigger (32 by 32, for example, is what Avito uses). Yandex uses a 64-by-64 PNG for this purpose, and Google an ICO of 32 by 32. In short, everyone does it their own way.

I have already written in some detail about what a favicon is, how to create it, and how to connect it.

In short, it is displayed in Yandex search results next to the name of your site and on browser tabs. You can set it simply by adding a line of code to the same place where we just looked at the encoding:

<link rel="shortcut icon" type="image/ico" href="https://ktonanovenkogo.ru/wp-content/themes/Organic/favicon.ico" />

For Yandex itself, it looks a little different (though Yandex, in theory, is allowed to break all conceivable rules):

<link rel="shortcut icon" href="//yastatic.net/iconostasis/_/8lFaTHLDzmsEZz-5XaQg9iTWZGE.png">

But that is not all. Have you heard of the Apple Touch Icon yet? No? I also heard about it only recently. And you know what it is? It turns out to be a special icon that is displayed on a mobile device if your site is pinned to the "home screen" (the analog of the desktop in iOS).

The problem is that different Apple devices require this file in different sizes (from 57 by 57 pixels up to 180 by 180). Besides iOS, icons in the Apple Touch Icon format, despite the name, are also supported by Android devices. What sizes are needed there is not clear at all.

The official Apple developer page currently offers two options for specifying where the Apple Touch Icon lives: set one common icon for all devices, or a separate one for each size.

The second option seems too cumbersome to me (four entries to add), so I chose the first one, with the path to a single PNG icon 180 by 180 pixels in size:

<link rel="apple-touch-icon" href="https://ktonanovenkogo.ru/apple-touch-icon.png">

If you did not understand any of the above about the favicon and the Apple Touch Icon, and an on-page search (Ctrl + F) of the site's source code (Ctrl + U) found nothing of the kind ("shortcut icon" and "apple-touch-icon"), then just copy this text to your programmer and tell him: "make it so".

Auditing the presence of a favicon and Apple Touch Icon in the site code

By the way, there is an excellent service, Favicon Icon Generator, which from a single picture you upload will produce the entire necessary set of images, downloadable in one archive.

Creating all kinds of favicons for the site

Moreover, it will even give you the code for connecting all this beauty to the site. True, there is quite a lot of it, but you are always free to sacrifice some of the compactness of your pages' source code.

Sitemap audit in sitemap.xml format

Search engines certainly get smarter from year to year, but it is still useful to unambiguously "point the finger" at the pages of your site that need to be indexed.

For this, a so-called sitemap file is provided, usually named sitemap.xml and placed at the root of the site (which is where the Yandex and Google bots will look for it first).

Ideally, the sitemap.xml file should be updated as new pages appear on the site (in the article mentioned just above, I described a WordPress plugin that quickly builds a blog map). The file should list the pages that are subject to indexing by the search engines.
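For reference, a minimal sketch of what such a file looks like; the URLs and dates here are invented for the example:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page that should be indexed -->
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2022-05-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/category/some-article.html</loc>
    <lastmod>2022-04-20</lastmod>
  </url>
</urlset>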

I can share a few sitemap.xml tricks that you can use:

  1. Do not simply drop sitemap.xml into the root of the site and list the path to it in the robots.txt file (we will talk about that below). Why not? Well, why make life easier for those who steal your content (and there will always be such people)?

    Checking the sitemap file

    It is better to hide this file deeper and name it differently (jfhfhdk.xml, for example). After that, just go to the Google and Yandex webmaster panels and specify the path to the file there. That's it: the search engines are aware, and everyone else can get lost.
  2. If you have a huge site with a large number of pages and deep nesting, then you almost certainly have problems with its indexing.

    There are many ways to get a page into the index (the same IndexGator or GetBot) and keep it there (a site-wide block with a random list of pages), but sitemap.xml can also be used for this purpose.

    Website indexing speed up

    Ask your programmer to add to the sitemap file only those pages that are currently not in the index or have fallen out of it.

    This will speed up the bots' access to them, because they will not be distracted by pages that are already indexed.

In general, this stage of the audit consists of making sure that the site always has an up-to-date sitemap file listing all the pages that should be indexed.

If it is not there, you could not find it, or you did not like what you found, write another item into the TOR for the programmer. For him it is a piece of cake.

Checking the response to a 404 error

No one is perfect, least of all the people who will link to your site on blogs, forums, social networks, and elsewhere. People tend to err.

A single mistake in a page URL means that your server (where the site lives) will answer such a request (a click on a broken link) with a 404 code. A common thing, but...

It is very bad if this error is simply left to the browser, which shows the user a white page with the small inscription "404 not found".

In that case the visitor is lost to you. Such errors should be handled by the server itself, which should serve a 404 page in the design of your site; that greatly increases the likelihood that the user will still find something interesting on your site (to your delight). Clear?

How to check? Just append some rubbish like "https://ktonanovenkogo.ru/fdfdfsf" after a slash in your site's address in the browser address bar and look at the result. Do you see the white page with the small inscription? Write another TOR item for the programmer, or set up a 404 page yourself, for example based on the article above.

The second important point: you need to make sure that your web server (not the physical server in the hoster's rack, but the web server program) returns exactly the 404 code in response to requests for non-existent pages.

If it returns a 200 code for them, that is trouble: you will get a lot of obscure and unnecessary problems.

How to check the response code? There are plenty of services, for example as part of Yandex Webmaster, or this one here. Insert the address of a page that definitely does not exist on the site (like "https://ktonanovenkogo.ru/fdfdfsf") and look at your server's response code:

Auditing Your Server Responses

If it is not 404, then urgently contact the programmer and ask him to fix everything at once. This audit item is actually very important, because the alternative is fraught with very unpleasant consequences.

Robots.txt file audit

Since ancient times, search robots, before crawling a site, first look for the robots.txt file in its root. 

It usually contains directives for search bots that either prohibit the indexing of some pages and sections (for example, those where the engine files live) or allow indexing of, say, subdirectories inside a directory that is otherwise closed to indexing.

Read more about robots.txt and the robots meta tag in the article linked here. In principle, this file may be empty or may not exist at all. The search engines, of course, will figure out over time (over the years) what on your site is garbage and what is "valuable content". But as with the sitemap.xml file, it is better to "point the finger" for the search bot.

Robots.txt mainly serves to say: go here, do not go there (and over here we used to wrap the fish).

This does not mean the bot will not actually go there (the Google bot goes just about everywhere), but you will have done your part: the technical files (engine, theme, scripts) will not end up in the index, and the bot will most likely not waste time crawling them.

Key points for checking robots.txt:

  1. The audit of your file should be carried out in the Google and Yandex Webmaster panels.

    Analysis of robots.txt in Yandex Webmaster

    Enter the URLs of "representative" pages and make sure they are available for indexing.
  2. Then enter the addresses of technical files (folders) and make sure they are closed to indexing. The main thing is to pick the right pages and not miss anything important. If something is off, add an item to the TOR for the programmer.
  3. Previously, robots.txt required specifying the main site mirror in a separate block for Yandex using the Host directive. Yandex has since retired this, but many still include the directive.
  4. Many also add the path to the XML sitemap to robots.txt. But a little higher up we agreed that it is better not to advertise it and not make life easier for copy-pasters.
  5. Directives that prohibit indexing in robots.txt do not guarantee that the bot will not visit those pages and index them. Google is especially indifferent to such directives.

    Robots Meta Tag Checker

    If you want to close a page from the index with 100% certainty, add the robots meta tag to its HTML code (see the screenshot). For example, my pages with date archives are closed this way (via the All in One SEO Pack plugin).

    In the robots meta tag, the "noindex" value prohibits indexing the document (the page, that is, its content), while "follow" allows the bot to follow the links on it (though you can prohibit that too). See the Robots article above for more details.
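    For reference, a sketch of what such a tag looks like in the page's <head>, with the noindex/follow combination described above:

    <head>
      <!-- Do not index this page's content, but do follow the links on it -->
      <meta name="robots" content="noindex, follow">
    </head>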

Checking headings and image alts

HTML has heading levels from H1 to H6, and it is highly desirable to use them on a page in descending order. The main heading is H1. Putting H3 first, then H1, then H2 for the sake of beauty is not worth it. Order is order.

Another thing is that many people cheat and make the large inscriptions (a la main headings) not as headings at all but as ordinary blocks (for example, div tags instead of H1), with the size and style of the text simply set via CSS.

The point is that such an inscription (unlike H1) does not have to contain keywords, so it can be made as attractive as possible without being loaded with keys. The H1 itself can then sit lower on the page and visually look like an ordinary heading or even a piece of text.

Further down come the H2s (and their subsections, H3, if it comes to that), which, when promoting for commercial queries, are better not stuffed with keys at all: LSI phrases will suit them just fine.

We will talk about those in more detail in a separate article; in short, they are phrases used by experts in the field rather than by people who know nothing about it.

Well, got the idea? In principle, this is common practice and it works well, especially on selling pages, where the main goal is to nudge the buyer to act with large, bright, inviting text, while the search engines get their key phrases in a modestly styled H1 (and Title). (Again, you need to look at the competitors in the Top for the given query.)

Very often the content of H1 coincides with the content of the Title (follow the link to read what the title and description are).

In principle, H1 lets you use a different variant of the key, and for informational queries, where the keyword cluster for one page is huge, headings of all levels make it possible to work in as many keywords as possible to increase traffic to the article.

At this stage of the audit, the main thing to find out is whether the heading hierarchy is observed on the main types of pages (home, categories, articles, product cards) of your site and whether there is an H1 everywhere.

For this purpose, the Web Developer browser plugin is perfect: it can clearly show all the headings used on a page.

You can also simply hover the mouse over a heading and pick "Inspect element" ("View code") from the right-click menu, but that is not as fast or as visual.

Very often, template developers use H2-H6 tags in the theme design itself (menu titles, for example). It is better to remove all of that so that headings appear only in the unique part of the particular page (in its body, not in the surrounding furniture).
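As a rough sketch, this is the kind of hierarchy the audit should confirm on an article or product page (the headings themselves are invented for the example):

<h1>Main page heading with the key phrase</h1>

<h2>First section</h2>
<p>Text...</p>

<h3>Subsection of the first section</h3>
<p>Text...</p>

<h2>Second section</h2>
<p>Text...</p>

<!-- Not like this: a "pretty" banner faked with a div and CSS while the real H1 is buried somewhere below -->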

Now about the Alt and Title attributes of the images used on the site (in the img tag). The Alt attribute is mandatory (its content is displayed instead of the image if the browser fails to load it), while the Title attribute is optional (its content is shown when you hover the mouse over the image on the page).

Attention! It is now strictly forbidden to spam Alt with keywords in the hope that visitors will not see them; inhuman n-grams like "buy a refrigerator Moscow" used to be shoved in there, and many sites have been hit with filters for it.

Often, owners added keys even to images that are part of the site design (logos, arrows, and similar tinsel). Now Alt should only describe what is actually shown in the picture (well, perhaps also using LSI phrases; I will write a separate article about them).
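A before-and-after sketch of what this means in practice (the file names and texts are invented):

<!-- Bad: keyword spam that has little to do with the picture -->
<img src="/images/fridge.jpg" alt="buy refrigerator Moscow cheap refrigerator price">

<!-- Good: Alt simply describes the image, Title is optional -->
<img src="/images/fridge.jpg" alt="Two-door refrigerator in a kitchen interior" title="Model X in the showroom">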

How to audit the alts? To find pages where Alt attributes are missing altogether, you can use the HTML Code Validator already mentioned above.

It flags missing alts as errors. To check the content of the Alt attributes, the Web Developer browser plugin mentioned above is again perfect (it can display the alt text of every image on the page right next to the image).

Naturally, when you find discrepancies, correct them yourself or add another item to the TOR for the programmer.

Audit SEO tags for all pages of the site

What SEO tags are there? Well, initially there were three of them: Title, Description, and Keywords.

However, keywords have long since become a vestige. Imagine: the tag was originally invented so that search engines would understand which queries your site should rank for (be added to the Top). They were that naive back then.

What matters most at this stage of the audit:

  1. Firstly, Title and Description must exist (what they should contain you can read in the article above). I mean, simply be present. A page without a Title is effectively non-existent for search engines. The Description is also important, though less so. Google and Yandex Webmaster will gladly tell you about such pages (without titles) in their error report blocks. This can also be seen clearly with a free program like Xenu Link Sleuth if you sort the data it collects by the "title" column (you will see the empty cells).
  2. Secondly, there should not be identical Titles on different pages within the same site. They should differ at least somewhat, even if they are cards of similar products. Again, the Yandex and Google webmaster panels will inform you about such errors (I have no such errors myself, so I cannot show a screenshot, but you can dig around yourself). If the site is small, you can identify identical Title tags with Xenu Link Sleuth.

If the site is very small, you can check the Titles and Descriptions manually by simply pressing Ctrl + U (view the page source):

Auditing the presence of titles and descriptions on the site
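For reference, a sketch of the part of the <head> you are looking for; the texts here are invented:

<head>
  <meta charset="UTF-8">
  <!-- Unique on every page; without it the page "does not exist" for search engines -->
  <title>Site audit - 20 checks to get your site to the top | Example blog</title>
  <!-- Less critical, but often used for the snippet in the search results -->
  <meta name="description" content="A step-by-step checklist for auditing a site: validation, responsiveness, robots.txt, sitemap, mirrors and more.">
</head>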

What to do if errors of this kind are found? If these are articles or product cards, just tweak the Titles a bit, or set up automatic Title generation for the cards based on a template.

I had this problem on pages with pagination (like the home page of this blog, which has page numbers at the bottom). I simply closed the pagination pages from indexing (via the Robots meta tag) and, just in case, added the Canonical meta tag to them, pointing to the first page as the canonical one.

If you didn’t understand anything, then the programmer will definitely understand and quickly solve the problems identified at this stage of the audit for you. Why else would you pay him money?

Checking that mirrors are glued together

Search engines are stupid enough to consider pages that look completely identical to us to be different. For example, for Yandex and Google, pages with and without a trailing slash are different:

https://ktonanovenkogo.ru

and

https://ktonanovenkogo.ru/

And the domain name with www and without www is also perceived as two different sites:

https://ktonanovenkogo.ru

and

https://www.ktonanovenkogo.ru

Well, so what, you might think, what do we lose from that? The content on these pages and sites will be the same. And what is that? Correct: duplicate content, which search engines do not like at all.

Why? Because we fill their databases with the same thing, which means more hardware and more money to store it. Search engines do not like spending extra money.

The second obvious disadvantage: people will put external links every which way (with a slash, without it, with www and without).

If these mirrors are not glued together, the search engine (which is actually not that dumb) will choose one of them as the main mirror. And what happens to the links that do not lead to the main mirror? That's right, they will not be counted, which will hurt the ranking of the site.

Therefore, mirrors need to be glued together with a 301 redirect (which tells the search engines that the redirect is permanent, unlike a temporary 302 redirect). After that, the duplicates disappear in the eyes of the search engines, and all links are counted regardless of whether they had a trailing slash or the www prefix.

Checking the site for gluing mirrors

How to audit the gluing of mirrors? Pretty simple. Open your site in a browser and start playing with the address bar. To begin with, add the three letters www before the domain (after the protocol) or, conversely, remove them if they are there. Then press Enter. What happened?

  1. The page reloaded, and your change stayed in the address bar. This is bad: the mirrors are not glued together.
  2. The page reloaded, but the www disappeared from the address bar. This is great, because that is exactly how a 301 redirect works for glued mirrors. Check the same on other pages of the site. Ideally, everything should redirect to the main mirror (whether it is with www or without does not matter at all).

Now do the same experiment with the slash. If the home page URL in the address bar does not end with one, add it and press Enter.

If the slash stays, this type of mirror is not glued and will need fixing. If the slash disappears, everything is OK, and mirrors of this type are glued as well.

Let's check further. Add "/index.php" to the end of the home page address so it becomes "https://ktonanovenkogo.ru/index.php". Press Enter. Does it redirect to the main mirror (in my case "https://ktonanovenkogo.ru")? If not (the /index.php stays), then congratulations: you have found another unglued mirror.

Do the same with /index.html and the like. I forgot to say that a 404 (not found) page is also a perfectly fine outcome for such additions. In that case there will be no duplicates either, since URLs with index after the home page address are generated only by site engines, and no human will put an external link to you in that form.

What to do if at this stage of the audit you find unglued mirrors? Praise yourself, and then either try to fix this disgrace yourself based on this and this publication, or add one more item to the TOR for the programmer. For him, this problem is nothing.


P.S. If the audit showed that the mirrors (for example, with www and without) are not glued together, then choose as the main one the mirror that the search engine already considers to be such. For commercial queries, the main search engine is Yandex, so type your site's domain into its search box and hover the mouse over your site's name in the results. In the lower-left corner of the browser you will see your site's address, and that is the form you should pick as the main mirror (with or without www).

Checking broken links on a website

What is this about? There will always be two types of links on your site:

  1. Internal - leading to other pages of your site.
  2. External - leading, respectively, to other resources.

Both of these types of links can turn out to be broken, that is, not leading to the page they are supposed to open.

Why this might happen:

  1. You yourself made a mistake when inserting the link (typed the URL incorrectly, or messed up the hyperlink tag).
  2. The page the link leads to may have changed its address over time or even been deleted (moved). You (or your programmer) could have done this on your own site and not accounted for the change in the internal linking. The same thing can happen on someone else's site, and your external link becomes broken as a result.
  3. If it was an external link, then not only the page but the entire site may have disappeared. This, by the way, is unspeakably sad to watch.

In any case, you should regularly conduct a full audit of all the links on the site to see whether any of them have broken (lead nowhere). Why is this so important? Search bots travel only by links, and if you have too many paths that cannot be followed, they may take offense.

At best, a site with a bunch of broken links will look unpresentable to search engines. It is like a store with a peeling shop window, broken glass, and crumbling steps: it does not inspire confidence. Broken links will appear always and everywhere; such is their nature. You just need to check regularly and fix (or delete) the paths that cannot be followed.

Naturally, you will not have to follow every link on every page by hand. The process is easy to automate. How? You can read about it here: Checking for broken links.

I do it like this:

  1. Once every few months I run my blog through the Broken Link Checker service (read more about it in the article above). It performs the audit very quickly, in about ten minutes (but it does not find everything). I fix or delete the paths that lead nowhere and forget about it for another few months.
  2. About once a year I run the site through the Xenu Link Sleuth program. It is free and will tell you everything about your site. It takes a long time, but it finds everything that is broken or damaged. Then I spend a long, tedious while fixing it all so I can forget about it for another year or two.

Naturally, all of this can be shifted onto the programmer's shoulders; nothing prevents you from checking the work he has done using the methods described above.

Checking for CNC (human-readable) URLs

There is such a term as CNC (from the Russian abbreviation for human-readable URLs). URLs are the addresses of the pages on your site, and basically they come in two kinds:

  1. Generated by the site engine, with a question mark and a bunch of cryptic parameters (numbers and letters) after the domain. For example, "https://ktonanovenkogo.ru/?p=59164". This is not great, because a person looking at such an address understands little.
  2. And there are URLs transformed into a human-readable form (CNC). Like this one, for example:
    https://ktonanovenkogo.ru/wordpress/bitye-ssylki-proverka-paneli-yandeksa-google-programmoj-xenu-link-sleuth-wordpress-pluginom-broken-link-checker.html

    Here you can see the sections in which this article sits, and, by the way, you can open them simply by deleting everything to the right of the last slash (I sometimes do this myself on other sites if they have no breadcrumbs).

    It is considered optimal to use the Latin alphabet (a transliteration from Russian) rather than Cyrillic. This is due to peculiarities of how search engines work and a few other problems that pop up out of the blue.

  3. Ideally, the URL itself is a fully readable version of the title, written in transliteration (what is that?).

What to do if, looking at the address bar of your site (or the one you are promoting), you find that the URLs are not of the CNC type? It all depends on the age of the site.

If it has only just appeared, has not really been indexed yet, and its traffic is still low, feel free to switch to CNC URLs.

I have already written how to enable CNC URLs in WordPress, and your programmer can easily do the same for any engine.

If the site already gets decent search traffic, then do not touch anything. Yes, CNC is better, but if you switch now, you will lose traffic (although, done correctly, you might not; a specialist is needed here).

It may well return over time, but I would not risk it. Consider the result of this audit stage to be the knowledge of how you would do it if you could start all over again.

Audit of external links

This is not an SEO audit of the quality, quantity, and content of links pointing to your site (we will talk about that in a separate article). This is an audit of the links leading away from your site. Ideally, nothing superfluous should be placed on it, much less left open for indexing, but that is the ideal.

Where can extra external links come from? Well, there are options:

  1. You ordered a website, and the developer company sewed a link to itself into it.
  2. You were given a template with external links embedded in it (or even Sape code for selling links).
  3. The site is infected with a virus (or has been hacked), which again can show up as external links you never authorized.
  4. Your programmer or administrator is making money on the side by selling links from the sites they maintain (they have full access to the code, after all).

If you think you can spot them visually, I have to disappoint you: that is extremely unlikely. So you need to look at the site through the eyes of a search engine, or rather its bot. This can only be done with software or special services. Which ones?

  1. The same Xenu Link Sleuth program collects data on all the links leading away from your site. Sort its results by the first column, "Address", and you will see the whole list of external links.
  2. You can use the wonderful SEO browser plugin RDS Bar. In its settings, turn on "Highlight external links":

    Audit of external links of the site

    As a result, on the page opened in the browser, all external links will be struck through, and those that are open for indexing will be circled with a red dotted line (which is really bad):

    Checking external links

    Carefully inspect all the "representative" pages this way, especially around the perimeter (header, footer, sidebar). But again, not everything can be seen visually.
  3. You can use this service (registration required). Enter the address of a page and get a list of all the links leading from it. External links will be highlighted in red. If a link's anchor (link text) is struck through in the corresponding column, it is closed for indexing (the rel="nofollow" attribute is set on it, although some weight will still leak through).
  4. There is a similar outbound-link tool in PR-CY. Look in the "External Links" column for the rows that have no red NOFOLLOW mark.

If "left" external links are found as a result of the audit, then give the task to the programmer to remove them, or try to understand yourself "where their legs grow from." Often they can be deeply embedded in the template code (encrypted) and certain skills are required to remove them. 

I think the programmer can do it. In extreme cases, change the template (theme).
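For reference, a small sketch of the difference the plugins above highlight (the URL is invented):

<!-- Open for indexing: passes weight to someone else's site -->
<a href="https://example.com/">Partner</a>

<!-- Closed for indexing (though, as noted above, some weight may still leak) -->
<a href="https://example.com/" rel="nofollow">Partner</a>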

Canonical Tag

What is the point of Canonical? It is a tool relevant mainly for Google (and to a lesser extent for Yandex) that lets you mark a page with fully or partially duplicated content as not a page in its own right but a copy of a canonical page (the tag simply contains that page's address) which does not need to be indexed.

Where and when might it be needed? Most often it is used to solve the problem of pages with pagination (when, at the bottom, there are links to the second, third, and further parts of the same page). On my blog these are the home page and the category pages.

Pagination

Such pages have the same Title, H1, and Description (often a small SEO text added for the search engines), which means that in the eyes of the search engines they are duplicates. To avoid tempting fate, pages like "https://ktonanovenkogo.ru/page/2" get a Canonical tag added to their HTML code, pointing to the URL of the canonical (main) page (in our example, "https://ktonanovenkogo.ru"):

Canonical
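A sketch of what that line looks like in the <head> of the second pagination page (the URLs are taken from the example above):

<!-- On https://ktonanovenkogo.ru/page/2 -->
<link rel="canonical" href="https://ktonanovenkogo.ru/">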

What does this stage of the audit involve? Just go through all the paginated pages and check whether this tag is present on the second, third, and subsequent pages (search the source code with Ctrl + U and Ctrl + F) and whether it points to the correct canonical page (the one without a number, i.e. the first, "mother" page of the pagination).

Besides pagination, Canonical can be used in other cases. For example, it makes sense for print versions of pages or for a separate mobile version. It also lets you remove duplicates created by the site engine (due to some of its internal quirks).

Canonical also keeps the index from being clogged with the various filtering (sorting) pages used in online stores (unless each such page has its own Title, H1, and SEO text, which makes it unique).

Add all the issues you find to the TOR for your programmer. If you are interested, here is my version of setting up Canonical for a blog on WordPress.

Website indexing audit

The essence of this check is to understand how completely and correctly our resource is indexed. To do this, it is enough to compare the number of pages in the Yandex and Google indexes using the RDS Bar browser plugin already mentioned above.

Website indexing audit

As you can see, I have a third more pages in the Yandex index than in Google. In principle, this is a warning bell that must be checked (I once uncovered a serious problem exactly this way, by comparing the number of pages in the index).

For a detailed analysis, you can click on the numbers in the RDS Bar window and go to the Yandex or Google search results. Very often, junk pages such as "site search" results get into the index; they should be closed from indexing via the Robots meta tag (see above), but many people forget.

Duplicate pages that were not closed with Canonical, as described above (pagination pages, for example), can also end up in the index.

In my case there was no real problem (more likely a plugin glitch), because I checked the number of pages in the index through Yandex Webmaster (left menu "Indexing" - "Pages in search", then the Excel export at the bottom of the page that opens) and the number of pages exported from there matched what Google showed. Still, it is better to check once more now than to rake up problems later.

P.S. If the site uses a subdomain structure (a common setup for regional expansion under Yandex: a separate subdomain for each new region), bear in mind that in Google all pages of the main domain and its subdomains fall into a common index, because it considers them all one site.

Yandex, on the other hand, considers subdomains (third-level domains) to be different sites.

I will also add here a quick check of your site's structure. When promoting commercial queries, everything matters (it is like walking through a minefield, where it is important to step in the footprints of those who have already crossed it).

Ideally, the structure of a future site is simply copied from those who are already in the Top for your queries.

Just write out the structure (menu items) of 10 sites from the top (excluding aggregators such as Avito), remove the repetitions, and there you have the best there can be.

The structure audit in this case will be very quick (superficial). Just write down the average number of indexed pages your competitors from the Top have.

Work out the "average temperature across the hospital" from them and compare it with the number of your site's pages in the Yandex and Google indexes. Based on this, decide whether you still need to expand the structure (if your page count is seriously lower) or not.

Checklist for usability and commercial factors

Usability is the convenience of interacting with the site, which largely comes down to following the conventions Internet users are already accustomed to; if they do not find them, they will be "very upset".

Why are social networks so popular? Because everything there is familiar, in its place, clear and simple. But ordinary sites, too, have de facto standards that should not be violated.

Commercial factors are those elements of the site (and not only of the site) that are vital to have when promoting commercial queries.

There are quite a few of them, and not all owners pay attention to this (there will be a separate article on the topic; subscribe so as not to miss it). Meanwhile, this is one of the critical groups of factors which, for all that, is very easy to boost (pull up to the desired level) with minimal effort.

The peculiarity of this audit stage is that all these factors work in combination, meaning they should be present at least "for the most part". Here, the main thing is to check that all of these elements exist on your site and, based on the results, draw up a TOR for the programmer.

  1. Copyright (the copyright sign, site name, and year of foundation) is a small thing that is usually displayed at the very bottom of the page and looks like this:
    © KtoNaNovenkogo.ru, 2009-2018 | All rights reserved
    It absolutely must be there. If it is not, add it yourself or task the programmer.
  2. A logo is a mandatory attribute (graphic or text, like mine), and at the same time it must be a link leading to the main page. This is a dogma that should never be neglected.
  3. Text formatting - a page should not look like one solid lump of text. Comfortable paragraphs, usable lists, tables, and attractive headings (subheadings) should be worked out at the CSS level.
  4. The visitor decides whether to stay or leave in just a couple of seconds, and during that time he looks not at the content but at how it is presented.
  5. Buttons on the site - they can be graphic or text; the main thing is not the look itself but the visitor's understanding that this is a button. How to achieve that? CSS has a hover effect that makes a button change its appearance when the mouse is over it. Usually either the background or the text color changes, but other effects are possible. The point is to make it clear that this is a button and it can (should) be clicked. If that is missing somewhere, puzzle yourself or the programmer with solving it.
  6. Fonts - the perception of the site suffers badly (usability degrades) when too many different fonts are used. Everything is good in moderation; aim for conciseness, not pretentiousness. If you do not trust your eyes, right-click a word and select "View code".
  7. Working hours and days are very important commercial factors, and the search engines take all of this into account. The work schedule should be a site-wide block displayed on all pages and placed at the top (in the header) or at the bottom (in the footer) of the template.
  8. The region of operation is also a very important commercial factor that adds convenience for users and is liked by search engines (Yandex). For the best way to present it, look at the more successful competitors from the Top for your main key queries.
  9. Site search is an important usability element that should be present on any site. There are many ways to implement it: for example, you can use the site search from Yandex or a script from Google, and every website or online-store engine also has its own search capabilities.
  10. Breadcrumbs are navigational links that help visitors understand which page they are currently on and, if necessary, navigate to the section related to that page. There are a lot of implementation options (even though I described them here and here ).

    breadcrumbs

    They greatly increase usability and it would be better to implement them in one way or another. Their placement is also important. It is generally accepted that they should be located in the upper left part of the pages, where most visitors will look for them.

    In general, all the basic elements of sites should be located where users are used to seeing them on leading sites (in your niche).

    There is probably a breadcrumb plugin for every engine, or your programmer can implement them, which is also easy.
  10. Personal data processing policy - became relevant recently in connection with the new federal law 152-FZ on the collection and protection of personal data. Fairly hefty fines are provided for non-compliance with these requirements. At the very least, it is worth posting a privacy policy and linking to it somewhere in the footer of the site.
  11. Scroll-up button - for usability reasons, such a button is desirable (I don't have one), but it is better to look at how the sites from the Top in your topic handle it.
  12. An online consultant is a great tool for improving not only the commercial factors of a site but also its behavioral characteristics and conversion. Do not think that implementing it means hiring employees to answer questions or being in touch all the time yourself. Just build a database of answers to frequently asked questions and the "bot" will answer for you. In case of difficulty, it will offer the client to leave a phone number for a consultation, and that is the first step toward a sale. In general, the thing is incredibly effective when properly configured.
  13. A callback is also a great and almost indispensable tool for commercial sites (see an example here). It is desirable that this button be available on the first screen of any page of your site.
  14. Links to social networks - it is considered good form if your site is represented on the main social networks (VK, FB, OK, Twitter, etc.) and links to them are placed on its pages (I have done this in the upper right part of the site).

    Audit of the links of the leading website in the social network

    This is an important factor taken into account by search engines. Even projects that seem to have nothing to write about on social networks can post expert materials or general industry news there. It is enough to maintain one social network and simply copy the materials to the rest.
  15. The ability to share on social networks is relevant both for informational sites (buttons for sharing articles, for example) and for commercial ones (the sharing option should be on every product card).
  16. A block with news (or new articles) is a mandatory attribute of any site. It is desirable that this block be accessible from the main page or even be cross-cutting.
  17. Email on your own domain is an important factor that speaks in favor of your site. Look at the email address listed in your contacts. If it does not end with your domain (as, for example, admin@ktonanovenkogo.ru does) but with yandex.ru or gmail.com, consider that this audit has just identified a serious problem. Don't worry: it is easy to create a mailbox on your domain, and you can still use your usual interface with it if you wish (read about Email for domains in Yandex and the same in Google).
  18. For online stores, interlinking product cards is relevant. It is done using blocks such as "Similar products", "Recently viewed", "Usually bought with this", "We recommend", etc. This has become a de facto mandatory attribute, the absence of which will immediately count against your resource. It is also desirable to offer visitors a one-click purchase, which has also become good practice.
  19. The "Contacts" section is one of the most important commercial factors, greatly affecting the success (and even the possibility) of promotion. This section must contain the address (the full postal address of the office with a zip code), a phone number (with an area code or an 8-800 number, but by no means a mobile number), working hours, and an Email (it is very good if there is also a feedback form). There must also be a map showing how to get there by private car, by public transport, and on foot. I once wrote about how you can create a location map based on Yandex Maps and get a location map for a site from Google.
  20. Payment and delivery options - a commercial site must have such pages with a detailed step-by-step description (sometimes even with video explanations added). In addition to these pages existing, there should also be links to them (in a conspicuous place) on the product cards. This is also a very important commercial factor (characterizing your "helpfulness").

It's time to sum up

Going through all the points of the above audit, you may find a lot of things that would make sense to change, correct, or remove. 

Looking at the site with the naked eye, you will miss most of the problems voiced above. Consider yourself armed now. And remember, there is nothing unimportant here, because every little thing matters (after all, it is often a little thing that separates you from success).

After you go through all the points of the audit and identify any errors, you prepare the final version of the TOR for the programmer. 

Choosing a programmer is a topic for a separate article. You can search for them on freelance exchanges, but you should approach the choice carefully, because reviews and ratings there are often inflated and hard to rely on.

The programmer evaluates the scope of work and announces the cost and deadlines, after which you approve all this and he starts making the necessary changes to the site (it is better to pay him in stages, after checking the implementation of each item of the TOR). Alternatively, you can do everything yourself, but some points will be very difficult for beginners, and you may do something wrong.

In the following articles of this column, I plan to talk in detail about commercial ranking factors, what SEO is today, whether links are needed for promotion, and how to get the best ones. We will also talk in detail about SEO texts, LSI phrases, and much, much more. Subscribe so you don't miss out.

P.S. Google has just published a new version of its guide to search engine optimization for beginners. Do you know what's the funniest thing? Most of their recommendations were voiced by me above. So this is not "evil" SEO at all; it is quite "good" in itself. In many ways, it is exactly these points that matter most to the search engines themselves.

10 main points of a website technical audit in Netpeak Spider

Being at the initial stage of learning about search engine optimization, many marketers and future SEO specialists are faced with a flurry of heterogeneous information that is difficult to organize and structure at first.

Netpeak Spider

What aspects of SEO should be paid attention to first? How do you start website optimization? How do you conduct an initial technical audit and process the information received? You will find answers to these and many other questions in this article.

1. Indexing Instructions

Setting up instructions for search robots is perhaps the first thing every novice specialist has to deal with, and definitely the area where the bulk of fatal mistakes that hinder a site's search engine promotion are made.

The most common mistakes include:

  1. Pages that need to be indexed and can potentially bring traffic are closed from indexing;
  2. Files that affect the appearance of the page are closed from indexing;
  3. Pages that have a permanent redirect or are listed as canonical are closed from indexing;
  4. The rel=nofollow attribute is set for links to internal pages, due to which part of the link weight is lost.

You can use a crawler to determine which instructions apply to the pages of your site and whether there are any problems caused by incorrectly specified robots.txt, Meta Robots, or X-Robots-Tag directives. We will use Netpeak Spider as the SEO audit tool.
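
If you want to spot-check a single URL outside the crawler, here is a rough Python sketch of the same three checks (this is not how Netpeak Spider works internally); the URL is a placeholder, and the requests and beautifulsoup4 packages are assumed to be installed.

```python
# A rough sketch of checking the three sources of indexing directives for one URL:
# robots.txt, the Meta Robots tag, and the X-Robots-Tag header.
import requests
from urllib import robotparser
from urllib.parse import urlparse
from bs4 import BeautifulSoup

url = "https://example.com/some-page/"  # placeholder
parsed = urlparse(url)

# 1. robots.txt: is the URL allowed for crawling at all?
rp = robotparser.RobotFileParser()
rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
rp.read()
print("Allowed in robots.txt:", rp.can_fetch("*", url))

# 2. Meta Robots tag and 3. X-Robots-Tag header on the page itself
resp = requests.get(url, timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")
meta = soup.find("meta", attrs={"name": "robots"})
print("Meta Robots:", meta.get("content") if meta else "not set")
print("X-Robots-Tag:", resp.headers.get("X-Robots-Tag", "not set"))
```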

So, to analyze search directives, you need to do the following:

  1. Launch Netpeak Spider.
  2. Open the main menu "Settings" → "General".
  3. Set the default settings.
  4. To ensure that .js and .css files are not closed from indexing, enable JavaScript and CSS validation.
  5. Save your settings and return to the main program window.
  6. Go to the sidebar and open the Options tab.
  7. Make sure that all the options under "Indexing" are enabled.
  8. Enter the site address in the "Start URL" field and start scanning.

As a result of the scan, you can get detailed information in several ways:

  1. A table with information on all crawled URLs.

    As a result of the scan, you will see a table with a complete list of scanned pages. The "Allowed in robots.txt" column shows whether each individual URL is allowed or disallowed for crawling, and the "Meta Robots" and "X-Robots-Tag" columns show the directives applied to that URL.

    Information table
  2. Summary tab (Reports) in the sidebar.

    By selecting the attribute you are interested in, which corresponds to certain pages, you will filter the crawl results, focusing only on those URLs for which Disallow is specified in robots.txt or, for example, on which nofollow or noindex is set via Meta Robots or X-Robots-Tag.

    Summary tab
  3. Issues (Reports) tab in the sidebar.
    Based on the scan results, the program identifies several dozen types of errors and warnings, including those associated with indexing instructions.
  4. Dashboard with indexing status data.

    On the "Dashboard" tab, next to the table of crawl results, information about indexed and non-indexed pages of the site will be presented in the form of a pie chart. There you can also find a diagram that clearly demonstrates the reasons for non-indexability.

    Each of the segments of the chart is clickable and acts similarly to the filtering described above.

2. Meta tags Title and Description

Optimization of the Title and Description meta tags is one of the most important stages of website search engine optimization. First, their content is carefully analyzed by search robots to form a general idea of the content of the page. Secondly, they form the page snippet in the search results.

In the context of the SEO audit of the site, it is necessary to analyze:

  1. whether all pages have a Title and Description;
  2. how many characters the Title (at least 10 and no more than 70 on average) and the Description (at least 50 and no more than 260-320) contain;
  3. whether there are duplicate Titles and Descriptions within the site;
  4. whether there are several Titles or Descriptions on a single page at once.
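
As a quick illustration (and not a replacement for a full crawl), here is a minimal Python sketch of these four checks for a short list of placeholder URLs; the length thresholds are the rough ranges mentioned above, and the requests and beautifulsoup4 packages are assumed.

```python
# A minimal sketch of checking Title/Description presence, length, duplication,
# and multiple occurrences; the URLs are placeholders.
import requests
from bs4 import BeautifulSoup

urls = ["https://example.com/", "https://example.com/page/"]
seen_titles = {}

for url in urls:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    titles = soup.find_all("title")
    descriptions = soup.find_all("meta", attrs={"name": "description"})

    title = titles[0].get_text(strip=True) if titles else ""
    description = descriptions[0].get("content", "") if descriptions else ""

    if not (10 <= len(title) <= 70):
        print(url, "-> missing or badly sized Title:", len(title), "chars")
    if not (50 <= len(description) <= 320):
        print(url, "-> missing or badly sized Description:", len(description), "chars")
    if len(titles) > 1 or len(descriptions) > 1:
        print(url, "-> more than one Title/Description on the page")
    if title in seen_titles:
        print(url, "-> duplicates the Title of", seen_titles[title])
    seen_titles.setdefault(title, url)
```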

To find out what the situation is with these meta tags within your site, be sure to check the parameters "Title", "Title length", "Description" and "Description length" in the sidebar.

head tags

By the way, if you need the Title and Description to be of a strictly defined length, you can set the range of valid values in the "Settings" → "Restrictions" section.

During the scanning process, the program will determine the length of each of the meta tags, and on the tab with errors, it will indicate all existing Title and Description problems.

See errors

3. XML Sitemap

Before you start reviewing your sitemap, ask yourself two questions:

  1. Does the site being analyzed have an XML sitemap?
  2. If so, is its address listed in the robots.txt file?

If the answer to both questions is “yes”, the only thing left for the audit specialist is to analyze the map for various kinds of errors. The best way to do this is with Netpeak Spider's built-in XML Sitemap Validator tool. To run a check you need:

  1. Launch Netpeak Spider.
  2. In the upper right corner, click "Run" → "XML Sitemap Validator".
    xml sitemap validator
  3. Enter the sitemap address and press "Start".

    Enter the sitemap address

  4. Wait for the scan to finish and view the list of found errors in the sidebar. All pages included in the sitemap on which errors were found can be transferred to the main table by clicking the "Transfer URL and close" button.
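
As a complementary check outside the tool, a few lines of Python can download the sitemap and make sure every URL listed in it actually responds with 200 OK; the sitemap address below is a placeholder, and the requests package is assumed.

```python
# A small sketch that parses an XML sitemap and reports every listed URL
# that does not return 200 OK.
import requests
import xml.etree.ElementTree as ET

sitemap_url = "https://example.com/sitemap.xml"  # placeholder
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
for loc in root.findall(".//sm:loc", ns):
    page_url = loc.text.strip()
    code = requests.head(page_url, allow_redirects=False, timeout=10).status_code
    if code != 200:
        print(page_url, "->", code)
```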

4. Server response time

The faster your site works, the better. This concerns both the server response time (the server should answer the request sent by the browser as quickly as possible) and the loading speed of the content itself.

To analyze both indicators, we will use a crawler. The scanning procedure is performed by analogy with the one we described above: the main thing is not to forget to check the parameters “Server response time” and “Content loading time” before starting the analysis.

Server response time

At the end of the scan, pages that respond too slowly will be flagged with the medium-criticality error "Long server response time".
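
For a quick manual measurement of both indicators, here is a rough Python sketch for a couple of placeholder URLs; the 0.5-second threshold is an arbitrary example, not an official limit, and the requests package is assumed.

```python
# A rough sketch: resp.elapsed approximates the server response time (headers
# received), and the second timer measures how long the body takes to download.
import time
import requests

urls = ["https://example.com/", "https://example.com/catalog/"]  # placeholders

for url in urls:
    resp = requests.get(url, stream=True, timeout=10)
    server_time = resp.elapsed.total_seconds()

    start = time.perf_counter()
    _ = resp.content  # forces the body to be downloaded
    download_time = time.perf_counter() - start

    flag = "  <- too slow?" if server_time > 0.5 else ""  # arbitrary threshold
    print(f"{url}: response {server_time:.2f}s, content {download_time:.2f}s{flag}")
```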

5. Duplicates

Full or partial duplication of content within the site, even if unintentional, can significantly complicate the path of your site to the top of the organic search results. Search engines react extremely negatively to duplicates, so as part of an SEO audit, you should pay attention to all the main types of duplicates within your site in order to eliminate them later.

By crawling a website with Netpeak Spider, you can detect full-page duplicates, as well as duplicate Title and Description meta tags (we already talked about them in paragraph 2), H1 headers, and text content.
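
If you want to double-check a handful of suspicious pages by hand, the following Python sketch groups placeholder URLs by a hash of their visible text and by their Title, flagging any groups that coincide; requests and beautifulsoup4 are assumed.

```python
# A minimal sketch of catching full duplicates and duplicate Titles by hashing;
# the URL list is a placeholder.
import hashlib
from collections import defaultdict

import requests
from bs4 import BeautifulSoup

urls = ["https://example.com/a/", "https://example.com/b/", "https://example.com/c/"]
by_body, by_title = defaultdict(list), defaultdict(list)

for url in urls:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    text = " ".join(soup.get_text().split())  # normalized visible text
    by_body[hashlib.md5(text.encode()).hexdigest()].append(url)
    title = soup.title.get_text(strip=True) if soup.title else ""
    by_title[title].append(url)

for group in list(by_body.values()) + list(by_title.values()):
    if len(group) > 1:
        print("Possible duplicates:", group)
```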

6. Content

Well-optimized page content should include:

  1. competent and unique text of at least 500 characters;
  2. exactly one main first-level heading (H1);
  3. optimized images with the ALT attribute filled in.
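
For a single page, these three conditions are easy to verify even without a crawler; below is a minimal Python sketch for one placeholder URL, assuming requests and beautifulsoup4.

```python
# A minimal sketch of the three content checks: text volume, H1 count, ALT attributes.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/article/"  # placeholder
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

text_length = len(" ".join(soup.get_text().split()))
h1_tags = soup.find_all("h1")
images_without_alt = [img.get("src") for img in soup.find_all("img") if not img.get("alt")]

if text_length < 500:
    print(f"Text is too short: {text_length} characters")
if len(h1_tags) != 1:
    print(f"Expected exactly one H1, found {len(h1_tags)}")
if images_without_alt:
    print("Images without ALT:", images_without_alt)
```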

If you have a clear understanding of the limits within which the image weight, H1 size, and text volume can vary, you can set them manually in the “Settings” → “Restrictions” section.

To check each of the above aspects, you should:

  1. Launch Netpeak Spider.
  2. In the list of options on the sidebar, be sure to check:
    1. "Content" → "Images";
    2. "Headings H1-H6" → "Content H1", "Length H1", "Headings H1";
    3. "Metrics" → "Content size".

    If you also want to check the site (section of the site, list of pages) for the presence of subheadings H2 and H3, check the appropriate options in the "Headings H1-H6" item.

  3. Enter the site address and start scanning.
  4. After completing the procedure, in the main table with the scan results you will see columns with data on the content volume and on the H1 length. In the sidebar, on the Reports → Issues tab, errors related to content optimization will be highlighted, including issues with H1 headings, content size, and missing ALT attributes on images.

Lists of URLs containing one (or more) of the errors above can be filtered and downloaded as a separate report.

7. Broken links

There are many ambiguous search engine optimization factors about which there is no unequivocal opinion in the expert community. However, broken links (links returning a 404 server response code) are definitely not among them: experienced specialists unanimously agree that they are extremely detrimental to website optimization.

Finding broken links is a procedure that you have to perform on a regular basis, and not just as part of a global primary SEO audit.
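
A rough Python sketch of such a routine check for a single placeholder page is shown below: it collects the links from the page and reports those that do not respond or return a 404; requests and beautifulsoup4 are assumed.

```python
# A rough sketch of a broken-link check for one page; some servers reject HEAD
# requests, so a GET fallback could be added if needed.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

page_url = "https://example.com/"  # placeholder
soup = BeautifulSoup(requests.get(page_url, timeout=10).text, "html.parser")
links = {urljoin(page_url, a["href"]) for a in soup.find_all("a", href=True)}

for link in sorted(links):
    if not link.startswith("http"):
        continue  # skip mailto:, tel:, javascript:, etc.
    try:
        code = requests.head(link, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        code = "no response"
    if code == 404 or code == "no response":
        print("Broken link:", link, "->", code)
```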

To check the site for broken links, it is enough to activate only three parameters: "Server response code", "Content-Type", and "Errors".

Key Reports

As a result of the scan, you can find all the pages returning 404 errors in the main table with the scan results (the corresponding server response code will be shown next to them), in the "Summary" panel, and in the lists of pages filtered by "Broken links" and "Broken Images".

Summary

8. Redirects

Setting up redirects is a mandatory step in optimizing any website. Redirects help fight duplicates and send users to the right pages when they land on non-existent URLs.

It is important to remember that a permanent redirect should return only a 301 (not a 302) response code and should not lead to a page with yet another redirect. 

Also, keep in mind that in order for a landing page to be successfully indexed by search robots, it must not be closed using robots.txt, Meta Robots, or X-Robots-Tag.

Thus, among the main problems associated with redirection, we can name the following:

  1. Broken redirect (redirect to an inaccessible or non-existent page).
  2. Infinite redirect (redirect from the current page to itself).
  3. Maximum number of redirects exceeded (more than 4 redirects in a chain by default).
  4. Redirect blocked in robots.txt.
  5. Redirects with malformed URLs (redirects with malformed URLs in the HTTP headers of the server response).

In the process of conducting a technical audit, you will be able to identify all of the above-mentioned problems on your site, as well as check that the final URLs of your redirects are specified correctly.
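
To trace one redirect chain by hand, the following rough Python sketch follows a placeholder URL hop by hop, printing each response code, warning about temporary 302s, and flagging chains longer than the four-hop limit mentioned above; the requests package is assumed.

```python
# A rough sketch of manually tracing a redirect chain for a single URL.
import requests

url = "https://example.com/old-page/"  # placeholder start of the chain
max_hops = 4  # the same default limit as in the list above

for hop in range(max_hops + 1):
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(url, "->", resp.status_code)
    if resp.status_code not in (301, 302, 303, 307, 308):
        break  # final page reached (ideally 200 OK within one or two hops)
    if resp.status_code == 302:
        print("  note: temporary 302 where a permanent 301 is probably intended")
    location = resp.headers.get("Location")
    if not location:
        break  # a redirect status without a Location header is itself a problem
    url = requests.compat.urljoin(url, location)
else:
    print("More than", max_hops, "redirects in a row - the chain is too long")
```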

End urls for redirects

9. rel=canonical attribute

It is possible that on the way to the ninth point it seemed to you for a moment that all the most important things were already behind you. However, an SEO audit checklist cannot be considered complete until it includes analysis of the Canonical attribute. What problems can be associated with incorrect Canonical settings?

  1. Canonical chain (the canonical URL points to a page that in turn declares a different canonical);
  2. Canonical blocked in robots.txt (in other words, the page designated as the priority for indexing is unavailable to robots or returns a response code other than 200 OK).

Also, do not forget about the possibility that some of the attributes were set by mistake, and therefore do not allow important pages of your site to be indexed properly. To control this aspect of optimization, we recommend using Netpeak Spider error filtering (low severity warnings) such as "Non-canonical pages" and "Duplicate Canonical URLs".

The first shows non-canonical pages whose <link rel="canonical" /> URL points to another page; the second shows pages with duplicate <link rel="canonical" /> tags (when using this filter, all URLs will be grouped by the "Canonical URL" parameter).
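
For a single suspicious URL, the canonical setting can also be checked in a few lines of Python; the sketch below uses a placeholder URL, normalizes only trailing slashes (a deliberately simplistic comparison), and assumes requests and beautifulsoup4.

```python
# A minimal sketch of two quick checks: does the page point to a different
# canonical URL, and does that canonical URL itself return 200 OK?
import requests
from bs4 import BeautifulSoup

url = "https://example.com/category/?page=2"  # placeholder
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
link = soup.find("link", rel="canonical")

if link and link.get("href"):
    canonical = link["href"]
    if canonical.rstrip("/") != url.rstrip("/"):
        print("Non-canonical page:", url, "-> canonical is", canonical)
    code = requests.get(canonical, allow_redirects=False, timeout=10).status_code
    if code != 200:
        print("Canonical target does not return 200 OK:", canonical, "->", code)
else:
    print("No rel=canonical on the page")
```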

canonical url

10. Mixed Content

If you previously attempted to "move" to the secure HTTPS protocol but did not configure the site properly, you will come across the concept of "mixed content". It means that the site serves content over the secure and the insecure protocol at the same time. To check whether this applies to your site as well, run a scan with the default settings.
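
A quick manual spot check is also possible: the Python sketch below fetches one placeholder HTTPS page and lists the images, scripts, linked resources, and frames that are still requested over plain HTTP; requests and beautifulsoup4 are assumed.

```python
# A rough sketch of a mixed-content check for a single HTTPS page.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/"  # placeholder HTTPS page
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

insecure = []
for tag, attr in (("img", "src"), ("script", "src"), ("link", "href"), ("iframe", "src")):
    for element in soup.find_all(tag):
        value = element.get(attr, "")
        if value.startswith("http://"):
            insecure.append(value)

if insecure:
    print("Mixed content found on", url)
    for resource in insecure:
        print("  ", resource)
else:
    print("No HTTP resources found on", url)
```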

The list of pages whose protocol has not changed after moving to HTTPS can be found in the sidebar on the Summary tab.

http summary

They will also be flagged with the low-criticality error "Not HTTPS protocol".

Briefly about the main points

A basic technical SEO audit of a site before starting work on a new project includes checking several main points:

  1. Checking indexing instructions;
  2. Checking that the Meta Title and Meta Description are filled in correctly;
  3. XML sitemap validation;
  4. Server response time analysis;
  5. Searching for duplicates;
  6. Checking content optimization;
  7. Checking the site for broken links;
  8. Checking the configured redirects;
  9. Checking the rel=canonical attribute;
  10. Searching for mixed content left over from the move to the HTTPS protocol.

For the analysis, you will need a crawler (we used Netpeak Spider) that will identify all the key problems of the site for their subsequent elimination.
