Factors That Affect SEO

To help you learn about website optimization and become more familiar with how it works, below is a list of SEO factors, each with a short explanation.

4xx errors often point to a problem on a website. For example, if you have a broken link on a page and visitors click it, they may see a 4xx error. It’s important to regularly monitor these errors and investigate their causes, because they can have a negative impact and lower the site’s authority in users’ eyes.

5xx error messages are sent when the server knows that it has a problem or error. It’s important to regularly monitor these errors and investigate their causes, because they can have a negative impact and lower the site’s authority in search engines’ eyes.
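As an illustration of this kind of monitoring, here is a minimal Python sketch (standard library only) that requests a list of URLs and reports any 4xx/5xx responses. The example.com URLs and the "seo-audit-sketch" user-agent string are placeholders, not part of any real tool.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

# Placeholder URLs -- replace with pages from your own site.
URLS = [
    "https://www.example.com/",
    "https://www.example.com/old-page",
]

def status_of(url):
    """Return the HTTP status code for a URL, or None on a network failure."""
    req = Request(url, headers={"User-Agent": "seo-audit-sketch/0.1"})
    try:
        with urlopen(req, timeout=10) as resp:
            return resp.status
    except HTTPError as err:   # 4xx/5xx responses raise HTTPError
        return err.code
    except URLError:
        return None

for url in URLS:
    code = status_of(url)
    if code is None:
        print(f"{url}: network error")
    elif code >= 400:
        print(f"{url}: {code} <- broken, investigate")
    else:
        print(f"{url}: {code} OK")
```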

A custom 404 error page can help you keep users on the website. Ideally, it should inform users that the page they are looking for doesn’t exist and feature such elements as an HTML sitemap, a navigation bar, and a search field.

But most importantly, a 404 error page should return a 404 response code. This may sound obvious, but unfortunately it’s rarely the case. Here is why it happens and what it can lead to, according to Google Search Console:

“Just because a page displays a 404 File Not Found message doesn’t mean that it’s a 404 page. It’s like a giraffe wearing a name tag that says “dog.” Just because it says it’s a dog, doesn’t mean it’s actually a dog. Similarly, just because a page says 404, doesn’t mean it’s returning a 404…

Returning a code other than 404 or 410 for a non-existent page… can be problematic. Firstly, it tells search engines that there’s a real page at that URL. As a result, that URL may be crawled and its content indexed. Because of the time Googlebot spends on non-existent pages, your unique URLs may not be discovered as quickly or visited as frequently and your site’s crawl coverage may be impacted.

We recommend that you always return a 404 (Not found) or a 410 (Gone) response code in response to a request for a non-existing page.”
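One way to verify this yourself is to request a URL that definitely does not exist on your site and inspect the status code it returns. Below is a minimal sketch of that check; the probe URL is made up, and a 2xx/3xx answer would indicate a “soft 404”.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

# Hypothetical probe: a path that should not exist on your site.
probe = "https://www.example.com/this-page-should-not-exist-12345"

req = Request(probe, headers={"User-Agent": "seo-audit-sketch/0.1"})
try:
    with urlopen(req, timeout=10) as resp:
        # Reaching this branch means the server answered with 2xx/3xx --
        # a "soft 404" that search engines may crawl and index.
        print(f"Soft 404 suspected: server returned {resp.status}")
except HTTPError as err:
    if err.code in (404, 410):
        print(f"OK: non-existent page returns {err.code}")
    else:
        print(f"Unexpected error code: {err.code}")
```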

The robots.txt file is automatically fetched by robots when they arrive at your website. This file should contain directives for robots, such as which pages should or should not be crawled. It must be well formatted so that search engines can read it.

If you want to disallow crawling of some content (for example, pages with private or duplicate content), just add an appropriate rule to the robots.txt file.

For more information on such rules, check out robotstxt.org.

Please note that commands placed in the robots.txt file are more like directives than absolute rules for robots to follow. There’s no guarantee that a disobedient robot will not check the content that you have disallowed. Therefore, if you have any secret or sensitive content on your site, robots.txt is not a way to lock it away from the public.
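To get a feel for how such rules behave, you can test them with Python’s built-in urllib.robotparser, which reads robots.txt directives and tells you whether a given URL may be fetched. The robots.txt content and the URLs below are made up for illustration only.

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt: block a private section and a duplicate print version.
robots_txt = """
User-agent: *
Disallow: /private/
Disallow: /print/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for url in ("https://www.example.com/private/profile",
            "https://www.example.com/blog/post-1"):
    allowed = parser.can_fetch("*", url)
    print(f"{url} -> {'allowed' if allowed else 'disallowed'} for crawling")
```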

An XML sitemap should contain all of the website pages that you want to get indexed, and should be located in the website’s root directory (e.g. http://www.site.com/sitemap.xml). In general, it serves to aid indexing and index saturation (the share of your pages that actually get indexed). It should be updated when new pages are added to the website, and it needs to be correctly coded.

Besides, in this sitemap you can set the priority of each page, telling search engines which pages they are supposed to crawl more often (i.e. the ones that are updated more frequently).

Learn how to create an .xml sitemap at sitemaps.org.
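For illustration, here is a minimal sketch that builds a tiny sitemap.xml with Python’s standard xml.etree module. The URLs and priority values are placeholders; a real sitemap would of course list your own pages.

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

# Placeholder pages: (URL, priority) pairs for your own site.
pages = [
    ("https://www.example.com/", "1.0"),
    ("https://www.example.com/products/", "0.8"),
    ("https://www.example.com/blog/latest-post", "0.5"),
]

urlset = ET.Element("urlset", xmlns=NS)
for loc, priority in pages:
    url_el = ET.SubElement(urlset, "url")
    ET.SubElement(url_el, "loc").text = loc
    ET.SubElement(url_el, "priority").text = priority

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
print(open("sitemap.xml", encoding="utf-8").read())
```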

A page can be restricted from indexing in several ways:

– in the robots.txt file
– by the noindex X-Robots-Tag HTTP header
– by the noindex meta tag.

The robots.txt rule is a directive in a plain-text file, the X-Robots-Tag is an HTTP response header, and the noindex meta tag is a line of HTML placed in the page’s <head>. Each of them tells crawlers how to handle a specific page: whether they are allowed to index it, follow its links, and/or archive its contents.

So make sure that pages with unique and useful content are available for indexing.
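As a sketch of how you might spot the last two mechanisms on a live page, the snippet below fetches a URL, looks at the X-Robots-Tag response header, and searches the HTML for a robots meta tag. The URL is a placeholder, and the regex is a simplification (a production tool would use a proper HTML parser).

```python
import re
from urllib.request import Request, urlopen

url = "https://www.example.com/some-page"  # placeholder
req = Request(url, headers={"User-Agent": "seo-audit-sketch/0.1"})

with urlopen(req, timeout=10) as resp:
    x_robots = resp.headers.get("X-Robots-Tag", "")
    html = resp.read().decode("utf-8", errors="replace")

# Very rough check for <meta name="robots" content="... noindex ...">.
meta = re.search(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
    html, re.IGNORECASE)

header_blocked = "noindex" in x_robots.lower()
meta_blocked = bool(meta and "noindex" in meta.group(1).lower())

if header_blocked:
    print("Blocked via X-Robots-Tag header:", x_robots)
if meta_blocked:
    print("Blocked via meta robots tag:", meta.group(1))
if not header_blocked and not meta_blocked:
    print("Page appears to be available for indexing")
```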

Usually, websites are available both with and without “www” in the domain name. This issue is quite common, and people link to both the www and non-www versions. Fixing it will help you prevent search engines from indexing two versions of the website.

Although such indexation won’t cause a penalty, setting one version as the priority is best practice, especially because it consolidates link juice from links with and without www onto one common version.
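A quick way to see whether this is handled is to request both host names and check that they end up on the same canonical address. A minimal sketch, with example.com standing in for your own domain:

```python
from urllib.request import Request, urlopen

def final_url(url):
    """Follow redirects and return the URL the request ends up at."""
    req = Request(url, headers={"User-Agent": "seo-audit-sketch/0.1"})
    with urlopen(req, timeout=10) as resp:
        return resp.geturl()

www = final_url("https://www.example.com/")
bare = final_url("https://example.com/")

if www == bare:
    print("OK: both host names resolve to one canonical version:", www)
else:
    print("Two live versions found -- set up a 301 from one to the other:")
    print("  www  ->", www)
    print("  bare ->", bare)
```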

Using secure encryption is highly recommended for many websites (for instance, those processing transactions or collecting sensitive user information). However, in many cases webmasters face technical issues when installing SSL certificates and setting up the HTTP/HTTPS versions of the website.

If you’re using an invalid SSL certificate (e.g. an untrusted or expired one), most web browsers will prevent users from visiting your site by showing them an “insecure connection” warning.

If the HTTP and HTTPS versions of your website are not set up properly, both of them can get indexed by search engines and cause duplicate content issues that may undermine your website’s rankings.
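One simple check is whether the HTTP version of the site redirects to HTTPS. The sketch below does that for a placeholder domain; if the request stays on HTTP, both versions are live and may both get indexed.

```python
from urllib.request import Request, urlopen

# Placeholder domain -- substitute your own.
http_url = "http://www.example.com/"

req = Request(http_url, headers={"User-Agent": "seo-audit-sketch/0.1"})
with urlopen(req, timeout=10) as resp:   # urlopen follows redirects by default
    landed_on = resp.geturl()

if landed_on.startswith("https://"):
    print("OK: the HTTP version redirects to HTTPS:", landed_on)
else:
    print("Warning: the HTTP version stays on", landed_on,
          "- both versions may get indexed and create duplicate content")
```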

302 redirects are temporary, so they don’t pass any link juice. If you use them instead of 301s, search engines might continue to index the old URL and disregard the new one as a duplicate, or they might divide the link popularity between the two versions, thus hurting search rankings.

That’s why it is not recommended to use 302 redirects if you are permanently moving a page or a website. Instead, stick to a 301 redirect to preserve link juice and avoid duplicate content issues.

301 redirects are permanent and are usually used to solve problems with duplicate content, or when URLs are no longer needed. The use of 301 redirects is absolutely legitimate, and it’s good for SEO because a 301 redirect funnels link juice from the old page to the new one. Just make sure you redirect old URLs to the most relevant pages.
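To see which status code a redirect actually returns, you have to look at the very first response instead of following it. A rough sketch using the standard http.client module (HTTPS only, query strings ignored for brevity; the old-page URL is a placeholder):

```python
from http.client import HTTPSConnection
from urllib.parse import urlsplit

def first_response(url):
    """Issue one request without following redirects; return (status, Location)."""
    parts = urlsplit(url)
    conn = HTTPSConnection(parts.netloc, timeout=10)
    conn.request("GET", parts.path or "/",
                 headers={"User-Agent": "seo-audit-sketch/0.1"})
    resp = conn.getresponse()
    return resp.status, resp.getheader("Location")

# Placeholder: an old URL you have permanently moved.
status, target = first_response("https://www.example.com/old-page")

if status == 301:
    print("OK: permanent redirect to", target)
elif status == 302:
    print("Temporary 302 found -- consider switching to a 301 ->", target)
else:
    print("No redirect; status", status)
```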

Basically, meta refresh may be seen as a violation of Google’s Quality Guidelines and is therefore not recommended from an SEO point of view.

As one of Google’s representatives points out: “In general, we recommend not using meta-refresh type redirects, as this can cause confusion with users (and search engine crawlers, who might mistake that for an attempted redirect)… This is currently not causing any problems with regards to crawling, indexing, or ranking, but it would still be a good idea to remove that.”

So stick to the permanent 301 redirect instead.
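If you want to check your pages for leftover meta refreshes, a rough sketch like the one below can flag them. The URL is a placeholder and the regex is a simplification; a production tool would use a real HTML parser.

```python
import re
from urllib.request import Request, urlopen

url = "https://www.example.com/some-page"  # placeholder
req = Request(url, headers={"User-Agent": "seo-audit-sketch/0.1"})
with urlopen(req, timeout=10) as resp:
    html = resp.read().decode("utf-8", errors="replace")

# Rough pattern for <meta http-equiv="refresh" content="0; url=...">.
match = re.search(r'<meta[^>]+http-equiv=["\']refresh["\'][^>]*>',
                  html, re.IGNORECASE)

if match:
    print("Meta refresh found -- replace it with a server-side 301:", match.group(0))
else:
    print("No meta refresh on this page")
```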

In most cases duplicate URLs are handled via 301 redirects. However, sometimes (for example, when the same product appears in two categories under two different URLs and both need to stay live) you can specify which page should be treated as the priority with the help of the rel=”canonical” tag. It should be correctly implemented within the <head> tag of the page and point to the version that should rank.

According to Google, the mobile-friendly algorithm affects mobile searches in all languages worldwide and has a significant impact on Google rankings. This algorithm works on a page-by-page basis: it is not a matter of how mobile-friendly your pages are by degrees; each page is simply judged as either mobile-friendly or not. The algorithm is based on such criteria as font sizes, tap targets/links, readable content, the viewport configuration, etc.
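Mobile-friendliness involves several criteria that only a full test (such as Google’s own mobile-friendly test) can judge, but one ingredient you can check mechanically is the viewport meta tag. The sketch below only looks for that tag; the URL is a placeholder and passing this check does not by itself make a page mobile-friendly.

```python
import re
from urllib.request import Request, urlopen

url = "https://www.example.com/"  # placeholder
req = Request(url, headers={"User-Agent": "seo-audit-sketch/0.1"})
with urlopen(req, timeout=10) as resp:
    html = resp.read().decode("utf-8", errors="replace")

# A responsive page normally declares something like:
# <meta name="viewport" content="width=device-width, initial-scale=1">
viewport = re.search(r'<meta[^>]+name=["\']viewport["\'][^>]*>',
                     html, re.IGNORECASE)

if viewport:
    print("Viewport meta tag present:", viewport.group(0))
else:
    print("No viewport meta tag -- the page is unlikely to pass a mobile-friendly check")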

Having duplicate rel=canonical code on a page happens frequently in conjunction with SEO plugins that often insert a default rel=canonical link, possibly unknown to the webmaster who installed the plugin. Double-checking the page’s source code will help correct the issue.
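A small script can do that double-checking for you: list every rel=”canonical” link found in the page source, and warn when there is more than one (the plugin problem described above) or none at all. The URL is a placeholder, and the regex is a rough simplification of real HTML parsing.

```python
import re
from urllib.request import Request, urlopen

url = "https://www.example.com/category-a/product-x"  # placeholder
req = Request(url, headers={"User-Agent": "seo-audit-sketch/0.1"})
with urlopen(req, timeout=10) as resp:
    html = resp.read().decode("utf-8", errors="replace")

# Collect href values of <link rel="canonical" href="..."> tags (rough regex).
canonicals = re.findall(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
    html, re.IGNORECASE)

if len(canonicals) == 1:
    print("OK: single canonical ->", canonicals[0])
elif not canonicals:
    print("No rel=canonical declared on this page")
else:
    print("Duplicate canonicals found -- check your SEO plugin settings:")
    for href in canonicals:
        print("  ", href)
```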

Frames allow displaying more than one HTML document in the same browser window. As a result, text and hyperlinks (the most important signals for search engines) seem missing from such documents.

If you use Frames, search engines will fail to properly index your valuable content, and won’t rank your website high.

Validation is usually performed via the W3C Markup Validation Service. And although it’s not obligatory and has no direct SEO effect, bad code may be the reason Google doesn’t index your important content properly.

We recommend checking your website pages for broken code to avoid issues with search engine spiders.
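If you want to script such checks, the W3C’s Nu HTML Checker also exposes a web service. The sketch below assumes the public endpoint at https://validator.w3.org/nu/?out=json, which accepts a POSTed HTML document and returns its findings as JSON; verify the service’s documentation and usage policy before relying on it.

```python
import json
from urllib.request import Request, urlopen

# Assumed endpoint of the public Nu HTML Checker; verify before use.
CHECKER = "https://validator.w3.org/nu/?out=json"

html = b"<!DOCTYPE html><html><head><title>Test</title></head><body><p>Hi</body></html>"

req = Request(
    CHECKER,
    data=html,
    headers={
        "Content-Type": "text/html; charset=utf-8",
        "User-Agent": "seo-audit-sketch/0.1",
    },
)
with urlopen(req, timeout=30) as resp:
    report = json.loads(resp.read().decode("utf-8"))

# Each reported message carries a type (error/info) and a description.
for message in report.get("messages", []):
    print(message.get("type"), "-", message.get("message"))
```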

Basically, heavy pages take longer to load. That’s why the general rule of thumb is to keep your HTML page size below 256 KB.

Of course, that’s not always possible. For example, if you run an e-commerce website with a large number of images, you may have to go beyond that, but keep in mind that this can significantly increase page load time for users on a slow connection.
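Measuring the size of the downloaded HTML is straightforward; here is a minimal sketch against a placeholder URL, using the article’s 256 KB rule of thumb as the threshold.

```python
from urllib.request import Request, urlopen

LIMIT = 256 * 1024  # the article's rule of thumb: 256 KB of HTML

url = "https://www.example.com/"  # placeholder
req = Request(url, headers={"User-Agent": "seo-audit-sketch/0.1"})
with urlopen(req, timeout=10) as resp:
    size = len(resp.read())  # size of the downloaded HTML in bytes

print(f"HTML size: {size / 1024:.1f} KB")
if size > LIMIT:
    print("Heavier than the 256 KB rule of thumb -- consider trimming the markup")
```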

URLs that contain dynamic characters like “?” and “_” and long parameter strings are not user-friendly, because they are not descriptive and are harder to memorize. To increase your pages’ chances to rank, it’s best to set up dynamic URLs so that they are descriptive and include keywords rather than numbers and parameters.

As Google Webmaster Guidelines state, “URLs should be clean coded for best practice, and not contain dynamic characters.”

URLs shorter than 115 characters are easier to read by end users and search engines, and will work to keep the website user-friendly.
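The sketch below runs both of these checks (dynamic parameters, underscores, and the 115-character guideline) over a couple of made-up URLs; in practice you would feed it the URLs from your own site or sitemap.

```python
from urllib.parse import urlsplit

# Placeholder URLs to illustrate the checks.
urls = [
    "https://www.example.com/blog/how-to-write-titles",
    "https://www.example.com/index.php?id=482&cat=7&session_id=9912",
]

for url in urls:
    parts = urlsplit(url)
    problems = []
    if parts.query:
        problems.append("contains dynamic parameters")
    if "_" in parts.path:
        problems.append("uses underscores in the path")
    if len(url) > 115:
        problems.append(f"is {len(url)} characters long (over 115)")
    print(url)
    print("  ", "; ".join(problems) if problems else "looks clean")
```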

Broken outgoing links can be a quality signal to search engines and users. If a site has many broken links it is logical to conclude that it has not been updated for some time. As a result, the site’s rankings may be downgraded.

Although one or two broken links won’t cause a Google penalty, try to check your website regularly, fix any broken links you find, and make sure their number doesn’t grow. Besides, your users will like you more if you don’t show them broken links pointing to non-existent pages.

According to Matt Cutts (head of Google’s Webspam team), “…there’s still a good reason to recommend keeping to under a hundred links or so: the user experience. If you’re showing well over 100 links per page, you could be overwhelming your users and giving them a bad experience. A page might look good to you until you put on your “user hat” and see what it looks like to a new visitor.”

Although Google frames this as a user-experience issue, what too many links on a page can really hurt is its rankings. So the rule is simple: the fewer links on a page, the fewer problems with its rankings.

In fact, there’s nothing to add here. Just try to stick to the best practices and keep the number of outgoing links (internal and external) to 100 or fewer.
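As a sketch of both checks, the snippet below extracts the link targets from a page with the standard HTMLParser and counts them against the ~100 guideline; each collected URL can then be fed to the status check shown earlier to catch broken (4xx/5xx) destinations. The page URL is a placeholder.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags, skipping anchors and mailto links."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href and not href.startswith(("#", "mailto:", "javascript:")):
                self.links.append(href)

page = "https://www.example.com/"  # placeholder
req = Request(page, headers={"User-Agent": "seo-audit-sketch/0.1"})
with urlopen(req, timeout=10) as resp:
    html = resp.read().decode("utf-8", errors="replace")

collector = LinkCollector()
collector.feed(html)
links = [urljoin(page, href) for href in collector.links]

print(f"{len(links)} outgoing links found on {page}")
if len(links) > 100:
    print("More than ~100 links -- consider trimming the page")
```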

If a page doesn’t have a title, or the title tag is empty (i.e. it just looks like <title></title> in the code), Google and other search engines will decide on their own what content to show on the results page. Thus, if the page ranks on Google for a keyword and someone sees it in Google’s results for their search, they may not want to click on it simply because the text shown there isn’t appealing.

No webmaster would want this, because in this case you cannot control what people see on Google when they find your page. Therefore, every time you are creating a webpage, don’t forget to add a meaningful title that would attract people.

A page title is often treated as the most important on-page element. It is a strong relevancy signal for search engines because it tells them what the page is really about. It is of course important that the title includes your most important keyword. But more than that, every page should have a unique title to ensure that search engines have no trouble determining which of the website’s pages is relevant for a given query. Pages with duplicate titles have fewer chances to rank high; what’s more, if your site has pages with duplicate titles, other pages may be hard to get ranked as well.

Every page should have a unique, keyword-rich title. At the same time, you should keep title tags from getting too long. Titles longer than 55 characters get truncated by search engines and will look unappealing in search results. You’re trying to get your pages ranked on page 1 of the search engines, but if the title is shortened and incomplete, it won’t attract as many clicks as it deserves.
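A minimal sketch of how you might audit titles across a few pages: fetch each one, extract the <title> text, and flag empty, over-length, or duplicate titles. The URLs are placeholders, and the regex extraction is a simplification of real HTML parsing.

```python
import re
from urllib.request import Request, urlopen

# Placeholder URLs -- in practice you'd run this over your whole site.
pages = [
    "https://www.example.com/",
    "https://www.example.com/about",
]

def title_of(url):
    req = Request(url, headers={"User-Agent": "seo-audit-sketch/0.1"})
    with urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else ""

seen = {}
for url in pages:
    title = title_of(url)
    if not title:
        print(f"{url}: missing or empty <title>")
    elif len(title) > 55:
        print(f"{url}: title is {len(title)} characters, may get truncated")
    if title:
        seen.setdefault(title, []).append(url)

for title, urls in seen.items():
    if len(urls) > 1:
        print(f"Duplicate title {title!r} on: {', '.join(urls)}")
```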

Although meta descriptions have no direct influence on rankings, they are still important because they form the snippet people see in search results. The description should therefore “sell” the webpage to searchers and encourage them to click through.

If the meta description is empty, search engines will themselves decide what to include in a snippet. Most often it’ll be the first sentence on the page. As a result, such snippets may be unappealing and irrelevant.

That’s why you should write meta descriptions for each of your website pages (at least for the landing pages) and include marketing text that can lure a user to click.

According to Matt Cutts, it is better to have unique meta descriptions, or even no meta descriptions at all, than to show duplicate meta descriptions across pages. So make sure that your most important pages have unique, optimized descriptions.

Length matters as well: if the meta description is too long, it’ll get cut off by the search engine and may look unappealing to users.
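The same kind of audit works for descriptions: extract each page’s meta description and flag missing, over-length, or duplicate ones. The URLs are placeholders, the regex is a simplification, and the 160-character ceiling used below is a common convention rather than a figure from this article.

```python
import re
from urllib.request import Request, urlopen

pages = [
    "https://www.example.com/",
    "https://www.example.com/pricing",
]  # placeholders

def description_of(url):
    req = Request(url, headers={"User-Agent": "seo-audit-sketch/0.1"})
    with urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = re.search(
        r'<meta[^>]+name=["\']description["\'][^>]+content=["\']([^"\']*)["\']',
        html, re.IGNORECASE)
    return match.group(1).strip() if match else ""

seen = {}
for url in pages:
    desc = description_of(url)
    if not desc:
        print(f"{url}: no meta description -- the snippet will be auto-generated")
    elif len(desc) > 160:  # commonly used ceiling, assumed here
        print(f"{url}: description is {len(desc)} characters and may get cut off")
    else:
        seen.setdefault(desc, []).append(url)

for desc, urls in seen.items():
    if len(urls) > 1:
        print(f"Duplicate description on: {', '.join(urls)}")
```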

The alt and title attributes of an image are commonly referred to as the alt tag (or alt text) and the title tag, even though technically they are attributes, not tags. The alt text describes what’s in the image and what function the image serves on the page. So if you have an image that’s used as a button to buy product X, the alt text would say: “button to buy product X”.

The alt text is used by screen readers (the assistive software used by blind and visually impaired people) to tell them what is in the image. The title attribute is shown as a tooltip when you hover over the element, so in the case of an image button, it could contain an extra call to action, like “Buy product X now for $19!”.

Each image should have alt text, not just for SEO purposes but also because blind and visually impaired people otherwise won’t know what the image is for. A title attribute is not required; it can be useful, but in most cases leaving it out shouldn’t be much of an issue.
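Finding images without alt text is easy to automate. The sketch below scans one placeholder page with the standard HTMLParser and lists every <img> whose alt attribute is missing or empty.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class ImgAltChecker(HTMLParser):
    """Record <img> tags whose alt attribute is missing or empty."""
    def __init__(self):
        super().__init__()
        self.missing = []
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(no src)"))

url = "https://www.example.com/"  # placeholder
req = Request(url, headers={"User-Agent": "seo-audit-sketch/0.1"})
with urlopen(req, timeout=10) as resp:
    html = resp.read().decode("utf-8", errors="replace")

checker = ImgAltChecker()
checker.feed(html)

if checker.missing:
    print("Images without alt text:")
    for src in checker.missing:
        print("  ", src)
else:
    print("All images have alt text")
```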