NoIndex Video Tutorial Transcript
The NoIndex tag is a great tool to have in your SEO arsenal. Today I’m going to explain how to implement it and when you should use it.
What Does a NoIndex Tag Do?
A NoIndex tag is an instruction to search engines that you don’t want a page included in their search results. You should use this when you believe you have a page that search engines might consider to be of poor quality.
Now that is a very subjective definition! However, if you have pages on your site that are very thin on content, but you want to keep them for your users, these might be good candidates for the NoIndex tag.
You should note that a NoIndex tag does not tell a search engine not to crawl a page. Over time, search engines may crawl the page less, but there is no guarantee that they will.
In addition, pages with a NoIndex tag can still accumulate PageRank, and then pass PageRank to other pages via links on them.
How Do You Implement a NoIndex Tag?
- Identify the pages on which you want to place a NoIndex tag.
- Place the tag in the head section of each of those pages.
- Write out the actual NoIndex tag as shown below.
- Update the source page on your live website.
<meta name="robots" content="noindex">
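To show where the tag belongs, here is a minimal illustrative page with the NoIndex tag placed in the head section (the title and body content are placeholders):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Tells search engines not to include this page in their results -->
  <meta name="robots" content="noindex">
  <title>Thin Content Page</title>
</head>
<body>
  <p>Page content goes here.</p>
</body>
</html>
```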
But Wait, There’s More!
There are other types of tags to consider depending on what you’re ultimately trying to accomplish. Remember that this tag does not prevent crawling. And, while it can pass PageRank through links, it may not be as effective for you as other tags. There are definitely other solutions to consider:
First would be Rel=Canonical tags, which are the best choice for dealing with duplicate content, alternate sort orders, or filters.
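As a sketch of how that looks, a sorted or filtered version of a page can point back to the main version with a canonical tag in its head section (the URLs here are placeholders):

```html
<!-- Placed in the head of a page such as example.com/shoes?sort=price -->
<link rel="canonical" href="https://example.com/shoes">
```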
Then you have Rel=Prev and Rel=Next, which are your go-to tags for paginated content.
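For example, page 2 of a paginated series would declare its neighbors in its head section like this (placeholder URLs):

```html
<!-- In the head of page 2 of a multi-page series -->
<link rel="prev" href="https://example.com/articles?page=1">
<link rel="next" href="https://example.com/articles?page=3">
```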
Rel=NoFollow attributes, placed on individual links, are going to be your best bet for preventing PageRank from passing through those links.
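Unlike the NoIndex meta tag, rel="nofollow" is typically applied to individual links. For example (placeholder URL):

```html
<!-- This link will not pass PageRank to the destination page -->
<a href="https://example.com/untrusted-page" rel="nofollow">Example link</a>
```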
And finally, Robots.txt is the best way to prevent the crawling of web pages. Thank you.
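A minimal robots.txt, placed at the root of your site, might look like this (the blocked path is a placeholder):

```
# Applies to all crawlers
User-agent: *
# Prevents crawling of everything under /private/
Disallow: /private/
```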
More of Digital Marketing Classroom
For other tutorials in this series see: Digital Marketing Classroom.