
Why CTR Is(n’t) a Ranking Factor


UPDATE 31 October 2017: Added video of Eric discussing this topic with Google’s Gary Illyes
Click-through rate (CTR) on your listing in the SERPs isn’t used by Google as a direct ranking factor. On the other hand, it might cause your rankings to rise for indirect reasons. How’s that for a confusing start? Excellent!
In this post, I’ll walk you through why Google doesn’t use it as a direct ranking signal, even though Google does use it as a way to test the quality of its search results. Understanding this distinction is important if you are serious about digital marketing, and I’ll show you how that understanding can help you get better results from your SEO efforts.
But first, let’s give you a direct answer to your question:

Does Google Use CTR as a Direct Ranking Factor?

No, it doesn’t, except in limited niche situations. CTR is used primarily in quality-control testing of proposed new algorithms, and possibly to audit search quality from time to time. This limited use is intended to be only one part of how Google evaluates user engagement and satisfaction with the search results, and the resulting data is then used to make decisions about further tuning of their core algorithms.
Below, you will see data and analysis that support the above statement, along with tests conducted by Rand Fishkin in which CTR has had a temporary impact on rankings. As a caveat to the prior paragraph, there does appear to be evidence that CTR may be used within Google’s freshness algorithm to discover trending topics.
I believe that this may be what has happened in Rand’s tests, but that this usage is limited to this specific niche situation.

Why CTR Isn’t a Direct Ranking Factor

I’ll give you this one in a sentence: “CTR in the search results isn’t a direct ranking factor because it’s too easy a signal to game, and in many other cases too hard a signal to use effectively.” The following graphic can help illustrate:
Why CTR Isn't a Ranking Factor
If it became clear that CTR was a direct signal, people would implement bots, or pay large numbers of people in third world countries to click on their pages in the search results to drive their rankings up. If it were that simple, this would already be happening at a massive scale.
You may also have heard of a more sophisticated concept called pogosticking.
How Pogosticking Works
Basically, a user clicks on one result in a SERP, spends a short period of time there, bounces back to the SERP, clicks on a different result, and doesn’t return. In theory, this sounds bad for the first page clicked on, as it implies that the user wasn’t satisfied with it, but did like the second one.
The problem with this logic is that there are some types of user scenarios where that conclusion doesn’t work. For example, if a user is in comparison shopping mode, they will want to see more than one result.
Or, suppose they are looking for multiple pieces of information from the one query: they got part of what they wanted from the first result (they were satisfied), went to a second result that didn’t give them what they wanted, and then got interrupted before they could do any more research, so they never go back to that SERP.
Another major issue is trying to define what makes a short visit vs. a long visit. There are so many gray areas in how you could try to approach something like this.
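To make the ambiguity concrete, here’s a minimal sketch of how a naive pogosticking classifier might work. To be clear, this is purely my own illustration, not anything Google has published: the click-record format, the 30-second dwell threshold, and all of the names are invented for the example.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical click-log record: one click on a SERP result, with the dwell
# time (seconds spent on the landing page before returning to the SERP,
# or None if the user never came back).
@dataclass
class SerpClick:
    result_url: str
    position: int
    dwell_seconds: Optional[float]  # None means the user never returned to the SERP

SHORT_DWELL_SECONDS = 30  # arbitrary threshold; this is exactly the "gray area"

def looks_like_pogosticking(session: List[SerpClick]) -> bool:
    """Naively flag a session as pogosticking: an early result gets a short
    visit and a bounce back, then a later result keeps the user."""
    if len(session) < 2:
        return False
    first, last = session[0], session[-1]
    bounced_quickly = (
        first.dwell_seconds is not None
        and first.dwell_seconds < SHORT_DWELL_SECONDS
    )
    stayed_on_last = last.dwell_seconds is None  # never came back to the SERP
    return bounced_quickly and stayed_on_last

# The comparison-shopping scenario described above fools this classifier:
# the user liked result 1 just fine but still opened result 2 to compare.
comparison_shopper = [
    SerpClick("shop-a.example/oil-filters", 1, dwell_seconds=25.0),
    SerpClick("shop-b.example/oil-filters", 2, dwell_seconds=None),
]
print(looks_like_pogosticking(comparison_shopper))  # True, yet it is a false positive
```

The comparison shopper gets flagged even though they were perfectly happy with the first result, which is exactly the kind of gray area that makes this signal so hard to use.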
In addition, looking at this from a more theoretical perspective, CTR behavior in the SERPs is something you can only measure on content which is ranking in the SERPs. That’s a pretty limited sample set. Contrast that with a signal like links, which is based on crawling, and will allow Google to consider every page they crawl on the web (not just those in the SERPs).

How Google Does Use CTR and Other Quality Measurements

The heart of this is derived from Paul Haahr’s excellent presentation at SMX West, which was titled How Google Works, a Ranking Engineer’s Perspective. Some of the key points of this line of thinking were also confirmed in the conversation that Google’s Andrey Lipattsev had with Rand Fishkin, Ammon Johns, and myself on Anton Shulke’s Google Q&A Show on March 23, 2016.
To summarize, Google uses controlled click-through rate testing to validate the quality of their search results. Please note the emphasis on the word “controlled”. What that means is that they will selectively test specific portions of their results, or new algorithms, and measure the results they get.
Think of it as an audit process, one that is run under controlled circumstances where they can be sure that the results aren’t being gamed by someone in the process.
Equally important is how the data is used. In short, it’s used to make tweaks to direct ranking factors (such as links, content, location, measures of relevance, …) that tend to improve user engagement with the SERPs.
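Purely as an illustration of the concept (this is a hypothetical sketch, not Google’s actual process, and every name and number in it is invented), you can think of controlled CTR testing as a small holdout experiment: a thin slice of queries is served by a candidate algorithm, engagement is compared against the control, and the verdict is about the algorithm, not about any individual page.

```python
import random

def run_controlled_audit(queries, control_ranker, candidate_ranker,
                         simulate_clicks, holdout_fraction=0.01, seed=42):
    """Hypothetical sketch of a controlled engagement audit.

    A small holdout of queries is served by the candidate ranker; everyone
    else sees the control. We then compare click-through on the two buckets
    and report whether the candidate looks better. The rankers and the
    click model are placeholders supplied by the caller.
    """
    rng = random.Random(seed)
    control_clicks = control_impressions = 0
    candidate_clicks = candidate_impressions = 0

    for query in queries:
        in_holdout = rng.random() < holdout_fraction
        results = candidate_ranker(query) if in_holdout else control_ranker(query)
        clicked = simulate_clicks(query, results)  # in reality, real user behavior
        if in_holdout:
            candidate_impressions += 1
            candidate_clicks += int(clicked)
        else:
            control_impressions += 1
            control_clicks += int(clicked)

    control_ctr = control_clicks / max(control_impressions, 1)
    candidate_ctr = candidate_clicks / max(candidate_impressions, 1)
    # The outcome informs a decision about the algorithm, not about any one page.
    return {"control_ctr": control_ctr,
            "candidate_ctr": candidate_ctr,
            "promote_candidate": candidate_ctr > control_ctr}

# Toy usage with invented stand-ins for the rankers and for user behavior:
queries = ["oil filters"] * 10000
control = lambda q: ["page-a", "page-b"]
candidate = lambda q: ["page-b", "page-a"]
clicks = lambda q, results: random.random() < (0.32 if results[0] == "page-b" else 0.30)
print(run_controlled_audit(queries, control, candidate, clicks, holdout_fraction=0.1))
```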
This type of test is just one of many that Google uses. We also know that Google uses manual testing based on large numbers of people who evaluate search quality. Google has a handbook they use to train the raters that you can see here.
These inputs, and likely many other signals, are used to implement a process that may look something like this:
Potential Google Testing Process
Ultimately, this feeds into a larger-scale process of testing and refining search quality on a regular basis.
For those of you who are thinking about how to manipulate this, good luck. Google is only doing these things on a very limited basis as part of a quality auditing process. This is not really amenable to large scale CTR fraud.

What About Rand Fishkin’s CTR and Pogosticking Tests?

There have been a number of times at conferences where Rand Fishkin has run tests with the audience where he has asked them to click on a search result en masse to see if that might alter the ranking for that result. This was discussed in the recent Google Q&A Show I referenced earlier. What follows is a cleaned up excerpt from that conversation:

Rand: Seven or eight times in the last two years. I’ve done something where I’ll be standing on a stage, in front of 500 to a couple thousand people, and I’ll ask them, “Hey. Can you all jump on your mobile phones or on your laptops and do a search? I want you to click the seventh result. Then over the next 24 hours, let’s observe what happens to that page’s ranking for that query.”
I’ve had seven or eight of those that have been successful, and I’ve had four or five where the ranking did not change. I’ve run a few of them over Twitter, with a few where the ranking did change pretty quickly and usually sticks around a day or two, and a few where nothing happened. What’s happening there that’s making the ranking for the page that gets clicked on change so rapidly? Then what’s happening when it falls back down again, relatively rapidly, over the next two to three days?
Andrey: It’s hard to judge immediately, without actually looking at the data in front of me. But, in my opinion, my best guess here would be the general interest that you generate around that subject. You generate exactly the sort of signals that we are looking out for: mentions and links and tweets and social mentions, which are basically more links to the page, more mentions in this context. I suppose it throws us off for a while until we’re able to establish that none of that is relevant to the user intent.
Eric: It sounds to me that in the case of some of the tests that Rand has done, that there might be something going on that’s temporal in nature, where you’re seeing a burst of activity around something, and you’re responding to it like a trending item.
Andrey: The thing is, without looking at the way that data is changing inside the system, we can’t really tell.

Eric’s Interpretation: I’ll stand by the guess I made in the above conversation: some part of the Google algos designed to pick up on hot news events is triggering the behavior seen in Rand’s experiments. This would explain the rise in the results and the drop afterward when the click activity tapered off. But we can’t know 100% for sure.
Google Elevates Trending Content

How Would Google Measure Your Chances of Higher User Engagement?

Before answering that question directly, let’s recap:

  1. Using CTR as a ranking factor would be highly susceptible to being gamed by spammers. If it were that simple, you would see ads from companies promoting easy ranking gains for you, and you’d be getting daily emails about it. But you’re not.
  2. Google has confirmed that they use CTR and other user engagement metrics as part of the process of evaluating search quality, but indicates that they do this only in controlled testing scenarios.
  3. Rand Fishkin’s tests have shown some aspect of CTR that does appear to work at times. Given the nature of the results, the effect appears to come from a Google algo that is highly responsive, and is therefore likely designed to detect trending activity.

Considering point 1, I’d advise that you don’t invest any energy in trying to use click behavior to manipulate search rankings. Google is obviously highly aware of the potential for this type of activity, and largely addressed it long ago.
That said, I do think that Rand’s testing has uncovered something important, and that it has something to do with Google’s trending-activity algorithms. However, if you simply sustained the higher click volume, it would no longer be trending; it would be steady state, and the benefit would go away rapidly.
Looking at point 2 above, the logical outcome of valuing CTR and user engagement as a search quality check is that Google would look for (less game-able) signals on the web that would be indicators that users would engage more with one particular site than another. What would those be? I thought you’d never ask!
Next are some sample signals Google could possibly use. Before you read these, I need to emphasize that I am NOT saying they DO use these; I’m just offering them as possibilities. Now, let’s look at three examples of signals that Google could use:
Example 1: Author Authority
Yes, the Google Authorship program was ended, but that doesn’t mean that Google ended their interest in using author authority as a signal. In our industry, for example, if Danny Sullivan writes an article, you’re very likely to give it more weight than an article from someone you don’t know.
Google May Elevate Content From Authoritative Authors
This will likely increase your chances of clicking on one of his articles in relevant search results, and you’re also more likely to stay on the article page longer and spend more time reading it. Chances are that you are less likely to go back to the search results quickly, and less likely to need to read a related article by someone else.
You could measure authority in several ways, including the average number of links to an author’s post, social activity shortly after it goes live, links to their author/bio pages, search volume on their name, and how often compound searches with their name along with a subject matter title (e.g. “Danny Sullivan Panda”) take place.
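Purely to illustrate how such proxies might be combined, here’s a hypothetical scoring sketch. I am not saying Google computes anything like this; the weights, the log-scaling, and the function name are all invented for the example.

```python
import math

def author_authority_score(avg_links_per_post, social_shares_first_week,
                           links_to_bio_page, name_search_volume,
                           name_plus_topic_searches):
    """Hypothetical composite of the proxy signals listed above. The weights
    and the log-scaling are arbitrary illustrations, not anything Google
    has disclosed."""
    signals = {
        "links_per_post": (avg_links_per_post, 0.30),
        "early_social":   (social_shares_first_week, 0.20),
        "bio_links":      (links_to_bio_page, 0.20),
        "name_searches":  (name_search_volume, 0.15),
        "name_topic":     (name_plus_topic_searches, 0.15),
    }
    # log1p dampens runaway values so no single signal dominates the score
    return sum(weight * math.log1p(value) for value, weight in signals.values())

# Example: a well-known author vs. a largely unknown one
print(author_authority_score(40, 500, 1200, 8000, 300))  # higher score
print(author_authority_score(2, 10, 5, 20, 0))           # lower score
```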
I’d bet that this is something that Google actually does use in some fashion, though it may be specific to personalized results. You could imagine that they tested this signal, saw that it generated improved overall user engagement in their SERPs, with no unacceptable downsides, and then rolled it out. I also consider it likely that they have tested hundreds of versions of this in a systematic fashion to tune to the best possible way to leverage a signal like this.
Example 2: Brand Authority
This is something that Google can also measure, and it’s clear that people will spend more time engaging with a brand they know and trust. This is perfectly analogous to the situation with author authority. One metric you could look at would be the brand search volume. This would be a weak signal though, as it would be easily gamed by simply doing large numbers of searches on your brand.
However, the counter to this attempt to game the system is an easy one. Does the brand’s home page get tons of links, including many from authoritative, highly trusted sites? How many mentions of the brand are there on the web? How often does the brand name show up in the anchor text of links? How do the brand’s metrics compare to those of competitors in the same market space?
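Here’s a hypothetical sketch of that corroboration idea: raw brand search volume only counts insofar as it is backed up by link and mention signals that are much harder to fake. Again, the weights, the capping trick, and the names are my own inventions, not anything Google has disclosed.

```python
import math

def brand_authority_score(brand_search_volume, homepage_links,
                          trusted_domain_links, brand_mentions,
                          branded_anchor_links):
    """Hypothetical sketch: brand search volume only counts insofar as it is
    corroborated by link- and mention-based signals that are harder to fake.
    All weights are invented for illustration."""
    corroboration = (
        0.4 * math.log1p(trusted_domain_links) +
        0.3 * math.log1p(homepage_links) +
        0.2 * math.log1p(brand_mentions) +
        0.1 * math.log1p(branded_anchor_links)
    )
    # Cap the contribution of raw search volume at the level the corroborating
    # signals can support, so churning out fake brand searches buys you nothing.
    searches = min(math.log1p(brand_search_volume), corroboration)
    return searches + corroboration

# A brand with real links and mentions beats one that only has search volume
print(brand_authority_score(100000, 5000, 800, 20000, 3000))  # strong corroboration
print(brand_authority_score(100000, 10, 1, 30, 5))            # volume alone does little
```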
Brand Authority May Be a Ranking Signal
Note that I deliberately chose a somewhat deceptive example to illustrate that this type of analysis isn’t simple. Far more people search on “Coca Cola” than on “The Coca Cola Company”, and sorting out these types of situations would be one of the many issues that Google would have to address to make this particular signal work.
As with author authority, it would not surprise me at all if Google uses some type of signal like this. The concept makes complete sense. The only question is whether or not they’ve found a good combination of signals similar to the ones that I suggested above that makes it a clean signal.
Example 3: Leverage Search History to Evaluate Content Quality
Of course, Google tries to evaluate this at many levels, but let me focus on how it can be used to measure user engagement. What if you could measure the likelihood that a user will find what they want on your page after entering a particular search query? For reference, I first started writing about this publicly in August 2015.
How could Google measure something like this? Simple: Google has data on all the searches users perform. They can see when you enter a query such as “oil filters”, for example. They can then see the follow-on query. Perhaps this is “oil”, “oil filter wrench”, or “windshield wipers”. Google could then decide that one company’s oil filter page is better than another’s if it addresses more of these types of follow-on needs.
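To show what I mean, here’s a toy sketch of scoring a page by how many observed follow-on needs it covers. The follow-on queries, their weights, and the idea that page topics come from content analysis are all assumptions I’ve made up for the example.

```python
# Hypothetical follow-on queries observed after "oil filters", with the
# fraction of users who searched for each next (numbers invented).
FOLLOW_ON_NEEDS = {
    "oil": 0.35,
    "oil filter wrench": 0.25,
    "windshield wipers": 0.10,
}

def follow_on_coverage(page_topics, follow_on_needs=FOLLOW_ON_NEEDS):
    """Score a page by the share of follow-on needs it already covers.
    In practice `page_topics` would come from content analysis of the page;
    here it is just a set of topic strings for illustration."""
    covered = sum(weight for topic, weight in follow_on_needs.items()
                  if topic in page_topics)
    return covered / sum(follow_on_needs.values())

page_a = {"oil filters", "oil", "oil filter wrench"}  # addresses more follow-on needs
page_b = {"oil filters"}                              # addresses fewer
print(follow_on_coverage(page_a))  # about 0.86
print(follow_on_coverage(page_b))  # 0.0
```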
Google Can Leverage Search History Data: What Does a User Search on Next?
Google could even manually select specific signals for their algo based on controlled tests across eCommerce sites, observing, for example, that users respond better to sites with shopping carts (which helps lower the chances of an affiliate site ranking) or privacy policies that are readily visible on the page.

What’s In It For Me?

These are all just ideas based on free brainstorming on my part, but you can see how Google might find various types of signals that are leading indicators of sites likely to offer better engagement, without actually using something like CTR directly. As you can imagine, Google has tons of brilliant people who brainstorm these ideas for a living. Then they run those ideas through a systematic testing process to find out what works and what doesn’t. These tests are used to evolve the ideas into something that works on a massive scale (the whole web). The process also involves many levels of internal review to make sure it’s truly a good set of signals.
Google also systematically works to make sure that each new signal is subject to confirming signals. That’s why they would never use brand search volume as a signal by itself. As a result, many of these signal ideas are probably combined with other checks involving tried and true signals like links.
For that reason, you need to care deeply about user engagement on your site. Here is a checklist of eight things to think about as you do that:

  1. Are you working on building your brand?
  2. Do you have a recognizable expert working with your business? This has a similar effect to having a recognizable brand.
  3. Review analytics data on major pages to see bounce rate, exit rate, and other engagement metrics (see the sketch after this list). Google may not use them, but that doesn’t mean you can’t.
  4. Check your site search to see what things people aren’t finding. People are leaving clues about what they want for you there.
  5. Review the sites of your competition to see what appears to be working for them.
  6. Perform advanced content comparison analysis to see what deficiencies you may find in your content.
  7. Make your site a sweet experience for smartphone users.
  8. Do everything you can to make your site as fast as possible.

8 Steps For You to Focus On
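For item 3 on the list, here’s a small sketch of how you might compute bounce rate and exit rate per page from your own session data, in case you want to go beyond your analytics tool’s dashboard. The session format is an assumption; your analytics package already reports these numbers, so treat this purely as an illustration of the definitions.

```python
from collections import defaultdict

def engagement_metrics(sessions):
    """Compute per-page bounce rate and exit rate from a list of sessions,
    where each session is an ordered list of page URLs viewed. A bounce is a
    single-page session; an exit is being the last page viewed in a session.
    These mirror standard analytics definitions, for illustration only."""
    entrances = defaultdict(int)   # sessions that started on the page
    bounces = defaultdict(int)     # single-page sessions that started on the page
    pageviews = defaultdict(int)
    exits = defaultdict(int)       # sessions that ended on the page

    for session in sessions:
        if not session:
            continue
        entrances[session[0]] += 1
        if len(session) == 1:
            bounces[session[0]] += 1
        exits[session[-1]] += 1
        for page in session:
            pageviews[page] += 1

    report = {}
    for page in pageviews:
        report[page] = {
            "bounce_rate": bounces[page] / entrances[page] if entrances[page] else 0.0,
            "exit_rate": exits[page] / pageviews[page],
        }
    return report

sessions = [["/oil-filters"], ["/oil-filters", "/cart"], ["/blog", "/oil-filters"]]
print(engagement_metrics(sessions)["/oil-filters"])
# {'bounce_rate': 0.5, 'exit_rate': 0.666...}
```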
As you do each of these things, consider their impact on the percentage of users likely to be satisfied with what they see on your site. Rand Fishkin has a great phrase for this: “task completion”. Do you help the visitors to your pages complete the task they had in mind when they got there? You can see more about how Rand got to this conclusion here.

BONUS: Google’s Gary Illyes Discusses CTR & Ranking with Eric Enge

Wrapping Up

So here’s where I am with all this:

  1. Google cares a lot about overall user engagement with the results they show in the SERPs.
  2. Google does not use CTR as a direct core ranking signal.
  3. As uncovered by Rand, there does seem to be something related to freshness that can be driven by CTR.
  4. Google does use CTR in testing of search quality, and in testing of proposed new algos.
  5. This will lead Google to try to define easily measurable factors that correlate strongly with overall user engagement.
  6. As a publisher, whether the signal is direct or indirect shouldn’t really make that much of a difference to you.
  7. Go back and read point 1 in this list. If user engagement matters a ton to Google, then it should matter to you as well.

Thoughts on “Why CTR Is(n’t) a Ranking Factor”

  1. Hi Eric. Nice article. My $0.02 here:
    If CTR and task completion are the arbiters of Google “quality tests”, then it impacts rank. Meaning, if Google is testing your page for relevancy to a particular keyword, and you want that test to go your way, you better get great CTR and task completion rates. Otherwise, you’ll fail the quality test and someone else will get chosen. I’m not saying CTR is a “core algo” thing. Instead, I’m asking: who cares! If it impacts rank (which I believe it does), then it matters. (And even if it doesn’t, you should still care!)
    Additionally, one job of SEOs is to be a bit ahead of the curve. People who invested in quality content before Panda/Penguin benefited greatly in the post-Panda/Penguin era. As Google perfects their new machine learning algos that use user engagement to score tests, it’s not hard to believe that they would expand the use of those algos to other areas, like spam detection, video search, and yes, even the core search algo. If it works, why wouldn’t they use it? They have nearly two decades of experience combating click fraud in AdWords (a $50 billion business where advertisers and publishers have motive to commit click fraud), and the learnings could easily be applied to organic search as well.
    Finally, the only thing missing here is data to back up your hypothesis. I realize that what you’re saying here is pretty aligned with how Google says it works on their official hangouts, etc., but they don’t always give away every detail (nor is it their job to do so). They sometimes omit stuff. I personally believe (based on my own research) there is more going on than what they are publicly saying, and published my data here: https://moz.com/blog/does-organic-ctr-impact-seo-rankings-new-data
    Anyway, while we may differ slightly around the semantics, I’m glad we are both in agreement with your #1 point: Google cares a lot about overall user engagement with the results they show in the SERPs. To this, I couldn’t agree more.
    Larry

  2. Hi Larry – thanks for the detailed comment.
    We do disagree on a few key points, but let’s start with the main one.
    From my perspective, engagement metrics such as CTR are an indirect ranking factor. I say that in the above article. You believe it’s a direct ranking factor, and in your comment above you note that I’m lacking data to back up my hypothesis. I think what you’re missing is that the data in YOUR article could be used to prove my hypothesis. I do believe that Google is trying to use signals that will result in higher levels of engagement in their SERPs, and the expected outcome of that would be that higher-CTR, longer-read-time content will tend to rank higher.
    As you also note in your article, tests that used bots to jack up CTR failed to demonstrate direct causation of CTR as a ranking factor. So we must agree that Google is doing something to mask the fact that a straight CTR signal is a highly game-able signal. Your position is that it’s a causal factor. I’m choosing to take Paul Haahr at his word when he says it’s an indirect (and therefore a correlating) factor. It’s OK that we differ on this point, but I just want to be clear on what that difference is.
    The other thing that’s important to note is that, based on my dialogues with numerous Googlers, RankBrain is an algo designed to focus on improving their understanding of search queries. That’s an amazingly complex piece of machine learning in its own right. They definitely will use machine learning algorithms to impact other areas of search, but those won’t be RankBrain. You don’t simply take a machine learning algo like one that dramatically improves language parsing, and extend that same algo to cover other completely different aspects of search.
    That would be a bit like a search marketer taking an approach they use to optimize paid search and then using that as a starting place for organic SEO. They are just radically different. Those will be other machine learning algorithms, which will probably have other names.
    As you note though, we do agree on the main point, which is the importance of engagement in the SERPs. Google is doing everything they can to maximize that. That should still set a clear path for people as to what they need to do with their sites.
    Eric

  3. To me, it’s totally understandable that CTR isn’t a ranking factor. It’s something like wetting your finger and putting it up in the air to find wind direction, then eating your finger. CTR shouldn’t be a ranking factor because it begins to feed upon itself if it is a factor – it becomes essentially self-verifying.

  4. CTR should be one of the factors that influence ranking. As far as I know, Google calculates organic clicks separately and derives CTR from them, and it’s quite logical to make it one of the factors.

  5. Joey, I’m left wondering if you read the article before commenting. Eric gives some very clear arguments (and statements from Google itself) on why CTR is likely NOT a direct ranking factor (and that word “direct” is very important here). Can you read over Eric’s statements and then respond to them specifically, rather than speculating on why you think it “should” be a factor? Thanks!

  6. I have seen an 11th-ranked result move up to the first page through continuous clicking over a longer period of time by more people (no fresh content). I think that the click pattern of -> click, wait 1-3 minutes -> click further on the same page, wait 1-3 minutes, and click again (never going back [to Google]) influences the result. This is not 500 people clicking me in a day. This is people clicking me 500 times over a longer period of time. BUT, this is not university research, it is an observation, and I might be wrong, even though I saw it working on the test objects.

  7. Michael, I’d direct you first to the “What About Rand Fishkin’s Tests” section of this article. In the past, and under more controlled and broader circumstances than your one-off test, we were sometimes able to cause a local ranking to rise by having many people click on it in a short time. However, it didn’t work every time, and the results were always very temporary. Within a few days the result had fallen back to where it was. More recently, we have not been able to make similar tests work at all, which may indicate that Google has tightened this up because they knew about these experiments (Googlers were present in the audience when Rand ran these tests).
    So at best, this worked haphazardly and temporarily, and may not even work as much at all anymore. Not something I’d want to create a strategy around.

  8. It is much too easy a game to cheat CTR. If I were Google, I wouldn’t concentrate on factors like SERP CTR as a direct factor.
    And so often I check result 1, then result 2, just to see more than one result. How often I think result 1 was better... but I never returned.
    I like the trending idea.

  9. I really can’t believe in a thing like “pogosticking”.
    This is great and true: “you would see ads from companies promoting easy ranking gains for you, and you’d be getting daily emails about it, but, you’re not”. But I saw a lot of sites buying traffic via Google, including stay time and a lot of pageviews. They didn’t rank better, but for a European site written in German, they had a lot of organic traffic from Brazil that stayed on the site.

  10. The problem with discounting the Fishkin test is that the clicks were done in a very short burst of time, resulting in SERPs being affected for only a very short period of time.
    Nobody has ever done a test (and how could they?) of what would happen if hundreds of clicks from completely different real users (not easy to spot bots) on completely different IP addresses occurred every single day for an extended period of time, which is exactly what would happen if you had an impossible to resist title tag and meta description.
    People assume that Rand’s test resulted in short-lived rankings because it was a Google “freshness” thing. They refuse to consider the possibility that if Rand had done this test every single day for three years with a completely different group of people each time, the rise in search position might also have been maintained for three years. Until such a test is done over a long period of time with real people, nobody can say what the results of Rand’s test really mean.

  11. @Larry: Your site is displayed in the featured snippet (aka position zero) for many queries (for example, “how much does adwords cost”, and many other queries on which the Google article is not your competitor). If you look into your Google Webmaster Tools, you will see that for many impressions you’re getting clicks. If you do simple math, then you will know what percentage of clicks you’re getting from Google for a particular query. And still a few of your pages don’t rank in first position. Why? You’re getting more than 40% CTR on your featured snippet, but you’re still not ranking in first position. Those clicks are real, and could pass the kind of click-fraud checks that AdWords uses.
    “Google uses CTR with user intent.” For some queries Google may return page A in first position, but when a large number of people click back and go for the second page, that is when Google uses CTR. But when both pages talk about the same thing, then CTR does not play a role.

  12. CTR has to be a ranking factor since there is always a brand in a niche people know. And they always look for it down the page & click.
    But this is just marginal. People should just focus on getting more relevant links. Just my 2 cents.
    – Max

  13. I agree with you, Eric. CTR can be pushed much too easily through bots, or it simply doesn’t say enough about a website’s quality for it to be used as a direct ranking signal. That Google uses it as an indirect ranking signal to test and improve other factors is probably the most likely explanation. With all the optimization, it should not be forgotten that websites are all about providing a good user experience, and it is always good when Google takes a step in this direction.
    -Jessica

  14. I can confirm what Eric says. For my site, I have had an average position of 2.1 with a very low CTR of 0.5% for 3 months, and the position has always been maintained.
    So Google didn’t use CTR to re-evaluate the ranking position.

  15. Amazing article!
    I agree with many others in this comment section that CTR as a factor is too easy to manipulate.

  16. I agree that CTR by itself is not a ranking factor. Google cannot calculate the click-through rate of another website, like Twitter: how many people read the tweet, and how many followed through to click on the links, in Rand Fishkin’s case? But what I do accept is that an increase in traffic and increased positive engagement with your website together will deliver improved rankings. Let’s put it another way: if all the people who visited Rand’s website had immediately bounced back to Twitter (negative engagement), would he still have got a positive rankings boost? I don’t think so! It was the positive engagement that delivered the results, in my opinion.

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.

Eric Enge

Eric Enge is part of the Digital Marketing practice at Perficient. He designs studies and produces industry-related research to help prove, debunk, or evolve assumptions about digital marketing practices and their value. Eric is a writer, blogger, researcher, teacher, and keynote speaker and panelist at major industry conferences. Partnering with several other experts, Eric served as the lead author of The Art of SEO.
