

Pubcon Keynote with Gary Illyes and Eric Enge


RankBrain, CTR, Penguin 4.0, and More

Earlier this year at Pubcon Las Vegas I had the opportunity to do a keynote together with Google’s Gary Illyes: The Iron Man Breakfast Keynote Interview: Gary Illyes of Google and Eric Enge. In this post we present the video as the fourth in our series of Virtual Keynotes, along with my commentary on what I consider the key takeaways.

Here’s the video of the complete keynote. My insights follow below the video, along with a complete transcript of the session.

RankBrain and Machine Learning at Google

I started the discussion by talking about RankBrain, and Gary shared some perspectives I had not heard from him before. Historically, Google has talked about RankBrain as being focused on improving the handling of long-tail queries. Gary continued to reiterate that theme, but here is what was new:

  1. RankBrain is part of the core algorithm, which means it is always running. However, it has a material impact only on certain types of queries, especially those that Google has never seen before. Note that roughly 15% of the queries Google receives each day are ones it has never seen before.

“RankBrain is part of Google’s core algo, but only impacts certain queries.” – Gary Illyes (@methode)

Mark Traphagen, Gary Illyes & Eric Enge at Pubcon 2016

  2. You may recall that the Googler quoted in the original Bloomberg article, “Google Turning Its Lucrative Web Search Over to AI Machines,” said that RankBrain was the third largest ranking factor. It appears this was a reflection of the number of queries that RankBrain impacts. Now, remember that the great majority of queries are long tail in nature. That means RankBrain probably isn’t impacting the rankings for far more popular head queries such as “windows” or “bruno mars”.
  • Now here is the most interesting part in my view. Gary also indicated that RankBrain makes its decisions by evaluating the historical performance data for queries that it judges to be very similar (in machine-learning speak, this is determined by seeing how close a given query is to historical queries in high-dimensional vector space). Google can use the historical performance of these other queries to adjust the ranking results for the new long-tail query as it comes in.
  • I asked Gary to weigh in on claims that RankBrain is driving other parts of their algorithm, and he reiterated that it does not change those algorithms. So the link-related algorithms—Penguin, Panda, and other algos—are completely unchanged by RankBrain.
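
To make the vector-space idea concrete, here is a toy sketch of query similarity. The queries and their embeddings below are entirely made up for illustration; production systems learn high-dimensional embeddings from data rather than using hand-picked numbers:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings for three historical queries.
queries = {
    "best running shoes": [0.9, 0.1, 0.3, 0.0],
    "top sneakers for jogging": [0.85, 0.15, 0.35, 0.05],
    "mortgage refinance rates": [0.0, 0.9, 0.1, 0.8],
}

# An unseen long-tail query, embedded into the same space.
new_query = [0.88, 0.12, 0.32, 0.02]

# Find the closest historical query; its performance data could then
# inform how results for the new query are ranked.
best = max(queries, key=lambda q: cosine_similarity(queries[q], new_query))
print(best)  # -> "best running shoes"
```

The point of the sketch is only that "similar" is measured geometrically: queries whose vectors point in nearly the same direction are treated as near-duplicates of each other, even if they share no words.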

“RankBrain does not affect link-based algos such as Penguin.” – Gary Illyes (@methode) at #pubcon

  3. We also spoke at length about machine learning (ML) in general. We agreed that one myth out there is that ML algorithms are inherently better than human-generated ones. This isn’t true. For one, ML often does not work well in scenarios with sparse data. ML algorithms are also opaque, meaning it’s hard to be clear on exactly what they are doing, which makes maintenance and testing that much harder.

“Machine learning algorithms are not inherently better than human-controlled ones.” – Gary Illyes at #Pubcon

For more on this topic, see my post “RankBrain Myth Busting.”

User Engagement and CTR as a Ranking Factor

In the next section of the keynote, Gary and I talked about how Google might leverage user engagement signals, such as click-through rate (CTR), as ranking signals. At SMX West in 2016, ranking engineer Paul Haahr stated that Google uses these signals as an indirect ranking factor. Basically, what that means is that you have a feedback loop that looks something like this:

(Diagram: Google CTR ranking algorithm test flow)
Over time, this type of feedback loop will result in the pages that get the highest engagement (including the highest CTR) moving up in the search results. The subtlety in what Haahr said is that Google is evidently not measuring engagement signals directly. Instead, they are tuning their use of other signals so that the pages with higher engagement are moved up to the top of the rankings.
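
A crude way to picture that indirect loop is a tuning iteration over signal weights: try a variation, measure engagement, and keep whatever users responded to better. Everything here is an invented stand-in (the weights, the engagement function, the "ideal" target), not anything Google has disclosed:

```python
import random

random.seed(0)

def engagement(weights):
    """Stand-in for measured engagement (CTR etc.) of the rankings a
    given weighting of signals produces. In reality this would come
    from live-traffic experiments, not a formula."""
    target = [0.6, 0.3, 0.1]  # hypothetical weighting users respond to best
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

weights = [0.33, 0.33, 0.34]  # starting mix of three ranking signals
for _ in range(200):
    # Propose a small random tweak to the signal weights.
    candidate = [max(0.0, w + random.uniform(-0.05, 0.05)) for w in weights]
    if engagement(candidate) > engagement(weights):
        weights = candidate  # keep the tuning users engaged with more

print([round(w, 2) for w in weights])
```

Note that engagement is never a per-page ranking input here; it only steers which version of the weighting survives, which is the distinction Haahr was drawing.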

“Google uses engagement signals like CTR to fine-tune its algo, but not as direct ranking signals.”

Many parties have challenged this assertion by Google, and for that reason, I asked Gary about his thoughts on the matter. Here is what he shared:

  1. User signals, like CTR, tend to be very noisy on the open web, and Google doesn’t find them to be a reliable signal.
  2. In a controlled environment, they work pretty well, and Google does use them in this manner. (For the rest of this point, I’m extrapolating from Gary’s comments a bit.) The way this works is through sampled tests Google runs to evaluate search quality (Gary suggested they might sample 1% of users). Based on the results of this testing, they evaluate the quality of their core algorithms and, depending on the results, may adjust those algorithms’ factors and re-evaluate.

Executing this type of ongoing QC/QA process will indeed result in higher-CTR pages drifting up in the overall SERPs.

  3. The main issue with using engagement signals such as CTR as a direct factor is that the sporadic nature of CTR would likely cause some wild movements in the SERPs at times, which isn’t necessarily desirable (once again, I’m extrapolating a bit).
  4. In a controlled testing environment, Google can recognize bad data sets and simply discard them, giving it much better control over the result.
  5. CTR is one of the things looked at in this manner, but there are other factors as well.
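
The 1%-of-users sampling Gary alluded to can be sketched with deterministic hash bucketing, a common way to run such experiments. This is a generic illustration of the technique, not Google’s actual mechanism:

```python
import hashlib

def in_experiment(user_id: str, percent: int = 1) -> bool:
    """Deterministically bucket ~percent% of users into an experiment.
    The bucket is stable across queries, and users have no way of
    knowing whether they are in the sampled group."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Over a large population, close to 1% of users land in the experiment.
sampled = sum(in_experiment(f"user-{i}") for i in range(100_000))
print(sampled)  # close to 1,000 of 100,000 users
```

Because the bucketing is a pure function of the user ID, the same user sees the experimental ranking consistently, which makes engagement comparisons between the control and experiment groups meaningful.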

For more on this topic, see my post “Why CTR Is(n’t) a Ranking Factor.”

Penguin 4.0

I started by asking Gary what Penguin 4.0 does, and here are the main points he shared:

  1. Penguin is now considered part of the core Google algorithm. Among other things, this means Google won’t be announcing updates to it anymore.
  2. When Penguin finds links it judges to have no value, it now simply discounts those links; it no longer applies ranking demotions.
  3. For that reason, its impact is granular: it can act on links to any part of your site. As a result, if you have one page with many bad links, and all of those are discounted, only the rankings for that page will be impacted.
  4. It’s “real time,” meaning that Penguin acts on links as it finds them.
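
A toy model of the difference between discounting and demotion: discounting simply drops the value of bad links when scoring a page, rather than penalizing the page or site. All the domains and link values below are invented for illustration:

```python
def page_score(links, spam_sources):
    """Sum link values, silently dropping links from sources judged
    spammy (discounting) instead of penalizing the page (demotion)."""
    return sum(value for source, value in links if source not in spam_sources)

spam_sources = {"linkfarm.example", "paidlinks.example"}

pages = {
    "/good-page": [("news.example", 5.0), ("blog.example", 1.0)],
    "/spammed-page": [("linkfarm.example", 3.0), ("paidlinks.example", 2.0),
                      ("blog.example", 1.0)],
}

scores = {url: page_score(links, spam_sources) for url, links in pages.items()}
print(scores)  # only /spammed-page loses value; /good-page is untouched
```

This is why the new Penguin is granular: the bad links on one page simply stop counting for that page, with no site-wide ranking adjustment.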

Gary also referred to Penguin as one of the “nicest” webspam algos they have (because it only discounts the links). I asked him about whether or not the new Penguin would spell the end of negative SEO.

Gary said that he has never seen any real examples of negative SEO. In every case that the Google team has examined where negative SEO was claimed, they have always found an alternative explanation. This position was repeated by Google’s Nathan Johns in a later panel that I moderated.

“We’ve never seen a verified instance of negative SEO.” – Gary Illyes of Google

I added my thoughts on it as well. While I could believe that negative SEO was possible in the past, it just never made any sense to me. Why would you spend your money to lower ONE competitor’s ranking, rather than spending your money to lift your own rankings compared to ALL of your competitors?

It just seems counter-intuitive to me. In addition, you face the risk that the “bad links” you acquire actually end up helping your competitor.

Transcript of Video

Eric: Awesome. All right, so we’re gonna jump right in. And one of the things I want to talk about, because everybody loves to hear about it, is RankBrain.

Gary: That’s great. Okay.

Eric: What is it? All right, I’ll ask the question differently, what does it do?

Gary: RankBrain is a machine learning algorithm, deep machine learning algorithm, basically layered neural networks, and is trying to predict what would be a better result from a given result set based on historical data. It looks at a bunch of signals but essentially it adjusts ranking based on the query. It works really, really well with queries that we’ve never seen before for most…

Eric: So long-tail queries.

Gary: Yeah, long-tail queries. We typically use, in ranking, data collected for specific queries to decide how to rank results for a specific query. But if the query is new, then we don’t have that data yet and that’s where RankBrain excels. It can make predictions based on similar queries from the past, how to rank results.

Eric: So is that because it’s doing language analysis? Is it doing historical query analysis? What’s the kind of stuff that’s…

Gary: I think you’re going too deep here. It’s doing many things.

Eric: Many things?

Gary: Yeah.

Eric: Well, can we get a list?

Gary: You could. Although no, you can’t.

Eric: It was originally discussed like it was primarily focused on query analysis. That’s what I thought the original indications that people gave about what it did. But it sounds like there may be some other elements to it as well.

Gary: It has multiple elements…or looking at a query and trying to predict what…or looking at a query and similar queries from the past is the main thing that it’s doing.

Eric: So if we were to geek out on that for a moment, it’s recognizing similarity of queries based on high-dimensional vector space analysis and the proximity of one query to another. You all got that, right? So it hasn’t taken over the entire algorithm?

Gary: No. I can probably say that it will never take over the whole algorithm or the core algorithm. It’s one of the components that we have in ranking and it’s really just one of the hundreds of signals that we have.

Eric: So it’s not involved in links or local search or spam analysis?

Gary: No. They may use their own machine learning algorithms but RankBrain specifically is only focusing on re-ranking based on query similarities.

Eric: Right, okay. And the algorithm has been very successful for you?

Gary: Yeah. It quickly became the third most important algorithm that we have in ranking.

Eric: That’s great. Third most, that’s like a big number.

Gary: Yes. It is a big number. That’s because we have tons of long-tail queries.

Eric: I’m sorry.

Gary: We have tons of long-tail queries. So it’s quite easy to become the third most important when you are working on long-tail queries.

Eric: So this relates to something I heard a long time ago, which is that a very large percentage of queries that Google gets every day have never been seen before. And at one point, I forget who it was, somebody said it was as high as 20% or 25%. Is that coming down over time or…

Gary: I don’t know the exact number now.

Eric: But, okay, so if it’s responsible for something like 20% of the queries are very long tail or first time ever seen, that actually would be a lot of queries that RankBrain is actually acting on potentially?

Gary: Yeah.

Eric: I also had a conversation with a Google spokesperson that I can’t name, it wasn’t you. And what he told me is that it isn’t that RankBrain is only applied to certain queries, it’s actually present all the time. It just only has an impact on certain queries, isn’t that right?

Gary: Yeah. So if you think about it, RankBrain is part of the core algorithm. The core algorithm works on every single query and then the many components that we have in the core algorithm, we’ll try to make a decision whether they should do something or not. RankBrain is one of those. It looks at the query and it decides that, “Okay, I have nothing to do here. It’s already ranked pretty well, so I will not do anything.” And in long-tail cases, it will make adjustments to the result set, the already ranked result set.

Eric: Why did you end up calling it RankBrain? Because that makes it sound like it’s the whole algorithm.

Gary: I have no idea. I have absolutely no idea. Google Brain is the machine learning algorithm or framework that we typically use inside Google, and since it’s ranking results like RankBrain, BrainRank would sound weird, so…

Eric: BrainRank.

Gary: That sounds weird.

Eric: Yeah. That’s a tweetable moment there, BrainRank as your new hashtag. So let’s talk about machine learning a little more generally. It’s obviously a very big initiative in many ways at Google. JG [editor: John Giannandrea], because that’s the only way I can pronounce his name…

Gary: That’s how we pronounce it too.

Eric: …took over as head of search and he’s a big proponent of machine learning. Jeff Dean, one of your most senior computer scientists, is pushing machine learning. Sundar, the CEO, has talked about it. So can you tell us more about what Google is doing with machine learning generally without oversharing? Actually, you can overshare if you want.

Gary: Machine learning is a tool. It’s like having a Swiss Army knife: you can use it for many things. It’s not appropriate for everything, but you can use it in many cases. Just like with a Swiss Army knife, you probably don’t want to try to punch a hole in concrete, because it will not work; the same way, RankBrain will not work in certain scenarios. But in other scenarios, it will really make your life much easier. So we are using it where we can. Obviously, we are not trying to deprecate all our manual algorithms. That doesn’t make sense for us. Basically, if it’s working well, then we prefer not touching an algorithm. But we are experimenting with what else we can do in ranking and in other products in general with machine learning.

Eric: People hear about machine learning and they assume that it’s in general just going to be better than any human-generated algorithm.

Gary: No.

Eric: But that’s not true.

Gary: That’s not true.

Eric: So what are some of the problems with machine learning algorithms? What are reasons why you wouldn’t use them?

Gary: Scarce data, for example. If your training data for a machine learning algorithm is very scarce, then you will get really weird results. We have this problem all the time where either we picked out the wrong training set, wrong data, and then the results just look odd. Or we just don’t have enough data to actually start using machine learning for something.

And then you have to step back and you have to ask yourself, is machine learning the right approach to this problem that you want to solve or a manual algorithm would do much better? And very often, a manual algorithm would do much better just because you don’t have enough data or your data is just way too noisy even for machine learning algorithm.

Eric: So in many cases then, human-generated algorithms are better than machine algorithms?

Gary: Yeah, absolutely.

Eric: And I think one of the reasons that I’ve heard people talk about for that is, when it’s a human-generated algorithm, the people who work on it have an intuition as to what it’s doing and how it’s working and that’s harder with machine algorithm.

Gary: Yeah. Especially with deep neural networks, when you have multiple layers of neural networks, it’s very, I guess, close to impossible to track back the decision flow of the algorithm. Let’s say that you already trained an algorithm and you have your neural network. In every layer, you will have tons of neurons that are basically making decisions, based on the decision in the next layer. It’s the question, let’s say, is routed to a different neuron.

And if you have multiple layers, it’s close to impossible to track back why a specific decision was made. We have tools to debug some of these issues but generally, that works by simplifying the neural network, basically creating a single layer, linear decision flow. And then you can track back what happens inside, or well, you can make a good assumption about what’s happening inside the deep network.

Eric: So if I were to summarize that, then one of the issues is that when you are doing machine learning, you can’t necessarily or you don’t necessarily have the right data set to train the algorithm well. So that would be one reason for not using it.

Another reason might be that your human-generated algorithm is working really, really well, and since you understand it really well, there is no need to replace it with something that’s actually harder to understand.

But you are making some really big investments in technology. I think I read in one article in backchannel.com about something called Tensor Processing Units or something like that but it’s basically to help you try to see inside machine learning algorithm and have more insight to what it does.

Gary: I guess, yeah. I haven’t read the article. And inside Google, I don’t think the code name is Tensor, so I don’t actually…

Eric: Okay. This is what they said in this article. So let’s talk about user engagement and search. It’s a favorite topic, right?

Gary: Yeah, mainly because it comes up pretty much at every conference where I do Q&As, but sure.

Eric: So at SMX West earlier this year, Paul Haahr talked about some ways that you actually do use user engagement measurements, including click-through rate, to help with search quality. But just for clarity for everybody, the way that was described was not as a direct ranking factor but rather as a way of doing QA on the other ranking factors to see whether or not they were doing a good job of surfacing the best results. Is that a fair summary? And you’re gonna elaborate a lot more about what you…

Gary: Yeah. Well, you already answered. So it’s an interesting topic and probably that’s why it comes up every now and then, every single freaking time. But if you think about it, clicks, in general, are incredibly noisy. People do weird things on the search result pages. They click around like crazy, and in general, it’s really, really hard to clean up that data. When you have a controlled environment, for example, when you are doing QA for another ranking component, then you are in control of the users whose clicks you are looking at and they don’t know about that, that you are looking at their clicks.

We also don’t know which user it is. We don’t specifically target Eric Enge, for example. We just don’t like you, sorry. But you have, let’s say, 1% of the users. You don’t know which users, and they don’t know they are in the experiment. And then you look at their click patterns. Is the old algorithm doing better clicks-wise than the modified version? If it does, then yeah, let’s launch with it. If it doesn’t, then you have to either step back and ask yourself whether you want to do this, or you have to just go back to the drawing board.

Eric: Right. And you’re probably looking at more than just click-through rate. You’re looking at other things that are indications of user satisfaction.

Gary: There are tons of things that we are looking at, but clicks are one of the most important. We also use them, for example, for personalization. Basically, if you search for Amazon and you typically click on results with information about the Amazon region instead of the online shopping site, then slowly we will learn that when you search for Amazon, we will push up results about the rainforest or the river instead of the shopping site. And that’s because if you, as a user, want to mess up your search results, sure, go ahead. If you don’t like the experience, then… But yeah, in general, clicks are really, really noisy.
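
[Editor’s note: Gary’s Amazon example amounts to click-driven personalization. A minimal sketch of the idea, with entirely invented data, and not a description of Google’s actual system:]

```python
from collections import Counter

# Hypothetical click history for one user on results for "amazon".
clicks = Counter({"amazon-rainforest.example": 7, "amazon-shop.example": 1})

# The default (non-personalized) result order.
results = ["amazon-shop.example", "amazon-rainforest.example",
           "amazon-river.example"]

# Rerank: hosts this user clicks often move up; ties keep their
# original position.
personalized = sorted(results,
                      key=lambda r: (-clicks[r], results.index(r)))
print(personalized)  # rainforest result now ranks first for this user
```

The same noise problem Gary describes applies here too: one user’s handful of clicks is a far cleaner signal for personalizing that user’s results than aggregate clicks are for ranking the open web.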

Eric: Right, so I’ll just try my summary of that one as well. So it doesn’t make sense to use as a direct ranking signal because it’s way too noisy and you can’t control the environment. But you can use it as a quality control on other ranking signals to see whether you’re getting better user engagement. And if you get a dirty sample and you’re trying to sample test that, you can discard that until you find a really clean set of test data, something like that?

Gary: Sounds fair. You’re getting better at summarizing me.

Eric: You know, just trying to help out here. So let’s talk about Penguin 4.0, when’s it coming out? I was so expecting that that was gonna be the question, by the way, that it was gonna be when it was coming out but it actually came out. So it finally came out.

Gary: Do you know what was the first question I got after I posted the post? The very first question: “So when is the next iteration launching?” That was literally the first question.

Eric: So when is the next iteration…sorry. So tell us about Penguin 4.0, what did it do? What are the things that changed with the release of this algorithm?

Gary: There are two, yeah, let’s say two, major things. One is that, it’s real time and the second thing is…no, three. Three major things. First, it’s real-time. Basically, it can discount…or incoming spam to your site in real time as we re-crawl and reindex the page. And it’s also more granular. It can work on more…I mean it can work on pages or sections of site instead of demoting entire sites for that incoming spam. These are the three things. These are the most important things and, I think, the most exciting. And I really believe that this was one of the nicest web spam launches that we had. (Mark Traphagen walks on stage and gives Gary some coffee) This is Mark. This is my favorite person this morning.

Mark: My name is Mark, I’ll be your server today. If there’s anything we can get you, just let us know.

Eric: He didn’t plan that, trust me. So how awesome is that? See? He’s gonna be awake and alert now. We should’ve done this in the very beginning.

Eric: So it discounts links rather than penalizing you for them, yes?

Gary: Yes.

Eric: So that’s a big shift. I would think that have a real impact on, well, what do they call it, negative SEO, right?

Gary: So the thing about negative SEO is that, to this date, I haven’t seen a single…well, not just me, but also the ranking team hasn’t seen a single case where it was really negative SEO. It was more about clients not revealing details to the SEO who was doing the cleanup, for example. I know about at least three cases where someone approached me and they were like, “No, this is negative SEO, this is negative SEO.” And we asked them, after looking at our data, “Can you go back to the client and just try to force out all the information about how all these links got there?” And in these cases, it was the client doing something weird, basically posting on…well, not posting, but creating, for example, empty forum profiles with just a link to the site, and stuff like that. We did spend quite a few hours with the ranking team looking at cases that I heard of, or we heard of, and to this day we haven’t seen a single case where it was real.

Eric: But even if you were worried about negative SEO, they should reduce that worry because you’re discounting the links rather than penalizing. So that’s a good step. And then you referred to it as real time, and if I’m not mistaken, what real time means, in this case, is as you discover links during your crawl that you think are not worth anything, you make the decision in real time at the moment of discovery that it’s not worth counting. So it’s not like…real time doesn’t mean you instantly reprocess the entire web. It’s sort of as you do your ongoing crawls.

Gary: Yeah. That’s correct.

Eric: Yeah, okay. So if somebody was hit by a Penguin in the past, by now, they should have recovered because it’s been out for a few weeks.

Gary: Or have all their links discounted.

Eric: Well, the links are discounted. You can’t change that. So that’s not really a penalty but it’s actually…

Gary: I think one aspect that people don’t realize or one thing that people don’t realize is that, if you had only bad links, then Penguin or this Penguin release will not help you. Yeah, will not help you. The fact that we removed the old penalties from Penguin 3, I think, we are not counting like…we don’t have version numbers but I think it was 3. Basically, the Penguin 3 was still demoting.

Recently, we removed those demotions, and if the site’s rankings didn’t change, then that means Penguin 4 pretty much discounted all the links, and then you have to look at what links you have and go through the standard disavow and link-removal process, or…no, actually, that’s what you have to do.

Eric: So is there an aspect of Penguin 4.0 that if you…that’s what the industry calls it, by the way, in case you didn’t know. And so is there an aspect of Penguin today where if it sees a site that has a particularly large set of poor-quality links, that might raise a flag to the manual spam review team?

Gary: So I don’t work on the manual actions team. As far as I know, there is no such flag. Basically, we, or they, can see which links were affected by Penguin, and they might want to take a deeper look at the site’s link profile if they see that the vast majority of the links are bad. But generally, I don’t think we have any mechanism that would just raise a flag for them.

Eric: But if you have a lot of bad links to your site, or you think you have bad links, should you proactively go clean them up and disavow, or not worry about it if it isn’t because you did something wrong?

Gary: So I’m tempted to say that you don’t have to worry about it but people will worry about it anyway. I think it’s a good habit to every now and then take a look at your links and just see like where your links are coming from. And if you don’t like something, then disavow it. I think that’s good practice and just remember that a disavow tool is really, really powerful. So try not to disavow links that could be good or pay lots of attention at what are you disavowing because you can easily hurt your site with a disavow tool.

Eric: There used to be classic advice out there which was, gosh, what you ought to do is take your links and actually try to get them removed from the site where they were and…

Gary: It’s still the best advice.

Eric: …this is a huge amount of email to go out to people. Is that still something that makes sense?

Gary: Yeah. I mean that’s still the best advice but I think we realized that that’s often not feasible and that’s why the disavow tool was created. But I think the manual actions team’s recommendation is still to first go out and take down the links that you can and then use a disavow tool.

Eric: So if you have a manual penalty, then it actually makes sense to try to get those links actually removed, because the spam team that reviews this will appreciate the effort. So that’s one of those hidden parts of trying to recover from a manual link penalty. Remember, you’re sending your reconsideration request to human beings, and you want to make them feel enthusiastic about you.

Gary: Just to make sure, manual actions or reconsideration requests will never ever help with the Penguin.

Eric: Correct. Yeah. Agreed.

Gary: Okay.

Eric: Awesome. So I think it’s time to do some more questions. Mark, where are you? Okay, Mr. Mark Traphagen.

Gary: He’s the coffee guy.

Eric: Yes, the coffee guy. All right.

Mark: Is there anything else I can get you, sir? We want your stay here at Pubcon to be…

Eric: First question, yes.

Mark: So we will take questions here live from the audience. We also have some questions we’ve been accumulating from Twitter over the last couple of days that I’ll bring in as requested. But the first one comes from Twitter and it’s from a, let me see this to be sure, Tark Maphagen, who asked…

Why are the Perficient Digital “Here’s Why” videos so awesome? Oh, wait a minute. I think it’s a spammer. I’m sorry. We’ll go on here. Let’s start with a question from someone real on Twitter and let me get those open.

Gary: Are you sure the person is real?

Mark: Someone real on Twitter, that just goes without saying, right? All right, here we go. This kind of relates to some of what you chatted about, clicks and click-through rate and things like that. So Grant Simmons wants to know, in general, how does Google assess user engagement? Which implies that you do. I think you gave some answers to that. Are there any other ways that user engagement is important to you?

Gary: Comments, and I guess activity on forums. Those are user engagement in a way. Rich snippets, those are also based on user engagement, typically, or in the vast majority of cases at least. Comments are important for us in assessing whether a site is of quality, pretty much.

Eric: So the user is putting in comments that…

Gary: Yeah, users are interacting with the site.

Eric: A little bit more than just, “Nice job,” but actually a meaningful comment probably, right?

Gary: Yeah.

Mark: Some level of real engagement.

Mark: Okay, let’s get some questions in the house. I’m sure there are some. Just raise your hand high if you have a question you’d like to ask Gary. There’s got to be some questions here. Let me throw another one from Twitter while we’re waiting. Let’s see. This is from Scott Clark. He says, “If you were designing a one-page SEO audit with five categories of issues, what would they be?” What’s one of the top five issues that you’d be looking for?

Gary: That’s all? Wow. Every week, I send out at least 20 emails to sites that, for whatever reason, put a noindex on their most important page. That would be one thing. Basically, technical SEO would be the first point. Review what you are doing with your robots.txt file and whether you have other robots directives on your important pages, at least. Go to Webmaster search console…

Mark: It’s a new product, Webmaster search console. Look for it today.

Gary: Or bring up Webmaster Tools to see if you have…that is Webmaster Tools, right? And see if you have server errors, or whether pages that you actually care about are returning 404s. First, do a technical SEO audit, and then go after quality. We do care a lot about the quality of the content and the site in general. So that would be the next step. Try to read some of your content out loud, for example. See if it reads naturally. If it doesn’t read naturally, then you have a problem. What else? Why did he ask for five?

Mark: I know he’s gonna be very unhappy if he doesn’t get five.

Gary: Look at your links. It’s a good exercise to look at your links and just see like who’s linking to you and why are they linking to you. Keep an eye on the future. Look at what are the next things that major players on the Internet are looking at, stuff like instant articles and progressive web apps. Try to play with the idea, what would be if you had any of those? What could you do? You don’t have to rush into any of those just to be clear, but it is a good exercise to look at them, understand what they do, and why could they help you and your site and then maybe implement one of them or all of them.
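
[Editor’s note: the accidental-noindex problem Gary mentions is easy to check for yourself. Below is a minimal sketch that scans a page’s HTML for a robots meta directive; a real audit would also check the HTTP X-Robots-Tag header and robots.txt.]

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect directives from <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives += [d.strip().lower()
                                for d in a.get("content", "").split(",")]

def is_noindexed(html: str) -> bool:
    """True if the page asks search engines not to index it."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives

page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print(is_noindexed(page))  # True -- this page blocks indexing
```

Running a check like this across a site’s most important URLs is a cheap first pass at the technical-SEO audit Gary describes.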

Eric: So if you were to advise someone whether to do AMP or a progressive web app, where would you start?

Gary: I’m trying not to…I mean, it depends, right? Progressive web apps can be pretty awesome, but you don’t necessarily need them. In fact, a progressive web app just means that you have a service worker on the page, and then you have a progressive web app. On the other hand, it can help you with tons of things. It’s extremely fast, and if, for example, you don’t have a mobile site yet (and why don’t you have a mobile site yet?), then it’s mobile-friendly by default.

So you will get at least two things out of one technology: it will be blazing fast and you will have a mobile site. So yeah, it depends. It’s pretty much the same question as native apps versus a web page. It’s like, “Do you need an app?” and I’m just like, “I don’t know.” It depends on your business model and what you want to achieve on the internet.

Eric: All right.

Mark: Okay, earlier this week, there was a story in “Washington Post…”

Gary: Oh God.

Eric: With the fun title “Dozens of suspicious court cases, with missing defendants, aim at getting web pages taken down or deindexed.” Apparently, the situation is that a less-than-ethical reputation management company sues some website for defamation. They can’t find the webpage owner, and a court rules, “Yes, indeed, we’re gonna rule for the plaintiff.” They take this to Google and say, “Look, here’s a court case,” and then the website gets taken down. Apparently, there are dozens of these cases, in highly competitive verticals, of course.

So is there anything that you can talk about – I know it gets into a legal question – is there anything you can talk about that Google may be doing to ensure that this can’t randomly happen to us or anyone in the audience?

Gary: So I don’t know our involvement in these cases, and because of that, I will refrain from answering, because if it’s ongoing litigation, for example, then I will, or we will, get in trouble. Probably I will get in trouble, and I don’t like that. So yeah, in private, if you want to ask me, then I can probably say a few things. I know that there’s a camera, for example, so I would refrain from saying anything.

Mark: Okay. We have a question over there.

Gary: Where?

Brian: Hi, Gary, my name’s Brian Patterson with Go Fish Digital. My question is about Chrome data. Is Chrome data used, such as direct visits or time spent on the site or interactions with the site, is that user’s behavior used in the ranking algorithm, individually or in aggregate?

Gary: I think this falls…I don’t know exactly about Chrome data per se but I think this also falls in the same category as clicks. Basically, you get a crap ton of noisy data and then you have to figure out if you can use it somehow. Yeah, I don’t have a better answer for you.

Blair: Hi.

Gary: Over there.

Blair: This is Blair Kuhnen from Builders Digital Experience. Many of our clients want to be fast, and they’re getting fast by having their developers implement tools that are good at making that happen, such as Angular. And what we’re finding is that Google is having a lot of trouble crawling those pages accurately. Many of the guides to fixing that seem to encourage people to do things that sound like a violation of the Webmaster Guidelines. Can you comment on AngularJS and how it should be implemented, or are there guides for that?

Gary: Angular is a Google thing, right?

Blair: Yes.

Gary: Okay. So one thing is that we don’t work, we as in search, we don’t work with Angular to make their pages crawlable or indexable. We did have public discussions with them, as far as I remember. Like we had hangouts, public hangouts with them where we were trying to help them, but we are not proactive about this and we are not reaching out to them because it would kind of create a conflict of interest for us.

I think Angular is capable of server-side rendering, basically pushing a rendered HTML page to the client, in this case Googlebot or a browser, and that should solve the problems. That would be my implementation advice: if something doesn’t work with Angular and you can enable that, then do it. Yeah, otherwise, I don’t think I have anything else to add.
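The server-side rendering idea Gary describes can be illustrated with a tiny sketch: the server sends finished HTML, so a crawler never has to execute the app’s JavaScript to see the content. Here `renderPage` is a hypothetical stand-in for a framework renderer (Angular offers this via its server-rendering support); the route table and markup are invented for illustration.

```javascript
// Hypothetical sketch of what server-side rendering buys a crawler:
// the complete markup arrives in the initial HTTP response, instead of
// an empty app shell plus scripts that must be executed client-side.
function renderPage(route) {
  // Invented route table standing in for a real application's views.
  const pages = {
    '/': '<h1>Home</h1><p>Fully rendered on the server.</p>',
    '/about': '<h1>About</h1>',
  };
  const body = pages[route] || '<h1>Not found</h1>';
  // Googlebot or a browser receives ready-to-index HTML up front.
  return '<!doctype html><html><body>' + body + '</body></html>';
}
```

The design point is that crawlability stops depending on the crawler’s JavaScript execution: whatever the server returns here is exactly what gets indexed.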

Marian: Hi, my name is Marian Widmer and I’m with Delaware North Companies out of Buffalo, New York. First question for you: we are currently using the free version of Google Analytics, and we’re attempting to build the case for why we should move up to Google Analytics Premium. Can you help us with what you see as the top two reasons why we should pursue that move? And by the way, we have only about 75 websites in our portfolio.

Gary: So you’re asking me to make a sales pitch for Analytics Premium?

Eric: Was that the plan?

Gary: I can’t do that. I’m not a sales guy. I don’t think I ever tried to sell, for example, AMP, and the same way, I will not try to sell Analytics Premium. Probably, you can find tools that can deliver the same thing for you as Analytics Premium and are either much cheaper or even free. I don’t know what it can provide you that would be…I can’t find a word.

Male 2: Valuable.

Gary: Yeah. Thank you. I don’t know if it would be valuable for you. I know that it costs lots of money. It depends what it can offer for you. I think you have to make that sales pitch, not me. I think I’m trying to advocate against it right now.

Eric: What we need is a Google Analytics booth in the expo hall, and they could do these for next year.

Male 3: Hello, Gary, I’m [inaudible 00:43:27] from Brazil. I want to ask you about the algorithms in other languages, like Portuguese. Do changes like Penguin and RankBrain go out at the same time to Portuguese and other languages, or is the English version of the algorithm smarter than the others? What can you tell us about it?

Gary: So our goal is to release most of…

Eric: Most?

Gary: Yeah. Most of the algorithms go out in all languages. And that also means that when we are testing the algorithms, we do test those with queries in different languages. One of the QA processes that we run on ranking algorithms uses raters. Raters essentially try to assess the same thing that the automatic experiments would do.

They will try to say, “Okay, the left side is better than the right side,” and they will also try to say why, and we have raters in many, many languages. I don’t remember how many languages, I think like 50, maybe. And I’m very certain that both Portuguese and Brazilian Portuguese are covered. I know for sure that Brazilian Portuguese is covered, and then I would assume that Portuguese is covered as well.

Mark: We are in our last…go ahead, Eric.

Eric: I was just gonna say, so in the case of RankBrain, for example, when that launched, it was basically global when it went out, as I understood it.

Gary: I think so, yes.

Eric: I mean, basically, it operated in all languages.

Gary: Yeah. I want to emphasize that I think it was global. I don’t remember if it was global.

Mark: We are in our last minute here. So we’re gonna slide in one more question. Here it is.

Jeff: Good morning. Jeff Nichols here. My question is about the Penguin update you mentioned earlier, where you’re discounting bad links. How does that impact the disavow tool? Should you still use it, or is that basically…

Gary: Yeah. We’ll still use it. Yeah, we’ll still use it.

Thoughts on “Pubcon Keynote with Gary Illyes and Eric Enge”

  1. Very good and long post!
    I liked a lot what you said about the importance of CTR as a Google ranking factor. Some digital marketers talk about it very well, some of them don’t, but none of them has statistics or real facts to offer as proof.
    I think it’s time to read your article about it.


Eric Enge

Eric Enge is part of the Digital Marketing practice at Perficient. He designs studies and produces industry-related research to help prove, debunk, or evolve assumptions about digital marketing practices and their value. Eric is a writer, blogger, researcher, teacher, and keynote speaker and panelist at major industry conferences. Partnering with several other experts, Eric served as the lead author of The Art of SEO.
