

Why You Must Know about the New Evergreen Googlebot – Here’s Why #217

Eric Enge and Martin Splitt on Why You Must Know about the New Evergreen Googlebot

Google made an announcement at Google I/O in early May of 2019 that Googlebot is now evergreen. What does it mean for the search community?

In this episode of the popular Here’s Why digital marketing video series, Eric Enge, together with Google’s Martin Splitt, explains what the new evergreen Googlebot means for search, including how it handles hash URLs, links in <div> tags, and infinite scroll.

Don’t miss a single episode of Here’s Why. Click the subscribe button below to be notified via email each time a new video is published.

Subscribe to Here’s Why


Transcript

Eric: Hey, everybody. My name is Eric Enge, and today I’m excited to bring you Martin Splitt, a Google Webmaster Trends Analyst based out of Zurich, I believe.

Martin: Yes.

Eric: Say hi, Martin.

Martin: Hello, everyone. Very nice to be here. Thank you very much, Eric, for the opportunity to be a guest here as well. And yes, I am, in fact, based in Zurich.

Eric: Awesome. Great. Today, we want to talk a little bit about what happened at Google I/O related to the announcement that Googlebot became evergreen, which means that on an ongoing basis it will run the latest version of Chrome (right now, that’s Chrome 74). So, what are some of the things that that means, and what are some of the things that still won’t be supported as a result of this move?

Martin: What it means is that we now support many, many features. I think it’s 1,000 features or so that weren’t supported before. Most notably, ES2015 (also known as ES6) and onwards: we have now upgraded to a modern version of JavaScript. A lot of language features are now supported by default, and a bunch of new web APIs are supported, such as Intersection Observer or the Web Components v1 APIs, which are the stable ones. That being said, there is a bunch of stuff that just doesn’t make sense for Googlebot and that we continue not to support.
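(As a concrete illustration, here is a minimal Web Components v1 custom element of the kind the evergreen Googlebot can now render natively; the element name and attribute are hypothetical, made up for this sketch.)

```javascript
// A minimal Web Components v1 custom element. The evergreen Googlebot
// (Chrome 74 and onwards) supports customElements natively; the old
// Chrome 41 renderer needed polyfills for this.
class ProductCard extends HTMLElement {
  connectedCallback() {
    // Render into the light DOM so the text is plainly part of the page content.
    this.innerHTML = `<h2>${this.getAttribute('name') || 'Unnamed product'}</h2>`;
  }
}

customElements.define('product-card', ProductCard);

// Markup usage: <product-card name="Blue widget"></product-card>
```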

To give you examples, there are service workers. We’re not supporting those because users clicking through to your page from the search results might never have been there before, so it doesn’t make sense for us to run a service worker, which basically caches data for later visits. We also do not support things that involve permission requests, such as the webcam, the geolocation API, or push notifications. Googlebot declines these requests, and if your content only shows up after one of them is granted, that means Googlebot doesn’t see your content either. Those are the most important ones.
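(In practice, that means permission-gated features need a fallback. A minimal sketch, assuming a hypothetical store-locator page with a #stores element: render default content first, then enhance with geolocation only if permission is granted; Googlebot declines the prompt and sees the default.)

```javascript
// Render a safe default unconditionally; this is what Googlebot indexes,
// since it declines permission prompts such as geolocation.
function renderStores(label) {
  document.querySelector('#stores').textContent = label;
}

renderStores('All store locations');

if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(
    (pos) => renderStores(`Stores near ${pos.coords.latitude}, ${pos.coords.longitude}`),
    () => {
      // Permission declined or unavailable: the default content is already visible.
    }
  );
}
```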

Also, Googlebot is still stateless. That means we’re still not supporting cookies, session storage, local storage, or IndexedDB across page loads. You can store data in any of these mechanisms, but it will be cleared out before the next URL or the next page is loaded.
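(So a pattern like the following is fine within a single page load, but nothing important should depend on the stored value surviving to the next URL. A minimal sketch with a hypothetical #greeting element.)

```javascript
// Storage works within one page load, but Googlebot clears it between
// URLs, so treat stored state as an enhancement, never as a gate on content.
let visits = 1;
try {
  visits = Number(localStorage.getItem('visits') || 0) + 1;
  localStorage.setItem('visits', String(visits));
} catch (e) {
  // Storage unavailable: fall through to the default.
}

// The primary content renders either way; only the flourish changes.
document.querySelector('#greeting').textContent =
  visits > 1 ? 'Welcome back!' : 'Welcome!';
```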

Eric: Got it. There are some other common things I’ve seen people do that maybe you could comment on. I’ll give you three. One is URLs that have hash marks in them rendering separate content. Another one is infinite scroll. And a third one is links implemented as <div> tags.

Martin: For all of the examples you gave, we have very good reasons not to support them. With hash URLs, the issue is that you’re using a hack: the URL protocol was not designed to be used that way. Fragments, these bits with a hash in front of them, are supposed to point to a part of the page’s content, not to different content. So hash URLs will still not be supported. Links in things that are not links, like buttons or <div> tags or anything else, will also still not be supported, because we’re not clicking on things; that would be ridiculously expensive, and it’s also a very, very bad accessibility practice. You should definitely use proper links. What was the third one?
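(To make that concrete, here is a quick sketch contrasting markup Googlebot can follow with markup it cannot; the URLs are hypothetical.)

```html
<!-- Crawlable: a real anchor with a real URL. -->
<a href="/products/blue-widget">Blue widget</a>

<!-- Not crawlable as a separate page: fragments are treated as pointers
     into the current page's content, not as different content. -->
<a href="#/products/blue-widget">Blue widget</a>

<!-- Not crawlable: no href to follow, and Googlebot doesn't click. -->
<div onclick="location.href = '/products/blue-widget'">Blue widget</div>
```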

Eric: Infinite scroll.

Martin: Yes, infinite scroll is a different story. Googlebot still doesn’t scroll, but if you’re using techniques such as Intersection Observer, which we point out in our documentation, I highly recommend using that, and then you should be fine. You should still test it, and we need to update the testing tools at this point; we’re working on that sooner rather than later. But generally speaking, lazy loading and infinite scroll work better than before.
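(A minimal sketch of that pattern, assuming a hypothetical #items container, a #load-more sentinel element, and a paginated /items endpoint. Googlebot doesn’t scroll, but it renders with a tall viewport, so an observer on a sentinel can still fire where a scroll-event handler never would.)

```javascript
// Load the next batch when the sentinel enters the viewport, instead of
// listening for scroll events (which never fire for Googlebot).
const sentinel = document.querySelector('#load-more');
let nextPage = 2;

const observer = new IntersectionObserver(async (entries) => {
  if (!entries[0].isIntersecting) return;
  const response = await fetch(`/items?page=${nextPage}`); // hypothetical endpoint
  document.querySelector('#items').insertAdjacentHTML('beforeend', await response.text());
  nextPage += 1;
});

observer.observe(sentinel);
```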

Eric: One of the things that I believe is still true is that the actual rendering of JavaScript-based content is deferred from the crawl process. So, that also has some impact on sites. Can you talk about that?

Martin: Yes. Absolutely. As you know, we have been talking about this last year as well as this year. We do have a render queue. It’s not always easy to figure out whether rendering or crawling is the culprit, because you don’t necessarily see the difference that easily. We are working on removing this separation as well, but there’s nothing to announce at this point. If you have a site with high-frequency content changes, let’s say a news site where stories may change every couple of minutes, then you are probably well off considering something like server-side rendering or dynamic rendering to get this content seen a little faster. If you are a site like an auction portal, you might want to do the same thing. Basically, if you have lots of pages, and I’m talking about millions, whose content continuously changes, then you probably want to consider an alternative to client-side rendering.

Eric: Right. One of the things that used to be recommended was this idea of dynamic rendering: if you have one of these issues, where you’re using infinite scroll or you have real-time content or some of the other things we talked about, dynamic rendering allows a pre-rendered, if you will, version of the content to be delivered to Googlebot. Is that something that you still recommend?

Martin: It’s not a recommendation, per se. Better is to make the investment in server-side rendering, server-side rendering with hydration, or pre-rendering. Pre-rendering fits when you have a website that only changes every so often and you know when it changes. Let’s say you have a marketing site that you update every month; then you know when the update happens, so you can run your JavaScript whenever you deploy something new on your site and create static HTML content from it.
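(A minimal sketch of that deploy-time step, using Node with Puppeteer as an assumed dependency; the URL and output path are placeholders.)

```javascript
// Run the site's JavaScript once at deploy time and save the resulting
// HTML as a static file, so crawlers and users both get finished markup.
const fs = require('fs');
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Placeholder: point this at your local build or staging server.
  await page.goto('http://localhost:8080/', { waitUntil: 'networkidle0' });
  fs.writeFileSync('dist/index.html', await page.content());
  await browser.close();
})();
```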

We recommend making these investments as a long-term strategy because they also speed up the experience for the user, whereas dynamic rendering only speeds things up for crawlers, not for users specifically. It’s more a workaround than a recommendation, but it can still get you out of hot water if you can’t make the investment in server-side rendering, pre-rendering, or server-side rendering with hydration yet, or if you are basically on the way there but need something for the interim.
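(For completeness, here is roughly what the dynamic rendering workaround looks like as a Node/Express sketch; the stack, user-agent check, and renderer are simplified stand-ins for a real setup such as a headless-browser rendering service.)

```javascript
const express = require('express');
const app = express();

// Simplified stand-in for a real headless-browser renderer (e.g. Puppeteer).
async function renderToStaticHtml(url) {
  return `<html><body><h1>Pre-rendered view of ${url}</h1></body></html>`;
}

const BOT_UA = /Googlebot|bingbot/i; // simplified crawler detection

app.get('*', async (req, res) => {
  if (BOT_UA.test(req.headers['user-agent'] || '')) {
    // Known crawlers get fully rendered HTML.
    res.send(await renderToStaticHtml(req.originalUrl));
  } else {
    // Everyone else gets the normal client-side app shell.
    res.sendFile(`${__dirname}/index.html`);
  }
});

app.listen(3000);
```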

Eric: Awesome. Any final comments about JavaScript before we wrap up?

Martin: I would love to see more people experimenting and working with JavaScript rather than just downright disregarding it. JavaScript brings a lot of cool features and fantastic capabilities to the web. However, as it is with every other tool, if you use it the wrong way then you might hurt yourself.

Eric: Awesome. Thanks, Martin.

Martin: You’re welcome, Eric.


See all of our Here’s Why Videos | Subscribe to our YouTube Channel


Eric Enge

Eric Enge is part of the Digital Marketing practice at Perficient. He designs studies and produces industry-related research to help prove, debunk, or evolve assumptions about digital marketing practices and their value. Eric is a writer, blogger, researcher, teacher, and keynote speaker and panelist at major industry conferences. Partnering with several other experts, Eric served as the lead author of The Art of SEO.
