Reconsideration Requests

Google+ Hangouts - Office Hours - 27 June 2014


Key Questions Below

Transcript Of The Office Hours Hangout

JOHN MUELLER: OK. Welcome everyone to today's Google Webmaster Central Office Hours Hangout. We have a bunch of people here already and a lot of questions that were submitted. Simon had a question just before we started regarding whether or not it still makes sense to make websites for the users, or with all of these new technologies like Ajax et cetera, if there isn't a large component involved that requires making something special for the search engines. Is that about right?

SIMON: Yeah, that sounds very good. Thank you.

JOHN MUELLER: OK. So I think, in general, it still makes a lot of sense to primarily focus on the user and to make sure that their user experience is as good as possible, that you have something really fantastic that you can offer them. At the same time, if you need to make sure that this content is also available in search engines, there are some technical requirements that you need to fulfill. On the one hand, we need to be able to crawl these pages so that we can actually find this content. On the other hand, we need to be able to understand the content on those pages as well. So for instance, if a page is just an image, then that's sometimes very hard for us to understand. But if there is something like a subtitle on an image, that helps us a lot. If there are things like links with anchors within the website to those images, that helps us a lot as well. So I think, primarily, you still want to focus on the user and make sure that they are able to use the website as well as possible. If you need to have search engines crawl and index your content, there are some basic requirements that still apply. So you can't take any random new technology and say, this works great in some browsers, and assume that it will also work for search engines. Specifically for Ajax, there's the Ajax crawling technique where, like you mentioned, you can make a parallel website for crawlers. We're getting better and better at understanding Ajax content as well. But even when we can process the JavaScript directly and see what content is being pulled in, we need to have unique URLs. So you need some kind of a URL structure for this content that we can pick up on. And we need to have some kind of basic unique content on these pages when JavaScript is not enabled. So we need that for our technical side. But at the same time, those few users who don't have JavaScript enabled, they need that as well. 
So I believe that Guardian did a study about that not too long ago. They found that there was still a significant portion of users who don't have JavaScript enabled, which might be for things like blocking advertising or maybe blocking malware problems that they're afraid of, those kinds of things. Or maybe they're using some older or simplified browser. And that was, I think, somewhere around 5%, something like that. So it's a small number of users, but it's still something to keep in mind. So if you want to go towards Ajax, I think that's a great opportunity. You just need to make sure that you have unique URLs and that, on these URLs, when they're crawled, there's at least some minimal amount of content that's available without JavaScript. If you expand that with Ajax, with JavaScript, that's fantastic. But at least it needs to have some minimal amount.
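The AJAX crawling scheme John refers to worked by mapping hashbang (`#!`) URLs to an `_escaped_fragment_` query parameter that crawlers would fetch instead. A minimal sketch of that mapping (the URLs are illustrative, and the scheme itself was later deprecated by Google):

```python
from urllib.parse import quote

def escaped_fragment_url(url):
    """Map a hashbang (#!) URL to the _escaped_fragment_ URL that a
    crawler supporting the AJAX crawling scheme requests instead."""
    if "#!" not in url:
        return url  # not an Ajax-crawlable URL, nothing to map
    base, fragment = url.split("#!", 1)
    sep = "&" if "?" in base else "?"
    return base + sep + "_escaped_fragment_=" + quote(fragment, safe="")

# e.g. http://example.com/gallery#!photo=12 becomes
# http://example.com/gallery?_escaped_fragment_=photo%3D12
```

The point is that every piece of Ajax-loaded content gets its own crawlable URL, which is exactly the "unique URLs" requirement John describes.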

SIMON: OK. I understand. Thank you. But the second part of the question was more about websites which focus on images and multimedia content. Would you recommend doing some marketing blah, blah at the bottom or the left side, which is not so important for a user, but at least doesn't disturb him, to show search engines what the page is about?

JOHN MUELLER: I generally wouldn't do that, if it's just for search engines. So if this is text that's not meant for the user, then that's almost seen as keyword stuffing, and it gets close to being hidden text, all of those kinds of problems. And usually, we do a fairly good job of understanding what these pages are about, based on the context within the website and based on the context from outside of the website, so when people are recommending those pages to us, maybe in a forum or somewhere else. So I definitely wouldn't add text to these pages just for search engines.

SIMON: OK. So the final addition to the question. If a website which is based on images has a great user experience is it equally likely to be ranked in Google as high as websites which do provide a lot of text? I mean, is the amount of text important for the ranking? Or is it really the user experience that counts at the end?

JOHN MUELLER: Yeah. I think the user experience is very important for us. But we need to have some minimal amount of information on these pages. So an extreme example is that lots of photographers have a very simplified gallery on their pages where the title is DSC and then some long number dot JPEG, which is the file name from the camera that took that photo. Then they have this nice photo on there. And for us, when we see that, it's very hard to say, OK, this is a nice photo, but what should we rank it for? We don't really know. Whereas, if someone were to take this photo and bring it into a forum, for example, and they write a forum post about their vacation that they had, and they include this photo and say, this is kind of what it looked like at my hotel, then we have a lot of information on that page that gives us a little bit more context about the image. And at the same time, that's something which users sometimes also want to find. For example, if they don't just want to find the image, they might want to find some context about this. So if someone is searching for hotel vacation in that location, for example, then maybe that forum thread is a lot more relevant for them, because it has this information on it, even though it's the same image as the one on that fairly blank gallery page.

SIMON: OK. Thank you.

PANKAJ: Hi, John. This is Pankaj.


PANKAJ: I have a related question in this context. I have two questions for you regarding Ajax. One is: we are serving an HTML snapshot of the pages for requests to those URLs. However, that snapshot is taking time to generate because there are lots of images used on that page. But Googlebot has very limited time to crawl, and just skips away. So how can we deal with that problem?

JOHN MUELLER: If you have an HTML snapshot like that, I think what you'd need to do is some kind of caching on your side, so that Googlebot can still crawl fairly quickly. And you don't have to refresh that snapshot as often as may be otherwise necessary. But it really depends a lot on your website and how you have that implemented within your site.
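The caching John suggests can be as simple as a TTL cache in front of the slow snapshot renderer. A hypothetical sketch; the TTL value and the `render` callback are assumptions, not anything from the discussion:

```python
import time

# Hypothetical TTL cache for pre-rendered snapshots, so repeated
# _escaped_fragment_ requests from Googlebot are answered instantly
# instead of triggering a slow re-render every time.
_cache = {}
TTL_SECONDS = 3600  # assumed refresh interval

def get_snapshot(url, render, now=None):
    """Return a cached HTML snapshot for `url`, calling the slow
    `render` function only when no fresh copy exists."""
    now = time.time() if now is None else now
    entry = _cache.get(url)
    if entry is not None and now - entry[0] < TTL_SECONDS:
        return entry[1]          # fresh enough, serve from cache
    html = render(url)           # slow step: headless render, images, etc.
    _cache[url] = (now, html)
    return html
```

With this in place, only the first request (or the first after the TTL expires) pays the rendering cost; Googlebot's subsequent fetches are served from the cache within its crawl budget.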

PANKAJ: OK. So we have lots of images which are coming from a different system. So if we can stop those images while we're creating a snapshot, would it be considered as cloaking? Because we are serving the same content, just blocking images to serve the snapshot faster. So would it be fine?

JOHN MUELLER: So with snapshot, you mean like the escaped fragment version of the page?

PANKAJ: Yes. Yes. Yes.

JOHN MUELLER: For this escaped fragment version, the images aren't actually included. It's just the image link there. It's not that the image itself is embedded. So that would be, actually, separate from the escaped fragment part. But you can definitely block images in robots.txt if you think that it takes too long to crawl them or causes problems. You just need to keep in mind then, without those images, we can't show those images in image search. But maybe that's OK.
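Blocking image crawling in robots.txt, as John describes, might look like this (the paths and the choice of blocking only Googlebot-Image are illustrative):

```
# Keep images out of image search entirely:
User-agent: Googlebot-Image
Disallow: /

# Or block only a slow image directory for all crawlers:
User-agent: *
Disallow: /assets/photos/
```

The trade-off John notes applies either way: blocked images cannot appear in image search.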

PANKAJ: OK. Fine. Then I have a second question. We have implemented that, but how can we check that? Because if there is a normal URL, we can check it in the Fetch & Render option. But we are creating a URL which is a temporary URL that we don't want to serve to the users. So if we do the Fetch & Render of that URL, we can see the content in that. However, if we are putting that content behind the hashbang, we are not able to see that. So how can we confirm that it is serving the purpose?

JOHN MUELLER: You would have to compare the escaped fragment version in Fetch as Google and compare it to a version that you see directly in your browser. So that's not something that you would be able to use the Fetch as Google for, because the Fetch as Google doesn't support fragments directly.

PANKAJ: Yes. So our question is, how can we check that we are serving the right content to Google? Because we know that we have already implemented that. But Fetch & Render is something that we can cross-check and confirm with.


PANKAJ: So how can we do that?

JOHN MUELLER: I would take a look at the preview image that's shown in Fetch & Render and compare that to what you see in the browser.

PANKAJ: OK. So you mean to say we fetch the escaped fragment URL?


PANKAJ: Not Fetch & Render? And if it is the same, so we are serving the purpose.

JOHN MUELLER: Yeah. Exactly.

PANKAJ: Thank you. So Ajax is something that we were discussing. Now I have a second question; the other aspect I'd like to discuss is related to dynamically serving content on the same URL.


PANKAJ: So in that, let's say we have country-specific content. And we have only a single URL. On that, we are serving, for US users, US content; for India users, India content. But my question is, will it leverage or give the same priority as using ccTLDs for these two documents?

JOHN MUELLER: If you use the exact same URL to serve the different content, we'll only see one of those versions. We won't see both versions. So if you dynamically serve the different country or the different language content on the same URL, we'll index only one version, probably only the US version. So we won't even see the Indian version. So what you would need to do is have the Indian version on a separate URL, or the other language, other location versions on separate URLs, so that we can crawl and index them separately. And then, using something like the [INAUDIBLE], you can refer us to the different versions.

PANKAJ: But in that case, we are using different URLs to serve this. So that means we are telling Google that we have different content. So wouldn't it serve the same purpose as ccTLDs, on these different URLs?

JOHN MUELLER: If you use different URLs and you use the geotargeting feature in Webmaster Tools, yes, that would be equivalent. Yeah. So if you have a dot com and you have slash India, for example, slash USA, and in Webmaster Tools you say slash India is for India, slash USA is for the United States, then that's fine for us.
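The per-country referral John mentions is typically done with rel-alternate-hreflang annotations on each version, so Google can crawl the country variants separately and show the right one; a sketch, with the domain and paths purely illustrative:

```
<link rel="alternate" hreflang="en-US" href="http://example.com/usa/" />
<link rel="alternate" hreflang="en-IN" href="http://example.com/india/" />
<link rel="alternate" hreflang="x-default" href="http://example.com/" />
```

Each country version carries the full set of annotations, pointing at itself and at all of its alternates, combined with the per-directory geotargeting setting John describes in Webmaster Tools.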

PANKAJ: OK. So let's say it's country-specific. Now, if we talk about the device-specific, we are serving content for mobile users, and we are serving content for desktop users. So in that case, also, Google will index only one type of content?

JOHN MUELLER: I would check out the smartphone guidelines that we put together. There are the three options that you can use for that. You can use different URLs. You can use the same URL. But you have to let us know about the settings. But in the guidelines, you should find all of that information.

PANKAJ: When we looked at the guidelines, we didn't find whether they will index only one version or both versions. What it says is that we need to use the Vary HTTP header to inform Google and other search bots that we are serving different content to different user agents, just to avoid cloaking. But I just want to ensure that in, let's say, mobile searches, we are serving the mobile content, and in desktop searches, we are serving the desktop content on the same URL. That, I want to confirm.

JOHN MUELLER: What we would do is we would index the desktop version. And we would recognize that there's a mobile version that's equivalent to this content. Then we would show it for the mobile users in the search results. So in a case like that, we don't need to index it separately, because it's the equivalent content.


JOHN MUELLER: You access it once on desktop, once on mobile. Maybe the layout is different. Maybe the sidebar is different. Maybe there's some content missing. But if it's equivalent, then that's fine for us. Then we don't need to index it separately.
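The same-URL option Pankaj asks about corresponds to dynamic serving with a Vary HTTP header. A minimal WSGI sketch, assuming a crude user-agent check as a stand-in for real device detection:

```python
def app(environ, start_response):
    """Minimal WSGI sketch of dynamic serving: one URL, different HTML
    per device class, plus a Vary header so crawlers and caches know
    the response depends on the user agent. The substring check is a
    crude stand-in for a real device-detection library."""
    ua = environ.get("HTTP_USER_AGENT", "")
    is_mobile = "Mobile" in ua
    body = (b"<html>mobile version</html>" if is_mobile
            else b"<html>desktop version</html>")
    start_response("200 OK", [
        ("Content-Type", "text/html"),
        ("Vary", "User-Agent"),  # the header the smartphone guidelines call for
    ])
    return [body]
```

The `Vary: User-Agent` header is what tells Googlebot to also fetch the page with a smartphone user agent, so it can recognize the equivalent mobile version John describes.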

PANKAJ: So can we conclude that we have to have two different URLs to index two different pieces of content, even if the difference is only the language?

JOHN MUELLER: Yeah. For language and location, you would need to have different URLs. For different devices, you can use the same URLs.

PANKAJ: OK. Fine. Thank you.

JOHN MUELLER: Sure. All right. Let's grab some questions from the Q&A. And if you guys have anything to add in between, feel free to jump in. [MICROPHONE FEEDBACK] Oops. Let me just mute you. I get negative SEO attacks. Page A loses rankings. I see new spammy links and even bad 302 redirects pointing to it. I disavow those URLs with the domain directive. And the ranking for that page pops back up in three to four days. Is this actually working? You're probably seeing something different than anything involved with negative SEO or with those specific links, because it generally takes a bit of time for us to actually crawl and re-index and reprocess all of that information. So if you're seeing the ranking pop up and down within a couple of days, then that's probably something different. Usually, that happens when our quality algorithms are kind of on the edge. And they don't really know, is this a great page? Or is this a middle quality page? And it's kind of fluctuating between those two variations. So what I recommend doing there is just making sure that your website is as high quality as possible, so that we don't fluctuate back and forth between thinking it's a great site and thinking it's maybe not such a great site. So I'd focus more on making sure the quality of your page is as high as possible, and probably worry less about these negative SEO attacks. If you see problematic links, you're welcome to disavow them. But you generally wouldn't see effects within a couple of days.
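For reference, the domain directive mentioned in the question comes from the disavow file format uploaded through Webmaster Tools; the domains here are made up:

```
# Lines starting with # are comments.
# Disavow a single URL:
http://spam.example/bad-links.html
# Disavow everything on a domain (the "domain directive"):
domain:spam.example
```

As John notes, changes to this file take effect only after the listed pages are recrawled and reprocessed, which is why effects within a few days are unlikely to be caused by the disavow itself.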



JOSHUA BERG: So what you're saying there is that if your site fails one filter, like doesn't do so good on Panda, then it might get a little more focus from another filter?

JOHN MUELLER: These algorithms tend to run completely independently, because we try to make sure that our algorithms are able to focus on one specific aspect, so that it's not this overlapping aspect there where one algorithm triggers. And because of this algorithm, the next one triggers, the next one triggers, leading to a cascading failure. Instead, we want to make sure that these algorithms run as independently as possible so that there is, essentially, no big overlap between them. So just because your site isn't seen as being such a high quality site, doesn't mean that links will be seen as more problematic.

JOSHUA BERG: OK. Yeah, because I remember one time there was a discussion about-- or Matt was saying last summer when they were looking at softening up some of the algorithms for sites that were kind of on the borderline, that they would look for additional signals. So in my mind, I'm thinking, OK, if that's Panda looking at quality, then are those additional signals related to something else, like off-site links or something? Or are we looking at more specifically that algorithm where it is just looking closer at the quality.

JOHN MUELLER: Yeah. Yeah. Usually, we try to keep the signals as separate as possible, so that it doesn't happen that one change triggers a lot of different algorithms. Because then, we could probably just simplify those different algorithms into one. And having fewer algorithms always makes life easier. Is it true that your website needs fresh content to stay ranked high in the search engines? For example, is adding a blog to your website giving you better search results for your keywords on your home page, because Google sees that your website keeps fresh? No, it's not true that you need to keep updating content on your website. Sometimes, websites are really fantastic, and they haven't changed in years. And that doesn't mean that they're low quality. That just means that the content there, essentially, has been the same for a long period of time. And people might be happy with that content as it is. So it's definitely not the case that you need to keep updating your content. On the other hand, especially in things like blogs, they give a lot of extra context for your website. So maybe, if you only write about your articles, your products, or services that you're selling within your main website, and your blog gives a little bit more context to the bigger picture of how these products and services might be used, then that could attract a different set of users. So it's not the case that the blog itself makes it look like the website is fresher and therefore ranks more, but it's just that the blog has slightly different content that might be attracting different users. So I'd definitely look into different kinds of content that you might be able to add to your website. That could be a blog. That could be something more about-- I don't know, maybe photos. Maybe even a feature where people who use your products and services can comment on it directly, like reviews, for example, or something like that. 
So essentially, adding more content to your website is almost always a good thing, as long as you can make sure that this content is really high quality and something you'd like to stand for on your website.

SIMON: OK. But is it so that the blog also tells something about the home page, if you read a lot of blog articles about the subject? Or does Google read the home page as a separate page?

JOHN MUELLER: We look at that separately. What might be happening is, if you have the blog also, like in a sidebar on the home page, something like that, we recognize that the home page is changing more frequently. And we try to crawl and re-index it more frequently. But just because we crawl and re-index it more frequently doesn't mean that we'd rank it more. So we see the crawling frequency more like a technical aspect. And the ranking side is something based on the various signals that we collect there that isn't necessarily based on how often we crawl and re-index a page.

SIMON: OK. Thank you.

JOHN MUELLER: There are URLs-- yeah, go for it.

PANKAJ: John? I have a question for you: page speed, basically, is very much important in this scenario to rank higher, right? Am I correct?

JOHN MUELLER: Not that important, but it's something which users definitely recognize, yes.

PANKAJ: OK. So in the mobile context, how valuable is it [INAUDIBLE].

JOHN MUELLER: At the moment, we don't use any kind of page [INAUDIBLE] measurement for the mobile ranking. So we try to use the desktop rankings on mobile, except for situations where we find a page doesn't work at all on mobile. So for example, if a mobile page has a lot of Flash content, then that's something where we'd say this doesn't really make sense to show it to people on smartphones, because they can't see it. So that's when we would demote it in the search results. But if the mobile page is kind of reasonable, we would show it normally, like we would with the desktop page, even if it's fairly slow. So I think, over time, we might be able to go into that direction and say, your mobile page should be really fast and should be able to render within a couple of seconds. But at the moment, it's still very early on the mobile side, and so many websites are still making so many mistakes on mobile, that we can't really be too picky about what we would show in the mobile search results.

PANKAJ: OK. So it means if we make a site responsive, we may have multiple CSS and different JS files. But we need Google to crawl that. So somehow, is it helping in crawling our website?

JOHN MUELLER: If you block crawling of the JavaScript and the CSS, usually, that can be fine. We recommend not doing that, primarily so that we can recognize the content on your pages. So if you're using JavaScript or CSS, for example, to do a responsive design for mobile, and we can't crawl that JavaScript or CSS, then we can't recognize that you're doing something really smart. So if you let us crawl that, then we can credit your website with that extra value that you're providing.
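In practical terms, John's advice means robots.txt should not disallow script and stylesheet paths. A sketch, with the paths purely illustrative:

```
User-agent: *
# Do NOT disallow the resources needed to render pages, i.e. avoid rules like:
#   Disallow: /js/
#   Disallow: /css/
Allow: /js/
Allow: /css/
```

With those resources crawlable, Googlebot can render the page and recognize things like a responsive mobile layout.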

PANKAJ: OK. So if we are making the JS and all this stuff accessible, and if you find that it is providing value to users, you will value those signals, right?


PANKAJ: Correct?

JOHN MUELLER: Yes. I mean, if we can pick up more information about your website from the JavaScript, from the CSS, then that definitely helps us. It's not that we'll put your website on number one ranking for all of your keywords, but it definitely helps us to understand the context and your content a lot better.

PANKAJ: OK. And now, I have two questions related to internal links.

JOHN MUELLER: Let me get through some of these submitted questions first. And I'll open it up for general discussions in a little bit.

PANKAJ: OK. Thank you.

JOHN MUELLER: All right. There are URLs in my Disavow file that are no longer in the Google index or cache. If no one is linking to these URLs and Google never revisits them, will they ever get disavowed? Generally, if they're not in our index, you don't need to disavow them, because we're not going to use them for any links there. So theoretically, you could remove them from your Disavow file. You could also leave them there, if you're worried that, at some point, we might crawl and index them again. But if they're not in our index, then that's essentially fine. Why is it taking so long to refresh Penguin? Come on, Google, you should be faster. We talked with some of the engineers about this, and they are working on this. So it's something where we're working on an update. At the moment, I don't have anything to share with regards to when that will actually happen. But it's been a while since the last one, so I can see that they're probably working a little bit harder on this. I broke my blog posts into pages with 50 comments. The pagination is reported as duplicate content in Webmaster Tools. I already use canonical URLs. What else can I do? Essentially, that can be fine. If you're just paginating based on comments, and you have a lot of comments on your pages, then that's something that we'll recognize. We'll try to crawl and index those individual pages with the different sets of comments. And even if the titles and maybe part of the article is exactly the same, we'll try to recognize the unique parts there. So if someone were to search for that general article, and you have the whole article copied on all of these pages, then we'll probably pick one of these pages, maybe the first one, generally the first one, probably. If someone is searching for something unique within the comments, then we'll recognize that these comments are separate and try to rank that in the search results. So duplicate content, on its own, isn't something that you need to be afraid of. 
It can have a totally normal purpose here. Sometimes, there are technical reasons for that. Sometimes with pagination you have things like this as well. So nothing really to worry about there. If Google gives a domain a 10 out of 10 ranking regarding intended geographic location, what score would a dot com get that's been set in Webmaster Tools to the UK? We don't give any ratings to ccTLDs. But generally speaking, if you have a generic top-level domain, and you use Webmaster Tools to set geotargeting, that would be equivalent to a ccTLD. A lot of new generic top-level domains are coming out. I think it's also important to keep in mind, with these new top-level domains that are coming out, some of them look like they might be geographic, but they actually are seen as being generic on our side. So if you have something like .nyc or .berlin-- I think these are two that are coming out-- that's something which we would, I think, at first look at and treat as a generic top-level domain. So you can use geotargeting, if you want, for that. If, over time, we recognize that this top-level domain only has content from a very specific region, then we could take that into account. But definitely, at the start, we'll be treating all of these new top-level domains as generic top-level domains. So you can use geotargeting, set that up however you want. I'm a designer. And I want to promote my design portfolio blog. How would you recommend optimizing individual pages when content is image or video heavy? Also, is a page with one or two images and, say, 200 words considered low quality? So our algorithms don't count the words on a page. It's not that you need to have more than 200 words on a page to be seen as high quality content. Sometimes, a page has very little textual content and is very important and high quality. So just because there are more words on a page doesn't make it higher quality. 
At the same time, as I mentioned before, if we can't understand what this page is about based on the words on this page, then we'll have a hard time showing it in the right search results. So a minimum amount of text is definitely a good thing to have on these pages. It really depends on how far you want to go. There are some sites I've seen where a graphic designer has created something that looks really nice in a browser window, but it has no textual content at all. And for us, that's something which makes it extremely hard to rank, because we don't know what we should be showing for this. Maybe we can't even recognize what's shown on the picture, or it's not that it's a photo of something very specific. So sometimes those are very tricky situations. But at the same time, if you have something like your home page, or if you have a separate blog, if you have other pages that do have a lot of text, then that can compensate. At least those pages can rank for those queries. It doesn't have to be that all pages within your website have to have a lot of text on them. Sometimes, like a gallery, that's totally fine.

SIMON: I have a similar question about video on your website. Does Google see when you embed videos from YouTube in a website? For example, you're making a video course that can be very interesting to people. But how does the search engine look at your page when you only have video and a few words? Does it recognize the YouTube video with the keywords in it?

JOHN MUELLER: Good question. I don't think we pull in content from the YouTube video. So we try to primarily focus on your page and how that page has a context within your website, within the internet on the whole. So things like anchor text for links leading to that page, that helps us a lot. But if there is primarily two to three videos and very little text, maybe just a heading, then that's really hard for us to figure out what we should be showing this for. In comparison to, maybe, the YouTube landing page that has a lot of context on it, maybe that even has a lot of comments on it, that's a lot of text that we can use to rank for. But if we don't have much, it's very hard.

SIMON: OK. But does Google see the video? Because it's on an IFrame, does it read the IFrame? Or totally not?

JOHN MUELLER: We generally try to pull in content from an IFrame. I imagine, with YouTube videos, that's really hard, though, because a lot of that content is also in JavaScript or Flash. And I don't really know how YouTube pulls in things like the titles of the videos, or the comments, those kind of things. So I would assume that we're not going to see a lot of content for those IFrame embeds.

SIMON: OK. So a little text will be nice.

JOHN MUELLER: Definitely. Yeah. I mean, sometimes it's also the context of these pages that makes a difference where, if you're seen as an absolute authority in this topical area, and you compile a list of two to three or four videos that are really important that maybe you created yourself where everyone says this is the absolute best compilation on this specific topic, and they link to you with relevant anchor text, that's something where we could pick up on that as well and say, OK, this is a really good page on this topic. We should be showing it when someone is searching for it, even if there's not a lot of text on this page.


JOHN MUELLER: But text definitely always helps.

SIMON: OK. Thank you.

JOSHUA BERG: By your authority, do you mean the channel, John?


JOSHUA BERG: You said we can tell if, say, you're an authority on this topic of these videos. Then do you mean the channel, or those who are sharing the video?

JOHN MUELLER: More about your website itself. So if we can recognize that your website really matches this topic that someone is searching for and this page is very specific to something unique that they're searching for. So I don't know. It's hard to come up with an example, offhand. But you could imagine, maybe, let's say, a car manufacturer that is known to produce a lot of good cars. And they have a page with some advertising videos on that, advertising campaigns that they had. And if someone is searching for this car manufacturer plus advertising videos, then maybe that kind of landing page where they compile those videos could have a good chance. Because we understand that this website is about this car manufacturer. It's something that's essentially seen as being very relevant to this website. And we can recognize that maybe this page has the title, "Advertising Videos" on it. And we can kind of work to combine those two. So that's something where we try to understand your website and understand how it fits in with the query. And if there's something unique within your website that matches the exact query, then we'll try to pick up on that.

JOSHUA BERG: So in other words, an authoritative page is a page that it's on maybe passing like a topical page rank or a topical authority back to the video that is showing?

JOHN MUELLER: I wouldn't necessarily look at it like that, but I think that the general concept is quite similar in that when we recognize that this website is very relevant for this general kind of query, and it matches the specifics of that query, then we think that might be a good result as well. But again, if this is something that you're creating, and you have these videos on your website, then adding more text definitely helps us understand it better. And it's not something that I'd say you can always count on happening, because maybe we see a YouTube channel that has a lot of descriptions that also matches this kind of content that the person is looking for.

SIMON: So does it help to get a link from YouTube when you make those YouTube videos to link them to your website?

JOHN MUELLER: I think those links are nofollow. I'm not sure.


JOHN MUELLER: OK. Let's grab another one here. If Google can detect unnatural links, why do we still need to disavow them instead of Google ignoring them automatically? I read that competitors use negative SEO and spammy links to harm their competitors [INAUDIBLE]. I think the whole negative SEO topic is something we've talked about a lot in some of the previous Hangouts, so I'll generally skip that part here. In general, we do try to recognize problematic links, but we understand that we can't catch all of them. So the big problem here, from our point of view, is when we see a lot of really problematic links or a lot of problematic things that, maybe, you're doing on your website or outside of your website. And that leads us to generally, maybe, not trust your website as much as we would otherwise, because we don't know if all the other signals that we're seeing from your website are really unique and compelling and really authentic signals that we should be able to trust. So from that point of view, it's something where we see these problematic links, and we don't really know how we should react to that. It's not that we can just close our eyes and say, oh well, we can recognize these problematic links and ignore them. It's more that we don't know what we should do with all of the other signals that we find attached to your website, if we realize that we really can't trust one of them. So that's something where I'd generally work on making sure that your website, on the whole, provides really consistent signals, that it's not doing something really badly in one area and doing really well in other areas, so that the algorithms, when they look at it, don't end up wondering: is the whole website bad? Is the whole website good? Should I treat it somewhere in between? Instead, if you can make sure that everything across the board looks really good, then that's always a good sign for us. 
With regards to negative SEO, I think we generally do a pretty good job of catching those kinds of things. If you run across situations where you think we're not catching something properly, then you're welcome to forward that on to us. And we'll take a look at that with the engineers.
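(Editor's note: for readers unfamiliar with the mechanism being discussed, the disavow file uploaded through the Disavow Links tool in Webmaster Tools is a plain-text, UTF-8 file with one entry per line. The domains below are made-up examples:)

```text
# Lines starting with "#" are comments and are ignored.

# Disavow a single spammy URL:
http://spam.example.com/paid-links/page.html

# Disavow every URL on a domain:
domain:link-network.example.net
```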

PANKAJ: Hey, John?


PANKAJ: Now can I [INAUDIBLE] questions?

JOHN MUELLER: One more. One more. Just one second. Could you elaborate on the choice of removing authorship snippets from web results? I don't know. Joshua, you made this cool plug-in, I saw, that restores them, right?

JOSHUA BERG: Yeah. That's a useful one. This extension could restore your authorship photo. And then it could also restore your ranking, you know? So you can see it however you want to see it, you know?

JOHN MUELLER: I thought that was really cool, yeah.

JOSHUA BERG: Sometimes like Nexus+.

JOHN MUELLER: Yeah. I think that was a cool idea. So in general, our idea here is to try to make the search results a little bit more consistent, especially between mobile and desktop. And on mobile, adding additional images on a page adds a lot of latency. And it takes away a lot of room that could otherwise be used for other things. So we're trying to make it a little bit more consistent. And from our experiments, we've noticed that the click-through rate is more or less the same. So that's something where we think this is the right move to take to make sure that things are consistent and that authorship is used appropriately. So we're still using authorship. We still use Google+. This is not the end of Google+ or the end of anything. We still process authorship information on these pages. We still keep that. We'll show your name in the search results, when we can recognize that. And it's essentially just a change in the search UI.

JOSHUA BERG: But you mentioned, or it was mentioned, that in the News results, or in some articles, the publisher or page might show a different and smaller version of what we see, some page images used more or sometimes--

JOHN MUELLER: I think the results that come in from Google News, which are in this News Universal block, they might still have authorship photos.

JOSHUA BERG: Yeah, In-Depth.

JOHN MUELLER: The In-Depth articles, I'm not sure. They might also still have authorship photos. But at least for the general web results, we're removing the photo and the circle count and just keeping the name and the link to the profile there, unless, of course, you have your extension installed.

JOSHUA BERG: Yeah. Thank you, guys, on that.

JOHN MUELLER: All right.

PANKAJ: Hello, John?


PANKAJ: Now can I make [INAUDIBLE]?


PANKAJ: Thank you. My question is related to internal links. When we go into Webmaster Tools, we see the internal link counts that Google shows. But we have a couple of [INAUDIBLE] links which are consistent across the site, and yet the variation in their numbers is significant. Let's say the Terms and Conditions page has 70,000, and a Contact Us page, or maybe some Customer Care page, has 73,000 or 75,000. So there's a huge difference in those internal link numbers. How does this happen?

JOHN MUELLER: Especially if you have things that are consistently linked across the website, then I think those numbers are hard to use directly. I use them more with caution. So I would more worry about pages that you find that don't have any internal links at all, which might be a sign that there's something technically broken with your internal linking structure. But as long as you see some information there, I think you're probably on the right path.

PANKAJ: OK. Then the next question is: when we look at the top pages for internal links, two different URLs can have the same total link count but different top pages. So how will Google decide that these are the top pages for this link, and those are the top pages for that one? What [INAUDIBLE] are being considered?

JOHN MUELLER: So you mean the ranking within your website?

PANKAJ: It's internal links within the website.

JOHN MUELLER: Yeah. I'm not really sure where the exact number comes from. I've seen those fluctuations that you mentioned, as well, where the same page will be linked internally across the whole website, but they have different counts. I'm not absolutely sure where that would come from. But that's something that would be on our side. That wouldn't be something that you would need to worry about.

PANKAJ: OK. Thank you. And one thing that I would like to know: in one of the videos, Matt mentioned that Google reserves the right to treat links in the footer differently from links in the main body. So what does "differently" mean, and what rights are you talking about? Can you elaborate on that?

JOHN MUELLER: OK. So in general, what we find is that when a page is linked in a footer across a website, it might have a lot of internal links, but that doesn't mean that it's the most important page within your website. For example, you might have a Contact page that's linked across your whole website. But that doesn't automatically mean that this Contact Us page is the most important page for your website. Maybe the home page is even more important. So that's kind of why we try to understand that these are footer links. This is what we call boilerplate, for example. So we have to look at the whole page. And we see that maybe the header, maybe the footer are things that are copied across the whole website. And from our point of view, that helps us to understand which part of the page actually changes, which part of the page is important to focus on. So if there are footer links there, then we'll recognize that and say, there are a lot of footer links here. But it's not something where we'd say, this is the most important part of the website. So essentially, that's kind of what we do there. We try to recognize on a page which part is boilerplate, so which part is repeated across the website. And we try to discount that part of the page a little bit so that, when we crawl and index that content, or when we use it for ranking, we focus on the main content that's actually visible on the page, that users would feel is the main content as well.
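(Editor's note: the boilerplate-versus-main-content distinction described here can also be made explicit in a page's own markup. A hypothetical sketch using HTML5 semantic elements — Google infers boilerplate on its own, but clean structure like this makes a page easier to interpret:)

```html
<body>
  <header><!-- Repeated across the site: logo, navigation --></header>

  <main>
    <!-- The unique content of this page: the part that actually
         changes from page to page, which ranking focuses on. -->
    <article>...</article>
  </main>

  <footer>
    <!-- Repeated "boilerplate" links: recognized as such, so a
         Contact Us page linked here isn't treated as the most
         important page just because of its high link count. -->
    <a href="/terms">Terms</a>
    <a href="/contact">Contact Us</a>
  </footer>
</body>
```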

PANKAJ: So if we want to be more conservative — meaning, all right, what we'd call passing the PageRank on to other pages — if we adjust those footer links, would that be fine?

JOHN MUELLER: I wouldn't necessarily do that. So you're going into the area of page rank sculpting then. And that's not something you need to do there. So we recognize these are footer links. And we'll treat them appropriately. It's not that we'll funnel all of your site's page rank into those pages. We recognize that automatically. And we can deal with that automatically. You don't need to manually tweak those links within your website.

PANKAJ: Thank you.


PANKAJ: Thanks.

JOHN MUELLER: All right. We have 10 minutes. I could go through more questions, or take questions from you guys. Do you have anything? Nothing special? OK. I'll just go through more of these questions here in the Q&A. If an IP address was used for spam, et cetera, does that IP stay marked when recycled through a new user or a company? Does Google see a change of use of the IP and release it as a fresh IP? Or do you suggest switching to a clean IP? In general, there are a very limited number of IP addresses left. And almost all of them were probably used for something at some point or another. So you can't really know the whole history of the IP addresses involved. I think there are very few IP addresses which we see as being problematic, in the sense that we can't really crawl them. I forgot what the name was for those kinds of IP addresses. But usually, you can't access them normally. So those are the kinds of things you need to be careful about. But if there was content there before, if content was crawled and indexed normally before on these IP addresses, even if that content was spammy, then that's not something I'd worry about. If you have a new website and you put it on one of these IP addresses, that's absolutely fine. Sometimes, when you look at things like content delivery networks, those IP addresses change all the time. And they get switched between different websites. And that's not something you could even control. So we wouldn't necessarily take action based on the IP address of a website.

SIMON: So it doesn't matter if you move your website to another hosting company with another IP? That shouldn't matter.

JOHN MUELLER: That shouldn't matter. We have recently put out an article about site moves which covers that kind of situation as well. I'd just make sure that, from a technical point of view, you're doing the right things to move from one IP address to another. But it's not something where you'd see any changes in the search results because of that.


JOHN MUELLER: It used to be that we would focus a lot on the IP address for geotargeting. Or we'd try to recognize the location of that IP address. But with the ccTLDs and the geotargeting feature in Webmaster Tools and [INAUDIBLE], that doesn't really play a role anymore. So if your IP address is located in, I don't know, Italy, or in France, or in the UK, or the US, that's not something you'd need to worry about. Some CDNs even automatically switch, depending on the user. So that's all fine. We have to be able to deal with that.
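(Editor's note: the explicit signals mentioned here — ccTLDs, the geotargeting setting in Webmaster Tools — can be complemented per page with hreflang annotations, which tell search engines which language or region variant a URL serves regardless of the server's IP location. The URLs below are placeholders:)

```html
<!-- In the <head> of each language variant, list all variants: -->
<link rel="alternate" hreflang="en" href="http://example.com/en/" />
<link rel="alternate" hreflang="fr" href="http://example.com/fr/" />
<link rel="alternate" hreflang="x-default" href="http://example.com/" />
```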

SIMON: OK. That's great. Can I ask another question?


SIMON: When we transfer the website — the company gets a new website, and yes, different languages — but we want to drop one language. Does that give a negative result because all those pages return errors?


SIMON: Sometimes from a language that we don't use any more?

JOHN MUELLER: If nobody is searching for those pages, that's fine. If people are searching for those pages, of course, they'll be kind of disappointed. And maybe you'll see a drop in traffic for those pages, at least. But if those pages are ones that you don't want to have on your website anymore, removing them is fine. It's not that we would say there are a large number of crawl errors here, and therefore we'll demote the website. Instead, we see that more as a technical matter and say, this website is serving crawl errors, which is technically correct. It's the right way to handle pages that disappear. And that's fine. That's not something we would penalize or demote a website for.
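(Editor's note: "serving crawl errors" for deliberately removed pages just means returning the right HTTP status code. A minimal nginx sketch, assuming the dropped language section lived under a hypothetical /de/ path — a plain 404 is fine too, but 410 Gone states explicitly that the removal is permanent:)

```nginx
# Hypothetical path: the dropped language section lived under /de/.
location /de/ {
    return 410;  # Gone: the pages were removed on purpose
}
```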

SIMON: OK, great.

JOSHUA BERG: John? Another question related to the authorship. For a little while there, we were seeing a few different levels of authorship. Some people were calling it A and B, or first and second level. And then some were also query related. And maybe, if your authority wasn't good enough, or a lot of your content was not as high quality as it should be, then your image didn't appear. And sometimes, your authorship entirely didn't appear in search results. So is that still effectively the same? In other words, I imagine it would be. The content you associate with the things you put on contributor to and you post to, these things are all still going to affect your authority and authorship, just as they ever were. Is that correct?

JOHN MUELLER: I think, for the most part, what we would do there is just try to focus on the technical aspects of whether or not authorship is implemented correctly and just show that now. And now that we don't have to differentiate between showing the photo or not showing the photo, it's also more consistent, I think. If we don't show authorship at all, usually, that means that the technical requirements are missing, or broken, or implemented incorrectly. But if you have it technically implemented properly, we've been able to crawl and index and process all of that, we should be showing authorship, at least with the name. Going forward, of course, with just a name.
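(Editor's note: the "technical requirements" referred to here are the rel=author markup in use at the time. A sketch with a placeholder profile ID — the Google+ profile also had to link back to the site in its "Contributor to" section for the two-way verification to work:)

```html
<!-- Page-level link in the <head> to the author's Google+ profile: -->
<link rel="author" href="https://plus.google.com/100000000000000000000" />

<!-- Or an inline byline link on the article itself: -->
<a rel="author" href="https://plus.google.com/100000000000000000000">Jane Doe</a>
```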

JOSHUA BERG: Yeah, because I think it was Matt or someone who mentioned at a conference how, when they started doing that, they found that if they reduced the authorship showing by about 15%, based on the quality related to the authors, then they were able to significantly improve the results of displaying author results.

JOHN MUELLER: Yeah. I think, especially with the photos, that definitely makes sense. Maybe that's something we have to reevaluate when we have more experience with just the name-based authorship annotations that we have now. So I think these are all things which we're always experimenting with, where you're bound to see changes over time. So this particular change that we did now is, I think, just one of the many changes that we've done in authorship over the years. I think we've had it for two years now, something like that. And I can definitely see the team reevaluating how it's working now, maybe looking at this in a couple of months, maybe in a year, and saying, oh well, maybe we need to tweak it slightly differently, based on this, to make sure that people are recognizing that these are great authors, or to make it easier for people to see the names or the history behind these people who are creating this content. It really depends on how things evolve, how users react to these changes, how we see the community around it evolving as well.

JOSHUA BERG: Right. And as with everything in the algorithms, nothing's permanent. I've said the only constant with Google is change. So you can also look at it as experimental. But in that sense, everything is. So is that what you're saying?


JOSHUA BERG: I know what's going to happen with them.

JOHN MUELLER: Well, we try to keep things consistent, because users would be totally confused if we changed everything. But we do have to react to the changes in the environment as well. So for example, if we were to show the authorship photo for all search results, then maybe that would be too much for the majority of the users, even if we had that information. So that's something where, in the beginning when only very few sites implemented authorship, maybe it made sense to show them all. Maybe now that a lot of sites are implementing authorship, maybe it makes sense to reduce that, or maybe to switch over to the text-based annotation. It's something that always has to be reevaluated. And when you look at, maybe, eye tracking studies that were done two years ago with the search results, they'll be completely different now when users are actually looking at it and when they've had some experience with the search results as they are now. So that's something that, I think, is always evolving, that makes it interesting for us, because we can't stop. Sometimes we'll see people come to us and say, hey, what are you still working on search for? It works, you know? Stop changing things. At the same time, the expectations change all the time. People are using search on different devices, on their phones, on the watches now from Google I/O. So it's always evolving.

JOSHUA BERG: What were you referring to there? Was that internal eye tracking studies, or something that you've seen a study about?

JOHN MUELLER: I saw something a while back, but I'm not really sure where that was. That was somewhere external. But we do a lot of studies internally. And as you can imagine, we don't make changes that easily in the search results. It's not that we wake up in the morning and say, hey, we should change everything to green, because that's a cool color. We really try to evaluate the changes that we make and make sure that we're doing the right thing and that we're headed in the right direction. So we definitely do a lot of internal studies for that. But I know, externally, people do studies about Google as well. So I'm sure there are some eye tracking studies around authorship as well somewhere.

JOSHUA BERG: We do studies on Google all the time.

JOHN MUELLER: Yeah. Yeah. I totally understand. It can be frustrating when changes like these happen. But it's something that will always be evolving. It's never going to be static. And there are going to be new things that are coming up. And there are going to be old things that are going away. And things that maybe worked out well, maybe things that didn't work out so well. So that's something that comes with the internet. It kind of keeps you young, or at least keeps you active. All right. I think we're out of time. But if any of you have one last question, feel free to go ahead. No last questions. All right. I see there's still a bunch of questions in the Q&A. I am going to set up some new Hangouts in July. I'm going to be in Boston for a while, so maybe we'll switch time zones a little bit and have more US-friendly ones then. But I'll definitely set up some new ones, so you can add your questions there, if you weren't able to get an answer here now. And until then, I want to thank you guys for joining, thank you for all the questions. It's been really insightful and interesting again.

SIMON: OK. Thank you very much.

PANKAJ: Thanks so much, John.

JOHN MUELLER: Have a good weekend.

SIMON: Thank you very much. It's really appreciated.

JOSHUA BERG: You too. See you.

PANKAJ: Thank you.

JOHN MUELLER: Bye.