Reconsideration Requests

Google+ Hangouts - Office Hours - 13 March 2015


Transcript Of The Office Hours Hangout

JOHN MUELLER: OK, welcome everyone to today's Google Webmaster Central office-hours Hangout. My name is John Mueller. I'm a webmaster trends analyst at Google in Switzerland. And part of what I do is talk with webmasters like you to help answer your questions and make sure that information flows in both directions as well. I have a bit of a cold today, so I don't know if we'll do the full hour. But we'll see how far we can go. Otherwise, you'll have to resort to asking me yes or no questions. I don't know if that works. We'll see. Do any of you want to get started?

MIHAI APERGHIS: I'll take a shot, since nobody else wants to. It is actually a Penguin question, since you said that you're not currently refreshing monthly or very regularly, but you plan to do that. Is it fair to say that there have been no refreshes of Penguin data this year?

JOHN MUELLER: I don't know. It's possible. I don't know the specific dates. So I think that might be correct, but I can't confirm it 100%.

MIHAI APERGHIS: So on the next refresh, will you be able to announce it? Or might it just go?

JOHN MUELLER: I don't think we preannounce it. But if this is something bigger that happens that people have been waiting for, we will try to confirm it. That, definitely.


JOHN MUELLER: We have a locations page with lots of locations. It loads with AJAX. How do we mark it up? I took a quick look at this page, and it looks really nice. But a lot of the JavaScript files are blocked by robots.txt. So we can't actually see any of the markup that you're generating there with AJAX. You can test that with Fetch as Google in Webmaster Tools, and you'll see which script files are blocked so that you can go through that individually and clean that up. Once we can crawl the page normally, we can render it normally. And then we can pick up the markup even if you're embedding it with JavaScript.

What's the best route to go? TLDs across countries, folders, or whatever? Which variation should I put my x-default tag on? Essentially, for geotargeting, for country language targeting, you can pick whichever option works best for you. So if you prefer to use a generic top level domain and folders, or subdomains, that's essentially up to you. That's whatever works best for you on your side. Your x-default is also up to you. So if you decide that you want the French version as the default, that would be fine. Hold on. I'm going to mute you.
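For reference, unblocking the script files John mentions for the AJAX question above usually just means removing or overriding the Disallow rules in robots.txt. A minimal sketch, with invented paths:

```
# Hypothetical robots.txt fragment: let Googlebot fetch the JS/CSS
# it needs to render the AJAX-generated markup.
User-agent: Googlebot
Allow: /js/
Allow: /css/

# A rule like this would hide the rendered markup from Googlebot:
# Disallow: /js/
```

Fetch as Google (Fetch and Render) in Webmaster Tools then shows which resources, if any, are still blocked.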

MALE SPEAKER: Sorry John, can you repeat that again?

JOHN MUELLER: So the x-default is also up to you. If you want your French version as a default, that's fine. If you prefer your English version as the x-default, that's fine too. That's essentially however you want to be seen by people that you're not specifically targeting within your existing hreflang sets.

MALE SPEAKER: So the reason why our cost equation is because if it [INAUDIBLE], and com is our main type, basically--

JOHN MUELLER: Aw, man. I think you got muted somehow accidentally.


JOHN MUELLER: Try again. Yes?

MALE SPEAKER: OK. What's going on?

JOHN MUELLER: Maybe ask again.

MALE SPEAKER: So the reason why I ask that question is because what if-- dot com is our main TLD, right? And would it be better then to have it in directories, making com the main [INAUDIBLE], rather than having them across the 10 TLDs?

JOHN MUELLER: It's essentially up to you. You can use different TLDs if you prefer, or if there are legal reasons, maybe you need to do that. But essentially, it's up to you. And from a ranking point of view, we will treat these equivalently if we have the right geotargeting information for those individual versions. So that's not something where I'd say you need to use country code top level domains or you need to use generic with subdomains or with subdirectories. That's essentially whatever works best for you. Maybe there are legal reasons to go with one or the other. Maybe there are just practical reasons that you have a CMS that doesn't support different subdirectories. Maybe you'd want to put it on different subdomains or on different domains completely. But that's essentially however you want to handle it.

MALE SPEAKER: OK. And then obviously, doing it across these, one would then have to get hreflang implemented properly, right?

JOHN MUELLER: Yeah. Regardless of how you set it up, make sure that you have the hreflang set up between the different versions, even if it's on the same TLD. So hreflang works everywhere.
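A minimal hreflang set like the one John describes might look like this in the head of each version (the URLs here are invented; the same annotations can also be supplied via sitemaps or HTTP headers):

```html
<!-- On every version of the page, list all alternates plus an x-default -->
<link rel="alternate" hreflang="en" href="http://example.com/en/" />
<link rel="alternate" hreflang="fr" href="http://example.fr/" />
<link rel="alternate" hreflang="x-default" href="http://example.com/" />
```

Note that each version in the set should carry the full set of annotations, including a self-reference.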



MIHAI APERGHIS: John, if I can do a small follow up. The reason a lot of webmasters and SEOs are asking this is because sometimes we are afraid that using separate TLDs, separate domains, might increase the work that they need to do to get some exposure to their domains so that Google ranks them better, rather than having subdomains and subfolders where everything is consolidated into a single domain. Would hreflang work somewhat like a canonical? So Google sees all these different domains as part of the same entity, as the same website, and ranking signals on one domain might also influence the other domains, since the hreflang kind of binds them together? Or is it not the case?

JOHN MUELLER: A little bit, but not like you would have with a rel canonical, because then we can really fold everything together. And we need to be careful that we don't run into this situation where one really strong company in one location, for example, suddenly ranks number one everywhere just because they're very strong in one location. So if you take a really big, let's say, American distributor, and they're really big in the US, because everyone goes there. Just because they open up a store in Italy doesn't mean that store in Italy is suddenly going to be number one in Italy just because they're number one in the US. So some amount of cross talk always happens there. But it's something we do try to look at on an individual basis. So it's not that the strength from one version automatically goes into the other one. The issue with separate TLDs or separate folders, I think, is something where you kind of have to think about how much content you actually have in those individual languages and different locations. So it's not something where I'd say you can create country-specific versions for all the countries out there. And Google will just rank it as local content automatically, because we do expect that this content is somehow specific to that area. So you have to balance between being able to target these individual countries and languages, but at the same time, holding yourself back from spamming and diluting your content across all of these different languages. So sometimes it makes sense to broaden that. Sometimes it makes sense to concentrate more and say, OK, I'll limit myself to two or three countries, and people outside of those countries will still be able to find my content. But I know that the content for these countries is really, really strong, because I've been able to focus my energy on it. I have great content that matches there. This is great stuff for ranking. So there's no automatic answer between splitting things up and concentrating. 
You have to look at what you have available, where you want to go, and what you want to do.

MIHAI APERGHIS: Right. I was mainly referring to, if we did decide to split up between the few countries where it makes sense to, whether to go with separate TLDs or with separate subfolders or subdomains, geotargeting those subfolders or subdomains. Would it be equivalent, or would it matter that there is a single domain with subfolders or subdomains rather than separate domains, multiple websites?

JOHN MUELLER: I suspect there might be some small factor involved there. But it's not something that I'd really worry about from a practical point of view. If you have strong preferences internally for one or the other variation, I'd just go with that. Like I said, I caution away from just splitting things up for the sake of splitting it up. But if you prefer TLDs, fine. If you prefer having everything on one domain, that's fine too.


MALE SPEAKER: Maybe I can give you some background, John. We actually had it all in one domain and added the countries with different languages into directories. When we launched our new site, we then moved to TLDs. And our traffic has dropped off for the past year. All the 301 redirects were done correctly page by page, not just domain to domain or directory to directory. But I've been wondering over the past year. It's just dropping and dropping. It's actually not having a benefit to us.

JOHN MUELLER: Yeah. I would expect that that wouldn't be related to that. So anytime you make a move like that, when you split up a site or you concentrate a site, you will see some fluctuations. It's not always smooth sailing, because we can't take all of the signals and just transfer them immediately like we could with a site move, if you're splitting things up. But if you're looking at a period of over half a year, then I suspect that these are just normal fluctuations, normal changes in the way that things are evolving in search around that market.

MALE SPEAKER: So you wouldn't suggest we go back to that, having it in directories again, seeing as we moved it over a year ago.

JOHN MUELLER: I wouldn't go back just for that. I'd really think about what else might be happening there outside of just the SEO side of things. And that's where I'd focus and try to put more weight on that. I don't think switching something like that back will have any significant effect.

MALE SPEAKER: OK. Thank you.

JOHN MUELLER: How important is it to have a mobile site versus an app? We notice a lot of ecommerce sites these days are now moving to apps instead of a mobile site. How does Google treat these sites? This is always an interesting question. But I think on the one hand, we are able to understand how apps work more and more. Especially Android apps, we're able to use them for app indexing, which means we can crawl the app content, and we can tie that together with your web content, which might be desktop content. And we can show links directly to your app content in search instead of to your desktop pages, for example. And that can be a really good user experience. So if you're searching for specific blue shoes and you find your favorite distributor in there with a deep link to their app's page on blue shoes, then that can be a really good user experience. So that's something that could make a lot of sense. On the other hand, if you have apps, you need to keep in mind that they're very platform specific. If you make an Android app and you don't have an iPhone app, then users on an iPhone are not going to be that happy with just the desktop pages. So my recommendation would generally be to make sure that you always have a mobile-friendly website. And if you want to go a step further and also provide an app for your pages, then by all means, take that step, and go down that road. If you want to tie that into your website, I'd definitely take a look at the app indexing functionality. That's something I suspect will just gain more prominence over the near future. So that's something where if you're already working on an app, and you already have a website, you can tie those in together. And that'll make it a little bit easier for us to show them appropriately in search.

MALE SPEAKER: Hey, John? Can I ask a follow up question on mobile rankings?


MALE SPEAKER: I know a couple of years ago, the word was that Google treated responsive sites better compared to an m-dot site. But is it now the case that either of them can rank equally as well?

JOHN MUELLER: Yeah. So in our documentation, we have three different variations: m-dot separate URLs, responsive sites, and also dynamic serving, where you use the same URLs and you just serve the content dynamically depending on the device. And all three of those, if they're implemented properly, are perfectly fine from our point of view. And it's not something where I'd say you need to go responsive to get a ranking benefit. From our point of view, they're essentially equivalent. For users, they click on the result, they land on a mobile-friendly page, and that's essentially what we're looking for. We're not looking at promoting any specific technology. It just should work well on a mobile device.
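For the separate-URLs (m-dot) variation John lists, the documented way to tie the two versions together is a pair of annotations. A sketch with invented URLs:

```html
<!-- On the desktop page: point to the mobile equivalent -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="http://m.example.com/page">

<!-- On the m-dot page: point back to the desktop version -->
<link rel="canonical" href="http://www.example.com/page">
```

This bidirectional pairing is what lets Google treat the two URLs as one page with two device-specific variants.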


JOHN MUELLER: One of my clients was hit by Panda after copying his old articles to LinkedIn. Is it possible to be hit even if your original article was posted on your blog before posting it to stronger websites? Can't use a canonical on LinkedIn. So in general, the Panda algorithm is a quality algorithm that we run which tries to promote higher-quality content, and which looks at a variety of factors around content quality. So just by copying an individual article to another website and publishing it there, that's unlikely to upset our algorithms. That's something where in general, when we look at a website, we try to understand the quality of this website. And we look at everything around this website. So by copying one article over to another site, that's not going to affect that. So my recommendation there would be to not worry so much about this specific thing that was happening there, but really to take a step back and look at the quality of the website overall, and maybe find some people who aren't associated with your website who can give you a little bit more insight as well about what might be happening there, or how they feel about your website when they come to it for the first time, when they see it for the first time. How they navigate within the website, how trustworthy they feel this website is, if this is something that they'd give their credit card number out to. So these are the kind of things where you'd want to have someone give you really hard feedback about your website and things that you probably don't want to hear directly, because someone might be criticizing something that you've been working on for years. But I think a lot of times, it's really important to get this neutral feedback, even if it's harsh. Even if it means that you have to rethink your whole model.
So from that point of view, I wouldn't focus so much about this individual article that got copied to LinkedIn, but rather, think about how your website interacts with users overall-- how they feel about it when they go there. And maybe do some user studies, maybe find some people who aren't associated with your website who can give you a little bit more help. We syndicate some of our products to third party websites. For example, affiliates, aggregator deal sites all link back to our site with rel nofollow. An SEO guru said this could give us a penalty, despite the nofollow. Is that true? That's not true. So if these links are there with a nofollow, then we drop them from our link graph. That's as simple as that. So it's not something that you'd have to worry about there. And links from those kind of sites with a nofollow is perfectly fine. They drive traffic to your website. They make it possible for people to go to your business to buy these things that you're offering. And that's perfectly fine. That's a good use of a nofollow. So from that point of view, that's absolutely not a problem. If I have a paginated list and change the default sort order but keep the same URL, does that matter? The tricky part here is we'll crawl it in one variation, and we'll look at that variation. So we can't see how you're changing the sort order for users. And if we can find all the individual items that are linked from there, then that's perfectly fine. If we can't find them unless someone does the right step of clicks and gets the right cookie, the right setting to actually get that collection of links, then that's more of a problem. But if this is something we can still crawl through, we can find all of your content, that's fine.

MALE SPEAKER: John? Can I step in with a follow up on this?


MALE SPEAKER: How about the secondary, tertiary pages, and so on and so forth? If those pages contain a noindex tag, you still see all the products, of which maybe there can be thousands. Those specific pages have a noindex. How do you treat the products on those pages? They have noindex, follow.

JOHN MUELLER: Yeah. So we follow the links. We crawl them. We don't index those pages. But we follow the links.

MALE SPEAKER: That means you won't index, even the products, of course?

JOHN MUELLER: Well, if it has the noindex, you're telling us not to index it.

MALE SPEAKER: Yeah. Then does it make sense for those pages to have rel prev/next, to have the pagination, if they have noindex?

JOHN MUELLER: If they are noindexed-- I don't think it really makes sense to do the rel previous and next, at least not for Google. But at the same time, we'd probably just ignore it there. So it's not that you would cause any problems. If you're using rel next and previous for other reasons, then maybe keep them there. But at least from Google's point of view, if we can't index them, then we can't take them as a collection of pages about this topic.

MALE SPEAKER: I understand. So if we drop the noindex, then the rel prev/next pagination would make perfect sense, right?


MALE SPEAKER: OK. Thank you very much.

MIHAI APERGHIS: By the way, John, would you recommend the rel prev/next tags over noindexing a page 2, 3, 4 of a category? Because a lot of CMSes kind of noindex these pages automatically. Even WordPress and [INAUDIBLE] gives you this option [INAUDIBLE]. So would you recommend the rel prev/next and leaving the pages to index, or rather just noindex them?

JOHN MUELLER: I don't have any strong preference either way. I'd leave that up to you. I don't think this is something where Google would say, you should do it this way or that way. I can see perfectly good reasons to say I want to noindex these pages because they're not really that interesting. It might also be that there are situations where you have sites where this is actually useful and interesting content that make sense to index or that makes sense to at least index as a part of a series. So it's up to you.
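If you do keep paginated pages indexable, the rel prev/next annotations discussed above would look something like this on a hypothetical page 2 of a category:

```html
<!-- Page 2 of a paginated category, tied into the series via prev/next -->
<link rel="prev" href="http://example.com/category?page=1">
<link rel="next" href="http://example.com/category?page=3">
```

The first page carries only a rel="next" link and the last page only a rel="prev" link.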


JOHN MUELLER: I noticed new markup for video games. If putting this on an official site, will this directly influence the Knowledge Graph? Will this impact any other searches? I saw this for the first time. So thanks for linking that. That's really cool. The important part here, I think it's worth a mention, is that this is for an official site. So if you have a fan page, if you have an affiliate page for one of these video games, then that's not really the markup that you need to use. But for official sites, this is a great way to give us the information that you really want to have used in the Knowledge Graph, in the Knowledge panel on the side. So that's something where if you have an official site for a video game, then by all means, give us that information. I think there are attributes like publisher name, those kinds of things in there, which we do try to show in the Knowledge Graph. And if you can give us an authoritative source, then that's even better. So that's something we will try to take into account. It doesn't apply for other searches, though. So moving on to the other part of the question there. What's the best way to find the number of pages indexed in Google? Site query shows a different number, index status, site maps, et cetera. So the site query is more like a very, very, very rough number, which can sometimes be a few orders of magnitude off. So plus or minus a couple zeroes at the end. And I don't think that makes it very actionable. So as an SEO, I wouldn't focus on that. The index status is essentially the correct number of pages that we have indexed. That can be a bit misleading, because it shows everything that we happened to run across and index from a website, which could include a lot of duplicate content, which could include a lot of things that you don't really care about. Personally, I prefer the site maps count, because that's something where you can submit the URLs that you really care about.
You can say, these are the URLs I actually do want to have indexed. And we'll give you information saying, well, out of this set of URLs that you submitted, this many actually are indexed at the moment. And we look at the exact URLs. So if you submit one version, like with /index.html at the end, in the site map, and we index it just with a slash at the end, then that's something that we wouldn't count as being indexed. But at the same time, for you that's interesting information, because it tells you that maybe your site map file isn't completely perfect if you see bigger mismatches there. So I'd focus as much as possible on the site map count there. I really like site maps.
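The video-game markup mentioned at the start of this answer can be expressed as schema.org structured data on an official site. A sketch with invented names, assuming the schema.org VideoGame type:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "VideoGame",
  "name": "Example Quest",
  "publisher": {
    "@type": "Organization",
    "name": "Example Studios"
  },
  "gamePlatform": "PC"
}
</script>
```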

MALE SPEAKER: John, can I ask a follow up question please?


MALE SPEAKER: Is there any way to download a list of your indexed URLs from Webmaster Tools? I can't seem to find anything like that. It shows you how many are indexed. But if I'm missing something on a site map, can I find out which ones in my site map aren't indexed?

JOHN MUELLER: No, not at the moment. So we had that as one of the wishes submitted with the wish list we did earlier this year. But at the moment, that's not something that you can do directly in Webmaster Tools. What you can do is narrow things down on your side if you want to take the time to do that. So you can submit multiple site map files, of course. And you can separate the URLs on your site into logical sets of site map files where you say, well, these are all my category pages. These are all my product pages. These are all my articles. And that way, you can narrow things down a little bit. But sometimes, it's a lot of manual work. So I don't know how useful that is to do in practice.
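The splitting John suggests can be automated with a small script that groups a site's URLs into logical sitemap sets, so each set's indexed count in Webmaster Tools narrows down where the gaps are. This is only a sketch; the URLs and section names are invented:

```python
# Sketch: group URLs by their first path segment so each group can be
# submitted as its own sitemap file (categories, products, articles...).
from collections import defaultdict
from urllib.parse import urlparse

def group_urls_for_sitemaps(urls):
    """Group URLs by first path segment, e.g. /products/... -> 'products'."""
    groups = defaultdict(list)
    for url in urls:
        path = urlparse(url).path.strip("/")
        section = path.split("/")[0] if path else "root"
        groups[section].append(url)
    return dict(groups)

urls = [
    "http://example.com/products/blue-shoes",
    "http://example.com/products/red-socks",
    "http://example.com/articles/sizing-guide",
    "http://example.com/",
]
groups = group_urls_for_sitemaps(urls)
# Each group would then become a separate sitemap file to submit.
```

Comparing indexed counts per submitted sitemap then points at the section with the biggest mismatch.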

MALE SPEAKER: Yeah. OK, thanks.

MALE SPEAKER: John, regarding things that get omitted from the index. Is it at all possible, or should you-- like often, I'll be looking around for things that I need to fix. And some of the things are omitted. Is it worth fixing those things first or other things that are currently in the index?

JOHN MUELLER: Both, I guess. If you know about issues on your website, those are things I'd fix. If you know about the pages that aren't indexed yet, it might make sense to look about how you can link to them better within your website. But essentially, it's hard to say which one you should concentrate on more. It really depends on your website and where you think your bigger problems are. Sometimes, the bigger problem is that your website isn't really crawlable, and that we can't index any of the content, because we can't actually crawl to it. And that can be a really serious problem. But sometimes, it might be that we can crawl it just fine, but the content that we find there is actually not so hot. Then fixing the content would be something I'd recommend there.

MALE SPEAKER: Sure. Thanks.

JOHN MUELLER: With Google adding mobile as a ranking factor, we have sites that average 20,000 pages. And lots of them are mobile optimized. But a few aren't. What's going to happen? So essentially, we're looking at this on a per-page basis. So if those pages that are mobile friendly are the ones that we're showing in search, that's perfectly fine. The ones that aren't mobile friendly are probably the ones that you really want to work on. But if these are pages that nobody really sees in search, then maybe that's not so critical. But it's not the case that the whole site will disappear from search if you have just a part of the site that's mobile friendly at the moment.

JOSHUA BERG: John, hi. I've got a question about Panda quality algorithms. Let's say a site that's been around for a long time got hit by Panda 4.0, the May 2014 algorithm, and then they didn't do anything about it, because they were still planning on improving the site. And then later on, the next Panda algorithm comes out, like the September 2014 one. Do these quality algorithms, in general, build on each other? Or are they still just taking the site from where it's at? So the reason I'm asking is because I've got some old sites for clients. And they didn't update them in a while. They got hit badly by the last Panda update. So should they be looking at, oh, we need to get working on this before the next one rolls around, because they will technically build on each other? Or will the new algorithm just take the site from where it's at in its current relevance?

JOHN MUELLER: No. In general, our algorithms are looking at the current state of a website. And of course, that involves things like crawling and indexing, because when we say the current state, that means the current state as we know about it, which means we had to crawl, and index, and process all these signals first. And at that point, we're looking at that current state. But it's not the case that we'd say, well, a previous algorithm looked at your site a year ago and thought it was bad. Therefore, you have to do even more work than a normal website would have to do to get out of that. It's essentially if we look at the current state and we say, well, this is fine, then that's a good sign.

JOSHUA BERG: So what I was thinking about are the longer term algorithms. Like, two years or so ago, there was a Panda or some update that came out which, when it refreshed, lifted some sites that had been down for a year plus. So these are the longest of the quality control type of updates. Would this scenario apply more to those, so that if someone is considering rebuilding an old site, they should say, well, there have been a few Panda updates, and we haven't worked on it for a long time, so we'd be best off starting with a new URL and going from there? Or is it not the case that, because the site's been a problem for quite a while, it's got hit by these longer term problem algorithms?

JOHN MUELLER: I wouldn't worry about it from that point of view. If you're considering revamping your website and making it significantly better, then that's not something where I'd say you need to move to a different URL to profit from that. I think users are going to respect that if you make bigger quality changes like that. And the search engines will pick up on it as well as they reprocess the data. So from that point of view, I wouldn't artificially hold this back or artificially push it into a specific bucket. But I'd rather think about what you can do to improve the site overall and actually go ahead and do that.

JOSHUA BERG: All right. Thanks.

JOHN MUELLER: My Magento website automatically applies templates depending on user agent, which means with the exact same link, mobile users see different content from desktop users. How does Google treat my website? Which version does it choose to index? If you're separating between mobile and desktop users, then that's essentially dynamic serving, as we call it, where you serve content based on the device that the user's using. And that's perfectly fine. That's a legitimate strategy to create a mobile-friendly website. So from that point of view, that's something that sounds like you're doing the right thing there. As to which version Google would index? In a case like that, we'd index the desktop version in general, because that's the main version, the version that we see with the normal Googlebot. And that's the one that we would focus on. If you don't have a desktop version and your whole site is just mobile, then of course, we'll focus on that too.
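With dynamic serving as described here, Google's guidance is to send the Vary HTTP header so crawlers and caches know the response changes by user agent. An illustrative response:

```
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Vary: User-Agent
```

This also hints to Googlebot that it may be worth crawling the URL with a mobile user agent as well.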

MALE SPEAKER: Hi John. Just wanted to follow up on this one. So we're also providing this [INAUDIBLE] website. So one version is showing to our Indian users, and one for the US users. So Google is always indexing our US version. Any time I check the cache or [INAUDIBLE] version, I only see the US version, not the Indian version. How can I make sure that my Indian version is also getting indexed by Google? We're updating it regularly on a daily basis, maybe [INAUDIBLE]. But the US version always gets crawled.

JOHN MUELLER: Yeah. That's a good question. That's tricky. Let me just mute you for a second. There's a lot of background noise. So that's kind of tricky, because it's a difficult situation, of course. Our recommendation is generally to have separate URLs for the different language or location versions. So you would have one version for slash US and one version slash IN for India, for example. That's essentially the optimal situation. Then we can crawl and index both of those versions separately. If you have hreflang set up between those versions, we understand that they belong together. If you have one version, like the homepage, that automatically redirects depending on the user's location, then you could also include that in the hreflang set as the x-default page. But essentially, the important part is really that we have separate URLs for the different country versions so that we can actually crawl and index the separate versions. Because otherwise, like you said, we tend to crawl from the US. And we'll probably just see the US version, and we'll miss out on all the content you have specific for Indian users. So that's something where if you can set it up to have separate URLs, that would be the optimal solution there.
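The setup John recommends, separate country URLs plus a geo-redirecting homepage as the x-default, could be annotated like this (the URLs are invented):

```html
<link rel="alternate" hreflang="en-US" href="http://example.com/us/" />
<link rel="alternate" hreflang="en-IN" href="http://example.com/in/" />
<!-- Homepage that redirects by user location serves as the default -->
<link rel="alternate" hreflang="x-default" href="http://example.com/" />
```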

MALE SPEAKER: All right. Thank you.

JOHN MUELLER: Mobile responsive versus desktop. Could it be a problem if we hide too much content on the mobile version with display none? The UI is better if we hide it on mobile, but the difference to desktop is becoming pretty big. In general, like I said, we do try to focus on the desktop version for crawling, for indexing. So in general, we try to pick up on that. Our requirement is that the mobile version be essentially equivalent to the desktop version. That doesn't mean all the text is exactly the same. It doesn't mean that the whole UI is the same, that you have all the links in the sidebar the same. But essentially, the primary content should be equivalent on the mobile version. And if that's the case, that's perfectly fine. So you don't have to show all of the sidebar, the footers, the headers, maybe bigger images, those kinds of things. That's perfectly fine to adapt that for mobile and not show that, or limit it, or to show a different variation of that. So a lot of sites, for example, take big images, and they fold them into smaller ones or ones that you need to click on to actually see for mobile. And that's perfectly fine.

We're seeing a lot of content mismatch errors in Webmaster Tools for Android apps. Being an e-commerce site, we end up showing different content on desktop and m-site. How do we tackle this issue? Will it affect our m-site ranks? So this wouldn't affect how we rank your website in general. This is essentially a problem when it comes to app indexing, where you give us one URL that goes to your app and one URL that goes to your website. And we try to figure out what the primary content is on the app page, as well as on the web page, and make sure that it matches. So for example, if you have one page on your website about blue shoes and the equivalent page on your app is about red socks, then that would be essentially a content mismatch. But if the equivalent page on the app is also about blue shoes, but written slightly differently, then that's something where we might not be picking it up perfectly there. So if you see things like that and you think the content is actually equivalent but we're not picking up on it properly, then definitely let us know about that. Maybe post in the help forum. Give us the information that we need about the app specifically that you're using there so that we can follow up with you and see what exactly is going wrong there. Is it something on our side maybe that's interpreting your content wrong? Or is there maybe something in the app that makes it hard for us to actually pick up that content?

MALE SPEAKER: Hi John. I just want to follow up on this question. We have this site where one is desktop and one is a separate mobile URL, basically. So where do we need to add this app indexing URL, to our desktop site or to our mobile website?

JOHN MUELLER: Good question. I don't know for sure. I'd double check in the documentation. My understanding is that it needs to be tied in with the desktop site. And if you have a mobile site that's already set up as an alternate, then we'd see that three-way connection anyway. But my understanding is it should be with the desktop site. But I'd really double check in the documentation first.
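For app indexing, the deep link annotation goes on the web page (per John's understanding above, the desktop page). A sketch, with an invented package name and path:

```html
<!-- android-app://{package_name}/{scheme}/{host_path} -->
<link rel="alternate"
      href="android-app://com.example.shop/http/example.com/shoes/blue" />
```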

MALE SPEAKER: OK. Thank you.

JOHN MUELLER: My voice is giving out, so I'll take a break here. It looks like we still have a bunch of questions left. I'll copy them out and see if I can answer some of them offline afterwards. But otherwise, so far, thank you guys for all the questions, and I hope this was a useful Hangout again. And maybe we'll see each other again in the future ones.

MALE SPEAKER: Thank you, John, and we wish you good health.



MALE SPEAKER: Thanks, have a good weekend.

JOHN MUELLER: Thanks, have a good weekend, everyone. Bye.