Google+ Hangouts - Office Hours - 11 December 2015



Transcript Of The Office Hours Hangout


JOHN MUELLER: OK, welcome everyone to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland, and part of what I do is talk with webmasters and publishers like the ones here in the Hangout and maybe the ones that are watching this video. As always, I'd like to give the people here in the Hangout a chance to ask the first question. Is anything on your mind before we head through the submitted questions?

MATT PATTERSON: I'd love to revisit my question from last week, which may be a bit tedious-- if someone wants to go before me, please do. OK, so as I asked on last week's, last Friday's call, we are having issues, we think-- at least we don't understand exactly what's happening with the way we're tackling hreflang tags for international content. So just to quickly recap. We're a music video streaming service. So we have lots of music videos. They have language, but it's not like text language. And then we have the chrome around that. So the login, the about pages, all that kind of stuff-- that's translated into German and English. We're a German site. International stuff is in English. And we've been trying to figure out how best to represent that to Google so that Germans see German in the search results and everyone else sees German or English, whichever they would prefer. In the last Hangout, I said that we were doing hreflang x-default and rel canonical for basically all the video play pages. So, say, Rihanna's "Umbrella" gets rel canonical and x-default. And we also support sticking hl=en or hl=de as a query string parameter, and those pages get rel alternate-- hreflang en and hreflang de-- and the rel canonical stays the same. And you said that you thought we should change it so that we use the rel canonical to point to whichever of the language variants you are explicitly on at the moment. So if you land on hl=en, the rel canonical should point to hl=en, because otherwise the canonical is used as a signal to drop the language variants from the index. And in looking at that since, we're seeing that actually Google is behaving kind of the way we'd expect it to. If I go to Google.de and set it to German, I get the German [? tape TV ?] pages. If I go to Google.de and set it to English, I get the English [? tape TV ?] pages. And the Google Play store is doing what we're doing: it's keeping its rel canonical on the unadorned language version-- keeping its rel canonical pointed to the x-default. So I'm just a little bit confused, because I'm seeing the pages I'd expect, but your explanation made total sense as well. I'm just a bit baffled at the moment, really.

JOHN MUELLER: Yeah, so we're sometimes showing the wrong version? Is that what you're saying?

MATT PATTERSON: No, no, you're showing the right version. But after our conversation, the behavior I would expect to see would be that you would only show the German version, because that's what gets served by default to Googlebot if it hits the rel canonical URL. But we are seeing German and we're seeing English on pages that have the hl=en or hl=de variants listed in the meta tags.

JOHN MUELLER: I'd probably have to look at some specific examples to see, with the specific URLs, whether we're picking it up or not, or whether we're showing it like this or like that. I do know we try to figure out what the webmaster is trying to do. So if you set the rel canonical like that, then we'll still try to figure it out. But in an ideal situation, everything leads to the same final destination and the same decision on our side with regards to which URL to show. So that kind of helps. But I am happy to take a look at some example URLs if you have some. If you can send me something, then I can see: is it working as expected? Is it doing what it's supposed to be doing, even though you're not doing it completely the way that we'd expect? Or are you doing the right thing, and we're not picking it up properly?

MATT PATTERSON: Yeah, I basically want to make sure we're doing what you would expect and not doing the wrong thing for no good reason. So cool, thanks very much.

JOHN MUELLER: Sure.
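
For reference, here is a minimal sketch of the annotation pattern John suggests, using a hypothetical example.com video page with an hl query parameter standing in for the site's real URLs: every language variant carries the full set of hreflang alternates plus an x-default, and the rel canonical on each variant points at that variant itself rather than at the x-default.

    <!-- On https://www.example.com/watch/umbrella?hl=de (German variant) -->
    <link rel="canonical" href="https://www.example.com/watch/umbrella?hl=de">
    <link rel="alternate" hreflang="de" href="https://www.example.com/watch/umbrella?hl=de">
    <link rel="alternate" hreflang="en" href="https://www.example.com/watch/umbrella?hl=en">
    <link rel="alternate" hreflang="x-default" href="https://www.example.com/watch/umbrella">

The English variant would carry the same set of alternates, with only the canonical swapped to its own hl=en URL.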

DANIEL PICKEN: Can I jump in with a question, John?

JOHN MUELLER: Sure.

DANIEL PICKEN: I think we need some clarification on deciding what pages we should have in the index. I think Google's best practice is to minimize thin and duplicated content because of Panda-related issues. And we would control that either by canonicalization or noindex or maybe even robots.txt. Now, I've got a situation where I've got a client, and they allow users to sell their own cars. So that generally leads to a lot of pages, content-wise, where users can put on whatever they want. And in a lot of cases, it's quite thin content. It's probably got a bullet point or two and a lot of images of the car, but not a lot there. And that happens across the entire site because, again, we're allowing the user to decide the content. So, on one hand, we can either just allow people to decide what pages get indexed, or we try and control that. But, again, I suppose we really need an understanding of why or how we should control this duplicate or thin content across the entire site, because it's not low quality-- but I don't know, does Google see it as low quality? We've got loads and loads of thin pages that are very, very similar-- the same type of car.

JOHN MUELLER: They could be low quality. It's something where I try to look at it from a user's point of view rather than from a very objective point of view where you're saying, well, there are 20 words on this page which are unique, and there are 30 words on this page that are unique; therefore, we'll take the 30-word version instead of the 20-word version-- because it might be that sometimes there's something valuable even in a short piece of content. So that's something where you can't take a single one-size-fits-all approach and say, this is how we should be filtering the content on our side, maybe we should be noindexing the lower quality or shorter content. You really need to look at how your users are going to react to that, and at what they would perceive as being low-quality content. So if they came to one of those pages on your site by looking for a car that matches those criteria, for example, would that be a good user experience for them? Would that be something that they'd be happy with? Or is that something where they'd say, well, there is nothing on here that I could find useful or valuable; therefore, this was a bad search result for me, and it's Google's fault for sending me to this page. So those are the situations that you should be looking at, or could be looking at. And depending on the type of site, you might take metrics like the length of the content, or metrics like how people respond to this content. Do people react to this content? Is there a way for them to interact with the content, like leaving comments, or voting things up or down, or putting it on, I don't know, a preference list or something like that? Those are the kinds of things where maybe you could collect some signals about what defines good content for your site, and use that to, on the one hand, maybe filter out the things that are really kind of useless-- if people are posting pictures of stuffed animals on a site that sells cars, then that doesn't really make sense-- and, on the other hand, maybe pre-moderate things automatically, where you're saying, well, I don't know how well I can trust this content; therefore, I'll just put it on noindex for the moment. And if it picks up over time, then we can always flip that back automatically.

DANIEL PICKEN: Yeah, I think the main issue is that we've got loads of the same type of page. And, on one hand, OK, each one is going to be quite specific. It might be a particular year, or it could be petrol, diesel, et cetera-- something to identify the specific car that somebody is looking for, so that we are serving the most relevant page. But it might be that I've got loads of different pages relevant to the same [INAUDIBLE] based on the query. And I don't know. It's quite difficult. So I see what you're saying in terms of whether or not that's going to be good for the user. But will it be bad for the user if we've just got multiples of the same car that differ only very, very slightly? Should those pages be in the index? Would you say they should be in the index? Because, again, I could be very specific about my query and the type of car that I'm looking for.

JOHN MUELLER: Yeah, sure. I mean, maybe this is the kind of situation where, for the most popular makes and models, you have something like a category page that gets indexed instead of the detail pages, where you say, well, these are all the cars of this type in your location which we have available, and you can click through and look at them individually or compare them or whatever you need to do there. But that's something where you need to look at it on a per-site basis. It's not something where I'd say all sites should do it like this, or should focus on noindexing content that might be thin, because for some kinds of content, like you said, we might want to have all of those different variations indexed, because people are looking for something very, very specific, and they need to be able to find that somehow.

DANIEL PICKEN: So that leads to my second question, which is very much related. Again, every website is different, and we deal with many, many different websites-- we've got multiple clients. So every time we look at this issue, we ask: OK, is this page consistently duplicated? If it is, then maybe we canonicalize those pages, and then maybe you'd be able to be a little bit more, I suppose, accurate in the way in which you crawl the site, and you don't have to use lots of resource. But if there are pages that, yes, are thin, but are very much relevant to that user, then we just really think about the user rather than Google. We're not going to have a Panda problem-- we're not going to be subject to Panda-- just because we have loads of thin content, when there isn't a lot to say about that car? Because that's where we just want to be very careful that we don't get hit by any kind of [INAUDIBLE] algorithm update. We will obviously make every effort to make sure that whatever content is there is quality. It isn't just going to be, like you said, random animals or random content. It's going to be directly relevant content-- there might just not be a lot of it. That's not going to cause us an issue, is it?

JOHN MUELLER: That shouldn't be a problem. There are some really big sites out there that have a lot of content. And we try to look at them overall to see what makes sense here with regards to maybe a new page that we discover on the site, like, how can we put-- which bucket should we put it in? The probably high-quality or probably low-quality bucket before we are able to really figure out what this page is about? That's what we're trying to pick up on. The other thing that you could also do with user-generated content like that is give the user who posted that content some feedback. And maybe say, well, it looks like this is a really short description. Are you sure there isn't anything more specific that you can post about this article before we actually publish it?

DANIEL PICKEN: OK, yeah, great. Thank you very much, John, appreciate it.
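
One way to express the pre-moderation idea John describes is a conditional robots meta tag in the listing template. This is only a sketch, assuming a hypothetical car-listing page; the threshold logic and wording are illustrative, not a prescription:

    <!-- Hypothetical listing template: while the listing fails the
         site's own quality checks (e.g. description too short), it
         ships with a noindex that is removed once the content improves. -->
    <head>
      <title>2012 Diesel Hatchback, One Owner</title>
      <meta name="robots" content="noindex">
    </head>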

JOHN MUELLER: Sure. All right, we have a bunch of things in the Q&A as well. So let me run through those. I don't actually have that many questions in the Q&A, so I'm sure we'll get back to questions from you all afterwards as well.

"When having URLs with both HTTP and HTTPS in the Google index right now, is it OK to redirect all HTTP to HTTPS from one day to another with a 301? Or would you recommend using canonicals, or doing a slow transition there?"

From my point of view, I'd rather do a full redirect from one day to the next. In general, when it comes to site moves, when we recognize that there is really a full redirect from one site to another, we can process that a little bit faster, because we can see, well, everything is moving like this; therefore, we can assume that it will be consistent across the site. Compare that to situations where maybe a part of the site is moving, or you're moving in separate chunks. Those are the ones where we say, well, part of it is moving, but part of it is staying-- how should we deal with the signals across the site? We almost have to reevaluate the whole destination site step by step again so that we actually understand the current status there. So if you can move a site completely at one time, be it from one domain to another or HTTP to HTTPS, that usually makes things a little bit faster, because we'll be able to pick up crawling a little bit, we'll be able to handle indexing a little bit better, and all of that moves forward a little bit faster than if you do it step by step.

"I've asked this question a few times, and a lot of times in the forum; no one seems able to help. We're unable to get our mobile site indexed by Google. We've exhausted everything."

I probably need a link to the forum thread so that I can take a look there. I will try to look it up with just the name, but that's probably a bit tricky sometimes. So if you can post a link to the forum thread, or send me a note on Google+ directly, then I'm happy to take a look to see what's happening.

"Is there any advantage to having multiple sitemaps instead of a single one? Separate them out by category, or a single one for everything, or one just for images?"

In practice, it's really up to you-- whatever works best for you. I like to split up the sitemap file, because in Search Console you can look at the index stats by sitemap file. And that sometimes makes it a little bit easier to understand what types of pages are currently being indexed and what pages aren't being indexed. You don't see the specific URLs, but you see, from the category pages I have 90% indexed, from the product detail pages I have 70% indexed. And that might be more useful information than just seeing, overall, I have 72% of my pages indexed, because that way you have a little bit more granular information. So I kind of like splitting things up. But in practice, from a technical point of view on our side, our systems handle both small sitemap files and big sitemap files in the same way. We can process them really quickly. So it's not that you'd have any technical advantage by doing that.

"I'm working on a project with local contacts and do some testing with schema.org markup for local businesses. Does Google somehow recognize local business markup already and use it to enrich the search results?"

Yes, we do pick up some of that markup. I don't know offhand which properties we do pick up. I know we pick up, or try to pick up, the opening hours, phone number, and address information. And we try to use that in the knowledge graph as well. Opening hours are, of course, sometimes a bit tricky, because there are multiple sources of opening hours. On the one hand, they could be on your site; they could also be something you submitted to the local business directory directly. And we have to figure out which one of these is the correct set. But you can mark those things up, and we do pick that up from pages. And in general, from talking with the people that work on this markup, that work on the local data extraction from web pages, it really helps them to see more and more sites actually use this markup, because that encourages them to use it more and more on our side as well. Whereas if we don't see any sites using this markup, then we're probably not going to spend that much time on figuring out how to include it in the search results.

"Penguin 4-- are we likely to see it in January?"

I don't want to make any date promises. But I'm hopeful that things are lined up. And since it was on the edge for this year, I'm pretty confident that's good enough for January. But I really don't want to make any announcements on that.
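
For reference, a minimal sketch of the schema.org LocalBusiness markup discussed above, covering the properties John says Google tries to pick up-- name, phone number, address, and opening hours. The business details are invented for illustration.

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "LocalBusiness",
      "name": "Example Garage",
      "telephone": "+41 44 000 00 00",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "Musterstrasse 1",
        "addressLocality": "Zurich",
        "postalCode": "8001",
        "addressCountry": "CH"
      },
      "openingHours": "Mo-Fr 09:00-18:00"
    }
    </script>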

DANIEL PICKEN: John, can I just jump in again, sorry?

JOHN MUELLER: Yeah.

DANIEL PICKEN: In the SEO community, people are reporting on a potential-- we're calling it a phantom update. So on the 19th of November there were a lot of either positive or negative movements across a lot of key phrases. Now, I don't think we've found any official word from Google, and we've had a look at our clients, and we can see that it correlates around the 19th. Is there anything that you know of that Google has done on your side at all that you can talk about?

JOHN MUELLER: I don't think we have anything to announce. We do make updates all the time. And sometimes these updates are more visible than others. But it's not that we give external names or announce all of these changes. So I imagine what you're seeing there is just one of the updates that we made over time with regards to search quality. And it's not that I would be able to say, well, no, you're not seeing this update or, yes, you are seeing this update. I'm sure we made an update then. We make lots of updates all the time, and some of them are more visible. Some of them are more visible to niche communities that look at specific sites. And some of them fly under the radar of all of these people, and they are actually pretty big changes from our point of view. So from our point of view, we try not to make too much of a fuss about a lot of these updates because they're essentially just our work as usual.

DANIEL PICKEN: Right, OK. OK, I thought I'd ask. Thanks.

JOHN MUELLER: Yeah, I mean, it's always tricky for us when we see a lot of people talking about something externally. We like to say, well, it's this specific change, and it comes from this and that. But a lot of these changes don't have any clear source or clear actionable effect that we can say, well, this means you need to improve your keywords on your pages so that we think that they actually look better, because we try to focus on the final results, on the relevance for the user.

DANIEL PICKEN: Right, cool.

JOHN MUELLER: OK, I think this is actually your question here. You mentioned the other week that it's down to the webmaster's prerogative to decide--

DANIEL PICKEN: Oh, you can take that [INAUDIBLE].

JOHN MUELLER: --whether it makes sense to index noindex pages. Yeah.

DANIEL PICKEN: Pretty much why I asked. [INAUDIBLE] an interesting case.

JOHN MUELLER: Yeah, it's something that's also interesting because when we talk to the search quality teams about this, they usually say, well, you shouldn't be noindexing low-quality content; you should be making it high-quality content instead. And that's easy for engineers on our side to say. But when you have a larger website, that's not always that easy. So sometimes using noindex, or finding other ways to deemphasize that content, is an easier move-- or something that's easier done at a larger scale, at least. But if you have a chance to actually improve the quality of the content, then that's, of course, always preferred.

DANIEL PICKEN: Absolutely. I think it's just marrying up the resource to the size of the website. And I couldn't agree more. I think from my point of view, it's making sure that we're making the right decisions for every site that we look at-- and not necessarily doing it just from a Google point of view. Obviously, you've got me thinking about the user point of view. And maybe that's probably what Google has been doing all along-- hence the search results as they are. But, yeah, thank you.

JOHN MUELLER: I think this is pretty much the same with regards to noindexing content. There is a variation here about robots.txt. In general, I try to avoid using robots.txt for thin or duplicated content because it just results in us actually indexing those URLs without any content. So if we can recognize that a page is clearly noindex or clearly has a rel canonical or a redirect set up, that makes it a lot easier to funnel all of the signals to the right pages. Whereas, if it's blocked by robots.txt, then we don't really know what we can do with this at all.

DANIEL PICKEN: I think, John, with the robots.txt, it's more to do with trying to help Google in terms of the resource that it can apply to every website. So I agree with you on that. But if you have a large area of the website where you know the content is probably pretty useless for the user, and probably useless for Google to crawl, then what kind of situations would you expect us to be using the robots.txt file in, just to give us some examples?

JOHN MUELLER: Good question. I've been trying to figure out where we could best recommend this. On the one hand, the general situations where crawling causes server problems-- that's definitely one situation where using robots.txt makes sense. So if we crawl a part of your site that requires a lot of dynamic server interaction and lots of database lookups, where each page takes half a minute to load because all of the servers in the back are running like crazy, then that's probably not something you want Google to crawl, because we'll just bog everything down and make it unusable for normal users. So that's definitely one area where I see robots.txt making sense. That could expand to things like some kinds of search results pages as well, where basically any URL would be a valid URL, a valid search term, and you'd have to run through it and figure out what might be coming back. So that's something where sometimes I could see it making sense for search results pages, too. For duplicate content, for thin content, I really try to avoid using robots.txt because, essentially, we lose all of those signals that we might have for those pages. If you have internal links pointing at those pages, then we lose that internal PageRank that you're passing. If you have external links going there, we lose all of that, because we don't know that we can actually fold these pages together and make one of your existing pages a lot stronger. So putting something like a rel canonical, or even just a noindex if you don't want it indexed, or a redirect on those pages, and letting us crawl them, makes a lot more sense from my point of view, if you can do that.
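
A minimal robots.txt sketch of the split John describes, assuming a hypothetical /search path for the expensive internal search results: block only what hurts the server, and leave thin or duplicate pages crawlable so their noindex or rel canonical can actually be seen.

    # Block expensive internal search-results crawling
    User-agent: *
    Disallow: /search

    # Thin or duplicate listing pages are deliberately NOT listed here;
    # they stay crawlable so Googlebot can read their noindex or
    # rel=canonical tags and fold the signals to the right pages.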

DANIEL PICKEN: Is it worthwhile canonicalizing loads of duplicate pages, and to what? And what if you've got very slight differences-- so, again, to take the car example-- sorry to take over the Hangout, guys, by the way. For example, say you've got loads of pages for 2-month leasing, 6-month leasing, 12-month leasing, and the content on these pages is exactly the same apart from one number. Is it still worthwhile just leaving it all open to Google, rather than trying to canonicalize all these practically duplicate pages into one? What's the best way to handle that? Because that's always been my problem: OK, great, we've got all these different variations, but only slightly different. The content is duplicated page after page after page. Is it worthwhile having one strong page to rank in Google by canonicalizing them all in, so you fold all the pages into one? Or do you leave it all open so that, again, Google can understand that, even if it's minute, there is a slight difference between all these pages?

JOHN MUELLER: Yeah, I don't have an absolute answer for that. I think the two factors that you mentioned there definitely play a role. On the one hand, if you fold them together, you have something much stronger, which could be a little bit more generic. But if you have one page that's like two months, three months, six months, 12 months on there that maybe leads into detail on those individual items, that might make a really strong page to show in search. On the other hand, if you know that people really, really want to find this variation, then maybe separating that out makes sense. So that's the trade-off that you have to look at on your side from keeping things very strong and focused on one generic page that you keep indexed like that, or splitting it up into maybe different variations because you know people only really want to find that specific variation. So I guess one example could be if you're selling something, and you have one store that's available for resellers and one for general consumers, then you could theoretically make one product page for both of those versions. But it's a very different audience. And they want to find something unique to their market. Otherwise, they'll feel, oh, this isn't what I was really looking for. So you might have the same product just on two websites-- one focusing on consumers and one for resellers. And that's something where it definitely makes sense to split that. But there are other situations where you might say, well, it's kind of a similar audience. So maybe it makes sense to fold them together into one stronger page. It will rank a little bit higher. And we'll just have to make sure that the page works for both of these audiences that are looking for these different variations.

DANIEL PICKEN: OK, thank you.

JOHN MUELLER: All right. "What's the best way to replace an old URL and get a new URL indexed if both URLs are to be used? So I can't use a redirect or a noindex, but I want to prefer my latest URL in the search results. Google is not accepting it because the content is different." So in general, this would be the type of situation where you'd set up a redirect from one URL to the other. In this specific case, it sounds like that's not easily possible. So I imagine this is the type of situation where you have one main product page, and you suddenly have a 2016 version of this product--

MALE SPEAKER 1: Yeah, John, I am online.

JOHN MUELLER: OK great.

MALE SPEAKER 1: Yeah, actually the thing was that for the last one or two years, my old page was ranking. But with time, I wanted to, you could say, make a new page for those very same keywords and target market. So when I made those changes, my company wanted that old page also to stay in search, because they were saying, why would we lose those customers? But in that case, I was not able to use a 301 redirect. And when I tried to use a canonical-- because Google wants the same content on both pages-- it was disobeying my canonical tag. This is the thing that I am facing problems with.

JOHN MUELLER: So with the canonical, it would actually be similar to a redirect, in that we would focus on just one page and drop the other one. So that's probably not what you'd want. What I would recommend doing there is moving the old content to a new URL and keeping the existing URL for the new content. So for example, if you have a phone or something like that, a specific phone type like the Nexus 5, and you have a new model of this phone, then what I'd recommend is putting the new model on the existing URL and moving the old model to a new URL. It's kind of like saying, well, I am moving the old information to an archive page. So on the one hand, that content is still findable in search. But on the other hand, Google understands that the main page for this specific product is that existing URL. So instead of putting the new content on a new URL, move the existing content to a new URL and use the new content on the existing URL.

MALE SPEAKER 1: But in this case, the URL for the old page is not very user-friendly. It is like [INAUDIBLE]. So I wanted to optimize that new URL. This is why I was having this problem.

JOHN MUELLER: Yeah, so if you want to make changes to the URL structure at the same time, then that always adds more complexity. But what I would do there is first move to the clearer URL structure. And then, in a second step, do that shuffle with the old and the new content, so that the new content is on the main URL and the older content is moved to an archive position, which is linked from that URL. So you can look at last year's model, and you click on it, and you find last year's page, but under this separate URL.

MALE SPEAKER 1: OK, thank you.

JOHN MUELLER: So essentially, it's like a newspaper category page, for example, where you see the current content is visible directly. And the older content keeps being moved back into an archive position where it's findable. It's still around. But it's not in the same place in that it's a separate URL that users can go to. Google can recognize its secondary position now. It's not the primary content for this type of page.

MALE SPEAKER 1: This is the thing. Quite often you face this problem: if you want to create a new page and pick a new URL, you either have to use a redirect or a canonical. But in the canonical case, Google sometimes does not obey it, because it finds completely different pages.

JOHN MUELLER: Yeah, so with the rel canonical for us, that's a signal. It's not a directive directly. So it's something we look at. In the first step, we have to index both versions, which is sometimes confusing as well if you have a large number of pages that you're using rel canonical on. First, we have to index both versions so that we can even pull out the rel canonical and understand that. So that's something that would happen in the first step. And in the second step, we have to evaluate are these really equivalent pages, and if so, then can we fold everything into that one designated canonical URL. With a redirect, it's a lot easier because you're really saying, well, there is no content on the old URL and everything has moved to the new URL. So it's a lot easier for us to say, well, we'll just focus on the new URL because that's where you're redirecting to.

MALE SPEAKER 1: OK. And then, John, also in my redirection case, should I keep my old URLs in the XML sitemaps so that Google can see them and then go and find the redirects? Or should I just update my XML sitemap and internal links with the new URLs?

JOHN MUELLER: You can keep your old URLs in the sitemap temporarily. So I would maybe keep them for a couple of weeks until Google has picked them up. And then I would remove them from the sitemap files, so that you really only submit the URLs in the sitemap file that you want Google to crawl and index just now.

MALE SPEAKER 1: It means I create a list of the old URLs and list them in a separate sitemap file, and when the indexed pages are at zero, then I can remove them.

JOHN MUELLER: You can remove them from the sitemap file.

MALE SPEAKER 1: The sitemap file.

JOHN MUELLER: Yes.

MALE SPEAKER 1: OK, thank you.
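
A minimal sitemap sketch of the transition John describes, with hypothetical URLs: the new URL is listed permanently, and the old, now-redirecting URL is kept in temporarily so the redirect gets recrawled, then removed after a couple of weeks.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <!-- New URL carrying the current content -->
        <loc>https://www.example.com/phones/nexus-5</loc>
      </url>
      <url>
        <!-- Old URL, now a 301 to the page above; keep it listed for a
             couple of weeks so the redirect is picked up, then remove it -->
        <loc>https://www.example.com/p?id=1234</loc>
      </url>
    </urlset>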

JOHN MUELLER: All right, let me run through the last three questions here. And then we can open things up for more questions from you all.

"Is it correct that when there are two versions of a URL, or even more-- one which is canonicalized to and one that's canonicalized from-- the data from all of these URLs has to be hauled back by Googlebot before merging into one?"

I'm not sure exactly how this question is meant. But kind of like I mentioned before: first, we have to index both of those pages to be able to process the rel canonical. And, generally, what we'll do is fold all of the signals that we have into that canonical URL, but we'll focus, for crawling and indexing, on the content of that canonical URL. So if you have one page, for example, about blue shoes and another page about red shoes, and you say the red shoe page is the canonical for the blue shoe page, then that red shoe page won't necessarily rank for blue-shoes-type queries, because those blue shoes aren't on that red shoe page. So if you want to use a canonical, really try to make sure that the content is equivalent, so that you don't lose any additional context that might otherwise be gained from that other content.

"While the domain name is not a ranking signal, why does Google Search make queries which are part of the domain name bold, as well as queries in the URL and snippet?"

In general, these are more user-facing things. The way that we do the bolding in the search results is something we do more for the users, rather than as a sign to site owners that this is a critical issue on your site and you need to add that there. So that's something where ranking factors don't necessarily have to match how we show things in the search results, because the normal user doesn't really care what a ranking factor is. They just want to understand which of these pages is relevant for their specific query just now.

All right. And with that, I think we've finished all of the questions here. So what else can I help you guys with?

MALE SPEAKER 2: Can I ask something?

JOHN MUELLER: Sure.

MALE SPEAKER 2: Thank you. We're going to redesign one of our online news magazines. And before we completely release it, we want to test user behavior on it. So we're going to launch it on a beta subdomain, for example, beta.newsmagazine.com, and redirect some segment of users to this beta subdomain using a temporary redirect, [INAUDIBLE]. We have already put a canonical tag on every page of the beta subdomain pointing to the relevant page on the main domain. Like I said before, it is a pretty big website with thousands of pages and a lot of organic traffic. So we would really like to avoid indexing duplicate content from the beta subdomain. From my experience, I know a canonical tag alone doesn't prevent indexing every time. So I want to ask: should we use both the canonical and a robots noindex, follow tag on every page of the subdomain in this case? What do you think?

JOHN MUELLER: That could make sense. In general, I try to avoid the conflicting information situation where you're saying these pages are equivalent, but one of these pages has a noindex, and the other one doesn't. But I could imagine that it might work here in a situation like that where you really want to be sure that none of the beta subdomain is actually indexed like that.

MALE SPEAKER 2: OK, thank you.
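
For reference, a sketch of the belt-and-braces head section being discussed for the beta pages, using the beta.newsmagazine.com example from the question. As John notes, the canonical and the noindex are technically conflicting signals, but the combination can make sense when staying out of the index matters most.

    <!-- On https://beta.newsmagazine.com/some-article -->
    <link rel="canonical" href="https://www.newsmagazine.com/some-article">
    <meta name="robots" content="noindex, follow">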

JOHN MUELLER: We've got a lot of noise. All right, more questions?

KREASON GOVENDER: Hi, John, I have a question on geotargeting for country-level domains. So if you want specific landing pages for users in specific countries, and you have a country-level domain, how would you set that up?

JOHN MUELLER: So you have an existing country code top-level domain, and you want to target other countries? Or do you have separate top-level domains-- separate domain names-- for each country?

KREASON GOVENDER: No, I want to use one country-level domain to target other countries. So there is just one domain, but specific landing pages, or possibly subdomains, for those specific countries.

JOHN MUELLER: OK. So the tricky part here is that if it's a country-code top-level domain, then you can't set geotargeting for it. We'll assume that the geotargeting is based on the domain name already. So if you have, for example, a .fr domain for France, and you want to target users in Germany or users in the UK, then you wouldn't be able to use the geotargeting tools for those pages in that type of situation. So that's the main difficulty there. If you have a generic top-level domain, then of course you can do that: you can take a subdomain or subdirectory and set geotargeting for it. With a country-code domain, you're limited to that country. You can still make individual country pages, and you can use things like hreflang markup to help us understand them better. But you'd be missing this country-specific boost from geotargeting.

KREASON GOVENDER: Would you suggest us changing the domain to a generic top-level domain?

JOHN MUELLER: That's always a really big move. So that's not something I would do just for a random SEO purpose, because usually, at least for existing businesses, there's a lot already tied in with an existing domain name. So that's something I'd really be cautious about, saying, well, just for this geotargeting you should move, because it might be that the effect of the geotargeting for that specific site, for those specific queries, is very small. It's not that you'd be able to guarantee and say, well, if you move to a generic domain, then you'll have a 10% boost across the board-- it's really hard to say. So what I would recommend doing is looking at the type of results that you see at the moment for the queries that you're targeting in those countries, and thinking about how locally specific those results are. Is this something where having clearly geotargeted content could help? Or is it something where clearly geotargeted content doesn't really have that big of an influence? So: looking at the existing queries, looking at the existing sites that are already there, and making a judgment call based on that-- or maybe just making a recommendation based on that-- because, obviously, moving a domain name is a really big step.

KREASON GOVENDER: Yeah-- would our local ranking drop if we were to switch to a generic top-level domain?

JOHN MUELLER: I don't think so. You'd almost certainly see some temporary effects from that, where if you move from, say, a .fr domain to a .com domain, and you had a section that was geotargeted for France in there, then in the long run we would see those as being equivalent. So that .fr domain versus the subdomain or subdirectory for France that is set with geotargeting in Search Console-- essentially those are the same from our point of view. But I imagine in the short term it will take a bit of time for things to actually settle down in that sense. So if you want to make that kind of a move, and you know that your local traffic is really important to you, then I'd try to pick a time where you traditionally don't have that much local traffic, to minimize the problems that you have there.

KREASON GOVENDER: The time frame for that?

JOHN MUELLER: The time frame--

KREASON GOVENDER: Months?

JOHN MUELLER: There's no real answer for that. It really depends on the site, how well we're crawling the site. Just from a gut feeling, I'd say moving across-- especially with country-specific geotargeting on sites-- I don't know, maybe a couple of weeks for things to settle down again. It's really hard to say, because on the one hand, it's not that your rankings would disappear completely. You'd still be visible, and you might still be visible with the old URLs, and they redirect to your new ones. But it's this geotargeting effect of going from us understanding that this whole site is focused on this specific country, to a situation where this site is just a part of the main domain, and that part is focusing on that specific country. That's a tricky move sometimes. And essentially, it's not a one-to-one domain move. It's not that you're moving from one domain to another, one-to-one, but you're moving one domain to a part of a new domain. And the rest of the new domain might be filled with other content for other countries, for example.

KREASON GOVENDER: Is there anything else that we could do to our existing site to increase the rankings in specific countries?

JOHN MUELLER: So what I would do as a first step is really just make sure that you have the country-specific content available, and that you use hreflang markup between those different variations so that we understand which one of these is more relevant in those individual countries. And I'd primarily try to work just based on that to get started. And if you recognize, or if you feel, that you could make a significant step forward by creating a country-code version, or a generic version that's specific to one country, then that's a step you might want to take then. But I'd definitely try to see how far you can go with just your localized content like that. I mean, there are definitely businesses that are active globally that do have a country code top-level domain, and they're successfully visible in the search results as well. So it's not impossible to rank locally with a non-local top-level domain, essentially. But you probably have to work a little bit harder to figure out exactly what you can do to make it more relevant for users there.

KREASON GOVENDER: OK, thanks, John.

JOHN MUELLER: Sure.
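
A minimal hreflang sketch for the ccTLD case John describes, assuming a hypothetical .fr domain with country-specific sections for France, Germany, and the UK: since Search Console geotargeting isn't available for a ccTLD's other countries, the country pages are tied together with hreflang annotations instead.

    <!-- Each country variant carries the same set of alternates -->
    <link rel="alternate" hreflang="fr-FR" href="https://www.example.fr/fr/">
    <link rel="alternate" hreflang="de-DE" href="https://www.example.fr/de/">
    <link rel="alternate" hreflang="en-GB" href="https://www.example.fr/uk/">
    <link rel="alternate" hreflang="x-default" href="https://www.example.fr/">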

MALE SPEAKER 1: Yeah, John, related to this-- basically, I would just take two minutes. John, can you please tell me what the factors are in a ccTLD that Google does not recognize, where Google asks us to use hreflang? I am not talking about different languages. But if I have a .cr domain, a .in domain, a .fr domain-- what are the factors in a ccTLD that Google is not able to recognize from the ccTLD itself, so that I have to use hreflang?

JOHN MUELLER: There are two aspects there. On the one hand, geotargeting helps us to understand which content is relevant for a local user. And hreflang essentially swaps out the content that we show in the search results with a version that's more relevant for the user and their language. So the problem, I guess, with just geotargeting alone is that we might show one version of the page to a user that's not really the most relevant for them. And we don't understand the relationship between the existing pages on a website. So with hreflang, you're really saying this specific page here is equivalent to this specific page here for this language locale variation. And then we can swap them out where we need them. Whereas with geotargeting, you're just saying, well, this page here is very relevant here, and there are some other pages that are relevant somewhere else. But we don't understand the relationship between them. So that's something where if we show the wrong page in the search results from your site, that's probably hreflang, which you can work on. If we don't show your site at all for local queries, then probably geotargeting is something that can help a little bit.

MALE SPEAKER 1: OK.

MALE SPEAKER 3: [INAUDIBLE].

JOHN MUELLER: I just hear noise from you-- really bad audio.

MATT PATTERSON: Could I possibly ask a quick follow-up to Kreason's and-- the last question-- Niraj's? So going from a site where we have a generic domain, but we had only German content on it-- and I'm presuming that you had decided that we were German-only-- and then switching to having all the content available internationally. Is there a potential for a significant effect from that, when suddenly we're just totally flipping the tables on you, and that is confusing? Is there a better way to handle that, were we ever to do this again?

JOHN MUELLER: That shouldn't be such a crazy change. These things happen from time to time, and with a generic top-level domain, even sometimes people just change the setting in Search Console and see what happens based on that. So it's not something where I'd say we would get completely confused and drop the whole site from the search results because someone changed geotargeting. That's something we should be able to work with.

MALE SPEAKER 1: So basically, John, with the help of hreflang, Google wants to be sure that both are the official websites, each indicating to a different location?

JOHN MUELLER: Yes.

MALE SPEAKER 1: This is what Google wants to be sure of, because it only wants to swap URLs.

JOHN MUELLER: Yeah.

MALE SPEAKER 1: OK.

JOHN MUELLER: I have a feeling with all these hreflang and international questions, we need to do a separate Hangout on this at some point.

MALE SPEAKER 1: Sure, sure, sure.

MALE SPEAKER 3: Still noise?

JOHN MUELLER: Oh, I can hear you now. Yeah, go for it.

MALE SPEAKER 3: I have to ask about sitemaps-- a short question. Does Google look at image sitemaps differently? Because I submitted a sitemap with 3,000 pages of images, and it indexed 2,000 pages but only 70 images. Why did it only index that tiny percentage of the images, when it did index the pages?

JOHN MUELLER: I don't know.

MALE SPEAKER 3: You don't know. I'll have to write you an email. I'll send you an email.

JOHN MUELLER: Yes, if you could send me the URLs, I can take a look with the sitemaps team here. Usually, they can help me figure these things out.

MALE SPEAKER 3: OK.

JOHN MUELLER: So happy to look at that.

MALE SPEAKER 3: OK, guys, all the best.
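
For reference, a minimal image sitemap sketch of the kind being discussed, with hypothetical URLs, using the Google image sitemap extension namespace: each page entry lists the images that appear on it.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
      <url>
        <loc>https://www.example.com/listing/1234</loc>
        <image:image>
          <image:loc>https://www.example.com/photos/1234-front.jpg</image:loc>
        </image:image>
      </url>
    </urlset>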

JOHN MUELLER: OK, one more question from Don about canonicalization. "The canonicalized-from and -to question was more regarding the efficiency of canonicalization. It occurred to me that it's not always the most efficient use of Googlebot's visits. Is it best to minimize-- so, I guess, to use fewer rel canonicals rather than more?" I think there are perfectly good use cases for rel canonical, and for those use cases, I would definitely use it. I wouldn't force any other kind of technical structure on those kinds of situations. So, for example, if you have an e-commerce site, and you have products that are in different categories, and the categories are reflected in the URLs somewhere, then maybe it makes sense to use a rel canonical to one of those versions. It's not something that you'd want to solve with a redirect: if I go to shoes, then men's shoes, then blue shoes, and I find the shoe, it would be kind of awkward for me to end up on a page that's just the list of blue shoes, instead of being able to go back to that specific other category page. So I would definitely use a rel canonical where it makes sense. I wouldn't use it in cases where it doesn't make sense-- if you're moving from one site to another, then a redirect is probably the right technical approach to handle that situation. So I'd see this as a tool, and not as something that's interchangeable with anything else that you're using across a website. All right. It looks like we're just about out of time. If any of you has a last question, I'm happy to take a stab at it.

DANIEL PICKEN: Can I jump in with one as well?

JOHN MUELLER: Sure, go for it.

DANIEL PICKEN: Can I confirm whether PageRank gets passed through the links within the page content and ignores the main navigation and footer links? So anything that may be sitewide doesn't pass PageRank through-- is that correct?

JOHN MUELLER: We essentially pass PageRank through all of those links, even if they're in that so-called boilerplate-- the navigation, the footer, those kinds of things. We'd still pass PageRank there. I guess it's a bit of a different situation if you're passing PageRank across the whole site and you're comparing it to individual pages from other sites, because the aggregated signals are probably not the same as the sum of the individual signals. But, in general, we do pick up links from the boilerplate, from the navigation, from the footer, and we follow them. We pass PageRank through them internally as well.

DANIEL PICKEN: So would you say that category pages next in line to the home page generally will be the second strongest page because you tend to have links from the home page to the category pages in the main navigation?

JOHN MUELLER: Sure, sure. I guess, in general, that would be the case. But I regularly come across sites where the home page isn't the most popular or most important page. So it's not something I'd say is a requirement for every site. But in the general situation, that's definitely how we look at it.

DANIEL PICKEN: Excellent. Thank you.

JOHN MUELLER: All right. So with that, let's take a break here. Thank you all for coming. Thanks for all the questions. The next Hangout should be lined up as well. So if there is anything that we missed-- anything that's still on your mind-- feel free to post those questions there as well. I have a little bit of a longer one planned just before the end of the year. So if there are weird things that you want to talk about in a little bit more detail, we can always take those up there as well. And with that, thanks again. And I wish you all a great weekend. See you next time.

MALE SPEAKER 1: Yeah, thank you. Same to you.

MATT PATTERSON: Thanks, John.

MALE SPEAKER 1: Thanks, John. Thank you.

MALE SPEAKER 4: Have a great holiday.

JOHN MUELLER: Bye.