Reconsideration Requests

Google+ Hangouts - Office Hours - 30 June 2015


Transcript Of The Office Hours Hangout

JOHN MUELLER: All right, welcome everyone to today's Google Webmaster Central office hours again. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland. And part of what I do is talk with webmasters, publishers, like you all. I see some newer faces in the Hangout here. I'd love to give you guys a chance to ask the first questions.

MARK: Can you hear me?


MARK: Hi, my name's Mark. My question is regarding boilerplate text. You've told us a few times now that Google ignores it on a website. And I have a website that has a little bit of boilerplate text at the top that is separated from the main content. What I would like to know is, how is it separated? Is it just separated as a complete block of text? Or is there a possibility that any words that are contained within that boilerplate text are then also removed from the whole of the page further down?

JOHN MUELLER: So what we usually try to do there is just see that text as being associated with the website in general, but not as primary content for that specific page. So if this is something you have across multiple pages, then we'll try to rank those individual pages based on their own content, not based on the boilerplate content. But it's not that if there is a word in the boilerplate that's also in the main content, we demote it or anything. We treat that primary content essentially as any other content you might have on your site. So it's not less valuable just because you have boilerplate on it as well.

MARK: Good, and would you say that the h1 tag is a good start? For myself, the boilerplate text is at the top of the page to introduce what we do. And then the h1 tag then has the main content. Is that a good marker for Google to say that's where the main content starts from?

JOHN MUELLER: Sure. That would help. I think one thing that might come into play, depending on your website and how it's set up, is that sometimes, if we don't have a good description for a page when we show it in search results, we'll try to take some of the content from the top of the page, essentially. So if the boilerplate content is really the first content on the page, then that's something where maybe we'll use some of that boilerplate as a description in the search results. So if you have a really good description meta tag, then that'll work fine. If you don't have any description meta tag, we might sometimes pick out the boilerplate text to show as a description.
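A quick way to audit whether a page gives Google a description meta tag to use, instead of letting it fall back to top-of-page boilerplate, is to parse the head with Python's standard-library HTML parser. This is a minimal sketch; the sample page content is invented for illustration:

```python
from html.parser import HTMLParser

class DescriptionFinder(HTMLParser):
    """Collects the content of a <meta name="description"> tag, if any."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.description = attrs.get("content")

def find_meta_description(html: str):
    """Return the page's meta description, or None if it has none."""
    finder = DescriptionFinder()
    finder.feed(html)
    return finder.description

# Hypothetical page: boilerplate intro first, main content after the h1.
page = """<html><head>
<meta name="description" content="Hand-made widgets, shipped worldwide.">
</head><body><p>Boilerplate intro...</p><h1>Main content</h1></body></html>"""

print(find_meta_description(page))
```

If this returns `None` for a page whose first visible text is boilerplate, that boilerplate is a likely candidate for the search snippet.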

MARK: That wouldn't necessarily be an issue.

JOHN MUELLER: That's good.

MARK: Thank you.

JOHN MUELLER: Sure. Other new faces-- Matt-- I don't think I've seen-- oh, I've seen you before as well, but not as regularly. What else can we help with before we start off?

MATT PATTERSON: Actually, I've got a question further down in the Q&A, which is following up from the question I asked a couple of weeks ago. So we've got now about 58,000 to 60,000 active app links submitted through sitemaps attached to our HTTPS property. And we have the corresponding app property in Search Console. And basically there are no links in the index. And everything is coming up as either a content mismatch or, when we do Fetch as Google to test it, coming up as an internal error-- an internal Google error. And I posted about this in the webmaster forum again about two, three weeks ago. And I haven't had any replies to that. I'm just kind of wondering if you've got any more info about that. I mean, as I say, my best guess is that it's to do with the fact that we have video content. And it's restricted.

JOHN MUELLER: I guess the problem there is that we're trying to match words on the page with words in your app. And there aren't a lot of words in the app or on the page. And we'll probably see that as being kind of a mismatch. But I sent a quick note to the team here to take a quick look, to see if they can pull out your thread and add a reply there.

MATT PATTERSON: OK, thank you.

MALE SPEAKER 1: John, can I just ask you a very quick question? I'm going to leave my space for somebody else to take.


MALE SPEAKER 1: I noticed a really serious problem we're having with hreflang between our .com and our country-code domain. I noticed that the .com was ranking well in the country results. That got me curious. And so I took out the hreflang and replaced it with a canonical tag. And immediately, when the switch happened, our .com seems to be ranking three or four places higher for the select pages that I tried. And the two versions are identical. And so that's very concerning for me as to why that should be happening. And if anything, the country domain should be a stronger signal there if there is any difference.

JOHN MUELLER: But you have a canonical from the country version to the .com version?

MALE SPEAKER 1: Correct. And so now the .com ranks in the country results, but three or four positions higher. We're talking the difference between fifth and sixth place and second, third, or fourth in many cases.

JOHN MUELLER: I'd have to take a look at what might be happening there.

MALE SPEAKER 1: Yeah, but otherwise that means it's very difficult for me to consider keeping the hreflang. We spent eight months developing the US version of our site, with our currency and all that. And we're only about to go live. And now we can't, because it's detrimental to the company if hreflang isn't going to do a straight swap.

JOHN MUELLER: Yeah, it should just normally swap. But it's hard to say without looking at the actual URLs.

MALE SPEAKER 1: Yeah. I've got documented evidence. And I'll send you some copies and stuff.


MALE SPEAKER 1: And we're doing some live tests. So that's that. All right.


MALE SPEAKER 1: Thanks, John, somebody else can come in now. Have a good day. Bye, bye.
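As a side note on the setup discussed above: each country version is expected to carry the full, reciprocal set of hreflang annotations (including one pointing at itself), and a page should not simultaneously declare a cross-domain canonical to the other version, since the canonical asks Google to index only one of them. A minimal sketch of generating the annotations, with hypothetical example domains:

```python
def hreflang_links(alternates):
    """Build the <link rel="alternate" hreflang="..."> tags that every
    version listed in `alternates` should carry in its <head>. Each page
    lists ALL versions, including itself, so the tag set is identical
    (reciprocal) across versions."""
    return [
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in sorted(alternates.items())
    ]

# Hypothetical example: a UK version and a US version of the same page.
alternates = {
    "en-gb": "https://www.example.co.uk/widgets/",
    "en-us": "https://www.example.com/widgets/",
    "x-default": "https://www.example.com/widgets/",
}
for tag in hreflang_links(alternates):
    print(tag)
```

The same list of tags goes into the head of both the `.co.uk` and the `.com` page; any missing return annotation breaks the pairing.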


AARON FRIEDMAN: I have a new face with me. I don't have a new question-- I am an old face. And I have a question. Basically, the spammy structured markup manual action-- it's been around for over a year or so. The question I have is, when you get that, does that downgrade the rankings of the website, or is it just about removing the rich snippets from the website?

JOHN MUELLER: That's essentially about the rich snippet markup. So rich snippets themselves don't give you a ranking boost. So it wouldn't really make sense to demote a site in rankings if they're doing something wrong with rich snippets. So essentially we just turn off the rich snippets there until we're sure that we can trust that markup again.


JOHN MUELLER: All right, Fernando.

FERNANDO: Oh, hello, sorry, I have a question about Search Console. We have several domains targeting each country-- a .com and country domains. And we have HTTPS. So in Search Console do we have to create the HTTP www version, HTTP without www, HTTPS www, plus HTTPS without www-- is that correct?

JOHN MUELLER: If you want to add all variations, you can do that, yes. Usually there's one of the versions that's actually indexed. So that's the one that you'd need to be focusing on. But if you want to make sure that you're not missing any of the data, then adding all of those variations is a good idea. What I've seen some people do is have one account that has all of these variations in it, and a different-- their main account that they work on-- that just has the version that's actually indexed, so that you have an account that's easy to manage and easy to look through for the different sites that you have. But at the same time, you're not missing any other information that we have, because that's going to your collection account, if you will.
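The four host/protocol variations being discussed can be enumerated mechanically per domain. A small sketch, with hypothetical domains:

```python
def property_variations(host):
    """The four protocol/host combinations of a bare domain that are
    worth verifying as separate Search Console properties, so that
    no crawl or search data is missed."""
    return [
        f"{scheme}://{prefix}{host}/"
        for scheme in ("http", "https")
        for prefix in ("", "www.")
    ]

# Hypothetical country-targeted domains for the same business.
for site in ("example.com", "example.es"):
    for prop in property_variations(site):
        print(prop)
```

One "collection" account can hold all of these, while the main working account holds only the canonical, indexed version.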

FERNANDO: Right, because we want to work with only one-- the canonical. So great advice-- thank you. And another question I was wondering about is the value of backlinks. There are rumors about how intelligent Google has become at understanding the meaning of text, and I wonder how much the value of backlinks as a primary force in SEO is evolving.

JOHN MUELLER: We do use links in ranking. So it's not that we ignore them at all. But at the same time, it's not something that has a fixed weight, where we say, oh, 25% of your work should be for links. That's not the case. So that's something where I think it makes sense to look at all the different factors that we have and to find the right balance there. And for some sites, they'll have a lot of links because it's a very competitive area, maybe, where all the other sites also have a lot of links. Other sites, maybe they are active in an area where almost nobody has any links, because nobody is linking. And it's not that we require all sites to do the same thing. That's, I think, the beauty of having so many different factors involved in ranking, in that you don't have to do exactly the same thing as your competitors do.

FERNANDO: I love that point of view-- to give merit to content that is great but has no backlinks, because nobody's linking. Thank you.

BARUCH LABUNSKI: Hi, John, how are you?


BARUCH LABUNSKI: I haven't been here in three and a half-- three weeks or whatever. So I'd like to ask a question. There's a brand that is located in Hong Kong. So a .hk and a .com. So instead of going ahead and moving forward with the same content and just translating it for another place, I was suggesting unique content specifically for that area, as opposed to having the same content as the US-- catering to what the users like over there. Even though the brand is the same, giving the user an experience over there and an experience in the US. Do you think that's a good way to do it?

JOHN MUELLER: That sounds like a good plan, yeah.

BARUCH LABUNSKI: So if I have, let's say, even a .ca website and a .com, I want to cater to the Canadians with the flag. The things I can't do-- I want to do different things.

JOHN MUELLER: Sure. That sounds like a good plan. I think if you're able to provide something that works well for your local audience, then I'd definitely go for that and try to make that happen.

BARUCH LABUNSKI: OK, so creating a whole new separate site just for them is--

JOHN MUELLER: Creating a whole new site is something where obviously you have to balance the trade-off between all the work involved with creating a new site, maintaining a separate site, and all of that. So it's not a situation where I'd say you should create separate sites for every country that you're active in. Sometimes it makes sense to create something separate, especially if the audiences have very different expectations of what a website should look like, which might be the case if you have Chinese users and US users-- they probably have different expectations of what a professional website looks like, for example. So that might make sense. But if essentially it's the same content, and you're just trying to maybe target the UK and Australia, but actually it's all the same, then maybe it doesn't make sense to set up a separate site.

BARUCH LABUNSKI: Well, it'll be all different content. And everything's going to be different. And, of course, yeah, like you said, it's hard work. But they want to put the hard work in and, of course, make the content unique.

JOHN MUELLER: Yeah. That sounds reasonable to me. Yeah, sure.

BARUCH LABUNSKI: Thanks, John, appreciate it. Nice seeing you.

JOHN MUELLER: All right. Let's run through some of the questions. And afterwards, we'll probably have time to discuss what else is on your mind as well. Or maybe you have any questions-- to the questions or to the answers. And we can go through those. "We have a website with recipes and lost 50% of traffic, in the past week and from April up to today. We don't build links, but some recipe aggregators automatically link to our site. Can this be a reason for this automatic penalty?" So I don't know for sure without looking at the actual site involved. It's kind of hard to say. But from what I've seen with these kinds of sites in general, just because other aggregators link to your content isn't really going to be a problem. So that's not a case where you need to weed through your links and clean all of that up just because some aggregator is linking to your site. In general, if we recognize that these are aggregators, we will treat those links appropriately and more or less ignore them. At the same time, it's not something where I'd say you should just wait it out and see what happens, but rather really put a critical eye on your website and see where you can improve things as well. So if you're also aggregating content from various other sites, maybe that's a part of the problem. Maybe that's something that you could improve on and really make sure that the primary content on your website is your own content, something unique and valuable where you add value to the rest of the web. So that's something you might want to take a look at there. But just because aggregators are linking to your website isn't really something I'd worry about with regards to Google and trying to clean out those links, for example. "How do we build the best site structure for SEO? Should we have frequent internal links from other pages to the most important pages, like article and sales pages? More links to the good articles, for example?"
Essentially, you can structure your site in any way that you'd like. I think having a clear site that brings out the semantic structure of the site is definitely a good idea, so that it's clear for users and search engines which parts of the site belong together-- where you are structuring things in a logical way. So I think that definitely makes sense. But past that, it's something where, as long as we can crawl those URLs and find links to other parts of your site, we'll probably figure it out. So I wouldn't just revamp a site completely just for SEO-- moving things around, tweaking the URLs that are actually shown-- but rather think about it from a user's point of view. And if you notice that your site is really hard to use because users have a hard time navigating through it, that's something I'd definitely work on. "Can you take a look at the search results page for 'domestic cleaner, Boston'? We have a Boston in the UK too. Yet all the local listings are from the US, while the organic results are a mix of the UK and USA." That sounds like a tricky search result. I passed that on to the team just before the Hangout. I don't know how easily we'll be able to fix or resolve that problem there. It's always tricky when there is this really strong location signal that we have for Boston in the USA versus-- I didn't know there was a Boston in the UK, for example. So that's always a tricky problem. But I'll pass it on to the team to take a look at, to see what we can do to improve that. "I don't know if you can answer this. For a keyword, example 'guru,' which of the following two is a higher ranking factor: example.guru as a domain name, or example as a domain name?" We don't take the TLDs, for example, into account for ranking. So you don't really see any bonus from using a keyword in a TLD like that. Let me just mute you, Matt. So from that point of view, they are essentially equivalent.
It's not something where I'd say you need to go this way or that way. If you have a good domain name that works well for your content, I'd go for that. If you have a domain name that doesn't have one of the keywords that you're focusing on, that's fine too. It's not something where you need to put keywords into domains.



ROBB YOUNG: Are all of these new TLDs going to be treated the same? Do you essentially-- unless they are country ones-- do you essentially treat them all the same? Because we, a few years back, looked at .travel for our site. And in order to get a .travel, we had to submit a proposal and justify that we did work within leisure and travel. But you don't care about that, so anyone can try that--

JOHN MUELLER: No, no. We essentially treat them as any other generic top-level domain. So if you find a domain name there that's short and memorable and useful for your site, then I'd go for that. But I wouldn't pick a new domain name with a new TLD just because it has a keyword in there, because you're expecting some kind of SEO boost-- that's not going to happen. Some of the new TLDs have a city or regional association-- like, I think, .nyc, for example. And that's something we also treat as a generic top-level domain at the moment. If over time we realize that actually this is a really strong location signal, then maybe we'll take that into account for geotargeting. But at least in the beginning-- I'd imagine the first couple of years-- we'll be treating all of those as generic top-level domains.

ROBB YOUNG: Right, sounds like a lot of work knowing who even runs those various TLDs.

JOHN MUELLER: Yeah, a lot of registrars offer a whole bag of these now. I think there are over 1,000 of these TLDs out there. So you have a bigger selection to pick from, I guess. And if you have a really long and complicated domain name at the moment, maybe it's worth picking out a shorter one-- a more memorable one. But I wouldn't do that just for SEO.

BARUCH LABUNSKI: Do you guys deal with ICANN? Because ICANN is planning on disclosing people's information, like on WHOIS. So I was just wondering, do you guys deal with them? For instance, if I have a domain and I want, let's say, private registration, can I do it directly with Google as opposed to--

JOHN MUELLER: There is a Google registrar. But I think they are US-only at the moment. I don't know the details, because I'm not in the US either. So I don't know how that works. But essentially, the guidelines from ICANN are something that the registrars have to fulfill or work with. It's not something that we as a search engine would work with directly.

BARUCH LABUNSKI: All right, thanks.

JOHN MUELLER: "If some subpages-- articles-- don't have any big traffic from search results, can they be low quality from an algorithmic point of view?" I guess that's a question. Just because they don't get a lot of traffic at the moment doesn't mean that they're low quality. So that's not something where I'd blindly follow that and say, these pages don't get a lot of traffic, therefore I need to delete them because Google thinks they are bad. Maybe they just don't have a big audience at the moment. And that can, of course, change. Maybe it's a topic that people don't care about now, but will care about next week. And everyone will go to those pages. So just blindly looking at the traffic isn't really a good way to recognize higher quality or lower quality content. "What should we do with slightly duplicated pages, like WordPress author and pagination pages, if those articles are in other categories? Should we give them a noindex, follow tag? And maybe a nofollow on the links on the page? Or just a rel canonical to the main pagination?" This is up to you-- how you want to handle this. So from our point of view, we'll try to index those pages if they don't have a noindex on them and they are available on your website. If you think that this is good content that you want to have indexed for your website, then let them be indexed. If you think this is essentially just a copy of the other content that you have elsewhere on your website-- in the categories, for example-- then maybe you don't want to have them indexed. But it's not something where we would say you need to handle it like this across all websites that are out there. It's really up to you. "What would you suggest for situations where we have two to three internal links to the same page from another page? The first one is from an image, the second one is anchor text, and the third one from a date. Should we nofollow the image and date to pass more page rank through the anchor text?" No, you don't need to do anything special there.
We'll still pick that up. And we'll try to use that appropriately. So it's not that you need to tweak the internal linking of a website to make sure that you have the optimal anchor text for all of the links that are passing page rank. We do try to figure that out appropriately. Of course, if all of these links have a nofollow on them, then we can't pass any page rank. So I'd try to leave that as natural as possible. And if you want to link there and think that it's a good place to link to from your page, then just do that. "I have a question regarding website migration from a subdomain to a new domain. Can we operate part of the site from the old subdomain and the rest of the site from the new domain, especially when the mobile site is a separate website for the main domain and the subdomain is in a subfolder?" You can do this. It's like a partial migration, where you move part of your site to a new domain and the rest of the site you leave on the old subdomain. But what will happen here is we have to process that on a page-by-page basis. The other option-- moving everything from the subdomain to a domain, or everything from one domain to another domain-- makes it really easy for us to say, well, everything from this old site goes to the new site. And it's a lot faster for us to process. Whereas, if you say part of the site is moving, then we really have to check everything out on a page-by-page basis. And that adds a lot of complexity. That said, if this is your only option, then maybe you just have to bite that bullet and do it anyway. "A client of ours accidentally disavowed all backlinks to the website. How do we clear that list out?" So what you can do in Webmaster Tools, or in Search Console, is just upload an empty disavow file. And then we'll process that and take out all of the previous disavows. Or rather, what you'd probably want to do in this case is just upload a disavow file with only the bad links in it-- the ones that you want to have disavowed.
And then we'll take those into account. And if the good ones that you don't want to have disavowed are not in there anymore, then we'll take that into account appropriately when we process those links. So that happens automatically.
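The mechanics here are simple because the uploaded file fully replaces the previous one: a file containing only the links you actually want disavowed (or an empty file) clears everything else. A sketch of writing one in the accepted format-- `#` comment lines, `domain:` entries, and bare URLs-- with hypothetical example entries:

```python
def write_disavow(path, domains=(), urls=(), note=""):
    """Write a disavow file in the format Search Console accepts:
    '#' comment lines, 'domain:example.com' lines to disavow a whole
    domain, and bare URLs to disavow individual pages."""
    lines = [f"# {note}"] if note else []
    lines += [f"domain:{d}" for d in domains]
    lines += list(urls)
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")

# Hypothetical entries: only the links that should stay disavowed.
write_disavow(
    "disavow.txt",
    domains=["spammy-directory.example"],
    urls=["http://bad.links.example/page.html"],
    note="Reviewed disavow list, 2015-06-30",
)
print(open("disavow.txt").read())
```

Re-uploading this file removes any earlier disavows that are no longer listed in it.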

BARUCH LABUNSKI: Suppose he did it by accident for a couple days and then he undid that. That's not going to have any effect, right? Because there was no update in between, right?

JOHN MUELLER: Probably very little effect. It can still have an effect, because any of those URLs that we crawl in that time period that we have a disavow for, we'll drop those links in the meantime. But chances are we'll recrawl those fairly frequently and check those out again and notice that the link is actually not disavowed anymore and use that. So if this is something that was done by accident for just a short period of time, then I wouldn't expect any big changes in search.

BARUCH LABUNSKI: But according to one of your patents, there is an area where if you link back again, it has less value.

JOHN MUELLER: I have no idea which patent you're talking about. But we have a lot of patents. We have a lot of really smart people here. And just because something is patented doesn't mean that we actually use it. So that's something always worth taking into account. I think it's interesting to look at these patents, because there are some really fancy ideas in there. But I wouldn't necessarily assume that we do everything that's in there.


JOHN MUELLER: "What are the early signals that Google detects when ranking a web page? For instance, on any breaking news, we see first-page results from news sites. How does Google decide whether a newly created page from a news site has enough authority to rank on the first page of Google?" That's always a tricky situation, because if we don't have a lot of signals for a page or a website, then it's really hard to rank those kinds of pages. And if this is something really new, then that's twice as hard, I guess. So I don't have any easy answer there. It's not the case that we read Google News and see, oh, well, they're talking about this problem, therefore we'll try to find websites that match that problem. Essentially, this is completely algorithmic and automatic. So it's not something that you can easily tweak as a webmaster. If you notice something is really trending-- really coming up-- and you have information to share on that, I would definitely recommend putting that out on your website and making sure that we can pick it up as quickly as possible. So putting that in an RSS feed, for example, and using PubSubHubbub to push that as quickly as possible to Google is a great way to let us know about these kinds of content changes as quickly as possible. Because I think the bigger problem that we sometimes see is people will write about something that's trending, but they won't really tell us about it. And we'll pick it up a week later. And by that time, it's not really that interesting anymore. So they never really get a chance to be shown in search. "How imperative is it for simple sites that offer services to change to HTTPS? Is this going to be required at some point? Will this affect current rankings? And if so, how long would it be until things are back to normal?" We do use HTTPS as a ranking signal. It's a really small signal at the moment. But we do use that. We also try to use that when picking out a canonical.
So if we have a choice between the HTTP version and the HTTPS version, then we'll probably try to pick the HTTPS version to show for that specific website. It's not a requirement. It's not that you need to do any of that to be shown in search. So if you can't do that at the moment, that's perfectly fine. That said, I think this is something that's going to be a general trend on the internet. So it's not going to go away. So if you can't do that now, maybe it makes sense to look into that for the long run-- think about whether you can do that maybe on the summer break, or whenever you have time to make these kinds of changes. "We're operating a store with faceted navigation. A customer can click on a category, click a color, click on sizes-- each generates a unique URL, all noindex, follow. After that, the category pages are being reported as soft 404s in Webmaster Tools. Could a change to nofollow cause a drop in internal links?" Oh, this is a complicated question. It has a lot of facets, I guess. So I think what is probably happening here is we're picking up a lot of URLs and crawling them on the website. But we're seeing that they all have a noindex on them. And in a situation like that, our algorithms are going to think, well, these all have a noindex on them-- they are essentially 404 pages, pages that we can ignore. So we flag those as soft 404 pages. We learn those patterns and pick that up and flag them as soft 404s. And, essentially, there is no big difference between the URL being noindex and being flagged as a soft 404, apart from, of course, the links that are followed there. But if you have made sure that the detail pages-- the content within your website-- are crawlable and reachable through normal links, then it's not that you really need those faceted navigation pages to also be crawled and followed. So from that point of view, having them flagged as soft 404s isn't something I'd worry about.
It's not something you'd want to work around and trick our systems into thinking that it's not a soft 404, because these pages already have a noindex on them. They're not going to show up in search anyway.
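The pattern described here-- faceted filter URLs carrying noindex, follow while base category pages stay indexable-- can be sketched as a small routing rule. The filter parameter names are hypothetical:

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical filter parameters used by the faceted navigation.
FACET_PARAMS = {"color", "size"}

def robots_meta(url):
    """Robots meta value for a store URL: base category pages stay
    indexable, while faceted filter pages get noindex (but keep follow,
    so links on them can still be discovered)."""
    params = set(parse_qs(urlparse(url).query))
    if params & FACET_PARAMS:
        return "noindex, follow"
    return "index, follow"

print(robots_meta("https://shop.example/shoes"))            # base category
print(robots_meta("https://shop.example/shoes?color=red"))  # filter page
```

As discussed in the answer, the noindexed filter pages may then surface as soft 404s in the reports, which is harmless as long as the product and category pages themselves are reachable through normal links.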

MIHAI APERGHIS: Hey, John, regarding that question-- my approach for my e-commerce clients is usually, for the faceted filter URLs, to use rel canonical instead of noindex, follow, as that would pretty much aggregate all the signals to the main category without any filters applied. Is that an OK approach? Is that better?

JOHN MUELLER: Sure. That could work too. It depends on how you have your site set up. In our Help Center, we have a lot of information about faceted navigation as well and how you can set that up. So that might be something to check out. I imagine you've looked at that before. But if you're new to the faceted navigation topic, then I'd definitely check that out. There's also a blog post by Maile, which is perhaps two years old-- something around that-- with a lot of information, like good practices for faceted navigation. So I'd definitely check that out. And in some cases, it might make sense to use a rel canonical. In other cases, maybe you just don't want to have those crawled at all.

AARON FRIEDMAN: John, we have another question for you.

JOHN MUELLER: All right, go for it.

BARRY FRIEDMAN: So I have a client that I'm working with right now. Their brand is-- they're known by two different names. One of them is the brand-- their name spelled out. And one of them is an acronym. And for the acronym of their name, they have a Knowledge Graph showing up. And it's pretty robust. And then for the name spelled out, there's nothing showing up at all. So my thought was-- there's a schema markup for an alternate name. I thought to maybe put that on. And maybe that might trigger something. But is there anything else that you'd recommend doing at the same time to support that effort? I want to tell-- I want Google to understand-- I want you guys to understand that the two are the same. So how would I-- what's the best way to do that?

JOHN MUELLER: The alternate name markup is probably a good idea there. But essentially this is algorithmic in the sense that we try to figure out what to show for these queries. And, for example, if nobody is actually searching for that written out name, then we might assume that it's not really that critical to show a Knowledge Graph entry there. So that might be coming into play there as well. But the alternate name definitely lets us know that these are related and that they belong together and that maybe we should show the same information for both of these names.
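The alternate name markup discussed here is the schema.org `alternateName` property, typically embedded as a JSON-LD block in the page head. A minimal sketch with a hypothetical organization:

```python
import json

# Hypothetical organization known by an acronym and its spelled-out name.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ACME",
    "alternateName": "Association of Creative Media Engineers",
    "url": "https://www.acme-example.com/",
}

# JSON-LD block to embed in the page's <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(org, indent=2)
    + "\n</script>"
)
print(snippet)
```

As the answer notes, this signals that the two names belong together, but whether a Knowledge Graph panel appears for the spelled-out name remains an algorithmic decision.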

BARRY FRIEDMAN: OK. So you're saying it's possibly because not many people are searching for that version of the name. So is there anything else to do aside from the alternate name, or, I guess, completely driving people towards that as the brand name?

JOHN MUELLER: One aspect that you might want to double check is that the Wikipedia page is also clear on this name issue, if that's a problem there, to really make sure that information is reflected elsewhere on the web as well. I think that can help there too. The tricky part with a lot of the structured markup and the Knowledge Graph entries is really that these are essentially done algorithmically. And we try to pick up the right parts to show at the right time to users. But it's not something that you can easily control and say, oh for this query, I also want to have this Knowledge Graph shown. We treat those separately.

BARRY FRIEDMAN: So I'll tell you also-- are we good? Oh, we're good? OK. So another thing, similar to and also associated with the Knowledge Graph, is a situation which you brought up-- the Knowledge Graph and not being able to control it. A client that I'm working with has some information showing up in her Knowledge Graph. And it's almost directly correlated with what was on Freebase, which you can no longer edit-- which means that, because it cannot be edited, you can no longer get rid of that piece of information in the Knowledge Graph. So what does one do in that situation?

JOHN MUELLER: To get rid of something in the Knowledge Graph?

BARUCH LABUNSKI: Contact Wikipedia.

JOHN MUELLER: Well, there should be a feedback link below the Knowledge Graph entry where you can flag individual items as being wrong. I think that's still the case. And we have that there. So that's the first thing I would do there. That gives at least our systems that information that you're saying, well, this is wrong information. I think there's also a text box where you can provide some kind of justification or a link to the correct information-- those kind of things. I think that's the first step I would take there to at least make sure that this information comes to our side. And from there, it's hard to see how quickly something like that will be picked up or will be reused by the algorithms. But that's really the first point that I'd go into.

BARRY FRIEDMAN: So the issue here is that they don't-- it's not that-- it's not that it's necessarily incorrect. It's just something they don't want to draw attention to. And the only reference to it online is in Freebase. So it's not correct to remove it. It's just it's impossible to remove right now. So that's sort of a--

JOHN MUELLER: OK, if you want, you can send me a link. And I can take a quick look to see if that's something that the team would take action on. I can't promise that they would be able to do anything there. But I'm happy to pass that on to the team to take a look at as an example of something that might be problematic for other people as well.

"We've reached a limit of how many websites we can manage in Webmaster Tools. We run websites for clients, but have stopped configuring Webmaster Tools for new clients, since we're at the limit." This is probably a good thing because now we have this fancy, new thing called Search Console. So you don't need Webmaster Tools anymore. Sorry. There's a limit of 1,000 sites in Webmaster Tools, or Search Console, as well. So that's probably the limit that you're running into. But you can create separate accounts if you have more than 1,000 sites that you need to manage. And sometimes it makes sense to have accounts that archive all of this information, to at least collect things like crawl errors on domains that aren't actually canonical, for example, to separate those things out. So you can definitely create separate accounts to track more sites like that.

"Anchor text of internal links that are not a part of the navigation bar. Does the text matter if the same text is used often? Is it a penalty?" No, there's no problem with reusing the same text in your navigation or your menus. That's a really common design pattern. That's not anything I'd worry about or try to artificially tweak. So if you have a menu across your website, and that's across 1,000 pages, then traditionally that menu is going to look the same across all of those pages. And that's not something you need to artificially work around.

"Is it true that only a 2048-bit SSL certificate for HTTPS will give you a ranking boost? Or can you use any SSL certificate to get a ranking boost?" You can use any valid certificate to get that boost.
When we index your pages with HTTPS, and your site has a valid certificate-- in the sense that a browser viewing this page isn't throwing errors-- then that's essentially what we're looking for with that ranking change. So you don't need to use a 2048-bit certificate if you have another one that's valid and accepted by a browser. So from that point of view, you're a little bit open there. But at the same time, I also wouldn't recommend using a lower quality certificate just because that's easier to implement, because usually it's just a matter of putting the files in the right place. So make sure it works in a browser. Make sure it's a modern encryption setup. And make sure it's not something that browsers are going to deprecate [INAUDIBLE].
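The "valid and accepted by a browser" condition John describes can be approximated locally by making a verified TLS handshake yourself. This is an illustrative sketch using only Python's standard library; the hostname in the example is hypothetical, and the handshake raises an error if the chain is untrusted, expired, or mismatched, which is roughly the same class of check a browser applies:

```python
import datetime
import socket
import ssl


def parse_cert_date(not_after: str) -> datetime.datetime:
    """Parse the 'notAfter' date format returned by ssl.getpeercert()."""
    return datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")


def cert_expiry(hostname: str, port: int = 443) -> datetime.datetime:
    """Perform a verified TLS handshake and return the certificate's
    expiry date. Raises ssl.SSLError if validation fails."""
    ctx = ssl.create_default_context()  # system trust store + hostname check
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return parse_cert_date(cert["notAfter"])


# Example (requires network access):
# print(cert_expiry("example.com"))
```

This only checks that the certificate validates today; it doesn't inspect the signature algorithm, which is the separate SHA-1 vs SHA-256 question that comes up next.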

MATT PATTERSON: Could I just follow up on that quickly?


MATT PATTERSON: Because the thing that doesn't address is the difference between SHA-1 signed certificates and SHA-256 signed certificates. And you get the yellow triangle in Chrome with a SHA-1 signed certificate, although it's technically valid. Does that make a difference?

JOHN MUELLER: Not at the moment.

MATT PATTERSON: I didn't think so. But I thought it was worth checking since that question was asked.

JOHN MUELLER: Yeah, I think that's something you probably want to work on in the long run because, from what I remember from the discussions, which were a while back, that's something that they're trying to discourage people from using. So that's why you're getting the yellow triangle in Chrome. But in the long run, if you're updating things anyway, then of course you'd probably pick something that's more modern.

MATT PATTERSON: Well, everyone's certificate is going to expire in a year or two anyway. So at that point, there won't be any more SHA-1s to fix.

JOHN MUELLER: "On a site with faceted search, the URL parameters look like this: search?facet=jeans&facet=black. Would there be any issue with Google handling these multiple parameters, all with the key name facet?" Yes, that would probably cause some problems on our side, especially if you're looking into the URL parameter handling tool, where you can define which URL parameters you want to have followed or used and which ones you don't. So if the parameters are all called the same, and the value defines the type of parameter that you have, then that would cause problems there. That would make it a lot harder for you to define which of these facets you want to have used and which ones you don't. So I'd recommend really using unique parameter names so that they can be tracked separately as well.

"A franchiser has many franchisees selling the same product, but in different cities. Each franchisee has the same template website with the same content. Should we noindex those sites? Or can we leave them as is? Will the franchisee sites be penalized?" In general, what will happen here is we'll recognize that this is duplicate content, and we will try to treat it appropriately. So it's not that these sites will be automatically penalized in search. But we'll try to figure out which ones are relevant and show those in search for the individual user. So in a situation like this, if you're searching for a type of business and your local city name, then we'll probably try to show the one that's matching that query. Whereas if you're just searching for a type of business without specifying any location, and maybe we don't realize that this is location-specific, then we'll just pick one of these multiple sites and show that in search. So that's the general way that we would handle this duplicate content.
One thing I'd just watch out for is that these aren't essentially doorway sites, because that's something you would want to worry about and make sure that you have cleaned up. So if these are really separate businesses-- they're run by a local person in those cities, and they're run by themselves-- then that's something that might be OK. But if these are just domain names set up by the same company, just to add the city name into the domain or put it on the site somewhere, then that's something that we'd probably see as doorway pages and try to take action on from a manual webspam point of view.
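The faceted-URL point above is easy to see with Python's standard query-string parser. The URLs here are hypothetical: when every facet shares the key name `facet`, the values collapse into one list, so a per-parameter tool has no way to treat one facet differently from another; unique names keep each facet separately addressable.

```python
from urllib.parse import parse_qs, urlsplit

# Repeated key: both facet values land under the single name "facet".
ambiguous = urlsplit("https://example.com/search?facet=jeans&facet=black")
print(parse_qs(ambiguous.query))   # {'facet': ['jeans', 'black']}

# Unique keys: each facet can be configured (followed/ignored) on its own.
distinct = urlsplit("https://example.com/search?category=jeans&color=black")
print(parse_qs(distinct.query))    # {'category': ['jeans'], 'color': ['black']}
```

Any per-parameter configuration (like the URL parameter handling tool John mentions) faces the same ambiguity as `parse_qs` does in the first case: the two facets are indistinguishable by name.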

BARUCH LABUNSKI: But I see that someone here is doing that locally. I emailed you about it. Did you want me to send that again regarding this one site that's doing that?

JOHN MUELLER: I think that's always a tricky situation. You're always welcome to send me these spam reports. But it's not something where I can guarantee that the webspam team will say, well, we have to take action on this. And especially if it's a smaller number of sites-- if these are all the same site just tweaking the keywords-- then that's something where the webspam team probably wouldn't take my [INAUDIBLE]. On the other hand, if you're talking about hundreds of different sites that all just swap out a city name, then that's something that the webspam team would be more interested in and more willing to say, well, we need to take action here to preserve the quality of our search results.


JOHN MUELLER: "If the desktop URL is marked as noindex, nofollow, but the mobile equivalent is marked as noindex, follow, will Google still index and show the desktop URL for searches made on desktop?" So from the question, it looks like both of these versions have noindex on them. So we wouldn't index either of them. The thing to watch out for with SEO in general is that you want to avoid giving conflicting signals. So in a case like this, where you have one page with noindex, nofollow, and the other page with noindex, follow, that's something where you're saying, well, figure it out yourself, Google. And that's the situation where Google will try to make one decision, and that decision might change over time. Our algorithms might say, well, the webmaster probably means this or probably means that. But it's not something where you can rely on the outcome to be the same all the time. So if you want Google or search engines to do something very specific with your website, then be really clear about that and make sure that you're not giving any conflicting information about what you're trying to do with it. A common situation we've seen, for example, is you have one URL that redirects to a different URL, and that other URL has a rel canonical set back to the first URL. So on the one hand, we see a signal that you'd actually want the other URL indexed. But then you also have the signal that you want the first URL indexed. And our systems basically throw up their hands and say, well, I don't know. Maybe I'll pick this one. Maybe I'll pick the other one next time. So you can't rely on that. If you really want search engines to do something specific, make sure you're giving that information as clearly as possible.

"How do we increase our crawl budget for larger sites with over 50,000 pages? I noticed some of our important pages are not crawled on a regular basis. We prioritize those pages in the sitemap to crawl on a daily basis." So there are a few things here.
First off, crawling doesn't mean that we think a page is important or that we'll show it in ranking. So if you have pages on your website that don't change frequently, but you're saying they should be crawled all the time, that doesn't really make sense from our point of view. If we're not picking up anything new from the individual crawls, we'll probably start crawling those pages less frequently. Another thing to keep in mind is we tend not to use the sitemap priority information for crawling, and we tend not to use the change frequency in sitemaps for crawling. So if you have specific changes that you want to let us know about, then use the last modification date in sitemaps. Don't use the change frequency. Don't use the priority. That's a great way to let us know about specific changes on specific pages that you want to have recrawled. With regards to crawling other pages from your website, the sitemap file is also a good way to do that. With regards to crawling more from your website, usually the main limitation that we run across is that the server just starts getting slower. So we might want to crawl 100,000 pages a day for a website. But if we notice the server is slowing down, or sometimes throwing server errors when we crawl, then we'll back off with our crawling until we're in a sane place with regards to picking up new content from your site, but not overloading your server.

"I've recently seen a spike in issues related to an increasing level of blocked resources from a domain that supplies website tracking for my primary site. It states that this will impair the indexing of my web pages. Will Google drop these pages now? Yes or no?" So if there's content embedded or reused in your website that's hosted on another website that has this content blocked, then we obviously can't take that into account with indexing, because we can't crawl that content. We don't see what's there.
If this is content that's irrelevant to your website-- for example, you mentioned a tracking site-- then that's less of a problem. So if we can't crawl a tracking pixel, for example, that's not going to make us miss any information on your website. We can still see everything else that's on the website. On the other hand, if the primary information is hosted somewhere where it's blocked-- for example, maybe you have a map on your web page, and your map uses JavaScript to embed a lot of information on the map, and that JavaScript file is blocked by robots.txt-- then we can't pick that content up and show it for your website, because we can't actually crawl and index that content. So in some cases, it'll be a problem for us to index the content on those pages. In other cases, the blocked content is minimal and doesn't really affect what we would see on a page. You can test this out with the Fetch as Google tool in Search Console, where you see, on the one hand, the version of the page that we would see through crawling, with the content blocked, and on the other hand, the version of the page that you would see in your browser, or can test in your own browser. And you can compare those two versions. You can see, does Googlebot have the ability to crawl and index all of the important content? Or is there something missing? And based on that information, you can work with your partners on other sites to have that content un-robotted.
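The blocked-resource check above can be approximated locally with Python's standard robots.txt parser. The rules and URLs below are hypothetical, but the pattern matches the map example: a Disallow on the script directory hides that resource from Googlebot, while other resources on the same host stay crawlable.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt on the third-party site hosting the embedded JavaScript.
rules = """\
User-agent: Googlebot
Disallow: /assets/js/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The map script is blocked, so its content can't be rendered or indexed...
print(rp.can_fetch("Googlebot", "https://maps.example.com/assets/js/map.js"))    # False

# ...while an unblocked stylesheet on the same host is still fine.
print(rp.can_fetch("Googlebot", "https://maps.example.com/assets/css/site.css")) # True
```

Running a check like this against each external resource a page embeds gives roughly the same blocked/unblocked split that the Fetch as Google rendering comparison surfaces.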

MALE SPEAKER 2: Hey, John, can I ask you a quick question?


MALE SPEAKER 2: Sometimes-- speaking of the Fetch and Render tool in Search Console, sometimes when I use that tool, I notice that JavaScript imported from another site isn't loaded for some reason or it's blocked on that site like Disqus, for example. Will that cause a ranking problem? And it can have a rendering effect on my site. Will that cause a ranking problem on my site?

JOHN MUELLER: It can. Sure. If it's generating or creating content on your pages that otherwise wouldn't be on your pages, then that could have an effect on your site. So it's not that your site would get demoted for that. But if we can't pick that content up from rendering, then we can't use that to rank your site.

MALE SPEAKER 2: Right. Thank you very much for the straight answer.

JOHN MUELLER: Sure. All right, we just have a couple minutes left and a ton of questions left. So let me just see if there are any with higher votes, and we can run through those quickly.

"I am retiring a product for two years. It lives on its own domain. I plan to redirect the product domain to the parent site. If, after two years, I bring back the product domain and remove the redirect, will it regain the power that it used to have?" Likely not. If this is really gone for two years, then those signals that we have associated with it will probably have evaporated in the meantime. So from that point of view, you're probably starting more or less over at that point. But for the most part, that's less of a problem, because usually if you bring a product back after two years, it's something different anyway. So it's not something that would be seen as the same product as it was before. So if you plan on bringing this back at some point, maybe it makes sense to just keep those pages and put a banner on top and say, hey, this product isn't available at the moment. We plan to have more in two years when we have a new supplier, or whatever.

"One of our top article pages is crawled and cached on a daily basis by Google. I placed an internal link a few weeks ago in the body text of the top article. But that link is never crawled by Google, even with the metadata set to index, follow." It's really hard to say what that could be. So I'd probably have to take a look at an example. If you have a forum thread on that, I'd be happy to take a look at it.

All right, let's just open it up for questions from you all. I'll copy these questions out and see if there's something that I need to follow up on as well.

MATT PATTERSON: I've got one. Sorry.

JOHN MUELLER: Go for it.

MALE SPEAKER 2: Hey, John, I don't know if Matt wants to ask his question. Or I think he muted himself again. But go ahead, Matt.

MATT PATTERSON: Sorry, yeah, I've got a couple things. One is the follow up on the problems with Fetch as Google with video content in the Android app. So actually, while I was on the call, we switched on internationalization-- so more territories for some videos. And we've actually got some videos that are in the US and basically available worldwide. And we're still getting those internal Google error-- Fetch as Google problems. Another one is a follow up on the question I asked a month ago about family-friendly links and the speed of indexing video content. It turned out we weren't using family-friendly links. We've added those to all our content, and that actually seems to have sped up the indexing of video content dramatically, which is good to know. And the third one is, again, this problem with app deep linking. We're a launch partner for Android TV and Sony in Germany. So we're pre-installed on all Sony Android televisions. And, obviously, none of our app deep links are working currently. So it would be super lovely-- I don't know if I need to go through them or what-- but it would be super lovely to get some attention on that, because I suspect, particularly for Android TV, problems with app deep links and video content are actually a fairly big deal.

JOHN MUELLER: Yeah, I made a note of that to pass on to the team to take a look at. With regards to internationalization, I think that's something that potentially might be a problem there because, as with websites, we crawl primarily from the US. And if your content is blocked in the US, then we won't be able to access it either. So that applies to web pages as well as to apps. So that might be something that's playing a role there too.

MATT PATTERSON: Yeah, and as I said, we've actually just started switching on more territories for some of our content. So a lot of our content is licensed to GSA. So we've tried with content licensed for the US. We can verify that it works when coming from a VPN in the US. But we're still getting an internal Google error in Fetch as Google for that content. So that's still a problem.

JOHN MUELLER: Yeah, it's still very early days for the Fetch as Google for apps. But I'll definitely pass that on to the team and dig out that thread that you had on the forum so we can see.

MATT PATTERSON: Yeah, I'll update my forum posts with some known to work in the US URLs.

JOHN MUELLER: That would be awesome. OK.


MIHAI APERGHIS: Hey, John, regarding the breadcrumbs problem that I had for that client, with the home element with display: none-- it does seem to not show any breadcrumbs rich snippets. Any updates on that? Any idea what it could be? Or what we should do to display those?

JOHN MUELLER: I thought it was working last time. Or maybe I misunderstood that. OK. I don't know. I'd have to take a look at what's happening there. But essentially, we should be picking that up. And that's something we try to do there. I know we had a problem recently with some breadcrumbs that were repetitive, in the sense that we would repeat the same breadcrumb piece multiple times in the search results. But that wouldn't be what you're seeing there, I guess.

MIHAI APERGHIS: We also added the organization and sitelinks search box thing. It didn't show up. But I know that only shows up for a certain amount of-- volume of queries, I guess. Or I don't know-- you have some algorithmic choice to do that. But I know the breadcrumbs should be picked up regardless. So I don't know what else we can do. I'll leave you a message on Google+.

ROBB YOUNG: John, while we're getting updates-- would you mind having a look at ours again?

JOHN MUELLER: Whoops. Just a second. Can you repeat that?

ROBB YOUNG: Yeah, while we're getting updates, would you mind having a look at our new domain again to see if it's still being affected by the old one? It was April that effectively all of the 301s and every connection was removed. I can put the-- I'm sure you know the domain. It's the normal E one.

JOHN MUELLER: I'd have to take a look separately-- ah, the version with E.

ROBB YOUNG: Yeah, it's the one that we're using now that--

JOHN MUELLER: I don't see any problems with that.

ROBB YOUNG: Right, because it's still showing as over 100,000 links from the old domain going to that from the old 301 days. And they haven't been there since April.

JOHN MUELLER: Yeah, I wouldn't worry about that.

ROBB YOUNG: But I do worry about-- one of the things that we see is that every single-- when you click through into it-- every single one of them says it is via an intermediary link.

JOHN MUELLER: Yeah, I think when it's a redirect.

ROBB YOUNG: Right. But if it doesn't exist anymore, how long does that usually take to-- I've resubmitted, and the index is up to date.

JOHN MUELLER: Longer than I'd like. I know we're working on improving the latency there. But I've seen situations like yours where we show things that have actually been removed for a longer period of time. That's really just Webmaster Tools, or Search Console. It's not that we take that into account with--

ROBB YOUNG: So that's not the real state of the index for ranking and algorithmic--


ROBB YOUNG: OK, so we just need to wait that out to get an up-to-date report?


ROBB YOUNG: All right, thanks.

JOHN MUELLER: All right, with that, we're a bit out of time. I'll set up the next Hangouts as well. I think we have one on Thursday in German and one on Friday in English again. For those of you in the US, since the holidays are coming up, you could join at a crazy, early time for a Hangout as well if you wanted to. But otherwise, I wish you all a good week, and see you in one of the future Hangouts again. Bye, everyone.

FERNANDO: Bye, bye.

MATT PATTERSON: Thanks, John.