Reconsideration Requests

Google+ Hangouts - Office Hours - 15 December 2015


Transcript Of The Office Hours Hangout

JOHN MUELLER: Welcome everyone to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller. I am a webmaster trends analyst here at Google in Switzerland, and part of what I do is talk with webmasters and publishers like the ones here in the Hangout, the ones that submitted a bunch of questions already. As always, if there are any people here in the Hangout, especially the newer folks who'd like to go ahead and ask the first question, feel free to jump on in.

MALE SPEAKER 1: Hi, John, back to my question about duplicated content on the automotive website that we spoke about last time-- did you manage to take a look at this?

JOHN MUELLER: Not in detail-- not yet.

MALE SPEAKER 1: All right, would you like to take a look at it in the future? Or would you like us to send you some more details on it?

JOHN MUELLER: I'll dig into it a little bit, yeah.

MALE SPEAKER 1: OK. Also, I have another question for you. It's in regards to the content, again. So currently-- I'll tell you the story. Our website has been selling car parts and accessories for about six years. It's a pretty big automotive website. But half a year ago, we decided to add new categories and new products to the website that are not related to the automotive industry at all. For example, some home audio microphones, home theaters, and stuff like that. And we placed them in the root of the main website. And we noticed that currently, our traffic heavily dropped on the automotive keywords. So can we say that all those new products lowered the site ranking and caused that traffic drop?

JOHN MUELLER: Usually, it's not that simple. Let me just mute you. It looks like we have a bit of an echo there. Usually, it's not that simple that we would say, well, you focus on automotive parts, and now you added, I don't know, maybe furniture or something completely different, then we would say, oh, well, this website is not so strong for automotive anymore. Usually, we'd still focus on those automotive parts normally. So it's not that you would be diluting your website or making it less useful for automotive parts by adding other kinds of information there. So usually, that's not something where I would say it would be related.

BARUCH LABUNSKI: John, but one time you said it can confuse Googlebot if you add more or different pages. You said that in one of the Hangouts that it can confuse Googlebot if you add--

JOHN MUELLER: Well, it shouldn't. So adding more and different kinds of content can definitely make sense. It wouldn't result in us thinning out how we would rank your website for the existing content that's there. Of course, if you add a mix of high-quality content and low-quality content, then that's a different question. But if it's normal content that you're adding to your site, and it's a slightly different theme, then that should generally be fine. That wouldn't be something where I would say it should cause problems. So if you're seeing a drop in ranking after something like that, then that shouldn't be due to adding different kinds of content to your site. That would probably be due to something else that maybe we were picking up across the site, or that we're just not finding as useful as it used to be.

MALE SPEAKER 1: Hey, John, another question-- what would be a better way to organize it? Should we get a new domain for the entire non-automotive theme, or maybe a subdomain? Would it make any difference for Google at all?

JOHN MUELLER: I am sure it would make a difference, but I can't say if it would be a positive or negative difference. It would essentially be making two separate sites out of one site, which is always a fairly big change when you're doing that with a website. So moving from one domain to another is something we could easily pick up on. Taking one domain and splitting it up into two, or two domains and combining them into one-- both of those scenarios are very hard for us to guess ahead of time what would come out of that. And from my point of view, personally, I prefer having one really strong website instead of splitting things up into separate websites, just because if you can make one strong website, you can focus your energies a lot more and really make sure that everything across the whole website is as high quality as it can be. I personally don't like splitting things up that much. Sometimes I think it makes sense for marketing reasons if you're targeting completely different audiences. Maybe you have resale products that you sell directly to consumers and other products you sell to distributors. And it might be the same product, but a very different audience, and maybe it makes sense to set up separate websites for that kind of situation. But in general, if you're targeting the same audience, or generally the same audience, I really recommend making one really strong, really good website, rather than a handful or a large number of less strong websites.

BARUCH LABUNSKI: A good example to take a look at would maybe be Newegg. Newegg has something similar to that.

JOHN MUELLER: I don't know the Newegg site. I usually use Amazon as an example here, but Amazon also has a bunch of different websites, probably based on the different acquisitions that they've had over the years. So it's something where you can find pretty much everything on a big website like that. And I think if you can really build out a strong website where people recognize the brand-- people want to go there-- they explicitly want to go to that website because they think they can find the information they're looking for-- then that's probably something that you could aim for.

MALE SPEAKER 1: Thanks a lot, John. Actually, we're attending your Hangouts just to make sure we're doing everything right. And from our perspective, we checked our website, and we still see a major traffic drop on automotive categories. Would it be possible for me to include a couple of examples in my message so you could take a look? Because it's so irregular--

JOHN MUELLER: Yeah, I'm happy to take a look. I can't promise that I'll be able to say much about that, because a lot of times, this is feedback that we bring back to the search quality team. And the search team will try to figure out: do we need to improve our algorithms? Are we doing something wrong here? Is that algorithm picking up something that maybe they don't want to tell you about explicitly? Those kinds of situations. So I can't promise I'll be able to tell you that you need to change this line of code and then everything will be fine. But it's definitely useful information for us to have to discuss with the team internally.

MALE SPEAKER 1: Thanks a lot. That's going to be a great help for us. Thanks.

MIHAI APERGHIS: John, a quick follow-up on that. When you have a very niche-specific, focused website, and you're adding a bunch of categories of products not related to that niche, technically aren't you diluting some of the signals that the website has? These new [INAUDIBLE] signals will be spread out across both the niche-specific pages that you have as well as the new products that you've added. Perhaps this is why you also usually recommend that small businesses focus on one specific niche and not go too broad.

JOHN MUELLER: I wouldn't necessarily say that you're diluting things. But it's something where, especially if the quality isn't the same, or you can't say this is something really well-established here. You're adding something completely new that nobody really knows about or cares about maybe to your website as well, then of course you're spreading things out a little bit, but if you're taking something really fantastic and adding more fantastic stuff to it, then that generally builds up on each other. It's not something where I would say it would be a problem. And sometimes what happens is you have to introduce new products somehow, and nobody will know about these things for a while, but you hope that through your existing brand recognition, people will come to your site and say, oh, well, this is interesting too. I recommend that to other people as well-- I'll buy that maybe, whatever. So that's something where you have to build up step-by-step, and obviously, if you're taking one thing, and you're piling on lots and lots of new content, then it takes a while for us to understand what to do with all that. But if this is good stuff, then we should be able to pick that up.

MIHAI APERGHIS: So there's no problem, for example, to have a store selling laptops and IT products, and then suddenly I'm adding baby products or pet food-- I don't know.

JOHN MUELLER: I'd say you'd have more problems with your audience than you would have with Google in that if you have a user base that is looking for IT products, and they keep coming back to your site for that, and the main products that you promote now are suddenly baby powder, then they'll go, oh, this is not what I wanted to find here and maybe run away. But if it's something where you know the audience is also interested in that, then, sure, why not? And that might be the case, and you target something specific to an IT audience that grows older over time, and suddenly they need baby products because they're all having kids. I don't know. These things can happen.

MIHAI APERGHIS: Google doesn't have any bias towards one or another as long as the audience is--

JOHN MUELLER: Not that I'm aware of.

MALE SPEAKER 2: John, I have questions about app indexing in the Search Console. Is that for a different Hangout? Or could I ask?

JOHN MUELLER: Go for it.

MALE SPEAKER 2: It's not news related. So I could share my screen to make it easier to understand. OK, so I launched the Search Engine Roundtable app around here and started to get indexed-- impressions, not many clicks, as you can see. And then there was that Search Console bug with the Search Analytics around here, and I was like, oh, it'll be fixed. But when it was fixed, it just kept dying, and I don't know if it has anything to do with the app and something that I'd done. But when I look at the crawl errors, there is nothing, obviously. That's an issue you said you're working on, I think?


MALE SPEAKER 2: And if I go to the fetch, and I fetch a new URL, it tells me these things, but it always said that, even when I first launched.

JOHN MUELLER: What does it say? Let me click on you.

MALE SPEAKER 2: Sometimes it works. If I fetch the homepage or certain older pages, they work-- it says complete. But some URLs give me this "unsupported URI," which I can't figure out if it's just--

JOHN MUELLER: Oh, you have the colon in there. It should just be https, slash, and then the host name, and the URL.

MALE SPEAKER 2: So if I go remove--

JOHN MUELLER: It depends on how you have your intent set up in the app, but--

MALE SPEAKER 2: Good question. So if I remove the colon, you said?


MALE SPEAKER 2: Need the slash?

JOHN MUELLER: If you have it set up to respond to that, yes.
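For readers following along, the colon issue being discussed concerns how the Fetch as Google tool for apps expected deep links at the time: the scheme is written without the colon and slashes. A sketch, using an entirely hypothetical package and host name:

```text
# Full app deep link (hypothetical package and site):
#   android-app://com.example.app/https/www.example.com/some-page
#
# In the fetch tool, the URL portion after the package:
#   Rejected ("unsupported URI"):  https://www.example.com/some-page
#   Accepted:                      https/www.example.com/some-page
```

Whether a given path responds depends on how the intent filters are set up in the app, as John notes.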

MALE SPEAKER 2: I've got to sort this out. Yeah, it looks like the ones-- OK, I'm going to be-- all right, I apologize.

JOHN MUELLER: OK. I passed the issue with the impressions on to the team. And they basically said this was mostly because very few people have your app installed at the moment.

MALE SPEAKER 2: You've got to install it?

JOHN MUELLER: Yeah. So those people who do have it installed, they will see it, of course, in search. But those who don't have it installed, they won't see it.

MALE SPEAKER 2: I thought they'd see it anyway. I thought the new thing is that they showed the app even to people who don't have it installed.

JOHN MUELLER: We show the install button if someone is explicitly looking for your app. So if they are-- I don't know what queries you would use. It might be search engine roundtable app, or maybe search engine round table itself. Those kinds of things should show the install button. But otherwise, it would only trigger on the content itself if you have the app installed.

MALE SPEAKER 2: Are you sure?

JOHN MUELLER: I'm pretty sure.

MALE SPEAKER 2: I thought there was a news release saying that even if you don't have the app installed now, it will surface the content using the streaming feature.

DANIEL PICKEN: Can I jump in? I've got the app, Barry, because it's great. And when I search for it, it comes up with the standard desktop version, and then at the top, it tells me to open it there. It doesn't necessarily give me an option straight away to go straight to the app. It actually shows the desktop version first.

MALE SPEAKER 2: Yeah, it's weird. It used to.

JOHN MUELLER: I think we did some experiments with bubbling up the app for content-type queries. So if someone is searching for, I don't know, Penguin, and you happen to write about Penguins, then that's something where-- I believe we did an experiment about showing a deep link to that directly, even for people who didn't have it installed.

MALE SPEAKER 2: Right, like the example, again, I guess this is a limited number of people. The example was with hotels, that it would actually show apps without having them installed. You clicked on it, it would launch the Google Play app and streaming or something like that.

JOHN MUELLER: The streaming I think is just a very, very limited test run for them. So that wouldn't be happening.

MALE SPEAKER 2: And the good news is the fetch worked out without the colon. So thank you.


MALE SPEAKER 2: And a quick question on the-- somebody said that the article markup in the rich snippets now specifically requires an author name. But has it always required an author name? Or are you not sure?

JOHN MUELLER: I don't think so.

MALE SPEAKER 2: You think that's new?

JOHN MUELLER: I believe that's something new that was added. I saw the feedback as well, and I asked the team, but I haven't gotten an answer on that yet.

MALE SPEAKER 2: OK, authorship is back.

JOHN MUELLER: No, no, no. Not, not-- no, no. Let's not open those wounds, man.

MALE SPEAKER 2: OK, thank you.

DANIEL PICKEN: John, can I jump in with an indexing and noindexing question, if that's OK?


DANIEL PICKEN: So we spoke last week about why we would use a robots.txt file, and it would generally be to block the unpaginated pages. Now, imagine the situation where Google's crawled the pages, so the search pages are already indexed. So what we'll do then is we'll block them via robots.txt, but they are still there. We can't put a noindex on the page, because once it's blocked by robots.txt, you can't crawl the page to see the noindex. In that situation, where we want to pull those pages out of the search, how would we go about that? Because, again, we blocked them through robots.txt. How would we then go about pulling those pages out, and really as quickly as possible?

JOHN MUELLER: I'd use the URL removal tools for something like that. So if this is a specific folder or subdomain or something like that, where you can say everything that starts with maybe search.php, or whatever pattern you have, should be removed, I'd use the removal tools for that. Those removals are for a limited time-- I think 90 days. So I'd set a calendar reminder-- double check after 90 days to see what's still left, because probably a lot of the content that was blocked by robots.txt ends up dropping out of the index anyway if we can't crawl it anymore. But after maybe 90 days, look at it and say, well, this is fine. This isn't that bad after all now. I'll just leave it like this. Or maybe you'll say, well, I'll do another 90 days, and then it gets better after those 90 days.
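As a sketch of the approach being described here-- with a hypothetical internal search script at /search.php-- the two pieces would be a robots.txt block plus a prefix-based removal request:

```text
# robots.txt -- stop the search pages from being recrawled
User-agent: *
Disallow: /search.php

# Then, in Search Console's URL removal tool, submit the prefix
#   http://www.example.com/search.php
# as a folder/prefix removal. Every indexed URL starting with that
# prefix is hidden from results for roughly 90 days.
```

The script name and domain are invented for illustration; the point is that the removal tool matches on the literal URL prefix, not on a pattern.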

BARUCH LABUNSKI: Are you referring to expireds or where it says expired? Because it says, expired, then I'll have over 3,000 things that I get from--

JOHN MUELLER: Yeah, that's when those 90 days expire. So there's a difference there between removing content and saying that you want the cache refreshed. If you use the cache refresh, then that will expire faster if we recognize that we've recrawled and reindexed those pages in the meantime. But if you use the removal tool, it'll just remove it for 90 days. You have a choice between-- so if you have the site verified, you have the choice between a per-URL removal or a per-folder removal, essentially. And for the folder removal, you can also specify a script name, and essentially every URL that starts with that will be removed. So if it's search.php, whatever-- if you submit search.php, then we'll remove everything that starts with that URL there.

DANIEL PICKEN: Will it work on anything dynamically driven? Because this is a question that was put to me. With anything dynamically driven before the actual folder where it worked, it wouldn't accept it-- so it won't accept a regular expression.

JOHN MUELLER: No, it has to be exact. So it has to be an exact match of that folder. So if, for example, you have a category and then search, and you want everything removed under search, but there can be hundreds of different categories-- that's not something you'd be able to easily do there. You can submit them manually like that if you have the time, if it's really important for you. But it's not something where you can just say anything that starts with any random characters and then goes to search-- that's what I want removed.

DANIEL PICKEN: Is there any alternative to that, or is that generally the default? If we're not going to take the noindex route, are there any other alternatives at all to get those pages out?

JOHN MUELLER: Not really. Depending on what kind of content it is, what will probably happen if you block it by robots.txt is that we'll lose track of the content, because we can't crawl it, and it generally won't rank that well. So if you search for normal content-type queries, you won't find it. But if you explicitly do a site: query or an inurl: query, then you'll still find it. So depending on what you're looking at-- if you're saying, well, I just don't want normal users to find it, then it will probably be fine to just block it with robots.txt. If you say this is confidential information-- maybe in the query string there's an email address or something like that-- then you might want to take this step and say, well, I don't want people to even be able to run across that at all in search.

DANIEL PICKEN: What about supporting noindex as a directive in robots.txt? I had heard somewhere that Google [INAUDIBLE], but I'm not sure, so I'm seeking clarification. So if you had a noindex directive there, would you honor that, or would that not be accepted, because it has to be in the page code?

JOHN MUELLER: Maybe. Yeah, so-- I mean, the tricky part is-- I believe we honor it partially, or to some extent, at the moment, but it's not an official directive from our point of view. And it's something that could theoretically change, because it's not something that we officially launched. So you could put that in there, and it might do the right thing. Or it might happen that next week the team says, oh, we've been meaning to remove this code for so long-- we'll just delete that, and it won't work anymore. So it's something you could do if you wanted to see what happens. But if it's really a critical issue for you and something that you want to make sure is handled properly, then I wouldn't rely on that.
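For context, the unofficial directive being discussed is written like any other robots.txt line. As John stresses, it was never an officially supported feature, so this is illustrative only-- the path is invented:

```text
User-agent: *
Disallow: /search.php
Noindex: /search.php
```

Because the hypothetical Noindex line would be read from robots.txt itself rather than from the page, it would not have the crawl-blocking conflict that an on-page noindex has-- which is exactly why Daniel finds it attractive, and why John cautions against relying on it.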

DANIEL PICKEN: That would be a good solution if you could do that. It would stop that problem of putting things in robots.txt where you can't crawl the page and so won't see the noindex-- it would override that. I think it would be a good solution as a way forward if you were to do that.

JOHN MUELLER: It's a regular point of discussion, yeah, because there are definitely some good points for using the noindex directive in robots.txt. But there's also the problem that a lot of people don't know what the robots.txt does, and they just copy and paste it from somewhere else. And if they disallow all crawling of their site accidentally-- which happens fairly frequently when people do relaunches, they accidentally keep the relaunch robots.txt, those kinds of things-- if it's just a disallow, then we can still show the site in search, and users can still get there somehow. It probably won't be optimal, but we'll still be able to do something with it. Whereas if they copy and paste a noindex in the robots.txt, and they remove their whole site, then that's something where we can't really help the users find that site anymore.
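The officially supported alternative John is contrasting this with is a noindex on the page itself, which only works if crawling is not blocked. A minimal sketch:

```html
<!-- In the <head> of the page; Googlebot only sees this if
     robots.txt does NOT disallow crawling of the URL -->
<meta name="robots" content="noindex">
```

For non-HTML files, the same signal can be sent as an `X-Robots-Tag: noindex` HTTP response header.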

DANIEL PICKEN: And just a quick indexing question. I'm aware of multiple solutions to get Google to index our content. In your opinion, what would be the best and quickest way to get that content indexed?

JOHN MUELLER: If you have a handful of pages?

DANIEL PICKEN: The pages on your site.

JOHN MUELLER: There's a bit of an echo. So if you have a handful of pages, I'd use Fetch as Google and submit those for indexing. If you have a lot of pages, or if it's dynamic, I'd definitely use sitemaps.
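A minimal sitemap of the kind recommended here for larger or dynamic sites might look like this-- the URL and date are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/product/123</loc>
    <lastmod>2015-12-15</lastmod>
  </url>
</urlset>
```

The file is submitted in Search Console (or referenced from robots.txt with a `Sitemap:` line), and the `lastmod` dates help flag which dynamic pages have changed.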


BARUCH LABUNSKI: John, I feel really lost now without the Search Tools location filter. It's not just me-- probably another million SEOs feel the same. There's an option there where-- myself being from Canada, there is only one option, to either choose Canada or any country, and "any country" versus Canada is confusing. And I'm thinking, towards the appeal in Search Console-- should I remove it? Should I [INAUDIBLE]? Should I not? Because it's very confusing the way it is.

JOHN MUELLER: So the setting in Search Console is confusing?

BARUCH LABUNSKI: No, in search-- like, on top in Search Tools, there's no more location. You can't set it specifically to Canada-- Toronto, where I am. Like I said, I mentioned that in another Hangout, but it's hard to work with. I don't know where my-- like, you know, different areas-- it's jumping, and like, what response? It's weird.

JOHN MUELLER: A lot of the search features are focused on user queries-- those kinds of things. So that's probably not something where we'd say, this is critical for a webmaster, therefore we should add that functionality back. Because in general, what happens with a lot of these things where you have special links and fields is that users get confused, because they don't really know what they're supposed to do. So one thing you could try is just using the URL parameters. I've heard that still works. I don't know which parameter name that was, but that might be something to try out.

BARUCH LABUNSKI: This is on my phone. If I'm located wherever, the company I use-- I don't want to mention their name-- is just giving me an IP located in Mississauga, which is about 45 minutes away from me. So I'm not going to get precise results. So if I'm looking for flowers, I'm not going to get the flower shop 15 minutes away from me-- I'm going to get one about 45 minutes away from me.

JOHN MUELLER: Especially on a phone, we should be able to pick up more than just what's based on the IP address. But I will take that feedback back to the team and see what they say.


JOHN MUELLER: All right, let me run through some of the submitted questions, and then we can get back to more discussions. "Can you tell me if title and meta tags are ranking factors? Is it generally OK to generate them automatically, or is that bad?" As far as I know, at the moment the title is something we do use partially for ranking. The description we don't use at all. The description is something we show in the search results as a part of the snippet. For both of them, I'd personally try to create something that's useful for users. If you're generating them automatically, that might make sense as well, as long as what your system generates is kind of useful for users and isn't just a one-to-one copy of your whole content. So from my point of view, if you have a smart setup to generate title and description tags, go for it. "Do you have any new SEO tips for 2016?" Oh, man, I don't have any magical SEO tips for next year. So I can't tell you about that high-ranking meta tag that we're currently working on-- it probably won't work yet. But let's see-- in general, I think next year you'll probably hear a lot from us about AMP, and about mobile, continued kind of as we've been doing over the years, because that's still a very big topic, and we still see a lot of sites not doing that properly. Those are probably the bigger changes. Something else that will definitely happen as well is more information about JavaScript-based sites, so that we can really figure out how to handle these a little bit better in search, and probably make some better recommendations on what you could or should be doing there. But past that, of course, high-quality content is something I'd focus on.
I've seen lots and lots of SEO blogs posted about user experience, which I think is a great thing to focus on as well, because that essentially focuses on what we're trying to look at as well that we want to bring people to content that's really useful for them, and if your content is really useful for users, then we want to bring that to people. So if we can recognize that your site is doing everything right, then that's something we'll try to reflect.
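To make the title and description point above concrete, a sketch with entirely invented values:

```html
<head>
  <!-- Title: used partially for ranking, and usually shown in results -->
  <title>Brake Pads for 2012 Honda Civic | Example Auto Parts</title>
  <!-- Description: not used for ranking, but often shown as the snippet -->
  <meta name="description"
        content="Compare OEM and aftermarket brake pads with fitment checks and free shipping.">
</head>
```

Whether hand-written or generated from a template, the test John suggests is the same: would the text be useful to a user scanning the results page?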

BARUCH LABUNSKI: Just like Gary said, don't remove the content, make it better.

JOHN MUELLER: Yeah, that's definitely one way to look at it, especially when you're looking at quality algorithms. If you know that your content is low quality, then why not take the time to actually improve it instead of deleting everything? "Does Google treat every industry the same, or do some have advantages or disadvantages?" In general, we try to treat everything the same. We try to make our algorithms so that they apply to the web overall, because, on the one hand, the web is constantly changing. On the other hand, we don't have time to make specific algorithms for specific parts of the web, and that goes for individual languages as well. We try to keep everything general-- for countries, and for industries as well-- where we say, well, this algorithm generally works across all of the billions of sites that we've run across so far. So it doesn't make sense to split it up per industry, or even to have settings for an industry, settings for a language, settings for a country. Sometimes with search features, we do have limitations per country, based maybe on policy and legal issues in individual countries. With regards to some of the, I guess, more advanced features, we also have, per language, sometimes edge cases where it works better or it doesn't work that well-- specifically around voice search, for example. If you're searching in Swiss German, a somewhat obscure language, for example, then it probably won't be that good or that useful in voice search. But if you use something that's more commonly used, then maybe we can pick that up a little bit better. So these are sometimes the differences between what some people see and what you would see. So usually, that's more based on language and country rather than industries. "Does a two-way internal link carry more weight and reinforce the relationship between two pages more than a link just going one way? Could this help with rankings if the relationship was reinforced better?"
I think, in general, this usually helps us to better crawl the website to understand the context a little bit better, but I doubt you would ever see any visible change in rankings from changing a website to using two-way links or one-way links because, for the most part, we can crawl these sites normally just fine. There's no need to artificially create reciprocal linking within your website. If we can go through your menus, through your normal structure on your site, then usually we can pick that up fairly well. So I definitely wouldn't play around with artificial internal links like one way, two way within a site because I think you're probably just wasting your time by focusing on it.

DANIEL PICKEN: What if you do it incidentally? So, for example, I've got "The Sun," BBC, all these great websites linking to me, and I want to share that with my customers. So therefore, I'll have a page get coverage around the web, and I will link to that. But then, am I in danger there? Because unsuspectingly, I'm doing this to share the media coverage, I suppose, there.

JOHN MUELLER: No, that's perfectly fine. That's not something I'd say would ever be a problem. I think this question goes more into internal linking, and in a case like that, I think it's even more artificial if you're trying to avoid a two-way link, because if you have your internal pages set up and you say, well, these pages are related, then of course they link to each other. It makes sense for the user. It makes sense for us to understand the content better. It's not something you need to avoid, or where you'd ever see any kind of ranking change just by saying, well, this page links to that one, but it doesn't link back, so it can't be that strong of a relationship. I definitely wouldn't focus on that. "We added some service pages to our branch pages showing different services we offer in a certain city. This has useful and unique content, and our rankings went up a lot. Does this positive effect mean you don't class this as doorway pages?" Not necessarily. It's really hard to tell based on just a description. If these are a handful of pages that are focusing on specific aspects of your service-- essentially different product pages that maybe you're offering to your clients-- then that sounds very useful and not something we'd necessarily react negatively to. On the other hand, if you're creating thousands of pages with hundreds of variants each, then that's probably something we'd pick up as lower-quality content on the one hand, maybe doorway pages-- maybe even, if someone from the web spam team were to manually look at it, they'd say, well, this is way too far-- way past the line. We have to take action here to make sure that the quality of our search results remains high. So just because you're seeing a certain behavior now doesn't necessarily mean that what you're doing is the right thing.
I'd take a step back and look at the bigger picture-- maybe get some input from other webmasters who are in similar situations-- to figure out: are you getting close to the line, are you way past the line, or are you doing something that essentially just makes sense for your users and is perfectly fine? "If you recover from a penalty, does Google still discount the keyword phrases it was penalizing you for to begin with, once the penalty has been removed? Or is there nothing holding you back?" It's hard to say. In general, if a manual action is resolved with a reconsideration request, it's out of our system. It's not that our algorithms will hold a grudge against your website and say, well, there was a manual action here, therefore we can't trust this site as well. With regards to some kinds of manual actions, our algorithms might pick up on these issues as well, and it might take some time to reflect the new state. So, for example, if you have a lot of unnatural backlinks, then the manual action might happen and be resolved by cleaning up some of that, but the algorithms might say, well, we've recently recrawled these and still have seen the bad links. So it might take a little longer for that to settle back down again. But especially if you're talking about content on your pages, it's not that our algorithms would hold any grudge against your site from a manual action. "If I have page content under an H2 heading, and this content also has subheadings within it, should I use H3s with this content or just leave it all under H2? What's better?" So these headings help us to understand the context of your content a little bit better, but they're not a magic bullet. If you use these headings incorrectly, it's not something where we'll say, well, this will be a problem, and our algorithms will rank your site worse.
These headings help us to better understand the structure, but it's not something that I would say is absolutely critical. So from that point of view, if you think it makes sense from a semantic point of view to use H3s below the H2s, and you can structure your content like that, sure, go for it. If this is something that would result in you having to redesign your website, then probably there are more important things to focus on.
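A minimal sketch of the nested heading structure being discussed, with hypothetical page content:

```html
<!-- Hypothetical content: H3 subheadings nested under their H2 sections -->
<h1>Car Parts Guide</h1>

<h2>Brakes</h2>
<h3>Brake pads</h3>
<p>...</p>
<h3>Brake discs</h3>
<p>...</p>

<h2>Filters</h2>
<h3>Oil filters</h3>
<p>...</p>
```

The H3s only appear where the content under an H2 genuinely has sub-sections; as John says, this helps convey structure but is not required.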

DANIEL PICKEN: What about multiple H1s? Will that have any impact at all if it's relevant to the page?

JOHN MUELLER: Not really. What generally happens there is we recognize there are multiple headings, so we don't know which one of these is really the main heading. So there is some kind of dilution, I guess you would say, happening there. But it's not the case that we would say you're doing something wrong, so we're going to ignore this completely. It's like taking a heading that's, I don't know, four or five words, and expanding it to a heading that's two paragraphs long-- then, of course, we can't treat all of the content within that heading the same as a short one. "Regarding rich snippets, would it be OK to have the same reviews on many similar category pages, or is there a risk of losing the ranking stars completely if one does so and therefore doesn't provide unique ratings on every page?" In general, the reviews should be focusing on the product that you're offering on those pages. So if this is a review about one specific product, and you have the same product on different parts of your site, maybe in different categories, for example, then using the same reviews there definitely makes sense. On the other hand, if you have the same reviews across your whole website for all of the products that you offer, that starts looking a little bit strange, in that we don't really know what we should be doing with these reviews, because they're clearly not specific to the main product that you have on the page anymore. "Is there any way to find out what pages have not been indexed apart from using small sitemaps? Is there anything in Search Console to help work it out? Also, any tips as to why pages may not get indexed?" That's a hard one. I don't know. So sitemaps are definitely one way to do it. Another thing that you could do, if you have the server logs, is to see which pages get traffic from search. That indirectly tells you these pages are actually indexed, because they get traffic from search, and maybe the other ones aren't indexed.
That's probably what I'd focus on more, rather than focusing on the technical part of this page is indexed or this page is not indexed. As to why pages may not get indexed, there are a bunch of reasons. Let's see, we have spam. Sometimes we recognize that there's spam, so we drop those from the index. Sometimes we recognize that a page is duplicate before even crawling it, so we might not pick that up. Sometimes we'll recognize it's duplicate after crawling it. So if the content is essentially the same, or the primary content is the same, then maybe we'll drop that after indexing. Sometimes what happens is we just don't have enough signals for a page to actually go and figure out, well, this is something we really need to crawl and index. So it might be happening that we don't even go out and crawl it, because we don't know that much about it. What you can do there is help us with a sitemap file-- help us with clear, internal navigation. The easier we can recognize that these pages are really relevant to your site, the more we can actually focus on them. But otherwise, I wouldn't necessarily focus on having all pages indexed all the time. That's, from my point of view, almost a rare situation, that we'd actually index 100% of a site all the time. Even for really big and popular websites, we don't index all of the pages that we find all of the time. So that's not something that I would say is necessarily a goal. But rather, I'd focus on pages that you think are important for your site, and really make sure that they are visibly important when we do go off and crawl and index your site. "We've changed our structured data markup, yet the old schemas we used to have are still shown in Search Console with a last detection date of nearly three months ago. I'm sure our pages have been crawled more often. So what's up with that?" I'd love to have an example of this.
I've heard this from, I think, two people so far, and it might be that we're just showing the old data there because we don't have a new variation of that data yet for those pages. So for example, if you remove a specific type of markup from your page, then we might know, well, the old markup was last found on these pages back then. But it doesn't mean that the new markup that we find there, which is different from the old markup, hasn't been picked up. But if we're providing something that's confusing in Search Console around structured data there, then I'd love to have some examples so that I can talk to the team about that and see what we can do to make it either clearer to understand what's actually happening there, or to filter all of that so that we can really say, well, this is what you need to focus on, and we'll keep all of this cruft, old data away from you so you don't have to worry about it. But if you can send me some examples, I'd love to see them. "Is there any connection between crawling and ranking? We're talking about indexed pages." Of course, I think the general question here is, given that the pages are already indexed, what's the difference between a page that's crawled more frequently and one that's crawled less frequently? And in general, we do try to pick up pages that are more important and crawl them more frequently, but it's more of a symptom, essentially, that we think, well, this is more important to refresh, and we'll crawl it more frequently. It's not that we'd say, well, we crawl this page more frequently for other reasons-- maybe because we keep seeing the timestamp change on this page-- therefore, it will become more important. But rather, it's the other way around: we think this is a critical page for us, therefore, we'll crawl it more frequently.
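The sitemap file John suggests is a short XML list; a minimal sketch with placeholder URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- example.com URLs are placeholders -->
  <url>
    <loc>https://www.example.com/category/brake-pads</loc>
    <lastmod>2015-12-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/category/oil-filters</loc>
  </url>
</urlset>
```

Splitting a large site into several smaller sitemap files makes the per-file indexed count in Search Console more informative, which is the "small sitemaps" approach the question refers to.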

BARUCH LABUNSKI: Are you guys ever going to change the crawl stats graph, because it's been the same graph for so many years? Are you planning to maybe give it a makeover?

JOHN MUELLER: I don't know. Sometimes, I thought it was kind of useful. So at least, I think it's live and it reflects what's actually happening there. But if you have specific ideas where you say this is totally confusing, or you can't work with it at all the way that it is, you've got to delete this graph and start over again, then that kind of feedback would be useful. But I think if it's working for people, if it's useful information for people, then I'd be up for keeping it the way it is. I wouldn't focus on modernizing something just for the sake of having modernized it, because that's always time that's lost which could be spent focusing on other things. "Is it possible that product pages with images benefit or suffer in their own ranking because of the ranking of the image in image search? So a good image ranking might result in a good ranking for the hosting product page?" Not necessarily. We try to separate those essentially as much as possible, in the sense that some things make sense in image search, some things are really important in web search, and it's not that we'd say this is really popular in image search or really useful for image search, therefore, we'll boost it in web ranking as well. We try to keep those separate. And what might happen is, of course, for some topics, we'll show the image universal block on top, and that could include an image from a page that's actually ranked fairly low or somewhere below it in the main search results as well. So it's not that we'd, say, couple the two that strongly. "We noticed a huge increase in traffic from images, with Google DE as the referrer. Have there been any major changes at Google with impact on image search?" I don't think there have been big changes there. One thing that is kind of special with the German and the French image search is that they use the old-style UI with the iframed landing page behind.
So that's something where you might see some effects from experiments there with regards to moving to HTTPS-- those kinds of things. With regards to impressions and clicks in Search Console, you might also see some changes there as we try to unify the way that we track clicks and impressions across the older-style UI that we have in Germany and France and the more modern UI that we have elsewhere. So those are, I think, the two main sources where you might see some mismatches there. I think especially in Germany and France with regards to image search, that's something where you'll probably expect some changes, because the old UI is in the meantime almost getting rusty, where maybe it makes sense to find a way to modernize that a little bit in the future. "If a site has given us organic and affiliate links, can that be detrimental to our website? Imagine where a person placed an order on the site, blogged about the products, and later became an affiliate. Can this look dodgy to Google?" From my point of view, that's perfectly fine. That's not something I'd really worry about. If you do have a lot of affiliate links-- if you provide a snippet for your affiliates to link to your site, I'd just make sure that it has a nofollow tag to clearly say this is essentially a link that's placed with regards to a monetary connection to the site. But otherwise, if there are natural links and affiliate links on the same site pointing to your site, that can happen. That's not something I'd say our algorithms would pick up and say this is problematic. "Do blogs hosted on platforms such as WordPress or Blogspot have the same value as blogs on their own top-level domain?" Yes, in general, they do. And sometimes you can host your blog on WordPress or Blogspot and have a custom domain. So it's not something that we try to separate. "What's the status of HTTP2?" I don't know. Someone asked about this in one of the last Hangouts, and I haven't heard back from the team on that.
So I'll assume we're not doing that just yet. If your site supports HTTP2, then that's something where you can probably check your server logs and see what's happening there before we actually tell you. So enterprising SEO bloggers out there, if you find Google crawling with HTTP2 on your site, then that might be something to write about, and I'm sure Barry will pick that up quickly as well.

MALE SPEAKER 2: So you recommend switching to HTTP2 just to test to see if you guys index it?

JOHN MUELLER: Well, you're not switching to HTTP2. It's something that's automatically upgraded, in the sense that when a browser that supports HTTP2 goes to your site, it will send, I believe, a special header that says, I'm ready for HTTP2, and if the server supports that, it will switch to that transparently.

MALE SPEAKER 2: So it is definitely going to support both, you're saying.

JOHN MUELLER: Yeah, yeah, it's not that you're going from one to the other, and then suddenly it doesn't work anymore. Essentially, it's backwards compatible and it transparently upgrades.

MALE SPEAKER 2: We'll give it a try, thanks.
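John's suggestion to watch server logs for HTTP2 crawling could be sketched like this. The log lines, IPs, and user-agent strings below are invented for illustration (combined log format is assumed), and a real check should also verify Googlebot via reverse DNS rather than trusting the user-agent string:

```python
import re

# Sample lines in combined log format; real logs would be read from a file.
SAMPLE_LOG = """\
66.249.66.1 - - [15/Dec/2015:10:00:01 +0000] "GET /page HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"
66.249.66.1 - - [15/Dec/2015:10:00:05 +0000] "GET /other HTTP/2.0" 200 1024 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"
203.0.113.7 - - [15/Dec/2015:10:00:09 +0000] "GET / HTTP/2.0" 200 2048 "-" "Mozilla/5.0"
"""

# Pull the HTTP version out of the quoted request field.
REQUEST_RE = re.compile(r'"[A-Z]+ \S+ HTTP/([\d.]+)"')

def googlebot_http2_hits(log_text):
    """Count requests that identify as Googlebot and used HTTP/2."""
    hits = 0
    for line in log_text.splitlines():
        match = REQUEST_RE.search(line)
        if match and match.group(1).startswith("2") and "Googlebot" in line:
            hits += 1
    return hits

print(googlebot_http2_hits(SAMPLE_LOG))  # prints 1
```

Note that with HTTP/2's binary framing, whether the protocol even appears in the request field depends on the server's log configuration; this sketch assumes it does.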

BARUCH LABUNSKI: Are you going to do a separate Hangout on AMP?

JOHN MUELLER: Probably sometime next year. So I think the next one is about mobile, which includes AMP as well. So that will be like an intro, I guess. But we'll probably do more on AMP as we get closer to actually showing it in search, or as we do start showing it in search.

BARUCH LABUNSKI: By the way, I loved that video you showed on Google+. It's probably the best video I've seen this year-- the one that you shared about AMP.


BARUCH LABUNSKI: Yeah, it was a 10 out of 10, John.

JOHN MUELLER: Yeah, it wasn't mine. So I'll have to pass that on. "Why so much not provided in Google Analytics? Difficult to analyze the traffic." This is essentially based on the referrer information that is passed to the websites, which is something, by policy, we decided to do, I believe, three or four years ago now. So that's something where you have to be creative-- use the data from Search Console as well, mix that together with the analytics data, and find a way that you can connect those two. "I look after SEO and PPC for a company. Our ranking for some of the main search terms, 'self-guided walking holidays,' has dropped significantly in the last seven days." And this probably goes on. Ooh, lots of questions, suddenly. What I'd recommend doing there is maybe posting in the help forum and getting help from other peers who have similar websites, who might have some tips for you about this kind of thing. And in the help forum, make sure that you have your URL in there somewhere. You can use a URL shortener if you like, but just so that people can actually look at your specific content rather than try to discuss the theoretical situation of this site ranking for this term. So that's probably what I'd recommend doing there. All right, let me just run through some of the remaining questions and see if there's something specific we need to focus on. Otherwise, yeah, maybe I'll just open it up for questions from you all in the meantime.

BARUCH LABUNSKI: Just on the HTTPS move-- I wanted to know, is it really necessary to add the disavow file from the old one?

So, from HTTP to HTTPS, should I add the disavow?

JOHN MUELLER: You should put the disavow on the canonical version of your site. So if you move your site from HTTP to HTTPS, then that's where I'd submit the disavow file. If you move from one domain to another, of course, then I'd put it there too.

BARUCH LABUNSKI: So you're saying it's necessary to do that.

JOHN MUELLER: Yeah, it's fairly easy to do. You can download the file in Search Console and just upload the same file again with the other setup, and then you have it solved.
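The disavow file being downloaded and re-uploaded is a plain-text format; a minimal sketch with placeholder domains:

```text
# Disavow file downloaded from the HTTP property and re-uploaded
# for the HTTPS property. Lines starting with # are comments.
domain:spammy-directory.example
https://links.example/paid-links.html
```

A `domain:` line disavows every link from that domain, while a bare URL line disavows only links from that specific page.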

BARUCH LABUNSKI: OK, a 2048-bit certificate is the one you suggest, yeah?

JOHN MUELLER: I think that's the common one at the moment.


MALE SPEAKER 1: Hi, John. Is there any difference between a link with a text anchor and a link from a picture, in case that picture has alt and title attributes, in terms of PR transfer?

JOHN MUELLER: PageRank should be the same regardless of what you have there. With regards to associating the anchor with the URL, I believe both of those would be essentially equivalent. You don't need both the title and the alt tag, though. I think the alt tag is enough there, because the title essentially just duplicates it.

MALE SPEAKER 1: Thanks a lot.

MALE SPEAKER 3: All right. OK, John, I'm just really curious to hear if we are doing anything wrong regarding using the Search Console API because even though the quota says something around, I think, 2,000 requests per second, we can only manage around three per second. Who can we get in touch with to solve these kinds of issues?

JOHN MUELLER: I think that's the normal quota. So for the API, we have a general quota. I think it's 10 QPS. But that's across all the features. If you're only focusing on Search Analytics data, then I think the limit there is 2 QPS.

MALE SPEAKER 3: OK, so we would have to request an increase.

JOHN MUELLER: Yeah. I believe at the moment, we're discussing how we should set up the quota in the future so that people don't need to have special exceptions. So you probably won't hear from us that quickly, but we're looking at what we could do there to make that a little bit easier so that it just works for people who need this kind of data. So in practice, what I'd do there is just make sure that you're throttling and spreading your traffic out so you don't hit those quota limits. I think another difficulty there is that the error that's returned if you go above the quota is, I think, some generic error instead of a clear message that you're doing too much and should slow down.
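The client-side throttling John suggests could be sketched like this; the 2 QPS figure mirrors the Search Analytics limit he mentions, and the API call site shown is hypothetical:

```python
import time

class Throttle:
    """Client-side limiter to stay under a queries-per-second quota.

    The clock and sleep functions are injectable so the behavior
    can be tested without real delays.
    """
    def __init__(self, qps, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = 1.0 / qps
        self.clock = clock
        self.sleep = sleep
        self.last_call = None

    def wait(self):
        # Block just long enough that calls are at least min_interval apart.
        if self.last_call is not None:
            elapsed = self.clock() - self.last_call
            if elapsed < self.min_interval:
                self.sleep(self.min_interval - elapsed)
        self.last_call = self.clock()

# Usage sketch: wait() before each request; the query call is hypothetical.
throttle = Throttle(qps=2)
for page in ["/a", "/b"]:
    throttle.wait()
    # response = service.searchanalytics().query(...)  # hypothetical API call
```

Adding retry with exponential backoff on quota errors, on top of this pacing, would make the client more robust given the generic error John describes.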

MALE SPEAKER 3: Would it help to do requests from, let's say, more than one IP address? Or is that irrelevant?

JOHN MUELLER: That's irrelevant.

MALE SPEAKER 3: Yeah, because we're hitting the same endpoint.


MALE SPEAKER 3: But is there any way-- maybe we wouldn't need an increase-- maybe we could manage with the quota right now. But we want to tightly integrate with Search Console, and I should give some background: we are Siteimprove, and we have created a website governance tool. We want to integrate keywords on a per-page basis so users can see what their pages are ranking for. And down the line, the quota just may be insufficient.

JOHN MUELLER: I don't have a great answer for you at the moment. I suspect it'll look different in a couple of months after we figure out what we can do with the quota-- what kind of limits we can provide. Also, with regards to better documenting how we're actually doing the quota, specifically around Search Analytics, because that's where a lot of people are hitting the limits-- with the other functionality of the API, people are not reaching those limits that much. So that's something where maybe we can clarify how many requests per site or per account you can do, so that you can better work out, well, if I grow to double my user base, I need to maybe do more caching on my site or maybe bundle requests a little bit differently so that I can remain below the quota for the long run.

MALE SPEAKER 2: John, can I go back to the app stuff quickly?


MALE SPEAKER 2: So it seems like the Fetch as Google does work, and there aren't any crawl errors. So everything seems to be working properly in terms of deep linking and stuff like that. When I first launched, I had on average maybe 600 to 1,000 impressions. Then that Search Console bug happened, and now I'm literally at zero to two impressions per day. Do I have a Penguin app penalty?

JOHN MUELLER: I'm not aware of any app penalties around that. So I wouldn't say that it's anything like that. I think what probably happened is that the timing was awkward, especially with regards to that Search Console bug that we had there, where maybe we were running an experiment before, and we stopped that experiment, and we had that bug in Search Console. Now, it looks like, well, because of that bug, the numbers went down. But actually, it's just because we stopped that experiment. But I did pass your site on to the team to look at, to see what specifically is happening there, because I also thought, well, this isn't really motivating for people to actually do app indexing if you get two impressions a day. That's not really that exciting. So I don't know what else I can really recommend there, other than encouraging your readers to use the app as well. What might make sense is to use the Smart App Banner that you can use now in Chrome, which goes to the Play Store link, so that when people are using their phone to access your site, you can point them there.

MALE SPEAKER 2: OK, I think I have that. I'll double check. And people are asking me, show me this app. Show me if it's worth doing. And I'm like, I really see nothing now. So I can't really tell people anymore.

JOHN MUELLER: All right. Let's take a break here. I think we have the next couple of Hangouts set up. The one, I believe, in the week between Christmas and New Year is a little bit longer, so we have some more time to chat, and maybe we won't go through so many questions, or something like that. Maybe we'll make it a little bit more chatty. If people are around then, feel free to jump on in. Maybe we'll also have time to look at specific sites, depending on how much time we find.

ROBB YOUNG: Can you reveal secrets on that one?

JOHN MUELLER: I don't know. Maybe.

ROBB YOUNG: One small secret?

JOHN MUELLER: I don't think that's going to happen. But maybe that secret metatag that I was mentioning before.

ROBB YOUNG: All right.

JOHN MUELLER: All right.

ROBB YOUNG: It's worth asking.

BARUCH LABUNSKI: But you're working the holidays like always, right? Because you're on the forums every single holiday-- you and Matt Cutts last year.

JOHN MUELLER: Yeah. I don't know. I don't know what Matt's plans are for this year. It's always nice to hang out in the forums and do some stuff there, and especially over the holidays, when you see people having problems with their websites, they usually appreciate that I try to help. That's something you guys can jump in on as well. And maybe you'll find a question here or there. I know Mihai has been doing a bunch of answers in the forums, which is really appreciated. So you're welcome to jump in as well, and maybe you'll see some of us there over the holidays. All right, with that, let's take a break here. Thank you for joining. Thanks for all the questions. Thanks for all the discussions on the various topics that we've been looking at. And I hope to see some of you again in some of the future Hangouts. Bye, everyone.

DANIEL PICKEN: Thanks, John. Thanks, guys.

MIHAI APERGHIS: Bye, John. Merry Christmas, Robb.