Reconsideration Requests

Google+ Hangouts - Office Hours - 03 November 2014

Transcript Of The Office Hours Hangout

JOHN MUELLER: OK. Welcome everyone to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a webmaster trends analyst at Google in Switzerland, and part of my role is to help connect webmasters and publishers and SEOs like you guys with our engineers to make sure that information is flowing in both directions and everyone can work to create awesome websites. So to get started, I guess we have a bunch of questions submitted already, but if any of you want to go ahead and ask the first questions, feel free to jump on in.

AUDIENCE: With the rollout of Penguin two weeks ago, I saw sites disappear to page 20 or 30. I'm not here to bug you or pinch you or anything like that. I just wanted to know, is there a reason why the algorithm brought these spammy sites back? For instance, a site that went to page 20 came back to the first page again. Is there a reason why it came back? It's a very spammy site, right? So with--

JOHN MUELLER: I don't know about your specific site. But as far as I know, the whole data is still rolling out slowly, so you might just be seeing fluctuations from that.

AUDIENCE: So it's still rolling out?

JOHN MUELLER: As far as I know, yeah.

AUDIENCE: So in regards to the disavow file or whatever, will that still be also crawled, or is that too late?

JOHN MUELLER: It's never too late, but it's probably something that we wouldn't take into account for this round, primarily because we have to recrawl all those links first. It's not the case that you submit the file, we process it, and it takes effect right away. We essentially have to recrawl all those links, and then that data is taken into account the next time the algorithms use that data. So it's never too late. If you see problems, I'd definitely submit that file and make sure that you have everything in there, but it's probably not going to take effect for this round.

AUDIENCE: OK. No, I'm talking in regards in general of what's online. That's not just that one site. I'm talking just in my region, in my area.

JOHN MUELLER: Yeah. If you're aware of any kind of problems like that, I'd definitely clean that up as quickly as possible. I wouldn't wait for these kind of things because it does take quite a while for everything to get reprocessed and run through all the systems here. So it's something where if you see these problems, fix them as quickly as you can, and then we can reuse that data over time. So I wouldn't wait and artificially hold things back and say, oh, I'll submit the file when I have all 5 million of these links checked. If you see a bunch of them really problematic, submit that now and it'll get processed a little bit faster.

AUDIENCE: OK, thanks. I also pinged you something that, if you have a chance, if you can let me know. It's pretty crazy, whatever.

JOHN MUELLER: OK. I can take a look. Yeah. Someone else raised their hand.



JOHN MUELLER: Oh wow. OK, one after another.

AUDIENCE: Is it OK to ask a question?


AUDIENCE: OK. First of all, to make a long story rather shorter, did you get my message on Google+?

JOHN MUELLER: Yes. I passed that on to the team.

AUDIENCE: Perfect. So what I wanted to ask in public was, as I outlined in that message, the site-- which you already know. I don't have to post it, do I?

JOHN MUELLER: No, that's fine.

AUDIENCE: You already know it. OK, so the question is when the site was hit with negative SEO after my first appearance with you, and we had a conversation and you said that the site was under what was called a historic Penguin. Before I purchased it, it was actually under punishment by a Penguin algorithm. Since then, we've done a tremendous amount of link removal and disavow months before this latest revision of Penguin. When this Penguin came down, I saw some strange stuff. Some stuff moved around, but the primary phrase-- the EMD, which the domain is-- is gone, completely just gone. I don't know if there's something added to the new Penguin that goes after the real specific primary phrase, but it's completely gone. So this Penguin update, I don't know if it's the effects of that negative SEO that I sent you. You asked for examples. I gave you a handful of examples-- about 20 of them. So I just wanted your thoughts on-- I guess basically, [INAUDIBLE] computer before where you could actually see it, if this thing was still suffering from Penguin or something else. Would you be able to do that and just tell me if it's still a Penguin that I'm dealing with, or something else?

JOHN MUELLER: I can try, yeah. Can you post the domain quickly in the chat and I'll just copy it from there?

AUDIENCE: Here it comes. If you can be nice to me and you like coffee, I can send you a lifetime of coffee from Costa Rica. It's really good.

JOHN MUELLER: No, that's fine. We have lots of coffee.

AUDIENCE: Are you really--


AUDIENCE: --accept bribes in public. [INTERPOSING VOICES]

AUDIENCE: It's not a bribe. It's me being a nice guy. I'm sick.

AUDIENCE: You guys in Costa Rica, you really know how to party, you know?

AUDIENCE: It's too early, man.

JOHN MUELLER: Yeah, I don't know. That shouldn't be that problematic, but I can double check with the team here to see what is actually happening there. As far as I can see, that looks pretty good now.


JOHN MUELLER: So I can double check to see what exactly you're seeing there.

AUDIENCE: OK. Because my board of directors has assigned me the task of getting this resolved once and for all, because there are 17 employees hanging in the balance here. So no pressure, no pressure. I'll see you every week.

JOHN MUELLER: Yeah. No, I totally understand that sometimes there's a lot involved with making a website and keeping it up and running. And depending on how things go on Google's side, sometimes that goes better and sometimes not so well. So if there is something we can do to help there, to make sure that these sites are ranking the way they should be ranking, I think that's something we always try to do.

AUDIENCE: OK. Thank you very much, and thank you for acknowledging receipt of the message. I appreciate it.

JOHN MUELLER: Sure. I think two other guys raised their hands.

AUDIENCE: One was me, John, as usual.

JOHN MUELLER: All right.

AUDIENCE: Gary's also raising his hand via Skype to me, but he can't get in this week. So I'll take my question first, and it was just whether you've had a chance to look at the new-- or the moved-- domain, which is that one, which is where we've moved the old one. You told us that we should not try and salvage the previous domain because of unknown technical issues.

JOHN MUELLER: I saw your message this morning, but I haven't had time to discuss it with the people here because they're all based in California. So they're still asleep.

AUDIENCE: You don't work weekends, John?


AUDIENCE: You don't work weekends?

JOHN MUELLER: Well, we have to work weekends, too, and [INAUDIBLE]. It's not the case that I have red and green buttons and just can flip the switches.

AUDIENCE: All right. OK. Shame Barry's not here to publish that. OK, so if you can have a look at that, we've got another call Friday, haven't we?


AUDIENCE: Yeah, OK. It's just, with us, every day between now and the holidays counts. Just so you know-- I'm sure you would know, with gift companies-- we do 50% of our revenue in those six weeks, so if those six weeks aren't stellar, we don't have another 46 weeks or whatever it is. And it seems that everything we've done in the last six months, from changing to HTTPS to changing this domain to every [INAUDIBLE], just halves the traffic and the impressions again. And when I asked you specifically if we should 301 the previous domain, because we were worried about any potential penalty, you assured me, don't worry, there's not a link-based penalty coming with it, so it shouldn't be an issue. But it is an issue. We were using hreflang to say, OK, this is our US site. Now it's a 301, so it's properly the US site. It's gone down again, which would suggest a penalty is following it, but not once have we had an actual penalty within Webmaster Tools. And you've at least implied that there isn't one-- or at least not one you can determine as one of the main algorithmic penalties. I've just never seen this level of suppression or penalty for a site that is-- we're trying to be a bit-- we've never done anything wrong. We're not a spam site. We've got thousands of customers, hundreds of through and through reviews. It's all unique content. There's nothing that should be toxic or even bad-- not only not toxic, but so far from toxic that it's just an anomaly to me. I looked at a lot of stuff. I recognized Don's stuff from before, when Don was talking about Costa Rica, and the stuff that I've looked at in the forums with other people for him. But I've not seen a case like ours that is so bad.

JOHN MUELLER: Yeah. I need to double check what actually is happening there since you started the move there. What was that, two weeks ago, right?

AUDIENCE: It was [INAUDIBLE], so yeah, it was Friday the 20th or so when we had the last-- oh, so it's been there now 10 days or so since then. The impressions that night and the next two days went up, as you would imagine. And since then, they've absolutely tanked to lower than they've ever been before.

JOHN MUELLER: OK, yeah, I see that. I need to see what's happening there with the other engineering team.

AUDIENCE: It's just we can unwind it and start again. But every time we make a change, we don't have two weeks to wait for things to flush through. We did in February, but now in November we don't.


AUDIENCE: John, is it wise to switch back to HTTP from HTTPS?

JOHN MUELLER: Sorry? I didn't quite hear the question.

AUDIENCE: Right, remember I did a site-wide 301 redirect on your advice. And it did tank and it's still-- well, it's doing its thing. But is it wise to switch back-- once you've made the switch to HTTPS, if you're unhappy with the results and you're tanking in the rankings, is it wise to switch back, or is the damage kind of already done and you might as well ride it out?

JOHN MUELLER: Ideally, you shouldn't be seeing any kind of negative effects just from a move to HTTPS. So unless you're seeing something that users aren't able to access your site properly, then I'd try to leave it on HTTPS. But if you're seeing significant issues there, you always have the ability to roll that back if you prefer.
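For context, the kind of site-wide HTTP-to-HTTPS move being discussed usually comes down to a single permanent-redirect rule on the server. Here is a sketch for nginx, with placeholder server names-- this is illustrative, not a prescription from the Hangout:

```nginx
# Hypothetical nginx server block; example.com is a placeholder.
server {
    listen 80;
    server_name example.com www.example.com;
    # Permanently (301) redirect every HTTP URL to its HTTPS equivalent.
    return 301 https://$host$request_uri;
}
```

Rolling back, as discussed, would mean reversing the rule in the other direction, which is why it remains an option even after the move.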

AUDIENCE: But John, isn't it HTTPS that also makes sites slower? So while you might be benefiting from HTTPS, you might be getting hit by the site being slower than it was before?

JOHN MUELLER: That shouldn't be a problem. It's possible to implement HTTPS in a way that doesn't slow the site down. And it's also the case that the speed issues, when we look at that in web search, are mostly a matter of us differentiating between sites that are kind of normal in speed and those that are really, really slow. So that's not the kind of change you would make with a move to HTTPS.


JOHN MUELLER: Well, I mean, having a faster site is always something that brings more engagement from users. And with that, it brings more kind of positive signals for us that we can pick up on for search-- more recommendations, more links, those kinds of things. But just tweaking the site by a couple of milliseconds isn't going to be something that will be reflected in search with a change in rankings.

AUDIENCE: Would you ever think about doing a Hangout about Google Cloud Storage, and how we can benefit from that?

JOHN MUELLER: I don't have any insights into that specifically, but I can ask around to see if there's someone who can jump on in and join us in one of these Hangouts.


AUDIENCE: Hey, John, couldn't all of this stuff-- the HTTPS and how fast your servers are and how fast the page is-- shouldn't all that stuff kind of be nice-to-have stuff, with tiny incremental benefits and downsides, rather than what I've seen, and it seems like Don has seen, and Barry last week? They've died after HTTPS. I mean, properly 50% less than before. This stuff should be kind of advisory, shouldn't it?

JOHN MUELLER: I mean, that should just be working properly. That's something where we've talked to a bunch of engineers internally, and essentially, if you move from HTTP to HTTPS with a normal site, that should have absolutely no negative effect at all. But we are seeing anecdotal information from some of you guys and from some of the other people in the forums, so these are things we've been bringing back to the engineers to review. And for the most part, they're comfortable with finding other reasons why maybe these changes took place. So I guess, personally, I'm never completely convinced when I see this anecdotal feedback in the forums and elsewhere, but these examples always help us a lot to find kind of the edge cases where maybe the algorithms aren't handling things properly. But again, if you guys are seeing these kinds of edge cases where you move from HTTP to HTTPS and you see a significant change, then by all means, bubble that up to us so that we can take a look at those examples. Send me the URLs involved and we can take a look at that. I know some of you have already sent me stuff, and that's been really useful for me.

AUDIENCE: Hey John, can I ask you a question and change topics a bit?

JOHN MUELLER: All right. And then we have to get to the Q&A.

AUDIENCE: This guy, Barry Schwartz, emailed me. I don't know who he is. I don't know how he got my email. But this guy, Barry Schwartz, emailed me and he wanted to ask you, you notice that Matt Cutts mentioned that he had recently spoken with the web spam team and he was very happy with work they've been doing. Can you confirm that that includes the recent rollout of Penguin?

JOHN MUELLER: Well, it's a web spam algorithm, so he's probably seen that, yes.

AUDIENCE: OK. There is Barry's question.

AUDIENCE: Hey, John?

JOHN MUELLER: All right. One last question before we jump to the submitted ones. Go ahead.

AUDIENCE: So on the last chat, about [INAUDIBLE], we discussed that it was hit by Panda. I think you had the engineers look at it, but there wasn't any specific feedback you were comfortable sharing. But the one suggestion you made was to conduct a survey with some of the Panda-related questions and try to gauge those areas. We did a nationally representative survey. It reinforced, unfortunately-- not unfortunately. Fortunately, but unfortunately for you-- what we kind of thought. We didn't see any red marks or any problematic areas in terms of the responses that we got. And it kind of reinforced the fact that it is considered one of the best credit card comparison websites, if not the best, by the board, the regulators, and the consumer advocates. So what I wanted to ask is, is it possible to have-- I know the engineering team took a look at it and said, oh yeah, the algorithm is working correctly. But is it worth having someone from the quality team take a look? And maybe they can point to something that may be a false positive for the algorithm, or they can point something out and say, OK-- because the survey, we only did it for a few key pages. We couldn't have done it for every single page, review every single page of the website. But we did it for a few page templates, if you will, that [INAUDIBLE] represent most of the site. And actually, our users felt very comfortable. Like trustworthy, authoritative, user-friendly-- all of those things.

JOHN MUELLER: Yeah. I can double check with the team to see if there is someone else who can take a look at that as well, just to kind of make sure that the algorithm is really picking up on the right things there. I know they're also working on these things, so it might be interesting for them to have a specific use case like that, to kind of see what you're seeing versus what they're seeing at the moment.

AUDIENCE: Would it be helpful to send these survey results?

JOHN MUELLER: Sure. If you have something like that, then that might be interesting for them to take a look at as well.

AUDIENCE: OK. Perfect. You want the results of every single response we got and they can analyze it, or the aggregated--

JOHN MUELLER: Whatever you have. Something that's easy to understand, I guess, is probably easier. Yeah.

AUDIENCE: Thank you.

JOHN MUELLER: All right. We have a bunch of questions that were submitted here, so let's run through some of these as well. People voted on them, so let's give them a chance. We just released a widget for our tool. The image links back to our tool, which is useful to our users. Some suggest we should nofollow it, but we give webmasters that choice instead. The link can be removed by choice. Are we in breach of the quality guidelines? That's a tricky question. I guess I haven't seen the widget directly. I saw a mention of it on Google+, but I haven't seen how it works or how it's embedded directly. In general, I try to aim for using nofollow on these kinds of widget links, because these are things where people are copying and pasting the code directly onto their pages, and it's not really the same as a normal link. What I would perhaps do is make it easier for people to link normally to your site or your tool as well, or make it easier for people to remove the nofollow in that link so that they can change that if they prefer. That said, I haven't looked into how it's implemented in this specific case, so that might be done fairly well already. In general, what happens with these kinds of links is not that we would penalize the sites that embed the link or the widget like this, but rather that we try to kind of round up those kinds of links and say, well, these are all the same links from the same widget, and people were embedding this because they didn't realize that this was actually a link. And then we'll work to try to ignore those links in our systems. So that's kind of the process that happens behind these kinds of things. So the clearer it is that people are really able to nofollow the link by default, for example, or to remove the nofollow if they want to, the easier it is for the web spam team to take a look at this and say, well, this looks fine. This isn't really problematic.
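As a sketch of what a well-behaved widget embed along these lines could look like-- the tool name, domain, and image are made up for illustration-- the copy-paste code might nofollow the backlink by default:

```html
<!-- Hypothetical widget embed snippet; example.com is a placeholder. -->
<!-- The credit link is nofollowed by default; a site owner who genuinely
     wants to link can remove the rel attribute themselves. -->
<a href="https://www.example.com/tool" rel="nofollow">
  <img src="https://www.example.com/widget/badge.png"
       alt="Powered by Example Tool" width="120" height="40">
</a>
```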
In extreme cases, we've seen situations where there was this nice widget and it had a bunch of hidden links in there, either for the tool or sometimes even totally unrelated links. So you would have a weather widget that you'd embed on your pages, and it'd have hidden links to some casino website, for example. And that's the kind of thing that we really try to catch there, where the webmaster really isn't purposely linking to some casino website. They essentially just tried to embed this widget on their site. And it sounds, from this question, like you're probably doing it fairly well there. But I guess I can double check afterwards to see how you really have it. I'd really make it clear so that when someone just copies and pastes the code for this widget to their own site, by default it has a nofollow there, and make it possible for the webmaster to take that out if they prefer. Our site has been heavily demoted due to the DMCA filter. We're a popular user-generated app store and receive fewer notices than a bunch of other sites. We suffer from a high number of frivolous DMCA notices. How can we appeal? In general, the DMCA process is fairly fine-tuned, so that's something where you can go through that process and submit-- I don't know what it's called-- the notice that this content has been removed, or that this DMCA complaint isn't something that you think is correct. So I'd go through that process. It's not something where, from a web search point of view, we'd get involved and say we're going to look at these DMCA complaints and demote a site like that. So essentially, I'd primarily go through the normal DMCA process and make sure that all of that is cleaned up as appropriate. I want to add my website to a directory for good potential customer traffic for my niche, but they link only with dofollow. The admin can't give me a nofollow. How do I do that without breaking Google's guidelines? Should I use a brand or a URL as the anchor text?
In general, I wouldn't rely on directories as a way of getting links, so I think you're on the right track there in that you're thinking about all of these issues. On the other hand, if this is a directory where lots of people are coming through to your site, then that might be something worth having that link on. And that's something where you could see this as advertising-- you're kind of advertising your site there. And if you really want to prevent the PageRank from being forwarded to your site, one thing you could do is have the link go to an intermediate URL that is blocked by robots.txt, and from there, redirect to your final URL. So that's essentially one way that you could block the flow of PageRank to your site even if you can't control the nofollow state on the linking site. So that's one thing you could be looking at there to really make sure that it's handled in the proper way. If we missed the cut-off of the disavow for the latest Penguin, will the new disavow not be taken into consideration until the next refresh? Is it something that's going to take a long time to refresh? So like I mentioned in the beginning, the disavow file is processed continuously as we recrawl. So you submit your file once, and the next time we recrawl those problematic links, we'll take that disavow file into account and use that for all of our algorithms. And Penguin is just one of these algorithms that uses this kind of information about spammy links, for example. So that's something that will be taken into account across the board. It's not something where you need to wait for any kind of refresh. So if you find problematic links and you think these are spammy-- that maybe a previous SEO placed, for example-- then by all means, put them in your disavow file and submit that disavow file as you move forward. Don't wait until you've collected a large collection of links. You can submit that file every day if you want to.
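The intermediate-URL technique for the directory link described above can be sketched concretely. Everything here is hypothetical-- the domain and the /out/ path are placeholders, purely to illustrate the mechanics:

```text
# Hypothetical setup for blocking PageRank flow from an uncontrollable
# dofollow link. The directory links to an intermediate URL on your own
# domain instead of the final page:
#
#   http://www.example.com/out/directory
#
# robots.txt on www.example.com blocks crawling of that path:
User-agent: *
Disallow: /out/

# The server then 301-redirects /out/directory to the real landing page.
# Visitors get through via the redirect, but because the intermediate
# URL is blocked from crawling, it passes no PageRank.
```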

AUDIENCE: [INAUDIBLE] when you rolled that out-- I think Gary mentioned that when he was talking to Barry.

JOHN MUELLER: We're looking into what we can do to kind of make that update a little bit faster. But essentially, this is something where if you notice a problem, you can submit that file and it'll be taken into account as soon as we recrawl those individual links. It's not something where the data will be pushed through Penguin directly to the search results immediately, but this is something that just happens continuously, so I wouldn't artificially hold any disavow file back.
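For reference, the disavow file being discussed throughout this Hangout is a plain text file uploaded through the Disavow Links tool in Webmaster Tools. A minimal sketch of the format-- the domains here are invented for illustration:

```text
# Lines starting with # are comments and are ignored.

# Disavow every link from an entire domain:
domain:spammy-directory.example.com

# Disavow links from a single page:
http://www.link-network.example.com/spammy-page.html
```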

AUDIENCE: Hey John, on that point, one of the questions that keeps coming up is, if we need to be disavowing those spammy links and so on, why can't someone outside us-- and I understand that you guys have argued repeatedly that negative SEO is not possible and you want an example-- but can I understand the key thing that I'm missing, if we can hurt ourselves with other optimizing stuff, why can a competitor not hurt us by pretending it is us and buying all of these things and doing all of this? And so what I'm trying to get to is, should we be actively monitoring every single link and making a decision, disavow or not?

JOHN MUELLER: So yeah, I totally understand that this is a tricky situation to be in. Our algorithms, and definitely the manual web spam team, are kind of watching out for this kind of situation, where we try to recognize the kind of negative SEO things you mentioned happening and take that into account accordingly. I think overall we're doing a really good job of handling that appropriately. It's definitely a complex situation in some cases. In some cases, it's a little bit easier. But it's definitely not something where I'd say this can never happen. At the same time, I'd also kind of caution against saying, oh, this happens all the time. So we're kind of aware that this is happening, that people are trying to do this. And we're definitely trying to take that into account appropriately in our algorithms.

AUDIENCE: I guess what I'm trying to say is that, from a theoretical standpoint, I think we could all agree that this is completely doable, right? Because if spammy sites and others are willing to sell links to whoever, then it's not hard to mask who you are, right? So I understand you guys try to take that into account, I'm assuming by looking at how many links a site got over what time period and from where. But if someone has the patience and the resources, they could very easily mimic bad behavior from a white hat actor, right?

JOHN MUELLER: I wouldn't necessarily say that it's very easily done. But I do agree that it's a tricky situation. And depending on what all has been happening there, it's sometimes hard for us to determine that on our side, even when we're manually looking at it. But I know it's something where we do have people taking this into account, and we're trying to find the best way of kind of navigating through that jungle of, on the one hand, people doing spammy things themselves, and on the other hand, perhaps third parties doing spammy things against the wishes of the website owner. So that's something that I'd say has been around for years and years and years now. It's definitely not new. It's definitely not something where I'd say this just popped up on our radar and we're just taking a look at it now. It's been something that we've had to take into account for a long time. So I feel fairly confident--

AUDIENCE: Penguin didn't make it worse?

JOHN MUELLER: I don't think so.


JOHN MUELLER: I don't think so.

AUDIENCE: John, more audits? John, wouldn't you recommend that webmasters do more audits?

JOHN MUELLER: I think if you're looking at your links and you see really spammy, problematic-type things, then I'd kind of just disavow those so that you have peace of mind. But at the same time, I wouldn't kind of overly fixate on all the links and download a copy of the links every hour to make sure that you're not missing any single spammy link that's happening there, because usually what happens in a lot of these cases is that this is a big pile of links that you can spot at once and kind of take care of at that time. But again, if you see these problems showing up in your link reports, then by all means, take care of them. If you don't see them popping up in your link report, then it's not something where I'd spend way too much time making sure you're not missing any of the first signs of this happening.

AUDIENCE: John, if someone has scraped their content, do we both disavow and DMCA? Or if we did just a DMCA, you would consider it as a disavow so it has been removed from the index and therefore would not be counted?

JOHN MUELLER: That's a good question. I don't think that the DMCA complaints result in the pages actually being removed from the index. I think they're just not shown in the search results. And that's what you also see at the bottom of the search results, where it says so-and-so many pages were removed due to DMCA requests. So that's something where, if you see this happening, you can put them in your disavow file, but chances are you don't really need to, because we're already taking that into account. And if you've sent the DMCA complaint to the hoster as well, they'll usually take the content offline anyway. So that's something where you probably have it covered already.

AUDIENCE: OK. Thank you.

JOHN MUELLER: All right. Lots of Penguin and links questions. Let's get through some of these others here. Is it possible that multiple links from a website to a different language domain without a nofollow tag will trigger Google Penguin? If you're talking about linking different versions of your website together, then that's something I wouldn't necessarily worry about. I'd use hreflang markup to also let us know about the connection between these websites, so that we understand what this kind of collection of pages actually means. So that's not something where I'd worry about it from a web spam point of view. If a website shows up in Maps, should they be showing up in the top 10 of the organic search results, or can they be in both places on the first page? I don't know how Maps handles that specifically. But from my point of view, this is something where it's generally worth listening to how the user feedback goes and responding to that. It's not something where we'd pedantically say that only one of these pages should be showing up in the search results, similar to how we handle the situation with multiple pages from the same site showing up in the same search results. Sometimes it makes a lot of sense to show a bunch of pages from the same site. Sometimes it makes sense to just show one of these pages. So that's something where we're generally fine-tuning our algorithms, and we'll definitely be testing the results every now and then to see if we're getting things right or if we're kind of headed in the wrong direction. So I wouldn't assume that any one of these situations-- where the site shows up in Maps and shows up in Search, or shows up in Maps and doesn't show up in Search-- is going to remain like that forever.
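The hreflang markup mentioned above for tying language versions together could look something like this-- a sketch with placeholder URLs, placed in the head of each version of the page:

```html
<!-- Hypothetical hreflang annotations; example.com and its paths are
     placeholders. Each version lists all versions, including itself. -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/">
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/">
<link rel="alternate" hreflang="de" href="https://www.example.com/de/">
<!-- Fallback for users who match none of the listed languages: -->
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">
```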

AUDIENCE: Hey John, related to that, in the past, I used to remember that when a site would show up multiple times-- different pages from the same site-- they would kind of be indented or bullet list type things. And now we have noticed that on several high volume searches that sites will show up and it'll just look like individual results just like it's multiple sites. Is that something that's intended that way or something that might be looked at, going back to the old way?

JOHN MUELLER: I think it kind of depends on the site. So we do still show these site links type of search results where they're kind of indented and underneath each other or sometimes in two columns, but it's not the same as it was, I don't know, maybe three or four years ago when they were indented, like you said, for some other search results. So again, sometimes it makes sense to show them separately. Sometimes it makes sense to fold them into one result. It really depends a lot on the site and the query and what we're trying to look at or what we're guessing that the user is looking for.

AUDIENCE: But now it's tied into the organic, right? You guys sometimes push the Maps up and push the Maps down. I remember Jonathan Simon was saying he called it a so-called bonus.

JOHN MUELLER: I really don't know how these Google Maps local search results work, so I can't really help you with that. But again, this is something where we're trying to recognize what the user is looking for and trying to bring that into the search results. And since it's something that's fairly new still, we've definitely experimented a lot with various variations of those things. So that's something where I wouldn't assume that the current setup is going to remain like this forever.

AUDIENCE: Hey John, can you cross my question off the list there?

JOHN MUELLER: It's a big list.

AUDIENCE: I can ask it verbally for you.

JOHN MUELLER: OK, go for it.

AUDIENCE: My question is, does Panda have any positive factors or factors that boost? Or does it all have negative factors or factors that demote?

JOHN MUELLER: Essentially that's the same, though, because if you demote some sites, you're kind of promoting others, right?


JOHN MUELLER: So that's something where we generally wouldn't be differentiating here. So I guess you would say both.

AUDIENCE: OK. So mostly, it just looks for factors that people have done on the site and will demote based on that?

JOHN MUELLER: Yes. I mean, we do look at both sides of things and we kind of add up what we see-- the good parts and the bad parts-- and see where it kind of evens out for the site overall. So it's not the case that we're only looking for bad things and we count that against everything else that's on the site. We do try to take a holistic view of the whole site and see overall, this looks pretty good or this looks kind of bad or kind of good, and take that into account for the rankings there.

AUDIENCE: So kind of like a pros and cons?


AUDIENCE: Awesome. Great.

JOHN MUELLER: All right. Is mobile usability a ranking factor for the mobile search results? Will you rank websites that are mobile-friendly higher than those that aren't when searching Google on a mobile device? I believe at the moment we're not directly doing that. I imagine that's something that can change over time. But you'll probably also have a fairly strong indirect effect there in the sense that if you have a website that really doesn't work well on mobile, and a lot of people are using mobile as their primary internet device, then they're not going to be able to recommend your site. They're not going to be able to kind of pass that on to other people because they just can't do anything on your site. And to some extent, in some locations, we're seeing mobile really surpassing desktop usage, so that's something you really want to be taking into account on your side even if it's not a direct ranking factor on Google's side at the moment.



AUDIENCE: Just a quick question. Oh, I promise. Could you share with us if Google has given any thought to putting inside of Webmaster Tools a little notification of what algorithm might be affecting their site?

JOHN MUELLER: Yes. We've put lots of thought into that. We hear that every couple of Hangouts as well. It's tricky, primarily because these algorithms were written for our search results and not meant as something one-to-one actionable for webmasters. So it's not the case that any of our algorithms will trigger and say, oh, you need to shorten the titles on your pages or you need to kind of improve the overall quality on this and this and this page. So that's something where the algorithms are trying to figure out how specifically we should be kind of treating these pages in the search results, and that doesn't necessarily translate one-to-one back into something that the webmaster could be doing differently. I think it might make sense at some point to find something that does something similar to what these algorithms are doing and bubble that up to the webmaster and say, hey, our Webmaster Tools quality check has recognized that these and these and these types of pages are generally lower quality. Maybe that's something you want to look at. But I don't think it would make sense to take the search ranking algorithms and kind of bubble that information up directly in Webmaster Tools just because it has a very different goal. But I do bring this up with the Webmaster Tools and the engineering team every now and then to make sure that we don't lose track of that because I think sometimes, some of the information from these algorithms could be really useful to webmasters. And I know there are a lot of you out there who are trying to make awesome websites, and if we can help guide you in the right way, then I think we should be trying to do that.

AUDIENCE: That would be really awesome if you could. We would be unbelievably thankful for that. And it wouldn't even have to be all the algorithms. It could just be the main ones, like the main ones that have names.

JOHN MUELLER: We have a lot of algorithms and some of those are fairly strong. So when we try to determine the relevance of a site, it's something where we take a lot of stuff into account. So I'm kind of wary of just saying, well, just like the black and white animals, those are the ones that we're going to be showing in Webmaster Tools. I think kind of focusing on the algorithms is also sometimes the wrong thing to do because you're focusing on the current kind of elements for those algorithms rather than focusing on what you could be doing for the website overall that will be relevant for the long run. But these are definitely things that we're always looking into and thinking about how much we could put more of this into Webmaster Tools, how much we have the technical details covered in Webmaster Tools, and say, well, technically, Webmaster Tools covers everything you need. Now we need to focus on kind of the softer factors. They're always long discussions.

AUDIENCE: Because at the end of the day, we don't want to be police officers. We just want to really make the web better, and we're webmasters. Focusing on these things, at the end of the day, is hard. Like I talk to a lot of SEO guys locally here, and they're just tired of this, you know?

JOHN MUELLER: Yeah. I know. I mean, if there's something we can bubble up in Webmaster Tools there, I think I'd definitely be for that. But we really need to make sure that we're bubbling up something that's really actionable for you guys, not something where you see, oh, well, our algorithms think your site is kind of mediocre. And that doesn't really help you. Like, what should you be focusing on? What should you be doing? Is it something spammy on your site? The spammy comments, perhaps, that people left? Or is there something technically that's kind of wrong, or is there something with the links to your site? Kind of having more information about the actual problems, I think, will be useful. But all of this is probably fairly far off. It's definitely not something where you'll see this showing up in Webmaster Tools the next week or so. But we do try to bring these things into the discussions with the Webmaster Tools team and with the engineering teams to try to see how much of the information that we create about a website can be shown to the webmaster to help guide them in the right way.

AUDIENCE: Yeah, that would be helpful.

AUDIENCE: And something else in Webmaster Tools that came up recently. When we switched over and transitioned to HTTPS and have the account now for HTTPS, we lose tracking or traceability for the crawl errors. Because what happens is, for every crawl error, when you look at where it is linked from, because we kept the same file names, it always shows that it was linked from the HTTP version because of the redirect. And then on the HTTP version, there's no real crawl error because it redirects to the HTTPS. So we're trying to figure out if there's a way to trace where the errors came from.

JOHN MUELLER: OK. That's good feedback. I didn't realize that was happening. But I'll definitely see if we can kind of bring the original source in there. That seems like something we need to be doing differently.

AUDIENCE: Yeah, that would be very helpful. Thanks.
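As a rough workaround for the reporting gap described in this exchange, and assuming, as the questioner says, that the file names stayed the same across the HTTPS move, a reported http:// "linked from" URL can be mapped by hand to the https:// page it now redirects to. A minimal sketch (example.com is a placeholder):

```python
# Map a reported http:// crawl-error source to its https:// counterpart,
# under the assumption that only the scheme changed in the migration.
from urllib.parse import urlsplit, urlunsplit

def https_equivalent(url):
    """Return the https:// version of an http:// URL, leaving any
    other URL untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(https_equivalent("http://example.com/blog/post.html"))
```

This obviously only helps when the migration preserved paths one-to-one; with renamed URLs you would need the actual redirect map instead.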

AUDIENCE: Hey John, long time listener, first time caller. Speaking of algorithm chasing, can you give some color and commentary regarding any IP address filtering Google uses for the SERPs when serving results from multiple domains? And if there is filtering, essentially, can that be overcome with a buildup of authority? And if so, what is the limit to the number of results which come from different domains from the same IP address?

JOHN MUELLER: In general, we don't have any hard limit on the number of results from the same site or from the same kind of group, so that's something where I guess in some extreme cases, you might have 10 out of 10 results from the same site on one search results page. With regards to the IP address, that's not something we tend to take into account that much except for cases where there are really a ton of doorway pages, a bunch of doorway sites that are active there. So if we can look at an IP address and we can see, well, this is our normal shared hosting environment and there's some sites like this on there and some sites on that, then that's actually fine. That's not something we really take into account. On the other hand, if we look at the IP address and we see there are 10,000 sites on there and they're all trying to target the exact same keyword or variations of the same keyword, then that's something where our algorithms might say, well, this looks more like a collection of doorway sites than anything else and we need to take some kind of action based on that. So it's not a matter of these sites all being on the same IP address. It's essentially just that they're kind of a collection of doorway sites. So that's kind of what we'd be taking into account there. But it sounds like you have some very specific examples in mind. And if you want, you're welcome to send those to me on Google+ to kind of take a quick look, and I can double check with the team to make sure that things are happening as they should be there.

AUDIENCE: Sure. No specific examples, just something from years back and possibly applicable to future decisions.

JOHN MUELLER: OK. But in general, it's not something where I'd say you need to be careful which IP address you move your site to or which hoster you move to. In general, most of the normal hosting environments, even if they're on shared hosting, are not something you need to worry about. It's not that we expect each site to be on a unique IP address or that they don't have any neighboring sites on the same C block or anything crazy like that.

AUDIENCE: Makes sense. Thank you.
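The distinction John draws here, that the IP address itself is harmless and only a mass of same-keyword sites on it looks like a doorway-site collection, can be illustrated with a toy sketch. The function, data, and threshold are all invented for illustration and have nothing to do with Google's actual signals:

```python
# Toy model: an IP only looks suspicious when many sites hosted on it
# all target the same keyword, not merely because it hosts many sites.
from collections import defaultdict

def doorway_suspect_ips(sites, threshold):
    """sites: iterable of (ip, target_keyword) pairs, one per site.
    Returns the set of IPs where at least `threshold` sites share a
    single target keyword."""
    counts = defaultdict(lambda: defaultdict(int))
    for ip, keyword in sites:
        counts[ip][keyword] += 1
    return {ip for ip, kws in counts.items() if max(kws.values()) >= threshold}
```

A normal shared host with varied sites never crosses the threshold, while a cluster of near-identical sites on one IP does, which is the shape of the behavior described above.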




AUDIENCE: In Webmaster Tools, when it comes to search queries, this is something that drives me bonkers, so I want to hear what you think. So it shows really positive. I have a site that's under punishment, it's coming out of punishment, so I hold a lot of weight in what search queries shows, and it shows a lot of green, big jumps. That's very positive for me. But when I check in another web-based ranker or something like that, or another piece of software to check the rankings, it's completely different. So all the web-based stuff or software-based rank trackers will take my main phrase and show that it's completely out, like I said previously. But yet, Webmaster Tools shows that on average in the 25 to 30 range, and I'd like to know, why is Webmaster Tools so different? And should I hold much weight in those search queries? How do the search queries show you that data? I know it's based across multiple locations.

JOHN MUELLER: So the data we show in Webmaster Tools is based on what users have seen. So that's something where, depending on how your site is ranking and for which queries you're looking at, that could be playing a role there. So that's something like personalization. That comes into play there. Just generally how far people tend to click through the search results comes into play there. And that's essentially what you're seeing in Webmaster Tools. I think it's like the average position of your site for those queries. That's what we show there. So for example, if your site is generally ranking in position, I don't know, 500 for a specific query, and there are very few cases where, because of personalization, it jumps up to position 10, and nobody clicks through to page 500, then that ranking at position 500 won't be reflected in Webmaster Tools. Whereas that ranking, where it's periodically at the higher position, might be shown in Webmaster Tools just because it's something that people actually see in the search results. So that's the kind of skew you might be seeing there. Usually you can tell by the numbers there. When you're looking at that and you're saying, well, the impression numbers are really, really low for this query where I assume a lot of people are searching for it, then it's possible that maybe there's something like personalization, maybe something like geotargeting, something like that playing a role with the rankings that you're seeing in Webmaster Tools.
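The skew John describes can be modeled as an impressions-weighted average: a position nobody ever sees contributes nothing. This is only a rough sketch of the idea, not Google's actual calculation; the numbers are invented:

```python
# Rough model of 'average position' as weighted by what searchers saw:
# each observation is (position, impressions), and zero-impression
# rankings simply drop out of the average.
def average_position(observations):
    """observations: list of (position, impressions) pairs.
    Returns the impressions-weighted average position, or None if
    nothing was ever shown."""
    total = sum(imp for _, imp in observations)
    if total == 0:
        return None
    return sum(pos * imp for pos, imp in observations) / total

# Usually at position 500 with no impressions, but occasionally
# personalized up to the first page, where people do see it:
obs = [(500, 0), (10, 40), (12, 10)]
```

With these made-up numbers the reported average lands around 10, even though the "usual" ranking is 500, which is exactly the mismatch with external rank trackers the questioner is seeing.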

AUDIENCE: Thank you very much for that answer. That was very helpful.

AUDIENCE: Can I ask you a question?


AUDIENCE: John, if two totally different companies have different website designs but they are selling the same products and happen to be on the same ISP and on the same IP address, would one website influence the other? I mean, does Google make the assumption that they represent duplicate websites?

JOHN MUELLER: That should be absolutely fine. I don't see why we should be treating them kind of like one site instead of two separate sites, so I wouldn't really worry about that. And again, this is something where if you're on a large hosting environment, your IP address may change from time to time. Or if you're using a content delivery network, you might have the same IP address from time to time as your competitor has. And just because of that, it shouldn't be something where our algorithms would assume that, OK, this is actually the same site, and therefore we should only show one of these and not the other one.


AUDIENCE: Yeah, go ahead, [INAUDIBLE].

AUDIENCE: Because it's a competitor-- let's say it's a shared hosting thing, so he has another neighbor that did something really crazy, I mean--

JOHN MUELLER: That's fine. That happens. If you're on-- I don't know, some of the big hosters have upwards of 10,000 sites on the same IP address. There are bound to be some spammy sites among there. There are bound to be some adult sites in there. There are bound to be maybe some competitors even that are selling the same product or very similar products. And that's something we kind of have to live with. The web has a limited number of IP addresses. We shouldn't expect that everyone has their own IP address. We shouldn't kind of artificially push people into that situation.

AUDIENCE: OK. Thank you. I just wanted to reiterate that, but thank you very much, John.

JOHN MUELLER: Sure. All right, let's grab some more of these--

AUDIENCE: [INAUDIBLE] I managed to get on. I was just wondering-- one of my questions was in the Q&A. You mentioned a long time back, when you put my website into your magic tool you've got there, you said there were issues with content. And I was wondering if you could take another look and just sort of point me in the right direction. It kind of goes along the lines of what was brought up in this chat just earlier on. Because I don't really know what to look at, and if you can see something specific and you can say, look at that page, do something with it, without giving me too much info, that would be massively helpful.

JOHN MUELLER: I think we looked at your site a couple weeks back as well, and the content stuff was missing there. I mean, missing in the sense that it wasn't being flagged. So I think you're on the right track there, but if you can post a URL in the chat, I'll just throw it in there really quick. We should have a magic tool Hangout and just--

AUDIENCE: Yeah, we'd love to see that tool, too, John. Everybody's talking. Would you like to share the tool with us?


AUDIENCE: John, why don't you share your screen?

JOHN MUELLER: I don't see any content problems there anymore. So that sounds like you're on the right track there.

AUDIENCE: One weird thing, then, is [?, ?] we talked about two weeks ago. And I checked everything for Webmaster Tools, but it's still showing 18 pages indexed in Webmaster Tools, even though there are thousands of pages indexed. So Webmaster Tools has definitely got a reporting issue in regards to that site.

JOHN MUELLER: And you double checked to make sure that the right version is being shown in Webmaster Tools?

AUDIENCE: I quadruple checked. Yeah. There's definitely something strange happening.


AUDIENCE: And also, I've got no idea whether or not my site is still under Penguin because we hreflanged everything. So I don't know whether or not you can see if we're still suffering any effects, but I'm under the impression we should be out of it.

JOHN MUELLER: OK. I can double check that. Let me just go through a bunch of these questions.

AUDIENCE: And that's it. Thanks, John.

JOHN MUELLER: A bunch of people have voted on these. I guess at some point we'll have to do one that's just live questions, and at some point we'll have to get through all of these submitted questions for a change.

AUDIENCE: I think it's a lot of bot votes.

JOHN MUELLER: That's possible. I wouldn't assume that people would be gaming stuff on the internet, but you never know. If a site has thousands of old blog posts that rarely get read, could this impact them negatively in the eyes of Panda? Should we be noindexing old blog posts? This is, I guess, a tricky topic because to some extent, it makes sense to keep an old archive of things. But on the other hand, if you're looking at these old blog posts and you say these are really low quality posts, then it might be worth cleaning that up. But in general, our quality algorithms do look at a site overall, so we try to look at everything across the site. And if there are parts that aren't really relevant, that are kind of older, and kind of there for a reason but not really the primary reason for your site, then that's something we try to take into account as well. So just because you have these old blog posts where maybe you were doing a news announcement from 2001 and that's still on your site, that's not something I'd necessarily noindex just to clean things up. But sometimes it might also be that these old blog posts are really low quality and kind of bad, and then that's something you would take action on. So just because they're older doesn't necessarily mean that they're good or bad in any specific way. Do links from statistics pages and other search engines' results pages indexed in Google have a bad influence on ranking for a site? Should we be disavowing them? In general, that's something that we've learned to live with for a really long time now. Most statistics systems aren't public like that anymore, but in the beginning, most of them were, so that's something we've had a lot of practice with, kind of recognizing those links and not treating them in any strong way. Pages that aren't indexed in Google are also something you generally don't have to worry about. So both of those kinds of issues are things I wouldn't necessarily worry about.
My website was crushed by the most recent Penguin update. We were attacked with numerous negative SEO attacks over the past eight months. We've disavowed the links along the way. Besides just disavowing and waiting for Penguin to run again, what else can we do? You're welcome also to send your site along to me on Google+ so I can take a quick look and pass it on to the engineers if need be. But in general, if you see things like problematic links to your site and you're disavowing them, then that's the right thing to do there. Sometimes what happens is that there are actually other issues on a site as well, and then not cleaning those up can still cause problems. So for instance, recently we ran across a site that was saying there was a lot of negative SEO against their site and they pointed to some links. We looked at that and we said, well, yes, this is clearly negative SEO, but our algorithms aren't taking these into account at all anyway. And on the other hand, you have a lot of blog posts that you've been doing there, guest blogging on other sites where you're dropping links to your site, and that's something that our algorithms were picking up on, and something the manual web spam team would be picking up on as well. So kind of make sure that you're really not just focusing on those negative SEO links, but instead also take a step back and look at your site overall and make sure that you're really doing the right thing across the board. So don't assume that all the problems you're seeing are always due to these kinds of issues. But again, if you're unsure, if you think that this is something our engineers need to look at, by all means, forward that on to us. Why would Penguin-affected sites not show signs of recovery after this latest refresh? There are many US sites that have cleaned up problems and showed no improvement. Should these webmasters abandon their domains and start over or wait until the next Penguin?
We've seen a lot of sites that have actually recovered from the Penguin algorithm, essentially, so it's not that sites are somehow blocked from being positively affected by the newest update. But on the other hand, if you're not seeing much of a change there, then I'd probably go back and look at what else you need to clean up. So maybe there are more problematic links that you were assuming were kind of fine that you need to kind of clean up. Maybe there are other issues on your side that aren't related to links, that aren't related to web spam, that you need to kind of focus on as well. So kind of take a step back and try to look at the overall picture again to make sure that you're really doing the right thing there before doing something drastic like starting over on a new domain. On the other hand, sometimes it takes a drastic step to actually take a step forward. So it's hard to say in general. I've heard you talk about noindexing pages of lower quality content. For instance, if we add a new author and aren't sure about their quality yet, do pages with original content that are noindexed generate the same level of PageRank that an indexed page probably does? In general, if you're noindexing these pages, then that's something that we don't use for our quality algorithms, because we don't show them in the search results, so it's not something that we'd be taking into account with regard to the quality algorithms there. The advantage of using a noindex versus removing the content completely is that the content is still accessible within your website.
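The noindex approach discussed here boils down to a single meta robots tag: the page stays reachable for visitors, but is kept out of the search results (and, per the answer above, out of the quality algorithms' view). A minimal sketch of emitting that tag per page:

```python
# Sketch: choosing the meta robots tag for a page. A noindexed page
# remains on the site and crawlable, but won't appear in search
# results; follow is kept so links on the page are still followed.
def robots_meta(indexable):
    """Return the meta robots tag: noindex for content you're not yet
    confident about (e.g. a new, unproven author), index otherwise."""
    content = "index,follow" if indexable else "noindex,follow"
    return '<meta name="robots" content="%s" />' % content
```

The `indexable` flag is a stand-in for whatever editorial judgment you apply; the same result can also be achieved with an `X-Robots-Tag` HTTP header instead of a meta tag.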
So if you want to see how your users react to that content when they browse your website, that's something you can use a little bit to kind of gauge their interest and to figure out, should this content be improved, or should this content be removed completely, or is this something where you're maybe being too cautious and putting a noindex on something that's actually quite good, even if it's from a newer author. So that's something that you can use on your side to kind of gauge the interest or the general quality of this content. We've noticed recently that for some queries, multiple results from the same domain occur in the search results. This phenomenon causes other high quality pages to be pushed down. Is this intended? Like I mentioned before, there's no hard rule where we say we only show this number of results from any specific site. Sometimes we show more. Sometimes we show fewer. And I think that kind of makes sense. So talking to the engineers, they're very clear that they don't want any hard guideline saying how many pages from the same site we should show in the search results, because sometimes users expect to find a lot of results from the same site. I don't know about this specific query. I'm happy to take a look at that afterwards and see if there's anything crazy happening there. But in general, it's not something where we'd say there's an absolute limit, and it's not that if you see more than two results from the same site, then that's definitely a bug on Google's side. But if you're seeing weird search results, we always appreciate hearing about that. Which has faster iterations, Penguin 2 or Penguin 3? May we expect to have longer penalties with Penguin 3? In general, we're working to kind of improve the update frequency there, but it's not something where we have any hard and fast guideline where we'd say this data is updated once a month or every other week.
We're kind of working on improving the speed of the algorithm there, but we need to make sure that the quality of the data that we generate through that is still sufficient so that it actually makes sense. So just making things faster doesn't necessarily make things better. All right. We're out of time again. I need to head out, but it's been great talking to you guys. We'll see, maybe we can set something up to kind of review sites a little bit better in the future for one of the next Hangouts. And maybe we can get one where we actually go through all of the questions that are submitted in a way that helps you guys as well.



AUDIENCE: In regards to that, have you ever thought of having someone go through these questions before you get to the Hangout to decide which ones to take? Because there are a lot of them now that actually could be moved straight to Webmaster Central, to the forums, where the generic ones could be answered by other people. And then the ones that come up in the Hangout could be a lot more helpful: those situations that only the people that come on these calls might either understand or get more benefit from. Because with the kind of "how could I improve my website" questions, sometimes I think you're just a bit too polite, and you should have someone the day before or the morning before just go through them and say, actually, stick that one straight into the forums, because they'd be better off getting 10 opinions there. And then you'd get the better questions and more of them covered, rather than answering the same question over and over and over again.

JOHN MUELLER: Yeah. I think there's definitely room for improvement on how we go through these. On the other hand, I also like to make sure that those who are asking the same question also get their answer. But I think we'll be kind of iterating over this general medium over time.

AUDIENCE: I would agree. I think everyone should get their answer straight from John if they want to. So even if it takes a couple of tries, I don't think there should be-- even though I'm part of the cool club right now, I think that shouldn't be a cool club, if you know what I mean. I think everyone should get--

AUDIENCE: No, I'm not suggesting there should be. But if the question is very vague-- how could I improve my website?-- and John doesn't look at it and tell you how to improve it, then it's just wasted everyone's time and doesn't actually help the poster at all, unless John actually takes the URL, looks at it, and says, actually, what you should do on this sub-page is this, and here is the URL. Just "how can I improve my website" doesn't help anyone, because then you're just reiterating the guidelines: read the guidelines, create quality content, look at your links, update it regularly. I'm not saying that isn't useful, but I'd suspect that the people who are on this call, if they know how to find these Hangouts, should know something about Webmaster Tools and have probably read the guidelines anyway.

AUDIENCE: Anyway. Thanks, John. I think you're wonderful. I've got to run.

AUDIENCE: Yeah. Same here.

AUDIENCE: Thank you, John.

JOHN MUELLER: I think we should maybe take a look at that, yeah. Maybe we could do a Hangout about these Hangouts once to kind of discuss what we could be doing a little bit differently. Might be something worth trying out as well. [INTERPOSING VOICES]

AUDIENCE: --your tool, or no?


AUDIENCE: About to show us your tool or is that not allowed?

JOHN MUELLER: We have a lot of tools that are internal. Sorry.

AUDIENCE: Thanks, [INAUDIBLE]. I look forward to the Hangout about the Hangouts.


AUDIENCE: Thanks, John.

JOHN MUELLER: I think that's worth doing. All right. Thanks a lot for joining, everyone. Have a great week. And see you in one of the future Hangouts.

AUDIENCE: Thanks, John. Much appreciated.

AUDIENCE: Thanks, John.

AUDIENCE: See you next week.


AUDIENCE: See you, John.