Reconsideration Requests
18/Dec/2018
 

Google+ Hangouts - Office Hours - 26 September 2014




Transcript Of The Office Hours Hangout


JOHN MUELLER: OK. Welcome everyone to today's Google Webmaster Central office hours hangout. My name is John Mueller. I'm a Webmaster Trends Analyst at Google in Switzerland. Part of my role is to talk to webmasters like you guys and make sure that you're getting the help and information you need from us to help make the web a great place. So we have a bunch of questions that were already submitted. And before we get started, as always, if one of you guys wants to ask the first question, feel free to go ahead.

DEREK MORAN: I'll go ahead John.

JOHN MUELLER: OK.

DEREK MORAN: OK. About three months ago, I freaked out because I discovered I had 90 subdomains indexed for a website that didn't actually exist. And what I worked out had gone wrong is [INAUDIBLE].

JOHN MUELLER: Go ahead.

DEREK MORAN: OK. What I worked out had gone wrong is that in my DNS records, there'd been a wildcard entry. Alright, so I ended up with these 90 subdomains. Alright, so I had to create Webmaster Tools accounts for them, and then I just did a complete website removal for each one. But what I want to know is, is it OK to do that and then just cut out that wildcard entry altogether?

JOHN MUELLER: Yeah. That sounds like a good thing to do there. So from our point of view, the wildcard DNS is particularly problematic for crawling because we end up crawling all these duplicates. And we think, oh, there must be something good here, we'll keep looking. And we keep running across these subdomains and trying to pick them up. So removing the wildcard DNS entry is essentially the most important part. Using the removal tools for the stuff that's already indexed is fine as well.
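As a rough illustration of the wildcard DNS issue discussed here, this is a minimal Python 3 sketch (example.com is a placeholder for your own domain) that resolves a random, made-up subdomain; if it resolves, a wildcard record is likely still in place:

```python
# Minimal sketch: probe for a wildcard DNS entry by resolving a random,
# made-up subdomain. "example.com" is a placeholder for your own domain.
import socket
import uuid

domain = "example.com"  # placeholder: replace with your domain
probe = f"{uuid.uuid4().hex[:12]}.{domain}"

try:
    address = socket.gethostbyname(probe)
    print(f"{probe} resolved to {address}: a wildcard DNS record likely exists")
except socket.gaierror:
    print(f"{probe} did not resolve: no wildcard DNS record detected")
```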

DEREK MORAN: Yeah. Cause I ended up with like 6,000 different duplicate pages. It was just out of control.

JOHN MUELLER: Yeah. So from a practical point of view, this kind of duplicate content shouldn't be causing problems for your website. It's not that we penalize the website for that. It's just technically a problem in that we have to kind of crawl all of these pages, and then figure out that they're actually duplicates, and we'll filter them out of the search results. But it's a lot of work that has to be done on your server and on our side to actually get that far. So it's not that you'd be penalized for that; you wouldn't be demoted or anything like that. It's really just a technical problem.

DEREK MORAN: OK. Cool. Thank you.

JOHN MUELLER: Sure.

MALE SPEAKER: Morning John. How're you doing?

JOHN MUELLER: Good morning.

MALE SPEAKER: Good. So back in May, we had a site that had a manual link penalty applied. And we actually saw the penalty appear across the main www, as well as a couple of other subdomains. And after about four months, the penalty happened to expire, and so that manual action disappeared from within Webmaster Tools. So what I'm curious about is, because those manual actions disappeared from all of the subdomains, was that a single penalty that happened to be applied across the entire site, or were those subdomains subject to individual penalties that all happened to be applied and expire at the same time?

JOHN MUELLER: So both could be possible, in the sense that if we can, we try to be as granular as possible. So we try to target a specific subdomain if it's just one subdomain that's problematic. If we can't do that, then sometimes we're kind of broader and just apply it to the whole domain. So theoretically, both of those would be possible. Usually, you'd be able to tell if this is just affecting one subdomain. You'd see that it's just there in Webmaster Tools and you wouldn't see it for the other subdomains. So for instance, if you have something like Blogger, for example, where you have lots of completely different websites on different subdomains, then that's something where we'd probably try to do something more granular and just say this is for this specific subdomain. Whereas, if these are essentially just different parts of your website that happen to be on different subdomains, then it's possible that the webspam team will say, well, it doesn't make sense to target each of these subdomains individually, so we'll just apply it to the whole domain.

MALE SPEAKER: Thank you.

JOHN MUELLER: All right. Let's go through some of the questions that we have here. Is it possible to accidentally lock out the Google crawler other than by using robots.txt? I guess there are a lot of ways you could theoretically block Googlebot. There are things that you could do, for example, by blocking the IP address, by having some kind of firewall in between that recognizes Googlebot as some kind of malicious script that's trying to scrape your website, and then some firewall in your network essentially blocks it. That's something we see from time to time. Sometimes, we see that a website responds to different user agents in different ways, and for whatever reason recognizes Googlebot as some kind of a script and says, oh, I don't want to serve you any content. We sometimes see that. With the robots.txt file, one thing we do sometimes see is sites that serve a server error for the robots.txt file. So instead of serving content, they serve a server error, like a 500 error, which essentially tells us that the server has some content that it would like to show but it can't at the moment. And on our side, we treat that as the robots.txt essentially blocking us from crawling completely. So if you serve a server error for the robots.txt file, then we'll assume that we can't crawl anything, because we don't know that we're allowed to be crawling, and we'll stop crawling completely. These are all things that we can show in Webmaster Tools in the crawl errors section. Even if you're blocking the IP address or blocking the user agent, you'd see things like server unreachable, those kinds of errors, in the crawl errors section in Webmaster Tools.

I noticed there's a whole bunch of App Indexing questions here. I can't really help with those because I don't have that much experience actually debugging those. But we do have a separate Google Moderator page set up for App Indexing questions specifically. So I'd copy those in there. I copied down all the questions, so I'll double check that they're already in the Moderator page. But I can't really help with that here in the Hangout at the moment. It's good to see that people are using App Indexing, though. So I hope we can resolve those questions there. Let me just clean these out here for the moment. It's a little bit easier to find the other questions.

Wouldn't it be better if, instead of penalizing algorithms, you invested resources in developing and launching rewarding algorithms that dramatically increase the rankings for good websites? Essentially, we do this with things like Panda, where we try to recognize higher quality content and show it appropriately in search. And even with all of our other algorithms, if at some point we're recognizing lower quality content and showing it lower in search, then of course the higher quality content that's left, we kind of put up a little bit higher. So it's always the case that there are two sides to these algorithms. It's never the case that we penalize all of these websites or demote them in search and nothing else bubbles up, because we have to show something to users when they search. So it's always the case that there are both sides involved with these algorithms. And that's something that we also include in our analysis when we analyze how these algorithms are doing and how we need to tweak them. It's not just that we look at the sites that we're removing from search, but also the sites that are showing up higher in search, and making sure that those are actually the right kind of sites, the sites that we'd like to show.
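A quick way to check the robots.txt point above is to look at the HTTP status the file returns. This is a minimal Python 3 sketch (the URL is a placeholder); a 5xx answer is the case John describes where Google stops crawling entirely:

```python
# Minimal sketch: report the HTTP status of a site's robots.txt.
# A 5xx status is treated by Google as "don't crawl anything for now".
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

url = "https://example.com/robots.txt"  # placeholder

try:
    resp = urlopen(Request(url, headers={"User-Agent": "robots-txt-check"}))
    print(url, "->", resp.status, "(rules apply as written)")
except HTTPError as err:
    if err.code >= 500:
        print(url, "->", err.code, "(server error: treated as a full crawl block)")
    else:
        print(url, "->", err.code, "(a 4xx such as 404 is treated as no restrictions)")
except URLError as err:
    print(url, "-> unreachable:", err.reason)
```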

JOSHUA BERG: Hi, John.

JOHN MUELLER: Yes.

JOSHUA BERG: So just to be clear. Panda does some promotion of sites as well as filtering or what we might call demoting of content?

JOHN MUELLER: Well, it's always both sides that are involved there. If you're demoting some content, then the other stuff comes up higher. So it's more of a philosophical question, I guess, whether you want to look at it as bubbling up higher quality content or pushing down lower quality content. Because in the end, the lower quality content goes a little bit further to the back and the higher quality content goes a little bit further to the front.

JOSHUA BERG: All right. But does it highlight any of the quality points, or is it always focusing on the problem, negative parts of a site?

JOHN MUELLER: It always involves both those parts. So that's something where in our algorithms, we try to recognize the higher quality content and treat it appropriately. So it's never the case that the algorithm will just go out and look for bad signs and signs that you're doing something wrong. It would also need to make sure that it's treating the sites that are doing something right appropriately as well.

JOSHUA BERG: OK. So it's not fair to call Panda a penalty algorithm, for example. It also looks at good content or good pages as well.

JOHN MUELLER: Yeah. I mean, we don't call these penalties internally because essentially they're just a way of us trying to show the more relevant, higher quality content in search. And it's not the case that we're only looking at the bad sites and saying, oh, these guys are doing something wrong, we need to get them out of our search results. It's really a matter of us trying to provide really high quality search results overall. And that means bringing up these higher quality pages that might not otherwise have ranked there.

JOSHUA BERG: OK. Does that apply more to newer editions of Panda? For example, are they better at looking at good quality content than, say, the older versions?

JOHN MUELLER: I'd like to say that, of course, every time we do an update, we try to make them better. So that's something where we try to bring this in all the time with our updates. I don't know specifically what's in the newest one we have there. But that's something where we're definitely trying to make sure that we're treating these sites appropriately. And when we make bigger updates that we actually call out, we do hope that it takes a significant step further in that direction.

JOSHUA BERG: Yeah. 'Cause I remember Matt talking, this was back a year or so ago, about looking for signals from some of the medium or borderline sites, you know, which might mean these sites are good sites and, you know, didn't need to get lumped in. So it could filter some of those borderline sites better and maybe pick up some of the positive aspects as well.

JOHN MUELLER: Yeah, exactly. That's the kind of feedback that we use to kind of refine our algorithms. So from time to time we'll try to call out a request for feedback on these kinds of things. I know we've also had a form open for Penguin since the beginning that we kind of go through every now and then to double check what's happening there. And it's always good to give us feedback in that regard. And even if we don't have anything specific open, any kind of feedback channel or something, if you see something working particularly well or particularly badly, take the time to bring it to us, so that we can talk to the engineers about it and see what we need to improve there.

JOSHUA BERG: Alright. Thanks a lot.

JOHN MUELLER: Thanks. Are you advised to continually update the disavow file if and when new URLs are found? Yes. If you notice problems, I'd definitely add them to the disavow file, so that they can be taken into account the next time we crawl and index those URLs. So this isn't something where I'd say you fill it out once and you leave it running forever. If you notice a problem, you can help us to fix that by adding it to the disavow file.
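As a rough sketch of that workflow, assuming Python 3 and a placeholder file name and placeholder domains, this appends newly found domains to an existing disavow file rather than replacing it, since the uploaded file always overrides the previous one:

```python
# Minimal sketch: append newly found domains to an existing disavow file
# without dropping earlier entries (the uploaded file replaces the previous
# one, so it needs to stay cumulative). File name and domains are placeholders.
from pathlib import Path

disavow_path = Path("disavow.txt")                              # placeholder
new_domains = ["spammy-example.net", "more-spam-example.org"]   # placeholders

existing = set()
if disavow_path.exists():
    existing = {line.strip() for line in disavow_path.read_text().splitlines() if line.strip()}

with disavow_path.open("a") as f:
    for domain in new_domains:
        entry = f"domain:{domain}"
        if entry not in existing:
            f.write(entry + "\n")

print(disavow_path.read_text())
```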

MALE SPEAKER: Yeah, hi John.

JOHN MUELLER: Hi.

MALE SPEAKER: Yeah. Small question for you.

JOHN MUELLER: OK.

MALE SPEAKER: Yeah. The question is that when we search for some of our titles in Google, with some keywords related to them, it displays another title which is not the one specified in the title metadata. Why is this happening? We don't know.

JOHN MUELLER: So with the titles, we try to rewrite them when we can recognize that the original title on the page wasn't that great. For example, if there are a lot of related keywords in the title, that's something that maybe the user wasn't that interested in, kind of this keyword stuffing issue. When we recognize that the same title is used across large parts of the website, that's something that we try to improve on. When we recognize that the title is particularly long, we'll try to find something that fits in the shorter visible space in the search results. That's particularly useful for mobile, for example. On a smartphone, you have even less space, so we have to have an even shorter title that we can show users there. And sometimes, we'll also take into account something from DMOZ, the Open Directory Project. If there's a site title there, then we might look at that as well and use that. You can block the ODP title by using the noodp robots meta tag. But the other titles that are rewritten, that's something that our algorithms do automatically. That's not something you can specifically block. You can help to avoid that by really making sure that all of the titles across your site are short and to the point, that they're really about the content on your page, so that we can really show them one to one in search and make sure that users understand what your page is about. One thing to keep in mind there, as well, is that titles can change depending on the query. So from our point of view, we don't just have one title for a page. We might have a small collection of titles that we swap in and out, depending on what the user is searching for, so that the user can recognize that this is really a great page for this specific topic.
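Since very long or site-wide duplicate titles are two of the triggers mentioned here, this is a minimal Python 3 sketch, with placeholder URLs, that flags titles which are long or reused across pages; the 60-character threshold is only an illustrative assumption, not an official limit:

```python
# Minimal sketch: audit <title> tags for the issues that commonly trigger
# rewrites (very long titles, the same title reused across pages).
from collections import Counter
from html.parser import HTMLParser
from urllib.request import urlopen

class TitleParser(HTMLParser):
    """Collects the text inside the <title> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

urls = ["https://example.com/", "https://example.com/about"]  # placeholders

titles = {}
for url in urls:
    parser = TitleParser()
    parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    titles[url] = parser.title.strip()

counts = Counter(titles.values())
for url, title in titles.items():
    flags = []
    if len(title) > 60:          # illustrative threshold, not an official limit
        flags.append("long")
    if counts[title] > 1:
        flags.append("duplicated across pages")
    print(url, "->", repr(title), "|", ", ".join(flags) or "looks ok")
```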

MALE SPEAKER: Yeah. Thank you.

JOHN MUELLER: Does that help?

MALE SPEAKER: Yeah. Nothing else is going on.

MALE SPEAKER: Alright John, just a quick question.

JOHN MUELLER: OK.

MALE SPEAKER: You launched description snippets recently and they seem to be showing up in a lot of the rankings, so I'd say they help quite a lot with regards to kind of adding extra information to your actual snippet. I know there's schema for a lot of sites, but looking at a lot of the sites that have description snippets, they don't seem to have any schema on them. So I just kind of wondered how you went about getting that information off the page, and if there's anything that we can do to kind of entice you to do that for our sites.

JOHN MUELLER: We have magic algorithms. Yeah, I mean, this is something that's sometimes quite hard to do algorithmically. So the clearer the information is structured on the site, the easier we can pick that up. Sometimes it helps to have tabular information, or to use a clear list of definitions, something like that, to help pick it up. If you want to use schema.org markup to give us more information about the specific entities that you're talking about, that's something you can do as well. But I think it's also important to keep in mind these are still very experimental features. And it's something where we might notice it doesn't make so much sense to show this much information in search. Or other webmasters aren't happy with showing this much information in search. Or users are confused by seeing information that was extracted incorrectly. So I could imagine that this is something that might be changing in the future. So as a webmaster, I guess if you want to kind of leverage this thing, I'd try to make sure that your information is as structured as possible on the pages, and that where you can, you have the schema.org markup to let us know about entities and attributes and those kinds of things. But I can't guarantee that we'll be showing them specifically for your site or for any site.
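As one way to provide the kind of structured markup mentioned here, this is a minimal Python 3 sketch that emits a schema.org block as JSON-LD; the product fields are hypothetical placeholders, and whether Google actually uses any of it for snippets is up to its algorithms, as noted above:

```python
# Minimal sketch: print a schema.org JSON-LD block for a hypothetical product
# page. All property values are placeholders.
import json

data = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",                                  # hypothetical
    "description": "A short, factual description of the widget.",
    "brand": {"@type": "Brand", "name": "ExampleCo"},
}

# The block you would paste into the page's HTML head or body.
print('<script type="application/ld+json">')
print(json.dumps(data, indent=2))
print("</script>")
```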

MALE SPEAKER: OK. Now, that's fine. I know with schema, well, before you used to have to kind of submit a form to Google to let you know that this site's got schema on it. We don't need to do that now?

JOHN MUELLER: No. You don't need to do that. We pick that up automatically.

MALE SPEAKER: OK. No, that's it.

MALE SPEAKER: John, I had a follow up question on the title rewriting. It would be really useful if there was a way to surface that information in Webmaster Tools. So as webmasters, obviously we're looking at our sites and how they display, and titles will get rewritten depending on the query, which you just said. So, you know, having that information available, or having a separate report in Webmaster Tools, would be really helpful for us. Do you see something like that making an appearance?

JOHN MUELLER: What would you do with that information then? So how would that kind of lead back to your website? Or how would you react to that?

MALE SPEAKER: So I think one of the uses would be around clickthrough rate. So if we saw that a particular URL, or a particular format rather, was being favored, then for optimization for the rest of the site, we could take those hints and those clues and apply that to other URLs.

JOHN MUELLER: OK. So essentially something maybe in the top search queries reports where you could click on a query and you could see that these were the titles that we're showing and this was a clickthrough rate for those specific titles.

MALE SPEAKER: Yeah. That will be really useful.

JOHN MUELLER: Oh, yeah. That sounds interesting. We're working on the search query reports at the moment. So maybe I can pass that on to the team in time.

MALE SPEAKER: OK. Great.

JOHN MUELLER: That's interesting. All right. Yeah, go ahead.

MALE SPEAKER: Changing metadata for every 15 days, will it have any negative impact on the keyword ranking?

JOHN MUELLER: Changing the metadata every 15 days, that shouldn't be a problem. I mean, one thing to keep in mind is we don't use a description or the keyword metatags for ranking at all. So if you want to change them regularly, that's fine. If you want to keep them the same, that's fine too. That wouldn't be cause for any problem.

MALE SPEAKER: We are going ahead depending on the ranking strategies and everything updated by Google. Is there any negative impact, like a drop in keyword rankings, from changing that data?

JOHN MUELLER: If you just change the description and the keywords metadata, I don't think we care about that at all. You can do that however often you want. I know some sites have used that to do kind of testing for the snippet, to see which snippet works best for your users, to see the clickthrough rate for the snippets. So if you want to do that, that's essentially up to you.

MALE SPEAKER: Yeah. Because we are currently changing the site from HTTP to HTTPS, as per Google's latest update, and we're concentrating on that. We would like to know, is there any negative impact from that change? Because the website is presently in good condition and we want to keep up with the Google updates.

JOHN MUELLER: I wouldn't expect any visible change when you move from HTTP to HTTPS, just from that change, just for SEO reasons. So that kind of ranking effect is very small and very subtle. It's not something where you will see a rise in rankings just from going to HTTPS. I think in the long run, it's definitely a good idea, and we might make that factor stronger at some point, many years in the future. But at the moment, you won't see any magical SEO advantage from doing that. That said, any time you make significant changes to your site and change the site's URLs, you are definitely going to see some fluctuations in the short term. So you'll likely see some drop or some changes as we recrawl and reindex everything. And in the long run, it'll settle down to about the same place. It won't settle down to something that's like a point higher or anything like that.

MALE SPEAKER: OK. Thank you.

JOHN MUELLER: So I think that's just important to keep in mind. When you make these kinds of changes on your website and move from HTTP to HTTPS, it's not a magic bullet that fixes your website. It's rather something for the long run that I think makes a lot of sense. And you might see some effects from the user side at some point. But at least in the short term, you're not going to see any visible SEO advantages. It's a really small ranking factor right now.
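For the HTTPS moves being discussed, this is a minimal Python 3 sketch, with a placeholder host and paths, that spot-checks whether HTTP URLs answer with a 301 pointing at an HTTPS location:

```python
# Minimal sketch: check that HTTP URLs return a 301 whose Location header
# points at HTTPS after a move. Host and paths are placeholders.
import http.client

host = "example.com"        # placeholder host
paths = ["/", "/contact"]   # placeholder paths

for path in paths:
    conn = http.client.HTTPConnection(host, timeout=10)
    conn.request("HEAD", path)
    resp = conn.getresponse()
    location = resp.getheader("Location", "")
    ok = resp.status == 301 and location.startswith("https://")
    print(f"{path}: {resp.status} -> {location or '(no Location header)'}",
          "OK" if ok else "check this one")
    conn.close()
```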

MALE SPEAKER: [INAUDIBLE].

JOHN MUELLER: OK. We recently performed a 301 to a new domain. After more than three months, we still haven't gotten back our previous rankings, although the pages didn't have any particular changes. How long does it take to recover from this type of site move? In practice, this should be something that goes fairly quickly. So moving just from one domain to another and following all the steps that we have in our Help Center. We recently updated the Help Center with a lot more information, so that might be something to double check, that you're doing everything right. But theoretically, after a while that should be settling down. If you're saying it's still not good after three months, that sounds like either there are some issues happening that are unrelated to that change, so maybe some algorithm is just picking up problems on your website in general and this is something that would have happened with your old site too, or there's something technical that's kind of stuck on your side or on our side. If you want, you're welcome to send me those URLs and I can take a quick look to see if anything on our side is problematic, or if there's anything I can let you know about specifically with this kind of change. But in practice, if you do a site move properly and everything goes the way it should, then I would expect after a month, maybe two months, it should be kind of stable again and similar to the previous visibility.

Can you go into a bit more detail on why this Penguin refresh cycle is almost 12 months, compared to the previous data refreshes of on average six months? I don't have any specific details I can share with you guys. I know the team is working on this. So it's something where we're trying to find a way to improve that overall, and that takes a little bit longer. So sometimes things don't move as quickly as we'd like, but that's not because we're completely ignoring the feedback or ignoring these algorithms.

Whoop, another [INAUDIBLE] question. Do you upload a disavow to help with the Penguin update? And if you do, will you see the difference before the next refresh? So uploading a disavow file will change those links essentially into nofollow links the next time we crawl them. And if you do that, then that's something that affects all of our algorithms. So that could have an effect before a Penguin refresh if those links weren't specifically tied to any Penguin related problems. It could have an effect with regard to manual actions, if there are link-based manual actions in place for your site. And it could have an effect on the Penguin refresh when that happens, if these links are essentially processed by then. So it's something that I wouldn't only do for Penguin, but rather to kind of clean up these old link issues that you might know about and that you just don't want to have associated with your site anymore.

On September 12, I migrated a site from HTTP to HTTPS. I followed every step of the instructions and did the 301s. From then on, I saw a significant loss of traffic and a drop in search results. Why? I'd have to look at the site specifically to see what exactly is happening there. But let's see, September 12 isn't that far back. So this might be in that area of fluctuations, where everything has to be crawled and reindexed again. But if you want, you're welcome to send me those URLs and I could double check to see what exactly is happening there.

I'm pleased to hear that small, medium, high quality sites will be treated more fairly by Panda. Is this a global update? Yes, this Panda is a global update. It affects different languages and countries in slightly different ways, but this is something that applies across our whole search results.

OK. And here's a URL. I'm not sure which URL this is for, but I'll copy it down just to make sure that I have it afterwards. I imagine this is for one of the earlier questions. Let's see: drops since October 2013, suspect the Penguin algorithm change. We can't find any unnatural links. That's great. No content duplicates; earlier we were in top position for keywords. I'd have to take a look at the site specifically to see if there's something I can find there. But in general, I'd kind of take a step back and think about what you're trying to do with your website, and just double check to make sure that what you're providing on your website is really at the highest quality possible, and is really something that we should show number one for any of the queries that you're specifically targeting. So I guess I'd have to double check your site to see what it's actually about first to say anything more specific than that.

You spoke with two of the Penguin team in the last month and a half. Any update on their progress? We do regularly speak with the search quality teams that are working on these algorithms, and we kind of catch up to see what we can do to help them and to see where they are. So I am guessing this is still something that would easily fall within the range of before the end of the year. But as always, I can't make any promises on these kinds of things, because things can change. Maybe this is something that will come out next week because everything is ready by then. Maybe it will take a little bit longer. But I do know the team is working on these updates. So hopefully we'll have something for you guys soon. How do you--

JOSHUA BERG: John.

JOHN MUELLER: Yes.

JOSHUA BERG: Another question. So expounding on what you said earlier, that these algorithms are never, you know, all negative. Then Penguin also has positive aspects to it where, you know, these sites are more trustworthy. I mean, we've heard for some time that there are trust ranking algorithms for, you know, how sites that have a higher trust might pass authority. So is Penguin involved in that as well?

JOHN MUELLER: I wouldn't specifically call it trust algorithms or trust ranking or anything like that. But as I mentioned before, when we review these algorithms, we have to view the results as they appear in the search results in the end. So if some of these sites are demoted for the webspam techniques that these algorithms find, then those that pop up still play a role in our analysis. So we have to make sure that the right kind of sites are showing in the search results, and not just look at the sites that were demoted. So especially when we do an analysis of algorithms like this, we look at things like the first couple pages of the results for lots and lots of queries. We send those out for review by neutral people who are kind of reviewing before and after this algorithm change. And they're not going to review which sites don't show up there anymore. They're going to review which sites are actually visible for those queries. So in that sense, if we show sites that are visible that are actually higher quality, that are good sites that we feel users can trust, that's essentially a good change. Some of these algorithms might be focusing on the webspam side and kind of taking those out. But every time we review algorithms, we review what's left. We don't review just what was removed. So what's left has to be really, essentially, what we'd like to show in our search results. So with that in mind, it does bring both sides in there, but it's not that I'd say what's left has some inherent higher trust rank or something like that.

JOSHUA BERG: OK. Well, for example, one thing Matt mentioned specifically, previously, was how if one site is caught with some link problems, they may have received like a 20, 30, 40% demotion in either the rank or authority, whichever you want to call it. And then, you know, they may not be able to pass on any authority as a secondary effect. So the, you know, 20, 30, 40% that these sites may get demoted relative to their linking problems, is it fair to say then that those numbers could also go positive?

JOHN MUELLER: I don't think so. I think what you're pointing at, what Matt said, was specifically with regards to manual actions, where when we notice that a site has a lot of unnatural links on it, that maybe it's selling links, maybe it's engaging in link spam, or in link exchanges, those kinds of things, and it's linking to a lot of irrelevant sites, then that's something where we take manual action on those unnatural outbound links. And where we might say, well, we can't really tell which of these links are actually good, so we're going to ignore all of the links on this site. And that's something that we do manually, where we manually kind of take action on a site that has this problem. We'd let them know about this in Webmaster Tools as well. They'd see the unnatural outbound links message there. So that's something where we try to do that manually and try to recognize that we can't really trust this website anymore. Algorithmically, it's probably a little bit harder. I could imagine there are smart algorithms that could try to recognize the same kind of situation, and also say, well, we can't really trust the links on this site because there are so many spammy links on here as well. But I don't see that going in the opposite way, where we would say, oh, there are lots of good links on this site, therefore we'll trust every link here twice as much as we otherwise would. Because the PageRank algorithms already kind of take that into account. If a site has a higher PageRank, then the links will be passing a little bit more PageRank. So that's something where I don't think it would make sense to kind of amplify that aspect additionally.

JOSHUA BERG: Alright. Thank you.

JOHN MUELLER: OK. How to remove a sitelink from search results? We have a feature in Webmaster Tools that lets you demote a sitelink. It doesn't let you remove it completely, but it's a strong signal to our algorithms that you don't want this specific sitelink shown. So I'd definitely take a look at that. One thing to keep in mind is that sitelinks are also based on the query. So it's not something that will always be appearing in the search results. And sometimes it can happen that we think this set of sitelinks is pretty good for this site, and if you search for the site in a different way, we might say, well, these sitelinks aren't really that relevant for this specific query for this specific site at this time. So just because you see a sitelink there when you do maybe an artificial query, or search directly for the URL of your website, doesn't necessarily mean that users will also see that sitelink when they search naturally. But you can give us that information through Webmaster Tools, and our algorithms do take that into account.

MALE SPEAKER: Hi, John. Can I ask you one question?

JOHN MUELLER: Sure.

MALE SPEAKER: I would like to ask about the search queries. My website is getting [INAUDIBLE] impressions, like [INAUDIBLE] impressions for the keyword. But clicks are [INAUDIBLE] only. How can I improve clicks rather than impressions? Does it depend upon the URL's [INAUDIBLE], URL branding, and [INAUDIBLE]?

JOHN MUELLER: That's essentially something you would need to ask your users, because it's not a technical problem how to improve the clickthrough rate. It's essentially a matter of the user believing that your site has the best content for the specific query. So those are things like maybe the titles, maybe the snippet that's shown, maybe even the content, the quality of the content on your site. So it's not a technical aspect. You don't need to change the URL structure. You don't need to kind of tweak things technically on your website. It's really a matter of making sure that your content, and the way you present that to users in the search results, matches their needs and encourages them to say, oh, well, this matches what I was looking for, let me check this site out. And that's something that's hard, because there is no simple rule to say how to improve the clickthrough rate for these queries.

MALE SPEAKER: OK. Fine. One more last question, John. What is the fetch and render in the [INAUDIBLE] option?

JOHN MUELLER: The what?

MALE SPEAKER: What should be? Fetch and render.

JOHN MUELLER: Yes.

MALE SPEAKER: What is the options that [INAUDIBLE] helpful from the website?

JOHN MUELLER: I didn't quite understand the last part. Sorry.

MALE SPEAKER: Fetch and render, this option in fetch and render.

JOHN MUELLER: Yes.

MALE SPEAKER: What is fetch and render? And how will it be helpful for the website?

JOHN MUELLER: OK. How is fetch and render helpful for a website in general? So in general, the Fetch as Google feature is really helpful to double check that Googlebot can see your content. And the fetch and render option there is so that you can see how Googlebot would see your content. So sometimes there are things like the site getting hacked and different content being shown. That's something fairly obvious that you can find there. But sometimes, you also see that your CSS files are being blocked by robots.txt, or the JavaScript that's pulling in the content is blocked by the robots.txt file. And that means that we can't render the pages in the same way that a browser could render them. And that sometimes makes it a lot harder for us to kind of understand the pages, understand the content on there. In particular with mobile, that's a really big problem, because if we can't see the CSS, we can't tell that this website is actually a great website for smartphones. So we can't treat it appropriately for smartphone search, because we don't know; we can't see what it looks like. So that's something that, I think, helps a lot. And in the fetch and render tool, at the bottom, you see the robots.txt-blocked resources as well.
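To check the blocked-resources issue described here without waiting for Fetch and Render, this is a minimal Python 3 sketch using the standard library robots.txt parser; the URLs are placeholders, and the parser is a simplification of how Googlebot actually interprets the file:

```python
# Minimal sketch: check whether CSS/JavaScript files are disallowed for
# Googlebot by robots.txt, one reason a rendered page can look broken to Google.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")  # placeholder
rp.read()

assets = [
    "https://example.com/css/site.css",   # placeholder stylesheet
    "https://example.com/js/app.js",      # placeholder script
]
for url in assets:
    allowed = rp.can_fetch("Googlebot", url)
    print(url, "->", "crawlable" if allowed else "blocked by robots.txt")
```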

MALE SPEAKER: Yeah. Thank you. Thank you again.

JOHN MUELLER: Sure. One thing, maybe, also going back to your clickthrough rate question. One thing to keep in mind is that not all queries are essentially the same, so that you can just lump them together and say the overall clickthrough rate for my website is bad. You really need to look at the queries individually. Sometimes, if someone is searching for your brand name, then maybe they don't need to click on your website because they already know your website. They may be looking for something around your website, or on your blog, or somewhere else. So you kind of have to look at the type of query as well, not just at the clickthrough rate. So try to group things into things like branded queries, or navigational queries if someone wants to go somewhere specifically, or informational queries if they want information about the specific type of product that you might offer. And I'd treat them separately when you're looking at this on your website, because it helps to kind of show the real problems and not just hide them in a bigger picture.

MALE SPEAKER: Thank you. Thank you.

JOHN MUELLER: OK. There's some URLs here. I copied those down and I'll take a look at those separately afterwards. They're always just kind of complicated.

This disavow question: in the first attempt, I submit a link to abc.com for disavow. Then in the second attempt, I upload again without abc.com. Will Google count abc.com in my backlinks? Abc.com isn't really that bad, so I don't know, it seems like a good site to keep in my links if they're linking to your site. But I assume this is just a placeholder. If you upload a second disavow file and remove links or remove domains that you had in the previous one, then those disavow files override the previous ones. So essentially, the next time we crawl that link on that website, we'll see it's no longer in the disavow file. So we'll treat it as a normal link and have it pass PageRank, have it affect our algorithms appropriately. In both of these cases, whether it's disavowed or not, we'll still show it in Webmaster Tools. Just because something is disavowed in your disavow file doesn't prevent it from showing up in Webmaster Tools, just like we show other links in Webmaster Tools that also have a nofollow on them. So that's something you'd still see in Webmaster Tools. If you disavowed it, you'd still see it, just like you would if it had a noindex, sorry, a nofollow on it. And if you remove a link from a disavow file, then that's no longer in effect. So if you're updating your disavow file and you keep adding and adding more things, then you would keep the base structure the same and just keep adding. You wouldn't replace it with a new file.

Let's see. About sitelinks. I see many webmasters complain about improper sitelinks on their site. Is it not possible to give them a chance to change that in Webmaster Tools? Like I mentioned before, you can let us know about sitelinks that you'd like to have demoted. We don't remove them completely in some cases, but we do take that into account for our algorithms. This data is processed, I'd say, maybe once a week, so you wouldn't see that change immediately. I'd definitely give it a week or so to kind of trickle down into the specific algorithms to see what has changed there.

If people in the video are finished, is it OK to ask them to leave so that other people can enter? OK. I am hereby asking you to leave, if you want to make room for other people. But I'll leave it up to you. Usually, if you want to join these, you have to be kind of quick. I post a link in the event invite, usually maybe one minute or two before we start. If you've never made it to these Hangouts and you really, really want to join, let me know on Google+ before the Hangout starts and I'll add you a little bit before I add everyone else. So sometimes that helps.

I asked the question about the migration from HTTP to HTTPS and the consequent drop. Here are the URLs. OK. I'll just copy that down and make sure I take a look at that later.

We've split our website in two, due to corporate branding. Our rankings have dropped, but we implemented 301s and canonical tags where necessary. Have you seen anything like this before? A website split, not a migration. So yes, we sometimes do see websites split, or kind of separate into completely separate domains. In practice, you're always going to see more fluctuations there than when you do a site move. The main problem there is that we have to take all of those signals that we have kind of collected over the years for those individual URLs, or the website in general, and find out how we should separate them. Whereas with a site move, we can say, well, the whole website is moving, so we can just take all of the signals that we have and pass them on to the new website. You'll always see fluctuations with site moves as well, but usually it's easier with site moves. So seeing fluctuations and drops in rankings, at least temporarily like this, is probably normal if you're splitting the website up, just because that's such a bigger step, let's say, on the complexity scale. So obviously, this isn't something that you can easily avoid. Usually, there are bigger reasons why you have to split a website up. But you should just keep in mind that it's kind of normal that you would see stronger fluctuations in a case like that.

Now, I think Penguin 2.1 only looks at the website's inbound link profile. Penguin 1 looked at on-site issues as well as links; now Panda covers on-site. Were Penguin 2.0 and 2.1 changed to only look at link signals? We call the Penguin algorithm a webspam algorithm, and it tries to take various webspam aspects into account. So I don't think it would be fair to say that it only looks at links. That's something that tries to get taken into account in general. Panda, on the other side, is more of a quality algorithm, where we try to focus more on the quality of the content, the quality of the website overall. So those are essentially two different aspects. Sometimes they overlap a little bit. Sometimes webspam issues are there because the quality is also low. Sometimes they're completely independent, where something has webspam issues but is actually a really high quality website. And obviously, when they don't overlap that well, that's harder for us to handle correctly. Because on the one hand, we want to discourage the webspam aspect, and on the other hand, we want to show great results in the search results.

MALE SPEAKER: John, John. Can you hear me?

JOHN MUELLER: Yes.

MALE SPEAKER: Yeah, that was my question. There seems to be a lot of confusion about on-page keyword spamming, and which kind of algorithms take that into consideration. I think Penguin 1 really did take that into consideration. But when there are questions in the Google Webmaster forums, there never seems to be a great answer for it. So I just wanted to know if there's, you know, a definite answer on what they should be looking at, and which algorithms would most likely affect them.

JOHN MUELLER: So we have a lot of algorithms, and most of them don't have any flashy public names. So it's hard to say that we'd be lumping, like, the keyword stuffing type algorithms into one or the other of these bigger algorithms. But this kind of stuff is taken into account on our side. When we recognize that there's keyword stuffing happening there, we try to figure out how to best handle it. From my point of view, for a lot of these topics, when we recognize kind of this spammy activity on pages, I generally prefer our algorithms to just react to that spammy part and say, well, I'm going to ignore the spammy part here, because maybe the webmaster didn't really intend to spam us like this. Maybe it's something that was on their website for years and years now. Maybe they didn't even notice it themselves. And just focus on the good parts of the website and treat the website appropriately like that. So I think, for most aspects, that's a reasonable approach to take. And that's something where maybe you wouldn't even see a drop in rankings if you're keyword stuffing. But you wouldn't see a rise in rankings because of this keyword stuffing either. So from that point of view, it's often not a critical issue that you really, really need to resolve as quickly as possible. But if you have this kind of keyword stuffing on your website and you keep maintaining your website, you're going to have to keep thinking about this keyword stuffing and kind of artificially take it into account as you revamp your website, as you create a new design for it. And that's just a lot of work that can cause more problems than anything else. And since you're not getting any advantage from this kind of keyword stuffing, it just makes sense to clean that up, so you don't have to worry about it in the future.

MALE SPEAKER: So do you think that a company name could be classed as keyword spamming, if you have your company name, say, 15 times on one page?

JOHN MUELLER: Theoretically, I could imagine that might be a problem.

MALE SPEAKER: Yeah.

JOHN MUELLER: If the company name is something like, I don't know, cheapmortgages.com or something, in those kinds of situations, the company name almost overlaps with the keywords that they're trying to target, and that could be seen as something where we'd say, well, this looks like you're artificially spamming these keywords. We don't really recognize that it's a company name. It looks like you're trying to artificially spam these keywords, and that's something we might react to. And [INAUDIBLE] the company name is something like, I don't know, XYZ.com or Acme.com, Google.com, that's something where you're not going to be spamming the keywords, because it's your company name. You rank for it anyway. There's nobody else that's trying to compete for these terms. That's not something where we'd say this is really going to be a problem if we ignore those keywords on your page, because the rest of your site is all about this company. But if your company name is just like keyword one, keyword two, keyword three, then obviously our algorithms are going to say: well, is this really a company name? Is this someone just trying to spam those keywords? Where do we draw the line? How much should we ignore here? And how much do we give this weight? We see that sometimes, when sites create domains specifically for keywords, where they say, my website is cheapmortgages.com, therefore I should rank number one for cheap mortgages. Because, you know, everyone's looking for my site. They're searching specifically for my keyword, right? And that's definitely not the case.

MALE SPEAKER: My company name was registered 10 years ago, so we've been trading 10 years. Our website is eight years old. I'm just really worried that Google might be thinking about our company name, Wholesale Clearance UK Limited, because we've got that on our page four, five times. Are we overdoing our company name? It's really hard to be on one page and not mention our company name too much, because that's what we do as well. We sell wholesale stock. So would Google see that our company was registered 10 years ago, before we had a website?

JOHN MUELLER: We wouldn't look into it in that much detail. So it's not that our algorithms are looking for, like, when this company was registered and whether that matches the records there. But we'd look at it overall on your website. And if you're using the name in a reasonable way, then I wouldn't really worry about that.

MALE SPEAKER: OK.

JOHN MUELLER: And especially with our keyword stuffing algorithms, we do try to recognize keyword stuffing, but we're not going to penalize a site for keyword stuffing in general. So you'd have to be really, really, really obnoxious to actually trigger something on our side where we'd say we're going to demote this website completely, because it's just stuffing keywords all over the place and we have no idea what to trust anymore. But if this is just a name that you're repeating on your site, in the worst case, we won't be looking at that name that often. We'll say, well, this name on the site, or on this specific page, is mentioned a lot of times, therefore we have to be careful about it. But if the rest of your site really focuses on that name, and that's what people are searching for to find your company, then that's generally a good thing. That's not a sign that we'd remove your site from those searches.

MALE SPEAKER: OK. [INAUDIBLE]. Thank you.

MALE SPEAKER: Yeah, hi John.

JOSHUA BERG: John, what would be some of the other web spam signals that Penguin might look at that you would suggest? And one question is, was the email spam ever thrown in that mix for--

JOHN MUELLER: Email spam?

JOSHUA BERG: --Penguin? For example, I mean, if Google got a large quantity of bounce backs from our email? If it's related to spamming a link [INAUDIBLE].

JOHN MUELLER: I don't think we take email spam into account, because it's just such a completely separate part of Google. It's not that we, like, read the links in Gmail and say, oh, well, we'll pass PageRank to these. Because a lot of people will share things privately and we're not going to, like, dig into that. So I don't think we take email spam in that sense into account. But of course, sometimes, if you're really obnoxious with email spam, that stuff ends up on the web as well, going to mailing lists that are public, going to, maybe, forums that post these mailing lists publicly. Sometimes that kind of leaks out onto the web as well, and that's something that we might pick up from those places. So I don't think we'd take into account something specifically from Gmail. That's just such a different part of Google.

JOSHUA BERG: But Google may look at some of the sites related to negative sentiment in this mix as well?

JOHN MUELLER: That's always tricky. I mean, we do try to recognize that kind of situation, but it's really, really hard to do that in an accurate way. So that's something where we might take that into account when there are really, really, really strong signals that are saying, here's a link and it's totally terrible and you should not look at it at all. If we find it as a link and we kind of recognize the context of that link, then, in really extreme cases, we might take that into account. But in general, we're not going to do that much sentiment analysis around every link on the web to figure out, is this a positive mention or a negative mention? We're going to kind of trust the aspect there that, if it's passing PageRank, that's something that we might want to take a look at. If it doesn't pass PageRank, then whatever, you can post as much as you want.

JOSHUA BERG: Thanks.

MALE SPEAKER: John.

JOHN MUELLER: Yes.

MALE SPEAKER: Excuse me. Excuse me, John.

JOHN MUELLER: Sure, go ahead.

MALE SPEAKER: We're working on a dot com site, but in another country. John, are you listening?

JOHN MUELLER: Yes.

MALE SPEAKER: We are working on a dot com site and we are targeting another country, like an Arab country. But we are not able to find that it ranks there. Is there any possibility to get a better ranking in countries like--

JOHN MUELLER: So essentially, how to optimize a site for a different country. There are a few aspects that you'd want to look at there, primarily around geotargeting. So either use a top level domain that's specifically from that country, if you can do that, or use a generic top level domain and set the geotargeting in Webmaster Tools. That helps us quite a bit. If you have content that's valid for different countries, or that's translated into different languages, you can also use the hreflang markup to let us know about those different versions. So you could say, this is a version in English for Saudi Arabia, this is a version in English for maybe another country there, or maybe for the UK, or maybe for the US. And let us know about that, and we can take that into account.
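As a small illustration of the hreflang markup mentioned here, this is a minimal Python 3 sketch that prints the alternate link elements for a set of hypothetical language-country versions; each version of the page would carry the full set, including a reference to itself:

```python
# Minimal sketch: print hreflang link elements for the alternate versions of
# one page. Locale codes and URLs are placeholders.
versions = {
    "en-sa": "https://example.com/en-sa/",
    "en-gb": "https://example.com/en-gb/",
    "en-us": "https://example.com/en-us/",
}

for lang, url in versions.items():
    print(f'<link rel="alternate" hreflang="{lang}" href="{url}" />')
```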

MALE SPEAKER: Yeah, OK. And one more question, John. We have a website, and we have completely banned all the internal links; we are only targeting the home page. And coming to that, we are posting some blogs regarding that URL. And we are not able to find the blogs we are posting, but we are getting the category links in the search results instead.

JOHN MUELLER: OK. So essentially, the category pages of your blog are showing up in search instead of the actual blog posts?

MALE SPEAKER: Yes. Yes. Yes.

JOHN MUELLER: Sometimes, we've seen sites use plug-ins in a wrong way, in the sense that they have accidentally put a noindex on these pages. That's something I would definitely check. But the other aspect is sometimes just that we have to learn about this website a little bit more, and we have to learn to trust it better. And that happens over time, essentially, as you grow a little bit more popular, as more users are using your site, as people recommend your site. We can crawl a little bit deeper. We can make sure to index a little bit deeper. And that kind of happens naturally. So I would first check to make sure that technically nothing is blocking those pages from being indexed. And if technically everything's OK, then I'd just continue working on the quality of your website and continue making it as high quality as possible.
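A quick way to rule out the accidental noindex mentioned here is to check both the robots meta tag and the X-Robots-Tag header. This is a rough Python 3 sketch with a placeholder URL and a simple regex, so it only catches the common attribute order:

```python
# Minimal sketch: report whether a page carries a "noindex" directive in its
# robots meta tag or X-Robots-Tag header. URL is a placeholder.
import re
from urllib.request import urlopen

url = "https://example.com/blog/some-post/"  # placeholder
resp = urlopen(url)
html = resp.read().decode("utf-8", errors="replace")

header = resp.headers.get("X-Robots-Tag", "")
meta = re.search(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
    html, re.IGNORECASE)
meta_value = meta.group(1) if meta else ""

print("X-Robots-Tag header:", header or "(none)")
print("robots meta tag:", meta_value or "(none)")
if "noindex" in (header + " " + meta_value).lower():
    print("A noindex directive is keeping this page out of the index.")
```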

MALE SPEAKER: Hey, John. Can I ask a question?

JOHN MUELLER: Sure. Go for it.

MALE SPEAKER: OK. We have a website. It's seven years old, something like YouTube, OK? We have been spammed by SEO companies. They created more than 20,000 profiles on our website and they built links to us. We closed everything, 404'd and deleted all the pages. For more than eight months, we have been trying to clean everything. When will we see changes? We had 50,000 visits from Google per day. Now we're on 200. So we really don't know how to recover and we've tried everything. It wasn't us. It was users building links to our website. So is there any way to find out what's the real problem in this?

JOHN MUELLER: You could send me the URL and I can take a quick look. I can't promise that I'll have a quick answer for that, but I can definitely take a look there. We have some recommendations for handling this kind of profile spam, when people are putting it on your site. It sounds like you figured most of that out yourself in the meantime. These are things like using CAPTCHAs to kind of block scripts from creating these pages automatically, maybe nofollowing the links there, those kinds of things. But I imagine this is something you learned the hard way.
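As a sketch of the nofollow mitigation mentioned here, assuming Python 3 and already-sanitized user HTML, this adds rel="nofollow" to anchor tags before the content is published; a real implementation would use a proper HTML sanitizer rather than a regex:

```python
# Minimal sketch: add rel="nofollow" to links in user-submitted HTML so they
# don't pass PageRank. Input is assumed to be already sanitized.
import re

def nofollow_links(html: str) -> str:
    """Add rel="nofollow" to anchor tags that don't already declare rel."""
    def add_attr(match):
        tag = match.group(0)
        if "rel=" in tag.lower():
            return tag            # leave existing rel attributes alone here
        return tag[:-1] + ' rel="nofollow">'
    return re.sub(r"<a\b[^>]*>", add_attr, html, flags=re.IGNORECASE)

sample = '<p>Check out <a href="http://spammy-example.net">my site</a></p>'
print(nofollow_links(sample))
```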

MALE SPEAKER: We handled it all, yeah. We disavowed more than 10,000 domains and everything, but it's just not refreshing, you know. We launched a new version of the website about 10 days ago, and Google is crawling about 150,000 pages per day.

JOHN MUELLER: OK.

MALE SPEAKER: Can we expect changes in some normal times, two to three months?

JOHN MUELLER: Usually, if you significantly changed your website, that's something where you would see changes, yes.

MALE SPEAKER: OK. I have sent you a link on the right side. It's flippy.com.

JOHN MUELLER: OK.

MALE SPEAKER: If you have the time, send an email to info@flippy.com or nitro@flippy. I can send you my email address.

JOHN MUELLER: OK.

MALE SPEAKER: If you can, please help us. It's not us building these links, just users, the same as YouTube has the problem, and the other media companies. So we're not trying to be bad, but users are using us in an illegal way. So I don't bother anymore. Please just help us. Just tell me how to fix this.

JOHN MUELLER: Yeah. I'll take a quick look afterwards. Yeah.

MALE SPEAKER: OK. Thank you. Thank you.

MALE SPEAKER: John, can I ask you a question?

JOHN MUELLER: Sure.

MALE SPEAKER: We have an AdWords account. We have our CTR at about 6% to 7%. We recently created a marketing campaign that works fine, but it brings down our average. It's there at about 2% to 3%. Is it better to move that marketing campaign to another AdWords account? Nothing has fixed it.

JOHN MUELLER: I can't give you any ads advice. I'm sorry. I really don't know. We split web search, the organic side, from the ads side completely. So I really don't know how I could help you there. I'd check with the AdWords forum, perhaps? I don't know if they also do these kinds of Hangouts, but I can't help with that, sorry.

MALE SPEAKER: OK. Thank you.

JOHN MUELLER: Sure. Let me just run through some of these questions here to see if there's anything I can add.

After a successful disavow, will my Webmaster Tools links decrease? No. As I mentioned before, disavowed links still remain visible. How can I tell that my disavow was successful? In general, if you submitted it in a technically correct way, it'll be processed continuously. It's not something that is processed once and then you'll see an OK. It really is processed continuously as we recrawl.

I had a hack earlier and cleaned up my site. Will I recover traffic as before? Usually, yes. If your site was hacked and you've cleaned it up completely, then that's something that we'll pick up as we recrawl and reindex your site. Sometimes it takes a little bit longer, and I'd also double check with the Fetch as Google tool to make sure that your site is really clean. Sometimes there are different forms of hacks on a site, and some of them you'd see directly in the browser, and others only Googlebot sees. So kind of double check that it's really clean.

Should I be worried about a constant flow of irrelevant links to my website? In general, if you look at your links and you notice that there are things that you really don't want to be associated with, I'd just put them in the disavow file and move on. That way, you don't have to worry about it. So if you see something problematic, take care of it and it'll be kind of cleaned up.

If a site was built completely using AJAX, Canvas, and WebGL, and it has only one page with little text, will Googlebot think that it's a bad page? No. We won't treat it as a bad site. But what might happen is that it's harder for us to actually get to your content. So that's something where I'd use Fetch as Google with the rendering option to see what we would actually see when we crawl the page. Maybe we'll be able to pick up the content. Maybe you'll see that some of the content is blocked by robots.txt that you might want to allow crawling for. The other aspect is, if your whole website is essentially on one URL for crawling purposes, then that makes it essentially impossible for us to find the rest of your content, because it depends on how you click through your site. So if you can set it up to use separate URLs, you can do that using HTML5 pushState, for example. Then that makes it possible for us to crawl those individual URLs and to actually index them separately. So that might be something worth looking at as well. But it's definitely not the case that we would treat a site like that as being bad.

Let's see. How do we bring more visitors to our Google+ page? Essentially, this is a page like any other on the web. You can recommend it to your users. You can encourage your visitors to recommend it to other friends. It's not something where we would say Google+ pages are inherently different than any other kind of webpage out there.

When I search keywords, the search results show a different type of meta title rather than the original title? We talked about this briefly. I have a number of questions here. So maybe I'll just open it up to you guys. What's left? What's still on your mind? What can I help with?

DEREK MORAN: Yeah. I've got a quick question, John.

JOHN MUELLER: OK.

DEREK MORAN: It's been three weeks since I converted my entire site to HTTPS. And I've noticed an interesting pattern. Every single page that did not have rel=canonical in it converted to HTTPS within, I think, 24 hours. But every page that did have rel=canonical has not converted in the index at all after three weeks. It's still stuck at the old one.

JOHN MUELLER: OK. That shouldn't actually be the case. Yeah. That's a bit weird.

DEREK MORAN: Yeah. So it's basically--

JOHN MUELLER: --you have it in your head tag?

DEREK MORAN: Yes.

JOHN MUELLER: OK.

DEREK MORAN: So that means basically all the best part of my website is not converting. But my forum, which is still a good part, that converted. I don't get it.

JOHN MUELLER: OK. One thing that you might want to do is look at the cached pages there. So what sometimes happens, and I know it confuses webmasters a lot, is we'll have multiple URLs associated with the same content, but we'll actually index it primarily under one of these versions. And you usually see that if you look at the cached page. So if you do something like a site: query, you'll see those old URLs that you think should be under the new URL. If you click the drop-down and click on the cached page, it will show you the URL that we actually indexed this content under. And what might be happening there is that we show the old URL in the site: query, but actually we index it as HTTPS, and then you're essentially covered.

DEREK MORAN: OK.

JOHN MUELLER: But I'll double-check to make sure that there's nothing otherwise weird.

DEREK MORAN: Well, we've gone over everything. We've followed everything, you know, kept at it there and not given up.

JOHN MUELLER: OK. I wouldn't necessarily worry about that. That's something where sometimes it just takes a little bit longer to catch up. We've noticed that we can't always trust rel=canonical, especially when it's used in a bad way. But in general, when we find it and we see it used properly, we do try to take it into account, in addition to the 301s. It's a pretty good signal. For example, we've seen a lot of sites have a rel=canonical set to their home page. So they'll have a big website with lots of different pages, but the canonical is set to their home page, which, theoretically, if we followed it, would mean we drop all of these other pages and just index the home page. So that's the kind of situation where we'd ignore the rel=canonical. But if you're using this in a clear way to kind of let us know about or confirm a move, then that's something we should be taking into account. And it sounds like something where maybe we should figure out if something broke on our side.
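As a way to verify the kind of setup being discussed here, one could spot-check that every HTTPS page declares a rel=canonical pointing at its HTTPS URL rather than the old HTTP one. This is only a rough sketch, assuming Node 18+ for the global fetch; the URLs are placeholders, and a real check would likely use an HTML parser instead of a regular expression.

```typescript
// Minimal sketch: fetch a few HTTPS pages and report where their rel=canonical
// points. The URLs below are placeholders.

const pagesToCheck = [
  "https://www.example.com/",
  "https://www.example.com/some-page",
];

async function checkCanonical(url: string): Promise<void> {
  const html = await (await fetch(url)).text();
  // Naive extraction; assumes rel appears before href in the link tag.
  const match = html.match(
    /<link[^>]+rel=["']canonical["'][^>]+href=["']([^"']+)["']/i
  );
  if (!match) {
    console.log(`${url}: no rel=canonical found`);
  } else if (match[1].startsWith("https://")) {
    console.log(`${url}: canonical looks fine -> ${match[1]}`);
  } else {
    console.log(`${url}: canonical still points to ${match[1]}`);
  }
}

pagesToCheck.forEach((url) => checkCanonical(url).catch(console.error));
```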

DEREK MORAN: Yeah. Well, I definitely have added all the canonicals to the HTTPS.

JOHN MUELLER: OK. Sure.

DEREK MORAN: Yeah. Cool. Thank you.

MALE SPEAKER: Hey, John. I wanted to go back to the disavow files again.

JOHN MUELLER: OK.

MALE SPEAKER: So you mentioned earlier that any of the sites that you put in there effectively have those links treated as nofollow. Do you think that those sites will also be used in aggregate, so Google looks at all of the various disavow files that are being submitted by all of the various webmasters? Would they be used to enhance, say, Panda, or some of the other search algorithms?

JOHN MUELLER: I wouldn't rule that out completely, but it's very tricky. It's not something where we'd say we can take this one-to-one and use it for our algorithms. For example, there's a situation where maybe a very legitimate blog is out there that has a lot of high-quality content and good links, but it happened to get stuck on some list used by some script that auto-posts comments. And a lot of sites might have auto-posted comments there that are essentially useless links that we should be taking out. But the rest of the content on this blog is actually really high-quality content. We see that happening a lot with government websites, for example. They'll have really good content on the website, but they have it set up in a way that allows random people to add comments and links to those pages. So they're being taken advantage of by [INAUDIBLE]. And if the sites all disavowed those government pages, then that kind of cleans up the connection between that government site and their site, those spammy links that were dropped there in the past. But that doesn't necessarily mean that this government site is a really low-quality, spammy website that's just spamming everyone. It just happened to be open for other people to spam. So that's something where I could imagine, to some extent, we might take that into account to double-check some of our algorithms. But we'd really, really need to be careful when we do that, that we don't kind of take into account sites that are essentially good sites that just happened to get taken advantage of.
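For reference on what "putting sites in there" looks like in practice, a disavow file is just a plain-text list of individual URLs and domain: entries, one per line, with # comment lines. A minimal TypeScript sketch that assembles such a file, with placeholder entries:

```typescript
// Minimal sketch: assemble a disavow file from lists of domains and individual
// URLs and write it out for upload in the disavow links tool. All entries here
// are placeholders.
import { writeFileSync } from "node:fs";

const disavowedDomains = [
  "spammy-directory.example",
  "auto-comment-network.example",
];
const disavowedUrls = [
  "http://blog.example.org/post-with-spammy-comment.html",
];

const lines = [
  "# Links disavowed after the last link audit",
  ...disavowedDomains.map((domain) => `domain:${domain}`),
  ...disavowedUrls,
];

writeFileSync("disavow.txt", lines.join("\n") + "\n", "utf8");
```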

MALE SPEAKER: All right, John, I have to ask. Why doesn't Google just ignore all the comments, so that people immediately stop all the comment spamming online?

JOHN MUELLER: I don't think it would work that way. But yes, it would be nice if we could stop that kind of comment spam. We saw it, for example, when we introduced the nofollow tag. Lots of websites moved to the nofollow tag, but a lot of these auto-posting spam scripts essentially post their comments regardless. They don't even recognize that these comments are nofollowed. So it would be nice if we could just flip a switch and say, OK, all spammy comments will disappear, or people will stop spamming after we make this change. But realistically, I don't think it's quite that easy. All right. We're a bit over time already. So I just want to thank you all for all of your questions and comments. It's been really interesting.

MALE SPEAKER: John, one second. John, one second.

JOHN MUELLER: OK. One last one.

MALE SPEAKER: Happy New Year.

JOHN MUELLER: Oh, yeah. It's new year, yeah.

MALE SPEAKER: Right?

JOHN MUELLER: Yeah. Not here in Switzerland but yes.

MALE SPEAKER: Great talking to you.

JOHN MUELLER: OK Great.

MALE SPEAKER: Thank you.

JOHN MUELLER: All right. So have a great weekend, everyone. I hope to see you guys again. I'll set up the new Hangouts later today, so feel free to add any questions that I missed there, so that we can go through those then. Thanks again.

MALE SPEAKER: Thank you from [INAUDIBLE]. Thank you. Bye. Thank you. Bye. Bye, John.