Reconsideration Requests

Google+ Hangouts - Office Hours - 10 March 2015


Transcript Of The Office Hours Hangout

JOHN MUELLER: Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst at Google in Switzerland, and part of what I do is talk with webmasters and publishers like we have here in the audience already. Before we get started with the normal Q&A, if one of you has any questions that you've been burning to ask, feel free to jump on in.

BARUCH LABUNSKI: Basically adding sitemaps, or refreshing them, in other words, would that basically boost the crawl stats? I mean, if the crawl stats are very, very low, let's say in the hundreds, and all of a sudden, I made some changes to a certain amount of pages on a site, would that boost my-- because there's a lot of talk out there still about this whole method. So if I refresh it, would that boost my stats?

JOHN MUELLER: Possibly, yeah. So I guess there are two aspects there. On the one hand, we try to find the maximum crawl speed that would make sense for a server, and we'll try to stick to that. So that's kind of the upper bound we have for a website. That could be the higher limit that you set in Webmaster Tools. It could be a limit that we determine automatically if you don't have anything set in Webmaster Tools. So that's kind of the high limit that we have there. And we usually don't crawl at that high limit all the time. So if we're crawling at 100 pages a day and we have our limit set to 10,000 pages a day for your site, or for your server, then theoretically, there is this room for a whole bunch of pages that could be crawled during that day. And if you submit a site map, and you tell us all these pages just recently changed, then we'll go off and crawl them. So potentially, if there's kind of this room that we have available for crawling more pages, then we'll try to take that into account when we actually do crawl these pages.

BARUCH LABUNSKI: And so also moving to AWS, or moving to a different cloud as opposed to having my own server, and moving into a better server, that will also help, right?

JOHN MUELLER: Sure. I mean, we kind of set the server limit based on what we think your server can take, and that's kind of based on what we've seen from crawling. So if we crawl 1,000 pages a day and your server does fine, and if we crawl 2,000 pages a day and we see a lot of server errors, or we see the response time go down significantly, then we might say, well, there's probably a limit between about 1,000 and 2,000 that we shouldn't go past. And if you move to a server that's significantly stronger, where we can crawl a lot more pages, then most likely we'll be able to crawl more pages. The thing to keep in mind there is that just crawling these pages doesn't mean that they'll rank. So we might crawl 10,000 pages from your site on one day, but that doesn't mean that these 10,000 pages are going to be indexed the next day, and it doesn't mean that they're going to rank higher than any pages that we crawled maybe a year ago. So just because we crawl them doesn't mean they'll rank better, but we do try to keep it that way. And we say, OK, well, if the webmaster says that all of these pages changed yesterday, or recently, then we should go check them out so that when someone is searching for that content, we actually have that content. It's not that they'll rank higher for anything on these pages, but if someone is searching specifically for that content and we don't have it anywhere else, then of course we're going to have to show your pages, because that's where this content is.

BARUCH LABUNSKI: I see. So I might see a little jump there like on, let's say the 10th, being today. And then like two days later, it's back to where it was. Like you were saying.

JOHN MUELLER: From the crawl speed, I imagine we'll try to smooth that out over time. So we'll try to pick up these pages as soon as we see that they've changed, but that usually spreads out a bit. So if you submit a sitemap with a million pages and you say these all changed yesterday, then we're not going to be able to crawl all million of those today.


JOHN MUELLER: We'll have to kind of spread that out.

BARUCH LABUNSKI: Thank you so much, John.
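The sitemap-with-lastmod setup discussed above can be sketched as a small generator. This is only an illustration of the sitemap protocol format; the URLs and dates are hypothetical.

```python
# Sketch: build a sitemap that flags which pages recently changed,
# so a crawler can prioritize them within its crawl budget.
from datetime import date, timedelta
from xml.sax.saxutils import escape

def build_sitemap(pages):
    """pages: iterable of (url, last_modified) where last_modified is a date."""
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url, lastmod in pages:
        lines.append("  <url>")
        lines.append("    <loc>%s</loc>" % escape(url))
        lines.append("    <lastmod>%s</lastmod>" % lastmod.isoformat())
        lines.append("  </url>")
    lines.append("</urlset>")
    return "\n".join(lines)

# One recently changed page, and an old one the crawler need not revisit.
pages = [
    ("https://example.com/products/blue-shoes", date.today()),
    ("https://example.com/about", date.today() - timedelta(days=365)),
]
print(build_sitemap(pages))
```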

MALE SPEAKER: So, John, I have a follow-up there. And we had this bit of a problem because we were trying to figure out what last modified date we should put in the sitemap. So for example, if the CSS changes, or the JavaScript changes on the page, that affects some of the supplementary elements on the page, but not the main content. Google's crawler uses the "If-Modified-Since" header, and so we send the response accordingly, but that depends on whether or not the JS and CSS have changed. But in the sitemap it's different, because we can control it. We know exactly when the main content changed versus when the CSS and JS changed. So long story short, the question is: in a sitemap, the last modified date, should we change it if the JS and CSS change?

JOHN MUELLER: If you want that indexed, sure. You can do that. But primarily, we want to see when the main content changes and that can include things like comments, that can include things like additional news articles, or updates that you have on those pages. So these are all kind of in the bulk of the main content, but if you want to have layout changes reflected in the index, then by all means, do that as well. So if you do a complete redesign of your website, the internal links change, and all of those things change, then by all means, let us know about that. We'll try to recognize that automatically as well and pick up on crawling that as well.

MALE SPEAKER: But redesigns don't happen that frequently. Normally, you'd make smaller changes to CSS. And the thing that I was concerned about was if the last modified date in the sitemap is different from the last modified HTTP header that the page sends--

JOHN MUELLER: That's fine. That's fine. I mean, we try to recognize that the last modification dates in the sitemap make sense, because what we've seen a lot is that someone will set up a dynamic script to create the sitemap file and say, oh, the last modification date for all of the pages across my website is right now. And that's almost certainly wrong. And that's something that we try to recognize and say, well, I think the webmaster's trying to shoot themselves in the foot here; they don't really want us to re-crawl the whole website all the time. We have to figure it out ourselves. But if you give us a reasonable last modification date, we'll try to take that into account. We have the Dutch football fan club here today. What's up?
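The anti-pattern John describes, a dynamic sitemap stamping every URL with "right now", can be spotted with a simple heuristic. This is a sketch, not how Google actually detects it; the one-hour window is an arbitrary assumption for illustration.

```python
# Sketch: flag a sitemap whose lastmod dates all cluster in one narrow
# window, which suggests they were generated rather than real.
from datetime import datetime, timedelta

def lastmod_looks_bogus(lastmods, window=timedelta(hours=1)):
    """True if every lastmod falls within a single narrow window."""
    if len(lastmods) < 2:
        return False
    return max(lastmods) - min(lastmods) < window

now = datetime(2015, 3, 10, 12, 0)
all_now = [now, now, now]                                   # suspicious
spread = [now, now - timedelta(days=30), now - timedelta(days=400)]
print(lastmod_looks_bogus(all_now))   # every date identical
print(lastmod_looks_bogus(spread))    # realistic spread of dates
```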

MALE SPEAKER: I'm curious. Gary from Google, who might be sitting somewhere near you, or maybe not, I don't know if he's back yet, announced, or suggested that they're going to announce at Google this week that something's changing with the Ajax recommendations for Google. Can you talk more about that since it was talked about a little bit so far?

JOHN MUELLER: I think, if he announced that we're going to announce something, then maybe it's best to wait for that announcement. I mean, the bigger picture is we've gotten better and better at rendering pages, so it's generally less of an issue that you need to do something like the Ajax crawling workaround to actually make this content accessible to Googlebot. So it's something where, if we can render the page directly, then why should you have to go through this kind of magic dance to create this alternate version of your website to kind of serve specifically to Googlebot? So that's kind of the bigger theme, and that's the direction we've been heading for a while now. So we're getting better and better at rendering JavaScript; we've got it worked out for pretty much most kinds of pages. So I think the next logical step would be to say, well, you don't need this kind of magic workaround anymore, you can just count on us being able to render those pages directly.

MALE SPEAKER: He kind of implied that it's not good user experience to design sites that are fully built in Ajax.

JOHN MUELLER: I don't know what he said there specifically. It's hard to comment on that.

MALE SPEAKER: I was going to follow up on Barry's question and ask, if you have a search tool with filters, do they still recommend using the exclamation mark hashtag to indicate the URL that Google can go to directly, or are you guys saying that won't even be necessary?

JOHN MUELLER: I am not sure specifically what you're referring to.

MALE SPEAKER: If you have a search tool and one of the filters is, I don't know, blue shoes. In the past, you guys recommended that you would use, I believe, exclamation mark, hash to indicate-- so for example, let's say example.com#blueshoes, so then if you went also to that URL, you would get the same thing as if you clicked on that filter.

JOHN MUELLER: I think you're probably mixing up the hash thing and the Ajax crawling thing there a little bit. So with the Ajax crawling setup, you have the hashtag, exclamation mark.

MALE SPEAKER: Yes, sorry, I got it the other way around.

JOHN MUELLER: And the URL we would crawl is then question mark, underscore, escaped fragment, equals, whatever you have after the hashtag. And that's essentially the Ajax crawling setup. And for search pages specifically, I think that's independent of any of this Ajax crawling. Specifically, that's something where you can use URL parameter settings and those kinds of things.

MALE SPEAKER: Perfect. Thank you.
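The mapping John spells out, from a "#!" URL to its "?_escaped_fragment_=" equivalent under the (since-deprecated) Ajax crawling scheme, can be sketched as:

```python
# Sketch: translate a hashbang URL into the escaped-fragment URL that
# crawlers fetched under the old Ajax crawling scheme.
from urllib.parse import quote

def to_escaped_fragment(url):
    if "#!" not in url:
        return url  # no hashbang, nothing to translate
    base, fragment = url.split("#!", 1)
    sep = "&" if "?" in base else "?"
    return f"{base}{sep}_escaped_fragment_={quote(fragment, safe='')}"

print(to_escaped_fragment("https://example.com/search#!blueshoes"))
# https://example.com/search?_escaped_fragment_=blueshoes
```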

JOHN MUELLER: So I guess we have this first question covered. Let me just click down here. Do you consider it bad to do a 301 redirect of an old domain to a new website? The old domains have thousands of pages indexed, and there will be no related pages to point to on the new website. Can I just 301 the whole domain to the home page of the new URL, or is that bad?

If you're moving from one domain to the other, I definitely would do a 301 on a pretty regular basis. If you're essentially just setting up a new domain and it's unrelated to your old domain, then I don't know how a 301 redirect to the home page would realistically make sense there, so that's probably something I'd avoid. If you're essentially moving to a new website and you can map them one to one, then maybe that's a possibility. But for the most part, I'd really try to figure out a way that you can do a one-to-one mapping of the old pages to the new pages. They don't have to be the same URLs, but if you have, like, a product page for your blue shoes on your old website, and a different URL for a product page for blue shoes, then maybe there's a way that you can set up some redirect, so that users of the old pages can reach the right new ones. And on the one hand, that helps users. On the other hand, it helps us to forward all of the signals that we've collected over the years and bring those to your new pages as well. So as much as possible, I'd really try to set up a one-to-one redirect there and not redirect everything to the home page.

I have an old site not updated for years, which is a tutorial directory, and I have thousands of pages indexed in Google. I want to re-run that site on a new script and delete all the old content. What should I do with all the old indexed pages? Should I 410 them? It's hard to say exactly what you're trying to do there. So if you have a lot of content in a CMS and you want to move to a different CMS, then I would try to, like in the previous question, set up redirects from the old versions to the new ones. That really helps us to kind of keep that connection and make sure that all the signals that we have collected for the old pages, for the content that you have there, are forwarded to the new content. On the other hand, if you want to delete your whole website and essentially start over, and you don't want to keep any of the old content, then putting a 404, or 410, there for those URLs is perfectly fine. And that's essentially up to you. 410 will be a tiny bit faster, but for practical purposes, the 404 is just as useful.
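The one-to-one mapping John recommends could be sketched as a simple redirect table, with a 410 fallback for pages that are deliberately gone, as in the second question. The paths here are hypothetical.

```python
# Sketch: a one-to-one 301 redirect map from old product URLs to new
# ones; anything unmapped is treated as intentionally removed.
REDIRECTS = {
    "/old/products/blue-shoes": "/shop/shoes/blue",
    "/old/products/red-shoes": "/shop/shoes/red",
}

def route(old_path):
    """Return (status, location) for an incoming request path."""
    if old_path in REDIRECTS:
        return 301, REDIRECTS[old_path]
    return 410, None  # content removed on purpose; 404 works about as well

print(route("/old/products/blue-shoes"))  # (301, '/shop/shoes/blue')
print(route("/old/tutorials/deleted"))    # (410, None)
```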

JOSH BACHYNSKI: Hey, John, can I ask you a quick question?


JOSH BACHYNSKI: So is there a difference between Payday Loans and Penguin? Because you guys refer to them both as the web spam algorithm. I was wondering if there was a difference, and are you still updating Payday Loans, have you fixed Payday Loans, what is the deal with Payday Loans?

JOHN MUELLER: I don't know. What do you guys do with these payday loans websites that our web spam engineers have to kind of focus specifically on that niche? That sounds like something is really going wrong there. I don't know what's up with Payday Loans, so I don't have any specific answer. It is different than our other algorithms, like Penguin, for example. But I don't know what the specific issues are there, such that the web spam team is saying this is such a complicated and specific problem that we have to focus on it specifically.

JOSH BACHYNSKI: So you don't even know if it's still being updated?

JOHN MUELLER: I don't know. I can't speak for them. I'd have to check it out and see what's happening there. Is this something that you're seeing with specific sites?

JOSH BACHYNSKI: Well, we are seeing fluctuations. And Barry does his due diligence and asks Google if it was Panda or Penguin. And they say, nope, it wasn't Panda or Penguin. And so some sites that went down on a previous, admitted Payday Loans date, either go up on this date, or go down on this date of the unnamed update. And so we're just trying to diagnose what's going on, and just trying to fix websites, and get people back into the rankings.

JOHN MUELLER: As far as I know, the Payday Loan specific algorithms are really specific to those type of sites, though. So that's not something where if you have a site about shoes, or an e-commerce website, you'd have any problems with anything Payday Loans specific. So that would be something different.

JOSH BACHYNSKI: Well, the issue there then is that on the admitted Payday Loans date that Google had confirmed-- Barry had asked and Google confirmed this is a Payday Loans update, back in 2013, I think. Various sites that were not in any Payday Loans niche-- I would say they're probably in competitive niches, I would admit that, but various sites, like limousine sites, car rental sites for example. I've had numerous clients come to me and they went down on those dates. So either they were hit by Payday loans, or maybe Penguin was dropped in back there, or some other algorithm. Would you guys ever put Payday Loans and Penguin at the same time, maybe?

JOHN MUELLER: It's theoretically possible. We've made a lot of updates over the years. On the one hand, if it's back in 2013, it's really hard to say what would be affecting the site now that happened back then. But in general, last year we made over a thousand changes in web search. I think we said that somewhere. That's something where on any given date, there might be larger changes, something that we might talk about even. And there might be a lot of other changes that are happening that are just happening as they usually happen. And there might be things that the web spam team is doing manually at the same time, as well. So that's something where, just because something happened on a date-- I think it helps that you can look up and say, OK, well, these kinds of things happened on that date, so it might be that. But I wouldn't assume that just because it happened on that date, it has to be that. But I think you've seen a lot of these cases as well and you kind of understand that, too.

BARUCH LABUNSKI: Are we going to see less fluctuations this year? For 2015 as opposed to last year because last year there was-- I just wanted to know.

JOHN MUELLER: I have no idea. I don't think the search quality team is sitting there scheming and saying, well, we'll make lots of fluctuations this year and keep people on their toes. We're just trying to keep the search results relevant. And sometimes that means we roll out bigger changes that we've worked on for maybe a year, maybe even longer. And maybe those make bigger fluctuations, but it's not the case that the team is explicitly focusing on fluctuations and saying, this is something we should do more of, or less of.

JOSH BACHYNSKI: Well, I just wanted to finish up on that, John. So I understand completely what you're saying, and of course, we don't expect complete transparency, but quite frankly, pretty much the only way we can diagnose whether or not a site has been hit by Penguin, or Panda, or Payday Loans, or any other named algorithm, is by looking at the traffic, and looking at the ranking reports, and seeing if they fell on a confirmed date, or they fell when a bunch of other people fell. Then we can presume possibly it's Panda, which as far as we know-- although Barry's about to ask you a question about that-- as far as we know, Panda updates roughly monthly on a regular, unannounced basis. And so if a bunch of people are hit at the same time, we can predict that maybe it was Panda.

Quite frankly, if the ability to diagnose that is removed from us, that really is a bad thing, and we can't diagnose what has hit a site. And it's not a trivial change, John. It's thousands of dollars to go clean up a backlink profile, and it's a different set of thousands of dollars to go make a site much higher quality. And for a lot of businesses, that's not a trivial difference. I'm sure you can appreciate that. Some businesses, yes, they have lots of money, and they can improve their site and they can clean up the backlink profile, but many businesses don't. I don't want to be the broken ethics wheel, but quite frankly, shame on Google, if that's what you guys are doing. We need announcements. You don't have to tell us exactly how it works, but we do need announcements for these things, so we can diagnose. At least if it's a quality issue, fine, we can go fix that. Or if it's a link issue, fine, we can go fix that. Does that make sense?

JOHN MUELLER: That definitely makes sense. I don't know how much of that we can bring out, but it's definitely something that we bring up with the teams when we discuss this with them. And I think there are various levels of information that we could give in that regard. I don't know how far they'd be willing to go with regards to changes that they feel are pretty major that maybe nobody noticed.

JOSH BACHYNSKI: So the very last question on that. So again, is Penguin running on a monthly schedule now unannounced, or is it still going to be announced when it refreshes?

JOHN MUELLER: At the moment, I don't think it's on a monthly schedule. We're trying to get that more regular, but at the moment, it's not something that's rolling out automatically.

JOSH BACHYNSKI: Thanks very much.

MALE SPEAKER: John, I have been trying to diagnose an issue like this. An example is, on February 22nd, we noticed that the traffic that Google sent to the mobile site dropped by about 40-45%, and the desktop and tablet versions were unaffected. I don't know how to diagnose this, because every single thing that I do in terms of the [INAUDIBLE] headers is set, Google's tools say that it is mobile friendly, I haven't made any changes on the mobile site, and this is a significant issue, because I usually get about 100,000 people a day on mobile, and now it's down significantly. So what can we do to diagnose that issue, because nothing seems to have changed?

JOHN MUELLER: That sounds mostly like a technical issue on your side. I'd have to take a look at the specific URLs to see what exactly is happening there, but in general, we currently have very minimal differences between the mobile search results, so the smartphone search results I assume you mean, and the desktop search results. So we watch out for things like if you have broken redirects, like [INAUDIBLE] redirects, if you have bad error pages on mobile, if you have Flash on mobile; those are things that we watch out for. It sounds like you already have a pretty good mobile site.

MALE SPEAKER: It's the same URL. We don't even redirect. So if you were to look at it, how can we contact you? Can you publish your email address?

JOHN MUELLER: Post in the help forum, and feel free to post the URL of your thread in the event listing, and I'll take a look there. But generally speaking, since we don't differentiate from the ranking side of things that strongly at the moment between the desktop and the mobile pages, if you're seeing a significant drop in the mobile pages, then it sounds like something more on your side is going wrong than something on our side has changed. But I'm happy to take a look. I think in the future, as we roll out the mobile ranking changes, where we are going to take a stronger approach to recognizing mobile-friendly pages and non-mobile-friendly pages, you will see a stronger difference between the desktop and the mobile versions, but at the moment, it should be fairly equivalent, besides the exceptions that I mentioned.

MALE SPEAKER: I've got one question about Panda. Gary mentioned that Panda is constantly running in real time. I wasn't at the session, I couldn't really ask you more questions about that, but as far as I know, based on what I'm tracking, there might have been some Panda rumblings in the past week or so, maybe a little bit of testing, but before that, I haven't seen any significant Panda changes since October 24th. As far as you know, could you tell us a little bit about this real time, instant, constant remark? Is it constantly running? What's the story with that? It used to run every month, but I think you guys stopped that at some point.

JOHN MUELLER: I am not exactly sure what he was referring to with the real time stuff there, so I don't have any specific comment on that. I'd have to check in with him to see what he was talking about there. But as far as I know, we do kind of have Panda tied into our search results more directly, so there's this kind of real time reflection there. But as far as I know, we haven't updated the data that's reflected there for a while. So that's probably the date that you're looking at there and the real time is more the technical aspect of how we integrate the [INAUDIBLE].

MALE SPEAKER: Are you guessing that that's the date, John? Or do you know for a fact that it's October 24th? Because it's funny that my customers site is in the same situation and that's the exact date where they fell off a cliff.

JOHN MUELLER: I don't know for sure the exact date. I know it was in October.

MALE SPEAKER: The last time they refreshed the data was in October.

JOHN MUELLER: Barry watches these like a hawk, so he would know which date it is. I'd have to dig through a big pile of emails to find something.

MALE SPEAKER: But just to confirm, they haven't refreshed this data since sometime in October, as far as you know.

JOHN MUELLER: As far as I know, it's not refreshed yet.

MALE SPEAKER: Hey, John, since we are on the Panda issue: in the past, you have said that you consider comments as part of the page. And let's use Barry's website as an example, which has usually thinner articles, but a lot of comments and conversation happening. Can it be the case where, if there are a lot of comments-- not thin content, just maybe 500-word articles. Sorry, Barry. Can it be the case where, if a site has a lot of comments, then potentially Panda might get confused and think that a lot of different articles are looking alike, because maybe the comments are not differentiating themselves? People are not using the keyword, the article topic, as much as it's used in the body of the article, and then you are inadvertently penalizing sites that you should be rewarding, because they have a lot of community conversation and an engaged community, versus a different website that might have a slightly longer article and no one bothers to comment on it.

JOHN MUELLER: I don't think that's the case. We do look at the whole page and the whole website in general. We look at the quality of a site, so that does include things like the comments. But I don't think it's the case that if you have a lot of people who comment in the same, or similar, way, that would significantly pull down the quality of your pages. I imagine if you have the issue that a lot of people are commenting exactly the same way across all of your articles, then that's probably a general lower-quality sign anyway, but something that you'd want to look at as a webmaster and think about: why are all these people saying exactly the same thing on all of these pages? Are these real people, or are these bots trying to drop links, or what's really happening there? Because usually, if people are commenting on something, they have something unique to say, and they're not just going to say, well, I liked your article, thank you for posting it, please link to my website. They're going to say something unique.

MALE SPEAKER: No, absolutely. My concern is that the conversation that happens may not be strong enough for a bot to recognize the differences. I think for a human, it definitely is. But maybe the conversation is happening, and if most of the content on a page comes from comments-- so let's say 70% comes from comments versus the rest of the article, because you have, again, a very active conversation-- then maybe a bot cannot say, oh, this is what this page is about. I can see the title of the page, but I cannot tell what this page is really about versus the next page, versus the next page. And you might be having a false positive.

JOHN MUELLER: I think if we look at the whole page overall and we really have trouble recognizing what this page is about, then maybe these comments are kind of off topic. But even off-topic comments can provide value for some sites, so it's not the case that I'd say, if you have shorter articles and a lot of comments, that's a bad thing. I'd really look at these overall and say, well, overall, this page provides a lot of value. Some of it is on top here, and some of it is on the bottom here, with a lot of user-generated content. And overall, this is something that provides value that we'd like to show in search, so that's something that we try to reflect there. And it's not the case that we'd say, well, there are 500 comments on this page and the actual article is only five lines, therefore this page is lower quality. There's actually a lot of content there if there are so many comments, so that's not something where I'd take the ratio of comments to lines of actual body article into account. We look at the overall page and try to recognize what that's about, not just focusing on one or the other element.

MALE SPEAKER: Thank you.

JOHN MUELLER: All right. Let's run through some of the questions that were submitted. Google mobile ranking: are we going to see something like a page layout algorithm for mobile due to small screen sizes? One quarter to one third is covered with ads; is that a problem? I don't know. That's kind of forward looking. I would love to be in a situation where we can say all websites work great on smartphones, and now we can focus on making sure that they look even better on smartphones and don't just show ads on smartphones. I think maybe at some point in the future, but not anytime soon.

If an article is copied and pasted on multiple geo-domains, and we implement the hreflang markup, do the canonicals need to be pointing at the country [INAUDIBLE] that published it first? You could do that, if you wanted to. You don't necessarily need to do that. If you're using the hreflang markup, then I would do the canonical to each of the hreflang implementations of that. If you're not using the hreflang markup, and you just want to publish this on multiple pages but actually prefer to have one version indexed, then definitely use the canonical and just point to that version. You don't need to do the hreflang in addition, because we kind of drop the hreflang information, if you have a canonical set up, to a specific version.

I'm trying to recover my site from Panda, but did not see any Panda refresh updates since October. It's very frustrating. Can you give some clue about the next update? We talked about this briefly. I don't have any update on when the next updates will happen, but we've been talking to the teams here as well to see what we can say, or encourage things to move along a little bit faster.
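The hreflang-with-self-canonical setup John describes for an article republished on multiple geo-domains can be sketched as a small tag generator. The domains are hypothetical, and this only illustrates the link tags, each version canonicalizes to itself while listing all hreflang alternates:

```python
# Sketch: emit the head links for one geo-version of an article.
def head_links(self_url, alternates):
    """alternates: dict mapping hreflang code -> URL (should include self)."""
    tags = ['<link rel="canonical" href="%s">' % self_url]
    for lang, url in sorted(alternates.items()):
        tags.append('<link rel="alternate" hreflang="%s" href="%s">' % (lang, url))
    return "\n".join(tags)

alts = {
    "en-us": "https://example.com/article",
    "en-gb": "https://example.co.uk/article",
}
# The .com version points its canonical at itself, not at the .co.uk copy.
print(head_links("https://example.com/article", alts))
```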

BARUCH LABUNSKI: That would be great.

JOHN MUELLER: I have over 100,000 pages added by users. 90% of the pages have thin content and a lot of duplicate content. I'm thinking of adding noindex, nofollow to those pages. I'm scared, of course; 50% of my search engine traffic goes there. What should I do? This is something that's really hard to answer on a very generic level. So if you have a lot of user-generated content, and you know that it's lower-quality content, and you don't want to be associated with that, then that's something you might want to take action on and see what you can do to either improve that content, which might be an option, or to remove the content that's lower quality for the meantime and improve on it step by step. Or think about what you can fold together. All of those things. So essentially, that's something where it depends on your website, on what you want to do, how far you want to go, but if you're able to recognize that a lot of your content is lower-quality content, then I think that's an important first step. And from there, there are a few options open that you can use to treat that.

A few months back, [INAUDIBLE] website, his links started to show some unusual set of links, more than 3,000 at once. I got to know that someone has used a spam link bot towards my website. I disavowed most of the domains, but my rankings were dropping. In general, if you use the disavow tool for that, that's the perfect way to handle that. You can disavow on a domain level, which means you don't have to focus on the individual URLs; you just compile the domains and submit it like that. If your ranking is still dropping there, then chances are that's completely unrelated to that, because on the one hand, we do try to recognize this kind of spammy link behavior automatically, and we essentially try to ignore it. Using the disavow tool, we make sure that we can drop it. But this isn't something that generally would be negatively affecting your website.
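The domain-level disavow John suggests is just a text file of "domain:" lines, with "#" comments allowed. A sketch that compiles a deduplicated list of domains into that format (the domains are placeholders):

```python
# Sketch: build a domain-level disavow file from a list of spammy
# domains, deduplicated and sorted, with a comment noting why.
def build_disavow(domains, note="links from spam link bot, March 2015"):
    lines = ["# " + note]
    lines += sorted("domain:" + d for d in set(domains))
    return "\n".join(lines)

spammy = ["spam-example-1.com", "spam-example-2.net", "spam-example-1.com"]
print(build_disavow(spammy))
```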

BARUCH LABUNSKI: On the mobile level, John, just a quick question. On the mobile level you were just discussing, now the Google mobile usability updates every two, three weeks. Now that it's a ranking factor, can we mark it as fixed soon, like the errors and stuff? Is it possible that the feature will change? Marking it as fixed as opposed to waiting two, three--

JOHN MUELLER: I think we have something like a "mark this as fixed" planned, but I don't think it's for the mobile usability feature. But in general, the "mark this as fixed" in crawl errors, for example, is essentially just for the UI, to make it easier for you. It doesn't change anything on our side, so it doesn't result in us re-crawling those URLs, or re-indexing them. It's essentially just you saying, I looked at these, I don't want to look at them anymore, show me the next batch.

MIHAI APERGHIS: By the way, related to that, I was in the hangout last week regarding mobile websites. And I found out that the ranking boost will be applied in a binary way, so if the website passes the mobile-friendly test, then it will get the boost, or whatever it gets. If it doesn't, it doesn't. So it's not about just a few issues being fixed; the whole test needs to show the green light in order to get the full benefits. So not parts of it, or portions.

BARUCH LABUNSKI: The problem here is that if there are 200 pages that are not mobile friendly, I don't want to go through the same ones over and over. I'll just mark them as fixed and then I would see whatever's left. Like, 30 left, 20 left, whatever, as opposed to having the Excel on the screen.

JOHN MUELLER: That's good feedback. I'll see what we can do there. I don't know if we can implement something like that really quickly, but I think that seems like something that might make sense there. And in general, what will happen is we'll re-crawl those pages and we'll drop them out automatically, but I can see how you might want to let us know about that a little bit earlier, so that we can re-crawl them a little bit earlier, so that you don't have to see them in the chart and aggregations there.

MIHAI APERGHIS: But we can still use that to render and then submit to index?

JOHN MUELLER: Yes, you can do that, but that mostly works if you have a handful of pages. If you're changing a lot of pages, then it might make more sense to just select all of these in the chart and mark them all as fixed.

JOSH BACHYNSKI: Hey, John, can I ask you a clarifying question?


JOSH BACHYNSKI: So again, there will be a debate later on Barry's site filled with internet trolls and vitriol, so I just wanted to make sure we get this clear. Some people are taking your last comment about Panda, that the code hasn't been updated since October 24th, when Panda 4.1 was released, as saying that Panda itself has not run monthly since October. Can you confirm whether that is the case, or are you just not sure?

MALE SPEAKER: Wait, before you confirm: specifically, not only has the code not been updated, the data, in terms of the signals you're pulling from websites, also has not been updated since October. So you might be re-running it monthly, but it's with the same old data. That's how I understood it.

BARUCH LABUNSKI: Well, why would they do that?

MALE SPEAKER: I don't know. They like to run things.

JOSH BACHYNSKI: That's the silliest thing I've ever heard.

JOHN MUELLER: I don't know where you get this level of detail from us. As far as I know, we haven't said any of that yet. We've updated the way that we're pulling in this data, but it might be that that's something that you don't really see directly.

JOSH BACHYNSKI: So when you say data, you mean code, right? You haven't updated the code since October, but you're still refreshing it on a regular basis? It's still running. Panda is still running.

MALE SPEAKER: What don't you understand? Data is not code. Data is data.

JOSH BACHYNSKI: Data can be code.

MALE SPEAKER: Can I ask the question in a different way? My customer's site was hit on October 24th and fell off a cliff. We believe it was Panda and we've turned that website upside down. We like took a machete to it, wiped out anything that could be perceived as low quality content, and added a ton of new content, and have not seen any recovery. Would that be the expected result based on what Google's done with Panda, or is it possible that we're still waiting for, let's say, a data update for that site to fully recover?

JOHN MUELLER: In general, if you make bigger changes on your website, you'll see this gradual change over time anyway, but you'll probably see the bigger jump once we've actually updated that data. But it really depends on what all is happening with your website and there might be some issues there that are totally unrelated to what our quality algorithms are picking up there. But if this is, in a theoretical case, only due to our quality algorithm thinking that this site over all is lower quality and not as good as it used to be, or compared to other sites, maybe not as good as it could be, then that's something that will be changed when this data is actually updated.

JOSH BACHYNSKI: So John, I'm sorry. So you're saying Panda has not run since October 24th, to your knowledge?

JOHN MUELLER: As far as I know. I mean, I don't know the specific date. I'd have to look it up. As far as I know, it's been a while.

JOSH BACHYNSKI: Fair enough. So, John Mueller is not sure, Barry.

JOHN MUELLER: I don't really have time to dig through my emails during these live Hangouts.

MALE SPEAKER: He said it has been a while. What's so confusing? The data has not been updated. Sites that have been hit by Panda, back sometime in October, November, if you want to go that far, will not see a recovery specifically around Panda since then. It's that simple.

JOHN MUELLER: I mean, when you're doing significant changes on your website, you'll see these gradual updates happening over time anyway. As users kind of love your website, as they share it more, as they go back by themselves more, you'll see this kind of gradual growth anyway. So that's something you should be seeing in any case, regardless of anything happening around things like Panda. In general, the bigger jump you'll probably see when that actually is updated and when our algorithms are able to confirm that, yes, this old evaluation of your website might not be relevant anymore.


JOSH BACHYNSKI: So the Panda code is no longer in effect and Panda is not affecting websites?


JOHN MUELLER: Let's move on with some other questions here. I think we should maybe do like one Hangout specific to Panda.

JOSH BACHYNSKI: That would be great.

JOHN MUELLER: And we can focus on all these questions, but maybe I should have a chance to actually prepare for that.

MALE SPEAKER: That would be very therapeutic.

BARUCH LABUNSKI: And address the scandal.

MALE SPEAKER: Moving on to a different topic. For videos, you're currently only showing snippets on the search results for YouTube. Any updates in regards to potentially changing that, and would you mind sharing the rationale behind that? It seems to me that you'd want to show video snippets for any site that ranks high for that term.

JOHN MUELLER: We do. We do show video snippets for essentially any site that we recognize as having video content. So if you use something like video sitemaps, and we can recognize that this video is on those pages, you have the play pages designed properly, marked up, then we will try to show that. So that's something that's not limited to just YouTube.
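A video sitemap of the kind John mentions looks roughly like this. This is a minimal sketch using Google's video sitemap extension namespace; the URLs and titles are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal video sitemap sketch: one play page, one video entry. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <!-- The "play page" that should appear in search results. -->
    <loc>https://www.example.com/videos/skateboard-tricks</loc>
    <video:video>
      <video:thumbnail_loc>https://www.example.com/thumbs/skateboard.jpg</video:thumbnail_loc>
      <video:title>Skateboard tricks</video:title>
      <video:description>A short clip of skateboard tricks.</video:description>
      <video:content_loc>https://www.example.com/media/skateboard.mp4</video:content_loc>
    </video:video>
  </url>
</urlset>
```

The sitemap ties the video to the play page that hosts it, along with the thumbnail and media file, which is what lets a non-YouTube site qualify for a video snippet.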

MALE SPEAKER: I thought that you had stopped showing on the search results the video snippet for non YouTube results. Am I mistaken?

BARUCH LABUNSKI: No, no. It's not there anymore. From what I see, I just see it's strictly YouTube results only.

JOHN MUELLER: So when I search for something like Vimeo, skateboard, I see Vimeo sites. At least for me, it's still working. Is this something you've seen in search results in general? Because maybe something got stuck. Because as far as I know, this should be working with all kinds of video sites.

BARUCH LABUNSKI: No, John, I'm using something like Wistia, and in Wistia, you have an option for the snippet to show. There was also a blog post on that. And it's not showing any longer on those sites. So like, or whatever, it doesn't show. But regular YouTube, or Vimeo, yes, you'll see it. You'll see the snippet-- but not on just regular sites.

JOHN MUELLER: I mean, what we try to do is recognize that the video is really an important element on this page and treat it appropriately. Because if you're in video search, and you search for a type of video, and you land on a page where the video is actually like in the sidebar, and it's like a thumbnail size, then that's kind of a bad user experience. Whereas, if you go to a normal landing page where the video is kind of in front, then that's kind of a thing we'd like to show. And that's similar for image search, for example. So if you're searching for an image of something and we know this is on the page somewhere, but it's actually a real sub, sub, sub element, then maybe that's something we wouldn't show directly.

MALE SPEAKER: John, I just pasted the URL where the only video snippets are on YouTube, at least on my end. And there are other videos in the search results, but they don't get the video snippets.

JOHN MUELLER: I imagine that just depends on what we have there. So when I search that, in the bottom result, I have one from Dailymotion, which is a video snippet.

BARUCH LABUNSKI: If Michael Jackson has a YouTube video on his home page and there was a YouTube result on top of it, just the YouTube snippet would show. The Michael Jackson website would not show.

JOHN MUELLER: I don't know. I haven't seen Michael Jackson's website.

BARUCH LABUNSKI: It just disappeared last year sometime. I can't remember the date.

JOHN MUELLER: This is something that we've always been trying to optimize, in the sense that we're trying to make sure that we don't overload the user with additional information that's essentially the same, but try to recognize which parts are relevant and which parts are irrelevant on the page. So that's something where maybe this kind of plays into that. I'd have to take a look at specific queries to really see exactly what you're seeing there. And maybe there is some bug on our side where we're skipping over video markup on pages that we actually should be recognizing properly.

MALE SPEAKER: Thank you, John.

JOHN MUELLER: A question about the change of address tool. Why is there no option to move a subdomain to a new domain? That's a good question. We've been talking about that with the Webmaster Tools team as well, and it's possible that we can get an update there. So hopefully, that'll be possible at some point.

In Webmaster Tools, could you stop listing, giving warnings about, 404 errors that come from obvious spam and scraper sources? That's good feedback. I don't think that we can just block that out completely at the moment, but that's something we can definitely take a look at and see what we can do there. I don't know if you submitted that to the wish list that we sent out a while back. I will definitely copy that down on my side so that we can file that for later, too.

We just have a few minutes left, but since you guys have been asking so many live questions, let me grab a bunch from the Q&A. Can Google really reproduce a JavaScript-rendered page without the escaped fragment? And how much of a trust decrease is there with the escaped fragment? You can test this in Webmaster Tools with the Fetch & Render tool to see exactly what Googlebot will be able to see. In most of the cases that I've seen, we can get it pretty much right, or we can point at things like JavaScript or CSS files that are blocked by robots.txt. And usually that's the biggest issue that we've run across, where we could actually render the page properly, but because you're blocking your JavaScript or your CSS from being crawled by Googlebot, we can't actually do it. So this is something that's also very relevant for mobile pages, where we'd actually like to be able to recognize that they're mobile friendly, but if you're blocking us from being able to see them as such, then we can't really recognize them as such. And there's no trust decrease or anything whether you're using the escaped fragment or not. Essentially, we're trying to just see what these pages look like to the normal user.
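The robots.txt issue John describes, blocked JavaScript and CSS preventing proper rendering, typically looks like this. A sketch of the problem and the fix, with placeholder paths:

```text
# Problematic -- Googlebot can't fetch the assets needed to
# render the page, so Fetch & Render (and the mobile-friendly
# check) see an incomplete page:
#
#   User-agent: *
#   Disallow: /js/
#   Disallow: /css/

# Fixed -- script and stylesheet paths are crawlable, so the
# page can be rendered the way a user sees it:
User-agent: *
Allow: /js/
Allow: /css/
```

After editing robots.txt, re-running Fetch & Render in Webmaster Tools shows whether Googlebot can now see the page as a normal user does.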
With the announcement of facts being included as a ranking factor, how will Google handle incorrect information online? Because obviously, there's nothing incorrect online. This was just a research paper that some of our researchers did, and not something that we're using in our rankings.


JOHN MUELLER: We have researchers that do fantastic research, that publish tons of papers all the time, and just because they're researching something, and trying to see which options are possible there, or because maybe they even patented something, or created new algorithms, it doesn't mean that it's something that we use in search. So at the moment, this is a research paper, and I think it's interesting seeing the feedback around that paper and the feedback from the online community, from the people who are creating web pages, from the SEOs who are working on promoting these pages, but also from normal web users who are looking at this and saying, well, I don't know how well Google could do this, or maybe this would be good, or maybe this would be bad. But at the moment, this is definitely just a research paper and not something that we're actually including in our searches.

BARUCH LABUNSKI: It's a feedback paper.

JOHN MUELLER: A feedback paper.

BARUCH LABUNSKI: It's just basically just kind of feedback. And is it ever going to happen, but from what you're saying--

JOHN MUELLER: I don't know. If we can recognize what is right and what is wrong on the internet, that has to be some pretty advanced artificial intelligence. I don't know how that could work. But I would totally install that on my phone. If you have a red light, green light: this site is telling the truth, or is trying to trick you. That would be pretty awesome. Who goes first?

JOSH BACHYNSKI: I said, I guess it depends who programs it.

JOHN MUELLER: Well, if it knows exactly the facts, the facts don't depend on who programmed it. I guess opinions are a bit different, but this makes it interesting. You can ask Google, who killed JFK? You'll get some answers, and you'll get a variety of answers, but it's not like there is this one proven thing that is absolutely correct. Maybe there is; I haven't been paying attention to that. But these are the types of questions where there's naturally some disagreement, and it's important for us to bring the various viewpoints into the search results and not just say, well, some of these people are saying this, but these other people think that maybe it's not the case, so we shouldn't show them.

BARUCH LABUNSKI: Did Uri Geller really bend the spoon, yes or no?

JOHN MUELLER: Depends on how you mean how he did it.


JOHN MUELLER: I have to run a bit early. So if you have one really short question without an echo, then I'll grab that, but otherwise I have to head out.

MIHAI APERGHIS: Quick one, John. Regarding Google Hummingbird, to change gears a bit: have there been any significant improvements over the past few years? And does Hummingbird apply equally to international languages as well? Or, since it's based on topicality and semantics, for some languages it might not provide the same effect as for the English language.

JOHN MUELLER: I don't know. So the original version is definitely international, but I don't know how changes over time have affected that. And I know we do work on understanding languages better, so it's possible that that kind of flows into that, but I don't know specifically, around the Hummingbird changes, how our improvements in understanding language have flowed back into that as well.

MIHAI APERGHIS: Right, but we would expect that for international languages it would have a lesser effect than for English, where you have a better understanding of the semantics, the relationships between words, and everything.

JOHN MUELLER: I don't know. Possibly. I know sometimes we were able to handle individual languages a little bit better than others. And sometimes that's frustrating if you're in one of those languages that we don't pick up on so well, but I don't know how that kind of worked in there.

JOSH BACHYNSKI: Maybe we can have a Panda Hangout and then have a Hummingbird Hangout?

JOHN MUELLER: Yeah. Yeah, I don't know.

JOSH BACHYNSKI: Not to make more work for you, John.

JOHN MUELLER: I have to head out. Thank you all for your questions and maybe see you guys again at one of the future Hangouts.


JOHN MUELLER: Bye, everyone.