Reconsideration Requests
18/Dec/2018
 

Google+ Hangouts - Office Hours - 15 July 2014




Transcript Of The Office Hours Hangout
[INAUDIBLE]

JOHN MUELLER: OK, welcome everyone to today's Google Webmaster Central office hours Hangouts. We have a bunch of people here already. Lots of questions already submitted. As always, feel free to ask questions in between or to comment on the questions or the answers or anything. Maybe to get started, let's take a question from you guys. Who wants to start?

MALE SPEAKER 1: Yeah, once there's a page that's a 404, and suppose a client has 10 pages with 404s. How long would it take until you guys clean those pages out? Because I still see them after three weeks. They're still there, like, they're still indexed. A client of mine is really picky about that, so she wants them gone.

JOHN MUELLER: So if these are just a handful of pages, I'd just use the URL removal tool for that. And just get them taken out like that. Usually for 404s or for any kind of page, we essentially just have to recrawl them. And depending on how they're linked within your website, how often we recrawl your website in general, how often we recrawl those pages, it can take a couple of days, a couple of weeks, maybe even a couple of months, sometimes, for those 404s to drop out. So if this is just a handful of pages and you're really picky about these being visible in search, then you could just use the URL removal tool. If there are a lot of pages, like hundreds or thousands of them, I'd just let them drop out naturally, even if it takes a couple of months to get reprocessed and all of that.

MALE SPEAKER 1: OK, and what about random ones? For instance, there's one from a CNN one, but the blog is no longer there, and it's been there over four weeks.

JOHN MUELLER: So on someone else's website?

MALE SPEAKER 1: Yeah. [INAUDIBLE]

JOHN MUELLER: You could use a URL removal tool for that as well, but usually that's not something you'd really have to clean up for other people. So as long as it returns a 404 or if it's blocked by a /robots.txt or it has a noindex on it, you can use a URL removal tool for that.
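For reference, a minimal sketch of what "has a noindex on it" looks like in the page markup, which is one of the conditions John lists for using the URL removal tool; the page and robots.txt path here are invented for the example:

```html
<!-- Hypothetical removed page that is eligible for the URL removal tool. -->
<!-- Option 1: the server simply returns a 404 or 410 status for the URL (configured server-side, not in HTML). -->
<!-- Option 2: the page carries a robots noindex meta tag in its <head>: -->
<!DOCTYPE html>
<html>
  <head>
    <title>Old blog post (no longer published)</title>
    <!-- Tells Google not to keep this page in the index once it is recrawled -->
    <meta name="robots" content="noindex">
  </head>
  <body>
    <p>This article has been removed.</p>
  </body>
</html>
<!-- Option 3: the URL is blocked in robots.txt, e.g. a line like
     Disallow: /old-blog/
     under the relevant User-agent group. -->
```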

MALE SPEAKER 1: OK [INAUDIBLE]

JOHN MUELLER: All right, let's grab another one from you guys. See who's still awake.

MALE SPEAKER 2: I have a question there if you have time, John.

JOHN MUELLER: Sure.

MALE SPEAKER 2: I have a good friend in the SEO industry who tells me-- he runs a very large e-commerce site, and he gets negative SEO attacks all the time, or what he thinks are negative SEO attacks in the form of bad links to his pages. And I didn't think the disavow file worked like this, so I'd like to tell you what he's doing, and you tell me, if you can, whether this is the disavow file working or not. But he says that he sees a certain page start to lose ranking. He checks the backlinks for that page. He sees some bad links pointing to it, or links that he didn't make or doesn't trust, anyway. He puts those links in the disavow file. And he said, like clockwork, three or four days later, the rankings for that page will come back. And he said this has happened about a dozen times, so he doesn't think it's coincidence. Now if this is the way the disavow file works-- I don't know if you can confirm this-- but if this is the way it works, I find that to be an incredibly positive thing. Because it means that webmasters have a chance of defending themselves against some of this negative SEO crap that's going on. So I don't know if you can confirm or deny it, but I didn't think the disavow file worked that way, John. I thought it took much longer.

JOHN MUELLER: Yeah, usually it would take longer. So my guess is maybe they're seeing effects from something else that are just happening there where, maybe, some algorithms are picking up changes and reprocessing them, and it just happens to coincide with that. But in general for the disavow file, we need to recrawl those links, and depending on what the problem was with those links, we need to rerun those algorithms to do that. So if this is something, for instance, with the Penguin algorithm, then it takes a while for us to actually rerun that algorithm. And that's not something where you'd see changes within a couple of days or a couple of weeks. So at least from that point of view, that wouldn't be related to the normal type of issues where we react to problematic links to a page. My guess is that this is just our normal algorithms picking up changes on that website, on those pages, and responding to that. And the disavow file is good in the sense that it'll prevent those problematic links from causing any problems in the long run, but you're probably not going to see any really short term effects like that.

MALE SPEAKER 2: Right. OK, great. Thank you. That's how I thought it worked. Thank you very much.

JOHN MUELLER: So I guess in those cases, our normal algorithms are working fine and just ignoring those negative SEO attacks there. And the fluctuations they are seeing with those individual URLs are just normal fluctuations from algorithms in general.

MALE SPEAKER 2: Great. Thank you.

JOHN MUELLER: All right, let's grab some questions from the Q&A here. "If Google has decided not to use the Penguin algorithm ever again, would Google rerun it one last time to help the affected sites that have cleaned up, or would the Penguin-hit sites be affected forever?" As far as I know, we're not retiring any of these algorithms, and when we do retire them, we generally remove them completely so that any effects that were in place from those algorithms won't be in effect anymore. So it's not that we turn off this algorithm and keep whatever data was in there forever. It's essentially something where if we decide to turn off the algorithm, we will remove the data that's associated with it. So it's not the case that stuff gets stuck forever.

MALE SPEAKER 1: Well, there's rumors out there that it's not coming back.

JOHN MUELLER: Oh, there are lots of rumors out there. Come on. You know that. This is something where we tend to announce things when they're ready, and we tend to put them up when they're ready. And sometimes it takes a little bit longer than we expect. So that's, I think, the case here with the Penguin algorithm. It's not that we're turning it off or that we're leaving it in this state forever, it's essentially just taking a bit longer for us to update that data. So from that point of view, I wouldn't believe any random rumors that people are just making up based on the current situation without actually having any information from our side. But I think that's normal in the SEO industry, to some extent. And I think that's something that we have to work on as well, from Google's point of view, in the sense that if people are making up random rumors and they're making decisions based on those rumors, then that's partially our problem as well because we need to respond to those kind of questions.

BARRY SCHWARTZ: John, speaking about rumors--

MALE SPEAKER 1: --follow your Twitter there, the webmaster Twitter. Are you guys going to be more active on that?

JOHN MUELLER: The webmaster Twitter account, we are a bit more active on that. We have started doing a lot more on the Google+ page, the Google Webmasters page. Those are essentially our normal platforms where we're bringing out a lot of this information. If there's specific topics that you think we need to be more active about, feel free to let us know about that.

BARRY SCHWARTZ: John, speaking about rumors and stuff like that, now that Matt Cutts is no longer around for the short, four month period or so, who do I or we ask about when there are specific updates? Who is taking over that role at Google to tell us about updates. Can you tell us who's the person there? Is it you since you took a hit for the authorship or is it going to be somebody else?

MALE SPEAKER 1: John Mueller.

JOHN MUELLER: Probably a combination of people. So that's something where, depending on who's working with the teams involved, they'll tend to get that information out there. It might be one of us like me, Pierre, Gary, [? Zinna, ?] Maria, who were working kind of directly with the Webmaster Tools team and with the webmaster outreach team on those kind of issues. But we also have people in Mountain View who are working on a lot of these things, who are running the Twitter account, the Google+ pages, and those are channels as well where we're going to be bringing out this information.

BARRY SCHWARTZ: So there seemed to be something going on June 28 and possibly also July 5. Are you aware of stuff Google is working on that you could say? A lot of people were saying it was Panda refreshes and stuff like that. Could you comment about that, or are you just not aware?

JOHN MUELLER: I'd have to double check what's happening on those specific dates. We make updates all the time, so it's sometimes kind of tricky to figure out, OK, webmasters are noticing this specific update and not these five other ones that we did on those specific days. So that's kind of tricky. But if you have examples that we can look at where we're doing things wrong or where the rankings don't look like they should be, then that's always something useful for us to take back to the teams and figure out what exactly is happening there and what we could be doing better.

BARRY SCHWARTZ: So just to step back a little-- sorry to take up-- in the past, I got the impression that you didn't have the level of access or security clearance to know certain things about what's in the works in terms of certain algorithms or maybe you were told x months or x days before something was being released to prepare you. But now are you more involved in certain things that are rolling out every so often, in terms of algorithms, since Matt Cutts is on a leave, or doesn't it work that way?

JOHN MUELLER: Partially. A lot of the things Matt has been doing, we kind of have to take up that role and start to do those things as well. And a lot of the things he's been doing have already been done by other people anyway, so it's kind of a mixture of both sides.

BARRY SCHWARTZ: OK. Thank you.

JOHN MUELLER: All right. "If we own multiple sites, some educational, some e-commerce, should banners for the e-commerce sites on the educational sites that we also own be nofollowed? Does it matter if they're followed, nofollowed, or site-wide, not site-wide?" In general, I'd see this more as a question of: are these essentially advertisements on those pages, or are they natural links within the content there? And if they're essentially advertising, then I'd treat that like any other advertising, even if that's for your own sites. So in those cases, I'd definitely put the nofollow on there. If this is just a natural link within your content and it happens to be to another site that you own, then that's fine. That's like any other natural link in your content.

"Why does Google index pages that don't actually appear on my website? For example, other websites create an incorrect link to one of my articles, and it gets indexed by Google." I'd probably have to take a look at the examples to see what exactly is happening there. My guess is that either your website is actually serving content for those URLs, or maybe those URLs are blocked by the robots.txt file so that we can't actually tell that those pages don't exist. And in both of those cases, that's kind of suboptimal on your website because essentially any URL could be indexed in a case like that. So what I'd recommend doing there is making sure that if these URLs don't exist, they really return a 404 and they can't get indexed like that.

"Will Google still run large updates such as Penguin even though Matt Cutts is on leave through October?" Yes, absolutely. Matt doesn't have to do everything, and I think if we had one person who ran all of our algorithms, then that would be kind of bad. So these are algorithms that other people have been working on even back when Matt was still active here. So that's not something that gets paused or gets stopped until he gets back. These updates will continue running. We'll continue putting out new updates, new algorithms in the search results and, hopefully, improve your search results by doing that.
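To illustrate the banner answer above, a minimal sketch of a cross-site banner treated as advertising; the domain names are made up for the example:

```html
<!-- Banner on the educational site pointing to the e-commerce site we also own. -->
<!-- Treated like any other ad, so the link does not pass PageRank: -->
<a href="https://shop.example.com/" rel="nofollow">
  <img src="/banners/shop-promo.png" alt="Visit our online shop">
</a>

<!-- A natural, editorial link inside an article can stay followed: -->
<p>We cover this topic in more depth in
  <a href="https://shop.example.com/guides/red-cars">our buyer's guide</a>.</p>
```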

MALE SPEAKER 1: But the effect won't be like the one last year, right? Like the really shaky effect that was-- Would this just be an effect that would not be as shaky as last year?

JOHN MUELLER: Some algorithms are more visible than others, so it's not that we'd say, since Matt is on leave, we'll just run everything-- make small updates and tweaks instead of bigger changes. We've always had to make bigger changes from time to time. Sometimes even the smaller changes look like bigger changes because maybe external ranking tools are testing it in a really weird, skewed way that makes it look like a really big change, but actually, it's kind of a small change. And sometimes really big changes roll out on our side that external tools don't even notice, like all of the Hummingbird stuff, for example, where we thought this was a pretty significant change, and a lot of these external ranking, search tracking tools, they didn't even realize that we rolled it out.

BARRY SCHWARTZ: I noticed, and you guys denied it.

JOHN MUELLER: All right, Barry notices everything. I think it's hard to hide anything from you.

MALE SPEAKER 1: Even the logo that you changed, he updated right away on his headlines. I didn't notice the logo change. It was like one millimeter or something like that.

JOHN MUELLER: Yeah.

BARRY SCHWARTZ: Can we talk about the Sandbox?

JOHN MUELLER: The Sandbox, OK. Go ahead. What's on your mind?

BARRY SCHWARTZ: --I've been seeing a lot of chatter, especially in the black hat forums, about these webmasters who are "black hats" that say they used to "churn and burn," meaning create websites, Google smacks them a month later, and then they try again. They're saying it's much harder for them to get their rankings in the normal amount of time-- and I don't know what the normal amount of time is. This reminds me of the original Google Sandbox, which I don't know what you guys named it back in 2004. I don't think you were with Google in 2004. But I think it kind of reminded me of that. They said this was released a little after Panda 4.0, where it's much harder for them to actually start ranking well for these really quick churn and burn, black hat types of websites. Do you have any comments about that?

JOHN MUELLER: That sounds like a good thing to hear, I guess.

BARRY SCHWARTZ: No, it is, but--

JOHN MUELLER: I don't know specifically what they're seeing there, but I do know that we have various algorithms to try to recognize these kind of situations where sites kind of pop up really quickly and they get really high visibility in search results, and we need to take action on them manually because they're doing things quickly. And those are the kind of things that we try to recognize with our algorithms as well, so maybe they're just seeing some updates to some of those algorithms that are causing that. But I think--

BARRY SCHWARTZ: Were there some updates in algorithms related to the Sandbox-y stuff?

JOHN MUELLER: I'd have to double check when specifically, but these are the type of things that we work on as well. And it's definitely possible that we rolled out one of those recently.

BARRY SCHWARTZ: OK. I'll touch base with you and find out if you could give me anything on-record. Thanks.

JOHN MUELLER: Sure. "Many high-value local searches are still dominated by sites that create hundreds of service-in-placename.html URLs with thin or spun text. How should small local businesses compete against this in the organic results without adding to the spam?" I think it's good to avoid creating these kind of doorway pages because that's essentially something that's going to be a long-term liability, even if they show up in the search results at the moment. So that's something where I'd strongly recommend making sure you're sticking to the guidelines, you're not falling into this trap of creating doorway pages. At the same time, if you're seeing that these kind of spammy pages are ranking, submitting a spam report is really useful for us. Or if you see that this is something systematic which you can show with some very generic searches, then that's something you can also send to me directly, and I can take a look at it with the search quality teams here. But essentially, this is something where we try to recognize these situations, and we try to respond appropriately. Sometimes, for example, our high-quality-sites algorithms will recognize this and say, hey, there are hundreds of pages on this website, and they're all kind of similar, and they're all really low quality, so maybe we shouldn't be trusting this website in general that much anymore. So we have a bunch of algorithms that try to recognize and work with these kind of issues, but again, if you see that these are still happening, you're welcome to forward those on to us. I can't promise that we can fix every one of these immediately, but it's definitely useful feedback for us to bring back to the engineers so that we can work on creating algorithms that handle these situations a little bit better.

"My page has been stuck on page two in the coupons niche. Occasionally, it bounces to page one, and it's a very spammy niche that will get sites on page one. Is there anything I can do to help it move up, or could being stuck on page two be an algorithmic penalty for me?" I don't think we have any algorithmic penalties that stick websites to page two. We do have various algorithms that look at the quality of these websites, though. So specifically, if you're saying that your website is in the coupons niche, there are a lot of really low-quality type sites that we see in those kind of niches that essentially just take feeds, and they re-publish them without adding any additional value to those pages. So that's something where I'd push really strongly to make sure that there's something really unique and compelling on your website that's not just the usual feeds that these kind of sites re-publish. And that's something where I imagine it's going to take a bit of time for both users to recognize the high-quality nature of your site and for our algorithms to pick up on that as well. So I think I'd just, in general, recommend that you keep working on that and making sure your website is really the highest quality possible and not just re-published information from various feeds.

Let's see. "I decided to migrate a website with a history of search quality issues to a different domain, but the domain I want to use has been 301 redirected to a domain we're abandoning. Is this a problem? Should I pick an entirely new domain to be safe?" That sounds like a weird circular situation. In general, that should work. At least from a technical point of view, that shouldn't be that much of a problem. It might take a little bit longer for us to recognize this kind of new situation, where one site was redirecting to the other and now that other site is redirecting to that first one. So that's something where, from a technical point of view, it might take a bit of time for us to actually pick up on that. With regards to the search quality issues that you mentioned there, if you 301 redirect the old site to a new domain, you have to keep in mind that some of those problematic issues with regards to search quality might be forwarded to the new domain as well. For example, if you have a history of problematic linking to that old domain, and you just 301 redirect it to a new domain, then all of those problematic links are essentially forwarded as well. So those kind of issues essentially follow your website along if you just redirect them. On the other hand, if it was more a question of the quality of the content, and you've revamped the content completely, and you're just moving to a different domain for the branding side of things, then that's generally fine. I wouldn't try to use the 301 redirect as a way to get out of search quality issues without actually fixing those issues first. And if you fix those issues, essentially, you can also stay on that domain as well because that works almost just as well as moving to a different domain. In that regard, I'd recommend just making sure you have the search quality issues fixed completely. And then, the move from one domain to another is essentially more a technical issue than anything related to search quality.

MALE SPEAKER 3: Hey there, John.

JOHN MUELLER: Hi.

MALE SPEAKER 3: Just to kind of add to that, if somebody happens to be in that same situation we were in, we actually had the .co.uk website 301 redirecting to our .com when we decided to make the changes we did. Our .co.uk was put up as a standalone site, and it worked very well. So if this person actually has cleaned up all their problems and is simply just waiting for a Penguin refresh, but they're not willing to wait any longer because, as we discussed, we don't know when it's going to happen, then it could be a very positive move for them and shouldn't be an issue.

JOHN MUELLER: But I think in your case, it wasn't that you moved to the other site, you essentially built up a new website on the co.uk. Right?

MALE SPEAKER 3: Well, right. It's the same content, essentially, with pricing in a different currency and using the hreflang. So what we discovered was really there wasn't anything wrong with our content, and actually, it was Penguin simply holding us back, even though we cleaned up all of our link problems. The only problem we had was the fact that we were waiting for a Penguin refresh because our .com site is still nowhere, and our .co.uk site is still-- we just hit page one from [? HTTP ?] web again, after four and a half years. So we are simply going through these hoops waiting for a Penguin refresh. Now it's been nine months, I think, or whatever it is. There's probably a lot of people who have cleaned up their problems and are simply just waiting for a Penguin refresh. Another two or three, four months out of business for people can cripple them. If we hadn't done this move, I may not be here today in my position still. So it's pretty vital.

JOHN MUELLER: Yeah, I know. It's something we're also talking with engineers about to see what we can do to kind of speed that process up because that's really frustrating in a case like yours where you actually spend so much time to clean those issues up and you're waiting for things to be processed and being updated again. And I think in your case, that was a really great move with the hreflang because you really have UK specific content. And that's, actually, a great fit for your website, but that might not work for all websites. And that's something I think we need to work on to kind of speed things up a little bit, as well.
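As a side note, a minimal sketch of the kind of hreflang annotation being described for a .com/.co.uk pair; the domains and URLs are invented for the example:

```html
<!-- On https://www.example.com/virtual-offices (the default/US version) -->
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/virtual-offices">
<link rel="alternate" hreflang="en-us" href="https://www.example.com/virtual-offices">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/virtual-offices">

<!-- The .co.uk page carries the same set of annotations, so the two versions
     reference each other and Google can show the right one per country
     instead of treating them as duplicates. -->
```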

MALE SPEAKER 3: While we're talking, can I ask you a quick question regarding-- Now that we're actually getting traffic and we can do business and all that kind of stuff, I've noticed that a lot's changed in a few years, and we're not seeing the keywords that people are arriving on our site with. So they're searching for Virtual Office London, and in our statistics we're seeing the effect of the changes Google made in 2010 that hide the keywords. And what we wanted, or what we've tried to do, is if somebody types "office space London cheap," we want to put the listings that are cheaper at the top, but we can't do that because the keywords are being blocked. Is there a solution, and why is it being done?

JOHN MUELLER: Not for those kind of situations. So it's not something you could do on the fly as the users are coming to your website, but you do see the aggregated information in Webmaster Tools, just a couple of days delayed there. So that's something you could track through Webmaster Tools, but you can't respond to it on the fly. And there are various reasons why we started doing that. I believe we're starting to do that for ads as well now. So this is something where we think it makes sense to pass a generic referrer along instead of the full referrer that we used to have there.

MALE SPEAKER 3: I think it's counter-intuitive, to be honest. I think there's a lot of situations where more data is better, and the only downside that I can see is that potentially, Google wants to protect and hoard this information. That's how I see it from my point of view. Whereas if I could maybe even plug into Google Analytics live, as it's happening, to get that protected information, that would be a useful way. But there should be some way that we can actually get a hold of that information so that we can tailor the search directly to the customer. I mean, we've talked about Hummingbird and having too big a logo at the top because we don't want customers to scroll. If I can deliver the right content to the top, the customers won't scroll. Surely, I'm preempting what my customer wants. It's got to be the best solution.

JOHN MUELLER: Yeah, we saw a lot of really spammy sites that were doing that kind of thing in a spammy way. So I think in your case, that might make sense, if we're sending users of various, let's say, interests to your same pages. But we've seen a lot of situations as well where sites were doing really spammy things there. And the primary reason we're not showing the referrer is also for privacy reasons, so that this information doesn't get forwarded directly unencrypted, and that it's something that essentially, the user has for themselves, and it's not something that we forward along. So I think that's probably not going to change that quickly. And even having some kind of a back door through Analytics where you could see this information live, I doubt that's going to happen. I could imagine that we might be able to get the data in Webmaster Tools a little bit faster so that it's maybe just, let's say, a couple of hours or a day behind instead of the, I believe, three days that it is at the moment. But I don't see us changing to sending the full referrer anymore or to showing this data live through Analytics or some other kind of back channel.

MALE SPEAKER 3: Yep. Thanks, John.

JOHN MUELLER: I spent a lot of time looking at my log files as well before I started at Google, and it was always interesting to see where, exactly, people were coming from. But I think from a privacy point of view, it makes sense to take this step and to really make sure that the data is as secure as it could be. So that's something where I don't see us changing any time soon, and I don't see that data coming back, to be honest. But maybe there are ways we can make this a little bit faster, at least in Webmaster Tools in an aggregated way, to at least give you faster feedback on how people are finding your site.

MALE SPEAKER 3: Yeah. I was more concerned about being reactive and delivering exactly what my customer wants. It's one of those cases of getting rid of something great to deal with a problem with abuse. It kind of seems like the wrong action to take in that sense. And if somebody's landing on my site and they're landing on office space in Birmingham, I already roughly know what they were searching for in the first place. Whether or not they're using the word "cheap" or "discount" is irrelevant from the point of privacy, as far as I'm concerned. So it seems a bit of a moot point.

JOHN MUELLER: Yeah, I imagine with some sites it's more of a problem than with other sites. But we've had to take that step, and I really don't see that coming back, to be honest. So if that's something you were waiting for, I'd try to find other ways to maybe do that. Maybe create a separate page for low-price virtual offices or those different attributes that you're looking at there and see if that works for--

MALE SPEAKER 3: --if I was to do that I would end up with a Panda issue. And so--

JOHN MUELLER: It kind of depends.

MALE SPEAKER 3: --I just want to reorder it, and I want to put a nofollow, noindex so it won't get indexed. So it's a circular problem, unfortunately, that I think all of these things have all been built separately without that very idea in mind about how it would actually be very useful if it was utilized properly and not made simply on the basis that, oh, somebody's come to my page; I'll create a page based on that content that's going to be dynamic and full of a bunch of garbage, which is obviously what people did for a long, long time. And I understand that, but I think there's some very good uses to it.

JOHN MUELLER: Yeah.

MALE SPEAKER 3: Anyway, it's something to think about if it gets discussed that there are practical uses.

JOHN MUELLER: Sure.

MALE SPEAKER 3: Thanks, John.

JOHN MUELLER: All right, "I've noticed some disconnect between Webmaster Tools and the emails. I've seen some penalties appear, disappear, and reappear, yet the emails stated that there were no manual actions. Have there been some glitches in the past three weeks, as this is unusual?" That shouldn't be happening. So essentially, what sometimes happens is that there's a slight timing problem between when the emails are sent out and when the data is visible in Webmaster Tools. I believe the Webmaster Tools data for manual actions updates maybe twice a day, something around that range, and emails we might send out once a day. So there's this timing issue there where maybe you'll get an email and it'll already have been visible in Webmaster Tools slightly beforehand. So it's not that there's this one-second interval where we send out the email and show the data in Webmaster Tools. It's sometimes slightly staged there. But it shouldn't be the case that things pop up and disappear and reappear and emails come randomly, and you don't see any of that in Webmaster Tools. That definitely shouldn't be the case.

One thing I've seen where some webmasters were confused is when manual actions expire. That's something that happens essentially with all of our manual actions, in the sense that at some point, we think it makes sense to expire this manual action because maybe the webmaster has cleaned it up and just hasn't gone through Webmaster Tools to let us know about that. And usually that's in the range of a couple of years, something around that kind of a time frame. And when these manual actions expire, essentially, they're no longer visible in Webmaster Tools, but, as far as I know, we don't send out any specific email to tell you about that. So what might happen is that at some point you get an email saying, hey, you have a manual action. You look in Webmaster Tools, it shows a manual action. And a couple of years later, you look in Webmaster Tools again, and it doesn't show the manual action anymore. But that's not something that would happen from one day to the next, like come and go, come and go again. It's something that would probably take a couple of months, a couple of years to actually drop out of Webmaster Tools.

BARRY SCHWARTZ: John, Ashley has a question. Her bandwidth's not so good, so she asked me to ask.

JOHN MUELLER: All right.

ASHLEY BERMAN HALE: Wait, Barry. I'm going to try again. See if it works. Can you guys hear me now?

JOHN MUELLER: Yes.

ASHLEY BERMAN HALE: I know, I couldn't let Barry get the spotlight the whole time. So if a webmaster follows all directions in a site move, 301s, change of address tool, but Google keeps showing the non-preferred version months after the redirects, what can the webmaster do or how can they change it? Or is it a trust issue with the new domain?

JOHN MUELLER: How are you seeing the old domain? Is that if you're doing a site query?

ASHLEY BERMAN HALE: A site query, or you're Googling. It looks like you moved from a longer domain to a shorter .com domain. You bought a new domain. You wanted a snappier brand, but the longer one keeps showing. URL structure is entirely the same, 301 redirects are good, no blocking, change of address tool used, but Google just keeps sticking to the old one, let's say, six months after the site move. What should a webmaster do?

JOHN MUELLER: Post in the help forum. Essentially, that's something we probably want to look at. So posting in the forum or posting to me directly, that's something you could do to let us know about that so we can take a look. One thing to keep in mind: if you're doing a site query for the old domain, then sometimes we'll just show the old domain content anyway because we think, oh, you're looking for this specific URL on your site. And even if we know that it's actually moved to a new one, we'll say, well, we know this content used to be on this URL, so we'll show it to you because we think it matches what you're looking for. So essentially, we're trying to show you the information that you're looking for, and in your case, you're actually trying to confirm that it's not there anymore. So the site query would probably be a little bit not so helpful in a case like that. The other thing to keep in mind is that for larger sites, there will always be some pages that just take a really long time to be recrawled. So if you do a site query, probably the home page, the main pages, will move over fairly quickly, but there might be some long-tail pages there that we just don't crawl that frequently. And maybe it'll take half a year, maybe even a year, for us to actually recrawl and reprocess all of those pages and see those moves. So that's something you might see in the site query: there are these really long-tail pages that are essentially just stuck on the old domain that take a long time to be reprocessed.

ASHLEY BERMAN HALE: So if Google is just really tenacious about showing the old, unpreferred version, there's nothing you can do beyond all the signals, but just wait and ping you, ping the forum?

JOHN MUELLER: Yeah, that's something that should pick up automatically at some point. Sometimes there are algorithms on our side that are trying to pick the right URL, where we see there's a 301 redirect, but we see all of the links pointing to the old version, for example. And we think, well, everyone keeps linking to this old version. It must be the preferred version, even if they're redirecting somewhere else. So really make sure that all the signals are telling us that the new one is really the right one to use: the 301, maybe a rel=canonical, updating the old links if you can contact the people that are linking to the site. All that adds up for us. But if we're really picky, and we just keep sticking to the old one, then it sounds like that's something our algorithms should be recognizing a little bit better and something that we can talk to the engineers here about to make sure that we're doing the right thing instead of sticking to an old domain that you don't want to have visible in search.
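For reference, a minimal sketch of the on-page signals John lists for a site move; olddomain.example and newdomain.example are placeholder names:

```html
<!-- The old page (https://olddomain.example/page) already sends a
     server-side 301 redirect to https://newdomain.example/page. -->

<!-- On the new page, a self-referential canonical reinforces that this
     URL is the one to index: -->
<link rel="canonical" href="https://newdomain.example/page">

<!-- And wherever you can get external links updated, point them at the
     new URL directly instead of relying on the redirect: -->
<a href="https://newdomain.example/page">Example article</a>
```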

ASHLEY BERMAN HALE: Thank you.

JOHN MUELLER: All right, "According to a crawling tool, my website has duplicate content issues because my content pages are being served with HTTP and HTTPS protocols, which I don't redirect to each other. Should I be worried about that?" No, you definitely don't need to be worried about that. From our point of view, that's more of a technical issue. That's something where we always have to deal with this kind of duplicate content, in that we crawl different URLs, we see the same content, and we have to handle that on our side. So that's not something where you'd see any kind of a demotion or any kind of a penalty because of that. That's essentially a technical problem. What you could do is use something like the rel=canonical. You could set up a redirect if you have a preferred version, but you don't necessarily need to do that all the time. We can live with this situation. What will happen in this kind of a situation that might be negative for your website is that we'll be recrawling both of these versions more regularly, and maybe your server has a problem with that extra load. Whereas with two copies of the same page, that's probably not so much of a problem. But if you have session IDs in there as well, if you have different URL parameters that all lead to the same content, then that's something that can add up, where for every page that you have on your website, there are actually 10 or 100 different URLs that we have to crawl to reprocess that. And it sometimes gets to the point where we have to spend a lot of time recrawling duplicates, and we don't pick up your new content as quickly. Or we crawl the website so frequently that it actually causes problems on your server, where we're slowing your server down for all your normal users because we're crawling all of these duplicates all the time. But if you're just seeing this kind of an issue with HTTP and HTTPS, then that's not something where we usually see any kind of a technical problem with that. That's essentially something we just have to solve on our side, where we say, oh, we see the same content on these two pages, we have to pick one of these and show it in the search results. We're not going to demote a website because of that. We're not going to penalize the site. It's just that we have to pick one of these pages, and if we don't know which one to pick, we might pick one that you don't want to have shown. So if you have a specific opinion on which one we should be showing, tell us about that. If you don't mind that we just pick one or the other, then that's fine too. We'll pick one.

"If we were penalized because of backlinks in a widget-- users posted our widgets on their websites-- is it enough if we add a nofollow to all the links, or do we have to completely delete such links?" From our point of view, if you put a nofollow on those links or if you disavow them or if you delete those links, it's all essentially the same in that we have to recrawl those pages. We'll see that the link is no longer passing PageRank, and that's fine for us. So putting a nofollow there is absolutely fine. Just keep in mind that we have to recrawl those pages to actually see those changes, so that might take a bit of time to actually be completely reprocessed, regardless of which solution you pick there.
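Two tiny sketches of what those answers point at; the URLs and widget markup are placeholders, not from the question:

```html
<!-- Telling Google which protocol version you prefer: on the HTTP copy
     (and on the HTTPS copy as well), point rel=canonical at the version
     you want shown. A server-side redirect would also work. -->
<link rel="canonical" href="https://www.example.com/coupons/">

<!-- A widget embed snippet whose credit link no longer passes PageRank: -->
<div class="weather-widget">
  <!-- widget markup ... -->
  <a href="https://widget-provider.example/" rel="nofollow">Widget by Example Provider</a>
</div>
```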

MALE SPEAKER 2: John?

JOHN MUELLER: Yes.

MALE SPEAKER 2: I have an interesting question, if you don't mind if I could jump in.

JOHN MUELLER: Sure

MALE SPEAKER 2: Great. I don't know if you can see it, though. I'm putting it in the chat. Oh, it doesn't let me do it. I thought you could just drag images to the chat. I can get an image for this if you want, but I have a site that got hit by Panda 4. And among a number of things, it's aggregating some content, so I imagine that might have something to do with it. But among other things, it's got Disqus, the commenting system, installed at the bottom. I asked you about this on Google+, but I don't think I was very clear. And what's happening is Disqus is not blocking Googlebot from crawling with robots.txt, but it's just timing out, it looks like. And so the net effect, however, looks like they're blocking JS, and it is affecting the design quite a bit because the 10 or 20 comments that would usually show up at the bottom there aren't showing up. And I know that Google likes to check out those comments to make sure that everything is of a high quality as well. So I was worried that could possibly be causing some problems. What do you think?

JOHN MUELLER: That shouldn't be causing that much of a problem there. I think the main issue you'd be seeing there is if we can't crawl these comments, if we can't pick them up for indexing, we won't be able to rank those pages for those comments. For instance, sometimes we have situations where there's a lot of text on a very technical topic in a blog post, and the comments describe it in a more colloquial way and discuss the issue in slightly different words. And in cases like that, those comments are really useful for us to understand that this page is actually about this topic. And if we can't get to those comments, then it's going to be a bit harder for us to rank that page appropriately. So it's more a matter of finding content on those pages and showing those in search appropriately than anything that we'd see from a quality point of view. And there are sometimes legitimate reasons to have blocks of text on a page that are blocked with the /robots.txt file. Maybe they're parts of a page you don't want to have indexed like that. Maybe there's some content there that you're not allowed to have indexed like that for licensing reasons, and you put it in an iframe, and the iframe content is blocked by the robots.txt file, for example. So there are legitimate reasons to do that, and that's not something where we'd say, from a quality point of view, that would be causing any problems. But at the same time, if you're saying that these comments are timing out for Googlebot, then maybe they're timing out for users as well, and that's something to watch out for because that could be degrading the user experience in general. But just because they're roboted out or not visible at all to Googlebot doesn't mean the site or the page itself is lower quality, it's just that we can't use that content to show that page in the search results.
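A minimal sketch of the roboted-iframe pattern John describes, with made-up paths:

```html
<!-- The embedding page stays indexable, but the licensed block lives in an
     iframe whose URL is disallowed for crawling: -->
<iframe src="/licensed/market-data.html" title="Licensed market data"></iframe>

<!-- In robots.txt (served at the site root), something like:
     User-agent: *
     Disallow: /licensed/
     keeps Googlebot from fetching the iframe content, so that block simply
     isn't used when ranking the embedding page. -->
```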

MALE SPEAKER 2: OK, so that wouldn't be considered cloaking at all?

JOHN MUELLER: Not really, no.

MALE SPEAKER 2: OK. Great.

JOHN MUELLER: All right, here is a question about how Google handles AJAX URLs. "I've noticed that URLs are being indexed without using the hashbang method. Looking at the site, it does have a meta fragment. Is that the reason that we no longer need the hashbang?" So if the page has the fragment meta tag in there, then essentially, that's telling us we should be using the hashbang, the AJAX crawling scheme, to crawl and index those pages. So we'll try to crawl it with the ?_escaped_fragment_=whatever parameters and try to pick them up like that. At the same time, we are doing more and more to actually understand JavaScript and to understand how these pages look when they're being rendered. So sometimes it can happen that even if a JavaScript-heavy page isn't using the hashbang method, we'll actually be able to pick up some of the content there. So I imagine going forward that's something that's going to become more relevant, in that we'll be able to pick up more of this JavaScript content directly without having to go through the AJAX crawling scheme. But at the moment, you could still use the AJAX crawling scheme. It's not that we've deprecated it or turned it off or anything like that.
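For context, a minimal sketch of the (since-deprecated) AJAX crawling scheme being discussed; the URLs are illustrative:

```html
<!-- On a JavaScript-rendered page without a #! in its URL, this meta tag
     opts the page into the AJAX crawling scheme: -->
<meta name="fragment" content="!">

<!-- Googlebot then requests an HTML snapshot of the same page at:
     https://www.example.com/products?_escaped_fragment_=
     and indexes that snapshot under the original URL.

     Hashbang URLs work the same way: a URL like
     https://www.example.com/#!/products
     is fetched as
     https://www.example.com/?_escaped_fragment_=/products -->
```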

BARRY SCHWARTZ: Why do you hate Flash?

JOHN MUELLER: Umm-- Umm-- we don't hate Flash. [INTERPOSING VOICES]

JOHN MUELLER: I think you're referring to the mobile stuff that we did recently. It's not that we hate Flash. We still crawl and index Flash content for web search normally. It's just that if you're using a smartphone, chances are you won't be able to look at the Flash content there. And if your website is Flash-based and we send smartphone users there, they're going to be a bit frustrated because they're not going to see anything. So it's not that we hate Flash, it's just that it doesn't seem to be supported that well among the smartphones at the moment.

MALE SPEAKER 1: Hopefully, they get there, they shake hands and they agree, and they'll work together.

JOHN MUELLER: Yeah, this is something where things have moved on a little bit, and if your website is based completely on Flash, then you're essentially blocking everyone who's using a smartphone from being able to access your content. And that's also a really bad user experience from our point of view, in that if we point smartphone users to your pages and we know ahead of time that they won't be able to see anything there, then that's something that we should avoid doing. Usually, what happens when we look at the feedback for search, in general, is that people will say, well, Google sent me to this page that's absolutely empty. It's Google's fault that I can't read this content. And this is something where we kind of have to take that into our own hands and find a way to handle that better in the search results so that users who are using a smartphone are actually happy to use our search and not stuck on sites that they can't actually use.

MALE SPEAKER 1: Do faulty redirects have-- Let's say that you have 10, 15 faulty redirects. Does it have a serious impact on rankings over time? For instance, let's say you have faulty redirects for a month or two, and then an algorithm comes by. Will I get impacted by that if I didn't fix those faulty redirects on my smartphone?

JOHN MUELLER: So faulty redirects would be when you access the desktop URL, and it redirects you to the mobile home page, for example, or to another page--

MALE SPEAKER 1: Correct, [INAUDIBLE] it still hasn't been fixed because I think Pierre did say in a blog that it does. But I'm not [INAUDIBLE].

JOHN MUELLER: Yeah, so essentially, that's one of the category of problems where we see that smartphone users essentially aren't able to get to the content that they'd like to see. So that's something we're looking at, for example, showing them lower in the search results, maybe labeling the search results to tell users that maybe they won't see the content they're actually going to be clicking on, those kind of things. So that's not something that we do on a site-wide basis, it's really on a per-URL basis. Because we've seen some sites do it for a part of their site, where one part works really fine on mobile and another part has these weird, broken redirects. So we do that on a per-URL basis, and it's not that the website itself would be demoted, it's just those individual pages we'd like to show a little bit lower in search.

MALE SPEAKER 1: Right, and so once it's fixed, then they'll pop back up?

JOHN MUELLER: Yeah, definitely. And this is something that's only visible in the smartphone search results. So it's not that it would affect your desktop search results, it's really just for the smartphone users. When we can recognize that they won't be able to use this content, we'll try to take action on that. So at the moment, we're at the stage where we're kind of blocking out these individual issues where the content is completely blocked for smartphone users and telling those smartphone users about it, maybe showing them a little bit lower in search, those kind of things, to make it easier for smartphone users to actually get to content that they can actually look at.

MALE SPEAKER 1: But if Page A has, let's say, 700 words, and in mobile the customer is totally different. Right? So it's not necessary to have all the 700 words in that one page. Right?

JOHN MUELLER: Yeah, so from our point of view, if you link your desktop pages to a mobile page, then we expect that the content is equivalent. It doesn't mean that the content has to be exactly the same. The layout can be completely different. Obviously, often it is completely different, like the sidebar is missing, the menu structure might be slightly different. But the primary content should be equivalent. So if you're looking at a desktop page about red cars, then the mobile page should be about red cars and not about green bicycles, for example.
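For reference, a minimal sketch of how a desktop page is typically linked to its separate mobile version; the www/m.example.com URLs are placeholders:

```html
<!-- On the desktop page https://www.example.com/red-cars -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/red-cars">

<!-- On the mobile page https://m.example.com/red-cars -->
<link rel="canonical" href="https://www.example.com/red-cars">

<!-- The mobile page can be lighter (e.g. comments behind a link),
     but its primary content should stay equivalent to the desktop page. -->
```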

MALE SPEAKER 1: But the content can be less.

JOHN MUELLER: Sure. Yeah. We see that a lot, for example, with the sites that kind of move the comments away from the mobile version where you have to click on a link to actually get to the comments. But on a desktop version, the comments are completely visible, and that helps speed up the mobile version of that page. So from our point of view, that's completely fine.

MALE SPEAKER 1: OK. Maybe there should be another blog post about that, John, because I still see sites where the mobile version has like 1,000 words. And the "call us now" is all the way towards the end. That's crazy. You know?

JOHN MUELLER: Well, I'd say that's kind of their problem in the sense that we're showing the search results as we think they're relevant. But if the call to action is completely hidden on mobile, that's essentially their problem and something that they should be working on to improve the general mobile user experience there. At the moment, we're not ranking the search results based on the mobile friendliness, apart from issues where mobile users can't access the content at all. It might be that at some point in the future, we do start ranking search results for mobile slightly differently by taking into account a little bit more of the mobile friendliness and kind of bringing the more mobile-friendly sites a little bit higher. But at the moment, there are just so many sites that are doing it completely wrong, they're blocking mobile users completely, that we think it makes more sense to focus on this very obvious situation of sites that are essentially inaccessible on mobile.

MALE SPEAKER 1: OK. Thank you.

MALE SPEAKER 4: John, would you mind if I ask a question?

JOHN MUELLER: Sure. Go ahead.

MALE SPEAKER 4: For full disclosure, this is on behalf of someone else, and you may recognize it as soon as I post you the link. In this search result, there's what essentially looks like a Google Answer box. I just put it into chat.

JOHN MUELLER: OK.

MALE SPEAKER 4: And I think you'll recognize who it is that I'm asking the question on behalf of. What is it, essentially, and is there room for abuse from brands when it looks like someone's asking a generic question? Because they're getting a branded answer.

JOHN MUELLER: Yeah, that's kind of weird.

MALE SPEAKER 1: I see that also for a lot of-- It's not just this--

MALE SPEAKER 4: Is it knowledge graph stuff? Or does the-- [INTERPOSING VOICES]

MALE SPEAKER 1: --meta from the home page I saw.

JOHN MUELLER: Yeah, I think, in general, a lot of these answers that we provide there work really well, and they bring this information up a little bit higher and make it more interesting to click through to find out more about that. But I think branded answers like that are probably not what we're trying to do there. So I can definitely talk with the team here that works on that to see if we can improve that a little bit.

MALE SPEAKER 4: All right, well, Gary will be pleased.

JOHN MUELLER: Yeah, that's the first one I've seen where it's obviously branded like this. And I think that's something we definitely need to watch out for, that it doesn't turn into an advertisement for websites, but rather, that it brings more information to the search results about this general topic.

MALE SPEAKER 1: For now the click-through is through the roof.

JOHN MUELLER: It's tricky. I think this is, for example, one of those topics where we have very heated discussions internally. Where on the one hand, I think it really makes sense to show a bit of a bigger answer for some people who are searching for these topics, and on the other hand, we think that webmasters might feel that their content is being misused by being shown in this bigger snippet, where a searcher might get the information they want directly in the search results instead of clicking through to the site. So you say that the click-through rate is probably increasing here. We think the click-through rate is generally increasing, and we think that there are more, let's say, really interested people who are clicking through. Whereas people who are just curious about this topic, they might look at this answer and say, oh yeah, this is all I need to know, and I don't really need to read through the site. So this is something where we are still in discussions internally, at least with regards to how we should handle these kind of answers. [INTERPOSING VOICES]

MALE SPEAKER 5: Can I ask a question?

JOHN MUELLER: Sure.

MALE SPEAKER 5: Regarding the Payday Loans Algorithm, I was wondering is that dealing primarily with off-site content. In other words, is it a variation of Penguin or something similar to that in that it's dealing primarily with spammy content off-site? Or could we also be talking about on-site with the quality issues that maybe Panda didn't get like excessive keyword stuffing or something like that?

JOHN MUELLER: I think specifically for that algorithm, it's something where we're looking at a very specific type of website. And we're looking into various signals that we can pull out, and that can include on-site and off-site signals. So specifically, if you have a spammy payday loan website, then that's something that could involve a lot of factors that you can work on to improve that overall. So I wouldn't necessarily focus only on on-site or only on off-site, but rather, look at it holistically and make sure you're covering all bases there.

MALE SPEAKER 5: Are the [INAUDIBLE] going to be any longer as far as how it's going to affect that site than usual with other--

JOHN MUELLER: I'm not sure how frequently we'd be updating that.

MALE SPEAKER 5: All right, and then another one was that you've mentioned-- in one Hangout you were talking about a website that might be too far gone, that it would just be better off starting from scratch. In what kind of scenario do you think a site has gotten to that stage, where we could look at a site and just say quite easily, oh yeah, might as well start again? Would it be something that's been repeatedly hit by a variety of algorithms over time?

JOHN MUELLER: It's really tricky. I don't think there is any totally obvious signal where you can look at it and say, oh, they should just start over again. But I think if a website has really been doing problematic things for quite a long time, then that's something where you start thinking about that, at least. And the tricky part is, of course, trying to find the right balance there, in the sense that if a website has been active for a really long time, it might have this older brand attached to it that people know about. Whereas if the website is still fairly new, then maybe it's a little bit easier to switch to a different domain, to switch to a different brand, but at the same time, they probably haven't done that much bad stuff in the past either if it's still fairly new. The situations where I think you need to watch out are if you're starting a new website on a domain that has been used for really spammy things in the past. Then we can, to some extent, take those spammy things and ignore them because we recognize that this is a completely new website. But some of those spammy things might still be associated with that website for a fairly long time. For instance, if they've been building spammy links for five, ten years now and someone starts a completely new website on that domain, then that's a lot of really problematic history that is going to be hard to clean up. And if you're just starting new on that domain name, then probably it's not that much effort to actually pick a different domain name and just start on that one instead.

MALE SPEAKER 5: And just disavowing everything wouldn't be--

JOHN MUELLER: Well, sure, you can. It's not that it's impossible to clean those things up, but you have to balance the amount of work and the amount of time it takes to clean that up with what you're trying to achieve. And if you're a completely new website, then spending a couple of months on disavowing old links that you're not really involved in is probably not the first priority on your mind when you're building a new business. Whereas if you're an older, established website, then maybe it makes sense to spend a bit more time to actually clean up all of those old problems. So it's something where there's no obvious and easy answer, and I know it makes it hard sometimes to make the right decision there. And one tricky aspect of that, as well, is that you don't know how things will change in the future. Where maybe you'll clean up all of the links now, and you don't really know if the next algorithm update is going to take place next week or maybe it'll take place in half a year. So that's something where it's sometimes hard to find a balance between when you should cut loose and say, OK, this is going to be way too much effort for us to actually clean up completely, whereas it might be a little bit easier for us to actually start over on a new domain. And some people have started over on a new domain and have made really great progress there. Other people have cleaned up their old website and managed to clean that up and get it back into good shape again. So it's not that it's impossible to do, one way or the other, it's just that you have to, at some point, make essentially a business decision on where you should focus your energy, where you should put your money, your time.

MALE SPEAKER 5: OK. All right, thanks a lot.

JOHN MUELLER: And I think that's something where you guys who have a little bit more experience with cleaning up issues around links, with cleaning up issues around spam, you're in a good position to help these websites make the right decision and to look at their business model overall and say, hey, you're a completely new website and you're on this domain that has been spammed by previous owners in crazy ways in the past. Maybe it makes sense to just pick a different domain name and start on that one instead. And if over time you manage to clean up the domain name, maybe you can put something on there as well or maybe you can move to that one. [INTERPOSING VOICES]

MALE SPEAKER 5: Would the bigger challenge be that the website had external links and stuff pointing back to it, or, say, that it was a really poor-quality website for a long time previously?

JOHN MUELLER: I'd say most of the time, it's the external factors that are harder to clean up. But sometimes, if it was a really spammy website in the past, it might have gotten stuck on one of our algorithms that catch low-quality content, and those just take a longer time to update again. So usually I look more at the problematic links in that situation, but I still take a look at-- what is it?-- archive.org to see what the old content was. And if the old content was really problematic, if it was just scraped content, content from feeds that's just scrambled and spun, then that's something also worth taking into account. And with all the new top-level domains that are coming out, maybe it just makes sense to grab something that's completely new, where you know you don't have any anchor attached to your website that's dragging you back, or at least you don't have to worry about that. Even if you clean up these links now, you never really know what else might be dragging you back a little bit there. Whereas if you start out with something that's really completely new, then you don't have to worry about that. And you can spend all of your time, all of your energy, on making a fantastic website instead of having to clean up old things that might potentially be pulling you down.
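[Editor's note: one quick way to check that history is the Internet Archive's public availability endpoint, which returns the closest archived snapshot for a URL. The snippet below is a rough sketch assuming that endpoint and a placeholder domain; for a real assessment you would browse the archived snapshots themselves.]

```python
# Sketch: look up the most recent Wayback Machine snapshot for a domain
# before deciding whether to reuse it. "example.com" is a placeholder.
import json
import urllib.request

def latest_snapshot(domain):
    """Return the URL of the closest archived snapshot, or None if none exists."""
    api = "https://archive.org/wayback/available?url=" + domain
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

print(latest_snapshot("example.com"))
```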

MALE SPEAKER 1: John, before you go, there was an article saying that for webmasters, Google is "only going to get worse, not better." That was the title. Can you clear this up for the SEO community? Is that true? Having these kinds of articles-- it's not depressing, it's just-- Of course, they can write whatever they want, but--

JOHN MUELLER: Well, I think one thing to keep in mind is that the web is constantly evolving. In the past, it was kind of a niche market area, and in the last couple of years it's really gone mainstream. So maybe five or ten years ago, if you opened an online bookstore, you were one of the first five people to do that, and you'd have it really easy to rank for buying books online. Whereas now there are tons of companies offering these services, a lot of the traditional offline businesses are also active online, and it's just going to be harder to compete online. That doesn't mean it's impossible, and there's still a lot of room for these niche market areas where you're targeting an audience that is too small for the big websites. I think there's a lot of potential, but it's not the case that, from Google's point of view, it's going to become much harder. It's just that the whole market is maturing, and it's obviously going to be harder to rank number one for something where there are hundreds of other sites already competing for that slot.

MALE SPEAKER 1: OK, but at the end of the day, you have to know what you're doing, and that I agree with, whatever the article was stating. But you've clarified this before: SEO is here to stay. You still need it. Right?

JOHN MUELLER: Yeah, definitely. There are lots of technical things, for example, that I like to see as a part of SEO, things that essentially make sure that the website is accessible to search engines. Those have to remain there. It's not that you can take a photo of a brochure, put that online as an image file, and suddenly it will rank number one. You really have to know what it involves to put a website up, to make sure that it's crawlable, to make sure that it's accessible to search engines, that they can read the text on there, that they can index that content properly. And that's something that's definitely not going to go away. Then there's all of the understanding of how the marketing side of search works: what people are searching for, how you can create content that matches what they're searching for, how you can create content that works well for users and works well for search engines. Those are things that aren't going to go away either. And just because there are more people active online doesn't mean that it's going to become impossible or unnecessary to do that. There's still a lot of, essentially, craftsmanship involved in making a website that works well in search.
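[Editor's note: one of the most basic of those technical checks, confirming that a page isn't blocked from crawling by robots.txt, can be scripted with Python's standard library. This is a minimal sketch with placeholder URLs, not a method John described here.]

```python
# Check whether a page is crawlable according to the site's robots.txt.
# Both URLs below are placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

page = "https://www.example.com/products/blue-widgets"
if rp.can_fetch("Googlebot", page):
    print("Crawlable:", page)
else:
    print("Blocked by robots.txt:", page)
```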

MALE SPEAKER 1: Of course. Yeah, OK.

MALE SPEAKER 3: I actually think that it's going to get easier, and it'll get easier once Google actually deals with the churn-and-burn issues.

MALE SPEAKER 1: No, it's just that writing an article like that is totally-- I kind of agreed with it and kind of didn't, but--

MALE SPEAKER 3: Yeah, I think it's going to get a lot easier once the churn-and-burn sites are gone. When Google deals with those quick, churn-and-burn spam issues, there are going to be a lot of spaces in the top 10 available that never were-- [INAUDIBLE] I actually think it will once that issue is resolved. Right now in my niche, the number one site-- we even talked about it six weeks ago-- has now gone from number two to number one. And they're a complete spam site, churn-and-burn [INAUDIBLE] --things will change, and it'll be a much better place. And it won't be more difficult, it'll be much easier.

MALE SPEAKER 2: Hey, Gary, Sandeep had a question. I don't know if John still has time, but Sandeep has been trying to ask his question for the last 45 minutes. John, do you have time for it?

JOHN MUELLER: Sure. Is it in the chat?

MALE SPEAKER 2: Sandeep, can you ask your question?

MALE SPEAKER 1: Your mic is off, Sandeep. Let me un-mute him. Or no, you can't do that.

JOHN MUELLER: Or maybe you could just type it in the chat. I could pick it up from there.

MALE SPEAKER 1: Here's the link. The link is right here, John. Basically, this was the link.

MALE SPEAKER 2: And there's his question in the chat.

JOHN MUELLER: All right. "Content--" "Penalty on our sub-domain "Our manual penalty was lifted, but our site's traffic is still the same. It's been almost six months, and not picking up." I'd have to take a quick look. I don't know if I can see something right away here. From a really quick glance, I think what you're looking at there is not that there's any kind of a manual action that's holding you back anymore, because it looks like that's been resolved, but rather just our algorithms that are kind of not so happy with the quality of your content there overall. And with regards to forums, that's sometimes a bit of a tricky problem because people can be posting anything in your forums. So that's something where you probably need to take action as well and think about ways that you can make sure that the quality of the content overall on your forum there is of the highest quality possible. So we've seen things like finding ways to figure out if an author is providing high quality content overall and maybe no indexing content from new people that are joining your forums, maybe no indexing forum threads that just have a question and no answers yet. And making sure that the content that we pick up for indexing is actually really high quality and the things that are completely new that you don't really know if it's really high quality or not are, essentially, blocked with a noindex. So that's something that doesn't get picked up from one day to the next. That's something you really need to work on your forum and think about ways that you can make sure that it works well overall, that the high quality content as indexable and the lower quality content is blocked from indexing until you've had a chance to vet the content or process that content and make sure that it's really high quality. But that's something that sometimes is kind of tricky to do on a forum, and it takes a while to figure out which method works for your forum. Whereas some methods might work for one time of forum, other methods work for a different type of forum. And maybe it even makes sense to, in some cases, for example, if the forum is just filled with chatter, to noindex the majority of the content there and just focus on content that you think is really high quality and have that content indexed instead.
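[Editor's note: the kind of rule John describes could be as simple as switching the robots meta tag per thread. The sketch below is purely hypothetical: the Thread fields and the thresholds are made-up stand-ins for whatever quality signals a real forum actually has.]

```python
# Hypothetical sketch: decide per thread whether to emit a noindex meta tag,
# keeping unvetted content out of the index until it proves itself.
from dataclasses import dataclass

@dataclass
class Thread:
    author_post_count: int  # how established the original poster is (made-up signal)
    answer_count: int       # replies beyond the original question

def robots_meta(thread: Thread) -> str:
    """Return the robots meta tag to emit in the thread's <head>."""
    if thread.author_post_count < 5 or thread.answer_count == 0:
        # New authors and unanswered questions stay out of the index for now.
        return '<meta name="robots" content="noindex, follow">'
    return '<meta name="robots" content="index, follow">'

print(robots_meta(Thread(author_post_count=1, answer_count=0)))   # noindex
print(robots_meta(Thread(author_post_count=20, answer_count=3)))  # index
```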

MALE SPEAKER 1: All right.

JOHN MUELLER: All right.

MALE SPEAKER 1: We should go to a two-hour special.

JOHN MUELLER: Oh my god, no.

MALE SPEAKER 1: All right.

JOHN MUELLER: Yeah, it's useful here. They said the meeting room isn't reserved, so I had a chance to take a little bit longer. So let's take a break here. Thank you all for coming. Thank you all for your questions. I set up another one in English for next week, which is a little bit later, so for those of you in the Pacific time zone, it might be a little bit easier to join in. And I wish you guys a great week.

MALE SPEAKER 1: Bye.

JOHN MUELLER: Bye, everyone.

MALE SPEAKER 2: Thanks very much, John. You're a godsend for webmasters. Thank you, John. [INAUDIBLE]

JOHN MUELLER: OK, bye.