Google+ Hangouts - Office Hours - 24 February 2015



Transcript Of The Office Hours Hangout


JOHN MUELLER: OK. Welcome, everyone, to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. And part of what I do is talk with webmasters, publishers, SEOs, like the people here in the Hangout, and the people who submitted questions, and try to get your questions answered. A bunch of questions were submitted already with the Q&A feature. That's usually the best way to kind of get questions up here. If you can make it in here live, usually these slots go very quickly. So if you want to join one of the future ones, make sure you put your comment into the event page and make sure that you're available when I open it up a few minutes before we start. With that said, let's get to one of your questions. Do you have anything?

BARUCH LABUNSKI: Last week-- I've been waiting patiently since the Google News Hangout to ask you a question.

JOHN MUELLER: All right.

BARUCH LABUNSKI: OK. So better be a good one, right?

JOHN MUELLER: I guess.

BARUCH LABUNSKI: So I want to know, do you guys give more priority to a responsive website? Is there a difference between-- would you give less authority to, like, an m-dot site, as opposed to a responsive website? In other words--

JOHN MUELLER: No.

BARUCH LABUNSKI: No?

JOHN MUELLER: No. We essentially treat the three different ways of handling that equivalently. There are some, I guess, inherent advantages to using the same URL or to having a responsive website. Which means, for example, you don't have to worry about things like canonicalization. We crawl one URL, and we get the full content. That makes it a little bit faster. But that's more of a technical issue than an actual ranking issue. So if you set your site up properly, according to the guidelines we've published, and you use one of those three methods, then that'll work. It's not that you have to use responsive, or you have to go to an m-dot, or something like that. Use whatever kind of makes sense, whatever works for your infrastructure.
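
For reference, the three setups John mentions are responsive design, dynamic serving, and separate mobile URLs. As an illustration of the separate-URL (m-dot) case, this is a minimal sketch of the bidirectional annotations Google's mobile guidelines describe; the hostnames and paths are placeholders:

```html
<!-- On the desktop page, http://www.example.com/page -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="http://m.example.com/page">

<!-- On the mobile page, http://m.example.com/page -->
<link rel="canonical" href="http://www.example.com/page">
```

This is the canonicalization John refers to: the annotations tell Google the two URLs carry the same content, so the pair is treated as one page rather than as duplicates.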

BARUCH LABUNSKI: How about having pages like terms of service and privacy policy? Does that make a difference as well? Does the bot look at that?

JOHN MUELLER: We'll crawl and index those pages. But it's not that we have any kind of a special algorithm that seeks out terms of service pages and says, oh, this website has a terms of service, it must be a good one. Essentially this is content that anyone can copy and paste from somewhere. So it's not necessarily a sign where I'd say because you have a terms of service, this must be a higher quality website.

BARUCH LABUNSKI: Yeah. Because there are a lot of other SEOs out there telling business owners, hey, you need to see an IP lawyer to get these specific pages written. Unless you have AdWords, you have to have specific terms of service and privacy policy pages. So it doesn't really matter, right? Like if a business doesn't have their privacy policy pages.

JOHN MUELLER: From our point of view, it doesn't really matter. For the users, I don't know. It might depend on the type of service that this website is offering. So if you're selling something and you don't have your address, you don't have your terms of service anywhere on a home page, then that's something where a user might kind of look at that and say, well, I don't know if this is really a legitimate business, or if it's some fly-by-night spammer who's just trying to show up in search. But that's something more on the user side. That's not something where we'd say Googlebot would be looking at the terms of service and kind of making sure that they're written correctly.

BARUCH LABUNSKI: OK. I just wanted to-- because there was a lot of chatter around that somewhere. OK. Thank you so much.

JOHN MUELLER: Sure.

JOSH BACHYNSKI: I have the perfect segue for that, John.

JOHN MUELLER: All right.

JOSH BACHYNSKI: The perfect segue is this-- that's a very excellent point. So today, would you say SEO-- is it more about the technology of the site? Or is SEO more about the psychology of the user? It seems to me we're moving into a state where SEOs are more marketers, and we increasingly have to make sure people are clicking where we want them to click. And what [INAUDIBLE] says a lot about task completion. Do you think SEO is more about the technology side of things or more the psychology side of things these days?

JOHN MUELLER: It depends on how you define SEO, I guess. I think there's a lot of technology behind making a website that works well, a website that's easily crawlable, a website where the content is directly visible. That's something where if you work on a bigger site, then you will run into lots and lots of technical challenges. And these are things that are not trivial to kind of solve for an existing site. They're not always trivial to kind of set up properly for a new website. So there's definitely a lot of technical base knowledge that I think is necessary around SEO. And these are things that for people who've been in the industry for years now, they're like, well, this is all so obvious and common. You have to be able to crawl a URL, and it has to have links to other pages. It's something that you guys have gotten used to over the years. But new people who get involved here, people who kind of understand the technology of making really cool websites, they might not know all of this. This is kind of like the technical side of SEO that I think is really, really important. Because if you kind of mess up the technical side of things, and everything is on a single URL, but it uses this really cool technology, then these are things that essentially make it impossible for us to crawl and index your content properly. So I think the technical side of things definitely shouldn't be underestimated. There's a lot of work involved there. And there's a lot of knowledge that you gain over the years that is really valuable when you go and talk to clients, when you go to look at new websites, or look at existing websites to kind of help them improve things. The other aspect, user experience, is something that kind of flows in with any kind of marketing. When you're working on a website to make sure that it works well for the users, that's something that I wouldn't call inherently SEO. That's something that you have to do with any kind of business if you're working with them. You have to make sure that it matches the client's needs. That it matches what they're trying to achieve, the task completion, those kinds of things. So I wouldn't say that's something that belongs inside the technical SEO side of things. But it's definitely something that a lot of people forget when they work on websites. Where we'll see them kind of work on their technology, and they have this really cool AJAX-y website or Flash website that does really fancy things. But if you take a step back, you say, well, users are just trying to find our address or our opening hours, and they can't get that. This is going to be a problem. And you can do all the technical SEO that you want to improve this website. But if users can't use that website, then all of that work that you're doing to drive traffic to the website through search is going to be lost. It's not something that will be interesting.

JOSH BACHYNSKI: So you would say the psychology is not trivial.

JOHN MUELLER: Well, psychology is never trivial. But these are things where, if you have experience on the web, you notice them a lot faster. Just like when you look at a lot of websites you'll say, well, this is a spammy website, this is a reasonable website. And you'll be able to spot that within a couple of seconds when you look at these websites, because you've kind of trained your eye over the years. And these are things that overlap more and more with things that normal webmasters generally get wrong, things that essentially could be fixed and improved.

JOSH BACHYNSKI: Fantastic. Thanks, John.

GARY LEE: And John, I just wanted to add to that conversation as well. I've been online since 1993. In 1997, we saw Google. I sent you an email last night. I don't know if you've received it yet. But we're still seeing incredibly poor stuff going on. We're taught to do all the right things. Everybody that's ever watched these Hangouts has seen me here for X amount of years now. We're playing by the rules, doing all the right things. We just keep getting lower in the rankings. Our site looks great. We've done user testing. We're doing all the things that everybody says we should do, and we're seeing negative results. But on the positive side, we're seeing more user engagement, lower bounce rates. We're seeing all the things from our users to say everything's good. It's just Google that hates us. So we don't know what's going on. But yet we see sites and businesses that consistently break the rules. The main culprit is number one for our major key terms in the whole industry. They've been around a while, but their site got banned. They've now got a new site that redirects. They use hidden links, which I haven't seen since 1999. It's just unreal. And you can consistently see through blog posts that they were still doing this in November and December just gone. And there's even more I've found recently. So it's such a double standard. There are so many people-- I'm not the only one that notices this. There are thousands of people. It just happens to be that I can get on this Hangout and have a voice with you about it. But what's the point? We're spending thousands in AdWords. We're doing our part. We're contributing to Google. This is what you want us to do. It's just not right. It goes against everything you guys say. And any time I bring this up in the webmaster forums, I'm told, concentrate on your own business, and forget what other people are doing. Well, it's just wrong.

JOHN MUELLER: I passed your note on, by the way. But I totally appreciate that feedback. And I can understand that this is frustrating, especially when you kind of work to improve your site, and you don't see those positive changes kind of reflected in that. And I know that's not easy to kind of keep working on that. But I think you're headed in the right direction. And I think, let's see-- how can I phrase this? You have your heart in the right place. You're doing the right things. And I think in the long run that's something that will be reflected. But I totally understand that this is something you've been working on for years now. And that it's hard to kind of wait, or continue working in this direction so long and not see significant changes.

GARY LEE: You're suggesting to me and other webmasters that we should create another site and use these cheap, crappy tactics to do what we do. Because four years of what we've been doing hasn't paid off, four years. No other business, apart from a strong business like ours, would survive, essentially, without Google, which is what we've done for the best part of four years. Those companies crash and burn. So you're telling us and everybody else to copy those tactics. They work. And you look at our major key terms in Google in the US and the results in Google are appalling. There's no way any engineer could stand by the results that show up there. It's just incredibly apparent. They're not even related businesses in some of those search results. Something's horribly wrong. But we can't sit by any longer doing what we're doing. Yeah, we can keep sitting there and waiting. Waiting for what? We're watching everybody else haul in the cash, and they're not even good businesses. We're winning awards. We've done all kinds of things. We're now the third largest virtual office supplier in the world. And we're not getting any recognition. So you told me this last time a year ago. You're doing the right thing. You're on the right track. It's just not the case. Even if we did all the right things, the businesses who are still ranking at the top in first, second, and third-- even if we were number one, we'd still push them down to second or third. They'd still be ranking. They're still there. The algorithm isn't working. And everybody's seeing this. In all the forums people are talking about this. We want to work on our sites, but Google is the problem.

JOHN MUELLER: I appreciate that feedback. I'll definitely pass that on.

GARY LEE: Thanks, John. I'm going to get off this Hangout, so I can give other people some space to get on. And I advise everybody else to ask their questions and then leave and allow other people to try and get on the Hangout. Thanks, John. Have a great time. Have a great week.

JOHN MUELLER: Thanks, Gary.

GARY LEE: All right, bye.

JOHN MUELLER: All right, at the moment we use dynamic serving for mobile and desktop. But our site will become responsive. I understand that Google is recommending responsiveness, but is the loss of speed we're facing because of heavier source code a bigger problem for Google News? I can't speak for Google News directly. That would have to be someone from Google News to look at. But with regards to web search, generally speaking, a change of speed on that order of magnitude isn't something where you'd see any significant changes in how we crawl and index a website, or how we show it in search. So in general, when we look at the speed that it takes for a site to render, for example, we try to differentiate between sites that are really, really slow, and sites that are kind of reasonably fast. So if your site is kind of reasonably fast, and you're just seeing changes in the millisecond area, then that's not something where we'd take that into account for Google's ranking. So we wouldn't say, well, this is 110 milliseconds instead of 120 milliseconds, therefore it'll be a slot higher. That's not the level of detail that we're looking at there. We're really looking at the overall rough picture to see, is the site reasonably fast? Is this something where if a user clicks on it, they'll be satisfied in a reasonable time? Then that's essentially OK. Also, there are ways to make even a responsive website fairly fast and snappy. So just because you're using a responsive design doesn't necessarily mean that things will be slower or more bloated.

Is it a good idea to have a rel canonical link to the home page from 404 pages? The problem we have is that people link to our GA experiment pages, but when the experiment ends, there's a 404. We can't redirect due to parameters in the URL. This is something where I'd try to do what you say you can't do, namely set up a redirect. Since those old pages were equivalent to your new pages and just on slightly different URLs, I'd set up a redirect there. I think that would be fine. In the worst case, if you can't do a server-side redirect, then doing a JavaScript redirect would be a little bit better, to at least make sure that users are actually reaching the right page after they follow a link to the wrong URL. With regards to the rel canonical, if this is a 404 page and it's returning a 404, then we're not going to process any of the content on there, including any rel canonical. So if you have a rel canonical on a 404 page, that's something we're going to skip. So again, in a case like this I'd really recommend trying to set up a redirect from the old URLs, the experiment URLs, to the current ones.

What can be done about individuals that build spam links to your website and then ask for payment to remove them? This is basically an extortion scheme that's affecting the rankings of these websites. They built hundreds of links with the same link text. This is something where, on the one hand, our algorithms pick up on these things and essentially handle them fairly well. So in general, that's not something you really need to worry about. If you do notice that something like this is happening, then by all means use the disavow tool and let us know about that, so that we can take those links out of the link equation. There's definitely no need to pay anything for these websites to have them remove the links. If you use the disavow tool for something like this, then that's essentially equivalent.
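
To illustrate the server-side redirect John recommends for the retired experiment pages above, a minimal .htaccess sketch; the path and parameter name are made up, since the actual experiment URLs aren't shown:

```apache
# 301 any request for the retired experiment URL that still carries
# the (hypothetical) experiment parameter; the trailing "?" on the
# target drops the old query string.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)expid= [NC]
RewriteRule ^landing-page$ /landing-page? [R=301,L]
```

And the JavaScript fallback he mentions, for cases where a server-side redirect really isn't possible:

```html
<script>window.location.replace("/landing-page");</script>
```
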
How much collaboration goes on between traditional search folks at Google and Local, or Google My Business folks? It seems like there's a bit of a disconnect between the teams. Is this an accurate perception? It's hard to say, because we are in different locations. So at least there's a disconnect in that regard. The other thing to, I guess, keep in mind is that Google My Business is essentially focused on the map side of things and not directly on the search side. So there's always this kind of traditional separation within Google where we say the organic search side should be as neutral as possible and shouldn't be influencing people on the ad side, shouldn't be influencing people on the map side. That should essentially be separate. So what we try to do in many of these cases is really make sure that we keep the separation as strict as possible. That means when we make changes in web search, then they'll find out about it at the same time that any of you find out about it, at the same time that any other external companies might find out about it. So in that regard, it's possible that they might not know everything that we're doing, because we try to keep things separate as much as possible. On the other hand, we also understand that a lot of small local businesses essentially go to maybe the Google My Business team, or just use Google Local, or somewhere like that. And they want to get the same kind of information as well. So every now and then we will do something together where we'll say, well, these are the basics, the information that we've been giving out to webmasters in general. Maybe you guys would want to do something similar with that kind of information as well. And then every now and then we'll do some kind of a collaboration on that level.

BARUCH LABUNSKI: So they have their own data, like Google My Business has their own data. Now you guys left this other company that you were depending on for data. Now Google My Business is-- well, because I just saw Jade Wang. I saw her like two weeks ago at the local [INAUDIBLE], so I talked to her. And so she was just mentioning that. So basically you guys don't use that data though at all. Because you have your internal data now.

JOHN MUELLER: I'm not sure which data you mean.

BARUCH LABUNSKI: You're gathering the local directories. You're gathering info from GPS.

JOHN MUELLER: I think that might be something that Google My Business people do, or the Google Local, the Google Maps folks do. I don't know how they handle that. No idea, sorry.

NEMANJA DIVJAK: John, I have to ask. I got into the search impact testing. And for example, I'm checking the data. We have more than 6,000 keywords for our website in the full detail view. And for Europe the average position is about 8, and for the US it's 30 to 60. So it seems that for the US it only shows up on maybe the third, fourth, or fifth page.

JOHN MUELLER: That's possible. I don't know about your website. So in theory that's very possible that you might have very focused content where people in one country see it very highly in the search results, and other people in other countries don't see it that visibly. So that's theoretically possible. I think the other thing to keep in mind with the search impact feature is that, especially when you're looking at impressions, we kind of count that a little bit differently. In the sense that with the old system we would count every individual URL that's shown in a search result. So if for one query we show two or three kind of results from your site in the search results, then we'll count that as two or three impressions with the old system. And with the new system we'll say, well, this is all the same site. It's one impression for the site. So you'll see kind of this, I guess, difference in the numbers, especially when you're looking at impressions. And I think the new system kind of makes sense in the sense that this is actually your site. It's the number of times people see your site. You don't really care how many search results within these individual queries were actually shown. But maybe when we have a bit of time we can do a separate Hangout about the search impact feature to kind of get some broader feedback as well. I know some of you submitted the form and didn't get invited. We hope to kind of do another run fairly soon. So maybe we can get the rest of you all in that run. And then we can do something about it.
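
As a concrete reading of the counting change John describes, with a made-up query and site:

```text
Query "blue widgets" shows example.com/a, example.com/b, example.com/c
  Old system (per URL):  3 impressions counted
  New system (per site): 1 impression counted
```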

NEMANJA DIVJAK: It works quite well. It gives good insight. You basically see everything-- what's going on from the queries, clicks, impressions, countries. It works great.

JOHN MUELLER: Great. Are Places or My Business listings listed in Webmaster Tools? No. That's essentially separate.

What do you think about disavowing a specific link structure? For example, www.example.com/index?sref=something. I'm not really sure what you're trying to do there. In general, if this is within your website, then that's not something I'd use the disavow tool for, because that's not really what it was meant to do. It wasn't really meant to remove links within your own website, but rather remove links from external sites pointing at your website. With that in mind, if there's something on external websites that you want to have removed from the link graph, then as much as possible we recommend using the domain directive in the disavow file so that you have all variations of these URLs covered. So don't just try to find that one individual URL with maybe a very specific parameter in it and disavow just that. But rather, if at all possible, disavow the whole domain so that all variations of that URL, and the variations that are kind of indexed with the same content, are taken out as well.

What's the best process for removing a manual penalty from a domain when the website will be dismantled? Will redirecting to a splash page that says this site has been deleted help us remove the penalty through reconsideration so that we can 301 elsewhere? I think there are two aspects here. On the one hand, you want to get rid of this penalty, which I think is fine if there's something old and problematic that you had on your website. On the other hand, you're redirecting to a different website. With regards to reconsiderations, the webspam team generally likes to see that there's really content on this website where it makes sense to actually remove a penalty or manual action. So for example, if a site is removed from search because of spammy content, and the webspam team looks at it during reconsideration and just sees a 404 page, then they're not really going to have much to go by to say, well, this is good content, now we should index it. But actually, since it's a 404 page, we wouldn't index it anyway. So it's almost like you're trying to get something reconsidered, but there's actually nothing there to be reconsidered. So as much as possible, if you're doing a reconsideration, especially when it comes to content issues, make sure that this content is actually visible. So make sure that there's actually something high quality there, so that the webspam team can say, well, this makes sense to index normally, therefore we should remove any manual actions. So if you're moving from one domain to a completely different domain-- you have really high quality content on the new domain-- then I'd just set up the redirect, do the reconsideration request, and say, hey, we have a completely different site now. All of the old site is essentially gone. And then the webspam team will be able to take a look at that and say, well, sure. All of this hidden text, or all of this keyword stuffing, or low quality content, or whatever it was, is gone, cleaned up. And you can focus on your new site.

Will Matt Cutts be coming back to Google?

BARUCH LABUNSKI: That's a great question.

JOHN MUELLER: I don't know. You'd have to ask him directly, I guess. He's not completely gone, so I won't say too much. But I think this is something I can't really answer. It's essentially a personal question for Matt. So feel free to ask him directly, if you want, on Twitter. Maybe he can tell you something. Maybe he doesn't want to. It's up to him.

MALE SPEAKER: Hi, John.

JOHN MUELLER: Hi.

MALE SPEAKER: Hello. I have a question about design, how it could improve or make things worse with Google. I'm sort of familiar with the trend of putting videos in the header in order to bring more experience to the website design. So I was wondering, could it hurt the loading times? Could it affect the rankings?

JOHN MUELLER: To put videos in the headings?

MALE SPEAKER: Yeah. Videos in the heading. They are played automatically when you get in.

JOHN MUELLER: I don't know how that would affect Google search directly. So I think that's something-- what might happen is we pick that up and say, well, these are video play pages. And we show, like, the video thumbnail next to the search results. Which, I don't know, sometimes might be useful, sometimes might not be useful. But otherwise that shouldn't affect search directly. I imagine some users might get upset if you have auto-playing videos when they go to a page, but that's essentially between you and your users. It's not something that we kind of get involved in.

MALE SPEAKER: OK. Thank you very much.

MALE SPEAKER: John, I have a quick question.

JOHN MUELLER: Sure.

MALE SPEAKER: In Canada we have different regions, provinces. And we have Quebec, which is primarily French. And if I were to want to target content or web pages specifically for Quebec, is there a way of doing that? Because from what I can gather in Webmaster Tools, everything is basically country-based. I can't break it down by region or anything of that sort. Are there any suggestions, or anything in the works, to be able to do that? Besides creating a site and using the CMS to target IPs or anything like that-- I know you can, and that gets really nasty, and I don't want to get into that. But there are reasons, either for legal compliance, or just to accommodate the difference between types of language-- let's say, this type of French and that type of French. Just ideas, nothing's in the works. But we're trying to come up with some plans for the future.

JOHN MUELLER: Yeah. If it's essentially just French Canadian and French French, for example, then that's something you could do. You could set up a site--

MALE SPEAKER: Like CAFR?

JOHN MUELLER: Yeah. You could set up a website and say this is targeted for Canada-- geotargeting with either a ccTLD or with Webmaster Tools. And then you have your content in French. And essentially that targets that automatically. So that would be a really direct way to do that. You could also, if you have different variations of your content-- like English content for Canada, or French content for Canada, or maybe French content for Canada and French content for France-- then that's something where you could use the hreflang markup between those pages. And say this is the French Canadian version, and this is the French French version. And these are essentially the links between these pages. This is equivalent content that I have in various variations. And Google will then try to swap that out with the hreflang.
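
A minimal sketch of the hreflang markup John describes, with placeholder URLs; the same block goes on both variants, each page listing every version including itself:

```html
<link rel="alternate" hreflang="fr-ca" href="https://example.com/fr-ca/">
<link rel="alternate" hreflang="fr-fr" href="https://example.com/fr-fr/">
```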

MALE SPEAKER: And if I wanted to build a site that was specifically for the US, and I don't want the Canadian market to come across it for any reason, or vice versa, what is possible to do?

JOHN MUELLER: Oh man, you don't like the Canadians, huh?

MALE SPEAKER: No, I'm a Canadian.

JOHN MUELLER: Those poor guys. There are a couple in here in the Hangout.

BARUCH LABUNSKI: Watch it guys, watch it.

JOSH BACHYNSKI: Watch it [INAUDIBLE].

MALE SPEAKER: Hey, I'm Canadian. Easy.

JOSH BACHYNSKI: We can tell you're Canadian.

JOHN MUELLER: Settle down, guys. Fight outside, OK.

JOSH BACHYNSKI: The flag didn't give it away behind you.

MALE SPEAKER: Oh, that's right. Thanks.

JOHN MUELLER: So the main thing you can do is set up geotargeting. You can't guarantee through search that people from Canada will never see your content. So when it comes to search, it's always possible that someone in Canada searches on google.com and they end up on your site because it's showing up more visibly on google.com. So that's not something you can really guarantee. What you can do, if you wanted to be, or if you needed to be, really direct and say, well, this really needs to be limited to the US and users in Canada can't see it at all, for maybe legal reasons, for example, then you can block users outside of whatever region you're targeting by IP address or whatever. And then say these users don't see the content. You're blocking them on your server through your .htaccess file, or whatever you have there, and presenting some kind of an error page through that. So for us, what's important is that when Googlebot crawls, it sees the same content as a user from the same region would see. And we generally crawl from the US. So in that case, you'd be kind of lucky. Because you'd be serving Googlebot the normal content. You'd be serving US users normal content. And we're kind of happy with that. That's fine with us. That's not cloaking. It would be more problematic if it were the other way around, where you'd say users in the US aren't allowed to see the content that users in Canada are allowed to see. Because in that case, we crawl from the US. We would also see this error page saying, hey, you can't look at this content. And then we'd say, well, we have nothing to index here. We can't show this site at all in our index because we're not allowed to crawl it. So in a case like that, that would be more problematic. That would be something where, if you wanted your content found in search, you'd almost have to come up with some kind of a back door there, and say, this is a generic page about our content that legally we can show to users in the US. And we can have this page indexed. So the actual content itself won't be indexed, but kind of like this pointer to the content can be found.
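
A rough sketch of the IP-based blocking John describes, assuming Apache 2.4 with MaxMind's legacy mod_geoip module; the exact directives depend on the setup:

```apache
# Flag visitors geolocated in Canada and deny them, while US visitors
# (and US-based Googlebot) get the normal content.
GeoIPEnable On
SetEnvIf GEOIP_COUNTRY_CODE ^CA$ from_canada
<RequireAll>
  Require all granted
  Require not env from_canada
</RequireAll>
```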

MALE SPEAKER: OK. That's really good. One last question, I'm not sure if I asked it last week. Site links-- obviously, you can't do much about influencing those. You come up with those yourselves. I've noticed, based off of region, that someone doing the same search for our company in Vancouver versus Halifax will get different options for site links. Is that possible? And again, also a different experience when doing mobile? Are you that dynamic at this point?

JOHN MUELLER: Theoretically it's possible. We do a lot of geotargeting where we try to figure out what's relevant for users in different locations, especially if we can recognize that they're trying to find something local. Then that's something where a user in one part of the country might see slightly different content than in other parts of the country. I imagine that's something you'll also see in the site link. So it's not just in normal search results, but also in the site links. You'll see that reflected. It really, I guess, kind of depends on the type of site that you have and what you're targeting. But if you have something, then that's something-- sure. That's essentially possible.

MALE SPEAKER: All right, thanks.

JOHN MUELLER: I think there was a question from [INAUDIBLE]. Let me see. I have a question. When I search for buying links, Google allows Google Ads to run for those queries, which Google itself won't allow. The question is, does Google allow the purchase of links in the algorithm? So I think at some point we stopped showing Google Ads for some of these queries, if they're promoting something that kind of breaks Google's terms of service, those kinds of things. I don't know if that applies to the queries that you're looking at there.

MALE SPEAKER: Thanks again.

JOHN MUELLER: Sure. All right, why are EMDs still ranking so well in search when I can clearly recall multiple statements from Google saying that they would be devalued? In Italian search, the situation is ridiculous in many search results. I think there are two things to keep in mind with exact match domains. So these are the EMDs that are being talked about. Exact match domains are essentially when the keywords of a query match what the domain name says. So if you're looking for, I don't know, cheap red cars, then cheapredcars.com would be kind of an exact match domain. And from our point of view, what we look at primarily is whether or not these provide a bad search experience, not whether or not they're an exact match domain. So if we run across sites that are using exact match domains that are really low quality, then that's something we try to take action on, where we try to demote those either algorithmically or manually from the webspam team. On the other hand, there are lots of sites that have essentially exact match domains for the queries that they're targeting, but they're high quality sites. They're normal websites, normal sites that we'd show in search. So just because a site is using a domain name that matches the keywords or the content that it's targeting doesn't necessarily mean that it's a bad site, one that we shouldn't show in search.

BARUCH LABUNSKI: But will the bot be able to distinguish-- for instance, if I have tennisballs.com, would Googlebot know, hey, this is a brand and so on? There are so many signals it would be able to understand later on-- or Hummingbird, that is, as well?

JOHN MUELLER: I didn't understand your example that well. Sorry.

BARUCH LABUNSKI: For instance, let's say I own tennisballs.com and my brand is tennisballs.com. Googlebot will understand that it's a brand. But if it was just a spammy website, you would know that as well?

JOHN MUELLER: It's not so much that we look at the brand and say, well, this is a known brand, we should treat it differently. It's more that when we look at the website, we see that this is something legitimate, something useful for users. So that's something we should show. If you look at a big website, like google.com, or Amazon, or Facebook, people are searching for that keyword, Facebook or Amazon, and they want to see that website. So you could argue that they're kind of an exact match domain for their brand name. But essentially this is what the website is called. They do a great job. So we should show them in search for those kinds of queries. So from that point of view, it's not so much that it's an exact match domain, but rather that this is something that a lot of people have abused in the past, where they'll take hundreds of domains that match the keywords that they're trying to target and just put low quality junk on those pages. And in the past, maybe they'd get through and get those into search, and get those ranking as well. I'm sure there are situations where we get that wrong now as well. And this kind of feedback is really useful. Also, with regards to saying that it's worse in Italian, that's a little bit useful, but more information would be even more useful. So if you can send me a bunch of queries where you're seeing really low quality content bubbling up in search for fairly generic queries, then that's really useful information for the engineers to work on. I can't promise that they'll just, like, snap and rewrite their algorithms to handle that. But that's definitely something they take seriously, something where we try to figure out what went wrong here, and where we need to tweak a little bit more. Where we maybe went too far, maybe we have to dial things back a little bit. So if you want to send me specific examples, that's always really welcome.

JOSH BACHYNSKI: Sorry, John. Would you say that EMDs might have a benefit in one way in the sense that Baruch raised? If you're selling red apples, for example, a fictitious red apples product. That people want to buy red apples and you have buyredapples.com. And that's what they want to do. And you let them do that. It seems to me that at the very least the EMD would have a slight leg up in that regard.

JOHN MUELLER: I think that's mostly a marketing advantage. So I wouldn't say that there is an inherent search advantage for that. But there is-- I don't know. I haven't done any marketing research on that. But I would imagine there is at least a marketing advantage in that when people go there and say, whoa, well, this business is called Buy Red Apples. This is exactly what I was looking for. I'll go there. I'll buy something from them. They seem to know what they're talking about. So it's not so much that there's an inherent advantage in search. It's more that maybe there's something indirect, or maybe psychological almost, that takes place with the visitors where it does make sense to kind of look at that. I think from a business point of view, if you're using an exact match domain you're kind of putting yourself in a very small corner. So that's something I'd be kind of cautious about. If you're starting a new business, and you're selling red apples and you have buyredapples.com. That's fantastic. But if you want to kind of expand to bananas, or oranges, or different kinds of apples, then you're kind of tied in that corner. And you're like, well, buy your oranges at buyredapples.com. And you're kind of stuck in that situation. But that's not so much a search problem. That's more, I guess, a marketing, a branding problem. The other thing that we often see is that people will buy or set up a kind of exact match domain. And say, well, I have buyredapples.com and this is my brand. And people are searching for my brand, but not finding my website. And from our point of view when people are searching for buy red apples, they're probably not searching for this obscure buyredapples.com brand. They're looking for this kind of general information. So it wouldn't be the case that we'd automatically show that site number one in the search results there. Unless we really have a confirmation that actually this website is exactly the brand that they're looking for, exactly the kind of business they're looking for. But just because you have kind of matching those keywords, doesn't mean we'll show it in the search. And we'll see that sometimes in the forums where people will say, well, I branded my business buyredapples.com, and this is the brand that I want to kind of present to people. Therefore Google should recognize that as a brand and show that in search. And that's essentially not the way it works. We kind of look at it from the other way around. Where we see, well, people are accepting that this is the website and this is its name. Therefore we should show it in search.

BARUCH LABUNSKI: And as soon as you get it, you also said that you give it a boost as well, right? Like maybe a month or two boost and then it goes back to its original place, like page three or four. You kind of try it out sometimes, right? You try it out?

JOHN MUELLER: I think that's kind of separate from the general situation around exact match domains. But essentially if something is a really, really new website, then it's really hard for us to figure out how we should be ranking this because we don't have a lot of signals. And that's something you might be seeing there. It might also be that depending on what you're kind of targeting, what you're looking at, is actually a really tough market where a lot of people are already very busy. Where it doesn't make sense for us to say, well, we'll give this site the benefit of doubt. And we'll let it rank number one for Amazon queries because this is something we already know people are actually looking for this. So just because you're a new website we won't automatically show you higher in search. But it is a lot harder for our algorithms to figure out what are the right signals that we should be looking at here. And how can we kind of collect more information about this website over time to figure out where it would be appropriate to show in search.

BARUCH LABUNSKI: Right.

JOHN MUELLER: All right, let me run through a bunch of these questions that were submitted. And then we'll try to get to some more Q&A. How does a short downtime affect rankings? I noticed my server froze up and I wasn't able to get it back up for a day and a half. My rankings dropped noticeably from the week prior to the week after. In general in these kinds of situations we recommend making sure that your server returns something like a 503, so that we understand that this is a temporary issue, that we don't take that content into account for indexing, and that we don't drop the URLs automatically. That we give it a little bit more time to catch up. We'll try to recrawl those URLs before we actually do anything big in search. But temporary is always kind of a question of time. So a day and a half is something that we'd definitely be able to tide you over with a 503. But if your website is down for over a month and you're serving a 503, then I wouldn't assume that Google will keep everything frozen in the previous state and show that again. So a day or so is something that we'd generally be able to handle really quickly. The difficulty here is if you don't have any control over what your server is showing during that downtime and it's serving a 404 page, for example. Then that's something where we'll crawl your existing pages, we'll see the 404, and we'll say, well, this page disappeared. Maybe we should remove it from the index as well. So that's something where you might see a general drop in traffic to your website because of that. But as soon as we can recrawl those old pages and see that actually the content is back, then we'll bubble that back up in search again. So these things can, if they're set up incorrectly, have a negative effect for a temporary time. But it'll bubble back up to the previous state usually fairly quickly. So if you had a day of downtime, then I imagine maybe a couple of days for things to settle back down with crawling and indexing again. And things will be pretty much at their previous state again.

Lost ownership after URL change. For internal reasons we recently changed our URL structure and 301-ed the old to the new. However, where some of our old content has been copied, the copier's page now ranks instead of ours. What can I do? I don't really understand the question. It sounds like you moved from one website to another and some of the content is still on the old site. I guess in a case like that, whenever possible, if you're moving from one URL to another, really make sure that you have clear URL redirects set up so that we can follow those redirects. And then that's something that shouldn't really have a negative effect there. I'm not really sure what you mean by the copiers here. So one thing I might do here is post in one of the help forums, or post in the Google+ communities, and talk to other webmasters about this. Maybe there's something technical that was set up incorrectly with the redirects there.
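
For illustration, the response John suggests during a short outage might look like this raw HTTP exchange; the Retry-After value is a hypothetical hint rather than something Google commits to honoring:

```http
HTTP/1.1 503 Service Unavailable
Retry-After: 7200
Content-Type: text/html

<html><body>Temporarily down for maintenance. Back shortly.</body></html>
```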

BARUCH LABUNSKI: John, also, as soon as that guy made that move, fetching and rendering would also help, no?

JOHN MUELLER: With a 301 redirect you don't really need to resubmit that to Google with fetch and render function. Because in the worst case if we send people to your old URL, then they'll still get redirected to the new one. So it's not something that you kind of have to force that move manually in any way.

BARUCH LABUNSKI: How long would it take for the old one to disappear from the results? Only when you make an update, right?

JOHN MUELLER: It depends. So if you're moving a whole site, then you'll probably see the primary content get moved fairly quickly, within a couple of days. And the rest is going to take weeks to months, where maybe even after a year you'll still see some of that content indexed under the old URLs. And if you explicitly search for the old URLs, then we'll say, well, we know this content was also on these old URLs. So we'll show it to you because you're explicitly asking for it. So that's something where if you're explicitly looking for the old content, we'll show it to you because we think it's a good service for you, which might not be what you're looking for as a webmaster.

ROBERT O'HAVER: John, I have a question when you get done answering the other one.

JOHN MUELLER: Sure, go for it.

ROBERT O'HAVER: Oh, OK. This is an e-commerce site situation I wanted to ask you about. We have a site that has multiple products-- or the same product in multiple categories. Is that a negative effect, having basically the same content but in different categories?

JOHN MUELLER: Not primarily. So in most cases we would figure that out. And we'd try to index the different URLs for the same content, the different category pages essentially. And if someone is searching for something general, we'll pick one of those URLs and show it in search.

ROBERT O'HAVER: Well-- sorry. Go ahead.

JOHN MUELLER: What you could do if you wanted to kind of force us to pick your preferred URL is just set up a canonical.

ROBERT O'HAVER: That's what I thought. OK.
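
A minimal sketch of the canonical John suggests, with hypothetical URLs: each category-scoped copy of the product page names one preferred version:

```html
<!-- On /garden/hose-nozzle and /sale/hose-nozzle alike -->
<link rel="canonical" href="https://www.example.com/products/hose-nozzle">
```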

JOHN MUELLER: All right, just lots of questions left. Is it possible to recover from Panda without a new Panda update or Panda refresh? So Panda is our high quality sites algorithm, where we look at high quality websites and try to show those a little bit higher in search. If you've significantly cleaned up your content and you were affected by this Panda algorithm, then that's something where you'll probably see changes unrelated to Panda positively affecting your website over time. But you'll probably see a bigger jump when the algorithm is actually updated. And this is something that's not either on or off. It really has a lot of shades in between. So it might be that we see that your website is generally a little bit better, and we'll reflect that in search as we rerun those algorithms.

There's a payday loan company that has the same name as our client's mortgage company. Could this negatively affect rankings for the mortgage company? No. They would essentially be separate. So just because it has the same or a similar name doesn't mean that we'll take that into account.

Does Google fetch ignore some characters in URLs? It looks to error out on "+" signs. For example, with some "+" signs within the path, it says it redirects to a URL with a space in the path. That sounds more like a bug than any kind of actual feature. So I'll definitely pass that on to the Webmaster Tools team to take a quick look at what might be happening there.

I see well-known sites using widgets to generate keyword-rich backlinks via publishers. The guidelines are 100% clear on providing widgets with default backlinks. What's Google's official stance on widgets with embedded keywords on followed links? This is something I'd try to avoid. I think we have cleaned up the guidelines a little bit, or we might be publishing something fairly soon, to make it a little bit clearer what we expect there.

Let me see. [INAUDIBLE] has a question. I'm having problems changing from a member in the forum. I managed to do that if the [INAUDIBLE] software website for firing him millions of backlinks per day, which increased his numbers nonstop. So what should he do? So I think you're asking if a competitor is using scripts to create low quality or unnatural backlinks to your website. This is something where I'd just use the disavow tool and submit those domains with the disavow tool, so that we can take those out of our equations essentially. For the most part, we'll probably just ignore those links anyway. So it's not something where I'd say you need to look at your links every day to do that. But if you're aware of this problem and you see these links in Webmaster Tools or elsewhere with other tools, then I'd just pull out the domain names, put them in a disavow file, and essentially leave it like that.
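
For illustration, a small disavow file in the format being discussed here, with made-up domains; as John notes next, comment lines are allowed but ignored by the automated processing:

```text
# Links from the scripted spam campaign (note for our own records)
domain:spammy-directory.example
domain:paid-links.example
https://some-site.example/one-specific-page.html
```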

BARUCH LABUNSKI: You don't need to write comments inside the disavow file, right? Just use the domain: operator straight away, and there's no need to write this website is blah, blah, blah.

JOHN MUELLER: Yeah. We don't read the disavow file. That gets processed automatically by our scripts. So unless you have something flattering to say to Googlebot, there's no need to add anything. It's useful sometimes for you as a webmaster, if you're maintaining this file, so that you know what's happening there. But you definitely don't need to put anything in there for us. All right, Mihai, I think, has a question.

MIHAI APERGHIS: OK. Thanks, John. I have an e-commerce client that is using categories that have multiple products featured across several pages, like page 1, 2, 3 of the category. They have a canonical to a view-all page. So one question would be, if that page takes a while to load because of the number of products, is that a problem? And the second one is-- I'll send you the URL. The products are actually generated on the page using AJAX. So the moment you enter, you might not see the products. So when Googlebot sees the page, it might not see the products instantly. Or the page might not be cached. So I don't know what the situation is exactly. Would that be a problem for indexing or crawling the products? Or does Googlebot wait a bit to see what's going on? Or does it actually work?

JOHN MUELLER: I think with the view-all page you just have to make sure that you're not overdoing it, so that you don't have 5,000 products on the view-all page and it's like a 100 megabyte HTML page to load. But if it's a reasonable page, I wouldn't worry about the time it takes to load the full page. With regards to AJAX, I would use the fetch and render tool to make sure that we can actually see all of the content.

MIHAI APERGHIS: There are no products being displayed in the thumbnail screenshot.

JOHN MUELLER: OK. Then probably what's happening is if you look on the bottom of that fetch and render page, it mentions which URLs are blocked by robots.txt, so which ones can't be fetched. And maybe there's an AJAX URL or a JSON URL that's trying to be fetched that's blocked by robots.txt. And that's the reason we wouldn't be able to pull up the content. So if you can set it up so that we can actually fetch and render it, then that's something we'll try to take into account in search.
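
A small robots.txt sketch of the failure mode John describes, with hypothetical paths: if the endpoint the page's AJAX calls is disallowed, fetch and render shows an empty page:

```text
# Make sure the JSON endpoint the category page fetches its products
# from stays crawlable; a "Disallow: /api/" line here would leave the
# rendered page empty in Googlebot's eyes.
User-agent: *
Disallow: /checkout/
Allow: /api/products/
```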

MIHAI APERGHIS: OK. So it wouldn't be any problem if it uses AJAX just as long as it's not blocked by robots.txt. And one last thing, I did a site query, and it seemed that the canonical-- so it's not the view all page that's showing up in the queries. It's only the first page of the category. The rest of the pages or the view all page don't really appear in the search results. Is that a signal that Googlebot might be ignoring the canonical of the view all page?

JOHN MUELLER: It's hard to say. I'd have to take a look at the specific example and look at that with the team. With the rel canonical, it's important to keep in mind we have to first index the page that's not canonical in order to see the rel canonical. And then we'll keep that page indexed as well. So if you do a site query specifically for that, you'll still find it. Even if it has a rel canonical pointing somewhere else. So that's something where depending on the type of query you're doing, we might just be showing you what we think you're looking for specifically. And it's not a sign that we're kind of ignoring your rel canonical, or that we haven't gotten to it. But it might be that we're just trying to show you what you're looking for. But if you have specific examples, I'm happy to take a look at that with the team.

MIHAI APERGHIS: Right. I was asking because the view all page doesn't show up at all in the site query.

JOHN MUELLER: That sounds strange, yeah. All right, Mark, I think you had a question, too.

MARK: Yes, I do. Thank you, John. So my question revolves around sponsored stories. I'm curious-- if a group publishes a story in, say, "USA Today" with links going back to their home page, and the story's identified as a sponsored story, but they haven't nofollowed those links, is that something that's going to flow PageRank from, say, "USA Today" or "Huffington Post"? And is this a safe practice for content marketing?

JOHN MUELLER: Essentially, if these are sponsored posts and you're putting links in there, then that's something we'd like you to nofollow. We'd treat those links as being essentially paid links, because that's the reason why the content is published there. So we'd want you to have a nofollow there. And that's something where the webspam team, when they run across that, will see if they need to take action on it. And they will try to take action on that to make sure that those kinds of links aren't artificially impacting the link graph that we keep.

MARK: And would that action be taken against the host site, the publisher itself, or the destination URL?

JOHN MUELLER: It depends on the situation. So there are some situations where we can see that one site essentially just exists in order to sell links. So the webspam team might say, well, it's easiest to take action on this one website that's essentially just selling links. In other situations we see that the people buying the links are essentially only being supported by various paid links that they have somewhere else, and where it makes sense to say, OK, we'll take action on these people who are actually buying those links. So it's not that there is a one-size-fits-all situation here. It's really something that the webspam team has to take a look at and think about what's the proper approach here. How do we also reach the right people and let them know that this practice isn't OK? Because it's not just the case that we want to kind of clean this up in our search results, but we also want to let the webmasters of these sites know that, hey, this is a problem. You might not have been aware of this. We're taking manual action like this. Based on what we've seen here, you can clean this up, and then we can kind of forget about this. So from that point of view, we try to find the right approach there, to be as granular as possible so that we don't affect the whole website if there's just one sponsored post there, but at the same time to take enough steps to make sure that this doesn't cause any problems.

MARK: So if we were seeking visibility using one of these publishers, it may behoove the company or the client to request a nofollow tag be added. Even though their language around it says this is a sponsored story or a published, promoted post, if you will.

JOHN MUELLER: Yeah. I'd definitely do that. And depending on the type of visibility that you're gaining like this, this kind of sponsored content can still be very positive for the website. In that people are finding information about your business. They're able to click through those links. And then if they see that this is really good content, maybe they'll recommend it separately. But essentially that link there, that's only there because of the sponsored content, that should be nofollowed.
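
A minimal sketch of what that implies in the markup of the sponsored story, with a hypothetical URL:

```html
<a href="https://www.example.com/" rel="nofollow">Example Company</a>
```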

MARK: OK. Thank you very much.

JOHN MUELLER: All right, and with that we're a little bit over time already. I'd like to thank you all for all of your time, your questions and comments. It's been really interesting, as always. I have, I think, for this week set up the next one on Friday. So maybe we'll see some of you guys there, depending on your time zones. Maybe we'll see you in one of the other afternoon Hangouts as well. I am also looking into setting up another Google News Hangout because there were so many questions around that. And maybe we can work with some of the other teams to get some of those Hangouts out for the webmaster audience as well.

JOSH BACHYNSKI: Do you have three seconds to hang around there, John, for a private question?

JOHN MUELLER: Sure. All right, so with that I'll say goodbye and hopefully see you guys next time again. Bye, everyone.