Reconsideration Requests

Google+ Hangouts - Office Hours - 19 May 2014


Transcript Of The Office Hours Hangout

JOHN MUELLER: All right. Welcome everyone to today's Google Webmaster Central office hours Hangout. My name is John Mueller. I'm a webmaster trends analyst at Google in Switzerland, and I will work to try to answer your questions. So with that said, let's get started. Do any of you want to ask the first question? Otherwise, we'll go through the Q&A that we have here. Lots of questions have been submitted already. If you want to, you can add more questions to the Q&A along the way. If you're live in the Hangout, feel free to make yourself heard if you hear an answer that you need to have more information about.

All right. After a reconsideration request for a client, we received three examples, which included one clearly natural link. Are these example links reviewed by humans before you send them out?

Yes. We do review them. But it's always possible that, for whatever reason, something slips through that might not be completely perfect. That's part of the reason why we include three links there, so that we can make sure there's at least something in there that you can work with. If you notice that all three examples aren't really that helpful, that's something that's useful for us to know, and it helps us improve the overall process. So if you see that happening, feel free to send that link to us, or the URL of your site. You can do that, for example, through my Google+ profile, and I'll check with the reconsideration team to see what we can do to improve the quality there. But, in general, these are supposed to be examples that you can take, look at for the general pattern, and use to work on attacking that problem a little bit more broadly than just individual links. And, as always, with reconsiderations for link-based issues, the important part here is that you really try to tackle the general problem as broadly as possible. That you don't focus on individual links, but rather think about what you can do to really clean up the issue overall.
When using the disavow file, for example, make sure you use a domain directive wherever you can, so that other individual links that might be out there don't cause any problems. And really think about what you can do to convince the person who's reading your reconsideration request that they can be 100% certain that you're doing the right things now, that you've taken appropriate steps to clean things up in the past, and that they can trust you, so that they can re-include you as quickly as possible. And again, if you're seeing examples that you find a bit confusing, let me know about it, and we'll see what we can do to improve that.

Has there ever been a verified case of a site receiving a manual penalty and not being notified in Webmaster Tools? I've heard anecdotal evidence but would like to clarify or bust the myth. To reiterate: a manual penalty, not an algorithmic change affecting the site.

In the past, we would apply manual actions without notifying the webmasters. But I think in the past year or two, we've been essentially notifying webmasters of all manual actions that we're taking with regards to webspam on a website. So it doesn't matter what kind of manual action it was. If there's something that we're doing manually to demote a site, or to respond to some kind of webspam that we found on a site, that's something we tell you about.
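For reference, the domain directive John mentions uses the disavow file's documented plain-text syntax. This is a minimal sketch with made-up hosts: a domain: line covers every URL on that host, while a bare URL covers only itself.

```text
# Disavow file: UTF-8 plain text, one entry per line.
# Lines starting with "#" are comments.

# Domain directive: disavows all links from this entire host.
domain:spammy-directory.example.com

# Single-URL entry: disavows only this exact URL.
http://forum.example.net/showthread.php?t=12345
```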

AUDIENCE: That was my question, John. So hopefully that's answered.

JOHN MUELLER: OK. Great. And we have a lot of algorithms out there as well, and that's something where you wouldn't hear about it from us. But essentially, if there's something manual that we're doing, and it's a part of webspam, something that essentially falls under the webmaster guidelines that we have, then that's something we definitely notify the webmaster about. Provided, of course, they're also in Webmaster Tools. And if you add your site after that, then usually we'll forward those messages as well. But if this manual action was put in place maybe a year or two ago, then it's possible that we won't show those notifications that far back in Webmaster Tools. But we'll still show them in the Manual Actions tab. So even if there's a manual action in play that you don't have an email for, because it happened maybe two years ago, you'd still see it in the Manual Actions section in Webmaster Tools.

AUDIENCE: All right. I actually asked that question last week, but it didn't get on. So I think in between time, you've got my email anyway. So it might not be that. I think I was clutching at straws there.



JOHN MUELLER: Sure.

My site suffered a huge drop in ranking, possibly due to some spammy backlinks, which I've sent in disavow files for twice, in November and January, but with no change or any acknowledgement from Google. How can I get it back to page one?

I'd probably have to look into the details of this specific site; I didn't check this one out. But in general, if there's something algorithmic around the links, then it's important to keep in mind that these things don't clear up from one day to the next. It's really something that can take maybe half a year, maybe a year, in some cases even longer to get reprocessed, where we run through all of the links that we found there, the algorithm is updated, and the new algorithm data is pushed out as well. These aren't things where you'd see changes from one day to the next after cleaning them up, after doing the disavow. With regards to disavow files, we generally send a notification in Webmaster Tools when you send an update. But that's essentially an acknowledgement of the updated data file that you submitted to us. That's not a sign that we've processed all of those individual disavows.

Is the use of the alt tag on an image important, if there's also a title and a WordPress caption? Titles and captions show to readers; surely Google can read them, too. Is the use of alt, title, and caption, if all the same, considered repetitive or spammy?

We do use the alt tag as a way to more easily recognize what's on an image. So that's something that makes it a lot easier for us. But we also take into account the text around the images, the titles on a page, those kinds of things. So especially in situations where there's one primary image on a page and the whole text is about the same content, or the same type of content, then that's something where it's very easy for us to kind of fold that text back to the image and say, this image is relevant for this text.
Whereas if you have 20 images on a page, and the text is for the individual images, and there's no overall topic on the page, then it's really hard for us to say that all of the text on this page is relevant to all of these images equally. So in cases like that, things like alt text or direct subtitles below an image really make it a lot easier for us to recognize that this image is relevant for this specific part of the content on the page. So from that point of view, I think using an alt attribute is helpful for us to recognize that kind of thing. It's not strictly necessary; we do try to take into account the rest of the page as well. And it certainly wouldn't be spammy if we find similar content elsewhere on the page. In general, though, I'd avoid just copying it one-to-one on the page, because that's also kind of repetitive for the user if the title is the same as the subtitle on the page, as maybe a heading on the page, and things like that. So I'd try to vary it so that users can read through it without getting completely bored.

Can I transform visitors' direct links to pictures on the pages of the original poster image?

I'm not really sure. Let me just mute you for a second. I'm not really sure what you mean there. So if you could elaborate, maybe in another question, that would be really helpful.

My site seems to have an algorithm penalty, but I don't know if it's Penguin, Panda, or anything else. Can you take a look and give me some idea of what I should look at to recover?

I'd need to look at that separately. What you could do is maybe start a thread in a forum or on Google+ and drop a link to that in the [INAUDIBLE] listing. And then I can take a look at your site, at the forum thread, to see if there's something specific I can point out there.
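Going back to the alt-text answer above, here is a sketch of how an alt attribute and a visible caption can describe the same image without being word-for-word copies. The image path and wording are hypothetical.

```html
<figure>
  <!-- alt text: a plain description of what is in the image -->
  <img src="/images/matterhorn-sunrise.jpg"
       alt="The Matterhorn at sunrise, seen from Zermatt">
  <!-- visible caption: adds context instead of repeating the alt text -->
  <figcaption>Sunrise on day two of our spring hike above Zermatt.</figcaption>
</figure>
```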

AUDIENCE: I got a question.

JOHN MUELLER: All right.

AUDIENCE: Just to change it up a little bit. So you said in the forum thread that if the content hasn't changed, or if you removed the page, why should we remove the manual action? Kind of like, if a tree falls in the forest and nobody sees it. Could you explain that a little bit more? I guess for someone who's just worried he has a manual action, and he wants to get rid of it?

JOHN MUELLER: So in that specific case, I think the manual action was based on the content itself. It was some kind of spammy content, or thin content, something that was duplicated. So from our point of view, it really makes sense to resolve that manual action when we can see that there's content there that's actually useful for us to crawl and index. So if you just remove the content temporarily and say, oh yeah, I'll add something really good here as soon as I get this manual action resolved, then that's, for us, kind of a tricky situation, because you don't want to disclose what you're actually putting there. And that's one where it's easy to assume that maybe the webmaster just removed the content temporarily and is going to put the same content back up again. It's a bit different if you're starting over with a new website. Those are the kinds of situations where we might say, OK, the new webmaster has nothing to do with the old content that was there; they want to start fresh. That's something where we could say, OK, from a manual action point of view, it makes sense to clean that up. But if this is the same webmaster who just wants to put some unknown content on that site again, then that's something where the webspam team is like, you know, I don't really know if I can trust you to do the right thing.

OK. Is it true that websites are punished for bad links? It just seems too easy for competing sites to pay someone $5.00 to blast new links at a rival.

We do take action on unnatural links that we find pointing at a site. That can include things like us ignoring those links. And, in some extreme cases, it can result in us actually also demoting a website, if we think that those spammy links, for example, are really relevant and problematic with regards to search quality. So from a manual point of view, it could happen that we take action on a website based on unnatural backlinks that it has.
On the other hand, this kind of situation is something that we do actively work to prevent. We try to prevent it on an algorithmic basis, so that these kinds of links don't cause any problems, up or down, for a website, because they're usually relatively easy to recognize. And definitely also from a manual point of view: when someone from the manual webspam team takes a look at the situation, they can see, OK, these are links that maybe some competitor placed there that aren't actually doing anything. Or if they are causing problems, maybe we can work to actively ignore those kinds of things. So from our point of view, we work really hard to prevent this kind of situation from causing any problems. Theoretically, it's possible that something slips through, and we have to work to resolve those kinds of cases. And if you think your site is involved in something like this, that it might have been affected negatively by some competitor, for example, that's always useful for us to know about. So you could, for example, send me a note on Google+. In almost all of the cases that we looked into where someone reported that this was happening with their website, we noticed that either those bad links had been there for a really, really long time, which is something we sometimes see in the forums. They'll say, a competitor just recently started buying links against my website. And we look into the history, and we see, well, this website has been buying links for the past five, or six, or seven years. And usually competitors aren't that persistent, and don't keep on buying links for a site that are actually helping it during this time. So from that point of view, that starts to look a little bit weird. Or, another really common situation, the website just might not be ranking where it used to rank for completely different reasons than these links. So, again, we work really hard to prevent this kind of thing from causing any problems.
It's, I imagine, extremely rare that they would cause some problems. And if you see something that you think might be running into that, you're always welcome to let us know about that so that we can take a better look.

AUDIENCE: Hey, John. How you doing, there? I just wondered if that mistake did happen, would that website, if they got a manual penalty, would they have to still wait for a refresh? Or would Google have some way of actually manually fixing that problem and having them reenter into the results again?

JOHN MUELLER: With regards to manual actions, that's something, if we process the reconsideration request, if we have that information, then we'll try to take that into account immediately. With regards to algorithmic problems, usually that's more something where we'd have to figure out what we're doing wrong algorithmically and resolve that, which sometimes is a longer term issue.

AUDIENCE: So, in theory, if a mistake was made, and as minute of a possibility as that could be, that website could actually then potentially wait up to six months for a refresh before they could get themselves where they supposedly should be? In this remote situation?

JOHN MUELLER: It's theoretically possible. But I'm not aware of any case where we ran into this kind of a problem, because usually the algorithms do try to take a look at the bigger picture as well. And in pretty much all cases that I can think of where this could have been a problem, the bigger picture was pretty much OK. It was just a small section where someone was buying links on Fiverr, or some other SEO forum, or something like that. And the algorithms, in cases like that, wouldn't count this against a site in a way that would be completely visible. So theoretically it's possible that this situation is out there. In practice, I haven't seen anything like that. So it's really kind of hard to put it into perspective.

AUDIENCE: Yeah. My concern, as always with these things, is that there are so many things that Google isn't seeing. You know, especially in the small to medium size market. When it comes to cars or houses or some of these really big terms, that seems to be where a lot of search results are being checked manually by the webspam teams. And there seems to be a hell of a lot of spam. I mean, in my industry, the people who are number one, I've done a backlink audit on them, and they don't have one link that is natural. Not a single link. Yet they're number one. And it doesn't matter how many times you report it, or if I send you a report about it, it doesn't get sorted out. So my concern is, how much is Google actually missing? And if you're not looking, in depth, into the reports that we can all send in, you need more people to analyze these things. There's no way that anybody could look at that website and find even one link that was natural. Somebody's got to get on top of it.

JOHN MUELLER: It's not really possible for us to say what we don't know, right? So from that point of view, that's really tricky. I think an important aspect is also that we do try to be active in ignoring things that are problematic, to a large extent. So sometimes what might happen is you run across a site, and you say, oh, this site is clearly using keyword stuffing, and it's only ranking there because of this keyword stuffing. But actually, when we look at it from our side, with our algorithms, we're like, oh, the algorithms picked up on the keyword stuffing, and they're actively ignoring it; it's still ranking because of these other signals that we have. But I understand it's really tricky, externally, to see what's actually happening internally. You also don't see which manual actions and notifications go out to other sites. So from that point of view, it's important for us to get these spam reports so that we can actually go through them and see what we need to do differently. But it's also important, from our point of view, to realize that we can't take all sites out of the index that are doing something sneaky. So we have to try to find the right balance there. But I totally agree: the cases where we're getting it wrong, where we're missing something, are important points to bring up and helpful for us to have.

AUDIENCE: Yeah. All right. Cheers, then.

JOHN MUELLER: Yeah. All right. Here's something different. For a bilingual site, Arabic and English, why does Google change the title tag of an Arabic page and append a brand name in English? And what can we do from our side to minimize these occurrences?

That's an interesting point. In general, we try to recognize the language and the geotargeting of these pages. One thing I'd recommend doing there is making sure you have the [INAUDIBLE] markup set up properly, so that we can understand the difference between these individual pages. The other thing you could do is send me some examples. Especially when it comes to titles, it's important that I have a query that I can try out here in my browser and get the same results. With titles, it's important to realize that we take various factors into account, also from the query side. So if you just do a site: query for a site, you might see a different title than a user would see if they were actually to search. So understanding which language setting a user is searching with, and what they're searching for, really helps to double-check the title that's actually being shown in search. That's something we can bring to the engineers and say, hey, for users searching in Arabic, for this website, for a normal, generic query, we're showing the wrong language site title, or the wrong language subtitle, or something like that. If we can reproduce it one-to-one, we can bring that to the engineers, and they can use it to improve the algorithms. But especially with multilingual sites, it's also important that you give us as much information as possible from your side as well.

Is there a trusted place one can send a site's domain for checking over any signals that would cause it not to rank, despite looking fine? It has no shady history, but it will just not rank, despite Google having indexed it.

It's tricky.
I mean, a lot of things are hard to see, one-to-one, externally, with regard to ranking. Just because a site looks nice doesn't necessarily mean it'll rank well. So that's something where I'd, as a first step, try to find some peers who could take a look at your site, who could take a look at how it's interacting with the web, to see if they can spot something that might be problematic, that might be worth bringing up. Or that might point at a technical issue on the website itself. Or that might point at something that you're missing. So perhaps you have a fantastic website, but nobody knows about it on the web. It's not linked from anywhere. Nobody is talking about it. You essentially have nobody visiting your website. Then that's something that makes it really hard for us to see how we should rank this page. If nobody's linking to this page, if nobody is sharing it on social media, does that mean that this page is really relevant? Or maybe people are looking at it and saying, well, this doesn't match my interest. This doesn't match the site's topic. I can't really trust this. These are all things where peers can give you some information as well, and probably help with a lot of things that you could work on. So going to a place like the /r/webmasters forum, to various Google+ communities, probably also Facebook communities, LinkedIn communities (there are lots of places where webmasters interact), and bringing that up, asking for advice, is definitely a good idea. If you've gone through that kind of a process, and you really can't find anything that looks in any way remotely interesting or weird with your website, then you can definitely just send it to me on my Google+ page. And I can take a quick look on our side to see if there's anything problematic happening. I can't promise a direct response, but if I see something weird happening with your website, then I'll definitely talk to our engineers about that so that we can improve the situation.
In some cases, I can give you some information. So that's always worth trying. But I'd really primarily work together with peers to see what kind of information you can get from them, what kind of experiences they've gone through with their websites that might be relevant for you as well.

Can I redirect visitors' direct links to images in Google Images to a page of the original poster image?

I think this is mostly about the image [INAUDIBLE] page that we show there. So from our point of view, we generally recommend that you let users look at the preview image in Image Search, and then click through to go to your website. We don't recommend catching the referrer and then sneakily redirecting them to your website, because sometimes that's just confusing for users.

I have a site where the content is good, but there's no organic traffic.

Again, this is the kind of situation where I'd primarily work together with peers to see what they think about your website. What they think about the technical situation of your website, how it's interacting with other websites, for example. There isn't any one-size-fits-all, magic solution to make any website rank. You really have to work and make sure that you're following the best practices, that you're not doing anything wrong from a technical and from a quality point of view, and that you're doing the right things to get the word out about your website as well.

How are content aggregators different from content farms?

Good question. I don't actually know for sure. Content aggregators sound like sites that are taking feeds from various other websites and republishing them, which is essentially a collection of information that's already available on the web. Content farms are generally websites that hire writers to create content on just a variety of topics. Sometimes that content can be good; sometimes it's kind of lower-quality content.
So from our point of view, with content aggregators, it's more that if you're only aggregating content, not providing any additional value, that's something where the webspam team might say, it doesn't really make sense to send any crawling resources to this website, because we can't pick up anything unique; we already know about all of this content. With regard to content farms, usually we'll still crawl and index those pages. And if the overall quality is good, then we'll try to show that in the search results. On the other hand, if the overall quality is really bad, then that might be reflected in the search results as well.

Why is Google no longer going to tell the public and webmasters about future Penguin updates? Do you think that's helping webmasters understand if their site has been hit by an algorithm update or penalty?

As far as I know, we're still talking about Penguin updates. We haven't had any recently, as far as I know. So that's probably why you're not hearing about that. With regards to Panda updates, we've generally switched to a more, let's say, incremental schedule for Panda updates. So we don't talk about them that much. And it's not something that happens once a year; it's just something that happens on a regular basis. So from that point of view, we don't say too much about that, because it's just a normal part of our process. In general, with regards to telling webmasters when their site is affected by individual algorithms, one of the difficulties there is that most of our algorithms are built in a way to improve the search quality, and not built in a way to inform the webmasters about specific issues. So it's really hard to say, given that this algorithm is affecting this website, what does that mean for the webmaster? So that's really kind of a tricky situation for us.
And that's something where just because one algorithm doesn't particularly like one website doesn't necessarily mean that there's anything specific that you need to do differently on that website. It might be that this is just affecting the rankings in a specific way that your website was showing up for more relevant queries, for the right queries for your website, and maybe not showing up for other queries. And that's essentially what algorithms are supposed to be doing. And not a sign that you're doing anything wrong. That said, I think it would still be interesting to give some kind of algorithmic feedback to webmasters at some point. I don't think that's going to happen anytime soon. But I do think it would be useful to give some general kind of feedback to the webmasters so that they kind of know what's happening behind the scenes and specifically if their site is strongly affected by any of our algorithms. So that they kind of know which direction they should continue working on. So that they're not stuck in this vague fog and have no idea which way will lead them to a little bit more success. So maybe that can happen at some point. I wouldn't hold my breath. It's not going to happen anytime soon. But this kind of feedback that we get from webmasters in this regard is always helpful to kind of drive that idea forward with our engineers.

AUDIENCE: John, if consistency is something we look for when it comes to the Penguin update, we'd expect one this week. Is there any sort of chatter?

JOHN MUELLER: I don't have anything specific to announce. So a lot of times, even from our side, we hear about these afterwards. So I'd stay tuned to Barry's blog. He's usually the first one to pick up these kinds of things.

AUDIENCE: Yeah. Cheers.

AUDIENCE: Yeah. I think you guys are doing something. You're not telling us about it, but you're doing something.

JOHN MUELLER: We're always doing something.

AUDIENCE: Doing something pretty big. I think it's all you. No, I'm just joking. But there's only so many times I can ask you guys, so.

JOHN MUELLER: Yeah.

Does Google use nofollow links for some purpose in its search algorithm? If that's not the case, why are they shown in Webmaster Tools? Are nofollow links important enough to be in Webmaster Tools?

So in general, we recommend adding a rel=nofollow to links that you don't want to have counted in our algorithms. Those could be links, for example, that are advertisements on other websites. And from our point of view, these links still drive traffic to your website. So that's something that we'd like to at least show in Webmaster Tools, so that you're aware of those links being there. We don't use them for our search algorithm; we wouldn't use them to forward PageRank, and we wouldn't use them to forward any other signals from those pages. So if these are links that you don't want to have associated with your website, having a nofollow there is absolutely fine. And it's not a sign that this is something that you need to take care of, or that you need to remove those links completely. If you have a nofollow there, then those can be normal links on someone else's website. They can be advertising driving traffic to your website. That's completely fine from our point of view.
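As a minimal sketch (with hypothetical URLs), the nofollow John describes is a rel attribute on the link itself:

```html
<!-- A paid or advertising link: rel=nofollow asks Google not to count it -->
<a href="https://example.com/widgets" rel="nofollow">Sponsored: Example Widgets</a>

<!-- A normal editorial link carries no rel=nofollow -->
<a href="https://example.org/related-article">Related article</a>
```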

AUDIENCE: Hi, John. Can I step in with a question?


AUDIENCE: Thank you. The mobile website is distinctly pointed to and has totally separate statistics there now. So should we mix them, if we want to completely [INAUDIBLE] what's happening on the respective website, as related to Webmaster Tools?

JOHN MUELLER: If you have everything on an m.domain, I'd definitely add it separately to Webmaster Tools so that you have this information. With regards to mixing them together, I think that's ultimately up to you how you want to treat those sites.

AUDIENCE: Yeah. I have the normal website, and the mobile version is pointed to the m. subdomain. So strictly separated statistics for keywords and stuff, right?

JOHN MUELLER: Um, to some extent, of course. But, at least per our recommendations, we say you should have on your mobile pages a rel=canonical pointing to your main pages, so that we understand this connection between your mobile and your main desktop pages, and so that we can use your main desktop pages primarily as the ones that we crawl and index for search. So most of those statistics will probably be for the desktop pages, or on the desktop pages in Webmaster Tools. Sometimes you might also see some at the m. level, so I wouldn't necessarily take the statistics in Webmaster Tools as a sign that this is everything that Google is doing with the site. We essentially show part of the mobile information also in the desktop version.
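The rel=canonical setup John describes, for a separate m. subdomain, pairs an annotation on each side. This is a sketch with a hypothetical example.com; note that the rel=alternate tag on the desktop page is part of Google's documented separate-URLs configuration, rather than something John spells out here.

```html
<!-- On the desktop page: www.example.com/page -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">

<!-- On the mobile page: m.example.com/page -->
<link rel="canonical" href="https://www.example.com/page">
```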

AUDIENCE: OK. Thank you.

JOHN MUELLER: So it's a bit confusing. We're trying to see what we can do to kind of fold everything together there. But it's a complicated topic. And it's kind of gotten more complicated since people have been finding ways to kind of create a mobile site that works more or less and done a lot of these things before we did some of our recommendations for smartphone sites, for example.

AUDIENCE: Yeah. Well, it was your suggestion. I mean, you guys said, when you switched to the new update, to add the m. version to Webmaster Tools separately, so.



JOHN MUELLER: How do I get Google to rank my site first?

That's really a tricky question. Such an easy question and such a complicated answer, I guess. We use over 200 factors for crawling, indexing, and ranking, and we try to recognize what your site would be relevant for. We do that, on the one hand, based on the content that you have on your website, and on the other hand, also based on information that we have from outside of your website. So that could be things like links to your website. What I usually recommend doing, in a very general case, if you have a website, is first of all making sure that you have the absolute best content of its kind, so that your website is really something that would be seen as an authority for this type of information, and so that you have something on your website that other websites don't. So that, in the worst case, if you come to us and say, hey, this is my website, these are the keywords I'm targeting, we can take that information and go to our engineers and say, hey, this site should clearly be absolutely number one for these queries. If we're not showing it number one, we're doing something wrong; it's our problem, and we need to solve it. On the other hand, if you come to us with your site and you say, my site is just as good as all of these other sites on the first couple of pages, then our engineers are going to look at that and say, well, if it's just as good as these other sites, why should we show it as well? We already have some that are just as good. So instead of trying to be just as good as some others, really make sure that your site is by far the best of its kind. Going in that direction will essentially be more of a long-term way to success, and not something where you'll see changes from one day to the next. Just creating the absolute best website for a specific topic today and putting it online doesn't mean that tomorrow it's going to rank first.
It's something where Google has to start collecting signals about your website, has to see how people are linking to your website, recommending it in other places. And we compile that over time. And as we see that everyone agrees that your website is the absolute best website for this specific topic, then we can start kind of showing that in search as well. So this is something where you have to work hard on your side to kind of create this fantastic website. And you have to take the first step and make sure that people are seeing it, that people are able to talk about it, that people are running across your website and seeing that. So doing things like putting it on your letterhead so that your clients can go to your website. Getting the word out, maybe through advertising, is a possibility. But essentially, you have to get the word out about your website so that people can go to your website, and they can recommend it if they really like it. And then we'll start collecting those signals and seeing how people are responding to your website, looking at your content, understanding how it fits in with the rest of the web, and show that appropriately in search. So there's no shortcut for these kinds of things. You really have to work hard and work for the long run. And make sure you're providing something absolutely fantastic. We have a manual penalty because of unnatural inbound links. The sample links in Webmaster Tools show old natural links from 2009. After cleaning up about 1,000 links, our reconsideration request was rejected. How can we get more examples to be in line with Google? In general, if you do a reconsideration request, you'll get some sample links back. So that's one way to get a little bit more information.
The other thing to keep in mind is, when you're cleaning up 1,000 links, it's important for us that you kind of go past looking at the individual links, and also look at maybe the overall site and do something like a domain-level disavow in your disavow file. So that we don't pick up other links on this site. For example, if a website were to have used forum spam at some point, taking one thread's URL and saying, I want to disavow this individual thread here, isn't really going to be that useful. Because forums tend to have the same content a lot of times on different URLs. So you might disavow this one here, but you have all of these other ones here on this same site that are essentially the same thing. And that's something you'd miss if you just disavow that individual URL. So from our point of view, doing a domain-level disavow really helps to make sure that any of these links are blocked and dropped from our point of view. Also, looking at the general pattern of the links that you find in Webmaster Tools is really helpful. We do primarily focus on the links we show in Webmaster Tools, but if there's a general pattern there, then that's something maybe worth looking into a little bit more. For example, maybe you notice that we have a lot of links in link directories, and you clean up those individual link directories listed there. But you know there are probably a couple hundred other link directories that you also submitted to, so that's something you could go and clean up as well. That's something that the webspam team, when they process your reconsideration request, also sees. They look at, on the one hand, the individual issues that they found there, but they also look at the bigger picture and see, oh, from these 5,000 links that we thought might be problematic, 90% got cleaned up. That's a pretty good success rate from our point of view. We'll let that go through.
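The domain-level disavow described here is expressed with a `domain:` prefix in the disavow file, a plain UTF-8 text file uploaded through Webmaster Tools in which `#` starts a comment. The domain below is hypothetical:

```text
# A single URL only blocks links from that exact page:
http://spam-forum.example/thread.php?id=123

# A domain-level entry blocks links from the whole site,
# covering duplicate thread URLs as well:
domain:spam-forum.example
```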
On the other hand, if they look at it and say, oh, from these 5,000 links that we thought were problematic, 500 of them got cleaned up, then that starts looking like maybe the webmaster should be doing a little bit more work to actually clean that up. So sometimes it's a bit of an iterative process. I just recommend that you don't go through this and say, oh, well, with this disavow file I got rejected, I'll add five more links and see if that's good enough. But rather, take a step back and really take a significantly bigger step to clean up more. And I guess the other thing to keep in mind, especially with regard to links, is that we also use these algorithmically. So cleaning up a manual action doesn't necessarily mean that your website will pop back up into the position that it was in before. On the one hand, the support from those problematic links will be gone, so that's something that will be missing there. On the other hand, our algorithms might still be kind of taking those links into account and counting that in your site's rankings as well. So that's something where cleaning these up, getting rid of this manual action, is a great first step. But I wouldn't see it as the last step that you have to take to actually clean up these issues. In the video about titles and SERPs, Matt recently said, ideally the title tag should not only reflect the content of the page, but also the greater context of the whole site. Could you explain this and give an example of how to do that? I don't actually remember that video, so it's really hard for me to give an example.



AUDIENCE: It was some weeks ago, I think. It was about titles and the SERPs, and why Google replaces them in some cases, and what users should do in order to avoid that. He said, for example, we should use, of course, relevant keywords in the title, and a short title tag. So he gave some tips. And in this context, he mentioned that ideally the title tag should not only reflect the content of the page alone, but also the greater context of the whole site. This would be good for the user.

JOHN MUELLER: I'd have to watch that video again to reply in that detail. From our point of view, we tend to replace these titles when we see, for example, that there's keyword stuffing in a title. That's a very obvious situation. Or that the title is just like "home" or "default page." Those kinds of situations are generally fairly obvious. When we have long titles and recognize that we need to shorten them a bit, that's something where we might replace them. Usually what we also do when we replace a title is to give some context of the page within the site, but also some information about the site itself. So that could be, for example, from the site's branding, the domain name, the home page. Something like that. So that's essentially what we try to create when we kind of algorithmically create these titles. If you want to avoid that, kind of mimicking that kind of situation can help. So if you have a short title that also includes something about your domain or about your website itself, that helps us a little bit. But I'm not specifically sure what he was alluding to with regards to the context of the whole site.

AUDIENCE: I was a little bit surprised about that advice, because my first thought was there is already an ideal place for the greater context of the whole site. And that's the home page. And usually the user only has to make one click to reach it. So I don't think it's really necessary to give information about the whole site on every specific page.

JOHN MUELLER: I'd have to look at the video again. I'm not really sure what I can say any more. But we do try to at least bring some information about the website in general. Usually that's just something that we think matches the site overall. Could be the domain name, could be the company name, something like that. And that's sometimes helpful in titles because it gives the user a little bit more information on, is this a company that focuses on this specific issue? Or is this the home page of this company? Or is this a specific article within a bigger website? And sometimes it's relevant for the user to understand that distinction. But I don't know. I'll go take a look at that video again.

AUDIENCE: You don't think it could mean that it would be a good idea in the title to use, for example, generic keywords? For example, there is a page about green apples, and the title tells not only about apples but also about fruits or nutrition. You know what I mean? The bigger, generic keywords?

JOHN MUELLER: I wouldn't focus too much on keywords with regards to ranking for titles, but rather on making it clear to the user what these pages are about. So instead of seeing the titles as a way to improve the ranking, I'd really try to think about, for this specific query, if we were to appear in the search results for that, what title would tell the user most about the kind of content they could find on this page? And that's sometimes easier said than done. So that really also depends on the query itself. So what I'd do there is look at the specific queries that you're looking at, so go to Webmaster Tools, the top search queries. Look at the search results and think about, if one of my pages were to show up here, what title would I have to provide there to make it clear to the user what this page is about. So that if they might think my page is relevant, they'll go to my page. And maybe it means that the title is something that doesn't really work so well there at the moment, because maybe your page isn't really relevant for a large number of users searching for that query. But it's something where it's really hard to give very generic advice because it's so different for individual websites. That's also part of the reason why we change the titles depending on the queries sometimes.
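A title along the lines of this advice would name the page's specific topic first and the broader site context after it. The page topic and site name below are made-up examples:

```html
<head>
  <!-- Page-specific topic first, site context after the separator -->
  <title>Green Apples: Nutrition Facts - Example Fruit Guide</title>
</head>
```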

AUDIENCE: I see in the chat someone gave the video.

JOHN MUELLER: OK. Great. I'll copy that out.

AUDIENCE: John? I have a question going back to what you were talking about earlier, the nofollow links. So you were saying that they don't pass page rank or have any bearing on the ranking. In the past, there was discussion about page rank sculpting. And then it was explained that those links don't give away-- I mean, whether the nofollow links will still give something, or the authority is just dissipated from them. Is that the same kind of case for all nofollow links? In other words, some webmasters used to try to make half of their links nofollow so that they would channel the page rank to all of the most important content on their page. And then people were getting quite into that. And at the time, Matt said that's not a very good idea, to just let it flow. So my question is, about the nofollow links, are there cases where that page rank is not dissipated? Or is it always handled in the same manner?

JOHN MUELLER: It's essentially handled in the same manner. So we definitely don't forward any page rank through those nofollow links. So from that point of view, that's essentially the same. How we handle the other links on the page, I believe we've said that we do still split it up among the other links that we find there. In general, I found in practice, even before we announced that, this whole page rank sculpting was something that was causing websites more problems than actually helping them. In the sense that our algorithms understand what it means to have something like a privacy policy page that you link to from all of the pages within your website. That's not a concept that's completely foreign to us. And that's something we have to deal with. And just because this page is linked to from all of the pages within the website doesn't necessarily mean it's actually the best page of its kind. So that's something where, if you're using page rank sculpting to hide the links to the privacy policy, then suddenly our algorithms look at that and say, well, this doesn't look like a normal privacy policy page. It looks like it might be something even more important, because it only has links from a handful of pages within the website. And suddenly we essentially can't understand that this is something that's a completely normal construct on a website, and we try to treat it in a way that isn't really what the webmaster was trying to do. So in a lot of cases, this kind of page rank sculpting is something where the webmaster will think that they're channeling page rank in a way that really makes sense for them. But actually our algorithms are looking at it and saying, this doesn't look like a normal website. I don't really know how to treat the links on this website. I don't know what the context of these individual pages within this website is.
So that's something where they are essentially shooting themselves in the foot and causing more problems than they're actually hoping to solve by using this kind of page rank sculpting.
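The even-split behavior described above can be sketched as a toy model. This is only an illustration of the publicly described post-2009 handling, not Google's actual algorithm: each outgoing link gets an equal share of the page's outgoing page rank, and the shares assigned to nofollow links simply evaporate rather than being redistributed to the followed links.

```python
def outgoing_shares(page_rank, total_links, nofollow_links):
    """Toy model of splitting a page's outgoing PageRank across its links.

    Returns (share_per_link, total_passed, total_evaporated).
    """
    if total_links == 0:
        return 0.0, 0.0, 0.0
    # Every link, followed or not, gets an equal share...
    share = page_rank / total_links
    # ...but the shares on nofollow links are dropped, not redistributed.
    evaporated = share * nofollow_links
    passed = share * (total_links - nofollow_links)
    return share, passed, evaporated

# Eight outgoing links, one of them nofollow: one eighth of the
# outgoing PageRank evaporates, seven eighths is passed on.
print(outgoing_shares(1.0, 8, 1))  # (0.125, 0.875, 0.125)
```

This is why nofollowing half of a page's links never "channels" extra weight to the remaining links in this model; it only shrinks the total amount passed on.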

AUDIENCE: Right. So what I was getting at, too, though is that if they are using nofollow links, or certain sites do use mostly all nofollow links on their sites, as a habit, does that essentially mean that they're not giving away page rank from those connected pages?

JOHN MUELLER: If those links are nofollow, then we definitely don't forward page rank through them. So that could make sense. For example, like I mentioned with advertising, it could make sense for user-generated content where you can't absolutely trust these kind of links. So blog comments, those kind of things. But essentially if we find a nofollow on those links, we're not going to forward any page rank there.

AUDIENCE: OK. So the page rank not being forwarded also means that the page rank is not given from that page. So that page, its page rank has not been reduced any because it wasn't passed along.

JOHN MUELLER: I am not sure how specifically that happens algorithmically. But generally, that's not the level of interaction that you need to focus on with a website. That's something that we try to handle internally, automatically, to make sure that it's doing the right thing there. It's not something where we'd say, oh, well, seven eighths of the page rank is kept on this page, and only one eighth is kind of dropped because of this nofollow link. That's really kind of more the level of information that we only use for really extremely low-level diagnosis, internally. That's not something that we worry about when we're debugging a website or its ranking.

AUDIENCE: All right. Yeah. Because I was also thinking about a lot of the nofollow links throughout some of the Google platforms themselves.

JOHN MUELLER: Yeah. I'm not sure about how that works. The thing to keep in mind is we don't actually have any SEOs at Google, so sometimes engineers do things in strange ways. And we end up with something that's sometimes following best practices and sometimes not actually following best practices when it comes to links like that. So if we spot that something's going wrong, we might let them know about that. But it's not something where we try to optimize for page rank flow or anything like that. We essentially try to make our websites speak for themselves through the quality of the service that we provide, and not by trying to do any specific SEO.

AUDIENCE: All right. Thanks.

JOHN MUELLER: All right.

AUDIENCE: Can I just ask you a very quick question just before we go?

JOHN MUELLER: Yes. As long as no one kicks me off here [INAUDIBLE].

AUDIENCE: OK. There's been a lot of people talking about the idea that if you've got a website that's penalized-- basically, if you've got too many problems, start a new domain. If you do start a new domain, can you give the old one up and use canonical tags to point to the new site? And will penalties pass? Sometimes there's a good reason to keep that brand itself up. You know, what would be the complications?

JOHN MUELLER: So essentially with the rel=canonical tag, you'd be forwarding the signals from one site to the other. And that would include things like penalties and our algorithmic signals as well. So that would be seen very similarly to a 301 redirect. And if you want to keep the brand, keep those kinds of things, one thing you could do is, for example, use robots.txt to block crawling, and then do a redirect for the users. So users find your new website, but search engines see them as separate websites. But I think that's something that would be relevant only in very extreme cases. But it's something maybe to keep in mind if you run across something like that.
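A minimal sketch of that setup on the old, penalized domain would be a robots.txt file that blocks all crawling:

```text
# robots.txt on the old domain - stop search engines from crawling it
User-agent: *
Disallow: /
```

For the user-facing redirect, a client-side hop such as a meta refresh pointing at the new domain (hypothetical here), e.g. `<meta http-equiv="refresh" content="0; url=https://new-brand.example/">`, would send visitors on while the robots.txt block keeps crawlers from ever seeing it. One caveat: URLs blocked by robots.txt can still appear in the index without content, based on links pointing at them.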

AUDIENCE: Yeah. I think some people are worried that they don't want to have to rewrite all of their content if they know the problem is the backlinks. So how do they put the content on a new site and not then get [INAUDIBLE] for duplicated content?

JOHN MUELLER: Yeah. That might be a possibility there, but it's probably worth looking into it in detail first.


JOHN MUELLER: All right.

AUDIENCE: Thanks, John.

JOHN MUELLER: I need to go. So thank you all for your time. Thanks for the questions. I see we have a bunch of questions leftover. If you want to add a question to the next one, feel free. And that'll give us a bit more time to vote it up. All right. Have a great week, everyone.

AUDIENCE: Same to you, John. Thank you. Bye. Thanks. Great show.