Reconsideration Requests

Google+ Hangouts - Office Hours - 25 April 2014


Transcript Of The Office Hours Hangout

JOHN MUELLER: OK, welcome, everyone, to today's Google Webmaster Central Office Hours Hangouts. I am John Mueller. I am a webmaster trends analyst at Google in Switzerland. I work a lot with webmasters and web search engineers to try to make sure that we're all speaking the same language, and that we can share information accordingly. Looks like we have a pretty full Hangout already, lots of people here, some regulars, some I don't think I've seen before, which is great. We are doing this Hangout on the Google+ page from now on so that we have everything combined in one central place. I hope that ends up working well. Otherwise, we'll have to find something else, maybe. Let's see how it goes. We have a bunch of questions in the Q&A already. But as always, if one of you guys would like to start, feel free to go ahead and ask a first question.

LYLE ROMER: Hey, John, good morning. Just a question on something we've seen recently and want to see if this is something to be concerned about. Basically, we posted some original blog content from an event we were covering. And after that, a few days later, we took the h1 text, put it in quotes, searched in Google, and the Facebook page links that we put out to the blog outranked the blog itself. And then when you remove the quotes, the blog ranked 160-something. And what we want to look at is, is this some kind of indication that our algorithmic penalty is so severe that we have no hope of recovering, and we need to look at something else? What do you think about that?

JOHN MUELLER: Yeah, that's always a tricky question, tricky problem. I think to some extent what might be happening is that our algorithms are looking at your website in general, like the whole website, and not really sure how to work with this content. By making sure you have it available on Facebook or on some alternate channel as well, I think you're pretty well set to at least have your content accessible in Search. So from that point of view, I think that works fairly well. With regards to the website as a whole, the whole domain, those kinds of issues, that's something that sometimes just takes a bit of time to settle back down again after you've cleaned up whatever issues might be out there. And for some of these algorithms, there are things where they seem to take a fairly long time. And that's something we're definitely working on improving, to make sure that they're a little bit faster. But it's not something where we'd manually be able to switch anything over and say, OK, this algorithm doesn't affect this website anymore. So that's something you kind of have to work with for the moment, by making sure you have your content accessible on an alternate domain, like on Facebook, for example, or on the Google+ page, or wherever else you're also active. I think that's a great way to at least get your content out there and findable in Search in the meantime.

LYLE ROMER: OK, thank you.

MALE SPEAKER: John, I have a question as well that I was eager to ask you. And it's related to a problem that stemmed from before we were hit by Panda and by Penguin. And we're sort of backtracking over all of the issues we had. One of them is that we are seeing our meta tags being rewritten by Google as an estimation of what you think is best for us to be displayed as. And one of those is, for example, an office space listing. And we are providing an office space in, let's say, NoHo or SoHo in central London. And you're rewriting our meta tag as a virtual office. This is actually a misrepresentation of what we have to offer on that specific page. So when the customer lands on that page, he's expecting a virtual office solution. He's seeing an office space solution. He's unhappy with his Search result, which is not what we have specifically put into our meta tags. And I'm unhappy with the result. And it is a misrepresentation of our company, and it's creating high bounce rates. How do I solve this issue?

JOHN MUELLER: Essentially what's happening there is our algorithms are rewriting the titles, the anchors that we show in Search, and that's essentially the way that we link to these websites. And for a lot of times what we've seen is that the titles in Search are suboptimal, the titles presented on pages are suboptimal, so the algorithms try to take those titles and rewrite them into something that makes a little bit more sense. If we're getting that wrong for your website, it would be great to have those examples, like the queries that you used to find those pages, and the titles that we were rewriting which you might have better titles for. So that's always good examples to have.

MALE SPEAKER: I can send you an example by email. I did send you one last week. Obviously you've been away. It has been a problem for many, many years, and I presume it obviously has to be a problem with our content and with-- there has to be something along the way where Google doesn't understand our website. It can only be our fault. But my problem is, it's a misrepresentation of what we actually have to offer. And I have a big problem with Google trying to determine what it is we have to offer, and then telling our customers that what we are trying to offer is different from what our meta tag is actually specifically telling them we have. So they only arrive at our site disappointed. And that's a problem. That is quite a serious problem.

JOHN MUELLER: That's something we can take into account for future algorithms, but it's essentially the way that we link to those pages. So we can't guarantee to always take the title tag. We'll try to take the title tag when we can use that. We'll try to use some information about the website in general in those titles, as well. There's a limited space available there. And depending on the title that we have to work with from the pages, we might have to rewrite more, we might have to rewrite less. So in the Help Center article, for example, we list a bunch of things that we would rewrite. For example, if it's mainly a collection of keywords, if it's a really long title, if it's something that looks like keyword stuffing or a spammy title, those are all things where we try to rewrite the title to make it a little bit clearer. We might get some of these wrong. That's definitely a possibility. So having those examples, if you've sent them my way, is a great thing.

MALE SPEAKER: Yeah. In this situation, obviously, we've come back and forth with you over a long period of time, maybe perhaps a year or more. I'm very aware of spammy type of tags. Our title tags are very well written. They're very specific. And as far as I'm concerned, this is a rather serious problem because it is a misrepresentation. And I feel like there's a greater problem here. And after all, if Google is going to represent a company by rewriting its title tags, it really has to get it right. This is quite a serious problem to tell a customer you have one product, and then to actually simply display a different product-- these are very different things, an office space versus a virtual office. And it's the difference between selling somebody an iPad or a desktop computer. They're very different. And to represent a company that way, it's very disappointing to see. Like I said, it's been happening for a long time. And now that we've done the hreflang situation we talked about last week, we're starting to see rankings recover, and starting to look at a particular situation that is problematic. And I'm very unhappy about that, to be honest with you. It is a serious concern.

JOHN MUELLER: Yeah, I can take a look at that and pass that on to the engineers. I can't promise anything in that regard because this is something that we have to do on scale. And improving the algorithms definitely makes sense for these kind of cases, but we wouldn't manually be tweaking them.

MALE SPEAKER: I'm not expecting any manual tweaking. I just think it is wrong to make suggestions about people's businesses and then say that to customers. And I think that this is a greater problem. I'm speaking on behalf of a lot of people I've seen this happen to on various other forums, where they're not happy with their titles being rewritten and being represented in the way Google wishes them to be seen. I think you're taking a rather big legal risk there, rewriting somebody else's representation of their business and presenting them as such.

JOHN MUELLER: Well, essentially it's just the link from our website to your website. And like you would use any specific anchor text on your website when linking to someone else's website, this is essentially what we're doing here. I understand your point of view, as well.

MALE SPEAKER: If I was to link to your blog, for example, or your Google+ page, and use something that was inappropriate, that would be a defamation of character, or it would be a misrepresentation. So yes, I could link to you saying whatever I want, then. But from a business and professional point of view, I don't think it's appropriate. And so that's where I'm coming from. Yes, I could link to you, saying whatever I wanted. Is that legal? That's a different standpoint.

JOHN MUELLER: OK. We'll take a look at that separately.

MALE SPEAKER: Sorry. Yeah. Sorry. [LAUGHS] I'm just saying it just because I don't want the high bounce rates. And I don't want you to rewrite our [INAUDIBLE] and get them incorrect, especially after all of the things that we've [INAUDIBLE] such a long period of time. That's where I stand with that.

JOHN MUELLER: All right.

TILL VOLLMER: Hi. My name is Till, hi.


TILL VOLLMER: We are doing [INAUDIBLE]. On the 2nd of April, on one of our pages, there was some malware alert. And when we analyzed that, we found out that someone actually used an image that was coming from a blocked website. So we resolved that. Ironically, this image came from a Google Search. So that's the ironic thing here. We have a functionality where you can choose images from a Google Search, and this image was actually on a blocked page. So we resolved that issue, that's very nice, OK. But now our Search rank went down. When you entered MindMap, we were between one and three in the Search rank. Now we are at 18, and that really sucks, of course. And the question is, I know that there's a procedure and it takes 90 days, approximately, or something, but you can probably share a little bit more about that. But what really sucks in my eyes is, how can we protect ourselves from that? Because it's user-generated content. It's only an image, so there is no real security risk there, I guess. It's an image check. So that would be interesting. How can we protect ourselves with user-generated content, which even, ironically, was presented by Google itself? That's what makes it a little bit more awful. So do we have to use the safe browsing API for every link or for every image that we want to embed in our sites?

JOHN MUELLER: Yeah, so essentially even if it's an image that's embedded, or a JavaScript or something like that, that can be a security risk, because if that domain is serving malware, it could, for example, be redirecting to malicious images, malicious files, malicious other URLs. So that's something where, even if it's just an image that you're embedding from someone else's website, that could essentially be a vector for spreading malware. So that's where we're picking this up. From our point of view, if you're providing this website with this content, you're responsible for this content, as well. So if this is content that's essentially malware, essentially an image from a site that's flagged as malware, it could also be that this is content that's just very spammy, or very low quality, very thin. And that's something that you're providing to us to show in Search. And you're essentially saying this is the content that you want shown on your website. So that's essentially where we come into play there, in the sense that we expect you to serve content through Search that's safe and high quality. So with regards to malware specifically, I'd recommend using something like the safe browsing API, especially if you have the ability for users to embed images, to embed other files, videos, JavaScript, maybe even those kinds of things. If you have the ability to do that from external sites, I'd definitely use something like the safe browsing API to scan that content regularly so that you can take it down before it causes more problems.
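As a sketch of the approach John suggests, submitted URLs can be checked server-side against the Safe Browsing v4 `threatMatches:find` endpoint. This is a minimal Python sketch; the `clientId`, `clientVersion`, and the `api_key` argument are placeholders you would replace with your own values.

```python
import json
import urllib.request

SAFE_BROWSING_ENDPOINT = (
    "https://safebrowsing.googleapis.com/v4/threatMatches:find?key={key}"
)

def build_threat_request(urls):
    """Build the JSON body for a Safe Browsing v4 threatMatches:find lookup."""
    return {
        # clientId/clientVersion identify your application; placeholders here.
        "client": {"clientId": "example-app", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

def flagged_urls(urls, api_key):
    """Return the set of URLs Safe Browsing flags (an empty set means all clean)."""
    body = json.dumps(build_threat_request(urls)).encode("utf-8")
    req = urllib.request.Request(
        SAFE_BROWSING_ENDPOINT.format(key=api_key),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        matches = json.load(resp).get("matches", [])
    return {m["threat"]["url"] for m in matches}
```

Running a check like this on newly submitted image URLs, and again periodically on existing ones, addresses the problem Till raises next: a host that was clean at upload time can turn bad later.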

TILL VOLLMER: Mhm. The real problem is, you have to actually scan it before you deliver it on the site. Because when you just scan it when it comes in, it's OK, but then later the website is banned. You are--

JOHN MUELLER: Yeah. One thing you can do--

TILL VOLLMER: Performance also goes down, because in the MindMaps that we use, people can upload, let's say, 100 images. You have to go through all of them before you present them. That's awful. On the other hand, if you don't do it, if you just do it regularly, once the Google robot comes and indexes, boom. You are out of the Search rank. So it's really not nice.

JOHN MUELLER: Yeah, so I think that's something separate from that. The ranking is essentially something that's not affected by any malware flagging. So the malware flagging is essentially just blocking the user from visiting that page through the Search results, and presenting the interstitial when you enter the URL in Chrome directly. It doesn't change ranking at all. So the ranking side is completely separate from the malware side.

TILL VOLLMER: So this means that something else has happened that's ranked us down, actually.

JOHN MUELLER: Yeah, yeah.


JOHN MUELLER: It was coincidental timing, but it's something that wouldn't be related to the malware side.


JOHN MUELLER: One thing that some people do to protect themselves against a bad host coming along is to serve the images themselves, or to take the images, resize them appropriately, and put them on some CDN. Depending on how many images you have, maybe you could even use something like Picasa Web or Flickr, or something like that, where you don't have to worry so much about the bandwidth costs. But essentially that's something that you could do to protect yourself against the host going bad. Because if you're just linking to external image files, and that external image host goes away or goes bad with malware, then that's something you're essentially open to, you're kind of vulnerable to, if you do it that way. On the other hand, if you take those images and host them yourself, or you host them wherever, on some other CDN, then you're protecting yourself, because you're pretty sure that your website is under your control. You're not going to be vulnerable to these hacking attacks. Or you pay more attention than other people and you can fix them quickly.
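The self-hosting approach John describes amounts to a URL-rewriting step: instead of hot-linking a third-party image, the site fetches it once, scans and resizes it, and then serves its own copy under a stable local (or CDN) path. A small Python sketch of the path mapping; the `/static/user-images/` prefix is an assumed layout, not anything Google-specific.

```python
import hashlib
from urllib.parse import urlparse

def local_image_path(external_url, prefix="/static/user-images/"):
    """Map an external image URL to a stable self-hosted path.

    The file itself would be fetched once, scanned, resized, and stored
    under this path (or pushed to a CDN); pages then reference the copy
    instead of hot-linking the third-party host.
    """
    # A hash of the source URL gives a stable, collision-resistant filename.
    digest = hashlib.sha256(external_url.encode("utf-8")).hexdigest()[:16]
    parts = urlparse(external_url).path.rsplit(".", 1)
    # Keep a short file extension when the source URL has one.
    suffix = "." + parts[1].lower() if len(parts) == 2 and len(parts[1]) <= 4 else ""
    return f"{prefix}{digest}{suffix}"
```

Because the mapping is deterministic, the same external URL always resolves to the same local copy, so repeated embeds of one image are fetched and stored only once.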

TILL VOLLMER: OK, great. I really like the response that the malware is not the cause of the ranking problem, which means for me, I have to go back and look what is the cause of the ranking problem. [LAUGHS] OK, great. Thanks, John.

JOHN MUELLER: OK, sure. All right, let's grab some from the community here, since a lot of people submitted them. Recommended to come here from one of the industry heads at Google. Our problem: we rank for thousands of keywords, and rank well on the first page for most of the secondary keywords. But we can't be found anywhere on Google for our main keyword for five-plus years. That's really hard to say exactly what that might be, or even to make guesses at, because it's quite a vague or general problem description. So what I'd recommend doing there is posting some details in one of the Help Forums and dropping a link to that in the Hangout. And that could be in another Help Forum, in a Google+ community, in the Google+ Webmaster community here, wherever you'd like to share that. But essentially what people would need to look into that is which keywords you're looking at, where you're seeing this problem, and which your site is. So those are the details that we'd need there. If the Sitemap XML file is automatically created and includes pages that have been marked as noindex, is this going to give me any problems for the rankings? Yes and no. The pages that you've marked as noindex, of course, won't show up in any rankings. So that's the problem that you're going to see there. If you've marked these as noindex because you don't want them to appear in Search, then that's great and you're all set. This won't affect any of the other pages in your Sitemap file or otherwise on your website. So just because some pages in the Sitemap file have a noindex doesn't cause any problems for the rest of the website. What's the difference between smartphone Search results and desktop Search results, and how can we resolve this matter? I'm not sure exactly which matter needs to be resolved there. But essentially we try to use the same Search results for smartphones and for desktop.
Sometimes we'll change the mix a little bit, in that we'll show more or fewer images, for example, or a map or no map, those kinds of things. But the Search results generally would be the same. What we do do, however, is if we recognize that individual URLs are broken on smartphones, for example, they don't work at all, then we might choose to show them lower in the rankings in the smartphone Search results. So they would be ranking normally in desktop. But because we can tell that a user with a smartphone can't access them at all, we'll show them a little bit lower in the normal smartphone Search results. So if you're seeing that kind of a difference, and that's the problem that you describe there, then that's something where I'd double-check to make sure that your smartphone website actually works well on a smartphone, and that it doesn't block users from accessing the rest of it. We did a blog post on that, I think mid-last year or something like that, describing some of the problems that we've seen. That includes, for example, a redirect from all of your smartphone pages to a single page, or serving an error on your smartphone version while showing the normal desktop version on desktop. And these are the kinds of things where the user on a smartphone can't really work around that. So by presenting those URLs in Search, we're doing them a bit of a disservice. My site may have a penalty due to unnatural anchors. Can you confirm? Our WordPress theme was released by a previous SEO with a footer which points at my site. I have disavowed all these links, but no change, and more sites keep using the theme. This is the kind of thing where I'd recommend cleaning up the theme so that you make sure that new sites aren't linking like that anymore. That's always a good thing. With regards to confirming a penalty, that's something you'd see directly in Webmaster Tools. So that's not something I'd have to manually do here.
If there's a penalty, if there's any kind of manual action active, then you'd see that in Webmaster Tools. A lot of times you'd also see example links so you can work to clean that up. That said, if this is something that has been happening for a very long time, if this is a theme that you put out years ago, for example, and you've always had these sneaky footer links in there, then that's something our algorithms might also have picked up on. So in cases like that, we recommend also cleaning those links up, removing them where you can, using the disavow tool, but also being aware that when an algorithm picks up something like this, it can take quite some time for everything to settle down again. And that can easily be half a year, or a year. So that's something where I'd recommend really working through, making sure you've cleaned up all of these issues, and really making sure that you're avoiding this going forward. So cleaning up that theme so that you don't create more of these problems going forward is really a good idea. One of our developers accidentally deployed a robots.txt file that disallowed all bots. Oops. I think a lot of people have done that. This was fixed four days later. However, over the next two weeks our traffic from Google has dropped steadily to almost nothing. How can we fix that? Generally speaking, this is something that would fix itself automatically. So as we can recrawl again, we can essentially pick up those URLs again and crawl them normally, index them normally again. If it was active for four days, then that's a fairly long time. But on the other hand, it's not a giant amount of time. So I imagine that's something where you'll see some effects over a couple of weeks, but it should jump back to normal after a while, as well.
If you're seeing the traffic go to almost nothing, then what I'd also double-check is to make sure that everything else is OK, so that you can really fetch the individual URLs that were important in the past. Look at your top search queries in Webmaster Tools, the top pages. And take those top pages and make sure that you can actually fetch them in the Fetch as Google tool in Webmaster Tools, just to make sure that there isn't anything else in the robots.txt that's blocking. And double-check that you don't have any noindex meta tags in there, as well. So that's something we've also sometimes seen, that a developer pushed a version of the site to production that actually still had noindex meta tags in there to prevent indexing of those pages. So those are the kinds of things I'd just double-check to make sure that everything is coming back to normal. But usually, over the course of a couple weeks, this is something that should settle back down again. We see lots of SEO blog posts about how a site gets a manual spam action notice, cleans up a few bad links, disavows the rest, gets the penalty lifted two weeks later, and returns to 90%. Why are white hat sites told to wait out an algorithm for over a year? So essentially the same algorithms apply to all of these sites. It's not something where anything special is happening with these specific sites. For some sites, it's definitely possible to go from a manual action to cleaning that up to reappearing in Search normally again within a couple of weeks. That's not necessarily related to these blog posts that are going around. That's essentially the way the normal process should be working. On the other hand, if there are things that were problematic for years, then that's something [COUGHS] where maybe the algorithms have been picking up on some of these older problems. So essentially it's not something that's specific to smaller sites or specific to individual sites.
It's just how the algorithms are picking things up and working with that. And yeah.
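The accidental blanket Disallow that John mentions is easy to guard against in a deploy pipeline: parse the candidate robots.txt and confirm that the pages that matter are still fetchable for Googlebot before it ships. A sketch using Python's standard-library robotparser; the example URLs are illustrative.

```python
from urllib.robotparser import RobotFileParser

def blocked_for(robots_txt, urls, agent="Googlebot"):
    """Return the subset of important URLs that this robots.txt blocks for the agent."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [u for u in urls if not parser.can_fetch(agent, u)]

# An accidental blanket disallow blocks everything:
BAD_ROBOTS = "User-agent: *\nDisallow: /\n"
IMPORTANT = ["https://example.com/", "https://example.com/products"]
# A pre-deploy check could fail the build whenever blocked_for(...) is non-empty.
```

Feeding the check your top pages from Webmaster Tools, as John suggests for diagnosis after the fact, turns the same idea into a preventive test.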

LYLE ROMER: If I could follow up slightly. I think one of the frustrations that a lot of people seem to have is that with the manual penalties, people will get a notice: this is what's wrong. And they can go in there and fix it. With the algorithmic ones, you don't really know exactly what's wrong. And I don't know if there is some way, because you sit there, and you don't really know if you fixed everything. And then you're waiting for six months, a year, or whatever, and trying to figure it out. It would help if there was a way that Webmaster Tools could display some kind of red light, green light, and some categories or something. Because I don't really know how you go about knowing that you have actually cleaned everything up until you wait six months to a year, so--

JOHN MUELLER: Yeah. That's definitely something we hear from time to time, and we're looking into what we can do there. But at the moment that's not possible. So a lot of our algorithms are essentially focusing on search quality. They're not focusing on something where we'd say, this is feedback directly for the webmaster that can be useful for them. So from that point of view, it doesn't make sense to do that for most of our algorithms, when it comes to search quality. But I can definitely see how it could help webmasters to do the right thing if they had more information about these issues.

LYLE ROMER: OK, thanks.

MALE SPEAKER: John, just a quick thing. We were actually posting on your wall while you were away on Easter break, actually from the webmaster. [INAUDIBLE], she and I were debating about a specific question relating to that. And that is that once you get a manual penalty-- and we have discussed this many times-- can you recover quickly? And this comes back to Lyle's question just now. And then the people who are waiting for a long period of time, there seems to be a disparity there, where you said before, you need to wait for a Penguin refresh. And you won't recover unless you've had the refresh. Now are they treated differently, an unnatural link penalty versus the Penguin algorithm? Ashley seems to be convinced that once your revoke has been issued, you absolutely rank where you should rank for your website, and that there is no waiting period. Whereas we had numerous conversations where you said you have to wait for a refresh. So I'm confused as to where the situation sits. What's the answer? And is it different for everybody? [LAUGHS]

JOHN MUELLER: Essentially with the manual action, what's happening there is that this manual hand brake is released when you've resolved that issue. But that doesn't necessarily mean that the rest of our algorithms are already taking that into account. So your site is definitely ranking where it normally would be ranking if you have resolved the manual action. But that doesn't mean that it's ranking where you'd want it to be ranking, or that it's ranking where it would be ranking if all of the algorithms were able to look at the whole web again and recalculate everything just now. So essentially all algorithms have some kind of a delay, in that they have to look at some of the older data. We can't crawl the whole web in one day. There is just so much stuff out there. So from that point of view, we're at least ranking your website where it would algorithmically be ranking when the manual action is resolved. But that doesn't mean it would be ranking in the same place if all of the data out there were able to be recalculated instantly. So essentially it's ranking naturally again. But that doesn't mean it's ranking where it would be ranking if everything were at the, let's say, live state.

MALE SPEAKER: Sure. So it's such a difficult thing, because I'm in a situation right now where I've been waiting forever and a day for the right thing. And I'm past caring to a certain degree, just because of the way it is. I'm just downbeaten. I care a lot for other people, and their situations. And I've found a place where I want to help people quite a lot, and to educate them. And there really is a situation where top contributors are informing people right now that where they rank is where their site ranks. And it isn't true. There is this disparity. There is this moment, right now, where if the robots and everything could recrawl the site and do all the wonderful things it needs to do, and the Penguin could do a refresh, then actually they would be ranking in a different place. So putting in all this extra work, as wonderful as it would be, may achieve them better results. But had they done nothing, and they waited six months, and waited for a refresh, they could actually still rank better than where they are now. And so it's not a representative point. They are not where they should be. And I'll tell you why. I installed the hreflang thing that we talked about last week. And I'm seeing my website rank infinitely better, back from the 300th place I should be for virtual office, down to the top 50 region. And it can't simply be the fact that it is a site and I'm ranking in the search engine and how well the site ranks. And for my dot com to rank in 450th place and my to rank in the 50th place has an absolute correlation to the way that the algorithm is looking at my dot com website due to the manual penalty that was issued on the website.

JOHN MUELLER: It's really hard to say in general. So essentially the manual action is something that's manually in the way of normal, natural ranking. But that doesn't mean that the algorithms are always instant. And with regards to waiting half a year and seeing what happens then, so much will change in half a year, regardless. So that's something where I wouldn't say that your site would be ranking differently now if it had the data from six months from now. So essentially there's always this flux.

MALE SPEAKER: The websites are identical.

JOHN MUELLER: Things are always changing. So that's something where I wouldn't disagree that if a site has no manual action, it's ranking where it normally would be ranking. And essentially that's kind of what it is. The manual action is something that will unnaturally hold the site back. And if there's no manual action, then things are ranking where they normally would be ranking.

MALE SPEAKER: But it doesn't make sense. In the situation that I'm in right now, the site does-- my dot com has no manual action. We have a revoke from the 27th of May last year. I have a with identical information ranking, for my major keyword, in a top 50 position. And for many others, top-- in fact, I'm number one for various other keywords. Where I don't rank [INAUDIBLE] dot com. And I haven't for a long time. If that was the case, then we wouldn't be seeing what we're seeing right now, which is a clear indication that there is a suppression on the dot com, and we're waiting for a refresh to happen.

JOHN MUELLER: That's normal, yeah. Some of the algorithms aren't updated.


JOHN MUELLER: Yeah. These updates can take quite a bit of time to be sent out, to have all of this data collected. But essentially we see that as ranking as it would naturally be ranking. We don't see that as being unnaturally held back. But I can see from your point of view, it's something that looks very similar. So I understand the confusion there, as well.

MALE SPEAKER: It's just a situation where there's a lot of people in my situation. We're on webmaster forums, we're receiving information from top contributors, giving us knowledge that is, in their mind, absolute fact. And I've experienced this over the last three to four years, and it is incorrect. And I'm very happy to be part of your Hangouts to be able to get clarity, because I'm experiencing a very different situation with you. And a lot of people would make a very different decision about how to handle their penalties if they understood the long waiting times, and understood that once they get a revoke, it isn't the end of the road. This isn't a "you're now ranking where you should be ranking." That's incorrect. It is incorrect. There are people's jobs at stake. People who own the companies, it's their fault. We've made these decisions to build links unnaturally, or however we might have done it. But there are other people who are staff members, who lose their jobs over bad decisions about whether they should start a new website. If they knew they had to wait two years, or a year and a half, what should they be doing, as a decision-making position? And I believe it's Google's responsibility to have clarity about how long you could potentially wait before you recover, in order to make the right decision. And on numerous occasions, you've said that it's better sometimes, if you've cleaned up your problems, to start a new domain. And I think there should be a very clear understanding as to why you should be starting a new domain, the delays that it takes in order to recover, and what you have in store for you in the future. And I know you don't want to make life easy for spammers and for people who do these things. But right now, in our industry, the person who ranks number one for virtual office in London owns three or four different websites. Nothing's being done about that.
And we're back in exactly the same situation we've been in for the last 10 years, where people are manipulating the results. And nothing's being done, and Google isn't actually responding quickly to these requests. So nothing has changed, and I'm very disappointed with the situation. And it's such a shame that we have to be in this situation because I only want the best.

JOHN MUELLER: OK. Well, let's look at that separately. Let's go through some of the other questions, because there are lots of people who also submitted other things.

MALE SPEAKER: I'm sorry, yeah. It's a big deal. That's all.

JOHN MUELLER: All right. We're a classified website with 20,000 listings. We can't control the content, so some things might be thin. Is it a good idea to noindex the thin pages? Or will this cause an issue, as some of the pages are indexed, and others aren't? Essentially, that sounds like a really good idea. So if you can recognize that some of these pages are thin, then putting a noindex on them is a great thing to do. There are some ways that you might be able to do this in an automated way. So for instance, if you can recognize that the text isn't unique, or that someone just submitted 20 or 50 new listings that are all identical, then that's something you could take into account. Other things I've seen people do is try to create some kind of reputation for the people who are posting. So people who've been known to post good content can post right away. And other people have a noindex, maybe, for either a certain period of time, or until there's more reasonable content for that individual listing. So you can create your own system for recognizing these issues. But if you can tell that this is thin content, and you'd like to block it from Search, that's a good idea, from my point of view.
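The automated thin-content detection described here could be sketched roughly as below. This is just an illustrative Python sketch, not anything Google prescribes: the word-count threshold and helper names are invented, and a real site would tune both.

```python
# Minimal sketch: flag a listing for noindex when its text is very short
# or duplicates an already-seen listing. Threshold is illustrative only.

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so near-identical listings match."""
    return " ".join(text.lower().split())

def should_noindex(text: str, seen: set, min_words: int = 30) -> bool:
    """Return True if the listing looks thin or duplicated."""
    normalized = normalize(text)
    if len(normalized.split()) < min_words:
        return True  # too little text to be useful in Search
    if normalized in seen:
        return True  # identical to a listing already accepted
    seen.add(normalized)
    return False
```

A template would then emit `<meta name="robots" content="noindex">` on listings where this returns True, and drop the tag once the listing gains enough unique content.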

MALE SPEAKER: OK, thanks a lot.

JOHN MUELLER: I have misinterpreted the rel next and rel previous instructions with regard to URLs. Could that be a cause of my drop in ranking? I didn't take a look at the example list there, but essentially, if you're implementing this incorrectly, it could cause some problems. But for the most part, rel next and rel previous just give us a little bit more information about how your pages are connected. So that shouldn't be a reason for a giant drop, unless of course you're doing something like noindexing a lot of pages that you actually wanted to have indexed. Any tip for a site that has a competitor building link spam to our site, causing manual action in Webmaster Tools? We've done multiple removal requests and disavows, only to have new comment and forum link spam being generated constantly by software. To a large extent, we do try to recognize that appropriately. But if you're getting a manual action, then that's something where you can also let us know about the things that you're seeing, and maybe give us some information about things that we could watch out for, as well. But even if you're getting a manual action like this and you think that you, your previous SEO, or someone associated with your site isn't responsible for it, I definitely recommend just throwing those domains into a disavow file, making sure that they're not going to count, and maybe removing some of them if you have a chance to do that. But I definitely recommend doing something, rather than just watching and letting it happen on its own. Recently our Webmaster Tools account, which has 11 sites, all in different niches, received manual actions for thin content. Same manual action on all 11 sites. There was one Blogspot site with no content or links, totally empty. So is there any problem with my WBT? I'm not sure what WBT is, but if these are all thin sites, I'd recommend deleting or combining them, or taking that information and putting it somewhere else.
So that's something I'd recommend: taking action on this message and making sure that you don't run into the same kind of issue again in the future. So one thing you could do is combine these into one really strong website. If these are 11 sites that you need to have completely separated, then it can make sense to have 11 sites like this. But chances are, it's going to be a lot harder for you to create really fantastic content for 11 sites compared to one website that's reasonably strong. So that's something where you have to make the tradeoff. But essentially, if these are 11 sites with really thin content, affiliate sites that are just reusing feeds from various sources, then that's something where you probably want to rethink your strategy and think about what you really want to achieve. And maybe it makes sense to work on one really strong, fantastic website instead of scraping together 11 sites that are just really thin and low quality. Sitemap reports in Webmaster Tools show the number of URLs submitted versus the number indexed. Is there any way to know which URLs were not indexed? At the moment, that's not something you can see directly in Webmaster Tools. And it's usually not something you absolutely need. If 90 percent of the URLs are indexed, then that's usually a pretty good sign. What I'd recommend doing, though, is splitting the Sitemap file up into logical sections, so that it's a lot easier for you to understand which part of your website is being indexed, and which part isn't. That could be, for instance, the categories in one Sitemap file and the articles in another, maybe putting the individual products themselves into a separate Sitemap file, maybe splitting out the different language versions, if you have something like that. And by splitting it up into separate Sitemap files, you'll be able to see, per Sitemap file, how many are actually indexed, and how many aren't.
And you can make a little bit more of a knowledgeable decision on whether or not that's a problem, or whether maybe that's fine like that. With regards to why some of these URLs might not be indexed, there are a lot of reasons. It could be that maybe there's a noindex on these pages, like in one of the previous questions. Maybe these are pages that are essentially duplicates. Maybe these are pages that we have indexed under different URLs already. So if you have, for example, parameters in your URL, and you have one version without parameters, and you submit one version in the Sitemap file, and the website itself links to the other version, then maybe we'll index that version, maybe we'll index the Sitemap version. So that's a sign that maybe you're serving something in an inconsistent way, where, if you were a little bit more consistent, you'd have a little bit better information in Webmaster Tools.
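The sitemap-splitting approach described here can be sketched as a small Python script. Grouping by the first path segment is just one possible way to define "logical sections"; the section names and URL shapes in the example are invented for illustration.

```python
# Sketch: split one large URL list into per-section sitemap files so that
# Webmaster Tools can report submitted vs. indexed counts per section.

from collections import defaultdict
from xml.sax.saxutils import escape

def render_sitemap(urls):
    """Render a minimal sitemap file for one section's URLs."""
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{entries}\n"
            "</urlset>")

def split_sitemaps(urls):
    """Group URLs by first path segment (e.g. /products/, /articles/)."""
    sections = defaultdict(list)
    for url in urls:
        path = url.split("//", 1)[-1].split("/", 1)[-1]  # drop scheme + host
        section = path.split("/", 1)[0] or "root"
        sections[section].append(url)
    return {f"sitemap-{name}.xml": render_sitemap(members)
            for name, members in sections.items()}
```

A site could equally split by category, language, or content type; the point is only that each file maps to one section, so a low indexed count pinpoints where the problem is.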

MALE SPEAKER: John, could it also be an indication of [INAUDIBLE] pages that are not indexed?

JOHN MUELLER: Usually not. So we have a lot of storage. And if it's something that we find in a Sitemap file, and it's reasonable quality, it's kind of unique on its own, then that's something we'll definitely try to index. Of course, if it's a new website, and you submit 100,000 URLs in your Sitemap file, then it's going to take a while for us to understand that this is actually a reasonable website, and that we should take time to go through these 100,000 URLs and index them all individually. But if it's a reasonably established website, and you have a lot of URLs on your pages, we'll try to get as many of those in as possible. How does Google treat two plus links to the same internal page? Does it only look at the first link and ignore the others? We do look at both of those links. We do use both of them in our internal algorithms. One common question that we sometimes see is, what about the anchor text? What about follow, nofollow? In that regard, it's something where technically our algorithms might be doing it one way at the moment, but that's not something where I'd say you can rely on this, or you should rely on this. I'd essentially just create a normal linking structure within your website, and let us crawl that, and we'll try to pick out the right information from there. So that's not something where I'd try to artificially manipulate which link is shown first in the HTML code, because that's likely not something that would remain consistent. It's not something where we'd say, this is something we want to define in a specific way and never change.

MALE SPEAKER: So John, for example, if I had, let's say, a page on my blog about telephone answering, and part of it was about a particular service we offered, which was a phone answering service, and the second part was maybe about a 24-hour service. Both are useful for the customer to be able to physically click on. Would the anchor text have any bearing, really, about me making sure both of those key terms were linked or--


MALE SPEAKER: It's beneficial to the customer, which is why I'm doing it, primarily.

JOHN MUELLER: I'd do it how it makes sense for your website. I wouldn't do it in any way to try to improve the anchor text or something like that. And from a technical point of view, what might happen is we pick up one or the other and use that as a main anchor text. But we definitely see the whole page. We can see that there are multiple links on there, and we can take all of them into account, as well.

MALE SPEAKER: Sure. I don't want to over-link. But I'm looking at Wikipedia as an example of how a linked page looks. And I'm only linking to things that I think people will actually read, and then think, I need to click on that. But I also don't want to push the limits, in quotes, or do anything that would be problematic. So I'll just go about my daily business and hope that people will get it. It shouldn't cause any problems.

JOHN MUELLER: I don't see any-- I haven't seen any cases in practice where anything like that would be problematic.

MALE SPEAKER: Yeah. OK, great. Thank you, John.

JOHN MUELLER: Any plans on increasing the max size of the disavow file? We're currently mostly disavowing at domain level and have run out of room. That sounds like something we should look at. So I'll check with the team to see what we can do there, or what we might need to do by looking at the other files that we have. So thanks for that feedback.

MALE SPEAKER: Anybody who is having that problem should be uploading the heavy files to a Google Docs directory rather than putting that content into the disavow file itself, in my opinion.
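For context, the domain-level disavow format discussed above is one `domain:example.com` line per domain, with optional `#` comment lines. A rough Python sketch of collapsing individual spam URLs into that format follows; the domain names in the example are purely hypothetical, and this is just one way a site might assemble the file.

```python
# Sketch: collapse individual spam URLs to unique domain-level disavow
# lines, which keeps the file far smaller than listing every URL.

from urllib.parse import urlparse

def build_disavow(spam_urls):
    """Return disavow-file text with one domain: line per unique host."""
    domains = sorted({urlparse(u).netloc
                      for u in spam_urls if urlparse(u).netloc})
    lines = ["# Domains disavowed after link-spam cleanup"]
    lines += [f"domain:{d}" for d in domains]
    return "\n".join(lines)
```

Deduplicating to the domain level is also what helps stay under the upload size limit mentioned in the question.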

JOHN MUELLER: All right. I have a site that has an average load speed of five seconds, yet almost all pages take under one second to load on a day-to-day basis. But one day out of a month, I might have a single page with a load speed of 50 seconds. This is ruining my average. Should I worry? From our point of view, that's not something you'd really need to worry about. We do take the rendering time of a page into account in some rankings, but that's essentially only when the page is rendering really, really slowly. So if you're talking about something between one and five seconds, then that's completely reasonable. That's not something I'd really worry about there. If you're talking about the time it takes to download the individual URLs, then what will likely happen is our algorithms will notice that your website is really slow on this one day, and they'll slow down the crawling speed for a short period of time, but usually it'll catch back up again after a couple of days. So if on that one day, for example, you have maintenance, you're doing backups, and your website is really slow because of all of that, chances are we'll crawl normally on that day, crawl a little bit slower in the next couple of days, and then speed back up again as we see that your website is accessible again. So that's not something where you'd usually see any kind of ranking change, or even a significant crawling or indexing change. Is product price considered a factor for SEO or better optimization? From our point of view, not really, especially when you're talking about normal Web Search. A lot of products can't be trivially compared just by price anyway. So that's not something where we'd use a price directly. We do sometimes show the price as a rich snippet in the Search results, and sometimes users can differentiate on that. It might be different in other parts of the company, for example in Product Search.
But I don't know how that's handled there, so I can't give you any input on that. Short URLs versus long URLs in WordPress, what about that? That's always a good topic. So from our point of view, we do try to prefer short URLs because we think they tend to make more sense. And if we can simplify a URL a little bit, we'll try to show that, as well. But this is essentially just a matter of the URL that's shown in the Search results. It doesn't mean that we'll change your ranking or anything. So essentially this only comes into play when we've discovered content on your website we want to show at a certain place in the Search results, and we know there are multiple URLs that lead to the same content. Then we'll try to figure out which one we should show in the Search results. And often, if we can find a shorter version, we'll try to show that just because it's a cleaner version, maybe one that's easier for the user to remember, those kind of things. But that doesn't change how we would rank that URL at all. So from that point of view, it's just something that the user would see in Search. And a lot of times if you're using something like breadcrumbs, or if we've recognized breadcrumbs on your pages, we wouldn't be showing the URL at all, anyway. So the user wouldn't even be seeing which URL is being linked to.

MALE SPEAKER: So you wouldn't rank a page differently if it had 20 words in its URL as opposed to three, for example?

JOHN MUELLER: No. Essentially from our point of view, that's the same URL. For example, if these different URLs would lead to the same content, that's the same thing. That's not something that would affect ranking at all. So just shortening your URLs and hoping that your ranking will go up is probably not going to happen.

MALE SPEAKER: Yeah. Thanks, John.

JOHN MUELLER: Are there any best practices when it comes to using pushState? Does it have to be on anchor tags? Or can Google understand and find pushState on regular [INAUDIBLE]? Are there any impacts regarding indexing, ranking, and update frequency compared to "a" tags? So generally we recommend using pushState together with normal static links on a page, so that you have a natural fallback. If a user's browser doesn't understand JavaScript, they can still work within your website. And essentially we could still crawl just by looking at the "a" tags and find all of the content. That's the best practice that we recommend. When it comes to JavaScript itself, we do try to understand JavaScript to some extent. And if we can recognize URLs, if we can recognize pushState events, those are things that we'll try to use, as well, for crawling and indexing, more and more as we see sites actually using this more and more. But if you want to make sure that your site is always crawlable, then I'd really make sure that you also have the normal static "a" links, as well. [BUZZING SOUND] I can't hear you, Joshua. You have a lot of noise. I can hear you now. Yeah. Go for it.
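The static-link fallback recommended here can be verified mechanically: extract the plain "a" tag hrefs from a page and confirm every pushState-enhanced navigation target is still reachable without JavaScript. A minimal Python sketch using only the standard library; the sample markup in the usage note is invented for illustration.

```python
# Sketch: collect hrefs from ordinary <a> tags. A page whose navigation
# relies only on onclick handlers plus history.pushState would come back
# empty here, meaning a non-JavaScript crawler finds no URLs to follow.

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from plain <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.hrefs.extend(v for k, v in attrs if k == "href" and v)

def crawlable_links(html: str):
    parser = LinkCollector()
    parser.feed(html)
    return parser.hrefs
```

For example, `<a href="/page/2" onclick="load(event)">Next</a>` passes this check (the href is the fallback), while `<span onclick="load()">Next</span>` does not, which matches the advice to put pushState on real anchor tags.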

JOSHUA BERG: Are there any scenarios where Google might editorially alter the ranking of content? For example, in the News section, obviously, there would be some of that sorting or whatnot. But in organic Search, let's say some of that content related directly to Search, or someone was writing things that were wrong, that would lead users to do the wrong kinds of things in Search, or vice versa, they were--

JOHN MUELLER: We work really hard to make sure that there's no editorial bias in Search at all, that it's all algorithmic. I believe even in Google News, it's completely automated. It's not that anyone is reading these articles and saying, oh, this should be the breaking news article. That's all completely automated. So that's definitely not the case that anyone is reading these things, or trying to figure out do these match my opinion, do these match the facts that we've seen, for example. That's definitely not something that we would do at all. So if we find this content, and if our algorithms think it should rank this content, then we'll try to show that content in that place. We're not going to manually bring any editorial bias into Web Search.

JOSHUA BERG: OK. And regarding News, that brings up another question, because there's quite a number of sites that, sometimes there's even top links that-- maybe the Wall Street Journal or other links that go to a subscription service, or a paid service, where you don't actually read the content. Or it's on mobile and they're not mobile-friendly sites, or there's some front-page ads, those kind of things that are very poor experience. But I often see in the News section some of these types of sites regularly ranking high. Is that just a challenge to be worked on, or--

JOHN MUELLER: To some extent, that's acceptable. I think in News, they have a little bit different guidelines, especially with regards to things like subscription services. I believe some of that should also have a snippet in there. But I'm not aware of the details of how Google News handles these kind of cases, so I can't give you any definitive answer there. With regards to subscription services, Google News also supports the "first click free" system, where essentially if you click from a Search result into one of these pages, you can see the content. But if you go to that page directly by copying the URL, for example, maybe you'll see a login page that requires a subscription. So that's one thing that might also be coming into play there. But for Google News specifically, you'd have to post in the Google News forum, for example, or ask one of the people from Google News.

JOSHUA BERG: All right, thanks.

JOHN MUELLER: All right. With that, let's take a break here. Looks like we still have a lot of questions, so I'll try to go through some of those and see if I can answer some more in the event listing itself. And I'll definitely set up the next Hangouts, as well, so that we can get more of these questions answered next time. Thank you all for joining, and I wish you guys a great weekend.

LYLE ROMER: Thanks, John, you too.

MALE SPEAKER: Thanks, John.

MALE SPEAKER: Thank you very much.

MALE SPEAKER: Thank you very much.

JOSHUA BERG: Thanks, John.

MALE SPEAKER: Have a good weekend.

JOHN MUELLER: Bye, everyone.