
Google+ Hangouts - Office Hours - 02 June 2014



Transcript Of The Office Hours Hangout


JOHN MUELLER: OK, welcome, everyone to today's Google Webmaster Central office-hours hangout. Today, we're doing a slightly different model, in the sense that there was some time for people who signed up ahead of time to chat one-on-one directly with me about questions around their website or about web search in general, anything, essentially, that they'd want to talk about. After that, we'll be opening up for the normal Q&A. At the moment, we only have two of the guys who signed up here live, which is great for them, of course. But if you signed up, you should see an invite from me on your screen as well. And maybe, to get started, to give you guys a chance to join on in, I'll do two or three questions from the Q&A, and then we'll switch over to the one-on-one part.

All right. You recently said forum links are bad. Are natural forum links linking to a URL also bad if someone was suggesting a page or article in a post? Should we disavow if we can't have the links nofollowed by the forums? How do algorithms on the spam team view these links?

So I think that thread where we were talking about this kind of touched upon a lot of issues that are similar but not really completely the same, in the sense that there was someone there asking should they go to forums and drop their links. And from my point of view, that's clearly an unnatural link. If you're going to a forum and just leaving your links there, then essentially that's spamming that other forum. That's something we wouldn't recommend doing. We also know a lot of people have done similar things in the past, in that they'll go to a forum and leave maybe one or two low-quality-ish responses, or they'll just post there with a link to their site in their profile. Or they'll create hundreds of profiles just for those profile links. And all that would be seen as an unnatural link by both our algorithms and by the manual Webspam team. So if they run across a site, and the only kind of links that this site has are forum links where you can tell that the site owner or someone associated with the site essentially dropped those links, then that's something where the Webspam team would say this doesn't look OK. This looks like something where we might need to even manually step in to clean that up, so that this doesn't affect the search results.

So from my point of view, being active on the forums is great. But doing that just so you can drop links is probably not the best kind of motivation that you should have with regards to interacting on forums. So I wouldn't necessarily worry about purely natural links. If someone is really posting about something on your website in their forum, great. I mean, if that's a recommendation for your site, that's perfect. On the other hand, if you're the person going to these forums and recommending your products, then that kind of looks sneaky. And that's not something that we'd appreciate. And definitely not something that the forum owners would appreciate either, right?

AUDIENCE: But John, how do you know who it is that dropped the links? Isn't this the classic case of negative SEO?

JOHN MUELLER: That's usually fairly obvious if you look at the bigger picture. So looking at the individual posts, it's kind of tricky sometimes. But if you look at the bigger picture and you see that this site has been doing this for years, and years, and years, then it's almost obvious that all of this is related. And that some competitor wouldn't be out there promoting this site for five or 10 years just in the hope that maybe at some point later on Google would be taking action on that. So these are the kind of things where in a lot of cases, it's really obvious who's doing these things. Sometimes there are borderline cases where we'd say it isn't really clear exactly what was happening here. And those are the kinds of situations where we do try to take the appropriate action, which might be to just ignore those links without causing either site any problems. Or it might be to say OK, we'll err on the side of caution, and say maybe these are OK links. Maybe these are natural links that we need to count normally. And the whole, let's say, negative SEO thing is something that isn't really completely new. It's something that's been around for years, and years, and years. And we have a lot of practice with these kinds of things. And sometimes we'll see issues that we pass on to the engineers, and say hey, is this really working the way it should? And those are the kind of things where we try to refine the algorithms, try to refine our manual actions. But overall, I think we're doing a really good job of catching the right things there.

All right, let's grab one more from the Q&A here, and then we'll switch over to you guys. I have a very old website which was ranking on page one for years, and is now not ranking for queries like electronic component suppliers. There's no [INAUDIBLE]. We fixed all the issues which might have impacted rankings. I'd have to take a look at the site specifically to really say anything about that. But one thing you could do in the meantime is maybe post in one of the help forums, and see if one of your peers has some insight as well. But that's kind of hard to do directly live. So maybe let's take a different one here.

How can I join the hangout? At the moment, it's just the guys that signed up. Sorry, but I'll be opening it up for everyone in a couple of minutes and post the link in the normal event.

And one more normal one here. If you have hundreds, or even thousands of pages of content, what are some tips to find poor-quality pages other than user stats and Amit Singhal's 23 questions-- for example, low search rankings, low PageRank, et cetera? I wouldn't focus so much on things like current rankings or PageRank, because that's not something that's easily refined down to individual pages. But I'd look at factors that may be relevant for your website. And sometimes there are things like time on site, or what people do after they visit a landing page from your website, that can help you to track these things. Sometimes you can add indicators yourself, like little plus-one buttons or star ratings or some ways for users to let you know about the pages that they're interacting with. And sometimes even the lack of any kind of interaction from the user is a signal, where you could say users can drop comments on pages if they want to, but if nobody ever comments on these specific pages, maybe that content isn't as interesting to users.

So these are things where you have to take a look at your website, and think about what you could do to pick up signals that are easier to quantify and refine.

AUDIENCE: John, when you have hundreds or thousands of pages, some of those signals won't give you any clue. If you've got a blog that's got a five-year-old post on it that just isn't getting any views at all, for good reason, because it's not topically relevant anymore, does having some of those unvisited pages do any harm to your site? Or do you just generally assume that the older something is, it just isn't going to get that kind of views? So should webmasters clear up older content? It's not doing any harm. It's just not doing any good, really.

JOHN MUELLER: That's always kind of tricky. I think to a large extent, if you look at your website overall, and if you have content pages where you're providing some kind of content, some kind of service on those pages, I'd try to clean up those older, cruftier pages as well. Whereas, if you have something like a blog, where you just collect and add more, and more content over time, then that's something where those old pages might be useful as an archive-type thing. So that's not something where I'd say you should never have older pages, or where I'd say you can always have older, useless pages, because depending on the type of website, sometimes it makes sense to clean that up. And sometimes it's not so critical.

AUDIENCE: OK. But they shouldn't harm your site-- have those just sitting there doing nothing.

JOHN MUELLER: If these are essentially just older pages, that's something where I generally wouldn't worry about it. The thing is, some of our quality algorithms look at the website overall. So if you have a mix of pages that are kind of low-quality-ish, or just really old and obsolete, and a mix of pages that are really good, then it's sometimes hard for our algorithms to figure out how we should treat this website overall. So if you know that something is completely obsolete, maybe it's worth flagging it on the content, or even removing it. But if these are just older versions of pages that you essentially keep as an archive, that can be fine, as long as the user, when they go there, realizes this is an archive page from 1995. And the content there was relevant then, but maybe things have moved on in the meantime.

AUDIENCE: So John, wouldn't it be Google's algorithm deciding how to rank individual pages? Why do you have to basically paint the entire site with the brush of some useless pages? Even if the number of useless pages is, let's say, 10,000, in terms of the actual traffic that they get, it's hardly anything. So my understanding is that your guidance is for webmasters to noindex all these pages. But why not just ignore all these pages, and just focus on the good ones? And basically rank each page according to the query, or for each query. Why paint the entire site with that brush?

JOHN MUELLER: Yeah, that's a good question. I mean, from our point of view, that's not always trivial, in particular when we see new content. So if there's a new page showing up on your website, and we know overall you have a really high-quality website, then we're going to give that page a lot more weight, because we think, OK, overall, the content on your website is really high quality and really good. So we're going to assume that this new page is going to be really good as well. Whereas, if overall we have this mix of pages where we say, oh, you have to be careful and pick the right pages to show on the right queries, then if we see something completely new, we're probably going to be a bit more cautious about that. So from that point of view, it's not something where we'd say you need to do that, but it does make it a lot easier for our algorithms if they can overall recognize that this is a great website. And even if I haven't found all the signals that I need to rank this individual page, I know it's going to be good, so I'm going to show it really well in search. And that also comes together with things like rich snippets, with other things like authorship, where we show metadata from your pages in the search results. If we can really trust your website, if we can really assume that you're providing great content there, and we can see that overall things are working great for your site, we'll really trust that metadata a lot more and actually show it as often as we can in search. And with, let's say, something like a blog, where you have this weird mix of content anyway, that's generally less of an issue, because there we do have to look at the individual pages, and we do have to evaluate them individually. And for new content, sometimes it'll come out well. Sometimes it's something where we're more cautious than we might need to be in the beginning.

AUDIENCE: Right, OK. I guess it makes sense. I've been hit hard by this. So it's a personal issue.

JOHN MUELLER: Yeah, I totally understand. But I think it's something that users also kind of notice. It's something where if you have, for example, a really, really technical website, and you have some really great high-quality technical content on there, but mixed within this technical content you have something that's low-quality-ish, maybe not really on-topic, then the user looks at the site. They see the technical stuff. And they think, oh, this is great stuff. And they browse through your website, and they see this, like, eh, this doesn't make any sense at all. This isn't something that I really want to read through. And when it comes to leaving the site and going somewhere else, they have mixed feelings with regards to recommending the site. So on the one hand, the technical stuff is great. On the other hand, this other stuff that was on the site was so-so or not even really good at all. And it really makes it hard for them to figure out what they should be doing there. All right, let's get started with you guys. What kind of issues have you been running across?

AUDIENCE: Me?

JOHN MUELLER: Sure.

AUDIENCE: I just turned my microphone off. Is that on or off?

JOHN MUELLER: It's on.

AUDIENCE: OK. Well, I don't know if you remember the last couple of times I've been on and sent you the site to look at. I actually did post a URL in regards to a search result, which is now starting to affect not only-- well, it affects everything, including brand terms now, so much so that in the results, even for our brand name, and from the home page a snippet that only appears on our home page and nowhere else, is suppressed into the "repeat the search with the omitted results included" link. And there's no manual action there. As far as I know, it's not algorithmic, because it hasn't had any ups or downs. Ironically, now I run that same search, and it's actually showing. So it actually flipped a switch this morning.

JOHN MUELLER: I still see what I think you meant when I try it out now. It's tricky. I mean, one of the things here is that one of our algorithms is picking up some issues that probably need to be reprocessed on our side. And that's not something where I have any timeline where I'd say this is when this is going to happen. So it's a tricky situation. On the one hand, I think our algorithms would be picking it up correctly if they were updated there. But on the other hand, you want to find a solution there as quickly as possible. So it's hard for me to say.

AUDIENCE: Well, it's been going on since July last year.

JOHN MUELLER: I mean, one possibility is always to say, I'll look into maybe setting up an alternate domain name, where you can host some of the content that you really think is important and relevant for search. So that might be some information about your website. It might be information about specific services that you offer that you maybe move to a different domain. That might be a possibility. The other idea might be to just see what you can do to improve things overall. But it sounds like you've been working hard on improving things overall, and not much movement has been visible, right?

AUDIENCE: Yeah, and we have, but then we've never done anything to not try and have quality content. And it's all original. Some of the pages, as you can see, have got hundreds of reviews on them. No one page is more important than another, because you can't tell someone that a flight lesson in New York is more important than a dinner cruise in California, because it's 100% relevant to the person that's searching in their geographic area. So it's not the kind of site where we can say OK, let's take this content, and put it over here. And let's take this slot, and put it over here. Unless you could say to me, well, Rob, get rid of the blog. Then we would happily get rid of the blog. But in terms of the actual main site pages, we can't pick and choose which area of the country, or which customers we're going to ignore.

JOHN MUELLER: OK, yeah. I don't see any problems with the blog. Is it like the gift blog on the bottom?

AUDIENCE: Yeah, yeah.

JOHN MUELLER: I don't see a problem with that. I think that's fine. I mean, that might be something that you could spin off to an alternate domain name, or somewhere else. But I don't think that would significantly change anything for your website.

AUDIENCE: Right. I mean, one of the things we had considered was, because it's just had no change in traffic, moving to a new domain. But the advice, or all of the reading we've done, is that if there's a problem with the site and the domain, it's just going to follow us. And so we don't want to start over and just have exactly the same problems again. And then we'll be back here trying to fix problems that, in reality-- I know everyone says this-- we don't think we can do much about, because we don't build spammy links. We don't build thin content. Every page on there is for a purpose. And there's a lot of genuine customer reviews and interaction and social-- like 50, 60, 100 likes on the single products. People find it genuinely useful. But it's being suppressed deliberately.

JOHN MUELLER: Yeah, so I think one idea might be to spin off the blog to a different domain, and see how that performs. And see if that helps anything. And that also gives you a little bit of an idea, with regards to what you could keep on one site, whereas what you might want to move to a different domain. But I think spinning that off to a different domain will probably be a good way to at least test what's happening there.

AUDIENCE: All right. I mean, the blog is just content we create around different activities and gift occasions. It's not commercial intent. So even if the visitors to the blog increased tenfold, it wouldn't make any difference to us whatsoever. But if that was suppressing anything, then great.

JOHN MUELLER: I don't think it's suppressing anything, but I think it might be a good idea to at least test how it would perform separately.

AUDIENCE: Right, OK. So otherwise, it's wait for an algorithm update?

JOHN MUELLER: Yeah.

AUDIENCE: Famous algorithms? Or smaller, lesser known algorithms?

JOHN MUELLER: Lesser-known algorithms. I realize it doesn't really help that much. But I think it's always a tricky situation on our side, because we do push these up to the engineers to get reviewed and to get updated, and to see what we can do to speed things up. But at the same time, they have to balance that with all the other algorithms that they're working on. So it's tricky.

AUDIENCE: Almost the best thing that could happen is that we got a manual penalty.

JOHN MUELLER: I don't know. Maybe not.

AUDIENCE: But if it was algorithmic, then that's not going to have someone be able to look at it and say, OK, we'll lift it.

JOHN MUELLER: Yeah, yeah, exactly. I mean, this is something that we do. We do run across these cases. And when we see that something's not happening as it should, we do go to the engineers, and we push. And make sure that they're under a little bit of pressure from our side as well to move things forward.

AUDIENCE: All right. OK, that's not necessarily reassuring. But it's reassuring inasmuch as you're saying to me, you haven't done anything wrong.

JOHN MUELLER: I mean, the algorithms picked some things up when they saw this. And I think some of those things are probably resolved in the meantime, and the algorithms wouldn't be picking up on it as strictly anymore. So from that point of view, I think you're in good shape now, but it's hard to say exactly what would happen, or when this algorithm would update.

AUDIENCE: OK.

JOHN MUELLER: All right, Nicolich.

AUDIENCE: Yes, John, I emailed you the question. I don't know if you got a chance to look at the 410 URLs.

JOHN MUELLER: Let me see. That was the question about the 410s.

AUDIENCE: Yes, so there are, I think, thousands of URLs that have been returning 410 for at least a year now. But why doesn't Google stop crawling them?

JOHN MUELLER: We assume that at some point there might be content back again. So essentially what happens is, when we see a 404 or a 410 and we trust that signal, we'll remove the URL from the index, but we'll still recrawl it from time to time. So we won't recrawl it as often as we would a normal page that we know exists. But maybe every couple of months, we'll go back and recrawl it to see if it's still a 410, or if maybe some content is there now, so that we're sure we're not missing anything. And these crawls usually happen in a way that they don't interfere with the normal crawling. So us checking those 410s doesn't mean we're not crawling for normal content anymore. It's just something that we do when we realize we have extra capacity for your website; we'll double-check to make sure that we're not missing anything there. So it's not a sign that anything is broken. And it's definitely not causing any kind of ranking or crawling or indexing problem with your site. It's just that we want to double-check these things from time to time.

AUDIENCE: So 404s and 410s, you treat them essentially the same. Because I thought that a 404 might actually warrant something like that. But a 410 is a very definite signal that look, this has been deleted.

JOHN MUELLER: We do treat the 410 a little bit stronger than a 404. But that's mostly with regards to how quickly they drop out of the index, not with regards to how often we recrawl them. So, for example, if we've seen a page that existed for a long time, and it now returns a 404, then we're not going to remove it from the index right away. But rather, we'll double-check and crawl it again to see if it still has a 404, maybe a few times, maybe just once, depending on the page. And then we'll drop it out of the index. Whereas, if it's a 410, then we'll drop it out of the index fairly fast. But if this is something that is like a long-term status for these pages, then the difference between a 404 and a 410 is essentially not something you'd see on your side.
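As a concrete sketch of the distinction John describes: a permanently deleted page can return a 410 instead of a 404. This is a minimal illustration, assuming a Python/Flask app with hypothetical product IDs, not anything specific to the sites discussed here:

    from flask import Flask, abort

    app = Flask(__name__)

    # Hypothetical example data: live pages vs. permanently removed ones.
    PRODUCTS = {"1": "Flight lesson", "2": "Dinner cruise"}
    DELETED_IDS = {"99", "100"}

    @app.route("/products/<pid>")
    def product(pid):
        if pid in DELETED_IDS:
            abort(410)  # Gone: a definite "deleted" signal; drops out of the index faster
        if pid not in PRODUCTS:
            abort(404)  # Not Found: recrawled a few times before being dropped
        return PRODUCTS[pid]

Per John's answer, both codes end up treated much the same in the long run; the 410 mainly affects how quickly the URL drops out of the index.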

AUDIENCE: OK. So in my question, there was also a digression, where for one of these URLs, Google Webmaster Tools said that there are two URLs pointing to it. And both of them were basically the same URL-- one with the "www" and one without. And there is a 301 redirect from the non dub-dub-dub version to the dub-dub-dub version. So why does Google Webmaster Tools think that it's two URLs?

JOHN MUELLER: Essentially, we saw it like that. So I'd have to double check what exactly happened there. But what we've seen in the past is sometimes a website will have both versions up on their website. And at some point later on adds a 301 redirect between the dub-dub-dub and non dub-dub-dub versions. So that might be happening there. But essentially, it's not something where I'd say you need to worry about that, because if we're already folding those versions of your site together, then that's fine. That's not something that is broken or anything that you need to change on your side.

AUDIENCE: OK. So yeah, I think years ago they were not redirecting. But it's been at least five years since the redirect was set up.

JOHN MUELLER: Yeah, OK. Five years is a long time. So I'll double-check, actually, on that and make sure that we're really treating those the same. But usually, what would happen there is we'll crawl one of these URLs first. Maybe we'll see the redirect to the other version. And if we've seen it on the other version directly, then we might list them both at the same time. But usually, we should actually be folding these together for the link reports as well.
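For reference, the site-wide 301 being discussed, folding the non-www host into the www one, could look something like this. A sketch only, assuming a Python/Flask front end and the hypothetical hosts example.com and www.example.com:

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def enforce_www():
        # Hypothetical canonical host. A 301 is the signal that tells
        # crawlers the two versions should be folded together.
        if request.host == "example.com":
            target = request.url.replace("//example.com", "//www.example.com", 1)
            return redirect(target, code=301)

In practice this is often done at the web server or CDN level instead; the effect for crawlers is the same.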

AUDIENCE: Right. And so what about the fact that the information is incorrect, in that Google Webmaster Tools says that this 404 or 410 URL is actually being linked from this page? But it hasn't been linked from that page in, again, three years, or at least a year.

JOHN MUELLER: That can be normal. That's something where, especially with regards to the 404 or 410 reports, we'd say this is where we initially found this link. And if in the meantime that link has disappeared, and we don't have anywhere else that it's linked from, then we'll still show that page as saying this is where we found it, because back then, that's where we found it. That doesn't change how we see it now. It's not a link that we count anymore now, especially since the destination is a 410. But back then, that's essentially where we found it.

AUDIENCE: OK. Thanks.

JOHN MUELLER: OK.

AUDIENCE: So is that the same, John, in Webmaster Tools links to your site? And I've seen this quite often. We have hundreds, if not thousands of links where it says there's 10,000 from this one site, and we know for sure that site's been dead for months, if not longer.

JOHN MUELLER: Usually, that should clean up faster. So the links information in Webmaster Tools is kind of what we've tracked. But especially if it's a website that used to have a site-wide link on it, and the whole website is gone now, we have to recrawl all of those individual links and notice that those pages are essentially gone now before we drop them completely. So that's something where it would be kind of normal to see a period, I'd say, of maybe up to half a year or a year, where you'd still see those links from that essentially dead domain showing up in Webmaster Tools.

AUDIENCE: Yeah, I mean, the bizarre thing is that we owned another site where we had a site-wide link on it. And you'd click it, and it says there's 1,500, 2,000 links from this particular site. But when you click through, they're all to the home page, and there's only actually three. So it's showing as way more on the surface. Is the detailed information more accurate?

JOHN MUELLER: Not necessarily. But when we can recognize that there's a site-wide link from this website, we'll show the total count. But we'll show you a sample of those links. So the total count is essentially the total over all the links that we found there. And sometimes that's like 100,000. But if you click through, you'll only see two or three pages, because that's what we think is a relevant sample from that site.

AUDIENCE: Right. OK, yeah, because it's always been the same three. We started wondering whether there was an initial problem with those.

JOHN MUELLER: Yeah, it's essentially just a sample. And sometimes it's a little bit weird if maybe you have an old domain and a new domain, and you're redirecting or something like that.

AUDIENCE: We did. We redirected it over to that. Right.

JOHN MUELLER: Yeah. All right, so I invited everyone else. It looks like some people have joined in.

AUDIENCE: Hi, John. I have a question.

JOHN MUELLER: All right, go for it.

AUDIENCE: So if you could go to this URL here. So there are two things going on here. One is, we can see that there are two additional tabs. There's the Activity and the Answers tab, in addition to the Reviews tab. And we consider all of these to be essentially one page. And I think users think about it the same way; whether they land on the Reviews or the Activity tab, they are on Capital One's profile on [INAUDIBLE]. And so we were thinking of using link rel previous and next to consolidate the signals. But then at the same time, on the URL I sent, because we have a lot of reviews, we also use link rel next and previous on that page to paginate the reviews. So essentially, my question is twofold. One is, am I thinking about this correctly in using link rel previous and next for the Activity and the Answers tabs? And if yes, then how do you go about essentially paginating two different types of content in one page? And we have the same issue on another page, where we are showing the top 20 reviews and the top 20 questions for a particular company. But in the event that there are more than 20 reviews, or more than 20 questions, we want those paginated. So essentially, you have two different types of content on the same page that are both paginated.

JOHN MUELLER: Yeah, so I wouldn't see those separate tabs as being something you'd use rel next and rel previous for, because there's essentially different kinds of content there. It's not something where you scroll to the bottom, and you click the next button, and it shows a different tab. So I wouldn't use rel next and rel previous there. I'd actually keep them completely separate, because sometimes these tabs can have really separate content of their own that's relevant to be found there. So from my point of view, I'd try to keep them separate. If you want to keep them together, one thing you might want to do is think about what you could do to actually move all of this content into the same HTML page, maybe using CSS or JavaScript to flip the tabs instead of switching to alternate URLs. But I think from my point of view, this looks pretty good, actually. So using the pagination for the reviews on the bottom is a good way to get all the reviews connected. And having the different tabs as separate URLs is a good way to keep those pages separately indexed. And I think that's generally OK.
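As an illustration of the review pagination John says looks fine here, each paginated page would typically point at its neighbors with link tags in the head. A minimal Python sketch, assuming a hypothetical ?page=N URL pattern:

    def pagination_links(base_url, page, last_page):
        """Build the rel="prev"/rel="next" head tags for one paginated page."""
        tags = []
        if page > 1:
            tags.append('<link rel="prev" href="%s?page=%d">' % (base_url, page - 1))
        if page < last_page:
            tags.append('<link rel="next" href="%s?page=%d">' % (base_url, page + 1))
        return "\n".join(tags)

    # Page 2 of 5 of a hypothetical reviews listing:
    print(pagination_links("https://example.com/capital-one/reviews", 2, 5))

The first and last pages simply omit the prev or next tag, which is how the chain stays unambiguous end to end.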

AUDIENCE: Related to that, we've heard a lot about Google having a limited crawl budget for each website. So when we use pagination, are we giving up some of our crawl budget? Would you recommend, when it's a good user experience, keeping the length or the number of pages to a minimum, because we're using up our crawl budget, or not [INAUDIBLE]?

JOHN MUELLER: We try to match the crawl rate that we have based on your server capacities, primarily. So if your server is fast, then that's something we try to crawl as much as possible from. Whereas, if your server is slow, then we'll try to crawl fewer pages, and try to focus on the main or important ones. So it's not something where I'd say this model is better, or that model is better. It's just, if you have more URLs, we have to crawl them to actually get them indexed. And if your server is underpowered, then maybe crawling more URLs is a problem. On the other hand, if your server is a high-powered server, then maybe crawling more URLs is no problem. So that's something where maybe you need to check on your side first, and think about how fast we're picking up these pages. In Webmaster Tools, in the crawl stats, for example, you'd see how many milliseconds it takes, on average, to crawl an HTML page. And if you're seeing reasonable numbers there, maybe around 50 to 200 milliseconds per HTML page, then that sounds like we can crawl as fast as we need to from your website.

AUDIENCE: So the way you decide how much you crawl on a website is based on how quick the server is? You say, OK, this website's so valuable. So I'm going to be crawling like 5,000 pages a day. And this website is more valuable, so I'll do 10,000.

JOHN MUELLER: We do look at some of that, like how important we think this website is. But usually the server is the limiting factor. So if you have a reasonably high-quality website where you have good content, then we're going to try to crawl all the pages that you give us.
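To sanity-check the server-speed point above from your own side, you can time fetches of a few of your HTML pages and compare against the rough 50 to 200 milliseconds John mentions. A small sketch with hypothetical URLs, using only the Python standard library:

    import time
    import urllib.request

    # Hypothetical sample of your own HTML pages.
    URLS = [
        "https://www.example.com/",
        "https://www.example.com/products/1",
    ]

    for url in URLS:
        start = time.monotonic()
        with urllib.request.urlopen(url) as response:
            response.read()
        elapsed_ms = (time.monotonic() - start) * 1000
        print("%s: %.0f ms" % (url, elapsed_ms))

This measures the full fetch from wherever you run it, so it's only a rough local baseline; the crawl stats in Webmaster Tools remain the number John is actually referring to.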

AUDIENCE: And what happens if on a page you truly have two types of content that are both paginated? How do you then paginate? Let's say you have five pages of reviews and then five pages of Q&A. Do you do one through five is the reviews and six through ten is the Q&A?

JOHN MUELLER: I try to stick to one kind of pagination for a page. So I think it gets really complicated if you have separate sections on a page that lead to different URLs with different pagination states. So as much as possible, I try to stick to one pagination system on a page, where we can crawl through the paginated versions and say we have everything covered.

AUDIENCE: But in the event that you do run into that situation, what is the best idea? Is it maybe, when you go to page two, both types of content paginate?

JOHN MUELLER: That sounds like a good idea, yeah. All right, hi, Baruch.

BARUCH LABUNSKI: Hi, John. Can I just ask a quick question about faulty redirects?

JOHN MUELLER: Sure.

BARUCH LABUNSKI: So I marked them all as fixed, and that was last week. And then they all came back again, same old links again. I mean, it was the mobile version. So everything's marked as fixed, and they all came back again. Is there a reason why that happens?

JOHN MUELLER: Marked as fixed essentially just tells you that you resolved this. It doesn't change anything on our side. But it should only come back if we actually recrawl those pages and still see the problem. But it sounds like that's not the case in your situation. They just come back immediately?

BARUCH LABUNSKI: Nothing's wrong with the pages.

JOHN MUELLER: OK. Maybe if you can send me the samples, I can take it up with the engineering team to see if there's anything specific that we can point to, or if there's something broken on our side, which may be possible.

BARUCH LABUNSKI: OK. Thanks, John.

AUDIENCE: Hi, John. Can I step in with a question?

JOHN MUELLER: Sure.

AUDIENCE: Thanks. If page A links to page B more than once, let's say four times-- in Matt Cutts' latest video, he said that only the first anchor found in those links counts. Can you confirm that only the first anchor text counts?

JOHN MUELLER: You mean can I confirm or deny that Matt Cutts is telling the truth?

AUDIENCE: Not necessarily. He said that he hadn't checked that since 2009, so we'd like to know if this is the case now in 2014.

JOHN MUELLER: I'd have to double-check, actually. So from my point of view, what I'd like to give out as advice is not to focus so much on these technicalities, because essentially, if you're focusing on which anchor is passing PageRank or which anchor is connected here, then probably you're missing out on making sure that your pages are really the highest quality possible. And our algorithms are going to change over time. And maybe we pick up the first anchor today, but actually there is an engineer launching something tomorrow that picks up the second anchor, or that takes an average of all the anchors. And from that point of view, it's not something where we'd say this is a standard that we can say is going to remain in place forever. It can change over time. So that's not something where I'd really encourage webmasters to focus on. Instead, think about what you can do to make sure that your website naturally works well, without having to worry about which anchor passes PageRank or which one doesn't.

AUDIENCE: Well, it's true. I agree with you totally. My point was not specifically about taking only the first anchor text. But if we know that only the first anchor text is passing the SEO value, usually the first anchor text is in the main menu. And that's not always optimized. I mean, you're stocking the main menu with the most important links to be crawled. But you don't always use the most optimized anchor text there, not necessarily for SEO, but even for users. So it's good information for us.

JOHN MUELLER: Yeah, but at the same time, I wouldn't worry so much about the individual anchors there, because with any natural website, you're going to have lots of weird cross-linking with a variety of anchors anyway. So that's something where we'll try to pick up the right ones and show them. But I'll double check and see what I can find. I hope it's the same as what Matt said in his recent video. Otherwise, that would be strange for us. This is something that can change over time. It's not a standard that we say we're going to stick to this forever. And it's not something where I even recommend tweaking the website for, because like I said, it can change. And things like CSS make it possible for you to completely change the way your website has the content within the HTML page without changing the layout of the page. So there are lots of variations that are possible there.

AUDIENCE: OK, thank you.

JOHN MUELLER: Let's grab some from the Q&A, because there are a whole bunch there as well, and see how far we can get. This is a link to our website. We'll probably have to look at that separately. Our product pages are not ranking well. We're one of the leading online marketplaces in India, doing well for category pages but not for product pages. Can you take a look at our product pages, and tell us where we need to improve? I'd need to look at that separately. Doing that live in a hangout is tricky. But you could, for example, post in one of the help forums or in one of the Google+ communities, and drop a link to that in the event listing. And I can take a look there.

I've been facing a 40% drop in traffic since Panda 4.0 on my website, which is a price comparison platform. We've consulted some SEOs and nothing much seems to be wrong on our end. I need to get some insights on potential reasons. I think this is a site that actually was submitted. Let's see, yeah. So what I notice is price comparison sites are tough. That's something where it's really hard for you to provide a lot of unique and compelling value there. And it's also really hard for us to figure out where we should be ranking a site like that, because especially if you're taking content and information that's already publicly available and just mixing that up slightly, that looks really tricky for us to analyze, and to figure out what exactly unique you're providing on your site. So instead of just mixing up the public information, I'd really recommend making sure that, as much as possible across the board, your pages are really of the highest quality possible. And that you're not just taking two products and mixing up all of the different variations of the attributes and the different distributors there, and creating separate pages for all of these different possibilities. So that's something where a site like that is going to have it really tough. And it's something where you're going to, perhaps, have to take a step back, and think about what you could be doing overall to significantly change how your website works, to make sure it's really significantly different from just a generic aggregator of information.

What could be the reason why new links are not showing up when I download the new links list from Webmaster Tools? We do update that data fairly regularly, but I wouldn't say that it's hourly or maybe even daily updated. So if you see that there are new links out there, and they just showed up maybe today or yesterday, then that's something you probably wouldn't see directly in Webmaster Tools. I'd use that information more to try to pinpoint specific issues that maybe you're looking at. If you're seeing issues with links, then that's something you could clean up there by narrowing it down from the time frame.

BARUCH LABUNSKI: So would you suggest a third party tool for that if you really want to go hourly?

JOHN MUELLER: I mean, there are good third party tools out there. So that's something where I wouldn't say you should never use a third party tool, but you shouldn't need to use a third party tool for most things. But sometimes, if you have a big website, maybe a third party tool is a great way to aggregate all of that information and use that in a little bit more scalable way. It really depends on what you're doing, what your website is doing, what kind of processes you have, and what tools you are trying to use.

BARUCH LABUNSKI: OK, thanks.

JOHN MUELLER: My website was hit by the May 8, 2013 Phantom update. I fully recovered on May 20. I fixed the problems with the website within a week of being hit. Why does Google inflict these heavy penalties long after the problem was fixed? I'm not sure what you mean with the Phantom update, and I'm not so sure what the problem with regards to your website is there. So it's really hard for me to tell what, exactly, you're pointing at there. It makes it really hard to answer the why question if I don't really understand which parts you're talking about. So what I'd recommend doing there is maybe post a link to your URL, or to a forum thread, in the event listing. And I can follow up there to see if there's something specific that we need to fix on our side, or change, or look at with the engineers.

Webmaster Tools is showing 3,588 URLs, 603 domains pointing at my [INAUDIBLE] site. A link audit is done every week for new links. In theory, if every spam link is found in Webmaster Tools and disavowed or removed, is a release from the Penguin algorithm possible? Yes, of course. Penguin is an algorithm on our side that does take into account things like unnatural links. And if you're working to clean that up, that's definitely something that can be reflected in the Penguin algorithm. The tricky part with the Penguin algorithm, specifically, is we have to recrawl and re-index all of these pages, which sometimes takes a bit longer. And the Penguin algorithm has to be updated as well, which sometimes takes a bit longer too. And at the moment, that's not done that regularly. I think the last update was maybe half a year ago, something like that. So you'd need to wait for that update to happen before you'd see any changes. But it sounds like you're doing the right thing. You're cleaning things up. You're cleaning up new issues as they come along. One thing I might add to that is to not just look at the individual URLs, but also include the domains in your disavow file when you're cleaning things up. From the question, it sounds like you might already be doing that. So that's great from my point of view.

I found and removed or disavowed every link one by one. I confirmed using Webmaster Tools, [INAUDIBLE], and Majestic, but my website ranking is not coming back. This is something where you might also want to get some input from other people to see what specifically was happening there. If you're sure that this was a problem with regards to links, and maybe the Penguin algorithm was causing a problem there, then that's something where you need to wait for that Penguin algorithm to be updated as well. So that's not something where you'd see gradual changes over time. It really would result in a bigger change when the algorithm is updated. But at the same time, I'd also caution against assuming that it was just the Penguin algorithm, and assuming that the problem is only the links, because sometimes we see things like websites just having low-quality content. And if the webmaster is focusing so much on links, they don't really realize that actually the content on their site itself is also lower quality and not that great. So I'd definitely get advice from my peers as well, just to make sure that you're covering all of the potential issues.
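For reference, the disavow file John mentions supports both individual URLs and whole domains; the domain: form is what covers site-wide link cleanups. The entries below are hypothetical placeholders:

    # Disavow file, uploaded through Webmaster Tools' disavow links tool.
    # One entry per line; lines starting with # are comments.
    http://spammy-forum.example/thread?id=123
    # A domain: line covers every link from that host, as suggested above:
    domain:spammy-directory.example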
Why would links not show up in Webmaster Tools for an aged domain? They have a manual penalty, but they can't see the links. All the other Webmaster Tools data is showing. I'd probably need to take a look at that URL to see what specifically might be happening there. I know some data in Webmaster Tools takes a little bit of time to actually start being visible. So if you just recently added the site to Webmaster Tools for the first time, it might be that it takes a couple of days, or up to a week, for this data actually to be shown in Webmaster Tools. That's one possibility. It's also possible that something else is stuck on our side. So having a link to your examples there would be really useful for us as well.

Does Google treat crawlable AJAX links with the hashbang the same way as regular links with regards to allocation of importance among pages? So we do treat these essentially as normal pages. With the hashbang, we rewrite that to the escaped fragment URLs. And we treat links to and from those pages essentially the same as we would normal links. So if there's a nofollow there, we'll use the nofollow. If there's no nofollow going in or going out, then we'll essentially pass PageRank normally. So that's not something where the crawlable AJAX format has any effect on how we treat links to or from the site. One difficulty there is sometimes that third-party tools don't necessarily know about the escaped fragment scheme, and might be reporting that differently than we would be using it internally. But internally, we do treat them as normal pages.
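For context on the rewriting John describes: under the AJAX crawling scheme of the time (since deprecated), a #! URL was fetched by the crawler in an equivalent _escaped_fragment_ form. A rough sketch of that mapping in Python, with a hypothetical URL:

    import urllib.parse

    def escaped_fragment_url(url):
        """Map a #! URL to the ?_escaped_fragment_= form that crawlers
        fetched under the AJAX crawling scheme."""
        base, sep, fragment = url.partition("#!")
        if not sep:
            return url  # no hashbang, nothing to rewrite
        joiner = "&" if "?" in base else "?"
        return base + joiner + "_escaped_fragment_=" + urllib.parse.quote(fragment, safe="")

    print(escaped_fragment_url("http://example.com/page#!state=reviews"))
    # http://example.com/page?_escaped_fragment_=state%3Dreviews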
What are the best practices to deal with duplicate content? There are tons of them. Where do we start? I think not having duplicate content is a good way to start. Especially if you know that you have duplicate content, folding it together into one canonical version is a great idea. Using redirects, rel=canonical, that's a good way to combine that. We also have a help center article on all of these things, so that's something where I'd take a look as well. What you shouldn't be doing with duplicate content is using robots.txt to block crawling of the duplicates, because then we don't know that it's a duplicate version. And having duplicate content is essentially fine from our point of view. It's more of a technical issue, in that we waste time crawling all of these duplicates instead of focusing on the main one. So it's not something where you'd have to worry about a penalty or any kind of action from our side with regards to this duplicate content.

Our site allows vendors to upload products on our site. At times they copy content from other websites. How do we tackle this situation? From my point of view, with regards to duplicate content, it's not something you necessarily need to worry about, because we can recognize that as well. But what will happen is we'll probably rank the original instead of your content for those kinds of queries. At the same time, if this is something that's happening overall, and it's degrading the quality of your website overall, then I'd definitely take action on that, and see what you can do to make sure that the quality of the content people are contributing to your site matches the expectations that you have. So sometimes people use simple methods, like just waiting for an admin to activate this content, to review the content before it goes live. Maybe you can do something fancy like set up a reputation system, where you say, oh, this vendor has always been sending us great content; therefore, they'll be able to post right away. Whereas, other people who are unknown, or have been known to have iffy content, maybe they have to go through the review process. So it kind of depends on what you want to achieve there and how far you want to go. With regards to duplicate content on its own, again, we try to just see that as a technical issue. But if you think this is degrading the quality of your site overall, I'd definitely see what kind of steps you can take to clean that up.

All right, we have a bunch of questions left but also very little time. So let's open up for questions from you guys.

BARUCH LABUNSKI: So, in terms of updates, there was a massive shift again. And I know you guys are always testing. Will we eventually know what this recent update was, or should we just not pay attention to it?

JOHN MUELLER: Which update do you mean?

BARUCH LABUNSKI: The one from last week.

JOHN MUELLER: Last week-- I don't know. It's hard to tell. I mean, we make changes all the time, so it's hard for me to tell which massive shifts you're seeing and how they correspond to the actual changes that we make on our side. Some of these changes we've seen are, for example, very local. Maybe we make changes on our country code detection, or something like that. And some sites might see that stronger than others. And other changes are broader that a lot of sites see. But it's really hard to know exactly what changes you're looking at. Usually, if it's something that has a bigger effect and is clear to extract, then that might be something that we'll talk about in more detail. But a lot of times, these are just normal changes in the algorithms that we make over time.

BARUCH LABUNSKI: OK.

AUDIENCE: I have a question about authorship. What are my chances to get the picture and the authorship in the SERPs? It's a small static website. It's not a blog. It has good content, but it's not a blog, so it won't increase in the future. And I'm not an active writer. I'm not an active author, for example, on Google+ or Facebook. Should I try it?

JOHN MUELLER: I mean, you can always try it. It's not something where we guarantee showing the photograph. But we'll try to pick up the authorship if we can recognize it, and we'll show it at least as a text snippet there. So from that point of view, that might make sense. That might be a good thing to get started with. I wouldn't focus so much on the photo itself as on the connection that you're providing there between the different kinds of content. And it's not something where we'd look at Google+ and say, oh, this is a person who's active on Google+; therefore, we'll show authorship. What we'll try to figure out, overall, is whether this is a person who's writing great content, and whether we should be showing that more in search. And that can happen from content just from your site. It might happen from other articles that you write elsewhere. That's essentially part of whatever you want to do.

AUDIENCE: OK. Does it increase my chances if there are only very few authors in my specific niche?

JOHN MUELLER: I don't think that would increase your chances, but it also wouldn't decrease it. We'd essentially try to look at it on a per person basis, on a per author basis. So what other people are doing in that area shouldn't have that much effect on whether or not we show your photo, or how we'd show your authorship information.

AUDIENCE: OK, thank you.
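For anyone wanting to try what John suggests: at the time, authorship was commonly established by linking a page to the author's Google+ profile, roughly like this (the profile URL below is a placeholder, not a real ID):

    <link rel="author" href="https://plus.google.com/YOUR_PROFILE_ID">

The profile's "Contributor to" section would then link back to the site, closing the loop that lets the connection be verified.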

JOHN MUELLER: All right, we're essentially out of time. But if one of you has something short to add, feel free to jump in. Gary, I think you're muted, Gary. We can't hear you.

AUDIENCE: Gary always has a question.

BARUCH LABUNSKI: Yes.

JOHN MUELLER: Oh no, technical problems in the last minute.

BARUCH LABUNSKI: Forget about it, he says.

JOHN MUELLER: All right, let me ask you guys a quick question before we head out, and maybe your microphone will start working again. How did you guys find the initial setup like this, where we just have a smaller group of people to talk about specific issues first before we head off into the Q&A?

AUDIENCE: It didn't make much difference.

AUDIENCE: I agree. I was one of the lucky ones. It's a small community. So I'm not sure it necessarily has a large effect on who's going to try and join in anyway.

JOHN MUELLER: My idea was a little bit to make it similar to the conferences, where people can come up to us and ask us questions directly. And also to make it possible for some people who maybe traditionally wouldn't be joining the longer hangout to ask a question one-on-one as well.

AUDIENCE: I think if you're able to look at the data for a specific site, and at least send some figures to the webmaster saying this is the problem that I see with your site, that might be super helpful. I know there's only a limited amount of information you can give out. But at least some hints might be helpful.

BARUCH LABUNSKI: I personally think that I like your guest hangouts, where you have [INAUDIBLE], or maybe you and Matt Cutts. That'd be a cool one.

JOHN MUELLER: OK. All right, I think we'll continue to experiment a bit with this model, and see how it works. And if you guys have any ideas for ways to change it completely around, I'm all ears. I think it's been lots of fun doing the hangouts so far. And if we can tweak things to make it even better, then that would be great too. All right, with that, thank you all for your time. And maybe we'll hear Gary next time. All right, thanks a lot. Have a great week. Bye, everyone.

AUDIENCE: Thank you. Bye bye.

AUDIENCE: Hello. Hey, Gary. Your mic is not working. I can't hear you. Can you hear me?