
Google+ Hangouts - Office Hours - 06 June 2014



Transcript Of The Office Hours Hangout


JOHN MUELLER: OK, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst at Google here in Switzerland. And I'm here to answer your questions about web search and websites. We have a bunch of questions that were submitted already. And I have a lot of noise. Can you mute yourselves? Maybe, this one. OK. Still, there's noise somewhere. We have a bunch of questions that were already submitted. And if anyone wants to get started with questions, feel free to jump in and ask away. [INAUDIBLE]

JOHN MUELLER: One after another. [LAUGHS]

GARY LEE: Hello?

JOHN MUELLER: OK.

GARY LEE: Essentially, with the updates that have happened, we've seen a very interesting change to our website. We've implemented hreflang for our co.uk website. And as a result of that, we saw a good, immediate recovery from doing it. And the dot com still hasn't recovered. But what we noticed, when Panda 4.0 updated, is that it [INAUDIBLE] immediately started ranking almost similar to where we left off four years ago, when the first Panda initially hit us. So all of the things that we've been trying to fix-- I haven't heard whether or not there was a Penguin refresh involved in this, but almost everybody is saying no. So I'm wondering why we've seen such a strong recovery, and what it's related to, eventually-- I mean, it happened on that day. Why have we been so strongly affected by Panda, when all signals and everybody we've ever spoken to has said that the majority of the issues are Penguin related and [INAUDIBLE] related?

JOHN MUELLER: It's hard to say without looking at the specific site. But what might be happening there is there are two sides involved there. Of course, on the one hand, there might be sites that were above you that are basically now ranked lower because of this Panda update. And it might also be that there was just a small component of Panda involved with your site as well that essentially got cleaned up now. So that's always the two possibilities, that some sites went lower, below your site, and maybe your site slightly got a little bit higher.

GARY LEE: I'm talking huge recoveries, from 300th place to second or third place, on thousands of keywords. We're back in the top ten [INAUDIBLE] on the majority. So we're talking a huge effect that Panda had on our site. And my concern, from what I've heard, is that this is sort of less of a hit from Panda. So you've just kind of eased Panda up a bit. And if that's the case, then there's still something severely wrong with our content. I want to correct the issue. I don't want to be let off by an easing of Panda. I want to find the issues so that, as you tighten the screws again over the years, which will inevitably happen, we don't fall into the same trap. And my question to you last week, which I couldn't get answered, was: how do I identify those areas without-- you know, we've spoken to you about this. We've spoken to multiple people on the forums. And nobody has identified anything for at least three years now. There is, potentially, a problem. I don't want to just sit here and say, OK, this is wonderful. We're back. Because that's not the case. We're being given an easy path right now, because of this Panda update. Or something's happened. But also, not seeing the dot com recover suggests to me that the Penguin issue is still in full effect as well. So I know this is a huge question in multiple areas.

JOHN MUELLER: Yeah. So the Panda changes, you mostly saw with the co.uk site. Is that correct?

GARY LEE: Correct. Yeah.

JOHN MUELLER: But that's also a fairly new site that you had, right?

GARY LEE: Yes, that is maybe six to eight weeks old, at the most. It is all [INAUDIBLE].

JOHN MUELLER: Yeah. But what might have been happening there is that we were ranking that site based on, maybe, previous content that was there or something else that we had associated with that domain name. And that essentially got cleaned up.

GARY LEE: That domain has been under our control for 10 years. And it's only ever pointed to the dot com.

JOHN MUELLER: Yeah.

GARY LEE: So yeah, it was a direct 301 to the dot com. So the history of it-- I mean, it's something related to that content. All of the pages were indexed. We had at least four thousand, or I'd say, of all of them, 70% of them. So it was very clear. We're in a very unique position, testing-wise, to understand Panda did not like the content that is on the site. And now, Panda [INAUDIBLE]

JOHN MUELLER: I don't know.

GARY LEE: --and it likes it.

JOHN MUELLER: It's hard to say. I'd have to really look into the details. I imagine that there are a few factors involved there. On the one hand, suddenly there is content there. Suddenly, the hreflang is set up there. That's something where we've been making algorithm changes with regards to how we treat hreflang recently. So that's something that could also be playing a role there. It could also be that, for instance, the Panda algorithm was just looking at a previous state of the website, and that got refreshed. But it's really hard to say when a few things kind of come together in the same time frame. So I can take a look to see if I can find something there. But it's also very possible that our algorithms essentially just realized that this is a normal website, and we should treat it normally, rather than based on these outdated or kind of sparse signals that we had in the past.
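For readers following along: the hreflang setup John and Gary are discussing tells Google which language or country version of a page to show. A minimal sketch of what that typically looks like, with illustrative URLs standing in for the actual sites:

    <!-- In the <head> of the page on the .com site -->
    <link rel="alternate" hreflang="en-us" href="http://www.example.com/office-space/" />
    <link rel="alternate" hreflang="en-gb" href="http://www.example.co.uk/office-space/" />

    <!-- The .co.uk page must carry the same pair pointing back,
         or Google ignores the annotations -->

The annotations can also be supplied in an XML sitemap instead of the page head; the key requirement is that both versions reference each other.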

GARY LEE: Yeah. So that everybody here kind of understands this, is there anything specific that we can understand from the Panda update where, if it's an easing, what was it doing before? Are there any key areas, things like search results pages? In our sense, we've got office space search results, the same as maybe you would have on a used car site, or a real estate property site where you'd have lots of listings and maybe lots of duplicate content. Is that an area that, maybe, you were analyzing wrong before and have eased up on now? Is there something like that, that we can--

JOHN MUELLER: We don't have anything specific that we point at there. But a lot of the things with regards to Panda are really related to the quality and how users actually find these pages, or how our algorithms think that users might find these pages. And the tricky part there is that a lot of these quality issues aren't specifically technical issues. So if you have a broken 404 page or something like that, that's not something we'd see as a quality problem, because that could work just as well for the user. The user doesn't see the difference. Or a robots.txt file that's disallowing crawling of CSS, or something weird like that. But on the other hand, our algorithms look at more than just the text on a page. Often, webmasters will come to us and say, oh, my website passes Copyscape. Therefore, it must be a high quality website. And it's like, well, you have unique text on your pages, but that doesn't mean this is great text. That doesn't mean this is a great website. Altogether, that doesn't mean it's going to be a great experience for a user to land on that page and see that text. So it's really a matter of everything kind of coming together. And our algorithm is looking at the overall view of the website. And that can include things like sparse search results pages that you have in there, empty search results pages. Those are the kinds of things that users hate finding in search. It can include things like really low quality search results pages within your website, if it looks like auto-generated content that doesn't really have value. But on the other hand, you might have search results pages that are really useful and really helpful for the user, that make sense to be indexed like that, which are more like category pages.

GARY LEE: You know, we really tightened the ship over the last four years. And most people struggle to find anything particularly bad on our site. There's nothing there that's there for reasons it shouldn't be, other than the user wanting to find it. So the quality of our content, which I believe is exactly as you're saying-- right now, Panda likes it. But I'm not certain as to why it didn't like it before. And obviously, something needs to be improved. I'm not just going to sit here and go, this is great. And this goes for everybody else, I think, that has been let go from this latest Panda update. I take it that it doesn't mean that your quality is now accepted and good. It means to say that we're not going to treat you so badly as we did before, but you should still improve. That's how I take it.

JOHN MUELLER: I think everyone should always improve their website. So from that point of view, I think you're absolutely right. You should never let go and say, the current state is OK, I'm not going to change anything on my website anymore. Looking back at your website, over the time you've come to these Hangouts, I remember your dot com website also was affected by Panda in the beginning. But you made significant steps to clean that up. And that's no longer the case. That hasn't been the case for a while. So at least, those are the kinds of things where our algorithms picked up on the changes that you made and are trying to reflect that in how we handle your site. In your case, of course, the Penguin algorithm is something that's still strongly playing a role there. And that's a bit trickier, because it just takes longer to update.

GARY LEE: Are we still waiting officially for a refresh? Are you saying that? Is there still a refresh waiting?

JOHN MUELLER: It's definitely something where, I think, from the time we did the last refresh, it seems like a good point to start doing that again, or at least to figure out why things are going slowly. But with these kinds of algorithms, the tricky part is, of course, we need to test the results of these algorithms. We can't just blindly roll them out and do that monthly at the moment. But that's definitely something we're trying to head towards, to do these a little bit faster.

GARY LEE: Yeah. All right, John. Thank you, very much. If you do get a moment, if you could, even if it's just two lines in an email to me, just say, look at this area here. You should improve it. And at least it gives me direction.

JOHN MUELLER: Yeah. Sure, if I can find something specific. I think, at the moment, our quality algorithms like your site. So you're in good shape there. But I understand that it's always good to be even better.

GARY LEE: They might hate me next week. [LAUGHS]

JOHN MUELLER: I don't think so.

GARY LEE: OK, well, thank you, everybody else, for being patient with my questions as well. So yeah, thank you.

JOHN MUELLER: All right. I think, Karl, you had a question that you wanted to ask as well. Or did I see that-- oh, I think you're muted.

KARL: Yeah. Hello there, John. Oh, was that mine?

JOHN MUELLER: Yeah. Go ahead.

KARL: Yeah. I think my site has been affected by Penguin for, oh, since April, 2012. Again, I've been working on it daily, to be honest. Well, my main concern is how can I find any more bad links? I've kind of come to a standstill. So I've used all the tools that were available. I'm finding in Webmaster's Tools that there is no more bad links showing. I haven't found a spammy link since the beginning of the year. And even that one that did turn up was already disavowed. But the other tools I'm using, every now and then, I do find a spammy link that was built in early, mid 2011, early 2012. So I'm just hoping, if Penguin refreshes, is one or two links that I've missed really going to make difference? I'm happy that I've found every single link possible. I've spent two hours a day on it, and I have done for two years. I think I've done as much as possible, obviously. Is there anything else I could be doing? Is it a case of, yeah, there is still links out there, and it will be affecting me? Or as my Google Webmaster's Tools thinks? I'm only seeing 3,000 plus links there now. It was 300,000. So I've managed to remove a fair few. The ones that are there, let's say 80% of them are genuine, natural, got nothing to do with SEO.

JOHN MUELLER: OK. So I think that there are a few aspects there. On the one hand, there is a certain time that it takes for our algorithms to re-crawl all of these links and to reprocess them. On the other hand, we have to do an update of the data for these specific algorithms. So specifically with Penguin, that's one that we don't do as frequently as other algorithms. I think the last update we did was a little bit more than half a year ago, maybe in April. I'm not sure exactly when it was. But these are things that take a certain amount of time to be updated again. And when that updates, you will see the change based on what we've crawled and indexed until then. So that's not something you would see any gradual changes with along the way. On the other hand, it might also be that there are other issues with regards to your site that are also worth looking at and reviewing. So that's something also to keep in mind. Maybe quality issues, maybe some kinds of other problems as well. Definitely, like with Gary, check with peers to have them take a look at your website as well.

KARL: Yes. I've been doing this for two years. I think the content is reasonable, so unique. Everything's right by me. It is an e-commerce kind of website, so a wholesaler. We sell surplus and liquidations and stuff. [INAUDIBLE] we have to remove them. So it's all one-off, kind of unique content. Sometimes the content might be a little bit shallow, but our users would still find that of use. If it is shallow content, we do noindex. Yeah, noindex.

JOHN MUELLER: OK. That's great.
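The noindex Karl mentions is the robots meta tag. A minimal sketch of how a shallow listing page would typically be excluded from the index (illustrative, not Karl's actual markup):

    <!-- In the <head> of a thin or shallow page -->
    <meta name="robots" content="noindex, follow" />

Note that Googlebot has to be able to crawl the page to see the tag, so such pages should not also be blocked in robots.txt; "follow" keeps the links on the page discoverable.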

KARL: Is there any chance you could have a-- oh, you have had a look at it once. And you said I had an issue where Google was picking up the HTTPS version, and not the HTTP. So we redirected the HTTPS to the HTTP. And I thank you very much; you did point out that good point. It would be fantastic if I could find any [INAUDIBLE].
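For anyone with the same duplicate HTTP/HTTPS issue: the fix Karl describes is a site-wide 301 redirect. A sketch of how that might look in an .htaccess file, assuming an Apache server with mod_rewrite (Karl's actual setup isn't known):

    # Send every HTTPS request to its HTTP equivalent with a 301
    RewriteEngine On
    RewriteCond %{HTTPS} on
    RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]

The direction here is specific to Karl's case; most sites today would redirect HTTP to HTTPS instead, but the mechanism is the same.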

JOHN MUELLER: If you can post a URL in the chat, I can check it over briefly.

KARL: OK, I have to look at the channel. I'll try and look at it. Let's get the chat on.

JOHN MUELLER: OK, great. If you can post it there, I'll take a look in between, when I run through the questions that we have here now.

KARL: OK, fantastic.

JOHN MUELLER: OK? Another thing to keep in mind: we recommend using the domain directive for disavows as much as possible, so that you don't have to worry about individual URLs. OK, sounds like you're aware of that.
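The domain directive John refers to is a line in the disavow file uploaded through the Disavow Links tool in Webmaster Tools. A minimal sketch, with illustrative domains:

    # Spammy directories built by a previous SEO company; removal requests failed
    domain:spammy-directory.example.com
    domain:paid-links.example.net

    # Individual URL, only needed when the rest of that site's links are fine
    http://forum.example.org/thread.php?id=12345

With domain: lines, every link from that host is disavowed, so new URLs cropping up on the same domain don't have to be tracked one by one.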

KARL: Yeah.

JOHN MUELLER: OK. Great. So let's run through some of these questions here. "What do you suggest for writing content for product pages? We're an online marketplace. And chances are, users see the same product at different sites with similar content. How do we write unique, useful content for our product pages?" Essentially, on the one hand, we don't penalize sites for duplicate content. So if you have the same content description as other sites, we wouldn't penalize your site for that. On the other hand, we're not going to rank your site for the same content that we already have other sites ranking well in the search results for. So I kind of recommend working to make sure you have unique content where you can, where it makes sense. But it's not something you absolutely need to have everywhere. Often, if someone is searching for this kind of content, maybe for a product or something, they'll also add other qualifiers as well. Maybe they're looking for something local. Maybe they're looking for something that's kind of unique to your shop. Those are all things that help us to pick your website, instead of some other website that has the same content. So from my point of view, I'd recommend rewriting the descriptions where it makes sense for you, but also making sure that the products that you offer and your website overall have something unique and valuable that makes sense for us to point to, so that you're more than just an affiliate of some bigger distributor, for example.

"Is Panda 4.0 only for English queries? Did you roll it out internationally?" Yes, this is international.

"Should navigation links be rendered on all pages? Does this hold any weightage? Should we watch out for hiding navigation links via JavaScript, so as to decrease the number of links being read on a page?" I definitely wouldn't hide any kind of navigation links with JavaScript to make it look like there are fewer links on there. If they belong to your navigation, include them in your navigation. Show them in your heading, or in your menu, or wherever you have them. Our systems are pretty good at dealing with normal websites. And normal websites have some kind of a navigation on them. So I definitely wouldn't try to obfuscate your normal navigation in the hope that it makes anything better. At worst, I think it'll make it a little bit harder for our systems to recognize your content properly. So try to just use them normally. Use them how they make sense for your users.

"Please help me understand how Google decides that website A is an authority site. Is it links, content, and other signals that tell Google that this is a trusted website?" I think there are a lot of things that come together when it comes to authority. It's not one simple thing. And it's not something that you can easily tweak or optimize for. So that's something where we look at how users like these websites, how they recommend the website, the kind of content on there, how it's referred to on the web. All of these things add up for us. And they kind of give us an idea about the website itself. And the most important part here isn't if it's an authority or not, but rather if it's relevant for individual queries. And we'll try to show the relevant results in the search results. And even authority websites are sometimes ranked below websites that just have better content. So I wouldn't worry so much about authority when it comes to doing SEO.

"There is a [INAUDIBLE] ad link near the PPC ads that shows how Google used my history to improve the ads. It says Google uses AdWords usage statistics, et cetera, to improve personalized, organic search results." We don't use anything from AdWords for web search. So from that point of view, that's something that's completely tied to the AdWords side of things. There is a page you can look at to see the Ads Preferences, like the-- what is it-- the interests that we think we've discovered for you, your age, those kinds of things. If you search for Google Ads Preferences, you'll find a link to that page. And you can take a look there. You can turn things off. You can take interests out. You can remove the interest-based advertising, all of those things. So that's something I find really interesting to look at. But again, all of the AdWords related things aren't used for normal web search.

GARY LEE: John, is quality score used in any way? I think that might be where somebody was going with that, and maybe in relation to Google Local. And sometimes you have those kind of mixtures between organic and local together. And a lot of people have speculated that quality score is involved in that mix. I just thought I'd add that to the picture.

JOHN MUELLER: I don't know so much about local, because that's something separate from web search. But in web search, we don't use the AdWords quality score. I could imagine that AdWords might use some signals that are the same as the signals that we use for web search. But essentially, we don't use their scoring, their algorithms for that. And they tend not to use ours. Essentially, we have similar signals, probably, where we take similar data, but we don't use those values across that border.

ROBB: John, is that just to play it ultra safe in some sense? Because to me, I know there's a lot of conspiracy theorists who will say, if you buy AdWords, then it counts towards you. And in our case, it absolutely doesn't, because we've spent a fortune. But surely, it's still a quality signal. If there's a company willing to invest in their brand and pay for advertising, and they're that confident, you very rarely see your average subcontinent blog paying for AdWords to get visitors, because they're not confident in what they're selling. They're trying to get away with something. But if there is a brand that's willing to put up a sign above their shop in the street, and there's a brand that's willing to pay for advertising, then it, in some senses, is definitely a quality signal, or an authority, or a trust, or something to make them real.

JOHN MUELLER: We definitely don't use that for web search. But what you'll often see is that, if someone is willing to spend a lot of money on advertising, then probably they're willing to spend a lot of money to make a better website as well. And maybe they even spend a lot of money to make really great products. And that's something that's kind of reflected indirectly across the web. So it's something where we clearly don't want to use the ad side of things for web search, because we want to keep it as neutral as possible. But at the same time, if their company is willing to put a lot of money into creating a great product, creating a great website, then that's something we'll try to pick up on indirectly. That's not something where we'll say, oh, the ad spend is this much, therefore, their ranking should be higher, because that wouldn't be fair in search. I mean, it's something where we also sometimes get contacts from ads representatives internally where they'll say, oh, my most important client has this problem in search. And can you give me some hint? Can I help him to fix his problem, because he's threatening to go away or threatening to stop his ads campaign? And from our point of view, that's something that we really strictly block off.

ROBB: I know. I'm the same, of course.

JOHN MUELLER: Yeah. We even have extra videos from Matt Cutts that are like, you should never do this. So that's something where we really have a really strong firewall to the ad side, just to make sure that people who are currently our clients don't have any kind of advantage at all in normal web search, so that we can really be sure that the things that we show in search match what our algorithms think are the most relevant. And if people are buying ads, then that can be visible on the ad side in the search results. But that won't be visible in the natural search results. Sometimes we also hear conspiracy theories the other way, in that they'll say, if I pay for AdWords, my site will disappear in natural search, so that I end up paying more for AdWords to be visible. So these types of conspiracy things, they come up every now and then, but it's definitely something we don't do.

ROBB: I'm not saying I buy it at all. And if anyone would be tempted, it would be me because of the amount of money we're currently losing and the amount of money we've spent. I just think that it's still a genuine quality signal. And you get the proof or benefit as well. I, for one, wouldn't care if you used it as a quality signal because I think it is a quality signal.

JOHN MUELLER: I think that the tricky part there is also that we only see a tiny part of the online advertising. And if you're buying ads on our search results, that's one thing. But maybe you're doing things on Facebook. Maybe you're doing things on other platforms where there's a lot more traffic, where there's a lot more visibility. And if we were only to use something like AdWords for ranking, then that would completely skew the search results even more.

ROBB: Oh, agreed.

JOHN MUELLER: So we really want to completely block that barrier from the client side of the internet to the natural search side, where we really want to focus on the content and the relevant factors that we find around the content.

All right. "Google says to get quality links and create unique and relevant content. But in our industry, how do we get these quality links when most of our users don't have a website? Google doesn't consider social shares and likes as ranking signals." That's always something that makes it a little bit trickier. But on the good side, the thing to keep in mind there as well is that, if you're in a niche where it's really hard to get links, then everyone else will have the same problem as well. It's not that your competitor will suddenly have an influx of millions of links, if everyone in your area has trouble getting links. So that kind of evens things out a little bit. The other thing is that sometimes it makes sense to just make it easy for people to link to your site. It makes sense to have content that people want to link to, that they want to recommend. Getting traffic from social media, even if those links don't pass PageRank, is something that can be useful for your website, not directly because of a ranking effect there, but of course, you're getting traffic. And you're getting personal recommendations. So people are at least going to your website. So from my point of view, I think it's important to remember that everyone else in your niche is going to have the same problem. And at the same time, working to create something that's really interesting for your users makes sense across the board, because some of these shares might be on social media sites that don't pass PageRank, and some of them might be passing PageRank. So that's something where you have to work with what you have available.

"Does the trust of a page really make a difference for ranking? Can a page with zero trust out-rank a page with 50 out of 100 trust?" So trust, I assume, is some metric, maybe from a third-party tool or something. But essentially, we don't track trust like that, so that's not something that we would use for ranking. And if this is from some third-party tool, then I'm pretty sure that we wouldn't be using that exact metric, because we don't scrape third-party sites to figure out which sites they assign which trust value to, for example.

GARY LEE: John, in relation to that, didn't Matt Cutts recently come out and say that there are plans to make it easier for smaller websites to compete with larger corporations, bigger sites, more powerful sites? And if so, was that also related maybe to the Panda release as well? Is that something you're familiar with, something he recently said?

JOHN MUELLER: I don't know what exactly he would be referring to there. I think, on the internet, small sites still have a pretty good advantage over big sites in that they're just agile and they can move a lot faster. And if there's something that's currently really popular, these small sites can jump on that a lot faster and be more relevant. So it's not something where I'd say bigger sites have an inherent advantage over smaller sites, but rather you have to find your market and your niche and focus on that. So if you're a small site, and you want to compete with Amazon, then probably you're going to need a little bit more luck. Whereas, if you're a small site, and you have a special market that's really unique that you can focus on, where you can provide something really special that someone like Amazon couldn't provide, then that's going to be a big advantage. And that's not something where Amazon is going to say, oh, we should put $10 million into this really small market, because it's never going to be worth that for them. So that's something where small sites, I think, have a really big advantage. And the internet is big. So if you have a small niche market that only pulls in a small percentage of the users globally, then that's still going to be pretty big, if you think of everyone globally who might be interested in that. But I think that's something where we always need to be sure that we're keeping the balance right, that we're not saying, oh, big sites like Amazon are important for the web, therefore, we should ignore all of these small sites. We have to make sure that there's an equal balance there, that these small sites have a good chance to show up in the search results as well, because they do have really great content sometimes.

"What's the best way to deal with pages that are short-lived and expire after a specific time, a few months or weeks? Since we're not sure of the time span, we can't set the unavailable_after META tag. Do pages with a 404 error have any adverse effect on the site?" A 404 is fine. If it expires, if it's no longer relevant, using a 404 is fine. On some sites, if you have something that replaces that expired item, you could use a 301 redirect to the replacement. It really depends on the kind of site that you have, though. So putting 404s on those pages is absolutely no problem. If you know how they're related to other products on your site, using a 301 is also fine. Having a large number of 404s is also not a problem from our point of view. That's essentially a sign that your website is doing things technically right. So I wouldn't worry so much about those counts.

"Why do new product pages from some sites rank well in search, even when they have thin or no content and no significant links from any authority website?" It's really hard to say without having specific examples. One thing to keep in mind is that, if something is really new on the web, and a website creates some content about that really new thing, then, since we don't have so many signals in that area, for example, it's going to be hard for us to figure out which of these pages are really the most relevant. And sometimes, it can happen that if a site that we think overall has really great content and overall is really active has content like that, then we'll say, OK, this new page, we don't really know much about it. But we know it's on a really great website, so maybe we should show it a little bit more in search as well.
But over time, as we understand that content better, as we understand the pages better, that's something that generally settles down into something that's maybe more sane, if you would call it that, maybe looking at the content directly, maybe looking more at how this specific piece of content is recommended online.

"Many times our product pages rank ahead of the relevant page, which is a category or brand page. Despite optimizing category pages for relevant keywords, product pages show up for those keywords. Why would this happen?" It's hard to say, again, without specific examples. Sometimes, what we see is that specific types of pages are just better linked internally. So it looks to our algorithms as if these are more relevant. So for example, maybe you have your new products listed on your home page, and those specific category pages are a few levels down. In those cases, it might look like those new products are actually more relevant because, whoa, these are linked from the home page. They have to be important. So that's the kind of thing that might be playing a role here. But I'd also take a look at your content, your pages directly, to see if there are things that maybe you missed. Maybe your category pages are pretty good category pages, but you're doing crazy keyword stuffing on those pages. And our algorithms are saying, oh, I'm not really sure how to value this page, because there's so much keyword stuffing happening on these pages.
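On the expired-pages question above: the tag the submitter could not use is the unavailable_after robots meta tag, which only works when the removal date is known in advance. A sketch with an illustrative date:

    <!-- Asks Google to stop showing this page in results after the given date -->
    <meta name="googlebot" content="unavailable_after: 25-Aug-2014 15:00:00 GMT" />

When the expiry date isn't known up front, John's suggestion applies instead: serve a 404 (or 410) once the listing expires, or 301 to a direct replacement.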

GARY LEE: Could there be Penguin issues there, John, where for some users those category pages are the ones they've built links to the most? And therefore, the algorithm, Penguin, is actually punishing them a little bit. And so, therefore, the other pages end up ranking well. And the others are dismissed. We noticed that a few years ago, but I don't know how relevant that is today.

JOHN MUELLER: Yeah. Usually, those kinds of algorithms would be more site specific, so they would be affecting all of the pages on the site equally. But theoretically, there might be some things where, if you're doing something crazy with those category pages, something along the line, be it from the manual web spam side or some of our algorithms, might be picking up those crazy signals and saying, oh, these category pages are kind of untrustworthy. We don't really know how to rank them. And in cases like that, product pages might rank a little bit higher. So that's something where you'd probably want to take a look at those specific pages. Look at them with peers and see if there's anything tricky happening behind the scenes that you might want to clean up to make sure that things are really working out well naturally, instead of trying to artificially manipulate these category pages into a state that they aren't really at.

I'm often asked to participate-- oh, wait. Wrong click. "We have a service that's indexed dynamically. Nothing is cached. When indexed by Google, the returned page looks like this. So it looks like a query parameter. Would it be better for us to return paths that look like static files?" From our point of view, those are essentially equivalent. When I talk with our engineers about these kinds of questions, they feel very strongly that you should be using URL parameters if your website uses URL parameters. So instead of artificially hiding those parameters in file paths or in file names, our engineers are very strongly in favor of saying you should be using URL parameters. Because from our side, it makes things a lot easier to understand your pages. So if we see these URL parameters, we'll be able to understand what's happening there. For example, if you have upper and lowercase mixed sometimes, or if you have words missing here or there, or some parameters added here, some parameters added there, that's something that's a lot easier for us to pick up if you have URL parameters. Whereas, if you hide all of that in a path, then it's really hard for us to recognize that, oh, this word here is actually a qualifier for this whole type of page here. And we should be treating it as a category page, or as a search page, or as a product page. So that's something where, if you currently use URL parameters, I'd really strongly recommend keeping those URL parameters, if at all possible. Obviously, if you can turn something that uses parameters into something that looks like a static path that works perfectly, that's also something that's fine. For example, WordPress does that really well, in that it uses the title of the post or the date within the URL to make it look like a static path. But if you're not completely sure about this, not really 100% or 110% certain that you're mapping things properly, I'd strongly recommend just keeping the URL parameters. It makes it so much easier for us to crawl and index those pages.

"Exact match keywords: how to deal with ranking variation for similar terms? There are times when just a single space between keywords shows wide variation among rankings. How to deal with that?" Essentially, our algorithms try to figure that out automatically. So synonyms, variations of wordings, those are things that our algorithms generally do quite a good job at. So that's not something where you'd artificially need to include all variations of all of your keywords on all of your pages.
When we see that happening, it often goes into the keyword stuffing area, where our algorithms will look at that and say, oh, this guy is repeating all the variations of this keyword here. Therefore, he's artificially trying to appear relevant, so we should be cautious when looking at this page. So that's the situation you want to avoid there. Writing naturally, in this regard, and maybe even letting users comment on your content, can also help, because users often use different variations of keywords, or different words to describe something that you're writing about. So user comments in those kinds of situations can sometimes help as well. One thing also to keep in mind is that, sometimes, we do a really good job at recognizing these keyword variations. And for some languages, we might not be doing that great of a job. So I remember, in German a couple of years back, it was a big problem for us, because German has these really crazy long words that are put together out of lots of individual words. And depending on how you put them together, they have subtly different meanings. And we would sometimes not recognize that in our search results properly. So that's something where our engineers needed that information to understand that, oh, you're getting these search results completely wrong in this specific language, because combining these words means one thing, and keeping them separate means something different. And we're treating them as the same, or we're treating them differently. That's the kind of feedback that's really helpful for us. So if you see that happening in specific languages, you're welcome to send that to me. You're welcome to report that in the search results with the Feedback link on the bottom. But that's really helpful feedback for us. But at the same time, that's something that's sometimes really hard to improve on. So sending this feedback to me, or sending it with the search results, doesn't necessarily mean it'll be fixed tomorrow. So these are more the long-term, harder algorithmic problems to solve.
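To make the URL-parameter advice above concrete, here is the contrast the question was asking about, with illustrative URLs; John's engineers prefer the first form:

    # Parameterized: Google can recognize "category" and "location" as
    # qualifiers, and they can be declared in the URL Parameters feature
    # in Webmaster Tools
    http://www.example.com/search?category=office-space&location=london

    # Rewritten to look static: the same information is opaque to crawlers
    # unless every mapping is handled perfectly
    http://www.example.com/search/office-space/london/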

PANKAJ: Hi, John. This is Pankaj. OK, I have a question related to AJAX content. So we have a URL which shows the AJAX content, right? And we haven't updated the HTML snapshot for serving bots per the _escaped_fragment_ request, though the actual page content has been updated. So can it be a case of cloaking in the eyes of GoogleBot?

JOHN MUELLER: It's tricky in those situations, because sometimes, you have subtly different content in the escaped fragment version, just because of the way the page is created. But it's not something where our web spam team would look at that and say, oh, the content is a little bit different, therefore, we will punish this website. So when the web spam team sees that, they understand that there can be technical reasons for these kinds of situations. And usually, that's fine. On the other hand, if you've changed your content and it's not reflected in that version, you won't be able to rank for that changed content. So if you improve it, if you expand the content, then, if we don't see it, we can't show that in search.
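For context on Pankaj's question: under the AJAX crawling scheme of the time, a hash-bang URL was crawled via a rewritten _escaped_fragment_ URL, for which the server returns a static HTML snapshot. A sketch of the mapping, with illustrative URLs:

    # What users see in the browser
    http://www.example.com/products#!category=office-space

    # What Googlebot requests instead; the server should answer with an
    # HTML snapshot matching what users see after the AJAX has run
    http://www.example.com/products?_escaped_fragment_=category=office-space

John's point is that a snapshot lagging slightly behind isn't treated as cloaking, but Google can only index and rank what the snapshot actually contains.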

PANKAJ: Oh, OK, great. I have a second question, related to faulty redirects. So in Webmaster Tools, we can see the faulty redirects, right?

JOHN MUELLER: Mm-hm.

PANKAJ: But how does Google identify whether it is a faulty redirect or not? Because I encountered a case wherein it is actually redirecting to the correct page, wherein the content is relevant to the same search as the [INAUDIBLE] one. But Webmaster Tools is showing that it is a faulty redirect.

JOHN MUELLER: That sounds like something that I could bring back to the product team. So if you have some examples like that, I'd love to see them.

PANKAJ: Definitely, I'll send them.

JOHN MUELLER: Yeah, you can send them to me, maybe, directly on my Google+ page. That's probably the easiest. In general, the faulty redirect is something we flag when a smartphone user accesses your desktop pages and is redirected to the home page, for example. That's the main problem that we see there.

PANKAJ: Sorry to interrupt. That is completely fine. But I found a couple of examples wherein the entire content won't be the same in each case, but it does seem relevant, it is fulfilling the searcher's query, and it is still showing as a faulty redirect.

JOHN MUELLER: Yeah, that sounds like something where maybe we're picking this redirect up wrong, or maybe we're misunderstanding it. So again, that's the kind of feedback that would be really useful for the product team to have.

PANKAJ: Oh, OK. And second thing, in those scenarios, can putting the same title tag on both of the URLs help Google to avoid these kinds of errors?

JOHN MUELLER: I wouldn't focus so much on the title tag. We should be able to recognize that otherwise as well. So if I have those examples, I can check with the product team to see what we can do to fix that on our side. That shouldn't be something that you have to worry about.

PANKAJ: Oh, OK. Great. Thank you.

JOHN MUELLER: OK.

PANKAJ: Yes, thank you.

JOHN MUELLER: There's a question about Google+ profiles. I don't really know how those things are handled. So you'd have to ask someone from the Google+ or the Google Local-- I think it's called now-- local business team about those profiles and pages.

"When I submit a rich snippet in your Webmaster Tools testing tool, it demonstrates well. But when it is in the search results, it doesn't work correctly." With rich snippets, we have a few factors involved. On the one hand, you have to technically make sure that it's marked up properly. So it sounds like you're doing that properly. But on the other hand, we also have to trust that markup. And we have to understand that it's relevant for these pages in this search result. And sometimes, for example, if we don't really trust the website that well, if we don't understand its quality, or think that maybe it's a lower quality website, then we're going to be less likely to show rich snippets in the search results as well. So if you're seeing things that are working fine in the testing tool, but don't show up in search, I'd really recommend making sure that the overall quality of your website is as high as it can be.

AUDIENCE: Hi, John.

JOHN MUELLER: Hi.

AUDIENCE: Hi. I've got a question around structured data itself. And you were just talking about rich snippets. So apart from showing it in the search, is there any systematic benefit that we derive from the structured data on the pages? And will it help GoogleBot to better understand the page that we have?

JOHN MUELLER: Yes, we do use structured data to try to better understand the pages, to understand the concepts that you're talking about on these pages. So that's something that does help us. I know it's something that the team from Bing, for example, also talks about all the time. And a lot of times, even if you mark up things with structured data on your pages that we don't show as rich snippets at the moment, that's something that helps us to better understand the concepts that you're talking about on the page. So for example, if you're talking about golf, that could be the sport. It could be a car. Those kinds of things, if they are marked up properly on those pages, make it a lot easier for us to understand what exactly you're writing about there. So from my point of view, I definitely recommend doing that.
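A minimal sketch of the kind of disambiguating markup John describes, using schema.org microdata (the golf example is his; the markup itself is illustrative):

    <!-- Marked up as a sports event, "golf" can't be mistaken for the car -->
    <div itemscope itemtype="http://schema.org/SportsEvent">
      <span itemprop="name">Charity Golf Tournament</span>
      <meta itemprop="startDate" content="2014-06-21">
      <span itemprop="location">Zurich</span>
    </div>

Even when no rich snippet is shown for this markup, the typed properties tell Google which sense of the word the page is about.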

AUDIENCE: All right. Thank you.

PANKAJ: So some-- hello?

JOHN MUELLER: Yes?

PANKAJ: John, it is in relation to the same. So does it help the ranking of that particular URL if we are using rich snippets? Because we are helping Google to understand the content. So somewhere, if it is giving the signal that it is relevant content, it might help?

JOHN MUELLER: Yes and no. So we don't use the rich snippets for ranking at all. So whether or not your page uses rich snippet markup isn't something that we would use for ranking. But like you mentioned, sometimes it helps us to better understand the pages and show it in the relevant search results better. So it wouldn't be that we would, overall, be promoting your site better in search results. But maybe we can show it better for the queries that really matter or for the queries where users are actually really looking for your content, and not something else that just happens to use similar words on the page. So it's not that there's any kind of a ranking boost associated with structured data. But if we understand it better, maybe we'll show it in the right places a little bit better.

PANKAJ: Oh, OK. Thank you.

JOHN MUELLER: OK.

KARL: Hi, John.

JOHN MUELLER: Yes?

KARL: I did manage to put my URL in the group chat. Is there any chance you can just have a little look, see if there's anything major that I'm doing wrong?

JOHN MUELLER: Let's see. Where is it? OK. Let me just take a real quick look. But it's really tricky, always, to take a quick look without looking at the details. Yeah, I think working on the links is probably a good thing that you've been doing there. So I see that Penguin is still not really happy with your website. But this is primarily probably a matter of things updating. But I think, as always, with these kind of algorithms that take longer, the important part here is that you don't want to go into this iteratively, in the sense that you work on some of the links now. And when it updates, you say, oh, well, I'll clean up 20% more links and hope that's better for the next time. Because there's just this long time between when you update a link and we actually go out to crawl it. We re-index it. We re-process that information. We use that for the new algorithm update. There's just this long time between there. So I'd really recommend really making sure that you're cleaning up everything that's associated with that. It sounds like you're doing that already. But just make sure you're not saying, oh, well this link is kind of organic, so maybe I'll leave it here and take it out if Google complains in half a year, because you don't want to wait that extra half a year associated with these kind of issues.

KARL: Yeah, of course. OK. I did find a fair few spammy forum links, but there was also a handful of genuinely natural ones. The natural ones were using a URL to a page on my website, and they're talking about that product on a forum. Would you suggest it's probably best to remove or disavow even a genuine link back, because it's in a forum? And I have found problems with forum links. So the SEO company must have used that as one of their tactics as well.

JOHN MUELLER: Yeah, I wouldn't generally say forum links are bad, because a lot of times, there are really good links from forums where people are recommending your content. They're talking about your business. That's absolutely fine. But like you said, some SEO companies go into forums and just drop their links directly. Or they add low quality threads and say, hey, what do you think about this company? And then they come in with a sock puppet and say, oh, this is a great company. I go there all the time. And those are the things where our algorithms pick up on these patterns and they react to that.

KARL: Yeah, that's what I was just thinking about, if they had picked up on the pattern of forums. And say I've left five or six that I think are natural. I don't think they help at all. Will it do any harm to just put them in a disavow anyway?

JOHN MUELLER: I wouldn't put them in a disavow, if you're sure that they're really natural.

KARL: OK.

JOHN MUELLER: So I wouldn't say that, if you have been forum spamming, then everything from the forums is bad. Make sure you're cleaning up those things that the previous SEO companies were doing there, and that you're disavowing them where you can't clean them up directly. And make sure that things are working right from there on. But if they're really good links from forums, I'd definitely leave them. I mean, that's something where people were recommending you. That's a good sign.

KARL: Yeah. All right. Thank you very much for looking.

JOHN MUELLER: Sure.

GARY LEE: John, I've got a quick spam related question I just wanted to talk to you about. In the chat, I-- let me pull up the chat here. I'm just going to post a link. I think this is a quite serious problem a lot of people have been talking about with some bots. And I've just put an example there-- I'm not going to talk about it online, but-- for you to see. And this is an example of what we're seeing. A lot of sites are getting penalized and immediately have backup URLs. And this is one of them that appears very high up for a lot of keyword rankings. And everybody that's looked at it for us has shown that it's manipulation. But as soon as you take this down, there's another one for the same company. It's come back again three months later. What I think everybody is very frustrated with, and why a lot of people are building dodgy links to their own websites and doing things, is that we're not seeing action quickly enough. And I know that we all want a better web. And the indication is that this isn't taken down quickly enough. And it takes three months. Then the psyche behind people is that, well, I'll never be up at the top, because there's always someone cheating that is going to be above me. And I know there's lots of signals that we can't see that might cause some of these businesses to rank well, but some of them are such blatant manipulators. We need a better place to go to highlight these things, where they really are taken seriously. And I feel that the link reports that we do and the spam reports, they're just used as an idea. I think if you guys can actually put a marker against them in some way to say, look, let's react quickly to these things in some way. And then, when the next iteration of Penguin or Panda runs, they'll be factored into that algorithm. Because you always say, we'll see what they're doing wrong, and we'll try to make sure we capture everybody. But if we can capture them quickly, if we want to try and stop these new domains from manipulating the results, we're going to have very clean results. But we see this so often. And this is such a blatant example of abuse. This particular business has about five URLs that they also own that are in the top 20 results. I'll never bring this up with you ever again, if you can tell me that you genuinely think that this website ranks where it should.

JOHN MUELLER: I don't know this website. So I can't say anything about that. Specifically on the topic of spam reports and how we respond to those, that's definitely something we've been looking into. And that's something where I imagine you will see some movement over time, because we're working on increasing the velocity of how we respond to these, but also making it easier for people to let us know about these kinds of issues. But that's something that is probably not going to be quite as visible as, maybe, a new feature in Webmaster Tools, because that's a lot of stuff that we're doing on the back end, on our side of things there. But filing spam reports is definitely a good idea there. If you see things that are bigger patterns, where you say, oh, all of these are from the same company, and they're all spamming in the same way, or these have been swapped out here, that's something you can always send my way.

GARY LEE: [INAUDIBLE] asked you last week, actually. And it was a website that had 20 of its own domains, all interlinking, and doing that. And I sent you that email about two weeks ago.

JOHN MUELLER: OK, yeah.

GARY LEE: [INAUDIBLE] for a long, long time. So if you wouldn't mind just looking back in your history and passing that one onto your spam team, it's a different URL to the one I posted.

JOHN MUELLER: OK, sure. Yeah, especially bigger patterns that you can't really post in the forum there, that's something you can send our way directly. And we can talk to the web spam team about that. We can't promise that we can take action on all of these. And sometimes, we take action on them, and they still rank fairly highly. So they're ranking for other reasons there, and not because of these spamming things that they're doing. But it's definitely something we take seriously and something we work on, handling both algorithmically where we can-- usually, that makes more sense-- and manually in the cases where we see that it really causes significant problems.

GARY LEE: Is there any chance that the emails that we send you-- is there any chance that, if they do get passed onto the web spam team, they can respond saying, we've seen it? I'd just like to know that there's some kind of acknowledgement to say, we've looked at it. Whatever action they take, I don't care. All I'd like to know is that it didn't end up in your spam filter, or buried among 3,000 other emails from everybody desperately trying to contact you. Because right now, we have no idea whether or not you've seen it, or whether you've been dealing with it.

JOHN MUELLER: That's something we've been looking into for web spam reports in general. Kind of like with Map Maker, if you send updates about maps, you get an update saying, hey, thank you for your update, we're going to take a look at this. And maybe you'll get an update saying, hey, we took a look at it, and you were wrong, or you were right. And those are the kinds of things where I think it helps the people reporting spam to understand, it makes sense that I report spam, or to understand, maybe I was reporting something that wasn't really a problem. And that kind of feedback, I think, is really useful for you guys. And it also makes it so that we get higher quality spam reports. Because if we can tell you the kinds of things we take action on, then probably you'll be able to figure out, oh, this is really a problem that I need to tell Google about, or this is something that Google will maybe fix in an algorithm at some point; I can't really solve it now.

GARY LEE: Yeah, because you've got hundreds of thousands of SEOers who are willing to do a bit of work for Google. We're all here, ready to make the search engine better. And you need to better utilize us to do that.

JOHN MUELLER: Yeah.

ROBB: Gary, I'm with you. But what's stopping other people reporting you if you're doing something right?

GARY LEE: That doesn't matter.

JOHN MUELLER: Well, I mean, that's something we have to look at manually. It's not that we would say, oh, this is a spam report, we'll accept it always. We really need to take a look at what's actually happening there.

GARY LEE: If I'm doing something wrong, then I should get punished for it. And if I'm not, then Google's not going to do anything about it. I mean, I trust Google on that front, that they're not just going to do something wrong. And that's why I think this is a really good idea.

JOHN MUELLER: But it's something that I think is probably more of a long-term goal from our side. It's not something where you'll see any changes next week. But it's definitely something we're aware of. And I think it's something where we should be heading in that direction, to be a little bit more transparent about the things that you submit to us, so that you get feedback on what happened there, whether it got looked at or not, so that you also understand where it makes sense to submit things to Google, and where Google will say, well, we can't really change it. This guy is just better than your website. And that's the way it is. But at least giving some feedback back to those people who are submitting spam reports, I think, definitely brings up the quality of those reports as well.

GARY LEE: Yeah, I think you could almost build in a quality score, where people who successfully report things that really are bad are taken notice of more quickly. And obviously, people who'd report lots of things just to be seen would just get filtered into the noise at the bottom. And so you really do actually get to the people who are reporting genuine things.

JOHN MUELLER: I mean, there are a lot of ideas that are theoretically possible here.

JOSHUA BERG: John?

JOHN MUELLER: Yes?

JOSHUA BERG: You mentioned that we may see improvement on systems for giving feedback on manipulation type issues. One thing that I've had some concern about, and that I would like to see if possible, is a way for us to give more feedback about mobile issues, responsive web design, and things like that. The latest blog post that came out was really good. I mean, I was like, wow, we've been waiting for this a while. So that's already an improvement there. Almost every day, though-- because I'm a bit of a news junkie. I read news a lot on my mobile. And I go through sites a lot. And I see sites ranking high that just cannot be viewed on mobile. And they'll even be in first, second, third place. So if we had even just a button to click, OK, this site doesn't work with mobile, it doesn't resize properly, or something like that, ways to provide a little bit of feedback. I know mobile is a newer thing. And Google's concentrating on it a lot more now, because it's more of a priority, because so many people are going towards that. But that's just a suggestion and an idea, that I would like to be able to give more feedback in that area.

JOHN MUELLER: That sounds like a good idea, yeah. I mean, we're working on bubbling these things up when we find them as well. But being able to submit feedback might be a good idea too, yeah. All right. We're way over time. But it's been really interesting talking to you guys. And lots of good ideas flowing back and forth, so we should do this again.

PANKAJ: John, may I ask a last question?

JOHN MUELLER: OK, go for it.

PANKAJ: Oh, OK. We came across a client who gave us a website wherein, if we do the site search-- site, colon, and type the website-- we came across results which are showing improper title tags. So what could be the possible reason for that? Because let's say there's a title tag of abc.com and something. So it is showing only abc.com, and skipping the rest of the things.

JOHN MUELLER: Yeah, so we rewrite the title tags when we determine that they're either too long, or that they're not as relevant for the user there. Sometimes, we get that wrong algorithmically. And those are the kinds of things that we'd like to hear about. But one thing to really keep in mind is, when we rewrite the title tags, we do that dynamically, based on the query. So it's not that the titles you would see in a site search are exactly the titles that a user would see. So instead of focusing on the site search, I'd try out the queries that are actually bringing visitors to your site and see what the titles look like there. Maybe you'll see that the titles there actually look reasonable.

PANKAJ: OK. Thank you.

JOHN MUELLER: OK. All right. So let's take a break here. Thank you, all for coming. I wish you guys a great weekend.

ROBB: Thanks, John.

JOHN MUELLER: See you next time.

JOSHUA BERG: Thanks so much, John.

KARL: Thank you very much, thank you.

MENASHE AVRAMOV: Bye.

JOHN MUELLER: Bye-bye.