Google+ Hangouts - Office Hours - 12 September 2014



Transcript Of The Office Hours Hangout


JOHN MUELLER: Hi. Welcome, everyone, to today's Google Webmaster Central office-hours hangouts. My name is John Mueller. I'm a webmaster trends analyst here at Google Switzerland, and I'm here to help answer your webmaster and web-search-related questions. Looks like a lot of people are in here already. So to start off with, do any of you want to jump in and ask the first question?

MALE SPEAKER 1: John, can I ask a question about secure and site maps?

JOHN MUELLER: Sure.

MALE SPEAKER 1: We very recently, as most people have, moved to All Secure, and we did that by-- I don't know if you remember last week-- I meant enforcing 301ing to secure [INAUDIBLE] new option. So we did that within the site map as well. And I think we did it wrongly because we suddenly were throwing errors from Webmaster Tools saying, you've submitted 1,500 URLs, and only 12 have been indexed, I think because the site map immediately also 301ed over from non-secure to secure. And so as soon as the bot got to the site map, it couldn't find any URLs for the site it was supposed to be indexing. So the normal site should have a normal site map, and the secure site should have a normal site map and just let Google do its thing and index both and then decide-- and then 301 on a page level. Is that right?

JOHN MUELLER: Yeah, so basically, you 301 redirect all of your old pages to the secure ones? Is that correct?

MALE SPEAKER 1: Yes. Also, we did it on the site map, so Webmaster Tools on the normal one started to have a bit of a fit and say, I can't find anything.

JOHN MUELLER: Yeah. There's an option of temporarily submitting the old URLs in a site map file so that we can crawl those a little bit faster. What you could do there is say that you have changed these old URLs and set the last modification date for those. We'll go and crawl those URLs, see the redirect to the secure version, and follow that. Within site maps, it's likely you will see some kind of warning there saying, you're submitting URLs that are actually redirecting, something like that, and that's completely fine. You know you're doing this. It's something you're doing on purpose so that those redirects are found and followed, so that's completely fine. It would be a little bit different if you submitted those URLs and you expected the insecure versions to show up in the index because you submitted them in the site map. Then, that warning would be relevant. But if you're moving from one version of a site to another version, then that warning essentially just tells you that they're being processed normally.
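
For illustration, a minimal sketch of such a temporary site map: it lists the old HTTP URLs with a fresh last-modification date so the 301s to the HTTPS versions get recrawled sooner. The domain and URLs are hypothetical, and Python is used only to write the file.

```python
# A rough sketch (hypothetical URLs): a temporary sitemap listing the *old*
# HTTP URLs with a fresh <lastmod>, so the 301s to HTTPS get recrawled sooner.
from datetime import date

old_urls = [
    "http://www.example.com/",
    "http://www.example.com/products/",
    "http://www.example.com/contact/",
]

today = date.today().isoformat()
entries = "\n".join(
    "  <url>\n"
    f"    <loc>{url}</loc>\n"
    f"    <lastmod>{today}</lastmod>\n"
    "  </url>"
    for url in old_urls
)

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + entries + "\n</urlset>\n"
)

with open("sitemap-old-http-urls.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
```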

MALE SPEAKER 1: Right. And having two site maps, one for the secure, one for the not, that also is fine because it all--

JOHN MUELLER: That's fine.

MALE SPEAKER 1: [INAUDIBLE] and Google then choose which one.

JOHN MUELLER: Yeah that's fine.

MALE SPEAKER 1: That would be better.

JOHN MUELLER: I think in the long run, you'd want to focus on just the version that you want to keep [INTERPOSING VOICES]

JOHN MUELLER: But in the short term, [INAUDIBLE] doing a site move like this, that's totally fine. It would be a little bit different if you submitted those URLs-- [INTERPOSING VOICES]

MALE SPEAKER 1: I didn't get that last bit. Someone's got their microphone on.

JOHN MUELLER: So in the long run, you'd want to just submit the final URLs, so the secure version where you have your content in there.

MALE SPEAKER 1: OK, so give it two or three weeks to let that all fully re-index on secure.

JOHN MUELLER: Yeah. Exactly.

MALE SPEAKER 1: I'll go into Webmaster Tools and see what happens. OK.

JOHN MUELLER: That should work. Yeah.

MALE SPEAKER 2: Hey, John, can I ask a couple of questions?

JOHN MUELLER: Sure. Go for it.

MALE SPEAKER 2: I don't know if you remember we talked a couple of weeks ago. Took a client more than a year and a half ago, and despite a lot of efforts, I haven't managed to move him through the results to get more organic search [? subject. ?] I was assuming he was penalized, some sort of a factor. I don't know if you managed to get a look at it. I left you a link.

JOHN MUELLER: If you can drop the URL in the chat, I can double check.

MALE SPEAKER 2: I sent you a [? brief ?] message with the Webmaster [? oral ?] [? news. ?] [INAUDIBLE] I talked with a few people there about [INAUDIBLE] managed to reach a good conclusion. So, yeah, maybe you can help me with that. And the second question would be actually related to this one. Is there any possibility in the future on Webmaster Tools to show some information regarding algorithmic penalties, such as Panda or Penguin. I assume you haven't done so far because maybe if you could do it, you would have to show what exactly you consider low-quality links and things like that. But I guess it will help us as SEOs, even a small notification like, we noticed an increasing amount of low-quality links to your website. Please check out the guidelines, something like that, without actually giving examples. That would also actually help webmasters know that somebody's doing a bad job on their team, and that will help them maybe research more into it and find out more about what the quality guidelines are. And also actually would help also SEOs when, for example, I took some new clients, and they paid for a year with an ex-SEO firm that has brought them a lot of black-hat links, directories [INAUDIBLE]. And they were very skeptical in removing those links [INAUDIBLE] for example actually removing them because they paid for it, and it's just my word against the other SEO team's word. So they were not convinced that they should remove those links without any proof that their site would be affected by those links negatively. So even a small notification would help that by just saying that there's-- you don't even have to tell them they're penalized. Just say that there are a bunch of low-quality links that maybe you want to take a look at. That's more of a suggestion than a question.

JOHN MUELLER: Yeah, it's a tricky topic. We bring this up with the engineering teams regularly to see what we can do to bring a little bit more of that algorithmic information into Webmaster Tools so that webmasters have a little bit more insight into that as well. The main problem is really that our algorithms aren't really made for showing information to the webmaster. They're made to optimize our search results so that the good results bubble up. And because of that, it's something that we can't really bring that information one-to-one in Webmaster Tools because it would be [? potentially ?] more confusing to the webmaster. So sometimes, for example, when we recognize that a site is ranking, let's say, for the wrong terms. It's ranking really well for this term, but not for this other term that we think is more relevant. One of our algorithms might step in and say, well, your site isn't so relevant for this here, but it is relevant for this one here. And if we were to show that to the webmaster and say, we think your site is low-quality because it doesn't match what it was ranking for or what we were showing in the search results, then the webmaster might think, oh, well, this is something they have to fix. But from our point of view, it's just moving things to the right queries and not necessarily saying this is something that the webmaster is doing wrong. It's something we were doing wrong, our algorithms in the past, and this algorithm is kind of fixing that. [INTERPOSING VOICES]

JOHN MUELLER: So it's really tricky. I understand with links, specifically, it's perhaps a little bit easier, but even there, it's sometimes really hard to algorithmically bring this information out because even when we do this manually, from the web spam team, when we send example links to sites that were manually flagged for having web spam issues like that, we have enough problems there finding really good example links, too. I'd be kind of worried if we did this completely algorithmic and just tell the webmaster, oh, you did some spam. You should clean that up. And the algorithms are maybe picking something up that isn't really that critical or isn't really that problematic or is something a competitor put up or an ex-SEO put up that we're ignoring already. It's not something that they really have to worry about. On one hand, I'd love to put more of this information into Webmaster Tools, but on the other hand, we really need to make sure that it's really, really actionable for the webmaster and really useful, not just exposing some internal details that don't really have that much sense for the webmaster.

MALE SPEAKER 2: Right, but would you do it like on-- since Penguin is being refreshed not so often, so it's not like daily or something like that. It's every few months. Maybe once the Penguin [INAUDIBLE] algorithmic refresh, you could just trigger some sort of message or something like that?

JOHN MUELLER: It's really hard. Yeah. This is something we do bring up with the engineering teams regularly because we know that when we look into our internal tools, we sometimes have information that would be really useful for the webmaster. And if the webmaster wants to fix the problem, we should help them fix the problem as much as possible. On one hand, we'd love to do that. On the other hand, it's not that trivial. So it's not something that I would imagine you'd find in Webmaster Tools in the next month or so, but I wouldn't be surprised if it showed up-- I don't know. I'm making a big guess, maybe a year or something like that. I think we'd love to bring more of that information into Webmaster Tools, but at the moment, especially with Webmaster Tools, the priorities are more on the really hardcore, technical issues that you're doing wrong. So if you have a site set up that's completely wrong for mobile, that's something that's very black and white. We can test it. The webmaster can fix it. It's done. And all of these kind of gray areas where our algorithms are doing a little bit of finding some small problems, that's very tricky to communicate to the webmaster. But we hear it from you guys regularly. We do bring it up with the teams regularly, so it's not something I'd say is completely out of the question. It's just not trivial at all.

MALE SPEAKER 2: Yes. This is, as I said, it was related to my previous client's ranked website in question because some notification of this sort would at least help me to know where to target my efforts going on, especially if there's a problem like this where nothing has worked so far, even though we have done a lot of effort. It would be very useful at least to know where to focus more than other areas [INAUDIBLE]

JOHN MUELLER: Yeah. We definitely hear you.

MALE SPEAKER 2: OK. Thank you.

MALE SPEAKER 3: Hi, John. This is [INAUDIBLE].

JOHN MUELLER: Hi.

MALE SPEAKER 3: Hi. I have a question. Let's say we have a URL which is blocked by robots.txt, and again, it has been 301 redirected to from another URL. So would it pass the PageRank or not? I just wanted to confirm.

JOHN MUELLER: It wouldn't pass any PageRank because with a robots.txt, we wouldn't see the redirect at all. So if it's blocked by robots.txt, then the URL that's blocked collects the PageRank, but the redirect doesn't forward any of that.
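
A minimal sketch of why this happens: a well-behaved crawler checks robots.txt before fetching, so a blocked URL is never requested and the 301 behind it is never even seen. The URLs and rules below are hypothetical.

```python
# A rough sketch of why a robots.txt-blocked redirect forwards nothing:
# the crawler checks robots.txt first and never fetches the blocked URL,
# so the 301 behind it is never discovered. URLs are hypothetical.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /old-section/",  # the redirecting URLs live under this path
])

blocked_url = "https://www.example.com/old-section/page"
if rp.can_fetch("Googlebot", blocked_url):
    print("Allowed: the 301 would be fetched and followed.")
else:
    print("Disallowed: the redirect is never fetched, so nothing is forwarded.")
```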

MALE SPEAKER 3: OK. Is there any possibility of that particular URL, the destination URL. It has not been crawled from anywhere else, so it is also being blocked.

JOHN MUELLER: You could submit a site map. That might help us. But if this is the only link to that URL, then that seems like something where our algorithms might think maybe this is not such an important URL after all, and we might not index it that well. So I'd just make sure that it's easy for your users to recommend these URLs so that these links do show up more, and don't hide everything behind a robots.txt redirect. Make it so that people can just naturally link to your website.

MALE SPEAKER 3: OK. Thank you. I have one more question. [INAUDIBLE] We have an e-commerce site, and it got the data from the manufacturer directly in the databases. So there's a possibility [? other ?] they will be [? sending ?] the same database to [? others ?] as well because the product is the same. So the duplication is there. So when the site is so exhausted that it is very difficult to [INAUDIBLE] a single state. So if we have [INAUDIBLE] create [? our ?] unique content at some stage of some part of the website, but again, the percentage of duplication is very high. So would it be [INAUDIBLE] a right choice to shut down some pages of the website, so the percentage of overall duplication of the website can be balanced out? So in some sense, we are not eradicating the entire duplication but balancing it out. So [? queue ?] ranking would be affected in that [INAUDIBLE].

JOHN MUELLER: So I think, first, we don't penalize a website for duplicate content. So just because you use your manufacturer descriptions doesn't mean that we'll penalize your website. What will happen is when someone searches for those individual pages or that piece of content, we'll try to show something that's relevant there. So that might be your page. That might be the manufacturer's original page. It might be someone else that's selling the same product. It would be hard to say. But essentially, it's not something where we would penalize your site for having that, so I wouldn't say you'd need to remove it completely. But if you feel that this is lower-quality content because it's just the same as everyone else has, putting a noindex on there is a good idea so that the content remains within your website. Users can still find it by navigating through your website, but it's removed from the search results. So that might be a possibility there. But otherwise, try to, where you can, make sure you have your unique content, your unique value that you're adding to this content, which might be, if you're selling these products locally, then maybe your address is something that's relevant there. So people are searching for the product and your city name, so that would be a good match. But if you're just selling this online and everyone else is selling it online in the same way, then that's kind of hard to provide that extra value. But be creative. Maybe there's something that you can do there that really provides a lot of [INAUDIBLE] that others don't have.
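
As an illustration of the noindex option described above, here is a minimal, hypothetical sketch: pages that only carry the manufacturer's boilerplate description stay reachable for users but are marked noindex, in this case via the X-Robots-Tag response header (a robots meta tag in the HTML would work the same way). The Flask app, routes, and SKU list are made-up examples.

```python
# A rough sketch (hypothetical Flask app): thin pages carrying only the stock
# manufacturer text stay navigable for users but are marked noindex.
from flask import Flask, make_response

app = Flask(__name__)

# Hypothetical flag: which SKUs still have only the stock manufacturer text.
ONLY_MANUFACTURER_TEXT = {"sku-123", "sku-456"}

@app.route("/products/<sku>")
def product(sku):
    resp = make_response(f"<h1>Product {sku}</h1><p>Description ...</p>")
    if sku in ONLY_MANUFACTURER_TEXT:
        resp.headers["X-Robots-Tag"] = "noindex"  # keep it out of search results
    return resp

if __name__ == "__main__":
    app.run()
```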

MALE SPEAKER 3: OK. Thank you.

MALE SPEAKER 4: All right. John?

JOHN MUELLER: Yes.

MALE SPEAKER 4: Just quickly on that one. With regards to [? I've ?] [? seen ?] unique content on e-commerce websites, we've got a couple of sites where, say for instance, they're both selling, I don't know, for example, bottles of Coke. One's 500 mL. One's one liter. Obviously, the content on the pages is going to be very similar, apart from the actual size-- quantity of the actual Coke that's being sold. How would you go about displaying that sort of thing on the website?

JOHN MUELLER: It depends a bit on your website and how far you want to go. There are two main options where you could say-- [? you're ?] [? muting. ?] That's a weird echo. There are, essentially, two main options that you can do there. One is just to leave these different product variations on your website under individual URLs so that they're available on your website. If someone is searching specifically for that variation, then they'll be able to find it in search. The downside there is we have to crawl a lot of pages, and if you have variations and variations, like let's say you have t-shirts in different colors and in different sizes and for male and for female, then that's a lot of different variations that we have to crawl and index separately. You could do that if you wanted to. So if someone is searching for a blue t-shirt for this size, this type, then you would have a perfect page for that. And another idea could be to fold those different variations into a single URL, where you say, this is the main product URL. And all of these options are, essentially, variations that I have available on this page, but this page is about this product in general. The advantage of doing that is that you have a much stronger individual product page because you have one of these. You don't spread yourself thin within your website with all of these different pages. And it's something that will probably rank a little bit better for the more general queries where someone is searching for a t-shirt with maybe a specific design on it, but they don't care about the color or the size, or they'll pick the color and the size when they come to your site. So those are essentially the two main options there. It's not that we'd say one is correct and the other one is not correct. You kind of have to work out how it works for your website. If you have t-shirts that are in different sizes, then probably those variations aren't so relevant that it makes sense to make individual pages for that, but if you have different patterns on these t-shirts, maybe it makes sense to have individual pages for each of these patterns. So you have to look at the products that you're selling and consider is this specific variation relevant and unique enough that I want to have it indexed separately. Or is this just a part of a more general product that I want to have visible in search, and the user will pick the right variation that they want when they come to my site.

MALE SPEAKER 4: [? OK. ?] Thank you.

MALE SPEAKER 2: John, related to that question, I usually advocate for the second option of going for a single URL where the variations of the product are available as attributes or something like that. Would it be useful to include, for example, the t-shirt size in the meta description, like available sizes-- S and-- something like that, so we could also be able to rank better for people to search for black t-shirt, size 10, or something like that?

JOHN MUELLER: We don't use a meta description for ranking. We do show that in the snippet, but it's not something we use for ranking. So from that point of view, it wouldn't help for your ranking, but it might make this snippet a little bit easier to understand. So maybe you could do that like that.

MALE SPEAKER 2: So you cannot take actual text from the meta description to-- or use it for at least [INAUDIBLE] on the ranking?

JOHN MUELLER: No. If it's just the meta-description tag, we don't use that for ranking. So we use that for the snippet, but we wouldn't use that for ranking the page.

MALE SPEAKER 2: Oh, OK.

MALE SPEAKER 5: I have a question about the href language tag.

JOHN MUELLER: OK.

MALE SPEAKER 5: And we have a big site with about-- apps, and it's in three languages. It's in English and German and French. And it was first indexed without any of these rel="alternate" href tags. And if you search in Germany, you see the English versions and vice versa. It's really crazy. So we implemented the tag afterwards. That's four weeks ago. But still it's really crazy that if you search in Germany, you find the English results. And if you search in the US, you find the German results. How long should we be patient, or should my customer be patient, until those href language tags really show effect?

JOHN MUELLER: On the one hand, we need to recrawl both of those variations.

MALE SPEAKER 5: Yes, all of them. Yes.

JOHN MUELLER: So that takes a bit of time. Depending on the size of the website and the--

MALE SPEAKER 5: It's about three million pages.

JOHN MUELLER: Yeah. And depending, I guess, on how much we crawl, which pages you're seeing in the search results. If they're like the important pages in the search results, then that's something we'd probably crawl and index a lot faster. If they're very random pages on the website, then maybe we'll recrawl them every couple of months.

MALE SPEAKER 5: Oh.

JOHN MUELLER: So that's something to kind of keep in mind. But in general, I imagine you should see some kind of visible result after four weeks.

MALE SPEAKER 5: Four weeks?

JOHN MUELLER: Yeah.

MALE SPEAKER 5: Is there any way to speed it up? And is it better to put it in the code or on the XML site map?

JOHN MUELLER: Both options work. So they work at the same speed. We have to recrawl those pages even if the href lang is in the site map. What you can do to speed that up is submit a site map file with a new last modification date for those URLs.

MALE SPEAKER 5: OK.

JOHN MUELLER: That's something we could process. With a site that size, I don't think there's any magic bullet that you can use to speed everything up. But if you still see these problems after, I guess, four or five weeks, then that's something we definitely want to see those examples so that we can figure out what we need to do better on our side. It sounds like you've waited a reasonable time to let that get updated. So it almost feels like something on our side is sticking to something that's wrong.
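
For reference, a minimal sketch of what hreflang annotations in a site map can look like, with a fresh last-modification date to encourage recrawling as discussed above. The domain and paths are hypothetical, and Python is used only to print the entries.

```python
# A rough sketch of hreflang annotations in a sitemap: each <url> entry lists
# all language alternates (including itself) and carries a fresh <lastmod>.
# Domain and paths are hypothetical.
from datetime import date

alternates = {
    "en": "https://www.example.com/en/page",
    "de": "https://www.example.com/de/page",
    "fr": "https://www.example.com/fr/page",
}

links = "\n".join(
    f'    <xhtml:link rel="alternate" hreflang="{lang}" href="{href}"/>'
    for lang, href in alternates.items()
)

today = date.today().isoformat()
entries = "\n".join(
    "  <url>\n"
    f"    <loc>{loc}</loc>\n"
    f"    <lastmod>{today}</lastmod>\n"
    f"{links}\n"
    "  </url>"
    for loc in alternates.values()
)

print(
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"\n'
    '        xmlns:xhtml="http://www.w3.org/1999/xhtml">\n'
    + entries + "\n</urlset>"
)
```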

MALE SPEAKER 5: In the webmaster tool where it should show that these tags are noticed, in the English side we see 600,000 of them and in the German version only 6,000-- of millions. So there is something stuck there. And another question related to that is should we help the customer or the user out there? If they access the site, should we redirect them based on their IP address, in addition to that href language tag? Would that help? I mean, then the customer sees the correct version right away, and we would not redirect the Googlebot, of course, because that wouldn't make sense.

JOHN MUELLER: Yeah, that's the main problem. But we would almost see that as cloaking.

MALE SPEAKER 5: Yes, of course.

JOHN MUELLER: Because if you were redirecting users in the US, but not Googlebot when it crawls from the US, then that would look kind of like cloaking. What we--

MALE SPEAKER 5: Sort of. But it would help the customer in this sense right? It looks like cloaking, yes? But then if a quality rater looks at it, they would see that we don't do anything bad, but we only try to help the customer.

JOHN MUELLER: I think that the bigger problem from our side, what we'd see there is that it's confusing to Googlebot. When we don't really know what exactly is happening when someone's going to a page, that makes it really hard for us to understand how we should be treating this page. What we generally recommend is showing a banner on top. So that's something that the customer can see and they can make a choice on that because sometimes people in the US are searching in German, and they want to find a German page. But of course if they're searching in English, then finding the English page makes a lot more sense.

MALE SPEAKER 5: Yeah. So you're saying it's best if we know they are coming from the US, put up a banner and say, hey, why don't you try the English version of this, you're based in the US, and vice versa.

JOHN MUELLER: Yeah. Yeah, exactly.

MALE SPEAKER 5: And then set a cookie so they only come to the site correctly.

JOHN MUELLER: Yeah, that's fine. That's absolutely fine.
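
A minimal sketch of the banner-plus-cookie approach described above, as a hypothetical Flask app: nobody is auto-redirected by IP (users and Googlebot see the same page); the page only suggests the local version, and an explicit choice is remembered in a cookie. The geo header name, routes, and URLs are illustrative assumptions.

```python
# A rough sketch (hypothetical Flask app): suggest the local version in a
# banner instead of redirecting by IP, and remember the visitor's choice.
from flask import Flask, make_response, redirect, request

app = Flask(__name__)

@app.route("/en/")
def english_page():
    html = "<h1>English version</h1>"
    chosen = request.cookies.get("preferred_locale")
    country = request.headers.get("X-Geo-Country", "")  # e.g. set by a geo-IP lookup
    if not chosen and country == "DE":
        # Suggest the German version instead of forcing a redirect.
        html = ('<div class="banner">In Deutschland? '
                '<a href="/choose-locale/de">Zur deutschen Version</a></div>') + html
    return html

@app.route("/choose-locale/<locale>")
def choose_locale(locale):
    # Remember the explicit choice, then send the visitor to that version.
    resp = make_response(redirect(f"/{locale}/"))
    resp.set_cookie("preferred_locale", locale, max_age=30 * 24 * 3600)
    return resp

if __name__ == "__main__":
    app.run()
```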

MALE SPEAKER 5: And I had some customers with banks, and they needed to really-- the US customers must only see the US version and vice versa. In that case, I mean, it is sort of looking like cloaking, but is there any way to notify Google that there is a legal responsibility of the bank to do that?

JOHN MUELLER: Not really. It might be interesting to have some of those sample URLs if you have some that you can share with me because I know that the team is currently looking into what we can do with those kind of sites that must show something different to users in different countries. And that would be useful to kind of think about what we can do there to improve that.

MALE SPEAKER 5: OK. All right, thanks.

JOHN MUELLER: Sure. OK, let's go through some of the submitted questions because people seem to have voted on them, and it would be a shame to let them go. Let's see what we have here.
"We've been updating our content throughout our e-commerce site and placing some of the text behind the 'Read more' button for users and Google to see. Could a 'Read more' button be devaluing our content that we have worked so hard to produce?" To some extent, we do recognize that content is hidden, and we try to ignore it. So if this is something where you're putting a lot of unique value behind these "Read more" buttons, then it's possible that our algorithms are thinking, oh, this looks like it's hidden. It's probably not that important. So I'd really recommend, as much as possible, putting the real content directly visible on the page. And if you have kind of auxiliary content or additional value that you're providing there, using "Read more" or using kind of a tabbed layout-- something like that-- is absolutely fine. But the primary content that you really want to rank for should be visible directly.
"I noticed Google patented the ability to use implied links as a ranking factor, which I'm guessing are brand mentions. Is this currently being used as a ranking factor? Or will it be used in the near future? If so, when will it be used?" That's a lot of questions. So in general, we do a lot of research-- we do a lot of work to find out what might work-- what kind of algorithms might work, what kind of algorithms do we need to follow up on. And we might patent some of those, or we might publish research papers on those. That doesn't necessarily mean we use them at the moment for ranking. And that doesn't mean that we will use them in the future for ranking. That's something where I wouldn't assume that just because there's a research paper out there or a patent out there, that this is something you guys need to focus on. And similarly, if this is something that we're going to be putting into use in the future, in general, we're not going to pre-announce that. We might not even announce it afterwards if we're using a very specific type of algorithm. So that's not something I'd really be able to help you with. But at the same time, it's not something where I'd spend too much energy on and say, oh, this really obscure patent from Google-- if Google were to do this, then I'd have to add, I don't know, five dollar signs on pages, and I could rank all the time-- that's probably not going to happen that easily. We do have a lot of really smart people doing cool stuff here, but just because it's published doesn't mean that it's live or going to be live soon.
"When moving a Penguin-hit site to HTTPS, does a disavow file have to start from scratch again to nofollow disavowed domains? The disavow file is added to HTTP and HTTPS. But the HTTP file is a year old, whereas the HTTPS one is just a few weeks old." So if you're moving from one site to another-- if that's across domains or from www to non-www, or from HTTP to HTTPS, I'd just also upload the same disavow file for both of those versions so that we have that connection there. But it doesn't mean that it starts over. So essentially what happens here is, on a per URL basis, we'll crawl your site. We'll see the 301 redirect. We'll switch to the other URL in our index. But we'll kind of try to forward as many of those signals as we can to that new URL. And those signals include all those URLs that were disavowed that we've crawled in the meantime that we're treating kind of like nofollow links now. So all of those signals get forwarded to the new URLs. You don't start over from scratch. It's not that you'd have a big disadvantage from moving from one version of a URL to another one. So all of that does get forwarded as much as possible.
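
For reference, a minimal sketch of the plain-text disavow file format; the same file would be uploaded for both the HTTP and HTTPS versions of the site, as described above. The domains and URL are hypothetical, and Python is used only to write the file.

```python
# A rough sketch of a disavow file: "#" lines are comments, "domain:" lines
# disavow whole domains, and bare URLs disavow individual pages.
# Domains and the URL below are hypothetical examples.
disavow = """\
# Ex-SEO directory submissions; removal requests sent, no response
domain:spammy-directory.example
domain:paid-links.example
# A single URL rather than a whole domain
http://blog.example.net/guest-post-with-link.html
"""

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write(disavow)
```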

MALE SPEAKER 1: Can I ask a question, John?

JOHN MUELLER: "If a website is stuck in limbo due to a Penguin penalty from past SEO work, is it likely, if we have disavowed all questionable backlinks and removed whatever we can with a top quality site, that a Penguin refresh we could see a boost again in Google?" Yes. I mean, this is a very theoretical question where you're essentially saying, you've turned your site around and made it the best of its kind. And with an algorithm refresh, we'll try to take that into account as much as possible. So if you've really cleaned up all of the webspam issues associated with your site, then the webspam algorithm will say, well, this is fine the next time it runs. So from that point of view it's definitely possible. It's certainly not the case that once an algorithm thinks your site is bad, that it'll be bad forever and that you'll be stuck forever. So that's something that shouldn't be happening there. And in some cases where it looks like something is being stuck forever, contacting us so that we can go to the engineers and ask about the details there, that's always useful. But in general cases like this where you've seen a lot of webspam on your site, you've cleaned all that up, and you're just waiting for a webspam algorithm to update again, that's completely normal. And you'd almost certainly see a change after that algorithm runs again-- after the data is refreshed with algorithms updated.

MALE SPEAKER 1: John, can I just ask a question based off of the previous question?

JOHN MUELLER: Sure.

MALE SPEAKER 1: I want to ask-- when you move the site, you moved the disavow file, et cetera. A couple of weeks ago, when I asked about the-- when you 301, you said some of the authorities [INAUDIBLE] all of it. But we couldn't quite get to the bottom of why. Does that mean, then, that if you move a site, and you move everything, when you're 301ing you get, in theory, you get-- let's say [? setting ?] up the ranking or whatever it is-- [? former ?] [? issue ?] but all of the penalty for that issue? Or do you apply the same kind of percentage of lost ranking as lost penalty. Because essentially, you'll just end up worse off if you follow [INAUDIBLE] some of the ranking but all of the penalty.

JOHN MUELLER: I imagine--

MALE SPEAKER 1: [INAUDIBLE], does that sound--

JOHN MUELLER: I know what you mean. I know what you mean, yeah. So I don't know. It's very theoretical. Let's put it that way. I think--

MALE SPEAKER 1: In our situation, it's not. It's financial. It's not theoretical at all. We haven't done it. I'm just saying, in real world scenarios where people have their own businesses, and they're using real money, it's not theoretical in any sense. It's a real decision that costs real people their jobs.

JOHN MUELLER: Yeah. So when it comes to algorithmic issues, that kind of dampening factor definitely happens there as well. So that's something where you'd also see the same kind of dampening with the forwarding. When it comes to manual actions, I believe those will just be passed on one to one because there is often no kind of like dampening available with manual actions. Either someone says your site is low quality or thin content and it's removed from the index, or it's not removed from the index. There's no like, oh, your site is 10% removed from the index. That kind of thing. So that's something where, from a manual action point of view, that would be forwarded completely. But if there's anything--

MALE SPEAKER 1: But you would be worse off?

JOHN MUELLER: Yeah. But I wouldn't assume that you could like chain, I don't know, 10 of these redirects after each other to kind of [INAUDIBLE], but it's something that happens proportionally with all the rest. So it's not that one side goes a little bit down and the other side stays as bad and [INAUDIBLE].

MALE SPEAKER 1: It's more-- I mean, I'm not ever suggesting that you should move site to avoid a penalty anyway because you're not really fixing the underlying problem. It was more because of what's [INAUDIBLE] terms of the secure. And so you move everything to a secure site. And you therefore-- you're losing some of the ranking but you're gaining some back if it's secure. But you don't know what. [INAUDIBLE] penalty [INAUDIBLE] is all fine. You'd be worse off, but you wouldn't know why unless you'd watched this discussion.

JOHN MUELLER: Yeah, yeah. That's not the case. That's something where, if you don't have a manual action, and it's really purely algorithmic, then that kind of dampening happens regardless. I'm pretty sure if you just changed from HTTP to HTTPS, that's even something that-- where we wouldn't have any kind of dampening there because that's essentially just such a small variation there. That's something where we wouldn't say this is like a site move.

MALE SPEAKER 1: Right. OK. Carry on.

JOHN MUELLER: OK. Let me just go through some more of these questions. And we have more time for questions from you guys afterwards.
"I noticed that to receive the search bar in the search results, you need to implement schema markup. Is schema markup something that Google is going to look at more over time?" Yes. We are looking at schema.org markup more and more. And we're always looking at the kind of markup that's being used, and that helps us to decide which kind of markup we should also support.
"Is schema markup currently used as a ranking factor?" Not in the sense that if you have schema.org markup, we rank your site better. But it definitely helps us understand the content on your site a little bit better. So if you mark up the entities on your site and you clearly mark up what you're talking about, then that makes it a lot easier for us to say, oh, this page is really about this topic because they confirmed this with the schema.org markup or with other kinds of metadata markup on these pages so that we could really be sure that this page is about this topic. But it's not the case that just adding random schema.org markup to a page makes us say, oh, they use schema.org, it must be a better page than all these other ones.
A question about redirects-- "Could you clarify or confirm that with chained redirects, link equity is reduced at each jump? By how much is it reduced? And does it vary or remain consistent at each step?" This is something-- like I mentioned before, it's mostly a theoretical question in the sense that some part of the PageRank is essentially dropped with these redirects. If you chain a lot of redirects, then some of that will be kind of reduced during those chains. This is one of those reasons why we also recommend going out and contacting the sites that are linking to your pages when you do a site move. So that those links go directly to your new pages, and essentially, you don't have to worry about that drop. So this is something where if you, kind of, had access to a very granular PageRank meter that was extremely live, then that's something where you might see a tiny drop there. But in practice, it's something where as long as you're not going overboard with the amount of redirects that you're chaining, you wouldn't really see any change in rankings there directly.
"Is the number of posts on blogs or social media, like buzz, a ranking factor? Is a big buzz better for Google?" We do pick up content that's changing on your site, and we try to show that in the search results. If it's relevant content that matches new or current activities, then that could help us to say, well, this page might be relevant for these kind of new and upcoming things. But it's not the case that we'd say, well, there's a lot of content that's coming in and out of these pages, therefore it must be good. So we really look at the content and try to figure out what we need to do with that content and try to react accordingly to that. So it's something where if you just add like RSS feeds to your pages where you're constantly updating the newest news from whatever on those pages, that doesn't necessarily mean that these pages are suddenly much more valuable for us or much more relevant for us in the search results. If you're doing a lot of work to update these regularly, we're probably crawling them fairly quickly because we see a lot of changes. But just because we crawl them regularly doesn't mean that they're suddenly a lot better.
So I wouldn't focus just on the quantity of buzz or the quantity of blog posts that you put out. But really make sure that the quality is the main thing, and the quality and relevance of what you're putting out there matches what you're trying to achieve. So it doesn't look like you're just like churning things out automatically or that you have a squad of writers who are just creating content like crazy. If that content isn't really relevant or useful, then that's not going to help your site. So just because a lot of things are changing doesn't mean that it's better like that.
"I've seen in the past, websites that are relatively new showing well in the search results for a short period and then dropping back. Does Google temporarily rank a website well to see how users interact with it and then adjust its ranking based on this interaction?" This is something where I imagine a number of our algorithms try to find the right approach with new websites where we don't have a lot of signal. So especially if a website is really new and we don't have a lot of information on how useful this website is, how high quality this website might be, then our algorithms are going to make some estimations first based on what they've seen so far. And based on that, they might show it in the search results. And then over time as we gather this data, then those algorithms are going to adjust their estimations. And that might mean they'll adjust them up because they think, oh, well, this website is actually a lot better than assumed. It might be that they adjust them down and say, well, we thought this was going to be great. But whoever created this website didn't keep up, and it's not as great as we thought, actually. That's something where our algorithms, when they have to work with estimations, might make estimations on the high side, might make estimations a little bit on the low side. And it just takes a while for things to shake down and land at a reasonable level where we say, all the signals that we know about this website have been confirmed, and we really trust that this is the right place to show this website in the search results.
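
Relating to the sitelinks search box markup mentioned in the answer above, here is a minimal, hypothetical sketch of that markup expressed as JSON-LD (one of the few formats accepted as JSON-LD at the time, as noted later in this hangout). The site URL and search path are made-up examples, and Python is used only to print the snippet.

```python
# A rough sketch of sitelinks search box markup as JSON-LD, printed as the
# <script> block that would sit in the homepage's <head>. URLs are hypothetical.
import json

markup = {
    "@context": "http://schema.org",
    "@type": "WebSite",
    "url": "https://www.example.com/",
    "potentialAction": {
        "@type": "SearchAction",
        "target": "https://www.example.com/search?q={search_term_string}",
        "query-input": "required name=search_term_string",
    },
}

print('<script type="application/ld+json">')
print(json.dumps(markup, indent=2))
print("</script>")
```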

MALE SPEAKER 4: Hi, John. Can I ask a question?

JOHN MUELLER: Sure.

MALE SPEAKER 4: A question is being posed by [INAUDIBLE] in our group [? chat. ?] So I just got the information on that as well. I want that information again. So in one post, when Panda 4.0 came, a post came from [INAUDIBLE] about good quality sites, wherein they mentioned that authorship is also a way to measure how high quality the content could be. And now, the authorship is being removed entirely from the search results. Does this mean that, for the quality of content, authorship is being ignored by Google, or what measures the quality of the content?

JOHN MUELLER: We actually never use authorship for ranking directly. We mentioned there are some misconceptions around authorship being used for Panda, for example. And that's something we never really did. We used authorship for the in-depth article [INAUDIBLE] web search. But at the moment, we don't use authorship at all. That's something where if you feel that your site has low-quality content, just adding the authorship information wouldn't have changed the low-quality content. But that's something where you really have to step back and think about your website overall.

MALE SPEAKER 4: OK. Thank you.

JOHN MUELLER: "My site is in limbo, affected by the Penguin update, removed and disavowed all known unnatural links. I don't know if I can now move forward with the site for fear that work will be in vain to ongoing penalty. How to know if just the PageRank is reduced or if the site has an algorithmic penalty?" If the PageRank was reduced, that's essentially something that would have been done on a manual level. That's something that would have been a manual action and visible in Webmaster Tools. So that's something where if you see this in Webmaster Tools, you kind of know, OK, my site has a lot of unnatural links, therefore the webspam team has decided to reduce my PageRank for that. The thing to keep in mind with PageRank in general is that it's something that we don't think is really an actionable metric. I believe we haven't updated it in over a year now. And I imagine it's something that probably won't be updated in the future anymore. So that's something where I wouldn't focus on PageRank. I would look at the rest of the situation with your website. If you see that your website is strongly affected by a webspam algorithm like Penguin and you've really cleaned up all of those links, disavowed the ones that you can't remove, those kind of things, then generally it's a matter of waiting for that algorithm to update again. I know it's been a while now. I imagine this is something where you might see changes in the reasonably near future. But it's not something where I would say this is going to happen today or tomorrow. I know the engineers are working on this. But I don't have any hard dates where I can say on this date, this algorithm is going to be updated again.
Let's see. "Does Google test algorithms like Penguin in a live environment before rolling them out? Could some webmasters have experienced changes attributed to Penguin 3 rolling out in the last few months?" Yes, we do some live experiments with our algorithms. There's a really good video from Matt from I think March or April of this year where he talks about how we test new algorithms. Primarily, what we do with new algorithms is test them based on the known data that we have, the rated, existing URLs that we have, to see for which queries the results show better, higher-rated results and for which ones we show lower-rated ones. Sometimes we'll also do a new batch of rating where we'll say the search quality rater should rate the search results A and B and tell us which one is better, and go through the individual URLs there and review them to see which one matches what the user is looking for, which one matches better the information that the user might be interested in, might want to find there. That's something that we do. And as a last step, almost, when we think our algorithms are really good, we'll run live experiments on them. And that will take maybe 1% of the traffic and say, OK, they'll go through this version of the algorithm, another percent will go through a different version of the algorithm, and we'll see how users react to those individual changes. And that's something that we do for ranking algorithms. That's something that we do for UI changes. We generally have, I imagine, way over 100 experiments running at any given time. That's something where you almost always, when you do searches, you'll be in some kind of an experiment. And we'll be double checking if we're doing things right or if we're doing things in a way that we could be doing better.
And what we do with these A/B tests, sometimes that's something that people notice. Sometimes that's something Barry notices and writes about and does screenshots, those kind of things. Ranking experiments are obviously a little bit less visible. But we do run those all the time, and we do try to figure out which of our algorithms need to be tweaked and how we can best tweak them to make sure that they're really reflecting what users now want to find in search results, how they're searching, and all of that. But I'd really recommend checking out that video from Matt from earlier this year, because he covers that with a little bit more detail.
"Does Google need a 301 redirect forever for every page on a site when a URL structure changed multiple times over the years? Would Google pass ranking signals along to the newest generation, or would they revert to an older version if there's no 301?" We generally wouldn't revert to an older version, because we forward those signals to the newer ones, and all of the internal linking will also be reflected in pointing at the new one. So if you've changed your site structure and you've changed the internal links, then we'll probably try to follow those internal links as well and focus on the newer ones. Personally, I'd recommend making sure you have a 301 redirect in place for at least a half year. If you can keep it there longer, I would definitely do that. If you can track the usage of those redirects, like users actually following them, then you can probably make a little bit of a better decision there and say, well, nobody is looking at these URLs anymore. No user is looking at them. No search engine is crawling them anymore. Therefore, I can just turn them off and simplify the .htaccess, those kinds of things.
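
A minimal sketch of the kind of check that can inform that decision: verify that each old URL still 301s directly to its current destination before deciding whether the redirects can be retired. The URLs are hypothetical, and the third-party requests package is assumed.

```python
# A rough sketch for auditing old URLs after a URL-structure change:
# confirm each old path still 301s directly to its current destination.
# URLs are hypothetical; the third-party 'requests' package is assumed.
import requests

redirect_map = {
    "http://www.example.com/old/widget-1.html": "https://www.example.com/widgets/1",
    "http://www.example.com/old/widget-2.html": "https://www.example.com/widgets/2",
}

for old, expected in redirect_map.items():
    r = requests.get(old, allow_redirects=False, timeout=10)
    location = r.headers.get("Location")
    ok = r.status_code == 301 and location == expected
    print(f"{old} -> {r.status_code} {location} [{'OK' if ok else 'CHECK'}]")
```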

MALE SPEAKER 5: John, I have a question regarding that. I've seen lately that 301 redirects are staying in the Google index for years. I have correctly redirected sites-- whole sites-- by 301 redirection. And they are still there literally after two years. I've seen that on several sites in the German Google index. And I see that Google is trying over and over again-- I see that in Webmaster Tools-- to access those old URLs. I would just like to clean out the old index, but I see they just don't disappear. Is this a new thing? Before, 301 redirects really were a very clean way to clear up the old sites. But now it looks like it really stays.

JOHN MUELLER: In general, we'll try to follow 301 redirects as much as possible and really focus on the new URL. What you will sometimes see is that we know that there are multiple URLs that essentially are equivalent and lead to the same content. And if you do something like a site query for a domain that has been redirecting for a while, you'll still see a lot of URLs there, just because we think that you're specifically looking for those URLs and we know that we have some equivalent URLs that were also on those URLs, so we'll show them there. And what you'll see in general is if you look at the cache page, you'll find that the new URL is actually shown on top of the cache page. But you're kind of looking at the cache page of the old URL. So we will have actually indexed everything under the new URLs, but we know that for those new URLs, there are also these old URLs, and they're equivalent. So if you explicitly look for those, we will say, well, you're looking for these. We will show them to you.

MALE SPEAKER 5: I see. But why?

JOHN MUELLER: We sometimes see people searching, for example, for the old website if they know that, I don't know, my old domain, dot de, and they search for that old domain, then we want to show them what they're looking for. And that's something where what we're trying to do for the users confuses webmasters, in the sense that if you do a site query, you expect, well, those numbers will go to 0 once everything is crawled. But from our point of view, we think, well, if you're explicitly looking for that, maybe we should show you something even if we have everything moved over in the meantime.

MALE SPEAKER 5: So how do I know that everything has been crawled?

JOHN MUELLER: I would focus on Webmaster Tools, the indexed URL count in the Sitemaps section, for example.

MALE SPEAKER 5: Well, I don't see the old-- there's no Webmaster Tool for the old site, because it has been completely redirected.

JOHN MUELLER: Yeah.

MALE SPEAKER 5: I cannot verify the old site. And actually, I have a customer with hundreds of old domains, which we all have redirected with a lot of work, and they still appear in the Google index if we search for them explicitly, of course. And there's no way to check if-- because the webmasters ask us, can we turn off those redirections now? Because it's a lot of work to keep them up. And I'd have to tell them, sorry, I just don't know. Google is trying to access those old URLs for ages, for really three years now.

JOHN MUELLER: Uh-huh.

MALE SPEAKER 5: Isn't that strange that I cannot tell Google, hey, this domain is really gone. Take it off. I don't want to see it anymore, and I don't want any redirections there anymore.

JOHN MUELLER: Yeah.

MALE SPEAKER 5: Wouldn't that make your work easier?

JOHN MUELLER: To some extent, yes, as long as nobody else takes that domain.

MALE SPEAKER 5: Yes, of course.

JOHN MUELLER: That's always the tricky part. If you say, I don't need this domain anymore. I'll let it expire. And someone else picks it up and puts a website on there--

MALE SPEAKER 5: That's different, because there would be new content there, and there would be no redirection there. There would be new content there, but that would be my problem, really.

JOHN MUELLER: But that's something where we would have to crawl those again.

MALE SPEAKER 5: Yes, of course.

JOHN MUELLER: But I do agree that-- I've been seeing more of these questions as well around the web where maybe we can do something better with those site queries and say-- if we can recognize that these URLs have really moved, maybe we shouldn't be showing them in the site query either. And only if someone is explicitly looking for the name, maybe, but not on the site query. But I don't know how quickly we'll be able to make changes like that. It depends a bit on what the search quality teams have been thinking about the site queries, anyway. But I totally see how this makes diagnosing things a lot harder.

MALE SPEAKER 5: It does, yes. OK. So it's not only me. That's the good part.

JOHN MUELLER: Yeah, definitely. "It seems rich snippets are not displayed when I use schema.org markup with JSON-LD. Do you support schema.org with JSON-LD to show rich snippets?" We only support JSON-LD for a very limited number of schema.org markups. I think that's only for events and for the new sitelinks search box. So for the other types of rich snippet markup, we currently don't support JSON-LD. That might change in the future as we move our markup testing tools to JSON-LD as well. But at the moment, we don't support it for all different types.
Let me see what kind of questions we have left, which ones are voted higher. "Will the Penguin 3.0 launch in 2014?" My guess is yes. But as always, there are always things that can happen in between. I'm pretty confident that we'll have something in the reasonable future, but not today. We'll definitely let you know when things are happening.
"From Webmaster Tools, I get crawl errors from our Wordpress blog. I've checked them, but nothing. The crawl errors come from the tags that we have on the blog. For example, like this--" a bunch of weird, I guess, Unicode characters. In general, if we can crawl those URLs, we will try to crawl them if we find a link somewhere. And depending on what we find there, we might crawl them more frequently or less frequently. But in general, if this is just a URL that returns a 404, then we will slow down our crawling over time. It's not something that would negatively affect your website. It's essentially just, we found a link to those pages, and we think maybe we should check them out, because we like your website and would love to find more content on there. So we'll try those URLs. If they don't work, that's fine. But we'll let you know that we tried and we found an error in Webmaster Tools, just like you would find that error also in your log files. So it's not a sign that anything is broken or that you need to fix anything. It's essentially just us saying, we tried this URL. It didn't work, and maybe that's fine.
"We have a site that we cleaned up that had a manual penalty. The traffic has significantly dropped since the reconsideration request was approved. We are earning links via content marketing, but things get worse, not better. What can we do to recover?" If the manual action has been resolved, that's essentially a good step. Because that means at least from a web spam point of view, the team, when they manually reviewed your site, didn't find what they found in the past. So that's a sign that you're going in the right direction. If you're seeing negative or no changes in the search results during that time, what's likely happening is that our algorithms are picking up other issues that maybe the webspam team isn't picking up on. That could be everything from the quality of your content. It could also be webspam issues. It could be keyword stuffing, the usual issues where our algorithms might pick up on this, might respond accordingly. But the webspam team would say, this is technically not webspam, so we're not going to flag it as a manual action. So what I'd recommend doing there is, first of all, making sure that the quality of your site overall is really the highest it could possibly be. So if you compare it to other sites that are active in that area, it's obviously clear that this is by far the best one of that kind. That's essentially the first step I would take there. Obviously, that's easier said than done. There's a lot of work involved in that sometimes.
If you're earning links via content marketing, I'm not really sure what you mean with that. That sounds a lot like you're doing guest blogging and just adding links to those guest blogs, which wouldn't necessarily be the best way to earn links. Because essentially, you're creating this content, and you're trading this content for a link, which makes this link kind of an unnatural link. So that's not something where I would say this is particularly a good thing. On the other hand, it might be that you mean something completely different with earning links via content marketing. Maybe you're creating great content on your website, and people are referring to that, and they think that's great. And that would generally be a good kind of link. But if you're placing these links into content and other people are publishing your content with your links, then that seems like something you'd want to double check maybe with other people who have gone through this process before to see that you're not doing anything that's causing more of a problem for your site than you think you're actually providing an advantage for.

MALE SPEAKER 2: John, can I ask a question, please?

JOHN MUELLER: Sure.

MALE SPEAKER 2: OK. I have a question. Can Google read our server specification and use this as a ranking factor?

JOHN MUELLER: You mean from the certificates?

MALE SPEAKER 2: No, no. I mean supposing that the server has a limited bandwidth, will that impact our rankings, because maybe the site would not support a large amount of traffic that we might get in some cases?

JOHN MUELLER: No. That wouldn't be-- the bandwidth for the server is mostly a technical problem for us, because we might not be able to crawl the content as well. But [INAUDIBLE] the content, we have an index that we're not going to assume that it's [INAUDIBLE].

MALE SPEAKER 2: But even the fact that the server is--

JOHN MUELLER: Oops. You're muted.

MALE SPEAKER 2: Yeah. Sorry. So even the fact that the server maybe is a very big one, very powerful, would not influence in our ranks?

JOHN MUELLER: No. That wouldn't matter. The server speed is really primarily a problem for crawling and indexing. If we can index [INAUDIBLE], then that's fine. We don't make adjustments on how much traffic a server will be able to handle.

MALE SPEAKER 2: I understand. Thank you very much.

JOHN MUELLER: All right. And with that, we're kind of over time. So thank you all for all of your questions. I hope you guys have a great weekend. And maybe we'll see you guys tonight. Cheers.

MALE SPEAKER 2: Thank you. Bye, John.

JOHN MUELLER: Bye.

JOSHUA BERG: Bye. Thanks.

MALE SPEAKER 1: Bye. Thank you.