Reconsideration Requests
18/Dec/2018
 

Google+ Hangouts - Office Hours - 27 October 2015



Transcript Of The Office Hours Hangout


JOHN MUELLER: Welcome to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. And I'm here in Mountain View this time, at a special US-friendly time, to help answer any webmaster, web search related questions that you all might have. I see there are some new faces here as well. If any of you who are new to these Hangouts have any specific questions that you want to bring up before we get started, feel free to jump on in.

JACK HERRICK: Hi. I'm new. I have a question. So, I have a site that's aiming to be a very high quality site, and we're making substantive investments in content. We never show up in the rich answer box, and I'm wondering what we're missing.

JOHN MUELLER: OK. It's hard to say without knowing your site and what kind of queries you're looking at. In general, it's important that we are able to recognize the content properly on those pages. So using things like structured data markup could help us to better figure out which parts of the page are actually content, and which parts aren't. So that's one thing I'd aim for. But even beyond that, it's something where we might not show it for all kinds of queries. So it's not something that you could just add to your page and we'd be guaranteed to always show that. So what kind of content do you have on your site?

JACK HERRICK: So it's a how-to site, it's wikiHow.

JOHN MUELLER: OK. I'm not sure, but I think I looked into that site sometime last week. And there are some areas where we do show this richer snippet on top for that site as well. So usually what I do to try to find these things is to look at the search query information and filter for things like "how" or "why", queries that start with those kinds of words, and see if you have any of these longer, more question-like queries there. And oftentimes, if you try those out, you'll see that we are showing some kind of a quick answer, bigger snippet type thing for those types of queries. So that's one way you can try to find these, but it's not something that we'd show for all kinds of queries. So if someone is just searching for generic keyword-keyword type queries, usually we wouldn't show this kind of a bigger snippet on top.

JACK HERRICK: Got it. Well, it's good to hear we're in it. I've actually looked at lots of how-to queries. I haven't seen it, but it's possible that it's changed. So I'll definitely need to poke around more. Thank you for that.

JOHN MUELLER: Yeah, it's something that's kind of tricky. Because some people want to be found there. Some people prefer not to be found there. So we try to tune our algorithms to pick up the cases where it really makes sense and where we think that it would be useful for the site to actually show it like that.

JACK HERRICK: Great. Thank you.
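
As an aside on the structured data point above, the sketch below shows the general idea of marking up the main content of a page so that crawlers can tell it apart from navigation and boilerplate. It is illustrative only: the page text is invented, the type and property names come from schema.org, and markup like this is a hint, not a guarantee of a rich answer.

```html
<!-- Illustrative only: a how-to style page marked up with schema.org microdata. -->
<article itemscope itemtype="https://schema.org/Article">
  <h1 itemprop="headline">How to Tie Your Shoes</h1>
  <div itemprop="articleBody">
    <h2>Cross the laces</h2>
    <p>Hold one lace in each hand and cross the right lace over the left.</p>
    <!-- further steps ... -->
  </div>
</article>
```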

JOHN MUELLER: All right. Any more questions from newer faces in the group here? Nothing special? All right.

MALE SPEAKER: How about the old faces?

JOHN MUELLER: Old faces. That's like me, right? You're talking about me?

MALE SPEAKER: Just Barry. He asked that.

JOHN MUELLER: All right.

ROBB YOUNG: I'll ask a question, John. I answered a query from a journalist last week via HARO, you know, the Ask A Reporter service, who was asking for experts to comment on a penalty, a Google Translate penalty. And I said, that sounds like a load of rubbish to me. I don't know what you're talking about. So I wonder if you could comment on anything you'd heard in regards to that.

JOHN MUELLER: So a Google Translate penalty?

ROBB YOUNG: Yup.

JOHN MUELLER: And what would that be?

ROBB YOUNG: I think I Googled it a few times. I think people were getting confused with maybe using auto translate on their site, generating what then becomes duplicate content, according to some sources. So it's just the same as everyone else's, since their original content is duplicate anyway. And then they think it's another version of a duplicate content penalty, which again, in itself, doesn't even exist anyway. So I don't know where they got it from, but there's some chatter out there about it. But if you haven't heard it, then I'm guessing it genuinely doesn't exist. Like I said, I don't think it exists anyways. It sounds stupid.

JOHN MUELLER: Now, I'm pretty sure we don't have any Translate penalty per se. What does happen is, if you use auto-translated content on your site, we might see that it's auto-generated content, and the webspam team might take action on that. If you use Translate as a way of kind of spinning text, like if you translated it into German, then back to English, and use that twice-translated version of the content on your page, then essentially that's spinning content. And that's really terrible content to show in search. And that's something where the webspam team might say, well, this is pure spam. It doesn't make sense for us to even waste any cycles trying to crawl and index it, so we'll just drop it from the index completely. So that's one place where Translate might be involved, but it's not the case that using a translation is bad. It's that you're using machine translations and creating, essentially, content that's really hard to understand and not really useful for any users. So we take action on it based on it essentially being spun content, and not because it's translated.

MALE SPEAKER: John, I have two quick questions.

JOHN MUELLER: All right.

MALE SPEAKER: What's the status on the "(not set)" entry in the Google Analytics Search Engine Optimization queries report?

JOHN MUELLER: OK, I don't think there is any status. It's essentially just a different way of looking at it. So in Search Analytics, if you look at the numbers for the queries that you see there, you'll sometimes see that what we show on top, aggregated, is maybe 100 queries. And in the table below, if you add the numbers together, maybe you'll see 70 queries or something like that. And the difference there is essentially queries that we filter out. In Search Console, we don't show those separately. In Analytics, they chose to just call them "(not set)". So that's essentially what you're seeing there. It's not something completely different. It's essentially just showing that difference that you'd have to kind of calculate yourself if you were looking at it just in Search Console.

MALE SPEAKER: Interesting. So the press team hasn't contacted you about that at all? To ask you anything about that, because that's a quick answer that seems to be like that's not changing.

JOHN MUELLER: We had a bit of back and forth before we figured out what was actually happening there. So I don't know if that was part of that, but there were also, I think-- no. Go for it.

MALE SPEAKER: And the next question is, there are reports that Google Search Console is now displaying old messages to newly verified [INAUDIBLE]. Is that a new behavior? Where, let's say, a site had a penalty yesterday, and somebody verified me as a user on their account today, I would see that old penalty?

JOHN MUELLER: I don't think that's new, but the tricky part there is, it depends on the message. So for some messages, we essentially set it up so that new owners also see them for a specific time frame. And other types of messages are just for the current owners. So you could imagine that if something were broken with crawling and indexing, and nobody had the site verified, and someone starts to verify today, then you might want to see that backlog of messages there. But that wouldn't make sense for all types of messages.

MALE SPEAKER: Right. So I thought that was the case when I wrote the blog. So there's no way to categorize what old messages would show up for users and what wouldn't. In a general way, penalties generally wouldn't show, or-- you don't want to put yourself in a box?

JOHN MUELLER: I don't know. I don't think you would see it on the message. That it's queued for new owners as well. I don't know if we have any clear separation there that we ever communicated, that it's [INAUDIBLE] into two boxes.

MALE SPEAKER: OK. All right, thank you.

MALE SPEAKER: We have a question about ads on the page. What we're trying to understand is, even if you're not covering your content with ads, how can publishers get some more guidance on whether they're being viewed as having some problem with their ad layout, or as trying to extract too much value from a web visitor at one time, where it would cause a problem with their site? Is there some more guidance? This is HubPages, but us and several other large ad-supported companies have had a lot of discussion about trying to figure this out. And so far we've followed the Panda question, does the site have excessive ads, and looked at good sites and bad sites, and that hasn't been very fruitful. But we'd love more guidance on whether we're doing something wrong, or if our ads are too slow on the page, or if there's something else that could really help us understand, are our ads causing a problem?

JOHN MUELLER: I don't think we have any specific guidance there. Not like, you should put this many ads on a page, or not more than that. We do kind of have that one update from, I think, two years ago, where we said too many ads above the fold would be a problem. But I don't think we have any guidance on that flexible barrier between zero ads on a page, and then so many that the user kind of gets deterred from actually doing something useful. I suspect it's something you can probably pull out more on your side by doing clever A/B testing. To kind of figure out where do you see users engaging with your content. And at what point do you see people dropping off or not engaging as much anymore. And that's something that wouldn't be taken directly into account for search. But indirectly you'll definitely see some effects from that.

MALE SPEAKER: So it may not be just too many ads on the page. Just looking for some overall engagement metric might be a better way of going at it?

JOHN MUELLER: Yeah, so I wouldn't focus too much on the number of ads on the page, because it really depends on the placement and how you present them to users. And those are things that you can't just quantify and say, two ads on this page, that's good enough. It really depends. If there's one ad covering all of the content, then of course they're not going to be able to do anything with that content. But that's something where you can kind of play around with Analytics on your side to see what's the right ratio for you. Obviously you want some people to click on the ads, to kind of engage with the ads as well, because that's how you monetize your site. But at the same time, you want to make sure that people actually go to your site directly, and they want to come back to your site. And it's not something where people go to your site and they're like, oh, gosh, so many ads, I have to find my way through to actually get to the content. So finding that balance is something that probably depends a lot on the site, on the users themselves. And something that's probably a lot easier for you to figure out than to take some generic aggregated number that Google would be able to give you.

MALE SPEAKER: Yeah. And my second question is about, we do this kind of goofy thing on HubPages where we've been trying to shrink our corpus down to get to a useful set of content. And a side effect of that is we take content out of Google's index because we think it's too low quality. But just because we have so much content, we take a lot of it out if it doesn't have traffic. And we started looking at our content a little bit differently lately. And we see that we have a lot of really keyword-rich titles and subtitles. And I don't know, you hear the idea of over-optimization, but are there things you can do at scale to try to water it down? Does adding your brand to your title tags, do things like that help? Is there any other way you could do some things at scale to sort of say, look, we're not-- I think we've created one problem by trying to solve another one.

JOHN MUELLER: I can't think of anything specific off-hand. I don't think just adding the brand to the titles would significantly change things. I guess it's kind of tricky when you're in the situation where you have a lot of very optimized content, in that over the years it has been very search-optimized, in that your keywords, your titles, are exactly what you think people might be searching for. And it kind of evolves into this kind of SEO content that's created for SEO reasons, and not really for users to stay on your site and engage on your site. So that's something that's probably worth watching out for. But at the same time, it's hard for me to say, well, you should just rewrite everything so that nobody knows what your content is about, then it won't be optimized any more. That obviously wouldn't make any sense either. But I think as you move forward with regards to creating new content, high quality content, I'd just make sure to make it as natural as possible. Which is easier said than done when, over the years, you've refined making SEO-friendly content into something that's kind of like second nature.

MALE SPEAKER: Can that cause a Panda problem in and of itself, even if the content is really engaging? Basically, the stuff we have left on HubPages is engaging, long time on site, but we know it's really keyword-rich.

JOHN MUELLER: It's hard to say. I don't think that it should cause a problem on its own. It might be that in aggregate, there are enough other signals there that make us kind of confused as to what we should pick up there, and what we shouldn't pick up there. But just because it's keyword rich, I don't think that would be a general problem.

MALE SPEAKER: Keyword rich might be putting it nicely.

JOHN MUELLER: Right. I mean, it's really hard to say. Because sometimes a site does have content that is really optimized, because over the years it's kind of evolved that way. And it's hard to say, I want to water it down and make it look more natural by spending more time refining the text. Because that's a tricky process. You're obviously artificially trying to make it look more natural now. Yeah, I think that's the kind of situation where at some point you just have to say, well, this is the way my site is at the moment. And going forward, I just want to make sure that we follow these guidelines. Set something up to make sure that it kind of matches your philosophy for your site, what you'd expect from users, the kind of content that you want to provide on the web.

MALE SPEAKER: Thanks.

MALE SPEAKER: Hey, John. When you move a site from one domain to another, should you also take the disavow file from the original site and upload it to the new site?

JOHN MUELLER: Yes.

MALE SPEAKER: OK. How much time should you give? So what we are doing is, we did that first, because we know it takes time for the disavow file. So we don't want to do the migration and then, because there is a lag in between, have a bunch of spammy links get counted. So how much time do you think we should give between the time we disavow and the time we do the migration?

JOHN MUELLER: I'd do it at the same time. I don't think you need to wait. Because what happens with a redirect to your new site is once we see the redirect, we'll kind of move the canonical to that new site for those individual URLs. And then we need a disavow file for that. But it's not something where we need the disavow file first, or we need to do the redirect first. I'd really just do it at the same time.

MALE SPEAKER: OK. But it's not problematic that we did the disavow file first?

JOHN MUELLER: No. No.

MALE SPEAKER: OK. Thank you.
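
For reference on the disavow discussion above: a disavow file is just a plain text list, one URL or domain: entry per line, with # comments, uploaded in Search Console for the property it applies to. The entries below are placeholders, not real recommendations.

```text
# Disavow file for the new domain (illustrative placeholders only)
# Links we asked to have removed but could not
domain:spammy-directory.example
domain:paid-links.example
http://forum.example/thread/123#comment-456
```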

MALE SPEAKER: So if you want to change the domain after you've changed to HTTPS, and nothing is changing-- 301? 302? Which one?

JOHN MUELLER: If you're changing domain-- if you're changing URLs for the long run, always a 301.

MALE SPEAKER: [INAUDIBLE] to get back to me. So 301, right?

JOHN MUELLER: I mean 302 is not going to cause any problems. But the 301 is a clear sign for us.
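
As a sketch of what "always a 301" can look like at the server level for a domain change: this assumes an nginx server and uses made-up host names; Apache and other servers have equivalent rules.

```nginx
# Redirect every URL on the old domain to its counterpart on the new HTTPS domain.
server {
    listen 80;
    server_name old-domain.example www.old-domain.example;
    return 301 https://www.new-domain.example$request_uri;  # 301 = permanent, path and query preserved
}
```

In practice the old domain's HTTPS server block would issue the same return 301, so secure URLs on the old host are forwarded as well.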

MALE SPEAKER: All right, sorry, just an interruption. You know what we are doing. You should see the content. For example, Google did drop from 500,000 pages, and there are only 10,000 errors left. Could we expect, after all the errors are cleaned up, that Google will start running its algorithms again and rethinking the website, after we have removed almost everything that was duplicate content from the web?

JOHN MUELLER: Usually that just happens continuously. So as we re-crawl and re-index those pages, it's not something that would-- one moment comes, and then the Google algorithm runs, and then everything changes again. So for most of our algorithms, that just happens continuously.

MALE SPEAKER: Because what we noticed-- we did receive more long-tail keywords, but average position dropped from, like, 30 to 70. And the search traffic isn't changing from 200 visits a day. That's it, 200 clicks, and it's been like a flat line for the last five months.

JOHN MUELLER: And you've moved to a different domain?

MALE SPEAKER: We moved to HTTPS. We cleaned all the content, we moved everything, and there are basically no changes at this moment. It's been more than a year of changing. Yeah.

JOHN MUELLER: It's hard to say when you make a lot of changes. A lot of things can change in search. So if you just move to HTTPS, then usually we can just forward everything to HTTPS. But if you make a lot of changes on your website, if you remove a lot of content, then all of that-- it takes a long time to be re-processed.

MALE SPEAKER: We moved six months ago and Google is finally almost complete with sitemaps and everything. I'm hoping something will change. Or we do have to change to the short domain.

JOHN MUELLER: I think that's something where you need to be patient. Because when you make really big changes on a site it's-- I'm so sorry. But if you make really big changes on a site, it takes quite a while for everything to be understood again. Internal links, understanding of the context between the pages-- it takes quite a bit of time sometimes.

MALE SPEAKER: It's crazy. It seems it's easier to start a new website, new domain and change the name, than wait for changes.

JOHN MUELLER: Sometimes, yes. Sometimes that can be the case. It's a tricky situation. But it's like this: if you have a business, and you have a very bad reputation, and you completely change your business, with a new owner and a new name, but actually it's at the same business location, then it might take a while for that to kind of be understood by the community. That you're actually a different business now and you don't want to be related to the old one. And sometimes it's easier to say, well, we'll just take all of our people and start a new business somewhere else, unrelated. It's sometimes a hard decision to make.

MALE SPEAKER: Will Google track our AdSense code? Should we change our AdSense also?

JOHN MUELLER: No. I don't know what AdSense does.

MALE SPEAKER: But does Google connect it and follow it with our email and everything?

JOHN MUELLER: No. I mean, sometimes it makes sense to start a new website instead of trying to fix an old one. It's a really hard decision to make though. It's not something that I would just say, let's have some beers, and then decide to throw everything away and start new.

MALE SPEAKER: Especially when you've gone in with high investments. That's my problem. Again, thanks.

JOHN MUELLER: OK, let me run through some of the submitted questions. Good questions so far, but lots of these have been submitted for a while. Will the retirement of escaped fragment be accompanied by better Angular crawling? For six months, our site displays correctly in Fetch as Google but doesn't get crawled properly. What's up? So, yes. Unrelated to the AJAX crawling part, we are always improving our rendering to understand these pages better. So that's something that isn't complete, and sometimes it works really well, and other times it doesn't work so well. So if you have a very JavaScript-heavy site, I'd recommend just testing it, making sure that we can index it properly. Use the Fetch as Google, or fetch and render, tool in Search Console, so that you can try things out there. I would expect this to improve over time, though. So it's not something that will always remain like this.

MALE SPEAKER: Hi, John. This is my question. I guess a couple of follow-ups. We did actually try rendering the site in Fetch as Google, and it displayed perfectly. But then it dropped out of the search rankings. So that made us assume that it actually wasn't being viewed properly. So that's why we switched to pre-rendering. And I guess the follow-up is, when will the pre-render support finally be dropped? Because we're very reliant on it at the moment, and nervous about switching back to just a JavaScript site.

JOHN MUELLER: So, are you using the escape fragment for pre-rendering?

MALE SPEAKER: We are. And that's been successful for us far more, I'd say, than having the site itself crawled.

JOHN MUELLER: Yeah, I suspect we'll keep maintaining that for a while, because that's not something we can just turn off. A lot of sites do use that, so I guess at least for the mid-term, we'd be keeping that.

MALE SPEAKER: OK, thank you.

JOHN MUELLER: All right. We put our page content under an h2 tag heading. Should we put subheadings within this body as h3? How much ranking signal do h2 to h5 tags carry? So these heading tags in HTML help us understand the structure of the page, but it's not that you get any kind of fantastic ranking bonus by having text in an h2 tag. So sometimes we'll see sites trying to abuse that, and they put their whole content into an h1 tag and say, well, this is really, really important text, and you should treat it with high value. And we do use it to understand the context of a page better, to understand the structure of the text on the page. But it's not the case that you would automatically rank one or two steps higher just by using a header. So I'd recommend using it to give a semantic structure to the page. But I wouldn't say that this is a requirement for ranking properly and such.

What do you do to encourage the use of sitelinks for your website? Any tips? So sitelinks are those little links that we sometimes show under a search result, specifically when you search for something like your brand name. And what helps us there is if you have a clear structure on your website that we can crawl and index, where we see a clear context between the individual parts of your website. That really helps us to better understand that. And if you have that on your website, then essentially it's a matter of our signals understanding that it makes sense to show this for your website. And then we'll try to show that. Sometimes we'll see it doesn't make sense to show for a website. If a website is really small, or if we see that people are mostly just going to the homepage from search, then maybe it doesn't make sense to show sitelinks for that website. So it's not guaranteed, but if you do have a clear structure for your website, we could pick that up and try to use that.

On our e-commerce site, we have a category page which has links to our products, which are currently h3 tags. Is this OK, or shouldn't h3 tags be used as internal links? That's perfectly OK from my point of view. It's not something where I'd say you will get any problems from that. If you think that this is a good way to link those individual parts, then sure, why not?
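
To make the heading advice above a bit more concrete, here is a minimal sketch of the kind of semantic structure being described, using an invented category page. Headings here describe the structure of the page; they are not a ranking shortcut.

```html
<body>
  <h1>Garden Furniture</h1>                                    <!-- the one main topic of the page -->
  <h2>Wooden Tables</h2>                                       <!-- a section of the page -->
  <h3><a href="/products/oak-table">Oak Dining Table</a></h3>  <!-- product links as subheadings are fine -->
  <h3><a href="/products/pine-table">Pine Garden Table</a></h3>
  <h2>Metal Chairs</h2>
  <p>Normal descriptive text stays in paragraphs, not in heading tags.</p>
</body>
```
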
My main competitor and my website are both affiliates of the same company. And even though I have more unique content, citations, links, et cetera, they get 10 times the traffic I get. How can I fix that? It's really hard to say when we hear these situations where my competitor is ranking better than I am, and my website is clearly better, when will you finally recognize that, Google? Essentially, we use a lot of signals for search. And one thing to kind of keep in mind is when we run across cases like this and bring them to the search team, especially when it's a situation where you're saying, well, my site is just the same as my competitor's, so why are they ranking above me if mine is just the same? They should be ranking similarly? The ranking team usually comes back to us and says, well, if there's no clear reason why this site should be ranking number one, for example, then from our point of view, it serves a user just the same if we show this site or that site. They're equivalent. Even the webmaster is saying my site is just as good as the other one.

So that's something where we really try to look for a significant step up with regards to quality, and the content, and the service that you provide, so that we can go to the ranking team and say, well, for this query, we're showing this site and this site, but that's clearly bad for the user, because we have this other alternative, which is miles above everything else there. So try to really differentiate yourself, not just a little bit, but in a really big way, to show that it would almost be a bug if Google search didn't show this as the first result.

I understand that if you do a 301 from one domain to a single URL, you may treat some of those links as 404s. If we remap some 301s we did a while ago, will Google change from treating some URLs as 404s back to 301s and pass on that link juice, maybe? It's hard to say here, because I don't really know the context. But usually, when we see a lot of redirects to a single page, essentially that's a sign that those pages don't exist anymore, and that we should treat them as 404s. And we'll flag those as soft 404s, and you'll see that in Search Console. On the other hand, if you say this URL changed, or the content that was there moved to a different URL, then that would be a good use case for a 301, where we would kind of forward that information on. So if you are just artificially creating pages so that you have different URLs to kind of 301 to, but actually they're all kind of 404 pages and the content doesn't exist anymore, then we'd probably still flag that as a soft 404. And in general, that's not a problem. That's not something you'd need to avoid. If the content doesn't exist anymore, then a 404 is a perfect answer for that. You don't get penalized for something like that.

When you do a lot of citations, would you recommend using Search Console Fetch as Google to index these instantly, or can you do many citations in one go? Or can that cause you problems? So in general, this isn't something that we really focus on with regards to Search Console. It's not a link, so it's not something that's on your site that we would link to. If you change the content of your pages and you need to get that indexed as quickly as possible, sure, you could use Fetch as Google and submit to index to kind of get that sped up a little bit faster. But just by adding citations to your page, I don't think that would significantly change anything at all, unless these citations are your only content and suddenly your content goes from Ford to Chrysler, or something like that. So that's not something I'd really worry about.

I'd love clarity on Penguin, or on the phantom update. I've read that Google was targeting sites that are how-to, eHow-style sites. I don't really know which specific parts to kind of go into there. So with regards to Penguin, Penguin is a web spam algorithm which focuses on web spam issues. So if you have web spam issues associated with your site, which could be something like problematic links, paid links to your site, that's something we'd pick up with Penguin. And that's something you could clean up by removing those links, or by using the disavow tool. I'm not really sure what's meant by the phantom update here. We do make changes in our algorithms all the time, so it would be almost unnatural to not see any changes at all in search in that regard.

JACK HERRICK: John, could I have a follow-up on that question? That wasn't my question, but it struck a chord with me since I run a how-to site. And I have noticed that many publishers in the how-to space have been negatively impacted by Panda and other related quality updates. Is there something intrinsically wrong with how-to content? Is there some reason that it seems to be hitting the how-to publishers disproportionately?

JOHN MUELLER: I don't think there's anything specifically wrong with how-to content in general. One thing we did notice for a while is that there was, or at least there was, a lot of really low-quality how-to content out there, which I think a lot of you guys cleaned up really well. So that's something that I think has significantly changed over the years. And I think that's a good move. But it's not that our algorithms are in any way looking for how-to sites and saying, oh, this is a how-to site, we shouldn't be showing it in search. We treat it like any other site that has content on it. So we do try to kind of make our algorithms generic so that they work across all of the web as much as possible.

JACK HERRICK: Would you have advice for how-to sites that are negatively impacted? That think they're doing a good-- or you think they're doing a good job of having quality how-to content?

JOHN MUELLER: I do not. I'd have to think about that. I think recognizing the really high quality content on a site is sometimes really tricky, especially when you have a lot of user generated content. And kind of re-creating that content in a way that still makes sense for your users, still makes sense for the creators, but at the same time provides a pretty high bar where you can say, across the site, our content quality is really high. It's sometimes tricky to find the right metrics to look at for that. And sometimes it makes sense to look at things like how people are engaging with the content, what they're doing on your site. Are they staying? Are they going to other parts of the site? But finding the metric to understand, at scale, which pages are lower quality content that you need to work on, and which pages are really good and maybe worth promoting more, is really tricky. And that's, I think, especially tricky on user generated content sites. Not just how-to sites, but in general, if you're focusing a lot on user generated content, then it can be really hard to figure out how you want to set that bar in an algorithmic way, because you can't manually go through all of this content yourself and say, well, tweak the wording here, and then it's a little bit better.

MALE SPEAKER: Do you have suggestions?

JOHN MUELLER: Sorry?

MALE SPEAKER: Do you have suggestions or ways? Because I appreciate that question, and it's certainly in the vein of something we've really struggled with.

JOHN MUELLER: I don't know. I think to a large extent, some of the changes I've been seeing on some of these how-to sites have been really, really well made. And I think that's generally the right direction. I don't know how far you would need to go. It probably depends a lot on the site itself as well. But I think, especially the changes with regards to combining content and creating really high quality content based on some of the user generated content, that's essentially a good step to go. And I think a lot of this content is useful content on the web as well. I think weeding out some of the lower quality content was an important step. But the higher quality stuff is, essentially, good content to have on the web, stuff that people are searching for, stuff that people might want to have information on.

JACK HERRICK: Can I follow up on one of the things you said, John, on the point about sort of keeping people on your site? For a how-to reference site, we think about success this way: if someone is coming in to learn how to tie their shoes, what we want to do is actually not send them to another page on our site; we want to teach them to tie their shoes, and then they're walking down the street. So we measure success that way. Kind of, happy users that, at the end of the day, read the page. We measure how long they stay on the page. If they don't come back and do more searching, we view that as success. Should I view that as success, or should I view success as them clicking to another page on my website, and therefore I'm really kind of playing a page-visitor game? What should I be trying to do here?

JOHN MUELLER: I think it depends on your site. So for some sites, it might make sense to kind of expect people to start engaging with the content more, or to come back by themselves, or where you start seeing things like, people are searching for my brand name together with some query, how to tie my shoes. Then you kind of see that you're building this reputation of being able to provide answers that people actually like. And for other sites, maybe it makes sense to see them more engaging within the site itself where you see, well, they start with this one question. But actually, they find this whole chain of other things that are actually also interesting on the site. And both of these are things that, I think as a site owner, you might be able to recognize and say, well, this is a pattern that we want to see. Or maybe there are specific pages where you say, well, this type of content we expect people to come, look at the content, and be happy and gone. And this type of content, we expect them to kind of come and dig their way through the rest of our site. Or maybe even within one site, you kind of differentiate like that. But I think there is no single metric for all sites. So it's not that I'd be able to say, oh, you need a time on site of at least 27 and 1/2 seconds, and then you're good. It really depends on your website.

ROBB YOUNG: Isn't one of the problems here that general how-to sites are, by definition, generalists, and can't be experts at, or rank number one for, everything? They wouldn't be generalists then. They'd be specialists. Whereas a how-to site for sports, or DIY, or something might reasonably expect to rank number one for how to build a table, or whatever it is. But surely you can't-- I mean, we've got two people here in the Hangout. Both of their sites can't be best at everything they offer. Or is that being a little unfair?

JOHN MUELLER: Yeah, I mean, that's true as well. Yeah. I think what probably would make sense is to do a Hangout specifically on user generated content. Or we can also maybe invite you guys again to look at specific techniques you might be able to apply to recognize good things, bad things, and also to get maybe some of your feedback on where you're seeing things going in the right direction, and where you're still seeing, well, I did all this work on this part of my site, and it's a really fantastic site compared to some other sites out there, but I'm still not seeing any results.

MALE SPEAKER: I'd be happy to.

JACK HERRICK: Yes, count me in. That would be great.

JOHN MUELLER: All right, great. And we also have some people, I think, with forums on their sites. I think that kind of goes into a similar direction. It's slightly different with forums, but it's still very user generated content oriented. So we'll set something up with you.

ROBB YOUNG: And it also seems like it's really hard for you guys because-- I can't remember which one of you guys just said it-- but you said that how you're measuring success could be someone comes on, finds exactly what they're looking for, and goes away happy. But that's great as a measure of success for users. But essentially, the better you do your job, the less likely you are to make money, and the better chance you have of going out of business, because you just don't have a product to sell in that. Is that not one of the--

JACK HERRICK: Yeah, wikiHow has always been kind of half for-profit business, half public service. And when we're making product decisions, we're really trying to lean on the public service side and make our users happy. So that's a continual problem we have at wikiHow, but one we're used to.

MALE SPEAKER: I'd be happy to give up ad revenue if I knew what the right thing to do was. It's just trying to get that balance right.

JOHN MUELLER: OK. Let me run through some of the other questions that were submitted, and I'll set up something specifically on this topic and try to figure out how to do it at a time when you're not all asleep.

MALE SPEAKER: We'll get up for it.

MALE SPEAKER: I don't sleep.

JOHN MUELLER: All right. We're adding structured data to every page. Can you add too much structured data? Can that cause an issue where Google would penalize a site, thinking it's too spammy? In-house SEO has us adding itemprop image or URL on every link and image on the whole site, and more. So I don't think you can add too much structured data in the sense that you would see negative consequences from that structured data. But you're adding a lot of cruft to these pages, and all of this markup makes things very slow. And it also means that when you're maintaining things, it's a lot of work to kind of keep the HTML clean. So I'd try to focus on marking up the entities that you think are really relevant for those pages, and try to focus your energies there. And another thing you could do, depending on how much time and resources you have, maybe it makes sense to say, well, I'm just going to focus on marking up things that might be shown as rich snippets in the search results, where I do actually have a direct return on my investment from adding all of this markup. Whereas you can essentially add markup for every word, and that's not something that would be reflected in search at all. So trying to perhaps focus a little bit more on the things that are actually used, that probably makes sense. But again, it's not something where we would penalize a site for marking up too much.

If we were to serve up additional localized content by means of JavaScript once a page is loaded, depending on the user's location, is that going to be cloaking? Because normally the page won't have this additional content. That wouldn't be considered cloaking, but what probably happens is we wouldn't be able to actually index that content separately. So if you're using JavaScript to swap out a translation on a page, then that's something which we would like to be able to index separately, so that we can always show it in the search results for users searching in that language. Whereas if you're swapping it out with JavaScript, we'd pick up one version and just index it like that. So as much as possible, use separate URLs for translated content, so that we can index it separately and also show it to users correctly.
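
One common way to give each language its own indexable URL, as suggested above, is to serve separate pages and cross-reference them with hreflang link elements. This is a minimal sketch with invented URLs; the exact annotation depends on the languages and regions involved.

```html
<!-- In the <head> of https://www.example.com/en/widgets (the English page) -->
<link rel="alternate" hreflang="en" href="https://www.example.com/en/widgets">
<link rel="alternate" hreflang="de" href="https://www.example.com/de/widgets">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/widgets">
<!-- Each translation lives at its own URL, so it can be crawled and indexed separately
     instead of being swapped in with JavaScript on a single URL. -->
```
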
If you remove a domain or link from a disavow file, does that mean it will be automatically included when Google next crawls your site? Or is there some time limit? Yes, it'll automatically be included the next time we crawl that link. So not your site, but rather that specific link that we try to pick up there. So the next time we try to crawl the page that has that link on it, we'll double-check the disavow file, see that it's no longer there, and then we can use that as a normal link to kind of understand your site better.

Can noindexing a page have an impact on the pages that lead to it? Can one noindex action cause the lead-up pages to be removed from the index as well, assuming the lead-up pages have quality content and meta robots tags set to index? No, that shouldn't have an effect on the pages that are linking to the noindexed page. And sometimes noindex is a perfectly fine way to structure a site, in that you'll have one part of the site that's actually indexable, and another part of the site that's not indexable. And just because you separate the two with a noindex tag doesn't mean that the indexable content is worth any less.

When you view an HTML page, do you get any SEO benefit from having h tags at the beginning of the body? Or doesn't it matter? This kind of goes into the question from the beginning. We do use h tags to understand the structure of the text on a page better. But it's not the case that you get any magical SEO benefit from putting all of your content in an h1 tag at the beginning. We do give it a slight boost if we see a clear heading on a page, because we can understand that this page is clearly about this topic. But it's not something that I'd say you absolutely need on every page to show up in search. A lot of pages don't use h headings. They just mark up the content with CSS.

How important is the anchor text for internal links? Should that be keyword-rich? Is it a ranking signal? We do use internal links to better understand the context of content on your site. So if we see a link saying something like "red car" pointing to a page about red cars, that helps us to better understand that. But it's not something that you need to keyword-stuff in any way, because what generally happens when people start kind of focusing too much on the internal links is that they have a collection of internal links that all have four or five words in them. And then suddenly, when we look at that page, we see this big collection of links on the page. And essentially, that's also text on a page. So it ends up looking like keyword-stuffed text. So I'd try to just link naturally within your website, and make sure that you have that organic structure that gives us a little bit of context, but where you're not keyword-stuffing every anchor text.

In Search Console, I use Fetch as Google, and I get a partial status because Googlebot is blocked from Maps by robots.txt. Does that mean showing Google Maps on my branch pages is not having a positive SEO effect, as Google is blocking it? Maybe. So let me elaborate a bit. If Google Maps is blocked by robots.txt, as I think it is at the moment, and you're embedding it in your page, then the content within that embedded block from Google Maps can't be used for indexing the page. We can index the rest of the page, and we can understand the page usually based on the rest of the content. But if there is something specific in that Google Maps iframe that you're embedding there that you do want to have indexed, then that probably won't be picked up. So for example, if you have a big map on that branches page, and all of the individual branches are just listed as pins on the map, and they have the opening hours in the pins on that map, then because we can't actually load that map, or we can't process the API calls to show that map, we can't get the information about those individual branches. So what you'd need to do in a case like that is kind of treat the map as something where you're providing additional information, but you're also providing that same information elsewhere on the page. So if you have different branches, list them out like normal in HTML below the map, and maybe clicking on them opens a pin on the map. But you're not reliant on the map pin to actually give that information. So that's where sometimes we would see a problem, and sometimes it would be perfectly fine. So usually, if we have a page that has a lot of content on it and it also has a map on it, we're not going to say, well, this is worth any less because we don't know what the map is showing. But if the content itself is shown only in the map, and the map itself is blocked by robots.txt, then we can't get to the content. We can't index that.
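
A sketch of the pattern described for the branches page: the map stays as an embed for visitors, but the branch details are repeated as ordinary HTML that can be crawled even though the embedded Maps content itself is disallowed by robots.txt. The addresses and embed URL are placeholders.

```html
<!-- The embedded map is a visual extra; its contents cannot be crawled. -->
<iframe src="https://www.google.com/maps/embed?pb=PLACEHOLDER" width="600" height="400"></iframe>

<!-- The same branch information as plain, indexable HTML. -->
<ul>
  <li>Main Street Branch, 12 Main Street, Springfield. Open Mon to Fri, 9:00 to 17:00.</li>
  <li>Harbour Branch, 3 Quay Road, Springfield. Open Sat, 10:00 to 14:00.</li>
</ul>
```
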
Google says it's preferable to noindex instead of deleting content. If many pages must be deleted, would it be better to replace them with pages that are noindexed and show a "content removed" message, instead of a single 404 page? I prefer to see a 404 page, because that helps us to avoid crawling those pages as much. So probably, in the end, we'll treat the noindexed pages as soft 404s, and we'll drop them out like that, as well as because they have a noindex, of course. But it's generally a little bit better from a crawling point of view to have a clear 404.

As a small business, we're never going to be able to get those top-quality endorsement links from big sites. So apart from writing great content, doing social media, and having a good website, what more should we be doing? So I sometimes see this happening with smaller businesses that go online, in the sense that they try to compete with really big brands out there. And usually, if you're a smaller business, that's probably not going to be a good strategy, because you never really have all of the resources that those really big companies have. And that's probably not a direction you want to head down. So I think that's something where, in search and in general online, you're going to have a hard time if you're trying to compete, for example, with Amazon. On the other hand, if you really have something special and unique that you're offering that all of these big companies don't offer, then that's something that I'd put in the foreground. And really make sure that you're promoting that, that you're showing the unique value that you have that other companies don't have. And online, there's a lot of potential there, because there's potentially a pretty big audience out there, even if you're doing something kind of unique that other companies aren't doing. So instead of trying to compete with the really big names on generic terms, on generic users who are just trying to find, I don't know, a cheap iPhone, try to find something that's really unique to your business. And make sure that that's really clear and up front on your website, so that we can pick that up and so that we can send users your way when they're looking for something that matches one of those areas. So that's kind of where I'd head. Not trying to get as many links and as many endorsements as everyone else, but really having something where you kind of have the market for yourself. And what you're providing is something that people are looking for, and where you have something unique that you can offer that isn't reliant on getting links from random other sites.

When Googlebot crawls a page, what does it do when it comes across an error, say invalid HTML? Can it discount that part of a page completely? So we sometimes see this question as, do you have to have valid HTML, or is valid HTML a ranking factor? And ideally, it would help us if everything were valid HTML, because it would make it easier to understand the content. But in the real world, most pages aren't valid HTML. And there's always something broken on a page. And we have to be able to deal with that. So it's not the case that we would say, well, this is invalid HTML, we'll drop it completely. We'll still try to understand what's actually on this page and how we can rank it. And sometimes that's easier, sometimes it's really hard. Sometimes we have pages that don't even load properly in a browser, and figuring out how to rank those can be pretty tricky.
But if it works in a browser, if it's something that's just like slightly invalid HTML, then that's not something I'd lose any sleep over. Maybe at some point, work on cleaning that up. But it's not going to be critical. All right. Another question about headings. I'll skip this one. Does Googlebot have a favorite color? No.

MALE SPEAKER: John, can I ask, if you have a date in the meta description, can that be duplicate content? For example, users upload 500 pages a day. Can that be flagged as duplicate content if we include the date?

JOHN MUELLER: I wouldn't worry about that. I don't think that would be a problem.

MALE SPEAKER: OK.

JOHN MUELLER: Let me see here. Here's another one about the escaped fragment. Will a site be penalized for cloaking due to differences between the JavaScript version and the escaped fragment version? No, it wouldn't be penalized. But sometimes we see sites do it wrong, in the sense that the prerendered or escaped fragment version is a very stripped-down version that doesn't have any navigation, for example. It's just the text content itself. And without the navigation, we don't have any links to follow within the site, and we can't really crawl the content properly. So it's not so much that we would penalize a site for a difference there, though potentially, if we find sites doing something really egregiously spammy, maybe we would penalize a site for that. But usually, most of the problems just come from a technical difference between the rendered version and the escaped fragment version, in that it causes us technical problems when that difference is there. All right. We're at time, but I have this room for a little bit longer, so we can hang out a little bit more.

BARUCH LABUNSKI: Wait John, it's very sad that I missed the entire Hangout.

JOHN MUELLER: Well you made it now.

BARUCH LABUNSKI: How are you? It's nice to see you.

JOHN MUELLER: Yeah, good to see you. Haven't seen you in a while in the Hangouts.

BARUCH LABUNSKI: Yeah, it's been a long time. So I just wanted to know, I mean this is huge news. We're talking like Bloomberg, and we're talking like-- I mean, all of us want to know what's going on. I mean, at least can you elaborate just a little bit on RankBrain?

JOHN MUELLER: I don't have anything unique to kind of share that hasn't been in the news there already. So that's something that we've had running for a while now. And it works to understand these specific types of queries a little bit better so that we pick the better matching pages to show in cases like that.

BARUCH LABUNSKI: So you're not seeing us obsolete, the people in this panel here. We're not going to be obsolete in the next three, four years, right?

JOHN MUELLER: I don't know. Some people might want to be obsolete and live on a deserted island, enjoy the sun all day. That's another option. Some people want to retire at some point. I don't see webmasters going away. I don't see SEO going away. I don't see search engineers going away either. These are all things that essentially need to be in place. And if artificial intelligence algorithms can help us to make it a little bit easier, then sure. I mean, that's great.

BARUCH LABUNSKI: [INAUDIBLE]. John, is it like just 5% of the actual other-- I mean, with RankBrain, I mean there's just so much other stuff in the actual pipeline there.

JOHN MUELLER: Well, there's always a lot of stuff happening. So I don't know if the others saw this article. I think it came out a couple days ago on Bloomberg, with an interview with one of the engineers behind this. And this is, essentially, an artificial intelligence algorithm that tries to understand these longer, more unique queries a little bit better. And that's something where I think around 15% of the queries we get every day are unique, queries that we've never seen before, where users are being creative, or they're trying something new, or they're just using Google like they would talk to anyone else out there. And sometimes it's really hard for us to understand what would be the best set of results to show them. And this algorithm helps us to kind of improve that a little bit. But there are lots of factors in our algorithms. So those algorithms aren't going away and being replaced by one big artificial intelligence that does everything and won't tell anyone what it's actually doing.

BARUCH LABUNSKI: OK. I mean, that's interesting. I appreciate that.

JOHN MUELLER: I think a lot of the artificial intelligence stuff is happening in everyday areas where you don't even recognize it, where you don't even see that there's something really fancy happening behind the scenes. And usually those are the best kind of implementations, where, like, you go to Google Photos and you search for, I don't know, pictures of a sunset. And the artificial intelligence systems are analyzing the photos, trying to recognize those photos and show them to you. And you look at that and you say, whoa, well, it worked, somehow. But you don't look at it and see, oh my god, this artificial intelligence is reading all my photos. So I think those are always the interesting aspects of technology, where, when it's implemented in a way that looks like it's completely natural and completely easy, those are usually the trickier cases.

BARUCH LABUNSKI: Right. Well that's, I mean, this thing and again, everything is applied. It's the RankBrain, it's the links, it's the overall rank. I mean, you guys look at the whole [INAUDIBLE] specific area.

JOHN MUELLER: We look at a lot of things, yeah.

BARUCH LABUNSKI: OK, sounds good.

MALE SPEAKER: Hey John, a couple of quick questions now that I've got some extra time with you. So these are pretty much client related, but I'll try to make it generally applicable. One client came to us with this domain. They basically bought it quite a few months ago. And when we took a look at it, we noticed there was a manual penalty and a lot of really spammy links. And so we decided-- the client really wanted this domain name, so we couldn't get them off it. And we decided to go through the whole process of submitting a reconsideration request, removing the links, and so forth. So now, when we tried to start this process a couple of days ago, we noticed the manual penalty that was applied to the website is gone. And there's no message why it is gone. And my only theory is that maybe it somehow expired. But that's kind of weird, because I was expecting there would be a message. I also thought that maybe I didn't see it right the first time, but I actually have a screenshot of it from somewhere in July with the manual penalty applied. So any idea what happened there?

JOHN MUELLER: I don't know. Maybe it expired. So when a penalty expires, a manual action like that, we don't send a message, because it's usually just the case that a site has cleaned something up, or hasn't been using Search Console and doesn't really know about this kind of reconsideration process. So that's something that can sometimes happen. Theoretically, it could also happen that something on our side got stuck, and this one specific message just didn't make it through to your message center. I don't think that should be something that would normally be happening. So I'd assume it most likely just expired.

BARUCH LABUNSKI: Well, we still are going to go through disavowing and removing any links we possibly can. But I just want to know if it's still the case that we need to do a reconsideration request now that we don't have that option available from the console?

JOHN MUELLER: Yeah, if you don't have a manual action in Search Console, then you don't need to do a reconsideration request. I think cleaning things up is always a good idea, because our algorithms do look at similar things sometimes. But you don't have to kind of go through that manual review process in a case like that.

BARUCH LABUNSKI: And just quickly, regarding what Gary said on Twitter-- I mean, for some reason, you guys are so open now on Twitter, by the way. I love the replies and what's going on. I love seeing that. And so Gary was saying, OK, so we're going to do this whole disavow, all that stuff, and then eventually Penguin will happen this year sometime. Right? I mean, so I guess it's worth it to continue to disavow and clean up and so on, right?

JOHN MUELLER: Sure.

BARUCH LABUNSKI: Right, OK. Well, I mean, you've just been asked this question so many times, and it's annoying. I wish there was just a one-time answer and everybody would just leave this subject alone, put it to rest. It's confusing.

JOHN MUELLER: Yeah. I mean, sometimes things do change. So it's sometimes useful to get a confirmation that it's still the case.

BARUCH LABUNSKI: Yeah. OK, I mean, yeah. So I'm trying to clean up as much as I can. I'm not waiting for it. If it happens, it happens. That's it.

MALE SPEAKER: Another quick question, John? So here's a URL. We did a few modifications to this page like a week ago. And I decided to do the Fetch as Google and submit to index to make sure the new content gets picked up immediately. And the page just disappeared from the index entirely. So it's simply not showing up at all. I noticed this was the case with another page as well. And after about three fetches, it finally got into the index. But this one just doesn't seem to, I don't know, show up at all. I've tried info:, site:, inurl:. And there seems to be nothing wrong with the website. Fetch as Google shows it correctly. So I'm not sure if it's a technical thing, or anything else.

JOHN MUELLER: Maybe you have a typo on there. No, I'm just kidding.

MALE SPEAKER: I didn't think so. You scared me now.

JOHN MUELLER: Yes, of course. Now let me just take a quick look. I think what's happening here--

MALE SPEAKER: It's basically just a page of products that have that brand. So it's simply a brand page. And we just added some content to give more information about the brand on that page. And it's just not in the index anymore, it seems.

JOHN MUELLER: So we're picking up, I think, a slightly different version of the URL. Or is it? The HTTP version of the URL. And we saw that it had a no index meta tag on it a while ago. Is that possible?

MALE SPEAKER: Yeah. Well, the HTTP one was on a different server. But as far as I remember, I Fetched as Google and submitted to index from the HTTPS property. Shouldn't that--

JOHN MUELLER: We still swap that out, yeah.

BARUCH LABUNSKI: Is it a new site you launched?

JOHN MUELLER: This is something where, especially if you have the same content on the pages and one of them has a noindex on it, then sometimes that's tricky for us. Because we see both of those versions, and we say, well, we'll pick one of these as the canonical. And then we happen to pick the one where you also have a noindex on it, which is kind of confusing. So instead of a noindex, I'd use a rel canonical there to let us know which one you really want to have indexed.

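For illustration only (this markup isn't from the discussion): a minimal sketch of the rel canonical approach John describes, assuming the HTTPS page is the one that should be indexed. The URL is a placeholder.

    <!-- Placed in the <head> of both the HTTP and HTTPS versions of the page,
         instead of a noindex meta tag on the duplicate version -->
    <link rel="canonical" href="https://www.example.com/brands/example-brand/">
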
MALE SPEAKER: But the HTTP version has a 301 redirect to the HTTPS version.

JOHN MUELLER: I don't know. I'd have to take a look at the details of what actually is happening there and why.

MALE SPEAKER: I just did a 301 sitewide so there shouldn't be any possible way to pick that HTTP one.

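As an aside, a sitewide HTTP-to-HTTPS 301 like the one the speaker mentions is usually set up at the server level. A minimal sketch for an Apache .htaccess configuration (an assumption, since the actual server isn't named; nginx would typically use a return 301 rule in the port-80 server block instead):

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
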
JOHN MUELLER: Well, in that case, it'll probably settle down the way that you'd like, so it will actually get indexed again.

MALE SPEAKER: Hopefully, hopefully. OK, thanks.

JOHN MUELLER: All right, let me just grab this one question. "I implemented JSON-LD on all my WordPress posts and afterwards found out that the Bing search engine doesn't support JSON-LD. Why is Google supporting a standard that isn't an accepted industry standard?" So I guess one of the aspects here is that JSON-LD is pretty new. And we do support it for some things, but not other things. I don't know what other search engines are doing with it yet, but it is something that's being picked up more and more. And in general, when you add structured data markup to your site, I'd try to make sure that you're looking at it holistically, in the sense that you're not just focusing on something that's purely for Google, but also thinking about the long run: how do you want to maintain this, and where else do you want to use this markup? And make sure that you pick one of the formats that actually works for you in the way that you want. That's kind of the reason why we support those different types of markup as well.

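For readers wondering what JSON-LD structured data looks like in practice, here is a minimal, hypothetical example for a blog post; all of the values are placeholders, not taken from the question.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "BlogPosting",
      "headline": "Example post title",
      "datePublished": "2015-10-27",
      "author": {
        "@type": "Person",
        "name": "Example Author"
      }
    }
    </script>
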
MALE SPEAKER: John, can I jump on top of that question?

JOHN MUELLER: Sure.

MALE SPEAKER: How's it going? I've got another fun one for you on support for HTTP/2. If we are using it, will Googlebot handle it? Will we get a speed bonus if it's a heck of a lot faster, or not until it's universally supported by browsers?

JOHN MUELLER: You won't get a speed bonus in a sense that you'll see something in rankings. But if we're able to crawl these pages faster, what might happen is that we're able to crawl more pages or crawl them more frequently. So it's not something where I'd say you'd get like a ranking bonus, but maybe just from a technical point of view, we can crawl faster. Then that would be useful for some sites as well, especially if you have newer content, if you have a lot of content.

MALE SPEAKER: What about for rendering like an e-commerce site where you have 30 products. And now we can render the images in a-- from a performance standpoint, it's a lot faster for a user.

JOHN MUELLER: Yeah. I don't think we'd have any automated bonus for ranking. Sorry.

MALE SPEAKER: Thanks, John.

MALE SPEAKER: Hey, John, I've got a quick question about Fetch and Render again, if that's all right. So the question I'm wondering about is Fetch as Google and Angular sites-- when is that coming, or is it already implemented? Because we have an Angular site, and it's not rendering properly in the Fetch and Render tool. But the blog post on October 14 made it sound like it's coming soon.

JOHN MUELLER: So what's not rendered properly?

MALE SPEAKER: So it's just not showing at all. The fetch is showing our page, but the Googlebot render shows just a blank page. But the "this is what your users saw" side is the correct render.

JOHN MUELLER: I'd love to have the example URLs to look at. If you can send them to me separately, maybe by Google+, I can take a look at that. In general, we're working on improving that, and it should be getting a lot better at some point when we update our rendering engine. So that's something that will probably improve quite a bit on our side over time. One thing you can do on your side is to try to figure out where it's getting stuck. It might be that there are specific elements that you're using that aren't supported by Googlebot at the moment. And if you can instrument your code in a way that catches all of these errors, then you'll probably be able to figure out, oh well, this specific element isn't supported by Googlebot at the moment; maybe I need to have some kind of a fallback for that as well. And oftentimes, it's just something in the JavaScript that's getting stuck. And because you don't see that in the Fetch and Render tool-- it doesn't show you the console view-- it's really hard to find. So instrumenting things to figure out where exactly it's getting stuck can help. We'll probably get a lot better at this over time as well. But if you want to have it working as quickly as possible, doing that on your side might be an option. Or it might be an option to say, well, I'll just prerender these pages for the moment and keep testing this, and when I see Googlebot is able to pick it up properly, then maybe I'll switch to just the JavaScript version.

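A rough sketch of the kind of instrumentation John suggests (an illustration, not a recommendation from the Hangout): a global error handler that reports uncaught JavaScript errors, so you can see which script or feature fails when Googlebot renders the page. The /js-error-log endpoint is a hypothetical path on your own server.

    <script>
      // Report uncaught JavaScript errors, including the script file and line
      window.addEventListener('error', function (event) {
        var report = {
          message: event.message,
          source: event.filename,
          line: event.lineno,
          userAgent: navigator.userAgent
        };
        // Send the report to a placeholder logging endpoint
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/js-error-log', true);
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.send(JSON.stringify(report));
      });
    </script>
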
MALE SPEAKER: Yeah, so that's what we have right now, the meta fragment. But yeah, I was just wondering about that. I guess the other kind of follow-up piece to that-- and I know that you're probably not going to have a very detailed answer-- but what is the difference between the two panes? I know one's Googlebot, but where is the other one coming from? Because that render is correct.

JOHN MUELLER: Which one do you mean?

MALE SPEAKER: So when you're looking at the Fetch and Render tool, there's a panel on the rendering that says this is how a visitor would see it. And that one is correct.

JOHN MUELLER: Oh, OK. One of them is with the Googlebot user agent, and the other one is with a browser user agent; for the Googlebot one, the robotted content is kind of excluded. So for Googlebot, we wouldn't use robotted content to render a page, which might be a problem if one of the JavaScript files is blocked by robots.txt, for example, or if the server responses are blocked by robots.txt. And usually that's kind of what we'd like to show there: whether a normal user would be able to see all your content and render it, but Googlebot, because of these subtle differences, isn't able to get everything or is only able to get most of it-- or whether it looks pretty much the same.

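To illustrate the robots.txt point (again a generic sketch, not the asker's actual setup, and the paths are placeholders): making sure script and stylesheet directories aren't disallowed for Googlebot avoids the blocked-resource differences John describes between the two renders.

    # Hypothetical example: let Googlebot fetch the JavaScript and CSS it
    # needs for rendering, while still blocking a private area
    User-agent: Googlebot
    Allow: /assets/js/
    Allow: /assets/css/
    Disallow: /private/
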
MALE SPEAKER: OK. Gotcha. Cool. Yeah, I'll send you that URL on Google+. Thanks for your help.

JOHN MUELLER: That would be great. All right. So we're almost 20 minutes over. Thank you all for all of the questions so far. I'll definitely set something up for user-generated content sites so that I can ping you guys as well, and I'd love to get your feedback on how things are going in the meantime. So thanks, everyone, for joining. I'll set up the next Hangouts probably when I get back to Switzerland, so it might be a week. And maybe we'll see you again in one of the future Hangouts.

MALE SPEAKER: Thanks, John.

JOHN MUELLER: Bye everyone.