Reconsideration Requests

Google+ Hangouts - Office Hours - 11 August 2015


Transcript Of The Office Hours Hangout

JOHN MUELLER: OK. Welcome everyone to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller, and I am a webmaster trends analyst here at Google in Switzerland. And part of what I do is talk with webmasters, publishers, people like the folks here in the Hangout, as well, who are working on websites. As always, we have room for some questions from some newer folks to start off with. Are there any new folks in the Hangout that would like to ask a question?

NICK: John, yeah, I have a question, if I could go ahead.


NICK: So our company has been trying really hard to get some different parameters de-indexed-- so things like searching, and sorting, and you know, like page [? due ?] within our pagination. We have everything set up as correctly as we possibly can, from what we can tell. Pagination is running fine. We go into Webmaster Tools and set our parameters correctly. Canonicals are all set up right. But nothing really seems to be dropping out of the index. In fact, some things like pagination, where we had a bug that caused a bunch of our pages, deeper in the series, to get indexed, we saw that drop pretty drastically a few days after we fixed the bug. But now it just tends to be going back up and down. So we're really not sure, first of all, why our parameters, like search and sort, aren't dropping out. And then also, why the pagination isn't being picked up. We've used robots.txt, as well, to block crawling and to noindex those pages. And just nothing seems to be working at the rate that we would have expected.

JOHN MUELLER: OK. I guess there are a few things that kind of come together there. I don't know which parts are relevant in your specific situation. So the first one is, if these parameters are blocked by robots.txt, then we won't be able to see any noindex or canonicals on those pages. So the robots.txt block is something I wouldn't recommend there, unless you have a really, really critical problem where we're crawling your server so much that it's crashing or something like that. So I'd try to avoid using robots.txt for solving duplicate content problems like that. The other thing is with the rel canonical, when you're looking at that, we first have to kind of crawl and index the page that has the rel canonical on it before we can forward everything to the new pages. So if you're specifically looking for those pages in search, then chances are you'll still find them. But if you search for something like some of the text on the page, then you probably won't find those in search.

NICK: OK. So would you say the best thing to do would be to temporarily open up our robots, so that those can be crawled? And then just close them back up once we start to see the results?

JOHN MUELLER: I'd leave it open. I'd let it continue to be crawled, so that the canonical can be found. If you have a no index on these pages, that can be found as well. So essentially, we can continue looking at your site normally. With the robots.txt in place, if that's blocking those parameters, we'll see links to those pages from the other pages of your website. And we'll try to follow them. If they're blocked by robots.txt, then we'll index those URLs without any content. So if you do a specific site query, something very specific to look for those URLs, you'll see the URL is indexed, but it says, blocked by robots.txt.
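The interaction John describes can be sketched with Python's standard-library robots.txt parser. The rules and URLs below are hypothetical stand-ins for Nick's parameterized pages, not anything from the actual site:

```python
from urllib import robotparser

# Hypothetical robots.txt that blocks a parameterized search path.
ROBOTS_TXT = """\
User-agent: *
Disallow: /search
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())
parser.modified()  # mark the rules as freshly loaded

blocked = "https://example.com/search?q=shoes"
allowed = "https://example.com/shoes"

# A disallowed URL is never fetched, so any noindex meta tag or
# rel=canonical on that page stays invisible to the crawler.
print(parser.can_fetch("Googlebot", blocked))  # False
print(parser.can_fetch("Googlebot", allowed))  # True
```

This is why the robots.txt block and the noindex directive work against each other: the block prevents the crawler from ever seeing the directive on the page.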

NICK: OK. All right. Thank you.

JOHN MUELLER: All right.

RUDRA PRASAD: Hey, John. I have one question.


RUDRA PRASAD: We recently launched a website wherein we kept all the content in JavaScript. And with the new changes from Google, we saw that Google started crawling all the content that we have on the website. Not in the full version, but in the text version, I can see all the content is crawled and indexed in Google. We wanted to check what the value of this content is compared to the plain HTML content that we have on the website. And the second question is, if we do any interlinking from this content, does it carry any value?

JOHN MUELLER: So if your website is built with JavaScript, like with a JavaScript framework that pulls in content from other sources, then we'll try to render those pages, kind of like a normal browser would. And we'll take the content that we find from rendering and use that for indexing. So if there are links in there, we'll follow those links. We'll keep them as normal links. If there's content in there, we'll treat that like normal content. It's not that we would kind of devalue anything just because it's generated with JavaScript. But the important part is that we can crawl all of these JavaScript resources. So if you have JavaScript files or if you have Ajax responses, those kinds of things, make sure that those are not blocked by the robots.txt, so that we can actually pick up everything and crawl it normally.

OK. Let's start through the questions that were submitted, then. Looks like it has a weird order. Let me see if I can flip back and forth and get those kind of sorted by votes. I guess not.

Is there any SEO benefit to having an RSS feed? So this is, I think, a good question. In general, there are two aspects involved there. One is if you're looking for a ranking boost by having an RSS feed, that's not going to happen. The RSS feed is really something that we see more as a technical help to crawl and index the content a little bit better. So if you have a website that's changing content fairly quickly, if it's a news website, a blog that has a lot of new content, maybe even a shop that has lots of new content, then the RSS feed really helps us to stay on top of things, so that we can pick up all of those new URLs and crawl them as soon as we can. So if we recognize that the RSS feed is working fine, and we get a ping for the RSS feed, then we'll pick it up. We'll try to crawl those pages that are updated there maybe within seconds. So that's something that works really, really quickly. 
You can also, if you have a website that's updating a lot very quickly, use PubSubHubbub to kind of speed things up for the RSS feed, so that you don't need to ping us separately. So those are all things that really help us to pick up new and modified content within a website with an RSS feed. There's no direct ranking boost for the website itself. But of course, if this content can't be indexed, then we can't pick that up and show that in search. So if there's something new on your site and the RSS feed helps us to find it faster, then we'll be able to show it in search faster. But it's not that there's any bonus for the existing content, for everything else, just by having an RSS feed.
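The ping John mentions was just an HTTP GET to a well-known endpoint (since retired by Google). A minimal sketch of building that ping URL for an RSS feed, with a placeholder feed address:

```python
from urllib.parse import urlencode

def sitemap_ping_url(feed_url: str) -> str:
    # Google accepted sitemap/RSS-feed pings at this endpoint at the
    # time of this hangout; the feed URL here is a hypothetical example.
    return "https://www.google.com/ping?" + urlencode({"sitemap": feed_url})

print(sitemap_ping_url("https://example.com/feed.xml"))
# https://www.google.com/ping?sitemap=https%3A%2F%2Fexample.com%2Ffeed.xml
```

PubSubHubbub goes one step further: instead of Google polling the feed after a ping, the feed's hub pushes updates out to subscribers directly.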

MIHAI APERGHIS: Hi, John. I'm working with a big automotive publisher in the US, and they're using a news sitemap. Would that be kind of equivalent, or do you process those independently, or just with the Google News bot? And would the RSS feed still help to kind of discover and index the new articles faster? They put out about 10 articles per day, so it's really important that those get picked up really fast. They have a news sitemap, so I was wondering if that's enough to kind of get the content indexed.

JOHN MUELLER: If they ping that news sitemap those 10 times a day, then that's generally enough. We have a blog post on the blog about RSS and sitemaps, from maybe about a year back, that kind of goes into the details of where you might use one or the other format. But if you're looking at something where you have 10 updates a day, then I think either variation would work. If you're looking at something that has thousands of updates a day, then probably an RSS feed can help to kind of focus on those updates specifically. Or you can also use PubSubHubbub to push all of that directly.


JOHN MUELLER: Would there be a small ranking benefit if we compare the same site once with a lot of 404s and soft 404s present, and other technical problems, such as wrong hreflang, no canonicals, in comparison to the same site in a perfect technical condition? So again, there are two aspects here. On the one hand, the crawling and indexing part. And on the other hand, the ranking part. When we look at the ranking part-- and we essentially find all of these problems-- in general, that's not going to be a problem. Where you might see some effect is with the hreflang markup, because with the hreflang markup, we can then show the right pages in the search results. It's not that those pages would rank better, but you'd have the right pages ranking in the search results. With regards to 404s and soft 404s, those are all technical issues that any site can have. And that's not something that we would count against a website. On the other hand, for crawling and indexing, if you have all of these problems with your website, if you have a complicated URL structure that's really hard to crawl, where it's really hard for us to figure out which version of these URLs we should be indexing, there's no canonical, all of that kind of adds up and makes it a lot harder for us to actually crawl and index these pages optimally. So what might happen is we get stuck and crawl a lot of cruft on a website and actually not notice that there's some great new content that we were missing out on. So that's something that could be happening there. And yeah. I guess that's pretty much the comparison. So it's not that we would count technical issues against a site when it comes to ranking. But rather that these technical issues can cause crawling and indexing problems that can result in things not being processed optimally.

GREG: Hey, John, can I jump in real quick?


GREG: I unmuted because I'm in a coffee shop-- ambient background noise. I'll try to keep it short. I've noticed, especially in the last six months or so, across a number of different verticals, rankings seem really stale and static, like they're not moving around at all regardless of on- and off-site activity. We're doing a couple of viral marketing campaigns that we're going to get a ton of links through. We've restructured-- and this is not just one site, the site that I've been pestering you about for the last year and a half. It's a number of them. They're responding well on Yahoo! and Bing. What's going on with Google, basically?

JOHN MUELLER: It's hard to say what you're seeing there. So in general, we are working on search. And we are pushing updates all the time. So I think last year, we made over 1,000 updates in search. And these are things that you'll always see resulting in fluctuations in ranking, changes in the way that we rank things. Also, of course, we pick up all of the new content, pick up all of the signals that are associated with that, and try to take that into account. So if we see that signals for one site changed significantly, then that should be picked up within a fairly short time, maybe even like minutes, hours, that kind of time frame even.

GREG: Because it usually-- I mean, I've been doing this stuff for, like, 12 years. And in the past, usually within a couple of weeks after you start implementing some changes, whether it's on or off site, you're going to see some movement. The needle's going to move one way or the other. Lately, at least for all of the sites that I'm monitoring, even competitor sites that we're just monitoring, actively monitoring, nothing's moving. And I don't know if it's just the verticals that we're in, or if certain images are tougher to change. Or I don't know if you can elaborate on that.

JOHN MUELLER: I don't know. It's an interesting question. But I don't really have anything specific where I'd say, oh, we, I don't know, all went on summer vacation for the last couple of months, and nothing is happening in search. People are working here, and they're pushing changes. We have our algorithms that are picking these things up automatically. So it's not that there's any kind of an artificial freeze on the search results. Or we'd say, for this niche, or for this kind of website, or even for search in general, we're freezing things and not changing things at the moment. Yeah, I mean, things should always be changing. But of course, if you make a lot of changes on your website and you don't see any results, then maybe some of those changes aren't really relevant for that site's ranking at the moment. That might be one aspect there.

GREG: Thank you.

JOHN MUELLER: All right. When blocking dynamic URLs with robots.txt, does it make sense to also implement a meta robots tag? I think we looked at this briefly before. But essentially, if a page is blocked by robots.txt, then we can't pick up any of the meta tags on the page. So we can't tell if there's a nofollow on there, or if there's a noindex on there. We don't see any of that when the page is blocked by robots.txt. So if those meta tags are really important for you, in that you want something indexed, or you don't want something indexed, or you want the canonical to be picked up, you want to kind of combine different pages into one single page so that all the links get concentrated in one place, then make sure that we can actually crawl those pages, so that we can process them for indexing. Even if that means that these pages have a noindex on them and you don't want them indexed, if we can't crawl them, we can't tell that there's a noindex on there.

To get rid of doorway pages, we're thinking of doing 301s. Is that better than doing a noindex, follow? As these pages get traffic, we're trying to tidy up our site, but don't want to get punished or have a negative impact if we tidy up wrong. So essentially, both of these options would work. You could also, theoretically, use a 404 on these pages. So from a technical point of view, these options are all possible. Which one makes more sense for you is essentially left up to you. So if you want to combine everything into one single page, using a 301 is a great way to do that. If you want to keep these pages kind of as ad-landing pages, those kinds of things, but you don't want them taken into account for search, then maybe a noindex is the right option here. But essentially, these are all different ways of kind of reaching the same goal. And it really depends on your site, your goals, what you want to achieve with that, regarding which one of these options you choose. 
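The three cleanup options John lists (301, noindex, 404) amount to one decision per retired URL. A sketch under hypothetical assumptions: the paths, the merge target, and the lookup table below are all invented for illustration:

```python
# Hypothetical cleanup plan for retired doorway pages.
MERGE_TARGET = "/services"
RETIRED = {
    "/doorway/london": "301",     # fold into one combined page
    "/doorway/paris": "noindex",  # keep as an ad-landing page, out of search
    "/doorway/madrid": "404",     # drop entirely
}

def response_for(path):
    """Return (status, headers, body) for a requested path."""
    action = RETIRED.get(path)
    if action == "301":
        # Redirect concentrates signals on the target page.
        return 301, {"Location": MERGE_TARGET}, ""
    if action == "noindex":
        # Served normally, but asks search engines not to index it.
        return 200, {"X-Robots-Tag": "noindex"}, "<html>...</html>"
    if action == "404":
        return 404, {}, ""
    return 200, {}, "<html>...</html>"
```

Any of the three gets the page out of search; the 301 additionally folds the old page's signals into the target.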
What do I have to do to show my company information from Wikipedia in the Knowledge Graph sidebar on brand search queries? I've already added it to Wikidata and Freebase. What else is required? Essentially, we pick up this information from various sources, like Wikipedia, like Wikidata, Freebase. And our algorithms then try to figure out whether or not it makes sense to show that information. And sometimes this is not really a technical decision, in the sense that our algorithms might look at this and say, well, it probably doesn't make sense to show something specific from Wikipedia in a sidebar here. Or sometimes they'll say, well, it makes sense to show something, but we don't really have that much information, so we can't show anything. So I think by going through and making sure that the base information is available, you're doing the right thing there. And the rest is not really something that depends on technical issues, but rather on how we think that this content matches what a user might be looking for.

As a multi-location business, we use the same toll-free number for all our branches. I've read we should use local numbers for each local branch. Does not using a local number impact local rankings? I think this question is more about Google My Business local search, and I don't really have an answer for that. So for that, I'd probably check in with the Maps forum or with the Google Local Business forums and ask there. It's possible that this is used for the Google Local Business entries, but I really don't have an answer that I'd feel comfortable sharing.

When using parameters in Google Search Console, is this only a guide for Google, or is this definite? Is it better to use parameters or use robots.txt if I want to block certain pages from being crawled? So this kind of goes into the question we had in the beginning. 
In general, using the parameter settings in Search Console is a great way to give us information about which parameters you think should be indexed, or which ones need to be crawled, or need to have a representative crawled. But it's not a complete block. It's not that we would stop crawling those parameters completely. But rather, our algorithms might look through a sample of these and kind of double-check what the webmaster applied there. And if that looks good, then they'll continue using that. But if they notice that there's a significant mismatch between what the webmaster said and what is actually found on the website, then we might choose to kind of not value that webmaster input that much. So that's something kind of to keep in mind there. If you really need these pages to be blocked from crawling, if they're causing a load on your server that you can't handle, or they're causing problems elsewhere within your website, I'd really use a robots.txt file instead. Because that's really a definitive directive, where you're saying, Google, you shouldn't be crawling this part of my website, or URLs that look like this. Search Console is great to kind of give us more information about your website-- these parameters are important, or these parameters are for sorting or language selection. But it's not something where we'd say, this is a definitive rule for Google, that it should be used like this or not. So if it's causing a problem, use the robots.txt. If it's just a matter of giving us this information so that we can index and crawl your website better, then, of course, Search Console is the place to go.

How does Google show in Google Webmaster Tools how a user sees your web page and how a crawler sees it? When I'm blocking some content from the crawler, how does Google show this content compared to how users see it? 
So I don't actually know for sure if we still do this, but I believe in the Fetch as Google feature, if you select that you want to have the page rendered, we show you the page how Googlebot would see it, based on the robots.txt directives that are in place, and how a user might see it, kind of disregarding those robots.txt directives. So specifically around JavaScript, CSS, images, those kinds of things. Because often we find that if you compare those two screenshots next to each other, then you'll notice that, oh, this is really critical content that I'm blocking. Or you'll notice that maybe this is totally uncritical content that's being blocked, and it has no effect at all on how the page is actually rendered by Google. So that kind of difference between those two screenshots helps you to figure out, are you doing the right thing with the robots.txt file, or are you blocking too much? Or are you maybe blocking something that's not really that critical, which wouldn't be a first priority to clean up?

Will the feature Critic Reviews and Knowledge Card be available globally, including countries using double-byte character sets? I don't actually know what the plans are there. In general, we do try to start off with something kind of limited to try things out. And then we roll it out globally as much as possible. And differences around character sets are usually less of a problem, because we have a lot of practice reading pages in various encodings and character sets. Sometimes it's more of a problem of us recognizing or understanding the content properly, being able to double-check that it's actually working as expected there. But for the most part, we do launch things sometimes on a limited basis initially. And then we try to expand that when we see that it's working well and use that globally as much as possible. Ideally, all of our algorithms would be global, because that makes it a lot easier to maintain things. 
We don't have to worry about this specific algorithm that's only used in Japan. But rather, we can focus on one algorithm that's [INAUDIBLE] across the whole world, ideally.

Would URLs with content as below count as duplicate content? Let's see. There's one URL, and there's the other. So I think the question is around, would translated content be seen as duplicate content? And for us, the answer is a clear no, in that, if something is translated, it's really something different. It's not the same as the other language content. So just because kind of the source of the content is the same, and you have one version that's in English, and one version that's in Italian, one version in Spanish like this, doesn't mean that these pages are equivalent, that they could be swapped out against each other, that they're duplicates. So we'll try to crawl and index these pages separately. We'll try to show them in search results separately. What might happen is, if you have an hreflang link between these pages, where you say, well, this is the equivalent content in Spanish, and this is the equivalent content in Italian, then we might use that to swap out those URLs when someone is searching in those languages. But we wouldn't treat them as duplicates. So we treat these as normal URLs. And especially in the beginning, when we just look at the URLs, when we see these URLs alone, kind of like you have them here in the question, we would definitely just try to crawl them separately if we can. We wouldn't be able to recognize from the beginning that, actually, this is kind of a similar URL that maybe we don't really need to crawl.
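The hreflang linking John describes is a set of reciprocal link elements in the head of each language version. A minimal sketch with hypothetical URLs:

```html
<!-- Hypothetical hreflang annotations: each translated page is
     indexed separately, but the matching language version may be
     swapped in for a searcher's locale. -->
<link rel="alternate" hreflang="en" href="https://example.com/en/page" />
<link rel="alternate" hreflang="es" href="https://example.com/es/page" />
<link rel="alternate" hreflang="it" href="https://example.com/it/page" />
<link rel="alternate" hreflang="x-default" href="https://example.com/en/page" />
```

Each language version carries the full set of alternates, including itself, and the annotations need to point back at each other to be trusted.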

ODYSSEAS: Hey, John, can I ask you a question?


ODYSSEAS: So we also have a big site. I just sent you the URL. And from what we can tell, it doesn't have any content quality issues. I don't know if you can confirm that or check it out. But is it a good idea-- how would you go about migrating a site that has some quality issues to a bigger site that doesn't have any? And maybe you can also check the URL and confirm for me that my assumption is correct, and it's not like one plus one is going to be four in the quality arena. But how would you think about that idea?

JOHN MUELLER: So when you're combining sites?


JOHN MUELLER: So not when you're, like, moving from one domain to another? But rather, you're taking two sites and you're putting them into one site?

ODYSSEAS: Correct.


ODYSSEAS: And one of them is much bigger than the other.

JOHN MUELLER: Yeah. I don't think there's one easy rule to kind of handle that. So in general, what you'd try to do is to make sure that all of the old URLs-- so if you have two domains, for example, and you keep one of them, and the other one, you want to kind of fold it into your main domain, then you need to make sure that all the URLs from the domain that you don't want to keep kind of redirect to an appropriate URL on the new domain. And that could be a new page on the new domain. It could be an existing page. But essentially, we'd want to see kind of all of those redirects happening there. But the tricky part here is, of course, folding two sites together is not as easy as, like you said, one plus one is two, in that, we take into account all the signals of the combined site. And that's not something that we can just kind of extrapolate from. So we essentially have to crawl pretty much all of those URLs to be able to fold things together. And then reprocess everything on our side to see where does everything kind of end up in the long run. So with a new site, how does it look in the bigger picture?
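The redirect setup John describes is essentially a URL-to-URL table on the retired domain. A sketch with hypothetical domains and paths:

```python
# Hypothetical map from paths on the retired domain to their new
# homes on the kept domain.
KEPT_DOMAIN = "https://main.example.com"
REDIRECTS = {
    "/products/widget": "/shop/widget",  # folded into an existing page
    "/about-us": "/about/old-brand",     # given a new page of its own
}

def redirect_target(old_path: str) -> str:
    """Every old URL should 301 to an appropriate page; the fallback
    goes to a relevant section rather than blindly to the homepage."""
    return KEPT_DOMAIN + REDIRECTS.get(old_path, "/shop")
```

The closer each old URL lands to its true equivalent, the more cleanly the combined site's signals can be reprocessed on Google's side.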

ODYSSEAS: Got it. But there is not something like-- so you do an overall assessment? There is not anything like, OK, the content quality penalty from one site will move to the other? And all of a sudden, you will have a bigger site get penalized? As you said, you will crawl the whole picture and get the whole picture to make an assessment. Is that right?

JOHN MUELLER: Yeah. Yeah. That's pretty much what we try to do there. We try to treat it as one website. So it's not that we'd say, oh, well, this piece of content here came from this website here, and therefore it has to be treated differently than the rest of the content on the website. But rather, we'd look at everything the way that you have it now on your website, or in the end. And we'd try to figure out how we should be treating this website overall.

ODYSSEAS: And how do algorithms like Panda, which don't run on a continuous basis, work in a situation like this?

JOHN MUELLER: So it's hard to say, because we do have to kind of update those algorithms from time to time. They do look at the site overall. So if there are low-quality issues across the whole site, then that's something that could be taken into account there. But if there are just low-quality issues on a very small part of the site, then that's generally less of a problem.

ODYSSEAS: OK. Is my assessment correct that the URL I gave you seems to be fine? It doesn't have any content quality issues like the other one?

JOHN MUELLER: I don't know. I haven't had a chance to take a look.


JOHN MUELLER: But I'll try to take a look afterwards. But in general, it's something where it's really hard to say ahead of time how that will kind of end up, if you combine two sites. If you move from one domain to another, it's easier to say, well, we just transfer all of the signals from this domain to the other one. But when you combine things, it's hard to say. Sometimes one plus one ends up being four, in that we see this as a really, really awesome combined site. Sometimes we'll see that one plus one is maybe one and a half, because it's kind of a mixture of a lot of low-quality content with higher-quality content. It's kind of hard to figure out where that should be ending up in the long run. So that's something where I'd kind of take care to figure out how exactly you want to merge these sites, to make sure that you really have something fantastic in the end, and not something that's kind of a mixture of, I don't know, in an extreme case, two mediocre sites. I don't think that's the case with your sites. But that's something where you kind of have to think about what you are trying to achieve in the long run.

ODYSSEAS: Yeah. No, in the long run, strategically, we want to do that combination. From a branding standpoint and from a focus standpoint, it's not a question. But what we don't want to happen is to transfer a penalty to a bigger site. So I'm glad to hear that you're going to assess the overall picture and not just say, oh, those pages had the penalty that came over, now we need the second site to have a site-wide penalty as well. So that's reassuring. And thanks for your offer to check out the domain name offline.

JOHN MUELLER: All right. Let me run through some more of the questions that were submitted. And then we can get back to some more open discussions. For multi-location businesses, some sites have branch pages for each physical location. If you offer the same service at each depot, how can you rank for local terms unless you have location-specific category pages hanging off branch pages? Is that bad? So I guess there are multiple ways that you can look at this. On the one hand, if you have kind of the extreme case where you have almost doorway pages, where you have each location has its own website, then that's something that we would generally see as being kind of spamming, something that I'd recommend you not do. On the other hand, another extreme idea, you just have one home page that's for all of your branches. That might not be perfect either. In general, what you'd have, if you have local businesses, you probably also have a local business entry. And that's something that could be ranking in the search results. So from my point of view, what I'd definitely try to aim for is making sure that you have your local business entry for all of these locations. And if there's something specific that you have to share around these locations, maybe it makes sense to put something on a website. Maybe it makes sense to combine some of these entries together. So if you have branches in all the cities across the US, maybe you'd want to separate that up by state or by type of service that you offer in general, and kind of list the branches that offer the service. I'd just shy away from going down the doorway route, where you're essentially just creating pages, or websites, just for these individual branches, and they don't have any unique value of their own. So that's what I'd avoid. 
If there is some way that you can provide unique value maybe for a handful of branches, maybe for some of the services that you offer across these branches, then, by all means, put that out on a website. But avoid just creating pages just for kind of matching those queries.

GREG: John, if I can jump in again one last time. I'm going to try to restructure the question. How important are links and outreach in content marketing in today's Google algorithms?

JOHN MUELLER: We use both. I don't think this will really help you in that regard. But I think the interesting part of our algorithms is that we do use so many different factors. And we don't apply the same weight for everything. So it's not the case that every website needs as many links as Wikipedia has in order to show up in search. But rather, depending on what people are searching for, we try to balance the search results and provide something relevant for the user. And sometimes that means that we put some weight on links, because we think that they're still a relatively useful way of understanding how a site should sit within the context of the web. And sometimes it might mean that we don't focus on links at all, that we have a lot of good content that we can recognize there without maybe any links at all, with maybe a really small number of links that are actually pointing at that. So it's not the case that there's one fixed weight, where we'd say, well, links are 85%, and on-page content is this much, and this is this much. But rather, we try to balance that based on the search results, based on what we think leads to relevant results for the user.

GREG: OK. But I mean, essentially, building a quality website with unique value proposition, with some links, good content, social interactions, should produce good results?

JOHN MUELLER: Generally, yes. But I mean, it all depends on what you're aiming for. If you're trying to--

GREG: Number one.

JOHN MUELLER: Yeah. But if you're trying to be, like, an online bookstore, and you have a nice website, and it has a handful of links, and some people are talking about it on Twitter, then you're not really going to pass everyone else that's been running online bookstores for the last 20 years. So it's really a matter of kind of looking at your site in the context of where you want to be. And some niches are really hard to kind of get into. And it's not a matter of just having a nice website that people think is kind of nice. You really have to have a really strong presence there. And that's not something that's easily done. Sometimes that takes a lot of time. Sometimes that takes a lot of work. Sometimes that takes a lot of money to actually create a website that people think is professional enough to actually trust.

GREG: I trust your answer. Thank you, John.


MIHAI APERGHIS: John, regarding that last question with multiple locations of businesses. So for example, if you have one location for each state, let's say, would it be enough to kind of add on that location page maybe directions on how to get to that office, or maybe some information about even the staff, or other contact information to kind of build up the uniqueness of that page? Would that be enough to kind of tell Google, this is not just a page made for ranking or query [INAUDIBLE]?

JOHN MUELLER: Yeah. I mean, that's all good things that you can add to the page. Opening hours, those are good things to add. Maybe unique information about this location could be useful. Those are all things that add extra value, in that, if someone is searching for that business, and they click on a search result, and they land there, they're not going, why is this showing in search? This doesn't provide any value. So those kind of things definitely make sense. They also help us to kind of recognize where we should be showing this search result-- maybe to extract the opening hours, that we can use in the sidebar, all of those things.

MIHAI APERGHIS: Right, right. And I assume structured data markup actually helps you kind of find out?

JOHN MUELLER: Yeah, yeah. Sure.
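A concrete form of the structured data markup discussed here is schema.org's LocalBusiness vocabulary embedded as JSON-LD, which covers exactly the opening hours and location details John mentions. The sketch below builds such a snippet with Python's standard library; the property names come from the public schema.org vocabulary, while the business details are made-up placeholders:

```python
import json

def local_business_jsonld(name, street, city, region, opening_hours):
    """Build a schema.org LocalBusiness JSON-LD snippet for one location.

    Property names follow the public schema.org vocabulary; all the
    values passed in are hypothetical examples.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": name,
        "address": {
            "@type": "PostalAddress",
            "streetAddress": street,
            "addressLocality": city,
            "addressRegion": region,
        },
        "openingHours": opening_hours,
    }
    # Embedded inside a <script type="application/ld+json"> tag in the page head.
    return '<script type="application/ld+json">%s</script>' % json.dumps(data)

snippet = local_business_jsonld(
    "Example Widgets", "1 Main St", "Springfield", "IL", "Mo-Fr 09:00-17:00")
print(snippet)
```

Running the resulting markup through a structured data validation tool before deploying it is a sensible extra step.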


JOHN MUELLER: I changed my business and domain name, but pages are still showing with the old domain. 301 redirects have been set up, but I must change the search listing so the new name and domain appear. Is change of address in Search Console the best way to go? The change of address tool in Search Console is definitely a good thing to do. We have a whole set of guidelines in our Help Center about what you could be doing when changing from one domain to another. And I'd kind of go through that and just double-check that you're really doing everything right there. The thing to keep in mind is that even if you have a redirect set up, it's going to take quite some time, if you do a specific search for the old domain, for that to really drop out. So for a certain amount of time, we'll probably try to be smart and say, oh, this person is searching for the old domain, and we know about this old domain, so we'll show that to you in the search results. And that's probably not what you're trying to achieve there. So if you do a site query for the old domain, you almost need to be prepared for those numbers to be there for quite a long time-- maybe months, maybe even a year-- before they actually drop out. But the good news is that if someone is searching for your business name, or for your company, or the type of business that you're doing, then we'll show the new domain as much as possible. So essentially, if you're looking for the old URLs, then we'll try to show those to you anyway. So that's not a good metric to focus on. But if you're searching for your business itself, then that's something we'll probably pick up on fairly quickly.

John, why are you not on the Open Office Hours drawing? The person has hair, but you don't have any. OK. I need to add some hair, I guess. Yeah, I thought it was a really nice trailer image there. But maybe I need to get one without hair. I don't know.
[SIGHS] Always have to be so accurate. Terrible.

I am the webmaster for an information-based website. When a user fills out a form, a conversion like that, will the algorithm treat it as a positive ranking factor? Or if a user spends time reading a blog article, does that increase the authority of my website? So in general, we don't even see what people are doing on your website-- if they're filling out forms or not, if they're converting and actually buying something. And if we can't see that, then it's not something we'd be able to take into account anyway. So from my point of view, that's not something I'd really treat as a ranking factor. But of course, if people are going to your website, and they're filling out forms, signing up for your service or for a newsletter, then generally, that's a sign that you're doing the right things, that people are going there and finding it interesting enough to take the step of leaving their information as well. So I'd see that as a positive thing in general. But I wouldn't assume that it's something Google will pick up as a ranking factor and use to promote your website in search automatically.

As Google has started indexing content in JavaScript, will it have the same value as plain HTML content? Interlinking [? within ?] the JavaScript content, will it carry the same value? Yes. Like I mentioned before, when we render the pages and we can pick up the content through JavaScript, that's something we'll treat as normal content. And if there are links in there generated by JavaScript, we'll try to treat those as normal links as well. So that also goes into the area of links that you don't want to have on your website. If there's something like user-generated content, or if you have advertisements on your website for other websites, then I'd make sure that you're using rel=nofollow there, so that those links don't pass PageRank.
So even if they're generated with JavaScript, you can add the nofollow attribute there through whatever DOM manipulation, or however you generate those links within your content. And we'll respect that appropriately, as well.

Do desktop/mobile server errors impact search ranking? I noticed many 5xx errors within Google Search Console, but mostly those URLs are noindexed at the source. So why are those considered errors and crawled by Google? With a 500 error, we'll assume that's a server error, and we won't index that page as it is. So if we see a page that returns a server error, we won't take that content into account for indexing. Of course, if the page has a noindex on it, then we wouldn't take it into account anyway. So that's not something where you'd see any kind of direct impact in search with regards to ranking. You might, however, see an impact on how quickly we crawl. If we notice that a server or a website returns a lot of 500 errors, then we might think that this is because we're crawling it too hard, that we're causing these errors by accessing it so frequently, and we'll back off from the crawling rate that we use for that website. So from that point of view, if your website used to not have any server errors and suddenly you see a lot of server errors, you might double-check the crawl rates to make sure that we're actually still crawling as much as we used to crawl. Or double-check the crawl rate to see if we were crawling way too much, and if our backing off on the crawl rate is the right thing to do there. So that's what I'd look at there.
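John's advice about nofollow on generated links is normally applied in the page's own JavaScript, but the same check can be sketched server-side. The sketch below, using only Python's standard library `html.parser`, flags external links that are missing rel="nofollow"; the hostnames and markup are hypothetical:

```python
from html.parser import HTMLParser

class NofollowAuditor(HTMLParser):
    """Collect external <a> tags that lack rel="nofollow".

    An illustrative sketch only: it reports which links would need
    the attribute rather than rewriting the markup in place.
    """
    def __init__(self, own_host):
        super().__init__()
        self.own_host = own_host
        self.needs_nofollow = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href", "")
        # Crude externality check, good enough for the sketch.
        external = href.startswith("http") and self.own_host not in href
        if external and "nofollow" not in attrs.get("rel", ""):
            self.needs_nofollow.append(href)

html = ('<a href="https://example.com/page">internal</a>'
        '<a href="https://ads.example.net/offer">ad</a>'
        '<a href="https://other.example.net/x" rel="nofollow">ugc</a>')
auditor = NofollowAuditor("example.com")
auditor.feed(html)
print(auditor.needs_nofollow)
```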

CRISTIAN: John, can I ask you a question?


CRISTIAN: We have this page [? littering ?] in the SEO market in Chile for about five years, in the number one [INAUDIBLE] position for five years. A few weeks ago, we made some redirections from some pages talking about similar things to this main page. [INAUDIBLE] SEO. Until then, we had a very good position, but then we went down in the positions. Maybe it's these pages that redirect to this main page? Maybe it hurts because of some penalty? We don't have any penalty, as far as I know. But maybe it hurts, this redirection of pages that have no content to this main page. Can you hear? These redirections? I can show you the--

JOHN MUELLER: Yeah. I probably need the URLs. Maybe you can post them in the chat. But I probably need to take a look at the details. So it's not something where I can just take a quick glance and say, well, this is what's happening there. But in general, as things change on the web, on your website, then that can result in changes in ranking, too. But if you can post the URLs in the comments, then I can pull that out afterwards and take a quick look to see if there's anything specific that I could point out.

CRISTIAN: OK. Thanks, John.

JOHN MUELLER: HTTPS boost. For websites that have implemented HTTPS, does it matter which SSL certificate it is-- for example, whether it's a standard certificate or an extended validation certificate? No. From our point of view, we treat those the same. If it's a valid certificate that's set up in a way that's using modern encryption standards, that's not obsolete, then we'll treat that as a valid certificate. And we'll look at that on a per-URL basis and rank that appropriately. So it doesn't really matter, from our point of view, which type of certificate you use. If there is something specific that you want to use that certificate for, then by all means, pick the one that works best for you. But if you're just trying to move everything to HTTPS, there are a lot of options out there, including a lot of really cheap ones. So you don't have to go out and buy a really expensive certificate if that's not something that you explicitly need for anything special.

What's the limit on the number of participants in the video? 10. So I think that's about it. We fill up very quickly in the beginning. So in general, if you want to join these and you haven't joined them before, make sure you leave a comment in the event listings, so I can add you earlier. Otherwise, make sure that you're refreshing that event page fairly frequently, so that you pick up that link and can jump in as quickly as possible.

I just bought an SSL certificate for my main domain. One of my folders runs WordPress. If I don't use SSL on this folder, will it harm my rankings or create duplicate content? Like I mentioned before, we look at this on a per-URL basis. If there are some parts of your site that are on HTTPS and some parts that are on HTTP, then we'll take that into account appropriately. Some of those might have that kind of tiny boost for HTTPS, and other parts wouldn't have that. So that's not something I'd see as being a critical problem. On the other hand, having a website that's split between HTTP and HTTPS makes maintenance a lot trickier, in that you can't easily include HTTP content within an HTTPS website; it will be flagged as a mixed content warning. So as much as possible, I'd aim to move the whole website to HTTPS, so that you really have one version to maintain. You don't have to worry about the mixed content issues, and it makes things a lot easier in general.

I'm an [? old ?] webmaster. Is there some way I can get walk-through assistance to make sitemaps? I've read everything I can about it, but I really need someone to walk me through it. Is there a basic web-based tutorial or interactive guide for this? I'm not aware of any basic tutorial for creating those files. But in general, most content management systems-- so if you're using something to blog with, or an e-commerce shop-- either have sitemaps built in already, or have a plug-in that makes it really easy, where you just activate the plug-in, and then the sitemaps start working. So that's what I'd look for there. Depending on which system you're using to publish your content, see if there's something really simple to activate that sets up sitemaps for you. And it might be that your content management system is already generating sitemaps for you behind the scenes, and you don't need to do anything at all.

Let's see. Have you ever considered implementing dynamic search results pages? Many websites for competitive queries deserve to be top one. Why not have different sites on top one, top two, et cetera?
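Coming back to the sitemap question above: most content management systems will generate one for you, but the file format itself is small. A minimal sketch using only Python's standard library follows; the element names come from the public sitemaps.org protocol, and the URLs and dates are placeholders:

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a minimal XML sitemap per the sitemaps.org protocol.

    `urls` is an iterable of (loc, lastmod) pairs; lastmod may be None.
    Illustrative only -- a CMS plug-in normally does this for you.
    """
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        if lastmod:
            ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap([
    ("https://example.com/", "2015-08-11"),
    ("https://example.com/about", None),
])
print(sitemap)
```

The resulting file would typically be saved as sitemap.xml at the site root and submitted through Search Console.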

MIHAI APERGHIS: Yeah, John. Why are you using static search instead of dynamic search?

CRISTIAN: I asked that question.

JOHN MUELLER: So can you elaborate, Cristian?

CRISTIAN: I think that for many sites, for competitive keywords, many sites deserve the top one, and many deserve top two and top three. And Google, on the other side, is always testing in the search, no? But this linear catalog of results, that can maybe stay for 24 hours, a week, or even a month in the top one. This linear catalog could be improved with dynamic search. Why don't you use this kind of search to test the real number one, the very best number one site? That's my question.

JOHN MUELLER: So basically, to have the top search result kind of swap out against other ones from time to time? Something like that?

CRISTIAN: No. You can have an array for the top one. So an American [INAUDIBLE] keyword maybe deserves five sites in the top one, or 10, or 11, I don't know. I think it depends on how competitive the query is. But for very competitive queries, I think that a very big number of sites deserve the number one. So why don't you use an array for number one, a different array for the number twos, a different array for the number threes, even for the top tens? Different arrays, different searches, a very dynamic search. It's the end of SEO, I think.

JOHN MUELLER: I don't think it would be the end of SEO. It would just change everything. But I think there's some really interesting ideas around that with regards to going from just those 10 blue links that we have in the search results to providing something slightly different, which might mean that maybe there are more results on a page, or maybe they're presented in a way that's very different than they are now. I don't have any good answer as to why we don't do this at the moment. But I know the teams are always working on trying out new things. And sometimes they're subtle changes. Sometimes they're really crazy changes. And these are things that are always being tested. So whenever you do a search, chances are you're within maybe 10, or 20, or 30 different experiments, where someone will be testing something with regards to the ranking maybe, with regards to the layout. Sometimes that's just like a pixel here or a slightly different color here. But sometimes those are really crazy experiments, too. So these are things that I know the team is always looking into and trying to find things out that work well. And they do lots of really crazy things in internal tests or with very limited tests. But they're always trying to improve the search results, to make sure that it kind of works really well for users. And while I can't really imagine 10 results being, like, in number one, I could definitely imagine situations where it makes sense to make that a little bit more dynamic.


JOHN MUELLER: All right. We just have a few minutes left. So I'll just open it up for questions from you all.

MIHAI APERGHIS: I'll go ahead with a question regarding our previous talk about an automotive publisher. We have a bit of an issue, so to say. We're not sure if we should implement this. Let me just send you maybe a new URL. The idea is that, as a publisher, we have a lot of articles, let's say 100,000 pages with articles. And since it's in the automotive niche, each page has a lot of photos, and each photo is on a different page. So that would skew the ratio between pages with just a photo on them, and maybe links to other photos, and the pages that actually have text content, and articles, and things like that. So maybe 75% are picture pages, where everything is just a photo and a few other photos mixed in. So we're worried that Google might think that, well, this website is mostly about photos. And we were thinking that maybe we should noindex those. But then we're worried that Google won't list the photos when people search in image search. In this niche, a lot of people go to image search, and we don't know how that would play out. We don't have a lot of traffic to these exact pages, maybe 1% or 2%. But still, it's 1% or 2%.

JOHN MUELLER: I don't know. I don't have a direct answer that I can give you to kind of follow as a guide. What I might do in a situation like this is just test these pages and see what happens. Maybe take, I don't know, 1% or 5% of your website, and you implement one variation. Implement another variation on another set of those pages, and just kind of see how things work out. And maybe you'll find one or the other works really well. Or maybe you'll find that both of them work well, and you can pick and choose based on something else.
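The kind of split test John suggests needs a stable way to pick the 1% to 5% of pages for each variant. One common approach, shown as an illustrative sketch rather than anything John prescribes, is to hash each URL so that assignment is deterministic and the same page always lands in the same group:

```python
import hashlib

def bucket(url, test_fraction=0.05):
    """Deterministically assign a URL to a test group.

    Hashes the URL and puts roughly `test_fraction` of pages into the
    variant. A sketch of the stable 1%-5% split John suggests; the
    URLs below are hypothetical.
    """
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    share = int(digest[:8], 16) / 0xFFFFFFFF  # map hash prefix to [0, 1]
    return "variant" if share < test_fraction else "control"

pages = ["https://example.com/photo/%d" % i for i in range(1000)]
variant = [p for p in pages if bucket(p) == "variant"]
print(len(variant))  # roughly 5% of the 1,000 pages end up in the variant
```

Because the split is a pure function of the URL, re-running the job never shuffles pages between groups, which keeps the before/after traffic comparison clean.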

MIHAI APERGHIS: Well, working well is-- we're not sure what to expect. For example, for a period of time-- so that's one of the articles I was talking about, and each of the links to the photos goes to a separate page. And for a period of time, all of the photo pages had a rel canonical to the main gallery page. So Google only knew about the article page and its main gallery page. But since the rel canonical was there, maybe that wasn't the best option, because Google wouldn't be able to see the photos and crawl the photos. So we removed that, and we basically let Googlebot go over all of the new pages. And we've been dropping, like, 5% per month in traffic ever since. We're kind of worried that this might be because Google now thinks the website is not so relevant to these topics as it was before, because it sees, like, a million pages with just a photo or a couple of photos on the page versus 100,000 articles, let's say.

JOHN MUELLER: I think, for the most part, we'd look at that on a per-page basis. So I think this is something you could just try out, and see which variation works better there. But it sounds like a really complicated setup. And it's really hard to kind of make a mental model of what exactly you're doing there and what the implications would be. But this seems like something where I'd try to test that as much as possible. And see if there really is a connection, like you suspect, or maybe if something completely different is playing a role there. Maybe, I don't know, users are happier with one version of these pages or not. And search engines don't really care, either way.

MIHAI APERGHIS: Well, I know there are some Google algorithms that look at the website as a whole, not just page by page. So this is why I'm worried that having 80% of the website's pages with just a photo on them might affect the other 20% quality-wise, let's say.

JOHN MUELLER: I don't know. It's really hard to say kind of offhand. Yeah.

MIHAI APERGHIS: But for example, if I noindex those pages, they're still followed, so Google would still pick up the photos. Would they still show up in image search if the page the photo is on is noindexed?

JOHN MUELLER: Probably not. But I have to go now. Someone else needs this room.
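For reference, the two setups discussed in this exchange, a canonical from each photo page to its gallery versus a noindex robots meta tag, can be written out as head tags. This helper is purely illustrative, with hypothetical URLs; which option works better is exactly what John suggests testing on a slice of the site:

```python
def photo_page_head(gallery_url, strategy):
    """Return the head tag for one of the two options Mihai describes.

    'canonical' points the photo page at its gallery page;
    'noindex' keeps the page out of the web index while still allowing
    link discovery. Illustrative sketch only; the URL is hypothetical.
    """
    if strategy == "canonical":
        return '<link rel="canonical" href="%s">' % gallery_url
    if strategy == "noindex":
        return '<meta name="robots" content="noindex, follow">'
    raise ValueError("unknown strategy: %r" % strategy)

print(photo_page_head("https://example.com/gallery", "canonical"))
```

As John notes above, a noindexed page's photos would probably not show in image search, so the trade-off between the two tags is real and worth measuring rather than assuming.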


JOHN MUELLER: It's been great having you guys here again. Thanks a lot for all of the questions. I'll try to think if I can find a better answer for you, Mihai. But I need to go through that in my head, the different options there. OK. Maybe I'll see some of you again in one of the future Hangouts, maybe later this week, maybe in two weeks again. Until then, wish you guys a good time. Bye.

AUDIENCE: Bye, John.