Reconsideration Requests

Google+ Hangouts - Office Hours - 11 September 2015



Transcript Of The Office Hours Hangout

MALE SPEAKER: I know I have missed him a few times. [INTERPOSING VOICES]

JOHN MUELLER: Welcome everyone to today's Google Webmaster Central Office Hours Hangout. We have a small audience here live in the Hangout at the moment. So if you're watching this, feel free to jump on in. Or if you want to jump in in a future one, make sure you check out the link ahead of time. I don't know, maybe we'll just start with you two. Is there anything on your mind, anything that we need to discuss, talk about, questions you might have?

MALE SPEAKER: A question that I asked in the chat was, I help manage a site where I put in the meta NOODP and [? NOOD ?] directory tags-- it's a big law firm. And for some reason, when it shows their listings, it shows their title correctly. But next to the URL, it shows the DMOZ title, the old company name. And I'm wondering if that just takes time to go away? Because it's been about six months that that's been up.
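For reference, the tags the speaker is describing are usually written along these lines; the law firm's site isn't shown, so this is a generic sketch. NOODP told search engines not to use the DMOZ (Open Directory Project) title or description for the snippet, and the garbled second directive is most likely NOYDIR, its Yahoo! Directory counterpart:

```html
<!-- Don't use the DMOZ (Open Directory Project) title/description
     as this page's search snippet: -->
<meta name="robots" content="noodp">
<!-- Likely the second, garbled directive: don't use the Yahoo! Directory
     title. This one is read by Yahoo's crawler (Slurp), not by Google: -->
<meta name="slurp" content="noydir">
```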

JOHN MUELLER: That should be a lot faster than six months now. So you use the robots META tag NOODP on that [INAUDIBLE]?


JOHN MUELLER: It might be interesting to send me that example. Is that a question somewhere?

MALE SPEAKER: I could put it right here.

JOHN MUELLER: OK. Or you could put it in the chat, and then I can take a look at that afterwards and see what's happening there. Otherwise, let's just kind of go through the questions here. If there is anything that pops up that you want to comment on or ask for more details on, feel free to jump on in. Let's start at the top, see where we go.

Will interlinking pages using dynamic URLs with tracking parameters negatively affect our existing rankings? Though the canonical tag is implemented on our tracking URLs, should we use only original URLs for interlinking pages in the site?

So I guess, within the website, I'd try to use [INAUDIBLE] URLs, and make sure that you're using the ones you do want to have as a canonical. That's something we do look at. But if external people are linking to pages within your website, and they have tracking parameters, or maybe an affiliate parameter in there, and you use a rel="canonical" to kind of clean that up on your side, then that's perfectly fine. I just wouldn't use that within the website. So if someone goes to your website, and you link to your page on shoes, and you have this weird tracking parameter attached, then that makes it really hard for us to confirm that this is really the URL that you actually do want to have. We do take a look at the canonical, but we also take a look at how you link within your website. And if those don't align, then we don't really know which version to actually pick. Another thing to keep in mind here is that if the content on these pages is really identical-- so you have one URL with a tracking parameter, one URL just clean-- then, generally speaking, what will just happen is we'll switch from one URL to the other. It won't affect the rankings. It will just affect which URL is shown in search.

We're redoing our drop down navigation on our e-commerce site.
Can you offer any advice on best practices, and do we need to include all categories to maximize ranking potential?

I'd make sure that people can crawl-- go through your website through the navigation-- that Googlebot can kind of crawl through your site with the navigation and find the individual parts of your site. That doesn't mean that all pages have to be interlinked with all other pages. That usually doesn't make much sense. So I'd try to focus on what works for the user, and generally speaking, if the user can kind of click through your pages to get to your content, Googlebot will be able to do that as well. And we'll be able to understand the context of your individual pages a little bit better, because we see how they're interlinked. So maybe there's one detail page that has a link from this category page, and that category has a higher level category page, and all of that gives us a little bit more context about how this page belongs within your website.

We are a tool hire website, and in Google Search Console, the significance column is full, a solid blue line, for the keyword hire, and 40% full for the word tool. We can't seem to rank for this term, and yet everything else seems OK. Can you please check?

So this is within the-- what is this called? The keyword-- I forgot the name-- in Search Console, the keyword tool, where you see how important we think these words are on your pages. And that tool is based on crawling. It's not based on what we think is relevant for your site for search, for ranking. It's really just that, when we crawled your website, we found this word so often and this word so often. And that's kind of what you're seeing there. So that's something where I've used that more as a way to kind of double check that we can find any of your keywords, and to double check you're not hosting any kind of hacked content.
So if you find keywords that are totally off topic in there, that's probably a sign that maybe there's some parasite hosting going on, where someone hacked your site and is hosting something else. So if your site is about tool hire, and you suddenly see pharmaceutical keywords there, that's a pretty good sign that something is kind of off there. On the other hand, if you go into that tool and you see some keywords for tool and some keywords for hire, then that's a sign that you kind of have that content on your pages. We can find that through crawling, and the ranking part is essentially completely separate from that. So with just that information that you have those keywords on a page or on your site, it's really hard for me to give advice on which direction you should be heading to rank better for those keywords. So what I'd recommend there is maybe going to the webmaster help forums, maybe going to another webmaster community where you feel comfortable, and just asking for advice there, and getting input from other people who have gone through this process as well. And maybe there are simple things on your site that you could do to make it clear that your site is about this topic. Maybe it's just a matter of building out your site, and making it something that people really love, and that they come back to all the time, that they recommend to other people.

Some time ago, our SEO agency got us to remove our internal links from our articles on our e-commerce site, as Google can see this as over-optimization. From what I've read, this might not be the case. Can you clarify Google's position on this?

So in general, I don't see any problem with internal links from articles on an e-commerce site. Sometimes, if you're an expert on a topic and you're selling products that kind of belong to that topic, then maybe you'll write some articles about this topic as well, and give more insight on why you chose those products to sell, or the pros and cons of different variations of those products. And that's useful content to have. And that's something that sometimes does make sense to link to your product pages or to the rest of your site. So that's not something I'd see as being overly problematic, if this is done in a reasonable way-- in that you're not linking every keyword to a different page on your site, but rather saying, well, this product is important here, this product is important here, this is something we offer, this is something maybe someone else offers, or this is a link to the manufacturer directly. Then that's useful information to have. That's not something I'd remove.

Is it OK to run different HTML and different content on different mobile browsers, even though the URL is the same? For example, the HTML on Safari is different from the HTML on Chrome.

So generally speaking, I'd try to minimize this kind of differentiation for individual browsers. It's important for us that the primary content of these pages is equivalent, and past that, it's essentially kind of up to you how you want to handle that. So we see that a lot with a desktop page and a mobile page. And maybe the mobile version of the page is structured completely differently, but the primary content, the reason for that page, is still the same. And that's kind of the important part for us there. The tricky part you might run into there is with regards to Googlebot. It's a lot harder to kind of test what Googlebot actually sees, or what an individual user sees, if every browser sees a slightly different variation of your content. So the maintenance, the testing, that's a lot harder.
Also, caching can be tricky here. If you're serving a different version of the HTML for all different browser types, then that essentially means that no cache on the network side between your site and the user can actually cache this content completely, because everyone sees something different. So there are some technical things that you need to watch out for with regards to making sure that it's actually usable. But in general, it's fine from our point of view. We call this dynamic serving for mobile stuff. We have a Help Center article, or a developer site article, on dynamic serving that gives you more information on what you would need to do on your server to let us know that this is dynamic serving.

If we then use JavaScript on pages we've 301'd to make them look more localized, by superimposing localized text over the h1, h2 tags, is this classified as cloaking?

It's hard to say what that would kind of look like in the end. So in general, you can use JavaScript to personalize your pages, and if this is a way of personalization that you think makes sense for your users, then by all means do that. The thing to keep in mind is that Googlebot does process JavaScript as well, and will see what you're showing with JavaScript there, as well. So if, for example, you show something specific for users in Mountain View, California, or wherever Googlebot crawls from, then it's possible that Googlebot will index those pages like that. So maybe your home page suddenly says, welcome users from California. And if someone searches on the internet, and they're not based in California, they'll still see that snippet. So that's something to kind of keep in mind, in that just because it's using JavaScript doesn't necessarily mean that Googlebot won't be able to see that, and won't index it like that.

Is there any way to check the number of app screens indexed in Google if we've been using app indexing and linking to the website?

At the moment, there isn't a direct way to do that.
You can do that with the search analytics information in Search Console. So you can see which individual pages were shown in the search results, and from that kind of extract that so many unique pages were shown in search, therefore these pages were probably already indexed, or they were at least indexed at that time. But we don't have anything like an index status chart for apps specifically, like we have for websites.

When are Google Home Services coming to the UK?

So I don't know what Google Home Services are, if that's something new that launched recently. I'm not really sure. In general, what happens with international launches is we try something new out. We see how it works, and if we see that it works really well, then we'll try to spread that out across other locations. Depending on how easily that's doable, it'll go a little bit faster, or it will take more time. So some of the projects that we work on have maybe legal issues around them, or policy issues around them, where we can't easily take something that we're doing in one country and apply it to all other countries. So that's something where, if this is a service that maybe got launched in the US, or maybe some other country, then we'll probably try it out for a while to see how it actually works. And if we see that it works really well, we'll probably try to spread that out into the countries where we think it makes sense. Sometimes that's faster, and sometimes that's slower. There's no absolute answer for that.

Do we need to add schema or breadcrumb code on the last label that's non-applicable in the breadcrumb structure?

I took a quick look at the documentation for the breadcrumb list, and they do mark up the last node. So from that point of view, that probably makes sense. You can also try this out on individual pages and see what you find in search, and how that matches what you'd like to have seen in search.
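The breadcrumb documentation John mentions uses schema.org BreadcrumbList markup along these lines; the URLs and names here are placeholders, and, as he notes, the last crumb is marked up as its own ListItem:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "item": { "@id": "https://example.com/tools", "name": "Tools" }
    },
    {
      "@type": "ListItem",
      "position": 2,
      "item": { "@id": "https://example.com/tools/drills", "name": "Drills" }
    }
  ]
}
</script>
```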
Since this is kind of a snippet leading to your pages, it's not something that would affect its ranking.

We're classifying sites and have a few thousand pages, which generally show two or three listings, while others show zero listings at other times. Reason being, there are few listings in small cities. Will those pages with three or four listings get counted as low quality pages?

So in general, if you can recognize that you're showing a search results page with zero results, then that's something I'd recommend blocking from being indexed. So put a noindex on there, if possible, so that we can crawl that, recognize that there are zero search results on this page, and not index it. Because that's a really bad user experience. If someone is searching for maybe a home in a specific city, and they click on your result, and they land on your site, and it says, oh, we don't actually have any of this, then that's a really bad user experience. And that's something that we do try to recognize to some extent. But if you can help us by letting us know that this is something that doesn't have any useful content on it, that you shouldn't really index it, then just put a noindex on there so that we don't even try to show it to users.
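The noindex John describes for zero-result pages is a robots meta tag; note that, as he says, the page has to stay crawlable (not blocked in robots.txt) so that Googlebot can actually see the tag:

```html
<!-- On an internal search or listing page that currently has no results: -->
<meta name="robots" content="noindex">
```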

MALE SPEAKER: I have a question on that. Does that also go for tags? Because on our-- on my site, we've been trying everything. So one of the things that we tried was to make sure that there were no thin pages, where a tag may be used only once or twice, meaning they would have the big page with two little articles on there. So we just did away with tags completely to make sure that we didn't have any thin content, and all of our pages had lots of content. Is that something you recommend when you're only using a couple of tags here and there?

JOHN MUELLER: Sure. You can do that. Yeah. I think that's something you, as a webmaster, can kind of make that decision on. And if you know you have other pages that really have a lot of content on them, and some of those pages, like the tag pages, you recognize don't have that much content on them, or aren't that useful, then you can definitely say, well, I will set a threshold somewhere in my CMS that if I have fewer than five articles, maybe I'll show a noindex. If I have more, maybe I'll keep that indexable. That's really up to you. I think, from our point of view, our quality algorithms do look at the website overall. So they look at everything that's indexed. And if we see that a bulk of the indexed content is actually lower quality content, then we might say, well, maybe the site overall is kind of lower quality. And if you can tell us that this lower quality content shouldn't be indexed, shouldn't be taken into account, then we can really focus on the high quality stuff that you are letting us index.


JOHN MUELLER: We have pages that aren't linked from the site, yet they're indexed in Google. Is it possible that Google got those pages from a browser, or does Google just index pages opened in Chrome?

I don't think we take that into account at all. So in general, when we see pages that aren't linked within a website, and they end up being indexed, then usually they're linked in some other way. So we don't go out and invent URLs to try out. We don't look at things like analytics to see what people were clicking on and try those out as well. That's something where we probably found a link to that somewhere. It might be that maybe there was a broken link within your website, maybe there was a broken link on an external website. Maybe this link was posted in a forum somewhere, and someone corrected that link afterwards, but we kind of picked it up at that time. All of these are different ways for us to actually pick up URLs that are on your website and try to crawl and index them. And if there is something that you don't want to have indexed at all, then I wouldn't recommend relying on just not linking it from the website, because sooner or later, someone's going to stumble across it. And if they post about it in a forum, or somewhere else, then maybe we'll find that as well, and we'll try to crawl and index it. So if it's confidential information, make sure it's password protected properly. If it's information you just don't want indexed at all, make sure it has a noindex on it. If it's information that's problematic if Google were to crawl it, for example if it causes a high server load, then maybe block it in the robots.txt so that we don't even try to crawl it. But I would definitely not rely on something not being linked to keep it from being indexed.

Is the user agent for smartphone and desktop different? Yes.
We have a Help Center page on the different Googlebot user agents, and we have the normal desktop user agent, Googlebot version 2.1, or whatever it is. And we have the mobile, the smartphone version, which has recently been updated to, I think, the new iPhone 6. So Googlebot got a new iPhone. And we have the older feature phone versions, which are different depending on the device, because sometimes the content is very different depending on the type of device. So I'd check out the Help Center article, and take a look at the different user agents there.

Is Google considering structured data a ranking signal?

In general, no. But I guess the "in general" leaves some room open there, in the sense that we try to use structured data as a way to understand more about a page, and we try to use that for rich snippets, for example. And rich snippets themselves aren't a ranking factor. But sometimes that leads to a more visible search result, and if people tend to click on your search result more, even though it's ranked lower, then that's good for your site anyway, because you're getting this traffic based on that kind of nicer-looking snippet, or that snippet that has more information about what you're offering, at least. So that's something where it wouldn't necessarily rank your site higher, but it might attract more visitors, because they recognize the type of content that you have there. On the other hand, what we do see is that when sites go off and implement structured data, they tend to use a very clean HTML format-- go from maybe a messy HTML template that they had maybe five or 10 years ago to a really clean HTML5 template, or something like that. And it's not necessarily that moving to a cleaner layout helps us. But it does make it a lot easier for us to understand the context of the text that you're providing on your pages.
So the structured data helps us a little bit to understand, well, you're talking about golf the sport, not the car, or jaguar the animal, not the car. And these kinds of things help us to better understand those pages. And I think, over time, it's something that might flow into the rankings as well, where if we can recognize someone is looking for a car, then we see, oh, well, we have these pages that are marked up with structured data for the car, so probably they're pretty useful in that regard. We don't have to guess if this page is about the car or the animal. So I think, in the long run, it definitely makes sense to use structured data where you see that as being reasonable on your website. But I wouldn't assume that using structured data markup will make your site jump up in rankings automatically. So we try to distinguish between a site that's just done technically well, and a site that actually has good content, because the user doesn't really understand the difference in the end, and they just want to find something that's relevant for them. And just because it's done technically well doesn't mean that it's more relevant to a user than something that's done not so well.

Despite adding the improved sitelinks search box code to my site, Google has not started showing the search box on brand terms. What could be the possible reason?

So adding the markup doesn't mean that we'll show the search box. It does help us to understand that, when we show the search box, we should use the version that you're specifying in your markup there. So it's not that it will kind of trigger the search box for your site, but rather, when we show it, we'll use the version that you have specified in your markup. So in that regard-- kind of similar to the previous question on structured data-- just adding this markup doesn't necessarily mean that we'll change the way we rank your site, or that we'd show this if we didn't think that it was reasonable for users.
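The sitelinks search box markup in question is a schema.org WebSite/SearchAction block roughly like the following; example.com and the search URL pattern are placeholders. As John notes, it tells Google which internal search URL to use if it decides to show the box, rather than triggering the box itself:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```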

MALE SPEAKER: There was an article that said that, regardless of whether you have it or not, if your site is a very high traffic site, it will probably show automatically. Is there any truth to that?

JOHN MUELLER: I wouldn't necessarily say that it has to be a high traffic site. But if we can recognize, for example, that people are trying to find something specific within your website, then that probably makes sense. So if someone is searching for, I don't know, NASA, chances are they're not looking for the NASA home page, but rather something specific within the NASA site that they want to find. So in a case like that, it would probably make sense to show this site search box.

For how many months or years should we keep a 301 redirect? Or can we remove them after some time? Will removing the rules have a negative impact? Will Google start showing the old URLs instead?

Theoretically, a 301 redirect is a permanent redirect. So theoretically you could keep that forever. Practically, it's probably not that reasonable to keep something like that forever. At some point, if you're moving domains, and you let go of the old domain, then that kind of falls away. But in practice what happens is, if we recognize that this is a permanent redirect, we'll try to keep that in mind for the future as well. So if you've moved your site, and we've been able to kind of recognize that your site has moved, which might take maybe half a year, maybe a year or so, then at some point you can kind of take that redirect out. The thing to keep in mind there is, if there are still links to the old versions of the URLs out there-- external links, maybe bigger links to those old versions-- then chances are we might still show the old version as well, if you remove the 301 redirect. So that's something where, if you're doing a site move, or if you're changing your URLs, you really need to make sure to follow the guide that we have in our Help Center, which also includes reaching out and making sure that everyone's updating their links to the new version, so that these old links don't end up getting lost in the end.
Because otherwise someone will click on this old link. You're not redirecting anymore, and they either land on a 404 page, or maybe on a parked domain that someone has picked up. And that's really a bad user experience, and that's something you probably want to avoid as well, because you'd like to have that link associated with your actual content, not with a 404 page or a parked domain from someone else. So I guess, going back to this question, how long should it be? I'd definitely aim for at least a year. If possible, I'd try to keep that longer as well, depending on what's reasonable in your situation.

Isn't it Google's practice to add a noindex tag to paginated pages? Is it OK to use a noindex tag on paginated pages and also use rel="prev" and "next" on those pages?

So, kind of similar to the tag pages in the beginning, and the empty search results, it's really kind of up to you what you want to have indexed and not. And if you feel that these paginated pages do provide a lot of value for users, then maybe it's worth keeping them indexed. If you feel that they don't provide a lot of value to users, then noindex is probably fine. If you do put a noindex on them, then the rel="prev" and rel="next" doesn't really make sense in that case. So I'd probably choose between either a noindex, or using rel="prev" and rel="next", and try to kind of make that decision based on that.

My website is all good with Google, or so I thought, all good in search engines, and then when I tested it on the mobile friendly test, it comes up as it couldn't fetch the URL. Also, when I go to my site here on mobile, it's all good. There are no problems.

So this-- I don't know-- it's hard to say, because it could be different things. The mobile friendly test is using our smartphone Googlebot to access your pages and see if we can get to the mobile-friendly content. And in some situations that's not directly possible.
For example, if your site, if your server is kind of overloaded, and we're kind of running at the limits where we think, if we request more pages from the server, the server won't be able to handle that really well-- if it's kind of a slow server, if it's overloaded a lot, if it sends a lot of server errors-- then we might back off and say, well, someone is requesting this page to be checked with Googlebot, but we know that the server tends to be kind of at its limit, so maybe we won't actually fetch this page, and we'll show an error instead of causing problems on the server. So that might be one option of what's happening there. Another thing that might be happening is maybe something on your site is blocking smartphone Googlebot. And that's something that you probably want to figure out and resolve as well. So if this is your website, you can certainly use the Search Console Fetch as Google tool, where you can also choose between the normal Googlebot and the smartphone Googlebot, to kind of double check what exactly is happening on your site when those pages are fetched.

Does Google submit forms?

So we do that in some very specific situations, where we think that this looks like a search form and we're not getting all of the content that we suspect is available on this website. Then, in those kinds of situations, maybe we'll try to plug in some keywords from the existing content and see if we can find links to more pages. But that's a really rare situation. It's usually not the case that we'd go to a site and say, oh, there's a form here. Let me enter some gibberish and click submit, and see what happens. Because in practice, those forms don't lead us to any new or interesting content. Search forms sometimes do lead us to content that we can't otherwise find, especially if the site doesn't have any navigation at all, if it doesn't have a sitemap file.
If we see that, essentially, maybe the homepage is just a search form, and we suspect that there's a lot of really good content on the site, but we can't actually reach it through crawling, then that's one situation where we might say, well, we know the site is about this topic, we'll try some of those keywords, and see where we find more links to the actual content on the site.

Is the iOS app indexing content available in India?

I'd currently hold off a bit on the iOS app indexing side, because I know things are still in the works there. And with the new Apple iOS 9 stuff coming up, I suspect maybe some of that will change slightly. So that's something where it's not that it's not available in India, but I suspect it would be more efficient to wait a bit and see how that settles out, and then implement the final recommendations, rather than to run off and try to implement it as it is now, and have to change that when one of those changes over time.

Will numbers and dates be considered as user generated content, or do only naturally written text sentences count as user generated content?

All of that can be user generated content. That's really up to you. I mean, you're the webmaster, and you know where your content is coming from. So it's not that we would artificially classify something as user generated content and devalue that in any way. It's just essentially content on your website, and where it comes from is essentially up to you.

Search Console crawl errors show 200 404s due to changing URL structure. I've written a 301 rule in [INAUDIBLE] access. Should I fetch all pages and submit to index, or is it enough to mark them as fixed?

So marking them as fixed does absolutely nothing on our side. It's only on your side, in the user interface, that it removes those errors from being shown. In general, we do recrawl these pages automatically. There's nothing you need to do to have them recrawled.
But you can use the submit to index feature to let us know about this change a little bit faster. You can also put them in a sitemap file, and say this is the new change date. And then we'll pick that up and see, well, these URLs all changed recently. We didn't know about that. Maybe we should check them out again. So submitting to index is one way to make it a little bit faster. Sitemaps is another way. But in the worst case, if you've just fixed this, you could also just leave it like that, if it's a lot of URLs and you can't do them manually.

My client has around 100,000 backlinks from 21 low quality spamming websites. After uploading the disavow file, the index status information has reduced dramatically. Is this normal, or should I upload another disavow file with pure domains?

So the index status information is independent of the disavow file. So if you've disavowed those links, that's something that will just take time to be processed. It's not something where you would see an immediate change in any part of Search Console. So that's something where you're probably seeing two changes happening at a similar time, but they're not actually connected. With regards to index status, there are two changes that we recently had here. On the one hand, we had a slight glitch on our side where, for one day, I think the numbers kind of dropped. And on the other hand, we did change something on our side to try to count the more relevant pages. And for some sites, that does result in a visible change in the numbers that we show there. It's not, however, a change that affects what we actually show in search. So that's kind of a way to try to show more accurate counts in Search Console. I suspect the index status counts will change at some point in the future again, when we make additional changes to try to bring more accurate counts into that.

We're currently struggling with our clients, and could you give me some pointers in the right direction?
It's really hard to say what could be done with a website on the fly in these live hangouts. What I'd recommend there is maybe going to the help forums and getting advice from peers, to see if there's anything specific that they see on that site that could be improved there, or also, where you can explain a little bit better what specifically you're having trouble with, where you're seeing problems in the search results. And that can help people to point-- [INAUDIBLE]

JOHN MUELLER: Oh, well. Somehow my Mac decided to reboot. Always great timing. Let's see if we can just get things set up again. Looks like it's still live. So maybe we're lucky. Let's see.

Using Majestic historical data, I've disavowed bad links that shouldn't be live anymore. This had a positive effect for a couple of clients. Would you recognize this as a good practice? Why is Google still counting links from websites that are 404?

So if you're disavowing links that don't actually exist anymore, that shouldn't have any effect at all on search, because we take the disavow file into account when we crawl that page, and if that page doesn't exist anymore, then we can't really use that link anymore. So those links drop out naturally. So if you disavow a link from a page that doesn't exist anymore, then that essentially doesn't change anything. We would crawl that page, we would see that it doesn't exist anymore, we would drop that link. Then the disavow doesn't really play a role. So just because a page doesn't exist anymore doesn't mean that you need to do anything special with regards to the disavow.

When I checked the cached copy of my [INAUDIBLE] site, the URL in the cached copy shown is a different site. Why is Google showing a different site?

So I took a look at this, and this URL leads to this somewhat general PDF file. And if we recognize that the same PDF file is hosted somewhere else, we might choose that other location as well to show in search. What you can do there, if this is your PDF, is to serve a rel="canonical" header, a rel="canonical" HTTP link header, and then we'll try to take that into account when we pick which URL to show there. In practice, it probably doesn't make much difference in a case like this, because this is a PDF, and if people go to this PDF, it doesn't matter where it's currently hosted.

The h1 tag size is controlled through CSS, versus an h1 tag without CSS. Which one is better and more helpful in ranking?
So from a rankings point of view, this isn't really a question from our side. It's not something that would affect rankings. If you're using an h1 tag to specify headings, that's perfectly fine. But it doesn't need CSS. You can use CSS if you want, but it's not that we would say, well, this is styled with CSS, therefore it's even stronger than a normal heading. We really just pick that up as a heading, and we take that into account for rankings. Do we need to add an alternate tag for Android app indexing on both desktop and mobile for a dynamic serving site? If not, then let me know which version we have to add the alternate tag to. So in general, on your mobile site, you have a rel="canonical" pointing to your desktop site, so we can focus on the desktop version of your site. So that's the version I'd put the alternate tag on for app indexing. So pick the canonical for that content that you want to associate with your app page, and use that as the one to kind of pivot from for app pages. Is there a schema to show the number of property in Google search results pages? I don't know exactly what you mean with number of property, and I don't think we have anything specific like that. So for the things we show in the search results pages, the rich snippets, that's something that we have specified in the Webmaster Help Center. So I'd double check what we have there and see which of these aspects make sense for your website, and see if you can implement that. The thing to keep in mind is there are lots and lots of ways that you can add structured markup to your pages using schema.org or other setups, and not all of them are actually used by Google to be shown as rich snippets in the search results. So that's something, if you have a limited amount of time, and you want to add something to your site that's visible, then I'd focus on that which is shown with the rich snippets, for example, or which is used for other search features, and focus on that kind of markup first.
Is it a good practice to add an alternate tag for app indexing on noindex, nofollow pages? It doesn't have any effect in a case like that. Because-- so kind of taking a step back-- with app indexing, you're taking an Android app, and specific parts and pages of the app, and saying this page in my app is equivalent to this page on my website. And we kind of need that connection between the website and the app page to know how we should rank this app page in the search results. So if you're saying the website doesn't actually exist, you just have an app page for this individual item, or the web page has a noindex on it, for example, then that's something where that connection doesn't really work for us, because if we don't show the website in search, then we won't show that app page in search. So you're kind of doing something that doesn't really have an effect in the search results. It's not that it would cause any problems. It just doesn't kind of give you any kind of advantage by actually doing that. Code shown in Fetch As Google versus a cached text page, which one is better to conclude if our content and links are being read by Google? Both don't show the same stuff. So the cached version generally shows the HTML version of the page. So if your server is doing a lot of stuff with JavaScript, or doing something else kind of fancy on the side, then that's something which might not be reflected in the cached version of your page, but which we do take into account when you render a page with Fetch As Google, and when we render a page for indexing. So for indexing, we focus more on the version shown for Fetch As Google and the rendered view there. For the cached page, there are certain reasons why we would show the HTML, the raw HTML that we picked up as being the cached page, and not try to render it in a specific way.
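The website-to-app connection described above is expressed with a link rel="alternate" tag on the canonical web page; a hedged sketch (the package name, scheme, and paths are made-up examples):

```html
<!-- On the canonical (desktop) web page: point to the equivalent
     screen inside the Android app. All names here are hypothetical. -->
<link rel="alternate"
      href="android-app://com.example.shop/https/www.example.com/shoes" />
```

The android-app:// URI encodes the app's package name, the scheme, and the host path of the deep link, so Google can pair that app screen with the indexed web page.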
As we know, implementing schema doesn't change the look and feel of a web page for the user, so can we implement schema markup on web pages only for bots, but not visible to users, so that the page load time doesn't increase for users? I wouldn't assume that all markup is never visible to users like that. So that's, I think, one aspect where we generally recommend not cloaking to Google, because on the one hand that makes it really hard to diagnose what's actually happening. On the other hand, that could theoretically cause problems with the webspam team. So if at all possible, I'd really recommend making sure that your pages show the same markup to users as they would show to Googlebot. Can we use schema.org on residential projects like apartments, flats? I suspect that's probably not the best use of schema.org. But what I'd do there is-- there's a Google+ community on structured data markup, and maybe ask there, or maybe ask in the webmaster help forum to get some insights from other people around how they use this specific markup. Can we add the HTTP vary header only on mobile optimized pages, or is it required to add it on both the desktop pages and the mobile pages? So the vary header tells us that this content is different depending on the user agent, or depending on the user that's trying to access it, which means on one URL it might be that we see the normal content with the desktop Googlebot, but we see the mobile content when accessing with the mobile Googlebot. And the vary header tells us that we shouldn't just crawl this page once and say, this is the version for this URL. So if you're doing this kind of dynamic serving, as we call it, where you're showing different content depending on the user agent, then you definitely need to use the vary header on both versions. So on this URL, regardless of which version you're showing, use a vary header so that we know that it's different depending on the user [INAUDIBLE].
On the other hand, if you have separate mobile and desktop URLs, then you don't need to use a vary header at all, because these pages are separate, and every time someone looks at the desktop page, they see the desktop page. Every time someone looks at the mobile page, they see the mobile page. It's not that this content varies. So from that point of view, if it's separate URLs, you probably don't need the vary header. If it's the same URL showing desktop and mobile content, then using the vary header for both versions definitely makes sense. Is it required to open CSS and JavaScript for Googlebot? Required is a strong word, I guess. From our point of view, it helps us a lot to be able to understand the page a lot better if we can render it the way that a user would see it. So on the one hand, it helps us to see, for example, this page is mobile friendly, because we look at the CSS and JavaScript, and we can tell it fits on a mobile browser. It doesn't do anything crazy to kind of overflow the mobile browser. It's not a desktop page that doesn't work on a mobile browser. It's actually really mobile friendly. So if we can't look at the CSS and JavaScript, we can't really be certain about that. On the other hand, we see more and more sites that use JavaScript frameworks where they pull in content using JavaScript, and if we can't access that JavaScript, or if we can't access the APIs, the server responses, those kinds of things, then we can't pull in that content for indexing, either. So you might be pulling in a lot of really fantastic content from your server. With JavaScript, you do that in a fancy way that works well for users. But if we can't process the JavaScript, we basically have to focus on the HTML page, and maybe there's not enough content on that HTML page to actually get the full context for your site to understand how it should be ranking.
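For the dynamic serving case above, the point is simply that every response on that one URL, whether it contains the desktop or the mobile HTML, carries the same header; a minimal sketch of such a response:

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Vary: User-Agent

<!-- desktop or mobile HTML, chosen server-side from the user agent -->
```

`Vary: User-Agent` tells crawlers and caches that the body of this URL differs by user agent, so a single fetch can't be assumed to represent all versions.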
So it's certainly not a requirement that we be able to see all of this content, but it does help us a lot to really understand how we should be ranking this page, where we should be showing it, which content we should associate with it, so that we show it appropriately in the search results. Does schema play a major role in ranking, or is it just a signal for Google and other search engines to understand your content and provide the best search results possible at this time? Like I mentioned before, it's not a direct ranking factor. It's not that we would say, this page uses markup, therefore it must be good. But it does help us to understand the context a lot better, and to understand how things fit in, what these words on this page actually mean. So one example there is, if you have a restaurant, and you have opening hours on your web page, and they're just in a random HTML table somewhere on your web page, then that's sometimes really hard for us to figure out what that actually means. Does that mean this restaurant is always open then? Is this table something that can easily be picked up, where we see weekdays and times, and things like that on it? And if you were to use the structured data markup for opening hours, then we would be able to really see a clear answer there, where we look at this page once. We see your opening hours. We understand, OK, this is a type of business. This is a restaurant. Here are the opening hours. When someone is searching for an Italian restaurant in this location, and we know that your business is open, then we can guide them to your business really easily, because we know that your business has these attributes, and we can kind of trust that they're useful. So that does help us a lot there, but it's not the case that just adding schema or adding structured data to your site automatically makes it rank higher.
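The restaurant opening-hours example above could be marked up roughly like this in JSON-LD (the restaurant name and hours are invented for illustration):

```html
<!-- JSON-LD sketch of the opening-hours example. The name and hours
     are made up; openingHours is inherited from schema.org's
     LocalBusiness type, which Restaurant extends. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Trattoria Esempio",
  "servesCuisine": "Italian",
  "openingHours": ["Mo-Fr 11:00-22:00", "Sa-Su 12:00-23:00"]
}
</script>
```

With this in place, the days, times, and business type are unambiguous to a parser, rather than inferred from an arbitrary HTML table.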
Google shows the title and description tag of duplicate pages, but the URL and content being cached are of the corresponding original page. The duplicate content contains a canonical tag pointing to the original page. The original page gets 302 redirected to the duplicate on an IP basis. Whoa, this is a complicated setup. So I'd probably recommend posting in the webmaster help forum for something like that, just so that you can explain the situation a little bit better, and maybe provide some examples, either on your site, where you're doing this, or on someone else's site, if you think that maybe this might be an option for your site, and you'd like to have kind of a recommendation to do it like this or do it like that. Giving some more context around this question probably makes it a lot easier to answer. At the moment, I'm not really sure what exactly the situation is, and where these titles and descriptions might be coming from. If one doc has been cached in Google, and in any case I wanted to delete that doc from Google search completely, how could I do that? So ideally you would remove that page from the internet first, so from the server where you put that. And once you've done that, you can use the URL removal tools and say, well, this is a URL that I removed, please take that into account, and then within a day or so, we should be able to just remove that for you. So especially if it's been removed from the internet, then that's something that's a pretty straightforward process. The URL removal tool will definitely help you to get that done. If you've just changed the content on that page, then you can also use the URL removal tool to let us know that this specific word was removed, or this specific phrase was removed, and we will double check if that's the case, and then try to show that appropriately in the search results.
On the other hand, if this page still exists on the internet, and it's still a public page, then it's really hard for us to say, well, this person wants that page removed, and this other person actually put that content up on the internet. What should we do there? It's kind of a tricky situation. So in a case like that, I'd really try to work together with the person who actually put the content up and try to see if that can be removed, so that you can do the normal removal requests for that. We just have a few minutes left. Maybe I'll just open it up for discussions from you all.

KREASON GOVENDER: Hi, John. Is there anything specific that we can do to rank for certain countries? Like, for example, in one country we're ranking on the first page, and in AU, we're ranking way down on the third page. So is there any way that we can specifically facilitate that ranking in AU?

JOHN MUELLER: You can definitely use geo targeting. I don't know if you have looked into that.

KREASON GOVENDER: Yes, we have. But from my understanding, geo targeting is just for one location that you can target, or can you add multiple locations?

JOHN MUELLER: So geo targeting is for a single country, where you can say my website, or this part of my website, is specific for users in this country. And then if someone is searching for something local in that country, we'll understand, well, this is something that's locally relevant. We'll try to show that to the user there. But it's not the case that you can say, well, this content is relevant in these countries, and it should be seen as local content in all of these countries, because these are essentially different countries. You're saying it's locally relevant, but it's also like everywhere the same. So that's kind of conflicting there. The thing to keep in mind with different countries is there is a different competition aspect there, as well, where maybe you're working really well globally. Maybe in individual countries there's a really strong competitor that is kind of resulting in this change, and just because you're very good globally doesn't necessarily mean you'll be good in all individual countries. So for example, if you take a really strong brand-- I don't know, maybe Coca Cola, something like that-- which is really popular in the US, really popular in some countries, it might not be the most popular choice for a soft drink in all countries. So you might see individual countries having different rankings there.

KREASON GOVENDER: So can we use geo targeting for landing pages?

JOHN MUELLER: For landing pages, sure. I mean, for geo targeting, you would need to use a generic top-level domain, and then you could say, well, in this subdirectory I have content that's specific to this country. And how much content you have in that subdirectory is essentially up to you. The minimum, I guess, would be like one page. But it could be that you have everything there, or you say, well, users from this country go to this part of my site, and they have all the local pricing and, like, local delivery options there. That's another option there, as well. Another thing you could do, if we're showing the wrong version in the search results, is to use the hreflang markup. So if, for example, in Australia, we show the version of your page that's actually meant for the UK, then with the hreflang markup, you can let us know that these two versions are equivalent. This one is for Australia, this one is for the UK. So if a user in Australia would otherwise see my UK page, we'd swap that out and just show the Australian page instead. But that wouldn't change rankings. That would just kind of try to show the most relevant version that you have of this set of pages.
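The hreflang setup John describes might be sketched like this, with each country version cross-referencing the others (the URLs and directory layout are hypothetical):

```html
<!-- In the <head> of both the AU and the UK page; each version lists
     all equivalents, including itself. URLs are made-up examples. -->
<link rel="alternate" hreflang="en-AU" href="https://www.example.com/au/" />
<link rel="alternate" hreflang="en-GB" href="https://www.example.com/uk/" />
```

This only tells Google which equivalent version to show to which audience; as noted above, it doesn't change where the set of pages ranks.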


JOHN MUELLER: All right. More questions from you guys? Anything? Nothing? I see a question on how to add-- a question to the Hangout Q&A. So on the Events page, there's a Q&A link where you can add more questions there. I don't think I'll have time for this one. But I'll set up the next Hangout later today, and you can definitely add the questions there. All right. Otherwise, let's take a break here, and I'll try to figure out why my laptop rebooted in between. But thank you all for coming. I wish you all a great weekend, and maybe we'll see you again in one of the future Hangouts.

MALE SPEAKER: Thank you, John.

MALE SPEAKER: Thanks, John. Take care.

JOHN MUELLER: Bye, y'all.

KREASON GOVENDER: Bye.