Reconsideration Requests

Google+ Hangouts - Office Hours - 26 January 2016



Transcript Of The Office Hours Hangout

JOHN MUELLER: OK, welcome everyone to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller, and I'm a webmaster trends analyst here at Google in Switzerland. And part of what we do is talk with webmasters and publishers like the folks here in the Hangout, the people who have submitted lots of questions, those in the forums, those on Twitter, Google+, wherever. We've gotten a bunch of questions this time. I tried to group them together. But maybe, as always, for those of you who are new here in the Hangout who haven't been coming to these regularly, is there anything on your mind that you'd like to get answered right before we start?

LYLE ROMER: Hi, John. I just had a quick question regarding mobile usability and responsive design. And basically, how narrow does your responsive design have to work? Because obviously, at some point there is a practical limit. And some tools not related to Google's tools told us that we're not responsive. And we know if we go really, really narrow, things start to fall apart. So I just wanted to know how narrow does it actually need to work?

JOHN MUELLER: From our point of view, when we're looking at mobile content, we try to recognize that it's mobile friendly regardless of whether you're using responsive design, dynamic serving, or separate URLs for mobile. We essentially use an iPhone-like browser. So I think the iPhone version is iPhone 5, I'm not completely sure. So basically the last generation iPhone, that's kind of the width that you'd be aiming for, for the mobile-friendly badge. If you want to go past that and do something for even smaller devices, that's totally up to you. And depending on your audience, maybe that makes sense, maybe it doesn't make sense.

LYLE ROMER: Great, thanks.

JOHN MUELLER: All right. So no more questions to start with, I'll run through what was submitted maybe a couple of hours ago when I put this together. I grouped it into-- what is it-- four main themes: internationalization, ranking factors-- who doesn't want ranking factors-- indexing, and structured data. There probably are new questions submitted in the meantime, and I'll try to run through some of those as well afterwards. Internationalization is right on top here. We're about to launch a site for Canada, both in English and French since both are spoken there. Should I use sub-domains or sub-directories? From a [INAUDIBLE] point of view, that's totally up to you. For country targeting-- for geo-targeting-- it's really useful for us to have a clear grouping of those URLs. So that could be a sub-domain on a gTLD, it could be a separate ccTLD, or it could be a sub-directory on that gTLD; all of those work for geo-targeting. For language targeting, you can use whatever URL structure you want. So you can use fancy parameters if you prefer, that's totally up to you. So for country targeting, we need to have it clearly grouped. For language targeting-- what you're talking about here for Canada, so English and French-- it's totally up to you, and you can group those or separate those however you want. Personally, I'd try to group them just so that you can diagnose issues a little bit more per language, to see well, my English content is performing like this, my French content is performing like this. And the easier you can group that, the easier it is for you to take care of that from a practical point of view. But from a search point of view, you can use whatever you want.
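John doesn't name it here, but the standard way to annotate grouped language variants like the English and French Canadian pages he describes is hreflang alternate links. A minimal sketch in Python; the URLs and directory layout are made-up examples, not from the transcript:

```python
# Sketch: generate hreflang link tags for hypothetical en-CA/fr-CA
# variants of a page, plus an x-default fallback.
def hreflang_tags(variants):
    """variants: dict mapping hreflang codes (e.g. 'en-ca') to URLs."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{code}" href="{url}" />'
        for code, url in sorted(variants.items())
    )

variants = {
    "en-ca": "https://example.com/en-ca/",
    "fr-ca": "https://example.com/fr-ca/",
    "x-default": "https://example.com/",
}
print(hreflang_tags(variants))
```

Each language version would carry the full set of tags, pointing at itself and at its siblings.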

MIHAI APERGHIS: John, can I have a quick follow-up on that one?


MIHAI APERGHIS: So we have a client now that's actually in Canada, and they offer travel tour packages for people that want to see New York, for example. And they have a .ca domain and they're doing pretty well, but now they want to expand to the US and Europe audience. Same packages-- for example, for New York-- and of course I recommended that they use a gTLD, since they won't be able to do much with that .ca domain. And we have two options. One is just to use a .com domain for the international audience and keep the .ca domain, just so they keep their rankings and everything else. Or use a .com domain with a /ca and re-direct the .ca domain to that, so we can also benefit from the [INAUDIBLE] and everything else. But in that way they're worried they might lose some rankings in Canada, even if we join [INAUDIBLE] over Canada. What do you think would be a good option so they don't have to worry too much about the rankings that they currently have in Canada?

JOHN MUELLER: I guess it feels more like a marketing question to me. Is this something that is a long-term move to multiple countries, or is this just another small footstep out into one other country? That's the difference there: is this generally moving to all countries, or is this just testing the water in one country? If you're just testing the water in one country, I think that keeping a ccTLD is perfectly fine-- a normal strategy for a lot of companies that are just getting started internationally. If you really want to geo-target a bunch of different countries and you really have specific content for those countries, I'd recommend doing something where you can do geo-targeting. So a gTLD, maybe multiple ccTLDs, whatever makes sense in that case.

BARUCH LABUNSKI: John, but as soon as you let that international targeting go in Webmaster Tools, and you switch that setting to generic-- let's say you've been targeting the US this entire time-- would I then drop nationally in the US, if I'm now targeting everywhere?

JOHN MUELLER: If you have a ccTLD and you also wanted to target a different country, then essentially you wouldn't be able to do geo-targeting for that additional country. You would be seen as a generic international site that's also active there, but you wouldn't have this slight local geo-targeting bonus there.

BARUCH LABUNSKI: Yeah in Canada, they love the .ca's.

JOHN MUELLER: Yeah, I think from a marketing point of view, that sometimes makes sense too. Where if you know that the ccTLD has a really strong marketing effect for your audience, then maybe that's something you want to keep. But from a search point of view, that's up to you.

MIHAI APERGHIS: So I would be able to geo-target just a sub-folder of a gTLD, like .com/ca, re-direct the .ca to that, and then geo-target it so we don't lose any of the geo-targeting benefits.

JOHN MUELLER: So if you move one domain into a sub-directory of a different domain, then essentially that's a site merge for us. It's not a normal site move, so we don't move everything one-to-one forward to the new site. We have to understand the new site and how it's standing separately. So you'll definitely see a longer period of fluctuations until things settle down again. Compared to if you move your whole content from .ca to .com one-to-one, and you added a folder for /us for the US. So that's something to keep in mind. Sometimes you say well, my long-term strategy is to have different folders for different countries, and we really want to go that direction. Then you have to bite the bullet early-- as early as possible-- move everything into a folder, understand that there will be fluctuations for a longer time, and live with it.
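To make that one-to-one forwarding concrete, here's a small sketch of the redirect mapping for moving a ccTLD into a country folder on a gTLD. The domain names and paths are hypothetical examples, not from the transcript:

```python
from urllib.parse import urlsplit

# Map each URL on the old ccTLD to its one-to-one target inside a
# country folder on the gTLD (e.g. example.ca -> example.com/ca/...),
# so every old page 301s to exactly one new page.
def merge_target(old_url, new_host="example.com", folder="ca"):
    path = urlsplit(old_url).path or "/"
    return f"https://{new_host}/{folder}{path}"

redirects = {u: merge_target(u) for u in [
    "https://example.ca/",
    "https://example.ca/tours/new-york",
]}
```

The key property is that the mapping is total and unambiguous: no old URL is dropped, and none maps to a generic landing page.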

MIHAI APERGHIS: Well, if I were to leave the .ca alone and start the .com from scratch, then I would have to balance that out, since I'm starting from zero.

JOHN MUELLER: Sure, that's another option. That you say I leave my .ca and I start an international version on .com and branch out from there. And then maybe in a later step, when you know that your international site is getting traffic anyway and things are working there, also move the .ca over, to minimize the risk of everything falling apart just because you don't get anything.

MIHAI APERGHIS: Right. OK, thank you.

JOHN MUELLER: All right, so that was our internationalization question for today. The next one was about indexing, so there are a few questions around indexing that I'll go through. So this question is, I had 28 million total indexed pages, and now it only shows me 10 million total indexed pages. And if I do a site: query, it only shows me 600,000. So in general, you're looking at very different numbers there, and it can be normal to see fluctuations like that. The site: query is something that's optimized for speed, not accuracy, so that's something where I wouldn't expect that number to match anything with regards to the actual number of indexed URLs. Because there are so many factors behind that where we don't actually look at the individual pages. We say well, we've seen a lot of pages from this domain, it's probably around this number. So it's more like a really wild approximation if you look at the site: query count. The index status information in Search Console is more direct in that it mirrors what we have in our index directly. But that doesn't mean that these are the [INAUDIBLE] that you really care about. So there might be some duplicate content in there, there might be things that you don't really want to have indexed because they're roboted, and near-duplicate content is probably counted in there as well. The number I would personally look at with regards to indexing information is the sitemap index count. So you submit a sitemap file with all the URLs that you really care about, and look at the index count there. Because that checks those specific URLs to see if they're indexed, duplicate content is ignored. It doesn't really help you to know how much duplicate content you have on your site, because that's usually less relevant with regards to how it's actually indexed in search. But you do care about a specific set of URLs, and you want to know how many of those are indexed. Let me see the next one here.
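The sitemap approach John recommends can be sketched as: list only the URLs you actually care about in a sitemap file, then compare Search Console's indexed count against that file. The URLs below are placeholders:

```python
import xml.etree.ElementTree as ET

# Sketch: build a sitemap containing only the canonical URLs you care
# about, so the per-sitemap index count in Search Console measures
# exactly those pages (duplicates and roboted URLs stay out).
def build_sitemap(urls):
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for u in urls:
        loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
        loc.text = u
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap([
    "https://example.com/",
    "https://example.com/products/",
])
```

If the sitemap holds 1,000 URLs and 950 are reported indexed, that gap is actionable in a way the raw index-status total is not.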
We received a message that we have a high number of URLs for our website, which has 134 million pages in it. So basically, this is a website that has a lot of content, a lot of pages, and we send out messages through Search Console to sites when we discover a high number of new URLs. And this question goes on, since I have them mostly noindexed, what's up with that? So we send this message out when we discover that high number of new URLs on the site, before we actually even go in and crawl those URLs. So they might be blocked by robots.txt, they might have a noindex on them, they might have a rel=canonical set to an existing page. From our point of view, we just notice when we crawl your site that we find a whole ton of new URLs that we didn't know about, and that means we have to go through all of those new URLs individually to actually see which ones of those we can actually index. So it's a warning in the sense that you can probably optimize the crawling of your site by reducing the number of unique URLs that we discover like that. And if your site is still being crawled properly-- you think the indexing is fine, you don't care about a high load on your server-- then maybe you can just ignore this message. But in general, if you'd like to optimize the crawling of your site, then this is something I would actually look at. And see where are these URLs coming from? Are these maybe session IDs in there that I can remove so that I can really significantly improve the crawlability of my site? Next question here. Often we discover another site has linked to our website, but the link has PPC parameters on it. So essentially, it's a link to a URL, but it's not really the right URL. Is this a problem? Does Google view this as a clean link? From our point of view, we'll try to crawl that link. And if we see something like a rel=canonical, or if we recognize the content already exists, we'll fold that together with the main URL that you actually do have indexed.
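The folding John describes is essentially URL normalization: a rel=canonical on the landing page tells Google which clean URL the parameterized one belongs to. A sketch of that kind of normalization in Python; the parameter names treated as tracking here are common examples, not an official list:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Sketch: compute the clean URL that a rel=canonical tag would point
# at, by dropping tracking/PPC parameters. Which parameters count as
# tracking is site-specific; these are just illustrative names.
TRACKING = {"utm_source", "utm_medium", "utm_campaign", "gclid"}

def canonical_url(url):
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```

So `https://example.com/page?id=7&utm_source=ads` would normalize to `https://example.com/page?id=7`, which is the URL the canonical tag should name.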
So it's not something I'd say you need to avoid. Especially if you have rel=canonical set up on these pages, then we'll crawl it, we'll notice the canonical, and we'll move on. So it's nothing you really need to worry about there. All right, I think one question about structured data. So let me just see. We have merged a WordPress blog domain into our main site. Webmaster Tools/Search Console shows 223 items with structured data errors. And the developers set it up with a specific folder structure; not sure whether to fix the pages or the posts. So what I would do in a case like this is, first of all, try to understand how problematic this is for your site. If you're trying to get rich snippets-- for example-- for this content, or if you're trying to submit these pages as AMP pages, then structured data errors might be problematic. Otherwise, if you just have structured data on these pages and you don't really need it for any specific search feature, then probably it's not something you really urgently need to worry about. Once you've figured out whether this is critical or not critical, I'd probably go to the Webmaster Forums and get advice from peers about how to actually clean that up. Maybe it's just a matter of tweaking the template you have on your site; maybe they'll come to you and say well, you don't actually need any of this structured data. Then you can ignore the errors, or you can remove the mark-up completely, and you won't see any errors because there won't be anything to recognize anymore. So that's something where first, figure out how critical it is, and then get some help from peers who have gone through this process as well. OK, so now to the tricky topic of ranking factors. I don't know, I stumbled into this-- I think-- in one of the previous Hangouts, where as a side comment I said titles aren't the most critical ranking factor anymore.
And suddenly everyone got excited and said titles are important, they are important for SEO and they are used as a ranking factor. Of course, they are definitely used as a ranking factor, but it's not something where I'd say the time you spend on tweaking titles is really the best use of your time. So that's something where if you're focusing only on titles as an SEO-- if your SEO agency work is essentially going to other people's sites and saying oh, we will strip out all titles and rewrite them to include all of the relevant keywords and you will rank ten places higher-- that's not going to happen. So just as background information, I'll run through most of these fairly quickly, assuming I can find all the questions here. Is the number of reviews a ranking factor, with regards to web search? As far as I know, no. The number of reviews is not a ranking factor. It might be different with local, but with web search that's not a ranking factor. On the other hand, if these reviews are on your pages, then that might be content that we can pick up, that we can show in search and use as a ranking factor there. Keyword in the URL, is that a ranking factor? I believe that's a very small ranking factor, so it's not something I'd really try to force. And it's not something where I'd say it's even worth your effort to restructure a site just so you can include keywords in the URL. On a page that lists products, is it more beneficial to have the content at the bottom of the page or at the top of the page? Like above the fold, below the fold. That's something-- from our point of view-- within a normally-sized web page, it doesn't matter. So look at what your users are doing, where they're able to pick up this content, where they're able to actually re-use that content. Wow, this is probably going to generate a whole series of blog posts. I'll probably regret ever going through all of these questions. But since they're submitted, we might as well go through them briefly.
Short URLs or long URLs? So what definitely plays a role here is when we have two URLs that have the same content and we're trying to pick one to show in the search results, we'll pick the shorter one. So that's specifically around canonicalization. That doesn't mean it's a ranking factor, but it means if we have two URLs-- one is really short and sweet and the other one has this long parameter attached to it-- and we know they show exactly the same content, we'll try to pick the shorter one. There are lots of exceptions there and different factors that come into play. But everything else being equal, if we have a shorter one and a really longer one, we'll try to pick the shorter one. And again, that doesn't mean it's a ranking factor, it's just if we have two of the same. A question from Martin. External links from your pages to other sites, is that a ranking factor? What if they're nofollow? From our point of view, external links to other sites-- so links from your site to other people's sites-- aren't specifically a ranking factor, but they can bring value to your content. And that, in turn, can be relevant for us in search. And whether or not they're nofollow doesn't really matter to us. So there's one really popular website that essentially has all external links nofollowed, and it ranks really well regardless. And users still use it regardless.

BARUCH LABUNSKI: John, just regarding that, I wanted to know-- there's a lot of people talking about how Wikipedia dilutes your ranking. Is that true? Like the nofollow link.

JOHN MUELLER: The nofollow link. So if you have nofollow links, no. Essentially, nofollow links are taken out of our link equation. So if you have a lot of links that are nofollow, that's something we ignore. And sometimes sites buy a lot of advertising on other people's sites, and they get a lot of traffic that way, and those links are nofollow. That's perfectly fine, too. So just because there's a nofollow link to your site doesn't mean that your other links will be less valuable.

LYLE ROMER: Hey, John. Also just related to links as well. Recently we were featured in a news story that linked to us in their online article, and we were wanting to link back to it just to show that we were featured. And we were concerned that that would look to Google like a link scheme or some kind of link exchange. Is that something we need to be concerned with?

JOHN MUELLER: No, go for it. I think it's great. I mean, if you're happy that you were featured there, then point people at it and show how other people are talking about your business. I think that's a great idea.

LYLE ROMER: Great, thanks.

JOHN MUELLER: Another question with regards to links, what a surprise. Does it matter if the main content area has a lot of links to external sites, and few links to the same domain? So I think there are two aspects there. On the one hand, if this is the area of your pages where you have your primary content-- the content that this page is actually about, so not the menu, the sidebar, the footer, the header, those kind of things-- then that's something that we do take into account, and we do try to use those links. The other thing to keep in mind is you still need to make sure that within your site you have a clear linking structure. And that can happen if you have a clear set-up with a sidebar, with a menu, those kind of things. But if you don't have a clear linking structure-- if you have just a lot of links to external sites-- what might happen is we can't really crawl the rest of your site properly because we can't discover it properly. So if you're really clear about this distinction between the boilerplate content and you have that set up properly so that we can crawl your site properly, then that's not something I'd really worry about. But if overall you don't have a lot of internal links within your content, then that's something that can lead to a situation where we don't actually discover all of your pages. And because of that, we wouldn't be able to show them in search, because we don't really know about them. So definitely make sure that your internal linking structure is sane, that it can be crawled. There are some great external tools out there that will crawl your website and let you know the URLs that they discover. So that's the main thing I'd do there. And if that's all fine, then I wouldn't really worry about the external links you have in the rest of your content. Here's a slightly different one. All of our group's websites are on the same C-block-- so a C-block of IP addresses-- is that a problem? No, that's perfectly fine. 
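The "crawl your website and let you know the URLs that they discover" check John mentions can be sketched as a simple graph traversal. Real crawl tools fetch and parse live pages; the in-memory link graph below is a made-up example:

```python
from collections import deque

# Sketch: breadth-first traversal over a toy internal-link graph
# (page -> pages it links to) to find pages a crawler starting at
# the homepage could never discover.
def reachable(links, start):
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in links.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

site = {
    "/": ["/products", "/about"],
    "/products": ["/products/widget"],
    "/about": [],
    "/orphan": [],  # exists, but nothing links to it internally
}
orphans = set(site) - reachable(site, "/")
print(orphans)  # {'/orphan'}
```

Any page that ends up in `orphans` is one search engines are unlikely to find through normal crawling, which is exactly the failure mode described above.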
So that's not something where you artificially need to buy IP address blocks just to shuffle things around. And especially if you're on a CDN, then maybe you'll end up on an IP address block that's used by other companies as well. Or if you're on shared hosting, then these things happen. That's not something that you need to artificially move around. How important is fresh or new content outside of news? Does it help to implement structured data in terms of dates? For example, a review of an article or a restaurant. So I think marking up dates is always a good idea; that helps us to understand the context of these pages. Having fresh content-- if that's something that you really have for your site-- is always a great idea, I think. Because that's something we can pick up and show to people who are searching for that. But it's not something where I'd say you artificially need to create fresh content all the time. So some topics are just things that are like evergreens, in that they're always relevant to users. And they don't need to have a date updated all the time, or a small tweak done, or a press release-- like oh, we changed our theme again, those kinds of things. So I'd really look at the type of content that you have. Does it make sense to publish new information about that content? And if so, then go ahead and do that. It's not something I'd try to hold you back on.
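One common way to mark up dates like this is schema.org JSON-LD. A hypothetical example for a review page; the type, property values, and wrapping are placeholders, and which fields you actually need depends on the search feature you're targeting:

```python
import json
from datetime import date

# Sketch: schema.org date markup for a hypothetical restaurant review,
# emitted as a JSON-LD script tag.
markup = {
    "@context": "https://schema.org",
    "@type": "Review",
    "name": "Example restaurant review",
    "datePublished": date(2016, 1, 26).isoformat(),
    "dateModified": date(2016, 1, 26).isoformat(),
}
script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(markup)
    + "</script>"
)
```

The ISO 8601 dates (`2016-01-26`) are what give the page the machine-readable context discussed above.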

MIHAI APERGHIS: John, regarding that. For example, what's the best practice when you already have an article about a subject, and there's an update or news happening about that subject? Should you create a new article, or can you just update the old article and use that to show up for news-related queries?

JOHN MUELLER: I don't know for sure. So from my understanding, for normal web search you can just update it and let us know about that update. So send us a ping with the sitemap or whatever. If we recognize the content has changed, we can pick that up. I think for Google News, if you're also showing that content in Google News, there might be some guidelines around what you can do with existing content and updating the dates. But I don't know the specifics there. So if it's for Google News, I'd look at their guidelines or ask them directly. And that's something you can definitely also use on web search. If you don't care about Google News-- because you're not included in Google News or whatever-- then that's probably a little bit of a different situation.

MIHAI APERGHIS: Right. Just to give you an example, we have that automotive website. So they have an article for a certain car, they have some spy shots, and they build an article on that. And then the car gets launched, so they want to keep the same URL and just update it with the launch information and pictures, things like that. And yes, it's on Google News, they are in Google News as well. I better ask Stacy about this.

JOHN MUELLER: I'd ask Stacy. Because if you need to do it in the specific format for Google News, that'll definitely work for web search. If you don't need to do anything special for Google News, then you're a little bit more free to try things out.


BARUCH LABUNSKI: John, can I ask a question about Google Bot?


BARUCH LABUNSKI: So Google visits your site approximately 187 times a day, according to a company called Incapsula, and 4% of the bots are fake. What are you guys doing to prevent fake bots from coming to websites?

JOHN MUELLER: We can't really prevent them. So fake bots are essentially people either setting their browser to use the Googlebot user agent, or running a script that uses the Googlebot user agent, and trying to access a page. And from our point of view, we can't block that. It's not a copyrighted user agent or anything like that. So that's not something we can block. But what you can do on your side is, if you see that Googlebot user agent, you can do the reverse DNS lookup. And some plug-ins do this automatically-- we have this documented in our help center-- where you look up the IP address that the request came from, and you can confirm that it's actually a Googlebot IP address. So if you see the Googlebot user agent and confirm that it's a Googlebot IP address, then you know it's the real Googlebot. If you do the reverse lookup and see it's some random dial-up in some country you've never heard of, then probably that's someone just running a script that looks like Googlebot. And that's not something we can block, because it's a part of the internet, and people can set their user agent to whatever.
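A sketch of the verification flow John describes: reverse-DNS the requesting IP, check the hostname, then forward-resolve the hostname and confirm it maps back to the same IP. The hostname check is split out so it can be exercised without network access; the accepted suffixes follow Google's documented pattern, but treat them as an assumption to verify against the current help-center docs:

```python
import socket

# Hostname check: real Googlebot reverse-DNS names end in
# .googlebot.com or .google.com (per Google's documentation).
def is_googlebot_host(hostname):
    return hostname.endswith((".googlebot.com", ".google.com"))

# Full check: reverse lookup, hostname check, then forward lookup
# to confirm the hostname resolves back to the original IP.
def verify_googlebot(ip):
    try:
        hostname = socket.gethostbyaddr(ip)[0]  # reverse DNS
        if not is_googlebot_host(hostname):
            return False
        return ip in socket.gethostbyname_ex(hostname)[2]  # forward confirm
    except (socket.herror, socket.gaierror):
        return False
```

The forward-confirm step matters: an attacker can fake the reverse-DNS record for an IP they control, but they cannot make Google's forward DNS point that hostname back at their IP.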


JOHN MUELLER: All right. How should I interpret the quality of my site when 30% to 50% of my indexed pages are affiliate pages that contain affiliate links? So we've gone through this a few times before as well. Essentially, affiliate content isn't any magical content that would automatically be dropped from our side, from the search results. It's content like everything else. So if you have fantastic content, if you have a great website, and some of the links in there are affiliate links, then fine. If that's a part of your website and that's how you monetize your website, that's perfectly up to you. On the other hand, if you're just scraping an affiliate feed and publishing that on your site and also including affiliate links in there, then essentially that's auto-generated content. That's not something that we find that valuable. So it depends on how you want to look at it, how you want to build your site up. So you can definitely make a fantastic site that is also affiliate-based or monetized with affiliate links, and that should rank fine. So whether or not there are affiliate links in there isn't the important part; it's essentially how you make your site. Some other questions here. Our blog uses a plug-in that automatically generates suggested related posts at the end of all blog posts. Since Google doesn't like automatically generated content, should these be nofollowed? No, from our point of view, that's fine. So essentially what you're creating there is an internal linking structure between articles that are related on your site. And if you can generate that automatically, that's up to you. The important part for us with automatically generated content is that the primary content of these pages isn't automatically generated. That there's actually really something valuable on those pages that you're linking across to, that is important for us to index and show in the search results.
If a website is identified as hacked by Google and it shows as a link in search console, do we need to disavow it? Or is Google automatically not counting that? You don't need to disavow links from sites that are hacked. Essentially if we find that the site is hacked in a way that it has injected links from a hacker, then we'll probably ignore those links anyway. And otherwise, that can be a link like anything else, and possibly the hacked state is just something temporary. So you definitely don't need to go through all of your links and say oh, this is hacked, this is OK, this is hacked, and disavow those individually. That's something you really don't need to do.

BARUCH LABUNSKI: But if you look at it when you're doing that-- all of a sudden I noticed that when I visit the sites behind those hacked links, my firewall and antivirus will go crazy and I won't even be able to load that site. And I would automatically, as a webmaster, say to myself, OK, there's no way for me to even contact these guys, I need to put that in my disavow list-- even though you guys recognize that as a hacked site. I know you guys recognized it, so is it still necessary to put it in the disavow list?

JOHN MUELLER: I wouldn't really worry about that. So in a case like that, you're talking more about malware. And that's something where, even from our point of view, it's more of a temporary thing. It's not that the site is bad in the way that it's linking to your site-- in the sense that this specific link from that site to your site is something that you placed there, which is problematic-- it's essentially that that site got hacked, and it got malware, or it didn't get malware, it got a different type of hack. And that's not something that I'd say you really need to worry about with regards to links to your site. I think maybe links from your site, that might be something where you'd say well, I noticed I've been linking to this one site a whole lot because it used to be a really fantastic site, but now it's totally hacked and full of malware-- maybe I'll remove those links to that site. That makes sense from a usability point of view, but even that's not something where we'd say well, this is a search quality factor, where we judge your site based on the sites that you're linking to there.


JOHN MUELLER: All right. Wow, still have a whole bunch of questions left. Let me try to run through some of these, and then we'll open up for more questions from you all. What's the effect on rankings if I implement rel=next and rel=previous tags? Will it strengthen the first page and weaken the rankings of the other pages? Yes. Usually it would help us to focus more on the first page, but the other effect is also that we understand that this is a series of individual URLs that actually belong together. So it makes it a little bit easier to say well, someone is searching for something maybe on page 2 or page 3, but we think the whole series will be useful for this user, so we'll show the start of the series in the search results. So it helps to bring more value to the beginning of the pages there. And in general, if you're seeing that people are really aiming for maybe the second half of the series that you have set up with rel=next and rel=previous, maybe it makes sense to split that out separately anyway. I want to ask about structuring the disavow backlinks file for a site, because I get millions of links per day from many domains. Not quite sure what the question here is. I'm sure lots of people are jealous because of the millions of links per day. If you're talking about the structure of the disavow file, essentially that's up to you, how you want to structure that. So we recommend disavowing on a domain basis, so that you don't have to go through all of these URLs individually and can just say well, this whole domain-- all of the crazy links that I've been getting from there-- is something I really don't want to be associated with, and I'll put it in my disavow file. So doing it on a domain basis makes it a lot easier to keep up with. The order within the file, how you structure it in the file, is really up to you. You can do it alphabetically, you can do it in any order that you really want.
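The domain-based disavow file John describes is a plain text format: comment lines start with `#`, whole domains use a `domain:` prefix, and any other line is treated as an individual URL. A small generator sketch; the domains and URL are placeholders:

```python
# Sketch: build a disavow file on a domain basis, as recommended
# above. "domain:" lines cover every link from that domain; bare
# lines disavow individual URLs. Order within the file is arbitrary.
def disavow_file(domains, urls=()):
    lines = ["# links I do not want to be associated with"]
    lines += [f"domain:{d}" for d in sorted(domains)]
    lines += sorted(urls)
    return "\n".join(lines) + "\n"

text = disavow_file(
    {"linkfarm.example", "spammy.example"},
    ["https://example.org/bad-page"],
)
print(text)
```

Working at the domain level keeps the file short and maintainable even when the bad links number in the millions.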
We don't look at the file manually; that's something that's processed completely automatically. Since Penguin looks for spammy external links, and most webmasters now know when it will be released, what's the risk of someone building spammy links to hurt a competitor? Is Google being too transparent about what Penguin does? You can't help everyone. Some people say more transparency, and others say less transparency. Terrible. From our point of view, we have a lot of practice with regards to handling these spammy links that competitors build to each other's sites. And we do know that this is a practice, and we do take it into account in all of our algorithms, especially when we look at things like links. So that's not something where I'd really worry too much, with regards to the future updates there. I know the people who are working on these algorithms; they're really well aware of this kind of situation. And we have a lot of data to show them on specifically what things to watch out for, what things to keep in mind when it comes to these updates. I've been running a news portal website since 2010. If I want to delete all the old articles on my website, what's the best way? If you really want to delete these articles, you can just delete them and serve a 404 for them. From my point of view, I always find it a little bit of a shame when good content goes missing on the web. So I'd probably-- if this is content that you think is useful or was useful at the time-- I'd probably just move that into an archive section of your website, so that you still have that content there, if it's really something valuable from back then. On the other hand, if you look at this and you say oh, I was just writing about daily news and none of this is important anymore once it's older than a week, then maybe it makes sense to prune that out. And say I'll just make a clean cut, and from now on I'll focus on higher-quality news, something like that.
But with regards to how to remove the content, that's really up to you. Personally, I'd just make a nice 404 page so that users know what happened and can react to that. All right, let's open it up for more questions from you all. What's on your mind?
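The rel=next/rel=previous setup John describes for a paginated series can be sketched as link tags in each page's head. This is a hedged illustration; the example.com URLs and page numbers are hypothetical:

```html
<!-- On page 2 of a hypothetical series at https://example.com/articles?page=2 -->
<head>
  <link rel="prev" href="https://example.com/articles?page=1">
  <link rel="next" href="https://example.com/articles?page=3">
</head>
```

The first page in the series would carry only rel="next", and the last page only rel="prev".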

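The disavow file John mentions is a plain-text file uploaded through Search Console. A sketch of the format, with the domain-level entries he recommends (all domains and URLs below are made up):

```text
# Spammy domains I don't want to be associated with,
# disavowed wholesale with domain: entries
domain:spammy-links-example.com
domain:another-bad-example.net

# Individual URLs can also be listed, but domain entries
# are much easier to keep up to date
http://bad-example.org/some/specific/page.html
```

As John notes, the order of entries doesn't matter; the file is processed automatically.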


MALE SPEAKER: So I have a question about [INAUDIBLE] in some regards to working with the mobile site. So if we had better content, more complex content in the apps-- because there's just not a way to display it currently on the mobile web-- is there a strategy for getting that indexed? Right now we've looked at dynamically-generating it every 24 hours and building a static version, and then putting that on the mobile site so a user can find it. But it's really not the same because it's not interactive.

JOHN MUELLER: I don't think we have a great solution for you yet. And it's something we're at the moment-- specifically around app content-- we do primarily look at that connection to the web content and try to do the ranking based on that. So to some extent, we can probably pick up a little bit through the app [INAUDIBLE] API, but we're probably mostly still focused on the content we actually find on the web page. But if you have some examples, I'd love to show that to the team here so that they can maybe-- I don't know-- maybe change their minds with regards to how to best show this kind of content in search. Because I do know there are definitely options available with it in-app that you can't easily do on a website. And for people who are going the extra steps to actually make great apps, it will be a shame to have that content almost demoted because it doesn't have a good web equivalent.

MALE SPEAKER: OK. But would you suggest just ignoring building that? Because if we find that 60% of users should download the app-- this is one of the most viable things for them but there's no web equivalent so a web user doesn't know-- would you say just hold off and wait, or go ahead and try and build a semi-pseudo-version of it for the web and hope you guys like it?

JOHN MUELLER: I don't know. So we are working on the app-only indexing side, where we can index app content without a web equivalent. Maybe that will be a good match for you, but it's probably something where you'd want to get in touch with us directly and we can put you in touch with the people here so that you can discuss the options around that. I don't know what the current status is there and what the next steps would be.

MALE SPEAKER: All right, thanks John. Hey, Ashley.

MIHAI APERGHIS: John, if I could ask your opinion on something? Here's a URL of a bookstore, it's a category page. And they have a filter list that's very, very long. They have an author filter list. And there are hundreds of authors, and all of those filter links are nofollow. [INAUDIBLE] that it might be a good idea, so Google [INAUDIBLE] index the combination of [INAUDIBLE] generate millions of URLs. However, I know that nofollow shouldn't be used exactly like that, and it dissipates the link value going through them, and it's quite a lot of links there. So do you think that might impact ranking performance in any way, and should you remove it altogether? Or find a different way to display them?

JOHN MUELLER: It's always tricky if you have a large number of filters, especially when they can be combined. So I think a nofollow helps us a little bit so that we don't try to forward PageRank automatically to all those variations, but we still need a way to find almost the top-level filter items. So if you're searching for a specific author, that we have an author landing page from that side.

MIHAI APERGHIS: We already have that, that's no problem. It's just that for categories we also have the filters, and we're thinking that maybe we should remove them so it doesn't--

JOHN MUELLER: Personally, I don't think that would affect that much. So it would probably affect a little bit the crawling of that part of the site. But if your server is fine with the crawling load as it is for those filters, I don't think you'd gain that much from an SEO point of view by doing something really fancy and hiding those URLs completely, playing with cookies or JavaScript or whatnot. So maybe from a usability point of view it would make sense, but I don't think from an SEO point of view you would really see a dramatic difference.

MIHAI APERGHIS: Well, I don't think from a usability point of view, having hundreds of authors there in the filter list helps. I was thinking something like maybe an auto-complete-- and you'd just type the name of the author and select it or something like that. So that would help with the usability and with Google Bot as well.



LYLE ROMER: Hey John, quick question. Back to that titles topic from before. Is it either a good thing or a bad thing to put your website URL in the title of all your pages?

JOHN MUELLER: We'll probably do something similar anyway. So we automatically try to find the name of the site through the title in the search results, and that's something that happens automatically even if you don't put the URL in there directly. So for many sites, we'll try to figure out what the name is of the site, which might not be one-to-one the domain name. But for other sites, maybe we'll just put the domain name in there. But essentially, that's up to you. Personally, I'd not use it as a main aspect of the title, but rather try to get the short and sweet titles for those pages together. And if you add a site name to it, that's fine. I don't think you gain anything at all by having the domain name-- the URL-- in there as well. So if your domain name is your site name essentially-- if you're, I don't know, and that's the name that you use on your physical store, then sure, use that as well. But if you're just Joe's Widgets, then maybe you just use Joe's Widgets in your titles instead.
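A title along the lines John suggests keeps the page-specific part short and appends the site name rather than the full URL. "Joe's Widgets" is his example; the page name here is made up:

```html
<!-- Short and sweet page title, with the site name appended -->
<title>Blue Widgets | Joe's Widgets</title>
```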

LYLE ROMER: OK, thanks.



FEMALE SPEAKER: I have a question about hreflang as well as 301 redirects. My question is if you have two sites and the pages from site A are hreflanged to site B, but on that second site you actually 301 the page to a different page on site B, what takes priority, or what happens to your hreflang from A to B, if the pages on site B are 301ed? If that makes sense.

JOHN MUELLER: Yeah, so we'd probably drop those.

FEMALE SPEAKER: You'd drop the hreflang?

JOHN MUELLER: We probably wouldn't take that into account. So with hreflang, the important part is the hreflang links need to be between the canonical pages. So if you 301 from one site to another, then that other page would probably be the canonical.

FEMALE SPEAKER: It would be within the same site. So site B is 301ed to a separate-- so A has the hreflang, but B has the 301 between the pages.

JOHN MUELLER: I think in a case like that, most of the time we'd probably pick the 301 target. So where you're redirecting to as a canonical. And that's the one that you'd want to use for the hreflang links.
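John's point that hreflang links need to connect the canonical pages can be sketched like this: if a page on site B 301s to a new URL, the annotation on site A should point at the redirect target. All URLs here are hypothetical:

```html
<!-- On https://site-a.example/page-a (the English version) -->
<link rel="alternate" hreflang="en" href="https://site-a.example/page-a">
<!-- Point at the 301 target on site B (the likely canonical),
     not at the URL that redirects -->
<link rel="alternate" hreflang="de" href="https://site-b.example/page-b-new">
```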

FEMALE SPEAKER: OK, and my follow-up question-- and the reason why we have the [INAUDIBLE]. I'm currently playing the part of Rob Young today and asking about experience days and if there's any chance that we could get an update on our original site with EX? Or if you've had any more feedback from your team that was-- I know-- looking into that for us?

JOHN MUELLER: We did look at that, I think, early in January. And I suspect we'll have some updates there, but we don't have anything at the moment.

FEMALE SPEAKER: OK, thank you.

BARUCH LABUNSKI: John, the question is about the Southern Hemisphere, and I'm just tired of talking about that specific character. It's going to be happening-- it's almost the end of the month-- and is there going to be any turbulence according to that algorithm this month? I just wanted to know because the whole point-- you said a long time ago-- is not to see the effect.

JOHN MUELLER: Oh, oh. Southern Hemisphere, that threw me off. So you mean those little black and white animals?


JOHN MUELLER: I don't know of anything specific. I think [INAUDIBLE] said recently that we're looking at Q1. So not specifically this month, but Q1. So my guess if he said Q1, is maybe not just yet. But I know they're working on this, and seeing what we can do to speed that up.


MIHAI APERGHIS: John, regarding AMP pages, is there any specific plan there? Is there a specific launch date for that?

JOHN MUELLER: For AMP? I think we said late February, so Q1.

MIHAI APERGHIS: Also by the way, I noticed that in the demo, it seems the results are sorted by the impressions of the pages, I guess.

JOHN MUELLER: I don't know about the order, but it specifically replaces the in-the-news block, so it's the newsy content that we [INAUDIBLE]. I don't know about the specific order. And it might be that the demo is more like a proof of concept than something where we'd say well, this is the exact ranking that we would do within a block like that.

MIHAI APERGHIS: So it will replace the news block completely?

JOHN MUELLER: Probably, something like that.

MIHAI APERGHIS: And will it have any connection to the news publisher dashboard thingy?

JOHN MUELLER: No, because the in-the-news block isn't specific to Google News. Any kind of web content can also be shown in there. So you don't need to be a Google News publisher to have that. I think the advantage from a Google News point of view is-- I believe they said the Google News app will also be supporting AMP. So that's something where if you're in Google News and you have AMP pages, then you'll see advantages from that side as well.

BARUCH LABUNSKI: But if your server's so fast and you can get the pages loading really fast, and you're testing it on sites-- like pinging them-- and you're in milliseconds, there's no need for AMP, right?


BARUCH LABUNSKI: Because for instance, I use a thing called Sloppy, and Sloppy lets you test your site on a 28.8k-style modem and a 56k. And then you can actually see really how fast it is. So if it's fine, then why do it, right?

JOHN MUELLER: So the advantage around AMP is-- on the one hand-- speed, because that's inherently built in. And of course, if you hand-make your pages, you can probably make something a little bit faster. But the advantage there is also that this content can be embedded directly into search results, so that it can be displayed a lot faster and people can swipe through to your content directly from the search results. So that's not something that we can do with any generic web page out there. We really have to know that we can trust the markup there, that it matches the standards that we have set for AMP, where we know what kind of iframes could be in there, what kind of JavaScript is allowed, those kinds of things. And that's the secondary aspect there, that this content can be embedded directly. Also, I imagine on Twitter and other social media sites that also support AMP, they'll be able to take that content in directly so that people can actually read your content in the context of their social media experience, without having to click through to your site and be almost discouraged from visiting your site because they have to leave the browser window again and go somewhere else. So that's an advantage that comes with the AMP format that isn't specific to just being fast. But of course, if you can make a site that's really fast and works well for users, that's obviously fantastic, too.
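The restricted markup John describes is what makes AMP pages embeddable: they follow a fixed, validated structure. A sketch of the core boilerplate, with a hypothetical canonical URL (the full spec also requires a specific amp-boilerplate style block, noted here as a comment rather than reproduced):

```html
<!doctype html>
<html amp lang="en">
  <head>
    <meta charset="utf-8">
    <!-- The AMP runtime from the AMP CDN -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <title>Hello AMP</title>
    <!-- Ties the AMP page back to the regular page on your site -->
    <link rel="canonical" href="https://example.com/article.html">
    <meta name="viewport" content="width=device-width,minimum-scale=1">
    <!-- The required amp-boilerplate style block goes here as well -->
  </head>
  <body>
    <p>Hello, AMP world.</p>
  </body>
</html>
```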

MALE SPEAKER: Hi, John. Regarding app indexing, how does Google decide whether to show a brand's app when searching for that brand? Taking into account that the brand's site is the first site to show in the search results.

JOHN MUELLER: So I think this is mostly around the install button that we sometimes show. Where, if we can recognize that someone is specifically looking for that app, then we'll probably show that. If it's a generic brand that has a great website and also has a great app, then that's something where we have to balance off the different options there. But I don't think we have any clear guidelines where we can say well, this means we'll show this and this means we'll show the app. If we have the connection through app indexing, then it's at least open for us to show that install button flow directly in the search results.

MALE SPEAKER: So is there any way to escalate it? In case we believe that--

JOHN MUELLER: So we're showing your app instead of your website?

MALE SPEAKER: In addition, to show an install button box.

JOHN MUELLER: So we're not showing it? Or we're showing--

MALE SPEAKER: The website is showing, but the install button isn't.

JOHN MUELLER: OK, so in a case like that I would make sure that your app indexing setup-- at least for the home page-- is set up correctly. Because if that's set up correctly, then we should definitely be showing that. If you have maybe-- I don't know, a forum thread somewhere with your URL-- I can definitely look at that with the app indexing team here. But in general, the app indexing docs are fairly comprehensive. In the meantime, I'd just double-check the debug steps there, where you pull up the app with the intents that you have to specify, that that actually works. And if that works, then over time we should be showing that button automatically. And if not, then probably something is set up incorrectly in the configuration. Another thing you can do is verify the app in Search Console. And that has a Fetch as Google feature for the app as well, where you can double-check that we can actually pull up the app through app indexing.
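The intent setup John refers to lives in the Android manifest: a deep-link intent filter that lets the app open the site's URLs. A hedged sketch, where the activity name and host are made up for illustration:

```xml
<!-- AndroidManifest.xml: lets https://example.com/... URLs open in the app -->
<activity android:name=".MainActivity">
  <intent-filter android:autoVerify="true">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="https" android:host="example.com" />
  </intent-filter>
</activity>
```

The debug step John mentions amounts to firing one of these intents (for example with `adb shell am start` and a site URL) and confirming the app opens the right screen.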

MALE SPEAKER: All right, I'll set up the link for the forum.


MIHAI APERGHIS: John, regarding your earlier response to the rel=next and rel=previous question. Is rel=next/rel=previous used just for category pages where you have multiple pages of something? Or can it be used for-- for example-- content that's spread over several pages, like chapters or something like that? If you know Moz's "Beginner's Guide to SEO," it's built like that. They have a landing page, and then links to each chapter. Could they use rel=next/rel=previous, and would that help you in any way?

JOHN MUELLER: I think that would be useful for us to recognize that these belong together. We'll probably figure it out for something like that anyway. But if you specify that, that helps us to understand this is one block that's split up into multiple sections.

MIHAI APERGHIS: So you can understand that anyway? I was thinking that you're probably understanding categories easier, because [INAUDIBLE] sites, page one, two, three, four, are easier to recognize than content that's built on multiple pages.

JOHN MUELLER: Probably both.

MIHAI APERGHIS: And is that a factor in how you show sitelinks, for example? Because I noticed for queries related to that, you show the main page and then the sitelinks to the chapters.

JOHN MUELLER: I don't know, maybe.

MIHAI APERGHIS: OK, good to know.

JOHN MUELLER: I don't know if sitelinks specifically look at rel=next or previous, or if it's just that the navigation is so obvious that we pick it up for sitelinks.


JOHN MUELLER: All right, I have to take a break here. It's been great having you, thanks for all the questions that were submitted. And we have more Hangouts planned for this week, so if we didn't get to your question, feel free to add it into one of the future ones. And I wish you all a fantastic week. Bye, everyone. [INTERPOSING VOICES]