
Google+ Hangouts - Office Hours - 27 March 2015




Transcript Of The Office Hours Hangout


JOHN MUELLER: All right. Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst at Google in Switzerland. And part of what I do is talk with webmasters like you and make sure that your opinions, your feedback is heard internally among the Web Search teams. And accordingly, that the feedback, the insights from the Web Search team make it to you as well. So with that said, we have a bunch of questions that were submitted already, but if any of you want to get started with a question of your own, feel free to jump on in now.

MIHAI APERGHIS: Hi, John. How are you?

JOHN MUELLER: Hi, Mihai.

MIHAI APERGHIS: Since Barry isn't here, I'm going to go ahead and ask you about the latest fluctuations that a lot of websites are reporting in the past week. Is there anything you could tell us? Maybe a new update, an old algorithm being refreshed, or a new one? Anything you could share?

JOHN MUELLER: We did take a look at this, internally, a bit because people have been writing about it. But from our point of view, these are just changes as normal. So normal fluctuations that always happen as we update our search algorithms, update the search data. It's not something specific that we would point at.

MIHAI APERGHIS: OK. Thank you.

MALE SPEAKER: I have a question as well, John. Hi. We've moved our images from a [INAUDIBLE] website. We moved all the images to a cookie-less HTTPS domain. It's a different domain than the one that we're serving on. We did a 301 for the images. We verified the new domain in Webmaster Tools, and we tweaked the Image Sitemaps as well. Webmaster Tools is showing no errors. It's showing a slow indexation of the new images, of the new URLs of the images. And basically, we've lost 20% of our traffic, and I see a very, very slow rate of indexation, basically 10% indexation after two, three weeks since the change. How long could this go on?

JOHN MUELLER: I think that's almost kind of normal when you look at large sites that have a lot of images. The main difference there, compared to normal web pages, is that we tend to crawl images less frequently than web pages just because, in general, images don't change that frequently. So for us to see this redirect takes a lot longer than with web pages. And for us to reprocess that, make sure that in Image Search, we have the right image associated with the right landing page, that takes time as well. So that's something where I think it would be normal to see a fairly long time for this to roll out and all of those redirects to be found. I think with Sitemaps, you're definitely doing the right thing. Make sure you're submitting Sitemaps with the images and the landing pages. Also, make sure that the landing pages all point at the new image URLs directly, so that it's not pointing at the old ones and has to follow the redirect. But essentially, this really just takes a lot of time.
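
As a rough illustration of the sitemap advice above, here is a minimal sketch of an image sitemap entry in Python, assuming a hypothetical move from an old image host to a new cookieless domain; all domains and paths are placeholders:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
IMAGE_NS = "http://www.google.com/schemas/sitemap-image/1.1"
ET.register_namespace("", SITEMAP_NS)
ET.register_namespace("image", IMAGE_NS)

urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")
url = ET.SubElement(urlset, f"{{{SITEMAP_NS}}}url")

# The landing page stays on the main domain...
ET.SubElement(url, f"{{{SITEMAP_NS}}}loc").text = "https://www.example.com/products/widget"

# ...while the image URL points directly at the new host, not at the old
# URL that now 301-redirects, so Googlebot doesn't have to follow redirects.
image = ET.SubElement(url, f"{{{IMAGE_NS}}}image")
ET.SubElement(image, f"{{{IMAGE_NS}}}loc").text = "https://static.example-images.com/img/widget.jpg"

print(ET.tostring(urlset, encoding="unicode"))
```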

MALE SPEAKER: Let's say, for a website of 100,000 images, could you give me a time frame of how long it could take?

JOHN MUELLER: I don't know. Sorry. But what you can do is, if you can post your domain in the chat, then I can pass it on to the Images team to take a look to see if it's really working as expected. I know images tend to take a lot longer to move, compared to web pages, but maybe something is stuck on our side, maybe something is stuck on your side. And I'm happy to pass that on to the team to have them take a look.

MALE SPEAKER: Brilliant. Thank you very much. I'll just use the Chat message there then?

JOHN MUELLER: Sure.

MALE SPEAKER: Brilliant. Thank you.

JOHN MUELLER: All right. Here's a question about the EMD update, Exact Match Domains. Just a bit of background on that. This is something that I believe we launched a year ago, somewhere around that, where essentially, if we recognize that a site uses exact match domains and it's a low-quality site, then that's something that the web search algorithms will take action on. So let me see the question. It would be good to know if you can tell us more about the restrictions on domain name selection. Any chance for the low domains with millions of traffic to be hit by EMD? youtub-mp3.org, onlineaudioconverter.com, clipconverter.cc, et cetera.

So I tend not to comment on third-party sites. If you have a specific site that you're worried about, I'm definitely happy to give you feedback on that, but I think giving feedback on other people's websites doesn't really make that much sense. In general, like I mentioned with this update, if it's low-quality content and it's using an exact match domain, then that's something we take action on. We try to take action on low-quality content in general as well, but it's not the case that just having an exact match domain would be a reason for us to say, well, we need to demote this website. Because a lot of websites have an exact match domain. People are searching for it, and they use that as their domain name. And that's perfectly fine. It's not something they would get any magical SEO advantage out of, but it's something that sometimes makes sense. Even Google-- essentially, lots of people search for Google on Google. And just because our domain name is the same thing as what people are searching for doesn't mean it's less relevant. It's just what it's called.

So from my point of view, if you're looking at domain names and you're not sure which ones to pick, I'd recommend choosing one that matches your business. And also one that matches your goals, where you want to be in maybe the next five or 10 years as well, so that you don't have to go through the domain change process over time, and you really have something that works for you for the long run. And if that happens to include some keywords, then I wouldn't really sweat that. I think if you can find a good domain name that works for you, maybe with the existing top-level domains, or one of the new ones, go ahead and grab that.

What are Google's feelings on Linkware sites that give away free images and graphics, but require a link back for credit? Can Google identify those types of links and treat them differently, or do they treat them equally as editorial links, perhaps? So from our point of view, these are essentially paid links. These are unnatural links. These are links that are only there because of this exchange that's happening, with some of the content being provided in exchange for one of these links. So that's not something I'd recommend doing.

MIHAI APERGHIS: But John, what about content that's under a license that requires you to share it, or mention it, once you use it on your website, or things like that? Because this could be interpreted to mean that you shouldn't use those types of resources, just because the license itself requires you to mention the original source.

JOHN MUELLER: I think you mean like the Creative Commons Attribution-ShareAlike licenses?

MIHAI APERGHIS: Yes.

JOHN MUELLER: That's perfectly fine, but it's not something where you'd have to provide a followed link back to the source. If you want to use a nofollow for that, that's fine. If you just want to attribute it, that's fine. But essentially, if you're providing this trade, with content in exchange for a followed link, then that's the kind of link that we see as being unnatural. It's not there because someone wants to link to your website and say, this is a great website. It's just that they're in this exchange, where you get something, and you have to do something back.
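
To make the distinction concrete, here is a minimal sketch of a required attribution rendered with a nofollowed link, which is the arrangement John describes as fine; the names and URL are made up:

```python
def attribution_credit(source_name: str, source_url: str) -> str:
    # A credit that attributes the source without passing PageRank:
    # the rel="nofollow" is what keeps the required link natural.
    return f'Photo: <a href="{source_url}" rel="nofollow">{source_name}</a>'

print(attribution_credit("Example Stock Photos", "https://photos.example.com/"))
```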

MIHAI APERGHIS: You're advocating against specifically making content just to grab links off people.

JOHN MUELLER: I think it's perfectly fine if people want to link to your website. That's fine. I think it's also fine if you want to have attribution for something that you do, but it's not something where you would require a followed link back. Maybe it's a nofollowed link. Maybe it's something that people can link back to in whatever way they want, but essentially, it shouldn't be required that it's a followed link.

Should you noindex subpages of categories? For example, if my shop has 100 books divided into 10 pages, should I allow only the first one and block the rest? Most of the time, these pages have similar metas, but their content varies every time. That's a pretty good question. So from my point of view, there is no absolute answer for something like this. This is something where you need to look at your website and think about what kind of content you want to have indexed, and where you think the pages that you have indexed are actually of high quality, of high value for the user. So sometimes it makes sense to have these category pages indexed, and sometimes you say, well, these are just facets of search, for example, with thousands of different variations, but there's no actual additional value in having each of these different category pages and the paginated pages indexed. So that's something where you have to look at your website and think about what makes sense in your specific case.

I think in this specific example, if you have a lot of books, and you have different categories of books, and can paginate through the different categories, then maybe it makes sense to just allow the first page of the category to be indexed and the rest of the pages to be noindexed. The important part for us is essentially just being able to find the individual product pages. So if you have categories in your shop and all of those categories are essentially blocked from crawling, or have a noindex, nofollow meta tag on them, then we might not be able to find all of the individual products, and that would be a problem. But if these pages are additional ways of reaching the same products, then that's something you can make an editorial call on your side and say, this makes sense to have indexed, or this is essentially just a low-quality copy of the things I already have indexed, and I'll just put a noindex on them.
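
As a small sketch of the "index the first page, noindex the rest" option John mentions, assuming a paginated category where the products must stay discoverable; the function name is just for illustration:

```python
def robots_meta_for_category_page(page_number: int) -> str:
    if page_number == 1:
        # The first category page is allowed into the index.
        return '<meta name="robots" content="index, follow">'
    # Pages 2..n are kept out of the index, but "follow" ensures Googlebot
    # can still reach the individual product links on them.
    return '<meta name="robots" content="noindex, follow">'

print(robots_meta_for_category_page(1))   # index, follow
print(robots_meta_for_category_page(7))   # noindex, follow
```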

MALE SPEAKER: Hey, John. On that point, I had a question. So we have around maybe 40, 50 more category pages than our competitors. And the reason we do that is we go a little bit more niche to the user need. So here's an example, I just posted the URL on the Chat. But if you look at the top results, the top results are generic. They're not addressing the user need until we come around the middle, a little bit below the middle, of the page. And so my question is, when we see something like this, does it mean that essentially Google is saying that for this query, while the query is more specific from the user, your machines have learned that the more generic category page better meets the intent? Or does it mean that no, not necessarily, maybe more addressing of the need spot on is a better way and keep doing that? And this is an example of what maybe concerns us. Maybe by having 50 more category pages than others, we have created some appearance that maybe there is some duplicate, or repetitive content across some category pages. And so that's a question I have.

JOHN MUELLER: In general, just because something is ranking highly in search, doesn't mean that that's what we want to have ranking highly, or that they're using all of the techniques that we think are perfect and great. So I wouldn't assume that just because something is ranking for one query, that this is actually the best kind of content that can be made on the web. And with that in mind, I think if you're doing more than just providing something generic, more than just providing a kind of dry list of things that match a category, I think that's always a good strategy.

MALE SPEAKER: So your suggestion is, even though the top results are for the more generic category, that we should keep doing what we're doing because what we're doing meets exactly the intent. And there's actually people searching for this. It's not like a term I just made up. It has like 700 keyword volume per month, or something.

JOHN MUELLER: Yeah, I mean, if you're doing something of really high quality for this kind of a query, then I think that's perfectly fine. I think there are definitely lots of sites that try to target terms like that, and they have really low quality content, or they just kind of jumble things together, and say, well, this page kind of matches those keywords. And that's not the direction I would go there.

MALE SPEAKER: No. The results are spot on matching the user query.

JOHN MUELLER: That sounds good.

MALE SPEAKER: Thank you.

MALE SPEAKER: Hi, John. Just one follow up on this previous question about pagination. I just wanted to know, what if I point a rel=canonical from my component pages to my view-all page, for example, from my component pages to my main category URL? Will Google crawl all of the links present on my component pages and index those?

JOHN MUELLER: Essentially what will happen there is either we'll ignore the rel=canonical because we recognize that these pages are too different, or we will use the rel=canonical, and then we won't actually get any additional value out of those additional pages within that category. So if you have a paginated set of pages and you rel=canonical all of them to the first page, then either we'll ignore it because we think these pages are different, or we'll say, well, if you say it's just the first page, then we'll focus on the first page. And we'll lose all of the additional information that is on the other pages. So that's something where I'd make a more direct choice and say, well, I only want the first page to be indexed, and maybe noindex all of the other ones. Or allow them all to be indexed, if that's what you want, and use rel=next and rel=previous to make it easier for us to recognize the connection there.

MALE SPEAKER: The thing is-- let's say I have this one category page, and let's say we are just listing products, or maybe some number of, you could say, news articles like that. So maybe we have two, three, four pages of pagination there. On page two, on page three, we have [INAUDIBLE] articles as well. So will Google crawl all of the links?

JOHN MUELLER: Probably. We'll probably crawl all of the links there, but like I mentioned, it's something where, as a webmaster, I would make a direct choice and either say, focus on the first page, or index all of these pages, if that's possible, and not make an open-ended choice where you're saying, well, primarily focus on the first page, but I accept that maybe it doesn't work all the time. You're kind of leaving it open. I think when it comes to SEO, it always makes sense to be as straightforward as possible with what you actually want to have happen. And make sure that your signals are all consistent. So if you're using a canonical, make sure that the pages are actually equivalent. If you want to have one page indexed and you don't want to have the other ones indexed, then be direct and say, well, this one is allowed to be indexed and the other ones all have a noindex. So as much as possible, really try to be as straightforward as you can, so that search engines don't need to make any kind of a judgment call when they run across the directives on your pages.
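
To illustrate the "consistent signals" point, here is a sketch that emits rel="next"/rel="prev" links for a paginated series, with each page canonicalizing to itself rather than to page one; the URL is hypothetical:

```python
def pagination_links(base_url: str, page: int, last_page: int) -> list[str]:
    links = []
    if page > 1:
        links.append(f'<link rel="prev" href="{base_url}?page={page - 1}">')
    if page < last_page:
        links.append(f'<link rel="next" href="{base_url}?page={page + 1}">')
    # Each page canonicalizes to itself, so the canonical never contradicts
    # the rel=next/prev series the way a canonical-to-page-1 would.
    links.append(f'<link rel="canonical" href="{base_url}?page={page}">')
    return links

for tag in pagination_links("https://shop.example.com/books", 2, 10):
    print(tag)
```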

MALE SPEAKER: Thank you.

JOHN MUELLER: All right. What's the best way to target two different areas which have the same name under different counties in the UK? The area is called St. Ives and appears both in Cambridgeshire and Cornwall, UK. I have trouble pronouncing all these UK names. We're not able to have a generic page due to pricing issues.

This sounds like something where maybe you just have two different pages on your website for these specific areas. It's always going to be tricky if you have something that might apply to different locations. So on the one hand, it's important that these different places are indexed separately, so that they can be found separately. On the other hand, you have to assume that sometimes users will make it to the wrong page, so you have to have some kind of an additional way to let the user know that there's also a variation for this other place that might be more suited to them, that they might want to take a look at as well.

This is something we see quite often when it comes to country targeting, geotargeting, where maybe you have a page for the UK and a page for the US. And a user from the UK might accidentally go to the US page, maybe because it's shown in search, maybe because it's linked from somewhere else. And essentially, as a webmaster, you can't expect that you will only get the right users to those pages. So what you could do with countries specifically-- it's probably a bit easier than with cities-- is try to recognize the user's location and show a small banner on top that says, hey, it looks like you're from the UK. You can get free shipping from our UK website, for example. Or, it looks like you're from the US. We have a US-based website that has all the prices in dollars, so you can order things directly there.

So that kind of cross connection between those pages definitely makes sense, so that when users go to the wrong version of the page-- and they will. It will always happen at some point, either through search or through other links on the web. That way, at least, the user is able to actually find the version that matches their intent better. So cross-linking those, making it easy for users to find which version actually works best for them, is definitely a good idea there.
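
A minimal sketch of the cross-linking banner John suggests, assuming the visitor's country has already been resolved server-side (for example via a GeoIP lookup); the site list is made up:

```python
COUNTRY_SITES = {
    "GB": ("UK", "https://www.example.co.uk/"),
    "US": ("US", "https://www.example.com/"),
}

def cross_site_banner(visitor_country: str, current_site: str) -> str | None:
    if visitor_country not in COUNTRY_SITES:
        return None
    label, url = COUNTRY_SITES[visitor_country]
    if url == current_site:
        return None  # Already on the matching version; no banner needed.
    # The banner links to the better-matching site instead of forcing a redirect.
    return (f"It looks like you're in the {label}. "
            f"Our {label} site at {url} may suit you better.")

print(cross_site_banner("GB", "https://www.example.com/"))
```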

ROBB YOUNG: John, we have a similar issue in the US because we have loads of different city and state pages on our site, as you know. But this seems like a fairly simple case of creating the content so that one is-- I think Google does a very good job of recognizing state-level entity stuff. So if you end a product URL in ny versus ca, it definitely knows that one is New York versus California, because it recognizes those state abbreviations. And I suspect it would be the same with that case in the UK. If you ended one with St. Ives, Cambridgeshire, and the other with St. Ives, Cornwall, or even just did a county abbreviation, it does quite a good job, usually, of recognizing that.

JOHN MUELLER: We definitely try to recognize that. I think that on a country level--

ROBB YOUNG: Then the pages should be different enough that you should recognize it anyway.

JOHN MUELLER: I think we definitely try to recognize that on a country level. Even if there is just the URL, that we see like, dash de, maybe for Germany, dash uk for the UK. On a state level, we might be able to get that right. On a city level, I think that's really going to be hard. So I think for the user it might be more visible. If they see it in search, or they see, well, the URL says this regional information there, but it's really tricky on a city level.

ROBB YOUNG: It works for us in the US. I guess it depends what you're selling. If it's a software-as-a-service or something, I think that domain is-- mailboxes, or something where it's essentially the same service in every area. It's going to be hard to create content for essentially the same page.

JOHN MUELLER: I think in a case like your situation, where you really have different services, different products that you're offering in those different locations, it's a lot easier. Like you mentioned, other people don't have it that easy, but it all depends on your website.

Could you add a simple feature to Webmaster Tools that could send an email to owners every time Webmaster Tools sees new indexed pages that contain spammy keywords? Webmasters could fix the problems in real time, recognize cloaking, parasite hosting, et cetera. We've looked at this. We actually have a team that's looking into providing something similar in Webmaster Tools. It's always kind of tricky because, on the one hand, you can do a lot of this with Google Alerts. I do this for my websites. For example, I essentially do a site: query, and then a long list of spammy keywords that I think would not normally appear on my website, and set that up as a Google Alert, so that as soon as something gets indexed with any of those keywords from my site, I actually get an email. And that works really well. One of my sites was hacked once, and that got flagged. But of course, it's something that happens after these pages actually get indexed, so you're a little bit behind. Google Alerts is fairly quick, but it's still maybe a couple of days behind.

The other difficulty that we see is that a lot of people don't use Webmaster Tools as this kind of real-time monitoring system. At the moment, we'll send them an email, and maybe they'll respond to it within a couple of weeks. And that's something where, if we spend a lot of time actually making a system that reduces the latency by maybe one or two days, and on average people respond to these after a month, I don't know if that's really worth all of the effort there. So that's the kind of trade-off we have to make. And I don't know what the final decision will be there. It probably depends on how easily we can implement something like this, but it's definitely something we've been looking at as well.
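
The alert John describes can be reproduced with a query along these lines; the spam terms are illustrative, and the resulting string is what you would paste into Google Alerts:

```python
SPAM_TERMS = ["viagra", "casino", "payday loans", "replica watches"]

def hacked_content_alert_query(domain: str) -> str:
    # Quote multi-word terms so they match as phrases.
    terms = " OR ".join(f'"{t}"' if " " in t else t for t in SPAM_TERMS)
    return f"site:{domain} ({terms})"

# Set this up as a Google Alert: you get an email as soon as a page on
# your site gets indexed containing any of these terms.
print(hacked_content_alert_query("example.com"))
# site:example.com (viagra OR casino OR "payday loans" OR "replica watches")
```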

MALE SPEAKER: John, for Webmaster Tools, there's an update on Google's support page about this Change of Address tool, too. But I don't see this particular feature available in my Webmaster Tools. Is this something in beta, or is it something that's still rolling out?

JOHN MUELLER: It should be directly visible in Webmaster Tools. Let me just double check very, very briefly where that exactly is. I think what you need to do is go into your website, and then, on the top right, you have the little gear icon, next to the Help button. And in there, you have the Change of Address Tool. And it'll double check to see that there are other sites that you could [INAUDIBLE]. And it'll guide you through that process.

MALE SPEAKER: So I see just two options there. There is one, it says Webmaster Tools Preferences. The other says, Site Settings, not Change of Address Tool. I just checked on all websites.

JOHN MUELLER: You have to click through to your website first.

MALE SPEAKER: I am just in one of my accounts. So let's say I'm just on my main website and just clicking on this gear icon. So I only see Webmaster Tool Preferences and the site settings.

JOHN MUELLER: You should see more, though. So maybe you're not fully verified as an owner there. Is that possible?

MALE SPEAKER: I am the owner of my particular account, so I don't know.

JOHN MUELLER: It should be there, though. I'll double check if there's something specific that limits that, but that's essentially the location where it should be. You need to be verified as an actual owner of the site, so not just added with read only permissions.

MALE SPEAKER: OK. I'll check that.

JOHN MUELLER: All right. We have a large website with hreflang markup and Sitemaps. The Sitemaps are 99% correct, but we found some fairly consequential URLs are missing. In your opinion, would this be enough to prevent hreflang from working correctly?

That sounds like you have it mostly set up correctly. So when it comes to hreflang, the important part is really that the URLs work together as a pair. If you have two language versions, they have to point to each other. And if one of those pages doesn't point back, for example, then we'll drop that part of the hreflang markup. And if for most of your pages you have this set up properly, then it'll work for most of those pages. It works on a per-page basis. So if you have a few pages within your website that are set up incorrectly, that wouldn't affect the rest of your website. It's really only within those pages that you have in your hreflang pairs. If there's anything broken there, then we might drop that individual page out. For example, if you have two pages and the German page points at the English page, but the English page just points at an x-default page, not back at that German page, then that's not a closed pair for hreflang, and we would drop that markup. But again, this is on a per-page basis. If there are some pages within your website that don't work right, that wouldn't affect the other pages that actually do have this markup correct.
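
Since hreflang only counts when both sides of a pair point at each other, a reciprocity check is easy to sketch; the annotations below are hypothetical and mirror the broken example John gives:

```python
annotations = {
    "https://example.com/de/": {"en": "https://example.com/en/"},
    "https://example.com/en/": {"x-default": "https://example.com/"},
}

def broken_pairs(annotations: dict[str, dict[str, str]]) -> list[tuple[str, str]]:
    broken = []
    for page, alternates in annotations.items():
        for target in alternates.values():
            # The target page must list `page` among its own alternates;
            # otherwise this annotation gets dropped for this pair.
            if page not in annotations.get(target, {}).values():
                broken.append((page, target))
    return broken

# Both annotations here fail: the English page doesn't point back at the
# German page, and the x-default page declares no alternates at all.
print(broken_pairs(annotations))
```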

MIHAI APERGHIS: John, if I can ask-- I've got a few client-specific questions. I'm going to spread them out, so I give everyone an opportunity. So this is one of the clients. He's a very big automotive publisher in the United States. And he's trying to improve his URL structure a bit, both for users and for SEO purposes. Obviously, he has more than 100,000 visitors per day, so he's worried that he might see some drops in traffic when he changes the URL structure. The idea is-- what we noticed with some of our competitors-- for example, for car reviews. Let's say the BMW 3 Series review. Every year, there's a new model coming out, so we were thinking of having a single URL, like /BMW/3seriesreview. And when the new model shows up, the 2016 one, the current article, the current review of the 2015 model, would go to a separate URL. And then we would replace the current URL with the new review. That way, we would have better authority on that page. And it will help us. People will mention the new review. They will always go to the actual, new review and so forth. Is that something that would be OK to do? Do you think that would be problematic?

JOHN MUELLER: That sounds great. I think the big advantage there is, of course, that we always know which is the main page for this specific product, or this article that you have. For example, we've seen that this is sometimes a problem with our own websites. We've seen that, for example, we'll set up a new URL for the new version of a product, and we'll notice that if you search for the product name, actually the old version of the product shows up in search because that's the one people have been linking to. That's the one we think is the most relevant so far. And it takes us quite a long time to figure out that this new page is actually the new, relevant one. So if you keep one main URL, and you move the old version to an archive, and you put the new content on the same URL that you had before, that's perfect.
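
A sketch of the URL scheme being discussed: one stable review URL that always carries the newest review, with superseded reviews moved to dated archive URLs. All paths are hypothetical:

```python
CURRENT_REVIEW = "/bmw/3-series-review"  # the stable URL that keeps its links

def publish_new_review(outgoing_year: int, archive: dict[int, str]) -> str:
    # Move the content currently at CURRENT_REVIEW to a dated archive URL,
    # then place the new review at the stable URL. Links and signals keep
    # accumulating on CURRENT_REVIEW rather than resetting each year.
    archive[outgoing_year] = f"{CURRENT_REVIEW}-{outgoing_year}"
    return CURRENT_REVIEW

archive: dict[int, str] = {}
print(publish_new_review(2015, archive))  # /bmw/3-series-review
print(archive)                            # {2015: '/bmw/3-series-review-2015'}
```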

MIHAI APERGHIS: So if once a year, we change the content of that page, it's not a problem for Google? Like saying, oh, you're switching content and I don't know.

JOHN MUELLER: No. That's perfectly fine.

MIHAI APERGHIS: OK. Thanks.

JOHN MUELLER: All right. How important is it to disavow low-quality but natural backlinks? So I'm guessing these are sites that you don't really trust, but it's a natural backlink. In general, this isn't something that you really need to focus on. So from my point of view, if you just happened to stumble across something like this, I would ignore it. I'd just leave it. If you really feel that these are unnatural links, then throw them in your disavow file, and you don't have to worry about them either. But essentially, if these are just low-quality sites that are linking to your site, that's not something I'd worry about. It's not the case that there's some kind of low-quality PageRank that flows to your website and causes your site any problems. Essentially, if we think these are low-quality sites, we'll probably ignore the links anyway.
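
For the genuinely unnatural case, the disavow file format itself is simple: one domain: entry or bare URL per line, with # for comments. A sketch with placeholder entries:

```python
unnatural_links = [
    "domain:spammy-directory.example",          # disavow a whole domain
    "http://link-farm.example/some-page.html",  # or a single URL
]

with open("disavow.txt", "w") as f:
    f.write("# Links we consider unnatural.\n")
    f.write("# Low-quality but natural links are simply left alone.\n")
    for entry in unnatural_links:
        f.write(entry + "\n")
```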

ROBB YOUNG: John, can I ask a follow up question to that?

JOHN MUELLER: Sure.

ROBB YOUNG: It's fairly specific to our case, but you can answer it generically if you like. What would happen, in our case, but generically, if we had a 301 from our old bad domain to a new one-- you've said in the past that that would cause an issue for the new one. But what if we, within the new one, disavowed the old one? What would take precedence? Would it be the disavow that would say, actually, ignore the old one? Or would it be the 301 that says, actually, I've moved over here?

JOHN MUELLER: Disavow wouldn't work for a 301.

ROBB YOUNG: It wouldn't work.

JOHN MUELLER: So what would happen with a disavow there is essentially, if there are any links from the old site to the new one, then that's something we'd drop. But if you're redirecting a site to your new site and you want to disavow some of the old links, then you need to disavow the links that are actually pointing to the old site that are now being forwarded to your new site. So in a case like that, the 301 is kind of separate from the whole disavow process. It's not something that would be connected there.

ROBB YOUNG: So the 301 would still work and be operational and be trusted. So disavowing that domain would make no difference. Because we're obviously trying to disavow our own domain, not the links that were coming into it. We're essentially trying to find a workaround, which I'm sure you're not going to advise on.

JOHN MUELLER: I don't think the disavow would do anything in a case like that. So if you're redirecting the whole domain to the new one, then I think we would essentially just ignore that disavow because we're not seeing it as a link. We're seeing it as a redirect.

ROBB YOUNG: Right. OK. Just trying to find things that might be worth a try. Thanks.

JOHN MUELLER: Some players have created very dominant positions in highly competitive niches. Has Google reduced the value of backlinks in the long run to preserve an opportunity for novelty in these niches? For example, to avoid permanent monopolies.

This is always a good question, but it's also a tricky question in that sense. On the one hand, we use over 200 factors for crawling, indexing, and ranking, so it's something where I wouldn't focus on links and assume that links are the only reason these sites rank. We take a lot of different factors into account with websites. And it's not impossible for new sites to make it into the top in a very competitive niche, but on the other hand, it's also not very easy. It's the same with anything that's competitive. That might be physical businesses, for example. If you're trying to compete with businesses that have been working in an area for a really long time and they do a really good job of what they're doing, then that's always going to be hard. Whereas, if you're trying to get into a market where you see these businesses have been active for a long time, but they do a terrible job of what they're trying to do, or maybe they're just focused on some bigger-picture ideas and they don't really bother so much about the details in this niche that you're trying to target, then that might be a really good opportunity to disrupt things. And say, well, I have a really fantastic product or service that I can offer that kind of falls into this niche where existing players have been active, but my website is so good, or my services, what I do is so interesting, that I might have a chance to bump them out.

But that's essentially more of a business decision than something where we'd say, this comes from an SEO side of things. So on the one hand, like I mentioned, it's really hard to get into some of these competitive areas. On the other hand, it's probably worthwhile trying to be different from the others and not just say, well, I'm going to offer the exact same products that everyone else has and hope I can squeeze my way into the top there, because that's probably not going to happen.

ROBB YOUNG: John, sorry to keep joining in. Does some of this moving around still happen anyway with QDF-- does QDF still exist? If you have something new and novel and it's fresh, then Google wants it anyway. So doesn't that kind of take care of what he's saying, in a way? Not forever. And it wouldn't affect the value of the backlinks, but you certainly see things that are new come up and then drop off. The important part is trying to keep them there, from the business side. But there's always a chance.

JOHN MUELLER: I think this is one of those areas where it helps that we have so many different ranking factors. Maybe we can see that there are some things where it makes sense to show something fresher, and other things where it makes sense to focus on the things that we found to be really useful for users. So it's not a magic bullet. It's not that you can just say, well, I'll tweak my site and make it look new, and then suddenly I'll be able to rank above Amazon for online book store, or whatever. So these are the kinds of things where we do take a lot of different factors into account. And if we can see that new sites are really offering something fantastic, then our algorithms will try to recognize that and show that appropriately.

Over the last couple of months, we've received an increased number of spam referrals from some sites. Does this have any negative effects on our search rankings? Do you have any advice on how to stop this from happening? So I think this is essentially referral spam, where people will use a fake referrer, and access a site, and hope that it shows up in Analytics, or whatever kind of website tracking system you have. And this is something that we essentially ignore. So from a search side, the referrer that people use is essentially irrelevant. That's not something we would even see. We don't even use the Analytics data for search at all. On the other hand, I understand that this is kind of confusing. And I believe there are ways, for example in Analytics, that you can filter this kind of request out, or block your site completely for people who are using these kinds of fake referrers. So that's essentially up to you. We don't take that into account at all for crawling, indexing, and ranking. If you want to block them, that's fine. If you just want to say, no, they don't really matter at all, then that's fine, too. So whatever you'd like to do there.

There seems to be a question in the chat, a longer one. I don't have a microphone right now. My question is, we have some texts on our online store that are very long, but only the first four or so lines are displayed, and the rest are hidden with CSS until the user clicks on the Show More button. Is this something I need to worry about? This kind of question comes up regularly. Essentially, it's a question of whether or not hidden text is acceptable as a strategy, or something like that. From our point of view, what happens there is we'll crawl the HTML page, and we'll probably see the text in the HTML page. So we at least have knowledge of this text that's hidden. On the other hand, when we render the page, when we look at what it actually looks like in a browser, we'll see that this text is actually hidden. So we'll have knowledge of it, but we'll try not to use it as primary content for the page for ranking.

So if this is the content that you really, really want to rank for, then I'd make sure it's visible. If it's content that's just additional information that maybe you'd want to show up in search if someone is specifically looking for it, then it's fine to leave it hidden like that. I think that's the trade-off you have to make. Think about, is this really critical content for your page, for your website in search? And if so, make sure that it's visible. Also for users, of course, because if they search for this content, they click on your page, and they don't see that content at all, then they will be confused and go somewhere else. So that's the trade-off I'd make there. If it's just additional content that you don't really need to have visible in search, then fine. Leave it, noindex it, put it behind a tab, or however you want to treat that in the UI.
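
Going back to the referral-spam question above: blocking those requests outright is one of the options John mentions. A minimal sketch as WSGI middleware, with a hypothetical blocklist:

```python
BAD_REFERRERS = {"spam-referrer.example", "buttons-for-website.example"}

def referrer_filter(app):
    """Wrap a WSGI app and reject requests with known fake referrers."""
    def middleware(environ, start_response):
        referrer = environ.get("HTTP_REFERER", "")
        if any(bad in referrer for bad in BAD_REFERRERS):
            # Refuse the request before it ever reaches the application
            # (or shows up in server-side analytics).
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return middleware
```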

MALE SPEAKER: Hey, John. I had a video-related question.

JOHN MUELLER: OK.

MALE SPEAKER: So I just sent the URL from Google Developers Tools. It shows that on this page, it seems like we have marked up the video object correctly, but when you go to the search results-- and this has been going on for a couple of weeks now, so the page has been indexed and everything-- Google, for some reason, doesn't see a video on that page. Are we doing something wrong? Is there something other than putting a video object that we should be doing when there is a video? And this video is something that we produced, so it's not like we have embedded it from another site and that's why maybe it's not showing.

JOHN MUELLER: I guess there are different aspects there. We do try to figure out if this is a critical video on a page and then treat that appropriately. The other side is that for video search results, we sometimes have to make a call there algorithmically and think about whether or not it makes sense to treat this page primarily as a video landing page, or primarily as essentially a web page. And sometimes you'll see that split there. So if you've marked up the pages correctly, if you've marked up the videos correctly, then that's essentially what you need to be doing. But at the same time, there's no guarantee that we'll always show the video snippet or always show it in video search.

MALE SPEAKER: No, I'm not wondering why you're not showing the video snippet. I'm wondering more why it's saying that there are no video results.

JOHN MUELLER: Probably because we don't treat it as a video landing page. So that kind of falls into that where we can recognize the markup, but that doesn't mean we'll automatically treat it as a video landing page.

MALE SPEAKER: Are you saying that a page can either be a web page or a video page, but it cannot be both?

JOHN MUELLER: I wouldn't put it quite that binary, but essentially, if we treat a page primarily as a web page, then we're not going to be showing it in video search. So I think with your example here, it's probably not exactly what I mean there. But for example, what we often see is videos in a sidebar, or videos in a footer somewhere, or videos way out on the bottom of a page. And that's the kind of content where we'd say, well, we can recognize the video is there, but this isn't really a video landing page. This isn't really focused on a video, and we're not going to bubble that up in video search, even if you search specifically for that page. I think with your example there, it's something where I'd almost say, well, I'd probably talk to the team about that and see if that's something that we should be doing better. But in general, we can't guarantee that just because you have the video markup, we'll always treat it as a video landing page.
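
The transcript doesn't say which markup the site used, but one documented way to mark up a video landing page is a schema.org VideoObject as JSON-LD; all values below are placeholders:

```python
import json

video_ld = {
    "@context": "http://schema.org",
    "@type": "VideoObject",
    "name": "Example product walkthrough",
    "description": "A short walkthrough of the example product.",
    "thumbnailUrl": "https://www.example.com/thumbs/walkthrough.jpg",
    "uploadDate": "2015-03-01",
    "contentUrl": "https://www.example.com/videos/walkthrough.mp4",
}

# Embedded in the page head; marking up the video correctly is necessary,
# but per the discussion above it doesn't guarantee a video-result treatment.
print('<script type="application/ld+json">')
print(json.dumps(video_ld, indent=2))
print('</script>')
```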

MALE SPEAKER: Got it. I will provide some more examples, and I'll let you know.

JOHN MUELLER: Sure, that sounds great.

MALE SPEAKER: Thank you.

JOHN MUELLER: All right. Somehow the questions are in a different order now. Hi, I'm seeing a bot attack on our Russian website. It's difficult to identify the bots to block. Visits come via the organic channel, Google mostly. Bounce rates are more than 95%, and time on site is less than five seconds. What measures can I take to protect my organic traffic?

In general, this isn't something that we take into account for search. So if you're seeing this kind of traffic on your website in Analytics, or in your server logs, that's not something that I'd say we even really notice from a search point of view. I don't think we even use that in search. So with that in mind, you probably want to focus on technical means of blocking these bots and not really worry about the search side. If you have technical means to block these bots and make sure that your users are seeing the site normally, then that's essentially what we're looking for from a search point of view.

MIHAI APERGHIS: Hey, John. I'm going to try to go ahead with my second question. So we have a client that operates an ISO consultancy for small and medium businesses, and corporations as well. And so far, they have built a website for each type of ISO, each ISO number specifically, because there were apparently a few until now, and they wanted to focus and make sure the user experience is specific to each standard as well. This is one of the main sites, for example. The idea is that they've now decided they have enough products, and they would like to have it all under a single brand. And that would mean going ahead and building a single domain, so that they could offer all of this information under a single website. And I wanted to ask you, since we have a single domain for each of the standards and we're going to have one domain that features all of them under subdirectories, or subdomains, we're not sure yet, is there anything we should be worried about? With a single website, might the relevance be diluted across all of these standards? Or is it not going to be an issue?

JOHN MUELLER: I think, in the long run, it's not going to be an issue, but if you migrate a lot of websites into a single website, you'll see fluctuations. And this is something that can take quite some time to settle down because it's not the same as a clean move from one domain to another. You're essentially taking all of these different websites, combining them into one, and that's something that is kind of tricky on our side, to combine the signals appropriately. So I think you need to accept that this is going to take, I'm just going to guess, on the order of one or two months for things to settle down a little bit. So maybe do it at a time where you know that search traffic is generally low, so that you don't really have to worry about that, or maybe just bite the bullet and say, well, we have to get this done. And we have to accept that search is going to be kind of confused for a while.

MIHAI APERGHIS: Right. I was thinking of a maximum of six months. So one or two months would be great.

JOHN MUELLER: I think, in the long run, it's definitely a good decision to make. And in the long run, I'm pretty sure that this will be good for your website in search. I just think that the whole migration is going to be, I don't know, maybe painful? It's really hard to say.

MIHAI APERGHIS: Right. But from a branding perspective, that definitely makes sense, and we were heading in that direction anyway. We weren't sure if we were going to keep separate websites and then maybe link to them [INAUDIBLE], or something like that, or build it into a single domain. I personally think that building into a single domain is a lot more beneficial, especially from a technical point of view.

JOHN MUELLER: I'd agree. I'd recommend focusing on one site, one domain, if at all possible.

MIHAI APERGHIS: Thanks a lot.

JOHN MUELLER: All right. Here's a question about crawling rates. We know crawling rates adapt to server capacity and availability. Assuming there's no issue with these, are crawling rates set per page or per website?

In general, we set them per server. So that's something where we have a maximum limit, where we say, well, on this server there are a bunch of websites, and at maximum, we'll try to crawl this many pages per day, for example. And then how we split that up within that server, across the different websites, if there are different websites on that server, that's usually based on a variety of factors, which could include things like how often we think these pages are actually changing, or how important we think these pages are. So especially within a website, you'll see things like we'll be crawling the homepage maybe daily, or maybe even every couple of hours if it's a really important homepage. And we'll be crawling individual, lower-level pages a lot less frequently. That's something that we also do across websites, where we try to recognize, this is a really important website, or this is a website that's been changing very frequently where we need to keep up, and we'll be crawling that more.

I think one aspect that's commonly confused there is that just because we crawl something more frequently doesn't mean that it's more relevant in search. Crawling is essentially just the technical requirement for us to even pick something up for search, but it doesn't mean that we'll be ranking it better, or that we'll be showing it more often in search, just because we've been crawling it more often. So the relevance side of things is completely separate from the crawling side. And I wouldn't try to artificially increase the crawl rate and hope that this will increase my website's ranking. Essentially what will happen is we'll crawl a lot more. We'll use your server's resources more, but we'll still show it the same way in search.

What do we have here? I have a simple, static page that has a simple widget that shows the current time. Its source code will change every time Googlebot fetches it. Is that good or bad? Will Googlebot change its behavior based on that? What if the dynamic portion was bigger? I wouldn't say that's good or bad. I think we'll try to recognize whether this is something critical on the page, treat it appropriately, and try to keep up if we think that makes sense. But like I mentioned before, just because we crawl something frequently doesn't mean it's more relevant. It's not something that will automatically show up higher in search. So essentially, if you're trying to encourage Google to crawl more frequently, that doesn't mean that we'll actually rank this page better in search. We might just be crawling it and trying to figure out what all has been changing there. And if this is something that's in the footer, or in the sidebar somewhere, we'll probably tend to ignore it anyway and focus on the main content and the changes there.

All right. We just have a couple minutes left and tons of questions left over. Do any of you have any questions that we need to get through before we finish?

MIHAI APERGHIS: One last one. For people who use mobile websites through a separate subdomain, using separate URLs, the guide recommends that you use the 302. And if everything is set up correctly, that would be seen as a 301 basically. So 302, 301, doesn't really matter. Is it the same for a website that uses the browser language to redirect the user to a certain language version of the website? Does the 302 make sense in that way?

JOHN MUELLER: So that would be kind of like an x-default, hreflang landing page, where you handle the redirection of the user on your side. And that's something where we'd also recommend a 302, primarily because a 302 doesn't get cached by the network in between. So it's kind of a technical reason, and not an SEO reason, to use a 302 over a 301.

MIHAI APERGHIS: Right, but we have to make sure the rel alternates are properly set up, otherwise-- one issue I found with this is that if the language version-- for mobile websites, for example, you have one URL for the desktop version and one URL for the mobile version. For multi-language websites, you may have the case where all the language versions are in subdirectories and the main page doesn't really exist. It just serves to automatically redirect to /en, or /ro, or something like that. Is it still a good idea to do the 302 from the main URL, www.--

JOHN MUELLER: Sure. Yes. Also, with the 302, what generally happens is we index the redirecting URL, which is probably what you want in a case like this because then we'll send the user to the generic page where you can do your magic with the redirects to send them to the right version. So that's a 302.
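
A minimal sketch of the 302 language redirect discussed here, using only the Python standard library; the language-to-path mapping is hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

LANGUAGE_PATHS = {"en": "/en/", "ro": "/ro/"}

class LanguageRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            # Crude Accept-Language sniff: first two letters of the header.
            lang = self.headers.get("Accept-Language", "")[:2].lower()
            target = LANGUAGE_PATHS.get(lang, "/en/")
            # 302, not 301: a temporary redirect keeps intermediate caches
            # from pinning one language version for every visitor, and the
            # generic "/" URL stays the one that gets indexed.
            self.send_response(302)
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), LanguageRedirect).serve_forever()
```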

ROBB YOUNG: John, do you want to have a look at the chat that's going on in regards to Odysseus' previous question? So basically, I think that one of the ways he was using to test was a site: query, but within the video search results. And that doesn't appear to show results for any domain, even if they have videos. So it's either a bug, or it's deliberately designed not to work within video search.

JOHN MUELLER: I don't know. It might be that they have different directives there. I know, for example, that for image search, for a long time they didn't support a site: query there either. So within the individual search facets, I wouldn't assume that the same directives work everywhere. That's something you can try out and see.

ROBB YOUNG: Even YouTube doesn't show any video results.

JOHN MUELLER: OK. Then probably the site: query doesn't work there.

ROBB YOUNG: So Odysseus needs to find a different way to test his theory.

JOHN MUELLER: Usually, what I try to do in a case like that is take something like a title, or something unique from the page, and search for that, and see if that shows up. Because site: queries are very artificial, from our point of view. At least in web search, we'll try to show you what you're looking for. So for example, if you're redirecting one site to another and you do a site: query for the old domain, then we'll try to show you that old information, even though it's not really what we'd normally show. Similarly, if you do a site: query on a smartphone, or with the smartphone settings in Chrome, we'll show you exactly those URLs, even though we might know that there's a mobile version on a different URL. So it's kind of an artificial query, where you have to be careful with what you take away from it.

MALE SPEAKER: Thanks, Rob.

JOHN MUELLER: All right. It looks like we're out of time, but if anyone has a last question, I am happy to grab that.

MIHAI APERGHIS: Maybe some new invites on the Webmaster beta?

JOHN MUELLER: Oh, yeah. I think we're doing a new round next week, and we want to add a bunch more of the people who have signed up for that. I don't know if we can add all of them for the next set up, but I think that's happening next week, so stay tuned.

MIHAI APERGHIS: Fingers crossed.

JOHN MUELLER: I like where they're headed with the new feature there. It gives you a little bit more information, a bit more options, so it'll be interesting to see how it comes out. So with that--

MIHAI APERGHIS: John, one last thing. Regarding the new Structured Data Testing Tool beta, is there any chance you'll be adding a preview like you used to have in the old tool? Because that was really useful to show clients-- this is how it looks now, this is how it might look if you fix all of your issues. That would be really awesome.

JOHN MUELLER: That's common feedback I've heard. I don't know how easily they'll be able to add that, because it really focuses on the structured data, not on the search side. But I'll double check with the team to see if there's any chance of doing that.

All right. So with that, let's take a break here. Thank you for all of your questions again. And if I didn't get to the questions that you submitted before, feel free to copy them over to the next event. I'll set those up later today. Or feel free to open up a forum thread to discuss this in one of the Webmaster forums. Have a great weekend, everyone.

ROBB YOUNG: Thanks, John. Cheers.

MIHAI APERGHIS: Thanks.

JOHN MUELLER: Bye, everyone.