
Google+ Hangouts - Office Hours - 10 February 2015



Transcript Of The Office Hours Hangout


JOHN MUELLER: OK. And welcome, everyone, to today's Google Webmaster Central office-hours hangouts. My name is John Mueller. I'm a webmaster trends analyst at Google in Switzerland. And part of what I do is talk with webmasters, publishers, people who make websites, like some of those in the Hangout now and some of you who submitted questions already. I try to bring all the information back to the teams internally as well, to make sure that they're aware of your worries and that you're aware of where we're headed and the kind of questions that we're running across. So maybe we can just get started with one of you guys, a question from one of you.

BARUCH LABUNSKI: Yeah. Well, [INAUDIBLE].

JOHN MUELLER: OK, Baruch. Go ahead.

BARUCH LABUNSKI: I just wanted to know, will mobile-friendliness become a ranking factor sometime over the summer? Because it kind of feels like we're reaching that point. I know you can't disclose certain things, but I just wanted to know. We're really focusing on that quite a lot because of the mobile usability errors. With a lot of different clients, it's starting to pop up; I always check it now, and the errors are there.

JOHN MUELLER: Yeah. I mean, this is something where we've seen the trend head towards mobile, towards smartphones, for quite a while now. And we've been really pushing in that direction. I think two years ago, we did a blog post about some initial ranking changes we made with regards to mobile sites and issues that are popping up there. And recently, I think in December, when we launched the mobile-friendly label, we also mentioned that we're working on experiments there. So I wouldn't be surprised if you see more in that direction, but at the moment, I don't have anything specific to announce. And it's probably not something where we'd say, well, we're going to do this in the summer. I imagine, before we do something bigger around that, we'll probably do a blog post and give some more information out to the general public as well.

BARUCH LABUNSKI: So a blog post and then announce the date, yeah? When it's going to be a ranking factor?

JOHN MUELLER: We'll see what we get to.

BARUCH LABUNSKI: OK.

JOHN MUELLER: I think, in general, when we make these kinds of changes, we'll try to put them out in a blog post, because this is essentially a technical issue that webmasters can solve. And if we don't surprise them with it, then our hope is that they'll make more mobile-friendly sites that work well for users-- both their users and our users.

BARRY: So nothing has been baked in yet in terms of ranking around mobile in this new algorithm that we're talking about?

JOHN MUELLER: I mean, we're always experimenting, so it's possible that some of you guys are seeing experiments happening. At the moment, it's not something where we have anything definitive to announce just yet.

BARRY: OK. Thank you.

BARUCH LABUNSKI: Thank you.

MIHAI APERGHIS: All right. Can I go ahead with a question, John?

JOHN MUELLER: Sure. Go for it.

MIHAI APERGHIS: So I have a friend with a unique case-- well, at least a unique case from my experience. This is actually also related to your announcement regarding Googlebot crawling from different IPs, geotargeted crawling. And I don't know if this applies internationally, to countries other than the US. My client is in Romania, and he basically redirects the user based on which city in Romania the user enters the website from. He redirects them to a page specific to that city, and as far as I know, he uses a 302 redirect. I'm not sure what the best practice is, or whether Googlebot would be able to see those city pages now that there's geotargeted crawling.

JOHN MUELLER: At the moment, we're not quite at that level of detail. I think we're crawling from just a handful of locations, and those are individual countries. And I don't think it would make sense to crawl from, say, all countries and all cities to see every variation of the content. So I imagine the number of countries will grow over time, as we see whether this makes sense or not, but it's probably not going to happen that we'll crawl from individual cities and say, well, we have a crawler based in each of the major cities in Romania, and we'll see what each city would see. I don't think that would make sense, because then we'd have to crawl the same URLs on these websites thousands of times.

MIHAI APERGHIS: Right. As I said, it's a pretty unique case, because I haven't actually met anyone else with this specific scenario. So do you think it's a good practice to redirect the users with a 302 redirect? Because I'm not sure what to do. Basically, making a single page, the homepage, where a user would be able to choose his city is not an option, so they really want to redirect the user to that exact page that has content relevant to the city of the user. What would be the best idea on how to configure that?

JOHN MUELLER: So technically, a 302 redirect would be the right way to do that. That's one that wouldn't get cached on a network level, so it wouldn't be the case that one user visits this and a different user going through the same ISP sees the cached redirect. They would be sent to the main page again and then redirected wherever. So the 302 would be the technically right way to do that. I'd make sure that the individual city pages can be crawled and indexed separately, so that we can pick all of those up. What will happen is, probably, Googlebot will be redirected to-- I don't know-- one page or a generic page, and we'll see that as the home page. So if there's something really unique on the home page that needs to be associated with the site in general, then that's something you should show to Googlebot as well. Otherwise, as long as we can crawl the individual city pages, and as long as they're linked to each other within the website, I don't see much of a problem there. There are some sites that do this and are pretty successful with it. It's always, I think, a technical difficulty to do it correctly, but it's theoretically possible.

MIHAI APERGHIS: So should we allow Googlebot on the main page without redirecting it? Or is that not such a good idea?

JOHN MUELLER: You should treat Googlebot like any other user from the same region. So if you treat users in the US by showing them a generic page because they're not in Romania, then that's what Googlebot would see.

MIHAI APERGHIS: Yeah. Yeah. OK. That makes sense. Yeah. Thanks.
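For readers who want to see what the setup John describes might look like in practice, here is a minimal sketch of a city-based 302 redirect. None of this comes from the Hangout itself: the Flask framework, the city_from_ip helper, and the URL paths are illustrative assumptions, and a real site would plug in an actual geo-IP lookup.

```python
# Minimal sketch (not from the transcript): a homepage handler that sends
# visitors to a city-specific page with a temporary (302) redirect, based on
# a hypothetical geolocation lookup. Googlebot gets no special treatment --
# it is redirected like any other visitor from its region.
from flask import Flask, redirect, request

app = Flask(__name__)

def city_from_ip(ip_address):
    """Hypothetical lookup: map an IP address to a Romanian city, or None."""
    # In practice this would call a geo-IP database; placeholder only.
    return None

@app.route("/")
def home():
    city = city_from_ip(request.remote_addr)
    if city:
        # 302 = temporary redirect, so intermediaries don't cache it for other users.
        return redirect(f"/{city}/", code=302)
    # Visitors we can't locate (including crawlers from outside Romania)
    # see a generic page that links to every city page, so the city pages stay crawlable.
    return "Choose your city: <a href='/bucharest/'>Bucharest</a> ..."

@app.route("/<city>/")
def city_page(city):
    # Each city page is a normal, indexable URL with city-specific content.
    return f"Content for {city}"
```

The key points it tries to reflect are that the redirect is a 302, so it is not cached for other users, and that Googlebot is handled like any other visitor from its region, with the individual city pages left crawlable and indexable.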

JOHN MUELLER: All right. We have the hreflang question from Arthur. Sitemaps hreflang implementation-- a site has two languages, English and French. There should be two distinct URL blocks. Each should contain a loc tag for each language URL and then a link to an example. Can I place only one URL block to save some space in there?

No, you would need to keep them separate. We need to see the confirmation from both of these pages. So the English page should refer to the French page and, optionally, to the English page as well. And the French page should refer to the English page and, optionally, to the French page as well. We kind of need that confirmation from both sides. Only putting them into one block will essentially lead to us saying, this is not confirmed hreflang markup, and we'll ignore it. And that's something we'd also show in the hreflang status or error section in Webmaster Tools.

Duplicate content, keyword-stuffed content hidden in tabs and expandable modules-- is that going to affect the website? Or will it be ignored like all other hidden content in tabs that doesn't count toward a web page's rank?

My general advice there would be: if you're aware of keyword stuffing and hidden content that you're just putting on those pages for search engines, that's something I'd fix regardless of whether or not Googlebot would pick up on it. So if you're aware of this kind of thing, I'd just clean that up, and then you don't have to worry about what Googlebot actually does with it. In general, when it comes to hidden content, stuff in tabs, those kinds of things, on the one hand, we can crawl and index it if we can find it there. On the other hand, it's something we're not going to treat with the same weight as really visible content. So if there's something really important on your page, then make sure it's visible. If there's something that's kind of auxiliary content, then by all means, put that in a tab or something like that. But if you're aware of this content just being keyword stuffing or hidden text that's in there for rankings, then that's something I'd just clean up. I'd just get rid of that.

How often does Google update the page layout algorithm? When the algorithm was first launched, Google told us we'd have to wait several weeks to see changes when a site is hit. We've been waiting almost four months now. How long is several weeks?

I guess, theoretically, several weeks can be several weeks. I don't really have any fixed number for that. In general, we don't announce a fixed cycle for most of these algorithms. We try to update them as frequently as we can. For some of these, we have to update the data first, which takes a bit of time to reprocess and re-render all of these pages to see what they actually look like. So that's something where we're probably not going to say this is going to run every month or every other month. It's essentially going to happen when we can make sure that the data [INAUDIBLE].
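To make the sitemap hreflang answer above more concrete, here is a minimal sketch of the two-sided markup John describes: each language URL gets its own URL block, and each block repeats the full set of hreflang alternates so the relationship is confirmed from both pages. The domain and paths are made-up examples.

```python
# Minimal sketch of the sitemap hreflang markup discussed above: each language
# URL gets its own <url> block, and every block lists *all* language versions
# (including itself), so the annotation is confirmed from both sides.
PAGES = {
    "en": "https://example.com/en/page",
    "fr": "https://example.com/fr/page",
}

def sitemap_with_hreflang(pages):
    lines = [
        '<?xml version="1.0" encoding="UTF-8"?>',
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"',
        '        xmlns:xhtml="http://www.w3.org/1999/xhtml">',
    ]
    for loc in pages.values():
        lines.append("  <url>")
        lines.append(f"    <loc>{loc}</loc>")
        # Repeat the full set of alternates inside every <url> block.
        for lang, href in pages.items():
            lines.append(
                f'    <xhtml:link rel="alternate" hreflang="{lang}" href="{href}"/>'
            )
        lines.append("  </url>")
    lines.append("</urlset>")
    return "\n".join(lines)

print(sitemap_with_hreflang(PAGES))
```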

BARRY: John, could you say have there been any Panda or Penguin changes since late December?

JOHN MUELLER: Panda or Penguin changes since last December? I don't know for sure.

BARRY: You want to guess? Say, I guess it might be maybe--

BARUCH LABUNSKI: Like from 1 to 10, 10 being the highest.

JOHN MUELLER: Wait, wait, wait, wait. Let me pull up that tweet where I can confirm that I have nothing to confirm. Or what was that? Yeah. I don't know. This is something where, generally, we don't treat this as something really critical internally. And when these update, they update. And it's not like we're watching our engineers' hands to see what they're actually doing and doing the things every week or something like that. So these are essentially algorithms that we see as lots of other algorithms that are active in search. And we don't necessarily send out internal newsletters saying that, hey, this algorithm ran today.

BARUCH LABUNSKI: Well, just like Google Trends, it would be nice if we could see all the 200 algorithms going at once. This way, we would see what has been updated-- we'd see that Panda has been updated, Penguin, and so on. But there are 200 of them. We'd love that graph.

JOHN MUELLER: That's really hard, kind of also because of the way that these things work internally. For a lot of these algorithms, it's not something where this algorithm launches today and you'll see it live within five minutes. It might launch today, and it'll take a couple of weeks for the data to be processed and be updated. And it'll be kind of continuously updated like that. So that's something where there's sometimes no clear cut off date where we can say, this happened today, and you'll see the change tomorrow or in a couple of hours.

BARUCH LABUNSKI: John, let's say, for example, I'm in Seattle, and then the next day I'm in Atlanta. And I see different-- like, for instance, when I was there, the results were totally different. Is that because the update hasn't reached that specific server yet? Because, for instance, in Atlanta, you guys have a popular area there where, I guess, the Google servers are. So does it update there and then kind of slowly roll out through all the states?

JOHN MUELLER: Usually, we're pretty quick with updates across the data centers. So usually, you wouldn't see that much of a difference between the data centers. What you'd usually see more of is the experiments that we're always running. We're always running, I don't know, 100, 200, 300 experiments at the same time, and when you're searching, you might be in any number of those. So that's something where you'll probably see more changes based on that. You might also see changes based on geotargeting and your location, especially if you're doing local searches. If you're searching for a plumber, obviously, if you're in New York, you'll see different results than if you're in San Francisco. So that's the kind of thing where we do take geotargeting into account.

BARUCH LABUNSKI: OK. So there's no latency between?

JOHN MUELLER: I mean, there is some latency between these data centers, but we have a really good infrastructure in place, so if we make changes, they're pretty much active everywhere at the same time. And sometimes something breaks-- some wires, I don't know, get cut because someone digs a hole in a garden and there happens to be a big fiber optic cable going through underneath. Probably not that common, but theoretically, these things can happen. And it might happen that one of the data centers is a little bit behind or doesn't have the full data. But usually, our systems are robust enough that they'll be able to take care of that, and you, as a user, wouldn't really see any problems from it. Maybe you'll see a tenth of a second higher latency than you usually would, but usually that's not something you'd notice.

BARUCH LABUNSKI: Canada comes second, right? Once the US is updated, Canada is next, right?

JOHN MUELLER: I don't know. It depends on the distance, I guess. If you take the speed of light into account for the electrons traveling, then you might be a couple of milliseconds behind. But I doubt the average user will notice that.

BARUCH LABUNSKI: All right.

MIHAI APERGHIS: By the way, John, do you use that GPS synchronization service that Matt Cutts talked about in January? I'm not sure what you called it-- something with an S, if I remember correctly. He said that it's part of the innovations that Google has made and that has made it successful. I'm pretty sure it was something like this: using GPS in order to coordinate synchronization between servers. Something like that, I'm not sure.

JOHN MUELLER: I don't know which part you're referring to.

MIHAI APERGHIS: Snapper, was it? I'm not sure.

JOHN MUELLER: Yeah. I mean, we use a lot of systems internally. And especially when it comes to replicating data across data centers, you have to be really smart with times and timestamps so that you don't overwrite someone else's change. So that's something that takes a lot of work. Even things like a leap second can sometimes cause problems in the infrastructure, because if things aren't completely aligned properly, then we don't really know which changes are newer and which changes are older.

All right. If you compare two websites with an equal Google score, but website A adds fewer new pages per month and website B has strong growth in new pages, would Google prefer website B?

Not always. We try to take into account a number of different factors, and relevance is something that's really hard to nail down and say, if you make more updates on your page, it'll be more relevant. Sometimes pages that haven't had any updates for years are more relevant than websites that keep pushing out new pages all the time. So there's no magic factor that gives preferential treatment to websites that keep publishing new pages or to websites that don't publish new pages.

You mentioned that URLs can be indexed if they're blocked by robots.txt but have external links to them. But presumably, if they're not blocked by robots.txt and have a noindex, they would never be indexed?

Yes. If we can crawl the URLs and we see the robots noindex tag, then we won't index them. So from that point of view, if you block a URL by robots.txt and it has a noindex, then we can't see the noindex, and we might index the URL-- not the content, because we can't crawl it, but we might index the URL. If it's not blocked by robots.txt and has a noindex, then we won't index it.

What could be the possible reason for my mobile website appearing in desktop search queries and indexing? I'm using alternate and canonical tags as per Google guidelines. On the desktop site, I'm redirecting the mobile crawler to the mobile website.

Usually, if for normal queries you're seeing your mobile site in the desktop search results, then you've probably set up something wrong with the connection between those two pages. Because if we can see that connection, if you have the canonical set up to the desktop page, then we'll show the desktop page for normal searches. On the other hand, if you're explicitly searching for those mobile URLs, so if you're doing a site:m.mywebsite.com query, then it's possible that we show you those mobile URLs, because we think, hey, this guy is really looking for these specific URLs. And since we know these mobile URLs are associated with your mobile website, we'll say, well, if you explicitly want them, we'll show them to you. So this is something where I'd first take a look at what you're searching for. If you're searching for normal queries and you're seeing your mobile site in the desktop search results, then you probably have the technical setup a bit incorrect. If you're explicitly looking for the URLs and you see the mobile website in the desktop search results, then that's not something where I'd say anything is going wrong. That might be completely expected from our point of view.

I have a question regarding the answer box and structured data. Could you provide more clarity on when Google search results generate an answer box? Does any structured markup help Googlebot in using one page's content over others?

Essentially, the answer box-- which is, I think, what they're referring to with that kind of bigger search result on top-- is something we see as a type of snippet. So it's not something where we explicitly look for any specific markup or treat it in a different way. We think this is a type of snippet that makes sense for the user, and we'll try to show it to the user. So there's no specific markup that you would need on your page for us to pick that up. But we like to see that content, of course, on the page. And if it's structured in a clear way, then that helps us to pick it up and show it to the user appropriately.
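As an illustration of the desktop/mobile connection John refers to for sites with separate mobile URLs, here is a minimal sketch of the usual two-way annotation: rel="alternate" on the desktop page pointing to the mobile URL, and rel="canonical" on the mobile page pointing back. The URLs are made-up examples.

```python
# Minimal sketch of the desktop/mobile annotations referred to above, for a
# site with separate mobile URLs (m.example.com is a made-up example).
# Desktop page: points to the mobile version with rel="alternate".
# Mobile page: points back to the desktop version with rel="canonical".
DESKTOP_URL = "https://www.example.com/page"
MOBILE_URL = "https://m.example.com/page"

desktop_head = f"""
<link rel="alternate"
      media="only screen and (max-width: 640px)"
      href="{MOBILE_URL}">
"""

mobile_head = f"""
<link rel="canonical" href="{DESKTOP_URL}">
"""

# With this two-way connection in place, normal desktop queries should return
# the desktop URL; the mobile URL mainly surfaces for explicit site: queries.
print(desktop_head, mobile_head)
```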

BARRY: John, have you seen those action links? That's what I'm calling them-- action links. They have little blue icons with arrows.

JOHN MUELLER: Yeah, I saw them, I think, on your blog.

BARRY: Yeah.

JOHN MUELLER: I imagine this is something that the team is experimenting with to see which way it makes sense to show links to the websites, to encourage users to go to those websites directly as well. So that's something. We're always looking into different ways of doing that. I think we've done experiments with the icon of the website there, with a specific call to action, those blue links that you mentioned. We're trying to figure out what makes sense in these situations. How can we encourage users to go to the website for more information if they need that? And that's something where I imagine, over time, we'll see even more experiments.

BARRY: Exciting. Thank you.

MIHAI APERGHIS: By the way, about the naming of the service, the GPS synchronization service-- Matt said it was called Spanner.

JOHN MUELLER: Spanner, yes.

MIHAI APERGHIS: Is that something that is used for Google Search data centers as well?

JOHN MUELLER: We use that for lots of different types of data that we transfer and synchronize like that, yeah. We try to keep these technologies generic, so that we can use them for anything, so that if something like Gmail decides to move out or do something fancy, then they can just use the same technology.

Can you give us more information about the recent algorithm updates?

I don't have anything specific to announce there. So I'll go with that tweet. Oh, I should have looked it up. But I don't have anything specific that I can add there. Here's another one about the TEAK update. I'm guessing that refers to the same thing.

I've noticed you've sent out more warnings to non-mobile-friendly sites recently. I got one, too. Is this a sign that you are beginning to use mobile-friendliness as a ranking factor in Search?

I got one, too, as well. I think this is essentially a first step to keep webmasters informed and to let them know about these issues as we find them. Some of these sites are obviously going to be easier to update to mobile-friendly. Some of them are pretty tricky. I know some of the sites I made with-- I don't know, embarrassing-- FrontPage, back in the day, that used table-based layouts, those are things that are really hard to move to a mobile-friendly design. But this is something where, when you start working on it, when you start getting some practice, it gets a little bit easier over time. So if you're getting these messages, I'd take that into account and think about what you want to do with those sites in the long run. Maybe find a way to make them mobile-friendly, too.

MIHAI APERGHIS: John, if I can, since we're talking about algorithms, I had a question about, actually, Rob's site. But this is mostly out of curiosity. Matt has said several times that you try to be as efficient as possible with algorithms and aim for, I don't know, maybe 99% or even higher accuracy so you don't have a lot of false positives. But false positives are a reality. They do happen. And when they happen, it's kind of hard to just flag that website and correct it right away. It needs to go through a whole new version of the algorithm. So is that something that might have happened to Rob's website-- a false positive from a certain webspam algorithm, or something like that-- and that's why there's pretty much nothing he can do himself, or you can do manually, to change it?

JOHN MUELLER: I wouldn't call that specific case a false positive. But in general, that's always something that can happen. So we do try to keep our algorithms as general as possible. We do try to minimize any kind of wrong recognition of websites as being problematic. And for the most part, we don't have any kind of special whitelist where we can say, well, this website is actually OK, therefore we'll take it out of this algorithm. For some individual cases, we do have that ability, so that depends a bit on the algorithm itself. For a lot of the general search algorithms, we don't have that ability. But for some individual algorithms, we do need to be able to take manual action and say, well, for example, the SafeSearch algorithm is picking up on these words on this website as being adult, or similar to an adult website, but actually, they're talking about, I don't know, animals or something completely unrelated. And in those kinds of cases, the SafeSearch algorithm would have kind of a whitelist where we could say, well, this is a problem that we're picking up on incorrectly with the algorithm. We'll add them to the whitelist for the moment, and we'll work to improve the algorithm so that it doesn't take this into account in the long run. But in the meantime, we can, as a stopgap measure, help that site. So that's something where that sometimes makes sense. We don't have that for a lot of the other algorithms, like Penguin and Panda. It's not that we would say, well, this website is being recognized as kind of problematic from a quality point of view; we'll put it on the whitelist, and it'll be seen as perfectly fine. That's not something that the Search Quality Team would want to do. So it kind of depends on the algorithm.

MIHAI APERGHIS: Since you said that Rob's site isn't a false positive case, then why do you say there is absolutely nothing he can do to alter the situation he's currently in, other than, I don't know, moving the website to a new domain?

JOHN MUELLER: It's a tricky case, where I don't really have the liberty to say much about what's actually happening.

MIHAI APERGHIS: OK.

JOHN MUELLER: So I don't really have much I can share there. And I think this is one of those individual cases where I don't really see this happening to a lot of other websites-- this is pretty much one of a kind, I would guess. So it's not really that helpful for other people who are stuck in similar situations on a website and don't really know what to do. I know this is frustrating. And I wish I had something more specific I could share with you, Rob, and with the rest of you as well. But I don't have anything that I can bring up at the moment.

MIHAI APERGHIS: OK. Sorry, Rob.

JOHN MUELLER: All right. Two short questions. Is it useful to connect keywords consisting of more than one word with semantic connectors?

I don't think you really need to do anything fancy with two words mentioned on a page if you're writing a sentence. If this is part of a sentence, you don't really need to connect those words in any kind of artificial way.

When searching for your brand, Google always suggests a spelling correction. Might this have a negative impact?

That sounds like people are generally searching for something slightly different from your brand name. So on the one hand, this is something that will change over time, as people get to know your brand and actually want to search for it. On the other hand, in the short term, it might be a bit frustrating, because people will probably be searching for the other name instead. But this is something where, if you're setting up a new brand, a new company, a new website, it's something I would always take into account as well. What are people actually searching for? Is this a name that's going to be confusing for people? Or is this something where people will know how to search for my website if they want to go to it directly?

Does Google use the META tags geo.region, geo.placename, geo.position, et cetera, for local search results?

No, we don't use those at all. So if you want to geotarget, then I'd use normal geotargeting. You can also use hreflang if you're targeting specific language-country variations. But we don't take the geo.region META tags into account. I forgot what the overall name for those was, but we don't take them into account.

BARUCH LABUNSKI: Can I just quickly make a small transition into bots?

JOHN MUELLER: Bots? OK.

BARUCH LABUNSKI: Yeah, because there's a study out there saying that 56% of web traffic is bots. And I just wanted to know, can bad impersonator bots hurt our website?

JOHN MUELLER: Impersonator bots? So how do you mean?

BARUCH LABUNSKI: Fake bots.

JOHN MUELLER: I don't think that would be a problem. I'm not really sure what you would do with fake bots or impersonator bots, but I--

BARUCH LABUNSKI: Well, like spammers, I guess, out there. Right? And I just wanted to know, blocking them, would that affect a website?

JOHN MUELLER: No. No.

BARUCH LABUNSKI: No?

JOHN MUELLER: I mean, if these are scraper bots that are running across the web just copying content or sending fake referrers, those kinds of things-- I don't know-- feel free to block them. From our point of view, as long as you're not blocking Googlebot, that's not something that would affect us. There might be some less obvious user agents that actually come from Google, where you would essentially be blocking us. Things like the PageSpeed Insights testing tool, I believe, use a special user agent. And of course, the mobile crawlers, those kinds of things, use special user agents. But there are tons of bots out there, and from my point of view, feel free to block those. If you can tell that they're not actual users, if you can tell that they're not doing anything useful with your content, then sure.

BARUCH LABUNSKI: OK. Thank you.

JOHN MUELLER: I mean, these things have been happening since the beginning, so this isn't really something that's new to us. If bots are crawling our sites, or even crawling our search results, that's something we're kind of used to and have a bit of practice handling as well.

BARUCH LABUNSKI: Yeah, I know. Just looking at server logs and stuff, it doesn't look good. There's one specific site where it just doesn't look good. And it makes sense to do that.

JOHN MUELLER: Sure. I mean, you always have to find a balance between how much work you put into actually blocking these and how much resources they're taking away from your server. And if these are things that are crawling your server, and your server doesn't care at all about that, then I don't know if it's really worth the time to actually try to find a way to block them. But if these are causing problems on your server, if they're stealing your bandwidth and scraping your site in a way that's bogging down your server, then sure.

BARUCH LABUNSKI: Because what's happening is, imagine if you put it through a third party, and then you can see where 90% of the traffic is coming from. And because of this one specific crazy bot, you see that most of the traffic is coming from the Philippines. And that's not a good thing, you know?

JOHN MUELLER: I mean, not a good thing-- from our point of view, it doesn't really matter where your traffic is coming from. But obviously, if you're targeting a different country and you can tell that all of these visits are bots that are completely useless for your website, then sure, feel free to block them. I think there was a question similar to this somewhere in here about Analytics, where some spammers are doing referral spamming again-- they're sending a referrer with the request that they're making to a website, and that's showing up in, I believe, Analytics. And that's also something you can feel free to block. If you can tell these aren't real users, block them. Do whatever you want with them. I mean, this is not something that would affect us.

BARUCH LABUNSKI: OK.
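For illustration, blocking the kind of scraper bots and referral spam discussed above could be sketched at the application level roughly like this. The user-agent and referrer patterns are invented examples, and as John notes, the main caution is not to catch legitimate crawlers such as Googlebot in the same filter.

```python
# Minimal sketch (my example, not from the transcript) of blocking obvious
# scraper bots and referrer spam at the application level. The block lists
# are illustrative; the important part, per the discussion above, is to make
# sure legitimate crawlers such as Googlebot are not caught by them.
from flask import Flask, request, abort

app = Flask(__name__)

BAD_USER_AGENTS = {"EvilScraperBot", "FakeCrawler"}   # made-up names
BAD_REFERRERS = {"spammy-referral.example"}           # made-up domain

@app.before_request
def block_junk_traffic():
    ua = request.headers.get("User-Agent", "")
    referrer = request.headers.get("Referer", "")
    if any(bad in ua for bad in BAD_USER_AGENTS):
        abort(403)  # scraper / impersonator bot
    if any(bad in referrer for bad in BAD_REFERRERS):
        abort(403)  # referral spam
```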

JOHN MUELLER: All right. I'm going to take my site down and use some of my content on my new site. Will this affect the rankings of that page, even though my old site is gone?

So if you take the old site down, obviously we're not going to be able to rank it once we crawl and re-index those pages and see that they're gone. If you use some of the content on your new site, that's something we'll try to treat separately. It would be a little bit different if you do a one-to-one copy of your whole website and put that on a new domain, because then we'll look at that and say, well, this is a one-to-one copy of this existing page; maybe the webmaster meant to do a site move, and we'll just treat it as a site move instead. But if you're just taking snippets of information from your old site and putting them on something completely new, then that's essentially something separate.

I'm still confused about how the disavow works. Can you give an example of how it works?

OK. So essentially, the disavow tool is meant to take links that you don't want to have associated with your website out of our systems. If, for example, a previous SEO went and bought links from another website to your website, and you can't remove those links, and you want to make sure that those links are not taken into account by our algorithms, then you can use the disavow tool to let us know about those specific links. Or you can let us know about the whole website and say, everything from this other website that's referring to my website should be taken out of account by Google's systems. That's essentially what you're doing with the disavow tool. You can also use it if you find really problematic links pointing to your website and you don't know where they're coming from. You can essentially say, well, I don't want to be associated with these links at all, and submit them in the disavow file. And then we'll take that out of our systems and out of our calculations.

Do you know the Latch application? If so, do you think that, in the future, it will be used in the service?

I don't know of the Latch application, so I don't know what that would be.

I host subdomains I don't want Google to index. I can't use robots.txt or META tags to exclude them from indexing. Is there a way I can prove to Google that I own the TLD and add the subdomains to a blacklist? This would improve your index.

That's an interesting question. So essentially, if you have a subdomain that you don't want Google to index, the best thing there would be to serve some kind of server-side authentication on those URLs, so that when we try to crawl them, we'll see something like a password prompt. Then we'll know, OK, this is not something that we can crawl, and we won't even bother trying to crawl deeper or trying to index that content. So if you can do server-side authentication, that's probably the best solution there. You could also do it based on the IP address. For example, if this is a subdomain that you only use internally, then maybe block all requests that are not coming from your IP addresses or from your users' IP addresses. So that's something you could do to prevent us from even trying to get in. Another thing you can do is use the Webmaster Tools DNS verification, which basically means you add a special element to your DNS settings, and we can use that to confirm that you own the subdomain or the domain name.

And within Webmaster Tools, you can do a site removal request, which is, I believe, valid for 90 days, where you can tell us, this content should not be indexed, or should be removed immediately in case some of it is indexed already. Generally, I'd recommend sticking with something more permanent, like the authentication, rather than relying on a site removal request, because those expire after 90 days. And if you go on vacation at the wrong time, or whoever is tracking this forgets to watch out for it, then that content might suddenly pop up again in the index. So using something resilient, based on server settings, is what I'd recommend doing there. And that's something that would also work across search engines. If this is a subdomain you don't want indexed at all, then if you're blocking requests from outside IP addresses, Bing won't index it either, and Yandex won't be able to index it either. It will essentially be removed by default.
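A minimal sketch of the server-side authentication approach John recommends for keeping a subdomain out of every search engine's index might look like this. The framework, the internal IP range, and the credentials are all placeholder assumptions, not anything from the Hangout.

```python
# Minimal sketch (assumptions, not from the Hangout) of the "server-side
# authentication" approach for a subdomain that should never be indexed:
# every request gets a 401 challenge unless it carries valid credentials
# or comes from a trusted internal IP range.
from flask import Flask, request, Response

app = Flask(__name__)

ALLOWED_NETWORK_PREFIX = "10.0."            # hypothetical internal range
VALID_CREDENTIALS = ("staff", "secret")     # placeholder credentials

@app.before_request
def require_auth():
    if request.remote_addr.startswith(ALLOWED_NETWORK_PREFIX):
        return None  # internal traffic passes through
    auth = request.authorization
    if auth and (auth.username, auth.password) == VALID_CREDENTIALS:
        return None
    # Crawlers (Googlebot, Bing, Yandex, ...) hit this 401 and index nothing.
    return Response("Authentication required", 401,
                    {"WWW-Authenticate": 'Basic realm="internal"'})
```

Because the block happens at the server rather than in a Google-specific setting, it works across all search engines and does not expire the way a 90-day removal request does.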

MIHAI APERGHIS: Hey, John. I had a similar case quite recently-- a hosting company that uses reverse IP subdomains. So basically, the subdomains contain the websites of clients who also have their own separate domains. But he's using these reverse IP subdomains for a technical reason. I'm not sure why, but he kind of needs them. So obviously, he cannot use robots.txt, because that would also block the original domain, and the same goes for robots.txt with noindex. So what would be the best approach there?

JOHN MUELLER: I guess [INAUDIBLE] the server-side authentication, something like that. I don't know if that would work in that specific case. But essentially, I'd really try to focus on something that works permanently, that doesn't rely on unique characteristics of Google, and that works across all search engines. Because otherwise, someone will access those URLs accidentally on their own and copy the content down, or scrape it, or whatever. So if you use something like authentication, or something based on the IP address, or maybe based on a cookie-- if you want to do that-- that's probably going to be more robust than something very specific to Google.

MIHAI APERGHIS: One idea was to use a different TLD and move the reverse IP subdomains to it, just to be sure, basically.

JOHN MUELLER: Yeah, sure.

MIHAI APERGHIS: [INAUDIBLE]. Yeah. OK.

JOHN MUELLER: Sure. That would work, too.

BARRY: John, is there an update on when the new search queries report is coming out for beta testers?

JOHN MUELLER: They're working on that. I don't know if all of you are on the list, but I saw a lot of familiar names in the submissions. What's probably going to happen is we'll set up a version for part of the people who signed up, possibly this week, possibly next week. And we'll set up a slightly different version for another part of the people who signed up, so that we can compare how people are actually using this. And that's something where I imagine, over the next couple of weeks, you'll find out more. It's still a very early preview, so I wouldn't expect this to be the final state. But it's really important for us to get as much feedback as possible on this. So if you see someone write about this somewhere, give us feedback. If you think this looks good, let us know. If you think this looks terrible, then let us know, too. These are things we need to know. How often does--

JOSHUA BERG: John?

JOHN MUELLER: All right. Go for it.

JOSHUA BERG: I wanted to ask you about the message that I sent earlier regarding these guys I heard about from someone, who are doing CTR manipulation and dwell time manipulation-- that's what they're claiming. And they're selling this as a service. So someone told me that they'd gotten hooked into using a service like this, and they hadn't realized that it was specifically against Google's guidelines. So I thought that it might be, just as a suggestion, something to include in there more specifically. But on the other hand, there is that part of the guidelines which does cover this in a general way, where it says, on number three, "avoid tricks intended to improve search engine rankings." So I guess that pretty well covers any of these manipulation tricks. But there are guys out there hawking these services saying, now you can't do link buying and other things that are against Google's guidelines, but it's OK to do this crowd search thing where you try to trick things with queries and whatever it is they're doing.

JOHN MUELLER: I don't know. I mean, it's always been the case that people are doing crazy stuff that they think plays a big role in our search algorithms. And that's probably not the case. And that's not something where we'd explicitly list everything that doesn't play a role in our algorithms, or everything that doesn't matter, or everything that's kind of sneaky in some regards in our Webmaster Guidelines. So from that point of view-- I mean, if people are doing this and they think this is nice, then fine. If people like putting a pink background on their website, then-- I don't know. There are lots of really useful ways to spend time and money on making a website work really well, so I don't know. It's not something where I'd explicitly go out and say, this is bad, and you need to do it differently, or you need to do something slightly different to make that useful. I don't know. This is not something where we'd probably put that out in the Webmaster Guidelines, specifically.

BARUCH LABUNSKI: Big Mind, Deep Mind and Googlebot are working quite closely together, no? John?

JOHN MUELLER: Deep Mind. I think Deep Mind was an acquisition that we did last year or the year before that. I don't really know where all of their technology is being used.

BARUCH LABUNSKI: OK.

JOHN MUELLER: So I don't know.

BARUCH LABUNSKI: All right.

JOHN MUELLER: We use artificial intelligence in Search, as well. So it's possible that some of that is used there, but I don't really know.

JOSHUA BERG: Another question regarding a client, who asked whether it would be cloaking-- or they assumed that it would be cloaking-- if they were to not serve Googlebot the ads that are being seen by many or most other users, or if they were to show it a different quantity or different types of ads. So is it safe to assume that would not be acceptable?

JOHN MUELLER: Well, anytime they're serving Googlebot something different from what normal users would see, that's cloaking. So if they're serving a different page to Googlebot in subtle ways, that's still cloaking. If they're serving something completely different to Googlebot, then that would be cloaking as well. This is something where, when we talk to the engineers, anytime we see this kind of thing happening, they're really not happy with it, because it makes their lives very hard. And it also makes the webmaster's life very hard. For example, we've seen this quite a bit with regard to mobile websites, where sites will cloak a subtly different version to Googlebot, and they'll cloak the same subtly different version to Googlebot-Mobile as well-- where we would actually see the desktop version again, because they have this special version for Googlebot. So that's something that causes a big problem for us, because when you access the site on a smartphone, you see a nice mobile-friendly site, but when you access it as Googlebot smartphone, you see that cloaked desktop site instead. Any kind of cloaking like that just makes it extremely hard for the webmaster to diagnose problems, because you don't really know which version Googlebot has actually seen. It makes it really hard for our engineers to take action on the content that we see there. And it's something we have in our Webmaster Guidelines as well. So if you're trying to do something misleading there, that's something where even the Web Spam Team might take action. So as much as possible, really try to avoid cloaking.

BARUCH LABUNSKI: Did you get my email regarding the spamming?

JOSHUA BERG: People mix that up, too.

JOHN MUELLER: Yes.

BARUCH LABUNSKI: You have?

JOHN MUELLER: Yes, yes. Let me go through some of the questions here. How often does the hreflang data in Webmaster Tools get updated? And what are "no return tags"?

I think we had a slight issue there with the hreflang data in Webmaster Tools for maybe a week, or a couple of days at least, where we showed a bunch of errors that weren't actually errors. But I think we fixed that fairly quickly. It might be that it takes a little bit longer to update that data now. I believe it should be refreshed in the meantime, but it might be that it's still maybe a week behind, something like that. In general, that data should get updated fairly frequently, every couple of days or so.

"No return tags" means that you don't link back. So for example, the English page refers to the French page over here, but the French page doesn't refer back to the English page with the hreflang tag. Then we miss the confirmation that there is a connection between these two pages, and in those cases, we don't take that hreflang tag into account. So if you have an English page and a French page, make sure that they're connected to each other with hreflang, and not just from one side.
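As a companion to the sitemap example earlier, here is a minimal sketch of the on-page version of the same confirmation: both the English and the French page carry the identical pair of hreflang link elements, so neither side is missing the return tag. The URLs are made up.

```python
# Minimal sketch of the on-page hreflang confirmation described above: each
# language page lists link elements for every language version, including
# itself, so that both pages confirm the relationship. URLs are examples.
EN_URL = "https://example.com/en/"
FR_URL = "https://example.com/fr/"

hreflang_head = f"""
<link rel="alternate" hreflang="en" href="{EN_URL}">
<link rel="alternate" hreflang="fr" href="{FR_URL}">
"""

# The identical pair of link elements goes in the <head> of BOTH the English
# and the French page; leaving it off one side is the "no return tags" case.
print(hreflang_head)
```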

ARTHUR: So this is related to the first question you answered for me, right, John? That was the point.

JOHN MUELLER: Yes, exactly. You had that in the sitemap file? In the sitemap file, if you just have one URL block, then you essentially have the hreflang tag from just one side. If you have it directly on the pages, then of course, even there, you have to have that confirmation back.

How do you get on the beta testing list?

We did a Google+ post from the Google Webmasters account, I believe maybe two or three weeks back, where people could sign up. So I'd check out the Google Webmasters Google+ account and maybe scroll back a couple of pages to find that form. You can still sign up there. I don't know if we'll be able to take everyone into account for the beta testing of this new feature, but we'll try to do it in steps, like I mentioned.

Is the Google Sitelinks search box mainly for well-known brands or highly searched sites that have a search box on their site?

We do try to show this algorithmically, when we think it makes sense for the user. And sometimes that's when people try to go into deeper content. So it's not that it explicitly targets brands, and it's not that it explicitly targets well-searched sites, but rather sites where we think it helps the user to be able to search for something specific within that website-- kind of like we do with Sitelinks, where you can jump to specific parts of a site [INAUDIBLE].

How do you effectively ban and block spam referrers in analytics?

I think someone did a couple of blog posts about how to block this in .htaccess, which might be one way to do it. From our point of view, you can block these kinds of spammers or referral spam in whatever way you think makes sense.

I have an English-French site. I'm confused by the language tag.

We have a lot of information in the Help Center about the hreflang tag-- that's H-R-E-F-L-A-N-G-- which is a way to connect these pages. I can see if we can do, maybe, a more specialized Hangout in the future with some explicit examples of how you could set that up. But in the meantime, I'd really recommend taking a look at the Help Center and at the blog posts that we did; they cover this as well.

All right. I'm kind of running out of time. Let me see. Here's one. In a professional journal, I've read that, for Google, the order of the H tags is important. Is that true?

No. You can essentially use them however you want. We do try to understand the structure of a page partially based on these tags. But if you have a reason for doing them in a different way, or you have multiple H1 tags on a page, that's absolutely fine. That's not going to cause any significant problem.

MIHAI APERGHIS: Is it true that Google also looks at the size of the font, for example, to understand that that is the title of the page, even if it's not marked with an H1 or H2 tag?

JOHN MUELLER: I imagine. I don't know. I don't know for sure. I mean, we try to understand how the page is structured. Especially with some of the older layouts that are table-based, that's not really trivial to do. With a lot of the HTML5 layouts that we see nowadays, they have a really nice semantic structure, so it's a lot easier for us to pick that up. When will the next Google PageRank update be?

BARUCH LABUNSKI: Yeah.

JOHN MUELLER: Probably never.

BARUCH LABUNSKI: Why?

JOHN MUELLER: This is something I think we've stopped updating, at least the Toolbar PageRank that's shown. I don't know the future of the Toolbar in general, but at least from the PageRank side, this is probably something we're not going to be updating again. I believe the last update was even an accident, where someone said, oh, something is broken here, let me just fix this and run this script that updates PageRank. And then, suddenly, it was updated, and nobody really noticed. So this is something where I think we're probably not going to do any more updates here. I don't think this is really a metric that's useful to webmasters, so I'd focus on other metrics instead.

ARTHUR: John, can I step in with a short question?

JOHN MUELLER: Sure.

ARTHUR: I mean, this Toolbar PageRank was somehow a number which helps you tell, from just a blink of an eye, whether a website is spam. I mean, you take a look at it, you see it has some PageRank, some amount, and that would help you see whether it is spammy or not. It's just one example. I mean, you see a bunch of commercials, and you don't know if you want to stay there or not. And that would help. And I have some others.

JOHN MUELLER: Yeah. I don't know. I think, just based on links, it's a kind of a weird metric nowadays. So at least from a webmaster point of view, I'd really focus on something else, like conversions, like how people are actually using your website, those kind of things.

ARTHUR: Yeah. I was just thinking, can you give us another example of how you can tell, from another standpoint, if a website is trustworthy or not?

BARUCH LABUNSKI: Just like on YouTube, for instance, you have 7,000 views, right? I mean, you know how many people watched and how many people hated it, before you even [INAUDIBLE].

ARTHUR: Yeah, but right now, you don't have anything to base a quick decision on, to see if this is a spammy website or not.

JOHN MUELLER: I can't think of anything, off hand.

ARTHUR: Maybe you could take this into consideration.

JOHN MUELLER: Yeah. It's kind of tricky, because things like Chrome, the newer browsers, they don't have these toolbars that frequently anymore. So even if there were a metric, I don't think that's something that a lot of people would see, because nobody likes installing toolbars and having all of this stuff on their browser anymore. So I don't know. It'll be tricky. But it's a good question. Yeah.

JOSHUA BERG: So the metric is still served, though? In other words, if it's not going to be used, you'd think it might be useful to turn that off at some point. [INTERPOSING VOICES]

JOHN MUELLER: I could see that happening at some point. I mean, like you said, at some point, it's just going to be old and stale, and nobody cares about it anymore. If nobody's using the toolbar anymore, then why would we even keep those systems running? But I think this is a more medium-term, long-term discussion of what we actually do with those scripts.

ARTHUR: Yeah. Well, you're right. You're correct, John. But I think you should think about it. I mean, take it as a suggestion that we also need something which can tell us, at a fast glance, whether we can trust that website or not.

JOHN MUELLER: Yeah. Trust is always a really hard problem. Yeah. That's always a hard thing, too. All right. I have to go. Someone else needs this room. And so thank you all for your time, your questions, all of the questions that were submitted, your feedback here in the Hangout, as well. It's very appreciated. Next week, we have a special Hangout with a member of the Google News team. If you have a Google News website, a website in Google News, then I'd definitely tune in then. If you do have a website in Google News and want to join us live, then make sure you comment on the thread from Google Webmasters so that we know that you're interested, and we can invite you a little bit ahead of time. That's that. Thank you all, again. And I hope you guys have a great week.

ARTHUR: Thank you, John.

JOHN MUELLER: See you next time.

ARTHUR: You too.

JOHN MUELLER: [INAUDIBLE].

BARUCH LABUNSKI: Bye bye.

JOHN MUELLER: Bye.