Google+ Hangouts - Office Hours - 07 April 2015

Transcript Of The Office Hours Hangout

JOHN MUELLER: Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a webmaster trends analyst here at Google in Switzerland, and part of what I do is talk with webmasters and publishers, people who make websites like the people here in the Hangout now or those of you watching along. There are a bunch of questions that were already submitted in the Q&A. I'll try to go through as many of those as I can. If those of you who are in the Hangout live at the moment have any follow-up questions to the answers, feel free to jump in. And as always, if one of you wants to start with the questions, feel free to go ahead and ask away.

MIHAI APERGHIS: I'd like to ask one, if that's possible. I have a specific geolocation question, mainly about how much geolocation matters, and whether it matters over search users [INAUDIBLE]. For example, I have somebody who wants to build a rent-a-car website in Romania, and he was asking whether to use a .ro domain or a .com domain, since many of the users using the service might be searching for it before they actually get to Romania. So they're searching from the United States before they get to Romania so they have everything in place, everything gets paid off, and the car is waiting for them. Does geolocation still matter in that sense, since geolocation is mainly to get users results from that country, or does the search query trump that so it will show up regardless of the geolocation?

JOHN MUELLER: That's a good question. I don't really know for sure, and I imagine it depends a little bit. I think if you're primarily targeting users outside of the country for a service that you're providing within the country, then maybe using something generic like a .com is a good idea. Because essentially, with geotargeting, we're trying to make this website available for users in that specific country. So from that point of view, I think maybe a .com would make more sense there. We do try to take things into account where we can tell that a user is searching for a specific location, though. So if you're searching for car rental for some city in Romania, then that's something we probably take into account as well. So there is, I guess, a little bit of a balance there. In general, I'd probably go for a generic top level domain in a situation like that if the majority of your users are really outside of Romania. If it's a service that you're really targeting to the people within the country where people within the country are searching, then I think that's essentially up to you. You can use either one. But if you're really doing something where you're targeting more people outside of the country, then maybe a generic one would be better.

MIHAI APERGHIS: Well, a query like "rent a car Bucharest" has search volume pretty much 50-50 outside of the country and inside of the country. We want to make sure we appear for both types of users without shooting ourselves in the foot, because we haven't actually bought the domain yet, so we still have that option to go on the .com or .ro. So you're saying that if the option is on the table, we should just go with the .com.

JOHN MUELLER: Yeah. I'd probably go more with the .com in a case like that, where maybe in the future, you'd even have specific content for those countries or content in different languages where you really want to focus maybe on UK tourists going to Romania. I don't know what kind of tourists traditionally go to Romania. That's something where you have a little bit more options open if you have a generic domain. Whereas if you have a country specific one, you're really primarily targeting users there, and there's the option that we figure out the intent and see, well, this is actually specific to people searching for that city or the region or for the country as well.
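For illustration: if the site later serves both audiences from a single generic .com as John suggests, the language-specific versions could be tied together with hreflang annotations. This is a minimal sketch with hypothetical URLs, not something prescribed in the discussion:

```html
<!-- On each version of the page (hypothetical URLs) -->
<link rel="alternate" hreflang="en" href="http://www.example.com/en/rent-a-car/" />
<link rel="alternate" hreflang="ro" href="http://www.example.com/ro/rent-a-car/" />
<link rel="alternate" hreflang="x-default" href="http://www.example.com/" />
```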

MIHAI APERGHIS: OK. Sounds good. Thanks.

BARUCH LABUNSKI: Is there a way I can just quickly transition into the 21st of April?

JOHN MUELLER: What's happening on the 21st?

BARUCH LABUNSKI: Oh, I don't know. Just somebody's doing something. There's some critical things going to happen. So I wanted to ask regarding the mobile update, we spoke regarding those errors I told you. Let's say there's a site with over 1,000 errors. We fixed all the errors, but the errors are still there, and the errors are going to be cleared [INAUDIBLE] for all sorts of reasons, because it's different channels, the communication [INAUDIBLE] between the developers. So before the 21st hits, I know that the update is not going to happen on the 21st because Michael was mentioning that there's a delay of two or three days, right?

JOHN MUELLER: We're rolling it out over maybe a period of a week, something like that, so it's not bang on the 21st at midnight.

BARUCH LABUNSKI: What if you cleared all the errors but the errors are still showing in Webmaster Tools? Is that OK for the update? Will the update understand that? I'm very worried about that. That's why I suggested the mark as fixed.

JOHN MUELLER: The mark as fixed feature doesn't do anything there. It's really only for the UI, but what you could do is do something like submit a sitemap with a new change date for those pages so that we know that they changed, and then we can re-crawl and re-index them and pick up all of those changes fairly quickly.
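A minimal sketch of the kind of sitemap entry John describes, with a hypothetical URL and date; the <lastmod> value tells Google the page changed, so it can be re-crawled and re-indexed sooner:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/fixed-page.html</loc>
    <!-- hypothetical date the mobile usability fixes went live -->
    <lastmod>2015-04-07</lastmod>
  </url>
</urlset>
```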

BARUCH LABUNSKI: No, no, John. For the mobile usability, I mean you can't mark as fixed. That's what I [INAUDIBLE].

JOHN MUELLER: No, you can't. Exactly.

BARUCH LABUNSKI: What do we do if the errors are still there but they've been fixed before the update? Is that bad? Because the system-- I guess your Googlebot-- didn't have a chance to clear it, and it's still showing from 300 back to 60 or whatever.

JOHN MUELLER: So I guess in general, the thing to keep in mind there is we're doing this on a per-page basis. So it doesn't matter how many errors your site has in general; if the pages that are ranking are the ones that are seen as mobile-friendly, then that's essentially fine. With regards to the errors shown there, that's something which does have a bit of latency, since we have to re-crawl and reprocess all of those URLs to see what the current status is. So that's something where I imagine you'll see a latency of maybe two or three weeks there with the data shown in Webmaster Tools for the mobile usability reports.

BARUCH LABUNSKI: So that's not going to affect the site?

JOHN MUELLER: Well, we also have to re-crawl those pages to see that they're mobile-friendly for search as well. That's something where if you submit them with a sitemap file and you say these pages changed on that date, with a last modification date, then we go and re-crawl those pages and reprocess them for search as well. So that's something where essentially, we'll pick up on that.

BARRY: Just to be clear, if you want to just check and you're worried about it, just go to your phone, pull up the--

BARUCH LABUNSKI: Barry, I'm not talking about that. I'm just saying let's say everything is cleared, everything has been checked via the Mobile-Friendly Test and everything has been checked via PageSpeed Insights. So everything is clean; all I'm saying is there are still errors left in the mobile usability report, and now the update will get going on the 25th or whatever, the 26th. I don't really have errors anymore. I guess I won't be penalized for this, correct? It's just for Webmaster Tools--

JOHN MUELLER: I would look at how that's actually shown in Search. Like Barry mentioned, search for something like your title and see how that's shown on a mobile phone, or within Chrome when you have the device mode simulation set up so that Chrome acts like a smartphone. And if you see that the mobile-friendly label is shown in search, then that's essentially a sign that we've picked it up properly. You can check that on a per-URL basis; it's not something you can check across the whole site. And if there are still errors shown in Webmaster Tools but the important pages that you're looking at are actually shown as mobile-friendly in Search correctly, then that's essentially fine.

BARUCH LABUNSKI: OK. Perfect. Thank you very much.

SPEAKER 1: John, I was hoping I can jump in with one question.

JOHN MUELLER: Sure.

SPEAKER 1: HubPages is a large, user-generated content site that covers everything from knitting to cooking. It's all organized under one topic tree, and what we're doing is we give each author a subdomain. We're running into a couple of questions about whether subdomains are still a good URL structure for a site like HubPages, because it's so heavily interlinked. We're seeing a couple of things. One is that no rich snippets will show up across any of the subdomains in the search results, regardless of how good the quality of that specific subdomain is. And two, we had a question about disavows. If you disavow at the root, will it carry through for all the subdomains?

JOHN MUELLER: With the disavow, you'd have to do that on the specific host where the links are pointing to, so on that specific subdomain. If there's usera.yoursite.com, then you'd need to submit that on the usera.yoursite.com Webmaster Tools site.
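The disavow file itself is a plain text file uploaded for that specific host; a minimal sketch with hypothetical entries:

```
# Disavow file submitted for usera.yoursite.com (hypothetical examples)
# Lines starting with # are comments.
domain:spammy-directory.example.com
http://hacked-blog.example.org/hidden/page.html
```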

SPEAKER 1: That's 1,000 subdomains. I don't know if that's still true.

JOHN MUELLER: Yeah, but usually the links would just be going to one of those subdomains, right?

SPEAKER 1: You'd have to go in, and a lot of those links potentially go to multiple subdomains.

JOHN MUELLER: I don't have a really easy answer for that, though. I guess one thing you could do is split this up into multiple Webmaster Tools accounts, add them with the API so you have them in Webmaster Tools, and try to go through it from there. That's one option I'd try there. With regards to whether or not subdomains are the right model there, I think for the most part, that's perfectly fine. That's a fine model to have for user-generated content like that. There is sometimes the rare situation where we say, well, most of this domain is really low quality or is really spammy, so we'll just treat all of the sites on it in a way that works best for search quality. That's something that we tend to do in situations where it's really a bulk hoster or a web hoster that basically says, well, anyone can set up a million sites, a million subdomains on my server. I don't really care what the quality is. You can put whatever you want up. Then that's the kind of situation where the web spam team might say, well, most of this domain actually just has spam. We need to take a bulk action on there. It's not possible to really be as specific as we'd like to be otherwise. But I don't think that's the case with a site like yours, where you actually watch out for the kind of content that's hosted there, for the quality of the content overall, and where you also have the community policing things, sending you spam reports if there's something problematic happening.

SPEAKER 1: And we have lots of professional editors going through the content. It's a very different world from 2011. Where would you go look for this problem?

JOHN MUELLER: In a situation like that, I think the subdomain model is perfectly fine. If you have this per-user setup, then we can treat the subdomains as essentially separate sites, so that's something that I think is a good idea there. With regards to links to those subdomains, I'd try to see if the owners of those individual subdomains can handle that directly.

SPEAKER 1: We can't. We can 404 the pages, which is what we're doing for the most part. We're saying, hey, if you build bad links, we're going to 404 the pages. And then what about the rich snippets issue, because that just seems strange to us that nothing is showing rich snippets.

JOHN MUELLER: I don't know how rich snippets would be handled in a case like that. On the one hand, technically, it has to be marked up correctly, so you can check that with the Structured Data Testing Tool. I imagine you've done that. On the other hand, we have to make sure that the quality in general matches our quality guidelines for rich snippets, and I know the rich snippets team is sometimes a bit strict on that, so they try to see what really makes sense.

SPEAKER 1: Could you forward it to the rich snippets guys just to see?

JOHN MUELLER: Sure. I can pass that on to them. If you have any specific subdomains where you're using rich snippet markup, you definitely can send that my way.

SPEAKER 1: We know that some users use it in ways that are a little odd, like they'll use it to rate a page, and we try to clean that up, but really good users [INAUDIBLE].

JOHN MUELLER: Sure.

SPEAKER 1: Thank you.

JOHN MUELLER: All right. Let's go through some of the questions here. And as always, if you have any comments on the questions or the answers, feel free to jump on in and we can take that on.

"Due to screen size constraints on mobile, we're looking at having some content hidden on load but visible when a user clicks on a prominent button. Will this type of hidden content still be taken into account for textual relevance for search queries?" Yes, in general it will. Primarily, we focus on the desktop version for indexing, so if the content is visible on your desktop version, that's essentially where we'll be picking this up. We expect that the pages are equivalent on mobile where you can actually do that, but we understand that it doesn't have to be a one-to-one exact match where the full page content has to be the same. That's something where, if you find a way to change the usability, the user interface on mobile slightly to make it work better, then that's perfectly fine. Some sites, for example, hide the comments on their blog posts by default on mobile, and that's an option. So it's not that you need to show everything one-to-one on mobile. It can really be formatted in a way that works well for your users.

"We recently noticed a surge in links to our site in Webmaster Tools. In looking at the top sites linking to us, they're all spammy or appear to have been hacked, links in hidden directories, et cetera. Other than a disavow file, what steps should we be taking?" Essentially, I'd just put those in a disavow file: take the domains that you're seeing there, put them in a disavow file, submit that, and then we essentially take those links out of our calculations, and they don't play a role at all anymore. This is a really easy way to take control of this on your side and make sure that you don't need to worry about this in the future. For the most part, we do figure these things out on our own, so it's not that you need to take care of it, but if you really want to be sure that things are being handled right and you found these yourself, then I'd just go ahead and put them in the disavow file.

"John, you've often talked about primary content. Can you clarify the concept? There isn't much information on the net. What is and what isn't primary content? Does the percentage of primary content on a web page influence rankings?" That last part is an interesting question, I guess. So in general, when you're looking at a web page, we split that up into elements we call boilerplate, and boilerplate is essentially content that's duplicated across a site. That's usually things like a menu on top, a sidebar, maybe a sidebar on both sides, maybe a footer that's common across the website or common across large parts of the website, and that's essentially all boilerplate. So what happens there is we try to ignore the boilerplate when looking at these pages and focus on the rest of the content, the content that's left, and that's primarily what we use for indexing, for ranking, for relevance there. We do understand that there's sometimes content in the boilerplate. So sometimes your company name, things like addresses-- those elements are common in the boilerplate content, but that's not something where we'd say, well, this is the primary content of this specific page. So with that in mind, essentially, when you look at the page and you focus on it as a user, you're automatically looking at the primary content of the page. You're reading through a blog post or a news article or a help center article or some kind of document. You're looking at the primary content automatically when you look at these pages, and that's essentially what we mean by the primary content.
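As a rough illustration of the distinction, here is how the parts of a page might map onto semantic HTML: the nav and footer repeat site-wide (boilerplate), while the article body is the primary content. The markup is a hypothetical sketch; the specific tags aren't something John mentions:

```html
<nav><!-- site-wide menu: boilerplate --></nav>
<main>
  <article>
    <h1>Unique post title</h1>
    <p>The unique text of this post: the primary content
       used for indexing, ranking, and relevance.</p>
  </article>
</main>
<footer><!-- site-wide footer, company name, address: boilerplate --></footer>
```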

ROBB YOUNG: John?

JOHN MUELLER: Yeah. Go ahead.

ROBB YOUNG: With that in mind, the primary content is more important than the boilerplate, obviously, but if you're comparing two sites to look for duplicate content, do you look at the boilerplate stuff? Because in some sense, if the boilerplate is the same on site A as on site B, that's actually a bigger indicator that they're duplicates than the primary stuff. Because if you were building two of the exact same sites, that's the stuff you'd make sure you've got the same to say, yes, this site is definitely the same as this site. So do you compare that stuff or do you ignore it?

JOHN MUELLER: It's really hard. When we try to recognize duplicates, on the one hand, there's the easy case where everything is a one-to-one exact match of the whole page. That's the trivial part. When we get past that aspect, it gets a lot harder. We do try to figure out which part is the primary content and understand which content there is duplicate or which content is maybe the original content. That's something you sometimes see if you search for a snippet of text. You'll see, well, this page is showing up in search, but these other ones are hidden behind the link that sometimes shows up at the end of the search results-- I don't know what it's called-- where essentially, you see more of the same content.

ROBB YOUNG: Right. Our case is obviously different, but if you have a company and that company wants to launch three or four different websites in different areas, which is perfectly normal, the company information-- the address, a load of different stuff around it, the frame of the website-- a lot of that stuff could be boilerplate and the same but the content completely different.

JOHN MUELLER: That's perfectly fine. That's not something where we'd say, well, they used the same template. Therefore, it must be the same company behind it. Because obviously, if you look at things like Blogger or WordPress, the default site templates are used all over the web, and that's essentially the boilerplate that comes from the template there. And just because the same boilerplate is used on different websites doesn't mean that they're in any way connected, or in any way less relevant just because they're using the same kind of theme.

BARUCH LABUNSKI: So in other words, you're saying it's OK to recycle templates.

JOHN MUELLER: Sure. I mean, that just happens naturally. If you have a great template that works really well for one company and you make that available for other clients that also need something similar, then that's up to you. That's not something where we'd say, well, the content of this website is less relevant just because they're using the same infrastructure, essentially, as another website. That wouldn't really make sense.

ROBB YOUNG: But John, in our particular case, previously you've said go as far as using different HTML. So if we were trying to distance ourselves from the previous one, it would be 100% different rather than 80% different. I realize the answer to this is irrelevant to most other people.

JOHN MUELLER: I guess it's tricky in your case, but in a case like that, I wouldn't worry at that level of detail. There are different things that are happening within the template automatically. In your case, I think you have a different domain name, slightly different, so that's something that's automatically different there. You have different things in your main body, in your primary content already, so all of these small things add up automatically. It's not the case that you have a one-to-one copy of the whole template on another domain, where we'd say, well, this is actually exactly a copy of the previous site, and see some kind of connection like that.

ROBB YOUNG: Well, at the moment we do, but the advice was don't, so we were making a different one, basically-- different domain, different HTML, different everything. Obviously, the main images and text will be the same because that's our product. We can't change the names of our products. They are what they are. But if your previous answer holds-- that the main content is the most important bit and the boilerplate isn't-- then even if we change the domain and change all the HTML, you'll see the main content as the same, and therefore, we're back to where we started.

JOHN MUELLER: Well, there are automatically some small changes everywhere, so it's not a situation where I'd say you have to artificially introduce changes in the content or in the template. So from that regard, that's not something where I'd really worry about that.

ROBB YOUNG: OK. And then finally for me today, if we're with this new domain that we're using, currently, we still haven't separated the two, so the old domain is still pointing at the new one for various reasons we've discussed before. Could a new domain get to a point where it's seen as the old one, even if we separate it in the future? So if, in a couple of weeks, ready for mobile, we separate them, could the new one be tarnished to such an extent that you think, actually, we're going to follow this wherever it goes now?

JOHN MUELLER: In general, we look at the duplicates on the specific time point, so it's not something where we'd say, well, this used to have a duplicate. Therefore, we'll forever keep the duplicate signals for this domain. So that's not something where I'd say we keep that forever. I imagine we will have some information for some of the URLs already that's still tied together, but that's something that will disappear as we can re-crawl and re-index those things.

ROBB YOUNG: All right. So over time, once it's separated, it can remain separated.

JOHN MUELLER: True.

ROBB YOUNG: And eventually distance itself altogether.

JOHN MUELLER: True.

ROBB YOUNG: OK.

JOHN MUELLER: "Will non-mobile friendly websites be demoted in mobile search when the new mobile algorithm rolls out?" So as always, with changes we make in ranking in Search, when we promote some sites, like mobile friendly sites in this case, there are going to be sites that aren't going to be as visible anymore. We don't put 12 sites in the top 10 of the search results just because there are more good ones to show in there as well. So essentially, if we promote some mobile-friendly pages, then we kind of have to take some of those pages that aren't as relevant, that aren't mobile friendly out of the top search results and push them down a bit. There's always this balance of promoting some sites and, by doing that, you're essentially demoting some of the other sites that don't match what this algorithm is looking for. So with that in mind, if your pages aren't mobile-friendly at the moment, it's definitely possible that you'll see some drop in traffic from users using mobile devices to search for your site.

BARUCH LABUNSKI: And it will have no effect on the main desktop results?

JOHN MUELLER: Yeah. This only affects users who are searching on mobile phones. You call the desktop results "main" at the moment. I imagine next year or whenever, a lot of sites are going to primarily have traffic from mobile devices, and "main" will mean smartphone and not these obsolete laptops and big monitor things. I guess that's what's happening there.

"You said links in disavow files are processed continuously and treated as nofollow. Is that nofollow set when the file is processed or when the Penguin refresh happens?" Neither. So essentially, with a disavow, you're submitting the file to our systems, and the next time we crawl and reprocess a page that has links to your website that match the disavow file, we'll take those out of the calculations. So that's something where essentially, when we re-crawl those pages that have those links, we'll take that out of the calculations. And of course, the next time Penguin runs and takes those links into account, it won't have them to take into account anymore, and that'll be reflected there as well. There is obviously more to links than just the Penguin algorithm, so it's not the case that you always have to wait for that to see any kind of effect. Essentially, they're re-processed automatically over time.

SPEAKER 2: John, if I could quickly jump in to follow up on that. [INAUDIBLE], at least on my Facebook feed, I can't get rid of it. There are a lot of third-party ads for tools that speed up the crawling of links, especially links that you add in your disavow file. Does Google have a stance on whether that's effective? I've personally never tried it. Their ads are very attractive. Do you recommend anything like that?

JOHN MUELLER: I wouldn't go so far as doing anything like that. What a lot of these tools do is essentially, they build unnatural links to the unnatural links to your site, which seems kind of an odd strategy, right?

SPEAKER 2: Yeah.

JOHN MUELLER: So that's something I'd generally try to avoid there. And when it comes to algorithms like Penguin, that's not something that would be affected by any of these tools. It's not that they can push out algorithms or algorithm data in any way faster.

SPEAKER 2: Thanks.

JOHN MUELLER: "What is authority according to Google? We heard a lot about this from you, [INAUDIBLE], and other people. How does Google understand this particular site where content is authority for this niche of authorship? Is authorship still relevant and have any connection to this?" So I guess first of all, we don't use authorship in Search anymore. We haven't used that for over a year now. It's been quite a while. Obviously, a lot of websites still have the authorship markup on their pages, and from our point of view, that's perfectly fine but we don't process that for web search in any way. So it's not the case that having authorship markup affects your search results at all. With that in mind, I guess authorship is something you can definitely strike from there and not something you'd need to worry about anymore. Authority in general, I don't know. That's almost a very philosophical topic, so I don't really have any short and quick answer there for you, but maybe we can take a look at this in one of the future Hangouts. I'll try to chat with some folks here to see if there's something specific that we can bring up about that. "Webmaster Tools is reporting duplicate meta titles for sets of pages that have been linked together with an hreflang relationship. Is this something that suggests my hreflang implementation may not be correct?" I'd take a look at the hreflang report in Webmaster Tools. That gives you information about whether or not we think your markup is correct or not, so that's essentially the main place I would look for the hreflang markup. With regards to the duplicate meta titles, that's essentially something that Webmaster Tools does on a fairly lower level, so just because we're showing that there doesn't mean that this is in any way any problem. If you have rel canonicals on these pages, if you have the hreflang markup between these pages and you look at that you see, well, these are normal pages and they happen to have the same description on them, then that's perfectly fine. That's not something that we would use to count against a site or to cause any kind of a demotion or see that in any kind of a spammy way. On the other hand, if you look at these URLs and you say, well, these are pages that are actually about different topics and I happened to accidentally use the same description across all of these pages, then that's probably something you want to clean up. We don't use the meta description for ranking, but we do sometimes show it in the snippet in search, and if all of these pages have the same description, then chances are we're either going to show the wrong description, so the duplicate one that you have specified, or we'll try to figure out what the description might be on our side with our algorithm. So if you want a specific description shown, then I'd definitely make sure that you're using that in the meta description tag so that we can pick that up there. Mobile algorithm. "Are non-mobile friendly sites going to end up at the bottom of the index and those that are tested as mobile friendly to rank at the top of the index?" Where can we start? So essentially, from a ranking point of view, we will promote mobile-friendly pages for users of smartphones when they're searching, so that's something that's going to happen. I think we mentioned that this is going to have a stronger effect than Penguin and Panda, so it's not something that's going to disappear in the noise. 
You'll probably see it if you have some kind of mobile traffic to your site, and most sites do have mobile traffic for their site. So if your sites are mobile friendly, then that's obviously great. If your sites aren't mobile friendly, then that's obviously something I'd recommend at least trying to find a way to improve that within a reasonable time. Maybe you can try to get this in before April 21. Maybe you can start working on this soon. But essentially, that's something I'd really recommend working on. A lot of people are primarily using smartphones now to access the internet and if your sites don't work on smartphones, then you might be alienating a lot of people.
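For sites starting on mobile-friendliness now, the usual first step is a viewport declaration, which the mobile-friendly test looks for alongside readable font sizes and adequately spaced tap targets. A minimal example:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```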

ROBB YOUNG: John, on that, we use an app for our mobile site, a third-party app, Shopgate. I know a lot of other people use these things as well. So within Webmaster Tools, on our main account, it's reporting lots of mobile usability errors, but it's just looking at the wrong site, I guess, if you like. Our mobile site is on m. our domain, and that is the one that's provided by the app. So should I ignore mobile crawl errors in the main one? And will Google know that it's showing errors but they're not really errors, so it won't demote us, because we are giving the user what they want? It's just via this other site.

JOHN MUELLER: So essentially, what you need to do is make sure that you have the annotation set up properly so that we understand the relationship between your normal site and your m. site, which is the rel alternate tag, for example, or the rel canonical back to the desktop site from the mobile pages.
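The annotations John refers to, sketched with hypothetical URLs: the desktop page points to its mobile counterpart with rel=alternate, and the mobile page points back with rel=canonical.

```html
<!-- On http://www.example.com/page (desktop version) -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="http://m.example.com/page">

<!-- On http://m.example.com/page (mobile version) -->
<link rel="canonical" href="http://www.example.com/page">
```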

ROBB YOUNG: We have the canonical in there.

JOHN MUELLER: So if we can understand the relationship between the m. version and the desktop version, then essentially, that's already working as it should be. That's something where you don't have to worry about the errors showing for the desktop page because you have your mobile pages on a different site. What I would do there is double check on a smartphone to make sure that we're actually showing the m. pages in search, and you'll also see the mobile friendly label if we think that the m. site is working as expected. So if you see that happening, then we're picking it up properly and you should be all set.

ROBB YOUNG: So just ignore any errors? There's no way within Webmaster Tools to say, stop sending me errors on this. You don't know what you're doing. We have the m. app as a separate Webmaster Tools account, so within there, everything's fine. On the main account, it's not, but there's no way, other than the canonical or something, to say, I know, stop it. Can you add an "I know, stop it" button?

JOHN MUELLER: In fact, Mihai brought up something similar. I think that's something we need to fix in the long run in Webmaster Tools: if you have an alternate connection to your mobile site, we should just take that into account directly instead of focusing on the desktop pages that wouldn't be showing in Search. That's, I guess, something that we should be doing. In general, this is something that you can easily check yourself just by searching on a smartphone and seeing, well, is the right URL showing up?

ROBB YOUNG: It seems to come up correctly.

JOHN MUELLER: Then you should be all set. It sounds good.

SPEAKER 3: I have two mobile questions. One, I can't get an answer from anybody. The Google news team is telling me it's a web search thing. The web search team is being quiet on it. The In The News box, it's mostly web search related and it shows up a lot on mobile. Is it going to remove any non-mobile friendly stuff in the news box when April 21 hits?

JOHN MUELLER: I think they're working on a reply for you. Hold tight and wait for that. I know we've been discussing what exactly we want to say about this, and it's not completely trivial because there are a few things that come together there, but I'll defer that to the official answer when it reaches you.

SPEAKER 3: Thank you. Number two is brand name queries. So let's say Home Depot does not have a mobile site and somebody searches for it on their mobile device. I assume the mobile algorithm for mobile queries will not impact brands or navigational types of queries?

BARUCH LABUNSKI: I can't comment on that.

JOHN MUELLER: You can't comment on that. Oh my gosh. In general, what happens with these kinds of demotions is we try to demote the sites a little bit, and what will happen with a branded query, or one where we know that it's very navigational, is we'll know that this is a really, really strong result. And even if we demote it slightly, it'll still be on top. So it's not that there is no effect at all on branded queries because they're somehow magically different. It's just that these sites are often very, very relevant for these queries, and even if they're slightly demoted, it's not going to drop them from page one-- sometimes not even from the first position. That's something where we don't treat the branded queries in any way special there, but these pages are just really, really relevant, and sometimes even if they're demoted slightly, they're still the top result.

SPEAKER 3: You're not treating them any different at all?

JOHN MUELLER: At all? I don't know if we can say "at all." I'd have to ask all the teams. But in general, we try to treat them the same way, and we try to calculate the relevance based on the queries and based on the sites that we have there. So it's not something where I'd say, well, this is a branded query. Therefore, all of our other algorithms don't play a role anymore.

SPEAKER 3: But the mobile algorithm specifically isn't changed at all around branded queries versus non-branded queries?

JOHN MUELLER: As far as I know, it isn't. It's essentially just a normal, slight demotion that's happening there, and for some of these sites, they're just so relevant for the queries that it doesn't change anything for those specific queries.

BARUCH LABUNSKI: Even if you're not converted in time for the mobile update, basically, you still have a chance, right? It's never too late. You just have to wait for the next mobile update, right? [INAUDIBLE] going to do them.

JOHN MUELLER: These updates are essentially continuous now. Once we activate the mobile-friendly algorithm, essentially, every time we crawl a URL, we'll double check to see, is this mobile friendly or not, and we'll take that into account the next time we show that URL in search. That's something that's essentially continuously rolling from the point where we activate this. It's not that you have to wait for any kind of refresh or you have to wait for the algorithm to recalculate things once a month. It's essentially continuous.

JOSH BACHYNSKI: Hey John, that's the perfect segue for my question.

JOHN MUELLER: Great.

JOSH BACHYNSKI: So we now know that, or at least I believe that Panda is baked into the algorithm and it's refreshing on a much more regular basis behind the scenes, just reassessing signals as part of your normal algorithm. I'm wondering about Penguin. Has Penguin reached that state as well? It's just reassessing things on a regular basis?

JOHN MUELLER: I think both of those algorithms currently aren't updating the data regularly, so that's something for both of them where we have to push the updates as well.

JOSH BACHYNSKI: I see. So you still have to press a button there?

JOHN MUELLER: In a sense, yeah.

JOSH BACHYNSKI: In a sense.

JOHN MUELLER: I mean, it's not like we have this Panda button.

JOSH BACHYNSKI: With a big, plastic cover over it and with klaxons going, quick, press the Panda button.

JOHN MUELLER: Not really, no. Sorry.

JOSH BACHYNSKI: All right. Because that's not what I'm after, so let me ask it this way, then. If a domain suddenly loses rankings and there's no manual action, and there have been no technical changes on the site-- so no technical thing is happening, but suddenly it loses rankings-- then really, what is the first thing that they should do?

JOHN MUELLER: We use a lot of signals, so it's not something where I'd say just because Panda or Penguin isn't updating, quality or web spam doesn't play a role anymore; it could be a lot of things. I think what I'd tend to do in a situation like that is, first of all, make sure that there's really nothing technical happening behind the scenes. We see a lot of sites that post in the forums or that come to us directly that have some technical problem, essentially, that they're struggling with, where maybe part of the content isn't shown properly, or maybe we can't render the pages because there's a big pop-up or an overlay over the pages that's blocking us from actually seeing the content, or it's using some kind of Ajax to pull in content that's blocked by robots.txt. All of these technical issues still play a really big role with a lot of sites. And past that, there are still various web spam algorithms running all the time that are looking at the site, the way the content is presented, the way that it works with the rest of the web and with users, and how we think that this is relevant. And that takes into account the content-- it takes into account essentially pretty much everything. That's something where, because we have so many different signals that we're using in Search, we don't rely on just one or two individual things to say, well, this is a problem, and if a site changes today, then it must be exactly this change here. It could be a whole variety of things where we think, well, maybe this site isn't so relevant, or maybe they're doing something that's going against our guidelines or causing problems from a technical point of view.
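An example of the robots.txt problem John mentions, with hypothetical paths: if the scripts or Ajax endpoints that build the page are disallowed, Googlebot can fetch the HTML but can't render the content behind them.

```
# A hypothetical robots.txt that causes the rendering problem described:
# the HTML is crawlable, but the resources that pull in content are not.
User-agent: *
Disallow: /ajax/
Disallow: /assets/js/
```

The fix is simply to stop disallowing the resources the page needs to render.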

BARUCH LABUNSKI: Is Amit going to update his 23 questions?

JOHN MUELLER: To add a few more questions?

BARUCH LABUNSKI: Yeah. I mean, things like [INAUDIBLE]. I mean, you have 200 factors.

JOHN MUELLER: We don't list all of the factors on our website. That 23 questions blog post was specifically, I believe, for changes around Panda, how our algorithms see quality. And essentially, over time, we've changed the way a lot of those signals work, but the general theme behind those questions is essentially still the same. So that's something where I think those questions are really good, and they're something that a lot of websites could take to heart a lot more to actually significantly improve their website, even outside of Google in general. These are things where, even if a site were ranking number one, if users go to your website and they don't trust it enough to actually complete a transaction, then ranking number one isn't really going to help anything. You're getting a lot of traffic, but people aren't actually completing whatever it is you're trying to get them to do. These are things that a lot of websites, I think, try to tune out because their website has always been this way and they love their layout, they love the way their website works. But essentially, taking a step back and really getting honest feedback about your website overall, even if it's things that you don't want to believe, that's really useful feedback to have. It can be really useful, even outside of search.

JOSH BACHYNSKI: What I'm hearing you saying, John, is if there's no technical problems and there's no manual actions in Webmaster Tools, then check to make sure the website is still maximally usable and relevant online? This is what I'm hearing.

JOHN MUELLER: Sure. That sounds like a good first step. I think from a spam point of view, there are always a lot of things that could be taken into account as well, things like, is this website filled with ads? Is this website essentially just driving traffic to other sites? Is this content actually relevant? Is this useful content? All of these things add up, and these are things where, over time, you as an SEO will see a lot of different websites where you look at them and the first thing you do is, oh, I've seen this kind of business model before. It doesn't really have a chance. These are all things that, with your experience, you can really help these people to make the right decision, which might be to significantly improve their website, which might even be, in the worst case, to say, well, this business model doesn't really make sense anymore. Nobody is using directory sites anymore, so instead of paying me lots of money to do SEO for your directory site, how about you try to figure out a different business model that works with what users actually want nowadays?

JOSH BACHYNSKI: You're right. You're absolutely right.

SPEAKER 3: John, there hasn't been a Panda update in a while, right? Still around October or so.

JOHN MUELLER: That's possible, yeah.

SPEAKER 3: OK. Thanks.

JOHN MUELLER: Let me just run through some of the questions here really briefly, and then we can get back to ranting and rambling.

"If I have to split content, is it better to have a portion above the fold with a link to the remaining content at the bottom of the page, or to hide it behind a Read More button that expands when clicked but leaves everything above the fold?" In general, I'd try to keep the content on one page so that it's directly visible when you go to that page: you can scroll down and you can actually see the content. I would shy away from having a button that says "read more" that hides all of the content by default. If it's hidden by default, then chances are Google is going to see that as being less relevant, and that's probably not what you're trying to do there. You're probably just trying to implement this in a way that is a little bit more user-friendly.

"Is Google trying to detect self-serving backlinks? Say one owns site A and blog B and Google knows that, and blog B has a link to site A. Is it going to trigger a self-serving signal? Is this going to pass less PageRank?" I don't think that's a particular problem. That's probably more of a problem when we see that people just have tons of different sites that are essentially trying to promote the same thing, and then that starts looking more like doorway pages, or essentially like a collection of sites that are not really providing any additional value, and that's something where we'd take action on a slightly different level. So just because there are links between sites that are owned by the same person doesn't necessarily make those links a problem.
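The "read more" pattern John advises against above looks roughly like this (hypothetical markup); the content is in the DOM but hidden on load, so it may be treated as less relevant:

```html
<p>Visible teaser paragraph...</p>
<button onclick="document.getElementById('rest').style.display = 'block'">
  Read more
</button>
<div id="rest" style="display: none">
  <p>The remaining content, hidden by default on load.</p>
</div>
```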

BARUCH LABUNSKI: When is the doorway page [INAUDIBLE] coming in?

JOHN MUELLER: When that's coming in? I don't know. I thought that would be rolling out around now, so I'd have to double-check with the team on that. We have a lot of changes in Search happening all the time, so it's really hard to know exactly when every one of these changes goes live.

"Would you please explain why exact match domains and links to their pages are still ranking at the top of the search results? They're not even HTTPS or mobile-friendly. However, when I look at Bing, they seem to have caught on to this issue." So in general, if you're finding spammy results, that's something you can report to us. You can let us know about this with the spam report forms. If it's something that's more complicated, like a bigger scheme or something, you're always welcome to send those to us directly as well. You can send that to me on my Google+ page, for example, or through my contact form, and I can pass it on to the team. I can't guarantee that we'll be able to take action on these right away. Something else to keep in mind, especially with regards to exact match domains: just because a domain has the same words as the query doesn't necessarily make that domain less valuable. Just because it's an exact match domain doesn't necessarily mean that it's low quality, that it's spammy, or that it's something that we shouldn't be showing in search. That's something to keep in mind there as well. But again, you're welcome to pass these on to me. I can't guarantee that we'll be able to take direct action on the reports, but we'll definitely send them on to the web spam team.

"We have a handful of pages not displaying the Mobile-Friendly tag in search, but they pass with the mobile-friendly testing tool. Is this an issue we should be concerned with? Why the gap?" I'd need to know some of the URLs there to see what exactly is happening, to let you know what you might be missing or where our algorithms might be getting confused, so if you want to send me some examples, feel free, and I can take a look at that. What I've sometimes seen is that pages are tested with the PageSpeed Insights tool, and that's not exactly a one-to-one match of what we would see with the mobile-friendly test. In particular, robots.txt is something that's only taken into account with the mobile-friendly test and not with the PageSpeed Insights tool. You might want to double-check that you're actually using the mobile-friendly test, where we do take all of that into account. If this really doesn't match, there's a weird mismatch that you can't explain, and these pages have been like that for a while, then feel free to send me some examples.

"Webmaster Tools shows me a URL for my site with the status 404, which is causing hundreds of errors each week. When I check the origin of the error, I see that it's always the same page itself. How can a URL with a 404 generate 404 errors itself?" So theoretically, this could be a situation where the page sporadically returns a 404 and we happen to crawl it at those times. That's something that could theoretically be happening. More commonly, I see this with different URL parameters, where some URL parameters lead to a page that exists and some lead to a page that's a 404, and then you have this situation. You would see that if you look at exactly which URL was shown there. If you can't see that directly in Webmaster Tools, I'd recommend posting in the forum so that other people can take a look at that as well and see if there's something general they can point you at, or maybe you're missing something easy, or they can pass that on to the team internally when we have to take a look at some of that.

"Google Analytics announced comprehensive release notes recently. In the past, Google Search had similar releases in the search quality highlights on the Webmaster Central blog. Could you restart that?" I don't know what the plans are for that, actually, but I can check with the team to see if we can get some of that happening again. All right. We have two minutes. What can I do for you guys?
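Returning to the sporadic-404 question above: a quick way to test the URL-parameter theory is to request a few query-string variants of the reported URL and compare status codes. A minimal sketch with hypothetical URLs and parameter names:

```python
import urllib.error
import urllib.request

# Hypothetical variants of the URL reported in Webmaster Tools.
variants = [
    "http://www.example.com/page",
    "http://www.example.com/page?sort=asc",
    "http://www.example.com/page?session=expired123",
]

for url in variants:
    try:
        # urlopen raises HTTPError for 4xx/5xx responses.
        status = urllib.request.urlopen(url).getcode()
    except urllib.error.HTTPError as e:
        status = e.code
    print(url, "->", status)
```

If one variant consistently returns 404 while the others return 200, that matches the pattern John describes.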

SPEAKER 1: I have a question on quality. You were talking about quality sites, because this is one of the things we've worked a lot on: how do you measure quality across a site and actually do something that can move the needle? Do you guys have any recommendations for publishers on how they should be measuring quality and how to go about it systematically? Andrew talks about pages that are bad or parts of your site that are bad. How can we really find those pages? We throw a lot of babies out with the bathwater, so to speak.

JOHN MUELLER: That's something where you'd really have to look at your own website and try to figure out which metrics would make sense for your site. That could be things like, if you have voting buttons on the pages where people can vote up or vote down, you can see which pages users actually like. You could maybe look at things like the bounce rate in analytics to see where people are going and then actually leaving your site again. Essentially, what you're looking at there are indirect metrics and aggregated data that lead you towards patterns of problems that you could be focusing on. Especially if you have a really large website, it's not something where you can look at all the pages individually and say, well, this one is pretty good and this one rates a little bit above the other page. You have to resort to aggregated information, aggregated metrics, to find patterns of pages that maybe aren't that great.
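As one concrete way to build the kind of aggregated view John describes: bucket pages by site section and compare engagement. The CSV export and column names here are hypothetical; any analytics package's per-page report would do.

```python
import csv
from collections import defaultdict

# section -> [pageviews, bounces]
totals = defaultdict(lambda: [0, 0])

with open("page_metrics.csv", newline="") as f:  # columns: url,pageviews,bounces
    for row in csv.DictReader(f):
        # Bucket by first path segment, e.g. "/knitting/..." -> "knitting".
        parts = row["url"].strip("/").split("/")
        section = parts[0] or "(root)"
        totals[section][0] += int(row["pageviews"])
        totals[section][1] += int(row["bounces"])

# Sections with the highest bounce rates are candidates for a closer look.
for section, (views, bounces) in sorted(
        totals.items(), key=lambda kv: kv[1][1] / max(kv[1][0], 1), reverse=True):
    print(f"{section}: {views} views, {bounces / max(views, 1):.0%} bounce rate")
```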

SPEAKER 1: Looking at view-weighted metrics for Panda, one of the things we've put a lot of emphasis on is thinking about the view-weighted average or [INAUDIBLE] experience.

JOHN MUELLER: That might be a possibility. It depends on what kind of data you can collect and what kind of data you think makes sense on the website. There are obviously some sites where, when you look at the time on site, you know that your content is really short. Maybe you're providing weather reports or something like that and you don't expect people to click around on your weather report page and look at different cities or dig in deeper to your site. So you have to understand your site, the type of content you're providing, and how you expect users who are satisfied with the content to react to what you show to them, and that's essentially what you can use on an individual level. It's not that there's one solution that fits all websites. You really have to understand your website and take the metrics that you can collect and aggregate them in a way that guides you towards, well, this type of content is seen as lower quality content. Maybe users are confused by it. Maybe the layout was confusing just there, or maybe the content itself is actually low quality content.

SPEAKER 1: Would you say, though, that Panda is a view-weighted algorithm, for example-- that pages that don't really drive any traffic aren't as important to your overall Panda signal as the pages that get a ton of visitors?

JOHN MUELLER: I wouldn't simplify it like that. We use a lot of different signals for these algorithms, so it's not something where I'd say you can only focus on the pages that actually get a lot of traffic.

JOSH BACHYNSKI: So some pages with no traffic can be boat anchors if they have quality problems.

JOHN MUELLER: They could be, yeah. I mean, essentially, what we're looking at are the pages that are actually indexed. There are simple ways that you can handle that, where you can say, well, this is something where I don't know what the quality of these pages is. Especially if you have a really large site and you can't manually look at all of these pages yourself, then maybe you can find some kind of an aggregated method to see, well, nobody ever looked at these pages. Therefore, why am I even maintaining them on my server? Maybe I'll set them to noindex now and see how they perform over time. If, after a year, still nobody has been looking at them because nobody bothers clicking through to them on my site, then what do you need those pages for, anyway? Regardless of Google's algorithms, if you can clean something up, then that's probably a good idea.

We're a bit out of time. Lots of good discussions, though. So hopefully, we'll see some of you again in one of the future Hangouts. Thank you all for joining. Thanks for all of the questions and all of the lively discussions. It's been really interesting.

SPEAKER 1: Thanks, John.

JOSH BACHYNSKI: Thanks.

JOHN MUELLER: I'll see you.

JOSH BACHYNSKI: Been a pleasure as always.

JOHN MUELLER: Have a great week. Bye, everyone.

ROBB YOUNG: Thanks, John.