Google+ Hangouts - Office Hours - 12 January 2016

Transcript Of The Office Hours Hangout


JOHN MUELLER: All right, welcome everyone to today's Webmaster Central Office Hours Hangouts. Unfortunately the last Hangout didn't quite make it, so I set up a new one. This also means I don't have all of the questions that were submitted, but I copied them onto the site so we can still go through pretty much most of those. As always, if there's anyone here new or relatively new to the Hangouts and you have a question, feel free to jump on in and ask away.

MALE SPEAKER: I have a question.

JOHN MUELLER: I've seen your face before.

MALE SPEAKER: Oh sorry, I wasn't paying attention. When you want me, I'm here.

JOHN MUELLER: Jason or Andrew, is there anything on your mind?

MALE SPEAKER: Yeah, thank you. So I've been following a couple of things in the webmaster guidelines that you yourself have talked about as far as A/B testing: use of canonical versus the meta noindex, and how long you leave tests up for. So I have a scenario that I haven't seen talked about in the guidelines specifically, which is that for our testing program, we do a holdout group from the beginning of the year. So say we will always have 10% of traffic going to a version of the page throughout the entire year that's excluded from testing, so we can make sure we're looking at measured lift. Say that's example.com/holdout, and our server system will send 10% of the traffic there. So for that audience, it's essentially the home page. 90% of the traffic will go to just example.com and be available for A/B testing and whatever else. So my question is, from a markup standpoint, how do you handle that example.com/holdout, knowing that it's technically a test, but it's going to get 10% of the traffic for the full year? And it is a duplicate version of the home page, in essence.

JOHN MUELLER: If that's essentially equivalent to the normal home page, I'd just put a rel canonical on there. Even if that's for a longer period of time, if this is really a smaller set of the traffic for your site and it's equivalent to the home page, then I'd just put a rel canonical on there, so that if we run across that URL, we can at least forward everything to your main URL.

MALE SPEAKER: Right. OK, great. Thank you.
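
(For illustration, a minimal sketch of the markup suggested here, using the hypothetical example.com/holdout URL from the question:)

    <!-- In the <head> of https://example.com/holdout, the duplicate holdout version. -->
    <!-- Points Google at the main home page as the canonical URL. -->
    <link rel="canonical" href="https://example.com/">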

JOHN MUELLER: Perfect. All right, Barry, I'll let you jump in with your first questions, as long as we can get to these other ones too.

BARRY: Seems like there's a firestorm online right now, specifically around a few things. I'll start with-- it's really related. So there was a core ranking update this weekend, right? That's confirmed. OK, good. And also you guys confirmed at some point, I don't know when, that there was a Panda algorithm, or parts of the Panda algorithm, or however you want to classify it, that is now part of that core ranking algorithm. Confirm?

JOHN MUELLER: I think so.

BARRY: Could you tell me, one, approximately when that happened? Like two months ago--

JOHN MUELLER: I don't know.

FEMALE SPEAKER: December. No, just approximately. It doesn't have to be exactly.

JOHN MUELLER: I don't know what we specified there. I know we talked about it a while back. I asked Gary and Zineb. They weren't aware of the date exactly either. So I'd have to double check on that. I saw your tweet about that too.

BARRY: Will you get me a response? Because Gary is busy, so he can't get me a response.

JOHN MUELLER: Yeah, if I hear something back, I'll let you know.

BARRY: And there's a lot of confusion on Twitter for some reason right now about somebody thinking it's real time. The core ranking algorithm is somewhat pushed manually. It's not really running constantly real time. Or maybe pieces of it are. But it seems like people are saying it's real time. It's real time. It's probably a hard question to answer because obviously there's certain things that are real time, there's certain things that are not.

JOHN MUELLER: Exactly.

BARRY: But obviously what happened this weekend was not real time. Something changed. So what can you say about that to clarify?

JOHN MUELLER: I'd say that's just a normal part of our updates. So sometimes it's something that-- so when we talk with the team about this, they said well, it's nothing big. It's just like a normal part of the updates we always do. And I suspect maybe the SEO weather type tools out there are tracking things in a different way than we would look at it. So that's something where maybe something that overall doesn't really have a big effect is still picked up by them.

BARRY: OK. All right. I'll maybe come back with more questions later, but I'll let other people chime in. Thank you.

JOHN MUELLER: All right, let me run through some of the questions that were submitted already. I suspect the Q&A stuff is still-- the current one is kind of empty, yeah. Some of these are ones we got for the Hangout before we started.

We received a "high number of URLs" warning from Google on our census content, which contains 134 million people. The site indexes collections of people, about 6 million, which link to noindexed person pages. We'd like to know if Google is having trouble crawling and indexing our site? So probably if you're getting this message, that's a sign. Essentially this message comes when we discover a bunch of new URLs for your website that we didn't know about before. And it usually comes before we actually go out and crawl those pages. So if these are all pages that have a noindex, then of course they won't be indexed. But we still have to crawl those pages first to actually see that they're noindexed. So if you add a bunch of links to pages that are actually noindexed, you might trigger this message. What also happens in a lot of the cases that I've looked at is that it's not so much that the site itself has a lot of pages by design, but that it also has a lot of separate URL parameters on top. So if you have 130 million pages and 10 different URL parameters with different variations, that's all multiplied together, and suddenly you have, I don't know, 1 billion variations of different pages which actually map to 100 million individual pages. That means there's a lot of pages that we have to crawl for us to even realize that they're duplicates, or to recognize that they have a noindex on them. I'd take the example URLs that are given there and really double check the URL parameters in them, to make sure those are URLs that you'd actually want to have crawled and indexed.
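
(For illustration, the noindex referred to here is the robots meta tag; a minimal sketch, assuming the hypothetical person pages from the question:)

    <!-- In the <head> of each person page that should stay out of the index. -->
    <!-- Googlebot still has to crawl the page once to see this directive. -->
    <meta name="robots" content="noindex">
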
Next question: at the end of our blog posts we have a plugin that shows four related posts. We made them all nofollow, as Google doesn't like automatically generated content. But since we're linking to our own blog posts, should we make them dofollow? Sure, you can make them dofollow. That's absolutely no problem. This essentially helps us to crawl your website by understanding which parts of the site belong together. So that's certainly a good use of normal, followed links. I'd be cautious about this if you're linking to external blog posts, especially if that plugin is part of some bigger scheme, where maybe you're in an ad network where you link to related blog posts on other people's sites, and they link back to you, and it's almost an advertising relationship. In those cases I'd certainly use a nofollow. But if you're linking to your own blog posts on your own site, normal followed links are perfectly fine.

Can I duplicate a subdomain of my site A to site B and then add a rel canonical to pass link juice from site B back to site A? Some competitors of ours are doing that and rank really well. Is that a violation of Google's policies? So with the rel canonical, you're essentially saying these pages should be folded together, which means we try to treat them as one page. It's not that you'd get any additional value from splitting things up and then folding them back together again.

After a change of our site structure, the front page content was moved to a subpage, and that subpage should rank now. We link there internally with relevant terms, but there are still a lot of external links to our home page. Should we try to get those links changed? Maybe? It's hard to say. In general, if you move your content from your home page to some lower level page, it's going to take us a while to figure out what kind of change you're making there and what you're trying to achieve with it. So I wouldn't expect any immediate change, especially since you can't redirect from the home page to that lower level page. It's not like a normal change on your site where you move from URL one to URL two. You're essentially taking the content, moving it somewhere else, and then hoping that we treat the new page the same as the existing page, when actually the existing page continues to exist as well. So it's a bit of a different situation than a normal move to a new URL.

Is there any tool to test your AMP pages? We've developed AMP pages for our site and we want to test them. How do we diagnose that things are good to go? On the one hand, we're running a beta test for AMP. You can sign up at g.co/searchconsoletester, all one word, and that will probably help you figure out some of these problems. On the other hand, there are two main tools you can use on a per-page basis. First, you can add #development=1 to the AMP page's URL and then look in the browser's JavaScript console. That runs the validator within the AMP page and tells you whether the AMP page is theoretically valid. In addition, you can use the Structured Data Testing Tool to double check the page to see if it has all of the right markup, in particular the news article markup.

Let's see: does Google not care about the browser cache? I mean, if a website uses the browser cache, does Googlebot also see the current version, or is there sometimes trouble, like when you change a CSS file and you need to refresh everything to see it? We do cache CSS and JavaScript files, normally. And theoretically it's possible that there's a time frame between making changes in one file and us actually seeing them on the other pages. But in general we're pretty good at following the cache guidelines that you specify on your site, so we try to pick up the new versions whenever we can.

I have a multilingual site. Every customer can switch the language through a drop down menu that contains links to an alternate URL. I saw many requests from Googlebot to these links. Can this be dangerous? We have an alternate tag in the head, and that works well. In general that wouldn't be something I'd really worry about, especially if those separate multilingual URLs essentially result in the page loading in that language. That's something you'd want to have happen. On the other hand, if you use those multilingual URLs to set a cookie and then change the language on the existing URL, then we might not even notice the language change. But if you're saying that everything is working well with the rel alternate hreflang tags, then I wouldn't worry about it. It sounds like you're already doing the right thing.
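
(For illustration, a minimal sketch of the rel alternate hreflang markup mentioned here, with hypothetical English and German URLs:)

    <!-- In the <head> of every language version, listing all alternates, including the page itself. -->
    <link rel="alternate" hreflang="en" href="https://example.com/en/">
    <link rel="alternate" hreflang="de" href="https://example.com/de/">
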
Let's see: when updating an article, what's the proper way to do it in WordPress so that Google can show the last update date in search? I usually just write "update" and then the date in the post, above the paragraph, but obviously that doesn't work well for Google. So that should actually work fairly well for us. We should be able to recognize the date on those pages and pick it up. I guess the tricky part is that sometimes we'll show the initial article date, and sometimes we'll show the update date in the search results. So it's not something where you'd always explicitly see that date in the snippet. I'm not completely sure, but I think you can also specify the date using structured data markup, which gives us a little bit more information too.
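
(For illustration, a hedged sketch of specifying dates via schema.org structured data; the property names are standard schema.org, but all values here are hypothetical:)

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "Example article title",
      "datePublished": "2016-01-01",
      "dateModified": "2016-01-12"
    }
    </script>
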
Part of your website is accessible only depending on your referrer, and by entering the URL directly in the browser. How does Google handle those pages for SEO? Is there any penalty for the main part of the site? Should we be using noindex? Let's see, tricky question. On the one hand, Googlebot crawls without a referrer. So if those pages are accessible without a referrer, then we should be able to crawl and index them. On the other hand, when a user clicks through from a search result, a referrer might not be passed along. And if you're blocking users depending on the referrer, then I would almost go ahead and put a noindex on those pages, to make it a little bit clearer that those are really pages that you don't want to have indexed, that you don't want to have people going to. If you want to take it a step further, if this is really confidential content that you're trying to keep out of search and keep inaccessible, then I would go with server side authentication for those kinds of pages, so that it's really locked down and nobody can just link to those pages and make them available, or access them directly or indirectly. So if it's really something confidential, I would lock it down properly.

Interstitial commercial ads, not ones advertising the app of the website, shown for mobile users: are they OK with Google's quality guidelines? I think at the moment they're pretty much OK, in the sense that we wouldn't explicitly go against them. From a personal point of view, I find them extremely obnoxious, especially on mobile, because it's sometimes really hard to get past them. And if your website already loads slowly because it's on mobile, and then you essentially block the user from seeing the content unless they find the right millimeter to click on to remove the interstitial, then that's obnoxious, and I don't know if a lot of people would go through that and say, well, this website is really worth the trouble. So from my point of view, I'd discourage that. But from the webmaster guidelines point of view, I think at the moment it's not something that we would say is explicitly forbidden.

I manage some movie websites, and a common practice is to force a YouTube popup when the user first lands on the site. I don't know, that would be obnoxious to me too. And then to get to the main content, they have to close the YouTube popup. What are your thoughts on doing this from Google's content rendering point of view? It's possible that we would actually see that popup and try to index it as part of the content, which probably isn't what you're trying to achieve. And in general, this seems like something where you're being obnoxious to the user, and it probably isn't going to help your site in the long run. So from a webmaster guidelines point of view, this is probably not something that would be problematic. But for your users, from a user experience point of view, it's probably not something I'd recommend doing.

The guidelines say to use rel canonical for duplicate pages. I think we talked about this just before.

Fetch as Google renders a website at only 1,024 pixels wide. Does Google calculate rankings for a website from this view? We have a normal website that looks good at 1,200 pixels, and a separate mobile version that looks bad at 1,024. Can this be a problem? As far as I know, the width that we use for rendering for indexing is the same 1,024 pixels that we show in Fetch as Google. So if your website really looks bad there and the content isn't actually visible, then that might be something to look at and maybe improve. In general, even if it doesn't look perfect, as long as the content is still there and we can pick it up, that's not something I'd worry about too much.

To what extent does a user's search history and/or browser history affect their search results in Google? And does that vary depending on the device that they use to perform their searches, if at all? So we do have some amount of personalization. It's really not something that I'd be able to quantify, to say with some number to what extent we use personalization. You probably see this yourself when you search for something and you see your own site in the search results where maybe you wouldn't expect to see it; then you check in an incognito window and see that it's not actually showing in that position for a not logged in user. So to some extent we do use personalization based on the user's search history. With regards to the device, I believe you basically just have the different mobile factors as well. On mobile we do take into account, for example, the mobile friendliness of the site with regards to its rankings. So that might be something where you'd see slightly different results when searching with your account on a mobile device.

MALE SPEAKER: Hi John. Quick follow up question there. So does browse history-- so does my browse history on desktop if I've logged into my account impact search rankings on my mobile device if I'm logged in to the same username?

JOHN MUELLER: If you have your web history enabled, then I think that it would. I'm not 100% sure, but I know my previous searches always show up on my phone when I search. So I would assume that that connection would be there.

MALE SPEAKER: Great, thanks.

JOHN MUELLER: I was wondering if the algorithm is different between Taiwan and .com, for example for Panda and Penguin? For the most part, our search ranking algorithms are essentially the same for all countries and languages. There are very few parts of our algorithms that are different or specific to individual languages or countries. Sometimes the country aspect plays more of a role with regards to which search features are shown. So whether or not we'd show a featured snippet might depend on your country. How we understand your content might vary from language to language, in that sometimes it's easier to understand synonyms, sometimes it's easier to split words, especially when you're looking at languages that we're traditionally not that good at. So those aspects can play a role there. With regards to most of our algorithms, though, we do try to make them as generic as possible, because if we have one algorithm that works really well across the board, then we don't have to worry about fine tuning it and testing it in all of these different languages and countries. We can assume that it'll continue working across the board, and we can update one version and make sure that those improvements are available to everyone worldwide.

Let's see: hosting with bad neighbors. Should we worry about it? What should we look for? So in general, we're pretty good at understanding that sites are hosted together. If you go to an internet hoster, then chances are, especially with the lower end hosting packages, you'll be on a shared server with maybe thousands or even hundreds of thousands of other websites. And some of those might be questionable or even crazy websites. That's not something that you can really control; it essentially happens on the hoster's side. And if we can tell that this is a normal website hosting situation, then that's not something that we'd really worry about. A more problematic situation might be if you're hosting hundreds of thousands of spammy websites and you include one good website on the same server. Then of course what might happen is our algorithms would think, well, everything on this server is really spammy, therefore it's not worthwhile watching out for that one individual site that's actually good. So if you're hosting your website with a normal web hosting provider, one that you trust to uphold their terms of service and that is reactive when people do crazy stuff, then I wouldn't really worry about it. I wouldn't try to do a reverse IP lookup and look up all the websites that are on the same IP. Chances are you'll find something crazy along with a lot of reasonable stuff. So that's generally not something worth looking for.
We're creating instances of a CMS for our partners; they would sell the same sort of sites. What would be the impact on the sites' SEO? Would the content be considered duplicate? If you're copying the content to other sites, then yes, that content would be considered a duplicate version of the same content. But like I've mentioned before, we don't have a duplicate content penalty. It's not that there would be a manual action taken for sites that are sharing the same content like this. We'll try to figure out which of these are the right ones to show in the search results for specific queries, and show the right version. So in particular, if you're taking the same content and you have it on maybe one website that's in German and another that's in English, or one in the UK and one in Australia with the same English language content, then depending on the user's location, we would guide them to one or the other. We know that they are duplicates, or that part of the content is duplicate, but that doesn't mean that we would never show that website. We'd try to pick the version that is more relevant to the user and show that in the search results.

Does the Search Appearance HTML Improvements report, with duplicate titles and descriptions, in Search Console take into account the link rel canonical? We're seeing it complain about pages that have this added. So what's probably happening there is that we're indexing the pages with a rel canonical first, as the version that we find, because we have to index that version in order to process the rel canonical. Then in a second step we actually follow the rel canonical and use it to index the canonical version. So what might happen, depending on the timing, is that we have one version indexed like this and one version indexed like that, and that might show up as a duplicate in Search Appearance. In general though, that settles down fairly quickly: we follow the rel canonical, we focus on the canonical version, and over time you would see primarily the canonical version in these search features.

Let's see, maybe we can take a few from the chat. And I'll try to scroll through the questions that we have so far. What's on your mind?

BARRY: I should talk now? OK, great. I still want more clarification about what it means that Panda is part of the core ranking algorithm. I know there were lots of complications in terms of getting that into the core algorithm last time. And it's like, yeah, we put it in the core-- it's kind of matter of fact. Hey, yeah, we did this. We put it in as part of the core ranking factors, who knows, probably a few months ago. And I'm kind of stunned by that, because I never thought they would-- I kind of thought there would never be another Panda update. But I guess now there are always going to be these Panda updates within the core ranking algorithm. The question is, what does this all mean? It's hard to explain, but you know what I'm saying? One is, why didn't you let us know initially when it happened? Did you know? And why didn't you let us know? And two is, obviously you don't want us to worry about it. But for us, it's like, all right, people are scratching their heads. They were hit by this Panda 4.2 algorithm and they never got out. And now you say, hey, well, now it's part of our core ranking algorithm, it'll probably run more frequently. And people are like, what do you mean by frequently? Is it real time? These are the questions that are going on all over the place right now. So I'm trying to get some clarification for people. So you talk, and then I'll ask you about Penguin for a second.

JOHN MUELLER: Oh my god. I don't know what we want to say about this specifically. As far as I know, it's more of a rolling update so that some changes from Panda take a while to kind of roll out. And over time, we kind of recalculate things. Then we kind of start the next cycle. So it's not so much a real time as something that's just periodically rolling out slowly.

BARRY: So could I [INAUDIBLE] that for you?

JOHN MUELLER: Sorry?

BARRY: Could I try to rephrase that?

JOHN MUELLER: Sure.

BARRY: So with the last one, 4.2, we knew it was a rolling update. And I believe the assumption was that all the Panda scores were figured out on a specific day for each site. I'd say it was on July 4, 2014, whatever it might be; that's when the scores were calculated for the site. And then it was rolled out slowly from there. Are you saying now, in the core ranking algorithm, there are still going to be those times where you recalculate the scores every so often? Not real time, but-- the scores are not real time. It's not like tomorrow, and then every single day, I'll have a new Panda score baked into this algorithm?

JOHN MUELLER: As far as I know that's the case, yes.

BARRY: So as far as you know, it's not real time in terms of that scoring.

JOHN MUELLER: Exactly.

BARRY: But the roll out could still be gradual and slow. OK, I got it, I think. And keep going.

JOHN MUELLER: I don't really have that much more to add to this. I mean, it's something where it's possible over time that we'll have more information about how that specifically happens. But I think at the moment that's kind of the level that we think is relevant.

BARRY: When you say that people really shouldn't have to worry that much these days about this Panda algorithm now that it's baked in, it's going to be less apparent to people?

JOHN MUELLER: I wouldn't worry about-- if you know your website is lower quality, then of course you should worry about it.

BARRY: I didn't know my website was lower quality. I got hit.

JOHN MUELLER: Yeah, but I mean it's something where we try to look at the quality of a website and understand which ones are higher quality, which ones are generally lower quality, and take that into account in the ranking side. And this is essentially just a way of kind of making those updates a little bit faster or a little bit more regular.

BARRY: So you think it's going to be more regular, more faster?

JOHN MUELLER: I suspect, yeah. I won't make any promises.

BARRY: And [INAUDIBLE] that the core ranking algorithm-- whenever you run this core ranking algorithm, it doesn't mean that there's going to be-- let's say you have like 100 signals in this core ranking algorithm. It could be 102, could be 300, could be 3,000. But every single time you run this core ranking algorithm-- which I think is always, but again, that's the confusing part-- does it run every signal in that algorithm, in the core ranking algorithm? Or could it run, let's say, 15 signals this time and 20 signals that time? Is it always running the same signals when you run these things? I'm making this more complicated than it has to be, right?

MIHAI APERGHIS: I guess Barry wants to ask, what's the main difference between having a separate algorithm versus having it integrated into the core algorithm? What's the difference? What does that mean, exactly?

JOHN MUELLER: I don't know.

ROBB YOUNG: John, let me ask a different question that's related. Would you ever foresee a time where you'd let people know within Webmaster Tools what part of the algorithm or separate algorithm is affecting their site? Because if not, then I'm not sure what the point is of knowing or guessing anyway.

JOHN MUELLER: Yeah. I can't promise anything in regards to, like, would you ever do this? That's kind of a very vague situation. But we do regularly push with the teams to make sure that-- or to try to get more information about the algorithmic changes into Search Console, so that, especially if there is something very actionable that the webmaster can do, that information is given there. So if we, I don't know, had an algorithm that looks at, I don't know, duplicate content on your website, and we thought it would be useful for you to know about this duplicate content so that you can fix it on your website, then that might be something that we'd show in Search Console. It's really tricky, because a lot of the ranking algorithms that we have aren't really made for actionability on the webmaster's side. So if we think that your site isn't relevant for this query, it's not that we can tell you, well, you should make your site more relevant. You kind of already see that based on its ranking for those queries. So those are the tricky ones, and essentially most of our ranking algorithms are that way. They're made for us to figure out which are the most relevant sites for the search results, and not primarily made as a way of giving more information to webmasters.

ROBB YOUNG: So would it even help people then, even if you-- I'm trying to play devil's advocate here. If you said, you're affected by Panda, or you're affected by Penguin-- because most people would just assume that Penguin's mainly links, Panda's mainly content or quality. I'm not sure what their next step would be anyway, even if you--

JOHN MUELLER: I can see some situations where it might make sense, if you knew a little bit more about what specifically the problem was. I imagine for most webmasters, it's either in the gray area, which is kind of what you would expect anyway; it's not something special that we'd look at. And a lot of them are probably in the area of, well, I know I did all of this sneaky stuff in the past.

ROBB YOUNG: If you take our site out of the equation altogether, I'm not sure how many people get that news and then are genuinely shocked.

BARRY: Yeah, but they might not be shocked, but they might go ahead and say, all right, now it's time I clean up my act. I've already been arrested one time; getting arrested a second time for whatever it might be, even a small thing-- so it might be, all right, it's a wake up call. Maybe it's time I stopped buying links, or trading links, or whatever it might be. The question is-- I like the way Mihai phrased it-- what's the difference? Is it just going to be more regular? What's the difference that we can expect now that Panda is part of the core ranking algorithm?

JOHN MUELLER: I don't really have a good answer for you.

ROBB YOUNG: Barry, assuming you got an answer, what behavior do you think would change dependent on it? If John said OK, we treat them differently because X, would you or your clients then change that behavior?

BARRY: When I was hit by Panda 4.1, I just played around with the Disqus comments to make sure I wasn't publishing the spam stuff. I did take some action on the spam in my comments. That's the one little thing I did. I didn't think my content itself was bad, although some people in the community do think it's bad. But that's the one thing I did. And I know people, when they got Penguin penalties, they do link [INAUDIBLE]. They hire people for that. I know people do things. So it's the same reason why Google will notify you if you have a link penalty notification or something like that, a manual link action. So I think it's useful, but at the same time, I see why Google doesn't want to provide that at this point, or why it might be hard.

ROBB YOUNG: Google also has a benefit-- maybe this is paranoia as well. Google also benefits by notifying you of link stuff, because they collect a lot of information on links if millions of webmasters start cleaning up the same links. With just general quality guidelines, Google doesn't get anything in return really, does it?

JOHN MUELLER: Well, if you have good websites that show in search, then that's something that helps us too, right? But I think the tricky part is that in general, most of our algorithms are really made for search. The output that we get from them is really how we should show this site in the search results, and not so much something specific that the webmaster should change. But maybe at some point we'll have more information, especially on the quality side, I think. That might be something where we could potentially give some information, where maybe you'd have something like a quality score, like you have for AdWords. Then you could see, well, we think your site is kind of low quality, and maybe you want to rethink what you're actually doing there.

ROBB YOUNG: Wouldn't you still have to break down-- because quality is still very subjective. You'd need to break that down into various different pots.

JOHN MUELLER: I don't know. I mean, it's all very theoretical. At the moment we don't have plans for that.

MIHAI APERGHIS: I think with these questions we're asking about whether there's going to be information in Search Console about these things, the ranking algorithm, we're mainly worried about the web spam related parts of the algorithm, rather than whether a page is [INAUDIBLE] or not. Because we can kind of figure that out on our own. And of course, as we said, that's not much the webmaster can change. If you're not relevant for a query, then you're not relevant. But when it's a web spam algorithm, that can be useful information. You can figure out if your SEO team is doing something they maybe shouldn't be doing. That might actually be something the webmaster can use to actually make a change, especially when he's not in control or doesn't know whether SEO related activities are being done or not. That could be something.

JOHN MUELLER: So one thing where we do do something similar is with some of the hacked site stuff, where we do have algorithms that try to recognize when a site gets hacked. And we'll show that in the manual actions section. And if you go through the reconsideration flow, then that gets cleaned up a little bit faster than if we have to algorithmically wait for everything to be re-crawled and recognized as clean. So that's almost like a mixed section there. It's good feedback to have. And it's good to hear it from time to time, because that kind of reinforces that asking for more information like this in Search Console is the right thing to do.

MIHAI APERGHIS: By the way, regarding what Barry asked, so when you're going to launch the next version of Penguin-- which is going to be real time-- is that going to be directly integrated with the ranking algorithm, or is it going to be separate [INAUDIBLE]?

JOHN MUELLER: I don't know what the specific plans are there.

MIHAI APERGHIS: So there's no news regarding Penguin?

JOHN MUELLER: Nothing's for sure.

BARRY: Is your gut still this month?

JOHN MUELLER: I don't know. What can I say? I'm obviously happy when it launches as fast as possible so that we can move on from there. But I also want the team to make sure that they're happy with the results. I don't want to push something artificially just because, like, us going in and saying, oh, we said it would be there by the 1st of February, therefore you have to push it, even if it's not quite ready. We prefer to make sure everything is ready and actually working as it should be before we make it live.

MIHAI APERGHIS: John, is this the usual practice, where you create a new algorithm that's a certain ranking signal, you create it separately and it feeds into the core ranking algorithm, and then later on in its development you integrate it with the core ranking algorithm? Is that the usual process? Maybe RankBrain, for example, was initially separate, and now it's part of the core ranking algorithm? Is that the case right now?

JOHN MUELLER: It kind of depends. So sometimes we do it like this. Sometimes we do it differently. It really depends on the situation, on the data that's available, and how we can kind of process that.

MIHAI APERGHIS: I assume the bigger or the more complicated the data, the more you'd need to create something separate, maybe?

JOHN MUELLER: You'd almost have to join us to find out more.

MIHAI APERGHIS: Got it.

JOHN MUELLER: All right, let me double check to see that we're not missing any important webmaster type questions.

MIHAI APERGHIS: Oh, I have one more. I actually tweeted you about this. It was a question on the search forums. If a user verifies his website in Search Console, let's say it's verified with a meta tag, and they then remove the meta tag, how often does Search Console re-verify to check that the tag is gone?

JOHN MUELLER: I think it's once a month, approximately.

MIHAI APERGHIS: So whether it's a meta tag, an HTML file, any verification method, it's once a month, or is each independent?

JOHN MUELLER: I think it's all on like a once a month cycle-- and it's not necessarily on the first of the month, but kind of rolling. But you can also force that. If you're in the account you can say-- what is it? There's a way of refreshing that directly if you're the owner of the site. So if you remove someone else's meta tag and you want their access removed immediately, then you can force that to happen through Search Console. I forgot what the name of that link was.

Let's see, here's one: I'm doing SEO for a used car dealer. If I get a backlink from a website having more traffic, does that mean it will improve my SEO rankings? So in general, just because you have a link that sends traffic to your website doesn't necessarily mean that your website is more relevant or higher quality and that we should show it higher in the search results. In particular, if you buy advertising from a different website and that advertising has a nofollow on it, that might result in a lot of traffic to your website, but that doesn't necessarily mean that we would see it as a sign that your website is actually high quality and should be ranking higher. So generally what I'd recommend is really making sure that your website is such that all of the traffic you're getting is actually people who come back again, who say, well, this is a fantastic website that I found through this advertisement, or this promotion, or because someone mentioned it, and it's something that I go back to and recommend to other people as well. If you turn that one time visit from someone who's coming through an ad into something more regular, then that's something that we might pick up for ranking as well, especially if we see people recommending this website, saying, hey, this is a really fantastic website; maybe I found it here, or someone else recommended it to me, but it's something that I want to recommend too. That's something that we might pick up as a link and take into account for rankings.
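
(For reference, the meta tag verification method discussed above looks like this in the home page's <head>; the token value is a made-up placeholder:)

    <!-- Search Console checks periodically that this tag is still present. -->
    <meta name="google-site-verification" content="EXAMPLE-TOKEN-1234567890">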

MALE SPEAKER: [INAUDIBLE].

JOHN MUELLER: I think that's pretty much it from the questions here. There's a Penguin question that I think I answered, a question about the Google+ business pages changing. That's something you'd probably have to take up with the Google+, Google My Business team directly, not something that I'd be able to help out with. So any last questions from you all?

MIHAI APERGHIS: John, regarding AMP pages. If, for example, we use the #development=1 flag and you get some errors regarding a tag, let's say we used certain attributes that aren't supported, [INAUDIBLE] Google will completely ignore the AMP page, or it doesn't matter [INAUDIBLE]?

JOHN MUELLER: So if the AMP page doesn't validate, which could be something as simple as specifying the image wrong or not having an image, then we won't take that version into account for the AMP pages. So they really have to be valid AMP pages. They have to pass the validator. They'd have to pass the Structured Data Testing Tool so that we find the right markup on those pages. Then we can take them into account.

MIHAI APERGHIS: So it has to be 100% error free before it's taken into account.

JOHN MUELLER: Yeah. And we do that on a per-page basis. So if part of your site validates and part doesn't validate at all, then we'll take the valid part into account. It's not that we'd say, well, half of this site is [INAUDIBLE] pages, we don't want to trust it. We'll essentially take into account the pages that we can pick up, and ignore the ones that we can't validate yet.
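
(For illustration, a hedged sketch of the kind of article markup the Structured Data Testing Tool would check on an AMP page; the schema.org types and properties are standard, but all values here are hypothetical:)

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "NewsArticle",
      "headline": "Example headline",
      "image": ["https://example.com/lead-image.jpg"],
      "datePublished": "2016-01-12",
      "author": {"@type": "Person", "name": "Example Author"},
      "publisher": {
        "@type": "Organization",
        "name": "Example Publisher",
        "logo": {"@type": "ImageObject", "url": "https://example.com/logo.png"}
      }
    }
    </script>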

MIHAI APERGHIS: Any idea on when the AMP page test will begin?

JOHN MUELLER: It should already be running in the beta. So if you signed up for the general Search Console testing, then you should have a link to the AMP testing there as well.

MIHAI APERGHIS: I'll check it.

JOHN MUELLER: So with that, let's take a break here. I'll try to make sure the next Hangout works a little bit more smoothly so that we can start on time. But thank you all for bearing with me. And hopefully we'll see some of you again in one of the future Hangouts.

BARRY: Thank you John.

JOHN MUELLER: Bye everyone.

MALE SPEAKER: Bye.

MALE SPEAKER: Thanks.

MALE SPEAKER: Thank you John.