Reconsideration Requests
18/Dec/2018
 

Google+ Hangouts - Office Hours - 08 September 2015







Transcript Of The Office Hours Hangout


JOHN MUELLER: Welcome everyone to today's Google Webmaster Central Office Hours. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland. And part of what I do is talk with webmasters and publishers like the people here in the Hangout or people that submitted lots of questions already. As always, when we get started, is there anyone new here in the Hangout live that wants to get started with some questions?

FEMALE SPEAKER: Yeah, hi, John. I don't know if you guys can hear me.

JOHN MUELLER: Yes.

FEMALE SPEAKER: I hope you do. OK, cool. Hi, first of all, thank you for inviting me. I'm really happy to be here. I always watch the Hangouts. It's the first time I'm actually on one. I got a question regarding something that happened to one of the websites I take care of. Part of it got de-indexed due to having hreflang tags for different languages but also canonical tags in the language folders. So every language would have their own canonical tags for that language and also the hreflang tags. And I'm wondering why it got de-indexed. Like two-thirds of it just disappeared from Google. Now, the weird part is maybe the canonicals were conflicting in signals or something. But whenever I would do a site search for the pages that got de-indexed, I would not get the result. But then if I would search for certain keywords for those pages, they would still be ranking. So if they're not indexed, how can they still be ranking?

JOHN MUELLER: I'd probably have to look at the specific URLs. If you have, I don't know, a forum thread or somewhere where you're discussing this already, I can take a look at that.

FEMALE SPEAKER: I've mentioned it on the Google forum, and that's how we found some of the issues being related to the canonical tags. But the problem is my client doesn't want me to make this public. So can I send you a private message with a URL? Would that work?

JOHN MUELLER: Sure, you can send that to me. I don't know if I can reply directly. For a lot of these things, I can pass them on internally. But I can take a look to see if maybe there's something missing in the forum thread as well. In general, the hreflang crosslinks wouldn't remove a site from the index. They wouldn't remove the URLs from being indexed. The canonical, you could theoretically do that in a way that could result in some of these URLs being removed from the index. But that wouldn't be if each language had its own canonical. That would be, for example, if all the canonical tags were pointing at the home page. And that's something where we might think, oh, well, all of these things are the same as the home page-- we'll just do the home page and drop everything else. But that doesn't sound quite like what you saw.

FEMALE SPEAKER: Well, we did manage to exclude everything else. And there was one weird thing. The cache of one of the pages was showing the URL for a different language. So it was cached as the other language that was still showing as being indexed.

JOHN MUELLER: So some of the language versions were removed, or some of the pages completely?

FEMALE SPEAKER: Yeah, two out of three languages were removed. Now the other issue is it's not very different languages. So it's not English, French, Italian. It was variations of the same language because of different regions that speak the same language. So it was German for Germany and German for Austria, for example.

JOHN MUELLER: OK. Sometimes what can happen if we don't have a clear hreflang annotation there is that we'll assume that these pages are the same, and we'll fold them into one. So you'll see that when you do like a site query. We'll just show one of those pages there. If you do info query for maybe the German and Austria version, then we'll show that URL in the search results for the info query. But we won't show it for a site query, for example. So that might be what was happening there. But if you have cleaned up the hreflang markup, then that shouldn't be happening there anymore.

FEMALE SPEAKER: Well, actually, we've removed the canonical tags from the other languages and left the main language with canonical tags. We still have the hreflangs, and those are implemented correctly. So that is not a problem. The question is how can I make this faster? I've fetched it with Google. I've resubmitted sitemaps. I just can't wait any longer.

JOHN MUELLER: Some of these things just take time. It's hard to say what you'd really need to do there. But I think one aspect would be really important is really to make sure that the canonical tag isn't across the different language versions. So your German for Germany should have the canonical pointing to itself, and the German for Austria should have the canonical pointing to itself. It shouldn't be that the German for Austria version has canonical pointing to the German for Germany version.
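To sketch that advice in markup (example.com and the folder names are hypothetical), each regional version keeps a self-referencing canonical while the hreflang set cross-links the variants:

```html
<!-- On https://example.com/de-de/ (German for Germany): canonical points to itself -->
<link rel="canonical" href="https://example.com/de-de/">
<link rel="alternate" hreflang="de-de" href="https://example.com/de-de/">
<link rel="alternate" hreflang="de-at" href="https://example.com/de-at/">

<!-- On https://example.com/de-at/ (German for Austria): canonical also points to itself -->
<link rel="canonical" href="https://example.com/de-at/">
<link rel="alternate" hreflang="de-de" href="https://example.com/de-de/">
<link rel="alternate" hreflang="de-at" href="https://example.com/de-at/">
```

The key point John makes is that the canonical never crosses language versions; only the hreflang links do.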

FEMALE SPEAKER: This is the situation-- every language folder has a canonical pointing to self and not to a different language.

JOHN MUELLER: That sounds good. But I guess I'd really need to take a look at the example then to see what the current status is and what is holding it up. Or maybe there's still something that needs to be done. Maybe it's just a matter of time.

FEMALE SPEAKER: Yeah, I understand that. Well, I'll try to message you. Maybe it gets to you.

JOHN MUELLER: OK, sure.

FEMALE SPEAKER: Thank you very much.

JOHN MUELLER: All right, more questions from some new faces in the Hangout? Anything specific? Otherwise, as always, we'll just start with the submissions.

MALE SPEAKER: I'm not a new face, but could I ask a question or should I wait till later?

JOHN MUELLER: Sure, go for it.

MALE SPEAKER: All right, so back on August 23rd there was some type of downward trend in pretty much every Index Status report I saw in Google Search Console. And I heard from German webmasters that you said in the German Hangout that the new number is the new normal level. And then I asked PR at Google, and they said, yeah, everything you see now in terms of the lower number is a more accurate count. And then I saw a bounce back up on the 30th, and it's back at the pre-dip levels today now that the Index Status report has updated again. So what was that dip that you said was-- or that Google said was-- the new normal?

JOHN MUELLER: I think that was a normal data glitch where something got messed up with the counts, and we showed that in Search Console like that. So I think the difficulty there is the timing, in that we had this weird data glitch that we noticed afterwards. And at the same time, or roughly the same time, we also made adjustments to what we would show in general. So it looked like some of those days were based on the adjustments that we made-- the answer you got from PR, the thing that I said in the German Hangout. But at the same time, there was this glitch in the numbers there. So it's a bit of an odd coincidence.

MALE SPEAKER: So both were correct?

JOHN MUELLER: Yes.

BARUCH LABUNKSI: So when glitches happen, it doesn't affect ranking whatsoever, right?

JOHN MUELLER: It shouldn't. So, I mean, theoretically, you could see that some kind of a glitch happens, and it does affect the search results. But in general, that's something you'd see right away in the Search Analytics report, in the traffic to your site-- all of that. So if you see a weird glitch like with the Index Status counts and you don't see any change in your search traffic, then probably that's just something that glitched with the counts there-- nothing crazy.

BARUCH LABUNKSI: Are there ever ranking glitches by accident?

JOHN MUELLER: I am sure there are. I don't know of any offhand. But things can always go wrong, and we have a lot of backup systems and a lot of checks and balances to make sure that things don't break. But theoretically it's conceivable that something will go wrong at some point, and it'll be visible in search.

BARUCH LABUNKSI: Speaking of updates-- I put up a question, and I just wanted to know something regarding updates. So I was wondering if we could transition to that question. If not, I'll wait.

JOHN MUELLER: All right, well then I'll get started at the top and work my way down. And if we don't make it to your question, feel free to ask at the end.

BARUCH LABUNKSI: Thanks.

JOHN MUELLER: All right. "I'm working on a site that was owned by a company in the Netherlands. The domain has been sold to a US company. I set the target to the US in Search Console and followed other best practices. Yet the site only ranks in Google Netherlands and not in google.com."

This sounds like either a situation where you need to give it a little bit more time to update all of the information that we have there, or where it might make sense to send me that domain and a little bit more information about what happened there so that I can double-check that all of our systems internally are working as expected and that they aren't stuck on the old state for that site. But in general, this kind of a change-- especially if you have a really well-established site in one country, and you say, well, all of this should be valid for a different company or a different country-- propagating all of that information just takes a lot of time sometimes. So that's not something where you'll see a change, a flipping over, happen within a couple of days. Sometimes that does take a month or so.

"If your site is suffering from an algorithmic penalty, and you make positive changes, how long before you see the results? Do you just wait until the page is re-crawled? Are many of the ranking adjustments done in real time? Or is there also a time element?"

So in general, we don't call these algorithmic penalties. These are essentially algorithmic adjustments-- the way our search works. This is the way that we do rankings. And a lot of these algorithms that are active in the search results do everything from trying to recognize the content to figuring out if this is a high-quality content source or not, or if there are any webspam issues that need to be taken into account.
And that's not something where we'd say it's a penalty-- that we're demoting this site because of something they're doing wrong-- but rather the algorithms are trying to figure out where the appropriate place is to put this site in the search results. So just a note on the side, I guess.

When it comes to how long it takes to see the results, some of these changes are visible very quickly. For other algorithms, it does take a bit of time until everything bubbles up, until everything settles down in the right place and is able to take into account the new changes that you've made. For example, if you've made site-wide changes across a website, then we do need to take a look at your website in general and understand that this new site-wide information that we have from your website applies here, and that can take a couple of months for everything to be re-indexed, re-processed-- all of that. On the other hand, if you change individual pages-- you change, for example, a title on a page-- then that's something that can be reflected in search within maybe a day or so. So it's not that there's one clear line where we say, well, you made this change now, and it will always be visible exactly a week later or two weeks later. That's not the case.

"Is it safe to reuse content from pages which have either 301s or 404s? We're redoing our site structure, and we have a lot of useful content we want to keep. But we obviously don't want any duplication penalty if it's on more than one page."

So first off, there is no duplicate content penalty, so you don't have to worry about that. That's one less worry, I guess. In general, if you're changing your site structure, you're welcome to reuse your site's content. That's a perfectly natural thing to do. If you're moving URLs, if you're removing URLs, if you're folding pages together-- all of that is a good reason to reuse that content and take that into account.
So by all means, reuse the content that's good, make sure that your new site is the best collection of all of your content, and then you should be good to go.

"About six weeks ago we moved our site to a new domain together with a redesign, responsive design, and HTTPS migration, with 301s on all new URLs, and did a site move in Search Console. The week after, we lost 70% of the organic traffic and still aren't ranking. What's up?"

So I'd probably also need to have the site to see what exactly is happening there. In general, these kinds of things should be less problematic. Sometimes we see that there are technical issues happening where maybe we can't follow the redirects properly. Maybe something is blocked by robots.txt. Maybe the server is blocking us somehow. All of these things can happen. So these are things that you usually find if you post, for example, in some webmaster help forum-- either in ours or in a general webmaster community where other people have gone through these steps as well. You're also welcome to send that to me, and I can take a look with the team to see if there's anything abnormal that's holding the site back or that's taking time to be re-processed there.

BARUCH LABUNKSI: But also maybe in Webmaster Tools, they haven't done the site move correctly, right?

JOHN MUELLER: The question says that they did that-- the site move in Search Console. It does check for some of the default settings there-- that the home page is redirected, that you have both of the sites verified, for example. But it doesn't check everything across the whole website. So maybe there's something that's still stuck somewhere.

MIHAI APERGHIS: By the way, John, they can use Webmaster Tools, or Search Console, to see crawl errors. And if I remember correctly, in the new account it shows the 404 errors and the links to them from the old site-- though I'm not sure it does that-- whether it also shows the redirects as a source for the 404 errors.

JOHN MUELLER: I don't know.

MIHAI APERGHIS: I think it does.

JOHN MUELLER: It's possible. There are some constellations where I think that happens. But I don't think it's by design. It's not meant to be a confirmation that you're doing a site move properly. But I think you might see some effects like that if you know exactly what to look for. But I wouldn't use that as a sign that the site move is working as expected. But like I said, you're welcome to send me the link, and I can take a quick look to see if there's something obvious that I can let you know about or if there's something stuck on our side.

And sometimes these things do take quite a while. If you move, for example, to an old domain that had a lot of problems in the past, then that's something that will be mixed in with your website as well. So, for example, if you move to a domain that's been used for years for really black hat spam tactics and has all of these spammy backlinks attached to it, then you're inheriting at least some of that by doing a move to a domain like that. So that's something you might want to watch out for when you pick a new domain-- to make sure that things are really aligned properly and that you're not moving into essentially a broken house instead of a shiny, new office building.

"We use rel next and rel previous for pagination, and a rel canonical tag to the first page as opposed to the View All page, as we feel this is better than showing 100-plus items on a very long page. Is this wrong? Will it hurt rankings?"

So what will happen in a case like that is you have all of these paginated pages, and each of these pages has a rel canonical pointing back to the first page. So we'll drop those individual pages from that paginated set, and we'll focus on that first page. And if on that first page all the content is available that you want to have shown in rankings, then that's fine.
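As a sketch of the setup described in that question (the URLs here are hypothetical), a middle page of the paginated set carries rel prev/next links plus the canonical back to page one:

```html
<!-- On https://example.com/products?page=2 (hypothetical URL) -->
<link rel="canonical" href="https://example.com/products">   <!-- points at page 1, not a View All page -->
<link rel="prev" href="https://example.com/products">        <!-- previous page in the set -->
<link rel="next" href="https://example.com/products?page=3"> <!-- next page in the set -->
```

With the canonical pointing at page one, only page one stays indexed, which is exactly the folding behavior John describes.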
On the other hand, if those individual other pages are important to be shown in search, then maybe that's not exactly what you're trying to achieve. One example there might be if you have something like an e-commerce site where you have one page that lists all of your items, and from there, it links to the individual product pages. Then, if we don't have any other form of navigation to those product pages, and we're only focusing on the first page of that paginated set, we might miss links to those individual product pages. On the other hand, if you have a normal navigation on the website, and you have this long paginated set, and you're only indexing the first page, then we'll still find those individual product pages anyway through the normal navigation. So it depends on your website and what you want to achieve. Maybe it does make sense to focus on the first page. Maybe the additional content is something you do actually want to have indexed.

"I have two identical sites, a .com and a .co.uk. The cache query shows the wrong site: the cache for the .com shows the site from the .co.uk. And I wrote rel canonical tags to make it clear that these are two different sites. Do you have any tips to speed up the canonicalization process?"

So what probably happened there is we saw that these pages were the same-- or they appeared to be the same to our algorithms-- and we thought, well, the webmaster just accidentally put up two versions of the same content. We'll help the webmaster clean that up and fold everything into one clean version, make it easier to index, make it easier to show in search results. So that's probably what you're seeing there. Adding any kind of additional information to let us know that these are really two separate pages is a really good idea. A self-referential rel canonical is a great thing to do here. Maybe the hreflang between those pages.
Maybe making sure that the content itself is also unique would help us figure out that these are actually unique pages that need to be indexed separately. So all of these are things that you could do. I don't think there are any magic tricks to have this happen right away. So implementing some of these things makes sense. Making sure that the pages can be crawled and indexed like that makes sense. And as we re-index those pages, we'll probably recognize that they are separate, that they shouldn't be folded together, and show them separately in search.

So one thing that you could do to jump-start that process is to let us know that these pages have changed and that we should re-crawl them, which you can do either through a sitemap, for example, if you give us a new date for the sitemap URL. Or you can use the Search Console tool-- what is it?-- Fetch as Google and Submit to Index, where we can take that URL and send it to indexing right away. But the first step is really to make sure that all of the signals that we collect about these pages tell us that these are unique pages that we need to process separately.

"Our mobile site is showing a lot of content using click-to-expand links, but our desktop version doesn't. Is this going to cause any problems, either from ranking or other points of view? We're doing this so our mobile version offers a better user experience."

In general, that's fine. If your mobile pages are equivalent to your desktop pages, they don't have to be exactly the same content. They don't have to have the exact same UI. If this user interface works well for your mobile web pages, then by all means, keep using that. Yeah? Go ahead.

MALE SPEAKER: John, I had been trying for a long time to get connected, but was not able. Actually, John, I had one question regarding mobile sites.

JOHN MUELLER: Sure.

MALE SPEAKER 2: Sometimes I feel that some important links that are showing on my desktop site are all important on my mobile site as well. But in many forums, in many places, I found people suggesting there should not be as many links on mobile sites as on desktop sites. Is there any restriction on this from Google's side or not?

JOHN MUELLER: No. Not that I'm aware of, at least. You can make your mobile site however you think makes sense. The important part for us is that we can recognize that it's mobile-friendly-- so not blocking the resources that are shown there, so that we can see the page as a mobile user might see it. But apart from that, we don't have any specific restrictions there. So if you think these links make sense for mobile users, by all means, keep them there. If you think that they're not so critical for mobile users, maybe it makes sense to fold them away in an expandable element or maybe even to not show them at all for mobile users. That's essentially up to you.

MALE SPEAKER: OK, thank you.

JOHN MUELLER: Sure. There's this question about the video snippets. I don't have a newer answer for that, sorry. So I don't have any update there at the moment. "We load jQuery, minified, on all of our pages. The version we use is 96 kilobytes. This is 4 to 10 times the size of the HTML code for the page. Can downloading this amount of code relative to the size of the content dilute the relevance and quality of our content?" No. So, one direct answer.

LYLE ROMER: Thanks, John. Also, just a quick follow-up on that, because the reason that question came up was from using PageSpeed Insights and looking at its recommendations. Another thing that it always wants us to do is to minify the HTML code and get rid of the extra tabs and lines-- blank lines and things. Is that something that could have any effect on ranking? Or is it just--

JOHN MUELLER: That's essentially just tips to speed things up. So if you can reduce the size of your HTML pages and it works faster for users, then I guess there's a kind of indirect effect in that users like your site more, they stay on your site longer, they recommend it to others more. Maybe they'll stick around and actually buy something or convert if you have that set up. But essentially, there's no direct ranking change there. And for us, the page speed effect in ranking is more, I guess, about the line between a site that's really, really slow and a site that's kind of normal. So if you're in this kind of normal range, where these pages load within, I don't know, 20, 30 seconds maximum, something like that, then tweaking things there to be a tenth of a second faster is not going to have any effect at all on rankings. But it might, of course, have an effect on your users. And if users are happy, maybe they'll recommend you more, and that'll be this indirect circle back for your site. But it's not that if you can tweak a tenth of a second out of your site, you'll see any kind of a specific ranking change.

LYLE ROMER: Great. Thank you very much.

JOHN MUELLER: The other thing to keep in mind with the jQuery there is, if you're loading a version from a CDN somewhere, then often users will have that version already in their browser cache. So if you're using a standardized version that everyone is embedding, and you're embedding it from the same CDN that everyone else is using, then sometimes users will be able to just take that from their cache-- the version that they picked up somewhere else, visiting another website, can be reused. So that's a small speed bonus you can get there, even if you have this relatively large jQuery JavaScript file.
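As an illustration of that suggestion, this amounts to referencing a standard jQuery build from a shared public CDN instead of self-hosting it (the version number here is just an example from that era):

```html
<!-- Served from a widely shared CDN, so many visitors will already have it cached -->
<script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
```

The cache benefit only applies when the exact same build and URL are used across sites, which is why a stock, unmodified version is the one worth embedding this way.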

LYLE ROMER: That's a good suggestion. Thanks, I think we'll do that.

JOHN MUELLER: "We have a shop in Germany, Austria, Switzerland. We're using subdomains for each--" just trying to mute you-- maybe not. OK. "We have a site in Germany, Austria, Switzerland. We're using subdomains for each region and followed the help from Google Webmasters. Our rankings in the regions get mixed. For example, in Switzerland, the DE subdomain is showing. I also think we have a duplicate content problem. Should we use canonicals?"

Using canonicals is always a good idea. It helps us figure out which version to show in the search results. So what you should be doing there is having the German for Germany version with a canonical to the German for Germany version, and the Swiss version to the Swiss version. So it shouldn't be canonicals across the versions, but rather within the individual versions. Using something like hreflang really helps us a lot to show the right version in the search results. It sounds like that's something that you should be doing there. I guess those are the main things there.

The thing to keep in mind is, if you're using hreflang and you're seeing this kind of a mix-up, probably we're not able to pick up the hreflang information properly. So that's something where you should check Search Console as well, in the International Targeting section, to see if there are any specific errors around hreflang shown there. Also, we have a German Hangout coming up in, I think, two days. So if you want to ask more questions about this in German, feel free to join that. And we might also have a bit more time there to look into your site specifically.

"We have dynamically generated search results pages that keep getting indexed by Google over our actual content pages. How can we get these pages out of Google's index so Google will crawl more of the important pages?" I'd put a noindex on those pages. You can just put the noindex meta tag on those pages.
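That noindex meta tag goes in the head of each internal search results page, for example:

```html
<head>
  <!-- Keeps this internal search results page out of Google's index -->
  <meta name="robots" content="noindex">
</head>
```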
You could put noindex, nofollow on those pages so that we don't get drawn into this maze of search results pages. Those are essentially the best recommendations there.

"A competitor has bought over 100 URLs with the desired keyword optimized. Many websites, under the guise of being different e-commerce stores, link the sites to each other. They now hold the top two spots for this keyword. Is this black hat?"

That does sound spammy, and maybe it's worth submitting a spam report for that. Usually, if you have a handful of sites, that's not something that we would really get involved in, because sometimes it does make sense to have a small set of sites that are targeting something that's significantly different. But if you're talking about 100 different sites, then those start looking more like doorway sites, which is something that would be against our webmaster guidelines.

"I recall you saying that if a 302 temporary redirect is in place long enough from one webpage to another, Google will eventually treat it as a 301 permanent redirect. So after what time does that actually happen?"

I don't think we have any specific time defined where we say after this many days or weeks or whatever we'll treat them as such. Essentially what we're trying to do is recognize that this is not a temporary situation, but rather that this is the permanent situation, and that we should treat it appropriately. There's actually a pretty old blog post on this on Matt Cutts' blog, about 301 and 302 redirects and how they could be handled, which goes into this topic a bit as well. So that might be worth digging up. I'm not sure-- maybe it's from 2008, 2009 even, so pretty far back. But it's an interesting post with some of the difficulties around that.

"Regarding schema.org: an attorney office has way over 20 attorneys. On the index page there is a slider showing all attorneys. Each one has its own item type and several item props. Is this regarded as spammy?"
That generally wouldn't be a problem for us, in the sense that we don't use this kind of structured data markup for rankings. So it's not that there would be any kind of advantage from actually marking it up like this. It probably makes sense to have individual employee pages as well. If that's something that you want to highlight there, that might be something that we could show as rich snippets in the search results. But if this is a combined page with all of the employees, maybe linking to the individual detail pages, then marking that up doesn't really change anything from our point of view. So I wouldn't see that as being problematic.

"My question is about x-default in hreflang. I have many languages on multiple domains-- for example, .co.uk for English, .it for Italian, .com for German. Can I use an x-default to my .co.uk even though I already have it for my EN?"

Yes, the x-default can point to an existing page. You can, on your site, say, well, this is my page for English, and it's also the page I want to have shown for all other languages. You could also say, well, everyone speaks German; therefore, my German page is my x-default page. That's really up to you. And you can apply that on yet another level, where you say, well, this is my English page for the UK, and my English page in general, and my x-default page. So you could essentially take that to three levels.
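The setup described in that question, where the .co.uk page doubles as the English version and the x-default, might look like this (domains follow the question; the exact hostnames are hypothetical):

```html
<!-- Included on every language version of the page -->
<link rel="alternate" hreflang="en" href="https://www.example.co.uk/">
<link rel="alternate" hreflang="it" href="https://www.example.it/">
<link rel="alternate" hreflang="de" href="https://www.example.com/">
<!-- x-default reuses the existing English page as the fallback for everyone else -->
<link rel="alternate" hreflang="x-default" href="https://www.example.co.uk/">
```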

MIHAI APERGHIS: Yeah, John, what if you don't have any x-defaults? How does Google decide if there's no hreflang matching the user's browser settings or anything else?

JOHN MUELLER: We try to do that the way we've always done it. So even if you don't have any hreflang markup, we try to show the appropriate version. And sometimes we pick a good version; sometimes maybe we'll pick the wrong version. But it's essentially how we would handle any other kind of situation where there isn't any explicit markup on the page telling us which one to show.

"My site uses session IDs in the URL. We have canonical tags pointing to the URLs without session IDs. Do you know of any issues that session IDs could cause in a URL even if there's a rel canonical pointing to the preferred URL?"

So the rel canonical is a great way to handle this type of situation. Ideally, you'd avoid the session IDs completely so that we don't see them at all. But the canonical helps us to clean that up and to recognize that these session IDs are probably not that critical. Another thing you could do there is use the URL parameter handling settings in Search Console, which also let you define that this URL parameter is actually irrelevant for my website-- which could be the parameter for your session ID if you have that as a clear parameter in the URL. So that's another way to help clean that up.

Ideally, like I mentioned, you'd try to avoid the situation at all, so that when we crawl your website we only see links to the clean versions of those URLs, we can crawl those clean versions of the URLs, and we see them linked cleanly in the sitemap file. Maybe there are redirects from the alternate versions as well as the rel canonical there-- all of that information telling us to focus on the clean URLs. Sometimes you can't really avoid a situation with the session IDs. So just doing as much as you can to clean that up and letting us know about it is a good idea. The thing also to keep in mind here is if you explicitly search for those session IDs, we might still show them to you in search even if we follow the rel canonical.
So what sometimes can happen is you do a site query for your site, and you search for a URL with the session ID parameter, and we'll show that to you because we think, well, you're explicitly searching for this content; we know that this content exists here; therefore, we'll try to show it to you. So that's something to keep in mind there: if you explicitly search for it, it'll look like it's not working. If you search normally for content on your website, then it'll probably end up working well.

"Can you talk a bit about multilingual websites? What are the best practices? In WordPress, should content be on separate pages or on one? Can we use translation plugins?"

We can hardly hear you. Let me go through this question really quickly, and towards the end, we'll have time for some more general Q&A. So talking a bit about multilingual sites is something that's probably worth a full-hour Hangout, I guess, with all the different options. Some of the topics that you touched upon here in this question are quick to answer.

"Should content be on separate pages or on one?" It should always be on separate pages. We should always be able to use separate URLs to get to separate versions of your content. If you swap out the content dynamically on a single URL, what will happen is we'll index one version, and we'll never find the other version. So that's something where you're losing that content if you use the same URL for different things.

"Shall we use translation plugins?" No-- we would see that as auto-generated content. So I'd strongly avoid using plugins to create content. Using plugins in addition on your pages to make it easier for users to see translated versions is fine.

"Do we need separate domains for each language?" No, just separate URLs. Maybe you want to use separate domains or separate folders or directories or subdomains. But that's totally up to you how you want to handle the different language versions.
"We 301 redirect our product pages from domain.com/forsale to the domain.com/sold subfolder when a product is sold. Will these URLs be omitted from the search results due to content duplication issues, or will Google drop the old For Sale page eventually?" Yes, we'll drop the old pages eventually. As we follow those redirects and see that the new URL is the new location, we'll essentially take that into account. "I work for a company, and we're building a new website for our company domain. The owner wants to use the old website on a new domain for business purposes and A/B testing. What's the best practice? Do we noindex the old site?"

BARUCH LABUNKSI: I would--

JOHN MUELLER: Go for it.

BARUCH LABUNKSI: I would create a staging subdomain, and then you can do all the staging and give a password to whoever, and then you can test it that way. It's the best way.

JOHN MUELLER: Yeah, I guess that makes sense if you're doing QA testing with an internal team. But if you want to do A/B testing for users, then you need to split that somehow.

BARUCH LABUNKSI: The best way would be to do a subdomain, I guess because then you can--

JOHN MUELLER: Well, yeah, or any kind of way to kind of separate the things. I think from the question, it sounds like what might be happening here is that the website is moving from one domain to another domain, but actually, the old domain is going to be reused for something else. And that's a pretty bad scenario in the sense that we can't do a full site move in a situation like that because we'll have the new domain with the old content, and we'll need to figure out what kind of connection it has with the old domain. So we can't forward all of the signals from the old domain to the new domain because there's something new actually posted on the old domain. So ideally what you would do in a situation like that is use a new domain for the new content and keep the old domain for the old content, or do a clean move from the old domain to a new domain and post something new on a completely separate domain. So try to avoid the situation where you're moving the content from one domain to another, and at the same time, re-using the old domain as well because then we can't-- we can't see any redirects from the old content to the new content. We can't process the site move for that. We don't really know what to do with all of the signals that are collected with the old domain, like how should they be spread out. Should they be attached to the old domain for the new content, or should they be moved to the old content on the new domain? So try to keep it as simple as possible so that search engines don't have to second guess. That's probably, I guess, the simplest rule of SEO: make it easy for search engines to figure out what you're trying to do. "We have separate desktop and mobile web pages. We don't use a subdomain. Mobile pages are in the same directory, using rel canonical on the mobile pages. Is it necessary to put our mobile pages in our sitemap?" No, that's not necessary. 
As long as we can see that connection between the desktop and the mobile pages with maybe the link rel alternate and the rel canonical back, then that's perfectly fine. You don't need to list them separately in the sitemap file. "Is it OK to have AdSense above the fold?" I can't speak for AdSense policies. From a search point of view, that's perfectly fine. Lots of websites use ads on their pages to monetize the site. That's not something that we would see as being problematic. The main thing that we would see as being problematic is if the above-the-fold content is only ads. Then that's something where, when a user clicks on a search result and lands on that page, that's probably not the best user experience. But, otherwise, there are lots of websites that use AdSense above the fold or any other kind of monetization. And they still provide a great user experience, and they are perfectly fine in search. "We redesigned our site to optimize without keyword stuffing, got removed from spammy directories, et cetera, and we now rank on the first page of Bing and Yahoo, but not on Google. Any tips on why we can move up in other search engines but not Google?" I guess the most obvious answer to this kind of question is that, well, other search engines have different algorithms. They use completely different ways to handle rankings. So it would be normal to see different rankings in individual search engines. And maybe they get some things right that we get wrong. Maybe we get some things right that they get wrong, and really it's essentially a different way of ranking the search results. So it would be normal to see those kinds of differences there between different search engines. With regards to cleaning up a site, making it better, removing webspam issues, all of these things are things that definitely make sense and that we do try to take into account. 
But it's not something that, in general, you would see a direct effect where you clean something up here and a day later your site bumps up in the search results. That's generally not how it happens. It does take a certain amount of time for everything to get reprocessed, and that can be several months, maybe, depending on what kind of issues you cleaned up, how long those issues were in place, and how long it takes for us to technically recognize that these issues are actually all cleaned up. "Has the latest Panda fully rolled out?" I don't know. I tried to check before the Hangout, but I didn't receive an answer there yet. So I don't have any full answer for you there. "Our site's index counts in Search Console and in the search results are higher than our own crawler counts. The difference is over 15,000 URLs. What might be happening there?" That can definitely happen. It's really hard to say what specifically might be happening there. On the one hand, I don't know how you count pages on your site with your crawler. On the other hand, it's hard to say what specifically you're looking at there. So a site query is probably not something I would use for any kind of diagnostic information if you're looking at the about count there. That's something where we optimize that count more for speed than for accuracy. It's not something where the average user, when they do a query for cars, wants to know exactly how many pages have this word on them on the whole internet, but rather we kind of bring a bit more general information in there. So, on the one hand, I wouldn't use a site query for that. In Search Console, the index stats information is a great way to get an idea of how many pages we actually have indexed from your site. The sitemaps index count is another way to get that information based on the URLs that you actually care about. Some things that might happen to blow up this index count could be things like session IDs. 
It could be things like URL parameters that lead to multiple variations of the page. For example, if you have a product listing page that has different options where you could say, well, filter by shoes, or sort by size, sort by color-- select a filter for these different options. Then that can easily lead to really tons of pages being found and indexed separately because they do have separate content. And we might show that in the site index count in Search Console as well because we found these pages, we're indexing them, and we're letting you know about them. That doesn't mean that these are pages that you care about or that you need to kind of worry about. So that's where the sitemaps index count comes in, where you can say these are the 100,000 pages I really care about. These are all the products in my shop, the category pages, my new blog posts, whatever I have for my website. And I want to know how many of these pages are actually indexed. So that's what you could pick up in the sitemaps index count. "Does it make sense to take advantage of rel canonical or a 301 redirect from post 1 to post 2 in order to take advantage of post 1's rank, even if, in this case, it is not a 100% duplicate, but updated content?" On the one hand, if you're updating content, you don't need to change URLs at all. So maybe you don't need to redirect at all. That is probably the cleanest approach there. If you're just tweaking things, improving things, then I would take that extra complexity of separate URLs and redirects out of the equation and just say, well, I updated the content. It's the same URL as before, and we'll pick that up and rank that appropriately. If you're updating the content and changing the URL, then using a rel canonical or a 301 redirect is definitely a good idea. "My client's website is outranked by the new Local Pack results for brand terms. This also happens when I search for the brand name without specifying any location. 
Is there a way to rank above the results within the Local Pack?" So I don't know specifically how the local search results there are compiled, so that's something where I don't really have that great advice for you. But, yes, in general, you can rank better. It's not that there's a kind of block there where we'd say, well, for this specific type of search result, only this type of site can rank.

BARUCH LABUNKSI: But back in the day before the Hangouts with Johnston Simon, he was mentioning that there was a beta test ranking above. But now I don't see anything that's ranked above the new 3 Pack. So he has--

JOHN MUELLER: I don't know. I haven't specifically looked for that combination there. But a lot of times when I search for something that could be local, I'll see that local block and some websites above it and some websites below it. I don't know. At least I didn't notice recently that something like that changed. But in general, these are combinations put together based on the query, based on what we think the user might find relevant at the moment, and we try to bring the right combination of different search result elements together to help answer that. And it might be tricky here when the question is about the brand name. It probably depends on the brand name as well. If your brand is called, I don't know, SEO Toronto, then it doesn't automatically mean that you'll rank number one for SEO Toronto all the time, because people might be searching for other things with that query too. So just because it happens to be the name you registered as a brand doesn't mean that it'll always rank for that.

BARUCH LABUNKSI: OK.

JOHN MUELLER: Maybe Baruch will rank for SEO Toronto first.

BARUCH LABUNKSI: I don't want to rank for SEO Toronto.

JOHN MUELLER: OK. "We're seeing more and more Knowledge Graphs, Answer Boxes, and Google Search features on page 1 in the search results. I understand Google is looking for what's best for users. However, for SEO purposes, websites are losing real estate. Are these graphs part of search?" Well, yes, they're part of our search results. But like I mentioned before, we do try to figure out what elements to show to a user based on the query, based on the additional context we might have from this user, and sometimes that includes things like Knowledge Graph information or OneBoxes in the search results. And we don't show them all the time. So it's something where I suspect over time we'll try to find a balance there. And if you notice that for specific queries we're showing results that don't really help the user, that are bad search results, that are showing information that's maybe even wrong or tangential to what the user is actually looking for, then that's good feedback to have. On the one hand, you can use the feedback link in the search results on the bottom. I know that's something that the teams here go through. On the other hand, if you find a general pattern where these types of results are always bad because Google shows this stupid element on top, which is not something that users ever want to see, then that's something you can definitely send to me, and I'll chat with the team to see what we can do to improve that or to double check that this is actually working as intended. Because what we do sometimes see as well is that people will come to us and say, oh, this is really wrong. Actually, you should be showing this other website instead of this block here. And this other website happens to be my website or a client's website. And just because there's this connection that makes you think this is the best search result, doesn't mean it's the best search result for everyone. 
But we do take this feedback really seriously, and we do try to find the right balance there to match what we show in search with what the user is actually looking for. And we do recognize that a lot of this content does come from websites, and we try to guide people to those websites as well where we think that it definitely makes sense. So, for example, what we'll sometimes do when we show this type of answer on top in the search results is we'll have a direct link to the source, or we'll even have a text like "Find out more" or "Read more about this on this page" to kind of really guide the user to that specific page where there's actually more detailed information that helps the user to answer the question. All right, it looks like we just have a few minutes left. So I'll open it up for questions from you guys.

BARUCH LABUNKSI: Yeah, I wanted to know how often title tags change in SERPs.

JOHN MUELLER: How often what?

BARUCH LABUNKSI: How often do you guys update the title tags.

JOHN MUELLER: The title tags? I don't know. It's possible that it's not for every crawl. But we do try to pick that up as quickly as possible.

MALE SPEAKER: Yeah, John--

JOHN MUELLER: Go ahead.

MALE SPEAKER: John, actually, recently, as I told you, what was happening was that my company's CEO had the same name that some, you can say, fugitive had. And when I was typing my company CEO's name in Google Search, it was picking up that fugitive's information from a Wikipedia article. So I had no option in my mind. So what I did-- I just went to Wikipedia and mentioned there that Google should not pick up this information if somebody is searching for my company brand name with this person's name. I am not sure if it was really helping. Can you suggest what should be done in case it is picking up some other person's information for the same name-- how to deal with it?

JOHN MUELLER: It probably makes sense to send that to us somehow. So you can send that to me directly through Google+, for example, and I can take a look at that. But putting a text on the Wikipedia article isn't something that our crawlers are going to pick up and try to understand.

MALE SPEAKER: OK, I will send you.

JOHN MUELLER: OK, great.

FEMALE SPEAKER: John, I got a question as well regarding redirects, whether 302 redirects can take on a permanent character after a while. We usually use them to send users from one website to another without passing value from its backlinks. So are we at risk of passing the value from those backlinks even if we think a 302 redirect would not pass it?

JOHN MUELLER: Yes, a 302 redirect will pass PageRank as well. So using that as a way to block the PageRank from passing doesn't work. What you can do is use robots.txt to block that redirect part of your website so that we can't actually follow that redirect at all. Then that link kind of gets stuck there. I know some forums do that, for example. They'll have a jump script that takes users from one page to another page or a different website. And by blocking that with robots.txt, PageRank doesn't pass at all.
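[As a sketch of the jump-script blocking described here (the /goto/ path and url parameter are hypothetical, not from the discussion):]

```
# robots.txt -- disallow the redirect script so crawlers can't follow it
# and PageRank doesn't pass through the jump URLs
User-agent: *
Disallow: /goto/
```

[Outbound links would then point at URLs like /goto/?url=..., which redirect users to the destination but sit on a path that crawlers are not allowed to fetch.]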

FEMALE SPEAKER: We usually do that anyway, but I just wanted to be sure. Thank you.

ROBB YOUNG: John, do you have time to address our current issue again and let me know if the new site is having any issues? Because we're five months down the line from moving, and we're still showing 100,000 links from the previous one. And the new one is now dropping like a stone as well.

JOHN MUELLER: I probably need to take a look at that separately. Let me jot it down somewhere. Or can you post the URL in the chat? Then I can copy it from there.

ROBB YOUNG: Sure. It's the one with the E. I got it in there now, listed in chat.

JOHN MUELLER: Right.

ROBB YOUNG: There we go. We spoke about the 301s that we had on it. It's five months since they've been live and every single one of them is still showing from the old domain to the new. Somehow it's been picked up as intermediate links. And you said before that's not affecting the ranking of the new site. But given the issues--

JOHN MUELLER: I'll take a quick look. I don't know what specifically should be happening there. But you're trying to separate these sites completely, right?

ROBB YOUNG: Yeah,

JOHN MUELLER: Yeah, OK.

MIHAI APERGHIS: If you still have time for a question. I have a corporate client that I talked about last time. The issue is that-- let's say they have a very important page, book volume 1. And volume 2 appears, and if you search for it in Google, volume 2, Google still shows the volume 1 page instead of volume 2. The volume 1 page has a lot of good links. It's ranking pretty well. But we want to show the volume 2 page when people are searching for volume 2, and the volume 2 page is new and doesn't have any links yet. Is there any way we can tell Google this is more relevant?

JOHN MUELLER: Not easily. Not easily.

MIHAI APERGHIS: The links from volume 1 to volume 2 telling the users that we've just launched the book [INAUDIBLE] front page.

JOHN MUELLER: Yeah, all of these things help us. But if this page has collected value over a long time, then we'll still think this is the right page to show. So if this is like the latest iPhone and everyone has been linking to it for a year now or two years or whatever, and you come out with a new model on a new URL, then it'll be hard for us to understand that someone searching for iPhone is actually looking for that new page that we don't really have a lot of information about. So that's tricky. What does sometimes work is if you have a general page that you reuse where you say this is, I don't know, the current revision of this item. And from there, you link to an archive of the older versions of that item. Then we can focus on that general page and say this is the long-term page that we do show in search for that specific item. Sometimes that's not easily possible, like book series that are actually separate items-- those kinds of things. And then it's really kind of giving us all of these signals like you said: the link from the home page, maybe a link from the old product to the new product, a banner for the user. Those kinds of things help. But there's no secret meta tag that you can add there to say, well, push this page instead, but don't completely de-index my old page.

MIHAI APERGHIS: Right, so just accumulate as many signals as possible.

JOHN MUELLER: Yeah.

BARUCH LABUNKSI: John, did you get my recent emails I sent?

JOHN MUELLER: Probably, I think so, yes. All right, with that, let's take a break here. There are people waiting for the room. So I have to jump on out. It's been great talking with you all again-- lots of good questions. Thanks for all of that. And if you could send me the individual details that will be useful, I can't promise to answer on every email. But we do take them into account. We do pass them onto the team to double check to see what's happened with them. So thanks again and hope to see you again in one of the future Hangouts. Bye everyone.

LYLE ROMER: Bye, John, Thanks.