Reconsideration Requests
17/Dec/2018

Google+ Hangouts - Office Hours - 15 January 2016

Transcript Of The Office Hours Hangout

JOHN MUELLER: Welcome, everyone to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. We just got some snow, here in Switzerland, so it's almost like the way it should be in the winter. And we have a bunch of questions that were submitted already through the Q&A. And if any of you who are new to these Hangouts, or relatively new who are live here in the Hangout now have any questions, feel free to jump on in and start with a question from your side.

DANIEL PICKEN: I'll jump in with my few questions, if that's OK, John?

JOHN MUELLER: Perfect.

DANIEL PICKEN: Excellent. So I'll try and keep it as brief as possible. So first one is, how long does it take for Google to recognize schema markup? I've got an example from nearly two weeks ago. We have run the Data Highlighter, and we've put the code on the site. For whatever reason, the star rating isn't pulling through, and we don't really know why. So could you shed some light on that? And I've got an example, if you need it.

JOHN MUELLER: So I guess there are two aspects there. On the one hand, we recognize it fairly quickly. So usually, within one or two crawls, depending on the markup, we should be able to recognize that the markup is there and take that into account for processing. So that part usually happens fairly quickly. The other aspect of that actually being shown in a search result is, essentially, separate from that, in that we need to really be sure that this is something that we want to show in search for that site, which means we have to kind of double check that the markup is correct, that it's used in the right way, and that the site is something that we would trust to show in the search results. And these are things that sometimes take a bit of time for us to actually pick up and understand properly. Sometimes that's something where you add the markup, and we don't show it all. Sometimes we'll pick it up and show it fairly quickly, within a day or so. So it kind of depends more on the site itself. So what I'd do there is double check that you have the markup implemented correctly with the testing tool.

DANIEL PICKEN: We have done that, and it's showing as correct. It validated as schema markup as well. So it has been validated, and that's why we can't get our heads around why it's not working.

JOHN MUELLER: Then I would double check, first of all, to see that it's implemented in a way that's compliant with our policies, so that you're not marking something up with stars that shouldn't be marked up with stars. I don't know if you have it on your home page, for example, and you're just saying, oh, my company has five stars, and you put that in there. That's not something that we try to pick up. And the last thing, which is probably the hardest, is to really make sure that it's a high quality website, that it's something that doesn't have lower quality content, so that it's really a clear sign to Google that this is something that we should be showing in search because we can trust this website to actually do it right.
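
For reference, a minimal schema.org sketch of the kind of review markup being discussed, attached to something reviewable (a product here; all names and values are illustrative) rather than a bare "my company has five stars" claim:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Widget",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",
        "reviewCount": "89"
      }
    }
    </script>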

DANIEL PICKEN: This is for a colleague of mine. If that is all above board, which I imagine it is, where do we go from there? If we've run all of those checks, and the content's great, and the site, as far as I know, is a legitimate site, a legitimate branding company, what would we do then?

JOHN MUELLER: So what I would do there is post in the Webmaster Help Forums. Some of the top contributors are really good with markup problems, and they escalate individual issues to us when they see that it doesn't match the usual, potential problems. So that's something I would recommend doing there. And if they can tell that everything is really done properly, that this is a great site, and that we should be showing it, then they'll flag that to us. And they'll say, hey, John, you're doing something wrong. This is broken, you need to fix this. Talk to the engineers or whatever.

DANIEL PICKEN: Excellent. And my second question is around page rankings. When Google analyzes a link, does it decide whether or not to pass PageRank to a page? Ignoring follow and nofollow for a second. So it looks at a page-- does it then decide, should it pass PageRank? And if it does, does it decide the level of PageRank that should pass? And the reason why I ask is I know that Panda is very much a quality algorithm, and it assesses the page, and whether or not the information can be trusted, how authoritative the page is on the topic. So are any of these things taken into account, also, in relation to PageRank? I'm just trying to establish, if we have a followed link, does that pass PageRank regardless of anything other than the nofollow tag?

JOHN MUELLER: We try to keep these things a little bit separate, so that we don't have algorithms that try to evaluate the same thing. So if we have one that evaluates quality, then it wouldn't make that much sense to have another algorithm that, essentially, tries to do the same thing for its own part. So we try to keep it a little bit separate. With regards to PageRank, I think the main issues that we see there are really if we recognize that this is a site that it doesn't make any sense to pass any PageRank from, then, on a site level, we might say, OK, we're not going to pass any PageRank from here. That can happen, for example, if we can tell this is a site where people have been spamming their links for a long time, and it's maybe an open forum where all the links are followed, and it's just filled with clutter. So those are the kind of situations where we say, well, we see this site is having trouble, maybe they can't keep up, maybe they're doing this on purpose. And we just don't want to trust it with regards to our PageRank calculations.

DANIEL PICKEN: Would affiliates and directories fall into the category of not passing any PageRank?

JOHN MUELLER: Not by design. No. Both of those kinds of sites can be pretty useful. I think, especially with affiliates, essentially their business model doesn't dictate the type of site they have. It doesn't mean that the content that they have is low quality. It doesn't mean that the links that they have are bad. It's just the way that they're monetizing their website. So some sites sell products directly to people, and others, essentially, sell them indirectly, through maybe a main distributor or something like that. So that's not something where I'd say that just because you're an affiliate, we wouldn't trust your website or we'd treat it as something sub par because they can be just as good as normal websites. I mean, they can be normal websites, in that sense. Directories are usually a little bit trickier because there are a lot of directories that, essentially, just accept any link. And if that's something where, essentially, anyone can just kind of drop the URL into their link dropping tool and get a link from that site, then that's probably not something we'd need to trust that well.

DANIEL PICKEN: We tend to disavow any irrelevant directory, but anything that may be relevant, we may, potentially, keep. That's how we generally look at directories, so that's good. Great. Thank you. Thank you very much for the answer.

JOHN MUELLER: Sure.

KREASON GOVENDER: Hi, John. Just a quick follow up question on that. In terms of Google News sites, how authoritative are those sites? Are they always seen as authoritative sites?

JOHN MUELLER: Sites that are in the Google News index, you mean?

KREASON GOVENDER: Yes.

JOHN MUELLER: Not always. So that's not something where we would take the status within Google News as a ranking factor for web search. That's something that we do separately.

KREASON GOVENDER: OK. So it would depend on things like relevance and stuff as well?

JOHN MUELLER: Yeah. I mean, we would treat them as a normal website. I don't even know if we, internally, within the web search side, kind of have this information of, this site is shown in Google News or is not shown in Google News. I think we should just treat them as normal websites.

KREASON GOVENDER: OK. Thanks, John.

JOHN MUELLER: Sure. Let me run through the questions that were submitted. If you all have any comments along the way, feel free to un-mute and ask away. Otherwise, I'll run through some of these and leave some time at the end for more questions from you all.

If you have more than one link from a page, is the value divided equally or is it weighted? For example, more value through the first link. As far as I know, it's more equally divided. Obviously, it's not something that's so trivially defined as we take all the links and divide by the number because we have to figure out what's actually happening there. But, in general, we treat them the same. So you don't have to do anything special by putting your most important link at the top of your HTML file because we have a lot of practice with understanding how links, especially within a website, tend to make sense. So that's not something that I'd really spend too much time on as a webmaster, trying to tweak the location of your links. That's probably time better spent just working on your website in general.

We recently found that Google indexed thousands of URLs containing JSESSIONID parameters. We disallowed them in the robots.txt. How long does it take for Googlebot to pick up the new robots.txt directives? So there are two parts here, I think. On the one hand, picking up the new robots.txt directives-- generally, for most websites, we look at the robots.txt file about once a day. So that would be about a day's time where, essentially, we might not know the current status of your robots.txt file. You can speed that up through Search Console. If you submit the updated robots.txt file to us in the robots.txt tool-- I think it's called that in Search Console-- then we can go and find that file that you made some changes to a little bit faster.

The other aspect there is that you've disallowed some URLs in the robots.txt file and you're kind of wondering when they drop out of the index. And the important part to keep in mind there is that the robots.txt file only controls crawling, it doesn't control indexing. So if you need something to drop out of the search results, you would need to allow crawling and make it possible for us to recognize that this is something that needs to disappear from the search results. That could be by returning a 404, it could be by having a noindex, it could be by having a rel="canonical" to your primary page, which I'd assume you'd have here. And in all of those cases, if we can crawl them, then we can recognize how we should be indexing them and we can kind of react to that. So what I would do, in a case like this, is not block these URLs with the robots.txt file. Instead, I would allow crawling and make sure that you have a rel="canonical" set up on these pages, so that we can actually recognize that this is something that you don't want to have indexed like that.

The other aspect here, with regards to robots.txt versus canonical, or redirect, or noindex, is that if there are any links at all to these individual URLs-- which might be within your website, or might be from externally as well-- and those URLs are blocked by robots.txt, then those links kind of get lost because we go to find the URL and we see we can't crawl it. So we then think, oh, well, these links are for this specific URL, which might have the JSESSIONID in it, and that's probably not what you want. You probably want those links pointing to those specific URLs to count for the actual page that you're trying to show. So with the rel="canonical", you can forward that as well.
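
To illustrate the difference described here, a minimal sketch (the URL pattern and page URL are illustrative). A robots.txt disallow only stops crawling:

    # robots.txt -- blocks crawling only; already-indexed URLs can stay indexed
    User-agent: *
    Disallow: /*;jsessionid=

Whereas leaving the session-ID variants crawlable and pointing them at the primary URL lets them be folded together, and forwards their links:

    <!-- on each JSESSIONID variant of the page -->
    <link rel="canonical" href="https://www.example.com/products/widget">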

DANIEL PICKEN: Can I ask on that, if I link to a page and that page 404s, does that, essentially, kind of break that link? So even though there is a link going into the site, because it's going into a page that's just giving a 404, will that site no longer have that PageRank from that other site?

JOHN MUELLER: Yes. That's correct.

DANIEL PICKEN: That's correct. Double check.

MALE SPEAKER: So I have a question regarding that, too. So if the content is indexed in the Google index and we set it to disallow, you just mentioned that that will stop the crawler from crawling the page. Still, will that content ever disappear from the Google index?

JOHN MUELLER: So the content will disappear when we recognize that there is a disallow there, but the URL itself will potentially remain indexed.

MALE SPEAKER: So can people still find it? If people search for content on that page, can people still find the URL in the index?

JOHN MUELLER: Not based on the content itself, but based on links to that page. So if there's a link from somewhere within the site saying, I don't know, green shoes on this URL, if that URL is roboted, then we take the anchor text and use that to rank the page. But we don't have the content from that page anymore to rank.

MALE SPEAKER: So the anchor could still be found, so the title or the link anchor. But would normal people, if they search for any content at all, see that URL in the index? Is that very likely?

JOHN MUELLER: It's possible. Sure. I mean, if we see a lot of links to one specific page and that page is blocked by robots.txt, then we assume that there's probably still something valuable on this page that's relevant to the anchors that we've found so far.

MALE SPEAKER: OK. Good. Thanks.

JOHN MUELLER: When deciding which to index, HTTP or HTTPS, does Google take into account mixed content issues? I found some web pages with a valid certificate and a 301 to let Google index HTTPS, but each of them has mixed content issues.

A mixed content issue is when you have a page that's HTTPS, but it includes something that's hosted on HTTP. So that's the kind of situation where you have a secure connection to your web server, and everything that goes to that server is kind of encrypted, except for this part that you're kind of pulling in, which could be an image, it could be ads, or a video, or something like that. So it's not really a secure connection for your content. It's something that, in the browser, is called a mixed content issue because you have two different types of content there. And in general, that's something that should be avoided because it's not a completely secure connection, in a case like that. So what tends to happen is sometimes browsers flag that in the browser, sometimes they flag that in the URL bar, on top, with a yellow triangle or something else.

And from our point of view, we try to avoid indexing those specific URLs as canonical. So if we're in a situation where we see one URL has normal content-- everything is on HTTPS and it doesn't have any mixed content issues-- and the other URL does have mixed content issues, then we try to prefer, when we pick a canonical, the version that doesn't have mixed content issues. And with regards to HTTP and HTTPS, that can happen as well. We see one URL is on HTTPS but has mixed content issues, and we also know the same content is on HTTP, which is kind of normal HTTP-- it doesn't have mixed content issues by design. Then we might tend towards the HTTP version. But these are really almost subtle differences, from our point of view. If there are really strong signals that one or the other should be the one that we index-- which could be maybe a redirect, or rel="canonical", or other things-- where there's a really strong sign saying the webmaster really wants the HTTPS version indexed even though it has mixed content issues, then we'll probably pick the HTTPS version, even with the mixed content issues.

There are ways for webmasters to kind of learn about the mixed content issues on their site by using some tools that they can plug in on their site. There are server-side settings that enable CSP, Content Security Policy, where, essentially, a browser reports to the server, saying there was some mixed content issue on a specific page. (A rough sketch of such a reporting header follows after this block of questions.) So that's something I'd definitely recommend doing. Having really good HTTPS pages, I think, definitely makes sense, if you want to go to HTTPS. And finding a way to make sure that you're not accidentally including mixed content is always a good practice. But in general, this isn't going to make or break your site's indexing within Google.

We have pages from our site that are scraped and inserted into hacked sites. The hacked sites rank very well. What can we do to get our original pages back other than normal web spam reports? So I think you're in the Hangout here, right? I think I saw you somewhere. So feel free to un-mute, if you want to add more comments. I took this issue up with the team as well to kind of figure out what, specifically, is happening here with regards to your site, or why your site is seemingly being targeted a little bit more than we've seen other sites get targeted. And they're definitely looking at what we can do, on our side, to algorithmically tackle this problem. But what would really also help from your side is if you notice this happening repeatedly for your site, or for any site out there, feel free to get in touch with me directly and send me some sample URLs, so that we can really take this up with the team with a bunch of sample URLs from your site and work to get that cleaned up. Feel free to send me a note on Google+ or wherever and we can take a look at that. There's a link to a forum for that. I'll have to take a look at that later.

Is there any limitation to the number of characters in the title attribute of a link? At some links on our site, the meta description of the linked page gets displayed as link titles. That gets to around 250 characters, is that too much? So the title attribute of a link-- that would be within a page, if you add the title attribute. As far as I know, there is no limitation from the HTML side. And in general, for indexing, on our side, what we do is try to stick to the HTML spec. And if there's no limit there, then we'll try to take that into account as well.

With regards to the meta description and title element on the page, that's something where we have various algorithms to try to figure out what we should be showing in the search results. So on the one hand, we want to find something that's unique and relevant for that page. Maybe something short and succinct that we can pick up and give the user who's searching a little bit more insight into what's actually findable on this page. And what sometimes happens is we take a really long title and we decide to shorten it because we think it's better for the user to have something shorter. Maybe it's something that doesn't have as many keywords in it. And we'll show that in the search results. And this is specifically true when someone is searching on mobile, where there's really only a small amount of room for a title and for a description. So that's something where there's no hard limit on the number of characters, or words, or whatever in a title and in a description, but from a practical point of view, we have to find the right balance and show the right thing there. The other thing to keep in mind with the title and the snippet is that we change those depending on what is being searched for. So we try to match that to the user's query to really show-- with regards to the specific words that you searched for-- that this page is relevant for these specific reasons. So it's not the case that we would always show exactly the same thing.

For App Indexing, are there any plans to have Search Console show the query data in the page for the install app button stats? I think we just added that last week. I would double check. If you have App Indexing set up, double check your Search Console account. I believe you can switch between two modes now.

We're updating the website design, content, structure, and navigation, and all of the URLs. An SEO strategy has been created, but since every part is getting changed, how much can this impact our rankings and traffic? And how long can it take to rank again? It's impossible to say how much it will impact your rankings. We've seen situations where people put a new website up and they accidentally leave a noindex on all of the pages because that's from the development version. And in cases like that, you really have a really strong impact because we end up removing the whole site. But in general, websites, when they're updated and when you do a redesign, really need to be kind of reevaluated by our systems. And that's something that takes a bit of time. We have to reprocess all of these pages. If you change your internal URLs, then we have to learn about that again. If you change your internal structure-- like which pages link to which ones, do you have category pages, do you link directly to the detail pages, do you have multiple levels-- all of these things we usually have to re-learn and kind of re-index. And that's something that takes a bit of time on our side. And it can, or it should, generally, result in some changes in the way that we crawl, index, and rank your pages. So that's something where you would expect to see changes. It's not that we would say that it will remain the same forever because you're making changes on your site and those changes are reflected somehow.
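
Picking up on the Content Security Policy mention above: a minimal sketch of a report-only header that makes browsers report insecure subresources without blocking them (the reporting endpoint path is illustrative, and a real policy would usually need extra directives to avoid noise from inline scripts and styles):

    Content-Security-Policy-Report-Only: default-src https:; report-uri /csp-violation-report

With this header on an HTTPS page, a browser that loads an http: image, script, or frame sends a JSON violation report to the given endpoint, which is one way to find mixed content at scale.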

DANIEL PICKEN: Can I ask on that, John?

JOHN MUELLER: Sure.

DANIEL PICKEN: What I see a lot of-- and this is [INAUDIBLE] that's been done-- at the bottom of the home page, or maybe the product page, they'll have like a paragraph. And they will be internally linking to pages, category pages. It looks kind of spammy because it's exact match anchor. So it could be saying, dun, dun, dun, used cars, dun, dun, used car. And they're linking to the internal pages. Now, we'd generally advise to probably pull those links out-- especially if we think the site's been impacted by Penguin. I know that's about external links coming in, but that happens to be the phrase that's been told to us. But how important is it? Because imagine that you need exact match anchor links internally-- do you treat home page anchor texts internally differently to, say, navigation anchor texts internally?

JOHN MUELLER: We definitely treat them differently. I mean, we try to recognize what the relevance is of the individual links and we try to take that into account. But it's not something where I'd say, you need to have exact match anchor texts in the footer or something like that because, in general, we have a lot of practice with understanding the normal, navigational structure of a website. And we understand that there are categories and that there are specific, important pages on your site. Wow. It turns dark suddenly. We kind of understand how normal websites are structured. So you definitely don't need to do these almost spammy looking, long footer links where you're linking to blue, used cars for sale in London and pointing at a specific page of your site. That's not something you really need. If it's really important for your site-- if it's something that you think, this is our main product, I really want to promote this one, specific thing, I would just make that a normal part of your home page and really make that visible for everyone. Don't just hide it in the footer.

DANIEL PICKEN: It tends not to be in the footer, but it's just, generally, some content at the bottom of the page. For whatever reason, it's linking to these pages and I'm just wondering if that's really going to have that much of an impact or not.

JOHN MUELLER: Yeah.

DANIEL PICKEN: Do you see what I mean? If I'm pulling it from the category or that page, it's certainly relevant. If it's used cars, anchor text, it's definitely going to the used car's internal page. But it's a case of, could that be interpreted quite negatively, if you've got, say, a few sentences and there are, say, five or six links in that as anchor text?

JOHN MUELLER: That's something we see a lot on lots of sites. They will, essentially, place almost keyword-stuffed texts on the bottom of their page, where no normal user would normally scroll. And there are a bunch of paragraphs of text, and there are keyword-rich anchor texts linking to internal pages in there. It's kind of like giving some background information for someone who might happen to accidentally scroll down that far. And that's something that, I think, looking back maybe five or so years, probably worked, in the sense that this kind of keyword stuffing was picked up. And we thought, oh, this page is relevant, and we need to take these links into account. But nowadays, that's something that really doesn't affect the site at all. So this is kind of like [INAUDIBLE] that's carried on from the past, where people are saying, oh, it worked at that time. Maybe it still works now. I'll just kind of maintain this. But in general, everything that you have on your site that you don't really need, I'd aim to kind of remove, because it makes it a lot easier to maintain, it kind of keeps the pages a little bit sleeker, and you don't have to worry about things breaking. So if you have this old keyword stuffing text on the bottom of your pages, maybe it's time to rip that out and just make sure you have great navigation within your normal site.

DANIEL PICKEN: Couldn't agree more. Thank you, John.

JOHN MUELLER: In Search Analytics, it's only possible to see rankings on a country level. Given the importance of local search, are there any plans to provide the information on a regional, city level? I don't know of any plans, but that's an interesting point. I'll definitely take that up with the team as they continue working on that feature. That sounds pretty interesting. You can also get, of course, some of this information through Analytics directly when people actually click through to your site. And so that might help you to kind of interpret what, potentially, people are searching for based on the number of impressions, and the number of clicks, and what you're seeing as kind of relationships in Analytics.

As far as I know, Penguin was initially released in the US and expanded to the UK, Canada, and Australia. How is his territory these days? Has he already flown to Asia or some other areas? So Penguin, like most of our algorithms, is pretty global, in that they apply to all websites, globally. And I don't think we ever really said that this was something that was specific to individual countries. So these kinds of changes, where we look at web spam in particular, we try to make global, so that we have one algorithm to maintain. And when we update that, it's valid for the largest number of sites. It kind of increases the scale of our actions.

DANIEL PICKEN: When can we expect that, John? When can we expect that to be released? I know it's at some point this month. Are we close now?

JOHN MUELLER: I don't have anything to announce just yet. I don't know. I'm more cautious than other people. Personally, I prefer we announce it when it's actually ready and actually live, rather than pre-announcing it and disappointing people when we can't really make that date. Sometimes it works out that we can move things along a lot faster than we expected, sometimes unexpected things get in the way. So I prefer to announce it when it's ready and then release it when it's ready, rather than try to stick to an artificial date.

DANIEL PICKEN: Now that we're kind of on a conversation of updates, Panda-- I know that's part of the core algorithm now. How often are we expecting that to be updated? I think people were saying real time, but we don't think it's going to be real time now. So is that still going to be, on average, once a month? Or is it going to be a lot more frequent now?

JOHN MUELLER: As a part of the core algorithm, you probably wouldn't see those kind of updates happening. That's something that's kind of just rolling along. So it's not so much like in the past where you'd see, on this date, this actually changed and we updated that. As a more rolling algorithm, you wouldn't really see the individual, cut dates of specific parts of the data.

DANIEL PICKEN: So will it be real time? Or is it not real time? Is it kind of as and when?

JOHN MUELLER: It's not real time in the sense that we don't crawl the URL and we have that data immediately. So it's something we have to bundle together, understand the data, and have that updated. And we do that more on a rolling basis now, so that when one is finished, the next one kind of starts. So it's real time, if you look at a really big scale in that things are happening all the time. But it's not real time in a sense that every second there is a new value that's being produced based on a new data point that we have.

DANIEL PICKEN: So that rolling basis, that period, how long will that take? Is that like two weeks, and then it kind of starts again, and then starts again? I'm just trying to understand what to expect after the Panda merge.

JOHN MUELLER: I think what you would usually see is that this is just kind of more like a subtle move from one to the other as things kind of roll out. So it's not that you would even notice this cycle.

DANIEL PICKEN: OK. Thank you.

JOHN MUELLER: We're making AMP pages and wanted to know if we can test these pages with any tool within Google? How can we test our pages and see whether they're OK or not? So there are three things that you can do here. On the one hand, you can sign up for the Search Console beta because we're testing some new features around AMP. The short link for that is g.co/searchconsoletester. And if you sign up there, you'll find information about the AMP tests that we're doing. The other two things you can do on a per page basis are, on the one hand, checking with the AMP validator that's built into every AMP page. So you can just add #development=1 to the URL, reload the page, and then look into your JavaScript console in your browser. And it will show you information about the AMP version that is used, it will render the page, and it will show you information on whether or not this page is valid AMP markup. In addition, specifically for the news article markup, which you would need for the news carousel where we kind of show these AMP pages, you can use the Structured Data Testing Tool to run your AMP page through, as well as your normal HTML page. And you can double check that we're picking up the news article markup properly, that you have all of the elements which we require for that markup, which might be the image or the publisher information. I don't know all of the details directly, but you'll find all of that in the Structured Data Testing Tool directly. So those are two ways you can test it on a per page basis. For a more site-level view, I'd really recommend the AMP beta test.

Increasingly seeing international sites returning in organic results on queries which have local intent in the UK. For example, electrician+location. It doesn't seem very useful. Sites from the US, India, and Australia show up. So from my point of view, that sounds like something is broken. That's something that we should probably look at. And if you're increasingly seeing that, I would love to see examples. So feel free to send me some queries, some screenshots on Google+ and I can take that up with the team to figure out what exactly is happening here and where we're getting that wrong.
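
As a rough illustration of the per-page check described above (URL illustrative), appending the development fragment switches on the validator that is built into the AMP runtime:

    https://www.example.com/article.amp.html#development=1

Reloading the page and opening the browser's JavaScript console then shows either a validation success message or a list of the specific AMP validation errors.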

MALE SPEAKER: I have a related question maybe.

JOHN MUELLER: Sure.

MALE SPEAKER: International results. So if I am in Germany, and my customer and their website are in Switzerland, what is the best way, do you suggest, to check the rankings in google.ch for that particular customer? How the rankings look, how they rank, what their competition is doing. Because if I just switch to google.ch, of course, it will see I'm in Germany. And it doesn't give me the right results. So what is your take on that?

JOHN MUELLER: Hard to say. So you kind of want to appear like a Swiss user. There are two things that I, generally, do. On the one hand, setting it to google.ch, which is kind of like the first step. The other thing is, within the URL parameters, you can set the gl= parameter to the country code. So gl=ch, for example, for Switzerland. That would essentially say, I would like to be geolocated in Switzerland and have those results. That's not something you can do directly in the UI. Or maybe you can-- maybe in the Advanced Search settings, I'm not sure. But it's really quick to do in the URL for the query directly. I don't know about a lower level setting. Like if you want to say, I want to be located in Zurich or I want to be located in Geneva, I'm not sure how you would do that.
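
For example (domain and query illustrative), the gl= parameter can be appended straight to the search URL:

    https://www.google.ch/search?q=car+rental&gl=ch

This asks for results as if the searcher were geolocated in Switzerland, independent of the IP address the query actually comes from.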

MALE SPEAKER: Of course, I could always take a proxy and act as if I am with my IP address, act as if I'm in Switzerland. But how does Google detect my settings? Probably, I would have to take a fresh browser and incognito window, so Google does not see my browsing history from the past, right?

JOHN MUELLER: Yeah. That's usually what I do. I just open an incognito window and check it like that.

MALE SPEAKER: So you open incognito, you use a proxy to look like you're from a different country, and then you look at the ch results?

JOHN MUELLER: I usually just use the gl= parameter.

MALE SPEAKER: Really?

JOHN MUELLER: So I don't even bother with a proxy. It's really rare that I have to pull up a local proxy from somewhere to look at something specific. But I probably don't check the rankings in the same way that you will check the rankings.

MALE SPEAKER: Yes, probably not. By the way, you probably surf via an American IP, don't you, when you're within Google?

JOHN MUELLER: Sometimes, yeah. It's really weird sometimes.

MALE SPEAKER: Is your VPN not always Google America?

JOHN MUELLER: I don't know. I don't know. Sometimes we have Swiss results, too. I don't know. I don't know how they have that set up.

MALE SPEAKER: Thanks.

DANIEL PICKEN: How can you turn off the local bias in search? Because for a client, when I want to find out where they're standing in Google and where they're ranking, I don't want it to be biased to my location. And I'm not really sure, now that Search tools has been removed, how I could go about that. Because I go through incognito, but that still biases me to wherever I'm based. How do I do it, so that it doesn't bias to any specific location within the UK?

JOHN MUELLER: I don't know for sure. I don't know if there is still a URL parameter that you can use there. I know we removed that text box where you can enter your location. One thing I would do, to kind of just double check, is to just enter that location together with the query. So if you're searching for petrol station in London, then include the word London in the query, so that we can pick up the location. But it's really a bit trickier on a more local level than on a country level. On a country level, you can use the parameters, and it's probably easier to find a proxy that's in a specific country. On a city or regional level, that's really kind of tricky. And that's something where I imagine individual users see things differently as well. It's not so clear that you can say, well, everyone in this specific region will see it exactly like this, if they're not logged in.

DANIEL PICKEN: Well, I want it so that it doesn't look at it. So for example, I'm based in Harrogate. Whereas, I'll have a client based in London. And we'll have different results when we're looking at rankings for their site, but I want us to be able to see the same thing. So I don't want it to find out where I am. I just want it to look at the UK as a whole. And I just have no idea how to do that.

JOHN MUELLER: I don't think you can do that. I think, from a personalization point of view, we always personalize. And specifically, if we can recognize, from the query, that you're looking for something local, then we'll try to bring you something local, even if you're not logged in. So if you're looking for a pizza restaurant, you probably don't want one in some random city in the UK. You want something kind of nearby. So that's kind of the difference that we have there.

DANIEL PICKEN: So that's permanently switched on. You're not switching that off.

JOHN MUELLER: Yeah.

DANIEL PICKEN: OK. I thought I'd check, anyway, if there was a way.

JOHN MUELLER: The other thing to keep in mind is we permanently do experiments. So every time you're searching, you're probably in 10, 20, 30 different experiments at the same time, and everyone is. So it's kind of normal that someone, even sitting next to you, might see different results than you would see.

DANIEL PICKEN: That's interesting. I'll bear that in mind then. Thank you.

KREASON GOVENDER: Hi, John. I just have a follow up question on this. Let's say, for example, we're taking petrol stations in the UK and your petrol station is based in Manchester. Is there anything you can do, let's say, to improve it, so that it ranks high in search for Liverpool? So local users see it ranking higher in their search?

JOHN MUELLER: I guess you would do anything you would do for a normal, local business. You would have the address on the website, you would have information on how to get there. Maybe you'd mark up the address using structured data markup, so that we can automatically pick that up and grab that information. But it's not the case that you can say, well, I have a local business, which is in this country, and I'll just add some keywords from a different location, and suddenly, you'll rank better there as well. We really have to understand that it makes sense to show it there.
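
A generic schema.org sketch of the kind of address markup mentioned above (all business details are illustrative):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "LocalBusiness",
      "name": "Example Petrol Station",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Road",
        "addressLocality": "Manchester",
        "postalCode": "M1 1AA",
        "addressCountry": "GB"
      },
      "telephone": "+44 161 000 0000"
    }
    </script>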

KREASON GOVENDER: OK. Thanks, John.

JOHN MUELLER: Let me run through some more of the submitted questions and then we'll have more time for your live questions, too.

We see a lot of live updates, like the scorecard on Google Search, under the In The News box, but when you go to the page, you don't see any content, just the scorecard. I believe Google has guidelines to have a minimum word limit on a page to qualify for the In The News box. Why is that so? I don't know, specifically, about the scorecard, how that's set up, or how the Google News guidelines are, in general. But in practice, from a web search point of view, we don't have limits on the number of words that you can have on your page. It could even be that your page is blocked by robots.txt and we see no words at all from that page. So that's something where it doesn't really matter how many actual words you have on your page. And if you have information, like a scorecard or a table of numbers, and those numbers are relevant to the user's query, then we'll try to show that, even if there's no normal text on the page.

How to deal with duplication issues on platforms like YouTube and SlideShare, where the public uploads content? So I assume this means that other people are uploading copies of your own content. In cases like that, if there's something kind of like on a legal basis that you can do, you probably need to get in touch with those platforms directly to have them take action. If it's just a matter of this content being live on those pages, then, from our point of view, that wouldn't necessarily cause any problems or make your content look any worse. We look at these pages individually. If we see that they're duplicates, we'll try to fold them together. But if you're the original source of this content and we find duplicates of it as well, then we'll probably still show you in the search results. We will, generally, try to show the original or the most relevant results wherever possible.

Our blog uses a plugin that automatically generates related posts at the end of all blog posts. Since Google doesn't like automated content, should these be nofollowed? No, not necessarily. So these related links within your content are a great way for us to kind of crawl through the website and find more content on the website. So that definitely wouldn't need to be nofollowed, even if it's generated automatically. Automated content is more of a problem for us if the primary content of the page is generated automatically. So if you're taking a bunch of text and you're just jumbling it up, or if you're spinning text automatically, where you're replacing words with synonyms, that's the kind of automated content that we think is not so valuable for search.

Our site occasionally has a featured snippet for the query, how to refinance? It switches between us and another site. How is it deciding which site is being featured? Does user feedback impact that decision? Will it always switch between sites? In general, this is something where we try to find the most relevant result and we show that in the search results. And in a case where we're sometimes showing your site and sometimes showing another site, we're probably kind of on the edge there and don't really know, for sure, which of these ones we should show. And sometimes things tend more towards one side and sometimes more towards the other side. But it's not the case that, by design, we would automatically switch between different results. So that's probably just something specific in your case there. Let's see.
What can we do here? Eight months back, our site migrated from HTTP to HTTPS. 301 redirects were implemented, every page is ranking, but the PageRank is lost. It was six for the home page and one through five for the sections. Now it's not available. How long will the 301s take to forward PageRank? So Toolbar PageRank is something we haven't updated for a really, really long time. So I would expect this to probably never update. So that's not something I'd really focus on, in a case like this. If you've moved to a new site, if you've moved to HTTPS, then the PageRank in the toolbar will probably not update there.

FEMALE SPEAKER: Can I ask a quick question?

JOHN MUELLER: Sure.

FEMALE SPEAKER: I had problems with my microphone, but now it works. Will Google penalize us if the first text of the home page starts below the fold?

JOHN MUELLER: No. I don't think so.

FEMALE SPEAKER: You don't think so. OK.

JOHN MUELLER: It wouldn't make sense. No.

FEMALE SPEAKER: OK. Another question. Assume there's a [INAUDIBLE] a car rental booking engine with a quality text about sightseeing in the city of Barcelona, but we instead have a guide about how to rent a car. Who will have the higher ranking position? The website with the more related text?

JOHN MUELLER: Probably the more related one, but the tricky part there is we use a lot of factors for ranking. So that's something where it's not really possible to, theoretically, say, this page or this page, which one would be the highest ranking one? It might happen that there are other things that kind of play more of a role on this page, and other things that play a strong role on another page, and we have to balance the whole thing, and we figure out which one to show.

FEMALE SPEAKER: All right. And one last question. On our website, we want to show some guides about how to rent a car in the different cities of Spain. Assume there's no difference between renting a car in Barcelona and in Madrid. Therefore, the text would be the same. Would this count as duplicate content?

JOHN MUELLER: Duplicate content. So we would look at the texts individually. And we might recognize that there's a large block of content on here that's duplicated. So what would probably happen is if someone is searching for the text within the duplicated content part of the page, we'll fold those pages together and show one of them. Whereas, if someone is searching for something that's specific to this page, maybe a combination of the city and some other text that you also have, then we'll try to show that individual page. It's not that these pages would be penalized or less valued in search, we would try to just figure out which one is most relevant to show. And if we recognize that, from a relevance point of view, they're kind of similar because someone is searching just for the duplicated text, then we'll fold them together and pick one of those pages to show.

FEMALE SPEAKER: All right. Thank you.

JOHN MUELLER: Let's open up to other questions from you all.

MALE SPEAKER: I have another question. So it's an online shop, and they're active in Germany, Switzerland, and Austria. There are three different domains and the products are more or less the same. However, those are three, independent shops. The domains are called the same-- shop A, shop B, shop C-- on the respective country level domains. And so a product may be, say, a trash can. And it's more or less the same product, but the products don't know of each other in the other shops. So we cannot set the rel alternate language tag. What happens, of course, is if somebody searches for the shop brand name, they always see the wrong country's products. Is there any way to tell Google, more exactly, this is Switzerland, this is Austria, this is Germany?

JOHN MUELLER: For something like that, you would probably need to use the rel alternate hreflang tag.

MALE SPEAKER: Yes, but we can't because the products don't understand that they have twins in the other country.

JOHN MUELLER: Yeah.

MALE SPEAKER: The systems are not connected, you know, so there's no way at all for hundreds of thousands of products to manually set those rel alternate languages.

JOHN MUELLER: Yeah. Would that be something you could do in a Sitemap file, in a case like that? Or are these completely individual--

MALE SPEAKER: No. They're independent. They have different IDs, for example.

JOHN MUELLER: OK. In a case like that, the only thing you can do is really work with the geotargeting. And sometimes what will happen is we'll think that maybe a site that's geotargeted for Switzerland is still more relevant in Germany. So if you don't use the hreflang markup between those pages, then we'll treat them as separate pages. And we won't know which one we should show in which country instead of the existing pages.

MALE SPEAKER: Yes.

JOHN MUELLER: So we'll just know this page is here, this page is there. If someone is searching in this specific country, we know that one page might be more relevant for users in that country, but we probably also know that another page from another country is just really, really strong. And sometimes we will still show that in search.

MALE SPEAKER: So with geotargeting, if you say, use geotargeting, you mean that we should detect the IP address of the user. And if they are in Switzerland, send them to Switzerland? And vice versa?

JOHN MUELLER: No. The geotargeting setting in Search Console, if you have a generic top level domain or the country--

MALE SPEAKER: It's not generic because it's .ch, .at, .de domains. So that's not possible.

JOHN MUELLER: Yeah. So in a case like that, you already have the geotargeting built into the domain.

MALE SPEAKER: Oh, I see. OK.

JOHN MUELLER: So that's, essentially, what you can do there. I would really try to figure out if you can use hreflang somehow.

MALE SPEAKER: Yes. I will do that.

JOHN MUELLER: That would really kind of fix this problem.

MALE SPEAKER: Yeah, I know. All right. Thanks.
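
For reference, hreflang annotations in a Sitemap file, as suggested above, look roughly like this (domains and paths are illustrative; each URL entry lists itself plus all of its alternates):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
      <url>
        <loc>https://www.example.de/produkt/abfalleimer</loc>
        <xhtml:link rel="alternate" hreflang="de-DE" href="https://www.example.de/produkt/abfalleimer"/>
        <xhtml:link rel="alternate" hreflang="de-CH" href="https://www.example.ch/produkt/abfalleimer"/>
        <xhtml:link rel="alternate" hreflang="de-AT" href="https://www.example.at/produkt/abfalleimer"/>
      </url>
    </urlset>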

JOHN MUELLER: Sure. Hello.

FEMALE SPEAKER: OK, just one last question, please. So after improving the quality of the website, how long will it take until you see the first result?

JOHN MUELLER: That's a hard question. On the one hand, we have to re-crawl and re-index those pages, which can take a bit of time, especially if it's a whole website. Re-processing a website, that's something where some pages are re-crawled within a day, other pages take maybe a month or even longer to be re-crawled and re-indexed. And on the other hand, we have to kind of evaluate the quality of this content as well, which also takes a little bit longer. So that's something where you probably will see some initial changes maybe after a week or so, after everything is kind of starting to be re-processed. But it will definitely take maybe up to half a year or so for everything to be re-processed completely, and for us to really re-understand the new website, and how it interacts with the rest of the [INAUDIBLE].

FEMALE SPEAKER: OK, thank you.

JOHN MUELLER: All right. More questions from any of you? Otherwise, I'll grab the next ones that were submitted. Go ahead.

KREASON GOVENDER: With regards to new sites, how long does it take before they're fully indexed on your side?

JOHN MUELLER: New websites, you mean? Or news sites specifically?

KREASON GOVENDER: New websites. A completely new site with about, say, 20 pages of content, how long does it take before your algorithms pick up the relevance, and so forth, fully?

JOHN MUELLER: Fully is really hard to say. Depending on how we find the URL, we can pick that up within a couple of hours, a couple of minutes sometimes. If it's something where you submit the URL to us through Search Console and use the Fetch as Google and submit to index feature, then probably we can pick up those URLs within, I would say, less than a day. With regards to fully understanding those pages, that's something, again, that takes a while. So in the beginning, what will happen is, technically, we'll index those pages. We know they exist. We kind of know the content. And we try to rank them based on the limited information that we have. But to fully understand them, that takes us a certain amount of time. And that's something where you'll sometimes see that change in the search results, where a site might be completely new, and maybe it ranks a little bit higher because we think, oh, this is probably a good site. We don't know anything about it yet, but it looks pretty good so far. And over time, we recognize, oh, it's probably not as good as we thought. And it kind of goes down a little bit until it settles down in the right place. And similarly, what can happen is a site kind of ranks a little bit low in the beginning because we don't know a lot about it. And then, over time, we learn that this is a really fantastic website, and we'll rank it a little bit higher. So that's something where, from a technical point of view, it can happen really fast that we index it. But to actually understand it more completely and to have everything settle down, maybe a couple of months, something like that.

KREASON GOVENDER: All right, thanks, John.

FEMALE SPEAKER: I was just told that we're thinking about listing car rental companies with their contact numbers, addresses, et cetera. So does a data table without text mean a loss of quality?

JOHN MUELLER: No. That can be perfectly fine, too.

FEMALE SPEAKER: OK. So it's not recommended to write a text instead of just, I don't know, numbers?

JOHN MUELLER: I would do what works for your users. If it works for users as a table, then I would keep it as a table. If you think it's confusing for your users, or if you see that it's confusing, then maybe write it up as text. That's not something where we expect full sentences on a page all the time.

FEMALE SPEAKER: OK. Thank you.

DANIEL PICKEN: I had a quick question. And John, it was about something you said a few weeks ago, actually. You said the title tag is a ranking signal, but you said that you use the title tag as a part. And that's really stuck with me. And I was just wondering what you meant by that, when you say you use the title tag as a part of the ranking signal.

JOHN MUELLER: We use that just as a part. I think it's not like the primary ranking factor of a page, to put it that way.

DANIEL PICKEN: Right.

JOHN MUELLER: We do use it for ranking, but it's not the most critical part of a page. So it's not worthwhile filling it with keywords to kind of hope that it works that way. In general, we try to recognize when a title tag is stuffed with keywords because that's also a bad user experience for users in the search results. If they're looking to understand what these pages are about and they just see a jumble of keywords, then that doesn't really help.

DANIEL PICKEN: So what about keywords that are directly relevant to the page? Is that OK, or you can tell that they are specific keywords, so therefore that might work against you?

JOHN MUELLER: I mean, having keywords in your title tag is fine. I would just write the title tag in a way that really describes, in maybe one sentence, what this page is actually about. Really make a clear title rather than just having keyword one, keyword two, keyword three in there.
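
As a purely illustrative sketch of that advice (business name invented), a descriptive title versus a keyword-stuffed one:

    <!-- Descriptive and to the point: -->
    <title>Used Cars for Sale in London | Example Motors</title>

    <!-- Keyword-stuffed; a poor experience and a likely candidate for rewriting: -->
    <title>used cars, cheap used cars, used cars London, buy used cars</title>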

DANIEL PICKEN: And you said it wasn't critical. What would you deem as critical to a page?

JOHN MUELLER: More like the actual content on the page.

DANIEL PICKEN: OK. Interesting.

JOHN MUELLER: I don't know if it's still live, but one of the bigger Q&A sites, for example, for a really long time, didn't have a title tag on their pages. They didn't have a description, because they really wanted Google to just take whatever was on the page itself. And that worked really well. So it's not something where we would say, if you don't have a title tag, you don't have any chance of showing up in search. From my point of view, the title tag is something that's worth specifying if you have something specific that you want to use as the title, and you can really refine it into something kind of short and to the point. If you just have a really long title tag that's almost a description of the whole page, then that's probably not that useful to users. And it probably would be something that we re-write in the search results.

DANIEL PICKEN: OK. If you re-write it, is that a good indication that it might be wrong for that particular query?

JOHN MUELLER: Maybe. Yeah. The tricky part there is, specifically on mobile, we don't have a lot of room. And on desktop, we do have a lot of room. So for mobile, we probably re-write things a little bit more aggressively than we would on desktop. So just because it's re-written, doesn't mean that it's bad. If you see a really short title tag, it doesn't mean that you should be using this one instead. Essentially, it depends on the device and, like you mentioned, the query as well, what we actually do with those titles.

DANIEL PICKEN: OK. Cool. Thanks.

JOHN MUELLER: All right. Let's take a break here. I'll set up the next Hangout as well, probably in two weeks again as we move forward. And hopefully, I'll see some of you all there again as well. Thanks a lot for all of the feedback, for all the comments and questions. It was really helpful. And I wish you guys a great year.

DANIEL PICKEN: Have you been doing this Friday?

MALE SPEAKER: Bye, bye.

DANIEL PICKEN: Are you usually doing it Friday or Friday morning?

JOHN MUELLER: Sorry?

DANIEL PICKEN: Does it tend to be a Friday that you do the Hangouts? The last few that I've been in tend to be on a Friday. So going forward, will these be on a Friday?

JOHN MUELLER: I try to keep them on the same days, more or less. So we have Tuesday afternoon, Thursday kind of around noon, in German, and then Friday morning for English again.

DANIEL PICKEN: Cool. OK. Thank you.

JOHN MUELLER: Sure. Bye, everyone.

FEMALE SPEAKER: Thank you, bye.

DANIEL PICKEN: Thanks, John.