Google+ Hangouts - Office Hours - 13 February 2015



Transcript Of The Office Hours Hangout


JOHN MUELLER: OK. Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst at Google in Switzerland, which means I talk to webmasters and publishers like you all, try to answer any questions you might have, and bring feedback back to our teams here at Google. As always, if one of you wants to get started with a question, feel free to jump on in. Everyone's shy this morning. All right. Then I guess we can just get started with the Q&A. And if you have any follow-up questions or comments about the answers, feel free to jump on in then. Let's get started with the first one here.

How do you guys respond to the claim that subfolders are superior to subdomains when it comes to SEO? Which one would you recommend for a blog, for example? From our point of view, they're essentially equivalent. So you can do lots of things on subdomains. You can do lots of things in subdirectories. It kind of depends on your infrastructure, which way is easier for you to handle. So from our point of view, this is essentially up to you. If you want to put a blog on your main website in a subdirectory, that's fine. If you want to use a subdomain, that's fine. If you want to use a different domain name, that's possible as well. So all of these ways are essentially possible, and I wouldn't say that any one or the other is superior in any magical way.

We have multiple ccTLDs for targeting different countries. Should we link all ccTLDs together with each other sitewide? Is that the right way to link, or do we have some other alternative? So if you have multiple ccTLDs for essentially the same content, I'd recommend, first of all, making sure that you're using hreflang annotations between the individual pages. That's a great way to kind of link those pages together automatically. So if you want to link them on the pages themselves, that's fine as well. If you prefer to keep them separate and have a country picker page somewhere, that's fine as well. It helps us a little bit to have them linked on a page-by-page basis so that we understand the relationship between the individual pages, particularly if you have one page on one topic on one ccTLD, and you have an equivalent page on a different ccTLD. Then that connection helps us understand that a little bit better. So if you can link them on a page-by-page basis, that sometimes helps us a little bit.

Why is Google very, very slow to process and treat 301 redirects? Very often it takes a few months, and sometimes nothing has happened after a year. I think there are two parts here which are kind of mixed together, which are a bit hard to look at separately. So on the one hand, we follow 301 redirects right away. So if we crawl a URL and we see a 301 redirect, we'll follow that immediately. We'll try to follow the redirect, index the content that we find at the destination of the redirect, and we'll try to put that in there under the new URL. So that essentially happens immediately. As soon as we can crawl a page, we'll do that. There are two aspects here that make it sometimes look a bit slow. On the one hand, if you're doing this on a sitewide level, then you have a lot of pages that are crawled very frequently, which we'd pick up very quickly, and usually also a lot of pages which are crawled very infrequently.
So it can take weeks or sometimes even months for us to crawl those pages, because traditionally they don't change at all, and we don't expect them to change immediately. So this is something where we have to crawl the pages first. And sometimes it takes a while for us to get there if you have a big website. If you have a website that hasn't changed for a long time, then we don't do that so quickly.

The other thing to keep in mind is, if you're doing a site query in Search to see what the current status is, that's probably misleading, because when we see a site query, we think, oh, this person is searching explicitly for this URL. Therefore, we'll show them that URL where we can find it. And sometimes we'll have a situation where we know that a page has moved to a new URL, but we also remember the old URL. So if someone is searching explicitly for the old URL, then we'll still show that old URL in Search, even though we've essentially moved everything to the new URL. So that's a little bit misleading there.

What I would look at in general, if you're moving a website, is to make sure you're following our site move guidelines completely. So you're doing the redirects properly, you're using the Change of Address tool if you can use that, and then look at the Index Status information in Webmaster Tools, because there you'll see that for the old domain, the graph will start heading down as we recrawl those pages and notice that they've moved. And for the new domain, you'll see that it's heading up. Using a site query probably isn't a really good way to diagnose that situation, because, like I mentioned, it looks like you're explicitly looking for something, and we'll try to show you explicitly what you're looking for to make you happy. But that's probably not what you need when you're diagnosing a site move situation.

Is a single Penguin update enough to recover from Penguin? Or does it require more updates? Essentially, a single update is all that it takes. But with these kinds of algorithms, webspam algorithms, when we look at links, for example, that's something that takes quite a bit of time for us to reprocess everything that's associated with the website. So while a single update on our side would essentially be enough, it doesn't mean that the next update will automatically make your website jump back up, because maybe we're still processing a lot of the data that's behind there.

MIHAI APERGHIS: We're on the topic of Penguin.

JOHN MUELLER: Sure. Go for it.

MIHAI APERGHIS: I asked you a few months ago-- I also made a thread on the product forums that I had a client that was affected by Penguin. And we did a lot of work, a lot of disavows, a lot of good content, to try to get a few more quality things. We changed the platform itself. I can give you the URL. Is that all right?

JOHN MUELLER: Yeah.

MIHAI APERGHIS: When I asked you then, you said there was a lot of stuff that maybe Google needs to first check. A lot of stuff changed that maybe Google needs to first take into account, take everything into account. And right then, you said there were still some issues regarding the links. I'm pretty sure we disavowed like crazy. I think we even put some good links in there as well, just to be sure. But we saw quite a big boost in November, and then everything went back to the same a week afterwards. And I noticed that whenever they're adding a new page, some new content, or modifying something, Google kind of picks it up very slowly. Even with Fetch and Render and Submit to Index, it kind of picks it up in a week. So I'm pretty sure there are still some issues getting picked up. And still on the links, I'm pretty-- I don't know what else to do, because we disavowed a lot. And it's been almost two years now, as of a few days ago, since we got the client, and we just can't figure out what else we should maybe look for. I don't know how much you can--

JOHN MUELLER: Yeah. I'd have to look into the details of that to kind of see what's happening there. But I think this is one of those situations where things are essentially still being processed. So this is something that sometimes just takes quite a lot of time. And I know it's hard to be patient, and it's hard to do a lot of work and not really see what's happening. But essentially, these things take quite a bit of time to be reprocessed, to understand the new situation, and to have the data updated for the algorithms as well.

MIHAI APERGHIS: OK. So you can still say there are still some issues getting picked up by the algorithms somewhere.

JOHN MUELLER: I don't know if they're still new issues or still existing issues that you haven't fixed. I don't see that offhand. But in general, there's still a problem with the Penguin algorithm; it's still a bit unhappy with that site. So that's probably partially due to the data needing to be updated for the algorithm, partially due to us recrawling everything. And it's hard to say if you've done all the right steps and it just takes time to be reprocessed, or if you're missing some steps and the existing things have already been reprocessed. So I don't know. It's hard to make a recommendation there. But in general, I'd try to just double-check to make sure that you've really got everything covered, so that you don't have to go through another iteration when things update.

MIHAI APERGHIS: Yeah. Yeah. Just as I said, we went through this with the domain: directive, just to be sure we get everything, all indexed pages. As I said, we might have even thrown in some of the good links, just because I'm not sure how Google sees websites in languages other than English, whether there are more false positives in other languages. So I threw in, just to be sure, everything that could be interpreted as being medium or low quality. But yeah, I was expecting to see-- I'm not sure how Penguin or the other webspam algorithms work, if it can slowly increase, or if we can see some slow increases over time, or is it just one big--

JOHN MUELLER: It kind of depends on the website. So, on the one hand, if you improve things, then you'll see those gradual improvements over time anyway. On the other hand, when the algorithm gets updated, then you'll probably see a bigger jump. So you'll see gradual improvements from the normal work that you're doing, kind of like moving up slowly. But you probably won't see that big jump until everything is updated for Penguin. And you probably also won't see it jump back to the previous position, because a lot of that might have been based on the links that you have removed now.

MIHAI APERGHIS: Right. Right. Well, we have a separate blog where we have done some pretty good articles and got some exposure. And that's actually ranking better than our categories and our products.

JOHN MUELLER: That's possible, yeah.

MIHAI APERGHIS: So Penguin or other webspam algorithms can target certain pages, not the entire website?

JOHN MUELLER: We try to be as granular as possible. But sometimes that's easier said than done. So the Penguin algorithm is generally a website-based algorithm, which would affect the whole website in the same way. But what might be happening there is, if this blog is, for example, somewhere else, then maybe that would be ranking slightly differently. If this blog is particularly good, then we might have a lot of really good signals that are kind of helping push this up, and the Penguin part is kind of pushing it down just a little bit. So it's not that we'd say, these pages are affected by the algorithm, and these pages aren't. But it might be that this is just something that's particularly useful, that we think is relevant, that we show a little bit higher there. So that's something where, in general, if your website is affected by an algorithm, you can work to improve that. And you'll kind of see this gradual rise as you work on improving that. But the algorithm is kind of holding you back, like, I don't know, a stuck brake or something like that. So once that gets resolved, then you'll be able to move forward in a bit more natural way.

MIHAI APERGHIS: OK. So patience is the key, I guess.

JOHN MUELLER: Yes. Yes. I think I missed one question somehow from switching back and forth. If you're live or watching, feel free to jump on in and add that again.

Removing metadata from images, does that affect rankings? Not necessarily. So for Image Search, we take into account a lot of information from the context of the images. We can sometimes get some information from the metadata of the images as well. But that's essentially just more icing on the cake, almost, because we find a little bit more information about things like when this image was made, when this file was created, the resolution of the image, which we also see when we crawl the image directly. But this is something where, if you want to remove the metadata from your images, I wouldn't hold back on doing that. That's essentially your decision, something you can do or something you don't need to do if you don't want to do that. The image size itself won't change significantly just from removing the metadata, so it wouldn't really have a big effect on Search.

Our website has a Google custom search engine with application [INAUDIBLE] on the homepage as Google Search [INAUDIBLE] documentation, but has not got the rel=canonical, only meta property equals URL. Is this a problem? I don't understand the question completely. This sounds like something you'd probably want to post in the Help forum, where someone can take a look at your exact markup and the things that you're doing there. In general, if you're using Google's custom search engine, you can mark that up in a way that we can use it for the sitelinks search box (there's a small markup sketch a little further down). The important part with the sitelinks search box is that just marking something up doesn't make that sitelinks search box appear automatically. We really have to first determine algorithmically that we want to show the sitelinks search box, and then we'll use your markup if you have that specified. But I think this is something where you'd probably want to post in the Help forum to get explicit feedback on the exact markup that you have there.

I'm having a single-page website. How should I target different keywords in it? I have about 50 keywords. If Google is considering the title as a main factor, what could the page be about? In general, you can make a single-page website. That's essentially up to you. With regards to the keywords, that's something where you kind of have to think about it for yourself from a marketing point of view as well, to think about what this page is relevant for. And if you're creating a single-page website for a diverse set of topics, then that's really hard, and it's really hard for our systems to say, well, this page is relevant on the topic of shoes as well as on the topic of cell phones. And if that's all on a single page, then that's really hard to get across. So that's something where, if you're expanding your content, if you're creating more and more things that you want to talk about, then maybe moving to multiple pages is a good idea. Another thing, which I don't think applies here, is that people sometimes call a single-page website something where they use a JavaScript framework to essentially create a website around a single URL. In cases like that, that's something totally different. Of course, that kind of goes into how we crawl and index JavaScript content. But I think in this case, you're actually talking about a single HTML page where you have content that you want to target different keywords with, for example, or different topics that you want to focus on.
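For reference, the sitelinks search box markup mentioned above is the schema.org WebSite/SearchAction structured data that Google documents for this feature. A minimal sketch might look like the following; example.com and the search URL are placeholders, and having the markup does not by itself make the box appear:

```html
<!-- Placed on the homepage; tells Google where the site's own search results live. -->
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://www.example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```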
If I have a website in two languages with two domains, one that points to index, for example, icecream.com, and another domain that points to the same hosting, but to index-th, for example-- I can't read that-- .com, should we verify both domains in Google Webmaster Tools? Yes. If you have multiple domains targeting essentially the same content in different languages, I'd always use Webmaster Tools. So even if you have multiple domains that are unrelated, I'd use Webmaster Tools there. But if you're essentially serving the same content in multiple languages on two different URLs, then I'd, on the one hand, use Webmaster Tools, because that gives you good information. On the other hand, I'd make sure that you're using the hreflang markup and doing that on a page-by-page basis, so that we understand that this page on your English domain, for example, matches the same page on your Thai domain, so that we understand the connection between those two pages. And if someone is searching in Thai, then we know that, OK, this is the relevant page in their language that we should show in Search. So that's essentially what I'd focus on there. If you use hreflang like that, you also get statistics in Webmaster Tools that tell you which parts might be implemented correctly and which parts might be implemented incorrectly.
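As an illustration of the page-by-page hreflang linking described here, each language version of a page references itself and its counterpart. A minimal sketch, with placeholder domain names standing in for the two sites:

```html
<!-- On the English page, e.g. https://icecream-example.com/flavors/ -->
<link rel="alternate" hreflang="en" href="https://icecream-example.com/flavors/" />
<link rel="alternate" hreflang="th" href="https://icecream-example-th.com/flavors/" />

<!-- The Thai page at https://icecream-example-th.com/flavors/ carries the same two
     annotations, so the relationship is declared in both directions. -->
```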

MIHAI APERGHIS: Hey, John. Regarding hreflang, for example, I have a United States client that offers products for international markets. Everything there is in English, so United States, UK, Australia, Canada. But they currently just have a single website. It's not geotargeted to anything, the .com. And I was wondering, would it be useful to create country-specific versions of the product pages, maybe with each country's currency, and do the hreflang? Or just leave it like it is, standard? Because I was looking at one of their competitors. They don't have multiple TLDs or hreflangs or multiple versions. They just have one version of the website that is able to rank pretty much everywhere, in every single English-speaking country. And I was wondering if that would help, to have a different product version with the country's currency, and maybe add another version targeted specifically to that country and link to that with hreflang. Would it be--

JOHN MUELLER: You could. You could. You don't need to. So usually what you end up doing in a situation like that is balancing, on the one hand, having a really strong generic page that works in all countries versus having individually country-targeted pages that are essentially separate pages, where you can link them with hreflang. That helps us a lot. But these are separate pages that kind of have to be maintained separately, have to be crawled and indexed separately. So if you have content that you think is specific for these individual countries, then I'd definitely go towards the individual pages. On the other hand, if you're just artificially kind of blowing up the content and saying, well, I'll write in British English and in US English on these pages, and that's essentially the difference that I have there, then probably you'd be better off with a single page that's very strong, that focuses on that product, which still ranks globally.

MIHAI APERGHIS: So if it's just something related to, I don't know, the currency, maybe we should keep the same page and maybe use a JavaScript to change it based on--

JOHN MUELLER: Sure. Yeah. I mean, that's one thing you could do. If you're using something like, I imagine Google Shopping, where you have different currencies and you have to submit URLs for the different currencies, then that might be something where you'd say, I have to create separate URLs for each currency. But that could be, for example, with a URL parameter instead of a completely separate page. That's kind of, I guess, almost separate from the search side. But in general, if you have a really strong page that's valid for multiple locations, then I'd try to keep that if you can. Because splitting everything up just creates a lot of work. When you have a lot of new pages, there's always potential for something to go wrong there. So keeping it simple makes sure that you can kind of avoid that.

MIHAI APERGHIS: By the way, regarding geotargeting, for United States-based websites, if they're pretty much only targeting United States, they have a .com that they're only targeting the United States, is it useful to set the geotargeting to United States in Webmaster Tools rather than just leaving it default?

JOHN MUELLER: If you leave it default, we'll try to figure it out ourselves, and we'll probably recognize the United States part automatically. But you can also set the United States if you explicitly want that. Setting it to a specific country is usually useful if you're moving a domain, for example. So if you move from your old domain to a new domain, and the new domain, we don't really know how we should handle geotargeting there. If you explicitly set it, then we'll explicitly follow that.

MIHAI APERGHIS: OK.

JOHN MUELLER: All right. I created a website with Wordpress a couple of years ago. After two years, I still show a question mark for the rank on that website, unranked. How long does it take to get ranked? Or what should I do to get even a 0 rank? I think this refers to PageRank, the Toolbar PageRank data. It's important to keep in mind that we haven't updated Toolbar PageRank in quite some time. And we probably won't update that again. So if you're explicitly looking at Toolbar PageRank, then that's probably not something that's going to change anytime soon. So I'd try to focus on other metrics there than Toolbar PageRank.

MALE SPEAKER: Excuse me, John.

JOHN MUELLER: Sure.

MALE SPEAKER: How are you? Yeah. I am using the Open SEO stats. And it does show the Google rank. Is that the same rank that you say is not being used anymore? Or should I not rely on the Open SEO stat ranking?

JOHN MUELLER: I don't know what they show there. I don't know. Do any of you other guys know that Open SEO tool?

MALE SPEAKER: It's a Chrome add-on, and it shows me basically the PageRank of any website that I go to. So I was relying on it. I don't know if it's of any use at all. My understanding is, if my rank is high, with the keywords properly on the website, that I could optimize the website, basically, and that this rank has something to do with the whole process of getting to number one in the search results per SEO. Ideally, right? But I am not sure if I'm doing the proper method to-- basically, I want to be number one, obviously, in the Search results. So I'm using the rank and the keywords to position myself at number one, I feel. But I don't know if I'm doing it right.

JOHN MUELLER: Yeah. It sounds like that tool is probably using Toolbar PageRank data, and that's the data that we don't update. So looking at the PageRank data there probably isn't relevant for you at all.

JOSHUA BERG: There's quite a few tools that still give that out to people. And also people should be concerned about using any tools if you don't know who they're from, because they can also be giving away important data from your computer as well.

JOHN MUELLER: That's also possible. Yeah. I mean, I don't know all of these Chrome extensions, so it's really hard for me to say. But in general, what I would focus on is really making sure that your site is of the highest quality possible, that you're targeting, that you're talking about the content that you want to be found for, and look more at the Webmaster Tools information, for example, where you can see the search queries that are leading users to your site, where your site is being shown in Search, the impressions, as well as the clicks, the people who are actually clicking through to your site. And that's something that can give you a little bit more information. For example, if in Webmaster Tools you see that a lot of people are seeing your site in Search for specific keywords, which means you have a high number of impressions there, but they're not clicking through to your website, then that could be a sign that maybe your titles or your descriptions aren't really optimal. People find your site in Search, but they're like, well, this isn't really what I was looking for, so I won't click through there. So if those keywords match what you're trying to target and people aren't clicking on your site, then improving your titles, improving your descriptions is a good way to kind of take that first step.
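The check described here is just impressions versus clicks per query. As a rough illustration (not a Google tool; the rows and the 1% threshold are made up), this is the kind of filter you could run over data exported from the Search Queries report:

```python
# Hypothetical rows exported from the Webmaster Tools "Search Queries" report:
# (query, impressions, clicks).
query_stats = [
    ("blue widgets", 12000, 90),
    ("buy blue widgets online", 800, 60),
    ("widget repair guide", 5000, 20),
]

for query, impressions, clicks in query_stats:
    ctr = clicks / float(impressions)  # click-through rate
    if impressions >= 1000 and ctr < 0.01:
        # Lots of people see the page in Search but few click through: the title
        # or description shown for this query is probably worth rewriting.
        print("Review snippet for '%s': CTR %.2f%%" % (query, ctr * 100))
```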

ROBB YOUNG: John, are there any plans to expand the date range in Webmaster Tools for that? Because there's still only three months for most things, whereas Analytics goes back as long as you like. And since the keyword stuff has disappeared from Analytics, getting any kind of comparison or trend becomes a bit more difficult.

JOHN MUELLER: That was, I think, one of our top requests in that survey that we did. And I know the team is taking that seriously and looking into what we can do there. At the moment, that's probably not going to happen in the short term. But we'll definitely try to find some way to improve that, I guess towards the second half of the year, something like that. But I can't make any promises that far in advance anyway. So I know the team is definitely taking that seriously and trying to find a way to make that happen somehow.

MALE SPEAKER: John, can I just ask? You just talked about international targeting and hreflang. For example, I'm just checking my Google Webmaster Tools. Our site has no hreflang tags, and for country, we don't have anything selected. But what I have noticed is, we have only about 7% of visits from the United States. And usually, overnight, there's a very small number of people from the US coming to our website. And our website is completely in English. So should we maybe update something about the hreflang and country settings?

JOHN MUELLER: If you want to explicitly target the US, you can do that with the geotargeting setting. If your website is just generally English, and you essentially accept all English-speaking visitors, then I wouldn't use a geotargeting setting. I would just leave it generic.

MALE SPEAKER: OK. What about hreflangs? Do we need to insert that in the website, that it's English?

JOHN MUELLER: You only need to use hreflang if you have different versions of the pages. So if you have an English and a French version and you want to link them, then that's a good place to use hreflang. But if you only have an English version, then you don't need to use hreflang. You can't use it for just one page.

MIHAI APERGHIS: By the way, John, regarding Rob's question about the date ranges in Webmaster Tools, one of my pieces of feedback was regarding the automatic dates that you get when you go into Webmaster Tools, given that you can't see any data from the past two or three days. But when you go into Search Queries, it's automatically set to the last 30 days, including the last two to three days, even if you don't have any data for them. And your comparison metrics, compared to the previous period, are never quite real, because there's no data for those days. I don't know if that's fixed in the new version. I don't have access to the new beta.

JOSHUA BERG: Yeah. I mentioned that once before a few months ago. Those last two days actually throw off the statistics. So they're not accurate unless you set the date back manually those two days or till the last time the data was received. And then the percentage statistics on the chart are accurate.

JOHN MUELLER: I think it's meant to encourage you to try even harder. It's like, aw, man. You have 5% less traffic. You should work harder. So I don't know. I think one of the difficulties there is also that the time it takes for the data to be visible in Webmaster Tools is not always consistent. So usually it's two to three days behind. I think at the moment, we're a little bit more than two to three days behind. So it's hard to hardwire and say, well, we always ignore the last two to three days, and just take 28 days to compare that with. I think at some point, we have to figure out how to best handle this kind of comparison, though. And that's definitely valid feedback.

MIHAI APERGHIS: Wouldn't it be easier-- so when you go into Webmaster Tools, you can clearly see that the last day is February 11th? So there shouldn't be a February 12th and 13th with zero data. The last day is February 11th when it comes to the graph itself, but the period goes to February 13th, so maybe you can just pull it back until--

JOHN MUELLER: That can be misleading, too, because sometimes we have partial data. It's not that we'd say, in your location exactly at midnight, we made the cut-off. It might be that the cut-off for the current data is, I don't know, around noon or around 3:00 PM or 8:00 AM. And depending on where that is, that could be misleading, too. But I think having some kind of way to automatically say, I just want to compare the previous period with the current period and to match the number of days, I think that definitely makes sense.

All right. A lot of referral spam appears on my website, fake referrers prompting webmasters to take a look. Is this something I shouldn't worry about from a Google ranking point of view? That's correct. You don't need to worry about that from a Google ranking point of view. Referral spam is kind of an old-school spamming technique that used to be really popular, I don't know, 10 years ago, something like that. Somehow they figured out a way to get that into Analytics as well. So now it's visible in Analytics. But essentially, this is fake traffic. This is something you can ignore. You can block it on your server if you want to. That's not something that would have any effect on Web Search.

All sorts of odd websites link to some of mine, but disavowing these seems like confessing to a crime I didn't commit. What would your advice be? And are there any case studies published by Google on the use of this tool? My general advice is, if you're aware of a problem that affects links, and you want to clean that up, and you can't remove the links, then just use the disavow tool. It's not something that the Webspam Team would look at and say, oh, they're admitting that they did something sneaky with these links. This is essentially just a technical tool on our side, where, if you don't want these links taken into account, you can just disavow them with the tool. So by all means, don't feel reluctant to use this tool. I think it's a great way to kind of help solve a problem that you can't fix directly on your website. So don't hold back. If you see a problem, try to fix it.
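The disavow file itself is just a plain text file uploaded through the Disavow Links tool in Webmaster Tools, with one URL or domain: entry per line. A minimal sketch, using placeholder names:

```
# Individual pages we could not get taken down
http://spammy-directory.example.com/listing?id=123
http://article-network.example.com/old-guest-post.html

# Ignore every link from this whole domain
domain:link-seller.example.org
```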

MALE SPEAKER: John, I have to ask about this file. For example, we have a website that created an article about our service. And there are tags on the website, and they put a link in the footer. So we have, like, 500 links from that website. Should we disavow the specific tag pages and only leave the article, or what?

JOHN MUELLER: Disavowing the individual pages probably would be too complicated. If this is really a natural link to your website, I would just leave that be. I think that's fine. That's not something you have to be over-worried about. Of course, if you're going out and contacting all these sites and trying to get footer links everywhere, then that's a little bit different. But if this is a natural footer link that happened to be placed there, then that's fine. That's not something I'd be too paranoid about.

For the sitelinks search box, does the markup have to have the link rel=canonical on it? No. You don't need to have the link rel=canonical on your pages. In general, I'd recommend having that where you can, but it's not a requirement.

Would you advise sites that don't really need more than one page, or traditionally provide a service rather than content, to build content? All the advice now seems to be to build content. So in general, this is something where you kind of have to look at what your users are looking for, what they're trying to find, how they're searching for your content, for your business. And it probably doesn't make sense to just artificially create content for something if people aren't actively looking for that kind of content. So if I want to rent a car on my trip to San Francisco, then I probably don't want to read someone's article about their experience of renting a car in San Francisco that kind of explains the different car models. I want to go to a website that provides a service for me. So that's something where just artificially creating articles for the sake of having articles doesn't really make sense. So if you have information that you want to share in the form of articles, then fine. If people want to search for this information, and they want to digest it in article form, then that's a good way to get in touch with those people. But if people essentially just want to use your service, then artificially creating articles probably isn't the best way to get there. And some of these websites have really, really small and almost thin websites, because they focus on their service instead. Or maybe they focus on an app that they want people to download instead, where, if you search in Web Search for their website, you'll find a landing page that essentially says, hey, sign up here and install our app. But they don't have any actual content there. And those kinds of websites do well as well. It's not that they have to artificially create articles to appear in Search.

MIHAI APERGHIS: Actually, as far as I know, Airbnb has pretty good content, [INAUDIBLE] things done. So I think you can do both if you can [INAUDIBLE] and understand users and get something really nice and interesting going on. Why not do both?

JOHN MUELLER: Sure. I mean, if you do it really well, if you understand your users, if you provide something that they want to find in Search, then by all means do that. Yeah. I think that's a great idea. I just wouldn't artificially do it, and say, I have an app or I have a single-page website, and I want to get more traffic. Therefore, I'll dilute my existing content across a hundred different pages or I'll hire, I don't know, a lot of inexpensive writers to create fluffy article pages for my website. Because users are going to go there, and they're going to say, well, this isn't really what I was looking for. They're going to go away. They're not going to recommend your website for something that's essentially just fluffy text.

MALE SPEAKER: John, I wanted to ask you, just to clarify on the PageRank situation, so basically, I should not include this PageRank in my equation for SEO at all? Because PageRank is not being updated anymore by Google, so it's completely irrelevant to my efforts in positioning my websites.

JOHN MUELLER: Yes. I wouldn't use it as a metric for creating a website, for tracking your website. I don't think that makes sense.

MALE SPEAKER: I mean, my understanding is, if I have a high PageRank, then Google looks at that, and it shows my website over pages that have a lower PageRank. That was my understanding. And now I am understanding that Google is really not updating their PageRank tool anymore, that it's really not being used. I feel like I should completely forget about it, and I should focus on quality content and the webmaster queries and the clickthrough rate.

JOHN MUELLER: Yes. Exactly. Yeah.

MALE SPEAKER: All right. Thank you very much. I'm clear now.

JOHN MUELLER: Great.

MALE SPEAKER: Great.

JOHN MUELLER: OK. We have sites linking to our site mainly from SEO DNS reporting sites. Can this be a problem? Google Webmaster Tools doesn't report any issues on links to our site. That's usually fine. That's something where these sites are auto-generated. They link to pretty much every domain. That's not something I'd worry about.

MALE SPEAKER: So you automatically deny those websites, for example, look-alike websites that have a link to everyone. So you ignore them?

JOHN MUELLER: We try to ignore things that are essentially just auto-generated content. We don't put much weight into them.

MALE SPEAKER: I think that we have, like, 500 links, minimum, from that kind of website.

JOHN MUELLER: It's crazy. They come out of everywhere. Essentially they're just scraping WHOIS information. They're linking to websites. It's like, this is this website, and here are 500 other websites that have similar domain names. And this is something where essentially we kind of say, this is cruft on the web that just happens to be there, and we have to live with it. But it's not something where we'd say this is a problem for any website. Of course, if you're running one of these websites, then making sure you have something unique and valuable on there is definitely a good idea, to make sure that we're not just seeing this as some random HTML page that we can ignore.

Does it really give an advantage to websites that are already serving on HTTP and HTTPS but use HTTP as canonical URLs? Should we think about switching? Does this migration of HTTP to HTTPS only count for Web Search or for images as well? At the moment, this is only for Web Search. This is a really small ranking factor at the moment. So it's something where you're not likely to see any big jump in Search. But I think over time, as more and more sites are moving towards HTTPS, this is something that people are going to notice as well. So from that point of view, if you already have your content on HTTPS, and you can serve it properly on HTTPS, and you don't have issues with embedded content, with ads or any things like that that you have on your pages, then I'd definitely switch to HTTPS. I don't see why you would want to prefer serving an insecure version of your website when you have the ability to serve the secure version.

Can you tell me if the disappearance of search query data from Analytics is related to applications to join the beta program? We haven't heard any more about the beta. But all our search query data seems to have disappeared. I wasn't aware that Analytics was removing anything there. So I don't really know what's happening specifically about that. I assume that they would be using the same data as we show in Webmaster Tools, and they'll probably switch over to the new format as well once that's finalized. So in the meantime, you can definitely still get this data in Webmaster Tools, the search query data. If you signed up for the preview version, some of you should have access already. We're probably going to do another set of tests in the coming weeks for some of the other people on the list. So I'd wait for that and see if maybe you can get access to that there. But I'll check with the Analytics guys to see if there's something special happening there. I don't really think so, but we can find out.

Regarding data in Search Queries reported in Webmaster Tools, why can't we see the percentage of not provided like we do in Analytics? And why do I see 83 clicks in Webmaster Tools for a certain query but only 7 sessions in Analytics for the same period and query? So not provided is essentially when Web Search doesn't pass the query to Analytics. And Webmaster Tools collects data on a different side. So Webmaster Tools collects the data essentially when we show the Search results to the user, not when the user comes to your website. So from that point of view, we don't really have this notion of not provided in Webmaster Tools. We essentially try to bring you as much of the relevant data as possible directly. With regards to the clicks in Webmaster Tools and the sessions in Analytics, I don't know for sure how Analytics counts those sessions. It might be that someone is clicking through multiple times.
So that would be my assumption there. Analytics also uses JavaScript to track these things. So if a user doesn't have JavaScript enabled, then that might not be recorded properly in Analytics. Also, a session is not necessarily the same as a click, so it could be that one user is clicking through multiple times, which counts as multiple clicks in Webmaster Tools but just one session in Analytics.

Does Panda affect sites with pagination issues resulting in hundreds of duplicate titles, metatags, et cetera in Webmaster Tools? Panda is a quality algorithm on our side, which looks at the overall quality of a website to try to recognize higher-quality content and show that a little bit higher in Search. Titles and metatags are essentially not really a sign of quality. That's something that's more visible to the search engines. Titles are something we'd show in Search, but not really something that the user actually sees when they access a page within their browser. It's kind of hidden away in the tab on top. So from that point of view, just because you have duplicate titles and metatags doesn't mean that that's a quality problem from our point of view, because we look at the general overall quality of the website, not at the individual metatags, and seeing, oh, this metatag is different here, therefore it must be a higher-quality website. Because that's something that's really easy to accidentally get right or wrong one way or the other, and it doesn't affect how users actually see and use your content. So from that point of view, if you have a lot of duplicate titles and metatags, that might be something worth looking at and fixing, but it wouldn't be affecting the overall quality of your website.

ROBB YOUNG: John, if Panda's looking at quality, and I know quality is subjective, does that mean Panda also considers links, the same as Penguin does? Or does Panda just look at on-page stuff rather than domain and off-site stuff?

JOHN MUELLER: We try to focus on the stuff you have on your website directly, so not external links, those kinds of things. In general, we try to keep our algorithms so that they focus on individual aspects and don't overlap. So, for example, if one algorithm were to look at low-quality links and another algorithm were to look at spammy links, then there's a significant overlap there, where if something is considered spammy, it might be considered low quality as well. And that essentially means that we're running multiple algorithms that are kind of taking the same things into account, which means we have to maintain a lot of extra things that essentially use the same factor. So that's something we try to avoid. So from that point of view, when we look at the overall quality of a website, we try to look at the quality as we actually index it. So things like putting a noindex on pages that you think are low quality help us to see that, OK, you don't want us to take this page into account. We don't show it in Search, so we're not going to use it for the general quality evaluation of this website.

Can you confirm for me that the bot traffic from an Amazon AWS IP range, marked Googlebot gocrawl v0.4, is not legitimate Google traffic? I don't know about that specific traffic, but that does sound like something that we wouldn't be doing. So in general, if you get crawled by something that calls itself Googlebot, you can always do a reverse DNS look-up on that. We have that in our Help Center, where you take the IP address, you look up the host name, and then you confirm that the host name actually maps back to that IP address. And you can see whether it matches something that goes back to Google. And if the IP address doesn't go back to Google like that, then that's probably someone else who's using a fake user agent to crawl your pages. And obviously, there's nothing legally binding that says people have to use a real user agent, or that they have to disclose where they're actually crawling from or who they're crawling for. So some people crawl with a normal browser user agent. Some people crawl with a fake Googlebot user agent. And if you don't want them to do that, then you're welcome, of course, to block them. That's essentially up to you.
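A minimal sketch of that two-step check in Python; the function name and the sample IP are just illustrative, not a Google tool:

```python
import socket

def is_real_googlebot(ip_address):
    """Reverse-resolve the IP, check the host name, then forward-resolve
    it again and make sure it maps back to the same IP address."""
    try:
        host_name, _aliases, _addrs = socket.gethostbyaddr(ip_address)  # reverse DNS
    except socket.herror:
        return False
    # Genuine Googlebot requests come from hosts under googlebot.com or google.com.
    if not host_name.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        _name, _alias_list, forward_ips = socket.gethostbyname_ex(host_name)  # forward DNS
    except socket.gaierror:
        return False
    return ip_address in forward_ips

# Example: a crawler from an unrelated hosting range that merely claims to be
# Googlebot in its user agent string would fail this check.
print(is_real_googlebot("66.249.66.1"))
```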
Just opened a blog with a hosted website and plan on linking to various content that I have under hub pages. Question: I have original graphics I've developed and currently have these graphics out on hub pages. Should I move the original pieces to my new website? That's always a good question to look at when you're essentially moving your presence from one place to another place. And I think, in general, if you're moving to a place that is your own, like you're moving from a shared hosting setup to one that you have under your own control, your own domain name, in general, I'd try to move that content to your own domain name so that you have a collection of all the great things that you've done under your own domain name. And whether or not you want to keep the same content elsewhere is essentially up to you. But in general, if you've been using something like Blogger or you have a Blogspot domain name, and you want to move to your own domain name, then I'd try to move that to your own domain name, just so that you have full control over everything that's associated with that. And that sometimes helps Google understand that this is really a separate entity that we should take seriously on its own, and not worry so much about what other people are doing on that domain. So from that point of view, if you're currently on a shared hosting set-up or on Blogspot or somewhere else, and you want to move to your own set-up, then try to move that content as much as possible so that you really have everything together for yourself.

OK. Wow. I think we actually managed to make it through the questions. And we have a couple minutes left. What have we missed? What can I help answer for you guys?

ROBB YOUNG: I'd like to ask another one.

MIHAI APERGHIS: I really wanted to see all the questions at once. I recently picked up a client that I think is my first client that has experienced a negative SEO attack. It actually happened quite recently. One of its competitors launched a negative SEO attack using links against all of these other competitors, including this client of mine. And you usually tell us not to worry about negative SEO. Google is kind of good at picking that up. But the number of links that the competitor is adding is pretty high. Should I focus on trying to get the [INAUDIBLE] file, [INAUDIBLE] [INAUDIBLE]. I'm guessing [INAUDIBLE] no [INAUDIBLE].

JOHN MUELLER: My general advice would be, if you see this problem, then you can take it on with a disavow file, and put the domains in there and just let it be handled like that. If you suspect that something really crazy is happening because of this, and Google is really getting confused, then you can always send that information to me. We can take a quick look with the Webspam Team. But in general, for a lot of these things, you can essentially just throw them in your disavow file and leave it at that.

MIHAI APERGHIS: OK. I'm curious, though. Is there any way-- so for websites that have already been affected by a website algorithm like Penguin, is there a way to make sure that on the next iteration of the algorithm, everything that is in the disavow file has already been crawled? So we have, in Webmaster Tools, the Submit to Index that forces [INAUDIBLE]. Is there something we could do to make sure Google has already crawled the things in our [INAUDIBLE] file?

JOHN MUELLER: Not really. Not really. So especially if you have a domain directive, then we essentially have to recrawl that domain.

And if someone, I don't know, puts domain:facebook.com and clicks a button to say, recrawl all the links here to make sure I disavowed everything, then I think Facebook's servers would melt and our Googlebot servers would melt, and we would never be able to catch up and just recrawl everything right away. These are things that essentially just take a while to be reprocessed. But I know this is feedback I've received from others as well, where you'd like to see some status information, at least to know that, OK, out of all the links for your website, we've been able to recrawl this part, or something like that. I don't see that happening any time soon, though.

MALE SPEAKER: John, I am wondering-- moz.com gives advice about some good and bad linking techniques. And my understanding is this blog posting and link building would be to get a high rank. But given the fact that that rank is not a metric anymore, should I completely avoid link building, or not use it at all? Is link building in any way good for webmasters?

JOHN MUELLER: That's a good question. In general, I try to avoid that so that you're really sure that your content stands on its own. And make it possible for other people to, of course, link to your content. Make it easy. Maybe put a little widget on your page, "If you like this, this is how you can link to it." Make sure that the URLs on your website are easy to copy and paste. All of those things make it a little bit easier. We do use links as part of our algorithms, but we use lots and lots of other factors as well. So only focusing on links is probably going to cause more problems for your website than it actually helps.

MALE SPEAKER: OK. Thank you.

ROBB YOUNG: John, I have a negative SEO question.

JOHN MUELLER: OK.

ROBB YOUNG: Let's just say I was malicious, which I'm not, obviously. Because of a particular situation with our domain, you told us to specifically not forward that to any new domain that we're interested in restarting with. If I, for example, just took our old domain, which is completely toxic, and 301'd it to our main competitor, and they missed it in their disavow file, would that just take them down?

JOHN MUELLER: No.

ROBB YOUNG: But it would take down any site that we started?

JOHN MUELLER: I mean, we've seen this kind of situation a lot in the past, where people will have a penalty or something like that. And they'll try to redirect that domain to some competitor or someone they don't like. And that's something our algorithms pick up on. So that's something where, essentially, if we see that you're trying to redirect something problematic to an existing, running website, then we understand that that's probably not meant in a way to combine these websites into one bigger website, but that something crazy is happening in the background. And sometimes these are things that happen accidentally, where, I don't know, maybe domain names expire and they still have a manual action attached to them. And then someone at the registrar redirects them to a generic landing page. And it's something where they're not really trying to forward the PageRank or forward any signals there. They're essentially just trying to send the users to that other domain. And in cases like that, we really don't forward anything.

ROBB YOUNG: But if we were to forward ours to a new domain of ours, you've told me that it would be a problem. Essentially, in the field we're in, our content is not massively different to our competitors'. Because they all sell essentially the same things, the URLs will be roughly the same. So it doesn't seem like that's very consistent.

JOHN MUELLER: Yeah. I mean, we try to differentiate between situations where you're moving to a new domain from situations where someone is just redirecting things to an existing domain. So if you're moving to a new domain, then that's something where maybe it makes sense to forward all the signals that we've collected over time to the new domain where we can. If you're just redirecting to a different domain but not actually moving to that, then that's kind of a different situation.

ROBB YOUNG: So could we do a-- if we do a site move-- because we're just going to end up with one domain sooner or later. We don't want two in the search. And we already have two in there now as we slowly transition over. If we do a full site move, is that issue going to follow us? Or should we-- your previous advice is completely disavow yourself of that domain, sever ties, take the links down, do whatever you need to do.

JOHN MUELLER: In a situation like yours where it's really hard to clean it up or to wait that out, I'd just create a separate site and not do it.

ROBB YOUNG: OK.

MALE SPEAKER: Can I ask you a question?

JOHN MUELLER: Sure.

MALE SPEAKER: On [INAUDIBLE] December 2014, you have said that, OK, we [INAUDIBLE] [INAUDIBLE] that if webmasters are suffering with some kind of issues with Panda. And so you guys are going to work on it. And you are going to raise a ticket or give us something, which will be referred by your team completely. I think I'll just share a link with you. Here is a link. Can you take a look at it? It was being said in this hand-out that a webmaster who was suffering with Panda and Penguin, you're going to help them out to come out of it.

JOHN MUELLER: I have to read what it was.

MIHAI APERGHIS: I think it was about the time when we were discussing what our most priority [INAUDIBLE] modifications [INAUDIBLE] [INAUDIBLE] the website is affected by any Webspam [INAUDIBLE] that.

JOHN MUELLER: Yeah. I don't see that level of information happening in Webmaster Tools anytime soon. So that's something. We see a lot of feedback about getting information about algorithms visible in Webmaster Tools, but primarily because algorithms aren't made in a way that is essentially actionable for a webmaster, that's something where, at least for the moment, we've decided to hold off on showing that data in Webmaster Tools. So I don't see that changing anytime soon. But I know that's a really big wish. And we're always looking into ways that we can give at least some kind of information there, where we can say, well, this is really actionable information for the webmaster. We know they're trying to do the right thing, or we can trust they're trying to do the right thing. And if they had this information, they would be able to improve their website in a way that also works well for Search. So that's something we might find a way to show in Webmaster Tools. But, in general, just showing the search quality algorithm directly in Webmaster Tools, I don't see that happening anytime soon.

MIHAI APERGHIS: Well, at least a notice like that, that you have an issue you might need to deal with. Because a lot of people--

JOHN MUELLER: Yeah. It's really tricky, because when we talk to the Search Quality engineers, they say this isn't a problem for the website. It's essentially just the way that we evaluate relevance for these queries for that website. So it's not the case that the webmaster has a problem that they're not relevant. It's just that we think, well, this isn't relevant for the user, so we shouldn't show it. It's kind of like saying, well, you're not ranking number one. Therefore, you could improve things. It's not a problem that you're not ranking number one. We don't see it as a fault on your side that you're not ranking number one. But it's something you could generally improve. So that's really tricky from our point of view. But I totally understand that having some kind of feedback information of something that you could really focus on and solve, that would be useful. Because in the end, even if people try to game the system by creating higher quality websites, then, my gosh, that's not the end of the world. If the web turns into really high quality content, wow, that's good, too.

MIHAI APERGHIS: Yeah. I totally understand you're focusing about giving information that webmasters can actually do something to fix or improve. I was talking mainly because most of my clients, especially from here in Romania don't really know SEO or Webmaster Tools. So when I get them as a client and see some of the things that the other SEO companies did for them, especially regarding links, and I tell them this is a bad thing, they always ask me-- try to understand why. Our competitors are doing that. Some of them are actually ranking good. And it's kind of hard to explain to them, no. This is not something that Google would like to see. And something in Webmaster Tools that [INAUDIBLE] actions that would tell them that we have an issue with your website would be enough to understand, OK, so what we have done in the past is bad. We need to change. Just trust me. This is better.

JOHN MUELLER: Yeah. That's good feedback, definitely. But I think a lot of that also boils down to what makes sense for the user. And once you focus more on the user and make sure that they have really high-quality content, that they're happy with your website, then a lot of these indirect issues resolve themselves as well. Because essentially, if Google tells you your content isn't seen as being high-quality content, for example, that's not something you should fix for search engines. That's something you should fix for your users, for your potential users. So from that point of view, taking a step back and saying, hey, you've been doing all these crazy things that you think are about SEO, but what does this actually do for your users? What does this change with regards to people who want to buy something from your business when they run across these pages? So maybe you have a page full of keyword stuffing, and they look at that, and they're like, well, users don't actually use any of this keyword-stuffing text that's hidden on the bottom of the page. So why are you actually doing this? Search engines are going to notice, and you're creating more of a liability than anything else. And if you focus on the users more, then this stuff would be gone. You wouldn't have to worry about that. Because in the end, search engines are trying to find the most relevant results, the most useful results for the users. And as we improve our algorithms over time, which is always happening, we're going to focus more and more on these things. And if you have great content on your website, then we'll continue finding that. On the other hand, if you're only targeting very specific algorithmic issues, where Google says, oh, this mention of this word twice is a bad sign, and you fix that, then that doesn't automatically make your site higher-quality content or make it more resilient against future algorithmic changes.

MIHAI APERGHIS: That's a possibility too. And when it comes to changes on the website-- so, like you said, people were stuffing more pages that aren't really relevant for the users, that's really easy to explain to somebody. But when it comes to bad links, bad links that people might not be aware that their SEO company has built, or bad links from, I don't know, directories and all kinds of things like that, they kind of understand that less, because they might still have a really good website, a really high-quality website that offers relevant content to their users, but the bad links are the factor that basically brings them down more. So that especially is a harder issue to explain to the people who have a business, an online business. It's hard to explain that that brings you down, even if your website is good for the users. We still need to get rid of these bad links. And they're bad. And why are they bad?

JOHN MUELLER: Yeah. Yeah. That's good feedback, too. All right. Let's take a break here. I have to run for our next meeting. But it's been great. Great discussions here, great questions submitted. Great questions by all of you. I hope you have a great weekend. And we'll see each other, I'd imagine, in one of the future Hangouts.

MALE SPEAKER: Thank you, John. Bye-bye.

JOHN MUELLER: Bye, everyone.

MALE SPEAKER: Thank you.

JOSHUA BERG: Thanks, John.