Google+ Hangouts - Office Hours - 13 January 2015



Transcript Of The Office Hours Hangout


JOHN MUELLER: OK, welcome, everyone, to this year's first Webmaster Central Office Hours Hangout. My name is John Mueller. I am a webmaster trends analyst here at Google in Switzerland. And part of my role is to communicate with webmasters like you all-- publishers, people making websites-- and make sure that you're all aware of the things that we're working on, maybe specific issues that we've found out there when crawling and indexing the web, and also to bring all of the information that you guys have back to our engineers, so that we can work with your input as well. So we've had, I guess, two years now, or almost three years, of these Hangouts. And I think these have been running fairly well. So traditionally, I ask anyone for a question in the beginning. Do any of you want to ask this year's first question? Don't be shy.

BARUCH LABUNSKI: Well, I just wanted to ask about HTTPS. So this is 2015 now, and like you said, first of all, happy new year. So for 2015, how important will it be for a company to switch, to make the change to go HTTPS? Because in reality, if I'm a blogger, and I'm sitting in a coffee shop, and I'm updating my posts, and there's somebody behind me that wants to hack, sure, they can probably hack, because it's HTTP. But if I'm wired, do I really need to make that change if you're doing it all in a company that's all wired and secure?

JOHN MUELLER: So I guess there are two aspects there. On the one hand, you're talking about an admin panel for your website. And that's something where, if you're using public Wi-Fi, I'd definitely try to find a way to secure that. That could be by using HTTPS. It could be by using a VPN setup, something like that. And that's something you might want to consider in general with any kind of a public Wi-Fi that you might be using, because you never really know what else is happening out there on the Wi-Fi. You can't really tell. The other aspect of making your website run on HTTPS in general for all of your users, I'd say that's something that is worth keeping in mind for the long run. I wouldn't say that it's like the most critical issue that you should address immediately on your website, at least for most websites. Of course, if you're running a banking website, then maybe that's a little bit different, and you should probably have some kind of security there. But if you have a normal blog and moving it to HTTPS is a big issue, a big hassle with the hosting, hassle with the embeds, maybe with the ads, with the tracking, whatever you have there, I'd say that's something to keep in mind for the long run. And at some point, you're probably going to switch to HTTPS anyway. But it's not something where I'd say you need to do this as a first step this year.

BARUCH LABUNSKI: Yes, John, but-- but there is an incentive that you guys are giving users. And the incentive is that there's a ranking factor with that.

JOHN MUELLER: Yes.

BARUCH LABUNSKI: OK.

JOHN MUELLER: It's small. It's a small ranking factor.

BARUCH LABUNSKI: Right.

JOHN MUELLER: And it's something where I think if you improve your website in general, you'll probably see bigger changes in search. But it's something I think that kind of helps boost sites in situations where we essentially have equivalent sites ranking, where I would say, OK, we have this site that's pretty good, pretty relevant for this query. This other site is also pretty relevant. Which one should we show first? It's pretty close. So we'll pick the one that's running on a secure site out there. It's not the case that you're going to jump from number ten to number one and bypass all your competitors just by switching to HTTPS. I don't think that would be what users would expect when they're doing searches. So that's something that we're still taking into account. I think HTTPS is definitely something to keep in mind. If you want to stay ahead of the trends, you should definitely learn how this works, how it's set up. Understand the subtle problems that come up with HTTPS, like embeds, like the ads, like tracking pixels, tracking scripts, all of that. And that's something that I think anyone who wants to stay ahead of the curve would be interested in anyway. But I wouldn't see it as something where I could say all of your clients have to switch to HTTPS immediately, otherwise they're going to be lost in search. So this is, for search in general, more of a long-term issue and not something that will turn the whole search results around next month because we decide to change the ranking factor to be 500 times as strong, something like that. I don't think that's going to happen. So focus on it in the long run. Be realistic with the expectations. And when you're making bigger changes on your website and you're moving to a different host, maybe think about whether you can move to HTTPS while you're doing that as well.

BARUCH LABUNSKI: OK. Thank you so much.

MALE SPEAKER 1: Can I talk-- begin with a question as well, John?

JOHN MUELLER: Sure.

MALE SPEAKER 1: OK. I sent you a link on chat. This is a new client that I just took on, like, yesterday. And at the end of October, they did a couple of things on their website. They moved from HTTP to HTTPS. And at the same time, they also redirected some of their categories. So basically, they had two sets of 301 redirects. And a couple of days later, their traffic tanked. Their organic traffic tanked. They say it's because Google didn't get the redirects properly. I checked them. They're pretty fine, as far as I saw. One thing would be that the HTTP to HTTPS redirect from Chrome or any browser returns a 307. But in Webmaster Tools and everything, as we know, they're 301s. So I'm thinking that's just a browser thing. And I told them I think there might be some algorithmic issues, because the end of October was also Penguin. So is there any chance you could tell me if there are any problems outside these redirects, if possible?
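
[Editor's note: the 307 seen in the browser is most likely Chrome's internal HSTS redirect rather than what the server actually sends. One quick way to check the real status code, sketched here with a placeholder URL:]

```sh
# A plain HTTP client bypasses the browser's internal HSTS (307) redirect
# and shows the status code the server really returns.
curl -sI http://example.com/ | head -n 1
# A correctly configured permanent redirect prints:
# HTTP/1.1 301 Moved Permanently
```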

JOHN MUELLER: I'd probably have to take a more detailed look. From a first glance, it looks like these are essentially just normal algorithmic fluctuations. And that can happen when you make bigger changes on your website. I don't know what else was changed on the website or if it was just at the tripping point, essentially. But this is something where, when I look at it and see it on a website, usually it's just a matter of algorithmic fluctuations that settle down after a while.

MALE SPEAKER 1: Oh, well, it's been from October till now. And the traffic has dropped like 60, 70%, something like that. So there's definitely something major going on. I was just curious. I was telling them to maybe look over the backlinks. They're insisting that they're fine, that all of their competitors have those links as well. I'm not really very happy with them. So I was wondering if you could tell me if the algorithms might be picking up some issues with that-- with the links, I mean, with the backlinks.

JOHN MUELLER: I'd really have to take a more detailed look to see that. So at first glance, I'd say less a problem of the links, more a problem of general content. But it's really hard to say like in 5 seconds what the problem is with a specific website. At first glance, it's something that generally shows up when you make bigger content changes on a website. Just moving from one protocol to another probably isn't what would trigger this kind of situation. But if there are bigger changes with the content, if there are things that you know maybe you should have cleaned up, those kind of things might be happening here. But if you post in the webmaster forum, I can take a more detailed look there.

MALE SPEAKER 1: OK, cool. Thank you.

BARUCH LABUNSKI: If a site has a lot of dead links-- suppose, like, his client had a lot of dead links within his blog or website-- does that affect ranking? I mean, if you have, let's say, 300 dead links internally on a site-- links pointing, I don't know, to government websites or whatever-- and they're all dead links?

JOHN MUELLER: No.

BARUCH LABUNSKI: No?

JOHN MUELLER: That-- I mean, it's kind of a bad user experience. You're losing your users that way. But it's not something where I'd say our algorithms would pick up on that and say, hey, you're not maintaining the links on your site, therefore you're a low-quality website. I don't think our algorithms would pick up on something like that. So if you see that happening with your website and you run some kind of a crawler over it and you see, oh, 50% of my outbound links have kind of decayed and aren't working anymore, then, of course, fix that. I'd just fix that for the usability in general. But it wouldn't have any effect on the ranking, on the way we crawl and index the site.

BARUCH LABUNSKI: Right, OK.

JOHN MUELLER: All right. We have a [INAUDIBLE] here.

MALE SPEAKER 2: May I ask a question?

JOHN MUELLER: Sure, go ahead.

MALE SPEAKER 2: My question is [AUDIO LOST], that if we use a 301 redirect from HTTP to HTTPS and we do not change any URLs on the website, that means whatever URLs are hyperlinked on the website are the HTTP versions; only when we click them are they 301 redirected. So what is the right way of doing this? Is that an efficient way of moving a website to HTTPS? Or do we need to remove the entire HTTP version of the URLs and put the HTTPS versions in the hyperlinks as well?

JOHN MUELLER: Setting up the redirect is definitely a good first step. I would also make sure you have the rel=canonical set up properly. I would not block the old version of the website. That's something that should remain crawlable and indexable. We want to be able to find those redirects. So set up the redirect, make sure you have rel=canonical set up properly, make sure that the pages actually work on HTTPS so that you don't have issues with mixed content problems. And then essentially you should be fine. This is still kind of a site move situation, so it's not the case that you set up these redirects and everything just works properly without having to worry about anything else. It can still take a while for everything to be processed. You might still see fluctuations happening there. So those are the kind of things to keep in mind.
All right, here's a similar question about the move to mobile friendly. "Why move to a mobile-friendly site? Mobile-friendly websites' engagement should be higher than on websites that are not. Page speed improvements: if the changes get a score of at least 90 out of 100, will I see a boost in ranking or traffic?" Essentially, moving to a mobile-friendly site is a great way to help your users when they're using a smartphone to actually get to your content and be able to use it a little bit easier. So that's something that you're primarily doing for your users. I suspect, over time, this is something that we'll pick up on as well in search, and we'll try to kind of bubble up the mobile-friendly status a little bit more visibly. We do show it kind of as a badge in the search results already, if you're searching on a smartphone. Maybe it makes sense to rank these a little bit higher in the long run. I'm not sure where we'll be heading there. But essentially, primarily you'd be doing this for your users. We see a lot of people using smartphones on the web. There are studies, I think recently done, that show 60% of the e-commerce activity on Black Friday was done on smartphones. So there's a large group of people who are using smartphones, and they might want to visit your site as well. And if you're blocking them by showing them a site that they can't actually use on their phone, then that's going to be the primary impact that you're seeing: people go to your site, and they can't actually do anything. They can't buy anything, or it's too complicated for them to get through all the processes that are set up for desktop and don't work that well on mobile. And they'll jump off and go to a competitor. So that's essentially the primary change you'd see there. I wouldn't just move to mobile friendly for SEO reasons. Of course, moving to mobile friendly is always a good thing. And if you're saying, I'll only do it for SEO reasons, then I'm not going to hold you back from doing that. But I wouldn't expect a gigantic jump in rankings just by having a mobile-friendly website.
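
[Editor's note: a minimal sketch of the redirect-plus-rel=canonical setup John recommends above for an HTTPS move, assuming an Apache server; example.com is a placeholder.]

```apache
# .htaccess: send every HTTP request to its HTTPS equivalent
# with a single 301, avoiding redirect chains.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]
```

Each page on the HTTPS site then points the canonical at itself:

```html
<link rel="canonical" href="https://example.com/some-page/">
```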

BARUCH LABUNSKI: Right, and I am seeing the crawl errors come down after two months. You know what I want? I want to mark it as fixed, but hey. So all the errors are kind of coming down, and it takes time, right? But what I like is that it takes you to the pages on the site, and you can see everything that you've corrected and so forth.

JOHN MUELLER: Yeah, the usability report, that's pretty cool, yeah. Good to see that the numbers are coming down.

BARUCH LABUNSKI: Yes.

MALE SPEAKER 3: Hey, John. On a related note, I'm filling in for Lyle about the mobile friendly.

JOHN MUELLER: Mm-hm.

MALE SPEAKER 3: We were wondering-- like, we have ad blocks on the desktop version. And we would like to keep them up higher on the mobile version. Right now, we have moved them below, so it all flows in line. Is that frowned upon? Can that be looked on in any bad way-- hiding blocks and having other blocks reappear, shifting where the ad blocks would appear on mobile versus desktop?

JOHN MUELLER: Generally, you have to change the layout if you're going from a desktop layout to a mobile layout. And usually what you do is you take the individual blocks, and you shuffle them around a bit and see where each block makes sense. So, for example, if you have a sidebar, then that might be something you move into the main column. It might be something you kind of hide from the main column completely if you think that it's too confusing for mobile users. So essentially, that's up to you. The way that you shuffle these ad blocks around, the way that you shuffle the content blocks around, that's something kind of between you and your mobile users. It's not something where we would say you need to keep exactly the same order, or you need to keep it in an equivalent order. We expect the content to be equivalent-- the primary content of these pages. But how you do the ads on those pages-- if you show them at all, if you show different types of ads-- that's essentially up to you.
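
[Editor's note: a minimal sketch of one way to reposition an ad block between desktop and mobile, as discussed above; the class names and breakpoint are placeholders, and the ad markup itself is omitted.]

```css
/* Desktop shows the ad slot near the top of the page; on small
   screens that slot is hidden and a slot placed lower in the
   markup is shown instead. */
.ad-slot-top    { display: block; }
.ad-slot-bottom { display: none; }

@media (max-width: 640px) {
  .ad-slot-top    { display: none; }
  .ad-slot-bottom { display: block; }
}
```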

MALE SPEAKER 3: So that wouldn't be a negative. Google wouldn't see it as listed twice. One time it was-- on the desktop, it was showing up in one spot. And on the mobile, it would be unhidden on a different spot.

JOHN MUELLER: That's fine. That's totally up to you.

MALE SPEAKER 3: OK, thank you, John.

JOHN MUELLER: And that might be something you just want to test, where you run A/B tests with your users and see how they interact with your pages. I think there are lots of subtle things you could try out there. But that's not something that would have an effect on search.

MALE SPEAKER 3: OK, thank you.

JOHN MUELLER: "Does Google consider brand searches as an indicator of quality and use it as a trust factor for the site? Also, are you using clickthrough rate to measure the top 10 organic results?" I'm not aware of us using brand searches specifically as a way to recognize the quality of a site. I think it's a good thing that people are searching specifically for your brand, but I don't know if that's a sign that we would need to rank the site higher in general. So if people are searching specifically for your site, then, of course, we want to show your site. But if people are searching for a specific product that you're offering, just because people were also searching for your brand name doesn't necessarily mean that that product is going to be more relevant for them. So from that point of view, I'd shy away from connecting brand searches with ranking elsewhere on your website. With regards to clickthrough rate, I think that's something that's really useful for a webmaster to look at, because they understand their site best. From our point of view, we primarily use that as a way to kind of check our algorithms in general. So if we make specific changes to our algorithms, we'll look to see: are people still finding the results they're looking for in the top positions? Or does this ranking change, for example, result in people clicking on things that are lower on the page? Then maybe we did something wrong with our ranking change. So that's something that, on a very aggregated level, makes sense for us to use. On a very detailed site or page level, it's a very, very noisy signal. So I don't think that would really make sense as something to use as a ranking factor there.

MALE SPEAKER 4: Can I follow up on that?

JOHN MUELLER: Sure.

MALE SPEAKER 4: How do you-- if at all, how do you differentiate between a brand search and a domain search, if it's something like ours where we're Xperience Days. And that's what we sell. So whether someone's searching for the experiencedays.com or an Xperience Day being what we sell. And the same for anyone with an exact match domain, because there's lots of talk on forums about penalties for exact match domains, but I'm not sure there's any truth in that. Perhaps you can expand on that. But how do you know what's what unless someone's actually putting the dot-com in there?

JOHN MUELLER: I don't know. [BOTH CHUCKLE] I don't have a great answer for that. It might be that we're not actually looking for brand searches, that we're just trying to match for relevance there. If we can recognize the relevance based on the content on the page, or the external factors that we might have associated with the page, then that's a good sign, I guess, for relevance and for ranking. And usually for brand names, those kinds of things, that kind of aligns automatically. So if people are linking to your site with your brand name and someone is searching for your brand name, then that's something that we can pick up and combine there. With regards to exact match domains, the main issue that we sometimes see is that people will go out and buy an exact match domain, and they'll expect their site to always rank for those terms. And that's definitely not going to be the case. It's not going to be the case that if you have a domain name like-- I don't know-- bestseocompany.com, your site will automatically rank for best SEO company. We take into account a lot of factors in our rankings. So just because a domain name matches the keywords does not automatically mean that it'll result in a ranking. Obviously, if you have a very obscure brand name and people are searching for something like-- I don't know-- Google, which isn't really a keyword that people would otherwise be searching for, then it'll be a lot easier for you to rank for that kind of keyword, because you have this really weird name, and the only website that's out there that's really relevant for that weird name is your website. And if people are explicitly searching for that weird name, then obviously we don't have that many results to pick from. But I think this is something where people have too high expectations for exact match domains. Just because a domain matches the keywords that people are searching for doesn't mean that it'll rank for them.

MALE SPEAKER 4: Right, and that should be fairly obvious. But I've seen specific talk of suppression or penalties for EMDs when I don't know that that's ever been confirmed.

JOHN MUELLER: For us, the biggest problem with regards to web spam is if we see, like, a big collection of sites that are essentially targeting a whole collection of keywords, and they're really low-quality sites that we essentially just want to throw out anyway. Then that's something where we could correlate the low-quality sites with the exact match domains and take them out like that, from a web spam point of view. But if you have an exact match domain and this is your main site and people have been searching for it like that, that's not something that you'd have to worry about there. It's really a matter of this combination of exact match domains and really low-quality sites that kind of runs into that. So just having an exact match domain isn't really going to cause problems for a site.

BARUCH LABUNSKI: But sometimes-- sometimes there is a boost. Remember you were saying one time, there is a boost maybe like two months or something like that. And then the site will go back to page three or four for places that are not that competitive, if you think it's highly-- if the site is really well-made.

JOHN MUELLER: I think that goes back to the other point: if we think this is a relevant site, then we'll try to show it in the search results. And that's primarily independent of the domain name. So whether you have bestseocompany.com as the domain name, or you call yourself something else and have that as a domain name, it can still rank for those keywords. That's not something that is dependent on those keywords being in the domain name. So if you're out there trying to target a specific set of keywords, I wouldn't just go out and buy an exact match domain name and put a website up there and assume that it'll rank. Rather, if you're buying a domain for the first time, make sure that you pick something that maybe leaves a little bit of room to grow, maybe embeds your brand or the image that you want to expose to users-- something that you want to keep for the really long run. And use that. Because if you just jump out and get, like, this three-keyword-combination domain name that just happens to be free, you're kind of narrowing your target audience down in a way that maybe makes it harder for you in the long run, because you'll have to either get more domain names to spread out for the new things that you're working on, or switch to a completely different brand name over time. All of those things are additional hassles that, if you can avoid them, make your life a lot easier.

BARUCH LABUNSKI: Right, yeah. Makes a lot of sense. Users are scared to come to you anyway, because they're like, OK, your name is Gennifer Flowers. Like there's so many of them, you know?

JOHN MUELLER: I wouldn't assume that the average user would pick up on this exact match domain stuff. They probably don't realize that this is some SEO trick from, essentially, the last century. It's just something to kind of keep in mind, especially when you're setting up a new website for the first time for a company. Picking an exact match domain kind of narrows things down for you. And picking something that works as a brand for the long run, where you can say, I can definitely live with this for the next 10, 20 years, makes it a lot easier for you to grow over time.

BARUCH LABUNSKI: For sure, yeah.

JOSHUA BERG: I always wondered if Google recognizes proper names better as being a brand name, rather than some kind of keywords or something. But I think, from what you're saying, maybe the association can be picked up better in queries and stuff.

JOHN MUELLER: I'm not sure what you're hinting at.

JOSHUA BERG: I mean, like-- [INTERPOSING VOICES] --would look at whether the site has a strong brand presence. Well, it would be difficult to do that without knowing what that brand is. Or maybe that's just overrated.

JOHN MUELLER: I think that's partially overrated, at least from an SEO point of view. I think from a marketing point of view, that's definitely something worth focusing on, because if people know about your company and they're explicitly searching for your company, that's always a good thing. And that's something that, I think, will always have a positive effect on a website, because people are going to search explicitly for your website. Or instead of searching for blue shoes, they're searching for blue shoes yourbrandname.com, or your brand name. And of course, they'll find their way to your site, because they're explicitly searching for your site, too. So I think working to create this presence with your users-- so that they want to come back to your site, they know about your site, they remember your website, they can explicitly recommend it to other people by saying, hey, I always get my stuff there-- that's something that's always a good thing to aim for. I imagine it doesn't work in every niche. It doesn't work for every type of site. But if you can get that kind of loyalty from your users, that's always a good thing to have. And that's completely independent of any SEO aspect there.

JOSHUA BERG: Yes, I was wondering about what was mentioned earlier also, with queries being more associated with particular sites if those sites are frequently searched for with those queries. But I realize that that would have a big potential for abuse as well. But it seems, when you see Google autocomplete the searches, that a lot of brands, persons, individuals-- like my name, Joshua Berg, will appear next to SEO, and then Joshua Berg actor, because there's another Joshua Berg who's an actor. So just because those are common queries, those associations are made-- is it reasonable to assume that that's how that works? Because that's what we see appearing in the autocomplete.

JOHN MUELLER: I'm not sure how autocomplete pulls that together. So I wouldn't assume that it's using the same things that we would use for web search for ranking, for example. So sometimes you'll see things happening in autocomplete that are kind of their own island there. So it's not something where I'd assume, because autocomplete is doing it like this, that this is the same type of thing we do in web search. But it's always interesting to see how these kind of different parts of Google pull in different information to try to get similar or maybe slightly different information as well.

MALE SPEAKER 5: Hi, John. Sorry.

JOHN MUELLER: Hi.

MALE SPEAKER 5: Hi. Can I ask you a question?

JOHN MUELLER: Sure.

MALE SPEAKER 5: Thank you. I was wondering about the difference between what's useful content and what can be considered thin content. For example, if you're doing a guide for shops in a city and you are publishing a website with their store hours, this is not very much information, but it's very useful. And if you are the only website with this data, you have collected all this information in one place. Can that be considered thin content, or would the quality score be good?

JOHN MUELLER: The quality score, I think, is an AdWords metric. So I can't speak for that. But essentially, if you're fulfilling what people are looking for, that sounds like a good thing. And that might be very little information if that's all people are actually looking for. If they're looking for the opening hours, if they're looking for the address, and they land on a page with that information, that's great. You don't have to write a novel for that.

MALE SPEAKER 5: OK, thank you. I was wondering, if we include maybe a custom-made quality metric for those stores, in order to give more help to people-- this is made by ourselves-- could that be good or bad? Maybe it's a good idea to do it ourselves, or do we need to use an independent source for that?

JOHN MUELLER: I think that's always a good idea. So any time you can go through your content, try to recognize the good parts and the bad parts, and then take action on that, I think that's always a great thing to do. And sometimes there are easy ways to do that, depending on your website. Sometimes it's a lot harder. So you can't always say you should focus on this one metric and use that as a quality proxy. Maybe there are some things that work better for your site. Maybe there are other things that don't work as well.

MALE SPEAKER 5: Thank you.

JOHN MUELLER: All right. Let's run through some of the submitted questions, make sure we have everything covered here. "I'm preparing a case study about mobile-friendly sites. I'm wondering if you're seeing that mobile-friendly websites are getting more search traffic and engagement metrics compared to non-mobile-friendly sites." I don't know so much about engagement. That's really hard to say. With regards to more traffic, I recently saw a blog post from someone saying that as soon as they updated their site to be mobile friendly and got the mobile-friendly label, the clickthrough rate for their site in search rose significantly. So that's something that might be playing a role there. I know there are lots of studies out there in general about how users react to mobile-friendly sites. So that's kind of what I'd be looking for there. For example, the Boston Consulting Group put out a study, I think in December, regarding mobile sites and mobile internet users in general: how they interact with mobile sites, how they use the mobile internet. And that can probably work as a basis as well and kind of help you to make a decision around making mobile-friendly sites.
"Does Google use clickthrough rate as a ranking factor?" We talked about that briefly before.
"Different versions of Panda in small countries like Chile"-- Chile is actually a pretty big country, I think, geographically-- "I've seen cases of search results where you simply do not understand how the latest versions of Panda apply." In general, we try to make our algorithms work as globally as possible. So I don't think we have any country-specific kind of algorithms out there. But of course, within the country-specific search results, you'll see different changes depending on the type of content that we have in those countries. So it's not that the algorithms are country-specific, but the search results are country-specific, and the algorithms are kind of interacting in a general way with those search results. So you're bound to see different changes in different countries, depending on how these algorithms pick up that content. For example, if you're looking at a country where most of the sites are pretty low quality and a handful of sites are really high quality, then, of course, it might happen that a change like Panda, which kind of looks for higher-quality sites, has a more visible effect, because that's just what we have to work with. These sites are there. There's a bigger threshold between the higher-quality and the lower-quality sites, so that might be more visible there. But it's not the case that the Panda algorithm is country-specific. It's just that the websites that are out there for us to work with are specific to those countries and might have aspects that are kind of unique or special in those countries.
"For mobile-friendly RWD sites, lots of content must be hidden on mobile, mostly navigation. Some images are hidden or displayed depending on resolution. I'm worried that doing this might violate the rule against hidden text. Is using display:none in CSS for responsive sites OK?" That's definitely OK. That's not something where we'd say this is a problem. Essentially what we want is that the user sees an equivalent page on their mobile phone. And equivalent might mean that the images are slightly different. It might mean that the sidebar is gone. It might be that the navigation is a little bit different, as long as the equivalent primary content is the same. So if you have a page about-- I don't know-- Ford car insurance, and on mobile it's still about Ford car insurance, then that's perfect. On the other hand, if on mobile you look at it and it's about Ford-- I don't know-- car wheels or something else, something different, then that, of course, would be problematic from our point of view. But if the primary content is equivalent, even if the text isn't one-to-one exactly the same, that's fine.
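
[Editor's note: a minimal sketch of the responsive pattern being asked about; the class names and breakpoint are placeholders.]

```css
/* Desktop-only navigation and sidebar are hidden on small screens;
   the primary content stays the same on both versions. */
@media (max-width: 640px) {
  .desktop-nav,
  .sidebar {
    display: none;
  }
}
```
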
"How long does calculating rankings for a new subpage take after indexing? I mean possible rankings for relevant keywords." It's really hard to say. So the first step is crawling and indexing the pages. That's something that can happen in a matter of minutes to a matter of months, depending on the website and how quickly and deeply we crawl the website. So that's, I think, the first step: you're looking at a time frame from minutes to a couple of months, which is already pretty big. And with regards to ranking that page, that's something that, again, can happen within a matter of minutes, or it could take a couple of months to understand how that page fits in with the rest of the web. So there is no absolute answer here where I'd say, after five days you will see exactly this. It really depends on the website. It depends on the page, on the information that we have about the page, and other signals that we've collected there.
Here's a fun question. "If you have an SEO agency, would you offer link building as a service?" So I don't know. On the one hand, I don't have an SEO agency, so I can't really give you an absolute answer there. I think this is something that probably depends a little bit on how you define things like link building. There are a lot of things you can do to kind of promote your brand, to promote the content that you have, that essentially might look similar to link building. So I don't really have any yes-or-no answer there where I'd say, yes, I'd do exactly this. I chatted about this with the other people here in the office briefly. And it was an interesting discussion, but it's not the case that I'd say, yes, I would have a link building team that would go out and email everyone to try to get links to my clients' websites, or, no, I would never talk to any other website at all. I think there's some middle ground there, where finding the right balance makes sense.
"Did you learn anything about the fate of Search Engine Roundtable?" I don't have anything specific to share about that at the moment, very sorry.

BARUCH LABUNSKI: I want to know.

JOHN MUELLER: "Why some subpages don't show yourself in ranking for unique content?" I'd have to look at some examples there to see what exactly you're looking at. But there are definitely reasons why maybe we wouldn't show a specific page for a specific set of queries, where we'd say, well, maybe we look at this page and it looks like a keyword stuffing page or it looks like an artificial spammy page. Maybe we don't trust the site in general. There are some aspects that might come into play there. But I'd really need to look at the specific query and the specific URLs that you're looking at there. "I have a bilingual targeted website, Greek and English. We'll be implementing HTTPS. Is this setup correct?" Ooh, let's see. Everything will be redirected to .gr, and then 302 redirected to .com. So in general, there are two aspects I would look at there. I'd have to look at the site specifically to see what exactly is happening there. But in general, there are two things you should aim for. On the one hand, keep the number of redirects that are chained to an absolute minimum. So instead of setting up this complicated chain of redirects where you say, well, this version redirects to this, and that redirects to this and redirects to this, try to redirect directly to your target page as much as possible. When it comes to creating bilingual content, make sure that each of these language versions are crawlable separately so that we can crawl the English version, we can crawl the Greek version, and we can index those versions separately. If you can set up hreflang between those versions, if you have equivalent content, then that's even better for us. If you have one version that tries to recognize the user's language, settings, or location and reject them automatically, then I would set that up as a default in the hreflang set. So then you would have your default version, then you would have your Greek and your English version, and we'd be able to crawl and index those versions separately. So that's something essentially that I would aim for. On the one hand, make sure that the redirect count is minimum. On the other hand, make sure that you have these separate versions that are crawlable and indexable separately, and that you have hreflang set up appropriately between those versions. "If you're loading content by AJAX, should the AJAX response have a no index HTTP header or a canonical header pointing to the full version of the page?" No, I would not do that. We're generally not going to be indexing AJAX responses directly. So the no index header wouldn't really make sense there. And what we've sometimes seen is that people break the AJAX response by trying to inject HTTP meta tags into that as well. So if you're not paying attention completely, you can kind of break things for your site, or for indexing in general. So that's something that I try to focus on there. Make sure that you're serving your content directly. The no index tag won't have any affect there, so I wouldn't even include that there. Because any time you're adding additional complexity, you're adding additional room for things to break. "Can you ignore the really basic SEO questions in these Hangouts which have been answered in the past? Even the ones that been voted up, these Hangouts should be for really difficult and technical questions only." This is good feedback to have. But I think because people are still asking a lot of these normal questions, it's also important to get these answers out there. 
"If you're loading content by AJAX, should the AJAX response have a noindex HTTP header or a canonical header pointing to the full version of the page?" No, I would not do that. We're generally not going to be indexing AJAX responses directly, so the noindex header wouldn't really make sense there. And what we've sometimes seen is that people break the AJAX response by trying to inject meta tags into it as well. So if you're not paying attention completely, you can kind of break things for your site, or for indexing in general. So that's what I'd focus on there: make sure that you're serving your content directly. The noindex tag won't have any effect there, so I wouldn't even include it, because any time you're adding additional complexity, you're adding additional room for things to break.
"Can you ignore the really basic SEO questions in these Hangouts, which have been answered in the past? Even the ones that have been voted up. These Hangouts should be for really difficult and technical questions only." This is good feedback to have. But I think because people are still asking a lot of these normal questions, it's also important to get these answers out there. So I'm not going to try to censor these questions in any specific way, but rather take the things that are voted up and make sure that we have answers for them. If there are specific technical issues that you really need to have answered, put them in there early so that people can vote on them early. And usually, if they're in there earlier, they can collect votes a little bit over time. So that's essentially what I would do there.
"We're running a single-page app on AngularJS for property listings. Besides not being able to serve escaped fragment versions fast enough, we also have dynamic URLs for the results. Where should we put our efforts first?" I think if you're setting up your site to run with a JavaScript framework like Angular or Ember, or something like that, I'd skip the step, at the moment, of creating the AJAX crawling site. That's something that has helped us in the past a lot to understand this data better. But more and more, we're able to crawl and index these pages even if they're using JavaScript to pull in the content. So make sure that your JavaScript versions have unique URLs, so that we can crawl and index those URLs separately. And past that, make sure that we can actually crawl all of the content: the JavaScript files, the AJAX responses, all of that. So that's something you can test with the new Fetch as Google feature, the render view, to see what Googlebot would be able to see from your site.
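
[Editor's note: a minimal sketch of the "unique, crawlable URLs" advice for an AngularJS 1.x app, assuming HTML5 pushState routing; the module, route, and template names are placeholders.]

```javascript
// Give each view a real URL (/listings/123) instead of an
// escaped-fragment address (/#!/listings/123).
angular.module('listingsApp', ['ngRoute'])
  .config(function($locationProvider, $routeProvider) {
    // Requires a <base href="/"> tag in the page's <head>.
    $locationProvider.html5Mode(true);
    $routeProvider.when('/listings/:id', {
      templateUrl: 'views/listing.html',
      controller: 'ListingCtrl'
    });
  });
```

The server also has to answer those deep URLs directly (serving the app shell), so that a URL like /listings/123 resolves when requested on its own.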

MIHAI APERGHIS: Hey John, speaking of JavaScript frameworks, I have a question about Penguin.

JOHN MUELLER: Sure.

MIHAI APERGHIS: I don't know if you saw it-- the Whiteboard [INAUDIBLE]. So I think it caught on in, like, January. Josh Bachynski had a Whiteboard Friday, and it stirred quite a bit of controversy. He was basically saying that disavowing links to remove the Penguin penalty isn't enough-- you cannot simply get rid of a Penguin penalty just by disavowing links. You have to take at least some other steps, like actually removing the links themselves. Otherwise, just disavowing them won't be enough. Is this something that you can shed some light on?

JOHN MUELLER: It's kind of unrelated to JavaScript, but I'll take a shot at it. So for manual actions, we definitely expect you to also remove links. So if there's a manual action involved for unnatural links that were taken by a site, we definitely expect you to do that. With regards to algorithmic changes, where we're picking up on problematic links, technically you don't need to do that. You can just use the disavow tool. Practically, I think it always makes sense to make sure that you're covering those problems as broadly as possible, that you're cleaning up these issues that you, or maybe your SEO, or a previous SEO for the site created. So cleaning it up, I think, always makes sense. From a technical point of view, for the algorithms, the disavow file is essentially sufficient in those cases.

MIHAI APERGHIS: OK, thank you.

BARUCH LABUNSKI: But you said also like that with basically you don't-- so if you've never had a manual action-- honestly, trust me. I don't want to talk about this too much, too, you know? But you said that if you get better links, then it will overwrite kind of the Penguin, if you were affected by Penguin, overwrite the--

JOHN MUELLER: Yeah, I don't know. That's been pulled up a little bit recently.

BARUCH LABUNSKI: I said, hey, no need to disavow. Just--

JOHN MUELLER: No. So from my point of view, if you're aware of any problems that you were creating with regards to these links, you should always clean that up. It's like if you have-- I don't know-- typos on your website: you know you have typos on your website, and it's just like saying, well, people accept that there are typos on my website. It's fine. That's something you want to clean up. It's a problem. You're aware of it. Cleaning it up always makes sense, especially in cases where you suspect that maybe an algorithm picked up on this and kind of took action based on that. Cleaning that up always makes sense. The disavow tool is a technical tool. It essentially takes those links out of the calculations for our algorithms. So it's not like an "I'm guilty" sign that you hold up, where anyone who looks at your site and sees you have a disavow file says, oh, you're an SEO, and you were buying links, and you're going to be penalized for having done that, even if you've cleaned that up now. The disavow tool is essentially a technical tool that takes these out of the calculation. It's not something you should be ashamed of using. It's not something where you should say, I don't want to use this because it does something crazy for my website. It's kind of like a meta tag you put on your pages. A noindex meta tag says, OK, don't index these pages. The disavow tool says, don't take these links into account.

BARUCH LABUNSKI: No follow. No follow.

JOHN MUELLER: Kind of something like that, yeah. It's a technical measure. It's a tool you can use. It's not something you need to be ashamed of. It's not something you need to avoid. So if you're aware of this problem and you suspect that it has a negative effect on your site, then clean it up with the disavow tool. Clean up the links if you want to be extra safe, if you want to kind of take that extra step to make sure that they're not even out there anymore. I think both of those definitely make sense. And hoping that, in the future, new links that your site gains naturally will override those problematic links that were set in the past-- I think that's something that, as a business owner, as a person running a website, you shouldn't be relying on. You shouldn't rely on hope like that. You should be taking the technical steps to clean up the problem, to kind of really take it on and resolve those issues. So instead of relying on hope that maybe in the future my website will have gained enough natural links that the algorithms might not be taking those problematic links into account anymore-- that's not something you want to rely on. If you have a business that you're running, you want to be able to rely on the stuff that you're doing actually working, and not do something esoteric and say, well, I read this online somewhere. This should work more or less. I have no idea what it actually does. I'll just copy and paste it on my website and do it like that. So if you know about problems, fix them.

MIHAI APERGHIS: John, so you said the disavow file is sufficient for removing a penalty. Would you at least say that actually removing the links, or 404ing the pages that they lead to, would be read faster by the algorithms than the disavow file? Or is it all the same?

JOHN MUELLER: It's essentially equivalent. So I think one aspect to mention is that, from our point of view, Penguin isn't a penalty. It's essentially a search quality algorithm. I know some people call it a penalty because they see it as taking the same kind of steps. But from our point of view, a penalty is really something that was done manually by a web spam team member. And the algorithms-- the search quality algorithms around web spam issues like that, like the Penguin algorithm-- are essentially things that just run automatically. It's not that someone has manually looked at your site and penalized your site for that. So these algorithms look into the links that are still in our link graph for your website. And if you've taken them out with the disavow file, if you've removed those links from the other website, if you've removed the pages on your side, all of those things are essentially equivalent in that those links drop out of our algorithm. We don't take them into account anymore. And the next time the algorithm runs, it can focus on the rest of your site.

JOSHUA BERG: So John, you've mentioned that the main links that you need to be concerned with in the disavow, in sorting, are the ones in Webmaster Tools, and that it would be kind of scraping the bottom of the barrel to go looking for a lot of other links on the internet. And so if that was the case, then an assumption might be made that Penguin only uses the links that are in Webmaster Tools. So would that be correct?

JOHN MUELLER: No. We show a sample of the links in Webmaster Tools, but Penguin and our other algorithms do try to take into account the full picture. So usually what you'll see with the sampling in Webmaster Tools-- for example, if you have a site-wide link on one website, you'll see maybe one or two or three of those pages listed. But actually, we know that this is on all of the pages on that website. So just disavowing those individual URLs is not really going to solve that problem. So that's a case where, for example, using the domain directive in the disavow file kind of helps you to sidestep that problem. Even if only one of these URLs is listed in Webmaster Tools, with the domain directive you can make sure that anything else on that domain is disavowed. So that kind of helps there. Also, with regards to the type of links there, if you look into Webmaster Tools and you see a bunch of forum links there, then you can think about whether maybe a previous SEO ran some kind of script to drop the link on hundreds or thousands of forums out there. And maybe it's worthwhile taking that sample and seeing if it matches a bigger-picture view that you have from somewhere else. So from that point of view, I think we try to show the general picture in Webmaster Tools with the links there. But I wouldn't say that these are 100% just the links that you need to rely on if you have a problem with regards to unnatural links to your site.
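
[Editor's note: a minimal sketch of a disavow file using the domain: directive John mentions; the domains and URL are placeholders.]

```
# disavow.txt, uploaded in Webmaster Tools
# Disavow one specific URL:
http://forum.example.org/thread/123
# Disavow every page on a domain, including pages that are not
# shown in the Webmaster Tools link sample:
domain:spammy-directory.example
```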

JOSHUA BERG: OK, thanks.

MALE SPEAKER 6: Just a very quick question with regards to the disavow file. What sort of time does it take to process? So after you upload the disavow, is it instantaneous, or is it something that runs on a kind of crawl that happens every week or two? Kind of referring slightly back to the blog post that was mentioned earlier, with regards to seeing recoveries within a couple of days, which I don't see as physically possible, but you know.

JOHN MUELLER: So the disavow file is processed immediately, but that's essentially just the contents of the file that are processed immediately. And you'll see the status update in Webmaster Tools. What actually happens with those links is only reprocessed when we reprocess those URLs. So when we recrawl the pages that had the links and we see that there's a disavow file there, we'll drop that link out. So that's something that can take a couple of months. It can take half a year, maybe even longer, for all of that to be reprocessed completely. It kind of depends on how quickly and how often we crawl those individual pages. If there's an individual link on a homepage somewhere, we can probably recrawl that and reprocess that within a couple of days. If there's a site-wide problem, if there are links on a lot of obscure forum pages, then that's going to take half a year or longer to actually recrawl all of that. So that's something where I wouldn't expect to see changes within a couple of days. And even after we've recrawled and reprocessed those URLs, the algorithm data has to be updated as well. So it'll still take a while for that data actually to be updated and to be visible in search. So having changes within a couple of days seems a little bit unrealistic. That doesn't mean that there weren't maybe other things that were changed, and that happened to be processed during that time, so the site suddenly saw a jump because of that. But that seems like something that's more a coincidence-- these things happened to be timed fairly close together-- than a real relationship between updating the disavow file and the site jumping up in rankings a couple of days later.

BARUCH LABUNSKI: So why not just put more in brackets? Because you get an email after, saying, hey, you've updated the file. It's been updated. OK, most webmasters do understand that. So what about putting in brackets something like, only the file got updated, you know? So they won't-- I don't know, because there are all sorts of webmasters.

JOHN MUELLER: I haven't looked at that message for a while. I-- yeah, seems like something we should double check. We're going through all of these messages that we're sending out in Webmaster Tools anyway to kind of update them to a new layout. So I'm sure we'll get to that message as well and see if we need to make any text changes there, too.

BARUCH LABUNSKI: OK.

JOHN MUELLER: All right.

MALE SPEAKER 7: John, my question is essentially the same: whether the Penguin penalty, the Penguin algorithm, penalizes [INAUDIBLE] overoptimization. Does it?

JOHN MUELLER: Penguin is a web spam algorithm. So it looks for issues that are kind of related to web spam. And I don't know-- depending on how you define optimization, that could apply. If you're defining optimization, for example, as creating unnatural links on various directories that are just out there to kind of pass links on, then you could call that overoptimization. But in general, it looks for web spam issues. And if you overoptimize your site in the sense that you're overoptimizing for user experience, then there's nothing wrong with that. There's nothing web spammy about that. So just because you're optimizing doesn't mean that you're creating web spam.

JOSHUA BERG: John?

JOHN MUELLER: Yeah?

JOSHUA BERG: What would you consider to be best practice for using a slider? I don't prefer using sliders on the home page of a site. But in certain industries-- and we talked about it a few months ago-- they're preferred, because they provide big images that are changing. So if we have clients that are quite certain they definitely want to use a slider on the home page, what would you consider some best practices? You mentioned not burying things that are important on the third or later slide. Anything else on that?

JOHN MUELLER: So essentially what I'd look for there is that the page is still clearly usable, even if there's a part of the content that's kind of missing or hidden. So usually with a slider, you'll have some slides visible, some that are kind of hidden because you have to kind of flip through to them to get that content. And if there's critical information that's not visible directly when someone views that page, that might be a problem for algorithms, because we might not pick that up. But if the slider just provides additional information, for example, you have a real estate homepage and there's lots of information on that page about the type of work that you do, and the slider just shows different samples of links there, then that's something that's perfectly fine. That's not critical information for those pages. It's just additional information that helps people to find the right content within your website.

JOSHUA BERG: So do you think the above-the-fold algorithms, like a layout-type algorithm, is going to take much of a hit there? Or will that only get tripped if certain other low-quality indicators might be there as well?

JOHN MUELLER: Usually that's more of a problem if you have things like random ads above the fold, no actual content. When you're looking at a normal website and the slider contains samples of the content from the rest of the site, that's not an ad. That's like a part of your content, and that's not something where I'd say is a problem from that point of view.

JOSHUA BERG: OK, Thanks.

JOHN MUELLER: All right, I have to wrap up a little bit early-- someone needs to grab this room. If someone has a really, really quick question, jump in now.

BARUCH LABUNSKI: This is, I think, the best question for 2015-- well, since this Hangout started, nobody's asked this. OK, so at the bottom of the search results, it says "searches related to." If you're a brand name, and somebody keeps on typing, example whatever 1, 2, 3, and keeps on typing it over and over and over and over again, and then uses like complaints or whatever and so on and so on, that ends up in the search results. I mean, how does a brand remove that? Because a lot of people are having difficulties with this, and no SEO can remove such a thing. Do we just-- what? Does the brand take a lawyer and send you guys-- I don't know. Like, how do you remove that?

JOHN MUELLER: That's essentially completely algorithmic. So that's not something where there's a magic button that we can push or that a webmaster can push and say I don't want to have this found there. Essentially, over time, as we see how people search, as we recognize problematic behavior as well, that's stuff that these algorithms will take into account.

BARUCH LABUNSKI: But John, it can affect the brand like overall, right?

JOHN MUELLER: It can. And sometimes I think there are good reasons for that. I mean, if you're looking at-- I don't know-- something really problematic that a brand did, for example, and people are searching for that, then I think that's the right type of information that we should be giving people. So it's not something where I would artificially filter the type of content that we show there, but rather try to expose what we think is relevant there. And sometimes our algorithms get it right. Sometimes we get it wrong. And if we get it wrong in those cases, I think that kind of feedback is always useful to have. It's not the case that we have any kind of direct feedback loop where we say, oh, if you put this meta tag on your pages, then we won't show this specific related query, or if you upload them to this part of Webmaster Tools, it won't show up like that. It's really essentially algorithmic. And we try to improve our algorithms with the feedback. But we can't guarantee that we can manually take action on these things.

BARUCH LABUNSKI: But it does change, yeah? Like because it's been already like five years. It's the same thing.

JOHN MUELLER: Of course, yeah. I mean, it changes over time, just like the algorithms change over time.

BARUCH LABUNSKI: OK.

JOHN MUELLER: So that's something. If we see things happen or change with user behavior, we'll take that into account.

BARUCH LABUNSKI: Thanks so much. I'll also send you an email, John. It's pretty crazy there.

JOHN MUELLER: Yeah.

BARUCH LABUNSKI: [INAUDIBLE].

JOHN MUELLER: All right. Thank you everyone for your time and your questions. I wish you guys a good start into the new year. And maybe we'll see each other in one of the future Hangouts.

JOSHUA BERG: Thank you, John.

JOHN MUELLER: Bye, everyone.

BARUCH LABUNSKI: Thank you.