Reconsideration Requests
18/Dec/2018
 
Google+ Hangouts - Office Hours - 04 December 2015



Transcript Of The Office Hours Hangout


JOHN MUELLER: All right. Well, welcome, everyone, to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. And part of what I do is talk with webmasters and publishers, like the ones here in the Hangout, the ones that submitted questions, sometimes in help forums as well. As always, I'd like to give people a chance to ask the first question. Is there anything on your mind specifically that we should talk about before going through the submitted questions?

MATT PATTERSON: I would love to ask a question. So we've been seeing quite a big drop in incoming traffic from Google for the last couple of months-- well, the last three or four months. Before that, we were seeing significant increases in traffic from Google. And it seems to have all happened right around the time we implemented what we hoped was proper support for internationalization. So we started adding hreflang tags to pages. Now we're a music video site, so for all that content-- it's not that the language is immaterial, but the language is less important. So we have our UI chrome in English and German. We're a Germany-based site, so our primary market is German. So we used hreflang x-default to indicate that users could dynamically switch their language if they wanted to. But we didn't put specific hreflang rel alternate tags for English and German because the URL was the same. So it seemed counterproductive to add extra rel alternate tags that would point to the same page-- and actually it would still be in whatever language you wanted it to be, and so it wouldn't be true. But since then we've been seeing quite a significant drop in traffic. And we're wondering, actually, did we do the wrong thing? Should we be putting in specific viable alternate URLs for languages so that Google can always see us in whatever language it wants just by hitting the right URL? That's really the question.

JOHN MUELLER: So how do you have it implemented at the moment? Is it between the different language versions, or is it just for the one URL that you're saying is the default, without really marking up the other versions?

MATT PATTERSON: So every page has hreflang x-default and a rel canonical pointing to itself. Because on every page, you just set the preference you want for language. And if you have-- sorry, we do stuff with accept headers as well. So we detect what language you want. We're not doing language specific pages. So it's one page, and you can have it in whatever language-- well, you can have it in German or English, basically. We've been trialing support for sticking an hl equals en or de query [INAUDIBLE] onto the end of pages. But we haven't noticed any significant differences, either positive or negative, on the pages we've tried that on, which are the pages for which we have content licensed outside of Germany. They seemed like the simple ones to try it with.

JOHN MUELLER: So if you just have one version at the moment and you put the x-default on there, that essentially wouldn't change anything. That wouldn't have a positive or negative effect, because essentially we would kind of ignore that markup-- we don't have the individual page pairs that we can kind of mix and match. So if we know that one version is German, the other one is English, and they're equivalent, then we can swap them out as needed. It wouldn't change their rankings; it would just show the appropriate version depending on what the user is searching for. But if you only have one version and you're saying this is the default version, then essentially there's nothing to swap it out with. And at the same time, it wouldn't affect the ranking. So if you're seeing changes in traffic, that definitely wouldn't be due to that. But at the same time, this markup probably isn't doing anything at all for your site.

MATT PATTERSON: So equally, if we were to add the rel alternate hl equals de, hl equals en versions, with a rel canonical that pointed to the unadorned URL, that also shouldn't hurt, but would actually allow you to pick out German and English to display in the German and English indexes as appropriate.

JOHN MUELLER: You would need to set the rel canonical to the appropriate language version. So the German page has a rel canonical to the German version and the English page has a rel canonical to the English version. If you set it across language versions, then we essentially drop the noncanonical versions. So if the English page has a rel canonical pointing to the German version for example, then we would drop the English version because you're still telling us that we should focus on the German version.

MATT PATTERSON: So we've been setting it to-- I can probably dig this out and post it in the chat, but I asked this in the forum a couple of days ago. So we've been sending-- we have the query string extra for German and English, and that gets you German and English always, no matter what. But if you just visit the page, you get whatever your preferences are set to. And we kind of want to keep that URL as the canonical because it is the canonical, and the other stuff is just a way of forcing getting English or getting German, because we haven't had any luck with [INAUDIBLE] we're crawling. And because we're a .tv, we're a generic site, we get hit in English, with all the accept language headers set to English, by Googlebot. So it's actually quite difficult to manage that.

JOHN MUELLER: So [INAUDIBLE], there's a blog post specifically about home pages and how to manage that. But I think that would apply in your case as well. So essentially you have the generic version that automatically switches-- that's the one without the language parameters-- and you have an English version and a German version. Theoretically the right way to set that up, if you want to have these versions indexed separately, is to have the canonical set to each version separately. So the generic one has a canonical to the generic one, the English one has a canonical to the English one, the German one to the German one. And the hreflang is set up so that you have the English version, the German version, and x-default as the generic version. What would happen then is we would rank these pages normally. And depending on what we think the user is searching for-- if it's the German version or the English version, or maybe they're searching in French and they want to see the generic version-- we'd show the appropriate URL. That wouldn't affect the rankings of these pages; it would just show the more specific URL in search. But I'd try this on maybe a small set of your pages first, just to see how that works for you. It might be that you say, well, I just want my generic version indexed all the time because I think that's the best user experience. And that's ultimately your decision. Or you might find out that people in Germany are actually searching for English language content and sometimes want to go to your English version directly. Then that might also be interesting for you, because having the different versions indexed could be valuable overall. But that's something you can definitely try with-- take a section of your site and essentially do A/B testing for search.
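A rough sketch of the setup John describes, for readers following along. The URLs are hypothetical stand-ins (a .tv domain with the hl parameter Matt mentions); each version declares itself as canonical and carries the full hreflang set, with the auto-switching URL as x-default:

    <!-- On https://example.tv/video/123 (generic, auto-switching version) -->
    <link rel="canonical" href="https://example.tv/video/123">
    <link rel="alternate" hreflang="x-default" href="https://example.tv/video/123">
    <link rel="alternate" hreflang="en" href="https://example.tv/video/123?hl=en">
    <link rel="alternate" hreflang="de" href="https://example.tv/video/123?hl=de">

    <!-- On https://example.tv/video/123?hl=en (forced-English version) -->
    <link rel="canonical" href="https://example.tv/video/123?hl=en">
    <link rel="alternate" hreflang="x-default" href="https://example.tv/video/123">
    <link rel="alternate" hreflang="en" href="https://example.tv/video/123?hl=en">
    <link rel="alternate" hreflang="de" href="https://example.tv/video/123?hl=de">

    <!-- The ?hl=de version mirrors this, with its canonical pointing to itself. -->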

MATT PATTERSON: All right, thanks.

MALE SPEAKER: Can I ask a question?

JOHN MUELLER: Sure.

MALE SPEAKER: Basically there's a lot of information about content writing, and everyone is trying to create a lot of content at the moment. And what you would obviously be trying to do is differentiate good content, and obviously the best content, from the general content that's being produced en masse, if that's the best description. And to that end, there's discussion about authorship and referencing. And I'm probably a bit old school, so when I think of that, I just think of my exam papers, where at the bottom I make footnotes and reference the names and the articles I've pulled the data from. And effectively what I'm asking is: if I were to do it properly, can you give me any tips on how Google reads that type of authorship, or how they look at it? Is it in the same context as I did it in my uni days, or is it in a different context?

JOHN MUELLER: So essentially you're referring to other content, or--

MALE SPEAKER: Yes. Yes, that's correct. So say I'm writing a paper on car insurance accidents or something along those lines, and I'm referring to different quotes from different people, and for statistics, getting them from IBIS in Australia. So for those authorities, is it something that I should have clickable links to? Should I just reference [INAUDIBLE]? I just don't know. Is there a suggested method on that path?

JOHN MUELLER: Just do it as naturally as you want. So if these are essentially normal references that you're including, I think including a normal link is perfectly fine. That definitely makes sense. It's not that we would say, well, there's a link to a well known paper here, therefore this other article must be really high quality as well. Essentially your content has to stand on its own. But if you're referring to something that you think users would find very useful, by all means just link to it and point it out that way. And I think that helps provide a little bit more value for your users. It's not that we would treat that as being significantly higher quality automatically from a search engine point of view. But it might be that within the context of your whole content, these references do add value to your content.

MALE SPEAKER: That's what I was trying to capture-- that's something that helps me do it mentally. And the other one is Google+ [INAUDIBLE]. I'll stop there. [INAUDIBLE].

JOHN MUELLER: Sorry, I missed that last part.

MALE SPEAKER: Sorry. There's references to linking authorship to Google+. Do you have any views on that?

JOHN MUELLER: That's essentially the old authorship markup. So with the rel author markup, where you could link to your Google+ profile-- at the moment we don't use that at all. So that's something you wouldn't need to do. I think sometimes it makes sense if you have a social media profile that you'd like to associate with your content and you'd just like to make it possible for people to find it there-- then fine, by all means link to that. But it's not something that we use for authorship recognition at the moment.
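For reference, the deprecated authorship markup John is referring to was a link from the page to a Google+ profile, in one of these two forms (the profile URL here is a hypothetical placeholder):

    <link rel="author" href="https://plus.google.com/+ExampleAuthor">
    <!-- or inline in the byline: -->
    <a href="https://plus.google.com/+ExampleAuthor" rel="author">Example Author</a>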

MALE SPEAKER: Thank you. That's it.

JOHN MUELLER: All right, let's run through some of the submitted questions. Looks like a lot of stuff is in there already. I don't know how far I'm going to make it. Let's see how far we go. As always, if you have any questions in between, feel free to jump on in and ask away. Otherwise, let's get started here.

"We recovered from a partial match links penalty late in 2014. As we're still not ranking for our key terms, is it possible that Google will still not consider certain anchor text terms which were previously kind of discounted?" This is sometimes a tricky question, because it's hard to know what the specific situation is with the website here. So on the one hand, essentially they're saying that they did have some spammy links, some problematic links, pointing to the website at some point. And they probably cleaned those up; otherwise, they wouldn't have been able to have this manual action removed. So on the one hand, those links that used to artificially support this website are now gone. So it's not the case that it would automatically jump back up and rank for those terms, or rank in the same way that it did before. On the other hand, since 2014, lots of things have changed in search. So it would be kind of weird if everything were to go back to the state of 2014 with regards to ranking, with regards to visibility of a website, with regards to how it's kind of picked up and viewed within the context of the whole web. So that's something where I wouldn't expect a website to jump back to the status it had a year ago just after cleaning up manual actions. So one thing I'd recommend doing there is not focusing too much on the old state, but rather looking forward: look at your website and see what you can do moving forward to make sure that you have the highest quality content, the best website possible, that it's easy for people to recommend your website-- all of these kind of normal things that you would do to make a good website and make sure that it's well known by other people.

"Are directory website pages classed as doorway pages even if external links are rel nofollow, or are there any issues from Google's perspective?" So I guess there are a few aspects here which are kind of mixed up. On the one hand, directory websites, which in an extreme case are essentially just a collection of links to other websites, are not necessarily doorway pages. A doorway page is more if you have multiple pages that are essentially pointing at the same thing. So just because a page has a list of different websites doesn't necessarily make it a doorway page. The other thing, with regards to nofollow: I think that's a good practice if you have lists of sites there that you feel might just be submitting to your directory in order to get link value out of that. I think that might make sense. In general, I suspect a lot of directory sites are probably not relevant to users anymore, because there are lots of other ways to actually find this content. In the past, when it was sometimes really hard to find something very specific, I think directory websites made a lot of sense, because it was easy to go to a category of sites and find all of the information there. Nowadays, people tend to go directly to those sites, or tend to search for those sites and then go to them directly, instead of going to a directory website that has links, which essentially just adds another level of indirection to the whole thing.

"We have many partner sites. Some of them are linked with each other via footer follow links. So far we don't think this has been problematic, but do you think we should consider nofollowing them? Is it something Google looks at as a spammy technique?" If you have a handful of sites and they belong to your company, then, Darryl, that's not something I'd really worry about. If you have a really large number of sites, then it probably doesn't make that much sense to kind of cross-link them like that within the footer. So that's something where maybe it makes sense to have one page that kind of lists the collection of the sites within your group, and you could link that.

"Should external links on your website to your social media accounts be follow or nofollow?" I think having follow links there is perfectly fine. That's essentially something you want to point at. So I wouldn't really worry about that.

Wow, so many link questions today. More link questions: "Natural link building is very difficult unless you're a recognized brand or spend a lot of money. How important is it compared to other aspects of SEO? And if we can't get natural links, then what else should we be concentrating on to rank?" So on the one hand, I'm really happy that we use a variety of factors for crawling, indexing, and ranking, not just links, because it does make it possible for websites to appear in search results without having a large pile of links that they've been building up for a long time. So from that point of view, for a lot of things, you don't necessarily need to go out and artificially do natural link building, which doesn't really make that much sense anyway. So I think a lot of sites don't really need to go out and actually do this kind of link building. And in some aspects, I do see a lot of people causing themselves more trouble than they're actually helping their sites, by spending time focusing on search engine factors rather than actually making their website even better, making their business even better, making it something that people really want to go to on their own. So that's kind of one thing where I'd say, well, maybe it's worth focusing your energy more on your site, on your business, on your customers, rather than trying to focus on individual artificial algorithm factors. And these algorithm factors can change from time to time, whereas if you have a really great website, that's not value you lose when the algorithms change. So from my point of view, I'd really focus more on your website and your users, and on making sure that you're doing the right thing there, than try to go out and compete on the basis of link building.

"We've got a spammy structured markup tag on the manual actions page. Unfortunately we don't have any idea what the problem is. We started a discussion of the problem in the forums, but we haven't found a solution. What can we do?" Hard to say without actually looking at the situation. I don't know if I see something specific. So one thing that we sometimes see with regards to rich snippets is that the wrong type of markup is used. For example, we'll sometimes see situations where a site will use recipe markup on pages even though they don't really have recipes on those pages. And that's something where, on the one hand, it's against our policies. On the other hand, sometimes we don't pick that up algorithmically, so we might need to take manual action on that and say, well, in this case, this kind of markup is not acceptable. [AUDIO OUT] I'm back.

MALE SPEAKER: Welcome back, John.

JOHN MUELLER: It's not Monday, is it? I don't know why these things happen on Fridays. Luckily the laptop reboots fairly quickly. All right. So where were we? The site with the spammy rich snippets label. So I don't know how much of that managed to make it through. But essentially what we sometimes see is that sites use the wrong type of markup on their pages. So some of them might use the recipe markup, for example. And that would be against our policies if it's not actually a recipe that's being marked up. And that's something where we might choose to take manual action to kind of clean it up in the search results. And in general, going to the help forums for that is a great idea.

Let's see, what else do we have here? So many questions. "Do internal links carry different amounts of weight depending on where they are on a site, on a page?" In general, this is something I would just try to do naturally on the website. So we do try to recognize where content is on a page-- whether it's more in the boilerplate or more in the actual content part of the page-- and we do try to treat that appropriately. But this is something where I would just work to create a natural structure on your website. There is no need to try to do this PageRank sculpting within a website. It helps us a lot if we're able to really understand the structure of a website. And for that, it makes sense to have a clearly structured hierarchy on the website, so that we understand the categories and subcategories of the different sections of the site, and are able to figure out what the context of all of this is across those pages. So that's not something where I'd try to artificially put as many links as possible in the footer, or hide them in the main content to make it look more appropriate. I'd just create a natural structure for your website so that it's easy for users to navigate. And by doing that, it's automatically more or less easy for us to understand the context of these different pages.

"We have a client with locations in Hong Kong, Singapore, and the US. What's the best URL structure?" So there's a Help Center article on this, which essentially goes into the three main options that are available: you can use top level domains, subdomains, or subdirectories. Subdomains or subdirectories are possible if you're using a generic top level domain. What we don't recommend is that you put the country specific part at the end of the path, because then you can't take a clear section of the site and say, this is for this country, this is for that country. So I'd double check the Help Center article. There are some pros and cons to each of these approaches. With a handful of locations, it's sometimes easier to pick one of the options that works for you. If you have a lot of locations, then obviously there's a big money factor involved as well-- you can't just go out and buy a hundred top level domains and use them all for your website, because there's a significant cost impact there.
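To make the three options concrete, here is how the URL structures John lists might look for a hypothetical example.com with Hong Kong, Singapore, and US locations, plus the pattern he recommends against:

    example.hk, example.sg, example.com       (country-code top level domains)
    hk.example.com, sg.example.com            (subdomains on a generic TLD)
    example.com/hk/, example.com/sg/          (subdirectories on a generic TLD)
    example.com/shoes/hk                      (not recommended: country at the end of the path)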
"How do some pages with no more than one or two external links rank better in Google than pages with more external links referencing them?" So this kind of goes back to the other answer I had, with regards to the over 200 factors that we use for crawling, indexing, and ranking. It's definitely not the case that you need to have a lot of links, or that you need to do exactly the same as your competition, to rank in the search results. So that's something where we'll balance different things together and try to figure out, on the whole, how we fit this together into the search results pages.

"Is it possible that schema markup can affect rankings if you've marked up things incorrectly?" I don't think that would be the case. At least I haven't run across any situation where something like that would have happened. With structured data markup, we do try to pull out things that we can understand a little bit better. But if you have things marked up incorrectly, or if the markup is kind of invalid-- maybe it's broken HTML, maybe the schema markup is from different templates that were copied together-- then that's something that shouldn't negatively affect your site. It doesn't really help your site to have incorrect markup like this, so if you're looking at it and you see that, I'd try to improve it. But it's not that you'd have any problems from having broken markup like that on your page.

MALE SPEAKER: And John, in case somebody has a penalty notification, what is the problem with using the schema then? What [INAUDIBLE] does Google take if he does not resolve this?

JOHN MUELLER: So if they have a manual action and they have broken schema markup, is that the question?

MALE SPEAKER: No, actually I want to [INAUDIBLE]. Somebody received a notification for improperly using schema. So he was trying to understand: if he does not resolve this, will his ranking be dropped, or how will Google take action?

JOHN MUELLER: So if we take manual action based on incorrect use of the markup for rich snippets-- in a case like that, we essentially just don't show the rich snippets for those pages. We don't change the rankings. It's not that they're demoted, that they're removed from search. It's really only that we don't show the stars, or the recipe images, or whatever.

MALE SPEAKER: OK.
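For context, the recipe markup discussed above is schema.org Recipe markup. A hedged microdata sketch of a legitimate use follows; the manual action applies when markup like this is placed on a page that contains no actual recipe:

    <div itemscope itemtype="http://schema.org/Recipe">
      <h1 itemprop="name">Classic Pancakes</h1>
      <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
        <span itemprop="ratingValue">4.5</span> stars from
        <span itemprop="reviewCount">120</span> reviews
      </div>
      <span itemprop="recipeIngredient">2 cups flour</span>
    </div>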

MALE SPEAKER: All right John, can I just ask a question? Because there was talk about the Penguin update happening before the end of the year. Is that still kind of happening, or do you know about that status?

JOHN MUELLER: Probably not-- probably not. We were talking with the engineers recently to try to figure out what the plan is, how we should plan our time around that. And it's looking like it'll probably be towards next year. So we try not to launch too many things towards the end of the year, just because there are fewer people here to kind of take action if anything were to go crazy. So that's kind of the buffer that we have there. And instead of pushing it towards the end of the year, where we might not be available in case something comes up, we'd like to have it during a time when more people are around to actually resolve things if there are problems.

MALE SPEAKER: Perfectly understand. That should all go to Ashton as an idea.

JOHN MUELLER: That sounds like a pretty good plan.

MALE SPEAKER: Hey John, regarding the fact that you're moving Penguin to real time-- is this because you're trying to appease webmasters that have to wait months or years between updates, or is it because the Penguin algorithm itself has evolved in such a way that it can handle link classification much better and decide on the fly whether a link is spammy or not? What is the exact reason, I guess?

JOHN MUELLER: I think it's a bit of both. It's a bit of both. It's not that awesome that it takes so long for these things to update. That's definitely one issue there. But on the other hand, if we can make it so that it doesn't require as much handholding from our side, then that's great as well.

MALE SPEAKER: Is it also a factor that you are maybe not putting as much value into links as a ranking factor anymore? So you're better off if, I don't know, a link gets misclassified or something like that?

JOHN MUELLER: I don't know. I wouldn't say it specifically like that. I don't know if we could say that we're putting less weight on links. Sometimes I wish I could just say that in Hangouts so that people would stop asking all these link building questions. But essentially, especially within a website, we do need those links to understand how the website is structured, how to find all of those pages within the website. So to some extent, some of this is something that we can't discard completely. But the algorithms are always evolving. And as we add new factors, we have to reconsider how we handle old factors, and try to find a new balance there. It's not that we can just say, well, if you put, I don't know, this special symbol on your pages, you'll get an extra bonus-- it's not like a fixed plus or minus on top of everything else. Across the whole search results page, we still have to kind of shuffle all of the factors that we use into a ranking, essentially. And if someone is searching for something, we try to show the most relevant results and bring them together in a way that makes sense for the user.

MALE SPEAKER: One last thing: are you using more machine learning algorithms in Penguin, or relying more on machine learning when it comes to Penguin?

JOHN MUELLER: I don't know specifically about Penguin. But we do do a lot with machine learning. I think that's a really fascinating area. It's something where we've worked with machine learning for quite some time now. Sometimes it's interesting in the sense that these machine learning algorithms learn something that maybe we wouldn't have come up with intuitively, and we try to figure out: how did the algorithms come up with this result? And sometimes that makes it hard for us to debug and diagnose what's actually happening there. So if our whole search results were built with one machine learning algorithm that learned the optimal result for every keyword, and every website, and every user, then if something were to go wrong there-- if someone said, well, it showed me something completely wrong-- diagnosing that would be really hard. So it's something where you kind of have to find a balance between the different types of algorithms, and make sure that you can reproduce what's actually happening there-- which signals we're picking up, and why we turned them into a specific ranking-- and at the same time use these advanced systems to kind of bring in new ideas and see if there are ways to improve our existing algorithms that we didn't think about before.

MALE SPEAKER: And are you planning for maybe a January release if it's not coming this year?

JOHN MUELLER: I am not going to say anything with regards to dates.

MALE SPEAKER: OK, fair enough.

JOHN MUELLER: I am pretty confident. But it's like always: if things are more than maybe a week out, then it's really, really hard to say what all could happen in that time. I'm not going to say anything. I'm going to hope for the best. And obviously we're in touch with the teams that are working on this, to try to make sure that it works in an optimal way, and also that we have something that we can tell you guys when things are ready.

MALE SPEAKER: OK, sounds good. Thanks.

JOHN MUELLER: All right, more questions here that were submitted. "We're an e-commerce company dealing with apparel, footwear, and fashion accessories. Now the company has taken the decision to dissolve the footwear and fashion accessories verticals due to some issues. How do we handle these pages?" So the technically correct way to handle these pages would be just to return a 404-- maybe create a special 404 page, depending on the type of content that used to be there, so that it's clear for the user what happened to these pages. That's essentially the best way to handle this situation in the general case. There might be some situations where you say, well, I work together with a partner, and they have an equivalent product, and maybe I could link to that equivalent product from my 404 page. Maybe I could even just redirect users directly to that equivalent product. That's something you can figure out on a case by case basis. With a 404, essentially you're just telling us these pages don't exist anymore. And that's fine by us. It's not that you need to limit the number of 404s on your site. If a lot of pages are gone at some point, that's life for us. We have to deal with that too. So a 404 would be the technically correct way to handle this. If you have options to redirect them on a per page basis to something equivalent, that might be an idea as well.

"Is there a way to handle multiregional targeting? For example, I have es.example.com and example.es, which I watch really compete with each other. Both have the same content with different translations. The first is for South America, the second one for Spain." You can use geotargeting for this to target specific countries. And between the different country versions, you can use hreflang markup as well. For situations where you have multiple countries for the same content, you can't use geotargeting, but you can use hreflang markup. So you can say, this specific page is for this set of countries-- it's Spanish in Mexico, Spanish in Argentina, Spanish in Chile-- and this other page is Spanish in Spain. And maybe you'll pick one of these versions and say, this is my generic Spanish version, this is maybe my x-default version that I should show to everyone who's not searching in Spanish. That's one thing you can do there. So with hreflang, you can assign multiple countries to the same content. With geotargeting itself, you have to pick one country, and it'll be targeted that way.
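A sketch of that multi-country annotation, using the asker's hostnames; the choice of x-default and the exact country set are illustrative:

    <!-- On https://es.example.com/ (Spanish for several Latin American countries, also the x-default) -->
    <link rel="alternate" hreflang="es-MX" href="https://es.example.com/">
    <link rel="alternate" hreflang="es-AR" href="https://es.example.com/">
    <link rel="alternate" hreflang="es-CL" href="https://es.example.com/">
    <link rel="alternate" hreflang="x-default" href="https://es.example.com/">
    <link rel="alternate" hreflang="es-ES" href="https://example.es/">
    <!-- https://example.es/ has to carry the same set of annotations in return. -->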

MALE SPEAKER: That's actually my question. And I'm using those hreflang tags. But even though I am using those, the South American pages show up in Spain on Google. So that's the problem. I am targeting Mexico, Colombia, and Peru on one side. And on the other side, I am targeting Spain. But even though I am using those hreflang tags, they are still competing with each other.

JOHN MUELLER: So possibly the hreflang markup there is not working as expected. Maybe there's something within the markup that's not being picked up properly. So in Search Console, there's a way to look at hreflang errors, which might show that maybe the wrong pages are being linked, maybe there is no return link found, maybe the markup isn't actually in the head but in something that looks like the head at the beginning-- where there's markup in between that kind of implicitly closes the head section of the page. What I would do there is maybe start a thread in the Help Forum, listing a query and the pages that you have specifically marked up for that query. The people there have a lot of practice in digging in and recognizing the common hreflang issues. So maybe that will help.

MALE SPEAKER: OK, thanks.

MATT PATTERSON: Could I jump in quickly and ask a tiny clarification on that, which is the no return error? Because it's kind of slightly hard to understand. Could you talk very quickly about the no return error?

JOHN MUELLER: So essentially, if you have two language versions-- maybe a Spanish version and an English version, or even a Spanish version for Spain and a Spanish version for Mexico-- you have the hreflang markup between those versions. You kind of say, this is the Spanish version for Mexico, this is the Spanish version for Spain. And the important part there is that we see it from both sides, so that this page points to that one, and that page itself points back to the other one and says, I confirm that I belong to this set of pages, and this is kind of the language and regional association between those sets of pages. The important part there is also that the pages that you have linked in the hreflang have to be the canonical versions that we pick for indexing. That is sometimes kind of tricky, in that maybe you will link to a parametrized version because your CMS kind of internally tracks parametrized versions of URLs, but we actually index maybe a clean text version of that URL. So you'll link to ID equals 512, and that page, ID 512, is actually not indexed like that, but indexed as new-shoes, for example. So that's something where the canonical helps us to kind of reinforce that we picked the right version. And the rel hreflang markup should be between those canonical versions and kind of confirm it from both sides.
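A minimal sketch of that two-sided confirmation, with hypothetical URLs. Note that the hreflang annotations point at the canonical URLs, not at a parametrized variant like ?id=512:

    <!-- On https://example.com/es-es/new-shoes -->
    <link rel="canonical" href="https://example.com/es-es/new-shoes">
    <link rel="alternate" hreflang="es-ES" href="https://example.com/es-es/new-shoes">
    <link rel="alternate" hreflang="es-MX" href="https://example.com/es-mx/new-shoes">

    <!-- On https://example.com/es-mx/new-shoes (the return link that completes the pair) -->
    <link rel="canonical" href="https://example.com/es-mx/new-shoes">
    <link rel="alternate" hreflang="es-MX" href="https://example.com/es-mx/new-shoes">
    <link rel="alternate" hreflang="es-ES" href="https://example.com/es-es/new-shoes">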

MALE SPEAKER: So another thing-- it's not a problem if there are several hreflang tags pointing to one page?

JOHN MUELLER: That's perfectly fine.

MALE SPEAKER: OK, thanks.

MATT PATTERSON: Could I ask another quick multiregional thing? So, the idea of duplicate content, and pages where essentially there is no difference between the pages except for the page navigation-- because it's a UGC site, which is the issue that we have. So our content is essentially identical between languages. But the links at the top of the page and the links at the bottom of the page, which are essentially the administrative ones like signup, homepage, about page-- those are translated. And dealing with rel canonical and multi-language versions of those pages, ideally how would you play that in terms of picking which one is canonical? We don't want Google to think we're duplicating content, but we do want them to recognize the different language versions.

JOHN MUELLER: I wouldn't worry about the duplicate content part, because that's something we essentially handle on our side. And for us it's mostly a technical issue. It's not something where we would say there's a spam problem because they have duplicate content within their website. We understand that this is essentially something that's a natural part of a lot of websites. So that's not something I'd worry about from a quality or from a spam point of view. But rather I'd try to figure out what makes sense for your site. Does it make sense to have these different versions indexed? Does it make sense to call this the English version if you know that the main content is actually in German, and it's just the navigation that's maybe in English? That's something that's kind of left to you, and you can choose however you want to do this. I think for some sites it makes sense to say this is the English version and the German version, even if it's just the boilerplate, the navigation, all of that that changes.

MATT PATTERSON: OK, thanks.

JOHN MUELLER: "When one searches for the CEO of a brand, the featured snippet shows the previous CEO's name. Is there any way to update this?" So if this is in the Knowledge Graph panel on the side, for example, there's a feedback link at the bottom of that panel where you can actually give us feedback and tell us this is wrong-- and maybe include a link to a reference to let the team know what should be shown there instead. If this is something that's featured within the main search results, kind of as a bigger snippet, then that's something that will be picked up automatically over time as we recrawl those pages and reindex that content. So that's not something that you'd need to let us know about. We'll pick that up automatically over time.

"I noticed some weird search results. If I search for pancake recipe in New Zealand, then I see a chicken salad recipe." I thought this was really fun, but it's probably not that expected. So I did send this on to the team to kind of take a look at, to see what's happening, where that's actually coming from. I don't know how people in New Zealand eat. Maybe they have pancakes with chicken salad, I don't know. But it seems a bit weird.

"I am administrating a multinational site that uses different ccTLDs for different countries. I've noticed some weird search results. For example, my UK site is displayed in Sweden for several keywords, above the Swedish version. Search Console is configured." This is another case where I would point more at the hreflang markup and really double check to make sure that it's actually set up correctly. Just using geotargeting won't necessarily solve this problem, because we also want to know that these pages are equivalent, so that we can show the better URL for that specific user. Otherwise we might try to rank them independently, and that's probably not what we're looking for here.

"We have a main e-commerce site, and also a number of smaller sites that each sell a specific category, which is also on our main site. We have follow links from the smaller sites going to our main site to help endorse it. Is this classed as unnatural linking?" Not necessarily. I wouldn't necessarily call this unnatural linking, but it sounds like it's kind of headed in the direction of doorway sites. So especially when you say you have a number of smaller sites which are essentially just selling a category of those pages, that seems a bit like you're headed in the direction of doorway sites, where you're essentially just creating specific sites for only one area of the general product catalog that you're offering. So that's something where you'd probably end up with more value if you were able to focus more on your main site and actually pull all of that content in and make a really, really strong primary site, rather than kind of diluting it across a number of different sites. One thing you could do there, if you wanted to keep those smaller sites more for marketing reasons, is set a rel canonical from those smaller sites' pages to the same products on your larger site, so that we can focus on the main site, show that version in the search results, and you still have those smaller sites available if you want to use them for print advertising, those kinds of things.

"It appears that Google has removed the ability to show search results from locations different than Google's autodetected location. For work, I adjust locations pretty regularly. And personally, Google also puts me in the wrong city. What's up with that?"
So I believe we did change this in the UI recently. In general, when we look at things that we have in our UIs, oftentimes we try to find ways to simplify them, to make them as easy as possible for users to actually use. And if we're pretty sure that we can figure out the location properly, then there's no reason to ask the user for the location again. So that's something where we try to simplify things and make sure that it works for the user. And sometimes that does result in removing these kinds of settings from search. With regards to being put into the wrong city, one thing I would do there is double check the Web Search Help Center. There is actually a form you can fill out where you can put in your IP address and let us know about the location that we're picking up and the location that you're actually in. And that's something that the team here can use to improve our understanding of where you actually are and show the appropriate search results for you in a case like that. One thing that might still work in cases like this is URL parameters. Sometimes I believe the same thing is also available in the Advanced Search settings, depending on whether you're looking for country or city level information. So those are also options that you can use there.

"A website features resume making software. We contain the software portion in a subdomain. Currently the subdomain is set to disallow all, and the content is set to noindex. I've read that we should allow all and set the content to noindex. Will this create new problems?" In general, with the disallow all, what will happen is we won't be able to crawl that content and won't know what's there. So we might show those URLs in the search results without having crawled them. And you'll usually see a snippet saying, Google isn't able to crawl this page because of robots.txt, but here's a URL that we found. If that's not something that you see, or if that's not something that you find problematic, you can definitely leave it like that. On the other hand, if you want to make sure that it's not shown at all in the search results, then allowing crawling and setting the content to noindex is an option-- with crawling disallowed, we can never see the noindex. Maybe doing something like server side authentication is an option there as well, so that when we crawl those pages, we can see that we're not allowed to actually look at the content because it's returning an authentication dialog. That's something that we'll pick up as well and drop from the search results. So it kind of depends on whether or not you're seeing a problem there. If you're not seeing a problem and you're just disallowing all crawling of this because it's not something that should be picked up, then that's fine from my point of view. You don't need to change that.
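A sketch of the two states John contrasts, assuming a hypothetical software.example.com subdomain:

    # Current state: robots.txt blocks crawling, so a noindex on the pages
    # is never seen, and bare URL listings can still appear in search.
    User-agent: *
    Disallow: /

    <!-- Alternative: allow crawling in robots.txt and serve this on each page,
         so the noindex can actually be read and the pages dropped. -->
    <meta name="robots" content="noindex">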

MALE SPEAKER: OK, thank you. I was the one who asked that question. I was wondering if I could ask another question actually.

JOHN MUELLER: Sure.

MALE SPEAKER: One of the first questions you answered was about recovering from a manual penalty. Your suggestion was, well, after you recover from the manual penalty, then of course your site is going to be set back a little bit; you should just move into the future and try to make your site as good as it can be. We had a website penalized in 2013, which I think is algorithmic. And to me an algorithmic penalty is 8 million times worse than a manual one from Google, because there's no way for us to know, or anybody we can contact, to find out what exactly we're supposed to do. We've recently remade the entire website. We've recoded the whole thing. We've killed off a million pages. We've been disavowing for years now, to no avail. Can you tell somebody to just manually penalize us so we can work with them?

JOHN MUELLER: I don't think that would really help in a case like that. I totally understand the general difficulty there, in that you're trying to do the right thing and you're just not sure which direction to go. One thing you might be able to do is go to one of the webmaster forums and just get feedback from other people who also have experience with these types of issues. And sometimes at least you'll be able to figure out: is it something that's within my website, like quality problems that I can work on, or is it something that's outside of my website, maybe problematic links that previous SEOs built up over time that I'm still kind of battling with? Being able to narrow it down to either stuff that you can affect directly, or stuff that's kind of outside of your control-- where you'd have to use the disavow tool, or maybe clean things up externally-- that sometimes helps. It sounds like some of that you've already done. I'd really try to get advice from the community there and see what kind of issues they've seen in the past and, looking at your site, what they might offer as tips as well.
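If problematic external links are part of the picture, the disavow tool John mentions takes a plain text file uploaded in Search Console. A sketch of the format, with hypothetical domains:

    # Links built by a previous SEO; the site owners did not respond to removal requests.
    domain:spammy-directory.example
    https://some-other-site.example/page-with-paid-link.html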

MALE SPEAKER: All right, thank you very much.

MALE SPEAKER: Could I just scoot another question in, John, just on manual penalties? Sorry, I didn't really understand that. I understand what it means literally. So is there someone inside of Google actually trawling the net who specifically finds some site that they don't like and then says, I want to penalize this? That's what a manual penalty sounds like in plain English. But is it just as simple as that?

JOHN MUELLER: It's not that simple in the sense that it's not some random person going through, browsing the internet, and saying I don't like this site. It's really from the web spam team, based on the policies that they have, based on the guidelines that we provide, they will go through things like the spam reports. They'll go through the search results in general, and sometimes on specific topics, and try to find situations where websites are doing something that goes against our policies in a way that's negatively affecting our search results. And those are the kinds of situations where we'd say well, ideally we'd have an algorithm that figures this out across all websites. But that takes a certain amount of time. And we need to be able to clean up the search results now. So they'll take manual action on that to prevent the site from negatively affecting the search results that users are seeing.

MALE SPEAKER: [INAUDIBLE] a whole lot of detectives up there.

JOHN MUELLER: Well, essentially we try to create the policies in a way that we think makes sense, in a way that's fair for all websites, that helps to keep the quality of the search results high. And the web spam team tries to enforce those policies in situations where we think it's kind of manually necessary.

MALE SPEAKER: Thank you.

JOHN MUELLER: All right. Wow, it looks like we're out of time. And I think someone else needs this room after me. So I'll need to close down now. We still have lots and lots of questions left though. So let me just copy those out so that I don't lose track of those. And maybe I'll be able to answer some of these in the Hangout notes afterwards as well. All right, thanks a lot for all the questions, for all the feedback, and the patience while my laptop was rebooting in between there. I wish you all a great weekend, and maybe I'll see you again in a lot of the future Hangouts.

MALE SPEAKER: Have a nice weekend, John. Bye.

MALE SPEAKER: Thanks, John.

MALE SPEAKER: Bye.

MALE SPEAKER: Thank you, too.

MALE SPEAKER: John, take care.

MALE SPEAKER: Thank you.