Reconsideration Requests
18/Dec/2018
 

Google+ Hangouts - Office Hours - 19 May 2015




Transcript Of The Office Hours Hangout


JOHN MUELLER: Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a webmaster trends analyst here at Google in Switzerland. And a part of what I do is talk with webmasters and publishers like you all. It seems I have some new faces here in the Hangout, which is fantastic. Do any of you who joined these for the first time have any questions that you'd like to have answered or anything that's been unclear that's been on your mind?

MALE SPEAKER 1: Hey, John, I've got a question.

JOHN MUELLER: OK.

MALE SPEAKER 1: We've got a blog. And we've got about 800 blog posts, and those are ranking quite well. And I'm 100% sure that those blog posts don't have any algorithmic or manual issues with Google. So what happened was, last weekend, suddenly 17 of those 800 blog posts disappeared. So the first thing I did was check if those blog posts are crawlable for Googlebot. So there's no no-follow, no-index tag. There's nothing in the robots tags. And I am quite confused because those blog posts were ranking quite well and driving quite a lot of traffic, and suddenly, they've just disappeared. So one of our developers mentioned the [? jQuery ?] comments, and one [? jQuery ?] comment-- they just forgot to close a comment in the [? jQuery ?] in the header section. So we were just asking, does that confuse Googlebot? Would they still index, so to say, the blog posts?

JOHN MUELLER: That shouldn't be the case. So essentially what you're saying is you kind of had some broken HTML on these pages, right? It's like the plug-in generated some HTML that wasn't well-formed, something around that. That shouldn't be a reason why it would remove a page from the index. So usually what we'd look for there is something like a no-index robots tag, or if it's maybe returning a 404, or maybe it's a 500 error, or something like that. Sometimes what we've seen is that the server returns this just to Googlebot for some weird reason. So maybe they were trying to do something else that was kind of special there, and somehow that broke. And now Googlebot is seeing an error or no-index, and normal users are seeing the full content. So that might be something that's happening there. I guess that's more of a technical issue. One thing I try to differentiate is whether or not this page is actually completely gone from our search results, from the index, or if it's just not ranking.

MALE SPEAKER 1: No, it's quite gone. So you cannot even find it with a site query. You cannot find it in Google's index, which is quite confusing.

JOHN MUELLER: So like a site query would say no results?

MALE SPEAKER 1: Yeah, no results.

JOHN MUELLER: OK, in a case like that, either the Web Spam team took action on it, which usually we send you an email through Webmaster Tools so you'd see that there. That's the first thing I'd double check there. The other thing that might be happening is especially if this happened from one day to the next, that's something where maybe there is a URL removal request that was submitted.

MALE SPEAKER 1: We double checked everything. So we don't have any URL removal requests or manual action messages. So it's kind of a mystery for us.

JOHN MUELLER: Yeah, so one thing you need to watch for with URL removal requests is that we remove things regardless of the URL variation. So if there are similar URLs, then we will remove those as well, which means that if you have the HTTP and HTTPS versions, www and non-www, and you have a site removal for one of those versions, then it will apply to all of those versions. So, for example, if you try to remove, let's say, the HTTPS version from the search results because you wanted to focus on the HTTP one, then the site removal request there would actually apply to both of those versions.

MALE SPEAKER 1: So we didn't actually do any removal requests. And as you say, if it's a manual action, you should actually receive an email. And there's no reason for it to be manually removed by Google, because these are just completely normal, unique blog posts.

JOHN MUELLER: One thing you could do is maybe send me some of those sample URLs, and I can take a look afterwards. You can send them to me on Google+, and I will take a look afterwards. I can't guarantee that there's something specific I can get back to you with. Maybe what can also happen is that there's some kind of a glitch on our side where, for whatever reason, these URLs dropped out briefly, and they'll drop back in as soon as we reprocess them. But I can double check on our side.

MALE SPEAKER 1: OK, that's great.
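
As a rough way to check the scenario John describes above-- the server returning an error or a noindex only to Googlebot-- a quick sketch along these lines can help. The URL and user-agent strings below are placeholders, and a server that keys off Googlebot's IP addresses rather than the user agent would still need a Fetch as Google check.

```python
# Minimal sketch: compare what a browser-style and a Googlebot-style request receive.
import requests

URL = "https://example.com/blog/missing-post"  # placeholder for one of the dropped posts

USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

for name, ua in USER_AGENTS.items():
    resp = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    body = resp.text.lower()
    # Check the three things mentioned in the answer: error status codes,
    # an X-Robots-Tag header, and a noindex robots meta tag in the HTML.
    meta_noindex = 'name="robots"' in body and "noindex" in body
    print(f"{name}: status={resp.status_code}, "
          f"x-robots-tag={resp.headers.get('X-Robots-Tag')}, "
          f"meta noindex={meta_noindex}")
```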

JOHN MUELLER: Sure. All right, more questions? What can we start off with?

MALE SPEAKER 2: Hey, John, can I start with one?

JOHN MUELLER: All right.

MALE SPEAKER 2: Thank you very much. A pretty short one. A product page has additional information structured in tabs, because otherwise we'd render the page too long and we'd lose page speed. Well, I know there has been a lot of talk about this method, like adding that info on the same page and transforming the tabs into expanding menus and so on. What I'm interested in is, what's your latest advice on getting that content indexed so that it adds value?

JOHN MUELLER: The latest advice is still the same as the previous advice, I'd say. So essentially, if it's critical information for your page, then make sure it's visible. If it's critical information that isn't visible now, maybe it makes sense to set up separate URLs for that information where that tab is open by default and the content is visible. If it's just additional information, then it's fine to keep it in a hidden tab like that.

MALE SPEAKER 2: OK, thank you.

JOHN MUELLER: All right, I think, Darcy, you had something as well?

DARCY: Yeah, I did leave a question in the chat. But I'll ask it anyway. It's specific to the automotive industry. I'm just wondering when it makes sense to combine multiple businesses into one website. So multiple businesses have websites. They're not exactly the same. But they want to have the Walmart-type structure where it's, here's the brand, and then your Toronto, Mississauga, whatever, your different city-based brands. When does that make sense? What needs to happen for that to make sense from an SEO standpoint?

JOHN MUELLER: Essentially that's up to you. So from my point of view, if you have more than a small number of sites that are essentially targeting something similar, I'd aim to combine those, because what will happen is you'll have a much stronger central site for all of this information than you would if you had to maintain these individual websites, which would probably have trouble ranking on their own. So if you have different locations or different niches of the same business that you're active in, then as soon as you go past maybe two or three sites, I'd really try to combine that into one website. And to combine that, you essentially just redirect the URLs to that central place and treat it as a site move. But there's no hard line on our side where we say, well, if you have four websites, you must do exactly this, or if you have websites that differ in 50% of the words on a page or some-- I don't know, some other metric, then you must do it like this. That's not something where we'd have a hard line. That's more something where you should think about it from a marketing point of view: how do you want to present your business? Is this essentially one business that you want to present to users as something really strong? Or do you really, really want to keep these parts of your business completely separate and make sure that users really identify them as something completely separate? So if you're targeting completely different audiences-- if you have something that's for consumers and something for other businesses-- maybe it makes sense to keep those completely separated.

DARCY: Yeah, my concern was just over the types of inventory and products. One site might not carry the same stock-- specific to automotive, the same cars-- as the other one. So that's why it's not quite the whole Walmart thing where you can say it's out of stock. They may never even carry that item. So that's where I wasn't sure. Do they need to share the same inventory for that to make sense?

JOHN MUELLER: That's essentially up to you. You can do it either way. I've seen sites that focus on specific locations, where you say, in this location, we have this set of inventory, and it's kind of structured that way. And other sites focus on the inventory and say, this is what we sell, and it's available in these and these locations. So that's not something where I'd say, from an SEO point of view, you need to do it like this or like that. It's really up to you how you want to present your content, your products, your locations.

DARCY: Excellent, thank you.

BARUCH LABUNSKI: John, can I ask you a quick question about server errors?

JOHN MUELLER: Sure.

BARUCH LABUNSKI: So suppose there are about three to four server errors for a site in a period of one year. Does that have any effect on the rankings or not? I know you were already starting to answer this. But I just wanted to know, because it sticks there for about a month, two months, until it disappears back to zero.

JOHN MUELLER: That generally wouldn't have any effect on the site unless those URLs were affected by that server error. So, of course, if an important URL on your site has a 404, then that will drop out of search, and that'll affect your website. On the other hand, if it's some random URL that actually never existed, and it's a 404, then that has absolutely no effect.

BARUCH LABUNSKI: OK, thanks.

DON HALBERT: John?

JOHN MUELLER: Hi.

DON HALBERT: A question-- did you get my message in Google+?

JOHN MUELLER: Possibly.

DON HALBERT: I sent it 15 minutes before it started.

JOHN MUELLER: Oh, then I probably didn't.

DON HALBERT: Do you remember-- OK, I don't know how to get you the website because I'm really, really nervous about putting it into the chat, because the last time I did that, I ended up with an attack, and we still haven't recovered from that. So how can we do this? Do you remember my website by chance? The one with the historic Penguin?

JOHN MUELLER: Historic Penguin. I don't know.

DON HALBERT: The vacations one? OK, whatever. I've been attacked enough times. I'll put it into the chat.

JOHN MUELLER: There's also a way you can send a direct message in the chat.

DON HALBERT: Oh, well, a little late. The cat's out of the bag.

JOHN MUELLER: Actually, the people here are nice.

DON HALBERT: All right, in a nutshell, the advice I've gotten from you is, A, it was a historic Penguin that originally hit it. I had a site audit done with the group here about a year and a bit ago. And for the most part, everything looked good. And some people were suggesting, hey, you need to start over with a new domain. But you were adamant that the domain was fine. Since then I pay close attention to what is told to Rob and what has been told to Gary Lee, because they seem to be in a very similar situation. As of late, we've actually gone down to a skeleton staff. And we lost eight more people because we're having a very hard time keeping the doors open. It's not your issue. My question is, I've done hreflang. I've done an entire translation of the website into Spanish. So if you go, you'll see an EN. You'll also see an ES. But now I haven't done a Fetch as Google since February or whatever, which is good progress for me because I was doing it weekly. But I'm desperate right now, John. I need to know-- I was anticipating a release of something that will at some point-- algorithms hitting your website. And I'm wondering, has there been any more talk [INAUDIBLE], because right now we've spent thousands translating it and applying the hreflang. And right now we have a sandbox website with a complete redesign of the site, because that template was actually based on a design from a template company off of one of the major theme sites. But they're out of Moldova, and I'm at my wit's end here. I don't know what to do. So we're actually sandboxing a completely different design, and maybe that'll help it. I don't know what's going on with that website. But Google hates us.

JOHN MUELLER: We try not to hate you. When did you put up the new design? It looks really nice. So from my point of view, it's not something where you're waiting on a Penguin refresh or anything like that. It's essentially just our normal algorithms, the way that they're looking at the site. And I think something like a redesign, from what I remember, it looked very different in the past. I could imagine that this is something that would have a significant effect on your website where we look at this, and we say, well, this looks really nice. This looks like a really high-quality trustworthy site that we should be showing more in search. So from that point of view, it's something where if you just recently put this redesign up, I could imagine that you would see some changes. No?

DON HALBERT: No that's-- the sandbox site is still in the sandbox. I have not even come close to finishing it. But before I invest another $10,000 in the redesign, I wanted you to look. That's the same design [INAUDIBLE] and an ES version. And the canonical is applied.

JOHN MUELLER: I'd probably have to take a look at the details there to see what's happening. But in general, it's not that there's something specific that we're waiting for for your website where we're saying, well, you need to do this, or you need to wait this algorithm out. It's essentially our normal algorithms, the way they'd look at the site to try to make sure that we're showing something that's really high quality that people actually want to see. And I probably would have to take a look at your newer site if that's not the current one that's live there.

DON HALBERT: Would you be so kind as to tell me whether or not I still have a historic Penguin on my back on this website?

JOHN MUELLER: You don't have anything where I would say you'd need to focus on links there. So that's something where if you've been focusing on links for Penguin or other web spam issues, that's not something where at the moment I'd say you have to worry about that.

DON HALBERT: Thank you very much. That's very helpful.

MALE SPEAKER 3: John, is it a good time for me to ask for any kind of update as well then? Since I haven't asked you for about six months, I've laid off for a while. And we have just finished in the last two weeks.

JOHN MUELLER: But can you copy and paste your URL into the chat or somewhere?

MALE SPEAKER 3: Well, this is the one that we're currently on. This was-- that was the old. We have just changed, as you said, the entire HTML. It's completely responsive. It's new. It looks similar. But now it's 100% different. The content and the products and everything is still the same. But this site is now different to the old one. So the old one is the X.

JOHN MUELLER: I don't see anything holding back your new site. So I think that's kind of evolving as it organically would be evolving. So it's not--

MALE SPEAKER 3: There's no hreflang. There's no 301s. There's no nothing now in terms of being officially connected to the old.

JOHN MUELLER: It looks like the traffic is going up as well, right?

MALE SPEAKER 3: Yeah, but that tends to happen whenever we do anything. So if we do the hreflang, it works for a week or two. If we do something else, it works for a week or two. If we 301, it works for a week or two. When we unwind the 301, that works for a week or two. And then it's not until six weeks down the line after any change that we actually see what the genuine effect is, once the whole thing's been spidered and it's flushed out and essentially been caught again.

JOHN MUELLER: From what I see, I think your site is headed in the right direction. And things are looking up there. It's obviously hard to compare to the old site. But the new site looks like it's as it normally should.

MALE SPEAKER 3: Right, and there's no connected penalty or connected--

JOHN MUELLER: Not that I see there, no.

MALE SPEAKER 3: --change that's going to hold that one back?

JOHN MUELLER: No.

MALE SPEAKER 3: All right, OK, I shall let everyone else get in before I pester you with something else.

JOHN MUELLER: All right. Let me go through some of the questions that were submitted. And we'll see how far we can get, and then go back to questions from all of you here in the Hangout.

"How do I deal with content that's similar for multiple products? For example, FAQ pages for different products where the only change is the title, description, and H1. Will having one page limit my ranking potential, or should I create multiple pages?" Essentially that's up to you. I'd try to focus on combining those pages as much as possible so that you have one really strong page rather than essentially a diluted set of pages that are attached to your site. So as much as possible, I'd try to do something like a rel canonical pointing at your preferred version, so that we can focus all of our signals on that canonical version and really show it as highly as possible in search. What will happen otherwise is we will index these separately. We'll crawl them separately. And we'll try to figure out which one of these is most relevant in search. We won't show all of them. We'll pick one of them and show it in search, because it's essentially very similar to the rest. And that might not be the one that you want us to show. And it might not be as strong as you want us to show it, because we essentially have to balance between all these different versions. So if you know that this content is essentially identical, use a rel canonical, point at the one that you want to have indexed, and we'll pick it up from there.

"We're under manual action for unnatural links. We eliminated 96% of all unnatural links and disavowed the remaining 4%. Google rejected our last request and provided us three links whose domains were already disavowed. What's up there?" I don't know. I'd have to take a look. So if you want, you can send me that URL, and I can take a look with the Web Spam team to see if there's anything specific that we can let you know about. It's really hard to say without taking a look at the details there. It might also be that the unnatural links you found are actually just a part of the links that we consider to be unnatural. So maybe it makes sense for us to give you some other example URLs that you can focus on. Maybe there's an issue on our side that we can pick up as well. So having that URL would really help.

"When optimizing for speed and using PageSpeed Insights, is there a number for mobile and desktop where we know we're good? It's hard to get 100% on both mobile and desktop. What are the ranking effects there?" Essentially, we mostly look at mobile-friendliness, which is part of that-- what you can test with the mobile-friendly test. And past that, the numbers that you see there are essentially more of a guidance for you with regards to what you could be focusing on, where you should be focusing your energies, where we or our systems see some issues that you might be able to resolve. It's not the case that you need to get 100%. For some sites, it doesn't even make sense to get 100 there, because you might have a perfectly fast site, but it's doing something that our systems are picking up and saying, well, theoretically this could be a bad practice. Maybe you should consider doing it in a different way. But if you really know that you're doing it the right way, or in a way that's more efficient for your website, then maybe that's fine. So I'd focus on the mobile-friendly test primarily, which tells you if it's mobile-friendly or not.
And the rest is essentially just working to make sure that you have a really high-quality, fast, and efficient website.

BARUCH LABUNSKI: But passing the 80s is a good idea, right? Because it was--

JOHN MUELLER: As high as you can get, yeah. It's always a good idea to aim high. With these kinds of tests, I wouldn't try to be a minimalist. If you're going to work on this, there are a lot of things that give you a big jump in those metrics without actually requiring a lot of work. So maybe you can just copy and paste some caching code into your .htaccess file, and it'll automatically take care of a lot of server-side caching for you with maybe 10 minutes of work. So that's something where I wouldn't aim at a specific number, but rather figure out what works well for your site, where you can go, how far you can take it with a reasonable amount of work.

BARUCH LABUNSKI: OK, thanks.
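
For reference, the kind of copy-and-paste caching rules John is alluding to often look something like the hedged .htaccess sketch below. It assumes an Apache server with mod_expires and mod_deflate enabled, and the cache lifetimes are arbitrary examples.

```apache
# Browser caching headers for static assets (requires mod_expires)
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType image/png  "access plus 1 month"
  ExpiresByType text/css   "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
</IfModule>

# Compress text responses before sending them (requires mod_deflate)
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```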

JOHN MUELLER: It's also worth mentioning that PageSpeed Insights has an API. So if you want to check your client sites, for example, you could probably set up a small script that actually goes through all of those sites and tests them for you, so that you know which clients you should be focusing on more, and where you could see, well, I put them on this cheap server, and now the test says that the website is really slow; maybe I should talk to them about moving to a better server or a different host or whatever.

"If from one domain there's only a single link from a full article, but Googlebot finds this link on a couple of subpages from the original article-- on tag subpages, for example-- does the algorithm count this as one link only in the calculations?" So in Webmaster Tools, we would show that as multiple links, and we would probably give you one of those links as a sample in Webmaster Tools. But we count all of the ones that we actually find. The thing to keep in mind there is that just because it's counted as multiple links doesn't mean that it's automatically seen as higher quality or higher value. We'll try to figure out how we need to evaluate those links individually. It's not that we would say, well, this is three links from this website, therefore it's three times as strong.

"Are you working on a Panda update or a data refresh? It's been a long time. Everyone's frustrated." We are working on updates there. I don't have any time frames at the moment. But I know the team is working on that. I know it's frustrating if you've worked a lot on your website to actually clean up these issues. The same applies to Penguin as well, where maybe you've cleaned up a lot of web spam issues, and you're just waiting for things to open up again. And that's something we're definitely working on, to update that again and to make it a little bit faster.
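
Picking up the PageSpeed Insights API idea John mentions at the start of that answer, here is a hedged sketch of such a batch-checking script. The v5 runPagespeed endpoint, the API key, and the client list are assumptions on my part-- the API and its response format have changed over the years-- so treat the score extraction as illustrative only.

```python
# Sketch: run a list of client sites through the PageSpeed Insights API.
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
API_KEY = "YOUR_API_KEY"          # placeholder
CLIENT_SITES = [                  # hypothetical client list
    "https://client-one.example.com/",
    "https://client-two.example.com/",
]

for site in CLIENT_SITES:
    params = {"url": site, "strategy": "mobile", "key": API_KEY}
    data = requests.get(API, params=params, timeout=60).json()
    # In the v5 response, the performance score is a 0-1 value under lighthouseResult.
    score = data["lighthouseResult"]["categories"]["performance"]["score"]
    print(f"{site}: mobile performance score {score * 100:.0f}")
```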

BARUCH LABUNSKI: So when that update happens, it's going to be manual, like everybody is saying, right? I mean, everybody knows that. It's going to be a manual update, not an algorithm update. But at the same time, everybody's thinking it's running with all the other stuff.

JOHN MUELLER: Yeah, I think that was a bit confusing from our side. So it's definitely an algorithm that runs all the time. But we have to update the data for this specific case. So that's something where it's kind of a mixture of both. And I think we probably explained that in a little bit of a confusing way, and people picked that up and thought that, oh, it's completely like this, or it's completely like that, when actually it's a mixture. And we just want to apologize for-- I don't know-- confusing people about something.

"How can I filter queries by multiple words or phrases in Webmaster Tools' Search Analytics? Also, are keywords from the secure Google search included in those reports?" So, first of all, the easy question: yes, keywords from the secure Google search are included in those search query reports. That also includes things that Analytics wouldn't be showing. Where you'd see "not provided" in Google Analytics, that would also be included in the Search Analytics reports. Filtering queries by multiple phrases, I think, is something you'd have to do individually. So if you have multiple phrases, you'd have to test them individually, and maybe download the data as a CSV file or to Google Spreadsheets and combine it on your side. It's not something where you would be able to search for this word or this word or this word and just get one aggregated table, at least not at the moment.

FEMALE SPEAKER: Gotcha, thanks, John. I was actually reading one of the help texts on your site, and it said it was plural so that confused me. I thought maybe there was just something I was missing. But I appreciate the clarification.

JOHN MUELLER: Glad to help there.

DON HALBERT: John, on that same note [INAUDIBLE] or I don't think it was overly top secret. You just said it that in the search analytics [INAUDIBLE] consideration local [INAUDIBLE].

JOHN MUELLER: I can hardly hear you, Don, you keep breaking up. Or maybe you can type it in the chat.

DON HALBERT: Can you hear me?

JOHN MUELLER: Every now and then I hear individual words.

DON HALBERT: It should be working now.

JOHN MUELLER: OK.

DON HALBERT: OK, in Search Analytics, if it includes local [INAUDIBLE] page one-- page one for 20-- the term "exact match" receives 22,000 searches per day. So if you're on page one, you would expect that if Google [INAUDIBLE] zero. And when I talked to you before, you said that it is true that if a site is punished by an algorithm, it can still rank in the local results. And I've seen that. It pops up into the local results, which screws up my average. But if you go over the last 90 days, it still shows an average of page one. And I get almost no traffic from it. So if that data is accurate, what's happening?

JOHN MUELLER: So probably what's happening is we're not showing your site often. But when we are showing it, we're showing it on page one. And people maybe aren't clicking through to page three, or four, or five, or wherever your site is showing up. So we're not counting those theoretical views in the average. Essentially, we're only counting the ones where we actually show your site in search. And if those are just individual users who maybe are local, or have your site ranking number one for other reasons-- for personalization, for example-- then we count those ones, because page one is where we're seeing your site. And we wouldn't count the places where theoretically your site would also be ranking, because nobody's really looking that far into the search results to actually find those. So from that point of view, it's tricky, because we're showing you what we actually showed in search, which I think is the most correct thing, but it doesn't mean that this is where your site is always ranking. So what I'd look at there is the number of impressions, and use that as a kind of qualifier for the actual rankings that you're seeing. So if you see that the number of impressions is really, really low and the ranking is really, really high, then chances are we're just occasionally showing your site-- showing it in the higher search results, but essentially not a lot of people are seeing that. We had that, I believe, on our blog a couple weeks back, where we would rank for the query 'Google.' And we would be showing up for the query 'Google' with the blog, which obviously is kind of specific and not something that people would generally want to look at when they're searching for 'Google.' And that kind of made those Search Analytics numbers look really weird, in that we had to interpret what this actually means. We see a ton of impressions. Nobody is clicking on our blog anymore. The clickthrough rate overall is really miserable. But when you look at the details, you see, well, this is because our site was ranking where it shouldn't be ranking, or accidentally ranking in some way that skews the numbers there. So you have to work on interpreting those numbers as well, instead of just taking them as they show up.

DON HALBERT: OK, thank you very much.

MIHAI APERGHIS: John, I think Don was also asking if like a website is showing up in local results, but not in the organic search because it has been penalized or whatever, would that still show in Webmaster Tools as being on the first page because local results are just on the first page?

JOHN MUELLER: As far as I know, they would, yes. I would have to double check. But as far as I know, I think the last time I checked with the team, they definitely are included there.

MIHAI APERGHIS: That's interesting. So that means that for certain queries, page one might mean 15, 17 results because you have the seven local and the 10.

JOHN MUELLER: It's possible, yeah.

MIHAI APERGHIS: And what if a website appears both in local and in organic?

JOHN MUELLER: We take the-- what is it? The average top position. So if you appear once-- like if your website appears once with one URL at number two and another time with a different URL at number seven or something like that, we take the top URL and average that across the different queries.

MIHAI APERGHIS: What if it's the main page and you appear with the main page in organic and with the main page in local?

JOHN MUELLER: We take the top ranking one.

MIHAI APERGHIS: OK. That's interesting. Have you always done that? I don't think before Search Analytics was launched you were counting the local.

JOHN MUELLER: We were doing it differently before. But I think we changed that, not for Search Analytics, but sometime before that, where previously we would take the average position across the search results and use that as the ranking number, and now we take the top one. The main thing that changed with Search Analytics is on the impression side, where previously, if you were ranking with two URLs on one search results page, we would count that as two impressions. And now, with Search Analytics, we count that as one impression, because your site is showing essentially in one search query. And that might be multiple URLs, but it's one search query, so one impression.
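
To make the counting John describes concrete, here is a small illustrative sketch with invented numbers: if one query shows the site at positions 2 and 7 and another query shows it at position 4, only the top position per query feeds the average, and each results page counts as a single impression.

```python
# Illustrative only: per-query top position averaged across queries,
# and one impression per query regardless of how many URLs appear.
appearances = {
    "blue widgets": [2, 7],   # two URLs on the same results page
    "widget store": [4],
}

impressions = len(appearances)  # 2 impressions, not 3
avg_position = sum(min(positions) for positions in appearances.values()) / len(appearances)
print(impressions, avg_position)  # 2 3.0
```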

MIHAI APERGHIS: And is there any possibility that you would be able to add a local filter sometime in the future just to see local results?

JOHN MUELLER: There are lots of possibilities, sure. I can definitely pass that on to our team to see what we can do there. I think at the moment they have some other things they are working on. So I'm sure you'll see some interesting things coming out in the near future. We're also working on an API for the Search Analytics feature. So if any of you are interested in giving that a spin-- signing an NDA with us and trying things out with the API-- feel free to contact me as well. I see someone waving.

BARUCH LABUNSKI: How big is the NDA?

MIHAI APERGHIS: I worked on using Google Apps Script to kind of parse the data automatically using all [INAUDIBLE]. So using the API will be pretty interesting.

JOHN MUELLER: OK, yeah, send me a quick note, and I can put you on the list. I don't know how soon that will be ready for testing. But I know they're feverishly working on it to make that more available.

MIHAI APERGHIS: Cool.

JOHN MUELLER: "We recently got hit with your so-called Phantom II update. Our traffic went down. We would like to ask for some help and advice on why our site got hit. I started asking other webmasters in the forum. It didn't bring much help. Our site is this URL." So essentially, this is something where we're not-- I don't know-- calling this anything specific. This is essentially just a normal algorithm update, as we make them all the time. And sometimes this is something that affects more sites. Sometimes it's something that affects fewer sites. But essentially, we're working on trying to increase the relevance and the quality of the search results, and that's essentially just a normal update that was happening there, nothing really specific. So if you're seeing changes within your site's traffic, within the impressions that are coming from Search, I think that's something where you can work on your website in general. For most cases, it's not something where there's any technical issue that you need to focus on. You would see that in Webmaster Tools, obviously. But it's not the case that there's this one line of HTML where you need to swap out these two letters and then suddenly it'll rank much higher again. We do these algorithm updates all the time. And for some sites, it goes up. For other sites, it goes down. Our general goal is to make sure that the quality of the search results is really as high as possible. So if you would like to be more visible in these kinds of updates, focusing on your site and making it really as good as it could possibly be is always a good strategy. I realize that's not an easy answer, and not something where you can just go back and say, OK, I will increase the quality of my website by 10% by Friday and hope that'll change things. It's more something where you have to take the long-term approach, maybe step back a little bit, take a little bit of distance to look at your site, and figure out what you could be doing in general to increase the quality overall. Maybe there are things you could do with your users-- chat with them to figure out where maybe they're getting stuck, what you could be doing in general to really increase the quality of your website.

"Does there exist another algorithm similar to Panda or Penguin which now works all the time? If yes, is there any possibility to recover before the big refresh?" So we have lots and lots of algorithms that are running all the time. And some of them run frequently. Some of them run less frequently. So I guess the answer is yes, there are lots of algorithms that are similar to the existing ones that are known. And, yes, of course, you can recover from algorithms when they run again. So there's nothing exotic, essentially, that I can announce to you and say, well, we have a new animal on our farm that we'd like to introduce at the moment.

"Having added the rel canonical tag to some potential duplicate pages that are now pointing at a primary URL, I've noticed that these two pages are still indexed. How long should I wait before these pages are de-indexed? Will they ever get de-indexed?" The rel canonical tag is something that's on the page itself, which means we have to crawl and index those pages to actually process it. So in a sense, in order for us to keep using that rel canonical tag, we have to have that page in our index. We have to know about that page. We have to crawl it from time to time to see what's actually on that page.
So if you do something like a site query, it's very possible that you'll still continue to see these pages there, even though we're essentially passing all the signals that go to these pages to your preferred canonical. So this is something where I wouldn't focus on things like a site query or blindly focus on these individual URLs, but instead assume that when these pages are crawled and processed for indexing and have a rel canonical, those signals actually do get forwarded, even if we still index those pages individually. So I wouldn't assume that they'll drop out of the index completely. It's not something that you artificially need to push out of the index by putting a no-index on it or something like that. They essentially won't show up in normal search results because we have passed everything to your preferred canonical. But they might still show up in something artificial like a site query.
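
As a minimal illustration of the rel canonical setup discussed here and in the earlier FAQ-page question, the markup usually looks something like this; the URLs are hypothetical examples, not from the question.

```html
<!-- On the near-duplicate page, e.g. /products/widget-b/faq -->
<head>
  <title>Widget B FAQ</title>
  <!-- Point to the preferred version so the signals are consolidated there -->
  <link rel="canonical" href="https://www.example.com/products/widget-a/faq">
</head>
```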

MALE SPEAKER 4: John, I have a quick question about disavowing.

JOHN MUELLER: Sure.

MALE SPEAKER 4: We've been hit, as you know, multiple times by negative SEO link attacks. Do we need to ever disavow a no-follow link?

JOHN MUELLER: No.

MALE SPEAKER 4: So then, in other words, no-follow links cannot hurt us or any site in any way?

JOHN MUELLER: They don't pass any page rank. They don't pass any signals there. So you can essentially ignore those completely.

MALE SPEAKER 4: Easy answer. Thank you.

JOHN MUELLER: "How can we help the algorithm to link brand mentions with the domain?" I think this goes back to the question of, if someone randomly mentions your brand in a text somewhere, is that something we'd pick up as a link? And the answer there is no, we don't pick that up as a link. We don't pass any kind of page rank to a website just because someone mentions your brand. If you have a URL, of course, that's something that people can link to if they want to do that. But if someone has just mentioned your brand, that's not something we'd pick up on as a link. I imagine there might be indirect effects, where if people are talking about your brand, then that's something that might bring traffic indirectly to your website, which indirectly could bring more links to your website. But it's not the case that we would use that as a primary ranking factor and say, people are talking about this brand, therefore it must be a fantastic brand, so rank it number one.

MIHAI APERGHIS: By the way, John, do you still use that maybe to understand more about that specific website? Because you associate the brand with a certain website-- you see the brand mentioned together with a certain topic, and maybe you use it to understand that, OK, this brand-- or this website-- is about this or that? Maybe, OK.

JOHN MUELLER: I could imagine that maybe there are some algorithms that do something exotic like that. But I don't think that's anything where you would see any kind of visible change in the search results. It might help us to understand things a little bit more. But you're talking about this tiny little thing, where we can't really say this is a primary ranking factor because it's such a vague element. It's like, what do they mean by mentioning your brand? Are they saying that this is a terrible website? Are they saying this is a great website? How should we take that into account? It's really tricky to kind of--

BARUCH LABUNSKI: I was listening to Mihai's question. Are you guys taking anything from what Yandex has done with their experimenting, removing the links and bringing the links back? I think we had a really-- I don't know if you remember when [? Spock ?] was here at that Hangout. Yeah, so regarding the links, are you taking that as a learning experience, what they've done? I mean, they brought the links back because the results were not that great. Does that help you guys?

JOHN MUELLER: I think they just have a different search engine. So it's not something where we could say, well, what they did in their search results is something that we need to do or need to learn from. They're very different search engines. It's really hard for us to take that as some kind of useful feedback for our algorithms, for our engineers, where essentially what you're saying is, some other company tried this specific setup, and it didn't work for them, so they went back to a different one. Does that mean no company should try that specific setup? Probably not. I mean, maybe there are ways that it can be made to work. Maybe there are different configurations where it does make sense. I imagine if it didn't make sense at all for them, they wouldn't have tried it out for such a long time. So, obviously, some part had some positive impact there. But I think you can assume that our engineers on the Search Quality team are not all playing pool. They are actually trying new stuff out all the time and working on improving the algorithms. So some of that is probably around links, and trying to understand links better, and thinking about ways we can pick that up without it easily being gamed, or without causing any problems if someone does game it, accidentally or on purpose. I would totally expect changes to keep happening. Otherwise, if I were the only person working here at this company, that would be awkward, I guess.

BARUCH LABUNSKI: But it was a good learning experience, I guess, for anyone-- that without links it cannot work, right? I mean, to really deliver the 100%.

JOHN MUELLER: I don't know if you could say that. It's really hard to say. I don't know what specifically they changed and what specifically they were looking at. From what I heard, it was just a part of the search results. It wasn't even the case that they removed it from all of the search results. So I don't know. It's always interesting to see what other people are doing there.

MALE SPEAKER 2: John, can I ask you another question?

JOHN MUELLER: Sure.

MALE SPEAKER 2: We have an on-page JavaScript variable starting with "url:" followed by an HTTP address and so on. Google finds that specific variable within the on-page JavaScript and tries to use the GET method to crawl it, which returns a lot of server errors, as we use the parse method to render it. How can we block specific JavaScript syntax from being crawled by Google, keeping in mind that these JavaScript snippets are not separate resource files that could be blocked through robots.txt, but are on-page elements?

JOHN MUELLER: You can't. So essentially, we'll try to crawl these URLs if we find them. But if we see errors or any problems from those URLs, that's completely fine for us. So in general, that's not something you need to worry about-- that our bot runs across a bunch of things in JavaScript that look like URLs. We'll try them out. And if they don't work, that's completely fine. That's not something you need to block. That's not something you need to obfuscate in the JavaScript. If those specific URLs are problematic in that they cause problems on your server, maybe changing the JavaScript so that we don't recognize it as a URL makes sense. Maybe using robots.txt makes sense. But essentially, if we run across any random URL, try it, and see an error, that's not anything you'd need to worry about.

MALE SPEAKER 2: OK. The only thing we're worried about is it eating up our resources, not the errors, because Google doesn't show them in Webmaster Tools. But we have a bunch of them, because on each page, Google tries to do the same. So it's eating up our resources. So we will try to rewrite that syntax.

JOHN MUELLER: That's something that often helps, because we try to look for these links in various ways. We probably have these very simple link extractors that essentially look for anything that looks like a URL and try to crawl that. And that's probably what we're picking up on there. And the more complicated ones, where we render the JavaScript and try to see what the content is and which links are actually being shown to the user-- that's probably working out well, because it's leading to links that are actually working. So if you essentially just obfuscate that link a little bit in the JavaScript, then chances are we'll pick up something maybe slightly different and try to crawl that. But if it's something that leads to an obvious 404 and doesn't cause problems on your side, maybe that's preferable for you.

MALE SPEAKER 2: OK, thank you very much.
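
A hedged sketch of the kind of rewrite being discussed: assembling the URL-like string at runtime so that a simple link extractor scanning the raw page source never sees a complete URL. The variable names are hypothetical, and this only addresses the plain text-scanning case John describes.

```javascript
// Before: a complete URL literal that a simple link extractor can pick up.
// var endpoint = "url: https://api.example.com/render?id=123";

// After: build the string from parts at runtime instead of embedding it whole.
var endpointParts = ["https:", "", "api.example.com", "render"];
var endpoint = endpointParts.join("/") + "?id=123";
// endpoint === "https://api.example.com/render?id=123"
```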

JOHN MUELLER: All right, let's grab some more of the questions that were submitted, since people took time to send those in.

"Some of our mobile pages have an accordion menu, and the content appears when you press the accordion menu. Is that an OK practice? Or is that similar to hidden text on the desktop?" That's essentially fine. A lot of sites use this as a way to make their content mobile-friendly or make the menu mobile-friendly. That's definitely not something I'd worry about.

"This may have been answered already. But is it possible to set up filters to only show user sessions and page views from Google image results?" So sessions are something you'd probably see more in Analytics. In Webmaster Tools, you can limit it to image results if you want to look at those specifically. And you should be able to see the queries and the pages from image search results separately in Webmaster Tools, in the Search Analytics feature at least.

"I have added a new client to my Webmaster Tools page and immediately got flagged for security issues. I've gone through the server, and everything looks good. I clicked Security Issues, and it all looks good. Now, how can I remove the flag?" If this is something that's visible in Webmaster Tools, then generally that would be either malware or your site got hacked in a specific way. And if you cleaned that up, then you can generally send a review request in Webmaster Tools. But there are other issues that we essentially just pick up when we recrawl the pages and we see, well, this hack has been removed now, so we can essentially remove that flag. And sometimes there's just a lag between what we remove in search-- where we say, well, this is no longer hacked, this is normal-- and what we show in Webmaster Tools, where maybe there's a delay of a day or so at most. So maybe you're just seeing that delay, and essentially everything is OK now. If you're seeing this for a longer time, where Webmaster Tools is flagging something as a security issue that you can't actually request a review for, then maybe that'd be something you could post about in the forum. And we can pick it up from there and see what is actually happening. Maybe there's something else on your site that's still being flagged that you could work on cleaning up as well.

"Will Google take manual or algorithmic action for links acquired only for brand or brandless keywords? Assume all branded links are followed links, and the links come from sites whose nature of content is similar to the main site." We try to recognize unnatural links regardless of the anchor text. So just because it's branded anchor text, that doesn't necessarily make the link natural, right? So we try to apply a variety of different methods to figure out which ones are natural and which ones are unnatural links.

"Does Googlebot understand and index web pages built with Polymer without any issues?" I'd say "without any issues" is probably exaggerating. But we are pretty good at crawling and indexing JavaScript content. And if you have your pages set up in a way that there are URLs that we can pick up for indexing and the content is crawlable-- meaning the JavaScript is crawlable and the server responses are crawlable-- then we do a pretty good job of being able to index JavaScript-based sites. There was recently a blog post, I think on "Search Engine Land," that reviewed a bunch of our JavaScript indexing capabilities.
And it actually ended up looking pretty good. I imagine it'll continue to get better. So if you're using Polymer or other JavaScript frameworks to build sites, that's something where you can test it out on your site to see how far we can get. And if you're still seeing issues where we're not picking up everything, I'd love to get that feedback in the Help Forums. And what you can also do, of course, is use the Ajax crawling scheme to make those pages crawlable by default, if you wanted to do that.

"Since March 25th, our search traffic has dropped by 70%. We have lots of pages indexed in Google. And I'm sure users like them. We tried several things to solve this without success. Can you tell us what's wrong?" I probably would have to take a look at this in detail. My recommendation in general for something like this is to also get help in the Help Forum. So make sure that you're covering all the technical issues by double checking with peers in the Help Forums. But also make sure that the quality of your site is really the highest it can possibly be. And get feedback from peers, even if it's harsh feedback, even if it's something you don't necessarily want to hear, because sometimes that's really good feedback. So that's the direction I'd head there.

"Here's something specific to a set of sites. When searching for a site in Google IE, we see a lot of our UK page titles. The company's called Le Boat in the UK and Emerald Star in Ireland. Why does Google use UK titles for the IE domain? The IE page titles should just say Emerald Star." In a case like that, where you're seeing a mix-up between different versions, I'd definitely take a look at the hreflang markup, so that you can really tell us which version of your page is relevant in which locales and let us know that this is different content, not exactly the same. So if we can really crawl these pages and index them separately, we'll try to show the right titles and the right pages. And with the hreflang markup, we can get that better for individual countries as well. So that's something I'd definitely take a look at there.

"Is there any difference, ranking-wise, if we use capital letters in the URLs? Which is better, small letters or capital letters?" You can use whatever you want. Keep in mind that most of the internet is case sensitive. So we're also case sensitive on our side with Googlebot. So if you pick an upper or lowercase version for a URL, make sure you use it consistently within your website, and don't link once with a capitalized version and once with a lowercase version. Really pick one of those versions and stay with it, so that we can consistently crawl and index the same URL over and over.

All right, just a few minutes left, so let's switch back to you all.
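
For the UK/Ireland case above, the hreflang markup John recommends usually looks something like this minimal sketch-- the URLs are hypothetical stand-ins, and each page needs to carry the full, reciprocal set of annotations (itself plus its alternates).

```html
<!-- On the UK page -->
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/boating-holidays/">
<link rel="alternate" hreflang="en-ie" href="https://www.example.ie/boating-holidays/">

<!-- The Irish page carries the same two link elements, so the annotation is reciprocal. -->
```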

MIHAI APERGHIS: I'm going to go ahead and ask a website-specific question. I talked, I think two Hangouts ago, about a client of mine who has multiple websites regarding ISO standards. They offer implementation and training for any specific ISO, or at least some of them. And they have a website for each ISO standard. And the idea is to combine them into a single brand, because they've already acquired quite a few websites. So it's now time to best combine them into a single site. The problem is that, while we can do the redirect, the new website will have each ISO as a subdirectory. And I know that Google Webmaster Tools doesn't offer an option for a change of address to a subdirectory specifically. So how do you think we should best deal with that?

JOHN MUELLER: Just redirect.

MIHAI APERGHIS: Because in the example in the Help page for the change of address, they only give the example-- you cannot move it from example.com to example.com/directory. But they don't mention example.com to another example.com/directory. So I am guessing that's not possible either.

JOHN MUELLER: Yeah, I don't think that's possible [INAUDIBLE]. I would use page-by-page redirects for a case like that.

MIHAI APERGHIS: And just let Google figure everything out.

JOHN MUELLER: Yeah.

MIHAI APERGHIS: OK, so Webmaster Tools is out of the question when it comes to a subdirectory.

JOHN MUELLER: Especially when you're combining sites, that's not something where Webmaster Tools would really help. It only really makes sense, from our point of view, when you're moving everything from one site to another site and essentially replacing the destination site with a new one. So in those cases, the Webmaster Tools setting helps. But for combining things, it's not really right.

MIHAI APERGHIS: OK, and should we add subdirectories as new entities in Webmaster Tools for each?

JOHN MUELLER: If you want to, sure. That's up to you. What you'll see is that things like Search Analytics are separated then, which might make sense for you.

MIHAI APERGHIS: OK, yeah, that would. OK, thanks.

JOHN MUELLER: Sure.
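
A hedged sketch of the page-by-page redirects John suggests for folding a standalone domain into a subdirectory of the combined site. It would sit on the old domain; the domain names and the iso-9001 directory are hypothetical, it assumes Apache with mod_rewrite, and the same mapping could just as well be done in nginx or at the application level.

```apache
# .htaccess on the old standalone domain (requires mod_rewrite):
# permanently redirect every old URL to the matching path under the
# new site's subdirectory, preserving the rest of the path.
RewriteEngine On
RewriteRule ^(.*)$ https://www.new-site.example.com/iso-9001/$1 [R=301,L]
```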

MALE SPEAKER 2: John, another old question. How much weight, if any, does Google put on HTML elements which help visually impaired visitors navigate the website-- like, let's say, link title attributes? I have posted an example in the chat. Maybe you can take a look to see exactly what I mean. I'm interested in whether Google gives any weight to this.

JOHN MUELLER: I don't think we use that at all. I think at the moment we use the Alt text. But we don't use the link title.

MALE SPEAKER 2: I understand. Are there any others-- ranking-wise, anything that helps visually impaired visitors-- or not?

JOHN MUELLER: Well, text on the page? I mean, if it's-- yeah, if it's text on the page, then that's something that works well for screen readers, and that's something we can definitely pick up on. If it's just a link attribute like that, I don't think we'd pick up on it, because if the content is really only in there, then for the most part, users who go to that page won't see it by default. It'll just be visible if you hover over it or if you have a screen reader that reads it out. So if it's text on the page, we can definitely use that for everything.

MALE SPEAKER 2: I understand. OK, thank you.
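
A small illustration of the distinction John draws-- alt text on images is text Google picks up, while a link's title attribute generally isn't used. The markup is a hypothetical example.

```html
<!-- Alt text: read by screen readers and usable by Google for the image. -->
<img src="/img/lake-geneva.jpg" alt="Sailing boats on Lake Geneva at sunset">

<!-- Link title attribute: shown on hover and read by some assistive tools,
     but per the answer above not something Google uses. -->
<a href="/tours/lake-geneva" title="Full-day Lake Geneva sailing tour">Lake Geneva tours</a>
```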

MALE SPEAKER 3: John, can I ask one followup to ours again?

JOHN MUELLER: All right.

MALE SPEAKER 3: If we-- now that we've safely moved and changed the HTML and done everything, is there any risk, based on our previous situation, in us starting to contact any sites that linked to the old one and asking them to link to the new site? Or are the links the thing that might cause an issue?

JOHN MUELLER: I think that would be fine, yeah.

MALE SPEAKER 3: OK, because we've got a load of old, very high-ranking ones. I mean, obviously, a load of the crap we'll just ignore. It will be a natural disavow, effectively. So that's fine.

JOHN MUELLER: That sounds like a good idea, yeah.

MALE SPEAKER 3: OK, thank you.

JOHN MUELLER: All right, with that, let's take a break here. I'd just like to point out that the Hangout on Friday will be with someone from the Webmaster team at Google who has worked on a lot of the responsive sites here. So if you have any questions about responsive web design, or if you want to talk about responsive web design, be sure to join that Hangout and submit your questions there. If you've submitted questions for Friday that are about ranking and changes in our algorithms, then you probably want to move those to one of the future Hangouts that I'll set up as well. So I wish you all a great week. Thank you all for joining again. And thanks for all the questions that were submitted here live in the Hangout. Hopefully we'll see you again in one of the future Hangouts.

MALE SPEAKER 4: Thanks, John.

MIHAI APERGHIS: Thanks, John.

MALE SPEAKER 2: Thank you, John, have a good day.

JOHN MUELLER: Bye, everyone.