Reconsideration Requests

Google+ Hangouts - Office Hours - 29 August 2014

Transcript Of The Office Hours Hangout

JOHN MUELLER: OK. Welcome, everyone, to today's Google Webmaster Central office hours Hangouts. My name is John Mueller. I'm a webmaster trends analyst here at Google Switzerland. And I'm happy to help answer your webmaster or web search related questions. As always, if any of you want to get started and ask the first questions, feel free to go ahead, and jump on in. OK. No early question?


JOHN MUELLER: Hi. Go for it.

AUDIENCE: OK, thanks. I want to ask just one question about 301 redirects. If I have a site on a domain like AAA.IT and I move it to AAA.EU, is it more effective to keep the first part of the name the same and change only the domain extension, rather than changing the name as well?

JOHN MUELLER: That doesn't matter so much for us. That's something that's up to you. So if you want to move from a domain that has the same name on a different TLD, or a different name on the same TLD, or a different name on a different TLD, that's all essentially the same for us. So I would pick whichever one you'd like to stay with for the longer run. I wouldn't make these kinds of changes regularly. But pick one that matches what you're trying to do, and use that one.
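To illustrate John's point that only a consistent 301 redirect matters, not which domain you pick: a minimal sketch (hypothetical domains) of how a site-move redirect maps every old URL to the same path and query on the new host, whichever new host that is.

```python
from urllib.parse import urlsplit, urlunsplit

def moved_url(url, new_host):
    """Map a URL on the old domain to the same URL on the new domain,
    the way a site-move 301 redirect would."""
    scheme, _, path, query, fragment = urlsplit(url)
    return urlunsplit((scheme, new_host, path, query, fragment))

# Same name on a different TLD, or a different name entirely -- the
# redirect is built the same way either way:
print(moved_url("http://aaa.it/products?page=2", "aaa.eu"))
# http://aaa.eu/products?page=2
print(moved_url("http://aaa.it/products?page=2", "bbb.example"))
# http://bbb.example/products?page=2
```

The choice of target host is, as John says, a branding decision rather than a ranking one; what matters is redirecting every URL and then leaving it alone.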

AUDIENCE: Just one more-- another question.

JOHN MUELLER: Sure. Oops, you seem to have been muted. I can't hear you.

AUDIENCE: Hello, are you receiving me?


AUDIENCE: OK. I was saying that in Webmaster Tools I see the same keyword repeated two or more times, on different web pages. Why is that happening?

JOHN MUELLER: In Webmaster Tools?

AUDIENCE: In Webmaster Tools, in the list of keywords for the web content, over the entire site.

JOHN MUELLER: So for that feature in Webmaster Tools, we show the words that we find on the page. Even-- well, essentially when we find them by crawling. So when we look at those pages individually, and we find those words, we'll show them in Webmaster Tools. So that doesn't necessarily mean that these are important words. But rather that they were just found by crawling. And if you find them there in Webmaster Tools, that isn't the sign that anything specific is broken or anything specific is being misinterpreted. It's essentially just the content we found through crawling.

AUDIENCE: It's muting me. Because I see the same keywords for different pages, categories and products on e-commerce, but in [INAUDIBLE]. Oh my god, it's muting me. I'm here.

JOHN MUELLER: Okay, try again.

AUDIENCE: Okay. I was saying that I see the keyword repeated two times, in the same positions on the page, but for different page categories and products. Because I think--

JOHN MUELLER: I wouldn't worry so much about that report in Webmaster Tools, because it's really based on what we find by crawling, and that's very low-level information there. And if those words are really on those pages, that's fine. It doesn't mean that we index or rank them differently because of that report. I would mostly use that to try to recognize situations where something is very broken on your website. If that report lists things that have nothing to do with your website at all, it could be a sign that maybe it got hacked, or something like that.

AUDIENCE: Oh, OK. Thanks. Thank you very much, John.


AUDIENCE: Hi John. Can I ask a question please?

JOHN MUELLER: Sure.



AUDIENCE: Hi. Sorry. We're seeing a lot of chat about Google Penguin rolling out last night. Has that actually happened? Or can you say or not say?

JOHN MUELLER: I don't know anything specific about that, that I can even mention at the moment. So I know that the teams are working on these things. But I don't think we have anything specific to announce just yet.

AUDIENCE: OK. Thanks very much. Thanks for your time.


AUDIENCE: John, I had a question about Google News. Specifically how affiliate marketing can impact a site's relationship with Google News. I was wondering if you could give any-- shed any light on that.

JOHN MUELLER: OK. I think your question is in the Q&A as well.


JOHN MUELLER: Yeah, so essentially I can't speak for Google News. I don't really know what their guidelines are there. From our point of view, if these are affiliate links within your content, I'd recommend that you just nofollow them, like you would with other affiliate or paid links. But apart from that, it's not something that I can say anything about, like the content guidelines on Google News. Whether that will be OK, or whether that would be a problem.

AUDIENCE: OK. Might you know who I could talk to about that? Because we're considering doing it, but we want to go about it the right way, if we do do it.

JOHN MUELLER: I'd go through the Help Center. There is a contact form for publishers, for news publishers.

AUDIENCE: OK. Thank you.
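John's suggestion to nofollow affiliate links can be done as a post-processing step on rendered HTML. A rough sketch follows -- the affiliate host is hypothetical, and a real site should use a proper HTML parser rather than a regex:

```python
import re

# Hypothetical affiliate network host -- substitute your own.
AFFILIATE_HOSTS = ("affiliate.example.com",)

def nofollow_affiliate_links(html):
    """Add rel="nofollow" to <a> tags whose href points at a known
    affiliate host. Very rough sketch for illustration only."""
    def fix(match):
        tag = match.group(0)
        if any(host in tag for host in AFFILIATE_HOSTS) and 'rel=' not in tag:
            return tag[:-1] + ' rel="nofollow">'  # inject before closing >
        return tag
    return re.sub(r'<a\b[^>]*>', fix, html)

html = '<p><a href="https://affiliate.example.com/deal?id=1">Deal</a></p>'
print(nofollow_affiliate_links(html))
# <p><a href="https://affiliate.example.com/deal?id=1" rel="nofollow">Deal</a></p>
```

Links to other, non-affiliate sites are left untouched, which matches the distinction John draws: editorial links can pass signals, paid or affiliate links should not.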


AUDIENCE: Hey John, do you mind if I ask a question?


AUDIENCE: OK. So I have a small SEO [INAUDIBLE] here in Romania. And we picked up a client about a year and a half ago. He's a Romanian adviser. It's a Romanian website. It seemed at the time to be affected by a Google Penguin penalty, due to a high amount of over-optimized directory links. So at the time, we tried to remove as many of those things as possible. And created a [INAUDIBLE] file for Webmaster Tools. And unfortunately we didn't see any effect for the following year. So six months ago we updated our [INAUDIBLE] file again to really ensure we got everything right. We also did some content marketing campaigns. We created some guides for the products, which got picked up by local bloggers, by media, and so we got some quality links towards their website. And two months ago, we also changed the website itself. So we updated that. Previously it was a custom CMS that wasn't really as accessible as possible for most search engines. So we fixed that as well. But it still seems that they haven't picked up in traffic at all. Some of their main keywords are on page three or four of the search results. And so I'm not sure if it's still a Penguin penalty, or if there are more factors involved here. Because in Webmaster Tools I can also see, for example, that when we launched the new website two months ago, we saw a surge in Googlebot activity. But it stopped a month ago. And the changes we made-- so for example, we had some rel canonical links to manage some filters they have in e-commerce, because the filters were a bit of a problem, and we added that to fix it. But since then, for example, Googlebot hasn't picked it up. All the links are still available in the index. And all the errors reported in Webmaster Tools are from a month ago. So nothing has been updated since then. And I was wondering if there's any way I can find out exactly where I should look, or what the problem is. I'm out of ideas.

JOHN MUELLER: I would definitely check in the Help Forum, and also check with other webmasters, to see if they see anything specific to watch out for. It sounds like it's more a combination of different things, rather than just one specific problem that you're seeing there. So it could be perhaps, something technical that is also playing a role here. It might be from the quality of the content, from the type of the content there. That might be a problem. So I'd try to get some other people to look at as well. And if you can't find anything there, or if you're really out of ideas, you're welcome to send me a link to maybe your forum thread. And I can take a look here, and see if there's anything specific we need to talk about with the engineers. Because maybe something is stuck. Maybe something, there's just some additional information that we could be giving you that Webmaster Tools is somehow missing out on, to help you figure out what you need to be doing there.

AUDIENCE: Right. So I'm pretty sure it's not a manual action, because there wasn't ever any such thing in the Webmaster Tools account. So it's definitely something algorithmic. But I've gone through all the options, and tried to fix things and get them a bit more popular, a bit more quality content. And I was expecting, in a year and a half, for at least something to move. But it's very odd that it didn't.

JOHN MUELLER: Yeah. It's really hard to say without looking at the site. I don't know. It's possible that you're just missing something small. It's possible that maybe there is a setting in Webmaster Tools that you overlooked. It's really hard to say without looking at the site.

AUDIENCE: Oh, would I be able to leave you a link in the group chat? Is that OK?

JOHN MUELLER: Sure. Sure, I'll copy that out, and keep a copy of that for me. So you could put it in the chat. You could send me a link to your forum thread. Any of that is possible.


JOHN MUELLER: Sure. All right. Let's go through some of the Q&A questions that were submitted. We have a bunch of them here. If I want to change the server and the sites I have on it, we'll also change the IP address. Can I lose my rank in Google? Basically they want to know if changing the server or the site's IP can impact the sites. Generally that's no problem. People move to different IP addresses all the time, different hosting systems all the time. That's not something that we'd worry about. That's not something where I'd say you need to be careful or do anything special in any way. We have guidelines on moving servers on, I think, the developer site, or in the Help Center. And that's what I'd go through to make sure you're not missing anything. Sometimes what will happen is, if you move to a different server, we'll crawl a little bit more cautiously, a little bit slower in the beginning, until we're sure that your server can actually handle that load. And when it can handle the load, we'll ramp up crawling again, and crawl as much as possible again. So that's essentially the biggest impact that I'd see there. I wouldn't expect any big changes from the ranking side.

What if tons of phantom pages have a rel canonical that points to one of my legit pages? Will they trigger any kind of penalty? They're not backlinks, or redirects. I don't think that the disavow tool would even work. Essentially what will probably happen there is we will just ignore those kinds of rel canonical links. We try to recognize incorrectly-placed rel canonicals, and just ignore them. It's very common that people use rel canonical incorrectly. So it's something we have a bit of practice on, catching those incorrectly set up rel canonicals and just ignoring them. So that's definitely not something I'd worry about there. Even if we didn't ignore them, and you find them there, that's not something specific that you'd need to take care of. So as long as you don't see any links going through your site, through those pages, I don't see any problem with that kind of a situation.



AUDIENCE: Yeah, you mentioned that you would ignore a rel canonical if it was out of place. Is that to say that potentially it would be indexed and show in the index?

JOHN MUELLER: Pages that have a rel canonical on them do get crawled and indexed as they are initially. So even when we process the rel canonical, we first index the page that has the rel canonical on it. So if you have a rel canonical from a parameter URL to your clean URL, for example, then we'll index that parameter URL initially. And then, when we process the rel canonical and kind of transfer the signals, we'll focus more on the clean URL, based on your rel canonical there. But in a first step, we will have indexed the parametrized URL first. So depending on the site structure, what might happen is, if you do a site: query or an inurl: query, something like that, you'll see these URLs indexed, despite them having a rel canonical. So from our point of view, it's kind of like a redirect, but it's not as hard as a real server-side redirect, in that we still have content that we can pick up here that we could show in the search results first. We don't follow that rel canonical immediately, the way we would a server-side redirect.

AUDIENCE: And if we were to come across that, and realize that we've made that error, and it has actually been crawled anyway, is it sort of good housekeeping to then remove it?

JOHN MUELLER: Sure. Yeah. I'd definitely clean that up if you run across something like that. The problem on our side is when we've run across a website that has a lot of incorrectly placed rel canonicals, so the common mistake for example is all of the pages have the rel canonical set to the home page. So obviously, you still want these individual pages indexed. We can see that you're doing something wrong. When we run across a site that has that kind of a set up, we're not sure how we should treat new pages that have a rel canonical on it. So maybe the new pages will have it set up correctly. But all these others are wrong. So we're kind of in a situation where we don't really want to trust the rel canonical on this site, because you're shooting yourself in the foot. But at the same time, maybe these individual ones here are OK. And the algorithms then kind of have a hard time making a judgment call that makes sense for both sides. So--

AUDIENCE: Even though a good-- sorry.

JOHN MUELLER: So if you're kind of aware of broken rel canonicals on your site, I'd definitely fix that so that you don't have this kind of misleading, inconsistent situation.

AUDIENCE: Yeah. Like I said, I mean, if Google is second guessing what's potentially supposed to be, I mean it can only do that so much. So if you're making all sorts of mistakes, cleaning them up afterwards if you notice them, or if a new webmaster comes through and notices them, is a good idea.


AUDIENCE: Cool. Thanks.
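As a concrete example of the correct usage John describes -- each page pointing at its own clean version, never everything pointing at the home page -- here is a sketch that derives the canonical target by dropping only the parameters that don't change the content. The parameter names are hypothetical; which ones are safe to drop depends entirely on the site:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical parameters that re-order or track but don't change content.
DROPPABLE_PARAMS = {"sort", "utm_source", "utm_medium", "sessionid"}

def clean_url(url):
    """Compute the 'clean' URL a rel=canonical tag should point at,
    keeping only parameters that actually select different content."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in DROPPABLE_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

url = "https://shop.example.com/widgets?sort=price&color=red"
print(f'<link rel="canonical" href="{clean_url(url)}">')
# <link rel="canonical" href="https://shop.example.com/widgets?color=red">
```

Note the filtered page still canonicalizes to its own widget listing, not to the home page -- the mistake John warns erodes trust in a site's canonicals.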



MALE SPEAKER: The way I read that question, I don't know if you were answering that. I was wondering whether he was asking about negative SEO using external, cross-site canonicals, rather than internal ones. Is that what you were answering? Because with internal ones, obviously, you then take the whole site into account, et cetera. But I thought he was asking about a tactic that someone may have employed against him from their poor site, canonicalizing cross-site to his.

JOHN MUELLER: I can't imagine that having any kind of effect at all. So from that point of view, I mean theoretically someone could be doing this. And they might have bad intentions. But I can't imagine that having any kind of a negative effect at all on your site.

MALE SPEAKER: All right. Is that because with any, in reality the site that they're pointing to may be better quality than the original. So they're saying the canonical version of this poor page is this good one. And so [INAUDIBLE]. I've never heard of it being used as a tactic.

JOHN MUELLER: I imagine that it's not that heard of, because it doesn't really have any effect. So I could imagine situations where you could artificially see some kind of an effect. But I don't think that's something that any normal website really has to think about.

MALE SPEAKER: All right.

JOHN MUELLER: OK. And we have one about the iframe link-building method. We talked about, I think it was Monday, where basically sites would allow other sites to embed videos, and the video iframe automatically included a link to the video landing page, and the video site. I talked about that a little bit with the web spam team. And their concern is essentially that the webmaster is trying to embed a video on the site. They're not knowingly adding this link as well. So from their point of view, they'd love to see editorial links to your sites. So if webmasters really say, I really like this site, here is a link to this site-- they love those kinds of recommendations. But if that link is packaged in with a widget, packed in with some kind of package where the primary reason for embedding it is actually something else in that package, then it starts to look like this link isn't really editorial. The webmaster was primarily trying to do something different by embedding this block of code, and not primarily considering that he was linking to this other website in this very specific way. So from their point of view, that's something where they'd recommend using nofollow. And if you want to make it possible for webmasters to link to your pages, offering them a code snippet that's basically just a link to those pages is fine. Letting them add that separately is completely fine. But providing a complete package where the primary part of this package is actually something else, not specifically that link-- that starts to look a little bit sneaky, and it feels like the webmaster isn't really editorially deciding to add this link to the other site. They're just primarily trying to embed this video. So from their point of view, adding a nofollow there would be the best solution.
Instead of mass penalization of entire networks, spamming free hosts, et cetera, why don't you develop a positive ranking factor that prizes entire networks, class C IPs, clients of the same SEO company, et cetera? So I guess to some extent those are always the two sides of the search results. As one side goes down, the other side goes up automatically. I think automatically ranking all sites of the same SEO company higher doesn't really make that much sense, because they could be all kinds of sites. So just because they have a common SEO company, it doesn't mean that they're going to have high quality content. Maybe it's just that the SEO knows how to technically make a website work well, how to make it crawlable, how to make the content visible. But that doesn't necessarily mean that this content is suddenly of such high quality that it should be shown much higher in the search results. So that's something where I think it's good to be kind of critical, and to watch out for over-generalization of different factors. But at the same time, it's a legitimate question in the sense that, on the one hand, people get penalized for doing something wrong, and it would also be nice to really positively encourage webmasters to actively do something right. And that's something maybe we need to work on, how we communicate that to webmasters. Maybe we need to think about ways to actively promote really good websites in search results, instead of just focusing on the demotion part of things.

What if I could create a bot able to write unique and useful content about a topic by making some decisions, reading data from other websites, exposing ideas, et cetera? Should it be penalized or prized by Google? Looking at this in a general situation, if this is auto-generated content, that would be against our webmaster guidelines. So in general, if this is content that's automatically generated, then that's something we'd prefer not to have indexed in our search results.
I could imagine at some point there might be some artificial intelligence bot that actually creates great content on its own. And maybe that's a little bit different. But if you're essentially spinning existing content, if you're rewriting it, if you're creating this content automatically, then that's something we would prefer not to see in our search results. Because that's not really of the highest quality, of the highest value for our users. So at the moment, I'd definitely say this is something you should avoid doing. Looking forward, maybe, I don't know, 10-20 years in the future, I don't know what kinds of artificial intelligences there might be that create great books, or I don't know. It's theoretically possible. But at least at the moment that's not something that we'd want to see in our search results. Go for it.

AUDIENCE: John, just one question about page speed optimization. I noticed that in PageSpeed Insights, Google doesn't recognize the gzip or deflate compression. But other tools online on the web do recognize it. I checked. There are proxies operational [INAUDIBLE], but nothing. Also in Google Chrome and in PageSpeed Insights, Google does not reveal gzip or deflate. Is it some bug? Something like this?

JOHN MUELLER: I am not aware of anything specific there. So is this testing your pages, or testing any pages where you're seeing this happening?

AUDIENCE: My pages, and also other pages where deflate is active, but PageSpeed does not reveal it-- compression is activated, it's installed on the server, but nothing. Google does not reveal any gzip or deflate compression.

JOHN MUELLER: OK. I will talk with the PageSpeed guys about that. That sounds like a bug, potentially on our side. We should be able to recognize that. I thought we did show that. But I will definitely check with the team about that. Thanks.

AUDIENCE: OK. Thanks. Thank you very much, John.
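Independent of whatever a testing tool reports, it's easy to check locally what gzip would save on a given response body. A small sketch:

```python
import gzip

def compression_ratio(body):
    """Return compressed size / original size for a response body --
    roughly the on-the-wire saving when the server sends
    Content-Encoding: gzip."""
    return len(gzip.compress(body)) / len(body)

# Highly repetitive HTML, as on listing pages, compresses very well:
html = b"<div class='product'>widget</div>" * 200
ratio = compression_ratio(html)
print(f"gzip shrinks this body to {ratio:.1%} of its original size")
```

To confirm a live server is actually serving compressed responses, request a page with an `Accept-Encoding: gzip` header and check that the response carries `Content-Encoding: gzip` (or `deflate`) -- if that header is missing, a proxy or server setting may be stripping compression, which is worth ruling out before assuming a tool bug.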

JOHN MUELLER: All right. Straight answers about a Penguin refresh would be appreciated. At the moment, I don't have anything specific to announce. I don't have any magic answer that I can give you guys there. I know the teams are working on it, and looking into what it takes to get this updated, to kind of refresh things a little bit faster. But at the moment, I don't have anything specific I can announce there. Usually we try to announce these things when they're actually live.

AUDIENCE: Hi, John. Can I ask something related to this?


AUDIENCE: So I was thinking, the last Penguin update was last year, about four months-- not sure. I was talking earlier about my client. And when I saw in October that he hadn't picked up in traffic, that's when I decided to really do the disavow file, make sure that we really got everything right. And I wanted to ask: if a website has a Penguin penalty, and the webmaster removes or disavows all the bad links-- and let's say he also has good links as such-- would the Penguin penalty be removed without a refresh of the-- a relaunch of the penalty?

JOHN MUELLER: It would need to be refreshed first. So on the one hand, you have to fix these issues on your side. Then we have to recrawl everything to recognize those changes. And then we have to update the data, or the algorithm that actually uses that data. So it's something where, if you're affected specifically by this specific update, then that's something you'd have to wait for at the moment.

AUDIENCE: OK. So it doesn't require just recrawling the links, for example, in the disavow file?

JOHN MUELLER: Yeah. It takes more than that, yeah, exactly. One thing I recommend doing there, since this is kind of a long cycle, at least for the moment, is really making sure that your disavow file covers everything. For all of those problems that you ran across, use a domain directive if at all possible, not just individual URLs. For example, we had someone in the help forums use a disavow file, and they disavowed the non-www version of all the links. And actually the www version was the one that we were showing in Webmaster Tools. So they were essentially disavowing a bunch of individual links, but we weren't taking that into account at all. So that's something where using the domain directive essentially fixes that.

AUDIENCE: By the way, you said about-- or you talked about the Google Webmaster links. If I use a disavow file would the links disappear from the Webmaster Tools account?

JOHN MUELLER: No. They would remain visible there. It would be the same as if they had a nofollow, more or less. And we also show nofollow links in Webmaster Tools, so they would remain visible there.

AUDIENCE: Right. And the disavow file, you can also use the domain directive to remove some domains?

JOHN MUELLER: Yes. Yes. All right. Here's another Penguin question. One of our sites was hit by Penguin last year. Since then we've completely disavowed all bad links and built new ones. However, we've not seen any improvements in rankings. When will we see improvements? That's essentially what we just talked about. So that would take an update of the algorithm to actually be visible. Again, I'd make sure you really have your disavow file set up properly, so that you don't have to wait another cycle. And make sure you're really strict with yourself when you're looking at the links to your site, and not saying, well, theoretically this directory link could be OK. If you see that you have a big problem with directory links, I'd just make sure that you're cleaning those up completely, and not being too picky about those kinds of links. But again, this is something where you'd also need to wait for the update to happen. And since you're doing all this work now, you'd like to avoid having to go through another cycle.
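The domain-directive advice can be sketched as a small helper that collapses a link audit's individual bad URLs into `domain:` lines for the disavow file (hypothetical domains; per John's anecdote above, the domain directive is what covers both the www and non-www variants of the same links):

```python
from urllib.parse import urlsplit

def disavow_lines(bad_urls):
    """Collapse individual bad link URLs into domain: directives,
    so www/non-www variants of the same host are all covered."""
    domains = set()
    for url in bad_urls:
        host = urlsplit(url).netloc.lower()
        if host.startswith("www."):
            host = host[4:]  # disavow the bare domain, not just www.
        domains.add(host)
    return [f"domain:{d}" for d in sorted(domains)]

bad = ["http://www.spammy-directory.example/page1",
       "http://spammy-directory.example/page2",
       "http://link-farm.example/links.html"]
print("\n".join(disavow_lines(bad)))
# domain:link-farm.example
# domain:spammy-directory.example
```

Being strict here, as John suggests, means erring toward the whole domain rather than hand-picking individual URLs from a directory site you already distrust.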

AUDIENCE: Can I ask a question, John?


AUDIENCE: Thank you. I want to ask a question about SSL certificates. Most of the sites where we are installing SSL certificates take much longer to load, especially on a CMS like WordPress. I have personal experience of several instances: since installing the certificate, my site doubled its loading time, from 1 1/2 seconds to almost 3 seconds. The problem I have is the fact that a very strong ranking factor is indeed site speed. So we tried to gain some points, but instead we are losing some, no?

JOHN MUELLER: So the rendering time that we use for ranking is essentially there to recognize sites that are really, really slow, and to differentiate those from sites that are normally fast. So if you're talking about changes on the order of half a second, a second, a couple of seconds, that's not something where we would use that as a ranking factor at all. It's really only if it takes a page over a minute to load, and an equivalent page loads in a couple of seconds-- then that's a clear differentiation. But just a couple of seconds, more or less, that's not something that we would even worry about. So on the one hand, that's at least a little bit of help there. On the other hand, there are various ways to make HTTPS sites just as quick as normal sites. There is a site that one of our developer relations members put up that lists a bunch of the optimizations you could be doing to make sure that these sites are just as fast as everything else. And now, since we mentioned that we're using, or starting to use, HTTPS as a ranking factor, we've seen a lot of activity with the various CMS providers, with the various hosters, to make sure that this kind of process is as easy as possible and runs as quickly as possible. So I wouldn't be surprised if, for the common CMSs, there were tweaks or updates available in the near future that really make this just as fast as a normal site. But at the moment, in the beginning, I can see how either it takes a little bit more technical effort to get things running just as quickly as possible, or they potentially run a little bit slower. So that's something you kind of have to work on when you're living on the bleeding edge, almost-- finding ways to improve the speed of your site while still keeping it secure.


AUDIENCE: [INAUDIBLE] John? Were you expecting HTTPS to take off so quickly with this announcement? And was it quite a big deal to make the announcement about HTTPS?

JOHN MUELLER: Yeah. So we've been working on promoting security for a long time. And we think the time has come to make this kind of announcement. On the one hand, to encourage sites to move forward. On the other hand, it's also just something that we've seen from users, who are more and more expecting that things are secure on the web. So it's taken a long time to get here. I think with future websites it'll be a little bit faster. HTTP version 2, for example, is supposed to have encryption built in by default. So that's something where the next generation will have it a lot easier. Moving from where we are now, where a lot of sites don't have secure connections, to where a lot of sites do have secure connections, might be a little bit rough at times. But I think, since this is a very lightweight ranking factor at the moment, it's not something that you have to implement urgently. But rather something I'd recommend looking at when you're doing the next redesign, when you're working on your site's infrastructure in general, when you have time to think about these kinds of changes as well.

AUDIENCE: Yeah, I guess when you're going to have to do a whole bunch of redirects, you may as well be doing redirects to HTTPS as well.

JOHN MUELLER: Yeah. Or if you move to a different domain name, if you move to a different setup on your website, a different CMS, then maybe that's also a good time to think about moving to HTTPS as well. But it's something where, depending on your website, it's not really trivial. So we don't want to force everyone to move from one day to the next. But rather we just want to bring that information out there and say, hey, we're looking at this as a ranking factor. In the future, at some point in the long run, we imagine it'll be a bit of a stronger ranking factor. So you have time to think about when or where you'd like to do this transition, or whether it could even make sense for your website.
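Folding the HTTPS move into other redirect work, as suggested above, amounts to one extra site-wide rewrite: every HTTP URL gets a 301 to its HTTPS twin. A minimal sketch (hypothetical domain):

```python
def https_target(url):
    """Return the HTTPS URL a site-wide HTTP->HTTPS 301 redirect would
    send the visitor to, or None if the URL is already secure."""
    prefix = "http://"
    if url.startswith(prefix):
        return "https://" + url[len(prefix):]
    return None  # already https (or some other scheme): no redirect

print(https_target("http://example.com/about?ref=nav"))
# https://example.com/about?ref=nav
print(https_target("https://example.com/about?ref=nav"))
# None
```

In practice this lives in the web server or CMS configuration rather than application code, but the mapping is the same: path and query preserved, only the scheme changes.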

AUDIENCE: Sure, sure. You've also mentioned that site speed, especially when it's smaller numbers, doesn't really have an impact. One of the thoughts that I've constantly had with Google is the fact that people travelling in and out of websites, Google potentially can track. And I know that they're really good with big data. So for example, if you're searching for cars, and you look at the first listing as a user. And you go in there and it's an outlandish color, and it doesn't have any information, and you're like, no, no, that's not what I want. You leave within five seconds. But the second result, maybe you're there for three minutes. Now potentially you come back to Google. Potentially you don't. Either way you are traveling back and forth from Google to sites, and Google knows where you're going. I would think it would be remiss of them, of you guys, not to be doing anything with that data. Can you clarify whether you do or don't use that data to work out whether engagement on those sites is good?

JOHN MUELLER: We look at some of that data, especially when we see it within the search results, where we see people clicking around, going from one site to the next. We use some of that data to generally understand how our search results are doing. So taking a lot of these data points into account, we can say, well, with the changes we've seen in the past, people are generally clicking on the first results, so they're generally happy with those changes. So in general, our search results are at a good point at the moment. Whereas if we can tell that people are clicking all over the search results, then that's a sign for us that our algorithms probably aren't bringing up the best search results that they could be bringing. So at that level, it's a lot easier to take that into account. Taking that into account on a site or page level is a lot harder, because the data is just so noisy. Some people might be going through your pages and finding what they want in five seconds. Other people might want to read all the content on those pages. So on a page level, it's really hard for us to do, because we don't understand what your pages are completely about, and what kind of reaction users should have. And there are just so many different types of reactions users might have. So on a page level, it's really hard for us to actually get something useful out of that. So I don't think we use that at all in our algorithms. On a general level, when we're looking at our algorithms in general, across the whole web, across all of these searches, that's something where we could definitely use that information. And I believe we do use that to kind of measure how our results are doing at the moment.

AUDIENCE: All right. So potentially a particular keyword search, maybe it's giving a lot of bounce backs very quickly from all the results that are on there. So that may flag the algorithm to refresh that data?

JOHN MUELLER: No, no. That's not something that we'd use on a keyword level, or something like that. It's just a more general way of kind of double-checking that our algorithms are doing the right thing by having this quality check that comes into play at the end. So we look at these numbers to just double-check that our algorithms are moving in the right direction. That we're giving people the information that they want in the search results. And they don't have to hunt through our search results to find what they're actually looking for. So that's something we take into account more on a general algorithm level, and not on a per-query [INAUDIBLE].

AUDIENCE: Thanks, John.

JOHN MUELLER: Sure. Perhaps some more from the Q&A. I work in the tourism industry. When we write content we're confused as to [INAUDIBLE]. We're confused as to whether Google will favor long, in-depth content about things to do in an area, or just shorter, concise content about how to book your holiday. I think this is something where you'd want to look primarily at what your users are specifically looking for. So on the one hand we want to understand what these pages are about. And we want to make sure that there's something unique and compelling on these pages when we show them in the search results. Sometimes having a simple page on being able to book a holiday in a specific location is exactly what people are looking for, and exactly what we'd like to show. So sometimes that can be fine. If you recognize that people want more in-depth content, and you have more in-depth content about locations, then adding information about that can also be fine. At the same time, if you have a holiday booking page, and everyone else has the same holiday booking page, maybe for the same affiliate system, maybe the same hotel even, and there's nothing really valuable on those pages by themselves, then that's something our algorithms will probably try to recognize and say, hey, these specific pages are the same as the other ones that we are already showing in the search results. We don't really need to bring these up separately. So I'd try to look at the market that you're active in, the type of content you have available, the content users are looking for. And just make sure that whatever you provide is really unique, compelling, and of high quality, so that it's something that we'd want to show in the search results. And whether that's long-form content or short-form content, that's something that's kind of left up to you.

MALE SPEAKER: John? [INAUDIBLE] Over the last couple of years, I've seen quite a lot of Google supposedly targeting the travel industry. There's been a lot of volatility in the searches also for travel sites. And a couple of travel companies have been hit quite hard by various algorithms. Is there anything specific about that industry that you know of, that you've gone after, or seen being abused? And why, therefore, do those sites see more volatility than most?

JOHN MUELLER: It's tricky. I mean, it's a very competitive area. So there are lots of sites that are trying to do as much as possible there. Because it's so competitive, or maybe it's just because they're trying so hard, we see a lot of sites that are kind of going over and across the line of what we would say is acceptable within our webmaster guidelines. So that's kind of, I think, the primary reason we see so much activity there. Also, with regard to web spam, with regard to algorithmic changes, we just see that a lot of these sites are crossing the general border of the webmaster guidelines, and going into areas that we generally wouldn't recommend. And once you're kind of going down that path, and you have a big website that's doing a lot of things kind of at the edge, or even a little bit past the edge, then that's where you would expect to see the algorithms sometimes kicking in, the manual web spam team sometimes coming in and saying, hey, this is not OK. You're really going way, way too far here. You need to pull back and stick to our guidelines and make sure that you're active in a way that's fair towards all the other people in the search results as well.

MALE SPEAKER: And can that-- I don't know how really to word this. Can you be considered part of that industry, and then somehow be penalized? Are there industry-specific filters, algorithms, things going on there?



JOHN MUELLER: No, it's not that we'd say the travel industry is bad. It's just that we see a lot of sneaky sites active there. And just because there are sneaky sites active there doesn't mean that your site is also bad if it's in the same niche. So that's not something where kind of the bad behavior of your competitors rubs off on you, as well. So we look at these sites separately. And if one site is doing something sneaky, we'll try to recognize that, and treat the other sites normally.

MALE SPEAKER: Right. What about something like, say, the payday loan industry or porn? Are those not industries as such? Or are they treated on a totally separate basis?

JOHN MUELLER: We treat those the same.

MALE SPEAKER: Am I wrong, that they have been treated differently?

JOHN MUELLER: We try to treat them the same. So I mean there are legitimate sites out there for payday loans, and we need to make sure that we include those in a search [INAUDIBLE].

MALE SPEAKER: People want them.

JOHN MUELLER: That's something where we need to show some kind of content anyway. And that's something people are searching for. We can't provide a blank search results page and say, hey, all of these sites are spammers, you shouldn't be doing anything with these guys. We need to provide something there. So it's not the case that the industry itself is bad, and we're blocking all of these sites. But rather, a lot of these sites are doing something sneaky. And we need to make sure that we're catching the sneaky sites, and letting the good sites remain visible in the search results.



JOSHUA BERG: I got a question for you.


JOSHUA BERG: Google Authorship, about the Google Authorship change that we just had. So Google Authorship is officially removed. And the other day we talked about, I asked a question about the Google Plus posts that were still showing images. And I noticed since then that for the most part they aren't showing images anymore. Is that correct as far as the Google Plus posts go in personalized search?

JOHN MUELLER: I'm not completely sure which posts you mean, but some of the [INAUDIBLE], I believe, were also dropped with this general change. But we still show things from people in your circles. We still show profiles, I think, in the sidebar, kind of like the knowledge graph style. So a lot of that information is out there. We definitely still index these posts. But some of the annotations might have changed as well.

JOSHUA BERG: But we won't be expecting to see the little images next to Google Plus posts, profiles and pages? The authorship images that were still appearing.

JOHN MUELLER: Yeah, yeah. I mean the authorship images are gone there, as well. So essentially that's a part of the normal authorship.

JOSHUA BERG: It seemed to be going through a server change, because through a lot of proxies I could see that the images were gone, but there were some where they were still there. So some people were thinking that they weren't going to go away. But it seems like--

JOHN MUELLER: Yeah, I imagine some of that is just the technical nature of how these things roll out to different data centers. And maybe you're just seeing the effects of that. But I'm not completely sure which images you mean. So it's hard for me to say definitively that these are going to be gone.

JOSHUA BERG: They would usually be relevance-related to people that you know in personalized search, appearing next to a Google Plus post in personalized search, in a search query.

JOHN MUELLER: Yeah, I think personalized search is essentially just the same. But I'd really have to look at the individual queries to see what exactly you're looking at, or a screenshot or something. I know some of that, the annotations were also removed there. But I don't know how far you're just seeing technical effects of things rolling out to your data centers, or if this is effectively the final situation.

JOSHUA BERG: But some of those will be going through changes as well?

JOHN MUELLER: It's-- there are always changes, yes.

JOSHUA BERG: OK. So with Google Authorship, since that is no longer being processed, correct, are there-- what advantages are there? I mean, you hinted in the comment, which of course is that links are links. So I mean a Google Authorship link is still a link. So that's something there. And is there any-- I mean, gradually with the Google Plus posts going nofollow on hyperlinks, and all these different things that have been changing over the last year or two years, I get a lot of questions from folks. And I would like to be able to answer a little bit more definitively. What, if any, advantages are there with search to using Google Plus? I mean, there's personalized search. But are we really seeing all that going away, or is it-- in my opinion it's still a phase.

JOHN MUELLER: I mean, all of this is still fairly new, compared to the rest of the web. So you'll definitely see experimentation across the board, when it comes to personalized search, when it comes to Google Plus. There are definitely going to continue being innovations and changes there. But at least with regards to authorship, that's something we don't process anymore. So if you have the authorship annotations on your pages, we'll see that as a link to your profile. If it has a nofollow, we'll treat it like a normal nofollow link, and not pass PageRank there. But apart from that, we don't give it any special value. There is no, let's say, esoteric second or third-order effect where we'd say, well, some algorithms are trying to recognize which pages are written by authors, and we'll use this annotation in 10% of the cases when we can recognize that the author is of the right type. That's not something that we're doing at the moment. So at the moment, authorship information essentially is not being used at all anymore on the Google side. So you can leave it there if you think your users appreciate that link to your profile, which they probably do, to understand a little bit more about the author. You can remove it if the only reason you had it there was for Google Authorship; then it's, of course, fine to remove that. I think you could philosophically see something around authorship happening again at some point, when we have better systems set up to understand the content, the authors, all of that a little bit better. But I would assume that that's more long term. And that it probably wouldn't be using the same markup. Maybe it would be using something completely different. Maybe there's Google Plus Plus, at that point, and no Google Plus anymore. So that's something where theoretically you could imagine something kind of esoteric being built up around authorship like that. But at the moment--

JOSHUA BERG: I might quote you on that. Maybe there's no Google Plus anymore.

JOHN MUELLER: I mean, everything changes regularly. It's not something we'd say [INAUDIBLE]. I mean, maybe there's Yahoo Plus then in five to 10 years, and everyone's connecting their profiles to that. It's always possible. Things always change. But I think at the moment, if you're working with websites I wouldn't assume that leaving this authorship markup there will provide any kind of esoteric extra value. So that's something [INAUDIBLE].

JOSHUA BERG: You know, a lot of people were-- I believe got carried away with this author rank thing, believing there was some kind of a separate algorithm. My understanding of agent rank is that it always was a link-based patent, and just something that was part of the system, part of the verification of entities, and the way that links would pass. So in my mind, from what I've heard of the authorship change, and search results being changed, it doesn't appear that there's much that would take away from the link-based system or the benefits that we would get. Just as those pages, like you mentioned with a Google Plus post, if it's indexed it would be evaluated like a regular page. And so the links may be, too, et cetera in that regard.

JOHN MUELLER: Yeah. I mean theoretically there are all kinds of possible algorithms that could be happening there. But I'd just kind of be a little bit cautious and say that at the moment there's nothing specific focusing on authorship, with this authorship markup here. So it's not something where I'd recommend to webmasters they must leave it there. They should include something similar. They should include bylines, those kind of things. That's something that was very specific to our authorship program. And I think if the webmaster wants to remove that, that's fine. If they don't want to add it to new sites that's fine from our point of view. It's not something where I'd say this has any kind of special value of having that there. Obviously for users that can make sense in some cases, to provide a little bit more background information. But from a search point of view, we're not really digging for that information at the moment.
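For reference, the authorship markup being discussed typically took one of the following forms (the profile ID below is a made-up placeholder). Per the answers above, Google now treats the anchor version simply as an ordinary link to the profile, subject to normal nofollow handling:

```html
<!-- rel=author as a byline link; now treated as a plain link to the profile -->
<a href="https://plus.google.com/110000000000000000000?rel=author">Jane Doe</a>

<!-- or as a link element in the page head -->
<link rel="author" href="https://plus.google.com/110000000000000000000">
```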

JOSHUA BERG: OK. And so one more thing, just as a personal opinion, not representing Google, I was wondering. Do you think that if mobile were not the primary issue here, and also the potential for antitrust cases, which people have been coming after Google with related to search, and some people have mentioned the advantages of Google Plus, that social media, if that gives advantages, there's that issue. Do you think if those were not an issue here, that we wouldn't be talking about the elimination of authorship?

JOHN MUELLER: I think we'd probably still do that. So I'm not aware of any antitrust issues being involved in this decision. I know from a mobile point of view it makes sense to have a little bit of a cleaner UI. It really slows things down to have all of those photos. But I think even past that, it's something where we'd like to see really significant advantages of keeping this system running, of maintaining it, of having all of the support around authorship. And that's something we just haven't found as much as we would have expected.

JOSHUA BERG: So the adoption was really not as good as could have been hoped. And am I to understand that Google has algorithms that can better understand entities now and parse that data, I mean, we've seen examples of that. So that it may not be as necessary to use manual things like authorship, because--

JOHN MUELLER: I could imagine maybe in the future that that's something that we'd be doing or picking up like that. So theoretically I could definitely see some kind of artificial intelligence recognizing the writing style of these articles, and saying, oh, this is written by the same guy, and this is a person that we trusted in the past to write great content. So that's something, I mean, theoretically all kinds of weird things could be possible. But it's not just the adoption issue there. It's really also being able to use all of this information. And not just having this information on our side, but really actively maintaining all of the pipelines that are involved in extracting this information from the web, all of the support and documentation things that we have to do to make sure that webmasters are implementing it correctly. It's not that trivial. So it's not something that we'd say, oh, we'll just set it up and keep it running, it doesn't cost us that much in comparison to everything else we do in search. It's still a significant amount of work to keep all of that up and running. And all of these additional elements of our search algorithms, they slow us down when we're trying to innovate in other areas. So if we have a way to improve the search results, let's say by 10% to 20%, and we're being held back by something that's being shown to a tiny portion of the users, then that's kind of a weird mismatch. Where we'd say, well, maybe it makes sense to clean up this small issue so that we can focus on the bigger issue. But [INAUDIBLE].

JOSHUA BERG: Anyway, I mean, I think that some people may come to the assumption that either authors or publishers or entities are not as important to Google anymore as they were. And I think that's not the case. I think they'll still be just as important. Would you say that entities are any less important than they were prior to this? Or that verified entities are any less important, or will be considered in any way less important?

JOHN MUELLER: It's hard to compare. So I mean, obviously structured data is something that we're still very focused on, something that we're heavily investing in. So that's kind of the structured data side that also involves entities, and those kinds of things. But it's hard to compare something like that to authorship, and say that authorship is actually just a kind of markup as well, and they should be connected and all that. So from my point of view, when you're working with websites and looking at things like adding authorship markup or not adding authorship markup, I think it should just be clear that at the moment Google is ignoring this markup, and not doing anything special with it. So I wouldn't spend an unnecessary amount of time kind of adding this markup, supporting this markup, maintaining the markup within the website. I think your time is much better spent focusing on general structured data use, for example, or general implementation of entities.

JOSHUA BERG: OK. So John, would you say that verified identities are any less important than they were before?

JOHN MUELLER: So by verified identities, you mean like Google Plus profiles, those kind of things?

JOSHUA BERG: Well, that is the main method that Google uses to verify entities. But OK, even not verified identities, Google's other systems hypothetically can recognize entities. Are entities any less important? Because with semantic search, we feel in the industry that entities are an important thing. And Matt Cutts has talked about it a lot, like we would like to recognize entities more. And if we understand that Danny Sullivan says something, then people will want to know what it is. So would you say that they are less important?

JOHN MUELLER: So with entities you mean specifically the people involved in creating this content, not the concepts on the page. So with structured data, of course, you can mark up different entities on the page. So if you're talking about Obama, you could mark that up and say, this is this specific person, for example. And those kinds of entities will continue to be very important for us, because they help us to recognize the content in the context of these pages. That provides a lot of value to search engines like us. Other search engines, I believe Bing and Yandex for example, also use this kind of markup to understand pages better. So that's definitely something I'd strongly focus on. With regard to entities for determining who wrote the content on the page, where this content is coming from in general, I think that's something where we don't really use that information. So the authorship markup isn't really used anymore. The publisher markup is used on the Google Plus side. It's not used on the web search side. So that's kind of the other way around, and not something we'd use in search anyway. So from that side, we're not really focusing on the authoring entities.
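As a concrete sketch of the kind of entity markup contrasted with authorship here, the "Obama" example might be marked up with schema.org microdata along these lines (the sameAs URL is purely illustrative):

```html
<!-- Marking up a mention of a specific person as an entity -->
<span itemscope itemtype="https://schema.org/Person">
  <span itemprop="name">Barack Obama</span>
  <link itemprop="sameAs" href="https://en.wikipedia.org/wiki/Barack_Obama">
</span>
```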

AUDIENCE: John, if you are moving away from authorship, why wouldn't Google just simply remove the ability to do authorship in the future?

JOHN MUELLER: And just keep the existing content running?


JOSHUA BERG: Is "contributor to" going to disappear from our profiles?

JOHN MUELLER: I don't know what Google Plus will be doing. I imagine they might respond to that. Maybe they'll say this is a type of semantic markup that they want to keep anyway. The problem with just leaving the existing markup there and the existing data, is that we have to continue keeping this pipeline running. And all of our future algorithm updates, they have to keep taking this into account as well. And that could potentially slow us down, could cause new problems, new issues that we're not thinking about now. So any time we're in the situation where we say, we have an existing algorithm that's kind of OK. But we have a lot of new algorithms that are a lot better. We generally prefer to remove the unnecessary algorithms completely. So that they don't cause any problems. So there's less code to maintain. If the infrastructure changes on our side, if maybe the data structures internally change that's not something we have to worry about anymore. So from that point of view, any time we can recognize that something isn't really needed anymore, we prefer to really remove it. So that it's actually removed completely, and not still inherently something we have to maintain or watch out for.

AUDIENCE: So would you say that publisher, which is similar to authorship, is that something that is also going away?

JOHN MUELLER: Not that I'm aware of. So the publisher markup is actually kind of the opposite of authorship, in that with authorship we use information about the profiles and pull it into search, and with the publisher markup the Google Plus pages are pulling information from search, and using that in Google Plus. So it's kind of flowing the other direction. It's not flowing towards search. It's really bringing information to the Google Plus pages associated with that website. And that's something that Google Plus can use within their whole system. So from that point of view, it's kind of separate. It looks similar, but it's not really something where we'd say it's really related or tied to authorship.


JOHN MUELLER: All right. I reserved this room a little bit longer, so we could go through a bunch of the questions that are left. And maybe we'll have time for more one-off questions in between as well.

Although this is quite an old, dodgy tactic, in our industry we're seeing a trend that exact match domains rank really well. We're considering changing one of our sites to an exact match. Is there any possibility that we could get penalized for this? From our point of view, exact match domains are primarily problematic if they're low quality. So if you have a really high-quality website, and you move to an exact match domain, that's not something we'd really worry about. It's also something where you'd probably not see any visible change in the search results either. So while we do use keywords in the domain and in the URLs to understand the content a little bit better, it's really not a primary ranking factor, where we'd say, oh, this has the keyword in the URL, therefore it must rank number one. So that's something where moving to a different domain is fine. It's something you can technically do. But I wouldn't expect that you'd see a really big jump in the search results because of that.

When addressing the need to accurately describe large amounts of data, in particular taxonomy, neighborhood, city, is it really better to use a type of content-spinning generating service to create paragraphs of information about the data, or boilerplate? I'd definitely shy away from content spinning and any kind of automatic content generation. That just looks bad. It looks like low-quality content. When users get there, they feel they're not really getting what they're looking for. Boilerplate is sometimes a little bit easier in the sense that we recognize that it's the same, and we'll try to just show the most relevant page there. At the same time, you also need to watch out for things like doorway pages, in the sense that a lot of your pages are essentially the same, and you're just swapping out different city names. And that starts to look really spammy from our point of view, and might be something that our algorithms, or the manual web spam team, will take action on. So I'd try to stick to making sure that the content you provide on your site is really unique and compelling, and has high quality. So no spun content, but really good content. And if you don't have good content for individual pages, maybe it's worth folding them together into one page that's a little bit higher quality, and not just hundreds of different pages that are essentially duplicates of each other.

I get reports that Google is changing the color of the rich snippets. Can you let me know on what criteria you change the colors of the stars? I don't have anything specific to announce about that. I saw Barry's post about that as well. He notices all of these changes. I imagine these are just experiments that individual teams within Google are doing. And these are the kinds of experiments that we will continue to do, to try to figure out which way of displaying this information makes the most sense. So it's not something specific that you can control.



JOSHUA BERG: A couple of months ago authorship images were removed from search. Was that part of a phase-out? Or, I mean, at that time had it not yet been concluded that authorship would be removed entirely?

JOHN MUELLER: We primarily did that for the UI reasons that we mentioned there. It's not that we thought this would be a good step-by-step way to remove authorship in general. And I believe we just made the decision to remove authorship completely after that. So that's something that was essentially done independently of that profile photo change.

JOSHUA BERG: All right. Thanks.

JOHN MUELLER: I mean, if at all possible we try to do these changes completely, instead of doing them step by step, when we know that our end goal is actually to go somewhere completely different. That's, I guess, just bad timing, you could say.

JOSHUA BERG: So do you think we might see Matt come back after he's had you give all the bad news out?

JOHN MUELLER: Well, I mean, there's always good and bad news. So it's something that happens. And I think it's good to give Matt his time to kind of relax and do something different for a change for a while. It's quite intense.

JOSHUA BERG: Yes, you have brought us lots of good news, John. That's for sure. Lots of help for webmasters.


AUDIENCE: John, could I ask a question about local listings?

JOHN MUELLER: Sure. I don't really know that much about the local side of things. That's more on the Map side, though. But maybe I could help. I don't know.

AUDIENCE: It's more about-- Google seems to run lots of experiments in how the information shows in the search results. And the localized listings quite often vary. And you'll get a [INAUDIBLE]. You might get a map showing. You may not. And what I'm finding frustrating at the minute is that a lot of Google Plus pages and listings appear and they're not complete, or without any real substance or good information. And I would have thought that Google would put more emphasis on Google profile pages that are fully complete, and perhaps give them a bit more of the limelight. So I just wondered what your thoughts were on that. Whether or not there's a way of highlighting pages that are dead, for instance, where the business doesn't exist. So if we're talking about quality of results in what's shown, this is probably one of the big areas, I think, from a local perspective.

JOHN MUELLER: Yeah. I don't know how they do that ranking there, to bring those search results up. So I can't really speak for those specific cases. But if you have something specific or you're saying, the search results here, including the local search results, are really bad. Or there's really a big problem here, I'm happy to forward that on to the team. But I don't know what the plans are, the changes are, that are happening there. So I can bring bug reports back to the team, but I don't really have any real insight into what they're actually doing.

AUDIENCE: No worries. Thanks.

JOHN MUELLER: OK. I'd like to know more about authorship. Is this just a temporary change, or is authorship gone forever? Yes, this is essentially removing it completely. In the case that authorship is gone forever, is Google looking for other ways to connect? At the moment we don't have anything specific that we'd want to announce there for kind of authorship-type markup. So this is something where at the moment we're really removing authorship, and all of the markup, if you want, you can remove it. It's not something specifically that we're saying is going to be replaced by something else.

Any idea of the processing time for disavow information? There are two aspects there. On the one hand we have to recrawl those individual links to reprocess that. On the other hand, the algorithms using those links have to be updated as well. So that might be the bigger part of the processing time that you're seeing. In general, you don't see information in Webmaster Tools about the disavow and what happened there. So the links will continue to be listed in Webmaster Tools in the links to your site section. So that's not something where you would see things change from red to green suddenly, and see that the file was processed. Also, this is done on a per-URL basis. So if you submit a disavow file with a bunch of domain directives, for example, some of those will be processed fairly quickly, and some of those will take a little bit longer. It's not that the whole file is processed within a couple of minutes, or a couple of months. It really depends on the content that you have there.

I have some external links pointing to the first domain. I put a rel canonical link from the first domain to the second. Would all benefits from these links pass to the second? What would happen if they were two subdomains of the same top-level domain? In general, we try to forward all of our signals with the rel canonical.
If we can read that properly, if we can trust this information, we'll try to forward that. If you are essentially trying to redirect, I'd still recommend using a server-side redirect, instead of just the rel canonical, so that we can really process it a lot faster, and it's a lot clearer what you want to have indexed there. So from that point of view, if you're using this instead of a redirect, I'd try to stick to a redirect. If you're just generally asking about how this data is combined, we do try to forward all of those signals to the target canonical URL.
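To make the two mechanisms in this answer concrete (all domains below are placeholders): a disavow file lists one URL or domain: directive per line and is processed per URL, and a cross-domain rel canonical is a hint, where an actual move is better signaled with a server-side 301:

```
# disavow.txt, uploaded in Webmaster Tools; processed per URL / per domain
domain:spammy-links-example.com
http://another-example.com/paid-link-page.html
```

```html
<!-- On the old page: a hint that the page on the second domain is canonical -->
<link rel="canonical" href="https://second-example.com/page">
<!-- Preferred for a real move: a server-side redirect, i.e. respond with
     HTTP/1.1 301 Moved Permanently
     Location: https://second-example.com/page -->
```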

AUDIENCE: John, can I ask another question, please?


AUDIENCE: So I have a question about multi-language sites. I'm really confused. Supposing I want to make a site like YouTube, and give my users the same content in four or five different languages. Basically what I want to do is to serve the same content, the same video content, plus on my own server, but in different languages. So the [INAUDIBLE], the categories, links, content descriptions, targets, contact page. Everything will be in every language page. My question is, should I put a rel canonical meta tag from every video, post, page, category or target pointing to the default version? Or should I put the canonical as on a normal site?

JOHN MUELLER: So in a case like that, I would use hreflang markup to link these different language versions together, and only use the canonical on a per-language basis. So the English page has a canonical to the English page. The German page has a canonical to the German page. So that if someone were to link to the wrong version of the English page, you would find the correct version of the English page. So that's the way I would do that. I wouldn't have a rel canonical across languages, or across regional pages. So if you have a page in German and in English, and the original was in English, for example, I wouldn't make a rel canonical from the German to the English page. Because then we would essentially drop the German page completely, and not know that there's also a German variation of that page.

AUDIENCE: So every language page should have its own canonical.


AUDIENCE: Pointing to the end domain, let's say?

JOHN MUELLER: Pointing to the preferred language version of that specific page. So not the home page, but to that specific page.

AUDIENCE: Thanks. It's very useful. Thank you.
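(Editor's note: the setup described here, hreflang links across all language versions with each version's canonical pointing to itself, can be sketched by generating the head markup. The domain and paths below are hypothetical.)

```python
# Hypothetical sketch of the head markup described above: every language
# version lists all variants via hreflang, and its rel=canonical points to
# itself, never across languages. Domain and paths are made up.

BASE = "https://www.example.com"
VARIANTS = {"en": "/en/video", "de": "/de/video", "fr": "/fr/video"}

def head_tags(lang):
    """Return the <link> tags for the page served in the given language."""
    # Self-referential canonical: the German page canonicalizes to itself.
    tags = [f'<link rel="canonical" href="{BASE}{VARIANTS[lang]}">']
    # hreflang alternates tie the language versions together.
    for code, path in VARIANTS.items():
        tags.append(f'<link rel="alternate" hreflang="{code}" href="{BASE}{path}">')
    return "\n".join(tags)
```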

JOHN MUELLER: OK. All right. Who wants to ask some questions? What can I help you guys with?



AUDIENCE: I'm going to be really quick, just a yes-or-no question. You talked earlier about relevant content, and automatically generated content, and actually about travel sites, which goes into my point. Let's say you have a bunch of travel sites that offer people who want to rent out their apartments, for example, the chance to list them, and these people entered the same description on multiple websites. You said a few Hangouts ago that you'll not penalize these websites. You'll just show what you think is the most useful version, the most useful website for that subject. Could one of these websites, for example, use some automated content, in the sense that, let's say, they use the GPS coordinates from the apartment, and then show a bunch of nearby attractions from that area? So that would actually increase the user experience, and the amount of unique content they have, as opposed to other websites [INAUDIBLE].

JOHN MUELLER: Yeah, I think that's fine in that regard, because you're providing additional value. It's not that the primary content is auto generated. But this is additional value that you're providing on these pages, which definitely makes sense. What you'll probably still see is, if someone is searching for the primary content that's duplicated across different websites, we'll still try to filter that out. And see, OK, this primary content is the same on these five or six pages, so maybe it's worth just picking one of these pages and showing it in the search results. So if someone is searching just for that primary content, we'll probably still fold them together, even though a page has other additional content that might be useful there. If someone is searching for a mix of content, like something from a description, and something from, I don't know-- an apartment in Barcelona near a playground. Then that's something where your page, of course, has an advantage, because you have this additional content on the page. So from that point of view, that could definitely make sense. I'd just really make sure that the primary content isn't auto generated. It should be something unique in itself.

AUDIENCE: OK. Cool, thanks. Another one would be regarding Webmaster Tools itself. Actually, is there any update on when we might see some APIs for the search queries? Or maybe when we can see search queries updated more frequently, rather than just up until the last two to five days?

JOHN MUELLER: Good question. So we are looking at an API update, which will probably happen in the near future. But that won't include search queries at the moment. That would be maybe a next bigger step in the mid or the long term. But we're definitely doing an API update in the near future. We're also working on the search queries feature. So if you have specific ideas on things that you'd want to find there, now would be a great time to bring those up. That's something we can bring up to the engineers, and say, hey everyone wants this specific way of looking at search queries. And now would be a great time to kind of get that feedback in. But those changes will also take a bit of time to actually be visible. So that's something where I'd say-- I'm just guessing based on the things we've seen from the team, maybe a half a year, or something like that would be a reasonable time frame for that to be actually visible in Webmaster Tools. But getting that information in there early, those ideas, that feedback is a really good idea. And if you have something specific that you'd want us to bring up with the engineers, send it our way. And we'll get it in.

AUDIENCE: So there's not going to be a search query API or any way to-- for example, I know there are the Python scripts that allow you to download the data. I actually made a version for Google Docs, so in JavaScript, using HTTP authentication to connect and grab data automatically. Is that going to be available to users until there's an API?

JOHN MUELLER: That's definitely the plan, to keep that existing functionality up until we have something that replaces it. I can't guarantee that we'll be able to do that. Maybe there are technical reasons why we have to make a cut at some point and say, we have to update that. But at least at the moment, that's the plan.
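(Editor's note: the pre-API download scripts mentioned here fetched search query data as CSV. A minimal parsing sketch, with entirely made-up column names and data, might look like this; the real download endpoints and export format are not reproduced here.)

```python
import csv
import io

def parse_query_csv(text):
    """Parse a downloaded search-queries CSV into a list of row dicts.

    The column names in the example data are hypothetical; the real
    Webmaster Tools export used its own headers.
    """
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    # Pair each data row with the header so callers can look up fields by name.
    return [dict(zip(header, row)) for row in data]
```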

AUDIENCE: OK. Will we be able to see fresher data than up until the last two to five days?

JOHN MUELLER: That's something I know the team is also working on, yeah. I'd also want to see your JavaScript version. That sounds really interesting.

AUDIENCE: I'm going to write tomorrow's blog post on that.

JOHN MUELLER: Yeah, even better. OK.



JOSHUA BERG: Our profiles and pages can still build-- can they still build authority the way that they always have through their use, engagement, and mentions, or there kind of--?

JOHN MUELLER: I think that's something that's probably more on the Google Plus side, than something where we treat those in any kind of a special way in search.

JOSHUA BERG: Right. So also not being treated in any special way in search means that anything that does create links in that engagement process is treated as links.

JOHN MUELLER: Yeah. So, I mean, we show the profiles and pages in search results. We use them partially for the information we show in the sidebar, kind of like the knowledge panel information we have there. So from that point of view, we do take those into account. But I wouldn't say that there's any implied authority behind that. It's more, I'd say, a technical measure, in that we're trying to recognize these profile pages and show them appropriately there.

JOSHUA BERG: But for our Google Plus posts specifically, and the way those rank in search, there's clearly authority on posts.

JOHN MUELLER: That's possible that there's kind of an indirect effect there.

JOSHUA BERG: But it's not the traditional ranking?

JOHN MUELLER: Yeah. Or based on the internal linking structure within Google Plus in general, that's possible. Yeah.

JOSHUA BERG: And so, would you say-- I mean, our profiles, pages, and the different content that we use are all related. For example, my profile is related to this YouTube account, or is engaged with all these different people, and there are the relevance algorithms. These things all still exist. And so, say in the future, if Google had some nostalgia moment about authorship or something similar or related, these things theoretically would retroactively still be important.

JOHN MUELLER: Yeah. I think you could make a lot of possible futures where it could make sense to have this information there. But at the same time, when you're looking at business websites, when you have a limited amount of time to work on these websites, I think you need to focus on what works now, rather than what could theoretically work in the future or in the long run. So I think there is definitely an argument of how users engage with these types of pages, this type of content, that could be made. Where you say, well, users are recognizing that I'm an authority in this field, and they go to my Google Plus profile. And they follow my posts there. There is definitely that argument that could be made. But I don't think it makes sense to specifically focus on the authority side of things when it comes to search at the moment. Because we're turning this really, let's say, direct authorship off. And at the moment, we don't have any plans to announce any kind of an almost esoteric, authorship-like setup that uses similar links, those kind of things. So I think there are definitely arguments that could be made to say it makes sense to focus on social media. It creates strong profiles. But it's not something that we would directly use in web search at the moment.

JOSHUA BERG: OK. I mean, Google Plus, though, is still Google's best understanding of all the related verified entities, compared with any other social media platform. There isn't anything that comes close that Google has knowledge of from the back end. So if we were to say that social media matters at all, then Google Plus would still have to be there at the center.

JOHN MUELLER: To some extent that's, of course, the case, because we're kind of involved with both of those sides. The other aspect is that some sites we just can't crawl properly, because they have limitations, because maybe there are terms of service issues there. So from that point, we can't really take all of that into account. As much as possible, we'd like to remain neutral in that regard, though. So if we can get this kind of information, then I could imagine it changing in the future. Where we'd say, OK, any kind of profile where we can really recognize this information, where we can kind of build up Knowledge Graph-type information of what's involved there, could be shown in the Knowledge Graph-type sidebar there. So that's something where theoretically a lot of things can change. But at the moment, we can definitely crawl Google Plus pages really well. We can crawl this content like other HTML pages really well. It's easy for us to process that information.

JOSHUA BERG: Well, that is good to hear. Because if there are any changes in that direction, then influential profiles that Google looks at and can understand are going to be at the front of the line.

JOHN MUELLER: Yeah. I mean, this is something where I think you'd probably want to use social networks for what they do within the social network, and not for specific SEO reasons. That's across the board anyway. So that's something where, if you have public profiles that are easily crawlable elsewhere, that could be just as useful. If you have a Twitter profile that you use very actively, that can be very relevant to queries for your name, for the type of content that you write about there. And it doesn't necessarily need to be on Google Plus. We can crawl Google Plus really well. We can crawl a lot of other social networks really well. We can pick that content up, show it like other HTML pages that we know about. So I'd primarily focus on, I guess, the social engagement within those networks, and those aspects, when you're working on that, and not assume that there's any inferred SEO benefit of actually being active there.

JOSHUA BERG: Yes. Because that natural engagement and interaction with your audience is going to benefit either way. Right? [INAUDIBLE]

JOHN MUELLER: I think that's something that has a lot of value, and definitely something that can make sense for your users.

MALE SPEAKER: It seems there is a lot of focus on Google authorship, obviously, in this one. Correct me if I'm wrong, but Google didn't invent authorship. People have been writing stuff for quite a long time. So unless you're using Google Plus as your sole way to get business, or it's something you're using for SEO, surely an author would have a following outside of Google Plus, dating from before it existed, and/or on other sites anyway. I'm not really sure what the big issue is.

JOHN MUELLER: I think it's natural to focus on this a bit, because we just announced this specific change. And for some people who've been working in this area quite a bit, who've been promoting this for their clients for other websites, I think it's quite a big change. So

MALE SPEAKER: But are they promoting it as a ranking factor, or as something they should do as a [INAUDIBLE]?

JOHN MUELLER: We haven't used authorship as a ranking factor in normal web search anyway. So it's not directly there. But I can imagine, there's a lot of indirect effects where if people go from one network and find the same people, they find their search results. That can make it easier for them to trust these people. So that's definitely something that's also involved there. But there are lots of other things you could be doing to engage users, to engage customers, to keep them coming to your website, to keep them happy with your services. So there's always something that could be done. All right. With that, I have to take a break now. So I thank you guys for sticking around so long. Thanks for all the good questions, the good discussions. And I hope to see you in one of the future Hangouts. I think I still have to set them up. But it will be generally in the same time frames again.

AUDIENCE: Thanks, John.


JOHN MUELLER: Have a great weekend.

JOSHUA BERG: Thank you very much. That was very good. Lots of information. Beautiful.

JOHN MUELLER: Bye, everyone.

AUDIENCE: See you, John.

AUDIENCE: Bye.