Google+ Hangouts - Office Hours - 28 August 2015



Transcript Of The Office Hours Hangout


JOHN MUELLER: OK, welcome, everyone, to today's Google Webmaster Central Office-Hours Hangouts. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what I do is talk with webmasters, web publishers like the folks here in the Hangout, or the folks that have submitted questions or in the forums, everywhere. As always, those who haven't joined these Hangouts regularly, you're welcome to ask a question at the beginning if there's anything on your mind that you'd like to get started with. Nothing? Otherwise we'll get started with the questions that were submitted, and if you have any comments or questions along the way, feel free to speak up, and then we'll open it up again for normal questions at the end, or towards the end.

AUDIENCE: I sent you a question offline, so I just wanted to fix you up and let you know that I sent you one offline. That's all.

JOHN MUELLER: Thanks. I'll double-check that. All right. Let me just mute you guys to clean up the sound a little bit. If there's anything in between that you need to ask, feel free to unmute, but let's go.

Why does the blocked resources report in Search Console display a list of blocked images which are not blocked by robots.txt and not used anywhere on the affected pages? That sounds like something where I'd probably need to take a look at the example, so if you can send me some sample URLs or a screenshot, that would be really awesome. In general, the blocked resources report includes embedded content within the pages, so if you have a page that embeds an image, for example, and that image file is blocked by robots.txt, then we would show that there. But if the images are not blocked by robots.txt or not used at all within those pages, that sounds like something a bit weird. But I'm happy to take a look at an example to see if there's anything we need to do on our side.

A website with 190,000 pages has been duplicated without permission to another domain recently by a previous colleague who had access to the server. The company has filed a DMCA request recently. Does this duplicate website affect the original website's rank? In general, that's not a problem, in the sense that we're pretty good at recognizing which website is original, and we tend to show that one in search results. So if you're taking action to have the copy removed, then I think that's a great thing to do, if you can do that, but essentially our algorithms are used to some amount of duplication being there, and we have to work with that and still bring the most relevant results.

Google is showing video snippets for articles that don't have any videos. This is greatly impacting our traffic. I've posted this question in the forum and messaged John a couple of times. What else can I do? Essentially these are algorithms that are picking up on signals that assume that there are some video snippets here. I know this is something that some sites have a bit of a problem with, in the sense that we sometimes pick them up incorrectly. I know the video team is also working on this problem to solve it on a broader scale, so I don't have any immediate solution for you at the moment, but I know the team is working on this problem.

What factors does Google use to show tweets in the search results? Is it follower count, social authority, or anything else? What should webmasters do to get their tweets indexed by Google? I don't actually know how we embed the tweets there. Part of that is, I think, because these are only visible in the US, so I can't really see them directly here, but I don't know what factors would be involved there. I'm guessing we just try to recognize ones that are really relevant to the user's query, and then, if we can tell that tweet is a good result, then we might include that. That's kind of how we sometimes choose to embed images in the search results or news blocks-- those kinds of things.

We had a site with just 10 backlinks, a pretty good ranking, and a low-competition niche, and in a short time we've gained 30 more, but it looks like these are poor-quality links. Is this possibly negatively impacting our ranking? No, in general that wouldn't be a problem. So if low-quality sites are linking to your site, that's not something that would pull you down.
On the other hand, if these are unnatural links that you or maybe a previous SEO or someone else placed on those other sites, then that's something I'd take care of regardless. But just because a site links to you, and it doesn't look as awesome as CNN or some other big site doesn't mean that this is going to cause any problems. It's not something you need to take action on.
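Going back to the blocked-resources question at the start of that answer: when an image really is blocked, the simplest self-check is to run the embedded image URLs through each host's robots.txt with Googlebot's user agent. A minimal sketch in Python, with hypothetical placeholder URLs; note that Python's robotparser handles wildcards less completely than Google's own parser, so treat it only as a rough first check.

```python
# Minimal sketch: check whether embedded image URLs are blocked for Googlebot
# by each host's robots.txt. The URLs below are hypothetical placeholders.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

image_urls = [
    "https://example.com/images/product.jpg",
    "https://cdn.example.com/assets/banner.png",
]

parsers = {}  # cache one robots.txt parser per host

for url in image_urls:
    host = "{0.scheme}://{0.netloc}".format(urlparse(url))
    if host not in parsers:
        rp = RobotFileParser(host + "/robots.txt")
        rp.read()  # fetch and parse that host's robots.txt
        parsers[host] = rp
    allowed = parsers[host].can_fetch("Googlebot", url)
    print(url, "->", "allowed" if allowed else "blocked by robots.txt")
```

If a URL comes back blocked here but the report disagrees (or vice versa), that's exactly the kind of example worth sending along, as suggested above.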

AUDIENCE: Hi, John. Can I ask my question? This is the first time I've done this, so I don't know how it works, probably, so sorry. I have four questions for you, and here I go. How long does it take Google to consider or see new and very good content in a very competitive market like flights and hotels? Because we are trying to create super, very good content, and we are competing in this very big market of very big companies.

JOHN MUELLER: It's hard to say about how long that would be, because that really depends on the type of search result that you're active in, which niche you're active in, the type of site that you're active on, so it's not something where I could say there's a specific time that it would take for that to happen. So I don't really have any specific answer that I can help with there.

AUDIENCE: We are in Spain, so the domain is .es, and we are trying to get into UK too. So maybe in six months, maybe three? I don't know. Are we talking about 10 years?

JOHN MUELLER: No. If it's something really, really good that we can recognize as really important for that type of query, then that can be a matter of a couple of days. It's not that there's any artificial limit where we would say a site has to be this old in order to show up in these search results. If it's really, really good, and we see that through all our signals that we collect, like people recommending the site, then that's something that we would reflect fairly quickly. That can be a couple days. So it's not blocked for any time, but, at the same time, it's not something that automatically happens as soon as we find something that looks nice.

AUDIENCE: The second question. We used to have a site about rental cars between individuals, and right now we have a new site about rental cars with new information. It's a full-price car rental comparison, and all the old links are redirected to this new website. How would Google think about it, or how would it react to it, because it's the same subject but just a little bit changed?

JOHN MUELLER: That should be fine. In general, with the redirects we'd like to see that it's going to something that's kind of equivalent, and it sounds like it's pretty similar there. So that's something where I think that shouldn't really be a problem. Sometimes websites change their focus and they redirect maybe to a different domain, and that's completely fine. That wouldn't be a problem.

AUDIENCE: Third one. Does Google ban or penalize Spanish sites with Spanish content and a Spanish domain if the English version content is a little bit worse, because it's almost the same but not exactly the same?

JOHN MUELLER: No, we wouldn't penalize the site for something like that. What we would do is try to rank those pages individually, and we would rank the Spanish pages, maybe for a user searching in Spain or a user searching in Spanish and the English pages for people searching in English. But it's not that we would penalize a site for having somewhat different English versions.

AUDIENCE: The last one, I promise. What do you think about link lists in the footer? I know that some time ago they were very useful, and I consider them very useful for me. Do they still work?

JOHN MUELLER: So do you mean links within your website in the footer?

AUDIENCE: Yes, for example, rental car in Spain, rental car in France-- a list.

JOHN MUELLER: I think it's useful for us to recognize the individual content within your website, but as soon as you include lists of keywords in the footer in the small print, then that starts looking a lot like keyword stuffing. Then it's not so much a matter of these links being bad, but just that these pages are kind of keyword-stuffed. And then we say, well, we don't really know if we can trust the content on this page, because it looks very artificial because of all these keywords that are stuffed in the footer. So I would use the footer for useful navigation, and I would use the normal navigation on the website. I wouldn't use it to kind of artificially push a lot of pages to have them linked from the home page, for example.

AUDIENCE: OK, that same list but in the home, as in a box or something?

JOHN MUELLER: Sure, that can make sense. That's something where you'd have to look at what users are looking for, and make it easier for the user to go to the home page and find the content that they're looking for. But it's not something where I would artificially inflate those links and say, well, this is a rental car website in Spain. Therefore, every link on the site has rental car Spain Ford, rental car Spain truck-- something like that-- as anchors. I would try to use natural anchors as much as possible.

AUDIENCE: I thought it was useful, just because they change the nations, and since they can find that information from every other page in our site, I was thinking it was useful for everybody, but I wasn't sure.

JOHN MUELLER: I would try to keep it as natural as possible and such that it works for the user. If you repeat the same words over and over again for every link, then that's something that probably doesn't help the user that much.

AUDIENCE: Can I touch base on the original question? On her original question you had said that if you have these links that are very weak or not great, it's not going to hurt you any. So does what you're saying kind of negate negative SEO? Because I know that a lot of people that attack people-- they'll throw a million crap links at a site and say that that will basically push them to the bottom of the list. So do those basically crap links water down your good links, or it doesn't matter?

JOHN MUELLER: It doesn't water down the good links, but usually what happens in a case like that is we'll recognize that these are kind of spammy, problematic links, and we'll just ignore them on our side. So it's not that it would reduce the value of the existing ones, it's just that these new ones wouldn't really provide any additional value.

AUDIENCE: So just no help.

JOHN MUELLER: Yes. All right. Let me go through some of the submitted questions, and then we'll open it up for discussion again, because it sounds like there are lots of things to discuss.

What's the impact of switching to HTTPS on a website that relies on some sort of display advertising? Sometimes networks don't serve ads correctly on HTTPS. What about those sites, and what should webmasters do? From our point of view, we think moving to HTTPS is a great thing, and we really recommend doing that. I am aware that sometimes there are issues around ads, for example, that make it really hard to move to HTTPS completely. So that's something where you kind of have to weigh the two sides and think about how much more is it worth for me to move to HTTPS now, or maybe it makes sense to wait half a year or so, until all of my ad networks that I rely on to keep the site running are able to handle HTTPS properly. That's something where sometimes it makes sense to wait, and sometimes it makes sense to move regardless and kind of be ahead of the curve, and then just take into account that maybe these ad networks won't be able to serve everything completely.

Should we redirect the HTTP version of the old robots.txt to the HTTPS version as well, like all other URLs? Is Google planning on building a report for mixed content errors? Yes, I'd recommend redirecting all URLs from the old version of the site to the new one if you're changing, be that to a different domain or be that from HTTP to HTTPS, so redirect everything. With regard to building a report for mixed content errors, I think that's a good suggestion. I know I've looked at this with the Webmaster Tools team, but also with the Chrome team that's working on security problems, and it's something that I think is on the radar, but I don't think this is something that will be happening anytime soon. So if you're moving to HTTPS and you suspect that maybe you have mixed content issues on your website, then that's something that I'd try to solve on your side, for the moment, and not wait for any specific tool from Google.

Just to kind of back up with regard to mixed content, the thing with HTTPS pages is if you're serving the HTML on HTTPS, then all of the embedded content also needs to be served over HTTPS. Otherwise you're kind of leaking that information that you're protecting with HTTPS by pulling in unencrypted HTTP requests. So that's something that browsers at the moment show as a little yellow lock in the corner to signal that some part of the content on this page is not served in a secure way. I imagine browsers might be a little bit stricter about that in the future at some point. I know this is something that's kind of tricky to track, because sometimes this content is pulled in with JavaScript or in other ways, so sometimes it makes sense to crawl your website and double-check that you're embedding the content properly. There's also a mechanism you can use-- I believe it's called a content security policy-- that you can specify on your site, that lets browsers report mixed content errors to a specific URL on your website. So if you search for CSP, that should give you some information on what kind of possibilities there are to make it possible for browsers to let you know about mixed content problems on an HTTPS site.
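As a practical follow-up to the "crawl your website and double-check" advice: a rough mixed-content scan only needs to find subresources still referenced over plain HTTP. A minimal sketch, using Python purely for illustration and a hypothetical https://example.com/ page; a real check would also follow internal links and look at assets referenced from CSS.

```python
# Minimal sketch: list subresources on an HTTPS page that are still referenced
# over plain HTTP (the usual cause of mixed-content warnings).
# "https://example.com/" is a hypothetical placeholder page.
from html.parser import HTMLParser
from urllib.request import urlopen

class MixedContentFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        # Only subresources count as mixed content, not outgoing <a> links.
        if tag in ("img", "script", "iframe", "source", "audio", "video", "embed"):
            wanted = "src"
        elif tag == "link":
            wanted = "href"
        else:
            return
        for name, value in attrs:
            if name == wanted and value and value.startswith("http://"):
                self.insecure.append((tag, value))

html = urlopen("https://example.com/").read().decode("utf-8", errors="replace")
finder = MixedContentFinder()
finder.feed(html)

for tag, url in finder.insecure:
    print("insecure <%s> resource: %s" % (tag, url))
```

The content security policy John mentions can complement a crawl like this: a report-only policy can tell browsers to post violation reports to a URL you control, though the exact directives should be checked against current CSP documentation.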

AUDIENCE: John, would embedding, for example, an image from HTTP that would cause this issue with the yellow lock here-- would that be a problem, or from the Google algorithm's point of view, would that page not receive the boost that the HTTPS version usually gets?

JOHN MUELLER: We do recognize that when we render those pages, and we try to take that into account with the canonicalization. So if we have the HTTP version and the HTTPS version, and we understand that the HTTPS version isn't really set up correctly, then chances are we'll try to take the HTTP version to show in search instead. So it kind of affects that, but it's not that there's any kind of a penalty around having mixed content.

AUDIENCE: But, for example, if there's only an HTTPS version, so the HTTP one is redirecting to the HTTPS one, and you're just embedding an image from an HTTP source, maybe from outside your website, and that will still show the yellow lock. Would the page get less of a boost from the HTTPS--

JOHN MUELLER: At the moment, we would recognize that as an HTTPS page, we'd know internally maybe that it's kind of serving mixed content. I imagine maybe at some point we'll say, well you have to really serve clean HTTPS for us to show that in the search results. I don't know. I don't think that there would be any kind of a penalty around that. It's just that we would say, well, we won't count this as HTTPS, so it won't get that small HTTPS ranking boost.

Is it true that if I have all no-follow links on my website for other sites, Google frowns on my site, considering I have advertising content or user-generated content? No, that's not true. If you have lots of no-follow links on your website, that can be perfectly fine. And if you have lots of advertising on your website, that's perfectly fine. If you have a lot of user-generated content, that can be fine too. That's something where we would focus on the content that you provide on your website and try to rank it like that. Otherwise, social networks like Twitter, Google+, and Facebook that all have no-follow links on the postings would be seen as being kind of problematic sites, and that's definitely not the case. So if you have a lot of no-follow links on your site, that's perfectly fine from our point of view. On the other hand, if you have natural links within your content, if you have links to great content that you think is actually useful and relevant for the user, then by all means use a followed link. Make it clear that this is a link that you're placing editorially on your website that you think makes sense to count. So I wouldn't kind of artificially put everything into a no-follow area, but I also wouldn't be afraid of using no-follow where it does make sense.

AUDIENCE: Hi, John, can I ask you a question? Imagine the case where we have an old domain, maybe three years old, with old pages about, maybe, all there is to see in Paris or all there is to see in France, and it's a very competitive field, and they are lost in pages 4-10 in the search rankings, so it's not well-ranked. But we suddenly improve the content dramatically. We make one of the best pieces of content in the area, something different. We make a new booking engine doing things quite differently and useful for the user. My question is, in this scenario, how much time-- and we get some backlinks, not a lot, but some backlinks from blogs or things like that, especially doing something new, and we are getting some attention. How much time could we expect before we have better rankings, to improve our rankings maybe to the top 10 or top 20?

JOHN MUELLER: I don't think there is any specific time that I could give for that. It can happen that we see something that's really fantastic, and we push it up within a couple of days. It can happen that we see a trickle of good feedback for individual pages that improves steadily over time. But it's hard to say how much your site improves over time compared to how much the other sites in that niche are improving over time-- where that line is. If your site is going up like this, and others are kind of flat, then, of course, you'll meet them and surpass them at some point. But if both are trending up in a similar way, then those search results are probably not going to change that quickly. So it's not the case that I could give you any amount of time and say, well, after one month, you should expect to be the top 10, or after one year or 10 years. That's something that really depends on your site on the one hand, but also the whole market, the whole niche to other sites that are active in there as well.

AUDIENCE: Now what happens a lot of times is that sometimes the competition in these very competitive fields stays very, very fixed. Pages are in a good position, so they don't have any incentive to be better. So new entrants have all the motivation to do it much better. We are a startup, so it would be great if we could have-- we are working on all this stuff, so how much time should we [INAUDIBLE] to arrive at some point?

JOHN MUELLER: I don't have any answer for that, sorry. I think as a startup you obviously have some advantages in that you can move quickly, you can change quickly, and you can focus on something that is kind of untraditional that grabs people's eyes and makes them interested in what you're offering. So I think there are a lot of advantages there if you're a passionate startup active in an area that's maybe gotten a little bit stale, in that there are some good competitors, but they're not really fighting for the market anymore. So I can definitely see that there is lots of opportunity there, but at the same time, it's not easy. You need more than a good idea and a nice-looking website.

AUDIENCE: Thank you very much. Maybe I'll send my URL when the work is finished to take a look.

JOHN MUELLER: Posts that are quite long and have 7,000 words on my site are not showing entirely in Fetch as Google in the preview window. Shorter posts are OK. Is this because the preview window only shows a part of the entire page, especially of lengthier ones? Yes, I believe the preview screenshot is limited in size, so that's something that doesn't expand for really long pages. I think that's just for usability purposes within Search Console. It's definitely not the case that we would limit the amount of content that we would see on a page like that. There is a limit on our side with regard to how long HTML pages can be for us to actually index the content, but usually that's in the order of, I think, several megabytes of HTML. So if you're writing something that has 7,000 words in it, then you're not going to be reaching several megabytes of HTML. One thing that you can do in a case like this to double-check is to find some text that's unique and towards the bottom of your page, and just search for that in Search and see if your pages show up. If they do show up, then of course we're indexing that content, and we can bring it up in Search.

I have an image-centric site that uses an image lazy load script where Googlebot can still index the images. Google Fetch and Render shows all blank images, because Google can't render the images. Could it be hurting their rankings from a quality perspective? I'm not really sure how you're serving those images if Fetch and Render can't see them, so that's something you might want to double-check to see what you're doing there. For example, one thing I've seen on a site that uses lazy loading images is that the images only load when you start to scroll along a page. Of course, when Googlebot is rendering the page, it's not going to be scrolling through the page to see what happens. It's going to render one essentially large page view and try to get all of the content from there. So if your lazy load image relies on any kind of interaction with the user and the browser before those images are shown, then chances are Googlebot won't be able to actually see those images, because they're not loaded. So that's something where you might want to take a look and see if that's the case with your site, with the code that you have there. One option could be, of course, to change to a different kind of lazy loading that works for Googlebot. Another option might be to just directly embed maybe the main images or the first set of images in the HTML so Googlebot can definitely pick those up, and also so that the first page view of a user can show those right away. That way we can at least get those main images. In any case, this wouldn't be hurting the rankings of the website in web search. It would just be affecting whether or not we can pick up the images for image search.

My site currently says in Google Search Console that your site's domain is currently associated with the target United Kingdom. Will this affect its performance in other English-speaking nations? Should I add hreflang, because I can't undo the UK setting? If your website is hosted on a .uk top-level domain, then we'll automatically associate the country UK with your website. That's similar with other country code top-level domains, where we'll automatically associate that country with your website, because usually that's what users expect there, and you can't change that.
The only place you could change that is if you use a generic top-level domain like a .com or .eu, or anything from those new top-level domains that are coming out. Then you can set your geotargeting to whatever you want. With regard to the performance, what happens with geotargeting is when we recognize that a user in that country is looking for something that's probably local, then we'll use that to boost your website in that country for those users searching for something local. So if someone is searching for a local bookstore or a local business to buy an iPhone or whatever, then that's something where geotargeting can help to kind of push your site slightly in the search results. So it's not that it would demote your site if it weren't geotargeted, but it's just that maybe other sites are recognized as being local, and they would be promoted instead of yours. Your site's performance in other countries won't necessarily be less than if your site were just not geotargeted at all. What you can do if you're targeting users in multiple countries-- so maybe you have a UK version of your site and a US version of your site-- is to use either two domains, like a UK version and maybe a generic one for the US, I guess, since there isn't really a US top-level domain that we'd use there. Or you could use a generic one in general, and set up subdomains or subdirectories where you say this subdirectory is for the UK and this subdirectory is for the US, and then we can geotarget those sections separately. The hreflang markup between the pages is more so that we pick the right page in the search results when we do rank your site. So hreflang doesn't affect ranking. It just swaps out the URL. So if a user in the US were searching and we were showing your UK page, if we know that there is this hreflang connection to the US page, then we would swap it out and show the US page instead.

But it's not that your site will be ranking higher in the US. It's just that we'd be showing the more correct URL.
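Since hreflang only swaps which URL is shown, the practical work is emitting a complete, reciprocal set of link tags on every variant page. A minimal sketch, with hypothetical example.com / example.co.uk URLs and Python used only to generate the tags:

```python
# Minimal sketch: emit a reciprocal set of hreflang link tags for one page that
# exists in several country/language variants. URLs and codes are hypothetical.
variants = {
    "en-gb": "https://www.example.co.uk/widgets",
    "en-us": "https://www.example.com/us/widgets",
    "x-default": "https://www.example.com/widgets",
}

def hreflang_tags(variants):
    # The same full set of tags goes into the <head> of every variant page,
    # so each page's annotation is confirmed by all of the others.
    return "\n".join(
        '<link rel="alternate" hreflang="%s" href="%s" />' % (code, url)
        for code, url in sorted(variants.items())
    )

print(hreflang_tags(variants))
```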

AUDIENCE: I have a question about hreflang. If we don't have all the pages translated, page for page, from maybe .com to .us or .uk, is that a problem?

JOHN MUELLER: No. We do this on a per-page basis, so if for hreflang you have some pages with a lot of languages and other pages with just very few languages, that's perfectly fine. That's totally up to you.

AUDIENCE: Yes, because we can't translate everything at the same time.

JOHN MUELLER: That's completely normal.

AUDIENCE: John, does geotargeting take effect for people searching for something relating to a location, but they themselves aren't in that location? So if someone from US or from UK searching for a rental car in Bucharest, would geotargeting help a Romanian website because it fits with what the user is searching for, even if the user is not in that country, or is that just based on [INAUDIBLE]?

JOHN MUELLER: I don't think we'd use geotargeting for that, but if we would use geotargeting in a case like that, we would probably use that for a US-targeted website that has rental cars in Bucharest. So not that we would say, well, a Romanian website because they're looking for a Romanian location, but rather if the user is in the US and we think that they're trying to find something that matches their interest there, then we would show a US-targeted website for those queries.

AUDIENCE: OK, because I've noticed that doing that search from, like, the US, I see a lot of .ro websites showing up, so I was wondering if there's any connection with geotargeting.

JOHN MUELLER: I don't think so. These are always kind of tricky situations, especially when you're looking at tourist locations, where you know that maybe you have holiday homes in Spain, and your page is targeted for users in Spain, because you think you're offering something that's available in Spain. But actually, if your users are in the UK, for example, then you would need to target the users in the UK.

AUDIENCE: So is it a better idea to not geotarget when you're doing that, and let Google pick up on relevancy, or--

JOHN MUELLER: Sure. We try to take into account relevancy anyway, so it's something where if we don't have really clear signals that they're looking for something really local, then chances are we'll just try to pull the search results together however we think makes sense anyway.

Does Google weigh links present in the main content more than links present in the secondary content, like a menu or a sidebar? In general, we do try to recognize the difference between what we call the primary content on a page, which is maybe an article that you're writing, and the boilerplate of the page, which is something that's repeated across your website that's also used on these pages. That's usually something like the header, the sidebar, or the footer-- those kinds of things-- and we try to value the primary content a little bit more. So if someone is searching for something, then we'll assume that we'll try to match something that's in the primary content first. With regard to links on those pages, we still use those links to navigate the website, to find other pages within the website, so it's not that you need to move your navigation into the primary content, because then we'd probably just try to recognize the boilerplate differently. So that's something where I wouldn't artificially move links around if they're in the sidebar or the footer or something like that. We still pick those up. We crawl through that, and we pass page rank through those links. That's completely fine.

I noticed a drop in rankings on a 10-year-old site after two off-page situations happened. A set of IPs hit my site and increased my bounce rate 25% site-wide from November until June. Then in July, hundreds of spam backlinks started linking to my site. It's really hard to say what might be happening in a case like this. We make algorithmic changes all the time as well, so even if nothing were to change on this website, chances are the rankings would be changing over time. So that's something where just because it's been ranking like this for the last 10 years doesn't mean it'll continue ranking like that, because there might be some startup that comes up and dethrones you and instead shows up in the top 10. That's something where I wouldn't necessarily assume that these things are related with any kind of a ranking change. That said, I'm happy to take a look at the details. If you want to send me the URL and maybe some more information through Google+, I'll definitely take a look at that with the team here to see if there's anything that's causing a problem here that you might be able to fix, or that might be picked up in a confusing way by our algorithms.

My client marked up a page listing open houses. His site focuses on Austin real estate, so he competes against larger real estate listing sites. The schema isn't clean, yet competitors who are ranking seventh for open house terms still get their markup shown. In general, using structured data markup doesn't directly affect your site's ranking. If you add this markup to your pages, that's something that helps us to better understand your pages' content, but it's not something where you'll see any drastic change in rankings in the search results. So just because you're marking up the content using structured data and your competitors aren't doesn't necessarily mean that you'll outrank them-- they're ranking regardless of your structured data markup.
So I think that's something where you shouldn't assume that adding structured data markup to a page will significantly change its ranking. Rather, I would do this to help us better understand the content on your page, and of course to make sure that we can pick up things that we could use for Rich Snippets, because while Rich Snippets also aren't a ranking factor, they do make the search result a little bit more interesting, and they might attract more people to click on your site even if it's not ranking first.
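For reference, "clean" markup for an open-house listing is typically a schema.org Event emitted as JSON-LD. A minimal sketch with entirely hypothetical address, dates, and names, generated with Python only for illustration; as noted above, markup like this helps Google understand the page rather than boosting its ranking.

```python
# Minimal sketch: a schema.org Event rendered as JSON-LD for an open-house
# listing. Address, dates, and names are entirely hypothetical.
import json

open_house = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Open House: 123 Example St, Austin, TX",
    "startDate": "2015-09-05T10:00",
    "endDate": "2015-09-05T13:00",
    "location": {
        "@type": "Place",
        "name": "123 Example St",
        "address": "123 Example St, Austin, TX 78701",
    },
}

# Embed this block in the listing page's HTML.
print('<script type="application/ld+json">\n%s\n</script>'
      % json.dumps(open_house, indent=2))
```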

AUDIENCE: John, I had a question regarding that. We actually work in the real estate niche, and the agents push listings onto our competitors' sites as well as onto ours. So what happens is the listings go live on a daily basis. When I look into the Page Authority, they are zero, because they are obviously new pages, and we also have a greater Domain Authority than our competitors. But in some instances I've seen that our listings don't even show up if you search for a given address, but the competitor's page shows up. We also obviously have the schema markup, but as you said, it's not a ranking signal. We do have a greater Domain Authority, and we also submit sitemaps so that the live listings are submitted on a daily basis. But there are certain instances where our live listings don't even show up, and competitors are ranked straight away on the same day. Is there any particular reason why this is happening?

JOHN MUELLER: It could be lots of things. So it could be that the competitors are just a little bit faster in the way that they're submitting these listings to Google. For example, if you have an RSS feed for your content and use pubsubhubbub, then you can set it up so that immediately when you click the button to publish something, it'll be sent to Google as well. That's, for example, one option that could be used to make sure that this content is with Google a little bit faster. The other thing is that we do try to look at a site overall to better understand how this content could be reused. If we think that a site is a little borderline or from a quality point of view not that awesome, then even if we know about new content, we might not go out and crawl and index it that quickly, because we think maybe this isn't really the best use of our time. I don't assume that this is the case with your site, but it's something to keep in mind. Where we sometimes see this is with lower-quality blogs. They'll submit content and put it in their feed, and maybe we'll pick it up after a couple of days instead of immediately like the competitor. So that's something to keep in mind, especially with a real estate site. I assume it's important that you also work to clean out your site regularly so that you don't collect this cruft over the years, where maybe Googlebot is looking at your site overall and saying, well, overall there's lots of old content here that's not relevant anymore. I don't know if new content is really going to be the best use of our time. So that's something where, on the one hand, from a technical point of view, make sure that you're doing things so that we can pick it up immediately as much as possible. On the other hand, from a quality point of view, take a step back and look at your site in the bigger picture, and make sure that you're not collecting cruft or older things and that everything that you're providing to Search is of the highest quality possible that you want to provide.

AUDIENCE: John, does the location of the listings also affect-- our competitors have the listings located just one level below the IA, but we have, let's say, six levels below the IA. Does that also affect?

JOHN MUELLER: That does affect how we crawl the page. So if on a website we have to go through multiple levels to actually get that content, that's something that does make it a little bit harder for us to get to that content. That does also affect how we pass page rank within a website. If something is linked really far from the home page, for example, and the home page is the one that everyone is linking to, then that's something that does take a bit of time for us to trickle down the page rank to those pages. So they won't be crawled as quickly or as frequently as when they're linked directly from the home page or closer to the home page. This is one reason why it, for example, makes a lot of sense to have maybe a block with new information on your home page, where you say, well, these are new things that just came in, these are things that maybe someone will be interested in who's regularly looking at your website. But it's also something that Googlebot would be interested in and say, oh well, this is new content, and they're linking to it from the home page. It has to be something useful. We'll go and index it right away.

AUDIENCE: Regarding this advice about having a blog on the home page-- if we have only a snippet of a post, and it links to a full page with a full post, is this still all right?

JOHN MUELLER: Sure. That's great, yes. Let me run through some more of these questions, and then we'll open it up for more discussions. It seems like we could do this for hours. I think there was a question about the crawling rate and JavaScript or CSS on a site. The question was, "We show the user the JavaScript and CSS when they're browsing the site, but we hide that from Googlebot to make it a little bit faster. Is that cloaking?" Yes, that is cloaking. If you're changing your pages for Googlebot to make them look streamlined or optimized in a way that users wouldn't see, then that would be considered cloaking. From our point of view, that's something we strongly discourage, because it also prevents us from actually seeing what these pages look like for the user, and there's one good place where we do take that into account. That's, for example, when we try to recognize mobile-friendliness of a site. If you hide all the JavaScript and CSS, then we probably won't be able to recognize that this is a mobile-friendly page, and we wouldn't be able to treat that appropriately in Search. So that's something where you would see that directly, but in general I'd really recommend serving Googlebot exactly the same content as you would be serving users. That makes it a lot easier for you to diagnose any kind of issues and makes it a lot easier for us to say, well, we can really trust the content on this site, we can reuse it one to one, we see exactly what a user would be seeing, and we have no problems with this site.

AUDIENCE: Thanks so much for that. I just want to add to that question as well. We produce a news agency site in South Africa, and they get about 6 million views [INAUDIBLE]. So what we plan to do is a slow migration for the [INAUDIBLE] some users to the Baker site for that. But because of the way the site was set up, with all these redirects and JavaScript redirects, it kind of slows down the site, so we are thinking of [INAUDIBLE] Google a text version of it that might be a little quicker to put all those things. I'd really appreciate that. The other question that I had is how do you treat JavaScript redirects? Is it a 301? Is it a 302?

JOHN MUELLER: Good question.

AUDIENCE: When I say JavaScript, I mean, for example, using window.location reassignment.

JOHN MUELLER: I'm not actually sure, but in general, what happens with redirects that we recognize don't match a 301 or a 302 directly is that we try to understand if this is a permanent redirect or not, and then treat it appropriately. If we see that the JavaScript redirect is always in place, then we'll treat it as a 301. If we see that it's used as a way to recognize the language or location of a user, then we'll treat that as a 302. But it's not the case that a 302 would be problematic, for example. It's just that with a 302, we would be indexing the URL that is redirecting, and with a 301 we would be indexing the URL that it's redirecting to, so it's not in any way problematic. We just try to pick the right URL to show.

AUDIENCE: The last question with regard to that. So domain [INAUDIBLE] the domain [INAUDIBLE]. Obviously we'd use the [INAUDIBLE] iPad or iPhone, domain [INAUDIBLE] and [INAUDIBLE] is exactly the same content. But obviously the layout is different, because the Baker one is more for mobile users, so [INAUDIBLE] site, for example. Should we then add canonicals pointing from the Baker pages to the domain pages, and should we block Googlebot from indexing the Baker site? Because in essence, we want the domain [INAUDIBLE] to have all the fun here, not the Baker site. But eventually what will happen is everything will go back to the domain [INAUDIBLE].

JOHN MUELLER: I would use rel canonical for that to let us know about that, but I wouldn't block indexing of those pages, because if you put a noindex on them or block them with robots.txt, then we don't know what you're trying to say. But if you say, here's the canonical-- focus on that one-- that's all we need.

AUDIENCE: Perfect, thanks.
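A quick way to sanity-check that kind of setup is to fetch a few of the duplicate pages and confirm each carries a rel=canonical pointing at the primary URL rather than a noindex. A rough sketch with hypothetical m.example.com / www.example.com URLs; the regex assumes double-quoted attributes with rel before href, so it's only meant for spot checks.

```python
# Minimal sketch: spot-check that duplicate pages point rel="canonical" at the
# primary URL. URLs are hypothetical and the regex is deliberately simple.
import re
from urllib.request import urlopen

pairs = {
    "https://m.example.com/article-123": "https://www.example.com/article-123",
}

pattern = re.compile(r'<link[^>]+rel="canonical"[^>]*href="([^"]+)"', re.I)

for dup_url, expected in pairs.items():
    html = urlopen(dup_url).read().decode("utf-8", errors="replace")
    match = pattern.search(html)
    found = match.group(1) if match else None
    status = "OK" if found == expected else "check this page"
    print("%s -> canonical %r (%s)" % (dup_url, found, status))
```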

JOHN MUELLER: We entered the hreflang tags, but Google Webmaster Tools reports a lot of mistakes, even though the tags are inserted correctly. What's happening? I'd have to take a look at the page to see what exactly is happening there. What I would do in a case like this is post in the Help forum, because the people there are pretty good at recognizing issues like that. Oftentimes it's just a matter of putting hreflang in the right location, so it has to be in the head of the page, making sure that it's confirmed from all sides-- those kinds of things. But that's something that's usually pretty easy for the folks in the forum to recognize. Another thing that might be playing a role here is that if you fix these mistakes, and they're still showing in Search Console, then that's probably just because we haven't been able to recrawl all of the pages in that set yet, so sometimes it's just a matter of time for us to see those changes.

We updated our site to a responsive design, and the star ratings in Search disappeared due to bad schema markup. We've since fixed the markup, and months later the ratings are still not shown. Are we missing something? Can you take a look? If Rich Snippets aren't shown for a site, then on the one hand, it could be a technical reason. On the other hand, it could be for policy reasons, that you're using them incorrectly. It could also be for general quality reasons, in that we think this site isn't really something that we can trust completely, so maybe we shouldn't be showing the Rich Snippets right away. So make sure from a technical point of view you're using them correctly, from a policy point of view you're using them in a way that complies with our policies, and if both of those are OK, then I would really focus on improving the site significantly across the board. I still have a bunch of questions left, but I see people still have things on their mind here in the Hangout as well, so let me just open it up for discussion from you guys.

AUDIENCE: Regarding CSS and JavaScript files, let's say we have a situation where the CMS is built in such a way that whenever you do a modification to the CSS file or the JavaScript file, it automatically gets a new file name as well, for some reason. And originally Google crawls it, or let's say we do a Fetch as Google and submit it to the index, and everything looks fine. But a few days later, you look at the cached version, and Google cannot see the CSS file that it used to see when it was part of the website, even though when you go to the website now, you see it normally, because it uses the new CSS file. Would that be a problem? So if you're seeing the cached version, you see only the text form, and you don't see any CSS handling. Is that something that Google takes into consideration, or is it just what Google saw the first time that matters?

JOHN MUELLER: I believe on the cached page we wouldn't show the CSS directly anyway. We would only refer to whatever you have on your website at the moment. So that's something that wouldn't be visible directly in the cached file. I'm working on a blog post, actually, on best practices for embedded content, and this is one of those things where, on the one hand, I'd recommend not changing the URL so frequently. So if at all possible, don't put a session ID into the image or CSS file URLs. The other thing is if you do change the URLs, then make sure there's a redirect. So even for a CSS file, if you change the URL of a CSS file, make sure that the old version is redirecting to the new one, so that when we cache the old version, we can refresh the old version. If we see the redirected new version, then we'll be able to use that for our rendering as well. That's something you would do with normal HTML pages anyway. Just make sure you're following those kind of practices as well for embedded content, be that images, CSS, or JavaScript files.
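To illustrate the "redirect old asset URLs" advice: if a CSS or JavaScript file gets a new fingerprinted name, the old path should answer with a 301 to the current one. A minimal standard-library sketch with made-up file names; in practice this mapping would live in the web server or CMS rather than a standalone handler.

```python
# Minimal sketch: answer old fingerprinted asset URLs with a 301 to the current
# version so cached references keep working. File names are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

ASSET_REDIRECTS = {
    "/static/site.abc123.css": "/static/site.def456.css",
    "/static/app.abc123.js": "/static/app.def456.js",
}

class AssetRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = ASSET_REDIRECTS.get(self.path)
        if target:
            self.send_response(301)                 # permanent redirect
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), AssetRedirectHandler).serve_forever()
```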

AUDIENCE: But isn't the problem from a ranking point of view something that Google might lower the [INAUDIBLE] of the page, just based on the cached version?

JOHN MUELLER: No. I wouldn't worry about the cached version too much. That's kind of separate from the whole crawling and indexing part. We do use something from the index for the cached version, but I wouldn't say that this is the version that is the reference for the ranking of this page. So those update cycles are sometimes a bit separate. But in general, if we can't get to this CSS file properly, we can't render the page properly, we can't take the rendered view into account properly, or it just takes a bit longer for us to actually re-render those pages. That's something where if you're looking at it from a mobile-friendly point of view, for example, then we might not notice that it's mobile-friendly, because we can't get the CSS to load properly. That's one aspect that might come in, but otherwise it's less of an issue where if this is a normal page, and the CSS is just not able to be loaded, then we have to live with that too.

AUDIENCE: But from a mobile-friendly point of view, if at point A you crawl the website and everything is looking fine, then the CSS got mixed up and that same file doesn't exist anymore, but when you recrawl the website, you see the new version and everything looks fine as well, then the cached version doesn't really [INAUDIBLE].

JOHN MUELLER: Yes. What happens in a case like that is that we have to refresh all of the embedded content too, and in order to render the page, that might mean that it takes two or three more cycles than if we could just use the cached version that we already have of that CSS file.

AUDIENCE: I have another question about content in this case. If we have used statistics from several sources to build an article about how long people wait to do whatever or something like that, is it useful to cite the sources in order to show that the data we used is real?

JOHN MUELLER: Sure. From our point of view from Search, it's not that we would look for links to high-quality sites and say, oh, this must be high-quality content just because it's linking to high-quality sites. So it's something, I think, that makes sense for users. It's not something we would directly use for Search. This is an old, spammy tactic that was used maybe 10 years ago, where someone would put together a page on some spammy topic, and on the bottom they would put links to Wikipedia and Google and CNN, and say, oh well this is a really high-quality page, because it has links to high-quality content. So just because it has links doesn't necessarily mean that it's a high-quality piece of content, but of course for users, that's probably something they'd find useful.

AUDIENCE: Yes, because we are using data, and we could have invented that data, so how would anyone know? We are trying to do our best, so it's a way to show that it's real, or as real as we can make it, because it's something derivative. We don't have the exact statistic for what we want to show, so we use some different statistics and create something with them. So that's the point. We are trying to show that we are doing a good analysis with that.

JOHN MUELLER: Yes, I think for users, that definitely makes sense. Personally that's something I always look for, if someone is saying, our statistics show this and this and this, and I'd like to see how you came to that conclusion. But from a search point of view, that's something that we wouldn't say, you have a link to an Excel spreadsheet. Therefore your content must be good.

AUDIENCE: Another question: we have a long page with a lot of content about, maybe, a hotel in Paris-- what to do, where to go, and how to choose-- and it's quite, quite long-- too long, maybe, for a user. Is it better to separate it into several pages, or is it fine to keep it as one page with all the content? What would be best?

JOHN MUELLER: I'd leave that up to you. I'd try to split it into sections that logically can stand on their own. You can think of this as similar to if you have a shoe store, and you have one model of shoe that you have in different sizes and different colors-- does it make sense to make separate pages for the different sizes and colors? Sometimes it might make sense if this is something really specific and special. Sometimes it might make sense to fold it into the main page. So if the individual sections make sense, in the sense that users might want to go to them specifically, then sure, split them off. If, on the other hand, it makes sense to just have the whole thing, then I would keep that together as one thing.

AUDIENCE: If the text is too long, is it bad, is it good, is it the same-- does it matter?

JOHN MUELLER: The one aspect-- I think Mihai is mentioning it as well-- is that it takes a while to load these pages. If they're really long, then maybe it makes sense to split them up for practical reasons. But essentially that's left up to you. That's not something where we would say, from a Google Search point of view, a page should be a maximum of five full pages or something-- that's really up to you.

AUDIENCE: Can I touch base on that question? I've noticed a lot of mainstream sites recently that are starting to do exactly what he's talking about. You'll have an article that's long, but it could still fit on one page. But in that article they'll have paragraph 1, next page, paragraph 2, next page, paragraph 3, and so on and so forth, and a lot of people in the SEO world are saying that those sites are doing that so that they can raise their time on site and page views, because that is a ranking factor. Is there any truth to that?

JOHN MUELLER: I suspect they're mostly doing that for ad views, not really for any SEO advantage there. I just see that on sites and blogs when they talk about this, in the sense that of course we have ads where you get paid per impression, and if you split an article up into 10 pages, then you have 10 impressions for the same article. So that's something where I suspect most sites would be doing it for that, not for any kind of SEO advantage. I think from an SEO point of view it's almost counterproductive, in the sense that these individual pages don't really have a lot of unique content on them anymore, and it's really hard to understand the context of this content without having some additional lines of text on those pages. So that's something where I think, from a ranking point of view, it makes sense to have enough content on the page so that it can stand by itself. Of course, from an advertising point of view, I can't really speak for that.

AUDIENCE: So if you're not trying to juice your advertising, it's best to have all content on a single page.

JOHN MUELLER: There are definitely reasons to split things up, maybe practical reasons. When it really gets too long, maybe it makes sense to split them up into logical chunks, because you notice people are looking for this chunk of content specifically, but essentially that's left up to you.

AUDIENCE: I had one other question for myself. I had talked to you before about it, but we use a combination of Cloud Player, MaxCDN, and W3 with dedicated servers. We've got two or three of them with the same content, so that with any traffic spikes it splits. So what we do is we rel canonical the images, because of course they'll be at cdn.ourdomain.com, and like I've told you before, those are not showing up in Webmaster Tools as indexed images. So we've made sure again that they do rel canonical to the original source. That way we avoid any duplicate content problems. But they're still not getting indexed. Is there anything that we're maybe missing, that maybe we could do better to get that done?

JOHN MUELLER: I don't know. I'd double-check to make sure that Googlebot can see those images using the Fetch and Render tool, for example, so that you can double-check to see that they're actually making their way to us.

AUDIENCE: Yes, they show up.

JOHN MUELLER: Sometimes you can also check in image search by doing a site query. It's kind of tricky in a case like this if the images are hosted on a different domain than the pages themselves, so sometimes you'll have to do a site query for the hosting domain and sometimes for the embedding domain to see if they're actually showing up there. But I'd double-check to make sure that you're really looking at the right things there, and not that you're seeing some reports in Search Console that are showing something that doesn't really match what you would find when you practically search for it.

AUDIENCE: OK, we'll check again.

AUDIENCE: Do you have time for one question more?

JOHN MUELLER: Sure.

AUDIENCE: Thank you. I was wondering-- we have a booking engine, so our listings, as in the conversation we had before, Google considers as content also, if I'm right. Our listings are actually very visual, very graphic-- images and graphic bars and so on. Should we put some text around them to have more context, maybe to be considered better by Google than the other results?

JOHN MUELLER: Sure. Having context around images definitely helps. We have some really smart algorithms to try to understand what an image is about, but it's always a lot easier if you tell us directly. So if you have alt text on the image, or if you have a caption that's right below or right above the image, that helps. Additional context on the page itself helps us a lot. So all of these things add up and give us more information about the image, and help us to understand how we should rank it in image search.

AUDIENCE: In the list of results, maybe for "autos in Paris," is it good to have text for context in this case?

JOHN MUELLER: Yes. I don't know how useful image search would be for a rental car site. It's hard to say. But theoretically you could do it, I guess, if you had something creative where you noticed people are searching for it, and this is one way to get to your site.

AUDIENCE: Can I ask a question about app indexing errors? We used to have, like, 100K crawling errors for the app URIs, but since last week, it's up to 1.6 million errors. We didn't change anything, and the intent URIs are still working properly. I'm not sure why this is happening. But the weird part is also that Google is referring to either crawl dates that are old, or the app version is old.

JOHN MUELLER: Those are the two errors, or is it like a content mismatch error?

AUDIENCE: No, it's mostly intent URI errors.

JOHN MUELLER: OK. One aspect there is that we do use sampling to estimate the total number, so it's possible that we've recognized that you have a lot of links on your site that are used for app indexing, and a large fraction of those have shown this error. Therefore we assume that even more of these on your site are actually showing this error, so that's something where maybe we're estimating it incorrectly for that report. The other thing that I would probably look into there is using the app indexing API instead of relying on crawling, and seeing if it would make sense to switch your app over to that, because a lot of the errors that we see around app indexing usually come from us trying to crawl the app instead of from being able to get that information directly, and with the app indexing API you can give us that information directly-- what the deep links are, what the title and description are on these pages, and how we should reuse those.

AUDIENCE: John, is pubsubhubbub faster than pinging a sitemap? Does it get the content indexed faster?

JOHN MUELLER: Yes. I would check out that blog post that someone from the sitemaps team did, I believe last year, about when to use RSS feeds and when to use sitemaps. So, especially if you have something that's fast-moving, where you have a small section of pages that you need to get indexed fairly quickly, then using a smaller RSS feed with pubsubhubbub definitely makes sense. With a sitemap file, the problem is if you ping a sitemap file that has 50,000 URLs or a sitemap index with millions of URLs, then we have to go through all of those first before we actually find the content. And with pubsubhubbub with the RSS feed, we can actually get that content right away-- we don't even have to crawl the URL.

AUDIENCE: What if it's a news sitemap with just the latest [INAUDIBLE]?

JOHN MUELLER: Sure, that works really well too, because that's a very small sitemap file that's focused on something very specific.
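For completeness, both notification mechanisms mentioned here are simple HTTP calls. A minimal sketch with hypothetical feed and sitemap URLs; the hub and ping endpoints shown are the ones commonly documented around the time of this Hangout, so verify the current ones before relying on them.

```python
# Minimal sketch: notify a PubSubHubbub (WebSub) hub that a feed has updated,
# then ping a sitemap. Feed and sitemap URLs are hypothetical; endpoints are
# the ones commonly documented around 2015.
from urllib.parse import quote, urlencode
from urllib.request import Request, urlopen

feed_url = "https://example.com/feed.xml"
hub_url = "https://pubsubhubbub.appspot.com/"

data = urlencode({"hub.mode": "publish", "hub.url": feed_url}).encode()
resp = urlopen(Request(hub_url, data=data))   # POST the publish notification
print("hub responded with HTTP", resp.status)

sitemap_url = "https://example.com/sitemap.xml"
ping = "https://www.google.com/ping?sitemap=" + quote(sitemap_url, safe="")
print("sitemap ping responded with HTTP", urlopen(ping).status)
```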

AUDIENCE: John, one last question. [INAUDIBLE] a site with a lot of products on there. They obviously change all the time. We also take off products that [INAUDIBLE] doesn't stock anymore. What do we do with those pages? Do we treat them as 404s, because we're not going to get new models of those products? So what do we do? Is a 404 negative?

JOHN MUELLER: A 404 is definitely not negative. What would just happen with a 404 is they would drop out of the index. So if these are collectibles or very unique items, it might make sense to keep that page indexed and say, these are no longer in stock, we're probably never going to have them anymore, but here are the technical details, here's a link to the instructions, or something like that. On the other hand, if these are just consumables where millions of other sites have the same content, I would just 404 them and move on.

AUDIENCE: Thank you so much. And thanks for the robots.txt discussion last time.

JOHN MUELLER: Great. Good to hear that was useful. All right. With that, let's take a break here. It's been a really interesting Hangout-- lots of questions here. I'll try to go through some of the extra questions that didn't make it, and see if there's anything I can add, maybe in the event listing, to try to cover some of those. I don't know how much time I'll have today to actually get that done, but I'll put it on my list of things to do. Thanks again for joining. I'll set up a new Hangout again, maybe in two weeks, and maybe I'll see some of you again then. Have a great weekend, everyone.