Google+ Hangouts - Office Hours - 29 January 2016



Transcript Of The Office Hours Hangout


JOHN MUELLER: Welcome, everyone, to today's Google Webmaster Central office hours. My name is John Mueller, I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do is talk with webmasters and publishers like the ones here in the Hangouts, or answer any questions that are posted in the forums, on Twitter, wherever. So we try to get your feedback in and see what we can do to help you make better websites. All right. As always, there are a bunch of questions already submitted. But if any of you, especially those newer to these Hangouts, have any questions that you want to get answered right away, feel free to step on in and let me know. Otherwise we'll have more time for questions in between as well.

MALE SPEAKER: Hi John. I'm new, so maybe I'll start. I had a question on infinite scroll and the best practice for getting content indexed for that. For example, like a blog where new content may be added each day. The last guidance I had read was from-- this is actually the example that was built on the Google Webmaster Central Blog in early 2014, where as you scroll, pushState is changing the URL. And if JavaScript is disabled, it's still indexable. Is that still the best way to do it, or has Google sort of evolved in the way they crawl infinite scroll now, where maybe that's not necessary?

JOHN MUELLER: That's probably still the best way to handle that situation. For us, there are two aspects there. On one hand, we want to be able to get to the content somehow, so we need to-- I don't know, be able to load a page and actually see all of that content. So if it's one URL and you have to scroll forward to kind of load that content, Googlebot isn't going to trigger any scroll events. It's just going to try to load one big page view, maybe one scroll event or something like that, but it's really focusing on that one page view. So if there's content that needs these scroll events in order to be loaded, then that's something we'll miss. On the other hand, if you have previous and next links at the bottom anyway, and, for example, on a blog, you link to all of your blog posts anyway, then we probably wouldn't need to actually do the infinite scroll part to get to all of your content, because we have all of these normal links that go to that content directly as well.

MALE SPEAKER: Got it.

JOHN MUELLER: That might be something that makes it a little bit easier for you. Because if we can go directly to the content anyway, then you don't have to do anything fancy with infinite scroll for us. You can do it for the users, wherever it makes sense, but you don't need to do it for us specifically.

MALE SPEAKER: Got it. OK. And if there is-- an alternate version we're considering for the users is a version of infinite scroll, but it doesn't automatically load more content as you scroll. There's a load more button that would go down, and that's JavaScript as well. So you would recommend still handling it the same way? Google's not going to press that button and continue to load all of our content on its own.

JOHN MUELLER: Exactly. Exactly.

MALE SPEAKER: OK. OK, great. Thanks John.
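As a concrete illustration of the pattern discussed above, here is a minimal, hypothetical sketch of the progressive-enhancement approach from that 2014 Webmaster Central blog post: plain paginated links that Googlebot can follow, with JavaScript and pushState layered on top for users. All URLs and element IDs here are made up for the example.

```html
<!-- Hypothetical sketch: posts for the current page are rendered server-side,
     and the plain "More posts" anchor is what Googlebot actually follows. -->
<div id="posts">
  <!-- server-rendered posts for /page/1/ -->
</div>
<a id="load-more" href="/page/2/">More posts</a>

<script>
document.getElementById('load-more').addEventListener('click', function (e) {
  e.preventDefault();                       // users get in-place loading...
  var link = this;
  var nextUrl = link.getAttribute('href');  // ...crawlers get a normal link
  fetch(nextUrl)
    .then(function (r) { return r.text(); })
    .then(function (html) {
      var doc = new DOMParser().parseFromString(html, 'text/html');
      // append the next page's posts for the user
      document.getElementById('posts').insertAdjacentHTML(
          'beforeend', doc.getElementById('posts').innerHTML);
      // reflect the paginated URL in the address bar
      history.pushState({}, '', nextUrl);
      // advance the link to the following page, or remove it at the end
      var next = doc.getElementById('load-more');
      if (next) {
        link.setAttribute('href', next.getAttribute('href'));
      } else {
        link.remove();
      }
    });
});
</script>
```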

JOHN MUELLER: Sure. All right. More questions from any of you folks who are kind of new to these Hangouts? It's good to see some new names and faces here.

DANIEL PICKEN: I've got a question, John, if that's OK. I wouldn't say I was new, but I'll throw it in there. AMP, accelerated-- I mean, I'm looking at it now. Accelerated Mobile Pages. I think it's a platform that helps you speed up your pages, a platform that's actually a lot quicker than your traditional responsive design sites. I suppose the question is, is that something you're going to be looking at, maybe to favor over responsive or any other type of platform? I think it's open source, and I think it's pushed by Google as well. [INAUDIBLE]

JOHN MUELLER: Yeah. It's a big priority for us. At the moment, I wouldn't consider it in a state where you can replace your whole website with AMP pages. It's really primarily useful, at the moment, for more of the content packed pages. Specifically, news articles are ones that we're focusing on, where we can show those pages directly in the search results. So essentially, someone sees your link in the search results, we can load your page directly there with whatever content you have, and really show it very quickly to the user, and they can swipe between the different search results to see the other pages as well. So by having something really fast, we expect them to do a little bit more, to actually read more content, look at more pages. But that's primarily really just for the content pages at the moment. I expect it will branch out from the news side to more different types of articles fairly quickly, but I don't think it will be such that you can just replace your whole website with AMP pages, at least anytime soon. I think, at the moment, it'll probably be more like an RSS feed, where you provide an alternate version of your content that your CMS puts together automatically. You don't have to do any extra work, it just kind of happens for you on the side. It's not something where I'd say you need to redesign your whole website to do that.

DANIEL PICKEN: OK. And you think you may favor that at some point when you get to that stage, so that if you aren't-- [INAUDIBLE] AMP, oh.

JOHN MUELLER: Yeah.

DANIEL PICKEN: If you don't use AMP, then you might, I suppose, not see a ranking benefit for mobile if it's not [INAUDIBLE]. Or is it too--

JOHN MUELLER: I think that's probably too far out at this point to really make any judgment calls on that. I think, with regards to news content, of course, if there is this kind of AMP carousel on the top of the search results for the news queries you're trying to rank for, then obviously it makes sense to also be a part of that carousel, because then you're very visible there. But it's not such that we'll say, oh, you have seven AMP pages, therefore we will rank your site higher than other sites in the search results. I think that's a story that has to be worked out over the long run. Maybe we'll see that it makes sense to promote these pages or these sites in Search, maybe we'll see that things are kind of even and it doesn't really change that much. I think it's way too early at the moment.

DANIEL PICKEN: OK, thanks.

JOHN MUELLER: All right. Let me run through some of the questions here. I'll mute you again, there's a little bit of background noise.

All right. Please, please, please offer a site audit. Yeah, we can definitely do that again. It's always tricky because it's hard to refine what you should be working on into a five minute audit and say, this is what you need to do in order to rank better. There are some sites where, obviously, some small issues can have a big effect, but for the most part, a lot of sites nowadays are kind of useful and valid, and the main feedback we can give them is that you should work on making it even better. So that's sometimes tricky. But I'll definitely see what we can do with regards to maybe setting up one of these in the next series or the series after that, so that we have a bit of time to collect sites and actually look at what we can show there.

We and other SEOs have analyzed our site against our rivals and we're better in every way, yet Google doesn't believe us. What can we do to rank above all of our rivals? I think this is always a tricky topic, because the things that you might look at aren't necessarily exactly what Google might look at with regards to ranking and their algorithms. One really useful aspect here is that for crawling, indexing, and ranking, we look at a lot of different factors. We say over 200 different factors, which means you don't have to match your competitor one to one, or be one better on each of these factors. You can focus on things that are your own strength, and we'll try to balance that with things that we find on other sites to see how we should rank those against each other. So it's definitely not the case that if you see a competitor that has, I don't know, five links and you have six links, your site will automatically rank better. We try to look at the balanced view of these websites. Sometimes those are things that are really hard to quantify, such as how useful this really is for users, and where we think it actually makes sense to show this content, and these are things that are really hard to judge, especially if you're looking at your own site.

One thing that I found really useful is going back to the old blog post by Amit Singhal-- which must be, I don't know, three or four years back now-- where he goes into the things that you could look at with regards to a high quality site. It's not so much that you want to look at these as one to one algorithmic factors, but as something where you would take someone who's outside of your website, who is not directly associated with it, and have them give you honest and direct feedback on these different factors, which could affect how users look at your website, how they feel about your website, which in turn might also reflect what we would look at from our algorithms' point of view. So that's something where it really helps to take a step back, have someone else who is neutral about your website give you honest and direct feedback, and accept that feedback as well. So don't just go, oh, well, this guy just doesn't know my business, therefore I don't want to trust his feedback on why my site looks terrible; I chose these colors myself, they're awesome. You really want to have neutral feedback about your site and things that you can work on. So that's what I'd recommend doing there: not just focusing on individual ranking factors and saying, well, mine is quantitatively one better than my competitor's, but really looking at your site overall, and getting honest feedback from people who aren't directly associated with your site.

We had some low quality mini commerce sites which we removed about 18 months ago, but we 301'd the pages at a domain level to our main site. These domains are set to expire soon. Should we renew them to keep the 301s and their value in place or not? From my point of view, that's kind of up to you. I think, if you say they're low quality and you redirected them more than a year ago, then you're probably not going to change anything by not having them redirect anymore. Sometimes it makes sense to keep those kinds of sites around, just because people are still going to those URLs directly. That might be one source of traffic that you have; especially if you had them maybe in ads, or if you still see people who have them bookmarked, then that's something that might play a role there. From our point of view, it's not something where I'd say you should or should not keep redirecting those, or renew those. That's more up to you, and almost more of a marketing question.

Auditing our site to make sure all advertising links are rel="nofollow", we found a couple of cases where there is an affiliate URL inside a form tag that has an action. Is this a problem? If it needs to be nofollow, how do we do it inside a form?

JOHN MUELLER: That's an interesting question. From what I've seen on our side, we use these kind of form links as a way to discover new URLs, but not as a way to forward page rank. So that's something where you wouldn't need to actually nofollow those specific form links. And as far as I know, you can't really set a rel="nofollow" attribute on a form anyway, so it probably wouldn't make that much sense. That said, if you really want to isolate that, maybe it makes sense to put it in an iframe and then to just robot the iframe out, to really keep that away. If you're using these form links essentially just to replace a normal static link, then maybe it just makes sense to show a normal static link instead. Of course, if you're using the form to actually send some information to the other site, maybe a form that people have to fill out, then that's not something you can easily replace with a static link. But in general, this isn't something where you'd need to put a nofollow on those form links, because we don't treat those form actions like natural links. Yeah.
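To make that concrete, here is an illustrative sketch of the three options John mentions; the partner URLs and paths are hypothetical:

```html
<!-- Option 1: if the form is really just a link in disguise, a plain
     anchor with rel="nofollow" is the simplest replacement. -->
<a href="https://partner.example.com/?ref=123" rel="nofollow">Order from partner</a>

<!-- Option 2: a real form that sends user data. There is no rel="nofollow"
     for forms, but per the discussion above, form actions are used for URL
     discovery rather than for passing page rank, so this can stay as-is. -->
<form action="https://partner.example.com/order" method="post">
  <input type="text" name="name">
  <button type="submit">Order</button>
</form>

<!-- Option 3: to isolate it completely, host the form on a URL that is
     disallowed in robots.txt (e.g. "Disallow: /widgets/") and embed it. -->
<iframe src="/widgets/partner-order-form.html"></iframe>
```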

DANIEL PICKEN: John? Can I ask on that? Forum-- how is Google detecting whether or not a website is a forum or not? How does it know?

JOHN MUELLER: Oh, a forum. So the other question was more about forms, so if you have a form where you fill out your name, address, something like that, and you click submit and it goes to an external site. With regards to forums, where you have kind of a discussion forum, I think we mostly pick that up based on the content. So we try to do things like recognize how many posts there are in threads, what the dates are, and that's something we sometimes show as rich snippets. That is mostly done just based on the content itself; it's not that we're looking for any kind of structured data or meta tag that says this is a forum, this is not a forum.

DANIEL PICKEN: If you had a forum as part-- so you've got your main commercial site, and then you have a forum where people probably talk a lot about maybe your products or services, or other services, are you saying that if you were to link out of your site, you would pass page rank, but if a large portion of your site was a forum, then that wouldn't pass page rank? Is it--

JOHN MUELLER: No, no. I think you're confusing the two words: the forum, as a discussion forum, which I think is what you're talking about, where people are discussing topics, and the other, a form, as something that you fill out, and you click Submit, and it goes somewhere else.

DANIEL PICKEN: Oh, right. I see, I see, I've got you now. So forms don't pass page rank, forums do, depending on whether or not the links are left followed, for example.

JOHN MUELLER: Exactly. Exactly.

DANIEL PICKEN: OK, sorry about that.

JOHN MUELLER: Yeah. These words that sound the same are really confusing. In German it's even worse, because webmasters talk about their site as either the page or the whole website. So sometimes you're talking about something, you're not really sure. Do they mean like page or the whole website?

DANIEL PICKEN: I could [INAUDIBLE] as well, I imagine, depending on the answer.

JOHN MUELLER: Yeah. All right, let's see. Next question here. In the past, you've mentioned [INAUDIBLE] signals, like a site is full of ads and is driving traffic away. Could you share some more details about driving traffic [INAUDIBLE]. Can you see if a visitor from Google lands on a page, spends some time, and clicks onto other sites? As far as I know, we don't see that from Web Search. So that's something where what happens on your site is essentially up to your site. But the kind of indirect effects you would see from a website like that is something that we might pick up. Take, for example, an affiliate site. People go to the affiliate site, but then they just go to Amazon and actually buy the product there directly. Then they're not going to be recommending the affiliate site, because there is nothing unique that they can actually do there. That's the kind of indirect aspect that comes into play there. It's not directly that we recognize people are doing this weird jumping off to other sites; it's more the fact that people aren't really recommending your website, or are recommending someone else's website instead. So that's something where, if you had a great website, even if you have affiliate links on there, for example, and you send people off to other sites to buy things, then if people still recommend your website, that's something we can obviously count with regards to search signals.

MALE SPEAKER: Would it be safe to say, then, that maybe Analytics data is not used in the algorithm? Because that may be, more broadly, sort of the intent of that question.

JOHN MUELLER: Yeah, we don't use Google Analytics at all with regard to the search algorithm. Part of that is just because not all sites use Analytics, so you can't really balance that. If we have some signals for this site here and no signals at all for this other site, what does that actually mean? How do you weigh that?

MALE SPEAKER: I've got a quick question, actually.

JOHN MUELLER: Sure.

MALE SPEAKER: So myself and others, we've noticed that the heading for in-depth articles has disappeared, but I'm still seeing instances where I think in-depth articles are showing up in the search results. And they're kind of denoted by these thin gray lines that will be around three articles above and below them, and those three articles kind of move together in the search results. Are in-depth articles still a thing? There's no heading now, is that how it works?

JOHN MUELLER: As far as I know, that's just a UI change. Yeah. And I expect that to change over time. We always kind of test these things and we see what kind of UI treatment makes sense there. Maybe that's something that will be refined again when we have more information from AMP articles as well. So that's something where, depending on what we have on the web, what we can pick up, we might be able to show more or less of that in the search results as well. And we try to react to how users respond to these features to see: does it make sense to show them more, does it make sense to show them less, should we make them more prominent or less prominent? These things just change over time.

MALE SPEAKER: OK. Yeah, that makes sense.

JOHN MUELLER: If you own a number of websites and you decide to give a followed link from these to, say, your main website, is this bad, unnatural, can that be penalized even though they are your own sites? So in general, if you have something like a handful of websites and you're linking to them from your footer, or you're linking them otherwise between your sites, that's less of a problem. If you have a really large collection of websites and you crosslink them, then that starts looking like a collection of doorway sites. So that's something where I'd try to find the right balance with the number of sites that you run anyway, because sometimes it makes sense to just concentrate things and have really strong websites rather than a lot of mediocre websites. That's one aspect there. On the other hand, if you have tons of cross links between sites that you own and they're all followed, then that starts looking a little bit spammy. So I'd try to limit this to a handful of sites, something around that number.

My question is about single page applications. What's the benefit of SPAs, Single Page Apps, as my website is already mobile optimized? Do I need to switch or just improve it? Does Google crawl JavaScript, Ajax, et cetera? So we do try to render these pages as a browser would, but one thing that we need there is some kind of anchor to understand that this is a specific page and that we can link to it directly. So we need a unique URL for this specific piece of content on your site. If you have a single page app where you basically let the user go through your site with JavaScript and you keep track of state with cookies, for example, then we wouldn't be able to actually navigate through your site. We wouldn't be able to link to a specific part of your site. As long as you're changing the URL, for example with pushState, then that's something where we can say, well, this specific URL leads to this specific block of content. Therefore, we can show it in Search, we can send users there, we know that they'll be able to see the right content. So that's one big aspect to keep in mind: that we actually have this unique identifier for each piece of content.

The other thing is more on a technical level: we need to be able to crawl and index all of this content. So if you're using JavaScript to create your content, we've got to be able to pull it all in. If you're using server side responses, these Ajax responses that you pull in and display on your website, then that's something we also need to be able to crawl. So these all need to be crawlable and indexable; for example, they can't be blocked by robots.txt. These are things that you can try out as well: in Search Console, if you have the site verified, you can enter a URL and request that the page be rendered as Googlebot would render it, and then you'd see what content Googlebot is able to pull in and what it looks like. You'd also have information about any blocked content that we run across along the way. So I'd definitely test it with the testing tool and make sure that we can actually pull up your content. And if that looks pretty good, then chances are we'll be able to pick it up and show it in Search.
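A minimal sketch of what that might look like, assuming hypothetical /products/... URLs and a JSON endpoint per page. The key points from the answer above are the unique, standard-looking URL per piece of content and the crawlable server responses:

```html
<nav>
  <a href="/products/widget" class="nav-link">Widget</a>
  <a href="/products/gadget" class="nav-link">Gadget</a>
</nav>
<main id="content"></main>

<script>
// Pull in the server-side response for the current URL. This endpoint must
// not be blocked by robots.txt, or Googlebot can't reach the content.
function render(path) {
  fetch(path + '.json')
    .then(function (r) { return r.json(); })
    .then(function (data) {
      document.getElementById('content').textContent = data.body;
    });
}

document.querySelectorAll('.nav-link').forEach(function (link) {
  link.addEventListener('click', function (e) {
    e.preventDefault();
    // one unique URL per block of content, so Search can link to it directly
    history.pushState({}, '', this.getAttribute('href'));
    render(location.pathname);
  });
});

// handle back/forward navigation, and direct entry on a deep link
window.addEventListener('popstate', function () {
  render(location.pathname);
});
render(location.pathname);
</script>
```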
Another one about Angular. We want the best, the fastest UX on mobile web. We wonder if we're hurting our organic traffic on mobile. What precautionary measures should we take before adopting this JavaScript framework? Again, like I mentioned before, I'd really double check this and see that things are actually working well, that we can crawl and index that content, using Webmaster Tools or Search Console. And one thing maybe worth mentioning here specifically: before you revamp your whole website to use this technology, maybe set up a set of sample pages, either within your website or external to it, that you can track to see how Google actually picks up this content. So if there is a specific configuration that you want to use, one that you think will allow your developers to be really productive and create something really fantastic, then set it up in a test environment and see how Google can crawl and index that. And if you're happy with how that works, then maybe it's worth following up and implementing it on a greater scale. But if you see that it's not being picked up properly, then you might want to double check what's happening there and why it's not being picked up. There could be technical issues on your side, or Googlebot just might not be able to completely parse all of the JavaScript in that framework; any of these things might be happening.

Does the page need to have no errors in order to be considered a valid AMP page? Yes and no. From a purely technical point of view, if it's a valid AMP page in the sense that you add #development=1 to the URL and you see that there are no validation errors, then that means that this is, from a technical point of view, a valid AMP page. And we could theoretically use this as AMP content. Other services could theoretically use it as AMP content as well. On top of that, there is the structured data that we require at the moment for pages to be shown in Search. That's, specifically, I think, either the blog posting schema.org markup or the news article schema.org markup, so that's something that also has to be there. From our point of view, we definitely require that the AMP page be valid. That's kind of the baseline requirement. And on top of that, we're really looking for a valid implementation of this structured data markup. I assume over time that will evolve a little bit, that we'll allow different kinds of structured data as well to be shown as AMP pages in Search. And maybe there will be some tweaks to the validator where we say, oh, everyone is messing up the logo image, for example; therefore, maybe we should just treat that as a warning and not as a critical issue. But if you really want to be sure that we can pick up these AMP pages, then I'd definitely make sure that they pass both of these validators.

Google Analytics supports AMP, but we still can't get search terms. Google Analytics just shows "not provided". I suspect that's the same as with normal web pages. I haven't actually looked at what Google Analytics shows. I think Google Analytics just announced their support for AMP pages yesterday, so that's something where it's probably worth testing to see what actually happens there. In any case, what we're going to be doing, as far as I understand, is give you this information in Search Console as well, so that you always see the clicks and impressions that we would show anyway, together with the keywords that were used for searches. You might not see the specific query terms in Analytics, which I think makes sense, to be consistent. You'd see the traffic, obviously, in Analytics; we'd see more of the search information in Search Console.
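For reference, a stripped-down sketch of what such a page might contain. The URLs are placeholders, and the mandatory AMP boilerplate style is omitted here for brevity; checking the page with #development=1 appended to the URL, as described above, will flag anything missing:

```html
<!doctype html>
<html ⚡>
<head>
  <meta charset="utf-8">
  <title>Example article</title>
  <link rel="canonical" href="https://example.com/article.html">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
  <!-- required AMP boilerplate <style> goes here -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <!-- the structured data John mentions as required for Search at the time -->
  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "NewsArticle",
    "headline": "Example article",
    "datePublished": "2016-01-29T08:00:00Z",
    "image": ["https://example.com/images/lead.jpg"]
  }
  </script>
</head>
<body>
  <h1>Example article</h1>
  <p>Article text...</p>
</body>
</html>
```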
Can you please take me into the call so that we can explain our case regarding single page technology on mobile web? I don't know if you're here. The difficulty here is that these Hangouts are limited to 10 people, and if you're not in the first batch of people who jumps in, or if nobody leaves in between, then it will kind of be filled up. Maybe what would make sense here is to actually post in the Help forum. You're welcome to send me a link to a forum thread, and I can take a look at that and review the specifics of what was mentioned there. Because as soon as we look into specific use cases of, I want to use this specific setup, that's not something I can easily do live in the Hangout.

Do we have access to the Google Search Console? My client has been trying to get added as a user to both the www and the store. subdomain accounts, but they haven't had any luck in tracking down the verified owner who would have admin rights to add a user. This is a question we sometimes get: who actually has my site verified, and how do I find that person so that I can get added as well? We can't help with that. So we can't tell you who has any random website verified. That's kind of up to the person who has it verified. However, what you can do is just add new people to Search Console. So anyone can verify the site using the normal verification methods, and all of the data will be there. You will also see the other owners shown there. So just add a new user to the account, make sure you have the verification files on the server, and everything will work just fine. It's not like Analytics, where you have to have exactly that account to get the same data again. With Search Console, you would essentially see all of that data even if you were a new user for that site.

It looks like another copy of the question. We're concerned about the metric "time spent downloading a page" in Search Console, which seems to increase when our traffic rises. I'd have to take a look at that forum thread to see what specifically is happening there. But in general, if the load on your server is high and Googlebot is also trying to crawl, then it might be that it's a little bit slower. What usually happens in cases like that is Googlebot will actually crawl a little bit less, because it tries to stay out of the way of the normal traffic on your website. So if we're seeing that your server is really slow to respond to requests, then we'll probably try to back off. The thing also to keep in mind with this specific metric is that it's on a per request, per page basis. Which means if you have some pages that are really, really large, and we happen to go and crawl all of those pages at once, then you might see that spike as well. So specifically if you have really large PDF files, things like that which we would crawl for Web Search, then you might see a rise there. You might see a similar rise if you have functions on the server that just take a lot of processing power. So maybe fancy search features, or maybe some page where, when you pull it up, it does some really complicated calculations on the backend. That's something where, if we crawl a lot of those pages, we'll also slow your server down, and we'll think, well, we have to back off and not crawl so much. What I would do there is take a look at your server logs and see what kind of requests Googlebot is doing during that time. And if you notice that these requests are actually problematic, in that you don't need Googlebot to actually pull up these pages and you know that these pages are going to be really slow, then maybe it makes sense to put those URLs in the robots.txt so that they're actually not crawled at all. But that kind of depends on what the actual issue is there. If it's really just a matter of your server being slower as the load increases, as the number of users increases, then maybe the right solution is not to block Googlebot but rather to say, OK, I'll just upgrade my server so that it's faster for everyone.
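As an illustration of that last point, a hypothetical robots.txt sketch; the paths are made up, and this only makes sense for pages you genuinely don't need indexed:

```
# Keep crawlers away from endpoints that are expensive to generate
# but don't need to appear in Search.
User-agent: *
Disallow: /search         # internal site-search result pages
Disallow: /reports/       # pages that run heavy back-end calculations
```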
Site is crawled and indexed with no problems, but URLs can't be found in Google. And a link to a forum thread. Let me just take a very brief look at the forum thread; maybe there's something obvious to say. We offer a sitemap for each individual file. I do a site query, and there's only the home page. I'd really have to take a look at more details there. It looks like one of the top contributors is already kind of helping out, so maybe that will work there as well. But I'll keep that thread in mind and see what we can do there.

MIHAI APERGHIS: I actually took a look over that. It's a lot of PDF files hosted on that subdomain, and they don't seem to be indexed by Google, though they are indexed by Bing, for example, and it doesn't seem to be a problem with the robots.txt. I assume that maybe Google sees or implemented some kind of canonical to maybe some other pages. I'm not sure, exactly.

JOHN MUELLER: I don't know. I can take a look afterwards.

If the home page content is sparse, or images only in the form of a slideshow without content, will that affect ranking? Can I put content in a display:none div, or can I put the content in the alt text of images? So if there is no textual content on the page, then it's really, really hard for us to actually rank that page properly. Some of that we might be able to pick up through the context of the page: maybe the title of the page, maybe how it's linked from other pages within your website or from other websites. We can kind of guess a little bit at the context. But without any textual content at all, it's really hit or miss. Alt text can definitely help there, but obviously if you have a sentence of alt text, that doesn't really replace actual content on the page. There's not really that much you can put into an alt text. With regards to showing the content in a hidden div, we would see that as hidden text and we would try to ignore it, so that's not really going to help you.

What I'd recommend doing there is, on the one hand, first taking a step back and thinking about whether it actually makes sense to show these pages in Web Search. Is there actually something on here that I need to have indexed like this, and what should it be ranking for? What would you like this content to be shown for? Because sometimes you might just have fantastic images in a slideshow and you say, well, this is something more for the user within the website rather than something that needs to be shown in Search. And in those cases, maybe you could just live with these pages not getting traffic from Search and say, they're made for users within my website, I don't care about traffic from Search. On the other hand, if there is something specific that you want these pages or these images to rank for, then I'd make sure that that's really obvious on the page, so that anyone who goes to these pages understands, oh, this is about this specific topic. And if it takes a while for the images to load, I'll still know what this page is about. Those are the two options I would aim for: either saying, well, fine, I don't need this specific page or these images to show up in Search, that's fine with me; or saying, well, I do have something very specific that I want to rank for, therefore I'll make that a lot clearer on those pages.

How many characters in the image alt text? Is there any limitation? There's no limitation on the image alt text, at least none that I know of from an HTML point of view. From a practical point of view, I'd keep it reasonable. We'll probably pick up a lot from the image alt text, but it's not really that useful for the user if you put a whole novel in the alt text and it's not visible anymore.

If a website is penalized with a manual action for a company's money keyword, after recovering from it, how can someone improve its ranking in search engines? What's the answer here? So in general, if you have a manual action and you've cleaned that up, and the manual action is resolved, then there is nothing manual holding back the website from ranking as it normally would. That doesn't mean it will jump back to the rankings it had before, because maybe the previous rankings were based on doing something sneaky that the manual action was trying to prevent. So there's no absolute or simple answer here. You really have to think about your website and how it's actually working.
And the things that you are doing on your website, you can work on improving those to make sure that you're really providing a fantastic website on its own. Again, the manual action, if it's expired, that won't be playing a role anymore. So there's nothing like an anchor holding your site back from the manual point of view. But that doesn't mean it will be in the same place that it was before.

MIHAI APERGHIS: John, can I ask you about a specific use case?

JOHN MUELLER: Sure.

MIHAI APERGHIS: A webmaster has site A, and their SEO agency builds a lot of low quality directory links, for example. So the website is too compromised now, and they decide to [INAUDIBLE] to domain/site B. However, the webmaster does a 301 redirect, so they actually forward all of those signals to site B as well. And we actually noticed that, and we removed the redirect, we moved the whole site, and returned all the old site's URLs as 404 or 410 for Googlebot. Would that be enough to stop passing-- would that be a barrier for any of the links from site A, or where the signals are. We built [INAUDIBLE] site B include both of site A.

JOHN MUELLER: I would look at the links that are shown in Search Console and make a decision based on that. Some of that depends on the timing: if you do it very early on, then sure, that will block that. But if we've already seen all of those redirects and they're still in our system like that, then it might be that they're still logically tracked like that, even though you've blocked them in the meantime. So if you see those bad links in Search Console, then I'd definitely disavow them there. If they're not shown there at all, then you've probably blocked it in time to prevent them from forwarding.

MIHAI APERGHIS: OK, but when you recrawl some of those links and you see that they lead to a 410 page, for example, would you drop them from the index, or maybe do multiple crawls to make sure?

JOHN MUELLER: It depends on if we know that it's actually a 404 page. So if you've redirected all of those old pages to your new website, then maybe we don't crawl those old pages that much anymore. We might crawl the page that's linking and see, well, it's linking to this page that the last time we crawled it was actually redirecting. But we're not recrawling that page again because we think, well, it doesn't make any sense. It's been redirecting the whole time. So that's almost like a time skew problem there. So from that point of view, if the redirect was just in place for a very short time, then obviously you have a better chance of us saying well, we saw a 404 last time we looked at it, therefore we'll drop it. But if the redirect was there for a long time, then we might say, well, it's always been redirecting, therefore we'll just follow it directly.

MIHAI APERGHIS: OK, so when you reprocess those low quality links, it's not necessarily the case that you will follow them to check. There's still the need to--

JOHN MUELLER: Exactly. So with redirects we do follow them directly. But with links, we just see oh, it goes from here to here. And this one used to redirect here. So we just assume that it [INAUDIBLE].

MIHAI APERGHIS: Yeah, OK, thanks.
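For reference, the disavow file mentioned a few exchanges above is a plain text file uploaded through the Search Console disavow links tool. A hypothetical sketch with made-up domains:

```
# Low quality directory links pointing at the old site A
domain:spammy-directory.example
domain:cheap-links.example
# an individual URL can also be listed on its own
https://blog.example.org/some-spammy-post.html
```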

DANIEL PICKEN: Can I ask a quick question about interstitials, please?

JOHN MUELLER: Sure.

DANIEL PICKEN: I'll use Forbes as an example. So Forbes's site-- you've probably heard this already. But Forbes's site, it's a mobile friendly site, but before you get to their landing page, it usually flashes a page that has got an ad on it. It's a quote with an ad, they've been doing this for a while. That's a lot of their pages. Now, they're deemed mobile friendly. I wouldn't have said that was a mobile friendly experience. So I just thought I'd get your views on that.

JOHN MUELLER: It depends a bit on how that's actually implemented. If it's such that we might not even see the interstitial, then we'll look at the actual content that's shown there. Even if we do see the interstitial, and in the worst case we index the interstitial content, if that interstitial is mobile friendly, then we would say, well, this is mobile friendly content, even though we indexed an interstitial instead of your actual content. I think that's a very tricky area, and probably one where we will have to figure out some new policies over time if we see that this is actually a broader problem. But I think at the moment we do try to keep it more on a technical level, in the sense that we'll crawl and index those URLs and see what we pick up. If the content we pick up there is mobile friendly, then we will treat that as mobile friendly.

DANIEL PICKEN: OK, thanks.

JOHN MUELLER: Let me run through a bunch of the questions here, and then-- or, you've been waiting to ask a question, right?

MALE SPEAKER: OK, OK.

DANIEL PICKEN: I'll tell you what I will ask. What about Penguin? What's happening there? I thought that Penguin was definitely a January rollout, and there doesn't seem to have been any-- you probably get asked this question [INAUDIBLE]. But where are we with Penguin? I thought it was ready before Christmas.

JOHN MUELLER: I don't know.

DANIEL PICKEN: [INAUDIBLE].

JOHN MUELLER: They did say it's looking like Q1. So I don't know. I try to keep my hands away from any launch dates.

DANIEL PICKEN: So with saying Q1, i.e. could be up until March then, as far--

JOHN MUELLER: I haven't followed up with the team there recently.

DANIEL PICKEN: OK. OK, there you go. I'm done.

JOHN MUELLER: All right. Someone else had a question?

MALE SPEAKER: Yeah, hi John.

JOHN MUELLER: Hi.

MALE SPEAKER: Yeah, hi, OK. So I have a question about single page application technology. We have pretty good traffic on our mobile web. And we actually want to have a very good and fast user experience within pages on our mobile web, and we wonder about hurting our organic traffic there. From what I've researched, there are certain drawbacks to implementing single page technology on mobile web: you might have indexing and caching issues. So the first question is, should we go for single page application technology or not?

JOHN MUELLER: I think you can definitely make a website that works well in Search that's built on this technology. But there are also things that might be done which make it really hard to crawl and index those pages. It really depends on the specifics of the implementation and not so much about the theoretical architecture.

MALE SPEAKER: OK. OK. So what should the URL structure of our mobile website, of our mobile web pages, be with such a JavaScript framework? What do you suggest?

JOHN MUELLER: What I would do is make a handful of test pages and see how they work. So take the technologies that you want to use, the back ends that you want to use, and just create a rough and dirty test page, and see how that actually works in Search. And based on that, maybe make a decision and say yes, we want to do more of this, it works really well, it's easy to implement. Or maybe you'll say it doesn't work well in Search, and we would have to invest a lot of time to make this specific setup that we want to use work in Search. But with a couple test pages, you see that quickly.

MALE SPEAKER: OK. So basically we do not want to hurt our mobile traffic, because I see that there will be [INAUDIBLE] and a hash in the URL, and there will be some caching issues, or some canonical or redirection issues. So I have heard about pushState technology. What does Google say about [INAUDIBLE] technology? I mean, would it be good for us to go for pushState? What do you say about that?

JOHN MUELLER: Yes. I would definitely use standard-looking URLs and not with a hash.
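In other words, per the History API, something along these lines; the URLs are illustrative:

```html
<script>
// Fragment-style URL: everything after the # never reaches the server,
// which makes it hard to treat as a distinct, indexable page:
//   https://m.example.com/#!/products/42
//
// Standard-looking URL set via pushState, which can be crawled and
// linked to directly (the server must be able to answer it too):
//   https://m.example.com/products/42
history.pushState({}, '', '/products/42');
</script>
```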

MALE SPEAKER: OK. OK. So is there anything else Google suggests for a fast, good user experience on mobile web? Anything? I mean, if we think of an alternative to the Angular JavaScript framework, would [INAUDIBLE] suggest going with any other technology, AngularJS or any other, so we have the best and fastest user experience on mobile?

JOHN MUELLER: I can't make any recommendations on the technology. So I would check with your developers, what they're comfortable with, what works for them.

MALE SPEAKER: OK, OK. Basically, from what I have read, our developers, our engineers, are excited to adopt this technology. But we as SEOs [INAUDIBLE]. Again, so it might not hurt, you know. Because we are [INAUDIBLE].

JOHN MUELLER: Well, yeah.

MALE SPEAKER: You are [INAUDIBLE] and I'm going to say that.

JOHN MUELLER: I would definitely test it. I think this is a really interesting topic because lots of sites are actually moving to this technology, but it's also an area that has lots of potential pitfalls where if you do it wrong, then suddenly your site disappears and you can't see the content. So it's definitely worth testing.

MALE SPEAKER: OK. One last thing is that what if a user lands on our mobile web page, and we give him the option to move towards the single page app technology, and we allow bots to crawl all the pages on the mobile web, and we disallow all the pages on single page app technology. So how do you see that? How do you see this option? This is the kind of option we are giving to the user.

JOHN MUELLER: I would see this kind of as A/B testing. We have a Help Center article on A/B testing, so I'd double check what it has there.

MALE SPEAKER: OK.

JOHN MUELLER: All right, let me just run through a handful of other questions here, then we can switch to more open Q&A. In Search Console I did a fetch and render on a bunch of new pages. Googlebot said it couldn't see the images on these pages; they were temporarily unreachable. If I submit these pages to the index, will it have a negative ranking effect, as these images can't be seen? No. Specifically with regards to Web Search, we don't take the image content into account there; that's in Image Search. So for Image Search, we might not be able to pick the images up, but we would still be able to pick the pages up for Web Search.

ROBB YOUNG: John, does that mean that an image on your page has no effect on the quality or rankings of that page, then, if you don't worry about whether you can pick it up or not?

JOHN MUELLER: So whether or not you have images on your pages.

ROBB YOUNG: Are you glad I'm back?

JOHN MUELLER: I don't think there is a relevant ranking factor where we'd say well, you have five images that we can pick up, therefore we'll rank you higher than a site where we have one image that we can pick up. I don't think we will do it on that level. I know there are still a lot of sites that want to block images from being crawled on the server with robots.txt, and that's something we have to live with as well and say this page, this site, is really working really well. Therefore, we will still rank it properly. I don't think we have any kind of ranking factor that specifically says you have a great image here, therefore, we will rank you in Web Search higher. I know for Image Search, we do use those kind of factors where we try to say, well, this is a high quality image, therefore we will try to show it higher in Search. But within Web Search, I don't think we do that.

ROBB YOUNG: So, no, there's always discussion around the variety of content and the depth of content you're giving to consumers. So if there are two identical product pages, apart from one having a video or an image, some reviews, whatever, then that gives variety and quality. But then if those images aren't being picked up, you're kind of missing out on that.

JOHN MUELLER: I don't think we use those.

ROBB YOUNG: But it's quality and user experience.

JOHN MUELLER: I don't think we'd use that as a direct ranking factor. I think indirectly you'll definitely see effects, in that people stay longer on your site, they'll recommend it more to others. Maybe they'll buy more from your site, you have higher conversions. But I don't think there is any kind of a direct ranking factor where we'd say, well, there are fewer images on this page compared to this other page, therefore we will rank it lower because of the number of images that we can look at.

ROBB YOUNG: I was thinking in the most basic case, none versus one rather than three versus four.

JOHN MUELLER: I don't think we do that. I think part of that also comes from the traditional or the older thinking that some sites just robot out their images by default, so those are kind of the situations where we don't see any of the images. We don't know if they're useful images or not, but we still have to rank them as a normal website. I don't think, going from that situation in the beginning, that we'd use that specifically as a ranking factor.

ROBB YOUNG: OK.

DANIEL PICKEN: [INAUDIBLE] actually, in terms of guidelines for images, it still says to make sure your filenames are optimized and your alt tags are optimized. I know it's not a ranking factor. Do you still use alt tags, sorry, to understand the relevance of that image to the contents around it?

JOHN MUELLER: Yes.

DANIEL PICKEN: You do.

JOHN MUELLER: So for Image Search, we definitely need that kind of context. It's not that we look at this image and say, oh, there's, I don't know, a nice landscape here, therefore we'll rank it for landscape. We do look at things like the alt text, like captions on the page, other information on the page where this image was used, and we rank those together. So any image that we show in Image Search is always associated with a landing page, and we connect those two based on what we find there. It's a bit different for Web Search, though, because for Web Search-- again, as far as I know, we don't use the actual content of the images. But things like alt text are something that we do pick up for Web Search, and you can find that in Search. If you search for something that's only in an alt text, you'll see it in a snippet in the search results as well.

DANIEL PICKEN: So are you saying that you do-- so an alt tag, does that add to the content? So if I had two words as alt tag, does that [INAUDIBLE] words to the content?

JOHN MUELLER: Sure. It's essentially like a caption for the image. It's not one to one the same for us, but it's essentially kind of saying a very similar aspect with regards to text on the page.

DANIEL PICKEN: OK, thank you. Thanks for the clarification.
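A small illustration of that idea, with hypothetical file names: a short, descriptive alt text plus a visible caption gives both Image Search and Web Search textual context for the image.

```html
<figure>
  <!-- concise alt text, picked up like a caption for Web Search too -->
  <img src="/images/lake-zurich-sailboats.jpg"
       alt="Sailboats on Lake Zurich at sunset">
  <figcaption>Sailboats on Lake Zurich at sunset.</figcaption>
</figure>
```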

JOHN MUELLER: All right. We still have a bunch of questions left. Let me just browse through them, but maybe you guys can go ahead and ask some questions as well.

MIHAI APERGHIS: John, I'm curious whether there are any real plans to work more with structured data in 2016, maybe. Like more types of [INAUDIBLE] or dev kits. Or, I don't know, encouraging users to use it more. Is it useful for you, or are you trying to become less dependent on it, I guess, to understand certain things about webpages?

JOHN MUELLER: I think both. On the one hand, it's really useful for us to better understand this content. It kind of makes sure that we don't misunderstand it. So from that point of view, I think that's really useful. The other aspect, of course, is that the more sites use structured data, the more we can work on features that maybe highlight that more in the search results. I suspect that's also an aspect there, where the better we can understand these pages, the more we understand the details of where this page is relevant, the better we can show them in Search. That's one aspect there as well. So I suspect over the course of the year we'll definitely be seeing more from structured data, because it really does help us a lot to understand these pages better. And it's also something that can be used by other services as well. If you're another social network and you see that this specific markup is on these pages, maybe you can do something fancy with that and show that to users as well. It's kind of a part of the open web, essentially.

MIHAI APERGHIS: What about rich snippets? Are you planning to introduce more types of rich snippets?

JOHN MUELLER: You guys always ask these future looking questions which, to a big extent, I can't answer. I know there are always tons of people who have plans in their head at Google who are working on designs. I don't know which of those will launch, and we try not to pre-announce them ahead of time unless there's really something where we'd say, well, this specific feature is going to launch like this, and in order for you to profit from it, you need to prepare by creating this kind of markup. Which is something, for example, we see with AMP, where the preview kind of helps us to encourage people to do the right thing because they see it's going to result in this. But for the most part, other features in Search, we essentially try to launch them and then use that launch to encourage people to actually do something there.

MIHAI APERGHIS: Mhm. Well, thanks. I've got one more, actually.

JOHN MUELLER: All right.

MIHAI APERGHIS: This is regarding Panda. We all know that Panda looks at the quality of the pages and, I guess in the most simplistic way, provides a score to tell you whether it's a low quality page or not. Does it also provide a kind of boost to good quality pages, or does it just flag low quality ones?

JOHN MUELLER: It's essentially the same, right? If one half goes up, the other half goes down. So it's not something where I'd say Panda just tries to demote low quality content, essentially. The goal is to promote high quality content. But it's kind of like if you push one up, the other one will go down.

MIHAI APERGHIS: OK, so it's not like a flag. This site is flagged, so-- and this is not. It's not just a-- this is bad or not.

JOHN MUELLER: Not specifically like that.

MIHAI APERGHIS: OK. Because Penguin, for example, I think, works like that because it's a flag that Google can take, and it's kind of a-- [INTERPOSING VOICES]

JOHN MUELLER: Yeah. I think Penguin is something where you're looking more at low quality links, and that's something that is more like one sided. Rather than if you're looking at the quality of the content, there's good quality and there's lower quality. So it's kind of more like a scale. But when you're just looking at low quality factors, then that's obviously a bit different.

MIHAI APERGHIS: Right. So for Penguin, for example, for high quality links, you don't use Penguin for that. You have your own, other algorithm.

JOHN MUELLER: Probably, yeah.

So here's one that came up recently with regards to a change we made in the webmaster guidelines. We mentioned you should use correct or valid HTML. The question here is, is W3C validation a ranking factor now, or should we care about it? So it's not directly a ranking factor. It's not the case that if your site is not using valid HTML we will remove it from the index, because I think then we would have pretty empty search results. But there are a few aspects there which do come into play. On the one hand, if a site has really broken HTML-- and this is something that we see really rarely-- then it's really hard for us to actually crawl and index the content, because we can't pick it up properly. The other two aspects, which are kind of newer, are with regards to structured data and mobile. Sometimes it's really hard to pick up the structured data properly if the HTML is completely broken, and you can't easily use a validator for the structured data to actually understand it really well. The other thing is with regards to mobile devices and, in general, cross-browser support: if you have broken HTML, then that's sometimes really hard to render on newer devices. So those are two aspects that come into play there, which is why we kept that point. I think it used to say use correct HTML, and we decided, well, correct doesn't actually say anything specific, so we might as well just call it valid HTML. I think we even pointed at the validator. It's not that it's a requirement that you must use valid HTML, and that we'll remove you from Search if you don't, but it really makes it a lot easier for you to diagnose issues, and for us to extract things like structured data, for example, and to recognize that it's actually mobile friendly.

MIHAI APERGHIS: At this point do you now recommend more structured data based on JSON-LD, just so you don't have to deal with-- or users don't have the risk of messing up the code?

JOHN MUELLER: I think JSON-LD mostly comes from the developers in that they're really comfortable with that kind of markup and they prefer to use that. So we've kind of shifted to that a little bit as well. I don't think it's because we have trouble with the kind of inline structured data markup. It's more that people really like this type of markup, therefore we'll try to see if we can actually allow that as well. Let me double check to kind of see what else we're missing from the questions. There's still some really interesting questions here, but I think we're kind of out of time to go through those. But maybe we can do a few more from you all live.
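To illustrate the JSON-LD versus inline markup difference discussed just above, here is the same simple fact in both syntaxes; the values are placeholders:

```html
<!-- Inline microdata, woven into the visible HTML: -->
<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Jane Doe</span>
</div>

<!-- JSON-LD, kept in its own block, separate from the page markup;
     the style John says developers tend to be more comfortable with: -->
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Person",
  "name": "Jane Doe"
}
</script>
```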

ROBB YOUNG: What's the question here about the clear hierarchical concept? Is that-- someone's asking there.

JOHN MUELLER: All right.

ROBB YOUNG: Are they talking about flat versus hierarchical, or have you got a new guideline?

JOHN MUELLER: I don't know what specifically we changed there. I'd actually have to look at what it was before and what it is now. We kind of refine those texts for a while now.

ROBB YOUNG: You don't have a preference, do you, in terms of flat structure versus hierarchical structure?

JOHN MUELLER: Not really.

ROBB YOUNG: As long as you can get through within a couple of links.

JOHN MUELLER: I mean, what helps us is if we understand the context of the content of your pages. So if you have clear headings and we understand that this block of text, this image, belongs to this heading, then that's something that helps us to better understand how we should fit this in together. So from that point of view, that's really helpful. You can also do that in a fairly flat structure. And now you have one heading on the page, and all the-- oops. Yeah, but I kind of have to double check to see what we actually changed there with how it was before and how it is now.

ROBB YOUNG: I just googled that phrase and I couldn't find anything at all.

JOHN MUELLER: I think we just updated those guidelines yesterday, and some blogs have been finding all of these small changes that we made there. I'd really have to double check. I haven't looked into this in detail for a while, since it has to go through translations and all the reviews. All right. One last question from any of you.

MIHAI APERGHIS: John, [INAUDIBLE] back story that you have on the Wikipedia page, or can Google show Knowledge Graph panels for [INAUDIBLE], for example, that don't have a Wikipedia page?

JOHN MUELLER: I think we can show it without Wikipedia pages as well. I don't really have any example that I could point at. Because some of this information for the Knowledge Graph we do take from Wikipedia, or we kind of crosscheck with Wikipedia. But it doesn't mean that all of this information has to be mirrored in Wikipedia. So for some things we might be able to pick that up and then we're saying, well, we have this concept from the web page where we understand this is a company, this is what they're working on, we'll pick that up for a Knowledge Graph page even if it's not already shown in [INAUDIBLE].

MIHAI APERGHIS: I was asking because Wikipedia used to be less strict about what gets into Wikipedia a few years ago. So a lot of brands have a Wikipedia page, and we've been trying that for our automotive client in the US, and it's much harder to get that without a significant number of a certain type of references and so on. So I was curious whether it is a mandatory requirement.

JOHN MUELLER: I don't think it's a mandatory requirement. But I mean, it always helps us to be able to cross check the information that we get. I don't think it's an absolute requirement. I mean, for example, we also show local business information in the Knowledge Graph. And for a large part, that's not something that would be findable in Wikipedia anyway.

MIHAI APERGHIS: Do you recommend any other sources that you cross check? Other than [INAUDIBLE].

ROBB YOUNG: That's a yes, I know something, and I'm not telling you.

JOHN MUELLER: No. I haven't actually looked into the details of how we get this information in there. I think one thing that does help us is if we can recognize that a website is really the source of this content, so we can recognize this is really where the business is located. We see the information that's provided on this web page is kind of authoritative on this topic in the sense that this is the business, they have their address there, we can confirm that these are the opening hours, the phone number, those kind of things. And that really helps us to be able to trust this page. So that definitely helps compared to some random blog where you happen to put the structured data markup on there and say, well, this is for this really well known business, and you should trust me. I'm a blogger, I know what I'm talking about.

MIHAI APERGHIS: Right. I don't know if I explained correctly. I was mostly talking about brands that aren't necessarily local businesses. So like, I don't know, Nike or something like that. Is that still called the Knowledge Graph, or is it another term?

JOHN MUELLER: I don't know. I'd have to look into specific examples to be able to tell you something smarter. But in general, if we recognize that this is really the main page for this topic, and it has structured data markup that says well, this is the organization, the logo, the address, all of that, we will try to take that into account.
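A hypothetical sketch of what that markup might look like on a brand's own home page; names and URLs are invented for the example:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example-brand.com/",
  "logo": "https://www.example-brand.com/logo.png",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Way",
    "addressLocality": "Springfield",
    "addressCountry": "US"
  }
}
</script>
```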

MIHAI APERGHIS: OK, mind if I give you Google+ [INAUDIBLE]?

JOHN MUELLER: Sure. Sure. All right. Let's take one last question.

MALE SPEAKER: Hey John.

JOHN MUELLER: Hi.

MALE SPEAKER: Hey. I wanted to ask you, what are the top on-page factors that Google uses to calculate the relevancy of a page to any keyword or keyword phrase?

JOHN MUELLER: So ranking factors, or for app?

MALE SPEAKER: Ranking factors, on page ranking factors.

JOHN MUELLER: Oh man, you come with the easy questions at the end. I don't know if we'd have any ranked list like that that we can give out. So we use a lot of different factors, and it's not so much that we'd say, well, this factor is the most important, you all need to do this. Some pages use this, some pages use that, and we try to balance all of that. So it's not that there is any specific factor that I could say, well, this, this, this, and this that you would have to do. I don't have any list of top ranking factors that I can really share like that. Sorry.

MALE SPEAKER: OK, thank you.

JOHN MUELLER: Sure. All right. Let's take a break here. Thank you all for your questions that were submitted and live as well. If I didn't get to your question that you submitted, feel free to post on Google+, in our Help forums, or just add it to one of the future Hangouts, and we'll try to get to it then as well. So thanks again, and I wish you all a great weekend, and see you in one of the future Hangouts, maybe. [INTERPOSING VOICES]

MIHAI APERGHIS: Nice weekend.

MALE SPEAKER: Bye John. Thank you.