Transcript Of The Office Hours Hangout
JOHN MUELLER: OK. Welcome, everyone, to today's Google Webmaster Central office-hours hangout. My name is John Mueller. I'm a Webmaster Trends Analyst at Google in Switzerland. Part of my role is to talk to webmasters like you guys and make sure that you're getting the help and information you need from us to help make the web a great place. So we have a bunch of questions that were already submitted. And before we get started, as always, if one of you guys wants to ask the first question, feel free to go ahead.
DEREK MORAN: I'll go ahead, John.
JOHN MUELLER: OK.
DEREK MORAN: OK. About three months ago, I freaked out because I discovered I had 90 subdomains indexed for a website that didn't actually exist. And what I worked out had gone wrong is [INAUDIBLE].
JOHN MUELLER: Go ahead.
DEREK MORAN: OK. What I worked out had gone wrong is that in my DNS records, there'd been a wildcard entry. All right, so I ended up with these 90 subdomains. So I had to create Webmaster Tools accounts for them, and then I just did a complete website removal for each one. But what I want to know is, is it OK to do that and then just cut out that wildcard entry altogether?
JOHN MUELLER: Yeah. That sounds like a good thing to do there. So from our point of view, wildcard DNS is particularly problematic for crawling, because we end up crawling all these duplicates. And we think, oh, there must be something good here, we'll keep looking. And we keep running across these subdomains and trying to pick them up. So removing the wildcard DNS entry is essentially the most important part. Using the removal tools for the stuff that's already indexed is fine as well.
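For readers wondering what that looks like, a wildcard entry in a BIND-style zone file is a single asterisk record. A minimal sketch, where example.com and the address are placeholders:

    $ORIGIN example.com.
    www   IN  A  192.0.2.10   ; the real site
    *     IN  A  192.0.2.10   ; wildcard: anything.example.com resolves here;
                              ; deleting this record stops the accidental
                              ; subdomains from resolving at all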
DEREK MORAN: Yeah. 'Cause I ended up with, like, 6,000 different duplicate pages. It was just out of control.
JOHN MUELLER: Yeah. So from a practical point of view, this kind of duplicate content shouldn't be causing problems for your website. It's not that we penalize the website for that. It's just technically a problem, in that we have to kind of crawl all of these pages and then figure out that they're actually duplicates, and we'll filter them out of the search results. But it's a lot of work that has to be done on your server and on our side to actually get that far. So it's not that you'd be penalized for that. You wouldn't be demoted or anything like that. It's really just a--
DEREK MORAN: OK. Cool. Thank you.
JOHN MUELLER: Sure.
MALE SPEAKER: Morning, John. How're you doing?
JOHN MUELLER: Good morning.
MALE SPEAKER: Good. So back in May, we had a site that had a manual link penalty applied. And we actually saw the penalty appear across the main www, as well as a couple of other subdomains. And after about four months, the penalty happened to expire, and so that manual action disappeared from within Webmaster Tools. So what I'm curious about is, because those manual actions disappeared from all of the subdomains, was that a single penalty that happened to be applied across the entire site, or were those subdomains subject to individual penalties that all happened to be applied and expire at the same time?
JOHN MUELLER: So both could be possible, in the sense that if we can, we try to be as granular as possible. So we try to target a specific subdomain if that's just the one subdomain that's problematic. If we can't do that, then sometimes we're kind of broader and just apply it to the whole domain. So theoretically, both of those would be possible. Usually, you can tell if this is just affecting one subdomain. You'd see that it's just there in Webmaster Tools, and you wouldn't see it for the other subdomains. So for instance, if you have something like Blogger, for example, where you have lots of completely different websites on different subdomains, then that's something where we'd probably try to do something more granular and just say, this is for this specific subdomain. Whereas if these are essentially just different parts of your website and they happen to be on different subdomains, then it's possible that the webspam team will say, well, it doesn't make sense to target each of these subdomains individually, so we'll just apply it to the whole domain.
MALE SPEAKER: Thank you.
JOHN MUELLER: All right. Let's go through some of the questions that we have here. "Is it possible to accidentally lock out the Google crawler, other than by using the robots.txt?" I guess there are a lot of ways you could theoretically block Googlebot. There are things that you could do, for example, by blocking the IP address, or by having some kind of firewall in between that recognizes Googlebot as some kind of malicious script that's trying to scrape your website, for example, and then some firewall in your network essentially blocks it. That's something we see from time to time. Sometimes, we see that a website responds to different user agents in different ways, and for whatever reason recognizes Googlebot as some kind of a script and says, oh, I don't want to serve you any content. We sometimes see that. With the robots.txt file, one thing we do sometimes see is sites that serve a server error for the robots.txt file. So instead of serving content, they serve a server error, like a 500 error, which essentially tells us that the server has some content that it would like to show, but it can't at the moment. And on our side, we treat that as the robots.txt essentially blocking us from crawling completely. So if you serve a server error for the robots.txt file, then we'll assume that we can't crawl anything, because we don't know that we're allowed to be crawling, and we'll stop crawling completely. These are all things that we can show in Webmaster Tools, in the crawl errors section. Even if you're blocking the IP address or blocking the user agent, you'd see things like "server unreachable," those kinds of errors, in the crawl errors section in Webmaster Tools.
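A quick way to check which case your server falls into is to fetch the robots.txt yourself and look at the status code. A minimal sketch in JavaScript (Node 18+ or a browser console), with example.com as a placeholder for your own domain:

    // 200 is fine; 404 is also fine (everything is crawlable).
    // Any 5xx response makes Googlebot stop crawling the site entirely.
    fetch("https://example.com/robots.txt")
      .then(function (response) { console.log(response.status); });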
I noticed there's a whole bunch of App Indexing questions here. I can't really help with those, because I don't have that much experience actually debugging those. But we do have a separate Google Moderator page set up for App Indexing questions specifically, so I'd copy those in there. I copied down all the questions, so I'll double-check that they're already in the Moderator page. But I can't really help with that here at the Hangout at the moment. It's good to see that people are using App Indexing, though. So I hope we can resolve those questions there. Let me just clean these out here for the moment. It's a little bit easier to find the other questions.
find the other questions.Wouldn't it be better if instead
of penalizing algorithms,you invested resources in
developing and launchingrewarding algorithms that
dramatically increasethe rankings for good websites?Essentially, we do this
with things like Panda,where we try to recognize
higher quality contentand show it
appropriately in search.And even with all of our other
algorithms, if it at some pointwe're recognizing
lower quality contentand showing it lower in search,
then of course, the higherquality content that's
left, we kind ofput this up a little bit higher.So it's always the
case that thereare two sides to
these algorithms.It's never the case that we
penalize all of these websitesor we demote them in
search and nothingelse bubbles up because
we have to show somethingto use as when they research.So it's always the
case that thereare both sides involved
with these algorithms.And that's something that we
also include in our analysiswhen we analyze how these
algorithms are doingand how we need to tweak them.It's not just that we look
at the sites that we'removing from search,
but also the sites thatare showing up higher in
search and making surethat those are actually the
right kind of sites, the sitesthat we'd like to show.
JOSHUA BERG: Hi, John.
JOHN MUELLER: Yes.
JOSHUA BERG: So just to be clear, Panda does some promotion of sites, as well as filtering, or what we might call demoting, of content?
JOHN MUELLER: Well, it's always both sides that are involved there. If you're pushing some down, then the other stuff comes up higher. So it's something where it's more of a philosophical question, I guess, whether you want to look at it as bubbling up the higher quality content or pushing down the lower quality content. Because in the end, the lower quality content goes a little bit further to the back, and the higher quality content goes a little bit further to the front.
JOSHUA BERG: All right. But does it highlight any of the quality points, or is it always focusing on the problem, the negative parts of a site?
JOHN MUELLER: It always involves both of those parts. So that's something where, in our algorithms, we try to recognize the higher quality content and treat it appropriately. So it's never the case that the algorithm will just go out and look for bad signs, signs that you're doing something wrong. It would also need to make sure that it's treating the sites that are doing something right appropriately as well.
JOSHUA BERG: OK. So it's not fair to call Panda a penalty algorithm, for example. It also looks at good content, or good pages, as well.
JOHN MUELLER: Yeah. I mean, we don't call these penalties internally, because essentially they're just a way of us trying to show the more relevant, higher quality content in search. And it's not the case that we're only looking at the bad sites and saying, oh, these guys are doing something wrong, we need to get them out of our search results. It's really a matter of us trying to provide really high quality search results overall. And that means bringing up these higher quality pages that might not otherwise have ranked there.
JOSHUA BERG: OK. Does that apply more to newer editions of Panda? For example, are they better at looking at good quality content than, say, the older versions?
JOHN MUELLER: I'd like to say that, of course, every time we do an update, we try to make them better. So that's something where we try to bring this in all the time with our updates. So I don't know specifically what the newest one does there. But that's something where we're definitely trying to make sure that we're treating these sites appropriately. And when we make bigger updates that we actually call out, we do hope that it takes a significant step further in that direction.
JOSHUA BERG: Yeah. 'Cause I remember Matt talking, this was back a year or so, about looking for signals for some of the medium or borderline sites, you know, which might mean these sites are good sites and, you know, didn't need to get lumped in. So it could filter some of those borderline sites better, and maybe some of the positive aspects as well.
JOHN MUELLER: Yeah, exactly. That's the kind of feedback that we use to kind of refine our algorithms. So from time to time, we'll try to call out a request for feedback on these kinds of things. I know we've also had a feedback form open for Penguin since the beginning, that we kind of go through every now and then to double-check what's happening there. And it's always good to give us feedback in that regard. And even if we don't have anything specific open, any kind of feedback channel or something, if you see something working particularly well or particularly badly, take the time to bring it to us, so that we can talk to the engineers about it and see what we need to improve there.
JOSHUA BERG: All right. Thanks a lot.
JOHN MUELLER: Thanks. "Are you advised to continually update the disavow file if and when new URLs are found?" Yes. If you notice problems, I'd definitely add them to the disavow file, so that they can be taken into account the next time we crawl and index those URLs. So this isn't something where I'd say you fill it out once and leave it running forever. If you notice a problem, you can help us to fix that by adding it to the disavow file.
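For reference, the disavow file is a plain text file uploaded through Webmaster Tools, one entry per line: either a full URL or a whole domain prefixed with "domain:". A minimal sketch, with the names made up for illustration:

    # spammy directory found in the October link audit
    domain:spammy-directory.example

    # one specific bad page, rather than the whole site
    http://some-blog.example/comment-spam-page.html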
MALE SPEAKER: Yeah, hi John.
JOHN MUELLER: Hi.
MALE SPEAKER: Yeah. Small question for you.
JOHN MUELLER: OK.
MALE SPEAKER: Yeah. The question is that when we search for some of our titles in Google, with some keywords related to them, it displays another title, one that's not specified in the title metadata. We don't know why this is happening.
JOHN MUELLER: So with the titles, we try to rewrite them when we can recognize that the original title on the page wasn't that great. For example, if there are a lot of related keywords in the title, that's something that maybe the user wasn't that interested in, kind of this keyword stuffing issue. When we recognize that the same title is used across large parts of the website, that's something that we try to improve on. When we recognize that the title is particularly long, we'll try to find something that fits in the shorter visible space in the search results. That's particularly useful for mobile, for example. On a smartphone, you have even less space, so we have to have an even shorter title that we can show users there. And sometimes, we'll also take into account something from DMOZ, the Open Directory Project. If there's a site title there, then we might look at that as well and use that. You can block the ODP title by using the noodp meta tag. But the other titles that are rewritten, that's something that our algorithms do automatically. That's not something you can specifically block. You can help to avoid it by really making sure that all of the titles across your site are short and to the point, that they're really about the content on your page, so that we can really show them one to one in search and make sure that users understand what your page is about. One thing to keep in mind there, as well, is that titles can change depending on the query. So from our point of view, we don't just have one title for a page. We might have a small collection of titles that we swap in and out, depending on what the user is searching for, so that the user can recognize that this is really a great page for this specific topic.
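The ODP blocking John mentions is a single meta tag in the page's head. A minimal sketch:

    <!-- Ask search engines not to use the Open Directory Project
         title or description for this page -->
    <meta name="robots" content="noodp">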
MALE SPEAKER: Yeah. Thank you.
JOHN MUELLER: Does that help?
MALE SPEAKER: Yeah. Nothing else is going on.
MALE SPEAKER: All right, John, just a quick question.
JOHN MUELLER: OK.
MALE SPEAKER: You launched description snippets recently, and they seem to be showing up in a lot of the rankings, so I'd say they help quite a lot in regards to kind of adding extra information to your actual snippet. I know there's schema for a lot of sites, but looking at a lot of the sites that have description snippets, they don't seem to have any schema on them. So I just kind of wondered how you went about getting that information off the page, and if there's anything we can do to entice you to do that for our sites.
JOHN MUELLER: We have magic algorithms. Yeah, I mean, this is something that's sometimes quite hard to do algorithmically. So the clearer the information is structured on the site, the easier we can pick it up. Sometimes it helps to have tabular information, or to use clear list definitions, something like that, to help us pick it up. If you want to use schema.org markup to give us more information about the specific entities that you're talking about, that's something you can do as well. But I think it's also important to keep in mind that these are still very experimental features. And it's something where we might notice it doesn't make so much sense to show this much information in search. Or other webmasters aren't happy with showing this much information in search. Or users are confused by seeing information that was extracted incorrectly. And so I could imagine that this is something that might be changing in the future. So as a webmaster, I guess if you want to kind of leverage this, I'd try to make sure that your information is as structured as possible on the pages, and that where you can, you have that schema.org markup to let us know about entities and attributes and those kinds of things. But I can't guarantee that we'll be showing them specifically for your site or for any site.
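As an illustration of the schema.org markup John refers to, here is a minimal JSON-LD sketch with made-up values; the same vocabulary can also be embedded inline in the HTML as microdata:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Widget",
      "description": "Placeholder product used for illustration."
    }
    </script>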
MALE SPEAKER: OK. No, that's fine. I know with schema, well before, you used to have to kind of submit a form to Google to let you know that this site's got schema on it. We don't need to do that now?
JOHN MUELLER: No. You don't need to do that. We pick that up automatically.
MALE SPEAKER: OK. No, that's it.
MALE SPEAKER: John, I had a follow-up question on the auto rewriting. It would be really useful if there was a way to surface that information in Webmaster Tools. So as webmasters, obviously, we're looking at our sites and how they display, and titles will get rewritten depending on the query, as you just said. So you know, having that information available, or having a separate report in Webmaster Tools, would be really helpful for us. Could you see something like that making an appearance?
JOHN MUELLER: What would you do with that information then? So how would that kind of lead back to your website? How would you react to that?
MALE SPEAKER: So I think one of the uses would be around clickthrough rate. So if we saw that a particular URL, or a particular format rather, was being favored, then for optimization of the rest of the site, we could take those hints and those clues and apply them to other URLs.
JOHN MUELLER: OK. So essentially, something maybe in the top search queries report, where you could click on a query and see that these were the titles that we're showing, and this was the clickthrough rate for those specific titles.
MALE SPEAKER: Yeah. That would be really useful.
JOHN MUELLER: Oh, yeah. That sounds interesting. We're working on the search query report at the moment. So maybe I can pass that on to the team in time.
MALE SPEAKER: OK. Great.
JOHN MUELLER: That's interesting. All right. Yeah, go ahead.
MALE SPEAKER: Changing metadata every 15 days, will it have any negative impact on the keyword rankings?
JOHN MUELLER: Changing the metadata every 15 days, that shouldn't be a problem. I mean, one thing to keep in mind is we don't use the description or the keyword meta tags for ranking at all. So if you want to change them regularly, that's fine. If you want to keep them the same, that's fine too. That wouldn't be cause for any problem.
MALE SPEAKER: We are going ahead depending on the ranking strategies and everything updated by Google. Is there any negative impact, like a dropping of keyword rankings, from changing that data?
JOHN MUELLER: If you just change the description and the keyword metadata, I don't think we care about that at all. You can do that however often you want. I know some sites have used that to do kind of testing for the snippet, to see which snippet works best for your users, to see the clickthrough rate for the snippets. So if you want to do that, that's essentially up to you.
MALE SPEAKER: Yeah. Because we are changing the site from HTTP to HTTPS, as per Google's last update, and we're concentrating on that. We would like to know, is there any negative impact from that? Because the website is presently in good condition, and we want to keep in line with the Google updates.
JOHN MUELLER: I wouldn't expect any visible change when you move from HTTP to HTTPS, just from that change, just for SEO reasons. So that kind of ranking effect is very small and very subtle. It's not something where you will see a rise in rankings just from going to HTTPS. I think in the long run, it's definitely a good idea, and we might make that factor stronger at some point, maybe years in the future. But at the moment, you won't see any magical SEO advantage from doing that. That said, any time you make significant changes to your site and change the site's URLs, you are definitely going to see some fluctuations in the short term. So you'll likely see some drops or some changes as we recrawl and reindex everything. And in the long run, it'll settle down to about the same place. It won't settle down to something that's, like, a point higher or anything like that.
MALE SPEAKER: OK. Thank you.
JOHN MUELLER: So I think that's just important to keep in mind. When you make these kinds of changes on your website, moving from HTTP to HTTPS, it's not a magic bullet that fixes your website. It's rather something for the long run that I think makes a lot of sense. And you might see some effects from the user side at some point. But at least in the short term, you're not going to see any visible SEO advantages. It's a really small ranking factor right now.
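The mechanical part of such a move is a site-wide 301 from every HTTP URL to its HTTPS equivalent. A minimal sketch for Apache, in an .htaccess file; nginx would use a return 301 in the port-80 server block instead:

    RewriteEngine On
    # Requests that did not arrive over HTTPS...
    RewriteCond %{HTTPS} off
    # ...are permanently redirected to the same host and path on HTTPS.
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]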
MALE SPEAKER: [INAUDIBLE].
JOHN MUELLER: OK. "We recently performed a 301 to a new domain. After more than three months, we still haven't gotten back our previous rankings, although the pages didn't have any particular changes. How long does it take to recover from this type of site move?" In practice, this should be something that goes fairly quickly, moving just from one domain to another and following all the steps that we have in our Help Center. We recently updated the Help Center with a lot more information, so that might be something to double-check, that you're doing everything right. But theoretically, after a while, that should be settling down. If you're saying it's still not good after three months, that sounds like either there were some issues unrelated to that change that are happening, so maybe some algorithm is just picking up problems with your website in general, and this is something that would have happened with your old site too. Or there's something technical that's kind of stuck on your side or on our side. If you want, you're welcome to send me those URLs, and I can take a quick look on our side to see if anything on our side is problematic, or if there's anything I can let you know about specifically with this kind of change. But in practice, if you do a site move properly and everything goes the way it should, then I would expect that after a month, maybe two months, it should be kind of stable again and similar to the previous visibility.
"Can you go into a bit more detail on why this Penguin refresh cycle is almost 12 months, compared to the previous data refreshes of, on average, six months?" I don't have any specific details I can share with you guys. I know the team is working on this. So it's something where we're trying to find a way to improve that overall, and that takes a little bit longer. So sometimes things don't move as quickly as we'd like, but that's not because we're completely ignoring the feedback or ignoring these algorithms.
Whoops, another [INAUDIBLE] question. "Do you upload a disavow to help with the Penguin update? And if you do, will you see the difference before the next refresh?" So uploading a disavow file will change those links, essentially, into nofollow links the next time we crawl them. And if you do that, then that's something that affects all of our algorithms. So it could have an effect before the Penguin refresh, if those links weren't specifically tied to any Penguin-related problems. It could have an effect with regard to manual actions, if there are link-based manual actions in place for your site. And it could have an effect on the Penguin refresh when that happens, if these links are essentially processed by then. So it's something that I wouldn't only do for Penguin, but rather to kind of clean up these old link issues that you might know about, that you just don't want to have associated with your site anymore.
"On September 12, I migrated a site from HTTP to HTTPS. I followed every step of the instructions, with the 301s. From then on, I came under a significant loss of traffic and dropped in the search results. Why?" I'd have to look at the site specifically to see what exactly is happening there. But let's see, September 12 isn't that far back. So this might be that initial period of fluctuations, where everything has to be crawled and reindexed again. But if you want, you're welcome to send me those URLs, and I can double-check to see what exactly is happening there.
"I'm pleased to hear that small and medium high quality sites will be treated more fairly by Panda. Is this a global update?" Yes, this Panda update is global. It affects different languages and countries in slightly different ways, but it's something that applies across our whole search results.
OK. And here's a URL. I'm not sure which URL this is for, but I'll copy it down just to make sure that I have it afterwards. I imagine this is for one of the recurring questions. Let's see: "Site drops since October 2013; we suspect the Penguin algorithm change. We can't find any unnatural links." That's great. "No content duplicates; earlier we were in top position for our keywords." I'd have to take a look at the site specifically to see if there's something I can find there. But in general, I'd kind of take a step back and think about what you're trying to do with your website, and just double-check to make sure that what you're providing on your website is really of the highest quality possible, and is really something that we should show at number one for any of the queries that you're specifically targeting. So I guess I'd have to double-check your site to see what it's actually about first, to say anything more specific than that.
"We spoke with two of the Penguin team in the last month and a half. They updated you on their progress?" We do regularly speak with the search quality teams that are working on these algorithms, and we kind of catch up to see what we can do to help them and to see where they are. So I'm guessing this is still something that would easily fall within that range of before the end of the year. But as always, I can't make any promises on these kinds of things, because things can change. Maybe this is something that will come out next week, because everything is ready by then. Maybe it will take a little bit longer. But I do know the team is working on these updates. So hopefully we'll have something for you guys soon. How do you--
JOSHUA BERG: John.
JOHN MUELLER: Yes.
JOSHUA BERG: Another question, expounding on what you said earlier, that these algorithms are never, you know, all negative. Then Penguin also has positive aspects to it where, you know, these sites are more trustworthy? I mean, we've heard for some time, we know that there are trust ranking algorithms for, you know, how sites might pass authority if they have a higher trust. So is Penguin involved in that as well?
JOHN MUELLER: I wouldn't specifically call it, like, trust algorithms or trust ranking or anything like that. But as I mentioned before, when we review these algorithms, we have to view the results as they appear in the search results in the end. So if some of these sites are demoted for the webspam techniques that these algorithms find, then those that pop up still play a role in our analysis. So we have to make sure that the right kinds of sites are showing in the search results, and not just look at the sites that were demoted. So especially when we do an analysis of algorithms like this, we look at things like the first couple of pages of results for lots and lots of queries. We send those out for review by neutral people, who are kind of reviewing before and after the algorithm change. And they're not going to review which sites don't show up there anymore. They're going to review which sites are actually visible for those queries. So in that sense, that's something where, if we show sites that are visible that are actually higher quality, that are good sites that we could trust, that we feel users can trust, that's essentially a good change. Some of these algorithms might be focusing on the webspam side and kind of taking those out. But every time we review algorithms, we review what's left. We don't review just what was removed. So what's left has to be, essentially, what we'd like to show as is in our search results. So with that in mind, it does bring both sides in there. But it's not that I'd say what's left has this inherent higher trust rank or anything like that.
JOSHUA BERG: OK. Well, for example, one thing Matt mentioned specifically, previously, was how if one site is caught with some link problems, it may have received, like, a 20, 30, 40% demotion in either the rank or the authority, whichever you want to call it. And then, you know, it may not be able to pass on any authority as a secondary effect. So, you know, the 20, 30, 40% that these sites may get demoted, relative to their linking problems, is it fair to say that those numbers could also go positive?
JOHN MUELLER: I don't think so. I think what you're pointing at, what Matt said, was specifically with regard to manual actions, where when we notice that a site has a lot of unnatural links on it, that maybe it's selling links, maybe it's engaging in link spam or in link exchanges, those kinds of things, and it's linking to a lot of irrelevant sites, then that's something where we take manual action on those unnatural outbound links. And where we might say, well, we can't really tell which of these links are actually good, so we're going to ignore all of the links on this site. And that's something that we do manually, where we manually kind of take action on a site that has this problem. We'd let them know about this in Webmaster Tools as well. They'd see the unnatural outbound links message there. So that's something where we try to do that manually and try to recognize that as, we can't really trust this website anymore. Algorithmically, it's probably a little bit harder. I could imagine there are smart algorithms that could try to recognize the same kind of situation, and also say, well, we can't really trust the links on this site, because there are so many spammy links on here as well. But I don't see that going in the opposite way, where we would say, oh, there are lots of good links on this site, therefore I'll trust every link here twice as much as I otherwise would. Because the PageRank algorithms already kind of take that into account. If a site has a higher PageRank, then the links will be passing a little bit more PageRank. So that's something where I don't think it would make sense to kind of amplify that aspect additionally.
JOSHUA BERG: All right. Thank you.
JOHN MUELLER: OK. "How do I remove a sitelink from the search results?" We have a feature in Webmaster Tools that lets you demote a sitelink. It doesn't let you remove it completely, but it's a strong signal for our algorithms that you don't want this specific sitelink shown. So I'd definitely take a look at that. One thing to keep in mind is that sitelinks are also based on the query. So it's not something that will always be appearing in the search results. And sometimes, it can happen that we think this set of sitelinks is pretty good for this site, and if you search for the site in a different way, we might say, well, these sitelinks aren't really that relevant for this specific query, for this specific site, at this time. So just because you see a sitelink there when you do maybe an artificial query, or search directly for the URL of your website, doesn't necessarily mean that users will also see that sitelink when they search naturally. But you can give us that information through Webmaster Tools, and our algorithms do take that into account.
MALE SPEAKER: Hi, John. Can I ask you one question?
JOHN MUELLER: Sure.
MALE SPEAKER: I would like to ask about a search query. My website is getting [INAUDIBLE] impressions, like [INAUDIBLE] impressions for the keyword, but clicks are [INAUDIBLE] only. How can I improve clicks rather than impressions? Does it depend upon the URL's [INAUDIBLE], or brand marketing, and [INAUDIBLE]?
JOHN MUELLER: That's essentially something you would need to ask your users, because it's not a technical problem, how to improve the clickthrough rate. It's essentially a matter of the user believing that your site has the best content for their specific query. So those are things like maybe the titles, maybe the snippet that's shown, maybe even the content, the quality of the content on your site. So it's not a technical aspect. You don't need to change the URL structure. You don't need to kind of tweak things technically on your website. It's really a matter of making sure that your content, and the way you present it to users in the search results, matches their needs and encourages them to say, oh, well, this matches what I was looking for, let me check this site out. And that's something that's hard, because there's no simple rule to say how to improve the clickthrough rates for these queries.
MALE SPEAKER: OK. Fine. One last question, John. What is the fetch and render option in [INAUDIBLE]?
JOHN MUELLER: The what?
MALE SPEAKER: Fetch and render.
JOHN MUELLER: Yes.
MALE SPEAKER: What are the options that [INAUDIBLE] helpful for the website?
JOHN MUELLER: I didn't quite understand the last part. Sorry.
MALE SPEAKER: Fetch and render, this option in fetch and render.
JOHN MUELLER: Yes.
MALE SPEAKER: What is the fetch and render? How will it be helpful for the website?
JOHN MUELLER: OK. How is fetch and render helpful for a website in general? So in general, the Fetch as Google feature is really helpful to double-check that Googlebot can see your content. And the fetch and render option there is so that you can see how Googlebot would see your content. So sometimes, there are things like the site getting hacked and different content being shown. That's something fairly obvious that you can find there. But sometimes, you'll also see that your CSS files are being blocked, or that content is blocked, by the robots.txt file. And that means that we can't render the pages in the same way that a browser could render them. And that sometimes makes it a lot harder for us to kind of understand the pages, understand the content on there. In particular with mobile, that's a really big problem, because if we can't see the CSS, we can't tell that this website is actually a great website for smartphones. So we can't treat it appropriately for smartphone search, because we don't know. We can't see what it looks like. So that's something that, I think, helps a lot. And in the fetch and render tool, at the bottom, you see the blocked resources as well.
MALE SPEAKER: Yeah. Thank you. Thank you again.
JOHN MUELLER: Sure. One thing, maybe, going back to your clickthrough rate question. One thing to keep in mind is that not all queries are essentially the same, such that you can just lump them together and say the overall clickthrough rate for my website is bad. You really need to look at the queries individually. Sometimes, if someone is searching for your brand name, then maybe they don't need to click on your website, because they already know your website. They may be looking for something around your website, or on your blog, or somewhere else. So you kind of have to look at the type of query, as well as just looking at the clickthrough rate. So try to group things into categories like branded queries, or navigational queries if someone wants to go somewhere specifically, or informational queries if they want information about the specific type of product that you might offer. And try to treat them separately when you're looking at this on your website, because that helps to kind of show the real problems, and not just hide them in a bigger picture.
MALE SPEAKER: Thank you. Thank you.
JOHN MUELLER: OK. There are some URLs here. I copied those down, and I'll take a look at those separately afterwards. They're always just kind of complicated. This next question: "In the first attempt, I submitted a link to abc.com for disavow. Then in the second attempt, I uploaded again without abc.com. Will Google count abc.com in my backlinks?" Abc.com isn't really that bad, so I don't know. Seems like a good one to keep in my links, if they're linking to your site. But I assume this is just a placeholder. If you upload a second disavow file and remove links or remove domains that you had in the previous one, then that disavow file overrides the previous one. So essentially, the next time we crawl that link on that website, we'll see it's no longer in the disavow file, so we'll treat it as a normal link and have it pass PageRank, have it affect our algorithms appropriately. In both of these cases, whether it's disavowed or not, we'll still show it in Webmaster Tools. Just because something is disavowed in your disavow file doesn't prevent it from showing up in Webmaster Tools, just like we show other links in Webmaster Tools that have a nofollow on them. So you'd still see it in Webmaster Tools if you disavowed it, just like you'd still see it if it had a noindex, sorry, a nofollow on it. And if you remove a link from a disavow file, then that's no longer in effect. So if you're updating your disavow file and you keep adding and adding more things, then you would keep the base structure the same and just keep adding. You wouldn't replace it with a new file.
Let's see, about sitelinks. "I see many webmasters complain about improper sitelinks on their sites. Is it not possible to give them a chance to fix that in Webmaster Tools?" Like I mentioned before, you can let us know about sitelinks that you'd like to have demoted. We don't remove them completely in some cases, but we do take that into account in our algorithms. This data is processed, I'd say, maybe once a week, so you wouldn't see the change immediately. I'd definitely give it a week or so to kind of filter down into the specific algorithms, to see what has changed there.
"If people in the video are finished, is it OK to ask them to leave so that other people can enter?" OK, I am hereby asking you to leave, if you want to make room for other people. But I'll leave it up to you. Usually, if you want to join these, you have to be kind of quick. I post a link in the event invite, usually maybe one or two minutes before we start. If you've never made it to these Hangouts and you really, really want to join, let me know on Google+ before the Hangout starts, and I'll add you a little bit before I add everyone else. So sometimes that helps.
"I made the question about the migration from HTTP to HTTPS, and any consequent drop and havoc. Here are the URLs." OK. I'll just copy that down and make sure I take a look at it later.
"We've split our website in two, due to corporate branding. Our rankings have dropped, but we implemented 301s and canonical tags where necessary. Have you seen anything like this before? A website split, not a migration." So yes, we sometimes do see websites split, or kind of separate into completely separate domains. In practice, you're always going to see more fluctuations then than when you do a site move. The main problem there is that we have to take all of those signals that we have kind of collected over the years, for those individual URLs or the website in general, and find out how we should separate them. Whereas with a site move, we can say, well, the whole website is moving, so we can just take all of the signals that we have and pass them on to the new website. You'll always see fluctuations with site moves as well, but usually it's easier with site moves. So seeing fluctuations and drops in rankings, at least temporarily like this, is probably normal if you're splitting a website up, just because that's such a bigger step, let's say, on the complexity scale. Obviously, this isn't something that you can easily avoid. Usually, there are bigger reasons why you have to split a website up. But you should just keep in mind that it's kind of normal that you would see stronger fluctuations in a case like that.
"Now, I think Penguin 2.1 only looks at the website's inbound link profile. Penguin 1 looked at on-site issues as well as links; now Panda covers on-site. Was Penguin 2.0 and 2.1 changed to only look at link signals?" We call the Penguin algorithm a webspam algorithm, and it tries to take various webspam aspects into account. So I don't think it would be fair to say that it only looks at links. That's something it tries to take into account in general. Panda, on the other side, is more of a quality algorithm, where we try to focus more on the quality of the content, the quality of the website overall. So those are essentially two different aspects. Sometimes they overlap a little bit. Sometimes webspam issues are there because the quality is also low. Sometimes they're completely independent, where something has webspam issues but is actually a really high quality website. And obviously, when they don't overlap that well, that's harder for us to handle correctly. Because on the one hand, we want to discourage the webspam aspect, and on the other hand, we want to show great results in the search results.
MALE SPEAKER: John, John. Can you hear me?
JOHN MUELLER: Yes.
MALE SPEAKER: Yeah, that was my question. There seems to be a lot of confusion about on-page keyword spamming, and which kind of algorithms take that into consideration. I think Penguin 1 really did take that into consideration. But when there are questions in the Google Webmaster forums, there never seems to be a great answer for it. So I just wanted to know if there's, you know, a definite answer on what people should be looking at, and which algorithms most likely affected them?
JOHN MUELLER: So we have a lot of algorithms, and most of them don't have any flashy public names. So it's hard to say that we'd be lumping the keyword stuffing type algorithms into one or the other of these bigger algorithms. But this kind of stuff is taken into account on our side. When we recognize that there's keyword stuffing happening there, we try to figure out how best to handle it. From my point of view, for a lot of these topics, when we recognize kind of this spammy activity on pages, I generally prefer our algorithms to just react to that spammy part and say, well, I'm going to ignore the spammy part here, because maybe the webmaster didn't really intend to spam us like this. Maybe it's something that's been on their website for years and years now. Maybe they didn't even notice it themselves. And just focus on the good parts of the website and treat the website appropriately like that. So I think, for most aspects, that's a reasonable approach to take. And that's something where maybe you wouldn't even see a drop in rankings if you're keyword stuffing. But you also wouldn't see a rise in rankings because of this keyword stuffing. So from that point of view, it's often not a critical issue that you really, really need to resolve as quickly as possible. But if you have this kind of keyword stuffing on your website and you keep maintaining your website, you're going to have to keep thinking about this keyword stuffing and kind of artificially take it into account as you revamp your website, as you create a new design for it. And that's just a lot of work that can cause more problems than anything else. And since you're not having any advantage from this kind of keyword stuffing, it just makes sense to clean it up, so you don't have to worry about it in the future.
MALE SPEAKER: So you think that a company name could be classed as keyword spamming, if you have your company name, say, 15 times on a page?
JOHN MUELLER: Theoretically, I could imagine that might be a problem.
MALE SPEAKER: Yeah.
JOHN MUELLER: If the company name is something like, I don't know, cheapmortgages.com or something. In those kinds of situations, the company name almost overlaps with the keywords that they're trying to target, and that could be seen as something where we'd say, well, this looks like you're artificially spamming these keywords. We don't really recognize that it's a company name. But it looks like you're trying to artificially spam these keywords, and that's something we might react to. Whereas if [INAUDIBLE] the company name is, like, I don't know, XYZ.com or acme.com or Google.com, that's something where you're not going to be, like, spamming the keywords, because it's your company name. You rank for it anyway. There's nobody else that's trying to compete for those terms. That's not something where we'd say this is really going to be a problem if we ignore those keywords on your page, because the rest of your site is all about this company. But if your company name is just, like, keyword one, keyword two, keyword three, then obviously our algorithms are going to say: well, is this really a company name? Is this someone just trying to spam those keywords? Where do we draw the line? How much should we ignore here? And how much do we give this weight? We see that sometimes when sites create domains specifically for keywords, where they say, my website is cheapmortgages.com, therefore I should rank number one for cheap mortgages. Because, you know, everyone's looking for my site. They're searching specifically for my keyword, right? And that's definitely not the case.
MALE SPEAKER: My company name was registered 10 years ago, so we've been trading 10 years. Our website is eight years old. I'm just really worried that Google might be thinking about our company name, Wholesale Clearance UK Limited, because we've got that on our page four, five times. Are we overdoing our company name? It's really hard to be on a page and not mention our company name too much, because that's what we do as well. We sell wholesale stock. So would Google see that our company's been registered 10 years, before we had a website?
JOHN MUELLER: We wouldn't look into it in that detail. So it's not that our algorithms are looking at, like, when this company was registered and whether that matches the records there. But we'd look at it overall on your website. And if you're using that in a reasonable way, then I wouldn't really worry about that.
MALE SPEAKER: OK.
JOHN MUELLER: And especially with our keyword stuffing algorithms, we do try to recognize keyword stuffing, but we're not going to penalize a site for keyword stuffing in general. So you'd have to be really, really, really obnoxious to actually trigger something on our side, where we'd say, we're going to demote this website completely, because it's just stuffing keywords all over the place and we have no idea what to trust anymore. But if this is just a name that you're repeating on your site, in the worst case, we won't be looking at that name that often. We'll say, well, this name on the site, or on this specific page, is mentioned a lot of times, therefore we have to be careful about it. But if the rest of your site really focuses on that name, and that's what people are searching for to find your company, then that's generally a good thing. That's not a sign that we'd remove your site from those searches.
MALE SPEAKER: OK. [INAUDIBLE]. Thank you.
MALE SPEAKER: Yeah, hi John.
JOSHUA BERG: John, what would be some of the other webspam signals that Penguin might look at, that you would suggest? And one question is, was email spam ever thrown in that mix for--
JOHN MUELLER: Email spam?
JOSHUA BERG: --Penguin? For example, I mean, Google may get a large quantity of bounce-backs, or email. Or if it's related to spamming a link [INAUDIBLE].
JOHN MUELLER: I don't think we take email spam into account, because it's just such a completely separate part of Google. It's not that we, like, read the links in Gmail and say, oh, well, we'll pass PageRank to these. Because a lot of people will share things privately, and we're not going to, like, dig into that. So I don't think we take email spam in that sense into account. But of course, sometimes, if you're really obnoxious with email spam, that stuff ends up on the web as well, going to mailing lists that are public, going to, maybe, forums that post these mailing lists publicly. Sometimes it kind of leaks out onto the web as well, and that's something that we might pick up from those places. So I don't think we'd take into account something specifically from Gmail. That's just such a different part of Google.
JOSHUA BERG: But Google may look at some of the sites related to negative sentiment in this mix as well?
JOHN MUELLER: That's always tricky. I mean, we do try to recognize that kind of situation, but it's really, really hard to do that in an accurate way. So that's something where we might take it into account when there are really, really, really strong signals saying, here's a link and it's totally terrible and you should not look at it at all. But if we find it as a link, and we kind of recognize the context of that link, then that's something. In really extreme cases, we might take that into account. But in general, we're not going to do that much sentiment analysis around every link on the web to figure out, is this a positive mention or a negative mention? We're going to kind of trust the aspect there that, if it's passing PageRank, that's something we might want to take a look at. If it doesn't pass PageRank, then whatever, you can post as much as you want.
JOSHUA BERG: Thanks.
MALE SPEAKER: John.
JOHN MUELLER: Yes.
MALE SPEAKER: Excuse me. Excuse me, John.
JOHN MUELLER: Sure, go ahead.
MALE SPEAKER: We're working on a dot-com site, but in another country. John, are you listening?
JOHN MUELLER: Yes.
MALE SPEAKER: We are working on a dot-com site, and we are targeting another country, like an Arab country. But if we are not able to get good rankings there, is there any possibility to get better rankings in countries like--
JOHN MUELLER: So essentially, how to optimize a site for a different country. There are a few aspects that you'd want to look at there, primarily around geotargeting. So either using a top-level domain that's specifically from that country, if you can do that, or using a generic top-level domain and setting the geotargeting in Webmaster Tools. That helps us quite a bit. If you have content that's valid for different countries, or that's translated into different languages, you can also use the hreflang markup to let us know about those different versions. So you could say, this is a version in English for Saudi Arabia, this is a version in English for maybe another country there, or maybe for the UK, or maybe for the US. Let us know about that, and we can take that into account.
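A minimal sketch of those hreflang annotations, placed in the head of every version of the page, with example.com URLs as placeholders:

    <!-- English for Saudi Arabia, the UK, and the US -->
    <link rel="alternate" hreflang="en-sa" href="https://example.com/en-sa/">
    <link rel="alternate" hreflang="en-gb" href="https://example.com/en-gb/">
    <link rel="alternate" hreflang="en-us" href="https://example.com/en-us/">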
MALE SPEAKER: Yeah, OK. And one more question, John. We have a website, and we are targeting the home page with all the internal links. And along with that, we are posting some blog posts regarding that URL. And we are not able to find the blog posts we are posting, but we are getting the category links in the search instead.
JOHN MUELLER: OK. So essentially, the category pages of your blog are showing up in search, instead of the actual blog posts?
MALE SPEAKER: Yes. Yes. Yes.
JOHN MUELLER: Sometimes, we've seen sites use plug-ins in the wrong way, in the sense that they have accidentally put noindex on these pages. That's something I would definitely check. But the other aspect is, sometimes we just have to learn about this website a little bit more, and we have to learn to trust it better. And that happens over time, essentially, as you grow a little bit more popular, as more users are using your site, as people are recommending your site. We can crawl a little bit deeper. We can make sure to index a little bit deeper. And that kind of happens naturally. So I would first check to make sure that technically nothing is blocking those pages from being indexed. And if technically everything's OK, then I'd just continue working on the quality of your website, and continue making it as high quality as possible.
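The accidental noindex John describes is usually a single line in the generated HTML, often inserted by a plugin setting. If it appears in the source of posts you want indexed, that's the problem:

    <!-- This tells search engines to drop the page from the index -->
    <meta name="robots" content="noindex">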
MALE SPEAKER: Hey, John. Can I ask a question?
JOHN MUELLER: Sure. Go for it.
MALE SPEAKER: OK. We have a website. It's seven years old. Something like YouTube, OK? We have been keyword stuffed by the SEO companies. They created more than 20,000 profiles on our website, and they link built to us. We closed everything with 401s and deleted all the pages. For more than eight months, we have been trying to clean everything, and when will we see changes? We had 50,000 visits from Google per day. Now we're on 200. So we really don't know how to recover, and we've tried everything. It wasn't us. It was users building links to our website. So is there any way to find out what's the real problem in this?
JOHN MUELLER: You could send me the URL, and I can take a quick look. I can't promise that I'll have a quick answer for that, but I can definitely take a look there. We have some recommendations for handling this kind of profile spam, when people are putting it on your site. It sounds like you figured most of that out yourself in the meantime. These are things like using CAPTCHAs to kind of block scripts from creating these pages automatically, maybe nofollowing links there, those kinds of things. But I imagine this is something you learned the hard way.
MALE SPEAKER: We handled it all. Yeah. We disavowed more than 10,000 domains and everything, but it's just not refreshing, you know. We launched a new version of the website about 10 days ago. Google is crawling about 150,000 pages per day.
JOHN MUELLER: OK.
MALE SPEAKER: Can we expect changes in some normal time frame, two to three months?
JOHN MUELLER: Usually, if you've significantly changed your website, that's something where you would see changes, yes.
MALE SPEAKER: OK. I have sent you a link on the right side. It's flippy.com.
JOHN MUELLER: OK.
MALE SPEAKER: If you have the time, send an email to firstname.lastname@example.org or nitro@flippy. I can send you my email address.
JOHN MUELLER: OK.
MALE SPEAKER: If you can, please help us. It's not us building these unwanted links, just users, the same as YouTube has the problem, and the other media companies. So we're not trying to be bad, but users are using us in an illegal way. So I don't know anymore. Please just help us. Just give me a way to fix this.
JOHN MUELLER: Yeah. I'll take a quick look.
MALE SPEAKER: OK. Thank you. Thank you.
MALE SPEAKER: John, can I ask you a question?
JOHN MUELLER: Sure.
MALE SPEAKER: We have an AdWords account. We have our CTR at about 6% to 7%. We recently created a marketing campaign that works fine, but it brings down our average. It's there at about 2% to 3%. Is it better to move that marketing campaign to another AdWords account? Nothing we've tried has fixed it.
JOHN MUELLER: I can't give you any ads advice. I'm sorry. I really don't know. We split the web search, the organic side, off from the ads side completely. So I really don't know how I could help you there. I'd check with the AdWords forum, perhaps? I don't know if they also do these kinds of Hangouts, but I can't help with that, sorry.
MALE SPEAKER: OK. Thank you.
JOHN MUELLER: Sure. Let me just run through some of these questions here to see if there's anything I can add. "After a successful disavow, can my Webmaster Tools links decrease?" No. As I mentioned before, disavowed links still remain visible. "How can I tell that my disavow was successful?" In general, if you submitted it in a technically correct way, it'll always be processed continuously. It's not something that is processed once and then you'll see it's OK. It really is processed continuously as we recrawl. "I had a former hack and cleaned up my site. Will I recover traffic as before?" Usually, yes. If your site was hacked and you've cleaned it up completely, then that's something we'll pick up as we recrawl and reindex your site. Sometimes, it takes a little bit longer, and I'd also double-check with the Fetch as Google tool, to make sure that your site is really clean. Sometimes, there are different forms of hacks on a site, and some of them you'd see directly in the browser, and others only Googlebot sees. So kind of double-check that it's really clean. "Should I be worried about a constant flow of irrelevant links to my website?" In general, if you look at your links and you notice that there are things you really don't want to be associated with, I'd just put them in the disavow file and move on. That way, you don't have to worry about it. So if you see something problematic, take care of it and it'll be kind of cleaned up. "If a site was built completely using AJAX, Canvas, and WebGL, and thus has only one page with little text, will Googlebot see the site as a bad page?" No. We won't treat it as a bad site. But what might happen is that it's harder for us to actually get to your content. So that's something where I'd use the Fetch as Google tool, with the rendering option, to see what we would actually see when we crawl the page. And maybe we'll be able to pick up the content. Maybe you'll see that some of the content is blocked by robots.txt, and you might want to allow crawling for that. The other aspect is, if your whole website is essentially on one URL for crawling purposes, then that makes it essentially impossible for us to find the rest of your content, because it depends on how you click through your site. So if you can set it up to use separate URLs, you can do that using HTML5 pushState, for example. Then that makes it possible for us to crawl those individual URLs and to actually index them separately. So that might be something worth looking at as well. But it's definitely not the case that we would treat a site like that as being bad.
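A minimal sketch of that pushState approach, in plain JavaScript; the element ID, the URLs, and the loadContent helper are hypothetical stand-ins for however the app actually renders a view:

    // Give each in-app view its own real, crawlable URL.
    document.querySelector("#products-link").addEventListener("click", function (event) {
      event.preventDefault();
      loadContent("/products");               // hypothetical view renderer
      history.pushState({}, "", "/products"); // update the address bar
    });

    // Restore the right view on back/forward navigation.
    window.addEventListener("popstate", function () {
      loadContent(location.pathname);
    });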
Let's see. "How do we bring more visitors to our Google+ page?" Essentially, this is a page like any other on the web. You can recommend it to your users. You can encourage your visitors to recommend it to their friends. It's not something where we would say Google+ pages are inherently different from any other kind of webpage out there. "When I search keywords, the search results show a different type of meta title rather than the original title?" We talked about this briefly already. I have a number of questions left here. So maybe I'll just open it up to you guys. What's left? What's still on your mind? What can I help with?
DEREK MORAN: Yeah. I've got a quick question, John.
JOHN MUELLER: OK.
DEREK MORAN: It's been three weeks since I converted my entire site to HTTPS, and I've noticed an interesting pattern. Every single page that did not have rel=canonical in it converted to HTTPS within, I think, 24 hours. But every page that did have rel=canonical has not converted in the index at all after three weeks. It's still stuck at the old one.
JOHN MUELLER: OK. That shouldn't actually be the case. Yeah. That's a bit weird.
DEREK MORAN: Yeah. So it's basically--
JOHN MUELLER: --having the canonical tag.
DEREK MORAN: Yes.
JOHN MUELLER: OK.
DEREK MORAN: So which means, basically, all the best part of my website is not converting. But my forum, which is still a good part, that converted. I don't get it.
JOHN MUELLER: OK. One thing that you might want to do is look at the cached pages there. So what sometimes happens, and I know it confuses webmasters a lot, is we'll have multiple URLs associated with the same content, but we'll actually index it primarily under one of these weird versions. And you usually see that if you look at the cached page. So if you do something like a site: query, you'll see those old URLs that you think should be under the new URL. If you click the drop-down and click on "cached page," it will show you the URL under which we actually indexed this content. And what might be happening there is that we show it in the site: query, but actually we index it as HTTPS, and then you're essentially covered.
DEREK MORAN: OK.
JOHN MUELLER: But I'll double-check to make sure that there's nothing otherwise weird.
DEREK MORAN: Well, we checked everything. We followed everything, you know, every step there, and haven't given up.
JOHN MUELLER: OK. I wouldn't necessarily worry about that. That's something where sometimes it just takes a little bit longer to catch up. We've noticed that we can't always trust rel=canonical, especially when it's used in a bad way. But in general, when we find it and we see it used properly, we do try to take it into account, in addition to the 301. It's a pretty good signal. For example, we've seen a lot of sites have a rel=canonical set to their home page. So they'll have a big website with lots of different pages, but the canonical is set to their home page, which, theoretically, if we followed it, would mean we'd drop all of these other pages and just index the home page. So that's the kind of situation where we'd ignore the rel=canonical. But if you're using it in a clear way, to kind of let us know or confirm a move, then that's something we should be taking into account. And it sounds like, maybe, we should figure out if something broke on our side, if we got stuck there.
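For reference, the confirming canonical Derek describes would look something like this in the head of each HTTP page after the move, with example.com as a placeholder:

    <!-- On http://www.example.com/page.html, alongside the 301,
         pointing at the HTTPS version -->
    <link rel="canonical" href="https://www.example.com/page.html">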
DEREK MORAN: Yeah. Well, I've definitely added all the canonicals to the HTTPS.
JOHN MUELLER: OK. Sure.
DEREK MORAN: Yeah. Cool. Thank you.
MALE SPEAKER: Hey, John. I wanted to go back to the disavow files again.
JOHN MUELLER: OK.
MALE SPEAKER: So you mentioned earlier that any of the sites you put in there effectively have those links changed to nofollow. Do you think that those sites will also be used in aggregate? So Google looks at all of the various disavow files that are being submitted by all of the various webmasters. Would they be used to enhance, say, Panda, or some of the other search algorithms?
JOHN MUELLER: I wouldn't rule that out completely, but it's very tricky. It's not something where we'd say we can take this one to one and use it for our webspam algorithms. For example, there's a situation where maybe a very legitimate blog is out there that has a lot of high quality content and good links, but it happened to get stuck on some list used by some script that auto-posts comments. And a lot of sites might have auto-posted comments there that are essentially useless links that we should be taking out. But the rest of the content on this blog is actually really high quality content. We see that happening a lot with government websites, for example. They'll have really good content on the website, but they've set it up in a way that allows random people to add comments and links to those pages. So they're getting taken advantage of by [INAUDIBLE]. And if the sites all disavowed those government pages, then that kind of cleans up that connection between that government site and their site, those spammy links that they dropped there in the past. But that doesn't necessarily mean that this government site is a really low quality, spammy website that's just spamming everyone. It just happened to be open for other people to take advantage of. So that's something where I could imagine, to some extent, we might take that into account to double-check some of our webspam algorithms. But we really, really need to be careful when we do that, that we don't kind of take into account sites that are essentially good sites that just happened to get taken advantage of.
MALE SPEAKER: All right, John. I have to ask: why doesn't Google just ignore all the comments, so they immediately stop all the comment spamming online?
JOHN MUELLER: I don't think it would work that way. But yes, it would be nice if we could stop that kind of comment spam. We saw it, for example, when we introduced the nofollow tag. Lots of websites moved to the nofollow tag, but a lot of these auto-posting spam scripts essentially post their comments regardless. They don't even recognize that these comments are being nofollowed. So it would be nice if we could just, like, flip a switch and say, OK, all spammy comments will disappear, or people will stop spamming after we make this change. But realistically, I don't think it's quite that easy. All right. We're a bit over time already. So I just want to thank you all for all of your questions and comments. It's been really interesting.
MALE SPEAKER: John, one second. John, one second.
JOHN MUELLER: OK. One last one.
MALE SPEAKER: Happy New Year.
JOHN MUELLER: Oh, yeah. It's the new year, yeah.
MALE SPEAKER: Right?
JOHN MUELLER: Yeah. Not here in Switzerland, but yes.
MALE SPEAKER: Great talking to you.
JOHN MUELLER: OK. Great.
MALE SPEAKER: Thank you.
JOHN MUELLER: All right. So have a great weekend, everyone. I hope to see you guys again. I'll set up the new Hangouts later today, and feel free to add any questions that I missed there, so that we can go through those then. Thanks again.
MALE SPEAKER: Thank you from [INAUDIBLE]. Thank you. Bye. Thank you. Bye. Bye, John.