Transcript Of The Office Hours Hangout
JOHN MUELLER: OK. Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. We have a bunch of people here already and a lot of questions that were submitted. Simon had a question just before we started regarding whether or not it still makes sense to make websites for the users, or whether, with all of these new technologies like Ajax et cetera, there isn't a large component involved that requires making something special for the search engines. Is that about right?
SIMON: Yeah, that sounds very good. Thank you.
JOHN MUELLER: OK. So I think, in general, it still makes a lot of sense to primarily focus on the user and to make sure that their user experience is as good as possible, that you have something really fantastic that you can offer them. At the same time, if you need to make sure that this content is also available in search engines, there are some technical requirements that you need to fulfill. On the one hand, we need to be able to crawl these pages so that we can actually find this content. On the other hand, we need to be able to understand this content on those pages as well. So for instance, if a page is just an image, then that's sometimes very hard for us to understand. But if there is something like a subtitle on an image, that helps us a lot. If there are things like links with anchors within the website to those images, that helps us a lot as well. So I think, primarily, you still want to focus on the user and make sure that they are able to use the website as well as possible. If you need to have search engines crawl and index your content, there are some basic requirements that are still required. So you can't take any random new technology and say, this works great in some browsers, and assume that it will also work for search engines. Specifically for Ajax, there's the Ajax crawling technique where, like you mentioned, you can make a parallel website for crawlers. We're getting better and better at understanding Ajax content as well. But even when we can process it, we need to have unique URLs. So it needs to be that you have some kind of a URL structure for this content that we can pick up on. And we need to have some kind of basic, unique content on these pages from our technical side. But at the same time, there are those few users who don't have JavaScript. There was a study about that not too long ago. They found that there was still a significant portion of users who don't have JavaScript enabled, maybe because they're blocking advertising, or maybe there are malware problems that they're afraid of, those kinds of things. Or maybe they're using some older or simplified browser. And that was, I think, somewhere around 5%, something like that. So it's a small number of users, but it's still something to keep in mind. So if you want to go towards Ajax, I think that's a great opportunity. You just need to make sure that you have unique URLs and that, on these URLs, when they're crawled, there's at least some minimal amount of content.
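The "parallel website for crawlers" mentioned here is the (now-deprecated) Ajax crawling scheme, in which a hash-bang URL is mapped to a crawlable `_escaped_fragment_` URL. A minimal sketch of that published convention, not Google's internal code:

```python
from urllib.parse import quote

def escaped_fragment_url(url):
    """Map an Ajax #! URL to the URL a crawler would request
    under the Ajax crawling scheme."""
    if "#!" not in url:
        return url  # not an Ajax-crawlable URL; crawled as-is
    base, fragment = url.split("#!", 1)
    # Append as a new query parameter, or extend an existing query string
    sep = "&" if "?" in base else "?"
    return base + sep + "_escaped_fragment_=" + quote(fragment, safe="=&")

print(escaped_fragment_url("http://example.com/page#!key=value"))
# http://example.com/page?_escaped_fragment_=key=value
```

The server is expected to return an HTML snapshot of the Ajax state at the `_escaped_fragment_` URL, which is what the later discussion of snapshots refers to.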
SIMON: OK. I understand. Thank you. But the second part of the question was more about websites which focus on images and multimedia content. Would you recommend adding some marketing blah, blah at the bottom or on the left side, which is not so important for a user, but at least doesn't disturb him, to show search engines what the page is about?
JOHN MUELLER: I generally wouldn't do that, if it's just for search engines. So if this is text that's not meant for the user, then that's almost seen as keyword stuffing, and it gets close to being hidden text, all of those kinds of problems. And usually, we do a fairly good job of understanding what these pages are about, based on the context within the website and based on the context from outside of the website, so when people are recommending those pages to us, maybe in a forum or somewhere else. So I definitely wouldn't add text to these pages just for search engines.
SIMON: OK. So the final addition to the question. If a website which is based on images has a great user experience, is it equally likely to be ranked in Google as high as websites which do provide a lot of text? I mean, is the amount of text important for the ranking? Or is it really the user experience that counts in the end?
JOHN MUELLER: Yeah. I think the user experience is very important for us. But we need to have some minimal amount of information on these pages. So an extreme example, for example, is lots of photographers have a very simplified gallery on their pages where the title is DSC and then some long number dot JPEG, which is the name of the file of the photo that they took. Then they have this nice photo on there. And for us, when we see that, it's very hard to say, OK, this is a nice photo, but what should we rank it for? We don't really know. Whereas if someone were to take this photo and bring it into a forum, for example, and they write a forum post about the vacation that they had, and they include this photo and say, this is kind of what it looked like at my hotel, then we have a lot of information on that page that gives us a little bit more context about the image. And at the same time, that's something which users sometimes also want to find. For example, if they don't just want to find the image, they might want to find some context about it. So if someone is searching for a hotel vacation in that location, for example, then maybe that forum thread is a lot more relevant for them, because it has this information on it, even though it's the same image as the one on that fairly blank gallery page.
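The contrast described here, a bare camera filename versus surrounding text, is usually addressed with descriptive filenames, alt text, and visible captions. A sketch (all filenames and text are illustrative, not from the transcript):

```html
<!-- Instead of DSC_4032.jpg embedded with no text around it: -->
<figure>
  <img src="/photos/hotel-pool-barcelona.jpg"
       alt="Pool at our hotel in Barcelona at sunset">
  <figcaption>The pool at our hotel in Barcelona, June 2014.</figcaption>
</figure>
```

The filename, alt attribute, and caption each supply the kind of crawlable context John says a plain gallery page lacks.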
SIMON: OK. Thank you.
PANKAJ: Hi, John. This is Pankaj.
JOHN MUELLER: Hi.
PANKAJ: I have the same question regarding this context. I have two questions for you regarding Ajax. One is, we are serving an HTML snapshot of the HTML pages for the requests of those URLs. However, that snapshot is taking time to generate, because there are lots of images used on that page. But Googlebot has very limited time to crawl, and just skips away. So how can we deal with that problem?
JOHN MUELLER: If you have an HTML snapshot like that, I think what you'd need to do is some kind of caching on your side, so that Googlebot can still crawl fairly quickly, and you don't have to refresh that snapshot as often as might otherwise be necessary. But it really depends a lot on your website and how you have that implemented within your site.
PANKAJ: OK. So we have lots of images which are coming from a different system. So if we block those images while we're creating a snapshot, would it be considered as cloaking? Because we are serving the same content, we are just blocking images to serve the snapshot faster. So would it be fine?
JOHN MUELLER: So with snapshot, you mean like the escaped fragment version of the page?
JOHN MUELLER: For this escaped fragment version, the images aren't actually included. It's just the image link there. It's not that the image itself is embedded. So that would be, actually, separate from the escaped fragment part. But you can definitely block images in robots.txt if you think that it takes too much to crawl them or causes problems. You just need to keep in mind that, without those images, we can't show those images in Image Search. But maybe that's OK.
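Blocking the images in robots.txt, as suggested, could look like the following (the path is hypothetical; anything disallowed here will also be excluded from Image Search, as John notes):

```
# robots.txt at the site root
User-agent: *
# Hypothetical directory where the externally-sourced images live
Disallow: /images/
```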
PANKAJ: OK. Fine. Then I have a second question. We have implemented that, but how can we check it? Because if there is a normal URL, we can check it with the Fetch & Render option. But we are creating a URL which is a temporary URL, and we don't want to serve it to the users. So if we do the Fetch & Render of that URL, we can see the content in that. However, if we are putting that content behind that hashtag thing, we are not able to see it. So how can we confirm that it is serving the purpose?
JOHN MUELLER: You would have to take the escaped fragment version in Fetch as Google and compare it to the version that you see directly in your browser. So that's not something that you would be able to use Fetch as Google for directly, because Fetch as Google doesn't support fragments directly.
PANKAJ: Yes. So our question is, how can we check that we are serving the right content on the Google side? Because we know that we have already implemented that. But Fetch & Render is something where we can cross-check and confirm.
JOHN MUELLER: Yes.
PANKAJ: So how can we do that?
JOHN MUELLER: I would take a look at the preview image that's shown in Fetch & Render and compare that to what you see in the browser.
PANKAJ: OK. So you mean to say we have to fetch the escaped URL?
JOHN MUELLER: Exactly.
PANKAJ: Not Fetch & Render? And if it is the same, we are serving the purpose.
JOHN MUELLER: Yeah. Exactly.
PANKAJ: Thank you. So Ajax is something that we are discussing. Now I have a second question. The other aspect I want to discuss is related to dynamically serving the content on the same URL.
JOHN MUELLER: OK.
PANKAJ: So in that, let's say we have country-specific content. And we have only a single URL, let's say, xyz.com. On that, we are serving US content for US users, and India content for India users. But my question is, will it leverage or give the same priority as using ccTLDs for these two documents?
JOHN MUELLER: If you use the exact same URL to serve the different content, we'll only see one of those versions. We won't see both versions. So if you dynamically serve the different country or the different language content on the same URL, we'll index only one version, probably only the US version. So we won't even see the Indian version. So what you would need to do is have the Indian version on a separate URL, or the other language, other location versions on separate URLs, so that we can crawl and index them separately. And then, using something like the [INAUDIBLE], you can refer us to the different versions.
PANKAJ: So that means we are using various HTTP requests to serve this? So that means we are telling Google that we have different content. So wouldn't it serve the same purpose as ccTLDs, on these different URLs?
JOHN MUELLER: If you use different URLs and you use the geotargeting feature in Webmaster Tools, yes, that would be equivalent. Yeah. So if you have a dot com, and you have slash India, for example, and slash USA, and in Webmaster Tools you say slash India is for India, slash USA is for the United States, then that's fine for us.
PANKAJ: OK. So let's say that's country-specific. Now, if we talk about device-specific: we are serving content for mobile users, and we are serving content for desktop users. So in that case, also, will Google index only one type of content?
JOHN MUELLER: I would check out the smartphone guidelines that we put together. There are three options that you can use for that. You can use different URLs. You can use the same URL. But you have to let us know about the settings. But in the guidelines, you should find all of that information.
PANKAJ: When we look at the guidelines, we didn't find whether they will index only the one content or the two contents. What it says is that we need to use the Vary HTTP header to serve the content, to inform Google and the search bots that we are serving different content to different user agents, just to avoid cloaking. But I just want to ensure that in searches, let's say mobile searches, we are serving the mobile content, and in desktop searches, we are serving the desktop content, on the same URL. That, I want to confirm.
JOHN MUELLER: What we would do is, we would index the desktop version. And we would recognize that there's a mobile version that's equivalent to this content. Then we would show it for the mobile users in the search results. So in a case like that, we don't need to index it separately, because it's equivalent.
JOHN MUELLER: You access it once on desktop, once on mobile. Maybe the layout is different. Maybe the sidebar is different. Maybe there's some content missing. But if it's equivalent, then that's fine for us. Then we don't need to index it separately.
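For the dynamic-serving setup discussed here, Google's smartphone guidelines rely on the Vary HTTP response header, so crawlers and caches know the response at one URL differs by user agent. A sketch of such a response:

```
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Vary: User-Agent
```

Without the Vary header, an intermediate cache could serve the desktop HTML to a mobile user (or vice versa), and crawlers have no hint that a mobile variant exists at the same URL.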
PANKAJ: So can we conclude that we have to have two different URLs to index two different contents, even if it's only a different language?
JOHN MUELLER: Yeah. For language and location, you would need to have different URLs. For different devices, you can use the same URLs.
PANKAJ: OK. Fine. Thank you.
JOHN MUELLER: Sure. All right. Let's grab some questions from the Q&A. And if you guys have anything to add in between, feel free to jump in. [MICROPHONE FEEDBACK] Oops. Let me just mute you. "I get negative SEO attacks. Page A loses rankings. I see new spammy links and even bad 302 redirects pointing to it. I disavow those URLs with the domain directive. And the ranking for that page pops back up in three to four days. Is this actually working?" You're probably seeing something different than anything involved with negative SEO or with those specific links, because it generally takes a bit of time for us to actually crawl and re-index and reprocess all of that information. So if you're seeing the ranking pop up and down within a couple of days, then that's probably something different. Usually, that happens when our quality algorithms are kind of on the edge. And they don't really know, is this a great page? Or is this a middle-quality page? And it's kind of fluctuating between those two variations. So what I recommend doing there is just making sure that your website is as high quality as possible, so that we don't fluctuate back and forth between thinking it's a great site and thinking it's maybe not such a great site. So I'd focus more on making sure the quality of your page is as high as possible, and probably worry less about these negative SEO attacks. If you see problematic links, you're welcome to disavow them. But you generally wouldn't see effects within a couple of days.
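The domain directive mentioned in the question is part of the disavow file format uploaded through Webmaster Tools; a minimal example (domains and URLs are hypothetical):

```
# Disavow file — lines starting with # are comments
# Disavow every link from this entire domain
domain:spammy-links-example.com
# Disavow a single specific page
http://another-example.org/bad-page.html
```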
JOSHUA BERG: John?
JOHN MUELLER: Yes?
JOSHUA BERG: So what you're saying there is that if your site fails one filter, like doesn't do so well on Panda, then it might get a little more focus from another filter?
JOHN MUELLER: These algorithms tend to run completely independently, because we try to make sure that our algorithms are able to focus on one specific aspect, so that it's not this overlapping aspect there where one algorithm triggers, and because of this algorithm, the next one triggers, and the next one triggers, leading to a cascading failure. Instead, we want to make sure that these algorithms run as independently as possible, so that there is, essentially, no big overlap between them. So just because your site isn't seen as being such a high-quality site doesn't mean that links will be seen as more problematic.
JOSHUA BERG: OK. Yeah, because I remember one time there was a discussion about-- or Matt was saying last summer, when they were looking at softening up some of the algorithms for sites that were kind of on the borderline, that they would look for additional signals. So in my mind, I'm thinking, OK, if that's Panda looking at quality, then are those additional signals related to something else, like off-site links or something? Or are we looking at, more specifically, that algorithm where it is just looking closer at the quality?
JOHN MUELLER: Yeah. Yeah. Usually, we try to keep the signals as separate as possible, so that it doesn't happen that one change triggers a lot of different algorithms. Because then, we could probably just simplify those different algorithms into one. And having fewer algorithms always makes life easier. "Is it true that your website needs fresh content to stay ranked high in the search engines? For example, is adding a blog to your website giving you better search results for your keywords on your home page, because Google sees that your website keeps fresh?" No, it's not true that you need to keep updating content on your website. Sometimes, websites are really fantastic, and they haven't changed in years. And that doesn't mean that they're low quality. That just means that the content there, essentially, has been the same for a long period of time. And people might be happy with that content as it is. So it's definitely not the case that you need to keep updating your content. On the other hand, things like blogs especially give a lot of extra context for your website. So maybe, if you only write about your articles, your products, or the services that you're selling within your main website, and your blog gives a little bit more context to the bigger picture of how these products and services might be used, then that could attract a different set of users. So it's not the case that the blog itself makes it look like the website is fresher and therefore ranks more, but it's just that the blog has slightly different content that might be attracting different users. So I'd definitely look into different kinds of content that you might be able to add to your website. That could be a blog. That could be something more about-- I don't know, maybe photos. Maybe even a feature where people who use your products and services can comment on them directly, like reviews, for example, or something like that. So essentially, adding more content to your website is almost always a good thing, as long as you can make sure that this content is really high quality and something you'd like to stand for on your website.
SIMON: OK. But is it so that the blog also tells something about the home page, if you read a lot of blog articles about the subject? Or does Google read the home page as a separate page?
JOHN MUELLER: We look at that separately. What might be happening is, if you have the blog also, like in a sidebar on the home page, something like that, we recognize that the home page is changing more frequently. And we try to crawl and re-index it more frequently. But just because we crawl and re-index it more frequently doesn't mean that we'd rank it higher. So we see the crawling frequency more as a technical aspect. And the ranking side is something based on the various signals that we collect there, which isn't necessarily based on how often we crawl and re-index a page.
SIMON: OK. Thank you.
JOHN MUELLER: There are
URLs-- yeah, go for it.
PANKAJ: John? Yes, I have a question for you. Basically, site speed is very much important in this scenario, to rank higher, right? Am I correct?
JOHN MUELLER: Not that important, but it's something which users definitely recognize, yes.
PANKAJ: OK. So in the mobile context, how valuable is it? [INAUDIBLE]
JOHN MUELLER: At the moment, we don't use any kind of page [INAUDIBLE] measurement for the mobile ranking. So we try to use the desktop rankings on mobile, except for situations where we find a page doesn't work at all on mobile. So for example, if a mobile page has a lot of Flash content, then that's something where we'd say this doesn't really make sense to show to people on smartphones, because they can't see it. So that's when we would demote it in the search results. But if the mobile page is kind of reasonable, we would show it normally, like we would with the desktop page, even if it's fairly slow. So I think, over time, we might be able to go in that direction and say, your mobile page should be really fast and should be able to render within a couple of seconds. But at the moment, it's still very early on the mobile side, and so many websites are still making so many mistakes on mobile, that we can't really be too picky about what we would show in the mobile search results.
PANKAJ: OK. So it means, if we make our site responsive, we may have multiple CSS and different JS files. But we need to let Google crawl that. So somewhere, is it helping in crawling our website?
JOHN MUELLER: Usually, that can be fine. We recommend not blocking that, primarily so that we can recognize the JavaScript or CSS, for example, that you use to do a responsive design for mobile. If we can't crawl that, we can't recognize that you're doing something really smart. So if you let us crawl that, then we can credit your website with that extra value that you're providing.
PANKAJ: OK. So if we are making the JS and all this stuff accessible, and you find that it is something providing value to users, you will value those signals, right?
JOHN MUELLER: Yes. I mean, if we can pick up more from the CSS, then that definitely helps us. It's not that we'll put your website at the number one ranking for all of your keywords, but it definitely helps us to understand the context and your content a lot better.
PANKAJ: OK. And now, I have two questions related to internal links.
JOHN MUELLER: Let me get through some of these submitted questions first. And I'll open it up for general discussion in a little bit.
PANKAJ: OK. Thank you.
JOHN MUELLER: All right. "There are URLs in my Disavow file that are no longer in Google's index or cache. If no one is linking to these URLs and Google never revisits them, will they ever get disavowed?" Generally, if they're not in our index, you don't need to disavow them, because we're not going to use them for any links there. So theoretically, you could remove them from your Disavow file. You could also leave them there, if you're worried that, at some point, we might crawl and index them again. But if they're not in our index, then that's essentially fine. "Why is it taking so long to refresh Penguin? Come on, Google, you should be faster." We talked with some of the engineers about this, and they are working on this. So it's something where we're working on an update. At the moment, I don't have anything to share with regards to when that will actually happen. But it's been a while since the last one, so I can see that they're probably working a little bit harder on this. "I broke my blog posts into pages with 50 comments. The pagination is reported as duplicate content in Webmaster Tools. I already use canonical URLs. What else can I do?" Essentially, that can be fine. If you're just paginating based on comments, and you have a lot of comments on your pages, then that's something that we'll recognize. We'll try to crawl and index those individual pages with the different sets of comments. And even if the titles and maybe part of the article are exactly the same, we'll try to recognize the unique parts there. So if someone were to search for that general article, and you have the whole article copied on all of these pages, then we'll probably pick one of these pages, maybe the first one, generally the first one, probably. If someone is searching for something unique within the comments, then we'll recognize that these comments are separate and try to reflect that in the search results. So duplicate content, on its own, isn't something that you need to be afraid of. It can have a totally normal purpose here. Sometimes, there are technical reasons for it. Sometimes, with pagination, you have things like this as well. So nothing really to worry about there. "If Google gives a co.uk domain a 10 out of 10 ranking regarding intended geographic location, what score would a dot com get that's been set in Webmaster Tools to the UK?" We don't give any ratings to ccTLDs. But generally speaking, if you have a generic top-level domain, and you use Webmaster Tools to set geotargeting, that would be equivalent to a ccTLD. A lot of new generic top-level domains are coming out. I think it's also important to keep in mind, with these new top-level domains that are coming out, some of them look like they might be geographic, but they actually are seen as being generic on our side. So if you have something like .nyc or .berlin-- I think these are two that are coming out-- that's something which we would, I think, at first look at and treat as a generic top-level domain. So you can use geotargeting, if you want, for that. If, over time, we recognize that this top-level domain only has content from a very specific region, then we could take that into account. But definitely, at the start, we'll be treating all of these new top-level domains as generic top-level domains. So you can use geotargeting,
set that up however you want. "I'm a designer, and I want to promote my design portfolio blog. How would you recommend optimizing individual pages when content is image- or video-heavy? Also, is a page with one or two images and, say, 200 words considered low quality?" So our algorithms don't count the words on a page. It's not that you need to have more than 200 words on a page to be seen as high-quality content. Sometimes, a page has very little textual content and is very important and high quality. So just because there are words on a page doesn't make it higher quality. At the same time, as I mentioned before, if we can't understand what this page is about based on the words on the page, then we'll have a hard time showing it in the right search results. So a minimum amount of text is definitely a good thing to have on these pages. It really depends on how far you want to go. There are some sites I've seen where a graphic designer has created something that looks really nice in a browser window, but it has no textual content at all. And for us, that's something which makes it extremely hard to rank, because we don't know what we should be showing it for. Maybe we can't even recognize what's shown in the picture, or maybe it's not a photo of something very specific. So sometimes those are very tricky situations. But at the same time, if you have something like your home page, or if you have a separate blog, or if you have other pages that do have a lot of text, then that can compensate. At least those pages can rank for those queries. It doesn't have to be that all pages within your website have a lot of text on them. Sometimes, like with a gallery, that's totally fine.
SIMON: I have a similar question about video on your website. Does Google see when you embed videos from YouTube in a website? For example, you're making a video course that can be very interesting to people. But how do the search engines look at your page when you only have video and a few words? Do they recognize the YouTube video with the keywords in it?
JOHN MUELLER: Good question. I don't think we pull in content from the YouTube video. So we try to primarily focus on your page and how that page has a context within your website, and within the internet as a whole. So things like anchor text for links leading to that page, that helps us a lot. But if there are primarily two to three videos and very little text, maybe just a heading, then it's really hard for us to figure out what we should be showing this for. In comparison to, maybe, the YouTube landing page that has a lot of context on it, maybe that even has a lot of comments on it, that's a lot of text that we can use to rank it for. But if we don't have much, it's very hard.
SIMON: OK. But does Google see the video? Because it's in an IFrame, does it read the IFrame? Or totally not?
JOHN MUELLER: We generally try to pull in content from an IFrame. I imagine, with YouTube videos, that's really hard, though, because of how YouTube pulls in things like the titles of the videos, or the comments, those kinds of things. So I would assume that we're not going to see a lot of content for those IFrame embeds.
SIMON: OK. So a little text will be nice.
JOHN MUELLER: Definitely. Yeah. I mean, sometimes it's also the context of these pages that makes a difference where, if you're seen as an absolute authority in this topical area, and you compile a list of two or three or four videos that are really important, that maybe you created yourself, where everyone says this is the absolute best compilation on this specific topic, and they link to you with relevant anchor text, that's something where we could pick up on that as well and say, OK, this is a really good page on this topic. We should be showing it when someone is searching for it, even if there's not a lot of text on this page.
JOHN MUELLER: But text definitely always helps.
SIMON: OK. Thank you.
JOSHUA BERG: By your authority, do you mean the channel, John?
JOHN MUELLER: What?
JOSHUA BERG: You said we can tell if, say, you're an authority on the topic of these videos. Then do you mean the channel, or those who are sharing the video?
JOHN MUELLER: More about your website itself. So if we can recognize that your website really matches this topic that someone is searching for, and this page is very specific to something unique that they're searching for. So I don't know. It's hard to come up with an example offhand. But you could imagine, maybe, let's say, a car manufacturer that is known to produce a lot of good cars. And they have a page with some advertising videos on that, advertising campaigns that they had. And if someone is searching for this car manufacturer plus advertising videos, then maybe that kind of landing page where they compile those videos could have a good chance. Because we understand that this website is about this car manufacturer. It's something that's essentially seen as being very relevant to this website. And we can recognize that maybe this page has the title "Advertising Videos" on it. And we can kind of work to combine those two. So that's something where we try to understand your website and understand how it fits in with the query. And if there's something unique within your website that matches the exact query, then we'll try to pick up on that.
JOSHUA BERG: So in other words, an authoritative page is a page that is maybe passing like a topical PageRank, or a topical authority, back to the video that it's showing?
JOHN MUELLER: I wouldn't necessarily look at it like that, but I think the general concept is quite similar, in that when we recognize that this website is very relevant for this general kind of query, and it matches the specifics of that query, then we think that might be a good result as well. But again, if this is something that you're creating, and you have these videos on your website, then adding more text definitely helps us understand it better. And it's not something that I'd say you can always count on happening, because maybe we see a YouTube channel that has a lot of descriptions that also match this kind of content that the person is looking for.
SIMON: So does it help to get a link from YouTube, when you make those YouTube videos, to link them to your website?
JOHN MUELLER: I think those links are nofollow. I'm not sure.
JOHN MUELLER: OK. Let's grab another one here. "If Google can detect unnatural links, why do we still need to disavow them, instead of Google ignoring them automatically? I read that competitors use negative SEO and spammy links to harm their competitors [INAUDIBLE]." I think the whole negative SEO topic is something we've talked about a lot in some of the previous Hangouts, so I'll generally skip that part here. In general, we do try to recognize problematic links, but we understand that we can't catch all of them. So the big problem here, from our point of view, is we see a lot of really problematic links, or a lot of problematic things that, maybe, you're doing on your website or outside of your website. And that leads us to generally, maybe, not trust your website as much as we would otherwise, because we don't know if all the other signals that we're seeing from your website are really unique and compelling and really authentic signals that we should be able to trust. So from that point of view, it's something where we see these problematic links, and we don't really know how we should react to that. It's not that we can just close our eyes and say, oh well, we can recognize these problematic links and ignore them. It's more that we don't know what we should do with all of the other signals that we find attached to your website, if we realize that we really can't trust one of these. So that's something where I'd generally work on making sure that your website, on the whole, provides really consistent signals, that it's not doing something really badly in one area and doing really well in other areas, so that when the algorithms look at it, they don't really know: is the whole website bad? Is the whole website good? Should I treat it somewhere in between? Instead, if you can make sure that everything across the board looks really good, then that's always a good sign for us. With regards to negative SEO, I think we generally do a pretty good job of catching those kinds of things. If you run across situations where you think we're not catching something properly, then you're welcome to forward that on to us. And we'll take a look at that with the engineers.
PANKAJ: Hey, John?

JOHN MUELLER: Yes?

PANKAJ: Now can I--

JOHN MUELLER: One more. One more. Just one second. Could you elaborate on the choice of removing author snippets from web results? I don't know. Joshua, you made this cool plug-in, I saw, that restores--

JOSHUA BERG: Yeah. That's a useful one. This extension could restore your authorship photo. And then it could also restore your ranking, you know? So you can see it however you want to see it, you know?

JOHN MUELLER: I thought that was really cool, yeah.
JOSHUA BERG: Something like Nexus+.
JOHN MUELLER: Yeah. I think that was a cool idea. So in general, our idea here is to try to make the search results a little bit more consistent, especially with regards to mobile versus desktop. And on mobile, adding additional images on a page adds a lot of latency. And it takes away a lot of room that could otherwise be used for other things. So we're trying to make it a little bit more consistent. And from our experiments, we've noticed that the click-through rate is more or less the same. So that's something where we think this is the right move to take, to make sure that things are consistent and that authorship is used appropriately.

So we're still using authorship. We still use Google+. This is not the end of Google+ or the end of anything. We still process authorship information on these pages. We still keep that. We'll show your name in the search results when we can recognize that. And it's essentially just a change in the search UI.
JOSHUA BERG: But you mentioned, or it was mentioned, that in the news, or in some articles, the publisher or page might show a different and smaller version, like what we see with some page images used more, or sometimes--

JOHN MUELLER: I think the results that come in from Google News, which are in this News Universal block, might still have authorship photos.

JOSHUA BERG: Yeah, In-Depth.

JOHN MUELLER: The In-Depth articles, I'm not sure. They might also still have authorship photos. But at least for the general web results, we're removing the photo and the circle count and just keeping the name and the link to the profile there, unless, of course, you have your extension installed.

JOSHUA BERG: Yeah. Thank you, guys, on that.
JOHN MUELLER: All right.

PANKAJ: Hello, John?

JOHN MUELLER: Yes?

PANKAJ: Now can I--

JOHN MUELLER: Sure.

PANKAJ: Thank you. My question is related to internal links. When we go into Webmaster Tools, we found a lot of internal link numbers that Google showed. But we have a couple of [INAUDIBLE] links which are consistent. And for those, the variation in the numbers is significant. Let's say we have a Terms and Conditions page; it has 70,000. We have a Contact Us page, or maybe some Customer Care page, which has 73,000 or 75,000. So there's a huge difference in those internal link numbers. So how does this happen?

JOHN MUELLER: Especially if you have things that are consistently linked across the website, then I think those numbers are hard to use directly. I'd use them more with caution. So I would worry more about pages that you find that don't have any internal links at all, which might be a sign that there's something technically broken with your internal linking structure. But as long as you see some information there, I think you're probably on the right path.
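John's warning sign here, pages with no internal links at all, is easy to check for yourself from a crawl of your own site. The sketch below is a hypothetical Python helper, not a Google tool; the link-graph structure and URLs are made up for illustration. Given a mapping of each page to the pages it links to, it flags pages that nothing else links to.

```python
def find_orphan_pages(link_graph):
    """Return pages that no other page links to.

    link_graph maps each page URL to the set of internal URLs
    it links out to (a hypothetical crawl result).
    """
    all_pages = set(link_graph)
    linked_to = set()
    for source, targets in link_graph.items():
        # A self-link doesn't count as an internal link from elsewhere.
        linked_to.update(t for t in targets if t != source)
    return sorted(all_pages - linked_to)

# Example site: /old-promo is never linked from any other page.
graph = {
    "/": {"/contact", "/terms"},
    "/contact": {"/", "/terms"},
    "/terms": {"/"},
    "/old-promo": {"/"},
}
print(find_orphan_pages(graph))  # ['/old-promo']
```

Such orphan pages are worth fixing or removing, since crawlers that follow links would never discover them through the site's own navigation.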
PANKAJ: OK. Then the next question is, when we look at the top pages for internal links, we have the same total links. But that means two different URLs for internal links; other links have different top pages. So how will Google acknowledge that these are the top pages for this link, and these are the top pages for this one? What are the [INAUDIBLE]

JOHN MUELLER: So within the ranking within your website, you mean?

PANKAJ: It's internal links within the website.

JOHN MUELLER: Yeah. I'm not really sure where the exact number comes from. I've seen those fluctuations that you mentioned as well, where the same page will be linked internally across the whole website, but they have different counts. I'm not absolutely sure where that would come from. But that's something that would be on our side. That wouldn't be something that you would need to worry about.

PANKAJ: OK. Thank you. And one thing that I would like to know: in one of the videos, Matt mentioned that Google reserves the right to treat links in the footer differently from links in the main body. So "differently" means what? What rights are you talking about? Can you elaborate on that?
JOHN MUELLER: OK. So in general, what we find is that when a page is linked in a footer across a website, it might have a lot of internal links, but that doesn't mean that it's the most important page within your website. For example, you might have a Contact page that's linked across your whole website. But that doesn't automatically mean that this Contact Us page is the most important page on your website. Maybe the home page is even more important.

So that's kind of why we try to understand that these are footer links. These are what we call boilerplate information, for example. So we have to look at the whole page. And we see that maybe the header, maybe the footer are things that are copied across the whole website. And from our point of view, that helps us to understand which part of the page actually changes, which part of the page is important to focus on. So if there are footer links there, then we'll recognize that and say, there are a lot of footer links here. But it's not something where we'd say, this is the most important part of the website.

So essentially, that's kind of what we do there. We try to recognize which part of a page is boilerplate, so which part is repeated across the website. And we try to discount that part of the page a little bit so that, when we crawl and index that content, or when we use it for ranking, we focus on the main content that's actually visible on the page, that users would feel is the main content as well.
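The boilerplate idea John describes, spotting the parts of a page that repeat across the whole site and focusing on what remains, can be illustrated with a toy example. The Python sketch below is purely illustrative and is not how Google's systems actually work: it simply treats any line of text that appears on every page as boilerplate.

```python
def split_boilerplate(pages):
    """Strip site-wide boilerplate, keeping page-specific content.

    pages maps a URL to that page's text lines. Lines that appear
    on every page (header, nav, footer) are treated as boilerplate;
    whatever is left is the page's main content. This is only a toy
    illustration of the concept.
    """
    line_sets = [set(lines) for lines in pages.values()]
    boilerplate = set.intersection(*line_sets)
    return {
        url: [line for line in lines if line not in boilerplate]
        for url, lines in pages.items()
    }

# Hypothetical two-page site sharing a header and footer line.
site = {
    "/a": ["ACME Nav", "Article about widgets", "Contact | Terms"],
    "/b": ["ACME Nav", "Article about gadgets", "Contact | Terms"],
}
print(split_boilerplate(site))
# {'/a': ['Article about widgets'], '/b': ['Article about gadgets']}
```

The takeaway matches John's point: a link that only ever appears in the repeated region accumulates a large raw count, but it carries much less weight than a link placed in the page-specific content.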
PANKAJ: So if we want to be more conservative, meaning, what we call it, to give the PageRank to other pages, so if we set those pages up for the links, would it be fine?

JOHN MUELLER: I wouldn't necessarily do that. You're going into the area of PageRank sculpting then. And that's not something you need to do there. We recognize these are footer links, and we'll treat them appropriately. It's not that we'll funnel all of your site's PageRank into those pages. We recognize that automatically. And we can deal with that automatically. You don't need to manually tweak those links within your website.

PANKAJ: Thank you.

JOHN MUELLER: Sure.
JOHN MUELLER: All right. We have 10 minutes. I could go through more questions, or take questions from you guys. Do you have anything? Nothing special? OK. I'll just go through more of these questions here in the Q&A.

If an IP address was used for spam, et cetera, does that IP stay marked when recycled to a new user or a company? Does Google see a change of use of the IP and release it as a fresh IP? Or do you suggest switching to a clean IP?

In general, there are a very limited number of IP addresses left. And almost all of them were probably used for something at some point or another. So it's not something where you could really know the whole history of the IP addresses involved. I think there are very few IP addresses which we see as being problematic, in the sense that we can't really crawl them. I forget what the name for those kinds of IP addresses was. But usually, you can't access them normally. So those are the kinds of things you need to be careful about.

But if there was content there before, if content was indexed and crawled normally before on these IP addresses, even if that content was spammy, then that's not something I'd worry about. If you have a new website and you put it on one of these IP addresses, that's absolutely fine. Sometimes, when you look at things like content delivery networks, those IP addresses change all the time. And they get switched between different websites. And that's not something you could even control. So we wouldn't necessarily take action based on the IP address of a website.
SIMON: So it doesn't matter if you change your website to another hosting company with another IP? That shouldn't matter?

JOHN MUELLER: That shouldn't matter. We have recently put out an article about site moves which covers that kind of situation as well. I'd just make sure that, from a technical point of view, you're doing the right things to move from one IP address to another. But it's not something where you'd see any changes in the search results because of that.

JOHN MUELLER: It used to be that we would focus a lot on the IP address for geotargeting. We'd try to recognize the location of that IP address. But with the ccTLDs and the geotargeting feature in Webmaster Tools and [INAUDIBLE], that doesn't really play a role anymore. So if your IP address is located in, I don't know, Italy, or in France, or in the UK, or the US, that's not something you'd need to worry about. Some CDNs even automatically switch, depending on the user. So that's all fine. We have to be able to deal with that.
SIMON: OK. That's great. Can I ask another question?

JOHN MUELLER: Sure.

SIMON: When we transfer the website, the company gets a new website, and yes, different languages, but we want to drop one language. Does that give a negative result because all those pages give errors?

JOHN MUELLER: No.

SIMON: Sometimes from a language that we don't use anymore?

JOHN MUELLER: If nobody is searching for those pages, that's fine. If people are searching for those pages, of course, they'll be kind of disappointed. And maybe you'll see a drop in traffic for those pages, at least. But if those pages are ones that you don't want to have on your website anymore, removing them is fine. It's not that we would say, there are a large number of crawl errors here, therefore we'll demote the website. We see that more as a technical thing and say, this website is serving crawl errors, which is technically correct. It's the right way to handle pages that disappear. And that's fine. That's not something that we would penalize or demote a website for.

SIMON: OK, great.
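Serving a clear error status for intentionally removed pages is, as John says, the technically correct behavior. The sketch below is a hypothetical Python helper (the /de/ path prefix is an invented example): it returns HTTP 410 Gone for pages under the dropped language section, which signals that the removal was deliberate, while a plain 404 would also be acceptable.

```python
# Path prefixes a hypothetical site might drop after removing its
# German section; the "/de/" prefix is just an example.
REMOVED_LANGUAGE_PREFIXES = ("/de/",)

def status_for(path):
    """Pick an HTTP status code for a requested path.

    410 Gone tells crawlers the page was removed on purpose;
    404 Not Found would also be fine, since crawl errors for
    intentionally removed pages don't demote a site.
    """
    if path.startswith(REMOVED_LANGUAGE_PREFIXES):
        return 410
    return 200

print(status_for("/de/kontakt"))  # 410
print(status_for("/en/contact"))  # 200
```

In practice this logic would live in the web server or application routing layer, so every URL under the removed language consistently returns the error status instead of a soft "page not found" page served with 200.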
JOSHUA BERG: John? Another question related to authorship. For a little while there, we were seeing a few different levels of authorship. Some people were calling it A and B, or first and second level. And then some were also query related. And maybe, if your authority wasn't good enough, or a lot of your content was not as high quality as it should be, then your image didn't appear. And sometimes, your authorship didn't appear in search results at all. So is that still effectively the same? In other words, I imagine it would be. The content you associate with the things you contribute to and post to, these things are all still going to affect your authority and authorship, just as they ever were. Is that correct?
JOHN MUELLER: I think, for the most part, what we would do there is just try to focus on the technical aspects of whether or not authorship is implemented correctly, and just show that now. And now that we don't have to differentiate between showing the photo or not showing the photo, it's also more consistent, I think. If we don't show authorship at all, usually that means that the technical requirements are missing, or broken, or implemented incorrectly. But if you have it technically implemented properly, and we've been able to crawl and index and process all of that, we should be showing authorship, at least with the name. Going forward, of course, with just the name.
JOSHUA BERG: Yeah, because I think it was Matt or someone who mentioned at a conference how, when they started doing that, they found that, if they reduced the authorship showing by about 15%, based on the quality related to the authors, then they were able to greatly improve, significantly improve, the results of the display--
JOHN MUELLER: Yeah. I think, especially with the photos, that definitely makes sense. Maybe that's something we have to reevaluate when we have more experience with just the name-based authorship annotations that we have now. So I think these are all things which we're always experimenting with, where you're bound to see changes over time. So this particular change that we did now is, I think, just one of the many changes that we've made to authorship over the years. I think we've had it for two years now, something like that. And I can definitely see the team reevaluating how it's working now, maybe looking at this in a couple of months, maybe in a year, and saying, oh well, maybe we need to tweak it slightly differently, based on this, to make sure that people are recognizing that these are great authors, or to make it easier for people to see the names or the history behind the people who are creating this content. It really depends on how things evolve, how users react to these changes, and how we see the community around it evolving as well.
JOSHUA BERG: Right. And as with everything in the algorithm, nothing's permanent. I've said the only constant with Google is change. So you can also look at it as experimental. But in that sense, everything is. So is that what you're saying?

JOHN MUELLER: Yeah. Yeah.
JOSHUA BERG: I know what's
going to happen with them.
JOHN MUELLER: Well, we try to keep things consistent, because users would be totally confused if we changed everything. But we do have to react to changes in the environment as well. So for example, if we were to show the authorship photo for all search results, then maybe that would be too much for the majority of users, even if we had that information. So that's something where, in the beginning, when only very few sites implemented authorship, maybe it made sense to show them all. Maybe now that a lot of sites are implementing authorship, it makes sense to reduce that, or maybe to switch over to the text-based annotation.

It's something that always has to be reevaluated. And when you look at, maybe, eye tracking studies that were done two years ago with the search results, they'll be completely different now, when users are actually looking at it and when they've had some experience with the search results as they are now. So that's something that, I think, is always evolving, and that makes it interesting for us, because we can't stop. Sometimes we'll see people come to us and say, hey, what are you still working on search for? It works, you know? Stop changing things. At the same time, the expectations change all the time. People are using search on different devices, on their phones, on the watches now from Google I/O. So it's always evolving.
JOSHUA BERG: What were you referring to there? Was that internal eye tracking studies, or something that you've seen a study about?

JOHN MUELLER: I saw something a while back, but I'm not really sure where that was. That was externally, somewhere. But we do a lot of studies internally. And as you can imagine, we don't make changes that easily in the search results. It's not that we wake up in the morning and say, hey, we should change everything to green, because that's a cool color. We really try to evaluate the changes that we make and make sure that we're doing the right thing and that we're headed in the right direction. So we definitely do a lot of internal studies for that. But I know, externally, people do studies about Google as well. So I'm sure there are some eye tracking studies around authorship somewhere as well.

JOSHUA BERG: We do studies on Google all the time.
JOHN MUELLER: Yeah. Yeah. I totally understand. It can be frustrating when changes like these happen. But it's something that will always be evolving. It's never going to be static. There are going to be new things coming up, and there are going to be old things going away. And things that maybe worked out well, maybe things that didn't work out so well. So that's something that comes with the internet. It kind of keeps you young, or at least keeps you active.

All right. I think we're out of time. But if any of you have one last question, feel free to go ahead. No last questions. All right. I see there's still a bunch of questions in the Q&A. I am going to set up some new Hangouts in July. I'm going to be in Boston for a while, so maybe we'll switch time zones a little bit and have more US-friendly ones then. But I'll definitely set up some new ones, so you can add your questions there if you weren't able to get an answer here now. And until then, I want to thank you guys for joining, and thank you for all the questions. It's been really insightful.

SIMON: OK. Thank you very much.

PANKAJ: Thanks so much, John.

JOHN MUELLER: Have a good weekend.

SIMON: Thank you very much. It's really appreciated.