Transcript Of The Office Hours Hangout
JOHN MUELLER: Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. We just got some snow here in Switzerland, so it's almost like the way it should be in the winter. And we have a bunch of questions that were submitted already through the Q&A. And if any of you who are new to these Hangouts, or relatively new, who are live here in the Hangout now have any questions, feel free to jump on in and start with a question from your side.
DANIEL PICKEN: I'll jump in with my few questions, if that's OK, John?

JOHN MUELLER: Perfect.
DANIEL PICKEN: Excellent. So I'll try and keep it as brief as possible. So the first one is, how long does it take for Google to recognize schema markup? I've got an example from nearly two weeks ago. We have run the Data Highlighter, and we've put the code on the site. For whatever reason, the star rating isn't pulling through, and we don't really know why. So could you shed some light on that? And I've got an example, if you need it.
JOHN MUELLER: So I guess there are two aspects there. On the one hand, we recognize it fairly quickly. So usually, within one or two crawls, depending on the markup, we should be able to recognize that the markup is there and take that into account for processing. So that part usually happens fairly quickly. The other aspect, of that actually being shown in a search result, is essentially separate from that, in that we need to really be sure that this is something that we want to show in search for that site. That means we have to kind of double-check that the markup is correct, that it's used in the right way, and that the site is something that we would trust to show in the search results. And these are things that sometimes take a bit of time for us to actually pick up and understand properly. Sometimes that's something where you add the markup, and we don't show it at all. Sometimes we'll pick it up and show it fairly quickly, within a day or so. So it kind of depends more on the site itself. So what I'd do there is double-check that you have the markup implemented correctly with the testing tool.
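For reference, star ratings are usually expressed with schema.org AggregateRating markup. A minimal JSON-LD sketch might look like this; the product name, rating value, and review count are hypothetical placeholders, not values from this discussion:

```html
<!-- Hypothetical AggregateRating markup; all values are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "214"
  }
}
</script>
```

Running the page through the structured data testing tool should then report the detected Product and AggregateRating items, which is the check being recommended here.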
DANIEL PICKEN: We have done that, and it's showing as correct. It validated it as schema markup, as well. So it has been validated, and that's why we can't get our heads around why it's not working.
JOHN MUELLER: Then I would double-check, first of all, to see that it's implemented in a way that's compliant with our policies, so that you're not marking something up with stars that shouldn't be marked up with stars. I don't know if you have it on your home page, for example, and you're just saying, oh, my company has five stars, and you put that in there. That's not something that we try to pick up. And the last thing, which is probably the hardest, is to really make sure that it's a high-quality website, that it's something that doesn't have lower-quality content, so it's really a clear sign to Google that this is something that we should be showing in search, because we can trust this website to actually do it right.
DANIEL PICKEN: Because this is a colleague of mine's site, if that is all above board, which I imagine it is, where do we go from there? If we've run all of those checks, and the content's great, and the site, as far as I know, is a legitimate site, a legitimate branding company, what would we do then?
JOHN MUELLER: So what I would do there is post in the Webmaster Help Forums. Some of the top contributors are really good with markup problems, and they escalate individual issues to us when they see that something doesn't match the usual, potential problems. So that's something I would recommend doing there. And if they can tell that everything is really done properly, that this is a great site, and that we should be showing it, then they'll flag that to us. And they'll say, hey, John, you're doing something wrong. This is broken, you need to fix this. Talk to the engineers.
DANIEL PICKEN: Excellent. And my second question is around page rankings. When Google analyzes a link, does it decide whether or not to pass PageRank to a page, ignoring follow and nofollow for a second? So it looks at a page. Does it then decide whether it should pass PageRank? And if it does or doesn't, does it decide the level of PageRank that should go? And the reason why I ask is, I know that Panda is very much a quality algorithm, and it assesses the page, and whether or not the information can be trusted, how authoritative the page is on the topic. So are any of these things taken into account, also, in relation to PageRank? I'm just trying to establish, if we have a follow link, does that pass PageRank regardless of anything other than the nofollow tag?
JOHN MUELLER: We try to keep these things a little bit separate, so that we don't have algorithms that try to evaluate the same thing. So if we have one that evaluates quality, then it wouldn't make that much sense to have another algorithm that, essentially, tries to do the same thing for its own part. So we try to keep it a little bit separate. With regards to PageRank, I think the main issues that we see there are really if we recognize that this is a site that it doesn't make any sense to pass any PageRank from. Then, on a site level, we might say, OK, we're not going to pass any PageRank from here. That can happen, for example, if we can tell this is a site where people have been spamming their links for a long time, and it's maybe an open forum where all the links are followed, and it's just filled with clutter. So those are the kinds of situations where we say, well, we see this site is having trouble. Maybe they can't keep up, maybe they're doing this on purpose. And we just don't want to trust it with regards to our PageRank calculations.
DANIEL PICKEN: Would affiliates and directories fall into the category of not passing any PageRank?
JOHN MUELLER: Not by design, no. Both of those kinds of sites can be pretty useful. I think, especially with affiliates, essentially, their business model doesn't dictate the type of site they have. It doesn't mean that the content that they have is low quality. It doesn't mean that the links that they have are bad. It's just the way that they're monetizing their website. So some sites sell products directly to people, and others, essentially, just sell them indirectly, through maybe a main distributor or something like that. So that's not something where I'd say that just because you're an affiliate, we wouldn't trust your website, or we'd treat it as something subpar, because they can be just as good as normal websites. I mean, they can be normal websites, in that sense. Directories are usually a little bit trickier, because there are a lot of directories that, essentially, just accept any link. And if that's something where, essentially, anyone can just kind of drop the URL into their link-dropping tool and get a link from that site, then that's probably not something we'd need to trust that well.
DANIEL PICKEN: We tend to disavow any irrelevant directory, but anything that may be relevant, we may, potentially, keep. And that's how we generally look at directories, and that's good. Great. Thank you. Thank you very much for the answer.

JOHN MUELLER: Sure.
KREASON GOVENDER: Hi, John. Just a quick follow-up question on that. In terms of Google News sites, how authoritative are those sites? Are they always seen as authoritative sites?

JOHN MUELLER: Sites that are in the Google News index, you mean?

KREASON GOVENDER: Yes.

JOHN MUELLER: Not always. So that's not something where we would take the status within Google News as a ranking factor for web search. That's something that we do separately.

KREASON GOVENDER: OK. So it would depend on things like relevance and stuff as well?

JOHN MUELLER: Yeah. I mean, we would treat them as a normal website. I don't even know if we, internally, within the web search side, kind of have this information of, this site is shown in Google News, or is not shown in Google News. I think we should just treat them as normal websites.

KREASON GOVENDER: OK. Thanks, John.
JOHN MUELLER: Sure. Let me run through the questions that were submitted. If you all have any comments along the way, feel free to unmute and ask away. Otherwise, I'll run through some of these and leave some time at the end for more questions from you all.

If you have more than one link from a page, is the value divided equally, or is it weighted? For example, more value through the first link. As far as I know, it's more equally divided. Obviously, it's not something that's so trivially defined as we take all the links and divide by the number, because we have to figure out what's actually happening there. But, in general, we treat them the same. So you don't have to do anything special by putting your most important link at the top of your HTML file, because we have a lot of practice with understanding how links, especially within a website, tend to make sense. So that's not something that I'd really spend too much time on as a webmaster, trying to tweak the location of your links. That's probably time better spent just working on your website in general.

We recently found that Google
indexed thousands of URLs containing JSESSIONID parameters. We disallowed them in the robots.txt. How long does it take for Googlebot to pick up the new robots.txt directives?

So there are two parts here, I think. On the one hand, picking up the new robots.txt directives: generally, for most websites, we look at the robots.txt file about once a day. So that would be about a day's time where, essentially, we might not know the current status of your robots.txt file. You can speed that up through Search Console. If you submit the updated robots.txt file for us, then we can pick that up in the robots.txt tool, I think it's called that in Search Console. And that lets us go and find that file that you made some changes to a little bit faster.

The other aspect there is that you've disallowed some URLs in the robots.txt file, and you're kind of wondering when they drop out of the index. And the important part to keep in mind there is that the robots.txt file only controls crawling; it doesn't control indexing. So if you need something to drop out of the search results, you would need to allow crawling and make it possible for us to recognize that this is something that needs to disappear from the search results. So that could be by returning a 404, it could be by having a noindex, it could be by having a rel="canonical" to your primary page, which I'd assume you'd have here. And in all of those cases, if we can crawl them, then we can recognize that we shouldn't be indexing them, and we can kind of react to that.

So what I would do, in a case like this, is not block these URLs with the robots.txt file. Instead, I would allow crawling and make sure that you have a rel="canonical" set up on these pages, so that we can actually recognize that this is something that you don't want to have indexed like that. The other aspect here, with regards to robots.txt versus canonical, or redirect, or noindex, is that there may be links to these individual URLs, which might be within your website, or might be from external sites as well. If those URLs are blocked by robots.txt, then those links kind of get lost, because we go find the URL and we see we can't crawl it. So we'll think, oh, well, these links are for this specific URL, which might have the JSESSIONID in it, and that's probably not what you want. You probably want those links pointing to those specific URLs to point at the actual page that you're trying to show. So with the rel="canonical", you can forward that as well.
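To make the crawling-versus-indexing distinction concrete, here is a small sketch using Python's standard-library robots.txt parser. The example.com domain and the /session/ path are hypothetical placeholders; the point is that a Disallow rule only stops crawling, and does not, by itself, de-index a URL.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks session-ID URLs from being crawled.
robots_txt = """\
User-agent: *
Disallow: /session/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Crawling of the blocked path is disallowed...
print(rp.can_fetch("*", "https://example.com/session/page1"))   # False
print(rp.can_fetch("*", "https://example.com/products/shoes"))  # True

# ...but the blocked URL can still stay indexed based on links to it.
# To have it drop out of the results instead, allow crawling and signal
# the preferred URL on the page itself, e.g.:
#   <link rel="canonical" href="https://example.com/products/shoes">
```

The advice above follows this shape: remove the Disallow rule so the session-ID pages can be crawled, and let the rel="canonical" on those pages consolidate them onto the primary URL.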
DANIEL PICKEN: Can I ask on that? If I link to a page and that page 404s, does that, essentially, kind of break that link? So even though there is a link going into the site, because it's going to a page that's just giving a 404, will that site no longer have that PageRank from that other site?

JOHN MUELLER: Yes. That's correct.

DANIEL PICKEN: That's correct. Double-check.
MALE SPEAKER: So I have a question regarding that, too. So if the content is indexed in the Google index and we set it on disallow, you just mentioned that that will stop the crawler from crawling the page. Still, will that content ever disappear from the Google index?

JOHN MUELLER: So the content will disappear when we recognize that there is a disallow there, but the URL itself will potentially remain indexed.

MALE SPEAKER: So can people still find it? If people search for content on that page, can people still find the URL in the index?

JOHN MUELLER: Not based on the content itself, but based on links to that page. So if there's a link from somewhere within the site saying, I don't know, green shoes, on this URL, and that URL is roboted, then we take the anchor text and use that to rank the page. But we don't have the content from that page anymore to rank.

MALE SPEAKER: So the anchor could still be found, so the title or the link anchor. But would normal people, if they search for any content at all, see that URL in the index? Is that very likely?

JOHN MUELLER: It's possible, sure. I mean, if we see a lot of links to one specific page and that page is blocked by robots.txt, then we assume that there's probably still something valuable on this page that's relevant to the anchors that we've found so far.

MALE SPEAKER: OK. Good. Thanks.
JOHN MUELLER: When deciding which to index, HTTP or HTTPS, does Google take into account mixed content issues? I found some web pages with a valid certificate and a 301 to let Google index HTTPS, but each of them has mixed content issues.

So a mixed content issue is when you have a page that's HTTPS, but it includes something that's hosted on HTTP. So that's the kind of situation where you have a secure connection to your web server, and everything that goes to that server is kind of encrypted, except for this part that you're kind of pulling in, which could be an image, it could be ads, or a video, or something like that. So it's not really a secure connection for your content. In the browser, that's called a mixed content issue, because you have two different types of content there. And in general, that's something that should be avoided, because it's not a fully secure connection, in a case like that. So what tends to happen is sometimes browsers flag that in the browser, sometimes they flag that in the URL bar, on top, with a yellow triangle or something else.

And from our point of view, we try to avoid indexing those specific URLs as canonical. So if we're in a situation where we see one URL has normal content, where everything is on HTTPS and it doesn't have any mixed content issues, and the other URL does have mixed content issues, then, when we pick a canonical, we try to prefer the version that doesn't have mixed content issues. And with regards to HTTP and HTTPS, that can happen as well. We see one URL is on HTTPS and has mixed content issues, and we also know the same content is on HTTP, and it's kind of a normal HTTP page; it doesn't have mixed content issues by design. Then we might tend towards the HTTP version. But these are really almost subtle differences, from our point of view. If there are really strong signs that one or the other should be the one that we index, which could be maybe a redirect, or rel="canonical", or other things, where there's a really strong sign saying the webmaster really wants the HTTPS version indexed even though it has mixed content issues, then we'll probably pick the HTTPS version, even with the mixed content issues.

There are ways for webmasters to kind of learn about the mixed content issues on their site by using some tools that they can plug in on their site. There are server-side scripts that enable CSP, Content Security Policy, where, essentially, a browser reports to the server, saying there was some mixed content issue on a specific page. So that's something I'd definitely recommend doing. Having really good HTTPS pages, I think, definitely makes sense, if you want to go to HTTPS. And finding a way to make sure that you're not accidentally including mixed content is always a good practice. But in general, this isn't going to make or break a good site's indexing within Google.

We have pages from our site
that are scraped and inserted into hacked sites. The hacked sites rank very well. What can we do to get our original pages back, other than normal web spam reports?

So I think you're in the Hangout here, right? I think I saw you somewhere. So feel free to unmute, if you want to add more comments. I took this issue up with the team as well, to kind of figure out what, specifically, is happening here with regards to your site, or why your site is seemingly being targeted a little bit more than we've seen other sites get targeted. And they're definitely looking at what we can do, on our side, to algorithmically tackle this problem. But what would really help from your side as well is, if you notice this happening repeatedly for your site or for any site out there, feel free to get in touch with me directly. And send me some sample URLs, so that we can really take this up with the team with a bunch of sample URLs from your site, and we can work to get that cleaned up. Feel free to send me a note on Google+ or wherever, and we can take a look at that. There's a link to a forum for that. I'll have to take a look at that later.

Is there any limitation to the number of characters in the title attribute of a link? At some links on our site, the meta description of the linked page gets displayed as the link title. That gets to around 250 characters. Is that too much?

So the title
attribute of a link, that would be within a page, if you add the title attribute. As far as I know, there is no limitation from the HTML side. And in general, for indexing, on our side, what we do is try to stick to the HTML spec. And if there's no limit there, then we'll try to take that into account as well. So that's something that we try to pick up there as well.

With regards to the meta description and title element on the page, that's something where we have various algorithms to try to figure out what we should be showing in the search results. So on the one hand, we want to find something that's unique and relevant for that page, maybe something short and succinct that we can pick up and give the user who's searching a little bit more insight into what's actually findable on this page. And what sometimes happens is we take a really long title and we decide to shorten it, because we think it's better for the user to have something shorter, maybe something that doesn't have as many keywords in it, and we'll show that in the search results. And this is specifically true when someone is searching on mobile, where there's really only a small amount of room for a title and for a description. So that's something where there's no hard limit on the number of characters, or words, or whatever, in a title and in a description, but from a practical point of view, we have to find the right balance and show the right thing there. The other thing to keep in mind with the title and the snippet is that we change those depending on what is being searched for. So we try to match that to the user's query, to really show, with regards to the specific words that you searched for, that this page is relevant for these specific reasons. So it's not the case that we would always show exactly the same thing.

For App Indexing,
are there any plans to have Search Console show the query data in the page for the install app button stats?

I think we just added that last week. I would double-check. If you have App Indexing set up, double-check your Search Console account. I believe you can switch between two modes now.

We're updating the website
design, content, structure, and navigation in all of the URLs. An SEO strategy has been created, but since every part is getting changed, how much can this impact our rankings and traffic? And how long can it take to rank again?

It's impossible to say how much it will impact your rankings. We've seen situations where people put a new website up and they accidentally leave a noindex on all of the pages, because that's from the development version. And in cases like that, you really have a really strong impact, because we end up removing the whole site. But in general, when websites are updated and you do a redesign, they really need to be kind of reevaluated by our systems. And that's something that takes a bit of time. We have to reprocess all of these pages. If you change your internal URLs, then we have to learn about that again. If you change your internal structure, like which pages link to which ones, whether you have category pages, whether you link directly through to the detail pages, whether you have multiple levels, all of these things we usually have to re-learn and kind of re-index. And that's something that takes a bit of time on our side. And it can, or it should, generally, result in some changes in the way that we crawl, index, and rank your pages. So that's something where you would expect to see changes. It's not that we would say that it will remain the same forever, because you're making changes on your site, and those changes are--
DANIEL PICKEN: Can I ask on that, John?

JOHN MUELLER: Sure.
DANIEL PICKEN: What I see a lot of-- and this is [INAUDIBLE] that's been done-- at the bottom of the home page, or maybe the product page, they'll have a paragraph. And they will be internally linking to pages, category pages. It looks kind of spammy, because it's exact match anchor. So it could be saying, dun, dun, dun, used cars, dun, dun, used car, and they're linking to the internal pages. Now, I suppose we generally advise, especially if we think that it's been impacted by Penguin-- I know that's external links coming in, but that happens to be the phrase that's been told to us-- we might advise to probably pull those links out. But how important is it? Because imagine that you need exact match anchor links internally. Do you treat home page anchor texts internally differently to, say, navigation anchor texts internally?
JOHN MUELLER: We definitely treat them differently. I mean, we try to recognize what the relevance is of the individual links, and we try to take that into account. But it's not something where I'd say you need to have exact match anchor texts in the footer or something like that, because, in general, we have a lot of practice with understanding the normal, navigational structure of a website. And we understand that there are categories, and that there are specific, important pages on your site. Wow, it turned dark suddenly. We kind of understand how normal websites are structured. So you definitely don't need to do these almost spammy-looking, long footer links, where you're linking to blue used cars for sale in London and pointing at a specific page of your site. That's not something you really need. If it's really important for your site-- if it's something that you think, this is our main product, I really want to promote this one, specific thing-- I would just make that a normal part of your home page and really make that visible for everyone. Don't just hide it in the footer.
DANIEL PICKEN: It tends not to be in the footer, but it's just, generally, some content at the bottom of the page. For whatever reason, it's linking to these pages, and I'm just wondering if that's really going to have that much of an impact or not.

JOHN MUELLER: Yeah.

DANIEL PICKEN: Do you see what I mean? If I'm pulling it from the category or that page, it's certainly relevant. If it's used cars anchor text, it's definitely going to the used cars internal page. But it's a case of, could that be interpreted quite negatively, if you've got, say, a few sentences and there are, say, five or six links in that as anchor text?
JOHN MUELLER: That's something we see a lot on lots of sites. They will, essentially, almost place keyword-stuffed texts on the bottom of their page, where no normal user would normally scroll. And there are a bunch of paragraphs of text, and there are keyword-rich anchor texts linked to internal pages in there. It's kind of like giving some background information for someone who might happen to accidentally scroll down that far. And that's something that, I think, looking back maybe five or so years, probably worked, in the sense that this kind of keyword stuffing was picked up. And we thought, oh, this page is relevant, and we need to take these links into account. But nowadays, that's something that really doesn't affect the site at all. So this is kind of like [INAUDIBLE] that's carried on from the past, where people are saying, oh, it worked at that time. Maybe it still works now. I'll just kind of maintain this. But in general, everything that you have on your site that you don't really need, I'd aim to kind of remove, because it makes it a lot easier to maintain, it kind of keeps the pages a little bit sleeker, and you don't have to worry about things breaking. So if you have this old keyword-stuffing text on the bottom of your pages, maybe it's time to rip that out and just make sure you have great navigation within your normal site.

DANIEL PICKEN: Couldn't agree more. Thank you, John.
JOHN MUELLER: In Search Analytics, it's only possible to see rankings on a country level. Given the importance of local search, are there any plans to provide the information on a regional, city level?

I don't know of any plans, but that's an interesting point. I'll definitely take that up with the team as they continue working on that feature. That sounds pretty interesting. You can also get, of course, some of this information through Analytics directly, when people actually click through to your site. And so that might help you to kind of interpret what, potentially, people are searching for, based on the number of impressions, and the number of clicks, and what you're seeing as kind of relationships in Analytics.

As far as I know, Penguin was
initially released in the US and expanded to the UK, Canada, and Australia. How is his territory these days? Has he already flown to Asia or some other areas?

So Penguin, like most of our algorithms, is pretty global, in that they apply to all websites, globally. And I don't think we ever really said that this was something that was specific to individual countries. So these kinds of changes, where we look at web spam, in particular, we try to make global, so that we have one algorithm to maintain. And when we update that, it's valid for the largest number of sites. It kind of increases the scale of our actions.
DANIEL PICKEN: When can we expect that, John? When can we expect that to be released? I know it's some point this month. Are we close now?

JOHN MUELLER: I don't have anything to announce just yet. I don't know. I'm more cautious than other people. Personally, I prefer we announce it when it's actually ready and actually live, rather than pre-announcing it and disappointing people when we can't really make that date. Sometimes it works out that we can move things along a lot faster than we expected; sometimes unexpected things get in the way. So I prefer to announce it when it's ready and then release it when it's ready, rather than try to stick to an artificial date.
DANIEL PICKEN: Now that we're kind of on a conversation of updates, Panda-- I know that's part of the core algorithm now. How often are we expecting that to be updated? I think people were saying real time, but we don't think it's going to be real time now. So is that still going to be, on average, once a month? Or is it going to be a lot more frequent now?

JOHN MUELLER: As a part of the core algorithm, you probably wouldn't see those kinds of updates happening. That's something that's kind of just rolling along. So it's not so much like in the past, where you'd see, on this date, this actually changed and we updated that. As a more rolling algorithm, you wouldn't really see the individual cut dates of specific parts of the data.

DANIEL PICKEN: So will it be real time? Or is it not real time? Is it kind of as and when?

JOHN MUELLER: It's not real time in the sense that we crawl the URL and we have that data immediately. So it's something we have to bundle together, understand the data, and have that updated. And we do that more on a rolling basis now, so that when one is finished, the next one kind of starts. So it's real time, if you look at it at a really big scale, in that things are happening all the time. But it's not real time in the sense that every second there is a new value that's being produced based on a new data point that we have.

DANIEL PICKEN: So that rolling basis, that period, how long will that take? Is that like two weeks, and then it kind of starts again, and then starts again? I'm just trying to understand what to expect now, as opposed to before the Panda merge.

JOHN MUELLER: I think what you would usually see is that this is just kind of more like a subtle move from one to the other as things kind of roll out. So it's not that you would even notice this cycle.
DANIEL PICKEN: OK. Thank you.

JOHN MUELLER: We're making AMP pages and wanted to know if we can test these pages with any tool within Google. How can we test our pages and see whether they're OK or not?

So there are three things that you can do here. On the one hand, you can sign up for the Search Console beta, because we're testing some new features around AMP. The short link for that is g.co/searchconsoletester. And if you sign up there, you'll find information about the AMP tests that we're doing. The other two things you can do on a per-page basis. On the one hand, check with the AMP validator that's built into every AMP page. So you can just add #development=1 to the URL and reload the page, and the AMP runtime that is used will render the page, and it will show you information on whether or not this page is valid AMP markup. In addition, specifically for the news article markup, which you would need for the news carousel where we kind of show these AMP pages, you can use the Structured Data Testing Tool to run your AMP page through, as well as your normal HTML page. And you can double-check that we're picking up the news article markup properly, that you have all of the elements which we require for that markup, which might be the image or the publisher information. I don't know all of the details directly, but you'll find all of that in the Structured Data Testing Tool directly. So those are two ways you can test it on a per-page basis. For more of a site-level view, I'd really recommend the AMP beta test.

Increasingly seeing
international sites returning in organic results on queries which have local intent in the UK. For example, electrician+location. It doesn't seem very useful. Sites from the US, India, and Australia show up.

So from my point of view, that sounds like something is broken. That's something that we should probably look at. And if you're increasingly seeing that, I would love to see examples. So feel free to send me some queries, some screenshots on Google+, and I can take that up with the team to figure out what exactly is happening here and where we're getting that wrong.
MALE SPEAKER: I have a related question, maybe.

JOHN MUELLER: Sure.

MALE SPEAKER: International results. So if I am in Germany, and my customer and their website are in Switzerland, what is the best way you suggest to check the rankings in google.ch for that particular customer? How the rankings look, how they rank, what their competition is doing. Because if I just switch to google.ch, of course, it will see I'm in Germany, and it gives me not the right results. So what is your take on that?
JOHN MUELLER: Hard to say.So to kind of appear
like a Swiss user.So there are two things
that I, generally, do.On the one hand,
setting it to google.ch,which is kind of
like the first step.The other thing is within
the URL parameters,you can set the gl=parameter
to the country code.So gl=ch.For example, for Switzerland,
that would essentially say,I would like to be
geolocated in Switzerlandand have those results.That's not something you
can do directly in the UI.Or maybe you can.Maybe in the Advanced Search
settings, I'm not sure.But it's really quick to do in
the URL for the query directly.I don't know about a
lower level setting.Like if you want to say, I
want to be located in Zurichor I want to be
located in Geneva,I'm not sure how
you would do that.
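The gl= trick John describes can be wrapped in a small helper. A minimal sketch; it only assembles the URL (the geolocation behaviour itself is on Google's side), and the domain and query used below are placeholders:

```python
from urllib.parse import urlencode

def search_url(query, domain="www.google.com", gl=None):
    """Build a search URL; gl= asks for results geolocated to a country.

    gl takes a two-letter country code, e.g. "ch" for Switzerland,
    as described in the discussion above.
    """
    params = {"q": query}
    if gl:
        params["gl"] = gl
    return "https://{}/search?{}".format(domain, urlencode(params))

# e.g. check google.ch as if geolocated in Switzerland
url = search_url("electrician", domain="www.google.ch", gl="ch")
```

Combined with an incognito window, this approximates the country-level check John describes; there is no equivalent city-level parameter, as he notes.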
MALE SPEAKER: Of course, I could always use a proxy and, with my IP address, act as if I'm in Switzerland. But how does Google detect my settings? Probably I would have to take a fresh browser and an incognito window, so Google does not see my browsing history from the past, right?

JOHN MUELLER: Yeah. That's usually what I do. I just open an incognito window and check it like that.

MALE SPEAKER: So you open incognito, you use a proxy to look like you're from a different country, and then you look at the .ch results?

JOHN MUELLER: I usually just use the gl= parameter.

MALE SPEAKER: Really?

JOHN MUELLER: So I don't even bother with a proxy. It's really rare that I have to pull up a local proxy from somewhere to look at something specific. But I probably don't check the rankings in the same way that you would check the rankings.

MALE SPEAKER: Yes, probably not. By the way, you probably surf via an American IP, don't you, when you're within Google?

JOHN MUELLER: Sometimes, yeah. It's really weird sometimes.

MALE SPEAKER: Is your VPN not always Google America?

JOHN MUELLER: I don't know. Sometimes we have Swiss results, too. I don't know how they have that set up.

MALE SPEAKER: Thanks.
DANIEL PICKEN: How can you allow for the local search? Because for a client, when I want to find out where they're standing in Google and where they're ranking, I don't want it to be biased to my location. And I'm not really sure, now that Search Tools has been removed, how I could go about that. Because I go through incognito, but that still biases me to wherever I'm based. How do I do it so that it doesn't bias to any specific location within the UK?

JOHN MUELLER: I don't know for sure. I don't know if there is still a URL parameter that you can use there. I know we removed that text box where you can enter your location. One thing I would do, to kind of just double check, is to enter that location together with the query. So if you're searching for petrol station in London, then include the word London in the query, so that we can pick up the location. But it's really a bit trickier on a more local level than on a country level. On a country level, you can use the parameters, and it's probably easier to find a proxy that's in a specific country. On a city or regional level, that's really kind of tricky. And that's something where I imagine individual users see things differently as well. It's not so clear that you can say, well, everyone in this specific region will see it exactly like this, if they're not logged in.

DANIEL PICKEN: Well, I want it so that it doesn't look at it. So for example, I'm based in Harrogate, whereas I'll have a client based in London. And we'll have different results when we're looking at rankings for their site, but I want us to be able to see the same thing. So I don't want it to find out where I am. I just want it to look at the UK as a whole. And I just have no idea how to do that.

JOHN MUELLER: I don't think you can do that. I think, from a personalization point of view, we always personalize. And specifically, if we can recognize, from the query, that you're looking for something local, then we'll try to bring you something local, even if you're not logged in. So if you're looking for a pizza restaurant, you probably don't want one in some random city in the UK. You want something kind of nearby. So that's kind of the difference that we have there.

DANIEL PICKEN: So that's permanently switched on. You're not switching that off.

JOHN MUELLER: Yeah.

DANIEL PICKEN: OK. I thought I'd check, anyway, if there was a way.

JOHN MUELLER: The other thing to keep in mind is we permanently do experiments. So every time you're searching, you're probably in 10, 20, 30 different experiments at the same time, and everyone is. So it's kind of normal that someone, even sitting next to you, might see different results than you would see.

DANIEL PICKEN: That's interesting. I'll bear that in mind then. Thank you.
KREASON GOVENDER: Hi, John. I just have a follow-up question on this. Let's say, for example, we're taking petrol stations in the UK, and your petrol station is based in Manchester. Is there anything you can do, let's say, to improve it so that it ranks high in search for Liverpool? So local users see it ranking higher in their search?

JOHN MUELLER: I guess you would do anything you would do for a normal, local business. You would have the address on the website, you would have information on how to get there. Maybe you'd mark up the address using structured data markup, so that we can automatically pick that up and grab that information. But it's not the case that you can say, well, I have a local business, which is in this country, and I'll just add some keywords from a different location, and suddenly you'll rank better as well. We really have to understand that it makes sense to show it there.
KREASON GOVENDER: OK. Thanks, John.
JOHN MUELLER: Let me run through some more of the submitted questions, and then we'll have more time for your live questions, too.

"We see a lot of live updates, like the scorecard on Google Search, under the In The News box, but when you go to the page, you don't see any content, just the scorecard. I believe Google has guidelines to have a minimum word limit on a page to qualify for the In The News box. Why is that so?"

I don't know, specifically, about the scorecard, how that's set up, or how the Google News guidelines are in general, but in practice, from a web search point of view, we don't have limits on the number of words that you can have on your page. It could even be that your page is blocked by robots.txt and we see no words at all from that page. So that's something where it doesn't really matter how many actual words you have on your page. And if you have information, like a scorecard or a table of numbers, and those numbers are relevant to the user's query, then we'll try to show that, even if there's no normal text on the page.

"How to deal with duplication issues on platforms like YouTube and SlideShare, where the public uploads content?"

So I assume it's that other people are uploading copies of your own content. In cases like that, if there's something, kind of on a legal basis, that you can do, you probably need to get in touch with those platforms directly to have them take action. If it's just a matter of this content being live on those pages, then, from our point of view, that wouldn't necessarily cause any problems or make your content look any worse. We look at these pages individually. If we see that they're duplicates, we'll try to fold them together. But if you're the original source of this content and we find duplicates of it as well, then we'll probably still show you in the search results. We will, generally, try to show the original or the most relevant results wherever possible.

"Our blog uses a plugin that automatically generates related posts at the end of all blog posts. Since Google doesn't like automated content, should these be nofollowed?"

No, not necessarily. So these related links within your content are a great way for us to kind of crawl through the website and find more content on the website. So that definitely wouldn't need to be nofollowed, even if that's generated automatically. Automated content is more of a problem for us if the primary content of the page is generated automatically. So if you're taking a bunch of text and you're just juggling it up, or if you're spinning text automatically, where you're replacing words with synonyms, that's the kind of automated content that we think is not so valuable for search.

"Our site occasionally has a featured snippet for the query, how to refinance? It switches between us and another site. How is it deciding which site is being featured? Does user feedback impact that decision? Will it always switch between sites?"

In general, this is something where we try to find the most relevant results, and we show that in the search results. And in a case where we're sometimes showing your site and sometimes showing another site, we're probably kind of on the edge there and don't really know, for sure, which of these ones we should show. And sometimes things tend more towards one side and sometimes more towards the other side. But it's not the case that, by design, we would automatically switch between different results. So that's probably just something specific in your case there.

Let's see. What can we do here? "Eight months back, our site migrated from HTTP to HTTPS. 301 redirects were implemented, every page is ranking, but the PageRank is lost. It was six for the home page and one through five for the sections. Now it's not available. How long will the 301 take to form a PageRank?"

So Toolbar PageRank is something we haven't updated for a really, really long time. So I would expect this to probably never update. So that's not something I'd really focus on, in a case like this. If you've moved to a new site, if you've moved to HTTPS, then the PageRank in the toolbar will probably not update there.
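As background to that question, an HTTP-to-HTTPS migration 301-redirects every URL to its https:// twin, keeping the path and query intact. A minimal sketch of just that mapping; the actual 301s are issued in the server's redirect configuration, not by code like this:

```python
from urllib.parse import urlsplit, urlunsplit

def to_https(url):
    """Map an http:// URL to its https:// equivalent.

    Path, query, and fragment are preserved unchanged, which is the
    page-for-page mapping a site move like the one in the question uses.
    """
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

A helper like this is mainly useful for generating or verifying a redirect map before the move.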
FEMALE SPEAKER: Can I ask a quick question?

JOHN MUELLER: Sure.

FEMALE SPEAKER: I had problems with my microphone, but now it works. Will Google penalize us if the first text of the home page starts below the fold?

JOHN MUELLER: No. I don't think so.

FEMALE SPEAKER: You don't think so. OK.

JOHN MUELLER: It wouldn't make sense. No.

FEMALE SPEAKER: OK. Another question. Assume there's a [INAUDIBLE] a car rental booking engine with a quality text about sightseeing in the city of Barcelona, but we instead have a guide about how to rent a car. Who will have the higher ranking position? The website with the more related text?

JOHN MUELLER: Probably the more related one, but the tricky part there is we use a lot of factors for ranking. So that's something where it's not really possible to, theoretically, say, this page or this page, which one would be the highest ranking one. It might happen that there are other things that kind of play more of a role on this page, and other things that play a strong role on another page, and we have to balance the whole thing and figure out which one to show.

FEMALE SPEAKER: All right. And one last question. On our website, we want to show some guides about how to rent a car in the different cities of Spain. Assume there's no difference between renting a car in Barcelona and in Madrid. Therefore, the text would be the same. Would this count as--

JOHN MUELLER: Duplicate content. So we would look at the texts individually. And we might recognize that there's a large block of content on here that's duplicated. So what would probably happen is, if someone is searching for the text within the duplicated content part of the page, we'll fold those pages together and show one of them. Whereas, if someone is searching for something that's specific to this page, maybe a combination of the city and some other text that you also have, then we'll try to show that individual page. It's not that these pages would be penalized or less valued in search; we would just try to figure out which one is most relevant to show. And if we recognize that, from a relevance point of view, they're kind of similar, because someone is searching just for the duplicated text, then we'll fold them together and pick one of those pages to show.
FEMALE SPEAKER: All right. Thank you.
JOHN MUELLER: Let's open up to other questions from you all.

MALE SPEAKER: I have another question. So it's an online shop, and they're active in Germany, Switzerland, and Austria. There are three different domains, and the products are more or less the same. However, those are three independent shops. The domains are called the same-- shop A, shop B, shop C-- in the respective country-level domains. And so the products may be a trash can. And it's more or less the same products, but the products don't know of each other in the other shop. So we cannot set the rel alternate language tag. What happens, of course, is if somebody searches for the shop brand name, they always see the wrong country's products. Is there any way to tell Google, more exactly, this is Switzerland, this is Austria, this is Germany?

JOHN MUELLER: For something like that, you would probably need to use the rel alternate hreflang tag.

MALE SPEAKER: Yes, but we can't, because the products don't understand that they have twins in the other country.

JOHN MUELLER: Yeah.

MALE SPEAKER: The systems are not connected, you know, so there's no way at all for hundreds of thousands of products to manually set those rel alternate languages.

JOHN MUELLER: Yeah. Would that be something you could do with a Sitemap file, in a case like that? Or are these--

MALE SPEAKER: No. They're independent. They have different IDs, for example.

JOHN MUELLER: OK. In a case like that, the only thing you can do is really work with the geotargeting. And sometimes what will happen is we'll think that maybe a site that's geotargeted for Switzerland is still more relevant in Germany. So if you don't use the hreflang markup between those pages, then we'll treat them as separate pages. And we won't know which one we should show in which country instead of the existing pages.

MALE SPEAKER: Yes.

JOHN MUELLER: So we'll just know this page is here, this page is there. If someone is searching in this specific country, we know that one page might be more relevant for users in that country, but we probably also know that another page from another country is just really, really strong. And sometimes we will still show that in search.

MALE SPEAKER: So with geotargeting, if you say, use geotargeting, you mean that we should detect the IP address of the user? And if they are in Switzerland, send them to Switzerland? And vice versa?

JOHN MUELLER: No. The geotargeting setting in Search Console, if you have a generic top level domain or the country--

MALE SPEAKER: It's not generic, because it's .ch, .at, .de domains. So that's not possible.

JOHN MUELLER: Yeah. So in a case like that, you already have the geotargeting built into the domain.

MALE SPEAKER: Oh, I see. OK.

JOHN MUELLER: So that's, essentially, what you can do there. I would really try to figure out if you can use hreflang somehow.

MALE SPEAKER: Yes. I will do that.

JOHN MUELLER: That would really kind of fix this problem.

MALE SPEAKER: Yeah, I know. All right. Thanks.
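John's Sitemap suggestion is the sitemap variant of hreflang, where each URL entry lists its alternates as xhtml:link elements. A minimal sketch; the shop URLs below are hypothetical placeholders, and the hard part for this particular site (matching product IDs across the three unconnected shops) is deliberately not solved here:

```python
def sitemap_entry(url, alternates):
    """Render one sitemap <url> entry with hreflang alternate links.

    alternates is a mapping like {"de-CH": "https://...", ...}. In a real
    sitemap the enclosing <urlset> element must also declare
    xmlns:xhtml="http://www.w3.org/1999/xhtml".
    """
    links = "\n".join(
        '    <xhtml:link rel="alternate" hreflang="{}" href="{}"/>'.format(lang, href)
        for lang, href in sorted(alternates.items())
    )
    return "  <url>\n    <loc>{}</loc>\n{}\n  </url>".format(url, links)

# Hypothetical country-specific product URLs for the same trash can.
alternates = {
    "de-DE": "https://shop.example.de/trash-can",
    "de-CH": "https://shop.example.ch/trash-can",
    "de-AT": "https://shop.example.at/trash-can",
}
entry = sitemap_entry("https://shop.example.de/trash-can", alternates)
```

Each country's sitemap would carry one such entry per product, with all three alternates listed, which is what lets Google swap in the right country version.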
JOHN MUELLER: Sure. Hello.

FEMALE SPEAKER: OK, just one last question, please. So after improving the quality of the website, how long will it take until you see the first result?

JOHN MUELLER: That's a hard question. On the one hand, we have to re-crawl and re-index those pages, which can take a bit of time, especially if it's a whole website. Re-processing a website, that's something where some pages are re-crawled within a day; other pages take maybe a month or even longer to be re-crawled and re-indexed. And on the other hand, we have to kind of evaluate the quality of this content as well, which also takes a little bit longer. So that's something where you probably will see some initial changes maybe after a week or so, after everything is kind of starting to be re-processed. But it will definitely take maybe up to half a year or so for everything to be re-processed completely, and for us to really re-understand the new website and how it interacts with the rest of the [INAUDIBLE].

FEMALE SPEAKER: OK, thank you.

JOHN MUELLER: All right. More questions from any of you? Otherwise, I'll grab the next ones that were submitted. Go ahead.
KREASON GOVENDER: With regards to new sites, how long does it take before they're fully indexed on your side?

JOHN MUELLER: New websites, you mean? Or news specifically?

KREASON GOVENDER: New websites. A completely new site with about, say, 20 pages of content. How long does it take before your algorithms pick up the relevance, and so forth, fully?

JOHN MUELLER: "Fully" is really hard to say. Depending on how we find the URL, we can pick that up within a couple of hours, a couple of minutes sometimes. If it's something where you submit the URL to us through Search Console and use the Fetch and Submit to Indexing feature, then probably we can pick up those URLs within, I would say, less than a day. With regards to fully understanding those pages, that's something, again, that takes a while. So in the beginning, what will happen is, technically, we'll index those pages. We know they exist. We kind of know the content. And we try to rank them based on the limited information that we have. But to fully understand them, that takes us a certain amount of time. And that's something where you'll sometimes see that change in the search results, where a site might be completely new, and maybe it ranks a little bit higher because we think, oh, this is probably a good site. We don't know anything about it yet, but it looks pretty good so far. And over time, we recognize, oh, it's probably not as good as we thought. And it kind of goes down a little bit until it settles down in the right place. And similarly, what can happen is a site kind of ranks a little bit low in the beginning because we don't know a lot about it. And then, over time, we learn that this is a really fantastic website, and we'll rank it a little bit higher. So that's something where, from a technical point of view, it can happen really fast that we index it. But to actually understand it more completely and to have everything settle down, maybe a couple of months, something like that.

KREASON GOVENDER: All right, thanks, John.
FEMALE SPEAKER: I was just told that we're thinking about listing car rental companies with their contact numbers, addresses, et cetera. So does a data table without text mean a loss of quality?

JOHN MUELLER: No. That can be perfectly fine, too.

FEMALE SPEAKER: OK. So it's not recommended to write a text instead of just, I don't know, numbers?

JOHN MUELLER: I would do what works for your users. If it works for users as a table, then I would keep it as a table. If you think it's confusing for your users, or if you see that it's confusing, then maybe write it up as text. That's not something where we expect full sentences on a page all the time.

FEMALE SPEAKER: OK. Thank you.
DANIEL PICKEN: I had a quick question. And John, it was about something you said a few weeks ago, actually. You said the title tag is a ranking signal, but you said that you only use the title tag as a part. And that's really stuck with me. And I was just wondering what you meant by that, when you say you use the title tag as a part of the ranking signal.

JOHN MUELLER: We use it just as a part. I think it's not like the primary ranking factor of a page, to put it that way.

DANIEL PICKEN: Right.

JOHN MUELLER: We do use it for ranking, but it's not the most critical part of a page. So it's not worthwhile filling it with keywords to kind of hope that it works that way. In general, we try to recognize when a title tag is stuffed with keywords, because that's also a bad user experience for users in the search results. If they're looking to understand what these pages are about and they just see a jumble of keywords, then that doesn't really help.

DANIEL PICKEN: So what about keywords that are directly relevant to the page? Is that OK, or can you tell that they are specific keywords, so therefore that might work against you?

JOHN MUELLER: I mean, having keywords in your title tag is fine. I would just write the title tag in a way that really describes, in maybe one sentence, what this page is actually about. Really make a clear title rather than just have keyword one, keyword two, keyword three in there.
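Google's keyword-stuffing detection is not public, but the contrast John draws, a one-sentence description versus "keyword one, keyword two, keyword three", can be caricatured with a crude heuristic. The thresholds below are arbitrary assumptions, purely for illustration:

```python
def looks_keyword_stuffed(title, max_len=60):
    """A crude, illustrative flag for the titles John warns about.

    Flags titles that are far too long or that are mostly
    comma/pipe-separated keyword fragments. Not Google's method;
    both thresholds are made-up.
    """
    separators = title.count(",") + title.count("|")
    words = max(len(title.split()), 1)
    return len(title) > max_len or separators / words > 0.3

# Hypothetical examples: a descriptive title versus a keyword list.
descriptive_flagged = looks_keyword_stuffed(
    "Plumber in Leeds - Emergency Repairs and Boiler Service")
stuffed_flagged = looks_keyword_stuffed(
    "plumber, plumbers, plumbing, leeds plumber, cheap plumber")
```

The point of the sketch is only that the two styles are mechanically distinguishable, which matches John's advice to write one clear sentence.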
DANIEL PICKEN: And you said it wasn't critical. What would you deem as critical to a page?

JOHN MUELLER: More like the actual content on the page.

DANIEL PICKEN: OK. Interesting.

JOHN MUELLER: I don't know if it's still live, but one of the bigger Q&A sites, for example, for a really long time didn't have a title tag on their pages. They didn't have a description, because they really wanted Google to just take whatever was on the page itself. And that worked really well. So that's not something where we would say, if you don't have a title tag, you don't have any chance of showing up in search. From my point of view, the title tag is something that's worth specifying if you have something specific that you want to use as the title, and you can really refine it into something kind of short and to the point. If you just have a really long title tag that's almost a description of the whole page, then that's probably not that useful to users. And it probably would be something that we rewrite in the search results.

DANIEL PICKEN: OK. If you rewrite it, is that a good indication that it might be wrong for that particular query?

JOHN MUELLER: Maybe. Yeah. The tricky part there is, specifically on mobile, we don't have a lot of room. And on desktop, we do have a lot of room. So for mobile, we probably rewrite things a little bit more aggressively than we would on desktop. So just because it's rewritten doesn't mean that it's bad. If you see a really short title tag, it doesn't mean that you should be using this one instead. Essentially, it depends on the device and, like you mentioned, the query as well, what we actually do with those titles.

DANIEL PICKEN: OK. Cool. Thanks.
JOHN MUELLER: All right. Let's take a break here. I'll set up the next Hangout as well, probably in two weeks again, as we move forward. And hopefully, I'll see some of you all there again as well. Thanks a lot for all of the feedback, for all the comments and questions. It was really helpful. And I wish you guys a great year.

DANIEL PICKEN: Have you been doing this Friday?

MALE SPEAKER: Bye, bye.

DANIEL PICKEN: Are you usually doing it Friday or Friday morning?

JOHN MUELLER: Sorry?

DANIEL PICKEN: Does it tend to be a Friday that you do the Hangouts? The last few that I've been in tend to be on a Friday. So going forward, will these be on a Friday?

JOHN MUELLER: I try to keep them on the same days, more or less. So we have Tuesday afternoon, Thursday kind of around noon, in German, and then Friday morning for English again.