Transcript Of The Office Hours Hangout
JOHN MUELLER: All
right, welcome everyone
to today's Google Webmaster
Central Office Hours Hangout.
My name is John Mueller.
I am a webmaster trends analyst
here at Google in Switzerland.
And part of what I do
is talk with webmasters
and make sure that
any open questions can
be answered and also
bring the feedback back
to the engineering
team internally here
at Google to make sure that
we're doing things that help
you make better websites too.
As always, if any
of you want to ask
a question in the
beginning, feel
free to go ahead
and jump on in now.
MIHAI APERGHIS: I
have one, of course.
So one of our clients is a
US-based e-commerce website.
And their target
audience is basically
any English-speaking user.
So country-wise-- it's the
United States, Australia,
Canada, UK.
And I noticed recently that
their Webmaster Tools account
is actually geo-targeted
to the United States.
Somebody probably
made a mistake one day
for the account-- not
sure how long ago.
And obviously, this is
not what they really
want because their audience is
not just in the United States.
And this obviously has an effect, because 80% of the traffic is from the United States, and traffic from the other English-speaking countries is almost none.
And my recommendation
was to actually remove
the geotargeting option.
But they're kind of worried that
just like a website redirect
or migration, they might
see a loss of traffic
before Google picks
up this change.
And they won't show up as well in the United States until they get the traffic from the other countries.
Is that something they
should worry about?
JOHN MUELLER:
Usually not so much.
So I guess one thing worth mentioning here in general is that there are two ways that we've seen people kind of take out the geo-targeting. One is just by unchecking the checkbox above the setting. And if you do that, essentially our systems will try to figure out where you're trying to geo-target and do that automatically for you.
So in a case where a website
is really focused on the US
primarily, then we'll
probably just say, well,
the user didn't
specify anything,
but we see they are
focused on the US.
So we'll geo-target them
automatically for the US.
So that's kind of--
if you just deselect
that checkbox on top,
that's probably not what
you want to do.
What you can do is leave
the check box selected,
but choose I think it's
unlisted in the list or no-entry
or something like that
in that list of countries
where essentially you're
telling us explicitly
you don't want to
have geo-targeting
applied for this website.
It should really be global.
MIHAI APERGHIS: Oh, I didn't
know about this, I think.
JOHN MUELLER: Yeah, it's
something obscure almost.
That's something you
could do if you really
wanted to say this is a
global website that shouldn't
be geo-targeted at all.
Whether or not they'll
see a change in traffic,
I expect that they
would see a change.
But it's hard to say in advance
if maybe this drop of traffic
from the US-- because
it's not explicitly
targeting the US anymore--
is compensated directly
with additional traffic
from other countries.
That's really, I guess,
impossible to say in advance.
But the good thing
is this setting
gets picked up
relatively quickly.
So after a week
or so, you should
see kind of the effects
of what changed here.
MIHAI APERGHIS: Yeah,
that's very interesting.
Thanks [INAUDIBLE].
JOHN MUELLER: OK, Great.
JOSHUA BERG: John?
JOHN MUELLER: Sure.
JOSHUA BERG: People who are tracking the mobile-friendly algorithm updates with bated breath are saying that they're seeing a very small change now, into the third day-- nothing unusual.
Is that just because it's
expected to be slowly rolling
out?
Or is it just not
going into effect yet?
JOHN MUELLER: It's
definitely rolling out.
I know in some of
the data centers
it's already rolled
out completely.
So that's something where I think you'll probably see this change over the course of a week-- or maybe a week and a half-- something like that.
So from the first
day to the next day,
I don't think you'd
see a big change.
But if you compare last
week to maybe next week,
then you should be able
to see a change there.
And I've seen some
blog posts out there
that have noticed
this difference
and tried to
document a difference
between the desktop results
and the new mobile results.
So there are definitely people
that are noticing it as well.
JOSHUA BERG: So there is
usually a several day difference
with a lot of updates
between the data centers?
JOHN MUELLER: Yeah, I don't know if I could say usually, or if that's something that always happens-- it really depends on the way that it's rolled out. But in this case, it's something that does take about a week to a week and a half to be displayed.
JOSHUA BERG: OK, and people have also noticed that about 70% of search results, or at least a lot of the top queries, are mobile-friendly. I don't know if that's an accurate number or not. But doesn't that mean that within all of those top queries, we wouldn't necessarily see that big a difference?
JOHN MUELLER: Well, I mean, there are more and more sites that are mobile-friendly. So that's something where those sites will probably see an advantage there.
And I think it's pretty rare
that you'd find the search
results where all of
the top 10 results
are already mobile-friendly.
Obviously, that
would be awesome.
But I think at the moment,
that's still pretty rare.
So in those cases where really
all of the top 10 results
are mobile-friendly
already, I don't
think you'd see much
of a change there.
But in situations where some
of them are mobile-friendly,
but some of them aren't,
then, of course, you'll
see some shuffling around.
JOSHUA BERG: All right, thanks.
JOHN MUELLER: All right, let's
jump through the questions
here.
What do we have?
"About duplicate content--
if an e-commerce company
is legally bound to share
an identical official brand
product description with
other big e-commerce sites
and the brand itself is the
site going to be penalized?
How to avoid being hidden
in organic search?"
So first of all, we don't have
a duplicate content penalty.
So it's not something
where I'd say
there's something specific
that you have to watch out
for that we're going to
penalize the site because it
has some of the
product descriptions
that are similar to others.
That's definitely not the case.
So there are obviously things you can do to make your site more relevant, or provide more or slightly different information, and that always helps.
That includes things like having
your address on your pages,
having additional information
about these products,
allowing your customers to review those products directly-- so that all of this additional information that you provide on your pages is something that isn't available everywhere else.
And that definitely helps
make it a little bit easier
for us to rank your
site appropriately.
So that's something that you can definitely always do, even if you have to use the same product descriptions as other sites. The more you can differentiate yourself-- your site-- from all the others that are out there, the easier it really is for us to understand what makes your site special and what we need to show your site for.
And that's not something
we penalize the site for.
Like I mentioned
before, even if you
do use exactly the same
product descriptions
as other sites, that's not
something where we'd say,
well, this is spam because we
know that this is something
that happens on the web.
And we have to handle it.
But, again, the better we can understand what makes your site special-- and the clearer, of course, users understand that-- the more we'll be able to take that into account.
All right, the traditional questions about hidden content. "You said Google's algorithm may view tabbed content as less important, as it's initially hidden from the user. Will the same apply to content below the fold? For example, if we had a title with a little content at the top and the main content below, would this be considered thin content?"
No, not necessarily.
So we do have one algorithm that might apply in cases like that, where we see that there are just a lot of ads above the fold-- where essentially, when the user opens the page, they get bombarded with a lot of information that's totally irrelevant for them-- for this page, for what they're searching for. And that's something we would algorithmically try to recognize and take action on.
But if it's just a matter that you have a nice layout, and you have a nice image on top, and the image matches the article or the content that you're writing about, and you have a lot of content below the fold because it's a really long article, then that's perfectly fine.
That's not something
I'd worry about.
That's not something
where we'd say, well,
we have to discount this because
it's not immediately visible.
Essentially, if the site uses a normal-- how can I say-- normal site structure, where you have normal content that you can scroll through, that's something that we can see. It's visible immediately. Users understand that they can scroll through a large piece of text, and we show that appropriately.
"I once asked about the
mobile versions of a site--
how I need to do the
robots.txt because I
have two versions of
a site www.site.com--
and in that m.site.com.
Do I need to close
m.site.com for Googlebot
and just open it for
Googlebot mobile?"
No, we recommend that you
make both of these versions
available to
Googlebot so that we
can crawl both of these versions
and recognize what they are.
So if Googlebot-- normal Googlebot-- can crawl the mobile version and see that this is a mobile site, we can automatically take that into account as a mobile site. On the other hand, if Googlebot can't crawl the mobile site, then it won't really know
what kind of content is there
and if that's really a
mobile version of the site--
if that's a desktop
version-- if that's
something that shouldn't
be indexed at all--
it's really hard to say.
So as much as
possible, I'd recommend
using the same or very
similar robots.txt
directives for the desktop
and the mobile site.
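For example, a robots.txt along these lines on each host (hypothetical hostnames) keeps both the desktop and the mobile version open to all crawlers, which matches the recommendation here:

    # robots.txt on www.example.com and on m.example.com
    User-agent: *
    Disallow: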
"Before, you said you
may look at this later.
Many are confused between landing pages and doorway pages.
Is this a doorway page?
It fits into a doorway
page in many ways."
I probably need to take a
look at that page in detail.
If that's your website,
I'd also recommend
posting in the help forum
to get some feedback
from other people who have
similar pages like that.
But it's kind of hard to
look at pages on the fly
during the Hangout.
"How does Google protect
sites that are producing
unique and compelling content?
Content that's very unique and is continuously copied and used to market other sites. Thus, the rankings just keep falling, as if from a duplicate content penalty."
As I mentioned before, we
don't have a duplicate content
penalty.
It's not something
where we'd say
if there's content duplicated,
that this page is automatically
spammy.
Oftentimes there are very
technical reasons for content
being duplicated.
And sometimes you
have situations
like this perhaps
where other people are
copying your content.
And just because it's the same
as what other people have,
doesn't mean that your
version is less valuable.
So we do try to recognize
the original versions
of the content and treat
that appropriately.
So that's something
where it's not the case
that you have to do anything
artificial to keep changing
your content to try to
prevent it from being the same
as other people have
copied from your site.
It's really the case
that I'd recommend
focusing on your website and
making it just the best that it
could possibly be.
"Some of the content
of my website
on their domain and iframe.
What are the implications?
And you have any
advice on what to do?"
Essentially, this is something
that's kind of up to you
what you want to do there.
One thing that you can do
from a technical point of view
if that's something
you want to block
is there's a special
header that you
can add to your
pages to prevent them
from being used in an iframe.
So that's something from
a technical point of view
that you could put on your
pages if you saw this happening
and you really wanted to
prevent that from being done.
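The header being referred to here is presumably X-Frame-Options; as a minimal sketch, sending a response header like this tells browsers not to render the page inside frames on other sites:

    X-Frame-Options: SAMEORIGIN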
From a search
point of view, this
isn't something I'd
really worry about.
We do look at the pages that are embedding the iframe.
We do try to look at the
content within the iframe.
But we recognize this kind
of situation fairly quickly.
Because, of course,
it's embedded
in the original source.
And we already had the original
source indexed as well.
So that's not something
where you need to do anything
from a search point of view.
But maybe you want
to do something
from a user experience point of
view or from the point of view
that maybe you are
trying to block people
from using your pages
for phishing or something
like that.
So that's kind of up to you
what you'd like to do there
from a search point of view.
It's not something
you need to block.
"I don't understand when
I should and shouldn't
use a no-follow.
If we write news and
blog posts on our site,
all are written in-house.
We never link to anyone that's
not for genuine reasons.
And I've kept them all no-follow
and wonder if that's correct."
Essentially a no-follow
on a link on a website
tells us that you don't want to
pass page rank for that link.
And that's something
that you can legitimately
use if there is, for example, a commercial relationship between these sites-- if the other site is paying you for advertising or paying you for linking to their site-- then that's something where you could put a no-follow there and let us know about that, so that we don't take that into account in our link graph.
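As a small illustration of that case, a paid or advertising link could carry the nofollow attribute like this (hypothetical URL and anchor text):

    <a href="https://advertiser.example.com/" rel="nofollow">Sponsored partner</a>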
On the other hand, if
this is a natural link
that you've placed
on your website
because you think this
is just a great resource
or if this is maybe
additional documentation
or an additional reference
that users might use--
and there's really no commercial relationship there-- then that's something
where you don't really need
to put a no-follow on those.
It's really for
us more a problem
if there's this
unnatural relationship
between the website
that's linking out
and the site that's
being linked to.
And that link is
only there because
of this kind of
commercial relationship
that's happening there.
"I see a new trend
when sites are
creating white-label
versions of their product
and getting do-follow
links on their site
like the homepage--
st.com, for example.
Is this allowed.
I see it abused a lot recently."
I'd have to take a look
at some of these examples.
I don't know.
Essentially, if you're just generating different variations of your website or your product pages, to some extent that's essentially OK, because sometimes it does make sense to have different web pages for slight variations of a product or for a business.
For example, maybe you
have a consumer branch
where you're selling things
directly to consumers.
And you also have a variation where you're doing more business-to-business sales.
Then maybe it makes sense to
have two websites for something
like that.
It's essentially
the same product.
But you're targeting a
very different audience.
And for that, maybe
it makes sense.
On the other hand, if you have dozens and dozens of different variations, where you're just saying, well,
I want to make a website
for wooden garden furniture
because I know that's a
very special user group.
But I also want to make a
website for aluminum garden
furniture because they want
something different too.
And you start building this out.
And you add more
and more variations.
And that's something
that quickly
looks like a collection
of doorway pages
where you're essentially
just generating these sites
to target very niche keywords.
And that generally
doesn't make sense
from a search point of view--
from a technical point of view
because you're creating all
of this additional work that
has to be crawled and indexed
and ranked appropriately.
And it also starts looking very
spammy from our point of view.
So that's something
that I'd avoid.
On the other hand,
if you're really
looking into two or three variations, where you're saying these are really clearly separated user groups, and these users really want to be targeted individually, then maybe that makes sense.
JOSHUA BERG: John, did that new doorway page algorithm roll out like it was going to-- or was that an update?
JOHN MUELLER: I believe
that is rolled out, yeah.
JOSHUA BERG: OK.
JOHN MUELLER: "Our site
lists our URLs as HTTP
when they are actually HTTPS.
Is that going to be a
problem for Google indexing
those pages?"
It won't cause problems in the sense that anything would break. But it's something that doesn't really help your site that much, because the bigger issue here is that you're essentially giving us inconsistent signals.
And when search engines run
across inconsistent signals
like that, they have to
guess for themselves.
And that's probably not
what you'd like to do.
So as much as possible,
I'd recommend really
focusing on the URLs
that you actually
do want to have indexed
and using them consistently
and using them in
the sitemap file,
using them internally when
you link within your website,
using them for hreflang
annotations, if you have that,
using them for any other
kind of annotations
where you're using them
within your website
or where you're kind of
pointing to this URL.
So that's something where, if you can give us really clear and consistent signals-- if you redirect from your alternate versions, use a rel=canonical, all of that-- then we can focus on that and be really certain that this is really what the webmaster wants. On the other hand, if during crawling we see HTTPS, but in the sitemap it says HTTP, and some of the links go here and some of the links go there, we kind of have to make a guess.
We don't really know
what you're trying to do.
So our algorithms make some kind of an educated guess about what you're trying to do and how we should index that.
And maybe that's not
really what you'd like.
Maybe you have a
strong preference.
And if you want to let us know about that, then make sure that you're doing it in a consistent way.
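As a sketch of what consistent signals might look like, assuming the HTTPS URL is the one you want indexed (hypothetical URL):

    <!-- on https://www.example.com/page.html -->
    <link rel="canonical" href="https://www.example.com/page.html">

    <!-- matching entry in the sitemap file -->
    <url><loc>https://www.example.com/page.html</loc></url>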
"Does Google still use
a supplemental index.
If yes, how am I
able to find out
if pages are in the
supplemental index
and which ones are
in the main index?"
So we have I guess a bunch
of different indexes.
So it's not the case that there
is just one supplemental index
and one main index.
We do try to split things
up separately depending
on various technical factors.
And that's not something that
you really need to worry about.
That's more or less something technical internally at Google-- just a way that we keep track of these URLs. It doesn't mean that they are in any way better or worse URLs.
They are just different
ways that we categorize URLs
and that we keep track of
them in our various indexes.
So that's not
something where you
have to do any kind of a magical
query and say, oh, my gosh.
My pages are in the
supplemental index.
I need to run the anti-supplemental index tool-- because your pages are essentially just indexed.
And that's not
something where you
have to worry about
which index you're in
or which, I don't know, which
ID you have in our database
essentially.
That's a technical
element on our side
that you don't really
need to worry about.
It doesn't have any effect
on crawling, indexing,
and ranking.
So that's essentially
not something
you need to worry about.
"We have running two separate
domains for recipes and food--
one in the UK, the
other in Australia
with two different domain names.
We use from time to time
recipes from each domain.
We use hreflang tag for
country-specific duplicate page
URLs.
Is that good or
bad for a search?"
So if you're running
two separate domains
and you are targeting
different countries,
and some of the
content is equivalent
across these domains, then
using the hreflang markup
is definitely a good idea.
The hreflang markup works
across different domains--
works across different
top-level domains.
It could also be
within the same domain
if you want to do that
within one domain.
So that's something
you can definitely
do there, even if the
pages are both in English,
but one is for the UK and
other one is for Australia,
you can set up the hreflang
markup between those pages.
There are two things that we've
noticed that people sometimes
get wrong with the
hreflang markup.
One is you need to make sure
that you use the URLs that
are actually indexed.
So if you have one version of your pages that would be /default.htm, and we actually indexed that just with the slash-- without the default.htm, because that's kind of an unnecessary suffix-- then your hreflang markup needs to be between the pages that are actually indexed.
So you kind of link from the
slash version on one domain
to a slash version
on the other domain.
And that's essentially the
way that we have that set up.
The other thing that you need
to watch out for is the markup
needs to be confirmed
by both sides.
So the Australian page needs
to point to the UK version.
And the UK page needs to point back to the Australian version.
That way we can kind
of close this loop.
And we can be certain
that this markup is really
used correctly and used in a
way that the webmaster wanted.
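For example, with hypothetical UK and Australian domains, both indexed pages would carry the same pair of annotations, so each side confirms the other:

    <!-- on https://www.example.co.uk/recipe/ and on https://www.example.com.au/recipe/ -->
    <link rel="alternate" hreflang="en-GB" href="https://www.example.co.uk/recipe/">
    <link rel="alternate" hreflang="en-AU" href="https://www.example.com.au/recipe/">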
So those are essentially
things to watch out for.
Hreflang is something
that's on per-page basis.
So if you have some of
these recipes that are only
available in Australia-- some
of them are only in the UK--
then you don't need to
use hreflang for that.
But for the recipes
that are shared
across the UK and
the Australian pages,
you can definitely use a
hreflang markup between that.
And another thing
to maybe just know
is if these recipes are
essentially equivalent, but not
identical, then it's still
possible to use hreflang markup
between those two recipes.
For example, if you have
the same recipe in the US
with US measurements, and
you have the UK version
with metric measurements,
then essentially it's
a slightly different recipe.
But it's still the US
version of the same content.
So you can use hreflang
markup between those pages.
"What does Google like more?
Responsive design or
its own mobile site?"
We do recommend
responsive web design.
But we support all three
variations-- so responsive,
separate mobile URLs,
or dynamic serving.
And we treat them equivalently.
So that's not
something where you
need to change your design from
one set up to another one just
to match Google's preference.
They are essentially
equivalent for us.
They all rank the same.
They are all recognized
as being mobile-friendly.
They would all get the
mobile-friendly boost,
for example.
So that's not
something where you
need to pick the one
that Google wants.
I would recommend picking the
one that works best for you.
And if you have no idea
which one you'd like to do,
then we recommend using
responsive web design.
If you do have a strong
preference already,
or if you have something
set up already,
then clearly go
with that instead.
"Does the 'if
modified since' header
have an impact in rankings?"
No, it doesn't.
It does help us understand when
to crawl a little bit better.
So if you support 'If-Modified-Since,' that's something where we'll try to crawl the URL with that additional header, and you can say, yes, this page has changed, or, no, this page hasn't changed.
Then we can kind of simplify
what we need to crawl
and how much we need to
crawl from your website.
So it doesn't impact rankings.
But it can impact how quickly we can crawl through your website. And if your website is changing very quickly-- if it's a news website, if you have a lot of new products or a lot of new articles on your website that you need to have indexed as
quickly as possible, then being
able to crawl efficiently can
make a difference because,
suddenly, that new content
will be available for rankings.
It's not that it'll
artificially rank higher.
But if you're the first one out there with this topic, and we can crawl and index that content quickly, then, of course, we'll be able to show that in search.
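As a rough sketch of the mechanism (hypothetical URL and date): Googlebot can send a conditional request, and a server that supports it can answer with a short 304 instead of re-sending the whole page:

    GET /news/article.html HTTP/1.1
    Host: www.example.com
    If-Modified-Since: Tue, 21 Apr 2015 08:00:00 GMT

    HTTP/1.1 304 Not Modified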
"App indexing is now
used as a ranking signal
for both installed apps
or non-installed apps
as their relevance to
users are different.
Do they get a different
degree of impact
by the app indexing signal?"
I believe at the
moment this is more
or less equivalent
from our point of view.
So there's no tweak between those two variations.
I know that this
is very new and I
know this is something that the
team is definitely working on.
So I would expect that there
might be some changes there
in the future.
So that's not something
where I'd say,
well, this is the
current status.
And it will always be like this.
We're definitely watching
to see how this works
and how users react to it
and adjusting as we go along.
"Sitemaps www.abc.com
and m.abc.com
have the same canonical
pointing to www.abc.com.
Is it necessary to have a
sitemap file for m.abc.com?"
Good question.
I would say yes
just from the sense
that you probably
want the mdot version
to be crawled as well so
that we can pick that up.
It's kind of tricky in the
sense that we recommend
putting the URLs in
your sitemap file
that you want to have indexed.
But the sitemap file doesn't
affect indexing directly.
It's something that primarily
affects the crawling.
And if you need to have
these URLs crawled,
then putting them
in the sitemap file
is also a legitimate use there.
For example, if you're moving
from one domain to another,
and you have set up
all the redirects,
then submitting
the sitemap files
for your old URLs probably
helps because we'll
be able to crawl those
URLs a little bit faster,
recognize that they redirect,
and process that accordingly.
So that's something
where ideally you'd
want to put the URLs
in your sitemap file
that you'd like to have
indexed for the long run.
For the short term,
if there's something
that you need to
push for crawling,
then that's something
you can definitely do.
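A minimal sitemap file for the mdot host could look something like this (hypothetical page URL):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://m.abc.com/page.html</loc>
      </url>
    </urlset>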
"Do natural links
to your website
all have the same
benefit if they
link to a page with
a no-index tag on it
as they would linking to a regular, indexed page?"
Probably not, in the sense that if the page is no-index, it won't be able to rank. Of course, if it doesn't have a no-follow on that page-- a robots no-follow tag-- then it'll be able to forward that information-- the page rank from those links pointing to it-- to the pages that are linked from this page that has the no-index. So it kind of spreads that page rank a little bit.
But, of course,
there is a difference
between a page that
can be indexed,
that can be shown in search,
that can rank in search,
and a page that can't be
shown in search at all.
So obviously there's
a difference there.
Pages that are
no-index can still
forward page rank, though.
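In other words, a page carrying only this tag (a minimal sketch) would stay out of the search results, but could still pass page rank through its links, because it is noindex without nofollow:

    <meta name="robots" content="noindex">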
"You stated that you only
support rel=canonical for HTML
pages, not for images or video files.
I wonder what the reasoning
behind this decision was."
I think this is mostly a
technical decision that
was made at the time.
And that's something
where, at the moment,
I believe it still
makes sense there.
I do know that in
some situations
it would be helpful to be
able to specify canonical
for images.
But at the moment
that's not possible.
"I was wondering if Google
is aware of the spam referral
traffic issue within
Google Analytics
and if there's anything
that can be done
in the foreseeable future."
I know that the Analytics
team is aware of this
and that they're
working on finding
different solutions for that.
And I believe they
have been following up
on some of these issues.
I don't know if there's an
absolute solution lined up
for the future.
I imagine they're working
on something around that.
But for that, you'd probably
need to ask the Analytics team.
And maybe check in
with them in the forum.
I'm sure there are threads
about this already.
"Can I consider that
algorithmic detection ranking
change against doorways
is already live?
In the Japanese Hangouts
the Googlers say, we've
already started working on it."
I believe it's live, yeah.
So the Japanese
Googlers probably
weren't telling
you anything crazy.
Let me see.
"We have a problem with
JavaScript indexing
in Scribblelink,
which is a live blog
tool used in reporting on
events that are happening live.
Sometimes Googlebot
is indexing the page
with all the content in the live
blog, but in other articles,
it doesn't index it."
I'd probably have to take a
look at some of those pages
when they're being
updated to see what
exactly is happening there.
I guess that one of
the tricky aspects,
especially with live
blogs with content that's
updated by JavaScript, is that
we process JavaScript kind
of as a second step.
And that's not something that
happens as quickly as being
able to crawl the HTML.
So if there's
something really that
needs to be found
quickly in search,
and we can just crawl a URL and
find the content in the HTML,
then we'll be able to pick up
that HTML content very quickly.
On the other hand, if we need to
process JavaScript to actually
see this content,
that's something
that probably takes
another cycle or two longer
for us to actually process
all of the JavaScript
and make sure we have the
JavaScript files-- make
sure none of the embedded content
is blocked by robots.txt,
and then figure out what
this JavaScript is actually
producing.
So that's something where
I imagine in a situation
where you're live
blogging something
and you want your news visible
as quickly as possible,
you probably want to make sure that you're using more HTML pages-- static HTML pages-- to push those updates, rather than relying only on JavaScript.
We'll definitely pick it up afterwards-- after we've been able to reprocess these pages for JavaScript.
But that's probably not as quick
as something that really needs
to be updated just
now and we need
to have that visible
in search immediately--
something that you would be
able to do with HTML pages.
"We've not been crawled now
for a week from Googlebot
because of fetch error
and unreachable DNS.
We've checked everything.
We can't find a solution."
So there is a thread on the mailing list called public-dns-discuss, which is for Google's Public DNS servers. And apparently there were some issues that they ran across there, where we weren't able to really get a clear DNS response from some kinds of DNS servers.
So I'd double-check that thread. I can add a link to it afterwards in the event entry.
And from there, I believe
you can post there.
And they will put you
on a temporary list
to make sure that it's
definitely crawlable.
And otherwise, this issue should
be resolved fairly quickly as
well.
But this is something that
we're definitely aware of
and we should be able to
handle it fairly quickly.
Let's see.
"We have a site
albanihotel.com, which
has been in the top three all the time.
We're redesigning it now
and building a mobile site.
But we can't make it
before the 21st, which was
a couple of days ago, I guess.
What will happen with SEO ranking and mobile?
Will the desktop site be
penalized because of this?"
We did a short FAQ blog post as well about some of these questions. So if you haven't seen that, I'd double-check the Webmaster Central blog.
In general, the ranking change affects only mobile search results.
So only people searching on
mobile will see this change.
The desktop site, or the
desktop search results,
won't be affected by this.
If you start putting your mobile-friendly site up-- I don't know, at some point-- then as we re-crawl those pages-- as we re-crawl the pages from your website and see that they are mobile-friendly-- we'll be able to take that into account fairly quickly.
So I saw someone post saying that it was 12 to 18 hours after they updated it that they got the mobile-friendly badge.
And as soon as you have
the mobile-friendly badge,
then we're treating this
as a mobile-friendly page.
And we'll be able to
rank it accordingly.
So that's something where
if you start now and make
your website mobile
friendly, as soon as we're
able to crawl those
pages, within a day or so,
you should be able to see
those results as well.
"Any suggestions
for when an old page
is split into two new pages?
Presumably you cannot add
canonicals to both of the pages
from the old page."
If you're splitting one page
into two pages, that's I
guess always kind of
a tricky situation.
One thing I would do there is try to make sure that there's at least a connection from the old page-- maybe the first page, or the first part of the page-- to the new pages.
So that could be either
that you use the same URL
as the old page for the
first version of the page
or the first part of the page.
Or that you redirect
from the existing URL
to the first part of the page so
that we can at least understand
that there's this relationship
between the old URL
and one of the new URLs.
And that's I think a
really important part
because that way we can kind of move on, continuously understanding the page, understanding its context, and we don't have to relearn everything about this page.
So that's really
something that helps.
If you're splitting off part of the content into a new page, then I think that's something you sometimes just have to do.
And I'd just make sure that you
have a link from the existing
page to the new page
as well so that there's
this clear connection.
If you're essentially turning
this into paginated content
where you say this is part
one, this is part two,
or maybe there's a part three,
or maybe part three is coming
at some point later, you
could also start looking
into the rel=next and
rel=previous link attributes,
which let us know about
paginated content so that we
understand its context
a little bit better.
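For example, the middle part of a hypothetical three-part article could carry annotations like these (the attribute values are "prev" and "next"):

    <!-- on https://www.example.com/article-part2.html -->
    <link rel="prev" href="https://www.example.com/article-part1.html">
    <link rel="next" href="https://www.example.com/article-part3.html">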
So that's something you could
also look into, especially
if you're splitting these
pages up into more than just
one or two parts.
"I have used a data highlighter
in Webmaster Tools for several
of my client's website to
promote upcoming events
and products.
How long does it
take for the results
to show up in Google Search?"
It's different.
It really depends on the
website and what kind of markup
you used there.
I know the events markup sometimes takes a little bit longer, because we really have to be sure that this is a proper use of the event markup.
Product markup usually gets
picked up fairly quickly.
The same applies if you're
doing this on the page itself
or if you're using
a data highlighter.
So you'll probably
see product markup
a little bit more
visible or visible faster
than events markup.
But essentially they
are different ways
just of marking things
up on your pages.
"Due to Google Webmaster policy
on mobile-friendly pages,
I've recently moved
and redesigned
my site to a new host.
So the Webmaster Tools--
I submitted a new sitemap.
Was that the
correct thing to do?
And how long will that take
for Google to recognize?"
If you've redesigned
your website,
one thing I'd
recommend looking at
is the Webmaster Central
Help Center on moving sites.
So if you're saying that you moved to a new host, or if you've redesigned your pages, then you might want to look into what all is involved there.
If you're moving from one domain
to another, then obviously
you need to set up the
redirect as easily as possible.
If you're changing the URLs
in your website, that's
something where you also need
to set up redirects-- also
the sitemap file, of course, to
let us know about that change.
One of the things probably worth watching out for is if you are redesigning your website and keeping the same URLs, but you have things like embedded content.
You have a lot of
images on your website.
Then also make sure that those images try to use the same URLs as much as possible, or at least that the image URLs also redirect.
So one case I ran
across recently
was a site that did a redesign
that kept all the same URLs.
But they had a lot of
traffic through image search.
And all of the
image URLs changed.
And the old image URLs went 404.
And the new one
showed the new images.
And essentially what happened there, from an image search point of view, is that the old image URLs were returning 404.
So we dropped them
out fairly quickly,
but it takes quite a bit
of time for new image URLs
to get shown in image
search and to be picked up,
crawled, and processed
appropriately.
So this is something
that generally takes
longer than normal HTML pages.
So what happened
there is the site
was suddenly almost not visible
at all in image search anymore.
And it can take a
couple of weeks, maybe
a couple of months
for those images
to start showing up
in image search again.
So that's something where, if you're doing a redesign, work to keep the same HTML URLs, and really also work to keep the same embedded content URLs-- especially the image URLs-- as much as possible.
And if you do have to change
the embedded content URLs,
also make sure that you have
301s set up for those URLs.
So for the images-- and even if you have CSS and JavaScript files and you change those URLs-- make sure that you have redirects set up for those, so that we can follow along and use them appropriately.
I guess the other
part of the question
was about how long
it takes to be
recognized as mobile-friendly.
As I mentioned before, this
is on a per-page basis.
If we recognize that
during crawl, then usually
fairly quickly we'll
be able to take that
into account for ranking for the
mobile-friendly label as well.
"An app on Google Play
with a deep app indexing
is a ranking signal now.
What about people who don't have
an app on Google Play because
of a similar business store
of free apps and games.
It seems wrong.
We're a recognized store.
We have better capabilities
for our users."
Obviously, if you don't have an app, then that's essentially up to you. It's not that we're saying everyone should have an app.
But it is something that users on an Android smartphone, for example, might be interested in seeing.
And sometimes these apps provide
really good user experience,
which is why we're showing them
in the search results as well.
So that's something where it's
not that we're forcing websites
to have an app.
It's just that having
an app and linking
that app with your website
is sometimes similarly
interesting for users as
having a mobile-friendly page.
So we try to treat
those kind of similarly.
"Does Google have
a special focus
on SEO agencies besides the
typical black hat forms? If yes, in which way?"
I am not really sure
what the question is.
I guess one thing
to mention here
is that SEO agencies
per se aren't bad.
I know a lot of SEOs do really, really good things.
And there are lots
of technical things
that have to be done to make a
website that works really well
in search.
And these are all things
that normal webmasters often
don't know about.
And that's something where an SEO or an SEO agency has a lot of experience understanding what search engines are looking for and how search engines work.
And that's really
valuable information.
That's a lot of
really valuable work.
That's important
for the website.
But it's also important
for us as Google
because if we can't crawl
and index content normally,
then it'll be really hard for
us to show that in search.
And it will be
really hard for us
to give users really
representative, relevant search
results if we can't crawl
and index that content.
So it's not that SEO is bad
or that SEO is something
that's going to go away.
There are just lots of things that SEOs do which are really, really useful for us.
So equating the average
SEO with a black hat,
I don't think that really makes sense.
And I don't think
that's fair at all.
So from that point
of view, I think
working together with
SEOs is a great thing.
And they do a lot of
really important work.
"I plan to move one folder
of my site to a new domain.
How to do it correctly?
For example, mysite.com/-- now
I want to get a separate domain
for that folder-- how to avoid
getting lots of 404s after
removing the content?"
So I wouldn't
remove the content.
I'd redirect the content.
So you can change the
content of that folder,
put it on your new
domain, and then redirect
from that folder
to your new domain
so that if a user has bookmarked your old content in the browser and clicks on that bookmark, they'll follow that redirect and be sent to your new domain.
And they'll be able to find
all of your content there.
So that's something that
search engines do similarly.
If we know about your old
content in this folder,
we can follow that
redirect when we see it.
And we can go to
your new domain.
And we can forward
all the information
we have about that folder
to your new domain.
This is something that sometimes
takes a little bit longer
than just a normal
move where you're
moving everything one-to-one.
But this is a good thing to
do if you think it makes sense
to split up your content.
Similarly, you can do it the
same way the other way around.
If you have one
domain, and you want
to put that into a
folder on your website,
make sure you have
the redirect set up
there so that we can follow from
the domain to the new folder.
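As a sketch of what that looks like on the wire (hypothetical folder and domain names), a request for a page in the old folder should come back with a 301 pointing at the equivalent page on the new domain:

    GET /folder/page.html HTTP/1.1
    Host: mysite.com

    HTTP/1.1 301 Moved Permanently
    Location: https://new-domain.example/page.html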
Let's see.
"How to avoid the duplicate
content for that new domain?
How to keep all the links for
the folder after migration
to a new domain?"
Again, if you have a
redirect from the old version
to the new version,
then you don't
have to worry about
the duplicate content.
Also, we don't have a
duplicate content penalty.
So if you can't set up a
redirect for technical reasons
perhaps, that's also
something we can live with
and something that won't
cause your site to disappear
from Google Search.
If you do set up
redirects, we'll
be able to forward
the links as well.
So if content within that folder has received a lot of links from external websites, we'll, of course, forward all of that information that we have to your new domain.
Let's see.
Maybe I can open it up for questions from you all in the meantime?
MIHAI APERGHIS: I'll go
ahead with another one.
JOHN MUELLER: OK.
MIHAI APERGHIS: This one
is actually regarding
content syndication mostly.
I don't know if that's
the right way to put it.
Basically, we focus on building really high-quality, informative content for our users that also has a small hook for the media to pick it up and increase our exposure.
So, for example, we did a guide
on how to prevent and treat
allergies-- spring allergies--
since it's the spring
season and everything.
And we approached some journalists and asked them if they were interested in this material. And they gave us two options, usually. They can just pick it up one-to-one and mention us as the original source. Or they can show a small summary-- maybe we can summarize that for them-- and do like a 'read more here' link or something like that.
What do you think is
the best option that we
could have allowed them to do?
JOHN MUELLER: I
think both of these
are sometimes valuable,
correct options.
So it's hard to say which
one would be better.
I think sometimes you
don't really have a choice.
Sometimes these
sites essentially
just do it on their own.
A lot of times, what we see when sites reuse content is that they'll just use a snippet of the content, or a section of the content, and have the rest of their article around this content.
So that's something that I
think is traditionally done.
But essentially both of
these are valid choices.
I've also seen situations
where maybe you
have a newspaper network where
one newspaper site publishes
the content first
as a primary source.
And the other sites also republish the content.
In a case like that, if it's
more of a network situation,
I'd just make sure that you have
a rel=canonical set up properly
so that we can focus on that
one version that you think is
the most authoritative for
that individual article.
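For instance, a republished copy within such a network could point at the primary article like this (hypothetical URL):

    <!-- on the republishing site's copy of the article -->
    <link rel="canonical" href="https://primary-paper.example.com/original-article.html">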
I realize, if these are other people's websites reusing your content, that's not something you can easily force. Maybe there are technical reasons that they can't just put a rel=canonical on an individual article.
So that's something where
I think working together
with them to see what makes
sense is probably a good idea.
JOSHUA BERG: John.
JOHN MUELLER: Yes.
JOSHUA BERG: A few
weeks ago I had
noticed that I wasn't
seeing-- couldn't
find any in-depth
articles when searching
through international proxies.
Was I mistaken in thinking
that had already rolled out--
that there were in-depth
articles appearing
internationally?
JOHN MUELLER: I think that's
just individual countries.
I don't think that's really
globally for everyone yet.
JOSHUA BERG: OK, but for example, within the UK or another local search?
JOHN MUELLER: I don't know.
I don't know which countries
specifically it rolled out in.
So I do think it's
not completely
global at this point--
in-depth articles.
But it's more countries
than just the US.
But I don't have the full
list in my head at the moment.
JOSHUA BERG: OK, do you think
the adoption has not been as
wide as would have been hoped?
JOHN MUELLER: I don't know.
I don't know.
I know they've made some
changes there recently.
So it's not that it's
completely abandoned
and running by itself.
But I also don't know what
the future plans are there.
So that's really hard to say.
I think-- I find it personally
really useful because it
does give some background
information on some
of these queries.
But I don't know what
the full plans are there
or if that's something
that we'd even
be able to discuss
if we did have them.
JOSHUA BERG: All right.
Thanks.
JOHN MUELLER: All right.
For some of you new here in the Hangout, is there anything specific that I can help with?
Is there something
that's been on your mind
that you've been wanting to ask?
Not really.
Let me run through the
submitted questions
briefly to see
what we have here.
"Does the disavow file
take redirect into account?
Getting a bunch of
spam-- the links
want to disavow the domain.
But the top page of the
domain redirects to yahoo.com.
Can it happen if disavowing
spam is recognized
as disavowing yahoo.com?"
If it's just one URL within that site that's redirecting somewhere else, I wouldn't worry about that. If you want to disavow all the content-- all of the links from that domain-- then by all means, go ahead and do that.
It's not the case that
we would follow this one
individual redirect
and say, well, Yahoo
is also being found here.
Therefore, the
webmaster probably
wants to disavow all of Yahoo.
That's not what
we're doing there.
Essentially what
we're looking at
is where we find the
individual links coming from,
where they're going to, and if
the site that these links are
going to has a disavow file for
those links from that domain,
then we'll disavow
those specific links
and not anything else that's
kind of around that domain
or that's connected
with individual URLs
from that domain.
"Google says add schema
organization markup
to your official
website that identifies
the location of your logo,
whether [INAUDIBLE] JSON
or logo schema markup is
used to link to an image file
hosted off the
website's main domain?"
Yes, that should work as well.
It doesn't have to
be an image that's
directly on the main domain
or on the same host name.
It can be somewhere else.
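A minimal sketch of that markup, assuming a hypothetical organization with its logo hosted on a separate CDN hostname:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Organization",
      "url": "https://www.example.com/",
      "logo": "https://cdn.example-assets.net/images/logo.png"
    }
    </script>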
All right, with that, we're just about out of time.
Thank you all for joining.
Lots of questions again.
It looks like mobile
questions are going down,
which I take as a good
sign that people know what
to do or have done it already.
So hopefully that
trend continues a bit.
I wish you guys a great weekend.
And hopefully I
will see you again
in one of the future Hangouts.
JOSHUA BERG: Bye, John.
Bye everyone.