A transcription of MHC’s SNYCU podcast featuring Google’s John Mueller on June 3rd, 2020. You can listen to the full episode here.
Or watch it here:
M: So welcome to a very special episode of Search News You Can Use and I’m Dr. Marie Haynes and I have my first ever guest on the podcast; John Mueller from Google. Welcome John.
J: Great to be here Marie.
M 0:15 – So, John, tell me, I think we all know that you’re a Webmaster Trends Analyst at Google, but how long have you been with Google now?
J 0:23 – Oh boy what is it… 12 or 13 years, something like that.
M 0:30 – Really? So my first introduction to SEO was in the… do you remember the SEO chat forums? I don’t know if you were a part of those, but that was back sort of 2008-ish. And then at some point I started watching your Google help hangout videos. Did you start those shortly after you joined Google?
J 0:57 – Um, let’s see. I don’t know when exactly we started that. I think that came pretty much when Google+ came up, so as soon as the whole setup was there and we could do these public hangouts and ask people to join in, that’s where we started doing those, to try them out and see what works and what doesn’t work.
M 1:29 – Gotcha, gotcha. So first of all I want to say thank you, on behalf of the whole SEO community, because you have gone above and beyond over the years and I don’t know how you do it. I was going to start off with our first question being something about folders versus subdomains or, you know, but I’m not, I’m not. You’ve covered that so much, and I can’t imagine what it’s like when every word you say, we as the SEO community will jump on and say “Well, this is what Google says.” It must be very hard to have every one of your words analyzed, and here we go again! We’re going to be asking you a bunch of questions to analyze. I think we’ll just jump right in with the questions and we’ll see where things go. At any point if you want to jump in and say something, anything that you want to say would be so helpful. I want to start off with a fun question and ask you: so what’s the deal with bananas and cheese?
J 2:28 – Bananas and cheese… Well, I guess cheese is kind of the obvious one. In Switzerland there’s lots of cheese, so that’s, I don’t know, that’s kind of the easy part there. I think the one tweet that started everything from my account was “I love cheese.” That tweet was actually made by Gary, so it’s not my fault. I went to get a coffee and Gary was like “oh, Twitter is open,” so off he went and tried to post something. Probably tried to find something that wouldn’t get him fired.
M: It could have been much worse, yes.
J: And bananas is more random, in that I was looking for something to try out with Google Trends and “bananas” was just the first word that came to mind. And then I started receiving all these emails like Google Trends for bananas was going up or going down, and it was like, well, “Okay.”
M: That’s fantastic. So do you still get the e-mails about the trends of bananas?
J: Yeah, yeah.
M 3:52 – The way we came up with our questions was through the whole team that works for me. My team actually trains on your help hangout videos, so they watch them and, you know, something will come up, you’ll talk about a canonical tag and my new trainees will say “Well, what is that?” and we’ll start talking about those things. And so a lot of my team came up with questions, and some of the listeners to my podcast came up with questions as well. Let’s talk about nofollow. Google announced last year that there were some changes coming to how you could use nofollow; I believe at that time you gave us rel=sponsored and rel=ugc. I feel like there’s a bit of confusion about how Google can use nofollowed links, and part of the confusion is… well, people ask me often: if someone has pointed spammy links at me in an attempt at negative SEO, like comment spam and links like that, and they are all nofollowed links, can those nofollowed links ever be used as signals by Google?
J 5:02 – Well we don’t use them in a negative way. So it’s something where if we can use them to discover some new content then that’s something where I believe we try to use that but it’s not the case where we say well these are normal links on the web so we will count them at full weight and if they are bad links on the web then they’ll count against you and that kind of thing. So if these are links out there that you don’t want associated with your site and they’re with a nofollow then that is perfectly fine. I think a lot of the ads on the internet are also with nofollow, that’s something where we wouldn’t see that as a paid link just because we now understand nofollowed links a little bit better.
M 5:44 – That makes sense. So I think one of the concerns, a question that a lot of people have asked me, is whether they should disavow if there is a massive number of nofollowed links, because then, you know, the scale could tip something off. We’ve always maintained that there’s no point in disavowing a nofollowed link, because the whole point of disavowing is to tell Google “I don’t want to pass these signals.” So I think that makes sense, right? We’re on the right track?
J 6:11 – Yep, no need to disavow them.
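As an aside for readers: the rel values discussed here, nofollow, sponsored, and ugc, are plain HTML attributes on anchor tags. As a quick illustration (not from the episode; the sample page and URLs are invented), they can be pulled out of a page with only the Python standard library:

```python
from html.parser import HTMLParser

# rel values Google treats as hints for how to handle a link:
#   nofollow  - don't associate the site with the linked page
#   sponsored - paid or advertising links
#   ugc       - user-generated content, e.g. comments and forum posts
class LinkRelAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []  # (href, set of rel values) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            rel = set((attrs.get("rel") or "").split())
            self.links.append((attrs.get("href"), rel))

# Hypothetical sample page: one followed link, one ad, one comment link.
sample = """
<a href="https://example.com/partner">partner</a>
<a href="https://example.com/ad" rel="sponsored nofollow">ad</a>
<a href="https://example.com/comment" rel="ugc nofollow">comment</a>
"""

audit = LinkRelAudit()
audit.feed(sample)
for href, rel in audit.links:
    print(href, sorted(rel) or "(followed)")
```

The same kind of audit is what you would run when deciding whether spammy inbound links are already nofollowed and therefore, per John's answer, nothing to worry about.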
M 6:15 – This is a question that we talked about the first and only time we met in person, actually, in New York at the Google office, and I would love for us to just revisit it, because I think there’s so much confusion about who should be using the disavow tool. So the main question is: we know that the disavow tool is there if you have a manual action, and we try to remove unnatural links, and the ones that we can’t get removed, we put in the disavow tool. The thing we talked about before is: is there ever a reason for a site that does not have a manual action to use the disavow tool? So let’s start with that, perhaps.
J 7:00 – I think there are two times when it would make sense to perhaps use the disavow tool. On the one hand, when you look at the links to your site and you’re pretty certain that you will get a manual action. So for example, if the previous SEO was doing all kinds of weird things with regards to links to your site, if they were off buying links, doing guest posting, kind of all of the usual things where we’d say this is against the guidelines, and you haven’t received a manual action for that yet, then that’s something where if you go and look at a site like that and you see all of this, it seems like something you’d want to proactively clean up. So essentially the kind of activities that you would do when you receive a manual action, do that proactively ahead of time so that you don’t even get into this whole manual action problem. And that’s something where every now and then I’ll run across people like that, where maybe they’ll come into one of the office hour hangouts or they’ll post on Twitter or in the help forums, where if you look at those links you can tell they’ve been doing a lot of things they probably shouldn’t have done. It’s unclear if they did them themselves or if they hired an SEO to do that; it doesn’t really matter. Maybe they don’t have a manual action yet, but you would assume that if the webspam team ran across that site they would probably take action. So that’s kind of the thing where I’d say you can take care of that yourself with the disavow by cleaning up the links as much as you can; that’s, I think, the obvious one. The other one is more along the lines of: if you’re really unsure what Google’s algorithms will do. Because we do really try to ignore the spammy links out there, we do try to ignore those random links that are dropped in forums, which are sometimes just automatically dropped all kinds of places.
And if you’re really unsure whether Google’s algorithms are going to ignore those then you can just put them in the disavow file and be like, well I did what I could do, at least I don’t have to worry about this.
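For readers who have not used the tool: a disavow file is a plain-text file uploaded through Search Console, with one URL or `domain:` entry per line and `#` comments. A minimal sketch of generating one; the domain names and URL here are invented for illustration:

```python
# Sketch of building a Google disavow file. Format, per Google's docs:
#   - one entry per line
#   - "domain:example.com" disavows every link from that domain
#   - a bare URL disavows links from just that page
#   - lines starting with "#" are comments
spammy_domains = ["spammy-directory.example", "pagerank-r-us.example"]  # invented
spammy_urls = ["https://some-forum.example/thread?id=123"]              # invented

lines = ["# Disavow file prepared after link audit"]
lines += [f"domain:{d}" for d in spammy_domains]
lines += spammy_urls
disavow_file = "\n".join(lines) + "\n"

with open("disavow.txt", "w") as f:
    f.write(disavow_file)
print(disavow_file)
```

The resulting disavow.txt is what gets uploaded in Search Console's disavow links tool; per the conversation above, nofollowed links don't need to appear in it at all.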
M 9:26 – Disavowing is a prevention for a manual action. That makes sense, right? If we look at a link profile that is very overtly against Google’s guidelines and we say, oh my goodness, if the webspam team looked at this it would be scary, we should probably disavow. I think the confusion is what you mentioned about Google’s algorithms, and I understand there are certain things that you can’t share for obvious reasons. We have had cases where we really do feel that we filed a disavow, the site did not have a manual action, and then at some point, either a few weeks or a couple of months afterwards, the site starts to see an increase in traffic. Our thought was, as we talked about in New York City, that Google’s algorithms might have had less trust in their link profile, and the disavow kind of improved that trust and the site saw the benefits. So I think the main question, and again I can understand if you can’t answer this: we know we can disavow preventatively to prevent getting a manual action, but can we see improvements in Google rankings, in Google traffic, from disavowing even if the site doesn’t have a manual action?
J 10:47 – I think that would be very rare. So I could theoretically imagine a situation where our algorithms are really upset about the links to a site, and by disavowing you might clean that up, but essentially that would be a very similar situation to a webspam team member looking at the site. So just because there are some random links out there and you’re cleaning things up and focusing on the links that you know are good, I wouldn’t expect to see any change in a site’s visibility in search from something like that. My guess is that with something like that, you’re most likely seeing the effects of other things that are happening in search, which could be algorithm updates, maybe changes in the ecosystem in general, all of the usual things that can go up and down in search.
M 11:38 – That makes sense. It’s always hard to test anything in SEO, right? Because something we changed today, it’s not like we’re going to make no changes tomorrow or the rest of the month or something. I’d like to ask you something about… there’s a case we have now where we’re dealing with a manual action for a client who came to us with a manual action for unnatural links. I think back when I first started helping people with unnatural links, it was really easy: we could see, oh, there are these spammy directories and PageRank R Us or something; those were super obvious. And I feel like the types of unnatural links that cause manual actions now are ones that actually could move the needle for sites. They’re ones that were working, and Google’s algorithms weren’t handling them or whatever, so the webspam team has given them a manual action. This particular client of ours, most of the links they’ve made are in articles. And they’re in relatively authoritative places, like, they’re not ultra spammy articles, and we can see when we look at them that they are very clearly made for SEO, but they are articles that people would read. And one of the struggles that we’re having with communicating with this client is that they’re still ranking really well for their main keywords. So we’ve come in and said “Look, there’s this list of hundreds of links that we know you made for SEO, you have a manual action and we want you to start removing those, or disavowing if you can’t,” and they are saying “Well, we’re ranking for all of these terms, why would we want to do that?” Do you have any thoughts on that? Is there any advice you can give us on that situation?
J 13:20 – Those are essentially guest posts, where if they create the articles and include a link to their site in there, that’s something where the webspam team would say, “well, this is, again, against our guidelines.” With regards to them still ranking for those keywords, it’s kind of hard to say. I mean, I don’t know the situation there, but if there’s a manual action, my general advice would always be to try to get that cleaned up as much as possible and not to leave it there because it’s “not bad enough,” kind of thing. So that’s the kind of place where I’d try to clean up. If you’re seeing that people are reading those articles and clicking through to your site, then put a nofollow on those links; that’s just as useful for traffic to your site, and at the same time it really helps to show the webspam team that, okay, you really understood that this was problematic, you put the nofollow there, you put them in the disavow where you couldn’t do that, and that helps them say “okay, we can let this site be completely natural in search.”
M 14:36 – Okay, and I think that’s kind of the struggle sometimes: a site that has some fairly good natural links combined with a site that has done ‘high level link building’, link building in ways that, you know, some of those are good. It’s not always wrong to ask for a link, it’s perfectly okay, but scale can be hard. Like maybe it was fine when you had five of these links, but now that you’ve got 500, it’s not so good. One of the things that is confusing to a lot of people is that when we look at Google’s guidelines on link schemes, they’re a little bit vague on links in articles. The guidelines say “large-scale article marketing or guest posting campaigns with keyword-rich anchor text links,” and what we’re finding often these days is that with these manual actions we’re getting, the example links that Google sends us aren’t always keyword anchored. I think it’s not clear to a lot of people that an unnatural link can still be unnatural if the anchor is not a keyword. That’s correct, right?
J 15:46 – Right, I mean ultimately a link is a link, it can pass signals to the other site and that’s kind of what the webspam team is watching out for.
M 15:57 – Okay, yeah, so we always try to point, we call these unnatural because Google’s link scheme guidelines say this, and most of the links we’ve been getting back these days as example links are links in articles. They’re not all necessarily guest posts. They are “Hey, I wrote this content, you need content, so let me give it to you. There is no money involved and, oh, by the way, there’s a link to my site in it.” So those are unnatural, right?
J 16:25 – Yes
M: Yes, okay. That’s something I think a lot of SEO companies struggle with: “as long as we don’t use keyword anchors, we’re good,” and I feel like that’s a risk.
M 16:38 – We will get off of manual actions in a minute but we did have a lot of questions on this. Can you tell us if there was a delay in responding to manual actions because of the pandemic?
J 16:51 – I don’t know so much about a delay, but it was a lot slower than usual.
M: We’ve been getting some responses now, do you know if you are back on track?
J: So yesterday some of the folks from the webspam team double-checked to make sure that I don’t say anything crazy, and it seems like they’re pretty much on track. I mean, there is always a certain amount of time it takes to manually review these, and for the most part we do try to manually review them and try to make this process a little bit faster. The one thing the webspam team is doing is trying to find ways to automate this. So in particular, if a site is hacked, that’s something where we can automatically try to figure out if the site is still hacked after the reconsideration request, and if our algorithms are pretty sure that this is resolved then we can just let that go. And that kind of frees up some time for the rest of the manual actions team to work through the queues that they get.
M 18:00 – That makes sense. Are you able to tell us, I know that you’ve mentioned that all manual actions are reviewed by humans, and it makes sense for hacking situations that you might want to use a machine for that. Can you tell us anything about the review process? Is it a webspam team member that does the review for the manual actions? Is it a two-step process where somebody sees, like, “They’re making some steps here, let’s pass that on to a senior member”? Is there anything you can share with us about that?
J 18:34 – It kind of depends on the size and the complexity of the issue. So if there’s something that’s affecting a large part of the web, if it’s something that’s particularly complex and it’s kind of borderline and it depends on how you look at it, then that’s the kind of situation where the webspam team will often pull in other people to try to get a second opinion on it. And I think in general that makes sense. It’s also something that we try to do if a second reconsideration request comes in, so that it’s not always the same person looking at the same site and saying, well, they didn’t change much. Whereas if someone else were looking at it, they might say, well actually, this is far enough with regards to what we would expect them to do. Sometimes it’s kind of hard to draw that line, because when you’re looking at it manually there are definitely some things where there is a clear line, where we can say everything above this has to be accepted and everything below this is rejected. But there are a lot of cases, for example with links or with low quality content, where you can kind of say, well, they did a significant amount of work, but how do you quantify it? It’s not really possible to say they did 17% of what they should have. Like, you can’t come up with a number.
M 19:57 – I feel like we tell our clients that the goal is to convince the webspam team that you understand the problem and that you’re moving on, you know, that you’re not doing the same thing. That can be challenging sometimes, especially for these sites that have a real mixture of “well, this was from this SEO effort, and this SEO effort maybe we took too far,” and it can be really hard sometimes to get these things removed. But I do tell people that sometimes it’s good to get one, because then you know these are the issues, you know you have to move forward and move on to better ways to get links or whatever.
M 20:35 – We’ll move on to another fun topic: thin content, which is something that we’ve gone back and forth on as SEOs over the years. I’m going to read this question out because it’s one that I believe came from my team. “You mentioned in the past that when Google assesses site quality, they take into account all indexed pages. There’s a large site with 100,000+ pages indexed, and over time many of those pages become no longer relevant or are simply thin. Would noindexing or redirecting to reduce the count of pages increase Google’s perception of quality and flow through to increased rankings, again with UX staying the same?”
J 21:18 – Yeah, I think what I see a lot with this kind of a question is that people sometimes see all pages as being equal, whereas from our point of view things are very different with regards to how important a page is for a website. So that’s something where it’s really hard to say, well, you’ve removed 1,000 pages, therefore the other 90,000 pages are good. One example might be that you have a concert venue or some event site, and assuming everyone goes back to concerts, which I’m sure will happen at some point, you might have this site and some really well-known artist there, you have some really good content on the concerts there, and you have an event calendar where you can essentially click through to the year 9999, and most of those days are going to be empty. So looking at something like that, you might have hundreds of thousands of really empty pages and a handful of really good pages, and just because you have hundreds of thousands of low quality, thin pages doesn’t necessarily mean that 90% of your website is bad and you have to clean that up. Because what will happen on our side is that we will try to figure out what are the most important pages, and we’ll try to focus on those. And those are probably the current concerts or whatever is happening there. And even though you have all these other thin content pages, it doesn’t necessarily mean that your website has kind of averaged out to something that is mediocre. So from that point of view, it doesn’t make sense to look at the absolute numbers, but at what is the important content for your site and whether that content is actually high quality.
M 23:13 – Okay, so that’s really interesting, because we’ve always maintained that if a site has… say their CMS makes these random image pages and they get into the index, Google should just ignore those in terms of quality for the site. Let’s say 90% of indexed pages were these random pages that shouldn’t have gotten into the index. Can removing those from the index improve Google’s assessment of quality for the rest of your site?
J 23:44 – I don’t know. A lot of times, like, if it were something that we would ignore already, then I don’t think that would make any difference. The main difference that would make is with regards to crawling of the website and being able to find the new and updated content a little bit faster, where if we get stuck in an infinite calendar then we would go off and spend a lot of time there, because Googlebot is very patient, but that’s not content that you really need to have crawled for your website. So that’s kind of, I think, the primary part there. I think there is room for kind of a middle ground though, not necessarily things that are completely useless, but some kind of middle ground where you see these are pages that are getting a lot of traffic from search, but when I look at them they are really pages I don’t want to be known for. It’s almost like I can recognize that Google’s algorithms think that these are important pages, but actually they are not important pages for me. And if you are in that situation, then that’s a situation where you can take action. So not necessarily looking at the absolute number of pages that you have, but saying these are the pages that are getting traffic from search and these are the ones that I want Google to focus on, therefore I will try to get rid of things that are less important, or maybe improve them, something like that.
M 25:08 – What about a situation with a smaller website? Like, maybe… I mean, years ago businesses were told to blog every day, and there are a lot of people that have very poor quality blogs, because they’re like, here’s my coffee today, here are the bananas and John got an alert about it, things that maybe didn’t need to be in the index. And then let’s say, well, starting this year I kind of got my head in the game and said everything I put on my blog needs to be the best of its kind. Could it improve Google’s assessment of the quality of my blog overall if I went back and noindexed some of those blog posts that nobody cared about, or does Google just keep that in mind and focus on the new stuff?
J 25:52 – I don’t think that would make a significant difference, especially if you do have a significant chunk of really good content, if you’re starting to put new content out there. Then I don’t think noindexing the older content would make a big difference. But it’s similar to a newspaper site where you put out maybe 20 articles a day, and 19 of those articles will be irrelevant after a week. That doesn’t mean that those 19 articles are automatically things that you should noindex at some point; maybe move them to an archive section where they are less emphasized for users and for search, but you can still keep them. It’s not that we’re saying that this is something bad that you need to get rid of or clean up.
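For reference, “noindexing” a page as discussed here is done through one of two standard mechanisms: a robots meta tag in the HTML, or an X-Robots-Tag HTTP response header. A rough stdlib-only checker; the regex is a simplification (real meta tags can put their attributes in any order), so treat this as a sketch rather than a production audit tool:

```python
import re

def is_noindexed(html: str, headers: dict) -> bool:
    """Check the two standard noindex mechanisms:
    a robots meta tag in the HTML, or an X-Robots-Tag response header.
    Simplified: assumes name= appears before content= in the meta tag."""
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    meta = re.search(
        r"<meta[^>]+name=[\"']robots[\"'][^>]+content=[\"']([^\"']*)[\"']",
        html, re.IGNORECASE)
    return bool(meta and "noindex" in meta.group(1).lower())

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindexed(page, {}))             # True
print(is_noindexed("<html></html>", {}))  # False
```

A check like this is useful when deciding which old posts or archive pages are actually excluded from the index versus merely de-emphasized.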
M 26:40 – Okay, that makes sense. Can you give us any tips on how Google makes a quality assessment as to what types of content could be considered high quality? Is Google using BERT now to better understand whether content is good?
J 27:00 – So I don’t really have a lot of insight into how we determine when things are high quality content, but I guess the one thing where people sometimes get thrown off is with regards to BERT, the BERT algorithms, all of those things. Those are essentially algorithms to understand content better. It’s not so much to understand the quality of the content, but more to understand what is this content about, what is this sentence about, what is this query about, to figure out what are the different entities that might be involved here and how they are being tied in. Where that kind of overlaps into the content quality side is when you’re writing in such a way that it’s essentially impossible to understand what it is that you’re trying to say. And that’s the kind of situation where the BERT algorithms will say, well, I really don’t know what it is that they’re trying to say. It’s not so much that the BERT algorithm is making an assumption that this is low quality content; it’s more like, I just don’t know what to do with this. And sometimes I suspect… I haven’t seen this first hand or tried it out with any pages, but I suspect some of the old-school SEO writing falls into that, where you’re just swapping in all of the synonyms that you can possibly add to a sentence, and as a human when you read that you’re like, this is just totally over the top. I can imagine that our algorithms are like, well, this doesn’t read like an actual sentence, I don’t quite know what to emphasize. Like, is this really about this subject or is it about a different subject? What is the primary element in this sentence or this paragraph?
M 28:54 – And is that something that could be seen as a negative? Like, for example, if we have an ecommerce site that has a product page, you often see at the bottom of the page, we call it SEO copy, you know, text that’s written for search engines, and it’s just a big block of text that contains a bunch of keywords and no human is ever going to actually read it. So the way I’m thinking of it, BERT trying to figure out if this query matches this page, maybe BERT just struggles and says this isn’t relevant or whatever. Or could Google treat that as a negative and say “oh, this page looks like it was SEO-ed, these keywords are here for Google,” and make that an actual detriment to the page?
J 29:41 – I’ve seen a few cases where that happens, but it’s usually along the lines of keyword stuffing. So not so much that they’re writing a Wikipedia article on the subject and putting it on the bottom of a shoe page, but more that they’re just adding thousands of variations of the same keywords to a page, and then our keyword stuffing algorithm might kick in and say, well, actually this looks like keyword stuffing, maybe we should be more cautious with regards to how we rank this individual page. So it’s not that BERT is confused and therefore our algorithms are confused and we’ll say that the page is bad. Our algorithms are always confused by something; there is always something on the web that’s confusing, so it would be bad to say that just because something is confusing, it’s low quality. But I guess with regards to BERT, one of the things that could be done, because a lot of these algorithms are open-sourced and there’s a lot of documentation and reference material around them, is to try things out: take some of this SEO text and throw it into one of these algorithms and see, does the primary content get pulled out, are the entities able to be recognized properly. It’s not one-to-one the same as how we would do it, because I’m pretty sure our algorithms are based on similar ideas but probably tuned differently, but it can give you some insight into whether this is written in such a way that it’s actually too confusing for a system to understand what it is that they’re writing about.
M 31:25 – Okay, so you’re saying take the text, put it into a natural language processor and see if the tool can figure out “oh, this page is about this,” and if not then maybe we need to rewrite it, because if the tool can’t figure it out, humans probably find it boring or don’t want to read it?
J 31:41 – I don’t think that would be feasible on a day-by-day basis, but it might be an interesting experiment just to take some of those old-school SEO text things and throw them into these modern algorithms and see, does this still figure out what this page is about? And what is the difference if I rewrite this Wikipedia article into maybe a two-sentence summary that might be readable by a user, would the algorithms still figure out that it’s about the same thing?
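None of Google's actual signals are public, but the kind of repetition John describes is easy to measure crudely. A toy sketch, purely illustrative and not any algorithm Google has described; the sample sentences are invented:

```python
from collections import Counter
import re

def top_term_ratio(text: str) -> float:
    """Crude keyword-stuffing score: the share of all words taken by
    the single most frequent word. A toy illustration only."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    _, count = Counter(words).most_common(1)[0]
    return count / len(words)

natural = "Our shoes are handmade in small batches and ship worldwide."
stuffed = "cheap shoes buy shoes best shoes discount shoes shoes shoes"

print(top_term_ratio(natural))  # low: no single word dominates
print(top_term_ratio(stuffed))  # high: 'shoes' dominates the text
```

A real experiment along the lines John suggests would instead run the text through an open-source language model and compare how confidently it recovers the topic and entities, but even this crude ratio separates natural writing from synonym-swapped SEO text.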
M 32:16 – So rather than having that SEO text at the end of an ecommerce page, do you have recommendations? I mean there are some obvious things that users would want but are there certain things that would be helpful in terms of what Google would want to see on an ecommerce page that you can share with us?
J 32:35 – It’s hard to say. The one thing that I notice in talking with the mobile indexing folks is that when ecommerce category pages don’t have any content at all other than links to the products, it’s really hard for us to rank those pages. So I’m not saying all of that text at the bottom of your page is bad, but maybe 90%, 95% of that text is unnecessary; some amount of text is useful to have on a page so that we can understand what this page is about. And at that point you’re probably at the amount of text that a user will be able to read and understand as well. So that’s kind of where I would head in that regard. The other thing where I could imagine that our algorithms sometimes get confused is when pages have a list of products on top and essentially a giant article on the bottom, when our algorithms have to figure out the intent of this page. Is this something that is meant for commercial intent, or is this an informational page? What is the primary reason for this page to exist? And I could imagine that our algorithms sometimes get confused by this big chunk of text, where we’d say, oh, this is an informational page about shoes, but I can tell that users are trying to buy shoes so I wouldn’t send them to this informational page.
M 34:05 – Okay, and that seems to be… So is BERT used to understand the query as well?
J 34:13 – Yeah so we use these algorithms to essentially understand text and that comes in on the query and that comes in on the pages themselves.
M 34:24 – Okay, and I know this has been hinted at before… I think it was at Bay Area Search that Gary said something that got me thinking on this. Are most search results… You mentioned Google wants to determine, oh, the person wants to buy this. Are there a certain number of spaces? Like, if Google has figured out, oh, this is probably a commercial query, do you want to rank only sites that seem to be transactional, or do you say, well, let’s throw in a couple of informational ones? Or am I just simplifying things too much?
J 35:00 – I think you would see some amount of mixing there naturally, where our algorithms tend not to be completely on or off, where we say “well, this is clearly commercial in nature and therefore we would only show commercial pages,” because we just don’t know for sure what it is that the user is searching for. That’s something where I think 10-15% of all queries are still new, so these are things where even if we wanted to manually classify all these queries and say, well, this is clearly someone trying to buy shoes, that’s something that we would never be able to do, because people come and ask us in different ways all the time. So that’s something where I suspect our algorithms will try to say “well, probably it’s this” or “very, very likely it’s this,” so we will try to include in our search results page, I don’t know, 80% like this and a little bit like that, just to cover those other variations.
M 36:08 – Okay, that makes sense. Let’s move to another great subject that’s fun to talk about: doorway pages. Sometimes a lot of websites struggle with location pages. So let’s say a client came to us and they had a business that serviced 50-100 cities in their radius, and what tends to happen is their location pages are unique in terms of words, but really, for the user, they could all be the same page. The services of the business are the same no matter the city. Is it within Google’s guidelines for me to have 50 different city pages? Is there a better way to do it?
J 36:56 – A lot of times these tend to go in the doorway direction and end up being low quality. And I'd say the quality aspect is almost the bigger issue there, in that you have all of these pages on your site and you say 'well, these are really important for my site', but at the same time they're essentially terrible pages. You wouldn't be able to send a user there and say 'well, you're in this city, therefore this page mentions your city name and the service that you're looking for, so we should show that to you'. So from that point of view I try to discourage that. Obviously if you have locations in individual cities, sometimes you have those addresses on separate pages with separate opening hours, all that. Another option, of course, is to list those different addresses on a single page, maybe by region, with a dynamic map, something like that. But otherwise I think it's really kind of tricky: if you're saying 'well, I don't really have locations in these cities, but anyone from any of these cities is welcome to call me up', then making individual pages for all of those cities feels kind of tricky. And I realize sometimes these kinds of pages rank well, but it is something where I could imagine the search quality team saying 'well, we need to figure out a way to deal with this better'.
M 38:39 – Do you give many penalties or manual actions for doorway pages these days? It's been a while since I've seen one.
J: 38:46 – I don’t know.
M: Yeah, it's been a long time. I'm not saying you should, by any means. If all of a sudden people start getting... I think they fell under thin content penalties. It's not my fault if that happens!
M: This is a subscriber's question. I'm going to shorten it down because it's a long one. This person has a site that's YMYL, and it competes with major brands and government websites, so '.gov' websites. Something we've really noticed lately is that for a lot of YMYL queries, Google is really, really favouring authoritative sites. This person is saying that their content is better (I mean, that's subjective), that it solves the user's query better, has great videos, and is optimized to the fullest. They want to know, and I know this is hard because you haven't seen the site and I haven't seen the site, is it ever possible to outrank a giant authoritative website for a YMYL query?
J 39:53 – Sure, I mean, it's possible to outrank these sites. It's not that any of these search result positions are hard-coded and can never change, so that's certainly possible. But depending on the topic, depending on the competition, it can get hard, so I wouldn't expect it to be something where you can just throw together a website, make some really nice-looking pages, have someone write some really good content for your pages, and automatically have that rank above these authoritative sites, especially on topics where it's important that we make sure we are providing high-quality, correct information to people. So technically, it's possible. Waiting it out is something I personally wouldn't recommend in cases like this. Obviously you need to continue working on your site; it's not something where you can just say 'okay, I'll wait until my site is 10 years old, then it'll be just as good as these other 10-year-old sites'. That's not the way it happens; you have to keep at it all the time. And the other thing to keep in mind is that if these are really good websites, then generally speaking you'd expect to see some traffic from other sources as well. Obviously search is a big source of traffic, but there are lots of other places where you can also get traffic, so it's about combining all of that: continuing to work on your website, focusing maybe on other traffic sources if that's something you can do, and growing out from there.
But it's not that we would never rank these sites for these kinds of queries. It will be really hard, though: you have to prove that what you're providing is of equal quality, equal relevance, and equal correctness as maybe an official government website, which, depending on the government website, might be hard or might be a little bit easier. It feels like government websites are generally getting better and better, so that competition is not going to get any less.
M 42:08 – Yeah, I think in the past a lot of the time we'd say 'oh, this .gov page is ranking but it's horrible, so if I can create something better then I can outrank it'. Is this connected to EAT? Let's say I went on a particular type of diet and it worked really well for me, and I wanted to create a website about this diet, but the people ranking on the first page are the Mayo Clinic and some authoritative government site. Can you give me any tips on what types of things would have to happen? Let's say I was a multimillionaire and had access to any resource: what would it take for me to be able to compete with websites of that authority?
J 43:03 – I don’t know. I don’t have any magic bullet where you can just say like be on national tv or be listed in wikipedia or something like that. It’s really hard to say.
M: Can I get an ‘it depends’?
J: It depends, sure. Like even if I knew a specific situation, it wouldn’t be something where I’d be able to say oh you just need to tweak this one factor here and buy some gold plated network cables and then you’re all set.
M 43:43 – Understandable, and that was a bit of an unfair question. I think I ask it because people do that all the time; people come to us and say 'hey, I want to dominate this and I've got investors'. In the past, if you had enough money you could buy links that would trick Google, so we're essentially trying to tell people that you can't be the biggest authority unless you are the biggest authority, and that's a struggle that SEO can't generally fix.
M 44:14 – We're going to wrap it up soon. This was an interesting one. We have a client whose site is used in the quality rater guidelines as an example of a low quality site, and it's a screenshot from many years ago. They've changed; the page is way better now. They wanted me to ask you: is there anything they can do to appeal to Google to be taken out, because it's not good for their brand? Any thoughts on that?
J 44:43 – I'm happy to pass something on if you have examples of things like that, but in general the quality rater guidelines are not meant to be absolute, in the sense that this particular URL should be rated like this, but rather that this kind of problem should be rated like this, when it comes to the general quality rater setup where we're trying to figure out what are the best algorithms to use with regards to search ranking. So just because we have that particular site there doesn't mean that people should be watching out for that particular site and taking action on it; rather, it's 'this is a really obvious example of this one particular case, and this is the kind of situation you need to watch out for', not 'this is the exact URL you need to watch out for'. As for the alternatives we could use in a case like that... I mean, I'm happy to pass that on.
M 45:48 – In all honesty, I hesitated to ask you that question because I don't want Google to take away those examples. There is a lot to be learned from what you point out as high quality and potentially low quality. But I can see as well that I'd be quite upset if people were using my site as 'oh, Google says your site is low quality', never mind that that was something from 10 years ago or whatever. But fair enough.
M 46:15 – So John I hear you have your own podcast coming out soon? Tell us about that.
J 46:20 – We're working on it. We started looking into that at the beginning of the year, I think. At some point we got all the equipment set up in the office, ready to go, and we recorded the trailer, and all of that went really well... and then the office closed and everything went downhill, so that kind of threw a wrench in the gears. We've started to pick that up again, and I hope we can get the first episode or two out fairly soon; I don't know what the timing is there. But we thought we'd do something a little bit less formal and provide a human look behind the scenes of what's going on at Google.
M 47:16 – Will it be something where we can ask questions like this, or is it more of a kind of fun, light...
J 47:23 – I don’t know. We’ll see how it evolves.
M: And we’ll see how challenging it is as we pummel you with questions. I’m looking forward to it, I think it’ll be good.
J: It's not meant to be a replacement for the office-hours hangouts, so it's not a question-answer, question-answer type of thing. It's more that we realize people would like to know what actually happens behind the scenes at Google when we make these kinds of decisions, and that's what we'd like to bring in.
M 47:56 – There was a video that came out years ago of, I think it was Matt Cutts, in a search meeting where the whole team was discussing 'well, we could do this, but it would take longer to load'. That was fascinating. I would love to see more stuff like that, what goes on behind the scenes. Just a thought. Is it going to be video, or is it a podcast? Or I guess you don't know right now, right?
J 48:22 – It’s just a podcast.
M 48:24 – Alright was there anything else that you wanted to share with us at this point?
J 48:29 – Nope, it went pretty good.
M 48:31 – John, thank you so much for doing this. I can't tell you how much I appreciate it, and thank you for being on Twitter and putting up with all of our questions, and humour too. Do you know how many times we throw 'oh, John' into our Slack channel? Because you're just such a great guy and very helpful. Anyways, I'll stop; I'm going to make you blush. So thanks again, John, and I hope that everything's going okay for you staying at home and all that. Hope to one day see you. We were supposed to meet again in Munich for SMX, and that got cancelled about a week before we were going to be there, so I'll see you again sometime.
J: It’ll happen, don’t worry.
M: Take care!
J: Thanks a lot bye.