SEO Is Not That Hard

Best of: Deciphering E-E-A-T with Google Patents

Edd Dawson Season 1 Episode 252


Google's EEAT (Experience, Expertise, Authority, Trust) evaluation likely works through sophisticated AI that analyzes writing patterns rather than just author bios. Two key Google patents reveal how the search engine might classify both authors and websites based on writing style fingerprints compared against established experts.

• EEAT stands for Experience, Expertise, Authority and Trust—factors used in Google's Quality Rater Guidelines
• Google added "Experience" to EAT in 2022, recognizing the value of firsthand knowledge
• Author bios are likely insufficient for demonstrating EEAT as they're easily fabricated
• The "Generating Author Vectors" patent identifies writing style fingerprints to classify authors
• The "Website Representation Vectors" patent categorizes sites within niches by expertise level
• AI models trained on human quality rater feedback can recognize patterns matching genuine experts
• Topical authority requires comprehensive content coverage and writing at the appropriate expertise level
• Simple solutions like adding author bios won't overcome content that doesn't match expert writing patterns

Try Keywords People Use today for free at keywordspeopleuse.com to find the questions people are asking online.


SEO Is Not That Hard is hosted by Edd Dawson and brought to you by KeywordsPeopleUse.com

Help feed the algorithm and leave a review at ratethispodcast.com/seo

You can get your free copy of my 101 Quick SEO Tips at: https://seotips.edddawson.com/101-quick-seo-tips

To get a personal no-obligation demo of how KeywordsPeopleUse could help you boost your SEO and get a 7 day FREE trial of our Standard Plan book a demo with me now

See Edd's personal site at edddawson.com

Ask me a question and get on the show: Click here to record a question

Find Edd on Linkedin, Bluesky & Twitter

Find KeywordsPeopleUse on Twitter @kwds_ppl_use

"Werq" Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/

Speaker 1:

Hi, Edd Dawson here, and as I'm a bit busy at the moment and need a break, welcome to another one of my best-of SEO Is Not That Hard podcasts. These are the episodes from the back catalogue that I think are the greatest hits, ones that are still relevant and provide great value for you. So, without further ado, let's get into the episode. Hello and welcome to SEO Is Not That Hard. I'm your host, Edd Dawson, the founder of keywordspeopleuse.com, the solution to finding the questions people ask online. In today's episode I'm going to talk about deciphering EEAT with Google patents. Let's start by discussing what EEAT actually stands for. It's an acronym that stands for Experience, Expertise, Authority and Trust, and these are things which are mentioned in Google's Quality Rater Guidelines. Now, Google's quality raters are a set of people who go online, look at websites that Google asks them to, and score those websites on a number of factors based on a whole bunch of guidelines. Those are published in the Google Quality Rater Guidelines, which anyone can go and read. It's a big, big document, but these raters are human raters, and they go out there and score a whole bunch of websites against these criteria. Now, they're not there to promote or demote the individual websites they're looking at. What they're doing is training Google's machine learning models. Google will aggregate all the data from all their raters, see how its algorithm is performing against those websites, and then use that feedback, via machine learning techniques, to help it re-rank and improve the rankings across all websites over time.

Speaker 1:

Now, originally it wasn't E-E-A-T, it was just EAT: Expertise, Authority and Trust, and that goes back to about 2014. Expertise meant: what is the expertise of the creator of the content? Authoritativeness meant: what's the authoritativeness of the creator of the content, the content itself and the website? And trustworthiness covered the trustworthiness of the creator of the content, the content itself and the website. So basically: is this person an expert, do they talk authoritatively, and can we trust them? Then, in about 2022, Google added an extra E at the start to stand for Experience. I think this was Google recognising that not everybody is an expert in the sense of a qualified expert, but if someone has a huge amount of experience in a subject area, then their input, their content, has value too. So we now have EEAT. Now, it's clear from Google's Quality Rater Guidelines, and the amount of effort Google has put into classifying EEAT over time, that it's a really important thing to score highly on if you want your content to rank. This means that, obviously, lots of people have been spending a lot of time trying to work out what EEAT is and how to demonstrate it on their website, and this has especially come into focus again with the helpful content update, where lots of people who have a lot of experience in their subjects (or so they claim anyway, and I've no reason to doubt them) have seen massive drop-offs in traffic and have clearly been found by Google to not be helpful. So you would think, logically, that if you are faring badly in the helpful content update, you probably have problems with your EEAT.

Speaker 1:

For a long time, people have been suggesting that the way to demonstrate experience, expertise, authority and trust is to have clearly identified authors on a website, with links to author bios which detail who this author is, why they're an expert, and things like that. Now, I've always been slightly sceptical that it's as simple as that, because it's very fakeable. How does Google know whether people have the qualifications they claim to have? We see people out there creating personas, essentially creating fictitious people and giving them lots of qualifications. There's nothing to stop someone who's trying to give medical advice from looking up a genuine doctor, taking that doctor's details, and making out that they're posting as them. It just seems so easy to game that way, and Google tends to avoid things that are easy to game, so there must be other ways that they're doing this. And especially with the recent helpful content update, loads of people getting smashed were saying, "But I've got all my author bios on there, I've got my details, I've got all these things on there", but clearly it's not enough. I'm also rather sceptical of it being a simple "create author bios" solution because on broadband.co.uk we did have author bios and author pages, but on most of the other sites I've created since, I've never done that, and it's not caused any problems in terms of them becoming topically authoritative, ranking, and coming through core updates with no issues.

Speaker 1:

And if you look at plenty of other sites online that rank really well for very competitive terms, some of them have author bios and others don't. There's such a variety out there, and it's not that every page you see ranking has all these particular boxes ticked, so there's definitely something more to it than simple fixes like that. So this is a perfect example of where it's worth going back a step and thinking: what can we learn? Is there anything in Google's patents that we can attribute to EEAT? Now, there is no EEAT patent out there, and Google themselves say (however true or not) that they don't talk about EEAT as a thing algorithmically in-house. And they possibly don't, if the way it works is that they just train their raters to think in those terms, the raters score sites, and then the machine learning bases its scoring of other sites on that data set.

Speaker 1:

So, doing some research into what patents might specifically be related to EEAT, I came across a great article by Lily Ray and Bill Slawski from October 2020, which dives into which Google patents might have an impact on EEAT, back when it was just called EAT. There are two in particular that it comes up with: the "Generating Author Vectors" patent and the "Website Representation Vectors" patent. I'll link to the article and both of these patents in the show notes. Essentially, what these two patents are doing is trying to create a repeatable, programmatic way of classifying authors and websites in a way that is hard to fool, hard to game. So let's look at each of these patents individually. The first one is Generating Author Vectors, and in this one what Google's trying to do is characterise authors: identify unique traits about their writing style, and then identify other authors whose writing is similar. This means they don't have to rely on people saying who the author is on an article, because (a) that can be faked, and (b) people might not do it anyway, and it's just not safe to rely on people self-classifying. This way, they can actually start to spot authors based on their writing style. It's like identifying fingerprints: everybody has a unique way of writing, little mannerisms they use in their writing and their speaking that are unique to them.

Speaker 1:

And this patent uses what they call a neural network, or what we now call AI, to start to classify authors. It can spot new authors coming along, spot where their writing sits, and classify how their style of writing fits with others. So if Google already has a corpus of experts, and you are writing in a similar way to those experts on a particular subject, it can start to classify you with them, or it can classify you at a different level. For example, if you started a website, claimed to be a medical doctor, and started putting medical information up there, claiming to be that kind of expert, Google would be able to compare your writing style, the fingerprints within the content you're producing on that subject, against known experts on the same subject. And if you're not matching the same kinds of patterns and styles, if you are too far away from what it's expecting, then it can probably internally say: no, this person isn't a doctor, I don't trust this person, because they don't match the profiles I've got for this subject area and for the experts in that subject. And remember, it's getting its training data from real-world rating, where Google's quality raters go out there and do that sort of expert rating for all those websites. So it's got a training set built from the Google raters, and it can compare you against that. If you're not matching what it's expecting, then it's not going to class you as an expert. Now, if you think about it, this is a really elegant solution for identifying authors and identifying levels of expertise, in a way which doesn't rely on people self-certifying, presenting information in exact ways, or identifying who they are. It's really smart, and if you're trying to classify the entire web and all the content that's being produced out there, it would work. So you can see how it could stop you faking personas. It can assign authorship to an article without having to be told who the author is, and it analyses the quality and style of an expert in a particular subject and can then classify you against the corpus it's expecting you to match if you're going to meet certain levels. Clever.
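To make the "writing style fingerprint" idea concrete, here's a deliberately tiny sketch in Python. This is not Google's actual model (the patent describes a trained neural network); it just stands in for the idea with hand-made stylometric features and cosine similarity. The feature choices and author names are invented for illustration.

```python
import math
import re

# A handful of common function words; their relative frequencies are a
# classic, crude stylometric signal that varies between writers.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "for", "it", "with"]

def style_vector(text):
    """Build a crude stylometric fingerprint from raw text."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    total = max(len(words), 1)
    # Feature 1: average sentence length (words per sentence)
    avg_sentence_len = total / max(len(sentences), 1)
    # Feature 2: average word length
    avg_word_len = sum(len(w) for w in words) / total
    # Features 3..n: relative frequency of each function word
    freqs = [words.count(w) / total for w in FUNCTION_WORDS]
    return [avg_sentence_len, avg_word_len] + freqs

def cosine(a, b):
    """Cosine similarity between two style vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def closest_author(unknown_text, known_authors):
    """Return the known author whose style vector best matches the text."""
    v = style_vector(unknown_text)
    return max(known_authors, key=lambda name: cosine(v, known_authors[name]))
```

In use, you'd build `known_authors` as a dict mapping author names to vectors computed from text they're known to have written, then call `closest_author` on a new article. The real system would use learned embeddings over far richer features, but the shape of the idea (map text to a vector, compare against a corpus of known writers) is the same.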

Speaker 1:

The second patent that's quite interesting in the EEAT respect is Generating Website Representation Vectors. Now, this is similar in many ways to the Generating Author Vectors patent. What it does is essentially look to classify a website into a particular topic or niche, like health, finance, that kind of thing, and then classify that site within that niche as to the level of expertise on the site. The example the actual patent gives is that, for a medical niche, you would have first-tier sites written by experts like actual doctors; second-tier sites written by, say, medical students, people with slightly less knowledge but still heading toward an expert level; and a third tier produced by laypeople, those with no medical qualifications. That doesn't stop them writing about medical subjects, but obviously they're a different level of classification than either doctors or medical students. And again, the key thing here is how it makes that decision: a machine learning model, a neural network, trained on feedback from Google's quality raters, so real human feedback going into a training model that then allows the AI to look at essentially any site and classify it against that model. So we can see, taking these two patents together, how Google could classify authors and classify websites, and the different levels of expertise of each, and how that could quite clearly feed into the EEAT theory and hypothesis they're working on. If sites and authors meet those two things, based on these machine learning models that define the criteria a site should meet, then, yeah, you can see how it would work, and you can see how it gets around lots of the problems we discussed earlier.
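The tiering idea can be sketched in the same toy spirit. Again, this is not the patent's actual method (which uses a neural network trained on rater feedback); here, a nearest-centroid classifier over one invented feature (density of domain vocabulary) stands in for it, and the medical term list and tier labels are made up for illustration.

```python
import re

# Hypothetical vocabulary a medical expert might use; purely illustrative.
MEDICAL_TERMS = {"diagnosis", "etiology", "contraindicated",
                 "pathology", "prognosis", "hypertension"}

def site_vector(pages):
    """Represent a site by the average density of domain terms across its pages."""
    densities = []
    for page in pages:
        words = re.findall(r"[a-z]+", page.lower())
        hits = sum(1 for w in words if w in MEDICAL_TERMS)
        densities.append(hits / max(len(words), 1))
    return sum(densities) / max(len(densities), 1)

def train_centroids(rated_sites):
    """rated_sites: {tier_label: [site_pages, ...]}, with labels supplied by
    human raters. Each tier's centroid is the mean of its sites' vectors."""
    centroids = {}
    for tier, sites in rated_sites.items():
        vectors = [site_vector(pages) for pages in sites]
        centroids[tier] = sum(vectors) / len(vectors)
    return centroids

def classify_site(pages, centroids):
    """Assign a new, unlabelled site to the nearest tier centroid."""
    v = site_vector(pages)
    return min(centroids, key=lambda tier: abs(v - centroids[tier]))
```

The point the sketch makes is the one from the episode: the human raters only label a sample of sites, and the model generalises from those labels to place any site it encounters into a tier.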

Speaker 1:

Now, how does this feed into how you approach your sites and your content on an everyday basis? Well, clearly, at the website level, you need to be a topical authority. You need to have that depth of content on a site to get yourself classified in the right topic area, and then, within that, you need to make sure your content gets classified as high up the authority scale as possible. But the first core thing is to make sure that Google clearly understands what topic you are aiming at, and that you cover that topic in as much detail as possible. Remember, Google doesn't search every single document on the web when you put in a query: it classifies that query on its intent and its topic, and that helps it get results back quickly, because it hasn't got to go and search everything. It's narrowing down the search process at every single point, and that's how they can get results out as fast as they do. And then, in terms of authoring your content, obviously your content's got to be at the right level, with the right level of expertise and knowledge that Google is expecting at that level.
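The "classify the query first, then only search that slice of the index" idea can be illustrated with a few lines of Python. This is a toy model of the general concept, not Google's actual retrieval pipeline; the topic names, keyword lists, and document IDs are all invented.

```python
# Invented topic vocabulary for illustration only.
TOPIC_KEYWORDS = {
    "health": {"symptom", "doctor", "treatment", "medicine"},
    "finance": {"loan", "mortgage", "interest", "invest"},
}

def classify_query(query):
    """Pick the topic whose keyword set overlaps the query most."""
    words = set(query.lower().split())
    scores = {topic: len(words & kw) for topic, kw in TOPIC_KEYWORDS.items()}
    return max(scores, key=scores.get)

def search(query, index_by_topic):
    """index_by_topic: {topic: [(doc_id, text), ...]}.
    Only the shard for the query's topic is scanned, never the whole index."""
    topic = classify_query(query)
    words = set(query.lower().split())
    shard = index_by_topic.get(topic, [])
    # Rank documents in the shard by simple term overlap with the query.
    ranked = sorted(shard,
                    key=lambda doc: len(words & set(doc[1].lower().split())),
                    reverse=True)
    return topic, [doc_id for doc_id, _ in ranked]
```

The practical takeaway mirrors the advice above: if your site's content makes its topic unambiguous, it lands cleanly in the right "shard" and competes there, rather than being spread thinly across topics.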

Speaker 1:

Now, it's early days for me thinking about this, and I still haven't quite determined what the next steps are. So I'd say: watch this space. I'll obviously share more as we dig into this and think about it more. But it's clear, I think, that just adding an author bio, while not necessarily a bad thing to do (it can be valuable), is not going to be the thing that swings the pendulum. If you're having trouble with the level of your content, then just putting an author bio in is not going to swing things positive and sort all your problems out. I think there needs to be a lot more research and thought put into this one, but it's clearly something everyone should be thinking about now.

Speaker 1:

One word of warning that I always give when it comes to patents: just because it's in a patent doesn't necessarily mean it is implemented in Google's algorithms, and even if it's partly implemented, that doesn't mean it's implemented exactly like that. These things change over time. But it gives you at least an idea of the direction of thought of the engineers at Google. When they're thinking about how to solve a problem, that's when they produce these patents, and if you can see common themes within them, then you can quite often see, yeah, this is the direction of travel they're going in. And putting on my computer scientist hat, because I am actually a trained computer scientist, I can see how this gets around lots of the problems with the simple solutions people come up with for how to sort out EEAT. There are problems in scaling those simple solutions, problems in using them reliably, problems in processing the whole of the web on those simple terms. So these two patents, I think, show a direction of travel, and if they're not completely implemented like this, I think there's a strong possibility that elements of them are in there. I do hope that's given you some food for thought. I strongly recommend you go and look at these patents and read through them yourself, see what ideas they spark in you, and if you've got any thoughts that agree or disagree with me, then I'd love to hear from you.

Speaker 1:

Just email me at podcast@keywordspeopleuse.com, or find me on Twitter, where my handle is @channel5. Thanks for listening, I really appreciate it. Please subscribe and share, it really helps. SEO Is Not That Hard is brought to you by keywordspeopleuse.com, the solution to finding the questions people ask online. See why thousands of people use it every day: try it today for free at keywordspeopleuse.com. If you want to get in touch or have any questions, I'd love to hear from you. I'm @channel5 on Twitter, or you can email me at podcast@keywordspeopleuse.com. Bye for now, and see you in the next episode of SEO Is Not That Hard.
