This week, Google published a new white paper on Quality Score and finally updated its five-year-old video on how Quality Score works.
I asked them "why now," and they said it was simply time to redo the old material. But part of me thinks they're trying to quell a Quality Score rebellion that I and others (including Frederick Vallaeys) may have inadvertently started: the wording of some points in the new materials seems to directly echo points I've made in articles published over the last year.
Image via Kirk Williams – @ppckirk
As usual, Google trotted out the same old “happy users,” “happy advertisers” and “happy Google” platitudes. It’s mostly a rehash of the same old Google fairy dust about Quality Score, but there are a few interesting nuggets in there worth responding to (and debunking).
Quality Score Is a Helpful Diagnostic Tool, Not a Key Performance Indicator
Why: Your Quality Score is like a warning light in a car’s engine that shows how healthy your ads and keywords are. It’s not meant to be a detailed metric that should be the focus of account management.
I've often argued that Quality Score (or, essentially, click-through rate) is *the* most important key performance indicator to track in PPC, since it plays a huge role in Ad Rank, which in turn directly impacts CPC, ad position, and impression share – and thus the number of conversions you get and your cost per conversion. Across the many accounts we manage here at WordStream, the ones with higher average Quality Scores are almost always better off than those with lower average Quality Scores. So why the heck is Google downplaying the significance of the metric?
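To make the mechanics concrete, here's a minimal sketch of the commonly cited simplified auction model – Ad Rank as bid times Quality Score, with actual CPC set by the ad ranked immediately below you. The numbers are hypothetical and the real auction includes more factors (like ad format impact), but it shows why a higher QS buys you a lower CPC for the same bid.

```python
# Simplified sketch (not Google's exact formula): Ad Rank = Bid x Quality Score,
# and your actual CPC is just enough to beat the ad ranked immediately below you.

def ad_rank(bid, quality_score):
    """Simplified Ad Rank model: bid times Quality Score."""
    return bid * quality_score

def actual_cpc(next_ad_rank, quality_score):
    """Second-price CPC: the minimum bid needed to outrank the ad below you."""
    return next_ad_rank / quality_score + 0.01

# Two advertisers bidding the same $2.00:
high_qs = ad_rank(2.00, 8)   # Ad Rank 16.0
low_qs  = ad_rank(2.00, 4)   # Ad Rank  8.0

# If the ad below has Ad Rank 12, the high-QS advertiser pays far less:
print(round(actual_cpc(12, 8), 2))  # 1.51
print(round(actual_cpc(12, 4), 2))  # 3.01 -- more than the low-QS advertiser even bid
```

Same bid, double the Quality Score: one advertiser clears the competitor below for $1.51 a click while the other would have to pay more than its own bid.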
I think it's because, by definition, half of us have below-average Quality Score accounts – Google grades on a curve, and we can't all beat the average expected CTR. By my estimation, 66% of Google's revenue comes from below-average Quality Score keywords (due to the CPC penalties on low-QS keywords and the CPC discounts on high-QS keywords). So it's smart and understandable for Google to downplay the significance of QS – but you, as an individual advertiser, should know better.
Furthermore, if the check engine light on your dashboard is flashing, it means your car could break down soon and you're endangering yourself and your passengers. I strongly disagree with Google on this one: if your "check Quality Score" light is on, you should definitely focus on fixing it.
This was pretty big – the new Google white paper says:
“For Newly-Launched Keywords, Performance on Related Keywords: Does Matter”
Note that Google has never explicitly stated this before, and on this one, I'm pretty sure I'm the reason they changed their stance. I recently pointed out that every account has keywords with no clicks and no impression data that nevertheless have Quality Scores, and that these "default Quality Scores" were consistently high in great accounts and consistently low in terrible ones. So clearly, related keywords can affect the Quality Score of other keywords in your account.
The corollary is that this confirms another theory of mine: keywords with higher average CTR/Quality Score have a beneficial impact on the other keywords in your account, which is why I always run a branded keyword campaign. (Branded keywords get super-high CTRs and can thus float your whole account higher. In other words, if you're starting from a high-QS account, your new keywords will come out of the gate with higher Quality Scores.)
Alternatively, it also means you should delete the terrible low-QS/low-CTR keywords that are killing your account, so that the rest of the account can breathe.
A few months ago, Google announced that the use of ad extensions would impact Ad Rank, but didn't provide much detail about the precise weighting of the new "Ad Format Impact" factor.
In the new video, Hal Varian gives us seven equations to work with, and with a little algebra you can reverse-engineer the weighting of Ad Format Impact on Ad Rank relative to the other factors (bid and Quality Score).
Four equations come from a calculation of ad rank at 4:55 in the video:
Three more equations come later, at 6:15, when he calculates each advertiser's actual CPC based on the second price of the auction. (The actual CPC is the bid that would be required to match the ad rank of the advertiser immediately below them – for example, the first advertiser would need a bid of $1.73 to reach an ad rank of 15.)
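That second-price logic can be sketched as follows. The only figures taken from the video are the ad rank of 15 and the $1.73 bid; the "rank per dollar" multiplier is inferred from them, on the assumption that ad rank scales linearly with bid.

```python
# Second-price sketch: your actual CPC is the bid that would make your ad rank
# exactly match that of the advertiser immediately below you.

def required_bid(target_ad_rank, rank_per_dollar):
    """Bid needed to reach a given ad rank, assuming linear scaling with bid."""
    return target_ad_rank / rank_per_dollar

# From the video: matching the next ad's rank of 15 takes a $1.73 bid,
# implying this advertiser earns about 15 / 1.73 = 8.67 rank per dollar.
multiplier = 15 / 1.73

print(round(required_bid(15, multiplier), 2))  # 1.73 -- reproduces the video's figure
print(round(required_bid(12, multiplier), 2))  # 1.38 -- if the ad below had rank 12 instead
```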
He doesn't give us exact numbers for the Quality and Format impacts, so I'll use discrete variables to represent the "High," "Medium," "Low," and "No" rated influences of Quality (Q) and Format (F).
Expressed as formulas, these 7 equations look like this, with ad rank on the right of the equation:
Let’s normalize all bids to $1 and directly compare ad rank across identical bids:
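Since the video's on-screen figures aren't reproduced here, this sketch uses made-up bids and ad ranks purely to show the normalization step: dividing each ad rank by its bid gives "ad rank per dollar," which isolates the combined contribution of Quality and format impact so advertisers can be compared directly.

```python
# Normalization sketch with hypothetical numbers (not the video's figures):
# ad rank / bid = the combined Quality + format contribution per dollar bid.

advertisers = [
    ("A", 2.00, 16.0),  # (name, bid, ad rank)
    ("B", 3.00, 18.0),
    ("C", 1.00, 7.0),
]

for name, bid, rank in advertisers:
    print(f"{name}: {rank / bid:.2f} ad rank per $1 bid")
# A: 8.00, B: 6.00, C: 7.00 -- A is strongest per dollar despite B's higher raw rank
```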
What does this show us? These numbers imply that the MAJORITY of ad rank is driven by the impact of ad formats. For instance, take these two equations:
Same Quality impact, but moving the impact of ad formats from "low" to "high" doubles the ad rank. Some manipulation brings us to:
This implies that the impact of ad formats on ad rank is greater than the impact of Quality Score, which seems overstated to me.
We suspect Varian's script wasn't checked by his engineers, because once we normalize the numbers for their bids, we see discrepancies between minute 5 and minute 6 of the video.
For instance, consider the advertiser with $2 bids. His ad ranks don’t balance when you normalize:
The advertiser with $3 bids:
And the one with $1 bids:
It's odd that he gives very specific bid numbers that don't translate correctly to ad rank, but this is probably more a marketing video than anything else, and Google gets away with it because so few people actually check these numbers for accuracy.
Another reason I'm skeptical of Google's example is that it doesn't match up with our own customer data. You can look in your own AdWords account and see how CTR varies for ads with and without extensions; we did that a while ago and found that the use of ad extensions does indeed raise click-through rates, as shown here:
And those ads with extensions raise Quality Scores too, as shown here:
In both cases you can see that there is indeed some uplift, but it is modest – nowhere near as massive as Hal Varian's examples would have you believe.
For now, I suspect there’s a bug in the example and I’d hope that Google would correct it. (Thanks to Mark Irvine, our resident data scientist, for help with the equations here.)
This is 100% true and validates our own internal findings.
When we looked at our customer accounts, we found that average Quality Scores were similar regardless of the mobile share of an account's clicks, even though the expected CTR for mobile was very different from the expected CTR for desktop. The key takeaway is that Google uses lower expected CTR benchmarks when calculating Quality Score on mobile.
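As a toy illustration of that takeaway (my assumption about the mechanism, not a documented Google formula): if Quality Score compares an ad's actual CTR against a device-specific expected CTR, then a mobile ad with a much lower raw CTR can still meet its benchmark just as well as a desktop ad.

```python
# Toy model: QS depends on actual CTR relative to a device-specific benchmark.
# The benchmarks and CTRs below are invented purely for illustration.

EXPECTED_CTR = {"desktop": 0.04, "mobile": 0.02}  # assumed benchmarks

def ctr_vs_benchmark(device, actual_ctr):
    """Ratio of actual CTR to the device's expected CTR; 1.0 means on par."""
    return actual_ctr / EXPECTED_CTR[device]

print(ctr_vs_benchmark("desktop", 0.04))  # 1.0 -> meets the desktop benchmark
print(ctr_vs_benchmark("mobile", 0.02))   # 1.0 -> meets it too, at half the raw CTR
```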
Pay Attention to the “Big Three” Component Parts of Ads Quality: ad relevance, expected CTR and landing page experience.
This is true, but they list them in a table as though they were equally weighted factors. Beating your expected CTR trumps all the other factors by far, and they should make that clearer. The old QS video had a pie chart showing the components of Quality Score in which two-thirds of the algorithm was based on CTR – I thought that was a better way to explain it.
They're not wrong, but they aren't telling you the whole story either, for obvious reasons. The level of detail in the new white paper is higher than before, which indicates that they were previously leaving out key information – and I can only assume the new video and white paper still omit key details. So when it comes to Quality Score (still the most important metric in your account), you'd be better off doing your own homework, as I have, than taking Google's advice verbatim.