AdWords Campaign Experiments Adds Ads: Why I'm Disappointed
Was anyone else a little confused and let down when Google launched AdWords Campaign Experiments back in August without the ability to test ads? Instead of the obvious new ad testing features, Google focused on split testing across ad groups and campaigns for changes to keywords and ad groups.
For example: The first test I ran was a keyword reduction test. I paused low-impression, low-Quality-Score keywords across an entire campaign. ACE worked great for this type of test.
But why not start with the ads? Ads are the single most tested element of anyone’s PPC campaign, and yet they were mysteriously missing from the initial release.
So, Google has now announced the ability to use ACE with your text and display ads. Better late than never—right?
You can watch the video below to learn how to start using this feature.
Doesn’t look very easy to use, does it?
As someone who has experienced the benefits of testing ads, I am glad that Google added this functionality and seems to care about the importance of ad testing. We should all test more, but I'm not sure the implementation of this new functionality actually encourages testing.
Google has us jump through a lot of hoops just to get an ad test started, and it’s fairly confusing. Sure, you get some new capabilities like being able to control the split, see the statistics, etc., but the workflow doesn’t make it easier to test. I believe that software should, when possible, make our lives easier, not more complicated.
Here are some questions that went through my head as I was watching the video above:
- Why not treat an ad group with more than one ad as a test by default?
- Couldn’t Google treat the oldest ad as the control and any new ads as experiments?
- Why not give us control over the split at the ad group level with a simple button or drop-down menu?
- Wouldn’t it have been easier and a more intuitive workflow to turn on the statistics for any ads in the ad group by default?
The fact is, an ad group with more than one ad in it is a test (what else would it be?). We just have to do the math to figure out which ad is a winner. If Google turned on statistics by default and you could control the split either as a campaign setting or ad group setting, then the life of a PPC-er would have truly been improved, and Google would have done us all a favor.
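"Doing the math" on a two-ad ad group can be as simple as a two-proportion z-test on CTR. Here's a minimal Python sketch with hypothetical click and impression counts (the function name and numbers are mine, not from any Google tool):

```python
import math

def ad_test_significance(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-proportion z-test on CTR: control ad (A) vs. challenger ad (B)."""
    ctr_a = clicks_a / impressions_a
    ctr_b = clicks_b / impressions_b
    # Pooled CTR under the null hypothesis that both ads perform the same.
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (ctr_b - ctr_a) / se
    return ctr_a, ctr_b, z

# Hypothetical data: 120 clicks on 10,000 impressions vs. 165 on 10,000.
ctr_a, ctr_b, z = ad_test_significance(120, 10_000, 165, 10_000)
# A |z| above roughly 1.96 corresponds to about 95% confidence (two-sided).
print(f"Control CTR {ctr_a:.2%}, challenger CTR {ctr_b:.2%}, z = {z:.2f}")
```

This is exactly the kind of arithmetic a spreadsheet handles fine, which is part of why the ACE workflow feels like overkill.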
I hate to be negative, but it looks like Google went out of its way to make it hard to test ads with ACE. By using the framework it built for ACE, Google has made ad testing much more difficult and confusing than it needs to be.
It’s even hard to look at the results in the interface. You have to use the "experiments" segment, which adds multiple rows to the results. You should really only need one row of data for each ad compared to the control.
So, it’s harder to set up an ad test than it should be and it’s harder to make a decision on the results than it should be. Why?
If Google had released a feature just for testing ads before ACE, would it look anything like ACE? By waiting until after the release of ACE (which is great for testing across ad groups and campaigns), we are left with a complicated and confusing user experience for something that could be quite simple and elegant.
Elegant and simple like an "Optimize for Conversions" setting, similar to the "Optimize for CTR" setting that has already been around for years.
An "Optimize for Conversions" setting would make all of our lives easier and more profitable. If they can make it so easy to optimize for CTR, why wouldn’t they use the same technology to make it easy to optimize for conversions?
The fact that they didn’t just add an "Optimize for Conversions" setting (how you make money) but have had an "Optimize for CTR" setting (how they make money) for years now might be a strong signal for just how much impact testing ads can have on your bottom line. Apparently, Google is capable of making elegant solutions when it’s to their benefit.
So, if you are still convinced you should use this new ACE ad testing feature, here are some things to think about.
- You still have to monitor your results and pick a winner or remove a loser. Don’t just pick 30 days and wait for results—there is always risk when running an ad test.
- Picking a winner based solely on conversion rate could be a mistake; you probably want to optimize for total conversions, or conversions per impression.
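That last point is worth making concrete: an ad that converts a higher share of its clicks can still deliver fewer total conversions from the same impressions, because it attracts fewer clicks in the first place. A small Python sketch with hypothetical numbers:

```python
def per_impression_conversion_rate(impressions, clicks, conversions):
    """Conversions per impression rewards ads that both attract clicks
    and convert them, not just ads that convert a high share of clicks."""
    return conversions / impressions

# Hypothetical data: ad B converts 8% of its clicks vs. ad A's 5%,
# but ad A produces more total conversions from the same impressions.
ad_a = per_impression_conversion_rate(10_000, 500, 25)  # 25 conversions
ad_b = per_impression_conversion_rate(10_000, 200, 16)  # 16 conversions
print(ad_a > ad_b)  # judging by conversion rate alone picks the wrong ad
```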
As for me, I will probably test this new feature, but I don't like working this hard when it could be so simple. From what I can tell, I'm better off using a spreadsheet for the math and regular ad groups with an even ad rotation for the test itself.
I’m even considering outsourcing PPC ad testing altogether for some fresh eyes on my ads. After all, we all need to focus on writing new, better, more relevant ads to test rather than worrying about such a labor-intensive workflow.