Perhaps the most valuable thing about paid search is that PPC campaigns offer instant feedback. This is what makes A/B testing so powerful: you can funnel the firehose of data that pay-per-click campaigns generate into two competing hypotheses and quickly see which one wins.
This is why a new AdWords feature called AdWords Campaign Experiments (ACE) is so useful. The feature lets you isolate specific elements of your Google AdWords campaigns and test changes to them, splitting off traffic however you like. In this post I'll walk you through:
- The things you can test using AdWords Campaign Experiments
- The things you can’t test
- How to set up an experiment
- How to measure your results
As background for anyone not familiar with the feature, Google has a nice overview video:
According to Google you can test "keywords, bids, and placements." So what does that mean, exactly? Joe Kershbaum of Clix Marketing wrote a great piece on five ideas you can try out, with a lot of useful information on things you can test, what you might do with those controls, and some things you can't test. Definitely take a look at Joe's post for more detail, but here's a brief summary of what he covers:
- Ad Group Structure - Joe makes a great point about being able to break down ad groups to try different keyword/ad text combos and different segmentations to see how they perform from a CTR and conversion perspective.
- Keyword Match Types  - We'll talk more about this later in the overview, but you can test different match types within an Ad Group to see how they perform.
- Keyword Expansion and Reduction - A very neat feature of the tool is the ability to add or remove single or multiple keywords to see how expansion opportunities and/or eliminating a keyword truly affect your campaign.
- A Note on Google Instant Experiments - Joe also points to a great article  by our friend Brad Geddes on the bg Theory blog about using ACE to test new keyword suggestions from Google Instant results.
- Keyword Bids - The ability to split-test bid changes is another very powerful feature of AdWords Campaign Experiments.
- ACE Limitations - Joe also notes a few current limitations of the offering.
Now that you have an idea of what you can test, let's walk through setting up an experiment.
A great question you always get whenever you talk about testing is "what should I test?" Unfortunately, the real art and opportunity of testing lies outside the mechanics of tools like Website Optimizer and Campaign Experiments: you need good ideas about what will likely improve your campaigns in order to create good test designs. In the case of ACE, your testing impetus can be very data-driven:
- Focus on High-Impact Areas - Look for big spending campaigns and Ad Groups, particularly high-volume segments of your campaign where CPAs need work.
- Low Quality Scores - If you have a campaign with low Quality Scores (and consequently, most likely, a low click-through rate), try some different keyword segmentations and keyword/ad copy combos: you might find a structure that better resonates with your searchers.
- High Volume but Poor CPAs - This is often an indication that broad match is inefficient -- but don't just guess! Test some different match types. In the example below we'll walk through how to convert a campaign from all broad to all modified broad keywords.
- Great CPAs but Low Volume - Test out some keyword tool  ideas that are iterations on a theme that's working but not driving enough volume -- see how it works!
- Head Keywords Attracting Specific Search Queries - So what if you have a single keyword in a campaign sopping up all the impressions and clicks, but the search query report  tells you that much of what it's getting matched against is more specific queries? Create an experiment where you set the high-volume keyword to exact match only and see if those impressions going to your more specific keywords help improve your CTRs and conversions.
There are many other examples of great things to test, particularly as you get into bidding experiments, but hopefully this gives you a general feel for the types of reactions you can make to your data and the types of experiments you might want to set up (if you have a great action/reaction example for an awesome split test, drop it in the comments!).
Setting Up the AdWords Campaign Experiment
So we have our design idea: we want to take a campaign that's been using broad match, and see what the impact would be when measured against a campaign that used all modified broad match. For this example we'll do this simple transition, but you could just as easily test broad vs. phrase or broad vs. all your keywords on all match types.
First, you need to navigate to the campaign > settings tab and scroll to the bottom of the tab, under the advanced section:
Next you simply name your campaign, and determine the percentage of impressions you're willing to dedicate to the experiment (if the campaign you're experimenting with is core to your business, you may not want to jeopardize 50 percent of the impressions on the experiment in case results are worse, so you can choose to only run the experiment against say 10 percent of impressions and mitigate some of your risk):
Next, once you've saved the settings on this screen, Google will allow you to "start your experiment" -- but that won't work because we haven't changed anything! Google doesn't really hold your hand for this stage, but luckily we're here to do just that.
Before running the experiment, you need to jump back to the Ad Groups tab and start to make your changes.
Let's walk through converting an Ad Group from broad to modified broad. First, select the Ad Group you want to change within the campaign where you've enabled experiments, and toggle to the keyword tab. You'll see an experiment icon next to each keyword:
Here we can determine what to do with certain keywords. In this case, we want to leave all of the existing keywords as the control, and set up a fresh list of keywords on Modified Broad match for our experiment. We can do this by selecting all the keywords via the check box next to the keyword header, and changing the status of all of our keywords to "control only":
Now we've designated all of the broad matched keywords as our control. Next we need to generate our list of modified broad match keywords. For this task there are two awesome (and free) tools.
First, we can just export the existing keyword list from the AdWords interface by:
- Clicking the download report button
- Getting a CSV export of the keyword list
- Copying the keyword row from our Excel sheet
(If you're not yet confident with the new reports, Brad Geddes has an awesome video that walks you through reporting in the new AdWords interface.)
Once you have your list of keywords, you can simply drop them into a free tool:
- Acquisio Modified Broad Match Keyword Tool  - This is a really nifty tool that lets you create modified broad matched versions of keywords with different controls. I'll actually have a follow-up post with some different ways you can use Modified Broad Match that will go into greater depth on this, but the idea is that the tool can help you generate any sort of modified broad match version of a keyword you like. For our purposes we might just upload our list and get it back with the modifiers attached.
- Chad Summerhill's Modified Broad Match Keyword Tool  - Our friend Chad has created a great spreadsheet that will create every variation of modified broad match for a given keyword, so that you can grab (and possibly test) some different combinations.
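If you'd rather script the conversion yourself, the transformation is simple: modified broad match just prefixes each word in a keyword with a `+`. Here's a minimal sketch; the file name and the "Keyword" column header are assumptions about how your AdWords CSV export is laid out, so adjust them to match your actual report:

```python
import csv

def to_modified_broad(keyword):
    """Prefix every word in a broad match keyword with '+'."""
    return " ".join("+" + word for word in keyword.split())

def convert_export(path):
    """Read the keyword column from an AdWords CSV export and return
    modified broad match versions of every keyword.
    NOTE: the "Keyword" column header is an assumption -- check your export."""
    with open(path, newline="") as f:
        return [to_modified_broad(row["Keyword"]) for row in csv.DictReader(f)]

print(to_modified_broad("red running shoes"))  # -> +red +running +shoes
```

This produces the "all words modified" variant; tools like Chad's spreadsheet go further by generating every partial-modifier combination, which you'd want if you plan to test modifiers on only some words.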
Next, grab the output from whichever tool you use and input your new keyword list into the Ad Group as experiment only keywords:
Click save, and you've set up the experiment! Now you just need to set it live and analyze results.
To enable the new campaign experiment you've created, you'll need to jump back to the campaign > settings tab and scroll all the way to the bottom again. This time, you want to apply launch changes, and start running the experiment.
The final step, of course, is actually monitoring your results! For this, you'll need to do a bit of data manipulation in Excel to really get a good picture.
First, to get the data, look at whatever aspect of your campaign you want to do the analysis against (the Ad Group or Campaign level to get a quick picture of performance, or at the keyword level to get a more granular view) and use the segment tab to create a report segmented by your experiment data:
The output will look something like this (though hopefully your results aren't as blurry):
Note: you may have data labeled "outside experiment" -- typically this is just the clicks and impressions that occurred before or after you ran your experiment.
Finally, we can take this report, download it into Excel, and start to get some interesting insights, such as:
- Have traffic and impression volume dropped off with modified broad?
- Have conversions?
- What about costs and cost per conversion?
From here we can see which areas of our campaign work better with modified broad, which areas sacrifice volume and/or cost per acquisition with the new match type, and react accordingly.
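If you want to go beyond eyeballing the Excel sheet, the comparison above boils down to a few ratios per segment. This sketch computes CTR, CPC, and CPA for the control and experiment rows; the numbers below are hypothetical stand-ins for the figures you'd pull from your segmented report:

```python
def summarize(segment):
    """Compute CTR, cost per click, and cost per conversion for one
    experiment segment (control or experiment). Guards against
    division by zero on thin data."""
    clicks, impressions = segment["clicks"], segment["impressions"]
    cost, conversions = segment["cost"], segment["conversions"]
    return {
        "ctr": clicks / impressions if impressions else 0.0,
        "cpc": cost / clicks if clicks else 0.0,
        "cpa": cost / conversions if conversions else 0.0,
    }

# Hypothetical numbers standing in for rows from the segmented report.
control = {"clicks": 500, "impressions": 20000, "cost": 750.0, "conversions": 25}
experiment = {"clicks": 430, "impressions": 15000, "cost": 560.0, "conversions": 28}

for name, seg in [("control", control), ("experiment", experiment)]:
    m = summarize(seg)
    print(f"{name}: CTR {m['ctr']:.2%}, CPC ${m['cpc']:.2f}, CPA ${m['cpa']:.2f}")
```

With numbers like these, the modified broad experiment shows fewer impressions but a better CTR and CPA, which is exactly the trade-off you'd weigh before applying the change.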
AdWords Campaign Experiments is a pretty awesome tool -- if you have any anecdotes, use cases, things that drive you nuts about it, or specific tests you like to run with it, leave them in the comments!
Additional AdWords Campaign Experiments Resources
Tom Bates on the Epiphany Solutions blog had a nice post on ACE as well. If you're really looking to dig further into campaign experiments, here are a bunch of handy links from Google themselves:
- An overview of AdWords Campaign Experiments 
- Step 1: Setting your experiment parameters 
- Step 2: Defining your experimental changes 
- Step 3: Monitoring your experiment 
- Step 4: Applying or deleting your experimental changes 
- Advanced video: Understanding your experiment results and statistical significance 
- Advanced video: Ad Group Experiments 
- What is AdWords Campaign Experiments? 
- How does AdWords Editor work with AdWords Campaign Experiments? 
- How do bid management tools work with ACE? 
- Glossary 
- How can I experiment with my ads? 
- Which parts of my campaign can I experiment with? 
- Can I run experiments for free? 
- How do bid multipliers in advanced ad scheduling and demographic bidding work with Campaign Experiments? 
Setting up your experiment
- Campaign Experiments: Getting Started and Set-Up Guide 
- How is traffic split between the control and experiment? 
- What do the start and end dates mean? 
- How long should I run an experiment? 
- How many experiments can I run at once? 
- How are budgets affected by experiments? 
- Which features are incompatible with AdWords Campaign Experiments? 
- How do I structure my experiment? What's the difference between in-line edited experiments and ad group experiments? 
- How does ad rotation interact with Campaign Experiments? 
- What traffic split should I choose for my experiment? 
- Why do I need to enter experimental bids as percentages? 
- What happens to my campaign data while I'm running an experiment? 
- How many experimental changes can I make at a time? 
- What happens when I have a control-only ad group that contains an experiment-only keyword? 
- Will my Quality Score be affected by experiments? 
- How can I use AdWords Campaign Experiments with tracking URLs? What is the ValueTrack tag for ACE? 
- How can I test destination URLs with Campaign Experiments? 
Monitoring your experiment
- What is statistical significance? 
- How do I stop an experiment manually? 
- What does Date Last Modified mean? 
- Why don't I see any experiment statistics? 
- Why can't I change my experiment while it's running? 
- Why do some elements have statistical significance, while others don't? 
- How likely is it that my experiment results will continue if I apply the changes? 
- Why is the number of impressions in my control not identical to the number of impressions in my experiment? 
- What is a holdback? 
- What are some tips for evaluating an experiment? 
Applying or deleting an experiment