So, you want to improve your Google Ads campaigns? Enter Google Ads Campaign Experiments, or ACE for short. ACE is a feature that helps you measure the impact of potential changes before applying them to a campaign. Used properly, it serves as a vehicle for A/B testing.
As we discussed in our previous post, A/B testing is essentially a process that applies basic scientific principles to digital advertising. Just like the first experiments we ran in elementary school science, it works by comparing two versions of a campaign in which all variables but one are controlled; the effect of that single manipulated variable is what's being measured.
An A/B test begins with a clear question that the test's outcome should answer, and an objective that follows from it. For example, "Will a different landing page outperform the current landing page?" or "Will new ad copy outperform the original ad copy?" Just as important is establishing which key performance indicators (KPIs) will define "outperform." Which KPIs you choose will depend on your objective for the campaign. A campaign trying to increase conversions, for example, would probably be measuring, well, conversions, whereas a campaign trying to increase people's awareness or consideration of a brand would be better served by a different KPI like impressions, clicks, site engagement, or any number of others.
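If you want to sanity-check "outperform" yourself, the underlying math isn't exotic. Here's a minimal sketch (our own illustration, not a Google tool, with made-up numbers and a `scipy` dependency) of a two-proportion z-test run on exported click and conversion counts from each arm:

```python
from math import sqrt
from scipy.stats import norm

def conversion_z_test(conv_a, clicks_a, conv_b, clicks_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))  # two-tailed
    return p_a, p_b, z, p_value

# Hypothetical export: 90 conversions from 3,000 clicks (A)
# versus 120 conversions from 3,100 clicks (B).
p_a, p_b, z, p_value = conversion_z_test(90, 3000, 120, 3100)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")
```

A p-value below your chosen threshold (0.05 is the usual convention) suggests the gap between the two conversion rates probably isn't noise. Google surfaces similar significance indicators inside the experiments interface (more on that below), so this is mainly useful for understanding what those numbers mean.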
And again, the most important thing to remember: Other than the single manipulated one, control your variables!
A Big Ol’ Guide to the Google Experiment Process & Settings
The first step in a Google Ads experiment is to create a draft. This is a duplicate of a pre-existing campaign that lets you easily make your variable change and submit it as an experiment.
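Most people create the draft right in the Google Ads web interface, but the same thing can be done programmatically. The sketch below assumes the Google Ads API Python client (`google-ads`) on an API version that still offers drafts and experiments, plus placeholder customer and campaign IDs; treat the service and field names as something to verify against the current API documentation rather than gospel.

```python
from google.ads.googleads.client import GoogleAdsClient

# Placeholder IDs; substitute your own account and campaign.
CUSTOMER_ID = "1234567890"
BASE_CAMPAIGN_ID = "9876543210"

# Credentials are read from a local google-ads.yaml configuration file.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")

draft_service = client.get_service("CampaignDraftService")
campaign_service = client.get_service("CampaignService")

# A draft is a copy of an existing ("base") campaign that you can edit freely.
operation = client.get_type("CampaignDraftOperation")
draft = operation.create
draft.base_campaign = campaign_service.campaign_path(CUSTOMER_ID, BASE_CAMPAIGN_ID)
draft.name = "Landing page test - draft"

response = draft_service.mutate_campaign_drafts(
    customer_id=CUSTOMER_ID, operations=[operation]
)
print(f"Created draft: {response.results[0].resource_name}")
```

From there you'd make your single variable change on the draft and run it as an experiment, exactly as you would through the interface.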
After creating your draft and defining the manipulated variable, you'll select your start and end dates. Google recommends running bid strategy experiments for no less than four to six weeks, since it takes the system a while to learn how best to optimize your bids. And even if you're conducting an experiment based on creative factors like ad copy or image ads, remember that new creative must go through an approval period. That usually takes no longer than a day, but it's worth factoring into the schedule.
Once the start and end dates are set, you'll decide on your experiment split. This is where you define how the daily budget will be divided between the A and B variations. Per Google, it's best to stick with a 50/50 budget split, as that's the quickest way to reach statistically significant results. Along with the budget split, there's a split option to decide on: search-based or cookie-based. A search-based split divides search queries evenly between the A and B variations. This carries the risk that a user sees both the A and B variations if they search for the same terms multiple times, which could influence their behavior toward one of the versions and make the data less accurate. A cookie-based split, meanwhile, uses cookies to keep the same user from seeing both the A and B versions of your ad. Intuitively, this means the resulting data can be more accurate, and Google generally recommends the cookie-based approach for that reason.
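To build an intuition for why the 50/50 split is the fast path to significance, it helps to estimate how much traffic a test needs in the first place. Here's a back-of-the-envelope sketch using the standard two-proportion sample-size formula; the baseline conversion rate, the lift you hope to detect, and the daily click volume are all made-up placeholders.

```python
from math import ceil
from scipy.stats import norm

def clicks_per_arm(baseline_rate, expected_lift, alpha=0.05, power=0.8):
    """Approximate clicks needed in EACH arm to detect a relative lift
    in conversion rate with a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_lift)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return ceil(n)

# Placeholder numbers: 3% baseline conversion rate, hoping to detect a 15%
# relative lift, with ~400 daily clicks split 50/50 between the two arms.
n = clicks_per_arm(0.03, 0.15)
daily_clicks_per_arm = 400 * 0.5
print(f"~{n} clicks per arm -> roughly {n / daily_clicks_per_arm:.0f} days")
```

The takeaway: the arm that collects data slowest sets the pace for the whole experiment, so an uneven split (say, 80/20) just stretches the timeline until the smaller arm catches up.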
During the campaign experiment and once it's over, you'll be able to view performance metrics, along with indicators of statistical significance, at the top of the experiment campaign's page. This way, you can gain an easy and accurate understanding of how the campaign update performed.
Ways to Experiment
When it comes to variables to test, there's no single "right" one, because different campaigns have different objectives and different weaknesses to improve upon. There are, however, some common choices. Here are a few of them:
Testing bid strategy means controlling all the creative aspects, so the ads look the same and link to the same destination, but are powered by different bidding, for example, manual CPC versus target CPA. As we mentioned earlier, bid strategy experiments require at least four to six weeks to reach statistically significant results, because the Google Ads algorithm needs a learning period before it can optimize your bidding.
Similarly, you can experiment with bid or scheduling adjustments to find out whether ad performance is affected by time-based or bidding factors. For example, what happens if you only run ads during a certain time of day, or bid higher on high-converting terms? These kinds of variables are about using your input resources as efficiently as possible to decrease your cost per acquisition (or other cost rate metrics).
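As a concrete picture of what a scheduling experiment is trying to surface, here's a tiny sketch (with invented hourly numbers) that computes cost per acquisition by hour of day from an exported performance report:

```python
# Hypothetical hourly performance export: (hour, cost in dollars, conversions).
hourly = [
    (9, 120.0, 4),
    (12, 210.0, 9),
    (15, 180.0, 5),
    (21, 95.0, 1),
]

# CPA = cost / conversions; guard against hours with no conversions yet.
for hour, cost, conversions in hourly:
    if conversions:
        print(f"{hour:02d}:00  CPA = ${cost / conversions:,.2f}")
    else:
        print(f"{hour:02d}:00  no conversions yet")
```

If certain hours consistently post a CPA far above the rest of the day, that's the kind of finding an ad-scheduling experiment can confirm before you commit the change to the live campaign.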
Otherwise, the first variables that likely come to mind are creative variations. This can be a change in ad copy, imagery, or other visual factors. The reasoning behind this kind of experiment is pretty self-explanatory: will changing the wording or other creative elements improve our campaign's performance?
Adjacent to creative variations in the ad itself are variations in landing pages. This isn't really about creative distinctions, but rather about whether linking an ad to a different established landing page leads to better results.
Lastly, there are changes to ad group structure, keywords, and match type. This is pretty straightforward: while your KPIs will obviously vary depending on your campaign objectives, whichever you choose can be measured against new keywords, ad groupings, or match type variations.
Overall, running frequent campaign experiments will help you get the most out of your campaigns. They're a good way of gaining valuable insight into how people engage (or don't) with your digital advertising, and they give you a foundation on which to improve your performance. Successful online marketing is fueled by attentiveness and curiosity, and A/B testing gives you a vessel to engage those attributes and get more out of your efforts.