A/B testing means running two ads with the same budget and settings, changing only one variable to see which version performs better. It is also called split testing because you split your campaign into two ads and compare their results.
In this article, we will dive deeper into each of these practices, covering their value and the nuances of implementing them in your campaigns. By the end of this post, you'll have a solid foundation to start split-testing your ad campaigns.
1. Be Patient for the Duration of the Test
Split tests take time because ads don't get served uniformly when you publish them. Some ad sets might get audience access immediately, while others might be in review for longer.
The ad platform also runs an initial learning phase to gauge how interesting an ad is to the audience. Different ad platforms have different campaign duration expectations, and because of that, they have different review periods for new ads. On Google Ads, you shouldn't touch your test ads for at least a week. On Facebook, avoid interfering with an ad for at least 48 hours. (Related: Scaling Facebook Ads)
Sometimes, one ad will seem to outperform the other. But don't be fooled: this doesn't prove the ad is superior, only that the platform is serving it while holding back the other.
Trust the process.
On platforms where you pay per impression (CPM) instead of bidding on a cost per click (CPC), your effective CPC might initially look very high.
Let's suppose your cost per 1,000 impressions is $5. If you get one click per 1,000 impressions, that looks like $5 per click at first. If you prematurely multiply that by your target clicks, you might think you'll spend $25,000 to get the 5,000 clicks you need for 500 sales. And if 500 sales bring in less than $25,000, you might terminate the ad prematurely.
The initial high cost, however, results from irrelevant people seeing your ad. Once the platform's algorithm figures out the right people to serve the ad to, you might get 50 clicks for every 1,000 impressions, bringing your cost per click down 50x, to $0.10. Unfortunately, most advertisers turn off test ads long before that point. Be patient. Trust the process.
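The arithmetic above can be sketched in a few lines of Python. The numbers are the hypothetical ones from the example, not real platform data:

```python
def effective_cpc(cpm: float, clicks_per_1000_impressions: float) -> float:
    """Effective cost per click when paying per impression.

    cpm: cost per 1,000 impressions, in dollars.
    clicks_per_1000_impressions: how many clicks those impressions yield.
    """
    return cpm / clicks_per_1000_impressions

# Early in the test: 1 click per 1,000 impressions at a $5 CPM.
early_cpc = effective_cpc(5.0, 1)       # $5.00 per click
# After the algorithm finds the right audience: 50 clicks per 1,000.
matured_cpc = effective_cpc(5.0, 50)    # $0.10 per click

# Projecting total spend from the early number is what misleads advertisers:
naive_projection = early_cpc * 5_000        # $25,000 for 5,000 clicks
realistic_projection = matured_cpc * 5_000  # $500 for the same clicks
```

Judging the whole campaign by `early_cpc` is exactly the premature-termination trap described above.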
2. Set a Separate Split Testing Budget
One of the reasons why advertisers stop serving ads long before the test has matured is that their ad testing is taking a chunk out of their advertising budget. The best way to avoid this trap is to have a separate testing budget.
Doing this will force you to spend a specific amount of money on testing alone. And since you won't be worrying about the campaign budget, you will have no reason to abort the test prematurely and draw half-baked conclusions. If this makes sense to you, your biggest question is going to be: "How much should I spend on testing my ads?"
That depends entirely on your overall campaign budget:
You can vary these percentages, but once you have set aside a certain amount of money for testing purposes, stop thinking that you can save it for the actual campaign by killing tests before they are finished.
Before setting a test budget, make sure it can cover the recommended testing period for the specific platform.
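As a rough sanity check, you can verify that a test budget keeps every variant live for the whole testing window. The daily minimum and test length below are placeholder assumptions, not platform figures:

```python
def min_test_budget(daily_min_spend: float, test_days: int, variants: int) -> float:
    """Smallest budget that keeps every variant running for the full test window."""
    return daily_min_spend * test_days * variants

# Hypothetical example: $10/day minimum spend, a 7-day window, two ad variants.
required = min_test_budget(10.0, 7, 2)  # $140
```

If your earmarked test budget is below `required`, the test will end before the platform's learning phase does, and the results won't mean much.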
3. Test One Hypothesis at a Time
You can test an ad creative, copy, copy length, or even the platform the ad runs on. But you cannot test all of these at the same time: if Ad A has different copy and a different graphic than Ad B, you won't know for sure what makes Ad B perform better. In the short term, you might be able to scale up your ad spend on Ad B and get a better cost per action.
But what if you tested ads A and B with just the copy variation?
You might find that Ad A's copy does better. Then, running another test with Ad A's copy on two ads with different graphics, you might find that Ad B's graphics resonate with the audience. Combining Ad A's copy with Ad B's graphics might get you the best ROI.
You gain long-term insights when you test for a single variable's effectiveness. You find out whether your customers like long-form text copy or short captions with plenty of emojis. You learn what kind of graphics attract your audience. Next time, you can test different variables.
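One way to keep yourself honest about this rule is to represent each variant as a set of fields and confirm that exactly one field differs before launching the test. The field names below are illustrative, not from any platform's API:

```python
def differing_fields(variant_a: dict, variant_b: dict) -> list:
    """Return the names of fields that differ between two ad variants."""
    return [field for field in variant_a if variant_a[field] != variant_b[field]]

ad_a = {"copy": "long-form story", "graphic": "guitar photo", "platform": "facebook"}
ad_b = {"copy": "short caption",   "graphic": "guitar photo", "platform": "facebook"}

changed = differing_fields(ad_a, ad_b)
# A valid split test changes exactly one variable.
assert len(changed) == 1, f"Testing too many things at once: {changed}"
```

If the assertion fires, you're running a multivariate test, and whatever wins, you won't know why.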
4. Use Drastic Contrasts
Even if you want to test one thing at a time, you might have a problem.
What do you test first?
If you're advertising a brand that has never advertised before, start with the hypothesis that has the most significant variation.
For instance, if you're looking to test two fonts in an ad creative, you might be splitting hairs. But if you want to test a real-life talking head presenter against an animated one, you have a drastic enough contrast to get results in a shorter period.
5. Use Appropriate Tracking Strategies
Split testing might make you feel smart, but if you track vanity metrics, all you'll learn is how to feel good while spending minimal money. The goal isn't to pay the least for a like or a follow.
The goal is to pay the least amount possible to acquire a customer, which means tracking clicks and the entire customer journey. Different ad-serving platforms have their respective tracking tools that can help you figure out not just the volume of clicks but the quality of the traffic as well.
Let's suppose you're advertising a brand of guitars. If one of your creatives features a photo of a guitar and another features a photo of a girl in yoga pants doing squats, the latter will get more likes and clicks. By tracking the customer journey via Google Analytics, the Meta pixel, or any relevant tool, you can see which visitors are actually buying your product.
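A simple tracking strategy is to tag each variant's landing-page URL with UTM parameters, so your analytics tool can attribute purchases to the ad that drove the click. A minimal sketch, where the source, campaign, and variant names are hypothetical:

```python
from urllib.parse import urlencode

def tagged_url(base_url: str, campaign: str, variant: str) -> str:
    """Append UTM parameters so the click source survives into analytics."""
    params = {
        "utm_source": "facebook",    # hypothetical ad platform
        "utm_medium": "paid_social",
        "utm_campaign": campaign,
        "utm_content": variant,      # identifies the split-test variant
    }
    return f"{base_url}?{urlencode(params)}"

url_a = tagged_url("https://example.com/guitars", "spring_sale", "ad_a")
url_b = tagged_url("https://example.com/guitars", "spring_sale", "ad_b")
```

With `utm_content` distinguishing the variants, a purchase in your analytics report traces back to the exact ad, not just the campaign.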
6. Minimize Luck and Misfortune
Make sure you conduct ad tests during neutral periods, so misfortune and luck don't color your conclusions. If you had advertised Bitcoin-related products and services in early 2021, you would have received a lot of attention. If you run the same ads after the Bitcoin crash... well, the metrics would be wildly different.
7. Do Not Jump to Conclusions
Testing one variable at a time and not conducting tests during periods of turmoil or peak spending can help you get the right idea. But put away your jump-to-conclusions mat because it can hinder your advertising efficiency.
To understand your audience, you must run over 100 split tests across multiple campaigns. And even then, your audience can surprise you with a sudden shift in taste. Talk about a moving target! It helps to run at least two tests per hypothesis before deciding that your audience likes a specific type of graphics, copy, presenters, or offers.
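Before declaring a winner, it also helps to check that the difference in click-through rates is statistically meaningful rather than noise. Below is a minimal two-proportion z-test sketch; the click and impression counts are made up for illustration:

```python
import math

def ctr_z_score(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int) -> float:
    """Two-proportion z-score for the difference in click-through rates."""
    rate_a = clicks_a / imps_a
    rate_b = clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    return (rate_a - rate_b) / std_err

# Hypothetical results: Ad A got 50 clicks from 1,000 impressions,
# Ad B got 30 clicks from 1,000 impressions.
z = ctr_z_score(50, 1000, 30, 1000)
significant = abs(z) > 1.96  # roughly a 95% confidence threshold
```

If `significant` is false, the honest conclusion is "not enough data yet", not "Ad A wins".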
Even after multiple tests, your conclusions will be skewed by your subjective experience and perception. Moreover, findings that might be accurate for one campaign might be inaccurate for another.
Sooner or later, all data becomes dated, so you should make split testing a regular practice in your marketing. Where salespeople are told to “always be closing”, marketers need to “always be testing”.