Many people want to start experimenting, testing, and learning right away. That’s great. But are you sure it’s time for A/B testing?
You might not have enough conversions yet to get any meaningful results.
That won’t accelerate your business; it will slow you down.
A/B testing is not the goal; it’s a means to an end. What’s the end? Testing your assumptions and increasing the number of conversions.
In this article, you’ll learn exactly when you can start A/B testing and what you should A/B test at every step of the funnel.
Rule of thumb: 1000 conversions a month
Let’s do a thought experiment.
You’re in the desert, parched and out of water. You start seeing tropical pools in the far distance. You keep walking and walking toward the water, because you’re desperately in need of hydration.
It’s a fata morgana, a mirage. These don’t just happen in the desert; they happen on your website too.
This is what happens with A/B testing. The desert is your website and the water is your conversions. If you don’t have enough conversions on your website, you might start to see funky things and ‘results’ that aren’t really there.
You simply won’t get statistically significant results. However desperately you want to see legit results from your A/B test, they aren’t there.
You might think you can simply lengthen the testing time to eventually get significant results. Smart thinking, but a bad idea.
With a longer test, cookie deletion becomes a problem. After two weeks, about 10% of your audience will have deleted their cookies. When those people come back to your site, they may land in a different variation, so your audience gets mixed up. Run a test for more than two months and the results are no longer trustworthy: over time, cookie deletion makes the results of the different variations converge.
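To see how quickly “just run it longer” collides with that two-month ceiling, here is a minimal Python sketch using the standard two-proportion sample-size formula (95% confidence, 80% power). The traffic, baseline rate, and uplift numbers are hypothetical placeholders, not benchmarks:

```python
# Rough test-duration estimate: how many weeks a 50/50 A/B test needs
# before each variant reaches the required sample size.
from math import ceil

def required_sample_per_variant(p, rel_uplift, z_alpha=1.96, z_beta=0.84):
    """Two-proportion sample size at 95% confidence and 80% power."""
    p2 = p * (1 + rel_uplift)
    num = (z_alpha + z_beta) ** 2 * (p * (1 - p) + p2 * (1 - p2))
    return ceil(num / (p2 - p) ** 2)

weekly_visitors = 2_000   # visitors entering the test per week (assumed)
baseline_rate = 0.03      # current conversion rate, 3% (assumed)
uplift = 0.20             # smallest relative lift worth detecting (assumed)

n = required_sample_per_variant(baseline_rate, uplift)
weeks = ceil(2 * n / weekly_visitors)
print(f"{n} visitors per variant -> about {weeks} weeks")
```

With these example numbers the formula asks for roughly 14 weeks: well past two months, which is exactly where cookie deletion starts to corrupt the results.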
What you can do to still A/B test is change the scope. Depending on which stage of the funnel you’re in, the definition of a conversion differs.
What step of the funnel are you going to test?
Conversions don’t have to be paying customers; they can happen at every step of the funnel.
Let’s go over each step of the funnel and define a ‘conversion’ for it.
Awareness: Clicks from ads, emails, blogs, posts, referrals, etc.
Acquisition: Button clicks, email captures, accounts created, downloads, installs, subscribes, etc.
Activation: People reaching the wow-moment. Some examples (x = a certain amount):
- Playing x songs (Spotify);
- Adding x connections (LinkedIn);
- Uploading a part (3D Hubs);
- Doing x guided meditations (Headspace/Calm);
- Sending x messages (Slack);
- Completing the onboarding (SaaS solutions).
Retention: Aim for 1000 valuable actions in your product. Don’t just look at people returning; look at people using your product in a valuable way. For example:
- Playing a song (Spotify);
- Scrolling down 10 posts in the timeline (LinkedIn);
- Uploading a part (3D Hubs);
- Doing a guided meditation (Headspace/Calm);
- Sending a message (Slack).
If your product has a usage interval of over 3 months, it will be more valuable to test things on the activation level.
Referral: Invitations sent and invitations accepted.
Note: this list is extensive, but never complete.
Make sure to calculate the effect your test will have on the bottom line. How many users will see your test, and what are the expected results? This is a critical step that shouldn’t be forgotten. Think through a hypothetical test at every step of the funnel so you know where the biggest opportunities are.
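A back-of-the-envelope version of that calculation might look like this in Python. Every number below is an assumed placeholder; plug in your own funnel data:

```python
# Bottom-line estimate for a hypothetical test: how many extra
# conversions a winning variant would add per month, and their value.
monthly_users_in_step = 10_000   # users who will see the tested step (assumed)
baseline_rate = 0.05             # current conversion rate of that step (assumed)
expected_uplift = 0.10           # optimistic relative lift of 10% (assumed)
value_per_conversion = 40        # e.g. average order value in dollars (assumed)

extra = monthly_users_in_step * baseline_rate * expected_uplift
print(f"~{extra:.0f} extra conversions/month, "
      f"worth ~${extra * value_per_conversion:.0f}")
```

Running this for a hypothetical test at each funnel step makes the biggest opportunity obvious at a glance.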
Does it make sense to test?
Some things don’t need a test to prove themselves. Since testing is ‘sexy and cool’ (at least to growth hackers; to other people this may sound very weird), it’s a pitfall to launch an A/B test for things that are already proven to give a positive outcome. Examples:
- Matching your ad copy with your landing page copy (message matching).
- Implementing proven CRO best practices.
- New ideas from watching user recordings.
- Outcomes of interviews with users that did and didn’t become customers.
- Clear problems in your site you found via Google Analytics or Mixpanel data.
- A shorter form.
All of the above already has data supporting its implementation. By running an A/B test on it anyway, you waste a testing opportunity, and for the duration of the test, half of your audience misses out on the improvement.
Sure, you can test it ‘just to be sure’, but I prefer to test riskier things myself.
A/B testing will only get you so far: you’re optimizing toward a local maximum, while a whole different value proposition may lead you to a new global maximum.
Tests that do make sense
A/B testing is a way to test assumptions. For anything that’s still unknown territory, anything you’re not sure about yet, A/B testing is a method that can help you out.
Look for serious assumptions to test.
- Most prominent headline or image of the page or ad;
- Shipping costs or delivery time of your product;
- Onboarding journey;
- Email drip campaign;
- Email subject lines;
- The way you communicate your value proposition;
- Referral rewards;
- Button copy.
None of the things above are about changing the color of a button. They are about serious business.
What to do when you can’t test
First of all, don’t panic. A typical A/B test yields an average uplift of around 4%. If you’re a startup, you have bigger fish to fry. You just need to think bigger.
In your phase, there’s more to gain by going all-in, fast, on one direction than by spreading your efforts across multiple directions.
You can either implement everything that makes sense based on your qualitative (interviews and surveys) and quantitative (analytics) data, or go after big assumptions with things like smoke tests.
The beauty of the funnel is that you can always go a step higher, because every step up in the funnel has more data. For example: don’t have enough visitors on your website to test your headline? Test the new headlines in your ads and check whether the difference in clicks is statistically significant.
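As a sketch of that last example, here is a standard two-proportion z-test over made-up ad-click numbers, using only Python’s standard library:

```python
# Is the click-through difference between two ad headlines
# statistically significant? (two-sided two-proportion z-test)
from math import sqrt, erf

def two_proportion_p_value(clicks_a, n_a, clicks_b, n_b):
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# hypothetical example: headline A got 120 clicks on 4000 impressions,
# headline B got 165 clicks on 4000 impressions
p = two_proportion_p_value(clicks_a=120, n_a=4000, clicks_b=165, n_b=4000)
print(f"p-value: {p:.4f}",
      "-> significant at 5%" if p < 0.05 else "-> not significant")
```

With these example counts the difference comes out significant at the 5% level; with your own ad data, just swap in the real click and impression counts.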
Favourite A/B test tools
The one I use most, to check the required audience size and the minimum detectable effect beforehand, is the A/B test calculator from Conversion XL.
If you’re more savvy in statistics, I recommend AB Testguide’s calculator.
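To make “minimum detectable effect” concrete, here is a miniature Python version of the standard approximation such calculators use (95% confidence, 80% power). The baseline rate and traffic numbers are hypothetical:

```python
# Given a fixed sample per variant, approximate the minimum
# detectable effect (MDE) at 95% confidence and 80% power.
from math import sqrt

def minimum_detectable_effect(p, n_per_variant, z_alpha=1.96, z_beta=0.84):
    """Absolute MDE for a baseline rate p with n visitors per variant."""
    se = sqrt(2 * p * (1 - p) / n_per_variant)
    return (z_alpha + z_beta) * se

p = 0.04    # baseline conversion rate, 4% (assumed)
n = 5_000   # visitors you can get into each variant (assumed)
mde = minimum_detectable_effect(p, n)
print(f"absolute MDE: {mde:.4f} ({mde / p:.0%} relative)")
```

If the relative MDE that comes out is far larger than any uplift you could realistically expect, that is your signal to move up a funnel step instead of testing here.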
Dive deeper into the topic
If you liked this article, I highly recommend watching “Stop A/B-testing or speed up – when to do what – Ton Wesseling explains ROAR” to get an even deeper understanding of this topic.
Furthermore, the ‘1000 conversions rule of thumb’ is based on the ROAR model from Online Dialogue.
If you want to dive deeper into this specific model, I recommend reading the full Dutch-language article ‘A/B-testen & ROAR: Kijk verder dan Cialdini’s neus lang is’ (roughly: ‘A/B testing & ROAR: look beyond the end of Cialdini’s nose’) on the Online Dialogue website.