A/B testing is a powerful tool for trying out two different versions of your website to see which one performs better.
You could try out something as big as a different layout or style, or something as simple as the text of a CTA button. Either way, it gives you real data to help you decide which version drives more conversions, or whatever your goal happens to be.
But there is a major flaw with A/B testing that few people know about: you need a certain number of data points before the results become statistically significant. Until you hit that threshold, the results could simply be wrong. Worse, the threshold itself is unknown, which makes things even trickier. The only thing worse than having no information is having the wrong information. People see the early results, believe they know the truth, and walk away convinced of a conclusion that is actually a statistical fluke.
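To see why early results mislead, here is a minimal simulation (not from the article; the visitor counts and the 10% conversion rate are made-up assumptions). Two "pages" share the exact same underlying conversion rate, yet with a small sample their measured rates can differ enough to crown a false winner, while with a large sample they converge:

```python
import random

def simulate_conversions(n_visitors, true_rate, rng):
    """Count conversions when each of n_visitors converts with probability true_rate."""
    return sum(1 for _ in range(n_visitors) if rng.random() < true_rate)

# Two pages with the SAME underlying 10% conversion rate (hypothetical numbers).
rng = random.Random(7)  # fixed seed so the illustration is reproducible

early_a = simulate_conversions(100, 0.10, rng)
early_b = simulate_conversions(100, 0.10, rng)
late_a = simulate_conversions(20_000, 0.10, rng)
late_b = simulate_conversions(20_000, 0.10, rng)

# Early on, the gap between two identical pages can look like a real difference.
print(f"After 100 visitors each:    A={early_a} conversions, B={early_b} conversions")
# With enough data, both measured rates settle near the true 10%.
print(f"After 20,000 visitors each: A={late_a / 20_000:.3f}, B={late_b / 20_000:.3f}")
```

The catch the article points out is that in a real test you never know the true rate, so you cannot tell from A and B alone whether you are still in the "early" regime.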
A different type of A/B testing, invented by Dr. Alex Mehr and called A/A-B/B testing, has a built-in mechanism that tells you when you have crossed the threshold of statistical significance and can draw much more accurate conclusions.
With A/A-B/B testing, instead of testing two pages, the control and the variant, you use two of each. You test two A's, which are exactly the same page as each other, and two B's, which are also identical to each other. Until you reach statistical significance, the identical pages will show different results.
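In practice, splitting traffic into the four buckets can be done deterministically. This is a sketch under my own assumptions (the article does not specify an assignment mechanism): hashing a visitor ID gives a stable, roughly even four-way split, so a returning visitor always lands on the same page.

```python
import hashlib

BUCKETS = ["A1", "A2", "B1", "B2"]  # two identical A pages, two identical B pages

def assign_bucket(visitor_id: str) -> str:
    """Deterministically split traffic four ways by hashing the visitor id,
    so a returning visitor always sees the same variant."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).digest()
    return BUCKETS[digest[0] % 4]

# Same visitor always gets the same bucket:
print(assign_bucket("visitor-12345"))
print(assign_bucket("visitor-12345"))
```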
A1 and A2 are the same page as each other, and B1 and B2 are also the same page as each other.
At this point you would think B is better than A, since Atot = 1108 < Btot = 1289.
So if your two A's, which are the exact same page, are not producing the same results, something is off and you have not yet reached the point where the data is trustworthy. But once the identical pages produce roughly equal numbers, you can assume there is enough data to draw accurate conclusions.
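The decision rule above can be sketched in a few lines. This is my own minimal interpretation, not the article's implementation: the 5% tolerance is an arbitrary assumption, and the per-bucket counts below are hypothetical splits chosen to be consistent with the article's totals (Atot = 1108 and Btot = 1289 early on; Atot = 3922 and Btot = 3001 later).

```python
def within_tolerance(x, y, tol=0.05):
    """True when x and y differ by at most tol (5%) relative to their mean."""
    mean = (x + y) / 2
    return mean > 0 and abs(x - y) / mean <= tol

def aabb_verdict(a1, a2, b1, b2):
    """Verdict for an A/A-B/B test given conversion counts for the four
    buckets (equal traffic per bucket assumed)."""
    # If either pair of identical pages disagrees, the data can't be trusted yet.
    if not (within_tolerance(a1, a2) and within_tolerance(b1, b2)):
        return "not enough data yet: identical pages still disagree"
    a_total, b_total = a1 + a2, b1 + b2
    if within_tolerance(a_total, b_total):
        return "no meaningful difference between A and B"
    return "A wins" if a_total > b_total else "B wins"

# Early: the two identical A pages disagree wildly (hypothetical 400/708 split of 1108).
print(aabb_verdict(a1=400, a2=708, b1=640, b2=649))
# Later: each identical pair has converged, so the A-vs-B comparison can be trusted.
print(aabb_verdict(a1=1950, a2=1972, b1=1490, b2=1511))  # totals 3922 vs 3001
```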
But now that the A's match each other and the B's match each other, you can see that A is actually better than B, with Atot = 3922 > Btot = 3001.