A/B Test Calculator – Statistical Significance Calculator
Find out which variation of your website will help you reach your goal
How to Run an A/B test
To run an A/B test, you compare two pages (page A and page B) that differ by a single element. Page A is the original page – this is known as the control. Page B should be identical to page A, but with one small modification.
The best elements to modify are those that drive conversions and call your visitors to action. You should only change ONE element on the page at a time, so that any difference in results can be attributed to that change. If you change multiple elements, you won’t know which change worked and which didn’t.
Common elements to change include:
- Call to action button
- Ad copy
- Product copy
Within each of these elements, you could also consider changing smaller details such as the wording, the color, or the style of the font used.
The variant with the higher conversion rate is the winning variant. The more you optimize your page, the higher your conversion rate will be.
Once your pages are ready, half of your traffic should divert to the control page (page A), and the other half should be routed to the modified page (page B).
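The 50/50 split is usually done with deterministic bucketing, so that a returning visitor always sees the same variant. A minimal sketch in Python (the hashing scheme and the `assign_variant` name are illustrative assumptions, not any specific tool's API):

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a visitor into variant A or B (50/50 split).

    Hashing the user ID, rather than choosing at random on every visit,
    guarantees a returning visitor always sees the same page.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

In practice, A/B testing tools handle this assignment for you; the point is that the split should be stable per visitor, not re-randomized on each page load.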
What is an A/B Confidence Score?
The confidence score is a way of measuring the reliability of an estimate. If the confidence score is high, it means that the results are statistically significant, and you can be confident that these results are a consequence of the changes you made, and are not just a result of random chance.
| Confidence score | What it means |
| --- | --- |
| 95–100% | Your A/B test is statistically significant, congratulations! You should implement the winning variant. |
| 90–95% | Your A/B test is unlikely to be statistically significant. You could cautiously implement the winning variant, but it would be safer to try another A/B test first. |
| <90% | Your A/B test is not statistically significant. Do not implement the winning variant. |
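A confidence score like the one above can be approximated with a two-proportion z-test. Here is a minimal sketch (the `ab_confidence` name and the one-sided formulation are assumptions of this example, not necessarily the calculator's exact method):

```python
from math import erf, sqrt

def ab_confidence(visitors_a: int, conversions_a: int,
                  visitors_b: int, conversions_b: int) -> float:
    """Confidence (0-1) that variant B's conversion rate truly exceeds A's,
    using a one-sided two-proportion z-test with a pooled standard error."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Standard normal CDF expressed via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))
```

For example, 1,000 visitors per variant with 100 conversions on A and 130 on B gives roughly 98% confidence that B is genuinely better, which clears the 95% bar in the table above.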
Frequently Asked Questions
What does A/B testing stand for?
In an A/B test you are testing two variants against each other – variant A and variant B. A is the original (also known as the control), and B is a duplication of A, but with one small modification. It’s also possible to test more variants (C, D, E…etc.), but for statistically significant results, we recommend testing A against B.
How do you perform an A/B test?
To run an A/B test you will need an original or control page (variant A). Once variant A is ready, duplicate it, and then modify something small on the page to create variant B. This could be a small portion of text, a headline, a button, a call to action, or even the color or style of the font used. Show these two variants to similarly sized audiences for a period of no less than a week. From the data collected, you will be able to determine which variant converts best, and is therefore the winning variant.
How long should an A/B test run?
You need to run an A/B test for as long as it takes to get statistically significant results. The number of user sessions is key. If a page has 1 million visitors per day, the A/B test could show clear results in hours; a page that receives 2 visitors per day could take a year to show statistically significant results. User behavior also tends to differ throughout the week, so we recommend running A/B tests for at least one full week.
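To estimate how long a test must run, you can first work out how many visitors each variant needs for a given baseline rate and minimum detectable lift. A sketch using the standard two-proportion sample-size formula (the 95% confidence and 80% power defaults, and the `visitors_needed` name, are assumptions of this example):

```python
from math import ceil, sqrt

def visitors_needed(baseline_rate: float, min_lift: float) -> int:
    """Visitors required per variant to detect a relative lift over the
    baseline conversion rate at 95% confidence with 80% power."""
    z_alpha = 1.96  # z-score for two-sided 95% confidence
    z_beta = 0.84   # z-score for 80% power
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_lift)
    p_bar = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
    return ceil(n)
```

With a 10% baseline conversion rate and a 20% minimum lift, each variant needs roughly 3,800 visitors; at 100 visitors per variant per day, that is over a month of data. Smaller lifts require far more traffic, which is why low-traffic pages take so long to reach significance.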
What is statistical significance?
Statistical significance is the likelihood that one variant outperformed the other because of the changes you made, rather than by random chance. It means you can reliably conclude that the winning variant is stronger. You can use our statistical significance calculator at no hidden cost to plan out your A/B test.