The Chance to win indicator enables you to ascertain the odds of a strictly positive gain on a variation compared to the original version. It is expressed as a percentage and is available in the Visitor view and the Sessions view, for any selected goal. This measurement is visible in your campaign reporting.
Before making a decision after running a test, we recommend applying the interpretation rules described below:
The Chance to win is a statistical index which indicates the odds percentage of a strictly positive improvement for a test goal. This measurement is based on the number of conversions collected.
It enables us to determine the risk percentage (100% minus the Chance to win) and is currently available for the Visitor view and for the Sessions view. The reading mode is the same, but the algorithm differs depending on the selected view.
If the Chance to win is equal to or greater than 95%, this means the collected statistics are reliable and the variation can be implemented with what is considered to be a low risk (5% or less).
AB Tasty displays the Chance to win for all campaign variations. The reference variation can be changed in the top right-hand section of the page (the original version is selected by default). This version is used as a reference for all displayed calculations.
Thus, you can view the Chance to win of the variations compared to the selected reference version.
The Chance to win enables a fast result analysis for non-experts. The variation with the biggest improvement is shown in green, which simplifies the decision-making process. The Chance to win is displayed as a progress bar divided into five sections. The higher the Chance to win, the fuller the progress bar. When the Chance to win is higher than 95%, the progress bar turns green.
This index assists with the decision-making process, but we recommend reading the Chance to win in addition to the confidence intervals (from the Bayesian test), which may take positive or negative values.
The Chance to win calculation varies as a function of the selected view.
In the Visitor view, the calculation is based on the confidence intervals of the Bayesian test.
The index is found in the Chance to win tab of a goal, which is displayed by default. We recommend viewing the advanced statistics in addition to the Chance to win.
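To make the Visitor-view idea concrete, here is a minimal Python sketch of a Bayesian chance-to-win computation. This is an illustration, not AB Tasty's exact algorithm: it assumes Beta(1, 1) priors on each conversion rate and estimates, by Monte Carlo, the probability that the variation's rate beats the original's. All figures are made up.

```python
import random

def chance_to_win(conv_a, n_a, conv_b, n_b, draws=20_000, seed=42):
    """Illustrative Monte Carlo estimate of the probability that a
    variation's conversion rate beats the original's, assuming
    Beta(1, 1) priors (not AB Tasty's exact algorithm)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Draw one plausible conversion rate per version from its posterior.
        p_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        p_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if p_b > p_a:
            wins += 1
    return 100 * wins / draws

# Example: 200/1000 conversions for the original vs 260/1000 for the variation.
print(round(chance_to_win(200, 1000, 260, 1000), 1))
```

With a clear difference like this one, the estimate lands above the 95% threshold; with identical data for both versions it hovers around 50%, matching the neutral reading described below.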
For the Sessions view, the principle remains the same, but the calculation is based on the Mann-Whitney U test.
The Chance to win is the only indicator because the Bayesian test cannot be calculated in the Sessions view. The decision-making process is therefore based purely on this indicator.
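As a rough sketch of the Sessions-view approach, the snippet below computes the Mann-Whitney U statistic directly over all pairs of session values. The mapping from U to a displayed percentage is an assumption on our part: U divided by the number of pairs is the probability that a random variation session beats a random original session (the common-language effect size), which AB Tasty's internal computation may refine. The session data is made up.

```python
def mann_whitney_u(original, variation):
    """U statistic counted directly over all pairs: the variation
    scores 1 for each pair it wins and 0.5 for each tie."""
    u = 0.0
    for a in original:
        for b in variation:
            if b > a:
                u += 1
            elif b == a:
                u += 0.5
    return u

# Per-session values for a goal (e.g. pages viewed per session) -- made-up data.
original = [1, 2, 2, 3, 4, 1, 2]
variation = [2, 3, 4, 4, 5, 3, 2]

u = mann_whitney_u(original, variation)
# U / (n_a * n_b): probability that a random variation session beats
# a random original session (common-language effect size).
effect = u / (len(original) * len(variation))
print(f"U = {u}, P(variation session > original session) = {effect:.2f}")
```

In practice the exact Mann-Whitney test also uses rank sums and a significance threshold; this pairwise count is the simplest equivalent formulation of U.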
The Bonferroni correction is a method that takes into account the increased risk linked to comparing several variations at once.
In the case of an A/B test, if there are only two variations (the original and variation 1), it is estimated that the winning variation may be implemented if the Chance to win is equal to or higher than 95%. In other words, the risk incurred does not exceed 5%.
In the case of an A/B test with two or more variations (the original version, variation 1, variation 2 and variation 3, for instance), if one of the variations (let’s say variation 1) performs better than the others and you decide to implement it, this means you are favoring this variation over the original version, as well as over variation 2 and variation 3. In this case, the risk of loss is multiplied by 3 (5% multiplied by the number of “abandoned” variations).
A correction is therefore automatically applied to tests featuring two or more variations. The displayed Chance to win takes into account the risk related to abandoning the other variations. This enables the user to make an informed decision with full knowledge of the risks related to implementing a variation.
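The multiplication rule described above can be sketched in a few lines. This is an illustration of the principle only (risk multiplied by the number of abandoned versions); AB Tasty applies its own correction internally.

```python
def corrected_chance_to_win(raw_ctw_pct, n_versions):
    """Bonferroni-style correction sketch: the per-comparison risk
    (100 - Chance to win) is multiplied by the number of versions
    abandoned when implementing the winner (all versions except it).
    Illustrative only; not AB Tasty's exact internal formula."""
    abandoned = n_versions - 1           # original + the other variations
    raw_risk = 100.0 - raw_ctw_pct
    corrected_risk = min(raw_risk * abandoned, 100.0)
    return 100.0 - corrected_risk

# One variation vs the original: a single comparison, no change.
print(corrected_chance_to_win(95.0, 2))   # -> 95.0
# Original + 3 variations: a raw 98.0% becomes 100 - 3 * 2 = 94.0%.
print(corrected_chance_to_win(98.0, 4))   # -> 94.0
```

This shows why a variation can clear 95% in a two-version test yet fall below it once several variations compete.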
When the Bonferroni correction is applied, there may be inconsistencies between the Chance to win and the confidence interval displayed in the Confidence interval tab. This is due to the fact that the Bonferroni correction does not apply to the Bayesian test.
The Chance to win can take values between 0% and 100% and is rounded to the nearest hundredth.
- The closer the value is to 0%, the higher the odds of it underperforming compared to the original version and the higher the odds of having confidence intervals with negative values.
- At 50%, the test is considered neutral. There is as much chance of the variation underperforming compared to the original version as there is of it overperforming. The confidence intervals can take negative or positive values. The test is either neutral or does not have enough data.
- The closer the value is to 100%, the higher the odds of recording a gain compared to the original version. The confidence intervals are more likely to take on positive values.
If the Chance to win displays 0% or 100% in the reporting tool, these figures are rounded. A statistical probability can never equal exactly 0% or 100%; it is simply preferable to display 100% rather than 99.999999% to make reports easier to read.
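The reading grid above can be summed up in a small helper. The 95% threshold and the 45–55% neutral zone come from this article; the exact wording of each reading is our own.

```python
def read_chance_to_win(ctw_pct):
    """Reading grid from the article: around 50% the test is neutral
    (or lacks data); 95% or more is a low-risk winner."""
    ctw = round(ctw_pct, 2)              # displayed to the nearest hundredth
    if ctw >= 95:
        return f"{ctw}%: reliable gain, low risk ({round(100 - ctw, 2)}%)"
    if 45 <= ctw <= 55:
        return f"{ctw}%: neutral test or insufficient data"
    if ctw < 45:
        return f"{ctw}%: likely underperforming the original"
    return f"{ctw}%: leaning positive but below the 95% threshold"

print(read_chance_to_win(99.999999))     # rounds up to 100.0 for readability
print(read_chance_to_win(50.0))
```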
Case #1: High Chance to win
In this example, the chosen goal is the revisit rate in Visitor view. The A/B test includes three variations.
The conversion rate of variation 2 is 38.8%, compared to 20.34% for the original version. Therefore, the increase in conversion rate compared to the original equals 18.46 percentage points.
The Chance to win displays 98.23% for variation 2 (the Bonferroni correction is applied automatically because the test includes three variations). This means that variation 2 has a 98.23% chance of triggering a positive gain, and therefore of performing better than the original version. The odds of this variation performing worse than the original therefore equal 1.77%, which is a low risk.
Because the Chance to win is higher than 95%, variation 2 may be implemented without incurring a high risk.
However, in order to find out the gain interval and reduce the risk percentage even more, we would need to also analyze the advanced statistics based on the Bayesian test.
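The arithmetic of Case #1 can be checked in a couple of lines, using the rates quoted above. Note that the uplift here is an absolute difference in percentage points, and the risk is simply 100 minus the Chance to win.

```python
# Figures from Case #1: conversion rates in percent.
original_rate = 20.34
variation2_rate = 38.8

uplift_points = variation2_rate - original_rate
print(round(uplift_points, 2))           # -> 18.46

risk = 100 - 98.23                       # Chance to win of 98.23%
print(round(risk, 2))                    # -> 1.77
```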
Case #2: Neutral Chance to win
If the test displays a Chance to win around 50% (between 45% and 55%), this can be due to several factors:
- Either traffic is insufficient (in other words, there haven't been enough visits to the website and the visitor statistics do not enable us to establish reliable values): in this case, we recommend waiting until each variation has clocked 5,000 visitors and a minimum of 300 conversions.
- Or the test is neutral because the variations haven't shown an increase or a decrease compared to the original version: this means that the tested hypotheses have no effect on the conversion rate.
In this case, we recommend referring to the Bayesian test in the Confidence interval tab. This will provide you with the confidence interval values.
If the Bayesian test does not enable you to ascertain a clear gain, the decision will have to be made independently from the test, based on external factors (such as implementation cost, development time, etc.).
Case #3: Low Chance to win
In this example, the chosen goal is the CTA click rate in Visitor view. The A/B test is made up of a single variation.
The conversion rate of variation 1 is 14.76%, compared to 15.66% for the original version. Therefore, the conversion rate of variation 1 is 5.75% lower than the original version.
The Chance to win displays 34.6% for variation 1. This means that variation 1 has a 34.6% chance of triggering a positive gain, and therefore of performing better than the original version. The odds of this variation performing worse than the original therefore equal 65.4%, which is a very high risk.
Because the Chance to win is lower than 95%, variation 1 should not be implemented: the risk would be too high.
In this case, you can view the advanced statistics in order to make sure the confidence interval values are mostly negative.
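The figures in Case #3 can be verified the same way. Unlike Case #1, the quoted 5.75% drop is a relative change (the difference divided by the original rate), not a difference in percentage points.

```python
# Figures from Case #3: conversion rates of the original and variation 1.
original_rate = 15.66
variation_rate = 14.76

# Relative change, as reported ("5.75% lower than the original"):
relative_drop = (variation_rate - original_rate) / original_rate * 100
print(round(relative_drop, 2))           # -> -5.75

# Risk read from the Chance to win:
chance_to_win = 34.6
print(round(100 - chance_to_win, 1))     # -> 65.4
```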