# Metrics Guide

In this guide, you will find useful information about goals and how to measure goal performance: definitions, implementation, how to build your reports, and how to make decisions.

## 1. Generalities and best practices for your campaign

### Performance follow-up and campaign types

AB Tasty lets you create different types of campaigns, and depending on the type, you’ll need to follow different metrics to make the right decisions.

#### Test campaigns

These campaigns are based on a hypothesis: Is the change I have in mind better for my website (whatever the decision metric) than the current product version?

They need at least one primary goal - one main metric to follow - to make a decision.

This is the purpose of a test: To be able to base the final decision on specific and reliable data.
Secondary goals are made to double check that there are no critical collateral impacts.

📎 For testing activities, implementing events/trackers and following metrics based on them is mandatory.
📎 The more goals you select for your campaign, the more detailed information you will have, but the harder the decision-making will be.

#### Personalization campaigns

These campaigns are not based on testing a hypothesis. Their objective is to push what you think the best message is to the best audience segment.

For Personalization activities, implementing events/trackers and following metrics based on them is recommended to keep an eye on the general performance of your website.

Personalization initiatives might also be the result of a deeper analysis of an A/B Test campaign's result - filter features might highlight higher performances on your traffic attributes (device, loyalty, etc.).

#### Patch campaigns

These campaigns are designed to push a fix to your website in seconds. The objective is to deploy fast, to all the traffic, while waiting for a hardcoded, more permanent fix.

For Patch activities, performance follow-up is not relevant.

### Events, Trackers, Metrics, and Goals

In the AB Tasty platform, you’ll encounter different terms that need to be defined.

#### What is an event?

An event is a simple interaction between a visitor and your website. It can be:

• A click

• A hover

• A pageview

• A transaction

• A bounce

• A scroll

• The number of seconds on a page

• A form-filling

• A validation

• An element that arrives on the visible screen area (above the fold)
• etc.

Tracked events are the basis of every analytics tool and constitute the raw material used to build metrics.

There are two ways to count events:

• At a unique visitor level - unique count
This means a visitor who triggers a specific event twice or more is counted once. In this case, we record that the visitor performed the action, as opposed to the visitors who didn’t. It’s a boolean way of counting events.

• At a session level - multiple count
This means a visitor who triggers a specific event twice or more is counted N times. In this case, we can follow the frequency of an event.

These will be useful to know if you need to check your metrics at a unique visitor level (to track the percentage of visitors that have done a certain action vs. those who did nothing) or at a session level (to track the frequency of an event).
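The two counting modes can be illustrated with a small sketch (the event log below is hypothetical, for illustration only):

```python
# Counting the same raw events two ways: unique (visitor level)
# vs. multiple (session level). Hypothetical event log.
events = [
    {"visitor_id": "A", "event": "click"},
    {"visitor_id": "A", "event": "click"},  # visitor A clicks twice
    {"visitor_id": "B", "event": "click"},
]

# Unique count: a visitor is counted once, however many times they trigger the event
unique_count = len({e["visitor_id"] for e in events if e["event"] == "click"})

# Multiple count: every occurrence is counted
multiple_count = sum(1 for e in events if e["event"] == "click")

print(unique_count)    # 2
print(multiple_count)  # 3
```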

#### What is a metric?

A metric is based on an event and helps to analyze the number of collected events (or their mean/average) and compare it to a baseline, generally the total number of unique visitors or the total number of sessions.

📎 NB: You can run a campaign without setting goals, but this is strongly discouraged because you won’t be able to monitor the impact of your campaigns in AB Tasty’s reporting.

A metric is a calculation, specifically:

Metric = (number or average of events that occurred) / (number of visitors or sessions)

Metrics are useful to challenge a recorded number of events relative to the total number of occasions to perform them.

You’ll find:

• Click rates (Action Tracking)
• Pageviews
• Scroll rate
• Average time spent on page
• Transaction rate
• Bounce rate
• Average number of viewed pages
• etc.

#### What is a goal?

A goal is a metric (objective) that you will follow throughout your campaign, guiding you to make a decision at the end of your campaign.

The Primary Goal is the most important one:

When you create a campaign, you will have a hypothesis: “Changing this element will positively impact the visitor’s behavior by helping them to perform more of this specific action.”

e.g. Changing the color of a CTA from red to blue will be more calming, so visitors will click more.

The Primary Goal should be the metric based on the event that will be most impacted by your change. For example, any change on a specific block can have a direct effect on the click event on this element, or on the time spent on the page, depending on the nature of the change (add some digest content, highlight an action, etc.).

⭐️ Tip: Choose the metric that seems obvious in terms of cause and effect. A change on a button > Click Event > Click Rate

The Secondary Goals are optional:

Your final decision should not be based on a secondary goal, especially since the link between the change on the website and its effect on an indirect event is not proven.

For example, we can’t be certain that a modification on a CTA on the product page will have a direct impact on the transaction rate, as the event is too far removed from the modification (the event might be 3 or 4 pages away from the change, which is not close enough to be certain).

Still, it can be interesting to create and follow relevant secondary goals, including:

• Keeping track of the most important metrics for your business, such as the transaction rate if your business is an ecommerce website
• Deciding between two variations in a test campaign: if the two variations are the same in terms of Primary Goal results, the Secondary Goals can help to find the best option

## 2. Glossary

In this section, we have compiled all the definitions of the variables and goals AB Tasty uses and calculates, based on the events we can track on your website.

### General variable definitions

#### Sessions

Grouping of hits received from the same unique visitor.

The session is closed after 30 minutes of inactivity, or every night at 2am depending on the time zone.

AB Tasty variable: sessionID

#### Unique visitors

To be recognized as a unique visitor, AB Tasty assigns a unique visitor ID to each visitor who doesn’t already have the AB Tasty cookie. The visitor ID is then stored in the AB Tasty cookie.

Each time a visitor clears their cookies, changes browsers, or browses in incognito mode, they are not recognized and AB Tasty gives them a new unique visitor ID and a new AB Tasty cookie.

AB Tasty variable: visitorID

### Metrics based on the event “transaction”

#### The event transaction

It’s triggered each time a visitor performs a transaction on the website and is normally sent from your checkout/validation page.

AB Tasty receives the event transaction but also all the information added in the transaction tag:

• transactionRevenue
• affiliations
• paymentMethods
• currencies
• shippingMethods
• customVariableNames
• productCategories

All this information is useful to calculate metrics and also to filter your report on specific purchases.

This data is displayed twice in the reports, with different definitions and calculations:

• Total transactions: total number of transactions/single purchases performed during the campaign in each variation

• Unique transactions: total number of different buyers (unique visitors who have performed a transaction)

If the unique transactions and total transactions columns are not equal, some unique visitors have purchased more than once during the campaign, in both the original and the variation groups.

AB Tasty variable: transactions

#### Transaction rate

Transaction rate at a visitor level

Calculation

Number of unique visitors that performed at least one transaction during the campaign period

/

Total number of unique visitors during the campaign period

*100

AB Tasty variable: transactionUserConversionRate

Example

240 visitors have bought at least 1 time

2,400 unique visitors

= 10% conversion rate

Transaction rate at a session level

Calculation

Total number of transactions that occurred during the campaign period

/

Total number of sessions during the campaign period

*100

AB Tasty variable: transactionUserConversionRate

Example

480 transactions have occurred

2,400 sessions

= 20% conversion rate
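As a quick sketch, the two calculations above can be expressed as simple functions (illustrative helpers, using the numbers from the examples):

```python
def transaction_rate_visitor(buyers: int, unique_visitors: int) -> float:
    """Visitor level: unique visitors with at least one transaction / unique visitors * 100."""
    return buyers * 100 / unique_visitors

def transaction_rate_session(transactions: int, sessions: int) -> float:
    """Session level: total transactions / total sessions * 100."""
    return transactions * 100 / sessions

print(transaction_rate_visitor(240, 2400))  # 10.0 (visitor-level example)
print(transaction_rate_session(480, 2400))  # 20.0 (session-level example)
```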

Transaction rate growth

In a testing campaign, this metric compares two transaction rates (at a visitor level or a session level) and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).

The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this guide to learn how to change the baseline in a report.

Calculation

(Transaction rate variation - transaction rate original)

/

Transaction rate original

Example

(8% conversion rate variation - 4% conversion rate original) / 4% conversion rate original

= +100% growth
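The growth calculation above applies to any pair of rates (a minimal sketch, using the example’s numbers):

```python
def rate_growth(variation_rate: float, baseline_rate: float) -> float:
    """Relative growth of a variation's rate vs. the baseline (original), in %."""
    return (variation_rate - baseline_rate) / baseline_rate * 100

print(rate_growth(8.0, 4.0))  # 100.0 -> +100% growth
```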

#### Other transactional metrics

Average order value

Average order value calculated on all the recorded purchases in the variation.

Calculation

Total amount of revenue / number of transactions

Example

20 different transactions recorded, for a total amount of $10,000.

Average order value = $10,000 / 20 = $500

Average order value growth

In a testing campaign, this metric compares two average order values and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).

Calculation

Average order value variation - average order value baseline

Example

Average order value original = $154.20

Average order value variation = $153.90

Average order value growth = $153.90 - $154.20 = -$0.30

Average product quantity

Average quantity of products calculated on all the recorded purchases in the variation.

Calculation

Total number of items purchased in all transactions / number of transactions

Example

Total number of transactions = 153

Number of purchased items = 298

Average product quantity = 298 / 153 ≈ 1.95

Average product price

Average price of a purchased item per variation

Calculation

Total revenue / number of items purchased

Example

Total revenue: $10,000

Number of purchased items: 298

Average product price = $10,000 / 298 ≈ $33.56
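These transactional averages share the same shape: a total divided by a count. A small sketch with the examples’ numbers (illustrative helpers, not AB Tasty’s implementation):

```python
def average_order_value(total_revenue: float, transactions: int) -> float:
    """Revenue per transaction."""
    return total_revenue / transactions

def average_product_quantity(items: int, transactions: int) -> float:
    """Items per transaction."""
    return items / transactions

def average_product_price(total_revenue: float, items: int) -> float:
    """Revenue per purchased item."""
    return total_revenue / items

print(average_order_value(10_000, 20))                  # 500.0
print(round(average_product_quantity(298, 153), 2))     # 1.95
print(round(average_product_price(10_000, 298), 2))     # 33.56
```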

Revenue

Revenue generated by each variation (turnover = sum of all transaction values)

Please check which variable you use to capture the purchase amount when installing your transaction tag: this amount doesn’t have to include delivery fees or taxes.

Revenue uplift

The difference between the revenue of a variation compared to the revenue of the baseline (original)

Calculation

Revenue variation - revenue original

Revenue uplift (potential)

The hypothetical amount that could have been earned if 100% of the campaign’s traffic had been assigned to the variation (assuming the same behavior in terms of transaction rate and average order value).

Calculation - for an A/B Test with only one variation

(total unique visitors original + total unique visitors variation) * transaction rate variation * average order value variation

For A/B Tests with more than one variation, you need to add the unique visitors of the other variations in the first part of the calculation to get 100% of the traffic of the campaign.
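The potential uplift calculation can be sketched as follows (the visitor counts, rates, and order values below are hypothetical):

```python
def potential_revenue(total_campaign_visitors: int, transaction_rate: float,
                      average_order_value: float) -> float:
    """Hypothetical revenue if 100% of the campaign traffic had seen this variation.
    transaction_rate is a fraction (e.g. 0.05 for 5%)."""
    return total_campaign_visitors * transaction_rate * average_order_value

# Hypothetical A/B test with one variation:
visitors_original, visitors_variation = 10_000, 10_000
total_traffic = visitors_original + visitors_variation

uplift = (potential_revenue(total_traffic, 0.05, 120.0)    # variation behavior
          - potential_revenue(total_traffic, 0.04, 118.0)) # baseline behavior
print(round(uplift, 2))  # 25600.0
```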

Variable

Variation potentialRevenue - Baseline potentialRevenue

### Metrics based on the event “click”

#### The event “click”

This event is sent via our tag from every page where the tag is displayed and an “action tracking” has been set up.

It’s displayed as “unique” in the column “unique conversions” in the reports, conversions meaning “click done” when the view “visitors” is activated.

• In this case, it represents the number of clickers: the number of unique visitors who have clicked at least one time (if a unique visitor clicks three times, they still count as one unique conversion)

It’s displayed as “total” in the reports, conversions meaning “click done” when the view “sessions” is activated.

• In this case, it represents the total number of clicks (if a unique visitor clicks three times, the total number of clicks will be three)

#### Click rate

Visitor scope

The click rate represents the percentage of clickers (unique visitors who performed at least one click) on a certain element vs. the total traffic on the variation.

Calculation:

Number of unique conversions / number of unique visitors * 100

Example:

88 unique visitors have clicked

Total traffic is 880

Click rate = 88 / 880 * 100 = 10%

Session scope

The click rate represents the percentage of clicks (all the clicks) on a certain element vs. the total number of sessions on the variation.

Calculation:

Number of total conversions / number of sessions*100

Example:

100 clicks have been performed

Total number of sessions is 900

Click rate = 100 / 900 * 100 = 11.11%

#### Click rate growth

In a testing campaign, this metric compares two conversion rates (at a visitor level or at a session level) and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).

The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this guide to learn how to change the baseline in a report.
Variation 1 growth calculation (vs. original) = ((Variation 1 total conversions / Variation 1 sessions) / (original total conversions / original sessions)) - 1
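This ratio form is equivalent to the (variation rate - original rate) / original rate formula used for the other growth metrics, as a short sketch shows (illustrative numbers):

```python
# Click rate growth at a session level, with illustrative numbers.
orig_sessions, orig_conversions = 900, 90   # original click rate: 10%
var_sessions, var_conversions = 900, 108    # variation click rate: 12%

cr_orig = orig_conversions / orig_sessions
cr_var = var_conversions / var_sessions

# (variation rate - original rate) / original rate, in %
growth = (cr_var - cr_orig) / cr_orig * 100
print(round(growth, 1))  # 20.0 -> +20% growth
```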

### Metrics based on the event “pageviews”

#### The event “pageview”

This event is sent via our tag from every page where the tag is displayed.

• This event is universal and automatically sent: You don’t have to set up anything in your campaign to use a metric based on pageviews because these events are stored in the session history in our database.
• This event is sent by the tag: The higher the generic tag is placed on the page, the faster the event is sent to our database. It means that some events can be sent even if the visitor changes their mind and doesn’t wait for the page to finish loading (goes back, clicks elsewhere, or closes the tab)

It’s displayed as “unique” in the column “unique conversions” in the reports when the view “visitors” is activated.

• In this case, it represents the number of visitors to a certain page (depending on how your pageview has been set up), the number of unique visitors who have decided to land on the page at least one time (if a unique visitor lands on the page three times, the total number of pageviews will still be one)

It’s displayed as “total” in the reports when the view “sessions” is activated.

• In this case, it represents the total number of visits to a certain page (depending on how your pageview has been set up) (if a unique visitor lands on the page three times, the total number of pageviews will be three)

What’s inside a page view event setup?

It can be composed of:

• A single page (e.g. homepage, basket page), declared with a specific and unique URL
• Several pages (e.g. three specific product pages), declared with the three specific URLs
• A series of equivalent pages (e.g. all the product pages), declared by a specific rule (regex or other)
• etc.

#### The metric pageview conversion rate

Visitor scope

The pageview conversion rate represents the percentage of unique visitors who have visited a specific page at least one time, versus the total traffic on the variation.

Calculation:

Number of unique pageviews/number of unique visitors * 100

Example:

88 unique visitors have seen the page

Total traffic is 880

Conversion rate = 88 / 880 * 100 = 10%

Session scope

The pageview conversion rate represents the percentage of pageviews (total impressions) versus the total number of sessions on the variation.

Calculation:

Number of total pageviews/ number of sessions*100

Example:

100 impressions have been recorded

The total number of sessions is 900

Pageviews conversion rate = 100 / 900 * 100 = 11.11%

#### Pageviews conversion rate growth

In a testing campaign, this metric compares two pageview conversion rates (at a visitor level or a session-level) and helps to identify the best performer between the two variations (the variation is compared to the baseline, which is the original version).
Growth calculation: (pageview conversion rate variation - pageview conversion rate original) / pageview conversion rate original

The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this guide to learn how to change the baseline in a report.

### Metrics based on the event “scroll tracking”

#### The event “scroll tracking” or “percentage of scroll”

This event is sent if you have added the widget “scroll rate tracking” to your campaign and a visitor reaches the defined percentage of scrolling during their navigation.

This event is sent via our tag from every page where the campaign containing the widget is triggered (see Targeting: Where section). AB Tasty considers this type of event as action tracking (label in the report/same type of hit stored in the database).

It’s displayed as “unique” in the column “unique conversions” in the reports, conversions meaning “percentage of scroll reached” when the view “visitors” is activated.

• In this case, it represents the number of visitors who have reached the defined percentage of scroll during their sessions at least one time, on the targeted page(s) of the campaign (if a unique visitor has reached the scroll level three times, the total will still be one).

It’s displayed as “total” in the reports, conversions meaning “percentage of scroll reached” when the view “sessions” is activated.

• In this case, it represents the total number of events when the percentage of scroll has been reached, on the targeted page(s) of the campaign (if a unique visitor has reached the scroll level three times, the total will be three).

#### The metric percentage of scroll conversion rate

Visitor scope

In this case, the scroll conversion rate represents the percentage of scrollers (unique visitors who performed the level of scroll) on the targeted page(s) versus the total traffic on the variation.

Calculation:

Number of unique conversions/ number of unique visitors * 100

Example:

88 unique visitors have scrolled

Total traffic is 880

Percentage of scroll rate = 88 / 880 * 100 = 10%

Session scope

In this case, the scroll conversion rate represents the percentage of sessions where the percentage scroll has been reached on the targeted page(s) versus the total number of sessions on the variation.

Calculation:

Total number of events when the percentage of scroll has been reached / total number of sessions * 100

Example:

The scroll percentage has been reached 120 times

Total number of sessions is 1,000

Percentage of scroll rate = 120 / 1,000 * 100 = 12%

#### Percentage of scroll conversion rate growth

In a testing campaign, this metric compares two percentages of scroll conversion rates (at a visitor level or a session-level) and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).

The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this guide to learn how to change the baseline in a report.

### Metrics based on the event “dwell time tracking”

#### The event “dwell time tracking” or “time on page”

This event is sent if you have added the widget “dwell time tracking” to your campaign and a visitor reaches the defined number of seconds on a targeted page during their navigation.

This event is sent via our tag from every page where the campaign containing the widget is triggered (see Targeting: Where section). AB Tasty considers this type of event action tracking (label in the report or the same type of hit stored in the database).

It’s displayed as “unique” in the column “unique conversions” in the reports, conversions meaning “defined time on the page reached” when the view “visitors” is activated.

• In this case, it represents the number of visitors who have reached the defined time on the page during their sessions at least one time, on the targeted page(s) of the campaign (if a unique visitor reaches the time on the page three times, it will only count as one time).

It’s displayed as “total” in the reports, conversions meaning “defined time on the page reached” when the view “sessions” is activated.

• In this case, it represents the total number of events when the defined time on the page has been reached on the targeted page(s) of the campaign (if a unique visitor reaches the defined time on the page three times, it counts as three times).

#### The metric “time on page” conversion rate

Visitor scope

In this case, the time-on-page conversion rate represents the percentage of visitors (unique visitors who performed the time-on-page objective) on the targeted page(s) versus the total traffic on the variation.

Calculation:

Number of unique conversions / number of unique visitors * 100

Example:

88 unique visitors have reached the time on page objective

Total traffic is 880

Time on page conversion rate = 88 / 880 * 100 = 10%

Session scope

In this case, the time on page rate represents the percentage of sessions where the time on page has been reached on the targeted page(s) versus the total number of sessions on the variation.

Calculation:

Total number of events when the time on page has been reached / total number of sessions*100

Example:

The time on page has been reached 120 times

Total number of sessions is 1,000

Time on page conversion rate = 120 / 1,000 * 100 = 12%

#### Percentage of time on page rate growth

In a testing campaign, this metric compares two “time on page reached” conversion rates (at a visitor level or a session level) and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).

### Metric based on the event “visible element tracking”

#### The event “visible element tracking”

This event is sent if you have added the widget “visible element tracking” to your campaign and a visitor has seen (in the visible part of their screen) the defined element during their navigation.

This event is sent via our tag from every page where the campaign containing the widget is triggered (see Targeting: Where section). AB Tasty considers this type of event action tracking (label in the report or the same type of hit stored in the database).

It’s displayed as “unique” in the column “unique conversions” in the reports, conversions meaning, “the visitor has viewed the defined element” when the view “visitors” is activated.

• In this case, it represents the number of visitors who have seen the defined element on the page during their sessions at least one time, on the targeted page(s) of the campaign (if a unique visitor has viewed the element on the page three times, it will only count as one view).

It’s displayed as “total” in the reports, conversions meaning “defined element on the page seen” when the view “sessions” is activated.

• In this case, it represents the total number of events when the defined element on the page has been seen, on the targeted page(s) of the campaign (if a unique visitor has viewed the defined element on the page three times, it will count as three views).

#### The metric visible element conversion rate

Visitor scope

In this case, the visible element conversion rate represents the percentage of viewers (unique visitors who have seen the element on the screen) on the targeted page(s) versus the total traffic on the variation.

Calculation:

Number of unique conversions/ number of unique visitors * 100

Example:

88 unique visitors have seen the element

Total traffic is 880

Visible Element conversion rate = 88 / 880 * 100 = 10%

Session scope

In this case, the visible element conversion rate represents the percentage of sessions where the element has been seen on the targeted page(s) versus the total number of sessions on the variation.

Calculation:

Total number of events when the element has been seen / total number of sessions * 100

Example:

The element has been seen 120 times

Total number of sessions is 1,000

Visible element conversion rate = 120 / 1,000 * 100 = 12%

#### Visible element conversion rate growth

In a testing campaign, this metric compares two visible element conversion rates (at a visitor level or a session level) and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).

The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this guide to learn how to change the baseline in a report.

### Other navigation events & metrics

#### Bounce

The event bounce

A “bounce” is recognized and sent via our tag each time a visitor lands on a targeted page and decides to leave the website immediately after having seen the tested page.

A visitor can only bounce once.

This event is sent via our tag from every targeted page(s).

The bounce rate

The bounce rate represents the percentage of unique visitors who have bounced in a campaign.

Calculation:

Bounce rate = number of visitors who have bounced / total number of visitors * 100

Example:

100 visitors have bounced on the targeted page(s)

Total number of unique visitors in the test variation is 1,000

Bounce rate = 100 / 1,000 * 100 = 10%

#### Number of viewed pages

Average of viewed pages

The average number of viewed pages per visit is the average quantity of pages that have been viewed per session on the entire perimeter where the AB Tasty tag is placed, starting from when the visitor is assigned to a campaign.

Calculation:

Total number of viewed pages after having been assigned to a test/number of sessions

Example:

Visitor A has visited the website three times during the campaign: session #1 for five viewed pages, session #2 for six viewed pages, and session #3 for seven viewed pages.

Visitor B has visited the website two times during the campaign: session #1 for 10 viewed pages, session #2 for 12 viewed pages

Average viewed pages = (5+6+7+10+12) / 5 = 8
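The example above can be reproduced with a short sketch (the sessions of visitors A and B come from the example):

```python
# Average viewed pages per session, using visitors A and B from the example.
pages_per_session = [5, 6, 7,   # visitor A: 3 sessions
                     10, 12]    # visitor B: 2 sessions

average_viewed_pages = sum(pages_per_session) / len(pages_per_session)
print(average_viewed_pages)  # 40 / 5 = 8.0
```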

Number of viewed pages growth rate

In a testing campaign, this metric compares two amounts of average viewed pages and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).

The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this guide to learn how to change the baseline in a report.

#### Revisit

Revisit event

The number of revisits represents the number of unique visitors who have triggered the campaign at least two times in at least two distinct sessions.

Revisit rate

The revisit rate is the percentage of unique visitors who have triggered the campaign at least two times in at least two unique sessions.

Calculation:

Revisit rate = number of unique visitors who revisited / total number of visitors * 100

Example:

100 unique visitors have visited the targeted page(s)

80 unique visitors have visited the targeted page(s) only once

20 unique visitors have visited the targeted page(s) at least two times

Revisit rate = 20 / 100 * 100 = 20%

Revisit rate growth

In a testing campaign, this metric compares two percentages of revisit rates to help identify the best performer between two variations (the variation is compared to the baseline, which is the original version).

The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this guide to learn how to change the baseline in a report.

## 3. Readiness

Readiness is an indicator that lets you know when your campaign or goal has reached statistical reliability and is therefore ready to be analyzed.

Readiness is available in the reporting of both your test and personalization campaigns.

The readiness test is also displayed in the campaign dashboards to help you identify more easily which campaigns you can pause and analyze. In this case, it is based on the readiness of the Primary goal you’ve chosen for your campaign.

#### Functioning

• Each goal you have selected for your campaign (goal readiness): the readiness is based on the three following metrics:

 • Campaign duration (days): the campaign must be live for at least 14 days.
 • Traffic volumetry (visitors): at least 5,000 unique visitors must see each variation.
 • Number of conversions: at least 300 unique conversions must take place on each variation on the primary goal.

• Each metric (days, visitors, and conversions) has a progress bar that indicates how close you are to the target. When this target is reached, the bar turns green.
We recommend waiting for each goal to be ready before analyzing its results.
• The whole campaign (campaign readiness): the readiness is based on the campaign’s primary goal performance. When the primary goal is ready, meaning that it has reached the required number of days, conversions, and visitors, the campaign is considered ready as well and reliable. We recommend waiting for your campaign to be ready before analyzing its results.
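The readiness criteria above can be sketched as a simple check (an illustrative helper; the thresholds of 14 days, 5,000 visitors, and 300 conversions come from the text):

```python
def goal_is_ready(days_live: int, visitors_per_variation: list[int],
                  conversions_per_variation: list[int]) -> bool:
    """True when the campaign duration, traffic, and conversion thresholds are all met."""
    return (days_live >= 14
            and all(v >= 5_000 for v in visitors_per_variation)
            and all(c >= 300 for c in conversions_per_variation))

print(goal_is_ready(20, [6_000, 5_500], [350, 410]))  # True
print(goal_is_ready(20, [6_000, 4_000], [350, 410]))  # False (one variation lacks traffic)
```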

Good to know 💡

For browsing metrics (Revisit rate, Pages per session, and Bounce rate), the conversion metric is not taken into account in readiness calculation, meaning that goal readiness is based on days and the number of unique visitors only.

There are four readiness statuses, which let you see at a glance whether or not the campaign is ready for analysis.

Good to know 💡

If you change the traffic allocation of your campaign after it has been launched, campaign and goal readiness will automatically adapt.

Variations that have less than 1% of the traffic won’t be taken into account in the readiness calculation.

#### Filtered data

When filtering your reporting data, the readiness of the filtered data is displayed in addition to the goal readiness, to let you know if the reporting data are also ready to be analyzed.

The calculation is the same and based on campaign duration, traffic volumetry, and number of conversions.

Once these criteria have been reached, the readiness of the filtered data turns blue, meaning that filtered data is ready to be analyzed. You will also see a blue-striped banner at the left of the Unique visitors card to inform you that filters have been applied to the reporting.

Good to know 💡

Readiness of the filtered data may take more time to be reached, as the number of visitors meeting the filter criteria is lower. For example, if you filter your reporting data on mobile visitors, readiness won’t be reached until at least 5,000 unique mobile visitors have seen each variation.

### Statistical indicators

Statistical indicators characterize the observed eligible metrics for each variation, as well as the differences between variations for the same metrics. They allow you to make informed decisions for the future based on a proven Bayesian statistical tool.

When you observe a raw growth of X%, the only certainty is that this observation has taken place in the past, in a context (time of year, current events, specific visitors, etc.) that won’t happen in the future in the exact same way again.

By using statistical indicators to reframe these metrics and associated growth, you get a much clearer picture of the risk you are taking when modifying a page after an A/B test.

Statistical indicators are displayed with the following metrics:

• All “action tracking” growth metrics (click rate, scroll tracking, dwell time tracking, visible element tracking)
• Pageviews growth metrics
• Transaction growth metrics (except average product quantity, price, and revenue)
• Bounce rate growth
• Revisit rate growth

Statistical indicators are not displayed with the following metrics:

• Transaction growth metrics for average product quantity, price, and revenue
• Number of viewed pages growth

Lastly, statistical indicators are only displayed on visitor metrics and not on session metrics. The former are generally the focus of optimizations; as a consequence, our statistical tool was designed with them in mind and is not compatible with session data.

These indicators are displayed on all variations, except on the one used as the baseline. See this guide to learn how to change the baseline in a report.

#### Confidence interval based on Bayesian tests

The confidence interval indicator is based on the Bayesian test. The Bayesian statistical tool calculates the confidence interval of a gain (or growth), as well as its median value. Together, they enable you to understand the extent of the potential risk related to putting a variation into production following a test.

Where to find the confidence interval

How to read and interpret the confidence interval

Our Bayesian test stems from the calculation method developed by mathematician Thomas Bayes. It is based on known events, such as the number of conversions on an objective in relation to the number of visitors who had the opportunity to reach it, and provides, as we have seen above, a confidence interval on the gain as well as its median value. Bayesian tests enable sound decision-making thanks to nuanced indicators that provide a more complete picture of the expected outcome than a single metric would.

In addition to the raw growth, we provide a 95% confidence interval.

“95%” simply means that we are 95% confident that the true value of the gain is situated between the two values at each end of the interval.

👉 Why not 100%?

In simple terms, it would lead to a confidence interval of infinite width, as there will always be a risk, however minimal.

“95%” is a common statistical compromise between precision and the timeliness of the result.

The remaining 5% is the error, split equally below the lower bound and above the upper bound of the interval. Please note that, of those 5%, only the 2.5% below the lower bound would lead to a worse outcome than expected. This is the actual business risk.

👉 As seen previously, the confidence interval is composed of three values: the lower and higher bounds of the interval, and the median.

Median growth vs Average growth:

These values can often be very close to one another, while not matching exactly. This is normal and shouldn’t be cause for concern.

In the following example, you can see that the variation has a better transaction rate than the original: 2.46% vs. 2.3%. The average growth is about +6.89%.

Zooming in on confidence interval visualization, we see the following indicators:

• Median growth: 6.88%
• Lower-bound growth: 0.16%
• Higher-bound growth: 14.06%

An important note is that every value in the interval has a different likelihood (or chance) to actually be the real-world growth if the variation were to be put in production:

• The median value has the highest chance
• The lower-bound and higher-bound values have a low chance

👉 Summarizing:

• Getting a value between 0.16% and 14.06% in the future has a 95% chance of happening
• Getting a value lower than 0.16% has a 2.5% chance of happening
• Getting a value higher than 14.06% has a 2.5% chance of happening
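To make the mechanics concrete, here is a minimal sketch of how such a 95% interval on the relative gain can be computed by sampling Beta posteriors for each conversion rate. This is a common textbook approach with a uniform prior, not AB Tasty's exact model; the function name and the input figures are hypothetical.

```python
import random

# Illustrative Bayesian interval on the relative gain of B over A,
# obtained by Monte Carlo sampling of Beta posteriors (uniform prior).
random.seed(42)

def gain_interval(conv_a, n_a, conv_b, n_b, draws=20000):
    """Return (lower bound, median, upper bound) of the relative gain."""
    gains = []
    for _ in range(draws):
        # Beta(1 + conversions, 1 + non-conversions) posterior for each rate
        ra = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rb = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        gains.append((rb - ra) / ra)
    gains.sort()
    return (gains[int(0.025 * draws)],   # 2.5% of draws fall below this
            gains[int(0.5 * draws)],     # median growth
            gains[int(0.975 * draws)])   # 2.5% of draws fall above this

# Hypothetical data: 230/10,000 conversions vs 246/10,000
low, med, high = gain_interval(230, 10000, 246, 10000)
```

With this sample size the interval straddles 0%, which matches the "uncertain" case described above: the observed uplift is real in the data, but more traffic would be needed to rule out a loss.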

👉 Going further, this means that:

• If the lower-bound value is above 0%: your chances to win in the future are maximized, and the associated risk is low;
• If the higher-bound value is under 0%: your chances to win in the future are minimized, and the associated risk is high;
• If the lower-bound value is under 0% and the higher-bound value is above 0%: the outcome is uncertain. You will have to judge whether the impact of a potential future loss is worth the risk, whether waiting for more data might remove the uncertainty, or whether another metric in the campaign report can be used to make a decision.

Good to know 💡
The smaller the interval, the lower the level of uncertainty: at the beginning of your campaign, the intervals will probably be wide. Over time, they will tighten until they stabilize.

Heads up ⚡️ AB Tasty provides these Bayesian tests and statistical metrics to help you make an informed decision, but can’t be held responsible for a bad decision. The risk is never zero: even if the chance to lose is very low, it doesn’t mean it can’t happen at all.

#### Chance to win

This metric is another angle on the confidence interval. It answers the question: “What are my chances of getting a strictly positive growth in the future with the variation I’m looking at?” (or a strictly negative growth for the bounce rate, which should be as low as possible).

The chance to win enables a fast result analysis for non-experts. The variation with the biggest improvement is shown in green, which simplifies the decision-making process.

The chance to win indicator enables you to ascertain the odds of a strictly positive gain on a variation compared to the original version. It is expressed as a percentage.
When the chance to win is higher than 95%, the progress bar turns green.

Like the odds displayed in betting, it focuses on the positive part of the confidence interval.

Like the confidence interval, the chance to win is based on the Bayesian test. See the section about Bayesian tests in the confidence interval section above.

This metric is always displayed on all variations except on the one which is used as the baseline. See this guide to learn how to change the baseline in a report.

Where to find the chance to win

• In the “Statistics” tab for non-transactional metrics

• In the detailed view of transactional metrics

How to read and interpret the chance to win

This index assists with the decision-making process, but we recommend reading the chance to win in addition to the confidence intervals, which may display positive or negative values.

The chance to win can take values between 0% and 100% and is rounded to the nearest hundredth.

• If the chance to win is equal to or greater than 95%, this means the collected statistics are reliable and the variation can be implemented with what is considered to be low risk (5% or less).
• If the chance to win is equal to or lower than 5%, this means the collected statistics are reliable and the variation shouldn’t be implemented: the risk of underperforming is considered high (95% or more).
• If the chance to win is close to 50%, it means that the results seem “neutral” - AB Tasty can’t provide a characteristic trend to let you make a decision with the collected data.
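Following the same posterior-sampling idea as for the confidence interval, the chance to win can be estimated as the share of draws in which the variation beats the baseline. This is an illustrative sketch with a uniform prior, not AB Tasty's actual implementation; the function name and figures are hypothetical.

```python
import random

# Estimate the chance to win as the fraction of posterior draws where
# the variation's conversion rate exceeds the baseline's.
random.seed(7)

def chance_to_win(conv_a, n_a, conv_b, n_b, draws=20000):
    wins = 0
    for _ in range(draws):
        ra = random.betavariate(1 + conv_a, 1 + n_a - conv_a)  # baseline
        rb = random.betavariate(1 + conv_b, 1 + n_b - conv_b)  # variation
        wins += rb > ra
    return 100 * wins / draws  # percentage

# A clearly better variation scores near 100%;
# near-identical rates land around 50% ("neutral").
```

This also illustrates why a result near 50% is inconclusive: the posterior draws for the two rates overlap almost completely, so the data cannot separate them.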

👉 What does this mean?

• The closer the value is to 0%, the higher the odds of it underperforming compared to the original version, and the higher the odds of having confidence intervals with negative values.
• At 50%, the test is considered “neutral”, meaning that the difference is below what can be measured with the available data. There is as much chance of the variation underperforming compared to the original version as there is of it outperforming the original version. The confidence intervals can take negative or positive values. The test is either neutral or does not have enough data.
• The closer the value is to 100%, the higher the odds of recording a gain compared to the original version. The confidence intervals are more likely to take on positive values.

Good to know 💡

If the chance to win displays 0% or 100% in the reporting tool, these figures are rounded (up or down). A statistical probability can never equal exactly 100% or 0%. It is, therefore, preferable to display 100% rather than 99.999999% to facilitate report reading for users.

Bonferroni correction

The Bonferroni correction is a method that involves taking into account the risk linked to the presence of several comparisons/variations.

In the case of an A/B Test, if there are only two variations (the original and Variation 1), it is estimated that the winning variation may be implemented if the chance to win is equal to or higher than 95%. In other words, the risk incurred does not exceed 5%.

In the case of an A/B test with two or more variations (the original version, Variation 1, Variation 2, and Variation 3, for instance), if one of the variations (let’s say Variation 1) performs better than the others and you decide to implement it, this means you are favoring this variation over the original version, as well as over Variation 2 and Variation 3. In this case, the risk of loss is multiplied by three (5% multiplied by the number of “abandoned” variations).

A correction is therefore automatically applied to tests featuring two or more variations. The displayed chance to win takes the risk related to abandoning the other variations into account, which enables you to make an informed decision with full knowledge of the risks related to implementing a variation.
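The arithmetic described above ("the risk of loss is multiplied by the number of abandoned variations") can be sketched as follows. The function names are hypothetical, and AB Tasty applies its own correction internally; this only illustrates the Bonferroni idea.

```python
# Bonferroni-style bound: with several comparisons, the overall risk of a
# wrong choice is at most the per-comparison risk times the number of
# abandoned variations (capped at 100%).

def corrected_risk(raw_risk, n_abandoned_variations):
    """Upper bound on the overall risk of implementing the wrong variation."""
    return min(1.0, raw_risk * n_abandoned_variations)

def required_chance_to_win(overall_risk=0.05, n_abandoned_variations=1):
    """Stricter per-comparison threshold that keeps the overall risk at 5%."""
    return 1 - overall_risk / n_abandoned_variations

# With 3 abandoned variations, a raw 5% risk grows to as much as 15%
# overall, so each comparison must clear roughly a 98.3% threshold instead.
```

This is why, in a multi-variation test, the displayed chance to win is more conservative than the raw pairwise figure.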

Good to know 💡 When the Bonferroni correction is applied, there may be inconsistencies between the chance to win and the confidence interval displayed in the confidence interval tab. This is because the Bonferroni correction does not apply to confidence intervals.

Examples

✅ Case #1: High chance to win

In this example, the chosen goal is the revisit rate in the visitor view. The A/B Test includes three variations.

The conversion rate of Variation 2 is 38.8%, compared to 20.34% for the original version. Therefore, the increase in conversion rate compared to the original equals 18.46 percentage points.

The chance to win displays 98.23% for Variation 2 (the Bonferroni correction is applied automatically because the test includes three variations). This means that Variation 2 has a 98.23% chance of triggering a positive gain, and therefore of performing better than the original version. The chance of this variation performing worse than the original equals 1.77%, which is a low risk.

Because the chance to win is higher than 95%, Variation 2 may be implemented without incurring a high risk.

However, to find out the gain interval and reduce the risk percentage even more, we would need to also analyze the advanced statistics based on the Bayesian test.

✅ Case #2: Neutral chance to win

If the test displays a chance to win around 50% (between 45% and 55%), this can be due to several factors:

• Either traffic is insufficient (in other words, there haven't been enough visits to the website and the visitor statistics do not enable us to establish reliable values). In this case, we recommend waiting until each variation has clocked 5,000 visitors and a minimum of 500 conversions.
• Or the test is neutral because the variations haven't shown an increase or a decrease compared to the original version: this means that the tested hypotheses have no effect on the conversion rate. In this case, we recommend referring to the confidence interval tab, which will provide you with the confidence interval values.

If the confidence interval does not enable you to ascertain a clear gain, the decision will have to be made independently from the test, based on external factors (such as implementation cost, development time, etc.).

✅ Case #3: Low chance to win

In this example, the chosen goal is the CTA click rate in visitor view. The A/B Test is made up of a single variation.

The conversion rate of Variation 1 is 14.76%, compared to 15.66% for the original version. Therefore, the conversion rate of Variation 1 is 5.75% lower than the original version.

The chance to win displays 34.6% for Variation 1. This means that Variation 1 has a 34.6% chance of triggering a positive gain, and therefore of performing better than the original version. The chance of this variation performing worse than the original equals 65.4%, which is a very high risk.

Because the chance to win is lower than 95%, Variation 1 should not be implemented: the risk would be too high.

In this case, you can view the advanced statistics to make sure the confidence interval values are mostly negative.
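For reference, the Case #3 arithmetic can be reproduced in a few lines (rates taken from the example above):

```python
# Relative growth of the variation's conversion rate over the original's,
# using the Case #3 figures.
original_rate = 0.1566   # 15.66%
variation_rate = 0.1476  # 14.76%

relative_growth = (variation_rate - original_rate) / original_rate
print(round(100 * relative_growth, 2))  # -5.75, i.e. 5.75% lower
```

Note that this is a relative change, whereas Case #1 above reports an absolute difference in percentage points.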

#### User session

An AB Tasty session begins when a visitor first accesses a page on the website and a cookie named ABTastySession does not exist. To determine if a current session is active, the code checks for the presence of this cookie. If the cookie exists, a current session is active. If the cookie is not present, a new session is initiated.

A session ends when a visitor remains inactive on the website for 30 minutes or more. This inactivity is tracked regardless of whether the website is open in a tab or not. Once the session ends, the ABTastySession cookie is removed, and all data stored in the cookie is lost and will not be reused in the browser.

For example:

• A visitor comes to the website, visits 2 pages, and closes their browser. 30 minutes later, the session will end.
• A visitor comes to the website, visits 2 pages, and closes their tab. 30 minutes later, the session will end.
• A visitor comes to the website, visits 2 pages, and stays on the second page for more than 30 minutes. The session will end.
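The session lifecycle just described can be sketched as follows. This models the logic in Python purely for illustration; the real check is performed in the browser by the AB Tasty tag against the ABTastySession cookie, and the class and function names here are hypothetical.

```python
from datetime import datetime, timedelta

# Sessions expire after 30 minutes of inactivity, whether or not the
# tab is still open (mirroring the behavior described above).
SESSION_TIMEOUT = timedelta(minutes=30)

class Session:
    def __init__(self, now):
        self.started_at = now      # first page of the session ("lp" analogue)
        self.last_activity = now

    def is_active(self, now):
        return now - self.last_activity < SESSION_TIMEOUT

def on_page_view(session, now):
    """Start a new session if none is active, otherwise extend it."""
    if session is None or not session.is_active(now):
        return Session(now)        # cookie absent or expired: new session
    session.last_activity = now    # cookie present: same session continues
    return session
```

For instance, two page views 10 minutes apart belong to one session, while a third page view 40 minutes later starts a new one, discarding the old session's data just as the expired cookie is removed.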

The ABTastySession cookie contains useful information to assist the tag in functioning. The cookie stores:

• mrasn data: data filled by the tag during a redirection campaign when the "Mask redirect parameters" feature is activated.
• lp (landing page) data: the URL of the first page of the website viewed by the visitor during their current session.
• sen (session event number) data: the number of ariane hits sent since the beginning of the session.
• Referrer data: the value of the document.referrer variable on the first page viewed by the visitor during their current session. This data is only available when the targeting criteria "source" or "source type" is used in an active campaign.

The cookie is only added to the browser if the tag is permitted to do so based on the "restrict cookie deposit" feature. The cookie cannot be moved to another type of storage, unlike the ABTasty cookie.
