A/B Tests
A/B tests let you compare two versions of an offer — for example, a 10% discount versus a 20% discount — and see which one converts better before scaling it. Instead of guessing what works, you make decisions based on real data from your store.
This guide walks you through how to create your first test, step by step.
To create an A/B test you need at least two offers already set up in the Bundles and Quantity Offers section. If you haven't created them yet, start there before continuing with this guide.
Step 1: Go to the A/B Tests section
Inside the Releasit Bundle app, click A/B Tests in the left sidebar. If you haven't created any tests yet, you'll see an empty screen with the message "No A/B tests yet".
Click Create A/B Test to get started.

Step 2: Name your test and set an end date
Start by filling in the basic details:
- Test name — Use a descriptive name that lets you identify the test easily, especially if you end up running several at the same time. For example: "Christmas Bundle – 10% vs 20%" or "2x1 vs 3x1 Quantity Offer – January".
- End date (Until) — Set when you want the test to stop. We recommend letting it run for at least 2 weeks to collect enough reliable data.

Step 3: Select the trigger product
In the Product and variants section, click Select product and choose the product the test will run on.
When a customer views this product in your store, the app shows them one of the two offers you're comparing.


Step 4: Choose your two variants
This is where you define exactly which two offers you want to compare:

- Variant A is your control — the offer that's already running in your store.
- Variant B is the one you want to test against it.
For example: Variant A = bundle with 10% off, Variant B = bundle with 20% off, same product, same design. The only difference between the two should be the variable you're testing. If you change more than one thing at a time, you won't be able to tell what actually caused the result.
Step 5: Set the traffic split and create the test
The traffic split determines what percentage of visitors sees each variant. You can choose from several preset options: 50/50, 60/40, 70/30, 80/20, and 90/10.

Which one should you pick?
For most cases, start with 50/50. Both versions receive the same volume of traffic, which makes the results easier to compare.
If your store gets high traffic and you want to limit exposure to an untested offer, use 80/20: 80% of visitors continue seeing your current offer, and only 20% see the new variant.
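To make the split percentages concrete, here is a minimal sketch of how weighted assignment works in principle. This is an illustration of the concept only, not the app's actual implementation; the function name and percentages are hypothetical.

```python
import random

def assign_variant(percent_a):
    """Illustrative sketch: send percent_a% of visitors to variant A,
    the rest to variant B. Not the app's real assignment logic."""
    return "A" if random.uniform(0, 100) < percent_a else "B"

# With an 80/20 split, roughly 80% of visitors land on variant A.
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[assign_variant(80)] += 1
```

Over many visitors the observed split converges on the configured percentages, which is why a 50/50 split gives both variants comparable sample sizes fastest.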
Step 6: Read the results in Analytics
After a few days, you can check the results in the Analytics section. The key metrics to look at are the conversion rate for each variant, revenue generated, and number of orders.
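Conversion rate is simply orders divided by visitors, expressed as a percentage. A small sketch with made-up numbers (the figures below are hypothetical, just to show the arithmetic):

```python
def conversion_rate(orders, visitors):
    """Conversion rate as a percentage: orders / visitors * 100."""
    return orders / visitors * 100 if visitors else 0.0

# Hypothetical results for each variant over the same period:
rate_a = conversion_rate(orders=45, visitors=1500)  # about 3.0%
rate_b = conversion_rate(orders=72, visitors=1500)  # about 4.8%
```

Comparing rates rather than raw order counts matters when the traffic split is uneven (e.g. 80/20), since the variant with more traffic will almost always have more orders in absolute terms.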

When should you make a decision?
Don't close the test too early. As a general rule, wait until you have at least 100 conversions per variant before declaring a winner. With less data, the result may be due to chance rather than a real difference between the two offers.
Once you have a clear winner, deactivate the losing variant and keep running only the offer that converted better.
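The "100 conversions per variant" rule can be expressed as a simple check. A sketch under that assumption (the function name and sample numbers are hypothetical):

```python
def ready_to_decide(conversions_a, conversions_b, minimum=100):
    """Only declare a winner once BOTH variants have reached the
    minimum number of conversions; below that, differences may
    just be noise."""
    return min(conversions_a, conversions_b) >= minimum

ready_to_decide(120, 95)   # False: variant B is still under 100
ready_to_decide(120, 104)  # True: both variants have enough data
```

Note that both variants need to clear the threshold, not just the leading one; with an uneven split like 80/20, the smaller variant will take noticeably longer to get there.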
Frequently asked questions
What happens if a variant's product goes out of stock?
If one variant runs out of stock during the test, that variant stops showing automatically. Traffic is redirected to the other one.
Can I run more than one test at the same time?
Yes. You can run multiple tests on different products simultaneously.
How long should I let a test run?
At least 2 weeks, and until each variant has reached 100 conversions, whichever takes longer. Ending the test before both conditions are met risks picking a winner based on chance.
Updated on: 27/04/2026
