The ABC's of A/B Testing
As a product manager, you sometimes have to make decisions with limited data. The most accurate way to evaluate your conversion funnel is to get data directly from your customers' behavior. This article introduces a tool that will help you make better decisions with customer data.
A/B Testing (or split testing) is an approach to decision making that tests two or more options on a limited audience while collecting data to prove which produces better results. This data drives the product manager’s decision to pick the best approach for broader application.
What is A/B testing?
In A/B testing, two or more versions of a variable (webpage, page element, etc.) are presented to different visitor segments during the same time period. An A/B test is often called an experiment. Data collected during the experiment determines which version results in the greatest impact and drives the most positive business metrics. Depending on visitor volume, experiments may run for days or weeks, and always include:
- The control variation ("A"): the default experience, with no changes. This variation is sometimes called the "champion."
- At least one variation ("B") of the control experience, presented to visitors not in the control segment. An alternate variation is sometimes called the "challenger."
During an experiment, each visitor segment is assigned to exactly one variation, and there is only ever one control variation.
Testing Process
The heritage of A/B testing is scientific exploration. A/B testing on a website follows a similar sequence:
- Research and formulate a hypothesis
- Define and build an experiment to test the hypothesis
- Run the experiment and collect data
- Analyze the data results and draw a conclusion
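The "analyze and conclude" step often comes down to comparing conversion rates between the control and a challenger. As a minimal sketch (not tied to any particular testing platform), a two-proportion z-test answers whether an observed difference is likely real or just noise; the counts below are hypothetical:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of control (A) and challenger (B).

    Returns the z statistic and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value
    return z, p_value

# Hypothetical example: 200/10,000 control conversions vs 260/10,000 challenger
z, p = two_proportion_z_test(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (commonly below 0.05) suggests the challenger's difference is statistically significant; otherwise, the honest conclusion is that the experiment was inconclusive and should run longer or test a bolder change.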
Testing Example
Research: Ecommerce Platform XYZ presents visitors with a catalog view of many items on a page. Sometimes an item is out of stock. This is indicated on the individual item card while in the catalog view.
Hypothesis: If a call to action (CTA) button is included on the out-of-stock items card, customers will be drawn to click the action and view alternate items, ultimately resulting in increased sales.
Experiment: Create three variations of CTA buttons displayed on out-of-stock item cards. Randomly assign visitors to one of four test segments:
Segment 1 - Control Experience: no CTA on out-of-stock items
Segment 2 - CTA = “View Related Products”
Segment 3 - CTA = “In Stock Related Products”
Segment 4 - CTA = “Available Related Products”
Results: After four weeks of the experiment, approximately 80,000 unique visitors had been presented with one of the four variations on out-of-stock items. The variation with the CTA “View Related Products” had a 66% probability of being the best performer, with an uplift over the control group of +3.06% in revenue.
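A "probability to be best" figure like the 66% above typically comes from Bayesian analysis: sample each segment's conversion rate from its posterior distribution many times and count how often each segment wins. The sketch below shows the idea with Python's standard library; the per-segment counts are hypothetical, not the actual experiment data:

```python
import random

def prob_best(arms, draws=20_000, seed=42):
    """Estimate each variation's probability of being the best performer.

    `arms` maps variation name -> (conversions, visitors). Uses a flat
    Beta(1, 1) prior and Monte Carlo samples from each Beta posterior.
    """
    rng = random.Random(seed)
    wins = {name: 0 for name in arms}
    for _ in range(draws):
        samples = {
            name: rng.betavariate(1 + conv, 1 + n - conv)
            for name, (conv, n) in arms.items()
        }
        wins[max(samples, key=samples.get)] += 1
    return {name: count / draws for name, count in wins.items()}

# Hypothetical counts, ~20,000 visitors per segment
results = prob_best({
    "Control": (420, 20_000),
    "View Related Products": (470, 20_000),
    "In Stock Related Products": (440, 20_000),
    "Available Related Products": (435, 20_000),
})
for name, p in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.1%}")
```

Unlike a single pass/fail p-value, this framing gives the product manager a direct, decision-ready statement: "this variation is most likely the winner, with X% confidence."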

Testing Technology Support
Proper equipment is essential to any laboratory experiment. Successful A/B testing depends on four key technologies:
- Audience Segmentation Technology - assigns site visitors to one or more segments during the experiment window.
- Presentation Technology - enables presentation of one or more variations of the control experience at the same time to different audience segments.
- Data Collection Technology - captures activity data in support of the hypothesis defined by the experiment.
- Data Analysis Technology - assists in summarizing and visualizing a large volume of experiment data to support conclusions. Examples range from simple spreadsheets to BI platforms.
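The segmentation piece is often the simplest to build in-house. A common approach (a sketch, not any specific vendor's implementation) is to hash the visitor ID together with an experiment name, which yields a stable, roughly uniform split: the same visitor always sees the same variation, and different experiments get independent splits. All names below are illustrative:

```python
import hashlib

def assign_segment(visitor_id: str, experiment: str, segments: list[str]) -> str:
    """Deterministically assign a visitor to one experiment segment.

    Hashing "experiment:visitor" means assignment is stable across visits
    without storing any state, and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(segments)
    return segments[bucket]

# Hypothetical experiment and visitor IDs
segments = ["Control", "View Related Products",
            "In Stock Related Products", "Available Related Products"]
print(assign_segment("visitor-12345", "oos-cta-test", segments))
```

Because the assignment is a pure function of the visitor and experiment IDs, it can run in a CDN edge worker, a backend service, or client-side code and always agree.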
Summary
The use of A/B Testing can support data-driven decisions in product management, reducing the need to rely on guesswork or opinions. The foundation of A/B testing is the scientific process which requires research, hypothesis and experimentation. When actual performance data is used to identify the best option, product managers can make decisions with greater confidence.
A good scientific practice is to reduce the number of variables that could affect an outcome, so teams should minimize the scope of changes in any A/B test variation. The result is more frequent, smaller, lower-risk incremental changes and a reliable, steadily improving value stream.
Successful A/B experiments require the right laboratory and tools. Teams can often build the testing tooling from their own resources; when resources are limited, several off-the-shelf tools and services can serve as the foundation of your A/B testing platform.
I’d like to hear your thoughts on this topic. Contact me on LinkedIn so I can learn from your experience and ideas for improvement to the practice of A/B Testing.
For Further Reading:
- AB Testing: The Beginner’s Guide To Higher Conversions For 2022
- A Step-By-Step Guide to Effective Ecommerce A/B Testing
Henry Pozzetta is an Agile Coach with years of experience in software engineering and product management. His goal is to accelerate the delivery of value from product management with progressive adoption of agile best practices and lean servant leadership principles.