Experiment & A/B Testing Measurement

Category: Data & Analytics

Subcategory: Experimentation & Optimization


1. Purpose

To ensure that all experiments at Cashkr are structured, measurable, and repeatable, leading to confident decisions about product, marketing, and process improvements.


2. Who Is Involved

Team Initiating Experiment

  • Marketing

  • SEO

  • Product/App

  • Ops (pickup slot tests, vendor flow tests)

  • CX (scripts, contact strategy tests)

Data Analyst (Ashwini)

  • Designs measurement setup

  • Creates tracking

  • Monitors experiment results

  • Produces final report


SECTION 1 — DEFINING AN EXPERIMENT (Team Responsibility)

Every experiment must have these 4 components:


A. Hypothesis

A clear statement of what you expect to happen.

Format:

“If we change X, it will cause Y because Z.”

Examples:

  • “If we shorten our landing page text, conversion rate will increase because users understand value faster.”

  • “If we show trust badges above the fold, lead conversion will improve due to increased trust.”


B. Primary Metric (Success Metric)

The single metric that decides whether the experiment is a WIN, a LOSS, or NEUTRAL.

Examples:

  • Lead Conversion Rate

  • CTR of Google Ads

  • Order Completion Rate

  • Add-to-Cart Rate

  • Pickup Success %


C. Secondary Metrics

Metrics that help understand side effects.

Examples:

  • Bounce Rate

  • Average Time on Page

  • CPC / CPA

  • Vendor Acceptance Rate

  • Customer Complaints


D. Test Duration

Experiments must run long enough to get reliable data.

Standard Minimum Durations:

  • Website Experiments → 7–14 days

  • App Experiments → 10–21 days

  • Ads Experiments → 7 days

  • CX/Sales Scripts → 3–5 days per variant

Rule:

The test must run until both variants receive enough traffic to reach adequate statistical power (see the sample-size sketch below).
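
As a rough guide to "enough traffic", the required sample per variant can be estimated with the standard two-proportion sample-size formula. Below is a minimal sketch; the function name and the example rates are illustrative, not Cashkr's actual numbers.

```typescript
// Approximate visitors needed per variant to detect a lift in a
// conversion rate, using the standard two-proportion formula at
// 95% confidence (two-sided) and 80% power.
function sampleSizePerVariant(baselineRate: number, expectedRate: number): number {
  const zAlpha = 1.96;  // z for alpha = 0.05, two-sided
  const zBeta = 0.8416; // z for 80% power
  const variance =
    baselineRate * (1 - baselineRate) + expectedRate * (1 - expectedRate);
  const effect = expectedRate - baselineRate;
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / effect ** 2);
}

// Example: detecting a lift from a 4% to a 5% lead conversion rate
// needs roughly 6,700 visitors per variant.
console.log(sampleSizePerVariant(0.04, 0.05));
```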


Experiment Requirement Sheet (Team Must Submit to Ashwini)

  • Experiment Name: Short, clear title

  • Hypothesis: Expected change

  • Variant A (Control): Existing version

  • Variant B (Test): Modified version

  • Primary Metric: What defines success

  • Secondary Metrics: Supporting KPIs

  • Target Users: Web, App, City, Device type, etc.

  • Tools: GA4, Firebase, Ads experiments, etc.

  • Start Date: Planned

  • Duration: Planned
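
For teams that prefer a structured template, the sheet maps naturally onto a typed record. This is a minimal sketch; the interface, field names, and example values are illustrative, not an existing Cashkr schema.

```typescript
// Illustrative shape of an Experiment Requirement Sheet entry.
interface ExperimentRequirementSheet {
  experimentName: string;       // short, clear title
  hypothesis: string;           // "If we change X, it will cause Y because Z."
  variantA: string;             // control: existing version
  variantB: string;             // test: modified version
  primaryMetric: string;        // what defines success
  secondaryMetrics: string[];   // supporting KPIs
  targetUsers: string;          // web, app, city, device type, etc.
  tools: string[];              // GA4, Firebase, Ads experiments, etc.
  startDate: string;            // planned start, ISO date
  durationDays: number;         // planned run length
}

const example: ExperimentRequirementSheet = {
  experimentName: "Landing page copy shortened",
  hypothesis:
    "If we shorten our landing page text, conversion rate will increase because users understand value faster.",
  variantA: "Current long-form landing page",
  variantB: "Shortened landing page text",
  primaryMetric: "Lead Conversion Rate",
  secondaryMetrics: ["Bounce Rate", "Average Time on Page"],
  targetUsers: "Web, all cities, mobile + desktop",
  tools: ["GA4"],
  startDate: "2026-01-15",
  durationDays: 14,
};
```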


SECTION 2 — ASHWINI’S RESPONSIBILITIES


A. Tracking Setup (Before Experiment Starts)

Ashwini must:

  1. Enable event tracking in GA4 / Firebase

    • For clicks, scrolls, new UI tests, buttons, banners

    • Confirm parameters: experiment_name, variant_name (see the tagging sketch after this list)

  2. Set up A/B testing tools if needed

    • Firebase Remote Config or in-app feature flags (Google Optimize has been discontinued)

    • Google Ads A/B Experiments (if ads test)

  3. Create Looker Studio Experiment Dashboard

    • Control vs Variant metrics

    • Trend charts

    • Conversion funnels

  4. Verify Data Flow

    • Test in DebugView

    • Confirm events firing correctly

    • Confirm variant segmentation
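
As a reference for item 1, here is a minimal sketch of exposure tagging with gtag.js. The event name experiment_exposure and the helper function are illustrative conventions, not a GA4 built-in.

```typescript
// Assumes the GA4 gtag.js snippet is already installed on the page.
declare function gtag(...args: unknown[]): void;

// Illustrative helper: fire an exposure event carrying the experiment
// and variant, so GA4 reports can be segmented per variant. Register
// both parameters as custom dimensions in GA4 so they appear in reports.
function logExperimentExposure(experimentName: string, variantName: string): void {
  gtag("event", "experiment_exposure", {
    experiment_name: experimentName,
    variant_name: variantName,
  });
}

// Example: user bucketed into the test variant of a landing page test.
logExperimentExposure("landing_copy_short_v1", "variant_b");
```

Verifying this event in DebugView (step 4) confirms both the event and its parameters are firing.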


B. Monitoring During Experiment

Ashwini checks performance daily for sanity and every 3 days for trends.

Things to Monitor:

  • Volume balance → Are both variants getting similar traffic? (see the check after this list)

  • Conversion Rate trends

  • Any abnormal spikes (bot traffic, tracking failure)

  • Data reliability
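
A quick way to check volume balance is a sample ratio mismatch (SRM) test. The sketch below assumes an intended 50/50 split; the function name is illustrative.

```typescript
// Chi-square test for sample ratio mismatch (SRM) on an intended
// 50/50 split. Returns true if the observed traffic split deviates
// from 50/50 by more than chance would allow (p < 0.05).
function hasSampleRatioMismatch(usersA: number, usersB: number): boolean {
  const expected = (usersA + usersB) / 2; // expected users per variant at 50/50
  const chiSquare =
    (usersA - expected) ** 2 / expected + (usersB - expected) ** 2 / expected;
  // 3.84 is the chi-square critical value at alpha = 0.05 with 1 degree
  // of freedom; exceeding it means the split is unlikely to be random.
  return chiSquare > 3.84;
}

// Example: 5,200 vs 4,800 users is a real imbalance; pause and check tracking.
console.log(hasSampleRatioMismatch(5200, 4800)); // true
```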

If data problem detected:

  • Pause test

  • Fix tracking

  • Restart if required


SECTION 3 — FINAL ANALYSIS & RESULT SUMMARY

When the experiment ends, Ashwini produces a Test Result Summary.


A. Compute Core Metrics

For each variant:

  • Conversion Rate

  • Total Users

  • Primary Metric lift (%)

  • Statistical significance (if possible; see the sketch after this list)

  • Secondary metrics impact
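
A minimal sketch of the core computation, using a pooled two-proportion z-test at 95% confidence. The function and result shape are illustrative, not a prescribed Cashkr library.

```typescript
// Illustrative result shape for comparing two variants on a
// conversion-style primary metric.
interface VariantResult {
  rateA: number;        // control conversion rate
  rateB: number;        // test conversion rate
  liftPct: number;      // relative lift of B over A, in percent
  significant: boolean; // true if |z| > 1.96 (p < 0.05, two-sided)
}

function compareVariants(
  usersA: number, conversionsA: number,
  usersB: number, conversionsB: number,
): VariantResult {
  const rateA = conversionsA / usersA;
  const rateB = conversionsB / usersB;
  // Pooled two-proportion z-test.
  const pooled = (conversionsA + conversionsB) / (usersA + usersB);
  const stdErr = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
  const z = (rateB - rateA) / stdErr;
  return {
    rateA,
    rateB,
    liftPct: ((rateB - rateA) / rateA) * 100,
    significant: Math.abs(z) > 1.96,
  };
}

// Example: 4.0% vs 4.8% conversion on 7,000 users per variant
// gives a 20% relative lift, significant at the 95% level.
console.log(compareVariants(7000, 280, 7000, 336));
```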


B. Result Classification

1. WIN

Variant B beats Variant A on the primary metric by a statistically significant margin.

2. LOSS

Variant B performs worse than Variant A.

3. NEUTRAL

No significant difference → Keep control.
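
This rule maps directly onto the output of the compareVariants sketch above; classifyResult is likewise an illustrative name.

```typescript
type Outcome = "WIN" | "LOSS" | "NEUTRAL";

// Classify using the lift and significance flag computed for the
// primary metric (e.g. from compareVariants above).
function classifyResult(liftPct: number, significant: boolean): Outcome {
  if (!significant) return "NEUTRAL"; // no significant difference: keep control
  return liftPct > 0 ? "WIN" : "LOSS";
}
```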


C. Recommendation Summary

Ashwini must provide a final recommendation:

  • WIN → Roll out Variant B fully

  • LOSS → Revert to Variant A

  • NEUTRAL → Keep Variant A, no change


D. Documentation Template

Ashwini writes a short document:

Experiment Summary

  • Experiment Name

  • Dates

  • Hypothesis

Result

  • Variant A vs Variant B

  • Primary metric difference

  • Secondary effects

  • Statistical confidence

Conclusion

  • Win / Loss / Neutral

Recommendation

  • Scale / Revert / Retest

Notes

  • Issues faced

  • Learnings


SECTION 4 — STORAGE & RECORD KEEPING

All experiments must be stored in a central folder:

Fusebase / Google Drive → Cashkr → Data → Experiments

Inside each experiment folder:

  • Experiment Requirements Sheet

  • Tracking setup screenshot

  • Mid-test checks

  • Final Analysis (PDF)

  • Data sheet (CSV)

This builds institutional knowledge for future team members.


SECTION 5 — Approval Workflow

1. Marketing/Product/SEO requests experiment → sends requirement sheet

2. Ashwini validates + sets up tracking

3. Team runs experiment

4. Ashwini measures + reports outcome

5. Leadership approves rollout for wins


SECTION 6 — Quality Standards

✓ Clear hypothesis before experiment

✓ Primary metric defined; cannot change mid-test

✓ Test must run for minimum duration

✓ Traffic must be split randomly and without bias between variants

✓ Final report must be easy to understand

