Conversion Rate Optimization Case Studies

Clean Commit · 2026

What We Do and What You Can Expect

Our approach
We start with the changes that move your unit economics: pricing, offer structure, shipping thresholds, bundles, post-purchase upsells. These are what we call "Tier 1" experiments, and they produce much bigger effects (15-40% lifts) and resolve faster than surface-level layout changes. Once those big changes are running, we layer in "Tier 2" structural improvements, which let us test more things, though with smaller impacts per test.

Proof
21 experiments documented on this page across 11 clients, with real results. Three full case studies from engagements (5x to 18x ROI).

Next step
A 30-minute call with Tim Davidson to walk through your current metrics, confirm the opportunity size and answer anything outstanding. [email protected].

How We Prioritize What to Test

The trap most brands fall into

The trap a lot of brands fall into when they start out with A/B testing and CRO is focusing on the UI, or even the broader user experience, of the store. Those things are important, but they matter far less than price elasticity and the offer you're presenting to customers. Discounts, gifts with purchase, buy one get one free and other mechanisms that convince customers your offer is worth more, and then finding the profitable sweet spot, are where the leverage is.

Interface changes are low risk and easier to test when you're getting started, but the impacts are smaller and harder to measure. They also take much longer to reach significance, and many brands lose patience waiting for a result.

We classify every experiment into tiers based on how directly it affects your unit economics, then prioritize accordingly.

The framework

Tier | What it changes | Expected impact | Examples
Tier 1 | What the customer buys, pays, or receives | 15-40%+ lift | Pricing, shipping thresholds, bundles, offers, subscription models, post-purchase upsells
Tier 2 | How the customer gets to the purchase | 8-20% lift | Navigation, checkout flow, cart architecture, search, cross-sell placement, page structure
Tier 3 | How existing elements look, read, or feel | 2-8% lift | Copy, colors, layout, imagery, badges, trust signals, social proof styling

This approach is backed by some pretty significant studies (Wharton, Browne & Jones).

The question we want to ask when deciding which tests to run is: would this change make the customer's bank statement look different? If the answer is yes, it falls into Tier 1.

Winning Experiment Examples

Below is a full set of high-impact tests we've run for our clients: 21 real experiments with real results, showing the kinds of changes we make and the impacts they produce. Ten Tier 1 experiments that change unit economics, and eleven Tier 2 experiments that change how customers buy.

# | Tier | Experiment | Client | Key Result
1 | T1 | Price increase on hero SKUs | One Quiet Mind | +42.5% CVR, +33.4% RPV
2 | T1 | Free shipping threshold optimization | AFTCO | +12% AOV, +4% net revenue
3 | T1 | Starter bundle introduction | AnyAge Wear | +16% AOV
4 | T1 | Gift with purchase vs flat discount | Peluva | +18% RPV
5 | T1 | Subscribe & save on consumables | Trollco Clothing | +9% RPV, +1.2x reorder rate
6 | T1 | Discount removal on flagship | Marsh Wear | +14% margin, +29% checkout rate
7 | T1 | Spend-and-save threshold tiers | Codeword | +13% AOV
8 | T1 | Post-purchase one-click upsell | HashStash | +16% AOV, 14% acceptance rate
9 | T1 | Starter kit for new customers | Marsh Wear | +17% new visitor CVR
10 | T1 | Volume discount incentive in cart | Marsh Wear | +21% RPV
11 | T2 | Desktop sticky navbar | AFTCO | +5% RPV
12 | T2 | Homepage UGC carousel | Codeword | +5% CVR, -8% bounce
13 | T2 | Cross-sell pop-up at add-to-cart | Marsh Wear | +15% RPV, +7% AOV
14 | T2 | Free gift callout on PDP | Peluva | +14% RPV
15 | T2 | Homepage reskin with category cards | Overland Addict | +45% CVR
16 | T2 | Product card differentiation | Gum of Gods | +9% CVR
17 | T2 | Single column collection layout | AnyAge Wear | +3% ATC rate
18 | T2 | Mobile navigation redesign | Q30 | +14% CVR, +17% RPV
19 | T2 | Popup redesign & delay | BetterGuards | +4% CVR, +7% ATC
20 | T2 | Cart vs quiz checkout flow | Marsh Wear | +33% RPV
21 | T2 | Sale countdown timer | BetterGuards | +6% CVR, +4% RPV

Tier 1: The experiments that change your economics

These experiments change what the customer pays or receives, or how the offer is structured. They're often harder to implement and take more effort to validate, but they consistently produce the largest, fastest results.

1. Price Increase on Hero SKUs

Result: +42.5% CVR, +33.4% RPV
Duration: 20 days, 53,200 visitors
Client: One Quiet Mind

Tested a 15% price increase on three flagship weighted pillow SKUs. Conversion rate went up, not down. The original price was anchoring the product as "cheap," and the target audience associated higher price with higher quality.

Control: Original pricing on the flagship Weighted Pillow.
Variant: 15% price increase. Conversion went up.

2. Free Shipping Threshold Optimization

Result: +12% AOV, +4% net revenue
Duration: 28 days, 38,400 visitors
Client: AFTCO

Tested raising the free shipping threshold from $79 to $99. Pushed customers to add one more item to qualify. Average overshoot was 25-30% above the new threshold.

Control: Free shipping on orders $79+.
Variant: Threshold raised to $99. Customers added more to qualify.

3. Starter Bundle Introduction

Result: +16% AOV
Duration: 35 days, 31,700 visitors
Client: AnyAge Wear

Introduced a bundled kit on the PDP pairing two bestsellers at a combined discount. Pricing is handled dynamically, increasing the discount to around 15% for the bundled contents. Positioned as the default recommended option.

Control: Standard PDP with a single product.
Variant: "Complete Kit" bundle as the recommended purchase.

4. Gift With Purchase vs Flat Discount

Result: +18% RPV
Duration: 30 days, 57,400 visitors
Client: Peluva

Replaced a sitewide 15% discount code with a free branded accessory (retail value $25) on orders over $75. The gift with purchase outperformed the discount on conversion, AOV and margin.

Control: 15% off sitewide with code.
Variant: Free branded accessory on orders $75+.

5. Subscribe & Save on Consumables

Result: +9% RPV, 1.2x reorder rate
Duration: 42 days, 34,800 visitors
Client: Trollco Clothing

Added a subscribe & save option on the PDP for consumable products. 10% discount on recurring orders with a toggle between one-time and subscription. Subscription set as the default selection.

Control: One-time purchase only.
Variant: Subscribe & save toggle with 10% recurring discount.

6. Discount Removal on Flagship

Result: +14% gross margin, +29% checkout rate
Duration: 21 days, 46,500 visitors
Client: Marsh Wear

Removed the permanent discount code from the hero product and tested it at full price with stronger value messaging. Checkout completions actually increased because removing the discount code field eliminated the "let me go find a code" abandonment loop.

Control: Permanent sale pricing with discount code.
Variant: Full price with value-led messaging. Margin recovered, checkouts went up.

7. Spend-and-Save Threshold Tiers

Result: +13% AOV
Duration: 45 days, 32,300 visitors
Client: Codeword

Replaced a flat 10% discount with tiered spend-and-save thresholds: spend $100 save 10%, spend $150 save 15%, spend $200 save 20%. Most customers aimed for the middle tier, overshooting their original cart value by 25-40%.

Control: Flat 10% discount on all orders.
Variant: Three tiers with escalating rewards and cart progress bar.
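The tier mechanics are simple to express in code. Here's a minimal sketch of the discount logic using the thresholds from this test; the function name and rounding behavior are illustrative, not the client's actual implementation:

```python
def spend_and_save_discount(cart_total: float) -> float:
    """Return the dollar discount under tiered spend-and-save:
    spend $100 save 10%, spend $150 save 15%, spend $200 save 20%."""
    tiers = [(200.0, 0.20), (150.0, 0.15), (100.0, 0.10)]
    for threshold, rate in tiers:
        if cart_total >= threshold:
            return round(cart_total * rate, 2)
    return 0.0

# A $140 cart sits in the 10% tier; nudging it to $150 unlocks 15%.
# That jump is why customers overshoot their original cart value.
print(spend_and_save_discount(140))  # 14.0
print(spend_and_save_discount(150))  # 22.5
```

The escalating jumps between tiers are the mechanism: the marginal spend to reach the next threshold buys a disproportionately larger discount.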

8. Post-Purchase One-Click Upsell

Result: +16% AOV, 14% acceptance rate
Duration: 30 days, 55,100 visitors
Client: HashStash

Added a one-click upsell page between checkout completion and the thank-you page. Offered complementary products with a "Buy 1 Get 1 40% Off" incentive, purchasable with a single tap. No re-entering payment details. 14% of customers took the offer.

Control: Standard post-purchase page with no recommendations.
Variant: Post-purchase upsell with BOGO 40% off offer. 14% acceptance.

9. Starter Kit for New Customers

Result: +17% new visitor CVR
Duration: 28 days, 38,200 visitors
Client: Marsh Wear

Created a $49 "First Timer Kit" with curated entry-level products bundled at a slight discount. Targeted at new visitors from paid ads. Reduced decision paralysis for first-time buyers who didn't know where to start.

Control: New visitors land on the standard homepage with full product grid.
Variant: Curated "First Timer Kit" landing page for new visitors.

10. Volume Discount Incentive in Cart

Result: +21% RPV
Duration: 21 days, 49,400 visitors
Client: Marsh Wear

Added a "Buy 2, Get 15% Off" incentive badge directly on the product card in the cart, paired with a cross-sell carousel at the bottom. Encouraged customers to add a second item from the same category.

Control: Standard cart without volume incentive.
Variant: "Buy 2, Get 15% Off" badge + cross-sell carousel.

Tier 2: The experiments that change how customers buy

Tier 2 experiments change the structure of the buying experience. How customers discover products, navigate the catalogue and move through the funnel. They make the existing value proposition easier to find and act on.

11. Desktop Sticky Navbar

Result: +5% RPV
Duration: 11 days, 39,935 sessions
Client: AFTCO

Made the desktop navigation bar sticky so it stays visible while scrolling.

Control: Nav disappeared on scroll.
Variant: Sticky nav stays pinned. +5% RPV.

12. Homepage UGC Carousel

Result: +5% CVR, -8% bounce rate
Duration: 23 days, 52,800 sessions
Client: Codeword

Added a "Your Story, Our Hats" user-generated content section. Real customers wearing the product.

Control: Brand photography only.
Variant: UGC section added. Bounce rate dropped 8%.

13. Cross-Sell Pop-Up at Add-to-Cart

Result: +15% RPV, +7% AOV
Duration: 62 days, 35,900 sessions
Client: Marsh Wear

Added a "Pairs well with" pop-up showing complementary products when a customer adds to cart.

Control: Standard cart drawer, no cross-sell.
Variant: "Pairs well with" pop-up. +7% AOV.

14. Free Gift Callout on PDP

Result: +14% RPV
Client: Peluva

Added a "Get free socks!" callout with product image directly above the Add to Cart button.

Control: No mention of free gift on PDP.
Variant: Free gift callout above the Add to Cart button. +14% RPV.

15. Homepage Reskin

Result: +45% CVR (0.3% to 0.44%)
Duration: 30 days, 47,300 sessions, 97% confidence
Client: Overland Addict

Replaced a product-heavy homepage with a lifestyle hero and "Shop by Category" grid.

Control: Product-heavy, no clear path for new visitors.
Variant: Lifestyle hero + category cards. CVR up 45%.

16. Product Card Differentiation

Result: +9% CVR
Duration: 29 days, 41,600 sessions
Client: Gum of Gods

Added feature callouts and benefit bullet points to collection page product cards.

Control: Identical-looking product cards.
Variant: Differentiated with features and benefits. +9% CVR.

17. Single Column Collection Layout

Result: +3% ATC rate
Duration: 30 days, 54,000 sessions
Client: AnyAge Wear

Switched mobile collection from two-column grid to single-column with full-width lifestyle photos.

Control: Two-column grid, small images.
Variant: Single column, full-width photos. +3% ATC.

18. Mobile Navigation Redesign

Result: +14% CVR, +17% RPV
Duration: 22 days, 37,600 sessions
Client: Q30

Redesigned mobile navigation to highlight three main products at the top with images and descriptions.

Control: Plain text menu.
Variant: Product cards with images at top. +14% CVR.

19. Popup Redesign & Delay

Result: +4% CVR, +7% ATC rate
Duration: 30 days, 43,300 sessions
Client: BetterGuards

Redesigned the promotional popup from a generic split-screen layout to a mobile-optimized, product-focused design. Combined with a 60-second delay.

Control: Desktop-optimized popup, appeared immediately.
Variant: Mobile-first design with 60-second delay. +4% CVR.

20. Cart vs Quiz Checkout Flow

Result: +33% RPV
Duration: 28 days, 58,200 sessions, 92% confidence
Client: Marsh Wear

Replaced the standard browse-and-add-to-cart flow with a guided quiz that recommends products based on customer answers.

Control: Standard browse-and-add-to-cart flow.
Variant: Guided quiz with personalized recommendations. +33% RPV.

21. Sale Countdown Timer

Result: +6% CVR, +4% RPV
Duration: 14 days, 36,200 sessions
Client: BetterGuards

Added a sticky countdown timer bar to the top of the site during a clearance sale. Urgency tied to a real event, not a fake evergreen countdown.

Control: Standard announcement bar, no urgency.
Variant: Sticky countdown timer tied to a real clearance event.

Q30: +$504K Revenue and 67% Higher Conversion on 27% Less Traffic

The Headline Numbers

Metric | 2024 | 2025 | Change
Net Revenue | $2.58M | $3.09M | +$504K (+20%)
Conversion Rate | 0.92% | 1.53% | +67%
Add to Cart | 20,399 | 29,573 | +45%
Sessions | 1,223,544 | 899,092 | -27%
Returns | 2,365 | 1,808 | -24%

Revenue grew while traffic dropped 27%. Better traffic quality plus a better on-site experience did the heavy lifting.

Q30. Total sales up 36% and conversion rate up 40% year-on-year, on 11% fewer sessions.

The Brand

Q30 makes the Q-Collar. A $199 FDA-cleared neck device that reduces brain movement during head impacts. Selling a science-backed $199 product to anxious parents who've never heard of the category.

Four findings that shaped the program

  1. The real buyer is a parent, not an athlete. 60% of purchases were by parents and grandparents. The entire website was positioned for athletes and pros.
  2. These are System 2 buyers. Deliberate, sceptical, information-hungry researchers who won't buy until they've read enough proof to reconcile their doubt.
  3. Simplification hurts this audience. We tested a simplified PDP layout. CVR dropped 9%, revenue dropped 11%. The audience wanted more information, not less.
  4. Trust signals need to be visible, not buried. Parent testimonials, clinical data and "how it works" content all performed better when placed higher on the page where they couldn't be missed.

Standout Tests

What the Client Said

"Tim and the Clean Commit team have been my secret weapon. I didn't have time to keep looking for ways to improve our store, and they've found optimizations I wouldn't have thought of. They're super responsive and require very little oversight."

Charlie Kunze, Director of Marketing, Q30 Innovations

Marsh Wear: $590K Revenue Impact and 30% CVR Lift in 12 Months

The Headline Numbers

Metric | Before | After | Change
Conversion rate | 1.83% | 2.38% | +30.3%
Average order value | $99 | $114 | +14.8%
Monthly revenue | $308K | $741K | +140.7%

Conservative annualised revenue impact: $590,458 (projected at 0.75× measured test outcomes, 18 implemented winners, 37 tests over 12 months).

Marsh Wear. Year-on-year growth across our engagement: sessions +19%, total sales +21%, orders +14%, conversion rate +12%. The orders chart shows the compounding effect of 37 experiments.

The Brand

Premium outdoor apparel. Fishing, hunting, camping, boating lifestyle clothing. Around $5M/year on Shopify, 75%+ mobile traffic, conversion rate stuck below 2%. Owned by AFTCO, a brand we'd already been running a full CRO program on.

What we found

The marketing team was constantly updating the site, but every change was a guess. Layers of technical debt, no measurement, 75% of traffic on a mobile experience built as a desktop afterthought.

After 40 hours of diagnosis, the biggest wins came from making products look better and feel more desirable, not from reducing friction. That surprised us. Marsh Wear's customers are driven by brand belonging and product desire. They want UGC, real photography, the feeling of "I want to wear that." Urgency tactics cheapened the brand and hurt performance.

Top Winners

Test | RPV Lift | Annual CII
Enhanced Search Results | +14.7% | $296K
Mini Cart Redesign | +9.9% | $36K
Discount Price Styling | +10.0% | $32K
Product Card Redesign | +9.3% | $30K
Mobile Menu Redesign | +10.7% | $22K
Hand-Picked Cross-Sells | +15.0% (+7% AOV) | $13K

The one that stood out

Most cross-sell implementations use algorithmic "frequently bought together" recommendations. We manually selected every product pairing. Fishing shirt with a specific hat. Jacket with matching gloves. Cheap, complementary, curated by humans who understood the products.

Result: +15% RPV, +7% AOV. Highest per-visitor revenue lift in the program. Human curation plus good timing beat the algorithm.

What the Client Said

"Kamila, Tim and WK from the Clean Commit team are awesome. They run a tight ship and their program has been one of the main factors behind our growth this year."

Casey Sandoval, eCommerce Director, Marsh Wear

Codeword: $915K Revenue Impact. $2M to $3.87M in One Year.

The Headline Numbers

Metric | Before | After | Change
Conversion rate | 2.28% | 2.69% | +18.2%
Average order value | $113 | $146 | +28.6%
Monthly revenue | $212K | $287K | +35.5%

Conservative annualised revenue impact: $915,128 (projected at 0.75× measured test outcomes, 11 implemented winners, 35 tests over 12 months). Year-over-year gross revenue: $2.05M to $3.87M. +88.6%.

Codeword. Total sales up 49% year-on-year with conversion rate up 17%, on just 5% more sessions.

The Brand

Custom hat company. Order a single embroidered hat with no bulk minimum. Customers type in text, choose a style, pick placement. Around 85 to 90% of hats get customized, so the customizer is the product experience.

The Bottleneck

Conversion stuck at around 2% with no clear path forward. The off-the-shelf customizer plugin couldn't be A/B tested, had limited styling options, looked visually cheap and was completely locked down. For a store where 85%+ of customers have to use it to buy anything, that wasn't a minor UX issue. It was a revenue ceiling.

Top Winners

Test | RPV Lift | CVR Lift | Annual CII
Customizer Rebuild | +32.6% | +6.8% | $375K
Condensed Product Gallery | +62.9% | +23.9% | $164K
Review-Based FAQs | +33.2% | +8.4% | $81K
Input-First Mobile Customizer | +12.0% | +2.8% | $57K
Enhanced Mobile Customizer | +21.7% | +3.0% | $54K

The smallest change, biggest result

The customizer preview was blank by default. Customers stared at an empty hat mockup, trying to imagine what their text would look like.

We added one thing. Placeholder text in the preview. "YOUR TEXT HERE" shown on the hat by default.

Result: +15.1% CVR, +9.4% RPV. One line of placeholder copy, 15% conversion lift.

The Customizer Rebuild

The biggest win wasn't a traditional A/B test. It was rebuilding the customizer plugin from scratch and then testing the new one against the old one.

New customizer: better styling, cleaner UI, mobile-first, real-time preview with zero lag, every element testable going forward. It also integrated with Nate's embroidery machines, automating a workflow that was previously manual.

+32.6% RPV, +6.8% CVR, +24.3% AOV. $375K annual impact from a single experiment.

What the Client Said

"Our conversion rate is already up 10-15% just in a month or two of working with them. If you're on the fence, just do it. You will not regret it. They're a great team, they really work to understand you and your particular business."

Nate Montgomery, Founder, Codeword (video testimonial)

What Our Clients Say

"CR has gone up roughly 800% since we started working on the store… which is pretty neat."

Rachael Nelson, eCommerce Manager, Peluva

"Conversion rate went up almost 300%."

Sarah Smyth, Australian Black Worms

"Fantastic, communicative, and made constant progress."

Tim Ruswick, GameDev.tv

Our Process

We run a fairly standard CRO process that involves diagnosing potential problems with analytics, looking at heat maps, watching session recordings, surveying your customers and doing all of the stuff that conversion rate optimization agencies typically do.

Here's what we do differently:

1. Regular accuracy checks

You can't take A/B testing platforms' statistics at face value, which should carry some weight coming from a CRO agency. To calibrate, we run multiple AA tests. If you're not familiar with an AA test, it's one where the control and the variant are kept exactly the same; we let it run for up to three weeks, then measure the reported impact. You'll often see a 2 to 4% difference between control and variant even though nothing changed, which tells you that 2 to 4% is your minimum measurable effect. Any later test result smaller than that has to be ignored as variance: the noise that comes with running A/B testing tools without the billions of page views some enterprise platforms have.
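To make the mechanics concrete, here's a minimal simulation (illustrative Python, not our tooling) of why an AA test sets a noise floor: two arms that received identical experiences still report a nonzero "lift".

```python
import random

def simulate_aa_test(visitors=50_000, true_cvr=0.02, seed=0):
    """Split identical traffic 50/50 and measure the spurious relative
    'lift' between two arms that saw the exact same experience."""
    rng = random.Random(seed)
    conversions = [0, 0]
    sizes = [0, 0]
    for _ in range(visitors):
        arm = rng.randint(0, 1)          # random 50/50 assignment
        sizes[arm] += 1
        if rng.random() < true_cvr:      # same conversion odds in both arms
            conversions[arm] += 1
    cvr_a = conversions[0] / sizes[0]
    cvr_b = conversions[1] / sizes[1]
    return (cvr_b - cvr_a) / cvr_a       # pure noise, reported as a lift

# Run several AA tests; the spread of these deltas is the noise floor.
deltas = [abs(simulate_aa_test(seed=s)) for s in range(20)]
noise_floor = max(deltas)
print(f"largest spurious lift across 20 AA runs: {noise_floor:.1%}")
```

Any measured lift smaller than the observed noise floor should be declared flat, which is exactly the rule described above.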

2. Deep psychological research

We're not just looking for friction in how customers interact with your website. We're looking to understand what drives them, what they're afraid of, why they're here, what they're complaining about. These behaviors, which we have a 12-point scale for, underpin every decision they make. We need to ask whether the changes we're making on the website are going to help customers move closer to the behaviors that are driving them.

3. Focus on things that impact profitability

CRO agencies have a reputation for just changing button colors. We don't do that, because we've all come from e-commerce backgrounds, owning or running stores ourselves. We understand that real impact means changing real levers: prices, product offers, bundles and the other mechanisms mentioned throughout this document. We'll do the hard work to move those metrics and find creative new combinations that improve profitability.

4. Velocity

Our goal is to run as many meaningful experiments as possible. Shopify puts some limits on how we can do this, but we work around them. We aim for at least two experiments per week, with a total goal of 100 experiments per year. At our roughly 30% win rate, that means around thirty changes to your website that meaningfully move your profitability over the course of the year. That's how we can back up the claims in our case studies and the experiments earlier in this document. We're not sitting on our hands; we're looking for rapid ways to improve your profitability and explore new revenue avenues.

Next Steps

You've seen the case studies, the process and the experiments we run. The next step is a call to go through it together.

Book a 30-Minute Call

We'll walk through your current metrics, discuss how we'd approach your store and confirm the opportunity size.

Tim Davidson
[email protected]

What Happens Next

  1. 30-minute call. We review your metrics together, confirm the opportunity size and answer anything outstanding.
  2. Agreement and kickoff. Paperwork within 24 hours. Kickoff within a week.
  3. Week 1 to 2. Deep diagnostic. Full access to your Shopify, GA4, Klaviyo. We build the real baseline.
  4. Week 2 to 4. First experiments go live.
  5. Month 2. Results from the first batch inform the next wave.

Total elapsed time from signed agreement to live tests: 14 days.

Capacity

We currently have room for two new engagements this quarter. If we're at capacity when you reach out, we'll tell you and offer a start date rather than overcommit.

Who are Clean Commit?

Clean Commit has been around since 2018 and is considered one of Australia's leading conversion rate optimization agencies. Our team is spread globally across Europe, America and Australia. We help Shopify brands turning over between $2M and $50M in revenue who have hit a growth ceiling.

We're a small team made up of experts in their fields. Senior project managers who have worked on large enterprise software platforms and infrastructure rollouts. Senior developers with a decade of experience designing web systems, UI and UX. Analysts with tertiary backgrounds in psychology, analytics and statistical analysis. Because we're all experts in our respective fields, we look at websites through a different lens than other teams.

We do one thing: scientific testing, customer analysis and conversion rate optimization for Shopify. It's our specialty and we know it inside and out.

By the Numbers

Brands optimized | 106+
A/B tests run | 1,000+ with real traffic and statistical rigor
Revenue generated (last 12 months) | $1.5M in measured, attributable lift

The Team

A small, senior team. You work directly with us, not a layer of account managers.

Tim Davidson
Founder & Lead Strategist

Wojciech Kaluzny (WK)
Co-Founder & Lead Engineer

Kamila Kucharska
Project Manager

Patryk Michalski
Senior Web & UX Designer

Cormac Quaid
Shopify Engineer

Borisa Krstic
Shopify & React Engineer

Frequently Asked Questions

How do you prevent experiments from cannibalizing each other?

We use a naming and intent convention that categorizes each part of the UI and cross-references it with the motivations of the customer. Someone looking for information on a PDP is on a different journey to someone flirting with purchasing on the same page, so we treat those as separate spaces.

When we scope an experiment, we stick to one defined part of the site with one defined intent. We can go surprisingly granular, and at that level of resolution it takes at least 18 months to exhaust all the combinations on a single store. So cannibalization is something we sidestep structurally, not something we manage case by case.

How do you accurately measure the uplift from experiments?

Every test is a controlled A/B. A percentage of your traffic sees the original (control), the rest sees the variation.

We measure a range of metrics. Conversion rate, revenue per visitor, average order value, bounce rate and a handful of supporting signals, all pulled directly from the testing platform.

We also run an AA test on each store before we start. That tells us the natural variance of your pages. If we know your baseline conversion rate naturally swings by around 5%, we won't call a 5% lift a win. That gets declared flat. It's the only way to separate real movement from statistical noise.

We push for above 90% statistical confidence before calling a winner. For stores with large traffic we'll reach into the 95%+ range. For smaller stores 90% is our working floor.
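The confidence figure comes from a standard significance test on two conversion rates. A stdlib-only sketch of the usual two-proportion z-test (the visitor and conversion counts below are hypothetical, and real platforms may use different statistics):

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test on the difference between two conversion rates.
    Returns the confidence level (1 - p-value) that the arms differ."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return 1 - p_value

# Hypothetical: control converts 2.0%, variant 2.3%, 25k visitors each
confidence = z_test_two_proportions(500, 25_000, 575, 25_000)
print(f"confidence: {confidence:.1%}")
```

With those hypothetical numbers the test clears a 90% confidence floor; halve the traffic and the same relative lift would not.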

What A/B testing platform do you use?

We default to Intelligems on most engagements.

Intelligems uses randomized participation, which means a single visitor can be part of three, four, five or more concurrent experiments without the results interfering. That matters because it lets us maintain a high testing velocity without the tests tripping over each other.
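Intelligems' exact mechanism is proprietary, but a common way to get independent assignment across concurrent tests is to hash the visitor ID together with the experiment ID: a visitor's arm in one test then says nothing about their arm in any other. An illustrative sketch (names and the 50/50 split are assumptions):

```python
import hashlib

def assign_arm(visitor_id: str, experiment_id: str, variant_pct: int = 50) -> str:
    """Deterministically bucket a (visitor, experiment) pair.
    Different experiment IDs yield independent assignments, so one
    visitor can sit in many concurrent tests without interference."""
    key = f"{visitor_id}:{experiment_id}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 100
    return "variant" if bucket < variant_pct else "control"

# Stable per experiment: the same visitor always sees the same arm.
print(assign_arm("visitor-42", "sticky-nav"))
```

Because assignment is a pure function of the pair, no per-visitor state needs storing, and adding a new experiment reshuffles nobody in the existing ones.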

We've also used Shoplift extensively.

What happens if you don't see wins for a couple of months?

We come to you and tell you.

We're incentivized by the wins, not the retainer, so a quiet stretch hurts us too. If we go a few months without a real win we'll suggest whatever we can to course-correct. If it still isn't landing, we'll raise the idea of mutually ending the engagement. We're not precious about the contract. We want the big wins, and when the shared incentive isn't there we'll say so.

How many experiments do you run at the same time?

We aim for up to 8 concurrent experiments if traffic permits and it makes sense to run that many, but often it's closer to 4 or 5. Our average win rate sits between 20 and 30%, which means 20 to 30 winners a year compounding into your baseline.
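Implemented winners multiply the baseline rather than add to it, which is what "compounding into your baseline" means. A quick sketch of the arithmetic, using a hypothetical average lift per winner (the 2% figure is illustrative, not a measured average):

```python
def compounded_lift(winners: int, avg_lift: float) -> float:
    """Total lift when each winner multiplies the baseline.
    Compounding beats simple addition as the win count grows."""
    return (1 + avg_lift) ** winners - 1

# Hypothetical: 25 winners at +2% each (additive thinking would say 50%)
print(f"{compounded_lift(25, 0.02):.0%}")  # prints "64%"
```

The gap between additive and compounded totals widens with every winner, which is why testing velocity matters as much as per-test effect size.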

Can we still make content changes and tweak the website while experiments are running?

Yes. You don't need to coordinate with us.

We run GitHub Actions behind the scenes that pick up your changes and apply them to the live experiment so everything stays in sync. We aim to be relatively invisible in the background. You run your marketing, merchandising and content updates as normal.

Where is your team based and who would we be working with?

Tim is based in Australia (AEDT). The rest of the team is distributed across Europe: WK, Kamila, Patryk, Borisa and Cormac.

Tim is the account lead and the escalation point for anything strategic or contractual. Kamila is who you'll talk to in Slack day to day. She sends running updates and manages delivery. The bi-monthly sync call where we walk through new experiments and results is typically with Kamila and WK (our co-founder and lead engineer).

Can we have access to your designers and developers?

Yes. We encourage every client to connect with us on Slack. When you need something from a designer, developer, analyst or strategist, you can reach them directly in the channel.

Do you do work outside of A/B testing?

Yes. Custom Shopify app development, headless builds, custom themes, international expansion, integrations and more.

That said, the point of this engagement is to improve your revenue per visitor. When a request comes in that's outside CRO scope, we tend to package it as a separately scoped piece of work so it doesn't interrupt the testing program.

What does the effort look like from your end?

Minimal.

What | Time
Shopify and analytics access at kickoff | 10 minutes, one off
Weekly Slack updates from us | 5 minutes to read
Review of experiments before launch | 15 to 20 minutes per week
Feedback on test designs (async) | 10 to 15 minutes per week
Bi-monthly sync call | 1 hour every 2 months

We handle the research, design, development, QA, launch, monitoring, analysis, reporting and implementation of winners.

What does an honest uplift look like after 3 months?

Three months is roughly one full testing cycle. You'd expect the diagnosis to have surfaced 10 to 20 high-impact opportunities, with 5 to 15 tested and 3 to 5 producing a measurable win.

In revenue terms, 3 months of testing on a store converting at 1.1% often lifts CVR into the 1.3 to 1.5% range, depending on traffic volume and the severity of the issues we find. The compounding effect doesn't really kick in until months 6 to 9, when the wins start stacking.

Can we talk to any of your clients?

Yes. Happy to put you on a call with Nate (Codeword), Charlie (Q30), Casey (Marsh Wear) or James (HashStash). Let us know which vertical matches your questions best and we'll arrange the intro.