If you’re sending marketing campaigns without A/B or multivariate testing — and most companies admit to running fewer than five tests a month — you’re effectively acting as a focus group of one. You’re assuming everyone feels the same way about your campaign as you do. Big mistake.
Most of us have at least a bit of familiarity with A/B testing and have worked it into a few deployments. Testing subject line A against B is the most common, but A/B testing can go much further — from simple to sophisticated. You can test for:
- Best time of day for sending each email type (newsletter, offer, etc.)
- Best day of week for each type
- Optimal send frequency
- Subject line length
- Personalization in subject line or message body
- Squeeze page vs. landing page performance
- Conversion lift from a video, demo, or meeting link
- Diagnosing content or UX errors
- Challenging long-held behavior assumptions
- Calls to action
- Color, layout, and design
- Writing style (e.g., casual, conversational, sensational, business)
- Sender’s name and email address (e.g., brand, personal, role-based)
A/B and multivariate testing help you understand what truly moves your prospects, leads, subscribers, and customers. Adopt a consistent testing process, and your cumulative results will reveal what drives measurable change across campaigns, pages, and all inbound/outbound efforts.
In a Google Analytics class I once took, the instructor said, “Don’t measure; compare.” What he meant was: never look at a single stream of data; always compare it to something else — landing page views for new versus returning visitors, for example. That one tiny piece of advice has stuck with me all these years, and I’ve since adapted it to fit the moment. For this moment, it’s “Don’t send; compare.” Yep, I have a steadfast rule: don’t send emails unless there is something to compare.
One client’s only call to action in every email was a discount — each offer more generous than the last. When asked how well it worked, they admitted the obvious: the bigger the discount, the more they sold. But when pressed for ROI, they couldn’t say. Sure, they sold more widgets, but at those discount levels, they were also making far less profit.
We recommended an A/B-heavy drip campaign with no discounts — instead linking to testimonials, case studies, demos, meeting bookings, and other inbound content. The idea was to shift from asking for the business to earning it. True, it can lengthen the sales cycle, but it can also keep you from leaving money on the table.
For this client, the change felt too drastic, and they didn’t stick with it long enough to gather meaningful data. Even so, the limited results showed real potential for the inbound approach, enough to warrant a second look.
Not every A/B test needs to be that bold. You can start small. The takeaway: you don’t have to learn it all now. Testing can be incremental. Whether you test occasionally or often, the data is worthless without measurable goals. That means defining:
- Goal (e.g., sell 400 widgets by 31 December—goals require a timeframe so you know when to stop counting)
- Targeted ROI
- Channels (e.g., email, direct mail, social)
- What you’re testing
- How you’ll measure
- Audience
- Testing protocol
- Feedback loop
- How you’ll integrate results into future campaigns
If your platform doesn’t support A/B testing (what marketing automation software doesn’t these days?), do it manually. Create two versions of your campaign and divide your list randomly — unless your test variable is within your list (e.g., gender, location, or persona type).
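If you do have to split a list by hand, a few lines of Python are enough. This is a minimal sketch, not anyone’s official tool — the email addresses are made up for illustration, and the seed is there only so the split is reproducible:

```python
import random

def ab_split(recipients, seed=42):
    """Randomly divide a recipient list into two roughly equal groups."""
    shuffled = recipients[:]               # copy so the original order is untouched
    random.Random(seed).shuffle(shuffled)  # seeded shuffle for a repeatable split
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical list of 100 recipients split into group A and group B
recipients = [f"user{i}@example.com" for i in range(100)]
group_a, group_b = ab_split(recipients)
```

Export each group as its own list in your email tool, send one version to each, and compare. If your test variable lives inside the list itself (say, location), filter on that field first instead of shuffling.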
When I want insights beyond opens, clicks, and visits, I use tools that provide visual heat maps — such as Email on Acid or Crazy Egg — to see where people engage or drop off. Many marketing platforms also have these features, so look before you buy. Heat maps are powerful, and you don’t need tens of thousands of recipients to glean valuable insight. You can test just as effectively with a small pilot group — just make sure it’s not a focus group of one.
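On small pilot groups: a quick way to check whether a difference in, say, open rates is a real signal rather than noise is a standard two-proportion z-test. This is a back-of-the-envelope sketch with made-up counts, not a substitute for your platform’s built-in statistics:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z-score for the difference between two rates (e.g., open rates)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical pilot: 60 of 200 opened version A, 40 of 200 opened version B
z = two_proportion_z(60, 200, 40, 200)
significant = abs(z) > 1.96  # roughly the 95% confidence threshold
```

In this example the difference clears the threshold, so even a 400-person pilot can produce a defensible result. If it doesn’t clear it, run the test longer or on a larger group before declaring a winner.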
Editor’s note: I wrote this article some time ago — but the principle hasn’t changed. This lightly updated version reflects current tools and practices while keeping the same lessons intact.
AI disclosure: This content was originally written by me and later updated with assistance from OpenAI’s GPT-5 for light editing, fact-checking, and modernization. Every word has been reviewed and approved by a human — specifically, me — before publication.