As early as elementary school, students are drilled in the rules of arithmetic, and a central concept is the Order of Operations for expressions that involve addition, subtraction, multiplication, and/or division.

If you remember PEMDAS (BEDMAS or BIDMAS for our Canadian and Commonwealth friends), also known as “Please Excuse My Dear Aunt Sally,” congratulations are in order—but there’s one addition (pun intended) to the rule set you’ll want to write down.

Website testing has an order of operations as well, but it’s not without controversy.

The fundamental question? Should you:

  1. Show a test to all visitors, and then segment the results?
  2. Segment visitors into different buckets, and show each audience a different test?

“Order up! (Ding) #2 please.”

Your website testing efforts are not well-served by segmenting the results of an untargeted test shown to everyone.

Why? Let’s consider two hypothetical tests for which the goal is to determine whether badging “New” items helps to increase the conversion rate.

  1. Test A: Badge the relevant product images with “New,” and run an untargeted test shown to all site visitors. Half see the badges, and half do not.
  2. Test B: Segment new vs. returning visitors. Badge “New” items only for returning visitors. (Do not show the test to first-time visitors; see the sketch below.)
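To make the segment-first ordering concrete, here’s a minimal sketch of how Test B’s assignment logic might look. It’s illustrative only: the cookie name, CSS class names, and 50/50 split are assumptions for the sake of the example, not details from any particular testing tool.

```typescript
// Minimal sketch of Test B's segment-first assignment.
// The cookie and CSS class names are hypothetical placeholders.

type Variant = "badged" | "control" | "excluded";

function assignVariant(isReturningVisitor: boolean): Variant {
  // Step 1: segment. First-time visitors never enter the test at all.
  if (!isReturningVisitor) {
    return "excluded";
  }
  // Step 2: randomize 50/50 *within* the target segment only.
  return Math.random() < 0.5 ? "badged" : "control";
}

// A common heuristic: a visitor with a prior-session cookie is "returning."
const isReturning = document.cookie.includes("returning_visitor=1");

switch (assignVariant(isReturning)) {
  case "badged":
    // Show the "New" badge on newly added products for this visitor.
    document.querySelectorAll(".product--new")
      .forEach((el) => el.classList.add("badge-new"));
    break;
  case "control":
  case "excluded":
    // Control sees the unmodified page; excluded visitors are never
    // counted in the test's results.
    break;
}
```

The important detail is the order of operations: the segment check happens before randomization, so first-time visitors never dilute the results.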

Test B is fundamentally a better approach because it establishes relevance at the top of the funnel, at the beginning of the session. By contrast, Test A starts with a false assumption, i.e., that showing “New” items could potentially be relevant to all site visitors.

Who might not care about “New” items? Your first-time visitors because, to them, everything on the website is new. (Incidentally, first-time visitors are more interested in knowing what the top sellers are.) At a higher level, though, the point is that website visitors are all different, even for niche businesses that cater to small and discrete demographics.

Thus, the results of an untargeted test simply reflect the “average” across all key audience segments. As the devil is in the details, these blended averages will have little to no value to you (a worked example follows the list). In addition:

  • Successful analysis requires a Sherlock Holmes who knows where to slice the data.
  • Showing the wrong content to some visitor segments may decrease conversions unnecessarily.
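To see how an untargeted average can hide exactly the signal you care about, consider a quick back-of-the-envelope calculation. The numbers below are entirely hypothetical, chosen only to illustrate the blending effect:

```typescript
// Hypothetical numbers only: they illustrate how a blended average can
// mask opposite, segment-level effects in an untargeted test (Test A).

interface Segment {
  name: string;
  visitors: number;
  convControl: number; // conversion rate without badges
  convBadged: number;  // conversion rate with "New" badges
}

const segments: Segment[] = [
  // Badges help returning visitors (+20% relative lift)...
  { name: "returning",  visitors: 4000, convControl: 0.050, convBadged: 0.060 },
  // ...but distract first-time visitors (-15% relative drop).
  { name: "first-time", visitors: 6000, convControl: 0.040, convBadged: 0.034 },
];

function blendedRate(key: "convControl" | "convBadged"): number {
  const visitors = segments.reduce((sum, s) => sum + s.visitors, 0);
  const conversions = segments.reduce((sum, s) => sum + s.visitors * s[key], 0);
  return conversions / visitors;
}

console.log(blendedRate("convControl").toFixed(4)); // "0.0440"
console.log(blendedRate("convBadged").toFixed(4));  // "0.0444" -- a wash
```

Read at the blended level, Test A looks like a null result; read per segment, it contains both a win and a loss. That’s the “average” problem in miniature.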

Now, in deference to my opposition, the approach of Test A can help with “segment discovery,” i.e., identifying important subsets of your visitors whose existence you don’t (yet) know about. However, in my experience, much of this information is available prior to testing via your analytics console. As a result, I still believe that segment discovery is of greater value in planning targeted tests than in ex post facto analysis.

So why the controversy?

On this blog, I’ve written extensively about the ways that legacy technologies influence what many consider to be generally accepted methods of website testing. But are these really “best practices” or just “coping mechanisms” for the limitations of the tools we use?

Historically, targeting was difficult because separate, throwaway code was required to define each visitor segment. In addition, targeting required extra server calls to execute, which made each test significantly more expensive to run. But when you remove these limitations, the best practice of targeting your website tests becomes a much more practical endeavor.
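As one illustration of why removing those limitations matters, a modern client-side tool can evaluate an audience definition locally and reuse it across tests, instead of relying on throwaway per-test code and extra server round-trips. The sketch below is an assumption about how such a reusable audience might look; nothing in it is any vendor’s actual API:

```typescript
// Illustrative sketch: a reusable, locally evaluated audience definition.
// All names here are invented for the example.

interface Visitor {
  sessionCount: number;
  device: "desktop" | "mobile";
}

type Audience = (v: Visitor) => boolean;

// Define each segment once; reuse and compose it across every test.
const returningVisitors: Audience = (v) => v.sessionCount > 1;
const mobileReturning: Audience = (v) =>
  returningVisitors(v) && v.device === "mobile";

function enterTest(
  v: Visitor,
  audience: Audience,
  variants: string[],
): string | null {
  // Evaluated in the browser: no extra server call just to segment.
  if (!audience(v)) return null;
  return variants[Math.floor(Math.random() * variants.length)];
}

// Example: the "New"-badge test, targeted at returning visitors only.
const variant = enterTest(
  { sessionCount: 3, device: "desktop" },
  returningVisitors,
  ["badged", "control"],
);
```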

My New Year’s Resolution was to debunk the myth of the one perfect page, and earlier this month, we successfully killed at least one part of it (RIP). So as your testing efforts continue, remember that there’s no single Monolithic Visitor on your website and, therefore, no single optimal experience. Delivering the best experiences possible requires targeting tests around key audience segments: the way you understand your business.

So please excuse my dear Aunt Sally. It’s as important now as it was back then.