Is Your Testing Solution Slowing You Down?

According to 2022 research from the SEO agency First Page Sage, the average session duration for eCommerce sites is 2 minutes and 3 seconds, compared with an overall average of 3 minutes and 36 seconds across all websites in the study.

And while consumers in 2023 are increasingly online, remember that they are also increasingly impatient.

In fact, further research shows that the anxiety caused by a delay in page loading is comparable to the stress of watching a horror movie. The typical front-end testing solution, meanwhile, is known to add to page load time.

Therefore, it’s become highly important to solve the page load speed issues caused by most front-end testing solutions. Not doing so can not only affect the integrity of your testing but also harm the health of your business.

To help you fix page load speed issues caused by testing solutions, we’ve created this guide.

It delves into:

  1. Why front-end testing tools hamper site performance
  2. Testing’s impact on business metrics like conversion
  3. How to go about testing your testing program
  4. How testing solutions compare
  5. Four ways to fix testing tool slow-down

Why Do Testing Tools Slow Down Pages?

So why do traditional front-end testing solutions have the potential to slow down your pages so badly? And why is the impact worse than for other technologies like tracking pixels? There are a number of reasons:

  • Any 3rd-party technology that is installed on your page needs to have its corresponding code loaded. By definition, that means that there will be an impact on page load speed, even if the impact is minimal.
  • There is general consensus today that an A/B testing program needs to deliver tested content without flicker (aka, Flash of Original Content (FOOC)). Flickering happens when the original page, or set of elements, is visible for a very brief moment until an A/B test’s alternative content is shown. Not only does flicker result in a non-optimal user journey, it might also skew your test results, invalidating them in the process. A testing solution needs to manipulate the page before the visitor actually sees it to avoid flicker. This means that a solution needs to be loaded early on in the page load cycle. And that’s when the system is really busy rendering the page and fetching all of the assets, style definitions, and other vital components.
  • Most testing solutions (including Monetate Personalization) recommend running at least part of their code synchronously, which essentially means stopping the processing of the page until the code is loaded and executed. It’s not ideal, but it is the only reliable way to prevent flicker (a generic sketch of this pattern follows this list).
  • Some testing solutions have built their technology around a single big script package that is distributed via CDN to all visitors. CDNs are very fast at delivering static assets, but the package grows with every test that runs, regardless of whether the current visitor will ever qualify for a test or whether anything needs to change for that visitor on the current page. Visitors also download the instructions for every variation of a test, even though they can obviously only be exposed to one. These problems get worse as brands run more personalized experiences, because many concurrent targeted experiences all increase the size of the package for every page visit.
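To make the flicker problem concrete, below is a minimal, generic sketch of the kind of anti-flicker guard that synchronous front-end tags rely on. It is illustrative only: the class name, safety timeout, and reveal hook are assumptions, not any particular vendor’s snippet.

```typescript
// Generic anti-flicker sketch (illustrative; not any vendor's actual snippet).
// Runs synchronously in <head>: hide the page until the testing script has
// applied its changes, with a safety timeout so a slow tag can never blank
// the page indefinitely.

const HIDE_CLASS = 'ab-hide';      // hypothetical class name
const SAFETY_TIMEOUT_MS = 3000;    // assumption: reveal after 3s no matter what

// Inject a style rule that hides the whole document.
const style = document.createElement('style');
style.id = 'ab-hide-style';
style.textContent = `.${HIDE_CLASS} { opacity: 0 !important; }`;
document.head.appendChild(style);
document.documentElement.classList.add(HIDE_CLASS);

// Reveal the page: called by the testing script once variations are applied,
// or by the safety timeout, whichever comes first.
function reveal(): void {
  document.documentElement.classList.remove(HIDE_CLASS);
  document.getElementById('ab-hide-style')?.remove();
}

window.setTimeout(reveal, SAFETY_TIMEOUT_MS);
(window as any).__abReveal = reveal; // hypothetical hook for the testing tag
```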

Testing Performance’s Impact on Conversion, Sales, and Revenue

Research makes it clear that shoppers are growing increasingly impatient. This makes it important for companies to closely monitor testing tools’ effect on page load times, since further research has tied load time to real-world business results.

For example:

1. Amazon: 1% sales loss for every 100ms of load time

Back in 2007, Amazon discovered a connection between load time and sales: every extra 100 milliseconds of loading time cost the company 1% in sales.

2. Walmart: +2% conversion per 1-second load time improvement

Walmart also realized that optimizing the load times in their online shop had a significant impact on their conversion rate. They saw a 2% increase in conversion for every second of improvement. At Walmart’s scale, that’s real money!

3. Tagman: 7% loss in revenue for every one-second delay in load time

The same pattern appears in a Tagman study, with even more dramatic results: Tagman found that every one-second delay leads to a 7% loss in revenue!
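To put these rules of thumb into numbers, here is a small sketch that translates them into an estimated revenue impact. The percentages come from the studies cited above; the baseline revenue figure is an arbitrary example, not real data.

```typescript
// Rough revenue-impact estimate based on the rules of thumb cited above.
// The baseline revenue is an arbitrary example value, not real data.

const ANNUAL_ONLINE_REVENUE = 50_000_000; // example: $50M per year (assumption)

// Amazon (2007): ~1% of sales lost per extra 100 ms of load time.
function amazonRule(extraMs: number): number {
  return ANNUAL_ONLINE_REVENUE * 0.01 * (extraMs / 100);
}

// Tagman: ~7% of revenue lost per extra second of load time.
function tagmanRule(extraSeconds: number): number {
  return ANNUAL_ONLINE_REVENUE * 0.07 * extraSeconds;
}

console.log(`300 ms slower: $${amazonRule(300).toLocaleString()} per year`); // $1,500,000
console.log(`1 s slower: $${tagmanRule(1).toLocaleString()} per year`);      // $3,500,000
```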

Testing Your Testing Program for Decreased Speed

With tools like Google’s PageSpeed Insights, it’s easy to see which components and third-party solutions affect your site’s performance the most. The report breaks down several measures you could take to speed up your site; among them, your testing solution will typically appear among the 3rd-party scripts with the biggest impact on page load speed, and again in the breakdown of JavaScript execution time.

The higher those numbers are, the more important it is to understand the overall impact of your testing program on your bottom line.
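If you want to monitor this continuously rather than checking by hand, the PageSpeed Insights report is also available through a public API. The sketch below pulls out the third-party summary, which is where a testing tag’s cost typically shows up; treat the audit and field names as assumptions to verify against the live response, since the Lighthouse report format changes over time.

```typescript
// Sketch: query the PageSpeed Insights API and print the cost attributed to
// each third-party entity (e.g. your testing vendor). Audit and field names
// should be verified against the live API response for your Lighthouse version.

const PSI_ENDPOINT = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

async function thirdPartyCost(pageUrl: string): Promise<void> {
  const res = await fetch(`${PSI_ENDPOINT}?url=${encodeURIComponent(pageUrl)}&strategy=mobile`);
  const report = await res.json();

  // The 'third-party-summary' audit lists main-thread time, blocking time,
  // and transfer size per third-party entity.
  const items = report?.lighthouseResult?.audits?.['third-party-summary']?.details?.items ?? [];

  for (const item of items) {
    console.log(
      item.entity?.text ?? item.entity,
      `blocking: ${item.blockingTime} ms,`,
      `main thread: ${item.mainThreadTime} ms,`,
      `transfer: ${Math.round((item.transferSize ?? 0) / 1024)} KiB`,
    );
  }
}

thirdPartyCost('https://www.example.com/'); // placeholder URL
```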

Have you ever thought about testing your testing program itself? The idea is to load the testing solution for only 90% (or any other share) of your traffic and compare results in your web analytics tool (say, Google Analytics). If the group that loads the testing solution underperforms the holdout group, decreased page load speed may be the reason.
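Here is a minimal sketch of that idea, assuming gtag.js is already on the page. The cookie name, 90/10 split, event name, and tag URL are placeholders for illustration, not a prescribed setup.

```typescript
// Sketch of a "test your testing program" holdout. Cookie name, split ratio,
// event name, and the testing-tag URL are placeholders.

declare function gtag(...args: unknown[]): void; // provided by gtag.js

const HOLDOUT_RATE = 0.1; // 10% of visitors never load the testing tag

function getOrAssignGroup(): 'holdout' | 'testing' {
  const match = document.cookie.match(/(?:^|; )testing_holdout=([^;]+)/);
  if (match) return match[1] as 'holdout' | 'testing';
  const group = Math.random() < HOLDOUT_RATE ? 'holdout' : 'testing';
  document.cookie = `testing_holdout=${group}; path=/; max-age=${60 * 60 * 24 * 90}`;
  return group;
}

const group = getOrAssignGroup();

// Record the group so the two populations can be compared in analytics.
gtag('event', 'testing_holdout_assignment', { holdout_group: group });

// Only the "testing" group actually loads the testing solution's tag.
if (group === 'testing') {
  const s = document.createElement('script');
  s.src = 'https://cdn.example-testing-vendor.com/tag.js'; // placeholder URL
  s.async = true;
  document.head.appendChild(s);
}
```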

Is Your Solution Slowing You Down?

Our own testing shows just how much other testing solutions can impact your page speed. We sampled public-facing sites running two of our competitors’ solutions and compared their loading speeds with samples from our own public-facing customers.

Using Google’s PageSpeed Insights, as detailed above, we’re able to show just how much quicker Monetate Personalization loads (measured by main-thread blocking time) than Dynamic Yield (5x faster) and Optimizely (2x faster). This comparison covers tag-based implementations from all three vendors; things get even faster with the deployment options detailed in the next section.

| Testing Solution | Average Total CPU Time (ms) | Avg. Main Thread Blocking Time (ms) | Avg. JS Execution Time (Script Parse, ms) | Avg. Transfer Size (KiB) |
| --- | --- | --- | --- | --- |
| Dynamic Yield | 2360.7 | 826.3 | 49.5 | 309.8 |
| Optimizely | 1084.2 | 430.1 | 61.4 | 129.3 |
| Monetate | 478.1 | 149.6 | 22.0 | 88.1 |

Four Solutions That Fix Page Slow-Down Caused By Testing Tools

Server-Side

Server-side testing has been getting a lot of attention over the past two years. Instead of manipulating a page after it is fully rendered, changes are made during the rendering process. This typically improves page load speed, since less code and fewer 3rd-party technologies have to be loaded, and it avoids flicker altogether.

Server-side testing is especially helpful when a variation is complex and likely to perform better. In that case, you release the new version behind a feature flag and test it using the flag. This saves you from developing a front-end manipulation A/B test and then, after the test concludes, re-implementing the whole feature on the back end. Essentially, you develop the feature once, not twice.
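As a minimal sketch of what that looks like in practice (the flag name, split, and hashing scheme are illustrative assumptions, not a specific vendor’s SDK), a server-side test deterministically buckets each user and branches during rendering, so there is nothing left for the browser to rewrite:

```typescript
import { createHash } from 'node:crypto';

// Sketch of a server-side A/B test behind a feature flag. The flag name,
// split, and hashing scheme are illustrative assumptions.

const FLAG = 'new-checkout-flow';
const TREATMENT_SHARE = 0.5; // 50/50 split

// Deterministic bucketing: the same user always gets the same variant.
function assignVariant(userId: string, flag: string): 'control' | 'treatment' {
  const hash = createHash('sha256').update(`${flag}:${userId}`).digest();
  const bucket = hash.readUInt32BE(0) / 0xffffffff; // value in [0, 1]
  return bucket < TREATMENT_SHARE ? 'treatment' : 'control';
}

// During rendering, the server branches on the flag instead of the browser
// rewriting the page afterwards - so there is nothing to flicker.
function renderCheckout(userId: string): string {
  const variant = assignVariant(userId, FLAG);
  // logExposure(userId, FLAG, variant); // hypothetical analytics call
  return variant === 'treatment' ? renderNewCheckout() : renderOldCheckout();
}

function renderNewCheckout(): string { return '<div>new checkout</div>'; }
function renderOldCheckout(): string { return '<div>old checkout</div>'; }
```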

While there are good reasons and use cases for server-side testing, it is typically a very developer-focused approach that works best for companies with the resources to build server-side tests. Because changes need to be implemented in the core code base, it becomes harder to rely on external agencies to build tests, which is a very common way of working today. Those agencies typically don’t have access to the core code base, and in a true server-side environment they can no longer build tests on top of the page the way they do today, using JavaScript to manipulate whatever needs to change in the browser.

All told, server-side testing works well if developers are available to support the testing program. However, it remains a very technical approach and takes away a lot of agility. We are not aware of any visual editors that work reliably for this type of testing.

CDN-Assisted Testing (Edge Testing)

CDN-assisted testing, or Edge Testing, is a technology promoted by some vendors to avoid the limitations of a front-end testing approach without having to go server-side and fully change how a testing program is resourced and operated. The approach is very similar to traditional front-end testing, but instead of compiling a single static tag with everything that might possibly be required for any visitor, part of the decisioning is done at the CDN. As a result, a smaller tag can be sent to the browser, one that is quicker to download and faster to execute.
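A rough sketch of what edge decisioning can look like, assuming a Cloudflare-Workers-style runtime; the experiment list, the injection strategy, and the global variable are hypothetical, and a real implementation would also persist assignments (e.g. in a cookie) rather than re-randomizing on every request:

```typescript
// Sketch of edge decisioning in a Cloudflare-Workers-style runtime.
// Experiment config and injection strategy are illustrative assumptions.

interface Experiment { id: string; variants: string[]; }

const EXPERIMENTS: Experiment[] = [
  { id: 'hero-banner', variants: ['control', 'v1'] }, // placeholder experiment
];

export default {
  async fetch(request: Request): Promise<Response> {
    const origin = await fetch(request);  // fetch the page from the origin
    const html = await origin.text();

    // Decide simple, stateless experiments at the edge. A real implementation
    // would read/write a cookie so the assignment is stable across requests.
    const assignments = EXPERIMENTS.map(
      (e) => `${e.id}=${e.variants[Math.floor(Math.random() * e.variants.length)]}`,
    ).join('&');

    // Ship only the chosen variants to the browser, so the client-side tag
    // stays much smaller than a monolithic "everything for everyone" bundle.
    const injected = html.replace(
      '</head>',
      `<script>window.__edgeAssignments=${JSON.stringify(assignments)};</script></head>`,
    );

    return new Response(injected, { headers: { 'content-type': 'text/html; charset=utf-8' } });
  },
};
```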

Because CDNs have been built to deliver static assets and don’t typically execute much code, they are not able to process more complex targets like those based on behavior, weather, or integrated 1st-party data. In those situations you have to fall back to normal front-end testing, with all the limitations discussed above, or use other workarounds. At that point, you are coordinating your testing program across two different deployment methods that might not be tightly coupled or integrated with each other.

Decisioning on the Server-Side for Tag-Based Tests

Consequently, following the logic above, decisioning on the front end should be reduced as much as possible. At the same time, less technical users should not have to gain a deep understanding of web technologies to make a call on what approach to follow for each test or personalized experience.

This is where those solutions that have always relied on the back end to make decisions are advantageous.

From a capability perspective, these solutions are quite similar to those that make decisions within the front-end tag. They even offer the same visual editors and allow you to create tests and personalized experiences in the same way. From a technical perspective, however, they load on a site quite differently.

Instead of downloading and executing a single big tag, they will:

  1. Download a common library first (the same for every visitor).
  2. Tell the browser to hold off displaying certain parts of the page (via CSS) that might be affected by a personalized experience or test later (aka masking).
  3. Reach out to the back end to receive only the code that is relevant for the current visitor on that page.
  4. Apply the changes and release the masks that were set in Step 2.

Monetate not only follows the process described above but also increases performance further by splitting the first step into a small synchronous part and a slightly larger, more static, asynchronous one. This brings the synchronous, blocking code down to just a few kilobytes. The more experiences our clients have active at any one time, the more they benefit from this approach.
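To illustrate steps 1 through 4 above, here is a minimal sketch of what the browser-side portion can look like. The decision endpoint, the masked selectors, and the response shape are assumptions for illustration, not Monetate’s actual API.

```typescript
// Minimal sketch of back-end decisioning for a tag-based test, following
// steps 1-4 above. Endpoint, selectors, and response shape are placeholders.

const DECISION_ENDPOINT = 'https://decisions.example-vendor.com/v1/decide'; // placeholder
const MASKED_SELECTORS = ['#hero', '.recommendations'];                     // placeholder

// Step 2: hide only the elements that a test might touch (not the whole page).
const mask = document.createElement('style');
mask.textContent = `${MASKED_SELECTORS.join(',')} { visibility: hidden !important; }`;
document.head.appendChild(mask);

async function applyDecisions(): Promise<void> {
  try {
    // Step 3: ask the back end which changes apply to *this* visitor on *this* page.
    const res = await fetch(DECISION_ENDPOINT, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ url: location.href }),
    });
    const { actions } = await res.json(); // assumed shape: [{ selector, html }]

    // Step 4a: apply only the changes relevant to this visitor.
    for (const action of actions ?? []) {
      const el = document.querySelector(action.selector);
      if (el) el.innerHTML = action.html;
    }
  } finally {
    // Step 4b: always release the masks, even if the request fails.
    mask.remove();
  }
}

applyDecisions();
```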

Single Page Applications

Single page applications (SPAs), built with JS frameworks like React, Angular, and Vue, deserve a closer look. They have been designed with a very technical audience in mind and don’t play well with traditional front-end testing.

The benefit of SPAs is that they load common website elements (like the header, the footer, and much of the logic that runs in the background) only on the first page view, and then replace just the elements that actually change based on user interactions.

Traditionally, testing solutions change whatever needs to be changed as part of a fresh page load. In an SPA, that full page load only happens for the first page, which creates all kinds of challenges for tests that should run in response to subsequent user interactions.

The common ways to handle this are:

  1. Using virtual page views, or similar concepts, to trigger A/B tests whenever there are major changes to the site, most likely as a consequence of a user interaction (see the sketch after this list).
  2. Deeply embedding testing into the application, which is very similar to the server-side approach described above, although the code is (mostly) executed in the browser.
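The sketch below illustrates the first option: re-running decisioning on every virtual page view by hooking the History API, which is how most SPA routers change the URL. The activateExperiments() hook is a placeholder for whatever re-evaluation function your testing solution exposes.

```typescript
// Sketch of the "virtual page view" approach in a SPA: re-run the testing
// solution's decisioning whenever the route changes. activateExperiments()
// is a hypothetical vendor hook, not a real API.

declare function activateExperiments(): void; // hypothetical vendor hook

function onVirtualPageView(): void {
  // Re-evaluate which tests/experiences apply to the new "page".
  activateExperiments();
}

// Patch pushState/replaceState so route changes made by the SPA router are visible.
for (const method of ['pushState', 'replaceState'] as const) {
  const original = history[method].bind(history);
  (history as any)[method] = (...args: Parameters<History['pushState']>) => {
    const result = original(...args);
    onVirtualPageView();
    return result;
  };
}

// Back/forward navigation fires popstate rather than pushState.
window.addEventListener('popstate', onVirtualPageView);
```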

From a performance perspective, there is a tradeoff to make. Similar to the considerations above, some testing solutions load all test definitions on the first page load, slowing that initial load down. In this case, all tests for all possible scenarios have to be downloaded, because at that point it is unclear which path a visitor will take and which segments they will qualify for during the journey.

Other testing solutions continue to make decisions on the server side after every click, adding some overhead when transitioning from one page to the next. Because this can happen in parallel with the other requests for content and products to show on the new “page,” it ideally adds no lag to the transition.
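And here is a sketch of that second approach from a performance angle: the decision request runs in parallel with the SPA’s own content request, so it does not add a round-trip to the transition. Both endpoints and the two apply functions are placeholders.

```typescript
// Sketch: on each SPA route change, fetch the new content and the test
// decisions in parallel so decisioning adds no extra round-trip.
// Endpoints and the render/apply functions are placeholders.

async function navigate(path: string): Promise<void> {
  const [content, decisions] = await Promise.all([
    fetch(`/api/page-content?path=${encodeURIComponent(path)}`).then((r) => r.json()),
    fetch(`https://decisions.example-vendor.com/v1/decide?path=${encodeURIComponent(path)}`)
      .then((r) => r.json()),
  ]);

  renderPage(content);         // render the new "page" from the content response
  applyExperiments(decisions); // then layer the relevant test variations on top
}

// Placeholders for the SPA's own rendering and the vendor's apply step.
declare function renderPage(content: unknown): void;
declare function applyExperiments(decisions: unknown): void;
```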

Monetate follows the second approach to allow for a high number of concurrent live tests or personalized and segmented experiences without clients having to take a hit on performance.

Speed Is Critical for Ecommerce Success in 2023

If testing your program indicates that your testing solution needs a major performance upgrade, there’s a range of solutions that can help. Most importantly, you should consider a testing solution, like Monetate Personalization, that prioritizes speed, and that gets you as close to zero-impact testing as possible.

In a few minutes, we can show you how your current solution is affecting your load time, and how we can help improve your CX, conversion, and even SEO ranking. Your eCommerce success in 2023 could depend on it! Contact us here to set up a time to chat with our Personalization Experts.