Common CRO mistakes & how to avoid them

There are so many intricacies to Conversion Rate Optimization and UX testing that it’s easy to feel like you’re on information overload. There are tons of metrics you can compare, various reports you can dive into, a plethora of ways to form hypotheses and implement tests, several testing platforms to choose from, and the list goes on.

There are some great comprehensive posts out there to help you master these intricacies. A few of my favorites are:

  1. This master guide to CRO from CXL, which addresses every phase of the process from preliminary research to analyzing A/B test results. Use it when you’re getting started with CRO work.
  2. This CRO framework from Moz. It breaks the process down into easy-to-follow steps and poses (and answers) the questions that come up at each step. Use it for diving deeper into CRO.
  3. And Neil Patel’s guide to CRO, which breaks down CRO on a more conceptual level. Use it to fill in knowledge gaps and answer questions you have along the way.
  4. Craig Sullivan’s 1-hour CRO guide is also very comprehensive. Use it if you’re trying to get some quick research done.

There’s a lot to digest in those posts, so I wanted to give you some common mistakes and tricky issues with CRO that you might overlook if it is your first time going through the process.

To Refresh Your Memory

The very basic steps of a CRO process include:

  1. Exploratory heuristic analysis: go through the site as if you were a user and see where it does/doesn’t meet expectations as you move through the funnel. Explore where users might get stuck while navigating the site.
  2. Examination of the Multi-Channel Funnel, Landing Page, and Goal reports in Google Analytics. Determine what pages, events, or users would be most valuable to track. Also gather some basic benchmarks so that you have something to compare post-test stats to later.
  3. Set up tracking (if you don’t have it already) on key pages. Track important KPIs, CTAs, element visibility, etc. using something like Hotjar, GTM, GA goals, etc.
  4. Generate hypotheses from the gathered data and get approval. Prioritize these hypotheses based on ease of implementation, projected impact, and return on investment.
  5. Generate test ideas based on hypotheses.
  6. Implement tests using Optimizely, VWO, Google Optimize, etc.
  7. Wait until tests generate statistically significant results. Depending on the page and the traffic or conversion volume it gets, this may take a while.
  8. Reevaluate tests if unsuccessful, or implement the winning changes at scale.

Among these steps (which are already a summary) there are dozens of minute details that are very easy to overlook or skip altogether. The rest of this post will cover common CRO mistakes that a beginner might make:

  1. You don’t have tracking set up properly
  2. You run tests at inopportune times of the year
  3. The sample size for your test is inadequate
  4. You aren’t running your test long enough
  5. Statistics confuses you
  6. You treat all traffic the same
  7. Your process is unorganized

1. You don’t have tracking set up correctly

Having tracking correctly set up is crucial. Not only should you have heatmap and user session tracking set up on the pages you are planning to analyze, but you should also have micro-conversion tracking set up via Google Tag Manager. Setting up tracking in GTM for clicks and user engagement, like scroll depth and element visibility, will provide valuable data on how users interact with the elements and CTAs on your pages. This is immensely helpful when determining which pages to analyze and when forming hypotheses and test ideas for those pages.

One very valuable trigger in GTM is the element visibility trigger, which collects information on whether or not an element was actually visible on a page, and thus whether a user was likely to engage with it or could engage with it at all. Because it tracks specific elements rather than percentage scrolled, it gives you a more meaningful indication of scroll depth. This post on getting it set up is very helpful.

If you don’t have GTM event tracking set up at all, it’s pretty simple, and these guides can help: here’s a simple how-to for setting it up, or this video.
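If you’re curious what that trigger roughly automates for you, or need a custom fallback outside GTM, here’s a minimal sketch in TypeScript using the browser’s IntersectionObserver API. The selector and event name are hypothetical placeholders; GTM’s built-in trigger handles all of this without code.

```typescript
// Minimal sketch: push a dataLayer event the first time a CTA scrolls into view.
// The selector (#signup-cta) and event name (cta_visible) are hypothetical
// placeholders -- GTM's built-in element visibility trigger does this without code.
const dataLayer: Record<string, unknown>[] =
  (window as any).dataLayer ?? ((window as any).dataLayer = []);

const cta = document.querySelector('#signup-cta');
if (cta) {
  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          dataLayer.push({ event: 'cta_visible', elementId: 'signup-cta' });
          observer.disconnect(); // record only the first impression
        }
      }
    },
    { threshold: 0.5 } // count as "visible" once at least half the element is in view
  );
  observer.observe(cta);
}
```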

2. You don’t pay attention to the calendar when launching a test

Seasonality is not a myth. It can genuinely inform decision-making from preliminary research through to the A/B testing stage. Without taking seasonality into account, you run the risk of getting invalid or inaccurate results. For example, running a test at a known low point in your sales cycle, or at the end of December, may not be the wisest idea for most companies.

[Image: dip in traffic over time]

Why? Timing is crucial because:

  1. If you run a test during a lull in traffic, it will take longer to reach significance.
  2. You want the test to be performed on the most qualified traffic possible. Running a test during an unusually slow (or unusually busy) time of year may not give an accurate representation of your typical traffic.
  3. Traffic typically fluctuates quite a bit during the week, meaning you should probably start and end your test on the same day of the week for the most accurate results.
  4. Similarly, user intent around the holiday season, or at other points of the year, may not be indicative of your most qualified traffic. The resulting data could be less than useful for determining whether your test would be successful at scale (a hard enough task to accomplish with good data).

3. Your sample size for testing isn’t big enough

Having a large enough sample size to validate your test results is crucial. Without an appropriate sample size, you may never get results, or the results you get might not be meaningful. Luckily, there are calculators to help determine a proper sample size, and the sketch below shows the kind of math they typically run.
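As a rough sketch (and not a substitute for a dedicated calculator), the standard two-proportion sample size formula looks something like this in TypeScript, with z-values hardcoded for 95% confidence and 80% power. The baseline conversion rate and target lift in the example are hypothetical.

```typescript
// Back-of-the-envelope sample size per variation for an A/B test on conversion rate,
// using the standard two-proportion formula. z-values are hardcoded for 95%
// confidence (1.96, two-sided) and 80% power (0.84).
function sampleSizePerVariation(baselineRate: number, minDetectableLift: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + minDetectableLift); // relative lift you hope to detect
  const pBar = (p1 + p2) / 2;

  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));

  return Math.ceil(numerator ** 2 / (p2 - p1) ** 2);
}

// e.g. a hypothetical 3% baseline conversion rate and a 20% relative lift:
console.log(sampleSizePerVariation(0.03, 0.2)); // ≈ 13,900 visitors per variation
```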

It is also helpful to be conscious of the level of traffic your test pages receive. Low-traffic pages can be difficult to test on because it may take a long time to reach statistical significance, particularly if those pages see few conversions. Basing the impact of a test on a small number of conversions and visits may not indicate how the change would perform at scale. For sites or pages with low traffic, you might need to make bigger, bolder changes in your test variations instead of small tweaks in order to move the needle. From there, you can always adjust tests and reevaluate.

4. You’re not running the test for long enough

This point goes hand in hand with the sample size point above. You likely won’t have to do much of the math yourself, because most testing platforms have built-in features for calculating significance and reporting results to the tester. However, it is really important to understand how statistical significance works, even at a basic level, to make sense of A/B testing and your results.

Every A/B testing post you’ll find will say to run your test until it reaches statistical significance. But what does that mean exactly? In (very) short, statistical significance indicates how confident you can be that the difference you see between two or more variations is real rather than random chance. This can be confusing if you’re less mathematically inclined, but the next section of this post links to basic statistics primers written specifically for CRO.

Generally speaking, running your test until (or even slightly past) the point it reaches significance is a decent rule of thumb. Even if you obtain “significance” very shortly after you begin your test, it is wise to keep the test running to account for users who may convert several days after their initial visit. It is also important to run the test across at least one or two full business cycles, because, as stated previously, traffic fluctuates at different points of the week, month, quarter, etc.

I also like these articles: this one for explaining how long to run a test, and this one for explaining the factors that play into determining statistical validity.
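To make the “how long” question concrete, here’s a back-of-the-envelope sketch (with hypothetical traffic numbers) that turns a required sample size into a run time, rounded up to whole weeks so the test starts and ends on the same day of the week.

```typescript
// Rough sketch: estimate how long a test needs to run, given the required sample
// size per variation (from a calculator or formula) and average daily visitors to
// the test page. Rounds up to full weeks so the test spans whole business cycles.
function testDurationInDays(samplePerVariation: number, variations: number, dailyVisitors: number): number {
  const totalSampleNeeded = samplePerVariation * variations;
  const rawDays = totalSampleNeeded / dailyVisitors;
  return Math.ceil(rawDays / 7) * 7; // round up to whole weeks
}

// e.g. ~13,900 visitors per variation, 2 variations, 1,500 visitors/day (hypothetical):
console.log(testDurationInDays(13900, 2, 1500)); // 21 days (3 full weeks)
```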

5. You’re making some basic statistical errors

There are a lot of resources out there for testing methodology and for learning statistics basics that matter for CRO. One of the most important fundamentals is understanding statistical significance.

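Most testing platforms run the significance check for you, but seeing the mechanics once helps demystify it. Here’s a minimal sketch of the two-proportion z-test that underlies many simple A/B significance calculators; the visitor and conversion counts are hypothetical.

```typescript
// Minimal sketch of a two-proportion z-test comparing control and variation
// conversion rates. The counts passed in below are hypothetical.
function isSignificant(
  controlVisitors: number, controlConversions: number,
  variantVisitors: number, variantConversions: number
): { zScore: number; significantAt95: boolean } {
  const p1 = controlConversions / controlVisitors;
  const p2 = variantConversions / variantVisitors;
  const pooled = (controlConversions + variantConversions) / (controlVisitors + variantVisitors);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / controlVisitors + 1 / variantVisitors));
  const zScore = (p2 - p1) / standardError;
  return { zScore, significantAt95: Math.abs(zScore) >= 1.96 }; // 1.96 = two-sided 95% threshold
}

console.log(isSignificant(10000, 300, 10000, 360));
// => z ≈ 2.4, significant at 95% -- but only trust it after a full business cycle or two
```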

6. You treat all traffic the same

If you run an A/B test on a page and the variation performed poorly, the results can paint a very different picture when you break them down by a segment of traffic. For example, if you look at the breakdown between desktop and mobile results, it could turn out that a test generates highly significant results on mobile but is a bust on desktop. What works on desktop may not work on mobile, and vice versa. Here’s an illustrative example of how mobile vs. desktop test result data could be misleading:

[Image: misleading test data]

[Image: misleading test results]

In the example above, the changes in conversion rate on desktop and mobile effectively cancel each other out in the combined numbers. There would clearly be a missed opportunity on mobile if we viewed only the combined results instead of breaking them down by device.
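Here’s the same idea sketched with hypothetical numbers: the variation loses a little on desktop, wins a little on mobile, and the combined conversion rate looks completely flat.

```typescript
// Hypothetical numbers showing how combined results can hide a segment-level win.
interface SegmentResult { visitors: number; conversions: number; }

const control: Record<string, SegmentResult> = {
  desktop: { visitors: 5000, conversions: 250 }, // 5.0%
  mobile:  { visitors: 5000, conversions: 100 }, // 2.0%
};

const variant: Record<string, SegmentResult> = {
  desktop: { visitors: 5000, conversions: 225 }, // 4.5% -- worse on desktop
  mobile:  { visitors: 5000, conversions: 125 }, // 2.5% -- better on mobile
};

const combinedRate = (segments: Record<string, SegmentResult>): number => {
  const visitors = Object.values(segments).reduce((sum, s) => sum + s.visitors, 0);
  const conversions = Object.values(segments).reduce((sum, s) => sum + s.conversions, 0);
  return conversions / visitors;
};

console.log(combinedRate(control)); // 0.035 (3.5%)
console.log(combinedRate(variant)); // 0.035 (3.5%) -- the mobile lift is invisible in aggregate
```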

It is important to be conscious of segmenting results not only when analyzing test outcomes, but also during the initial research and hypothesizing that goes into your test ideas. Distinguishing between different types of traffic (e.g. mobile vs. desktop, new vs. returning users, or traffic source) helps you find patterns in the kinds of people who convert. Doing this can better inform the way you create hypotheses and tests, and in turn you may end up with far more meaningful results.

7. Your testing process is a little less than organized

A lot can get lost in the shuffle here, so it’s important to stay on top of a list of your prioritized hypotheses and test ideas, currently running tests, failed tests, and successful tests that will be iterated upon.

For example, it’s easy enough to keep track of results in a spreadsheet like this:

[Image: tracking results in a spreadsheet]
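If you’d rather keep the log somewhere more structured than a spreadsheet, one possible shape for each entry is sketched below; the field names and example values are only a suggestion, not a standard.

```typescript
// One possible shape for a test log entry -- the fields and example values are
// illustrative, not a standard.
interface TestLogEntry {
  hypothesis: string;        // what you believe will happen and why
  supportingData: string;    // the research or analytics backing it up
  page: string;              // where the test runs
  status: 'idea' | 'prioritized' | 'running' | 'won' | 'lost' | 'iterating';
  startDate?: string;        // ISO dates, filled in once the test launches
  endDate?: string;
  result?: string;           // observed lift, significance, and next steps
}

const example: TestLogEntry = {
  hypothesis: 'Moving the signup CTA above the fold will increase mobile signups',
  supportingData: 'Element visibility tracking shows many mobile users never see the CTA',
  page: '/pricing',
  status: 'prioritized',
};
```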

Recording all hypotheses in one place, along with the reasoning behind them and the data to back them up, will save you time and energy down the line, especially when communicating with clients and stakeholders.

There are also platforms designed specifically to manage CRO work. Effective Experiments is a comprehensive project management tool that holds everything from ideas to test results. It’s great for managing and sharing tests in one place that multiple people can access and review (aka great for sharing with stakeholders or team members who are not directly involved in the CRO process themselves).