29 April 2020
4 min read

6 tips to successfully run experiments

Let’s start by stating the obvious: basing marketing decisions purely on guesses and assumptions is never a good idea. Yet some marketers still rely on gut feeling and intuition when evaluating alternatives for their website design, call-to-action messages, email content and other online factors. Instead of predicting the outcome of these decisions by mere guesswork, you should run experiments like A/B tests and multivariate tests to optimise conversion rates and serve everyone a personalised experience across your touchpoints. In one of our previous blog posts we covered the what, why and how of A/B testing; here we’ll list 6 tips to get the most out of your online experiments.

Jens Buelens, Data Analyst

1. Define a clear hypothesis and success metrics

Don’t launch an experiment out of the blue or base your test on so-called ‘best practices’ you find online; start with your own data instead. What works for other organisations doesn’t necessarily work for yours, so don’t assume that the outcome of someone else’s experiment automatically applies to your case. Rather, take a look at your analytics data to find testing opportunities and develop a substantiated hypothesis before you run an experiment.

Furthermore, predefining a clear hypothesis and accompanying key performance indicators (KPIs) will eventually make it a lot easier to determine the success of your tests.

2. Watch out for the flicker effect

The flicker effect occurs when the original page content is briefly displayed (sometimes only for a few milliseconds) on an A/B tested page before the test variant appears. It’s caused by the time it takes to load the testing script and for the browser to apply its modifications. You obviously don’t want your audience to know they’re in a test, and you absolutely want to prevent the flicker effect from influencing your test results.

Luckily there are a few ways to prevent this. Some popular tools, like Google Optimize and Adobe Target, provide their users with anti-flicker scripts. It’s also recommended to avoid the WYSIWYG editors in these testing tools and edit the code directly to create variants. Other options include optimising your site’s load time and not using a tag manager to call the tags for your experiments.
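To illustrate the idea behind those anti-flicker scripts, here’s a minimal sketch in TypeScript. The class name, event name and timeout are assumptions of our own; tools like Google Optimize ship their own official snippet, which you should use instead.

```typescript
// Minimal anti-flicker sketch: hide the page until the testing script has
// applied the variant, with a timeout failsafe so the page never stays blank.
// Pair it with this CSS rule:  .async-hide { opacity: 0 !important; }

const HIDE_CLASS = 'async-hide'; // hypothetical class name
const TIMEOUT_MS = 2000;         // failsafe: reveal after 2s no matter what

document.documentElement.classList.add(HIDE_CLASS);

function revealPage(): void {
  document.documentElement.classList.remove(HIDE_CLASS);
}

// Assumes the experiment script dispatches this custom event once the variant
// has been rendered; the event name is an illustration, not a standard
// emitted by any particular tool.
window.addEventListener('experiment-ready', revealPage, { once: true });

// Whatever happens, never hide the page longer than the timeout.
setTimeout(revealPage, TIMEOUT_MS);
```

The timeout is the crucial part: if the testing script fails to load, visitors still get the original page after a short delay instead of a blank screen.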

3. Test your tests

This is an important one. Before setting an experiment live, ALWAYS verify the layout and functionality of your test variants in different browsers, devices and viewports. A design may look great on your computer screen but completely different on mobile devices, which could lead to a distorted test with skewed results.

4. Do not stop early

Before you run a test, always calculate how much time it will need to reach statistical significance. With a handy test calculator you can determine test duration and sample size while accounting for your desired confidence level, statistical power and estimated audience numbers. Always respect this predefined test duration and sample size. After all, ending a test prematurely may result in misleading or inconclusive insights.
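As a concrete illustration of the maths such a calculator performs, here’s a hedged TypeScript sketch of the standard sample-size formula for a two-proportion z-test; the baseline conversion rate, minimum detectable uplift and traffic figures are made up for the example.

```typescript
// Estimate the sample size per variant for an A/B test on a conversion rate,
// using the standard two-proportion z-test formula:
//   n = (z_alpha + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p2 - p1)^2

function sampleSizePerVariant(
  baselineRate: number, // current conversion rate, e.g. 0.05 for 5%
  minUplift: number,    // smallest relative lift worth detecting, e.g. 0.10 for +10%
  zAlpha = 1.96,        // 95% confidence level (two-sided)
  zBeta = 0.84,         // 80% statistical power
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + minUplift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// Made-up inputs: 5% baseline conversion, +10% relative uplift, 4,000
// daily visitors split evenly over two variants.
const perVariant = sampleSizePerVariant(0.05, 0.10); // ≈ 31,196 visitors
const days = Math.ceil((perVariant * 2) / 4000);     // ≈ 16 days
console.log(`${perVariant} visitors per variant, roughly ${days} days`);
```

Note how quickly the required sample grows when the baseline rate is low or the expected uplift is small; that’s exactly why you calculate before you launch.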

Also make sure your sample is representative by running your tests in full weeks (or full business cycles) at a time. That way you won’t miss relevant time-dependent patterns, such as differences between weekday and weekend behaviour.

5. Don’t overanalyse

Digging into your test data and analysing it on a deeper level can give you really useful insights. You could, for example, apply different segments to your test results to examine how different audiences react to each test variant. This way you may uncover insights that don’t hold for the total test audience but do apply to certain segments. But beware: don’t take it too far, and always keep an eye on the significance level and power of your tests. Stick to your predefined KPIs and always keep in mind: “if you torture data long enough, it will confess to anything.”
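To make that caveat concrete, the sketch below runs a simple two-proportion z-test per segment, with invented traffic and conversion numbers. Each segment needs its own significance check, and the more segments you inspect, the more likely one of them looks significant purely by chance.

```typescript
// Per-segment two-proportion z-test (illustrative numbers only).

interface Segment {
  name: string;
  visitorsA: number; conversionsA: number; // control
  visitorsB: number; conversionsB: number; // variant
}

function zStatistic(s: Segment): number {
  const pA = s.conversionsA / s.visitorsA;
  const pB = s.conversionsB / s.visitorsB;
  const pooled = (s.conversionsA + s.conversionsB) / (s.visitorsA + s.visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / s.visitorsA + 1 / s.visitorsB));
  return (pB - pA) / se;
}

const segments: Segment[] = [
  { name: 'mobile',  visitorsA: 9000, conversionsA: 430, visitorsB: 9100, conversionsB: 505 },
  { name: 'desktop', visitorsA: 6000, conversionsA: 350, visitorsB: 5900, conversionsB: 348 },
];

for (const s of segments) {
  const z = zStatistic(s);
  // |z| > 1.96 corresponds to significance at the 95% level. Testing many
  // segments inflates the false-positive rate (multiple comparisons), so
  // treat per-segment 'wins' with extra scepticism.
  console.log(`${s.name}: z = ${z.toFixed(2)}${Math.abs(z) > 1.96 ? ' (significant)' : ''}`);
}
```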

6. Standardise and document

Make your job a lot easier by defining a uniform way for you and your team to approach each step of the testing process and report on your findings. A smooth structure will enable you to speed up test deployment and ease the evaluation process.

To avoid repeating experiments you’ve already performed in the past, it’s also strongly recommended to document your tests systematically. Building a library of your findings not only lets you archive your knowledge; it may also help educate your colleagues.
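As a starting point, such documentation can be as simple as one structured record per experiment. The sketch below shows one possible shape; the field names and example values are merely a suggestion, not a prescribed standard.

```typescript
// One possible shape for an entry in an experiment library (illustrative).

interface ExperimentRecord {
  id: string;
  hypothesis: string;            // 'Changing X will improve Y because Z'
  primaryKpi: string;            // the predefined success metric
  variants: string[];
  startDate: string;             // ISO dates, e.g. '2020-03-02'
  endDate: string;
  sampleSizePerVariant: number;
  outcome: 'winner' | 'loser' | 'inconclusive';
  learnings: string;             // what the next experiment can build on
}

const example: ExperimentRecord = {
  id: 'exp-042',
  hypothesis: 'Shortening the checkout form will raise completions because it reduces effort',
  primaryKpi: 'checkout completion rate',
  variants: ['control', 'short-form'],
  startDate: '2020-03-02',
  endDate: '2020-03-23',
  sampleSizePerVariant: 31000,
  outcome: 'winner',
  learnings: 'Fewer form fields lifted completions, on mobile in particular.',
};
```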
