You’ve probably had times when you’re visiting a website or using an app and notice something just slightly off. Maybe the colours look a little different, or maybe the layout has shifted a tiny bit, only for it to fix itself the next time you go back. You might even have noticed that the version of the app you’re seeing isn’t exactly the same as your friend’s. One possible explanation is that the app is running an A/B test behind the scenes.
A/B tests are a common way for companies to validate changes to their products before releasing them to the entire user base. The basic idea is to present a change to a small segment of the overall audience and see how it affects their behaviour. Any change you make could have a positive or a negative impact on your product, or in some cases, no impact at all.
Say you have a hunch that moving a button on your website to a more prominent spot would get more users to click it, and in turn make you more money. Sounds reasonable. But how would you validate this? Well, one way is to just make the change and see what happens. If you switch things up and start making more money, boom. Problem solved.
This would be a fine approach under a lot of circumstances. If your changes are unlikely to cause massive shifts in user behaviour, just shipping them is acceptable. But if your users are sensitive to changes in your product, or if these changes can negatively impact your bottom line, you might want to be a little more cautious. To continue our slightly silly analogy, what if the button is actually perfectly placed? What if moving it somewhere else comes across as too aggressive for your customers, driving them away altogether? You want to verify your hypothesis, but you also want to minimize the risk. Hedge your bets, so to speak.

One way you could mitigate this risk is by, surprise surprise, using A/B tests. To truly validate your idea, you want to run a scientific experiment of sorts. In this case, we’d create a new version of the website with the button in a different, more prominent location. The key, however, is that we serve this version to only 50% of all users. The other 50% continue to see the original website with the button unchanged. This gives us two user groups to compare.

If all of this sounds too scientific, it’s because it usually is a scientific process. Understanding user behaviour is complicated, and A/B tests can often get quite granular. They require a significant amount of statistical knowledge to extract any meaningful insights. But they’re also a very powerful tool if used properly.
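To make the mechanics a little more concrete, here’s a minimal sketch of what a 50/50 split and the follow-up comparison could look like in Python. Everything here is illustrative: the function names, the experiment name, and the numbers in the demo are made up, and real A/B testing platforms are considerably more sophisticated than this.

```python
import hashlib
from math import sqrt


def assign_variant(user_id: str, experiment: str = "button-position") -> str:
    """Deterministically bucket a user into 'control' or 'treatment'.

    Hashing the user ID (salted with the experiment name) means the same
    user always sees the same variant on every visit, without storing
    any per-user state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # an integer in 0..99
    return "treatment" if bucket < 50 else "control"  # 50/50 split


def two_proportion_z(conversions_a: int, n_a: int,
                     conversions_b: int, n_b: int) -> float:
    """Two-proportion z-test: how surprising is the observed difference
    in conversion rates, if the two variants truly performed the same?"""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se


if __name__ == "__main__":
    print(assign_variant("user-42"))  # same user, same answer every time

    # Hypothetical results: 500 of 10,000 control users clicked the button,
    # versus 570 of 10,000 treatment users.
    z = two_proportion_z(500, 10_000, 570, 10_000)
    print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 95% level
```

Hashing rather than randomizing on every request is a common design choice: it keeps the experience stable for each user, which matters because flickering between variants would itself distort the behaviour you’re trying to measure.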
As users, we’re subject to A/B testing all the time without realizing it. In fact, any given product could be running multiple tests simultaneously, as long as those tests don’t overlap. Here’s Netflix explaining just how religiously they use A/B testing to make their product better over time.
So the next time you see things that seem off when browsing the internet, you know what’s going on.