In a recent project, we aimed to increase the conversion rate for an e-commerce client by improving their checkout process. We identified several potential changes to test, including modifying the call-to-action button color and simplifying the form fields. Instead of guessing which change would be most effective, we formulated a hypothesis for each potential adjustment. For instance, we hypothesized that a green button would outperform the existing blue button due to its association with "go" or "success" in many cultures. We set up A/B tests to compare these variations against the current version of the checkout page. The tests revealed that simplifying the form fields led to a 15% increase in completed transactions, while changing the button color had no statistically significant impact. This data-driven approach let us focus on the change that truly made a difference rather than relying on assumptions, and it reinforced how powerful hypothesis testing is for guiding decisions: let data, not guesswork, drive your strategy.
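For readers who want to see the mechanics, here is a minimal sketch of the kind of test behind that comparison, using a two-proportion z-test from statsmodels. All counts are hypothetical, chosen only to reproduce roughly the 15% lift mentioned above; the real traffic numbers aren't in the answer.

```python
# A minimal sketch of an A/B test on completed checkouts.
# All counts below are hypothetical, not the client's real data.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: (completed checkouts, total sessions) per variant.
control = (480, 4000)  # original form fields, 12.0% conversion
variant = (552, 4000)  # simplified form fields, 13.8% conversion (~15% lift)

successes = [variant[0], control[0]]
totals = [variant[1], control[1]]

# One-sided test: does the simplified form convert better than control?
z_stat, p_value = proportions_ztest(successes, totals, alternative="larger")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# Observed relative lift in conversion rate.
lift = (variant[0] / variant[1]) / (control[0] / control[1]) - 1
print(f"observed lift: {lift:.1%}")
```

With these made-up counts the one-sided p-value comes out around 0.008, i.e. the kind of result that would justify calling the form-field change significant while a flat button-color test would not.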
In one project, we used hypothesis testing to decide whether a new feature on our app would improve user engagement. We hypothesised that adding a personalised recommendation engine would increase the time users spent on the app. To test this, we set up an A/B test with two groups: one group had access to the new feature, while the other continued using the app as usual. We collected data on user engagement for both groups and used hypothesis testing to analyse the results. The test showed a significant increase in average session duration for the group with the new feature. This data-backed insight helped us confidently decide to roll out the feature to all users. By applying hypothesis testing, we ensured our decision was grounded in solid evidence rather than guesswork, leading to a meaningful boost in user engagement and satisfaction.
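As a concrete illustration, here is a minimal sketch of how such a comparison might be analysed, with synthetic session durations standing in for the real app data (neither the actual numbers nor the exact test used is given in the answer). Welch's two-sample t-test is one common choice for comparing mean session duration between groups.

```python
# A sketch of comparing session duration between two A/B groups.
# The durations are synthetic; the answer doesn't include the real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-user average session durations, in minutes.
control = rng.normal(loc=8.0, scale=3.0, size=5000)    # app as usual
treatment = rng.normal(loc=8.6, scale=3.0, size=5000)  # recommendation engine

# Welch's t-test (unequal variances), one-sided:
# is mean session duration higher with the new feature?
t_stat, p_value = stats.ttest_ind(treatment, control,
                                  equal_var=False, alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
print(f"mean lift: {treatment.mean() - control.mean():.2f} minutes")
```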
You and whoever else is involved in asking this question are statistically naive. Classical hypothesis testing is worthless for decision making. Its focus is the null hypothesis of no effect. Classical statisticians condition on the null hypothesis and calculate the probability distribution of the data, both the data observed and the data that were possible but not observed. All reasonable decision making must consider many different possibilities, not just the hypothesis of no effect. Evaluating decisions requires weighing the possibilities based on the available data. In other words, making decisions in effect demands taking a Bayesian approach. Bayesian statisticians condition on the observed data and find the probabilities of the various hypotheses. That said, there are people who say they use hypothesis testing in making decisions. They do. And some are good at it. But they have to manipulate hypothesis tests subjectively, inverting the hypothesis-testing probabilities of data given hypotheses into Bayesian probabilities of hypotheses given the observed data. Hypothesis testing is intrinsically antithetical to decision making. You can quote me.
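To make the contrast concrete, here is a small sketch of the Bayesian inversion described above, using hypothetical conversion counts and flat Beta(1, 1) priors. Instead of the probability of the data under a null of no effect, it reports the probability of a hypothesis given the observed data, which can feed a decision rule directly.

```python
# A sketch of the Bayesian inversion: instead of P(data | no effect),
# compute P(B's true rate exceeds A's | observed data).
# Counts are hypothetical; priors are flat Beta(1, 1).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conversions / trials for two variants.
a_success, a_total = 480, 4000
b_success, b_total = 552, 4000

# Beta-Binomial conjugacy: posterior for each rate is
# Beta(successes + 1, failures + 1). Sample both posteriors.
draws = 200_000
post_a = rng.beta(a_success + 1, a_total - a_success + 1, size=draws)
post_b = rng.beta(b_success + 1, b_total - b_success + 1, size=draws)

# Probability of the hypothesis "B's true rate exceeds A's", given the data.
print(f"P(rate_B > rate_A | data) ~ {(post_b > post_a).mean():.3f}")

# A posterior quantity a decision rule can use directly:
# the expected loss (in conversion rate) of shipping B when A is better.
exp_loss_b = np.maximum(post_a - post_b, 0).mean()
print(f"expected loss of choosing B: {exp_loss_b:.5f}")
```

This is exactly the "probabilities of hypotheses given the observed data" the answer demands: a quantity you can weigh against costs and benefits, rather than a tail probability computed under a hypothesis nobody believes.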