If you have read any of the previous articles, you should by now have realised the importance of incorporating a scientific method into your new business venture; you can read more in the validated learning and build-measure-learn articles. This article explores the methods you can use to track your progress and, ultimately, guide your business decisions.
When you launch a new product, it can be difficult to know which features to prioritise and which bugs to fix. You might even be struggling to get more customers to sign up and pay for the product or service you are offering. Or perhaps you want to expand the number of people who have heard about your offering but can’t figure out how to do it.
There are many solutions to these problems, and they will often be unique to your business circumstances. So how do you know whether a change you have made is a step in the right direction? The answer is testing. Two methods are commonly employed for real-world testing: cohort analysis and split testing.
Cohort analysis looks at groups of customers who encountered your product or service independently of one another. For example, one cohort could be the customers who visited your site on Monday and another the customers who visited on Wednesday. Another way to generate cohorts is to acquire brand-new customers each day through Google or Facebook adverts: each customer is new to your offering, so you can test changes on a fresh batch of real customers every day.
Each group of customers should be comparable, and the only difference between their experiences should be the change you are testing. You can then analyse how each group performed and work out whether your change benefited one group more than another. In other words: has your change helped?
Example metrics to track include the click-through rate, the purchase rate and the average spend, among others. The choice of metric depends entirely on your hypothesis and how you expect the change to affect your business (are you attempting to attract new customers, or to increase conversions?). The data can then be used to ask why some customers respond to your product and others do not, and this information lets you refine your product further and improve your chosen metrics for a given cohort. The closer your efforts align with what the customer really wants, the more productive your experiments will be!
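As a minimal sketch of comparing one metric across two cohorts (the visitor records and field names here are invented purely for illustration, not drawn from any real analytics system):

```python
# Hypothetical cohort data: each record is one visitor and whether they purchased.
monday_cohort = [
    {"visitor": "a1", "purchased": True},
    {"visitor": "a2", "purchased": False},
    {"visitor": "a3", "purchased": False},
    {"visitor": "a4", "purchased": True},
]
wednesday_cohort = [
    {"visitor": "b1", "purchased": True},
    {"visitor": "b2", "purchased": True},
    {"visitor": "b3", "purchased": True},
    {"visitor": "b4", "purchased": False},
]

def purchase_rate(cohort):
    """Fraction of visitors in the cohort who made a purchase."""
    return sum(v["purchased"] for v in cohort) / len(cohort)

print(f"Monday cohort:    {purchase_rate(monday_cohort):.0%}")     # 50%
print(f"Wednesday cohort: {purchase_rate(wednesday_cohort):.0%}")  # 75%
```

With real traffic you would use far larger cohorts, but the shape of the analysis is the same: one comparable group per cohort, one metric, one change between them.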
Split Testing (A/B Testing)
A common method in the marketing community is A/B testing, in which two slightly different versions of the same product are offered at the same time. You observe what customers do in each branch of the test and make inferences about the impact your variations are having.
Split tests can be slightly more difficult to create, but the extra set-up work is more than offset by the time saved in eliminating work that does not improve the customer experience. Used well, split tests quickly show you what your customers do and do not want.
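One practical detail in a split test is assigning each visitor to a variant consistently, so the same person always sees the same version. A minimal sketch, assuming hypothetical visitor IDs and sign-up outcomes (the IDs, outcomes, and 50/50 split are all invented for illustration):

```python
import hashlib
from collections import Counter

def assign_variant(visitor_id: str) -> str:
    """Deterministically split visitors roughly 50/50 by hashing their ID,
    so a returning visitor always lands in the same variant."""
    digest = hashlib.sha256(visitor_id.encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Hypothetical outcomes: (visitor ID, did they sign up?)
outcomes = [
    ("user-1", True), ("user-2", False), ("user-3", True),
    ("user-4", False), ("user-5", False), ("user-6", True),
]

views = Counter()
signups = Counter()
for visitor, signed_up in outcomes:
    variant = assign_variant(visitor)
    views[variant] += 1
    if signed_up:
        signups[variant] += 1

for variant in ("A", "B"):
    rate = signups[variant] / views[variant] if views[variant] else 0.0
    print(f"Variant {variant}: {signups[variant]}/{views[variant]} sign-ups ({rate:.0%})")
```

With samples this small the difference means nothing; in practice you would run the test until each variant has enough visitors to draw a reliable conclusion.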
After you have analysed how customers interact with your product or service, you will need to create a report as a record of your findings. Here are a few guiding principles to follow when creating a report.
First, the report should be actionable: it should demonstrate a clear cause and effect and point to a concrete action you can take to make a positive change.
Second, the report should be accessible, both physically accessible to staff and easy to understand. You want your team to use the findings to guide their actions rather than spend time puzzling out what they mean. One way to achieve this is to present every metric consistently throughout the report (for example, the click-through rate is XX%). The report should also be laid out in a consistent format to make it easier to follow.
Third, the report should be auditable: the data upon which it is based should be accessible and trusted. In practice, this means reports should be drawn directly from master data, not from derived data.
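To make the auditability point concrete, here is a minimal sketch of computing a report metric directly from raw master records rather than from a pre-aggregated figure (the event log and field names are hypothetical, invented for illustration):

```python
# Hypothetical master data: the raw event log itself, not a derived summary.
# Every number in the report can be traced back to these records and audited.
raw_events = [
    {"visitor": "v1", "event": "impression"},
    {"visitor": "v1", "event": "click"},
    {"visitor": "v2", "event": "impression"},
    {"visitor": "v3", "event": "impression"},
    {"visitor": "v3", "event": "click"},
]

impressions = sum(1 for e in raw_events if e["event"] == "impression")
clicks = sum(1 for e in raw_events if e["event"] == "click")
click_through_rate = clicks / impressions

print(f"Click-through rate: {click_through_rate:.0%}")
```

Because the metric is recomputed from the raw events each time, anyone questioning a figure in the report can re-run the calculation against the same master data and verify it.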
- The Lean Startup; Eric Ries
- The $100 Startup; Chris Guillebeau