How do you measure campaign effectiveness?

This is probably the most common problem marketing teams have asked me to solve in my career. The difficulty lies in understanding the impact of a single campaign amidst everything else that may be affecting your customers. Most marketing teams have multiple campaigns running at the same time; on top of that there are sales and customer service activities, seasonality, underlying growth or decline, and many other factors influencing customer behaviour.

Thankfully we can use a couple of statistical techniques to neutralise all the noise and home in on the one campaign we want to understand.

For example, let's look at a campaign that is intended to increase customer spend in the period after it finishes. It could be an advert, a discount, a coupon, a telesales campaign, and so on.

Firstly, we will look at the customers we are targeting with the campaign and calculate their average spend over a time period before the campaign ran. We'll then look at the same customers over a similar time period after the campaign ran, and calculate their new average spend. The difference between the two tells us how effective the campaign was. This is summarised in the diagram below.

[Figure: average customer spend over time, before and after the campaign]
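To make this concrete, here's a minimal sketch of the pre/post calculation in Python, assuming a hypothetical transactions file with a customer ID, a transaction date and a spend amount per purchase (all column names and dates below are illustrative):

```python
import pandas as pd

# Hypothetical transaction data: one row per purchase.
transactions = pd.read_csv("transactions.csv", parse_dates=["txn_date"])

# Illustrative campaign dates and comparison window.
campaign_start = pd.Timestamp("2024-03-01")
campaign_end = pd.Timestamp("2024-03-31")
window = pd.Timedelta(days=90)  # compare 90 days before vs 90 days after

before = transactions[
    (transactions["txn_date"] >= campaign_start - window)
    & (transactions["txn_date"] < campaign_start)
]
after = transactions[
    (transactions["txn_date"] > campaign_end)
    & (transactions["txn_date"] <= campaign_end + window)
]

# Average total spend per customer in each window.
avg_before = before.groupby("customer_id")["spend"].sum().mean()
avg_after = after.groupby("customer_id")["spend"].sum().mean()

print(f"Average spend before: {avg_before:.2f}")
print(f"Average spend after:  {avg_after:.2f}")
print(f"Change:               {avg_after - avg_before:.2f}")
```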

This may look good on the surface – our average spend per customer clearly went up after the campaign. But what about the customers who didn't receive the campaign? Let's add them to the diagram.

[Figure: average customer spend over time for campaign and non-campaign customers]

Oh no, it looks like the non-campaign customers increased their average spend by more than our campaign customers. So, was our campaign a bust? Well, maybe. The group of customers that did not get the campaign may be very different to our campaign customers – and they certainly had a lower average spend before the campaign.

To make a fair comparison we need to create two groups of customers with similar characteristics, then perform the calculations outlined above and look at the difference in average spend. Fortunately, there is a statistical technique called 'stratified sampling' that allows us to build such groups. Once we have our 'sampled' customers, we can analyse each group's change in spend. Here's an updated diagram.

[Figure: average customer spend over time for the stratified campaign and control groups, with a green arrow marking the campaign group's additional increase]
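Here's a rough sketch of how that stratified sampling might look in code, assuming a hypothetical customer-level table with a campaign flag and each customer's prior spend (again, all names are illustrative). It bins customers by prior spend and draws a matching number of non-campaign customers from each bin:

```python
import pandas as pd

# Hypothetical customer-level data: customer_id, campaign (0/1), spend_before.
customers = pd.read_csv("customers.csv")

# Define strata by binning prior spend into quintiles.
customers["stratum"] = pd.qcut(customers["spend_before"], q=5, labels=False)

treated = customers[customers["campaign"] == 1]
control_pool = customers[customers["campaign"] == 0]

# From each stratum, sample as many non-campaign customers as there are
# campaign customers, so the two groups match on prior spend.
samples = []
for stratum, group in treated.groupby("stratum"):
    pool = control_pool[control_pool["stratum"] == stratum]
    samples.append(pool.sample(n=min(len(group), len(pool)), random_state=42))
control = pd.concat(samples)

# Sanity check: the pre-campaign averages should now be close.
print(treated["spend_before"].mean(), control["spend_before"].mean())
```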

Looking at the diagram, you'll first notice that the two groups now have a very similar spend before the campaign, so our stratified sampling has worked well. After the campaign, both groups increased their average spend, but the campaign group's spend increased more (the green arrow shows the additional increase). So, we could say the campaign increased our customers' average spend by the amount shown by the green arrow. Possibly. If our customers' spend varies a lot over time, the difference may simply be down to that underlying variation – but thankfully there is another statistical technique we can use to check whether that is the case.
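Reading the green arrow off the diagram amounts to a difference-in-differences calculation: the campaign group's change in spend minus the control group's change. Continuing the sketch above, and assuming an illustrative spend_after column for each customer:

```python
# Change in average spend for each (stratified) group.
treated_change = treated["spend_after"].mean() - treated["spend_before"].mean()
control_change = control["spend_after"].mean() - control["spend_before"].mean()

# The 'green arrow': the extra increase attributable to the campaign.
uplift = treated_change - control_change
print(f"Estimated uplift per customer: {uplift:.2f}")
```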

We can use hypothesis testing to see if the difference between the groups is 'statistically significant'. Depending on how customer spend is distributed, we can apply different statistical tests to determine whether the difference is significant or not. A significance test tells us how likely it is that the observed difference in spend between the campaign and non-campaign groups is real rather than down to chance. Typically, we look for 95% confidence before we attribute the difference to the campaign. If the test comes back at 95% or higher, we can be confident the campaign increased spend, and it is probably a campaign that should be run again.
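As an example, if per-customer spend changes are roughly normally distributed, a two-sample t-test is a common choice (a Mann-Whitney U test is a common alternative for heavily skewed spend). A sketch using SciPy, continuing with the illustrative data above:

```python
from scipy import stats

# Per-customer change in spend for each group.
treated_delta = treated["spend_after"] - treated["spend_before"]
control_delta = control["spend_after"] - control["spend_before"]

# Welch's t-test: is the campaign group's change different from the control's?
t_stat, p_value = stats.ttest_ind(treated_delta, control_delta, equal_var=False)

print(f"p-value: {p_value:.4f}")
if p_value < 0.05:  # corresponds to the 95% confidence threshold
    print("Difference is statistically significant at the 95% level.")
else:
    print("Not significant at the 95% level - more data may be needed.")
```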

But what if the test is only 90% confident? Well, we then have a few options:

  • Increase the sample sizes to get more data for the statistics to work with (without sacrificing the stratification) – typically, the smaller the difference between the two groups, the more data you need to detect a statistically significant difference. A power analysis, sketched after this list, can estimate how much data that is.
  • Run the campaign again – this might be viable if the campaign is not too costly; like the option above, it provides more data for the statistics to analyse.
  • End the campaign – if it was an expensive campaign and the results are inconclusive, this might be the point at which losses are cut and an alternative campaign is considered.
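For the first option, a power analysis can estimate how much more data is needed. Here's a sketch using statsmodels; the effect size (Cohen's d of 0.2, a 'small' effect) is an illustrative assumption you would replace with the smallest uplift you care about detecting:

```python
from statsmodels.stats.power import TTestIndPower

# How many customers per group are needed to detect a small effect
# (Cohen's d = 0.2) at the 95% confidence level with 80% power?
n_per_group = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Customers needed per group: {n_per_group:.0f}")  # roughly 394
```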
If you'd like to find out more about how analytics, statistics, machine learning and data science can help with your organisation's challenges, then we'd love to chat! Call us on 07 3394 8459 or select 'Contact us' from the menu at the top of the page.