Ideas | Measurement | 2021-08-10

A short guide to building marketing-mix-models in the new economy

Robin Sheridan

Over the last ten years, the attention of marketers measuring their media has drifted in the same direction as their ad investment: towards digital and attribution modelling, and away from more traditional marketing mix models (MMM).

But as the limitations of attribution have come to light - walled gardens, privacy concerns, and the difficulty of integrating offline media - advertisers are returning to MMM. Indeed, many brands born in the digital age are turning to MMM for the first time.

However, recent trends may compromise the usefulness of a “classical MMM”, or even make it impossible to build at all. Why is that? 

Challenge #1: "De-averaging" of media consumption

The Problem

It’s well known that media consumption is changing faster than ever. We also know that these shifts haven’t happened homogeneously across the population: you can no longer describe the consumption habits of "most people". The chart shows how the consumption of one age group (the young) has no relation to that of another - it’s "de-averaged".

De-averaging is a problem for MMM. If the audience of one channel bears little relation to the audience of another, how can the model compare performance across both? It’s like comparing apples with oranges. 

The Solution

The solution is to build a separate model for each audience. This helps advertisers understand how each channel performs for each customer segment of interest.

For example, we could build one model for older customers and another for younger customers. Understanding how different channels perform in each model would help us make more informed choices about which channels will really drive growth.

In practical terms, the choice of customer segments to model depends on the kind of business and the channels involved. A mass-market brand with a simpler targeting approach might need fewer models than a niche one.

Fortunately, the proliferation of MMM software (whether proprietary or open source) means that creating many models at once is not as resource-intensive as it used to be. 
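
As a rough illustration of the segment-by-segment approach, the sketch below fits one simple linear MMM per segment on simulated data. The channel names, segment labels, and numbers are all hypothetical placeholders; the point is only the pattern of looping over segments and comparing channel coefficients.

```python
import numpy as np
import pandas as pd

# Hypothetical weekly data: sales and channel spend, broken out by segment.
# Column names and values are illustrative, not from any real dataset.
rng = np.random.default_rng(0)
weeks = 104
channels = ["tv", "social", "search"]
segments = ["under_35", "over_35"]

frames = []
for seg in segments:
    spend = pd.DataFrame(rng.gamma(2.0, 50.0, size=(weeks, len(channels))),
                         columns=channels)
    sales = 200 + spend @ rng.uniform(0.2, 1.5, len(channels)) + rng.normal(0, 30, weeks)
    frames.append(spend.assign(sales=sales, segment=seg))
data = pd.concat(frames)

# Fit one simple linear MMM per segment and compare channel coefficients.
for seg, df in data.groupby("segment"):
    X = np.column_stack([np.ones(len(df)), df[channels].to_numpy()])
    coefs, *_ = np.linalg.lstsq(X, df["sales"].to_numpy(), rcond=None)
    print(seg, dict(zip(channels, np.round(coefs[1:], 2))))
```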

 

Challenge #2: The need for granularity

The Problem

Cookie-level data, abundant until recently, allowed for a granular view of media performance. It was possible to understand how well each placement works, at least in theory.

Due to privacy concerns, those days are effectively over. However, old habits die hard, and marketers are used to hyper-granular views of their media performance. Traditionally this has been challenging for MMMs. It’s difficult to unpick granularity from random noise within any model. 

Given that marketers are used to granularity - and will continue to demand it - is there a way MMMs can keep up?

The Solution

MMMs don’t work well if each component of a channel is inserted into the model separately - there is usually too much noise. Instead, the answer is to build models with an in-built notion of hierarchy. This technique (known as Bayesian Hierarchical Modelling) means giving the model a "clue" that the different components of one channel form part of a wider whole. It’s a mathematical way of telling the model: "these things are quite similar to each other, but not identical".
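
A minimal sketch of that idea, using PyMC and simulated data: placement-level effects are drawn around a shared channel-level effect (partial pooling), rather than being estimated completely independently. The variable names, prior values, and data here are illustrative assumptions, not a production specification.

```python
import numpy as np
import pymc as pm

# Hypothetical data: weekly spend for several placements within one channel,
# plus a sales KPI. Values are simulated for illustration only.
rng = np.random.default_rng(1)
n_weeks, n_placements = 104, 4
spend = rng.gamma(2.0, 30.0, size=(n_weeks, n_placements))
true_betas = np.array([0.8, 1.0, 1.1, 0.9])
sales = 150 + spend @ true_betas + rng.normal(0, 25, n_weeks)

with pm.Model() as hierarchical_mmm:
    # Channel-level "parent" effect: the clue that placements belong together.
    mu_channel = pm.Normal("mu_channel", mu=1.0, sigma=0.5)
    sigma_channel = pm.HalfNormal("sigma_channel", sigma=0.3)

    # Placement-level effects are drawn around the channel-level effect
    # (partial pooling), rather than estimated completely independently.
    beta = pm.Normal("beta", mu=mu_channel, sigma=sigma_channel, shape=n_placements)

    intercept = pm.Normal("intercept", mu=150, sigma=50)
    noise = pm.HalfNormal("noise", sigma=30)
    pm.Normal("sales", mu=intercept + pm.math.dot(spend, beta),
              sigma=noise, observed=sales)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```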

It’s possible to enhance this further by supplementing the model with "ground truth" results from other sources. For example, many advertisers run sales-lift tests on platforms like Facebook, and similar geo-tests are often used for channels like radio or TV. These lift tests act as independent verification, and can be used to calibrate each part of a channel within the model.
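
One possible way to express this calibration, again in PyMC and under assumed numbers: the lift-test estimate and its standard error are treated as an extra, noisy observation of the channel's effect, pulling the model's estimate towards the experimental result. The figures and the name beta_facebook are hypothetical.

```python
import pymc as pm

# A hypothetical lift test estimating 1.2 incremental sales per unit of spend,
# with a standard error of 0.2. Both numbers are placeholders.
lift_test_estimate, lift_test_se = 1.2, 0.2

with pm.Model() as calibrated_model:
    beta_facebook = pm.Normal("beta_facebook", mu=1.0, sigma=1.0)

    # The lift test acts as independent verification: it pulls the model's
    # estimate of the channel effect towards the experimentally measured value.
    pm.Normal("lift_test", mu=beta_facebook, sigma=lift_test_se,
              observed=lift_test_estimate)

    # The rest of the MMM likelihood (spend, sales, other channels)
    # would sit alongside this term.
```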

Using hierarchical modelling and calibration from lift tests means that MMMs can offer the granularity needed.

Challenge #3: Building a time series model without lots of time series data 

The Problem

Classical MMM techniques often require several years of historic data to work. Worse, when new channels are introduced, it takes a long time for the model to catch up and produce a good evaluation.

These requirements are an issue for all brands when experimenting with new channels or deploying an old channel in a new way. For new brands, however, it’s even more challenging: they can’t afford to wait for a good cross-channel view of media performance.

The Solution

Classical MMMs have a lot of unknown parameters. For every channel in the model, the data needs to tell us the carry-over effect, diminishing returns, and KPI contribution. All whilst accounting for outside influences from all directions - seasonality, competitor activity, price, and distribution are just a few common confounders. No wonder the models need so much data! 
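
To make those parameters concrete, the sketch below shows one common pair of transforms: geometric adstock for the carry-over effect and a Hill curve for diminishing returns. The function names, spend figures, and parameter values are illustrative assumptions rather than a fixed recipe.

```python
import numpy as np

def geometric_adstock(spend, decay):
    """Carry-over effect: a share `decay` of each week's impact
    spills into the following week."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def hill_saturation(x, half_saturation, shape):
    """Diminishing returns: response flattens as spend rises."""
    return x**shape / (x**shape + half_saturation**shape)

# Illustrative weekly TV spend; decay=0.6 implies the effect largely fades
# within a few weeks - exactly the kind of "clue" discussed below.
spend = np.array([100, 0, 0, 50, 120, 0, 0, 0], dtype=float)
transformed = hill_saturation(geometric_adstock(spend, decay=0.6),
                              half_saturation=80, shape=1.5)
print(np.round(transformed, 3))
```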

Classical MMM techniques treat every parameter as completely unknown. But that doesn’t account for the experience we have as an industry. With some careful thought, we can create plausible ranges for our parameters and give the model some "clues" to help. For example, we know that TV carries over for a few weeks, and we also know that it won’t carry over for multiple years. The model doesn’t know this on its own, so giving it the "clue" helps.

Crucially, when we give the model these clues, less data is required to estimate the true value of all the other parameters. The formal name for these clues is “Bayesian Priors”.
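
A small illustration of what such a clue looks like in code, assuming a PyMC-style model: priors are placed on the TV carry-over rate and effect size so that implausible values (multi-year carry-over, negative effects) are ruled out in advance. The specific distributions and numbers are assumptions made for the sake of the example.

```python
import pymc as pm

with pm.Model() as mmm_with_priors:
    # "Clue": TV carry-over decays within weeks, not years. A Beta(3, 3)
    # prior keeps the weekly decay rate well away from 1.0 (near-permanent
    # carry-over) without fixing it to a single value.
    tv_decay = pm.Beta("tv_decay", alpha=3, beta=3)

    # "Clue": TV's contribution per unit of (transformed) spend is positive
    # and plausibly below some industry-informed ceiling.
    tv_effect = pm.TruncatedNormal("tv_effect", mu=1.0, sigma=0.5, lower=0.0)
```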

The amount of historic data a model needs is inversely related to how specific our clues are: the more specific the clues, the less data required. For some brands, we can be very specific in our clue-giving. For example, if a brand has experience of MMM in another market or product, we can use some of those results as clues. Alternatively, benchmarks for the category or for similar brands can help narrow down the range of plausible parameter values and reduce the amount of data needed.

Summary: A checklist for building MMMs in the new economy

  1. Build different models for different audience segments to better understand media fragmentation and de-averaging effects.

  2. Use hierarchical methods to provide granularity, and calibrate models using ground-truth tests for verification. 

  3. Use benchmarks, or results from comparable brands / products / markets to give the model better “clues” about the value of each parameter, to reduce reliance on lots of historical data.

This article originally ran in WARC