Informative or non-informative ROI priors in Marketing Mix Modeling?
Written by Lauri Potka

Original LinkedIn post

🤔 Informative or non-informative ROI priors in Marketing Mix Modeling?

In the Bayesian approach, the Marketing Mix Model is given ROI priors, which are assumptions about the likely range in which the ROI of a marketing activity lies. The priors are formulated as probability distributions (see the attached graphic).

ROI priors can be informative or non-informative.

If you're using a non-informative ROI prior, you're avoiding making assumptions about the effectiveness of a marketing activity. Non-informative prior distributions are wide and flat.

If you're using an informative ROI prior, you're making assumptions about the effectiveness of a marketing activity based on information from outside the model, e.g., incrementality tests. Informative prior distributions are narrower.
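
To make the difference concrete, here is a minimal sketch in Python (using scipy; illustrative only, not our production code) of what the two kinds of ROI prior could look like. The distribution families and parameter values are assumptions chosen for illustration, not recommendations.

```python
# Illustrative sketch only: a wide, flat-ish prior vs. a narrow, informative one.
# Distribution choices and parameter values are assumptions for illustration.
from scipy import stats

# Non-informative: a wide half-normal says little more than
# "ROI is non-negative and could plausibly be anywhere from ~0 to very large".
non_informative = stats.halfnorm(scale=10.0)

# Informative: a log-normal centred near ROI ~ 2, based on evidence from
# outside the model (e.g., an incrementality test).
informative = stats.lognorm(s=0.3, scale=2.0)

for name, prior in [("non-informative", non_informative),
                    ("informative", informative)]:
    lo, hi = prior.ppf([0.05, 0.95])  # central 90% prior interval
    print(f"{name:16s} 90% prior interval for ROI: {lo:.2f} to {hi:.2f}")
```

Running this prints roughly 0.6 to 20 for the non-informative prior and roughly 1.2 to 3.3 for the informative one, which is the "wide and flat" versus "narrower" contrast described above.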

There seem to be two schools of thought nowadays when it comes to priors.

The traditional school of thought believes in the model's power to find the true ROI from the time series data, without giving the model additional information. Their argument is that using external data to formulate prior distributions introduces unmanageable and unpredictable biases into the model. Their default approach is to use non-informative priors.

The more recently emerged school of thought, which also aligns with our thinking at Sellforte, believes that the traditional models can be improved with informative ROI priors created from randomized controlled trials, geo tests, shutdown tests and other similar data that give additional information about the behaviour of a marketing activity. This is also called model calibration. Combining the traditional MMM approach with model calibration increases the model's accuracy and stability. Model calibration is not easy though - you need to intimately understand any biases in the calibration data you use. We have spent tons of R&D hours developing our approach.
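
As an illustration of what calibration can involve (a generic textbook-style sketch, not Sellforte's method; the function name and numbers are hypothetical), one simple way to turn a test readout into an informative ROI prior is to moment-match a log-normal distribution to the test's ROI estimate and its standard error, widening it to acknowledge possible biases in the test itself:

```python
# Hypothetical calibration sketch (not Sellforte's proprietary method):
# turn a geo-test ROI estimate plus standard error into a log-normal ROI prior.
import numpy as np

def roi_prior_from_test(roi_estimate, standard_error, bias_inflation=1.5):
    """Return (mu, sigma) of a log-normal ROI prior, moment-matched to the test.

    bias_inflation > 1 widens the prior to reflect that the test itself may
    carry biases (seasonality, spillover, a short measurement window).
    """
    variance = (standard_error * bias_inflation) ** 2
    sigma2 = np.log(1.0 + variance / roi_estimate**2)  # log-normal variance term
    mu = np.log(roi_estimate) - 0.5 * sigma2            # match the test's mean ROI
    return mu, float(np.sqrt(sigma2))

# Example: a geo test estimated ROI = 1.8 with standard error 0.4.
mu, sigma = roi_prior_from_test(1.8, 0.4)
print(f"log-normal ROI prior: mu={mu:.3f}, sigma={sigma:.3f}")
```

The real work is in choosing the inflation (and correcting the estimate itself) based on how the test was run, which is where understanding the biases in the calibration data becomes essential.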
