The Problem of Priors in Bayesian MMM

In any data science project, the biggest hurdle is translating the business problem into a statistics/ML problem.

A lot gets lost in this translation, which eventually leads to inaccurate models and unhappy customers.

In MMM, especially Bayesian MMM, this ‘lost in translation’ problem is more pronounced.

The client is sold the magic that through Bayesian MMM they can encode their prior beliefs into the model.

But in my experience, to encode prior beliefs one needs a really good grasp of probability distributions.

Even trained statisticians find it challenging to convert “Hey, I feel marketing channel xyz’s ROI should be around 2.3” into a particular probability distribution.

Because many don’t know what the ideal probability distribution should be, they resort to defaults (e.g. a normal or half-normal distribution).
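To make the difficulty concrete, here is a minimal sketch of two ways a belief like “ROI should be around 2.3” could be encoded as a prior. The specific spreads and the Gamma shape are illustrative assumptions of mine, not recommendations; the point is that even this simple belief forces several modelling choices.

```python
# Two plausible encodings of "ROI should be around 2.3" as a prior.
# The scale/shape values below are illustrative assumptions, not recommendations.
from scipy import stats

# Option 1: a Normal centred at 2.3 -- the common default,
# but it puts some probability on negative ROI.
normal_prior = stats.norm(loc=2.3, scale=0.5)

# Option 2: a Gamma restricted to positive ROI, with its mode at 2.3.
# For Gamma(shape=a, scale=s) with a > 1, the mode is (a - 1) * s.
shape = 5.0
scale = 2.3 / (shape - 1)  # places the mode exactly at 2.3
gamma_prior = stats.gamma(a=shape, scale=scale)

print(round(normal_prior.mean(), 2))        # 2.3
print(round((shape - 1) * scale, 2))        # 2.3 (Gamma mode)
print(normal_prior.cdf(0))                  # P(ROI < 0) under the Normal prior
```

Notice that “around 2.3” became the *mean* of one distribution and the *mode* of the other, and the choice of spread (0.5? 1.0?) was never part of the stated belief at all. That is exactly the translation gap described above.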

Given this uphill task, it is unfair to expect clients to tell the agency what the prior probability distribution should be.

They will not have much of a clue, and any client would be annoyed with the agency for asking them to do the heavy lifting rather than solving the problem itself.

▪ The prior matters less, or does it?

Granted, in a Bayesian framework your prior distribution matters less as you get more and more data.

But the problem with MMM is that it is a small-data problem. You don’t get more data (or, let’s say, enough data to make your prior choice irrelevant).
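A small simulation illustrates this. The sketch below uses a conjugate Normal–Normal model with known noise variance; all numbers (a deliberately wrong but confident prior, a true ROI of 1.0) are illustrative assumptions. At typical MMM sample sizes, the prior still pulls the posterior noticeably; only at sample sizes MMM never sees does it wash out.

```python
# Sketch: how much a (wrong) prior still influences the posterior mean
# at different sample sizes. Conjugate Normal-Normal model, known noise
# variance. All numeric choices here are illustrative assumptions.
import numpy as np

def posterior_mean(prior_mean, prior_var, data, noise_var):
    """Posterior mean of a Normal mean under a Normal prior (known noise variance)."""
    precision = 1.0 / prior_var + len(data) / noise_var
    return (prior_mean / prior_var + data.sum() / noise_var) / precision

rng = np.random.default_rng(0)
true_roi, noise_var = 1.0, 1.0
prior_mean, prior_var = 3.0, 0.25   # confident, but wrong

for n in (10, 100, 10_000):         # MMM data sets live near the small end
    data = rng.normal(true_roi, np.sqrt(noise_var), size=n)
    print(n, round(posterior_mean(prior_mean, prior_var, data, noise_var), 2))
```

With only a few dozen observations, the posterior mean sits well above the true 1.0, dragged toward the prior’s 3.0; the textbook “the prior washes out” reassurance only kicks in at sample sizes MMM practitioners rarely have.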

▪ What prior to set?

The second problem is: what should the prior be when you have no clue?

As MMM is adopted in new domains with no prior MMM experience, the challenge of what prior to set arises.

To date, I have not received a convincing reply from a Bayesian MMM practitioner on what the prior distribution should be in this case.

For all of the above reasons (and more – see link in resources), I generally find frequentist MMM to be a better fit.

Resources

Richard McElreath’s Quartet: https://www.linkedin.com/posts/venkat-raman-analytics_marketingmixmodeling-marketingattribution-activity-7107946791983071232-w07F?utm_source=share&utm_medium=member_desktop

Bayesian MMM is not a silver bullet for MMM’s multicollinearity issue: https://www.linkedin.com/posts/venkat-raman-analytics_marketingmixmodeling-marketingattribution-activity-7106973443807444992-H5Bq?utm_source=share&utm_medium=member_desktop

There are two kinds of uncertainty, aleatoric and epistemic: https://www.linkedin.com/posts/venkat-raman-analytics_datascience-statistics-machinelearning-activity-7105052143824375808-CZF2?utm_source=share&utm_medium=member_desktop

Does uncertainty reduce with more data? https://www.linkedin.com/posts/venkat-raman-analytics_statistics-bayesian-frequentist-activity-7098300216163913728-BNhw?utm_source=share&utm_medium=member_desktop
