Blogs

Chebyshev's Inequality for Marketing Mix Model Diagnostics

At Aryma Labs, we constantly endeavor to add as much science as possible to marketing. MMM calibration has historically had parallels with multiple linear regression calibration methods. But MMM is not just linear regression (see link in resources). It has more bells and whistles. As a result, it needs better calibration techniques. In our MMM calibration process, we innovatively use KL Divergence, Population Stability Index and information theoretic measures (see link in resources). In addition, we recently started to apply Chebyshev’s inequality as an MMM model diagnostic. 📌 First, what is Chebyshev’s Inequality? Simply put, it states that for any random variable with mean μ and variance σ², the probability of
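For k ≥ 1, the inequality bounds the probability of landing more than k standard deviations from the mean: P(|X − μ| ≥ kσ) ≤ 1/k². A minimal sketch that checks the bound empirically (the simulated residuals here are a stand-in for real model diagnostics, not Aryma Labs' actual procedure):

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for model residuals; Chebyshev's bound holds for ANY distribution
residuals = rng.normal(loc=0.0, scale=1.0, size=100_000)

mu, sigma = residuals.mean(), residuals.std()

for k in (1.5, 2.0, 3.0):
    # Empirical probability of falling at least k standard deviations from the mean
    empirical = np.mean(np.abs(residuals - mu) >= k * sigma)
    bound = 1.0 / k**2  # Chebyshev's distribution-free upper bound
    print(f"k={k}: empirical tail prob {empirical:.4f} <= bound {bound:.4f}")
```

Because the bound is distribution-free, residuals that violate it flag a genuine bookkeeping or modeling error rather than a distributional quirk.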

Read More »

How to use Robyn’s Decomp RSSD Metric Effectively

In my last post (ICYMI, link in resources), I talked about the similarities and differences between Robyn’s business metric Decomp RSSD and Bayesian priors. Decomp RSSD is a good metric, but at the same time it runs the risk of confining the user to historical spends as the yardstick. Given this limitation, the question arises: how best to use Decomp RSSD? Here is how we use it at Aryma Labs. We rely on our own methods to fit the MMM models and we don’t try to optimize on Decomp RSSD. We just use the Decomp RSSD metric in the following cases: 1) For immediate model updates If you have

Read More »
Similarities between Decomp RSSD and Bayesian Priors in Marketing Mix Modeling (MMM)

Open source Marketing Mix Modeling (MMM) tools are great for democratizing MMM. But they will never provide you with an accurate MMM model off the shelf. Why? Because of the Flintstones curse (see Ridhima’s post – link in resources). But that does not mean we can’t appreciate some innovative methods in these open source MMM tools. Robyn may not be perfect, but as somebody in a post said, “it has flashes of brilliance”. One of those flashes of brilliance is Decomp RSSD. Robyn markets it as a business fit metric, while others call it controversial (check out Mike Taylor’s tweet in resources). 📌 So is Decomp RSSD controversial? Does
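In Robyn, Decomp RSSD (root sum of squared distances) measures how far each channel’s effect share in the model decomposition sits from its historical spend share. A minimal sketch of that computation (the channel shares below are made-up illustrations, not real data, and this is a simplified reading of Robyn’s metric rather than its exact source code):

```python
import numpy as np

def decomp_rssd(spend_share, effect_share):
    """Root sum of squared distances between spend shares and decomposed effect shares."""
    spend_share = np.asarray(spend_share, dtype=float)
    effect_share = np.asarray(effect_share, dtype=float)
    return float(np.sqrt(np.sum((effect_share - spend_share) ** 2)))

# Hypothetical shares for three channels (each vector sums to 1)
spend = [0.5, 0.3, 0.2]
effect = [0.4, 0.4, 0.2]
print(decomp_rssd(spend, effect))  # small value: decomposition stays close to the spend mix
```

A low value means the decomposition mirrors historical spend allocation, which is exactly why optimizing on it can anchor the model to past budgets.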

Read More »
Is there one true Marketing Mix Model (MMM) model?

Is there one true Marketing Mix Model (MMM) model? Short answer: Yes. “All models are wrong, some are useful.” I have seen various vendors use the above statement to tell clients that “there could be many MMM models and no one model is absolutely right”. They thus push the decision to pick a model back to the client. I think vendors saying this is a cop-out. First, let’s try to dispel the misunderstanding behind this statement. “All models are wrong, some are useful” is an aphorism (meaning it is a concise expression of a general truth). But the aphorism in this case

Read More »
Assessing Marketing Mix Model (MMM) relevancy through Population Stability Index (PSI)

After a lot of hard work, you have built a great MMM model. The model is now powering saturation / reach frequency curves, scenario planning and budget optimization. The client is happy using all of these. But then the client asks you the following question: “How long can I continue to use the MMM models? When should we update them?” At Aryma Labs, we liken MMM models to movie making. Just like a series of snapshots is put together to make a movie, a series of model updates gives you the ‘motion picture’ of your market. So it is pretty clear that frequent model updates are required. But how
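PSI quantifies how much a recent data sample has drifted from the baseline the model was trained on: PSI = Σ (aᵢ − eᵢ) ln(aᵢ/eᵢ) over bins i, where eᵢ and aᵢ are the baseline and recent bin fractions. A minimal sketch (the decile binning and clipping constant are common conventions, not Aryma Labs’ exact implementation):

```python
import numpy as np

def psi(expected, actual, n_bins=10, eps=1e-6):
    """Population Stability Index between a baseline sample and a recent sample."""
    # Bin edges taken from the baseline (expected) distribution's deciles
    edges = np.percentile(expected, np.linspace(0, 100, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the baseline range
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) when a bin is empty
    e_frac = np.clip(e_frac, eps, None)
    a_frac = np.clip(a_frac, eps, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(7)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)  # simulated training-period data
recent = baseline + 1.0                                  # simulated shifted market
print(psi(baseline, baseline))  # ~0: no drift
print(psi(baseline, recent))    # large: distribution has moved
```

A common rule of thumb reads PSI below 0.1 as stable and above 0.25 as significant drift, i.e. a signal that the model is due for an update.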

Read More »
KL Divergence as MMM Calibration Metric

At Aryma Labs, we strive to incorporate an information theoretic approach in our MMM modeling processes because correlation based approaches have their limitations. Of late, in our client projects, we have been using KL Divergence as a calibration metric over and above other calibration metrics like the R-squared value, standard error, p-value, within-sample MAPE etc. I would urge readers to read my post on the difference between calibration and validation. Any statistician would be irked to find them used interchangeably. 😅 📌 What exactly is KL Divergence? KL Divergence is a measure of how much a probability distribution differs from another probability distribution. There is a true
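For discrete distributions P and Q, the divergence is D_KL(P ‖ Q) = Σ pᵢ ln(pᵢ/qᵢ). A minimal sketch (the clipping constant is an implementation convenience to avoid log-of-zero, not part of the definition):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) for discrete distributions given as arrays of (unnormalized) weights."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = np.clip(p / p.sum(), eps, None)  # normalize, then guard against log(0)
    q = np.clip(q / q.sum(), eps, None)
    return float(np.sum(p * np.log(p / q)))

p = [0.4, 0.4, 0.2]
q = [0.6, 0.3, 0.1]
print(kl_divergence(p, p))  # 0.0: identical distributions carry no extra surprise
print(kl_divergence(p, q))  # > 0, and note it is asymmetric in its arguments
```

Note that KL Divergence is not a distance: it is non-negative and zero only for identical distributions, but D_KL(P ‖ Q) ≠ D_KL(Q ‖ P) in general.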

Read More »
Marketing Mix Modeling’s Predict or Explain Dilemma

In MMM, there is often a dilemma on whether to make the model better at explanation or prediction. Some MMM vendors focus on prediction while compromising on explanation. But is that the correct approach? No. At Aryma Labs, we err on the side of caution and focus more on getting the explanation right first. The Bias/Variance tradeoff Most data scientists understand the bias/variance tradeoff through the lens of overfitting alone. But if one came from statistics/econometrics, they would see the bias/variance tradeoff through the lens of the data generating process and estimator bias. The Bias: Bias is generally an attribute of the estimator. Bias is the difference between the estimator’s expected
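Formally, Bias(θ̂) = E[θ̂] − θ. A classic, minimal illustration on simulated data (not MMM output): the sample variance that divides by n systematically underestimates the true variance, while dividing by n − 1 removes the bias:

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0
n, trials = 10, 50_000  # small samples, many repetitions to approximate E[estimator]

samples = rng.normal(loc=0.0, scale=np.sqrt(true_var), size=(trials, n))

# Average of each estimator over many trials approximates its expected value
biased = samples.var(axis=1, ddof=0).mean()    # divides by n: E[s^2] = (n-1)/n * sigma^2
unbiased = samples.var(axis=1, ddof=1).mean()  # divides by n-1: E[s^2] = sigma^2

print(f"biased estimator mean   ~ {biased:.2f} (true variance {true_var})")
print(f"unbiased estimator mean ~ {unbiased:.2f}")
```

With n = 10 the biased estimator’s expected value is (n−1)/n · σ² = 3.6, so the gap to the true 4.0 is the bias itself; no amount of extra trials makes it go away.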

Read More »
How we use AIC and KL Divergence in our MMM models

At Aryma Labs, we increasingly leverage information theoretic methods over correlational ones. ICYMI, the link to the article on the same is in resources. To set the background, let me explain what AIC and KL Divergence are. Akaike information criterion (AIC) AIC = 2k − 2ln(L), where k is the number of parameters and L is the likelihood. The underlying principle behind the usage of AIC is information theory. Coming back to AIC, in the above equation we have the likelihood. We try to maximize the likelihood. It turns out that maximizing the likelihood is equivalent to minimizing the KL Divergence. What is KL Divergence? From an information theory point of view, KL
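A minimal sketch of AIC for a linear model with Gaussian errors, where the maximized log-likelihood can be written in terms of the residual sum of squares (a standard result, not specific to any MMM tool; the candidate models below are hypothetical):

```python
import numpy as np

def gaussian_log_likelihood(rss, n):
    """Maximized log-likelihood of a linear model with Gaussian errors,
    expressed via the residual sum of squares (RSS) over n observations."""
    return -0.5 * n * (np.log(2 * np.pi) + np.log(rss / n) + 1)

def aic(k, log_likelihood):
    """Akaike information criterion: AIC = 2k - 2 ln(L). Lower is better."""
    return 2 * k - 2 * log_likelihood

# Two hypothetical candidate models fitted on the same data
n = 104  # e.g. two years of weekly observations
print(aic(k=5, log_likelihood=gaussian_log_likelihood(rss=120.0, n=n)))
print(aic(k=9, log_likelihood=gaussian_log_likelihood(rss=118.0, n=n)))
```

The 2k term is the point: the second model fits slightly better (lower RSS), but its four extra parameters must earn their keep, and here they don’t, so its AIC comes out worse.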

Read More »
Bayesian MMM's Stating the obvious problem

In MMM, everybody talks about incrementality – how to discern the incremental sales (or any KPI). But what one should also question is: how much incremental insight is one getting out of MMM? Has the MMM exercise told you something extra that you did not know already? We recently had a conversation with a client, and here is a paraphrased excerpt of it. Client: “Why is it that MMM sometimes gives us answers that we already know?” Me: “Let me guess, your earlier vendor used Bayesian MMM?” Client: “Yes, we were using Bayesian MMM and we actively helped the vendor in setting priors.” Me: “Ah, I know what

Read More »
Why Aryma Labs does not rely on correlation alone

Many say MMMs are correlational in nature. Well, statistically that is not completely true. When people say correlation, they normally have the bivariate Pearson correlation in mind. What happens in MMM is much more than just bivariate correlation. Multiple linear regressions like MMM technically capture conditional correlations. Correlations can be useful in limited scope, but stretching their interpretation is what leads to trouble. Information theoretic based MMMs At Aryma Labs, we started doing research into information theoretic based approaches nearly 4 years ago. Our motivation to pursue information theoretic methods arose from the flaws of correlation: 1. Correlation assumes a linear relationship In real world settings, the relationships between variables are
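A quick illustration of that first flaw on simulated data: Pearson correlation can report almost nothing even when one variable is a deterministic function of the other, because the dependence is not linear:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100_000)
y = x ** 2  # y is fully determined by x, but the relationship is symmetric, not linear

# Cov(x, x^2) = E[x^3] = 0 for a distribution symmetric around zero
r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r:.3f}")  # close to 0 despite a perfect functional dependence
```

An information theoretic measure such as mutual information would flag this dependence, which is one reason we do not rely on correlation alone.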

Read More »