How we use AIC and KL Divergence in our MMM models

At Aryma Labs, we increasingly leverage information-theoretic methods over correlational ones. ICYMI, a link to our article on this topic is in the resources.

To set the background, let me explain what AIC and KL divergence are.

Akaike Information Criterion (AIC)

AIC = 2k - 2ln(L)

where
k is the number of model parameters
L is the maximized value of the likelihood function
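
To make the formula concrete, here is a minimal sketch that computes AIC by hand for a toy regression and cross-checks it against statsmodels' built-in value. The data is synthetic and purely illustrative, not an actual MMM dataset.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))             # e.g. two media spend channels
y = 1.0 + 0.5 * X[:, 0] + rng.normal(scale=0.3, size=100)

model = sm.OLS(y, sm.add_constant(X)).fit()

k = model.df_model + 1                    # parameters as counted by statsmodels (incl. intercept)
log_likelihood = model.llf                # maximized log-likelihood ln(L)
aic_manual = 2 * k - 2 * log_likelihood

print(aic_manual, model.aic)              # the two values should match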

The principle underlying AIC comes from information theory.

Coming back to AIC: the equation above contains the likelihood, which we try to maximize.

It turns out that maximizing the likelihood is equivalent to minimizing the KL divergence.
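
To see why, write out the KL divergence between the true distribution p and our model's distribution q_θ:

KL(p || q_θ) = E_p[ln p(x)] - E_p[ln q_θ(x)]

The first term does not depend on θ. So minimizing the KL divergence over θ amounts to maximizing E_p[ln q_θ(x)], and the average log-likelihood of the observed data is precisely an estimate of that term.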

๐–๐ก๐š๐ญ ๐ข๐ฌ ๐Š๐‹ ๐ƒ๐ข๐ฏ๐ž๐ซ๐ ๐ž๐ง๐œ๐ž?

From an information theory point of view, KL divergence tells us how much information is lost when we approximate the true probability distribution with another distribution.
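
As a small illustration, here is the KL divergence between two made-up discrete distributions, computed with scipy:

import numpy as np
from scipy.special import rel_entr

p = np.array([0.5, 0.3, 0.2])    # the 'true' distribution
q = np.array([0.4, 0.4, 0.2])    # our approximation of it

kl = rel_entr(p, q).sum()        # sum of p * ln(p / q), in nats
print(kl)                        # > 0; zero only when q matches p exactly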

๐–๐ก๐ฒ ๐ฐ๐ž ๐œ๐ก๐จ๐จ๐ฌ๐ž ๐ฆ๐จ๐๐ž๐ฅ๐ฌ ๐ฐ๐ข๐ญ๐ก ๐ฅ๐จ๐ฐ๐ž๐ฌ๐ญ ๐€๐ˆ๐‚

When comparing models, we choose the one with the lowest AIC because this in turn means its KL divergence is also the smallest. A low AIC score means little information loss.
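
A minimal sketch of that selection step, assuming synthetic data and placeholder channel names (tv, radio): fit a few candidate specifications for the same response variable and keep the one with the smallest AIC.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
tv = rng.normal(size=n)
radio = rng.normal(size=n)
sales = 2.0 + 0.8 * tv + 0.2 * radio + rng.normal(scale=0.5, size=n)

# Candidate model specifications for the same response variable.
candidates = {
    "tv_only": np.column_stack([tv]),
    "radio_only": np.column_stack([radio]),
    "tv_and_radio": np.column_stack([tv, radio]),
}

aics = {name: sm.OLS(sales, sm.add_constant(X)).fit().aic
        for name, X in candidates.items()}

best = min(aics, key=aics.get)             # lowest AIC wins
print(aics, "-> choose:", best)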

Now you know how KL divergence and AIC are related, and why we choose models with a low AIC score.

๐‚๐š๐ฎ๐ญ๐ข๐จ๐ง ๐š๐›๐จ๐ฎ๐ญ ๐€๐ˆ๐‚

A common misconception about AIC is that it identifies the best model in an absolute sense.

However, the key word here is 'relative'. AIC only helps in choosing the 'best model' relative to the other candidate models.

For example, if you had 5 MMM models (fitted to the same response variable) and all 5 were badly overfitted, AIC would simply pick the least overfitted model among them.

AIC will not caution you that all your MMM models are poorly fitted. In a way, AIC is like the supremum of a set: it is defined only with respect to the members of that set.

๐Š๐‹ ๐ƒ๐ข๐ฏ๐ž๐ซ๐ ๐ž๐ง๐œ๐ž ๐ญ๐จ ๐ ๐š๐ฎ๐ ๐ž ๐›๐ข๐š๐ฌ ๐ข๐ง ๐Œ๐จ๐๐ž๐ฅ

Another interesting way we leverage KL divergence is to gauge bias in the model. For a problem like MMM, bias in the model is always unwanted.

The model could be biased for a variety of reasons: model misspecification, treatment of multicollinearity through regularization, etc. We are doing some interesting research on using KL divergence to reduce bias in our models (more on this soon).
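
To make the idea concrete, here is one illustrative way such a check could look. This is a sketch under assumptions of our own (histogram binning, a hypothetical kl_actual_vs_predicted helper, synthetic data), not our internal method. A systematically biased model shifts probability mass away from the actuals, which shows up as a larger KL value.

import numpy as np
from scipy.special import rel_entr

def kl_actual_vs_predicted(actual, predicted, bins=20):
    """Histogram both series on a common grid and return KL(actual || predicted)."""
    edges = np.histogram_bin_edges(np.concatenate([actual, predicted]), bins=bins)
    p, _ = np.histogram(actual, bins=edges)
    q, _ = np.histogram(predicted, bins=edges)
    eps = 1e-12                               # avoid log(0) in empty bins
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return rel_entr(p, q).sum()

rng = np.random.default_rng(1)
actual = rng.normal(loc=100, scale=10, size=500)
unbiased_pred = actual + rng.normal(scale=5, size=500)
biased_pred = actual + 8 + rng.normal(scale=5, size=500)   # constant upward bias

print(kl_actual_vs_predicted(actual, unbiased_pred))   # smaller
print(kl_actual_vs_predicted(actual, biased_pred))     # larger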

P.S.: Useful links to the papers are in the resources, along with the first image's credit.

Resources:

Why Aryma Labs does not rely on Correlation alone
https://open.substack.com/pub/arymalabs/p/why-aryma-labs-does-not-rely-on-correlation?r=2p7455&utm_campaign=post&utm_medium=web

AIC myths:
https://sites.warnercnr.colostate.edu/anderson/wp-content/uploads/sites/26/2016/11/AIC-Myths-and-Misunderstandings.pdf

Facts and fallacies of AIC:
https://robjhyndman.com/hyndsight/aic/

Image credit:
https://www.npr.org/sections/thetwo-way/2013/11/16/245607276/howd-they-do-that-jean-claude-van-dammes-epic-split
