Measurement sits at the heart of modern marketing. Yet one of the biggest limitations to accurate measurement is often decided long before any model is built. It happens at the planning stage. The way channel overlaps are managed determines what the data will eventually be able to tell us. Get it wrong, and you may never know whether it was your TV or your YouTube activity that actually drove your sales.
Marketing Mix Modelling can extract a surprising amount of insight from imperfect data, but its accuracy improves dramatically when measurement is considered during planning. Collinearity is a clear example of this.
As a quick reminder, Marketing Mix Models, or econometric models, use historical data to quantify how much revenue has been generated by marketing activity and other business drivers. At its core, this technique relies on the fluctuations, or variance, in the data. When a spike in sales aligns with a marketing campaign going live, we can start estimating ROIs. Therefore, covariance, meaning how two variables move together over time, becomes the key to understanding what is really driving performance.
In simple terms, collinearity happens when two or more variables move together so closely that it becomes difficult to separate their individual effects. In statistics, this creates instability in parameter estimates. In MMM, it means it becomes hard to tell which channels are really driving outcomes.
A classic example is when two channels run simultaneously. Imagine paid social and TV campaigns that always go live together. From the model's point of view, the two channels become almost indistinguishable: when sales rise, both channels are up, and when sales fall, both channels are down. The model can still assign contributions, but it struggles to say confidently how much belongs to each channel individually. Small changes in the data can lead to large swings in attribution between the two.
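To make this concrete, here is a minimal simulation sketch. The channel names, spend levels and effect sizes are illustrative assumptions, not figures from any real plan; the point is simply that when two channels overlap almost perfectly, a small change to the data can move credit from one to the other.

```python
import numpy as np

rng = np.random.default_rng(42)
weeks = 52

# TV and paid social always go live together: paid social is almost a scaled copy of TV.
tv = np.tile([100.0, 100.0, 0.0, 0.0], weeks // 4)      # simple on/off flighting
paid_social = tv * 0.5 + rng.normal(0, 1, weeks)         # near-perfect overlap

# Assumed "true" effects used only to simulate sales (unknown to the model).
sales = 500 + 3.0 * tv + 1.0 * paid_social + rng.normal(0, 20, weeks)

X = np.column_stack([np.ones(weeks), tv, paid_social])

# Fit the same regression on two nearly identical samples of the data.
for label, idx in [("all 52 weeks", np.arange(weeks)),
                   ("drop 3 weeks", np.arange(3, weeks))]:
    coef, *_ = np.linalg.lstsq(X[idx], sales[idx], rcond=None)
    print(f"{label}: TV coef = {coef[1]:.2f}, paid social coef = {coef[2]:.2f}")

# With this much overlap, removing a few weeks can shift credit noticeably between
# TV and paid social, even though total predicted sales barely change.
```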
This is not just a technical problem. It directly affects how budgets are allocated, how stable the model's coefficients are, and how much the reported ROIs can be trusted.
You can have a perfectly built model and still end up with weak answers if the data structure itself is highly collinear.
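One practical way to spot the risk early is to screen the planned spend schedule with a standard collinearity diagnostic such as the variance inflation factor (VIF). The sketch below assumes pandas and statsmodels are available; the channel names and spend values are invented for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Planned weekly spend (illustrative numbers): paid social almost exactly mirrors TV.
plan = pd.DataFrame({
    "tv":          [100, 100,  0,  0, 100, 100,  0,  0, 100, 100,  0,  0],
    "paid_social": [ 50,  55,  0,  0,  50,  50,  0,  5,  45,  50,  0,  0],
    "search":      [ 30,  20, 25, 35,  30,  20, 25, 35,  30,  20, 25, 35],
})

X = np.column_stack([np.ones(len(plan)), plan.to_numpy(dtype=float)])  # intercept + channels
for i, channel in enumerate(plan.columns, start=1):
    print(f"{channel}: VIF = {variance_inflation_factor(X, i):.1f}")

# Rules of thumb vary, but VIFs well above ~5-10 are a warning that the model
# will struggle to separate those channels, before a single regression is run.
```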
The most effective way to deal with collinearity is not in the modelling phase. It is in the planning phase.
This does not mean re-designing the entire media strategy around experimentation. It means introducing small, controlled variations that allow the data to carry more information: staggering start dates, briefly pausing one channel while the other stays live, or shifting the relative weight of spend for a few weeks.

These variations break the perfect alignment between channels. Even a handful of clean, non-overlapping periods can dramatically improve how well the model separates effects.
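One way to see the payoff is to compare two otherwise identical plans: one where paid social shadows TV all year, and one where it is paused for a handful of TV-on weeks. This is an assumed illustration with simulated sales and invented effect sizes, not output from a real model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
weeks = 52
tv = rng.choice([0.0, 100.0], size=weeks, p=[0.3, 0.7])   # TV flighting pattern

def fit(paid_social):
    # Simulate sales with assumed "true" effects, then fit an ordinary regression.
    sales = 500 + 3.0 * tv + 2.0 * paid_social + rng.normal(0, 20, weeks)
    X = sm.add_constant(np.column_stack([tv, paid_social]))
    res = sm.OLS(sales, X).fit()
    return res.params[1:], res.bse[1:]                     # channel coefficients and standard errors

# Plan A: paid social shadows TV all year (only tiny execution noise).
shadow = np.clip(tv * 0.5 + rng.normal(0, 0.5, weeks), 0, None)
# Plan B: the same plan, except paid social is paused in 4 of the TV-on weeks.
decoupled = shadow.copy()
decoupled[np.flatnonzero(tv > 0)[:4]] = 0.0

for label, plan in [("always together", shadow), ("4 decoupled weeks", decoupled)]:
    coefs, errs = fit(plan)
    print(f"{label}: TV = {coefs[0]:.2f} ± {errs[0]:.2f}, paid social = {coefs[1]:.2f} ± {errs[1]:.2f}")

# The few weeks of contrast typically shrink the standard errors and stabilise
# how credit is split between the two channels.
```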
There is often a perceived cost to this kind of planning. Teams worry about the short-term performance they might give up during a pause and the extra coordination the variations require. In practice, this cost is negligible compared to the value of the learning. The ability to confidently distinguish true channel performance leads to better budget allocation, more stable models, and ROI reporting the business can trust.
The key insight is that you do not need a perfect experimental design. You only need enough variation to give the model a fighting chance.
A common misconception is that you need formal geo experiments or large multi-week blackouts to solve collinearity. In reality, just a few controlled deviations are often enough.
If two channels overlap perfectly for 48 weeks of the year but behave differently for 4 weeks, those 4 weeks can anchor the entire model. They provide the contrast needed to distinguish effects across the full time series.
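A quick back-of-the-envelope check makes the point. With an illustrative 52-week flighting pattern (invented numbers), decoupling the two channels for just four weeks pulls their correlation well away from 1, which is exactly the contrast the model needs:

```python
import numpy as np

tv = np.tile([100.0] * 6 + [0.0] * 2, 7)[:52]             # simple 6-on / 2-off flighting
social_always_on = tv * 0.5                                # identical shape all year
social_with_contrast = social_always_on.copy()
social_with_contrast[np.flatnonzero(tv > 0)[:4]] = 0.0     # paused in 4 TV-on weeks

print(np.corrcoef(tv, social_always_on)[0, 1])             # ~1.0: effects inseparable
print(np.corrcoef(tv, social_with_contrast)[0, 1])         # well below 1: contrast to learn from
```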
This is one of the most powerful ideas in MMM: Small planning decisions can unlock disproportionate measurement value for all business users.
| Stakeholder | The Risk of Ignoring Multicollinearity |
|---|---|
| Media Buyer / Agency | Over-investing in a channel that isn't actually performing. |
| Data Scientist | Unstable model coefficients that change wildly with new data. |
| CFO | Incorrect ROI reporting leading to wasted marketing budget. |