It has been decades since the concept of Marketing Mix Modeling (MMM), the forerunner to Attribution Modeling, was introduced. The concept was relatively straightforward: marketers would apply statistical analysis to sales and marketing data to quantify the impact that each element of the marketing mix had in driving brand sales and profit. Once the causal relationship had been modeled, marketers would then be able to accurately forecast outcomes and inform resource allocation decisions.
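To make the idea concrete, here is a minimal sketch of the kind of statistical fit MMM relies on: an ordinary least squares regression of sales against a single marketing input, used to forecast an outcome at a planned spend level. The channel, spend figures, and sales numbers are all hypothetical, and a real MMM would include many more variables (seasonality, price, distribution, multiple media channels).

```python
# Toy marketing-mix sketch (illustrative only, hypothetical data):
# fit weekly sales against TV spend with ordinary least squares,
# then forecast sales at a planned spend level.

def ols_fit(x, y):
    """Fit y = intercept + slope * x by ordinary least squares."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical weekly TV spend (in thousands) and unit sales
tv_spend = [10, 20, 30, 40, 50]
sales = [120, 180, 260, 310, 400]

intercept, slope = ols_fit(tv_spend, sales)

# Forecast sales for a planned spend of 60 (thousand)
forecast = intercept + slope * 60
```

The fitted slope is the model's estimate of incremental sales per unit of spend, which is exactly the quantity an allocation decision would lean on.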
While the concept may have been straightforward, the solution, for most marketers, has been elusive. Why? First and foremost, MMM has some inherent challenges, particularly when it comes to quantifying the impact of longer-term brand equity development tactics versus those focused on short-term sales. Second, these models have not fared well in accurately assessing the impact of various media types on outcomes to assist in refining allocation decisions.
Fast forward to the late ’90s, when we experienced an explosion in online media, the birth of e-commerce and the introduction of “Big Data.” The emergence of digital media and the attendant level of data that marketers were now able to gather led to the launch of “Attribution Modeling.” The goal: to assess and quantify which marketing and media touchpoints influenced an advertiser’s target audience, and to what extent, across the purchase funnel in an effort to optimize media spending across the ever-expanding gamut of media alternatives.
While there are multiple variations of attribution models to consider, most marketers have relied on single-source attribution models, often using a “last click” approach, which assigns responsibility for an outcome to a single event. While simple, this flawed approach to attribution modeling gives too much credit to digital media at the expense of traditional media and other marketing touchpoints.
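The distinction can be sketched in a few lines of code. Below, a hypothetical conversion path is credited two ways: last-click, which hands everything to the final touchpoint, and a simple linear multi-touch rule, which splits credit evenly. The touchpoint names are illustrative, not drawn from any real dataset.

```python
# Sketch comparing two simple attribution rules on a hypothetical
# conversion path (touchpoint names are illustrative).

def last_click(path):
    """Assign 100% of the conversion credit to the final touchpoint."""
    return {tp: (1.0 if i == len(path) - 1 else 0.0)
            for i, tp in enumerate(path)}

def linear(path):
    """Split conversion credit evenly across every touchpoint."""
    share = 1.0 / len(path)
    return {tp: share for tp in path}

path = ["tv_spot", "online_display", "paid_search"]

print(last_click(path))  # paid_search gets all the credit
print(linear(path))      # each touchpoint gets one third
```

Under last-click, the TV spot and display ad earn zero credit even if they did the persuading; the linear rule at least acknowledges every step, though it is no less arbitrary about the weights.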
Sadly, for advertisers that are doing both MMM and Attribution Modeling, it is rare that the feedback from these related but different approaches syncs. Further, audience delivery measurement (i.e., cross-channel measurement) and multi-touch attribution challenges remain, introducing a layer of complexity that drives up the cost of attribution modeling.
That said, since these two modeling tools were introduced, the industry has dramatically evolved its data-gathering capabilities, enhanced CRM and DMP capabilities, and conceived of and launched programmatic media buying, where algorithms have replaced media buyers. Now we’re seeing the use of artificial intelligence bots, such as Adgorithms’ “Albert,” that can plan and place media and create content. Heady stuff, to be sure.
This got the cynic in me thinking: “Well, if we can master all of this from a technology perspective, surely we should be able to cost-efficiently and effectively master attribution modeling.” That led to idle speculation about whether the ad industry really wants advertisers to solve the attribution modeling dilemma.
After all, what if John Wanamaker was wrong? What if more than half of his ad spend was wasted? Remember, the marketing and media choices available to him in the 19th century were considerably more limited than those available to advertisers today. Would accurate attribution models eliminate some of the following marketing and media options from consideration?
- Cinema advertising
- Product placement
- Direct mail
- Online display
- Online video
- Paid search
- Organic search
- Social media
- Native advertising
- In-store advertising
- In-store displays
- On-package advertising
- Trade promotions
- Price promotions
- Affinity marketing
- Affiliate marketing
- Earned media
Crazy, right? It reminds me of a quote by the American journalist Gary Weiss:
“One problem with the focus on speculation is that it tends to promote the growth of the great intellectual cancer of our times: conspiracy theories.”
What do you think…