
Assessing the maturity of your data and measurement capabilities: five key questions

Anant Mathur

The digital era was meant to usher in near-perfect measurement of advertising effectiveness. So far, unfortunately, reality has been far less encouraging. CEOs are less confident about the comprehensiveness of data on their own brands, and changes from the GDPR to ITP 2.1 are shaking confidence in data strategy at the very time that AI and automation — which depend on data — promise to revolutionize marketing.

Download the full report: 'Assessing the maturity of your data and measurement capabilities: five key questions'

Reaching a state of perfect data and actionable intelligence is likely impossible. The fragmentation of the ecosystem, regular changes to legislation, and the sheer scale of data sources ensure that. But there are questions marketers can ask to assess where their organizations are in the data maturity journey.

Ask yourself the following five questions to see what your next step in the journey should be. 


Q1. What data assets do I have, and are they actionable?

How we define this question: 

In today’s businesses, data is collected and used by many different departments, and a clear data strategy requires a marketer who can lead the integration and operationalization of that data. When we talk about data assets, the question is whether they are clean, actionable, and collected and maintained securely and ethically.

  • ORIGINS: I know where the data comes from and what methods — and permissions — are used to collect it.

  • OWNERSHIP: There is clear control over, responsibility for, and accountability for that data (practically and contractually).

  • ORGANIZATION: I understand the nature and language of the data. It is stored and maintained securely, in a form that can be queried and transported.

  • OPPORTUNITY: I know what the data can do for marketing efforts. In-campaign performance should be a measure of future opportunity.
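One lightweight way to operationalize this checklist is to record the four O’s as structured metadata for every data asset. The sketch below is purely illustrative — the field names and the example entry are our own assumptions, not drawn from any particular catalog tool:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """Illustrative catalog entry capturing the four O's for one data asset."""
    name: str
    origins: str        # where the data comes from and the permissions behind it
    owner: str          # who is practically and contractually accountable
    organization: str   # where and how it is stored, and how it can be queried
    opportunities: list[str] = field(default_factory=list)  # known marketing uses

# Hypothetical entry for a first-party CRM extract
crm = DataAsset(
    name="crm_purchases",
    origins="First-party purchase records; consent captured at checkout",
    owner="CRM team (terms set out in the master services agreement)",
    organization="Cloud warehouse, daily batch load, queryable via SQL",
    opportunities=["audience segmentation", "customer lifetime value modeling"],
)

# An asset with no recorded opportunities is a flag for review
if not crm.opportunities:
    print(f"{crm.name}: no known marketing use — revisit or retire")
```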

Why we ask this question: 

Data is not necessarily valuable in its own right — it adds value in its ability to establish a more empirical point of view. Recently, the potential value of data has been enhanced through the application of AI and machine learning — highlighting unknown relationships between data sets or uncovering new ways to measure things.

The information held by marketing organizations is often deep but narrow. For example, a telecom company may know how many lines a consumer has and their device type, but very little else beyond a few media behaviors. It’s possible to append some demographic and behavioral data, but mostly in limited media scenarios.

Now consider what could be done with data that sits outside the media domain — public data and other signals that aren’t traditionally decisioned against in media — that could help identify opportunities for brand growth when machine learning is applied to determine which aspects are most relevant.

What a good answer looks like: 

“I have a clear data strategy that addresses brand, category, consumer, and competitive data. My data sources include everything from aggregated, syndicated research down to fine-grained customer data, and I am able to draw linkages across those sources to identify behaviors and signals that inform my media executions against a broad set of potential customers.”

“I’ve identified data from outside traditional media sources to broaden my view of non-customers and include all prospective customers.” 


Q2. Do I understand the causal links between data and business results?

How we define this question: 

Data on its own can give a historical snapshot, such as the Richter scale magnitude of a recent earthquake. But what scientists, and anyone living near a major faultline, really want to know is which data points will predict the magnitude of a future earthquake.

Why we ask this question: 

This is often one of the most difficult questions facing marketers, and speaks to the “opportunity” of data assets above. Clean, granular data is only valuable to the extent it can be used as a predictor of business outcomes such as revenue and profit. 

Showing causal relationships is difficult in any scenario, and it helps to have the right people on the job. The Analytics team at Essence includes PhDs and professionals from sectors like retail and healthcare that routinely grapple with such problems. They run analyses on brand metrics, word of mouth (WOM), share of voice (SOV), and share of market (SOM), among others, to determine the best leading indicators for revenue.

The focus should be on metrics that a brand can impact through consumer touchpoints, including marketing and customer service. Monitoring these metrics and acting upon their causal signals is critical to building and maintaining a healthy business.
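As a minimal sketch of how a leading-indicator analysis might begin — on synthetic weekly data, with share of voice standing in for whichever metric is under test — one can check whether a candidate metric correlates with revenue several weeks later:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly series in which share of voice (SOV) leads revenue by 4 weeks
weeks = 104
sov = rng.normal(20, 3, weeks)            # hypothetical weekly share of voice (%)
revenue = 100 + rng.normal(0, 5, weeks)   # baseline revenue plus noise
revenue[4:] += 2.5 * sov[:-4]             # revenue echoes SOV four weeks later

# Correlate SOV against revenue at each candidate lead time
for lag in range(9):
    r = np.corrcoef(sov[:weeks - lag], revenue[lag:])[0, 1]
    print(f"SOV leading revenue by {lag} weeks: r = {r:+.2f}")
```

A clear peak at a non-zero lag flags a candidate leading indicator. Correlation alone does not establish causality, so any such signal still needs to be validated with the kind of causal analysis described above.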

What a good answer looks like: 

“I know how well I am doing against the key metrics for the life stage of the product. (For a new product, this might be aided and unaided awareness. For a more mature product, it may be consideration and intent.) Additionally, I know where my competitors stand, and how I am trending within the competitive set.”

“I understand the relative impact of various brand and media metrics on my desired business outcomes.”


Q3. Am I measuring incrementality, ensuring marketing outcomes are over and above what would happen naturally?

How we define this question: 

There is a perpetual debate around the true incrementality of marketing spend. Many confounding factors can prevent a brand or agency from measuring the true ROI of that spend, including organic growth and non-marketing factors like seasonality, pricing, merchandising, assortment, and customer service levels. Even so, it’s important to have as robust a model as possible to quantify added value to the C-suite.

Why we ask this question: 

Not understanding the true incremental impact of marketing can have a detrimental effect on how marketing funds are allocated, the business results they are tasked to deliver, and the calculations for customer lifetime value (CLV) (which in turn determines appropriate investment levels). This can lead to unrealistic goals and even an erosion of the credibility of a marketing organization. 

It is important to understand that the answer isn’t always cut and dried, nor is it easy, because marketing operates in conjunction with other business levers. For example, if a product or service is outrageously priced, no amount of marketing spend is going to drive any incremental benefit — if anything, it might be destroying value for the organization. Additionally, media fragmentation and the challenges of cross-device measurement mean there is no one tool or panacea to achieve a comprehensive gauge of incrementality. The important thing is to find the right balance between effort and greater confidence in budget allocation decisions.

What a good answer looks like: 

There should be a multi-level response to this question:

1. At the enterprise level:

A. “I have a descriptive model that decomposes the effect of each marketing business lever while also taking into account the fact that these levers are often co-determined with other investment levers, as described above.”

B. “I employ a simulation model that enables predictions about the impact of a business decision.” 

2. At the level of paid, owned and earned media:

A. “I have a preference for true tests wherever possible that allow me to determine the actual incrementality of any media effort.”

At this level, there are several different tools available to assess media impact:

  1. Brand Impact: Survey-based studies offered by inventory providers such as Google or Facebook — often called brand lift studies — assess the delta in brand metrics (awareness, consideration, intent, etc.) between exposed and control audiences. Given current capabilities, there are reasons to be skeptical of studies by panel vendors that profess to evaluate all inventory across walled gardens: lacking true test/control techniques, they rely instead on “modeled” controls that pair up people who look like each other to assess incremental impact, which can introduce bias.

  2. Performance Impact: This refers to measuring the incrementality of media for performance objectives, typically conversion events such as sales. For most ad-served media (display, video, etc.), it’s possible to measure true incrementality by dividing audiences into test and control groups and comparing outcomes across techniques (retargeting, audience-based targeting, etc.); a minimal worked example appears at the end of this answer.

  3. Cross-media impact: Techniques such as matched market tests operate across offline and online media to assess incrementality. By choosing markets matched in every other way (demographics, media footprint, etc.) and running a media treatment (e.g. TV or digital) in one but not the other, the delta in conversions or brand impact can be attributed back to the media type in question. Note that these tests are very nuanced and require a heavy dose of practitioner judgement and experience to ensure that they’re executed properly. 

B. “In cases where true tests are impractical, or purchase journeys involve several touches across different media types, I employ various attribution algorithms or platforms to measure incrementality.” 

Note: Numerous vendors perform these functions and offer attribution algorithms that can help optimally allocate funds across media. However, attribution algorithms often treat addressable media alone as the universe of customer touchpoints when in fact — as described earlier — multiple factors may be at play in driving business results. It is important to stress test these models and ensure that they are truly predictive and are leading to optimal budget allocation decisions. The advantages and disadvantages of various attribution vendors fall outside the scope of this paper, but Essence can offer a separate POV on request.
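To make the test/control idea concrete, here is a minimal sketch — with made-up counts — of how incremental conversion lift and its statistical significance might be computed for an ad-served holdout experiment:

```python
from math import sqrt, erf

# Hypothetical holdout test: the control group was eligible for the
# campaign but deliberately withheld from ad serving
test_users, test_conversions = 500_000, 6_200
ctrl_users, ctrl_conversions = 500_000, 5_400

p_test = test_conversions / test_users
p_ctrl = ctrl_conversions / ctrl_users

# Incrementality: lift over the organic (control) baseline
abs_lift = p_test - p_ctrl
rel_lift = abs_lift / p_ctrl

# Two-proportion z-test for the difference in conversion rates
p_pool = (test_conversions + ctrl_conversions) / (test_users + ctrl_users)
se = sqrt(p_pool * (1 - p_pool) * (1 / test_users + 1 / ctrl_users))
z = abs_lift / se
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-sided normal tail

print(f"Test CVR {p_test:.3%} vs. control CVR {p_ctrl:.3%}")
print(f"Incremental lift: {abs_lift:.3%} absolute, {rel_lift:.1%} relative")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

The same arithmetic underlies matched market tests; there, the unit of randomization is the market rather than the user, which is one reason practitioner judgment matters so much in choosing comparable markets.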


Q4. What am I optimizing to?

How we define this question: 

It’s great to know that a campaign generated a consideration lift of X% or a conversion lift of Y%, but what drove it? For example, which micro-optimizations predict the number of tickets that will be sold on the opening weekend of a movie?

Why we ask this question: 

Understanding causal drivers of lift that can be monitored and optimized in real time is a great competitive advantage in a marketplace where everyone is trying to squeeze every last ounce of performance from their media budgets. 

An agency or in-house team should be able to describe which signals are most predictive of brand or performance lift, and how it is optimizing to them. In the example of opening weekend revenue for a film, these causal drivers might include trailer drops, social media buzz, search chatter, etc. 

What a good answer looks like: 

“I have run causal experiments for all ad-served display media to identify the signals I want to optimize for to drive conversions or brand lift. I have the ability to optimize to these signals in real-time.”

Examples: MOAT metrics (ad interaction, scrolling, dwell time etc.), viewability metrics, environmental data (e.g. seasonality, time of day). 

“I am using machine learning to identify and optimize for signals. Using tools like Google’s DV 360, I am able to maximize the ‘total value’ of an impression based on signals I input to Google’s custom algorithm, rather than relying on default metrics like clicks.”

Note: There are no shortcuts here. This knowledge is gleaned through a culture of relentless experimentation and learning, which is a key attribute to look for in any team or partner. When done correctly, optimizing away from default metrics can result in reductions in effective CPA of 10 percent or more. 
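As an illustrative sketch of surfacing which signals are predictive — on synthetic impression-level data, with signal names that are our own stand-ins rather than any standard schema — one might fit a simple model and compare standardized coefficients:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 20_000

# Synthetic impression-level signals (stand-ins for MOAT/viewability-style data)
viewability = rng.uniform(0, 1, n)       # fraction of the ad in view
dwell_time = rng.exponential(2.0, n)     # seconds on screen
hour_of_day = rng.integers(0, 24, n)

# Simulate conversions driven mainly by viewability and dwell time
logit = -5 + 2.0 * viewability + 0.4 * dwell_time
converted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = StandardScaler().fit_transform(
    np.column_stack([viewability, dwell_time, hour_of_day]))
model = LogisticRegression().fit(X, converted)

for name, coef in zip(["viewability", "dwell_time", "hour_of_day"],
                      model.coef_[0]):
    print(f"{name:12s} standardized coefficient: {coef:+.2f}")
```

A correlational model like this only nominates candidate signals; per the answer above, true causal experiments are what confirm which signals are worth optimizing to.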


Q5. What can I do to acquire and test new data and insights? 

How we define this question:

It can feel uncomfortable to look outside of what you already know works. But insights into sources of growth and changing consumer mindsets are rarely revealed by repeating established processes. As global privacy regulations (e.g. GDPR) and technology policies (e.g. ITP) limit the availability of granular user data, it’s vital that advertisers adopt a bigger data model for insights and hypothesis generation — one that goes beyond the tried and true digital campaign metrics that have gotten the industry to where it is today.

Why we ask this question:

The agency or in-house team should undertake a process that enables the advertiser to diversify their data beyond their current domain, value that data against common use cases, and test its effects on campaigns — whether the data comes from new or existing partners or from within the wider advertiser organization. They should have a model for accessing and interpreting myriad data sets, from the representative (e.g. surveys and panels) to the responsive (e.g. log files), and everything in between (e.g. social sentiment, focus group data, and other audience insights). The advertiser also needs tools that give the data meaning, and the ability to marry the outputs of these sources with analytics approaches that test the strength and predictiveness of those signals.
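One hedged way to “value” a candidate data set — sketched here with synthetic features and cross-validated AUC via scikit-learn; the feature groups are invented for illustration — is to compare a response model with and without the new source:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 10_000

# Existing media features vs. a candidate outside source (e.g. public data)
media_features = rng.normal(size=(n, 3))
new_source = rng.normal(size=(n, 2))

# Simulate a response that is partly driven by the candidate source
logit = (media_features @ np.array([0.5, 0.3, 0.2])
         + new_source @ np.array([0.8, 0.4]) - 1.0)
response = rng.random(n) < 1 / (1 + np.exp(-logit))

baseline = cross_val_score(LogisticRegression(), media_features, response,
                           scoring="roc_auc", cv=5).mean()
augmented = cross_val_score(LogisticRegression(),
                            np.hstack([media_features, new_source]), response,
                            scoring="roc_auc", cv=5).mean()

print(f"AUC, media features only:     {baseline:.3f}")
print(f"AUC, with candidate data set: {augmented:.3f}")
```

If the candidate source fails to move a validated metric like this, it can be deprioritized before any media dollars are spent against it.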

What a good answer looks like:

“I have resources dedicated to data pipeline management whose performance is measured against the impact that new data sources have on our marketing intelligence or campaign performance.”

“We put the same emphasis on a systemic approach to research as we do on managing our ad stack.” 


Conclusion:

Success in data strategy and data-led transformation is not about having the right answers — those will change constantly depending on the business goals, the competitive environment, and government regulation. Instead, success lies in asking the right questions and implementing the proper analyses and testing to validate the latest hypotheses.