Data quality dominates online research industry conversations – yet the industry has never clearly defined what it is or where it starts. That lack of definition contributes to poor results just as much as bad data itself – after all, how can anyone know what quality data is without agreement on what defines it? This, and related considerations, are at the heart of data quality questions.
While there are many definitions, the only absolutes are that data must be usable and participants must be real. Ensuring access to the right audience, with the right mix of characteristics and profile for what you want to know, helps assure quality. This means picking the right sample and sourcing partner – one you can trust to consult with you on tradeoffs and help you make the right decision for your project.
With those absolutes in place, everything else is a tradeoff, asking you to choose what is most essential to the project and what you are willing to discard to get it done. Speed vs. cost. Time in field or depth of targeting vs. population coverage… each of these is a tradeoff. Tradeoffs, however, can be just as hazily defined as data quality, with no hard-and-fast rules for when, how or why they should be made.
Because critical business decisions rely on accurate insights which in turn depend on the quality of the data inputs, it’s important to understand these tradeoffs at every step of the research process, from questionnaire design to quotas, screeners, sampling, data cleaning and analysis. It’s important also to know when to be cautious about tradeoffs – when the accuracy of insights is paramount to making the right critical business decision.
Quality Lenses and the concept of tradeoffs
For many, quality is about how participants behave in a survey – speeding, straightlining, etc. Others consider broader within-panel behavior: How many surveys do members take? How often? Others ask: Are the survey results as I expect? Do they match a benchmark?
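To make the in-survey checks above concrete, here is a minimal sketch of how straightlining (identical answers across a rating grid) and speeding (finishing far faster than the typical respondent) might be flagged. The function names, the 0.4 median-fraction cutoff, and the sample data are illustrative assumptions, not industry standards.

```python
# Two common in-survey quality checks, sketched with stdlib only.
# Thresholds here are assumed for illustration, not prescriptive.
from statistics import median

def is_straightliner(grid_answers):
    """True if every answer in a rating grid is identical."""
    return len(set(grid_answers)) == 1

def is_speeder(duration_sec, all_durations, fraction=0.4):
    """True if completion time falls below a fraction of the
    median duration (0.4 is an assumed cutoff)."""
    return duration_sec < fraction * median(all_durations)

# Example: five respondents on a 5-item agreement grid
durations = [610, 540, 95, 700, 580]          # seconds to complete
grids = [
    [3, 4, 2, 5, 3],
    [4, 4, 4, 4, 4],   # straightliner
    [1, 2, 1, 3, 2],   # very fast completion
    [5, 3, 4, 2, 4],
    [2, 2, 3, 2, 3],
]
flags = [
    (is_straightliner(g), is_speeder(d, durations))
    for g, d in zip(grids, durations)
]
```

As the article notes, both checks embody tradeoffs: a blunt identical-answers rule also excludes respondents for whom a straightline is their honest answer, and any speed cutoff is a judgment call about how fast is "too fast".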
This implies there’s no standard measurement of quality, and any definition is complex. ISO 9000, a set of international standards on quality management and assurance, uses the concept of fitness for purpose, defining quality as the ‘degree to which a set of inherent characteristics fulfils requirements’.
This definition allows for different quality frameworks: Those who focus on in-survey behavior need that behavior to be measurably above a standard; those who require data points to match benchmarks look for those matches. The implication, given perfection isn’t achievable, is that something must be traded off.
For example, researchers whose quality standard is ‘no straightlining’ in a 30-minute survey make two tradeoffs: excluding those for whom a straightline is their truth and accepting only those who will work diligently through a 30-minute survey. How representative are those people? More nuanced approaches of measuring inattentive behavior might mitigate these tradeoffs, but they take time and effort – yet another tradeoff!
The problem arises when the tradeoffs are not recognized for what they are, leading a researcher to wrongly think they have achieved perfection simply by eliminating straightlining.
Part two of our Quality Fit for Purpose series (next Tuesday) details common research tradeoffs and how to ensure a quality sample by re-examining the ‘Holy Laws’ of research.
Dynata is a knowledge partner of Daily Data Bytes. Become a knowledge partner too and reach our readers via web, social media and newsletter with your content. Contact Tjitske Buurman for more information.