Validity and Reliability

Part 2 of The "Marketing Analyst's Guide to Business Statistics" Series

We often hear clients and colleagues in the field speaking to the need for a study to be valid and its results reliable.

Oftentimes, we find ourselves uncertain whether those making such statements are well informed about what it means to produce results that are reliable and valid.

Perhaps more importantly, we question, even amongst ourselves, whether we give enough credence to the outcomes yielded by study results that are, or are not, valid and reliable.

Let’s start by distinguishing the two items:



reliability 

First, let’s start with the simpler of the two, which is reliability.

USING SIMPLE ALLITERATION, RELIABILITY ESSENTIALLY MEANS REPEATABLE.

The goal of statistical reliability is to design research capable of producing consistent results over multiple trials.

The best way to think about reliability is to think of a person you deem reliable. It could be a family member, close friend, work colleague, or vendor partner. Reliable people are likely at the top of the list of those you trust. Reliable people are consistent. These are people whose ability to perform up to expectations you rarely, if ever, have to question. Much like these individuals, we want to design research with the intent of producing reliable results.
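One common way to put a number on this consistency is test-retest reliability: administer the same measure twice to the same people and correlate the results. The sketch below uses made-up scores (the data and the 0.9 rule of thumb are illustrative, not from this article):

```python
# Sketch: test-retest reliability as the Pearson correlation between
# two administrations of the same survey item (hypothetical data).
import math

# Scores from the same 8 respondents on two occasions (1-10 scale)
trial_1 = [7, 4, 9, 6, 8, 3, 5, 7]
trial_2 = [8, 4, 9, 5, 8, 3, 6, 7]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Values near 1.0 suggest the instrument produces consistent
# (reliable) results across repeated trials.
reliability = pearson(trial_1, trial_2)
print(f"Test-retest reliability: {reliability:.2f}")
```

If the two trials barely agree, the instrument itself is suspect before you ever get to questions of validity.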


validity

Validity is a bit more complicated.

VALIDITY REFERS TO THE ABILITY TO PRODUCE RESULTS THAT ARE CREDIBLE.

A common question to ask is, "Does this survey instrument truly measure what it is intended to measure?" If it does not, it is not credible.

Without reliability of assessments, you can't have validity of the respondent's mastery of the subject.

Just like you would not bring a hockey stick to play in a basketball game, you wouldn’t want to use a tape measure to estimate brand health.

Validity is about congruence, meaning in agreement or harmony.

We want to design studies that truly measure what they are intended to measure, thus congruent with our research target.

Before we launch into our suggested calls to action in terms of properly utilizing reliability and validity, there is one more important item of note:

If data results are valid, the results must also be reliable. However, data results do not have to be valid to be reliable.

Think about it this way: someone can be a very reliable person, but not necessarily reliable at producing a desired outcome. Think about the most reliable people you know. In the context we presented earlier, those people were likely both reliable and valid, because they consistently produce the results you expect; they are trustworthy, credible, and consistent.

However, there are also people that are consistent, thus reliable, at producing unfavorable results.


Anyone who watches sports can likely point to a certain player who fits this description.

Think of the quarterback that you can pretty well guess is good for at least one interception per game. They consistently produce unfavorable results, therefore, are reliable, but not valid (assuming our desire is to predict good outcomes, that is!).
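In measurement terms, this is precision without accuracy: readings that cluster tightly (reliable) around the wrong value (not valid). A hypothetical simulation makes the distinction concrete; the true score, bias, and noise levels below are all invented for illustration:

```python
# Sketch: a reliable-but-not-valid measure is precise but biased.
# Suppose the true brand-health score in a population is 50, but a
# poorly worded survey item systematically reads 15 points high.
import random

random.seed(42)
TRUE_SCORE = 50
BIAS = 15  # systematic error introduced by the flawed instrument

# Ten repeated administrations: small random noise, constant bias.
readings = [TRUE_SCORE + BIAS + random.gauss(0, 1) for _ in range(10)]

mean_reading = sum(readings) / len(readings)
spread = max(readings) - min(readings)

print(f"Mean reading: {mean_reading:.1f} (true value: {TRUE_SCORE})")
print(f"Spread across trials: {spread:.1f}")
# Tight spread -> reliable; mean far from the true value -> not valid.
```

Repeating the study would reproduce the same wrong answer every time, which is exactly why reliability alone is not enough.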

 

three key questions to ask when evaluating if your study, results, or analysis are valid and reliable:

1.  Would these results come out differently under other circumstances?

A different audience, maybe? A different time frame? Not during an election year?

A key place we see this is in tracking studies. A company raises its billing rates, and customers feel wronged. Certain audiences feel underserved, and a small population of participants can easily hide this truth. Or maybe you conducted a telephone survey in Augusta, GA during the week of the Masters tournament.

It helps to go into a study with a fairly firm hypothesis. And it especially helps to come out with a reasonable explanation for results.

Running through some alternative scenarios will help close the argument about whether your results are valid and reliable.

2.  How would these results look if something important was defined differently?

Customer loyalty is a fun one to consider here. We've heard dozens of different interpretations of a word that shouldn't be that difficult to define. But what's tough about loyalty is that it is one of those things you either have or you don't.

Ambiguity aside, let's assume loyalty in a study was defined as the likelihood to continue doing business over the next 12 months. What would loyalty look like if you decided the new definition should instead be based on share of wallet?

80% of your customers might say they are very likely to continue doing business, but might fully intend to spread their wallet share among other providers. Or maybe you've got a case of handcuffed loyalty: the customer has no choice but to do business with you, but the second a replacement competitor enters the market, well, we know the outcome.
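The same respondents can score very differently under the two definitions. The sketch below invents a tiny customer file (all values hypothetical) and counts "loyal" customers both ways:

```python
# Sketch: the same customers, two definitions of "loyalty".
# Each tuple: (likely to continue next 12 months, share of wallet
# we currently hold). All data is made up for illustration.
customers = [
    (True, 0.9), (True, 0.4), (True, 0.3), (True, 0.8), (True, 0.2),
    (False, 0.1), (True, 0.5), (True, 0.6), (False, 0.3), (True, 0.35),
]

# Definition 1: loyal = likely to continue doing business.
loyal_v1 = sum(1 for cont, _ in customers if cont)

# Definition 2: loyal = we hold the majority of their wallet.
loyal_v2 = sum(1 for _, sow in customers if sow > 0.5)

print(f"Loyal under definition 1: {loyal_v1 * 10}%")  # 80%
print(f"Loyal under definition 2: {loyal_v2 * 10}%")  # 30%
```

Nothing about the customers changed; only the definition did, and the headline number moved from 80% to 30%.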

3.  Are there competing explanations for your results?

A competing explanation is a feasible alternative that can explain an outcome.

Let’s say we were investigating the reasons why high school students decide to enroll in college. If we look across the media, we might find common themes. Among them, the most common is that a college education is likely to lead to a more lucrative career. I won’t weigh in on whether I agree with that or not, and that’s exactly the point.

The point is that it isn’t always that simple.

What if the decision to enroll in college has multiple explanations? We might find that each student has their own unique reasons for the school they selected. As a consultant in higher education, I can honestly say that the explanation is rarely a single, simple theme.


The bottom line: interpretation is 90% of the law. Hopefully you won't get to a point where you have to look at a result and ask, "What does this mean, exactly?"

So first, a good study audit should be conducted to ensure that whatever the results say, we have a pretty good idea of what to do with them.

Assuming you do find yourself in a situation where validity and reliability are in question, run through the questions above. There's a good chance you won't have a complete answer for every one, but they should point to some opportunities to resolve the argument.