Four paradoxes of research – how to accurately use research to drive decisions

One of the most difficult (and at times the most difficult) aspects of research is the need to draw conclusions from results.

Sometimes the difficulty lies in the “Monday Morning Quarterback” assessment: we asked the wrong question. Other times, the results are counterintuitive, sitting at the opposite end of the spectrum from accepted theory and practice.

Then there are simply human errors of perception and judgment, notably in how we interpret results and in what we expect research to deliver. Sometimes we have lofty, even unrealistic, expectations of our research. What we often do not realize is that an unmet expectation can, in and of itself, be a spectacularly useful outcome.

All this noted, let’s explore four interesting paradoxes of research.

Paradox #1 - Oftentimes, “no effect” is the most significant outcome possible for making a business decision.

Years ago, I was involved in a consulting engagement to assess the impact of adopting a new software system on organizational efficiency within a college setting. We surveyed faculty and staff across the board about their current usage patterns, probing for pain points and the like. We then ran a demo of the new software platform.

The decision makers at this college truly felt that this software was a game changer in terms of operational execution. The faculty and staff, however, for the most part did not agree.

On paper, the software would make the organization more efficient. The problem is that this rests on a bold assumption: the employees of your organization will actually adopt the software.

It was disappointing to deliver the results. Given the estimated adoption rate, our projections showed that the organization would be markedly less efficient, that system integration errors were all but guaranteed, and that the ROI looked far from favorable.

What would have been significantly more disappointing, however, is the outcome had the institution not conducted this concept test.

Assuming our survey results panned out in the software rollout, not only would employees balk at adopting the new software, but the administration would have some serious questions to answer from these folks. And, without this research, this organization would have paid for the right to facilitate this meltdown.

Paradox #2 - Oftentimes, “no effect” is a red herring – a mask for a polarizing effect. This can lead to significant misinterpretation…or a LOT of luck, depending on the direction of the polarity.

To elaborate on this phenomenon, let’s borrow from the same example above. Let’s assume that we looked at preferences regarding this new software platform, and a key metric for our analysis came from a question to the effect of: “On a scale from 1 to 10, where 1 means ‘very unlikely’ and 10 means ‘very likely,’ how likely would you be to use this software for your fall term planning?”

Then we run an analytical model and find that the key components of the software have no effect on the likelihood to use the software in the future. We might think that, simply, we don’t have a winner here, and ultimately we should stick with the software we were already using. OR, is it possible that we find no effect because of a love/hate relationship with the new software? We see this often in statistical and other analytical models: the effect of the new software components does not significantly favor a positive or a negative position. In fact, it significantly favors BOTH, strongly positive and strongly negative at once.

What happens when you have extremes on both ends? They tend to cancel each other out. 
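To make this concrete, here is a minimal sketch, in Python with entirely made-up scores, of how two polarized camps can masquerade as “no effect.” The group sizes and ratings are illustrative assumptions, not data from the study described above.

```python
import statistics

# Hypothetical 1-10 "likelihood to use" scores (illustration only).
love_group = [9, 10, 9, 8, 10, 9]   # staff who embrace the new software
hate_group = [2, 1, 2, 3, 1, 2]     # staff who reject it outright
current    = [5, 6, 5, 6, 5, 6]     # lukewarm scores for the existing software

new_software = love_group + hate_group

# The means are identical, which a naive read calls "no effect"...
print(statistics.mean(new_software))   # 5.5
print(statistics.mean(current))        # 5.5

# ...but the spread tells the real story: two extremes canceling out.
print(statistics.stdev(new_software))  # ~3.9
print(statistics.stdev(current))       # ~0.5
```

The matching means would pass for a null result, while the standard deviation, roughly seven times larger for the new software, is the tell that a love/hate split is hiding underneath.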

Paradox #3 - Research findings imply outcomes. At the end of the day, results are, at best, “three-quarter truths.”

If you are like me, you get wildly annoyed when you peruse your preferred news periodical and stumble upon an article with a title like this:

Results of a recent study suggest that there may be a link between heavy consumption of bottled water and the likelihood to fail out of college.

WHAT? What does this even mean? Well, first, a disclaimer – this is NOT a known finding from any study we are aware of. We made it up to accentuate a point. There is so much ambiguity in this sentence that it is pretty much impossible to interpret.

The unfortunate reality is that studies framed similarly to the one above are naturally passive-aggressive: the findings suggest a likely pattern, but not an absolute outcome.

The authors of the article are not being passive-aggressive on purpose (OK, sometimes they are…). They are often quoting the results exactly as they are. 

Let’s take a survey example. If a recent study finds that 7 out of 10 people surveyed say they will buy the new iPhone the day it comes out, that does not actually mean 7 of 10 people will buy it. Ignoring some likely sampling issues, there are some common-sense implications we can draw without knowing much else about this study (tallied in the quick sketch after this list):

  1. Apple rarely rolls out a day-one supply of new iPhones that will equip 7 of 10 cell phone enthusiasts.

  2. 2 of those 7 people might not be able to get out to an Apple store or another retailer on the day of the release. 

  3. Another 2 of that 7 might realize later that they are not eligible for an upgrade and have no desire to pay full retail or switch to another service provider, even with a bill-credit offer.
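A quick back-of-the-envelope tally of those implications (using the illustrative numbers from the list above, not real survey data):

```python
stated = 7        # of 10 surveyed say they will buy on day one
no_store = 2      # implication 2: cannot reach a retailer on release day
ineligible = 2    # implication 3: not upgrade-eligible, unwilling to pay retail

actual = stated - no_store - ineligible
print(f"{actual} of 10")   # 3 of 10 plausible day-one buyers, not 7 of 10
```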

With statistical models, it gets even more passive-aggressive. Nothing in statistical modeling is absolute, especially the conclusions we draw from results. Anyone approaching a conclusion from a position of absolutes is treating the outcome irresponsibly, leading us to a conclusion that, in its very claim, is inconclusive.

Let’s take another example, borrowing from the prior example on software adoption. In Paradox #1, we discovered that the capabilities of the new software had no effect on the likelihood to adopt it. In Paradox #2, we suggested that it is entirely possible a love/hate scenario is going on, producing the estimated null effect.

To take this a step further, let’s examine the “love” and “hate” groups separately and look at what drives each group’s likelihood of adopting this new technology. I might run a statistical model on the components of the software to see what distinguishes the two groups. In doing so, I uncover a significant finding: the “love” group reports placing more value on new technology than the “hate” group does. Now we have an explanation that makes sense: one group of staffers is more likely to buy into new software because they place value on technological advancement. We can imply this fairly comfortably.

But we have to be careful here, because this does not mean the other group does not value technological advances; it simply means the “love” group is more likely to place value on them than the “hate” group. Let’s also keep in mind that we are asking these questions of staffers about a new technology the college is considering deploying.
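As a sketch of what that group-split comparison could look like, here is one hedged possibility in Python. The ratings, group sizes, and the use of a simple two-sample t-test are all assumptions standing in for the fuller statistical model described above.

```python
from scipy import stats

# Hypothetical 1-10 agreement with "I value new technology" (illustration only).
love_group = [8, 9, 7, 9, 8, 10, 9, 8]
hate_group = [5, 4, 6, 5, 3, 6, 4, 5]

# A two-sample t-test stands in here for the fuller model in the text.
t_stat, p_value = stats.ttest_ind(love_group, hate_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A significant result supports "the 'love' group places MORE value on
# new technology than the 'hate' group" -- not "the 'hate' group places none."
```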

Paradox #4 - The measure of possible impact is ultimately a function of the magnitude of the problem in question.

We often ask our clients this question: what is the true impact when you move the needle of your outcome a set amount in a certain direction? Many companies know that if they add 1% of market share, they enjoy an additional $1M of revenue. That is observable and easily measured. The business world, however, operates in uncertain environments, so we should not expect every measurement of magnitude to be so simple.

That said, how can we use market research to draw similar conclusions? Oftentimes, we examine the problem or decision in question to scale the magnitude of the outcome. Assessing the impact of solving the problem, or of making the decision, tells us a lot about what is at stake.

Since we are on a roll, let’s return to our example of the software program for our college. What if we find that 40% of employees are jumping at the idea of a new software program? What does this mean? How can we put it in a context that assesses the magnitude of adopting this software? 

Let’s look at a simple outcome: faculty (college professors and instructors) capacity. If faculty are currently expected to teach 3 courses a semester, and this software enables the same faculty to teach 4 courses with the effort they were exerting to teach 3, then we might have something significant. Now we can more easily handle growing freshman enrollment. We can add more courses, or even majors, to the college’s curriculum. Or we can take 25% of the faculty and ask them to spend a year or more on academic research that might yield an influx of research dollars.

This software, if adopted by only 40% of faculty, increases those adopters’ output by 33%. From this, the college should enjoy a significant return – especially once the other 60% see how efficiently the 40% is operating with a newly adopted technology.
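A rough sanity check of that claim, with an assumed headcount of 100 faculty (the headcount is hypothetical; the 3-to-4 course figures come from the example above):

```python
faculty = 100          # assumed headcount for illustration
adoption = 0.40        # 40% of faculty adopt the software
before, after = 3, 4   # courses per semester without / with the software

capacity_now = faculty * before  # 300 courses
capacity_new = (faculty * adoption * after
                + faculty * (1 - adoption) * before)  # 160 + 180 = 340

gain = capacity_new / capacity_now - 1
print(f"{gain:.1%}")   # 13.3% more total teaching capacity at 40% adoption

# At 100% adoption, 75 faculty teaching 4 courses cover the original 300,
# which is what frees 25% of the faculty for a year of research.
```

Note that the 33% gain applies to the adopters themselves; college-wide, 40% adoption yields roughly a 13% capacity gain under these assumptions.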

So now we have an estimate of the expected outcome, but what must we invest to achieve it? If the investment is $1M and a conservative expected outcome is $5M, we might have an easy decision. But if the return on that $1M is closer to $1M, we might balk at the decision.

We see this often in survey research. If a company is consistently achieving a Net Promoter Score (NPS) of +40, what is the value of raising that score to +50 or +60, or the value lost by dropping to +30? An easy comparison is to look at the top competitors. If the next-closest competitor sits at +4, an increase from +40 to +50 might not be a game changer. However, if you were able to convert 50% of your Passives into Promoters, and 50% of your Detractors into Passives, we might have a different story to tell. You would have increased your theoretical loyalty and pulled 50% of your at-risk customers into a safer zone. You would also have increased your pool of ambassadors who may be out there selling for you, and substantially reduced the number of those who might prevent further customer acquisition.
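Here is a minimal sketch of that conversion scenario in Python. The starting Promoter/Passive/Detractor mix is an assumption chosen only so the score starts at the +40 in the text:

```python
def nps(promoters, passives, detractors):
    """Net Promoter Score: percent Promoters minus percent Detractors."""
    total = promoters + passives + detractors
    return 100 * (promoters - detractors) / total

# Assumed mix per 100 respondents that yields NPS = +40.
promoters, passives, detractors = 55, 30, 15
print(nps(promoters, passives, detractors))     # 40.0

# Convert 50% of Passives to Promoters and 50% of Detractors to Passives.
promoters += passives * 0.5                     # 55 + 15 = 70
passives = passives * 0.5 + detractors * 0.5    # 15 + 7.5 = 22.5
detractors *= 0.5                               # 7.5
print(nps(promoters, passives, detractors))     # 62.5
```

Under this assumed mix, the two conversions alone move the score from +40 to +62.5, well beyond a simple ten-point bump, which is the different story described above.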

However, if you achieve this goal and do not see improvement in churn, growth in long-term contract agreements, or market share, we might find you have peaked in terms of the business expectations measured by the Net Promoter Score. On the surface, the goal of converting 50% of your Passives into Promoters, and 50% of your Detractors into Passives, is lofty. But, if achieved, the results should produce substantial growth somewhere. If they do not, the magnitude of the problem may not have been very large in the first place, and thus the impact on key revenue targets is small or nonexistent.