On making data-driven decisions. Q&A with Don Peppers

Q1. Is it possible to make data-driven decisions without a statistics or maths degree? 

Having mathematics or statistics training is always beneficial when considering decisions based on data, but even more important than knowing how to calculate equations is knowing a few basic principles, many of which don’t even require adding a string of numbers. LOTS of bad decisions have been made based on spurious correlations, false assumptions, or poor interpretation of statistical data, and sometimes these mistakes are made even by quants.

Let me give you an example. If you look at the most politically conservative communities in the U.S., you’ll find a disproportionate number of them are small towns. And in our own minds, we can all understand why this would be so. Conservatives are more likely to be churchgoers, farmers, small business people, or merchants. They like a sense of community and family. It’s obvious.

But now consider this: A disproportionate number of the most politically liberal communities in the U.S. are also small towns. So why would that be? This seeming paradox has nothing at all to do with the attractiveness of small towns to either conservatives or liberals (although survey data does show that conservatives tend to find small towns relatively more to their liking, while liberals tend to find cities more attractive).

This simple paradox arises from a basic statistical principle having to do with the impact of randomness on populations. Whenever you have a set of populations – different groups of voters, or students, or customers, or quarterly sales results – there is always some element of randomness in how these populations are composed, and the smaller the group, the more that randomness shows up in its averages. The average age of almost any group of people, for instance, will rise if the next person randomly added to the mix happens to be 95 years old. But the average for a group of 1,000 people won’t be affected nearly as much by that addition as the average for a group of 10.

Similarly, the conservative or liberal leanings of any single member of a population will always have a more noticeable effect on the population’s overall average when the population itself is small – i.e., a small town. To drive this concept home, imagine that there were several thousand towns scattered around the country, each composed of just a single, solitary person. Every such one-person community would have an “average” political leaning that is either 100% liberal or 100% conservative.
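
To see the effect concretely, here is a minimal simulation sketch in Python. The town sizes and the 50/50 political split are made-up assumptions, not survey figures; the point is only that the most extreme towns at both ends of the spectrum come out overwhelmingly small, purely through randomness.

    import random

    # Minimal sketch: simulate towns of assumed sizes, where each resident
    # independently leans conservative with probability 0.5, then look at
    # which towns end up with the most extreme average leanings.
    random.seed(42)

    towns = []
    for _ in range(1200):
        size = random.choice([25, 250, 2500])  # hypothetical town sizes
        conservative_share = sum(random.random() < 0.5 for _ in range(size)) / size
        towns.append((size, conservative_share))

    most_conservative = sorted(towns, key=lambda t: t[1], reverse=True)[:20]
    most_liberal = sorted(towns, key=lambda t: t[1])[:20]

    print("Sizes of the 20 most conservative towns:", sorted(s for s, _ in most_conservative))
    print("Sizes of the 20 most liberal towns:", sorted(s for s, _ in most_liberal))
    # Both lists are dominated by the smallest towns.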

Now let me drop this line of reasoning into a real-world business decision-making situation. As a marketing director, let’s say you run a test on 5,000 randomly chosen consumers to see what the response rate would be to a particular offer, and you get 205 responses, a 4.1% rate. Based on this response rate, you decide to roll the offer out to all 1 million consumers in your market. What should you expect of the rollout? Obviously, you would expect a response rate of around 4.1%, but is it equally likely to come in a little below 4.1% or a little above it?

This problem turns on the same principle: samples are more subject to random variation than the larger populations they’re drawn from (in mathematical terms, we would say that estimates based on smaller samples have more “variance”). And what this means is that the rollout is more likely to generate a response rate below 4.1% than above it. Why? Because the response rate from any 5,000-person test is almost certain to differ from the total market’s for purely random reasons, and a test that happened to come in on the high side is exactly the kind that prompts a rollout. Of course, our test was just as likely to have come in lower than the total market, but randomly low results don’t generate as many rollouts, right?
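
To see this selection effect in action, here is a minimal simulation sketch in Python. The range of “true” market response rates and the 4% hurdle for deciding to roll out are illustrative assumptions, not figures from any real campaign; the point is only that the tests that look good enough to trigger a rollout are, on average, a little lucky.

    import random

    # Minimal sketch: each hypothetical offer has an unknown "true" market
    # response rate; we run a 5,000-person test and roll out only when the
    # observed test rate clears an assumed 4% hurdle.
    random.seed(7)

    TEST_SIZE = 5000
    HURDLE = 0.04

    gaps = []  # (test rate - true rate) for the offers we decided to roll out
    for _ in range(2000):
        true_rate = random.uniform(0.03, 0.05)  # assumed range of true market rates
        hits = sum(random.random() < true_rate for _ in range(TEST_SIZE))
        test_rate = hits / TEST_SIZE
        if test_rate >= HURDLE:  # this test looks good enough to trigger a rollout
            gaps.append(test_rate - true_rate)

    print(f"Rollouts triggered: {len(gaps)} of 2000 tests")
    print(f"Average (test rate - true rate) among rollouts: {sum(gaps) / len(gaps):+.4%}")
    # The average gap is positive: rollouts tend to land below their test rates.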

This is one example of the kind of statistical reasoning, or reasoning with data, that doesn’t require calculating, but there are many others: understanding the very idea of testing a program in advance of rolling it out so as to predict its results more accurately, the concept of false positives and false negatives when evaluating a test, the nature of conditional probabilities, or the fact that when incentives are involved, people’s own interests might contaminate the objectivity of whatever statistics we’re considering.
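
To give just a flavor of the false-positive and conditional-probability point, here is a small illustrative calculation in Python (the prevalence and error rates are made up): even a fairly accurate test for something rare ends up flagging mostly cases that aren’t real.

    # Illustrative only: made-up rates for a hypothetical screening test.
    prevalence = 0.01            # 1% of the population actually has the condition
    true_positive_rate = 0.90    # the test catches 90% of real cases
    false_positive_rate = 0.05   # the test wrongly flags 5% of non-cases

    # Conditional probability (Bayes' rule): of everyone the test flags,
    # what share actually has the condition?
    p_flagged = prevalence * true_positive_rate + (1 - prevalence) * false_positive_rate
    p_real_given_flag = (prevalence * true_positive_rate) / p_flagged

    print(f"Share of the population flagged: {p_flagged:.1%}")          # roughly 6%
    print(f"Chance a flagged case is real:   {p_real_given_flag:.1%}")  # roughly 15%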

Q2. What are your recommendations for anticipating human biases, avoiding statistical errors, and recognizing the limits of data?

There are so many human biases that interfere with rational decision-making that it is impossible to categorize them completely. And biases need to be recognized first, of course, before we can minimize or avoid them. Some of the most important are:

  • The confirmation bias (i.e., looking mainly for evidence that confirms our prior beliefs),
  • Overconfidence in our own judgments,
  • Failing to re-frame problems so as to approach them more objectively,
  • Focusing on short-term effects rather than long-term effects, and
  • Loss aversion (i.e., considering it more desirable to avoid a loss than to acquire an equal-sized gain).

Decision-making analysts consider the confirmation bias one of the most difficult human thinking flaws to deal with, and it afflicts even very smart, highly sophisticated thinkers. We are wired as social animals, and much of our analytical thinking likely evolved as a means of helping us persuade others to do what we want them to do, a key to surviving and prospering within a tribe or group. So when we think we are analyzing a problem objectively, what our brains are actually doing is searching for ways to buttress our argument in favor of the desired conclusion. But in the end, we are not just trying to persuade others; we are also persuading ourselves.

The most reliable way to deal with our natural tendency toward the confirmation bias is to develop the self-discipline to think more like a scientist. The scientific method is based on the principle of “falsifiability.” A theory about how the world works can only be considered truly scientific if there are some conditions or events that would prove it to be false. Falsifiability is what separates scientific reasoning from religious belief, superstition, or prejudice.

In practice, overcoming the confirmation bias means avoiding coming to any conscious conclusion prematurely. With the increasingly prevalent practice of “evidence-based medicine,” for instance, doctors are encouraged not to come to any conclusion at all about a patient’s diagnosis before objectively reviewing the evidence itself, including any relevant literature or research. In a group or corporate setting, one effective way to overcome the confirmation bias is to task different group members with trying to prove conflicting points of view. Warren Buffett’s practice, when considering a merger or some other deal, has sometimes involved hiring two different sets of bankers: one set earns a high fee if the deal is consummated, while the other earns a high fee if the deal is not.

Overconfidence can be compensated for simply by imagining realistic future scenarios in which your decision turns out to have been wrong. Imagine, for instance, that a year or more after your decision you look back and realize that your plans have been completely undone, and you’re now kicking yourself for having made such a foolish choice. Or imagine that the reverse happens – that you are unexpectedly elated with the decision, and perhaps wish you had gone even further with it. Now try to write the stories behind each of these futures – how might each scenario have developed?

Narrow framing occurs whenever we box ourselves into a set of options or possibilities that is unnecessarily limited. Should our company buy this other firm or not? Should I attend university out of state or in state? Should our family book a holiday trip or stay at home? The most immediate cure for narrow framing is simply to widen the set of options being considered – to change the parameters of the problem itself. Figure out how to state the problem in a larger context. Rather than asking “Should we buy a new car or not?” you could ask yourself “What’s the best way right now to spend this amount of money to make my family better off?”

Short-term thinking occurs when a decision goes in a terribly wrong direction simply because of the immediacy of the reward, or what we sometimes call the “heat of the moment.” The only real cure for short-termism is to force ourselves to step back from the emotion of the moment and consider our decision from a distance. And one sure-fire way to step back from the issue being decided, in order to take a longer-term view, is what Suzy Welch has called the “10-10-10” exercise. How will this decision we’re about to make feel in 10 minutes? OK, how will it feel 10 months from now? And in 10 years, what will we remember about this decision, and what will be its most important effect after a decade has elapsed?

Loss aversion is an insidious predisposition to give more weight to a value lost than to an equivalent value gained. It is one of the principles underlying Kahneman and Tversky’s famous “prospect theory,” which explains many of the irrational behaviors we often see in investors. Let’s say that a year ago you spent $10,000 to buy stock in a company you thought had great promise, but the stock is now only worth $5,000. Many people (most, probably) would choose to hold on to the stock rather than sell it, because selling acknowledges the loss and locks it in. So one way to deal with the bias is to change your perspective. Suppose you woke up one day and found that someone had, by mistake, sold this stock and replaced it with cash. Would you buy the stock back? Probably not.

If you want to make better, more rational decisions, you could do worse than focusing on these five biases – the confirmation bias, overconfidence, narrow framing, short-term emotion, and loss aversion.

There is one final aspect to your question, which has to do with “recognizing the limits of data” in making a decision. Recognizing these limits is every bit as important as dealing with our biases, because in our increasingly data-driven world we often place too much confidence in numbers, to the point that it can and does corrupt our thinking.

The phrase “you can’t manage what you can’t measure” is sometimes incorrectly attributed to W. Edwards Deming, the statistician and quality-control expert often credited with having launched the Total Quality Management (TQM) movement. Deming, however, was far too smart to have ever said something that simplistic and misleading. Nothing becomes more important just because you can measure it. It becomes more measurable, and that’s all. In fact, one of the “seven deadly diseases” of management that Deming warned about was running a business on visible figures alone.

Just a few years ago a major online marketer was quite proud of its testing and analytics capabilities. They did A/B testing on everything: all their policies, promotions, marketing and service initiatives. Everything was constantly being modified, tweaked, and tested to ensure that they were operating as efficiently as possible, for a large array of differently defined segments of customers. Because they were primarily a subscription business, customer churn was by far their most important KPI. They needed to keep the churn rate absolutely as low as possible, and they were constantly testing a variety of offers and policies to do this.

Through A/B testing they showed that the single most cost-efficient ways to reduce churn were to hide the “cancel my subscription” button on the website, and to require people to call in before being able to cancel. They even found that they could improve “customer loyalty” simply by disconnecting inbound phone calls prematurely whenever a caller first mentioned an intention to cancel, in order to require a second call.

One of the problems afflicting business managers today, in this era of Big Data and sophisticated, algorithmic decision-making, is that quantifying things has become an easy way to escape responsibility for making a decision or taking a point of view. But while we must be aware of our biases, we can’t allow ourselves to take solace in data as a substitute for using our judgment.

—————————

Don Peppers, Founding Partner

Author of The One to One Future

Recognized as one of the world’s leading authorities on customer-focused business strategies, Don Peppers is a best-selling author, blogger, business strategist and keynote speaker. He co-founded the management consulting firm Peppers & Rogers Group. His new company, CX Speakers, delivers workshops, keynote presentations and thought leadership consulting focused on customer experience topics.

With business partner Dr. Martha Rogers, Peppers has written nine international best sellers that have collectively sold over a million copies in 18 languages, including their trailblazing first book, The One to One Future (Doubleday, 1993). His most recent, Customer Experience: What, How and Why Now, is a collection of essays offering insights on building and maintaining a customer-centric business.

In 2015, Satmetrix listed Don Peppers and Martha Rogers #1 on their list of the Top 25 most influential customer experience leaders. Peppers is also one of the most widely read LinkedIn Influencers on the topic of “customer experience,” with over 300,000 followers.
