Confirmation bias in R&D management

18 Jul 2010 Sandeep Mehta
Here is a bit of a philosophical problem that I have been thinking about for quite some time.  In the scientific world, there are all kinds of checks on proposals/decisions/results before they are accepted.  In fact, skepticism is actually somewhat welcomed.  Why are R&D management decisions not subject to a similar level of scrutiny?  Time and again I have found that the decisions of senior R&D executives are not challenged and debated.  If innovation can only happen when there is questioning of the status quo in R&D, why not the same for R&D management?

The article "Confirmation bias in science: how to avoid it" summarizes the problem pretty effectively (albeit in the context of scientific research):

One of the most common arguments against a scientific finding is confirmation bias: the scientist or scientists only look for data that confirms a desired conclusion. Confirmation bias is remarkably common—it is used by psychics, mediums, mentalists, and homeopaths, just to name a few.

The article had three interesting examples of confirmation bias. The one that is most applicable to R&D management and organizational pride comes from early 20th century France – where the need to maintain national pride and a belief that all was well led to an astonishing acceptance of bad research and decisions:

… Prosper-René Blondlot announced the discovery of N-rays. He was immediately famous in France, and very shortly afterwards, researchers from around the world confirmed that they too had seen N-rays. N-rays were an ephemeral thing: observed only as a corona around an electric discharge from certain crystals. They were only observed by the human eye, making them difficult to quantify.

But not everyone was convinced. Many researchers outside of France were suspicious of the number of claims coming from French labs for the properties of N-rays. In the end, an American scientist Robert Wood visited the lab of Blondlot to see it for himself. During one of the experiments he surreptitiously removed the crystal that supposedly generated the N-rays, after which Blondlot failed to notice the absence of N-rays. The N-rays failed to vanish when their source was removed.

From observing many firms during my management consulting days, I find that confirmation bias is even stronger in R&D management.  In fact, many senior managers seem to surround themselves with people who do nothing but confirm their decisions.  Below are what I think are the root causes that encourage confirmation bias in R&D management, and some thoughts on what could be done about them.  I welcome any comments and criticism.

First, the process of scientific critique takes a very long time. For example, in the same Ars Technica article, the evaluation of the research took roughly 24 times as long as the work itself:

… the total amount of time coding the model? Maybe 24 hours, total. OK, call it 36 hours with some debugging. Running the code to get results? Maybe a minute per parameter set, so let’s call it a month. So that’s 32 days from around 730 total. What was all the rest of that time devoted to? Trying to anticipate every possible objection to our approach. Checking if those objections were valid. Trying to find examples of physically realistic parameters to test our model with. Seeing if the code was actually modeling what we thought it was. Making sure that our assumptions were valid. In summary, we were trying to prove ourselves wrong.

This is not practical in the R&D management world.  Clearly, if it takes two years to decide on a course of action, no action can be taken.  This problem has traditionally meant that management decisions cannot actually be discussed or questioned.  However, I am not sure that is accurate (more on that below).

Furthermore, scientific research review is easier because experts naturally form communities along disciplinary lines.  It is always possible to find an expert with the right expertise if one searches long enough:

The question session was fast and lively. And, yes, after the session, a senior scientist approached me and told me in no uncertain terms why our idea would not work—that sound you heard was me falling down the hole in our model. He was, and still is, right.

R&D management, on the other hand, reaches across disciplines, and there are no experts who can question the results.  More importantly, each discipline traditionally reports its needs, requirements and results in its own jargon.  The only person who is authorized to bridge across the jargons is the senior manager.  This authority and visibility gives senior managers a unique vantage point and makes it difficult for anyone else to question their decisions.

Furthermore, scientific work and decisions can be replicated by others and the results tested and verified.  This is not true in the R&D management world.  Decisions have long-term consequences and, once made, there is hardly ever a way to test what would have happened if some other decision had been taken (because the economic and competitive landscape changes fundamentally by the time the results of decisions are visible).  This makes it difficult for anyone to question or critique R&D management decisions.

Finally, the consequences of failed scientific work are somewhat limited – only the lives of the researchers are directly impacted.  The consequences of failed R&D management decisions are often much larger and can have a significant impact on thousands of lives.  This pressure, along with the lack of good ways to measure the effectiveness of decisions, encourages R&D managers to surround themselves with people who confirm their decisions…

So what can be done about confirmation bias?

  1. Encourage constructive criticism of R&D management decisions: Even if the time-frame for questioning is much shorter than for scientific work – an hour or a week – the fact that others' viewpoints are on the table will have value in itself.  This is even more important in a world where decisions impact incredibly complex systems that no one person can understand.
  2. Implement processes, tools and systems to make the information necessary for R&D management decisions more broadly available: Even though the disciplines participating in R&D and R&D management each have their own jargon, they are still tied together by the common thread of achieving desired objectives.  It is important to leverage this common thread and set up tools that surface the information that lets everyone – not just the R&D manager – see the data required to make effective decisions.  This has the added advantage of validating the data and making sure there are no errors.
  3. Quantify the gut feelings that lead to decisions: In the end, R&D management decisions are always based partly on intuition, since no one can actually foresee the future in which the results of those decisions will become visible.  This has traditionally meant that these qualitative judgments are not quantified in any way.  Standardized checklists are an easy way to quantify the gut feel (see the sketch after this list).
  4. Document decisions: Once decisions are quantified, it is easy to record them in the tools and systems discussed in step 2.  If decisions are easily accessible, it becomes possible to learn from them and understand why things worked or did not work.  It also makes it possible to recover or redirect if things do go wrong.
  5. Develop intermediate milestones, inchstones or checkpoints: If the only way to check the results of a decision is at the end, there is no way to recover or redirect.  By putting in place intermediate checkpoints, especially ones based on the key assumptions identified (step 3) and documented (step 4), R&D managers can improve their chances of success.
  6. Develop dashboards to monitor the results of decisions: Combine the systems in step 2 with the checkpoints in step 5 to develop dashboards that quickly show when things are not working – giving advance warning to prevent catastrophic failures…
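
To make steps 3 through 6 concrete, here is a minimal sketch of what such a checklist-and-checkpoint record could look like in Python.  Everything in it – the 1–5 scoring scale, the RED/AMBER/GREEN statuses, the field names, the example questions – is an illustrative assumption on my part, not a prescribed system; a real implementation would live inside the tools and processes of step 2.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Step 3: a hypothetical checklist item that scores one aspect of the
# decision on a 1-5 scale, turning "gut feel" into a number.
@dataclass
class ChecklistItem:
    question: str
    score: int              # 1 = strongly disagree ... 5 = strongly agree
    rationale: str = ""

# Step 5: a key assumption behind the decision, paired with a due date
# and a measurable test -- an intermediate checkpoint.
@dataclass
class Checkpoint:
    assumption: str
    due: date
    test: str
    passed: Optional[bool] = None   # None = not yet evaluated

# Step 4: the documented decision itself.
@dataclass
class DecisionRecord:
    title: str
    owner: str
    made_on: date
    checklist: List[ChecklistItem] = field(default_factory=list)
    checkpoints: List[Checkpoint] = field(default_factory=list)

    def confidence_score(self) -> float:
        """Step 3: collapse the checklist answers into a single number."""
        return sum(item.score for item in self.checklist) / len(self.checklist)

    def dashboard_status(self, today: date) -> str:
        """Steps 5-6: flag refuted or overdue assumptions early."""
        refuted = [c for c in self.checkpoints if c.passed is False]
        overdue = [c for c in self.checkpoints
                   if c.passed is None and c.due < today]
        if refuted:
            return f"RED: {len(refuted)} key assumption(s) refuted"
        if overdue:
            return f"AMBER: {len(overdue)} checkpoint(s) overdue"
        return "GREEN: all checkpoints on track"

# Illustrative usage with made-up data:
decision = DecisionRecord(
    title="Fund platform redesign",
    owner="VP of R&D",
    made_on=date(2010, 7, 1),
    checklist=[
        ChecklistItem("Aligned with the technology roadmap?", 4),
        ChecklistItem("Required expertise available in-house?", 2,
                      "Would need two external hires"),
    ],
    checkpoints=[
        Checkpoint("Prototype hits target unit cost by year end",
                   date(2010, 12, 31), "Unit cost below target"),
    ],
)
print(decision.confidence_score())                   # 3.0
print(decision.dashboard_status(date(2011, 1, 15)))  # AMBER: 1 checkpoint(s) overdue
```

Even something this simple forces the gut feel into a documented, comparable form, and the dashboard_status check is exactly the kind of advance warning that step 6 calls for.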

Again, I welcome any criticism (constructive or otherwise)!
