Confirmation bias in R&D management

Here is a bit of a philosophical problem that I have been thinking about for quite some time.  In the scientific world, there are all kinds of checks on proposals, decisions, and results before they are accepted.  In fact, skepticism is actually somewhat welcomed. Why are R&D management decisions not subject to a similar level of scrutiny?  Time and again I have found that decisions of senior R&D executives are not challenged and debated.  If innovation can only happen when there is questioning of the status quo in R&D, why not the same for R&D management innovation?

The article Confirmation bias in science: how to avoid it summarizes the problem pretty effectively (albeit in the context of scientific research):

One of the most common arguments against a scientific finding is confirmation bias: the scientist or scientists only look for data that confirms a desired conclusion. Confirmation bias is remarkably common—it is used by psychics, mediums, mentalists, and homeopaths, just to name a few.

The article had three interesting examples of confirmation bias. The one that is most applicable to R&D management and organizational pride comes from turn-of-the-20th-century France – where the need to maintain national pride and a belief that all was well led to an amazing acceptance of bad research and decisions:

… Prosper-René Blondlot announced the discovery of N-rays. He was immediately famous in France, and very shortly afterwards, researchers from around the world confirmed that they too had seen N-rays. N-rays were an ephemeral thing: observed only as a corona around an electric discharge from certain crystals. They were only observed by the human eye, making them difficult to quantify.

But not everyone was convinced. Many researchers outside of France were suspicious of the number of claims coming from French labs for the properties of N-rays. In the end, an American scientist Robert Wood visited the lab of Blondlot to see it for himself. During one of the experiments he surreptitiously removed the crystal that supposedly generated the N-rays, after which Blondlot failed to notice the absence of N-rays. The N-rays failed to vanish when their source was removed.

From observing many firms during my management consulting days, I find that confirmation bias is even stronger in R&D management.  In fact, many senior managers seem to surround themselves with people who do nothing but confirm their decisions.  Below are what I think are the root causes that encourage confirmation bias in R&D management, and some thoughts on what could be done about them.  I welcome any comments and criticism.

First, the process of scientific critique takes a very long time. For example, from the same Ars Technica article, the evaluation of the research took more than 20 times as long as the work itself:

… the total amount of time coding the model? Maybe 24 hours, total. OK, call it 36 hours with some debugging. Running the code to get results? Maybe a minute per parameter set, so let’s call it a month. So that’s 32 days from around 730 total. What was all the rest of that time devoted to? Trying to anticipate every possible objection to our approach. Checking if those objections were valid. Trying to find examples of physically realistic parameters to test our model with. Seeing if the code was actually modeling what we thought it was. Making sure that our assumptions were valid. In summary, we were trying to prove ourselves wrong.

This is not practical in the R&D management world.  Clearly, if it takes two years to decide on a course of action, no action can be taken.  This problem has traditionally meant that management decisions cannot actually be discussed or questioned.  However, I am not sure that is accurate (more on that below).

Furthermore, scientific research review is easier because the experts in an area naturally form communities along disciplinary lines.  It is always possible to find an expert with the right expertise if one searches long enough:

The question session was fast and lively. And, yes, after the session, a senior scientist approached me and told me in no uncertain terms why our idea would not work—that sound you heard was me falling down the hole in our model. He was, and still is, right.

R&D management, on the other hand, reaches across disciplines, and there are no experts who can question results.  More importantly, each discipline traditionally reports its needs, requirements and results in its own jargon.  The only person who is authorized to bridge across the jargons is the senior manager. This authority and visibility give senior managers a unique vantage point and make it difficult for anyone else to question their decisions.

Furthermore, scientific work and decisions can be replicated by others, and the results tested and verified.  This is not true in the R&D management world.  Decisions have long-term consequences, and once made, there is hardly ever a way to test what would have happened if some other decision had been made (because the economic and competitive landscape changes fundamentally by the time the results of decisions are visible).  This makes it difficult for anyone to question or critique R&D management decisions.

Finally, the consequences of failed scientific work are somewhat limited – only the lives of researchers are directly impacted.  The consequences of failed R&D management decisions are often much larger and can have significant impact on thousands of lives.  This pressure along with lack of sufficient ability to measure the effectiveness of decisions encourages R&D managers to surround themselves with people who confirm their decisions…

So what can be done about the confirmation bias:

  1. Encourage constructive criticism of R&D management decisions.  Even if the time-frame for questioning is much shorter than for scientific work – an hour or a week – the fact that others' viewpoints are on the table will have value in itself.  This is even more important in the new world where decisions impact incredibly complex systems that no one person can understand.
  2. Implement processes, tools and systems to make the information necessary for R&D management decisions more broadly available: Even though the disciplines participating in R&D and R&D management each have their own jargon, they are still tied together by the common thread of achieving desired objectives.  It is important to leverage this common thread and set up tools that elucidate the information that will let everyone – not just the R&D manager – see the data required to make effective decisions. This has the added advantage of validating the data and making sure there are no errors.
  3. Quantify the gut feelings that lead to decisions: In the end, R&D management is always based partly on intuition, since no one can actually foresee the future in which the results of those decisions become available.  This has traditionally meant that these qualitative decisions are not quantified in any way.  Standardized checklists are an easy way to quantify what the gut feel has been.
  4. Document decisions: Once the decisions are quantified, it is easy to document them in the tools and systems we talked about in step 2. If the decisions are easily accessible, it makes it possible to learn from them and understand why things worked or did not work.  It also makes it possible to recover or redirect if things indeed do go wrong.
  5. Develop intermediate milestones, inchstones or check points: If the only way to check the results of the decision is at the end, there is no way to recover or redirect.  By putting in place intermediate check points, especially based on key assumptions identified (step 3) and documented (step 4), R&D managers can improve their chances of success.
  6. Develop dashboards to monitor results of decisions: Combine systems in Step 2 with check points in Step 5 to develop dashboards that quickly show if things are not working – giving advance warnings to prevent catastrophic failures…
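Steps 3 through 6 lend themselves to a simple sketch. The snippet below is purely illustrative – the `Decision` and `Checkpoint` structures, the checklist categories, and the tolerance threshold are all hypothetical assumptions, not a real tool – but it shows how a quantified checklist, documented assumptions, intermediate checkpoints and a dashboard flag could hang together:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of steps 3-6: all names and thresholds are
# illustrative assumptions, not an actual R&D management system.

@dataclass
class Checkpoint:
    """Step 5: an intermediate checkpoint tied to a key assumption."""
    name: str
    target: float          # the documented assumption (step 4)
    actual: float = None   # measured value, if available

    def on_track(self, tolerance=0.1):
        # A checkpoint is on track if the measured value is within
        # `tolerance` (fractional) of the documented assumption.
        if self.actual is None:
            return True  # not yet measured
        return abs(self.actual - self.target) <= tolerance * abs(self.target)

@dataclass
class Decision:
    title: str
    # Step 3: quantify the gut feel with a standardized checklist (0-5 scores)
    checklist: dict = field(default_factory=dict)
    checkpoints: list = field(default_factory=list)

    def checklist_score(self):
        """The quantified 'gut feel': average of the checklist scores."""
        return sum(self.checklist.values()) / len(self.checklist)

    def dashboard(self):
        """Step 6: flag any checkpoint drifting from its assumption."""
        return {cp.name: cp.on_track() for cp in self.checkpoints}

d = Decision(
    title="Fund wireless-charging prototype",
    checklist={"market_need": 4, "technical_feasibility": 3, "team_capacity": 5},
)
d.checkpoints.append(Checkpoint("prototype cost ($k)", target=200, actual=260))
d.checkpoints.append(Checkpoint("partner commitment (FTEs)", target=4, actual=4))

print(round(d.checklist_score(), 2))   # 4.0
print(d.dashboard())
# {'prototype cost ($k)': False, 'partner commitment (FTEs)': True}
```

The point is not the code itself: once the gut feel is captured as numbers and the key assumptions as checkpoints, the dashboard in step 6 falls out almost for free.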

Again, I welcome any criticism (constructive or otherwise)!


How to keep your top talent

Here is what the pointy-haired boss suggests:

Here are three ways to keep the top talent, from the Corporate Executive Board:

  1. Get to know the top talent
  2. Don’t mistake current level of performance with future potential
  3. Differentially reward top talent

Here is a bonus also from CEB – things to keep in mind for motivating your teams:

His core aim is to clearly communicate a consistent vision and then drive accountability for executing it. He’s done this by avoiding five dysfunctions on his staff that aligns well with Lencioni’s work. Lencioni’s five dysfunctions are: 1. Absence of trust 2. Fear of conflict 3. Lack of commitment 4. Avoidance of accountability 5. Inattention to results


Strategic considerations for teaming, alliances and collaborations

Management Science has a cool (at least I think so) paper on Cross-Function and Same-Function Alliances: How Does Alliance Structure Affect the Behavior of Partnering Firms?

Firms collaborate to develop and deliver new products. These collaborations vary in terms of the similarity of the competencies that partnering firms bring to the alliance. In same-function alliances, partnering firms have similar competencies, whereas in cross-function alliances, partners have very different competencies.

This is very important in co-development.  If a company in consumer electronics is co-designing a new device with a PCB manufacturer, the alliance is likely to be same-function. The good news is that alliances between firms with similar competencies are expected to perform well (with caveats – see below).

On examining managers’ view of these alliances, we find that, on average, same-function alliances are expected to perform better than cross-function alliances, holding fixed the level of inputs. 

However, if the same consumer electronics firm wanted to work with a new company on wireless power, a brand-new technology, the alliance might be cross-function.  Many R&D managers are apprehensive about collaboration with dissimilar firms.  The paper uses game theory to arrive at a very interesting finding – that cross-function collaboration leads to increased investment:

partners in cross-function alliances may invest more in their alliances than those in same-function alliances.

And multiple partners are not a problem in cross-function collaboration, but they are if the collaborating firms have similar competencies.  This is very important in the Aerospace & Defense world, as many government contracts do indeed have several same-function partners:

It is also often believed that increasing the number of partnering firms is not conducive for collaborative effort. Our analysis shows that this belief is correct for same-function alliances, but not for cross-function alliances. 

Finally, a somewhat straightforward finding – once the firms have learned from each other and become more similar in competency, they stop investing the way they used to in the cross-function stage:

We extend our model to consider alliances where firms have an opportunity to learn from their partners and later leverage this knowledge outside the scope of their alliance. Though such learning increases the resources committed by alliance partners in the learning phase, it decreases investment in the subsequent competition and also dampens the overall investment across the two stages. 


Too many metrics?

The article The Only KPIs Your Firm Will Ever Need on AccountingWEB.com discusses why too many metrics are not value-adding:

Measurement for measurement’s sake is senseless, as quality pioneer Philip Crosby understood when he uttered, ‘Building a better scale doesn’t change your weight.’

They seem to contrast it with the McKinsey maxim:

“What you can measure you can manage.”

The example they give is the Continental turnaround, where Mr. Bethune focused on just three high-level metrics:

  1. On-time arrival
  2. Lost luggage
  3. Customer complaints

Clearly, these are important for an airline.  However, they are not the only metrics that need to be monitored in an operating airline.  The article makes sense, and it is important for managers to have dashboards with a few unique metrics – Key Predictive Indicators.  However, in most cases, KPIs need to be broken down into constituents that can be controlled to improve efficiencies.  So it is not that KPIs replace detailed metrics – they provide a way to consolidate many metrics into meaningful indicators and help managers easily detect and eliminate problems.
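As a toy illustration of that consolidation (the leg data, the 15-minute lateness threshold, and the cause labels below are all made-up assumptions), a single "on-time arrival" KPI can be rolled up from per-flight constituents while keeping those constituents available for drill-down:

```python
# Hypothetical sketch: consolidating constituent metrics into one KPI.
# The threshold, field names, and sample data are illustrative only.

def rollup_on_time_arrival(flight_legs):
    """Consolidate per-leg data into an 'on-time arrival' KPI,
    keeping the controllable constituents available for drill-down."""
    late = [leg for leg in flight_legs if leg["delay_min"] > 15]
    kpi = 1 - len(late) / len(flight_legs)
    # Drill-down: tally which controllable constituent caused each late leg
    causes = {}
    for leg in late:
        causes[leg["cause"]] = causes.get(leg["cause"], 0) + 1
    return kpi, causes

legs = [
    {"delay_min": 5,  "cause": None},
    {"delay_min": 40, "cause": "turnaround"},
    {"delay_min": 22, "cause": "maintenance"},
    {"delay_min": 0,  "cause": None},
]
kpi, causes = rollup_on_time_arrival(legs)
print(kpi)      # 0.5
print(causes)   # {'turnaround': 1, 'maintenance': 1}
```

A manager watching only the headline number knows something is wrong; the cause counts point at the controllable constituents to fix.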

R&D Portfolio Management case study – Microsoft Kin

It is not very often that we get a look inside the R&D management processes and tools at giants like Microsoft and Toyota. So it is good to learn as much as we can when information becomes available.  I am studying new public disclosures on Toyota’s R&D process and will post about it soon.  The topic of interest today is Microsoft and its killing of the Kin product line after reportedly only 10k units sold.  This was a big failure – the acquisition of Danger alone is reported to have cost around $500M, which does not include the cost of developing the product line and associated software. Let’s start off with a quick background from a great article in Ars Technica:

Microsoft’s ambitions with the KIN were sound. As much as the iPhone and, lately, Android handsets garner all the press attention, smartphones represent only a minority of phone sales—a growing minority, but a minority all the same. There are many, many people who don’t have a smartphone, and don’t even particularly want one, and they easily outnumber smartphone users.

Redmond wanted to be a part of this broader market. The company was already a big player in the smartphone market with Windows Mobile; the KIN was a product of its ambitions beyond that space. So rather than starting from scratch, in 2008 Microsoft bought Danger, the company behind the T-Mobile Sidekick line.

To a certain extent, Microsoft succeeded with this new device.  Clearly they had a great start from Danger and their cloud computing platform.  The idea of a social-networking-focused phone for tweens was also great.  As an AnandTech article points out, the phone did have some very good features:

KIN included a notable number of features Microsoft and its Danger team executed better than anyone else in the smartphone market today.

Among the notable features were an innovative form factor, good usability, great battery life, the aforementioned social media integration and very innovative packaging.  So why did Kin fail?  I guess the problems are related to a broad failure of R&D management processes:

  1. Portfolio Management: Executive sponsorship critical to R&D project funding
  2. Acquisition Integration: Not invented here
  3. Product Management: Positioning the product as an alternative to smartphones but at the same cost
  4. Project Management: Significant development delays
  5. Overall R&D management: Unclear strategy, ambiguous goals 

Let’s look at each of these factors in detail:

Several posts, such as those at Engadget and Mini-Microsoft, have pointed out that executive sponsorship is a critical part of Microsoft’s R&D portfolio management.  Project Pink (which later became Kin) was sponsored by J. Allard, while Andy Lees sponsored a somewhat competing project, Windows Phone 7.  It is a bit strange to prioritize product portfolios based on executive sponsorship, and it leads to significant problems.  From Engadget:

To get anywhere, a project inside Microsoft needs an executive sponsor, and for Pink, Allard had been that guy from day one. It was his baby. Of course, Allard was a visionary, an idea man; Lees — like most Microsoft execs — is a no-nonsense numbers guy, and to put it bluntly, he didn’t like that Pink existed. To quote our sources, Lees was “jealous,” and he was likely concerned that Kin was pulling mindshare (and presumably resources) from Windows Mobile’s roadmap. With enough pressure, Lees ended up getting his way; Pink fell under his charge and Allard was forced into the background

Having two competing priorities is not uncommon in R&D portfolios.  However, in most companies with effective portfolio management processes, the alignment of priorities and the project pipeline gets done well in advance of launch (at early stages and continuously during portfolio reviews).  In the case of Microsoft, however, the two projects ended up misaligned strategically, along market niches and in release schedule.  Apparently, Lees, the executive in charge of Windows Phone 7, ended up re-aligning scopes only partially – which hurt overall results:

Having Lees in control changed everything, if for no other reason than he didn’t care about the project at all. This was right around the time that Windows Phone 7 was rebooting, and Pink didn’t fit in his game plan; to him, it was little more than a contractual obligation to Verizon, a delivery deadline that needed to be met. Pink — Allard’s vision of it, anyhow — was re-scoped, retooled, and forced onto a more standardized core that better fit in with the Windows Phone roadmap, which in turn pushed back the release date. Ironically, because they had to branch off so early, Kin would ultimately end up with an operating system that shares very little with the release version of Windows Phone 7 anyway.

Integrating acquired technology into new products is not uncommon either.  However, there did not seem to be adequate integration of Danger into Microsoft.  The rejection of the acquired technology from Danger and the move to force the Windows Phone 7 structure onto a completely different OS ended up delaying the project by more than 18 months:

This move allegedly set the release of the devices back 18 months, during which time Redmond’s carrier partner became increasingly frustrated with the delays.

Since Windows Phone 7 is a smartphone OS and requires the associated expensive hardware, this added to the cost of the phone. Such a big delay in the launch of the device soured the relationship with the launch partner, Verizon, and reduced their appetite to subsidize the phone and service.

Apparently when it came time to actually bring the Kins to market, Big Red had soured on the deal altogether and was no longer planning to offer the bargain-basement pricing deals it first had tendered. The rest, as they say, is history — though we don’t think even great prices could have accounted for what was fundamentally a flawed product. Our source says that the fallout from this troubled partnership is that Microsoft has backed away from Verizon as a Windows Phone 7 launch partner, claiming that the first handsets you see won’t be offered on the CDMA carrier — rather that we should expect GSM partners to get first crack.

Product management processes ensure that the product is aligned with the target market – in terms of price, functionality and usability.  Due to the portfolio management failures, product management failed as well:

Some suggest that the KIN really failed because teenagers all want iPhones. There’s certainly some truth in that—iPhones are certainly aspirational goods—but iPhones are expensive. The comparison is made because the KIN was fundamentally priced like an iPhone—but it was never meant to be. Had it been priced like a Sidekick, as it should have been, and as Verizon initially set out to do, it would have substantially undercut the iPhone and been a better fit for the Facebook generation to boot. It wouldn’t do everything the iPhone could do, but it wouldn’t be operating in the same market anyway.

Furthermore, the schedule slips led to a very incomplete feature set that did not include a calendar, instant messaging, etc.:

That brings me to what else was lacking that was rather glaring – a calendar. With the right execution, the KIN could have perfectly integrated the Facebook event calendar, invitations, and exchange or Google calendars. Instead, the KIN has absolutely no planning tools or event notifications.

Nor was the data plan priced for the target market:

For starters, the devices lacked a realistic pricing structure – despite not quite being a smartphone, Verizon priced the data plans for the KIN as if they were, at $29.99 per month. There’s since been discussion that Verizon originally intended heavily reduced pricing for the KINs, but soured on the deal when Microsoft delayed release. At the right price, the KINs could have been a compelling alternative to the dying breed of featurephones. It’s hard to argue that there isn’t a niche that the KIN could have filled at the bottom, yet above boring featurephones. At $10 per month or less for data, the KIN would’ve been a much more successful sell.

Add to this mix strategic direction problems.  First, Microsoft could not get itself to support competing social networking sites such as Flickr, which actually reduced the value of the product:

The giants of the social networking space include Facebook and Twitter, for which Kin offered at least fair support. But rather than support Flickr for images and (Google-owned) YouTube for video, Microsoft plugged in its Windows Live services for these media. Kin also lacked established functionality such as a calendar and instant messaging as well as support for fast-growing services embraced by social networkers such as Foursquare.

Also, attempting multiple conflicting goals (compete with the iPhone and be cheap like the Sidekick) led to muddled execution:

The heart of both Microsoft’s and Google’s mobile operating system strategy is to have diverse handsets running its software. Still, both companies look at the level of integration Apple can achieve with the iPhone and are drawn to have a heavier hand in the design of handsets. This sort of licensor regret is part of what drove Google to create the Nexus One and likely also contributed to Microsoft’s decision to create the Kin handsets. 

It appears that in the absence of clear portfolio goals and metrics, there is a lot of politics – which further reduces efficiency and efficacy, drives morale down, and leads to rumors of layoffs:

But wait, there’s more — the Kin team is being refocused onto the WP7 project, but that’s not the only shakeup going on. Our source said there had been rumblings that Steven Sinofsky — president of the Windows and Windows Live groups — is making a play for the entire mobile division as well in an attempt to bring a unified, Windows-centric product line to market.

I hope you are with me: portfolio management problems, along with acquisition integration challenges, led to project management problems and resulted in very large schedule slips. Those schedule slips, along with feature creep, led to problems with product management.  Unclear strategic direction and management infighting tied everything together and ensured that the project failed.

Many lessons to be learned here!  What do you think?


Hybrid Entrepreneurship

Management Science has an interesting paper on hybrid entrepreneurship, with empirical data showing that a large fraction of entrepreneurs start off in a hybrid mode – keeping the day job while working on a new venture:

In contrast to previous efforts to model an individual’s movement from wage work into entrepreneurship, we consider that individuals might transition incrementally by retaining their wage job while entering into self-employment. We show that these hybrid entrepreneurs represent a significant share of all entrepreneurial activity.


What drives satisfaction in virtual teams?

As you have probably noticed, the management of virtual teams and co-development across multiple organizations is a favorite topic of mine.  Here is a very interesting paper from the journal R&D Management: An analysis of predictors of team satisfaction in product development teams with differing levels of virtualness:

The purpose of this study is to empirically examine and assess the moderating effects of extent of virtualness on a variety of well-established predictors of new product development team satisfaction. We focus our study on 178 different new product development teams from a variety of industries and use extent of virtualness as a structural characteristic of the teams, measuring it on a continuum. 

The paper had three findings that I find very important for any R&D manager (as the paper points out, most teams become somewhat virtual even when the members are on different floors).

(1) relationship conflict has a more deleterious effect on team member satisfaction as teams become more virtual, mainly because it is very difficult for team members of virtual teams to resolve their interpersonal disputes; 

So, the article empirically establishes an increased need for active conflict management and effort to keep the team loose.

(2) the relationship between preference for group work and team satisfaction is moderated by extent of virtualness, such that preference for group work increases team satisfaction more as virtualness increases; 

I am not sure I understand this.  Please help if you do.  From what I read, people who love to work in groups are more satisfied working in virtual teams.  Does that mean that R&D managers staffing virtual teams have to either not select people who tend not to like group work, or provide them extra help?

(3) goal clarity and familiarity are not moderated by extent of virtualness, but have a significant direct effect on team satisfaction.

Pretty straightforward – for virtual teams to succeed, goals need to be extremely well communicated.  They also need to be communicated effectively across disciplinary, organizational and cultural boundaries.  This, to me, is the biggest challenge in co-design.  I am not sure I have found effective tools and processes to address this challenge…


Cost overruns and schedule delays of large-scale U.S. federal defense and intelligence acquisition programs

Project Management Journal, in the paper Causal inferences on the cost overruns and schedule delays of large-scale U.S. federal defense and intelligence acquisition programs, provides some interesting data on cost and schedule overruns in U.S. government programs:

For example, statistical data from a recent Government Accountability Office (GAO) report (2008a) on 95 weapons systems found that the total cost growth on these programs was $295 billion, and the average schedule delay was 21 months. These large numbers represent a growing trend in cost overruns and schedule delays since the GAO began tracking these metrics in 2000. For comparison, the estimated total cost growth in the year 2000 of 75 DOD programs was $42 billion, normalized to fiscal-year 2008 dollars.

The author indicates three reasons for the overruns: ineffective human resources policies and practices, consolidation of the aerospace industry, and too many stakeholders:

A study was undertaken to understand why cost overruns and schedule delays have occurred and continue to occur on large-scale U.S. Department of Defense and intelligence community programs. Analysis of data from this study infers the causes of cost overruns and schedule slips on large-scale U.S. federal defense and intelligence acquisition programs to ineffective human resources policies and practices, consolidation of the aerospace industry, and too many stakeholders. 

Specifically, he posits that ineffective human resource policies lead to inexperienced personnel at both contractors and customers, that people rotate through jobs too frequently, and that there are too many contractors involved.  I can imagine that most of these are realities of the current economic and cultural environment.  Just as Toyota found out, these problems come from the inability of R&D management tools and processes to deal with that environment and increasing complexity.

The author also posits that too many stakeholders lead to frequent changes in requirements.  I am not sure the environment that leads to requirement changes is going to change any time soon.  I guess that means R&D managers need to come up with processes to deal effectively with changing requirements – without adding cost overruns and schedule slips…

The block diagrams that the author came up with look interesting.  However, I am not sure I am able to agree with them fully.  It appears that he reached his conclusions first and then fit the analysis to support them…

Overall, a great article – well worth reading.


Get Immediate Value from Your New Hire

HBR has some excellent advice on Get Immediate Value from Your New Hire:

  1. Start Early:  Start as early as possible in the process to expose your new hire to the organization’s or unit’s culture and to explain how work gets done. 
  2. Get Them The Right Network: The first thing a manager can do is ensure that the new hire understands how important the informal or ‘shadow’ organization is in getting things done
  3. Get Them Working: Giving them real work immerses them in the way things function at the organization. This doesn’t mean you should let them “sink or swim”; definitely provide the support they need. 

These are useful reminders both for managers hiring new team members and for team members getting involved in new organizations.  My success in several organizations has been hampered by a lack of understanding of informal / shadow networks.  One interesting observation: supervisors in companies with strong shadow organizations were much more reluctant to explain them!  And some principles to remember:

Do: Hire for cultural fit as much as for capabilities and skill; Introduce your new hire to ‘culture carriers’ and ‘nodes’; Explain how work actually gets done at your organization 

Don’t: Let a new hire stay in ‘learning’ mode for too long; Assume your new hire can’t be productive from the start; Rely on the org chart to help explain lines of communication