Do Processes and Metrics Kill Innovation?


Many of the organizations we have visited debate innovation vs. structure. The concern is that enforcing processes and metrics on innovation will cripple it. The post Does Structure Kill Creativity? – K.L.Wightman lays out some interesting thoughts:

There are 26 letters in the alphabet and 12 notes in a musical scale, yet there are infinite ways to create a story and a song. Writing is like a science experiment: structure is the control, creativity is the variable.

Read More


Why Is Innovation So Hard?

The article “Why Is Innovation So Hard?” outlines cultural challenges that prevent innovation from taking root. As we have discussed many times, efforts to develop disruptive capabilities often fail to achieve their intended results.

This means that in order to innovate we need to change our attitude toward failures and mistakes. Contrary to what many of us have been taught, avoiding failure is not a sign that we’re smart. Being smart is not about knowing all the answers and performing flawlessly. Being smart is knowing what you don’t know, prioritizing what you need to know, and being very good at finding the best evidence-based answers. Being smart requires you to become comfortable saying, “I don’t know.”  It means that you do not identify yourself by your ideas but by whether you are an open-minded, good critical and innovative thinker and learner.

Read More


R&D Management Best Practices

The article An Examination of New Product Development Best Practice from the Journal of Product Innovation Management has great best-practice AND benchmarking information for all R&D organizations.  The authors surveyed 286 companies across the USA and Europe.  The objective was to understand which best practices are implemented, which are not implemented, and which practices are understood to be poor (please note, the article uses NPD for New Product Development):

..it is unclear whether NPD practitioners as a group (not just researchers) are knowledgeable about what represents a NPD best practice. The importance of this is that it offers insight into how NPD practitioners are translating potential NPD knowledge into actual NPD practice. In other words, are practitioners aware of and able to implement NPD best practices designated by noteworthy studies? The answer to this question ascertains a current state of the field toward understanding NPD best practice and the maturity level of various practices. Answering this question further contributes to our understanding of the diffusion of NPD best practices knowledge by NPD professionals, possibly identifying gaps between prescribed and actual practice.

The article divides R&D into seven dimensions and looks into best practices for each.  Let us dig in:
1. Strategy: Strategic alignment for R&D was considered the most important dimension of the seven considered. The article defines strategy as everything from vision definition to prioritization of projects for resource allocation.

strategy involves the defining and planning of a vision and focus for research and development (R&D), technology management, and product development efforts at the SBU, division, product line, and/or individual project levels, including the identification, prioritization, selection, and resource support of preferred projects.

Within Strategy, most organizations aligned R&D with long-term company strategy, and project goals seemed to be well aligned with the organization / mission.  Also, organizations were able to redirect projects as markets changed.  Here is the list of implemented best practices (again, NPD = R&D):

  • Clearly defined and organizationally visible NPD goals 
  • The organization views NPD as a long-term strategy 
  • NPD goals are clearly aligned with organization mission and strategic plan 
  • NPD projects and programs are reviewed on a regular basis 
  • Opportunity identification is ongoing and can redirect the strategic plan in real time to respond to market forces and new technologies

Some of the best practices that clearly did not get implemented deal with pet projects and managing R&D projects as a portfolio.  Organizations did not have portfolio management processes implemented, or they did not treat R&D projects as a portfolio (each was unique and not measured with respect to others).  We have quite a bit of information about R&D portfolio management here.
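To illustrate the portfolio view, here is a minimal sketch of scoring projects on a common set of criteria and ranking them against one another, rather than evaluating each in isolation. The project names, criteria, weights, and scores are all invented for the example, not from the article:

```python
# Hypothetical scoring criteria and weights for comparing R&D projects
# as a portfolio. Technical risk is inverted: lower risk scores higher.
CRITERIA_WEIGHTS = {"strategic_fit": 0.4, "expected_return": 0.35, "technical_risk": 0.25}

def portfolio_score(project):
    """Weighted score on a 0-10 scale per criterion."""
    return (
        CRITERIA_WEIGHTS["strategic_fit"] * project["strategic_fit"]
        + CRITERIA_WEIGHTS["expected_return"] * project["expected_return"]
        + CRITERIA_WEIGHTS["technical_risk"] * (10 - project["technical_risk"])
    )

projects = [
    {"name": "Project A", "strategic_fit": 8, "expected_return": 6, "technical_risk": 3},
    {"name": "Project B", "strategic_fit": 5, "expected_return": 9, "technical_risk": 7},
    {"name": "Pet project", "strategic_fit": 2, "expected_return": 3, "technical_risk": 5},
]

# Rank every project against the others instead of judging each in isolation.
ranked = sorted(projects, key=portfolio_score, reverse=True)
for p in ranked:
    print(f'{p["name"]}: {portfolio_score(p):.2f}')
```

The point is not the particular weights but that every project, including pet projects, gets measured with respect to the others.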

2. Market Research: Understanding customer/market needs and having them drive the R&D process was the second most important dimension of R&D management.

describes the application of methodologies and techniques to sense, learn about, and understand customers, competitors, and macro-environmental forces in the marketplace (e.g., focus groups, mail surveys, electronic surveys, and ethnographic study).

Within market research, the best practices were pretty straightforward: use customer research to drive product development, and involve customers in testing the products at multiple stages of product development.

  • Ongoing market research is used to anticipate/identify future customer needs and problems 
  • Concept, product, and market testing is consistently undertaken and expected with all NPD projects 
  • Customer/user is an integral part of the NPD process 
  • Results of testing (concept, product, and market) are formally evaluated

We have discussed customer-driven R&D in the past. We have also discussed that overdependence on customers can actually be harmful to R&D.  And while Steve Jobs talked about user-centric design, he did not directly involve customers in many stages of R&D.  For revolutionary products, customers are unlikely to be a good driver for R&D.

3. Product Launch: Processes associated with product commercialization / launch were rated the third most important area for best practices.

Commercialization describes activities related to the marketing, launch, and postlaunch management of new products that stimulate customer adoption and market diffusion.

The bar was not very high for launch-related processes.  Most practices involved having a process, following it, and tracking / learning from results.

  • A launch process exists 
  • The launch team is cross-functional in nature 
  • A project postmortem meeting is held after the new product is launched 
  • Logistics and marketing work closely together on new product launch 
  • Customer service and support are part of the launch team 
Product launch processes are quite important, and we have discussed the impact of corporate cultures on new product launches. The key poor practice identified was keeping product launches secret to prevent unauthorized public announcements.  Not sure if that can be helped.

4. Processes: The article refers to stage-gate reviews and knowledge management as key processes for R&D:

Within this framework, NPD process is defined as the implementation of product development stages and gates for moving products from concept to launch, coupled with those activities and systems that facilitate knowledge management for product development projects and the product development process.

It appears that most organizations recognized the need to have common processes for R&D: Stage gates, clear go/no-go criteria and well documented processes existed.

  • A common NPD process cuts across organizational groups 
  • Go/no-go criteria are clear and predefined for each review gate 
  • The NPD process is flexible and adaptable to meet the needs, size, and risk of individual projects 
  • The NPD process is visible and well documented
  • The NPD process cannot be circumvented without management approval 
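The predefined go/no-go idea in the list above can be sketched as a simple checklist evaluation at each gate. The criteria names and thresholds below are hypothetical, invented purely for illustration; a real gate would use the organization's own predefined checklist:

```python
# Hypothetical go/no-go criteria for one stage gate. Each criterion is a
# predicate over the project's current state; all are invented examples.
GATE_CRITERIA = {
    "market_validated": lambda p: p["customer_interviews"] >= 10,
    "within_budget": lambda p: p["spend_to_date"] <= p["budget"],
    "prototype_passes": lambda p: p["prototype_test_pass_rate"] >= 0.9,
}

def gate_review(project):
    """Return ('GO' or 'NO-GO', list of failed criteria)."""
    failed = [name for name, check in GATE_CRITERIA.items() if not check(project)]
    return ("GO" if not failed else "NO-GO", failed)

# Example project state: passes two criteria, fails the prototype threshold.
project = {"customer_interviews": 12, "spend_to_date": 80, "budget": 100,
           "prototype_test_pass_rate": 0.85}
decision, failed = gate_review(project)
print(decision, failed)  # NO-GO ['prototype_passes']
```

The value of predefined criteria is exactly this: the decision and its reasons are explicit, so the process cannot be quietly circumvented.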

The key poor practices seem to be about inadequate IT tool support, uneven access to R&D knowledge and poor implementation of project management practices.  May we suggest InspiRD?

5. Company Culture: The next highly rated dimension of R&D management was company culture: its acceptance of R&D management as an important constituent and the ability of R&D teams to collaborate across disciplines and with external organizations/suppliers:

company culture is defined as the company management value system driving those means and ways that underlie and establish product development thinking and product development collaboration with external partners, including customers and suppliers. Characteristics of company culture include the level of managerial support for NPD, sources used for NPD ideas, and if creativity is rewarded and encouraged.

The key complaint about company culture was a rejection of external or disruptive ideas.  We have discussed this extensively.

6. R&D Climate: This dimension relates to R&D project organization (such as cross-functional teams), including leadership and HR support.

Within this framework, project climate is defined as the means and ways that underlie and establish product development intra-company integration at the individual and team levels, including the leading, motivating, managing, and structuring of individual and team human resources.

The best practices are straightforward: cross-functional teams and multiple means of inter-/intra-team communication.

  • Cross-functional teams underlie the NPD process 
  • NPD activities between functional areas are coordinated through formal and informal communication 

We have discussed project networks as a way to supplement cross-functional teams.  A key challenge seems to be the inability to gain support for ideas that cross functions. Knowledge transfer across disciplines is also a major challenge.

7. R&D Metrics: Although metrics were rated the least important area for R&D management, the authors rightly point out that this is because very few meaningful R&D-related metrics exist.

The metrics and performance measurement dimension of the framework includes the measurement, tracking, and reporting of product development project and product development program performance.

In fact, participants could not point out a single best practice for R&D metrics!  We have discussed plenty of interesting metrics.


R&D: USA, Europe and Japan increasingly challenged by emerging countries

This UNESCO report titled Research and development: USA, Europe and Japan increasingly challenged by emerging countries from a few months ago has some interesting data:

While the USA, Europe and Japan may still be leading the global research and development (R&D) effort, they are increasingly being challenged by emerging countries, especially China.

I wish the report had a concise definition of what they include in R&D.  For example, according to the report, there are 1.1M researchers in China, and the number in the US is similar.  Does that include all product development engineers?  If so, the number sounds a bit low (1 in 1,000 persons in China is an engineer?).  In any case, one of the key reasons for the rise of emerging economies is the Internet:
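A quick back-of-the-envelope check of that ratio (the population figure of roughly 1.3 billion is our assumption, not from the report):

```python
# Sanity check: 1.1 million researchers in a population of ~1.3 billion.
researchers = 1.1e6
population = 1.3e9
per_thousand = researchers / population * 1000
print(f"{per_thousand:.2f} researchers per 1,000 people")  # about 0.85
```

So the report's figure works out to slightly under 1 researcher per 1,000 people, which is why the number looks low if it is meant to include all product development engineers.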

This transformation is being helped by the extremely rapid development of the Internet, which has become a powerful vector for disseminating knowledge. Throughout the world, the number of connections leaped noticeably from 2002 to 2007. But this advance is even more significant in emerging countries. In 2002, just over 10 out of 100 people, globally, used the Internet. There are over 23 users per 100 today. And this proportion rose from 1.2 to 8 in the same period in Africa, from 2.8 to 16 in the Arab States, and from 8.6 to 28 in Latin America.

In any case, here is a bit of benchmark data about R&D budgets:

Even if it is hard to quantify the effects of the 2008 financial crisis, the Report points out that the global recession could have an impact on R&D budgets, which are often vulnerable to cuts in times of crisis. American firms, which are among the most active in terms of R&D, slashed their budgets by 5 – 25% in 2009. As a result, the USA has been harder hit than Brazil, China and India, which has enabled these countries to catch up faster than they would have without the crisis. Finally the Report stresses the need to intensify scientific cooperation, particularly between countries in the South.

The full report is here.


Booz’s 2011 Global Innovation 1000

I have been meaning to post about a pretty good survey by Booz & Company (Global Innovation 1000).  The study has a lot of useful data for benchmarks.  The overall message is very important:

As our annual Global Innovation 1000 study, now in its sixth year, has consistently demonstrated, the success of these companies is not a matter of how much these companies spend on research and development, but rather how they spend it.

Here is the data supporting the hypothesis:

For the second year in a row, Apple led the top 10, followed by Google and 3M. This year, Facebook was named one of the world’s most innovative companies, entering the list at number 10. In a comparison of the firms voted the 10 most innovative versus the top 10 global R&D spenders, Booz & Company found that the most innovative firms outperformed the top 10 R&D spenders across three key financial metrics over a 5-year period — revenue growth, EBITDA as a percentage of revenue and market cap growth.

I guess, being a consulting house, Booz would like to teach organizations how to spend their cash…  But still, it is an important message.  We need a culture that supports innovation and strategic alignment of innovation with goals (duh!).

Every company among the Innovation 1000 follows one of three innovation strategies — need seeker, market reader, or technology driver. While no one or another of these strategies offers superior results, companies within each strategic category perform at very different levels.  And, no matter a firm’s innovation strategy — culture is key to innovation success, and its impact on performance is measurable. Specifically, the 44 percent of companies who reported that their innovation strategies are clearly aligned with their business goals —and that their cultures strongly support those innovation goals — delivered 33 percent higher enterprise value growth and 17 percent higher profit growth on five-year measures than those lacking such tight alignment.

Here is some interesting commentary from 24/7 Wall ST:

The overlap of these “innovators” with the firms that spent the most money on R&D last year is small.

The difference between the two lists is that the largest spenders mostly invest dollars to stay in the places they already hold in the business world. Pharma companies need to replace drugs that are about to come off patent, or already have. Old world tech companies like Microsoft and Intel need to keep pace with firms that have new successful hardware and software products that challenge their sales. Auto companies are in a race to make their cars and light trucks safer and more useful to consumers.

Maybe it is just the industry the companies are in and the maturity of the marketplace:

It is easy to believe that the companies growing the fastest and with the most attractive products to consumers and businesses are the most innovative. This will not last for those firms. Eventually all companies spend R&D money to hold their positions within their industries. It is just a matter of the age of each company’s products and the state of new competition, which is always entering the market — often aimed at the innovators with sharply growing sales.

So, do we believe that all the current innovators will remain innovative for the foreseeable future?  Probably not:

But Apple, Facebook, and Google are only a few years away from the need to spend R&D money to hold their own rather than advance rapidly within their own industries. Almost no one believes it about Apple, but eventually there will come a time when its revenue growth is no longer in the high double digits.

I guess everyone believes that Apple will be an exception!  Even without Steve Jobs?


How Fast and Flexible Do You Want Your Information, Really?

Here is a quick note about access to corporate information from the Sloan article: How Fast and Flexible Do You Want Your Information, Really?

access to corporate data in organizations is rarely as rapid as an Internet search. “Why can’t I get information on our sales just as quickly as I can search the Internet?” is a frequently overheard complaint. That frustration has led many organizations to try to speed up the delivery of data and analysis, particularly in the context of decision making (typically described as “business intelligence,” or BI). But few organizations have reached an optimum with regard to how fast important information reaches in-boxes, desks and brains.

The article suggests that more information is not necessarily better:

Consulting companies that study information consumption routinely find that more than half of all standard reports aren’t being used by anyone anymore. Inflexible standard reporting means not only that paper is wasted, but that an even more valuable resource — executive attention — is misdirected.

Here are their findings:

  • The aim should be to enable faster decision making, not faster information. Focus on information speed and flexibility that facilitates that.
  • Not all information is needed equally fast, nor in equally perfect condition.
  • Executives often ask for more information than they use.

The key to success is reporting the right metrics so that managers can make effective decisions.  Unfortunately, this is difficult to do, often because other managers do not want their performance to be easily visible.  Good points to keep in mind though.


Too Big to Succeed?

A quick note about an article with some interesting data in CFO.com (Too Big to Succeed?).  The overall conclusion is pretty interesting:

Research on nonfinancial companies finds that larger companies typically grow more slowly and earn lower returns on capital.

The author has done a pretty extensive analysis:

Our capital-market research on the 1,000 largest nonfinancial U.S. companies, excluding those that were not public for the full decade of the 2000s (net sample size: 748 companies), indicates that size does indeed matter — but more as a shortcoming than an advantage.

Here are the detailed results (should be useful for any benchmarking):

The overall lessons are quite intuitive:

Why do large companies tend to underperform smaller companies? The specific reasons vary greatly, but there are a number of common themes:
• Organizational distance from executives to the people running each business inhibits use of full and objective information in strategic decision-making at the top and tends to slow down the decision processes at the bottom.
• Managerial reliance on performance against budgets lessens the intensity for delivering true continuous improvement at the front line and introduces managerial stumbling blocks such as “sandbagging,” “hockey-stick plans,” and “spend it or lose it.”


Tough Times Spur Shifts in Corporate R&D Spending – BusinessWeek

Lots of interesting data about R&D budgets in the BusinessWeek article Tough Times Spur Shifts in Corporate R&D Spending:

Domestic R&D spending by all U.S. companies fell 13.1 percent, to $233.92 billion, in 2008, the most recent year for which data are available, from $269.27 billion in 2007, according to the National Science Foundation. (Including R&D paid for by other U.S. concerns, but performed in U.S. companies’ domestic locations, spending rose to $283 billion in 2008 from $269 billion in 2007, according to the NSF.) During the prior recession, which was far milder, domestic R&D spending was down 0.5 percent, to $198.51 billion, in 2001 from $199.54 billion in 2000.

Yet, among S&P 500 companies, R&D budgets actually INCREASED through the downturn!

Yet even amid the sharpest economic downturn since the Great Depression, the 232 companies in the Standard & Poor’s 500 index for which data were available increased their aggregate research and development expenditures to $163.37 billion in 2008 and $166.42 billion in 2009 from $154.44 billion in 2007, before the recession began, according to Bloomberg data. (Of those 232 companies, 115 spent more on R&D in both 2008 and 2009 than they did in 2007.)

Even so, R&D seems to have moved toward near-term maintenance and away from invention/innovation.

Total utility patent applications—covering inventions and excluding patents for ornamental design of manufactured goods—have stayed flat at around 456,100 for the past three years, while total utility patent grants have been frozen at around 167,300 per year since 2002, according to data on the U.S. Patent & Trademark Office’s website.

3M has cut its R&D budget but not headcount, by eliminating bonuses.

Cuts in R&D may not signal a reduced commitment to innovation. Even outfits noted for their heavy emphasis on R&D, such as 3M (MMM), have pared their R&D budgets since before the 2008-09 recession. (3M’s $1.29 billion R&D budget in 2009 was down 5.8 percent from 2007.)

Dow maintains 20% of its R&D budget (see below) for innovation projects whose ROI is difficult to measure.

Dow’s $1.49 billion in R&D spending in 2009 represented a 14.6 percent increase from 2007, while revenue fell 16.1 percent over the same period.

P&G has been flat over the period, but they are increasing focus on accessing innovation from the outside:

P&G’s R&D spending was nearly $2.0 billion in fiscal 2010 (ended June), up from $1.95 billion in fiscal 2008, when adjusted to exclude its pharmaceuticals unit sold in October 2009. Net sales—adjusted for the disposal of the pharmaceuticals business and the company’s coffee business in November 2008—fell 0.4 percent over the same period.

Monsanto has formed a strategic alliance with BASF to gain more leverage from R&D, helping them cut their R&D budget in relative terms even though their revenues actually increased through the downturn.

Monsanto’s total R&D spending reached $1.1 billion in 2009, up 14.2 percent from 2007, vs. a 40 percent increase in revenue over the same period.

Here are some others:

Danaher’s R&D budget rose 5.2 percent from 2007, to $632.65 million in 2009. Danaher has doubled its R&D spending as a percentage of sales over the past 10 years, to about 6 percent in 2010, even as its total revenue has tripled over that same period.

[BHI] The oilfield service company’s R&D budget climbed 6.7% from 2007, to $397 million in 2009, despite a 7.3 percent drop in revenue over the same period.


Metrics: R&D Should Settle for Second Best

The article Metrics: R&D Should Settle for Second Best in CEB Views points out that it is generally not worth investing heavily in developing new R&D metrics.  However, as we have seen, there is plenty of research suggesting that what you measure will drive the behavior of your R&D teams, so please keep that in mind.

The article points out that most R&D departments use very simple metrics:

These simplistic measurements are not necessarily used because simple metrics are the most effective; they may be used because measuring the right thing is difficult to do.  For example, not one of the top metrics above addresses the performance or maturity of R&D projects underway and how they compare with expectations.  Even though this is hard to do, it could have a huge benefit for overall R&D management.

Of the four takeaways, the first two are particularly useful:

  1. Use qualitative metrics to evaluate early-stage investments: Very important because it is hard (if not impossible) to value the benefits of early-stage technologies, especially when they might impact many different product lines or would require other technologies to mature before they can be of use.
  2. Use business outcome targets to classify project types: I take this to mean that it is important to categorize the R&D pipeline and then measure projects based on the category they fall into (somewhat related to the bullet above).
  3. Supplement business outcome metrics for accurate performance assessments: The idea being that revenues/profits should not be all that drives decisions…
  4. Use metrics to motivate not intimidate: Easy to say, hard to do…
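Takeaways 1 and 2 together suggest mapping each project category to its own metric set, with qualitative measures for the earliest-stage work. A hypothetical sketch, where the categories and metric names are invented, not taken from the article:

```python
# Hypothetical mapping from project category to the metrics used to
# evaluate it. Exploratory work gets qualitative measures (takeaway 1);
# each category is measured on its own terms (takeaway 2).
METRICS_BY_CATEGORY = {
    "incremental": ["revenue_growth", "gross_margin"],
    "platform": ["products_enabled", "reuse_across_lines"],
    "exploratory": ["options_created", "learning_milestones_met"],
}

def metrics_for(project_category):
    """Return the metric set for a category, with a generic fallback."""
    return METRICS_BY_CATEGORY.get(project_category, ["milestone_adherence"])

print(metrics_for("exploratory"))
```

The design choice worth noting: the fallback metric means an uncategorized project is still measured, rather than escaping measurement entirely.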

Putting a value on training

Training is critical to most R&D organizations.  Toyota, as we have seen, has made improved training a key cornerstone of its quality improvement initiatives. The McKinsey Quarterly article Putting a value on training addresses how to measure the effectiveness of training programs and develop a business case for deploying them.

…typically measure training’s impact by conducting surveys of attendees or counting how many employees complete courses rather than by assessing whether those employees learned anything that improved business performance.  This approach was, perhaps, acceptable when companies had money to spare. Now, most don’t. 

However, there is a need for more formal approaches to measure the return on investment of training programs:

Yet more and more, organizations need highly capable employees—90 percent of the respondents to a recent McKinsey Quarterly survey said that building capabilities was a top-ten priority for their organizations. Only a quarter, though, said that their programs are effective at improving performance measurably, and only 8 percent track the programs’ return on investment. 

The article talks about a detailed training program for BGCA (Boys and Girls Clubs of America).  Suffice it to say that the training was quite extensive and expensive.

BGCA therefore built its training program around those four subjects. The program involved both intensive classroom work and a project chosen by each local team; projects ranged from implementing new HR processes to deepening the impact of after-school programs. By the end of 2009, over 650 leaders from approximately 250 local organizations had been trained.

Here is the key message: plan how you will measure effectiveness before launching an expensive training program.  This was much easier for a not-for-profit organization such as BGCA:

Because the program was designed to improve specific organizational-performance outcomes, the process of assessing its impact was straightforward. Where the leaders of local organizations had received training, BGCA compared their pre- and post-training results. More important, it also compared the post-training results against those of a control set of organizations, which had similar characteristics (such as budget size) but whose leaders had not yet gone through the training. 

FYI: the training was a success for BGCA.  They could measure the delta between trained and untrained organizations and actually calculate a return on investment.  The fact that they matched organizations to control sets gave them confidence that the results were relevant.  In for-profit organizations, the metrics might be different, but they must be measured before and after launching training programs.  Metrics and accountability are key to the success of most campaigns.
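The BGCA approach, comparing pre/post results for trained organizations against a matched untrained control set, amounts to a difference-in-differences estimate. A minimal sketch with invented numbers (not BGCA's actual data):

```python
# Difference-in-differences sketch: the training effect is the change in
# the trained group minus the change in the matched control group, which
# removes trends that would have happened anyway. All numbers are invented.

def mean(xs):
    return sum(xs) / len(xs)

# Some performance metric, measured before and after the training period.
trained_pre, trained_post = [100, 90, 110], [115, 104, 126]
control_pre, control_post = [98, 92, 105], [101, 94, 109]

trained_delta = mean(trained_post) - mean(trained_pre)  # improvement in trained orgs
control_delta = mean(control_post) - mean(control_pre)  # background improvement
training_effect = trained_delta - control_delta         # attributable to training
print(f"Estimated training effect: {training_effect:.1f}")
```

With an effect estimate in hand (and a cost figure for the program), a return on investment follows directly; the control set is what makes the estimate credible.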

Key take away:

In every case, companies must continually review and revise the links between skills, performance, and training programs. Typically, to determine which metrics should be improved, companies assess their current performance against industry benchmarks or their own goals. Like retailers and manufacturers, most other companies know what kinds of skills are tied to different areas of performance. So a good next step is to conduct an analysis of the relevant groups of employees to identify the most important specific skills for them (as BGCA did) and which performance-enhancing skills they currently lack. To get a clear read on the impact of a program, it’s crucial to control for the influence of external factors (for instance, the opening of new retail competitors in local markets) and of extraordinary internal factors (such as a scheduled plant shutdown for preventative maintenance). It’s also crucial to make appropriate comparisons within peer groups defined by preexisting performance bands or market types.