Does management involvement drive down R&D efficiency?

In a press release titled Secret to Successful New Product Innovation, the marketing firm Nielsen publishes the shocking result that senior management involvement reduces the effectiveness of R&D:

Nielsen’s research of the innovation processes at 30 large CPG companies operating in the U.S. reveals that companies with less senior management involvement in the new product development process generate 80 percent more new product revenue than those with heavy senior management involvement. Companies that employ this and other best innovation practices derive on average 650 percent more revenue from new products compared to companies that do not.

Also interesting is the finding that if the R&D team is located at corporate HQ, overall new product development results are poorer:

Nielsen’s research shows that simply being physically near corporate headquarters can stifle new idea generation.  In fact, it turns out that having no Blue Sky innovation team at all is better than having a team on-site at corporate headquarters.  The best place for your breakthrough innovators?  Far, far away.  According to Nielsen, companies with an off-site Blue Sky innovation team report 5.7 percent of revenues coming from new products, compared to 4.8 percent from companies with no Blue Sky team at all.  Companies with Blue Sky teams on site report just 2.7 percent of revenues coming from new products.  

Here is one key takeaway: R&D managers must manage the R&D process, not interfere in the actual research or development:

Nielsen’s research shows that another important key to success is for senior management to precisely manage the new product development process, not the ideas themselves. According to Nielsen, CPG companies with rigid stage gates – decision points in the process where a new product idea must pass certain criteria to proceed forward – average 130 percent more new product revenue than companies with loose processes.

And a few more short takes:

• Two to three stage gates that are strictly followed across the organization. The first stage gate is typically designed to identify ideas that will then be developed into a concept and prototype, while the last stage gate is usually designed to determine whether a product should be committed to production and market.
• A development focus two to three years out
• A formal scorecard to provide structure to organizational learning
• A standardized and required post-mortem on all new product development efforts
• A knowledge management system to retain learnings from previous product launches.
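
The gate mechanics described above can be sketched as a small data structure: each gate lists criteria a project must pass to proceed. This is a minimal illustration only; the gate names and criteria below are hypothetical, not from the Nielsen study:

```python
# Minimal stage-gate sketch: each gate lists pass/fail criteria.
# Gate names and criteria are illustrative assumptions, not Nielsen's.

def passes_gate(project, criteria):
    """A project passes a gate only if every criterion evaluates true."""
    return all(check(project) for check in criteria)

GATES = [
    ("concept", [lambda p: p["estimated_revenue"] > 1_000_000,
                 lambda p: p["fits_strategy"]]),
    ("launch",  [lambda p: p["prototype_validated"],
                 lambda p: p["unit_cost"] < p["target_cost"]]),
]

def run_pipeline(project):
    """Advance through gates in order; stop at the first failure."""
    for name, criteria in GATES:
        if not passes_gate(project, criteria):
            return f"killed at {name} gate"
    return "committed to production"

project = {"estimated_revenue": 2_500_000, "fits_strategy": True,
           "prototype_validated": True, "unit_cost": 8.0, "target_cost": 10.0}
print(run_pipeline(project))  # committed to production
```

The point of the rigidity Nielsen describes is that the same criteria apply across the whole organization; in practice they would come from the formal scorecard mentioned above.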


Can IT enhance integration between R&D and Marketing?

There is an interesting paper, The Role of Information Technologies in Enhancing R&D–Marketing Integration: An Empirical Investigation, in the Journal of Product Innovation Management.  We have discussed the role of IT in enhancing R&D management before. This is a new take on the subject.

The effective integration of research and development (R&D) and marketing contributes to the development of successful new products. Barriers such as physical separation of R&D and marketing, goal incongruity, and cultural differences hamper the cross-functional cooperation. However, it may not be either possible or desirable to eliminate the cross-functional integration barriers in practice. 

The first take away is that two types of IT support systems – communication and decision-aiding – both enhance integration:

Previous research findings suggest that information technology (IT) can be used to reduce the negative impact of the barriers. This paper examines the moderating role of communication technologies (ITc) and decision-aiding technologies (ITd) in improving the R&D–marketing integration in new product development. The empirical findings from analyzing data on 171 new product development projects suggest that both IT systems can be used to reduce the negative impact of physical separation, goal incongruity, and cultural differences on R&D–marketing integration.

The overall finding is very intuitive: you need communication technology when the organizations are physically dispersed, and decision-aiding technology when goals and objectives diverge between the two organizations.

However, effectiveness of the two types of IT differs. While ITc appears to be more effective than ITd in overcoming the constraint of physical separation, ITd is more effective than ITc in reducing the negative impact of goal incongruity and cultural differences. ITc is found to have the strongest effect on reducing the negative relationship of physical separation and integration, a less strong effect on cultural differences, and a weak effect on goal incongruity. Conversely, ITd is found to have a strong effect on goal incongruity.

Hopefully, the decision-aiding tools can also increase team satisfaction!


How to define Innovation?

We have discussed the problem of defining what constitutes innovation, as opposed to invention or plain engineering. The issue is important because funding and managing innovation differs from other R&D. Here is a paper in the journal R&D Management that gives background on what innovation is:

‘Innovation’ was defined by Schumpeter (1934) as the commercialisation of combinations of the following:
(i) new materials and components,
(ii) the introduction of new processes,
(iii) the opening of new markets,
(iv) the introduction of new organisational forms.
According to this definition, innovations are the composite of two worlds – namely, the technical world and the business world. When only a change in technology is involved, Schumpeter terms this invention; when the business world is involved, it becomes an innovation (Janszen, 2000).

Another definition of Innovation is:

In this paper, innovation is defined as ‘the successful exploitation of new ideas incorporating new technologies, design and best practice’ (BIS, 2008).

This is what Peter Drucker had to say about it:

It is the means by which the entrepreneur either creates new wealth-producing resources or endows existing resources with enhanced potential for creating wealth – The Discipline of Innovation (HBR 1985).

Here is another important distinction – radical or disruptive innovation as opposed to incremental innovation:

Incremental innovation reinforces the capabilities of established organisations, while radical innovation forces them to ask a new set of questions, to draw on new technical and commercial skills and to use new problem-solving approaches (Tushman and Anderson, 1986; Burns and Stalker, 1966). Incremental and radical innovations require different organisational capabilities and may require different management processes.


International Product Development

As we have seen, global product development is here to stay – whether organizations like it or not.  Managing virtual teams is not easy. The article The Practice of Global Product Development from MIT Sloan Review has interesting models and checklists for organizations considering international or global product development (GPD).

The first suggestion in the article is to deploy GPD in stages (start with process outsourcing, move to components and then to design).

The article also lays out key success factors for GPD.  I am going to rearrange and rephrase to make them a bit more succinct:

  1. Management Priority: Clearly, global R&D is a big challenge – it requires major organizational and cultural change.  None of it is possible without senior executive priority.
  2. Core Competence (Clear strategy): A clear understanding of what is core to the company and what can be outsourced is also key.  I have seen many organizations that stumbled through outsourcing R&D and lost market share because of duplicate capabilities.
  3. Modularity (Process and Product): To outsource a portion of the work, it needs to be easily separable.  Modular processes and  products are clearly required for outsourcing.
  4. Infrastructure (Intellectual Property, Governance, Project Management, Data Quality, Change Management): Infrastructure is needed to manage global product development.  The organization needs to be able to control IP such that each location can work its piece and critical IP is not exposed unnecessarily.  Also, processes, tools and metrics need to be in place for virtual team management.  Finally, since GPD is a major change, change-management will be needed to make sure it succeeds.
  5. Collaborative Culture

Metrics: R&D Should Settle for Second Best

The article Metrics: R&D Should Settle for Second Best in CEB Views points out that it is generally not worth investing heavily in developing new R&D metrics.  However, as we have seen, there is plenty of research out there suggesting that what you measure will drive the behavior of your R&D teams – so please keep that in mind.

The article points out that most R&D departments use very simple metrics:

These simplistic measurements are not necessarily used because simple metrics are the most effective; they might be used because measuring the right thing is difficult to do.  For example, not one of the top metrics above addresses the performance or maturity of R&D projects underway and how they compare with expectations.  Even though this is hard to do, it might have a huge benefit to overall R&D management.

Overall, of the four takeaways, two are quite useful:

  1. Use qualitative metrics to evaluate early-stage investments: Very important because it is hard (if not impossible) to value the benefits of early-stage technologies – especially when they might impact many different product lines or require other technologies to mature before they can be of use.
  2. Use business outcome targets to classify project types: I take this to mean that it is important to categorize the R&D pipeline and then measure projects based on the category they fall into (somewhat related to the bullet above).
  3. Supplement business outcome metrics for accurate performance assessments: The idea being that revenues/profits should not be all that drives decisions…
  4. Use metrics to motivate, not intimidate: Easy to say, hard to do…
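
Takeaways 1 and 2 can be sketched together: classify each project by its business-outcome target, then score it with metrics appropriate to its category. The categories, thresholds and scoring rules below are illustrative assumptions, not from the CEB article:

```python
# Sketch: classify R&D projects by outcome target, then apply
# category-appropriate metrics. All rules here are hypothetical.

QUALITATIVE = {"early_stage"}  # early-stage work gets qualitative review

def classify(project):
    """Bucket a project by its intended business outcome (assumed rules)."""
    if project["horizon_years"] > 3:
        return "early_stage"
    return "incremental" if project["extends_existing_product"] else "platform"

def score(project):
    category = classify(project)
    if category in QUALITATIVE:
        # Qualitative: an expert-panel rating, not a revenue forecast
        return category, project["panel_rating"]
    # Quantitative: a simple projected-revenue-per-cost ratio
    return category, project["projected_revenue"] / project["cost"]

p = {"horizon_years": 5, "extends_existing_product": False,
     "panel_rating": 4.2, "projected_revenue": 0, "cost": 1.0}
print(score(p))  # ('early_stage', 4.2)
```

The design choice matches takeaway 1: a long-horizon project never gets judged on projected revenue it cannot meaningfully forecast.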

An Epidemic Of Failing To Manage Growth

The article An Epidemic Of Failing To Manage Growth on Forbes.com suggests that a lot of the ills that befell companies like Toyota, Dell and BP came about because they grew too fast and, being too profit driven, did not manage that growth.

Their chief executives appear to have unquestioningly accepted the Wall Street axiom that growth is the greatest corporate goal. Growth is always good, we hear. Bigger is always better. Companies either grow or die, and public companies must show ever increasing quarterly earnings.

The solution, according to the author, is to manage the risks from growth:

1. Conduct an annual growth risks audit as part of its budgeting and strategy processes. The audit’s results should be disseminated to all managers, so they can be sensitive and alert to early warning signals. Leaders must constantly convey what cannot be compromised by growth.
2. Have business unit leaders create independent cross-functional teams that report directly to them and are responsible for monitoring the risks of growth and implementing risk management and mitigation plans, which should take effect when predetermined alarms are activated. These teams cannot have conflicting responsibilities and should not be responsible for producing growth. The teams must be measured and rewarded for managing the risks of growth.
3. Base a meaningful percentage of the compensation of all senior leaders and management on successfully managing the risks of growth.
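
The monitoring-and-alarm idea in step 2 could be sketched as a simple threshold check; the indicators and limits below are hypothetical examples, not the author's:

```python
# Sketch of "predetermined alarms" for growth risk: a mitigation plan
# triggers when an indicator crosses its threshold.
# Indicator names and thresholds are illustrative assumptions.

ALARMS = {
    "headcount_growth_pct": 25.0,  # hiring faster than training capacity
    "new_segments_entered": 3,     # spreading into too many markets
    "defect_rate_per_1000": 5.0,   # early quality warning signal
}

def check_alarms(indicators):
    """Return the list of alarms whose thresholds are breached."""
    return [name for name, limit in ALARMS.items()
            if indicators.get(name, 0) > limit]

snapshot = {"headcount_growth_pct": 32.0,
            "new_segments_entered": 2,
            "defect_rate_per_1000": 6.1}
print(check_alarms(snapshot))  # ['headcount_growth_pct', 'defect_rate_per_1000']
```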

I am not sure I agree.  The problem is not really growing too fast – it is that there are no processes and tools to manage the type and volume of work that needs to be performed.  In fact, growth might actually be required to survive in many industries.

For example, for an R&D-driven firm, how does one “manage the risk of growth?”  Does one slow down product development?  If that happens, the firm might lose competitive positioning.

Does one address smaller market niches?  This is difficult to do in a product platform driven world. Most companies have learned to target the top niche first and then use the platform to cover a broader range of lower-end markets.  Just look at most cell phone providers like HTC or computer providers like Dell.  They all come out with high-end models at high prices and then migrate the technology to lower-end models.  So, the company rarely has a choice to slow down R&D.  If that is the case, what will growth audits do?

A better solution would be to invest in risk management processes and tools that identify and address risks introduced through increasing pace of R&D.

What do you think?


Putting a value on training

Training is extremely important to most R&D organizations.  Toyota, as we have seen, has made improved training a key cornerstone of its quality improvement initiatives. The article Putting a value on training in McKinsey Quarterly addresses how to measure the effectiveness of training programs and develop a business case for deploying them.

…typically measure training’s impact by conducting surveys of attendees or counting how many employees complete courses rather than by assessing whether those employees learned anything that improved business performance.  This approach was, perhaps, acceptable when companies had money to spare. Now, most don’t. 

However, there is a need for more formal approaches to measuring the return on investment of training programs:

Yet more and more, organizations need highly capable employees—90 percent of the respondents to a recent McKinsey Quarterly survey said that building capabilities was a top-ten priority for their organizations. Only a quarter, though, said that their programs are effective at improving performance measurably, and only 8 percent track the programs’ return on investment.

The article talks about a detailed training program for BGCA (Boys and Girls Clubs of America).  Suffice it to say that the training was quite extensive and expensive.

BGCA therefore built its training program around those four subjects. The program involved both intensive classroom work and a project chosen by each local team; projects ranged from implementing new HR processes to deepening the impact of after-school programs. By the end of 2009, over 650 leaders from approximately 250 local organizations had been trained.

Here is the key message: plan how you will measure effectiveness before launching an expensive training program.  This was much easier for a not-for-profit organization such as BGCA:

Because the program was designed to improve specific organizational-performance outcomes, the process of assessing its impact was straightforward. Where the leaders of local organizations had received training, BGCA compared their pre- and post-training results. More important, it also compared the post-training results against those of a control set of organizations, which had similar characteristics (such as budget size) but whose leaders had not yet gone through the training. 

FYI – the training was a success for BGCA.  They could measure the delta between trained and untrained organizations and actually calculate a return on investment.  The fact that they matched organizations to control sets gave them confidence that the results were relevant.  In for-profit organizations, the metrics might be different, but they must be measured before and after launching training programs.  Metrics and accountability are key to the success of most campaigns.
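
The BGCA-style assessment boils down to a difference-in-differences comparison: the improvement of trained organizations minus the improvement of a matched, untrained control set. A minimal sketch with made-up numbers:

```python
# Difference-in-differences sketch of the BGCA approach: compare the
# trained organizations' before/after change against a matched control
# set. All figures below are made up for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def did(trained_before, trained_after, control_before, control_after):
    """Improvement in trained orgs minus improvement in controls."""
    return (mean(trained_after) - mean(trained_before)) - \
           (mean(control_after) - mean(control_before))

# Hypothetical performance figures for matched organizations
effect = did([100, 120, 110], [130, 150, 140],
             [105, 115, 112], [110, 122, 118])
print(round(effect, 1))  # 24.0 -> training effect net of background trend
```

Subtracting the control group's change is what separates the training's impact from whatever trend would have happened anyway.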

Key takeaway:

In every case, companies must continually review and revise the links between skills, performance, and training programs. Typically, to determine which metrics should be improved, companies assess their current performance against industry benchmarks or their own goals. Like retailers and manufacturers, most other companies know what kinds of skills are tied to different areas of performance. So a good next step is to conduct an analysis of the relevant groups of employees to identify the most important specific skills for them (as BGCA did) and which performance-enhancing skills they currently lack. To get a clear read on the impact of a program, it’s crucial to control for the influence of external factors (for instance, the opening of new retail competitors in local markets) and of extraordinary internal factors (such as a scheduled plant shutdown for preventative maintenance). It’s also crucial to make appropriate comparisons within peer groups defined by preexisting performance bands or market types. 


Behind the scenes at Toyota’s R&D center – Part II

As we discussed yesterday, Toyota is out trying to mend its broken reputation.  When the light of public scrutiny is shining on a company, it is not good to have shallow marketing campaigns…  Unfortunately, that is what Toyota did with its Star Safety System.  Some actions, such as starting field quality offices, may have an indeterminate impact.
On the other hand, as part of this process, they have allowed a few journalists valuable and unprecedented access to their R&D organization.  This access included detailed briefings about R&D organizations/processes, what might be wrong with them and what Toyota plans to do about them (Deep-Dive: Behind the scenes at Toyota’s R&D center, Part 1 — Autoblog; Deep-Dive: Behind the scenes at Toyota’s R&D center, Part Two — Autoblog).  We discussed the existing organization and processes yesterday.  These R&D briefings do contain some information about concrete steps that Toyota plans to take.  Let’s dig into them and see what we can learn:

To start, Toyota believes the root cause of its quality problems is rapid growth and the inability of the organization / culture / processes / training to keep up (from Autoblog):

 Takeshi Uchiyamada, Toyota’s executive vice-president for research and development, acknowledged during a group interview that overly aggressive growth over the past decade had contributed to the current problems. Branching into too many new market segments too quickly stretched Toyota’s resources, making it difficult to develop young engineers and technicians.

The assertion is that this lack of oversight is what led to problems – even simple problems, like floor mats sticking under pedals, are blamed on inexperience:

The excessively lean organization at Toyota has led to younger staff not getting the necessary oversight to help them learn the nuances in engineering. Engineering is about much more than hard numbers and quantitative analysis – good engineers learn to think outside the box, examining ways their products could be used or misused in unexpected ways. In Toyota’s case, a prime example was the use of all-weather floor-mats. When the mats were developed, they were not intended to be used in conjunction with standard carpet floor-mats – and yet, that’s exactly what happened, leading to a spate of issues with mats being jammed under accelerator and brake pedals.

Based on this root cause analysis, Toyota is proposing several solutions.  From Edmunds:

Some of what Toyota is doing represents “enhancements” of what it has always done, but other steps, such as consolidating groups whose areas of responsibility overlap considerably, are more than that.

One proposed solution is to do more training:

Uchiyamada says that Toyota will be doing “more teaching of younger staff” in ways to examine issues and find innovative solutions.

The reason for the lack of training is thought to be an excessively lean or excessively flat organization:

Toyota now believes that its product development organization has become too flat over the years, with group managers having too many team members reporting to them. While many organizations have been trying to take out layers of management in recent years to improve organizational efficiency and lower costs, this strategy can be taken too far. One of the things that managers need to do is educate and develop the staff reporting to them.

Training (Via Autoblog)

Clearly, more training is generally better.  However, adding a layer of management is not necessarily the best way to get there.  What Toyota needs is more agility to deal with evolving technologies and changing market conditions.  One problem Toyota and most other companies face is a faster pace of product development driven by increased competitive pressure.  In many cases, extra layers of management actually slow down organizational learning because they want to do things the old way.  More about this below.  Even more importantly, Toyota’s lean culture has developed over decades.  How will adding extra layers of bureaucracy change that culture?  Will it take away the good parts of lean as well as the bad?

Another solution proposed is to drive quality innovation through a separate organization and allocation of extra hours:

Each Toyota employee must be “doing our work better,” and that the new Design Quality Innovation Division would be an important factor. He acknowledged the division and its work aren’t necessarily “eye-catching” from a PR standpoint, but are critical.

The four-week extension of lead time invoked by the Design Quality Innovation Division is a very important move, and represents a significant shift on Toyota’s part. This additional time period will not be used for new testing and evaluation per se, but rather gives the division a chance to play devil’s advocate, and look for potential issues from a broad perspective.

Adding extra time for quality control will hopefully surface more flaws.  However, it is extremely important to measure the value delivered by this additional time.  The devil is in the details.  Putting in extra checking time will not necessarily improve quality unless these inspections are tied into the overall R&D process.  Even more importantly, setting up a new division is fraught with dangers – how will this division work with the existing quality division?  Who will be responsible for what?  Who will have ultimate authority?

Another solution proposed is to slow down development and add schedule time for additional quality control:

I am not sure how easy it will be to implement this… Are customer requirements not driving the design to start with? Why is the customer’s viewpoint not part of the overall development through marketing input and reviews?  Why is there a need for a separate evaluation?  What if this review finds problems with a supplier part?  Do they stop the entire development cycle and wait?

To that end, Toyota wants to work more closely with suppliers (co-design?):

Toyota will coordinate and cooperate more closely with suppliers. Rather than just letting them design to a specification, they will work more closely with Toyota engineers so that Toyota is familiar with their thought and evaluation processes. However, he also said that Toyota will likely be bringing more of the design and development work in house in the future.

This is clearly a great idea.  However, the quality control and management bureaucracy added above might actually impede collaboration with suppliers.  How will Toyota manage these contradictions between processes?

Finally, I remain convinced that the root cause is actually increased complexity.

Regardless of whether you’re talking about the most basic transportation in the world (think: Tata Nano) or an advanced hybrid or electric vehicle, it would be impossible to meet the often contradictory requirements of customers and regulators without electronics and software. As the capabilities of electronic systems have increased, so, too, have the complexity of the interactions in these systems. Developing robust electronic control systems requires endless testing at every level, from the earliest software-in-the-loop simulation to full vehicle-in-the-loop evaluation.

Here is a chart showing all the systems and subsystems linked together through the Engine Control Module (ECM).

The engine management system alone consists of some 800,000 lines of “C” code split into 1,600 functional modules. Like most manufacturers today, Toyota is using software development tools like Matlab and Simulink to model functions and test them before ever generating a single line of code. Just as simulation is used for developing crash structures, mathematical models of the vehicle and powertrain components are used to check out the software before prototype electronics are produced.

This complexity will only increase with time.  The pressure to deliver products more quickly will also increase – not decrease.  I am not sure that adding time to product development will solve anything.  Toyota really needs to go back to the drawing board and identify new processes that will help it be more agile – not add more quality control…
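
The software-in-the-loop idea in the quote above can be illustrated with a toy example: a controller function exercised against a simple plant model long before any hardware exists. The proportional throttle controller and vehicle model below are invented for illustration and have nothing to do with Toyota's actual code:

```python
# Minimal software-in-the-loop sketch: test a controller against a toy
# plant model entirely in software. Controller gains, plant dynamics and
# the speed-control scenario are all illustrative assumptions.

def throttle_controller(target_speed, current_speed, kp=0.5):
    """Proportional controller: command clamped to [0, 1]."""
    command = kp * (target_speed - current_speed)
    return max(0.0, min(1.0, command))

def plant_step(speed, command, dt=0.1, gain=10.0, drag=0.1):
    """Toy vehicle model: throttle accelerates, drag decelerates."""
    accel = gain * command - drag * speed
    return speed + accel * dt

def simulate(target, steps=500):
    """Run the closed loop and return the final speed."""
    speed = 0.0
    for _ in range(steps):
        speed = plant_step(speed, throttle_controller(target, speed))
    return speed

# Loop-level check: the controller should settle near, not above, target
final = simulate(target=30.0)
assert 0.0 < final <= 30.0
print(round(final, 1))  # 29.4 -> settles just below the target
```

The value of this style of testing is that loop-level behavior (does the controller settle without overshoot?) can be verified purely in software, which is what model-based tools like Simulink do at much larger scale.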


Behind the scenes at Toyota’s R&D center – Part I

As promised earlier in our case study on portfolio management, here are some insights into R&D management at Toyota.  As we discussed in the past, Toyota has suffered quite a few setbacks this year, and a lot of these problems stem from increased complexity.  Toyota has been working hard to reverse some of the bad publicity it has received and recently invited some journalists to see what changes it is making to address the quality problems and maybe drive up sales.  Autoblog was one of them and has two articles detailing the visit (Deep-Dive: Behind the scenes at Toyota’s R&D center, Part 1 — Autoblog; Deep-Dive: Behind the scenes at Toyota’s R&D center, Part Two — Autoblog).

In an effort to show transparency and a concerted effort to improve its quality and safety, for the first time in its history, Toyota has invited a small group of journalists and analysts into its research and safety facilities in Toyota City, Japan. As part of that select group of media, in the coming days, we’ll have a chance to peek behind the curtain, look at how its products are developed and tested and talk to Toyota executives, including CEO Akio Toyoda as we try to fully understand not only how things went so horribly wrong, but how the automaker plans to get back on track.

Let’s dig into the article and see what we can learn about R&D management processes at Toyota, and about R&D management in general.

Overall, Toyota says it is going to increase quality control checks and train its engineers more.

For the most part, Toyota will continue creating cars and technology in the same manner it has in the past. However, the two major areas that will change include an expansion of the testing use cases beyond current methodologies and improvements in the training and development of its staff.

First of all, the chart below gives an idea of the magnitude of the R&D management challenge. To get a complex system like a cutting-edge car to market and compete successfully, Toyota has to fulfill multiple roles: understand customer preferences, design cars that functionally and visually satisfy customer needs, work with part suppliers, develop cutting-edge new technology, mature that technology, integrate in-house technology with supplier parts and test those systems.  Add to this a diversity of locations, cultures and associated politics (Japan vs. US) and you see that this is not an easy management task by far.

Toyota’s R&D Organization (via Autoblog)

The overall process architecture is also somewhat intuitive – a chief engineer is assigned to each project or vehicle and becomes the communication bridge between marketing, research, advanced development (components and subsystems) and the product development team.  It might be a PowerPoint artifact, but notice the somewhat late involvement of production engineering – it appears they are not using concurrent engineering processes, or co-design for that matter.

All of this is overseen by a chief engineer for every project or vehicle. The job of the chief engineer is to oversee everything related to a project, taking inputs from product management, marketing and advanced engineering, then sending it on to the functional groups in their organization. Every automaker has their own version of this chief engineer, with a variety of titles. At General Motors, this would be the vehicle line executive, at Ford, it’s the chief nameplate engineer and the title at Honda is large project leader. Whatever the title, the end result is that this individual has ultimate responsibility for the end product.

The product development process proceeds in stages of increasing maturity.  Each stage includes requirements analysis, hardware design and software design, followed by a testing / evaluation process.

To a large degree, much of Toyota’s product development process isn’t really any different from what we have seen at other automakers. At its most basic level, it consists of three central phases, starting with requirements analysis. At the beginning of a project, whether it’s a new car or just a new technology, the engineers determine what the product ultimately needs to do and how it should perform. Based on those requirements, a set of detailed specifications is produced.

Product Development Process at Toyota (Via Autoblog)

It is a bit interesting that all the testing happens at the end of each phase, as opposed to some form of concurrent engineering (maybe the PowerPoint is just for illustration).

Tomorrow, we will look at the changes Toyota is proposing to the R&D management process and discuss whether they will make a difference.

Update on the Portfolio Management case study

Here is a quick update on the portfolio management case study: The actual cost to Microsoft of the Kin failure, resulting (in my opinion) primarily from a failure to effectively manage a portfolio of competing/complementary R&D projects, is $700M+.  Check out the posts on Portfolio Management for some processes, tools and learnings on how to avoid portfolio management errors.

Via Engadget: “Here’s a tidbit in today’s Microsoft quarterly earnings that we previously overlooked: a $240 million cost of revenue ‘primarily… resulting from the discontinuation of the Kin phone, offset in part by decreased Xbox 360 console costs.’ In other words, the company took at least a quarter billion hit due to manufacturing, distribution, and support costs of the Kin (according to Microsoft’s definition of ‘cost of revenue’). We don’t know how much Xbox 360 offset, unfortunately, but we can add this figure to the $500 million Danger acquisition and the full marketing cost for the product (which we also don’t know, but anecdotally, it was on par with other major campaigns) to reach… well, at least $800 million in regret for the folks in Redmond.”